Guest post by Presidential Innovation Fellows Justin Koufopoulos and Gil Alterovitz, PhD.
Getting into a clinical trial is challenging for patients. Researchers estimate that only 5% of patients eligible to participate in a cancer clinical trial actually take part in one. Many factors impact this statistic, including how findable and accessible information about clinical trials is.
Patients often learn about clinical trials from their doctors or through patient advocacy groups like the American Cancer Society. They then typically search for trials on the internet, often ending up on websites like the NIH-run ClinicalTrials.gov or trials.cancer.gov.
Once on these websites, patients still face challenges to access. Prime among them: what search terms to use to find relevant trials.
The terms a patient or doctor uses may not match how researchers running a trial describe the focus of their study, for example “breast cancer” vs. “ductal carcinoma.” While the NIH clinical trials databases track synonyms and work to make the proper matches, this recurring mismatch in language remains a barrier to access.
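As a toy illustration of this kind of synonym matching (the table and terms below are invented for this sketch, not the NIH's actual vocabulary), a matcher might normalize both the patient's term and the researcher's term to a shared concept before comparing them:

```python
# Hypothetical synonym table mapping surface terms to a canonical concept.
# Real systems rely on large curated medical vocabularies, not a dict this small.
SYNONYMS = {
    "breast cancer": "breast carcinoma",
    "ductal carcinoma": "breast carcinoma",
}

def normalize(term):
    """Map a search term to its canonical concept, if one is known."""
    term = term.lower().strip()
    return SYNONYMS.get(term, term)

def terms_match(patient_term, trial_term):
    """True when two differently worded terms refer to the same concept."""
    return normalize(patient_term) == normalize(trial_term)

print(terms_match("Breast cancer", "ductal carcinoma"))  # True: same concept
```

The point is only that matching happens at the concept level, not the surface-string level; the hard part in practice is building and maintaining the vocabulary.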
This challenge becomes even more pronounced with clinical trial eligibility criteria. These criteria describe who can and cannot participate in a study. For example, an eligibility criterion might be “age 18 years or older” or “confirmed breast lesions that can proceed to removal without chemotherapy.” While a computer can easily match a patient to the first criterion, the second involves many more concepts that are harder to separate, understand, and match.
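To see why the first criterion is “easy” for a computer while the second is not, consider a minimal sketch (our own illustration, not the prototype's code): a regular expression can extract the threshold from a standardized age phrase, but it simply gives up on free-text criteria, which would need far richer language processing:

```python
import re

def matches_age_criterion(criterion, patient_age):
    """Return True/False for a standardized age criterion, or None when the
    criterion cannot be parsed as one (free-text criteria fall through here)."""
    m = re.match(r"age (\d+) years or older", criterion.lower())
    if m is None:
        return None  # e.g., "confirmed breast lesions that can proceed..."
    return patient_age >= int(m.group(1))

print(matches_age_criterion("Age 18 years or older", 45))  # True
```

Every criterion the parser returns `None` for is one a human must read by hand, which is exactly the gap more standardized phrasing would close.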
Artificial intelligence can be part of the solution, particularly “machine learning,” which leverages data to teach a program how to make predictions on chosen topics.
Various technology companies have already used machine learning to address language translation problems. As a result, computers can now translate English to Japanese with few errors, and speech-to-text applications can convert human speech into computer input and even reply.
We adopted a similar, albeit scaled back, approach to translate diverse clinical trials eligibility criteria into standardized and structured language. We also drew inspiration from writing tools that help writers improve their text’s readability and grammar.
Instead of highlighting repeated words or sentences in the passive voice, our prototype nudges researchers toward writing eligibility criteria in a way more easily translated by machine. It offers feedback and suggestions, almost like an English language tutor, and proposes alternative ways to write the criteria that would make them more standard and, eventually, more translatable.
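A very rough sketch of this interaction pattern (the phrasing library below is invented for illustration, and string similarity stands in for whatever model a real tool would use): compare a researcher's draft criterion against a library of standardized phrasings and surface the closest one as a suggestion:

```python
import difflib

# Hypothetical library of standardized phrasings a tool might nudge toward.
STANDARD_PHRASINGS = [
    "age 18 years or older",
    "histologically confirmed breast carcinoma",
    "no prior chemotherapy",
]

def suggest_phrasing(draft, cutoff=0.6):
    """Return the closest standardized phrasing, or None if nothing is close
    enough; the researcher remains free to ignore the suggestion."""
    matches = difflib.get_close_matches(
        draft.lower(), STANDARD_PHRASINGS, n=1, cutoff=cutoff
    )
    return matches[0] if matches else None

print(suggest_phrasing("age 18 or older"))  # "age 18 years or older"
```

A production tool would rank suggestions with a learned model rather than raw string similarity; the sketch only shows the nudge as a non-binding suggestion.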
This shift toward more standardized language can make it easier to match content across databases, such as matching a list of patients with a set of conditions and prior treatments.
The prototype also helps researchers understand the consequences of their word choices. It looks at previous studies with similar eligibility criteria and notes how many participants they recruited. Input from consensus-based standards may also be presented. While not a perfect metric for inclusiveness, this feedback shows someone running a study how their word choices compare to others and the potential impact of those choices on their study’s overall success.
Research by academic psychologists has shown that nudging works in a wide variety of settings. To the best of our knowledge, this is the first time a nudge has been used to coach researchers, but these nudges are not requirements. Researchers can still write their eligibility criteria in the way they think makes the most sense. However, by moving researchers toward standardized phrasings, our prototype can help computers match patient characteristics with eligibility criteria and potentially get more eligible patients into clinical trials.
More work is needed before we can fully implement our tool and test it at scale, but we are making progress. We recently completed a pilot study with non-federal groups to determine whether the structured data (so, not the nudging agent but the data our tool learns from) could be used to create tools to help with clinical trials access. Our findings were positive, confirming that private industry and academia need more data like ours for building artificial intelligence tools. The work was featured by the White House on AI.gov as an example for “Industries of the Future.”
The Health Sprint piloting effort included physicians and patient advocates as well as data stewards and experts in the relevant domain areas from within government. For example, Rick Bangs, MBA, PMP, a patient advocate, has worked with various organizations including the National Cancer Institute and the ClinicalTrials.gov development team. Regarding clinical trial matching, Bangs noted, “The solution here will require vision, and that vision will cross capabilities that no one supplier will individually have.”
Next up, we need to evaluate whether this tool helps researchers write eligibility criteria in the “real world,” where all innovations must live.
Justin Koufopoulos is a Presidential Innovation Fellow and product manager working to make clinical research more patient-centered. He has worked with the White House, CIO Council, National Library of Medicine, General Services Administration, Department of Commerce, and Veterans Administration on issues ranging from internet access to artificial intelligence.
Gil Alterovitz, PhD, FACMI, is a Presidential Innovation Fellow who has worked on bridging data ecosystems and artificial intelligence at the interface of several federal organizations, including the White House, National Cancer Institute, General Services Administration, CIO Council, and Veterans Administration.
The Presidential Innovation Fellowship brings together top innovators and their insights from outside of government, including the private sector, non-profits, and academia. Their insights are brought to bear on some of the most challenging problems within government and its agencies. The goal is to challenge existing paradigms by rethinking problems and leveraging novel, agile approaches. PIF was congressionally mandated under HR 39, the Tested Ability to Leverage Exceptional National Talent (TALENT) Act. The program is administered as a partnership between the White House Office of Science and Technology Policy, the White House Office of Management and Budget, and the General Services Administration.