In mid-October I gave the keynote speech, NLM Research in Trustable, Transparent AI for Decision Support, at the 50th Institute of Electrical and Electronics Engineers (IEEE) Applied Imagery Pattern Recognition conference in Washington, D.C. (virtually, for me). The IEEE continues to advance new topics in applied image and visual understanding, and the focus this year was exploring artificial intelligence (AI) in medicine, health care, and neuroscience.
To prepare for my talk, I reviewed our extramural research portfolio so I could highlight current research on these topics. NLM’s brilliant investigators are using a range of machine learning and AI strategies to analyze diverse image types. Some of the work fosters biomedical discovery; other work is focused on creating novel decision support or quality improvement strategies for clinical care. As I did with the audience at IEEE, I’d like to introduce you to a few of these investigators and their projects.
Hagit Shatkay and her colleagues from the University of Delaware direct a project titled Incorporating Image-based Features into Biomedical Document Classification. This research aims to support and accelerate the search for biomedical literature by leveraging the images within articles, which are rich and essential indicators of relevance. The project will build robust tools that harvest images from PDF articles and segment compound figures into individual image panels, identify and investigate features for representing and categorizing biomedical images, and create an effective representation of documents that integrates text-based and image-based classifiers.
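One way to picture that last step, integrating text-based and image-based classifiers, is late fusion: each classifier scores a document's relevance separately, and the scores are combined into one decision. The sketch below is a hypothetical illustration; the function name, the weight, and the scores are invented and are not taken from the project.

```python
# Hypothetical late-fusion sketch (not the project's actual method):
# combine the relevance probabilities from a text-based classifier and
# an image-based classifier with a simple weighted average.
def fuse_scores(p_text, p_image, w_text=0.6):
    """Weighted average of two classifiers' relevance probabilities."""
    return w_text * p_text + (1 - w_text) * p_image

# A document whose text looks marginal but whose figures are informative:
combined = fuse_scores(p_text=0.45, p_image=0.90)
print(combined >= 0.5)  # → True: the image evidence tips the decision
```

In this toy setup, neither signal alone is decisive, but the fused score crosses the relevance threshold, which is the intuition behind grounding document representation in both text and images.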
Hailing from the University of Michigan, Jenna Wiens leads a project called Leveraging Clinical Time Series to Learn Optimal Treatment of Acute Dyspnea. Managing patients with acute dyspnea is a challenge, sometimes requiring minute-to-minute changes in care approaches. This team will develop a novel clinician-in-the-loop reinforcement learning (RL) framework that analyzes electronic health record (EHR) clinical time-series data to support physician decision-making. RL differs from the more traditional classification-based supervised learning approach to prediction in that it “learns” by evaluating multiple pathways to many different solution states. Wiens’ team will create a shareable, de-identified EHR time-series dataset of 35,000 patients with acute dyspnea and develop techniques for exploiting invariances (different approaches to the same outcome) in tasks involving clinical time-series data. Finally, the team will develop and evaluate an RL-based framework for learning optimal treatment policies and for validating the learned treatment model prospectively.
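To make the contrast with supervised classification concrete, here is a minimal, entirely hypothetical tabular Q-learning sketch. The states, actions, dynamics, and rewards are invented for illustration and bear no relation to the team's clinical framework; the point is only that the learner tries pathways and propagates outcome value back to earlier decisions, rather than fitting labeled examples.

```python
import random

# Toy RL problem (hypothetical): two coarse "patient states" and two
# generic options. The agent learns which option leads to improvement
# by exploring pathways, not by fitting labeled input-output pairs.
STATES = ["unstable", "stable"]
ACTIONS = ["option_a", "option_b"]

def step(state, action):
    """Invented dynamics: from 'unstable', option_a tends to stabilize."""
    if state == "unstable":
        if action == "option_a":
            return "stable", 1.0      # improvement observed
        return "unstable", -0.1       # no improvement
    return "stable", 0.0              # absorbing: already stable

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = "unstable"
        for _ in range(5):            # short horizon per episode
            if rng.random() < eps:    # occasional exploration
                a = rng.choice(ACTIONS)
            else:                     # otherwise act greedily
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r = step(s, a)
            # Propagate the value of the reached state back to (s, a).
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
    return q

q = q_learning()
best_action = max(ACTIONS, key=lambda act: q[("unstable", act)])
print(best_action)  # → option_a
```

Even in this tiny example, the learned values reflect whole pathways (action, resulting state, and downstream value) rather than a single labeled prediction, which is the distinction the paragraph above draws.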
Quynh Nguyen from the University of Maryland leads a project called Neighborhood Looking Glass: 360 Degree Automated Characterization of the Built Environment for Neighborhood Effects Research. Using geographic information systems and images, the team is assembling a national collection of all road intersections and street segments in the United States and developing informatics algorithms that capture neighborhood characteristics and assess their potential impact on health.
Corey Lester from the University of Michigan leads a multidisciplinary team using machine intelligence in a project titled Preventing Medication Dispensing Errors in Pharmacy Practice with Interpretable Machine Intelligence. Machine intelligence is a branch of AI distinguished by its reliance on deductive logic and by its ability to make continuous modifications based in part on the patterns and trends it detects in data. The team is designing interpretable machine intelligence to double-check dispensed medication images in real time, evaluate changes in pharmacy staff trust, and determine the effect of interpretable machine intelligence on long-term pharmacy staff performance. More than 50,000 images are captured and put through an automated check process that predicts the shape, color, and National Drug Code of the medication product. This use of interpretable machine intelligence in the context of medication dispensing is designed to give pharmacists confirmatory information about prescription accuracy in a way that reduces cognitive demand while promoting patient safety.
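As a concrete (and entirely hypothetical) sketch of the double-check idea, the fragment below compares attributes a model might predict from a dispensed-medication image against the prescription record and flags any mismatch for pharmacist review. The function name, attribute set, and dummy NDC value are invented for illustration; this is not the team's system.

```python
# Hypothetical sketch of an automated double-check (not the team's
# system): compare model-predicted attributes of a dispensed-medication
# image with the prescription record and flag disagreements.
def double_check(predicted, prescribed):
    """Return the attributes on which prediction and record disagree."""
    return [k for k in ("shape", "color", "ndc") if predicted.get(k) != prescribed.get(k)]

# Dummy values for illustration only (not a real National Drug Code).
predicted = {"shape": "oval", "color": "white", "ndc": "00000-0000-00"}
prescribed = {"shape": "round", "color": "white", "ndc": "00000-0000-00"}
print(double_check(predicted, prescribed))  # → ['shape']
```

Surfacing which attribute disagrees, rather than a bare pass/fail, is one way such a check can stay interpretable for the pharmacist reviewing it.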
Alan McMillan from the University of Wisconsin-Madison and his team are examining how to make image interpretation robust to noisy data in a project called Can Machines be Trusted? Robustification of Deep Learning for Medical Imaging. Noisy data is information that machines cannot understand and interpret correctly (such as unstructured text). While deep learning approaches to image interpretation (methods that automatically extract high-level features from input data to discern relationships) are gaining acceptance, these algorithms can fail when the images include small errors arising from problems with image capture or slight movements (e.g., chest excursion as the patient breathes). The project team will probe the limits of deep learning when presented with noisy data, with the ultimate goal of making deep learning algorithms more robust for clinical use.
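A toy, one-dimensional sketch can show why small acquisition noise breaks a model fit too tightly to clean data, and how noise augmentation (one common robustification strategy, used here as a stand-in, not as the team's method) helps. Every number and threshold below is invented for illustration.

```python
import random

# Hypothetical toy: a brittle decision rule tuned exactly to clean
# training measurements fails once simulated acquisition noise is
# added, while a rule fit on noise-augmented copies tolerates it.
rng = random.Random(0)

# Clean training measurements for the "abnormal" class (arbitrary units).
clean_abnormal = [10 + rng.uniform(-0.1, 0.1) for _ in range(50)]

# Brittle rule: call anything at or above the smallest clean abnormal
# value "abnormal" (a threshold fit exactly to the clean data).
brittle_threshold = min(clean_abnormal)

# Robustified rule: fit the same threshold on noise-augmented copies.
augmented = [x + rng.gauss(0, 0.5) for x in clean_abnormal for _ in range(5)]
robust_threshold = min(augmented)

def accuracy(threshold, samples):
    """Fraction of truly abnormal noisy samples classified correctly."""
    return sum(x >= threshold for x in samples) / len(samples)

# Noisy test measurements of truly abnormal cases (simulated noise).
noisy_test = [10 + rng.uniform(-0.1, 0.1) + rng.gauss(0, 0.5) for _ in range(200)]

print(accuracy(brittle_threshold, noisy_test))  # degraded by the noise
print(accuracy(robust_threshold, noisy_test))   # far more tolerant
```

The brittle threshold sits right at the edge of the clean data, so roughly half the noisy cases fall below it; the augmented threshold leaves a margin for the noise. Real robustification of deep networks is far subtler, but the failure mode (a model calibrated only to pristine inputs) is the same.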
In the work of Joshua Campbell’s team at Boston University, the images emerge at the end of the process to allow for visualization of large-scale datasets of single-cell data. The project, titled Integrative Clustering of Cells and Samples Using Multi-Modal Single-Cell Data, uses a Bayesian hierarchical model developed by the team to perform bi-clustering of genes into modules and cells into subpopulations. The team is developing innovative models that cluster cells into subpopulations using multiple data types and cluster patients into subgroups using both single-cell data and patient-level characteristics. This approach offers improvements over discrete Bayesian hierarchical models for classification in that it will support multi-modal and multilevel clustering of data.
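The team's Bayesian hierarchical model is well beyond a blog snippet, but the underlying idea of bi-clustering can be illustrated with a much simpler stand-in: plain 2-means clustering applied to both the rows (genes) and the columns (cells) of a tiny synthetic expression matrix. Everything below, including the matrix, is invented for illustration and is not the team's model.

```python
import math

# A simple non-Bayesian stand-in for bi-clustering: run 2-means once
# over the rows of a toy expression matrix (grouping genes into
# modules) and once over its columns (grouping cells into
# subpopulations).
MATRIX = [  # rows = genes, columns = cells (synthetic values)
    [9, 8, 9, 1, 0],   # gene module A: high in cells 0-2
    [8, 9, 8, 0, 1],
    [1, 0, 1, 9, 8],   # gene module B: high in cells 3-4
    [0, 1, 0, 8, 9],
]

def dist(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def two_means(vectors, iters=10):
    """Assign each vector to one of two clusters (k-means with k=2)."""
    centers = [list(vectors[0]), list(vectors[-1])]  # deterministic init
    labels = [0] * len(vectors)
    for _ in range(iters):
        labels = [min((0, 1), key=lambda c: dist(v, centers[c])) for v in vectors]
        for c in (0, 1):
            members = [v for v, lab in zip(vectors, labels) if lab == c]
            if members:  # recompute each center as the member mean
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

gene_modules = two_means(MATRIX)                          # cluster rows
cell_groups = two_means([list(c) for c in zip(*MATRIX)])  # cluster columns
print(gene_modules, cell_groups)  # → [0, 0, 1, 1] [0, 0, 0, 1, 1]
```

The toy recovers the two gene modules and the two cell subpopulations baked into the matrix; the project's Bayesian hierarchical approach does this jointly, across multiple data modalities, with uncertainty attached.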
Several things struck me as I reviewed these research projects. The first was a sense of excitement over the engagement of so many smart young people at the intersection of analytics, biomedicine, and technology. The second was the variety of image types across the projects: one study explores radiological images, while others work with figures from journal articles, pictures of the built environment, and images of workflows in a pharmacy. Two of these studies use AI techniques to analyze the physical environment to better understand its influence on patient health and safety, and one uses images as a visualization tool to better support inference in large-scale biomedical research. Images appear at all points of the research process, and their effective use heralds an era of image-based medicine. Let’s see what lies ahead!