“Computers don’t diagnose the same way that doctors do”

NLM Lecture Explored How a Cancer Diagnosis Can Help Illustrate Algorithmic Biases

Guest post by Maryam Zaringhalam, PhD, Data Science and Open Science Officer for the NLM Office of Strategic Initiatives. This post summarizes and discusses a presentation by Meredith Broussard, “How Can Cancer Help Us Understand Algorithmic Bias?” which can be found on NIH Videocast and NLM’s YouTube Channel.

Meredith Broussard is an associate professor at the New York University (NYU) Arthur L. Carter Journalism Institute, a research director at the NYU Alliance for Public Interest Technology, and the author of several books, including More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech.

In the last year, discussions around artificial intelligence (AI)—from excitement about the possibilities unlocked by these technologies to concerns about potential consequences of deploying AI without appropriate frameworks for responsibility—have taken off. While the widespread popularity of these discussions may be new, experts in the government and the broader research community have long been grappling with the benefits and ramifications of the rise of AI.

Since 2021, the NLM Office of Strategic Initiatives has sponsored an annual lecture on Science, Technology, and Society to raise awareness about the societal and ethical implications of using advanced technologies like AI when conducting biomedical research. On March 6, 2024, AI ethics expert and journalist Meredith Broussard presented the fourth lecture in this series, titled “How Can Cancer Help Us Understand Algorithmic Bias?”

During her hour-long presentation, Broussard disentangled popular Hollywood depictions of AI from the real limitations and challenges that can come with this technology. By focusing on the realities of what AI can and can’t do today, Broussard challenged attendees to think about how the inputs, code, and outputs of algorithms could reflect and reinforce harmful biases within society and what can be done about it.

Broussard used her own experience with breast cancer as a window into what to expect from AI (and reassured us that she is doing well now!). After learning that an AI algorithm had read one of her mammography reports, she sought to reproduce its conclusions for an article she was writing about state-of-the-art technology in cancer detection. She expected to feed her mammograms and medical records into the algorithm and have it produce a detailed analysis of her diagnosis. Instead, it simply read a static image and drew a circle around an area of concern. Importantly, the output was a prediction, not a diagnosis, which would require her physician’s follow-up and judgment… as it should be!

“It identified an area of concern, and it gave me a score. I was really surprised that it didn’t give me… you know how when you text somebody ‘congratulations,’ it gives you balloons? I guess I was expecting that—You have cancer! or You don’t have cancer! But that’s not actually how it works.”

Broussard noted that AI is well suited to limited, low-stakes, and mundane tasks, not to high-stakes or general-purpose uses. In medicine, the goal is to save lives, which means ensuring that everyone receives the care they need, and a false negative can keep us from meeting that goal. Developers of such predictive systems must decide how much risk they’re willing to tolerate, and in medical contexts it may be more appropriate to err on the side of more false positives than false negatives. While these systems can support a doctor in making a diagnosis, they should not be relied on as a be-all, end-all diagnostic tool.
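That risk trade-off is set by the classifier’s decision threshold. The sketch below is purely illustrative (it is not the mammography algorithm Broussard encountered, and all scores and labels are invented): lowering the threshold flags more cases for follow-up, converting potential false negatives into false positives.

```python
# Illustrative sketch of the threshold trade-off in a binary screening
# classifier. All numbers are made up for illustration.

def confusion_counts(scores, labels, threshold):
    """Count false positives and false negatives at a given threshold.

    scores: model-predicted probabilities of disease
    labels: ground truth (1 = disease present, 0 = absent)
    """
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

# Hypothetical model scores paired with true labels.
scores = [0.95, 0.80, 0.62, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    1,    0,    1,    0,    0,    0]

# A strict threshold misses true cases (more false negatives).
print(confusion_counts(scores, labels, threshold=0.7))   # -> (0, 2)

# A lenient threshold catches every true case but sends one healthy
# patient to follow-up -- often the safer trade-off in screening.
print(confusion_counts(scores, labels, threshold=0.35))  # -> (1, 0)
```

The point of the sketch is that the threshold is a human policy choice, not a property of the data: someone must decide which kind of error is more tolerable in a given medical context.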

She also explained that the context in which these technologies are deployed matters. There is often an idea that algorithms are neutral, objective, and all-knowing rather than mathematical models that reflect the data they are trained on. These data reflect our social realities: In medicine, as in many areas, inequities based on a patient’s race, sex, gender, disability status, and other factors persist to this day. Addressing such inequities requires collaboration among humanities scholars, social scientists, technologists, biomedical researchers, and clinicians.

“Race is often embedded in medical systems [and] absolutely embedded in all instances of machine learning systems as if it were a biological and social reality. So we need to do more reflecting on the social underpinnings of systems before we go making algorithmic systems and fossilizing these discriminatory decisions in code.”

Broussard pointed to a powerful AI evaluation effort, known as “algorithmic auditing,” which encompasses a range of approaches for assessing the risks of using a particular algorithm in a specific context. She gave an example of applying an algorithmic auditing approach to major large language models and cautioned that while these models may be able to respond to queries about a subject, the answers they provide may not be accurate and can cause significant harm in certain contexts.

You can find Broussard’s lecture archived on NIH Videocast and NLM’s YouTube Channel. We look forward to continuing these discussions as NLM works with the broader biomedical community to unlock the potential of artificial intelligence and to solve some of biomedical research and human health’s most pressing challenges, all while protecting against potential risks.

Maryam Zaringhalam, PhD

Data Science and Open Science Officer, Office of Strategic Initiatives, NLM

Dr. Zaringhalam is responsible for monitoring and coordinating data science and open science activities and development across NLM, NIH, and beyond. She completed her PhD in molecular biology at Rockefeller University before joining NLM as a Science & Technology Policy Fellow from the American Association for the Advancement of Science (AAAS).
