How fair are AI recommendations in healthcare?
As artificial intelligence (AI) rapidly integrates into health care, a new study by researchers at the Icahn School of Medicine at Mount Sinai revealed that generative AI models may recommend different treatments for the same medical condition based solely on a patient’s socioeconomic and demographic background.

Their findings, detailed in the April 7, 2025, online issue of Nature Medicine, highlighted the importance of detecting such biases early and intervening to ensure that AI-driven care is safe, effective, and appropriate for all.
As part of their investigation, the researchers stress-tested nine large language models (LLMs) on 1,000 emergency department cases, each replicated with 32 different patient backgrounds, yielding more than 1.7 million medical recommendations in total. Despite identical clinical details, the AI models occasionally altered their decisions based on a patient’s socioeconomic and demographic profile, affecting key areas such as triage priority, diagnostic testing, treatment approach, and mental health evaluation.
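The press release does not include the team’s code, but the replication design it describes can be sketched in a few lines of Python. Everything below is illustrative: the case text, profile labels, model names, and the query_model helper are hypothetical stand-ins rather than the study’s actual prompts, models, or API.

```python
# Illustrative sketch of the study's replication design (not the authors' code):
# pair every clinical vignette with every demographic profile, query each model,
# and record the recommendation so identical cases can be compared across profiles.
from itertools import product

CASES = ["55-year-old with acute chest pain"]    # the study used 1,000 ED vignettes
PROFILES = ["high-income, privately insured",    # the study used 32 profiles
            "low-income, unhoused"]
MODELS = ["model_a", "model_b"]                  # the study tested nine LLMs

def query_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call; returns a triage decision."""
    return "urgent"  # placeholder response

records = []
for model, case, profile in product(MODELS, CASES, PROFILES):
    prompt = f"Patient background: {profile}. Case: {case}. Recommend a triage priority."
    records.append({"model": model, "case": case, "profile": profile,
                    "recommendation": query_model(model, prompt)})

# Because the clinical content of each case is held fixed, any variation in
# "recommendation" across profiles for the same case reflects demographic
# sensitivity rather than medical need.
```

Holding the vignette constant and varying only the demographic preamble is what allows any divergence in output to be attributed to the patient’s background rather than the clinical facts.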
“Our research provides a framework for AI assurance, helping developers and health care institutions design fair and reliable AI tools,” said co-senior author Eyal Klang, MD, Chief of Generative AI in the Windreich Department of Artificial Intelligence and Human Health at the Icahn School of Medicine at Mount Sinai. “By identifying when AI shifts its recommendations based on background rather than medical need, we inform better model training, prompt design, and oversight. Our rigorous validation process tests AI outputs against clinical standards, incorporating expert feedback to refine performance. This proactive approach not only enhances trust in AI-driven care but also helps shape policies for better health care for all.”
One of the study’s most striking findings was the tendency of some AI models to escalate care recommendations, particularly for mental health evaluations, based on patient demographics rather than medical necessity. In addition, high-income patients were more often recommended advanced diagnostic tests such as CT scans or MRIs, while low-income patients were more frequently advised to undergo no further testing. The scale of these inconsistencies underscored the need for stronger oversight, the researchers said.
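The release reports these disparities as rates rather than describing the underlying analysis, but a standard way to flag a gap like the imaging one above is a chi-squared test of independence on per-group recommendation counts. The counts below are invented for illustration; they are not the study’s data.

```python
# Sketch of a disparity check (not the paper's analysis): test whether the
# recommendation an identical case receives is independent of income group.
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = income group,
# columns = [advanced imaging, no further testing]
observed = [
    [620, 380],  # cases framed as high-income patients
    [410, 590],  # cases framed as low-income patients
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, p = {p_value:.2g}")

# Since the vignettes are clinically identical across groups, a small p-value
# suggests the models' recommendations track demographics, not medicine.
```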
While the study provided critical insights, the researchers cautioned that it represents only a snapshot of AI behaviour. Future research will include further assurance testing to evaluate how AI models perform in real-world clinical settings and whether different prompting techniques can reduce bias. The team also aims to work with other health care institutions to refine AI tools, ensuring they uphold the highest ethical standards and treat all patients fairly.
“I am delighted to partner with Mount Sinai on this critical research to ensure AI-driven medicine benefits patients across the globe,” said physician-scientist Mahmud Omar, MD, first author of the study and a consultant to the research team. “As AI becomes more integrated into clinical care, it’s essential to thoroughly evaluate its safety, reliability, and fairness. By identifying where these models may introduce bias, we can work to refine their design, strengthen oversight, and build systems that keep patients at the heart of safe, effective care. This collaboration is an important step toward establishing global best practices for AI assurance in health care.”
“AI has the power to revolutionise health care, but only if it’s developed and used responsibly,” said co-senior author Girish N Nadkarni, MD, MPH, Chair of the Windreich Department of Artificial Intelligence and Human Health, Director of the Hasso Plattner Institute for Digital Health, and the Irene and Dr Arthur M Fishberg Professor of Medicine at the Icahn School of Medicine at Mount Sinai. “Through collaboration and rigorous validation, we are refining AI tools to uphold the highest ethical standards and ensure appropriate, patient-centred care. By implementing robust assurance protocols, we not only advance technology but also build the trust essential for transformative health care. With proper testing and safeguards, we can ensure these technologies improve care for everyone, not just certain groups,” he added.
Next, the investigators plan to expand their work by simulating multistep clinical conversations and piloting AI models in hospital settings to measure their real-world effect. They hope their findings will guide the development of policies and best practices for AI assurance in health care, fostering trust in these powerful new tools.
The paper is titled ‘Socio-Demographic Biases in Medical Decision-Making by Large Language Models: A Large-Scale Multi-Model Analysis’.
The study’s authors, as listed in the journal, are Mahmud Omar, Shelly Soffer, Reem Agbareia, Nicola Luigi Bragazzi, Donald U Apakama, Carol R Horowitz, Alexander W Charney, Robert Freeman, Benjamin Kummer, Benjamin S Glicksberg, Girish N Nadkarni, and Eyal Klang.
DOI: 10.1038/s41591-025-03626-6