Bedford, Mass., October 7, 2022 – The use of artificial intelligence (AI) in healthcare offers enormous potential for accelerating clinical research and improving the quality and delivery of care. However, a growing body of evidence shows that such tools can perpetuate and amplify harmful bias in the absence of a framework that is designed for health equity and addresses algorithmic bias.

The Coalition for Health AI (CHAI), a community of academic health systems, organizations, and expert practitioners in AI and data science, convened this spring to identify priority areas where standards, best practices, and norms need to be developed, and to frame guidance for directions in research, technology, and policy. The coalition intends to advance AI for healthcare with a careful eye on health equity and a focus on addressing algorithmic bias. Members of the coalition include Change Healthcare, Duke AI Health, Google, Johns Hopkins University, Mayo Clinic, Microsoft, MITRE, Stanford Medicine, UC Berkeley, and UC San Francisco.

The coalition focused on the foundational themes of health equity and algorithmic bias in the first of a series of workshops aimed at developing guidelines for the responsible use of AI in healthcare. This inaugural meeting was made possible with generous funding from, and in collaboration with, the coalition's partner organizations.

The result of this two-day convening was a paper, published for public input, that summarized presentations, group discussions, and breakout sessions addressing the following topics: Health Equity by Design; Bias and Fairness Processes and Metrics; and Impacting Marginalized Groups: Mitigation Strategies for Data, Model, and Application Bias.

“We are pleased to see such energetic engagement throughout our first meeting, and also to be able to benefit from a diverse range of knowledge and opinions. It’s especially encouraging to see the group’s strong commitment to making equity the cornerstone of the ethical framework we are trying to build for AI in healthcare,” said Michael Pencina, cofounder of the coalition and director of Duke AI Health. “Although AI has the potential to elevate care for everyone, we cannot take that for granted. It’s essential that all stakeholders—physicians, scientists, researchers, programmers, manufacturers, and patients—are able to contribute meaningfully to building better health outcomes for everyone.”

The coalition also held virtual workshops on Testability, Usability, and Safety; Transparency; and Reliability and Monitoring, and is publishing the associated papers for public feedback.

“The application of AI holds tremendous potential to benefit patient care, but also to exacerbate inequity in healthcare,” said John Halamka, M.D., president of Mayo Clinic Platform and cofounder of the coalition. “Guidelines for the ethical use of an AI solution cannot be an afterthought. Our coalition’s experts share a commitment to ensuring that patient-centered and stakeholder-informed guidelines can achieve equitable outcomes for all populations.”

Coalition to Design Guidelines for Health AI Tools Continues to Grow

The Office of the National Coordinator for Health Information Technology (ONC) has stepped forward to serve as a third federal observer of the coalition, officially joining the U.S. Food and Drug Administration and the National Institutes of Health.

“The enthusiastic participation of leading academic health systems, technology organizations, and federal observers demonstrates the significant national interest in ensuring that Health AI serves all of us,” said Dr. Brian Anderson, cofounder of the coalition and chief digital health physician at MITRE. “This national need is driving our coalition’s work to build a national framework for Health AI that promotes transparency and trustworthiness.”

These collaborations will help create an AI framework that begins and ends with intentionality, fostering resilient AI assurance, safety, and security. The coalition is working to build a toolset and guidelines that apply throughout the patient care journey—from chatbots to patient records—and ensure that no population is left behind or adversely affected by algorithmic bias.

“With AI, we have the opportunity to harness the vast potential of data and machine learning to improve the accuracy, efficiency, and quality of healthcare, and to empower patients and providers with more personalized and evidence-based care,” said Eric Horvitz, M.D., Ph.D., chief scientific officer at Microsoft and cofounder of CHAI. “But we also have a responsibility to ensure that our systems meet our goals and expectations around safety and reliability, and that they advance equity, transparency, and trust. The opportunities are great and the stakes are high.”

White House Office of Science and Technology Policy Offers Blueprint for an AI Bill of Rights

“It is inspiring to see the steps the White House Office of Science and Technology Policy is taking toward instilling ethical standards in AI,” said Dr. Halamka. “As a coalition we share many of the same goals, including the removal of bias in health-focused algorithms, and we look forward to offering our support and expertise as the policy process advances.”

The Coalition for Health AI will convene in mid-October to finalize its framework and share recommendations by year-end. To join our efforts, please visit the coalition’s website to learn more.

About the Coalition for Health AI

The Coalition for Health AI is a community of academic health systems, organizations, and expert practitioners of artificial intelligence (AI) and data science. Coalition members have come together to harmonize standards and reporting for health AI and to educate end users on how to evaluate these technologies to drive their adoption. Its mission is to provide guidelines for an ever-evolving landscape of health AI tools in order to ensure high-quality care, increase credibility among users, and meet healthcare needs. Learn more on the Coalition for Health AI website.
