RESEARCH SUMMARY 

Study Title: GPT-4 in a Cancer Center: Institute-Wide Deployment Challenges and Lessons Learned

Publication: 

Dana-Farber Cancer Institute authors: Renato Umeton, PhD; Anne Kwok; Rahul Maurya; Domenic Leco, JD, MBA; Naomi Lenane; Jennifer Willcox, JD; Mary Tolikas, PhD, MBA; Dana-Farber Generative AI Governance Committee; Jason M. Johnson, PhD

Summary:

Dana-Farber Cancer Institute has implemented a generative artificial intelligence (AI) application intended for general use in a medical center or hospital. The system, called GPT4DFCI, is permitted for operational, administrative, and research uses but prohibited in direct clinical care. It is deployed within the Dana-Farber digital premises, so all operations, prompts, and responses occur inside a private network; the application is private, secure, HIPAA-compliant, and auditable.
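The study does not publish implementation code. Purely as an illustration of the general pattern such a deployment implies, the minimal Python sketch below routes requests to a private Azure OpenAI Service endpoint and writes prompts and responses to an audit log; the endpoint, deployment name, environment variables, and log destination are hypothetical placeholders, not details from the study.

import logging
import os

from openai import AzureOpenAI  # requires the openai Python package (v1.x)

# Audit trail: record every prompt and response for later review.
logging.basicConfig(filename="gpt4_audit.log", level=logging.INFO)

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # private-network endpoint (placeholder)
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def ask(prompt: str, user_id: str) -> str:
    """Send a prompt to the private GPT-4 deployment and log the exchange."""
    logging.info("user=%s prompt=%r", user_id, prompt)
    response = client.chat.completions.create(
        model="gpt-4",  # Azure deployment name (placeholder)
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    logging.info("user=%s response=%r", user_id, answer)
    return answer

if __name__ == "__main__":
    print(ask("Summarize the attached meeting minutes.", user_id="demo-user"))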

The application was rolled out in phases over the past year to progressively larger groups of users. The rollout was accompanied by detailed guidance; for example, users were reminded that they are directly responsible for the completeness, veracity, and fairness of any final work products and must verify GPT-generated content because it may be incomplete, biased, or factually false. The rollout of the tool and its associated policy has been guided by the Dana-Farber Generative AI Governance Committee, which includes broad representation of DFCI constituencies, spanning research, operations, legal, privacy, information security, bioethics, compliance, intellectual property, and patients.

With direct clinical care use excluded, a survey of initial users showed that the most frequently reported primary uses were extracting or searching for information in notes, reports, or other files, and answering general knowledge questions. Other reported uses included summarizing documents or research papers and drafting or editing letters, meeting minutes, or presentations. The most commonly reported concerns were inaccurate or false output and ethics and/or compliance with policies.

Impact:

Generative AI holds significant potential to aid healthcare, coupled with significant risks of bias, inaccuracy, incompleteness, and misuse. Despite these risks, the Dana-Farber team decided that broad prohibition of generative AI tools would inhibit learning and innovation, which are central to the Dana-Farber mission. To manage risk and advance discovery, a broadly representative, multidisciplinary governance body guided the technical, ethical, and policy decisions behind this implementation. The experience and lessons learned have been shared to inform other healthcare institutions considering similar efforts.

Disclosures:

The Microsoft Azure teams supported Dana-Farber in handling Azure OpenAI Service quotas and shared expertise to help ensure a resilient application.
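The disclosure describes quota handling and resilience only at a high level. As one common pattern for absorbing Azure OpenAI Service quota limits, and not the study's actual implementation, the sketch below retries a chat completion with exponential backoff when the service reports a rate-limit error; the client object and deployment name are hypothetical placeholders.

import time

from openai import AzureOpenAI, RateLimitError

def ask_with_backoff(client: AzureOpenAI, prompt: str, max_retries: int = 5) -> str:
    """Retry the chat call with exponential backoff when quota limits are hit."""
    delay = 1.0
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4",  # Azure deployment name (placeholder)
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            time.sleep(delay)  # wait before retrying
            delay *= 2  # exponential backoff
    raise ValueError("max_retries must be at least 1")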