
When it comes to health care, how can AI help — or hurt — patients?

The platform can help patients make sense of their health data, but it is not protected by HIPAA
Pros and cons of ChatGPT Health
Northwestern AI-in-medicine expert Dr. David Liebovitz spoke with Northwestern Now about the launch of ChatGPT Health and what it means for patient privacy, democratization of health data and more. Image credit: OpenAI

OpenAI recently introduced ChatGPT Health, “a dedicated experience in ChatGPT designed for health and wellness,” as a response to the millions of people who ask ChatGPT a health care-related question every day, the company said.

To learn more about the implications and potential of this new feature, Northwestern Now sat down with Dr. David Liebovitz, co-director of the Institute for Artificial Intelligence in Medicine’s Center for Medical Education in Data Science and Digital Health at Northwestern University Feinberg School of Medicine.

Liebovitz has been teaching clinical informatics for several decades, incorporating new methods for education and applications of AI within clinical patient care. He has also been a chief medical information officer at two organizations where he actively implemented AI in clinical medicine.

How do you approach the conversation about patients interacting with AI?

“The question isn’t whether patients will use AI for health information; 40 million people already ask ChatGPT health questions daily. The question is whether we can help them do so more effectively and safely, with appropriate guardrails and realistic expectations about what these tools can and cannot do.”

What opportunity does ChatGPT Health represent for patients?

“The 21st Century Cures Act requires health care systems to give patients complete access to their medical records through standardized application programming interfaces (APIs), which electronic health record vendors such as Epic must now provide. AI tools like ChatGPT Health can help patients make sense of that data. For essentially zero incremental cost, a patient can get help understanding lab results, preparing questions for appointments and identifying gaps in their care that might otherwise be missed.”

“Here is what true democratization of health AI looks like: A patient downloads their records using the APIs health care systems are now required to provide, runs them through an AI model on their own phone and gets personalized insights without their data ever touching a third-party server. No subscription fees, no privacy tradeoffs, no dependence on any company’s policies or terms of service.”
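To make the workflow Liebovitz describes concrete, here is a minimal, local-only Python sketch of the simplest version of "personalized insights" from downloaded records: flagging lab results that fall outside their reference ranges. The field names and sample values are hypothetical simplifications of the FHIR-style Observation data that Cures Act APIs return, not an actual vendor API, and nothing here touches a network or third-party server.

```python
import json

# Hypothetical sample of lab results a patient might download via a
# standardized health-records API (simplified from FHIR Observation
# resources; real payloads are more deeply nested).
SAMPLE = json.dumps([
    {"code": "Hemoglobin A1c", "value": 7.9, "unit": "%",
     "referenceRange": {"low": 4.0, "high": 5.6}},
    {"code": "LDL cholesterol", "value": 96, "unit": "mg/dL",
     "referenceRange": {"low": 0, "high": 129}},
])

def flag_out_of_range(observations_json: str) -> list[str]:
    """Return human-readable flags for results outside their stated range.

    Runs entirely on the local machine: the JSON is parsed and checked
    in memory, so no health data ever leaves the device.
    """
    flags = []
    for obs in json.loads(observations_json):
        rng = obs["referenceRange"]
        if not (rng["low"] <= obs["value"] <= rng["high"]):
            flags.append(
                f"{obs['code']}: {obs['value']} {obs['unit']} "
                f"(reference range {rng['low']}-{rng['high']})"
            )
    return flags

if __name__ == "__main__":
    for line in flag_out_of_range(SAMPLE):
        print("Worth asking your clinician about:", line)
```

A rule-based check like this is far simpler than the on-device language models discussed below, but it illustrates the privacy property: the analysis happens where the data already lives.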

How can tools like this help move medicine forward?

“More than 25 years after the Institute of Medicine report ‘To Err Is Human: Building a Safer Health System’ documented tens of thousands of preventable deaths from medical errors and care gaps, we still haven’t solved this problem. AI assistants that can review a patient’s full history and flag potential concerns represent a significant step forward from patients showing up with Google searches. These tools synthesize information in context rather than generating alarm from isolated symptoms.”

What concerns do you have about ChatGPT Health?

“Patients should understand that health data shared with ChatGPT is not protected by HIPAA. Unlike conversations with physicians or therapists, there’s no legal privilege. This data could potentially be subpoenaed in litigation or accessed through other legal processes. For sensitive health matters, particularly reproductive or mental health concerns, that’s a real consideration.”

How could patient privacy be better approached?

“There’s an alternative approach that sidesteps the privacy concerns entirely: running AI models locally on a patient’s own device. Modern smartphones now have sufficient processing power to run capable language models without any data ever leaving the phone. No cloud storage, no corporate servers, no subpoena risk.”

“On-device AI capabilities, which run AI directly on local hardware such as phones and wearables instead of sending data to the cloud, are advancing rapidly. Apple’s own approach with Apple Intelligence validates that sophisticated AI can run locally. Open-source models optimized for mobile hardware are improving month over month. Within a year or two, a patient could have a highly capable health assistant running entirely on their phone, analyzing their downloaded medical records with complete privacy.”

What is your research exploring in relation to this area?

“Our research group is actively exploring how to make this practical for the public. The technical pieces are falling into place: access to standardized health records, powerful mobile hardware and increasingly capable open-source models. The goal is giving everyone access to meaningful second opinions on their health data while keeping that data entirely under their control.”