Surgeon Reveals How ChatGPT's Faulty Medical Advice Endangers Patients
By admin | Jan 13, 2026 | 3 min read
Dr. Sina Bari, a surgeon and AI healthcare lead at the data firm iMerit, has personally observed how ChatGPT can mislead patients with incorrect medical information. He recounted an instance in which a patient presented a printed ChatGPT dialogue warning that a prescribed medication carried a 45% risk of pulmonary embolism. Upon closer examination, Dr. Bari discovered the statistic was drawn from a study of a specific subgroup of tuberculosis patients, a context entirely irrelevant to his patient.
Despite this, Dr. Bari expressed more optimism than worry about the recent announcement of ChatGPT Health, a dedicated health chatbot set to launch in the coming weeks. The tool will let users discuss health concerns in a more private environment, with assurances that their conversations will not be used to train the AI model. "I think it’s great," Dr. Bari stated. "This is already occurring informally, so creating a formalized system that safeguards patient information and incorporates protective measures will ultimately empower patients to use it more effectively."
The service aims to offer more tailored guidance by allowing users to upload their medical records and connect with applications such as Apple Health and MyFitnessPal. This approach, however, immediately raises privacy concerns for security-conscious individuals. "I’m curious to see how regulators will address this," Dr. Bari noted.
Many industry experts see this shift as inevitable. With over 230 million people consulting ChatGPT on health matters weekly, AI chatbots are becoming a common alternative to searching symptoms online, making a more private, secure, and healthcare-optimized version of ChatGPT a logical next step.
A significant challenge for AI in this domain is its tendency to produce hallucinations, or factual inaccuracies, which is especially critical in medical contexts. Research from Vectara’s Factual Consistency Evaluation Model indicates that OpenAI’s GPT-5 demonstrates a higher propensity for hallucinations compared to several models from Google and Anthropic. Nevertheless, AI companies see an opportunity to address systemic inefficiencies in healthcare, a sentiment echoed by Anthropic’s own health product announcement this week.
For Dr. Nigam Shah, a Stanford medicine professor and chief data scientist for Stanford Health Care, the pressing issue is not flawed AI advice but the severe lack of access to care. "Currently, if you try to see a primary care doctor at any health system, wait times can stretch from three to six months," Dr. Shah explained. "If the choice is between waiting six months for a physician or consulting a non-doctor that can still provide some assistance, what will most people choose?"
Dr. Shah advocates integrating AI into healthcare primarily on the provider side rather than directly with patients. Studies frequently note that administrative duties can occupy up to half of a primary care doctor's time, drastically limiting daily patient capacity. Automating these tasks could free physicians to see more patients, which in turn could reduce patients' reliance on unsupervised tools like ChatGPT Health.
Leading a team at Stanford, Dr. Shah is developing ChatEHR, software integrated into electronic health record systems to help clinicians navigate patient records more efficiently. "Making the electronic medical record more user-friendly means physicians can spend less time scouring every nook and cranny of it for the information they need," said Dr. Sneha Jain, an early tester of ChatEHR, in a Stanford Medicine article. "ChatEHR can present that information upfront, allowing them to focus on what truly matters—talking to patients and diagnosing their conditions."
Anthropic is similarly focusing on AI solutions for clinicians and insurers, beyond its public Claude chatbot. This week, the company introduced Claude for Healthcare, highlighting its potential to save time on arduous administrative tasks like processing prior authorization requests with insurance providers. "Some of you handle hundreds or thousands of these prior authorization cases weekly," said Anthropic CPO Mike Krieger during a presentation at J.P. Morgan’s Healthcare Conference. "Imagine saving twenty to thirty minutes on each one—the cumulative time savings are dramatic."
As AI and medicine grow increasingly connected, a fundamental tension persists between the two fields: a doctor's primary duty is to the patient, while technology companies are ultimately accountable to their shareholders, even with the best of intentions. "I believe that tension is crucial," Dr. Bari reflected. "Patients depend on us to be skeptical and cautious in order to protect them."