Although wider access to health information can support patient understanding, clinicians report a growing trend of patients placing excessive trust in AI outputs—sometimes refusing treatment or challenging physicians’ expertise based on chatbot responses.

AI-generated medical information is intended as a general reference, not a substitute for personalized diagnosis or treatment, a distinction with significant clinical consequences. A 2024 study found that AI delivered accurate diagnoses and treatment plans in only about half of the medical cases reviewed, yet many patients continue to treat these outputs as authoritative.
“AI is not a stand-in for medical professionals but a tool to support clinical judgment,” said Dr. Hong Yoo, Director of Clinical Services at Busan On Hospital. “Non-experts who blindly trust AI responses are taking a dangerous gamble.”
Dr. Yoo emphasized that clinicians must proactively educate patients about AI's limitations in order to preserve trust. He has found case-specific examples effective for demonstrating AI's capacity for error and the need for tailored medical assessment.
Online misinformation compounds the problem. Physicians warn that unverified hair-loss remedies, fraudulent health advertisements, and COVID-19 vaccine falsehoods have all contributed to delayed treatment and adverse health outcomes.
Dr. Dongheon Kim, President of Busan On Hospital, likened unvalidated AI advice to “an experimental drug untested by clinical trials.” He urged patients to seek professional validation rather than accepting AI responses uncritically.
Lim Hye Jung, HEALTH IN NEWS TEAM
press@hinews.co.kr