
Why AI Health Chatbots Are Failing Patients

IMAGE CREDITS: HARVARD BUSINESS REVIEW

More people are turning to AI health chatbots for quick answers to medical concerns. With rising healthcare costs and long wait times, it’s no surprise that tools like ChatGPT are now a go-to for basic health advice. In fact, one recent survey showed that about one in six American adults use chatbots for health tips at least once a month.

But new research from Oxford suggests this growing trend may do more harm than good. The study found that many users struggle to ask the right questions, and chatbots often respond with mixed or unclear advice.

Chatbots Miss the Mark on Accuracy and Clarity

Oxford researchers studied how well people could identify health conditions using different tools. They asked 1,300 UK participants to review medical scenarios created by real doctors. Each participant then used AI chatbots, online searches, or personal judgment to decide what to do next.

The chatbots tested included OpenAI's GPT-4o, Cohere's Command R+, and Meta's Llama 3. But instead of helping users make better choices, these tools often made things worse. Participants using chatbots were less likely to spot serious conditions, and even when they did identify one, they often underestimated its urgency.

Adam Mahdi, co-author of the study and director at the Oxford Internet Institute, explained that many users left out key symptoms when asking questions. The answers they got in return were often confusing, with helpful tips mixed in with poor suggestions. This created a false sense of confidence—and left many unsure about what steps to take.

Big Tech Is Moving Fast, But Experts Urge Caution

Despite these concerns, major tech companies are racing ahead. Apple is working on an AI tool that gives tips on sleep, diet, and exercise. Amazon is using AI to analyze social and lifestyle factors that affect health. Microsoft is building tools to help healthcare teams handle patient messages more efficiently.

But experts remain divided. The American Medical Association has warned doctors not to use chatbots like ChatGPT for clinical decisions. And even the makers of these tools advise against relying on them for medical diagnoses.

Mahdi summed it up clearly: people should stick with trusted sources for health decisions. He also called for real-world testing of chatbots—similar to clinical trials for new drugs. That way, we can understand how these tools work in practice, not just in theory.

Until then, AI health chatbots may be helpful for learning—but not for making serious health choices.
