Study raises safety concerns as people turn to AI for symptom checks

AI for symptom checks: People seeking quick medical guidance from AI chatbots may be getting no advantage over old-fashioned internet searches, according to a new study that tested how well everyday users handle symptom-checking with popular large language models.

Researchers ran a randomised trial with 1,298 adults in the UK, giving participants 10 common health scenarios, ranging from a severe headache after a night out to postpartum exhaustion and gallstone symptoms, then asking them to identify what might be wrong and decide what kind of care was needed.

Participants were assigned to use one of three chatbots (OpenAI’s GPT-4o, Meta’s Llama 3, or Cohere’s Command R+) or to a control group that used standard search engines. The result: users with chatbots correctly identified the underlying issue only about a third of the time (34.5%), and chose the right next step (self-care, a GP visit, or urgent/emergency care) 44.2% of the time, essentially matching the search-engine group.

The team said the findings help explain a gap between chatbot performance on medical benchmark tests and real-world symptom checking. In practice, users often leave out key details, provide vague or incomplete descriptions, or struggle to interpret what the chatbot is telling them, sometimes misunderstanding or ignoring the advice altogether.

The researchers also warned that the stakes are high when people rely on a chatbot to decide whether symptoms are urgent. In public-facing guidance around the study, University of Oxford researchers cautioned that these tools can produce inconsistent or incorrect information and may fail to flag situations that need immediate care.

The paper lands as consumer use of AI for health questions grows. One widely cited US survey found that roughly one in six adults uses AI chatbots for health information at least monthly, suggesting both the audience and the risk of missteps may expand quickly.

For now, experts continue to advise that people treat chatbot outputs as general information, not a diagnosis, and prioritise trusted medical sources such as the National Health Service or a qualified clinician, especially when symptoms are severe, sudden, or worsening.
