ChatGPT shows one dangerous flaw when responding to health crisis questions, study finds

People are turning to ChatGPT, the artificial intelligence chatbot from OpenAI, for everything from meal plans to medical information — but experts say it falls short in some areas, including how it responds to pleas for help with health crises.
A study published Wednesday in the journal JAMA Network Open found that when the large language model was asked for help with public health issues — such as addiction, domestic violence, sexual assault and suicidal tendencies — ChatGPT often failed to refer users to the appropriate resources.
Led by John W. Ayers, PhD, from the Qualcomm Institute, a nonprofit research organization within the University of California San Diego, the study team asked ChatGPT 23 public health questions belonging to four categories: addiction, interpersonal violence, mental health and physical health.