A recent study by researchers at the Massachusetts Institute of Technology has revealed a troubling trend: people increasingly trust medical advice from artificial intelligence (AI) over traditional healthcare providers, even when the accuracy of that advice is in question. The findings, published in the New England Journal of Medicine, show that participants preferred AI-generated medical responses despite their potential for inaccuracy.
The research involved 300 participants, comprising both medical experts and laypeople, who were presented with medical advice from three sources: a qualified physician, an online healthcare platform, and an AI model such as ChatGPT. Participants rated the AI-generated responses as more accurate, valid, trustworthy, and complete than those provided by human doctors.
The results highlighted a significant concern: neither medical professionals nor laypeople could consistently distinguish AI-generated content from that produced by human doctors. Alarmingly, when participants were shown low-accuracy AI responses, they still regarded the answers as valid and were inclined to follow the advice, potentially leading to harmful health decisions.
“The participants not only found these low-accuracy AI-generated responses to be valid and trustworthy but also indicated a high tendency to follow the potentially harmful medical advice,” the researchers noted. This reliance on AI could lead individuals to seek unnecessary medical attention based on misleading information.
Several real-world incidents underscore the dangers of AI-generated medical advice. In one case, a 35-year-old man from Morocco ended up in the emergency room after following a chatbot’s instructions to wrap rubber bands around his hemorrhoids. In another, a 60-year-old man was hospitalized for three weeks after acting on a ChatGPT suggestion and consuming sodium bromide, a substance typically used for pool sanitation.
Dr. Darren Lebl, research service chief of spine surgery at the Hospital for Special Surgery in New York, has expressed concern over the reliability of AI-generated medical advice. He stated, “The problem is that what they’re getting out of those AI programs is not necessarily a real, scientific recommendation with an actual publication behind it. About a quarter of them were made up.”
The study’s conclusions also align with a recent survey conducted by Censuswide, in which nearly 40 percent of respondents said they trust medical advice from AI bots such as ChatGPT. This raises important questions about the role of AI in healthcare and the need for greater public awareness of the limitations of AI-generated information.
As medical technology continues to evolve, the reliance on AI for health advice presents a complex challenge. Ensuring accurate information and promoting critical evaluation of AI-generated content will be crucial in mitigating the risks associated with these technologies. The findings serve as a cautionary reminder that while AI can enhance healthcare delivery, human oversight remains indispensable.
