
Using ChatGPT for medical questions? Why this isn't such a good idea yet

Photo: AFP / Olivier Morin

Is ChatGPT our GP of the future? This question has been raised more and more frequently in recent years. But a case in the United States shows that ChatGPT isn't always a reliable medical adviser: a man developed a rare condition after the AI bot advised him on how to eliminate salt from his diet.

In 2023, an AI-powered medical chatbot from Google passed a demanding US medical exam. Google researchers had the program take the USMLE, a multiple-choice medical licensing exam. Med-PaLM achieved a score of 67.6 percent, enough to pass. "Med-PaLM performs encouragingly, but remains inferior to physicians," the authors wrote at the time.

Annals of Internal Medicine, a medical journal in the United States, describes a case in which the use of ChatGPT as a medical aid led to a rare condition. A 60-year-old man had read about the negative effects of table salt (sodium chloride) and asked the AI bot how to eliminate it from his diet. ChatGPT suggested that chloride could be replaced with sodium bromide, a compound that was still used as a sedative in the early 20th century.

After taking sodium bromide for three months, the man went to the emergency room, where he claimed his neighbor was poisoning him. He experienced symptoms including extreme thirst, insomnia, and acne. It turned out that the sodium bromide was the culprit: the man had developed bromism, also known as bromide poisoning.

“This case demonstrates how the use of artificial intelligence (AI) can potentially contribute to the development of preventable negative health outcomes,” write the authors, who are affiliated with the University of Washington. Although they didn’t have access to the man’s chat history, when they asked ChatGPT themselves what chloride could be replaced with, they also received a response that mentioned bromide. “Although the response noted that context matters, it didn’t provide a specific health warning or ask why we wanted to know, as we would assume a medical professional would.”

They argue that it's important to keep in mind that ChatGPT and other AI systems "can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation." The authors see ChatGPT as a "tool with great potential to bridge the gap between scientists and the non-academic population," but point out the risk of information being taken out of context. It's "highly unlikely" that a medical expert would have mentioned sodium bromide in that case. "Healthcare providers should be mindful of where their patients access health information."

There are other risks associated with chatting with ChatGPT. Some people, for example, use the bot as a personal therapist. The AI responds with great empathy, so the appeal is understandable, but for people with real problems it can lead to a worsening of their mental health. One man, for instance, lost his job, became depressed, and asked ChatGPT about the highest bridges in New York. A human therapist would immediately intervene; ChatGPT simply provided a neat list.

Even questions about sex appear to be risky. British research shows that many young people use ChatGPT as a form of sex education, but according to experts it's difficult to know where the answers ChatGPT provides originate. This means the bot can give an incorrect answer or recommend something completely undesirable.
