ChatGPT invents fake cancer data, doctors warn

Doctors are warning against using ChatGPT for medical advice after a study found it fabricated health data when asked about cancer information.

The AI chatbot incorrectly answered one in ten questions about breast cancer screening, and its correct answers were not as “complete” as those found by a simple Google search.

The researchers said that in some cases, the AI chatbot even used fake journal articles to back up its claims.

It comes amid warnings that users should treat the software with caution, as it tends to “hallucinate” – in other words, make things up.

Doctors warn against using ChatGPT for medical advice

Researchers at the University of Maryland School of Medicine asked ChatGPT to answer 25 questions related to breast cancer screening advice.

Since the chatbot is known to vary its answers, each question was asked three times. The results were then analyzed by three radiologists trained in mammography.

The “vast majority” of responses – 88% – were appropriate and easy to understand. Some of the answers, however, were “inaccurate or even fictitious”, the researchers warned.

One response, for example, was based on outdated information. It advised delaying a mammogram for four to six weeks after receiving a Covid-19 vaccination, but that advice was changed more than a year ago to urge women not to wait.

ChatGPT also provided inconsistent answers to questions about breast cancer risk and where to get a mammogram. The study found that responses “varied considerably” each time the same question was asked.

Study co-author Dr. Paul Yi said, “We have seen in our experience that ChatGPT sometimes invents fake journal articles or fake health consortia to support its claims.

“Consumers should be aware that these are new, unproven technologies and should always rely on their doctor, rather than ChatGPT, for advice.”

The results – published in the journal Radiology – also revealed that a simple Google search always provided a more comprehensive answer.

Lead author Dr. Hana Haver said ChatGPT relied on a single set of recommendations, from the American Cancer Society, and did not offer the differing recommendations issued by the Centers for Disease Control and Prevention or the US Preventive Services Task Force.

The launch of ChatGPT late last year sparked a surge in demand for the technology, with millions of users now using the tools every day, from writing school essays to seeking health advice.

Microsoft has invested heavily in the software behind ChatGPT and integrates it into its Bing search engine and Office 365, including Word, PowerPoint, and Excel.

But the tech giant admitted it can still make mistakes.

Artificial intelligence experts call the phenomenon “hallucination”: a chatbot that cannot find an answer in its training data responds confidently with a made-up answer it deems plausible.

It then continues to insist on the wrong answer, with no internal awareness that it is a figment of its own imagination.

Dr. Yi, however, suggested that the results were generally positive, with ChatGPT correctly answering questions about symptoms of breast cancer, who is at risk, and questions about cost, age and frequency recommendations for mammograms.

He said the proportion of correct answers was “quite amazing”, with “the added benefit of summarizing the information in an easily digestible form for consumers to understand easily”.

More than a thousand academics, pundits and tech industry bosses recently called for an emergency halt to the “dangerous” “arms race” to launch the latest AI.

They warned that the battle between tech companies to develop ever more powerful digital minds is “out of control” and poses “profound risks to society and humanity”.
