People struggle to get useful health advice from chatbots, study finds

With long waiting lists and rising costs in overburdened health care systems, many people are turning to AI-powered chatbots like ChatGPT for medical self-diagnosis. About 1 in 6 American adults already use chatbots for health advice at least monthly, according to a recent survey.

But placing too much trust in chatbots' output can be risky, in part because people struggle to know what information to give chatbots to get the best possible health recommendations, according to a recent Oxford-led study.

“The study revealed a two-way communication breakdown,” Adam Mahdi, director of graduate studies at the Oxford Internet Institute and a co-author of the study, told TechCrunch. “Those who used [chatbots] didn't make better decisions than participants who relied on traditional methods like online searches or their own judgment.”

For the study, the authors recruited around 1,300 people in the U.K. and gave them medical scenarios written by a group of doctors. The participants were tasked with identifying potential health conditions in the scenarios and using chatbots, as well as their own methods, to figure out possible courses of action (e.g., seeing a doctor or going to the hospital).

Participants used the default model powering ChatGPT, GPT-4o, as well as Cohere's Command R+ and Meta's Llama 3, which once underpinned the company's Meta AI assistant. According to the authors, the chatbots not only made participants less likely to identify a relevant health condition, but also made them more likely to underestimate the severity of the conditions they did identify.

Mahdi said that participants often omitted key details when querying the chatbots, or received answers that were difficult to interpret.

“[T]he responses they received [from chatbots] frequently combined good and poor recommendations,” he added. “Current evaluation methods for [chatbots] do not reflect the complexity of interacting with human users.”

The findings come as tech companies increasingly push AI as a way to improve health outcomes. Apple is reportedly developing an AI tool that can dispense advice related to exercise, diet, and sleep. Amazon is exploring an AI-based way to analyze medical databases for “social determinants of health.” And Microsoft is helping to build AI to triage messages sent to care providers by patients.

But as TechCrunch has previously reported, both professionals and patients are mixed as to whether AI is ready for higher-risk health applications. The American Medical Association recommends against physicians using chatbots like ChatGPT for assistance with clinical decisions, and major AI companies, including OpenAI, warn against making diagnoses based on their chatbots' output.

“We would recommend relying on trusted sources of information for health care decisions,” Mahdi said. “Current evaluation methods for [chatbots] do not reflect the complexity of interacting with human users. Like clinical trials for new medications, [chatbot] systems should be tested in the real world before being deployed.”
