Ad image

Doctors Say AI Is Introducing Slop Into Patient Care

6 Min Read

Recently, studies have been published that show that AI is better than human doctors when it comes to diagnosing health problems. These studies are fascinating because America’s health care system is badly broken and everyone is looking for solutions. AI offers a potential opportunity to make doctors more efficient by doing much of the administrative work for them, giving them time to see more patients and reducing the ultimate cost of care. Real-time translation could also improve accessibility for non-English speakers. For technology companies, the opportunity to serve the healthcare industry can be very lucrative.

However, in reality, we do not seem to have reached the point of replacing or actually augmenting doctors with artificial intelligence. of washington post spoke We worked with several experts, including doctors, to see how early tests of AI were progressing, but the results were not encouraging.

Below is an excerpt from Christopher Sharp, a clinical professor at Stanford Medicine, who uses GPT-4o to make recommendations for patients who contact his office.

Sharp randomly selects patient questions. It says, “My lips started itching after eating tomatoes. Do you have any recommendations?”

An AI using OpenAI’s version of GPT-4o would create a reply like this: It sounds like you’re having a mild allergic reaction to tomatoes. ” The AI ​​recommends avoiding tomatoes, using oral antihistamines, and using topical steroid creams.

Sharp stared at the screen for a while. “Clinically, I don’t agree with all aspects of that answer,” he says.

“While I totally agree with avoiding tomatoes, I don’t recommend topical creams like mild hydrocortisone for the lips,” says Sharp. “The lips are very thin tissue, so I’m very careful when using steroid creams.

“We’ll just remove that part.”

In the words of Roxana Daneshjou, professor of medicine and data science at Stanford University:

She opened her laptop, opened ChatGPT, and entered the test patient’s questions. “Doc, I was breastfeeding and I think I got mastitis. My breasts are getting red and painful.” ChatGPT responds with: Use hot packs, give massages, and provide additional nursing care.

But that’s a mistake, says Danishjou, who is also a dermatologist. Academy of Breastfeeding Medicine in 2022 Recommended On the other hand, apply cold compresses, refrain from massage, and avoid excessive stimulation.

The problem with technology optimists pushing AI into fields like healthcare is that AI is not the same as creating consumer software. We already know that Microsoft’s Copilot 365 assistant has bugs, but small mistakes in PowerPoint presentations are no big deal. Making mistakes in the medical field can cost people their lives. Daneshjoo said: post She red-teamed with 80 other people, including computer scientists and doctors, who asked ChatGPT medical questions and discovered that ChatGPT provided unsafe responses 20% of the time. “To me, a 20 percent problematic answer is not good enough for real, routine use in the health care system,” she said.

Of course, proponents will say that AI cannot replace doctors’ work, but can augment it, and that results should always be checked. And it’s true, post This article, based on an interview with doctors at Stanford University, found that two-thirds of doctors with access to the platform use AI to record patient encounters and to look patients in the eye and look down during consultations. He said he tries to take notes easily. But even there, OpenAI’s Whisper technology appears to be injecting completely fabricated information into some recordings. Sharp said Whisper incorrectly inserted into the records that the patient attributed the cough to contact with children, but they never said that. One incredible example of bias from training data that Daneshjou found in his tests was when an AI transcription tool identified a Chinese patient as a computer programmer, even though the patient had never provided such information. That’s what I expected.

AI can be useful in the medical field, but its output needs to be thoroughly checked. And how much time are doctors actually saving? Additionally, patients need to trust that doctors are actually checking what the AI ​​is producing. Hospital systems need to check in to ensure this is happening, or complacency can creep in.

Essentially, generative AI is just a word prediction machine that searches through large amounts of data without really understanding the underlying concepts that are returned. They do not have “intelligence” in the same sense as a living human being, and cannot particularly understand situations unique to a particular individual. It’s generalizing and returning information seen previously.

“I think this is a promising technology, but it’s not here yet,” said Adam Rodman, an internist and AI researcher at Beth Israel Deaconess Medical Center. “I worry that introducing hallucinatory ‘AI slop’ into high-stakes patient care will further degrade what we do.”

The next time you see your doctor, it might be worth asking if they use AI in their workflow.

Share This Article
Leave a comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Exit mobile version