What happens when ChatGPT enters the medical field? US experts explain


Experts in the digital medicine space have identified applications for large language models such as helping doctors translate medical terms into plain language, assisting with health care paperwork, or aiding drug discovery. — Photo by Lionel BONAVENTURE/AFP

Last month, Connecticut's US Sen. Chris Murphy took to Twitter to warn that ChatGPT had "taught itself advanced chemistry" without prompting.

"Something is coming," Murphy warned. "We aren't ready."

Murphy spoke to CT Insider last week about his feelings on the technology. But questions remain about the potential impact of AI programs like ChatGPT on sensitive fields like medicine. What happens when AI expands into the medical industry?

"I think you are 100% going to see this in medicine both on the patient-facing side and the provider side," said Dr. Perry Wilson, director of the Clinical and Translational Research Accelerator at Yale School of Medicine.

What is ChatGPT?

Before getting into specifics of what ChatGPT might do in medicine, we should define what it is. ChatGPT is a large language model, a type of computer program that is fed large amounts of contextual information, like text, images and audio. ChatGPT was trained on data scraped from the internet. The model assigns billions of parameters to the information it has been fed. When a user enters a prompt, the model uses those parameters to predict what output would make the most sense.

A prompt can be anything from "Tell me a story" to "Write me a five-paragraph essay about leukemia treatments." The program will compose a response that seems plausible.
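For readers curious what "predicting the most likely output" looks like in practice, here is a minimal toy sketch in Python. The word probabilities are invented for illustration and stand in for the billions of learned parameters a real model uses; this is not how ChatGPT is actually built, only the general idea of choosing a likely next word and repeating.

# Toy sketch of next-word prediction, not ChatGPT's actual code.
# The hand-made probability table stands in for billions of learned parameters.
import random

NEXT_WORD_PROBS = {
    "Tell me a": {"story": 0.7, "joke": 0.2, "secret": 0.1},
    "Once upon a": {"time": 0.9, "hill": 0.1},
}

def next_word(context: str) -> str:
    """Pick the next word in proportion to its estimated probability."""
    probs = NEXT_WORD_PROBS.get(context, {"...": 1.0})
    words = list(probs)
    return random.choices(words, weights=[probs[w] for w in words])[0]

print(next_word("Tell me a"))  # most often prints "story"

A real model does this over an enormous vocabulary, one token at a time, which is why its answers read fluently even when nothing behind them has been checked for truth.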

These programs act like magic mirrors, remixing and reflecting bits of the world, human artistry and language back on us. It is an illusion of coherence and creativity. Fundamentally, they merely guess what we want to see.

Consequently, they are very good at formulaic text generation and rote recitation of information contained in their training data. They can write form letters, but not jokes.

"Humans think on an abstract level that large language models cannot," Rory Mir, associate director of community organizing for the Electronic Frontier Foundation. "There's a big incentive to mechanize the human mind to be simply a language model that doesn't have life experiences... but our minds aren't things that can just be boiled down to those pieces."

Writing form letters

Wilson explained that the first place he sees large language models intervening in medicine is in navigating the bureaucratic communications between health care providers and insurance companies. He said he anticipated "plug and play" AI applications that would take the diagnostic criteria behind billing codes and turn them into prior authorization communications.

"If I can have ChatGTP fill in all the nonsense I usually have to fill in to satisfy the people in the C-suite that we're billing appropriately, well, that's a huge time saver," Wilson said. "That's something that could be streamlined by something like ChatGTP."

Other experts in the digital medicine space identified additional applications for large language models, such as helping doctors translate medical terms into plain language, assisting with health care paperwork, or aiding drug discovery. But these applications are tools. They should not be understood as replacements for human beings.

"They are powerful in-betweeners capable of efficiently and creatively narrowing down the vastness of possible responses to the most likely ones," wrote Stefan Harrer, an artificial intelligence ethicist and researcher at the Digital Health Cooperative Research Center of Melbourne, Australia, in the journal EBioMedicine. "But they cannot assess whether a prompt was meaningful or whether the model's response made any sense."

The danger

Harrer outlined several problems with using large language models in health care. In particular, large language models have no means to assess their own outputs for false or inappropriate information. Harrer points out that ChatGPT neither knows nor cares whether the information it produces is true. AI researchers call this kind of misinformation "hallucination."

An example of AI hallucination can be seen on MedPage Today's TikTok, where editor-in-chief Dr. Jeremy Faust queries ChatGPT about a hypothetical diagnosis. While the diagnosis it gave for the condition was plausible, the program spat out an entirely fabricated reference when asked for a citation.

"When you get into specific things like a citation, it may have seen enough to know that it has to start with a number, then type some things that sound like a scientific title, a journal number and a date," Wilson said. "There's never a part of the training that says 'Oh, go back and check to see if that's real, or 'scan Pubmed for actual citations.'"

Mir told CT Insider that hallucinations make large language models no better than going down a "WebMD rabbit hole" of self-diagnosis.

"I would say it's actually worse because you're even less clear about where this information is coming from," Mir said. "There's a greater likelihood of hallucinations being mixed in with legitimate claims, so it becomes harder to decipher."

Privacy and transparency issues

Mir said these problems with accuracy are compounded by the fact that the training data for programs like ChatGPT is not public. The training data could include misinformation, private information, sensitive information and accurate information, all jumbled together, and any of it could be spat out in response to a prompt.

"For anything public-facing, the safest thing for anyone to do is making the training data public," said Mir. "If you aren't OK with putting that data out publicly, then it shouldn't be in the training data to begin with."

Mir referenced the discovery of a patient's medical photos in the popular LAION-5B image dataset, which scrapes images from the internet for use in image generation. An AI artist found before-and-after photos of her own face, taken by a doctor as part of her medical records, within the dataset.

"It brought up the issue that, particularly for rare medical conditions, can you prompt an image-generating AI in a way that it shows a (real) person with this medical condition?" Mir said.

Beyond leaking sensitive medical information, using data that shouldn't be used and providing inaccurate answers, AI poses a danger similar to the one posed by search engines. Sebastian Zimmeck, a computer science professor and the director of the privacy-tech lab at Wesleyan University, said that simply using a service like ChatGPT can expose things about the user.

"Maybe you look for a particular disease you suspect you have — say you have a cold," Zimmeck said. "And then you reveal to the provider (of the AI) that you do have a cold because why else would you ask that question."

Zimmeck said that while this is similar to the vulnerabilities posed by search data, it presents a unique challenge in the AI context. Because large language models are prediction engines, every query gives the model another way to make predictions about a specific user. If the AI program were being used by an insurance provider, that could be bad for a patient, especially if the AI was supposed to work as a therapist.

"It might make correlations that are obvious, but maybe there are correlations that are less obvious," Zimmeck said. "Making these kinds of predictions can reveal private information." – The Hour, Norwalk, Conn/Tribune News Service
