Patients are using chatbots to fight medical bills, with mixed results

Photo: Christopher Capozziello/The New York Times

As chatbots become a fixture in everyday medical care, patients are using them not only to make lists of questions for doctors’ visits or decipher test results, but increasingly to pick apart the financial paperwork that follows, including challenging medical bills.

When Jackie Davalos, 34, received a notice from a collections agency that she owed US$22,604 (RM89,975) to a hospital for an emergency room visit after she fell down some stairs two years earlier, her partner, Walter Kerr, used the chatbot Claude to help challenge the hospital’s charges.

Kerr, 39, an executive at a global development nonprofit, said the chatbot had proved a useful adviser, “but not a perfect one.”

At a time when health care costs top Americans’ financial worries, more patients are turning to chatbots like Claude or ChatGPT as a no-cost, do-it-yourself way to navigate problems with medical bills or insurance coverage. The trend is significant enough that the American Hospital Association has alerted its members that patients are increasingly using artificial intelligence to help dispute bills.

Health care providers and insurers have used AI for some time, in ways that some people have suggested are intended to maximise charges and deny claims.

Chatbots might seem to offer patients a way to fight back. But critics warn that the tools can dispense flawed advice, especially to users who are less experienced in using AI, or who do not have much knowledge about the health care system. And they note that chatbots are not bound by the federal privacy protections of the Health Insurance Portability and Accountability Act, or HIPAA.

While chatbots can explain patients’ rights and identify opportunities for relief, critics contend that they often fail to ask for crucial context or they obscure important solutions, leaving patients to fill in the blanks.

For their part, the technology companies argue their current models are more sophisticated and address many of the shortcomings the critics point to.

OpenAI, the maker of ChatGPT, said its new models were “trained to hedge more, browse more and proactively ask for additional details when needed.” (The New York Times has sued OpenAI, claiming copyright infringement of news content. OpenAI denies the claims.)

‘We might actually win’

Davalos said she never received a bill from George Washington University Hospital in Washington, where she was treated and, according to the records, incorrectly listed as uninsured. Davalos, who is training to be a pastry chef but at the time was a journalist working for Bloomberg, feared the debt might derail the dream she and Kerr had of buying a home.

At first, the couple tried to dispute the bill with the hospital’s parent company, Universal Health Services, which manages its billing.

In response, Kerr said, the company removed a medication charge, but said that Davalos would have to pay the balance.

Last July, Kerr decided to upload Davalos’ billing and medical records to Claude. He asked the chatbot to identify whether they might have any further recourse.

Claude came up with several suggestions, Kerr said, including that the hospital might have failed to meet some legal requirements regarding debt and insurance.

The chatbot’s suggestions, Kerr said, encouraged him to think that he and Davalos might have grounds to keep fighting the hospital bill.

“For the first time,” he said, he felt that “we might actually win.”

Using many of the chatbot’s arguments, Kerr wrote a letter to executives at the hospital and Universal Health Services, urging them to drop the charges. Shortly after, the hospital waived the entire bill.

Although Kerr prevailed in the dispute, the chatbot’s advice may not have been entirely correct. Studies suggest that chatbots often err when answering legal questions.

After reviewing a summary of the dispute, Ariel Levinson-Waldman, the founding president of Tzedek DC, a nonprofit legal aid centre in Washington, said some of Claude’s analysis was correct. But the chatbot misunderstood the debt and insurance laws it was citing, and failed to inform the couple of other avenues that might be open to them.

For example, Levinson-Waldman said, some of the legal requirements that Claude suggested the hospital might not have complied with applied to insurers or third-party collectors, not to hospitals. But he could not draw further conclusions without reviewing more records, he said.

George Washington University and Universal Health Services said federal privacy laws limited what they could disclose about Davalos’ bill. But Susan LaRosa, a spokesperson for the hospital, acknowledged that when Davalos was admitted to the hospital, “a clerical error” had been made.

The hospital eliminated the debt once it was “made aware of the updated information,” LaRosa said, and Davalos’ credit was not affected.

Maria English, a spokesperson for Universal Health Services, noted that the debt was eliminated once “all information about the situation was received and communications with the patient were completed” – a resolution achieved only after Kerr escalated the dispute.

Anthropic, the maker of Claude, declined to comment on the chatbot’s performance.

Confusing advice

Getting useful answers from a chatbot often requires knowing how to give the chatbot proper instructions or having enough knowledge about health insurance to supply the right context, said Andrew Cohen, an attorney at the nonprofit firm Health Law Advocates. These requirements can leave many people at a disadvantage.

Michelle Maziar, 46, an immigration policy consultant in Atlanta, tried ChatGPT last July to help recover a US$3,140 (RM12,505) payment she was owed by her insurance company, Anthem.

In March 2023, Anthem had reversed its initial denial of her claim for coverage of fertility services, but the payment never came. She thought the chatbot might be able to help. But ChatGPT mostly proposed steps she had already tried, including asking to speak with a manager, or gave her advice that sounded like another dead end, such as contacting her state insurance commissioner.

“It was deflating,” Maziar said.

Drained and unable to afford a lawyer, she put her dispute on hold.

Maziar recently repeated her ChatGPT query for the Times. Nicole Broadhurst, a professional patient advocate who reviewed the transcript of the exchange, agreed with much of the chatbot’s guidance.

But, she said, the bot had missed an important step: asking questions that could help it determine who oversaw Maziar’s insurance plan. Because her former employer, the city of Atlanta, is self-insured, contacting the state insurance commissioner, as ChatGPT suggested, would not be helpful.

Janey Kiryluik, a spokesperson for Anthem, said an error had delayed Maziar’s payment, but that it had now been issued in full, which Maziar confirmed.

Broadhurst noted that chatbots could excel at translating jargon and doing grunt work like combing through policy documents for key words, but that they often lacked the judgment needed for complex cases.

Privacy risks

Patients who share their health records or bills with a chatbot risk exposing sensitive information to companies that have few legal guardrails about disclosure.

Unlike hospitals and insurers, chatbot companies are not bound by HIPAA, the health privacy law. They can change privacy policies at will, and information given to a chatbot is not legally protected the way a conversation with a doctor is, so it can be more easily turned over as part of discovery in a lawsuit or custody dispute.

OpenAI and Anthropic recently pledged they would not train their models on their users’ health information, and they would store this information separately. But both companies’ safeguards require opting in, and are currently restricted to paid subscribers or people on a waiting list, rather than the general public.

Jennifer King, a data privacy researcher at Stanford University, called the addition of safeguards an improvement, but she questioned why they were not applied across the board.

Improved, but still limited

In late 2024, Joel Bachar, 58, a server at a fine-dining restaurant in Charlotte, North Carolina, uploaded an insurance document to ChatGPT and asked why his health plan covered so little of his MRI scan.

The chatbot offered no solutions, he recalled – it was “a dead end.” He called his health plan to question the amount, but ultimately paid the US$1,170 (RM4,659) balance. Caroline Landree, a spokesperson for UnitedHealthcare, the insurer’s parent company, said that the claim was processed correctly and reflected the benefits in his policy.

When Bachar recently replicated his exchange with ChatGPT, the chatbot suggested potential options to lessen the bill, like asking for a discount to settle it quickly.

But the chatbot also showed its limits. Julien Nakache, chief executive of the bill-negotiation startup Granted Health – a specialised AI company that disputes bills and denials and that Bachar has hired for other cases – reviewed the exchange. In Bachar’s case, Nakache said, the chatbot claimed that the plan had applied the benefit correctly, but it had not gathered enough information to know this was the case, and it did not suggest checking the bill for errors.

An OpenAI representative said the company used doctors to help test its chatbots’ answers involving health care, including bills and insurance.

Other patients have also found that technology can hit a wall in dealing with an exhausting bureaucracy. After Kerr posted the details of his dispute with George Washington University Hospital on social media, he began helping others contest bills by using chatbots. Even so, some people gave up. Others are still in limbo, awaiting a response.

Success, Kerr said, often requires persistence, something “AI can’t solve for you.” – ©2026 The New York Times Company

This article originally appeared in The New York Times.


