At your next appointment, your doctor may have a new kind of assistant listening in: artificial intelligence.
Across the nation, AI programs are quietly recording doctor-patient conversations and turning them into draft medical notes. These tools, known as AI scribes, are becoming more popular, with about 30% of US doctors using them to document patient encounters.
For physicians, the appeal is obvious. Doctors spend roughly 2.3 hours on paperwork for every eight hours of patient care. Several trials have found that AI scribes can ease doctors’ documentation burden, reducing stress and burnout.
But what do AI scribes mean for patients? The hope is that they will help doctors pay better attention during visits; shorten waits for appointments; and produce more thorough notes, said Dr Paul Lukac, chief AI officer at UCLA Health. But there has been little research on how AI scribes are affecting patient care.
The popularity of scribes has also raised concerns about privacy, consent and accuracy. So we asked experts what patients should know.
What is actually being stored?
Every healthcare organisation and AI scribe company has its own rules, but in general, the audio and transcript of the appointment are only temporarily stored with the company – typically deleted within a few weeks or months. At UCLA Health, for instance, they are kept for 14 days, Lukac said, while Microsoft says its Dragon Copilot tool, a popular AI scribe, stores the files for up to 90 days.
Only the draft note remains in your electronic medical record, along with whatever edits your doctor may have made to it, said Dr Majid Afshar, a pulmonary and critical care doctor at the University of Wisconsin-Madison. In other words, no raw AI notes, transcripts or recordings should be kept on file long term.
Patients have the right to access their records and see all the notes, but they typically don’t have access to the audio recording and transcript, Afshar added.
You can always ask the doctor if you can record the appointment yourself. This is already common practice in some specialities like oncology, where a lot of complex information is often exchanged quickly, said Dr Michael Turken, an internal medicine physician at University of California San Francisco Health.
Will doctors ask for my consent?
In most states, only one party needs to consent to a recording, so doctors may not legally be required to tell you they’re using an AI scribe. But in practice, your doctor will most likely ask for your consent beforehand, said Sharona Hoffman, a professor of law and bioethics at Case Western Reserve University, given the importance of trust and transparency in medicine.
But because of time constraints, the consent process might be abbreviated. For example, doctors might ask only whether you mind them using a tool to help take notes, Turken said, rather than clearly disclosing that they’re creating an audio recording of the appointment.
However, you can always say no or ask to pause the recording during sensitive parts of an appointment. Doctors know that some patients will hold back if they know they’re being recorded, and they do not want that.
“If you have any feeling whatsoever that you’re not going to feel comfortable saying everything you need to say, tell the doctor not to record,” Turken said.
Should I be worried about privacy?
While AI scribe companies have access to the visit recording, these companies are typically bound by HIPAA, the federal health privacy law, through a contract with the health system. Some contracts may allow your data to be anonymised and then used to train the AI model, but in general, most health systems will have very strong security protections in place for data collected and generated by AI scribes, Turken said.
Health information is always a target for hackers because it is sensitive and hard to replace. But the information collected and generated by AI scribes isn’t necessarily any more exposed than other medical data, Hoffman said.
The privacy risk is greater if doctors use AI scribes without a formal contract – for example, choosing a tool outside a hospital’s approved system or using one when their practice hasn’t approved any scribe at all, Lukac said. In these cases, the scribe company would be bound only by its own terms of service, not HIPAA.
If you’re worried about privacy, Lukac recommends asking your doctors if they have a HIPAA contract with the AI scribe company – officially called a business associate agreement – before they start recording.
Should I check the AI-generated note?
The AI scribe’s note isn’t supposed to slide into your medical record untouched; the clinician has to review and sign off on it, taking responsibility for its accuracy, Hoffman said.
That matters because errors do happen. When transcribing, these tools might miss details or mix up who said what, Turken said, especially if multiple people are speaking.
Accents and dialects can be a particular weak spot for scribes: in a small 2024 study, researchers tested four AI transcription programs, and all had lower accuracy for Black patients than for white patients. There’s also evidence that transcription errors are more common for non-native English speakers.
And even when the transcript is accurate, the AI tool can omit relevant information and introduce inaccuracies when drafting the note. In a test of five different AI scribes during simulated patient encounters, researchers found that, on average, every note contained three potentially serious errors. In a similar trial Lukac led at UCLA, where AI scribes were used in more than 15,000 visits, doctors reported clinically significant inaccuracies “occasionally.”
For patients, it’s probably best to check all the notes in your medical record for accuracy, since human-written notes can contain errors too, Turken said. – ©2026 The New York Times Company
This article originally appeared in The New York Times.
