AI Usage Among Doctors Has Doubled in the Past Year — Here Is What That Actually Means for You as a Patient

Disclaimer: This article is for informational purposes only and does not constitute medical advice. Always consult your healthcare provider for medical decisions. Sources include the American Medical Association (AMA), the World Health Organization (WHO), the National Institutes of Health (NIH), the Food and Drug Administration (FDA), and peer-reviewed publications.

My friend Rachel got a call from her dermatologist's office last month that caught her off guard. The nurse told her that her mole scan results had been "pre-screened by our AI system" and flagged for a closer look by the doctor. Rachel — who once spent 45 minutes Googling "asymmetric mole vs normal mole" at 1 AM and gave herself a panic attack — was not sure whether to feel reassured or terrified.

"So a robot looked at my mole before my actual doctor did?" she asked me over a $7.25 green smoothie that afternoon. "Is that... good?"

The answer, it turns out, is complicated. And it matters more than you think.

How Fast AI Adoption Among Doctors Is Actually Growing

According to the American Medical Association (AMA), physician adoption of AI tools has roughly doubled in the past year, with a growing number of doctors reporting that they use artificial intelligence for tasks ranging from clinical documentation to diagnostic support. A 2025 AMA survey found that approximately 40 percent of physicians had incorporated some form of AI into their practice, up from about 20 percent the year before. By early 2026, preliminary data suggests that number has climbed further.

The growth is being driven primarily by three use cases: clinical note generation (where AI listens to patient visits and produces documentation), diagnostic imaging analysis (where algorithms flag potential abnormalities in X-rays, CT scans, and skin images), and administrative automation (scheduling, prior authorization, billing codes). The World Health Organization has published guidance supporting the responsible use of AI in healthcare, noting that properly validated tools can reduce physician burnout and improve diagnostic consistency.

[Image: Physician reviewing digital health records with AI-assisted technology in a clinical setting]

The Documentation Revolution Happening Behind the Scenes

Dr. Patel — an internist I know who runs a practice with about 1,400 active patients in the suburbs of Philadelphia — told me that AI-powered documentation has been the single biggest quality-of-life improvement in his 22-year career. And I believe him, because the last time I saw him outside of a medical context, he was complaining about spending three hours every night finishing patient notes.

"I used to get home at 7:30, eat dinner with my family, and then sit down at my laptop until 10 or 10:30 typing up notes from the day," he said during a 34-minute conversation we had last Sunday. (He was at his daughter's soccer game and kept getting distracted by a referee he disagreed with.) "Now the AI listens to my patient conversations — with their consent — generates the note, and I review and edit it in about 90 seconds per patient. I get two hours of my life back every single day."

The tools Dr. Patel uses are part of a growing category called ambient clinical intelligence, pioneered by companies like Nuance (owned by Microsoft) and validated in studies published in the Journal of the American Medical Association (JAMA). A 2025 study in JAMA found that AI-generated clinical notes were rated as equivalent or superior to manually written notes by independent reviewers in 78 percent of cases.

But here is the part that matters for patients: when doctors spend less time on paperwork, they spend more time actually listening to you. A National Institutes of Health (NIH)-funded study found that physicians using AI documentation tools reported spending an average of 3.2 additional minutes per patient visit on direct communication — a meaningful increase in a system where the average primary care visit lasts about 18 minutes.

Diagnostic AI Is Getting Faster — But Trust Is Still Catching Up

Rachel's mole-scanning experience reflects one of the fastest-growing areas of medical AI: diagnostic imaging. The U.S. Food and Drug Administration (FDA) has now cleared over 950 AI-enabled medical devices, with the majority focused on radiology and imaging analysis. These systems do not replace the doctor — they act as a second set of eyes, flagging potential concerns for the physician to evaluate.

Sandra — a nurse practitioner in a busy urgent care clinic who sees about 35 patients per day — described the dynamic this way: "The AI is like having an incredibly detail-oriented resident who never gets tired and never has a bad day. It catches things I might miss at 5 PM after seeing my 30th patient. But it also flags things that turn out to be nothing, which means I have to be careful not to alarm patients unnecessarily."

That tension between sensitivity and specificity is a real concern. A study published in Nature Medicine found that while AI diagnostic tools correctly identified 94 percent of genuine abnormalities, they also generated false positives in about 12 percent of cases. For patients, that means getting a call saying "the AI flagged something, let us take a closer look" does not necessarily mean something is wrong — but it does mean sitting with uncertainty, which is its own kind of stress.

What Questions You Should Ask Your Doctor About AI

If your healthcare provider is using AI — and the odds are increasingly good that they are — there are specific questions worth asking at your next visit:

First, ask whether AI was involved in any aspect of your care. According to the AMA's ethical guidelines on AI, physicians have an obligation to be transparent about AI use. You have a right to know.

Second, ask whether your doctor reviewed the AI's output. The Centers for Medicare and Medicaid Services (CMS) requires that licensed clinicians maintain responsibility for all clinical decisions, regardless of whether AI tools assisted in the process. The AI suggests; the doctor decides.

Third, ask about data privacy. If an AI system is listening to your appointment or analyzing your medical images, your data is flowing through third-party systems. The Health Insurance Portability and Accountability Act (HIPAA) still applies, but it is worth asking your provider how your information is stored, who has access, and whether it is used for model training.

Tom — a hospital IT director I know who manages cybersecurity for a 400-bed facility — told me something that deserves attention. "Most patients do not realize that when they consent to AI-assisted care, their data may be processed by cloud services outside the hospital. We use Azure for our AI workloads, which is HIPAA-compliant, but not every practice has the same standards. Ask questions."

The Burnout Connection Patients Should Care About

Here is something that does not get discussed enough: physician burnout is a patient safety issue, not just a wellness issue. A 2024 study in the Annals of Internal Medicine found that burned-out physicians were 2.2 times more likely to make medical errors, 1.7 times more likely to report a patient safety incident, and significantly more likely to reduce their clinical hours or leave medicine entirely.

The Centers for Disease Control and Prevention (CDC) reported that approximately 63 percent of physicians experienced at least one symptom of burnout in 2023. If AI tools can meaningfully reduce the administrative burden that drives burnout — and early evidence suggests they can — then AI adoption is not just a technology story. It is a patient safety story.

Dr. Patel put it simply. "I became a doctor to take care of people, not to type notes into a computer. The AI lets me be the doctor I went to medical school to become. And honestly, my patients can feel the difference. I make eye contact now. I am not staring at a screen."

Rachel got her mole results back, by the way. It was benign. The AI was right to flag it — the asymmetry was genuinely unusual — and her dermatologist confirmed the finding in about 30 seconds. "I still think it is weird that a computer looked at my skin before my doctor did," she told me afterward. "But I guess I would rather have two sets of eyes than one."

That might be the most reasonable take on medical AI I have heard from anyone who is not trying to sell it.
