Chapter 7

The Radiologist Who Multiplied

"The relation between what we see and what we know is never settled." — John Berger, Ways of Seeing

The Scan

There is a quality of attention that only develops after thousands of reads. It lives somewhere behind the eyes, below conscious analysis — a pattern-recognition engine built from ten thousand prior scans, refined across late nights and early mornings and the particular silence of a reading room at 3 AM when the only light comes from the screen.

I have been training that engine for fifteen years.

On a Tuesday morning in early 2026, a stroke alert fired and I opened a new study on my workstation. The patient was a fifty-eight-year-old woman who had arrived in the emergency department three hours earlier with sudden left-sided weakness. The clinical question was the one that has defined my specialty: Is this a stroke? And if so, what kind, and how much brain is still salvageable?

I scrolled through the diffusion-weighted images. There — a region of restricted diffusion in the right middle cerebral artery territory, bright against the surrounding tissue like a thumbprint pressed into fog. I recognized it the way a musician recognizes a chord: immediately, without deliberation, the product of every prior scan that had trained my visual cortex to distinguish signal from noise. Acute ischemic stroke. Right MCA. I estimated the penumbral volume on the perfusion maps — the borderland between dead brain and dying brain, the tissue that might still be saved if we got blood flowing again in time.

I have done this thousands of times. Each time, it takes approximately four minutes. Each time, it matters.

On this morning, after I completed my read, I did something different. I fed the same scan into a foundation model trained on brain MRIs across more than fifty neurological diagnoses — the system our department had been piloting for three weeks, the descendant of architectures that first proved themselves in chest X-rays and retinal scans and had now arrived at the organ that defines my professional life.

The system returned its results in eleven seconds.

It found the stroke. Same territory, same classification. But it also found something I had not mentioned in my read — a cluster of chronic white matter hyperintensities in the periventricular regions, graded by severity, with a note: Pattern consistent with small vessel disease, Fazekas grade 2. Progression risk elevated given patient demographics. It quantified the penumbral volume with a precision I can only approximate. It flagged an incidental finding in the posterior fossa — a small, asymptomatic meningioma I had, in fact, noticed but deprioritized in the acute setting.

Nothing the system found was wrong. Nothing I found was wrong, either. But the system found more, in less time, with a quantitative precision that my trained eye — experienced, calibrated, irreplaceable in a thousand clinical moments — cannot match.

I sat in the dark room, the screen still glowing, and felt something I did not expect. Not anger. Not fear. Something more vertiginous — the particular disorientation of watching your core competency replicated by something that learned it differently than you did, learned it on a scale you cannot comprehend, and executes it with a consistency your biology cannot sustain.

This is the moment. Not the one the headlines describe — AI Beats Radiologist — which is reductive and misses the point entirely. The real moment is quieter, more private, more disquieting. It is the moment a physician watches a machine perform the cognitive act that defines their professional identity, and asks: If the machine can do this, what am I for?

The Automation of Perception

The system that read my scan is not an anomaly. It is the leading edge of a transformation that, by early 2026, has moved from research curiosity to clinical infrastructure.

The numbers tell part of the story. The FDA has now cleared more than one thousand AI-enabled radiology devices — nearly eighty percent of all AI medical devices authorized for clinical use. Foundation models trained on millions of images can read brain MRIs across fifty neurological conditions, flagging emergencies in seconds. At Northwestern, a prospective study of nearly twenty-four thousand radiographs showed that generative AI-assisted reporting was 15.5 percent faster than conventional reads, with no measurable difference in clinical accuracy or textual quality (PMID 40471579). Autonomous triage platforms operate in close to two thousand American hospitals, routing stroke alerts so that patients reach treatment sixty-six minutes faster than under conventional workflows.

And Curtis Langlotz — the Stanford radiologist whose careful empiricism has made him the field's most trusted voice on this question — released the first rigorous quantitative model of AI's workforce impact as a preprint in late 2025. His projection: a thirty-three percent reduction in radiologist working hours within five years. His conclusion, which deserves to be read alongside the number: radiologist job loss is unlikely, because imaging volumes continue to grow faster than the workforce, and the hours freed by AI will be consumed by rising demand (PMID 41480022; preprint, not yet peer-reviewed).

But numbers alone are photographs, and you know by now that I am interested in movies.

The movie of medical imaging AI in 2026 looks like this: a progression from second reader — the AI that checks the radiologist's work, catching what was missed — to first reader — the AI that triages, prioritizes, and pre-reads the scan before a human eye ever sees it. In high-volume imaging centers, this shift is already operational. The AI scans the queue, identifies the cases that demand urgent attention — the stroke in evolution, the aortic dissection, the tension pneumothorax — and routes them to the front of the worklist. Cases the AI reads as normal are flagged for confirmatory review, not primary interpretation. The human reads the hard cases. The machine handles the haystack; the human handles the needles.

This is Level 3 on the autonomy scale we established in Chapter 5 — conditional autonomy. The machine acts independently within defined boundaries, and the human remains available to intervene. In some narrow applications — diabetic retinopathy screening, where the FDA cleared an autonomous AI diagnostic system as early as 2018 — radiology has already reached Level 4: the machine makes the diagnosis without human review. A patient sits in front of a camera, the AI reads the retinal image, and a result is delivered. No radiologist. No ophthalmologist. No human eyes between the patient's retina and the diagnosis.

At the 2025 annual meeting of the Radiological Society of North America, a formal debate asked whether AI should autonomously interpret radiographs. In an informal poll conducted by the RSNA Daily Bulletin, sixty-eight percent of respondents said no. The consensus estimate for full autonomy in imaging: twenty years away. The technology is closer than the profession believes. The profession is more cautious than the technology demands. That gap is where the next decade of medical imaging lives.

Radiology is farther along the autonomy spectrum than any other medical specialty. The reason is deceptively simple: radiology, more than any other branch of medicine, is a perceptual discipline. The radiologist's primary cognitive act is looking — recognizing patterns in images, distinguishing normal from abnormal, benign from malignant, urgent from incidental. This is precisely the cognitive function that deep learning was built to automate. Convolutional neural networks were not designed for radiology; they were designed for image recognition. Radiology simply happens to be a medical specialty where image recognition is the core clinical act.

When the core of a profession is pattern recognition, and the machine is a pattern recognition engine of superhuman scale, the professional question becomes existential.

The Identity Crisis

You have heard the prediction. Every physician who works with imaging has heard it — the one about AI making radiologists obsolete within a decade, about medical schools shutting down training programs, about the dark room emptying. The prediction was made in 2016 by one of the founders of deep learning. By 2024, he walked it back. "I spoke too broadly," he told the New York Times. What he said instead was closer to what Curtis Langlotz — now president of the Radiological Society of North America — has been arguing for years: that radiologists who use AI will replace radiologists who don't.

But the retraction is less interesting than what actually happened. And I know what actually happened because I have lived through it.

I did not train as a radiologist. I trained as a neurologist — a vascular neurologist, specifically, which means I have spent my career at the intersection of brain images and clinical decisions. I am not the person who generates the official radiology report. I am the person who looks at the images at 3 AM because the treatment decision cannot wait for the report, the person who scrolls through diffusion-weighted sequences and perfusion maps while a patient is deteriorating in the next room, the person who has internalized enough imaging expertise to act on what I see but who still relies on the neuroradiologist for the definitive read. I occupy the border between seeing and deciding — and that border is exactly where AI's disruption is most intimate.

What I can tell you from that border is this: the identity crisis is real, and it is not limited to radiologists.

Consider what has changed. A decade ago, I was the sole interpreter of the acute images for my patients. The official radiology report arrived later — minutes or hours, depending on volume. My reading was the one that determined whether we administered thrombolytics, whether we called the interventional team, whether a family heard that treatment was possible or that the window had closed. My eye was the bottleneck, in the best sense: the trained filter through which raw imaging data passed before becoming a clinical decision. That act of seeing was my value in the acute moment.

Now consider the neurologist — consider me — in a department where AI pre-reads every scan. The routine strokes are identified before I open the study. The perfusion maps are quantified to a precision I cannot match. The AI has already triaged the queue, flagging the cases that demand my attention and deprioritizing those that do not. My daily practice shifts from reading all images to adjudicating the difficult ones and supervising the machine.

This is not, on its face, a bad outcome. Freed from the repetitive perceptual labor of emergency triage reads — work that must be done at the speed the clinical situation demands, and that leaves no time for the contemplation ambiguous presentations require — I could become a better clinician. More time per patient. More attention to the atypical case. More capacity for clinical integration — the synthesis of imaging with patient history, examination, and the thousand contextual factors that distinguish a diagnosis from an algorithmic output.

But identity is not purely functional. It is also psychological. When you have trained for a decade to see something others cannot, and a machine replicates that perception in eleven seconds, the challenge is not to your workflow. It is to your self. If the machine can see what I see, then what I see is not special. If what I see is not special, then what am I?

This is not hyperbole. It is the lived texture of a profession in transition, and it echoes a pattern older than computing. When automated looms replaced handweavers, the loss was not merely economic — it was the dissolution of an identity. The weaver was not just someone who produced cloth but someone whose hands knew the thread, whose body contained a knowledge that the machine rendered not obsolete but common. Jensen Huang, the CEO of NVIDIA — the company whose chips power every medical AI system in existence — articulated the distinction on a December 2025 podcast more precisely than most physicians would: "The purpose of a radiologist is to diagnose disease, not to study the image. The image studying is simply a task in service of diagnosing the disease" (The Joe Rogan Experience #2422, December 3, 2025). He is right. But knowing that the image is merely a task does not undo the decade you spent making it your calling.

The autonomy levels from Chapter 5 provide the frame. At Level 2, my identity is intact — the AI is a tool I use. At Level 3, it begins to shift — the AI is a colleague that does much of the work independently. At Level 4, I become a supervisor, and at Level 5, if it ever arrives, a historical artifact. Imaging sits at Level 3, edging into Level 4 in narrow domains. The identity crisis is not hypothetical. It is the current lived reality of a generation of physicians who trained for one relationship with images and are practicing another.

The Cinematographer

Here is where the photograph-to-movie metaphor — the spine of this book — finds what I believe is its most revealing expression.

Radiology was always the specialty of the photograph. Literally. A radiograph is a photograph — a frozen image of the body's interior at a single moment in time. A chest X-ray captures the lungs during one held breath. A CT scan captures the abdomen in one pass of the scanner. An MRI captures the brain's water molecules in one magnetic configuration. Each is a photograph of extraordinary depth and resolution, but a photograph nonetheless.

The radiologist's art was reading these photographs. Finding in the frozen image the evidence of dynamic disease — the tumor that is growing, the vessel that is narrowing, the bone that is weakening. The radiologist inferred the movie from the photograph: This mass has irregular margins, which suggests aggressive growth. This calcification pattern suggests a process that has been evolving for years. The interpretation was cinematic, but the data was photographic. The radiologist was, in effect, a still photographer who had learned to imagine the movie that produced each frame.

AI changes this. Not incrementally, but categorically.

Radiomics — the extraction of quantitative features from medical images — transforms the photograph into data. Not the kind of data a human can read at a glance, but the kind a machine can process: thousands of mathematical features per image, capturing texture, shape, intensity distribution, spatial relationships, and patterns at a scale below the threshold of human perception. A single CT scan, to my eye, contains perhaps a dozen diagnostically relevant features. To a radiomics algorithm, it contains thousands.

But the real transformation is temporal. When radiomics features are tracked across sequential imaging studies — the scan at diagnosis, the scan after three months of treatment, the scan at six months, at twelve — the machine constructs something no human radiologist can: a quantitative movie of the disease's trajectory. Not "the tumor looks smaller" but "the tumor's entropy has decreased by 14% while its surface regularity has increased by 22%, a pattern that in the training cohort correlated with sustained treatment response in 78% of cases." Not a qualitative impression but a quantitative narrative — a movie assembled from thousands of features across multiple time points, revealing trajectories that no human eye could compute.
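For readers who want to see what a radiomic texture feature actually is, the sketch below makes one concrete. It is a toy illustration using only NumPy, not a real radiomics pipeline (production tools such as PyRadiomics compute thousands of standardized features): it computes the Shannon entropy of the intensity histogram for two synthetic serial "scans" and reports the change between time points, the same kind of quantitative delta described above.

```python
import numpy as np

def intensity_entropy(image, bins=32):
    """Shannon entropy (in bits) of an image's intensity histogram —
    one simple example of a radiomic texture feature."""
    hist, _ = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))

# Two synthetic 64x64 "scans" standing in for serial studies of one lesion:
# a heterogeneous baseline and a more uniform follow-up.
rng = np.random.default_rng(0)
baseline = rng.integers(0, 256, size=(64, 64))               # broad intensity spread
followup = rng.normal(128, 10, size=(64, 64)).clip(0, 255)   # narrow spread

e0 = intensity_entropy(baseline)
e1 = intensity_entropy(followup)
change = 100 * (e1 - e0) / e0
print(f"baseline entropy {e0:.2f}, follow-up {e1:.2f}, change {change:+.1f}%")
```

Tracked across many features and many time points, deltas like this are the raw material of the quantitative movie: the follow-up image here is visibly more homogeneous, and its entropy drops accordingly.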

This is the movie that the photograph-to-movie metaphor has been building toward across every chapter. In Chapter 1, I told you that AI gives physicians the movie — the temporal patterns invisible in the snapshot view. In diagnosis, the movie was assembled from lab values and vital signs. In drug discovery, from molecular trajectories through the body. In surgery, the metaphor inverted — the machine decomposed the cinematic reality of the operating field into photographic precision.

In imaging, the metaphor completes its arc. The photograph becomes a movie. The static image, which was always the radiologist's medium, is revealed to contain temporal information that was always present but invisible to human perception — patterns that predict, not just describe. The mass that looks unchanged to the eye has, in its radiomic features, already begun the shift toward progression. The scan that looks worse — more enhancement, more edema — is, in its deep quantitative structure, showing the early signatures of treatment response. The photograph lied. The movie tells the truth.

The physician who disappears from the reading room does not vanish. They reappear as something new: a cinematographer. Not someone who reads a single frame and declares its meaning, but someone who interprets narrative — the story of a disease unfolding across time, rendered visible by a machine that can read what the eye cannot. The clinician's judgment — their knowledge of the patient, the disease, the treatment context — becomes not less important but differently important. The machine provides the temporal data. The human provides the intelligence to know what the data means for this patient, in this moment, with this constellation of values and fears and goals.

The physician who reads single images is disappearing. The physician who reads trajectories is being born.

The Equity Edge

There is one more dimension to this transformation, and it may be the most consequential.

Approximately two-thirds of the world's population lacks access to diagnostic imaging interpreted by a trained specialist. In sub-Saharan Africa, there are fewer than one radiologist per million people. In rural India, a patient who needs an MRI may travel hundreds of kilometers to reach one — if they can afford the journey and the scan. The global distribution of imaging expertise is not merely unequal. It is a structure of deprivation so extreme that billions of people live and die with diseases that a scan could have detected, treated, or managed, had the scan and its interpreter been available.

AI does not need a dark room. AI does not need a decade of residency. AI needs a device and a connection.

The 2026 shift toward point-of-care AI — diagnostic algorithms running on portable ultrasound devices, smartphone-attached imaging tools, low-cost X-ray units paired with cloud-based interpretation — represents the equity principle in its most tangible form. A health worker in a rural clinic in Malawi, with a handheld ultrasound and an AI model trained on millions of images, can screen for conditions that previously required a referral to a distant hospital and a specialist who might not exist within a hundred kilometers. Lung pathology. Cardiac function. Obstetric emergencies. Fractures. The machine does not replace the radiologist in Malawi — there was no radiologist in Malawi to replace. It provides a capacity that never existed.

But equity is not just about access. It is about what happens when the access arrives.

A 2025 preprint from Stanford's Center for AI in Medicine and Imaging demonstrated that synthetic training data — five hundred sixty-five thousand demographically balanced chest radiographs generated by a diffusion model — could reduce underdiagnosis disparities by 19.3 percent across intersectional subgroups of sex, age, and race (PMID 41356360; preprint, not yet peer-reviewed). The technology that extends imaging to underserved populations is also, when designed with intention, the technology that corrects the biases embedded in existing imaging datasets.

And yet: a 2024 analysis in Nature Medicine found that fairness in medical imaging AI does not generalize. Debiasing approaches validated on one dataset fail when deployed across different institutions, scanners, and populations — and models that encode fewer demographic shortcuts often perform best in real-world deployment (PMID 38942996). The pattern holds in competition settings: in a national COVID-19 severity challenge, the nine best-performing models all showed subgroup disparities — even the winner, which achieved the highest overall accuracy, still disadvantaged three demographic subgroups on other fairness metrics (PMID 41064474).

This is the tension the equity conversation must hold, honestly, without resolving it into false comfort. The technology can democratize diagnosis and standardize its errors. The model that provides the first expert opinion a patient in Rajasthan has ever received may also carry biases it inherited from datasets in Boston. The promise is real. The risk is structural. And the only responsible path is to build systems that are evaluated where they are deployed, not where they were trained.

This is the argument that should be at the center of every conversation about AI in radiology, and it almost never is. The debate in wealthy nations focuses on whether the machine will take the radiologist's job. Meanwhile, in the places where most of humanity lives, the question is not whether the machine will replace the human expert but whether the machine will provide the first expert opinion the patient has ever received.

The radiologist who disappears from the reading room in Boston reappears — not as the same person, but as the same function — in a clinic in rural Rajasthan, in a mobile health unit in the Democratic Republic of Congo, in a community hospital in the Peruvian highlands. The knowledge that took a decade of residency to build, compressed into a model that runs on a device that costs less than a stethoscope.

The radiologist does not disappear. The radiologist multiplies.

The Liberation

Let me return to the morning of the eleven-second read.

I said I felt something unexpected — a vertigo, a disorientation. Let me tell you what happened next, because the rest of the story matters more than the moment that makes the headline.

After the AI returned its results, I did something I had not done in months. I closed the workstation, walked out of the reading room, and went down to the emergency department. Not to double-check the findings — the acute treatment decisions had already been made. I went because I had time. The AI had triaged the overnight imaging queue before I arrived: fourteen studies pre-read, three flagged for urgent attention, the rest queued for confirmatory review. The two hours of perceptual labor that would normally have consumed my morning — scrolling through routine studies, dictating normal reads, clearing a queue that grows faster than I can empty it — had been compressed into minutes.

So I went to see the patient. The woman with the right MCA stroke, the fifty-eight-year-old whose tissue plasminogen activator had been running since before dawn.

She was awake. Her left arm was still weak — she could raise it against gravity but not hold it there, a finding I confirmed with a bedside exam that took three minutes and required nothing more sophisticated than my hands and my attention. Her daughter was sitting in the chair by the bed, still wearing the coat she had thrown on when the ambulance call woke her.

I pulled a chair next to the bed and explained what had happened in her mother's brain. Not in the language of radiology — not restricted diffusion in the right MCA territory with viable penumbra — but in the language I was trained to use before I was trained to read scans. I drew a rough diagram of the blood vessels on the back of a progress note. I pointed to where the damage was and where the treatment had saved territory. I told her the truth: the brain that had been at risk was largely intact, but the weeks ahead would be hard, and recovery would demand patience neither of them felt ready for.

The daughter asked whether her mother would be able to cook again. It was a specific question — the kind that matters to the person asking it far more than an NIHSS score matters to a neurologist — and I answered it honestly. Probably. With rehabilitation and time. The fine motor control of the left hand would be the last to return, but the pattern I saw in the imaging and the exam was one I had seen recover before.

I stayed for twenty minutes. In the arithmetic of a stroke neurologist's morning, twenty minutes at a single bedside is a luxury that the reading queue rarely permits.

This is the liberation, and I need to be precise about what I mean, because it is easy to sentimentalize and I want to do the opposite.

The machine did not give me the ability to care about my patient. I have always cared. What the machine gave me was time — and time, in a healthcare system that has systematically stripped physicians of it, is not a soft benefit. It is the material substrate of everything we claim medicine is supposed to be. When I spend twenty minutes at a bedside explaining a stroke to a frightened daughter, I am practicing medicine. When I spend those same twenty minutes scrolling through normal CT heads in a dark room, I am doing clerical work that happens to require a medical degree. The machine does not replace the medicine. It replaces the clerical work that has been masquerading as medicine for decades — and it returns to me the thing that every physician I know went into this profession to do.

Eleven seconds. That is what the foundation model needed to read a scan that takes me four minutes. Those three minutes and forty-nine seconds, multiplied across every study in a day, across every day in a career, are not computational savings. They are a physician's life, returned to the work that only a physician can do.

If the machine can do this, what am I for?

I am for the daughter in the chair. I am for the twenty minutes. I am for the question about cooking — the one no algorithm will ever know to ask and no probability score will ever know how to answer.

We have met such a patient before. Her name was Maria.

The physician who disappears from the reading room does not vanish into obsolescence. They reappear at the bedside — returned to the art that the computational burden of modern medicine had stolen from them, liberated by the very machine that seemed to threaten their existence.

The machine sees the image. The physician sees the patient. And in that division — clear-eyed, principled, equitable — medicine does not lose its soul. It recovers it.


The imaging chapter ends, but the book's gaze now turns inward — from the body that can be scanned to the mind that cannot. In the next chapter, we follow AI into the most intimate and least understood domain in all of medicine: the landscape of mental illness, where the photographs are subjective, the movies are invisible, and the machines have just produced results that no one quite knows how to interpret.