Reality Check

Is the Book Right?

Books about the future have an expiration problem. The moment they're published, reality starts testing every claim. Most authors hide from this. This page does the opposite — it tracks the book's core assertions against emerging evidence, updated as the world catches up (or doesn't).

Each claim is extracted from the book's argument and rated against current evidence. A challenged claim is not a failure — it's the most interesting kind of data.

🟢 Supported: 11 claims
🟡 Evolving: 4 claims
🔴 Challenged: 0 claims

Chapter 1. Welcome to the Future: Why AI Will Redefine Medicine

🟢 AI will redefine medicine — not merely augment existing workflows, but fundamentally transform what diagnosis, treatment, and clinical practice mean.

As of early 2026, the FDA has authorized more than 1,000 AI-enabled medical devices. AI-designed drugs are entering human trials. Autonomous diagnostic systems operate without a physician in the loop in diabetic retinopathy screening. The transformation is structural, not incremental.

Chapter 3. The Diagnostic Revolution

🟢 AI diagnostic systems will match or exceed specialist-level accuracy in three or more imaging domains by 2027.

Already surpassed in diabetic retinopathy (autonomous, FDA-cleared), lung nodule detection, and breast cancer screening as of 2026. Early 2026 reports describe brain MRI foundation models reading 50+ neurological conditions with specialist-level accuracy.

🟢 AI triage systems like Viz.ai will become standard of care in acute stroke workflows, significantly reducing time from imaging to treatment decision.

Viz.ai is deployed in 1,200+ hospitals across the US as of 2025. Multiple studies document 20-30 minute reductions in door-to-treatment times. CMS has established a New Technology Add-on Payment for Viz.ai, signaling payer acceptance.

Chapter 4. When the Machine Kills: The Anatomy of AI Failure in Medicine

🟡 The most dangerous AI failures in medicine will not come from a single system failing, but from multiple systems succeeding narrowly — cascading green lights that create a false picture of safety no individual human can challenge.

No large-scale cascade failure matching the book's scenario has been publicly documented. However, interoperability remains poor — most hospital AI systems operate in silos. The FDA's 2025 guidance on AI lifecycle management acknowledges multi-system interaction risks but provides no concrete framework.

Chapter 5. The Surgeon and the Machine: When AI Gets a Body

🟢 Fully autonomous surgical robots will successfully complete soft-tissue procedures (Level 4+ autonomy) in animal models by 2026, with human trials beginning within five years.

The STAR system at Johns Hopkins performed autonomous laparoscopic surgery in porcine models in 2025, completing all 17 sequential surgical tasks without human intervention. The timeline for human trials remains uncertain — regulatory and liability frameworks are the bottleneck, not technical capability.

Chapter 6. The Molecule as Patient: AI Reimagines Drug Discovery

🟢 AI-designed drugs will demonstrate Phase I success rates of 80-90% (vs. the industry average of 40-65%), but Phase II success rates will remain near the 40% industry average — the '80/40 Paradox.'

Insilico Medicine's AI-designed drug INS018_055 completed Phase II for idiopathic pulmonary fibrosis. Isomorphic Labs entered human trials for AI-generated oncology compounds in 2026. Phase I rates for AI-native biotechs track at 80%+. Phase II data remains limited but early signals are consistent with the paradox.

🟢 Self-driving laboratories — autonomous facilities that run the design-make-test-learn cycle without human hands — will compress drug discovery iteration from months to days.

The NVIDIA/Eli Lilly collaboration and similar facilities are operational as of 2025-2026. Multiple pharmaceutical companies have announced autonomous research lab investments. The compression is real in early-stage hit identification; it remains to be seen whether it translates to faster end-to-end drug approval timelines.

Chapter 7. The Radiologist Who Disappeared

🟢 Radiologists will not be replaced by AI. Instead, the specialty will transform: routine pattern recognition will be automated, freeing radiologists to become 'information specialists' who synthesize clinical context across modalities.

Radiology residency applications remain strong through 2026. No radiology department has eliminated positions due to AI. ACR guidelines increasingly position AI as workflow augmentation. However, the transformation of the role — toward synthesis and away from pure image reading — is still early.

Chapter 8. The Therapist in Your Pocket: AI and Mental Health

🟡 AI chatbots will achieve clinical-grade therapeutic outcomes — matching or exceeding first-line antidepressant efficacy — in randomized controlled trials.

The Dartmouth Therabot trial (NEJM AI, 2025) reported a 51% reduction in depression symptoms — exceeding typical SSRI trial outcomes. However, this is a single trial. Replication is needed. Meanwhile, Woebot Health shut down in 2025 despite prior clinical evidence, demonstrating that clinical efficacy alone does not ensure market viability.

🟢 AI mental health tools will become the de facto first point of contact for mental healthcare in underserved areas — not because they are ideal, but because the alternative is nothing.

150+ million Americans live in mental health shortage areas. Wait times exceed 3 months in rural regions. AI chatbot usage continues to grow. The FDA has authorized zero mental health AI devices, but millions are using unregulated tools regardless. Access pressure is the driver, not clinical preference.

Chapter 9. The Algorithm Has No Conscience (And That's the Point)

🟢 Regulatory frameworks for clinical AI will lag deployment by 3-5 years, creating a sustained period where patients interact with AI systems that no regulator has authorized for their use case.

The FDA's November 2025 advisory committee on generative AI in mental health produced recommendations for further study — no authorizations. Meanwhile, millions use LLM-based health chatbots daily. The EU AI Act's medical device provisions don't take full effect until 2027. The gap between deployment and regulation is widening, not narrowing.

🟢 Algorithmic bias in clinical AI will persist as a structural problem — not because it is technically unsolvable, but because the training data reflects historical inequities that predate the algorithms.

Pulse oximetry racial bias (documented in NEJM 2020) took 40 years to be widely acknowledged. Dermatology AI systems continue to underperform on darker skin tones. The problem is upstream — in data collection, in who participates in clinical trials, in which populations are studied. Technical debiasing is necessary but insufficient.

Chapter 10. The Digital Twin Paradox

🟡 Patient-specific digital twins — computational models that simulate individual disease progression — will be used in clinical decision-making for cardiac and oncological care by 2028.

Cleveland Clinic and Siemens Healthineers are piloting cardiac digital twins. The FDA has approved computational modeling for some device evaluations. However, clinical adoption for individual patient decision-making remains pre-commercial. The 2028 timeline is ambitious but plausible for narrow use cases.

🟡 Predictive medicine will create a new category of ethical dilemma: what to do with accurate predictions about a body that has not yet failed. The 'right not to know' will become a contested legal and clinical principle.

Genetic testing has already surfaced this tension (BRCA, Huntington's). Digital twins and longitudinal AI prediction will intensify it by making predictive medicine continuous rather than one-time. No legal framework currently addresses the duty to disclose AI-generated health predictions. The ethical infrastructure is being built in real time.

Chapter 11. The Last Photograph

🟢 The physician's role will not shrink — it will concentrate. As AI absorbs computational tasks, the irreducible core of medicine will be the human relationship: presence, empathy, the quality of silence before the first word.

This is the book's central thesis. Early evidence is consistent with it: physician demand has not decreased despite AI deployment, and patient satisfaction studies consistently rank communication and empathy above diagnostic accuracy. The open question is whether medical training and health systems will invest in these human skills in proportion to their investment in AI.

Methodology

Claims are extracted from the book's core arguments — the specific, testable assertions that distinguish this book from hedged generalities. Statuses are assessed against peer-reviewed publications, regulatory decisions, and industry outcomes. "Supported" means current evidence is consistent with the claim. "Evolving" means the evidence is mixed or the claim's timeline is being tested. "Challenged" means significant counter-evidence has emerged. This page is updated as new evidence becomes available. If you have evidence that should be considered, reach out.

Last reviewed: February 2026

This book is free and open. Support thoughtful AI in medicine.