AI Surgery Faces an Urgent Safety Reckoning as FDA Reports and Early Recalls Rise; Lawsuits Allege Dangerous Device Misguidance

WASHINGTON, Feb. 10, 2026 — In recent months, hospitals and device makers pushing AI surgery into routine care have faced a safety reckoning as the Food and Drug Administration logs more incident reports, early recalls and lawsuits tied to software-guided tools used in operating rooms nationwide. The tension stems from a mismatch: machine-learning features can be updated quickly, while the evidence, training and postmarket surveillance needed to catch mistakes often develop far more slowly.

AI surgery warnings are surfacing in FDA databases

A Reuters investigation spotlighted Acclarent’s TruDi Navigation System, a guidance tool used in sinus procedures, after an AI feature was added to its software. Reuters reported that before the AI change, the FDA had received unconfirmed reports of seven device malfunctions and one patient injury; after the update, the agency received unconfirmed reports of at least 100 additional malfunctions and adverse events. The same reporting and court records described at least 10 patients injured between late 2021 and November 2025, including allegations that the system provided incorrect location information while instruments were being used inside a patient’s head.

FDA incident reports are not designed to determine what caused a surgical mishap. They can be incomplete, duplicated or missing key context such as the software version, device configuration and the clinician's experience. Still, the growing volume of reports, combined with recalls and legal claims, is forcing a harder look at how AI surgery products are tested, updated and monitored after they reach wide clinical use.

TruDi recall and lawsuits spotlight the “guidance” risk

Two lawsuits in Texas described by Reuters allege the TruDi system’s AI misinformed surgeons about instrument location near major arteries during sinus surgery, contributing to severe injuries, including strokes. Integra LifeSciences, which bought Acclarent in 2024, disputed any causal link between the AI technology and the alleged injuries, Reuters reported.

Regulators have also recorded a product correction. An FDA recall notice for the TruDi Navigation System posted Nov. 4, 2025, said certain revision combinations of a Multi Instrument Adapter used with a Patient Tracker “may not meet its specified accuracy” for visual verification of device location within a patient’s anatomy. The agency classified the action as a Class II recall and listed 1,198 units in commerce.

Early recalls are showing up across the broader AI-device market, too. An open-access 2025 JAMA Health Forum study matched 950 FDA-cleared AI-enabled medical devices to recall entries and found 60 devices associated with 182 recall events. It reported that 79 recalls (43.4%) occurred within the first 12 months after clearance and noted that diagnostic or measurement errors drove the largest share of recall events.

The AI surgery debate is also layered on a longer history of robotic-surgery safety reporting. A 2025 analysis of FDA MAUDE reports involving the da Vinci surgical system identified 66,651 reports from January 2015 through June 2025, compared with an estimated 15.9 million procedures during that period. Adverse-event reports do not prove a device caused an injury, but they illustrate the scale of the postmarket surveillance challenge as software-driven surgery expands.

Safety questions around technology-assisted surgery are not new. In 2019, STAT reported on an FDA warning against using robot-assisted devices for mastectomies and some other cancer surgeries, citing limited data on safety and outcomes. In 2021, The ASCO Post summarized the agency’s updated reminder that safety and effectiveness had not been established for robot-assisted mastectomy, underscoring the FDA’s long-running focus on evidence and longer-term outcomes for high-stakes use cases.

What safety checks could make AI surgery safer

Regulators say they are trying to modernize guardrails for software that changes over time. The FDA's web resources on artificial intelligence in software as a medical device outline the agency's approach to lifecycle management and marketing submissions for AI-enabled device software functions, including expectations for how manufacturers plan, document and control updates.

Patient-safety researchers and clinicians say the next phase of AI surgery oversight will likely turn on fundamentals: clear labeling of when AI is active, stronger clinical validation for higher-risk guidance near critical structures, and postmarket reporting that captures software version and configuration when something goes wrong. Hospitals are also being urged to keep surgeons in charge by pairing AI surgery systems with standardized training, credentialing and protocols for when the software’s guidance conflicts with the operator’s judgment. The promise of AI surgery is better navigation and more consistent care, but the recent mix of reports, recalls and lawsuits is a reminder that “assist” still has to be earned with evidence.
