WASHINGTON, Feb. 10, 2026 — U.S. health regulators and major app stores are facing renewed pressure Tuesday to police AI medical apps that doctors warn can mislead patients and delay care. The push is growing as investigations and studies point to chatbots and skin-scanning tools that sound authoritative even when they are wrong.
A Reuters investigation described how an 18-year-old cancer patient in Turkey panicked after consulting OpenAI’s ChatGPT, which told him he might survive only five years. The patient’s tumor was removed in July, said Dr. Cem Aksoy, a medical resident in Ankara, who later fielded another call after the chatbot suggested a cough could mean the cancer had spread.
“When someone is distressed and unguided,” Aksoy said, an AI chatbot “just drags them into this forest of knowledge without coherent context.”
The same reporting found a growing number of AI medical apps marketed to consumers with big promises and small-print disclaimers. One skin-scanning app advertised “over 97% accuracy” and claimed hundreds of thousands of users, yet some reviewers said it flagged harmless spots as cancer or missed melanoma. Reuters could not independently verify individual experiences.
Why AI medical apps are hard to regulate
In the United States, whether an app triggers Food and Drug Administration oversight often hinges on “intended use” — what it is marketed to do. Many AI medical apps present themselves as education, not diagnosis, even when the user experience looks like triage.
That gap leaves regulators, platforms and patients sorting out what counts as “wellness” and what amounts to medicine. In its overview of artificial intelligence in Software as a Medical Device, the FDA has said the complex life cycle of AI software requires careful management across development, deployment and maintenance, including when software changes after release.
App stores have become a second checkpoint. Apple said it removed at least one “AI doctor” app after Reuters inquiries, citing rules that medical apps must disclose data and methods supporting accuracy claims. Google said it pulled and later reinstated a skin scanner after changes and clearer disclaimers.
Regulators eye new guardrails for AI medical apps
Federal regulators have shown they will step in when marketing crosses the line. In a 2025 warning letter to Exer Labs, the FDA said the company promoted its “Exer Scan” to “screen, diagnose, and treat” conditions using AI algorithms without required clearance.
Hospitals and health systems are also pressing for tougher, clearer standards. In a December letter to the FDA, the American Hospital Association urged risk-based, post-deployment monitoring to address bias, “hallucinations” and model drift, the performance changes that can emerge after software is in wide use.
Outside the U.S., the European Union is moving in the same direction. According to the European Commission’s overview of artificial intelligence in healthcare, the EU’s AI Act, which took effect in 2024, treats many medical-purpose systems as “high-risk,” requiring steps such as risk management, quality data and human oversight.
Warnings about AI medical apps have been building for years
Concerns over digital self-diagnosis tools predate generative AI. A 2015 audit study of symptom checkers found they listed the correct diagnosis first about a third of the time and gave appropriate triage advice a little more than half the time. A 2022 systematic review of symptom checker accuracy reported wide variation and warned that relying on the tools could pose safety hazards.
And when chatbots drift into sensitive health coaching, the risks can be immediate. Wired reported in 2023 that the National Eating Disorders Association suspended its “Tessa” chatbot after it dispensed weight-loss and calorie-cutting advice that experts said could worsen eating disorders.
For now, physicians say patients should treat AI medical apps as a starting point for questions — not a substitute for clinical judgment — and regulators and platforms are being pushed to make that boundary clearer.