SAN FRANCISCO, Jan. 14, 2026 — OpenAI introduced ChatGPT Health on Wednesday, adding a dedicated space inside ChatGPT that lets users connect medical records and wellness apps for more personalized, context-aware responses. The move lands as tech companies race to turn AI from “answers” into “actions,” from health guidance to shopping checkouts, and as regulators and brands try to define what trust looks like when software can both advise and transact.
ChatGPT Health shifts health questions into a more personal lane
OpenAI says ChatGPT Health is designed to help people feel more prepared for everyday health decisions, from making sense of test results to planning questions for a clinician — without positioning the tool as a diagnostic or treatment product. In its ChatGPT Health announcement, the company emphasized additional privacy protections for the Health space, including separation from other chats and a commitment that Health conversations are not used to train its foundation models.
The stakes are high: health is already one of ChatGPT’s most common use cases, and OpenAI has pointed to hundreds of millions of health and wellness questions arriving each week. But the history of chatbots in medicine is uneven. A 2023 JAMA Network Open study evaluating chatbot responses to physician-written medical questions found that accuracy and completeness could vary — a reminder that ChatGPT Health will be judged not just on convenience, but on reliability and the ability to show its work.
Google’s agentic commerce aims to make AI a checkout path, not just a search layer
Google, meanwhile, is pitching “agentic commerce” as the next step in online shopping — where AI can help complete tasks on a shopper’s behalf. In its agentic commerce rollout, the company outlined a new Universal Commerce Protocol and said it plans to power checkout from eligible U.S. retailers directly on AI surfaces such as Search’s AI Mode and the Gemini app, using stored payment and shipping information through Google Wallet.
The push builds on years of Google investment in commerce data and discovery, including the company’s 2021 introduction of the Shopping Graph. What’s different now is the ambition to collapse browsing, decision-making and payment into a single AI-guided flow — a shift retailers may welcome for its conversion potential, and one consumers are likely to scrutinize for transparency and control.
FDA clarifies what “low-risk” can mean for wearables and general wellness tools
As consumer health tech expands, the U.S. Food and Drug Administration is also refining its message. The agency’s January update to its general wellness guidance for low-risk devices describes how FDA views products intended to promote a healthy lifestyle — and where the line sits between general wellness claims and medical device oversight, including how certain software functions may fall outside the device definition under federal law.
That line-drawing has been in motion for years. A 2016 Federal Register notice reflected earlier FDA thinking on low-risk general wellness products — context that matters now, as AI features increasingly appear inside apps and wearables that consumers treat like health companions.
Meta’s Manus buyout and Equinox’s anti-AI moment show the reputational edge of “agentic” tech
Big money is chasing agentic systems, too. Meta’s acquisition of Manus — a startup focused on agentic AI — was reported at more than $2 billion in a Reuters analysis of the Meta-Manus acquisition, underscoring how aggressively large platforms are buying talent and tooling that can plan and execute multi-step digital tasks.
At the same time, brands are testing how loudly to embrace — or critique — AI. Equinox drew attention with a New Year campaign that juxtaposed bizarre AI-generated imagery with human portraits, a strategy Adweek detailed as an attempt to cut through what some consumers deride as synthetic “AI slop.” The company has leaned into provocation before, including its earlier “We Don’t Speak January” messaging, covered in a 2023 explainer of the campaign.
Together, ChatGPT Health, agentic commerce, and the latest regulatory and branding moves point to the same reality: AI is moving closer to sensitive data and real transactions. Whether ChatGPT Health becomes a trusted companion — and whether agentic shopping becomes a default interface — may depend less on novelty than on clear guardrails, credible privacy practices and the ability to prove what the AI did, and why.

