Keir Starmer seeks sweeping, contentious online powers to protect children—fast‑tracking curbs and extending deepfake bans to AI chatbots

LONDON, Feb. 16, 2026 — Prime Minister Keir Starmer is seeking broader, faster-moving powers to regulate online access in the U.K., saying the government must move more quickly to protect children from rapidly evolving digital threats. Starmer argues the pace of AI-driven change has outstripped the current rulemaking process, and he wants to narrow the gap between new harms and enforceable guardrails.

The push, first outlined in reporting by Reuters, would tighten how online safety rules apply to AI systems that can generate content, including chatbots, and would accelerate the government’s ability to update online safety requirements without reopening major legislation each time technology shifts.

What Keir Starmer is proposing

At the center of Starmer’s plan is a bid to give ministers and regulators greater flexibility to react to online risks—especially risks to minors—by speeding up how new restrictions are introduced and enforced. According to Reuters, the proposal includes limiting parliamentary scrutiny for certain updates, an idea that critics say could concentrate power and weaken oversight.

The package also seeks to close gaps that child-safety groups say have left some AI products outside the strictest parts of the U.K.’s online safety regime. That includes AI chatbots that can be used to produce sexualized or exploitative material, or to give harmful advice to vulnerable users.

Extending deepfake controls to AI chatbots

A key political selling point for Starmer is expanding protections against nonconsensual, sexualized “deepfakes” and related abuses when they are generated through conversational AI tools rather than traditional social platforms. The government has framed the issue as both child protection and a broader fight against technology-facilitated exploitation. In a recent statement on its wider approach, the government said it is leading a global effort to counter deepfake threats and signaled the direction of travel for enforcement and victim support.

Starmer’s allies argue that making rules “platform-neutral” matters because harmful content may be created in one place (a chatbot), shared in another (a messaging or social app), and stored or replicated across multiple services. By treating generative tools as part of the same risk ecosystem, ministers say they can reduce the number of loopholes for bad actors.

Why Keir Starmer says the current system is not enough

Starmer is positioning the overhaul as a response to the speed and scale of change in AI and social media. Under existing online safety structures, regulators can draft codes and require risk assessments, but government officials and campaigners have argued that some emerging products are not captured cleanly—or that the path from identifying a harm to enforcing new requirements can be too slow.

Recent coverage has also emphasized the political pressure created by high-profile controversies around AI-generated sexual content and child safety. The government’s argument is that safety rules designed for platforms hosting user-generated content need to fully account for tools that can generate that content on demand.

On Sunday, The Guardian reported the government is considering stronger measures aimed at chatbot providers, including significant penalties for services that put children at risk.

Age checks, VPN workarounds and “who controls access”

Starmer’s broader agenda intersects with the U.K.’s ongoing rollout of age assurance and child-protection requirements, and with the practical reality that users can sometimes bypass geographic or age-related blocks using tools such as VPNs. Ministers have signaled they want to understand how circumvention works and what proportionate enforcement could look like—an area likely to trigger debate over privacy, cybersecurity and civil liberties.

As the government and regulators expand child safety expectations, companies have also warned about the compliance burden, the technical complexity of robust age assurance, and the potential for unintended consequences if services restrict access abruptly.

The continuity: how Keir Starmer’s plan builds on earlier steps

Starmer’s new push sits on top of a multi-year policy arc that began before he entered Downing Street and accelerated after the Online Safety Act became law. Parliament marked the legislation’s passage in 2023 in an official update, and regulators have since been developing codes and guidance for implementation.

Ofcom has also been consulting on how services should meet their duties to protect children, including the steps platforms and search services should take to address content that is harmful to minors. Those consultations are outlined in Ofcom’s online safety consultation materials, which have shaped expectations around risk assessments and safety-by-design.

Separately, the government moved to tighten deepfake-related law enforcement tools as concerns grew about nonconsensual explicit imagery. In early 2025, it outlined plans to criminalize the creation of sexually explicit deepfakes in a government announcement—a step campaigners cited as overdue but necessary.

Starmer is now arguing that these strands—platform duties, child-focused risk controls and deepfake enforcement—need to be brought together so they apply cleanly to generative AI systems, not just traditional social media.

How AI deepfakes are changing the problem

AI deepfakes have shifted from niche manipulation to a mass, consumer-grade capability, powered by tools that require minimal technical skill. An explainer by ITV described how the U.K. is approaching detection and enforcement, citing research on how frequently people report exposure to harmful deepfakes.

Supporters of Starmer’s approach argue that the same “friction” regulators tried to apply to harmful content on major platforms must now be applied to the tools that generate it. Critics counter that broadening state power over online access risks mission creep—especially if fast-tracked changes are made with limited parliamentary debate.

What happens next

The plan is expected to move into consultation and legislative refinement, with the most contested elements likely to be the scope of new ministerial powers, the thresholds for restricting access, and how any expanded rules would be enforced against global tech companies.

For Starmer, the political wager is that voters want faster action to protect children online—even if it means granting the government more discretion over how online safety rules evolve. For opponents, the concern is that what starts as child protection could become a blueprint for broader online control.

Either way, Starmer has placed online regulation—especially the regulation of AI systems—at the center of his child-safety agenda, setting up a new round of debate over how far the U.K. should go in policing digital spaces.
