AI deepfakes: Spain’s Carla Galeote Pushes Tough Crackdown Beyond Under-16 Bans—Traceable Anonymous Accounts to Curb Abuse

MADRID, Feb. 27, 2026 — Carla Galeote, a Spanish women's rights activist who says she was targeted with AI-generated fake nude images, is urging lawmakers to go further than Spain's proposed ban on social media for under-16s by making anonymous accounts traceable and escalating penalties for repeat offenders.

Galeote told Reuters that the shock of being hit with AI deepfakes was compounded by what she describes as a system that still struggles to recognize digital abuse as a real-world harm. “Social media isn’t new — and the violence is brutal, systematic, 24/7,” she said.

AI deepfakes and Spain’s hardening stance on platforms

Spain’s government has framed its under-16 proposal as a child-protection measure, but the debate has widened into a broader fight over what platforms must do — and what governments should be willing to threaten — when illegal or hateful content spreads at scale. In a Feb. 3 speech, Prime Minister Pedro Sánchez said Spain wants to “regain control” of digital spaces and announced plans to block under-16s from accessing major platforms, alongside tougher accountability measures for companies and executives.

Galeote argues that age gates alone won't stop AI deepfakes or the harassment campaigns that often accompany them. In her view, the central problem is impunity: attackers can churn through burner profiles, hide behind anonymity and treat reporting systems as an obstacle course rather than as a deterrent.

“PeppaPig88” is fine — but someone should be accountable

Galeote says she is not calling for a ban on pseudonyms. Instead, she wants anonymity to be conditional: users could post under any name, but platforms would have to be able to connect accounts to a verified identity that can be produced for authorities under due process. “Call yourself ‘PeppaPig88’ if you want — fine. But there has to be a real identity behind that account,” she said.

That idea — traceable anonymity — has become a recurring theme in policy debates over AI deepfakes, especially where victims say the damage comes from how fast synthetic images travel and how hard it is to identify who seeded them. Supporters argue that traceability could curb abuse without eliminating pseudonymous speech; critics warn that identity requirements can chill dissent and create new privacy risks if databases are breached.

From fines to “market access” threats

Spain’s debate is playing out as the European Union tries to enforce more muscular rules on Big Tech. The bloc’s Digital Services Act requires platforms to provide mechanisms for reporting illegal content and — for the largest services — to assess and mitigate systemic risks.

But Galeote says penalties that platforms can treat as a cost of doing business won’t deter the behavior that enables AI deepfakes to circulate. She has advocated a sharper sanction: barring repeat offenders from major markets, including the EU, rather than relying mainly on fines and takedown requests.

The push comes as Spain has also turned to prosecutors over the spread of AI-generated child sexual abuse material. Earlier this month, the government ordered an investigation into X, Meta and TikTok over alleged distribution of such material, part of a wider effort to pressure companies to prevent and remove harmful content before it goes viral.

Where existing rules meet AI deepfakes

European lawmakers are also leaning on the EU’s new artificial intelligence regime to make synthetic content easier to spot. Under the EU’s AI Act framework, providers and deployers of generative AI systems face transparency obligations that include labeling or disclosing AI-generated content, including deepfakes.

Those measures matter, experts say, because AI deepfakes are increasingly created with consumer-grade tools and spread through mainstream platforms, messaging apps and closed groups. Labels can help with detection and moderation, but victims and advocates argue that labeling doesn’t solve the hardest question: who is responsible when a fake image is created, uploaded and copied across multiple services in minutes?

A problem that predates the current AI boom

The current political urgency around AI deepfakes echoes warnings that surfaced years before today’s image generators. A 2019 investigation in Wired found that deepfake videos online were overwhelmingly pornographic, raising early alarms about nonconsensual sexual abuse as the dominant use case.

In Spain, the risks became more visible in 2023 after teenage girls in the town of Almendralejo reported receiving AI-generated nude images that appeared realistic, prompting a national debate over whether existing laws were equipped for the technology. Euronews reported at the time that the images were made by peers using an app that “nudified” photos pulled from social media.

Days later, a Spanish prosecutor opened a probe into whether the creation and sharing of the images constituted a crime, highlighting the legal gray areas victims and investigators faced. That inquiry, covered by Reuters, foreshadowed the wider crackdown now being debated in Madrid.

By mid-2024, a youth court had sentenced 15 schoolchildren in the case to probation and education measures, according to The Guardian — an outcome that underscored both the human toll and the challenge of crafting deterrence when perpetrators are minors and the content is synthetic.

What Galeote wants next

Galeote says policymakers should treat AI deepfakes as an amplifier of an older problem: gendered online abuse that becomes routine when enforcement is slow and perpetrators believe they will never be identified. She points to threats and harassment that would be unthinkable in public spaces, but are common online. “It’s impossible to think that a man on the street could shout that they’ll rape you and nothing happens, but that’s what we’re seeing online,” she said.

For Galeote, the fix is not one silver bullet but a layered response: meaningful identity traceability for accounts, faster cross-platform cooperation when intimate AI deepfakes spread, and penalties that compel executives and boards to prioritize safety. Without that, she argues, bans for teenagers will do little to protect adults — and will leave the underlying machinery of abuse intact.

As Spain prepares to debate its proposals in parliament, the fight over AI deepfakes is likely to become a test case for Europe’s broader regulatory turn: whether governments are willing to move beyond fines, and whether new rules can curb abuse without sacrificing the privacy and free-expression protections that many users rely on.
