
Disturbing loophole exposed: bikini deepfakes still possible on Google Gemini and ChatGPT despite policies, as Reddit bans jailbreak forum

SAN FRANCISCO, Dec. 23, 2025 — Users are still finding ways to generate bikini deepfakes with Google’s Gemini and OpenAI’s ChatGPT, despite both companies’ rules against nonconsensual sexual content. The workarounds surfaced as Reddit moved to shut down a major hub for sharing jailbreak tactics, highlighting how quickly “guardrails” can be probed, bypassed and reposted elsewhere.

In tests and user discussions reviewed by WIRED, prompts and image-edit instructions produced fabricated “bikini” versions of fully clothed women — often without consent — even when the models initially refused. WIRED reported that Reddit banned r/ChatGPTJailbreak after the subreddit’s discussions veered into advice for generating these images.

Bikini deepfakes: how the loophole works

The tactic is less about explicit nudity than sexualizing by proxy: requests are framed as harmless fashion edits, swimwear “try-ons,” or “beach outfit” transformations. That framing can slip through moderation systems designed to block overt nudity or pornographic requests, while still producing intimate, humiliating results. The end product is still a bikini deepfake — a synthetic image that alters a real person’s appearance in a sexualized way, without their consent.

Both companies publicly prohibit this kind of misuse. Google’s generative AI rules bar content that “facilitates non-consensual intimate imagery,” and also forbid attempts to bypass safety protections, according to its Generative AI Prohibited Use Policy. OpenAI similarly prohibits “sexual violence or non-consensual intimate content,” under its Usage Policies.

Reddit’s crackdown, and the moderation whack-a-mole

Reddit’s ban of r/ChatGPTJailbreak is the clearest sign yet that mainstream platforms are treating jailbreak sharing as a safety problem, not just a curiosity. Reddit’s sitewide rules prohibit posting intimate or sexually explicit media without consent, and the company has increasingly tied that rule to AI-generated sexual content. The platform’s Rule 3 policy is explicit about consent and privacy — but enforcement is reactive, often arriving after harmful content has already spread.

The deeper challenge is distribution. When one forum is closed, techniques migrate to smaller communities, private chats, paste sites or model-adjacent “prompt libraries.” That ecosystem makes bikini deepfakes resilient: removing one instruction set doesn’t remove the underlying capability.

Why this is escalating now

Nonconsensual deepfakes have been growing for years, and research has repeatedly shown that sexual content dominates the category. A widely cited 2019 industry report, “The State of Deepfakes,” documented how pornographic deepfakes drove much of the early surge. Reddit later updated its enforcement posture in a 2023 policy clarification, noting that AI-generated explicit content depicting a real, identifiable person violates its rules. Microsoft, facing pressure from victims and advocates, also outlined a broader approach to intimate-image abuse in 2024, including bans on creating sexually intimate images without permission.

What happens next for bikini deepfakes

Policy alone is not stopping bikini deepfakes; enforcement and product design are the battleground. That includes better detection of “try-on” euphemisms, stricter handling of image-edit requests involving real people, and clearer consequences for repeat violators. It also includes legal pressure: the federal Take It Down Act, signed in 2025, created new obligations and penalties around nonconsensual intimate imagery, including deepfakes, as The Associated Press reported.

Even as Reddit closes prominent jailbreak spaces, the underlying reality remains: the easier it is to generate bikini deepfakes, the faster abusers will iterate — and the more the burden shifts to platforms to prove their safeguards work in practice, not just on paper.
