Gharidah Farooqi AI Videos Expose Alarming Deepfake Harassment During Islamabad Talks

ISLAMABAD, Pakistan — Pakistani journalist Gharidah Farooqi said Monday, April 13, that AI-generated sexualized videos made from her image were circulated online after her coverage of the U.S.-Iran talks in Islamabad triggered a wave of commentary about her appearance. What began as outfit policing quickly became a consent and press freedom issue, showing how fast generative AI can turn routine misogynistic trolling into synthetic abuse.

The diplomatic backdrop was unusually high-stakes. Reuters reported that the weekend meeting in Islamabad was the highest-level direct U.S.-Iran engagement since the 1979 revolution, even though the talks ended without a breakthrough. Yet much of the online attention in Pakistan drifted away from the substance of the negotiations and toward Farooqi herself.

Gharidah Farooqi AI videos show how fast online trolling can escalate

Farooqi later said, in remarks carried by Pakistan Today, that explicit AI-generated clips using her image were circulated online and that the material should be understood as a form of sexual violence, not satire. A Dawn Images commentary on the backlash said the online pile-on centered in part on a photo of Farooqi taken from behind without her consent, which was then used to generate the inappropriate AI clips, pushing the story well beyond the familiar territory of social media mockery.

That distinction matters. By the time manipulated clips begin circulating, the issue is no longer simply whether a woman journalist is being judged for how she looks on camera. It is whether her body, likeness and credibility can be treated as raw material for humiliation each time she becomes visible at a major public event.

Why this case matters beyond one newsroom

This is not an isolated media scandal. Reporters Without Borders said in February that it documented 100 deepfake cases targeting journalists in 27 countries between December 2023 and December 2024, and that 74 percent of the victims were women. That finding helps explain why Farooqi’s case resonates so sharply: deepfakes are not only tools of deception, but increasingly tools of intimidation aimed at exhausting women until visibility itself feels unsafe.

This pattern did not start in Islamabad

The current abuse fits a much longer timeline. CPJ reported in 2020 that Farooqi and other Pakistani women journalists were already dealing with sustained online abuse, and Dawn reported in 2022 on backlash after remarks that effectively blamed her for harassment while working in male-dominated spaces.

That history did not stop there. The Coalition for Women in Journalism has documented repeated organized campaigns against Farooqi since 2016, while TRT World reported in 2024 that sexualized deepfakes were already being weaponized against Pakistani women leaders. The technology changed, but the basic tactic remained the same: move attention away from a woman’s reporting and onto her body, then use that shift to undermine her credibility.

What should happen next

Farooqi’s case should be treated at once as a press freedom issue, a consent issue and a digital safety issue. Newsrooms, platforms and regulators do not need another circular debate about what women reporters should wear on camera; they need faster takedowns of synthetic sexual content, better evidence-preservation protocols and a clearer public understanding that AI-assisted humiliation is serious abuse, not a joke gone too far.

The real scandal is not that Farooqi was visible during a high-stakes U.S.-Iran meeting in Islamabad. It is that visibility itself could be converted so quickly into material for deepfake harassment.
