WASHINGTON, Dec. 15, 2025 — A report released Dec. 8 by online-extremism researchers says coordinated “raid” campaigns — bolstered by clusters of foreign accounts that behave like engagement farms — have helped inflate the visibility of white nationalist livestreamer Nick Fuentes on X, formerly Twitter. Analysts say the tactic can fool recommendation systems into treating fringe posts as broadly popular, pulling in real users and, eventually, mainstream attention.
Nick Fuentes and the foreign-bot “raid” playbook
The analysis, “America Last: How Fuentes’s Coordinated Raids and Foreign Fake-Speech Networks Inflate His Influence,” was produced by the Network Contagion Research Institute, a group that tracks online extremism and disinformation. Its central claim is not that Fuentes’ audience is fake, but that his reach appears to be artificially magnified by rapid-fire early engagement that makes his posts look bigger — and more popular — than comparable accounts.
The report was drafted with support from Rutgers University’s Social Perception Lab, according to The Algemeiner’s summary of the findings. Researchers warn that raid-style amplification is not limited to one ideology or one influencer; they argue it is a reusable playbook for manufacturing “momentum” around any polarizing figure.
In simplest terms, a “raid” is a burst of synchronized reposts and replies meant to juice a post’s early performance. Watchdogs say coordinated swarms in the first minutes are especially powerful because many social platforms reward speed — not just volume — when deciding what to recommend or surface to other users.
What the watchdogs say they measured
According to the report, NCRI analyzed the first 30 minutes of shares on 20 recent Fuentes posts and compared those early-engagement patterns with other large political accounts, including X owner Elon Musk. JNS reported that the study found “dramatically higher early retweet velocity” for Fuentes than the comparison set and that 61 percent of the first-30-minute shares came from repeat early posters — behavior the report describes as “highly suggestive of coordination or automation.” Among repeat early posters, the report said 92 percent were anonymous accounts, according to JNS.
The “first 30 minutes” focus: The report emphasizes the opening window after a post is published, when ranking systems may decide whether to expand distribution.
Repeat boosters: A majority of early shares, the report said, came from accounts that repeatedly appeared as early amplifiers across multiple posts.
Thin identity signals: Many boosting accounts lacked the kinds of personal details typical of real, long-term users, the report said.
The report also flags the geography of those boosting accounts, describing a significant share of the high-frequency amplifiers as foreign-based. The analysis does not publicly identify a specific government or organization behind the activity, and researchers noted that foreign-located accounts can be run for a wide range of reasons — from commercial click-farm work to political influence efforts.
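NCRI has not published its analysis pipeline, but the two headline metrics described above — early-share counts in a fixed post-publication window, and the fraction of those shares coming from repeat early amplifiers — are straightforward to sketch. The following is a minimal illustration, assuming hypothetical share records with `account` and `time` fields (all names here are invented for the example, not taken from the report):

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical data shape: each post maps to its publish time and a list of
# share records, e.g. {"account": "user123", "time": datetime(...)}.
# In practice these would come from a platform API or a research dataset.

EARLY_WINDOW = timedelta(minutes=30)

def early_shares(shares, posted_at):
    """Keep only shares made within the first 30 minutes after publication."""
    return [s for s in shares if s["time"] - posted_at <= EARLY_WINDOW]

def early_velocity(shares, posted_at):
    """Early shares per minute -- a crude proxy for 'early retweet velocity'."""
    return len(early_shares(shares, posted_at)) / (EARLY_WINDOW.total_seconds() / 60)

def repeat_early_poster_fraction(posts):
    """Fraction of first-30-minute shares coming from accounts that appear
    as early sharers on more than one post in the sample."""
    early_by_post = {
        pid: early_shares(p["shares"], p["posted_at"]) for pid, p in posts.items()
    }
    # Count how many distinct posts each account early-shared.
    appearances = Counter()
    for shares in early_by_post.values():
        for handle in {s["account"] for s in shares}:
            appearances[handle] += 1
    repeaters = {h for h, n in appearances.items() if n > 1}
    total = sum(len(shares) for shares in early_by_post.values())
    repeat = sum(
        1
        for shares in early_by_post.values()
        for s in shares
        if s["account"] in repeaters
    )
    return repeat / total if total else 0.0
```

A figure like the report’s 61 percent would then be the value of `repeat_early_poster_fraction` over the 20-post sample; comparing `early_velocity` across accounts would approximate the velocity comparison against other large political accounts. This is only a sketch of the metric definitions as reported, not NCRI’s actual method.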
Why early bursts of engagement matter
Researchers who study manipulation campaigns have long argued that coordinated inauthentic behavior can be effective without being enormous. A relatively small group of accounts acting quickly can push a post into wider circulation, after which genuine users may take over and spread the content organically.
That can create a visibility spiral. Once a post is elevated, it may become fodder for reaction videos, commentary threads, and media coverage — and then those secondary waves of attention can be mistaken for proof that the original spike reflected a broad shift in public opinion.
Momentum, attention and a widening audience
Fuentes — who is widely described in news coverage as a white nationalist and Holocaust denier — rejected NCRI’s conclusions, calling the report a “matrix attack,” according to JNS. The report also comes as Fuentes’ profile has risen through long-form interviews and a nightly streaming show that has attracted millions of views from supporters and critics alike.
In a separate deep dive, Ali Breland wrote for The Atlantic that Fuentes has “momentum” and is “laying the groundwork to go even bigger,” describing a livestream ecosystem that blends political monologues with paid fan questions and fundraising. The format can convert attention — supportive or hostile — into revenue and repeat viewership, and it provides a steady stream of clips that supporters can repost across platforms.
A longer timeline of online swarms and platform whiplash
Watchdogs say the current debate over “raids” fits a longer pattern: online swarms that start as niche internet tactics can migrate into mainstream political communication, especially when platforms and media outlets treat spikes in engagement as proof of influence.
Fuentes and his supporters drew broad attention during the so-called “groyper” battles inside conservative youth politics, where disruption and trolling became a tactic for pressuring mainstream figures. Vox reported in 2019 that the group’s tactics blended online mobilization with real-world heckling aimed at pulling conservative politics toward more extreme positions.
Major platforms later removed Fuentes or limited his reach. Haaretz reported in 2020 that YouTube banned Fuentes, part of a broader crackdown on extremist content.
Even when he has been pushed off mainstream platforms, Fuentes has remained close to the center of U.S. political controversy. Reuters reported in 2022 that the White House condemned former President Donald Trump’s meeting with Fuentes and Ye, formerly known as Kanye West, at Trump’s Mar-a-Lago resort.
And while Fuentes was banned from Twitter before Elon Musk bought the company, he has also benefited from the platform’s shifting content-moderation posture. Axios reported in 2024 that Musk said he would reinstate Fuentes’ account on “free speech” grounds.
What comes next
For watchdog groups, the question is whether platforms will share enough data — and act quickly enough — to distinguish real political popularity from manufactured momentum before it shapes broader public debate. For journalists and researchers, the challenge is similar: verifying whether a “surge” is a genuine shift in opinion or a distorted signal engineered to look like one.
