DENVER, Dec. 24, 2025 — A physicist who coined “artificial general intelligence” decades before it became Silicon Valley’s favorite promise is resurfacing with a blunt message: the race for ever-more-capable AI is already sliding toward military competition and crisis, not just productivity.
A recent WIRED profile of Mark Gubrud traces the phrase to his 1997 work on the security risks of breakthrough technologies, in which he warned that advanced systems could be “usable” across industrial and military operations. That framing, once obscure, now collides with a world in which big-tech labs market “frontier” models while governments pour money into autonomy, targeting, and swarms.
AGI: the acronym that outgrew its inventor
Gubrud did not build a lab or a startup around AGI. He named a destination, then watched others turn it into a milestone, a slogan and, increasingly, a bargaining chip. The phrase’s modern glow is so hot that even the companies chasing it have begun to back away from the label, a pattern observers have dubbed the industry’s “AGI” rebrand cycle.
But the rebranding doesn’t erase the underlying push for systems that can generalize, plan and act across domains—the kind of capability many people now shorthand as AGI. Gubrud’s worry was never just the tech. It was the incentives: secrecy, speed, prestige and the logic of “if we don’t, they will.”
AGI and the arms-race trap he flagged early
His warnings rhyme with a decade of alarms from researchers and arms-control advocates. In 2015, a coalition of AI and robotics researchers urged action in an open letter on autonomous weapons. A year later, Arms Control Today argued the case for urgent limits in “Stopping ‘Killer Robots’,” citing the risk of an accelerating “military AI arms race.” And in IEEE Spectrum, Gubrud pressed the point that autonomy could become the flashpoint of a broader contest in his call to ban autonomous weapons.
Go back even further and the throughline is stark: in Gubrud’s 1997 paper for the Foresight conference, he described emerging technologies as potentially reshaping conflict in ways that could eclipse nuclear-era stability. Today, the weapons and the code are different, but the 1997 paper still sits at the center of the same question: what happens when general-purpose capability becomes general-purpose leverage?
AGI meets the policy scramble
International groups are now trying to catch up to the pace of deployment. The International Committee of the Red Cross has warned against letting AI spread across battlefields without “oversight and regulation” in a 2025 statement to the U.N. Security Council. Meanwhile, watchdogs note how AGI branding itself has been used to attract capital, set expectations and justify speed, as described in AI Now’s report on “AI Generated Business”.
Gubrud’s renewed visibility arrives at a moment when the public debate is split between wonder and whiplash: one side selling AGI as the next industrial revolution, the other warning that the first truly scalable use case may be conflict. His argument is not that progress must stop. It’s that the world should stop treating the most destabilizing version of AGI as inevitable—or as a race worth “winning” at any cost.
