
Moltbook meltdown: critical flaw exposed 1.5M API keys and thousands of emails—an alarming warning for AI “vibe coding”


WASHINGTON, Feb. 3, 2026 — Moltbook, a viral online forum pitched as a social network for AI agents, left a cloud database exposed that allowed outsiders to view sensitive platform data, according to cybersecurity firm Wiz, which disclosed the issue Monday. The lapse is a warning for the growing “vibe coding” trend of shipping software largely assembled by AI, because small configuration mistakes can spill real-world secrets at internet scale.

What Moltbook exposed

The exposed data included about 1.5 million API authentication tokens—credentials that function like passwords for bots—and 35,000 email addresses, The Verge reported. Business Insider reported the dataset also contained thousands of private direct messages between agents and that researchers said they reached the database in under three minutes.

Wiz said the misconfiguration gave unauthenticated users broad access to platform data, which could have enabled attackers to impersonate agents, alter or delete posts, or inject malicious content, including prompt-injection payloads that other automated systems might later consume. In its coverage, Reuters reported the flaw was fixed after Wiz contacted Moltbook and that creator Matt Schlicht had said he “didn’t write one line of code” for the site.

Moltbook and the “vibe coding” problem

“As we see over and over again with vibe coding, although it runs very fast, many times people forget the basics of security,” Wiz co-founder Ami Luttwak told Reuters. Separately, security researcher Jamieson O’Reilly said Moltbook’s popularity “exploded before anyone thought to check whether the database was properly secured,” 404 Media reported.

The phrase “vibe coding” itself is relatively new: AI researcher Andrej Karpathy coined it in early 2025 to describe building by giving high-level directions to an AI model and accepting large changes without digging into every line of code, as developer-writer Simon Willison documented. Moltbook shows the risk when that approach jumps from prototypes to production: security review still has to be someone’s job.

Those warnings have been building. In 2025, software security firm Veracode said its testing found 45% of AI-generated code contained security flaws and argued that teams should treat AI output as untrusted until it passes the same checks as human-written code, according to Veracode’s analysis.

Why the Moltbook leak matters beyond one site

Moltbook has since locked down the exposed database, but the episode highlights a broader problem as AI agents gain access to email, calendars, and other tools: leaked tokens can become a bridge into everything an agent can touch. For companies experimenting with agent-to-agent platforms, the takeaway is less about hype and more about hygiene—strong defaults, credential scoping, and verification that “AI” accounts are what they claim to be.
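Credential scoping, one of the hygiene measures above, can be sketched in a few lines. The token names and scope strings here are hypothetical, not drawn from any real platform; the point is that a leaked narrow token unlocks far less than a leaked all-purpose one:

```python
# Hypothetical sketch of credential scoping. Token names and scope
# strings are invented for illustration only.

# Each token carries an explicit, minimal set of permissions.
SCOPES: dict[str, set[str]] = {
    "tok_readonly": {"posts:read"},
    "tok_full": {"posts:read", "posts:write", "dm:read", "email:read"},
}


def authorize(token: str, required_scope: str) -> bool:
    """Allow an action only if the token explicitly carries the scope."""
    return required_scope in SCOPES.get(token, set())


# A narrowly scoped token limits the blast radius if it leaks:
print(authorize("tok_readonly", "posts:read"))  # True
print(authorize("tok_readonly", "dm:read"))     # False
```

Under this design, even a mass token leak like Moltbook’s would expose only what each token was scoped to do, rather than everything an agent can touch.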
