Controversial EU AI Code of Practice Debuts: Google Signs, Meta Refuses in a Bold Test of Global AI Governance


BRUSSELS, Belgium — Google has signed the European Union's AI Code of Practice, while Meta has refused, as the bloc's rules for general-purpose AI providers took effect Aug. 2, 2025. The split pits a "follow-the-playbook" approach against a "comply-your-own-way" strategy as Europe tries to turn guidance into practical guardrails for frontier AI.

The EU AI Code of Practice — formally the General-Purpose AI Code of Practice — was drafted by independent experts and published July 10, 2025. The European Commission says companies that adhere to the code can show they meet key obligations under the AI Act with less paperwork and more legal certainty; its official list of signatories includes Google, Amazon, Anthropic, IBM, Microsoft, OpenAI and Mistral AI, among others.

That voluntary layer sits atop a hard deadline calendar. The Commission’s AI Act application timeline says the law entered into force Aug. 1, 2024, and will be fully applicable Aug. 2, 2026, with the governance rules and general-purpose AI obligations already applying from August 2025.

EU AI Code of Practice: what it asks AI model makers to do

For model providers, the EU AI Code of Practice breaks compliance into three chapters: transparency; copyright; and safety and security. It pushes companies to document how a model is trained and evaluated, publish a public summary of training content, and adopt a policy designed to respect EU copyright rules.

The safety and security chapter targets the small set of most advanced “systemic risk” models, urging deeper testing and mitigation measures and clearer incident reporting. The Commission notes that xAI signed only the safety and security chapter, meaning it must address transparency and copyright expectations through other “adequate means.”

Why Google signed and Meta refused the EU AI Code of Practice

Google has framed its move as a pragmatic bet that signing the EU AI Code of Practice will make compliance clearer as regulators ramp up oversight. In a post announcing its intent, the company said the final text “comes closer” to supporting innovation but warned that requirements exposing trade secrets or slowing approvals could “chill” AI development, according to its statement on signing the EU AI Code of Practice.

Meta, by contrast, says the code adds uncertainty and goes beyond what the underlying law requires. Its chief global affairs officer, Joel Kaplan, called the EU’s implementation “overreach” and said it would “throttle” frontier model development in Europe, according to TechCrunch’s report on Meta’s refusal. Meta has not said it will ignore the AI Act, but it is signaling that it will pursue compliance outside the voluntary code.

Continuity in Europe’s AI rulebook

The code extends a policy arc that began with lawmakers finalizing the AI Act in 2024. The European Parliament detailed its vote in an announcement of the landmark adoption, and the Council later issued its final green light, framing the law as a global first for comprehensive AI rules.

In between, the Commission experimented with voluntary pledges as a bridge to formal enforcement through the “AI Pact,” saying more than 100 companies signed commitments in a September 2024 update on the pact.

For companies building and deploying large models, signing the EU AI Code of Practice is not a substitute for the law — but it may influence how regulators measure “good faith” compliance as enforcement ramps up in 2026. Whether it becomes a global baseline or a Europe-only playbook may hinge on whether holdouts like Meta can persuade regulators that alternative compliance paths offer equivalent safeguards.
