
AI Governance Fractures: EU’s Sweeping AI Act, Trump’s ‘Woke AI’ Order, China’s Global Push, Pakistan’s New Policy

BRUSSELS, Jan. 5, 2026 — The European Union, the Trump administration, China and Pakistan are tightening their approaches to AI governance as 2026 begins, creating a patchwork of rules that could shape how powerful models are built, bought and policed across borders. The divergence reflects clashing priorities: fundamental-rights regulation in Europe, ideological "neutrality" requirements in U.S. federal procurement, sovereignty-first diplomacy from Beijing and capacity-building plans in Islamabad.

AI governance: four models harden at once

Europe’s risk-based rulebook. The EU’s AI Act entered into force on Aug. 1, 2024, and is moving into application in stages. A first wave of prohibitions took effect on Feb. 2, 2025, including bans on practices such as social scoring, untargeted scraping to build facial-recognition databases and some emotion-recognition systems in workplaces and schools. Obligations for general-purpose AI models became applicable on Aug. 2, 2025, and much of the rest of the law becomes applicable on Aug. 2, 2026, according to the European Commission’s AI Act overview.

Washington’s procurement test. President Donald Trump has pushed AI governance through federal buying power rather than a single economy-wide statute. His executive order, “Preventing Woke AI in the Federal Government,” signed July 23, 2025, directs agencies to buy large language models that meet “truth-seeking” and “ideological neutrality” standards and argues that DEI-driven design can distort outputs.

Implementation is now catching up with the rhetoric. The Office of Management and Budget issued procurement guidance on Dec. 11, 2025, spelling out transparency items agencies should seek from vendors and a timetable for updating purchasing rules, as detailed in Lawfare’s summary of the OMB memo.

Beijing’s global pitch. China is framing AI governance as a shared international project — but one anchored in national sovereignty and resistance to “exclusive groups.” Its Global AI Governance Initiative, released in 2023, calls for collaboration on standards, open-source sharing of AI knowledge and United Nations discussions on an institution to coordinate global AI rules.

Islamabad’s capacity-building plan. Pakistan’s federal cabinet approved a National AI Policy on July 30, 2025, tying AI governance to workforce development and public-service delivery. The policy sets targets to train 1 million AI professionals by 2030, create innovation funding and launch 50,000 AI-driven civic projects and 1,000 local AI products over five years, according to Dawn’s report on the policy. Prime Minister Shehbaz Sharif said: “Our youth are Pakistan’s greatest asset.”

Continuity, then a sharper turn

Global debates about AI governance did not start with chatbots. Many governments previously rallied around voluntary guardrails, including the OECD AI Principles adopted in 2019 and UNESCO’s Recommendation on the Ethics of Artificial Intelligence, adopted in 2021. Those frameworks emphasize ideas such as human rights, transparency, safety and human oversight.

What is changing now is the level of enforcement — and the politics around it. As AI governance becomes law, procurement policy and foreign-policy messaging, companies and governments face a central question: whether safety tests and transparency can be harmonized even as values and power blocs pull in different directions.
