Why Meta refused to sign EU's AI code of practice
Meta's decision was announced yesterday

Jul 19, 2025
12:51 pm

What's the story

Meta has refused to sign the European Union's (EU) code of practice for its Artificial Intelligence (AI) Act. The decision comes just weeks before the bloc's regulations on general-purpose AI models take effect. Joel Kaplan, Meta's Chief Global Affairs Officer, announced the company's stance in a LinkedIn post yesterday.

Regulatory concerns

'Europe is heading down the wrong path on AI'

Kaplan criticized the EU's approach to AI regulation, saying "Europe is heading down the wrong path on AI." He said Meta had reviewed the European Commission's Code of Practice for general-purpose AI models and found that it introduces legal uncertainties for model developers. Kaplan also argued that the code includes measures that go beyond what the EU's AI Act requires.

Compliance requirements

What is EU's code of practice?

The EU's code of practice is a voluntary framework designed to help firms comply with the bloc's AI Act. It calls for companies to provide and regularly update documentation about their AI tools and services. The code also bars developers from training AI on pirated content, and requires compliance with content owners' requests not to use their works in training data sets.

Business implications

EU's regulations could hinder advanced AI models

Kaplan warned that the EU's AI regulations could hinder the development and deployment of advanced AI models in Europe. He said the rules could also stifle European firms looking to build businesses on top of those models. The warning comes as global tech companies, including Alphabet, Meta, Microsoft, and Mistral AI, push back against such regulations and call for their implementation to be delayed.

Scope

About EU's AI Act

The EU's AI Act is a risk-based regulation that bans some "unacceptable risk" use cases outright, like cognitive behavioral manipulation or social scoring. It also defines a set of "high-risk" uses, including biometrics and facial recognition in education and employment. The act requires developers to register their AI systems and meet risk and quality management obligations.