
Making AI-generated content more transparent for everyone in the EU

Artificial Intelligence Regulation and Compliance
Draft Guidelines on the implementation of the transparency obligations for certain AI
systems under Article 50 of Regulation (EU) 2024/1689 (the ‘AI Act’)
The AI Act (Regulation (EU) 2024/1689) lays down harmonised rules for the placing on the market, putting into service, and use of artificial intelligence (‘AI’) in the European Union. Its aim is to promote innovation in and the uptake of AI, while ensuring a high level of protection of health, safety and fundamental rights in the Union, including democracy and the rule of law.
The AI Act follows a risk-based approach, classifying AI systems into four different risk categories, one of which is AI systems posing transparency risks that are subject to the obligations laid down in Article 50 AI Act. These transparency obligations apply two years after the entry into force of the AI Act, i.e. as from 2 August 2026.
Pursuant to Article 96(1)(d) AI Act, these Guidelines are issued with the aim of providing practical guidance to assist competent authorities, as well as providers and deployers of AI systems, in ensuring compliance with the transparency obligations under Article 50 AI Act in a consistent, effective and uniform manner.
The drafting of these Guidelines was informed by input from a variety of stakeholders collected during a broad consultation organised by the Commission and input from the Member States in the AI Board.
These Guidelines are non-binding. Any authoritative interpretation of the AI Act may ultimately only be given by the Court of Justice of the European Union.
EUROPEAN AI OFFICE
The AI Office makes use of its expertise to support the implementation of the AI Act by:
- Contributing to the coherent application of the AI Act across the Member States, including by setting up advisory bodies at EU level and facilitating support and information exchange
- Developing tools, methodologies and benchmarks for evaluating the capabilities and reach of general-purpose AI models, and classifying models with systemic risks
- Drawing up state-of-the-art codes of practice to detail the rules, in cooperation with leading AI developers, the scientific community and other experts
- Investigating possible infringements of rules, including evaluations to assess model capabilities, and requesting providers to take corrective action
- Preparing guidance and guidelines, implementing and delegated acts, and other tools to support the effective implementation of the AI Act and monitor compliance with the Regulation
The Commission aims to foster trustworthy AI across the internal market, through the AI Continent Action Plan and Apply AI Strategy. The AI Office, in collaboration with relevant public and private actors and the startup community, contributes to this by:
- Advancing actions and policies to reap the societal and economic benefits of AI across the EU
- Providing advice on best practices and enabling ready access to AI sandboxes, real-world testing and other European support structures for AI uptake
- Encouraging innovative ecosystems of trustworthy AI to enhance the EU’s competitiveness and economic growth
- Aiding the Commission in leveraging the use of transformative AI tools and reinforcing AI literacy
