On 18 November 2023, Germany, France, and Italy reached a significant agreement on the future of AI regulation, marking a step forward in shaping the AI landscape within the European Union (EU).
Key Highlights
- Collaborative efforts: The joint paper reflects a coordinated effort by three major European economies, Germany, France, and Italy, to establish a unified approach to AI regulation. The alliance signals a commitment to fostering innovation while ensuring responsible AI adoption within the EU.
- Mandatory self-regulation for foundation models: The joint paper calls for mandatory self-regulation, implemented through codes of conduct, for foundation models. Foundation models are large AI models trained on broad data that serve as the basis for a wide range of downstream AI applications. Rather than subjecting these models to binding statutory requirements, Germany, France, and Italy aim to enhance accountability and transparency in AI development through such self-commitments. According to the paper, developers of foundation models should publish model cards providing the information needed to understand a model's functionality, its capabilities, and its limitations.
- Reaching smaller companies: The three governments endorse commitments that become binding on those AI providers in the EU that sign up to them. They propose mandatory self-regulation for all companies providing AI, irrespective of size. While the discussion around the EU's AI Act initially targeted only major AI providers, the joint paper advocates universal adherence, arguing that exempting smaller EU companies could undermine trust in the security of their products.
- No imposition of sanctions: The three nations' position does not include immediate sanctions. However, a sanction system could be established in the future if violations of the codes of conduct emerge. A European authority would monitor compliance with the standards.
- Regulating AI application: The focus is on regulating the application of AI rather than the technology itself. The paper seeks to balance the opportunities and risks of AI, recognizing that over-regulation can stifle innovation while under-regulation can endanger security. Consequently, the development process of AI models would not itself be subject to regulation.
Implications for Businesses
This agreement by Germany, France, and Italy on AI regulation has several implications for businesses operating in the EU:
- Compliance alignment: Businesses should closely monitor the developments stemming from this agreement, especially regarding the regulation of AI application and the treatment of foundation models. Alignment with these emerging regulations will be crucial.
- Ethical considerations: Emphasis on ethical AI development, mandatory self-regulation for foundation models, and a balanced approach for smaller companies highlight the importance of transparency, accountability, and fairness in AI systems.
- Competitive advantage: Companies engaged in AI research and development may benefit from the support envisaged in the joint paper and from the clarity of the evolving regulatory framework.
In conclusion, the agreement reached by the three governments represents a step towards unified AI regulation within the EU, with a particular focus on mandatory self-regulation for foundation models. Negotiations on the draft EU AI Act currently appear to be at a standstill; the new joint paper may, however, accelerate the ongoing trilogue discussions between the European Parliament, the Council, and the Commission, and help position the EU in this evolving domain. Businesses should remain vigilant, adapt to evolving regulatory requirements, and consider the ethical implications of their AI systems.
Should you require further information or assistance in navigating these regulatory developments, please do not hesitate to contact us.