European Union enacts AI Act
On 1 August, the European Union officially enacted the Artificial Intelligence Act, setting out clear requirements for developers and operators to use AI in ways that protect citizens' safety and fundamental rights.
Proposed by the European Commission in April 2021, the act was agreed upon by the European Parliament and Council in December 2023. It is designed to ensure responsible AI development and use, while also reducing the administrative and financial burden on businesses by harmonizing regulations across EU countries.
Risk-based AI classification
A key challenge in developing the act was defining AI and assessing the related risks. The act classifies AI systems into four categories according to the risk they pose:
– Minimal risk: most AI systems, such as spam filters and AI-enabled video games, face no obligations under the AI Act, though companies can voluntarily adopt additional codes of conduct.
– Specific transparency risk: systems like chatbots must inform users that they are interacting with a machine, while certain AI-generated content must be labelled as such.
– High risk: high-risk AI systems, such as AI-based medical software or AI systems used for recruitment, must comply with strict requirements, including risk-mitigation systems, high-quality data sets, clear user information, and human oversight.
– Unacceptable risk: for example, AI systems that allow “social scoring” by governments or companies are considered a clear threat to people’s fundamental rights and are therefore banned.
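The tiered structure lends itself to a simple lookup. The short Python sketch below is purely illustrative: the tier names and obligations are paraphrased from the list above, and the example systems and their mapping are hypothetical rather than an official taxonomy from the act.

from enum import Enum

class RiskTier(Enum):
    # Illustrative tiers paraphrased from the AI Act's classification.
    MINIMAL = "minimal risk"
    TRANSPARENCY = "specific transparency risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"

# Hypothetical mapping of the example systems mentioned above to tiers.
EXAMPLES = {
    "spam filter": RiskTier.MINIMAL,
    "AI-enabled video game": RiskTier.MINIMAL,
    "chatbot": RiskTier.TRANSPARENCY,
    "AI-based medical software": RiskTier.HIGH,
    "recruitment system": RiskTier.HIGH,
    "social-scoring system": RiskTier.UNACCEPTABLE,
}

# Paraphrased obligations per tier; the act itself is the authoritative source.
OBLIGATIONS = {
    RiskTier.MINIMAL: "no obligations; voluntary codes of conduct possible",
    RiskTier.TRANSPARENCY: "disclose machine interaction; label AI-generated content",
    RiskTier.HIGH: "risk mitigation, data quality, user information, human oversight",
    RiskTier.UNACCEPTABLE: "banned",
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.value} -> {OBLIGATIONS[tier]}")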
Enforcement
The act also establishes penalties for non-compliance. Companies could face fines of up to 7% of their global annual turnover for using banned AI applications, up to 3% for failing to meet other obligations, and up to 1.5% for providing false information.
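To make the scale of these caps concrete, here is a minimal, purely illustrative calculation in Python; the turnover figure is invented and only the percentage caps quoted above are modelled.

# Illustrative only: maximum fines as a share of global annual turnover.
FINE_CAPS = {
    "banned AI application": 0.07,   # up to 7%
    "other obligations": 0.03,       # up to 3%
    "false information": 0.015,      # up to 1.5%
}

def max_fine(global_turnover_eur: float, violation: str) -> float:
    """Upper bound implied by the percentage cap for a given violation type."""
    return global_turnover_eur * FINE_CAPS[violation]

# Hypothetical company with EUR 2 billion in global annual turnover.
print(max_fine(2_000_000_000, "banned AI application"))  # 140000000.0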
Each EU member state must now appoint national authorities to oversee enforcement of these rules, with a one-year deadline to do so.
To support member states, the European Union has created three advisory bodies. The European Artificial Intelligence Board will ensure consistent application across member states and coordinate with the Commission. A scientific panel will offer technical advice and warn about risks from general-purpose AI models. Finally, a diverse advisory forum will provide guidance to the AI Office.
EU’s vision for AI
The EU has ambitious goals for AI, believing that it can greatly benefit society if managed responsibly. “The EU aspires to be the global leader in safe AI,” explained the Directorate-General for Communication.
“By developing a strong regulatory framework based on human rights and fundamental values, the EU can create an AI ecosystem that benefits everyone. This means better healthcare, safer and cleaner transport, and improved public services for citizens. It brings innovative products and services, particularly in energy, security, and healthcare, as well as higher productivity and more efficient manufacturing for businesses. Governments can also benefit from cheaper and more sustainable services such as transport, energy, and waste management,” it added.