Understanding the European Union’s AI Act: A Step Towards Trustworthy Artificial Intelligence?
On Friday, 8 December 2023, the European Union agreed on a groundbreaking law regulating Artificial Intelligence (AI). Known as the "EU AI Act," this comprehensive, sector-agnostic legislation has far-reaching implications.
While a political agreement has been reached, a final draft is expected to be produced by the end of January or the beginning of February 2024. The legislators emphasised that the EU AI Act is designed to be future-proof, with built-in mechanisms for updates.
- Policy Objectives: The AI Act aims to establish a framework that promotes the effective operation of the single market within the European Union by facilitating the development and utilisation of reliable AI systems. It strives to strike a balance between fostering innovation and safeguarding fundamental rights, safety, and public interests.
- Scope and Application: The AI Act applies to providers placing AI systems on the market or putting them into service within the Union, irrespective of whether they are established within the Union or in a third country. It covers a broad range of AI systems, including high-risk systems, remote biometric identification systems, and AI systems used in critical infrastructure sectors.
- High-Risk AI Systems: The AI Act introduces specific requirements for high-risk AI systems, such as those used in critical infrastructure, healthcare, transportation, and law enforcement. These systems must undergo a conformity assessment process, ensuring compliance with essential requirements related to safety, accuracy, transparency, and human oversight.
- Prohibited Practices: The AI Act prohibits certain AI practices that pose significant risks to individuals' rights and safety. These include AI systems that manipulate human behaviour, exploit vulnerabilities, or use subliminal techniques to distort human decision-making. The Act also prohibits AI systems that enable social scoring by public authorities in a way that undermines fundamental rights.
- Transparency and Accountability: The AI Act emphasises the importance of transparency and accountability in AI systems. Providers must provide clear and comprehensive information about the AI system's capabilities, limitations, and potential risks. Additionally, certain AI systems must be designed to allow human oversight and provide explanations for their decisions. When individuals engage with an AI system and their emotions or characteristics are identified using automated methods, they must be informed of this fact. Similarly, if an AI system is used to generate or manipulate image, audio, or video content that resembles genuine content, it should be disclosed that the content was produced by automated means, although exceptions may apply for legitimate purposes such as law enforcement or freedom of expression. This disclosure enables individuals to make informed decisions or disengage from such situations.
- Data Governance and Privacy: The AI Act recognises the importance of data governance and privacy in AI systems. It aligns with the General Data Protection Regulation (GDPR) principles and ensures that personal data is processed lawfully and transparently. It also promotes the use of high-quality training data and safeguards against biased or discriminatory AI systems.
- Foundation Models / General-Purpose AI (GPAI) Will Be Regulated: The regulation of foundation models or general-purpose AI, including LLMs, will be implemented through a two-tier approach. Both tiers will require adherence to transparency requirements. The "lower tier" models, subject to lighter regulation, must fulfil certain obligations. These include the creation of technical documentation, compliance with EU copyright law, and the publication of detailed summaries of the content used for training. The "higher tier" models, which are considered "high impact" models with potential systemic risks, will be subject to additional and more stringent obligations. These include conducting model evaluations, assessing and mitigating systemic risks, performing adversarial testing, reporting serious incidents to the Commission, ensuring cybersecurity measures, and reporting on energy efficiency.
- AI Office and AI Board: To enforce the new rules on GPAI models at the EU level, the Commission has established an AI Office. This office is responsible for overseeing these advanced AI models, promoting standards and testing practices, and enforcing the common rules across all member states. The AI Board, consisting of representatives from member states, will continue to serve as a coordination platform and advisory body to the Commission. Member states will play a crucial role in implementing the regulation, including the design of codes of practice for foundation models. An advisory forum will also be established to provide technical expertise to the AI Board. This forum will include stakeholders from various sectors, such as industry representatives, SMEs, start-ups, civil society, and academia.
- Promoting Innovation: The Act outlines measures to foster an innovation-friendly environment. One such measure is the establishment of AI regulatory sandboxes, which provide a controlled environment for developing, testing, and validating innovative AI systems. These sandboxes facilitate compliance with the regulation while providing legal certainty for innovators. The Act also proposes reducing the regulatory burden on small and medium-sized enterprises (SMEs) and start-ups.
- Enforcement and Penalties: The AI Act establishes a robust enforcement framework to ensure compliance with its provisions. National competent authorities will be responsible for market surveillance and enforcement. Non-compliance can result in significant penalties, set as a fixed amount or a percentage of the provider's worldwide annual turnover, whichever is higher: up to €35 million or 7% for violations of the banned AI applications, €15 million or 3% for violations of the AI Act's other obligations, and €7.5 million or 1.5% for the supply of incorrect information. However, the provisional agreement provides for more proportionate caps on administrative fines for SMEs and start-ups in case of infringements of the provisions of the AI Act.
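To make the tiered fine structure concrete, the ceilings above can be expressed as a simple "higher of fixed amount or percentage of turnover" calculation. This is an illustrative sketch based on the figures in the provisional agreement; the tier names are our own labels, and the final text of the Act may refine the thresholds:

```python
# Illustrative sketch of the AI Act's tiered fine ceilings (provisional
# agreement figures). Tier names here are informal labels, not legal terms.
FINE_TIERS = {
    "prohibited_practice":   (35_000_000, 0.07),   # banned AI applications
    "other_obligation":      (15_000_000, 0.03),   # other AI Act obligations
    "incorrect_information": (7_500_000,  0.015),  # supplying incorrect information
}

def max_fine_eur(violation: str, worldwide_turnover_eur: float) -> float:
    """Return the ceiling on administrative fines: the higher of the fixed
    amount or the percentage of worldwide annual turnover for the tier."""
    fixed, pct = FINE_TIERS[violation]
    return max(fixed, pct * worldwide_turnover_eur)

# A provider with EUR 1 bn turnover violating a prohibition:
# max(35_000_000, 0.07 * 1_000_000_000) -> EUR 70 million ceiling
print(max_fine_eur("prohibited_practice", 1_000_000_000))
```

Note that for smaller providers the fixed amount dominates (for example, 1.5% of EUR 100 million is EUR 1.5 million, so the EUR 7.5 million figure is the ceiling), while for large providers the turnover percentage becomes the binding number.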
The AI Act will have significant implications for insurance companies in the region. Here are some key implications:
Compliance with High-Risk AI Systems Requirements: Insurance companies that develop or use high-risk AI systems, such as those used in underwriting, claims processing, or customer service, must ensure compliance with the specific requirements outlined in the AI Act. This includes undergoing a conformity assessment process to demonstrate that their AI systems meet essential requirements related to safety, accuracy, transparency, and human oversight.
Enhanced Transparency and Explainability: Insurance companies must provide clear and comprehensive information about the AI systems they use, including their capabilities, limitations, and potential risks. This transparency will help build trust with customers and regulators. Additionally, certain AI systems used by insurance companies will need to be designed to allow human oversight and explain their decisions, ensuring accountability and fairness.
Data Governance and Privacy: The AI Act aligns with the General Data Protection Regulation (GDPR) principles and emphasises the importance of data governance and privacy in AI systems. Insurance companies must ensure that personal data in AI systems is processed lawfully and transparently, with appropriate safeguards in place. This includes ensuring the quality and fairness of training data and mitigating the risk of biased or discriminatory AI systems.
Compliance and Enforcement: Insurance companies must establish robust compliance mechanisms to ensure adherence to the AI Act's provisions. This includes implementing internal processes for conformity assessment, documentation, and ongoing monitoring of AI systems. National competent authorities will be responsible for market surveillance and enforcement, and non-compliance with the AI Act can result in significant penalties.
Opportunities for Innovation and Differentiation: While the AI Act introduces regulatory requirements, it also presents opportunities for insurance companies to innovate and differentiate themselves in the market. Insurance companies can build customer trust and gain a competitive advantage by developing AI systems that meet the Act's requirements and provide transparency, explainability, and fairness.
After the law is finalised and officially adopted, which is anticipated to occur in Q1 2024, most obligations will become enforceable within a 24-month transition period, i.e. by early 2026. However, the ban on prohibited use cases and the obligations regarding foundation models will become binding earlier.
It is worth emphasising that this legislation pertains to companies operating within the European Union. Regulatory developments in other regions should be monitored closely. Countries like the United Kingdom have opted not to pursue a comprehensive, all-encompassing law dedicated solely to AI, relying instead on existing laws and regulations to address matters related to artificial intelligence.
References
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206