The AI Act – How the Legislation is Developing

The AI Package – the Overview 

In April 2021, the European Commission unveiled the AI Package, the collective name for the Commission’s projects addressing the current and future roles of Artificial Intelligence (AI) within the EU. The package was developed because it became clear that how we finance AI, incentivise its development, and formulate its rules today will determine whether people and businesses in the future can enjoy the benefits of AI while feeling safe and protected.

Naturally, one of the priorities within the package is to ensure that the EU has access to world-leading AI technologies. This is what the package calls fostering European excellence in AI through coordinated action, and it largely goes hand in hand with financing schemes to give European research and industry the best chance of establishing themselves as global leaders in cutting-edge, trustworthy AI. With that goal in mind, the 2021 Review of the Coordinated Plan on AI played a large guiding and stabilising role by providing the Member States with a unified EU vision of AI development and alignment. To break it down further, the Plan encourages investment in AI to foster greater economic resilience to future crises, such as another pandemic; it proposes sharp timelines for AI development programmes and road-mapped strategies so that the EU benefits from first-mover advantages on the world stage; and, lastly, it aligns the Member States’ positions to prevent fragmentation and to preserve the EU’s capacity to address global challenges as a unified actor.

As stated, the EU needs more than a vision of AI excellence and a plan to get there. It needs financing. Several sources of EU funding are available to meet these world-leading ambitions. Chief among them, the Recovery and Resilience Facility makes €134 billion available for investment in digital technologies, including AI development. Additionally, the Horizon Europe and Digital Europe programmes will together funnel a further €1 billion of EU investment into AI each year, with the aim of mobilising private-sector investment to the tune of €20 billion per year over the course of the Digital Decade.

In addition to funding the projects, the right infrastructure must be in place to give AI systems access to the high-quality data they need to thrive. To that end, the EU Cybersecurity Strategy, the Digital Services Act & the Digital Markets Act, and the Data Governance Act are each critical to removing barriers and creating new opportunities to develop increasingly advanced AI capabilities.

All of the projects mentioned above exist to provide the ambition, funding, and infrastructure needed for the EU to excel in AI. Still, it’s another matter for citizens to trust and feel safe using the final product. That’s why, as part of the AI Package, the Commission proposed the world’s first-ever legal framework for AI, addressing the risks that AI systems pose in our daily lives. By being the first to legislate on the trust and ethical aspects of AI, the EU sets the global tone, establishing what is considered the norm for people interacting with AI worldwide.

The Commission’s Proposal 

The Commission published its Proposal for a Regulation on Artificial Intelligence in 2021, hand in hand with the previously referenced Coordinated Plan on AI. The proposal serves as the legal backbone of the AI ecosystem: it categorises AI systems by the risks they pose to users and ensures the Member States can strictly enforce the resulting rules. The Commission proposes that all AI systems be classified as posing Minimal, Limited, High, or Unacceptable risk to your safety and fundamental rights.

This is crucial because, while most AI systems pose limited to no risk to people and can even help solve many societal challenges, other flagged AI systems create risks that must be addressed to prevent undesirable or discriminatory outcomes. The classic example is an AI used to narrow down applicants for a job or university place, which would greatly influence your future. And yet, it is often unclear why an AI system has made the decision or prediction it has. Other high-risk examples span healthcare, education, law enforcement, critical infrastructure, public services, migration & asylum, and justice applications. Therefore, the proposal aims to provide AI developers, deployers, and users with precise requirements and obligations without saddling SMEs with additional administrative or financial burdens.

The Council Position – 2022 

The Council of the EU was the first of the co-legislators to publish its position on the Commission’s proposal. Starting with its own definition of AI, the Council views it as any system developed through machine-learning approaches or logic- and knowledge-based approaches.

One of the significant areas where the Council’s position differed from the initial proposal was its expansion of the list of unacceptable-risk applications of AI. For example, the Council’s text extended the prohibition of social-scoring AI to cover private actors. And while the Commission’s proposal already suggested banning AIs that exploit the vulnerabilities of specific social groups, the Council wants that prohibition expanded to include people who are vulnerable due to their social or economic situation.

Where the Council diverges the most is in wanting explicit provisions that allow law enforcement authorities to use ‘real-time’ remote biometric identification systems in public spaces. Beyond that, the Council holds the position that AI systems used for national security, defence & military, research, and select non-professional purposes should fall outside the scope of the AI Act.

Concerning the additional requirements for high-risk categories, the Council’s position focuses on making them more technically feasible and less burdensome for stakeholders to comply with, for example by allowing greater leeway on the quality of the data required and by limiting the technical documentation required of SMEs.

A final key point from the Council’s position is the addition of measures to support AI innovation, with provisions for unsupervised real-world testing of AI systems and clarifications around regulatory sandboxes, so that innovative AI systems can be developed and brought to market more quickly.

The European Parliament Position – 2023 

In May 2023, the European Parliament’s Civil Liberties and Internal Market Committees jointly adopted their report on the AI Act, which means the text only needs to pass plenary adoption, anticipated for mid-June, before the Parliament’s position is ready for negotiations with the Council.

So, what does the Parliament insist on? It starts with the definition of AI, where centre-right MEPs won the contest to align it with the OECD’s definition. The Parliament also heavily expanded the list of prohibited AI practices, at the insistence of centre-left MEPs, to include biometric categorisation or identification, predictive policing, emotion recognition software, and the scraping of facial images to build databases.

The Parliament also called for stricter classifications, adding an extra layer to the high-risk categorisation scheme so that AI systems would only be caught within its purview if they pose a demonstrable risk to health, safety, or fundamental rights. Parliamentarians also expanded the list of high-risk AIs to include systems that influence voters during political campaigns and the ‘recommended for you’ feeds found on most major social media sites. With an expanded list and tighter requirements, the European Parliament’s position added further obligations for high-risk AI providers, most notably in risk management, data governance, technical documentation, and record keeping, along with the introduction of a fundamental rights impact assessment.

An additional feature of the Parliament’s position is the establishment of a new AI Office, which would guide Member States and coordinate joint cross-border investigations into the misuse of AI. There was also the recent, fast-paced rise of ChatGPT to consider: under the Parliament’s position, generative AI systems like it would, among other obligations, have to disclose that their output was AI-generated and publish summaries of the copyrighted data used to train them.

Next Steps 

The next step in the process is the AI Act’s plenary adoption, with 14 June 2023 as a tentative date. After MEPs formalise their position, the proposal will enter the last stage of the ordinary legislative process, kicking off the negotiations with the EU Council and Commission, known as the trilogues. The tricky negotiating points will be which definition of AI to apply and whether the Council will be granted its exemptions for military and national security purposes. Final approval is expected by the end of the year, or early 2024 at the latest, followed by a grace period for companies and organisations to adapt, often around two years. 

Lykke Advice 

If you would like to learn more about the EU’s plan for integrating AI systems safely into society and how it would impact your business operations, or if you are interested in the funding opportunities available to AI developers, then get in touch with us at Lykke Advice.
