AI Act: EU Parliament fine-tunes text ahead of key committee vote


EU lawmakers have been finalising the text of the AI regulation ahead of the vote in the leading parliamentary committees on Thursday (11 May).

The AI Act is a landmark legislative proposal to regulate Artificial Intelligence based on its potential to cause harm. The members of the European Parliament (MEPs) spearheading the file shared a fine-tuned version of the compromise amendments on Friday (5 May).

The compromises, seen by EURACTIV, reflect a broader political agreement reached at the end of April but also include last-minute changes and important details on how the deal has been operationalised.

Foundation models

The original proposal of the AI Act did not cover AI systems without a specific purpose. The breakneck success of ChatGPT and other generative AI models disrupted the discussions, prompting lawmakers to reconsider how best to regulate such systems.

The compromise found was to impose a stricter regime on so-called foundation models, powerful AI systems capable of powering other AI applications.

Specifically on generative AI, the MEPs agreed that providers should disclose a summary of the training data covered by copyright law. The fine-tuned text specifies that this summary must be ‘sufficiently detailed’.

In addition, providers of generative foundation models would have to make transparent that their content is AI-generated rather than human-generated.

The fines for foundation model providers breaching the AI rules have been set at up to €‎10 million or 2% of annual turnover, whichever is higher.

High-risk systems

The AI Act establishes a stringent regime for AI solutions at high risk of causing harm. Originally, the proposal automatically categorised as high-risk every system that fell under certain critical areas or use cases listed in Annex III.

However, EU lawmakers have added an ‘extra layer’, meaning that the categorisation will not be automatic. The systems will also have to pose a ‘significant risk’ to qualify as high-risk.

A new paragraph was introduced to better define what significant risk means, stating that it should be assessed considering “on the one hand the effect of such risk with respect to its level of severity, intensity, probability of occurrence and duration combined altogether and on the other hand whether the risk can affect an individual, a plurality of persons or a particular group of persons.”

There were also some last-minute changes to Annex III. MEPs agreed to include the recommender systems of very large online platforms, as designated under the Digital Services Act, as a high-risk category. The latest compromise limits this high-risk category to social media.

AI systems used to influence voting behaviour or the outcome of elections are deemed high-risk. Still, an exception was introduced for AI models whose output is not directly seen by the general public, like tools to organise political campaigns.

A new provision was added mandating that high-risk AI systems comply with accessibility requirements.

In terms of transparency, the text specifies that “affected persons should always be informed that they are subject to the use of a high-risk AI system, when deployers use a high-risk AI system to assist in decision-making or make decisions related to natural persons”.

Upon request from the centre-left, the Parliament’s text includes an obligation for those deploying a high-risk system in the EU to carry out a fundamental rights impact assessment. This impact assessment includes a consultation with the competent authority and relevant stakeholders.

In a new addition to the text, SMEs have been exempted from this consultation provision.

Prohibited practices

The AI law bans applications deemed to pose an unacceptable risk. Progressive lawmakers obtained the expansion of the prohibition on biometric identification systems to both real-time and ex-post use, with an exception for ex-post use in cases of severe crime subject to prior judicial authorisation.

The ban on biometric identification is tough to digest for the centre-right European People’s Party, which has a strong faction in favour of law enforcement. The conservative group has secured a split vote on the biometric bans, held separately from the rest of the compromises, according to a draft voting list seen by EURACTIV.

In addition, a carve-out for therapeutic purposes was introduced into the prohibition on biometric categorisation.

Governance and enforcement

MEPs introduced the figure of the AI Office, a new EU body to support the harmonised application of the AI rulebook and cross-border investigations.

Wording has been added referencing the possibility of reinforcing the Office in the future to better support cross-border enforcement. The reference is to upgrading it to an agency, a solution the current EU budget does not allow for.

In a last-minute tweak, EU lawmakers gave national authorities the power to request access to both the trained and training models of AI systems, including foundation models. The access might occur on-site or, in exceptional circumstances, remotely.

Moreover, the document mentions a proposal to add a provision on professional secrecy for national authorities, taken from the EU General Data Protection Regulation.

On …

Review

The list of elements for the European Commission to consider when evaluating the AI Act was extended to the sustainability requirements, the legal regime for foundation models, and the unfair contractual terms unilaterally imposed on SMEs and start-ups by providers of General Purpose AI.

[Edited by Nathalie Weatherald]
