AI Act: EU countries mull options on fundamental rights, sustainability, workplace use


The AI Act is the top digital priority of the Spanish presidency of the EU Council of Ministers [Alexandros Michailidis/Shutterstock]

The Spanish presidency circulated three discussion papers on Friday (13 October) to gather EU countries’ feedback on key aspects of the AI law ahead of an upcoming negotiation session: fundamental rights, sustainability obligations and workplace decision-making.

The AI Act is a landmark legislative proposal to regulate Artificial Intelligence based on its potential to cause harm. Since they took over the presidency of the EU Council of Ministers, the Spaniards have prioritised closing the negotiations on the file with the European Parliament and Commission.

The Telecom Working Party, a technical body of the Council, will discuss the possible flexibility to accord to the lead negotiators on Tuesday and Thursday. On Friday, the file will land on the table of the Committee of Permanent Representatives, with a view to providing a revised negotiating mandate for the next political trilogue with the other EU institutions on 25 October.

The three discussion papers, seen by Euractiv, are meant to assess the possible room for manoeuvre the presidency can have in negotiating with the other institutions.

Fundamental rights impact assessment

MEPs have introduced an obligation for users of AI systems at high risk of causing harm to conduct a fundamental rights impact assessment.

The presidency seems open to the idea “for the purpose of reaching a global agreement”, and “to cover residual risks that can’t be foreseen from the side of the provider because they are linked to the concrete use of the high-risk AI system”.

Madrid offered three options, but even in the one closest to the European Parliament’s version, the scope of the provision is narrowed to public bodies only, on the grounds that private companies will have to comply with similar obligations under the upcoming Corporate Sustainability Due Diligence Directive (CSDDD).

The first option is based on the MEPs’ version, but it is limited to elements not identified in previous processes, allows users to call on the AI provider’s collaboration only for information not available in the instructions for use, and entails notifying the market surveillance authority.

The second version makes the obligations broader and only requires users to submit information to the market surveillance authorities via a template, such as an online form.

The third version would further simplify the measure and merge it with the other obligations for high-risk AI systems rather than keeping it as the stand-alone article proposed by the Parliament.

“In all the cases, in order to facilitate assessment for the deployer, it could be positive to include a modification in the instructions of use, to include a summary on the risks identified in the risk management process,” the paper reads.


Workplace provisions

Centre-left lawmakers in the European Parliament tightened the conditions for using AI in the workplace in two important respects.

Firstly, member states would be empowered to introduce national measures to protect workers’ rights when AI systems are used, a provision EU countries have agreed to, but only insofar as the AI presents a significant risk.

Secondly, and more controversially, MEPs want workers’ representatives to be consulted before an AI model is deployed in the workplace and the affected employees informed in accordance with the relevant EU law.

Here, besides accepting or rejecting the proposal outright, the presidency suggests a possible middle ground: merely informing workers’ representatives rather than consulting them. Meanwhile, the European Commission has been working on a legislative proposal to regulate algorithmic management for the next mandate.


Sustainability obligations

Both co-legislators introduced sustainability-related provisions for AI models, but the European Parliament went much further under pressure from the Greens. For instance, the MEPs’ text includes environmental harm among the criteria for assessing whether a system is high-risk.

For the presidency, this approach is not legally sound “as it activates the obligation to comply with the requirements due to sustainability aspects, but these requirements are designed on the basis of health, safety and fundamental rights or forcing developers of foundation models to use the energy efficiency standards available”.

Similarly, the Parliament wants high-risk AI systems to keep tabs on energy consumption, but the Spaniards consider that “high-risk does not necessarily mean that it consumes more energy”.

By contrast, the presidency seems ready to reinforce the wording of the article on codes of conduct to include energy efficiency protocols and data management, and to task the Commission with issuing a standardisation request for energy-efficient programming.

In other words, the Spanish government wants to remove sustainability from the high-risk requirements and make it part of a technical standard that AI providers can voluntarily adopt.

[Edited by Nathalie Weatherald]
