First in the world and surrounded by debates: the European Artificial Intelligence Act

After almost three years, still a world first: the Artificial Intelligence Act of the European Union seems to have passed the last serious hurdle. On 13 February, the two parliamentary committees concerned approved the text. The balance was not easy to strike: member states wanted to keep criminal justice authorities' options open, while many in Parliament wanted maximum protection for individual rights. The plenary vote is expected later in February, after which the Council of Ministers will have to approve the text. While the committees' approval could not be taken for granted, the plenary and Council votes can be.
Svenja Hahn, a German lawmaker – ironically from the FDP, the German liberal party close to the business world – who was the shadow rapporteur for Renew (the European liberal party) in the Internal Market and Consumer Protection Committee, found that the trilogue agreement (between the Commission, the Parliament and the Council) was not observed in the text voted by COREPER, the committee of the ambassadors of the Member States, as authorities got more leeway in biometric identification, including facial recognition. The issue was whether these tools may be applied in real time or only ex post. The conditions for ex-post biometric identification were also, in her view, rather relaxed. The agreement was that ex-post use of this technology – at the Parliament's insistence – is allowed only where strictly necessary to prevent serious crime and terrorism, has to be based on national legislation, and is subject to prior authorisation by an independent authority. The European Commission is to oversee potential abuses.

During the negotiations, the OECD updated its definition of AI systems, and this definition was finally taken over in the EU act as well.

The final act takes a tiered approach to defining the requirements with which AI systems must comply. It thus seeks to strike a delicate balance – preventing abuse and protecting fundamental rights while not hampering innovation. In addition, law enforcement authorities also want to use AI tools, and there is no reason to deny them this if fundamental rights are observed.

Depending on the risk level, four groups can be distinguished, to which different rules apply in terms of authorisation and transparency.

First of all, as a result of the trilogue agreement, free and open-source software will be excluded from the regulation's scope unless it represents a high-risk or prohibited application or is a solution at risk of causing manipulation. There are two classes of risky systems (high-risk and low-risk), and the fourth class is simply prohibited. Prohibited applications include manipulative techniques, systems exploiting vulnerabilities, indiscriminate scraping of facial images and social scoring. As a result of the agreement, biometric categorisation based on sensitive personal traits like race, political views and religious convictions is only allowed if these traits are directly linked to a specific crime or the threat of one.

Beyond this, systems recognising emotions are not allowed in employment and education – the Council nevertheless made it clear in the final version that systems tracking drivers' sleepiness and tiredness do not fall under this prohibition, since they are important for accident prevention. Finally, the agreement made remote biometric identification possible only to prevent, detect or prosecute serious crime, including terrorism. This is the part that Svenja Hahn did not find satisfactory and considers an infringement of fundamental rights.

An additional open issue was whether these bans should apply only to systems used within the Union or also prevent EU-based companies from selling these prohibited applications abroad. This limitation was not maintained, as it lacks a sufficient legal basis in the Treaties of the EU.

Transparency is the main tool of control, but developers and users must also take certain internal measures, which the supervision mechanism will verify. The requirements differ for the two middle tiers and concern, on the one hand, the "foundation models" – the models that define the "learning methods" by which the systems are trained. The so-called "systemic" models are those above a certain threshold of computing power used for training. These, considered riskier, are subject to more stringent requirements on assessing whether they work properly, to avoid flaws – instilling bias into the products, for example. The four-eyes principle is explicitly prescribed for these evaluations. To avoid bias, or at least to detect it, a sufficiently detailed summary of the training data has to be published. The trilogue agreement inserted a clause that this disclosure must not "infringe trade secrets". On the other hand, the requirements concern the uses of the systems.

Other requirements include a fundamental rights impact assessment where high-risk AI applications are used by public bodies, but also by private entities providing essential public services, such as in healthcare, education, insurance and banking. High-risk systems have to be reported to an EU-wide database. A non-public part will contain applications used in law enforcement, to which only the independent supervisory authority will have access.

The Council succeeded in inserting some exemptions for law enforcement: a derogation from the four-eyes principle – this exception has to be enshrined, however, in national law – and an exemption of sensitive law-enforcement data from the transparency requirements.

Even a high-risk system that has not passed the conformity assessment procedure could be applied, subject to judicial authorisation.

Secondary providers of applications based on general-purpose AI systems like ChatGPT have to receive from the primary provider all the information they need to comply with the AI Act's obligations if the application provided is high-risk under the Act.

Supervision has two levels: national authorities will supervise AI systems. It is not yet clear where new authorities will be created for this purpose and where existing authorities – mainly data protection and consumer rights authorities – will be entrusted with the task. The leaders of these authorities will make up the European Artificial Intelligence Board to ensure consistent application of the law. An advisory forum and a scientific panel will assist the authorities: the advisory forum will provide feedback from stakeholders, while the scientific panel, in an advisory role, will consist of independent experts.

The second layer will be the AI Office, established within the Commission to enforce the foundation model provisions. The EU institutions are to make a joint declaration that the AI Office will have a dedicated budget line. The AI Office can also assign systems to the different tiers.

Enforcement will, of course, involve administrative fines. These can be a fixed amount or a percentage of the company's turnover. Depending on the severity of the infringement, a fine can reach 6.5% of global turnover or €35 million. Lower limits are set for less severe cases.

Most of the rules have to be applied within two years. There are two exceptions: one year for high-risk systems and governance, while banned applications have to be discontinued within six months.
