
EU publishes new Act on Artificial Intelligence

It is the first comprehensive regulation of this important issue at a global level, but its protection of fundamental rights should be strengthened.

18/07/2024
FSG Igualdad y Lucha contra la Discriminación


The Official Journal of the European Union published on 12 July Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence.

The regulation, known as the Artificial Intelligence Act, is the first general regulation with the status of law on this important subject anywhere in the world, and it will condition economic and social development in the coming years.

Fundación Secretariado Gitano would like to highlight that the use of these systems can particularly affect Roma people: for example, in predictive policing systems (which the act does not fully prohibit), in algorithms that amplify and spread anti-Roma hoaxes and fake news, or in automated systems for granting social aid, grants or loans, as has already been seen in several European countries (see the FSG’s 2022 report on Discrimination and Roma Community). We also consider it necessary for Spain to establish a stable mechanism for the participation of civil society in the National Agency for Artificial Intelligence Oversight, including the voices of representatives of the Roma people, in order to prevent the aforementioned biases and guarantee the protection of fundamental rights.

For several years now, Fundación Secretariado Gitano has been working in the field of artificial intelligence and the use of automated systems based on algorithms, in order to prevent possible discriminatory biases that can affect Roma people. The annual report Discrimination and Roma Community 2022 addressed this issue in depth, and in 2022 we also organised a conference with experts together with the National Observatory of Technology and Society (Discriminatory bias in the use of artificial intelligence and algorithms: impact on the Roma community). In parallel, for the last three years we have been collaborating with various networks defending rights in this area (AI and discriminatory bias), such as IA Ciudadana in Spain and, at the European level, the EDRi network and the Justice, Equity and Technology Table of the London School of Economics.

The act lays down:

  1. harmonised rules for the placing on the market, the putting into service, and the use of AI systems in the Union;
  2. prohibitions of certain AI practices;
  3. specific requirements for high-risk AI systems and obligations for operators of such systems;
  4. harmonised transparency rules for certain AI systems;
  5. harmonised rules for the placing on the market of general-purpose AI models;
  6. rules on market monitoring, market surveillance, governance and enforcement;
  7. measures to support innovation, with a particular focus on SMEs, including start-ups.

Who is affected: the Act regulates the use of AI systems by all public administrations and by private companies (with exceptions for defence, military and national security systems).

According to the AI Act, machine learning systems will be divided into four main categories depending on the potential risk they pose to society. Systems considered high risk will be subject to strict rules that will apply before they enter the EU market.

Deadlines for entry into force and implementation:

Entry into force: 1 August 2024.

Implementation: general AI rules will apply one year after entry into force, in August 2025, and obligations for high-risk systems three years after. They will be supervised by national authorities, supported by the AI Office within the European Commission.

The Act recognises the dangers of such systems for vulnerable groups, including ethnic minorities, and stresses the need to monitor possible racial or gender bias so that such systems do not discriminate in any way.

As regards the protection of fundamental rights, artificial intelligence systems for biometric categorisation on the basis of political, religious or philosophical beliefs, ethnic origin or sexual orientation are prohibited. Nor will it be possible to use systems that score people on the basis of their behaviour or personal characteristics, or artificial intelligence capable of manipulating human behaviour.

Systems that expand or create databases of facial data captured indiscriminately from the internet or from audiovisual recordings will also be prohibited.

In general terms, the regulation allows or prohibits the use of artificial intelligence depending on the risk it generates for people and identifies high-risk systems that can only be used if they can be shown to respect fundamental rights. For example, those that can be used to influence the outcome of an election, or those used by financial institutions to assess creditworthiness and establish credit ratings.

Fines for violators range from 7.5 million euros ($8 million) or 1.5 per cent of a company's global turnover up to 35 million euros ($37.6 million) or 7 per cent of global turnover, depending on the infringement.

However, the regulation allows for some exceptions to permit certain uses to ensure national security. This was one of the most controversial points during negotiations between the European Parliament and member states. Thus, security forces will be able to use biometric identification cameras, always with judicial authorisation, to prevent a terrorist threat. In addition, these systems can also be used to locate those responsible for crimes of terrorism, human trafficking and sexual exploitation, as well as to search for victims.

Some weaknesses of the act:

  • The self-assessment of risks of AI companies jeopardises the protection of fundamental rights.
  • Standards for fundamental rights impact assessments are weak.
  • Use of AI for national security purposes may affect fundamental rights.
  • Civic participation in the implementation and enforcement of the Act is not guaranteed.

These weaknesses mean that the Act has not achieved an adequate standard of human rights protection. These improvements would be necessary to protect these rights:

  • Prohibit emotion recognition and limit biometric uses (the Act limits them, but in an unclear way).
  • Create a public registry of algorithms.
  • Ensure equality in AI systems.
  • Encourage the active participation of civil society, including ethnic minorities, who may be particularly affected by these systems. In this sense, the recently created National Agency for Artificial Intelligence Oversight in Spain should establish direct and stable channels of participation with civil society organisations, as established in its statutes, the new Spanish AI Strategy 2024 and the European AI Act itself.