The European Union’s AI proposal: implications and key changes.
For the European Union, Artificial Intelligence has two main aspects, and both must be understood to grasp the meaning of this new proposal.
On the one hand, for the EU, AI is a “rapidly evolving set of technologies that can generate a wide range of economic and societal benefits across sectors and societal activities.”
However, the EU stresses at the same time that “the same elements and techniques that enhance the socio-economic benefits of AI may also give rise to new risks or negative consequences for individual people or society as a whole.”
In these two reflections lies the main goal the European Union is trying to achieve with its new regulation: to strike a balance.
While it is true that AI-based technologies can improve prediction, optimize operations and resource allocation, and personalize service delivery, thereby facilitating positive social and environmental outcomes and providing essential competitive advantages to companies and the European economy, there will always be a number of risk factors that we must take into account.
In this social and business context, where Artificial Intelligence is increasingly present in our lives and in the lives of virtually all companies, the Commission proposes a regulatory framework on artificial intelligence with the following specific objectives:
- Ensure that AI systems introduced and used in the EU market are safe and respect existing legislation on fundamental rights and values of the Union.
- Ensure legal certainty to facilitate investment and innovation in AI.
- Improve governance and effective enforcement of existing fundamental rights legislation and security requirements applicable to AI systems.
- Facilitate the development of a single market to make legal, safe and reliable use of AI applications and avoid market fragmentation.
Once these objectives are defined, the EU explains that, in order to achieve them, the proposal takes an approach based on balance and proportionality: it limits itself to establishing the minimum requirements necessary to address AI-related risks and problems, without hindering or preventing proper technological development or disproportionately increasing the cost of introducing AI-based solutions.
With all these objectives and ideals present in the proposal, it also establishes a clear risk-based approach, which is not restrictive and adapts to specific situations, creating a legal framework with flexible mechanisms that also allow for “dynamic” adaptation as the technology evolves or as new situations arise that have the potential to raise concerns.
A risk-based approach and a special mention of “prohibited practices”.
As noted above, the approach taken by the EU in its new proposal is a risk-based one.
Taking into account that the use of Artificial Intelligence may have negative repercussions for the fundamental rights contained in the Charter of Fundamental Rights of the European Union, and given the particular characteristics of AI such as its complexity, opacity or dependence on data, the aim is to ensure a high level of protection of fundamental rights, while at the same time addressing sources of risk by taking a risk-based approach.
The proposal’s main objective is to avoid intrusions on rights such as the right to human dignity (Article 1), respect for private and family life and protection of personal data (Articles 7 and 8), non-discrimination (Article 21) and equality between women and men (Article 23). It also acknowledges certain restrictions on rights such as the freedom to conduct a business, “with a view to ensuring that overriding purposes of general interest relating to areas such as health, safety, consumer protection and the protection of other fundamental rights (“responsible innovation”) are respected when developing and using high-risk AI technology”; these restrictions are in any case proportionate and limited to what is necessary to prevent and reduce serious risks to safety and violations of fundamental rights.
The Regulation distinguishes between uses of AI that generate:
i) an unacceptable risk,
ii) a high risk,
iii) a limited risk, and
iv) a minimal risk.
Among these risk levels, Title II of the proposal sets out what are referred to as prohibited practices: it covers all AI systems whose use is considered unacceptable because it is contrary to the values of the Union, for example, because it violates fundamental rights.
The prohibitions cover practices with a high potential to manipulate people through subliminal techniques beyond their conscious awareness, or to exploit the vulnerabilities of specific vulnerable groups, such as minors or persons with disabilities (toys with subliminal messages, for example), in order to substantially alter their behavior in a way that is likely to cause physical or psychological harm to them or to others.
The proposal also prohibits AI-based social scoring by public authorities for general purposes and, with limited exceptions, the use of “real-time” remote biometric identification systems in publicly accessible spaces for law enforcement purposes.
For these reasons, the proposal aims to considerably strengthen the EU’s role in shaping global standards and promoting trustworthy AI that is in line with the Union’s values and interests, giving the Union a solid foundation for a deeper dialogue with its external partners, in particular third countries, and in international fora on AI-related issues.
Its implications, and what they mean for the future of AI, remain to be seen.