EU AI Act


The EU AI Act is a regulation that aims to create a harmonised and trustworthy framework for the development and use of artificial intelligence (AI) in the EU, grounded in EU values, fundamental rights and data protection principles.

 

 

What is the EU AI Act?

The EU AI Act is the EU's first comprehensive regulation of AI. It is based on the European Commission's 2021 proposal, on which the Council and the European Parliament reached a political agreement in December 2023. The law defines an AI system as a machine-based system that operates with varying levels of autonomy and that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions. It aims to regulate AI according to its risks, to foster investment and innovation in the field of AI, and to protect the rights and values of European citizens.

 

Do you want to know more about AI and data protection? Read our articles on AI and the GDPR and AI in data protection law.

 

 

When will the EU AI Act come into force?

The AI Act will enter into force 20 days after its publication in the Official Journal of the EU and will apply in full two years after its entry into force, except for some specific provisions that apply earlier (such as the prohibitions) or later. This means that most obligations will apply from 2026, and that providers and users of AI systems will have a transition period to adapt to the new rules.

 

 

Assessment for Trustworthy Artificial Intelligence

 

 

What the European Parliament wants in the AI legislation

Some of the main points that the Parliament wants in the AI legislation are the following:

 

  • A broader definition of high-risk AI systems, which should include the potential impact and the level of human oversight of the AI system.

 

  • A list of prohibited AI practices, which would include the use of AI for mass surveillance, social scoring, real-time remote biometric identification in publicly accessible spaces, or predictive policing.

 

  • A human-centric and human-in-command approach to AI, which ensures that human dignity, autonomy and self-determination are respected and protected, and that human oversight and intervention are possible and effective at any stage of the AI system’s life cycle.

 

  • A strong and harmonised enforcement framework for AI, which ensures a high level of compliance and legal certainty, and which prevents the fragmentation of the internal market. 

 

  • A balanced and innovation-friendly AI regulation, which fosters the development and uptake of trustworthy and human-centric AI in the EU, and which supports the competitiveness and the digital sovereignty of the EU in the global AI landscape. 

 

 

AI Act Scope of Application

The AI Act applies to all providers and users of AI systems that operate in the EU or that offer their services in the digital single market, regardless of their place of establishment. This means that the law affects both European and foreign companies that want to access the European AI market.

 

By contrast, the law does not apply to AI systems used exclusively for military or defence purposes, nor to matters outside the scope of EU law, such as the competences of the Member States in the field of national security.

 

In addition, the law provides for some exceptions for AI systems that are used for research and innovation purposes, as long as certain conditions are met, such as that the use is temporary, limited and controlled, that the rights and freedoms of people are respected, and that adequate security measures are adopted.

 

Learn more about banned AI applications.

 

 

AI Act: Risk-based Approach to AI Regulation

The law classifies AI systems into four categories of risk: unacceptable, high, limited and minimal.

 

  • Unacceptable risk AI systems are those that contravene the values and principles of the EU, and that are therefore prohibited. 

 

  • High-risk AI systems are those that can have a significant impact on the rights and freedoms of people, or that can affect essential aspects of their life, such as employment, education, health, justice or security. 

 

  • Limited-risk AI systems are those that pose specific transparency risks, such as AI systems that interact with people (for example chatbots), that recognise emotions or biometric characteristics, or that generate or manipulate images, audio or video. These systems are subject to transparency obligations so that people know they are dealing with AI.

 

  • Minimal risk AI systems are those that have a low or no impact on the rights and freedoms of people, or that have a widely accepted and beneficial use for society, such as AI systems that are used for leisure, culture, art or entertainment purposes.
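The four-tier classification above can be sketched as a small data structure. This is purely illustrative: the example use cases and obligation summaries are assumptions for demonstration, not classifications taken from the Act itself.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk categories described by the AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict requirements apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated


# Hypothetical mapping of example use cases to tiers, for illustration only;
# actual classification is made under the criteria set out in the Act.
EXAMPLE_CLASSIFICATION = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "AI opponent in a video game": RiskTier.MINIMAL,
}


def obligations(tier: RiskTier) -> str:
    """One-line summary of what each tier implies (simplified)."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited",
        RiskTier.HIGH: "conformity assessment, registration, human oversight",
        RiskTier.LIMITED: "transparency obligations",
        RiskTier.MINIMAL: "no specific obligations",
    }[tier]
```

The point of the tiered design is that regulatory burden scales with risk: most everyday AI applications fall into the minimal tier and face no new obligations.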

 

 

Key requirements for high-risk AI systems

 

1. Conformity assessment: High-risk AI systems must undergo a conformity assessment before being placed on the market or put into service, in order to verify that they comply with the requirements and obligations of the law. The conformity assessment can be carried out by the provider itself or by a notified body, which is an independent organization designated by the Member States to assess the compliance of AI systems.

 

2. Registration: High-risk AI systems must be registered in a European database, which will be accessible to the public and to the authorities, and which will contain information about the AI system, such as its name, description, provider, intended use, risk category, conformity assessment and contact details.

 

3. Transparency: High-risk AI systems must provide clear and meaningful information to the users and the affected persons about the AI system’s capabilities, limitations and potential risks, as well as about the human oversight and intervention mechanisms. High-risk AI systems must also inform the users and the affected persons when they are subject to an automated decision-making process, and provide them with the logic, criteria and consequences of such a process.

 

4. Human oversight: High-risk AI systems must ensure that human oversight and intervention are possible and effective at any stage of the AI system’s life cycle, and that the final decision or review is always taken by a human. High-risk AI systems must also allow the users and the affected persons to express their views, to obtain an explanation and to challenge the automated decision-making process.

 

5. Accuracy: High-risk AI systems must ensure that the data and the algorithms used for the AI system are accurate, relevant, representative, up-to-date and free from errors or biases, and that the AI system’s performance is monitored and evaluated regularly.

 

6. Robustness: High-risk AI systems must ensure that the AI system is resilient, secure and reliable.

 

7. Security: High-risk AI systems must ensure that the AI system is protected from unauthorized access, use or modification, and that it complies with the applicable cybersecurity standards and regulations.

 

8. Traceability: High-risk AI systems must ensure that the AI system’s functioning, data and processes are documented, recorded and accessible, and that the AI system’s decisions and actions are traceable and verifiable.
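The registration entry described in point 2 above can be pictured as a simple record. The field names below mirror the information listed in the text; the official database schema may differ, and the sample values are invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class HighRiskRegistration:
    """Sketch of a high-risk AI system entry in the EU database.

    Fields follow the information listed in the text (name, description,
    provider, intended use, risk category, conformity assessment, contact
    details); the real schema is defined by the Act's annexes.
    """
    name: str
    description: str
    provider: str
    intended_use: str
    risk_category: str
    conformity_assessment: str
    contact_details: str


# Invented example entry, for illustration only.
entry = HighRiskRegistration(
    name="ExampleScreen",
    description="Automated CV-screening system",
    provider="Example Corp",
    intended_use="Pre-selection of job applicants",
    risk_category="high",
    conformity_assessment="provider self-assessment",
    contact_details="compliance@example.com",
)
```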

 

 

Safeguards for general-purpose AI models

General-purpose AI models are those that can be used for multiple purposes and applications, and that can be adapted or fine-tuned to different contexts and data sets. Because their level of risk depends on the use case, they pose challenges for the conformity assessment and supervision of AI systems. The Act therefore imposes transparency obligations on providers of general-purpose models, such as technical documentation and a summary of the content used for training, with additional requirements, including model evaluation and incident reporting, for models that pose systemic risks.

 

 

Enforcement framework and penalties

The law grants users and affected persons who consider that their rights or interests have been infringed by an AI system the right to lodge a complaint, and grants those who have suffered damage or harm as a result of an AI system the right to an effective judicial remedy; consumer organizations and data protection authorities may also exercise these rights on their behalf. The law also provides for collective redress actions for users and affected persons harmed by the same AI system.

 

Sanctions and penalties: The law empowers the national competent authorities to impose administrative fines on providers and users of AI systems that do not comply with its requirements and obligations. The fines vary depending on the nature, gravity and duration of the infringement, and for the most serious infringements, such as the use of prohibited AI practices, may reach up to 7% of annual worldwide turnover or 35 million euros, whichever is higher; lower ceilings apply to other violations and to the provision of false or misleading information.
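The "whichever is higher" mechanic of these ceilings is simple arithmetic and can be sketched as follows. The rate and fixed cap are parameters, since they differ per infringement tier; the company turnovers used below are hypothetical.

```python
def fine_ceiling(annual_worldwide_turnover_eur: float,
                 turnover_pct: float,
                 fixed_cap_eur: float) -> float:
    """Maximum fine: a fixed amount or a percentage of worldwide
    annual turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_pct * annual_worldwide_turnover_eur)


# Top tier (most serious infringements): 7% or EUR 35 million.
# Large company, EUR 2 billion turnover: the percentage dominates.
large = fine_ceiling(2_000_000_000, 0.07, 35_000_000)   # EUR 140 million

# Smaller company, EUR 100 million turnover: the fixed cap dominates.
small = fine_ceiling(100_000_000, 0.07, 35_000_000)     # EUR 35 million
```

The design means the ceiling scales with company size while still guaranteeing a meaningful floor for smaller operators.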

 

 

What next for the AI Act?

The AI Act is also expected to face some challenges and opportunities, such as the following:

 

  • The AI Act will have to balance the need to ensure a high level of protection and trust in AI, with the need to foster innovation and competitiveness in the field of AI, and to avoid creating unnecessary burdens or barriers for the providers and the users of AI systems.

 

  • The AI Act will have to adapt to the fast and dynamic evolution of AI, and to the emergence of new technologies, applications and risks, and to ensure that the regulation is future-proof, flexible and responsive to the changing reality of AI.

 

  • The AI Act will have to cooperate and coordinate with other existing or upcoming regulations and initiatives that are related to AI, such as the GDPR, the Digital Services Act, the Digital Markets Act, the Data Governance Act, the European Data Strategy, the European Green Deal, the European Democracy Action Plan, or the European Digital Rights Charter.

 

The AI Act is a bold and ambitious initiative that reflects the EU's vision and leadership on AI, and that sets a global standard for the ethical and responsible development and use of AI. The AI Act is a key step towards achieving a human-centric and trustworthy AI that respects the rights and values of the people, and that contributes to the common good, sustainable development and social progress.

 

 


The cybersecurity solutions from Seifti:

Seifti is a company that provides cybersecurity and data protection services for all types of businesses.

 

We offer a variety of cybersecurity solutions, including consulting services, threat detection, certifications, and phishing tests.

 

Seifti’s cybersecurity consulting services are designed to help organizations protect their assets and data from cyber threats and enhance their overall cybersecurity posture. Additionally, Seifti provides real-time monitoring and threat detection, enabling companies to swiftly detect and respond to cyber threats.

 

Furthermore, Seifti offers data protection solutions, including Record of Processing Activities (ROPA) and ad-hoc data protection consulting services. These services can assist businesses in complying with data privacy regulations and safeguarding confidential information.

 

Don’t waste any more time—contact us now!

 

 
