Banned AI applications

The EU’s regulation on artificial intelligence (AI) is a proposed law that aims to create a harmonized and trustworthy framework for the development and use of AI in the EU, based on respect for EU values, fundamental rights and data protection principles (learn more with our articles: AI and GDPR, AI in data protection law, and EU AI Act). The law will apply to all providers and users of AI systems that operate in the EU or offer their services in the digital single market, and will classify AI systems according to their level of risk: unacceptable, high, limited and minimal. For high-risk AI systems it will establish specific requirements and obligations, such as conformity assessment, registration, transparency, human oversight, accuracy, robustness, security and traceability, and for general-purpose and generative AI models it will provide safeguards such as obligations to inform, assess and report. Finally, the law will create a governance and supervision system for AI, composed of a European AI Board and national competent authorities, will grant rights and remedies to users and affected persons, and will impose sanctions and penalties on infringers.

 

European AI Act 

The EU’s regulation on AI adopts a risk-based approach: it sets out different rules and obligations depending on the level of risk that AI systems pose to people’s rights and safety. The regulation defines four risk categories: unacceptable, high, limited and minimal.

 

 

[Image: Assessment for Trustworthy Artificial Intelligence]

 

 

Risk categories

 

  • Unacceptable-risk AI systems are those that contravene the values and principles of the EU, and are therefore prohibited. These are AI systems that can cause irreversible or irreparable harm to people, such as the manipulation of human behavior, the exploitation of people’s vulnerabilities, mass surveillance or generalized social scoring.

 

  • High-risk AI systems are those that can have a significant impact on the rights and freedoms of people, or that can affect essential aspects of their life, such as employment, education, health, justice or security. These AI systems must comply with a series of requirements and obligations before and after they are placed on the market, such as conformity assessment, registration, transparency, human oversight, accuracy, robustness, security and traceability.

 

  • Limited-risk AI systems are those that can have a moderate impact on the rights and freedoms of people, or that raise specific transparency concerns, such as AI systems that interact with people, that recognise emotions or biometric characteristics, or that generate or manipulate images, audio or video. These AI systems must comply with an obligation of transparency: they must inform people that they are interacting with an AI system, and of the capabilities and limitations of the system.

 

  • Minimal-risk AI systems are those that have little or no impact on the rights and freedoms of people, or that have a widely accepted and beneficial use for society, such as AI systems used for leisure, culture, art or entertainment purposes. These AI systems are exempt from the specific obligations of the law, but must still respect the rest of the applicable regulations, such as the GDPR or the e-Commerce Directive. (A schematic summary of the four tiers is sketched in code right after this list.)
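
To make the taxonomy above easier to scan, here is a purely illustrative sketch of the four tiers as a small Python data structure. The tier descriptions and the example use-case mapping are our own assumptions for illustration; the actual classification of any given system is made under the criteria of the regulation itself.

```python
from enum import Enum

class AIRiskTier(Enum):
    """Illustrative summary of the EU AI Act's four risk tiers."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted, subject to strict requirements and conformity assessment"
    LIMITED = "permitted, subject to transparency obligations"
    MINIMAL = "no AI-Act-specific obligations (other laws still apply)"

# Hypothetical example mapping, for illustration only; real classification
# follows the criteria and annexes of the regulation.
EXAMPLE_USE_CASES = {
    "generalized social scoring": AIRiskTier.UNACCEPTABLE,
    "CV screening for recruitment": AIRiskTier.HIGH,
    "customer-service chatbot": AIRiskTier.LIMITED,
    "AI opponent in a video game": AIRiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```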

 

 

Unacceptable risk in AI

Unacceptable-risk AI systems are those that contravene the values and principles of the EU, and are therefore prohibited. These are AI systems that can cause irreversible or irreparable harm to people, such as the manipulation of human behavior, the exploitation of people’s vulnerabilities, mass surveillance or generalized social scoring.

 

Some examples of unacceptable-risk AI systems are the following:

 

  • Cognitive behavioral manipulation: AI systems that manipulate human behavior, opinions or decisions to the detriment of the person’s mental or physical health, or that induce a person to behave, form an opinion or make a decision in a manner that is prejudicial to their interests or rights.

 

  • Social scoring: AI systems that evaluate or classify the trustworthiness, behavior, preferences or personality of natural persons based on their social behavior or personal characteristics, and that have a negative impact on their access to fundamental rights, essential services, benefits or opportunities.

 

  • Biometric identification and categorisation of people: AI systems that identify or categorise natural persons based on their biometric data, such as facial images, fingerprints, DNA, voice, keystrokes or other behavioral signals, and that use sensitive characteristics, such as political, religious or philosophical beliefs, sexual orientation, race, ethnicity, health or disability status, as input or output variables.

 

  • Real-time and remote biometric identification: AI systems that identify or categorise natural persons in real time or remotely based on their biometric data, such as facial images, fingerprints, DNA, voice, keystrokes or other behavioral signals, and that are used in publicly accessible spaces for law enforcement purposes without a valid legal basis, a prior judicial authorisation, or a specific and present threat to public security.

 

 

High risk in AI

High-risk AI systems are those that can have a significant impact on the rights and freedoms of people, or that can affect essential aspects of their life, such as employment, education, health, justice or security. These AI systems must comply with a series of requirements and obligations before and after they are placed on the market, such as conformity assessment, registration, transparency, human oversight, accuracy, robustness, security and traceability (a sketch of what the traceability and human-oversight obligations might look like in practice follows the examples below).

 

Some examples of high-risk AI systems are the following:

 

  • AI systems used for recruitment, selection, evaluation, promotion, dismissal, assignment or allocation of workers, or for access to self-employment or occupation.

 

  • AI systems used for admission, examination, evaluation, certification, accreditation, selection, assignment or allocation of students, learners or participants in education and vocational training.

 

  • AI systems used for the provision of or access to essential private or public services, such as health, social security, transport, energy, telecommunications, water or waste management.

 

  • AI systems used for the prevention, detection, investigation or prosecution of criminal offenses, or for the execution of criminal penalties, such as AI systems used for predictive policing, risk assessment, profiling, identification, surveillance, or evidence evaluation.

 

  • AI systems used for the protection of the health, safety or fundamental rights of people, or for the management of natural or man-made disasters, such as AI systems used for medical diagnosis, prognosis, treatment, or intervention, or for emergency services, civil protection, or humanitarian aid.
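
To make the traceability and human-oversight requirements more concrete, here is a purely illustrative sketch of an audit log for a high-risk system’s automated decisions. The record fields, the model identifier and the confidence threshold are our own assumptions for illustration, not terms or thresholds taken from the regulation.

```python
import json
import logging
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("audit")

@dataclass
class DecisionRecord:
    """One auditable record per automated decision (illustrative schema)."""
    timestamp: str
    model_version: str
    input_summary: str
    decision: str
    confidence: float
    human_review_required: bool

def record_decision(decision: str, confidence: float, input_summary: str) -> None:
    # Log every decision, and flag low-confidence outputs for human oversight.
    record = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version="screening-model-v1",  # hypothetical identifier
        input_summary=input_summary,
        decision=decision,
        confidence=confidence,
        human_review_required=confidence < 0.9,  # assumed review threshold
    )
    audit_log.info(json.dumps(asdict(record)))

# Example: a recruitment-screening decision that a reviewer can later trace.
record_decision("shortlist", 0.84, "candidate CV #1042")
```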

 

 

What are general-purpose and generative AI models?

General-purpose and generative AI models are those that can be used for multiple purposes and applications, and that can be adapted or fine-tuned to different contexts and data sets. These models can pose different levels of risk depending on the use case, and can create challenges for the conformity assessment and supervision of AI systems.

 

Some examples of general-purpose and generative AI models are the following:

 

  • GPT-4: A large-scale language model that can generate natural language texts on various topics, domains and styles, based on a given input or prompt. GPT-4 can be used for various tasks, such as text summarisation, translation, question answering, dialogue, content creation, or code generation.

 

  • DALL-E: A large-scale vision and language model that can generate images from text descriptions, using a discrete variational autoencoder combined with an autoregressive transformer. DALL-E can be used for various tasks, such as image synthesis, manipulation, captioning, or retrieval.

 

  • CLIP: A large-scale vision and language model that learns visual concepts from natural language supervision, and that can perform vision tasks such as zero-shot image classification or image-text retrieval using a text query (a minimal usage sketch follows this list).
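
As a concrete illustration of how such a general-purpose model can be reused across tasks, here is a minimal zero-shot image-classification sketch using the Hugging Face transformers and Pillow libraries, assuming the openly available openai/clip-vit-base-patch32 checkpoint; the image URL and candidate labels are placeholders to swap for your own.

```python
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a pre-trained CLIP checkpoint from the Hugging Face Hub.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder image and candidate labels.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

# Score the image against each text label and normalise to probabilities.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```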

 

 

 

Limited risk in AI

Limited-risk AI systems are those that can have a moderate impact on the rights and freedoms of people, or that raise specific transparency concerns, such as AI systems that interact with people, that recognise emotions or biometric characteristics, or that generate or manipulate images, audio or video. These AI systems must comply with an obligation of transparency: they must inform people that they are interacting with an AI system, and of the capabilities and limitations of the system (a minimal example of such a disclosure is sketched after the list of examples below).

 

Some examples of limited-risk AI systems are the following:

 

  • AI systems that use natural language processing, speech recognition, or speech synthesis to communicate with people, such as chatbots, voice assistants, or conversational agents.

 

  • AI systems that use computer vision, facial expression analysis, or voice analysis to recognise or infer the emotions, moods, or preferences of people, such as emotion recognition, sentiment analysis, or preference learning.

 

  • AI systems that use computer vision, generative adversarial networks, or other deepfake techniques to create, modify, or enhance images, audio, or video, such as image synthesis, manipulation, or restoration, or audio or video synthesis, manipulation, or editing.
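
To ground the transparency obligation, here is a purely illustrative sketch of a chatbot session that discloses up front that the user is talking to an AI system and states its capabilities and limitations. The disclosure text, the bot behaviour and the generate_reply() helper are all hypothetical placeholders, not wording taken from the regulation.

```python
# Illustrative transparency disclosure for a limited-risk chatbot.
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "It can answer questions about our products, but it may make mistakes "
    "and cannot give legal or medical advice."
)

def generate_reply(user_message: str) -> str:
    # Placeholder for the actual language-model call.
    return f"(echo) {user_message}"

def chat_session() -> None:
    print(AI_DISCLOSURE)  # shown before any interaction starts
    while True:
        message = input("> ")
        if message.lower() in {"quit", "exit"}:
            break
        print(generate_reply(message))

if __name__ == "__main__":
    chat_session()
```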

 

 

[Image: Assessment for Trustworthy Artificial Intelligence]

 

 

The cybersecurity solutions from Seifti:

Seifti is a company that provides cybersecurity and data protection services for all types of businesses.

 

We offer a variety of cybersecurity solutions, including consulting services, threat detection, certifications, and phishing tests.

 

Seifti’s cybersecurity consulting services are designed to help organizations protect their assets and data from cyber threats and enhance their overall cybersecurity posture. Additionally, Seifti provides real-time monitoring and threat detection, enabling companies to swiftly detect and respond to cyber threats.

 

Furthermore, Seifti offers data protection solutions, including Record of Processing Activities (ROPA) and ad-hoc data protection consulting services. These services can assist businesses in complying with data privacy regulations and safeguarding confidential information.

 

Don’t waste any more time—contact us now!

 
