Microsoft and its “Responsible AI Standard”

It is no surprise that artificial intelligence is increasingly present in our daily lives: practically every device we use each day contains some piece of it, and it has changed the way we understand things. But we also know that artificial intelligence can lead to serious intrusions into our privacy, and concerns about this are becoming more common among users of all types of devices.

And one of the biggest technology giants knows these aspects of artificial intelligence better than anyone: Microsoft.

Well aware of the advantages that artificial intelligence can offer (such as permanent availability, large-scale communication, and faster, more accurate decisions), Microsoft has spent years developing new technologies that bring these benefits to users.

But if there is one thing we must pay attention to when we talk about AI, it is the implications it can have for people's privacy and data protection, and Microsoft is no stranger to this. A clear example is how it has been moving to retire facial recognition features that claim to identify emotions, and to limit new users' access to these technologies.

In this context, the technology giant has published its new “Responsible AI Standard”, the product of years of effort to define its product development requirements for responsible AI.

With this Standard, Microsoft builds on its six AI principles (Accountability, Transparency, Fairness, Reliability and Safety, Privacy and Security, and Inclusiveness). Regarding the privacy principle, it is worth noting that Microsoft had already stated that, as AI becomes more prevalent, protecting privacy will become more complex and critical.

Microsoft has also acknowledged a “gap” between the unique risks of AI and the laws currently available, and that the Standard was necessary to give its product development teams concrete guidance. By publishing it, Microsoft aims to share what it has learned, invite feedback from everyone, and help build best-practice standards around AI.

Microsoft’s goals regarding AI

With all of this said, it is time to review the goals that Microsoft sets out in its new Standard.

  1. Accountability Goals
    • Goal A1: Impact assessment.
    • Goal A2: Oversight of significant adverse impacts.
    • Goal A3: Fit for purpose.
    • Goal A4: Data governance and management.
    • Goal A5: Human oversight and control.
  2. Transparency Goals
    • Goal T1: System intelligibility for decision making.
    • Goal T2: Communication to stakeholders.
    • Goal T3: Disclosure of AI interaction.
  3. Fairness Goals
    • Goal F1: Quality of service.
    • Goal F2: Allocation of resources and opportunities.
    • Goal F3: Minimization of stereotyping, demeaning, and erasing outputs.
  4. Reliability and Safety Goals
    • Goal RS1: Reliability and safety guidance.
    • Goal RS2: Failures and remediations.
    • Goal RS3: Ongoing monitoring, feedback and evaluation.
  5. Privacy and Security Goals
    • Goal PS1: Privacy Standard compliance.
    • Goal PS2: Security Policy compliance.
  6. Inclusiveness Goal
    • Goal I1: Accessibility Standards compliance.

“While our Standard is an important step in Microsoft’s responsible AI journey, it is just one step. As we make progress with implementation, we expect to encounter challenges that require us to pause, reflect, and adjust.”

– Microsoft

We will see how this Standard evolves as a living document, and what risks and challenges around AI appear for Microsoft over time.
