Open Ethics Maturity Model

OEMM v1.0.0

The Open Ethics Maturity Model (OEMM) is a five-level framework that guides organizations on a journey toward transparent governance of AI and autonomous systems.

What is it?

The framework offers a systematic approach across a set of levels, from awareness to governance. Each level focuses on making the organization more open and transparent, improving its ethical posture, and covering elements such as team, accountability, risk assessment, robustness, privacy, and policy compliance.

Open Ethics Maturity Model and its 5 Levels

Who is it for?

The Open Ethics Maturity Model was created for any organization that:

  • aims to instill, strengthen, and continually refine responsible practices for its autonomous systems throughout all stages of development and implementation.
  • wants to build transparent and trustworthy technology that adheres to ethical and value-based considerations.
  • would like to be equipped and mature enough to tackle ethical issues while staying consistent with the guidelines and regulatory standards in the area.

The unique feature of the OEMM stems from the “open” aspect of the framework: it begins by fostering an internal culture of transparency within the organization and gradually extends this ethos outward, making practices transparent to stakeholders and the wider public.


How is OEMM designed?

Having governance as the north star of the journey allows the organization to steer toward alignment through concrete actions, enhance overall transparency, and adapt more easily when adopting solutions in the complex and dynamic landscape of AI.

The Open Ethics Maturity Model is designed to be agnostic, allowing for the use of any external tools and technology. At the same time, the Open Ethics Initiative provides open tools and technology for navigating the levels of the model.

“Great things are not done by impulse, but by a series of small things brought together.” – Vincent van Gogh

Levels of the Open Ethics Maturity Model

Level 1: Awareness

Tools: Open Ethics Canvas

The primary focus of this level is on setting up and aligning the teams, increasing awareness within the organization, and kick-starting the responsible technology roadmap. The Open Ethics Canvas can be used to enable a multidisciplinary dialogue and to inform strategy and policy.

Level 2: Transparency

Tools: Open Ethics Label & Open Ethics Transparency Protocol

This level engages the team to identify the boundaries for transparency through internal dialogue, establish the first disclosure, and satisfy the primary policy requirements. The Open Ethics Label (OEL) can be used by the organization to publish the first disclosure, describing the training data, algorithms, and decision space. The Open Ethics Transparency Protocol (OETP) documents how to generate, host, share, validate, and verify machine-readable disclosures in a standardized, explicit way without compromising IP or security. The organization should make the disclosure publicly available as a machine-readable file and, optionally, display it using a set of standard OEL icons.
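As a loose illustration of what a machine-readable disclosure might look like, the sketch below builds and checks a minimal disclosure record in Python. The field names (`training_data`, `algorithms`, `decision_space`) mirror the elements named above, but they are assumptions made for this sketch, not the official OETP schema.

```python
import json

# Hypothetical disclosure record; field names are illustrative
# assumptions, not the official OETP schema.
disclosure = {
    "product": "Example Chat Assistant",
    "training_data": {"origin": "proprietary", "personal_data": False},
    "algorithms": {"source": "open", "type": "transformer"},
    "decision_space": {"restricted": True, "human_in_the_loop": True},
}

def validate_disclosure(d):
    """Check that the three core disclosure sections are present."""
    required = {"training_data", "algorithms", "decision_space"}
    return required.issubset(d)

assert validate_disclosure(disclosure)
# Serialize so it can be hosted as a public machine-readable file.
print(json.dumps(disclosure, indent=2))
```

A real OETP disclosure would additionally need to be hosted, signed, and verifiable; the point here is only that the disclosure is structured data a machine can fetch and check, not free-form prose.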

Level 3: Integration

Tools: Data Passport

This level centers on integrating the training metadata (data about the training datasets and models) into the transparency disclosure, as well as assessing the risks associated with AI model deployment. A more in-depth disclosure of the models and datasets can be established and published using the Data Passport, which should also be integrated into the disclosure. These tools can be used iteratively to ensure continual improvement and safeguarding.
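As a rough sketch of the kind of training metadata this level integrates, the Python below describes a hypothetical dataset and model and checks that every dataset a model references is itself described. All field names and the `link_models_to_datasets` helper are illustrative assumptions, not the actual Data Passport format.

```python
# Hypothetical dataset and model metadata records; the fields here
# are assumptions for the sketch, not the Data Passport schema.
dataset_passport = {
    "name": "support-tickets-2023",
    "records": 120_000,
    "collection": "user-submitted tickets, consent obtained",
    "annotation": "two independent labelers, adjudicated",
    "known_limitations": ["English only", "enterprise customers only"],
}

model_card = {
    "model": "ticket-triage-v2",
    "trained_on": ["support-tickets-2023"],
    "intended_scope": "routing of incoming support tickets",
    "out_of_scope": ["medical or legal advice"],
}

def link_models_to_datasets(models, datasets):
    """Verify that every dataset a model references is itself described."""
    described = {d["name"] for d in datasets}
    return all(set(m["trained_on"]) <= described for m in models)

assert link_models_to_datasets([model_card], [dataset_passport])
```

The cross-check mirrors the outcome this level aims for: no model enters the disclosure without its training datasets being listed and described alongside it.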

Level 4: Transformation

Tools: eXplainability Protocol

The focus of this level lies in transforming the products and processes to make them more robust. The organization evaluates key vendors and continues to mitigate the associated risks. Safety measures are put in place, and the AI is assessed using the fairness and bias metrics scoped in the roadmap. The eXplainability Protocol (OEXP) can be used to enhance the explainability of the system’s outputs.
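As one concrete example of a fairness metric that could be scoped in such a roadmap, the sketch below computes the demographic parity difference, i.e. the gap in positive-outcome rates between groups, on toy data. The OEMM does not prescribe this particular metric; it is shown only to make the assessment step tangible.

```python
def demographic_parity_difference(outcomes, groups):
    """Gap between the highest and lowest positive-outcome rate
    across groups. outcomes: list of 0/1 decisions; groups: a
    parallel list of group labels."""
    rates = {}
    for g in set(groups):
        selected = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Toy decisions: group A is approved 3/4 of the time, group B 1/4.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
assert demographic_parity_difference(outcomes, groups) == 0.5
```

A value of 0 would indicate equal approval rates; the roadmap would define which metrics to track and what thresholds trigger mitigation.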

Level 5: Governance

Tools: Big Red Button and the 3rd Party Audit

This level focuses on establishing continuous improvement programs, integrating structured feedback from human oversight into the machine learning lifecycle, and conducting regular 3rd-party audits. Governance requires continuous improvement, which is crucial for the organization’s agility in a dynamic business and regulatory environment. Facilitating access for a 3rd Party Audit enables verification of the disclosure’s accuracy, while the Big Red Button can be used to report algorithmic misuse, abuse, or discrimination.


Tools and Impact

The tools support impact across eight dimensions: Culture, Transparency, eXplainability, Redress, Accountability, Control, Fairness, and Privacy.

  • Open Ethics Canvas
  • OEXP – eXplainability Protocol
  • OETP – Transparency Protocol
  • BRB – Incident Reporting
  • Open Ethics Label
  • Open Ethics Data Passport

Actions and Outcomes

To provide a comprehensive overview of the OEMM framework and its approach, we curated a table delineating the actions and corresponding outcomes at each stage. This table serves as a roadmap, guiding organizations through the journey from awareness to governance. Stakeholders can use it to plan the sequential progression of their initiatives and anticipate outcomes, facilitating informed decision-making and strategic alignment with the overarching goals of the OEMM framework.

Level 1: Awareness

Outcomes:
  1. AI governance team is set up
  2. Responsibilities are defined
  3. Ethical technology mission and vision are aligned
  4. Policy roadmap is created

Actions:
  • Identify the members of the cross-functional team
  • Establish clear lines of responsibility for the actions and decisions made by AI systems
  • Define the objectives, scope, and core components of the policy roadmap
  • Ensure all AI endeavours are aligned with the policy landscape

Tools: Open Ethics Canvas

Level 2: Transparency

Outcomes:
  1. State of the art of the organization’s technology ethics is evaluated
  2. Boundaries for transparency are defined
  3. Transparent disclosure is made and openly published
  4. All policies’ transparency requirements are satisfied

Actions:
  • Conduct internal conversations on which elements of the technology and which technological processes will benefit from the disclosure
  • Establish transparent practices (such as self-disclosure) for the organization’s digital products
  • Identify elements of the IP that should not be disclosed
  • Ensure the transparent practices are also made available and are understandable and transparent to the public

Tools: Open Ethics Label, Open Ethics Transparency Protocol

Level 3: Integration

Outcomes:
  1. Models and datasets are listed and described
  2. Every AI model’s scope is well-defined and correctly applied in the operational environment
  3. The model description is integrated into the transparency disclosure
  4. Risk assessment is conducted

Actions:
  • Assess data practices (data collection, annotation, use)
  • Describe models
  • Describe datasets on which the models are trained
  • Make descriptions of the models and datasets part of the public disclosure
  • Set up processes for detecting bias in the AI models and datasets
  • Assess risks based on value-violation scenarios and make informed decisions accordingly
  • Use the bias-spotting and risk-assessment tools iteratively throughout all machine learning lifecycle phases

Tools: Data Passport

Level 4: Transformation

Outcomes:
  1. XAI practices are implemented
  2. Failure modes are listed and made public
  3. Downstream vendors are informed about the failure modes and about how the technology works and fails
  4. Upstream vendors are listed and the risks related to these vendors are assessed
  5. AI is evaluated against the scoped fairness and bias metrics
  6. Biases are mitigated and equitable treatment across target groups is ensured
  7. Privacy safeguards protect sensitive data and AI systems adhere to privacy regulations
  8. Safety measures to protect against adversarial attacks, data breaches, and unauthorized access are put in place
  9. A clear robustness metric is defined and published, making it evident when the system is not coping well with errors, noise, and unexpected inputs

Actions:
  • List and identify downstream and upstream vendors
  • Describe the failure modes of the system at the most granular level possible
  • Define the list of fairness metrics and mitigate bias to ensure correct behaviour
  • Implement safety measures to prevent adversarial attacks and data breaches
  • Introduce metrics to evaluate the robustness of your system
  • Provide a user-friendly explanation of how your system has generated its output

Tools: eXplainability Protocol

Level 5: Governance

Outcomes:
  1. Human oversight is structurally integrated with explainability measures
  2. Continual monitoring is set up
  3. Traceability of all actions is ensured
  4. Mechanisms of redress are implemented
  5. Incident reporting is set up and connected to a public incident database
  6. Regular staff training is put in place
  7. Continuous improvement programs go live

Actions:
  • Implement mechanisms to enable understanding of the AI system’s decisions and actions
  • Maintain comprehensive records of the AI system’s operations and decision-making processes
  • Incorporate mechanisms for human oversight and intervention when necessary
  • Regularly monitor AI systems for performance, safety, and ethical concerns, and update them as necessary

Tools: Big Red Button, 3rd Party Audit


A new project?

If you want Open Ethics to help your organization on its journey toward transparency and safety with the OEMM framework, send us an email.

[email protected]