ReAL – Requirements for Autonomy Levels


This document is published for the purpose of soliciting feedback and comments. We encourage stakeholders to review the document and submit their inputs, suggestions, and concerns to inform the development of the final version. Comments should be directed to [email protected].

A legal mandate

Regulations such as Europe’s GDPR and the proposed US Bot Disclosure and Accountability Act emphasize transparency around decisions made without human involvement. The OECD AI Principles and the EU Artificial Intelligence Act likewise stress the need to inform people explicitly when they interact with an AI system, and the US Executive Order on AI commits to developing guidance for labeling content produced by AI.

Even if not legally required, disclosing that users are engaging with a machine builds trust. For instance, the Coalition for Content Provenance and Authenticity (C2PA) addresses the prevalence of misleading information online through the development of technical standards for certifying the source and history (or provenance) of media content.

Standardization Gap

AI or no AI, is it all black and white?

The EU AI Act establishes in its definitions (Article 3) that “an AI system is a system […] designed to operate with varying levels of autonomy”. The human oversight measures required under Article 14 should align with the degree of autonomy; however, the current regulation does not specify this gradation or hierarchy, nor exactly how the alignment should be achieved.

We believe that the industry and, ultimately, consumers will benefit from standardized approaches to disclosing autonomy levels, in contrast to an all-or-nothing approach. Prior work in the automotive domain is well known and we have discussed it in detail. The current SAE J3016 taxonomy defines six levels of driving automation, from Level 0 (no automation) to Level 5 (full vehicle autonomy), and serves as the industry’s most-cited reference for automated vehicle (AV) capabilities.

Unlike the automotive industry, there is currently no widely adopted standard for general-purpose AI systems, or for autonomous systems deployed in industries beyond automotive. Andrew Ng and other experts have proposed AI/RPA automation classifications, albeit with differing terminologies and levels.

Approach

At Open Ethics we reviewed prior work and generalized the levels, drawing inspiration from the AV taxonomy. We chose to generalize J3016 rather than invent a new taxonomy for two reasons:

  • It is important to distinguish the context of the AI operational environment, specifically degrees of partial automation (analogous to SAE Levels 3-5), where the human operator is involved with a fully capable system based on the context and not solely on the expected output quality.
  • It is still important to disclose when decisions are made without intelligent automation at all, e.g. when a system or a component functions solely as a tool to collect and represent data (analogous to SAE Level 0).

We suggest mapping the levels to the following definitions, with a breakdown into Assistive and Autonomous systems:

Assistive Systems

Assistive systems are AI-powered or robotic systems that augment human capabilities by providing support and enhancement to human decision-making and actions. These systems are designed to assist humans in performing specific tasks or functions, but do not replace human judgment or control.

Functionality

  1. Provide real-time data and analytics to inform human decision-making.
  2. Offer suggestions or recommendations to support human judgment.
  3. Automate routine or repetitive tasks to free up human time and resources.
  4. Enhance human physical or cognitive abilities through wearable devices or prosthetics.

Examples

  • Driver assistance systems in vehicles, such as lane departure warning or adaptive cruise control.
  • AI-powered medical diagnostic tools that provide doctors with diagnostic suggestions.
  • Exoskeletons that enhance human mobility or strength.
  • Virtual assistants that provide scheduling or organizational support.
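
To illustrate the assistive pattern described above, here is a minimal sketch in Python. The names (Suggestion, assistive_step, human_confirm) are our own illustrations, not part of any standard or API; the key property shown is that the machine only suggests, while the human produces or confirms the final output.

    # Minimal sketch of the assistive pattern: the machine suggests, the human decides.
    # All names are illustrative; this is not a normative interface.
    from dataclasses import dataclass

    @dataclass
    class Suggestion:
        text: str
        confidence: float

    def assistive_step(suggest, human_confirm, task):
        """The system assists, but the human produces the final output."""
        suggestion = suggest(task)               # machine provides a recommendation
        return human_confirm(task, suggestion)   # human judgment remains authoritative

    # Example wiring with stand-in callables (the lambda stands in for a real human decision):
    result = assistive_step(
        suggest=lambda task: Suggestion(text=f"proposed answer for '{task}'", confidence=0.7),
        human_confirm=lambda task, s: s.text,
        task="triage incoming request",
    )
    print(result)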

Levels

Level 0 (A0) – No Autonomy
At this level, there is no autonomy, and humans are entirely responsible for performing tasks. This level involves basic manual processes without any automated support in decision-making. Here, systems are used solely as a tool to collect and represent data, or to perform fully controlled actions, as in the case of teleoperated robotics.

Level 1 (A1) – Assisted Operation
At this level, autonomous systems assist with specific tasks or functions, similar to how driver assistance systems operate in vehicles. Human operators remain in control, but automation supports and enhances their decision-making capabilities.

Level 2 (A2) – Partial Autonomy
Autonomous systems can simultaneously manage multiple aspects of a process or workflow, but only under specific conditions. Human operators rely on automation for certain routine components of their tasks, are not allowed to disengage, and are required to monitor the environment and be ready to take control when necessary.

Autonomous Systems

Autonomous systems are AI-powered or robotic systems that operate independently, making decisions and taking actions with little or no human intervention. These systems are designed to perform tasks or functions without human confirmation, using sensors, algorithms, and machine learning to navigate and respond to their environment.

Functionality

  1. Operate independently, without human intervention or control.
  2. Make decisions based on sensor data, algorithms, and machine learning models.
  3. Adapt to changing environments or situations through real-time processing and learning.
  4. Perform tasks or functions that are repetitive, hazardous, or require high precision.

Examples

  • Self-driving cars or drones that navigate and respond to their environment without human input.
  • Industrial robots that perform tasks such as welding or assembly without human oversight.
  • Autonomous underwater vehicles (AUVs) that explore and map ocean environments.
  • Smart homes or buildings that adjust temperature, lighting, and security settings based on occupancy and preferences.
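
By contrast, the autonomous pattern closes the loop without human confirmation. The sketch below is illustrative only: the sensor reading, operational-domain check, threshold values, and fallback are hypothetical stand-ins, assuming a system that hands over to a human operator or a safety net when it leaves its defined conditions.

    # Minimal sketch of the autonomous pattern: sense, decide, act, with a fallback
    # when the situation falls outside the predefined operational conditions.
    # All names and thresholds are illustrative.
    import random

    def sense():
        """Stand-in for real sensor input."""
        return {"obstacle_distance_m": random.uniform(0.0, 10.0)}

    def within_operational_domain(observation):
        """Checks the predefined conditions the system is designed for."""
        return observation["obstacle_distance_m"] > 0.5

    def decide(observation):
        return "slow_down" if observation["obstacle_distance_m"] < 2.0 else "proceed"

    def autonomous_step(fallback):
        observation = sense()
        if not within_operational_domain(observation):
            return fallback(observation)   # hand over to a human operator or safety net
        return decide(observation)         # no human confirmation in the loop

    print(autonomous_step(fallback=lambda obs: "request_human_intervention"))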

Levels

Level 3 (A3) – Conditional Autonomy
The autonomous system can independently manage most tasks under well-defined conditions. Human operators can disengage and let the system handle the workflow. However, operators must be ready to intervene if the system requests assistance or encounters a situation it cannot handle.

Level 4 (A4) – High Autonomy
At this level, autonomous systems can perform end-to-end processes autonomously within specific contexts or environments. Human intervention is typically only required for scenarios that fall outside the predefined conditions or operational environment.

Level 5 (A5) – Full Autonomy
At the highest level of autonomy, autonomous systems are fully capable of handling all tasks across various environments and conditions without the need for constant human oversight. Human involvement may be minimal, focusing more on strategic decisions, monitoring, and addressing rare, highly complex situations.
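
Taken together, the two groups give six generalized levels, A0 through A5. As a sketch, they can be expressed as plain data; the Python enumeration below simply restates the labels and titles above and is not an official schema.

    # The six generalized levels as data, mirroring the level definitions above.
    from enum import Enum

    class AutonomyLevel(Enum):
        A0 = "No Autonomy"
        A1 = "Assisted Operation"
        A2 = "Partial Autonomy"
        A3 = "Conditional Autonomy"
        A4 = "High Autonomy"
        A5 = "Full Autonomy"

    ASSISTIVE = {AutonomyLevel.A0, AutonomyLevel.A1, AutonomyLevel.A2}
    AUTONOMOUS = {AutonomyLevel.A3, AutonomyLevel.A4, AutonomyLevel.A5}

    for level in AutonomyLevel:
        kind = "assistive" if level in ASSISTIVE else "autonomous"
        print(f"{level.name}: {level.value} ({kind})")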

Requirements for Autonomy Levels

The level of autonomy in AI and robotic systems directly impacts the required level of oversight and the assignment of responsibility. This is primarily due to the inherent risks and potential consequences associated with the decision-making capabilities of these systems.

The key considerations in determining the appropriate level of oversight and responsibility allocation include:

  • The complexity and unpredictability of the system’s behavior
  • The potential impact of the system’s decisions on individuals, society, and the environment
  • The ability to understand, explain, and control the system’s decision-making processes
  • The availability of fail-safe mechanisms and the ability to intervene and override the system’s decisions
 

Human-in-the-loop RFC 2119 requirement level
  • Level 0 (A0): Not Applicable (Human-only)
  • Level 1 (A1): Machines SHOULD Assist; Machines MUST NOT Confirm; Humans MAY Assist; Humans MUST Confirm
  • Level 2 (A2): Machines MAY Assist; Machines MAY Confirm; Humans MAY Assist; Humans MUST Confirm
  • Level 3 (A3): Machines MAY Assist; Machines MAY Confirm; Humans MAY Assist; Humans SHOULD Confirm
  • Level 4 (A4): Machines MAY Assist; Machines MAY Confirm; Humans MAY Assist; Humans MAY NOT Confirm
  • Level 5 (A5): Machines MAY Assist; Machines SHOULD Confirm; Humans MAY Assist; Humans MAY NOT Confirm

Output and Decision-making
  • Level 0 (A0): The Human Operator produces the final output of the system
  • Level 1 (A1): The Human Operator produces the final output of the system
  • Level 2 (A2): The Human Operator MUST confirm the output of the system
  • Level 3 (A3): The Human Operator SHOULD intervene to correct the output of the system if requested
  • Level 4 (A4): The Human Operator MAY intervene to correct the output of the system based on the change in the operational environment
  • Level 5 (A5): The Human Operator MAY intervene to correct the output of the system based on the change in the operational environment

Exceptions Fallback
  • Level 0 (A0): Not Applicable (Human-only)
  • Levels 1–2 (A1–A2): Human Operator
  • Levels 3–5 (A3–A5): Human Operator or Safety Net

Monitoring
  • Levels 0–4 (A0–A4): Human Operator MUST monitor the operational environment
  • Level 5 (A5): Human Operator MAY monitor the operational environment

Responsibility
  • Levels 0–2 (A0–A2): Operator
  • Level 3 (A3): Operator/Developer
  • Level 4 (A4): Developer/Deployer
  • Level 5 (A5): Deployer

Liability
  • Deployer (all levels)
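
The same requirements can be read programmatically. The sketch below transcribes the table above into a simple lookup; the field names are our own shorthand for illustration and do not constitute a normative schema.

    # Per-level oversight requirements transcribed from the table above.
    # Field names are shorthand for illustration only.
    REQUIREMENTS = {
        "A0": {"human_confirms": "N/A (human-only)", "fallback": "N/A (human-only)",
               "monitoring": "MUST", "responsibility": "Operator"},
        "A1": {"human_confirms": "MUST", "fallback": "Human Operator",
               "monitoring": "MUST", "responsibility": "Operator"},
        "A2": {"human_confirms": "MUST", "fallback": "Human Operator",
               "monitoring": "MUST", "responsibility": "Operator"},
        "A3": {"human_confirms": "SHOULD", "fallback": "Human Operator or Safety Net",
               "monitoring": "MUST", "responsibility": "Operator/Developer"},
        "A4": {"human_confirms": "MAY NOT", "fallback": "Human Operator or Safety Net",
               "monitoring": "MUST", "responsibility": "Developer/Deployer"},
        "A5": {"human_confirms": "MAY NOT", "fallback": "Human Operator or Safety Net",
               "monitoring": "MAY", "responsibility": "Deployer"},
    }

    def oversight_requirements(level: str) -> dict:
        """Returns the human-oversight profile for a declared autonomy level."""
        return REQUIREMENTS[level]

    print(oversight_requirements("A3"))
    # {'human_confirms': 'SHOULD', 'fallback': 'Human Operator or Safety Net',
    #  'monitoring': 'MUST', 'responsibility': 'Operator/Developer'}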

Contribution

Our suggested mapping of levels accounts for the context of AI operational environments as well as no-automation use cases. The proposed levels could easily be realized in OETP transparency manifests and displayed in visual labels for transparency disclosure to end-users and consumers. The Deployer and Operator roles are used with the same meaning as defined in the Taxonomy.
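
For illustration only, a disclosure of the level could look like the fragment below. The field names ("autonomy_level", "taxonomy", "human_oversight") are hypothetical and are not taken from the actual OETP manifest schema; they only show how the label could travel with a transparency disclosure.

    # Hypothetical illustration of how an autonomy-level disclosure could be carried
    # in an OETP-style transparency manifest. The field names below are assumptions,
    # not the actual OETP schema.
    import json

    manifest_fragment = {
        "autonomy_level": "A2",                    # label from the tables above
        "taxonomy": "Open Ethics ReAL",            # which level taxonomy is referenced
        "human_oversight": "Humans MUST Confirm",  # RFC 2119 requirement at this level
    }

    print(json.dumps(manifest_fragment, indent=2))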

As AI and robotic systems become more advanced and autonomous, the need for a nuanced and comprehensive approach to oversight and responsibility becomes increasingly crucial. Policymakers, industry leaders, and ethicists must work together to develop robust frameworks that balance the benefits of these technologies with the necessary safeguards and accountability measures.

