#OpenEthicsSeries is a series of open online events aimed at operationalizing AI ethics and explaining the landscape of AI regulation. If you wish to contribute and share your experience as a panelist, send us a request with a brief description of your talk plus the URL of your LinkedIn profile.
S01E06 December 03, 2020
S01E07 December 17, 2020
Open Ethics for AI is like Creative Commons for content. We aim to build trust between machines and humans by helping machines explain themselves. We’re developing an open transparency protocol to help product owners describe their AI-powered solutions in a standardized, user-friendly, and explicit way. Open Ethics is a global, inclusive initiative with the mission to engage citizens, legislators, engineers, and subject-matter experts in the transparent design and deployment of solutions backed by artificial intelligence, to make a positive societal impact.
Ethical decision-making should account for points of view (POV) to make decisions traceable and explicable. A point of view is the angle from which things are considered; it reflects the opinion or criteria of the agent involved in decision-making. The Open Ethics Vector is the technical means by which Machine Learning algorithms could represent different POVs.
Incorporating a scale to mark the extent to which each “value” is satisfied will help end-users choose relevant products based on their ethical vectors.
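The scaling idea above can be sketched in code. This is a hypothetical illustration, not the Open Ethics specification: value names, the 0–1 scale, and the distance metric are all assumptions chosen for the example.

```python
from dataclasses import dataclass, field

@dataclass
class EthicsVector:
    # Each named value is scored on a 0.0-1.0 scale marking
    # the extent to which it is satisfied (assumed scale).
    scores: dict[str, float] = field(default_factory=dict)

    def set(self, value: str, score: float) -> None:
        if not 0.0 <= score <= 1.0:
            raise ValueError("score must be in [0, 1]")
        self.scores[value] = score

    def distance(self, other: "EthicsVector") -> float:
        # Euclidean distance lets an end-user compare a product's
        # declared vector against their own preferences.
        keys = set(self.scores) | set(other.scores)
        return sum((self.scores.get(k, 0.0) - other.scores.get(k, 0.0)) ** 2
                   for k in keys) ** 0.5

# A product's declared vector vs. a user's preference vector
product = EthicsVector()
product.set("safety", 0.9)
product.set("privacy", 0.7)

preference = EthicsVector()
preference.set("safety", 1.0)
preference.set("privacy", 1.0)

print(round(product.distance(preference), 3))  # 0.316
```

A smaller distance would indicate a product more closely matching the user's ethical preferences; any real comparison scheme would be defined by the protocol itself.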
Safety is the degree to which hazardous impact can be limited or completely eliminated. Highly safe systems implement hierarchies of safety controls, spanning from measures that protect individuals to those that isolate or entirely eliminate the hazard.
When complete safety is not achievable, insurance instruments should be in place to manage the residual safety risks.
Systems fail. When this happens, a system can shield the user from the effects of failure to keep the experience fluid and straightforward. At the same time, a simple acknowledgment that the system has not followed the expected scenario gives the user insight into how to act autonomously.
Failure models should cover all possible failure outcomes and include precise instructions for users.
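One way to read the paragraph above is as a mapping from failure modes to explicit user-facing acknowledgments. The sketch below is a minimal illustration under assumed failure modes (`timeout`, `low_confidence`) and an assumed predictor interface; it is not a prescribed Open Ethics mechanism.

```python
# Hypothetical failure model: every failure outcome maps to an
# acknowledgment plus a precise instruction for the user.
FAILURE_INSTRUCTIONS = {
    "timeout": "The system did not respond in time. Retry, or proceed manually.",
    "low_confidence": "The prediction is uncertain. Review the result before acting.",
}

def run_with_failure_model(predict, request):
    try:
        result = predict(request)
    except TimeoutError:
        return {"ok": False, "failure": "timeout",
                "instruction": FAILURE_INSTRUCTIONS["timeout"]}
    # Low-confidence output is surfaced rather than silently passed through.
    if result["confidence"] < 0.5:
        return {"ok": False, "failure": "low_confidence",
                "instruction": FAILURE_INSTRUCTIONS["low_confidence"],
                "result": result}
    return {"ok": True, "result": result}

# Usage with a stub predictor that fails by timing out
def flaky_predict(request):
    raise TimeoutError

outcome = run_with_failure_model(flaky_predict, {"text": "hello"})
print(outcome["failure"])  # timeout
```

The key design point is that the failure is acknowledged to the user together with an actionable instruction, instead of being hidden behind a generic error.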
Some countries have started drafting their legal approaches. However, today we have neither country-specific regulatory frameworks nor cross-boundary AI regulation. Therefore, users should be aware of how their data is used, as well as of how decisions are made.
Depending on where you live, the AI system could affect your life differently. By learning in which legal framework the system operates, users can understand its potential impact.
With foreseeable technologies, an artificial agent will carry zero moral responsibility for its behavior and humans will retain full responsibility.
That said, a legal or moral person should be defined to hold liability for any potentially harmful action. Each theory of liability has certain conditions that must be proven by the claimant before liability is established.
Human behavior doesn’t always reflect human values. AI systems may be able to learn a lot by observing humans, but because of our inconsistencies, current methods are fundamentally unable to distinguish between value-aligned and misaligned behavior and thus to provide AI with proper learning feedback.
To address these problems, systems should provide a cross-validation mechanism for value alignment.
The right to privacy protects you against intrusion into your personal life. Your right to privacy can be interfered with only for the purposes of protecting national security or public safety, preventing crime, or protecting the rights of other people.
What we consider private differs among cultures and individuals. This value is about putting people in control of their privacy with easy-to-use tools and clear choices. It also means being transparent about how data is collected and used by the AI system.
Cultural norms may shift to prevent all benefits from going only to the creators of new technologies. Shared benefit as a principle implies that we should think about ethics as satisfying the preferences of, or benefiting, as many people as possible. Given that the majority of AI systems learn from users, it is legitimate to ask how benefits are shared.
Non-monetary benefit-sharing may involve information exchange, technology transfer or capacity building.
A requirement to maintain human control over the use of AI would eliminate many of the problems associated with fully autonomous systems, especially in medicine and on the battlefield. Such a requirement would protect the dignity of human life, facilitate compliance with international humanitarian and human rights law, and promote accountability for unlawful acts.
System design may promote suggestive or verification support provided by AI systems, keeping humans in the loop for final decisions.
Machines use affective computing and AI techniques to sense, understand, learn from, and interact with human emotions. A combination of facial recognition and gait, language, and voice-pattern analysis can already decode human emotions with a high degree of confidence.
Acknowledging human emotions, using them as factors in decision-making, and influencing emotions to add value to AI solutions should all be done transparently.
In a world where threats cross borders and disciplines, where distinctions between what is domestic and what is foreign policy are becoming more and more tenuous, people need perspective to break down transparency silos. International hostility, especially an AI arms race, could intensify risk-taking, hostile motivations, and errors of judgment when creating AI.
International cooperation is essential to prevent cyber-terrorism and to regulate the landscape of AI in the security space.
Eliminating the intrinsic bias introduced by training approaches is a crucial step towards making AI systems good helpers. Humans should be made aware of the processes and events caught by the AI’s “attention”. Systems should provide humans with ways to incorporate additional information as well as to make final choices.
The degree to which AI systems influence human decisions should be explicit. A communicational distance should be maintained so that the system operates in a non-manipulative manner.
Decision Traceability is the measure of our ability to verify the history and contexts in which decisions are made. Given that it is not always possible to draw decision trees for machine-learning systems, the purpose of decision traceability is to reproduce good practices, as well as to decompose mistaken decisions and find their influencing factors.
AI systems should provide a sufficient degree of detail to explain their decision-making process.
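A minimal sketch of what a decision trace could record follows. All field names (`model_version`, `influencing_factors`, `trace_id`) and the tamper-evident hashing scheme are assumptions for illustration, not part of any defined Open Ethics format.

```python
import hashlib
import json
import time

def trace_decision(model_version, inputs, factors, decision):
    """Record a decision with enough context to reproduce or audit it later."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "influencing_factors": factors,  # e.g. top feature attributions
        "decision": decision,
    }
    # A content hash over the canonical JSON makes later tampering
    # with the stored trace detectable.
    payload = json.dumps(record, sort_keys=True)
    record["trace_id"] = hashlib.sha256(payload.encode()).hexdigest()[:16]
    return record

# Usage: trace a single (hypothetical) credit decision
trace = trace_decision(
    model_version="credit-model-v2",
    inputs={"income": 42000, "age": 35},
    factors={"income": 0.61, "age": 0.12},
    decision="approved",
)
```

Storing the model version and influencing factors alongside the decision is what allows a mistaken decision to be decomposed afterwards, as the paragraph above describes.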