Open Ethics is a global, inclusive initiative with the mission to engage citizens, legislators, engineers, and subject-matter experts in the transparent design and deployment of solutions backed by artificial intelligence, to make a positive societal impact.

The Open Ethics Initiative functions as a research and educational institution that contributes to the development of comprehensive frameworks, standards, and accessible resources to navigate various socio-technological phenomena. Operating at the intersection of technological innovation and societal impact, the initiative serves as an invaluable educational resource, equipping individuals and organizations with the knowledge and skills necessary to critically engage with the ethical dimensions of technology.


A world where every individual and organization is safe and confident about how decisions are made by autonomous systems.


Bring standards and tools to enable transparent ecosystems where humans and AI successfully work together.


Key users
- AI product owners
- Integrators of AI solutions

Secondary stakeholders
- Policy makers
- Subject-matter experts

What they gain:
- Build trust with end-users
- Make informed choices
- Mitigate vendor-related risks
- Build responsibly
- Simplify regulatory procedures
- Label data responsibly


Data is the new electricity. The volume of analyzable data is projected to grow roughly 50-fold by 2025, and this growth underpins our ability to build AI-powered systems. Recent history of AI development has seen numerous successes and failures. For example, 20% of US consumers would consider banning AI development altogether because they fear unpredictable consequences.

So, what should we let machines do? What can go wrong? Who is responsible? As humans and machines increasingly work together, the question of regulation becomes a cornerstone. Today's answer is top-down regulation, imposed by governments. GDPR, for example, has faced multiple points of friction in the innovation sector. The spectrum of AI regulation runs from laissez-faire at one extreme to restricting AI implementation altogether "until it's safe", whatever "safe" means, at the other. Today we have 90+ national, supranational, and corporate guidelines, some of them striving to become a regulatory standard. Many are far from practical, and most impose one-size-fits-all solutions for ethics.

The problem

Top-down AI regulatory approaches are necessary but not sufficient to eliminate information asymmetry, educate consumers, and protect fair industry players.


The AI landscape is complex: technology is moving fast, society struggles to adapt, and the law lags behind, failing to incorporate changing societal norms and keep up with the pace of technology.

No common language

Existing regulatory approaches and guidelines frequently talk about the same things while using different terms, which adds to the confusion.

One-size-fits-all won’t work

Cultural and normative granularity won’t allow solutions to converge around a single ethical framework. There will always be consumers excluded or harmed by one-size-fits-all technology.

Lack of practicality

Existing approaches are built mainly from a legal perspective. As such, they frequently lack an understanding of the technology and therefore offer no practical way to operationalize enforcement and protect rights.

Old mechanisms

The “best” tools we have at our disposal as consumers to learn about our rights when using digital products are the Privacy Policy and the Terms and Conditions. Clumsy, humongous, and vague: no one reads them.