Open Ethics is a global, inclusive initiative with the mission to engage citizens, legislators, engineers, and subject-matter experts in the transparent design and deployment of AI-powered solutions that make a positive societal impact.
A world where every individual and organization is safe and confident about how decisions are made by autonomous systems.
Bring standards and tools that enable transparent ecosystems where humans and AI work together successfully.
AI product owners
Integrators of AI solutions
Build trust with end-users
Make informed choices
Hedge vendor-related risks
Simplify regulatory procedures
Label data responsibly
Data is the new electricity. The volume of analyzable data is projected to grow 50x by 2025, fueling our ability to build AI-powered systems. The recent history of AI development has seen numerous successes and failures. For example, 20% of US consumers would ban AI development altogether because they fear unpredictable consequences.

So, what should we let machines do? What can go wrong? Who is responsible? As humans and machines increasingly work together, the question of regulation becomes a cornerstone. Today’s answer is top-down regulation imposed by governments. GDPR, for example, has faced multiple frictions in the innovation sector. The spectrum of AI regulation runs from one extreme of laissez-faire to another that restricts AI implementation altogether “until it’s safe”, whatever safe means.

Today we have 90+ national, supra-national, and corporate guidelines, some of them striving to become a regulatory standard. Many are far from practical, and most impose one-size-fits-all solutions for ethics.
Top-down AI regulatory approaches are necessary but not sufficient to eliminate information asymmetry, educate consumers, and protect fair industry players.
The AI landscape is complex: technology moves fast, society struggles to adapt, and the law lags behind in incorporating changing societal norms and catching up with the pace of technology.
No common language
Existing regulatory approaches and guidelines frequently talk about the same things while using different terms, adding to the confusion.
Cultural and normative granularity won’t allow solutions to converge around a single ethical framework. There will always be consumers excluded or harmed by one-size-fits-all technology.
Lack of practicality
Existing approaches are built mainly from a legal perspective. As a result, they frequently lack an understanding of the technology and thus offer no way to operationalize enforcement and protect rights.