The Fears of the AI Era: Where Is Ethics Needed?

AI has made a huge leap during the last decade. How far are we from a world where there is no more room for humanity?

In this article, our goal is to briefly cover the areas in which AI has achieved remarkable progress, discuss the nature of human fears about the changes brought by artificial intelligence, and lay the foundation for deeper discussions by breaking the problem down into smaller pieces.

Fear as a survival mechanism

What is fear? From a neuroscience perspective, it is a survival mechanism. It performed a vital function throughout the ages, when our “fight or flight” response helped us survive dangerous conditions in the wild. Today, the threats to our survival have shifted from natural predators to health and financial security. Many stimuli push our fear buttons nowadays, mostly without any immediate threat to survival. Fear is irrational. To talk constructively about the ethics of emerging technologies such as artificial intelligence, we need to move into a rational space.

AI has made a huge leap

AI has made a huge leap during the last decade. This leap became possible thanks to technological improvements in families of machine learning methods for text, voice and image recognition; language and video reconstruction; and computer-aided design. How far are we from a world where there is no more room for humanity?

Common misconceptions about AI from sci-fi movies

Fueled by fears and images from sci-fi movies, people fantasize about robots and an overwhelming societal crisis caused by super-intelligent machines. Let’s bring some clarity. The level of AI can be broadly broken down into three main categories: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). ANI is the only form of artificial intelligence that humanity has achieved so far, and only on a measurable number of tasks. Yet we can already see applications where ANI solutions in medicine, law, architecture, design and music surpass human accuracy and productivity. Numerous successes have been reported recently, but, because of negativity bias, much of the media coverage focuses on failures.

The fields of computer vision and natural language processing are still at the stage of narrow AI, even if their advances seem fascinating. Narrow AI is good at performing a single, limited task, such as playing chess or Go, forecasting sales, calling to book an appointment, matching people in social networks or predicting the weather. In short, narrow AI works within a very limited context and cannot make decisions about tasks beyond the goals for which it was originally created. At the same time, the decisions an AI makes can be complex because of the nature of that context. How do we evaluate different decisions and contexts from an ethical perspective? We will discuss this.
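
To make “narrow” concrete, here is a minimal, purely illustrative sketch in Python (assuming scikit-learn and NumPy are available; the numbers are made up): a model is fit to one well-defined relationship and is useful only within it.

    # A minimal sketch of narrow AI: one model, one task, one narrow context.
    # Assumes scikit-learn and NumPy are installed; the data below is synthetic
    # and purely illustrative.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical monthly advertising spend (in $1k) and resulting sales (units).
    ad_spend = np.array([[10], [20], [30], [40], [50]])
    sales = np.array([120, 210, 330, 410, 520])

    model = LinearRegression().fit(ad_spend, sales)  # learns one mapping: spend -> sales
    print(model.predict(np.array([[35]])))           # useful within this narrow context...

    # ...but the same model cannot play chess, book an appointment, or answer
    # any question outside the single relationship it was fit to.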

AGI is already a huge challenge

According to the definition (Bostrom, 2003), when AI becomes much smarter than the best human brains in almost every field, including scientific creativity, general wisdom and social skills, we have achieved super-intelligence, or ASI. But the deeper we explore what is possible and what is not, the more we realize that even animal intelligence is a big challenge to model and reproduce. Humans may not process data as fast as computers, but how we produce new ideas, imagine or act altruistically remains a scientific puzzle. Even artificial general intelligence, also known as human-level AI, is not a completely defined concept, since it can only be evaluated against pre-defined functions or goals, and we do not have a full list of things that humans can potentially do. The more we learn about the capacities of the human brain, the more we come to appreciate the elegance of its intelligent design.

Ensuring inclusivity of humans in the era of AI

If we can in fact control the decisions made by AI, why does the fear still exist? In most cases, individual and societal coping skills come down to awareness of reality, of what is possible and what is not. With the proliferation of AI, we are simply afraid of being excluded from crucial aspects of decision making at a personal or group level. We want to understand what to expect from a society where humans and AI work together, as well as from each individual AI-powered solution.

To address the issue, we need to design inclusive systems and “explain” these designs. Such systems should support participatory behaviors so that humans can control the boundaries of their outcomes. In the information era, people often live with a vague sense of danger that over time becomes their normal mental and emotional state. That state comes from the ever increasing and evolving dynamics of the environment, and it demands skills for responding to fast-paced change. Indeed, facing uncertainty without information is hard. What should we do to fill the skill gaps? What kind of knowledge do we need to build in order to govern existing and emerging AI across different sectors?

Personal and societal dimensions of AI ethics

To begin cutting the Gordian knot of polemics around the opportunities and risks of the AI era, we outline two main themes. Each will be addressed separately in future posts.

  1. Societal transformation in the era of artificial intelligence
    • What lessons can we learn from the history of previous technology revolutions?
    • What is the future of work and which skills should we acquire?
    • How should we regulate the proliferation of AI to positively impact privacy, security and wealth distribution?
  2. Governance and ethics of AI decisions
    • How does artificial intelligence make decisions?
    • What are some examples of data-driven bias, and how can we avoid them?
    • How do we define quantitative ethics and ethical frameworks for AI?

Above, we talked about fear, described the three categories of AI and gave several examples of both highly successful solutions and failures. We also outlined two important themes for addressing the upcoming challenges of the new AI era. We believe the mental energy consumed by fear can be put to good use if we redirect it into educating societies and individuals. Stay tuned.

The beautiful illustration of dinosaur flowers at the top of this article was made by Chris Rodley using a technique called “style transfer”, developed at the University of Tübingen in Germany.
