Against Neutrality in AI ethics: pros & cons of taking a stance

One of the key questions in AI ethics is whether neutrality should be an objective, and when. Neutrality means that AI systems do not favor any group, individual, or value over another, and do not discriminate against any of them. Neutrality (not to be confused with Net Neutrality) can be seen as a desirable goal for AI ethics, as it can promote fairness, justice, and impartiality. However, neutrality can also be seen as an impossible or undesirable goal for AI ethics, as it can ignore the complexity, diversity, and contextuality of human values and situations.

In this post, we suggest that neutrality is not always applicable and should be pursued with caution. We examine how neutrality can harm AI ethics and provide a critical analysis of the pros and cons of neutrality in AI systems.

Arguments for neutrality in AI ethics

Some arguments for neutrality in AI ethics are:

  • Neutrality can help avoid bias and discrimination in AI systems. Bias means that AI systems produce inaccurate or unfair outcomes due to flawed data, algorithms, or design choices. Discrimination means that AI systems treat certain groups or individuals differently based on irrelevant or unjustified criteria. Bias and discrimination can harm people’s rights, opportunities, and well-being. By aiming for neutrality, AI systems can reduce bias and discrimination and ensure equal treatment for all.
  • Neutrality can help uphold universal human rights and values in AI systems. Human rights are the basic rights and freedoms that belong to every person regardless of their identity, status, or situation. Values are the principles or standards that guide people’s actions and judgments. Human rights and values are often considered to be universal, meaning that they apply to everyone everywhere at all times. By aiming for neutrality, AI systems can respect human rights and values and avoid imposing any particular worldview or agenda on others.
  • Neutrality can help foster trust and accountability in AI systems. Trust means that people have confidence in the reliability, safety, and quality of AI systems. Accountability means that people have mechanisms to monitor, evaluate, and challenge the decisions and actions of AI systems. Trust and accountability are essential for ensuring public acceptance, legitimacy, and governance of AI systems. By aiming for neutrality, AI systems can increase trust and accountability by being responsive to feedback.
For example, a neutral AI system would not reject a job applicant based on their race or gender. A simple metaphor: neutrality is like a balance scale that weighs all factors equally and does not tip to one side or the other.
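
To make the hiring example concrete, here is a minimal sketch in Python of how one might audit a screening model's decisions for demographic parity, i.e., whether hire rates differ across groups. The data, group labels, and the 0.8 threshold (the "four-fifths rule" used in US employment guidance) are illustrative assumptions, not a prescription.

```python
# Minimal demographic-parity audit of a hiring model's decisions.
# Assumptions: `decisions` pairs each applicant's group label with
# the model's binary hire/reject output; all names are invented.
from collections import defaultdict

def selection_rates(decisions):
    """Fraction of positive (hire) decisions per group."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        hires[group] += int(hired)
    return {g: hires[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest.
    Values well below 0.8 are often treated as a warning sign
    (the 'four-fifths rule' from US employment guidance)."""
    return min(rates.values()) / max(rates.values())

# Toy decisions: (group label, model decided to hire)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = selection_rates(decisions)
print(rates)                          # per-group hire rates: ~0.67 vs ~0.33
print(disparate_impact_ratio(rates))  # 0.5 -- flags a disparity
```

Passing such a check does not make a system neutral in any deep sense; it verifies only one narrow, statistical notion of equal treatment.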

Arguments against neutrality in AI ethics

Some arguments against neutrality in AI ethics are:

  • Neutrality is impossible to achieve in practice due to the inherent subjectivity and complexity of human values and situations. Subjectivity means that human values are not fixed or objective, but depend on people’s perspectives, preferences, and experiences. Complexity means that human situations are not simple or homogeneous, but involve multiple factors, dimensions, and trade-offs. By trying to be neutral, AI systems may fail to capture the diversity, nuance, and contextuality of human values and situations, and may end up being biased, inaccurate, or irrelevant.
  • Neutrality is undesirable to pursue as a normative goal due to the ethical responsibility and agency of the human actors involved in creating and using AI systems. Responsibility means that human actors have moral obligations to consider the impacts and implications of their actions on others and themselves. Agency means that human actors have moral capacities to make choices, express preferences, and act accordingly. By pretending to be neutral, AI systems may obscure the responsibility and agency of human actors, and may prevent them from engaging in ethical deliberation, dialogue, and decision-making.
  • Neutrality is harmful to promote as a social ideal due to the potential risks and challenges that AI systems pose for human dignity and rights. By holding up neutrality as an ideal, we may normalize inaction in the face of these risks.
For example, an AI system that tries to be neutral about political issues may ignore the different opinions, interests, and experiences of different groups of people. A metaphor to illustrate this point: neutrality is like a blank canvas that does not reflect the colors, shapes, and textures of reality.
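
To see concretely why neutrality is so hard to achieve in practice, consider proxy variables: deleting a sensitive attribute from the training data does not delete its influence, because other features encode it. The sketch below is a synthetic Python illustration with invented feature names; it shows a "neutral" model that never sees the group attribute yet reproduces a group disparity through a correlated postcode feature.

```python
# Synthetic illustration: a model that never sees `group` can still
# discriminate when another feature (here `postcode`) encodes it.
# All names and numbers are invented for illustration.
import random
random.seed(0)

def make_applicant():
    group = random.choice(["a", "b"])
    # Residential segregation: postcode strongly correlates with group.
    if group == "a":
        postcode = "north" if random.random() < 0.9 else "south"
    else:
        postcode = "south" if random.random() < 0.9 else "north"
    # Historically biased outcomes: past approvals favored "north".
    approved = random.random() < (0.7 if postcode == "north" else 0.3)
    return group, postcode, approved

train = [make_applicant() for _ in range(5000)]

# The "neutral" model: it never sees `group`; it simply learns the
# historical approval rate for each postcode.
rate = {}
for pc in ("north", "south"):
    outcomes = [ok for _, p, ok in train if p == pc]
    rate[pc] = sum(outcomes) / len(outcomes)

def predict(postcode):
    return rate[postcode] >= 0.5  # approve if the historical rate is >= 50%

# Audit the decisions by group -- the attribute the model never saw.
for g in ("a", "b"):
    postcodes = [p for grp, p, _ in train if grp == g]
    approval = sum(predict(p) for p in postcodes) / len(postcodes)
    print(f"group {g}: predicted approval rate {approval:.2f}")
```

The model is blind to group membership, yet its approval rates diverge sharply by group (roughly 0.90 versus 0.10 in this toy setup): blindness is not neutrality.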

While in theory neutrality may sound like a goal worth pursuing, in practice it may act as a blindfold that prevents us from seeing the consequences of our actions or inactions. A facial recognition system may claim to be neutral, but it is designed and used by human actors who have moral obligations to respect the privacy and dignity of others, and moral capacities to choose how to use the system. Trained by human actors, or on data that reflects the surrounding socio-economic culture, AI systems act as mirrors of those actors’ values and intentions. By pretending to be neutral, AI systems may hide the mirror and let human actors forget their moral responsibility and agency.

Neutrality in AI systems is a complex and controversial topic. On one hand, some argue that AI systems should be neutral and unbiased, and that they should not favor any particular group, value, or outcome over another. On the other hand, others (including Open Ethics) claim that AI systems cannot be truly neutral, and that they inevitably reflect the preferences and assumptions of their designers, users, and data sources. Moreover, we suggest that AI systems should be explicitly non-neutral: they should align with certain ethical principles, social goals, or human rights. Conscious choice is crucial.

Some applications of AI systems may require or entail normative judgments, values, or objectives that are not neutral. For example, neutrality is not always acceptable in political analysis or legal advice, because these domains involve complex and contested moral, social, and political issues that cannot be reduced to objective or impartial criteria. Moreover, AI systems that claim to be neutral may obscure or conceal the ethical implications of their decisions, and may undermine the role of human judgment and responsibility. Therefore, AI ethics should not aim for neutrality as a default or universal principle, but rather for consistency and transparency in pursuing the normative objectives of each application domain.

Speaking of judgment, the principle of neutrality is not applicable in all legal systems either. In fact, it is not even a universally recognized principle in international law. The traditional rules of neutrality find no scope of application in the collective security system created by the UN Charter and its prohibition of the use of force. However, transposed into fundamental principles of humanitarian law, they continue to govern peacekeeping and humanitarian operations, though not always effectively.

Applications and domains

| Domain | Useful neutrality scenario | Harmful neutrality scenario |
| --- | --- | --- |
| Justice and Law | In criminal justice, an AI system could assess evidence and make recommendations without any bias or discrimination toward the race, gender, or socio-economic status of the accused, ensuring a fair trial and reducing the risk of wrongful convictions. | In immigration law, an AI system programmed to treat all applicants equally may not be able to consider the individual circumstances of each case. Some applicants could be unfairly denied entry or asylum. |
| Education | In standardized testing, an AI system could grade exams without any bias towards the identity of the test-taker, ensuring a fair assessment of the student’s knowledge and skills. | In personalized learning, an AI system that is neutral towards individual students may not recognize their unique needs or learning styles, leaving students with ineffective instruction. |
| Marketing & Advertising | In digital advertising, an AI system could target ads based on user behavior and preferences rather than demographic data, so users receive relevant ads without being subjected to discriminatory targeting. | In political advertising, an AI system that is neutral towards the content of the ads may not detect false or misleading information, and voters could be swayed by inaccurate or biased messaging. |
| Art & Entertainment | In music recommendation systems, an AI system could recommend songs based on the user’s listening history and preferences rather than the popularity of the song or the artist, exposing users to a wider variety of music. | In film and TV casting, an AI system that is neutral towards the race, gender, or ethnicity of actors may not recognize the importance of representation and diversity, resulting in casting decisions that perpetuate stereotypes and exclude underrepresented groups. |
| Military and Security | In cybersecurity, an AI system could detect and respond to threats without any bias towards the identity of the attacker or the target, making the network more secure. | In autonomous weapons systems, an AI system that is neutral towards its targets may be unable to distinguish between combatants and civilians, resulting in civilian casualties. |
| Healthcare and Medicine | In medical diagnosis, an AI system could assess patient symptoms and provide recommendations without any bias towards the patient’s gender, race, or socio-economic status, yielding more accurate diagnoses. | In medical treatment, an AI system that is neutral towards the individual patient’s medical history and circumstances may not provide the most effective treatment, leading to suboptimal outcomes or even harm to the patient. |

Focusing on neutrality in artificial intelligence (AI) is like deploying a one-size-fits-all solution: the very universality that makes it appealing limits its benefits in many areas. While neutrality in AI can be useful in many contexts, it is not applicable or beneficial in all of them, because neutrality implies treating all inputs and situations equally, regardless of their individual differences or contextual factors. While neutrality may help to avoid overt discrimination and bias, it does not address the systemic issues that contribute to these problems.

In this context, a possible way to address the challenges and opportunities of non-neutral AI systems is to promote transparency about their objectives, methods, and impacts. Transparency can increase the accountability and responsibility of AI developers and operators, and inform and empower the users and stakeholders of AI systems. It can foster trust and confidence in AI systems and enable public debate and participation in shaping their design and governance. It can also facilitate the identification and mitigation of the potential harms and risks of non-neutral AI systems, such as discrimination, manipulation, or exploitation.
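
As a sketch of what such transparency could look like in practice, the snippet below emits a machine-readable disclosure of a system’s explicitly non-neutral objectives. The field names are hypothetical, chosen for illustration, and do not represent the actual Open Ethics Transparency Protocol schema; the point is simply to declare normative choices instead of claiming neutrality.

```python
# A hypothetical machine-readable disclosure of a system's normative
# choices. Field names are illustrative, not an actual OETP schema.
import json

disclosure = {
    "system": "resume-screening-assistant",
    "claims_neutrality": False,
    "normative_objectives": [
        "reduce disparate impact in shortlisting",
        "comply with local anti-discrimination law",
    ],
    "known_value_judgments": [
        "weights recent experience over total years of experience",
    ],
    "data_provenance": "historical applications, one region",
    "contact_for_challenge": "ethics@example.org",
}

# Publish alongside the system so users and auditors can inspect it.
print(json.dumps(disclosure, indent=2))
```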

Therefore, transparency about non-neutral AI systems may benefit society by enhancing the ethical, social, and legal aspects of AI development and use. However, transparency is not a simple or straightforward solution. It requires careful consideration of the trade-offs between different values and interests, such as privacy, security, innovation, and competitiveness. It also requires appropriate mechanisms and standards to ensure the quality, accessibility, and usability of the information disclosed about non-neutral AI systems. Finally, it requires a collaborative and inclusive approach that involves multiple actors and perspectives in the AI ecosystem.

Are you passionate about the movement towards ethical and trustworthy AI systems? Do you want to collaborate with other like-minded people and learn from diverse perspectives? If so, we invite you to read our manifesto and join our community to work on the Open Ethics Transparency Protocol, a multi-stakeholder approach that fosters transparency and explicit disclosure for AI systems. Together, we can shape the future of AI in a responsible and inclusive way. Don’t miss this opportunity to make a difference and have fun along the way!

