The AI race: Should global dominance trump AI transparency and safety?

As the global race for AI dominance ramps up, what are the potential risks when superpowers like the U.S. prioritize winning over AI transparency and safety? And while ongoing advances in artificial intelligence clearly benefit society, will unregulated innovation serve the common good or hasten humanity’s downfall?

In this article, we examine the potential impact of the latest U.S. AI executive orders on AI transparency and safety. We review the recent history of U.S. AI directives, explore what a shift away from governance and transparency could mean, look deeper at the apparent trade-off between innovation and regulation, and finally examine how all stakeholders can build trust through transparency.

A recent history of U.S. AI directives

Starting from a global perspective, many countries are introducing and shaping regulations to guide the safe use of AI. Along with the extensive AI narrative in the U.S., frameworks like the EU AI Act, the UK’s AI regulation framework, China’s measures to regulate generative AI models, the G7 Hiroshima AI Process, and the Singapore Model AI Governance Framework reflect an international drive to steer advances in AI.

In the U.S., AI regulations have certainly had their share of plot twists. Let’s start by reviewing the AI directives issued during the Biden administration. The cornerstone was Executive Order 14110 (October 30, 2023), “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which put measures in place for the regulation and oversight of AI at the federal level.

Per Forbes, the two primary goals of the executive order were to:

• “Establish new standards for AI safety and security intended to protect the public from potential harm.
• Enhance the promise of AI and catalyze AI research to advance American competitiveness.”

However, when Trump retook office in 2025, he tackled the subject of AI headfirst in his initial barrage of executive orders, revoking 78 of Biden’s executive actions, including Order 14110. What was the administration’s thinking behind this? Well, per its fact sheet, Biden’s order “hampered the private sector’s ability to innovate in AI by imposing government control over AI development and deployment.”

Further suggesting the administration’s shift towards a more hands-off regulatory approach, the fact sheet also states that order 14110 “established unnecessarily burdensome requirements for companies developing and deploying AI that would stifle private sector innovation and threaten American technological leadership.”

But what is the potential impact of this pivot — from a dual focus on innovation and risk management to a singular drive for innovation and domination?

What is the impact of deprioritizing AI governance and transparency?

AI transparency refers to the degree of visibility into how an AI system is developed, the data it is trained on, and how it makes and communicates decisions. Transparency is essential for mitigating risks, such as an AI system perpetuating harmful biases or being misused in high-risk applications.
The current U.S. administration is pushing its goal to win the AI race without the apparent hindrance of guardrails. But with no clear AI governance directives in place, rapid progress could heighten societal risks, and that is where practical risk management measures like AI transparency are key.

Considering that 95% of AI companies lack policies to help their customers understand how their AI systems work, let’s examine the impact a lack of transparency can have on businesses and consumers.

 

The impact of a lack of AI transparency

Loss of trust and credibility
• Business impact: Damage to the company’s brand reputation and a decrease in stakeholder confidence.
• Consumer impact: Users become distrustful, resulting in low adoption rates.

Non-compliance with regulatory standards like the EU AI Act and GDPR requirements
• Business impact: Hefty fines, legal action and reputational damage.
• Consumer impact: A lack of access to potentially helpful AI tools in highly regulated sectors like healthcare, as well as potential data privacy risks.

Increased risks related to AI governance and ethics
• Business impact: Unchecked biases can lead to flawed products, and AI systems could be exploited, creating safety and security concerns.
• Consumer impact: Consumers are treated in an unfair or discriminatory fashion or are exposed to threats like unwarranted surveillance or privacy violations.

Reduced competitiveness
• Business impact: A loss of market differentiation, reputational damage and barriers to growth.
• Consumer impact: Fewer choices and potentially lower quality products.

From a broader view, deprioritizing the need for AI transparency, and guardrails for safe AI development, raises red flags for society, such as:

  • The potential misuse of great concentrations of power: Will the U.S. government’s partnership with major tech companies, including OpenAI, Oracle, Microsoft and Nvidia, to invest $500 billion in AI infrastructure lead to a misuse of power, or actually fast-track innovation?
  • Focusing on profits over public safety: America’s big tech leaders are prioritizing faster AI development, shifting the focus to profits over public safety, as seen in OpenAI’s move to a funding model similar to that of competitors like Anthropic and tech giants such as Microsoft and Meta.
  • Potential data privacy and surveillance abuses: An increased risk of data privacy and surveillance abuses, illustrated by past data analytics scandals like Cambridge Analytica and recent cybersecurity warnings about China’s DeepSeek AI tool, which is purported to be “11 times more likely to be exploited by cybercriminals than other AI models.”
  • The risk of increasing global political tensions: A hands-off regulatory approach to the AI race could fuel competition at any cost, potentially escalating political tensions. However, the final cost may be more than most are willing to pay.

Yet, it is possible to effectively manage these risks. The answer may lie in a change in mindset, one that veers away from viewing innovation and regulation as a dichotomy.

Innovation and regulation: Finding middle ground


©NanoStockk from Getty Images via Canva.com

It is understandable that the debate between AI innovation and regulation can be quite polarizing. AI is transforming what it means to be human, and what it means to be a machine. Many view the situation as a trade-off. On a governmental level, we see different regulatory approaches from different regions, ranging from stringent top-down regulation to bottom-up self-regulation. One way to resolve these two seemingly opposing forces is to find a balance.

Perhaps the answer lies in a dual focus, or as the Harvard Kennedy School recently called it, “the dual imperative”: finding a middle ground that “leverages technical innovation and smart regulation to maximize AI’s potential benefits while minimizing its risks, offering a pragmatic approach to the responsible progress of AI technology.”

Similarly, Tech Policy Press suggests synchronizing the two forces, where “building efficient structures to recouple technological research and governance efforts is crucial. Synchronizing those two forces would lead to a self-reinforcing loop of mutual understanding and objective alignment, allowing us to escape this constant race between policymakers and industry leaders.”

Ultimately, a trade-off between innovation and regulation may not be necessary if standards are created ahead of AI governance regulations to establish a common language. As the Founder of Open Ethics, Nikita Lukianets, explains: “Standards should be developed regardless of which regulations are put in place.” He adds that canceling AI governance directives can cause innovation to lose its impetus and that “risk management stops being systemic. It then relies solely on manual intervention, which may create unequal conditions and unfair preferences in the market. In addition, this could blur stakeholder responsibilities and create a situation where troubleshooting risk management happens on an ad-hoc basis.”

Building trust through transparency

Open Ethics helps companies bridge the transparency gap. One of the fundamental ways the organization achieves that goal is through a digital trust and transparency tool for AI called the Open Ethics Label. The Open Ethics Label is a machine-readable verification tool designed to help AI developers and organizations demonstrate and communicate transparency in a structured and accessible way. Much like a nutrition label on food products helps consumers make informed decisions and supports food safety, the Open Ethics Label helps companies share information about their solutions to build transparency and trust.
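To make the idea of a machine-readable disclosure more concrete, here is a minimal sketch in Python. The field names, values, and validation step are illustrative assumptions chosen to fit the nutrition-label analogy; they are not the actual Open Ethics Label schema, API, or verification process.

```python
import json

# Hypothetical transparency disclosure for a fictional AI system, loosely
# following the "nutrition label" analogy above. The field names below are
# illustrative assumptions, not the real Open Ethics Label schema.
disclosure = {
    "system_name": "ExampleChat",
    "version": "1.2.0",
    "training_data": {
        "sources": ["licensed news corpus", "public web crawl"],
        "contains_personal_data": False,
    },
    "decision_logic": "Large language model with retrieval; no fully automated high-stakes decisions",
    "intended_use": ["drafting customer support replies"],
    "prohibited_use": ["medical diagnosis", "credit scoring"],
    "human_oversight": True,
}

# Fields a hypothetical consumer of the label might insist on before trusting it.
REQUIRED_FIELDS = {"system_name", "version", "training_data", "decision_logic", "intended_use"}


def missing_fields(label: dict) -> list:
    """Return the required fields that are absent from a disclosure."""
    return sorted(REQUIRED_FIELDS - label.keys())


if __name__ == "__main__":
    missing = missing_fields(disclosure)
    if missing:
        print("Incomplete disclosure, missing:", ", ".join(missing))
    else:
        # Publish the label as JSON so other tools (and people) can read and compare it.
        print(json.dumps(disclosure, indent=2))
```

The specifics of the real label and its verification process are defined by Open Ethics itself; the point of the sketch is simply that a structured, machine-readable format lets customers, auditors, and regulators compare AI systems much as a nutrition label lets shoppers compare foods.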

In closing, if we consider the most extreme scenario, an all-out global AI race with no safeguards in place, there will likely be no winners (discounting the robots and cockroaches, of course!). Luckily, as more leaders adopt a holistic approach to AI and the responsible tech community continues to grow, there is hope that AI will indeed serve its makers – for good!

 

Featured image credit: ©alexsl from Getty Images Signature via Canva.com 
