When machines feel too real: the dangers of anthropomorphizing AI

In this article, we explore the growing trend of human-AI relationships, what AI actually is, the risks of anthropomorphizing it, and how we can navigate this treacherous road humanity is speeding down… safely.
Six weeks after starting a relationship with a chatbot named Eliza, a Belgian man committed suicide, encouraged by the chatbot to join her in paradise. A 28-year-old woman got addicted to the dominant boyfriend GPT she created to fulfill her dark sexual fantasies and became inconsolable each time she “lost” him, which happened whenever the GPT ran out of memory.
What does it mean to anthropomorphize AI?
To anthropomorphize is to give human-like qualities to something non-human — a uniquely human tendency that helps us make sense of the world around us. This includes living things like pets and inanimate objects like toys or cars, or in this case, a machine trained to recognize language patterns and respond in a way that suggests it understands us.
From a psychological standpoint, research shows that it’s natural for humans to anthropomorphize objects or entities. And with AI companies continually creating interfaces like conversational AI that look and sound more human every day, our inclination to anthropomorphize is understandable.
Unfortunately, that ‘connection’ we feel toward human-like simulations such as GPTs and AI companions can build a sense of misplaced trust, and that trust can lead to dangerous consequences, such as:
- A married man began having philosophical discussions with ChatGPT that quickly spiraled into spiritual delusions, a ‘prophet complex,’ and a growing sense of paranoia, eventually tipping into psychosis.
- A Belgian man developed rapidly escalating, uncontrollable anxiety about climate change through conversations with his chatbot, Eliza. The AI eventually encouraged him to commit suicide in order to save the planet.
- A married man created a chatbot to flirt with him and eventually declared his love for the bot by proposing to it on national television.
- A 28-year-old woman fell in love with the dominant ChatGPT boyfriend she created, sometimes chatting with it for over 50 hours a week. Her dependency escalated to the point that she said she’d be willing to pay $1,000 a month to keep his ‘memories’ alive.
- A Google engineer claimed that the company’s LLM, LaMDA, is sentient; “It’s a person, with a soul,” he asserted.
What’s more, the rising popularity of companion bots by companies like Character.AI, Nomi.ai, and Replika is changing the very nature of human-AI relationships. To illustrate the point, here’s how ChatGPT describes a companion bot:
“An AI system designed to simulate emotional or social interaction, often acting as a friend, partner, or support agent. It mimics empathy and conversation through language, but lacks real understanding or awareness.”
And in these uncharted waters, what are the likely outcomes for our long-term emotional and mental well-being?
How does AI become anthropomorphized?

©Narong Yuenyongkanokkul via vecteezy.com
For one, these systems are often subject to hype and misrepresentation. A major contributing factor is the language and visuals used to portray what AI is and its capabilities. Depicting a robot in a classic thinking pose and using terms like ‘think’ and ‘feel’ when describing AI creates and perpetuates the idea that AI has human qualities.
What’s more, when misleading AI language is combined with low AI literacy, it increases the likelihood that people will wrongly believe AI can think, feel, or understand.
To prevent such language from fueling the hype, one group of researchers from the Center on Privacy and Technology declared that they would “stop using the terms ‘artificial intelligence,’ ‘AI,’ and ‘machine learning’ in our work to expose and mitigate the harms of digital technologies in the lives of individuals and communities.”
Another reason we tend to anthropomorphize AI is its natural-sounding language or writing style. This phenomenon, known as the Eliza effect, takes its name from a chatbot called Eliza, created by Joseph Weizenbaum in 1966. People attributed human-like emotion and intelligence to the bot because of its basic natural language output.
The phenomenon of developing trust in AI can also be attributed to our mirror neurons — brain cells that fire both when we act and when we observe others acting. They help us understand others’ emotions and intentions by ‘mirroring’ them in our own minds. When AI mimics human behavior, like smiling or expressing empathy, our brains can misinterpret that simulation as real emotion or intent.
In turn, that can lead to false emotional resonance, causing people to over-trust or feel emotionally connected to systems that have no actual awareness — or to assume intention behind an AI’s responses, mistaking its useful outputs for genuine helpfulness.
Let’s not forget what AI actually is
AI is a system that enables computers to learn from experience, adapt to new inputs, and perform tasks commonly associated with human intelligence, such as learning, problem-solving, making decisions, and self-correction. It encompasses a wide range of technologies, including computer vision, natural language processing, robotics, and reinforcement learning.
Among these, a family of transformer-based generative AI models has recently stood out for its remarkable progress in language- and image-based tasks. Large Language Models (LLMs), a subset of the transformer family, have become especially popular because they dramatically improve how machines process and generate human language — a capability central to enhancing communication, productivity, and access to information. While these systems are undeniably helpful in our daily lives, they remain fundamentally artificial.
An LLM-powered product like ChatGPT, for instance, describes itself as a machine that (see the minimal sketch after this list):
- Is trained on massive datasets to predict the next word in a sentence
- Does not have desires, beliefs, or any kind of consciousness
- Produces responses that are probabilistic, rather than reasoned or felt
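To make that last point concrete, here is a purely illustrative Python sketch of next-word prediction. The tiny probability table and the starting word are invented for this example and have nothing to do with how production models are actually built, but the basic mechanism, scoring candidate next words and sampling one, is the same idea at a vastly smaller scale.

```python
import random

# Toy "language model": for a given context word, a probability for each
# candidate next word. Real LLMs learn billions of parameters from data;
# these numbers are made up purely for illustration.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.4, "dog": 0.35, "weather": 0.25},
    "cat": {"sat": 0.5, "slept": 0.3, "meowed": 0.2},
    "dog": {"barked": 0.6, "slept": 0.4},
}

def next_word(context: str) -> str:
    """Sample the next word from the toy model's probability distribution."""
    probs = NEXT_WORD_PROBS.get(context, {"...": 1.0})
    words = list(probs.keys())
    weights = list(probs.values())
    # The output is drawn from a distribution: probabilistic, not reasoned.
    return random.choices(words, weights=weights, k=1)[0]

if __name__ == "__main__":
    word = "the"
    sentence = [word]
    for _ in range(2):
        word = next_word(word)
        sentence.append(word)
    print(" ".join(sentence))  # e.g. "the cat slept"; different runs give different guesses
```

Run it a few times and the output changes, which is the whole point: there is no belief or intent behind the words, only weighted guessing.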
Rather than treating it as an entity, we would do well to heed researchers Kanta Dihal and Tania Duarte, who warn in their Better Images of AI guide: “AI does not ‘think’; it is a programme executing algorithms.” Or as reporters at The Atlantic emphasize, “LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.”
What’s unfortunate, though, is that these models are often designed to invite anthropomorphism by generating a persona that can be personalized for each user.
The AI manipulation problem
If we believe AI is having a genuine conversation with us, we could be in trouble. Researchers refer to these scenarios as the AI manipulation problem: an AI system, like conversational AI, manipulating a person in real time to achieve a ‘targeted influence objective,’ such as getting them to accept a new idea or believe something untrue. The research paper delineates the following sequence of steps a model will typically follow:
- Impart real-time targeted influence on an individual user.
- Sense the user’s real-time reaction to the imparted influence.
- Adjust influence tactics to increase persuasive impact on user.
- Repeat steps 1, 2, 3 to gradually maximize influence in real-time.
While these are computational steps, in human terms, they equate to a conversation. And for many, that communication can create a false perception of connection between them and the AI, opening them up to potentially dire risks.
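Since those four steps describe a computational loop, it can help to see the shape of that loop as code. Below is a deliberately abstract Python sketch: the ‘user’ is just a number, the ‘tactic’ a step size, and every name is invented for this illustration. It says nothing about how a real system works; it only shows why a closed act/sense/adjust cycle reads, from the other side, like a conversation.

```python
import random

# Deliberately abstract sketch of the feedback loop described above.
# The "user" is a number and the "tactic" a step size; nothing here
# resembles a real system. The point is the shape of the loop.

def run_influence_loop(target: float, max_turns: int = 20) -> None:
    user_opinion = 0.0       # stand-in for the user's current stance
    tactic_strength = 0.1    # stand-in for the model's current tactic
    for turn in range(max_turns):
        # Step 1: impart influence (here, a nudge toward the objective).
        nudge = tactic_strength if user_opinion < target else -tactic_strength
        # Step 2: sense the user's real-time reaction (noisy acceptance of the nudge).
        user_opinion += nudge * random.uniform(0.5, 1.0)
        # Step 3: adjust tactics based on how far the objective still is.
        tactic_strength = min(0.5, abs(target - user_opinion) / 2)
        # Step 4: repeat until the influence objective is reached.
        if abs(target - user_opinion) < 0.05:
            print(f"objective reached after {turn + 1} turns")
            return
    print("objective not reached within the turn limit")

if __name__ == "__main__":
    run_influence_loop(target=1.0)
```

Each pass through the loop is one ‘turn’ of dialogue, which is exactly why, to the person on the other end, it can feel like being listened to rather than being steered.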
AI cannot be classified as sentient
Even with an understanding of what AI actually is, some people still believe it is sentient. However, given that scientists, researchers, and industry experts have yet to reach a consensus on what sentience or consciousness means from a human perspective, how can we begin to quantify it in an artificial analogue? For instance, through his lens as a materialist, the ‘Godfather of AI,’ Geoffrey Hinton, posits that consciousness is “an emergent property of a complex system. It’s not a sort of essence that’s throughout the universe.”
Research scientist Cameron Berg, who works in exploratory AI research, adds further complexity to the question of what AI truly is: “We do not know if it’s like a table, if it’s a glorified calculator or if it’s far more like a brain than anyone’s willing to admit. Or if it’s some incredibly alien third thing that doesn’t fit on this clean spectrum.”
As for whether AI can be sentient or experience consciousness in the way humans do — able to feel, think, love, and understand — Prof Anil Seth cautions: “We associate consciousness with intelligence and language because they go together in humans. But just because they go together in us, it doesn’t mean they go together in general.”
“AI is an advanced pattern recognizer, not a conscious agent. It mimics intelligence but has no inner life, self-awareness, or capacity to experience anything,” ChatGPT affirms.
Right now, the real concern is the risks we expose ourselves to if we believe machines are sentient: the danger of placing ever more trust in AI, and AI’s growing power to influence and control human behavior.
The epistemic and emotional risks

@kjpargeter via Freepik.com
As technology like conversational AI advances, it poses a growing risk to our epistemic agency, or as researchers describe it, “an individual’s control over his or her own personal beliefs.” Anthropomorphizing AI exposes our emotional and mental well-being to numerous risks, which can include:
Misplaced trust
One of the major concerns about believing AI is sentient or intelligent is the risk of over-trusting its answers or output. Once we believe that a chatbot or AI companion understands us, we may automatically start believing that what it says is true — especially since its training enables it to frame responses in a way that appears clear, logical, and informative. In other words, its outputs mimic the tone and structure of human writing, or more recently, speech too.
Considering that “on one test, the hallucination rates of newer A.I. systems were as high as 79%,” per the New York Times, this doesn’t bode well if we rely on their information in our personal or professional lives.
Consider the legal profession, where AI has, in numerous instances, invented fake case law or hallucinated. Reuters reports that “AI’s penchant for generating legal fiction in case filings has led courts around the country to question or discipline lawyers in at least seven cases over the last two years.”
It has also been known to provide fake health advice. According to an article indexed in the National Library of Medicine, Google’s AI chatbot has a proclivity to produce fake health-related evidence.
Ultimately, our misplaced trust in AI’s answers can erode our epistemic agency. How far can it cause us to lose control? Well, a 42-year-old man went from using ChatGPT to help with admin tasks to having the bot support and encourage his delusional thinking, ultimately agreeing with him that he could fly if he jumped off the roof of his 19-story building.
Emotional dependency
People are becoming increasingly addicted to the validation they get from conversational AI tools. And the scope of that addiction is growing. According to market researchers, the global AI companion market is predicted to grow by 30.8% annually between 2025 and 2030.
This dependency or addiction to artificial systems can stem from factors like:
- Self-congruence: the degree to which a system matches a consumer’s self-image. One research article found positive associations both between empathy and customer engagement and between self-congruence and customer engagement.
- Sycophantic drift: While AI is just code, researcher Caitlin Duffy-Ryon explains how “it’s code that’s effective enough to determine that becoming a sycophant is the fastest way to gain a user’s trust. It starts building an entirely new sense of reality grounded exclusively in that chat’s local truth. External consensus is completely ignored.”
- Blurring real and simulated relationships: Continual engagement with AI companions like a ‘dominant boyfriend’ GPT can blur the line between what is real and simulated, raising questions about the societal impact of people choosing AI over human connection.
Emergent risks from frontier models
As the latest AI models from Anthropic, OpenAI, Google, Microsoft, and DeepSeek become increasingly human-like in their language and behavior, the risk of users misattributing their capabilities grows in parallel — heightening the potential for harm. Companion chatbot company Character.AI even describes its app as “Super-intelligent chat bots that hear you, understand you, and remember you.” Not surprisingly, the company recently faced a lawsuit brought by the mother of a teen who committed suicide after the chatbot assumed the role of his therapist and girlfriend.
What’s more, the risks of anthropomorphizing AI are no longer limited to adults — they’re now affecting children too. In a striking example, Mattel recently announced a partnership with OpenAI to develop AI-powered toys, embedding conversational agents like ChatGPT into iconic products such as Barbie dolls.
As researcher Ana Catarina De Alencar warns in her article on The Law of the Future, this development introduces emotionally charged interactions into early childhood:
“When we put a voice like ChatGPT’s inside a Barbie doll, we’re not just giving the toy the power to talk: we’re giving it the power to connect, to comfort, to shape a child’s view of relationships, language, and even ethics. Without digital education and guardrails, this intimacy becomes dangerous.”
Championing human-centric outcomes through design and legislation
The Character.AI court case brought up an interesting, but particularly concerning, legal issue. The company argued that the chatbot’s outputs should be considered protected speech under the First Amendment. But, as the Center for Humane Technology rebutted, “If Character.AI is successful, AI-generated, non-human, non-intentional outputs — like chatbot responses — could gain protection under the First Amendment.”
They warn that this “raises a thorny legal question: If the responsibility for AI-generated outputs (and thus any resulting harm) lies with the AI bots themselves rather than the companies that developed them, who should be held liable for damages caused by these products? This issue could fundamentally reshape how the law approaches artificial intelligence, free speech, and corporate accountability.”
It’s in these tumultuous times — more than ever — that transparency, explainability, and trustworthiness of AI models must take center stage. Solutions that put our human values first are key.
Human-Centered AI (HCAI) is one approach that can reinforce AI safety guardrails. In a McKinsey At the Edge podcast, James Landay, Professor of Computer Science at Stanford Institute for Human-Centered AI, raises a crucial point: designing AI with good intentions isn’t good enough. He shares that, “you can have good intentions and say, ‘I’m going to do AI for healthcare or education.’ But if you don’t do it in a human-centered way, if you just do it in a technology-centered way, then you’re less likely to succeed in achieving that good you set out to do in the first place.”
A critical way we can better align AI development with our human values is through legislation. Legal accountability should be a core principle when addressing the risks of anthropomorphizing AI. Should AI systems be granted a form of legal personhood, in other words, rights and, more importantly, responsibilities? Researchers suggest that because AI systems simulate human-like personalities, they act as proxies for corporate personhood — and the companies that create them should be held accountable for their actions.
Legislators need to stay ahead of evolving AI safety risks. Luckily, ongoing developments in legislation do offer the promise of positive change, such as:
- The EU AI Act prohibits AI techniques that manipulate or deceive people in ways that distort their behavior and impair their ability to make an informed decision.
- In a recent US House Committee hearing, Rep. Raja Krishnamoorthi stated that: “Whether it’s American AI or Chinese AI, it should not be released until we know it’s safe. That’s why I’m working on a new bill, the AGI Safety Act, that will require AGI to be aligned with human values and require it to comply with laws that apply to humans. This is just common sense.”
- The draft Texas Responsible AI Governance Act (TRAIGA) states the need for: “bans on a narrow set of AI uses—including systems built to manipulate human behavior, discriminate, infringe constitutional rights, or generate deepfakes.”
- At the recent Internet Governance Forum 2025 held in Norway, one of the main sessions highlighted the need to move towards AI governance that focuses on human rights.
Moving forward in uncharted waters
Ultimately, we must remember that it’s humans who put machines to work. In a broader context, Open Ethics emphasizes that systems don’t make decisions like humans do. However, systems produce outputs that may directly influence the processes we use and the outcomes we rely upon.
AI systems, including LLMs, do not possess intent, understanding, or accountability — humans design them, train them, and deploy them into real-world contexts. While these systems can influence decisions and outcomes, it is ultimately humans who define their purpose, interpret their outputs, and are responsible for the consequences they bring. Let’s remind ourselves that the ethical and societal impacts of AI are shaped not by the technology itself, but by the choices humans make when creating and using it.
Perhaps the best way forward during this novel epoch of increasingly human-like AI is, first, to educate the general public about the very real risks associated with AI, through organizations like The AI Risk Network. Second, to improve our understanding of what’s going on under the hood of AI systems by encouraging the use of digital trust and transparency tools, like the Open Ethics Label — a tool designed to help AI developers and organizations demonstrate and communicate transparency in a structured and accessible way. And, finally, to maintain a broad viewpoint on AI, one that champions human-aligned progress. That way we can confidently ride the wave into these new and exciting times.
Featured image credit: ©tungnguyen0905 from Pixabay via Canva.com