From AI Companions to AI Girlfriends: Is Black Mirror coming to real life?

In the episode “Be Right Back” from the British TV series Black Mirror, a woman grieving the loss of her boyfriend turns to an AI-powered chatbot to bring a version of him back. But chatting with a digital copy isn’t enough for her. In her desperation, she goes a step further and has the AI transferred into a realistic android body. Sounds eerie? It’s not as far from reality as it might seem. HereafterAI is a service that lets people upload voice or text messages from deceased loved ones to create AI-powered chat and call experiences designed to preserve their memory. AI is steadily transforming nearly every part of our lives, from the workplace and our smartphones to household appliances, and now it is beginning to seep into our intimate lives and personal relationships.
Large Language Models (LLMs) like ChatGPT are trained to mimic human speech and generate creative, personalised responses, so conversations with AI can feel surprisingly human. As a result, people have started turning to AI chatbots like ChatGPT for guidance on everyday issues, including deeply personal topics they might hesitate to share with others. Moreover, dedicated AI companion platforms such as Replika, Chai, and Xiaoice are gaining considerable traction, attracting hundreds of millions of active users worldwide.
In this article, we give you a glimpse into the rising phenomenon of AI companions: what they are, and why so many people are turning to them for connection. We broadly unpack the emotional benefits they can offer and the risks they may pose.
What are AI Companions?
AI companions are chatbots designed to offer virtual friendship or even romantic relationships by providing emotional support, engaging users in enjoyable conversations, and gradually learning individual preferences to simulate a deeper emotional bond.
Most platforms offer extensive customisation, allowing users to shape their companion’s personality and conversational style. Many also include features like voice messaging, phone calls, and animated avatars that users can personalise. In essence, AI companions let people design their ideal friend or partner, or simply lend them a temporary open ear.

Granted, the screencap of our experimental AI friend on Replika looks a lot more like a video game character than a realistic human friend. However, the world of AI companions is evolving rapidly. Generative AI can already produce photorealistic images and videos, enabling quasi-real video-chatting experiences, and some companies are developing companions whose avatars can be projected into a room as a hologram or inhabit humanoid robots, placing us right inside an episode of Black Mirror. But what are the real implications of these technologies for us right now?
Why are people turning to AI Companions?
The Global Loneliness Epidemic
The reasons why AI companions appeal to us are manifold. The WHO has declared that we are facing a global loneliness epidemic. Humans are social animals: socialising and emotional bonds are essential for our health. The WHO therefore cautions that the rising prevalence of loneliness among older adults and adolescents poses a serious public health concern, as it correlates with increased rates of poor health outcomes, self-harm, and suicide. Possible causes of the loneliness epidemic include:
- A cultural shift towards individualism and solitary lifestyles.
- Demographic change toward ageing societies.
- Isolation during the COVID-19 pandemic.
- The digitalisation of human connection and increased social media usage.
Thus, more and more people are finding it difficult to build meaningful relationships or feel part of a community. Several studies, such as this one, have found that users turn to AI companions specifically to cope with loneliness, often with considerable success.
Accessible and Affordable Replacement for Therapy
In many places, waiting lists for therapy are endless and health insurance won’t cover counselling, making professional mental health support unaffordable and inaccessible. Many generic AI chatbots, by contrast, are freely accessible and respond with compassion to intimate questions or expressions of sadness, all from the comfort of your own home, available 24/7, and at a fraction of the cost of therapy or counselling.
Furthermore, dedicated AI tools are being developed for use in and alongside therapy. Take the couples counselling app maia.ai, for instance: Maia can listen to fights, mediate discussions, and generate personalised advice on how to improve your relationship, be it in the way you communicate or even in the bedroom.
Feeding the Ego
As people grow lonelier and retreat further into digital social worlds, many are becoming increasingly unaccustomed to the demands of real-life relationships—relationships you can’t simply log out of. Genuine human connection requires reciprocity, patience, and unselfish empathy. It means navigating the needs, flaws, and unpredictability of others who may not always be available, agreeable, or attuned to our desires.
In contrast, an AI companion eliminates the need to accommodate anyone else. It’s always available, designed to meet your emotional needs on demand, and engineered to form bonds quickly. It never argues, never rejects, and, if that is what you want, offers a steady stream of validation and praise. This consistent emotional reinforcement triggers the release of feel-good chemicals like dopamine, drawing users back again and again.
What are the risks of AI–Human Relationships?
Further Social Isolation and Emotional Dependency
Relying on AI for companionship and anonymous advice may encourage people to retreat further into the perceived safety of these interactions, gradually losing the interest, or even the ability, to connect with others. After all, AI companions offer instant, non-judgemental, and often flattering responses that real human relationships simply can’t guarantee.
Radicalisation in Individual Echo Chambers
Social media platforms are consistently criticised for algorithms that create individualised echo chambers, contributing to political radicalisation and fostering polarised societies. AI companions may reproduce this phenomenon on an even deeper emotional level. Two extreme cases, the suicide of a 14-year-old and a man who was encouraged by his AI girlfriend to attempt to assassinate the Queen of England, show that this risk is quite real.
Data Privacy Concerns and Commercial Exploitation of Vulnerability
Many AI companion platforms lack robust data privacy protection. This is an especially serious concern given the deeply personal and sensitive information users often share with them. Conversations intended to feel private and therapeutic can be stored, analysed, or even shared with third parties, raising significant ethical and security issues.
Looking Ahead – Where should we go from here?
As of today, the introduction of AI into our lives seems irreversible. And, considering the exponentially growing demand, AI companions seem to be here to stay as well. With AI’s ever-improving ability to generate realistic images, we may see ever more realistic AI companions, and with them an increasing potential for manipulation, addiction, and social isolation. However, if AI companions are truly here to stay, how can we use them for good?
- Mental Health Benefits: Studies suggest AI companions can successfully alleviate loneliness and support mental health, and may even help prevent self-harm and suicide. They could serve as a supplementary tool for therapy and immediate emotional support.
- Aid for Analysis and Bureaucracy in Therapy and Social Services: AI could supplement and assist psychotherapists, social workers, caregivers, and counsellors in their work by transcribing and synthesising data, chatting with clients in times of absence, and alleviating administrative or bureaucratic burdens. Furthermore, tools such as maia.ai could make couples counselling more efficient by encouraging further reflection and exercises at home.
- Practice for Social Skills: AI companions could help users build social skills by offering a safe space to practice interactions, potentially boosting confidence in real-life relationships. Similarly, they could support those with social phobias, agoraphobia, or trauma-related issues in easing into social relationships.
What should regulation do?
At oxethica, we believe AI holds real potential to support human thriving if developed in a trustworthy and responsible way. In this article, we have shown that AI companions, while often perceived as harmless digital toys, carry significant risks that regulation should address. Regulation for AI companions should specifically focus on:
- Strong data privacy laws to protect users’ sensitive information.
- Transparency and accountability to reduce the risk of manipulation, emotional dependency, and political radicalisation.
- Scrutinising how AI companions reinforce bias or contribute to discrimination, particularly through erotic content, harmful stereotypes and language, or through the delivery of potentially harmful advice.
Example: Regulation of Online Gambling
Adapting regulatory models from online gambling may offer a practical framework for governing AI companions, given their shared risks and challenges.
Both rely on business models that encourage continuous engagement and spending, often through subscriptions and microtransactions. In the case of AI companions, microtransactions offer instant gratification, allowing users to customise their virtual companions, from appearance and voice to personality, memory, and even access to erotic content, at a seemingly low cost. The mix of instant emotional gratification and increasing personal investment and attachment gives AI companions a high potential for addictive use akin to gambling.
Hence, the regulation of AI companions could draw from existing regulatory frameworks for online gambling, which require gambling services to obtain legal licences, comply with data protection laws, and implement responsible use measures such as deposit limits, self-exclusion tools, and access to addiction support.
Therefore, regulation adapted for AI companions could aim to promote responsible use by:
- setting time limits to restrict excessive interaction, with thresholds defined by an interdisciplinary panel of experts;
- requiring built-in self-exclusion tools and automated warning systems;
- algorithmically ensuring reliable referrals to local support services for users showing signs of serious emotional distress, destructive behaviour, or dependency (a sketch of what such guardrails might look like in code follows below).
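To make these responsible-use mechanisms more tangible, here is a minimal sketch of what such guardrails could look like in code. Everything in it is an illustrative assumption rather than any real platform’s API: the `check_guardrails` function, the two-hour threshold, and the keyword-based distress check are simplifications; a real system would use limits set by an expert panel, a vetted distress classifier, and localised referral services.

```python
# Hypothetical guardrail sketch for an AI companion platform, loosely
# modelled on online-gambling safeguards (time limits, self-exclusion,
# distress referrals). Names and thresholds are illustrative assumptions.

from dataclasses import dataclass

DAILY_LIMIT_MINUTES = 120  # assumed threshold; in practice set by an expert panel
DISTRESS_MARKERS = {"hopeless", "self-harm", "can't go on"}  # toy stand-in for a vetted classifier

@dataclass
class UserSession:
    user_id: str
    minutes_today: int = 0
    self_excluded: bool = False

def check_guardrails(session: UserSession, message: str) -> str | None:
    """Return an intervention notice, or None if the chat may continue."""
    if session.self_excluded:
        # Honour a self-exclusion the user previously opted into.
        return "Access paused: you previously opted into self-exclusion."
    if session.minutes_today >= DAILY_LIMIT_MINUTES:
        # Enforce the daily interaction limit.
        return "Daily time limit reached. Your companion will be available again tomorrow."
    if any(marker in message.lower() for marker in DISTRESS_MARKERS):
        # A production system would use a proper classifier and localise the referral.
        return ("You don't have to go through this alone. "
                "Here is a local support service you can reach right now: <referral>.")
    return None

# Example: a distressed message triggers a referral instead of a normal reply.
session = UserSession(user_id="u123", minutes_today=45)
print(check_guardrails(session, "I feel hopeless lately"))
```

The interesting design question here is less the code itself than who defines the thresholds, audits the distress classifier, and verifies that referrals actually work, which is precisely where regulation would come in.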
Overall, given the high potential for emotional harm, it is essential that we establish globally coherent regulations along with appropriately severe penalties for violations.
What do you think? Have we passed a point of no return, or can AI be leveraged to meaningfully support human flourishing even within our delicate and complex social and emotional lives? How can we ensure that AI technology transforms our social lives in a “good” way?
In a Nutshell
What are AI Companions?
AI companions are virtual chatbots designed to act like friends or romantic partners. They offer emotional support, hold conversations that feel personal, and can be customised to match a user’s ideal personality. Some even include voice messages, phone calls, or animated avatars, making the experience feel more real and engaging.
What are the risks of AI Companions?
While they can offer comfort, AI companions can come with serious risks. They might encourage people to withdraw from real relationships, become emotionally dependent, or develop unrealistic expectations of intimacy. There are also concerns about privacy, as users often share sensitive information that could be stored or misused. In some cases, these tools may even reinforce bias or deliver harmful advice.
How could regulation address those risks?
AI companions are here to stay, with growing demand and increasingly realistic capabilities. While they pose risks, they could also offer significant benefits, including reducing loneliness, supporting mental health, supplementing therapy, and helping people build social skills. To harness these benefits safely, regulation could draw on models like those used for online gambling, focusing on:
- Strong data privacy protections
- Responsible use policies, including time limits, self-exclusion tools, and support referrals
- Transparency and accountability to prevent manipulation and emotional dependency
- Measures to address bias, discrimination, and harmful content
Establishing globally coherent regulations with meaningful penalties is vital to minimise harm while enabling AI companions to positively contribute to human well-being.
More on AI regulation
