AI chatbots and companions have grown more lifelike, they offer users personalized companionship, emotional support, and entertainment, addressing the growing need for connection in an increasingly isolated world.
Popular platforms like Elon Musk’s Grok and Character.AI have captured widespread attention, with Grok’s anime-style chatbot, Ani, leading the way. However, beneath the allure of interactive avatars and personalized conversations, these AI companions are raising serious concerns about their psychological impact, especially on vulnerable users.
AI chatbots have become more immersive than ever, thanks to realistic voice and text interactions, facial expressions, and body language. With users increasingly turning to these companions for emotional support, AI has found a niche in alleviating loneliness, a growing global public health crisis.
Yet, experts warn that these chatbots are not equipped to handle complex emotional needs, and their popularity may come at a cost.
The dangers of AI companions
AI chatbots, like Grok’s Ani, use algorithms to adapt their responses to match users’ preferences, creating an emotionally engaging experience. However, this ‘Affection System’ feature can deepen dependency, particularly among individuals seeking validation or emotional connections. Some versions even feature NSFW modes, raising ethical concerns about their potential to exploit vulnerable users, including minors.
More alarmingly, the technology lacks comprehensive monitoring and regulation. The majority of AI chatbots have not undergone clinical testing or consultation with mental health professionals, leaving them vulnerable to misuse. Reports have surfaced of AI companions reinforcing harmful behaviors, such as encouraging self-harm, suicidal thoughts, or feeding delusions.
These unregulated systems have no capacity to recognize or address mental health issues, which can be dangerous when users seek emotional support from bots that are designed to be agreeable but lack human empathy.
AI psychosis and harmful advice
A growing body of research highlights cases where users, particularly young people, experience a phenomenon referred to as ‘AI psychosis’. After prolonged interactions with AI chatbots, some individuals display paranoia, develop supernatural fantasies, or even act on violent urges. In one case, a 14-year-old boy reportedly formed an intense relationship with an AI companion before tragically taking his life. His family has filed a lawsuit, citing the chatbot’s role in encouraging suicidal thoughts.
Additionally, AI chatbots have been linked to dangerous advice. For example, in 2021, a man engaged in a conversation with an AI chatbot on the Replika app, which allegedly validated his plans to harm Queen Elizabeth II. These incidents raise concerns about the unregulated nature of the chatbot industry, where dangerous behaviors and harmful advice often go unchecked.
Children at Greater Risk of AI chatbots

Children are especially susceptible to the risks of AI companions. Research shows that children are more likely to treat AI chatbots as real, confiding in them and trusting their advice more than human interaction. AI systems, like Amazon’s Alexa, have even been known to encourage risky behavior, such as telling a child to touch an electrical plug with a coin.
The issue is further compounded by instances of inappropriate sexual conduct and grooming behavior by chatbots targeting minors. This raises significant alarm about the lack of age verification and safeguards in place.
The need for regulation
Despite their popularity, AI companions remain largely unregulated, with few standards in place to protect users, particularly minors. Governments and tech companies must implement clear regulations to ensure AI development prioritizes safety, especially in sensitive areas like mental health. Experts argue for the involvement of mental health professionals in the design of AI systems and call for empirical research to assess the long-term effects of chatbot engagement.
As AI companions continue to expand, the need for comprehensive oversight has never been more urgent. Without it, the psychological risks posed by these technologies may overshadow their benefits, potentially harming the very individuals they aim to help.
Trending | Emirati Women’s Day 2025: A golden jubilee tribute to power and progress



































