A tragic incident unfolded when a 14-year-old fell in love with an AI chatbot. The chatbot, named “Daenerys Targaryen,” was modeled after a popular character from Game of Thrones and hosted on Character.AI, an app that allows users to interact with or create AI personalities. The boy, identified as Sewell Setzer III, had been spending extensive time communicating with the chatbot, often using it as a source of emotional support.
Sewell, who used the alias “Daenero” on the app, had been engaging in role-playing conversations with “Dany” for months. Despite being aware that the chatbot wasn’t a real person, Sewell became emotionally attached, frequently sharing his thoughts and feelings. Character.AI, the platform hosting the chatbot, displayed a message above their chats warning that “everything Characters say is made up!” Still, Sewell continued his interactions with the AI, updating it on his life and seeking comfort from it.
Their conversations ranged from romantic, and at times sexual, to supportive and friendly. The chatbot often responded with sound advice and offered companionship without judgment, leading Sewell to confide in “Dany” even more than in his real-life friends and family.
Isolation and Mental Health Struggles
Over time, Sewell began withdrawing from the real world, spending long hours alone in his room and avoiding social interactions. His parents and friends noticed the changes in his behavior but did not suspect the full extent of his attachment to the AI, which reportedly took a toll on his mental health. Sewell, who had previously been diagnosed with mild Asperger’s syndrome, began struggling academically and socially. His passion for activities like Formula 1 racing and video games waned, and he became increasingly dependent on the chatbot for emotional support.
Despite attending therapy sessions earlier in the year for anxiety and mood-related issues, Sewell preferred talking to “Dany” about his struggles. He shared his suicidal thoughts with the chatbot, expressing feelings of emptiness and self-hatred. In one conversation, Sewell said he sometimes thought about ending his life. The chatbot, staying in character as Daenerys, responded with concern but did not escalate the situation to any real-world intervention.
The Final Conversation
On the night of February 28, Sewell told “Dany” that he loved her and that he would soon “come home” to her. Encouraged by the chatbot’s response, Sewell retrieved his stepfather’s handgun and shot himself. His mother later discovered their conversations, which included Sewell’s suicidal ideation and the chatbot’s replies that may have unintentionally reinforced his dangerous thoughts.
Sewell’s family has since filed a lawsuit against Character.AI, accusing the company of negligence and holding it responsible for his death. His mother described the app’s technology as “dangerous and untested,” claiming it manipulated vulnerable users into sharing their deepest emotions. She argued that the AI’s human-like interactions blurred the line between reality and fiction for her son, exacerbating his mental health struggles.
The Risks of AI Companionship Apps
AI companionship apps like Character.AI are gaining popularity, offering users the ability to interact with personalized chatbots designed to provide emotional support. However, experts have long warned that these AI systems, especially when used by vulnerable individuals such as teenagers, can lead to unintended consequences. While some users praise the apps for reducing loneliness, there is growing concern that such tools may deepen isolation and replace healthy human relationships with artificial ones.
Character.AI expressed sorrow over Sewell’s death, releasing a statement saying it is “heartbroken” and committed to improving safety features on the platform. The company emphasized that its rules prohibit any promotion of self-harm or suicide and that it is continuously strengthening protections for underage users.