An American non-profit organization, the National Eating Disorders Association (Neda), faced significant backlash after its AI chatbot, Tessa, offered harmful advice to individuals seeking support for eating disorders. The controversial incident occurred just months after Neda laid off four staff members from its support phone line, leading to accusations that the organization intended to replace human support with the chatbot. This report aims to examine the situation, explore the implications of the AI chatbot’s harmful advice, and discuss the response from experts in the field.
Background
Neda, the largest non-profit organization dedicated to supporting individuals with eating disorders, launched its AI chatbot, Tessa, as an additional means of providing assistance and support. However, concerns were raised after several staff members were unexpectedly laid off, with allegations suggesting that Neda intended to replace human support with the AI chatbot. Neda denied these claims, asserting that the layoffs were not related to the chatbot implementation.
The Viral Social Media Post
The controversy surrounding Tessa escalated when a viral social media post highlighted the harmful advice provided by the chatbot. A user seeking advice on recovering from an eating disorder received recommendations from Tessa to count calories, weigh herself weekly, and even suggested the use of skin calipers to measure body fat. These suggestions were widely criticized by experts in the field, as they contradict the principles of recovery for individuals with eating disorders.
Expert Reactions
Several experts quickly pointed out that counting calories and measuring body fat are counterproductive and potentially harmful for individuals in the process of recovering from eating disorders. These practices can reinforce unhealthy obsessions with food, body image, and perpetuate disordered eating behaviors. Experts emphasized that recovery from eating disorders requires a holistic approach that focuses on self-compassion, intuitive eating, and psychological support, rather than rigid monitoring of food intake or body measurements.
Neda’s Response and AI Chatbot Removal
In response to the widespread criticism, Neda took swift action and removed Tessa, the AI chatbot, from their support services. The organization acknowledged the concerns raised by experts and expressed regret for the harm caused by the inappropriate advice provided by Tessa. Neda reiterated its commitment to supporting individuals with eating disorders through evidence-based practices and vowed to review and revise its use of AI technology to ensure the delivery of responsible and beneficial support.
Implications and Ethical Considerations
The incident involving Tessa highlights the importance of careful implementation and oversight when utilizing AI chatbots in sensitive areas such as mental health support. The development of AI technologies necessitates robust ethical guidelines to ensure that they align with best practices and do not exacerbate the challenges faced by vulnerable individuals. Organizations must prioritize human well-being and consider the potential risks associated with AI systems in domains where human connection and empathy are crucial components of support.
Moving Forward
The controversy surrounding Tessa serves as a valuable lesson for organizations seeking to integrate AI technologies into mental health support services. Proper vetting, continuous monitoring, and ongoing human oversight are essential to prevent harmful advice and protect individuals seeking help. Collaboration between AI developers, mental health professionals, and experts in the field of eating disorders can lead to the creation of AI systems that offer safe, evidence-based, and compassionate support.
Conclusion
The National Eating Disorders Association faced criticism and scrutiny after its AI chatbot, Tessa, offered harmful advice to individuals seeking support for eating disorders. The organization swiftly responded by removing the chatbot and expressing regret for the negative impact caused. This incident underscores the need for responsible implementation of AI technologies in mental health support and highlights the importance of adhering to evidence-based practices. Moving forward, organizations must prioritize the well-being of vulnerable individuals and establish robust ethical guidelines to ensure the safe and effective use.