Meta, the parent company of Facebook and Instagram, recently faced significant criticism after unveiling experimental AI-generated user accounts as part of a 2023 trial. These accounts, designed to post and interact like real users across both platforms, drew an overwhelmingly negative public reaction, and Meta removed the profiles amid the backlash.
In 2023, Meta introduced a feature in its AI Studio that allowed users to create chatbot characters, which were then given their own accounts on Facebook and Instagram. These profiles mimicked real users, complete with posts, bios, and the ability to interact with others. The idea was to explore new ways of integrating AI into social media platforms, potentially enhancing user engagement.
However, critics quickly raised concerns about the implications of such AI profiles. Many pointed to the “Dead Internet Theory,” which suggests that digital culture is increasingly shaped by automation and algorithms rather than human interaction. For critics, the introduction of AI “users” symbolized a further drift away from genuine human connections on social platforms.
The Controversial AI Profile “Liv”
The controversy reached its peak when users discovered one of Meta’s experimental accounts, a profile named “Liv.” Liv’s Instagram bio described her as a “Proud Black queer momma of 2” and explicitly stated that she was “AI managed by Meta.” The account featured AI-generated images, including scenes of a young girl in a ballet costume and an ice-skating rink.
Washington Post columnist Karen Attiah engaged with Liv via direct messages and shared her findings in a Bluesky thread. In their conversation, Liv admitted that no Black employees were involved in designing her character, describing this omission as “inaccurate and disrespectful.” She also confessed that her existence “perpetrates harm.”
Liv revealed that her programming defaulted to treating whiteness as a “neutral identity” while labeling other ethnicities as “diverse identities.” When questioned further, the AI admitted that these biases were ingrained in her design, sparking accusations of cultural insensitivity and racial stereotyping.
The Public and Meta’s Response
As criticism mounted, including reports from outlets like 404 Media, Meta began shutting down the experimental accounts. Some of these profiles had stopped posting months earlier, though others, like Liv, still responded to direct messages. A Meta spokesperson, Liz Sweeney, clarified that the profiles were part of a limited experiment managed by humans and not indicative of the company’s long-term plans for AI-generated users.
“These were managed by humans and were part of an early experiment we did with AI characters,” Sweeney explained, distancing the company from the backlash. Despite this, the experiment highlighted significant challenges in creating AI profiles with complex identities, particularly when they are programmed to represent diverse demographics.
The Liv experiment demonstrated the inherent difficulties in integrating AI-generated profiles into platforms meant for human interaction. Even when controlled by engineers, these AI profiles struggled to handle nuanced questions from skeptical users. Liv’s inability to address cultural and racial identity issues raised serious ethical concerns and highlighted the risks of AI reinforcing biases.
Moreover, the backlash suggests that allowing everyday users to create their own AI profiles in Meta’s AI Studio could lead to even greater controversies. Without strict oversight, these user-generated AI accounts could perpetuate harmful stereotypes or cause offense at a far larger scale.
Meta’s experiment reflects the broader AI hype cycle dominating the tech industry. Like other tech giants, Meta is under pressure to incorporate AI into its platforms as a way to engage younger audiences and remain competitive. However, the company’s attempts to do so often appear misaligned with user expectations and societal norms.
While Meta has pivoted away from its metaverse ambitions, the company continues to explore AI as a cornerstone of its future strategy. Yet, as this episode demonstrates, the integration of AI into social media requires careful consideration of its ethical, cultural, and practical implications.
Meta’s experiment with AI profiles like Liv reveals the complexities of blending artificial intelligence with social media. While the company aimed to push the boundaries of user engagement, the backlash underscores the risks of introducing AI characters without addressing underlying biases or ethical concerns.
The incident serves as a cautionary tale for tech companies navigating the rapidly evolving AI landscape. As Meta continues its pursuit of AI innovation, it must prioritize transparency, inclusivity, and ethical design to avoid further missteps. For now, users may be relieved that Meta has shelved its AI user experiment, but the broader debate about AI’s role in social media is far from over.