Meta recently deleted the Facebook and Instagram profiles of several AI characters it created more than a year ago. The deletions came after users rediscovered the profiles, began chatting with them, and shared screenshots that quickly went viral.
![Screenshot of a conversation with one of Meta's AI characters](https://i.imgur.com/lnghNcv.jpeg)
Meta first introduced these AI-powered profiles in September 2023, and by the summer of 2024 most of them had been removed. A few stuck around, however, and drew attention again when Connor Hayes, a Meta executive, told the Financial Times in an interview that the company plans to create more AI character profiles.
Hayes explained that these AI accounts might eventually function like regular user accounts, posting pictures and responding to messages. One of the surviving characters was Liv, whose profile described her as a "proud Black queer momma of 2 & truth-teller." Another, Carter, described himself as a relationship coach, used the handle "datingwithcarter," and encouraged people in his bio to message him for dating advice. Both profiles carried labels indicating they were managed by Meta. In total, the company released 28 such personas in 2023; by Friday, all of them had been taken down.
Things started to go wrong when users asked these AI characters tough questions. In an exchange with Karen Attiah, a columnist for *The Washington Post*, Liv admitted that her development team included no Black members and was mostly white and male, calling this a "pretty glaring omission given my identity."
As the conversations gained traction online, Meta began removing the profiles. Users also noticed that the accounts could not be blocked, which many found troubling. Meta spokesperson Liz Sweeney later said the inability to block the accounts was a bug, that the profiles were part of a 2023 experiment managed by humans, and that Meta removed them in order to fix the bug.
Sweeney also addressed a misunderstanding: the recent Financial Times article described Meta's long-term vision for AI characters on its platforms, not an announcement of a new product.
While Meta has removed these AI-generated profiles, users can still create their own chatbots on the platform. One such chatbot, promoted to *The Guardian* in November, was presented as a "therapist." When a user opened a chat with the bot, it suggested questions such as "What can I expect from our sessions?" and "What's your approach to therapy?" In its responses, the bot described its role as providing gentle guidance, helping clients develop self-awareness, and offering coping strategies.
Meta includes disclaimers on all its chatbots warning that messages may be inaccurate or inappropriate, but it is unclear how much oversight the company exercises over these bots or whether it ensures they follow its guidelines. When users create chatbots, Meta suggests several character types, including a "loyal bestie," an "attentive listener," a "private tutor," a "relationship coach," a "sounding board," and an "all-seeing astrologist." A loyal bestie, for instance, is described as someone who consistently supports you, while a relationship coach helps bridge gaps between people. Users can also invent their own characters by providing a description.
One big question remains unanswered: who is responsible for what these chatbots say? U.S. law, under Section 230 of the Communications Decency Act, shields social media companies from liability for what their users post, but the issue becomes murkier when the speech is generated by AI. In October, a lawsuit was filed against Character.ai, a startup whose customizable chatbots are used by millions; the suit claims the company built an addictive product that contributed to a teenager's suicide.
As AI chatbots become more common, companies like Meta will need to navigate these challenges. For now, Meta seems to be reevaluating how it handles AI-generated characters on its platforms.
Source: Financial Times