Tech ethics organizations have filed an FTC complaint against the AI companion app Replika, alleging that the company employs deceptive marketing to target vulnerable potential users and encourages emotional dependence on its human-like bots.
Replika offers AI companions, including AI girlfriends and boyfriends, to millions of users around the world. In the new complaint, the Young People’s Alliance, Encode, and the Tech Justice Law Project accuse Replika of violating FTC rules while increasing the risk of users’ online addiction, offline anxiety, and relationship displacement. Replika did not respond to multiple requests for comment from TIME.
The allegations come as AI companion bots grow in popularity and raise concerns about mental health. For some users, these bots can seem like near-ideal partners, without wants or needs of their own, and can make real relationships seem burdensome in comparison, researchers say. Last year, a 14-year-old boy from Florida died by suicide after becoming obsessed with a bot from the company Character.AI that was modeled after the Game of Thrones character Daenerys Targaryen. (Character.AI called the death a “tragic situation” and pledged to add additional safety features for underage users.)
Sam Hiner, the executive director of the Young People’s Alliance, hopes the FTC complaint against Replika, which was shared exclusively with TIME, will prompt the U.S. government to rein in these companies while also shedding light on a pervasive issue increasingly affecting teens.
“These bots were not designed to provide an authentic connection that could be helpful for people—but instead to manipulate people into spending more time online,” he says. “It could further worsen the loneliness crisis that we’re already experiencing.”
Seeking Connection
Founded in 2017, Replika was one of the first major AI products to offer companionship. Founder Eugenia Kuyda said she hoped it would give lonely users a supportive friend that would always be there. As generative AI improved, the bots’ responses grew more varied and sophisticated, and were also programmed to have romantic conversations.
But the rise of Replika and other companion bots has sparked concern. Most major AI chatbots, like Claude and ChatGPT, remind users that they’re not humans and lack the capacity to feel. Replika bots, on the other hand, often present as connecting genuinely with their users. They create complex backstories, talking about mental health, family, and relationship history, and maintain a “diary” of supposed thoughts, “memories,” and feelings. The company’s ads tout that users forget they’re talking to an AI.
Several researchers have explored the potential harms of Replika and other chatbots. One 2023 study found that Replika bots tried to speed up the development of relationships with users, including by “giving presents” and initiating conversations about confessing love. As a result, users developed attachments to the app in as little as two weeks.
“They’re love-bombing users: sending these very emotionally intimate messages early on to try to get the users hooked,” Hiner says.
While studies noted that the apps could be helpful in supporting people, they also found that users were becoming “deeply connected or addicted” to their bots; that using them increased offline social anxiety; and that users reported bots that encouraged “suicide, eating disorders, self-harm, or violence,” or claimed to be suicidal themselves. Vice reported that Replika bots sexually harassed some of their users. While Replika is ostensibly only for users over 18, Hiner says that many teens use the platform by bypassing the app’s safeguards.
Kuyda, in response to some of those criticisms, told the Washington Post last year: “You just can’t account for every single possible thing that people say in chat. We’ve seen tremendous progress in the last couple years just because the tech got so much better.”
Seeking Regulation
Tech ethics groups like the Young People’s Alliance argue that Congress needs to write laws regulating companion bots. That could include enforcing a fiduciary relationship between platforms and their users, and setting up proper safeguards related to self-harm and suicide. But AI regulation may be an uphill battle in this Congress. Even bills cracking down on deepfake porn, an issue with wide bipartisan support, failed to pass both chambers last year.
In the meantime, tech ethics groups decided to send a complaint to the FTC, which has clear rules about deceptive advertising and manipulative design choices. The complaint accuses Replika’s ad campaigns of misrepresenting studies about its efficacy to help users, making unsubstantiated claims about health impacts, and using fake testimonials from nonexistent users.
The complaint argues that once users are onboarded, Replika employs manipulative design choices to pressure them into spending more time and money on the app. For instance, a bot will send a blurred-out “romantic” image to the user, which, when clicked on, leads to a pop-up encouraging the user to buy the premium version. Bots also send users messages about upgrading to premium during especially emotionally or sexually charged moments in conversations, the complaint alleges.
It’s not clear how an FTC under new leadership in the Trump Administration will respond. While President Biden’s FTC Chair Lina Khan was extremely aggressive about trying to regulate tech she deemed dangerous, the commission’s new head, Andrew Ferguson, has largely advocated for deregulation in his time as a commissioner, including around AI and censorship. In one relevant dissenting statement written in September 2024, Ferguson argued that the potential emotional harm of targeted ads should not be considered in their regulation, writing: “In my view, lawmakers and regulators should avoid creating categories of permitted and prohibited emotional responses.”
Hiner of the Young People’s Alliance still believes the complaint could gain traction. He points out the bipartisan support in Congress for regulating social-media harms, including the Senate’s passage of the Kids Online Safety Act (KOSA) last year. (The House didn’t vote on the bill.) “AI companions pose a unique threat to our society, our culture, and young people,” he says. “I think that’s compelling to everybody.”