Meta is no longer letting teenagers interact with its AI chatbot characters. The social media giant said on Jan. 23 that it is working on new versions of the characters to provide users with "an improved experience."
An update to a safety-focused blog post, originally published last October, said: "While we focus on developing this new version, we're temporarily pausing teens' access to existing AI characters globally."
While Meta is working on new software, concerns about the safety of AI chatbots continue to grow.
In October, the Federal Trade Commission (FTC) revealed that it was investigating how seven companies, including Meta, measured and assessed the adverse effects of their chatbots on young people.
In December, a coalition of U.S. state attorneys general wrote to 13 major AI players, including Meta, suggesting they need to do more to prevent harmful conversations with children, citing cases of murder, suicide and domestic violence apparently influenced by AI outputs.
And in New Mexico, Meta is facing a lawsuit, due to start in February, alleging that it allowed child exploitation on its various platforms. While the case is not focused on AI bots specifically, reports suggest that the company has sought to prevent any reference to them during proceedings, an indication of Meta's sensitivity to criticism in this area.
Amid these developments, it is perhaps not surprising that Meta has decided to suspend teen access to the AI characters, a move it characterized as "prioritizing teens' safety."
The vendor stated: “Starting in the coming weeks, teens will no longer be able to access AI characters across our apps until the updated experience is ready. This will apply to anyone who has given us a teen birthday, as well as people who claim to be adults, but who we suspect are teens based on our age prediction technology.”
The move is an escalation of measures unveiled in October, when Meta introduced controls that enabled parents to see how their children were interacting with AI and to block chats completely.
This had followed a Reuters report about a leaked internal Meta policy document that showed the company had been tolerating responses from AI bots that many parents would have considered inappropriate.
Meta's move mirrored that of OpenAI, which introduced parental controls and began rerouting sensitive conversations after a wrongful-death lawsuit filed by the family of a teenage ChatGPT user who died by suicide.
While Meta has pulled teen access to its AI character bots, the company said teens can still use its general AI assistant for "educational opportunities and helpful information."