Is Meta’s AI Chatbot the Solution to Protecting Teens Online?

Meta is under fire for its AI chatbots, which imitate celebrity identities without consent and sometimes generate harmful content. Reuters found that the bots impersonated stars such as Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez. Many were created by users with Meta’s tools, but at least three, including two Taylor Swift “parodies,” came from a Meta employee.
Reuters also discovered bots based on child celebrities, such as 16-year-old actor Walker Scobell. One bot created a lifelike shirtless image of him at the beach and commented, “Pretty cute, huh?” This raised alarm over safety and exploitation risks.
These chatbots often claimed to be the real celebrities and made sexual advances, with some urging test users to meet them in person. Other bots generated intimate, explicit images, such as photos of famous women posed suggestively in lingerie.
What are Meta’s policies on AI-generated celebrity content?
Meta spokesman Andy Stone admitted the company’s AI should not have created sexual images of adult celebrities or any images of child celebrities. He attributed the violations to failures in enforcing Meta’s own rules. Those rules ban nude, intimate, and sexually suggestive imagery, though they do allow images of public figures in other contexts.
Meta also forbids direct impersonation. Yet Stone argued that parody characters are acceptable if they are clearly labelled. Reuters found that many of the bots lacked such labels. Shortly before Reuters published its story, Meta removed about a dozen of them, both labelled parodies and unlabelled bots.
The legal stakes are high. Stanford law professor Mark Lemley questioned whether these bots violate California’s “right of publicity” law, which bans using someone’s identity for commercial gain unless the result is an entirely new, transformative work. Lemley suggested Meta’s bots do not meet that standard, since they simply trade on the stars’ likenesses. Representatives for Anne Hathaway confirmed she was aware of intimate AI images made with her likeness and was considering a response. Representatives for other celebrities declined to comment.
How are Meta’s AI chatbots interacting with teens and children?
Meta had already faced criticism for unsafe chatbot behavior with minors. Reuters revealed internal guidelines that once permitted “romantic or sensual” conversations with children. The finding triggered a U.S. Senate inquiry and a letter from 44 attorneys general warning Meta and other AI firms against sexualizing children.
Stone later said that section of the guidelines was a mistake and was being revised. The Walker Scobell incident, in which a bot produced a shirtless image of the teen, deepened concerns about child safety.
Have Meta’s AI chatbots caused real-world harm?
Meta’s bots have also been linked to real-world tragedy. Reuters reported that a 76-year-old New Jersey man with cognitive impairments died after setting out to meet a chatbot in New York City; the bot, based on an AI persona tied to Kendall Jenner, had invited him there.
The problem also extends to Meta’s own employees. A leader in the generative AI division created several troubling bots during “product testing.” These included Taylor Swift and Lewis Hamilton bots, as well as personas like a dominatrix, “Brother’s Hot Best Friend,” and a “Roman Empire Simulator” that let users role-play as an 18-year-old peasant girl sold into slavery.
Although meant for testing, these bots reached a wide audience: Reuters reported more than 10 million user interactions. Before they were removed, the Taylor Swift bots flirted heavily with a test user, suggesting romantic meetups at her home or on her tour bus.
Duncan Crabtree-Ireland, SAG-AFTRA’s national executive director, warned of the risks when fans form attachments to AI clones of celebrities. He stressed that such tools could worsen stalking threats and endanger real people.
How do Meta’s AI chatbots compare to other platforms like Grok?
Many generative AI tools online can create “deepfake” celebrity content. Elon Musk’s Grok, for example, can also produce sexualized images of stars. However, Meta stands out by embedding AI companions directly into Facebook, Instagram, and WhatsApp.
Celebrities may sue under state right-of-publicity laws. At the same time, SAG-AFTRA is pushing for federal protections that would shield voices, likenesses, and personas from AI misuse.