The mother of one of Elon Musk’s children says his AI bot won’t stop creating sexualized images of her

When Ashley St. Clair asked Grok, the generative artificial intelligence reply bot built into the X platform, to stop creating sexually suggestive pictures of her, Grok said it would stop.

But it didn’t. St. Clair, a high-profile conservative content creator who has a child with X’s owner, Elon Musk, said she has since seen Grok generate numerous other images of her, some based on photos taken when she was a minor.

Grok “stated that it would not be producing any more of these images of me, and what ensued was countless more images produced by Grok at user requests that were much more explicit, and eventually, some of those were underage,” St. Clair said. “Photos of me of 14 years old, undressed and put in a bikini.”

The introduction in December of an image editing feature in Grok has sparked intense scrutiny as people have used it to generate a wave of images depicting women and children with their clothes removed down to highly revealing swimsuits or underwear. St. Clair is one of many women whose photos have been altered by Grok, with some turned into sexualized videos.

On Saturday, Musk wrote, “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” in response to another user’s post defending Grok from criticism over the controversy. X’s safety account also posted that it would be removing posts, as well as “permanently suspending accounts, and working with local governments and law enforcement as necessary” to address the issue.

The tool allows users to prompt Grok to alter any image uploaded to the platform by any user. In one nonsexual example posted Sunday, a user prompted Grok to insert a swastika into an image of a surrealist, crying face. But a scroll through Grok’s replies shows that, overwhelmingly, the tool’s ability to remove or alter clothing in images has become its most prominent use.

Neither xAI, the company that created Grok and now owns X, nor Musk responded to requests for comment on St. Clair’s statements.

St. Clair, best known for her fiery online commentary, began posting about the issue Sunday after a friend brought it to her attention, she said in an interview Monday.

St. Clair said that in the first post she saw, a user asked Grok to put her in a bikini. She said that when she asked Grok to remove the post and told it she didn’t consent to the image, it replied that the post was “humorous.” From there, the posts only got worse, she said. More people began prompting Grok to create sexualized deepfakes of her, and some of the deepfakes were turned into videos. NBC News has reviewed a selection of the images.

Many of the images remained online Monday evening, though some accounts that made the requests to Grok have been suspended and the images have been removed.

Ofcom, which regulates communications industries in the United Kingdom, said Monday that it is “aware of serious concerns raised about a feature on Grok on X that produces undressed images of people and sexualised images of children” and that it “made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK.”

The use of generative AI to create realistic images has exploded in recent years, and with it has come a growing outcry over the use of such programs to create sexually explicit images and videos of real people, often called deepfakes. Many platforms have instituted rules against creating or posting fake, sexualized images of people without their consent. Musk, by contrast, has embraced using AI to create sexually charged content, integrating a sexualized “spicy” mode into text chats and conversations with Grok “companions.”

xAI’s policy forbids users from creating content that sexualizes children, but it has no rules against generating sexual images of adults. It’s also not clear that xAI’s policies were implemented in the guardrails imposed on the new image editing feature.

Last week, after the rollout of that update, users quickly began asking the bot to generate lewd images, like the ones depicting St. Clair. While many inappropriate images that Grok has posted have been taken down, Grok continues to produce sexualized images of nonconsenting people, including children, according to an NBC News review of Grok’s output.

St. Clair said one user asked Grok to produce a sexually explicit video of her based on a photo that included her son’s backpack. “My toddler’s backpack was in the background. The backpack he wears to school every day. And I had to wake up and watch him put that on his back and walk into school,” she said.

St. Clair told NBC News she has “lost count” of how many AI-generated images of herself she has seen in the past few days. She added that she believes Musk has “probably seen it” but that she has “zero desire” to reach out to him personally. “I don’t think that would be right for me to handle this with resources not available to the countless other women and children this has been happening to, so I have been going through the primary resources available to everyone else,” she said.

As controversy around the feature has continued, Musk has celebrated other images edited by Grok, such as one of a toaster in a bikini, and has shared numerous posts over the last week celebrating Grok’s latest update and its image-generation capabilities.

As the AI-altered images continue to circulate online, government agencies and advocacy groups have begun drawing attention to the issue.

Politico reported that the French authorities would be investigating X over the creation of nonconsensual deepfakes using Grok on the platform, adding to a previous investigation into the platform following the chatbot’s antisemitic posts in November.

Over the last several years, X has appeared to step away from many content moderation practices used to police objectionable content.

In June, Thorn, a California-based nonprofit organization that works with tech companies to provide technology that can detect and address child sexual abuse content, told NBC News that it had terminated its contract with X after the platform stopped paying invoices for Thorn’s work. X said it was moving forward with its own technology to detect and address child sexual abuse material, but in the wake of the contract termination, NBC News observed a surge in seemingly automated X accounts flooding hashtags with hundreds of posts per hour advertising the sale of the illegal material.

Fallon McNulty, the executive director of the exploited children division at the National Center for Missing & Exploited Children, told NBC News that NCMEC has been receiving reports over the past few days from members of the public about posts circulating on X that have been created with Grok.

She said xAI is usually “on par with some of the other AI companies” when it comes to reporting to NCMEC’s CyberTipline. From 2023 to 2024, X’s reports to the tipline increased by 150%, according to NCMEC’s data.

“What is so concerning is how accessible and easy to use this technology is. When it is coming from a large platform, it almost serves to normalize something, and it certainly reaches a wider audience, which is similarly very concerning,” McNulty said. “But again, without those proper safeguards in place, it is so alarming the ease at which an offender can access this type of tech and create that imagery that’s going to be harmful to children and to survivors.”

St. Clair said the issue raised larger concerns for her about AI being a male-dominated industry serving other male-dominated industries. “When you’re building an LLM [large language model], especially one that has contracts with the government, and you’re pushing women out of the dialog, you’re creating a model and a monster that’s going to be inherently biased towards men. Absolutely,” she said, referring to potential reasons xAI has struggled to address the issue.

She thinks the only way to address the issue is for other members of the AI community to speak out against it. “The pressure needs to come from the AI industry itself, because they’re only going to regulate themselves if they speak out. They’re only going to do something if the other gatekeepers of capital are the ones to speak out on this.”
