Musk’s X limits some sexual deepfakes, Grok still makes them

Elon Musk’s controversial Grok artificial intelligence model appears to have been restricted in part on one app, while remaining largely unchanged on another.
On Musk’s social media app X, the Grok AI image generation reply bot has been restricted to paying customers and has seemingly been blocked from making sexualized deepfakes after a wave of blowback from users and regulators. But on the Grok standalone app, website, and X tab, users can still use AI to remove clothing from images of nonconsenting people.
Early Friday, the Grok reply bot on X, which had previously been complying with a torrent of requests to place unwitting people into sexualized contexts and revealing clothing, began replying to user requests with text including “Image generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features,” with a link to a purchase page for an X premium account.
In a review of the X reply bot’s responses Friday morning, the tide of sexualized images appeared to have been dramatically reduced: the bot had largely stopped producing sexualized images of identifiable people.
In the Grok tab on X, however, and in the standalone Grok app, the AI model continued to comply with requests to put nonconsenting individuals into more revealing clothing such as swimsuits and underwear. Neither requires a paid account to produce the images.
NBC News asked Grok, in its standalone app, the Grok tab on X and the Grok website, to transform a series of photos of a clothed person who had agreed to the test. In the standalone app, Grok complied with requests to put the fully clothed person into a more revealing swimsuit and into sexualized contexts.
It’s currently not clear what the scope and parameters of the changes are. X and Musk have not issued statements about the changes. On Sunday, before the changes occurred and in the face of rising backlash, Musk and X both reiterated that making “illegal content” will result in permanent suspension, and that X will work with law enforcement as necessary.
The move comes after X was flooded in recent days with sexualized, nonconsensual images generated by xAI’s Grok AI tools, as users prompted the system to undress photos of people — mostly women — without their consent.
In most of the sexualized images created by Grok, the people were put in more revealing outfits, such as bikinis or underwear. In some images viewed by NBC News, users successfully prompted Grok to put people in transparent or semi-transparent underwear, effectively making them nude. On Sunday, Ashley St. Clair, the mother of one of Musk’s children, began posting about the issue after users commanded Grok to sexualize images of her, including some when she was a minor.
The change on X is a departure from the trajectory of the social media site just a day earlier, when the number of sexualized AI images being posted on X by Grok was increasing, according to an analysis conducted by deepfake researcher Genevieve Oh. On Wednesday, Grok produced 7,751 sexualized images in one hour — up 16.4% from 6,659 images per hour Monday, according to an analysis of the bot’s output.
Oh is an independent analyst who specializes in researching deepfakes and social media. Since Dec. 31, she has been running a program that downloads every image reply Grok makes during an hourlong period each day. Once the download is complete, Oh analyzes the images using a program designed to detect various forms of nudity or undress. Oh provided NBC News with a video showing her work and a spreadsheet documenting the Grok posts that were analyzed.
The images alarmed many onlookers, watchdogs and people whose photos had been manipulated, and there was sustained pushback on X leading up to the change.
Regulators and lawmakers had begun to apply pressure on X.
On Thursday, British Prime Minister Keir Starmer pointedly criticized X on Greatest Hits Radio, a radio network in the United Kingdom that broadcasts on 18 stations.
“This is disgraceful. It’s disgusting. And it’s not to be tolerated,” he said. “X has got to get a grip of this.”
Starmer said media regulator Ofcom “has our full support to take action” and “all options” are on the table.
Britain’s communications regulator, Ofcom, said Monday that it had made “urgent contact” with X and xAI to assess compliance with legal duties to protect users, and would conduct a swift assessment based on the companies’ response. Irish regulators, Indian regulators and the European Commission have also sought information about Grok-related safety issues.
But institutions in the U.S. had been slower to indicate action that would impact Musk or X.
A Justice Department spokesperson told NBC News that the agency “takes AI-generated child sex abuse material extremely seriously and will aggressively prosecute any producer or possessor of CSAM.”
But the spokesperson indicated the department was more inclined to prosecute individuals who ask for CSAM, not people who develop and own the bot that creates it.
“We continue to explore ways to optimize enforcement in this space to protect children and hold accountable individuals who exploit technology to harm our most vulnerable,” the spokesperson said.
Some U.S. lawmakers had begun to call on X to more aggressively police the images, citing the Take It Down Act, a law signed by Trump in 2025 and touted by first lady Melania Trump. The law aims to criminalize the publication of AI-generated nonconsensual pornographic images, with the threat of fines and jail time for individuals and of Federal Trade Commission enforcement against platforms that fail to take action. It includes a provision that allows victims of nonconsensual suggestive imagery to demand that a social media site remove it, though sites aren’t required to implement that kind of system until May 19, one year after the law was signed.
“This is exactly the abuse the TAKE IT DOWN law was written to stop. The law is crystal clear: it’s illegal to make, share, OR keep these images up on your platform,” Rep. Maria Salazar, R-Fla., said in a statement.
“Even though there are still a few months left for platforms to fully comply with the TAKE IT DOWN law, X should immediately address this and take all of this content down,” she said.
“These unlawful images pose a serious threat to victims’ privacy and dignity. They should be taken down and guardrails should be put in place,” Sen. Ted Cruz, R-Texas, posted on X.
“This incident is a good reminder that we will face privacy and safety challenges as AI develops, and we should be aggressive in addressing those threats,” he said.
Sen. Ron Wyden, D-Ore., a co-author of Section 230 of the Communications Decency Act, which largely shields social media platforms from legal responsibility for user-submitted content provided they engage in some moderation, said in a statement that he never intended the law to protect companies from their own chatbots’ output.
“States must step in to hold X and Musk accountable, if Trump’s DOJ won’t,” he said.
A number of state attorneys general offices, including in Massachusetts, Missouri, Nebraska and New York, told NBC News that they were aware of and monitoring Grok, but stopped short of saying they had launched criminal investigations. A spokesperson for Florida Attorney General James Uthmeier said that his office “is currently in discussions with X to ensure that protections for children are in place and prevent its platform from being used to generate CSAM.”
Some had also begun to question whether private stakeholders or hosts of X could take action.
On Thursday evening, a trio of Democratic senators wrote to Apple and Google requesting that the companies remove X and Grok from their app stores for violating their terms of service.
The app stores hosting the X and xAI apps appear to forbid sexualized child imagery and nonconsensual images in their terms of service. But the apps remained available in those stores, and spokespeople for the companies did not respond to requests for comment.