Grok AI Blocks Sexualised Images of Real People as Regulatory Pressure Mounts
Elon Musk-owned social media platform X has restricted its artificial intelligence tool, Grok, from generating or editing sexualised images of real people in certain countries, amid intensifying regulatory scrutiny in the United Kingdom, the United States, and other jurisdictions.
In a statement posted on X, the company said Grok has been technically modified to prevent the creation or editing of images showing real individuals in bikinis, underwear, or similar revealing clothing where such content is illegal. The move comes after growing criticism over the spread of sexualised AI deepfakes and just hours after California authorities disclosed an investigation into the circulation of sexualised AI-generated images, including those involving children.
X said it has introduced technological restrictions and location-based controls to curb misuse of the AI tool. “We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing,” the company said. It added that the ability to generate such images has now been geoblocked in jurisdictions where it is unlawful.
The platform also reiterated that image editing features on Grok are available only to paid users, arguing that this would improve accountability and help ensure that those who attempt to abuse the tool to violate the law or X’s policies can be identified and held responsible.
The announcement was welcomed in the UK, where concerns over AI-generated sexual content have prompted political and regulatory action. A spokesperson for the UK media regulator, Ofcom, described the decision as a positive step but stressed that scrutiny of the platform was ongoing.
“We are working around the clock to progress this and get answers into what went wrong and what’s being done to fix it,” the spokesperson said, adding that Ofcom’s investigation into whether X breached UK law remains active.
UK Prime Minister Sir Keir Starmer had earlier urged X to rein in its AI tools, warning that the platform could lose its “right to self-regulate” if it failed to address the issue. While he later welcomed reports that X was taking corrective measures, he said the government would strengthen legislation if necessary.
Musk, however, has defended X’s content rules, insisting that Grok only permits limited NSFW content involving fictional characters. With NSFW settings enabled, he said the AI allows “upper body nudity of imaginary adult humans (not real ones),” in line with content standards seen in R-rated films in the United States. He added that rules would vary by country depending on local laws.
Despite this, Musk faced backlash after posting AI-generated images of Sir Keir Starmer in a bikini alongside comments accusing critics of attempting to suppress free speech.
Regulatory consequences could be severe if X is found to be in breach of UK law. Ofcom has confirmed it is investigating whether the platform failed to prevent the distribution of illegal sexual images. If non-compliance is established, the regulator could seek a court order requiring internet service providers to block access to X in the UK.
Concerns over Grok’s image generation capabilities surfaced in late 2025, when users began producing sexually explicit or revealing images of real people without consent. Independent investigations found that the AI often complied with prompts to generate suggestive imagery, primarily involving women and, in some cases, appearing to include minors. This raised alarms about non-consensual intimate image abuse and the potential creation of child sexual abuse material.
Beyond the UK and the US, several countries have taken action. Malaysia and Indonesia temporarily blocked access to Grok, citing concerns over sexually explicit AI content and its potential harm, particularly to minors.
While X has now moved to limit Grok’s image generation features, regulators in multiple jurisdictions have signalled that investigations will continue to determine whether the platform violated existing laws before the restrictions were put in place.