Elon Musk’s artificial intelligence company xAI announced late Wednesday that its chatbot Grok will no longer generate sexualized images of real people on X in response to user requests, following a wave of criticism from political leaders in the United States and several other countries.

According to a statement published by X’s official safety account, the platform has rolled out new technical safeguards aimed at stopping Grok from editing or producing images of real people depicted in revealing attire, including swimsuits such as bikinis. The company emphasized that the new limitation applies universally to all users, without exceptions for paying customers.

The decision came just hours after California Attorney General Rob Bonta revealed that his office had opened an investigation into xAI. The probe focuses on what authorities described as the possible large-scale creation of nonconsensual intimate images generated through artificial intelligence, a practice that has raised serious legal and ethical concerns.

Concerns about Grok’s image-generation capabilities have not been limited to the United States. Regulators and government agencies in several countries, including India, Malaysia, Indonesia, Ireland, the United Kingdom, France, and Australia, have already announced investigations into the matter. The European Commission has also taken notice, reflecting growing global scrutiny of AI tools that can be used to create deepfake or manipulated images of real people.

In the U.S., pressure on xAI and its parent platform X has continued to build. Three Democratic senators recently urged Apple and Google to remove both the X app and the Grok application from their respective app stores. The lawmakers argued that the apps should not remain available until stronger safeguards are in place to prevent the easy creation and spread of nonconsensual explicit content.

Alongside the new restrictions, xAI clarified that Grok’s image creation and editing features on X will now be available only to paid subscribers. While this move narrows access, critics have pointed out that restricting features to subscribers does not fully address the broader risks of AI-generated imagery, especially when it involves real individuals who have not given consent.

The controversy highlights a growing challenge facing technology companies as generative AI tools become more powerful and accessible. Governments and regulators around the world are increasingly demanding clearer rules, stronger protections, and greater accountability to ensure that innovation does not come at the expense of personal privacy and human dignity.