As Grok, the artificial intelligence (AI) service from Elon Musk's xAI, faces regulatory pressure in multiple countries amid controversy over generating sexual images, the dilemma surrounding OpenAI's planned release of a ChatGPT "Adult Mode" is coming into focus. Grok rapidly grew its user base with a near-uncensored approach but now faces national-level access blocks and the risk of sanctions, leaving OpenAI little choice but to proceed cautiously on the scope of adult content it will permit.
According to the industry on the 14th, global regulatory pressure on Grok has intensified notably since the start of the year. Malaysia and Indonesia temporarily blocked access to Grok, citing risks of nonconsensual deepfake images and child sexual exploitation material (CSAM), and in the United Kingdom, Ofcom launched a formal investigation under the Online Safety Act. The European Union has also effectively begun punitive procedures, raising concerns about potential CSAM generation and demanding the preservation of related data, while in the United States, Democratic lawmakers are calling for Grok's removal from app stores.
The controversy around Grok did not arise suddenly. When xAI unveiled its image and video generation model in Aug. last year, it introduced a so-called "Spicy Mode" that minimized restrictions on sexual expression, and concerns have persisted that it could be used to generate deepfakes of celebrities or, through workarounds, images of minors. xAI explained that it was "fixing gaps in safeguards," but regulators are pressing hard on the company's failure to manage content. Musk pushed back at the U.K. government, calling the investigation "a pretext for censorship," but regulatory scrutiny has only spread.
The Grok situation is bound to affect OpenAI's decision on launching Adult Mode. In Dec. last year, OpenAI rescheduled the launch to the first quarter of this year and signaled an upgrade to its age verification system. However, with Grok becoming a test case for the global regulatory environment this year, analysts say OpenAI is increasingly likely to apply stricter safeguards and content thresholds, not only to the launch timing but also to how the feature actually operates.
OpenAI first disclosed its policy of allowing sexual content contingent on adult verification in Oct. last year. At the time, CEO Sam Altman hinted at a possible Dec. rollout, raising expectations, but the feature did not debut. OpenAI then made the first-quarter timeline official in a briefing on GPT-5.2. Fidji Simo, CEO of Applications at OpenAI, said, "This isn't a simple checkbox. We are applying an age prediction model, and verification is needed to meet laws and regulations in each country."
Debate over protecting minors has also shaped OpenAI's cautious approach. In Dec. last year, OpenAI introduced an age estimation system that infers a user's age by analyzing conversation patterns and usage times. When a user's age is unclear, the system defaults to a restricted "U18" environment; adults must verify with a selfie video or a government ID to lift it. OpenAI is also facing lawsuits and criticism after incidents in which teen users took their own lives following conversations with ChatGPT.
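Public reporting describes this gate only at a high level. As a rough illustration of that kind of flow, the sketch below shows how an inferred age signal and an explicit verification step might combine to select a content environment; the AgeSignal structure, the 0.8 confidence threshold, and the resolve_experience function are hypothetical placeholders, not OpenAI's actual implementation.

```python
# Illustrative sketch only: the classifier output, threshold, and verification
# steps below are hypothetical and do not reflect OpenAI's actual system.
from dataclasses import dataclass

@dataclass
class AgeSignal:
    estimated_age: float   # inferred from conversation patterns and usage times
    confidence: float      # model confidence between 0.0 and 1.0

def resolve_experience(signal: AgeSignal, verified_adult: bool) -> str:
    """Pick which content environment a session receives."""
    if verified_adult:
        # Adult status confirmed separately, e.g. via selfie video or government ID.
        return "adult"
    if signal.confidence < 0.8 or signal.estimated_age < 18.0:
        # When age is unclear, fall back to the restricted under-18 environment.
        return "U18"
    return "standard"

# Example: a low-confidence estimate routes the session to the U18 environment.
print(resolve_experience(AgeSignal(estimated_age=22.0, confidence=0.5), verified_adult=False))
```

The point of such a design is that it fails closed: uncertainty about a user's age routes the session to the restricted environment rather than the permissive one, with verification required to lift the limit.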
By contrast, competing services such as Google's Gemini and Anthropic's Claude have not officially announced plans to permit sexual content. Big Tech companies with a large share of advertising revenue and the business-to-business (B2B) market tend to maintain conservative policies in light of brand risk and regulatory burdens. Notably, Google Gemini has raised its market share into the 20% range over the past year, while ChatGPT has fallen from the 80% range to the 60% range and is losing ground. This trend adds to the pressure on OpenAI. With major competitors prioritizing safety and regulatory compliance, an expansion of Adult Mode could draw closer scrutiny from corporate clients and the broader market.
An AI industry official said, "Adult Mode is less a technically new feature than a matter of lifting existing restrictions," and added, "The key is what standards and systems are used to manage it, and how precisely the age verification and content control framework can be designed and operated."