OpenAI plans to give adult users access to erotica and other NSFW content through ChatGPT, marking what could be a significant change in how the company handles mature content.
CEO Sam Altman shared the news in an X post on Tuesday, saying the change will take effect in December, once the company finishes rolling out age verification systems.
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have…” — Sam Altman (@sama), October 14, 2025
“In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” Altman wrote.
The company hasn’t spelled out exactly what kinds of erotic content will be allowed. This represents a notable departure from how OpenAI has operated until now, since the company previously banned this type of material in almost all situations.
Altman explained that earlier versions of the chatbot had tight restrictions because the company worried about harming people’s mental health. But he said those limits ended up making the tool frustrating for many people who didn’t have mental health concerns.
“Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases,” he stated.
The “new tools” Altman mentioned seem to be safety measures and parental control options that the company introduced last month. These features were designed to address worries about how the chatbot might affect younger people’s mental health.
With stronger protections for children now in place, Altman appears ready to give adults more freedom when using ChatGPT. At the same time, the company announced recently that it would build a new ChatGPT for teenagers, as reported by Cryptopolitan.
There were hints earlier this year that OpenAI might head in this direction. Back in February, the company changed some wording on its “Model Spec” page. The updated language said the company wanted to give users maximum freedom, and only erotic content showing minors would be off limits. Even then, though, erotic material was labeled as sensitive and would only be created in specific approved situations.
Along with the policy changes, Altman said a new version of ChatGPT will arrive in the coming weeks. The update will let the chatbot take on different personalities, building on personality features from GPT-4o.
“If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it,” he explained. “But only if you want it.”
Some people on social media quickly pointed out that Altman’s announcement seemed to contradict things he said before. In an interview this past August, tech journalist Cleo Abram asked Altman about decisions he’d made that helped the world but might hurt OpenAI’s competitive position.
“Well, we haven’t put a sex bot avatar in ChatGPT yet,” Altman answered at the time, appearing to reference AI companions that Elon Musk’s xAI had released.
The timing of this policy change is interesting because OpenAI is already under the microscope for safety issues. Last month, the Federal Trade Commission began looking into several technology companies, including OpenAI, because of possible dangers to kids and teens.
That investigation came after parents from California filed a lawsuit claiming ChatGPT played a role in their teenage son taking his own life. The boy was 16 years old.
On the same day as Altman’s announcement, OpenAI said it formed a new group of eight experts who will advise the company about AI and mental health. This council will help OpenAI figure out what healthy use of AI looks like, focusing on how artificial intelligence affects people’s emotions, mental wellbeing, and drive.
The company said these experts will provide guidance through regular meetings and check-ins on making sure AI interactions stay beneficial for users.
The planned changes raise questions about how OpenAI will balance giving adults more freedom while still protecting vulnerable users, especially as regulators and the public watch the company’s safety practices more closely.