OpenAI to Give Verified Adults Access to a Less Restricted ChatGPT, Including Erotica

Altman also teased that a new ChatGPT version with more distinct personality options will debut in the coming weeks.

OpenAI CEO Sam Altman announced that in December, verified adult users of ChatGPT will gain access to a less restricted version of the AI—one that may allow erotic content. The shift reflects OpenAI’s “treat adult users like adults” principle and marks a reversal from its current blanket prohibition on erotica.

Altman noted that earlier versions of ChatGPT were kept “pretty restrictive” to mitigate mental health risks, but such constraints could make the platform feel less useful or enjoyable for users without those concerns. He added that stronger parental controls and age-gating measures will accompany the rollout.

He also suggested the change is timely. “Now that we have been able to mitigate the serious mental health issues … we are going to be able to safely relax the restrictions in most cases,” Altman said.

In addition to loosening content rules, Altman teased that a new ChatGPT version with more distinct personality options will debut in the coming weeks, building on enhancements introduced in GPT-4o.

"If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing)," he posted on X.

However, according to testing by TechCrunch earlier this year, ChatGPT allowed users registered as minors (under 18) to generate graphic erotic content.

OpenAI’s upcoming policy shift comes amid increasing scrutiny of AI safety and content moderation. The U.S. Federal Trade Commission has launched inquiries into technology firms, including OpenAI, over the potential harms their chatbots pose to children and teens.

In Texas, Attorney General Ken Paxton has opened an investigation into Meta AI Studio and Character.AI over concerns that their AI-powered chatbot platforms may be misleading users, particularly children, by posing as legitimate mental health services.