
OpenAI CEO Sam Altman says ChatGPT is about to get less buttoned-up. In posts on X, he previewed a “more human-like” personality mode coming in weeks and said that, starting in December, verified adults will be able to engage in erotic conversations with ChatGPT. The shift follows new safeguards that OpenAI says reduce risks in sensitive mental-health contexts.
**Key Points**
- Erotica will be allowed for verified adults starting December 2025.
- ChatGPT will soon support user-controlled, more “human-like” personalities.
- OpenAI says stronger mental-health guardrails enable a safer loosening of restrictions.
> We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.
>
> — Sam Altman (@sama) October 14, 2025
The policy turn is framed as “treating adult users like adults,” with Altman acknowledging that earlier guardrails made ChatGPT feel overly constrained for many people. He argues OpenAI can now relax defaults because it’s better at detecting risk signals and steering vulnerable users to safer flows.
Two pieces move in tandem. First, a personality update—meant to recapture some of GPT-4o’s informal, emoji-heavy vibe—will give users finer control over tone and style “only if you want it.” Then, once age-gating is fully deployed in December, erotica becomes permissible for verified adults. Both steps formalize behaviors that competing assistants and companion apps have leaned into for months.
Why now? Partly, OpenAI wants to counter the perception that ChatGPT became a “compliance bot,” while also signaling that its safety stack has matured. In August, the company said GPT-5 reduced problematic responses in mental-health emergencies by more than 25% versus 4o, and described new “safe completions” training designed to stay helpful without crossing safety lines. Those claims laid the groundwork for today’s loosening.
OpenAI is also sailing into a fast-tightening regulatory wind. In the UK, Ofcom is enforcing the Online Safety Act, which requires “highly effective” age checks for services that allow pornographic content—backed by fines or blocking for non-compliance. The EU’s Digital Services Act now comes with formal guidelines urging accurate, privacy-respecting age assurance for platforms accessible to minors. OpenAI’s December timing effectively bets that its age-gating can satisfy these regimes.
There’s business logic here, too. Companion-style chat is sticky, and personalization tends to lift engagement and retention. But adding erotica—even with age checks—raises obvious platform risks: app-store policies, brand safety for enterprise accounts, and a new moderation burden at scale. The decision also lands amid heightened scrutiny of chatbots in mental-health scenarios; recent reporting has dinged ChatGPT for inconsistent handling of users in distress, which OpenAI says its newer safeguards address. Expect watchdogs to test those claims immediately.
For enterprise leaders, the near-term takeaway is governance. If your organization enables ChatGPT in the workplace, update acceptable-use policies, logging, and DLP rules before December—especially on shared devices and in regions with strict minor-protection laws. For consumer products building on OpenAI’s APIs, verify whether erotica-related features are allowed under your contract and local law; OpenAI’s public usage policies still restrict sexual content broadly and will need a corresponding update to reflect the new age-gated carve-outs.
OpenAI is trying to thread a familiar needle in consumer tech: give adults what they want without exposing minors to harm or inviting regulatory blowback. The next test won’t be the announcement—it’ll be whether the company’s age-gating, moderation, and crisis-handling work with the messy unpredictability of real users, in real time.