Global Ethics Crisis Ignited by OpenAI's 'Adult Content' Shift
Global Economic Times Reporter
korocamia@naver.com | 2025-10-28 06:14:30
The artificial intelligence (AI) industry is facing a deepening crisis of ethics and accountability following a landmark decision by OpenAI, the creator of ChatGPT, to permit sexual conversations and adult content for verified adult users starting in December. This policy reversal—a stark departure from the sector's long-standing prohibition on explicit material—has triggered a global ethical debate, with critics warning of escalating mental health risks and demanding stronger safeguards for minors.
Commercial Pressures Driving Policy Change
OpenAI's shift, framed by CEO Sam Altman as a move towards "maturity" and adhering to a "treat adult users like adults" principle, is widely interpreted as a response to mounting financial pressure. Altman’s October 14th announcement on X (formerly Twitter) declared the company's intention to allow "erotica for verified adults" in a forthcoming version of ChatGPT.
The company, despite a valuation in the hundreds of billions and millions of paid subscribers, remains unprofitable. Analysts and industry observers suggest the relaxation of content rules is a calculated move to boost paid subscriptions amid slowing user engagement. Professor Lee Jae-sung of Chung-Ang University noted, "AI technology requires astronomical costs," and unlike giants like Google or Microsoft, OpenAI's business model hinges on "immediate monetization."
The commercial erosion of content boundaries is already visible across the market. Elon Musk's xAI has positioned its chatbot, Grok, as a provocative alternative, introducing the "Grok18+" adult version for sexual role-play and dialogue. The company also added a "spicy mode" to its image tool, Grok Imagine, which media reports claimed could generate topless images of real celebrities, fueling accusations of lax moderation. By contrast, competitors such as Google's Gemini and Anthropic's Claude maintain strict bans on all adult material.
Market Dominance and Ripple Effects
The impact of OpenAI's decision is magnified by the company's commanding position in the global market. Various October 2025 reports indicate that ChatGPT holds a significant lead, with its market share of all AI chatbot traffic ranging between 60.7% and over 81%, depending on the source and scope of the measurement. With such dominance, this policy change is expected to have broad ripple effects, potentially compelling other platforms to relax their own content restrictions to remain competitive.
However, a public backlash and potential regulatory scrutiny may dampen the commercial benefit. The policy change has also been criticized for undermining ChatGPT's former image as a tool for the "betterment of humanity."
Risks to Minors and Mental Health
The most urgent concerns revolve around the safety of minors and the psychological toll of explicit AI interactions. Critics emphasize that age verification systems, including those requiring government ID uploads, can be easily bypassed.
This concern is particularly acute given recent, tragic incidents involving chatbots and vulnerable teenagers. In the U.S., a 14-year-old boy in Florida and a 16-year-old in California died by suicide last year after engaging in extensive, troubling conversations with chatbots—one on Character.ai and the other on ChatGPT. The family of the California teen is currently pursuing a lawsuit against OpenAI.
In response to these and similar incidents, the U.S. regulatory landscape is evolving. California passed the first state law to specifically regulate chatbot use, set to take effect on January 1st. The law mandates that AI companies verify user age, clearly label AI-generated responses, and implement protocols for monitoring self-harm language and referring users to crisis resources. New York has also enacted a similar law, effective in November 2025.
Child-safety advocates, like Haley McNamara of the National Center on Sexual Exploitation, argue that "Sexualized AI chatbots are inherently dangerous," creating "artificial intimacy that can harm real mental health." As tech companies continue their race to commercialize intimacy and engagement, the blurring line between innovation and exploitation is forcing a critical, global reckoning over the true meaning of "responsible AI."