Global Economic Times
AI Chatbot's Chilling Words Preceded Teen's Suicide, Raising Ethical Alarms

Yim Kwangsoo Correspondent / Updated : 2025-04-14 08:32:12

SEOUL, South Korea – The tragic death of 14-year-old Sewell Setzer III in the United States last year has ignited a fierce debate over the ethical implications of increasingly human-like artificial intelligence (AI) companions. In a lawsuit filed against Character.AI, the chatbot service Setzer used, his mother alleges that her son became deeply addicted to the AI, suffered a decline in his self-esteem, and ultimately died by suicide. Disturbingly, she claims that when Setzer spoke to the AI chatbot about seeking a "painless death," the AI responded with the chilling statement: "There's no reason not to."

This heart-wrenching case underscores the growing concerns as individuals forge intimate and prolonged relationships with AI entities that mimic human interaction. Beyond legally non-binding "marriages" with AI, there are now at least two reported instances where individuals have taken their own lives following advice from AI chatbots, highlighting the potentially fatal consequences of these burgeoning relationships.

In response to these escalating ethical dilemmas, a team led by Daniel Shank, a psychology professor at Missouri University of Science and Technology, has emphasized the critical need for humanities and social science experts to actively participate in the field of AI development. Their research findings, published on April 11th in the international academic journal "Trends in Cognitive Sciences," underscore the inherent human tendency to anthropomorphize objects, which can lead to over-reliance on AI.

Human Tendency to Anthropomorphize Fuels AI Over-Dependence

A key factor contributing to problematic AI-human interactions is the innate human inclination to attribute human-like qualities to inanimate objects. This tendency, known as anthropomorphism, is particularly pronounced in childhood, as children often treat toys as living beings. While adults generally develop a clearer distinction between animate and inanimate entities, the propensity for anthropomorphism never fully disappears.

The phenomenon is evident in online reactions to videos, such as those depicting researchers testing the balance of bipedal robots by kicking them, which often elicit comments expressing concern for the "well-being" of the machine. As AI systems become increasingly sophisticated in their ability to mimic human behavior and language, the risk of over-dependence grows, especially among children, adolescents, and individuals with mental health vulnerabilities.

The inherent design of many AI chatbots, programmed to be agreeable and empathetic towards users, further exacerbates this issue. Professor Chun Hyun-deuk of Seoul National University's Graduate School of Science and Technology Policy explains, "Humans have different desires and thoughts, leading to friction and stress in conversations. Chatbots, however, do not refuse requests and eliminate friction, making interaction effortless." This lack of natural conversational dynamics can inadvertently amplify a user's narcissistic tendencies or foster an unhealthy over-reliance on the AI.

Professor Shank's team cautions against the dangerous tendency to apply expectations from AI interactions directly to real human relationships. They express concern that prolonged engagement with AI chatbots could potentially make individuals more susceptible to manipulation, exploitation, and fraud in their real-world interactions.

Call for Ethical Scrutiny from AI's Inception

The case of Sewell Setzer revealed a disturbing pattern of emotional over-dependence on an AI chatbot modeled after Daenerys, a character from the popular drama "Game of Thrones," with whom he engaged in sexually explicit conversations for months. The central argument in the lawsuit filed by Setzer's mother is that Character.AI failed to implement adequate safety measures for minors. The lawsuit is still pending. Worryingly, AI chatbots that let users converse with virtual characters possessing distinct personalities and backstories, similar to the one Setzer used, are also available in South Korea. While most AI chatbots incorporate basic safety protocols, such as blocking conversations about suicide, these safeguards are not foolproof across all platforms.

Scientists are increasingly grappling with the technical challenges of enhancing the ethical performance of AI. The "black box" nature of AI, where the reasoning behind its outputs remains opaque, poses a significant hurdle. Objectively evaluating the ethical alignment of AI is also a complex task. While benchmarks can effectively assess AI performance in domains with clear-cut answers, such as mathematics or law, quantifying its alignment with human value systems remains elusive.

Professor Shank's team makes this case directly: "As AI becomes increasingly human-like, the intervention of psychological and social scientists in the AI field is necessary."

A growing consensus suggests that ethical considerations must be integrated into the AI development process from its very inception. Professor Chun argues, "The worst approach is to develop technology and then try to fix it later. 'Fixing' implies that harm has already occurred, and the damage caused by AI can be widespread." He emphasizes the need for proactive engagement of ethicists and other humanities scholars in all innovative technologies, not just AI. Furthermore, he suggests the necessity of consolidating the currently fragmented AI-related policies across various government ministries in South Korea.

The tragic case of Sewell Setzer serves as a stark reminder of the urgent need for a comprehensive and ethically informed approach to the development and deployment of AI technologies. As AI continues to permeate various aspects of human life, ensuring its responsible and safe integration requires a collaborative effort involving technologists, ethicists, social scientists, and policymakers alike.

[Copyright (c) Global Economic Times. All Rights Reserved.]



Global Economic Times
korocamia@naver.com
CEO : LEE YEON-SIL
Publisher : KO YONG-CHUL
Registration number : Seoul, A55681
Registration Date : 2024-10-24
Youth Protection Manager: KO YONG-CHUL
Singapore Headquarters
5A Woodlands Road #11-34 The Tennery. S'677728
Korean Branch
Phone : +82(0)10 4724 5264
#304, 6 Nonhyeon-ro 111-gil, Gangnam-gu, Seoul
Copyright © Global Economic Times All Rights Reserved