AI Hallucination

Cho Kijo Reporter

kieejo@naver.com | 2025-11-17 22:09:13


[Image: © DataDrivenInvestor]

From June 2, "SXSW London" ran for six days in London's Shoreditch district. Key organizations such as MIT Technology Review, DeepMind, and OpenAI participated; emerging European artists and global stars gave performances and screenings, while immersive exhibitions and interactive art spread across the city center offered experiences in design, fashion, and XR (Extended Reality). Continuing the climate-action program of "SXSW Austin," the global creative-industry festival held every March in Austin, Texas, several sessions addressed carbon neutrality, renewable energy, and urban resilience. SXSW London aims to strengthen London's status as a European creative-industry hub and to serve as a platform promoting the fusion of technology and culture.

For me, the highlight of the event was "Five Key Things You Need to Know About AI Right Now," a presentation by Will Douglas Heaven, senior editor for AI at MIT Technology Review. What follows is a summary based on the presentation video.

Generative AI is now terrifyingly good. The presentation opens by playing four compositions (metal, folk, rock, and pop songs) and asking whether each was composed by a human or an AI. It is hard to guess correctly, which suggests there is no longer a meaningful difference. AI is producing human-level results in almost every field, including music, video, coding, and protein design. The technology is already excellent and still improving, accelerating as if pulled by gravity, and may soon surpass human capabilities.
"Hallucination" is a feature of AI, not a bug. The term may sound odd, but it refers to the phenomenon of AI reporting non-existent papers, fictional characters, incorrect calculations, or fake legal precedents. There have been real cases in which a customer-service chatbot invented a refund policy, and a court case collapsed because it cited a fictional precedent found by AI. Although relatively rare, hallucination is not intentional; it is an intrinsic characteristic.
Generative AI built on Large Language Models (LLMs) is trained for "creation," so a perfectly factual AI does not exist. Because the model works by predicting the most plausible continuation of a text, based on patterns learned from existing websites and knowledge bases, hallucination is not an intended lie but a prediction error that arises as the model makes its "best guess." Expecting to eliminate hallucination is therefore unrealistic; we must understand this limitation of AI and work within it.
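The "best guess" mechanism can be sketched with a toy next-token chooser. The tokens and probabilities below are invented purely for illustration (real models score tens of thousands of tokens), but they show why a model can be confidently wrong: if a misconception is overrepresented in its training text, the most probable answer is the false one.

```python
import random

# Toy illustration of why hallucination is intrinsic to next-token
# prediction. These probabilities are invented for illustration: they
# stand in for how often each continuation of "The capital of
# Australia is ..." might appear in web-scale training text, where the
# misconception "Sydney" can outweigh the correct "Canberra".
learned_probs = {
    "Sydney": 0.55,    # common misconception, overrepresented online
    "Canberra": 0.35,  # correct, but less frequent in training text
    "Melbourne": 0.10,
}

def best_guess(probs):
    """Greedy decoding: return the single most probable token."""
    return max(probs, key=probs.get)

def sample(probs, temperature=1.0, seed=None):
    """Temperature sampling: higher temperature flattens the
    distribution, so less likely (often wrong) tokens surface more."""
    rng = random.Random(seed)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return rng.choices(list(probs), weights=weights)[0]

print(best_guess(learned_probs))                     # confident but false
print(sample(learned_probs, temperature=1.5, seed=0))
```

Nothing in this loop checks truth; the model only ranks plausibility, which is exactly why "eliminating" hallucination by decoding tricks alone is unrealistic.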

AI consumes an enormous amount of electricity and will consume ever more. Training Large Language Models requires vast amounts of power, hundreds of millions of people now use these systems daily, and that number will keep growing, so we must prepare for explosively increasing energy consumption. Operating large servers, data centers, and their cooling systems requires both water and electricity, so the environmental impact and the sustainability of the power supply must be weighed. Combining Small Modular Reactors (SMRs) with renewable energy therefore deserves active consideration.
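The scale argument is easy to make concrete with a back-of-envelope calculation. Every number below is a hypothetical placeholder, not a measured figure; the point is only that even a tiny per-query cost multiplies quickly across hundreds of millions of daily users.

```python
# Back-of-envelope sketch of inference energy at scale.
# All three inputs are hypothetical assumptions for illustration.
WH_PER_QUERY = 0.3          # hypothetical energy per chat query, in Wh
DAILY_USERS = 300_000_000   # hypothetical daily active users
QUERIES_PER_USER = 10       # hypothetical queries per user per day

# Wh -> MWh conversion: divide by 1,000,000
daily_mwh = WH_PER_QUERY * DAILY_USERS * QUERIES_PER_USER / 1_000_000
print(f"~{daily_mwh:,.0f} MWh per day")  # ~900 MWh per day under these assumptions
```

Under these made-up inputs, inference alone lands in the hundreds of megawatt-hours per day, before counting training runs or cooling, which is why the presentation treats power supply as a planning problem, not an afterthought.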
AI is taking on increasingly complex tasks. It has expanded beyond simple text generation to robot control, protein design, and video production. Advanced video-generation tools such as Google DeepMind's "Veo 3" demand chips more powerful than ordinary GPUs. You may have tried, for fun, asking for a particular photo or transforming one into Ghibli style; producing films or educational videos with computer graphics has also become easier. AI is no longer a simple helper but a partner performing creative and complex work.
AI ethics and regulation remain unresolved challenges. Can AI truly be fair? Can ethics be implemented through a purely technical approach? In Amsterdam, a system built by following every recommendation for creating ethical AI still could not remove bias in actual deployment. Social and policy discussions about the fairness and accountability of AI will only become more important.
AI technology is astonishing and innovative, yet how it works under the hood is opaque and carries risks, so an approach grounded in critical thinking, responsible use, and ethical strategy is essential. The advice: use AI, but question the algorithm and do not delegate your judgment. What punishment could be imposed on an artificial intelligence for producing a wrong answer? The responsibility lies with the human who trusted and used it.
