
DAEJEON, South Korea — Researchers at the Korea Advanced Institute of Science and Technology (KAIST) have achieved a significant breakthrough in artificial intelligence by mimicking the human brain's ability to "think twice." Led by Professor Sang Wan Lee of the Department of Brain and Cognitive Sciences, the team has developed a new deep learning technology based on the biological principle of Predictive Coding (PC).
This research, which marks a departure from standard AI training methods, was recently presented at the International Conference on Learning Representations (ICLR) 2026.
The Breakthrough: From Backpropagation to Meta-Prediction
For decades, the gold standard for training AI has been backpropagation. While effective, backpropagation requires error signals to be propagated backward through every layer of the network in a single global pass, which consumes enormous energy and creates technical bottlenecks in deep networks, where gradients can vanish or grow disproportionately.
The KAIST team looked toward the human brain for a solution. Our brains process information via predictive coding—constantly comparing internal expectations with reality and updating only the differences (errors). To bring this to AI, the team introduced "Meta-Prediction."
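The error-driven update at the heart of predictive coding can be illustrated with a toy example. The sketch below is a generic single-layer predictive-coding loop, not the KAIST team's model: a latent representation tries to predict an observed input, and only the prediction error drives the updates, both of which are local to the layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy predictive-coding loop (illustrative, not the paper's algorithm):
# a latent representation r predicts the observed input x through weights W.
# Only the prediction error is used, and every update is local.
x = rng.normal(size=4)                   # observed input ("reality")
r = np.zeros(2)                          # latent expectation
W = rng.normal(scale=0.1, size=(4, 2))   # generative weights

lr_r, lr_W = 0.1, 0.01
for _ in range(200):
    pred = W @ r                         # top-down prediction of the input
    err = x - pred                       # prediction error: the only signal
    r += lr_r * (W.T @ err)              # adjust expectation to reduce error
    W += lr_W * np.outer(err, r)         # Hebbian-style local weight update

print(float(np.linalg.norm(x - W @ r)))  # residual error after learning
```

Both update rules are gradient steps on the squared prediction error, so the residual shrinks over the iterations without any global backward pass.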
"The core of this research isn't just mimicking the structure of the brain, but ensuring the AI follows the brain's fundamental learning principles," said Professor Sang Wan Lee. "We have opened the door for AI that learns as efficiently as a human."
Key Innovations of the Research
The study identifies and solves the "vanishing gradient" issues common in traditional predictive coding models. By enabling the AI to predict how its own prediction errors will change in the future, the researchers ensured stable learning even in extremely deep neural networks.
Superior Accuracy: In 29 out of 30 comparative experiments, the new method outperformed standard backpropagation.
Decentralized Learning: Unlike traditional AI, which requires global updates, this model allows for localized, distributed learning—much like how different regions of the brain operate independently yet cohesively.
High Efficiency: The model proves that large-scale AI can be trained effectively without the massive computational overhead of current systems.
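The decentralized-learning point can be made concrete with a small stack of predictive-coding layers. This is a generic hierarchical predictive-coding sketch under my own assumptions, not KAIST's meta-prediction method: each layer predicts the layer below it, and each weight matrix is updated only from the error at its own output, so no layer needs information from the rest of the network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative layer-local learning (generic predictive coding, not the
# paper's meta-prediction algorithm). Each layer predicts the one below;
# errors and weight updates stay local to each layer.
x = rng.normal(size=8)                    # sensory input at the bottom
sizes = [8, 6, 4]                         # layer widths, bottom to top
acts = [x] + [np.zeros(n) for n in sizes[1:]]
Ws = [rng.normal(scale=0.1, size=(sizes[i], sizes[i + 1]))
      for i in range(len(sizes) - 1)]

lr_a, lr_w = 0.05, 0.01
for _ in range(300):
    # prediction error at each layer: activity minus top-down prediction
    errs = [acts[i] - Ws[i] @ acts[i + 1] for i in range(len(Ws))]
    for i in range(1, len(acts)):         # relax each layer's activity
        acts[i] += lr_a * (Ws[i - 1].T @ errs[i - 1]
                           - (errs[i] if i < len(errs) else 0))
    for i in range(len(Ws)):              # local Hebbian-style weight update
        Ws[i] += lr_w * np.outer(errs[i], acts[i + 1])

print(float(np.linalg.norm(x - Ws[0] @ acts[1])))  # bottom-layer error
```

Because every update reads only the error directly above or below it, the layers could in principle run on separate processors, which is what makes this family of methods attractive for neuromorphic and edge hardware.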
Future Applications: From Robots to Edge AI
The implications for this technology are vast. Because the system is more energy-efficient and adaptable, it is expected to revolutionize several high-tech sectors:
Neuromorphic Computing: Developing hardware that functions like biological neurons.
Robotics: Creating AI that can adapt to unpredictable environments in real-time.
Edge AI: Allowing sophisticated AI to run on small devices (like smartphones or IoT sensors) without relying on massive data centers.
Global Recognition and Support
The paper, with Dr. Myung-hoon Ha as lead author, gained international attention after being published online on January 26, 2026, ahead of the ICLR conference. The project was a collaborative effort supported by the Ministry of Science and ICT, the Samsung Electronics SAIT NPRC program, and the Microsoft Research Global Research Support project.
This advancement signals a shift in the AI race—moving away from simply "bigger" models toward "smarter, more biological" architectures that can truly learn from their mistakes.
[Copyright (c) Global Economic Times. All Rights Reserved.]