South Korean AI Models Flunk College Entrance Math Exams, Lagging Far Behind Global Leaders
Yim Kwangsoo Correspondent
pydonga@gmail.com | 2025-12-15 07:01:13
SEOUL— A recent performance comparison of South Korea's leading large language models (LLMs), often dubbed "National AI" contenders, revealed a significant gap in mathematical problem-solving ability compared to their international counterparts. The domestic models largely failed to achieve passing grades on standardized mathematics tests, including the highly challenging Suneung (College Scholastic Ability Test).
A research team led by Professor Kim Jong-rak of Sogang University's Department of Mathematics conducted the rigorous assessment. They tested five major South Korean LLMs—Upstage’s Solar Pro-2, LG AI Research’s Exaone 4.0.1, Naver’s HCX-007, SK Telecom’s A.X 4.0 (72B), and NCSOFT’s lightweight model Llama Varco 8B Instruct—against five frontier international models: GPT-5.1, Gemini 3 Pro Preview, Claude Opus 4.5, Grok 4.1 Fast, and DeepSeek V3.2.
Rigorous Testing Methodology
The researchers administered a total of 50 mathematics problems across two categories:
Suneung (CSAT) Math (20 Problems): The 20 questions were drawn from the hardest problems in the common section and the elective sections (Probability and Statistics, Calculus, and Geometry) of the highly competitive South Korean CSAT.
Essay-Type/Advanced Math (30 Problems): This set comprised 10 questions from the essay-type entrance exams of 10 domestic universities, 10 questions from the Indian university entrance examination, and 10 questions from the mathematics section of the graduate-school entrance exam for the University of Tokyo's Faculty of Engineering.
In the initial test comprising the 20 Suneung and 30 essay-type problems, the performance disparity was stark. International models consistently scored high, ranging from 76 to 92 points. In sharp contrast, the South Korean models struggled immensely. Only Solar Pro-2 managed a score of 58 points, while the others languished in the 20s. NCSOFT's Llama Varco 8B Instruct recorded the lowest score, a mere 2 points.
The research team noted that the results remained discouraging even after the domestic models were configured to call Python as a tool, a setup intended to boost problem-solving accuracy beyond pure inference.
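The tool-use setup described in the study can be sketched roughly as follows. This is a minimal illustration of the general technique (the model emits Python code whose execution result becomes the answer), not the researchers' actual harness; the function and variable names are assumptions for the example.

```python
# Minimal sketch of LLM tool use for math: rather than answering directly,
# the model produces Python code, which is executed and whose computed
# `answer` value is taken as the final response. Illustrative only.

def run_model_code(code: str) -> str:
    """Execute model-generated Python and return the value it binds to `answer`."""
    namespace: dict = {}
    exec(code, namespace)  # a real system would run this in a sandbox
    return str(namespace.get("answer"))

# Example: hypothetical model output for a CSAT-style probability question
model_output = """
from math import comb
# P(exactly 2 heads in 4 fair coin tosses) = C(4, 2) / 2**4
answer = comb(4, 2) / 2**4
"""

print(run_model_code(model_output))  # 0.375
```

The appeal of this design is that exact arithmetic and combinatorics are delegated to the Python interpreter instead of the model's token-by-token reasoning; the article's point is that even with this aid, the domestic models' scores stayed low.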
Second Test: EntropyMath Dataset Confirms Lag
The researchers conducted a second test using 'EntropyMath,' a proprietary dataset they developed containing 100 questions of varying difficulty, from undergraduate level up to research-level mathematics. Ten questions selected from this set were posed to the 10 AI models.
The results mirrored the first test: International models achieved scores between 82.8 and 90 points, whereas the domestic models were significantly lower, ranging from 7.1 to 53.3 points.
In a third attempt, where the models were given three chances to solve a problem for a correct answer, the international models again demonstrated dominance. Grok 4.1 Fast achieved a perfect score, and the rest of the overseas models scored 90 points. The best-performing domestic model, Solar Pro-2, scored 70 points, followed by Exaone at 60 points. The other domestic contenders, HCX-007, A.X 4.0, and Llama Varco 8B Instruct, recorded 40, 30, and 20 points, respectively.
Call for Improvement and Future Plans
"We received many inquiries about why the five domestic sovereign AI models had not been evaluated on Suneung problems, so our team conducted this test," Professor Kim explained. "It confirmed that the level of the domestic models lags significantly behind the overseas frontier models."
The research team acknowledged that the domestic models tested were based on existing public versions and plan to conduct a re-evaluation once the updated, dedicated "National AI" versions from each team are officially released.
Professor Kim also announced the launch of a dedicated mathematics leaderboard based on the EntropyMath dataset, which the team aims to grow into an international standard. He added that the team will improve its proprietary problem-generation algorithms and pipelines to build specialized datasets for domains beyond mathematics, such as science, manufacturing, and culture, supporting the performance of domain-specific AI models.
The study was jointly supported by Sogang University's Institute of Mathematical Sciences and Data Science (IMDS) and Deep Fountain.