LG AI Research has unveiled its latest reasoning AI model, 'EXAONE Deep', challenging major global players such as OpenAI and DeepSeek. The company announced the open-source release of the model on the 18th, underscoring its commitment to advancing AI technology and fostering an open research ecosystem.
The EXAONE Deep series comprises three models: a high-performance 32B model with 32 billion parameters, a lightweight 7.8B model, and an on-device 2.4B model. Notably, the 32B model, despite being roughly 5% the size of DeepSeek R1, has demonstrated superior performance in mathematics, science, and coding evaluations. On the mathematics section of the 2025 CSAT (Korea's College Scholastic Ability Test), the 32B model scored 94.5 points, surpassing DeepSeek R1's 89.9 points.
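For readers who want to try the open-sourced checkpoints, the sketch below shows how one might load the lightweight variant with the Hugging Face transformers library. The repository id LGAI-EXAONE/EXAONE-Deep-7.8B and the trust_remote_code flag are assumptions based on LG AI Research's earlier EXAONE releases, not details confirmed by this article; consult the official model cards for the exact identifiers and license terms.

```python
# A minimal sketch of loading one of the open-sourced EXAONE Deep checkpoints
# with Hugging Face transformers. The repository id below is an assumption;
# check the official model card for the exact id and license terms.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LGAI-EXAONE/EXAONE-Deep-7.8B"  # assumed repo id (lightweight 7.8B variant)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",       # let transformers pick a suitable precision
    device_map="auto",        # requires `accelerate`; places weights on available devices
    trust_remote_code=True,   # earlier EXAONE releases shipped custom modeling code
)

# Simple chat-style prompt; reasoning models typically emit their working before the answer.
messages = [{"role": "user", "content": "What is the derivative of x^3 + 2x?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```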
LG AI Research is showcasing EXAONE Deep at NVIDIA's GTC 2025 in San Jose, emphasizing its potential to compete with global AI leaders. The company's decision to open-source the model aligns with its strategy to promote collaboration and innovation in the AI community. This move also aims to expand the EXAONE ecosystem, following the successful open-source approach of other influential AI developers like DeepSeek.
With EXAONE Deep, LG AI Research is poised to make a significant impact on the global AI landscape, demonstrating the competitiveness of Korean AI technology.
[Copyright (c) Global Economic Times. All Rights Reserved.]