New York, NY – OpenAI, the developer behind the popular AI chatbot ChatGPT, is experiencing delays in its development of a next-generation AI model, according to a report by The Wall Street Journal.
The WSJ reported on Tuesday that OpenAI's highly anticipated AI model, codenamed "Orion" and widely expected to be the successor to GPT-4, has encountered significant development hurdles. Although Microsoft, OpenAI's primary investor, had initially expected a mid-year release, the new model has yet to materialize.
While OpenAI CEO Sam Altman unveiled a new advanced reasoning AI model called "o3" last week, he remained silent on the timeline for the release of the flagship model.
Over the past 18 months, OpenAI has conducted at least two large-scale training runs, each involving the processing of vast amounts of data. Each attempt, however, surfaced new problems and fell short of the results researchers had hoped for.
Although the new model is expected to outperform GPT-4, sources suggest that the progress made so far may not justify the enormous costs incurred. Industry estimates put the computational cost of training such a large-scale AI model at more than $500 million over a six-month period.
Unlike previous models, which were trained on data scraped from the internet, such as news articles, social media posts, and scientific papers, the new model's development has been hindered by a shortage of high-quality data suitable for training a more intelligent AI.
To address this data scarcity, OpenAI has resorted to generating its own training data, employing software engineers and mathematicians to write new code and solve complex mathematical problems. This approach has extended the development timeline, and OpenAI has also had to contend with competitors poaching top talent with lucrative offers.
The challenges faced by OpenAI in developing its next-generation AI model have sparked concerns within the industry about the potential limitations of current AI development approaches. Ilya Sutskever, a co-founder and former chief scientist at OpenAI, expressed similar concerns during a recent lecture. He noted that while computational power continues to grow, the amount of available data remains stagnant. Sutskever likened the finite nature of high-quality training data to fossil fuels, suggesting that the current paradigm of pre-training large language models may eventually reach its limits.
As OpenAI grapples with these challenges, the broader AI community is engaged in a heated debate over whether progress in AI model performance has hit a plateau.