A new report in The Information suggests that OpenAI’s next flagship model might not be as revolutionary as its predecessors.
According to the report, employees who evaluated the new model, code-named Orion, found that while it outperforms OpenAI’s current models, the improvement is smaller than the leap they observed in the transition from GPT-3 to GPT-4.
In other words, the pace of progress appears to be slowing. Orion may not even consistently outperform earlier models in certain domains, such as coding.
Also Read: No Orion AI Model in 2024: OpenAI Addresses Delay and Future Plans
This left many people wondering: have LLM advancements hit a ceiling? Gary Marcus, perhaps the best-known AI critic, seemed the most excited about it. He immediately wrote on X, “Folks, game over,” declaring that he had won and that, just as he had predicted, GPT was entering a period of diminishing returns.
Uncle Gary, though, seems to have rejoiced a little too soon. Respectfully, the report points to a new scaling law for AI that may eventually supersede the previous one. One of the piece’s authors promptly clarified the point to Marcus, writing, “The sky isn’t falling.”
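For context, the “previous” scaling law is the pre-training one: loss falls predictably as model size and training data grow. The report doesn’t spell it out, but a standard formulation, in the form popularized by the Chinchilla paper (Hoffmann et al., 2022), looks like this:

```latex
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Here N is the parameter count, D the number of training tokens, E an irreducible loss floor, and A, B, α, β fitted constants. Because both terms are power laws, each additional order of magnitude of scale buys a smaller drop in loss, which is exactly the diminishing-returns pattern the Orion reports describe.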
In a similar vein, OpenAI researchers quickly pushed back on the story, arguing that it was misleading and misrepresented the trajectory of OpenAI’s future models.
Also Read: Orion AI Model by OpenAI to Launch in December, Promises Game-Changing Performance in AI
Adam Goldberg, a founding member of OpenAI’s go-to-market (GTM) team, stated that “training time and inference time are now two important dimensions of scaling for models like the o1 series.” He clarified that while the conventional scaling laws, which emphasize longer pre-training for larger models, still apply, there is now a second crucial dimension.
“The scale factor is still fundamental. But the addition of this second scaling dimension is expected to open up incredible new possibilities,” he continued.
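What inference-time scaling can mean in practice is easiest to see in a minimal sketch. The toy best-of-n loop below spends more compute per query, rather than on a bigger model, by sampling several candidate answers and keeping the one a verifier scores highest; the model and verifier here are invented stand-ins, not OpenAI’s actual o1 mechanism.

```python
import random

# Toy illustration of inference-time scaling via best-of-n sampling:
# draw several candidate answers and keep the best-scored one.

def toy_model(prompt: str) -> tuple[str, float]:
    """Stand-in for a stochastic LLM call: returns an answer plus a
    hidden quality score (here just a random draw)."""
    quality = random.random()
    return f"answer to {prompt!r} (quality={quality:.2f})", quality

def toy_verifier(candidate: tuple[str, float]) -> float:
    """Stand-in for a reward model / verifier that ranks candidates."""
    return candidate[1]

def best_of_n(prompt: str, n: int) -> str:
    """Larger n = more inference compute = higher expected quality."""
    candidates = [toy_model(prompt) for _ in range(n)]
    return max(candidates, key=toy_verifier)[0]

if __name__ == "__main__":
    random.seed(0)
    for n in (1, 4, 16, 64):
        print(f"n={n:>2}: {best_of_n('2+2?', n)}")
```

Because the best of n independent draws improves as n grows, answer quality rises simply by paying more compute at inference time, with no change to the trained weights.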
As a result, OpenAI has reportedly established a foundations team to work out how the company can keep improving its models despite a dwindling supply of fresh training data.
Also Read: Former OpenAI CTO Mira Murati Launches AI Startup, Targets $100M in Investor Funding
According to reports, these new tactics include training Orion on synthetic data generated by AI models and further refining models during the post-training phase.
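A rough sketch of what such a pipeline could look like: a “teacher” model drafts candidate training pairs, a judge model filters them for quality, and the survivors feed a post-training pass on the student model. Every name and the filtering rule below are illustrative assumptions, not OpenAI’s pipeline.

```python
import random

# Toy synthetic-data loop: generate candidates with a teacher model,
# keep only those a judge scores highly, then use them for post-training.

class TeacherModel:
    def sample(self, prompt: str) -> str:
        # Stand-in for an LLM completion call.
        return f"synthetic response to {prompt!r} (draft {random.randint(0, 99)})"

class Judge:
    def score(self, prompt: str, response: str) -> float:
        # Stand-in for a reward/judge model scoring quality in [0, 1].
        return random.random()

def build_synthetic_dataset(teacher, judge, prompts,
                            per_prompt=4, threshold=0.7):
    """Draft per_prompt candidates per prompt; keep the well-scored ones."""
    kept = []
    for prompt in prompts:
        for _ in range(per_prompt):
            response = teacher.sample(prompt)
            if judge.score(prompt, response) >= threshold:
                kept.append((prompt, response))
    return kept  # pairs to feed into a fine-tuning / post-training run

if __name__ == "__main__":
    random.seed(1)
    data = build_synthetic_dataset(TeacherModel(), Judge(),
                                   prompts=["explain scaling laws"])
    print(f"kept {len(data)} of 4 candidates for post-training")
```

The appeal is that the judge-filtered outputs of one model become fresh training signal for the next, easing the shortage of new human-written data the foundations team was set up to address.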
“We don’t have plans to release a model code-named Orion this year,” the company said in response to earlier reports about its flagship model’s release plans.