A report in The Information suggests that OpenAI's next flagship model, Orion, may not be as revolutionary as its predecessors. Employees found that while Orion outperformed current models, the improvement was not as significant as during the transition from GPT-3 to GPT-4. OpenAI researchers have pushed back on the story, stating that training time and inference time are now two important dimensions of scaling.
A new report in The Information suggests that OpenAI’s next flagship model might not be as revolutionary as its predecessors.
According to reports, employees who evaluated the new model, code-named Orion, discovered that while it outperformed OpenAI’s current models, the improvement was not as great as what they had observed during the transition from GPT-3 to GPT-4.
In other words, the pace of progress appears to be slowing. In fact, Orion may not consistently outperform earlier models in certain domains, such as coding.
This made a lot of people wonder: have LLM advancements hit a ceiling? Gary Marcus, perhaps the best-known AI critic, appeared to be the most excited about it. He immediately wrote on X, “Folks, game over. I won. As I predicted, GPT is approaching a phase of diminishing returns.”
Uncle Gary, though, seems to have rejoiced a little too soon. The paper in question actually presents a new scaling law for AI that may eventually supersede the previous one. One of its authors promptly clarified the point to Marcus, writing, “The sky isn’t falling.”
In a similar vein, OpenAI researchers quickly pushed back on the story, saying it was misleading and misrepresented the progress of OpenAI’s future models.
Adam Goldberg, a founding member of OpenAI’s go-to-market (GTM) team, stated that “training time and inference time are now two important dimensions of scaling for models like the o1 series.” He clarified that while the conventional scaling principles, which emphasize longer pre-training for larger models, still apply, there is now a second crucial dimension.
“The scale factor is still fundamental. But the addition of this second scaling dimension is expected to open up incredible new possibilities,” he continued.
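To make the idea of inference-time scaling concrete, here is a minimal, purely illustrative sketch in Python. It assumes nothing about OpenAI's actual methods: each model sample is simulated as a guess that is right 60% of the time, and majority voting over repeated samples (a simple form of self-consistency decoding) shows how spending extra compute at inference time can raise accuracy without retraining the model.

```python
import collections
import random

# Hypothetical stand-in for one stochastic generation pass of a language
# model; it is not a real API, just a coin flip that is right 60% of the time.
def sample_answer(question: str, p_correct: float = 0.6) -> str:
    return "correct" if random.random() < p_correct else "wrong"

def self_consistency(question: str, n_samples: int) -> str:
    """Spend more inference-time compute by drawing n samples and
    returning the majority-vote answer (self-consistency decoding)."""
    votes = collections.Counter(sample_answer(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

# Accuracy climbs as we scale inference-time compute (more samples),
# even though the underlying "model" is unchanged.
for n in (1, 5, 25):
    trials = 1000
    hits = sum(self_consistency("Q", n) == "correct" for _ in range(trials))
    print(f"{n:>2} samples per question -> {hits / trials:.0%} accuracy")
```

On a typical run, accuracy climbs from roughly 60% with a single sample to roughly 85% with 25, which is the intuition behind treating inference-time compute as a second scaling axis.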
As a result, OpenAI has established a foundation team to work out how the company can keep improving its models despite a dwindling supply of fresh training data.
According to reports, these new tactics include training Orion on synthetic data generated by other AI models, and further refining models in the post-training phase.
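As a rough sketch of what such a pipeline could look like (the names below, such as teacher_generate and quality_score, are hypothetical placeholders, not OpenAI tooling), synthetic-data training generally means having a capable "teacher" model generate candidate examples, filtering them for quality, and folding the survivors into the next training run:

```python
from dataclasses import dataclass

@dataclass
class Example:
    prompt: str
    response: str

# Hypothetical stand-in for a stronger "teacher" model producing a candidate answer.
def teacher_generate(prompt: str) -> str:
    return f"answer to: {prompt}"

# Hypothetical stand-in for a quality filter: in practice this might be a
# reward model, an automatic verifier, or heuristic checks.
def quality_score(example: Example) -> float:
    return 1.0 if example.response else 0.0

def build_synthetic_dataset(prompts, threshold=0.5):
    """Generate candidate examples with the teacher, keep only those that
    pass the quality filter, and return them as extra training data."""
    dataset = []
    for prompt in prompts:
        ex = Example(prompt, teacher_generate(prompt))
        if quality_score(ex) >= threshold:
            dataset.append(ex)
    return dataset

synthetic = build_synthetic_dataset(["What is 2 + 2?", "Name a prime greater than 10."])
print(f"kept {len(synthetic)} synthetic examples for the next training run")
```

The design point of such pipelines is that the filter, not the generator, carries most of the weight: unfiltered model output would simply teach the student the teacher's mistakes.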
“We don’t have plans to release a model code-named Orion this year,” the company said in response to earlier reports about its flagship model’s launch plans.