
OpenAI Is Developing New Tactics to Address the Slowdown in AI Advancement

A report in The Information suggests that OpenAI’s next flagship model, Orion, may not be as revolutionary as its predecessors. Employees found that while Orion outperformed current models, the improvement was not as significant as in the transition from GPT-3 to GPT-4. OpenAI researchers have pushed back on the story, stating that training time and inference time are now two important dimensions of scaling.

A new report in The Information suggests that OpenAI’s next flagship model might not be as revolutionary as its predecessors.

According to reports, employees who evaluated the new model, code-named Orion, discovered that while it outperformed OpenAI’s current models, the improvement was not as great as what they had observed during the transition from GPT-3 to GPT-4.

In other words, the pace of progress appears to be slowing. In fact, Orion may not consistently outperform earlier models in certain domains, such as coding.


This made a lot of people wonder: have LLM advancements hit a ceiling? Gary Marcus, one of the best-known AI critics, appeared to be the most excited about it. He immediately wrote on X, “Folks, game over. I prevailed. As I predicted, GPT is approaching a phase of declining returns.”

Uncle Gary, though, seems to have rejoiced a little too soon. In fact, the report points to a new scaling law for AI that may eventually supersede the previous one. One of the authors of the piece promptly clarified the point to Marcus, writing, “The sky isn’t falling.”

In a similar vein, OpenAI researchers quickly pushed back on the story, saying it was misleading and did not accurately represent the progress of OpenAI’s future models.


Adam Goldberg, a founding member of OpenAI’s go-to-market (GTM) team, stated that “training time and inference time are now two important dimensions of scaling for models like the o1 series.” He clarified that while the conventional scaling laws, which emphasize longer pre-training for larger models, still apply, there is now a second crucial dimension.

“The scale factor is still fundamental. But the addition of this second scaling dimension is expected to open up incredible new possibilities,” he continued.
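To make the idea of a second, inference-time scaling dimension concrete, here is a minimal, purely illustrative Python sketch. It is not OpenAI’s method, and every name in it (generate_candidate, score, best_of_n) is hypothetical: it mimics “spending more compute at answer time” by sampling several candidate answers and keeping the one a scoring function ranks highest, a strategy commonly described as best-of-n sampling.

```python
import random

# Toy illustration of inference-time scaling via best-of-n sampling.
# NOTE: a sketch only, not OpenAI's method; generate_candidate and score
# are hypothetical stand-ins for a model call and a verifier/reward model.

def generate_candidate(prompt: str, rng: random.Random) -> str:
    # Stand-in for one model call; quality is random in this toy version.
    quality = rng.random()
    return f"answer to {prompt!r} (quality={quality:.2f})"

def score(candidate: str) -> float:
    # Stand-in for a verifier/reward model: recover the toy quality value.
    return float(candidate.split("quality=")[1].rstrip(")"))

def best_of_n(prompt: str, n: int, seed: int = 0) -> str:
    # More samples (larger n) = more inference-time compute per query,
    # which raises the expected quality of the best candidate.
    rng = random.Random(seed)
    candidates = [generate_candidate(prompt, rng) for _ in range(n)]
    return max(candidates, key=score)

if __name__ == "__main__":
    for n in (1, 4, 16):
        print(n, "->", best_of_n("Why is the sky blue?", n))
```

In this toy setup, raising n is the analogue of scaling inference-time compute: each query costs more, but the expected quality of the selected answer improves without changing the underlying model.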

As a result, OpenAI has reportedly established a foundations team to work out how the company can keep improving its models despite a limited supply of fresh training data.


According to reports, these new tactics include training Orion on synthetic data generated by other AI models and further refining models after the initial training (post-training).
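As a rough illustration of what “training on synthetic data” can mean, the sketch below is a toy example, not OpenAI’s actual pipeline; teacher_generate, passes_filter, and build_synthetic_dataset are hypothetical names. A “teacher” model drafts candidate examples, a filter keeps only the plausible ones, and the survivors are collected into a fine-tuning set.

```python
import json
import random

# Toy sketch of synthetic-data generation, not OpenAI's pipeline.
# teacher_generate, passes_filter and build_synthetic_dataset are hypothetical.

def teacher_generate(topic: str, rng: random.Random) -> dict:
    # Stand-in for a strong "teacher" model drafting one training example.
    return {
        "prompt": f"Explain {topic}.",
        "response": f"A short explanation of {topic}...",
        "confidence": rng.random(),
    }

def passes_filter(example: dict) -> bool:
    # Stand-in for quality filtering (verifier scores, dedup, length checks).
    return example["confidence"] > 0.5

def build_synthetic_dataset(topics: list[str], per_topic: int, seed: int = 0) -> list[dict]:
    # Draft several examples per topic, keep only those that pass the filter;
    # the survivors would then be used to fine-tune (post-train) a model.
    rng = random.Random(seed)
    drafts = [teacher_generate(t, rng) for t in topics for _ in range(per_topic)]
    return [ex for ex in drafts if passes_filter(ex)]

if __name__ == "__main__":
    data = build_synthetic_dataset(["scaling laws", "inference-time compute"], per_topic=3)
    print(json.dumps(data, indent=2))
```

The filtering step carries most of the idea: synthetic data is only useful for post-training if low-quality or redundant examples are screened out before they reach the model.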

“We don’t have plans to release a model code-named Orion this year,” the company stated in response to earlier reports about its flagship model.


Kumud Sahni Pruthi

A postgraduate in Science with an inclination towards education and technology. She always looks for ways to help people improve their lives by putting complex things into simple words through her writing.
