
What Is GPT-4o? Capabilities, Evaluations, and How to Use It

OpenAI recently announced a flagship model that can reason across audio, vision, and text in real time. Read on to learn about GPT-4o, its capabilities, its evaluations, and more.

OpenAI recently launched GPT-4o, a new iteration of the GPT-4 model. To advance AI technology and make it accessible and beneficial to everyone, OpenAI is rolling out more intelligence and advanced tools to ChatGPT for free. The updated model “is much faster” and improves “capabilities across text, vision, and audio,” OpenAI CTO Mira Murati said in a livestream announcement on Monday. It will be free for all users, and paid users will continue to “have up to five times the capacity limits” of free users, she added.

Also, OpenAI CEO Sam Altman posted that the model is “natively multimodal,” meaning it can generate content, and understand commands, in voice, text, or images.

What is GPT-4o?

GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction: it accepts any combination of text, audio, and image as input and generates any combination of text, audio, and image as output. This newest flagship model provides GPT-4-level intelligence but is much faster, with improved capabilities across text, voice, and vision. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation.

Also, future improvements will allow for more natural, real-time voice conversation and the ability to converse with ChatGPT via real-time video. For example, you could show ChatGPT a live sports game and ask it to explain the rules to you. OpenAI plans to launch a new Voice Mode with these capabilities in alpha in the coming weeks, with early access for Plus users as it rolls out more broadly.

Developers can also now access GPT-4o in the API as a text and vision model. It is 2x faster, half the price, and has 5x higher rate limits compared to GPT-4 Turbo. OpenAI plans to open up GPT-4o’s new audio and video capabilities to a small group of trusted partners in the API in the coming weeks.
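For developers, switching to the new model is largely a matter of changing the model name in an existing API call. Below is a minimal sketch using the OpenAI Python SDK; it assumes an OPENAI_API_KEY environment variable is set, and the image URL is a placeholder:

```python
# Minimal sketch: calling GPT-4o as a text-and-vision model via the
# OpenAI Python SDK (pip install openai). Assumes OPENAI_API_KEY is set;
# the image URL below is a placeholder.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```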


What are the capabilities and evaluations?

Sam Altman states that GPT-4o is fast, smart, fun, natural, and helpful. In his blog, he said that the new model is a key part of OpenAI’s mission to put very capable AI tools in the hands of people for free (or at a great price).

He also called the new voice (and video) mode of GPT-4o the best computer interface he has ever used: it feels like AI from the movies, and he finds it still a bit surprising that it is real. Getting to human-level response times and expressiveness, he noted, turns out to be a big change.

The model evaluations for the latest GPT-4o version are:

  • GPT-4o is trained on a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network.
  • It achieves GPT-4 Turbo-level performance on text, reasoning, and coding intelligence while setting new high watermarks on multilingual, audio, and vision capabilities.
  • The new version sets a new high score of 88.7% on the 0-shot CoT MMLU benchmark (general knowledge questions). OpenAI gathered these evals with its new open-source simple-evals library (a conceptual sketch of such a query appears at the end of this section).
  • It sets new state-of-the-art on speech translation and outperforms Whisper-v3 on the MLS benchmark.
  • Also, it is stronger than GPT-4 on the M3Exam benchmark across all languages. M3Exam is both a multilingual and a vision evaluation, consisting of multiple-choice questions from standardized tests in various countries that sometimes include figures and diagrams.
  • A set of 20 languages was chosen as representative of the new tokenizer’s compression across different language families, including Gujarati, Telugu, Tamil, Marathi, Hindi, Urdu, Arabic, Persian, Russian, Korean, Vietnamese, Chinese, Japanese, Turkish, Italian, German, Spanish, Portuguese, French, and English.
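The tokenizer gains are straightforward to check with OpenAI’s tiktoken library, where “o200k_base” is GPT-4o’s new encoding and “cl100k_base” is the one used by GPT-4 Turbo. A small sketch (the Hindi sample sentence is illustrative; token counts vary with the text):

```python
# Sketch: comparing token counts between GPT-4o's new tokenizer
# ("o200k_base") and GPT-4 Turbo's ("cl100k_base") using tiktoken
# (pip install tiktoken). The sample sentence is illustrative.
import tiktoken

text = "नमस्ते, आप कैसे हैं?"  # "Hello, how are you?" in Hindi

old_enc = tiktoken.get_encoding("cl100k_base")  # GPT-4 / GPT-4 Turbo
new_enc = tiktoken.get_encoding("o200k_base")   # GPT-4o

print(f"cl100k_base: {len(old_enc.encode(text))} tokens")
print(f"o200k_base:  {len(new_enc.encode(text))} tokens")
```

For non-Latin-script languages like Hindi, the new encoding typically needs noticeably fewer tokens for the same text, which translates directly into lower cost and latency.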

Beyond these benchmark results, GPT-4o has safety built in by design across modalities, through techniques such as filtering training data and refining the model’s behaviour post-training.
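Returning to the evaluation method mentioned above: here is a conceptual sketch of a 0-shot chain-of-thought multiple-choice query in the spirit of the open-source simple-evals library. The sample question, prompt wording, and answer-parsing regex are illustrative assumptions, not the library’s exact code:

```python
# Conceptual sketch of a 0-shot chain-of-thought (CoT) multiple-choice
# eval query, in the spirit of OpenAI's simple-evals. The question,
# prompt template, and answer regex are illustrative assumptions.
import re
from openai import OpenAI

client = OpenAI()

question = "Which planet has the most moons?"  # stand-in for an MMLU item
options = {"A": "Earth", "B": "Mars", "C": "Saturn", "D": "Venus"}

prompt = (
    question + "\n"
    + "\n".join(f"{key}) {val}" for key, val in options.items())
    + "\n\nThink step by step, then end with 'Answer: <letter>'."
)

reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

match = re.search(r"Answer:\s*([ABCD])", reply)
print(match.group(1) if match else "no answer parsed")
```

Scoring is then just the fraction of items where the parsed letter matches the ground-truth answer.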

How to use GPT-4o?

GPT-4o is the brainchild of OpenAI, which is making more capabilities available for free in ChatGPT. Anyone with access to ChatGPT can switch to GPT-4o. The benefits and features of GPT-4o are available across three tiers:

  • Free tier: limited access to messages using advanced tools.
  • Plus and Team: 5x greater message limits than free users.
  • Enterprise: high-speed access to GPT-4o and GPT-4, enterprise-grade security and privacy features, and higher message limits.

As noted above, GPT-4o has safety built into its design across modalities. It has also undergone extensive external red teaming with 70+ external experts in domains such as social psychology, bias and fairness, and misinformation, to identify risks that are introduced or amplified by the newly added modalities.



