News

Perplexity Introduces Sonar for Pro Users, with Performance Comparable to Claude 3.5 Sonnet and GPT-4o

Perplexity, an AI search engine startup, has introduced its proprietary model, Sonar, built on Meta's Llama 3.3 70B and powered by Cerebras Inference. Sonar matches the performance of larger models and is ten times faster than Google's Gemini 2.0 Flash.

The AI search engine startup Perplexity announced that all Pro users on the platform will have access to its proprietary model, Sonar. Subscribers to the Perplexity Pro plan can now set Sonar as their default model in settings.

Sonar is built on Meta's open-source Llama 3.3 70B and runs on Cerebras Inference, which bills itself as the world's fastest AI inference engine. The model generates 1,200 tokens per second.

Perplexity stated, “We optimized Sonar across two critical dimensions that strongly correlate with user satisfaction—answer factuality and readability.” In other words, Sonar outperforms the base Llama model on both measures.


According to Perplexity, Sonar performs on par with the larger models GPT-4o and Claude 3.5 Sonnet, and surpasses both OpenAI’s GPT-4o mini and Anthropic’s Claude 3.5 Haiku.

Additionally, Sonar is ten times faster than Google’s Gemini 2.0 Flash, according to Perplexity.

Le Chat, an AI assistant recently released by the French startup Mistral, was touted as the fastest AI assistant among competitors; in testing it proved faster than any other model, with Gemini 2.0 Flash ranking second. Notably, Cerebras Inference powers Mistral’s Le Chat just as it does Perplexity’s Sonar.

Perplexity also recently announced that the DeepSeek-R1 model, hosted on US servers, is now available on the platform.


Perplexity revealed a few weeks ago that the Sonar API comes in two versions: Sonar and Sonar Pro. The company also called it the most affordable API on the market.

Sonar Pro is “perfect for multi-step tasks requiring deep understanding and context retention,” according to the company, and provides “in-depth answers” with twice as many citations as Sonar. The Pro tier allows multiple searches per request and costs $5 per 1,000 searches, $3 per million input tokens, and $15 per million output tokens.

Sonar’s pricing is simpler: $5 per 1,000 searches, with one search per request, and $1 per million tokens for both input and output.
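To see how these rates combine, here is a minimal cost-estimation sketch based solely on the per-search and per-token prices quoted above. The function name and the example token counts are illustrative assumptions, not part of Perplexity's API.

```python
# Estimate Perplexity Sonar API costs from the article's published pricing.
# Prices: per 1,000 searches, per million input tokens, per million output tokens.
SONAR_PRICING = {
    "sonar":     {"search_per_1k": 5.0, "input_per_m": 1.0, "output_per_m": 1.0},
    "sonar-pro": {"search_per_1k": 5.0, "input_per_m": 3.0, "output_per_m": 15.0},
}

def estimate_cost(model: str, searches: int, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost for a batch of API usage."""
    p = SONAR_PRICING[model]
    return (searches / 1_000 * p["search_per_1k"]
            + input_tokens / 1_000_000 * p["input_per_m"]
            + output_tokens / 1_000_000 * p["output_per_m"])

# Illustrative workload: 1,000 requests (one search each),
# averaging 500 input and 700 output tokens per request.
print(round(estimate_cost("sonar", 1_000, 500_000, 700_000), 2))      # → 6.2
print(round(estimate_cost("sonar-pro", 1_000, 500_000, 700_000), 2))  # → 17.0
```

For this hypothetical workload, Sonar Pro's higher token rates roughly triple the total cost, with output tokens accounting for most of the difference.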

This post was last modified on February 13, 2025 10:54 pm

Kumud Sahni Pruthi

