Perplexity, an AI search engine startup, has introduced its proprietary model, Sonar, built on Meta's Llama 3.3 70B and powered by Cerebras Inference. Sonar matches much larger models on factuality and readability and is ten times faster than Google's Gemini 2.0 Flash.
Perplexity Introduces Sonar for Pro Users, with Performance Comparable to Claude 3.5 Sonnet and GPT-4o
The AI search engine startup Perplexity announced that all Pro users on the platform now have access to its proprietary model, Sonar, which can be set as the default model in settings by Perplexity Pro subscribers.
Sonar is built on Meta's open-source Llama 3.3 70B and powered by Cerebras Inference, which bills itself as the world's fastest AI inference engine. The model can generate 1,200 tokens per second.
Perplexity said, “We optimized Sonar across two critical dimensions that strongly correlate with user satisfaction—answer factuality and readability.” In other words, Sonar outperforms the base Llama model on both measures.
According to Perplexity, Sonar performs on par with the larger GPT-4o and Claude 3.5 Sonnet models and surpasses both OpenAI’s GPT-4o mini and Anthropic’s Claude 3.5 Haiku.
Additionally, Sonar is ten times faster than Google’s Gemini 2.0 Flash, according to Perplexity.
Le Chat, an AI assistant recently released by the French startup Mistral, was touted as the fastest AI assistant on the market; in tests it proved faster than every other model, with Gemini 2.0 Flash in second place. Like Perplexity’s Sonar, Mistral’s Le Chat is powered by Cerebras Inference.
Perplexity also recently revealed that the powerful DeepSeek-R1 model, hosted on US servers, is now accessible on the platform.
Perplexity revealed a few weeks ago that the Sonar API comes in two versions: Sonar and Sonar Pro. The company also billed it as the most affordable API on the market.
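The two tiers are selected by model name in the request body. The sketch below builds an OpenAI-style chat-completions payload for the Sonar API; the endpoint URL and field names follow the OpenAI-compatible convention Perplexity advertises, but treat the exact details as assumptions and consult the current API reference before use.

```python
# Build a request payload for Perplexity's Sonar API (OpenAI-compatible
# chat-completions format). No network call is made here; send the payload
# with your HTTP client of choice plus an "Authorization: Bearer <key>" header.

API_URL = "https://api.perplexity.ai/chat/completions"  # assumed endpoint

def build_sonar_request(question: str, pro: bool = False) -> dict:
    """Return a JSON-serializable payload targeting Sonar or Sonar Pro."""
    return {
        "model": "sonar-pro" if pro else "sonar",
        "messages": [
            {"role": "system", "content": "Be precise and concise."},
            {"role": "user", "content": question},
        ],
    }

payload = build_sonar_request("What powers Perplexity's Sonar model?", pro=True)
print(payload["model"])  # sonar-pro
```

Because the format is OpenAI-compatible, the same payload works with generic OpenAI client libraries pointed at Perplexity's base URL.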
Sonar Pro is “perfect for multi-step tasks requiring deep understanding and context retention,” according to the company, and offers “in-depth answers” with twice as many citations as Sonar. With multiple searches permitted per request, the Pro tier costs $5 per 1,000 searches, $3 per million input tokens, and $15 per million output tokens.
Sonar’s pricing is simpler: $5 per 1,000 searches, with one search per request, and $1 per million tokens for both input and output.
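Given the listed prices, the cost of a workload can be estimated directly. A minimal sketch, using the rates quoted above (the token counts in the example are made-up illustration numbers, not real usage figures):

```python
def sonar_cost(searches: int, input_tokens: int, output_tokens: int,
               pro: bool = False) -> float:
    """Estimate cost in USD under the Sonar API pricing quoted above."""
    if pro:
        # Sonar Pro: $5 per 1,000 searches, $3/M input, $15/M output tokens
        return (searches / 1_000 * 5
                + input_tokens / 1_000_000 * 3
                + output_tokens / 1_000_000 * 15)
    # Sonar: $5 per 1,000 searches, $1/M tokens for input and output combined
    return (searches / 1_000 * 5
            + (input_tokens + output_tokens) / 1_000_000 * 1)

# Example: 1,000 searches, 2M input tokens, 1M output tokens
print(f"{sonar_cost(1000, 2_000_000, 1_000_000):.2f}")            # 8.00
print(f"{sonar_cost(1000, 2_000_000, 1_000_000, pro=True):.2f}")  # 26.00
```

As the example shows, output tokens dominate Pro-tier costs at $15 per million, so the gap between the two tiers widens with answer length.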
This post was last modified on February 13, 2025 10:54 pm