News

Mistral 7B Outperforms LLaMA 2 and GPT-3.5 While Running 6x Faster

Mistral AI focuses on open-source technology for generative AI tools, chatbot development and customizable features. Its open 45B-parameter model, according to the company, matches or outperforms LLaMA 2 and GPT-3.5 on most benchmarks while running roughly 6x faster. The French startup Mistral AI was founded in 2023 by former leads at Google's DeepMind and Meta (Facebook and Instagram's parent company).

Mistral AI plans to keep releasing open models that the community can fine-tune and build on, with the stated goal of making AI freely and widely available. The company has raised €385 million (about $415 million) in recent funding, valuing it at roughly $2 billion, and it is also opening up its commercial platform.


The Mistral AI team describes Mistral 7B as the most powerful language model of its size to date. The company aims to make its products available to the general public in early 2024.

What is Mistral 7B, and how does it outperform LLaMA 2 and GPT-3.5?

  • Outperforms Llama 2 13B on all benchmarks
  • Outperforms Llama 1 34B on many benchmarks
  • Approaches CodeLlama 7B performance on code, while remaining good at English tasks
  • Uses Grouped-query attention (GQA) for faster inference (a minimal GQA sketch follows this list)
  • Uses Sliding Window Attention (SWA) to handle longer sequences at a smaller cost
  • Is easy to fine-tune on any task; as a demonstration, a model fine-tuned for chat outperforms Llama 2 13B Chat
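Grouped-query attention, mentioned in the list above, shares a small number of key/value heads across groups of query heads, which shrinks the KV cache and speeds up decoding. Below is a minimal PyTorch sketch of the idea, using illustrative toy shapes rather than Mistral's actual configuration or code.

```python
import torch

def grouped_query_attention(q, k, v, num_kv_heads):
    """Minimal grouped-query attention sketch (illustrative, not Mistral's code).

    q: (batch, num_q_heads, seq, head_dim)
    k, v: (batch, num_kv_heads, seq, head_dim), with num_kv_heads < num_q_heads.
    Each group of query heads shares one key/value head, shrinking the KV cache.
    """
    batch, num_q_heads, seq, head_dim = q.shape
    group_size = num_q_heads // num_kv_heads
    # Repeat each KV head so it lines up with its group of query heads.
    k = k.repeat_interleave(group_size, dim=1)
    v = v.repeat_interleave(group_size, dim=1)
    scores = (q @ k.transpose(-2, -1)) / head_dim ** 0.5
    weights = torch.softmax(scores, dim=-1)
    return weights @ v

# Toy shapes: 8 query heads sharing 2 KV heads.
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 2, 16, 64)
v = torch.randn(1, 2, 16, 64)
out = grouped_query_attention(q, k, v, num_kv_heads=2)
print(out.shape)  # torch.Size([1, 8, 16, 64])
```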


Mistral 7B performance in detail: comparison with Llama 2

Mistral AI compared Mistral 7B to the Llama 2 family and re-ran all model evaluations itself for a fair comparison, measuring Mistral 7B and the different Llama models on a wide range of benchmarks. For all metrics, all models were re-evaluated with the same evaluation pipeline for an accurate comparison. Mistral 7B significantly outperforms Llama 2 13B on all metrics and is on par with Llama 34B (since Llama 2 34B was not released, results are reported for Llama 34B). It is also vastly superior on code and reasoning benchmarks. The Mistral 7B performance benchmarks are grouped by theme:

  • Commonsense Reasoning: 0-shot average of Hellaswag, Winogrande, PIQA, SIQA, OpenbookQA, ARC-Easy, ARC-Challenge, and CommonsenseQA.
  • World Knowledge: 5-shot average of NaturalQuestions and TriviaQA.
  • Reading Comprehension: 0-shot average of BoolQ and QuAC.
  • Math: Average of 8-shot GSM8K with maj@8 and 4-shot MATH with maj@4 (see the voting sketch after this list).
  • Code: Average of 0-shot HumanEval and 3-shot MBPP.
  • Popular aggregated results: 5-shot MMLU, 3-shot BBH, and 3-5-shot AGI Eval (English multiple-choice questions only).
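The maj@8 and maj@4 notation above refers to majority voting: several solutions are sampled per problem and the most common final answer is the one scored. Below is a tiny illustrative sketch of that voting step, not Mistral's evaluation code.

```python
from collections import Counter

def majority_vote(candidate_answers):
    """maj@k scoring sketch: given k sampled answers, keep the most common one."""
    counts = Counter(candidate_answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# e.g. 8 sampled GSM8K completions reduced to their final numeric answers
samples = ["42", "42", "41", "42", "40", "42", "42", "39"]
print(majority_vote(samples))  # "42": the item is scored correct if 42 matches the reference
```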

How Mistral 7B fares in the cost/performance plane

An interesting way to compare models in the cost/performance plane is to compute “equivalent model sizes”. On reasoning, comprehension and STEM reasoning (MMLU), Mistral 7B performs like a Llama 2 model more than 3x its size, which translates directly into memory savings and throughput gains.

Results on MMLU, Commonsense Reasoning, World Knowledge and Reading Comprehension for Mistral 7B and Llama 2 (7B/13B/70B) show that Mistral 7B largely outperforms Llama 2 13B on all evaluations, except on knowledge benchmarks, where it is on par (likely due to its limited parameter count, which restricts the amount of knowledge it can compress).

Note: There are important differences between Mistral's evaluation and the LLaMA 2 paper's:

  1. For MBPP, Mistral uses the hand-verified subset.
  2. For TriviaQA, Wikipedia contexts are not provided.

Mistral 7B models, Flash and Furious: Attention drift

Mistral 7B uses a sliding window attention (SWA) mechanism (Child et al., Beltagy et al.), in which each layer attends to the previous 4,096 hidden states. The main improvement, and the reason this was initially investigated, is a linear compute cost of O(sliding_window · seq_len). In practice, changes made to FlashAttention and xFormers yield a 2x speed improvement for a sequence length of 16k with a window of 4k. A huge thanks to Tri Dao and Daniel Haziza for helping include these changes on a tight schedule.
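As a rough sketch of the mechanism (illustrative only, not Mistral's or FlashAttention's implementation), sliding window attention amounts to a causal mask that lets each position see at most the previous `window` tokens, which is why the per-layer attention cost grows linearly with sequence length rather than quadratically.

```python
import torch

def sliding_window_causal_mask(seq_len, window):
    """Boolean mask where position i may attend to positions [i - window + 1, i]."""
    i = torch.arange(seq_len).unsqueeze(1)   # query positions
    j = torch.arange(seq_len).unsqueeze(0)   # key positions
    causal = j <= i                          # never attend to the future
    in_window = (i - j) < window             # never attend beyond the window
    return causal & in_window

mask = sliding_window_causal_mask(seq_len=10, window=4)
# Each row has at most `window` True entries, so total attention work is
# O(window * seq_len) instead of O(seq_len ** 2).
print(mask.int())
```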

Sliding window attention exploits the stacked layers of a transformer to attend to the past beyond the window size: a token i at layer k attends to tokens [i - sliding_window, i] at layer k-1, and those tokens in turn attended to tokens [i - 2*sliding_window, i] at the layer below. Higher layers therefore have access to information further in the past than the attention pattern alone seems to allow.
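A back-of-the-envelope illustration of that stacking effect, assuming a 4,096-token window and treating the layer count as a free parameter:

```python
# After k attention layers with window W, information can propagate roughly k * W
# tokens into the past, even though each individual layer only looks back W tokens.
window = 4096
for num_layers in (1, 4, 16, 32):
    print(num_layers, "layers ->", num_layers * window, "tokens of theoretical reach")
# e.g. a 32-layer model with a 4,096-token window has a theoretical reach of ~131k tokens.
```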

A fixed attention span means the cache can be limited to sliding_window tokens, using rotating buffers (see the reference implementation repo for details). This saves half of the cache memory for inference on a sequence length of 8192, without impacting model quality.
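A minimal sketch of such a rotating (rolling) buffer for the key/value cache, assuming a fixed window and writing each new position at slot i mod window; this is illustrative only, and Mistral's reference implementation is the authoritative version.

```python
import torch

class RollingKVCache:
    """Keeps only the last `window` key/value vectors using a rotating buffer."""

    def __init__(self, window, num_heads, head_dim):
        self.window = window
        self.k = torch.zeros(window, num_heads, head_dim)
        self.v = torch.zeros(window, num_heads, head_dim)
        self.pos = 0  # absolute position of the next token

    def append(self, k_t, v_t):
        slot = self.pos % self.window       # overwrite the oldest entry
        self.k[slot] = k_t
        self.v[slot] = v_t
        self.pos += 1

    def view(self):
        """Return the cached keys/values in chronological order."""
        n = min(self.pos, self.window)
        start = self.pos % self.window if self.pos > self.window else 0
        idx = [(start + i) % self.window for i in range(n)]
        return self.k[idx], self.v[idx]

cache = RollingKVCache(window=4, num_heads=2, head_dim=8)
for t in range(6):
    cache.append(torch.randn(2, 8), torch.randn(2, 8))
k, v = cache.view()
print(k.shape)  # only the last 4 positions are kept: torch.Size([4, 2, 8])
```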

Mistral 7B on MT-Bench vs. 13B chat models

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanism. The company is looking to engage with the community on ways to make the model finely respect guardrails, allowing deployment in environments that require moderated outputs.

To show the generalization capabilities of Mistral 7B, the team fine-tuned it on instruction datasets publicly available on Hugging Face, with no tricks and no proprietary data. The resulting model, Mistral 7B Instruct, outperforms all other 7B models on MT-Bench and is comparable to 13B chat models.
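As a usage illustration (not part of the original announcement), the instruct checkpoint published on Hugging Face can be loaded with the transformers library roughly as follows; the model ID and the chat-template call reflect commonly documented usage, so treat the exact details as assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # public instruct checkpoint on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain sliding window attention in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```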

Nvidia made a substantial investment in Mistral, the Paris-based AI startup now valued at about €2 billion (roughly $2.2 billion). The French startup recently raised €385 million (about $414 million).


