xAI, Elon Musk's AI company, has launched its first-generation multimodal model, Grok-1.5 Vision (Grok-1.5V), in preview. It will be available to early testers and existing Grok users soon. Grok-1.5V combines strong text and visual capabilities, processing information from diagrams, documents, charts, photographs, and screenshots.
In addition to better image understanding, Grok-1.5V also introduces RealWorldQA, a new benchmark that measures how well models understand the physical world from real-world images.
What are the capabilities of Grok 1.5 Vision?
- Multimodal Capabilities: Grok-1.5V can process and understand a wide range of visual data, from documents to science diagrams, making it competitive with leading AI models like GPT-4.
- Practical Applications: From coding to everyday advice, Grok-1.5V’s practical applications suggest a future where AI assists with diverse daily tasks. Examples include writing code from a diagram, telling a bedtime story from a child’s drawing, calculating calories, explaining a meme, converting a table to CSV format, and identifying wood rot from a photo.
- Rapid Development: xAI developed Grok-1.5 Vision in roughly nine months, a pace that underscores how quickly its models are advancing.
- RealWorldQA Benchmark: This new benchmark challenges models with real-world visual questions, highlighting Grok-1.5V’s ability to handle complex spatial relationships. The initial release of RealWorldQA consists of over 700 images, each paired with a question and an easily verifiable answer.
- Future Prospects: With plans to enhance its capabilities across various modalities such as images, audio, and video, Grok-1.5V is poised to become a pivotal tool in advancing multimodal AI interactions.
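One of the listed tasks, converting a table to CSV, can be sketched end to end: the model transcribes the table it sees in an image as delimited text, and a few lines of client code write that reply out as CSV. The model call below is a hypothetical stub (xAI's actual API is not shown here); only the post-processing logic is concrete.

```python
import csv
import io

def model_table_reply(image_path):
    # Hypothetical stand-in for a multimodal model call: in practice the
    # model would be asked to transcribe the table in the image as
    # pipe-delimited rows. Here we return a canned reply.
    return "Item | Qty | Price\nApples | 3 | 1.20\nBread | 1 | 2.50"

def table_reply_to_csv(reply):
    """Convert a pipe-delimited model reply into CSV text."""
    out = io.StringIO()
    writer = csv.writer(out)
    for line in reply.splitlines():
        writer.writerow([cell.strip() for cell in line.split("|")])
    return out.getvalue()

print(table_reply_to_csv(model_table_reply("receipt.jpg")))
```

The interesting part is entirely on the client side: because the model replies with predictable delimited text, standard library tooling is enough to produce a well-formed CSV file.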
As per the official blog, Grok-1.5V, evaluated in a zero-shot setting without chain-of-thought prompting, outperformed its peers on xAI's new RealWorldQA benchmark, which measures real-world spatial understanding.
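A zero-shot evaluation of this kind can be sketched as a simple scoring loop over (image, question, answer) triples. The `ask_model` function below is a stub standing in for a real multimodal API call, and the sample items are invented for illustration; RealWorldQA's actual data and xAI's evaluation harness are not public in this form.

```python
def ask_model(image_path, question):
    # Placeholder for a real vision-language model call: the image and
    # question would be sent in a zero-shot setting (no chain-of-thought
    # prompt) and the model's text answer returned.
    canned = {"left_or_right.jpg": "left", "count_cones.jpg": "4"}
    return canned.get(image_path, "unknown")

def evaluate(dataset):
    """Score a list of (image, question, answer) items by exact match."""
    correct = 0
    for image_path, question, answer in dataset:
        prediction = ask_model(image_path, question).strip().lower()
        if prediction == answer.strip().lower():
            correct += 1
    return correct / len(dataset)

dataset = [
    ("left_or_right.jpg", "Is the cyclist to the left or right of the car?", "left"),
    ("count_cones.jpg", "How many traffic cones are visible?", "4"),
]
print(evaluate(dataset))  # 1.0 for this stubbed example
```

Exact-match scoring works here because each RealWorldQA item is described as having an easily verifiable answer; a real harness would add answer normalization for free-form model output.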
xAI Grok-1.5V Can Write Code from a Diagram
xAI Grok-1.5V can write code from a diagram. This was among the features Elon Musk highlighted when announcing the upcoming Grok model, positioned to compete with GPT-4 and Google Gemini 1.5 Pro.

According to the official announcement, Grok-1.5V is competitive with existing frontier multimodal models in several domains, from multi-disciplinary reasoning to understanding documents, science diagrams, charts, screenshots, and photographs. xAI says it is particularly excited about Grok's capabilities in understanding the physical world: Grok outperforms its peers on the new RealWorldQA benchmark, which measures real-world spatial understanding, with all evaluations run in a zero-shot setting without chain-of-thought prompting.

With planned improvements across modalities such as images, audio, and video, as well as in generation capabilities, Grok-1.5 Vision is set to become a significant tool in advancing multimodal AI interactions.