Google DeepMind’s PaliGemma: A Small But Mighty Open-Source Vision-Language Model

Explore Google DeepMind's PaliGemma, a compact vision-language model with 3 billion parameters. This open-source VLM delivers impressive performance on diverse tasks, setting new standards in AI efficiency.

PaliGemma is a new open-source vision-language model (VLM) developed by Google DeepMind researchers. Despite its relatively small size, PaliGemma performs well on a wide range of vision-and-language tasks.

The 3-billion-parameter model performs well on roughly 40 benchmarks, spanning common VLM tasks as well as more specialized ones in fields such as remote sensing and image segmentation. It achieves this by combining a SigLIP vision encoder with a Gemma language model.

With a total of 3 billion parameters, PaliGemma pairs a Vision Transformer image encoder with a Transformer text decoder: the image encoder is initialized from SigLIP-So400m/14, the text decoder from Gemma-2B, and training follows the PaLI-3 recipe.
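
As a rough illustration of how this encoder-plus-decoder package can be used, here is a minimal sketch that loads a pre-trained PaliGemma checkpoint and runs a single image-plus-text round. It assumes the Hugging Face transformers integration (version 4.41 or later) and access to the google/paligemma-3b-pt-224 checkpoint; the image URL is a placeholder.

```python
# Minimal sketch (not from the paper): load a pre-trained PaliGemma checkpoint
# via the Hugging Face transformers integration and run one image+text round.
# Assumes transformers >= 4.41 and access to "google/paligemma-3b-pt-224".
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-pt-224"
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16
)
processor = AutoProcessor.from_pretrained(model_id)

# Single-round input: one image plus a short task-style text prefix.
url = "https://example.com/cat.jpg"  # placeholder image URL
image = Image.open(requests.get(url, stream=True).raw)
prompt = "caption en"  # pre-trained checkpoints expect task prefixes, not chat

inputs = processor(text=prompt, images=image, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=30)

# Decode only the newly generated suffix tokens (skip the prompt and image tokens).
generated = output[0][inputs["input_ids"].shape[-1]:]
print(processor.decode(generated, skip_special_tokens=True))
```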

PaliGemma frequently outperforms larger models on tasks like image labelling and video interpretation. Because its architecture accepts multiple input images, it is well suited to image pairs and short video clips, and it obtains top results on benchmarks such as MMVP and Objaverse Multiview without task-specific fine-tuning.

Important design decisions include a prefix-LM training objective with bidirectional attention over the input prefix, fine-tuning all model components concurrently, a multi-stage training procedure that increases image resolution, and carefully curated, varied pretraining data.
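
To make the prefix-LM idea concrete, the sketch below (an illustration, not code from the paper) builds an attention mask in which the prefix tokens, i.e. image patches plus the text prompt, attend to each other bidirectionally, while the generated suffix tokens attend causally to everything before them.

```python
# Illustrative sketch of a prefix-LM attention mask: the prefix is fully
# bidirectional, the suffix is causal.
import torch

def prefix_lm_mask(prefix_len: int, total_len: int) -> torch.Tensor:
    """Return a boolean mask of shape (total_len, total_len);
    True at (i, j) means position i may attend to position j."""
    # Start from a standard causal (lower-triangular) mask.
    mask = torch.tril(torch.ones(total_len, total_len, dtype=torch.bool))
    # Within the prefix, allow full bidirectional attention.
    mask[:prefix_len, :prefix_len] = True
    return mask

# Example: 4 prefix tokens (image patches + prompt), 3 suffix tokens (answer).
print(prefix_lm_mask(prefix_len=4, total_len=7).int())
```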

To evaluate the effects of different architectural and training choices, the researchers also carried out comprehensive ablation studies. Longer pretraining, unfreezing all model components, and higher image resolution were shown to be major contributors to PaliGemma’s performance.

By releasing PaliGemma as an open base model without instruction tuning, the researchers aim to provide a useful starting point for further research on instruction tuning and specific applications, and for drawing clearer distinctions between base models and fine-tuned models in VLM development.

The strong performance of this small model suggests that well-designed VLMs can achieve state-of-the-art results without scaling to very large sizes, which could lead to more accessible and efficient multimodal AI systems.

Click here to read the entire paper.

Limitations

PaliGemma was designed primarily to serve as a general pre-trained model to be transferred to specialized tasks. As a result, its “zero-shot” or “out of the box” performance may lag behind that of models designed specifically for general-purpose use.

PaliGemma is not a multi-turn chatbot; it is designed to accept image and text input in a single round.

Kumud Sahni Pruthi

A postgraduate in Science with an inclination towards education and technology. She always looks for ways to help people improve their lives by putting complex things into simple words through her writing.
