
Meta launches SAM 2: The first unified model for image and video segmentation

Meta's Segment Anything Model 2 (SAM 2) extends segmentation from still images to video, tackling the challenges that make video segmentation hard. SAM 2 can follow objects in real time across frames, opening the door to easier video generation and editing.

Segmentation is the task of determining which pixels in an image belong to a given object. It is useful for activities like photo editing and scientific image analysis. Meta's original Segment Anything Model, launched last year, inspired new AI-powered image-editing features in Meta's apps, such as Backdrop and Cutouts on Instagram.
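To make that concrete, here is a minimal sketch in plain NumPy of what a segmentation mask is and how a cutout-style edit uses one. The image and mask below are synthetic stand-ins for a model's output, not anything produced by SAM itself:

```python
import numpy as np

# A segmentation mask is a per-pixel label: True where the pixel belongs
# to the object, False elsewhere. A "cutout" keeps only the object's pixels.
image = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)  # stand-in RGB frame
mask = np.zeros((480, 640), dtype=bool)
mask[100:300, 200:400] = True  # pretend a model marked this region as the object

cutout = np.zeros_like(image)
cutout[mask] = image[mask]  # copy only the object's pixels onto a blank canvas

print(f"Object covers {mask.mean():.1%} of the frame")
```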

SAM has also sparked diverse applications in science, health, and many other fields. For instance, it has been used in medicine to help diagnose skin cancer, in marine research to segment sonar images and study coral reefs, and in satellite-imagery analysis for disaster relief.


Meta's new Segment Anything Model 2 (SAM 2) extends these capabilities to video. SAM 2 can segment any object in an image or video and follow it in real time across every frame.

Existing models have struggled here because segmenting video is far more difficult than segmenting still images: objects can move quickly, appear and disappear, or be hidden by other objects and scene changes. Meta built SAM 2 to resolve many of these issues.
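Meta released code and model weights alongside the research. The sketch below follows the video-prediction example published in Meta's segment-anything-2 repository; the config name, checkpoint path, and frame directory here are assumptions, and exact function names may differ between versions. The idea is to prompt the model with a single click on one frame, then let it propagate the object's mask through the rest of the video:

```python
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor  # from Meta's segment-anything-2 repo

# Config and checkpoint names are assumptions; use whatever ships with your download.
predictor = build_sam2_video_predictor("sam2_hiera_l.yaml", "./checkpoints/sam2_hiera_large.pt")

# Assumes a CUDA GPU, following the repo's example notebook.
with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    # Load a directory of video frames and set up the tracking state.
    state = predictor.init_state(video_path="./my_video_frames")

    # Prompt the model with a single foreground click on the object in frame 0.
    points = np.array([[210, 350]], dtype=np.float32)  # (x, y) pixel coordinate
    labels = np.array([1], dtype=np.int32)             # 1 = foreground click
    predictor.add_new_points(state, frame_idx=0, obj_id=1, points=points, labels=labels)

    # Propagate the object's mask through every remaining frame of the video.
    masks_per_frame = {}
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks_per_frame[frame_idx] = (mask_logits[0] > 0.0).cpu().numpy()
```

A single click is often enough; additional clicks or a bounding box can refine the mask before propagation.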

Meta believes this research can open up new opportunities, including simpler video generation and editing and the creation of new mixed-reality experiences.


SAM 2 could also be used to track an object of interest through a video, speeding up the annotation of visual data for training computer-vision systems, such as those used in autonomous vehicles.
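As a purely hypothetical illustration of that annotation workflow, the propagated masks from the sketch above could be exported as per-frame label images plus bounding boxes. The `export_annotations` helper and its output format are illustrative choices, not part of the SAM 2 release:

```python
import json
import os

import numpy as np
from PIL import Image

def export_annotations(masks_per_frame, out_dir="annotations"):
    """Write each frame's mask as a PNG plus a JSON file of bounding boxes.

    masks_per_frame maps frame index -> boolean mask of shape (1, H, W),
    matching the tracking sketch above (an assumed format, not a SAM 2 API).
    """
    os.makedirs(out_dir, exist_ok=True)
    boxes = {}
    for frame_idx, mask in masks_per_frame.items():
        m = np.squeeze(mask)                      # (H, W) boolean mask
        Image.fromarray(m.astype(np.uint8) * 255).save(
            f"{out_dir}/frame_{frame_idx:05d}.png"
        )
        ys, xs = np.nonzero(m)
        if xs.size:                               # object visible in this frame
            boxes[frame_idx] = [int(xs.min()), int(ys.min()),
                                int(xs.max()), int(ys.max())]
    with open(f"{out_dir}/boxes.json", "w") as f:
        json.dump(boxes, f, indent=2)
```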

It could also enable new ways to select and interact with objects in live or real-time video.

In keeping with Meta's open-science philosophy, the company is sharing its SAM 2 research so that others can explore new capabilities and applications. It will be interesting to see what the AI community builds with it.

Also Read: How to use Imagine Me? Meta AI Tool for Selfie (Text to Image) Generation
