
Meta Launches SAM 2: The First Unified Model for Image and Video Segmentation

Meta's Segment Anything Model 2 (SAM 2) extends segmentation from images to video, a setting that has long been much harder to handle. SAM 2 can follow objects in real time across frames, which could make video generation and editing easier.

Segmentation is the task of determining which pixels in an image belong to a given object. It is useful for tasks such as photo editing and scientific image analysis. Meta's original Segment Anything Model (SAM), released last year, inspired new AI-enabled image editing features in Meta's apps, such as Backdrop and Cutouts on Instagram.
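To make the idea concrete, a segmentation model's output for one object is essentially a per-pixel mask. The snippet below is a minimal illustrative sketch, not Meta's code: it uses NumPy with a made-up 4x4 image to show how such a mask isolates an object's pixels, the basic operation behind a feature like Cutouts.

```python
import numpy as np

# Hypothetical 4x4 grayscale "image" (arbitrary pixel intensities).
image = np.array([
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10,  10,  10],
    [10, 10,  10,  10],
])

# A segmentation mask marks which pixels belong to the object (True)
# and which belong to the background (False); a model like SAM predicts
# this mask rather than deriving it from a simple threshold as done here.
mask = image > 100

# Apply the mask to cut the object out, zeroing everything else.
cutout = np.where(mask, image, 0)
print(cutout)
```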

SAM has also sparked diverse applications in science, health, and other fields. For instance, it has been used in medicine to help diagnose skin cancer, in marine research to segment sonar images and study coral reefs, and in satellite imagery analysis for disaster relief.


Meta's new Segment Anything Model 2 (SAM 2) extends these capabilities to video. SAM 2 can segment any object in an image or video and follow it in real time across every frame.

Existing models had not achieved this because segmenting video is far harder than segmenting still images: objects can move quickly, disappear and reappear, or be occluded by other objects and scene changes. SAM 2 addresses many of these challenges.
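For readers who want to try this, Meta has published SAM 2's code and model weights in the facebookresearch/segment-anything-2 repository. The sketch below follows the video-prediction usage pattern shown in that repository's README at launch; the checkpoint name, config name, frame directory, and click coordinates are placeholder assumptions to replace with your own, and exact argument names should be verified against the current repo.

```python
import torch
from sam2.build_sam import build_sam2_video_predictor

# Assumed paths -- download a checkpoint from the SAM 2 repository first.
checkpoint = "./checkpoints/sam2_hiera_large.pt"
model_cfg = "sam2_hiera_l.yaml"

predictor = build_sam2_video_predictor(model_cfg, checkpoint)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    # Initialize tracking state from a directory of video frames (assumed path).
    state = predictor.init_state("./videos/example_frames")

    # Prompt frame 0 with one foreground click (coordinates are made up).
    frame_idx, object_ids, masks = predictor.add_new_points(
        state,
        frame_idx=0,
        obj_id=1,
        points=[[210, 350]],
        labels=[1],  # 1 = foreground click, 0 = background click
    )

    # Propagate the prompt so SAM 2 tracks the object across later frames.
    for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
        pass  # `masks` holds the object's segmentation for each frame
```

This prompt-once, propagate-everywhere loop is what enables the real-time tracking described above: the model carries the object's identity forward frame by frame without new clicks.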

Meta believes this research can unlock new possibilities, including easier video generation and editing as well as new mixed reality experiences.


SAM 2 could also be used to track an object of interest throughout a video, speeding up the annotation of visual data for training computer vision systems, such as those used in autonomous vehicles.

It might also enable new ways to select and interact with objects in live or real-time video.

In keeping with its open-science approach, Meta is releasing its SAM 2 research so others can explore new capabilities and use cases. It will be interesting to see what the AI community builds on top of it.
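For anyone picking up the release, prompting SAM 2 on a single still image follows the same click-to-mask pattern as the original SAM. This is again a hedged sketch based on the repository README; the checkpoint, config, image path, and click coordinates are assumptions.

```python
import numpy as np
import torch
from PIL import Image
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Assumed checkpoint/config names -- see the SAM 2 repository for downloads.
checkpoint = "./checkpoints/sam2_hiera_large.pt"
model_cfg = "sam2_hiera_l.yaml"
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    # Load an image (placeholder path) and embed it once for repeated prompts.
    image = np.array(Image.open("./images/example.jpg").convert("RGB"))
    predictor.set_image(image)

    # One foreground click prompt (coordinates chosen for illustration).
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[500, 375]]),
        point_labels=np.array([1]),
    )
    # `masks` contains candidate segmentation masks ranked by `scores`.
```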



