Google Genie, developed by Google DeepMind, is a groundbreaking research project that holds immense potential for the future of entertainment, game development, and even robotics. Read on to learn more about this AI model.
Genie
On Monday, Google DeepMind introduced Genie, a generative AI model capable of creating playable content from a single image prompt or text description.
DeepMind team lead for Genie, Tim Rocktäschel, wrote on X, “I am really excited to reveal what @GoogleDeepMind’s Open Endedness Team has been up to. We introduce Genie, a foundational world model trained exclusively from Internet videos that can generate an endless variety of action-controllable 2D worlds given image prompts.”
This article will explore Genie, its features, working model, and more.
Genie is a foundational world model trained from Internet videos that can generate an endless variety of playable (action-controllable) worlds from synthetic images, photographs, and even sketches.
According to Google researchers, “the model can be prompted to generate an endless variety of action-controllable virtual worlds described through text, synthetic images, photographs, and even sketches. At 11B parameters, Genie can be considered a foundational world model. It is comprised of a spatiotemporal video tokenizer, an autoregressive dynamics model, and a simple and scalable latent action model.
“Genie enables users to act in the generated environments on a frame-by-frame basis despite training without any ground-truth action labels or other domain-specific requirements typically found in the world model literature. Further, the resulting learned latent action space facilitates training agents to imitate behaviours from unseen videos, opening the path for training generalist agents of the future.”
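To make the three-component design above more concrete, here is a minimal, purely illustrative Python sketch of the frame-by-frame interaction loop the researchers describe: a tokenizer turns a frame into discrete tokens, and a dynamics model predicts the next frame's tokens from the history plus a user-chosen latent action. Every class, method, and number here is a toy stand-in of our own invention; the real Genie model is not publicly available.

```python
# Illustrative stand-ins only -- not DeepMind's code or API.
from dataclasses import dataclass
from typing import List


@dataclass
class VideoTokenizer:
    """Toy stand-in for the spatiotemporal video tokenizer: maps a
    frame to a sequence of discrete tokens and back."""
    vocab_size: int = 1024

    def encode(self, frame: List[int]) -> List[int]:
        # Toy "tokenization": bucket pixel values into the vocabulary.
        return [p % self.vocab_size for p in frame]

    def decode(self, tokens: List[int]) -> List[int]:
        # Identity in this toy example; the real model reconstructs pixels.
        return list(tokens)


@dataclass
class DynamicsModel:
    """Toy stand-in for the autoregressive dynamics model: predicts
    the next frame's tokens from past tokens plus a latent action."""
    def predict(self, history: List[List[int]], action: int) -> List[int]:
        last = history[-1]
        # Toy dynamics: shift each token by the chosen latent action.
        return [(t + action) % 1024 for t in last]


def play(start_frame: List[int], actions: List[int]) -> List[List[int]]:
    """Generate one new frame per user action, autoregressively."""
    tokenizer, dynamics = VideoTokenizer(), DynamicsModel()
    history = [tokenizer.encode(start_frame)]
    frames = [start_frame]
    for a in actions:  # each a is a discrete latent action chosen by the user
        next_tokens = dynamics.predict(history, a)
        history.append(next_tokens)
        frames.append(tokenizer.decode(next_tokens))
    return frames
```

The key point the sketch captures is that no ground-truth action labels exist anywhere: the latent actions are simply discrete indices the model learned to associate with consistent changes between video frames.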
This innovation points toward an era in which entire interactive worlds can be generated from images or text. Genie can also simulate deformable objects, a task that is challenging for human-designed simulators.
The world model is trained on a dataset of more than 200,000 hours of unlabeled gameplay videos. As a result, it can recognize a diverse range of character motions and apply controls to them consistently. At present, however, the model is limited to converting images into playable 2D worlds.
It only takes a single image to create an entirely new interactive environment. This opens the door to a variety of new ways to generate and step into virtual worlds. For example, we can take a state-of-the-art text-to-image generation model and use it to produce starting frames that we can then bring to life with Genie.
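The two-stage workflow described above can be sketched in a few lines: a text-to-image model produces a starting frame, which then seeds the interactive world. Both functions below are hypothetical placeholders of our own naming; neither Genie nor any particular text-to-image model is exposed through a public API like this.

```python
# Hypothetical pipeline sketch -- placeholder functions, not real APIs.

def text_to_image(prompt: str) -> list:
    """Placeholder for any text-to-image generator. This toy version
    derives one 'pixel' per character of the prompt."""
    return [ord(c) % 256 for c in prompt]


def genie_world(start_frame: list, actions: list) -> list:
    """Placeholder for a Genie-style model: yields one generated
    frame per latent action, starting from the seed frame."""
    frames = [start_frame]
    for a in actions:
        # Toy dynamics: each action perturbs the previous frame.
        frames.append([(p + a) % 256 for p in frames[-1]])
    return frames


# Stage 1: generate a starting frame from text.
frame0 = text_to_image("a sketch of a platformer level")
# Stage 2: bring it to life as a playable rollout.
rollout = genie_world(frame0, actions=[1, 2, 3])
```

The design point is the decoupling: any image source, including a hand-drawn sketch or photograph, can supply the first frame, because the world model only ever consumes frames, never text.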
With Genie, future AI agents can be trained in a never-ending curriculum of new, generated worlds. According to the published paper, the latent actions learned by Genie can transfer to real human-designed environments, but this is just scratching the surface of what may be possible in the future.
Another significant advance is Genie's improved grasp of real-world physics, which could be applied to training robots to perform tasks outside their training regimen or to navigate environments more skillfully.
However, Genie has no release date, and it is unclear whether it will ever move beyond a research paper and become a real product.
This post was last modified on February 27, 2024 1:55 am