Generating 3D graphics used to be a monumental task, requiring robust hardware, sophisticated software, and intricate wireframes.
Imagine transforming this complex process into a 30-second task.
Stability AI has done just that with the unveiling of Stable Fast 3D, a groundbreaking generative AI technology. This innovation promises to revolutionize industries by creating high-quality 3D graphics from a single image in mere seconds.
Curious how this is possible? Dive in to discover the future of 3D image creation and how it's reshaping design, architecture, and beyond.
In processing time, this is a major improvement over earlier models, which took minutes to produce comparable results. For background, Stability AI introduced Stable Video 3D (SV3D) in March. SV3D can take as long as ten minutes to generate a 3D object; Stable Fast 3D completes the same task roughly 1,200 times faster.
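As a quick sanity check on the claimed speedup, the arithmetic works out to roughly half a second per object (figures taken from the article; the exact per-run time may vary):

```python
# Claimed figures: SV3D takes up to ~10 minutes per object,
# and Stable Fast 3D is said to be ~1,200x faster.
sv3d_seconds = 10 * 60          # ~10 minutes, per the article
speedup = 1200                  # claimed factor
fast3d_seconds = sv3d_seconds / speedup
print(fast3d_seconds)  # 0.5 -> roughly half a second per object
```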
According to Stability AI, several industries, including design, architecture, retail, virtual reality, and game development, will find great use for the new model. The Stability AI API and the chatbot Stable Assistant both offer the model for use. Hugging Face has also made the model available under a community license.
How Stable Fast 3D produces 3D assets faster than ever before
Rather than being built from the ground up, Stable Fast 3D is an extension of Stability AI’s earlier work on the TripoSR model. In March, Stability AI first revealed that it was collaborating with 3D modeling provider Tripo AI to develop a fast 3D asset creation system.
In a research paper, Stability AI researchers describe the techniques the new model uses to quickly reconstruct high-quality 3D meshes from individual images. By combining several novel strategies, the method addresses major problems in fast 3D reconstruction while preserving speed and improving output quality.
Fundamentally, Stable Fast 3D takes an input image and turns it into high-resolution triplanes—3D volumetric representations—using an upgraded transformer network. Because this network handles higher resolutions efficiently with no increase in computational cost, it captures greater detail and suffers fewer aliasing problems.
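A triplane represents a 3D volume with three axis-aligned 2D feature grids (XY, XZ, YZ): a 3D point is featurized by projecting it onto each plane and combining the three lookups. The following is a minimal illustrative sketch of that lookup, with invented names, sizes, and random features; the model's actual triplane decoder is more sophisticated:

```python
import numpy as np

RES, C = 64, 8  # plane resolution and feature channels (assumed values)
rng = np.random.default_rng(0)
planes = {
    "xy": rng.standard_normal((RES, RES, C)),
    "xz": rng.standard_normal((RES, RES, C)),
    "yz": rng.standard_normal((RES, RES, C)),
}

def sample_triplane(points):
    """points: (N, 3) array in [-1, 1]^3 -> (N, C) feature vectors."""
    # Map coordinates from [-1, 1] to nearest grid indices [0, RES-1].
    idx = np.clip(((points + 1) / 2 * (RES - 1)).round().astype(int), 0, RES - 1)
    x, y, z = idx[:, 0], idx[:, 1], idx[:, 2]
    # Project onto each plane, read its feature, and sum the three.
    return planes["xy"][x, y] + planes["xz"][x, z] + planes["yz"][y, z]

pts = np.array([[0.0, 0.0, 0.0], [0.5, -0.5, 0.25]])
feats = sample_triplane(pts)
print(feats.shape)  # (2, 8)
```

The appeal of the representation is that memory grows with the square of the resolution (three 2D grids) rather than the cube (a full 3D voxel grid), which is what makes high-resolution detail affordable.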
The researchers also introduce a novel method for estimating material and lighting. The material-estimation network uses a probabilistic approach to predict global metallic and roughness values, improving image quality and consistency.
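One common way to predict a single global scalar probabilistically is to have a network head output logits over discretized candidate values and take the probability-weighted average. The sketch below illustrates that idea with invented bin counts and logits; the paper's actual head design may differ:

```python
import numpy as np

def expected_value_from_logits(logits, bin_centers):
    """Softmax over discrete bins, then take the probability-weighted mean."""
    p = np.exp(logits - logits.max())  # numerically stable softmax
    p /= p.sum()
    return float(p @ bin_centers)

bins = np.linspace(0.0, 1.0, 5)                 # candidate roughness values
logits = np.array([0.1, 2.0, 0.3, -1.0, 0.0])   # e.g. from a network head
roughness = expected_value_from_logits(logits, bins)
print(round(roughness, 3))
```

Predicting a distribution rather than regressing a raw value tends to behave better when the true quantity is ambiguous from a single image, since the output stays inside the valid [0, 1] range by construction.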
Also notable is Stable Fast 3D’s ability to pack the various components of a 3D asset—such as geometry, textures, and material attributes—into a single small, usable 3D file.
Stability AI pushes the boundaries of gen AI from 2D to 4D
Perhaps Stability AI’s best-known product is still Stable Diffusion, its text-to-image generation technology.
Although Stable Diffusion generates 2D images, Stability AI has been working on 3D generation since at least November 2023, when Stable 3D was released. When Stable Video 3D debuted in March of this year, it improved the quality of 3D image creation and added basic camera panning for viewing generated objects.
Furthermore, Stability AI isn’t limited to 3D. The company recently unveiled Stable Video 4D, which adds a temporal dimension, generating brief 3D videos.