NVIDIA recently introduced a generative AI model that can produce audio from simple text prompts. Described as a “Swiss Army knife for sound,” NVIDIA’s Fugatto lets you generate and edit audio with ease.
Fugatto, short for Foundational Generative Audio Transformer Opus 1, can create or transform any combination of music, voices, and sounds described with any combination of text prompts and audio recordings. It can generate music clips from text, add or remove individual instruments in an existing song, change the emotion or tone of a clip, and even produce sounds that have never been heard before.
Rafael Valle, a manager of applied audio research at NVIDIA and one of the brains behind Fugatto, said the team wanted to build a generative audio model capable of understanding and generating sound the way humans do.
“Fugatto is our first step toward a future where unsupervised multitask learning in audio synthesis and transformation emerges from data and model scale,” said Valle.
Key Features
NVIDIA claims that no generative AI sound model available today matches Fugatto’s versatility. Here are some of the key features that set this model apart:
- Fugatto is the first foundational generative AI model to support such a wide range of audio generation and transformation tasks. It can follow free-form instructions and exhibits emergent capabilities that arise from the interaction of its individually trained abilities.
- It is capable of adding, removing, or editing different accents, tones, intonations, and more in existing tracks.
- Fugatto can compose music in different styles like Pop, R&B, Country, Indie, Jazz, and more. It also lets users experiment with different voices and instruments. Not only can it create music, but it can also add effects or enhance the audio quality of an existing track.
- This AI sound model can generate outputs that are not present in its training data, a feature that by itself places Fugatto among top-tier generative AI models. For example, it can simulate a person speaking while barking, or a cello that yells in rage.
- NVIDIA’s latest sound model offers complete creative and artistic control to its users.
- It is capable of temporal interpolation, i.e., generating sounds that change over time.
- During inference, the model uses a technique called ComposableART to combine instructions that were only seen individually during training.
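The last two bullets describe two inference-time ideas: combining separately trained instructions, and letting the mix change over time. Below is a minimal, hypothetical Python sketch of how such weighted instruction composition could work; this is not NVIDIA’s actual code or API, and `model_score`, the prompt strings, and the weights are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch of ComposableART-style inference (NOT NVIDIA's implementation):
# each instruction's guidance direction, measured relative to an unconditional
# pass, is scaled by a user-chosen weight, letting instructions that were
# trained separately be blended at generation time.

def model_score(x, cond):
    # Placeholder for a real model's conditional prediction; here it is
    # faked with seeded noise so the example runs end to end.
    rng = np.random.default_rng(abs(hash(cond)) % (2**32))
    return x + rng.normal(size=x.shape)

def composed_guidance(x, conds, weights, scale=3.0):
    """Blend several instructions via weighted guidance directions."""
    base = model_score(x, "")  # unconditional prediction
    out = base.copy()
    for cond, w in zip(conds, weights):
        out += scale * w * (model_score(x, cond) - base)
    return out

def interpolated_weights(t, steps):
    """Temporal interpolation: ramp one instruction down and the other up
    across the generation steps so the sound changes over time."""
    a = t / (steps - 1)
    return 1.0 - a, a

# Toy loop: start as "rain", end as "cello" over four steps.
x = np.zeros(8)
for t in range(4):
    w_rain, w_cello = interpolated_weights(t, 4)
    x = composed_guidance(x, ["sound of rain", "a cello melody"],
                          [w_rain, w_cello])
print(x.shape)  # (8,)
```

The per-instruction weights are what make the composition controllable: cranking one weight up emphasizes that instruction without retraining the model on the combined prompt.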
Training
NVIDIA’s Fugatto model was trained on a bank of NVIDIA DGX systems equipped with 32 NVIDIA H100 Tensor Core GPUs. The full version of the model uses 2.5 billion parameters. Its multi-accent and multilingual capabilities were strengthened with a blended dataset of millions of audio samples.
A heterogeneous team from around the world, including India, Brazil, China, Jordan, and South Korea, worked on the model to make it more inclusive and diverse than similar tools. The team spent over a year refining Fugatto’s capabilities and discovering new relationships in the data.
Use Cases
Fugatto is an incredibly versatile and powerful sound model. Here are some possible ways professionals as well as casual users can use it:
- With Fugatto, musicians can immediately prototype or revise song ideas. They can experiment with numerous styles, singers, and instruments. The model is also capable of adding effects and improving the overall quality of an existing track.
- By applying various accents and emotions to voiceovers, an advertising agency can use Fugatto to tailor an existing campaign to different regions or scenarios.
- What makes Fugatto versatile is that everyone, from industry professionals to general audiences, can use it. For example, a casual user could turn it into a personalized language-learning tool. Imagine how much easier an online course would be to follow if it were delivered in the voice of a family member, friend, or loved one.
- Video game developers can use it to adapt pre-recorded audio in their titles to match the changing action as players play, or to generate new assets on the fly from text prompts or audio inputs.
How to Access It?
As of this writing, NVIDIA’s Fugatto exists only as a research paper, which is publicly available.
The model is expected to be developed further soon, and NVIDIA’s partners will be able to access it in the future. Once released, it will likely set a new standard for generative AI sound models.
The Bottom Line
It seems that after conquering the semiconductor domain, NVIDIA is aiming for a new frontier in generative AI with its impressive models like Llama-3.1-Nemotron, NVLM 1.0, and now the Fugatto model.
AI models, however impressive, pose several challenges: ethical considerations, potential misuse of the technology, the need for continuous monitoring, displacement of human workers, copyright infringement, bias, and more.
While no one disputes that AI has become incredibly useful, one also cannot deny its negative impacts on the job market, the environment, and society as a whole.
It will be interesting to see how NVIDIA addresses these challenges and ensures responsible use of its cutting-edge AI technology.