
TANGO: Text to Audio using iNstruction-Guided diffusiOn

TANGO is a latent diffusion model for text-to-audio generation. It can generate realistic audio, including human sounds, animal sounds, natural and artificial sounds, and sound effects, from textual prompts. We use the frozen instruction-tuned LLM Flan-T5 as the text encoder and train a UNet-based diffusion model for audio generation. TANGO outperforms current state-of-the-art models for audio generation on both objective and subjective metrics. We release our model, training and inference code, and pre-trained checkpoints for the research community.

📣 We are releasing Tango-Full-FT-Audiocaps, which was first pre-trained on TangoPromptBank and later fine-tuned on AudioCaps. This checkpoint obtains state-of-the-art results for text-to-audio generation on AudioCaps.

Code

Our code is released here: https://github.com/declare-lab/tango

We have uploaded several TANGO-generated samples here: https://tango-web.github.io/

Please follow the instructions in the repository for installation, usage, and experiments.

Quickstart Guide

Download the TANGO model and generate audio from a text prompt:

import IPython
import soundfile as sf
from tango import Tango

tango = Tango("declare-lab/tango-full-ft-audiocaps")

prompt = "An audience cheering and clapping"
audio = tango.generate(prompt)
sf.write(f"{prompt}.wav", audio, samplerate=16000)
IPython.display.Audio(data=audio, rate=16000)

[Audio sample: An audience cheering and clapping.webm]

The model will be downloaded automatically and saved to the cache. Subsequent runs will load the model directly from the cache.
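If you'd rather avoid the soundfile dependency, the generated waveform (a float array sampled at 16 kHz) can also be written out with Python's standard-library wave module. A minimal sketch, with a synthetic tone standing in for tango.generate's output:

```python
import math
import struct
import wave

def write_wav(path, samples, samplerate=16000):
    """Write a mono 16-bit PCM WAV file from float samples in [-1, 1]."""
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)  # 16-bit samples
        wf.setframerate(samplerate)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        wf.writeframes(frames)

# Stand-in for audio = tango.generate(prompt): a 1-second 440 Hz tone.
tone = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]
write_wav("demo.wav", tone)
```

The clamping to [-1, 1] guards against the occasional out-of-range sample before converting to 16-bit integers.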

The generate function uses 100 steps by default to sample from the latent diffusion model. We recommend using 200 steps for generating higher-quality audio, at the cost of increased run-time.

prompt = "Rolling thunder with lightning strikes"
audio = tango.generate(prompt, steps=200)
IPython.display.Audio(data=audio, rate=16000)

[Audio sample: Rolling thunder with lightning strikes.webm]

Use the generate_for_batch function to generate multiple audio samples for a batch of text prompts:

prompts = [
    "A car engine revving",
    "A dog barks and rustles with some clicking",
    "Water flowing and trickling"
]
audios = tango.generate_for_batch(prompts, samples=2)

This will generate two samples for each of the three text prompts.
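To keep the batched outputs straight, you can write each sample to its own file. The nested-list return shape shown here (one list of waveform arrays per prompt) is our assumption about generate_for_batch, and the stand-in arrays below take the place of a real model call:

```python
# Hypothetical post-processing for generate_for_batch output.
# Assumption: the function returns one list of waveform arrays per prompt.
prompts = [
    "A car engine revving",
    "Water flowing and trickling",
]
# audios = tango.generate_for_batch(prompts, samples=2)
audios = [[[0.0] * 16000 for _ in range(2)] for _ in prompts]  # stand-in data

filenames = []
for prompt, samples in zip(prompts, audios):
    for i, audio in enumerate(samples):
        name = f"{prompt} {i}.wav"
        # sf.write(name, audio, samplerate=16000)  # uncomment with real audio
        filenames.append(name)

print(filenames)
```

Indexing the filename by sample number avoids overwriting when several samples share the same prompt.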
