Code: https://github.com/mateo19182/all-the-breaks

A small MusicGen model trained on 295 freely available drum breaks. No text conditioning was used (inspired by https://github.com/aaronabebe/micro-musicgen).

Trained for only 5 epochs; I liked the sound at that point, but training can be resumed by passing `continue_from=checkpoint.th`.

Useful docs: https://github.com/facebookresearch/audiocraft/blob/main/docs/TRAINING.md

Examples (picked at random) are on the model page.

Training command:

```shell
dora run solver=musicgen/musicgen_base_32khz model/lm/model_scale=small conditioner=none \
  dataset.batch_size=5 dset=audio/breaks.yaml dataset.valid.num_samples=1 \
  generate.every=10000 evaluate.every=10000 \
  optim.optimizer=adamw optim.lr=1e-4 optim.adam.weight_decay=0.01 \
  checkpoint.save_every=5
```
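To resume from the checkpoint mentioned above, the same command can be rerun with `continue_from` pointing at the saved checkpoint, per the AudioCraft training docs (a sketch; the checkpoint filename here is an example, use whatever your Dora run produced):

```shell
# resume training from an existing checkpoint (path is an example)
dora run solver=musicgen/musicgen_base_32khz model/lm/model_scale=small conditioner=none \
  dataset.batch_size=5 dset=audio/breaks.yaml dataset.valid.num_samples=1 \
  generate.every=10000 evaluate.every=10000 \
  optim.optimizer=adamw optim.lr=1e-4 optim.adam.weight_decay=0.01 \
  checkpoint.save_every=5 continue_from=checkpoint.th
```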

Use:

```python
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('mateo-19182/all-the-breaks')
model.set_generation_params(duration=10)  # seconds per generated sample
wav = model.generate_unconditional(10)    # generate 10 unconditional samples

for idx, one_wav in enumerate(wav):
    # writes {idx}.wav with loudness normalization and compression
    audio_write(f'{idx}', one_wav.cpu(), model.sample_rate,
                strategy="loudness", loudness_compressor=True)
```
