Self-Supervised Audio Spectrogram Transformer (pretrained on AudioSet/LibriSpeech)

Self-Supervised Audio Spectrogram Transformer (SSAST) model with an uninitialized classifier head. It was introduced in the paper SSAST: Self-Supervised Audio Spectrogram Transformer by Gong et al. and first released in this repository.

Disclaimer: The team releasing Audio Spectrogram Transformer did not write a model card for this model.

Model description

The Audio Spectrogram Transformer is equivalent to a Vision Transformer (ViT), but applied to audio: the audio is first turned into an image (a spectrogram), which is split into patches and fed to the Transformer. SSAST pretrains this architecture in a self-supervised fashion on unlabeled audio using masked spectrogram patch modeling, and gets state-of-the-art results on several audio classification benchmarks after fine-tuning.
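For illustration, here is a minimal sketch of the spectrogram conversion step using the transformers ASTFeatureExtractor. It assumes this checkpoint ships an AST-compatible preprocessor config; the random waveform is a stand-in for real audio.

```python
import numpy as np
from transformers import ASTFeatureExtractor

# Load the preprocessor that turns raw audio into a log-mel spectrogram "image".
extractor = ASTFeatureExtractor.from_pretrained(
    "Simon-Kotchou/ssast-small-patch-audioset-16-16"
)

# 1 second of dummy 16 kHz audio; replace with a real waveform.
waveform = np.random.randn(16000).astype(np.float32)
inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")

# The spectrogram is padded/truncated to a fixed (time, mel bins) size.
print(inputs["input_values"].shape)  # e.g. torch.Size([1, 1024, 128])
```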

Usage

The model is pretrained on a large amount of unlabeled audio. The classifier head comes uninitialized, so fine-tune it on a labeled downstream task before use; a loading sketch follows below.
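The sketch below shows one way to load the checkpoint for classification, assuming it is compatible with the transformers AST classes. The label names are hypothetical placeholders, and the logits are meaningless until the head is fine-tuned.

```python
import torch
from transformers import ASTFeatureExtractor, ASTForAudioClassification

model_id = "Simon-Kotchou/ssast-small-patch-audioset-16-16"
labels = ["speech", "music", "noise"]  # hypothetical target classes

extractor = ASTFeatureExtractor.from_pretrained(model_id)
model = ASTForAudioClassification.from_pretrained(
    model_id,
    num_labels=len(labels),
    ignore_mismatched_sizes=True,  # the shipped head is uninitialized and may not match
)

# Forward pass on dummy audio; outputs are random until the head is fine-tuned.
waveform = torch.randn(16000).numpy()
inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 3])
```

From here, the model can be fine-tuned like any other transformers classifier, for example with the Trainer API or a standard PyTorch training loop over labeled audio.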

Model size
21.8M params (F32, Safetensors)

Datasets used to train Simon-Kotchou/ssast-small-patch-audioset-16-16: AudioSet, LibriSpeech