---
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- audio-text-to-text
tags:
- audio-retrieval
- multimodal
- moment-retrieval
library_name: lighthouse
configs:
- config_name: default
  data_files:
  - split: train
    path: train/*.tar
  - split: valid
    path: valid/*.tar
  - split: test
    path: test/*.tar
---
# Clotho-Moment

This repository provides the wav files used in [Language-based Audio Moment Retrieval](https://arxiv.org/abs/2409.15672).
Each sample is a long audio recording containing several audio events, together with temporal and textual annotations.

Project page: https://h-munakata.github.io/Language-based-Audio-Moment-Retrieval/

Code: https://github.com/line/lighthouse
## Splits

- Train
  - train/train-{000..715}.tar
  - 37,930 audio samples
- Valid
  - valid/valid-{000..108}.tar
  - 5,741 audio samples
- Test
  - test/test-{000..142}.tar
  - 7,569 audio samples
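The shard ranges above can be expanded into explicit file lists with a short helper. This is an illustrative sketch, not part of the dataset tooling; the patterns and counts are taken directly from the split description:

```python
# Shard name patterns and counts, taken from the split description above.
SPLITS = {
    "train": ("train/train-{:03d}.tar", 716),  # train-000 .. train-715
    "valid": ("valid/valid-{:03d}.tar", 109),  # valid-000 .. valid-108
    "test": ("test/test-{:03d}.tar", 143),     # test-000 .. test-142
}

def shard_paths(split: str) -> list[str]:
    """Return every shard path for a split, in order."""
    pattern, n_shards = SPLITS[split]
    return [pattern.format(i) for i in range(n_shards)]

print(shard_paths("train")[0])   # train/train-000.tar
print(shard_paths("train")[-1])  # train/train-715.tar
```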
## Using WebDataset

The example below streams two train shards with [WebDataset](https://github.com/webdataset/webdataset). Note that the shard range must use single braces (`{001..002}`): the brace expansion is interpreted by WebDataset itself, so it must not be doubled.

```python
import webdataset as wds
from huggingface_hub import get_token

hf_token = get_token()

# Single braces: WebDataset expands {001..002} into one URL per shard.
url = "https://huggingface.co/datasets/lighthouse-emnlp2024/Clotho-Moment/resolve/main/train/train-{001..002}.tar"
# Stream each shard through curl, authenticating with the Hugging Face token.
url = f"pipe:curl -s -L {url} -H 'Authorization:Bearer {hf_token}'"

dataset = wds.WebDataset(url, shardshuffle=None).decode(wds.torch_audio)

for sample in dataset:
    print(sample.keys())
```
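A common pitfall when building the pipe URL is brace escaping: inside an f-string, literal braces must be doubled (`{{`/`}}`), yet the final string handed to WebDataset needs single braces. A small helper (hypothetical, not part of lighthouse) makes this explicit:

```python
BASE = "https://huggingface.co/datasets/lighthouse-emnlp2024/Clotho-Moment/resolve/main"

def pipe_url(split: str, first: int, last: int, token: str) -> str:
    """Build a WebDataset pipe URL for a contiguous shard range."""
    # Inside an f-string, {{ and }} emit literal braces, so the result
    # contains the single-brace range {first..last} that WebDataset expands.
    shards = f"{split}/{split}-{{{first:03d}..{last:03d}}}.tar"
    return f"pipe:curl -s -L {BASE}/{shards} -H 'Authorization:Bearer {token}'"

print(pipe_url("train", 1, 2, "hf_xxx"))
```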
## Citation

```bibtex
@inproceedings{munakata2025language,
  title={Language-based Audio Moment Retrieval},
  author={Munakata, Hokuto and Nishimura, Taichi and Nakada, Shota and Komatsu, Tatsuya},
  booktitle={ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={1--5},
  year={2025},
  organization={IEEE}
}
```