[Dataset] Dog Vocal Separation

IMPORTANT NOTE for the IJCAI-2025 challenge

  • [Jun. 1st, 2025] The validation set has been updated.
  • [Apr. 25th, 2025] Some changes have been made to the dataset. Sorry for the inconvenience.

Overview

.
├── train
│   ├── train_pairs.csv
│   ├── dog
│   │   ├── 6357ca529eec8ca42a1fa588e0725904.wav
│   │   ├── cd06d290e0ebc76a137bd44ebec4d5fd.wav
│   │   └── ...
│   └── mixture
│       ├── f046b186c4def7428cd627ae98d1762d.wav
│       ├── 0731bd22098bb60412752ebcf029906d.wav
│       └── ...
├── val
│   ├── val_pairs.csv
│   ├── dog
│   │   └── ...
│   └── mixture
│       └── ...
└── test
    ├── test_pairs.csv
    └── mixture
        └── ...

The CSV files (train_pairs.csv, val_pairs.csv, and test_pairs.csv) list (dog, mixture) pairs, one per row. For instance, (6357ca529eec8ca42a1fa588e0725904.wav, f046b186c4def7428cd627ae98d1762d.wav) is one (dog, mixture) pair listed in train_pairs.csv.
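
For illustration, a minimal Python sketch for iterating over these pairs; the column names used here ("dog" and "mixture") are assumptions, so check the actual CSV header:

import csv
from pathlib import Path

root = Path("train")
with open(root / "train_pairs.csv", newline="") as f:
    for row in csv.DictReader(f):
        # "dog" and "mixture" column names are assumed; adjust to the real header.
        dog_path = root / "dog" / row["dog"]
        mixture_path = root / "mixture" / row["mixture"]
        print(dog_path, "<->", mixture_path)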

The dataset uploaded to Hugging Face is split into subdirectories, since a single directory may contain at most 10,000 files. To restore the original dataset hierarchy, use the provided script:

$ ./post_download.py train/dog
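
If you cannot run the provided script, a rough Python sketch of the flattening it presumably performs; the layout assumed here (plain shard subdirectories of .wav files under each target directory) is a guess, and the shipped post_download.py is authoritative:

#!/usr/bin/env python3
import sys
from pathlib import Path

target = Path(sys.argv[1])                # e.g. train/dog
for shard in [p for p in target.iterdir() if p.is_dir()]:
    for wav in shard.glob("*.wav"):
        wav.rename(target / wav.name)     # move each file up one level
    shard.rmdir()                         # remove the now-empty shard directory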

Description

The train set provides pairs of a 10-second sound mixture and its ground-truth dog vocal. The test set provides only the mixtures. Participants are expected to produce 10-second dog barks as their output.

The train, validation, and test sets total about 348, 46, and 8 hours of audio, respectively: 125,476 (mixture, ground truth) pairs in the train set, 16,830 pairs in the validation set, and 3,000 pairs in the test set.

Pure dog barks come from previous work [1] and are originally about 1-2 seconds long on average; they are padded to 10 seconds. Background noises are strategically selected from AudioSet [2] and mixed with the dog barks. Barks and noises are combined in different permutations, while ensuring that no single dog's vocal data appears in more than one of the splits above, to avoid information leakage.
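
For intuition only, a toy sketch of how one (dog, mixture) pair could be assembled. This is not the authors' generation pipeline: the padding position, the unit mixing gain, and the numpy/soundfile dependencies are all assumptions.

import numpy as np
import soundfile as sf

SR = 32_000                   # dataset sampling rate (32 kHz)
CLIP_LEN = 10 * SR            # 10-second clips

def pad_or_trim(x, n):
    # Zero-pad (or trim) a 1-D signal to exactly n samples.
    out = np.zeros(n, dtype=x.dtype)
    out[: min(len(x), n)] = x[:n]
    return out

bark, sr = sf.read("bark.wav")    # a short (~1-2 s) pure dog vocal
noise, _ = sf.read("noise.wav")   # a background clip (e.g. from AudioSet)
assert sr == SR

dog = pad_or_trim(bark, CLIP_LEN)
mixture = dog + pad_or_trim(noise, CLIP_LEN)  # plain additive mix; gain is a guess

sf.write("dog_padded.wav", dog, SR)
sf.write("mixture.wav", mixture, SR)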

Challenge Notice

  1. All audio is sampled at 32,000 Hz (32 kHz). Make sure your submission is also sampled at 32,000 Hz.
  2. Name your submission audios according to test_pairs.csv. The CSV file maps each mixture filename to its corresponding dog filename. For instance, if the mixture filename in the test set is 16a4168a678743ce0f23c70f89d9170b.wav, your prediction should be named 97677e409040b54a21fdec623557bb2b.wav.
  3. Only test_pairs.csv has a split column, which indicates whether each row counts toward the public or private portion of a submission. Participants do not need to use this column.
  4. SI-SDR will be calculated using the si_sdr.py script.
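
The official si_sdr.py script is authoritative for scoring. For reference, a minimal sketch of the standard scale-invariant SDR definition, which may differ in detail from the official script:

import numpy as np

def si_sdr(estimate, reference, eps=1e-8):
    # Scale-invariant SDR in dB (standard definition).
    estimate = estimate - estimate.mean()    # zero-mean, as is conventional
    reference = reference - reference.mean()
    # Scale the reference so it best matches the estimate (least squares).
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    noise = estimate - target
    return 10 * np.log10((np.dot(target, target) + eps) / (np.dot(noise, noise) + eps))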

References

[1] Wang, T., Li, X., Zhang, C., Wu, M., & Zhu, K. (2024, November). Phonetic and Lexical Discovery of Canine Vocalization. In Findings of the Association for Computational Linguistics: EMNLP 2024 (pp. 13972-13983).

[2] Gemmeke, J. F., Ellis, D. P., Freedman, D., Jansen, A., Lawrence, W., Moore, R. C., ... & Ritter, M. (2017, March). Audio set: An ontology and human-labeled dataset for audio events. In 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP) (pp. 776-780). IEEE.
