Commit ac6a6c7 by yangwang825 (verified) · 1 parent: bb2b53a

Update README.md

Files changed (1): README.md (+50 −1)
tags:
- audio
size_categories:
- 1M<n<10M
---

# AudioSet

AudioSet is a large-scale dataset comprising approximately 2 million 10-second YouTube audio clips, categorised into 527 sound classes.
We have pre-processed all audio files to a 16 kHz sampling rate and stored them in the WebDataset format for efficient large-scale training and retrieval.

## Download

We recommend using the following commands to download the `confit/audioset-16khz-wds` dataset from the Hugging Face Hub.
The dataset is available in two versions:

- 20k: a smaller version with 20,550 clips for quick experimentation.
- 2m: the complete dataset with roughly 2 million clips.

```bash
# For the 20k version
huggingface-cli download confit/audioset-16khz-wds --include "20k/train/*.tar" --repo-type=dataset --local-dir /path/to/store
huggingface-cli download confit/audioset-16khz-wds --include "20k/test/*.tar" --repo-type=dataset --local-dir /path/to/store

# For the 2m version
huggingface-cli download confit/audioset-16khz-wds --include "2m/train/*.tar" --repo-type=dataset --local-dir /path/to/store
huggingface-cli download confit/audioset-16khz-wds --include "2m/test/*.tar" --repo-type=dataset --local-dir /path/to/store
```

## Format and Usage

The dataset is stored in the WebDataset (WDS) format, which is optimised for distributed training and streaming.
Each `.tar` archive contains audio files and corresponding metadata.
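
Before training, it can be useful to inspect what a downloaded shard actually contains. The sketch below lists a shard's members with Python's standard `tarfile` module; the exact file extensions inside the official shards (e.g. audio format, JSON sidecars) are an assumption here, so check the output yourself:

```python
import tarfile

def list_shard_members(shard_path):
    """Return the member names stored in a WebDataset shard (.tar).

    In WebDataset, files sharing the same basename (e.g. sample0.flac and
    sample0.json) are grouped into one sample; the extensions used in this
    dataset's shards may differ, so inspect a shard before writing a pipeline.
    """
    with tarfile.open(shard_path) as tf:
        return [member.name for member in tf.getmembers()]
```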

To load the dataset in Python with the `datasets` library's WebDataset loader:

```python
from datasets import load_dataset

# The 20k version ships 7 training shards and 6 test shards.
train_base_url = '/path/to/20k/train/shard-{i:05d}.tar'
train_urls = [train_base_url.format(i=i) for i in range(7)]

test_base_url = '/path/to/20k/test/shard-{i:05d}.tar'
test_urls = [test_base_url.format(i=i) for i in range(6)]

raw_datasets = load_dataset(
    "webdataset",
    data_files={"train": train_urls, "test": test_urls},
    streaming=False
)
```
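
The shard-path construction above can be wrapped in a small helper. This is a minimal sketch: `shard_paths` is a hypothetical convenience function, and the per-split shard counts are taken from the 20k layout shown above (other versions may differ):

```python
def shard_paths(base_dir, split, num_shards):
    """Build WebDataset shard paths like base_dir/split/shard-00000.tar.

    num_shards must match the actual shard count on disk (an assumption
    here; e.g. 7 for 20k/train and 6 for 20k/test as shown above).
    """
    return [f"{base_dir}/{split}/shard-{i:05d}.tar" for i in range(num_shards)]

# e.g. shard_paths("/path/to/20k", "train", 7)[0]
# -> "/path/to/20k/train/shard-00000.tar"
```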

## License and Usage Restrictions

Please ensure compliance with YouTube's terms of service when using this dataset.
Some clips may no longer be available if the original videos have been removed or made private.