wanchichen committed on Commit 607edc1 · verified · 1 Parent(s): 3cf744c

Update README.md

Files changed (1): README.md (+25 −2)
README.md CHANGED
@@ -11,13 +11,36 @@ dataset_info:
       sampling_rate: 16000
   splits:
   - name: train
-    num_bytes: 54665637580.0
+    num_bytes: 54665637580
     num_examples: 423
   download_size: 53917768734
-  dataset_size: 54665637580.0
+  dataset_size: 54665637580
 configs:
 - config_name: default
   data_files:
   - split: train
     path: data/train-*
+license: cc-by-nc-sa-4.0
+language:
+- multilingual
 ---
+
+Jesus Dramas is a collection of religious audio dramas across 430 languages. In total, there are around 640 hours of audio.
+It can be used for language identification, spoken language modelling, or speech representation learning.
+This dataset includes the raw, unsegmented audio in 16 kHz single-channel format. Each audio drama can have multiple speakers, with both male and female voices.
+It can be segmented into utterances with a voice activity detection (VAD) model such as this [one](https://github.com/wiseman/py-webrtcvad).
+The original audio sources were crawled from [InspirationalFilms](https://www.inspirationalfilms.com/).
+
+We use this corpus to train [XEUS](), a multilingual speech encoder for 4000+ languages.
+For more details about the dataset and its usage, please refer to our [paper]().
+
+## License and Acknowledgement
+
+Jesus Dramas is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license.
+
+If you use this dataset, we ask that you cite our paper:
+
+```
+
+```
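
The README above suggests segmenting the long recordings into utterances with a VAD model such as py-webrtcvad. As a rough, self-contained illustration of the idea (not the py-webrtcvad API itself), here is a minimal energy-based segmenter; the frame size, the threshold, and the synthetic demo signal are all assumptions for demonstration.

```python
import math
from dataclasses import dataclass

# Minimal energy-based VAD sketch: a stand-in for a real VAD such as
# py-webrtcvad. Frame length and threshold are illustrative assumptions.
SAMPLE_RATE = 16000                        # the dataset's audio is 16 kHz
FRAME_MS = 30                              # 30 ms frames, a common VAD choice
FRAME_LEN = SAMPLE_RATE * FRAME_MS // 1000

@dataclass
class Segment:
    start: float  # seconds
    end: float    # seconds

def frame_energy(frame):
    """Mean squared amplitude of one frame of float samples in [-1, 1]."""
    return sum(s * s for s in frame) / len(frame)

def segment_speech(samples, threshold=0.01):
    """Group consecutive high-energy frames into speech segments."""
    segments, seg_start = [], None
    n_frames = len(samples) // FRAME_LEN
    for i in range(n_frames):
        frame = samples[i * FRAME_LEN:(i + 1) * FRAME_LEN]
        t = i * FRAME_MS / 1000.0
        if frame_energy(frame) > threshold:
            if seg_start is None:
                seg_start = t          # speech onset
        elif seg_start is not None:
            segments.append(Segment(seg_start, t))  # speech offset
            seg_start = None
    if seg_start is not None:          # speech runs to end of audio
        segments.append(Segment(seg_start, n_frames * FRAME_MS / 1000.0))
    return segments

# Synthetic demo signal: 0.3 s silence, 0.6 s of a 440 Hz tone, 0.3 s silence.
silence = [0.0] * int(0.3 * SAMPLE_RATE)
tone = [0.5 * math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)
        for n in range(int(0.6 * SAMPLE_RATE))]
for seg in segment_speech(silence + tone + silence):
    print(f"speech from {seg.start:.2f}s to {seg.end:.2f}s")
```

py-webrtcvad itself instead classifies 10/20/30 ms frames of 16-bit mono PCM via `Vad.is_speech(frame, sample_rate)`, but the same grouping of consecutive speech frames into segments applies either way.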