---
license: apache-2.0
task_categories:
- automatic-speech-recognition
- text-to-speech
pretty_name: Nigerian Common Voice Dataset
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
- ha
- ig
- yo
multilinguality:
- multilingual
extra_gated_prompt: >-
  By clicking on “Access repository” below, you also agree to not attempt to
  determine the identity of speakers in the Common Voice dataset.
size_categories:
- 10K<n<100K
dataset_info:
- config_name: default
  features:
  - name: audio
    dtype: audio
  - name: client_id
    dtype: string
  - name: path
    dtype: string
  - name: sentence
    dtype: string
  - name: accent
    dtype: string
  - name: locale
    dtype: string
  splits:
  - name: english_train
    num_bytes: 76891.0
    num_examples: 3
  - name: english_validation
    num_bytes: 76388.0
    num_examples: 3
  - name: english_test
    num_bytes: 44707.0
    num_examples: 3
  - name: hausa_train
    num_bytes: 87721.0
    num_examples: 3
  - name: hausa_validation
    num_bytes: 81663.0
    num_examples: 3
  - name: hausa_test
    num_bytes: 86685.0
    num_examples: 3
  - name: igbo_train
    num_bytes: 77798.0
    num_examples: 3
  - name: igbo_validation
    num_bytes: 109802.0
    num_examples: 3
  - name: igbo_test
    num_bytes: 103504.0
    num_examples: 3
  - name: yoruba_train
    num_bytes: 111252.0
    num_examples: 3
  - name: yoruba_validation
    num_bytes: 125347.0
    num_examples: 3
  - name: yoruba_test
    num_bytes: 116250.0
    num_examples: 3
  download_size: 1127146
  dataset_size: 1098008.0
- config_name: english
  features:
  - name: audio
    dtype: audio
  - name: client_id
    dtype: string
  - name: path
    dtype: string
  - name: sentence
    dtype: string
  - name: accent
    dtype: string
  - name: locale
    dtype: string
  splits:
  - name: train
    num_bytes: 102291684.678
    num_examples: 2721
  - name: validation
    num_bytes: 12091603.0
    num_examples: 340
  - name: test
    num_bytes: 11585499.0
    num_examples: 341
  download_size: 121504884
  dataset_size: 125968786.678
- config_name: hausa
  features:
  - name: audio
    dtype: audio
  - name: client_id
    dtype: string
  - name: path
    dtype: string
  - name: sentence
    dtype: string
  - name: accent
    dtype: string
  - name: locale
    dtype: string
  splits:
  - name: train
    num_bytes: 189263575.55
    num_examples: 7206
  - name: validation
    num_bytes: 23256496.0
    num_examples: 901
  - name: test
    num_bytes: 24050751.0
    num_examples: 901
  download_size: 234586970
  dataset_size: 236570822.55
- config_name: igbo
  features:
  - name: audio
    dtype: audio
  - name: client_id
    dtype: string
  - name: path
    dtype: string
  - name: sentence
    dtype: string
  - name: accent
    dtype: string
  - name: locale
    dtype: string
  splits:
  - name: train
    num_bytes: 147708753.853
    num_examples: 4571
  - name: validation
    num_bytes: 19026693.0
    num_examples: 571
  - name: test
    num_bytes: 19092378.0
    num_examples: 572
  download_size: 185986664
  dataset_size: 185827824.853
- config_name: yoruba
  features:
  - name: audio
    dtype: audio
  - name: client_id
    dtype: string
  - name: path
    dtype: string
  - name: sentence
    dtype: string
  - name: accent
    dtype: string
  - name: locale
    dtype: string
  splits:
  - name: train
    num_bytes: 124429039.456
    num_examples: 3336
  - name: validation
    num_bytes: 15302013.0
    num_examples: 417
  - name: test
    num_bytes: 15182108.0
    num_examples: 418
  download_size: 147489914
  dataset_size: 154913160.456
configs:
- config_name: english
  data_files:
  - split: train
    path: english/train-*
  - split: validation
    path: english/validation-*
  - split: test
    path: english/test-*
- config_name: hausa
  data_files:
  - split: train
    path: hausa/train-*
  - split: validation
    path: hausa/validation-*
  - split: test
    path: hausa/test-*
- config_name: igbo
  data_files:
  - split: train
    path: igbo/train-*
  - split: validation
    path: igbo/validation-*
  - split: test
    path: igbo/test-*
- config_name: yoruba
  data_files:
  - split: train
    path: yoruba/train-*
  - split: validation
    path: yoruba/validation-*
  - split: test
    path: yoruba/test-*
---
# Dataset Card for Nigerian Common Voice Dataset

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
- [Additional Information](#additional-information)
  - [Reference/Disclaimer](#reference-disclaimer)
  - [Contributions](#contributions)

## Dataset Description

- **Repository:** https://github.com/
- **Point of Contact:** [Benjamin Ogbonna](mailto:[email protected])

### Dataset Summary

The Nigerian Common Voice Dataset is a comprehensive dataset consisting of 158 hours of audio recordings and their corresponding transcriptions (sentences). It includes metadata such as accent and locale that can help improve the accuracy of speech recognition engines. The dataset is specifically curated to address the gap in speech and language datasets for African accents, making it a valuable resource for researchers and developers working on Automatic Speech Recognition (ASR), Speech-to-Text (STT), Text-to-Speech (TTS), accent recognition, and Natural Language Processing (NLP) systems.

The dataset currently consists of 158 hours of audio recordings in 4 languages, and more voices and languages are continually being added. Contributions are welcome.


### Languages

```
English, Hausa, Igbo, Yoruba
```
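
Each language corresponds to a dataset configuration of the same name (`english`, `hausa`, `igbo`, `yoruba`). If needed, the available config names can also be listed programmatically; a minimal sketch using the `datasets` library:

```python
from datasets import get_dataset_config_names

# List the language configs available for this dataset
configs = get_dataset_config_names("benjaminogbonna/nigerian_common_voice_dataset")
print(configs)  # expected to include: 'english', 'hausa', 'igbo', 'yoruba'
```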

## How to use

The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared locally with a single call to the `load_dataset` function.

For example, to download the Igbo config, simply specify the corresponding language config name (i.e., "igbo" for Igbo):
```python
from datasets import load_dataset
dataset = load_dataset("benjaminogbonna/nigerian_common_voice_dataset", "igbo", split="train")
```

Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
dataset = load_dataset("benjaminogbonna/nigerian_common_voice_dataset", "igbo", split="train", streaming=True)
print(next(iter(dataset)))
```

*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).

### Local

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
dataset = load_dataset("benjaminogbonna/nigerian_common_voice_dataset", "igbo", split="train")
batch_sampler = BatchSampler(RandomSampler(dataset), batch_size=32, drop_last=False)
dataloader = DataLoader(dataset, batch_sampler=batch_sampler)
```

### Streaming

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
dataset = load_dataset("benjaminogbonna/nigerian_common_voice_dataset", "igbo", split="train", streaming=True)
dataloader = DataLoader(dataset, batch_size=32)
```

To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).

### Example scripts

Train your own CTC or Seq2Seq Automatic Speech Recognition models on this dataset with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
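
As a rough sketch of the preprocessing such training scripts perform, the snippet below resamples one example to 16 kHz and extracts input features with a Wav2Vec2 processor. The checkpoint name is only an illustrative assumption and is not tied to this dataset.

```python
from datasets import load_dataset, Audio
from transformers import Wav2Vec2Processor

# Load one language config and resample to the 16 kHz rate that most
# Wav2Vec2-style CTC checkpoints expect (the native rate here is 48 kHz).
ds = load_dataset("benjaminogbonna/nigerian_common_voice_dataset", "igbo", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

# "facebook/wav2vec2-base-960h" is only an example checkpoint.
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")

sample = ds[0]
inputs = processor(sample["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
print(inputs.input_values.shape)
```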

## Dataset Structure

### Data Instances

A typical data point comprises the `path` to the audio file and its `sentence`. 
Additional fields include `accent`, `client_id` and `locale`.

```python
{
  'client_id': 'user_5256', 
  'path': 'clips/ng_voice_igbo_5257.mp3',
  'audio': {
    'path': 'clips/ng_voice_igbo_5257.mp3', 
    'array': array([-0.00048828, -0.00018311, -0.00137329, ...,  0.00079346, 0.00091553,  0.00085449], dtype=float32), 
    'sampling_rate': 48000
  },
  'sentence': "n'ihu ọha mmadụ.",
  'accent': 'nigerian', 
  'locale': 'igbo', 
}
```

### Data Fields

`client_id` (`string`): An ID identifying the client (voice) that made the recording

`path` (`string`): The path to the audio file

`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.

`sentence` (`string`): The sentence the user was prompted to speak

`accent` (`string`): Accent of the speaker

`locale` (`string`): The locale of the speaker
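
As a minimal sketch of the access pattern described for the `audio` field above, the snippet below indexes a single example (so only that audio file is decoded) and optionally resamples the audio column on the fly with `cast_column` (16 kHz here is just an example target rate):

```python
from datasets import load_dataset, Audio

dataset = load_dataset("benjaminogbonna/nigerian_common_voice_dataset", "igbo", split="train")

# Index the example first so that only this one audio file is decoded
sample = dataset[0]["audio"]
print(sample["sampling_rate"], len(sample["array"]))

# Optionally resample the audio column on the fly, e.g. to 16 kHz
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
print(dataset[0]["audio"]["sampling_rate"])  # 16000
```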

### Data Splits

Each language config has been subdivided into train, validation and test splits.
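
For example, loading a config without specifying a split returns all three splits as a `DatasetDict` (a minimal sketch; split sizes vary by language):

```python
from datasets import load_dataset

# Load all splits of one language config
ds = load_dataset("benjaminogbonna/nigerian_common_voice_dataset", "hausa")
for split_name, split in ds.items():
    print(split_name, len(split))
```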

## Data Preprocessing Recommended by Hugging Face

The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.

Many examples in this dataset have surrounding quotation marks, e.g. _“the cat sat on the mat.”_. These quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer from the audio alone whether a sentence is a quotation. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.

In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.

```python
from datasets import load_dataset
ds = load_dataset("benjaminogbonna/nigerian_common_voice_dataset", "igbo")
def prepare_dataset(batch):
  """Function to preprocess the dataset with the .map method"""
  transcription = batch["sentence"]
  
  if transcription.startswith('"') and transcription.endswith('"'):
    # we can remove the surrounding quotation marks as they do not affect the transcription
    transcription = transcription[1:-1]
  
  if transcription[-1] not in [".", "?", "!"]:
    # append a full-stop to sentences that do not end in punctuation
    transcription = transcription + "."
  
  batch["sentence"] = transcription
  
  return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```

### Personal and Sensitive Information

The dataset consists of recordings from people who have donated their voices online. You agree not to attempt to determine the identity of speakers in the dataset.

### Social Impact of Dataset

By providing transcribed speech in Hausa, Igbo, Yoruba and Nigerian-accented English, the dataset helps address the under-representation of African accents and Nigerian languages in speech technology, supporting ASR, TTS and NLP systems that better serve these communities.

### Reference/Disclaimer

To state it clearly: the current languages and voices in the Nigerian Common Voice Dataset were not all collected from scratch.
In fact, this was not the problem we set out to solve initially. We were working on a speech-to-speech (STT & TTS) conversational model for Nigerian languages, but along the way we hit a bottleneck:
1. The little audio data available was scattered across different sources (Kaggle, Hugging Face, and many other websites).
2. The data were not in the format required by the models.
3. Many of the audio files had incorrect transcriptions, or no transcriptions at all.

So while training our model, we had to gather the data into one repository, structure it, clean it (removing or editing wrong transcriptions), and trim most of the recordings into 30-second chunks.

We figured many people faced the same issue, so we uploaded the dataset to Hugging Face and made it public.

Secondly, we have not found any publicly available data (audio & transcriptions) for many of the Nigerian languages we need (e.g., Pidgin).
So the Nigerian Common Voice Dataset will be an ongoing project to collect as many languages and voices as possible.

Next, in order to add more languages and voices:
1. We will crowd-source recordings from volunteers and contributors.
2. We will take advantage of the hundreds of hours of Nigerian movies that are publicly available in different languages.

Our goal here is just to bring this data into one central repository and make it available to the public (researchers, developers, and all).

### Contributions