benjaminogbonna committed · verified
Commit 92f831d · 1 Parent(s): e2212e3

Update README.md

Files changed (1): README.md +182 -0
README.md CHANGED
@@ -1,4 +1,25 @@
1
  ---
2
+ license: apache-2.0
3
+ task_categories:
4
+ - automatic-speech-recognition
5
+ - text-to-speech
6
+ pretty_name: Nigerian Common Voice Dataset
7
+ annotations_creators:
8
+ - crowdsourced
9
+ language_creators:
10
+ - crowdsourced
11
+ language:
12
+ - en
13
+ - ha
14
+ - ig
15
+ - yo
16
+ multilinguality:
17
+ - multilingual
18
+ extra_gated_prompt: >-
19
+   By clicking on “Access repository” below, you also agree to not attempt to
20
+   determine the identity of speakers in the Common Voice dataset.
21
+ size_categories:
22
+ - 10K<n<100K
23
  dataset_info:
24
  - config_name: default
25
  features:
 
@@ -191,3 +212,164 @@ configs:
212
  - split: test
213
  path: yoruba/test-*
214
  ---
215
+ # Dataset Card for Nigerian Common Voice Dataset
216
+
217
+ ## Table of Contents
218
+ - [Dataset Description](#dataset-description)
219
+ - [Dataset Summary](#dataset-summary)
220
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
221
+ - [Languages](#languages)
222
+ - [Dataset Structure](#dataset-structure)
223
+ - [Data Instances](#data-instances)
224
+ - [Data Fields](#data-fields)
225
+ - [Data Splits](#data-splits)
226
+ - [Dataset Creation](#dataset-creation)
227
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
228
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
229
+ - [Social Impact of Dataset](#social-impact-of-dataset)
230
+ - [Additional Information](#additional-information)
231
+ - [Contributions](#contributions)
232
+
233
+ ## Dataset Description
234
+
235
+ - **Repository:** https://github.com/
236
+ - **Point of Contact:** [Benjamin Ogbonna](mailto:[email protected])
237
+
238
+ ### Dataset Summary
239
+
240
+ The Nigerian Common Voice Dataset is a comprehensive dataset consisting of 53 hours of audio recordings and corresponding CSV files.
241
+ It also includes metadata such as accent and locale, which can help improve the accuracy of speech recognition engines. The dataset is specifically curated to address the gap in speech and language
242
+ datasets for African accents, making it a valuable resource for researchers and developers working on Automatic Speech Recognition (ASR),
243
+ Speech-to-Text (STT), Text-to-Speech (TTS), accent recognition, and Natural Language Processing (NLP) systems.
244
+
245
+ The dataset currently consists of 53 hours of audio recordings in 4 languages, and more voices and languages are continually being added. Contributions are welcome.
246
+
247
+
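+ As a minimal sketch of how the accent and locale metadata mentioned above can be inspected (the "igbo" config name is used purely as an example; see "How to use" below for loading details):
+
+ ```python
+ from collections import Counter
+
+ from datasets import load_dataset
+
+ # Load one language config and tally the metadata fields described in the summary.
+ dataset = load_dataset("benjaminogbonna/nigerian_common_voice_dataset", "igbo", split="train")
+ print(Counter(dataset["accent"]))
+ print(Counter(dataset["locale"]))
+ ```
+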
248
+ ### Languages
249
+
250
+ ```
251
+ English, Hausa, Igbo, Yoruba
252
+ ```
253
+
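+ If you are unsure which configuration names the repository exposes, they can be listed programmatically; this is a minimal sketch, assuming the configs mirror the language names above (e.g. "igbo", "yoruba"):
+
+ ```python
+ from datasets import get_dataset_config_names
+
+ # List the per-language configurations available in this dataset repository.
+ configs = get_dataset_config_names("benjaminogbonna/nigerian_common_voice_dataset")
+ print(configs)
+ ```
+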
254
+ ## How to use
255
+
256
+ The `datasets` library allows you to load and pre-process the dataset in pure Python, at scale. The dataset can be downloaded and prepared on your local drive with a single call to the `load_dataset` function.
257
+
258
+ For example, to download the Igbo config, simply specify the corresponding language config name (i.e., "igbo" for Igbo):
259
+ ```python
260
+ from datasets import load_dataset
261
+ dataset = load_dataset("benjaminogbonna/nigerian_common_voice_dataset", "igbo", split="train")
262
+ ```
263
+
264
+ Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
265
+ ```python
266
+ from datasets import load_dataset
267
+ dataset = load_dataset("benjaminogbonna/nigerian_common_voice_dataset", "igbo", split="train", streaming=True)
268
+ print(next(iter(dataset)))
269
+ ```
270
+
271
+ *Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
272
+
273
+ ### Local
274
+
275
+ ```python
276
+ from datasets import load_dataset
277
+ from torch.utils.data import DataLoader
+ from torch.utils.data.sampler import BatchSampler, RandomSampler
278
+ dataset = load_dataset("benjaminogbonna/nigerian_common_voice_dataset", "igbo", split="train")
279
+ batch_sampler = BatchSampler(RandomSampler(dataset), batch_size=32, drop_last=False)
280
+ dataloader = DataLoader(dataset, batch_sampler=batch_sampler)
281
+ ```
282
+
283
+ ### Streaming
284
+
285
+ ```python
286
+ from datasets import load_dataset
287
+ from torch.utils.data import DataLoader
288
+ dataset = load_dataset("benjaminogbonna/nigerian_common_voice_dataset", "igbo", split="train")
289
+ dataloader = DataLoader(dataset, batch_size=32)
290
+ ```
291
+
292
+ To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
293
+
294
+ ### Example scripts
295
+
296
+ Train your own CTC or Seq2Seq Automatic Speech Recognition models on this dataset with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
297
+
298
+ ## Dataset Structure
299
+
300
+ ### Data Instances
301
+
302
+ A typical data point comprises the `path` to the audio file and its `sentence`.
303
+ Additional fields include `accent`, `client_id` and `locale`.
304
+
305
+ ```python
306
+ {
307
+ 'client_id': 'user_5256',
308
+ 'path': 'clips/ng_voice_igbo_5257.mp3',
309
+ 'audio': {
310
+ 'path': 'clips/ng_voice_igbo_5257.mp3',
311
+ 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
312
+ 'sampling_rate': 48000
313
+ },
314
+ 'sentence': "n'ihu ọha mmadụ.",
315
+ 'accent': 'nigerian',
316
+ 'locale': 'igbo',
317
+ }
318
+ ```
319
+
320
+ ### Data Fields
321
+
322
+ `client_id` (`string`): An id for which client (voice) made the recording
323
+
324
+ `path` (`string`): The path to the audio file
325
+
326
+ `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. A minimal resampling sketch is given after the field list below.
327
+
328
+ `sentence` (`string`): The sentence the user was prompted to speak
329
+
330
+ `accent` (`string`): Accent of the speaker
331
+
332
+ `locale` (`string`): The locale of the speaker
333
+
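+ As a minimal sketch of the decoding note above (assuming you want 16 kHz audio, as most ASR models expect), the `audio` column can be resampled on the fly with the `Audio` feature:
+
+ ```python
+ from datasets import load_dataset, Audio
+
+ dataset = load_dataset("benjaminogbonna/nigerian_common_voice_dataset", "igbo", split="train")
+
+ # Resample from the native 48 kHz to 16 kHz; decoding happens lazily on access.
+ dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
+
+ sample = dataset[0]["audio"]    # query the sample index first, then the "audio" column
+ print(sample["sampling_rate"])  # 16000
+ ```
+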
334
+ ### Data Splits
335
+
336
+ The dataset is subdivided into dev, train and test splits.
337
+
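+ Omitting the `split` argument returns all available splits at once; this is a minimal sketch (the exact split names exposed per config may vary, so inspect the returned `DatasetDict`):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load every split of one config and report its size.
+ dataset = load_dataset("benjaminogbonna/nigerian_common_voice_dataset", "igbo")
+ print({split: ds.num_rows for split, ds in dataset.items()})
+ ```
+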
338
+ ## Data Preprocessing Recommended by Hugging Face
339
+
340
+ The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
341
+
342
+ Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.”_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
343
+
344
+ In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
345
+
346
+ ```python
347
+ from datasets import load_dataset
348
+ ds = load_dataset("benjaminogbonna/nigerian_common_voice_dataset", "igbo")
349
+ def prepare_dataset(batch):
350
+ """Function to preprocess the dataset with the .map method"""
351
+ transcription = batch["sentence"]
352
+
353
+ if transcription.startswith('"') and transcription.endswith('"'):
354
+ # we can remove trailing quotation marks as they do not affect the transcription
355
+ transcription = transcription[1:-1]
356
+
357
+ if transcription[-1] not in [".", "?", "!"]:
358
+ # append a full-stop to sentences that do not end in punctuation
359
+ transcription = transcription + "."
360
+
361
+ batch["sentence"] = transcription
362
+
363
+ return batch
364
+ ds = ds.map(prepare_dataset, desc="preprocess dataset")
365
+ ```
366
+
367
+ ### Personal and Sensitive Information
368
+
369
+ The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
370
+
371
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ The dataset consists of people who have donated their voice online. By providing speech data for Nigerian languages and accents, it helps address the under-representation of African accents in speech and language technology.
374
+
375
+ ## Additional Information
+
+ ### Contributions