---
dataset_info:
  features:
    - name: video_id
      dtype: string
    - name: video_link
      dtype: string
    - name: title
      dtype: string
    - name: text
      dtype: string
    - name: channel
      dtype: string
    - name: channel_id
      dtype: string
    - name: date
      dtype: string
    - name: license
      dtype: string
    - name: original_language
      dtype: string
    - name: language_id_method
      dtype: string
    - name: transcription_language
      dtype: string
    - name: word_count
      dtype: int64
    - name: character_count
      dtype: int64
    - name: source_language
      dtype: string
  splits:
    - name: train
      num_bytes: 298197594003
      num_examples: 22684737
  download_size: 162573072184
  dataset_size: 298197594003
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: cc-by-4.0
task_categories:
  - text-generation
tags:
  - conversational
language:
  - en
  - fr
  - es
  - pt
  - de
  - ru
  - nl
  - tr
  - it
pretty_name: YouTube Commons Re-upload
---

# YouTube Commons Re-upload

This is a re-upload of [PleIAs' YouTube Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons), a valuable open dataset:

> YouTube-Commons is a collection of audio transcripts of 2,063,066 videos shared on YouTube under a CC BY 4.0 license.

## Content

The collection comprises 22,709,724 original and automatically translated transcripts from 3,156,703 videos (721,136 individual channels).
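The `original_language` and `transcription_language` columns make it possible to separate the two kinds of transcripts. A small sketch, assuming that rows whose `transcription_language` equals `original_language` are the untranslated transcripts:

```python
from datasets import load_dataset

# Stream the dataset to avoid downloading all shards up front.
ds = load_dataset('Rijgersberg/YouTube-Commons', split='train', streaming=True)

# Assumption: original (untranslated) transcripts are the rows where the
# transcription language matches the video's original language.
originals = ds.filter(lambda row: row['transcription_language'] == row['original_language'])
```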

Unfortunately, there are problems with loading YouTube Commons with Hugging Face Datasets. To alleviate those and to make further processing of the dataset possible, I took the source Parquet files and re-uploaded this fixed version to the Hugging Face Hub.
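After the fix, the dataset should load like any other Hub dataset. A minimal sketch (the full download is roughly 160 GB, so `streaming=True` is an option if you only want to iterate over the transcripts):

```python
from datasets import load_dataset

# Download and load the full set of Parquet shards ...
youtube_commons = load_dataset('Rijgersberg/YouTube-Commons', split='train')

# ... or stream rows without downloading everything first.
streamed = load_dataset('Rijgersberg/YouTube-Commons', split='train', streaming=True)
print(next(iter(streamed))['title'])
```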

## Code

The code below was used for this re-upload. It operates on a local git clone of the PleIAs/YouTube-Commons dataset.

```python
from pathlib import Path

from datasets import load_dataset, Dataset
from tqdm import tqdm

# The 14 columns every row should end up with, matching the features
# listed in the YAML metadata above.
columns = set('''video_link
video_id
title
text
channel
channel_id
date
license
original_language
language_id_method
transcription_language
source_language
word_count
character_count'''.split('\n'))

def generate():
    """Yield rows from all source Parquet files, normalized to a consistent set of columns."""
    for filepath in tqdm(sorted(Path('/Path/To/PleIAs/YouTube-Commons').rglob('*.parquet'))):
        print(filepath)
        dataset = load_dataset("parquet",
                               data_files={'train': str(filepath)})
        for row in dataset['train']:
            keys = set(row)
            # Some of the files are missing one of these two columns.
            # Setting them to None results in an Arrow error, so we use '' instead
            if 'language_id_method' not in keys:
                row['language_id_method'] = ''
            if 'source_language' not in keys:
                row['source_language'] = ''
            if '__index_level_0__' in keys:
                del row['__index_level_0__']

            if set(row) != columns:
                raise ValueError(f'Error in columns: {set(row)}')
            yield row

# Build the dataset by consuming the generator, then upload it to the Hub.
youtube = Dataset.from_generator(generate)
youtube.push_to_hub('Rijgersberg/YouTube-Commons')
```
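Two notes on the approach: `Dataset.from_generator` consumes the rows one at a time and writes them to an Arrow cache on disk, so the roughly 300 GB collection never has to fit in memory at once. Filling the occasionally missing `language_id_method` and `source_language` columns with empty strings gives every source file the same 14-column schema, which is presumably what caused the loading problems with the original upload.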