---
dataset_info:
  features:
    - name: video_id
      dtype: string
    - name: video_link
      dtype: string
    - name: channel
      dtype: string
    - name: channel_id
      dtype: string
    - name: date
      dtype: string
    - name: license
      dtype: string
    - name: original_language
      dtype: string
    - name: title
      dtype: string
    - name: description
      dtype: string
    - name: language
      dtype: string
    - name: confidence
      dtype: float64
  splits:
    - name: train
      num_bytes: 3684421635
      num_examples: 3030568
  download_size: 2229560856
  dataset_size: 3684421635
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: cc-by-4.0
language:
  - en
  - fr
  - es
  - pt
  - de
  - ru
  - nl
  - tr
  - it
pretty_name: YouTube Commons Descriptions
---

# YouTube Commons Descriptions and Language Detection

This dataset adds titles, descriptions and language detection to YouTube Commons, a valuable open dataset:

> YouTube-Commons is a collection of audio transcripts of 2,063,066 videos shared on YouTube under a CC BY 4.0 license.
>
> **Content**
>
> The collection comprises 22,709,724 original and automatically translated transcripts from 3,156,703 videos (721,136 individual channels).

Unfortunately I have found that the detection of the original language, at least for Dutch, has room for improvement; others have observed similar issues (1, 2). Therefore this dataset adds the video title and description to YouTube Commons and performs language detection on those.
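A minimal example of loading the result with Hugging Face Datasets (assuming the repo id of this dataset, `Rijgersberg/YouTube-Commons-descriptions`):

```python
from datasets import load_dataset

# Repo id assumed from this dataset card
ds = load_dataset('Rijgersberg/YouTube-Commons-descriptions', split='train')
print(ds[0]['title'], ds[0]['language'], ds[0]['confidence'])
```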

## YouTube Commons

There are problems with loading YouTube Commons with Hugging Face Datasets. To alleviate those, I also took the source Parquet files and reuploaded a fixed version to Hugging Face: Rijgersberg/YouTube-Commons.
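For example, a sketch of streaming that reupload, so the Parquet files do not all have to be downloaded up front:

```python
from datasets import load_dataset

# Stream the fixed reupload instead of materializing it on disk
yc = load_dataset('Rijgersberg/YouTube-Commons', split='train', streaming=True)
first = next(iter(yc))
```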

## Acquisition

The titles and descriptions are downloaded from YouTube with the help of yt-dlp. Some videos are missing compared to YouTube Commons, for one of the following reasons:

- Some videos are no longer available on YouTube, taken down either by the uploader or by YouTube itself.
- Some videos are only visible to logged-in users.
- In rare cases, anti-bot measures by YouTube prevented the download.

The download took about two weeks.

Code:

```python
import json
from concurrent.futures import ProcessPoolExecutor, as_completed
from pathlib import Path

from datasets import load_dataset
from tqdm import tqdm
from yt_dlp import YoutubeDL

output_dir = Path('/path/to/output/dir/')


def get_info(video_id, output_dir):
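    # Output files are sharded into subfolders by the first two characters of
    # the video id; if the file already exists it was fetched in an earlier
    # run, so the script can be stopped and resumed.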
    write_folder = output_dir / video_id[:2]
    write_filepath = write_folder / f'{video_id}.json'
    if write_filepath.exists():
        return video_id, True

    with YoutubeDL({'quiet': True, 'skip_download': True}) as ydl:
        try:
            info = ydl.extract_info(f'https://www.youtube.com/watch?v={video_id}', download=False)

            title = info.get('title', '')
            description = info.get('description', '')

            # Write the title and description to a text file
            write_folder.mkdir(exist_ok=True, parents=True)
            with open(write_filepath, 'w', encoding='utf-8') as f:
                json.dump({'id': video_id,
                           'title': title,
                           'description': description}, f)
        except Exception as e:
            print(video_id, e)
            return video_id, False
    return video_id, True

def main():
    video_ids = []
    for filepath in tqdm(sorted(Path('/path/to/YouTubeCommons/files').rglob('*.parquet'))):
        try:  # I was having trouble loading the original dataset, so this lets me get what I can
            dataset = load_dataset("parquet",
                                   data_files={'train': str(filepath)})
            video_ids.extend(dataset['train']['video_id'])
        except Exception as e:
            print(filepath, e)
            continue
    video_ids = set(video_ids)

    with ProcessPoolExecutor(max_workers=10) as executor:
        futures = {executor.submit(get_info, video_id, output_dir): video_id
                   for video_id in video_ids}

        for future in tqdm(as_completed(futures), total=len(futures), desc="Downloading video info"):
            video_id = futures[future]
            try:
                _, success = future.result()
                if not success:
                    print(f"Failed to process: {video_id}")
            except Exception as e:
                print(f"Error occurred for {video_id}: {e}")

if __name__ == "__main__":
    main()
```

## Language detection

The `language` and `confidence` columns were added by running `langid` on the title and description. Note that the language was detected from this text, not from the audio of the video.

The equivalent detection code:

```python
from langid.langid import LanguageIdentifier, model

lang_id = LanguageIdentifier.from_modelstring(model, norm_probs=True)

lang, conf = lang_id.classify(title + '\n\n' + description)
```
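With `norm_probs=True`, langid normalizes its scores over the candidate languages, so `conf` can be interpreted as a probability between 0 and 1; that value is what is stored in the `confidence` column.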

For Dutch, here is the agreement table between the `original_language` column from YouTube Commons and the newly detected `language` column.

|                 | `original_language` nl | `original_language` !nl |
|-----------------|------------------------|-------------------------|
| `language` nl   | 7010                   | 4698                    |
| `language` !nl  | 21452                  | 2997408                 |
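A sketch of how such an agreement table can be reproduced with pandas (not necessarily the exact code used; assumes the dataset repo id from above):

```python
import pandas as pd
from datasets import load_dataset

df = load_dataset('Rijgersberg/YouTube-Commons-descriptions', split='train').to_pandas()

# 2x2 agreement table: rows are the newly detected language, columns the
# original_language column from YouTube Commons
table = pd.crosstab(df['language'] == 'nl', df['original_language'] == 'nl',
                    rownames=['language nl'], colnames=['original_language nl'])
print(table)
```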