---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- tr
tags:
- speech
- audio
- dataset
- tts
- asr
- merged-dataset
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: "data.jsonl"
  default: true
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: null
  - name: text
    dtype: string
  - name: speaker_id
    dtype: string
  - name: emotion
    dtype: string
  - name: language
    dtype: string
  splits:
  - name: train
    num_examples: 551
  config_name: default
---

# test44444

This is a merged speech dataset containing 551 audio segments from 2 source datasets.

## Dataset Information

- **Total Segments**: 551
- **Speakers**: 2
- **Languages**: tr
- **Emotions**: neutral, happy, angry
- **Original Datasets**: 2

## Dataset Structure

Each example contains:

- `audio`: Audio file (WAV format, original sampling rate preserved)
- `text`: Transcription of the audio
- `speaker_id`: Speaker identifier, made unique across all merged datasets
- `emotion`: Detected emotion (`neutral`, `happy`, or `angry` in this dataset)
- `language`: Language code (`tr` for this dataset)
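
As a quick sanity check of label balance before training, the fields above can be tallied with the standard library alone. The rows below are hypothetical stand-ins for real examples, not actual dataset content:

```python
from collections import Counter

# Hypothetical rows mirroring the schema above (not real dataset content)
rows = [
    {"text": "merhaba", "speaker_id": "speaker_0", "emotion": "happy", "language": "tr"},
    {"text": "iyi aksamlar", "speaker_id": "speaker_1", "emotion": "neutral", "language": "tr"},
    {"text": "hayir", "speaker_id": "speaker_0", "emotion": "angry", "language": "tr"},
]

# Per-emotion counts, e.g. to check class balance for TTS/ASR training
emotion_counts = Counter(r["emotion"] for r in rows)
print(emotion_counts)
```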

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Codyfederer/test44444")

# Access the training split
train_data = dataset["train"]

# Example: inspect the first sample
sample = train_data[0]
print(f"Text: {sample['text']}")
print(f"Speaker: {sample['speaker_id']}")
print(f"Language: {sample['language']}")
print(f"Emotion: {sample['emotion']}")

# Audio access (requires an audio backend such as soundfile)
# sample['audio']['array'] contains the decoded waveform
# sample['audio']['sampling_rate'] contains the sampling rate
```
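
To write a decoded sample back to disk without extra dependencies, a minimal sketch using only the standard library's `wave` module might look like the following. The synthetic tone here is a stand-in for `sample['audio']['array']`:

```python
import math
import struct
import wave

def save_wav(path, samples, sampling_rate):
    """Write a mono waveform of floats in [-1, 1] as 16-bit PCM WAV."""
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)  # 16-bit samples
        wf.setframerate(sampling_rate)
        wf.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        ))

# A 1-second 440 Hz tone standing in for sample['audio']['array']
sr = 16000
tone = [0.5 * math.sin(2 * math.pi * 440 * t / sr) for t in range(sr)]
save_wav("sample_out.wav", tone, sr)
```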

### Alternative: Load from JSONL

```python
from datasets import Dataset, Audio, Features, Value
import json

# Load the JSONL file (audio paths are relative to the repository root)
rows = []
with open("data.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        rows.append(json.loads(line))

features = Features({
    "audio": Audio(sampling_rate=None),  # keep each file's original rate
    "text": Value("string"),
    "speaker_id": Value("string"),
    "emotion": Value("string"),
    "language": Value("string"),
})

dataset = Dataset.from_list(rows, features=features)
```

### Files

The dataset repository includes:

- `data.jsonl` - Main dataset file with all columns (JSON Lines)
- `audio_XXX/*.wav` - Audio files stored under `audio_XXX/` subdirectories
- `load_dataset.txt` - Python script for loading the dataset (rename to `.py` to use)

JSONL keys:

- `audio`: Relative audio path (e.g., `audio_000/segment_000000_speaker_0.wav`)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier
- `emotion`: Detected emotion
- `language`: Language code

## Speaker ID Mapping

Speaker IDs have been made unique across all merged datasets to avoid conflicts.
For example:

- Original Dataset A: `speaker_0`, `speaker_1`
- Original Dataset B: `speaker_0`, `speaker_1`
- Merged Dataset: `speaker_0`, `speaker_1`, `speaker_2`, `speaker_3`
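
The remapping above can be sketched in a few lines; `remap_speakers` is a hypothetical helper for illustration, not part of the dataset tooling:

```python
def remap_speakers(source_datasets):
    """Assign globally unique speaker IDs across several source datasets.

    `source_datasets` is a list of datasets, each a list of row dicts with a
    local `speaker_id`; each (dataset index, local id) pair gets a fresh
    global id in first-seen order.
    """
    mapping = {}
    merged = []
    for ds_idx, rows in enumerate(source_datasets):
        for row in rows:
            key = (ds_idx, row["speaker_id"])
            if key not in mapping:
                mapping[key] = f"speaker_{len(mapping)}"
            merged.append({**row, "speaker_id": mapping[key]})
    return merged

# Two source datasets whose local IDs collide, as in the example above
ds_a = [{"speaker_id": "speaker_0"}, {"speaker_id": "speaker_1"}]
ds_b = [{"speaker_id": "speaker_0"}, {"speaker_id": "speaker_1"}]
merged = remap_speakers([ds_a, ds_b])
print([r["speaker_id"] for r in merged])
```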

Original dataset information is preserved in the metadata for reference.

## Data Quality

This dataset was created using the Vyvo Dataset Builder with:

- Automatic transcription and diarization
- Quality filtering for audio segments
- Music and noise filtering
- Emotion detection
- Language identification
|
| | ## License |
| |
|
| | This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0). |
| |
|
| | ## Citation |
| |
|
| | ```bibtex |
| | @dataset{vyvo_merged_dataset, |
| | title={test44444}, |
| | author={Vyvo Dataset Builder}, |
| | year={2025}, |
| | url={https://huggingface.co/datasets/Codyfederer/test44444} |
| | } |
| | ``` |
| |
|
| | This dataset was created using the Vyvo Dataset Builder tool. |
| |
|