---
language:
- zh
size_categories:
- 10K<n<100K
---

# RealTalk-CN: A Realistic Chinese Speech-Text Dialogue Benchmark With Cross-Modal Interaction Analysis

📌 **Resources:**

- [GitHub Repository](https://github.com/Summer-Enzhi/RealTalk)
- [arXiv Paper](https://arxiv.org/abs/2508.10015)

**RealTalk-CN** is the first large-scale, multi-domain, bimodal (speech-text) Chinese **Task-Oriented Dialogue (TOD)** dataset. All data come from real human-to-human conversations, and the corpus was built specifically to advance research on speech-based large language models (Speech LLMs). Existing TOD datasets are mostly text-based and lack real speech, spontaneous disfluencies, and cross-modal interaction scenarios; RealTalk-CN addresses these gaps and fully supports Chinese spoken dialogue modeling and evaluation.

The dataset is released under the **CC BY-NC-SA 4.0 license** and can be freely used for non-commercial research.

---

## Dataset Composition

- **Total Duration:** ~150 hours of verified real human-to-human dialogue audio
- **Dialogue Scale:** 5,400 multi-turn dialogues, over 60,000 utterances
- **Speakers:** 113 individuals, balanced gender ratio, ages 18–50, covering major dialect regions across China
- **Dialogue Domains:** 58 task-oriented domains (e.g., dining, transportation, shopping, healthcare, finance), including 55 intents and 115 slots
- **Audio Specifications:** 16 kHz sampling rate, WAV format, recorded via both professional and mobile devices
- **Transcription & Annotation:**
  - Manually transcribed at the character level, preserving spoken-language features
  - Annotated with 4 categories of disfluencies (elongation, repetition, self-correction, hesitation)
  - Includes transcriptions, slot values, intents, and speaker metadata (gender, age, region, etc.)
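To make these annotation layers concrete, here is a hypothetical sketch of what a single annotated utterance could look like. The field names and values are illustrative assumptions based on the description above, not the dataset's actual schema.

```python
# Hypothetical record for one annotated utterance, combining the layers
# described above: character-level transcription, intent, slot values,
# disfluency tags, and speaker metadata. All field names are illustrative.
example_utterance = {
    "dialogue_id": "dining_00042",                 # made-up identifier
    "turn": 3,
    "audio": "dining_00042_turn3.wav",             # 16 kHz WAV, per the specs above
    "transcription": "那个……我想订，订一个四人桌",    # keeps spoken disfluencies
    "intent": "book_table",
    "slots": {"party_size": "四人"},
    "disfluencies": ["hesitation", "repetition"],
    "speaker": {"gender": "female", "age": 28, "region": "华北"},
}

# The four disfluency categories annotated in RealTalk-CN:
DISFLUENCY_TYPES = {"elongation", "repetition", "self-correction", "hesitation"}
assert set(example_utterance["disfluencies"]) <= DISFLUENCY_TYPES
```

Consult the GitHub repository linked above for the dataset's real file layout and field names.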

---

## Dataset Features

1. **Natural and Colloquial:** Contains the spoken features and disfluencies of real task-oriented dialogues, overcoming the limitations of “read speech” corpora.
2. **Bimodal and Real Interaction:** Provides paired speech-text annotations and introduces a *cross-modal chat task*, supporting dynamic switching between speech and text, closer to real-world human-computer interaction.
3. **Complete Dialogues and Multi-Domain Coverage:** An average of 12 turns per dialogue, covering 58 real-world domains, supporting both single-domain and cross-domain dialogue modeling.
4. **Diverse Speakers:** Covers major regions of China, balanced across gender and age, enabling research on the impact of accents, dialects, and demographic differences.
5. **High-Quality Annotation and Strict Quality Control:** Multiple rounds of manual verification, detailed timestamps, and slot annotations ensure reliability and research value.
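The cross-modal chat task described in point 2 can be pictured with a minimal sketch: each user turn carries a modality flag, and a model must cope with arbitrary switches between audio and text input. The turn structure below is an assumption for illustration, not the dataset's actual format.

```python
# Hypothetical sketch of a cross-modal dialogue: each user turn carries a
# modality flag, so a Speech LLM must handle arbitrary switches between
# speech and text input. The structure is illustrative only.
dialogue = [
    {"role": "user", "modality": "speech", "content": "turn1.wav"},
    {"role": "assistant", "modality": "text", "content": "好的，请问几位用餐？"},
    {"role": "user", "modality": "text", "content": "四位"},
    {"role": "user", "modality": "speech", "content": "turn3.wav"},
]

# Count how often the user switches input modality across their turns.
user_modalities = [t["modality"] for t in dialogue if t["role"] == "user"]
switches = sum(a != b for a, b in zip(user_modalities, user_modalities[1:]))
print(switches)  # 2 (speech→text, then text→speech)
```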

---

## Advantages

- The **first large-scale Chinese speech-text TOD corpus**, filling the gap in benchmark datasets for Chinese spoken dialogue.
- Provides **disfluency annotations**, supporting robustness evaluation and error-correction research in speech-based TOD systems.
- Enables research in **speech recognition, speech synthesis, intent recognition, slot filling, dialogue management, and cross-modal studies**.
- Serves as a **benchmark for Speech LLMs on Chinese TOD tasks**, driving the development of advanced speech interaction systems.
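Robustness studies of the kind mentioned above often compare model behavior on raw spontaneous transcripts versus cleaned ones. A minimal sketch of such a normalization step, assuming (purely for illustration) that disfluency spans are marked inline with XML-style tags — the dataset's real annotation format may differ:

```python
import re

# The four disfluency categories annotated in RealTalk-CN. The inline
# "<tag>...</tag>" markup below is a hypothetical convention, used here
# only to show how a "clean" transcript variant could be derived for
# raw-vs-clean robustness comparisons.
DISFLUENCY_TAGS = ("elongation", "repetition", "self-correction", "hesitation")

def strip_disfluencies(text: str) -> str:
    """Remove spans marked as disfluent, keeping the fluent remainder."""
    for tag in DISFLUENCY_TAGS:
        # drop each tagged span entirely, e.g. "<repetition>订，</repetition>"
        text = re.sub(rf"<{re.escape(tag)}>.*?</{re.escape(tag)}>", "", text)
    return re.sub(r"\s{2,}", " ", text).strip()

raw = "我想<repetition>订，</repetition>订一个<hesitation>那个……</hesitation>四人桌"
print(strip_disfluencies(raw))  # 我想订一个四人桌
```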

---

## Conclusion

The release of **RealTalk-CN** lays the foundation for research in Chinese speech-text bimodal dialogue. With its **large scale, multi-domain coverage, natural spoken language, diverse speakers, and cross-modal interaction**, it not only advances the development of Speech LLMs in task-oriented dialogue but also provides a key resource for future cross-modal and multimodal intelligent systems.