mooaoeu committed
Commit 62fb7f1 · verified · 1 Parent(s): 5ed7a7b

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ train.parquet.bak filter=lfs diff=lfs merge=lfs -text
CHANGELOG ADDED
@@ -0,0 +1,5 @@
+ v.01 - Initial upload
+ v.02 - Further deduping
+ v.03 - ToS filtered. Added filters script repo
+ v.04 - Fixed end tags and emoticons with missing leading spaces
+ v.05 - Added dataset pipeline
README.md ADDED
@@ -0,0 +1,232 @@
+ ---
+ tags:
+ - discord
+ - chatml
+ - conversation
+ - dialogue
+ - multi-turn
+ - single-turn
+ - fine-tuning
+ - reward-model
+ - llm-training
+ - chat-dataset
+ - open-source
+ - anonymized-data
+ - casual-dialogue
+ license: apache-2.0
+ language:
+ - en
+ pretty_name: Discord-Dialogues
+ size_categories:
+ - 1M<n<10M
+ ---
+
+ <span style="font-size:smaller;">This is a clone of mookiezi/Discord-Dialogues.</span>
+
+ <p align="center">
+ <img src="assests/Discord-Dialogues.png" alt="Discord-Dialogues">
+ </p>
+
+ > **Discord-Dialogues** is a large-scale dataset of anonymized Discord conversations from late spring to early fall 2025, for training and evaluating realistic conversational AI models in a ChatML-friendly format.
+
+ This dataset contains 7.5 million exchanges spread across 17 million turns, with more than 145 million words.
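+
+ A minimal sketch of what a single sample's `text` field looks like. The placeholder messages are invented; the `<|im_start|>`/`<|im_end|>` tags follow standard ChatML, and the user/assistant role mapping is an assumption based on the assistant-block counts reported below:
+
+ ```text
+ <|im_start|>user
+ anyone else lagging on the eu servers rn<|im_end|>
+ <|im_start|>assistant
+ yeah it's been rough all night<|im_end|>
+ ```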
+
+ ---
+
+ <p align="center">
+ <a href="https://atlas.nomic.ai/data/mookiezi/discord-alpha/map">
+ <img src="assests/discord-alpha.png" alt="discord-alpha">
+ </a>
+ </p>
+
+ <p align="center">
+ <a href="https://atlas.nomic.ai/data/mookiezi/discord-alpha/map"><strong>Nomic Atlas Map</strong></a>
+ </p>
+
+ ---
+
+ ## Features
+
+ - Mixed single- and multi-turn exchanges
+ - Human-only dialogues (no bots)
+ - Filtered for ToS and harmful content
+ - Links, embeds, and commands removed
+ - Trading posts, code blocks, and LFG removed
+ - Two-author chains only
+ - Consecutive self-replies from the same author merged into a single message (see the sketch after this list)
+ - Cleaned and deduplicated for relevance
+ - Primarily English, with some other languages present
+
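+ A minimal sketch of the self-reply merging over (author, message) pairs; `merge_self_replies` is a hypothetical helper name, and the actual implementation lives in the pipeline repos linked below:
+
+ ```python
+ def merge_self_replies(turns: list[tuple[str, str]]) -> list[tuple[str, str]]:
+     """Collapse consecutive messages by the same author into one message."""
+     merged: list[tuple[str, str]] = []
+     for author, msg in turns:
+         if merged and merged[-1][0] == author:
+             # Same author replied to themselves: fold into the previous turn.
+             merged[-1] = (author, merged[-1][1] + "\n" + msg)
+         else:
+             merged.append((author, msg))
+     return merged
+ ```
+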
+ ---
+
+ ## Use
+
+ - Fine-tuning conversational models
+ - Training relevance/reward models
+ - Dialogue generation research
+
+ Use case examples:
+
+ - [mookiezi/Discord-Micae-8B-Preview](https://huggingface.co/mookiezi/Discord-Micae-8B-Preview) — experimental larger model
+ - [mookiezi/Discord-Micae-Hermes-3-3B](https://huggingface.co/mookiezi/Discord-Micae-Hermes-3-3B) — stable smaller model
+
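+ A quick-start sketch for loading the data, assuming the standard 🤗 `datasets` API and the upstream repo id (swap in this clone's id if you are pulling from here):
+
+ ```python
+ from datasets import load_dataset
+
+ # Columns per dataset_infos.json: text, tokens, turns, characters, words.
+ ds = load_dataset("mookiezi/Discord-Dialogues", split="train")
+ print(ds[0]["text"])  # one ChatML-formatted exchange
+ ```
+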
+ ---
+
+ ## Filtering Pipeline
+
+ This dataset was constructed with a custom multi-stage filtering toolkit:
+
+ 1. **SQL filters** (`filter.sql`)
+ Postgres regex/text filters for PII, bot/command patterns, links, embeds, and automation noise.
+
+ 2. **Smart cleaner** (`smartclean.py`)
+ Multi-stage cleanup: text normalization, slang replacement, resampling by length, and structural validation.
+ Filters out structural noise such as code blocks, trading posts, and LFG.
+
+ 3. **Dedupe** (`dedupe.py`)
+ Deduplicates conversations by hashing message chains.
+ Keeps only unique rows, preferring the longest final assistant message when duplicates occur (see the sketch after this list).
+
+ 4. **Fix End** (`fixend.py`)
+ Strips any prefix of spaces, commas, or non-emoticon colons before `<|im_end|>`, reducing it to the plain token.
+
+ 5. **ToS risk filter** (`tos.py`)
+ Drops or redacts unsafe categories (sexual violence, CSA, slurs, harassment, doxxing, self-harm, extremism) and PII.
+ Uses fuzzy, leetspeak-aware, and diacritic-aware regexes.
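+
+ A rough re-implementation sketch of steps 3–5 under stated assumptions: `chain_key`, `fix_end`, and `fold_leet` are hypothetical names, the "chain" is assumed to be everything before the final assistant message, and the emoticon special-casing of the real `fixend.py` is simplified away:
+
+ ```python
+ import hashlib
+ import re
+ import unicodedata
+
+ def chain_key(text: str) -> str:
+     # Step 3 (assumed definition): hash everything before the final
+     # assistant message, so variants differing only in the last reply collide.
+     head = text.rsplit("<|im_start|>assistant", 1)[0]
+     return hashlib.sha256(head.encode("utf-8")).hexdigest()
+
+ def final_reply(text: str) -> str:
+     # Text of the last assistant block (empty string if none).
+     return text.rsplit("<|im_start|>assistant", 1)[-1]
+
+ def dedupe(rows: list[dict]) -> list[dict]:
+     # Keep one row per chain hash, preferring the longest final reply.
+     best: dict[str, dict] = {}
+     for row in rows:
+         key = chain_key(row["text"])
+         kept = best.get(key)
+         if kept is None or len(final_reply(row["text"])) > len(final_reply(kept["text"])):
+             best[key] = row
+     return list(best.values())
+
+ END_RE = re.compile(r"[ ,:]+(?=<\|im_end\|>)")
+
+ def fix_end(text: str) -> str:
+     # Step 4 (simplified): drop stray spaces/commas/colons before <|im_end|>.
+     return END_RE.sub("", text)
+
+ LEET = str.maketrans("013457@$", "oieastas")
+
+ def fold_leet(text: str) -> str:
+     # Step 5 flavor: strip diacritics and fold common leetspeak before
+     # matching ToS regexes.
+     ascii_text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
+     return ascii_text.lower().translate(LEET)
+ ```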
+
+ The full filtering scripts are open source in the [filters GitHub repository](https://github.com/mookiezi/filters).
+
+ ---
+
+ ## Dataset Pipeline
+
+ The full end-to-end pipeline is documented in the [dataset-pipeline GitHub repository](https://github.com/mookiezi/dataset-pipeline).
+
+ ---
+
+ ## Collection Policy
+
+ - All data was collected in accordance with Discord's [Terms of Service](https://discord.com/terms) and [Community Guidelines](https://discord.com/guidelines).
+
+ ---
+
+ ## Dataset Statistics <span style="font-weight:normal;">(using the [NousResearch/Hermes-3-Llama-3.1-8B tokenizer](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B))</span>
+
+ <div style="display:flex; gap:20px; align-items:flex-start;">
+
+ <div>
+
+ | Metric | Value |
+ | ------------------------ | --------------: |
+ | Samples (count) | 7,546,294 |
+ | Min length (tokens) | 7 |
+ | Max length (tokens) | 5,979 |
+ | Mean length (tokens) | 33.02 |
+ | Median length (tokens) | 29 |
+ | Std dev (tokens) | 17.39 |
+ | Skew | 26.46 |
+ | Kurtosis | 7,487.55 |
+ | Total tokens | 249,193,745 |
+ | Total characters | 1,291,480,299 |
+ | Total words | 145,887,976 |
+ | Avg chars per sample | 171.14 |
+ | Avg words per sample | 19.33 |
+ | Avg chars per word | 8.85 |
+ | Tokens per char | 0.19 |
+ | Total assistant blocks | 9,341,891 |
+
+ </div>
+
+ <div>
+
+ | Tokens | Count |
+ | --------- | --------: |
+ | 0–8 | 1 |
+ | 8–16 | 110,310 |
+ | 16–32 | 4,382,094 |
+ | 32–64 | 2,674,780 |
+ | 64–128 | 360,401 |
+ | 128–256 | 18,083 |
+ | 256–384 | 417 |
+ | 384–512 | 75 |
+ | 512–768 | 78 |
+ | 768–1024 | 30 |
+ | 1024–2048 | 18 |
+ | 2048–4096 | 3 |
+
+ </div>
+
+ <div>
+
+ | Turns | Count |
+ | ----- | --------: |
+ | 2 | 5,969,540 |
+ | 3 | 1,080,526 |
+ | 4 | 319,794 |
+ | 5 | 102,553 |
+ | 6 | 41,246 |
+ | 7 | 16,904 |
+ | 8 | 7,715 |
+ | 9 | 3,691 |
+ | 10 | 1,867 |
+ | 11 | 1,007 |
+ | 12 | 575 |
+ | 13 | 334 |
+ | 14 | 189 |
+ | 15 | 129 |
+ | 16 | 67 |
+ | 17 | 62 |
+ | 18 | 32 |
+ | 19 | 21 |
+ | 20 | 8 |
+ | 21 | 11 |
+ | 22 | 11 |
+ | 23 | 2 |
+ | 24 | 1 |
+ | 25 | 3 |
+ | 27 | 2 |
+ | 29 | 1 |
+ | 32 | 1 |
+ | 33 | 2 |
+
+ </div>
+
+ </div>
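+
+ The headline numbers above can be re-derived with a sketch along these lines, using the stated Hermes tokenizer (iterating the full 7.5M-row split is slow, so this samples a subset for a quick check):
+
+ ```python
+ from datasets import load_dataset
+ from transformers import AutoTokenizer
+
+ tok = AutoTokenizer.from_pretrained("NousResearch/Hermes-3-Llama-3.1-8B")
+ ds = load_dataset("mookiezi/Discord-Dialogues", split="train")
+
+ # Token length per sample, as in the stats tables above.
+ lengths = [len(tok(t)["input_ids"]) for t in ds.select(range(10_000))["text"]]
+ print(min(lengths), max(lengths), sum(lengths) / len(lengths))
+ ```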
+
+ ---
+
+ ## Disclaimer
+
+ Although filtering reduced the exchange count by about 75%, and only roughly 7.5% of the full data dump remains in this release, the dataset is still intended as a large-scale dump. For best training results, further curation targeting high-signal data relevant to your goals is recommended.
+
+ ---
+
+ ## License
+
+ This project is licensed under the Apache License 2.0.
+
+ ---
+
+ ## How to cite
+
+ ```bibtex
+ @misc{discord-dialogues-2025,
+     title = {Discord-Dialogues},
+     author = {mookiezi},
+     year = {2025},
+     url = {https://huggingface.co/datasets/mookiezi/Discord-Dialogues}
+ }
+ ```
+
+ ---
+
+ ## Related
+
+ - [mookiezi/Discord-Micae-Hermes-3-3B](https://huggingface.co/mookiezi/Discord-Micae-Hermes-3-3B)
+ - [mookiezi/Discord-OpenMicae](https://huggingface.co/datasets/mookiezi/Discord-OpenMicae)
+ - [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B)
+
+ [​](https://20000.online/micae)
+ [​](https://20000.online/openmicae)
+ [​](https://20000.online/discord-dialogues)
assests/Discord-Dialogues.png ADDED

Git LFS Details

  • SHA256: 78004aced4c6c8d3e0eadfe7a706bc7cc613e4a25a1a61fcd8fde5ed5699a55c
  • Pointer size: 131 Bytes
  • Size of remote file: 330 kB
assests/discord-alpha.png ADDED

Git LFS Details

  • SHA256: 7b9091ab09ac823777a2dfa939a388023b172fd9693e531d412efc662813991a
  • Pointer size: 131 Bytes
  • Size of remote file: 601 kB
dataset_infos.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "default": {
+     "description": "Discord-Dialogues is a large-scale dataset of anonymized Discord conversations formatted for ChatML. It includes mixed single- and multi-turn exchanges between two human participants, cleaned of bots, links, embeds, commands, ToS breaking content, and duplicate messages—primarily in English, suitable for fine-tuning conversational AI models.",
+     "citation": "@misc{discord-dialogues-2025,\n title = {Discord-Dialogues},\n author = {mookiezi},\n year = {2025},\n url = {https://huggingface.co/datasets/mookiezi/Discord-Dialogues}\n}",
+     "homepage": "https://huggingface.co/datasets/mookiezi/Discord-Dialogues",
+     "license": "Apache License 2.0",
+     "features": {
+       "text": { "dtype": "string", "_type": "Value" },
+       "tokens": { "dtype": "int64", "_type": "Value" },
+       "turns": { "dtype": "int64", "_type": "Value" },
+       "characters": { "dtype": "int64", "_type": "Value" },
+       "words": { "dtype": "int64", "_type": "Value" }
+     },
+     "splits": {
+       "train": {
+         "name": "train",
+         "num_bytes": 362022690,
+         "num_examples": 7546294,
+         "dataset_name": "default"
+       }
+     },
+     "download_size": 362022690,
+     "dataset_size": 362022690,
+     "size_in_bytes": 362022690,
+     "data_files": {
+       "train": [{ "filename": "train.parquet" }]
+     }
+   }
+ }
tokens.log ADDED
@@ -0,0 +1,135 @@
+ Stats for text:
+ min: 7
+ max: 5979
+ mean: 33.02200325086725
+ median: 29.0
+ std: 17.390580671916503
+ skew: 26.456841814125784
+ kurt: 7487.549682758939
+ count: 7546294
+ sum: 249193745
+ 99.9%: 152.0
+ 1%: 15.0
+ 2%: 16.0
+ 3%: 16.0
+ 4%: 17.0
+ 5%: 17.0
+ 6%: 18.0
+ 7%: 18.0
+ 8%: 18.0
+ 9%: 19.0
+ 10%: 19.0
+ 11%: 19.0
+ 12%: 19.0
+ 13%: 20.0
+ 14%: 20.0
+ 15%: 20.0
+ 16%: 20.0
+ 17%: 21.0
+ 18%: 21.0
+ 19%: 21.0
+ 20%: 21.0
+ 21%: 22.0
+ 22%: 22.0
+ 23%: 22.0
+ 24%: 22.0
+ 25%: 22.0
+ 26%: 23.0
+ 27%: 23.0
+ 28%: 23.0
+ 29%: 23.0
+ 30%: 24.0
+ 31%: 24.0
+ 32%: 24.0
+ 33%: 24.0
+ 34%: 24.0
+ 35%: 25.0
+ 36%: 25.0
+ 37%: 25.0
+ 38%: 25.0
+ 39%: 26.0
+ 40%: 26.0
+ 41%: 26.0
+ 42%: 26.0
+ 43%: 27.0
+ 44%: 27.0
+ 45%: 27.0
+ 46%: 27.0
+ 47%: 28.0
+ 48%: 28.0
+ 49%: 28.0
+ 50%: 29.0
+ 51%: 29.0
+ 52%: 29.0
+ 53%: 29.0
+ 54%: 30.0
+ 55%: 30.0
+ 56%: 30.0
+ 57%: 31.0
+ 58%: 31.0
+ 59%: 31.0
+ 60%: 32.0
+ 61%: 32.0
+ 62%: 32.0
+ 63%: 33.0
+ 64%: 33.0
+ 65%: 34.0
+ 66%: 34.0
+ 67%: 34.0
+ 68%: 35.0
+ 69%: 35.0
+ 70%: 36.0
+ 71%: 36.0
+ 72%: 37.0
+ 73%: 37.0
+ 74%: 38.0
+ 75%: 38.0
+ 76%: 39.0
+ 77%: 39.0
+ 78%: 40.0
+ 79%: 41.0
+ 80%: 42.0
+ 81%: 42.0
+ 82%: 43.0
+ 83%: 44.0
+ 84%: 45.0
+ 85%: 46.0
+ 86%: 47.0
+ 87%: 48.0
+ 88%: 49.0
+ 89%: 51.0
+ 90%: 52.0
+ 91%: 54.0
+ 92%: 56.0
+ 93%: 58.0
+ 94%: 60.0
+ 95%: 64.0
+ 96%: 68.0
+ 97%: 73.0
+ 98%: 80.0
+ 99%: 95.0
+ 100%: 5979.0
+ total_chars: 1291480299
+ total_words: 145887976
+ avg_chars: 171.14099967480726
+ avg_words: 19.332400248386826
+ avg_chars_per_word: 8.852547923483426
+ avg_chars_per_sample: 171.14099967480726
+ avg_words_per_sample: 19.332400248386826
+ tokens_per_char: 0.19295202969255673
+ bin_0-8: 1
+ bin_8-16: 110310
+ bin_16-32: 4382094
+ bin_32-64: 2674780
+ bin_64-128: 360401
+ bin_128-256: 18083
+ bin_256-384: 417
+ bin_384-512: 75
+ bin_512-768: 78
+ bin_768-1024: 30
+ bin_1024-2048: 18
+ bin_2048-4096: 3
+ assistant_blocks: 9341891
+
+ Total tokens across all columns: 249193745
+ Total assistant blocks: 9341891
tokenstats.txt ADDED
@@ -0,0 +1,135 @@
+ Stats for text:
+ min: 8
+ max: 5979
+ mean: 33.024094354044
+ median: 29.0
+ std: 17.402215788976577
+ skew: 26.64223385581348
+ kurt: 7506.831274998159
+ count: 7543842
+ sum: 249128550
+ 99.9%: 153.0
+ 1%: 15.0
+ 2%: 16.0
+ 3%: 16.0
+ 4%: 17.0
+ 5%: 17.0
+ 6%: 18.0
+ 7%: 18.0
+ 8%: 18.0
+ 9%: 19.0
+ 10%: 19.0
+ 11%: 19.0
+ 12%: 19.0
+ 13%: 20.0
+ 14%: 20.0
+ 15%: 20.0
+ 16%: 20.0
+ 17%: 21.0
+ 18%: 21.0
+ 19%: 21.0
+ 20%: 21.0
+ 21%: 22.0
+ 22%: 22.0
+ 23%: 22.0
+ 24%: 22.0
+ 25%: 22.0
+ 26%: 23.0
+ 27%: 23.0
+ 28%: 23.0
+ 29%: 23.0
+ 30%: 24.0
+ 31%: 24.0
+ 32%: 24.0
+ 33%: 24.0
+ 34%: 24.0
+ 35%: 25.0
+ 36%: 25.0
+ 37%: 25.0
+ 38%: 25.0
+ 39%: 26.0
+ 40%: 26.0
+ 41%: 26.0
+ 42%: 26.0
+ 43%: 27.0
+ 44%: 27.0
+ 45%: 27.0
+ 46%: 27.0
+ 47%: 28.0
+ 48%: 28.0
+ 49%: 28.0
+ 50%: 29.0
+ 51%: 29.0
+ 52%: 29.0
+ 53%: 29.0
+ 54%: 30.0
+ 55%: 30.0
+ 56%: 30.0
+ 57%: 31.0
+ 58%: 31.0
+ 59%: 31.0
+ 60%: 32.0
+ 61%: 32.0
+ 62%: 32.0
+ 63%: 33.0
+ 64%: 33.0
+ 65%: 34.0
+ 66%: 34.0
+ 67%: 34.0
+ 68%: 35.0
+ 69%: 35.0
+ 70%: 36.0
+ 71%: 36.0
+ 72%: 37.0
+ 73%: 37.0
+ 74%: 38.0
+ 75%: 38.0
+ 76%: 39.0
+ 77%: 39.0
+ 78%: 40.0
+ 79%: 41.0
+ 80%: 42.0
+ 81%: 42.0
+ 82%: 43.0
+ 83%: 44.0
+ 84%: 45.0
+ 85%: 46.0
+ 86%: 47.0
+ 87%: 48.0
+ 88%: 49.0
+ 89%: 51.0
+ 90%: 52.0
+ 91%: 54.0
+ 92%: 56.0
+ 93%: 58.0
+ 94%: 60.0
+ 95%: 64.0
+ 96%: 68.0
+ 97%: 73.0
+ 98%: 80.0
+ 99%: 95.0
+ 100%: 5979.0
+ total_chars: 1290998934
+ total_words: 145717457
+ avg_chars: 171.13281720375375
+ avg_words: 19.316080188317837
+ avg_chars_per_word: 8.85960378789756
+ avg_chars_per_sample: 171.13281720375375
+ avg_words_per_sample: 19.316080188317837
+ tokens_per_char: 0.19297347460087058
+ bin_0-8: 0
+ bin_8-16: 109538
+ bin_16-32: 4381031
+ bin_32-64: 2674243
+ bin_64-128: 360330
+ bin_128-256: 18072
+ bin_256-384: 418
+ bin_384-512: 78
+ bin_512-768: 77
+ bin_768-1024: 30
+ bin_1024-2048: 17
+ bin_2048-4096: 4
+ assistant_blocks: 9339690
+
+ Total tokens across all columns: 249128550
+ Total assistant blocks: 9339690
train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fe52210778f5d71724664d9c365a71599913f93c30cc60df8e674dd3c45c08ca
+ size 362018517