Geralt-Targaryen committed on
Commit a5e7570 · verified · 1 Parent(s): 303df38

Update README.md

Files changed (1): README.md (+5 -1)
README.md CHANGED
@@ -8,7 +8,7 @@ High-quality Chinese text from Common Crawl cleaned by the following steps:
  - Documents whose language is identified as non-Chinese by fasttext are removed.
  - All text in Traditional Chinese is converted into Simplified Chinese.
  - Low-quality documents (e.g. boilerplates, advertisements) are heuristically removed based on statistics such as average line length, proportion of special characters, etc.
- - Exact deduplication is performed in buckets of around 100GB of compressed text. We did not deduplicate globally due to memory constraints, and estimate that about 0.03% of the documents are exact duplicates based on small-scale cross-bucket deduplication.
+ - Exact deduplication.
  - Qwen2.5-32B-Instruct is used to generate language quality annotations (on a scale of 1-5) for 9.3M Chinese documents and 9.2M English documents, from which we sample 398K Chinese documents and 250K English documents to balance the label distribution. An XLM-RoBERTa-large classifier is trained with regression on these annotations. Any document receiving a score lower than 4 is removed.
 
  **Details about Model Annotations**
@@ -21,3 +21,7 @@ On 2K samples, we compared the annotation distribution (in percentage) of Qwen2.
  | 72B | 0.3 | 4.7 | 22.9 | 58.1 | 14.1 |
 
  The scores between the two models have a correlation coefficient of 0.75, and manual inspection suggests that both are satisfactory. We eventually chose the 32B model for both its efficiency and its more balanced label distribution.
+
+ **Data Statistics**
+ - Number of samples: 623,807,180
+ - Size: 1.1 TB (2120 parquet files)
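For illustration, the cleaning steps above can be sketched in code. The language filter relies on fasttext; a minimal sketch, assuming the public `lid.176.bin` identification model and an illustrative confidence threshold (neither is specified in the README):

```python
import fasttext

# Public fasttext language-ID model; the file path and the 0.5 threshold
# are illustrative assumptions, not the dataset's actual settings.
model = fasttext.load_model("lid.176.bin")

def is_chinese(text: str, threshold: float = 0.5) -> bool:
    """Return True if fasttext identifies the document as Chinese."""
    # fasttext expects a single line of text, so strip newlines first.
    labels, probs = model.predict(text.replace("\n", " "), k=1)
    return labels[0] == "__label__zh" and probs[0] >= threshold

docs = ["这是一个中文文档。", "This is an English document."]
chinese_docs = [d for d in docs if is_chinese(d)]
```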
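The Traditional-to-Simplified conversion step names no tool; OpenCC is the usual choice and is assumed here:

```python
from opencc import OpenCC  # pip install opencc-python-reimplemented

converter = OpenCC("t2s")  # "t2s" = Traditional -> Simplified

print(converter.convert("漢字簡化"))  # -> 汉字简化
```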
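The heuristic filter is described only by the statistics it uses; a sketch of that kind of rule-based filter, with made-up thresholds:

```python
def looks_low_quality(text: str,
                      min_avg_line_len: float = 10.0,
                      max_special_ratio: float = 0.3) -> bool:
    """Flag documents with very short lines or many special characters.

    Both thresholds are illustrative, not the dataset's actual values.
    """
    lines = [line for line in text.splitlines() if line.strip()]
    if not lines:
        return True
    avg_line_len = sum(len(line) for line in lines) / len(lines)
    # Count characters that are neither alphanumeric (incl. CJK) nor whitespace.
    special = sum(1 for ch in text if not (ch.isalnum() or ch.isspace()))
    return avg_line_len < min_avg_line_len or special / len(text) > max_special_ratio
```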
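Exact deduplication within a bucket amounts to hashing each document and keeping the first occurrence; a sketch of that approach (the README does not specify the hash function):

```python
import hashlib

def dedup_bucket(docs):
    """Yield each document of one bucket once, keyed by a digest of its text."""
    seen = set()
    for doc in docs:
        digest = hashlib.sha256(doc.encode("utf-8")).digest()
        if digest not in seen:
            seen.add(digest)
            yield doc
```

Storing one fixed-size digest per unique document keeps the memory footprint proportional to the bucket size, which is consistent with the README's note that global deduplication was skipped for memory reasons.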
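The quality scorer is a regression head on XLM-RoBERTa-large; in `transformers`, `num_labels=1` turns a sequence-classification head into exactly that. A minimal sketch (the max length and the inference usage are assumptions, and the head below is untrained):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "xlm-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(name)
# num_labels=1 gives a single-output head trained with MSE loss, i.e. regression.
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=1)

def quality_score(text: str) -> float:
    """Predict a 1-5 quality score for one document."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).logits.squeeze().item()

keep = quality_score("一段待打分的中文文本。") >= 4.0  # the README's cutoff
```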
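Since the release consists of plain parquet shards, it can be read with standard tooling; the file pattern below is a placeholder, not the dataset's actual layout:

```python
from datasets import load_dataset

# "data/*.parquet" is a placeholder pattern; point it at the real shards.
ds = load_dataset("parquet", data_files={"train": "data/*.parquet"}, streaming=True)
for example in ds["train"]:
    print(example)
    break
```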