---
license: apache-2.0
---

Chinese Wikipedia (sourced from [here](https://huggingface.co/datasets/m-a-p/MAP-CC)), cleaned by the following steps:

- All text in Traditional Chinese is converted to Simplified Chinese using zhconv.
- Duplicates are removed.
- Low-quality text is heuristically removed based on the proportion of special characters.
- Qwen2.5-32B-Instruct is used to generate language-quality annotations (on a scale of 1-5) for 398K Chinese samples and 250K English samples. An XLM-RoBERTa-large classifier is trained with regression on these annotations, and any document receiving a score of 1 or 2 from the classifier is removed.

### Data Statistics

Number of samples: 2,021,184. Size of parquet files: 3 GB.
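As a minimal sketch of the heuristic filtering step above: the Traditional-to-Simplified conversion is a single call to zhconv's `convert(text, 'zh-hans')`, and the special-character filter can be implemented as below. The 0.3 threshold here is an illustrative assumption, not the cutoff actually used for this dataset.

```python
def special_char_ratio(text: str) -> float:
    # Proportion of characters that are neither alphanumeric
    # (str.isalnum covers CJK ideographs) nor whitespace.
    if not text:
        return 0.0
    specials = sum(1 for ch in text if not (ch.isalnum() or ch.isspace()))
    return specials / len(text)

def keep(text: str, threshold: float = 0.3) -> bool:
    # Threshold of 0.3 is a hypothetical value for illustration only.
    return special_char_ratio(text) <= threshold
```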