Update README.md
README.md CHANGED

@@ -20,7 +20,7 @@ datasets:
 base_model:
 - Qwen/Qwen3-4B
 ---
-# ToxiFrench: Benchmarking and
+# ToxiFrench: Benchmarking and Enhancing Language Models via CoT Fine-Tuning for French Toxicity Detection

 <!-- Badges/Tags -->
 [](https://axeldlv00.github.io/ToxiFrench/)
@@ -28,7 +28,7 @@ base_model:
 [](https://huggingface.co/datasets/Naela00/ToxiFrenchFinetuning)
 [](./LICENSE)

-**
+**Authors:** Axel Delaval, Shujian Yang, Haicheng Wang, Han Qiu, Jialiang Lu

 **Affiliations:** École Polytechnique & Shanghai Jiao Tong University (SJTU)

@@ -52,7 +52,7 @@ base_model:

 ## Abstract

-
+Detecting toxic content using language models is crucial yet challenging. While substantial progress has been made in English, toxicity detection in French remains underdeveloped, primarily due to the lack of culturally relevant, large-scale datasets. In this work, we introduce TOXIFRENCH, a new public benchmark of 53,622 French online comments, constructed via a semi-automated annotation pipeline that reduces manual labeling to only 10% through high-confidence LLM-based pre-annotation and human verification. Then, we benchmark a broad range of models and uncover a counterintuitive insight: Small Language Models (SLMs) outperform many larger models in robustness and generalization under the toxicity detection task. Motivated by this finding, we propose a novel Chain-of-Thought (CoT) fine-tuning strategy using a dynamic weighted loss that progressively emphasizes the model's final decision, significantly improving faithfulness. Our fine-tuned 4B model achieves state-of-the-art performance, improving its F1 score by 13% over its baseline and outperforming LLMs such as GPT-4o and Gemini-2.5. Further evaluation on a cross-lingual toxicity benchmark demonstrates strong multilingual ability, suggesting that our methodology can be effectively extended to other languages and safety-critical classification tasks.

 ---

@@ -224,8 +224,8 @@ If you use this project in your research, please cite it as follows:

 ```bibtex
 @misc{delaval2025toxifrench,
-  title={ToxiFrench: Benchmarking and
-  author={Axel Delaval},
+  title={ToxiFrench: Benchmarking and Enhancing Language Models via CoT Fine-Tuning for French Toxicity Detection},
+  author={Axel Delaval and Shujian Yang and Haicheng Wang and Han Qiu and Jialiang Lu},
   year={2025},
 }
 ```
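The new abstract mentions a dynamic weighted loss that progressively emphasizes the model's final decision during CoT fine-tuning. The commit does not show the actual implementation, so the snippet below is only a minimal sketch of what such a loss could look like. The `decision_mask`, `progress`, and `max_decision_weight` names are hypothetical illustrations, not identifiers from the ToxiFrench codebase, and the linear weight ramp is an assumption.

```python
# Hedged sketch (not the authors' code): a per-token weighted cross-entropy where
# the tokens of the final verdict are weighted more and more heavily as training
# progresses, while the chain-of-thought tokens keep weight 1.0.
import torch
import torch.nn.functional as F


def cot_weighted_loss(
    logits: torch.Tensor,         # (batch, seq_len, vocab), already aligned with labels
    labels: torch.Tensor,         # (batch, seq_len), -100 marks ignored positions
    decision_mask: torch.Tensor,  # (batch, seq_len) bool, True on final-decision tokens (assumed)
    progress: float,              # training progress in [0, 1] (assumed schedule input)
    max_decision_weight: float = 5.0,  # assumed cap for the decision-token weight
) -> torch.Tensor:
    # Per-token cross-entropy; cross_entropy expects (batch, vocab, seq_len).
    per_token = F.cross_entropy(
        logits.transpose(1, 2), labels, ignore_index=-100, reduction="none"
    )  # (batch, seq_len)

    # Linear ramp: decision tokens go from weight 1.0 to max_decision_weight.
    decision_weight = 1.0 + progress * (max_decision_weight - 1.0)
    weights = torch.where(
        decision_mask,
        torch.full_like(per_token, decision_weight),
        torch.ones_like(per_token),
    )

    # Average over valid (non-ignored) positions only.
    valid = (labels != -100).float()
    return (per_token * weights * valid).sum() / (weights * valid).sum().clamp(min=1.0)
```

Keeping the reasoning tokens at a constant weight while ramping up only the decision tokens is one plausible way to preserve CoT supervision early in training and shift emphasis toward the final label later, which matches the abstract's description in spirit; the paper's exact weighting schedule may differ.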