---
dataset_info:
  features:
  - name: xml
    dtype: string
  - name: proceedings
    dtype: string
  - name: year
    dtype: string
  - name: url
    dtype: string
  - name: language documentation
    dtype: string
  - name: has non-English?
    dtype: string
  - name: topics
    dtype: string
  - name: language coverage
    dtype: string
  - name: title
    dtype: string
  - name: abstract
    dtype: string
  splits:
  - name: train
    num_bytes: 452838
    num_examples: 310
  download_size: 231933
  dataset_size: 452838
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
task_categories:
- text-classification
---

# The State of Multilingual LLM Safety Research: From Measuring the Language Gap to Mitigating It
We present a comprehensive analysis of the linguistic diversity of LLM safety research, highlighting the English-centric nature of the field. Through a systematic review of nearly 300 publications from 2020–2024 across major NLP conferences and workshops at *ACL, we identify a significant and growing language gap in LLM safety research, with even high-resource non-English languages receiving minimal attention. 

- **Paper:** https://arxiv.org/abs/2505.24119

## Dataset Description

The current version of the dataset consists of annotations for conference and workshop papers collected from *ACL venues between 2020 and 2024, using the keywords "safe" and "safety" in abstracts to identify relevant literature. The data source is https://github.com/acl-org/acl-anthology/tree/master/data, and the papers are curated by Zheng-Xin Yong, Beyza Ermis, Marzieh Fadaee, and Julia Kreutzer.
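
As a quick-start sketch, the snippet below loads the single `train` split with the 🤗 `datasets` library and converts it to a pandas DataFrame. The repository path is a placeholder (the Hub ID is not stated in this card), so substitute the actual path of this dataset.

```python
from datasets import load_dataset

# Placeholder repo ID -- replace with the actual Hugging Face Hub path of this dataset.
REPO_ID = "ORG_NAME/multilingual-llm-safety-papers"

# The card defines a single "train" split with 310 annotated papers.
ds = load_dataset(REPO_ID, split="train")
print(ds)              # features: xml, proceedings, year, url, ...
print(ds[0]["title"])  # title of the first annotated paper

# Optional: work with the annotations as a pandas DataFrame.
df = ds.to_pandas()
print(df.columns.tolist())
```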

## Dataset Structure

- `xml`: XML record of the paper from the ACL Anthology
- `proceedings`: proceedings of the conference or workshop in which the paper was published
- `year`: year of publication
- `url`: paper URL on the ACL Anthology
- `language documentation`: whether the paper explicitly reports the languages studied in the work ("x" indicates that the languages are not reported)
- `has non-English?`: whether the work covers a non-English language (0: English-only, 1: at least one non-English language)
- `topics`: topic of the safety work ('jailbreaking attacks'; 'toxicity, bias'; 'hallucination, factuality'; 'privacy'; 'policy'; 'general safety, LLM alignment'; 'others')
- `language coverage`: languages covered in the work (null means English-only)
- `title`: title of the paper
- `abstract`: abstract of the paper
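
For illustration, here is a minimal sketch of how these fields might be used, assuming the dataset has been loaded into a pandas DataFrame `df` as in the snippet above. Note that the schema stores every column as a string, so the `has non-English?` flag is compared against the string `"1"`.

```python
# Share of annotated papers that cover at least one non-English language.
non_english_share = (df["has non-English?"] == "1").mean()
print(f"Papers with non-English coverage: {non_english_share:.1%}")

# Distribution of safety topics across the annotated papers.
print(df["topics"].value_counts())

# Papers that do not explicitly document the languages they study ("x" flag).
undocumented = df[df["language documentation"] == "x"]
print(len(undocumented), "papers do not report the languages studied")
```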

## Citation

```bibtex
@article{yong2025safetysurvey,
  title={The State of Multilingual LLM Safety Research: From Measuring the Language Gap to Mitigating It}, 
  author={Zheng-Xin Yong and Beyza Ermis and Marzieh Fadaee and Stephen H. Bach and Julia Kreutzer},
  year={2025},
  journal={arXiv preprint arXiv:2505.24119},
}
```

## Dataset Card Authors

- [Zheng-Xin Yong](https://yongzx.github.io/)
- [Beyza Ermis](https://scholar.google.com/citations?user=v2cMiCAAAAAJ&hl=en)
- [Marzieh Fadaee](https://marziehf.github.io/)
- [Stephen H. Bach](https://cs.brown.edu/people/sbach/)
- [Julia Kreutzer](https://juliakreutzer.github.io/)