Update README.md
README.md (changed)

@@ -14,4 +14,77 @@ configs:
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-generation
language:
- vi
tags:
- social media
pretty_name: ViSoBERT
size_categories:
- 10M<n<100M
---

# Dataset Card for ViSoBERT

## Dataset Description

- **Repository:** https://huggingface.co/uitnlp/visobert
- **Paper:** [ViSoBERT: A Pre-Trained Language Model for Vietnamese Social Media Text Processing](https://aclanthology.org/2023.emnlp-main.315/)

### Dataset Summary

ViSoBERT is the pre-training corpus for the ViSoBERT model. It contains Vietnamese social media texts collected from Facebook, TikTok, and YouTube between January 2020 and December 2022.
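
As a quick orientation, the sketch below loads the ViSoBERT model that was pre-trained on this corpus. It assumes the checkpoint at the repository linked above works with the standard `transformers` Auto classes (the model follows the XLM-R architecture, per the paper); consult the model repository for the authoritative usage instructions.

```python
# Hedged sketch: load the ViSoBERT checkpoint referenced in this card.
# Assumes the repository works with the generic transformers Auto classes.
from transformers import AutoModel, AutoTokenizer

model_id = "uitnlp/visobert"  # model repository linked above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Encode one social media post and inspect the contextual embeddings.
inputs = tokenizer("cười thế này iz ))", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```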

### Languages

The language in the dataset is Vietnamese (`vi`).

## Dataset Structure

### Dataset Instances

An example from the `train` split looks as follows:

```json
{
  "text": "cười thế này iz ))"
}
```
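
The corpus can be loaded with the `datasets` library. The snippet below is a minimal sketch; the repository id is a placeholder and should be replaced with this dataset's actual Hub id.

```python
# Hedged sketch: load the train split of this corpus with the datasets library.
# "<user>/<dataset-name>" is a placeholder for this dataset's Hub repository id.
from datasets import load_dataset

ds = load_dataset("<user>/<dataset-name>", split="train")

print(ds)             # Dataset({features: ['text'], num_rows: ...})
print(ds[0]["text"])  # a single social media post, like the example above
```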

### Data Fields

Each instance has a single field:

- `text`: the raw social media text, stored as a `string` feature.
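
Because the corpus falls in the 10M<n<100M size category declared in the metadata, streaming may be preferable to a full download. A minimal sketch, again with a placeholder repository id:

```python
# Hedged sketch: iterate over the corpus without downloading every parquet shard.
# "<user>/<dataset-name>" is a placeholder for this dataset's Hub repository id.
from datasets import load_dataset

stream = load_dataset("<user>/<dataset-name>", split="train", streaming=True)

for i, example in enumerate(stream):
    print(example["text"])  # the single `text` field described above
    if i == 4:              # stop after five examples
        break
```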

## Citation

**BibTeX:**

```bibtex
@inproceedings{nguyen-etal-2023-visobert,
    title = "{V}i{S}o{BERT}: A Pre-Trained Language Model for {V}ietnamese Social Media Text Processing",
    author = "Nguyen, Nam  and
      Phan, Thang  and
      Nguyen, Duc-Vu  and
      Nguyen, Kiet",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.315",
    pages = "5191--5207",
    abstract = "English and Chinese, known as resource-rich languages, have witnessed the strong development of transformer-based language models for natural language processing tasks. Although Vietnam has approximately 100M people speaking Vietnamese, several pre-trained models, e.g., PhoBERT, ViBERT, and vELECTRA, performed well on general Vietnamese NLP tasks, including POS tagging and named entity recognition. These pre-trained language models are still limited to Vietnamese social media tasks. In this paper, we present the first monolingual pre-trained language model for Vietnamese social media texts, ViSoBERT, which is pre-trained on a large-scale corpus of high-quality and diverse Vietnamese social media texts using XLM-R architecture. Moreover, we explored our pre-trained model on five important natural language downstream tasks on Vietnamese social media texts: emotion recognition, hate speech detection, sentiment analysis, spam reviews detection, and hate speech spans detection. Our experiments demonstrate that ViSoBERT, with far fewer parameters, surpasses the previous state-of-the-art models on multiple Vietnamese social media tasks. Our ViSoBERT model is available only for research purposes. Disclaimer: This paper contains actual comments on social networks that might be construed as abusive, offensive, or obscene.",
}
```

**APA:**

Nguyen, N., Phan, T., Nguyen, D.-V., & Nguyen, K. (2023). ViSoBERT: A pre-trained language model for Vietnamese social media text processing. In H. Bouamor, J. Pino, & K. Bali (Eds.), *Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing* (pp. 5191-5207). Singapore: Association for Computational Linguistics. https://aclanthology.org/2023.emnlp-main.315

## Dataset Card Authors

[@phucdev](https://github.com/phucdev)