---
license: cc-by-4.0
tags:
- sentiment-classification
- telugu
- xlm-r
- multilingual
- baseline
language: te
datasets:
- DSL-13-SRMAP/TeSent_Benchmark-Dataset
model_name: XLM-R_WOR
---

# XLM-R_WOR: XLM-RoBERTa Telugu Sentiment Classification Model (Without Rationale)

## Model Overview

**XLM-R_WOR** is a Telugu sentiment classification model based on **XLM-RoBERTa (XLM-R)**, a general-purpose multilingual transformer developed by Facebook AI.  
The "WOR" in the model name stands for "**Without Rationale**", indicating that this model is trained only with sentiment labels from the TeSent_Benchmark-Dataset and **does not use human-annotated rationales**.

---

## Model Details

- **Architecture:** XLM-RoBERTa (transformer-based, multilingual)
- **Pretraining Data:** 2.5TB of filtered Common Crawl data across 100+ languages, including Telugu
- **Pretraining Objective:** Masked Language Modeling (MLM), no Next Sentence Prediction (NSP)
- **Fine-tuning Data:** [TeSent_Benchmark-Dataset](https://huggingface.co/datasets/dsl-13-srmap/tesent_benchmark-dataset), using only sentence-level sentiment labels (positive, negative, neutral); rationale annotations are disregarded
- **Task:** Sentence-level sentiment classification (3-way)
- **Rationale Usage:** **Not used** during training or inference ("WOR" = Without Rationale)
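
For concreteness, the listing below sketches a label-only fine-tuning run matching the setup above. The column names (`Content`, `Label`), the label order, the split names, and all hyperparameters are illustrative assumptions, not the reported training configuration.

```python
# Hedged fine-tuning sketch for the label-only (WOR) setup. Column names,
# label order, split names, and hyperparameters are assumptions for illustration.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

labels = ["negative", "neutral", "positive"]           # assumed label order
label2id = {name: i for i, name in enumerate(labels)}

dataset = load_dataset("DSL-13-SRMAP/TeSent_Benchmark-Dataset")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def preprocess(batch):
    enc = tokenizer(batch["Content"], truncation=True, max_length=128)
    # If Label is already stored as an integer id, drop this mapping.
    enc["labels"] = [label2id[name] for name in batch["Label"]]
    return enc

tokenized = dataset.map(preprocess, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id=label2id,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="xlmr-wor", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"],  # assumes a "train" split
    tokenizer=tokenizer,               # enables dynamic padding via the default collator
)
trainer.train()
```

Because the Trainer drops unused dataset columns by default, any rationale annotations present in the dataset are ignored automatically in this setup, which is exactly the "WOR" condition.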

---

## Intended Use

- **Primary Use:** Benchmarking Telugu sentiment classification on the TeSent_Benchmark-Dataset, especially as a **baseline** for models trained without rationales
- **Research Setting:** Suitable for cross-lingual and multilingual NLP research, as well as explainable AI in low-resource settings

---

## Why XLM-R?

XLM-R is designed for cross-lingual understanding and contextual modeling, offering strong transfer learning and better downstream performance than mBERT. When fine-tuned on in-language Telugu data, XLM-R delivers solid results for sentiment analysis.  
However, Telugu-specific models like MuRIL or L3Cube-Telugu-BERT may offer better cultural and linguistic alignment for purely Telugu tasks.

---

## Performance and Limitations

**Strengths:**  
- Strong transfer learning and contextual modeling for multilingual NLP
- Good performance on Telugu sentiment analysis when fine-tuned on in-language data
- Useful as a cross-lingual and multilingual baseline

**Limitations:**  
- May be outperformed by Telugu-specific models for culturally nuanced tasks
- Requires sufficient labeled Telugu data for best performance
- Since rationales are not used, the model cannot provide explicit explanations for its predictions

---

## Training Data

- **Dataset:** [TeSent_Benchmark-Dataset](https://huggingface.co/datasets/dsl-13-srmap/tesent_benchmark-dataset)
- **Data Used:** Only the **Content** (Telugu sentence) and **Label** (sentiment label) columns; **rationale** annotations are ignored for XLM-R_WOR training
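
A short sketch of this column selection is shown below; the exact column and split names are assumptions, so check the dataset viewer for the real schema.

```python
# Hedged sketch: keep only the columns used to train XLM-R_WOR, dropping the
# rationale annotations. Column and split names are assumed, not verified.
from datasets import load_dataset

ds = load_dataset("DSL-13-SRMAP/TeSent_Benchmark-Dataset")
keep = {"Content", "Label"}
drop = [c for c in ds["train"].column_names if c not in keep]  # assumes a "train" split
ds = ds.remove_columns(drop)
print(ds)  # each split should now expose only Content and Label
```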

---

## Language Coverage

- **Language:** Telugu (`te`)
- **Model Scope:** This implementation and evaluation focus strictly on Telugu sentiment classification

---

## Citation and More Details

For detailed experimental setup, evaluation metrics, and comparisons with rationale-based models, **please refer to our paper**.



---

## License

Released under [CC BY 4.0](LICENSE).