---
license: cc-by-4.0
tags:
- sentiment-classification
- telugu
- bert
- l3cube
- baseline
language: te
datasets:
- DSL-13-SRMAP/TeSent_Benchmark-Dataset
model_name: Telugu-BERT_WOR
---
# Telugu-BERT_WOR: L3Cube-Telugu-BERT Telugu Sentiment Classification Model (Without Rationale)

## Model Overview
Telugu-BERT_WOR is a Telugu sentiment classification model based on L3Cube-Telugu-BERT, a transformer-based BERT model pre-trained by the L3Cube Pune research group specifically on Telugu text (Telugu OSCAR, Wikipedia, and news articles) with a Masked Language Modeling (MLM) objective.
"WOR" in the model name stands for "Without Rationale", meaning this model is trained only with sentiment labels from the TeSent_Benchmark-Dataset and does not use human-annotated rationales.
## Model Details
- Architecture: L3Cube-Telugu-BERT (BERT-base, pre-trained on Telugu)
- Pretraining Data: Telugu OSCAR, Wikipedia, and news articles
- Pretraining Objective: Masked Language Modeling (MLM)
- Fine-tuning Data: TeSent_Benchmark-Dataset, using only sentence-level sentiment labels (positive, negative, neutral); rationale annotations are disregarded
- Task: Sentence-level sentiment classification (3-way)
- Rationale Usage: Not used during training or inference ("WOR" = Without Rationale)
## Intended Use
- Primary Use: Benchmarking Telugu sentiment classification on the TeSent_Benchmark-Dataset, especially as a baseline for models trained without rationales
- Research Setting: Suited to researchers working on monolingual Telugu text analysis who have sufficient labeled data for fine-tuning
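For illustration, a minimal inference sketch is shown below. Note that the repository id (`DSL-13-SRMAP/Telugu-BERT_WOR`) and the label index order are assumptions, not details confirmed by this card; check the hosted model's config before relying on them.

```python
LABELS = ["negative", "neutral", "positive"]  # assumed label index order

def logits_to_label(scores):
    """Map a sequence of class scores to its sentiment label."""
    return LABELS[max(range(len(scores)), key=scores.__getitem__)]

def predict_sentiment(text, model_id="DSL-13-SRMAP/Telugu-BERT_WOR"):
    """Classify one Telugu sentence. model_id is a hypothetical repo id."""
    # Heavy imports are kept local so the helper above stays dependency-free.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    model.eval()
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        logits = model(**enc).logits[0]
    return logits_to_label(logits.tolist())
```

Separating `logits_to_label` from the model call keeps the label-mapping logic easy to adjust if the checkpoint uses a different id-to-label ordering.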
## Why Telugu-BERT?
Telugu-BERT is tailored for Telugu and excels in capturing the vocabulary, syntax, and semantics of the language. It recognizes nuanced expressions, idioms, and sentiments that are often poorly represented in multilingual models like mBERT and XLM-R.
This makes Telugu-BERT_WOR an excellent choice for sentiment analysis tasks and other Telugu NLP applications requiring strong language-specific representation.
## Performance and Limitations
Strengths:
- Superior understanding of Telugu language specifics compared to multilingual models
- Capable of capturing nuanced and idiomatic expressions in sentiment analysis
- Robust baseline for Telugu sentiment classification tasks
Limitations:
- Applicability limited to Telugu; not suitable for multilingual or cross-lingual tasks
- Requires sufficient labeled Telugu data for best performance
- Since rationales are not used, the model cannot provide explicit explanations for its predictions
## Training Data
- Dataset: TeSent_Benchmark-Dataset
- Data Used: Only the Content (Telugu sentence) and Label (sentiment label) columns; rationale annotations are ignored for Telugu-BERT_WOR training
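The preparation step above can be sketched as follows. This is a hedged sketch, assuming the dataset loads via the `datasets` library and exposes `Content` and `Label` columns as described; the integer id mapping and the `load_tesent` helper name are illustrative assumptions.

```python
LABEL2ID = {"negative": 0, "neutral": 1, "positive": 2}  # assumed id mapping

def encode_example(example):
    """Keep only the Content text and an integer-encoded Label."""
    return {"text": example["Content"], "label": LABEL2ID[example["Label"]]}

def load_tesent(split="train"):
    """Load TeSent for WOR training; rationale columns are dropped."""
    # Lazy import: `datasets` is only needed when actually fetching the data.
    from datasets import load_dataset
    ds = load_dataset("DSL-13-SRMAP/TeSent_Benchmark-Dataset", split=split)
    # remove_columns discards everything not produced by encode_example,
    # which is how rationale annotations are ignored in this setup.
    return ds.map(encode_example, remove_columns=ds.column_names)
```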
## Language Coverage
- Language: Telugu (`te`)
- Model Scope: This implementation and evaluation focus strictly on Telugu sentiment classification
## Citation and More Details
For detailed experimental setup, evaluation metrics, and comparisons with rationale-based models, please refer to our paper.
## License
Released under CC BY 4.0.