---
license: apache-2.0
language:
- ru
metrics:
- f1
base_model:
- ai-forever/ruRoberta-large
pipeline_tag: text-classification
library_name: transformers
tags:
- target-sentiment-analysis
- sentiment-analysis
- classification
- news
---
# Model Card: ruRoberta-large for Target Sentiment Analysis (RuSentNE-2023)
## Model Details
This model is a modified version of [ruRoberta-large](https://huggingface.co/ai-forever/ruRoberta-large) for Target Sentiment Analysis (TSA), trained on data from the [RuSentNE-2023 collection](https://github.com/dialogue-evaluation/RuSentNE-evaluation).
The model is designed to analyze news texts in Russian.
Given an input sentence and a specified entity (target object) within it, the model determines the sentiment directed toward that entity and classifies it into one of the following categories: `positive`, `negative`, or `neutral`.
### Model Description
- **Model type:** [ruRoberta-large](https://huggingface.co/ai-forever/ruRoberta-large)
- **Language(s) (NLP):** Russian
- **License:** [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Uses
### Direct Use
This is a ruRoberta-large model with an added linear classification head.
1. Load the model and tokenizer
```python
import torch

from model import TargetSentimentClassifier

# Build the classifier on top of ruRoberta-large and load the fine-tuned weights.
model = TargetSentimentClassifier(
    model_name="sberbank-ai/ruRoberta-large",
    use_multi_sample_dropout=True,
    device="cuda"
)
model.load_state_dict(torch.load("pytorch_model.bin", map_location="cuda"))
```
2. Predict sentiment for a named entity in a sentence
```python
# Sentence, the target entity mentioned in it, and the entity's type.
text = "Джеймс «Бадди» Макгирт ... спортсмен остановить бой..."
target = "спортсмен"
entity_type = "PROFESSION"

prediction = model.predict(text, target, entity_type)
print(prediction)  # Output: 0 (neutral), 1 (positive) or 2 (negative)
```
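The fine-tuned weights file `pytorch_model.bin` loaded in step 1 can also be fetched from the Hugging Face Hub. The snippet below is an illustrative sketch; the repository id is a placeholder rather than this model's actual id.
```python
from huggingface_hub import hf_hub_download

# Placeholder repo id -- replace with this model's actual Hub repository.
weights_path = hf_hub_download(repo_id="<namespace>/<model-name>", filename="pytorch_model.bin")

# Pass `weights_path` to torch.load(...) in step 1 instead of the bare filename.
```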
### Input Format
The input sentence must include a marked entity using the following format:
```text
ENTITY_TEXT <|ENTITY_TAG|>
```
Example:
```text
Джеймс «Бадди» Макгирт ... спортсмен <|PROFESSION|> остановить бой...
```
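If the marked string has to be built manually (for example, when feeding the underlying model directly instead of calling `predict`), a simple helper can insert the tag after the first occurrence of the target. The function below is an illustrative sketch, not part of the released code, and assumes the target string occurs verbatim in the sentence.
```python
def mark_target(text: str, target: str, entity_type: str) -> str:
    # Insert the <|ENTITY_TAG|> marker after the first occurrence of the target.
    return text.replace(target, f"{target} <|{entity_type}|>", 1)

marked = mark_target(
    "Джеймс «Бадди» Макгирт ... спортсмен остановить бой...",
    "спортсмен",
    "PROFESSION",
)
print(marked)
# Джеймс «Бадди» Макгирт ... спортсмен <|PROFESSION|> остановить бой...
```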
### Labels
The model predicts one of the following labels:
| Label | Description |
|-------------------|-------|
| 0 | Neutral |
| 1 | Positive |
| 2 | Negative |
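For readability, the integer prediction can be mapped back to a label string; the dictionary below simply mirrors the table above.
```python
# Mapping from model output ids to label strings (mirrors the table above).
id2label = {0: "neutral", 1: "positive", 2: "negative"}

prediction = model.predict(text, target, entity_type)
print(id2label[prediction])  # e.g. "neutral"
```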
## Training Details
### Training Data
The model was trained on the data published for the RuSentNE-2023 competition, available in the following repository:
https://github.com/dialogue-evaluation/RuSentNE-evaluation
To enlarge the training sample, data from the Sentiment Analysis in Russian dataset was added:
https://www.kaggle.com/c/sentiment-analysis-in-russian/overview
Additionally, to expand the dataset with entity-level sentiment annotations, an automatic annotation algorithm was applied:
https://github.com/RyuminV01/markup-tsa-news/tree/main
This algorithm generates additional labeled data by combining named entity recognition with sentiment alignment in Russian news texts.
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
The direct link to the `test` evaluation data:
https://github.com/dialogue-evaluation/RuSentNE-evaluation/blob/main/final_data.csv
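The file can be inspected locally, for example with pandas. The exact delimiter and column names are not documented here, so the snippet below is only a sketch and may need adjusting to the actual file.
```python
import pandas as pd

# Adjust `sep` if the file turns out to be comma- rather than tab-separated.
test_df = pd.read_csv("final_data.csv", sep="\t")
print(test_df.head())
```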
#### Metrics
For the model evaluation, two metrics were used:
1. `F1_posneg_macro` -- macro F1-measure over the `positive` and `negative` classes;
2. `F1_macro` -- macro F1-measure over the `positive`, `negative`, and `neutral` classes.
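A minimal sketch of how both metrics can be computed with scikit-learn, assuming `y_true` and `y_pred` are lists of the integer labels described above (0 = neutral, 1 = positive, 2 = negative):
```python
from sklearn.metrics import f1_score

# Toy labels: 0 = neutral, 1 = positive, 2 = negative.
y_true = [0, 1, 2, 1, 0, 2]
y_pred = [0, 1, 2, 0, 0, 2]

# Macro F1 over the positive and negative classes only.
f1_posneg_macro = f1_score(y_true, y_pred, labels=[1, 2], average="macro")

# Macro F1 over all three classes.
f1_macro = f1_score(y_true, y_pred, average="macro")

print(f"F1_posneg_macro = {f1_posneg_macro:.4f}, F1_macro = {f1_macro:.4f}")
```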
### Results
The `test` evaluation for this model, as reported on the [competition results page](https://codalab.lisn.upsaclay.fr/competitions/9538#results), demonstrates the following performance:
| Metric | Score (%) |
|-------------------|-------|
| F1_posneg_macro | 61.84 |
| F1_macro | 70.38 |