Model Card for Vadim121/ruRoberta-large-tsa-news-ru

Model Details

This model is a fine-tuned version of ruRoberta-large for Targeted Sentiment Analysis (TSA), trained on data from the RuSentNE-2023 collection.

This model is designed to analyze news texts in Russian.

Given an input sentence and a specified entity (target object) within it, this model determines the sentiment directed toward that entity and classifies it into one of the following categories: ['positive', 'negative', 'neutral'].

Model Description

Uses

Direct Use

This is a ruRoberta-large model with an added linear layer for classification.

  1. Load the model and tokenizer:

import torch

from model import TargetSentimentClassifier

model = TargetSentimentClassifier(
    model_name="sberbank-ai/ruRoberta-large",
    use_multi_sample_dropout=True,
    device="cuda"
)
model.load_state_dict(torch.load("pytorch_model.bin", map_location="cuda"))

  2. Predict the sentiment toward a named entity in a sentence:
text = "Джеймс «Бадди» Макгирт ... спортсмен остановить бой..."
target = "спортсмен"
entity_type = "PROFESSION"

prediction = model.predict(text, target, entity_type)
print(prediction)  # Output: 0 (neutral), 1 (positive) or 2 (negative)

Input Format

The input sentence must include a marked entity using the following format:

<en> ENTITY_TEXT <|ENTITY_TAG|> </en>

Example:

Джеймс «Бадди» Макгирт ... <en> спортсмен <|PROFESSION|> </en> остановить бой...
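The entity marker can be constructed programmatically. A minimal sketch of such a helper (the function name `mark_entity` is illustrative and not part of the released code):

```python
def mark_entity(text: str, target: str, entity_type: str) -> str:
    """Wrap the first occurrence of `target` in the <en> ... <|TAG|> </en> marker."""
    marked = f"<en> {target} <|{entity_type}|> </en>"
    return text.replace(target, marked, 1)


sentence = "Врачи спасли пациента"
print(mark_entity(sentence, "Врачи", "PROFESSION"))
# <en> Врачи <|PROFESSION|> </en> спасли пациента
```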

Labels

The model predicts one of the following labels:

| Label | Description |
|-------|-------------|
| 0     | Neutral     |
| 1     | Positive    |
| 2     | Negative    |
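The numeric predictions can be decoded into label names with a simple lookup; a sketch assuming the 0/1/2 encoding from the table above (the `ID2LABEL` mapping and `decode_prediction` helper are illustrative, not part of the released code):

```python
# Label encoding used by the model (see the table above)
ID2LABEL = {0: "neutral", 1: "positive", 2: "negative"}


def decode_prediction(pred: int) -> str:
    """Map a raw class index returned by model.predict() to a label name."""
    return ID2LABEL[pred]


print(decode_prediction(1))  # positive
```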

Training Details

Training Data

The model was trained on the data published for the RuSentNE-2023 competition, available in the following repository: https://github.com/dialogue-evaluation/RuSentNE-evaluation

To enlarge the training sample, data from the Sentiment Analysis in Russian dataset was also used: https://www.kaggle.com/c/sentiment-analysis-in-russian/overview

Additionally, to expand the dataset with entity-level sentiment annotations, an automatic annotation algorithm was used: https://github.com/RyuminV01/markup-tsa-news/tree/main

This algorithm generates additional labeled data based on named entity recognition and sentiment alignment in Russian news texts.

Evaluation

Testing Data, Factors & Metrics

Testing Data

The direct link to the test evaluation data: https://github.com/dialogue-evaluation/RuSentNE-evaluation/blob/main/final_data.csv

Metrics

For model evaluation, two metrics were used:

  1. F1_posneg_macro -- the F1-measure macro-averaged over the positive and negative classes;
  2. F1_macro -- the F1-measure macro-averaged over the positive, negative, and neutral classes.
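Both metrics are macro-averaged F1 scores computed over different label subsets. A minimal pure-Python sketch of the computation, assuming the 0/neutral, 1/positive, 2/negative encoding above (the toy `y_true`/`y_pred` values are illustrative):

```python
def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1 over the given subset of class labels."""
    scores = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)


y_true = [0, 1, 2, 1, 0, 2]
y_pred = [0, 1, 2, 0, 0, 1]

# F1_macro averages over all three classes; F1_posneg_macro only over 1 and 2
print(macro_f1(y_true, y_pred, labels=[0, 1, 2]))
print(macro_f1(y_true, y_pred, labels=[1, 2]))
```

The same quantities can be obtained with `sklearn.metrics.f1_score` by passing `average="macro"` and restricting the `labels` argument.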

Results

On the official RuSentNE-2023 test set, this model demonstrates the following performance:

| Metric          | Score |
|-----------------|-------|
| F1_posneg_macro | 61.84 |
| F1_macro        | 70.38 |
Model tree for Vadim121/ruRoberta-large-tsa-news-ru

Finetuned
(18)
this model