---
license: apache-2.0
language:
- de
- en
- ar
- fr
- hi
- he
metrics:
- f1
library_name: transformers
---
# Model Name: XLM-RoBERTa-German-Sentiment

## Overview
The XLM-RoBERTa-German-Sentiment model performs sentiment analysis for eight languages, with a particular focus on German.\
It leverages the XLM-RoBERTa architecture, a choice inspired by RoBERTa's superior performance over BERT across numerous benchmarks and by XLM-RoBERTa's multilingual capabilities.\
Tailored specifically to German, the model has been fine-tuned on over 200,000 German-language sentiment analysis samples; more on its training can be found in the [paper](https://drive.google.com/file/d/1xg7zbCPTS3lyKhQlA2S4b9UOYeIj5Pyt/view?usp=drive_link).\
The training dataset, available at [this GitHub repository](https://github.com/oliverguhr/german-sentiment-lib), was developed by Oliver Guhr. We extend our gratitude to him for making it open source; it was instrumental in refining the model's accuracy and its sensitivity to the nuances of German sentiment.\
Our model is based on XLM-T (see References).
## Model Details

- **Architecture**: XLM-RoBERTa
- **Performance**: 87% weighted F1 score.
- **Limitations**: The model is trained and tested only on German; it can still handle the other supported languages, but with lower accuracy.

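The weighted F1 score reported above averages each class's F1 weighted by that class's share of the true labels. As an illustration only (the `weighted_f1` helper and the toy labels below are ours, not part of the model's evaluation code), a minimal sketch in plain Python:

```python
from collections import Counter

def weighted_f1(y_true, y_pred, labels):
    """Per-class F1 scores, each weighted by the class's support in y_true."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for label in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (support[label] / total) * f1
    return score

# Toy example with the model's three sentiment classes
y_true = ['negative', 'neutral', 'positive', 'positive']
y_pred = ['negative', 'positive', 'positive', 'positive']
print(weighted_f1(y_true, y_pred, ['negative', 'neutral', 'positive']))  # ≈ 0.65
```

In practice the same number comes from `sklearn.metrics.f1_score(..., average='weighted')`.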
## How to Use

To use this model, you need to install the Hugging Face Transformers library and PyTorch. You can do this using pip:

```bash
pip install torch transformers
```

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForSequenceClassification.from_pretrained('ssary/XLM-RoBERTa-German-sentiment')
tokenizer = AutoTokenizer.from_pretrained('ssary/XLM-RoBERTa-German-sentiment')

text = "Erneuter Streik in der S-Bahn"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)

sentiment_classes = ['negative', 'neutral', 'positive']
print(sentiment_classes[predictions.argmax()])  # class with the highest probability
print(predictions)                              # probability of each class
```

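The softmax-and-argmax step at the end of the snippet above is what turns the model's raw logits into a sentiment label. The same mapping can be sketched in plain Python (the logit values below are made-up illustration numbers, not actual model outputs):

```python
import math

sentiment_classes = ['negative', 'neutral', 'positive']

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.1, -0.3, 0.4]  # hypothetical logits for a negative-sounding headline
probs = softmax(logits)    # three probabilities summing to 1
label = sentiment_classes[max(range(len(probs)), key=probs.__getitem__)]
print(label)  # negative
```

The largest logit always wins; softmax only rescales the scores into probabilities, which is why `predictions.argmax()` on the probabilities gives the same class as arg-max on the logits.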
## Acknowledgments

This model was developed by Sary Nasser at HTW-Berlin under the supervision of Martin Steinicke.

## References

- Dataset paper by Oliver Guhr et al.: [Training a Broad-Coverage German Sentiment Classification Model for Dialog Systems](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.202.pdf)
- Model architecture: [XLM-T: Multilingual Language Models in Twitter for Sentiment Analysis and Beyond](https://arxiv.org/abs/2104.12250)