---
language:
- de
tags:
- Text Classification
- sentiment
- Simpletransformers
- deepset/gbert-base
---
This gbert-base model was fine-tuned on a sentiment prediction task using tweets from German politicians during the 2021 German federal election.
## Model Description
This model was trained on ~30,000 German-language tweets annotated for sentiment. It classifies tweets as negative, positive, or neutral and achieved an accuracy of 93% on this dataset.
## Model Implementation
You can use this model with, for example, Simpletransformers. First, unpack the downloaded archive:
```python
import tarfile

def unpack_model(model_name=''):
    tar = tarfile.open(f"{model_name}.tar.gz", "r:gz")
    tar.extractall()
    tar.close()
```
The hyperparameters were defined as follows:
```python
train_args = {
    "reprocess_input_data": True,
    "fp16": False,
    "num_train_epochs": 4,
    "overwrite_output_dir": True,
    "train_batch_size": 32,
    "eval_batch_size": 32,
}
```
Now create the model:
```python
from simpletransformers.classification import ClassificationModel

unpack_model(YOUR_DOWNLOADED_FILE_HERE)
model = ClassificationModel(
    "bert", "content/outputs/",
    num_labels=3,
    args=train_args
)
```
The output labels map to the following sentiments:
- 0 = positive
- 1 = negative
- 2 = neutral
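For readability, the numeric class ids can be converted back to label strings with a small helper (a sketch; the dictionary below simply mirrors the mapping above, and the function name is illustrative):

```python
# Maps the model's numeric class ids to human-readable sentiment labels.
LABEL_MAP = {0: "positive", 1: "negative", 2: "neutral"}

def ids_to_labels(predictions):
    """Convert a list of predicted class ids to sentiment labels."""
    return [LABEL_MAP[p] for p in predictions]
```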
Example for a positive prediction:
```python
model.predict(["Das ist gut! Wir danken dir."])
# ([0], array([[ 2.06561327, -3.57908797,  1.5340755 ]]))
```
Example for a negative prediction:
```python
model.predict(["Ich hasse dich!"])
# ([1], array([[-3.50486898,  4.29590368, -0.9000684 ]]))
```
Example for a neutral prediction:
```python
model.predict(["Heute ist Sonntag."])
# ([2], array([[-2.94458342, -2.91875601,  4.94414234]]))
```
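The second element returned by `model.predict` holds raw logits. If class probabilities are needed, a softmax can be applied; this is a sketch using NumPy, with the logits taken from the positive example above:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

# Raw model outputs from the positive example above.
raw = np.array([[2.06561327, -3.57908797, 1.5340755]])
probs = softmax(raw)
# The highest probability sits at index 0, i.e. the "positive" class.
```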
This model was created by Maximilian Weissenbacher for a project at the University of Regensburg.