raulgdp committed on
Commit 77a8515
1 Parent(s): c9f7087

End of training

README.md ADDED
@@ -0,0 +1,74 @@
+ ---
+ base_model: daveni/twitter-xlm-roberta-emotion-es
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ - f1
+ - precision
+ - recall
+ model-index:
+ - name: xml-roberta-HU-Com
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # xml-roberta-HU-Com
+
+ This model is a fine-tuned version of [daveni/twitter-xlm-roberta-emotion-es](https://huggingface.co/daveni/twitter-xlm-roberta-emotion-es) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.3693
+ - Accuracy: 0.7911
+ - F1: 0.7440
+ - Precision: 0.7415
+ - Recall: 0.7466
+
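The card stops at the metrics and does not show how to run the checkpoint. Below is a minimal inference sketch; the hub id `raulgdp/xml-roberta-HU-Com` is an assumption inferred from the commit author and model name, and the example assumes the checkpoint keeps the sequence-classification head of its base model.

```python
# Hedged usage sketch -- the repo id and label handling are assumptions,
# not confirmed by the model card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "raulgdp/xml-roberta-HU-Com"  # assumed hub path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Ejemplo de comentario a clasificar"  # placeholder input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred_id = int(logits.argmax(dim=-1))
print(model.config.id2label.get(pred_id, pred_id))
```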
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 16
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 10
+
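For reproducibility, the sketch below maps the listed values onto `TrainingArguments` for the Hugging Face `Trainer`. The output directory, the per-epoch evaluation setting, and the commented-out `Trainer` wiring are assumptions; only the numeric values come from the card.

```python
# Sketch of the reported hyperparameters; model and dataset wiring is omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xml-roberta-HU-Com",   # assumed output path
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                    # Adam settings as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    eval_strategy="epoch",             # assumption: the table evaluates once per epoch
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds,
#                   compute_metrics=compute_metrics)
# trainer.train()
```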
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
+ | 0.6717 | 1.0 | 90 | 0.5918 | 0.6852 | 0.5272 | 0.6774 | 0.4315 |
+ | 0.453 | 2.0 | 180 | 0.5358 | 0.7465 | 0.6403 | 0.7570 | 0.5548 |
+ | 0.2631 | 3.0 | 270 | 0.7088 | 0.7744 | 0.7273 | 0.7152 | 0.7397 |
+ | 0.1936 | 4.0 | 360 | 0.7078 | 0.7939 | 0.7566 | 0.7278 | 0.7877 |
+ | 0.1273 | 5.0 | 450 | 1.1057 | 0.7772 | 0.7436 | 0.6988 | 0.7945 |
+ | 0.066 | 6.0 | 540 | 1.1990 | 0.7799 | 0.7168 | 0.7519 | 0.6849 |
+ | 0.0286 | 7.0 | 630 | 1.2457 | 0.7994 | 0.7584 | 0.7434 | 0.7740 |
+ | 0.0261 | 8.0 | 720 | 1.3297 | 0.7799 | 0.7106 | 0.7638 | 0.6644 |
+ | 0.0097 | 9.0 | 810 | 1.3733 | 0.7855 | 0.7354 | 0.7379 | 0.7329 |
+ | 0.0071 | 10.0 | 900 | 1.3693 | 0.7911 | 0.7440 | 0.7415 | 0.7466 |
+
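The four metric columns could come from a `compute_metrics` callback along the following lines; the card does not state how F1, precision, and recall are averaged, so the `binary` setting below is an assumption.

```python
# Hedged sketch of a metric function producing accuracy, F1, precision, recall.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # "binary" averaging is an assumption; use "macro"/"weighted" for multi-class.
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```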
+
+ ### Framework versions
+
+ - Transformers 4.43.0.dev0
+ - Pytorch 2.0.1+cu117
+ - Datasets 2.19.1
+ - Tokenizers 0.19.1
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:746af6bddbe613439c25b1a658025650e82c7262055a0a4eab566d301c0c2622
+ oid sha256:1bd8669741258fceabacc621c923e0cc235806d4e4af40c21ae598e38c02fd81
  size 1112205008
runs/Jul08_10-18-20_raul-MS-7B98/events.out.tfevents.1720453673.raul-MS-7B98.10727.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:07ed05ad34ebadecbb025de083b04a46771761a4c01fb7629712e479e34a585f
- size 5728
+ oid sha256:b70ee531c6efe1cb5cb2b1f5208a7c250b0508a350b57748327570a8a16cdeec
+ size 8814
runs/Jul08_10-53-48_raul-MS-7B98/events.out.tfevents.1720454035.raul-MS-7B98.11398.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cffefd4381aa38f9c862a984bf1ff266685a6ac72ae581e52a1d712fc299add5
+ size 12230
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b3ac0a5df0c36f9fe02102373281fec53c53bcd9c86c1686a1b7d3a5433bc668
+ oid sha256:22e4328a56c24bedabfcdc3e65d26b574848bcfbf03f66ed013009b28de6e1a1
  size 4667