vodailuong2510 committed
Commit 9b76dc2 (verified), 1 parent: 1c458f9

Model save

Files changed (1)
  1. README.md +15 -12
README.md CHANGED
@@ -4,6 +4,8 @@ license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
+ metrics:
+ - f1
model-index:
- name: saved_model_trial_0
  results: []
@@ -16,7 +18,9 @@ should probably proofread and complete it, then remove this comment. -->

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- - Loss: 1.2784
+ - Loss: 2.9073
+ - Exact Match: 42.4603
+ - F1: 4.6917

## Model description

@@ -35,23 +39,22 @@ More information needed
### Training hyperparameters

The following hyperparameters were used during training:
- - learning_rate: 1.4264150037585327e-05
- - train_batch_size: 32
- - eval_batch_size: 32
+ - learning_rate: 1.8299324141754886e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- - num_epochs: 5
+ - num_epochs: 4

### Training results

- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 2.1108        | 1.0   | 402  | 1.9462          |
- | 1.6381        | 2.0   | 804  | 1.4885          |
- | 1.3775        | 3.0   | 1206 | 1.3802          |
- | 1.2159        | 4.0   | 1608 | 1.3033          |
- | 1.1228        | 5.0   | 2010 | 1.2784          |
+ | Training Loss | Epoch | Step | Validation Loss | Exact Match | F1     |
+ |:-------------:|:-----:|:----:|:---------------:|:-----------:|:------:|
+ | No log        | 1.0   | 65   | 3.8357          | 40.8730     | 3.0176 |
+ | No log        | 2.0   | 130  | 3.1247          | 44.4444     | 0.0    |
+ | No log        | 3.0   | 195  | 2.9332          | 43.2540     | 3.6763 |
+ | 3.6864        | 4.0   | 260  | 2.9073          | 42.4603     | 4.6917 |


### Framework versions
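
The `generated_from_trainer` tag indicates the card was produced by the Hugging Face `Trainer`, so the updated hyperparameters map directly onto `transformers.TrainingArguments`. A minimal sketch of that mapping follows; the output directory and the per-epoch evaluation setting are assumptions, not values stated in the card.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the run configuration from the values listed
# under "Training hyperparameters". output_dir and eval_strategy are assumed.
training_args = TrainingArguments(
    output_dir="saved_model_trial_0",      # assumed; matches the model-index name
    learning_rate=1.8299324141754886e-05,  # from the updated card
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",                   # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    eval_strategy="epoch",                 # assumed from the per-epoch results table
)
```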
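
The Exact Match and F1 metrics suggest an extractive question-answering fine-tune, although the card does not name the task or dataset. A hypothetical usage sketch under that assumption; the repository ID is a placeholder, not confirmed by the card.

```python
from transformers import pipeline

# Hypothetical inference sketch: assumes the checkpoint carries a QA head and
# is pushed under a repo ID like the one below (placeholder).
qa = pipeline("question-answering", model="vodailuong2510/saved_model_trial_0")

result = qa(
    question="What base model was fine-tuned?",
    context="This model is a fine-tuned version of xlm-roberta-base.",
)
print(result["answer"], result["score"])
```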