vodailuong2510 committed on
Commit a5e415d (verified)
Parent(s): 02ce7e4

Model save

Files changed (1): README.md (+14 −8)
README.md CHANGED

@@ -4,6 +4,8 @@ license: mit
 base_model: xlm-roberta-base
 tags:
 - generated_from_trainer
+metrics:
+- f1
 model-index:
 - name: saved_model_trial_9
   results: []
@@ -16,7 +18,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 5.8833
+- Loss: 4.4455
+- Exact Match: 31.4607
+- F1: 0.3958
 
 ## Model description
 
@@ -35,19 +39,21 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 4.879957028894552e-05
-- train_batch_size: 16
-- eval_batch_size: 16
+- learning_rate: 3.338177248512558e-05
+- train_batch_size: 32
+- eval_batch_size: 32
 - seed: 42
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- num_epochs: 1
+- num_epochs: 3
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| No log        | 1.0   | 1    | 5.8833          |
+| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1     |
+|:-------------:|:-----:|:----:|:---------------:|:-----------:|:------:|
+| No log        | 1.0   | 8    | 5.2961          | 17.9775     | 3.9016 |
+| No log        | 2.0   | 16   | 4.6305          | 11.2360     | 3.9679 |
+| No log        | 3.0   | 24   | 4.4455          | 31.4607     | 0.3958 |
 
 
 ### Framework versions
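The updated card reports Exact Match and F1, the metric pair used for SQuAD-style extractive question answering (note the card mixes scales: Exact Match looks like a percentage while the final F1 looks like a fraction). The card does not state which evaluation script produced these numbers; the sketch below shows how SQuAD-style EM and token-level F1 are commonly computed, with function names chosen for illustration.

```python
import re
import string
from collections import Counter


def normalize_answer(s: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation,
    remove English articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(reference))


def f1_score(prediction: str, reference: str) -> float:
    """Token-overlap F1 between normalized prediction and reference."""
    pred_tokens = normalize_answer(prediction).split()
    ref_tokens = normalize_answer(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

For corpus-level numbers like those in the table, these per-example scores are typically averaged over the evaluation set (and, for percentages, multiplied by 100).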
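The hyperparameters list `lr_scheduler_type: linear`, and the results table implies 8 optimizer steps per epoch over 3 epochs (24 total steps). A minimal sketch of such a schedule is below, assuming zero warmup steps, since the card does not state a warmup value; this mirrors the shape of a linear warmup-then-decay schedule, not the exact library implementation.

```python
def linear_lr(step: int, base_lr: float, total_steps: int, warmup_steps: int = 0) -> float:
    """Linear warmup to base_lr, then linear decay to zero.

    Assumption: warmup_steps defaults to 0 because the model card
    does not report a warmup setting.
    """
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

With the card's values (base_lr ≈ 3.34e-05, total_steps = 24), the learning rate would fall to half its initial value at step 12 and reach zero at step 24.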