Noonecy committed
Commit f3441c5 · 1 parent: e6431e9

update model card README.md

Files changed (1)
  1. README.md +6 -18
README.md CHANGED

@@ -1,9 +1,8 @@
 ---
 license: mit
+base_model: xlm-roberta-base
 tags:
 - generated_from_trainer
-datasets:
-- amazon_reviews_multi
 model-index:
 - name: xlm-roberta-base-finetuned-marc-en
   results: []
@@ -14,10 +13,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # xlm-roberta-base-finetuned-marc-en
 
-This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.9872
-- Mae: 0.4878
+This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
 
 ## Model description
 
@@ -37,24 +33,16 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
-- train_batch_size: 16
-- eval_batch_size: 16
+- train_batch_size: 2
+- eval_batch_size: 2
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - num_epochs: 2
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Mae |
-|:-------------:|:-----:|:----:|:---------------:|:------:|
-| 1.1162        | 1.0   | 235  | 0.9885          | 0.4878 |
-| 0.9559        | 2.0   | 470  | 0.9872          | 0.4878 |
-
-
 ### Framework versions
 
-- Transformers 4.30.2
+- Transformers 4.31.0
 - Pytorch 2.0.1+cu118
-- Datasets 2.13.1
+- Datasets 2.14.0
 - Tokenizers 0.13.3
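The updated hyperparameter list can be collected into a single training configuration. A minimal sketch in plain Python — the `TrainingArguments`-style key names are an assumption inferred from the `generated_from_trainer` tag, not confirmed by the commit; the values themselves are taken verbatim from the card:

```python
# Training hyperparameters from the updated model card, gathered into one
# dict. Key names mirror transformers.TrainingArguments fields (assumed
# mapping, since the card was generated by the Trainer); values are
# copied verbatim from the card.
training_config = {
    "learning_rate": 2e-05,
    "per_device_train_batch_size": 2,   # card: train_batch_size: 2
    "per_device_eval_batch_size": 2,    # card: eval_batch_size: 2
    "seed": 42,
    "adam_beta1": 0.9,                  # card: Adam betas=(0.9, 0.999)
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-08,
    "lr_scheduler_type": "linear",
    "num_train_epochs": 2,
}

for key, value in sorted(training_config.items()):
    print(f"{key}: {value}")
```

Passing such a dict to `transformers.TrainingArguments(output_dir=..., **training_config)` would reproduce the card's setup, assuming the key-name mapping above is correct.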