bobbyw committed on
Commit b1a6cc0 (1 parent: c9d1812)

End of training

README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-base_model: bobbyw/deberta-v3-large_relationships_v3.1
+base_model: bobbyw/deberta-v3-large_relationships_v3.2
 tags:
 - generated_from_trainer
 metrics:
@@ -17,14 +17,14 @@ should probably proofread and complete it, then remove this comment. -->
 
 # deberta-v3-large_relationships_v3.2
 
-This model is a fine-tuned version of [bobbyw/deberta-v3-large_relationships_v3.1](https://huggingface.co/bobbyw/deberta-v3-large_relationships_v3.1) on an unknown dataset.
+This model is a fine-tuned version of [bobbyw/deberta-v3-large_relationships_v3.2](https://huggingface.co/bobbyw/deberta-v3-large_relationships_v3.2) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0142
-- Accuracy: 0.0026
-- F1: 0.0045
-- Precision: 0.0023
-- Recall: 0.7
-- Learning Rate: 0.0
+- Loss: 0.0079
+- Accuracy: 0.0017
+- F1: 0.0020
+- Precision: 0.0010
+- Recall: 0.4398
+- Learning Rate: 0.001
 
 ## Model description
 
@@ -48,19 +48,21 @@ The following hyperparameters were used during training:
 - eval_batch_size: 1
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: linear
-- num_epochs: 1
+- lr_scheduler_type: constant
+- num_epochs: 3
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Rate |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:----:|
-| 0.0131 | 1.0 | 3388 | 0.0142 | 0.0026 | 0.0045 | 0.0023 | 0.7 | 0.0 |
+| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Rate |
+|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-----:|
+| 0.0022 | 1.0 | 4106 | 0.0076 | 0.0015 | 0.0022 | 0.0011 | 0.4722 | 0.001 |
+| 0.0026 | 2.0 | 8212 | 0.0080 | 0.0017 | 0.0020 | 0.0010 | 0.4306 | 0.001 |
+| 0.0021 | 3.0 | 12318 | 0.0079 | 0.0017 | 0.0020 | 0.0010 | 0.4398 | 0.001 |
 
 
 ### Framework versions
 
-- Transformers 4.41.2
-- Pytorch 2.3.0+cu121
-- Datasets 2.20.0
+- Transformers 4.42.4
+- Pytorch 2.3.1+cu121
+- Datasets 2.21.0
 - Tokenizers 0.19.1
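The optimizer line in the hyperparameters above fully specifies the Adam update rule. As a standalone illustration (plain Python with made-up toy values, not the actual training code), one step of Adam with the card's betas=(0.9,0.999), epsilon=1e-08, and the new constant learning rate of 0.001 might look like:

```python
# Illustrative sketch of a single Adam update using the hyperparameters
# listed in the card: lr=0.001, betas=(0.9, 0.999), epsilon=1e-08.
# The parameter value and gradient below are arbitrary toy numbers.

def adam_step(param, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-08):
    """Return updated (param, m, v) after one Adam step at timestep t >= 1."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)              # bias-corrected second moment
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

# One toy step starting from zero-initialized moments.
p, m, v = adam_step(param=0.5, grad=0.2, m=0.0, v=0.0, t=1)
print(p)  # ~0.499: the first update's magnitude is approximately lr
```

With zero-initialized moments, the bias correction makes the first update's magnitude approximately equal to the learning rate regardless of the gradient's scale, which is why the printed value sits about 0.001 below the starting parameter.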
runs/Aug22_16-54-57_5b31a031f6d5/events.out.tfevents.1724345698.5b31a031f6d5.216.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:dc076b3f4e71a12e274ec20579ef311df4f5d79b1cb5988d224d4d0c55213d21
-size 23816
+oid sha256:149efc90f676e39d9b399ee90a50b297340b3ff560cee2d2496f55c37ff8c3ea
+size 24758
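For what it's worth, the step column in the updated training-results table is internally consistent: a quick sanity check (plain Python, numbers taken directly from the table) confirms a constant 4106 optimizer steps per epoch across the 3 epochs:

```python
# Global steps logged at the end of each epoch in the updated
# "Training results" table (epoch -> step).
logged_steps = {1: 4106, 2: 8212, 3: 12318}

steps_per_epoch = logged_steps[1]
for epoch, step in logged_steps.items():
    # Each logged step should be an exact multiple of the per-epoch
    # count, matching num_epochs: 3 in the hyperparameters.
    assert step == steps_per_epoch * epoch

print(steps_per_epoch)  # 4106
```

Note that this per-epoch count differs from the previous run's 3388 steps, consistent with the (unknown) training dataset having changed between v3.1 and v3.2.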