sandernotenbaert committed
Commit 80d94ed · verified · 1 Parent(s): 21ae026

Model save
Files changed (3):

1. README.md +4 -17
2. final_model/training_args.bin +1 -1
3. training_args.bin +1 -1
README.md CHANGED
@@ -1,5 +1,6 @@
  ---
  library_name: transformers
+ base_model: sandernotenbaert/hierarchical-music-t5
  tags:
  - generated_from_trainer
  model-index:
@@ -12,9 +13,7 @@ should probably proofread and complete it, then remove this comment. -->

  # hierarchical-music-t5

- This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 1.5412
+ This model is a fine-tuned version of [sandernotenbaert/hierarchical-music-t5](https://huggingface.co/sandernotenbaert/hierarchical-music-t5) on an unknown dataset.

  ## Model description

@@ -41,22 +40,10 @@ The following hyperparameters were used during training:
  - total_train_batch_size: 64
  - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: cosine
- - lr_scheduler_warmup_steps: 1000
- - num_epochs: 3
+ - lr_scheduler_warmup_steps: 100
+ - num_epochs: 4
  - mixed_precision_training: Native AMP

- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:------:|:----:|:---------------:|
- | 2.8072 | 0.4637 | 500 | 2.6616 |
- | 2.2359 | 0.9274 | 1000 | 2.1245 |
- | 1.9234 | 1.3904 | 1500 | 1.8231 |
- | 1.7725 | 1.8542 | 2000 | 1.6993 |
- | 1.6731 | 2.3172 | 2500 | 1.5826 |
- | 1.6358 | 2.7809 | 3000 | 1.5412 |
-
-
  ### Framework versions

  - Transformers 4.54.0
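The hyperparameter bullets in the hunk above map one-to-one onto `transformers.TrainingArguments`. A minimal sketch of the configuration after this commit — the per-device batch size / gradient-accumulation split behind total_train_batch_size=64 is an assumption (only their product is stated), and the output directory is a placeholder:

```python
# Sketch: the README's post-commit hyperparameters as TrainingArguments.
# Batch-size split is assumed; only the total (64) appears in the README.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hierarchical-music-t5",  # placeholder
    per_device_train_batch_size=8,       # assumed: 8 * 8 accumulation steps = 64 total
    gradient_accumulation_steps=8,
    optim="adamw_torch",                 # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=100,                    # changed from 1000 in this commit
    num_train_epochs=4,                  # changed from 3 in this commit
    fp16=True,                           # "Native AMP" mixed precision
)
```

These args would then be passed to a `Trainer` along with the model and dataset; the dropped warmup (1000 → 100 steps) and extra epoch are the substantive changes this commit records.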
final_model/training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7d71f2af054f5faaf5b5ae8591032a30dbf7647b1dbe96f944e37ea3dd06f8b2
+ oid sha256:c1cf41bae7f4c718f9bf014526c3423c721ac696a24b087ff0cb4541b05dd194
  size 5560
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7d71f2af054f5faaf5b5ae8591032a30dbf7647b1dbe96f944e37ea3dd06f8b2
+ oid sha256:c1cf41bae7f4c718f9bf014526c3423c721ac696a24b087ff0cb4541b05dd194
  size 5560
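Both training_args.bin files move to the same new LFS object, consistent with the Trainer pickling its TrainingArguments via torch.save. A quick way to confirm the changed values locally — this assumes the real file has been fetched with `git lfs pull` (the pointer text above is not itself loadable), and `weights_only=False` is needed on recent PyTorch because the file holds a pickled Python object rather than tensors:

```python
# Sketch: inspecting a Trainer-saved training_args.bin after `git lfs pull`.
import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.warmup_steps)       # expected to read 100 after this commit
print(args.num_train_epochs)   # expected to read 4 after this commit
print(args.lr_scheduler_type)  # cosine
```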