---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-1.3b-base
tags:
- generated_from_trainer
model-index:
- name: lemexp-task1-v2-lemma_object_full-deepseek-coder-1.3b-base-ddp-8lr-v2
  results: []
---

# lemexp-task1-v2-lemma_object_full-deepseek-coder-1.3b-base-ddp-8lr-v2

This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on an unknown dataset.
It achieves the following results on the evaluation set:

- Loss: 0.2570

## Model description

This is a PEFT adapter trained from [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base); a usage sketch follows. More information needed.

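Since this repository holds a PEFT adapter rather than full model weights, inference presumably requires loading the base checkpoint and attaching the adapter on top. A minimal sketch, assuming the adapter is published under a repo id matching the model name above (the repo id and the prompt are illustrative, not confirmed by this card):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/deepseek-coder-1.3b-base"
adapter_id = "lemexp-task1-v2-lemma_object_full-deepseek-coder-1.3b-base-ddp-8lr-v2"  # illustrative repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the fine-tuned adapter weights
model.eval()

# Illustrative prompt; the actual task/prompt format is undocumented (see above).
inputs = tokenizer("lemma add_comm : forall n m : nat,", return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
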
## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of an equivalent `TrainingArguments` configuration follows the list):

- learning_rate: 0.0008
- train_batch_size: 2 (per device)
- eval_batch_size: 2 (per device)
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP

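A hedged reconstruction of the configuration above as `transformers.TrainingArguments`; the `output_dir` is illustrative, the dataset and `Trainer` wiring are omitted, and the 8-GPU DDP launch is assumed to come from `torchrun`:

```python
from transformers import TrainingArguments

# Launched with e.g. `torchrun --nproc_per_node=8 train.py` for the 8-GPU DDP run (assumption).
args = TrainingArguments(
    output_dir="lemexp-task1-v2-lemma_object_full-deepseek-coder-1.3b-base-ddp-8lr-v2",  # illustrative
    learning_rate=8e-4,             # 0.0008
    per_device_train_batch_size=2,  # 2 per device x 8 devices = total train batch size 16
    per_device_eval_batch_size=2,   # 2 per device x 8 devices = total eval batch size 16
    seed=42,
    num_train_epochs=6,
    lr_scheduler_type="linear",
    optim="adamw_torch",            # AdamW with default betas=(0.9, 0.999), epsilon=1e-08
    fp16=True,                      # mixed precision via native AMP
)
```
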
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4506 | 0.2 | 3094 | 0.4509 |
| 0.4176 | 0.4 | 6188 | 0.4119 |
| 0.4028 | 0.6 | 9282 | 0.4009 |
| 0.3903 | 0.8 | 12376 | 0.3867 |
| 0.3791 | 1.0 | 15470 | 0.3844 |
| 0.3728 | 1.2 | 18564 | 0.3752 |
| 0.3652 | 1.4 | 21658 | 0.3608 |
| 0.3604 | 1.6 | 24752 | 0.3574 |
| 0.3549 | 1.8 | 27846 | 0.3554 |
| 0.3491 | 2.0 | 30940 | 0.3493 |
| 0.3411 | 2.2 | 34034 | 0.3406 |
| 0.3369 | 2.4 | 37128 | 0.3315 |
| 0.3304 | 2.6 | 40222 | 0.3313 |
| 0.3269 | 2.8 | 43316 | 0.3309 |
| 0.3229 | 3.0 | 46410 | 0.3285 |
| 0.3128 | 3.2 | 49504 | 0.3141 |
| 0.3128 | 3.4 | 52598 | 0.3127 |
| 0.3059 | 3.6 | 55692 | 0.3097 |
| 0.3047 | 3.8 | 58786 | 0.3038 |
| 0.3003 | 4.0 | 61880 | 0.2949 |
| 0.2881 | 4.2 | 64974 | 0.2886 |
| 0.2838 | 4.4 | 68068 | 0.2920 |
| 0.2821 | 4.6 | 71162 | 0.2878 |
| 0.2735 | 4.8 | 74256 | 0.2808 |
| 0.2698 | 5.0 | 77350 | 0.2764 |
| 0.2596 | 5.2 | 80444 | 0.2720 |
| 0.2624 | 5.4 | 83538 | 0.2714 |
| 0.2574 | 5.6 | 86632 | 0.2691 |
| 0.2542 | 5.8 | 89726 | 0.2630 |
| 0.2484 | 6.0 | 92820 | 0.2570 |

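The validation loss falls steadily from 0.4509 to 0.2570 over the six epochs. Treating the reported loss as mean token-level cross-entropy in nats (an assumption; the card does not document the reduction), the final value corresponds to a perplexity of about 1.29:

```python
import math

# Perplexity from mean cross-entropy loss, assuming the loss is in nats.
print(math.exp(0.2570))  # ~1.293
```
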
### Framework versions

- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0