---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
model-index:
- name: b85a4e3d-93c8-46f3-92a5-8e718064e026
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
cosine_min_lr_ratio: 0.1
data_processes: 4
dataset_prepared_path: null
datasets:
- data_files:
  - b69b8f2576840a57_train_data.json
  ds_type: json
  format: custom
  num_proc: 4
  path: /workspace/input_data/b69b8f2576840a57_train_data.json
  streaming: true
  type:
    field_input: student_answer
    field_instruction: question
    field_output: reference_answer
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: balanced
do_eval: true
early_stopping_patience: 1
eval_batch_size: 1
eval_sample_packing: false
eval_steps: 25
evaluation_strategy: steps
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: true
hub_model_id: eeeebbb2/b85a4e3d-93c8-46f3-92a5-8e718064e026
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
  0: 75GB
  1: 75GB
  2: 75GB
  3: 75GB
max_steps: 50
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/b69b8f2576840a57_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
  adam_beta1: 0.9
  adam_beta2: 0.95
  adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: false
train_on_inputs: false
trust_remote_code: true
val_set_size: 50
wandb_entity: null
wandb_mode: online
wandb_name: b85a4e3d-93c8-46f3-92a5-8e718064e026
wandb_project: Public_TuningSN
wandb_runid: b85a4e3d-93c8-46f3-92a5-8e718064e026
warmup_ratio: 0.04
weight_decay: 0.01
xformers_attention: null

```

</details><br>

# b85a4e3d-93c8-46f3-92a5-8e718064e026

This model is a LoRA fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) trained on a custom JSON dataset (see the Axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 0.1482

## Model description

More information needed
## Intended uses & limitations
|
|
|
|
More information needed
|
|
|
|
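
As a minimal usage sketch (not part of the original card), the adapter can be loaded on top of the base model with Transformers and PEFT. The adapter repo id below is taken from `hub_model_id` in the Axolotl config above; the prompt and generation settings are illustrative placeholders.

```python
# Minimal inference sketch: load the Qwen2.5-1.5B base model and attach this LoRA adapter.
# Assumes `transformers` and `peft` are installed; adapter id comes from `hub_model_id` above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-1.5B"
adapter_id = "eeeebbb2/b85a4e3d-93c8-46f3-92a5-8e718064e026"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights
model.eval()

prompt = "Explain why the sky appears blue."  # hypothetical example input
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```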

## Training and evaluation data

More information needed
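
Per the config above, training data comes from a local JSON file whose records are mapped to prompts by the custom `type` block: `question` fills `{instruction}`, `student_answer` fills `{input}`, and `reference_answer` is the target completion. A small illustrative sketch of that mapping follows (the record values are invented for illustration).

```python
# Illustrative sketch of how the config's custom format renders one training record.
# Field names come from the `type` block above; the example record is hypothetical.
record = {
    "question": "What force keeps planets in orbit around the Sun?",
    "student_answer": "I think it is gravity pulling them toward the Sun.",
    "reference_answer": "Gravity provides the centripetal force that keeps planets in orbit.",
}

fmt = "{instruction} {input}"       # `format`: used when the input field is present
no_input_fmt = "{instruction}"      # `no_input_format`: used when it is missing

if record.get("student_answer"):
    prompt = fmt.format(instruction=record["question"], input=record["student_answer"])
else:
    prompt = no_input_fmt.format(instruction=record["question"])

target = record["reference_answer"]  # the completion the model is trained to produce
print(prompt)
print(target)
```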

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: AdamW (adamw_torch) with adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5 via optim_args (overriding the Trainer defaults betas=(0.9, 0.999), epsilon=1e-08); see the sketch after this list
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- training_steps: 50
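
As a hedged aside, the effective batch size follows from micro_batch_size × gradient_accumulation_steps × num_devices = 2 × 16 × 4 = 128, and the optimizer settings above correspond roughly to a plain PyTorch AdamW configured as in the sketch below (an illustration, not the Trainer's exact internals).

```python
# Sketch of the optimizer setup implied by the hyperparameters above.
# Mirrors adamw_torch with the overridden betas/epsilon; not the Trainer's exact code.
import torch

micro_batch_size = 2
gradient_accumulation_steps = 16
num_devices = 4
effective_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert effective_batch_size == 128  # matches total_train_batch_size reported above

model = torch.nn.Linear(8, 8)  # stand-in for the PEFT model's trainable parameters
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,              # learning_rate
    betas=(0.9, 0.95),    # adam_beta1, adam_beta2 from optim_args
    eps=1e-5,             # adam_epsilon from optim_args
    weight_decay=0.01,    # weight_decay
)
```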

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.424         | 0.0264 | 1    | 3.6234          |
| 0.5879        | 0.6590 | 25   | 0.5001          |
| 0.0639        | 1.3410 | 50   | 0.1482          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1