Tags: PEFT, Safetensors, qwen2, axolotl, Generated from Trainer

Built with Axolotl

Axolotl config

axolotl version: 0.7.0

base_model: Qwen/Qwen2.5-7B
hub_model_id: sumukshashidhar-testing/reasoning-v0.2-qwen2.5-7b
trust_remote_code: true

load_in_8bit: false
load_in_4bit: false
strict: false
bf16: true
hf_use_auth_token: true

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true

datasets:
  - path: sumukshashidhar-testing/reasoning-rerankers-relevance-sft-data
    type: completion
    field: text
dataset_prepared_path: .axolotl_cache_data/reasoning-rerankers
shuffle_merged_datasets: true
# dataset_exact_deduplication: true
val_set_size: 0.05
output_dir: /scratch/reasoning-reankers/reasoning-v0.1-qwen2.5-7b
push_dataset_to_hub: sumukshashidhar-testing/reasoning-rerankers-relevance-sft-data-in-progress

sequence_length: 2048
sample_packing: true
pad_to_sequence_len: true

adapter: lora
lora_r: 256
lora_alpha: 32
lora_dropout: 0.05
peft_use_rslora: true
lora_target_linear: true

gradient_accumulation_steps: 1
micro_batch_size: 32
eval_batch_size: 1
num_epochs: 3
learning_rate: 5e-4
warmup_ratio: 0.05
evals_per_epoch: 2
saves_per_epoch: 2
gradient_checkpointing: true
lr_scheduler: cosine
optimizer: paged_adamw_8bit

profiler_steps: 100
save_safetensors: true
train_on_inputs: true
wandb_project: reasoning-rerankers
wandb_name: rr-qwen-7b
deepspeed: zero1.json
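
Since this config trains a LoRA adapter (adapter: lora) on top of Qwen/Qwen2.5-7B and pushes it to the hub_model_id above, inference might look like the minimal sketch below, which attaches the adapter to the base model with peft. The prompt and generation settings are illustrative assumptions, not values taken from the training run.

# Minimal sketch: load the base model, attach the LoRA adapter, and generate.
# Assumes the adapter weights are available under the hub_model_id from the config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-7B"
adapter_id = "sumukshashidhar-testing/reasoning-v0.2-qwen2.5-7b"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # training used bf16
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# The training data is completion-style text (type: completion, field: text), so a
# plain text prompt is passed as-is; the prompt below is only a placeholder.
prompt = "..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))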

reasoning-v0.2-qwen2.5-7b

This model is a fine-tuned version of Qwen/Qwen2.5-7B on the sumukshashidhar-testing/reasoning-rerankers-relevance-sft-data dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4119

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0005
  • train_batch_size: 32
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • total_train_batch_size: 256 (see the sketch after this list)
  • total_eval_batch_size: 8
  • optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 49
  • num_epochs: 3.0
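
The derived values above follow from the config; a quick arithmetic check (a sketch, using the 996 optimizer steps reported in the results table below and assuming warmup steps are truncated from warmup_ratio times total steps):

# Sketch: reproduce the derived hyperparameters from the axolotl config values.
micro_batch_size = 32   # micro_batch_size
grad_accum = 1          # gradient_accumulation_steps
num_devices = 8         # num_devices (multi-GPU)
eval_batch_size = 1     # eval_batch_size

total_train_batch_size = micro_batch_size * grad_accum * num_devices
total_eval_batch_size = eval_batch_size * num_devices
print(total_train_batch_size)  # 256
print(total_eval_batch_size)   # 8

# warmup_ratio: 0.05 over 996 total optimizer steps (per the results table)
warmup_steps = int(0.05 * 996)
print(warmup_steps)            # 49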

Training results

Training Loss   Epoch    Step   Validation Loss
No log          0.0030      1   2.2497
0.51            0.5       166   0.7306
0.2733          1.0       332   0.5004
0.1938          1.5       498   0.4445
0.1783          2.0       664   0.4152
0.1446          2.5       830   0.4147
0.1424          3.0       996   0.4119

Framework versions

  • PEFT 0.14.0
  • Transformers 4.48.3
  • Pytorch 2.4.0
  • Datasets 3.2.0
  • Tokenizers 0.21.1
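
To reproduce the reported numbers it can help to verify a local environment against these versions; a small check, assuming the standard PyPI distribution names (peft, transformers, torch, datasets, tokenizers):

# Sketch: compare installed package versions against the ones listed above.
from importlib.metadata import PackageNotFoundError, version

expected = {
    "peft": "0.14.0",
    "transformers": "4.48.3",
    "torch": "2.4.0",
    "datasets": "3.2.0",
    "tokenizers": "0.21.1",
}

for pkg, want in expected.items():
    try:
        got = version(pkg)
    except PackageNotFoundError:
        got = "not installed"
    marker = "" if got == want else "  <-- differs"
    print(f"{pkg}: installed {got}, expected {want}{marker}")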