unsloth/Meta-Llama-3.1-8B Fine-tuned with QLoRA (Unsloth) on Alpaca

This model is a fine-tuned version of unsloth/Meta-Llama-3.1-8B, trained with QLoRA and Unsloth for memory-efficient instruction tuning on the Alpaca dataset.
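
To try the model, something along these lines should work. This is a minimal sketch: the repo ID ParamDev/llama-3.1-8b_alpaca is taken from this card, but the max sequence length and the exact Alpaca prompt template are assumptions, since the card does not state them.

```python
# Minimal inference sketch (assumes: unsloth installed, a CUDA GPU, and that
# training used the standard Alpaca prompt template).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ParamDev/llama-3.1-8b_alpaca",
    max_seq_length=2048,   # assumption: not stated in the card
    load_in_4bit=True,     # matches the 4-bit (NF4) training setup
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

# Standard Alpaca template (assumption: this is what training used).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what QLoRA is in one sentence.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Unsloth's `from_pretrained` can load either merged weights or a LoRA adapter repo; the card does not say which format this checkpoint uses.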

πŸ“– Training Details

  • Dataset: yahma/alpaca-cleaned
  • QLoRA: 4-bit quantization (NF4) via bitsandbytes
  • LoRA Rank: 16
  • LoRA Alpha: 16
  • Batch Size: 2 per device
  • Gradient Accumulation: 4
  • Learning Rate: 2e-4
  • Epochs: 1
  • Trainer: trl.SFTTrainer (see the training sketch below)
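
The hyperparameters above can be assembled into a training script along these lines. This is a hedged reconstruction, not the exact script used: the max sequence length, the LoRA target modules, the prompt template, and the trl API variant (older versions accept dataset_text_field and max_seq_length directly; newer ones move them into SFTConfig) are all assumptions.

```python
# Hedged reconstruction of the training setup from the hyperparameters above.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

max_seq_length = 2048  # assumption: not stated in the card

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=max_seq_length,
    load_in_4bit=True,  # QLoRA: 4-bit NF4 via bitsandbytes
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,           # LoRA rank
    lora_alpha=16,  # LoRA alpha
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumption: Unsloth's usual set
)

# Format yahma/alpaca-cleaned into single training strings (Alpaca template).
alpaca_prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{}\n\n### Input:\n{}\n\n### Response:\n{}"
)

def to_text(batch):
    texts = [
        alpaca_prompt.format(ins, inp, out) + tokenizer.eos_token
        for ins, inp, out in zip(batch["instruction"], batch["input"], batch["output"])
    ]
    return {"text": texts}

dataset = load_dataset("yahma/alpaca-cleaned", split="train").map(to_text, batched=True)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,   # Batch Size: 2 per device
        gradient_accumulation_steps=4,   # Gradient Accumulation: 4
        learning_rate=2e-4,              # Learning Rate: 2e-4
        num_train_epochs=1,              # Epochs: 1
        output_dir="outputs",
    ),
)
trainer.train()
```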

πŸ’‘ Notes

  • Optimized for memory-efficient fine-tuning with Unsloth
  • No evaluation was run during training; please evaluate the model separately before use (a sketch follows below)
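
For a quick benchmark pass, EleutherAI's lm-evaluation-harness is one option. A minimal sketch, assuming lm-eval >= 0.4 is installed and using the repo ID from this card; the task names are illustrative, not recommendations from the card.

```python
# Hedged evaluation sketch using lm-evaluation-harness (pip install lm-eval).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ParamDev/llama-3.1-8b_alpaca",
    tasks=["hellaswag", "arc_easy"],  # illustrative tasks
    batch_size=8,
)
print(results["results"])  # per-task metrics
```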

πŸ“ License

Apache 2.0
