---
library_name: peft
---

## Training procedure

This adapter has been fine-tuned using quantization-aware LoRA (QA-LoRA). More details on the training procedure are available here: [Fine-tune Quantized Llama 2 on Your GPU with QA-LoRA](https://kaitchup.substack.com/p/fine-tune-quantized-llama-2-on-your)

The base model was quantized to INT4 with AutoGPTQ. You can find it here: [kaitchup/Llama-2-7b-4bit-32g-autogptq](https://huggingface.co/kaitchup/Llama-2-7b-4bit-32g-autogptq)

### Framework versions

- PEFT 0.4.0