bnjmnmarie committed
Commit 27ac08b · 1 Parent(s): f8ce208

Update README.md

Files changed (1)
  1. README.md +4 -1
README.md CHANGED
@@ -7,4 +7,7 @@ Llama 2 7B quantized with AutoGPTQ V0.3.0.
 * Group size: 32
 * Data type: INT4
 
-This model is compatible with the first version of QA-LoRA.
+This model is compatible with the first version of QA-LoRA.
+
+To fine-tune it with QA-LoRA, follow this tutorial:
+[Fine-tune Quantized Llama 2 on Your GPU with QA-LoRA](https://kaitchup.substack.com/p/fine-tune-quantized-llama-2-on-your)
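For context, a minimal sketch (not part of this commit) of how a checkpoint quantized this way (INT4, group size 32, AutoGPTQ v0.3.0) is typically loaded for inference with the AutoGPTQ library. The repository ID below is a placeholder, not this model's actual path.

```python
# Minimal sketch, not part of this commit: loading an INT4 GPTQ checkpoint
# with AutoGPTQ for inference. The repo ID below is a placeholder.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "your-username/llama-2-7b-gptq-4bit-32g"  # placeholder, not the real repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
# from_quantized reads the quantize_config saved alongside the weights
# (bits=4, group_size=32) and loads the packed INT4 layers onto the GPU.
model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0")

prompt = "Quantization-aware fine-tuning with QA-LoRA"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```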