Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Llama-2-13b-hf-4bit-64rank - GGUF
- Model creator: https://huggingface.co/LoftQ/
- Original model: https://huggingface.co/LoftQ/Llama-2-13b-hf-4bit-64rank/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-2-13b-hf-4bit-64rank.Q2_K.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.Q2_K.gguf) | Q2_K | 4.52GB |
| [Llama-2-13b-hf-4bit-64rank.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.IQ3_XS.gguf) | IQ3_XS | 4.99GB |
| [Llama-2-13b-hf-4bit-64rank.IQ3_S.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.IQ3_S.gguf) | IQ3_S | 5.27GB |
| [Llama-2-13b-hf-4bit-64rank.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.Q3_K_S.gguf) | Q3_K_S | 5.27GB |
| [Llama-2-13b-hf-4bit-64rank.IQ3_M.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.IQ3_M.gguf) | IQ3_M | 5.57GB |
| [Llama-2-13b-hf-4bit-64rank.Q3_K.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.Q3_K.gguf) | Q3_K | 5.9GB |
| [Llama-2-13b-hf-4bit-64rank.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.Q3_K_M.gguf) | Q3_K_M | 5.9GB |
| [Llama-2-13b-hf-4bit-64rank.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.Q3_K_L.gguf) | Q3_K_L | 6.45GB |
| [Llama-2-13b-hf-4bit-64rank.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.IQ4_XS.gguf) | IQ4_XS | 6.54GB |
| [Llama-2-13b-hf-4bit-64rank.Q4_0.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.Q4_0.gguf) | Q4_0 | 6.86GB |
| [Llama-2-13b-hf-4bit-64rank.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.IQ4_NL.gguf) | IQ4_NL | 6.9GB |
| [Llama-2-13b-hf-4bit-64rank.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.Q4_K_S.gguf) | Q4_K_S | 6.91GB |
| [Llama-2-13b-hf-4bit-64rank.Q4_K.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.Q4_K.gguf) | Q4_K | 7.33GB |
| [Llama-2-13b-hf-4bit-64rank.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.Q4_K_M.gguf) | Q4_K_M | 7.33GB |
| [Llama-2-13b-hf-4bit-64rank.Q4_1.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.Q4_1.gguf) | Q4_1 | 7.61GB |
| [Llama-2-13b-hf-4bit-64rank.Q5_0.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.Q5_0.gguf) | Q5_0 | 8.36GB |
| [Llama-2-13b-hf-4bit-64rank.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.Q5_K_S.gguf) | Q5_K_S | 8.36GB |
| [Llama-2-13b-hf-4bit-64rank.Q5_K.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.Q5_K.gguf) | Q5_K | 8.6GB |
| [Llama-2-13b-hf-4bit-64rank.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.Q5_K_M.gguf) | Q5_K_M | 8.6GB |
| [Llama-2-13b-hf-4bit-64rank.Q5_1.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.Q5_1.gguf) | Q5_1 | 9.1GB |
| [Llama-2-13b-hf-4bit-64rank.Q6_K.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.Q6_K.gguf) | Q6_K | 9.95GB |
| [Llama-2-13b-hf-4bit-64rank.Q8_0.gguf](https://huggingface.co/RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf/blob/main/Llama-2-13b-hf-4bit-64rank.Q8_0.gguf) | Q8_0 | 12.88GB |
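
The GGUF files above are typically used with llama.cpp-compatible runtimes. As a minimal sketch, one of the quantizations could be downloaded with `huggingface_hub` and run through the `llama-cpp-python` bindings; the choice of the Q4_K_M file, the context size, and the prompt are illustrative assumptions, not recommendations from this repository.

```python
# Sketch only: fetch one GGUF quantization and run it locally.
# Requires: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is an arbitrary pick from the table above; any listed file works the same way.
model_path = hf_hub_download(
    repo_id="RichardErkhov/LoftQ_-_Llama-2-13b-hf-4bit-64rank-gguf",
    filename="Llama-2-13b-hf-4bit-64rank.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)  # context length is illustrative
output = llm("Q: What is low-rank adaptation (LoRA)? A:", max_tokens=128)
print(output["choices"][0]["text"])
```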

Original model description:
---
license: mit
language:
- en
pipeline_tag: text-generation
tags:
- quantization
- lora
---
# LoftQ Initialization

| [Paper](https://arxiv.org/abs/2310.08659) | [Code](https://github.com/yxli2123/LoftQ) | [PEFT Example](https://github.com/huggingface/peft/tree/main/examples/loftq_finetuning) |

LoftQ (LoRA-fine-tuning-aware Quantization) provides a quantized backbone Q and LoRA adapters A and B, given a full-precision pre-trained weight W.

This model, `Llama-2-13b-hf-4bit-64rank`, is obtained from [LLAMA-2-13b](https://huggingface.co/meta-llama/Llama-2-13b-hf).
The backbone is stored under `LoftQ/Llama-2-13b-hf-4bit-64rank`, and the LoRA adapters are in the subfolder `loftq_init`.

## Model Info
### Backbone
- Stored format: `torch.bfloat16`
- Size: ~26 GiB
- Loaded format: bitsandbytes nf4
- Size loaded on GPU: ~6.5 GiB

### LoRA adapters
- rank: 64
- lora_alpha: 64
- target_modules: ["down_proj", "up_proj", "q_proj", "k_proj", "v_proj", "o_proj", "gate_proj"]

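
This repository already ships the LoftQ-initialized adapters, so for fine-tuning you normally just load them as shown in the Usage section below. For reference only, the adapter settings above roughly correspond to applying LoftQ initialization yourself through PEFT's `LoftQConfig`/`LoraConfig` API; the sketch below is an assumption based on that public API (it starts from the full-precision base weights), not the exact script used to build this repository.

```python
# Sketch only: re-creating a LoftQ-style 4-bit / rank-64 setup with PEFT.
# LoftQ initialization needs the *full-precision* base model as input;
# loftq_bits=4 mirrors this repo, loftq_iter is left at its default.
import torch
from transformers import AutoModelForCausalLM
from peft import LoftQConfig, LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf", torch_dtype=torch.bfloat16
)

lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    target_modules=["down_proj", "up_proj", "q_proj", "k_proj",
                    "v_proj", "o_proj", "gate_proj"],
    init_lora_weights="loftq",
    loftq_config=LoftQConfig(loftq_bits=4),  # 4-bit backbone, as in this repo
)

peft_model = get_peft_model(base_model, lora_config)
```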
## Usage

**Training** Here is an example of loading this model and preparing it for LoRA fine-tuning.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

MODEL_ID = "LoftQ/Llama-2-13b-hf-4bit-64rank"

base_model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # you may change it with different models
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,  # bfloat16 is recommended
        bnb_4bit_use_double_quant=False,
        bnb_4bit_quant_type='nf4',
    ),
)
peft_model = PeftModel.from_pretrained(
    base_model,
    MODEL_ID,
    subfolder="loftq_init",
    is_trainable=True,
)

# Do training with peft_model ...
```
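
After `peft_model` is prepared, it trains like any other PEFT model. The following is only a sketch of one possible setup with the `transformers` Trainer on GSM8K; the prompt format, sequence length, and hyperparameters are illustrative assumptions, not the recipe behind the results reported below.

```python
# Sketch only: continue from the peft_model created above.
from datasets import load_dataset
from transformers import (AutoTokenizer, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token

def tokenize(example):
    # Naive "question + answer" format; adapt to your own prompt template.
    text = example["question"] + "\n" + example["answer"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=512)

train_dataset = load_dataset("gsm8k", "main", split="train").map(
    tokenize, remove_columns=["question", "answer"]
)

trainer = Trainer(
    model=peft_model,
    args=TrainingArguments(
        output_dir="llama2-13b-loftq-gsm8k",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        learning_rate=1e-4,
        num_train_epochs=1,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```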

## Experiment Results
We conducted supervised fine-tuning experiments on [GSM8K](https://huggingface.co/datasets/gsm8k)
and [WikiText-2](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1).

| Model           | Bits | Rank | LoRA Initialization  | GSM8K (accuracy) | WikiText-2 (perplexity) |
| --------------- | ---- | ---- | -------------------- | ---------------- | ----------------------- |
| LLAMA-2-13b     | 16   | 64   | Gaussian + 0         | 45.3             | 5.12                    |
| LLAMA-2-13b     | 4    | 64   | Gaussian + 0 (QLoRA) | 39.9             | 5.22                    |
| **LLAMA-2-13b** | 4    | 64   | LoftQ                | 45.0             | 5.16                    |

**Inference** Here is example code for inference after the model has been fine-tuned on [GSM8K](https://huggingface.co/datasets/gsm8k).

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

MODEL_ID = "LoftQ/Llama-2-13b-hf-4bit-64rank"

base_model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # you may change it with different models
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,  # bfloat16 is recommended
        bnb_4bit_use_double_quant=False,
        bnb_4bit_quant_type='nf4',
    ),
)
peft_model = PeftModel.from_pretrained(
    base_model,
    MODEL_ID,
    subfolder="gsm8k",
    is_trainable=False,  # adapters are loaded for inference only
)

# Do inference with peft_model ...
```
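
Once the adapter is loaded, generation goes through the standard `generate` API. A minimal sketch (the prompt and decoding settings are illustrative):

```python
# Sketch only: greedy decoding with the fine-tuned adapter.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

prompt = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
inputs = tokenizer(prompt, return_tensors="pt").to(peft_model.device)

with torch.no_grad():
    outputs = peft_model.generate(**inputs, max_new_tokens=256, do_sample=False)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```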
See the full code at our [GitHub repo](https://github.com/yxli2123/LoftQ).


## Citation

```bibtex
@article{li2023loftq,
  title={LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models},
  author={Li, Yixiao and Yu, Yifan and Liang, Chen and He, Pengcheng and Karampatziakis, Nikos and Chen, Weizhu and Zhao, Tuo},
  journal={arXiv preprint arXiv:2310.08659},
  year={2023}
}
```