Update README.md
<p><strong><font size="5">Information</font></strong></p>
<p>OpenAssistant-Llama-30B-4-bit works with the GPTQ implementations used in Oobabooga's Text Generation Webui and KoboldAI.</p>
<p>There are three quantized versions (coming soon): one quantized with GPTQ's <i>--true-sequential</i> and <i>--act-order</i> optimizations, one with GPTQ's <i>--true-sequential</i> and <i>--groupsize 128</i> optimizations, and one quantized for GGML using q4_1.</p>
<p>This was made using <a href="https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor">Open Assistant's native fine-tune</a> of Llama 30b on their dataset.</p>
<p><strong>GPU/GPTQ Usage</strong></p>
<p>To run on your GPU with GPTQ, pick one of the .safetensors files and download it along with all of the .json and .model files.</p>
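<p>If you prefer to script the download, a minimal sketch using <code>huggingface_hub</code> is below. The repo id and local directory are placeholders, not values from this card.</p>

```python
# A sketch only: fetch one quantized variant's files with huggingface_hub.
# The repo_id and local_dir below are placeholders, not values from this card.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="someuser/OpenAssistant-Llama-30B-4-bit",       # placeholder repo id
    allow_patterns=["*.safetensors", "*.json", "*.model"],  # the files GPTQ needs
    local_dir="models/OpenAssistant-Llama-30B-4-bit",
)
# If several .safetensors are present, keep only the one you intend to use.
```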
<p>Oobabooga: If you require further instruction, see <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md">here</a> and <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/LLaMA-model.md">here</a></p>
<p>KoboldAI: If you require further instruction, see <a href="https://github.com/0cc4m/KoboldAI">here</a></p>
<p><strong>CPU/GGML Usage</strong></p>
<p>To run on your CPU with GGML (llama.cpp), you only need the single .bin GGML file.</p>
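<p>As one possible route beyond the links below, a minimal sketch using the <code>llama-cpp-python</code> bindings (an assumption; this card only covers the webui and KoboldAI routes). The model filename is a placeholder, and the prompt format shown is generic; check the fine-tune's card for the exact format it expects.</p>

```python
# A sketch only, using the llama-cpp-python bindings. The model filename is a
# placeholder; the prompt format is generic, not necessarily this fine-tune's.
from llama_cpp import Llama

llm = Llama(model_path="models/oasst-llama-30b-q4_1.bin", n_ctx=2048)
out = llm("Q: What is GGML? A:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```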
<p>Oobabooga: If you require further instruction, see <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md">here</a></p>
<p>KoboldAI: If you require further instruction, see <a href="https://github.com/LostRuins/koboldcpp">here</a></p>
<p><strong>Training Parameters</strong></p>
<ul><li>num_epochs=10</li><li>cutoff_len=512</li><li>group_by_length</li><li>lora_target_modules='[q_proj,k_proj,v_proj,o_proj]'</li><li>lora_r=16</li><li>micro_batch_size=8</li></ul>
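<p>These parameters read like a LoRA fine-tune configuration. As a hedged sketch only, here is how the listed values might map onto PEFT's <code>LoraConfig</code>; <code>lora_alpha</code> and <code>lora_dropout</code> are not listed above, so those values are guesses.</p>

```python
# A sketch of how the listed hyperparameters might map onto PEFT's LoraConfig.
# lora_alpha and lora_dropout are NOT listed above; those values are guesses.
from peft import LoraConfig

config = LoraConfig(
    r=16,                                                     # lora_r=16
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # lora_target_modules
    lora_alpha=16,       # assumption: not stated in this card
    lora_dropout=0.05,   # assumption: not stated in this card
    task_type="CAUSAL_LM",
)
# num_epochs=10, cutoff_len=512, micro_batch_size=8, and group_by_length belong
# to the training loop / Trainer arguments rather than the LoRA config itself.
```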
<p><strong><font size="5">Benchmarks</font></strong></p>
<p><strong><font size="4">--true-sequential --act-order</font></strong></p>
<strong>Wikitext2</strong>: 4.964076519012451
<strong>Ptb-New</strong>: 9.641128540039062
<strong>C4-New</strong>: 7.203001022338867
<strong>Note</strong>: This version does not use <i>--groupsize 128</i>, so its perplexity scores are slightly higher. However, it allows fitting the whole model at full context using only 24GB of VRAM.
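<p>For context, figures like these are typically perplexities computed over non-overlapping fixed-length windows of each test set. The sketch below illustrates that general scheme; it is not the exact evaluation script used here, and the model id is a placeholder.</p>

```python
# Illustrative only: perplexity over non-overlapping 2048-token windows, the
# general scheme behind figures like those above. The model id is a placeholder.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huggyllama/llama-30b"  # placeholder; use your local model path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
).eval()

text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tokenizer(text, return_tensors="pt").input_ids

seqlen, nlls = 2048, []
for i in range(0, ids.size(1) - seqlen + 1, seqlen):
    window = ids[:, i : i + seqlen].to(model.device)
    with torch.no_grad():
        loss = model(window, labels=window).loss  # mean NLL per token
    nlls.append(loss.float())
ppl = torch.exp(torch.stack(nlls).mean())
print(f"Wikitext2 perplexity: {ppl.item():.4f}")
```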
<p><strong><font size="4">--true-sequential --groupsize 128</font></strong></p>
<strong>Wikitext2</strong>:
<strong>Ptb-New</strong>:
<strong>C4-New</strong>:
<strong>Note</strong>: This version uses <i>--groupsize 128</i>, resulting in better (lower) perplexity scores. However, it consumes more VRAM.
|