skatardude10 committed
Commit 586eb4b · verified · 1 Parent(s): 9a1eb1a

Update README.md


Updated readme to add info about restricting tensor offload to make the model go brrr (run faster).

Files changed (1)
  1. README.md +8 -0
README.md CHANGED
@@ -21,6 +21,14 @@ tags:
  - (Recommended) SnowDrogito-RpR3-32B_IQ4-XS+Enhanced_Tensors.gguf - largest, highest quality, roughly Q4_K_M size. Quantized with a recalibrated imatrix on Bartowski's dataset + RP + Tao at 8k context, using selective quantization via llama-quantize --tensor-type flags to bump select FFN/self-attention tensors up to Q6/Q8 as <a href="https://github.com/ggml-org/llama.cpp/pull/12718" style="color: #E6E6FA; text-decoration: none;" onmouseover="this.style.color='#ADD8E6'" onmouseout="this.style.color='#E6E6FA'">described here</a>. A sketch of that step follows the list.
  - SnowDrogito-RpRv3-32B_IQ4-XS-Q8InOut-Q56Attn.gguf - Q6 and Q5 attention tensors. This and all quants uploaded prior used the imatrix from Snowdrop.
 
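For reference, here is a minimal sketch of that selective-quantization step, assuming the --tensor-type syntax added in the linked llama.cpp PR; the tensor names, quant levels, and file names are illustrative placeholders rather than the exact recipe behind the +Enhanced_Tensors quant.

```
# Illustrative only: quantize to IQ4_XS while bumping selected tensors to Q6_K/Q8_0.
# Tensor choices and file names are placeholders, not the actual recipe.
./llama-quantize --imatrix imatrix.dat \
  --tensor-type attn_v=q8_0 \
  --tensor-type attn_k=q6_k \
  --tensor-type ffn_down=q6_k \
  SnowDrogito-RpR3-32B-F16.gguf \
  SnowDrogito-RpR3-32B_IQ4-XS+Enhanced_Tensors.gguf IQ4_XS
```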
+ ## <span style="color: #CCFFCC;">MORE SPEED!</span>
+ Improve inference speed by offloading tensors instead of whole layers, as described <a href="https://www.reddit.com/r/LocalLLaMA/comments/1ki7tg7/dont_offload_gguf_layers_offload_tensors_200_gen/" style="color: #E6E6FA; text-decoration: none;" onmouseover="this.style.color='#ADD8E6'" onmouseout="this.style.color='#E6E6FA'">HERE</a>.
+ `--overridetensors "\.[13579]\.ffn_up|\.[1-3][13579]\.ffn_up=CPU"` keeps the ffn_up tensors of odd-numbered layers 1-39 on the CPU (roughly a third of them), which saves enough VRAM to offload all layers on a 24 GB card and took me from 3.9 tps to 10.6 tps. Example below, followed by a quick check of what the pattern matches:
+ ```
+ python koboldcpp.py --gpulayers 65 --quantkv 1 --overridetensors "\.[13579]\.ffn_up|\.[1-3][13579]\.ffn_up=CPU" --threads 10 --usecublas --contextsize 40960 --flashattention --model ~/Downloads/SnowDrogito-RpR3-32B_IQ4-XS+Enhanced_Tensors.gguf
+ ```
+ ...obviously, adjust threads, file paths, etc. for your setup.
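As a sanity check on what that override actually keeps on the CPU, here is a small sketch; it assumes the standard blk.N.ffn_up.weight tensor naming and QwQ-32B's 64 layers, and simply runs the same pattern over generated tensor names.

```
# Show which ffn_up tensors the override pattern keeps on the CPU.
# Assumes blk.N.ffn_up.weight naming and 64 layers; it matches odd-numbered layers 1-39.
printf 'blk.%d.ffn_up.weight\n' $(seq 0 63) | grep -E '\.[13579]\.ffn_up|\.[1-3][13579]\.ffn_up'
```

That works out to 20 of the 64 ffn_up tensors staying on the CPU, which is what frees enough VRAM for the rest of the model to fit on a 24 GB card.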
+
  ## <span style="color: #CCFFCC;">Overview</span>
  SnowDrogito-RpR-32B_IQ4-XS is my shot at an optimized imatrix quantization of my QwQ RP Reasoning merge. The goal is to add smarts to the popular <span style="color: #ADD8E6;">Snowdrop</span> roleplay model by mixing in a little <span style="color: #FF9999;">ArliAI RpR</span> and <span style="color: #00FF00;">Deepcogito</span>. Built with the TIES merge method, it attempts to combine strengths from multiple fine-tuned QwQ-32B models, quantized to IQ4_XS with <span style="color: #E6E6FA;">Q8_0 embeddings and output layers</span> to plus it up just a bit. I'm uploading it because the perplexity was lower and I've been getting more varied, longer, and more creative responses with it, though it may lack some contextual awareness compared to Snowdrop; not sure.
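For readers curious what a TIES merge setup generally looks like, here is a minimal, hypothetical mergekit sketch; the model paths, base model, weights, and densities are placeholders for illustration, not the actual SnowDrogito recipe.

```
# Hypothetical mergekit TIES config and invocation - all paths, weights, and densities are placeholders.
cat > ties-merge.yaml <<'EOF'
merge_method: ties
base_model: Qwen/QwQ-32B
dtype: bfloat16
models:
  - model: trashpanda-org/QwQ-32B-Snowdrop-v0
    parameters: {weight: 0.6, density: 0.6}
  - model: ArliAI/QwQ-32B-ArliAI-RpR-v1
    parameters: {weight: 0.2, density: 0.4}
  - model: deepcogito/cogito-v1-preview-qwen-32B
    parameters: {weight: 0.2, density: 0.4}
EOF
mergekit-yaml ties-merge.yaml ./SnowDrogito-merge --cuda
```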