bartowski committed
Commit 73f33b9 · verified · 1 Parent(s): 848d678

Update README.md

Files changed (1):
  1. README.md +2 -1
README.md CHANGED
@@ -53,8 +53,9 @@ IQ2_XXS may not be final, the size increase is quite substantial so I may want t
 | [DeepSeek-V3-0324-Q2_K_L.gguf](https://huggingface.co/bartowski/deepseek-ai_DeepSeek-V3-0324-GGUF/tree/main/deepseek-ai_DeepSeek-V3-0324-Q2_K_L) | Q2_K_L | 244.93GB | true | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
 | [DeepSeek-V3-0324-IQ2_M.gguf](https://huggingface.co/bartowski/deepseek-ai_DeepSeek-V3-0324-GGUF/tree/main/deepseek-ai_DeepSeek-V3-0324-IQ2_M) | IQ2_M | 217.43GB | true | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
 | [DeepSeek-V3-0324-IQ2_S.gguf](https://huggingface.co/bartowski/deepseek-ai_DeepSeek-V3-0324-GGUF/tree/main/deepseek-ai_DeepSeek-V3-0324-IQ2_S) | IQ2_S | 197.00GB | true | Low quality, uses SOTA techniques to be usable. |
-| [DeepSeek-V3-0324-IQ2_XXS-V2.gguf](https://huggingface.co/bartowski/deepseek-ai_DeepSeek-V3-0324-GGUF/tree/main/deepseek-ai_DeepSeek-V3-0324-IQ2_XXS-V2) | IQ2_XXS | 188.95GB | true | Attempted to modify tensor quant levels for better performance. |
+| [DeepSeek-V3-0324-IQ2_XXS-V2.gguf](https://huggingface.co/bartowski/deepseek-ai_DeepSeek-V3-0324-GGUF/tree/main/deepseek-ai_DeepSeek-V3-0324-IQ2_XXS-V2) | IQ2_XXS | 188.95GB | true | *Being replaced soon by 179GB version* Attempted to modify tensor quant levels for better performance. |
 | [DeepSeek-V3-0324-IQ2_XXS.gguf](https://huggingface.co/bartowski/deepseek-ai_DeepSeek-V3-0324-GGUF/tree/main/deepseek-ai_DeepSeek-V3-0324-IQ2_XXS) | IQ2_XXS | 174.43GB | true | Very low quality, uses SOTA techniques to be usable. |
+| [DeepSeek-V3-0324-IQ1_M-V2.gguf](https://huggingface.co/bartowski/deepseek-ai_DeepSeek-V3-0324-GGUF/tree/main/deepseek-ai_DeepSeek-V3-0324-IQ1_M-V2) | IQ1_M | 154.78GB | true | Attempted to modify tensor quant levels for better performance. Extremely low quality, *not* recommended. |
 | [DeepSeek-V3-0324-IQ1_M.gguf](https://huggingface.co/bartowski/deepseek-ai_DeepSeek-V3-0324-GGUF/tree/main/deepseek-ai_DeepSeek-V3-0324-IQ1_M) | IQ1_M | 148.88GB | true | Extremely low quality, *not* recommended. |
 | [DeepSeek-V3-0324-IQ1_S.gguf](https://huggingface.co/bartowski/deepseek-ai_DeepSeek-V3-0324-GGUF/tree/main/deepseek-ai_DeepSeek-V3-0324-IQ1_S) | IQ1_S | 133.56GB | true | Extremely low quality, *not* recommended. |
 
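The table's links point at folder views (`tree/main/...`) because each of these quants is split into multiple `.gguf` parts. For reference, a minimal stdlib sketch that maps a file path inside the repo to its direct-download URL; Hugging Face serves raw files from the `/resolve/<revision>/` endpoint, and `REPO_ID` is taken from the links above:

```python
# Build a direct-download URL for any file in the quant repo listed above.
# Raw files are served from huggingface.co/<repo>/resolve/<revision>/<path>.
REPO_ID = "bartowski/deepseek-ai_DeepSeek-V3-0324-GGUF"

def download_url(path_in_repo: str, revision: str = "main") -> str:
    """Return the direct-download URL for one file in the repo."""
    return f"https://huggingface.co/{REPO_ID}/resolve/{revision}/{path_in_repo}"
```

In practice, `huggingface-cli download` with an `--include '<quant folder>/*'` pattern is the usual way to fetch a whole split quant at once, and llama.cpp can load a split GGUF when pointed at its first part.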