ubergarm committed on
Commit
ebbf1d1
·
1 Parent(s): e2877ac

initial commit

Files changed (2)
  1. .gitattributes +3 -0
  2. README.md +48 -3
.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ imatrix-*.dat filter=lfs diff=lfs merge=lfs -text
+ *.gguf filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,48 @@
- ---
- license: mit
- ---
+ ---
+ quantized_by: ubergarm
+ pipeline_tag: text-generation
+ base_model: deepcogito/cogito-v2-preview-deepseek-671B-MoE
+ license: mit
+ base_model_relation: quantized
+ tags:
+ - mla
+ - imatrix
+ - conversational
+ - deepseek_v3
+ - ik_llama.cpp
+ ---
+
+ *WIP* This big one will take a bit, so please be patient as it cooks and uploads!
+
+ ## `ik_llama.cpp` imatrix Quantizations of deepcogito/cogito-v2-preview-deepseek-671B-MoE
+ This quant collection **REQUIRES** the [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/) fork to support ik's latest SOTA quants and optimizations! Do **not** download these big files and expect them to run on mainline vanilla llama.cpp, ollama, LM Studio, KoboldCpp, etc.!
+
+ *NOTE* `ik_llama.cpp` can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc., if you want to try it out before downloading my quants.
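Since mainline llama.cpp won't load these, a rough sketch of building the fork and serving a quant might look like the following; the cmake option and server flags here are illustrative assumptions to adapt (check the ik_llama.cpp discussions for current recommendations), not a tested recipe:

```shell
# Build the ik_llama.cpp fork (standard llama.cpp-style cmake flow;
# pass the CUDA flag only if you actually have a GPU).
git clone https://github.com/ikawrakow/ik_llama.cpp
cd ik_llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j

# Serve a downloaded GGUF; the model path, context size, and flags
# below are illustrative placeholders, not a tuned configuration.
./build/bin/llama-server \
    --model /path/to/your-downloaded-quant.gguf \
    --ctx-size 8192 \
    -fa -fmoe -mla 3
```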
+
+ Some of ik's new quants are supported by the [Nexesenex/croco.cpp](https://github.com/Nexesenex/croco.cpp) fork of KoboldCpp.
+
+ These quants provide best-in-class perplexity for a given memory footprint.
+
+ ## Big Thanks
+ Shout out to Wendell and the **Level1Techs** crew, the community [Forums](https://forum.level1techs.com/t/deepseek-deep-dive-r1-at-home/225826), and the [YouTube Channel](https://www.youtube.com/@Level1Techs)! **BIG thanks** for providing **BIG hardware** expertise and access to run these experiments and make these great quants available to the community!!!
+
+ Also thanks to all the folks in the quanting and inferencing community on the [BeaverAI Club Discord](https://huggingface.co/BeaverAI) and on [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/) for the tips and tricks that help everyone run, test, and benchmark all the fun new models!
+
+ ## Quant Collection
+ Perplexity computed against *wiki.test.raw*.
+
+ ![Perplexity Chart](images/perplexity.png "Chart showing Perplexity improving as BPW increases.")
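For reference, the "Final estimate: PPL" numbers reported here are perplexities in the usual llama.cpp style: the exponential of the mean negative log-likelihood per token, where lower is better. A toy sketch of just the arithmetic (the per-token probabilities are made up, not real model output):

```shell
# Perplexity = exp(mean of -log(p) over all token probabilities).
# Made-up per-token probabilities standing in for real model output:
probs="0.25 0.5 0.125 0.5"

echo "$probs" | awk '{
    for (i = 1; i <= NF; i++) nll += -log($i)   # accumulate negative log-likelihood
    printf "%.4f\n", exp(nll / NF)              # exponentiate the mean
}'
# → 3.3636 (i.e. 2^1.75, since the probs multiply to 1/128 over 4 tokens)
```

A model that assigned every token probability 0.5 would score exactly 2.0 by the same formula.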
+
+ These first two are just test quants for baseline perplexity comparison:
+ * `Q8_0` TODO GiB (TODO BPW)
+   - Final estimate: PPL = TODO
+ * `Q4_0` TODO GiB (TODO BPW)
+   - Final estimate: PPL = TODO
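The GiB and BPW (bits per weight) figures above are tied together by the parameter count: BPW is just total file bits divided by total weights. A back-of-envelope sketch (the 300 GiB file size is purely illustrative; the 671e9 comes from the model name):

```shell
# BPW = (file size in bytes * 8 bits) / number of parameters.
# Illustrative numbers: a hypothetical 300 GiB quant of a 671B-parameter model.
awk 'BEGIN {
    gib    = 300             # file size in GiB (made up for illustration)
    params = 671e9           # total parameters, per the model name
    printf "%.3f BPW\n", gib * 1024^3 * 8 / params
}'
```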
+
+ TODO
+
+ ## References
+ * [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp)
+ * [Getting Started Guide (already out of date lol)](https://github.com/ikawrakow/ik_llama.cpp/discussions/258)
+ * [ubergarm-imatrix-calibration-corpus-v02.txt](https://gist.github.com/ubergarm/edfeb3ff9c6ec8b49e88cdf627b0711a?permalink_comment_id=5682584#gistcomment-5682584)
+ * [deepcogito blog post](https://www.deepcogito.com/research/cogito-v2-preview)