World's Largest Dataset
#16 opened 2 days ago by UJJAWAL-TYAGI
Re-converting the GGUF for MLA? (5 · 2)
#15 opened 7 days ago by Silver267
What tool/framework to test GGUF models? (1)
#14 opened 11 days ago by bobchenyx
Request: DOI
#13 opened 20 days ago by jeffhoule01
How to run ollama using these new quantized weights?
#12 opened 20 days ago by vadimkantorov
Running model "unsloth/DeepSeek-V3-0324-GGUF" with vLLM does not work (2)
#11 opened 22 days ago by puppadas
The UD-IQ2_XXS is surprisingly good, but note that it degrades gradually yet significantly after about 1000 tokens. (1)
#9 opened 23 days ago by mmbela
671B params or 685B params? (6)
#8 opened 24 days ago by createthis
How to run tool use correctly
#7 opened 24 days ago by rockcat-miao
How many bits of quantization are enough for code generation tasks? (1)
#5 opened 25 days ago by luweigen
Added IQ1_S version to Ollama (3)
#4 opened 27 days ago by Muhammadreza
Is the 2.51bit model using imatrix? (7)
#3 opened 28 days ago by daweiba12
Will you release the imatrix.dat used for the quants? (2)
#2 opened 28 days ago by tdh111
Would there be Dynamic Quantized Versions like 2.51bit? (8)
#1 opened 28 days ago by MotorBottle