Running Model "unsloth/DeepSeek-V3-0324-GGUF" with vLLM does not work
1 reply · #11 opened 1 day ago by puppadas

The UD-IQ2_XXS is surprisingly good, but it's good to know that it degrades gradually but significantly after about 1000 tokens.
1 reply · #9 opened 2 days ago by mmbela
671B params or 685B params?
6 replies · #8 opened 3 days ago by createthis
How to run tool use correctly
#7 opened 4 days ago by rockcat-miao
How many bits of quantization are enough for code generation tasks?
1 reply · #5 opened 5 days ago by luweigen
Added IQ1_S version to Ollama
3 replies · #4 opened 7 days ago by Muhammadreza

Is the 2.51bit model using imatrix?
7 replies · #3 opened 7 days ago by daweiba12
Will you release the imatrix.dat used for the quants?
2 replies · #2 opened 7 days ago by tdh111
Would there be dynamic quantized versions like 2.51bit?
8 replies · #1 opened 8 days ago by MotorBottle