#15 · Amazing, uses a humble 255 GB of RAM on my ancient 2014 Xeon PC for Q8 · opened about 2 hours ago by krustik
#14 · Quality benefits of UD-Q4_K_XL vs Q5_K_M vs Q6_K for this model? · opened about 7 hours ago by ideosphere
#13 · Why were there changes in the past 0-24h here but not on Qwen3-235B-A22B-128K-GGUF? · opened about 7 hours ago by ideosphere
#12 · Finetuning possible? · 1 reply · opened about 15 hours ago by edgeinfinity
#11 · Why are XL quants smaller than M quants? · 3 replies · opened about 22 hours ago by ChuckMcSneed
#10 · config.json "max_position_embeddings": 40960 · 2 replies · opened 2 days ago by koushd
#9 · How to disable <think> with llama.cpp · 4 replies · opened 3 days ago by bobchenyx (see the sketch after this list)
#8 · It seems like the model has serious repetition issues (both GGUF and on OpenRouter) · 4 replies · opened 3 days ago by roadtoagi
#7 · [Qwen3-235B-A22B-UD-Q4_K_XL.gguf] UD quant seems to be invalid · 2 replies · opened 3 days ago by XelotX
#6 · Test on 3090 + Tesla P40 (48 GB VRAM total) + 64 GB RAM (Q2_K) · 1 reply · opened 3 days ago by roadtoagi
#5 · UD quants please 🥺 · 2 replies · opened 3 days ago by Ainonake
#4 · ValueError: Cannot use chat template functions because tokenizer.chat_template is not set and no template argument was passed! · 1 reply · opened 3 days ago by shakhizat
#3 · Do the Q4 quants work? On the 30B MoE it says not to use them. · 2 replies · opened 3 days ago by Lockout
#2 · UD quants missing some files · 3 reactions · 6 replies · opened 3 days ago by MLDataScientist
#1 · Add languages tag · opened 3 days ago by de-francophones
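
Regarding #9 above (disabling <think> output with llama.cpp): a minimal sketch, assuming a llama-server instance exposing its OpenAI-compatible API at a placeholder local URL and relying on Qwen3's documented "/no_think" soft switch in the user message. The URL, model name, and prompt are illustrative, not values from this repo.

```python
# Minimal sketch, assuming:
#   - a llama-server (llama.cpp) instance with its OpenAI-compatible API
#     reachable at the placeholder URL below
#   - Qwen3's documented "/no_think" soft switch to suppress thinking content
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="Qwen3-235B-A22B",  # placeholder; llama-server serves whichever model it loaded
    messages=[
        # "/no_think" asks Qwen3 to skip the reasoning block; an empty
        # <think></think> pair may still appear in the output.
        {"role": "user", "content": "What is a GGUF file? /no_think"},
    ],
    temperature=0.7,
)
print(resp.choices[0].message.content)
```

Stripping any residual empty <think></think> tags from the response is left to the caller.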
