Sugoi LLM 32B Ultra (GGUF version)

Unleashing the full potential of the previous Sugoi 32B model, Sugoi 32B Ultra. Benchmarks coming soon.

Format: GGUF
Model size: 32.8B params
Architecture: qwen2

Available quantizations: 2-bit, 4-bit, 8-bit, 16-bit

Model tree for sugoitoolkit/Sugoi-32B-Ultra-GGUF
Base model: Qwen/Qwen2.5-32B. This model is a quantized version of it.