Project-Briar Rabbit: Hasenpfeffer 4B Q8, from Hoptimizer's fine work


This model is not censored and is ready for deployment. It has not yet been tested on a closed system, so please use and transfer with caution. It is a Qwen3 mix intended for many applications on systems with 8 GB of RAM or more; no GPU is needed.
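Since the target is CPU-only machines with 8 GB of RAM or more, here is a minimal sketch using llama-cpp-python. The local GGUF filename, context size, and thread count are assumptions; adjust them to match your download and hardware.

```python
# Minimal CPU-only sketch with llama-cpp-python (pip install llama-cpp-python).
# The GGUF filename below is an assumption; use whatever file you downloaded
# from the IntelligentEstate/Hasenpfeffer-4B-Q8_0-GGUF repository.
from llama_cpp import Llama

llm = Llama(
    model_path="./hasenpfeffer-4b-q8_0.gguf",  # assumed local filename
    n_ctx=4096,       # context window; raise it if you have spare RAM
    n_threads=4,      # match your physical CPU cores
    n_gpu_layers=0,   # CPU only, per the 8 GB RAM / no-GPU target
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a one-line summary of GGUF."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```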

IntelligentEstate/Hasenpfeffer-4B-Q8_0-GGUF

This model was converted to GGUF format from bunnycore/Qwen3-4B-Mixture.
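If you would rather pull the quantized file straight from the Hub than keep a local copy, a hedged sketch with huggingface_hub follows; the exact GGUF filename is an assumption, so check the repository's file listing.

```python
# Sketch: download the quantized file from the Hub (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="IntelligentEstate/Hasenpfeffer-4B-Q8_0-GGUF",
    filename="hasenpfeffer-4b-q8_0.gguf",  # assumed filename
)
print(gguf_path)  # pass this path to llama.cpp or llama-cpp-python
```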

Initial testing is being done in various applications, and many show promise. Eat your fill.

Format: GGUF
Model size: 3.64B params
Architecture: qwen3
Quantization: 8-bit (Q8_0)

