Experimental layer-wise + pruned (layers 5 and 39) quantization of Qwen/Qwen3-30B-A3B

Upload in progress.
The full model card will be available once all quantized models have been uploaded.

Format: GGUF
Model size: 29.3B params
Architecture: qwen3moe

Available quantization levels: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
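
Until the full model card is published, the sketch below shows one way to try a quantized file locally. It assumes llama-cpp-python is installed; the exact GGUF filename is hypothetical and should be checked against the repository's file list once the upload completes.

```python
# Minimal sketch: download one of the quantized GGUF files and run it locally.
# The filename below is an assumption; verify the actual names in the repository.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="eaddario/Qwen3-30B-A3B-pruned-GGUF",
    filename="Qwen3-30B-A3B-pruned-Q4_K_M.gguf",  # hypothetical filename
)

# Context size and GPU offload depend on local hardware.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a mixture-of-experts model is."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

The same file can also be run directly with the llama.cpp CLI (for example, llama-cli -m <file>.gguf) if a Python binding is not needed.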


Model tree for eaddario/Qwen3-30B-A3B-pruned-GGUF
Base model: Qwen/Qwen3-30B-A3B
This model is one of 84 quantized versions of the base model.
