IntelligentEstate/GQwexx-4B-Q6_K-GGUF


A model made to help edge devices, mini PCs, and Pi-class devices conquer the world. Great at seafaring navionics and port instructions, game walkthroughs, and more with proper prompting, and scary with S-AGI. Adjustments to come. Use with reckless abandon.

This model was converted to GGUF format from ValiantLabs/Qwen3-4B-Esper3 using llama.cpp. Refer to the original model card for more details on the model.
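
For reference, a conversion like this can generally be reproduced with llama.cpp's own tooling. The following is only a sketch: the checkpoint directory and output file names are placeholders, not the exact ones used for this repo.

git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt
# Convert the original ValiantLabs/Qwen3-4B-Esper3 checkpoint to a full-precision GGUF
python llama.cpp/convert_hf_to_gguf.py ./Qwen3-4B-Esper3 --outfile esper3-f16.gguf --outtype f16
# Quantize the f16 GGUF down to Q6_K (llama-quantize comes with a llama.cpp build or the brew install below)
llama-quantize esper3-f16.gguf GQwexx-4B-Q6_K.gguf Q6_K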

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

brew install llama.cpp

Invoke the llama.cpp server or the CLI.
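
For example, both can pull the quantized file straight from the Hub. The --hf-file name below is assumed from the repo name and may differ from the actual file in this repo.

# Run a one-off prompt with the CLI
llama-cli --hf-repo IntelligentEstate/GQwexx-4B-Q6_K-GGUF --hf-file gqwexx-4b-q6_k.gguf -p "List the standard port entry signals for a small vessel."

# Or serve an OpenAI-compatible endpoint locally
llama-server --hf-repo IntelligentEstate/GQwexx-4B-Q6_K-GGUF --hf-file gqwexx-4b-q6_k.gguf -c 2048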

Format: GGUF
Model size: 4.02B params
Architecture: qwen3
Quantization: 6-bit (Q6_K)

Model tree for IntelligentEstate/GQwexx-4B-Q6_K-GGUF

Base model: Qwen/Qwen3-4B-Base
Finetuned: Qwen/Qwen3-4B
Quantized: this model

Datasets used to train IntelligentEstate/GQwexx-4B-Q6_K-GGUF