marcelone/Qwen3-4B-Instruct-2507-gguf
Text Generation · GGUF · 7 languages · imatrix · conversational
License: apache-2.0
Files and versions (branch: main)
1 contributor · History: 12 commits
Latest commit: "Update README.md" by marcelone (4fe65ae, verified), 4 days ago
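Most entries in the file list below carry the commit message "Upload folder using huggingface_hub". A minimal sketch of how such a folder upload is typically done with the huggingface_hub Python library; only the repo id comes from this page, the local folder path and commit message are hypothetical:

```python
# Minimal sketch, assuming the huggingface_hub library is installed and an
# auth token is available (e.g. from `huggingface-cli login`).
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="./Qwen3-4B-Instruct-2507-gguf",      # hypothetical local folder of .gguf files
    repo_id="marcelone/Qwen3-4B-Instruct-2507-gguf",  # repo shown on this page
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```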
| File | Size | Last commit message | Updated |
|------|------|---------------------|---------|
| .gitattributes | 3.22 kB | Upload folder using huggingface_hub | 4 days ago |
| LICENSE | 11.3 kB | Upload LICENSE | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-BF16.gguf | 8.05 GB | Upload Qwen3-4B-Instruct-2507-gguf-BF16.gguf | 5 days ago |
| Qwen3-4B-Instruct-2507-gguf-BF16_HXL.gguf | 8.83 GB | Upload folder using huggingface_hub | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-IQ3_M_FXL.gguf | 2.06 GB | Upload folder using huggingface_hub | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-IQ3_M_GXL.gguf | 2.42 GB | Upload folder using huggingface_hub | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-IQ3_M_HXL.gguf | 3.2 GB | Upload folder using huggingface_hub | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-IQ4_NL_GXL.gguf | 2.84 GB | Upload folder using huggingface_hub | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-IQ4_NL_HXL.gguf | 3.62 GB | Upload folder using huggingface_hub | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-IQ4_XS_FXL.gguf | 2.36 GB | Upload folder using huggingface_hub | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-IQ4_XS_GXL.gguf | 2.73 GB | Upload folder using huggingface_hub | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-IQ4_XS_HXL.gguf | 3.51 GB | Upload folder using huggingface_hub | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-Q4_K_M_GXL.gguf | 2.96 GB | Upload folder using huggingface_hub | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-Q4_K_M_HXL.gguf | 3.73 GB | Upload folder using huggingface_hub | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-Q5_K_M_FXL.gguf | 2.98 GB | Upload folder using huggingface_hub | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-Q5_K_M_GXL.gguf | 3.35 GB | Upload folder using huggingface_hub | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-Q5_K_M_HXL.gguf | 4.13 GB | Upload folder using huggingface_hub | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-Q6_K_FXL.gguf | 3.4 GB | Upload folder using huggingface_hub | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-Q6_K_GXL.gguf | 3.77 GB | Upload folder using huggingface_hub | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-Q6_K_HXL.gguf | 4.54 GB | Upload folder using huggingface_hub | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-Q8_0_GXL.gguf | 4.65 GB | Upload folder using huggingface_hub | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-Q8_0_HXL.gguf | 5.42 GB | Upload folder using huggingface_hub | 4 days ago |
| Qwen3-4B-Instruct-2507-gguf-f32.gguf | 16.1 GB | Upload folder using huggingface_hub | 4 days ago |
| README.md | 6.48 kB | Update README.md | 4 days ago |
| imatrix.gguf | 3.87 MB | Upload imatrix.gguf | 4 days ago |
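The quantized .gguf files listed above are plain downloadable artifacts. A minimal sketch of fetching one with the huggingface_hub library; the repo id and filename are taken from this listing, and any of the other quantizations would work the same way:

```python
# Minimal sketch, assuming the huggingface_hub library is installed.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="marcelone/Qwen3-4B-Instruct-2507-gguf",
    filename="Qwen3-4B-Instruct-2507-gguf-Q4_K_M_GXL.gguf",  # one of the files listed above
)
print(path)  # local cache path; loadable by any GGUF-compatible runtime such as llama.cpp
```

The same pattern applies to every file in the table; pick the quantization whose size fits your memory budget.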