ThijsL202/win10_Qwen3-4B-only-tulu-3-sft-mixture-DolphinLabeled-step-190-GGUF
This model was converted to GGUF format from win10/Qwen3-4B-only-tulu-3-sft-mixture-DolphinLabeled-step-190 using llama.cpp.
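The exact conversion commands used for this repo are not documented here; the following is a minimal sketch of a typical llama.cpp workflow, with the checkpoint path and output file names as placeholders:

```bash
# Convert the original Hugging Face checkpoint to a full-precision GGUF file
# (directory and file names below are placeholders, not the ones used for this repo)
python convert_hf_to_gguf.py ./Qwen3-4B-only-tulu-3-sft-mixture-DolphinLabeled-step-190 \
  --outfile model-f16.gguf --outtype f16

# Quantize the f16 GGUF into one of the variants listed below, e.g. Q4_K_M
./llama-quantize model-f16.gguf model.Q4_K_M.gguf Q4_K_M
```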
Model Variants
- Q8_0: 4.69 GB, ~8.0 bits per weight - Largest, best quality
- Q6_K: 3.63 GB, ~6.56 bits per weight - Large, excellent quality
- Q5_K_M: 3.16 GB, ~5.69 bits per weight - Medium, very good quality
- Q5_0: 3.09 GB, ~5.0 bits per weight - Legacy format
- Q5_K_S: 3.09 GB, ~5.52 bits per weight - Small, high quality
- Q4_K_M: 2.72 GB, ~4.37 bits per weight - Medium, recommended
- Q4_K_S: 2.60 GB, ~4.14 bits per weight - Small, good quality
- Q4_0: 2.59 GB, ~4.0 bits per weight - Legacy format
- Q3_K_L: 2.41 GB, ~3.82 bits per weight - Large, better quality
- Q3_K_M: 2.24 GB, ~3.66 bits per weight - Medium, balanced
- Q3_K_S: 2.05 GB, ~3.4 bits per weight - Small, low quality
- Q2_K: 1.80 GB, ~2.6 bits per weight - Smallest, lowest quality
Total Size: 31.73 GB (12 files)
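To check which of these variant files are actually present in the repo, you can query the public Hub file-listing API directly; this is a hedged example (the `jq` filter is just one way to format the response):

```bash
# List the .gguf files available in this repo via the Hugging Face Hub API
curl -s "https://huggingface.co/api/models/ThijsL202/win10_Qwen3-4B-only-tulu-3-sft-mixture-DolphinLabeled-step-190-GGUF/tree/main" \
  | jq -r '.[] | select(.path | endswith(".gguf")) | "\(.path)\t\(.size)"'
```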
Usage
```bash
# Download a specific variant
huggingface-cli download ThijsL202/win10_Qwen3-4B-only-tulu-3-sft-mixture-DolphinLabeled-step-190-GGUF win10_Qwen3-4B-only-tulu-3-sft-mixture-DolphinLabeled-step-190.Q2_K.gguf

# Use with llama.cpp
./llama-cli -m win10_Qwen3-4B-only-tulu-3-sft-mixture-DolphinLabeled-step-190.Q2_K.gguf -p "Your prompt here"
```
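Beyond the one-off CLI call above, llama.cpp also ships llama-server, which exposes an OpenAI-compatible HTTP endpoint. A minimal sketch follows; the variant, port, and context size are arbitrary choices here, not recommendations from this repo:

```bash
# Serve a quantized variant over an OpenAI-compatible HTTP API
./llama-server -m win10_Qwen3-4B-only-tulu-3-sft-mixture-DolphinLabeled-step-190.Q4_K_M.gguf -c 4096 --port 8080

# Query it from another shell
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Your prompt here"}]}'
```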
Original Model
win10/Qwen3-4B-only-tulu-3-sft-mixture-DolphinLabeled-step-190