SimmonsSongHW/Qwen2.5-14B-Instruct-GGUF
Tags: GGUF, conversational
License: apache-2.0
Qwen2.5-14B-Instruct Quantization with Llama.cpp
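The card itself does not describe how these quantizations were produced. As a hedged sketch of the usual llama.cpp workflow (the file names below are illustrative, and script/binary names vary between llama.cpp versions):

```shell
# Illustrative llama.cpp quantization workflow (not the author's exact
# commands). Assumes the original Qwen/Qwen2.5-14B-Instruct weights have
# been downloaded into ./Qwen2.5-14B-Instruct and llama.cpp is built.

# 1. Convert the Hugging Face checkpoint to a full-precision GGUF file.
python convert_hf_to_gguf.py ./Qwen2.5-14B-Instruct \
    --outfile qwen2.5-14b-instruct-f16.gguf --outtype f16

# 2. Quantize the f16 GGUF to a smaller variant, e.g. Q4_K
#    (in llama-quantize, Q4_K is an alias for Q4_K_M).
./llama-quantize qwen2.5-14b-instruct-f16.gguf \
    qwen2.5-14b-instruct-Q4_K.gguf Q4_K
```

Repeating step 2 with a different type name (Q2_K, Q5_0, Q8_0, …) yields the other variants listed below.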
Downloads last month: 99
Format: GGUF
Model size: 14.8B params
Architecture: qwen2
| Bits  | Variant | Size    |
|-------|---------|---------|
| 2-bit | Q2_K    | 5.77 GB |
| 3-bit | Q3_K    | 7.34 GB |
| 4-bit | Q4_0    | 8.52 GB |
| 4-bit | Q4_K    | 8.99 GB |
| 5-bit | Q5_0    | 10.3 GB |
| 5-bit | Q5_K    | 10.5 GB |
| 6-bit | Q6_K    | 12.1 GB |
| 8-bit | Q8_0    | 15.7 GB |

One additional variant is available in the repository but not listed here.
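To use one of these variants, a typical approach is to fetch the single GGUF file and run it with llama.cpp's CLI. A minimal sketch, assuming a hypothetical file name (the actual file names in the repository are not listed on this page):

```shell
# Download one quantized file from the repo (file name is illustrative).
huggingface-cli download SimmonsSongHW/Qwen2.5-14B-Instruct-GGUF \
    qwen2.5-14b-instruct-Q4_K.gguf --local-dir .

# Run it interactively with llama.cpp's CLI (binary name varies by
# llama.cpp version; -cnv enables conversation mode).
./llama-cli -m qwen2.5-14b-instruct-Q4_K.gguf -cnv
```

Pick a variant whose size fits your available RAM/VRAM; larger quants (Q6_K, Q8_0) generally preserve more quality than Q2_K/Q3_K.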
Inference Providers: this model isn't deployed by any Inference Provider.
HF Inference deployability: the model has no library tag.
Model tree for SimmonsSongHW/Qwen2.5-14B-Instruct-GGUF:
Base model: Qwen/Qwen2.5-14B
Finetuned: Qwen/Qwen2.5-14B-Instruct
Quantized: Qwen/Qwen2.5-14B-Instruct-GGUF
Quantized (1): this model
Collection including SimmonsSongHW/Qwen2.5-14B-Instruct-GGUF:
Qwen2.5-Quants (collection, 4 items, updated 26 days ago)