marcelone/Jinx-Qwen3-14B
GGUF · conversational
License: apache-2.0
Files and versions (branch: main)
1 contributor · History: 17 commits
Latest commit: b08f978 (verified) by marcelone, "Upload imatrix.gguf", 6 days ago
File                             | Size     | Last commit message                                               | Date
---------------------------------|----------|-------------------------------------------------------------------|-----------
.gitattributes                   | 2.46 kB  | Upload imatrix.gguf                                               | 6 days ago
Jinx-Qwen3-14B-F16.gguf          | 29.5 GB  | Upload folder using huggingface_hub                               | 7 days ago
Jinx-Qwen3-14B-Q4_K_M.gguf       | 9 GB     | Rename Jinx-Qwen3-14B-Q4_K.gguf to Jinx-Qwen3-14B-Q4_K_M.gguf     | 7 days ago
Jinx-Qwen3-14B-Q4_K_M_L.gguf     | 9 GB     | Rename Jinx-Qwen3-14B-Q4_1_L.gguf to Jinx-Qwen3-14B-Q4_K_M_L.gguf | 7 days ago
Jinx-Qwen3-14B-Q4_K_M_XL.gguf    | 9.2 GB   | Upload Jinx-Qwen3-14B-Q4_K_M_XL.gguf                              | 7 days ago
Jinx-Qwen3-14B-Q4_K_M_XXL.gguf   | 9.58 GB  | Rename Jinx-Qwen3-14B-Q4_2_XL.gguf to Jinx-Qwen3-14B-Q4_K_M_XXL.gguf | 7 days ago
Jinx-Qwen3-14B-Q4_K_M_XXXL.gguf  | 11 GB    | Rename Jinx-Qwen3-14B-Q4_3_XL.gguf to Jinx-Qwen3-14B-Q4_K_M_XXXL.gguf | 7 days ago
Jinx-Qwen3-14B-Q5_K_M_L.gguf     | 10.6 GB  | Upload Jinx-Qwen3-14B-Q5_K_M_L.gguf                               | 7 days ago
README.md                        | 94 Bytes | Update README.md                                                  | 7 days ago
imatrix.gguf                     | 7.74 MB  | Upload imatrix.gguf                                               | 6 days ago
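The files above can be fetched individually. A minimal sketch, assuming the standard Hugging Face "resolve" URL pattern (`https://huggingface.co/<repo>/resolve/<revision>/<file>`); the `gguf_url` helper is illustrative, not part of the repo:

```python
# Sketch: build direct-download URLs for the GGUF files listed above.
# Assumes the standard Hugging Face "resolve" URL pattern. For robust
# downloads (caching, resume, Xet-backed storage) the huggingface_hub
# library's hf_hub_download(repo_id=..., filename=...) is the usual choice.

REPO_ID = "marcelone/Jinx-Qwen3-14B"


def gguf_url(filename: str, revision: str = "main") -> str:
    """Return the direct-download URL for one file in this repo."""
    return f"https://huggingface.co/{REPO_ID}/resolve/{revision}/{filename}"


if __name__ == "__main__":
    # e.g. the ~9 GB Q4_K_M quantization from the table above
    print(gguf_url("Jinx-Qwen3-14B-Q4_K_M.gguf"))
```

The smaller Q4_K_M variants trade some quality for memory; the F16 file is the unquantized reference and needs roughly 30 GB of disk.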