LlamaEdge-compatible quants for SmolVLM2 models.
AI & ML interests
Run open-source LLMs locally, across CPU and GPU, in Rust and Wasm, without changing the binary!
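As a concrete sketch of that claim, here is a minimal LlamaEdge quickstart against one of the repos listed below. The quant file name, install-script URL, and release URL are assumptions based on the upstream LlamaEdge/WasmEdge docs, so check the repo's file list before copying:

```shell
# Install the WasmEdge runtime with its GGML (llama.cpp) plugin
# (install-script URL is an assumption; see the WasmEdge docs)
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install_v2.sh | bash

# Download one quant from a second-state repo
# (file name is an assumption; pick any .gguf listed in the repo)
curl -LO https://huggingface.co/second-state/Qwen2.5-7B-Instruct-GGUF/resolve/main/Qwen2.5-7B-Instruct-Q5_K_M.gguf

# Download the LlamaEdge API server: one portable .wasm binary,
# then serve an OpenAI-compatible endpoint from the model
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-api-server.wasm
wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:Qwen2.5-7B-Instruct-Q5_K_M.gguf \
  llama-api-server.wasm \
  --prompt-template chatml \
  --model-name Qwen2.5-7B-Instruct
```

The same `.wasm` file runs unchanged on CPU-only and GPU machines because the GGML plugin selects the backend at load time; that is the "without changing the binary" point above.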
Recent Activity
LlamaEdge-compatible quants for Qwen3 models.
LlamaEdge-compatible quants for EXAONE-3.5 models.
LlamaEdge-compatible quants for Gemma-3-it models.
- second-state/gemma-3-27b-it-GGUF: Image-Text-to-Text • 27B • Updated • 1.19k
- second-state/gemma-3-12b-it-GGUF: Image-Text-to-Text • 12B • Updated • 1.34k • 1
- second-state/gemma-3-4b-it-GGUF: Image-Text-to-Text • 4B • Updated • 1.34k
- second-state/gemma-3-1b-it-GGUF: Text Generation • 1.0B • Updated • 1.3k
- second-state/stable-diffusion-v1-5-GGUF: Text-to-Image • 1B • Updated • 15.1k • 11
- second-state/stable-diffusion-v-1-4-GGUF: Text-to-Image • 1B • Updated • 505 • 3
- second-state/stable-diffusion-3.5-medium-GGUF: Text-to-Image • 0.7B • Updated • 3.91k • 9
- second-state/stable-diffusion-3.5-large-GGUF: Text-to-Image • 0.7B • Updated • 4.44k • 8
LlamaEdge-compatible quants for Qwen2-VL models.
LlamaEdge-compatible quants for tool-use models.
- second-state/Llama-3-Groq-8B-Tool-Use-GGUF: Text Generation • 8B • Updated • 2.27k • 2
- second-state/Llama-3-Groq-70B-Tool-Use-GGUF: Text Generation • 71B • Updated • 154 • 2
- second-state/Hermes-2-Pro-Llama-3-8B-GGUF: Text Generation • 8B • Updated • 1.33k • 2
- second-state/Nemotron-Mini-4B-Instruct-GGUF: 4B • Updated • 79
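Tool-use models like these are served behind LlamaEdge's OpenAI-compatible API. As a hedged sketch (the port, tool name, and JSON schema here are illustrative assumptions, and it presumes `llama-api-server.wasm` is already running with one of the models above loaded), a function-calling request looks like:

```shell
# Assumes an llama-api-server.wasm instance is serving one of the
# tool-use models above on localhost:8080 (address is an assumption)
curl -s http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "Llama-3-Groq-8B-Tool-Use",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'
```

A tool-trained model should answer with a `tool_calls` entry naming `get_weather`; the client then runs the function and feeds the result back as a `tool`-role message.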
LlamaEdge-compatible quants for Llama 3.2 3B and 1B Instruct models.
LlamaEdge-compatible quants for Yi-1.5 chat models.
- second-state/Yi-1.5-9B-Chat-16K-GGUF: Text Generation • 9B • Updated • 219 • 5
- second-state/Yi-1.5-34B-Chat-16K-GGUF: Text Generation • 34B • Updated • 102 • 4
- second-state/Yi-1.5-9B-Chat-GGUF: Text Generation • 9B • Updated • 1.27k • 8
- second-state/Yi-1.5-6B-Chat-GGUF: Text Generation • 6B • Updated • 1.12k • 4
LlamaEdge-compatible quants for Qwen2.5-VL models.
LlamaEdge-compatible quants for Tessa-T1 models.
LlamaEdge-compatible quants for EXAONE-Deep models.
LlamaEdge-compatible quants for DeepSeek-R1 distilled models.
- second-state/DeepSeek-R1-Distill-Qwen-1.5B-GGUF: Text Generation • 2B • Updated • 1.19k
- second-state/DeepSeek-R1-Distill-Qwen-7B-GGUF: Text Generation • 8B • Updated • 1.15k • 1
- second-state/DeepSeek-R1-Distill-Qwen-14B-GGUF: Text Generation • 15B • Updated • 84
- second-state/DeepSeek-R1-Distill-Qwen-32B-GGUF: Text Generation • 33B • Updated • 113
LlamaEdge-compatible quants for Falcon3-Instruct models.
- second-state/Falcon3-10B-Instruct-GGUF: Text Generation • 10B • Updated • 92 • 1
- second-state/Falcon3-7B-Instruct-GGUF: Text Generation • 7B • Updated • 191 • 2
- second-state/Falcon3-3B-Instruct-GGUF: Text Generation • 3B • Updated • 59
- second-state/Falcon3-1B-Instruct-GGUF: Text Generation • 2B • Updated • 155
LlamaEdge-compatible quants for Qwen2.5-Coder models.
- second-state/Qwen2.5-Coder-0.5B-Instruct-GGUF: Text Generation • 0.5B • Updated • 104
- second-state/Qwen2.5-Coder-3B-Instruct-GGUF: Text Generation • 3B • Updated • 1.18k
- second-state/Qwen2.5-Coder-14B-Instruct-GGUF: Text Generation • 15B • Updated • 108
- second-state/Qwen2.5-Coder-32B-Instruct-GGUF: Text Generation • 33B • Updated • 1.13k
LlamaEdge-compatible quants for InternLM-2.5 models.
LlamaEdge-compatible quants for Qwen 2.5 instruct and coder models.
- second-state/Qwen2.5-72B-Instruct-GGUF: Text Generation • 73B • Updated • 1.13k • 2
- second-state/Qwen2.5-32B-Instruct-GGUF: Text Generation • 33B • Updated • 1.03k • 1
- second-state/Qwen2.5-14B-Instruct-GGUF: Text Generation • 15B • Updated • 1.79k • 1
- second-state/Qwen2.5-7B-Instruct-GGUF: Text Generation • 8B • Updated • 1.07k
LlamaEdge-compatible quants for FLUX.1 models.
- second-state/FLUX.1-schnell-GGUF: Text-to-Image • Updated • 556 • 11
- second-state/FLUX.1-dev-GGUF: Text-to-Image • Updated • 758 • 10
- second-state/FLUX.1-Redux-dev-GGUF: Text-to-Image • Updated • 272 • 11
- second-state/FLUX.1-Canny-dev-GGUF: Text-to-Image • 12B • Updated • 316 • 13