GGUF LoRA adapters
Adapters extracted from fine-tuned models using mergekit-extract-lora.

ggml-org/LoRA-Llama-3-Instruct-abliteration-8B-F16-GGUF • Updated Nov 1, 2024 • 33 downloads • 1 like
ggml-org/LoRA-Qwen2.5-1.5B-Instruct-abliterated-F16-GGUF • Updated 3 days ago • 26 downloads • 1 like
ggml-org/LoRA-Qwen2.5-3B-Instruct-abliterated-F16-GGUF • Updated 18 days ago • 37 downloads
ggml-org/LoRA-Qwen2.5-7B-Instruct-abliterated-v3-F16-GGUF • Updated 19 days ago • 84 downloads • 2 likes
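These adapter files are applied on top of the corresponding base model at load time rather than merged into it. As a minimal sketch (not the collection's own documentation), the following assumes llama-cpp-python is installed and that both the base model GGUF and the adapter GGUF have already been downloaded; the file paths are placeholders:

```python
# Minimal sketch: loading a GGUF LoRA adapter on top of its base model with
# llama-cpp-python. Both file paths below are placeholders for local downloads.
from llama_cpp import Llama

llm = Llama(
    model_path="Meta-Llama-3-8B-Instruct-F16.gguf",              # base model the adapter was extracted against (placeholder)
    lora_path="LoRA-Llama-3-Instruct-abliteration-8B-F16.gguf",  # GGUF LoRA adapter from the collection above (placeholder)
    n_ctx=4096,
)

out = llm("Summarize what a LoRA adapter does in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

Because the adapter ships as a separate GGUF file, the same base model can be reused with different adapters without re-downloading or merging weights.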
llama.vim
Recommended models for the llama.vim plugin.

ggml-org/Qwen2.5-Coder-1.5B-Q8_0-GGUF • Text Generation • Updated Oct 28, 2024 • 940 downloads • 5 likes
ggml-org/Qwen2.5-Coder-3B-Q8_0-GGUF • Text Generation • Updated Nov 26, 2024 • 785 downloads • 3 likes
ggml-org/Qwen2.5-Coder-7B-Q8_0-GGUF • Text Generation • Updated Oct 28, 2024 • 1.34k downloads • 1 like
ggml-org/Qwen2.5-Coder-14B-Q8_0-GGUF • Text Generation • Updated Nov 18, 2024 • 232 downloads
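llama.vim requests completions from a locally running llama-server instance loaded with one of the models above. The sketch below is illustrative rather than the plugin's own code: it assumes a server is already listening on 127.0.0.1:8012 (the address the plugin commonly defaults to) and sends a fill-in-the-middle request to the /infill endpoint:

```python
# Minimal sketch of a fill-in-the-middle request against llama-server's /infill
# endpoint, the same endpoint llama.vim queries. Assumes llama-server is running
# locally on port 8012 with one of the Qwen2.5-Coder GGUF models loaded.
import json
import urllib.request

payload = {
    "input_prefix": "def fibonacci(n):\n    ",     # code before the cursor
    "input_suffix": "\n\nprint(fibonacci(10))\n",  # code after the cursor
    "n_predict": 64,                               # cap on generated tokens
}

req = urllib.request.Request(
    "http://127.0.0.1:8012/infill",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())

print(result.get("content", ""))
```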
ggml-org/LoRA-Deepthink-Reasoning-Qwen2.5-7B-Instruct-Q8_0-GGUF • Text Generation • Updated 12 days ago • 20 downloads