unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF (Image-Text-to-Text • updated about 19 hours ago • 160k downloads • 69 likes)
You can now run Llama 4 on your own local device! 🦙
Run our Dynamic 1.78-bit and 2.71-bit Llama 4 GGUFs: unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF
You can run them on llama.cpp and other inference engines. See our guide here: https://docs.unsloth.ai/basics/tutorial-how-to-run-and-fine-tune-llama-4
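As a rough back-of-the-envelope check on what these bit-widths mean for disk space (a sketch, assuming Scout's ~109B total parameters as stated by Meta; real GGUF files mix quantization types per layer and carry metadata, so actual file sizes will differ):

```python
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate on-disk size of a quantized model: params * bits / 8 bytes, in GB."""
    return n_params * bits_per_weight / 8 / 1e9

# Llama 4 Scout: ~109B total parameters (17B active, 16 experts).
for bpw in (1.78, 2.71, 16.0):
    print(f"{bpw:>5} bits/weight -> ~{gguf_size_gb(109e9, bpw):.0f} GB")
```

This is why a 1.78-bit dynamic quant (~24 GB by this estimate) fits on a single consumer GPU plus RAM, while the unquantized 16-bit weights (~218 GB) do not.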
You can now run DeepSeek-V3-0324 on your own local device!
Run our Dynamic 2.42-bit and 2.71-bit DeepSeek GGUFs: unsloth/DeepSeek-V3-0324-GGUF
You can run them on llama.cpp and other inference engines. See our guide here: https://docs.unsloth.ai/basics/tutorial-how-to-run-deepseek-v3-0324-locally
I uploaded DeepSeek R1 GGUFs!
unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF
unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF
2-bit for MoE:
unsloth/DeepSeek-R1-GGUF
unsloth/DeepSeek-R1-Zero-GGUF
More at unsloth/deepseek-r1-all-versions-678e1c48f5d2fce87892ace5