Post 1424

I uploaded GGUFs, 4-bit bitsandbytes, and full 16-bit precision weights for Llama 3.3 70B Instruct. The full collection is here: unsloth/llama-33-all-versions-67535d7d994794b9d7cf5e9f

You can also finetune Llama 3.3 70B in under 48GB of VRAM with Unsloth!

GGUFs: unsloth/Llama-3.3-70B-Instruct-GGUF
BnB 4bit: unsloth/Llama-3.3-70B-Instruct-bnb-4bit
16bit: unsloth/Llama-3.3-70B-Instruct
Post 1347

Vision finetuning is in 🦥Unsloth! You can now finetune Llama 3.2, Qwen2 VL, Pixtral, and all LLaVA variants up to 2x faster and with up to 70% less VRAM usage!

Colab to finetune Llama 3.2: https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing