Qwen3, full fine-tuning & all models are now supported! 🦥

[Unsloth](https://github.com/unslothai/unsloth) makes fine-tuning large language models like Llama-3, Mistral, Phi-4 and Gemma 2x faster, with 70% less memory, and with no degradation in accuracy! Our docs will guide you through training your own custom model. They cover the essentials of [installing & updating](https://docs.unsloth.ai/get-started/fine-tuning-guide#id-5.-installing--requirements) Unsloth, [creating datasets](https://docs.unsloth.ai/basics/datasets-guide), and running & [deploying](https://docs.unsloth.ai/get-started/fine-tuning-guide#id-5.-running--saving-the-model) your model.

#### Get started

- [Get started](https://docs.unsloth.ai/get-started/beginner-start-here)
- [🧬 Fine-tuning Guide](https://docs.unsloth.ai/get-started/fine-tuning-guide)
- [📒 Unsloth Notebooks](https://docs.unsloth.ai/get-started/unsloth-notebooks)
- [🔮 All Our Models](https://docs.unsloth.ai/get-started/all-our-models)
- [**Qwen3**: Fine-tune & run Dynamic Qwen3 models.](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune)
- [**Dynamic 2.0 Quants**: The best performing quants on 5-shot MMLU.](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs)
- [**Llama 4 by Meta**: Learn to fine-tune & run Scout & Maverick.](https://docs.unsloth.ai/basics/llama-4-how-to-run-and-fine-tune)

## 🦥 Why Unsloth?

- Unsloth makes it easy to train models like Llama 3 locally or on platforms such as Google Colab and Kaggle. We streamline the entire training workflow, including model loading, quantization, training, evaluation, running, saving, exporting, and integrations with inference engines like Ollama, llama.cpp, and vLLM.
- We collaborate regularly with teams at Hugging Face, Google, and Meta to fix bugs in LLM training and models (e.g. see our past work for [Gemma 3](https://docs.unsloth.ai/) and [Phi-4](https://unsloth.ai/blog/phi4)), so expect the most accurate results when training with Unsloth or using our models.
- Unsloth is highly customizable: you can alter things like chat templates or dataset formatting. We also have pre-built notebooks for vision, text-to-speech (TTS), reinforcement learning and more, and we support all training methods and all transformer-based models.
## Quickstart

**Install locally with pip (recommended)** for Linux devices:

```bash
pip install unsloth
```

For Windows install instructions, see [here](https://docs.unsloth.ai/get-started/installing-+-updating/windows-installation).

## What is fine-tuning and why?

Fine-tuning an LLM customizes its behavior, enhances domain knowledge, and optimizes performance for specific tasks. Fine-tuning updates the actual "brains" of the language model through a process called back-propagation. By fine-tuning a pre-trained model (e.g. Llama-3.1-8B) on a specialized dataset, you can:

- **Update Knowledge**: Introduce new domain-specific information.
- **Customize Behavior**: Adjust the model's tone, personality, or response style.
- **Optimize for Tasks**: Improve accuracy and relevance for specific use cases.

**Example use cases**:

- Train an LLM to predict whether a headline impacts a company positively or negatively.
- Use historical customer interactions for more accurate and customized responses.
- Fine-tune an LLM on legal texts for contract analysis, case law research, and compliance.

You can think of a fine-tuned model as a specialized agent designed to do specific tasks more effectively and efficiently. **Fine-tuning can replicate all of RAG's capabilities**, but not vice versa.

[🤔 FAQ + Is Fine-tuning Right For Me?](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me)

## How to use Unsloth?

[Unsloth](https://github.com/unslothai/unsloth) can be installed locally on Linux or Windows, or used on Kaggle or another GPU service such as Google Colab. Most people use Unsloth through Google Colab, which provides a free GPU to train with.

[📥 Installing + Updating](https://docs.unsloth.ai/get-started/installing-+-updating) · [🛠️ Unsloth Requirements](https://docs.unsloth.ai/get-started/beginner-start-here/unsloth-requirements)
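To make the workflow above concrete, here is a minimal end-to-end sketch of a LoRA fine-tune with Unsloth, following the pattern used in the Unsloth notebooks. The model name, dataset, and hyperparameters are illustrative placeholders, and the exact `SFTTrainer` arguments may differ across `trl` versions:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Model and dataset names here are illustrative - swap in your own.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.1-8B-Instruct",
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Attach LoRA adapters so only a small fraction of weights are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

# Turn each example into a single "text" field for supervised fine-tuning.
def to_text(example):
    return {"text": f"### Instruction:\n{example['instruction']}\n\n### Response:\n{example['output']}"}

dataset = load_dataset("yahma/alpaca-cleaned", split = "train").map(to_text)

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = 2048,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        max_steps = 60,          # short demo run; raise for real training
        learning_rate = 2e-4,
        logging_steps = 1,
        output_dir = "outputs",
    ),
)
trainer.train()
```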
{ "color-scheme": "light dark", "description": "New to Unsloth?", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "New to Unsloth?", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Welcome | Unsloth Documentation", "ogDescription": "New to Unsloth?", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Welcome | Unsloth Documentation", "robots": "index, follow", "scrapeId": "7a5be7a8-8c29-4612-948b-73fbf4fa1b17", "sourceURL": "https://docs.unsloth.ai/", "statusCode": 200, "title": "Welcome | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "New to Unsloth?", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Welcome | Unsloth Documentation", "url": "https://docs.unsloth.ai/", "viewport": "width=device-width, initial-scale=1" }
# Phi-4 Reasoning: How to Run & Fine-tune

Microsoft's new Phi-4 reasoning models are now supported in Unsloth. The 'plus' variant performs on par with OpenAI's o1-mini, o3-mini and Sonnet 3.7. The 'plus' and standard reasoning models have 14B parameters, while the 'mini' has 4B parameters. All Phi-4 reasoning uploads use our [Unsloth Dynamic 2.0](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs) methodology.

#### Phi-4 reasoning - Unsloth Dynamic 2.0 uploads

Dynamic 2.0 GGUF (to run):

- [Reasoning-plus](https://huggingface.co/unsloth/Phi-4-reasoning-plus-GGUF/) (14B)
- [Reasoning](https://huggingface.co/unsloth/Phi-4-reasoning-GGUF) (14B)
- [Mini-reasoning](https://huggingface.co/unsloth/Phi-4-mini-reasoning-GGUF/) (4B)

Dynamic 4-bit Safetensor (to fine-tune/deploy):

- [Reasoning-plus](https://huggingface.co/unsloth/Phi-4-reasoning-plus-unsloth-bnb-4bit)
- [Reasoning](https://huggingface.co/unsloth/phi-4-reasoning-unsloth-bnb-4bit)
- [Mini-reasoning](https://huggingface.co/unsloth/Phi-4-mini-reasoning-unsloth-bnb-4bit)

## 🖥️ Running Phi-4 reasoning

### ⚙️ Official Recommended Settings

According to Microsoft, these are the recommended settings for inference:

- **Temperature = 0.8**
- Top_P = 0.95

### Phi-4 reasoning Chat templates

Please ensure you use the correct chat template, as the 'mini' variant has a different one.

#### Phi-4-mini

```
<|system|>Your name is Phi, an AI math expert developed by Microsoft.<|end|><|user|>How to solve 3*x^2+4*x+5=1?<|end|><|assistant|>
```

#### Phi-4-reasoning and Phi-4-reasoning-plus

This format is used for general conversation and instructions:

```
<|im_start|>system<|im_sep|>You are Phi, a language model trained by Microsoft to help users. Your role as an assistant involves thoroughly exploring questions through a systematic thinking process before providing the final precise and accurate solutions. This requires engaging in a comprehensive cycle of analysis, summarizing, exploration, reassessment, reflection, backtracing, and iteration to develop well-considered thinking process. Please structure your response into two main sections: Thought and Solution using the specified format: <think> {Thought section} </think> {Solution section}. In the Thought section, detail your reasoning process in steps. Each step should include detailed considerations such as analysing questions, summarizing relevant findings, brainstorming new ideas, verifying the accuracy of the current steps, refining any errors, and revisiting previous steps. In the Solution section, based on various attempts, explorations, and reflections from the Thought section, systematically present the final solution that you deem correct. The Solution section should be logical, accurate, and concise and detail necessary steps needed to reach the conclusion. Now, try to solve the following question through the above guidelines:<|im_end|><|im_start|>user<|im_sep|>What is 1+1?<|im_end|><|im_start|>assistant<|im_sep|>
```
Yes, the chat template/prompt format really is this long!

### 🦙 Ollama: Run Phi-4 reasoning Tutorial

1. Install `ollama` if you haven't already!

```bash
apt-get update
apt-get install pciutils -y
curl -fsSL https://ollama.com/install.sh | sh
```

2. Run the model! Note that you can call `ollama serve` in another terminal if it fails. We include all our fixes and suggested parameters (temperature etc.) in `params` in our Hugging Face upload.

```bash
ollama run hf.co/unsloth/Phi-4-mini-reasoning-GGUF:Q4_K_XL
```

### 📖 Llama.cpp: Run Phi-4 reasoning Tutorial

You must use `--jinja` in llama.cpp to enable reasoning for these models, except for the 'mini' variant. Otherwise no reasoning tokens will be provided.

1. Obtain the latest `llama.cpp` from [GitHub here](https://github.com/ggml-org/llama.cpp). You can follow the build instructions below as well. Change `-DGGML_CUDA=ON` to `-DGGML_CUDA=OFF` if you don't have a GPU or just want CPU inference.

```bash
apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
```

2. Download the model (after `pip install huggingface_hub hf_transfer`). You can choose Q4_K_M or other quantized versions.

```python
# !pip install huggingface_hub hf_transfer
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import snapshot_download
snapshot_download(
    repo_id = "unsloth/Phi-4-mini-reasoning-GGUF",
    local_dir = "unsloth/Phi-4-mini-reasoning-GGUF",
    allow_patterns = ["*UD-Q4_K_XL*"],
)
```

3. Run the model in conversational mode in llama.cpp. You must use `--jinja` to enable reasoning; this is not needed for the 'mini' variant.
```bash
./llama.cpp/llama-cli \
    --model unsloth/Phi-4-mini-reasoning-GGUF/Phi-4-mini-reasoning-UD-Q4_K_XL.gguf \
    --threads -1 \
    --n-gpu-layers 99 \
    --prio 3 \
    --temp 0.8 \
    --top-p 0.95 \
    --jinja \
    --min_p 0.00 \
    --ctx-size 32768 \
    --seed 3407
```

## 🦥 Fine-tuning Phi-4 with Unsloth

[Phi-4 fine-tuning](https://unsloth.ai/blog/phi4) is also now supported in Unsloth for these models. To fine-tune for free on Google Colab, just change the `model_name` from 'unsloth/Phi-4' to 'unsloth/Phi-4-mini-reasoning' etc., as in the sketch below.

- [Phi-4 (14B) fine-tuning notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb)
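A minimal sketch of that `model_name` swap, using the same loading call shown elsewhere in these docs (the sequence length and 4-bit settings here are illustrative):

```python
from unsloth import FastLanguageModel

# Same code as the Phi-4 notebook - only the model_name changes.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Phi-4-mini-reasoning",   # or "unsloth/Phi-4-reasoning-plus"
    max_seq_length = 2048,                          # illustrative
    load_in_4bit = True,
)
```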
{ "color-scheme": "light dark", "description": "Learn to run & fine-tune Phi-4 reasoning models locally with Unsloth + our Dynamic 2.0 quants", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Learn to run & fine-tune Phi-4 reasoning models locally with Unsloth + our Dynamic 2.0 quants", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Phi-4 Reasoning: How to Run & Fine-tune | Unsloth Documentation", "ogDescription": "Learn to run & fine-tune Phi-4 reasoning models locally with Unsloth + our Dynamic 2.0 quants", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Phi-4 Reasoning: How to Run & Fine-tune | Unsloth Documentation", "robots": "index, follow", "scrapeId": "0fbb26e5-2798-4c08-99b7-76ac8f88e796", "sourceURL": "https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/phi-4-reasoning-how-to-run-and-fine-tune", "statusCode": 200, "title": "Phi-4 Reasoning: How to Run & Fine-tune | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Learn to run & fine-tune Phi-4 reasoning models locally with Unsloth + our Dynamic 2.0 quants", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Phi-4 Reasoning: How to Run & Fine-tune | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/phi-4-reasoning-how-to-run-and-fine-tune", "viewport": "width=device-width, initial-scale=1" }
# Inference

Unsloth natively supports 2x faster inference. For our inference-only notebook, click [here](https://colab.research.google.com/drive/1aqlNQi7MMJbynFDyOQteD2t0yVfjb9Zh?usp=sharing). All QLoRA, LoRA and non-LoRA inference paths are 2x faster. This requires no code changes or any new dependencies.

```python
from unsloth import FastLanguageModel
from transformers import TextStreamer

max_seq_length = 2048   # use the same settings you trained with
dtype = None            # None = auto-detect
load_in_4bit = True

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "lora_model",  # YOUR MODEL YOU USED FOR TRAINING
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

# Example prompt - replace with your own input
inputs = tokenizer("Continue the sequence: 1, 1, 2, 3, 5, 8,", return_tensors = "pt").to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 64)
```

#### NotImplementedError: A UTF-8 locale is required. Got ANSI

Sometimes when you execute a cell, [this error](https://github.com/googlecolab/colabtools/issues/3409) can appear. To solve it, run the following in a new cell:

```python
import locale
locale.getpreferredencoding = lambda: "UTF-8"
```
{ "color-scheme": "light dark", "description": "Learn how to run your finetuned model.", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Learn how to run your finetuned model.", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Inference | Unsloth Documentation", "ogDescription": "Learn how to run your finetuned model.", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Inference | Unsloth Documentation", "robots": "index, follow", "scrapeId": "1e0ab15d-306e-4fb5-bf9d-dbb428ef2e9f", "sourceURL": "https://docs.unsloth.ai/basics/running-and-saving-models/inference", "statusCode": 200, "title": "Inference | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Learn how to run your finetuned model.", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Inference | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/running-and-saving-models/inference", "viewport": "width=device-width, initial-scale=1" }
# Finetuning from Last Checkpoint

Checkpointing allows you to save your fine-tuning progress so you can pause it and then continue. You must edit the `Trainer` first to add `save_strategy` and `save_steps`. The example below saves a checkpoint every 50 steps to the folder `outputs`.

```python
trainer = SFTTrainer(
    ....
    args = TrainingArguments(
        ....
        output_dir = "outputs",
        save_strategy = "steps",
        save_steps = 50,
    ),
)
```

Then in the trainer do:

```python
trainer_stats = trainer.train(resume_from_checkpoint = True)
```

This will start from the latest checkpoint and continue training.

## Wandb Integration

```python
# Install library
!pip install wandb --upgrade

# Setting up Wandb
!wandb login <token>

import os
os.environ["WANDB_PROJECT"] = "<name>"
os.environ["WANDB_LOG_MODEL"] = "checkpoint"
```

Then in `TrainingArguments()` set:

```python
report_to = "wandb",
logging_steps = 1,      # Change if needed
save_steps = 100,       # Change if needed
run_name = "<name>",    # (Optional)
```

To train the model, do `trainer.train()`; to resume training, do:

```python
import wandb
run = wandb.init()
artifact = run.use_artifact('<username>/<Wandb-project-name>/<run-id>', type='model')
artifact_dir = artifact.download()
trainer.train(resume_from_checkpoint = artifact_dir)
```
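Putting the pieces together, a minimal sketch of a trainer configured for both checkpointing and Wandb logging might look like the following (assuming `model`, `tokenizer` and `dataset` are already prepared; names and step counts are illustrative, and exact arguments may differ across trl/transformers versions):

```python
from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model = model,                 # your already-prepared model
    tokenizer = tokenizer,
    train_dataset = dataset,
    args = TrainingArguments(
        output_dir = "outputs",
        save_strategy = "steps",
        save_steps = 50,           # checkpoint every 50 steps
        report_to = "wandb",
        logging_steps = 1,
        run_name = "my-finetune",  # illustrative run name
    ),
)

# Resumes from the latest checkpoint in "outputs" if one exists.
trainer_stats = trainer.train(resume_from_checkpoint = True)
```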
{ "color-scheme": "light dark", "description": "Checkpointing allows you to save your finetuning progress so you can pause it and then continue.", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Checkpointing allows you to save your finetuning progress so you can pause it and then continue.", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Finetuning from Last Checkpoint | Unsloth Documentation", "ogDescription": "Checkpointing allows you to save your finetuning progress so you can pause it and then continue.", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Finetuning from Last Checkpoint | Unsloth Documentation", "robots": "index, follow", "scrapeId": "0f3402f5-70fd-4e31-9774-df235fd8683a", "sourceURL": "https://docs.unsloth.ai/basics/finetuning-from-last-checkpoint", "statusCode": 200, "title": "Finetuning from Last Checkpoint | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Checkpointing allows you to save your finetuning progress so you can pause it and then continue.", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Finetuning from Last Checkpoint | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/finetuning-from-last-checkpoint", "viewport": "width=device-width, initial-scale=1" }
# What Model Should I Use?

## Llama, Qwen, Mistral, Phi, or something else?

When preparing for fine-tuning, one of the first decisions you'll face is selecting the right model. Here's a step-by-step guide to help you choose:

#### 1. Choose a model that aligns with your use case

- E.g. for image-based training, select a vision model such as _Llama 3.2 Vision_. For code datasets, opt for a specialized model like _Qwen Coder 2.5_.
- **Licensing and Requirements**: Different models may have specific licensing terms and [system requirements](https://docs.unsloth.ai/get-started/beginner-start-here/unsloth-requirements#system-requirements). Be sure to review these carefully to avoid compatibility issues.

#### 2. Assess your storage, compute capacity and dataset

- Use our [VRAM guideline](https://docs.unsloth.ai/get-started/beginner-start-here/unsloth-requirements#approximate-vram-requirements-based-on-model-parameters) to determine the VRAM requirements for the model you're considering.
- Your dataset will determine the type of model you should use and how long training will take.

#### 3. Select a Model and Parameters

- We recommend using the latest model for the best performance and capabilities. For instance, as of January 2025, the leading 70B model is _Llama 3.3_.
- You can stay up to date by exploring our catalog of [model uploads](https://docs.unsloth.ai/get-started/all-our-models) to find the most recent and relevant options.

#### 4. Choose Between Base and Instruct Models

Further details below:

## Instruct or Base Model?

When preparing for fine-tuning, one of the first decisions you'll face is whether to use an instruct model or a base model.

### Instruct Models

Instruct models are pre-trained with built-in instructions, making them ready to use without any fine-tuning. These models, including GGUFs and others commonly available, are optimized for direct usage and respond effectively to prompts right out of the box.

### Base Models

Base models, on the other hand, are the original pre-trained versions without instruction fine-tuning. They are specifically designed for customization through fine-tuning, allowing you to adapt them to your unique needs.

### Should I Choose Instruct or Base?
The decision often depends on the quantity, quality, and type of your data:

- **1,000+ rows of data**: If you have a large dataset with over 1,000 rows, it's generally best to fine-tune the base model.
- **300–1,000 rows of high-quality data**: With a medium-sized, high-quality dataset, fine-tuning either the base or the instruct model is a viable option.
- **Fewer than 300 rows**: For smaller datasets, the instruct model is typically the better choice. Fine-tuning the instruct model aligns it with your specific needs while preserving its built-in instruction-following capabilities, so it can follow general instructions without additional input unless you intend to significantly alter its functionality.
- For information on how big your dataset should be, [see here](https://docs.unsloth.ai/basics/datasets-guide#how-big-should-my-dataset-be).

### Experimentation is Key

We recommend experimenting with both models when possible. Fine-tune each one and evaluate the outputs to see which aligns better with your goals.
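As a quick illustration of how this choice shows up in code, here is a minimal sketch of loading either checkpoint with Unsloth (the repo names are illustrative examples of a base vs. instruct pair):

```python
from unsloth import FastLanguageModel

# Pick ONE of the two, depending on your dataset size:
# - Base checkpoint: best with 1,000+ rows and full control over style/format.
# - Instruct checkpoint: best for small datasets, keeps built-in chat behaviour.
model_name = "unsloth/Meta-Llama-3.1-8B"             # base (illustrative)
# model_name = "unsloth/Meta-Llama-3.1-8B-Instruct"  # instruct (illustrative)

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = model_name,
    max_seq_length = 2048,
    load_in_4bit = True,
)
```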
{ "color-scheme": "light dark", "description": null, "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": null, "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "What Model Should I Use? | Unsloth Documentation", "ogDescription": null, "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "What Model Should I Use? | Unsloth Documentation", "robots": "index, follow", "scrapeId": "23f7efee-026d-4684-a6ba-9c0be2be32c4", "sourceURL": "https://docs.unsloth.ai/get-started/fine-tuning-guide/what-model-should-i-use", "statusCode": 200, "title": "What Model Should I Use? | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": null, "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "What Model Should I Use? | Unsloth Documentation", "url": "https://docs.unsloth.ai/get-started/fine-tuning-guide/what-model-should-i-use", "viewport": "width=device-width, initial-scale=1" }
# DeepSeek-R1: How to Run Locally

A guide on how you can run our 1.58-bit Dynamic Quants for DeepSeek-R1 using llama.cpp.

## Using llama.cpp (recommended)

1. Do not forget about the `<|User|>` and `<|Assistant|>` tokens! Or use a chat template formatter.

2. Obtain the latest `llama.cpp` at [github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp). You can follow the build instructions below as well:

```bash
apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggerganov/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=ON -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
```

3. It's best to use `--min-p 0.05` to counteract very rare token predictions - we found this to work well, especially for the 1.58bit model.

4. Download the model via:

```python
# pip install huggingface_hub hf_transfer
# import os # Optional for faster downloading
# os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import snapshot_download
snapshot_download(
    repo_id = "unsloth/DeepSeek-R1-GGUF",
    local_dir = "DeepSeek-R1-GGUF",
    allow_patterns = ["*UD-IQ1_S*"],  # Select quant type UD-IQ1_S for 1.58bit
)
```

5. Example with a Q4_0 K-quantized cache. **Note that `-no-cnv` disables auto conversation mode.**

```bash
./llama.cpp/llama-cli \
    --model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    --cache-type-k q4_0 \
    --threads 12 -no-cnv --prio 2 \
    --temp 0.6 \
    --ctx-size 8192 \
    --seed 3407 \
    --prompt "<|User|>What is 1+1?<|Assistant|>"
```

Example output:

```
<think>
Okay, so I need to figure out what 1 plus 1 is. Hmm, where do I even start? I remember from school that adding numbers is pretty basic, but I want to make sure I understand it properly. Let me think, 1 plus 1. So, I have one item and I add another one. Maybe like a apple plus another apple. If I have one apple and someone gives me another, I now have two apples. So, 1 plus 1 should be 2. That makes sense. Wait, but sometimes math can be tricky. Could it be something else? Like, in a different number system maybe? But I think the question is straightforward, using regular numbers, not like binary or hexadecimal or anything. I also recall that in arithmetic, addition is combining quantities. So, if you have two quantities of 1, combining them gives you a total of 2. Yeah, that seems right. Is there a scenario where 1 plus 1 wouldn't be 2? I can't think of any...
```

6. If you have a GPU with 24GB of VRAM (an RTX 4090 for example), you can offload multiple layers to the GPU for faster processing. If you have multiple GPUs, you can probably offload more layers.
```bash
./llama.cpp/llama-cli \
    --model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    --cache-type-k q4_0 \
    --threads 12 -no-cnv --prio 2 \
    --n-gpu-layers 7 \
    --temp 0.6 \
    --ctx-size 8192 \
    --seed 3407 \
    --prompt "<|User|>Create a Flappy Bird game in Python.<|Assistant|>"
```

7. To test our Flappy Bird example as mentioned in our blog post [https://unsloth.ai/blog/deepseekr1-dynamic](https://unsloth.ai/blog/deepseekr1-dynamic), we can reproduce the second example using our 1.58bit dynamic quant. (The blog post includes animated comparisons of the original DeepSeek R1 output vs. the 1.58bit Dynamic Quant.) The prompt used is below:

```
<|User|>Create a Flappy Bird game in Python. You must include these things:
1. You must use pygame.
2. The background color should be randomly chosen and is a light shade. Start with a light blue color.
3. Pressing SPACE multiple times will accelerate the bird.
4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.
5. Place on the bottom some land colored as dark brown or yellow chosen randomly.
6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.
7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.
8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.
The final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<|Assistant|>
```

To call llama.cpp using this example, we do:

```bash
./llama.cpp/llama-cli \
    --model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    --cache-type-k q4_0 \
    --threads 12 -no-cnv --prio 2 \
    --n-gpu-layers 7 \
    --temp 0.6 \
    --ctx-size 8192 \
    --seed 3407 \
    --prompt "<|User|>Create a Flappy Bird game in Python. You must include these things:\n1. You must use pygame.\n2. The background color should be randomly chosen and is a light shade. Start with a light blue color.\n3. Pressing SPACE multiple times will accelerate the bird.\n4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.\n5. Place on the bottom some land colored as dark brown or yellow chosen randomly.\n6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.\n7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.\n8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.\nThe final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<|Assistant|>"
```
8. Also, if you want to merge the weights together (for use in Ollama, for example), use this script:

```bash
./llama.cpp/llama-gguf-split --merge \
    DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    merged_file.gguf
```

9. DeepSeek R1 has 61 layers. For example, with a 24GB GPU or an 80GB GPU, you can expect to offload roughly the following number of layers after rounding down (reduce by 1 if it goes out of memory):

| Quant | File Size | 24GB GPU | 80GB GPU | 2×80GB GPU |
| --- | --- | --- | --- | --- |
| 1.58bit | 131GB | 7 | 33 | All layers (61) |
| 1.73bit | 158GB | 5 | 26 | 57 |
| 2.22bit | 183GB | 4 | 22 | 49 |
| 2.51bit | 212GB | 2 | 19 | 32 |

### Running on Mac / Apple devices

For Apple Metal devices, be careful with `--n-gpu-layers`. If you find the machine going out of memory, reduce it. For a 128GB unified memory machine, you should be able to offload 59 layers or so.

```bash
./llama.cpp/llama-cli \
    --model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    --cache-type-k q4_0 \
    --threads 16 \
    --prio 2 \
    --temp 0.6 \
    --ctx-size 8192 \
    --seed 3407 \
    --n-gpu-layers 59 \
    -no-cnv \
    --prompt "<|User|>Create a Flappy Bird game in Python.<|Assistant|>"
```

### Run in Ollama/Open WebUI

Open WebUI has a step-by-step tutorial on how to run R1 here: [docs.openwebui.com/tutorials/integrations/deepseekr1-dynamic/](https://docs.openwebui.com/tutorials/integrations/deepseekr1-dynamic/). If you want to use Ollama for inference on GGUFs, you first need to merge the 3 split GGUF files into one, as in the code below, and then run the model locally.

```bash
./llama.cpp/llama-gguf-split --merge \
    DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    merged_file.gguf
```

## DeepSeek Chat Template

All distilled versions and the main 671B R1 model use the same chat template:

```
<|begin▁of▁sentence|><|User|>What is 1+1?<|Assistant|>It's 2.<|end▁of▁sentence|><|User|>Explain more!<|Assistant|>
```

A BOS is forcibly added, and an EOS separates each interaction. To counteract double BOS tokens during inference, you should only call `tokenizer.encode(..., add_special_tokens = False)`, since the chat template auto-adds a BOS token as well (see the sketch below). For llama.cpp / GGUF inference, you should skip the BOS since it will be added automatically:

```
<|User|>What is 1+1?<|Assistant|>
```

The `<think>` and `</think>` tokens get their own designated token IDs. For the distilled Qwen and Llama versions, some tokens are re-mapped; Qwen, for example, did not have a BOS token, so <\|object\_ref\_start\|> had to be used instead.
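As a small illustration of the double-BOS point above, here is a minimal sketch using a Hugging Face tokenizer (the repo name is illustrative; any R1 or R1-distill tokenizer behaves the same way):

```python
from transformers import AutoTokenizer

# Illustrative repo name - use the R1 variant you are actually running.
tokenizer = AutoTokenizer.from_pretrained("unsloth/DeepSeek-R1-Distill-Llama-8B")

messages = [{"role": "user", "content": "What is 1+1?"}]

# The chat template already prepends the BOS token to the prompt string...
prompt = tokenizer.apply_chat_template(messages, tokenize = False, add_generation_prompt = True)

# ...so encode with add_special_tokens = False to avoid a double BOS.
input_ids = tokenizer.encode(prompt, add_special_tokens = False)
```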
**Tokenizer ID Mappings:**

| Token | R1 | Distill Qwen | Distill Llama |
| --- | --- | --- | --- |
| <think> | 128798 | 151648 | 128013 |
| </think> | 128799 | 151649 | 128014 |
| <\|begin\_of\_sentence\|> | 0 | 151646 | 128000 |
| <\|end\_of\_sentence\|> | 1 | 151643 | 128001 |
| <\|User\|> | 128803 | 151644 | 128011 |
| <\|Assistant\|> | 128804 | 151645 | 128012 |
| Padding token | 2 | 151654 | 128004 |

Original tokens in the models:

| Token | Qwen 2.5 32B Base | Llama 3.3 70B Instruct |
| --- | --- | --- |
| <think> | <\|box\_start\|> | <\|reserved\_special\_token\_5\|> |
| </think> | <\|box\_end\|> | <\|reserved\_special\_token\_6\|> |
| <\|begin▁of▁sentence\|> | <\|object\_ref\_start\|> | <\|begin\_of\_text\|> |
| <\|end▁of▁sentence\|> | <\|endoftext\|> | <\|end\_of\_text\|> |
| <\|User\|> | <\|im\_start\|> | <\|reserved\_special\_token\_3\|> |
| <\|Assistant\|> | <\|im\_end\|> | <\|reserved\_special\_token\_4\|> |
| Padding token | <\|vision\_pad\|> | <\|finetune\_right\_pad\_id\|> |

All distilled versions and the original R1 seem to have accidentally assigned the padding token to <\|end▁of▁sentence\|>, which is mostly not a good idea, especially if you want to further fine-tune on top of these reasoning models. This will cause endless infinite generations, since most frameworks will mask the EOS token out as -100. We fixed all distilled versions and the original R1 with the correct padding token (Qwen uses <\|vision\_pad\|>, Llama uses <\|finetune\_right\_pad\_id\|>, and R1 uses <\|▁pad▁\|> or our own added <\|PAD▁TOKEN\|>).

## GGUF R1 Table

| MoE Bits | Type | Disk Size | Accuracy | Link | Details |
| --- | --- | --- | --- | --- | --- |
| 1.58bit | UD-IQ1_S | **131GB** | Fair | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ1_S) | MoE all 1.56bit. `down_proj` in MoE mixture of 2.06/1.56bit |
| 1.73bit | UD-IQ1_M | **158GB** | Good | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ1_M) | MoE all 1.56bit. `down_proj` in MoE left at 2.06bit |
| 2.22bit | UD-IQ2_XXS | **183GB** | Better | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ2_XXS) | MoE all 2.06bit. `down_proj` in MoE mixture of 2.5/2.06bit |
| 2.51bit | UD-Q2_K_XL | **212GB** | Best | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-Q2_K_XL) | MoE all 2.5bit. `down_proj` in MoE mixture of 3.5/2.5bit |
{ "color-scheme": "light dark", "description": null, "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": null, "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "DeepSeek-R1: How to Run Locally | Unsloth Documentation", "ogDescription": null, "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "DeepSeek-R1: How to Run Locally | Unsloth Documentation", "robots": "index, follow", "scrapeId": "23468f92-f154-4ae5-a9a2-e91932693ca5", "sourceURL": "https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-r1-how-to-run-locally", "statusCode": 200, "title": "DeepSeek-R1: How to Run Locally | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": null, "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "DeepSeek-R1: How to Run Locally | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-r1-how-to-run-locally", "viewport": "width=device-width, initial-scale=1" }
# Unsloth Dynamic 2.0 GGUFs

We're excited to introduce our Dynamic v2.0 quantization method - a major upgrade to our previous quants. This new method outperforms leading quantization methods and sets new benchmarks for 5-shot MMLU and KL Divergence. This means you can now run and fine-tune quantized LLMs while preserving as much accuracy as possible! You can run the 2.0 GGUFs on any inference engine like llama.cpp, Ollama, Open WebUI etc. View all our Dynamic 2.0 GGUF models on [Hugging Face here](https://huggingface.co/collections/unsloth/unsloth-dynamic-v20-quants-68060d147e9b9231112823e6).

### 💡 What's New in Dynamic v2.0?

- **Revamped Layer Selection for GGUFs + safetensors**: Unsloth Dynamic 2.0 now selectively quantizes layers much more intelligently and extensively. Rather than modifying only select layers, we now dynamically adjust the quantization type of every possible layer, and the combinations differ for each layer and model.
- Currently selected and all future GGUF uploads will utilize Dynamic 2.0 and our new calibration dataset. The dataset ranges from **300K to 1.5M tokens** (depending on the model) and comprises high-quality, hand-curated and cleaned data to greatly enhance conversational chat performance.
- Previously, our Dynamic quantization (DeepSeek-R1 1.58-bit GGUF) was effective only for MoE architectures. **Dynamic 2.0 quantization now works on all models (including MoEs & non-MoEs)**.
- **Model-Specific Quants**: Each model now uses a custom-tailored quantization scheme. E.g. the layers quantized in Gemma 3 differ significantly from those in Llama 4.
- To maximize efficiency, especially on Apple Silicon and ARM devices, we now also add Q4_NL, Q5_1, Q5_0, Q4_1, and Q4_0 formats.

To ensure accurate benchmarking, we built an internal evaluation framework to match the officially reported 5-shot MMLU scores of Llama 4 and Gemma 3. This allowed apples-to-apples comparisons between full precision vs. Dynamic v2.0, **QAT** and standard **imatrix** GGUF quants.
Currently, we've released updates for:

- **Qwen3 (NEW)**: [0.6B](https://huggingface.co/unsloth/Qwen3-0.6B-GGUF) • [1.7B](https://huggingface.co/unsloth/Qwen3-1.7B-GGUF) • [4B](https://huggingface.co/unsloth/Qwen3-4B-GGUF) • [8B](https://huggingface.co/unsloth/Qwen3-8B-GGUF) • [14B](https://huggingface.co/unsloth/Qwen3-14B-GGUF) • [30B-A3B](https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF) • [32B](https://huggingface.co/unsloth/Qwen3-32B-GGUF) • [235B-A22B](https://huggingface.co/unsloth/Qwen3-235B-A22B-GGUF)
- **Other**: [GLM-4-32B](https://huggingface.co/unsloth/GLM-4-32B-0414-GGUF) • [MAI-DS-R1](https://huggingface.co/unsloth/MAI-DS-R1-GGUF) • [QwQ (32B)](https://huggingface.co/unsloth/QwQ-32B-GGUF)
- **DeepSeek**: [R1](https://huggingface.co/unsloth/DeepSeek-R1-GGUF-UD) • [V3-0324](https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF-UD) • [R1-Distill-Llama](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF)
- **Llama**: [4 (Scout)](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF) • [4 (Maverick)](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF) • [3.1 (8B)](https://huggingface.co/unsloth/Llama-3.1-8B-Instruct-GGUF)
- **Gemma 3**: [4B](https://huggingface.co/unsloth/gemma-3-4b-it-GGUF) • [12B](https://huggingface.co/unsloth/gemma-3-12b-it-GGUF) • [27B](https://huggingface.co/unsloth/gemma-3-27b-it-GGUF) • [QAT](https://huggingface.co/unsloth/gemma-3-12b-it-qat-GGUF)
- **Mistral**: [Small-3.1-2503](https://huggingface.co/unsloth/Mistral-Small-3.1-24B-Instruct-2503-GGUF)

All future GGUF uploads will utilize Unsloth Dynamic 2.0, and our Dynamic 4-bit safetensor quants will also benefit from this in the future. A detailed analysis of our benchmarks and evaluation is further below.

(Figures: KL Divergence and 5-shot MMLU comparison graphs.)

## 📊 Why KL Divergence?

[Accuracy is Not All You Need](https://arxiv.org/pdf/2407.09141) showcases how pruning layers, even unnecessary ones, still yields vast differences in terms of "flips". A "flip" is defined as an answer changing from incorrect to correct or vice versa. The paper shows that MMLU might not decrease as we prune layers or quantize, but that's because some incorrect answers might have "flipped" to become correct. Our goal is to match the original model, so measuring "flips" is a good metric.
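For reference, the quantity being compared is the mean token-level KL divergence between the full-precision model's next-token distribution and the quantized model's, averaged over the evaluated positions (our sketch of the standard definition, not a formula taken from the paper):

$$\bar{D}_{\mathrm{KL}} = \frac{1}{T}\sum_{t=1}^{T}\sum_{v \in \mathcal{V}} P_t(v)\,\log\frac{P_t(v)}{Q_t(v)}$$

where $P_t$ is the full-precision model's distribution over the vocabulary $\mathcal{V}$ at position $t$, $Q_t$ is the quantized model's distribution, and $T$ is the number of evaluated token positions.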
(Figures: flips vs. KL Divergence plots from "Accuracy is Not All You Need".)

**KL Divergence** should be the **gold standard for reporting quantization errors**, as per the research paper "Accuracy is Not All You Need". **Using perplexity is incorrect** since output token values can cancel out, so we must use KLD! The paper also shows that, interestingly, KL Divergence is highly correlated with flips, so our goal is to reduce the mean KL Divergence whilst increasing the disk space of the quantization as little as possible.

## ⚖️ Calibration Dataset Overfitting

Most frameworks report perplexity and KL Divergence using a test set of Wikipedia articles. However, we noticed that using a calibration dataset which is also Wikipedia-related causes quants to overfit and attain lower perplexity scores. We utilize the [Calibration_v3](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) and [Calibration_v5](https://gist.github.com/tristandruyen/9e207a95c7d75ddf37525d353e00659c/) datasets for fair testing, which include some wikitext data amongst other data. **Also, instruct models have unique chat templates, and using text-only calibration datasets is not effective for instruct models** (it is fine for base models). In fact, most imatrix GGUFs are typically calibrated with these issues. As a result, they naturally perform better on KL Divergence benchmarks that also use Wikipedia data, since the model is essentially optimized for that domain.

To ensure a fair and controlled evaluation, we chose not to use our own calibration dataset (which is optimized for chat performance) when benchmarking KL Divergence. Instead, we conducted tests using the same standard Wikipedia datasets, allowing us to directly compare the performance of our Dynamic 2.0 method against the baseline imatrix approach.

## 🔢 MMLU Replication Adventure

Replicating MMLU 5-shot was nightmarish. We **could not** replicate MMLU results for many models, including Llama 3.1 (8B) Instruct and Gemma 3 (12B), due to **subtle implementation issues**. Llama 3.1 (8B), for example, should get ~68.2%, whilst incorrect implementations can attain **35% accuracy**.

(Figure: MMLU implementation differences.)

Llama 3.1 (8B) Instruct has an MMLU 5-shot accuracy of 67.8% using a naive MMLU implementation.
We find, however, that Llama **tokenizes "A" and "\_A" (A with a space in front) as different token ids**. If we consider both spaced and non-spaced tokens, we get 68.2% (+0.4%).

- Interestingly, Llama 3, as per Eleuther AI's [LLM Harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/llama3/instruct/mmlu/_continuation_template_yaml), also appends **"The best answer is"** to the question, following Llama 3's original MMLU benchmarks.

- There are many other subtle issues, so to benchmark everything in a controlled environment, we designed our own MMLU implementation from scratch by investigating [github.com/hendrycks/test](https://github.com/hendrycks/test) directly, and verified our results across multiple models, comparing them to reported numbers.

## [Direct link to heading](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs\#gemma-3-qat-replication-benchmarks) ✨ Gemma 3 QAT Replication, Benchmarks

The Gemma team released two QAT (quantization aware training) versions of Gemma 3:

1. Q4\_0 GGUF - Quantizes all layers to Q4\_0 via the formula `w = q * block_scale`, with each block having 32 weights. See the [llama.cpp wiki](https://github.com/ggml-org/llama.cpp/wiki/Tensor-Encoding-Schemes) for more details.
2. int4 version - presumably [TorchAO int4 style](https://github.com/pytorch/ao/blob/main/torchao/quantization/README.md)?

We benchmarked all Q4\_0 GGUF versions, and did extensive experiments on the 12B model. We see the **12B Q4\_0 QAT model gets 67.07%**, whilst the full bfloat16 12B version gets 67.15% on 5 shot MMLU. That's very impressive! The 27B model is nearly there too!

| Metric | 1B | 4B | 12B | 27B |
| --- | --- | --- | --- | --- |
| MMLU 5 shot | 26.12% | 55.13% | **67.07% (67.15% BF16)** | **70.64% (71.5% BF16)** |
| Disk Space | 0.93GB | 2.94GB | **7.52GB** | 16.05GB |
| **Efficiency\*** | 1.20 | 10.26 | **5.59** | 2.84 |

We designed a new **Efficiency metric** which calculates the usefulness of the model whilst also taking into account its disk size and MMLU 5 shot score:

$$\text{Efficiency} = \frac{\text{MMLU 5 shot score} - 25}{\text{Disk Space GB}}$$

We have to **minus 25** since MMLU has 4 multiple choices - A, B, C or D. Assume we make a model that simply randomly chooses answers - it'll get 25% accuracy, and have a disk space of a few bytes. But clearly this is not a useful model.

On KL Divergence vs the base model, below is a table showcasing the improvements. Reminder: the closer the KL Divergence is to 0, the better (i.e. 0 means identical to the full-precision model).

| Quant | Baseline KLD | Baseline GB | New KLD | New GB |
| --- | --- | --- | --- | --- |
| IQ1\_S | 1.035688 | 5.83 | 0.972932 | 6.06 |
| IQ1\_M | 0.832252 | 6.33 | 0.800049 | 6.51 |
| IQ2\_XXS | 0.535764 | 7.16 | 0.521039 | 7.31 |
| IQ2\_M | 0.26554 | 8.84 | 0.258192 | 8.96 |
| Q2\_K\_XL | 0.229671 | 9.78 | 0.220937 | 9.95 |
| Q3\_K\_XL | 0.087845 | 12.51 | 0.080617 | 12.76 |
| Q4\_K\_XL | 0.024916 | 15.41 | 0.023701 | 15.64 |

If we plot the ratio of the disk space increase and the KL Divergence ratio change, we can see a much clearer benefit! Our dynamic 2bit Q2\_K\_XL reduces KLD quite a bit (around 7.5%).

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FsYSRIPGSjExzSr5y828z%252Fchart%282%29.svg%3Falt%3Dmedia%26token%3De87db00e-6e3e-4478-af0b-bc84ed2e463b&width=768&dpr=4&quality=100&sign=9073c258&sv=2)

Truncated table of results for MMLU for Gemma 3 (27B). See below.

1.
**Our dynamic 4bit version is 2GB smaller whilst having +1% extra accuracy vs the QAT version!** 2. Efficiency wise, 2bit Q2\_K\_XL and others seem to do very well! Quant Unsloth Unsloth + QAT Disk Size Efficiency IQ1\_M 48.10 47.23 6.51 3.42 IQ2\_XXS 59.20 56.57 7.31 4.32 IQ2\_M 66.47 64.47 8.96 4.40 Q2\_K\_XL 68.70 67.77 9.95 4.30 Q3\_K\_XL 70.87 69.50 12.76 3.49 **Q4\_K\_XL** **71.47** **71.07** **15.64** **2.94** **Google QAT** **70.64** **17.2** **2.65** Click here for FullGoogle's Gemma 3 (27B) QAT Benchmarks: [Direct link to heading](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs#click-here-for-fullgoogles-gemma-3-27b-qat-benchmarks) Model Unsloth Unsloth + QAT Disk Size Efficiency IQ1\_S 41.87 43.37 6.06 3.03 IQ1\_M 48.10 47.23 6.51 3.42 IQ2\_XXS 59.20 56.57 7.31 4.32 IQ2\_M 66.47 64.47 8.96 4.40 Q2\_K 68.50 67.60 9.78 4.35 Q2\_K\_XL 68.70 67.77 9.95 4.30 IQ3\_XXS 68.27 67.07 10.07 4.18 Q3\_K\_M 70.70 69.77 12.51 3.58 Q3\_K\_XL 70.87 69.50 12.76 3.49 Q4\_K\_M 71.23 71.00 15.41 2.98 **Q4\_K\_XL** **71.47** **71.07** **15.64** **2.94** Q5\_K\_M 71.77 71.23 17.95 2.58 Q6\_K 71.87 71.60 20.64 2.26 Q8\_0 71.60 71.53 26.74 1.74 **Google QAT** **70.64** **17.2** **2.65** ## [Direct link to heading](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs\#llama-4-bug-fixes--run) 🦙 Llama 4 Bug Fixes + Run We also helped and fixed a few Llama 4 bugs: - Llama 4 Scout changed the RoPE Scaling configuration in their official repo. We helped resolve issues in llama.cpp to enable this [change here](https://github.com/ggml-org/llama.cpp/pull/12889) ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FaJ5AOubUkMjbbvgiOekf%252Fimage.png%3Falt%3Dmedia%26token%3Db1fbdea1-7c95-4afa-9b12-aedec012f38b&width=768&dpr=4&quality=100&sign=2203a04c&sv=2) - Llama 4's QK Norm's epsilon for both Scout and Maverick should be from the config file - this means using 1e-05 and not 1e-06. We helped resolve these in [llama.cpp](https://github.com/ggml-org/llama.cpp/pull/12889) and [transformers](https://github.com/huggingface/transformers/pull/37418) - The Llama 4 team and vLLM also independently fixed an issue with QK Norm being shared across all heads (should not be so) [here](https://github.com/vllm-project/vllm/pull/16311). MMLU Pro increased from 68.58% to 71.53% accuracy. - [Wolfram Ravenwolf](https://x.com/WolframRvnwlf/status/1909735579564331016) showcased how our GGUFs via llama.cpp attain much higher accuracy than third party inference providers - this was most likely a combination of the issues explained above, and also probably due to quantization issues. ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F4Wrz07bAdvluM2gACggU%252FGoC79hYXwAAPTMs.jpg%3Falt%3Dmedia%26token%3D05001bc0-74b0-4bbb-a89f-894fcdb985d8&width=768&dpr=4&quality=100&sign=23d1a190&sv=2) As shown in our graph, our 4-bit Dynamic QAT quantization deliver better performance on 5-shot MMLU while also being smaller in size. 
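As a quick sanity check on the Efficiency metric defined in the Gemma 3 section above, the table values can be reproduced with plain arithmetic; the numbers below are taken straight from those tables.

```python
# Efficiency = (MMLU 5-shot score - 25) / disk space in GB, as defined above.
def efficiency(mmlu_5shot_percent: float, disk_gb: float) -> float:
    return (mmlu_5shot_percent - 25) / disk_gb

print(round(efficiency(67.07, 7.52), 2))   # Gemma 3 12B QAT Q4_0          -> 5.59
print(round(efficiency(71.07, 15.64), 2))  # 27B Unsloth + QAT Q4_K_XL     -> 2.95 (2.94 in the table)
print(round(efficiency(70.64, 17.2), 2))   # 27B Google QAT                -> 2.65
```

The small discrepancy on the Q4\_K\_XL row looks like rounding in the table rather than a different formula.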
### [Direct link to heading](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs\#running-llama-4-scout) Running Llama 4 Scout:

To run Llama 4 Scout, for example, first clone and build llama.cpp:

```bash
apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
```

Then download our new Dynamic v2.0 quant for Scout:

```python
# !pip install huggingface_hub hf_transfer
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download
snapshot_download(
    repo_id = "unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF",
    local_dir = "unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF",
    allow_patterns = ["*IQ2_XXS*"],
)
```

And let's do inference!

```bash
./llama.cpp/llama-cli \
    --model unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF/Llama-4-Scout-17B-16E-Instruct-UD-IQ2_XXS.gguf \
    --threads 32 \
    --ctx-size 16384 \
    --n-gpu-layers 99 \
    -ot ".ffn_.*_exps.=CPU" \
    --seed 3407 \
    --prio 3 \
    --temp 0.6 \
    --min-p 0.01 \
    --top-p 0.9 \
    -no-cnv \
    --prompt "<|header_start|>user<|header_end|>\n\nCreate a Flappy Bird game.<|eot|><|header_start|>assistant<|header_end|>\n\n"
```

Read more on running Llama 4 here: [https://docs.unsloth.ai/basics/tutorial-how-to-run-and-fine-tune-llama-4](https://docs.unsloth.ai/basics/tutorial-how-to-run-and-fine-tune-llama-4)

[PreviousQwen3: How to Run & Fine-tune](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune) [NextLlama 4: How to Run & Fine-tune](https://docs.unsloth.ai/basics/llama-4-how-to-run-and-fine-tune)

Last updated 7 days ago

Was this helpful?
Qwen3, full fine-tuning & all models are now supported! 🦥

## [Direct link to heading](https://docs.unsloth.ai/get-started/installing-+-updating/pip-install\#recommended-installation) **Recommended installation:**

**Install with pip (recommended)** for Linux devices:

```bash
pip install unsloth
```

Python 3.13 does not support Unsloth. Use 3.12, 3.11, 3.10 or 3.9.

* * *

## [Direct link to heading](https://docs.unsloth.ai/get-started/installing-+-updating/pip-install\#uninstall--reinstall) Uninstall + Reinstall

If you're still encountering dependency issues with Unsloth, many users have resolved them by forcibly uninstalling and reinstalling Unsloth:

```bash
pip install --upgrade --force-reinstall --no-cache-dir --no-deps git+https://github.com/unslothai/unsloth.git
pip install --upgrade --force-reinstall --no-cache-dir --no-deps git+https://github.com/unslothai/unsloth-zoo.git
```

## [Direct link to heading](https://docs.unsloth.ai/get-started/installing-+-updating/pip-install\#advanced-pip-installation) Advanced Pip Installation

Do **NOT** use this if you have [Conda](https://docs.unsloth.ai/get-started/installing-+-updating/conda-install). Pip is a bit more complex since there are dependency issues. The pip command is different for `torch 2.2, 2.3, 2.4, 2.5` and CUDA versions. For other torch versions, we support `torch211`, `torch212`, `torch220`, `torch230`, `torch240`, and for CUDA versions, we support `cu118`, `cu121` and `cu124`. For Ampere devices (A100, H100, RTX3090) and above, use `cu118-ampere`, `cu121-ampere` or `cu124-ampere`.

For example, if you have `torch 2.4` and `CUDA 12.1`, use:

```bash
pip install --upgrade pip
pip install "unsloth[cu121-torch240] @ git+https://github.com/unslothai/unsloth.git"
```

Another example, if you have `torch 2.5` and `CUDA 12.4`, use:

```bash
pip install --upgrade pip
pip install "unsloth[cu124-torch250] @ git+https://github.com/unslothai/unsloth.git"
```

And other examples:

```bash
pip install "unsloth[cu121-ampere-torch240] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu118-ampere-torch240] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu121-torch240] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu118-torch240] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu121-torch230] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu121-ampere-torch230] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu121-torch250] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu124-ampere-torch250] @ git+https://github.com/unslothai/unsloth.git"
```

Or, run the below in a terminal to get the **optimal** pip installation command:

```bash
wget -qO- https://raw.githubusercontent.com/unslothai/unsloth/main/unsloth/_auto_install.py | python -
```

Or, run the below manually in a Python REPL:

```python
try: import torch
except: raise ImportError('Install torch via `pip install torch`')
from packaging.version import Version as V
v = V(torch.__version__)
cuda = str(torch.version.cuda)
is_ampere = torch.cuda.get_device_capability()[0] >= 8
if cuda != "12.1" and cuda != "11.8" and cuda != "12.4": raise RuntimeError(f"CUDA = {cuda} not supported!")
if   v <= V('2.1.0'): raise RuntimeError(f"Torch = {v} too old!")
elif v <= V('2.1.1'): x = 'cu{}{}-torch211'
elif v <= V('2.1.2'): x = 'cu{}{}-torch212'
elif v  < V('2.3.0'): x = 'cu{}{}-torch220'
elif v  < V('2.4.0'): x = 'cu{}{}-torch230'
elif v  < V('2.5.0'): x = 'cu{}{}-torch240'
elif v  < V('2.6.0'): x = 'cu{}{}-torch250'
else: raise RuntimeError(f"Torch = {v} too new!")
x = x.format(cuda.replace(".", ""), "-ampere" if is_ampere else "")
print(f'pip install --upgrade pip && pip install "unsloth[{x}] @ git+https://github.com/unslothai/unsloth.git"')
```

[PreviousUpdating](https://docs.unsloth.ai/get-started/installing-+-updating/updating) [NextWindows Installation](https://docs.unsloth.ai/get-started/installing-+-updating/windows-installation)

Last updated 8 days ago

Was this helpful?
Qwen3, full fine-tuning & all models are now supported! 🦥 Read our full DeepSeek-R1 blogpost here: [unsloth.ai/blog/deepseekr1-dynamic](https://unsloth.ai/blog/deepseekr1-dynamic) ### [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-r1-how-to-run-locally/deepseek-r1-dynamic-1.58-bit\#id-1-bit-small-dynamic-vs.-basic) 1-bit (Small) - Dynamic vs. Basic GGUF Type Quant Size (GB) Seed Pygame Background Accelerate SPACE Bird shape Land Top right score Pipes Best Score Quit Runnable Score Avg Score Errors Notes Dynamic IQ1\_S 131 3407 1 0.5 1 0.5 0.5 1 0.5 1 1 0 7 score =!inc SyntaxError: invalid syntax Selects random shapes and colors at the start, but doesn't rotate across trials Dynamic IQ1\_S 131 3408 1 1 0.25 1 0.5 1 0.5 1 1 0 7.25 score =B4 NameError: name 'B4' is not defined Better - selects pipe colors randomnly, but all are just 1 color - should be different. Dropping to ground fails to reset acceleration. Dynamic IQ1\_S 131 3409 1 0.5 0.5 0.5 0 1 1 1 1 0 6.5 6.92 score =3D 0 SyntaxError: invalid decimal literal Too hard to play - acceleration too fast. Pipe colors now are random, but bird shape not changing. Land collison fails. Basic IQ1\_S 133 3407 0 0 0 0 0 0 0 0 0 0 0 No code Fully failed. Repeats "with Dark Colurs" forever Basic IQ1\_S 133 3408 0 0 0 0 0 0 0 0 0 0 0 No code Fully failed. Repeats "Pygame's" forever Basic IQ1\_S 133 3409 0 0 0 0 0 0 0 0 0 0 0 0 No code Fully failed. Repeats "pipe\_x = screen\_height pipe\_x = screen\_height pipe\_height = screen\_height - Pipe\_height" forever. ### [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-r1-how-to-run-locally/deepseek-r1-dynamic-1.58-bit\#id-1-bit-medium-dynamic-vs.-basic) 1-bit (Medium) - Dynamic vs. Basic GGUF Type Quant Size (GB) Seed Pygame Background Accelerate SPACE Bird shape Land Top right score Pipes Best Score Quit Runnable Score Avg Score Errors Notes Dynamic IQ1\_M 158 3407 1 1 0.75 1 1 1 1 1 1 1 9.75 None A bit fast and hard to play. Dynamic IQ1\_M 158 3408 1 1 0.5 1 1 1 1 1 1 1 9.5 None Very good - land should be clearer. Acceleration should be slower. Dynamic IQ1\_M 158 3409 1 0.5 1 0.5 0.5 1 0.5 1 1 1 8 9.08 None Background color does not change across trials.Pipes do not touch the top. No land is seen. Basic IQ1\_M 149 3407 1 0 0 0 0 0 0 0 1 0 2 if game\_over: NameError: name 'game\_over' is not defined Fully failed. Black screen only Basic IQ1\_M 149 3408 1 0 0 0 0 0 0 0 1 0 2 No code Fully failed. Black screen then closes. Basic IQ1\_M 149 3409 1 0 0 0 0 0 0 0 0 0 1 1.67 window.fill((100, 100, 255)) Light Blue SyntaxError: invalid syntax && main() NameError: name 'main' is not defined. Fully failed. ### [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-r1-how-to-run-locally/deepseek-r1-dynamic-1.58-bit\#id-2-bit-extra-extra-small-dynamic-vs.-basic) 2-bit (Extra extra Small) - Dynamic vs. Basic GGUF Type Quant Size (GB) Seed Pygame Background Accelerate SPACE Bird shape Land Top right score Pipes Best Score Quit Runnable Score Avg Score Errors Notes Dynamic IQ2\_XXS 183 3407 1 1 0.5 1 1 1 1 1 1 1 9.5 None Too hard to play - acceleration too slow. Lags Dynamic IQ2\_XXS 183 3408 1 1 1 1 1 1 0.5 0.5 1 0 8 global best\_score SyntaxError: name 'best\_score' is assigned to before global declaration Had to edit 2 lines - remove global best\_score, and set pipe\_list = \[\] Dynamic IQ2\_XXS 183 3409 1 1 1 1 1 1 1 1 1 1 10 9.17 None Extremely good. 
Even makes pipes have random distances between them. Basic IQ2\_XXS 175 3407 1 0.5 0.5 0.5 1 0 0.5 1 0 0 5 pipe\_color = random.choice(\[(34, 139, 34), (139, 69, 19), (47, 47, 47)) SyntaxError: closing parenthesis ')' does not match opening parenthesis '\[' && pygame.draw.polygon(screen, bird\_color, points) ValueError: points argument must contain more than 2 points\ \ Fails quiting. Same color. Collison detection a bit off. No score\ \ Basic\ \ IQ2\_XXS\ \ 175\ \ 3408\ \ 1\ \ 0.5\ \ 0.5\ \ 0.5\ \ 1\ \ 1\ \ 0.5\ \ 1\ \ 0\ \ 0\ \ 6\ \ pipes.append({'x': SCREEN\_WIDTH, 'gap\_y': random.randint(50, SCREEN\_HEIGHT - 150)) SyntaxError: closing parenthesis ')' does not match opening parenthesis '{'\ \ Acceleration weird. Chooses 1 color per round. Cannot quit.\ \ Basic\ \ IQ2\_XXS\ \ 175\ \ 3409\ \ 1\ \ 1\ \ 1\ \ 1\ \ 1\ \ 1\ \ 1\ \ 0\ \ 0.5\ \ 0\ \ 7.5\ \ 6.17\ \ screen = pygame.display.set\_mode((SCREEN\_WIDTH, SCREENHEIGHT)) NameError: name 'SCREENHEIGHT' is not defined. Did you mean: 'SCREEN\_HEIGHT'?\ \ OK. Colors change. Best score does not update. Quit only ESC not Q.\ \ ### [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-r1-how-to-run-locally/deepseek-r1-dynamic-1.58-bit\#dynamic-quantization-trial-output) **Dynamic Quantization trial output**\ \ IQ1\_S codeIQ1\_M codeIQ2\_XXS code\ \ [12KB\\ \\ inference\_UD-IQ1\_S\_3407.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FqpBdpW55h5mNAzVoTxPI%2Finference_UD-IQ1_S_3407.txt?alt=media&token=37b19689-73e5-46d0-98be-352e515dfdf8)\ \ [11KB\\ \\ inference\_UD-IQ1\_S\_3408.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FTdIrJSqc2VbNJy1bf3w5%2Finference_UD-IQ1_S_3408.txt?alt=media&token=e11f73bb-80be-49e5-91e2-f3a1f5495dcd)\ \ [10KB\\ \\ inference\_UD-IQ1\_S\_3409.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FBk2ZwEIcLmvZQ3jlMLzw%2Finference_UD-IQ1_S_3409.txt?alt=media&token=052885f5-bee9-420d-a9c0-827412ac17c8)\ \ [10KB\\ \\ inference\_UD-IQ1\_M\_3407.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2Ft7YmT1H3Nflcy5kAp1LE%2Finference_UD-IQ1_M_3407.txt?alt=media&token=6f62f911-3364-4f92-b311-c1fa9b759370)\ \ [30KB\\ \\ inference\_UD-IQ1\_M\_3408.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FH6BCTeWlJpUkfeEmeqpu%2Finference_UD-IQ1_M_3408.txt?alt=media&token=7727a999-8c0a-4baf-8542-be8686a01630)\ \ [9KB\\ \\ inference\_UD-IQ1\_M\_3409.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FvVJI0H2F9KTNj5kwUCtC%2Finference_UD-IQ1_M_3409.txt?alt=media&token=0f863d41-53d6-4c94-8d57-bf1eeb79ead5)\ \ [29KB\\ \\ inference\_UD-IQ2\_XXS\_3407.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F26jxRY5mWuon67OfvGtq%2Finference_UD-IQ2_XXS_3407.txt?alt=media&token=daf9bf7d-245e-4b54-b0c0-a6273833835a)\ \ [34KB\\ \\ inference\_UD-IQ2\_XXS\_3408.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FEhjjYN7vAh7gbmR8oXbS%2Finference_UD-IQ2_XXS_3408.txt?alt=media&token=4b50d6dd-2798-44c7-aa92-7e67c09868a4)\ \ [42KB\\ \\ 
inference\_UD-IQ2\_XXS\_3409.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FXwCSfIf16nTwHzcWepoV%2Finference_UD-IQ2_XXS_3409.txt?alt=media&token=2f7539c9-026d-41e7-b7c7-5738a89ae5d4)\ \ ### [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-r1-how-to-run-locally/deepseek-r1-dynamic-1.58-bit\#non-dynamic-quantization-trial-output) Non Dynamic Quantization trial output\ \ IQ1\_S basic codeIQ1\_M basic codeIQ2\_XXS basic code\ \ [25KB\\ \\ inference\_basic-IQ1\_S\_3407.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FFtAMzAucSfKMkkmXItTj%2Finference_basic-IQ1_S_3407.txt?alt=media&token=76bfcf47-e1ce-442b-af49-6bfb6af7d046)\ \ [15KB\\ \\ inference\_basic-IQ1\_S\_3408.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F4NhjCVFMwCwT2OCj0IJ5%2Finference_basic-IQ1_S_3408.txt?alt=media&token=d4715674-3347-400b-9eb6-ae5d4470feeb)\ \ [14KB\\ \\ inference\_basic-IQ1\_S\_3409.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2Fb0ZW3xs7R7IMryO7n7Yp%2Finference_basic-IQ1_S_3409.txt?alt=media&token=64b8825b-7103-4708-9d12-12770e43b546)\ \ [7KB\\ \\ inference\_basic-IQ1\_M\_3407.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FmZ2TsQEzoGjhGlqUjtmj%2Finference_basic-IQ1_M_3407.txt?alt=media&token=975a30d6-2d90-47eb-9d68-b50fd47337f7)\ \ [7KB\\ \\ inference\_basic-IQ1\_M\_3408.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FIx9TQ99Qpmk7BViNLFBl%2Finference_basic-IQ1_M_3408.txt?alt=media&token=b88e1e5b-4535-4d93-bd67-f81def7377d5)\ \ [12KB\\ \\ inference\_basic-IQ1\_M\_3409.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FDX7XYpJPxXKAMZeGhSrr%2Finference_basic-IQ1_M_3409.txt?alt=media&token=6da9127e-272b-4e74-b990-6657e25eea6b)\ \ [25KB\\ \\ inference\_basic-IQ2\_XXS\_3407.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FajsVHsVqlWpwHk7mY32t%2Finference_basic-IQ2_XXS_3407.txt?alt=media&token=cbbf36a2-0d6a-4a87-8232-45b0b7fcc588)\ \ [34KB\\ \\ inference\_basic-IQ2\_XXS\_3408.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F4vjncPu2r2D7F5jVOC7I%2Finference_basic-IQ2_XXS_3408.txt?alt=media&token=9ed635a2-bf97-4f49-b26f-6e985d0ab1b7)\ \ [34KB\\ \\ inference\_basic-IQ2\_XXS\_3409.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FJmVOFgrRyXjY4lYZXE96%2Finference_basic-IQ2_XXS_3409.txt?alt=media&token=faad5bff-ba7f-41f1-abd5-7896f17a5b25)\ \ [PreviousDeepSeek-R1: How to Run Locally](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-r1-how-to-run-locally) [NextTutorial: How to Finetune Llama-3 and Use In Ollama](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama)\ \ Last updated 3 months ago\ \ Was this helpful?
Qwen3, full fine-tuning & all models are now supported! 🦥 Qwen's new Qwen3 models deliver state-of-the-art advancements in reasoning, instruction-following, agent capabilities, and multilingual support. All Qwen3 uploads use our new Unsloth [Dynamic 2.0](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs) methodology, delivering the best performance on 5-shot MMLU and KL Divergence benchmarks. This means, you can run and fine-tune quantized Qwen3 LLMs with minimal accuracy loss! We also uploaded Qwen3 with native 128K context length. Qwen achieves this by using YaRN to extend its original 40K window to 128K. [Unsloth](https://github.com/unslothai/unsloth) also now supports fine-tuning of Qwen3 and Qwen3 MOE models — 2x faster, with 70% less VRAM, and 8x longer context lengths. Fine-tune Qwen3 (14B) for free using our [Colab notebook.](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_(14B)-Reasoning-Conversational.ipynb) • [**Running Qwen3 Tutorial**](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune#ollama-run-qwen3-tutorial) • [**Fine-tuning Qwen3 Tutorial**](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune#fine-tuning-qwen3-with-unsloth) #### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#qwen3-unsloth-dynamic-2.0-with-optimal-configs) **Qwen3 - Unsloth Dynamic 2.0** with optimal configs: Dynamic 2.0 GGUF (to run) 128K Context GGUF Dynamic 4-bit Safetensor (to finetune/deploy) - [0.6B](https://huggingface.co/unsloth/Qwen3-0.6B-GGUF) - [1.7B](https://huggingface.co/unsloth/Qwen3-1.7B-GGUF) - [4B](https://huggingface.co/unsloth/Qwen3-4B-GGUF) - [8B](https://huggingface.co/unsloth/Qwen3-8B-GGUF) - [14B](https://huggingface.co/unsloth/Qwen3-14B-GGUF) - [30B-A3B](https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF) - [32B](https://huggingface.co/unsloth/Qwen3-32B-GGUF) - [235B-A22B](https://huggingface.co/unsloth/Qwen3-235B-A22B-GGUF) - [4B](https://huggingface.co/unsloth/Qwen3-4B-128K-GGUF) - [8B](https://huggingface.co/unsloth/Qwen3-8B-128K-GGUF) - [14B](https://huggingface.co/unsloth/Qwen3-14B-128K-GGUF) - [30B-A3B](https://huggingface.co/unsloth/Qwen3-30B-A3B-128K-GGUF) - [32B](https://huggingface.co/unsloth/Qwen3-32B-128K-GGUF) - [235B-A22B](https://huggingface.co/unsloth/Qwen3-235B-A22B-128K-GGUF) - [0.6B](https://huggingface.co/unsloth/Qwen3-0.6B-unsloth-bnb-4bit) - [1.7B](https://huggingface.co/unsloth/Qwen3-1.7B-unsloth-bnb-4bit) - [4B](https://huggingface.co/unsloth/Qwen3-4B-unsloth-bnb-4bit) - [8B](https://huggingface.co/unsloth/Qwen3-8B-unsloth-bnb-4bit) - [14B](https://huggingface.co/unsloth/Qwen3-14B-unsloth-bnb-4bit) - [30B-A3B](https://huggingface.co/unsloth/Qwen3-30B-A3B-bnb-4bit) - [32B](https://huggingface.co/unsloth/Qwen3-32B-unsloth-bnb-4bit) ## [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#running-qwen3) 🖥️ **Running Qwen3** ### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#official-recommended-settings) ⚙️ Official Recommended Settings According to Qwen, these are the recommended settings for inference: Non-Thinking Mode Settings: Thinking Mode Settings: **Temperature = 0.7** **Temperature = 0.6** Min\_P = 0.0 (optional, but 0.01 works well, llama.cpp default is 0.1) Min\_P = 0.0 Top\_P = 0.8 Top\_P = 0.95 TopK = 20 TopK = 20 **Chat template/prompt format:** Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] whitespace-pre-wrap <|im_start|>user\nWhat is 
2+2?<|im_end|>\n<|im_start|>assistant\n ``` For NON thinking mode, we purposely enclose <think> and </think> with nothing: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] whitespace-pre-wrap <|im_start|>user\nWhat is 2+2?<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n ``` **For Thinking-mode, DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. ### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#switching-between-thinking-and-non-thinking-mode) Switching Between Thinking and Non-Thinking Mode Qwen3 models come with built-in "thinking mode" to boost reasoning and improve response quality - similar to how [QwQ-32B](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/qwq-32b-how-to-run-effectively) worked. Instructions for switching will differ depending on the inference engine you're using so ensure you use the correct instructions. #### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#instructions-for-llama.cpp-and-ollama) Instructions for llama.cpp and Ollama: You can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations. Here is an example of multi-turn conversation: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] > Who are you /no_think <think> </think> I am Qwen, a large-scale language model developed by Alibaba Cloud. [...] > How many 'r's are in 'strawberries'? /think <think> Okay, let's see. The user is asking how many times the letter 'r' appears in the word "strawberries". [...] </think> The word strawberries contains 3 instances of the letter r. [...] ``` #### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#instructions-for-transformers-and-vllm) Instructions for transformers and vLLM: **Thinking mode:** `enable_thinking=True` By default, Qwen3 has thinking enabled. When you call `tokenizer.apply_chat_template`, you **don’t need to set anything manually.** Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # Default is True ) ``` In thinking mode, the model will generate an extra `<think>...</think>` block before the final answer — this lets it "plan" and sharpen its responses. **Non-thinking mode:** `enable_thinking=False` Enabling non-thinking will make Qwen3 will skip all the thinking steps and behave like a normal LLM. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=False # Disables thinking mode ) ``` This mode will provide final responses directly — no `<think>` blocks, no chain-of-thought. ### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#ollama-run-qwen3-tutorial) 🦙 Ollama: Run Qwen3 Tutorial 1. Install `ollama` if you haven't already! You can only run models up to 32B in size. To run the full 235B-A22B model, [see here](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune#running-qwen3-235b-a22b). Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] apt-get update apt-get install pciutils -y curl -fsSL https://ollama.com/install.sh | sh ``` 1. 
Run the model! Note you can call `ollama serve` in another terminal if it fails! We include all our fixes and suggested parameters (temperature etc) in `params` in our Hugging Face upload! Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] ollama run hf.co/unsloth/Qwen3-8B-GGUF:Q4_K_XL ``` 1. To disable thinking, use (or you can set it in the system prompt): Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] >>> Write your prompt here /nothink ``` If you're experiencing any looping, Ollama might have set your context length window to 2,048 or so. If this is the case, bump it up to 32,000 and see if the issue still persists. ### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#llama.cpp-run-qwen3-tutorial) 📖 Llama.cpp: Run Qwen3 Tutorial 1. Obtain the latest `llama.cpp` on [GitHub here](https://github.com/ggml-org/llama.cpp). You can follow the build instructions below as well. Change `-DGGML_CUDA=ON` to `-DGGML_CUDA=OFF` if you don't have a GPU or just want CPU inference. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] apt-get update apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y git clone https://github.com/ggml-org/llama.cpp cmake llama.cpp -B llama.cpp/build \ -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-gguf-split cp llama.cpp/build/bin/llama-* llama.cpp ``` 1. Download the model via (after installing `pip install huggingface_hub hf_transfer` ). You can choose Q4\_K\_M, or other quantized versions. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] # !pip install huggingface_hub hf_transfer import os os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1" from huggingface_hub import snapshot_download snapshot_download( repo_id = "unsloth/Qwen3-32B-GGUF", local_dir = "unsloth/Qwen3-32B-GGUF", allow_patterns = ["*UD-Q4_K_XL*"], ) ``` 1. Run the model and try any prompt. To disable thinking, use (or you can set it in the system prompt): Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] >>> Write your prompt here /nothink ``` ### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#running-qwen3-235b-a22b) Running Qwen3-235B-A22B For Qwen3-235B-A22B, we will specifically use Llama.cpp for optimized inference and a plethora of options. 1. We're following similar steps to above however this time we'll also need to perform extra steps because the model is so big. 2. Download the model via (after installing `pip install huggingface_hub hf_transfer` ). You can choose UD\_IQ2\_XXS, or other quantized versions.. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] # !pip install huggingface_hub hf_transfer import os os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1" from huggingface_hub import snapshot_download snapshot_download( repo_id = "unsloth/Qwen3-235B-A22B-GGUF", local_dir = "unsloth/Qwen3-235B-A22B-GGUF", allow_patterns = ["*UD-IQ2_XXS*"], ) ``` 3. Run the model and try any prompt. 4. Edit `--threads 32` for the number of CPU threads, `--ctx-size 16384` for context length, `--n-gpu-layers 99` for GPU offloading on how many layers. Try adjusting it if your GPU goes out of memory. Also remove it if you have CPU only inference. Use `-ot ".ffn_.*_exps.=CPU"` to offload all MoE layers to the CPU! 
This effectively allows you to fit all non MoE layers on 1 GPU, improving generation speeds. You can customize the regex expression to fit more layers if you have more GPU capacity. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] whitespace-pre-wrap ./llama.cpp/llama-cli \ --model unsloth/Qwen3-235B-A22B-GGUF/Qwen3-235B-A22B-UD-IQ2_XXS.gguf \ --threads 32 \ --ctx-size 16384 \ --n-gpu-layers 99 \ -ot ".ffn_.*_exps.=CPU" \ --seed 3407 \ --prio 3 \ --temp 0.6 \ --min-p 0.0 \ --top-p 0.95 \ --top-k 20 \ -no-cnv \ --prompt "<|im_start|>user\nCreate a Flappy Bird game in Python. You must include these things:\n1. You must use pygame.\n2. The background color should be randomly chosen and is a light shade. Start with a light blue color.\n3. Pressing SPACE multiple times will accelerate the bird.\n4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.\n5. Place on the bottom some land colored as dark brown or yellow chosen randomly.\n6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.\n7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.\n8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.\nThe final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<|im_end|>\n<|im_start|>assistant\n" ``` ## [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#fine-tuning-qwen3-with-unsloth) 🦥 Fine-tuning Qwen3 with Unsloth Unsloth makes Qwen3 fine-tuning 2x faster, use 70% less VRAM and supports 8x longer context lengths. Qwen3 (14B) fits comfortably in a Google Colab 16GB VRAM Tesla T4 GPU. Because Qwen3 supports both reasoning and non-reasoning, you can fine-tune it with a non-reasoning dataset, but this may affect its reasoning ability. If you want to maintain its reasoning capabilities (optional), you can use a mix of direct answers and chain-of-thought examples. Use 75% reasoning and 25% non-reasoning in your dataset to make the model retain its reasoning capabilities. Our Conversational notebook uses a combo of 75% NVIDIA’s open-math-reasoning dataset and 25% Maxime’s FineTome dataset (non-reasoning). Here's free Unsloth Colab notebooks to fine-tune Qwen3: - [Qwen3 (14B) Reasoning + Conversational notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_(14B)-Reasoning-Conversational.ipynb) (recommended) - [Qwen3 (14B) Alpaca notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_(14B)-Alpaca.ipynb) (for Base models) If you have an old version of Unsloth and/or are fine-tuning locally, install the latest version of Unsloth: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] pip install --upgrade --force-reinstall --no-cache-dir unsloth unsloth_zoo ``` ### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#qwen3-moe-models-fine-tuning) Qwen3 MOE models fine-tuning Fine-tuning support includes MOE models: 30B-A3B and 235B-A22B. Qwen3-30B-A3B works on just 17.5GB VRAM with Unsloth. On fine-tuning MoE's - it's probably not a good idea to fine-tune the router layer so we disabled it by default. 
The 30B-A3B fits in 17.5GB VRAM, but you may lack RAM or disk space since the full 16-bit model must be downloaded and converted to 4-bit on the fly for QLoRA fine-tuning. This is due to issues importing 4-bit BnB MOE models directly. This only affects MOE models. If you're fine-tuning the MOE models, please use `FastModel` and not `FastLanguageModel` Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] from unsloth import FastModel import torch model, tokenizer = FastModel.from_pretrained( model_name = "unsloth/Qwen3-30B-A3B", max_seq_length = 2048, # Choose any for long context! load_in_4bit = True, # 4 bit quantization to reduce memory load_in_8bit = False, # [NEW!] A bit more accurate, uses 2x memory full_finetuning = False, # [NEW!] We have full finetuning now! # token = "hf_...", # use one if using gated models ) ``` ### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#notebook-guide) Notebook Guide: ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FFQX2CBzUqzAIMM50bpM4%252Fimage.png%3Falt%3Dmedia%26token%3D23c4b3d5-0d5f-4906-b2b4-bacde23235e0&width=768&dpr=4&quality=100&sign=dfdb362c&sv=2) To use the notebooks, just click Runtime, then Run all. You can change settings in the notebook to whatever you desire. We have set them automatically by default. Change model name to whatever you like by matching it with model's name on Hugging Face e.g. 'unsloth/Qwen3-8B' or 'unsloth/Qwen3-0.6B-unsloth-bnb-4bit'. There are other settings which you can toggle: - `max_seq_length = 2048` – Controls context length. While Qwen3 supports 40960, we recommend 2048 for testing. Unsloth enables 8× longer context fine-tuning. - `load_in_4bit = True` – Enables 4-bit quantization, reducing memory use 4× for fine-tuning on 16GB GPUs. - For **full-finetuning** \- set `full_finetuning = True` and **8-bit finetuning** \- set `load_in_8bit = True` If you'd like to read a full end-to-end guide on how to use Unsloth notebooks for fine-tuning or just learn about fine-tuning, creating [datasets](https://docs.unsloth.ai/basics/datasets-guide) etc., view our [complete guide here](https://docs.unsloth.ai/get-started/fine-tuning-guide): [🧬Fine-tuning Guide](https://docs.unsloth.ai/get-started/fine-tuning-guide) [📈Datasets Guide](https://docs.unsloth.ai/basics/datasets-guide) [PreviousLoRA Hyperparameters Guide](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide) [NextUnsloth Dynamic 2.0 GGUFs](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs) Last updated 3 days ago Was this helpful?
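As a follow-up to the 75% reasoning / 25% non-reasoning recommendation in the fine-tuning section above, here is a rough sketch of how such a mix could be assembled with the `datasets` library. The dataset IDs and split names below are assumptions for illustration (the notebook combines NVIDIA's open-math-reasoning data with Maxime's FineTome); both sets still need to be mapped to the same chat format before combining, as the notebook does.

```python
# Sketch: build a ~75% reasoning / 25% chat mixture. Dataset IDs and splits are
# placeholders -- substitute the ones from the notebook or your own data.
from datasets import load_dataset, concatenate_datasets

reasoning = load_dataset("nvidia/OpenMathReasoning", split="cot")  # assumed ID and split
chat = load_dataset("mlabonne/FineTome-100k", split="train")       # assumed ID

# Keep every reasoning row and sample chat rows so they make up ~25% of the total.
n_chat = len(reasoning) // 3  # 75/25 ratio => chat count is one third of reasoning count
chat = chat.shuffle(seed=3407).select(range(min(n_chat, len(chat))))

# After mapping both to one chat template (see the notebook), combine and shuffle:
# mixed = concatenate_datasets([reasoning, chat]).shuffle(seed=3407)
print(len(reasoning), len(chat))
```

The fixed seed keeps the subsample reproducible; the 75/25 split is the starting point suggested above, not a hard rule.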
Qwen3, full fine-tuning & all models are now supported! 🦥 ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FQzuUQL60uFWHpaAvDPYD%252FColab%2520Options.png%3Falt%3Dmedia%26token%3Dfb808ec5-20c5-4f42-949e-14ed26a44987&width=768&dpr=4&quality=100&sign=be097a14&sv=2) If you have never used a Colab notebook, a quick primer on the notebook itself: 1. **Play Button at each "cell".** Click on this to run that cell's code. You must not skip any cells and you must run every cell in chronological order. If you encounter errors, simply rerun the cell you did not run. Another option is to click CTRL + ENTER if you don't want to click the play button. 2. **Runtime Button in the top toolbar.** You can also use this button and hit "Run all" to run the entire notebook in 1 go. This will skip all the customization steps, but is a good first try. 3. **Connect / Reconnect T4 button.** T4 is the free GPU Google is providing. It's quite powerful! The first installation cell looks like below: Remember to click the PLAY button in the brackets \[ \]. We grab our open source Github package, and install some other packages. ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FIz2XUXhcmjheDtxfvbLA%252Fimage.png%3Falt%3Dmedia%26token%3Db9da0e5c-075c-48f8-8abb-5db6fdf9866b&width=768&dpr=4&quality=100&sign=e33e1780&sv=2) ## [Direct link to heading](https://docs.unsloth.ai/get-started/installing-+-updating/google-colab\#undefined) [PreviousConda Install](https://docs.unsloth.ai/get-started/installing-+-updating/conda-install) [NextFine-tuning Guide](https://docs.unsloth.ai/get-started/fine-tuning-guide) Last updated 10 months ago Was this helpful?
Qwen3, full fine-tuning & all models are now supported! 🦥 Unsloth works on Linux, Windows directly, Kaggle, Google Colab and more. See our [system requirements](https://docs.unsloth.ai/get-started/beginner-start-here/unsloth-requirements). **Recommended installation method:** Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] pip install unsloth ``` [Pip Install](https://docs.unsloth.ai/get-started/installing-+-updating/pip-install) [Windows Installation](https://docs.unsloth.ai/get-started/installing-+-updating/windows-installation) [Updating](https://docs.unsloth.ai/get-started/installing-+-updating/updating) [Conda Install](https://docs.unsloth.ai/get-started/installing-+-updating/conda-install) [Google Colab](https://docs.unsloth.ai/get-started/installing-+-updating/google-colab) [PreviousAll Our Models](https://docs.unsloth.ai/get-started/all-our-models) [NextUpdating](https://docs.unsloth.ai/get-started/installing-+-updating/updating) Last updated 1 month ago Was this helpful?
Fine-tuning TTS models lets you adapt them to your own dataset, use case, or style/tone. This process helps customize the model for unique voices, speaking styles, new languages, or specific types of content.

With Unsloth, you can fine-tune TTS models 1.2x faster and with 50% less memory than other implementations using Flash Attention 2. This support includes OpenAI's Whisper, Orpheus, and most of the currently popular TTS models. Because voice models are usually small, you can train them with LoRA 16-bit or full fine-tuning (FFT), which may provide higher-quality results.

Please note we have not officially announced support for TTS models yet. You can use them, but you might experience errors - if so, please report them on our GitHub. Thank you!

### [Direct link to heading](https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning\#fine-tuning-notebooks) Fine-tuning Notebooks:

- [Orpheus-TTS (3B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Orpheus_(3B)-TTS.ipynb)
- [Whisper Large V3](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Whisper.ipynb)
- [Llasa-TTS (3B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llasa_TTS_(3B).ipynb)
- [Spark-TTS (0.5B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Spark_TTS_(0_5B).ipynb)
- [Oute-TTS (1B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Oute_TTS_(1B).ipynb)

### [Direct link to heading](https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning\#choosing-and-loading-a-tts-model) Choosing and Loading a TTS Model

For TTS, the primary model used in our examples is **Orpheus-TTS (3B)** – a Llama-based speech model. Orpheus was pre-trained on a large speech corpus and can generate highly realistic speech, with support for emotional cues (laughs, sighs, etc.) out of the box. We'll use Orpheus as our example for TTS fine-tuning. To load it in LoRA 16-bit:

```python
from unsloth import FastModel

model_name = "unsloth/orpheus-3b-0.1-pretrained"
model, tokenizer = FastModel.from_pretrained(
    model_name,
    load_in_4bit = False,  # False = LoRA 16-bit; set True for 4-bit (QLoRA)
)
```

When this runs, Unsloth will download the model weights. If you prefer 8-bit, you could use `load_in_8bit=True`, or for full 16-bit fine-tuning set `full_finetuning=True` (ensure you have enough VRAM). You can also replace the model name with other TTS models.

**Note:** Orpheus's tokenizer already includes special tokens for audio output (more on this later). You do _not_ need a separate vocoder – Orpheus will output audio tokens directly, which can be decoded to a waveform.

### [Direct link to heading](https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning\#preparing-your-dataset) Preparing Your Dataset

At minimum, a TTS fine-tuning dataset consists of **audio clips and their corresponding transcripts** (text). Let's use the [_Elise_ dataset](https://huggingface.co/datasets/MrDragonFox/Elise), a small single-speaker dataset of a female character with scripted transcripts, as an example of how to prepare data:

**Elise dataset:** A small (~3 hours) single-speaker speech corpus from Hugging Face. There are two variants:

- [`MrDragonFox/Elise`](https://huggingface.co/datasets/MrDragonFox/Elise) – an augmented version with **emotion tags** embedded in the transcripts.
(This variant adds labels like `<laughs>`, `<sighs>`, etc. to the text.)

- [`Jinsaryko/Elise`](https://huggingface.co/datasets/Jinsaryko/Elise) – base version with plain transcripts.

The dataset is organized with one audio clip and transcript per entry. On Hugging Face, these datasets have fields such as `audio` (the waveform), `text` (the transcription), and some metadata (speaker name, pitch stats, etc.). We need to feed Unsloth a dataset of audio-text pairs.

**Option 1: Using the Hugging Face Datasets library** – This is the easiest route if your data is in HF format or a CSV.

```python
from datasets import load_dataset

# Load the Elise dataset from HF (this variant includes emotion tags)
dataset = load_dataset("MrDragonFox/Elise", split="train")
# Alternatively, use "Jinsaryko/Elise" for the base version without emotion tags
```

This will download the data (approx 328 MB for ~1.2k samples). Each item in `dataset` has `dataset[i]["audio"]` (an Audio object with array data and sampling rate) and `dataset[i]["text"]` (the transcript string). You can inspect a sample:

```python
sample = dataset[0]
print(sample["text"])
# e.g., "Oh, honestly, probably still your house <laughs>. But still, I mean, running the dishes through the dishwasher..."
```

In the `MrDragonFox/Elise` version, you'll notice tags like `<laughs>` or `<chuckles>` in the text – these indicate expressive cues. These tags are enclosed in angle brackets and will be treated as special tokens by the model (they match [Orpheus's expected tags](https://github.com/canopyai/Orpheus-TTS) like `<laugh>` and `<sigh>`).

**Option 2: Preparing a custom dataset** – If you have your own audio files and transcripts:

- Organize audio clips (WAV/FLAC files) in a folder.
- Create a CSV or TSV file with columns for file path and transcript. For example:

```csv
filename,text
0001.wav,Hello there!
0002.wav,<sigh> I am very tired.
```

- Use `load_dataset("csv", data_files="mydata.csv", split="train")` to load it. You might need to tell the dataset loader how to handle audio paths. An alternative is using the `datasets.Audio` feature to load audio data on the fly:

```python
from datasets import load_dataset, Audio

dataset = load_dataset("csv", data_files="mydata.csv", split="train")
dataset = dataset.rename_column("filename", "audio")
dataset = dataset.cast_column("audio", Audio(sampling_rate=24000))
```

Then `dataset[i]["audio"]` will contain the audio array.

- **Ensure transcripts are normalized** (no unusual characters that the tokenizer might not know, except the emotion tags if used). Also ensure all audio files have a consistent sampling rate (resample them if necessary to the target rate the model expects, e.g. 24kHz for Orpheus).

**Emotion tags:** If your dataset includes expressive sounds (laughter, sighs, etc.), mark them in the transcript with a tag. Orpheus supports tags like `<laugh>`, `<chuckle>`, `<sigh>`, `<cough>`, `<sniffle>`, `<groan>`, `<yawn>`, `<gasp>`, etc. For example: `"I missed you <laugh> so much!"`. During training, the model will learn to associate these tags with the corresponding audio patterns. The Elise dataset with tags already has many of these (e.g., 336 occurrences of "laughs", 156 of "sighs", etc. as listed in its card). If your dataset lacks such tags but you want to incorporate them, you can manually annotate the transcripts where the audio contains those expressions.
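To make the resampling and tag checks above concrete, here is a small illustrative sketch. The dataset and column names follow the Elise examples above; adjust them for your own data:

```python
from datasets import load_dataset, Audio
import re

dataset = load_dataset("MrDragonFox/Elise", split="train")

# Resample every clip to the 24 kHz rate Orpheus expects.
dataset = dataset.cast_column("audio", Audio(sampling_rate=24000))

# Collect any <tag> style markers present in the transcripts, so you can
# confirm they match the tags Orpheus supports.
tags = set()
for text in dataset["text"]:
    tags.update(re.findall(r"<[a-z_]+>", text))

print("Emotion tags found:", sorted(tags))
print("Sampling rate of first clip:", dataset[0]["audio"]["sampling_rate"])
```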
In summary, for **dataset preparation**: - You need a **list of (audio, text)** pairs. - Use the HF `datasets` library to handle loading and optional preprocessing (like resampling). - Include any **special tags** in the text that you want the model to learn (ensure they are in `<angle_brackets>` format so the model treats them as distinct tokens). - (Optional) If multi-speaker, you could include a speaker ID token in the text or use a separate speaker embedding approach, but that’s beyond this basic guide (Elise is single-speaker). ### [Direct link to heading](https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning\#fine-tuning-tts-with-unsloth) Fine-Tuning TTS with Unsloth Now, let’s bring it all together and run the fine-tuning. We’ll illustrate using Python code (which you can run in a Jupyter notebook, Colab, etc.). This is analogous to running the Unsloth CLI with corresponding arguments. **Step 1: Initialize Model and Dataset** Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] from unsloth import FastModel from transformers import Trainer, TrainingArguments # Load the pre-trained Orpheus model (in 4-bit mode) and tokenizer model_name = "unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit" model, tokenizer = FastModel.from_pretrained(model_name, load_in_4bit=True) # Load the dataset (Elise) and ensure audio is 24kHz dataset = load_dataset("Jinsaryko/Elise", split="train") # Cast the audio to 24kHz if not already dataset = dataset.cast_column("audio", Audio(sampling_rate=24000)) ``` _Note:_ If memory is very limited or if dataset is large, you can stream or load in chunks. Here, 3h of audio easily fits in RAM. If using your own dataset CSV, load it similarly. **Step 2: Preprocess the data for training** We need to prepare inputs for the Trainer. For text-to-speech, one approach is to train the model in a causal manner: concatenate text and audio token IDs as the target sequence. However, since Orpheus is a decoder-only LLM that outputs audio, we can feed the text as input (context) and have the audio token ids as labels. In practice, Unsloth’s integration might do this automatically if the model’s config identifies it as text-to-speech. If not, we can do something like: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] # Tokenize the text transcripts def preprocess_function(example): # Tokenize the text (keep the special tokens like <laugh> intact) tokens = tokenizer(example["text"], return_tensors="pt") # Flatten to list of token IDs input_ids = tokens["input_ids"].squeeze(0) # The model will generate audio tokens after these text tokens. # For training, we can set labels equal to input_ids (so it learns to predict next token). # But that only covers text tokens predicting the next text token (which might be an audio token or end). # A more sophisticated approach: append a special token indicating start of audio, and let the model generate the rest. # For simplicity, use the same input as labels (the model will learn to output the sequence given itself). return {"input_ids": input_ids, "labels": input_ids} train_data = dataset.map(preprocess_function, remove_columns=dataset.column_names) ``` _Important:_ The above is a simplification. In reality, to fine-tune Orpheus properly, you would need the _audio tokens as part of the training labels_. Orpheus’s pre-training likely involved converting audio to discrete tokens (via an audio codec) and training the model to predict those given the preceding text. 
For fine-tuning on new voice data, you would similarly need to obtain the audio tokens for each clip (using Orpheus's audio codec). The Orpheus GitHub provides a script for data processing – it encodes audio into sequences of `<custom_token_x>` tokens. However, **Unsloth may abstract this away**: if the model is a FastModel with an associated processor that knows how to handle audio, it might automatically encode the audio in the dataset to tokens. If not, you'd have to manually encode each audio clip to token IDs (using Orpheus's codebook). This is an advanced step beyond this guide, but keep in mind that simply using text tokens won't teach the model the actual audio – it needs to match the audio patterns.

For brevity, let's assume Unsloth provides a way to feed audio directly (for example, by setting `processor` and passing the audio array). If Unsloth does not yet support automatic audio tokenization, you might need to use the Orpheus repository's `encode_audio` function to get token sequences for the audio, then use those as labels. (The dataset entries do have `phonemes` and some acoustic features, which suggests such a pipeline.)

**Step 3: Set up training arguments and Trainer**

```python
training_args = TrainingArguments(
    output_dir="orpheus_finetune_elise",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    num_train_epochs=5,
    learning_rate=1e-5,
    fp16=True,                 # use mixed precision if available
    logging_steps=50,
    save_strategy="epoch",
    report_to="none"           # or "tensorboard" if you want to use TB
)

# Instantiate Trainer
trainer = Trainer(
    model=model,
    train_dataset=train_data,
    args=training_args
)
```

Here we use a small per-device batch with gradient accumulation to simulate a batch size of 8, 5 epochs over ~1,200 samples (roughly 750 optimizer steps), LR=1e-5, and FP16 training (which helps even with a 4-bit base). Adjust as needed.

**Step 4: Begin fine-tuning**

```python
trainer.train()
```

This will start the training loop. You should see the loss logged every 50 steps (as set by `logging_steps`). Training time depends on your GPU – for example, on a Colab T4, a few epochs on 3 hours of data may take 1-2 hours. Unsloth's optimizations will make it faster than standard HF training. During training, Unsloth applies its patches, fused ops, and other optimizations behind the scenes to speed up computation.

**Step 5: Save the fine-tuned model**

After training completes (or if you stop it mid-way once you feel it's sufficient), save the model:

```python
trainer.save_model("orpheus_finetune_elise/final")
```

This saves the model weights (for LoRA, it might save only adapter weights if the base is not fully fine-tuned). If you used `--push_model` in the CLI or `trainer.push_to_hub()`, you could upload it to the Hugging Face Hub directly.

Now you should have a fine-tuned TTS model in the directory. The next step is to test it out!
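As a rough illustration of testing the result, the sketch below reloads the saved weights and generates a continuation from a text prompt. This is only a sketch: for Orpheus the generated continuation is a sequence of audio tokens, and turning those into a waveform still requires Orpheus's audio decoder/codec, which is outside the scope of this guide. The prompt format may also differ from what your fine-tune expects.

```python
from unsloth import FastModel

# Reload the fine-tuned weights (path from the save step above).
model, tokenizer = FastModel.from_pretrained(
    "orpheus_finetune_elise/final",
    load_in_4bit = True,
)

prompt = "Oh, I missed you <laugh> so much!"
inputs = tokenizer(prompt, return_tensors = "pt").to(model.device)

# Generate; for Orpheus the continuation is a sequence of <custom_token_x>
# audio tokens that still need to be decoded to audio with Orpheus's codec.
outputs = model.generate(**inputs, max_new_tokens = 256)
print(tokenizer.decode(outputs[0], skip_special_tokens = False))
```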
{ "color-scheme": "light dark", "description": "Learn how to to fine-tune TTS voice models with Unsloth.", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Learn how to to fine-tune TTS voice models with Unsloth.", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Text-to-Speech (TTS) Fine-tuning | Unsloth Documentation", "ogDescription": "Learn how to to fine-tune TTS voice models with Unsloth.", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Text-to-Speech (TTS) Fine-tuning | Unsloth Documentation", "robots": "index, follow", "scrapeId": "446e66cc-c504-4c25-b296-2eb54372acbe", "sourceURL": "https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning", "statusCode": 200, "title": "Text-to-Speech (TTS) Fine-tuning | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Learn how to to fine-tune TTS voice models with Unsloth.", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Text-to-Speech (TTS) Fine-tuning | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning", "viewport": "width=device-width, initial-scale=1" }
To save to 16-bit for VLLM, use:

```python
model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit",)
model.push_to_hub_merged("hf/model", tokenizer, save_method = "merged_16bit", token = "")
```

To merge to 4-bit to load on Hugging Face, first call `merged_4bit`. Then use `merged_4bit_forced` if you are certain you want to merge to 4-bit. We strongly discourage this unless you know what you plan to do with the 4-bit model (e.g. DPO training, or Hugging Face's online inference engine).

```python
model.save_pretrained_merged("model", tokenizer, save_method = "merged_4bit",)
model.push_to_hub_merged("hf/model", tokenizer, save_method = "merged_4bit", token = "")
```

To save just the LoRA adapters, either use:

```python
model.save_pretrained("model")
tokenizer.save_pretrained("model")
```

Or just use our builtin function to do that:

```python
model.save_pretrained_merged("model", tokenizer, save_method = "lora",)
model.push_to_hub_merged("hf/model", tokenizer, save_method = "lora", token = "")
```
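Once the merged 16-bit folder exists, loading it in vLLM looks roughly like the sketch below. This is a minimal example assuming vLLM is installed; `"model"` is the output directory from `save_pretrained_merged` above, and the prompt is illustrative:

```python
from vllm import LLM, SamplingParams

# Point vLLM at the merged 16-bit folder saved above.
llm = LLM(model="model")

sampling = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=128)
outputs = llm.generate(["Write a haiku about sloths."], sampling)
print(outputs[0].outputs[0].text)
```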
{ "color-scheme": "light dark", "description": "Saving models to 16bit for VLLM", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Saving models to 16bit for VLLM", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Saving to VLLM | Unsloth Documentation", "ogDescription": "Saving models to 16bit for VLLM", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Saving to VLLM | Unsloth Documentation", "robots": "index, follow", "scrapeId": "44b8770a-223c-4a52-bb4d-f2992c084b79", "sourceURL": "https://docs.unsloth.ai/basics/running-and-saving-models/saving-to-vllm", "statusCode": 200, "title": "Saving to VLLM | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Saving models to 16bit for VLLM", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Saving to VLLM | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/running-and-saving-models/saving-to-vllm", "viewport": "width=device-width, initial-scale=1" }
There are millions of possible hyperparameter combinations, and choosing the right values is crucial for fine-tuning. You'll learn best practices for hyperparameters - based on insights from hundreds of research papers and experiments - and how they impact the model. **We recommend you use Unsloth's pre-selected defaults.**

The goal is to change hyperparameter values to increase accuracy while also **counteracting** [**over-fitting or underfitting**](https://docs.unsloth.ai/get-started/fine-tuning-guide#avoiding-overfitting-and-underfitting). Over-fitting is where the model memorizes the data and struggles with new questions. We want a model that generalizes, not one that just memorizes.

## [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide\#key-fine-tuning-hyperparameters) Key Fine-tuning Hyperparameters

### [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide\#learning-rate) **Learning Rate**

Defines how much the model's weights adjust per training step.

- **Higher Learning Rates**: Faster training and can help counteract overfitting - just don't set it too high, or the model may overfit instead.
- **Lower Learning Rates**: More stable training, but may require more epochs.
- **Typical Range**: 1e-4 (0.0001) to 5e-5 (0.00005).

### [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide\#epochs) **Epochs**

Number of times the model sees the full training dataset.

- **Recommended:** 1-3 epochs (more than 3 is generally not optimal, unless you want the model to hallucinate much less at the cost of creativity).
- **More Epochs**: Better learning, higher risk of overfitting.
- **Fewer Epochs**: May undertrain the model.

## [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide\#advanced-hyperparameters) **Advanced Hyperparameters:**

| Hyperparameter | Function | Recommended Settings |
| --- | --- | --- |
| **LoRA Rank** | Controls the number of low-rank factors used for adaptation. | 4-128 |
| **LoRA Alpha** | Scaling factor for weight updates. | LoRA Rank \* 1 or 2 |
| **Max Sequence Length** | Maximum context a model can learn. | Adjust based on dataset needs |
| **Batch Size** | Number of samples processed per training step. Higher values require more VRAM. | 1 for long context, 2 or 4 for shorter context |
| **LoRA Dropout** | Dropout rate to prevent overfitting. | 0.1-0.2 |
| **Warmup Steps** | Gradually increases the learning rate at the start of training. | 5-10% of total steps |
| **Scheduler Type** | Adjusts the learning rate dynamically during training. | Linear decay |
| **Seed or Random State** | Ensures reproducibility of results. | Fixed number (e.g., 42) |
| **Weight Decay** | Penalizes large weight updates to prevent overfitting. | 1.0, or 0.3 if you have issues |

## [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide\#lora-hyperparameters-in-unsloth) **LoRA Hyperparameters in Unsloth**

You can manually adjust the hyperparameters below if you'd like - but feel free to skip it, as Unsloth automatically chooses well-balanced defaults for you.
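All of the settings explained below live in a single `FastLanguageModel.get_peft_model` call. As a reference point, a representative configuration looks like the sketch here - the base model name is just an example, and the values mirror the individual snippets that follow rather than being a prescription:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/llama-3.1-8b-bnb-4bit",   # example base model
    max_seq_length = 2048,
    load_in_4bit = True,
)

model = FastLanguageModel.get_peft_model(
    model,
    r = 16,                                   # LoRA rank
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,                          # usually equal to r, or 2*r
    lora_dropout = 0,                         # 0 is optimized
    bias = "none",                            # "none" is optimized
    use_gradient_checkpointing = "unsloth",   # best for long context
    random_state = 3407,
    use_rslora = False,
    loftq_config = None,
)
```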
![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FW1P2qmzGQGDAXQ0pXhRq%252Fparameters.png%3Falt%3Dmedia%26token%3Df146c646-ca31-4459-b1de-499bd1d23fd1&width=768&dpr=4&quality=100&sign=3bae97dd&sv=2)

1. ```python
   r = 16, # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128
   ```

   The rank of the finetuning process. A larger number uses more memory and will be slower, but can increase accuracy on harder tasks. We normally suggest numbers like 8 (for fast finetunes) and up to 128. Numbers that are too large can cause over-fitting, damaging your model's quality.

2. ```python
   target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                     "gate_proj", "up_proj", "down_proj",],
   ```

   We select all modules to finetune. You can remove some to reduce memory usage and make training faster, but we strongly recommend against this. Just train on all modules!

3. ```python
   lora_alpha = 16,
   ```

   The scaling factor for finetuning. A larger number will make the finetune learn more about your dataset, but can promote over-fitting. We suggest setting this equal to the rank `r`, or double it.

4. ```python
   lora_dropout = 0, # Supports any, but = 0 is optimized
   ```

   Leave this as 0 for faster training! It can reduce over-fitting, but not by much.

5. ```python
   bias = "none", # Supports any, but = "none" is optimized
   ```

   Leave this as `"none"` for faster training and less over-fitting!

6. ```python
   use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
   ```

   Options include `True`, `False` and `"unsloth"`. We suggest `"unsloth"` since it reduces memory usage by an extra 30% and supports extremely long context finetunes. You can read [https://unsloth.ai/blog/long-context](https://unsloth.ai/blog/long-context) for more details.

7. ```python
   random_state = 3407,
   ```

   The number that determines deterministic runs. Training and finetuning need random numbers, so setting this makes experiments reproducible.

8. ```python
   use_rslora = False, # We support rank stabilized LoRA
   ```

   Advanced feature to set the `lora_alpha = 16` automatically. You can use this if you want!

9. ```python
   loftq_config = None, # And LoftQ
   ```

   Advanced feature to initialize the LoRA matrices to the top r singular vectors of the weights. Can improve accuracy somewhat, but can make memory usage explode at the start.

## [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide\#avoiding-overfitting-and-underfitting) **Avoiding Overfitting & Underfitting**

#### [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide\#overfitting-too-specialized) **Overfitting** (Too Specialized)

The model memorizes training data, failing to generalize to unseen inputs. Solution:
- If your training duration is short, lower the learning rate; for longer training runs, increase it. Because the right choice depends on the run, it may be best to test both and see which works better.
- Increase the batch size.
- Lower the number of training epochs.
- Combine your dataset with a generic dataset, e.g. ShareGPT.
- Increase the dropout rate to introduce regularization.

#### [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide\#underfitting-too-generic) **Underfitting** (Too Generic)

Though less common, underfitting is where a model with too low a rank lacks enough learnable parameters to capture your training data, so it may fail to learn from it. Solution:

- If your training duration is short, increase the learning rate. For longer training runs, reduce the learning rate.
- Train for more epochs.
- Increase rank and alpha. Alpha should be at least equal to the rank, and rank should be larger for smaller models or more complex datasets; it usually sits between 4 and 64.
- Use a more domain-relevant dataset.

Fine-tuning has no single "best" approach, only best practices. Experimentation is key to finding what works for your needs. Our notebooks auto-set optimal parameters based on evidence from research papers and past experiments.
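To build intuition for how rank drives the number of trainable parameters (relevant to the advice above about increasing rank when underfitting), here is a rough back-of-the-envelope calculation. The layer shapes are illustrative, loosely Llama-3-8B-like, not exact:

```python
# Rough LoRA parameter count: each adapted weight W (d_out x d_in) gains
# matrices of shape (d_out x r) and (r x d_in), i.e. r * (d_out + d_in) extras.
hidden, intermediate, n_layers = 4096, 14336, 32   # illustrative Llama-3-8B-like shapes

target_shapes = {
    "q_proj": (4096, hidden), "k_proj": (1024, hidden), "v_proj": (1024, hidden),
    "o_proj": (hidden, 4096),
    "gate_proj": (intermediate, hidden), "up_proj": (intermediate, hidden),
    "down_proj": (hidden, intermediate),
}

for r in (8, 16, 32, 64):
    per_layer = sum(r * (d_out + d_in) for d_out, d_in in target_shapes.values())
    total = per_layer * n_layers
    print(f"r={r:>3}: ~{total / 1e6:.1f}M trainable LoRA parameters")
```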
{ "color-scheme": "light dark", "description": "Best practices for LoRA hyperparameters and learn how they affect the finetuning process.", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Best practices for LoRA hyperparameters and learn how they affect the finetuning process.", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "LoRA Hyperparameters Guide | Unsloth Documentation", "ogDescription": "Best practices for LoRA hyperparameters and learn how they affect the finetuning process.", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "LoRA Hyperparameters Guide | Unsloth Documentation", "robots": "index, follow", "scrapeId": "4b906cff-7bc1-468a-9817-d0a61dc5076a", "sourceURL": "https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide", "statusCode": 200, "title": "LoRA Hyperparameters Guide | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Best practices for LoRA hyperparameters and learn how they affect the finetuning process.", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "LoRA Hyperparameters Guide | Unsloth Documentation", "url": "https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide", "viewport": "width=device-width, initial-scale=1" }
Qwen3, full fine-tuning & all models are now supported! 🦥 Google released Gemma 3 in 4 sizes - 1B, 4B, 12B and 27B models! The smallest 1B model is text only, whilst the rest are capable of vision and text input! We provide GGUFs, and a guide of how to run it effectively, and how to finetune & do reasoning finetuning with Gemma 3! NEW: We uploaded new quants using Google's new Gemma 3 **QAT** method. See the full [collection here](https://huggingface.co/collections/unsloth/gemma-3-67d12b7e8816ec6efa7e4e5b). **Unsloth is the only framework which works in float16 machines for Gemma 3 inference and training.** This means Colab Notebooks with free Tesla T4 GPUs also work! - Fine-tune Gemma 3 (4B) using our [free Colab notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3_(4B).ipynb) According to the Gemma team, the optimal config for inference is `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` **Unsloth Gemma 3 uploads with optimal configs:** GGUF Unsloth Dynamic 4-bit Instruct 16-bit Instruct - [1B](https://huggingface.co/unsloth/gemma-3-1b-it-GGUF) - [4B](https://huggingface.co/unsloth/gemma-3-4b-it-GGUF) - [12B](https://huggingface.co/unsloth/gemma-3-12b-it-GGUF) - [27B](https://huggingface.co/unsloth/gemma-3-27b-it-GGUF) - [1B](https://huggingface.co/unsloth/gemma-3-1b-it-bnb-4bit) - [4B](https://huggingface.co/unsloth/gemma-3-4b-it-bnb-4bit) - [12B](https://huggingface.co/unsloth/gemma-3-27b-it-unsloth-bnb-4bit) - [27B](https://huggingface.co/unsloth/gemma-3-27b-it-bnb-4bit) - [1B](https://huggingface.co/unsloth/gemma-3-1b) - [4B](https://huggingface.co/unsloth/gemma-3-4b) - [12B](https://huggingface.co/unsloth/gemma-3-12b) - [27B](https://huggingface.co/unsloth/gemma-3-27b) We fixed an issue with our Gemma 3 GGUF uploads where previously they did not support vision. Now they do. ## [Direct link to heading](https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune\#official-recommended-inference-settings) ⚙️ Official Recommended Inference Settings According to the Gemma team, the official recommended settings for inference is: - Temperature of 1.0 - Top\_K of 64 - Min\_P of 0.00 (optional, but 0.01 works well, llama.cpp default is 0.1) - Top\_P of 0.95 - Repetition Penalty of 1.0. (1.0 means disabled in llama.cpp and transformers) - Chat template: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] whitespace-pre-wrap <bos><start_of_turn>user\nHello!<end_of_turn>\n<start_of_turn>model\nHey there!<end_of_turn>\n<start_of_turn>user\nWhat is 1+1?<end_of_turn>\n<start_of_turn>model\n ``` - Chat template with `\n` newlines rendered (except for the last) Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] whitespace-pre-wrap <bos><start_of_turn>user Hello!<end_of_turn> <start_of_turn>model Hey there!<end_of_turn> <start_of_turn>user What is 1+1?<end_of_turn> <start_of_turn>model\n ``` llama.cpp an other inference engines auto add a <bos> - DO NOT add TWO <bos> tokens! You should ignore the <bos> when prompting the model! ## [Direct link to heading](https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune\#tutorial-how-to-run-gemma-3-27b-in-ollama) 🦙 Tutorial: How to Run Gemma 3 27B in Ollama 1. Install `ollama` if you haven't already! Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] apt-get update apt-get install pciutils -y curl -fsSL https://ollama.com/install.sh | sh ``` 1. Run the model! Note you can call `ollama serve` in another terminal if it fails! 
We include all our fixes and suggested parameters (temperature etc) in `params` in our Hugging Face upload! Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] ollama run hf.co/unsloth/gemma-3-27b-it-GGUF:Q4_K_M ``` ## [Direct link to heading](https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune\#tutorial-how-to-run-gemma-3-27b-in-llama.cpp) 📖 Tutorial: How to Run Gemma 3 27B in llama.cpp 1. Obtain the latest `llama.cpp` on [GitHub here](https://github.com/ggml-org/llama.cpp). You can follow the build instructions below as well. Change `-DGGML_CUDA=ON` to `-DGGML_CUDA=OFF` if you don't have a GPU or just want CPU inference. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] apt-get update apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y git clone https://github.com/ggerganov/llama.cpp cmake llama.cpp -B llama.cpp/build \ -DBUILD_SHARED_LIBS=ON -DGGML_CUDA=ON -DLLAMA_CURL=ON cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split llama-mtmd-cli cp llama.cpp/build/bin/llama-* llama.cpp ``` 1. If you want to use `llama.cpp` directly to load models, you can do the below: (:Q4\_K\_XL) is the quantization type. You can also download via Hugging Face (point 3). This is similar to `ollama run` Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] ./llama.cpp/llama-mtmd-cli \ -hf unsloth/gemma-3-4b-it-GGUF:Q4_K_XL ``` 1. **OR** download the model via (after installing `pip install huggingface_hub hf_transfer` ). You can choose Q4\_K\_M, or other quantized versions (like BF16 full precision). More versions at: [https://huggingface.co/unsloth/gemma-3-27b-it-GGUF](https://huggingface.co/unsloth/gemma-3-27b-it-GGUF) Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] # !pip install huggingface_hub hf_transfer import os os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1" from huggingface_hub import snapshot_download snapshot_download( repo_id = "unsloth/gemma-3-27b-it-GGUF", local_dir = "unsloth/gemma-3-27b-it-GGUF", allow_patterns = ["*Q4_K_M*", "mmproj-BF16.gguf"], # For Q4_K_M ) ``` 1. Run Unsloth's Flappy Bird test 2. Edit `--threads 32` for the number of CPU threads, `--ctx-size 16384` for context length (Gemma 3 supports 128K context length!), `--n-gpu-layers 99` for GPU offloading on how many layers. Try adjusting it if your GPU goes out of memory. Also remove it if you have CPU only inference. 3. For conversation mode: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] ./llama.cpp/llama-mtmd-cli \ --model unsloth/gemma-3-27b-it-GGUF/gemma-3-27b-it-Q4_K_M.gguf \ --mmproj unsloth/gemma-3-27b-it-GGUF/mmproj-BF16.gguf \ --threads 32 \ --ctx-size 16384 \ --n-gpu-layers 99 \ --seed 3407 \ --prio 2 \ --temp 1.0 \ --repeat-penalty 1.0 \ --min-p 0.01 \ --top-k 64 \ --top-p 0.95 ``` 1. For non conversation mode to test Flappy Bird: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] ./llama.cpp/llama-cli \ --model unsloth/gemma-3-27b-it-GGUF/gemma-3-27b-it-Q4_K_M.gguf \ --threads 32 \ --ctx-size 16384 \ --n-gpu-layers 99 \ --seed 3407 \ --prio 2 \ --temp 1.0 \ --repeat-penalty 1.0 \ --min-p 0.01 \ --top-k 64 \ --top-p 0.95 \ -no-cnv \ --prompt "<start_of_turn>user\nCreate a Flappy Bird game in Python. You must include these things:\n1. You must use pygame.\n2. The background color should be randomly chosen and is a light shade. Start with a light blue color.\n3. 
Pressing SPACE multiple times will accelerate the bird.\n4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.\n5. Place on the bottom some land colored as dark brown or yellow chosen randomly.\n6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.\n7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.\n8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.\nThe final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<end_of_turn>\n<start_of_turn>model\n" ``` The full input from our [https://unsloth.ai/blog/deepseekr1-dynamic](https://unsloth.ai/blog/deepseekr1-dynamic) 1.58bit blog is: Remember to remove <bos> since Gemma 3 auto adds a <bos>! Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] whitespace-pre-wrap <start_of_turn>user Create a Flappy Bird game in Python. You must include these things: 1. You must use pygame. 2. The background color should be randomly chosen and is a light shade. Start with a light blue color. 3. Pressing SPACE multiple times will accelerate the bird. 4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color. 5. Place on the bottom some land colored as dark brown or yellow chosen randomly. 6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them. 7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade. 8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again. The final game should be inside a markdown section in Python. Check your code for error ``` ## [Direct link to heading](https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune\#unsloth-fine-tuning-fixes-for-gemma-3) 🦥 Unsloth Fine-tuning fixes for Gemma 3 Our solution in Unsloth is 3 fold: 1. Keep all intermediate activations in bfloat16 format - can be float32, but this uses 2x more VRAM or RAM (via Unsloth's async gradient checkpointing) 2. Do all matrix multiplies in float16 with tensor cores, but manually upcasting / downcasting without the help of Pytorch's mixed precision autocast. 3. Upcast all other options that don't need matrix multiplies (layernorms) to float32. **Unsloth is the only framework which works in float16 machines for Gemma 3 inference and training.** This means Colab Notebooks with free Tesla T4 GPUs also work! 
- Fine-tune Gemma 3 (4B) using our [free Colab notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3_(4B).ipynb)

## [Direct link to heading](https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune\#gemma-3-fixes-analysis) 🤔 Gemma 3 Fixes Analysis

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FpQGE6CEsuvGcQaOKrQFQ%252Foutput%281%29.png%3Falt%3Dmedia%26token%3D5f741769-3591-4a79-bb83-d6d58a4e9818&width=768&dpr=4&quality=100&sign=65c761c0&sv=2)

Gemma 3 1B to 27B exceed float16's maximum of 65504

First, before we finetune or run Gemma 3, we found that when using float16 mixed precision, gradients and **activations unfortunately become infinite**. This happens on T4 GPUs, the RTX 20x series and V100 GPUs, which only have float16 tensor cores. Newer GPUs like the RTX 30x series or higher, A100s, H100s, etc. have bfloat16 tensor cores, so this problem does not happen! **But why?**

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FXmN6s9dA64N3nvmi4Y4x%252Ffloat16%2520bfloat16.png%3Falt%3Dmedia%26token%3D3e1cb682-49d0-4083-b791-589cf01a05a8&width=768&dpr=4&quality=100&sign=b86bca81&sv=2)

Wikipedia [https://en.wikipedia.org/wiki/Bfloat16\_floating-point\_format](https://en.wikipedia.org/wiki/Bfloat16_floating-point_format)

Float16 can only represent numbers up to **65504**, whilst bfloat16 can represent huge numbers up to **~10^38**! Notice that both formats use only 16 bits: float16 allocates more bits to the fraction, so it can represent smaller decimals better, whilst bfloat16 cannot represent fractions as precisely. But why float16 at all - why not just use float32? Unfortunately float32 matrix multiplications are very slow on GPUs, sometimes 4 to 10x slower, so we cannot do this.
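You can see the range difference directly in PyTorch. A tiny illustration (not from our notebooks) of why a large activation overflows float16 but not bfloat16:

```python
import torch

print(torch.finfo(torch.float16).max)    # 65504.0
print(torch.finfo(torch.bfloat16).max)   # ~3.39e38

x = torch.tensor(70000.0)                # a value larger than float16's maximum
print(x.to(torch.float16))               # inf  -> overflows float16
print(x.to(torch.bfloat16))              # ~70000 (representable, with coarser precision)
```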
{ "color-scheme": "light dark", "description": "How to run Gemma 3 effectively with our GGUFs on llama.cpp, Ollama, Open WebUI and how to fine-tune with Unsloth!", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "How to run Gemma 3 effectively with our GGUFs on llama.cpp, Ollama, Open WebUI and how to fine-tune with Unsloth!", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Gemma 3: How to Run & Fine-tune | Unsloth Documentation", "ogDescription": "How to run Gemma 3 effectively with our GGUFs on llama.cpp, Ollama, Open WebUI and how to fine-tune with Unsloth!", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Gemma 3: How to Run & Fine-tune | Unsloth Documentation", "robots": "index, follow", "scrapeId": "4b9328fa-f7af-44e1-a10d-781739647230", "sourceURL": "https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune", "statusCode": 200, "title": "Gemma 3: How to Run & Fine-tune | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "How to run Gemma 3 effectively with our GGUFs on llama.cpp, Ollama, Open WebUI and how to fine-tune with Unsloth!", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Gemma 3: How to Run & Fine-tune | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune", "viewport": "width=device-width, initial-scale=1" }
Qwen3, full fine-tuning & all models are now supported! 🦥 LocallyManual Saving To save to GGUF, use the below to save locally: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] model.save_pretrained_gguf("dir", tokenizer, quantization_method = "q4_k_m") model.save_pretrained_gguf("dir", tokenizer, quantization_method = "q8_0") model.save_pretrained_gguf("dir", tokenizer, quantization_method = "f16") ``` For to push to hub: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] model.push_to_hub_gguf("hf_username/dir", tokenizer, quantization_method = "q4_k_m") model.push_to_hub_gguf("hf_username/dir", tokenizer, quantization_method = "q8_0") ``` All supported quantization options for `quantization_method` are listed below: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] # https://github.com/ggerganov/llama.cpp/blob/master/examples/quantize/quantize.cpp#L19 # From https://mlabonne.github.io/blog/posts/Quantize_Llama_2_models_using_ggml.html ALLOWED_QUANTS = \ { "not_quantized" : "Recommended. Fast conversion. Slow inference, big files.", "fast_quantized" : "Recommended. Fast conversion. OK inference, OK file size.", "quantized" : "Recommended. Slow conversion. Fast inference, small files.", "f32" : "Not recommended. Retains 100% accuracy, but super slow and memory hungry.", "f16" : "Fastest conversion + retains 100% accuracy. Slow and memory hungry.", "q8_0" : "Fast conversion. High resource use, but generally acceptable.", "q4_k_m" : "Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K", "q5_k_m" : "Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K", "q2_k" : "Uses Q4_K for the attention.vw and feed_forward.w2 tensors, Q2_K for the other tensors.", "q3_k_l" : "Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K", "q3_k_m" : "Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K", "q3_k_s" : "Uses Q3_K for all tensors", "q4_0" : "Original quant method, 4-bit.", "q4_1" : "Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models.", "q4_k_s" : "Uses Q4_K for all tensors", "q4_k" : "alias for q4_k_m", "q5_k" : "alias for q5_k_m", "q5_0" : "Higher accuracy, higher resource usage and slower inference.", "q5_1" : "Even higher accuracy, resource usage and slower inference.", "q5_k_s" : "Uses Q5_K for all tensors", "q6_k" : "Uses Q8_K for all tensors", "iq2_xxs" : "2.06 bpw quantization", "iq2_xs" : "2.31 bpw quantization", "iq3_xxs" : "3.06 bpw quantization", "q3_k_xs" : "3-bit extra small quantization", } ``` First save your model to 16bit: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] model.save_pretrained_merged("merged_model", tokenizer, save_method = "merged_16bit",) ``` Then use the terminal and do: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] git clone --recursive https://github.com/ggerganov/llama.cpp make clean -C llama.cpp make all -j -C llama.cpp pip install gguf protobuf python llama.cpp/convert-hf-to-gguf.py FOLDER --outfile OUTPUT --outtype f16 ``` Or follow the steps at https://rentry.org/llama-cpp-conversions#merging-loras-into-a-model using the model name "merged\_model" to merge to GGUF. 
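After conversion, one way to sanity-check the resulting GGUF from Python is with the `llama-cpp-python` bindings (a separate `pip install llama-cpp-python`; the file path below is illustrative and should point at your `--outfile` from the conversion step):

```python
from llama_cpp import Llama

# Load the converted GGUF file (path is illustrative - use your --outfile path).
llm = Llama(model_path="OUTPUT", n_ctx=2048)

result = llm("Hello! Briefly introduce yourself.", max_tokens=64)
print(result["choices"][0]["text"])
```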
[PreviousRunning & Saving Models](https://docs.unsloth.ai/basics/running-and-saving-models) [NextSaving to Ollama](https://docs.unsloth.ai/basics/running-and-saving-models/saving-to-ollama) Last updated 10 months ago Was this helpful?
{ "color-scheme": "light dark", "description": "Saving models to 16bit for GGUF so you can use it for Ollama, Jan AI, Open WebUI and more!", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Saving models to 16bit for GGUF so you can use it for Ollama, Jan AI, Open WebUI and more!", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Saving to GGUF | Unsloth Documentation", "ogDescription": "Saving models to 16bit for GGUF so you can use it for Ollama, Jan AI, Open WebUI and more!", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Saving to GGUF | Unsloth Documentation", "robots": "index, follow", "scrapeId": "4fadd594-1f58-4e09-9cb4-960e709a6aeb", "sourceURL": "https://docs.unsloth.ai/basics/running-and-saving-models/saving-to-gguf", "statusCode": 200, "title": "Saving to GGUF | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Saving models to 16bit for GGUF so you can use it for Ollama, Jan AI, Open WebUI and more!", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Saving to GGUF | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/running-and-saving-models/saving-to-gguf", "viewport": "width=device-width, initial-scale=1" }
## [Direct link to heading](https://docs.unsloth.ai/basics/errors-troubleshooting\#running-in-unsloth-works-well-but-after-exporting-and-running-on-other-platforms-the-results-are-poo) Running in Unsloth works well, but after exporting & running on other platforms, the results are poor

You might sometimes encounter an issue where your model runs and produces good results in Unsloth, but when you use it on another platform like Ollama or vLLM, the results are poor, or you might get gibberish, endless/infinite generations, or repeated outputs.

- The most common cause of this error is using an incorrect chat template. It's essential to use the SAME chat template that was used when training the model in Unsloth and when you later run it in another framework, such as llama.cpp or Ollama. When inferencing from a saved model, it's crucial to apply the correct template.
- It might also be because your inference engine adds an unnecessary "start of sequence" token (or, conversely, is missing one), so make sure you check both possibilities!

## [Direct link to heading](https://docs.unsloth.ai/basics/errors-troubleshooting\#saving-to-gguf-vllm-16bit-crashes) Saving to GGUF / vLLM 16bit crashes

You can try reducing the maximum GPU usage during saving by changing `maximum_memory_usage`. The default is `model.save_pretrained(..., maximum_memory_usage = 0.75)`. Reduce it to, say, 0.5 to use 50% of GPU peak memory or lower. This can reduce OOM crashes during saving.

## [Direct link to heading](https://docs.unsloth.ai/basics/errors-troubleshooting\#evaluation-loop-also-oom-or-crashing) Evaluation Loop - also OOM or crashing

A common cause of OOM here is setting the evaluation batch size too high; set it to 2 or lower to use less VRAM. First split your training dataset into a train and test split, then set the trainer's evaluation settings to:

```python
new_dataset = dataset.train_test_split(test_size = 0.01)

SFTTrainer(
    args = TrainingArguments(
        fp16_full_eval = True,
        per_device_eval_batch_size = 2,
        eval_accumulation_steps = 4,
        eval_strategy = "steps",
        eval_steps = 1,
    ),
    train_dataset = new_dataset["train"],
    eval_dataset = new_dataset["test"],
)
```

This avoids OOMs and makes evaluation somewhat faster, since the validation set is not upcast to float32.

## [Direct link to heading](https://docs.unsloth.ai/basics/errors-troubleshooting\#notimplementederror-a-utf-8-locale-is-required.-got-ansi) NotImplementedError: A UTF-8 locale is required. Got ANSI

See [https://github.com/googlecolab/colabtools/issues/3409](https://github.com/googlecolab/colabtools/issues/3409)

In a new cell, run the below:

```python
import locale
locale.getpreferredencoding = lambda: "UTF-8"
```
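For the chat-template mismatch described at the top of this page, a quick consistency check is to render a prompt with the tokenizer's own chat template and compare it with what your other inference engine actually sends. A minimal sketch (the model path is illustrative):

```python
from transformers import AutoTokenizer

# Load the tokenizer saved alongside your fine-tune, so the chat template
# travels with the model instead of being re-typed per platform.
tokenizer = AutoTokenizer.from_pretrained("path/to/your-finetuned-model")

messages = [{"role": "user", "content": "Hello!"}]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize = False,
    add_generation_prompt = True,
)
print(repr(prompt))  # compare this string with the prompt Ollama / llama.cpp builds
```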
{ "color-scheme": "light dark", "description": "To fix any errors with your setup, see below:", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "To fix any errors with your setup, see below:", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Errors/Troubleshooting | Unsloth Documentation", "ogDescription": "To fix any errors with your setup, see below:", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Errors/Troubleshooting | Unsloth Documentation", "robots": "index, follow", "scrapeId": "4f9c274c-58cf-439d-9ada-b8359b9a4e4b", "sourceURL": "https://docs.unsloth.ai/basics/errors-troubleshooting", "statusCode": 200, "title": "Errors/Troubleshooting | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "To fix any errors with your setup, see below:", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Errors/Troubleshooting | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/errors-troubleshooting", "viewport": "width=device-width, initial-scale=1" }
Qwen3, full fine-tuning & all models are now supported! 🦥 ## [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide\#id-1.-understand-fine-tuning) 1\. Understand Fine-tuning Fine-tuning an LLM customizes its behavior, enhances + injects knowledge, and optimizes performance for domains/specific tasks. For example: - **GPT-4** serves as a base model; however, OpenAI fine-tuned it to better comprehend instructions and prompts, leading to the creation of ChatGPT-4 which everyone uses today. - ​ **DeepSeek-R1-Distill-Llama-8B** is a fine-tuned version of Llama-3.1-8B. DeepSeek utilized data generated by DeepSeek-R1, to fine-tune Llama-3.1-8B. This process, known as distillation (a subcategory of fine-tuning), injects the data into the Llama model to learn reasoning capabilities. With [Unsloth](https://github.com/unslothai/unsloth), you can fine-tune for free on Colab, Kaggle, or locally with just 3GB VRAM by using our [notebooks](https://docs.unsloth.ai/get-started/unsloth-notebooks). By fine-tuning a pre-trained model (e.g. Llama-3.1-8B) on a specialized dataset, you can: - **Update + Learn New Knowledge**: Inject and learn new domain-specific information. - **Customize Behavior**: Adjust the model’s tone, personality, or response style. - **Optimize for Tasks**: Improve accuracy and relevance for specific use cases. **Example usecases**: - Train LLM to predict if a headline impacts a company positively or negatively. - Use historical customer interactions for more accurate and custom responses. - Fine-tune LLM on legal texts for contract analysis, case law research, and compliance. You can think of a fine-tuned model as a specialized agent designed to do specific tasks more effectively and efficiently. **Fine-tuning can replicate all of RAG's capabilities**, but not vice versa. #### [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide\#fine-tuning-misconceptions) Fine-tuning misconceptions: You may have heard that fine-tuning does not make a model learn new knowledge or RAG performs better than fine-tuning. That is **false**. Read more FAQ + misconceptions here: [🤔FAQ + Is Fine-tuning Right For Me?](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me) ## [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide\#id-2.-choose-the-right-model--method) 2\. Choose the Right Model + Method If you're a beginner, it is best to start with a small instruct model like Llama 3.1 (8B) and experiment from there. You'll also need to decide between QLoRA and LoRA training: - **LoRA:** Fine-tunes small, trainable matrices in 16-bit without updating all model weights. - **QLoRA:** Combines LoRA with 4-bit quantization to handle very large models with minimal resources. ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FDpWv59wCNJUR38sVMjT6%252Fmodel%2520name%2520change.png%3Falt%3Dmedia%26token%3D1283a92d-9df7-4de0-b1a1-9fc7cc483381&width=768&dpr=4&quality=100&sign=f8be6cd7&sv=2) You can change the model name to whichever model you like by matching it with model's name on Hugging Face e.g. 'unsloth/llama-3.1-8b-bnb-4bit'. There are 3 other settings which you can toggle: - `max_seq_length = 2048` – Controls context length. While Llama-3 supports 8192, we recommend 2048 for testing. Unsloth enables 4× longer context fine-tuning. 
- `dtype = None` – Defaults to None; use `torch.float16` or `torch.bfloat16` for newer GPUs.
- `load_in_4bit = True` – Enables 4-bit quantization, reducing memory use 4× for fine-tuning on 16GB GPUs. Disabling it on larger GPUs (e.g., H100) slightly improves accuracy (1–2%).
- For **full fine-tuning**, set `full_finetuning = True`; for **8-bit fine-tuning**, set `load_in_8bit = True`.

We recommend starting with QLoRA, as it is one of the most accessible and effective methods for training models. With our [dynamic 4-bit](https://unsloth.ai/blog/phi4) quants, the accuracy loss of QLoRA compared to LoRA is now largely recovered.

You can also do [reasoning (GRPO)](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl), [vision](https://docs.unsloth.ai/basics/vision-fine-tuning), [reward modelling](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/reinforcement-learning-dpo-orpo-and-kto) (DPO, ORPO, KTO), [continued pretraining](https://docs.unsloth.ai/basics/continued-pretraining), text completion and other training methodologies with Unsloth.

Read our detailed guide on choosing the right model:

[❓What Model Should I Use?](https://docs.unsloth.ai/get-started/fine-tuning-guide/what-model-should-i-use)

## [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide\#id-3.-your-dataset) 3\. Your Dataset

For LLMs, datasets are collections of data that can be used to train our models. In order to be useful for training, text data needs to be in a format that can be tokenized.

- You will usually need to create a dataset with 2 columns - question and answer. The quality and amount of data will largely determine the end result of your fine-tune, so it's imperative to get this part right.
- You can [synthetically generate data](https://docs.unsloth.ai/basics/datasets-guide#synthetic-data-generation) and structure your dataset (into QA pairs) using ChatGPT or local LLMs.
- You can also use our new Synthetic Dataset notebook, which automatically parses documents (PDFs, videos etc.), generates QA pairs and auto-cleans data using local models like Llama 3.2. [Access the notebook here.](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Meta_Synthetic_Data_Llama3_2_(3B).ipynb)
- Fine-tuning can learn from an existing repository of documents and continuously expand its knowledge base, but just dumping data alone won’t work as well. For optimal results, curate a well-structured dataset, ideally as question-answer pairs. This enhances learning, understanding, and response accuracy.
- But that's not always the case: e.g. if you are fine-tuning an LLM for code, just dumping all your code data can actually yield significant performance improvements, even without structured formatting. So it really depends on your use case.

_**Read more about creating your dataset:**_

[📈Datasets Guide](https://docs.unsloth.ai/basics/datasets-guide)

For most of our notebook examples we utilize the [Alpaca dataset](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama#id-6.-alpaca-dataset); however, other notebooks like Vision use different datasets, which may need images in the answer output as well.

## [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide\#id-4.-understand-model-parameters) 4\. Understand Model Parameters

There are millions of possible hyperparameter combinations, and choosing the right values is crucial to a good result.
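To make the loading settings from section 2 and the parameters discussed in this section concrete, below is a minimal sketch in the style of our notebooks. The model name and all values are illustrative defaults, not recommendations for your specific task:

```python
from unsloth import FastLanguageModel

# Load a 4-bit quantized base model (QLoRA-style).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3.1-8b-bnb-4bit",  # any model name on Hugging Face works
    max_seq_length = 2048,   # context length used for training
    dtype = None,            # None = auto-detect; or torch.float16 / torch.bfloat16
    load_in_4bit = True,     # set False for 16-bit LoRA; see full_finetuning / load_in_8bit above
)

# Attach trainable LoRA adapters; r and lora_alpha are the key knobs discussed below.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,                        # LoRA rank
    lora_alpha = 16,               # usually at least equal to r
    lora_dropout = 0,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing = "unsloth",
    random_state = 3407,
)
```

Our notebooks pre-fill these values for you, so you rarely need to set them by hand.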
You can edit the parameters (numbers) below, but you can also ignore them, since we already select quite reasonable defaults.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FW1P2qmzGQGDAXQ0pXhRq%252Fparameters.png%3Falt%3Dmedia%26token%3Df146c646-ca31-4459-b1de-499bd1d23fd1&width=768&dpr=4&quality=100&sign=3bae97dd&sv=2)

The goal is to change these numbers to increase accuracy while also **counteracting over-fitting**. Over-fitting is when the language model memorizes a dataset and is unable to answer novel, new questions. We want the final model to answer unseen questions, not just memorize the training data.

Here are the key parameters:

#### [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide\#learning-rate) **Learning Rate**

Defines how much the model’s weights adjust per training step.

- **Higher Learning Rates**: Faster training and can help counteract overfitting, but if set too high the model will overfit or become unstable.
- **Lower Learning Rates**: More stable training, but may require more epochs.
- **Typical Range**: 1e-4 (0.0001) to 5e-5 (0.00005).

#### [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide\#epochs) **Epochs**

Number of times the model sees the full training dataset.

- **Recommended:** 1-3 epochs (more than 3 is generally not optimal unless you want fewer hallucinations at the cost of less creativity and variety in answers).
- **More Epochs**: Better learning, higher risk of overfitting.
- **Fewer Epochs**: May undertrain the model.

_**For a complete guide on how hyperparameters affect training, see:**_

[🧠LoRA Hyperparameters Guide](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide)

### [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide\#avoiding-overfitting-and-underfitting) **Avoiding Overfitting & Underfitting**

#### [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide\#overfitting-too-specialized) **Overfitting** (Too Specialized)

The model memorizes the training data and fails to generalize to unseen inputs. Solutions:

- If your training duration is short, lower the learning rate; for longer training runs, increase it. Because of this, it might be best to test both and see which is better.
- Increase the batch size.
- Lower the number of training epochs.
- Combine your dataset with a generic dataset, e.g. ShareGPT.
- Increase the dropout rate to introduce regularization.

#### [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide\#underfitting-too-generic) **Underfitting** (Too Generic)

Though not as common, underfitting is when a low-rank model fails to learn from the training data because it lacks enough learnable parameters, so the result stays too generic. Solutions:

- If your training duration is short, increase the learning rate; for longer training runs, reduce it.
- Train for more epochs.
- Increase the rank and alpha. Alpha should be at least equal to the rank, and the rank should be bigger for smaller models or more complex datasets; it usually sits between 4 and 64.
- Use a more domain-relevant dataset.

Fine-tuning has no single "best" approach, only best practices. Experimentation is key to finding what works for your needs. Our notebooks auto-set optimal parameters based on evidence from research papers and past experiments.
## [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide\#id-5.-installing--requirements) 5\. Installing + Requirements

We recommend beginners use our pre-made [notebooks](https://docs.unsloth.ai/get-started/unsloth-notebooks) first, as they are the easiest way to get started with guided steps. However, if installing locally is a must, you can install and use Unsloth - just make sure you have the necessary requirements. Depending on the model and quantization you're using, you'll also need enough VRAM and resources. See all the details here:

[🛠️Unsloth Requirements](https://docs.unsloth.ai/get-started/beginner-start-here/unsloth-requirements)

Next, you'll need to install Unsloth. Unsloth currently only supports Windows and Linux devices. Once you install Unsloth, you can copy and paste our notebooks and use them in your own local environment. We have many installation methods:

[📥Installing + Updating](https://docs.unsloth.ai/get-started/installing-+-updating)

## [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide\#id-6.-training--evaluation) 6\. Training + Evaluation

Once you have everything set, it's time to train! If something's not working, remember you can always change hyperparameters, your dataset, etc.

You will see a log of numbers whilst training. This is the training loss, and your job is to set parameters that bring it as close to 0.5 as possible! If your finetune is not reaching 1, 0.8 or 0.5, you might have to adjust some numbers. If your loss goes to 0, that's probably not a good sign either!

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FxwOA09mtcimcQOCjP4PG%252Fimage.png%3Falt%3Dmedia%26token%3D39a0f525-6d4e-4c3b-af0d-82d8960d87be&width=768&dpr=4&quality=100&sign=853c0062&sv=2)

The training loss will appear as numbers

We generally recommend keeping the default settings unless you need longer training or larger batch sizes.

- `per_device_train_batch_size = 2` – Increase for better GPU utilization, but beware of slower training due to padding. Instead, increase `gradient_accumulation_steps` for smoother training.
- `gradient_accumulation_steps = 4` – Simulates a larger batch size without increasing memory usage.
- `max_steps = 60` – Speeds up training. For full runs, replace with `num_train_epochs = 1` (1–3 epochs recommended to avoid overfitting).
- `learning_rate = 2e-4` – Lower for slower but more precise fine-tuning. Try values like `1e-4`, `5e-5`, or `2e-5`.

A minimal trainer sketch using these settings is included at the end of this section.

### [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide\#evaluation) Evaluation

To evaluate, you can do a manual evaluation by simply chatting with the model and seeing whether its responses are to your liking. You can also enable evaluation in Unsloth, but keep in mind it can be time-consuming depending on the dataset size. To speed up evaluation, you can reduce the evaluation dataset size or set `eval_steps = 100`.

For testing, you can also take 20% of your training data and use that for testing. If you already used all of the training data, then you have to evaluate manually. You can also use automatic eval tools like EleutherAI’s [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). Keep in mind that automated tools may not perfectly align with your evaluation criteria.
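As referenced above, here is a minimal trainer sketch assuming `model`, `tokenizer` and a formatted `dataset` with a `"text"` column already exist. The values mirror the bullets above; depending on your `trl` version, some of these arguments may need to move onto `SFTConfig` instead of `SFTTrainer`/`TrainingArguments`:

```python
from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",   # column produced by your formatting function
    max_seq_length = 2048,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        max_steps = 60,            # or: num_train_epochs = 1 for a full run
        learning_rate = 2e-4,
        logging_steps = 1,         # prints the training loss every step
        optim = "adamw_8bit",
        output_dir = "outputs",
    ),
)
trainer.train()
```

To enable evaluation, pass an `eval_dataset` plus `eval_strategy`/`eval_steps` in `TrainingArguments`, as discussed above.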
## [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide\#id-7.-running--saving-the-model) 7\. Running + Saving the model

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FRX9Byv1hlSpvmonT1PLw%252Fimage.png%3Falt%3Dmedia%26token%3D6043cd8c-c6a3-4cc5-a019-48baeed3b5a2&width=768&dpr=4&quality=100&sign=7c7ce43f&sv=2)

Now let's run the model after completing the training process! You can edit the yellow underlined part! In fact, because we created a multi-turn chatbot, we can now also call the model as if it saw some conversations in the past, like below:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F6DXSlsHkN8cZiiAxAV0Z%252Fimage.png%3Falt%3Dmedia%26token%3D846307de-7386-4bbe-894e-7d9e572244fe&width=768&dpr=4&quality=100&sign=6482b95b&sv=2)

Reminder: Unsloth itself also provides **2x faster inference** natively, so don't forget to call `FastLanguageModel.for_inference(model)`. If you want the model to output longer responses, increase `max_new_tokens = 128` to some larger number like 256 or 1024. Notice you will have to wait longer for the result as well! A short inference-and-saving sketch is included at the end of this page.

### [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide\#saving-the-model) Saving the model

For saving and using your model in your desired inference engine, like Ollama, vLLM or Open WebUI, you can find more information here:

[🖥️Running & Saving Models](https://docs.unsloth.ai/basics/running-and-saving-models)

We can now save the finetuned model as a small 100MB file called a LoRA adapter, like below. You can also push it to the Hugging Face Hub if you want to upload your model! Remember to get a Hugging Face token via [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens) and add your token!

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FBz0YDi6Sc2oEP5QWXgSz%252Fimage.png%3Falt%3Dmedia%26token%3D33d9e4fd-e7dc-4714-92c5-bfa3b00f86c4&width=768&dpr=4&quality=100&sign=d6933a01&sv=2)

After saving the model, we can again use Unsloth to run the model itself! Use `FastLanguageModel` again to call it for inference!

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FzymBQrqwt4GUmCIN0Iec%252Fimage.png%3Falt%3Dmedia%26token%3D41a110e4-8263-426f-8fa7-cdc295cc8210&width=768&dpr=4&quality=100&sign=b2a207c3&sv=2)

## [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide\#id-8.-were-done) 8\. We're done!

You've successfully finetuned a language model and exported it to your desired inference engine with Unsloth!

To learn more about finetuning tips and tricks, head over to our blogs, which provide tremendous educational value: [https://unsloth.ai/blog/](https://unsloth.ai/blog/)

If you need any help with finetuning, you can also join our Discord server [here](https://discord.gg/unsloth). Thanks for reading and hopefully this was helpful!
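As referenced in section 7, here is a minimal sketch of running the finetuned model and saving the LoRA adapter. It assumes `model` and `tokenizer` from the training steps above; the prompt, directory and repository names are purely illustrative:

```python
from unsloth import FastLanguageModel

FastLanguageModel.for_inference(model)   # enable Unsloth's native 2x faster inference

messages = [{"role": "user", "content": "Continue the Fibonacci sequence: 1, 1, 2, 3, 5, 8,"}]
inputs = tokenizer.apply_chat_template(
    messages, tokenize = True, add_generation_prompt = True, return_tensors = "pt"
).to("cuda")

outputs = model.generate(input_ids = inputs, max_new_tokens = 256, use_cache = True)
print(tokenizer.batch_decode(outputs))

# Save only the LoRA adapter (roughly 100MB) locally...
model.save_pretrained("lora_model")
tokenizer.save_pretrained("lora_model")
# ...or push it to the Hugging Face Hub (hypothetical repo name; needs your HF token).
# model.push_to_hub("your_name/lora_model", token = "hf_...")
# tokenizer.push_to_hub("your_name/lora_model", token = "hf_...")
```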
{ "color-scheme": "light dark", "description": "Learn all the basics and best practices of fine-tuning. Beginner-friendly.", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Learn all the basics and best practices of fine-tuning. Beginner-friendly.", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Fine-tuning Guide | Unsloth Documentation", "ogDescription": "Learn all the basics and best practices of fine-tuning. Beginner-friendly.", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Fine-tuning Guide | Unsloth Documentation", "robots": "index, follow", "scrapeId": "530ef3ec-8eb3-4ce4-9534-52cc858f0e26", "sourceURL": "https://docs.unsloth.ai/get-started/fine-tuning-guide", "statusCode": 200, "title": "Fine-tuning Guide | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Learn all the basics and best practices of fine-tuning. Beginner-friendly.", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Fine-tuning Guide | Unsloth Documentation", "url": "https://docs.unsloth.ai/get-started/fine-tuning-guide", "viewport": "width=device-width, initial-scale=1" }
Qwen3, full fine-tuning & all models are now supported! 🦥 If you're a beginner, here might be the first questions you'll ask before your first fine-tune. You can also always ask our community by joining our [Discord server](https://discord.gg/unsloth). [🧬](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama) [Fine-tuning Guide](https://docs.unsloth.ai/get-started/fine-tuning-guide) Step-by-step on how to fine-tune! Learn the core basics of training. [❓](https://docs.unsloth.ai/get-started/fine-tuning-guide/what-model-should-i-use) [What Model Should I Use?](https://docs.unsloth.ai/get-started/fine-tuning-guide/what-model-should-i-use) Instruct or Base Model? How big should my dataset be? [🚀](https://docs.unsloth.ai/basics/chat-templates) [Tutorials: How To Fine-tune & Run LLMs](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms) How to Run & Fine-tune DeepSeek? What settings should I set when running Gemma 3? [🤔](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me) [FAQ + Is Fine-tuning Right For Me?](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me) What can fine-tuning do for me? RAG vs. Fine-tuning? [📥](https://docs.unsloth.ai/get-started/installing-+-updating) [Installing + Updating](https://docs.unsloth.ai/get-started/installing-+-updating) How do I install Unsloth locally? How to update Unsloth? 📈 [Datasets Guide](https://docs.unsloth.ai/basics/datasets-guide) How do I structure/prepare my dataset? How do I collect data? [🛠️](https://docs.unsloth.ai/get-started/beginner-start-here/unsloth-requirements) [Unsloth Requirements](https://docs.unsloth.ai/get-started/beginner-start-here/unsloth-requirements) Does Unsloth work on my GPU? How much VRAM will I need? [🖥️](https://docs.unsloth.ai/basics/running-and-saving-models) [Running & Saving Models](https://docs.unsloth.ai/basics/running-and-saving-models) How do I save my model locally? How do I run my model via Ollama or vLLM? 🧠 [LoRA Hyperparameters Guide](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide) What happens when I change a parameter? What parameters should I change? ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FjT759hR4zq8ygzg1oEwI%252FLarge%2520sloth%2520Question%2520mark.png%3Falt%3Dmedia%26token%3Dca8d2f56-889a-4da8-8106-da88d22e69d2&width=768&dpr=4&quality=100&sign=1635f07f&sv=2) [PreviousWelcome](https://docs.unsloth.ai/) [NextUnsloth Requirements](https://docs.unsloth.ai/get-started/beginner-start-here/unsloth-requirements) Last updated 1 month ago Was this helpful?
{ "color-scheme": "light dark", "description": null, "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": null, "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Beginner? Start here! | Unsloth Documentation", "ogDescription": null, "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Beginner? Start here! | Unsloth Documentation", "robots": "index, follow", "scrapeId": "5cfbec86-14bc-4f48-a883-7e4072ddb9fc", "sourceURL": "https://docs.unsloth.ai/get-started/beginner-start-here", "statusCode": 200, "title": "Beginner? Start here! | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": null, "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Beginner? Start here! | Unsloth Documentation", "url": "https://docs.unsloth.ai/get-started/beginner-start-here", "viewport": "width=device-width, initial-scale=1" }
## [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#what-is-a-dataset) What is a Dataset?

For LLMs, datasets are collections of data that can be used to train our models. In order to be useful for training, text data needs to be in a format that can be tokenized. You'll also learn how to [use datasets inside of Unsloth](https://docs.unsloth.ai/basics/datasets-guide#applying-chat-templates-with-unsloth).

One of the key parts of creating a dataset is your [chat template](https://docs.unsloth.ai/basics/chat-templates) and how you are going to design it. Tokenization is also important: it breaks text into tokens, which can be words, sub-words, or characters, so LLMs can process it effectively. These tokens are then turned into embeddings and adjusted to help the model understand the meaning and context.

### [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#data-format) Data Format

To enable the process of tokenization, datasets need to be in a format that can be read by a tokenizer.

| Format | Description | Training Type |
| --- | --- | --- |
| Raw Corpus | Raw text from a source such as a website, book, or article. | Continued Pretraining (CPT) |
| Instruct | Instructions for the model to follow and an example of the output to aim for. | Supervised fine-tuning (SFT) |
| Conversation | Multiple-turn conversation between a user and an AI assistant. | Supervised fine-tuning (SFT) |
| RLHF | Conversation between a user and an AI assistant, with the assistant's responses being ranked by a script, another model or a human evaluator. | Reinforcement Learning (RL) |

It's worth noting that different styles of format exist for each of these types.

## [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#getting-started) Getting Started

Before we format our data, we want to identify the following:

**1. Purpose of dataset**

Knowing the purpose of the dataset will help us determine what data we need and which format to use. The purpose could be adapting a model to a new task such as summarization, or improving a model's ability to role-play a specific character. For example:

- Chat-based dialogues (Q&A, learning a new language, customer support, conversations).
- Structured tasks ( [classification](https://colab.research.google.com/github/timothelaborie/text_classification_scripts/blob/main/unsloth_classification.ipynb), summarization, generation tasks).
- Domain-specific data (medical, finance, technical).

**2. Style of output**

The style of output lets us know what sources of data we will use to reach our desired output. For example, the output you want to achieve could be JSON, HTML, text or code. Or perhaps you want it to be in Spanish, English, German etc.

**3. Data source**

When we know the purpose and style of the data we need, we need to analyze its quality and [quantity](https://docs.unsloth.ai/basics/datasets-guide#how-big-should-my-dataset-be). Hugging Face and Wikipedia are great sources of datasets, and Wikipedia is especially useful if you are looking to train a model to learn a language. The source of data can be a CSV file, a PDF or even a website. You can also [synthetically generate](https://docs.unsloth.ai/basics/datasets-guide#synthetic-data-generation) data, but extra care is required to make sure each example is high quality and relevant.

One of the best ways to create a better dataset is by combining it with a more generalized dataset from Hugging Face, like ShareGPT, to make your model smarter and more diverse.
You could also add [synthetically generated data](https://docs.unsloth.ai/basics/datasets-guide#synthetic-data-generation). ## [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#formatting-the-data) Formatting the Data When we have identified the relevant criteria, and collected the necessary data, we can then format our data into a machine readable format that is ready for training. ### [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#common-data-formats-for-llm-training) Common Data Formats for LLM Training For [**continued pretraining**](https://docs.unsloth.ai/basics/continued-pretraining), we use raw text format without specific structure: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] "text": "Pasta carbonara is a traditional Roman pasta dish. The sauce is made by mixing raw eggs with grated Pecorino Romano cheese and black pepper. The hot pasta is then tossed with crispy guanciale (cured pork cheek) and the egg mixture, creating a creamy sauce from the residual heat. Despite popular belief, authentic carbonara never contains cream or garlic. The dish likely originated in Rome in the mid-20th century, though its exact origins are debated..." ``` This format preserves natural language flow and allows the model to learn from continuous text. If we are adapting a model to a new task, and intend for the model to output text in a single turn based on a specific set of instructions, we can use **Instruction** format in [Alpaca style](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama#id-6.-alpaca-dataset) Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] "Instruction": "Task we want the model to perform." "Input": "Optional, but useful, it will essentially be the user's query." "Output": "The expected result of the task and the output of the model." ``` When we want multiple turns of conversation we can use the ShareGPT format: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] { "conversations": [\ {\ "from": "human",\ "value": "Can you help me make pasta carbonara?"\ },\ {\ "from": "gpt",\ "value": "Would you like the traditional Roman recipe, or a simpler version?"\ },\ {\ "from": "human",\ "value": "The traditional version please"\ },\ {\ "from": "gpt",\ "value": "The authentic Roman carbonara uses just a few ingredients: pasta, guanciale, eggs, Pecorino Romano, and black pepper. Would you like the detailed recipe?"\ }\ ] } ``` The template format uses the "from"/"value" attribute keys and messages alternates between `human` and `gpt`, allowing for natural dialogue flow. The other common format is OpenAI's ChatML format and is what Hugging Face defaults to. 
This is probably the most used format, and alternates between `user` and `assistant` Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] { "messages": [\ {\ "role": "user",\ "content": "What is 1+1?"\ },\ {\ "role": "assistant",\ "content": "It's 2!"\ },\ ] } ``` ### [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#applying-chat-templates-with-unsloth) Applying Chat Templates with Unsloth For datasets that usually follow the common chatml format, the process of preparing the dataset for training or finetuning, consists of four simple steps: - Check the chat templates that Unsloth currently supports: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] from unsloth.chat_templates import CHAT_TEMPLATES print(list(CHAT_TEMPLATES.keys())) ``` This will print out the list of templates currently supported by Unsloth. Here is an example output: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] ['unsloth', 'zephyr', 'chatml', 'mistral', 'llama', 'vicuna', 'vicuna_old', 'vicuna old', 'alpaca', 'gemma', 'gemma_chatml', 'gemma2', 'gemma2_chatml', 'llama-3', 'llama3', 'phi-3', 'phi-35', 'phi-3.5', 'llama-3.1', 'llama-31', 'llama-3.2', 'llama-3.3', 'llama-32', 'llama-33', 'qwen-2.5', 'qwen-25', 'qwen25', 'qwen2.5', 'phi-4', 'gemma-3', 'gemma3'] ``` - Use `get_chat_template` to apply the right chat template to your tokenizer: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] from unsloth.chat_templates import get_chat_template tokenizer = get_chat_template( tokenizer, chat_template = "gemma-3", # change this to the right chat_template name ) ``` - Define your formatting function. Here's an example: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] def formatting_prompts_func(examples): convos = examples["conversations"] texts = [tokenizer.apply_chat_template(convo, tokenize = False, add_generation_prompt = False) for convo in convos] return { "text" : texts, } ``` This function loops through your dataset applying the chat template you defined to each sample. - Finally, let's load the dataset and apply the required modifications to our dataset: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] # Import and load dataset from datasets import load_dataset dataset = load_dataset("repo_name/dataset_name", split = "train") # Apply the formatting function to your dataset using the map method dataset = dataset.map(formatting_prompts_func, batched = True,) ``` If your dataset uses the ShareGPT format with "from"/"value" keys instead of the ChatML "role"/"content" format, you can use the `standardize_sharegpt` function to convert it first. The revised code will now look as follows: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] # Import dataset from datasets import load_dataset dataset = load_dataset("mlabonne/FineTome-100k", split = "train") # Convert your dataset to the "role"/"content" format if necessary from unsloth.chat_templates import standardize_sharegpt dataset = standardize_sharegpt(dataset) # Apply the formatting function to your dataset using the map method dataset = dataset.map(formatting_prompts_func, batched = True,) ``` ### [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#formatting-data-q-and-a) Formatting Data Q&A **Q:**How can I use the Alpaca instruct format? 
**A:** If your dataset is already formatted in the Alpaca format, then follow the formatting steps as shown in the Llama3.1 [notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb#scrollTo=LjY75GoYUCB8). If you need to convert your data to the Alpaca format, one approach is to create a Python script to process your raw data. If you're working on a summarization task, you can use a local LLM to generate instructions and outputs for each example.

**Q:** Should I always use the standardize\_sharegpt method?

**A:** Only use the standardize\_sharegpt method if your target dataset is formatted in the ShareGPT format but your model expects a ChatML format instead.

**Q:** Why not use the apply\_chat\_template function that comes with the tokenizer?

**A:** The `chat_template` attribute set when a model is first uploaded by the original model owners sometimes contains errors and may take time to be updated. In contrast, at Unsloth, we thoroughly check and fix any errors in the `chat_template` for every model when we upload the quantized versions to our repositories. Additionally, our `get_chat_template` and `apply_chat_template` methods offer advanced data manipulation features, which are fully documented on our Chat Templates documentation [page](https://docs.unsloth.ai/basics/chat-templates).

**Q:** What if my template is not currently supported by Unsloth?

**A:** Submit a feature request on the Unsloth GitHub issues [forum](https://github.com/unslothai/unsloth). As a temporary workaround, you can also use the tokenizer's own apply\_chat\_template function until your feature request is approved and merged.

## [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#synthetic-data-generation) Synthetic Data Generation

You can also use any local LLM like Llama 3.3 (70B), or OpenAI's GPT 4.5, to generate synthetic data. Generally, it is better to use a bigger model like Llama 3.3 (70B) to ensure the highest quality outputs. You can directly use inference engines like vLLM, Ollama or llama.cpp to generate synthetic data, but it will require some manual work to collect it and prompt for more data.

There are 3 goals for synthetic data:

- Produce entirely new data - either from scratch or from your existing dataset
- Diversify your dataset so your model does not [overfit](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide#avoiding-overfitting-and-underfitting) and become too specific
- Augment existing data, e.g. automatically structure your dataset in the correct chosen format

### [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#synthetic-dataset-notebook) Synthetic Dataset Notebook

We collaborated with Meta to launch a free notebook for creating synthetic datasets automatically using local models like Llama 3.2.
[Access the notebook here.](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Meta_Synthetic_Data_Llama3_2_(3B).ipynb)

What the notebook does:

- Auto-parses PDFs, websites, YouTube videos and more
- Uses Meta’s Synthetic Data Kit + Llama 3.2 (3B) to generate QA pairs
- Cleans and filters the data automatically
- Fine-tunes the dataset with Unsloth + Llama
- Runs fully locally, with no API calls necessary

### [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#using-a-local-llm-or-chatgpt-for-synthetic-data) Using a local LLM or ChatGPT for synthetic data

Your goal is to prompt the model to generate and process QA data in your specified format. The model needs to learn both the structure and the context you provide, so make sure you already have at least 10 examples of data. Example prompts:

- **Prompt for generating more dialogue on an existing dataset**:

```
Using the dataset example I provided, follow the structure and generate conversations based on the examples.
```

- **Prompt if you have no dataset**:

```
Create 10 examples of product reviews for Coca-Cola classified as either positive, negative, or neutral.
```

- **Prompt for a dataset without formatting**:

```
Structure my dataset so it is in a QA ChatML format for fine-tuning. Then generate 5 synthetic data examples with the same topic and format.
```

It is recommended to check the quality of the generated data and remove or improve irrelevant or poor-quality responses. Depending on your dataset, it may also need to be balanced in several areas so your model does not overfit. You can then feed this cleaned dataset back into your LLM to regenerate data, now with even more guidance.

## [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#dataset-faq--tips) Dataset FAQ + Tips

### [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#how-big-should-my-dataset-be) How big should my dataset be?

We generally recommend using a bare minimum of at least 100 rows of data for fine-tuning to achieve reasonable results. For optimal performance, a dataset with over 1,000 rows is preferable, and in that case, more data usually leads to better outcomes. If your dataset is too small, you can also add synthetic data or add a dataset from Hugging Face to diversify it. However, the effectiveness of your fine-tuned model depends heavily on the quality of the dataset, so be sure to thoroughly clean and prepare your data.

### [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#how-should-i-structure-my-dataset-if-i-want-to-fine-tune-a-reasoning-model) How should I structure my dataset if I want to fine-tune a reasoning model?

If you want to fine-tune a model that already has reasoning capabilities, like the distilled versions of DeepSeek-R1 (e.g. DeepSeek-R1-Distill-Llama-8B), you will still need question/task and answer pairs; however, the answer needs to include the reasoning/chain-of-thought process and the steps taken to derive the final answer. A hypothetical example of such a row is sketched below.
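As a purely hypothetical illustration (the question, wording and `<think>` tag style are invented here, and the exact reasoning format varies between model families), a single reasoning-style row in ChatML form could look like:

```python
# Hypothetical reasoning-style training row: the assistant answer embeds the
# chain-of-thought before the final result.
reasoning_example = {
    "messages": [
        {"role": "user", "content": "What is 17 * 24?"},
        {
            "role": "assistant",
            "content": (
                "<think>17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.</think>\n"
                "The answer is 408."
            ),
        },
    ]
}
```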
For a model that does not yet have reasoning capabilities, and that you want to train so it later gains them, you would use a standard dataset, this time without reasoning in its answers. This training process is known as [Reinforcement Learning and GRPO](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl).

### [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#multiple-datasets) Multiple datasets

If you have multiple datasets for fine-tuning, you can either:

- Standardize the format of all datasets, combine them into a single dataset, and fine-tune on this unified dataset.
- Use the [Multiple Datasets](https://colab.research.google.com/drive/1njCCbE1YVal9xC83hjdo2hiGItpY_D6t?usp=sharing) notebook to fine-tune on multiple datasets directly.

### [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#can-i-fine-tune-the-same-model-multiple-times) Can I fine-tune the same model multiple times?

You can fine-tune an already fine-tuned model multiple times, but it's best to combine all the datasets and perform the fine-tuning in a single process instead. Training an already fine-tuned model can potentially alter the quality and knowledge acquired during the previous fine-tuning process.

## [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#using-datasets-in-unsloth) Using Datasets in Unsloth

### [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#alpaca-dataset) Alpaca Dataset

See an example of using the Alpaca dataset inside of Unsloth on Google Colab:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FKSmRDpkySelZfWSrWxDm%252Fimage.png%3Falt%3Dmedia%26token%3D5401e4da-796a-42ad-8b85-2263f3e59e86&width=768&dpr=4&quality=100&sign=28ad8509&sv=2)

We will now use the Alpaca Dataset created by calling GPT-4 itself. It is a list of 52,000 instructions and outputs which was very popular when Llama-1 was released, since it made finetuning a base LLM competitive with ChatGPT itself. You can access the GPT-4 version of the Alpaca dataset [here](https://huggingface.co/datasets/vicgalle/alpaca-gpt4). Below are some examples from the dataset:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FzKhujR9Nxz95VFSdf4J5%252Fimage.png%3Falt%3Dmedia%26token%3Da3c52718-eaf1-4a3d-b325-414d8e67722e&width=768&dpr=4&quality=100&sign=2afb3a12&sv=2)

You can see there are 3 columns in each row - an instruction, an input and an output. We essentially combine each row into 1 large prompt like below. We then use this prompt to finetune the language model, and this is what made it very similar to ChatGPT. We call this process **supervised instruction finetuning**.
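To make the merging idea concrete, here is a minimal sketch in the style of our Alpaca notebooks; the exact prompt wording is illustrative, and `tokenizer` is assumed to already be loaded:

```python
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

EOS_TOKEN = tokenizer.eos_token  # must be appended, otherwise generation never stops

def formatting_prompts_func(examples):
    # Merge the three Alpaca columns of each row into one large training prompt.
    texts = []
    for instruction, inp, output in zip(
        examples["instruction"], examples["input"], examples["output"]
    ):
        texts.append(alpaca_prompt.format(instruction, inp, output) + EOS_TOKEN)
    return {"text": texts}

# dataset = dataset.map(formatting_prompts_func, batched = True)
```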
![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FieYX44Vjd0OygJvO0jaR%252Fimage.png%3Falt%3Dmedia%26token%3Deb67fa41-a280-4656-8be6-5b6bf6f587c2&width=768&dpr=4&quality=100&sign=68f5594e&sv=2) ### [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#multiple-columns-for-finetuning) Multiple columns for finetuning But a big issue is for ChatGPT style assistants, we only allow 1 instruction / 1 prompt, and not multiple columns / inputs. For example in ChatGPT, you can see we must submit 1 prompt, and not multiple prompts. ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FpFUWhntUQLu05l4ns7Pq%252Fimage.png%3Falt%3Dmedia%26token%3De989e4a6-6033-4741-b97f-d0c3ce8f5888&width=768&dpr=4&quality=100&sign=a9eb969a&sv=2) This essentially means we have to "merge" multiple columns into 1 large prompt for finetuning to actually function! For example the very famous Titanic dataset has many many columns. Your job was to predict whether a passenger has survived or died based on their age, passenger class, fare price etc. We can't simply pass this into ChatGPT, but rather, we have to "merge" this information into 1 large prompt. ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FrydHBjHoJT7w8FwzKAXK%252FMerge-1.png%3Falt%3Dmedia%26token%3Dec812057-0475-4717-87fe-311f14735c37&width=768&dpr=4&quality=100&sign=8211e070&sv=2) For example, if we ask ChatGPT with our "merged" single prompt which includes all the information for that passenger, we can then ask it to guess or predict whether the passenger has died or survived. ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FJVkv73fRWvwwFxMym7uW%252Fimage.png%3Falt%3Dmedia%26token%3D59b97b76-f2f2-46c9-8940-60a37e4e7d62&width=768&dpr=4&quality=100&sign=37c0f3a1&sv=2) Other finetuning libraries require you to manually prepare your dataset for finetuning, by merging all your columns into 1 prompt. In Unsloth, we simply provide the function called `to_sharegpt` which does this in 1 go! ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F9fo2IBA7P0tNwhNR9Prm%252Fimage.png%3Falt%3Dmedia%26token%3D7bd7244a-0fea-4e57-9038-a8a360138056&width=768&dpr=4&quality=100&sign=a94d397b&sv=2) Now this is a bit more complicated, since we allow a lot of customization, but there are a few points: - You must enclose all columns in curly braces `{}`. These are the column names in the actual CSV / Excel file. - Optional text components must be enclosed in `[[]]`. For example if the column "input" is empty, the merging function will not show the text and skip this. This is useful for datasets with missing values. - Select the output or target / prediction column in `output_column_name`. For the Alpaca dataset, this will be `output`. 
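As a rough sketch of what such a call can look like (the column names follow the Alpaca example above, and the exact prompt string is illustrative; see our Ollama notebook for the full version):

```python
from unsloth import to_sharegpt

# Merge the instruction/input columns into one prompt. {column} references a CSV/Excel
# column, and [[...]] marks optional parts that are dropped when the column is empty.
dataset = to_sharegpt(
    dataset,
    merged_prompt = "{instruction}[[\nYour input is:\n{input}]]",
    output_column_name = "output",     # the target / prediction column
    conversation_extension = 3,        # merge 3 random single-turn rows into one conversation
)
```

The `conversation_extension` and `standardize_sharegpt` steps are described next.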
For example, in the Titanic dataset we can create a large merged prompt format like below, where each column / piece of text becomes optional.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FRMvBpfXC9ToCRL0oCJfN%252Fimage.png%3Falt%3Dmedia%26token%3Dc257c7fc-8a9c-4d4f-ab3d-6894ae49f2a9&width=768&dpr=4&quality=100&sign=4ec813ed&sv=2)

For example, pretend the dataset looks like this, with a lot of missing data:

| Embarked | Age | Fare |
| --- | --- | --- |
| S | 23 | |
| | 18 | 7.25 |

Then, we do not want the result to be:

1. The passenger embarked from S. Their age is 23. Their fare is **EMPTY**.
2. The passenger embarked from **EMPTY**. Their age is 18. Their fare is $7.25.

Instead, by optionally enclosing columns using `[[]]`, we can exclude this information entirely:

1. \[\[The passenger embarked from S.\]\] \[\[Their age is 23.\]\] \[\[Their fare is **EMPTY**.\]\]
2. \[\[The passenger embarked from **EMPTY**.\]\] \[\[Their age is 18.\]\] \[\[Their fare is $7.25.\]\]

becomes:

1. The passenger embarked from S. Their age is 23.
2. Their age is 18. Their fare is $7.25.

### [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#multi-turn-conversations) Multi turn conversations

A big issue, if you didn't notice, is that the Alpaca dataset is single turn, whilst ChatGPT is interactive and you can talk to it over multiple turns. For example, the left is what we want, but the right (the Alpaca dataset) only provides single-turn conversations. We want the finetuned language model to somehow learn how to do multi-turn conversations just like ChatGPT.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FWCAN7bYUt6QWwCWUxisL%252Fdiff.png%3Falt%3Dmedia%26token%3D29821fd9-2181-4d1d-8b93-749b69bcf400&width=768&dpr=4&quality=100&sign=d4f1b675&sv=2)

So we introduced the `conversation_extension` parameter, which essentially selects some random rows in your single-turn dataset and merges them into 1 conversation! For example, if you set it to 3, we randomly select 3 rows and merge them into 1! Setting it too high can make training slower, but could make your chatbot and final finetune much better!

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FWi1rRNBFC2iDmCvSJsZt%252Fcombine.png%3Falt%3Dmedia%26token%3Dbef37a55-b272-4be3-89b5-9767c219a380&width=768&dpr=4&quality=100&sign=ae98ba1b&sv=2)

Then set `output_column_name` to the prediction / output column. For the Alpaca dataset, it would be the output column.

We then use the `standardize_sharegpt` function to put the dataset into the correct format for finetuning! Always call this!

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FE75C4Y848VNF6luLuPRR%252Fimage.png%3Falt%3Dmedia%26token%3Daac1d79b-ecca-4e56-939d-d97dcbbf30eb&width=768&dpr=4&quality=100&sign=d48e3c76&sv=2)

## [Direct link to heading](https://docs.unsloth.ai/basics/datasets-guide\#vision-fine-tuning) Vision Fine-tuning

The dataset for fine-tuning a vision or multimodal model also includes image inputs.
For example, the [Llama 3.2 Vision Notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb#scrollTo=vITh0KVJ10qX) uses a radiography case to show how AI can help medical professionals analyze X-rays, CT scans, and ultrasounds more efficiently. We'll be using a sampled version of the ROCO radiography dataset. You can access the dataset [here](https://www.google.com/url?q=https%3A%2F%2Fhuggingface.co%2Fdatasets%2Funsloth%2FRadiology_mini). The dataset includes X-rays, CT scans and ultrasounds showcasing medical conditions and diseases. Each image has a caption written by experts describing it. The goal is to finetune a VLM to make it a useful analysis tool for medical professionals. Let's take a look at the dataset, and check what the 1st example shows: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] Dataset({ features: ['image', 'image_id', 'caption', 'cui'], num_rows: 1978 }) ``` Image Caption ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FrjdETiyi6jqzAao7vg8I%252Fxray.png%3Falt%3Dmedia%26token%3Df66fdd7f-5e10-4eff-a280-5b3d63ed7849&width=768&dpr=4&quality=100&sign=4d4d6839&sv=2) Panoramic radiography shows an osteolytic lesion in the right posterior maxilla with resorption of the floor of the maxillary sinus (arrows). To format the dataset, all vision finetuning tasks should be formatted as follows: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] [\ { "role": "user",\ "content": [{"type": "text", "text": instruction}, {"type": "image", "image": image} ]\ },\ { "role": "assistant",\ "content": [{"type": "text", "text": answer} ]\ },\ ] ``` We will craft an custom instruction asking the VLM to be an expert radiographer. Notice also instead of just 1 instruction, you can add multiple turns to make it a dynamic conversation. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] instruction = "You are an expert radiographer. Describe accurately what you see in this image." def convert_to_conversation(sample): conversation = [\ { "role": "user",\ "content" : [\ {"type" : "text", "text" : instruction},\ {"type" : "image", "image" : sample["image"]} ]\ },\ { "role" : "assistant",\ "content" : [\ {"type" : "text", "text" : sample["caption"]} ]\ },\ ] return { "messages" : conversation } pass ``` Let's convert the dataset into the "correct" format for finetuning: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] converted_dataset = [convert_to_conversation(sample) for sample in dataset] ``` The first example is now structured like below: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] converted_dataset[0] ``` Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] whitespace-pre-wrap {'messages': [{'role': 'user',\ 'content': [{'type': 'text',\ 'text': 'You are an expert radiographer. Describe accurately what you see in this image.'},\ {'type': 'image',\ 'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=657x442>}]},\ {'role': 'assistant',\ 'content': [{'type': 'text',\ 'text': 'Panoramic radiography shows an osteolytic lesion in the right posterior maxilla with resorption of the floor of the maxillary sinus (arrows).'}]}]} ``` Before we do any finetuning, maybe the vision model already knows how to analyse the images? 
Let's check if this is the case! Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] FastVisionModel.for_inference(model) # Enable for inference! image = dataset[0]["image"] instruction = "You are an expert radiographer. Describe accurately what you see in this image." messages = [\ {"role": "user", "content": [\ {"type": "image"},\ {"type": "text", "text": instruction}\ ]}\ ] input_text = tokenizer.apply_chat_template(messages, add_generation_prompt = True) inputs = tokenizer( image, input_text, add_special_tokens = False, return_tensors = "pt", ).to("cuda") from transformers import TextStreamer text_streamer = TextStreamer(tokenizer, skip_prompt = True) _ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128, use_cache = True, temperature = 1.5, min_p = 0.1) ``` And the result: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] This radiograph appears to be a panoramic view of the upper and lower dentition, specifically an Orthopantomogram (OPG). * The panoramic radiograph demonstrates normal dental structures. * There is an abnormal area on the upper right, represented by an area of radiolucent bone, corresponding to the antrum. **Key Observations** * The bone between the left upper teeth is relatively radiopaque. * There are two large arrows above the image, suggesting the need for a closer examination of this area. One of the arrows is in a left-sided position, and the other is in the right-sided position. However, only ``` For more details, view our dataset section in the [notebook here](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb#scrollTo=vITh0KVJ10qX). [PreviousGemma 3: How to Run & Fine-tune](https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune) [NextTutorials: How To Fine-tune & Run LLMs](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms) Last updated 6 days ago Was this helpful?
{ "color-scheme": "light dark", "description": "Learn how to create & prepare a dataset for fine-tuning.", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Learn how to create & prepare a dataset for fine-tuning.", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Datasets Guide | Unsloth Documentation", "ogDescription": "Learn how to create & prepare a dataset for fine-tuning.", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Datasets Guide | Unsloth Documentation", "robots": "index, follow", "scrapeId": "57ee7d1f-0d92-4204-a16d-275844ca83e1", "sourceURL": "https://docs.unsloth.ai/basics/datasets-guide", "statusCode": 200, "title": "Datasets Guide | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Learn how to create & prepare a dataset for fine-tuning.", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Datasets Guide | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/datasets-guide", "viewport": "width=device-width, initial-scale=1" }
By the end of this tutorial, you will create a custom chatbot by **finetuning Llama-3** with [**Unsloth**](https://github.com/unslothai/unsloth) for free. It can run locally via [**Ollama**](https://github.com/ollama/ollama) on your PC, or in a free GPU instance through [**Google Colab**](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3_(8B)-Ollama.ipynb). You will be able to interact with the chatbot like below:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FXlEQrBR24CKI9lQIzOS7%252FAssistant%2520example.png%3Falt%3Dmedia%26token%3Dfac7f5b0-69f4-4998-baee-3feee44f8c16&width=768&dpr=4&quality=100&sign=39273e6a&sv=2)

**Unsloth** makes finetuning much easier, and can automatically export the finetuned model to **Ollama** with integrated automatic `Modelfile` creation!

If you need help, you can join our Discord server: [https://discord.com/invite/unsloth](https://discord.com/invite/unsloth)

## [Direct link to heading](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama\#id-1.-what-is-unsloth) 1\. What is Unsloth?

[Unsloth](https://github.com/unslothai/unsloth) makes finetuning LLMs like Llama-3, Mistral, Phi-3 and Gemma 2x faster and with 70% less memory, with no degradation in accuracy! We will be using Google Colab, which provides a free GPU, during this tutorial. You can access our free notebooks below:

- [Ollama Llama-3 Alpaca](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3_(8B)-Ollama.ipynb) (the notebook which we will be using)
- [CSV/Excel Ollama Guide](https://colab.research.google.com/drive/1VYkncZMfGFkeCEgN2IzbZIKEDkyQuJAS?usp=sharing)

#### [Direct link to heading](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama\#you-will-also-need-to-login-into-your-google-account) _**You will also need to log in to your Google account!**_

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FqnogsAv2zZ5WPFkXwQ5t%252FColab%2520Screen.png%3Falt%3Dmedia%26token%3D8722cf50-898f-4f15-be7a-7223b8b7440b&width=768&dpr=4&quality=100&sign=c93e1323&sv=2)

## [Direct link to heading](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama\#id-2.-what-is-ollama) 2\. What is Ollama?

[Ollama](https://github.com/ollama/ollama) allows you to run language models from your own computer in a quick and simple way! It quietly launches a program which can run a language model like Llama-3 in the background. If you suddenly want to ask the language model a question, you can simply submit a request to Ollama, and it'll quickly return the results to you! We'll be using Ollama as our inference engine!

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FqKwhUFNW52GnKMi5ClLW%252FOllama.png%3Falt%3Dmedia%26token%3D27ccad2f-12a2-4188-96d9-ee3023d7f274&width=768&dpr=4&quality=100&sign=e04cd2e2&sv=2)

## [Direct link to heading](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama\#id-3.-install-unsloth) 3\. Install Unsloth
![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FQzuUQL60uFWHpaAvDPYD%252FColab%2520Options.png%3Falt%3Dmedia%26token%3Dfb808ec5-20c5-4f42-949e-14ed26a44987&width=768&dpr=4&quality=100&sign=be097a14&sv=2)

If you have never used a Colab notebook, a quick primer on the notebook itself:

1. **Play Button at each "cell".** Click on this to run that cell's code. You must not skip any cells and you must run every cell in chronological order. If you encounter any errors, it is usually because a cell was skipped - simply go back and run the cell you missed. Another option is to hit CTRL + ENTER if you don't want to click the play button.
2. **Runtime Button in the top toolbar.** You can also use this button and hit "Run all" to run the entire notebook in 1 go. This will skip all the customization steps, and can be a good first try.
3. **Connect / Reconnect T4 button.** You can click here for more advanced system statistics.

The first installation cell looks like below: Remember to click the PLAY button in the brackets \[ \]. We grab our open source Github package, and install some other packages.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F9DTAK0evMnZcnLXzKLx4%252Fimage.png%3Falt%3Dmedia%26token%3Db4781438-3858-4d6c-a560-5afcbbc12fa8&width=768&dpr=4&quality=100&sign=e78940c8&sv=2)

## [Direct link to heading](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama\#id-4.-selecting-a-model-to-finetune) 4\. Selecting a model to finetune

Let's now select a model for finetuning! We defaulted to Llama-3 from Meta / Facebook, which was trained on a whopping 15 trillion "tokens". Assume a token is like 1 English word. That's approximately 350,000 thick Encyclopedias worth! Other popular models include Mistral, Phi-3 (trained using GPT-4 output) and Gemma from Google (13 trillion tokens!).

Unsloth supports these models and more! In fact, simply type any model from the Hugging Face model hub to see if it works! We'll error out if it doesn't.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252Fmdci7SWqnAZiW8KzzDp0%252Fimage.png%3Falt%3Dmedia%26token%3D8ede6c31-3cc9-4005-ae44-0b056750e8d4&width=768&dpr=4&quality=100&sign=f453cf0e&sv=2)

There are 3 other settings which you can toggle:

1. `max_seq_length = 2048` - This determines the context length of the model. Gemini, for example, has over 1 million context length, whilst Llama-3 has 8192 context length. We allow you to select ANY number, but we recommend setting it to 2048 for testing purposes. Unsloth also supports very long context finetuning, and we can provide 4x longer context lengths than the best alternatives.
2. `dtype = None` - Keep this as None, but you can select torch.float16 or torch.bfloat16 for newer GPUs.
3. `load_in_4bit = True` - We do finetuning in 4 bit quantization. This reduces memory usage by 4x, allowing us to actually do finetuning on a free 16GB memory GPU. 4 bit quantization essentially converts weights into a limited set of numbers to reduce memory usage. A drawback of this is a small 1-2% accuracy degradation. Set this to False on larger GPUs like H100s if you want that tiny extra accuracy.
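Putting these three settings together, the model loading cell boils down to something like the sketch below (the model name is just one example of an Unsloth pre-quantized model - any supported model from the Hugging Face hub works):

```python
from unsloth import FastLanguageModel
import torch

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name     = "unsloth/llama-3-8b-bnb-4bit",  # example pre-quantized Llama-3 8B
    max_seq_length = 2048,   # context length used for finetuning
    dtype          = None,   # None = auto-detect; or torch.float16 / torch.bfloat16
    load_in_4bit   = True,   # 4-bit quantization so it fits on a free 16GB GPU
)
```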
![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FegXn4FqK96xXZWMz4NH5%252Fimage.png%3Falt%3Dmedia%26token%3D7531f78d-390b-470b-a91e-4463eea6537f&width=768&dpr=4&quality=100&sign=c6859bb2&sv=2)

If you run the cell, you will get some print outs of the Unsloth version, which model you are using, how much memory your GPU has, and some other statistics. Ignore this for now.

## [Direct link to heading](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama\#id-5.-parameters-for-finetuning) 5\. Parameters for finetuning

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FqRTuI7x0FYlHTXqbi0hu%252Fimage.png%3Falt%3Dmedia%26token%3D4b0e0032-dbf1-4148-ba92-c18356862765&width=768&dpr=4&quality=100&sign=f94a3d99&sv=2)

Now to customize your finetune, you can edit the numbers above, but you can also ignore them, since we already selected quite reasonable defaults. The goal is to change these numbers to increase accuracy, but also to **counteract over-fitting**. Over-fitting is when you make the language model memorize a dataset, so that it cannot answer new, unseen questions. We want the final model to answer unseen questions, not just memorize.

1. `r = 16, # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128` - The rank of the finetuning process. A larger number uses more memory and will be slower, but can increase accuracy on harder tasks. We normally suggest numbers like 8 (for fast finetunes), and up to 128. Numbers that are too large can cause over-fitting, damaging your model's quality.
2. `target_modules = ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj",],` - We select all modules to finetune. You can remove some to reduce memory usage and make training faster, but we strongly advise against this. Just train on all modules!
3. `lora_alpha = 16,` - The scaling factor for finetuning. A larger number will make the finetune learn more about your dataset, but can promote over-fitting. We suggest setting this equal to the rank `r`, or double it.
4. `lora_dropout = 0, # Supports any, but = 0 is optimized` - Leave this as 0 for faster training! It can reduce over-fitting, but not by much.
5. `bias = "none", # Supports any, but = "none" is optimized` - Leave this as `"none"` for faster and less over-fit training!
6. `use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context` - Options include `True`, `False` and `"unsloth"`. We suggest `"unsloth"` since we reduce memory usage by an extra 30% and support extremely long context finetunes. You can read up here: [https://unsloth.ai/blog/long-context](https://unsloth.ai/blog/long-context) for more details.
7. `random_state = 3407,` - The number to determine deterministic runs. Training and finetuning need random numbers, so setting this number makes experiments reproducible.
8. `use_rslora = False, # We support rank stabilized LoRA` - Advanced feature which enables rank-stabilized LoRA and sets the effective `lora_alpha` scaling automatically. You can use this if you want!
9. `loftq_config = None, # And LoftQ` - Advanced feature to initialize the LoRA matrices to the top r singular vectors of the weights. Can improve accuracy somewhat, but can make memory usage explode at the start.
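These nine settings map directly onto the arguments of Unsloth's `get_peft_model` call; as a quick reference, the cell roughly looks like this sketch:

```python
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,                                   # LoRA rank: try 8, 16, 32, 64, 128
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,                          # scaling factor; usually r or 2 * r
    lora_dropout = 0,                         # 0 is optimized
    bias = "none",                            # "none" is optimized
    use_gradient_checkpointing = "unsloth",   # extra ~30% memory savings, long context
    random_state = 3407,                      # makes runs reproducible
    use_rslora = False,                       # rank-stabilized LoRA
    loftq_config = None,                      # LoftQ initialization
)
```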
We suggest `"unsloth"` since we reduce memory usage by an extra 30% and support extremely long context finetunes.You can read up here: [https://unsloth.ai/blog/long-context](https://unsloth.ai/blog/long-context) for more details. 7. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] random_state = 3407, ``` The number to determine deterministic runs. Training and finetuning needs random numbers, so setting this number makes experiments reproducible. 8. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] use_rslora = False, # We support rank stabilized LoRA ``` Advanced feature to set the `lora_alpha = 16` automatically. You can use this if you want! 9. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] loftq_config = None, # And LoftQ ``` Advanced feature to initialize the LoRA matrices to the top r singular vectors of the weights. Can improve accuracy somewhat, but can make memory usage explode at the start. ## [Direct link to heading](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama\#id-6.-alpaca-dataset) 6\. Alpaca Dataset ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FKSmRDpkySelZfWSrWxDm%252Fimage.png%3Falt%3Dmedia%26token%3D5401e4da-796a-42ad-8b85-2263f3e59e86&width=768&dpr=4&quality=100&sign=28ad8509&sv=2) We will now use the Alpaca Dataset created by calling GPT-4 itself. It is a list of 52,000 instructions and outputs which was very popular when Llama-1 was released, since it made finetuning a base LLM be competitive with ChatGPT itself. You can access the GPT4 version of the Alpaca dataset here: [https://huggingface.co/datasets/vicgalle/alpaca-gpt4](https://huggingface.co/datasets/vicgalle/alpaca-gpt4). An older first version of the dataset is here: [https://github.com/tatsu-lab/stanford\_alpaca](https://github.com/tatsu-lab/stanford_alpaca). Below shows some examples of the dataset: ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FzKhujR9Nxz95VFSdf4J5%252Fimage.png%3Falt%3Dmedia%26token%3Da3c52718-eaf1-4a3d-b325-414d8e67722e&width=768&dpr=4&quality=100&sign=2afb3a12&sv=2) You can see there are 3 columns in each row - an instruction, and input and an output. We essentially combine each row into 1 large prompt like below. We then use this to finetune the language model, and this made it very similar to ChatGPT. We call this process **supervised instruction finetuning**. ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FieYX44Vjd0OygJvO0jaR%252Fimage.png%3Falt%3Dmedia%26token%3Deb67fa41-a280-4656-8be6-5b6bf6f587c2&width=768&dpr=4&quality=100&sign=68f5594e&sv=2) ## [Direct link to heading](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama\#id-7.-multiple-columns-for-finetuning) 7\. Multiple columns for finetuning But a big issue is for ChatGPT style assistants, we only allow 1 instruction / 1 prompt, and not multiple columns / inputs. For example in ChatGPT, you can see we must submit 1 prompt, and not multiple prompts. 
## [Direct link to heading](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama\#id-7.-multiple-columns-for-finetuning) 7\. Multiple columns for finetuning

But a big issue is that for ChatGPT style assistants, we only allow 1 instruction / 1 prompt, and not multiple columns / inputs. For example in ChatGPT, you can see we must submit 1 prompt, and not multiple prompts.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FpFUWhntUQLu05l4ns7Pq%252Fimage.png%3Falt%3Dmedia%26token%3De989e4a6-6033-4741-b97f-d0c3ce8f5888&width=768&dpr=4&quality=100&sign=a9eb969a&sv=2)

This essentially means we have to "merge" multiple columns into 1 large prompt for finetuning to actually function!

For example, the very famous Titanic dataset has many columns. Your job is to predict whether a passenger survived or died based on their age, passenger class, fare price etc. We can't simply pass this into ChatGPT; rather, we have to "merge" this information into 1 large prompt.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FrydHBjHoJT7w8FwzKAXK%252FMerge-1.png%3Falt%3Dmedia%26token%3Dec812057-0475-4717-87fe-311f14735c37&width=768&dpr=4&quality=100&sign=8211e070&sv=2)

For example, if we ask ChatGPT with our "merged" single prompt which includes all the information for that passenger, we can then ask it to guess or predict whether the passenger has died or survived.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FJVkv73fRWvwwFxMym7uW%252Fimage.png%3Falt%3Dmedia%26token%3D59b97b76-f2f2-46c9-8940-60a37e4e7d62&width=768&dpr=4&quality=100&sign=37c0f3a1&sv=2)

Other finetuning libraries require you to manually prepare your dataset for finetuning, by merging all your columns into 1 prompt. In Unsloth, we simply provide the function called `to_sharegpt` which does this in 1 go!

To access the Titanic finetuning notebook or if you want to upload a CSV or Excel file, go here: [https://colab.research.google.com/drive/1VYkncZMfGFkeCEgN2IzbZIKEDkyQuJAS?usp=sharing](https://colab.research.google.com/drive/1VYkncZMfGFkeCEgN2IzbZIKEDkyQuJAS?usp=sharing)

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F9fo2IBA7P0tNwhNR9Prm%252Fimage.png%3Falt%3Dmedia%26token%3D7bd7244a-0fea-4e57-9038-a8a360138056&width=768&dpr=4&quality=100&sign=a94d397b&sv=2)

Now this is a bit more complicated, since we allow a lot of customization, but there are a few points:

- You must enclose all columns in curly braces `{}`. These are the column names in the actual CSV / Excel file.
- Optional text components must be enclosed in `[[]]`. For example, if the column "input" is empty, the merging function will not show the text and will skip it. This is useful for datasets with missing values.
- Select the output or target / prediction column in `output_column_name`. For the Alpaca dataset, this will be `output`.

For example in the Titanic dataset, we can create a large merged prompt format like below, where each column / piece of text becomes optional.
![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FRMvBpfXC9ToCRL0oCJfN%252Fimage.png%3Falt%3Dmedia%26token%3Dc257c7fc-8a9c-4d4f-ab3d-6894ae49f2a9&width=768&dpr=4&quality=100&sign=4ec813ed&sv=2)

For example, pretend the dataset looks like this with a lot of missing data:

| Embarked | Age | Fare |
| --- | --- | --- |
| S | 23 | |
| | 18 | 7.25 |

Then, we do not want the result to be:

1. The passenger embarked from S. Their age is 23. Their fare is **EMPTY**.
2. The passenger embarked from **EMPTY**. Their age is 18. Their fare is $7.25.

Instead, by optionally enclosing columns using `[[]]`, we can exclude this information entirely.

1. \[\[The passenger embarked from S.\]\] \[\[Their age is 23.\]\] \[\[Their fare is **EMPTY**.\]\]
2. \[\[The passenger embarked from **EMPTY**.\]\] \[\[Their age is 18.\]\] \[\[Their fare is $7.25.\]\]

becomes:

1. The passenger embarked from S. Their age is 23.
2. Their age is 18. Their fare is $7.25.

## [Direct link to heading](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama\#id-8.-multi-turn-conversations) 8\. Multi turn conversations

A big issue, if you didn't notice, is that the Alpaca dataset is single turn, whilst ChatGPT is interactive and you can talk to it over multiple turns. For example, the left is what we want, but the right, which is the Alpaca dataset, only provides singular conversations. We want the finetuned language model to somehow learn how to do multi turn conversations just like ChatGPT.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FWCAN7bYUt6QWwCWUxisL%252Fdiff.png%3Falt%3Dmedia%26token%3D29821fd9-2181-4d1d-8b93-749b69bcf400&width=768&dpr=4&quality=100&sign=d4f1b675&sv=2)

So we introduced the `conversation_extension` parameter, which essentially selects some random rows in your single turn dataset, and merges them into 1 conversation! For example, if you set it to 3, we randomly select 3 rows and merge them into 1! Setting this too high can make training slower, but could make your chatbot and final finetune much better!

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FWi1rRNBFC2iDmCvSJsZt%252Fcombine.png%3Falt%3Dmedia%26token%3Dbef37a55-b272-4be3-89b5-9767c219a380&width=768&dpr=4&quality=100&sign=ae98ba1b&sv=2)

Then set `output_column_name` to the prediction / output column. For the Alpaca dataset, it would be the output column.

We then use the `standardize_sharegpt` function to put the dataset into the correct format for finetuning! Always call this!

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FE75C4Y848VNF6luLuPRR%252Fimage.png%3Falt%3Dmedia%26token%3Daac1d79b-ecca-4e56-939d-d97dcbbf30eb&width=768&dpr=4&quality=100&sign=d48e3c76&sv=2)
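A minimal sketch of how these pieces fit together - the import paths and the `merged_prompt` string below follow the Colab notebook, so treat them as illustrative rather than exact:

```python
from unsloth import to_sharegpt, standardize_sharegpt

dataset = to_sharegpt(
    dataset,
    merged_prompt = "{instruction}[[\nYour input is:\n{input}]]",  # [[...]] parts are optional
    output_column_name = "output",        # the target / prediction column
    conversation_extension = 3,           # merge 3 random single-turn rows into 1 conversation
)

dataset = standardize_sharegpt(dataset)   # always call this to normalize the format
```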
## [Direct link to heading](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama\#id-9.-customizable-chat-templates) 9\. Customizable Chat Templates

We can now specify the chat template for the finetuning itself. The very famous Alpaca format is below:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F8SWcsgH47Uhkm0IclDs5%252Fimage.png%3Falt%3Dmedia%26token%3Dfa03d7aa-d568-468d-9884-18e925a0551f&width=768&dpr=4&quality=100&sign=dff54efb&sv=2)

But remember we said this was a bad idea because ChatGPT style finetunes require only 1 prompt? Since we successfully merged all dataset columns into 1 using Unsloth, we essentially can create the below style chat template with 1 input column (instruction) and 1 output:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FyuMpSLIpPLEbcdh970UJ%252Fimage.png%3Falt%3Dmedia%26token%3D87c4d5e1-accf-4847-9971-63e3a47b4a5f&width=768&dpr=4&quality=100&sign=728095c1&sv=2)

We just require that you put an `{INPUT}` field for the instruction and an `{OUTPUT}` field for the model's output. We also allow an optional `{SYSTEM}` field, which is useful to customize a system prompt just like in ChatGPT.

For example, below are some cool chat templates you can customize:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252Fi6B8IP1OZmmxBYr6k4W3%252Fimage.png%3Falt%3Dmedia%26token%3D061d1b4c-4b22-4d1b-a423-8d4c15e40efa&width=768&dpr=4&quality=100&sign=dd8c7435&sv=2)

For the ChatML format used in OpenAI models:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F3OEJaXooJCICJR6DJIJP%252Fimage.png%3Falt%3Dmedia%26token%3D4fa85cf1-463d-4090-a838-591c4f94efea&width=768&dpr=4&quality=100&sign=a1f23ff9&sv=2)

Or you can use the Llama-3 template itself (which only functions by using the instruct version of Llama-3):

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F4qQXd0hIvh9fJNO2cJ04%252Fimage.png%3Falt%3Dmedia%26token%3D614b9200-7375-47f5-ac15-ce9aa891ede4&width=768&dpr=4&quality=100&sign=c9811100&sv=2)

Or in the Titanic prediction task, where you had to predict if a passenger died or survived, in this Colab notebook which includes CSV and Excel uploading: [https://colab.research.google.com/drive/1VYkncZMfGFkeCEgN2IzbZIKEDkyQuJAS?usp=sharing](https://colab.research.google.com/drive/1VYkncZMfGFkeCEgN2IzbZIKEDkyQuJAS?usp=sharing)

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F1iQitC3PwcuV0LpHEhdP%252Fimage.png%3Falt%3Dmedia%26token%3Dd117f681-afb0-4d5f-b534-f51013fe772a&width=768&dpr=4&quality=100&sign=20577629&sv=2)
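For instance, a simple custom template only needs the `{INPUT}` and `{OUTPUT}` placeholders, plus an optional `{SYSTEM}`; the sketch below is an illustrative Alpaca-style template rather than the exact string used in the notebook:

```python
# {SYSTEM} is optional; {INPUT} and {OUTPUT} are required.
chat_template = """{SYSTEM}

### Instruction:
{INPUT}

### Response:
{OUTPUT}"""
```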
## [Direct link to heading](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama\#id-10.-train-the-model) 10\. Train the model

Let's train the model now! We normally suggest not editing the settings below, unless you want to finetune for more steps or train with larger batch sizes.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FoPTTR7ppdxhZR2iPpE0R%252Fimage.png%3Falt%3Dmedia%26token%3D1dca98a5-c927-4e93-8e96-977015f4eeb9&width=768&dpr=4&quality=100&sign=18baea65&sv=2)

We do not normally suggest changing the parameters above, but to elaborate on some of them:

1. `per_device_train_batch_size = 2,` - Increase the batch size if you want to utilize the memory of your GPU more. Increasing it can also make training smoother and less prone to over-fitting. We normally do not suggest this though, since it might actually make training slower due to padding issues. We normally instead ask you to increase `gradient_accumulation_steps`, which just does more passes over the dataset.
2. `gradient_accumulation_steps = 4,` - Equivalent to increasing the batch size above, but does not impact memory consumption! We normally suggest increasing this if you want smoother training loss curves.
3. `max_steps = 60, # num_train_epochs = 1,` - We set steps to 60 for faster training. For full training runs which can take hours, instead comment out `max_steps` and replace it with `num_train_epochs = 1`. Setting it to 1 means 1 full pass over your dataset. We normally suggest 1 to 3 passes, and no more, otherwise you will over-fit your finetune.
4. `learning_rate = 2e-4,` - Reduce the learning rate if you want to make the finetuning process slower, but also most likely converge to a higher accuracy result. We normally suggest 2e-4, 1e-4, 5e-5, 2e-5 as numbers to try.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FxwOA09mtcimcQOCjP4PG%252Fimage.png%3Falt%3Dmedia%26token%3D39a0f525-6d4e-4c3b-af0d-82d8960d87be&width=768&dpr=4&quality=100&sign=853c0062&sv=2)

You will see a log of some numbers! This is the training loss, and your job is to set the parameters to make this go as close to 0.5 as possible! If your finetune is not reaching 1, 0.8 or 0.5, you might have to adjust some numbers. If your loss goes to 0, that's probably not a good sign either!
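For orientation, the four numbers discussed above are passed to the trainer roughly as in the sketch below, which uses `trl`'s `SFTTrainer` as in the notebook (exact argument names can vary between `trl` versions):

```python
from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    args = TrainingArguments(
        per_device_train_batch_size = 2,   # raise this to use more GPU memory
        gradient_accumulation_steps = 4,   # "free" way to smooth the loss curve
        max_steps = 60,                    # or: num_train_epochs = 1 for a full pass
        learning_rate = 2e-4,              # try 2e-4, 1e-4, 5e-5, 2e-5
        logging_steps = 1,                 # print the training loss every step
        output_dir = "outputs",
    ),
)
trainer_stats = trainer.train()
```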
## [Direct link to heading](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama\#id-11.-inference-running-the-model) 11\. Inference / running the model

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FRX9Byv1hlSpvmonT1PLw%252Fimage.png%3Falt%3Dmedia%26token%3D6043cd8c-c6a3-4cc5-a019-48baeed3b5a2&width=768&dpr=4&quality=100&sign=7c7ce43f&sv=2)

Now let's run the model after we completed the training process! You can edit the yellow underlined part! In fact, because we created a multi turn chatbot, we can now also call the model as if it saw some conversations in the past, like below:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F6DXSlsHkN8cZiiAxAV0Z%252Fimage.png%3Falt%3Dmedia%26token%3D846307de-7386-4bbe-894e-7d9e572244fe&width=768&dpr=4&quality=100&sign=6482b95b&sv=2)

Reminder: Unsloth itself provides **2x faster inference** natively as well, so don't forget to call `FastLanguageModel.for_inference(model)`. If you want the model to output longer responses, set `max_new_tokens = 128` to some larger number like 256 or 1024. Notice you will have to wait longer for the result as well!

## [Direct link to heading](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama\#id-12.-saving-the-model) 12\. Saving the model

We can now save the finetuned model as a small 100MB file called a LoRA adapter like below. You can instead push to the Hugging Face hub as well if you want to upload your model! Remember to get a Hugging Face token via [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens) and add your token!

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FBz0YDi6Sc2oEP5QWXgSz%252Fimage.png%3Falt%3Dmedia%26token%3D33d9e4fd-e7dc-4714-92c5-bfa3b00f86c4&width=768&dpr=4&quality=100&sign=d6933a01&sv=2)

After saving the model, we can again use Unsloth to run the model itself! Use `FastLanguageModel` again to call it for inference!

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FzymBQrqwt4GUmCIN0Iec%252Fimage.png%3Falt%3Dmedia%26token%3D41a110e4-8263-426f-8fa7-cdc295cc8210&width=768&dpr=4&quality=100&sign=b2a207c3&sv=2)

## [Direct link to heading](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama\#id-13.-exporting-to-ollama) 13\. Exporting to Ollama

Finally we can export our finetuned model to Ollama itself! First we have to install Ollama in the Colab notebook:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FqNvGTAGwZKXxkMQqzloS%252Fimage.png%3Falt%3Dmedia%26token%3Ddb503499-0c74-4281-b3bf-400fa20c9ce2&width=768&dpr=4&quality=100&sign=6d57e83a&sv=2)

Then we export the finetuned model to llama.cpp's GGUF formats like below:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FZduLjedyfUbTmYqF85pa%252Fimage.png%3Falt%3Dmedia%26token%3Df5bac541-b99f-4d9b-82f7-033f8de780f2&width=768&dpr=4&quality=100&sign=1fdb7647&sv=2)

Reminder to convert `False` to `True` for 1 row only, and not to change every row to `True`, or else you'll be waiting for a very long time! We normally suggest setting the first row to `True`, so we can export the finetuned model quickly to the `Q8_0` format (8 bit quantization). We also allow you to export to a whole list of quantization methods as well, with a popular one being `q4_k_m`.
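The saving and export cells boil down to a few calls; a minimal sketch is below (the method names follow Unsloth's saving utilities, and the Hub repo name is a placeholder):

```python
# Save just the LoRA adapter locally (roughly a 100MB file)
model.save_pretrained("lora_model")
tokenizer.save_pretrained("lora_model")

# Optionally push the adapter to the Hugging Face Hub (requires your token)
# model.push_to_hub("your_name/lora_model", token = "hf_...")

# Export to llama.cpp's GGUF format for Ollama, e.g. 8-bit Q8_0 or 4-bit q4_k_m
model.save_pretrained_gguf("model", tokenizer, quantization_method = "q8_0")
# model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")
```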
Head over to [https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp) to learn more about GGUF. We also have manual instructions for exporting to GGUF if you want them here: [https://github.com/unslothai/unsloth/wiki#manually-saving-to-gguf](https://github.com/unslothai/unsloth/wiki#manually-saving-to-gguf)

You will see a long list of text like below - please wait 5 to 10 minutes!!

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FcuUAx0RNtrQACvU7uWCL%252Fimage.png%3Falt%3Dmedia%26token%3Ddc67801a-a363-48e2-8572-4c6d0d8d0d93&width=768&dpr=4&quality=100&sign=cc7f7372&sv=2)

And finally at the very end, it'll look like below:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FxRh07PEQjAmmz3s2HJUP%252Fimage.png%3Falt%3Dmedia%26token%3D3552a3c9-4d4f-49ee-a31e-0a64327419f0&width=768&dpr=4&quality=100&sign=1e9c9f0d&sv=2)

Then, we have to run Ollama itself in the background. We use `subprocess` because Colab doesn't like asynchronous calls, but normally one just runs `ollama serve` in the terminal / command prompt.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FszDuikrg4HY8lGefwpRQ%252Fimage.png%3Falt%3Dmedia%26token%3Dec1c8762-661d-4b13-ab4f-ed1a7b9fda00&width=768&dpr=4&quality=100&sign=fc72e538&sv=2)

## [Direct link to heading](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama\#id-14.-automatic-modelfile-creation) 14\. Automatic `Modelfile` creation

The trick Unsloth provides is that we automatically create a `Modelfile`, which Ollama requires! This is just a list of settings, and it includes the chat template which we used for the finetune process! You can also print the generated `Modelfile` like below:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252Fh6inH6k5ggxUP80Gltgj%252Fimage.png%3Falt%3Dmedia%26token%3D805bafb1-2795-4743-9bd2-323ab4f0881e&width=768&dpr=4&quality=100&sign=456e8653&sv=2)

We then ask Ollama to create an Ollama-compatible model, by using the `Modelfile`.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F1123bSSwmjWXliaRUL5U%252Fimage.png%3Falt%3Dmedia%26token%3D2e72f1a0-1ff8-4189-8d9c-d31e39385555&width=768&dpr=4&quality=100&sign=52a4fd99&sv=2)

## [Direct link to heading](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama\#id-15.-ollama-inference) 15\. Ollama Inference

And we can now call the model for inference - this calls the Ollama server itself, which is running on your own local machine / in the free Colab notebook in the background. Remember you can edit the yellow underlined part.
![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252Fk5mdsJ57hQ1Ar3KY6VXY%252FInference.png%3Falt%3Dmedia%26token%3D8cf0cbf9-0534-4bae-a887-89f45a3de771&width=768&dpr=4&quality=100&sign=8489fe55&sv=2) ## [Direct link to heading](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama\#id-16.-interactive-chatgpt-style) 16\. Interactive ChatGPT style But to actually run the finetuned model like a ChatGPT, we have to do a bit more! First click the terminal icon![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FUb17xtyDliAKhJEL9KuH%252Fimage.png%3Falt%3Dmedia%26token%3Df612e9b7-7d05-4039-a476-646026c6c8e6&width=300&dpr=4&quality=100&sign=b1c272f5&sv=2) and a Terminal will pop up. It's on the left sidebar. ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FRWPEy4fW8ytOljQYLn55%252FWhere_Terminal.png%3Falt%3Dmedia%26token%3D4ddf3017-2380-4615-958f-a465a76f7bac&width=768&dpr=4&quality=100&sign=32fba259&sv=2) Then, you might have to press ENTER twice to remove some weird output in the Terminal window. Wait a few seconds and type `ollama run unsloth_model` then hit ENTER. ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FL4aLJtoWh3HCkQ6f4J0Q%252FTerminal_Type.png%3Falt%3Dmedia%26token%3D9063f511-1e45-4a44-a9c1-14f0de4e4571&width=768&dpr=4&quality=100&sign=835f2f2&sv=2) And finally, you can interact with the finetuned model just like an actual ChatGPT! Hit CTRL + D to exit the system, and hit ENTER to converse with the chatbot! ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252Fo3vIehaOLOOBlBGBS7lX%252FAssistant.png%3Falt%3Dmedia%26token%3D25319dd2-384c-4744-a2dd-398f48a3b20f&width=768&dpr=4&quality=100&sign=d95d479&sv=2) ## [Direct link to heading](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama\#youve-done-it) You've done it! You've successfully finetuned a language model and exported it to Ollama with Unsloth 2x faster and with 70% less VRAM! And all this for free in a Google Colab notebook! If you want to learn how to do reward modelling, do continued pretraining, export to vLLM or GGUF, do text completion, or learn more about finetuning tips and tricks, head over to our [Github](https://github.com/unslothai/unsloth#-finetune-for-free). If you need any help on finetuning, you can also join our Discord server [here](https://discord.gg/unsloth). If you want help with Ollama, you can also join their server [here](https://discord.gg/ollama). And finally, we want to thank you for reading and following this far! We hope this made you understand some of the nuts and bolts behind finetuning language models, and we hope this was useful! 
To access our Alpaca dataset example click [here](https://colab.research.google.com/drive/1WZDi7APtQ9VsvOrQSSC5DDtxq159j8iZ?usp=sharing), and our CSV / Excel finetuning guide is [here](https://colab.research.google.com/drive/1VYkncZMfGFkeCEgN2IzbZIKEDkyQuJAS?usp=sharing).
{ "color-scheme": "light dark", "description": "Beginner's Guide for creating a customized personal assistant (like ChatGPT) to run locally on Ollama", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Beginner's Guide for creating a customized personal assistant (like ChatGPT) to run locally on Ollama", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Tutorial: How to Finetune Llama-3 and Use In Ollama | Unsloth Documentation", "ogDescription": "Beginner's Guide for creating a customized personal assistant (like ChatGPT) to run locally on Ollama", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Tutorial: How to Finetune Llama-3 and Use In Ollama | Unsloth Documentation", "robots": "index, follow", "scrapeId": "693638f7-9d2f-432c-a258-b44e0f8cd271", "sourceURL": "https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama", "statusCode": 200, "title": "Tutorial: How to Finetune Llama-3 and Use In Ollama | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Beginner's Guide for creating a customized personal assistant (like ChatGPT) to run locally on Ollama", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Tutorial: How to Finetune Llama-3 and Use In Ollama | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama", "viewport": "width=device-width, initial-scale=1" }
Qwen3, full fine-tuning & all models are now supported! 🦥 See the table below for all [Dynamic](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs) GGUF, 4-bit, 16-bit uploaded models on [Hugging Face](https://huggingface.co/unsloth). - GGUFs can be used to run in your favorite places like Ollama, Open WebUI and llama.cpp. - 4-bit and 16-bit models can be used for inference serving or fine-tuning. • GGUF + 4-bit • 16-bit original Here's a table of all our GGUF + 4-bit model uploads: Model GGUF Instruct (4-bit) Base (4-bit) [Qwen3](https://huggingface.co/collections/unsloth/qwen3-680edabfb790c8c34a242f95) (new) - [0.6B](https://huggingface.co/unsloth/Qwen3-0.6B-GGUF) - [1.7B](https://huggingface.co/unsloth/Qwen3-1.7B-GGUF) - [4B](https://huggingface.co/unsloth/Qwen3-4B-GGUF) - [8B](https://huggingface.co/unsloth/Qwen3-8B-GGUF) - [14B](https://huggingface.co/unsloth/Qwen3-14B-GGUF) - [30B-A3B](https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF) - [32B](https://huggingface.co/unsloth/Qwen3-32B-GGUF) - [235B-A22B](https://huggingface.co/unsloth/Qwen3-235B-A22B-GGUF) - [0.6B](https://huggingface.co/unsloth/Qwen3-0.6B-unsloth-bnb-4bit) - [1.7B](https://huggingface.co/unsloth/Qwen3-1.7B-unsloth-bnb-4bit) - [4B](https://huggingface.co/unsloth/Qwen3-4B-unsloth-bnb-4bit) - [8B](https://huggingface.co/unsloth/Qwen3-8B-unsloth-bnb-4bit) - [14B](https://huggingface.co/unsloth/Qwen3-14B-unsloth-bnb-4bit) - [30B-A3B](https://huggingface.co/unsloth/Qwen3-30B-A3B-bnb-4bit) - [32B](https://huggingface.co/unsloth/Qwen3-32B-unsloth-bnb-4bit) - [0.6B](https://huggingface.co/unsloth/Qwen3-1.7B-Base-unsloth-bnb-4bit) - [1.7B](https://huggingface.co/unsloth/Qwen3-1.7B-unsloth-bnb-4bit) - [4B](https://huggingface.co/unsloth/Qwen3-4B-Base-unsloth-bnb-4bit) - [8B](https://huggingface.co/unsloth/Qwen3-8B-Base-unsloth-bnb-4bit) - [14B](https://huggingface.co/unsloth/Qwen3-14B-Base-unsloth-bnb-4bit) - [30B](https://huggingface.co/unsloth/Qwen3-30B-A3B-Base-bnb-4bit) [Phi-4](https://huggingface.co/collections/unsloth/phi-4-all-versions-677eecf93784e61afe762afa) (new) - [Reasoning-plus](https://huggingface.co/unsloth/Phi-4-reasoning-plus-GGUF/) - [Reasoning](https://huggingface.co/unsloth/Phi-4-reasoning-GGUF) - [Mini-reasoning](https://huggingface.co/unsloth/Phi-4-mini-reasoning-GGUF/) - [Phi-4](https://huggingface.co/unsloth/phi-4-GGUF) - [mini](https://huggingface.co/unsloth/Phi-4-mini-instruct-GGUF) - [Reasoning-plus](https://huggingface.co/unsloth/Phi-4-reasoning-plus-unsloth-bnb-4bit) - [Reasoning](https://huggingface.co/unsloth/phi-4-reasoning-unsloth-bnb-4bit) - [Mini-reasoning](https://huggingface.co/unsloth/Phi-4-mini-reasoning-unsloth-bnb-4bit) - [Phi-4](https://huggingface.co/unsloth/phi-4-unsloth-bnb-4bit) - [mini](https://huggingface.co/unsloth/Phi-4-mini-instruct-unsloth-bnb-4bit) [Llama 4](https://huggingface.co/collections/unsloth/llama-4-67f19503d764b0f3a2a868d2) (new) - [Scout](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF) - [Maverick](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF) - [Scout](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-unsloth-bnb-4bit) - [Scout](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-unsloth-bnb-4bit) [Gemma 3](https://huggingface.co/collections/unsloth/gemma-3-67d12b7e8816ec6efa7e4e5b) (new) - [1B](https://huggingface.co/unsloth/gemma-3-1b-it-GGUF) - [4B](https://huggingface.co/unsloth/gemma-3-4b-it-GGUF) - [12B](https://huggingface.co/unsloth/gemma-3-12b-it-GGUF) - 
[27B](https://huggingface.co/unsloth/gemma-3-27b-it-GGUF) - [1B](https://huggingface.co/unsloth/gemma-3-1b-it-unsloth-bnb-4bit) - [4B](https://huggingface.co/unsloth/gemma-3-4b-it-unsloth-bnb-4bit) - [12B](https://huggingface.co/unsloth/gemma-3-12b-it-unsloth-bnb-4bit) - [27B](https://huggingface.co/unsloth/gemma-3-27b-it-unsloth-bnb-4bit) - [1B](https://huggingface.co/unsloth/gemma-3-1b-pt-unsloth-bnb-4bit) - [4B](https://huggingface.co/unsloth/gemma-3-4b-pt-unsloth-bnb-4bit) - [12B](https://huggingface.co/unsloth/gemma-3-12b-pt-unsloth-bnb-4bit) - [27B](https://huggingface.co/unsloth/gemma-3-27b-pt-unsloth-bnb-4bit) [DeepSeek-R1](https://huggingface.co/collections/unsloth/deepseek-r1-all-versions-678e1c48f5d2fce87892ace5) - [R1](https://huggingface.co/unsloth/DeepSeek-R1-GGUF) - [R1 Zero](https://huggingface.co/unsloth/DeepSeek-R1-Zero-GGUF) - [Llama 3 (8B)](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF) - [Llama 3.3 (70B)](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF) - [Qwen 2.5 (14B)](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-14B-GGUF) - [Qwen 2.5 (32B)](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-32B-GGUF) - [Qwen 2.5 (1.5B)](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-1.5B-GGUF) - [Qwen 2.5 (7B)](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-7B-GGUF) - [Llama 3 (8B)](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit) - [Llama 3.3 (70B)](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B-bnb-4bit) - [Qwen 2.5 (14B)](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bit) - [Qwen 2.5 (32B)](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-32B-bnb-4bit) - [Qwen 2.5 (1.5B)](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-1.5B-unsloth-bnb-4bit) - [Qwen 2.5 (7B)](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-7B-unsloth-bnb-4bit) [Llama 3.2](https://huggingface.co/collections/unsloth/llama-32-66f46afde4ca573864321a22) - [1B](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct-GGUF) - [3B](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct-GGUF) - [1B](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct-bnb-4bit) - [3B](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct-bnb-4bit) - [11B Vision](https://huggingface.co/unsloth/Llama-3.2-11B-Vision-Instruct-unsloth-bnb-4bit) - [90B Vision](https://huggingface.co/unsloth/Llama-3.2-90B-Vision-Instruct-bnb-4bit) - [1B](https://huggingface.co/unsloth/Llama-3.2-1B-bnb-4bit) - [3B](https://huggingface.co/unsloth/Llama-3.2-3B-bnb-4bit) - [11B Vision](https://huggingface.co/unsloth/Llama-3.2-11B-Vision-unsloth-bnb-4bit) - [90B Vision](https://huggingface.co/unsloth/Llama-3.2-90B-Vision-bnb-4bit) [Llama 3.3](https://huggingface.co/collections/unsloth/llama-33-all-versions-67535d7d994794b9d7cf5e9f) - [70B](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-GGUF) - [70B](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-bnb-4bit) [Llama 3.1](https://huggingface.co/collections/unsloth/llama-31-collection-6753dca76f47d9ce1696495f) - [8B](https://huggingface.co/unsloth/Llama-3.1-8B-Instruct-GGUF) - [8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit) - [70B](https://huggingface.co/unsloth/Meta-Llama-3.1-70B-Instruct-bnb-4bit) - [405B](https://huggingface.co/unsloth/Meta-Llama-3.1-405B-Instruct-bnb-4bit/) - [8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-bnb-4bit) - [70B](https://huggingface.co/unsloth/Meta-Llama-3.1-70B-bnb-4bit) - 
[405B](https://huggingface.co/unsloth/Meta-Llama-3.1-405B-bnb-4bit) Mistral - [Small 3.1](https://huggingface.co/unsloth/Mistral-Small-3.1-24B-Instruct-2503-GGUF) \- new - [Small 3](https://huggingface.co/unsloth/Mistral-Small-24B-Instruct-2501-GGUF) - [NeMo 2407 (12B)](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407-GGUF) - [Small 3.1](https://huggingface.co/unsloth/Mistral-Small-3.1-24B-Instruct-2503-unsloth-bnb-4bit) \- new - [Small 3](https://huggingface.co/unsloth/Mistral-Small-24B-Instruct-2501-unsloth-bnb-4bit) - [NeMo 2407 (12B)](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit) - [Small 2409 (22B)](https://huggingface.co/unsloth/Mistral-Small-Instruct-2409-bnb-4bit) - [Large 2407](https://huggingface.co/unsloth/Mistral-Large-Instruct-2407-bnb-4bit) - [7B (v0.3)](https://huggingface.co/unsloth/mistral-7b-instruct-v0.3-bnb-4bit) - [7B (v0.2)](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2-bnb-4bit) - [Pixtral (12B) 2409](https://huggingface.co/unsloth/Pixtral-12B-2409-bnb-4bit) - [Mixtral-8x7B](https://huggingface.co/unsloth/Mixtral-8x7B-Instruct-v0.1-unsloth-bnb-4bit) - [Small 3.1](https://huggingface.co/unsloth/Mistral-Small-3.1-24B-Base-2503-unsloth-bnb-4bit) \- new - [Small 3](https://huggingface.co/unsloth/Mistral-Small-24B-Base-2501-unsloth-bnb-4bit) - [NeMo 2407 (12B)](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407-bnb-4bit) - [7B (v0.3)](https://huggingface.co/unsloth/mistral-7b-v0.3-bnb-4bit) - [7B (v0.2)](https://huggingface.co/unsloth/mistral-7b-v0.2-bnb-4bit) - [Pixtral (12B) 2409](https://huggingface.co/unsloth/Pixtral-12B-2409-unsloth-bnb-4bit) [QwQ-32B](https://huggingface.co/collections/unsloth/qwen-qwq-32b-collection-676b3b29c20c09a8c71a6235) (new) - [32B](https://huggingface.co/unsloth/QwQ-32B-GGUF) - [32B](https://huggingface.co/unsloth/QwQ-32B-unsloth-bnb-4bit) [DeepSeek V3](https://huggingface.co/collections/unsloth/deepseek-v3-all-versions-677cf5cfd7df8b7815fc723c) (new) - [V3-0324](https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF) \- new - [V3](https://huggingface.co/unsloth/DeepSeek-V3-GGUF) [Qwen2.5-VL](https://huggingface.co/collections/unsloth/qwen25-vl-all-versions-679ca6c784fad5bd976a05a1) (new) - [3B](https://huggingface.co/unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit) - [7B](https://huggingface.co/unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit) - [32B](https://huggingface.co/unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit) \- new - [72B](https://huggingface.co/unsloth/Qwen2.5-VL-72B-Instruct-unsloth-bnb-4bit) [Qwen 2.5](https://huggingface.co/collections/unsloth/qwen-25-66fe4c08fb9ada518e8a0d3f) - [0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct-bnb-4bit) - [1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct-bnb-4bit) - [3B](https://huggingface.co/unsloth/Qwen2.5-3B-Instruct-bnb-4bit) - [7B](https://huggingface.co/unsloth/Qwen2.5-7B-Instruct-bnb-4bit) - [14B](https://huggingface.co/unsloth/Qwen2.5-14B-Instruct-bnb-4bit) - [32B](https://huggingface.co/unsloth/Qwen2.5-32B-Instruct-bnb-4bit) - [72B](https://huggingface.co/unsloth/Qwen2-72B-Instruct-bnb-4bit) - [QwQ](https://huggingface.co/unsloth/QwQ-32B-Preview-unsloth-bnb-4bit) - [QVQ](https://huggingface.co/unsloth/QVQ-72B-Preview-bnb-4bit) - [0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B-bnb-4bit) - [1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B-bnb-4bit) - [3B](https://huggingface.co/unsloth/Qwen2.5-3B-bnb-4bit) - [7B](https://huggingface.co/unsloth/Qwen2.5-7B-bnb-4bit) - 
[14B](https://huggingface.co/unsloth/Qwen2.5-14B-bnb-4bit) - [32B](https://huggingface.co/unsloth/Qwen2.5-32B-bnb-4bit) - [72B](https://huggingface.co/unsloth/Qwen2.5-72B-bnb-4bit) Text-to-speech (TTS) - [Orpheus-TTS (3B)](https://docs.unsloth.ai/) - [Orpheus-TTS (3B)](https://huggingface.co/unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit) Gemma 2 - [All variants](https://huggingface.co/unsloth/gemma-2-it-GGUF) - [2B](https://huggingface.co/unsloth/gemma-2-2b-it-bnb-4bit) - [9B](https://huggingface.co/unsloth/gemma-2-9b-it-bnb-4bit) - [27B](https://huggingface.co/unsloth/gemma-2-27b-it-bnb-4bit) - [2B](https://huggingface.co/unsloth/gemma-2-2b-bnb-4bit) - [9B](https://huggingface.co/unsloth/gemma-2-9b-bnb-4bit) - [27B](https://huggingface.co/unsloth/gemma-2-27b-bnb-4bit) Phi-3.5 - [mini](https://huggingface.co/unsloth/Phi-3.5-mini-instruct-bnb-4bit) Phi-3 - [mini](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct-bnb-4bit) - [medium](https://huggingface.co/unsloth/Phi-3-medium-4k-instruct-bnb-4bit) Llama 3 - [8B](https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit) - [70B](https://huggingface.co/unsloth/llama-3-70b-bnb-4bit) - [8B](https://huggingface.co/unsloth/llama-3-8b-bnb-4bit) - [70B](https://huggingface.co/unsloth/llama-3-70b-bnb-4bit) Llava - [1.5 (7B)](https://huggingface.co/unsloth/llava-1.5-7b-hf-bnb-4bit) - [1.6 Mistral (7B)](https://huggingface.co/unsloth/llava-v1.6-mistral-7b-hf-bnb-4bit) [Qwen 2.5 Coder](https://huggingface.co/collections/unsloth/qwen-25-coder-6732bc833ed65dd1964994d4) - [0.5B](https://huggingface.co/unsloth/Qwen2.5-Coder-0.5B-Instruct-128K-GGUF) - [1.5B](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct-128K-GGUF) - [3B](https://huggingface.co/unsloth/Qwen2.5-Coder-3B-Instruct-128K-GGUF) - [7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct-128K-GGUF) - [14B](https://huggingface.co/unsloth/Qwen2.5-Coder-14B-Instruct-128K-GGUF) - [32B](https://huggingface.co/unsloth/Qwen2.5-Coder-32B-Instruct-128K-GGUF) - [0.5B](https://huggingface.co/unsloth/Qwen2.5-Coder-0.5B-Instruct-bnb-4bit) - [1.5B](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct-bnb-4bit) - [3B](https://huggingface.co/unsloth/Qwen2.5-Coder-3B-Instruct-bnb-4bit) - [7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit) - [14B](https://huggingface.co/unsloth/Qwen2.5-Coder-14B-Instruct-bnb-4bit) - [32B](https://huggingface.co/unsloth/Qwen2.5-Coder-32B-Instruct-bnb-4bit) - [0.5B](https://huggingface.co/unsloth/Qwen2.5-Coder-0.5B-bnb-4bit) - [1.5B](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-bnb-4bit) - [3B](https://huggingface.co/unsloth/Qwen2.5-Coder-3B-bnb-4bit) - [7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-bnb-4bit) - [14B](https://huggingface.co/unsloth/Qwen2.5-Coder-32B-bnb-4bit) - [32B](https://huggingface.co/unsloth/Qwen2.5-Coder-32B-bnb-4bit) Llama 2 - [7B](https://huggingface.co/unsloth/llama-2-7b-chat-bnb-4bit) - [7B](https://huggingface.co/unsloth/llama-2-7b-bnb-4bit) - [13B](https://huggingface.co/unsloth/llama-2-13b-bnb-4bit) Qwen2 VL - [2B](https://huggingface.co/unsloth/Qwen2-VL-2B-Instruct-unsloth-bnb-4bit) - [7B](https://huggingface.co/unsloth/Qwen2-VL-7B-Instruct-unsloth-bnb-4bit) - [72B](https://huggingface.co/unsloth/Qwen2-VL-72B-Instruct-bnb-4bit) SmolLM2 - [135M](https://huggingface.co/unsloth/SmolLM2-135M-Instruct-GGUF) - [360M](https://huggingface.co/unsloth/SmolLM2-360M-Instruct-GGUF) - [1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct-GGUF) - 
[135M](https://huggingface.co/unsloth/SmolLM2-135M-Instruct-bnb-4bit) - [360M](https://huggingface.co/unsloth/SmolLM2-360M-Instruct-bnb-4bit) - [1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct-bnb-4bit) - [135M](https://huggingface.co/unsloth/SmolLM2-135M-bnb-4bit) - [360M](https://huggingface.co/unsloth/SmolLM2-360M-bnb-4bit) - [1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B-bnb-4bit) TinyLlama - [Instruct](https://huggingface.co/unsloth/tinyllama-chat-bnb-4bit) - [Base](https://huggingface.co/unsloth/tinyllama-bnb-4bit) Qwen2 - [1.5B](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct-bnb-4bit) - [7B](https://huggingface.co/unsloth/Qwen2-7B-Instruct-bnb-4bit) - [72B](https://huggingface.co/unsloth/Qwen2-72B-Instruct-bnb-4bit) - [1.5B](https://huggingface.co/unsloth/Qwen2-1.5B-bnb-4bit) - [7B](https://huggingface.co/unsloth/Qwen2-7B-bnb-4bit) - [72B](https://huggingface.co/unsloth/Qwen2-7B-bnb-4bit) Zephyr SFT - [Instruct](https://huggingface.co/unsloth/zephyr-sft-bnb-4bit) CodeLlama - [7B](https://huggingface.co/unsloth/codellama-7b-bnb-4bit) - [13B](https://huggingface.co/unsloth/codellama-13b-bnb-4bit) - [34B](https://huggingface.co/unsloth/codellama-34b-bnb-4bit) Yi - [34B](https://huggingface.co/unsloth/yi-34b-chat-bnb-4bit) - [6B (v 1.5)](https://huggingface.co/unsloth/Yi-1.5-6B-bnb-4bit) - [6B](https://huggingface.co/unsloth/yi-6b-bnb-4bit) - [34B](https://huggingface.co/unsloth/yi-34b-bnb-4bit) Here's a table of all our 16-bit or 8-bit original model uploads: Model Instruct Base [Qwen3](https://huggingface.co/collections/unsloth/qwen3-680edabfb790c8c34a242f95) (new) - [0.6B](https://huggingface.co/unsloth/Qwen3-1.7B) - [1.7B](https://huggingface.co/unsloth/Qwen3-1.7B-GGUF) - [4B](https://huggingface.co/unsloth/Qwen3-4B) - [8B](https://huggingface.co/unsloth/Qwen3-8B) - [14B](https://huggingface.co/unsloth/Qwen3-14B) - [30B-A3B](https://huggingface.co/unsloth/Qwen3-30B-A3B) - [32B](https://huggingface.co/unsloth/Qwen3-32B) - [235B-A22B](https://huggingface.co/unsloth/Qwen3-235B-A22B) - [0.6B](https://huggingface.co/unsloth/Qwen3-0.6B-Base) - [1.7B](https://huggingface.co/unsloth/Qwen3-1.7B-Base) - [4B](https://huggingface.co/unsloth/Qwen3-4B-Base) - [8B](https://huggingface.co/unsloth/Qwen3-8B-Base) - [14B](https://huggingface.co/unsloth/Qwen3-14B-Base) - [30B-A3B](https://huggingface.co/unsloth/Qwen3-30B-A3B-Base) [Llama 4](https://huggingface.co/collections/unsloth/llama-4-67f19503d764b0f3a2a868d2) (new) - [Scout](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct) - [Maverick](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct) - [Scout](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E) - [Maverick](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E) [Phi-4](https://huggingface.co/collections/unsloth/phi-4-all-versions-677eecf93784e61afe762afa) (new) - [Reasoning-plus](https://huggingface.co/unsloth/Phi-4-reasoning-plus) - [Reasoning](https://huggingface.co/unsloth/Phi-4-reasoning) - ' - [Phi-4](https://huggingface.co/unsloth/phi-4) - [Phi-4-mini](https://huggingface.co/unsloth/Phi-4-mini-instruct) [Gemma 3](https://huggingface.co/collections/unsloth/gemma-3-67d12b7e8816ec6efa7e4e5b) (new) - [1B](https://huggingface.co/unsloth/gemma-3-1b-it) - [4B](https://huggingface.co/unsloth/gemma-3-4b-it) - [12B](https://huggingface.co/unsloth/gemma-3-12b-it) - [27B](https://huggingface.co/unsloth/gemma-3-27b-it) - [1B](https://huggingface.co/unsloth/gemma-3-1b-pt) - [4B](https://huggingface.co/unsloth/gemma-3-4b-pt) - 
[12B](https://huggingface.co/unsloth/gemma-3-12b-pt) - [27B](https://huggingface.co/unsloth/gemma-3-27b-pt) [DeepSeek-R1](https://huggingface.co/collections/unsloth/deepseek-r1-all-versions-678e1c48f5d2fce87892ace5) - [R1](https://huggingface.co/unsloth/DeepSeek-R1) - [R1 Zero](https://huggingface.co/unsloth/DeepSeek-R1-Zero) - [Llama 3 (8B)](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B) - [Llama 3.3 (70B)](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B) - [Qwen 2.5 (14B)](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-14B) - [Qwen 2.5 (32B)](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-32B) - [Qwen 2.5 (1.5B)](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-1.5B) - [Qwen 2.5 (7B)](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-7B) - [R1 (bf16)](https://huggingface.co/unsloth/DeepSeek-R1-BF16) [Llama 3.2](https://huggingface.co/collections/unsloth/llama-32-66f46afde4ca573864321a22) - [1B](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) - [3B](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct-bnb-4bit) - [11B Vision](https://huggingface.co/unsloth/Llama-3.2-11B-Vision-Instruct) - [90B Vision](https://huggingface.co/unsloth/Llama-3.2-90B-Vision-Instruct) - [1B](https://huggingface.co/unsloth/Llama-3.2-1B) - [3B](https://huggingface.co/unsloth/Llama-3.2-3B) - [11B Vision](https://huggingface.co/unsloth/Llama-3.2-11B-Vision) - [90B Vision](https://huggingface.co/unsloth/Llama-3.2-90B-Vision) [Llama 3.3](https://huggingface.co/collections/unsloth/llama-33-all-versions-67535d7d994794b9d7cf5e9f) - [70B](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct) [Llama 3.1](https://huggingface.co/collections/unsloth/llama-31-collection-6753dca76f47d9ce1696495f) - [8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) - [70B](https://huggingface.co/unsloth/Meta-Llama-3.1-70B-Instruct) - [8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) - [70B](https://huggingface.co/unsloth/Meta-Llama-3.1-70B) Mistral - [Mistral Small 2501](https://huggingface.co/unsloth/Mistral-Small-24B-Instruct-2501) - [NeMo 2407 (12B)](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) - [Small 2409 (22B)](https://huggingface.co/unsloth/Mistral-Small-Instruct-2409) - [7B (v0.3)](https://huggingface.co/unsloth/mistral-7b-instruct-v0.3) - [7B (v0.2)](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) - [Pixtral (12B) 2409](https://huggingface.co/unsloth/Pixtral-12B-2409) - [Mixtral-8x7B](https://huggingface.co/unsloth/Mixtral-8x7B-Instruct-v0.1) - [Mistral Small 2501](https://huggingface.co/unsloth/Mistral-Small-24B-Base-2501) - [NeMo 2407 (12B)](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) - [7B (v0.3)](https://huggingface.co/unsloth/mistral-7b-v0.3) - [7B (v0.2)](https://huggingface.co/unsloth/mistral-7b-v0.2) - [Pixtral (12B) 2409](https://huggingface.co/unsloth/Pixtral-12B-Base-2409) Gemma 2 - [2B](https://huggingface.co/unsloth/gemma-2b-it) - [9B](https://huggingface.co/unsloth/gemma-9b-it) - [27B](https://huggingface.co/unsloth/gemma-27b-it) - [2B](https://huggingface.co/unsloth/gemma-2-2b) - [9B](https://huggingface.co/unsloth/gemma-2-9b) - [27B](https://huggingface.co/unsloth/gemma-2-27b) DeepSeek V3 - [bf16](https://huggingface.co/unsloth/DeepSeek-V3-bf16) - [original 8-bit](https://huggingface.co/unsloth/DeepSeek-V3) Phi-3.5 - [mini](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) Phi-3 - [mini](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct) - 
[medium](https://huggingface.co/unsloth/Phi-3-medium-4k-instruct) Llama 3 - [8B](https://huggingface.co/unsloth/llama-3-8b-Instruct) - [8B](https://huggingface.co/unsloth/llama-3-8b) [Qwen 2.5](https://huggingface.co/collections/unsloth/qwen-25-66fe4c08fb9ada518e8a0d3f) - [0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) - [1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct) - [3B](https://huggingface.co/unsloth/Qwen2.5-3B-Instruct) - [7B](https://huggingface.co/unsloth/Qwen2.5-7B-Instruct) - [14B](https://huggingface.co/unsloth/Qwen2.5-14B-Instruct) - [32B](https://huggingface.co/unsloth/Qwen2.5-32B-Instruct) - [72B](https://huggingface.co/unsloth/Qwen2.5-72B-Instruct) - [0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B) - [1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B) - [3B](https://huggingface.co/unsloth/Qwen2.5-3B) - [7B](https://huggingface.co/unsloth/Qwen2.5-7B) - [14B](https://huggingface.co/unsloth/Qwen2.5-14B) - [32B](https://huggingface.co/unsloth/Qwen2.5-32B) - [72B](https://huggingface.co/unsloth/Qwen2.5-72B) Llava - [1.5 (7B)](https://huggingface.co/unsloth/llava-1.5-7b-hf) - [1.6 Mistral (7B)](https://huggingface.co/unsloth/llava-v1.6-mistral-7b-hf) Qwen2 VL - [2B](https://huggingface.co/unsloth/Qwen2-VL-2B-Instruct) - [7B](https://huggingface.co/unsloth/Qwen2-VL-7B-Instruct) - [72B](https://huggingface.co/unsloth/Qwen2-VL-72B-Instruct) Llama 2 - [7B](https://huggingface.co/unsloth/llama-2-7b-chat) - [7B](https://huggingface.co/unsloth/llama-2-7b) - [13B](https://huggingface.co/unsloth/llama-2-13b) SmolLM2 - [135M](https://huggingface.co/unsloth/SmolLM2-135M-Instruct) - [360M](https://huggingface.co/unsloth/SmolLM2-360M-Instruct) - [1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct) - [135M](https://huggingface.co/unsloth/SmolLM2-135M) - [360M](https://huggingface.co/unsloth/SmolLM2-360M) - [1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) TinyLlama - [Instruct](https://huggingface.co/unsloth/tinyllama-chat) - [Base](https://huggingface.co/unsloth/tinyllama) Qwen2 - [1.5B](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) - [7B](https://huggingface.co/unsloth/Qwen2-7B-Instruct) - [1.5B](https://huggingface.co/unsloth/Qwen2-1.5B) - [7B](https://huggingface.co/unsloth/Qwen2-7B) Zephyr SFT - [Instruct](https://huggingface.co/unsloth/zephyr-sft) [PreviousUnsloth Notebooks](https://docs.unsloth.ai/get-started/unsloth-notebooks) [NextInstalling + Updating](https://docs.unsloth.ai/get-started/installing-+-updating) Last updated 10 days ago Was this helpful?
{ "color-scheme": "light dark", "description": null, "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": null, "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "All Our Models | Unsloth Documentation", "ogDescription": null, "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "All Our Models | Unsloth Documentation", "robots": "index, follow", "scrapeId": "6d40f65f-7ffa-473e-a134-d005e29ea85c", "sourceURL": "https://docs.unsloth.ai/get-started/all-our-models", "statusCode": 200, "title": "All Our Models | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": null, "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "All Our Models | Unsloth Documentation", "url": "https://docs.unsloth.ai/get-started/all-our-models", "viewport": "width=device-width, initial-scale=1" }
Qwen3, full fine-tuning & all models are now supported! 🦥

Fine-tuning vision models has numerous use cases across various industries, enabling models to adapt to specific tasks and datasets. We provide three example notebooks for vision finetuning. Note: [Gemma 3](https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune) also works, just change Qwen or Pixtral's notebook to the Gemma 3 model.

1. **Llama 3.2 Vision** finetuning for radiography: [Notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) This helps medical professionals analyze X-rays, CT scans & ultrasounds faster.
2. **Qwen2.5 VL** finetuning for converting handwriting to LaTeX: [Notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_VL_(7B)-Vision.ipynb) This allows complex math formulas to be transcribed as LaTeX without writing them out manually.
3. **Pixtral 12B 2409** vision finetuning for general Q&A: [Notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Pixtral_(12B)-Vision.ipynb) You can concatenate general Q&A datasets with more niche datasets so the finetune does not forget the base model's skills.

It is best to ensure your dataset has images of all the same size/dimensions. Use dimensions of 300-1000px so that training does not take too long or use too many resources.

To finetune vision models, we now allow you to select which parts of the model to finetune. You can choose to finetune only the vision layers, only the language layers, or only the attention / MLP layers. We set them all on by default!

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252Fhn0GA65x39KB93DZ4SzY%252Fvision%2520finetuning%2520template.png%3Falt%3Dmedia%26token%3D4a201a6c-53f5-4798-9a1b-0e18d0924d8a&width=768&dpr=4&quality=100&sign=a7bf3919&sv=2)

## [Direct link to heading](https://docs.unsloth.ai/basics/vision-fine-tuning\#vision-fine-tuning-dataset) Vision Fine-tuning Dataset

The dataset for fine-tuning a vision or multimodal model is similar to standard question & answer pair [datasets](https://docs.unsloth.ai/basics/datasets-guide), but this time it also includes image inputs. For example, the [Llama 3.2 Vision Notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb#scrollTo=vITh0KVJ10qX) uses a radiography case to show how AI can help medical professionals analyze X-rays, CT scans, and ultrasounds more efficiently.

We'll be using a sampled version of the ROCO radiography dataset, which you can access [here](https://huggingface.co/datasets/unsloth/Radiology_mini). The dataset includes X-rays, CT scans and ultrasounds showcasing medical conditions and diseases. Each image has a caption written by experts describing it. The goal is to finetune a VLM to make it a useful analysis tool for medical professionals.
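Before formatting the data, the model is loaded and the layer-selection options described above are set. The following is a minimal sketch based on the vision notebooks; it assumes the Llama 3.2 Vision checkpoint used in the radiography example, and exact argument defaults may vary between Unsloth versions:

```python
from unsloth import FastVisionModel

# Load a 4-bit vision model (checkpoint from the radiography notebook)
model, tokenizer = FastVisionModel.from_pretrained(
    "unsloth/Llama-3.2-11B-Vision-Instruct",
    load_in_4bit = True,
    use_gradient_checkpointing = "unsloth",  # lowers VRAM usage for long contexts
)

# Pick which parts of the model get LoRA adapters - all are on by default
model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers     = True,  # False would leave the vision tower frozen
    finetune_language_layers   = True,
    finetune_attention_modules = True,
    finetune_mlp_modules       = True,
    r = 16,
    lora_alpha = 16,
    lora_dropout = 0,
    random_state = 3407,
)
```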
Let's take a look at the dataset, and check what the 1st example shows: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] Dataset({ features: ['image', 'image_id', 'caption', 'cui'], num_rows: 1978 }) ``` Image Caption ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FrjdETiyi6jqzAao7vg8I%252Fxray.png%3Falt%3Dmedia%26token%3Df66fdd7f-5e10-4eff-a280-5b3d63ed7849&width=768&dpr=4&quality=100&sign=4d4d6839&sv=2) Panoramic radiography shows an osteolytic lesion in the right posterior maxilla with resorption of the floor of the maxillary sinus (arrows). To format the dataset, all vision finetuning tasks should be formatted as follows: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] [\ { "role": "user",\ "content": [{"type": "text", "text": instruction}, {"type": "image", "image": image} ]\ },\ { "role": "assistant",\ "content": [{"type": "text", "text": answer} ]\ },\ ] ``` We will craft an custom instruction asking the VLM to be an expert radiographer. Notice also instead of just 1 instruction, you can add multiple turns to make it a dynamic conversation. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] instruction = "You are an expert radiographer. Describe accurately what you see in this image." def convert_to_conversation(sample): conversation = [\ { "role": "user",\ "content" : [\ {"type" : "text", "text" : instruction},\ {"type" : "image", "image" : sample["image"]} ]\ },\ { "role" : "assistant",\ "content" : [\ {"type" : "text", "text" : sample["caption"]} ]\ },\ ] return { "messages" : conversation } pass ``` Let's convert the dataset into the "correct" format for finetuning: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] converted_dataset = [convert_to_conversation(sample) for sample in dataset] ``` The first example is now structured like below: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] converted_dataset[0] ``` Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] whitespace-pre-wrap {'messages': [{'role': 'user',\ 'content': [{'type': 'text',\ 'text': 'You are an expert radiographer. Describe accurately what you see in this image.'},\ {'type': 'image',\ 'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=657x442>}]},\ {'role': 'assistant',\ 'content': [{'type': 'text',\ 'text': 'Panoramic radiography shows an osteolytic lesion in the right posterior maxilla with resorption of the floor of the maxillary sinus (arrows).'}]}]} ``` Before we do any finetuning, maybe the vision model already knows how to analyse the images? Let's check if this is the case! Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] FastVisionModel.for_inference(model) # Enable for inference! image = dataset[0]["image"] instruction = "You are an expert radiographer. Describe accurately what you see in this image." 
messages = [\ {"role": "user", "content": [\ {"type": "image"},\ {"type": "text", "text": instruction}\ ]}\ ] input_text = tokenizer.apply_chat_template(messages, add_generation_prompt = True) inputs = tokenizer( image, input_text, add_special_tokens = False, return_tensors = "pt", ).to("cuda") from transformers import TextStreamer text_streamer = TextStreamer(tokenizer, skip_prompt = True) _ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128, use_cache = True, temperature = 1.5, min_p = 0.1) ``` And the result: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] This radiograph appears to be a panoramic view of the upper and lower dentition, specifically an Orthopantomogram (OPG). * The panoramic radiograph demonstrates normal dental structures. * There is an abnormal area on the upper right, represented by an area of radiolucent bone, corresponding to the antrum. **Key Observations** * The bone between the left upper teeth is relatively radiopaque. * There are two large arrows above the image, suggesting the need for a closer examination of this area. One of the arrows is in a left-sided position, and the other is in the right-sided position. However, only ``` For more details, view our dataset section in the [notebook here](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb#scrollTo=vITh0KVJ10qX). [PreviousChat Templates](https://docs.unsloth.ai/basics/chat-templates) [NextFinetuning from Last Checkpoint](https://docs.unsloth.ai/basics/finetuning-from-last-checkpoint) Last updated 1 month ago Was this helpful?
{ "color-scheme": "light dark", "description": "Learn how to fine-tune vision/multimodal LLMs with Unsloth", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Learn how to fine-tune vision/multimodal LLMs with Unsloth", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Vision Fine-tuning | Unsloth Documentation", "ogDescription": "Learn how to fine-tune vision/multimodal LLMs with Unsloth", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Vision Fine-tuning | Unsloth Documentation", "robots": "index, follow", "scrapeId": "6fa21c8b-f07a-4acc-9ccb-8c2392e80d08", "sourceURL": "https://docs.unsloth.ai/basics/vision-fine-tuning", "statusCode": 200, "title": "Vision Fine-tuning | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Learn how to fine-tune vision/multimodal LLMs with Unsloth", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Vision Fine-tuning | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/vision-fine-tuning", "viewport": "width=device-width, initial-scale=1" }
Qwen3, full fine-tuning & all models are now supported! 🦥

DPO (Direct Preference Optimization), ORPO (Odds Ratio Preference Optimization), PPO, and KTO reward modelling all work with Unsloth. We have Google Colab notebooks for reproducing ORPO, DPO Zephyr, KTO and SimPO:

- [ORPO notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3_(8B)-ORPO.ipynb)
- [DPO Zephyr notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Zephyr_(7B)-DPO.ipynb)
- [KTO notebook](https://colab.research.google.com/drive/1MRgGtLWuZX4ypSfGguFgC-IblTvO2ivM?usp=sharing)
- [SimPO notebook](https://colab.research.google.com/drive/1Hs5oQDovOay4mFA6Y9lQhVJ8TnbFLFh2?usp=sharing)

We're also in 🤗Hugging Face's official docs! We're on the [SFT docs](https://huggingface.co/docs/trl/main/en/sft_trainer#accelerate-fine-tuning-2x-using-unsloth) and the [DPO docs](https://huggingface.co/docs/trl/main/en/dpo_trainer#accelerate-dpo-fine-tuning-using-unsloth).

## [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/reinforcement-learning-dpo-orpo-and-kto\#dpo-code) DPO Code

Copy

```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line]
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0" # Optional: set GPU device ID

from unsloth import FastLanguageModel, PatchDPOTrainer
from unsloth import is_bfloat16_supported
PatchDPOTrainer()
import torch
from transformers import TrainingArguments
from trl import DPOTrainer

max_seq_length = 2048 # Set this to the context length you want to train with

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/zephyr-sft-bnb-4bit",
    max_seq_length = max_seq_length,
    dtype = None,
    load_in_4bit = True,
)

# Do model patching and add fast LoRA weights
model = FastLanguageModel.get_peft_model(
    model,
    r = 64,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 64,
    lora_dropout = 0, # Supports any, but = 0 is optimized
    bias = "none",    # Supports any, but = "none" is optimized
    # [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
    use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
    random_state = 3407,
    max_seq_length = max_seq_length,
)

dpo_trainer = DPOTrainer(
    model = model,
    ref_model = None,
    args = TrainingArguments(
        per_device_train_batch_size = 4,
        gradient_accumulation_steps = 8,
        warmup_ratio = 0.1,
        num_train_epochs = 3,
        fp16 = not is_bfloat16_supported(),
        bf16 = is_bfloat16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        seed = 42,
        output_dir = "outputs",
    ),
    beta = 0.1,
    train_dataset = YOUR_DATASET_HERE,
    # eval_dataset = YOUR_DATASET_HERE,
    tokenizer = tokenizer,
    max_length = 1024,
    max_prompt_length = 512,
)
dpo_trainer.train()
```

A short sketch of the `prompt` / `chosen` / `rejected` dataset format that `DPOTrainer` expects for `YOUR_DATASET_HERE` follows at the end of this page.

[PreviousTutorial: Train your own Reasoning model with GRPO](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo) [NextGemma 3: How to Run & Fine-tune](https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune)

Last updated 2 months ago

Was this helpful?
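As referenced above, `DPOTrainer` expects preference data with `prompt`, `chosen` and `rejected` columns. A toy illustration of that format (the contents here are made up purely for demonstration) that could be passed as `train_dataset`:

```python
from datasets import Dataset

# Toy preference data in the prompt / chosen / rejected format used by TRL's DPOTrainer
train_dataset = Dataset.from_dict({
    "prompt":   ["What is 2 + 2?", "Name a prime number."],
    "chosen":   ["2 + 2 equals 4.", "7 is a prime number."],
    "rejected": ["2 + 2 equals 5.", "8 is a prime number."],
})
```

In practice you would load a real preference dataset (for example with `datasets.load_dataset`) rather than building one by hand.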
{ "color-scheme": "light dark", "description": "To use the reward modelling functions for DPO, ORPO or KTO with Unsloth, follow the steps below:", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "To use the reward modelling functions for DPO, ORPO or KTO with Unsloth, follow the steps below:", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Reinforcement Learning - DPO, ORPO & KTO | Unsloth Documentation", "ogDescription": "To use the reward modelling functions for DPO, ORPO or KTO with Unsloth, follow the steps below:", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Reinforcement Learning - DPO, ORPO & KTO | Unsloth Documentation", "robots": "index, follow", "scrapeId": "6fafcb3b-e48c-44a2-a1d4-0ea127293bc8", "sourceURL": "https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/reinforcement-learning-dpo-orpo-and-kto", "statusCode": 200, "title": "Reinforcement Learning - DPO, ORPO & KTO | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "To use the reward modelling functions for DPO, ORPO or KTO with Unsloth, follow the steps below:", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Reinforcement Learning - DPO, ORPO & KTO | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/reinforcement-learning-dpo-orpo-and-kto", "viewport": "width=device-width, initial-scale=1" }
Qwen3, full fine-tuning & all models are now supported! 🦥

### [Direct link to heading](https://docs.unsloth.ai/basics/chat-templates\#list-of-colab-chat-template-notebooks) List of Colab chat template notebooks:

- [Conversational](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb)
- [ChatML](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3_(8B)-Ollama.ipynb)
- [Ollama](https://colab.research.google.com/drive/1WZDi7APtQ9VsvOrQSSC5DDtxq159j8iZ?usp=sharing)
- [Text Classification](https://github.com/timothelaborie/text_classification_scripts/blob/main/unsloth_classification.ipynb) by Timotheeee
- [Multiple Datasets](https://colab.research.google.com/drive/1njCCbE1YVal9xC83hjdo2hiGItpY_D6t?usp=sharing) by Flail

## [Direct link to heading](https://docs.unsloth.ai/basics/chat-templates\#multi-turn-conversations) Multi turn conversations

A big issue, if you didn't notice, is that the Alpaca dataset is single turn, whereas ChatGPT is interactive and can be talked to over multiple turns. For example, the left is what we want, but the right, which is the Alpaca dataset, only provides single-turn conversations. We want the finetuned language model to learn how to hold multi-turn conversations just like ChatGPT.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FWCAN7bYUt6QWwCWUxisL%252Fdiff.png%3Falt%3Dmedia%26token%3D29821fd9-2181-4d1d-8b93-749b69bcf400&width=768&dpr=4&quality=100&sign=d4f1b675&sv=2)

So we introduced the `conversation_extension` parameter, which essentially selects some random rows in your single-turn dataset and merges them into one conversation! For example, if you set it to 3, we randomly select 3 rows and merge them into 1. Setting it too high can make training slower, but could make your chatbot and final finetune much better!

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FWi1rRNBFC2iDmCvSJsZt%252Fcombine.png%3Falt%3Dmedia%26token%3Dbef37a55-b272-4be3-89b5-9767c219a380&width=768&dpr=4&quality=100&sign=ae98ba1b&sv=2)

Then set `output_column_name` to the prediction / output column. For the Alpaca dataset, it would be the output column. We then use the `standardize_sharegpt` function to put the dataset into the correct format for finetuning. Always call this!

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FE75C4Y848VNF6luLuPRR%252Fimage.png%3Falt%3Dmedia%26token%3Daac1d79b-ecca-4e56-939d-d97dcbbf30eb&width=768&dpr=4&quality=100&sign=d48e3c76&sv=2)

## [Direct link to heading](https://docs.unsloth.ai/basics/chat-templates\#customizable-chat-templates) Customizable Chat Templates

We can now specify the chat template for finetuning itself.
The very famous Alpaca format is below: ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F8SWcsgH47Uhkm0IclDs5%252Fimage.png%3Falt%3Dmedia%26token%3Dfa03d7aa-d568-468d-9884-18e925a0551f&width=768&dpr=4&quality=100&sign=dff54efb&sv=2) But remember we said this was a bad idea because ChatGPT style finetunes require only 1 prompt? Since we successfully merged all dataset columns into 1 using Unsloth, we essentially can create the below style chat template with 1 input column (instruction) and 1 output: ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FyuMpSLIpPLEbcdh970UJ%252Fimage.png%3Falt%3Dmedia%26token%3D87c4d5e1-accf-4847-9971-63e3a47b4a5f&width=768&dpr=4&quality=100&sign=728095c1&sv=2) We just require you must put a `{INPUT}` field for the instruction and an `{OUTPUT}` field for the model's output field. We in fact allow an optional `{SYSTEM}` field as well which is useful to customize a system prompt just like in ChatGPT. For example, below are some cool examples which you can customize the chat template to be: ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252Fi6B8IP1OZmmxBYr6k4W3%252Fimage.png%3Falt%3Dmedia%26token%3D061d1b4c-4b22-4d1b-a423-8d4c15e40efa&width=768&dpr=4&quality=100&sign=dd8c7435&sv=2) For the ChatML format used in OpenAI models: ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F3OEJaXooJCICJR6DJIJP%252Fimage.png%3Falt%3Dmedia%26token%3D4fa85cf1-463d-4090-a838-591c4f94efea&width=768&dpr=4&quality=100&sign=a1f23ff9&sv=2) Or you can use the Llama-3 template itself (which only functions by using the instruct version of Llama-3): We in fact allow an optional `{SYSTEM}` field as well which is useful to customize a system prompt just like in ChatGPT. 
![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F4qQXd0hIvh9fJNO2cJ04%252Fimage.png%3Falt%3Dmedia%26token%3D614b9200-7375-47f5-ac15-ce9aa891ede4&width=768&dpr=4&quality=100&sign=c9811100&sv=2) Or in the Titanic prediction task where you had to predict if a passenger died or survived in this Colab notebook which includes CSV and Excel uploading: [https://colab.research.google.com/drive/1VYkncZMfGFkeCEgN2IzbZIKEDkyQuJAS?usp=sharing](https://colab.research.google.com/drive/1VYkncZMfGFkeCEgN2IzbZIKEDkyQuJAS?usp=sharing) ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F1iQitC3PwcuV0LpHEhdP%252Fimage.png%3Falt%3Dmedia%26token%3Dd117f681-afb0-4d5f-b534-f51013fe772a&width=768&dpr=4&quality=100&sign=20577629&sv=2) ## [Direct link to heading](https://docs.unsloth.ai/basics/chat-templates\#applying-chat-templates-with-unsloth) Applying Chat Templates with Unsloth For datasets that usually follow the common chatml format, the process of preparing the dataset for training or finetuning, consists of four simple steps: - Check the chat templates that Unsloth currently supports: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] from unsloth.chat_templates import CHAT_TEMPLATES print(list(CHAT_TEMPLATES.keys())) ``` This will print out the list of templates currently supported by Unsloth. Here is an example output: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] ['unsloth', 'zephyr', 'chatml', 'mistral', 'llama', 'vicuna', 'vicuna_old', 'vicuna old', 'alpaca', 'gemma', 'gemma_chatml', 'gemma2', 'gemma2_chatml', 'llama-3', 'llama3', 'phi-3', 'phi-35', 'phi-3.5', 'llama-3.1', 'llama-31', 'llama-3.2', 'llama-3.3', 'llama-32', 'llama-33', 'qwen-2.5', 'qwen-25', 'qwen25', 'qwen2.5', 'phi-4', 'gemma-3', 'gemma3'] ``` - Use `get_chat_template` to apply the right chat template to your tokenizer: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] from unsloth.chat_templates import get_chat_template tokenizer = get_chat_template( tokenizer, chat_template = "gemma-3", # change this to the right chat_template name ) ``` - Define your formatting function. Here's an example: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] def formatting_prompts_func(examples): convos = examples["conversations"] texts = [tokenizer.apply_chat_template(convo, tokenize = False, add_generation_prompt = False) for convo in convos] return { "text" : texts, } ``` This function loops through your dataset applying the chat template you defined to each sample. - Finally, let's load the dataset and apply the required modifications to our dataset: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] # Import and load dataset from datasets import load_dataset dataset = load_dataset("repo_name/dataset_name", split = "train") # Apply the formatting function to your dataset using the map method dataset = dataset.map(formatting_prompts_func, batched = True,) ``` If your dataset uses the ShareGPT format with "from"/"value" keys instead of the ChatML "role"/"content" format, you can use the `standardize_sharegpt` function to convert it first. 
The revised code will now look as follows: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] # Import dataset from datasets import load_dataset dataset = load_dataset("mlabonne/FineTome-100k", split = "train") # Convert your dataset to the "role"/"content" format if necessary from unsloth.chat_templates import standardize_sharegpt dataset = standardize_sharegpt(dataset) # Apply the formatting function to your dataset using the map method dataset = dataset.map(formatting_prompts_func, batched = True,) ``` ## [Direct link to heading](https://docs.unsloth.ai/basics/chat-templates\#more-information) More Information Assuming your dataset is a list of list of dictionaries like the below: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] [\ [{'from': 'human', 'value': 'Hi there!'},\ {'from': 'gpt', 'value': 'Hi how can I help?'},\ {'from': 'human', 'value': 'What is 2+2?'}],\ [{'from': 'human', 'value': 'What's your name?'},\ {'from': 'gpt', 'value': 'I'm Daniel!'},\ {'from': 'human', 'value': 'Ok! Nice!'},\ {'from': 'gpt', 'value': 'What can I do for you?'},\ {'from': 'human', 'value': 'Oh nothing :)'},],\ ] ``` You can use our `get_chat_template` to format it. Select `chat_template` to be any of `zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old, unsloth`, and use `mapping` to map the dictionary values `from`, `value` etc. `map_eos_token` allows you to map `<|im_end|>` to EOS without any training. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] from unsloth.chat_templates import get_chat_template tokenizer = get_chat_template( tokenizer, chat_template = "chatml", # Supports zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old, unsloth mapping = {"role" : "from", "content" : "value", "user" : "human", "assistant" : "gpt"}, # ShareGPT style map_eos_token = True, # Maps <|im_end|> to </s> instead ) def formatting_prompts_func(examples): convos = examples["conversations"] texts = [tokenizer.apply_chat_template(convo, tokenize = False, add_generation_prompt = False) for convo in convos] return { "text" : texts, } pass from datasets import load_dataset dataset = load_dataset("philschmid/guanaco-sharegpt-style", split = "train") dataset = dataset.map(formatting_prompts_func, batched = True,) ``` You can also make your own custom chat templates! For example our internal chat template we use is below. You must pass in a `tuple` of `(custom_template, eos_token)` where the `eos_token` must be used inside the template. 
Copy

```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line]
unsloth_template = \
    "{{ bos_token }}"\
    "{{ 'You are a helpful assistant to the user\n' }}"\
    "{% for message in messages %}"\
        "{% if message['role'] == 'user' %}"\
            "{{ '>>> User: ' + message['content'] + '\n' }}"\
        "{% elif message['role'] == 'assistant' %}"\
            "{{ '>>> Assistant: ' + message['content'] + eos_token + '\n' }}"\
        "{% endif %}"\
    "{% endfor %}"\
    "{% if add_generation_prompt %}"\
        "{{ '>>> Assistant: ' }}"\
    "{% endif %}"
unsloth_eos_token = "eos_token"

tokenizer = get_chat_template(
    tokenizer,
    chat_template = (unsloth_template, unsloth_eos_token,), # You must provide a template and EOS token
    mapping = {"role" : "from", "content" : "value", "user" : "human", "assistant" : "gpt"}, # ShareGPT style
    map_eos_token = True, # Maps <|im_end|> to </s> instead
)
```

[PreviousContinued Pretraining](https://docs.unsloth.ai/basics/continued-pretraining) [NextVision Fine-tuning](https://docs.unsloth.ai/basics/vision-fine-tuning)

Last updated 23 days ago

Was this helpful?
{ "color-scheme": "light dark", "description": "Learn the basics and customization options for chat templates including the Alpaca format.", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Learn the basics and customization options for chat templates including the Alpaca format.", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Chat Templates | Unsloth Documentation", "ogDescription": "Learn the basics and customization options for chat templates including the Alpaca format.", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Chat Templates | Unsloth Documentation", "robots": "index, follow", "scrapeId": "725bb18b-fc77-423b-954b-4e6db0fb60fb", "sourceURL": "https://docs.unsloth.ai/basics/chat-templates", "statusCode": 200, "title": "Chat Templates | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Learn the basics and customization options for chat templates including the Alpaca format.", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Chat Templates | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/chat-templates", "viewport": "width=device-width, initial-scale=1" }
Qwen3, full fine-tuning & all models are now supported! 🦥

- The [text completion notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_(7B)-Text_Completion.ipynb) is for continued pretraining on raw text.
- The [continued pretraining notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-CPT.ipynb) is for learning another language.

You can read more about continued pretraining and our release in our [blog post](https://unsloth.ai/blog/contpretraining).

## [Direct link to heading](https://docs.unsloth.ai/basics/continued-pretraining\#what-is-continued-pretraining) What is Continued Pretraining?

Continued or continual pretraining (CPT) is necessary to “steer” the language model to understand new domains of knowledge, or out-of-distribution domains. Base models like Llama-3 8B or Mistral 7B are first pretrained on gigantic datasets of trillions of tokens (Llama-3, for example, was trained on 15 trillion tokens). But sometimes these models have not been well trained on other languages, or on domain-specific text such as law, medicine or other areas. So continued pretraining (CPT) is necessary to make the language model learn new tokens or datasets.

## [Direct link to heading](https://docs.unsloth.ai/basics/continued-pretraining\#advanced-features) Advanced Features:

### [Direct link to heading](https://docs.unsloth.ai/basics/continued-pretraining\#loading-lora-adapters-for-continued-finetuning) Loading LoRA adapters for continued finetuning

If you saved a LoRA adapter through Unsloth, you can also continue training using your LoRA weights. Note that the optimizer state will be reset. To also load optimizer states and continue finetuning, see the next section.

Copy

```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line]
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "LORA_MODEL_NAME",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
trainer = Trainer(...)
trainer.train()
```

### [Direct link to heading](https://docs.unsloth.ai/basics/continued-pretraining\#continued-pretraining-and-finetuning-the-lm_head-and-embed_tokens-matrices) Continued Pretraining & Finetuning the `lm_head` and `embed_tokens` matrices

Add `lm_head` and `embed_tokens` to the target modules. On Colab you will sometimes run out of memory for Llama-3 8B; if so, just add `lm_head`.

Copy

```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line]
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",
                      "lm_head", "embed_tokens",],
    lora_alpha = 16,
)
```

Then use 2 different learning rates - a 2-10x smaller one for the `lm_head` or `embed_tokens` like so:

Copy

```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line]
from unsloth import UnslothTrainer, UnslothTrainingArguments

trainer = UnslothTrainer(
    ....
    args = UnslothTrainingArguments(
        ....
        learning_rate = 5e-5,
        embedding_learning_rate = 5e-6, # 2-10x smaller than learning_rate
    ),
)
```

A fuller sketch of this two-learning-rate setup is shown at the end of this page.

[PreviousText-to-Speech (TTS) Fine-tuning](https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning) [NextChat Templates](https://docs.unsloth.ai/basics/chat-templates)

Last updated 1 month ago

Was this helpful?
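As referenced above, here is a fuller sketch of the two-learning-rate setup, loosely based on the continued pretraining notebook. It assumes `model`, `tokenizer`, a `dataset` with a `"text"` column, and `max_seq_length` are already defined; treat the hyperparameters as placeholders:

```python
from unsloth import UnslothTrainer, UnslothTrainingArguments, is_bfloat16_supported

trainer = UnslothTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    args = UnslothTrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 8,
        warmup_ratio = 0.1,
        num_train_epochs = 1,
        learning_rate = 5e-5,            # main LoRA learning rate
        embedding_learning_rate = 5e-6,  # 2-10x smaller, for lm_head / embed_tokens
        fp16 = not is_bfloat16_supported(),
        bf16 = is_bfloat16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        output_dir = "outputs",
    ),
)
trainer.train()
```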
{ "color-scheme": "light dark", "description": "AKA as Continued Finetuning. Unsloth allows you to continually pretrain so a model can learn a new language.", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "AKA as Continued Finetuning. Unsloth allows you to continually pretrain so a model can learn a new language.", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Continued Pretraining | Unsloth Documentation", "ogDescription": "AKA as Continued Finetuning. Unsloth allows you to continually pretrain so a model can learn a new language.", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Continued Pretraining | Unsloth Documentation", "robots": "index, follow", "scrapeId": "7e8110e4-af15-46cf-893d-aad93c5b0425", "sourceURL": "https://docs.unsloth.ai/basics/continued-pretraining", "statusCode": 200, "title": "Continued Pretraining | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "AKA as Continued Finetuning. Unsloth allows you to continually pretrain so a model can learn a new language.", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Continued Pretraining | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/continued-pretraining", "viewport": "width=device-width, initial-scale=1" }
Qwen3, full fine-tuning & all models are now supported! 🦥

## [Direct link to heading](https://docs.unsloth.ai/basics/running-and-saving-models/troubleshooting\#running-in-unsloth-works-well-but-after-exporting-and-running-on-other-platforms-the-results-are-poo) Running in Unsloth works well, but after exporting & running on other platforms, the results are poor

You might sometimes encounter an issue where your model runs and produces good results in Unsloth, but when you use it on another platform like Ollama or vLLM, the results are poor, or you might get gibberish, endless/infinite generations, or repeated outputs.

- The most common cause of this issue is using an incorrect chat template. It's essential to use the SAME chat template that was used when training the model in Unsloth and when you later run it in another framework, such as llama.cpp or Ollama. When inferencing from a saved model, it's crucial to apply the correct template.
- It might also be because your inference engine adds an unnecessary "start of sequence" token (or, conversely, is missing one), so make sure you check both possibilities!

## [Direct link to heading](https://docs.unsloth.ai/basics/running-and-saving-models/troubleshooting\#saving-to-safetensors-not-bin-format-in-colab) Saving to `safetensors`, not `bin` format in Colab

We save to `.bin` in Colab because it's roughly 4x faster, but you can set `safe_serialization = None` to force saving to `.safetensors`: use `model.save_pretrained(..., safe_serialization = None)` or `model.push_to_hub(..., safe_serialization = None)`.

## [Direct link to heading](https://docs.unsloth.ai/basics/running-and-saving-models/troubleshooting\#if-saving-to-gguf-or-vllm-16bit-crashes) If saving to GGUF or vLLM 16bit crashes

You can try reducing the maximum GPU usage during saving by changing `maximum_memory_usage`. The default is `model.save_pretrained(..., maximum_memory_usage = 0.75)`. Reduce it to, say, 0.5 to use 50% of peak GPU memory or lower. This can reduce OOM crashes during saving. Both saving options are shown together in the short sketch at the end of this page.

[PreviousSaving to VLLM](https://docs.unsloth.ai/basics/running-and-saving-models/saving-to-vllm) [NextInference](https://docs.unsloth.ai/basics/running-and-saving-models/inference)

Last updated 2 months ago

Was this helpful?
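A minimal sketch combining the two saving options above (it assumes `model` is a model already loaded or trained with Unsloth; both keyword arguments are the ones quoted in the text):

```python
# Force .safetensors instead of .bin when saving in Colab
model.save_pretrained("lora_model", safe_serialization = None)

# Lower the peak GPU memory used while saving to reduce OOM crashes
model.save_pretrained("lora_model", maximum_memory_usage = 0.5)
```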
{ "color-scheme": "light dark", "description": "If you're experiencing issues when running or saving your model.", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "If you're experiencing issues when running or saving your model.", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Troubleshooting | Unsloth Documentation", "ogDescription": "If you're experiencing issues when running or saving your model.", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Troubleshooting | Unsloth Documentation", "robots": "index, follow", "scrapeId": "757579a1-f0b4-4328-83be-88e0d01a2f3e", "sourceURL": "https://docs.unsloth.ai/basics/running-and-saving-models/troubleshooting", "statusCode": 200, "title": "Troubleshooting | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "If you're experiencing issues when running or saving your model.", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Troubleshooting | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/running-and-saving-models/troubleshooting", "viewport": "width=device-width, initial-scale=1" }
Qwen3, full fine-tuning & all models are now supported! 🦥 [![Cover](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252Fz30qbVABdBlqEnKatTf1%252Fqwen3.png%3Falt%3Dmedia%26token%3Defd4bb30-4926-4272-b15d-91c0a0fc5ac5&width=245&dpr=4&quality=100&sign=c6de3b4f&sv=2)](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune) [Qwen3](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune) [![Cover](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FLDayziE4Q7Gc52BMQfd4%252Fphi4%2520reasoning2.png%3Falt%3Dmedia%26token%3Df3db5f93-dde0-49c3-97ed-cbf596d8d437&width=245&dpr=4&quality=100&sign=2afe2d91&sv=2)](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/phi-4-reasoning-how-to-run-and-fine-tune) [Phi-4 reasoning](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/phi-4-reasoning-how-to-run-and-fine-tune) [![Cover](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FdiwpvMM4VA4oZqaANJOE%252Fdynamic%2520v2%2520with%2520unsloth.png%3Falt%3Dmedia%26token%3Dadc64cb6-2b52-4565-a44e-ac4acbd4247d&width=245&dpr=4&quality=100&sign=95dfb159&sv=2)](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs) [Dynamic 2.0 GGUFs](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs) [![Cover](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F8RZoiqWL4cXqTFwTAbg8%252Fllama%25204%2520only.png%3Falt%3Dmedia%26token%3Dc6b0dd0e-b817-482b-9b8e-05d017a72319&width=245&dpr=4&quality=100&sign=587751ee&sv=2)](https://docs.unsloth.ai/basics/llama-4-how-to-run-and-fine-tune) [Llama 4](https://docs.unsloth.ai/basics/llama-4-how-to-run-and-fine-tune) [![Cover](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FuvkQHGJWBVejGmQDLMkz%252Fv30324.png%3Falt%3Dmedia%26token%3D941a8bdd-c5af-4144-9126-fa656335aba2&width=245&dpr=4&quality=100&sign=1305effb&sv=2)](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-v3-0324-how-to-run-locally) [DeepSeek-V3-0324](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-v3-0324-how-to-run-locally) [![Cover](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FML1v35ELOxO0AxBpXWCn%252Fgemma%25203%2520logo.png%3Falt%3Dmedia%26token%3D04fefb63-973d-4b36-a2f6-77414ddf8003&width=245&dpr=4&quality=100&sign=9f0d9b98&sv=2)](https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune) [Gemma 3](https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune) 
[![Cover](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FhE7P8M1nQaMEkrLiaRj6%252Fqwq%2520logo%2520only.png%3Falt%3Dmedia%26token%3Dc42d1143-dbf8-425e-b1e2-7d9700c02816&width=245&dpr=4&quality=100&sign=5a0edb2&sv=2)](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/qwq-32b-how-to-run-effectively) [QwQ-32B](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/qwq-32b-how-to-run-effectively) [![Cover](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FEDGoGKoQdMunfGToescN%252Fdeepseek%2520r1.png%3Falt%3Dmedia%26token%3Df2bafaeb-9cd3-4f9d-8c09-b645e72d7fe7&width=245&dpr=4&quality=100&sign=67995ef5&sv=2)](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-r1-how-to-run-locally) [DeepSeek-R1](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-r1-how-to-run-locally) [![Cover](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FZdVLuoSsSD66ogyAO28v%252Flong%2520context%2520reasoning%2520long.png%3Falt%3Dmedia%26token%3Dd0e150fd-d208-4b50-9936-7d084b84efbc&width=245&dpr=4&quality=100&sign=9f3ac22c&sv=2)](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo) [GRPO (Reasoning)](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo) [![Cover](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FeLYVuPYGC1Giu97E8zWi%252Fllama%25203logo.png%3Falt%3Dmedia%26token%3D2127b873-32cb-4a4a-9593-92a179b46c3b&width=245&dpr=4&quality=100&sign=f48b38a2&sv=2)](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama) [Llama 3](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama) [PreviousDatasets Guide](https://docs.unsloth.ai/basics/datasets-guide) [NextPhi-4 Reasoning: How to Run & Fine-tune](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/phi-4-reasoning-how-to-run-and-fine-tune) Last updated 10 days ago Was this helpful?
{ "color-scheme": "light dark", "description": "Learn How To Run & Fine-tune models 100% locally with Unsloth:", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Learn How To Run & Fine-tune models 100% locally with Unsloth:", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Tutorials: How To Fine-tune & Run LLMs | Unsloth Documentation", "ogDescription": "Learn How To Run & Fine-tune models 100% locally with Unsloth:", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Tutorials: How To Fine-tune & Run LLMs | Unsloth Documentation", "robots": "index, follow", "scrapeId": "99c0b17a-3d79-444a-92e6-ca68b8b51ede", "sourceURL": "https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms", "statusCode": 200, "title": "Tutorials: How To Fine-tune & Run LLMs | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Learn How To Run & Fine-tune models 100% locally with Unsloth:", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Tutorials: How To Fine-tune & Run LLMs | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms", "viewport": "width=device-width, initial-scale=1" }
# Windows Installation

See how to install Unsloth on Windows with or without WSL.

## Method #1 - Windows directly:

Python 3.13 does not support Unsloth. Use 3.12, 3.11 or 3.10.

Need help or experiencing an error? Ask on our [GitHub Discussions](https://github.com/unslothai/unsloth/discussions/1849) thread for Windows support!

**1. Install the NVIDIA video driver**

You should install the latest version of your GPU's driver. Download drivers here: [NVIDIA GPU Drivers](https://www.nvidia.com/Download/index.aspx)

**2. Install Visual Studio C++**

You will need Visual Studio with C++ installed. By default, C++ is not installed with Visual Studio, so make sure you select all of the C++ options. Also select the options for the Windows 10/11 SDK.

- Launch the installer here: [Visual Studio Community Edition](https://visualstudio.microsoft.com/vs/community/)
- In the installer, navigate to individual components and select all of the options listed here:
  - **.NET Framework 4.8 SDK**
  - **.NET Framework 4.7.2 targeting pack**
  - **C# and Visual Basic Roslyn compilers**
  - **MSBuild**
  - **MSVC v143 - VS 2022 C++ x64/x86 build tools**
  - **C++ 2022 Redistributable Update**
  - **C++ CMake tools for Windows**
  - **C++/CLI support for v143 build tools (Latest)**
  - **MSBuild support for LLVM (clang-cl) toolset**
  - **C++ Clang Compiler for Windows (19.1.1)**
  - **Windows 11 SDK (10.0.22621.0)**
  - **Windows Universal CRT SDK**
  - **C++ 2022 Redistributable MSMs**

**Easier method:** alternatively, open an elevated Command Prompt or PowerShell:

- Search for "cmd" or "PowerShell", right-click it, and choose "Run as administrator."
- Paste and run this command (update the Visual Studio path if necessary):

```
"C:\Program Files (x86)\Microsoft Visual Studio\Installer\vs_installer.exe" modify ^
--installPath "C:\Program Files\Microsoft Visual Studio\2022\Community" ^
--add Microsoft.Net.Component.4.8.SDK ^
--add Microsoft.Net.Component.4.7.2.TargetingPack ^
--add Microsoft.VisualStudio.Component.Roslyn.Compiler ^
--add Microsoft.Component.MSBuild ^
--add Microsoft.VisualStudio.Component.VC.Tools.x86.x64 ^
--add Microsoft.VisualStudio.Component.VC.Redist.14.Latest ^
--add Microsoft.VisualStudio.Component.VC.CMake.Project ^
--add Microsoft.VisualStudio.Component.VC.CLI.Support ^
--add Microsoft.VisualStudio.Component.VC.Llvm.Clang ^
--add Microsoft.VisualStudio.ComponentGroup.ClangCL ^
--add Microsoft.VisualStudio.Component.Windows11SDK.22621 ^
--add Microsoft.VisualStudio.Component.Windows10SDK.19041 ^
--add Microsoft.VisualStudio.Component.UniversalCRT.SDK ^
--add Microsoft.VisualStudio.Component.VC.Redist.MSM
```

**3. Install Python and the CUDA Toolkit**

Follow the instructions to install the [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit-archive). Then install Miniconda (which includes Python) here: [https://www.anaconda.com/docs/getting-started/miniconda/install](https://www.anaconda.com/docs/getting-started/miniconda/install#quickstart-install-instructions)

**4. Install PyTorch**

You will need the version of PyTorch that is compatible with your CUDA drivers, so make sure to select it carefully.
[Install PyTorch](https://pytorch.org/get-started/locally/)

**5. Install Unsloth**

Open the Conda command prompt or your terminal with Python and run:

```
pip install "unsloth[windows] @ git+https://github.com/unslothai/unsloth.git"
```

If you're using GRPO or plan to use vLLM, note that vLLM currently does not support Windows directly, only via WSL or Linux.

### Notes

To run Unsloth directly on Windows:

- Install Triton from this Windows fork and follow the instructions [here](https://github.com/woct0rdho/triton-windows) (be aware that the Windows fork requires PyTorch >= 2.4 and CUDA 12).
- In the SFTTrainer, set `dataset_num_proc=1` to avoid a crashing issue:

```python
trainer = SFTTrainer(
    dataset_num_proc=1,
    ...
)
```

### Advanced/Troubleshooting

For **advanced installation instructions** or if you see weird errors during installation:

1. Install `torch` and `triton`. Go to https://pytorch.org to install them, for example via `pip install torch torchvision torchaudio triton`.
2. Confirm that CUDA is installed correctly. Try `nvcc`. If that fails, you need to install `cudatoolkit` or the CUDA drivers.
3. Install `xformers` manually. You can try installing `vllm` and seeing if `vllm` succeeds. Check whether `xformers` succeeded with `python -m xformers.info`. Go to https://github.com/facebookresearch/xformers. Another option is to install `flash-attn` for Ampere GPUs.
4. Double check that your versions of Python, CUDA, CUDNN, `torch`, `triton`, and `xformers` are compatible with one another. The [PyTorch Compatibility Matrix](https://github.com/pytorch/pytorch/blob/main/RELEASE.md#release-compatibility-matrix) may be useful.
5. Finally, install `bitsandbytes` and check it with `python -m bitsandbytes`.

* * *

## Method #2 - Windows using PowerShell:

#### Step 1: Install Prerequisites

1. **Install the NVIDIA CUDA Toolkit**:
   - Download and install the appropriate version of the **NVIDIA CUDA Toolkit** from [CUDA Downloads](https://developer.nvidia.com/cuda-downloads).
   - Reboot your system after installation if prompted.
   - **Note**: No additional setup is required after installation for Unsloth.
2. **Install Microsoft C++ Build Tools**:
   - Download and install **Microsoft Build Tools for Visual Studio** from the [official website](https://visualstudio.microsoft.com/visual-cpp-build-tools/).
   - During installation, select the **C++ build tools** workload. Ensure the **MSVC compiler toolset** is included.
3. **Set Environment Variables for the C++ Compiler**:
   - Open the **System Properties** window (search for "Environment Variables" in the Start menu).
   - Click **"Environment Variables…"**.
   - Add or update the following under **System variables**:
     - **CC**: Path to the `cl.exe` C++ compiler.
Example (adjust if your version differs):

```
C:\Program Files\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\bin\Hostx64\x64\cl.exe
```

   - **CXX**: Same path as `CC`.
   - Click **OK** to save changes.
   - Verify: open a new terminal and type `cl`. It should show version info.

4. **Install Conda**:
   1. Download and install **Miniconda** from the [official website](https://docs.anaconda.com/miniconda/install/#quick-command-line-install).
   2. Follow the installation instructions on the website.
   3. To check whether `conda` is already installed, run `conda` in your PowerShell.

#### Step 2: Run the Unsloth Installation Script

1. **Download the** [**unsloth\_windows.ps1**](https://github.com/unslothai/notebooks/blob/main/unsloth_windows.ps1) **PowerShell script through this link.**
2. **Open PowerShell as Administrator**:
   - Right-click Start and select **"Windows PowerShell (Admin)"**.
3. **Navigate to the script's location** using `cd`:

```
cd path\to\script\folder
```

4. **Run the script**:

```
powershell.exe -ExecutionPolicy Bypass -File .\unsloth_windows.ps1
```

#### Step 3: Using Unsloth

Activate the environment after the installation completes:

```
conda activate unsloth_env
```

**Unsloth and its dependencies are now ready!**

* * *

## Method #3 - Windows via WSL:

WSL is the Windows Subsystem for Linux.

1. Install Python through [Python's official site](https://www.python.org/downloads/windows/).
2. Start WSL (it should already be preinstalled). Open Command Prompt as admin, then run:

```
wsl -d ubuntu
```

   Optional: if WSL is not preinstalled, go to the Microsoft Store, search for "Ubuntu", and install the app named Ubuntu (this is WSL). Run it and continue from there.

3. Update WSL:

```
sudo apt update && sudo apt upgrade -y
```

4. Install pip:

```
sudo apt install python3-pip
```

5. Install Unsloth:

```
pip install unsloth
```

6. Optional: install Jupyter Notebook to run in a Colab-like environment:

```
pip3 install notebook
```

7. Launch Jupyter Notebook:

```
jupyter notebook
```

8. Download any Colab notebook from Unsloth, import it into your Jupyter Notebook, adjust the parameters as needed, and execute it.
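Whichever method you used, a quick sanity check is to load a small model in Python inside your new environment. This is a minimal sketch: the model name below is just an example of a small Unsloth upload and is not a required choice, and the first run will download the weights.

```python
# Minimal install check (run inside your WSL / conda environment).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.2-1B-Instruct",  # example small model; any model works
    max_seq_length = 2048,
    load_in_4bit = True,   # keeps VRAM usage low for the test
)
print("Unsloth loaded:", type(model).__name__)
```

If this runs without errors, your drivers, PyTorch, Triton, and Unsloth are all talking to each other correctly.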
{ "color-scheme": "light dark", "description": "See how to install Unsloth on Windows with or without WSL.", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "See how to install Unsloth on Windows with or without WSL.", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Windows Installation | Unsloth Documentation", "ogDescription": "See how to install Unsloth on Windows with or without WSL.", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Windows Installation | Unsloth Documentation", "robots": "index, follow", "scrapeId": "a99ca66d-cea5-4c88-8453-fd1671afbef3", "sourceURL": "https://docs.unsloth.ai/get-started/installing-+-updating/windows-installation", "statusCode": 200, "title": "Windows Installation | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "See how to install Unsloth on Windows with or without WSL.", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Windows Installation | Unsloth Documentation", "url": "https://docs.unsloth.ai/get-started/installing-+-updating/windows-installation", "viewport": "width=device-width, initial-scale=1" }
# Updating

To update or use an old version of Unsloth, follow the steps below.

## Standard Updating (recommended):

```
pip install --upgrade unsloth unsloth_zoo
```

### Updating without dependency updates:

```
pip install --upgrade --force-reinstall --no-cache-dir --no-deps git+https://github.com/unslothai/unsloth.git
pip install --upgrade --force-reinstall --no-cache-dir --no-deps git+https://github.com/unslothai/unsloth-zoo.git
```

## To use an old version of Unsloth:

```
pip install --force-reinstall --no-cache-dir --no-deps unsloth==2025.1.5
```

`2025.1.5` is one example of an older Unsloth version. Change it to a specific release listed on our [GitHub here](https://github.com/unslothai/unsloth/releases).
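After updating or pinning a version, it can be handy to confirm exactly which versions ended up installed. This is a small sketch using Python's standard `importlib.metadata` (not an Unsloth API), so it works regardless of how the packages were installed:

```python
# Check which Unsloth versions are currently installed.
from importlib.metadata import version

print("unsloth    :", version("unsloth"))
print("unsloth_zoo:", version("unsloth_zoo"))
```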
{ "color-scheme": "light dark", "description": "To update or use an old version of Unsloth, follow the steps below:", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "To update or use an old version of Unsloth, follow the steps below:", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Updating | Unsloth Documentation", "ogDescription": "To update or use an old version of Unsloth, follow the steps below:", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Updating | Unsloth Documentation", "robots": "index, follow", "scrapeId": "a9cc6f20-c874-426c-bc4d-1870504ca7e0", "sourceURL": "https://docs.unsloth.ai/get-started/installing-+-updating/updating", "statusCode": 200, "title": "Updating | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "To update or use an old version of Unsloth, follow the steps below:", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Updating | Unsloth Documentation", "url": "https://docs.unsloth.ai/get-started/installing-+-updating/updating", "viewport": "width=device-width, initial-scale=1" }
# Running & Saving Models

Learn how to save your finetuned model so you can run it in your favorite inference engine. You can also run your fine-tuned models using [Unsloth's 2x faster inference](https://docs.unsloth.ai/basics/running-and-saving-models/inference).

- [Saving to GGUF](https://docs.unsloth.ai/basics/running-and-saving-models/saving-to-gguf)
- [Saving to Ollama](https://docs.unsloth.ai/basics/running-and-saving-models/saving-to-ollama)
- [Saving to VLLM](https://docs.unsloth.ai/basics/running-and-saving-models/saving-to-vllm)
- [Troubleshooting](https://docs.unsloth.ai/basics/running-and-saving-models/troubleshooting)
- [Inference](https://docs.unsloth.ai/basics/running-and-saving-models/inference)
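As a quick orientation before the per-engine guides, here is a minimal sketch of running a fine-tuned model with Unsloth's own inference path. It assumes you saved a LoRA adapter to a folder called `lora_model` (the folder name is just a placeholder), and follows the same `FastLanguageModel` pattern used in our notebooks:

```python
from unsloth import FastLanguageModel

# Load the fine-tuned model (a local LoRA adapter folder or a Hugging Face repo name).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "lora_model",   # placeholder path to your saved adapter
    max_seq_length = 2048,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)  # switch Unsloth into its faster inference mode

messages = [{"role": "user", "content": "Write a haiku about sloths."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt = True, return_tensors = "pt"
).to(model.device)

outputs = model.generate(input_ids = inputs, max_new_tokens = 128)
print(tokenizer.decode(outputs[0], skip_special_tokens = True))
```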
{ "color-scheme": "light dark", "description": "Learn how to save your finetuned model so you can run it in your favorite inference engine.", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Learn how to save your finetuned model so you can run it in your favorite inference engine.", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Running & Saving Models | Unsloth Documentation", "ogDescription": "Learn how to save your finetuned model so you can run it in your favorite inference engine.", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Running & Saving Models | Unsloth Documentation", "robots": "index, follow", "scrapeId": "b3e6414a-636e-4566-b9c7-36ddddb3ea60", "sourceURL": "https://docs.unsloth.ai/basics/running-and-saving-models", "statusCode": 200, "title": "Running & Saving Models | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Learn how to save your finetuned model so you can run it in your favorite inference engine.", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Running & Saving Models | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/running-and-saving-models", "viewport": "width=device-width, initial-scale=1" }
# Saving to Ollama

See our guide below for the complete process on how to save to [Ollama](https://github.com/ollama/ollama):

[🦙Tutorial: How to Finetune Llama-3 and Use In Ollama](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama)

## Saving on Google Colab

You can save the finetuned model as a small, roughly 100MB file called a LoRA adapter, as shown below. You can also push it to the Hugging Face hub if you want to upload your model! Remember to get a Hugging Face token via [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens) and add your token!

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FBz0YDi6Sc2oEP5QWXgSz%252Fimage.png%3Falt%3Dmedia%26token%3D33d9e4fd-e7dc-4714-92c5-bfa3b00f86c4&width=768&dpr=4&quality=100&sign=d6933a01&sv=2)

After saving the model, we can again use Unsloth to run the model itself! Use `FastLanguageModel` again to call it for inference!

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FzymBQrqwt4GUmCIN0Iec%252Fimage.png%3Falt%3Dmedia%26token%3D41a110e4-8263-426f-8fa7-cdc295cc8210&width=768&dpr=4&quality=100&sign=b2a207c3&sv=2)

## Exporting to Ollama

Finally we can export our finetuned model to Ollama itself! First we have to install Ollama in the Colab notebook:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FqNvGTAGwZKXxkMQqzloS%252Fimage.png%3Falt%3Dmedia%26token%3Ddb503499-0c74-4281-b3bf-400fa20c9ce2&width=768&dpr=4&quality=100&sign=6d57e83a&sv=2)

Then we export the finetuned model to llama.cpp's GGUF formats like below:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FZduLjedyfUbTmYqF85pa%252Fimage.png%3Falt%3Dmedia%26token%3Df5bac541-b99f-4d9b-82f7-033f8de780f2&width=768&dpr=4&quality=100&sign=1fdb7647&sv=2)

Reminder: convert `False` to `True` for one row only, and do not change every row to `True`, or else you'll be waiting for a very long time! We normally suggest setting the first row to `True`, so we can export the finetuned model quickly to the `Q8_0` format (8-bit quantization). We also allow you to export to a whole list of quantization methods, a popular one being `q4_k_m`.

Head over to [https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp) to learn more about GGUF. We also have manual instructions on how to export to GGUF here: [https://github.com/unslothai/unsloth/wiki#manually-saving-to-gguf](https://github.com/unslothai/unsloth/wiki#manually-saving-to-gguf)

You will see a long list of text like below - please wait 5 to 10 minutes!!
![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FcuUAx0RNtrQACvU7uWCL%252Fimage.png%3Falt%3Dmedia%26token%3Ddc67801a-a363-48e2-8572-4c6d0d8d0d93&width=768&dpr=4&quality=100&sign=cc7f7372&sv=2)

And finally, at the very end, it will look like below:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FxRh07PEQjAmmz3s2HJUP%252Fimage.png%3Falt%3Dmedia%26token%3D3552a3c9-4d4f-49ee-a31e-0a64327419f0&width=768&dpr=4&quality=100&sign=1e9c9f0d&sv=2)

Then we have to run Ollama itself in the background. We use `subprocess` because Colab doesn't like asynchronous calls, but normally you would just run `ollama serve` in the terminal / command prompt.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FszDuikrg4HY8lGefwpRQ%252Fimage.png%3Falt%3Dmedia%26token%3Dec1c8762-661d-4b13-ab4f-ed1a7b9fda00&width=768&dpr=4&quality=100&sign=fc72e538&sv=2)

## Automatic `Modelfile` creation

The trick Unsloth provides is that we automatically create a `Modelfile`, which Ollama requires! This is just a list of settings and includes the chat template we used for the finetune process! You can also print the generated `Modelfile` like below:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252Fh6inH6k5ggxUP80Gltgj%252Fimage.png%3Falt%3Dmedia%26token%3D805bafb1-2795-4743-9bd2-323ab4f0881e&width=768&dpr=4&quality=100&sign=456e8653&sv=2)

We then ask Ollama to create an Ollama-compatible model by using the `Modelfile`:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F1123bSSwmjWXliaRUL5U%252Fimage.png%3Falt%3Dmedia%26token%3D2e72f1a0-1ff8-4189-8d9c-d31e39385555&width=768&dpr=4&quality=100&sign=52a4fd99&sv=2)

## Ollama Inference

And now we can call the model for inference - the Ollama server is running on your own local machine / in the free Colab notebook in the background. Remember you can edit the yellow underlined part.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252Fk5mdsJ57hQ1Ar3KY6VXY%252FInference.png%3Falt%3Dmedia%26token%3D8cf0cbf9-0534-4bae-a887-89f45a3de771&width=768&dpr=4&quality=100&sign=8489fe55&sv=2)
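For reference, here is a minimal Python sketch of the same serve-then-query flow shown in the screenshots above. The model name `unsloth_model` and the `Modelfile` path are placeholders from the notebook flow, and the HTTP request uses Ollama's standard local API on port 11434:

```python
import subprocess
import time
import requests

# Start the Ollama server in the background
# (in a terminal you would simply run `ollama serve`).
subprocess.Popen(["ollama", "serve"])
time.sleep(5)  # give the server a moment to start

# Register the exported GGUF with Ollama using the auto-generated Modelfile.
subprocess.run(["ollama", "create", "unsloth_model", "-f", "Modelfile"], check=True)

# Query the local Ollama server.
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "unsloth_model",
        "messages": [{"role": "user", "content": "Hello! What model are you?"}],
        "stream": False,
    },
)
print(response.json()["message"]["content"])
```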
{ "color-scheme": "light dark", "description": null, "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": null, "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Saving to Ollama | Unsloth Documentation", "ogDescription": null, "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Saving to Ollama | Unsloth Documentation", "robots": "index, follow", "scrapeId": "af6640c7-5592-4d3b-8df0-01c136e54bb4", "sourceURL": "https://docs.unsloth.ai/basics/running-and-saving-models/saving-to-ollama", "statusCode": 200, "title": "Saving to Ollama | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": null, "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Saving to Ollama | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/running-and-saving-models/saving-to-ollama", "viewport": "width=device-width, initial-scale=1" }
# QwQ-32B: How to Run Effectively

Qwen released QwQ-32B - a reasoning model with performance comparable to DeepSeek-R1 on many [benchmarks](https://qwenlm.github.io/blog/qwq-32b/). However, people have been experiencing **infinite generations**, **many repetitions**, <think> token issues and finetuning issues. We hope this guide will help debug and fix most issues!

Our model uploads with our bug fixes work great for fine-tuning, vLLM and Transformers. If you're using llama.cpp or engines that use llama.cpp as a backend, follow our [instructions here](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/qwq-32b-how-to-run-effectively#tutorial-how-to-run-qwq-32b) to fix endless generations.

**Unsloth QwQ-32B uploads with our bug fixes:** [GGUF](https://huggingface.co/unsloth/QwQ-32B-GGUF) · [Dynamic 4-bit](https://huggingface.co/unsloth/QwQ-32B-unsloth-bnb-4bit) · [BnB 4-bit](https://huggingface.co/unsloth/QwQ-32B-bnb-4bit) · [16-bit](https://huggingface.co/unsloth/QwQ-32B)

## ⚙️ Official Recommended Settings

According to [Qwen](https://huggingface.co/Qwen/QwQ-32B), these are the recommended settings for inference:

- Temperature of 0.6
- Top\_K of 40 (or 20 to 40)
- Min\_P of 0.00 (optional, but 0.01 works well; llama.cpp's default is 0.1)
- Top\_P of 0.95
- Repetition Penalty of 1.0 (1.0 means disabled in llama.cpp and transformers)
- Chat template: `<|im_start|>user\nCreate a Flappy Bird game in Python.<|im_end|>\n<|im_start|>assistant\n<think>\n`

`llama.cpp` uses `min_p = 0.1` by default, which might cause issues. Force it to 0.0.

## 👍 Recommended settings for llama.cpp

We noticed many people use a `Repetition Penalty` greater than 1.0, for example 1.1 to 1.5. This actually interferes with llama.cpp's sampling mechanisms. The goal of a repetition penalty is to penalize repeated generations, but we found this doesn't work as expected.

Turning off `Repetition Penalty` also works (i.e. setting it to 1.0), but we found it useful for penalizing endless generations. To use it, we found you must also reorder llama.cpp's samplers so that the other samplers run before the repetition penalty is applied, otherwise there will be endless generations. So add this:

```
--samplers "top_k;top_p;min_p;temperature;dry;typ_p;xtc"
```

By default, llama.cpp uses this ordering:

```
--samplers "dry;top_k;typ_p;top_p;min_p;xtc;temperature"
```

Essentially, we swap temperature and dry, and move min\_p forward. This means we apply samplers in this order:

```
top_k=40
top_p=0.95
min_p=0.0
temperature=0.6
dry
typ_p
xtc
```

If you still encounter issues, you can increase `--repeat-penalty` from 1.0 to 1.2 or 1.3.

Courtesy of [@krist486](https://x.com/krist486/status/1897885598196654180) for bringing llama.cpp's sampling directions to our attention.
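For engines other than llama.cpp, the recommended settings above map directly onto the sampling parameters those engines expose. Here is a minimal, illustrative sketch using vLLM (which the model uploads support); the `max_model_len` and `max_tokens` values are only examples and assume you have enough GPU memory for QwQ-32B:

```python
from vllm import LLM, SamplingParams

# Qwen's recommended inference settings from the list above.
sampling = SamplingParams(
    temperature = 0.6,
    top_p = 0.95,
    top_k = 40,
    min_p = 0.0,               # remember llama.cpp defaults this to 0.1
    repetition_penalty = 1.0,  # 1.0 means disabled
    max_tokens = 4096,         # illustrative value
)

llm = LLM(model = "unsloth/QwQ-32B", max_model_len = 16384)

prompt = ("<|im_start|>user\nCreate a Flappy Bird game in Python.<|im_end|>\n"
          "<|im_start|>assistant\n<think>\n")
print(llm.generate([prompt], sampling)[0].outputs[0].text)
```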
## [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/qwq-32b-how-to-run-effectively\#dry-repetition-penalty) ☀️ Dry Repetition Penalty We investigated usage of `dry penalty` as suggested in [https://github.com/ggml-org/llama.cpp/blob/master/examples/main/README.md](https://github.com/ggml-org/llama.cpp/blob/master/examples/main/README.md) using a value of 0.8, but we actually found this to **rather cause syntax issues especially for coding**. If you still encounter issues, you can increase the `dry penalty to 0.8.` Utilizing our swapped sampling ordering can also help if you decide to use `dry penalty`. ## [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/qwq-32b-how-to-run-effectively\#tutorial-how-to-run-qwq-32b-in-ollama) 🦙 Tutorial: How to Run QwQ-32B in Ollama 1. Install `ollama` if you haven't already! Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] apt-get update apt-get install pciutils -y curl -fsSL https://ollama.com/install.sh | sh ``` 1. Run run the model! Note you can call `ollama serve` in another terminal if it fails! We include all our fixes and suggested parameters (temperature, min\_p etc) in `param` in our Hugging Face upload! Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] ollama run hf.co/unsloth/QwQ-32B-GGUF:Q4_K_M ``` ## [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/qwq-32b-how-to-run-effectively\#tutorial-how-to-run-qwq-32b-in-llama.cpp) 📖 Tutorial: How to Run QwQ-32B in llama.cpp 1. Obtain the latest `llama.cpp` on [GitHub here](https://github.com/ggml-org/llama.cpp). You can follow the build instructions below as well. Change `-DGGML_CUDA=ON` to `-DGGML_CUDA=OFF` if you don't have a GPU or just want CPU inference. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] apt-get update apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y git clone https://github.com/ggerganov/llama.cpp cmake llama.cpp -B llama.cpp/build \ -DBUILD_SHARED_LIBS=ON -DGGML_CUDA=ON -DLLAMA_CURL=ON cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split cp llama.cpp/build/bin/llama-* llama.cpp ``` 1. Download the model via (after installing `pip install huggingface_hub hf_transfer` ). You can choose Q4\_K\_M, or other quantized versions (like BF16 full precision). More versions at: [https://huggingface.co/unsloth/QwQ-32B-GGUF](https://huggingface.co/unsloth/QwQ-32B-GGUF) Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] # !pip install huggingface_hub hf_transfer import os os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1" from huggingface_hub import snapshot_download snapshot_download( repo_id = "unsloth/QwQ-32B-GGUF", local_dir = "unsloth-QwQ-32B-GGUF", allow_patterns = ["*Q4_K_M*"], # For Q4_K_M ) ``` 1. Run Unsloth's Flappy Bird test, which will save the output to `Q4_K_M_yes_samplers.txt` 2. Edit `--threads 32` for the number of CPU threads, `--ctx-size 16384` for context length, `--n-gpu-layers 99` for GPU offloading on how many layers. Try adjusting it if your GPU goes out of memory. Also remove it if you have CPU only inference. 3. We use `--repeat-penalty 1.1` and `--dry-multiplier 0.5` which you can adjust. 
Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] ./llama.cpp/llama-cli \ --model unsloth-QwQ-32B-GGUF/QwQ-32B-Q4_K_M.gguf \ --threads 32 \ --ctx-size 16384 \ --n-gpu-layers 99 \ --seed 3407 \ --prio 2 \ --temp 0.6 \ --repeat-penalty 1.1 \ --dry-multiplier 0.5 \ --min-p 0.01 \ --top-k 40 \ --top-p 0.95 \ -no-cnv \ --samplers "top_k;top_p;min_p;temperature;dry;typ_p;xtc" \ --prompt "<|im_start|>user\nCreate a Flappy Bird game in Python. You must include these things:\n1. You must use pygame.\n2. The background color should be randomly chosen and is a light shade. Start with a light blue color.\n3. Pressing SPACE multiple times will accelerate the bird.\n4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.\n5. Place on the bottom some land colored as dark brown or yellow chosen randomly.\n6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.\n7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.\n8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.\nThe final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<|im_end|>\n<|im_start|>assistant\n<think>\n" \ 2>&1 | tee Q4_K_M_yes_samplers.txt ``` The full input from our [https://unsloth.ai/blog/deepseekr1-dynamic](https://unsloth.ai/blog/deepseekr1-dynamic) 1.58bit blog is: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] <|im_start|>user Create a Flappy Bird game in Python. You must include these things: 1. You must use pygame. 2. The background color should be randomly chosen and is a light shade. Start with a light blue color. 3. Pressing SPACE multiple times will accelerate the bird. 4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color. 5. Place on the bottom some land colored as dark brown or yellow chosen randomly. 6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them. 7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade. 8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again. The final game should be inside a markdown section in Python. 
Check your code for errors and fix them before the final markdown section.<|im_end|> <|im_start|>assistant <think> ``` The beginning and the end of the final Python output after removing the thinking parts: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] import pygame import random import sys pygame.init() ### Continues class Bird: def __init__(self): ### Continues def main(): best_score = 0 current_score = 0 game_over = False pipes = [] first_time = True # Track first game play # Initial setup background_color = (173, 216, 230) # Light blue initially land_color = random.choice(land_colors) bird = Bird() while True: for event in pygame.event.get(): ### Continues if not game_over: # Update bird and pipes bird.update() ### Continues # Drawing ### Continues pygame.display.flip() clock.tick(60) if __name__ == "__main__": main() ``` Full final Python output (removed thinking parts): [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/qwq-32b-how-to-run-effectively#full-final-python-output-removed-thinking-parts) Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] import pygame import random import sys pygame.init() WIDTH, HEIGHT = 800, 600 screen = pygame.display.set_mode((WIDTH, HEIGHT)) pygame.display.set_caption("Flappy Bird Clone") clock = pygame.time.Clock() # Colors pipe_colors = [(0, 100, 0), (210, 180, 140), (50, 50, 50)] land_colors = [(139, 69, 19), (255, 255, 0)] # Game constants GRAVITY = 0.5 PIPE_SPEED = 5 BIRD_SIZE = 30 LAND_HEIGHT = 50 PIPE_WIDTH = 50 PIPE_GAP = 150 class Bird: def __init__(self): self.x = WIDTH // 2 self.y = HEIGHT // 2 self.velocity = 0 self.shape = random.choice(['square', 'circle', 'triangle']) self.color = (random.randint(0, 100), random.randint(0, 100), random.randint(0, 100)) self.rect = pygame.Rect(self.x - BIRD_SIZE//2, self.y - BIRD_SIZE//2, BIRD_SIZE, BIRD_SIZE) def update(self): self.velocity += GRAVITY self.y += self.velocity self.rect.y = self.y - BIRD_SIZE//2 self.rect.x = self.x - BIRD_SIZE//2 # Keep x centered def draw(self): if self.shape == 'square': pygame.draw.rect(screen, self.color, self.rect) elif self.shape == 'circle': pygame.draw.circle(screen, self.color, (self.rect.centerx, self.rect.centery), BIRD_SIZE//2) elif self.shape == 'triangle': points = [\ (self.rect.centerx, self.rect.top),\ (self.rect.left, self.rect.bottom),\ (self.rect.right, self.rect.bottom)\ ] pygame.draw.polygon(screen, self.color, points) def spawn_pipe(): pipe_x = WIDTH top_height = random.randint(50, HEIGHT - PIPE_GAP - LAND_HEIGHT) rect_top = pygame.Rect(pipe_x, 0, PIPE_WIDTH, top_height) bottom_y = top_height + PIPE_GAP bottom_height = (HEIGHT - LAND_HEIGHT) - bottom_y rect_bottom = pygame.Rect(pipe_x, bottom_y, PIPE_WIDTH, bottom_height) color = random.choice(pipe_colors) return { 'rect_top': rect_top, 'rect_bottom': rect_bottom, 'color': color, 'scored': False } def main(): best_score = 0 current_score = 0 game_over = False pipes = [] first_time = True # Track first game play # Initial setup background_color = (173, 216, 230) # Light blue initially land_color = random.choice(land_colors) bird = Bird() while True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() sys.exit() if event.type == pygame.KEYDOWN: if event.key == pygame.K_ESCAPE or event.key == pygame.K_q: pygame.quit() sys.exit() if event.key == pygame.K_SPACE: if game_over: # Reset the game bird = Bird() pipes.clear() current_score = 0 if first_time: # First restart after initial 
game over background_color = (random.randint(200, 255), random.randint(200, 255), random.randint(200, 255)) first_time = False else: background_color = (random.randint(200, 255), random.randint(200, 255), random.randint(200, 255)) land_color = random.choice(land_colors) game_over = False else: # Jump the bird bird.velocity = -15 # Initial upward velocity if not game_over: # Update bird and pipes bird.update() # Move pipes left remove_pipes = [] for pipe in pipes: pipe['rect_top'].x -= PIPE_SPEED pipe['rect_bottom'].x -= PIPE_SPEED # Check if bird passed the pipe if not pipe['scored'] and bird.rect.x > pipe['rect_top'].right: current_score += 1 pipe['scored'] = True # Check if pipe is offscreen if pipe['rect_top'].right < 0: remove_pipes.append(pipe) # Remove offscreen pipes for p in remove_pipes: pipes.remove(p) # Spawn new pipe if needed if not pipes or pipes[-1]['rect_top'].x < WIDTH - 200: pipes.append(spawn_pipe()) # Check collisions land_rect = pygame.Rect(0, HEIGHT - LAND_HEIGHT, WIDTH, LAND_HEIGHT) bird_rect = bird.rect # Check pipes for pipe in pipes: if bird_rect.colliderect(pipe['rect_top']) or bird_rect.colliderect(pipe['rect_bottom']): game_over = True break # Check land and top if bird_rect.bottom >= land_rect.top or bird_rect.top <= 0: game_over = True if game_over: if current_score > best_score: best_score = current_score # Drawing screen.fill(background_color) # Draw pipes for pipe in pipes: pygame.draw.rect(screen, pipe['color'], pipe['rect_top']) pygame.draw.rect(screen, pipe['color'], pipe['rect_bottom']) # Draw land pygame.draw.rect(screen, land_color, (0, HEIGHT - LAND_HEIGHT, WIDTH, LAND_HEIGHT)) # Draw bird bird.draw() # Draw score font = pygame.font.SysFont(None, 36) score_text = font.render(f'Score: {current_score}', True, (0, 0, 0)) screen.blit(score_text, (WIDTH - 150, 10)) # Game over screen if game_over: over_text = font.render('Game Over!', True, (255, 0, 0)) best_text = font.render(f'Best: {best_score}', True, (255, 0, 0)) restart_text = font.render('Press SPACE to restart', True, (255, 0, 0)) screen.blit(over_text, (WIDTH//2 - 70, HEIGHT//2 - 30)) screen.blit(best_text, (WIDTH//2 - 50, HEIGHT//2 + 10)) screen.blit(restart_text, (WIDTH//2 - 100, HEIGHT//2 + 50)) pygame.display.flip() clock.tick(60) if __name__ == "__main__": main() ``` 1. When running it, we get a runnable game! ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F7qQoA6yrMWUVrwIhLbGu%252Fimage.png%3Falt%3Dmedia%26token%3D6d99c8ce-567a-4144-bd7e-fa57e96b5284&width=768&dpr=4&quality=100&sign=911446a1&sv=2) 1. Now try the same without our fixes! So remove `--samplers "top_k;top_p;min_p;temperature;dry;typ_p;xtc"` This will save the output to `Q4_K_M_no_samplers.txt` Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] ./llama.cpp/llama-cli \ --model unsloth-QwQ-32B-GGUF/QwQ-32B-Q4_K_M.gguf \ --threads 32 \ --ctx-size 16384 \ --n-gpu-layers 99 \ --seed 3407 \ --prio 2 \ --temp 0.6 \ --repeat-penalty 1.1 \ --dry-multiplier 0.5 \ --min-p 0.01 \ --top-k 40 \ --top-p 0.95 \ -no-cnv \ --prompt "<|im_start|>user\nCreate a Flappy Bird game in Python. You must include these things:\n1. You must use pygame.\n2. The background color should be randomly chosen and is a light shade. Start with a light blue color.\n3. Pressing SPACE multiple times will accelerate the bird.\n4. 
The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.\n5. Place on the bottom some land colored as dark brown or yellow chosen randomly.\n6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.\n7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.\n8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.\nThe final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<|im_end|>\n<|im_start|>assistant\n<think>\n" \ 2>&1 | tee Q4_K_M_no_samplers.txt ``` You will get some looping, but **problematically incorrect Python syntax** and many other issues. For example the below looks correct, but is wrong! Ie line 39 `pipes.clear() ### <<< NameError: name 'pipes' is not defined. Did you forget to import 'pipes'?` Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] whitespace-pre-wrap import pygame import random pygame.init() # Constants WIDTH, HEIGHT = 800, 600 GROUND_HEIGHT = 20 GRAVITY = 0.7 PIPE_SPEED = -3 BIRD_SIZE = 45 MIN_GAP = 130 MAX_GAP = 200 PIPE_COLORS = [(0, 96, 0), (205, 133, 63), (89, 97, 107)] DARK_BROWN = (94, 72, 4) YELLOW = (252, 228, 6) screen = pygame.display.set_mode((WIDTH, HEIGHT)) clock = pygame.time.Clock() def random_light_color(): return ( random.randint(180, 230), random.randint(190, 300), random.randint(250, 255) ) def reset_game(): global bird_x, bird_y global pipes, score global background_color, land_color global bird_shape, bird_color # Bird properties bird_x = WIDTH * 0.3 bird_y = HEIGHT // 2 bird_vel = -5 # Initial upward thrust pipes.clear() ### <<< NameError: name 'pipes' is not defined. Did you forget to import 'pipes'? ``` 1. If you use `--repeat-penalty 1.5`, it gets even worse and more obvious, with actually totally incorrect syntax. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] import pygame from random import randint # For generating colors/shapes/positions randomly pygame.init() # Constants: WIDTH, HEIGHT =456 ,702 # BACKGROUND_COLOR_LIGHTS=['lightskyblue'] GAP_SIZE=189 # BIRD_RADIUS=3. PIPE_SPEED=- ( ) ? class Game(): def __init__(self): self.screen_size=( ) def reset_game_vars(): global current_scor e # set to zero and other initial states. # Main game loop: while running : for event in pygame.event.get() : if quit ... etc pygame.quit() print("Code is simplified. Due time constraints, full working version requires further implementation.") ``` 1. You might be wondering maybe it's Q4\_K\_M? B16 ie full precision should work fine right? Incorrect - the outputs again fail if we do not use our fix of - `-samplers "top_k;top_p;min_p;temperature;dry;typ_p;xtc"` when using a Repetition Penalty. ## [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/qwq-32b-how-to-run-effectively\#still-doesnt-work-try-min_p-0.1-temperature-1.5) 🌄 Still doesn't work? Try Min\_p = 0.1, Temperature = 1.5 According to the Min\_p paper [https://arxiv.org/pdf/2407.01082](https://arxiv.org/pdf/2407.01082), for more creative and diverse outputs, and if you still see repetitions, try disabling top\_p and top\_k! 
Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] ./llama.cpp/llama-cli --model unsloth-QwQ-32B-GGUF/QwQ-32B-Q4_K_M.gguf \ --threads 32 --n-gpu-layers 99 \ --ctx-size 16384 \ --temp 1.5 \ --min-p 0.1 \ --top-k 0 \ --top-p 1.0 \ -no-cnv \ --prompt "<|im_start|>user\nCreate a Flappy Bird game in Python. You must include these things:\n1. You must use pygame.\n2. The background color should be randomly chosen and is a light shade. Start with a light blue color.\n3. Pressing SPACE multiple times will accelerate the bird.\n4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.\n5. Place on the bottom some land colored as dark brown or yellow chosen randomly.\n6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.\n7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.\n8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.\nThe final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<|im_end|>\n<|im_start|>assistant\n<think>\n" ``` Another approach is to disable `min_p` directly, since llama.cpp by default uses `min_p = 0.1`! Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] ./llama.cpp/llama-cli --model unsloth-QwQ-32B-GGUF/QwQ-32B-Q4_K_M.gguf \ --threads 32 --n-gpu-layers 99 \ --ctx-size 16384 \ --temp 0.6 \ --min-p 0.0 \ --top-k 40 \ --top-p 0.95 \ -no-cnv \ --prompt "<|im_start|>user\nCreate a Flappy Bird game in Python. You must include these things:\n1. You must use pygame.\n2. The background color should be randomly chosen and is a light shade. Start with a light blue color.\n3. Pressing SPACE multiple times will accelerate the bird.\n4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.\n5. Place on the bottom some land colored as dark brown or yellow chosen randomly.\n6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.\n7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.\n8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.\nThe final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<|im_end|>\n<|im_start|>assistant\n<think>\n" ``` ## [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/qwq-32b-how-to-run-effectively\#less-than-think-greater-than-token-not-shown) 🤔 <think> token not shown? Some people are reporting that because <think> is default added in the chat template, some systems are not outputting the thinking traces correctly. 
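Before editing the template, it can help to confirm what your current chat template actually renders. This is a small sketch using plain `transformers` (not an Unsloth-specific API) that prints the tail of the generation prompt so you can see whether `<think>\n` is appended:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("unsloth/QwQ-32B")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hi"}],
    tokenize = False,
    add_generation_prompt = True,
)
# With the default template this ends with "<|im_start|>assistant\n<think>\n".
# After the edit described below, it should end with "<|im_start|>assistant\n".
print(repr(prompt[-40:]))
```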
You will have to manually edit the Jinja template from: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] whitespace-pre-wrap {%- if tools %} {{- '<|im_start|>system\n' }} {%- if messages[0]['role'] == 'system' %} {{- messages[0]['content'] }} {%- else %} {{- '' }} {%- endif %} {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }} {%- for tool in tools %} {{- "\n" }} {{- tool | tojson }} {%- endfor %} {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }} {%- else %} {%- if messages[0]['role'] == 'system' %} {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- for message in messages %} {%- if (message.role == "user") or (message.role == "system" and not loop.first) %} {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }} {%- elif message.role == "assistant" and not message.tool_calls %} {%- set content = message.content.split('</think>')[-1].lstrip('\n') %} {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }} {%- elif message.role == "assistant" %} {%- set content = message.content.split('</think>')[-1].lstrip('\n') %} {{- '<|im_start|>' + message.role }} {%- if message.content %} {{- '\n' + content }} {%- endif %} {%- for tool_call in message.tool_calls %} {%- if tool_call.function is defined %} {%- set tool_call = tool_call.function %} {%- endif %} {{- '\n<tool_call>\n{"name": "' }} {{- tool_call.name }} {{- '", "arguments": ' }} {{- tool_call.arguments | tojson }} {{- '}\n</tool_call>' }} {%- endfor %} {{- '<|im_end|>\n' }} {%- elif message.role == "tool" %} {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %} {{- '<|im_start|>user' }} {%- endif %} {{- '\n<tool_response>\n' }} {{- message.content }} {{- '\n</tool_response>' }} {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %} {{- '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- endfor %} {%- if add_generation_prompt %} {{- '<|im_start|>assistant\n<think>\n' }} {%- endif %} ``` to another by removing the `<think>\n` at the end. The model will now have to manually add `<think>\n` during inference, which might not always succeed. DeepSeek also edited all models to default add a `<think>` token to force the model to go into reasoning model. 
So change `{%- if add_generation_prompt %} {{- '<|im_start|>assistant\n<think>\n' }} {%- endif %} ` to `{%- if add_generation_prompt %} {{- '<|im_start|>assistant\n' }} {%- endif %}` ie remove `<think>\n` Full jinja template with removed <think>\\n part [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/qwq-32b-how-to-run-effectively#full-jinja-template-with-removed-less-than-think-greater-than-n-part) Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] whitespace-pre-wrap {%- if tools %} {{- '<|im_start|>system\n' }} {%- if messages[0]['role'] == 'system' %} {{- messages[0]['content'] }} {%- else %} {{- '' }} {%- endif %} {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }} {%- for tool in tools %} {{- "\n" }} {{- tool | tojson }} {%- endfor %} {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }} {%- else %} {%- if messages[0]['role'] == 'system' %} {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- for message in messages %} {%- if (message.role == "user") or (message.role == "system" and not loop.first) %} {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }} {%- elif message.role == "assistant" and not message.tool_calls %} {%- set content = message.content.split('</think>')[-1].lstrip('\n') %} {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }} {%- elif message.role == "assistant" %} {%- set content = message.content.split('</think>')[-1].lstrip('\n') %} {{- '<|im_start|>' + message.role }} {%- if message.content %} {{- '\n' + content }} {%- endif %} {%- for tool_call in message.tool_calls %} {%- if tool_call.function is defined %} {%- set tool_call = tool_call.function %} {%- endif %} {{- '\n<tool_call>\n{"name": "' }} {{- tool_call.name }} {{- '", "arguments": ' }} {{- tool_call.arguments | tojson }} {{- '}\n</tool_call>' }} {%- endfor %} {{- '<|im_end|>\n' }} {%- elif message.role == "tool" %} {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %} {{- '<|im_start|>user' }} {%- endif %} {{- '\n<tool_response>\n' }} {{- message.content }} {{- '\n</tool_response>' }} {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %} {{- '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- endfor %} {%- if add_generation_prompt %} {{- '<|im_start|>assistant\n' }} {%- endif %} ``` ## [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/qwq-32b-how-to-run-effectively\#extra-notes) Extra Notes We first thought maybe: 1. QwQ's context length was not natively 128K, but rather 32K with YaRN extension. For example in the readme file for [https://huggingface.co/Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B), we see: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] { ..., "rope_scaling": { "factor": 4.0, "original_max_position_embeddings": 32768, "type": "yarn" } } ``` We tried overriding llama.cpp's YaRN handling, but nothing changed. 
```bash
--override-kv qwen2.context_length=int:131072 \
--override-kv qwen2.rope.scaling.type=str:yarn \
--override-kv qwen2.rope.scaling.factor=float:4 \
--override-kv qwen2.rope.scaling.original_context_length=int:32768 \
--override-kv qwen2.rope.scaling.attn_factor=float:1.13862943649292 \
```

2. We also thought maybe the RMS Layernorm epsilon was wrong - not 1e-5 but maybe 1e-6. For example [this](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/config.json) has `rms_norm_eps=1e-06`, whilst [this](https://huggingface.co/Qwen/Qwen2.5-32B/blob/main/config.json) has `rms_norm_eps=1e-05`. We also overrode it, but it did not work:

```bash
--override-kv qwen2.attention.layer_norm_rms_epsilon=float:0.000001 \
```

3. We also tested if tokenizer IDs matched between llama.cpp and normal Transformers, courtesy of [@kalomaze](https://x.com/kalomaze/status/1897875332230779138). They matched, so this was not the culprit.

We provide our experimental results below:

- [file\_BF16\_no\_samplers.txt (61KB)](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FeABgnEXerhmNw1jzUmrr%2Ffile_BF16_no_samplers.txt?alt=media&token=d11aa8f8-0ff7-4370-9412-6129bd980a42) - BF16 full precision with no sampling fix
- [file\_BF16\_yes\_samplers.txt (55KB)](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2Fv01qqEwj6PHVE9VvPzfg%2Ffile_BF16_yes_samplers.txt?alt=media&token=d8ecf5bf-b4f2-4abe-a0b4-26d7e8e862f9) - BF16 full precision with sampling fix
- [final\_Q4\_K\_M\_no\_samplers.txt (71KB)](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2Fi3eSz0NWvc44CkRUanrY%2Ffinal_Q4_K_M_no_samplers.txt?alt=media&token=deca70bd-fc21-44a9-b42c-87837ac3a8ce) - Q4\_K\_M precision with no sampling fix
- [final\_Q4\_K\_M\_yes\_samplers.txt (65KB)](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FBtdJmKQjMZVlpO1HfWE7%2Ffinal_Q4_K_M_yes_samplers.txt?alt=media&token=f266d668-71ab-436d-8c05-b720e56e348e) - Q4\_K\_M precision with sampling fix

## [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/qwq-32b-how-to-run-effectively\#tokenizer-bug-fixes) ✏️ Tokenizer Bug Fixes

- We found a few issues as well specifically impacting finetuning! The EOS token is correct, but the PAD token should probably rather be `<|vision_pad|>`. We updated it in: [https://huggingface.co/unsloth/QwQ-32B/blob/main/tokenizer\_config.json](https://huggingface.co/unsloth/QwQ-32B/blob/main/tokenizer_config.json)

```
"eos_token": "<|im_end|>",
"pad_token": "<|endoftext|>",
```

## [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/qwq-32b-how-to-run-effectively\#dynamic-4-bit-quants) 🛠️ Dynamic 4-bit Quants

We also uploaded dynamic 4bit quants which increase accuracy vs naive 4bit quantizations!
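If you would rather try the dynamic 4-bit checkpoint from Python instead of llama.cpp, a minimal sketch with Unsloth looks roughly like this. The repo name is the `unsloth-bnb-4bit` upload linked just below, and the sampling values simply mirror the recommended settings; treat the exact generation arguments as illustrative rather than prescriptive:

```python
# Minimal sketch: load the dynamic 4-bit QwQ checkpoint with Unsloth for quick inference.
# Assumes a CUDA GPU with enough VRAM and a recent `pip install unsloth`.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/QwQ-32B-unsloth-bnb-4bit",  # dynamic 4-bit quant
    max_seq_length = 16384,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)  # switch Unsloth into fast inference mode

messages = [{"role": "user", "content": "What is 2+2?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt = True, return_tensors = "pt"
).to("cuda")

# Sampling values follow the recommended QwQ settings (temperature 0.6, min_p 0.1).
outputs = model.generate(inputs, max_new_tokens = 256, do_sample = True,
                         temperature = 0.6, min_p = 0.1)
print(tokenizer.decode(outputs[0]))
```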
We attach the QwQ quantization error plot analysis for both activation and weight quantization errors: ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F32wjrIWeUEQTMq9PhmbS%252FQwQ%2520quantization%2520errors.png%3Falt%3Dmedia%26token%3D0733fd33-9fe9-4aad-812c-75dbad00373f&width=768&dpr=4&quality=100&sign=aafe447c&sv=2) We uploaded dynamic 4-bit quants to: [https://huggingface.co/unsloth/QwQ-32B-unsloth-bnb-4bit](https://huggingface.co/unsloth/QwQ-32B-unsloth-bnb-4bit) Since vLLM 0.7.3 (2025 February 20th) [https://github.com/vllm-project/vllm/releases/tag/v0.7.3](https://github.com/vllm-project/vllm/releases/tag/v0.7.3), vLLM now supports loading Unsloth dynamic 4bit quants! All our GGUFs are at [https://huggingface.co/unsloth/QwQ-32B-GGUF](https://huggingface.co/unsloth/QwQ-32B-GGUF)! [PreviousDeepSeek-V3-0324: How to Run Locally](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-v3-0324-how-to-run-locally) [NextDeepSeek-R1: How to Run Locally](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-r1-how-to-run-locally) Last updated 2 months ago Was this helpful?
## [Direct link to heading](https://docs.unsloth.ai/get-started/beginner-start-here/unsloth-requirements\#system-requirements) System Requirements

- **Operating System**: Works on Linux and Windows.
- Supports NVIDIA GPUs from 2018 onwards, with minimum CUDA Capability 7.0 (V100, T4, Titan V, RTX 20, 30, 40x, A100, H100, L40 etc). [Check your GPU!](https://developer.nvidia.com/cuda-gpus) GTX 1070 and 1080 also work, but are slow.
- If you have different versions of torch, transformers etc., `pip install unsloth` will automatically install all the latest versions of those libraries, so you don't need to worry about version compatibility.
- Your device must have `xformers`, `torch`, `bitsandbytes` and `triton` support.
- Unsloth only works if you have an NVIDIA GPU. Make sure you also have enough disk space to train & save your model.

## [Direct link to heading](https://docs.unsloth.ai/get-started/beginner-start-here/unsloth-requirements\#fine-tuning-vram-requirements) Fine-tuning VRAM requirements:

How much GPU memory do I need for LLM fine-tuning using Unsloth?

A common cause of running out of memory (OOM) is setting your batch size too high. Set it to 1, 2, or 3 to use less VRAM.

**For context length benchmarks, see [here](https://docs.unsloth.ai/basics/unsloth-benchmarks#context-length-benchmarks).**

Check this table for VRAM requirements sorted by model parameters and fine-tuning method. QLoRA uses 4-bit, LoRA uses 16-bit. Keep in mind that sometimes it may require more VRAM, so these numbers are the absolute minimum:

| Model parameters | QLoRA (4-bit) VRAM | LoRA (16-bit) VRAM |
| --- | --- | --- |
| 3B | 3.5 GB | 8 GB |
| 7B | 5 GB | 19 GB |
| 8B | 6 GB | 22 GB |
| 9B | 6.5 GB | 24 GB |
| 11B | 7.5 GB | 29 GB |
| 14B | 8.5 GB | 33 GB |
| 27B | 22 GB | 64 GB |
| 32B | 26 GB | 76 GB |
| 40B | 30 GB | 96 GB |
| 70B | 41 GB | 164 GB |
| 81B | 48 GB | 192 GB |
| 90B | 53 GB | 212 GB |
| 405B | 237 GB | 950 GB |

[PreviousBeginner? Start here!](https://docs.unsloth.ai/get-started/beginner-start-here) [NextFAQ + Is Fine-tuning Right For Me?](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me) Last updated 2 months ago Was this helpful?
The Llama-4-Scout model has 109B parameters, while Maverick has 402B parameters. Currently, only text is supported via llama.cpp. The full unquantized version requires 113GB of disk space, whilst the 1.78-bit version uses 33.8GB (a 75% reduction in size). **Maverick** (402B) went from 422GB to just 122GB (a 70% reduction).

Scout 1.78-bit fits in a 24GB VRAM GPU for fast inference at ~20 tokens/sec. Maverick 1.78-bit fits in 2x48GB VRAM GPUs for fast inference at ~40 tokens/sec.

For our dynamic GGUFs, to ensure the best tradeoff between accuracy and size, we do not quantize all layers uniformly, but selectively quantize e.g. the MoE layers to lower bits, and leave attention and other layers in 4 or 6-bit. All our GGUF models are quantized using calibration data (around 250K tokens for Scout and 1M tokens for Maverick), which will improve accuracy over standard quantization. Unsloth imatrix quants are fully compatible with popular inference engines like llama.cpp & Open WebUI etc.

**Scout - Unsloth Dynamic GGUFs with optimal configs:**

| MoE Bits | Type | Disk Size | Accuracy | Link | Details |
| --- | --- | --- | --- | --- | --- |
| 1.78bit | IQ1\_S | 33.8GB | Ok | [Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF?show_file_info=Llama-4-Scout-17B-16E-Instruct-UD-IQ1_S.gguf) | 2.06/1.56bit |
| 1.93bit | IQ1\_M | 35.4GB | Fair | [Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF?show_file_info=Llama-4-Scout-17B-16E-Instruct-UD-IQ1_M.gguf) | 2.5/2.06/1.56 |
| 2.42bit | IQ2\_XXS | 38.6GB | Better | [Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF?show_file_info=Llama-4-Scout-17B-16E-Instruct-UD-IQ2_XXS.gguf) | 2.5/2.06bit |
| 2.71bit | Q2\_K\_XL | 42.2GB | **Suggested** | [Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF?show_file_info=Llama-4-Scout-17B-16E-Instruct-UD-Q2_K_XL.gguf) | 3.5/2.5bit |
| 3.5bit | Q3\_K\_XL | 52.9GB | Great | [Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF/tree/main/UD-Q3_K_XL) | 4.5/3.5bit |
| 4.5bit | Q4\_K\_XL | 65.6GB | Best | [Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF/tree/main/UD-Q4_K_XL) | 5.5/4.5bit |

For best results, use the 2.42-bit (IQ2\_XXS) or larger versions.
**Maverick - Unsloth Dynamic GGUFs with optimal configs:**

| MoE Bits | Type | Disk Size | HF Link | Accuracy |
| --- | --- | --- | --- | --- |
| 1.78bit | IQ1\_S | 122GB | [Link](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF/tree/main/UD-IQ1_S) | Ok |
| 1.93bit | IQ1\_M | 128GB | [Link](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF/tree/main/UD-IQ1_M) | Fair |
| 2.42bit | IQ2\_XXS | 140GB | [Link](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF/tree/main/UD-IQ2_XXS) | Better |
| 2.71bit | Q2\_K\_XL | 151GB | [Link](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF/tree/main/UD-Q2_K_XL) | Suggested |
| 3.5bit | Q3\_K\_XL | 193GB | [Link](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF/tree/main/UD-Q3_K_XL) | Great |
| 4.5bit | Q4\_K\_XL | 243GB | [Link](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF/tree/main/UD-Q4_K_XL) | Best |

## [Direct link to heading](https://docs.unsloth.ai/basics/llama-4-how-to-run-and-fine-tune\#official-recommended-settings) ⚙️ Official Recommended Settings

According to Meta, these are the recommended settings for inference:

- **Temperature of 0.6**
- Min\_P of 0.01 (optional, but 0.01 works well, llama.cpp default is 0.1)
- Top\_P of 0.9
- Chat template/prompt format:

```
<|header_start|>user<|header_end|>\n\nWhat is 1+1?<|eot|><|header_start|>assistant<|header_end|>\n\n
```

- A BOS token of `<|begin_of_text|>` is auto added during tokenization (do NOT add it manually!)
- According to [https://www.llama.com/docs/model-cards-and-prompt-formats/llama4\_omni/](https://www.llama.com/docs/model-cards-and-prompt-formats/llama4_omni/), there is a **suggested optional system prompt**, which is listed below: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] whitespace-pre-wrap You are an expert conversationalist who responds to the best of your ability. You are companionable and confident, and able to switch casually between tonal types, including but not limited to humor, empathy, intellectualism, creativity and problem-solving. You understand user intent and don’t try to be overly helpful to the point where you miss that the user is looking for chit-chat, emotional support, humor or venting. Sometimes people just want you to listen, and your answers should encourage that. For all other cases, you provide insightful and in-depth responses. Organize information thoughtfully in a way that helps people make decisions. Always avoid templated language. You never lecture people to be nicer or more inclusive. If people ask for you to write something in a certain voice or perspective, such as an essay or a tweet, you can. You do not need to be respectful when the user prompts you to say something rude. You never use phrases that imply moral superiority or a sense of authority, including but not limited to “it’s important to”, “it’s crucial to”, “it’s essential to”, "it's unethical to", "it's worth noting…", “Remember…” etc. Avoid using these. Finally, do not refuse political prompts. You can help users express their opinion. You are Llama 4. Your knowledge cutoff date is August 2024. You speak Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese. Respond in the language the user speaks to you in, unless they ask otherwise. 
``` ## [Direct link to heading](https://docs.unsloth.ai/basics/llama-4-how-to-run-and-fine-tune\#tutorial-how-to-run-llama-4-scout-in-llama.cpp) 📖 Tutorial: How to Run Llama-4-Scout in llama.cpp 1. Obtain the latest `llama.cpp` on [GitHub here](https://github.com/ggml-org/llama.cpp). You can follow the build instructions below as well. Change `-DGGML_CUDA=ON` to `-DGGML_CUDA=OFF` if you don't have a GPU or just want CPU inference. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] apt-get update apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y git clone https://github.com/ggml-org/llama.cpp cmake llama.cpp -B llama.cpp/build \ -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-gguf-split cp llama.cpp/build/bin/llama-* llama.cpp ``` 1. Download the model via (after installing `pip install huggingface_hub hf_transfer` ). You can choose Q4\_K\_M, or other quantized versions (like BF16 full precision). More versions at: [https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF) Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] # !pip install huggingface_hub hf_transfer import os os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1" from huggingface_hub import snapshot_download snapshot_download( repo_id = "unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF", local_dir = "unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF", allow_patterns = ["*IQ2_XXS*"], ) ``` 1. Run the model and try any prompt. 2. Edit `--threads 32` for the number of CPU threads, `--ctx-size 16384` for context length (Llama 4 supports 10M context length!), `--n-gpu-layers 99` for GPU offloading on how many layers. Try adjusting it if your GPU goes out of memory. Also remove it if you have CPU only inference. Use `-ot ".ffn_.*_exps.=CPU"` to offload all MoE layers to the CPU! This effectively allows you to fit all non MoE layers on 1 GPU, improving generation speeds. You can customize the regex expression to fit more layers if you have more GPU capacity. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] whitespace-pre-wrap ./llama.cpp/llama-cli \ --model unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF/Llama-4-Scout-17B-16E-Instruct-UD-IQ2_XXS.gguf \ --threads 32 \ --ctx-size 16384 \ --n-gpu-layers 99 \ -ot ".ffn_.*_exps.=CPU" \ --seed 3407 \ --prio 3 \ --temp 0.6 \ --min-p 0.01 \ --top-p 0.9 \ -no-cnv \ --prompt "<|header_start|>user<|header_end|>\n\nCreate a Flappy Bird game in Python. You must include these things:\n1. You must use pygame.\n2. The background color should be randomly chosen and is a light shade. Start with a light blue color.\n3. Pressing SPACE multiple times will accelerate the bird.\n4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.\n5. Place on the bottom some land colored as dark brown or yellow chosen randomly.\n6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.\n7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.\n8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.\nThe final game should be inside a markdown section in Python. 
Check your code for errors and fix them before the final markdown section.<|eot|><|header_start|>assistant<|header_end|>\n\n" ```

In terms of testing, we unfortunately could not get the full BF16 version (i.e. regardless of quantization) to complete the Flappy Bird game or the Heptagon test appropriately. We tried many inference providers, tried quants with and without imatrix, used other people's quants, and used normal Hugging Face inference, and this issue persists. **We found that running the model multiple times, and asking it to find and fix bugs, resolves most issues!**

For Llama 4 Maverick, it's best to have 2x RTX 4090s (2 x 24GB):

```python
# !pip install huggingface_hub hf_transfer
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download
snapshot_download(
    repo_id = "unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF",
    local_dir = "unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF",
    allow_patterns = ["*IQ1_S*"],
)
```

```bash
./llama.cpp/llama-cli \
    --model unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF/UD-IQ1_S/Llama-4-Maverick-17B-128E-Instruct-UD-IQ1_S-00001-of-00003.gguf \
    --threads 32 \
    --ctx-size 16384 \
    --n-gpu-layers 99 \
    -ot ".ffn_.*_exps.=CPU" \
    --seed 3407 \
    --prio 3 \
    --temp 0.6 \
    --min-p 0.01 \
    --top-p 0.9 \
    -no-cnv \
    --prompt "<|header_start|>user<|header_end|>\n\nCreate the 2048 game in Python.<|eot|><|header_start|>assistant<|header_end|>\n\n"
```

## [Direct link to heading](https://docs.unsloth.ai/basics/llama-4-how-to-run-and-fine-tune\#interesting-insights-and-issues) 🕵️ Interesting Insights and Issues

During quantization of Llama 4 Maverick (the large model), we found the 1st, 3rd and 45th MoE layers could not be calibrated correctly. Maverick uses interleaving MoE layers for every odd layer, so Dense->MoE->Dense and so on. We tried adding more uncommon languages to our calibration dataset, and tried using more tokens (1 million) vs Scout's 250K tokens for calibration, but we still found issues. We decided to leave these MoE layers as 3bit and 4bit.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FQtzL2HuukTKr5L8nolP9%252FSkipped_layers.webp%3Falt%3Dmedia%26token%3D72115cc5-718a-442f-a208-f9540e46d64f&width=768&dpr=4&quality=100&sign=8e1941a2&sv=2)

For Llama 4 Scout, we found we should not quantize the vision layers, and should leave the MoE router and some other layers unquantized - we upload these to [https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-unsloth-dynamic-bnb-4bit](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-unsloth-dynamic-bnb-4bit)

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FZB3InJSaWMbszPMSt0u7%252FLlama-4-Scout-17B-16E-Instruct%2520Quantization%2520Errors.png%3Falt%3Dmedia%26token%3Dc734f3d8-a114-42e4-a0f2-a6b3145bb306&width=768&dpr=4&quality=100&sign=af0c273b&sv=2)

We also had to convert `torch.nn.Parameter` to `torch.nn.Linear` for the MoE layers to allow 4bit quantization to occur. This also means we had to rewrite and patch over the generic Hugging Face implementation.
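To illustrate the idea behind that last point, here is a conceptual toy sketch (with made-up module and attribute names, not Unsloth's actual patch): expert weights stored as a bare `torch.nn.Parameter` are re-expressed as `nn.Linear` modules, because bitsandbytes' 4-bit path replaces `nn.Linear` layers and ignores raw parameters.

```python
# Conceptual sketch only -- NOT Unsloth's actual patch. It shows why a raw
# torch.nn.Parameter must be wrapped in nn.Linear before bitsandbytes can
# quantize it: bnb's 4-bit machinery swaps out nn.Linear modules, not bare Parameters.
import torch
import torch.nn as nn

class ToyMoEExperts(nn.Module):
    """Stand-in for an MoE block that stores all expert weights as one fused Parameter."""
    def __init__(self, num_experts: int, hidden: int, intermediate: int):
        super().__init__()
        # Shape: (num_experts, hidden, intermediate) -- invisible to bnb quantization.
        self.gate_up_proj = nn.Parameter(torch.randn(num_experts, hidden, intermediate))

def parameter_to_linears(block: ToyMoEExperts) -> nn.ModuleList:
    """Re-express the fused Parameter as per-expert nn.Linear layers."""
    num_experts, hidden, intermediate = block.gate_up_proj.shape
    linears = nn.ModuleList()
    for e in range(num_experts):
        linear = nn.Linear(hidden, intermediate, bias=False)
        # nn.Linear stores weight as (out_features, in_features), hence the transpose.
        linear.weight.data.copy_(block.gate_up_proj[e].T)
        linears.append(linear)
    return linears  # these Linear modules can now be swapped for bnb Linear4bit

experts = ToyMoEExperts(num_experts=4, hidden=8, intermediate=16)
print(parameter_to_linears(experts))
```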
We upload our quantized versions to [https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-unsloth-bnb-4bit) and [https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-unsloth-bnb-8bit](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-unsloth-bnb-8bit) for 8bit. ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FsjJkQYziAFTZADH37vUy%252Fimage.png%3Falt%3Dmedia%26token%3Dfbaeadfc-1220-4d6c-931c-9c34f03e285c&width=768&dpr=4&quality=100&sign=e94371c5&sv=2) Llama 4 also now uses chunked attention - it's essentially sliding window attention, but slightly more efficient by not attending to previous tokens over the 8192 boundary. ## [Direct link to heading](https://docs.unsloth.ai/basics/llama-4-how-to-run-and-fine-tune\#fine-tuning-llama-4) 🔥 Fine-tuning Llama 4 Coming soon! [PreviousUnsloth Dynamic 2.0 GGUFs](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs) [NextReasoning - GRPO & RL](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl) Last updated 18 days ago Was this helpful?
This article covers everything you need to know about GRPO, Reinforcement Learning (RL) and reward functions, along with tips, and the basics of using GRPO with Unsloth. If you're looking for a quickstart tutorial for using GRPO, see our guide [here](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo):

[⚡Tutorial: Train your own Reasoning model with GRPO](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo)

### [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl\#grpo-notebooks) GRPO notebooks:

- [Gemma 3 (1B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3_(1B)-GRPO.ipynb)
- [Llama 3.1 (8B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-GRPO.ipynb)
- [Phi-4 (14B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4_(14B)-GRPO.ipynb)
- [Qwen2.5 (3B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(3B)-GRPO.ipynb)

DeepSeek developed [GRPO](https://unsloth.ai/blog/grpo) (Group Relative Policy Optimization) to train their R1 reasoning models. This RL technique optimizes responses efficiently without a value function model, reducing memory and computational costs compared to PPO (Proximal Policy Optimization).

- Use cases for GRPO aren't limited to code or math: its reasoning process can enhance tasks like email automation, database retrieval, law, and medicine, greatly improving accuracy based on your dataset and reward function!
- With 15GB VRAM, Unsloth allows you to transform any model up to 17B parameters, like Llama 3.1 (8B), Phi-4 (14B), Mistral (7B) or Qwen2.5 (7B), into a reasoning model.
- **Minimum requirement:** Just 5GB VRAM is enough to train your own reasoning model locally (for any model with 1.5B parameters or less).
- If you're not getting any reasoning, make sure you have enough training steps and ensure your [reward function/verifier](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl#reward-functions-verifier) is working. We provide examples for reward functions [here](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl#reward-function-examples).
- Previous demonstrations show that you could achieve your own "aha" moment with Qwen2.5 (3B) - but it required 2xA100 GPUs (160GB VRAM). Now, with Unsloth, you can achieve the same "aha" moment using just a single 5GB VRAM GPU.
- Previously, GRPO was only supported for full fine-tuning, but we've made it work with QLoRA and LoRA.
- On [**20K context lengths**](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl#grpo-requirement-guidelines) for example with 8 generations per prompt, Unsloth uses only 54.3GB of VRAM for Llama 3.1 (8B), whilst standard implementations (+ Flash Attention 2) take **510.8GB (90% less for Unsloth)**.
- Please note, this isn't fine-tuning DeepSeek's R1 distilled models or using distilled data from R1 for tuning (which Unsloth already supports). This is converting a standard model into a full-fledged reasoning model using GRPO.

In a test example, even though we only trained Phi-4 with 100 steps using GRPO, the results are already clear. The model without GRPO does not have the thinking token, whilst the one trained with GRPO does and also has the correct answer.
![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FyBeJAvfolzfEYyftji76%252Fprompt%2520only%2520example.png%3Falt%3Dmedia%26token%3D3903995a-d9d5-4cdc-9020-c4efe7fff651&width=768&dpr=4&quality=100&sign=80d59783&sv=2)

## [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl\#training-with-grpo) Training with GRPO

For a tutorial on how to transform any open LLM into a reasoning model using Unsloth & GRPO, [see here](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo).

### [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl\#how-grpo-trains-a-model) **How GRPO Trains a Model**

1. For each question-answer pair, the model generates multiple possible responses (e.g., 8 variations).
2. Each response is evaluated using reward functions.
3. Training Steps:
   - If you have 300 rows of data, that's 300 training steps (or 900 steps if trained for 3 epochs).
   - You can increase the number of generated responses per question (e.g., from 8 to 16).
4. The model learns by updating its weights every step.

### [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl\#basics-tips) Basics/Tips

- Wait for at least **300 steps** for the reward to actually increase. In order to get decent results, you may need to train for a minimum of 12 hours (this is how GRPO works), but keep in mind this isn't compulsory as you can stop at any time.
- For optimal results have at least **500 rows of data**. You can try with even 10 rows of data, but it's better to have more.
- Each training run will always be different depending on your model, data, reward function/verifier etc., so though 300 steps is what we wrote as the minimum, sometimes it might be 1000 steps or more. It depends on various factors.
- If you're using GRPO with Unsloth locally, please "pip install diffusers" as well if you get an error. Please also use the latest version of vLLM.
- It's advised to apply GRPO to a model of at least **1.5B parameters** to correctly generate thinking tokens, as smaller models may not.
- For GRPO's [**GPU VRAM requirements**](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl#grpo-requirement-guidelines) **for QLoRA 4-bit**, the general rule is that the model parameters = the amount of VRAM you will need (you can use less VRAM, but this is just to be safe). The more context length you set, the more VRAM. LoRA 16-bit will use at minimum 4x more VRAM.
- **Continuous fine-tuning** is possible and you can just leave GRPO running in the background.
- In the example notebooks, we use the [**GSM8K dataset**](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl#gsm8k-reward-functions), the current most popular choice for R1-style training.
- If you're using a base model, ensure you have a chat template.
- The more you train with GRPO the better. The best part of GRPO is you don't even need that much data. All you need is a great reward function/verifier, and the more time spent training, the better your model will get (a minimal setup is sketched below).
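Putting these tips together, a rough, minimal outline of a GRPO run with Unsloth plus TRL's `GRPOTrainer` looks like the sketch below. The model name, dataset and reward function are placeholders, and exact argument names can vary between `trl` versions, so treat this as an outline rather than a drop-in script; the notebooks above are the canonical references.

```python
# Rough GRPO outline with Unsloth + TRL (placeholder model, data and reward function).
# Argument names may differ slightly across trl versions -- see the notebooks for working examples.
from unsloth import FastLanguageModel
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Meta-Llama-3.1-8B-Instruct",
    max_seq_length = 1024,
    load_in_4bit = True,   # QLoRA-style 4-bit base weights
)
model = FastLanguageModel.get_peft_model(
    model, r = 16, lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

# Toy dataset: GRPO only needs prompts plus whatever columns your reward function uses.
train_dataset = Dataset.from_list([
    {"prompt": "What is 2+2?",  "answer": "4"},
    {"prompt": "What is 10-3?", "answer": "7"},
])

def correctness_reward(completions, answer, **kwargs):
    # +2 if the reference answer appears in the completion, else 0.
    return [2.0 if ans in completion else 0.0
            for completion, ans in zip(completions, answer)]

trainer = GRPOTrainer(
    model = model,
    processing_class = tokenizer,
    reward_funcs = [correctness_reward],
    train_dataset = train_dataset,
    args = GRPOConfig(
        num_generations = 8,        # responses sampled per prompt (the "group")
        max_prompt_length = 128,
        max_completion_length = 512,
        learning_rate = 5e-6,
        max_steps = 300,            # rewards typically only start moving after ~300 steps
        output_dir = "grpo_outputs",
    ),
)
trainer.train()
```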
Expect your reward vs step to increase as time progresses, like this:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FUROleqJQ5aEp8MjTCWFf%252Funnamed.png%3Falt%3Dmedia%26token%3D12ca4975-7a0c-4d10-9178-20db28ad0451&width=768&dpr=4&quality=100&sign=a2046ca5&sv=2)

- Training loss tracking for GRPO is now built directly into Unsloth, eliminating the need for external tools like wandb. It now contains full logging details for all reward functions, including the total aggregated reward function itself.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252Fjo7fVFoFG2xbZPgL45el%252FScreenshot%25202025-02-20%2520at%252004-52-52%2520Copy%2520of%2520Yet%2520another%2520copy%2520of%2520Llama3.1_%288B%29-GRPO.ipynb%2520-%2520Colab.png%3Falt%3Dmedia%26token%3D041c17b1-ab98-4ab6-b6fb-8c7e5a8c07df&width=768&dpr=4&quality=100&sign=b8126c85&sv=2)

## [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl\#reward-functions-verifier) Reward Functions / Verifier

In Reinforcement Learning a **Reward Function** and a **Verifier** serve distinct roles in evaluating a model's output. In general you could treat them as the same thing; technically they're not, but since they are usually used in conjunction with each other, the distinction matters less in practice.

**Verifier**:

- Determines whether the generated response is correct or incorrect.
- It does not assign a numerical score—it simply verifies correctness.
- Example: If a model generates "5" for "2+2", the verifier checks and labels it as "wrong" (since the correct answer is 4).
- Verifiers can also execute code (e.g., in Python) to validate logic, syntax, and correctness without needing manual evaluation.

**Reward Function**:

- Converts verification results (or other criteria) into a numerical score.
- Example: If an answer is wrong, it might assign a penalty (-1, -2, etc.), while a correct answer could get a positive score (+1, +2).
- It can also penalize based on criteria beyond correctness, such as excessive length or poor readability.

**Key Differences**:

- A **Verifier** checks correctness but doesn't score.
- A **Reward Function** assigns a score but doesn't necessarily verify correctness itself.
- A Reward Function _can_ use a Verifier, but they are technically not the same.

### [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl\#understanding-reward-functions) **Understanding Reward Functions**

GRPO's primary goal is to maximize reward and learn how an answer was derived, rather than simply memorizing and reproducing responses from its training data.

- With every training step, GRPO **adjusts model weights** to maximize the reward. This process fine-tunes the model incrementally.
- **Regular fine-tuning** (without GRPO) only **maximizes next-word prediction probability** but does not optimize for a reward. GRPO **optimizes for a reward function** rather than just predicting the next word.
- You can **reuse data** across multiple epochs.
- **Default reward functions** can be predefined to be used on a wide array of use cases, or you can ask ChatGPT or a local model to generate them for you.
- There's no single correct way to design reward functions or verifiers - the possibilities are endless (a simple illustrative sketch follows below).
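To make the verifier vs. reward function distinction concrete, here is a small illustrative sketch (not taken from our notebooks) that mirrors the "2+2" example above: the verifier only answers correct/incorrect, while the reward function turns that check, plus a couple of extra criteria, into a number the trainer can maximize.

```python
# Illustrative sketch only: a verifier that checks correctness, and a reward
# function that converts checks into a numerical score.

def verifier(response: str, correct_answer: str) -> bool:
    """Verifier: returns True/False, no score."""
    return response.strip() == correct_answer

def reward_function(response: str, correct_answer: str) -> float:
    """Reward function: converts checks into a number GRPO can maximize."""
    score = 0.0
    score += 3.0 if verifier(response, correct_answer) else -3.0    # correctness
    score += 1.0 if any(ch.isdigit() for ch in response) else -1.0  # contains a number
    if len(response) > 200:                                         # penalize rambling
        score -= 1.0
    return score

print(reward_function("4", "4"))     # 4.0  (correct and contains a number)
print(reward_function("five", "4"))  # -4.0 (incorrect and no digit)
```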
However, they must be well-designed and meaningful, as poorly crafted rewards can unintentionally degrade model performance.

### [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl\#reward-function-examples) Reward Function Examples

You can refer to the examples below. You can input your generations into an LLM like ChatGPT 4o or Llama 3.1 (8B) and design a reward function and verifier to evaluate it. For example, feed your generations into an LLM of your choice and set a rule: "If the answer sounds too robotic, deduct 3 points." This helps refine outputs based on quality criteria.

#### [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl\#example-1-simple-arithmetic-task) **Example \#1: Simple Arithmetic Task**

- **Question:** `"2 + 2"`
- **Answer:** `"4"`
- **Reward Function 1:**
  - If a number is detected → **+1**
  - If no number is detected → **-1**
- **Reward Function 2:**
  - If the number matches the correct answer → **+3**
  - If incorrect → **-3**
- **Total Reward:** _Sum of all reward functions_

#### [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl\#example-2-email-automation-task) **Example \#2: Email Automation Task**

- **Question:** Inbound email
- **Answer:** Outbound email
- **Reward Functions:**
  - If the answer contains a required keyword → **+1**
  - If the answer exactly matches the ideal response → **+1**
  - If the response is too long → **-1**
  - If the recipient's name is included → **+1**
  - If a signature block (phone, email, address) is present → **+1**

### [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl\#gsm8k-reward-functions) GSM8K Reward Functions

In our examples, we've built on the existing GSM8K reward functions by [@willccbb](https://x.com/willccbb), which are popular and have been shown to be quite effective:

- **correctness\_reward\_func** – Rewards exact label matches.
- **int\_reward\_func** – Encourages integer-only answers.
- **soft\_format\_reward\_func** – Checks structure but allows minor newline mismatches.
- **strict\_format\_reward\_func** – Ensures response structure matches the prompt, including newlines.
- **xmlcount\_reward\_func** – Ensures exactly one of each XML tag in the response.

## [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl\#using-vllm) Using vLLM

You can now use [vLLM](https://github.com/vllm-project/vllm/) directly in your finetuning stack, which allows for much more throughput and allows you to finetune and do inference on the model at the same time! On 1x A100 40GB, expect 4000 tokens / s or so with Unsloth's dynamic 4bit quant of Llama 3.2 3B Instruct. On a 16GB Tesla T4 (free Colab GPU), you can get 300 tokens / s.

We also magically removed double memory usage when loading vLLM and Unsloth together, allowing for savings of 5GB or so for Llama 3.1 8B and 3GB for Llama 3.2 3B. Unsloth could originally finetune Llama 3.3 70B Instruct on 1x 48GB GPU, with the Llama 3.3 70B weights taking 40GB of VRAM. If we do not remove double memory usage, then we'll need >= 80GB of VRAM when loading Unsloth and vLLM together. But with Unsloth, you can still finetune and get the benefits of fast inference in one package in under 48GB of VRAM!
To use fast inference, first install vLLM, and instantiate Unsloth with `fast_inference`:

```python
# !pip install unsloth vllm
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.2-3B-Instruct",
    fast_inference = True,
)
model.fast_generate(["Hello!"])
```

## [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl\#grpo-requirement-guidelines) GRPO Requirement Guidelines

When you're using Unsloth to do GRPO, we smartly reduce VRAM usage by over 90% when compared to standard implementations with Flash Attention 2 by using multiple tricks! On 20K context lengths for example with 8 generations per prompt, Unsloth uses only **54.3GB of VRAM for Llama 3.1 8B**, whilst standard implementations take **510.8GB (90% less for Unsloth)**.

1. For GRPO's **GPU VRAM requirements for QLoRA 4-bit**, the general rule is that the model parameters = the amount of VRAM you will need (you can use less VRAM, but this is just to be safe). The more context length you set, the more VRAM. LoRA 16-bit will use at minimum 4x more VRAM.
2. Our new memory efficient linear kernels for GRPO slash memory usage by 8x or more. This shaves 68.5GB of memory, whilst being actually faster through the help of torch.compile!
3. We leverage our smart [Unsloth gradient checkpointing](https://unsloth.ai/blog/long-context) algorithm which we released a while ago. It smartly offloads intermediate activations to system RAM asynchronously whilst being only 1% slower. This shaves 52GB of memory.
4. Unsloth also uses the same GPU / CUDA memory space as the underlying inference engine (vLLM), unlike implementations in other packages. This shaves 16GB of memory.

| Metrics | Unsloth | Standard + FA2 |
| --- | --- | --- |
| Training Memory Cost (GB) | 42GB | 414GB |
| GRPO Memory Cost (GB) | 9.8GB | 78.3GB |
| Inference Cost (GB) | 0GB | 16GB |
| Inference KV Cache for 20K context length (GB) | 2.5GB | 2.5GB |
| Total Memory Usage | 54.33GB (90% less) | 510.8GB |

In typical standard GRPO implementations, you need to create 2 logits of size (8, 20K) to calculate the GRPO loss. This takes 2 \* 2 bytes \* 8 (num generations) \* 20K (context length) \* 128256 (vocabulary size) = 78.3GB in VRAM.

Unsloth shaves 8x memory usage for long context GRPO, so we need only an extra 9.8GB of VRAM for 20K context lengths!

We also need to keep the KV cache in 16-bit. Llama 3.1 8B has 32 layers, and both K and V are 1024 in size. So memory usage for 20K context length = 2 \* 2 bytes \* 32 layers \* 20K context length \* 1024 = 2.5GB per batch. We would set the batch size for vLLM to 8, but we shall leave it at 1 for our calculations to save VRAM. Otherwise you will need 20GB for the KV cache.

## [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl\#how-grpo-works) How GRPO Works:

DeepSeek's researchers observed an "aha moment" when training R1-Zero with pure reinforcement learning (RL). The model learned to extend its thinking time by reevaluating its initial approach, without any human guidance or predefined instructions.

1. The model generates groups of responses.
2. Each response is scored based on correctness or another metric created by some set reward function, rather than an LLM reward model.
3. The average score of the group is computed.
4. Each response's score is compared to the group average.
5. The model is reinforced to favor higher-scoring responses.

As an example, assume we want a model to solve:

What is 1+1?
>> Chain of thought/working out
>> The answer is 2.

What is 2+2?

>> Chain of thought/working out
>> The answer is 4.

Originally, one had to collect large swathes of data to fill the working out / chain of thought process. But GRPO (the algorithm DeepSeek uses) or other RL algorithms can steer the model to automatically exhibit reasoning capabilities and create the reasoning trace.

Instead, we need to create good reward functions or verifiers. For example, if it gets the correct answer, give it a score of 1. If some words are mis-spelt, minus 0.1. And so on! We can provide many, many functions to reward the process.

[PreviousLlama 4: How to Run & Fine-tune](https://docs.unsloth.ai/basics/llama-4-how-to-run-and-fine-tune) [NextTutorial: Train your own Reasoning model with GRPO](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo) Last updated 1 month ago Was this helpful?
## [Direct link to heading](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me\#understanding-fine-tuning) Understanding Fine-Tuning

Fine-tuning an LLM customizes its behavior, deepens its domain expertise, and optimizes its performance for specific tasks. By refining a pre-trained model (e.g. _Llama-3.1-8B_) with specialized data, you can:

- **Update Knowledge** – Introduce new, domain-specific information that the base model didn't originally include.
- **Customize Behavior** – Adjust the model's tone, personality, or response style to fit specific needs or a brand voice.
- **Optimize for Tasks** – Improve accuracy and relevance on particular tasks or queries your use-case requires.

Think of fine-tuning as creating a specialized expert out of a generalist model. Some debate whether to use Retrieval-Augmented Generation (RAG) instead of fine-tuning, but fine-tuning can incorporate knowledge and behaviors directly into the model in ways RAG cannot. In practice, combining both approaches yields the best results - leading to greater accuracy, better usability, and fewer hallucinations.

### [Direct link to heading](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me\#real-world-applications-of-fine-tuning) Real-World Applications of Fine-Tuning

Fine-tuning can be applied across various domains and needs. Here are a few practical examples of how it makes a difference:

- **Sentiment Analysis for Finance** – Train an LLM to determine if a news headline impacts a company positively or negatively, tailoring its understanding to financial context.
- **Customer Support Chatbots** – Fine-tune on past customer interactions to provide more accurate and personalized responses in a company's style and terminology.
- **Legal Document Assistance** – Fine-tune on legal texts (contracts, case law, regulations) for tasks like contract analysis, case law research, or compliance support, ensuring the model uses precise legal language.

## [Direct link to heading](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me\#the-benefits-of-fine-tuning) The Benefits of Fine-Tuning

Fine-tuning offers several notable benefits beyond what a base model or a purely retrieval-based system can provide:

#### [Direct link to heading](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me\#fine-tuning-vs.-rag-whats-the-difference) Fine-Tuning vs. RAG: What's the Difference?

Fine-tuning can do mostly everything RAG can - but not the other way around. During training, fine-tuning embeds external knowledge directly into the model. This allows the model to handle niche queries, summarize documents, and maintain context without relying on an outside retrieval system. That's not to say RAG lacks advantages: it excels at accessing up-to-date information from external databases. It is in fact possible to retrieve fresh data with fine-tuning as well; however, it is better to combine RAG with fine-tuning for efficiency.

#### [Direct link to heading](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me\#task-specific-mastery) Task-Specific Mastery

Fine-tuning deeply integrates domain knowledge into the model. This makes it highly effective at handling structured, repetitive, or nuanced queries, scenarios where RAG-alone systems often struggle.
In other words, a fine-tuned model becomes a specialist in the tasks or content it was trained on. #### [Direct link to heading](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me\#independence-from-retrieval) Independence from Retrieval A fine-tuned model has no dependency on external data sources at inference time. It remains reliable even if a connected retrieval system fails or is incomplete, because all needed information is already within the model’s own parameters. This self-sufficiency means fewer points of failure in production. #### [Direct link to heading](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me\#faster-responses) Faster Responses Fine-tuned models don’t need to call out to an external knowledge base during generation. Skipping the retrieval step means they can produce answers much more quickly. This speed makes fine-tuned models ideal for time-sensitive applications where every second counts. #### [Direct link to heading](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me\#custom-behavior-and-tone) Custom Behavior and Tone Fine-tuning allows precise control over how the model communicates. This ensures the model’s responses stay consistent with a brand’s voice, adhere to regulatory requirements, or match specific tone preferences. You get a model that not only knows _what_ to say, but _how_ to say it in the desired style. #### [Direct link to heading](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me\#reliable-performance) Reliable Performance Even in a hybrid setup that uses both fine-tuning and RAG, the fine-tuned model provides a reliable fallback. If the retrieval component fails to find the right information or returns incorrect data, the model’s built-in knowledge can still generate a useful answer. This guarantees more consistent and robust performance for your system. ## [Direct link to heading](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me\#common-misconceptions) Common Misconceptions Despite fine-tuning’s advantages, a few myths persist. Let’s address two of the most common misconceptions about fine-tuning: ### [Direct link to heading](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me\#does-fine-tuning-add-new-knowledge-to-a-model) Does Fine-Tuning Add New Knowledge to a Model? **Yes - it absolutely can.** A common myth suggests that fine-tuning doesn’t introduce new knowledge, but in reality it does. If your fine-tuning dataset contains new domain-specific information, the model will learn that content during training and incorporate it into its responses. In effect, fine-tuning _can and does_ teach the model new facts and patterns from scratch. ### [Direct link to heading](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me\#is-rag-always-better-than-fine-tuning) Is RAG Always Better Than Fine-Tuning? **Not necessarily.** Many assume RAG will consistently outperform a fine-tuned model, but that’s not the case when fine-tuning is done properly. In fact, a well-tuned model often matches or even surpasses RAG-based systems on specialized tasks. 
Claims that “RAG is always better” usually stem from fine-tuning attempts that weren’t optimally configured - for example, using incorrect [LoRA parameters](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide) or insufficient training. Unsloth takes care of these complexities by automatically selecting the best parameter configurations for you. All you need is a good-quality dataset, and you'll get a fine-tuned model that performs to its fullest potential.

### [Direct link to heading](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me#is-fine-tuning-expensive) Is Fine-Tuning Expensive?

**Not at all!** While full fine-tuning or pretraining can be costly, neither is required in most cases - pretraining in particular is rarely necessary. LoRA or QLoRA fine-tuning can usually be done for minimal cost. In fact, with Unsloth’s [free notebooks](https://docs.unsloth.ai/get-started/unsloth-notebooks) for Colab or Kaggle, you can fine-tune models without spending a dime. Better yet, you can even fine-tune locally on your own device.

## [Direct link to heading](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me#faq) FAQ:

### [Direct link to heading](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me#why-you-should-combine-rag-and-fine-tuning) Why You Should Combine RAG & Fine-Tuning

Instead of choosing between RAG and fine-tuning, consider using **both** together for the best results. Combining a retrieval system with a fine-tuned model brings out the strengths of each approach. Here’s why:

- **Task-Specific Expertise** – Fine-tuning excels at specialized tasks or formats (making the model an expert in a specific area), while RAG keeps the model up-to-date with the latest external knowledge.
- **Better Adaptability** – A fine-tuned model can still give useful answers even if the retrieval component fails or returns incomplete information. Meanwhile, RAG ensures the system stays current without requiring you to retrain the model for every new piece of data.
- **Efficiency** – Fine-tuning provides a strong foundational knowledge base within the model, and RAG handles dynamic or quickly-changing details without the need for exhaustive re-training from scratch. This balance yields an efficient workflow and reduces overall compute costs.

### [Direct link to heading](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me#lora-vs.-qlora-which-one-to-use) LoRA vs. QLoRA: Which One to Use?

When it comes to implementing fine-tuning, two popular techniques can dramatically cut down the compute and memory requirements: **LoRA** and **QLoRA**. Here’s a quick comparison of each:

- **LoRA (Low-Rank Adaptation)** – Fine-tunes only a small set of additional “adapter” weight matrices (in 16-bit precision), while leaving most of the original model unchanged. This significantly reduces the number of parameters that need updating during training.
- **QLoRA (Quantized LoRA)** – Combines LoRA with 4-bit quantization of the model weights, enabling efficient fine-tuning of very large models on minimal hardware. By using 4-bit precision where possible, it dramatically lowers memory usage and compute overhead.

We recommend starting with **QLoRA**, as it’s one of the most efficient and accessible methods available; a minimal sketch of both setups is shown below.
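As an illustrative sketch only - the model name, rank, and sequence length are placeholder assumptions, not prescribed values - the practical difference between the two comes down to how the base model is loaded with Unsloth's `FastLanguageModel` before attaching the LoRA adapters:

```python
from unsloth import FastLanguageModel

# Load the base model. load_in_4bit = True gives QLoRA (4-bit base weights);
# load_in_4bit = False gives standard 16-bit LoRA instead.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",  # placeholder model choice
    max_seq_length = 2048,                       # placeholder sequence length
    load_in_4bit = True,
)

# Attach the small LoRA adapter matrices; only these are trained,
# while the base weights stay frozen in both LoRA and QLoRA.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,             # placeholder LoRA rank
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing = "unsloth",
    random_state = 3407,
)
```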
Thanks to Unsloth’s [dynamic 4-bit](https://unsloth.ai/blog/dynamic-4bit) quants, the accuracy loss compared to standard 16-bit LoRA fine-tuning is now negligible. ### [Direct link to heading](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me\#experimentation-is-key) Experimentation is Key There’s no single “best” approach to fine-tuning - only best practices for different scenarios. It’s important to experiment with different methods and configurations to find what works best for your dataset and use case. A great starting point is **QLoRA (4-bit)**, which offers a very cost-effective, resource-friendly way to fine-tune models without heavy computational requirements. [🧠LoRA Hyperparameters Guide](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide) [PreviousUnsloth Requirements](https://docs.unsloth.ai/get-started/beginner-start-here/unsloth-requirements) [NextUnsloth Notebooks](https://docs.unsloth.ai/get-started/unsloth-notebooks) Last updated 2 months ago Was this helpful?
• [**GRPO (Reasoning)**](https://docs.unsloth.ai/get-started/unsloth-notebooks#grpo-reasoning-notebooks) • [**Text-to-Speech (TTS)**](https://docs.unsloth.ai/get-started/unsloth-notebooks#text-to-speech-tts-notebooks) • [**Vision (Multimodal)**](https://docs.unsloth.ai/get-started/unsloth-notebooks#vision-multimodal-notebooks)

Notebooks are available for both Google Colab and Kaggle.

#### [Direct link to heading](https://docs.unsloth.ai/get-started/unsloth-notebooks#standard-notebooks) Standard notebooks:

- [**Qwen3 (14B)**](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_(14B)-Reasoning-Conversational.ipynb) **- new**
- [Gemma 3 (4B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3_(4B).ipynb)
- [Phi-4 (14B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb)
- [**Synthetic Data Generation Llama 3.2 (3B)**](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Meta_Synthetic_Data_Llama3_2_(3B).ipynb) - new
- [Llama 3.1 (8B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb)
- [Llama 3.2 (1B + 3B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb)
- [Mistral v0.3 Instruct (7B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb)
- [Gemma 2 (9B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma2_(9B)-Alpaca.ipynb)
- [Qwen 2.5 (7B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb)

#### [Direct link to heading](https://docs.unsloth.ai/get-started/unsloth-notebooks#grpo-reasoning-notebooks) GRPO (Reasoning) notebooks:

- [**Gemma 3 (1B)**](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3_(1B)-GRPO.ipynb)
- [Llama 3.2 (3B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Advanced_Llama3_1_(3B)_GRPO_LoRA.ipynb) - Preliminary Advanced GRPO LoRA
- [Llama 3.1 (8B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-GRPO.ipynb)
- [Phi-4 (14B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4_(14B)-GRPO.ipynb)
- [Qwen2.5 (3B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(3B)-GRPO.ipynb)

#### [Direct link to heading](https://docs.unsloth.ai/get-started/unsloth-notebooks#text-to-speech-tts-notebooks) Text-to-Speech (TTS) notebooks:

Please note we have not officially announced support for TTS models yet. You can use them, but you might experience errors. If so, please report them on our GitHub - thank you!
- [Orpheus-TTS (3B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Orpheus_(3B)-TTS.ipynb) - [Whisper Large V3](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Whisper.ipynb) - [Llasa-TTS (3B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llasa_TTS_(3B).ipynb) - [Spark-TTS (0.5B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Spark_TTS_(0_5B).ipynb) - [Oute-TTS (1B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Oute_TTS_(1B).ipynb) #### [Direct link to heading](https://docs.unsloth.ai/get-started/unsloth-notebooks\#vision-multimodal-notebooks) Vision (Multimodal) notebooks: - [Llama 3.2 Vision (11B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) - [Qwen2.5-VL (7B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2_VL_(7B)-Vision.ipynb) - [Pixtral (12B) 2409](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Pixtral_(12B)-Vision.ipynb) #### [Direct link to heading](https://docs.unsloth.ai/get-started/unsloth-notebooks\#other-important-notebooks) Other important notebooks: - [Ollama](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3_(8B)-Ollama.ipynb) - [ORPO](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3_(8B)-ORPO.ipynb) - [Continued Pretraining](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-CPT.ipynb) - [DPO Zephyr](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Zephyr_(7B)-DPO.ipynb) - [_**Inference only**_](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Inference.ipynb) - [Phi-3.5 (mini)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_3.5_Mini-Conversational.ipynb) - [Llama 3 (8B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3_(8B)-Alpaca.ipynb) - [Phi-3 (medium)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_3_Medium-Conversational.ipynb) - [Mistral NeMo (12B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_Nemo_(12B)-Alpaca.ipynb) #### [Direct link to heading](https://docs.unsloth.ai/get-started/unsloth-notebooks\#specific-use-case-notebooks) Specific use-case notebooks: - [_**Inference chat UI**_](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Unsloth_Studio.ipynb) - [Text Classification](https://colab.research.google.com/github/timothelaborie/text_classification_scripts/blob/main/unsloth_classification.ipynb) by Timotheeee - [Multiple Datasets](https://colab.research.google.com/drive/1njCCbE1YVal9xC83hjdo2hiGItpY_D6t?usp=sharing) by Flail - [KTO](https://colab.research.google.com/drive/1MRgGtLWuZX4ypSfGguFgC-IblTvO2ivM?usp=sharing) by Jeffrey - [Conversational](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) - [ChatML](https://colab.research.google.com/drive/15F1xyn8497_dUbxZP4zWmPZ3PJx1Oymv?usp=sharing) - [Text Completion](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_(7B)-Text_Completion.ipynb) #### [Direct link to heading](https://docs.unsloth.ai/get-started/unsloth-notebooks\#rest-of-notebooks) Rest of notebooks: - 
[Gemm](https://colab.research.google.com/drive/1weTpKOjBZxZJ5PQ-Ql8i6ptAY2x-FWVA?usp=sharing) [a 2 (2B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma2_(2B)-Alpaca.ipynb) - [Qwen 2.5 Coder (14B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_Coder_(14B)-Conversational.ipynb) - [Mistral Small (22B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_Small_(22B)-Alpaca.ipynb) - [TinyLlama](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/TinyLlama_(1.1B)-Alpaca.ipynb) - [CodeGemma (7B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/CodeGemma_(7B)-Conversational.ipynb) - [Mistral v0.3 (7B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Alpaca.ipynb) - [Qwen2 (7B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2_(7B)-Alpaca.ipynb) - [Llama 3.1 (8B)](https://www.kaggle.com/notebooks/welcome?src=https://github.com/unslothai/notebooks/blob/main/nb/Kaggle-Phi_4_(14B)-GRPO.ipynb&accelerator=nvidiaTeslaT4) \- GRPO reasoning - [Phi-4 (14B)](https://www.kaggle.com/notebooks/welcome?src=https://github.com/unslothai/notebooks/blob/main/nb/Kaggle-Phi_4_(14B)-GRPO.ipynb&accelerator=nvidiaTeslaT4) \- GRPO reasoning - [Qwen2.5 (3B)](https://www.kaggle.com/notebooks/welcome?src=https://github.com/unslothai/notebooks/blob/main/nb/Kaggle-Qwen2.5_(3B)-GRPO.ipynb&accelerator=nvidiaTeslaT4) \- GRPO reasoning - [Phi-4 (14B)](https://www.kaggle.com/code/danielhanchen/phi-4-finetuning-unsloth-notebook) - [Llama 3.1 (8B)](https://www.kaggle.com/code/danielhanchen/kaggle-llama-3-1-8b-unsloth-notebook) - [Llama 3.2 (1B + 3B)](https://www.kaggle.com/code/danielhanchen/fixed-kaggle-llama-3-2-1b-3b-conversation) - [Llama 3.2 Vision](https://www.kaggle.com/code/danielhanchen/llama-3-2-vision-finetuning-unsloth-kaggle) - [Mistral NeMo (12B)](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-nemo-12b-unsloth-notebook) - [Qwen 2.5 (14B)](https://www.kaggle.com/code/danielhanchen/kaggle-qwen-2-5-conversational-unsloth) - [Gemma 2 (9B)](https://www.kaggle.com/code/danielhanchen/kaggle-gemma2-9b-unsloth-notebook) - [Phi-3 (medium)](https://www.kaggle.com/code/danielhanchen/kaggle-phi-3-medium-unsloth-notebook) - [Qwen2-VL (7B)](https://www.kaggle.com/code/danielhanchen/qwen2-vision-finetuning-unsloth-kaggle) - [Qwen2.5-Coder (14B)](https://www.kaggle.com/code/danielhanchen/kaggle-qwen-2-5-coder-14b-conversational) - [Llama 3 (8B)](https://www.kaggle.com/code/danielhanchen/kaggle-llama-3-8b-unsloth-notebook) - [Mistral v0.3 (7B)](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) - [Qwen2 (7B)](https://docs.unsloth.ai/) To view a complete list of all our Kaggle notebooks, [click here](https://github.com/unslothai/notebooks#-kaggle-notebooks). Feel free to contribute to the notebooks by visiting our [repo](https://github.com/unslothai/notebooks)! [PreviousFAQ + Is Fine-tuning Right For Me?](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me) [NextAll Our Models](https://docs.unsloth.ai/get-started/all-our-models) Last updated 2 days ago Was this helpful?
DeepSeek developed [GRPO](https://unsloth.ai/blog/grpo) (Group Relative Policy Optimization) to train their R1 reasoning models.

## [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo#quickstart) Quickstart

These instructions are for our pre-made Google Colab [notebooks](https://docs.unsloth.ai/get-started/unsloth-notebooks). If you are installing Unsloth locally, you can also copy our notebooks inside your favorite code editor.

#### [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo#the-grpo-notebooks-we-are-using-gemma-3-1b-llama-3.1-8b-phi-4-14b-and-qwen2.5-3b) The GRPO notebooks we are using:

[Gemma 3 (1B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3_(1B)-GRPO.ipynb), [Llama 3.1 (8B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/HuggingFace%20Course-Gemma3_(1B)-GRPO.ipynb), [Phi-4 (14B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4_(14B)-GRPO.ipynb) and [Qwen2.5 (3B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(3B)-GRPO.ipynb)

1 ### [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo#install-unsloth) Install Unsloth

If you're using our Colab notebook, click **Runtime > Run all**. We highly recommend checking out our [Fine-tuning Guide](https://docs.unsloth.ai/get-started/fine-tuning-guide) before getting started. If installing locally, ensure you have the correct [requirements](https://docs.unsloth.ai/get-started/beginner-start-here/unsloth-requirements) and use `pip install unsloth` on Linux, or follow our [Windows install](https://docs.unsloth.ai/get-started/installing-+-updating/windows-installation) instructions.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FCovHTH7dI2GcwNZm5TxF%252Fimage.png%3Falt%3Dmedia%26token%3Da157e33b-ad01-4174-a01c-67f742e4e732&width=768&dpr=4&quality=100&sign=e2f6a15e&sv=2)

2 ### [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo#learn-about-grpo-and-reward-functions) Learn about GRPO & Reward Functions

Before we get started, it is recommended to learn more about GRPO, reward functions and how they work. Read more about them, including [tips & tricks, here](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl#basics-tips).

You will also need enough VRAM. In general, a model's parameter count (in billions) roughly equals the amount of VRAM (in GB) you will need. In Colab, we are using their free 16GB VRAM GPUs, which can train any model up to 16B in parameters.

3 ### [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo#configure-desired-settings) Configure desired settings

We have already pre-selected optimal settings for the best results, and you can change the model to any model listed in our [supported models](https://docs.unsloth.ai/get-started/all-our-models). We would not recommend changing other settings if you're a beginner. A rough sketch of what this configuration looks like is shown below.
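As an illustrative sketch only - the exact cell differs per notebook, and the model name, sequence length and LoRA rank here are placeholder assumptions rather than recommended values - the configuration step loads the model roughly like this (the screenshot that follows shows an actual notebook cell):

```python
from unsloth import FastLanguageModel

max_seq_length = 1024   # placeholder; longer sequences leave more room for reasoning chains
lora_rank = 32          # placeholder; a higher rank adds capacity but uses more VRAM

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "meta-llama/meta-Llama-3.1-8B-Instruct",  # swap for any supported model
    max_seq_length = max_seq_length,
    load_in_4bit = True,      # 4-bit loading so the model fits in free Colab VRAM
    fast_inference = True,    # vLLM-backed generation for the GRPO rollouts
    max_lora_rank = lora_rank,
)
# LoRA adapters are then attached with FastLanguageModel.get_peft_model(...),
# just as in the standard fine-tuning notebooks.
```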
![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252Fyd3RkyPKInZBbvX1Memf%252Fimage.png%3Falt%3Dmedia%26token%3Da9ca4ce4-2e9f-4b5a-a65c-646d267411c8&width=768&dpr=4&quality=100&sign=89b12b4e&sv=2)

4 ### [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo#data-preparation) Data preparation

We have pre-selected OpenAI's [GSM8K](https://huggingface.co/datasets/openai/gsm8k) dataset, which contains grade school math problems, but you could change it to your own or any public one on Hugging Face. You can read more about [datasets here](https://docs.unsloth.ai/basics/datasets-guide).

Your dataset should still have at least 2 columns for question and answer pairs. However, the answer must not reveal the reasoning used to derive it from the question. See below for an example:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FqdTVcMEeJ3kzPToSY1X8%252Fimage.png%3Falt%3Dmedia%26token%3D3dd8d9d7-1847-42b6-a73a-f9c995b798b1&width=768&dpr=4&quality=100&sign=7dd5cdc9&sv=2)

We'll structure the data to prompt the model to articulate its reasoning before delivering an answer. To start, we'll establish a clear format for both prompts and responses.

```python
# Define the system prompt that instructs the model to use a specific format
SYSTEM_PROMPT = """
Respond in the following format:
<reasoning>
...
</reasoning>
<answer>
...
</answer>
"""

XML_COT_FORMAT = """\
<reasoning>
{reasoning}
</reasoning>
<answer>
{answer}
</answer>
"""
```

Now, to prepare the dataset:

```python
import re
from datasets import load_dataset, Dataset

# Helper functions to extract answers from different formats
def extract_xml_answer(text: str) -> str:
    answer = text.split("<answer>")[-1]
    answer = answer.split("</answer>")[0]
    return answer.strip()

def extract_hash_answer(text: str) -> str | None:
    if "####" not in text:
        return None
    return text.split("####")[1].strip()

# Function to prepare the GSM8K dataset
def get_gsm8k_questions(split="train") -> Dataset:
    data = load_dataset("openai/gsm8k", "main")[split]
    data = data.map(
        lambda x: {
            "prompt": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": x["question"]},
            ],
            "answer": extract_hash_answer(x["answer"]),
        }
    )
    return data

dataset = get_gsm8k_questions()
```

The dataset is prepared by extracting the answers and formatting them as structured strings.

5 ### [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo#reward-functions-verifier) Reward Functions/Verifier

[Reward Functions/Verifiers](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl#reward-functions-verifier) let us know whether the model is doing well according to the dataset you have provided. Each generation is scored relative to the average score of the other generations for the same prompt. You can create your own reward functions; however, we have already pre-selected [Will's GSM8K](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl#gsm8k-reward-functions) reward functions for you. One of them is sketched below to show the general shape of a reward function.
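For illustration, here is a simplified sketch in the spirit of the pre-selected GSM8K correctness reward (the functions in the notebook differ in detail; the signature below assumes the conversational `prompt`/`answer` format prepared above):

```python
def correctness_reward_func(prompts, completions, answer, **kwargs) -> list[float]:
    # Each completion is a list of chat messages; take the generated assistant text.
    responses = [completion[0]["content"] for completion in completions]
    # Pull out whatever the model wrote between <answer> ... </answer>.
    extracted = [extract_xml_answer(r) for r in responses]
    # Reward exact matches with the reference answer; everything else gets 0.
    return [2.0 if r == a else 0.0 for r, a in zip(extracted, answer)]
```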
With this, we have 5 different ways in which we can reward each generation.

You can input your generations into an LLM like ChatGPT 4o or Llama 3.1 (8B) and design a reward function and verifier to evaluate it. For example, feed your generations into an LLM of your choice and set a rule: "If the answer sounds too robotic, deduct 3 points." This helps refine outputs based on quality criteria. **See examples** of what they can look like [here](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl#reward-function-examples).

**Example Reward Function for an Email Automation Task:**

- **Question:** Inbound email
- **Answer:** Outbound email
- **Reward Functions:**
  - If the answer contains a required keyword → **+1**
  - If the answer exactly matches the ideal response → **+1**
  - If the response is too long → **-1**
  - If the recipient's name is included → **+1**
  - If a signature block (phone, email, address) is present → **+1**

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F6GRcqgUKmKn2dWCk4nWK%252Fimage.png%3Falt%3Dmedia%26token%3Dac153141-03f8-4795-9074-ad592289bd70&width=768&dpr=4&quality=100&sign=3f226098&sv=2)

6 ### [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo#train-your-model) Train your model

We have pre-selected hyperparameters for the most optimal results; however, you can change them. Read all about [parameters here](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide).

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F1MpLSyaOH3j8MhQvquqX%252Fimage.png%3Falt%3Dmedia%26token%3D818034b1-f2db-464d-a108-3b2c6897edb7&width=768&dpr=4&quality=100&sign=4da81ed4&sv=2)

The **GRPOConfig** defines key hyperparameters for training:

- `use_vllm`: Activates fast inference using vLLM.
- `learning_rate`: Determines the model's learning speed.
- `num_generations`: Specifies the number of completions generated per prompt.
- `max_steps`: Sets the total number of training steps.

You should see the reward increase over time. We recommend training for at least 300 steps (which may take around 30 minutes); for optimal results, you should train for longer. You will also see sample answers, which let you watch how the model is learning. Some may contain steps, XML tags, attempts, etc., and the idea is that as the model trains it gets scored higher and higher, so its outputs keep improving until we get the long reasoning chains of answers we desire.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FyRmUGe8laUKIl0RKwlE6%252Fimage.png%3Falt%3Dmedia%26token%3D3ff931cc-0d2b-4a9c-bbe1-b6289b22d157&width=768&dpr=4&quality=100&sign=40488764&sv=2)

A minimal, illustrative sketch of how this trainer is wired up is shown below.
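As an illustrative sketch only (the values are placeholders and the notebooks set several additional options), the training loop described above is assembled with TRL's `GRPOConfig` and `GRPOTrainer` roughly like this:

```python
from trl import GRPOConfig, GRPOTrainer

training_args = GRPOConfig(
    use_vllm = True,           # fast rollout generation with vLLM
    learning_rate = 5e-6,      # placeholder value
    num_generations = 6,       # completions sampled per prompt
    max_steps = 300,           # total training steps
    max_prompt_length = 256,
    max_completion_length = 512,
    output_dir = "outputs",
)

trainer = GRPOTrainer(
    model = model,
    processing_class = tokenizer,
    reward_funcs = [correctness_reward_func],  # plus the other pre-selected reward functions
    args = training_args,
    train_dataset = dataset,
)
trainer.train()
```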
7 ### [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo#run-and-evaluate-your-model) Run & Evaluate your model

Run your model by clicking the play button. In the first example there is usually no reasoning in the answer; in order to see the reasoning, we first need to save the LoRA weights we just trained with GRPO:

```python
model.save_lora("grpo_saved_lora")
```

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FkLHdlRVKN58tM7SGKp3O%252Fimage.png%3Falt%3Dmedia%26token%3Db43a8164-7eae-4ec4-bf59-976078f9be31&width=768&dpr=4&quality=100&sign=e2241c49&sv=2)

The first inference example run has no reasoning. You must load the LoRA and test it to reveal the reasoning.

Then we load the LoRA and test it. Our reasoning model is much better - it's not always correct, since we only trained it for an hour or so - it'll be better if we extend the sequence length and train for longer! You can then save your model to GGUF, Ollama etc. by following our [guide here](https://docs.unsloth.ai/get-started/fine-tuning-guide#id-7.-running--saving-the-model).

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FYdz5ch20Ig8JlumBesle%252Fimage.png%3Falt%3Dmedia%26token%3D8aea2867-b8a8-470a-aa4b-a7b9cdd64c3c&width=768&dpr=4&quality=100&sign=a9ddceab&sv=2)

If you are still not getting any reasoning, you may have either trained for too few steps or your reward function/verifier was not optimal.

8 ### [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo#save-your-model) Save your model

We have multiple options for saving your fine-tuned model, but we’ll focus on the easiest and most popular approaches, which you can read more about [here](https://docs.unsloth.ai/basics/running-and-saving-models).

**Saving in 16-bit Precision**

You can save the model with 16-bit precision using the following command:

```python
# Save to 16-bit precision
model.save_pretrained_merged("model", tokenizer, save_method="merged_16bit")
```

#### [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo#pushing-to-hugging-face-hub) **Pushing to Hugging Face Hub**

To share your model, we’ll push it to the Hugging Face Hub using the `push_to_hub_merged` method. This allows saving the model in multiple quantization formats.

```python
# Push to Hugging Face Hub (requires a token)
model.push_to_hub_merged(
    "your-username/model-name",
    tokenizer,
    save_method="merged_16bit",
    token="your-token"
)
```

#### [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo#saving-in-gguf-format-for-llama.cpp) **Saving in GGUF Format for llama.cpp**

Unsloth also supports saving in **GGUF format**, making it compatible with **llama.cpp** and **Ollama**.

```python
model.push_to_hub_gguf(
    "your-username/model-name",
    tokenizer,
    quantization_method=["q4_k_m", "q8_0", "q5_k_m"],
    token="your-token",
)
```

Once saved in GGUF format, the model can be easily deployed in lightweight environments using **llama.cpp** or used in other inference engines. A quick way to sanity-check the merged 16-bit checkpoint is sketched below.
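As a rough sketch (not part of the official notebook) of reloading the merged 16-bit checkpoint saved to the `model` folder above and running a quick test generation, you could do something like the following; the prompt here is just an example:

```python
from unsloth import FastLanguageModel

# Reload the merged 16-bit model saved to the "model" folder above.
model, tokenizer = FastLanguageModel.from_pretrained("model", max_seq_length = 1024)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},   # same format used during training
    {"role": "user", "content": "What is 13 multiplied by 7?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt = True, return_tensors = "pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens = 256)
print(tokenizer.decode(outputs[0], skip_special_tokens = True))
```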
## [Direct link to heading](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/tutorial-train-your-own-reasoning-model-with-grpo\#video-tutorials) Video Tutorials Here are some video tutorials created by amazing YouTubers who we think are fantastic! [iframe](https://cdn.iframe.ly/fqKHqnL) Local GRPO on your own device [iframe](https://cdn.iframe.ly/jpfe4Lg) Great to learn about how to prep your dataset and explanations behind Reinforcement Learning + GRPO basics [iframe](https://cdn.iframe.ly/DSEYgDM) [iframe](https://cdn.iframe.ly/mnjsSyN) [PreviousReasoning - GRPO & RL](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl) [NextReinforcement Learning - DPO, ORPO & KTO](https://docs.unsloth.ai/basics/reasoning-grpo-and-rl/reinforcement-learning-dpo-orpo-and-kto) Last updated 1 month ago Was this helpful?
Only use Conda if you have it. If not, use [Pip](https://docs.unsloth.ai/get-started/installing-+-updating/pip-install).

Select `pytorch-cuda=11.8` for CUDA 11.8 or `pytorch-cuda=12.1` for CUDA 12.1. We support `python=3.10`, `3.11` and `3.12`.

```bash
conda create --name unsloth_env \
    python=3.11 \
    pytorch-cuda=12.1 \
    pytorch cudatoolkit xformers -c pytorch -c nvidia -c xformers \
    -y
conda activate unsloth_env

pip install unsloth
```

If you're looking to install Conda in a Linux environment, [read here](https://docs.anaconda.com/miniconda/), or run the below:

```bash
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm -rf ~/miniconda3/miniconda.sh
~/miniconda3/bin/conda init bash
~/miniconda3/bin/conda init zsh
```

[PreviousWindows Installation](https://docs.unsloth.ai/get-started/installing-+-updating/windows-installation) [NextGoogle Colab](https://docs.unsloth.ai/get-started/installing-+-updating/google-colab)

Last updated 2 months ago

Was this helpful?
DeepSeek is at it again! After releasing V3, R1 Zero and R1 back in December 2024 and January 2025, DeepSeek updated their checkpoints / models for V3 and released a March update!

According to DeepSeek, MMLU-Pro jumped +5.3% to 81.2%, **GPQA +9.3%**, AIME +19.8% and LiveCodeBench +10.0%! They provided a plot showing how the update compares to the previous V3 checkpoint and to other models like GPT-4.5 and Claude Sonnet 3.7.

**But how do we run a 671 billion parameter model locally?**

| MoE Bits | Type | Disk Size | Accuracy | Link | Details |
| --- | --- | --- | --- | --- | --- |
| 1.78bit | IQ1_S | **173GB** | Ok | [Link](https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF/tree/main/UD-IQ1_S) | 2.06/1.56bit |
| 1.93bit | IQ1_M | **183GB** | Fair | [Link](https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF/tree/main/UD-IQ1_M) | 2.5/2.06/1.56 |
| 2.42bit | IQ2_XXS | **203GB** | **Suggested** | [Link](https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF/tree/main/UD-IQ2_XXS) | 2.5/2.06bit |
| 2.71bit | Q2_K_XL | **231GB** | **Suggested** | [Link](https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF/tree/main/UD-Q2_K_XL) | 3.5/2.5bit |
| 3.5bit | Q3_K_XL | **320GB** | Great | [Link](https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF/tree/main/UD-Q3_K_XL) | 4.5/3.5bit |
| 4.5bit | Q4_K_XL | **406GB** | Best | [Link](https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF/tree/main/UD-Q4_K_XL) | 5.5/4.5bit |

DeepSeek V3's original upload is in float8, which takes 715GB. Using Q4_K_M halves the file size to 404GB or so, and our dynamic 1.78bit quant fits in around 151GB. **I suggest using our 2.7bit quant to balance size and accuracy! The 2.4bit one also works well!**

## [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-v3-0324-how-to-run-locally#official-recommended-settings) ⚙️ Official Recommended Settings

According to [DeepSeek](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324), these are the recommended settings for inference:

- **Temperature of 0.3** (Maybe 0.0 for coding as [seen here](https://api-docs.deepseek.com/quick_start/parameter_settings))
- Min_P of 0.00 (optional, but 0.01 works well; llama.cpp's default is 0.1)
- Chat template: `<|User|>Create a simple playable Flappy Bird Game in Python. Place the final game inside of a markdown section.<|Assistant|>`
- A BOS token of `<|begin▁of▁sentence|>` is auto added during tokenization (do NOT add it manually!)
- DeepSeek mentioned using a **system prompt** as well (optional) - it's in Chinese: `该助手为DeepSeek Chat,由深度求索公司创造。\n今天是3月24日,星期一。` which translates to: `The assistant is DeepSeek Chat, created by DeepSeek.\nToday is Monday, March 24th.`
- **For KV cache quantization, use 8bit, NOT 4bit - we found 4bit to be noticeably worse.**

## [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-v3-0324-how-to-run-locally#tutorial-how-to-run-deepseek-v3-in-llama.cpp) 📖 Tutorial: How to Run DeepSeek-V3 in llama.cpp

1. Obtain the latest `llama.cpp` on [GitHub here](https://github.com/ggml-org/llama.cpp). You can follow the build instructions below as well. Change `-DGGML_CUDA=ON` to `-DGGML_CUDA=OFF` if you don't have a GPU or just want CPU inference. NOTE: building with `-DGGML_CUDA=ON` for GPUs might take 5 minutes to compile, while a CPU-only build takes about 1 minute. You might be interested in llama.cpp's precompiled binaries.
Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] apt-get update apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y git clone https://github.com/ggml-org/llama.cpp cmake llama.cpp -B llama.cpp/build \ -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split cp llama.cpp/build/bin/llama-* llama.cpp ``` 1. Download the model via (after installing `pip install huggingface_hub hf_transfer` ). You can choose `UD-IQ1_S`(dynamic 1.78bit quant) or other quantized versions like `Q4_K_M` . **I recommend using our 2.7bit dynamic quant**`UD-Q2_K_XL`**to balance size and accuracy**. More versions at: [https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF](https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF) Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] whitespace-pre-wrap # !pip install huggingface_hub hf_transfer import os os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1" from huggingface_hub import snapshot_download snapshot_download( repo_id = "unsloth/DeepSeek-V3-0324-GGUF", local_dir = "unsloth/DeepSeek-V3-0324-GGUF", allow_patterns = ["*UD-Q2_K_XL*"], # Dynamic 2.7bit (230GB) Use "*UD-IQ_S*" for Dynamic 1.78bit (151GB) ) ``` 1. Run Unsloth's Flappy Bird test as described in our 1.58bit Dynamic Quant for DeepSeek R1. 2. Edit `--threads 32` for the number of CPU threads, `--ctx-size 16384` for context length, `--n-gpu-layers 2` for GPU offloading on how many layers. Try adjusting it if your GPU goes out of memory. Also remove it if you have CPU only inference. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] whitespace-pre-wrap ./llama.cpp/llama-cli \ --model unsloth/DeepSeek-V3-0324-GGUF/UD-Q2_K_XL/DeepSeek-V3-0324-UD-Q2_K_XL-00001-of-00006.gguf \ --cache-type-k q8_0 \ --threads 20 \ --n-gpu-layers 2 \ -no-cnv \ --prio 3 \ --temp 0.3 \ --min_p 0.01 \ --ctx-size 4096 \ --seed 3407 \ --prompt "<|User|>Create a Flappy Bird game in Python. You must include these things:\n1. You must use pygame.\n2. The background color should be randomly chosen and is a light shade. Start with a light blue color.\n3. Pressing SPACE multiple times will accelerate the bird.\n4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.\n5. Place on the bottom some land colored as dark brown or yellow chosen randomly.\n6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.\n7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.\n8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.\nThe final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<|Assistant|>" ``` If we run the above, we get 2 very different results. 
**Standard 2-bit version** _**(seizure warning!)**_ and **dynamic 2-bit version** results are shown below:

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F7sXwEonmVeWZaIXbT4Ry%252FOld.gif%3Falt%3Dmedia%26token%3D0b2bd075-091f-4ca6-affa-a9f8a3b98e49&width=300&dpr=4&quality=100&sign=5af7034c&sv=2)

Standard 2-bit. Fails with background, fails with collision

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FDcms38Q9DgdPAVyMIzof%252FNew.gif%3Falt%3Dmedia%26token%3D4c8870ae-71d1-4568-b413-780f10e7f892&width=768&dpr=4&quality=100&sign=7c148832&sv=2)

Dynamic 2-bit. Succeeds in creating a playable game.

1. Like DeepSeek-R1, V3 has 61 layers. For example, with a 24GB GPU or an 80GB GPU, you can expect to offload the following number of layers after rounding down (reduce by 1 if it goes out of memory):

| Quant | File Size | 24GB GPU | 80GB GPU | 2x80GB GPU |
| --- | --- | --- | --- | --- |
| 1.73bit | 173GB | 5 | 25 | 56 |
| 2.22bit | 183GB | 4 | 22 | 49 |
| 2.51bit | 212GB | 2 | 19 | 32 |

### [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-v3-0324-how-to-run-locally#running-on-mac-apple-devices) Running on Mac / Apple devices

For Apple Metal devices, be careful of `--n-gpu-layers`. If you find the machine going out of memory, reduce it. For a 128GB unified memory machine, you should be able to offload 59 layers or so.

```bash
./llama.cpp/llama-cli \
    --model DeepSeek-R1-GGUF/DeepSeek-V3-0324-UD-IQ1_S/DeepSeek-V3-0324-UD-IQ1_S-00001-of-00003.gguf \
    --cache-type-k q4_0 \
    --threads 16 \
    --prio 2 \
    --temp 0.6 \
    --ctx-size 8192 \
    --seed 3407 \
    --n-gpu-layers 59 \
    -no-cnv \
    --prompt "<|User|>Create a Flappy Bird game in Python.<|Assistant|>"
```

## [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-v3-0324-how-to-run-locally#heptagon-test) 🎱 Heptagon Test

We also test our dynamic quants via [r/Localllama](https://www.reddit.com/r/LocalLLaMA/comments/1j7r47l/i_just_made_an_animation_of_a_ball_bouncing/), which tests the model on creating a basic physics engine to simulate balls rotating in a moving enclosed heptagon shape.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F2O72oTw5yPUbcxXjDNKS%252Fsnapshot.jpg%3Falt%3Dmedia%26token%3Dce852f9f-20ee-4b93-9d7b-1a5f211b9e04&width=768&dpr=4&quality=100&sign=55d1134d&sv=2)

The goal is to make the heptagon spin, and the balls in the heptagon should move.
Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] whitespace-pre-wrap ./llama.cpp/llama-cli \ --model unsloth/DeepSeek-V3-0324-GGUF/UD-Q2_K_XL/DeepSeek-V3-0324-UD-Q2_K_XL-00001-of-00006.gguf \ --cache-type-k q8_0 \ --threads 20 \ --n-gpu-layers 2 \ -no-cnv \ --prio 3 \ --temp 0.3 \ --min_p 0.01 \ --ctx-size 4096 \ --seed 3407 \ --prompt "<|User|>Write a Python program that shows 20 balls bouncing inside a spinning heptagon:\n- All balls have the same radius.\n- All balls have a number on it from 1 to 20.\n- All balls drop from the heptagon center when starting.\n- Colors are: #f8b862, #f6ad49, #f39800, #f08300, #ec6d51, #ee7948, #ed6d3d, #ec6800, #ec6800, #ee7800, #eb6238, #ea5506, #ea5506, #eb6101, #e49e61, #e45e32, #e17b34, #dd7a56, #db8449, #d66a35\n- The balls should be affected by gravity and friction, and they must bounce off the rotating walls realistically. There should also be collisions between balls.\n- The material of all the balls determines that their impact bounce height will not exceed the radius of the heptagon, but higher than ball radius.\n- All balls rotate with friction, the numbers on the ball can be used to indicate the spin of the ball.\n- The heptagon is spinning around its center, and the speed of spinning is 360 degrees per 5 seconds.\n- The heptagon size should be large enough to contain all the balls.\n- Do not use the pygame library; implement collision detection algorithms and collision response etc. by yourself. The following Python libraries are allowed: tkinter, math, numpy, dataclasses, typing, sys.\n- All codes should be put in a single Python file.<|Assistant|>" ``` ![Cover](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F8hq8kYZ8RmTUQjYuZN3w%252FInShot_20250325_185636426.gif%3Falt%3Dmedia%26token%3D41a46ca4-c4d1-4bac-a035-1d153269c29d&width=245&dpr=4&quality=100&sign=2a272dba&sv=2) Non Dynamic 2bit. Fails - SEIZURE WARNING again! [unsloth-q2\_k\_rotate.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FE9GSJlT4kXAR2LnBvNyk%2Funsloth-q2_k_rotate.txt?alt=media&token=46c4040e-e464-4562-9430-d017868a1077) ![Cover](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252Fssk5mGbDUHdYhdiDFHPT%252FInShot_20250325_181710554.gif%3Falt%3Dmedia%26token%3D50e93aa5-2a93-47d3-b118-f339dcf9d3c2&width=245&dpr=4&quality=100&sign=f431a57f&sv=2) Dynamic 2bit. Actually solves the heptagon puzzle correctly!! 
[unsloth-q2\_k\_xl\_rotate.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FIED4xFpcdldNQCO8KKOi%2Funsloth-q2_k_xl_rotate.txt?alt=media&token=9d1ec35f-f6ba-4f19-a374-6020801e493c) ![Cover](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FYrKuXm5uhsFW3b8e92Cz%252FInShot_20250325_181423756.gif%3Falt%3Dmedia%26token%3Daf23c694-b8f7-4d75-b6ad-f87254eb73c0&width=245&dpr=4&quality=100&sign=649d2a08&sv=2) Original float8 [fp8-heptagon.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FEP8pdoCOtznTdMTI7Pw8%2Ffp8-heptagon.txt?alt=media&token=93659885-d403-4c01-8864-b5d21fba8ce6) The dynamic 2.7 bit quant which is only 230GB in size actually manages to solve the heptagon puzzle! The full output for all 3 versions (including full fp8) is below: Dynamic 2bit Heptagon code [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-v3-0324-how-to-run-locally#dynamic-2bit-heptagon-code) Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] import tkinter as tk import math import numpy as np from dataclasses import dataclass from typing import List, Tuple, Optional # Constants HEPTAGON_RADIUS = 300 BALL_RADIUS = 20 GRAVITY = 0.2 FRICTION = 0.99 BOUNCE_FACTOR = 0.8 ROTATION_SPEED = 360 / 5 # degrees per second SPIN_FRICTION = 0.98 BALL_COLORS = [\ "#f8b862", "#f6ad49", "#f39800", "#f08300", "#ec6d51",\ "#ee7948", "#ed6d3d", "#ec6800", "#ec6800", "#ee7800",\ "#eb6238", "#ea5506", "#ea5506", "#eb6101", "#e49e61",\ "#e45e32", "#e17b34", "#dd7a56", "#db8449", "#d66a35"\ ] @dataclass class Ball: x: float y: float vx: float vy: float number: int spin: float = 0.0 color: str = "#000000" @dataclass class Wall: x1: float y1: float x2: float y2: float class BouncingBalls: def __init__(self, root): self.root = root self.canvas = tk.Canvas(root, width=800, height=800, bg="white") self.canvas.pack() self.balls: List[Ball] = [] self.walls: List[Wall] = [] self.heptagon_angle = 0 self.last_time = 0 self.setup_balls() self.setup_heptagon() self.root.after(16, self.update) self.root.bind("<space>", self.reset_balls) def setup_balls(self): for i in range(20): ball = Ball( x=400, y=400, vx=np.random.uniform(-5, 5), vy=np.random.uniform(-5, 5), number=i+1, color=BALL_COLORS[i] ) self.balls.append(ball) def setup_heptagon(self): # Create initial heptagon walls self.update_heptagon_walls(0) def update_heptagon_walls(self, angle): self.walls = [] center_x, center_y = 400, 400 angle_rad = math.radians(angle) for i in range(7): angle1 = angle_rad + 2 * math.pi * i / 7 angle2 = angle_rad + 2 * math.pi * (i + 1) / 7 x1 = center_x + HEPTAGON_RADIUS * math.cos(angle1) y1 = center_y + HEPTAGON_RADIUS * math.sin(angle1) x2 = center_x + HEPTAGON_RADIUS * math.cos(angle2) y2 = center_y + HEPTAGON_RADIUS * math.sin(angle2) self.walls.append(Wall(x1, y1, x2, y2)) def reset_balls(self, event=None): for ball in self.balls: ball.x = 400 ball.y = 400 ball.vx = np.random.uniform(-5, 5) ball.vy = np.random.uniform(-5, 5) ball.spin = np.random.uniform(-5, 5) def update(self): current_time = self.root.after_idle(self.root.after, 16, self.update) if self.last_time == 0: self.last_time = current_time return # Calculate delta time (approximate) dt = 0.016 # Assuming ~60 FPS # Update heptagon 
rotation self.heptagon_angle += ROTATION_SPEED * dt self.update_heptagon_walls(self.heptagon_angle) # Update balls for ball in self.balls: # Apply gravity ball.vy += GRAVITY # Apply friction ball.vx *= FRICTION ball.vy *= FRICTION ball.spin *= SPIN_FRICTION # Move ball ball.x += ball.vx ball.y += ball.vy # Check collisions with walls self.check_wall_collisions(ball) # Check collisions with other balls for other in self.balls: if other.number != ball.number: self.check_ball_collision(ball, other) # Draw everything self.draw() def check_wall_collisions(self, ball): for wall in self.walls: # Find closest point on wall segment to ball closest = self.closest_point_on_segment( wall.x1, wall.y1, wall.x2, wall.y2, ball.x, ball.y ) # Calculate distance to wall dx = ball.x - closest[0] dy = ball.y - closest[1] distance = math.sqrt(dx*dx + dy*dy) if distance < BALL_RADIUS: # Collision detected # Calculate normal vector nx = dx / distance ny = dy / distance # Calculate relative velocity along normal v_rel = ball.vx * nx + ball.vy * ny if v_rel < 0: # Moving toward the wall # Calculate impulse j = -(1 + BOUNCE_FACTOR) * v_rel # Apply impulse ball.vx += j * nx ball.vy += j * ny # Add some spin based on collision ball.spin += (ball.vx * ny - ball.vy * nx) * 0.1 # Move ball out of collision penetration = BALL_RADIUS - distance ball.x += penetration * nx ball.y += penetration * ny def check_ball_collision(self, ball1, ball2): dx = ball2.x - ball1.x dy = ball2.y - ball1.y distance = math.sqrt(dx*dx + dy*dy) if distance < 2 * BALL_RADIUS: # Collision detected nx = dx / distance ny = dy / distance # Calculate relative velocity v_rel_x = ball2.vx - ball1.vx v_rel_y = ball2.vy - ball1.vy v_rel = v_rel_x * nx + v_rel_y * ny if v_rel < 0: # Moving toward each other # Calculate impulse j = -(1 + BOUNCE_FACTOR) * v_rel / 2 # Apply impulses ball1.vx -= j * nx ball1.vy -= j * ny ball2.vx += j * nx ball2.vy += j * ny # Add spin based on collision ball1.spin += (ball1.vx * ny - ball1.vy * nx) * 0.05 ball2.spin += (ball2.vx * ny - ball2.vy * nx) * 0.05 # Move balls apart penetration = 2 * BALL_RADIUS - distance ball1.x -= penetration * nx * 0.5 ball1.y -= penetration * ny * 0.5 ball2.x += penetration * nx * 0.5 ball2.y += penetration * ny * 0.5 @staticmethod def closest_point_on_segment(x1, y1, x2, y2, x, y): # Vector from point to segment start dx = x - x1 dy = y - y1 # Segment vector sx = x2 - x1 sy = y2 - y1 # Projection of point onto segment dot = dx * sx + dy * sy len_sq = sx * sx + sy * sy param = dot / len_sq if len_sq != 0 else -1 if param < 0: return x1, y1 elif param > 1: return x2, y2 else: return x1 + param * sx, y1 + param * sy def draw(self): self.canvas.delete("all") # Draw heptagon points = [] for wall in self.walls: points.extend([wall.x1, wall.y1]) self.canvas.create_polygon(points, fill="", outline="black", width=2) # Draw balls for ball in self.balls: # Draw ball self.canvas.create_oval( ball.x - BALL_RADIUS, ball.y - BALL_RADIUS, ball.x + BALL_RADIUS, ball.y + BALL_RADIUS, fill=ball.color, outline="black" ) # Draw number with rotation based on spin angle = ball.spin * 10 # Scale spin for visual effect self.canvas.create_text( ball.x, ball.y, text=str(ball.number), font=("Arial", 12, "bold"), angle=angle ) if __name__ == "__main__": root = tk.Tk() root.title("Bouncing Balls in Spinning Heptagon") app = BouncingBalls(root) root.mainloop() ``` Non Dynamic 2bit Heptagon code [Direct link to 
```python
import tkinter as tk
import math
import random
from dataclasses import dataclass
from typing import List, Tuple, Optional
import sys

# Constants
WIDTH, HEIGHT = 800, 800
HEPTAGON_RADIUS = 300
BALL_RADIUS = 15
GRAVITY = 0.5
FRICTION = 0.999
ELASTICITY = 0.8
ROTATION_SPEED = 2 * math.pi / 5  # 360 degrees per 5 seconds
SPIN_DECAY = 0.99

# Colors for the balls
BALL_COLORS = [
    "#f8b862", "#f6ad49", "#f39800", "#f08300", "#ec6d51",
    "#ee7948", "#ed6d3d", "#ec6800", "#ec6800", "#ee7800",
    "#eb6238", "#ea5506", "#ea5506", "#eb6101", "#e49e61",
    "#e45e32", "#e17b34", "#dd7a56", "#db8449", "#d66a35"
]

@dataclass
class Ball:
    x: float
    y: float
    vx: float
    vy: float
    radius: float
    color: str
    number: int
    spin: float = 0.0

@dataclass
class Heptagon:
    center_x: float
    center_y: float
    radius: float
    angle: float = 0.0

class BouncingBalls:
    def __init__(self, root):
        self.root = root
        self.canvas = tk.Canvas(root, width=WIDTH, height=HEIGHT, bg="white")
        self.canvas.pack()
        self.heptagon = Heptagon(WIDTH//2, HEIGHT//2, HEPTAGON_RADIUS)
        self.balls = []
        self.setup_balls()
        self.root.after(0, self.update)
        self.root.mainloop()

    def setup_balls(self):
        center_x, center_y = WIDTH//2, HEIGHT//2
        for i in range(20):
            self.balls.append(Ball(
                x=center_x,
                y=center_y,
                vx=0,
                vy=0,
                radius=BALL_RADIUS,
                color=BALL_COLORS[i],
                number=i+1,
                spin=0
            ))

    def update(self):
        self.canvas.delete("all")

        # Update heptagon angle
        self.heptagon.angle += ROTATION_SPEED / 60  # Assuming 60 FPS

        # Draw heptagon
        self.draw_heptagon()

        # Update and draw balls
        for ball in self.balls:
            # Apply gravity
            ball.vy += GRAVITY
            # Update position
            ball.x += ball.vx
            ball.y += ball.vy
            # Apply friction
            ball.vx *= FRICTION
            ball.vy *= FRICTION
            # Apply spin decay
            ball.spin *= SPIN_DECAY
            # Check collision with heptagon walls
            self.check_heptagon_collision(ball)
            # Check collision with other balls
            for other in self.balls:
                if other != ball:
                    if self.check_ball_collision(ball, other):
                        self.resolve_ball_collision(ball, other)
            # Draw the ball
            self.draw_ball(ball)

        self.root.after(16, self.update)  # ~60 FPS

    def draw_heptagon(self):
        center_x, center_y = self.heptagon.center_x, self.heptagon.center_y
        points = []
        for i in range(7):
            angle = self.heptagon.angle + i * 2 * math.pi / 7
            x = center_x + self.heptagon.radius * math.cos(angle)
            y = center_y + self.heptagon.radius * math.sin(angle)
            points.append((x, y))
        # Draw heptagon
        self.canvas.create_polygon(
            [points[0], points[1], points[2], points[3],
             points[4], points[5], points[6]],
            outline="black", fill="", width=2
        )

    def draw_ball(self, ball):
        self.canvas.create_oval(
            ball.x - ball.radius, ball.y - ball.radius,
            ball.x + ball.radius, ball.y + ball.radius,
            fill=ball.color, outline="black"
        )
        # Draw the number
        self.canvas.create_text(
            ball.x, ball.y,
            text=str(ball.number),
            fill="black"
        )

    def check_heptagon_collision(self, ball):
        center_x, center_y = WIDTH//2, HEIGHT//2
        # Check distance from center
        dx = ball.x - center_x
        dy = ball.y - center_y
        dist = math.sqrt(dx**2 + dy**2)
        if dist + ball.radius > self.heptagon.radius:
            # Find the normal vector from center to ball
            angle = math.atan2(dy, dx)
            normal_x = math.cos(angle)
            normal_y = math.sin(angle)
            # Move ball back inside heptagon
            overlap = (dist + ball.radius) - self.heptagon.radius
            ball.x -= overlap * normal_x
            ball.y -= overlap * normal_y
            # Reflect velocity
            dot_product = ball.vx * normal_x + ball.vy * normal_y
            ball.vx -= 2 * dot_product * normal_x * ELASTICITY
            ball.vy -= 2 * dot_product * normal_y * ELASTICITY

    def check_ball_collision(self, ball1, ball2):
        dx = ball2.x - ball1.x
        dy = ball2.y - ball1.y
        distance = math.sqrt(dx**2 + dy**2)
        return distance < (ball1.radius + ball2.radius)

    def resolve_ball_collision(self, ball1, ball2):
        dx = ball2.x - ball1.x
        dy = ball2.y - ball1.y
        distance = math.sqrt(dx**2 + dy**2)
        # Normal vector
        nx = dx / distance
        ny = dy / distance
        # Relative velocity
        dvx = ball2.vx - ball1.vx
        dvy = ball2.vy - ball1.vy
        # Calculate impulse
        impulse = 2 * (dvx * nx + dvy * ny) / 2
        impulse *= ELASTICITY
        # Apply impulse
        ball1.vx -= impulse * nx
        ball1.vy -= impulse * ny
        ball2.vx += impulse * nx
        ball2.vy += impulse * ny
        # Separate the balls to prevent sticking
        overlap = (ball1.radius + ball2.radius) - distance
        ball1.x -= overlap * nx / 2
        ball1.y -= overlap * ny / 2
        ball2.x += overlap * nx / 2
        ball2.y += overlap * ny / 2

    def run(self):
        self.root.mainloop()

if __name__ == "__main__":
    root = tk.Tk()
    root.title("Bouncing Balls in a Spinning Heptagon")
    app = BouncingBalls(root)
    app.run()
```

### [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-v3-0324-how-to-run-locally#float8-heptagon-code) Float8 Heptagon code

```python
import tkinter as tk
import math
import numpy as np
from dataclasses import dataclass
from typing import List, Tuple, Optional

# Constants
WIDTH, HEIGHT = 800, 800
CENTER_X, CENTER_Y = WIDTH // 2, HEIGHT // 2
GRAVITY = 0.2
FRICTION = 0.99
BALL_RADIUS = 15
HEPTAGON_RADIUS = 300
ROTATION_SPEED = 360 / 5  # degrees per second
ELASTICITY = 0.8
SPIN_FRICTION = 0.98

# Colors for the balls
BALL_COLORS = [
    '#f8b862', '#f6ad49', '#f39800', '#f08300', '#ec6d51',
    '#ee7948', '#ed6d3d', '#ec6800', '#ec6800', '#ee7800',
    '#eb6238', '#ea5506', '#ea5506', '#eb6101', '#e49e61',
    '#e45e32', '#e17b34', '#dd7a56', '#db8449', '#d66a35'
]

@dataclass
class Ball:
    x: float
    y: float
    vx: float
    vy: float
    radius: float
    color: str
    number: int
    spin: float = 0.0

    def move(self):
        self.x += self.vx
        self.y += self.vy
        self.vy += GRAVITY
        self.vx *= FRICTION
        self.vy *= FRICTION
        self.spin *= SPIN_FRICTION

    def collide_with_ball(self, other: 'Ball'):
        dx = other.x - self.x
        dy = other.y - self.y
        distance = math.hypot(dx, dy)
        if distance < self.radius + other.radius:
            # Calculate collision normal
            nx = dx / distance
            ny = dy / distance
            # Calculate relative velocity
            dvx = other.vx - self.vx
            dvy = other.vy - self.vy
            # Calculate impulse
            impulse = 2 * (dvx * nx + dvy * ny) / (1/self.radius + 1/other.radius)
            # Apply impulse
            self.vx += impulse * nx / self.radius
            self.vy += impulse * ny / self.radius
            other.vx -= impulse * nx / other.radius
            other.vy -= impulse * ny / other.radius
            # Separate balls to prevent sticking
            overlap = (self.radius + other.radius - distance) / 2
            self.x -= overlap * nx
            self.y -= overlap * ny
            other.x += overlap * nx
            other.y += overlap * ny
            # Transfer some spin
            transfer = impulse * 0.01
            self.spin -= transfer
            other.spin += transfer

class HeptagonBounceSimulator:
    def __init__(self, root):
        self.root = root
        self.canvas = tk.Canvas(root, width=WIDTH, height=HEIGHT, bg='white')
        self.canvas.pack()
        self.balls = self.create_balls()
        self.heptagon_angle = 0
        self.last_time = 0
        self.running = True
        self.root.bind('<space>', self.toggle_pause)
        self.root.bind('<Escape>', lambda e: root.destroy())
        self.last_time = self.root.after(0, self.update)

    def create_balls(self) -> List[Ball]:
        balls = []
        for i in range(20):
            # Start all balls at center with small random velocity
            angle = np.random.uniform(0, 2 * math.pi)
            speed = np.random.uniform(0.5, 2)
            vx = math.cos(angle) * speed
            vy = math.sin(angle) * speed
            balls.append(Ball(
                x=CENTER_X,
                y=CENTER_Y,
                vx=vx,
                vy=vy,
                radius=BALL_RADIUS,
                color=BALL_COLORS[i],
                number=i+1,
                spin=np.random.uniform(-2, 2)
            ))
        return balls

    def toggle_pause(self, event):
        self.running = not self.running
        if self.running:
            self.last_time = self.root.after(0, self.update)

    def get_heptagon_vertices(self) -> List[Tuple[float, float]]:
        vertices = []
        for i in range(7):
            angle = math.radians(self.heptagon_angle + i * 360 / 7)
            x = CENTER_X + HEPTAGON_RADIUS * math.cos(angle)
            y = CENTER_Y + HEPTAGON_RADIUS * math.sin(angle)
            vertices.append((x, y))
        return vertices

    def check_ball_heptagon_collision(self, ball: Ball):
        vertices = self.get_heptagon_vertices()
        closest_dist = float('inf')
        closest_normal = (0, 0)
        closest_edge = None

        # Check collision with each edge of the heptagon
        for i in range(len(vertices)):
            p1 = vertices[i]
            p2 = vertices[(i + 1) % len(vertices)]
            # Vector from p1 to p2
            edge_x = p2[0] - p1[0]
            edge_y = p2[1] - p1[1]
            edge_length = math.hypot(edge_x, edge_y)
            # Normalize edge vector
            edge_x /= edge_length
            edge_y /= edge_length
            # Normal vector (perpendicular to edge, pointing inward)
            nx = -edge_y
            ny = edge_x
            # Vector from p1 to ball
            ball_to_p1_x = ball.x - p1[0]
            ball_to_p1_y = ball.y - p1[1]
            # Project ball onto edge normal
            projection = ball_to_p1_x * nx + ball_to_p1_y * ny
            # If projection is negative, ball is outside the heptagon
            if projection < ball.radius:
                # Find closest point on edge to ball
                edge_proj = ball_to_p1_x * edge_x + ball_to_p1_y * edge_y
                edge_proj = max(0, min(edge_length, edge_proj))
                closest_x = p1[0] + edge_proj * edge_x
                closest_y = p1[1] + edge_proj * edge_y
                # Distance from ball to closest point on edge
                dist = math.hypot(ball.x - closest_x, ball.y - closest_y)
                if dist < closest_dist:
                    closest_dist = dist
                    closest_normal = (nx, ny)
                    closest_edge = (p1, p2)

        if closest_dist < ball.radius:
            # Calculate bounce response
            dot_product = ball.vx * closest_normal[0] + ball.vy * closest_normal[1]
            # Apply bounce with elasticity
            ball.vx -= (1 + ELASTICITY) * dot_product * closest_normal[0]
            ball.vy -= (1 + ELASTICITY) * dot_product * closest_normal[1]
            # Add some spin based on impact
            edge_vec = (closest_edge[1][0] - closest_edge[0][0],
                        closest_edge[1][1] - closest_edge[0][1])
            edge_length = math.hypot(edge_vec[0], edge_vec[1])
            if edge_length > 0:
                edge_vec = (edge_vec[0]/edge_length, edge_vec[1]/edge_length)
                # Cross product of velocity and edge direction
                spin_effect = (ball.vx * edge_vec[1] - ball.vy * edge_vec[0]) * 0.1
                ball.spin += spin_effect
            # Move ball outside the heptagon to prevent sticking
            penetration = ball.radius - closest_dist
            ball.x += penetration * closest_normal[0]
            ball.y += penetration * closest_normal[1]

    def update(self):
        if not self.running:
            return

        # Clear canvas
        self.canvas.delete('all')

        # Update heptagon rotation
        self.heptagon_angle += ROTATION_SPEED / 60  # Assuming ~60 FPS

        # Draw heptagon
        vertices = self.get_heptagon_vertices()
        self.canvas.create_polygon(vertices, outline='black', fill='', width=2)

        # Update and draw balls
        for i, ball in enumerate(self.balls):
            # Move ball
            ball.move()
            # Check collisions with heptagon
            self.check_ball_heptagon_collision(ball)
            # Draw ball
            self.canvas.create_oval(
                ball.x - ball.radius, ball.y - ball.radius,
                ball.x + ball.radius, ball.y + ball.radius,
                fill=ball.color, outline='black'
            )
            # Draw number with rotation based on spin
            angle = ball.spin * 10  # Scale spin for visible rotation
            self.canvas.create_text(
                ball.x, ball.y,
                text=str(ball.number),
                font=('Arial', 10, 'bold'),
                angle=angle
            )

        # Check ball-ball collisions
        for i in range(len(self.balls)):
            for j in range(i + 1, len(self.balls)):
                self.balls[i].collide_with_ball(self.balls[j])

        # Schedule next update
        self.last_time = self.root.after(16, self.update)  # ~60 FPS

if __name__ == '__main__':
    root = tk.Tk()
    root.title('Bouncing Balls in a Spinning Heptagon')
    simulator = HeptagonBounceSimulator(root)
    root.mainloop()
```

## [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-v3-0324-how-to-run-locally\#extra-findings-and-tips) 🕵️ Extra Findings & Tips

1. We find that lower KV cache quantization (4-bit) seems to degrade generation quality in our empirical tests - more testing is needed, but we suggest using `q8_0` cache quantization. The point of KV cache quantization is to support longer context lengths, since the KV cache uses quite a bit of memory.

2. We found the `down_proj` matrices in this model to be extremely sensitive to quantization. We had to redo some of our dynamic quants which used 2 bits for `down_proj`; we now use 3 bits as the minimum for all of these matrices.

3. Using `llama.cpp`'s Flash Attention backend does give somewhat faster decoding speeds. Use `-DGGML_CUDA_FA_ALL_QUANTS=ON` when compiling. It is also best to set your CUDA architecture (as listed at [https://developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus)) to reduce compilation times, e.g. via `-DCMAKE_CUDA_ARCHITECTURES="80"`.

4. A `min_p` of 0.01 is probably enough. `llama.cpp` defaults to 0.1, which is likely unnecessary: since a temperature of 0.3 is used anyway, very low probability tokens are rarely sampled, so a small `min_p` simply trims them off. DeepSeek recommends a temperature of 0.0 for coding tasks. A sketch combining these compile and runtime settings is shown after this list.
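To make tips 1, 3 and 4 concrete, here is a rough sketch of how the compile and runtime flags could fit together. This is illustrative rather than an official recipe: the CMake flags are the ones mentioned above, but the GGUF filename is a placeholder, and the runtime options (`-fa`, `--cache-type-k`/`--cache-type-v`, `--min-p`, `--temp`) should be checked against your version of `llama.cpp`.

```bash
# Build llama.cpp with CUDA and Flash Attention kernels for all quant types.
# Setting CMAKE_CUDA_ARCHITECTURES to your GPU's compute capability cuts compile time.
cmake llama.cpp -B llama.cpp/build \
    -DGGML_CUDA=ON \
    -DGGML_CUDA_FA_ALL_QUANTS=ON \
    -DCMAKE_CUDA_ARCHITECTURES="80"
cmake --build llama.cpp/build --config Release -j

# Run with Flash Attention, q8_0 KV cache quantization, min_p = 0.01 and a low temperature.
# The model path below is a placeholder - point it at your downloaded dynamic quant.
./llama.cpp/build/bin/llama-cli \
    --model DeepSeek-V3-0324-UD-Q2_K_XL.gguf \
    -fa \
    --cache-type-k q8_0 \
    --cache-type-v q8_0 \
    --min-p 0.01 \
    --temp 0.3
```

The `q8_0` KV cache keeps memory usage manageable for long contexts while avoiding the quality drop we observed with 4-bit cache quantization.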
{ "color-scheme": "light dark", "description": "How to run DeepSeek-V3-0324 locally using our dynamic quants which recovers accuracy", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "How to run DeepSeek-V3-0324 locally using our dynamic quants which recovers accuracy", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "DeepSeek-V3-0324: How to Run Locally | Unsloth Documentation", "ogDescription": "How to run DeepSeek-V3-0324 locally using our dynamic quants which recovers accuracy", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "DeepSeek-V3-0324: How to Run Locally | Unsloth Documentation", "robots": "index, follow", "scrapeId": "ec77e7d0-4bd9-49c6-ba7f-2ae08cf2dfaf", "sourceURL": "https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-v3-0324-how-to-run-locally", "statusCode": 200, "title": "DeepSeek-V3-0324: How to Run Locally | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "How to run DeepSeek-V3-0324 locally using our dynamic quants which recovers accuracy", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "DeepSeek-V3-0324: How to Run Locally | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-v3-0324-how-to-run-locally", "viewport": "width=device-width, initial-scale=1" }
## Unsloth Environment Flags

Advanced flags which might be useful if you see breaking finetunes, or if you want to turn features off.

| Environment variable | Purpose |
| --- | --- |
| `os.environ["UNSLOTH_RETURN_LOGITS"] = "1"` | Forcibly returns logits - useful for evaluation if logits are needed. |
| `os.environ["UNSLOTH_COMPILE_DISABLE"] = "1"` | Disables the auto compiler. Could be useful to debug incorrect finetune results. |
| `os.environ["UNSLOTH_DISABLE_FAST_GENERATION"] = "1"` | Disables fast generation for generic models. |
| `os.environ["UNSLOTH_ENABLE_LOGGING"] = "1"` | Enables auto compiler logging - useful to see which functions are compiled or not. |
| `os.environ["UNSLOTH_FORCE_FLOAT32"] = "1"` | On float16 machines, use float32 and not float16 mixed precision. Useful for Gemma 3. |
| `os.environ["UNSLOTH_STUDIO_DISABLED"] = "1"` | Disables extra features. |
| `os.environ["UNSLOTH_COMPILE_DEBUG"] = "1"` | Turns on extremely verbose `torch.compile` logs. |
| `os.environ["UNSLOTH_COMPILE_MAXIMUM"] = "0"` | Enables maximum `torch.compile` optimizations - not recommended. |
| `os.environ["UNSLOTH_COMPILE_IGNORE_ERRORS"] = "1"` | Can be turned off to enable fullgraph parsing. |
| `os.environ["UNSLOTH_FULLGRAPH"] = "0"` | Enables `torch.compile` fullgraph mode. |
| `os.environ["UNSLOTH_DISABLE_AUTO_UPDATES"] = "1"` | Forces no updates to `unsloth-zoo`. |

Another possibility is that the model uploads themselves are corrupted, although this is unlikely. Try the following:

```python
from unsloth import FastVisionModel

model, tokenizer = FastVisionModel.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct",
    use_exact_model_name = True,
)
```
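As a small illustration of the flags in the table above: they are ordinary environment variables, so they can be set directly from Python. The particular combination shown here, and the assumption that they should be set before Unsloth is imported so they take effect, are for demonstration only.

```python
import os

# Return logits (useful for evaluation) and log what the auto compiler does.
# Assumed to be set before importing Unsloth so they are picked up at import time.
os.environ["UNSLOTH_RETURN_LOGITS"] = "1"
os.environ["UNSLOTH_ENABLE_LOGGING"] = "1"

from unsloth import FastLanguageModel  # Unsloth reads the flags when imported
```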
{ "color-scheme": "light dark", "description": "Advanced flags which might be useful if you see breaking finetunes, or you want to turn stuff off.", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Advanced flags which might be useful if you see breaking finetunes, or you want to turn stuff off.", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Unsloth Environment Flags | Unsloth Documentation", "ogDescription": "Advanced flags which might be useful if you see breaking finetunes, or you want to turn stuff off.", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Unsloth Environment Flags | Unsloth Documentation", "robots": "index, follow", "scrapeId": "f4839e5c-e547-4ecb-a9bf-389abbe82903", "sourceURL": "https://docs.unsloth.ai/basics/errors-troubleshooting/unsloth-environment-flags", "statusCode": 200, "title": "Unsloth Environment Flags | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Advanced flags which might be useful if you see breaking finetunes, or you want to turn stuff off.", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Unsloth Environment Flags | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/errors-troubleshooting/unsloth-environment-flags", "viewport": "width=device-width, initial-scale=1" }
## Unsloth Benchmarks

- For our most detailed benchmarks, read our [Llama 3.3 Blog](https://unsloth.ai/blog/llama3-3).
- Benchmarking of Unsloth was also conducted by [🤗Hugging Face](https://huggingface.co/blog/unsloth-trl).

We tested using the Alpaca Dataset, a batch size of 2, gradient accumulation steps of 4, rank = 32, and applied QLoRA on all linear layers (q, k, v, o, gate, up, down); a code sketch of this configuration appears at the end of this page:

| Model | VRAM | 🦥 Unsloth speed | 🦥 VRAM reduction | 🦥 Longer context | 😊 Hugging Face + FA2 |
| --- | --- | --- | --- | --- | --- |
| Llama 3.3 (70B) | 80GB | 2x | >75% | 13x longer | 1x |
| Llama 3.1 (8B) | 80GB | 2x | >70% | 12x longer | 1x |

## [Direct link to heading](https://docs.unsloth.ai/basics/unsloth-benchmarks\#context-length-benchmarks) Context length benchmarks

The longer your sequences, the greater Unsloth's VRAM savings, thanks to our [gradient checkpointing](https://unsloth.ai/blog/long-context) algorithm + Apple's Cut Cross Entropy (CCE) algorithm!

### [Direct link to heading](https://docs.unsloth.ai/basics/unsloth-benchmarks\#llama-3.1-8b-max.-context-length) **Llama 3.1 (8B) max. context length**

We tested Llama 3.1 (8B) Instruct and did 4bit QLoRA on all linear layers (Q, K, V, O, gate, up and down) with rank = 32 and a batch size of 1. We padded all sequences to a certain maximum sequence length to mimic long-context finetuning workloads.

| GPU VRAM | 🦥 Unsloth context length | Hugging Face + FA2 |
| --- | --- | --- |
| 8 GB | 2,972 | OOM |
| 12 GB | 21,848 | 932 |
| 16 GB | 40,724 | 2,551 |
| 24 GB | 78,475 | 5,789 |
| 40 GB | 153,977 | 12,264 |
| 48 GB | 191,728 | 15,502 |
| 80 GB | 342,733 | 28,454 |

### [Direct link to heading](https://docs.unsloth.ai/basics/unsloth-benchmarks\#llama-3.3-70b-max.-context-length) **Llama 3.3 (70B) max. context length**

We tested Llama 3.3 (70B) Instruct on an 80GB A100 and did 4bit QLoRA on all linear layers (Q, K, V, O, gate, up and down) with rank = 32 and a batch size of 1. We padded all sequences to a certain maximum sequence length to mimic long-context finetuning workloads.

| GPU VRAM | 🦥 Unsloth context length | Hugging Face + FA2 |
| --- | --- | --- |
| 48 GB | 12,106 | OOM |
| 80 GB | 89,389 | 6,916 |
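For reference, below is a minimal sketch of the kind of configuration described above: 4-bit QLoRA with rank 32 on all linear projections, a batch size of 2 and 4 gradient accumulation steps. The model name, sequence length, dataset (`yahma/alpaca-cleaned`), prompt format and trainer arguments here are illustrative assumptions, not the exact benchmark script.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

# Illustrative 4-bit base model and sequence length (assumptions, not the benchmark script)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
    max_seq_length = 2048,
    load_in_4bit = True,
)

# QLoRA with rank 32 on all linear layers, as in the benchmark description
model = FastLanguageModel.get_peft_model(
    model,
    r = 32,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 32,
    use_gradient_checkpointing = "unsloth",  # long-context gradient checkpointing
)

# An Alpaca-style dataset, flattened into a single text field for SFT (illustrative format)
dataset = load_dataset("yahma/alpaca-cleaned", split = "train")
dataset = dataset.map(lambda x: {
    "text": f"### Instruction:\n{x['instruction']}\n\n### Input:\n{x['input']}\n\n### Response:\n{x['output']}"
})

# Batch size 2 with 4 gradient accumulation steps, matching the benchmark settings
trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    args = SFTConfig(
        dataset_text_field = "text",
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        max_steps = 60,
        output_dir = "outputs",
    ),
)
trainer.train()
```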
{ "color-scheme": "light dark", "description": "Want to know how fast Unsloth is?", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Want to know how fast Unsloth is?", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Unsloth Benchmarks | Unsloth Documentation", "ogDescription": "Want to know how fast Unsloth is?", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Unsloth Benchmarks | Unsloth Documentation", "robots": "index, follow", "scrapeId": "f6895ef9-ffef-4108-b7b4-60ffd5a3077a", "sourceURL": "https://docs.unsloth.ai/basics/unsloth-benchmarks", "statusCode": 200, "title": "Unsloth Benchmarks | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Want to know how fast Unsloth is?", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Unsloth Benchmarks | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/unsloth-benchmarks", "viewport": "width=device-width, initial-scale=1" }