# Welcome

Qwen3, full fine-tuning & all models are now supported! 🦥

[Unsloth](https://github.com/unslothai/unsloth) makes finetuning large language models like Llama-3, Mistral, Phi-4 and Gemma 2x faster, with 70% less memory, and with no degradation in accuracy! Our docs will guide you through training your own custom model, covering the essentials of [installing & updating](https://docs.unsloth.ai/get-started/fine-tuning-guide#id-5.-installing--requirements) Unsloth, [creating datasets](https://docs.unsloth.ai/basics/datasets-guide), and running & [deploying](https://docs.unsloth.ai/get-started/fine-tuning-guide#id-5.-running--saving-the-model) your model.

#### Get started

- [Get started](https://docs.unsloth.ai/get-started/beginner-start-here)
- [🧬 Fine-tuning Guide](https://docs.unsloth.ai/get-started/fine-tuning-guide)
- [📒 Unsloth Notebooks](https://docs.unsloth.ai/get-started/unsloth-notebooks)
- [🔮 All Our Models](https://docs.unsloth.ai/get-started/all-our-models)
- [**Qwen3** — Fine-tune & run Dynamic Qwen3 models.](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune)
- [**Dynamic 2.0 Quants** — The best performing quants on 5-shot MMLU.](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs)
- [**Llama 4 by Meta** — Learn to fine-tune & run Scout & Maverick.](https://docs.unsloth.ai/basics/llama-4-how-to-run-and-fine-tune)

## 🦥 Why Unsloth?

- Unsloth makes it easy to train models like Llama 3 locally or on platforms such as Google Colab and Kaggle. We streamline the entire training workflow: model loading, quantizing, training, evaluating, running, saving, exporting, and integrations with inference engines like Ollama, llama.cpp, and vLLM.
- We collaborate regularly with teams at Hugging Face, Google, and Meta to fix bugs in LLM training and models (e.g. see our past work for [Gemma 3](https://docs.unsloth.ai/) and [Phi-4](https://unsloth.ai/blog/phi4)), so expect the most accurate results when training with Unsloth or using our models.
- Unsloth is highly customizable: you can alter things like chat templates or dataset formatting, and we have pre-built notebooks for vision, text-to-speech (TTS), reinforcement learning and more. We also support all training methods and all transformer-based models.
## Quickstart

**Install locally with pip (recommended)** for Linux devices:

```bash
pip install unsloth
```

For Windows install instructions, see [here](https://docs.unsloth.ai/get-started/installing-+-updating/windows-installation).

## What is finetuning and why?

Fine-tuning an LLM customizes its behavior, enhances domain knowledge, and optimizes performance for specific tasks. Fine-tuning updates the actual "brains" of the language model via back-propagation. By fine-tuning a pre-trained model (e.g. Llama-3.1-8B) on a specialized dataset, you can:

- **Update Knowledge**: Introduce new domain-specific information.
- **Customize Behavior**: Adjust the model's tone, personality, or response style.
- **Optimize for Tasks**: Improve accuracy and relevance for specific use cases.

**Example use cases**:

- Train an LLM to predict whether a headline impacts a company positively or negatively.
- Use historical customer interactions to produce more accurate and custom responses.
- Fine-tune an LLM on legal texts for contract analysis, case law research, and compliance.

You can think of a fine-tuned model as a specialized agent designed to do specific tasks more effectively and efficiently. **Fine-tuning can replicate all of RAG's capabilities**, but not vice versa.

[🤔 FAQ + Is Fine-tuning Right For Me?](https://docs.unsloth.ai/get-started/beginner-start-here/faq-+-is-fine-tuning-right-for-me)

## How to use Unsloth?

[Unsloth](https://github.com/unslothai/unsloth) can be installed locally on Linux or Windows, or used on Kaggle, Google Colab, or another GPU service. Most people use Unsloth through Google Colab, which provides a free GPU to train with. A minimal end-to-end sketch of the workflow is shown below.

- [📥 Installing + Updating](https://docs.unsloth.ai/get-started/installing-+-updating)
- [🛠️ Unsloth Requirements](https://docs.unsloth.ai/get-started/beginner-start-here/unsloth-requirements)
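To make the workflow concrete, here is a minimal QLoRA fine-tuning sketch. The model name, dataset path, and hyperparameters are illustrative placeholders rather than official defaults; the notebooks linked above are the authoritative reference.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 2048

# 1. Load a base model in 4-bit (model name and settings are illustrative).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.1-8B-Instruct",
    max_seq_length = max_seq_length,
    load_in_4bit = True,
)

# 2. Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

# 3. Your own data: assumed here to be a local JSONL file with a "text" field.
dataset = load_dataset("json", data_files = "my_data.jsonl", split = "train")

# 4. Train with TRL's SFTTrainer.
trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        max_steps = 60,
        learning_rate = 2e-4,
        output_dir = "outputs",
    ),
)
trainer.train()

# 5. Save the LoRA adapters for later inference or merging.
model.save_pretrained("lora_model")
tokenizer.save_pretrained("lora_model")
```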
# Phi-4 Reasoning: How to Run & Fine-tune

Microsoft's new Phi-4 reasoning models are now supported in Unsloth. The 'plus' variant performs on par with OpenAI's o1-mini and o3-mini, and Anthropic's Sonnet 3.7. The 'plus' and standard reasoning models have 14B parameters, while the 'mini' has 4B parameters. All Phi-4 reasoning uploads use our [Unsloth Dynamic 2.0](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs) methodology.

#### Phi-4 reasoning - Unsloth Dynamic 2.0 uploads

Dynamic 2.0 GGUF (to run):

- [Reasoning-plus](https://huggingface.co/unsloth/Phi-4-reasoning-plus-GGUF/) (14B)
- [Reasoning](https://huggingface.co/unsloth/Phi-4-reasoning-GGUF) (14B)
- [Mini-reasoning](https://huggingface.co/unsloth/Phi-4-mini-reasoning-GGUF/) (4B)

Dynamic 4-bit Safetensor (to fine-tune/deploy):

- [Reasoning-plus](https://huggingface.co/unsloth/Phi-4-reasoning-plus-unsloth-bnb-4bit)
- [Reasoning](https://huggingface.co/unsloth/phi-4-reasoning-unsloth-bnb-4bit)
- [Mini-reasoning](https://huggingface.co/unsloth/Phi-4-mini-reasoning-unsloth-bnb-4bit)

## 🖥️ Running Phi-4 reasoning

### ⚙️ Official Recommended Settings

According to Microsoft, these are the recommended settings for inference:

- **Temperature = 0.8**
- Top_P = 0.95

### Phi-4 reasoning chat templates

Please ensure you use the correct chat template, as the 'mini' variant has a different one.

#### Phi-4-mini:

```text
<|system|>Your name is Phi, an AI math expert developed by Microsoft.<|end|><|user|>How to solve 3*x^2+4*x+5=1?<|end|><|assistant|>
```

#### Phi-4-reasoning and Phi-4-reasoning-plus:

This format is used for general conversation and instructions:

```text
<|im_start|>system<|im_sep|>You are Phi, a language model trained by Microsoft to help users. Your role as an assistant involves thoroughly exploring questions through a systematic thinking process before providing the final precise and accurate solutions. This requires engaging in a comprehensive cycle of analysis, summarizing, exploration, reassessment, reflection, backtracing, and iteration to develop well-considered thinking process. Please structure your response into two main sections: Thought and Solution using the specified format: <think> {Thought section} </think> {Solution section}. In the Thought section, detail your reasoning process in steps. Each step should include detailed considerations such as analysing questions, summarizing relevant findings, brainstorming new ideas, verifying the accuracy of the current steps, refining any errors, and revisiting previous steps. In the Solution section, based on various attempts, explorations, and reflections from the Thought section, systematically present the final solution that you deem correct. The Solution section should be logical, accurate, and concise and detail necessary steps needed to reach the conclusion. Now, try to solve the following question through the above guidelines:<|im_end|><|im_start|>user<|im_sep|>What is 1+1?<|im_end|><|im_start|>assistant<|im_sep|>
```

Yes, the chat template/prompt format is this long!

### 🦙 Ollama: Run Phi-4 reasoning tutorial

1. Install `ollama` if you haven't already!

```bash
apt-get update
apt-get install pciutils -y
curl -fsSL https://ollama.com/install.sh | sh
```

2. Run the model! Note you can call `ollama serve` in another terminal if it fails. We include all our fixes and suggested parameters (temperature etc.) in `params` in our Hugging Face upload.

```bash
ollama run hf.co/unsloth/Phi-4-mini-reasoning-GGUF:Q4_K_XL
```

### 📖 llama.cpp: Run Phi-4 reasoning tutorial

You must use `--jinja` in llama.cpp to enable reasoning for the models, except for the 'mini' variant. Otherwise, no reasoning tokens will be provided.

1. Obtain the latest `llama.cpp` on [GitHub here](https://github.com/ggml-org/llama.cpp). You can follow the build instructions below as well. Change `-DGGML_CUDA=ON` to `-DGGML_CUDA=OFF` if you don't have a GPU or just want CPU inference.

```bash
apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
```

2. Download the model (after installing `pip install huggingface_hub hf_transfer`). You can choose Q4_K_M or other quantized versions.

```python
# !pip install huggingface_hub hf_transfer
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import snapshot_download
snapshot_download(
    repo_id = "unsloth/Phi-4-mini-reasoning-GGUF",
    local_dir = "unsloth/Phi-4-mini-reasoning-GGUF",
    allow_patterns = ["*UD-Q4_K_XL*"],
)
```

3. Run the model in conversational mode in llama.cpp. Remember to use `--jinja` to enable reasoning (not needed for the 'mini' variant).

```bash
./llama.cpp/llama-cli \
    --model unsloth/Phi-4-mini-reasoning-GGUF/Phi-4-mini-reasoning-UD-Q4_K_XL.gguf \
    --threads -1 \
    --n-gpu-layers 99 \
    --prio 3 \
    --temp 0.8 \
    --top-p 0.95 \
    --jinja \
    --min_p 0.00 \
    --ctx-size 32768 \
    --seed 3407
```

## 🦥 Fine-tuning Phi-4 with Unsloth

[Phi-4 fine-tuning](https://unsloth.ai/blog/phi4), including the reasoning models, is also now supported in Unsloth. To fine-tune for free on Google Colab, just change the `model_name` from 'unsloth/Phi-4' to 'unsloth/Phi-4-mini-reasoning' etc., as sketched below.

- [Phi-4 (14B) fine-tuning notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb)
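A minimal loading sketch for that swap (sequence length and 4-bit setting are illustrative, not official recommendations):

```python
from unsloth import FastLanguageModel

# Load the 4B reasoning variant instead of the base Phi-4 checkpoint.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Phi-4-mini-reasoning",
    max_seq_length = 2048,   # illustrative; raise for long reasoning traces
    load_in_4bit = True,     # QLoRA-style 4-bit loading
)
```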
# Inference

Unsloth natively supports 2x faster inference. For our inference-only notebook, click [here](https://colab.research.google.com/drive/1aqlNQi7MMJbynFDyOQteD2t0yVfjb9Zh?usp=sharing). All QLoRA, LoRA and non-LoRA inference paths are 2x faster. This requires no change of code or any new dependencies.

```python
from unsloth import FastLanguageModel
from transformers import TextStreamer

max_seq_length = 2048  # set to the value used during training
dtype = None           # None auto-detects (bfloat16 / float16)
load_in_4bit = True    # load in 4-bit, as during QLoRA training

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "lora_model", # YOUR MODEL YOU USED FOR TRAINING
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model) # Enable native 2x faster inference

# `inputs` is the tokenized prompt; one way to build it is sketched at the end of this page
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 64)
```

#### NotImplementedError: A UTF-8 locale is required. Got ANSI

Sometimes when you execute a cell, [this error](https://github.com/googlecolab/colabtools/issues/3409) can appear. To solve it, run the below in a new cell:

```python
import locale
locale.getpreferredencoding = lambda: "UTF-8"
```
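Returning to the inference example above: `inputs` needs to hold the tokenized prompt tensors. A minimal sketch of one way to build them (the prompt text and chat-template usage are illustrative, not a prescribed recipe):

```python
# Build `inputs` for the generate() call above. If the fine-tune used a chat
# template, apply_chat_template keeps the prompt format consistent with training.
messages = [{"role": "user", "content": "Continue the sequence: 1, 1, 2, 3, 5, 8,"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt = True,  # append the assistant turn marker
    return_tensors = "pt",
).to("cuda")
inputs = {"input_ids": input_ids}

# For a plain-text (non-chat) fine-tune, direct tokenization also works:
# inputs = tokenizer(["Continue the sequence: 1, 1, 2, 3, 5, 8,"], return_tensors="pt").to("cuda")
```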
# Finetuning from Last Checkpoint

Checkpointing allows you to save your finetuning progress so you can pause it and then continue. You must first edit the `Trainer` to add `save_strategy` and `save_steps`. The below saves a checkpoint every 50 steps to the folder `outputs`.

```python
trainer = SFTTrainer(
    ....
    args = TrainingArguments(
        ....
        output_dir = "outputs",
        save_strategy = "steps",
        save_steps = 50,
    ),
)
```

Then in the trainer do:

```python
trainer_stats = trainer.train(resume_from_checkpoint = True)
```

This will start from the latest checkpoint and continue training.

## Wandb Integration

```python
# Install library
!pip install wandb --upgrade

# Setting up Wandb
!wandb login <token>

import os
os.environ["WANDB_PROJECT"] = "<name>"
os.environ["WANDB_LOG_MODEL"] = "checkpoint"
```

Then in `TrainingArguments()` set:

```python
report_to = "wandb",
logging_steps = 1,    # Change if needed
save_steps = 100,     # Change if needed
run_name = "<name>",  # (Optional)
```

To train the model, do `trainer.train()`; to resume training, do:

```python
import wandb
run = wandb.init()
artifact = run.use_artifact('<username>/<Wandb-project-name>/<run-id>', type='model')
artifact_dir = artifact.download()
trainer.train(resume_from_checkpoint = artifact_dir)
```
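Putting the pieces together, a `TrainingArguments` sketch that combines checkpointing with Weights & Biases logging might look like this (values and the run name are illustrative placeholders):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir = "outputs",
    per_device_train_batch_size = 2,
    gradient_accumulation_steps = 4,
    max_steps = 500,
    learning_rate = 2e-4,
    logging_steps = 1,          # log every step to wandb
    save_strategy = "steps",
    save_steps = 100,           # write a checkpoint every 100 steps
    report_to = "wandb",
    run_name = "my-finetune",   # placeholder run name
)
```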
# What Model Should I Use?

## Llama, Qwen, Mistral, Phi or...?

When preparing for fine-tuning, one of the first decisions you'll face is selecting the right model. Here's a step-by-step guide to help you choose:

#### 1. Choose a model that aligns with your use case

- E.g. for image-based training, select a vision model such as _Llama 3.2 Vision_. For code datasets, opt for a specialized model like _Qwen Coder 2.5_.
- **Licensing and Requirements**: Different models may have specific licensing terms and [system requirements](https://docs.unsloth.ai/get-started/beginner-start-here/unsloth-requirements#system-requirements). Be sure to review these carefully to avoid compatibility issues.

#### 2. Assess your storage, compute capacity and dataset

- Use our [VRAM guideline](https://docs.unsloth.ai/get-started/beginner-start-here/unsloth-requirements#approximate-vram-requirements-based-on-model-parameters) to determine the VRAM requirements for the model you're considering.
- Your dataset will determine the type of model you should use and how long training will take.

#### 3. Select a model and parameters

- We recommend using the latest model for the best performance and capabilities. For instance, as of January 2025, the leading 70B model is _Llama 3.3_.
- You can stay up to date by exploring our catalog of [model uploads](https://docs.unsloth.ai/get-started/all-our-models) to find the most recent and relevant options.

#### 4. Choose between base and instruct models

Further details below.

## Instruct or Base Model?

When preparing for fine-tuning, another early decision is whether to use an instruct model or a base model.

### Instruct Models

Instruct models are pre-trained with built-in instruction tuning, making them ready to use without any fine-tuning. These models, including GGUFs and others commonly available, are optimized for direct usage and respond effectively to prompts right out of the box.

### Base Models

Base models, on the other hand, are the original pre-trained versions without instruction fine-tuning. These are specifically designed for customization through fine-tuning, allowing you to adapt them to your unique needs.

### Should I Choose Instruct or Base?
The decision often depends on the quantity, quality, and type of your data:

- **1,000+ rows of data**: If you have a large dataset with over 1,000 rows, it's generally best to fine-tune the base model.
- **300–1,000 rows of high-quality data**: With a medium-sized, high-quality dataset, fine-tuning either the base or the instruct model is a viable option.
- **Less than 300 rows**: For smaller datasets, the instruct model is typically the better choice. Fine-tuning the instruct model aligns it with your specific needs while preserving its built-in instruction-following capabilities, so it can follow general instructions without additional input unless you intend to significantly alter its functionality.
- For information on how big your dataset should be, [see here](https://docs.unsloth.ai/basics/datasets-guide#how-big-should-my-dataset-be).

### Experimentation is Key

We recommend experimenting with both models when possible. Fine-tune each one and evaluate the outputs to see which aligns better with your goals.
# DeepSeek-R1: How to Run Locally

A guide on how you can run our 1.58-bit Dynamic Quants for DeepSeek-R1 using llama.cpp.

## Using llama.cpp (recommended)

1. Do not forget about the `<|User|>` and `<|Assistant|>` tokens! Alternatively, use a chat template formatter.

2. Obtain the latest `llama.cpp` at [github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp). You can follow the build instructions below as well:

```bash
apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggerganov/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=ON -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
```

3. It's best to use `--min-p 0.05` to counteract very rare token predictions - I found this to work well, especially for the 1.58-bit model.

4. Download the model via:

```python
# pip install huggingface_hub hf_transfer
# import os # Optional for faster downloading
# os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import snapshot_download
snapshot_download(
    repo_id = "unsloth/DeepSeek-R1-GGUF",
    local_dir = "DeepSeek-R1-GGUF",
    allow_patterns = ["*UD-IQ1_S*"], # Select quant type UD-IQ1_S for 1.58bit
)
```

5. Example with Q4_0 K-quantized cache. **Notice `-no-cnv` disables auto conversation mode.**

```bash
./llama.cpp/llama-cli \
    --model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    --cache-type-k q4_0 \
    --threads 12 -no-cnv --prio 2 \
    --temp 0.6 \
    --ctx-size 8192 \
    --seed 3407 \
    --prompt "<|User|>What is 1+1?<|Assistant|>"
```

Example output:

```text
<think>
Okay, so I need to figure out what 1 plus 1 is. Hmm, where do I even start? I remember from school that adding numbers is pretty basic, but I want to make sure I understand it properly.
Let me think, 1 plus 1. So, I have one item and I add another one. Maybe like a apple plus another apple. If I have one apple and someone gives me another, I now have two apples. So, 1 plus 1 should be 2. That makes sense.
Wait, but sometimes math can be tricky. Could it be something else? Like, in a different number system maybe? But I think the question is straightforward, using regular numbers, not like binary or hexadecimal or anything.
I also recall that in arithmetic, addition is combining quantities. So, if you have two quantities of 1, combining them gives you a total of 2. Yeah, that seems right.
Is there a scenario where 1 plus 1 wouldn't be 2? I can't think of any...
```

6. If you have a GPU with 24GB of VRAM (an RTX 4090, for example), you can offload multiple layers to the GPU for faster processing. If you have multiple GPUs, you can probably offload more layers.
```bash
./llama.cpp/llama-cli \
    --model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    --cache-type-k q4_0 \
    --threads 12 -no-cnv --prio 2 \
    --n-gpu-layers 7 \
    --temp 0.6 \
    --ctx-size 8192 \
    --seed 3407 \
    --prompt "<|User|>Create a Flappy Bird game in Python.<|Assistant|>"
```

7. To test our Flappy Bird example as mentioned in our blog post ([https://unsloth.ai/blog/deepseekr1-dynamic](https://unsloth.ai/blog/deepseekr1-dynamic)), we can produce the 2nd example using our 1.58-bit dynamic quant. (The blog post shows side-by-side animations of the original DeepSeek R1 and the 1.58-bit Dynamic Quant outputs.) The prompt used is:

```text
<|User|>Create a Flappy Bird game in Python. You must include these things:
1. You must use pygame.
2. The background color should be randomly chosen and is a light shade. Start with a light blue color.
3. Pressing SPACE multiple times will accelerate the bird.
4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.
5. Place on the bottom some land colored as dark brown or yellow chosen randomly.
6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.
7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.
8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.
The final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<|Assistant|>
```

To call llama.cpp using this example, we do:

```bash
./llama.cpp/llama-cli \
    --model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    --cache-type-k q4_0 \
    --threads 12 -no-cnv --prio 2 \
    --n-gpu-layers 7 \
    --temp 0.6 \
    --ctx-size 8192 \
    --seed 3407 \
    --prompt "<|User|>Create a Flappy Bird game in Python. You must include these things:\n1. You must use pygame.\n2. The background color should be randomly chosen and is a light shade. Start with a light blue color.\n3. Pressing SPACE multiple times will accelerate the bird.\n4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.\n5. Place on the bottom some land colored as dark brown or yellow chosen randomly.\n6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.\n7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.\n8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.\nThe final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<|Assistant|>"
```

8. Also, if you want to merge the weights together for use in Ollama for example, use this script:

```bash
./llama.cpp/llama-gguf-split --merge \
    DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    merged_file.gguf
```

9. DeepSeek R1 has 61 layers. For example, with a 24GB GPU or 80GB GPU, you can expect to offload the following after rounding down (reduce by 1 if it goes out of memory):

| Quant | File Size | 24GB GPU | 80GB GPU | 2x80GB GPU |
| --- | --- | --- | --- | --- |
| 1.58bit | 131GB | 7 | 33 | All 61 layers |
| 1.73bit | 158GB | 5 | 26 | 57 |
| 2.22bit | 183GB | 4 | 22 | 49 |
| 2.51bit | 212GB | 2 | 19 | 32 |

### Running on Mac / Apple devices

For Apple Metal devices, be careful of `--n-gpu-layers`. If you find the machine going out of memory, reduce it. For a 128GB unified memory machine, you should be able to offload 59 layers or so.

```bash
./llama.cpp/llama-cli \
    --model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    --cache-type-k q4_0 \
    --threads 16 \
    --prio 2 \
    --temp 0.6 \
    --ctx-size 8192 \
    --seed 3407 \
    --n-gpu-layers 59 \
    -no-cnv \
    --prompt "<|User|>Create a Flappy Bird game in Python.<|Assistant|>"
```

### Run in Ollama/Open WebUI

Open WebUI has made a step-by-step tutorial on how to run R1 here: [docs.openwebui.com/tutorials/integrations/deepseekr1-dynamic/](https://docs.openwebui.com/tutorials/integrations/deepseekr1-dynamic/). If you want to use Ollama for inference on GGUFs, you need to first merge the 3 GGUF split files into 1, as shown below, and then run the model locally.

```bash
./llama.cpp/llama-gguf-split --merge \
    DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    merged_file.gguf
```

## DeepSeek Chat Template

All distilled versions and the main 671B R1 model use the same chat template:

`<|begin▁of▁sentence|><|User|>What is 1+1?<|Assistant|>It's 2.<|end▁of▁sentence|><|User|>Explain more!<|Assistant|>`

A BOS is forcibly added, and an EOS separates each interaction. To counteract double BOS tokens during inference, you should only call `tokenizer.encode(..., add_special_tokens = False)`, since the chat template auto-adds a BOS token as well. For llama.cpp / GGUF inference, you should skip the BOS since llama.cpp will auto-add it:

`<|User|>What is 1+1?<|Assistant|>`

A minimal sketch of the double-BOS point follows.
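Here is a minimal sketch of avoiding the double BOS when tokenizing manually. The repo name and prompt are illustrative; any R1-family tokenizer that auto-adds a BOS in its chat template behaves the same way.

```python
from transformers import AutoTokenizer

# Illustrative distilled-R1 checkpoint; the 671B model uses the same template.
tok = AutoTokenizer.from_pretrained("unsloth/DeepSeek-R1-Distill-Llama-8B")

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "What is 1+1?"}],
    tokenize = False,
    add_generation_prompt = True,
)  # the rendered string already starts with <|begin▁of▁sentence|>

# Pass add_special_tokens=False so encode() does not prepend a second BOS.
ids = tok.encode(prompt, add_special_tokens = False)
```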
The `<think>` and `</think>` tokens get their own designated token ids. For the distilled Qwen and Llama versions, some tokens are re-mapped; Qwen, for example, did not have a BOS token, so `<|object_ref_start|>` had to be used instead.

**Tokenizer ID Mappings:**

| Token | R1 | Distill Qwen | Distill Llama |
| --- | --- | --- | --- |
| `<think>` | 128798 | 151648 | 128013 |
| `</think>` | 128799 | 151649 | 128014 |
| `<|begin_of_sentence|>` | 0 | 151646 | 128000 |
| `<|end_of_sentence|>` | 1 | 151643 | 128001 |
| `<|User|>` | 128803 | 151644 | 128011 |
| `<|Assistant|>` | 128804 | 151645 | 128012 |
| Padding token | 2 | 151654 | 128004 |

Original tokens in the models:

| Token | Qwen 2.5 32B Base | Llama 3.3 70B Instruct |
| --- | --- | --- |
| `<think>` | `<|box_start|>` | `<|reserved_special_token_5|>` |
| `</think>` | `<|box_end|>` | `<|reserved_special_token_6|>` |
| `<|begin▁of▁sentence|>` | `<|object_ref_start|>` | `<|begin_of_text|>` |
| `<|end▁of▁sentence|>` | `<|endoftext|>` | `<|end_of_text|>` |
| `<|User|>` | `<|im_start|>` | `<|reserved_special_token_3|>` |
| `<|Assistant|>` | `<|im_end|>` | `<|reserved_special_token_4|>` |
| Padding token | `<|vision_pad|>` | `<|finetune_right_pad_id|>` |

All distilled versions and the original R1 seem to have accidentally assigned the padding token to `<|end▁of▁sentence|>`, which is mostly not a good idea, especially if you want to further finetune on top of these reasoning models. This will cause endless infinite generations, since most frameworks will mask the EOS token out as -100. We fixed all distilled and the original R1 versions with the correct padding token (Qwen uses `<|vision_pad|>`, Llama uses `<|finetune_right_pad_id|>`, and R1 uses `<|▁pad▁|>` or our own added `<|PAD▁TOKEN|>`).

## GGUF R1 Table

| MoE Bits | Type | Disk Size | Accuracy | Link | Details |
| --- | --- | --- | --- | --- | --- |
| 1.58bit | UD-IQ1_S | **131GB** | Fair | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ1_S) | MoE all 1.56bit. `down_proj` in MoE mixture of 2.06/1.56bit |
| 1.73bit | UD-IQ1_M | **158GB** | Good | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ1_M) | MoE all 1.56bit. `down_proj` in MoE left at 2.06bit |
| 2.22bit | UD-IQ2_XXS | **183GB** | Better | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ2_XXS) | MoE all 2.06bit. `down_proj` in MoE mixture of 2.5/2.06bit |
| 2.51bit | UD-Q2_K_XL | **212GB** | Best | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-Q2_K_XL) | MoE all 2.5bit. `down_proj` in MoE mixture of 3.5/2.5bit |
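As a quick sanity check of the padding-token fix described above, you can confirm that the pad token differs from EOS before fine-tuning (the checkpoint name is illustrative; the fixed Unsloth uploads should pass this check):

```python
from transformers import AutoTokenizer

# If pad == eos, trainers that mask EOS as -100 can produce endless generations.
tok = AutoTokenizer.from_pretrained("unsloth/DeepSeek-R1-Distill-Qwen-14B")
print(tok.pad_token, tok.eos_token)
assert tok.pad_token_id != tok.eos_token_id
```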
# Unsloth Dynamic 2.0 GGUFs

We're excited to introduce our Dynamic v2.0 quantization method - a major upgrade to our previous quants. This new method outperforms leading quantization methods and sets new benchmarks for 5-shot MMLU and KL Divergence. This means you can now run and fine-tune quantized LLMs while preserving as much accuracy as possible! You can run the 2.0 GGUFs on any inference engine, such as llama.cpp, Ollama, and Open WebUI. View all our Dynamic 2.0 GGUF models on [Hugging Face here](https://huggingface.co/collections/unsloth/unsloth-dynamic-v20-quants-68060d147e9b9231112823e6).

### 💡 What's New in Dynamic v2.0?

- **Revamped layer selection for GGUFs + safetensors:** Unsloth Dynamic 2.0 now selectively quantizes layers much more intelligently and extensively. Rather than modifying only select layers, we now dynamically adjust the quantization type of every possible layer, and the combinations differ for each layer and model.
- Currently selected and all future GGUF uploads will use Dynamic 2.0 and our new calibration dataset. The dataset ranges from **300K to 1.5M tokens** (depending on the model) and comprises high-quality, hand-curated and cleaned data to greatly enhance conversational chat performance.
- Previously, our Dynamic quantization (DeepSeek-R1 1.58-bit GGUF) was effective only for MoE architectures. **Dynamic 2.0 quantization now works on all models (including MoEs and non-MoEs).**
- **Model-specific quants:** Each model now uses a custom-tailored quantization scheme. E.g. the layers quantized in Gemma 3 differ significantly from those in Llama 4.
- To maximize efficiency, especially on Apple Silicon and ARM devices, we now also add Q4_NL, Q5.1, Q5.0, Q4.1, and Q4.0 formats.

To ensure accurate benchmarking, we built an internal evaluation framework to match the officially reported 5-shot MMLU scores of Llama 4 and Gemma 3. This allowed apples-to-apples comparisons between full-precision vs. Dynamic v2.0, **QAT** and standard **imatrix** GGUF quants.
Currently, we've released updates for:

- **Qwen3 (NEW):** [0.6B](https://huggingface.co/unsloth/Qwen3-0.6B-GGUF) • [1.7B](https://huggingface.co/unsloth/Qwen3-1.7B-GGUF) • [4B](https://huggingface.co/unsloth/Qwen3-4B-GGUF) • [8B](https://huggingface.co/unsloth/Qwen3-8B-GGUF) • [14B](https://huggingface.co/unsloth/Qwen3-14B-GGUF) • [30B-A3B](https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF) • [32B](https://huggingface.co/unsloth/Qwen3-32B-GGUF) • [235B-A22B](https://huggingface.co/unsloth/Qwen3-235B-A22B-GGUF)
- **Other:** [GLM-4-32B](https://huggingface.co/unsloth/GLM-4-32B-0414-GGUF) • [MAI-DS-R1](https://huggingface.co/unsloth/MAI-DS-R1-GGUF) • [QwQ (32B)](https://huggingface.co/unsloth/QwQ-32B-GGUF)
- **DeepSeek:** [R1](https://huggingface.co/unsloth/DeepSeek-R1-GGUF-UD) • [V3-0324](https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF-UD) • [R1-Distill-Llama](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF)
- **Llama:** [4 (Scout)](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF) • [4 (Maverick)](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF) • [3.1 (8B)](https://huggingface.co/unsloth/Llama-3.1-8B-Instruct-GGUF)
- **Gemma 3:** [4B](https://huggingface.co/unsloth/gemma-3-4b-it-GGUF) • [12B](https://huggingface.co/unsloth/gemma-3-12b-it-GGUF) • [27B](https://huggingface.co/unsloth/gemma-3-27b-it-GGUF) • [QAT](https://huggingface.co/unsloth/gemma-3-12b-it-qat-GGUF)
- **Mistral:** [Small-3.1-2503](https://huggingface.co/unsloth/Mistral-Small-3.1-24B-Instruct-2503-GGUF)

All future GGUF uploads will utilize Unsloth Dynamic 2.0, and our Dynamic 4-bit safetensor quants will also benefit from this in the future. Detailed analysis of our benchmarks and evaluation is further below.

*(Charts: KL Divergence comparison and 5-shot MMLU comparison across quantization methods.)*

## 📊 Why KL Divergence?

[Accuracy is Not All You Need](https://arxiv.org/pdf/2407.09141) shows that pruning layers - even ones selected as unnecessary - still yields vast differences in terms of "flips". A "flip" is defined as an answer changing from incorrect to correct or vice versa. The paper shows that MMLU might not decrease as we prune layers or quantize, but that is because some incorrect answers may have "flipped" to become correct. Our goal is to match the original model, so measuring "flips" is a good metric.
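For intuition, token-level KL Divergence between a full-precision model and a quantized variant can be estimated roughly as follows. This is a minimal PyTorch sketch, not our actual benchmarking framework; the checkpoint names and evaluation text are placeholders.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoints: a full-precision reference and a quantized variant.
ref = AutoModelForCausalLM.from_pretrained("full-precision-model")
quant = AutoModelForCausalLM.from_pretrained("quantized-model")
tok = AutoTokenizer.from_pretrained("full-precision-model")

ids = tok("Some held-out evaluation text goes here.", return_tensors="pt").input_ids
with torch.no_grad():
    p = F.log_softmax(ref(ids).logits, dim=-1)    # reference distribution (log-probs)
    q = F.log_softmax(quant(ids).logits, dim=-1)  # quantized distribution (log-probs)

# Mean per-token KL(P || Q); closer to 0 means closer to the full-precision model.
kld = F.kl_div(q, p, log_target=True, reduction="none").sum(-1).mean()
print(float(kld))
```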
*(Charts: flips vs. KL Divergence correlation, from "Accuracy is Not All You Need".)*

**KL Divergence** should be the **gold standard for reporting quantization errors**, as per the research paper "Accuracy is Not All You Need". **Using perplexity is incorrect**, since output token values can cancel out, so we must use KLD! The paper also shows that, interestingly, KL Divergence is highly correlated with flips, so our goal is to reduce the mean KL Divergence whilst increasing the disk space of the quantization as little as possible.

## ⚖️ Calibration Dataset Overfitting

Most frameworks report perplexity and KL Divergence using a test set of Wikipedia articles. However, we noticed that using a calibration dataset which is also Wikipedia-related causes quants to overfit and attain lower perplexity scores. We utilize the [Calibration_v3](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) and [Calibration_v5](https://gist.github.com/tristandruyen/9e207a95c7d75ddf37525d353e00659c/) datasets for fair testing, which include some wikitext data amongst other data. **Also, instruct models have unique chat templates, and using text-only calibration datasets is not effective for instruct models** (it is for base models). In fact, most imatrix GGUFs are typically calibrated with these issues. As a result, they naturally perform better on KL Divergence benchmarks that also use Wikipedia data, since the model is essentially optimized for that domain.

To ensure a fair and controlled evaluation, we chose not to use our own calibration dataset (which is optimized for chat performance) when benchmarking KL Divergence. Instead, we conducted tests using the same standard Wikipedia datasets, allowing us to directly compare the performance of our Dynamic 2.0 method against the baseline imatrix approach.

## 🔢 MMLU Replication Adventure

- Replicating MMLU 5-shot was nightmarish. We **could not** replicate MMLU results for many models, including Llama 3.1 (8B) Instruct and Gemma 3 (12B), due to **subtle implementation issues**. Llama 3.1 (8B), for example, should be getting ~68.2%, whilst incorrect implementations can attain **35% accuracy**.

  *(Chart: MMLU implementation differences.)*

- Llama 3.1 (8B) Instruct has an MMLU 5-shot accuracy of 67.8% using a naive MMLU implementation.
We find, however, that Llama **tokenizes "A" and "\_A" (A with a space in front) as different token ids**. If we consider both spaced and non-spaced tokens, we get 68.2% (+0.4%).

- Interestingly, Llama 3, as per Eleuther AI's [LLM Harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/llama3/instruct/mmlu/_continuation_template_yaml), also appends **"The best answer is"** to the question, following Llama 3's original MMLU benchmarks.
- There are many other subtle issues, so to benchmark everything in a controlled environment, we designed our own MMLU implementation from scratch by investigating [github.com/hendrycks/test](https://github.com/hendrycks/test) directly, and verified our results across multiple models, comparing against reported numbers.

## [Direct link to heading](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs\#gemma-3-qat-replication-benchmarks) ✨ Gemma 3 QAT Replication, Benchmarks

The Gemma team released two QAT (quantization aware training) versions of Gemma 3:

1. Q4\_0 GGUF - quantizes all layers to Q4\_0 via the formula `w = q * block_scale`, with each block having 32 weights. See the [llama.cpp wiki](https://github.com/ggml-org/llama.cpp/wiki/Tensor-Encoding-Schemes) for more details.
2. int4 version - presumably [TorchAO int4 style](https://github.com/pytorch/ao/blob/main/torchao/quantization/README.md)?

We benchmarked all Q4\_0 GGUF versions, and did extensive experiments on the 12B model. We see the **12B Q4\_0 QAT model gets 67.07%** whilst the full bfloat16 12B version gets 67.15% on 5-shot MMLU. That's very impressive! The 27B model is nearly there as well!

| Metric | 1B | 4B | 12B | 27B |
| --- | --- | --- | --- | --- |
| MMLU 5 shot | 26.12% | 55.13% | **67.07% (67.15% BF16)** | **70.64% (71.5% BF16)** |
| Disk Space | 0.93GB | 2.94GB | **7.52GB** | 16.05GB |
| **Efficiency\*** | 1.20 | 10.26 | **5.59** | 2.84 |

We designed a new **Efficiency metric** which calculates the usefulness of the model whilst also taking into account its disk size and MMLU 5-shot score:

$$\text{Efficiency} = \frac{\text{MMLU 5-shot score} - 25}{\text{Disk Space (GB)}}$$

We have to **minus 25** since MMLU has 4 multiple choices - A, B, C or D. Assume we make a model that simply randomly chooses answers - it'll get 25% accuracy, and have a disk space of a few bytes. But clearly this is not a useful model.

On KL Divergence vs the base model, below is a table showcasing the improvements. Reminder: the closer the KL Divergence is to 0, the better (i.e. 0 means identical to the full-precision model).

| Quant | Baseline KLD | Baseline size (GB) | New KLD | New size (GB) |
| --- | --- | --- | --- | --- |
| IQ1\_S | 1.035688 | 5.83 | 0.972932 | 6.06 |
| IQ1\_M | 0.832252 | 6.33 | 0.800049 | 6.51 |
| IQ2\_XXS | 0.535764 | 7.16 | 0.521039 | 7.31 |
| IQ2\_M | 0.26554 | 8.84 | 0.258192 | 8.96 |
| Q2\_K\_XL | 0.229671 | 9.78 | 0.220937 | 9.95 |
| Q3\_K\_XL | 0.087845 | 12.51 | 0.080617 | 12.76 |
| Q4\_K\_XL | 0.024916 | 15.41 | 0.023701 | 15.64 |

If we plot the ratio of the disk space increase against the ratio of the KL Divergence change, we can see a much clearer benefit! Our dynamic 2-bit Q2\_K\_XL reduces KLD quite a bit (around 7.5%).

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FsYSRIPGSjExzSr5y828z%252Fchart%282%29.svg%3Falt%3Dmedia%26token%3De87db00e-6e3e-4478-af0b-bc84ed2e463b&width=768&dpr=4&quality=100&sign=9073c258&sv=2)
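As a quick worked example of the Efficiency metric defined above (the helper below is purely illustrative; the numbers are the Gemma 3 QAT values from the table):

```python
def efficiency(mmlu_5shot_percent: float, disk_space_gb: float) -> float:
    """Efficiency = (MMLU 5-shot score - 25) / disk space in GB.
    Subtracting 25 removes the score a random guesser gets on 4-choice MMLU."""
    return (mmlu_5shot_percent - 25.0) / disk_space_gb

# Gemma 3 QAT Q4_0 numbers from the table above
for name, mmlu, gb in [("1B", 26.12, 0.93), ("4B", 55.13, 2.94),
                       ("12B", 67.07, 7.52), ("27B", 70.64, 16.05)]:
    print(f"{name}: {efficiency(mmlu, gb):.2f}")
# -> roughly 1.20, 10.25, 5.59, 2.84, matching the Efficiency row (small differences are rounding)
```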
Truncated table of results for MMLU for Gemma 3 (27B). See below.

1. **Our dynamic 4-bit version is 2GB smaller whilst having +1% extra accuracy vs the QAT version!**
2. Efficiency wise, 2-bit Q2\_K\_XL and others seem to do very well!

| Quant | 5-shot MMLU (Unsloth) | 5-shot MMLU (Unsloth + QAT) | Disk Size (GB) | Efficiency |
| --- | --- | --- | --- | --- |
| IQ1\_M | 48.10 | 47.23 | 6.51 | 3.42 |
| IQ2\_XXS | 59.20 | 56.57 | 7.31 | 4.32 |
| IQ2\_M | 66.47 | 64.47 | 8.96 | 4.40 |
| Q2\_K\_XL | 68.70 | 67.77 | 9.95 | 4.30 |
| Q3\_K\_XL | 70.87 | 69.50 | 12.76 | 3.49 |
| **Q4\_K\_XL** | **71.47** | **71.07** | **15.64** | **2.94** |
| **Google QAT** | | **70.64** | **17.2** | **2.65** |

Click here for Full Google's Gemma 3 (27B) QAT Benchmarks: [Direct link to heading](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs#click-here-for-fullgoogles-gemma-3-27b-qat-benchmarks)

| Model | 5-shot MMLU (Unsloth) | 5-shot MMLU (Unsloth + QAT) | Disk Size (GB) | Efficiency |
| --- | --- | --- | --- | --- |
| IQ1\_S | 41.87 | 43.37 | 6.06 | 3.03 |
| IQ1\_M | 48.10 | 47.23 | 6.51 | 3.42 |
| IQ2\_XXS | 59.20 | 56.57 | 7.31 | 4.32 |
| IQ2\_M | 66.47 | 64.47 | 8.96 | 4.40 |
| Q2\_K | 68.50 | 67.60 | 9.78 | 4.35 |
| Q2\_K\_XL | 68.70 | 67.77 | 9.95 | 4.30 |
| IQ3\_XXS | 68.27 | 67.07 | 10.07 | 4.18 |
| Q3\_K\_M | 70.70 | 69.77 | 12.51 | 3.58 |
| Q3\_K\_XL | 70.87 | 69.50 | 12.76 | 3.49 |
| Q4\_K\_M | 71.23 | 71.00 | 15.41 | 2.98 |
| **Q4\_K\_XL** | **71.47** | **71.07** | **15.64** | **2.94** |
| Q5\_K\_M | 71.77 | 71.23 | 17.95 | 2.58 |
| Q6\_K | 71.87 | 71.60 | 20.64 | 2.26 |
| Q8\_0 | 71.60 | 71.53 | 26.74 | 1.74 |
| **Google QAT** | | **70.64** | **17.2** | **2.65** |

## [Direct link to heading](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs\#llama-4-bug-fixes--run) 🦙 Llama 4 Bug Fixes + Run

We also helped find and fix a few Llama 4 bugs:

- Llama 4 Scout changed the RoPE Scaling configuration in their official repo. We helped resolve issues in llama.cpp to enable this [change here](https://github.com/ggml-org/llama.cpp/pull/12889)

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FaJ5AOubUkMjbbvgiOekf%252Fimage.png%3Falt%3Dmedia%26token%3Db1fbdea1-7c95-4afa-9b12-aedec012f38b&width=768&dpr=4&quality=100&sign=2203a04c&sv=2)

- Llama 4's QK Norm epsilon for both Scout and Maverick should come from the config file - this means using 1e-05 and not 1e-06. We helped resolve these in [llama.cpp](https://github.com/ggml-org/llama.cpp/pull/12889) and [transformers](https://github.com/huggingface/transformers/pull/37418)
- The Llama 4 team and vLLM also independently fixed an issue with QK Norm being shared across all heads (it should not be) [here](https://github.com/vllm-project/vllm/pull/16311). MMLU Pro increased from 68.58% to 71.53% accuracy.
- [Wolfram Ravenwolf](https://x.com/WolframRvnwlf/status/1909735579564331016) showcased how our GGUFs via llama.cpp attain much higher accuracy than third-party inference providers - this was most likely a combination of the issues explained above, and also probably due to quantization issues.

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252F4Wrz07bAdvluM2gACggU%252FGoC79hYXwAAPTMs.jpg%3Falt%3Dmedia%26token%3D05001bc0-74b0-4bbb-a89f-894fcdb985d8&width=768&dpr=4&quality=100&sign=23d1a190&sv=2)

As shown in our graph, our 4-bit Dynamic QAT quantization delivers better performance on 5-shot MMLU while also being smaller in size.
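If you want to verify the RoPE scaling and norm-epsilon values your own checkpoint ships with (see the epsilon fix above), here is a small hedged sketch using `AutoConfig` from transformers. The repo id is one of our safetensor uploads, and we search the config dict for matching keys rather than hard-coding attribute paths, since exact field names can vary between transformers versions:

```python
from transformers import AutoConfig

# Any Llama 4 repo with a config.json works; this is one of our uploads.
cfg = AutoConfig.from_pretrained("unsloth/Llama-4-Scout-17B-16E-Instruct").to_dict()

def find_keys(d, needles, path=""):
    """Recursively print config entries whose key contains any of the needles."""
    for k, v in d.items():
        p = f"{path}.{k}" if path else k
        if any(n in k for n in needles):
            print(p, "=", v)
        elif isinstance(v, dict):
            find_keys(v, needles, p)

find_keys(cfg, needles=("eps", "rope_scaling"))
# Expect the norm epsilon to read 1e-05 (not 1e-06) per the fix described above.
```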
### [Direct link to heading](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs\#running-llama-4-scout) Running Llama 4 Scout:

To run Llama 4 Scout for example, first clone and build llama.cpp:

```bash
apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
```

Then download our new Dynamic 2.0 quant for Scout:

```python
# !pip install huggingface_hub hf_transfer
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download
snapshot_download(
    repo_id = "unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF",
    local_dir = "unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF",
    allow_patterns = ["*IQ2_XXS*"],
)
```

And let's do inference!

```bash
./llama.cpp/llama-cli \
    --model unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF/Llama-4-Scout-17B-16E-Instruct-UD-IQ2_XXS.gguf \
    --threads 32 \
    --ctx-size 16384 \
    --n-gpu-layers 99 \
    -ot ".ffn_.*_exps.=CPU" \
    --seed 3407 \
    --prio 3 \
    --temp 0.6 \
    --min-p 0.01 \
    --top-p 0.9 \
    -no-cnv \
    --prompt "<|header_start|>user<|header_end|>\n\nCreate a Flappy Bird game.<|eot|><|header_start|>assistant<|header_end|>\n\n"
```

Read more on running Llama 4 here: [https://docs.unsloth.ai/basics/tutorial-how-to-run-and-fine-tune-llama-4](https://docs.unsloth.ai/basics/tutorial-how-to-run-and-fine-tune-llama-4)

[PreviousQwen3: How to Run & Fine-tune](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune) [NextLlama 4: How to Run & Fine-tune](https://docs.unsloth.ai/basics/llama-4-how-to-run-and-fine-tune)

Last updated 7 days ago

Was this helpful?
Qwen3, full fine-tuning & all models are now supported! 🦥

## [Direct link to heading](https://docs.unsloth.ai/get-started/installing-+-updating/pip-install\#recommended-installation) **Recommended installation:**

**Install with pip (recommended)** for Linux devices:

```bash
pip install unsloth
```

Unsloth does not support Python 3.13. Use 3.12, 3.11, 3.10 or 3.9.

* * *

## [Direct link to heading](https://docs.unsloth.ai/get-started/installing-+-updating/pip-install\#uninstall--reinstall) Uninstall + Reinstall

If you're still encountering dependency issues with Unsloth, many users have resolved them by force-uninstalling and reinstalling Unsloth:

```bash
pip install --upgrade --force-reinstall --no-cache-dir --no-deps git+https://github.com/unslothai/unsloth.git
pip install --upgrade --force-reinstall --no-cache-dir --no-deps git+https://github.com/unslothai/unsloth-zoo.git
```

## [Direct link to heading](https://docs.unsloth.ai/get-started/installing-+-updating/pip-install\#advanced-pip-installation) Advanced Pip Installation

Do **NOT** use this if you have [Conda](https://docs.unsloth.ai/get-started/installing-+-updating/conda-install). Pip is a bit more complex since there are dependency issues. The pip command differs for `torch 2.2, 2.3, 2.4, 2.5` and CUDA versions. For other torch versions, we support `torch211`, `torch212`, `torch220`, `torch230`, `torch240`, and for CUDA versions we support `cu118`, `cu121` and `cu124`. For Ampere devices (A100, H100, RTX 3090) and above, use `cu118-ampere`, `cu121-ampere` or `cu124-ampere`.

For example, if you have `torch 2.4` and `CUDA 12.1`, use:

```bash
pip install --upgrade pip
pip install "unsloth[cu121-torch240] @ git+https://github.com/unslothai/unsloth.git"
```

Another example, if you have `torch 2.5` and `CUDA 12.4`, use:

```bash
pip install --upgrade pip
pip install "unsloth[cu124-torch250] @ git+https://github.com/unslothai/unsloth.git"
```

And other examples:

```bash
pip install "unsloth[cu121-ampere-torch240] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu118-ampere-torch240] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu121-torch240] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu118-torch240] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu121-torch230] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu121-ampere-torch230] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu121-torch250] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu124-ampere-torch250] @ git+https://github.com/unslothai/unsloth.git"
```

Or, run the below in a terminal to get the **optimal** pip installation command:

```bash
wget -qO- https://raw.githubusercontent.com/unslothai/unsloth/main/unsloth/_auto_install.py | python -
```

Or, run the below manually in a Python REPL:

```python
try: import torch
except: raise ImportError('Install torch via `pip install torch`')
from packaging.version import Version as V
v = V(torch.__version__)
cuda = str(torch.version.cuda)
is_ampere = torch.cuda.get_device_capability()[0] >= 8
if cuda != "12.1" and cuda != "11.8" and cuda != "12.4": raise RuntimeError(f"CUDA = {cuda} not supported!")
if   v <= V('2.1.0'): raise RuntimeError(f"Torch = {v} too old!")
elif v <= V('2.1.1'): x = 'cu{}{}-torch211'
elif v <= V('2.1.2'): x = 'cu{}{}-torch212'
elif v  < V('2.3.0'): x = 'cu{}{}-torch220'
elif v  < V('2.4.0'): x = 'cu{}{}-torch230'
elif v  < V('2.5.0'): x = 'cu{}{}-torch240'
elif v  < V('2.6.0'): x = 'cu{}{}-torch250'
else: raise RuntimeError(f"Torch = {v} too new!")
x = x.format(cuda.replace(".", ""), "-ampere" if is_ampere else "")
print(f'pip install --upgrade pip && pip install "unsloth[{x}] @ git+https://github.com/unslothai/unsloth.git"')
```

[PreviousUpdating](https://docs.unsloth.ai/get-started/installing-+-updating/updating) [NextWindows Installation](https://docs.unsloth.ai/get-started/installing-+-updating/windows-installation)

Last updated 8 days ago

Was this helpful?
Qwen3, full fine-tuning & all models are now supported! 🦥 Read our full DeepSeek-R1 blogpost here: [unsloth.ai/blog/deepseekr1-dynamic](https://unsloth.ai/blog/deepseekr1-dynamic) ### [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-r1-how-to-run-locally/deepseek-r1-dynamic-1.58-bit\#id-1-bit-small-dynamic-vs.-basic) 1-bit (Small) - Dynamic vs. Basic GGUF Type Quant Size (GB) Seed Pygame Background Accelerate SPACE Bird shape Land Top right score Pipes Best Score Quit Runnable Score Avg Score Errors Notes Dynamic IQ1\_S 131 3407 1 0.5 1 0.5 0.5 1 0.5 1 1 0 7 score =!inc SyntaxError: invalid syntax Selects random shapes and colors at the start, but doesn't rotate across trials Dynamic IQ1\_S 131 3408 1 1 0.25 1 0.5 1 0.5 1 1 0 7.25 score =B4 NameError: name 'B4' is not defined Better - selects pipe colors randomnly, but all are just 1 color - should be different. Dropping to ground fails to reset acceleration. Dynamic IQ1\_S 131 3409 1 0.5 0.5 0.5 0 1 1 1 1 0 6.5 6.92 score =3D 0 SyntaxError: invalid decimal literal Too hard to play - acceleration too fast. Pipe colors now are random, but bird shape not changing. Land collison fails. Basic IQ1\_S 133 3407 0 0 0 0 0 0 0 0 0 0 0 No code Fully failed. Repeats "with Dark Colurs" forever Basic IQ1\_S 133 3408 0 0 0 0 0 0 0 0 0 0 0 No code Fully failed. Repeats "Pygame's" forever Basic IQ1\_S 133 3409 0 0 0 0 0 0 0 0 0 0 0 0 No code Fully failed. Repeats "pipe\_x = screen\_height pipe\_x = screen\_height pipe\_height = screen\_height - Pipe\_height" forever. ### [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-r1-how-to-run-locally/deepseek-r1-dynamic-1.58-bit\#id-1-bit-medium-dynamic-vs.-basic) 1-bit (Medium) - Dynamic vs. Basic GGUF Type Quant Size (GB) Seed Pygame Background Accelerate SPACE Bird shape Land Top right score Pipes Best Score Quit Runnable Score Avg Score Errors Notes Dynamic IQ1\_M 158 3407 1 1 0.75 1 1 1 1 1 1 1 9.75 None A bit fast and hard to play. Dynamic IQ1\_M 158 3408 1 1 0.5 1 1 1 1 1 1 1 9.5 None Very good - land should be clearer. Acceleration should be slower. Dynamic IQ1\_M 158 3409 1 0.5 1 0.5 0.5 1 0.5 1 1 1 8 9.08 None Background color does not change across trials.Pipes do not touch the top. No land is seen. Basic IQ1\_M 149 3407 1 0 0 0 0 0 0 0 1 0 2 if game\_over: NameError: name 'game\_over' is not defined Fully failed. Black screen only Basic IQ1\_M 149 3408 1 0 0 0 0 0 0 0 1 0 2 No code Fully failed. Black screen then closes. Basic IQ1\_M 149 3409 1 0 0 0 0 0 0 0 0 0 1 1.67 window.fill((100, 100, 255)) Light Blue SyntaxError: invalid syntax && main() NameError: name 'main' is not defined. Fully failed. ### [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-r1-how-to-run-locally/deepseek-r1-dynamic-1.58-bit\#id-2-bit-extra-extra-small-dynamic-vs.-basic) 2-bit (Extra extra Small) - Dynamic vs. Basic GGUF Type Quant Size (GB) Seed Pygame Background Accelerate SPACE Bird shape Land Top right score Pipes Best Score Quit Runnable Score Avg Score Errors Notes Dynamic IQ2\_XXS 183 3407 1 1 0.5 1 1 1 1 1 1 1 9.5 None Too hard to play - acceleration too slow. Lags Dynamic IQ2\_XXS 183 3408 1 1 1 1 1 1 0.5 0.5 1 0 8 global best\_score SyntaxError: name 'best\_score' is assigned to before global declaration Had to edit 2 lines - remove global best\_score, and set pipe\_list = \[\] Dynamic IQ2\_XXS 183 3409 1 1 1 1 1 1 1 1 1 1 10 9.17 None Extremely good. 
Even makes pipes have random distances between them. Basic IQ2\_XXS 175 3407 1 0.5 0.5 0.5 1 0 0.5 1 0 0 5 pipe\_color = random.choice(\[(34, 139, 34), (139, 69, 19), (47, 47, 47)) SyntaxError: closing parenthesis ')' does not match opening parenthesis '\[' && pygame.draw.polygon(screen, bird\_color, points) ValueError: points argument must contain more than 2 points\ \ Fails quiting. Same color. Collison detection a bit off. No score\ \ Basic\ \ IQ2\_XXS\ \ 175\ \ 3408\ \ 1\ \ 0.5\ \ 0.5\ \ 0.5\ \ 1\ \ 1\ \ 0.5\ \ 1\ \ 0\ \ 0\ \ 6\ \ pipes.append({'x': SCREEN\_WIDTH, 'gap\_y': random.randint(50, SCREEN\_HEIGHT - 150)) SyntaxError: closing parenthesis ')' does not match opening parenthesis '{'\ \ Acceleration weird. Chooses 1 color per round. Cannot quit.\ \ Basic\ \ IQ2\_XXS\ \ 175\ \ 3409\ \ 1\ \ 1\ \ 1\ \ 1\ \ 1\ \ 1\ \ 1\ \ 0\ \ 0.5\ \ 0\ \ 7.5\ \ 6.17\ \ screen = pygame.display.set\_mode((SCREEN\_WIDTH, SCREENHEIGHT)) NameError: name 'SCREENHEIGHT' is not defined. Did you mean: 'SCREEN\_HEIGHT'?\ \ OK. Colors change. Best score does not update. Quit only ESC not Q.\ \ ### [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-r1-how-to-run-locally/deepseek-r1-dynamic-1.58-bit\#dynamic-quantization-trial-output) **Dynamic Quantization trial output**\ \ IQ1\_S codeIQ1\_M codeIQ2\_XXS code\ \ [12KB\\ \\ inference\_UD-IQ1\_S\_3407.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FqpBdpW55h5mNAzVoTxPI%2Finference_UD-IQ1_S_3407.txt?alt=media&token=37b19689-73e5-46d0-98be-352e515dfdf8)\ \ [11KB\\ \\ inference\_UD-IQ1\_S\_3408.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FTdIrJSqc2VbNJy1bf3w5%2Finference_UD-IQ1_S_3408.txt?alt=media&token=e11f73bb-80be-49e5-91e2-f3a1f5495dcd)\ \ [10KB\\ \\ inference\_UD-IQ1\_S\_3409.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FBk2ZwEIcLmvZQ3jlMLzw%2Finference_UD-IQ1_S_3409.txt?alt=media&token=052885f5-bee9-420d-a9c0-827412ac17c8)\ \ [10KB\\ \\ inference\_UD-IQ1\_M\_3407.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2Ft7YmT1H3Nflcy5kAp1LE%2Finference_UD-IQ1_M_3407.txt?alt=media&token=6f62f911-3364-4f92-b311-c1fa9b759370)\ \ [30KB\\ \\ inference\_UD-IQ1\_M\_3408.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FH6BCTeWlJpUkfeEmeqpu%2Finference_UD-IQ1_M_3408.txt?alt=media&token=7727a999-8c0a-4baf-8542-be8686a01630)\ \ [9KB\\ \\ inference\_UD-IQ1\_M\_3409.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FvVJI0H2F9KTNj5kwUCtC%2Finference_UD-IQ1_M_3409.txt?alt=media&token=0f863d41-53d6-4c94-8d57-bf1eeb79ead5)\ \ [29KB\\ \\ inference\_UD-IQ2\_XXS\_3407.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F26jxRY5mWuon67OfvGtq%2Finference_UD-IQ2_XXS_3407.txt?alt=media&token=daf9bf7d-245e-4b54-b0c0-a6273833835a)\ \ [34KB\\ \\ inference\_UD-IQ2\_XXS\_3408.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FEhjjYN7vAh7gbmR8oXbS%2Finference_UD-IQ2_XXS_3408.txt?alt=media&token=4b50d6dd-2798-44c7-aa92-7e67c09868a4)\ \ [42KB\\ \\ 
inference\_UD-IQ2\_XXS\_3409.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FXwCSfIf16nTwHzcWepoV%2Finference_UD-IQ2_XXS_3409.txt?alt=media&token=2f7539c9-026d-41e7-b7c7-5738a89ae5d4)\ \ ### [Direct link to heading](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-r1-how-to-run-locally/deepseek-r1-dynamic-1.58-bit\#non-dynamic-quantization-trial-output) Non Dynamic Quantization trial output\ \ IQ1\_S basic codeIQ1\_M basic codeIQ2\_XXS basic code\ \ [25KB\\ \\ inference\_basic-IQ1\_S\_3407.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FFtAMzAucSfKMkkmXItTj%2Finference_basic-IQ1_S_3407.txt?alt=media&token=76bfcf47-e1ce-442b-af49-6bfb6af7d046)\ \ [15KB\\ \\ inference\_basic-IQ1\_S\_3408.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F4NhjCVFMwCwT2OCj0IJ5%2Finference_basic-IQ1_S_3408.txt?alt=media&token=d4715674-3347-400b-9eb6-ae5d4470feeb)\ \ [14KB\\ \\ inference\_basic-IQ1\_S\_3409.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2Fb0ZW3xs7R7IMryO7n7Yp%2Finference_basic-IQ1_S_3409.txt?alt=media&token=64b8825b-7103-4708-9d12-12770e43b546)\ \ [7KB\\ \\ inference\_basic-IQ1\_M\_3407.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FmZ2TsQEzoGjhGlqUjtmj%2Finference_basic-IQ1_M_3407.txt?alt=media&token=975a30d6-2d90-47eb-9d68-b50fd47337f7)\ \ [7KB\\ \\ inference\_basic-IQ1\_M\_3408.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FIx9TQ99Qpmk7BViNLFBl%2Finference_basic-IQ1_M_3408.txt?alt=media&token=b88e1e5b-4535-4d93-bd67-f81def7377d5)\ \ [12KB\\ \\ inference\_basic-IQ1\_M\_3409.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FDX7XYpJPxXKAMZeGhSrr%2Finference_basic-IQ1_M_3409.txt?alt=media&token=6da9127e-272b-4e74-b990-6657e25eea6b)\ \ [25KB\\ \\ inference\_basic-IQ2\_XXS\_3407.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FajsVHsVqlWpwHk7mY32t%2Finference_basic-IQ2_XXS_3407.txt?alt=media&token=cbbf36a2-0d6a-4a87-8232-45b0b7fcc588)\ \ [34KB\\ \\ inference\_basic-IQ2\_XXS\_3408.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2F4vjncPu2r2D7F5jVOC7I%2Finference_basic-IQ2_XXS_3408.txt?alt=media&token=9ed635a2-bf97-4f49-b26f-6e985d0ab1b7)\ \ [34KB\\ \\ inference\_basic-IQ2\_XXS\_3409.txt](https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FJmVOFgrRyXjY4lYZXE96%2Finference_basic-IQ2_XXS_3409.txt?alt=media&token=faad5bff-ba7f-41f1-abd5-7896f17a5b25)\ \ [PreviousDeepSeek-R1: How to Run Locally](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/deepseek-r1-how-to-run-locally) [NextTutorial: How to Finetune Llama-3 and Use In Ollama](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama)\ \ Last updated 3 months ago\ \ Was this helpful?
Qwen3, full fine-tuning & all models are now supported! 🦥 Qwen's new Qwen3 models deliver state-of-the-art advancements in reasoning, instruction-following, agent capabilities, and multilingual support. All Qwen3 uploads use our new Unsloth [Dynamic 2.0](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs) methodology, delivering the best performance on 5-shot MMLU and KL Divergence benchmarks. This means, you can run and fine-tune quantized Qwen3 LLMs with minimal accuracy loss! We also uploaded Qwen3 with native 128K context length. Qwen achieves this by using YaRN to extend its original 40K window to 128K. [Unsloth](https://github.com/unslothai/unsloth) also now supports fine-tuning of Qwen3 and Qwen3 MOE models — 2x faster, with 70% less VRAM, and 8x longer context lengths. Fine-tune Qwen3 (14B) for free using our [Colab notebook.](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_(14B)-Reasoning-Conversational.ipynb) • [**Running Qwen3 Tutorial**](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune#ollama-run-qwen3-tutorial) • [**Fine-tuning Qwen3 Tutorial**](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune#fine-tuning-qwen3-with-unsloth) #### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#qwen3-unsloth-dynamic-2.0-with-optimal-configs) **Qwen3 - Unsloth Dynamic 2.0** with optimal configs: Dynamic 2.0 GGUF (to run) 128K Context GGUF Dynamic 4-bit Safetensor (to finetune/deploy) - [0.6B](https://huggingface.co/unsloth/Qwen3-0.6B-GGUF) - [1.7B](https://huggingface.co/unsloth/Qwen3-1.7B-GGUF) - [4B](https://huggingface.co/unsloth/Qwen3-4B-GGUF) - [8B](https://huggingface.co/unsloth/Qwen3-8B-GGUF) - [14B](https://huggingface.co/unsloth/Qwen3-14B-GGUF) - [30B-A3B](https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF) - [32B](https://huggingface.co/unsloth/Qwen3-32B-GGUF) - [235B-A22B](https://huggingface.co/unsloth/Qwen3-235B-A22B-GGUF) - [4B](https://huggingface.co/unsloth/Qwen3-4B-128K-GGUF) - [8B](https://huggingface.co/unsloth/Qwen3-8B-128K-GGUF) - [14B](https://huggingface.co/unsloth/Qwen3-14B-128K-GGUF) - [30B-A3B](https://huggingface.co/unsloth/Qwen3-30B-A3B-128K-GGUF) - [32B](https://huggingface.co/unsloth/Qwen3-32B-128K-GGUF) - [235B-A22B](https://huggingface.co/unsloth/Qwen3-235B-A22B-128K-GGUF) - [0.6B](https://huggingface.co/unsloth/Qwen3-0.6B-unsloth-bnb-4bit) - [1.7B](https://huggingface.co/unsloth/Qwen3-1.7B-unsloth-bnb-4bit) - [4B](https://huggingface.co/unsloth/Qwen3-4B-unsloth-bnb-4bit) - [8B](https://huggingface.co/unsloth/Qwen3-8B-unsloth-bnb-4bit) - [14B](https://huggingface.co/unsloth/Qwen3-14B-unsloth-bnb-4bit) - [30B-A3B](https://huggingface.co/unsloth/Qwen3-30B-A3B-bnb-4bit) - [32B](https://huggingface.co/unsloth/Qwen3-32B-unsloth-bnb-4bit) ## [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#running-qwen3) 🖥️ **Running Qwen3** ### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#official-recommended-settings) ⚙️ Official Recommended Settings According to Qwen, these are the recommended settings for inference: Non-Thinking Mode Settings: Thinking Mode Settings: **Temperature = 0.7** **Temperature = 0.6** Min\_P = 0.0 (optional, but 0.01 works well, llama.cpp default is 0.1) Min\_P = 0.0 Top\_P = 0.8 Top\_P = 0.95 TopK = 20 TopK = 20 **Chat template/prompt format:** Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] whitespace-pre-wrap <|im_start|>user\nWhat is 
2+2?<|im_end|>\n<|im_start|>assistant\n ``` For NON thinking mode, we purposely enclose <think> and </think> with nothing: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] whitespace-pre-wrap <|im_start|>user\nWhat is 2+2?<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n ``` **For Thinking-mode, DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. ### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#switching-between-thinking-and-non-thinking-mode) Switching Between Thinking and Non-Thinking Mode Qwen3 models come with built-in "thinking mode" to boost reasoning and improve response quality - similar to how [QwQ-32B](https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms/qwq-32b-how-to-run-effectively) worked. Instructions for switching will differ depending on the inference engine you're using so ensure you use the correct instructions. #### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#instructions-for-llama.cpp-and-ollama) Instructions for llama.cpp and Ollama: You can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations. Here is an example of multi-turn conversation: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] > Who are you /no_think <think> </think> I am Qwen, a large-scale language model developed by Alibaba Cloud. [...] > How many 'r's are in 'strawberries'? /think <think> Okay, let's see. The user is asking how many times the letter 'r' appears in the word "strawberries". [...] </think> The word strawberries contains 3 instances of the letter r. [...] ``` #### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#instructions-for-transformers-and-vllm) Instructions for transformers and vLLM: **Thinking mode:** `enable_thinking=True` By default, Qwen3 has thinking enabled. When you call `tokenizer.apply_chat_template`, you **don’t need to set anything manually.** Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # Default is True ) ``` In thinking mode, the model will generate an extra `<think>...</think>` block before the final answer — this lets it "plan" and sharpen its responses. **Non-thinking mode:** `enable_thinking=False` Enabling non-thinking will make Qwen3 will skip all the thinking steps and behave like a normal LLM. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=False # Disables thinking mode ) ``` This mode will provide final responses directly — no `<think>` blocks, no chain-of-thought. ### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#ollama-run-qwen3-tutorial) 🦙 Ollama: Run Qwen3 Tutorial 1. Install `ollama` if you haven't already! You can only run models up to 32B in size. To run the full 235B-A22B model, [see here](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune#running-qwen3-235b-a22b). Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] apt-get update apt-get install pciutils -y curl -fsSL https://ollama.com/install.sh | sh ``` 1. 
Run the model! Note you can call `ollama serve` in another terminal if it fails! We include all our fixes and suggested parameters (temperature etc) in `params` in our Hugging Face upload! Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] ollama run hf.co/unsloth/Qwen3-8B-GGUF:Q4_K_XL ``` 1. To disable thinking, use (or you can set it in the system prompt): Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] >>> Write your prompt here /nothink ``` If you're experiencing any looping, Ollama might have set your context length window to 2,048 or so. If this is the case, bump it up to 32,000 and see if the issue still persists. ### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#llama.cpp-run-qwen3-tutorial) 📖 Llama.cpp: Run Qwen3 Tutorial 1. Obtain the latest `llama.cpp` on [GitHub here](https://github.com/ggml-org/llama.cpp). You can follow the build instructions below as well. Change `-DGGML_CUDA=ON` to `-DGGML_CUDA=OFF` if you don't have a GPU or just want CPU inference. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] apt-get update apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y git clone https://github.com/ggml-org/llama.cpp cmake llama.cpp -B llama.cpp/build \ -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-gguf-split cp llama.cpp/build/bin/llama-* llama.cpp ``` 1. Download the model via (after installing `pip install huggingface_hub hf_transfer` ). You can choose Q4\_K\_M, or other quantized versions. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] # !pip install huggingface_hub hf_transfer import os os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1" from huggingface_hub import snapshot_download snapshot_download( repo_id = "unsloth/Qwen3-32B-GGUF", local_dir = "unsloth/Qwen3-32B-GGUF", allow_patterns = ["*UD-Q4_K_XL*"], ) ``` 1. Run the model and try any prompt. To disable thinking, use (or you can set it in the system prompt): Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] >>> Write your prompt here /nothink ``` ### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#running-qwen3-235b-a22b) Running Qwen3-235B-A22B For Qwen3-235B-A22B, we will specifically use Llama.cpp for optimized inference and a plethora of options. 1. We're following similar steps to above however this time we'll also need to perform extra steps because the model is so big. 2. Download the model via (after installing `pip install huggingface_hub hf_transfer` ). You can choose UD\_IQ2\_XXS, or other quantized versions.. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] # !pip install huggingface_hub hf_transfer import os os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1" from huggingface_hub import snapshot_download snapshot_download( repo_id = "unsloth/Qwen3-235B-A22B-GGUF", local_dir = "unsloth/Qwen3-235B-A22B-GGUF", allow_patterns = ["*UD-IQ2_XXS*"], ) ``` 3. Run the model and try any prompt. 4. Edit `--threads 32` for the number of CPU threads, `--ctx-size 16384` for context length, `--n-gpu-layers 99` for GPU offloading on how many layers. Try adjusting it if your GPU goes out of memory. Also remove it if you have CPU only inference. Use `-ot ".ffn_.*_exps.=CPU"` to offload all MoE layers to the CPU! 
This effectively allows you to fit all non MoE layers on 1 GPU, improving generation speeds. You can customize the regex expression to fit more layers if you have more GPU capacity. Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] whitespace-pre-wrap ./llama.cpp/llama-cli \ --model unsloth/Qwen3-235B-A22B-GGUF/Qwen3-235B-A22B-UD-IQ2_XXS.gguf \ --threads 32 \ --ctx-size 16384 \ --n-gpu-layers 99 \ -ot ".ffn_.*_exps.=CPU" \ --seed 3407 \ --prio 3 \ --temp 0.6 \ --min-p 0.0 \ --top-p 0.95 \ --top-k 20 \ -no-cnv \ --prompt "<|im_start|>user\nCreate a Flappy Bird game in Python. You must include these things:\n1. You must use pygame.\n2. The background color should be randomly chosen and is a light shade. Start with a light blue color.\n3. Pressing SPACE multiple times will accelerate the bird.\n4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.\n5. Place on the bottom some land colored as dark brown or yellow chosen randomly.\n6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.\n7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.\n8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.\nThe final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<|im_end|>\n<|im_start|>assistant\n" ``` ## [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#fine-tuning-qwen3-with-unsloth) 🦥 Fine-tuning Qwen3 with Unsloth Unsloth makes Qwen3 fine-tuning 2x faster, use 70% less VRAM and supports 8x longer context lengths. Qwen3 (14B) fits comfortably in a Google Colab 16GB VRAM Tesla T4 GPU. Because Qwen3 supports both reasoning and non-reasoning, you can fine-tune it with a non-reasoning dataset, but this may affect its reasoning ability. If you want to maintain its reasoning capabilities (optional), you can use a mix of direct answers and chain-of-thought examples. Use 75% reasoning and 25% non-reasoning in your dataset to make the model retain its reasoning capabilities. Our Conversational notebook uses a combo of 75% NVIDIA’s open-math-reasoning dataset and 25% Maxime’s FineTome dataset (non-reasoning). Here's free Unsloth Colab notebooks to fine-tune Qwen3: - [Qwen3 (14B) Reasoning + Conversational notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_(14B)-Reasoning-Conversational.ipynb) (recommended) - [Qwen3 (14B) Alpaca notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_(14B)-Alpaca.ipynb) (for Base models) If you have an old version of Unsloth and/or are fine-tuning locally, install the latest version of Unsloth: Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] pip install --upgrade --force-reinstall --no-cache-dir unsloth unsloth_zoo ``` ### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#qwen3-moe-models-fine-tuning) Qwen3 MOE models fine-tuning Fine-tuning support includes MOE models: 30B-A3B and 235B-A22B. Qwen3-30B-A3B works on just 17.5GB VRAM with Unsloth. On fine-tuning MoE's - it's probably not a good idea to fine-tune the router layer so we disabled it by default. 
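As an aside on the 75% reasoning / 25% non-reasoning mix recommended earlier in this fine-tuning section, here is a rough sketch of how such a mix could be assembled with the `datasets` library. The `reasoning_ds` and `chat_ds` arguments are placeholders for whichever two datasets you load; our notebook's exact preprocessing differs:

```python
from datasets import concatenate_datasets

def mix_reasoning_and_chat(reasoning_ds, chat_ds, reasoning_fraction=0.75, seed=3407):
    """Combine a reasoning dataset and a non-reasoning (chat) dataset so that
    roughly `reasoning_fraction` of the final mix is reasoning data.
    Both datasets are assumed to already share the same formatted text column."""
    n_chat = len(chat_ds)
    # Number of reasoning rows needed so reasoning : chat = fraction : (1 - fraction)
    n_reasoning = int(n_chat * reasoning_fraction / (1.0 - reasoning_fraction))
    n_reasoning = min(n_reasoning, len(reasoning_ds))
    mixed = concatenate_datasets([
        reasoning_ds.shuffle(seed=seed).select(range(n_reasoning)),
        chat_ds.shuffle(seed=seed),
    ]).shuffle(seed=seed)
    return mixed
```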
The 30B-A3B fits in 17.5GB VRAM, but you may lack RAM or disk space since the full 16-bit model must be downloaded and converted to 4-bit on the fly for QLoRA fine-tuning. This is due to issues importing 4-bit BnB MOE models directly. This only affects MOE models. If you're fine-tuning the MOE models, please use `FastModel` and not `FastLanguageModel` Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] from unsloth import FastModel import torch model, tokenizer = FastModel.from_pretrained( model_name = "unsloth/Qwen3-30B-A3B", max_seq_length = 2048, # Choose any for long context! load_in_4bit = True, # 4 bit quantization to reduce memory load_in_8bit = False, # [NEW!] A bit more accurate, uses 2x memory full_finetuning = False, # [NEW!] We have full finetuning now! # token = "hf_...", # use one if using gated models ) ``` ### [Direct link to heading](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\#notebook-guide) Notebook Guide: ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FFQX2CBzUqzAIMM50bpM4%252Fimage.png%3Falt%3Dmedia%26token%3D23c4b3d5-0d5f-4906-b2b4-bacde23235e0&width=768&dpr=4&quality=100&sign=dfdb362c&sv=2) To use the notebooks, just click Runtime, then Run all. You can change settings in the notebook to whatever you desire. We have set them automatically by default. Change model name to whatever you like by matching it with model's name on Hugging Face e.g. 'unsloth/Qwen3-8B' or 'unsloth/Qwen3-0.6B-unsloth-bnb-4bit'. There are other settings which you can toggle: - `max_seq_length = 2048` – Controls context length. While Qwen3 supports 40960, we recommend 2048 for testing. Unsloth enables 8× longer context fine-tuning. - `load_in_4bit = True` – Enables 4-bit quantization, reducing memory use 4× for fine-tuning on 16GB GPUs. - For **full-finetuning** \- set `full_finetuning = True` and **8-bit finetuning** \- set `load_in_8bit = True` If you'd like to read a full end-to-end guide on how to use Unsloth notebooks for fine-tuning or just learn about fine-tuning, creating [datasets](https://docs.unsloth.ai/basics/datasets-guide) etc., view our [complete guide here](https://docs.unsloth.ai/get-started/fine-tuning-guide): [🧬Fine-tuning Guide](https://docs.unsloth.ai/get-started/fine-tuning-guide) [📈Datasets Guide](https://docs.unsloth.ai/basics/datasets-guide) [PreviousLoRA Hyperparameters Guide](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide) [NextUnsloth Dynamic 2.0 GGUFs](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs) Last updated 3 days ago Was this helpful?
Qwen3, full fine-tuning & all models are now supported! 🦥 ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FQzuUQL60uFWHpaAvDPYD%252FColab%2520Options.png%3Falt%3Dmedia%26token%3Dfb808ec5-20c5-4f42-949e-14ed26a44987&width=768&dpr=4&quality=100&sign=be097a14&sv=2) If you have never used a Colab notebook, a quick primer on the notebook itself: 1. **Play Button at each "cell".** Click on this to run that cell's code. You must not skip any cells and you must run every cell in chronological order. If you encounter errors, simply rerun the cell you did not run. Another option is to click CTRL + ENTER if you don't want to click the play button. 2. **Runtime Button in the top toolbar.** You can also use this button and hit "Run all" to run the entire notebook in 1 go. This will skip all the customization steps, but is a good first try. 3. **Connect / Reconnect T4 button.** T4 is the free GPU Google is providing. It's quite powerful! The first installation cell looks like below: Remember to click the PLAY button in the brackets \[ \]. We grab our open source Github package, and install some other packages. ![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FIz2XUXhcmjheDtxfvbLA%252Fimage.png%3Falt%3Dmedia%26token%3Db9da0e5c-075c-48f8-8abb-5db6fdf9866b&width=768&dpr=4&quality=100&sign=e33e1780&sv=2) ## [Direct link to heading](https://docs.unsloth.ai/get-started/installing-+-updating/google-colab\#undefined) [PreviousConda Install](https://docs.unsloth.ai/get-started/installing-+-updating/conda-install) [NextFine-tuning Guide](https://docs.unsloth.ai/get-started/fine-tuning-guide) Last updated 10 months ago Was this helpful?
Qwen3, full fine-tuning & all models are now supported! 🦥 Unsloth works on Linux, Windows directly, Kaggle, Google Colab and more. See our [system requirements](https://docs.unsloth.ai/get-started/beginner-start-here/unsloth-requirements). **Recommended installation method:** Copy ```inline-grid min-w-full grid-cols-[auto_1fr] p-2 [count-reset:line] pip install unsloth ``` [Pip Install](https://docs.unsloth.ai/get-started/installing-+-updating/pip-install) [Windows Installation](https://docs.unsloth.ai/get-started/installing-+-updating/windows-installation) [Updating](https://docs.unsloth.ai/get-started/installing-+-updating/updating) [Conda Install](https://docs.unsloth.ai/get-started/installing-+-updating/conda-install) [Google Colab](https://docs.unsloth.ai/get-started/installing-+-updating/google-colab) [PreviousAll Our Models](https://docs.unsloth.ai/get-started/all-our-models) [NextUpdating](https://docs.unsloth.ai/get-started/installing-+-updating/updating) Last updated 1 month ago Was this helpful?
Fine-tuning TTS models lets you adapt them to your own dataset, specific use case, or style/tone. This process helps customize the model for unique voices, speaking styles, new languages, or specific types of content.

With Unsloth, you can fine-tune TTS models 1.2x faster with 50% less memory than other implementations with Flash Attention 2. This support includes OpenAI's Whisper, Orpheus, and most of the current popular TTS models. Because voice models are usually small, you can train them with LoRA 16-bit or full fine-tuning (FFT), which may give higher-quality results.

Please note we have not officially announced support for TTS models yet. You can use them, but you might experience errors. If so, please report them on our GitHub – thank you!

### [Direct link to heading](https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning\#fine-tuning-notebooks) Fine-tuning Notebooks:

- [Orpheus-TTS (3B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Orpheus_(3B)-TTS.ipynb)
- [Whisper Large V3](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Whisper.ipynb)
- [Llasa-TTS (3B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llasa_TTS_(3B).ipynb)
- [Spark-TTS (0.5B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Spark_TTS_(0_5B).ipynb)
- [Oute-TTS (1B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Oute_TTS_(1B).ipynb)

### [Direct link to heading](https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning\#choosing-and-loading-a-tts-model) Choosing and Loading a TTS Model

For TTS, the primary model used in our examples is **Orpheus-TTS (3B)** – a Llama-based speech model. Orpheus was pre-trained on a large speech corpus and can generate highly realistic speech, with support for emotional cues (laughs, sighs, etc.) out of the box. We'll use Orpheus as our example for TTS fine-tuning. To load it in LoRA 16-bit:

```python
from unsloth import FastModel

model_name = "unsloth/orpheus-3b-0.1-pretrained"
model, tokenizer = FastModel.from_pretrained(
    model_name,
    load_in_4bit = False  # False loads 16-bit for LoRA; set True for 4-bit (QLoRA)
)
```

When this runs, Unsloth will download the model weights. If you prefer 8-bit, you could use `load_in_8bit=True`, or for full 16-bit fine-tuning set `full_finetuning=True` (ensure you have enough VRAM). You can also replace the model name with other TTS models.

**Note:** Orpheus's tokenizer already includes special tokens for audio output (more on this later). You do _not_ need a separate vocoder – Orpheus will output audio tokens directly, which can be decoded to a waveform.

### [Direct link to heading](https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning\#preparing-your-dataset) Preparing Your Dataset

At minimum, a TTS fine-tuning dataset consists of **audio clips and their corresponding transcripts** (text). Let's use the [_Elise_ dataset](https://huggingface.co/datasets/MrDragonFox/Elise) – a pre-scripted, single-female-speaker corpus – as an example of how to prepare data:

**Elise dataset:** A small (~3 hours) single-speaker speech corpus from Hugging Face. There are two variants:

- [`MrDragonFox/Elise`](https://huggingface.co/datasets/MrDragonFox/Elise) – an augmented version with **emotion tags** embedded in the transcripts.
  (This clone adds tags like `<laughs>`, `<chuckles>`, etc., to the text.)
- [`Jinsaryko/Elise`](https://huggingface.co/datasets/Jinsaryko/Elise) – base version with plain transcripts.

The dataset is organized with one audio clip and transcript per entry. On Hugging Face, these datasets have fields such as `audio` (the waveform), `text` (the transcription), and some metadata (speaker name, pitch stats, etc.). We need to feed Unsloth a dataset of audio-text pairs.

**Option 1: Using the Hugging Face Datasets library** – This is the easiest route if your data is in HF format or a CSV.

```python
from datasets import load_dataset

# Load the Elise dataset from HF (this variant includes emotion tags)
dataset = load_dataset("MrDragonFox/Elise", split="train")
# Alternatively, use "Jinsaryko/Elise" for the version without emotion tags
```

This will download the data (approx. 328 MB for ~1.2k samples). Each item in `dataset` has `dataset[i]["audio"]` (an Audio object with array data and sampling rate) and `dataset[i]["text"]` (the transcript string). You can inspect a sample:

```python
sample = dataset[0]
print(sample["text"])
# e.g., "Oh, honestly, probably still your house <laughs>. But still, I mean, running the dishes through the dishwasher..."
```

In the `MrDragonFox/Elise` version, you'll notice tags like `<laughs>` or `<chuckles>` in the text – these indicate expressive cues. These tags are enclosed in angle brackets and will be treated as special tokens by the model (they match [Orpheus's expected tags](https://github.com/canopyai/Orpheus-TTS) like `<laugh>` and `<sigh>`).

**Option 2: Preparing a custom dataset** – If you have your own audio files and transcripts:

- Organize audio clips (WAV/FLAC files) in a folder.
- Create a CSV or TSV file with columns for file path and transcript. For example:

```
filename,text
0001.wav,Hello there!
0002.wav,<sigh> I am very tired.
```

- Use `load_dataset("csv", data_files="mydata.csv", split="train")` to load it. You might need to tell the dataset loader how to handle audio paths. An alternative is using the `datasets.Audio` feature to load audio data on the fly:

```python
from datasets import load_dataset, Audio

dataset = load_dataset("csv", data_files="mydata.csv", split="train")
dataset = dataset.cast_column("filename", Audio(sampling_rate=24000))
dataset = dataset.rename_column("filename", "audio")  # so the audio lives under "audio"
```

  Then `dataset[i]["audio"]` will contain the audio array.

- **Ensure transcripts are normalized** (no unusual characters that the tokenizer might not know, except the emotion tags if used). Also ensure all audio has a consistent sampling rate (resample if necessary to the rate the model expects, e.g. 24 kHz for Orpheus).

**Emotion tags:** If your dataset includes expressive sounds (laughter, sighs, etc.), mark them in the transcript with a tag. Orpheus supports tags like `<laugh>`, `<chuckle>`, `<sigh>`, `<cough>`, `<sniffle>`, `<groan>`, `<yawn>`, `<gasp>`, etc. For example: `"I missed you <laugh> so much!"`. During training, the model will learn to associate these tags with the corresponding audio patterns. The Elise dataset with tags already has many of these (e.g., 336 occurrences of "laughs", 156 of "sighs", etc., as listed in its card). If your dataset lacks such tags but you want to incorporate them, you can manually annotate the transcripts where the audio contains those expressions.
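If you want a quick sanity check of which expressive tags actually appear in your transcripts before training, a small sketch like the one below can help. It only assumes the `text` field shown above; the regex pattern is illustrative:

```python
import re
from collections import Counter

# Count angle-bracket tags such as <laughs> or <sigh> across all transcripts
tag_counts = Counter(
    tag
    for example in dataset
    for tag in re.findall(r"<[a-z_]+>", example["text"])
)

print(tag_counts.most_common(10))  # e.g. [('<laughs>', 336), ('<sighs>', 156), ...]
```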
In summary, for **dataset preparation**:

- You need a **list of (audio, text)** pairs.
- Use the HF `datasets` library to handle loading and optional preprocessing (like resampling).
- Include any **special tags** in the text that you want the model to learn (ensure they are in `<angle_brackets>` format so the model treats them as distinct tokens).
- (Optional) If multi-speaker, you could include a speaker ID token in the text or use a separate speaker embedding approach, but that's beyond this basic guide (Elise is single-speaker).

### [Direct link to heading](https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning\#fine-tuning-tts-with-unsloth) Fine-Tuning TTS with Unsloth

Now, let's bring it all together and run the fine-tuning. We'll illustrate using Python code (which you can run in a Jupyter notebook, Colab, etc.). This is analogous to running the Unsloth CLI with the corresponding arguments.

**Step 1: Initialize Model and Dataset**

```python
from unsloth import FastModel
from transformers import Trainer, TrainingArguments
from datasets import load_dataset, Audio

# Load the pre-trained Orpheus model (in 4-bit mode) and tokenizer
model_name = "unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit"
model, tokenizer = FastModel.from_pretrained(model_name, load_in_4bit=True)

# Load the dataset (Elise) and ensure audio is 24kHz
dataset = load_dataset("Jinsaryko/Elise", split="train")
# Cast the audio to 24kHz if not already
dataset = dataset.cast_column("audio", Audio(sampling_rate=24000))
```

_Note:_ If memory is very limited or the dataset is large, you can stream it or load it in chunks. Here, 3 hours of audio easily fits in RAM. If you are using your own dataset CSV, load it similarly.

**Step 2: Preprocess the data for training**

We need to prepare inputs for the Trainer. For text-to-speech, one approach is to train the model in a causal manner: concatenate text and audio token IDs as the target sequence. However, since Orpheus is a decoder-only LLM that outputs audio, we can feed the text as input (context) and have the audio token IDs as labels. In practice, Unsloth's integration might do this automatically if the model's config identifies it as text-to-speech. If not, we can do something like:

```python
# Tokenize the text transcripts
def preprocess_function(example):
    # Tokenize the text (keep the special tokens like <laugh> intact)
    tokens = tokenizer(example["text"], return_tensors="pt")
    # Flatten to a list of token IDs
    input_ids = tokens["input_ids"].squeeze(0)
    # The model will generate audio tokens after these text tokens.
    # For training, we can set labels equal to input_ids (so it learns to predict the next token).
    # But that only covers text tokens predicting the next text token (which might be an audio token or end).
    # A more sophisticated approach: append a special token indicating the start of audio, and let the model generate the rest.
    # For simplicity, use the same input as labels (the model will learn to output the sequence given itself).
    return {"input_ids": input_ids, "labels": input_ids}

train_data = dataset.map(preprocess_function, remove_columns=dataset.column_names)
```

_Important:_ The above is a simplification. In reality, to fine-tune Orpheus properly, you would need the _audio tokens as part of the training labels_. Orpheus's pre-training likely involved converting audio to discrete tokens (via an audio codec) and training the model to predict those given the preceding text.
For fine-tuning on new voice data, you would similarly need to obtain the audio tokens for each clip (using Orpheus's audio codec). The Orpheus GitHub provides a script for data processing – it encodes audio into sequences of `<custom_token_x>` tokens. However, **Unsloth may abstract this away**: if the model is a FastModel with an associated processor that knows how to handle audio, it might automatically encode the audio in the dataset to tokens. If not, you'd have to manually encode each audio clip to token IDs (using Orpheus's codebook). This is an advanced step beyond this guide, but keep in mind that simply using text tokens won't teach the model the actual audio – it needs to match the audio patterns.

For brevity, let's assume Unsloth provides a way to feed audio directly (for example, by setting `processor` and passing the audio array). If Unsloth does not yet support automatic audio tokenization, you might need to use the Orpheus repository's `encode_audio` function to get token sequences for the audio, then use those as labels. (The dataset entries do have `phonemes` and some acoustic features, which suggests such a pipeline.)

**Step 3: Set up training arguments and Trainer**

```python
training_args = TrainingArguments(
    output_dir="orpheus_finetune_elise",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    num_train_epochs=5,
    learning_rate=1e-5,
    fp16=True,              # use mixed precision if available
    logging_steps=50,
    save_strategy="epoch",
    report_to="none"        # or "tensorboard" if you want to use TB
)

# Instantiate the Trainer
# (with variable-length sequences you may also need a padding data collator)
trainer = Trainer(
    model=model,
    train_dataset=train_data,
    args=training_args
)
```

Here we set a small batch size with gradient accumulation to simulate an effective batch size of 8, 5 epochs over ~1,200 samples (roughly 750 optimizer steps), LR = 1e-5, and FP16 training (which helps even with a 4-bit base). Adjust as needed.

**Step 4: Begin fine-tuning**

```python
trainer.train()
```

This will start the training loop. You should see loss logs every 50 steps (as set by `logging_steps`). Training time depends on your GPU – for example, on a Colab T4, a few epochs on 3 hours of data may take 1-2 hours. Unsloth's optimizations will make it faster than standard HF training. During training, Unsloth applies its magic (patches, fused ops, etc.) behind the scenes to speed up computation.

**Step 5: Save the fine-tuned model**

After training completes (or if you stop it mid-way once you feel it's sufficient), save the model:

```python
trainer.save_model("orpheus_finetune_elise/final")
```

This saves the model weights (for LoRA, it might save only the adapter weights if the base is not fully fine-tuned). If you used `--push_model` in the CLI or `trainer.push_to_hub()`, you could upload it to the Hugging Face Hub directly.

Now you should have a fine-tuned TTS model in the directory. The next step is to test it out!
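As a quick test, a minimal inference sketch along these lines can be used. Assumptions: the checkpoint path from Step 5 can be loaded the same way Hub models are, the prompt and generation settings are illustrative, and decoding the generated audio tokens back to a waveform still requires Orpheus's audio decoder (not shown here):

```python
import torch
from unsloth import FastModel

# Load the fine-tuned weights saved in Step 5
model, tokenizer = FastModel.from_pretrained(
    "orpheus_finetune_elise/final",
    load_in_4bit=True,
)

prompt = "Oh, honestly, probably still your house <laugh>."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=1024)

# `generated` holds the text tokens followed by audio tokens; decoding those
# audio tokens to a waveform uses Orpheus's codec/decoder (see the Orpheus-TTS repo).
print(generated.shape)
```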
{ "color-scheme": "light dark", "description": "Learn how to to fine-tune TTS voice models with Unsloth.", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Learn how to to fine-tune TTS voice models with Unsloth.", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Text-to-Speech (TTS) Fine-tuning | Unsloth Documentation", "ogDescription": "Learn how to to fine-tune TTS voice models with Unsloth.", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Text-to-Speech (TTS) Fine-tuning | Unsloth Documentation", "robots": "index, follow", "scrapeId": "446e66cc-c504-4c25-b296-2eb54372acbe", "sourceURL": "https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning", "statusCode": 200, "title": "Text-to-Speech (TTS) Fine-tuning | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Learn how to to fine-tune TTS voice models with Unsloth.", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Text-to-Speech (TTS) Fine-tuning | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning", "viewport": "width=device-width, initial-scale=1" }
To save to 16-bit for vLLM, use:

```python
model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit",)
model.push_to_hub_merged("hf/model", tokenizer, save_method = "merged_16bit", token = "")
```

To merge to 4-bit to load on Hugging Face, first call `merged_4bit`. Then use `merged_4bit_forced` if you are certain you want to merge to 4-bit. We discourage this unless you know exactly what you will do with the 4-bit model (e.g. for DPO training, or for Hugging Face's online inference engine).

```python
model.save_pretrained_merged("model", tokenizer, save_method = "merged_4bit",)
model.push_to_hub_merged("hf/model", tokenizer, save_method = "merged_4bit", token = "")
```

To save just the LoRA adapters, either use:

```python
model.save_pretrained(...)
tokenizer.save_pretrained(...)
```

Or just use our built-in function to do that:

```python
model.save_pretrained_merged("model", tokenizer, save_method = "lora",)
model.push_to_hub_merged("hf/model", tokenizer, save_method = "lora", token = "")
```
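Once the merged 16-bit folder (or Hub repo) exists, it can be loaded with vLLM like any regular Hugging Face model. A minimal sketch – the `"model"` path matches the `save_pretrained_merged` call above, and the sampling values are illustrative:

```python
from vllm import LLM, SamplingParams

# Point vLLM at the merged 16-bit directory, or at "hf/model" if you pushed to the Hub
llm = LLM(model="model")

sampling = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=128)
outputs = llm.generate(["Hello! How are you today?"], sampling)

print(outputs[0].outputs[0].text)
```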
{ "color-scheme": "light dark", "description": "Saving models to 16bit for VLLM", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Saving models to 16bit for VLLM", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Saving to VLLM | Unsloth Documentation", "ogDescription": "Saving models to 16bit for VLLM", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Saving to VLLM | Unsloth Documentation", "robots": "index, follow", "scrapeId": "44b8770a-223c-4a52-bb4d-f2992c084b79", "sourceURL": "https://docs.unsloth.ai/basics/running-and-saving-models/saving-to-vllm", "statusCode": 200, "title": "Saving to VLLM | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Saving models to 16bit for VLLM", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Saving to VLLM | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/running-and-saving-models/saving-to-vllm", "viewport": "width=device-width, initial-scale=1" }
There are millions of possible hyperparameter combinations, and choosing the right values is crucial for fine-tuning. You'll learn the best practices for hyperparameters – based on insights from hundreds of research papers and experiments – and how they impact the model. **We recommend using Unsloth's pre-selected defaults.**

The goal is to adjust hyperparameters to increase accuracy while also **counteracting** [**over-fitting or underfitting**](https://docs.unsloth.ai/get-started/fine-tuning-guide#avoiding-overfitting-and-underfitting). Over-fitting is where the model memorizes the data and struggles with new questions. We want a model that generalizes, not one that just memorizes.

## [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide\#key-fine-tuning-hyperparameters) Key Fine-tuning Hyperparameters

### [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide\#learning-rate) **Learning Rate**

Defines how much the model's weights adjust per training step.

- **Higher Learning Rates**: Faster training and can reduce over-fitting – just make sure not to set it too high, or the model will overfit.
- **Lower Learning Rates**: More stable training, but may require more epochs.
- **Typical Range**: 1e-4 (0.0001) to 5e-5 (0.00005).

### [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide\#epochs) **Epochs**

Number of times the model sees the full training dataset.

- **Recommended:** 1-3 epochs (more than 3 is generally not optimal unless you want fewer hallucinations at the cost of creativity).
- **More Epochs**: Better learning, but a higher risk of overfitting.
- **Fewer Epochs**: May undertrain the model.

## [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide\#advanced-hyperparameters) **Advanced Hyperparameters:**

| Hyperparameter | Function | Recommended Settings |
| --- | --- | --- |
| **LoRA Rank** | Controls the number of low-rank factors used for adaptation. | 4-128 |
| **LoRA Alpha** | Scaling factor for weight updates. | LoRA Rank × 1 or 2 |
| **Max Sequence Length** | Maximum context a model can learn. | Adjust based on dataset needs |
| **Batch Size** | Number of samples processed per training step. Higher values require more VRAM. | 1 for long context; 2 or 4 for shorter context |
| **LoRA Dropout** | Dropout rate to prevent overfitting. | 0.1-0.2 |
| **Warmup Steps** | Gradually increases the learning rate at the start of training. | 5-10% of total steps |
| **Scheduler Type** | Adjusts the learning rate dynamically during training. | Linear decay |
| **Seed / Random State** | Ensures reproducibility of results. | Fixed number (e.g., 42) |
| **Weight Decay** | Penalizes large weight updates to prevent overfitting. | 1.0, or 0.3 if you have issues |

## [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide\#lora-hyperparameters-in-unsloth) **LoRA Hyperparameters in Unsloth**

You can manually adjust the hyperparameters below if you'd like – but feel free to skip this, as Unsloth automatically chooses well-balanced defaults for you.
![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FW1P2qmzGQGDAXQ0pXhRq%252Fparameters.png%3Falt%3Dmedia%26token%3Df146c646-ca31-4459-b1de-499bd1d23fd1&width=768&dpr=4&quality=100&sign=3bae97dd&sv=2)

1. ```python
   r = 16, # Choose any number > 0! Suggested 8, 16, 32, 64, 128
   ```

   The rank of the finetuning process. A larger number uses more memory and will be slower, but can increase accuracy on harder tasks. We normally suggest numbers like 8 (for fast finetunes), and up to 128. Numbers that are too large can cause over-fitting, damaging your model's quality.

2. ```python
   target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                     "gate_proj", "up_proj", "down_proj",],
   ```

   We select all modules to finetune. You can remove some to reduce memory usage and make training faster, but we strongly advise against this. Just train on all modules!

3. ```python
   lora_alpha = 16,
   ```

   The scaling factor for finetuning. A larger number will make the finetune learn more about your dataset, but can promote over-fitting. We suggest setting this equal to the rank `r`, or double it.

4. ```python
   lora_dropout = 0, # Supports any, but = 0 is optimized
   ```

   Leave this as 0 for faster training! It can reduce over-fitting, but not by much.

5. ```python
   bias = "none", # Supports any, but = "none" is optimized
   ```

   Leave this as `"none"` for faster and less over-fit training!

6. ```python
   use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
   ```

   Options include `True`, `False` and `"unsloth"`. We suggest `"unsloth"` since it reduces memory usage by an extra 30% and supports extremely long context finetunes. You can read [https://unsloth.ai/blog/long-context](https://unsloth.ai/blog/long-context) for more details.

7. ```python
   random_state = 3407,
   ```

   The number that determines deterministic runs. Training and finetuning need random numbers, so setting this makes experiments reproducible.

8. ```python
   use_rslora = False, # We support rank stabilized LoRA
   ```

   Advanced feature to enable rank-stabilized LoRA, which adjusts the `lora_alpha` scaling automatically. You can use this if you want!

9. ```python
   loftq_config = None, # And LoftQ
   ```

   Advanced feature to initialize the LoRA matrices to the top r singular vectors of the weights. Can improve accuracy somewhat, but can make memory usage explode at the start.

(A consolidated sketch combining these settings is shown at the end of this guide.)

## [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide\#avoiding-overfitting-and-underfitting) **Avoiding Overfitting & Underfitting**

#### [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide\#overfitting-too-specialized) **Overfitting** (Too Specialized)

The model memorizes the training data and fails to generalize to unseen inputs. Solutions:

- If your training duration is short, lower the learning rate.
  For longer training runs, increase the learning rate. Because of this, it may be best to test both and see which works better.
- Increase the batch size.
- Lower the number of training epochs.
- Combine your dataset with a generic dataset (e.g. ShareGPT).
- Increase the dropout rate to introduce regularization.

#### [Direct link to heading](https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide\#underfitting-too-generic) **Underfitting** (Too Generic)

Though less common, underfitting is where a low-rank model fails to generalize because it lacks enough learnable parameters, so the model may fail to learn from the training data. Solutions:

- If your training duration is short, increase the learning rate. For longer training runs, reduce the learning rate.
- Train for more epochs.
- Increase the rank and alpha. Alpha should be at least equal to the rank, and the rank should be larger for smaller models or more complex datasets; it usually sits between 4 and 64.
- Use a more domain-relevant dataset.

Fine-tuning has no single "best" approach, only best practices. Experimentation is key to finding what works for your needs. Our notebooks auto-set optimal parameters based on evidence from research papers and our past experiments.
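To see how the settings from this guide fit together in practice, here is a consolidated sketch using Unsloth's `FastLanguageModel`. The base model name, sequence length, and individual values are illustrative choices, not prescriptions:

```python
from unsloth import FastLanguageModel

# Load a 4-bit base model (illustrative choice)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Attach LoRA adapters using the hyperparameters discussed above
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,                    # LoRA rank: 8 for fast runs, up to 128 for harder tasks
    lora_alpha = 16,           # usually equal to r, or 2 * r
    lora_dropout = 0,          # 0 is the optimized setting
    bias = "none",
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing = "unsloth",
    random_state = 3407,
    use_rslora = False,
    loftq_config = None,
)
```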
{ "color-scheme": "light dark", "description": "Best practices for LoRA hyperparameters and learn how they affect the finetuning process.", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Best practices for LoRA hyperparameters and learn how they affect the finetuning process.", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "LoRA Hyperparameters Guide | Unsloth Documentation", "ogDescription": "Best practices for LoRA hyperparameters and learn how they affect the finetuning process.", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "LoRA Hyperparameters Guide | Unsloth Documentation", "robots": "index, follow", "scrapeId": "4b906cff-7bc1-468a-9817-d0a61dc5076a", "sourceURL": "https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide", "statusCode": 200, "title": "LoRA Hyperparameters Guide | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Best practices for LoRA hyperparameters and learn how they affect the finetuning process.", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "LoRA Hyperparameters Guide | Unsloth Documentation", "url": "https://docs.unsloth.ai/get-started/fine-tuning-guide/lora-hyperparameters-guide", "viewport": "width=device-width, initial-scale=1" }
Google released Gemma 3 in 4 sizes – 1B, 4B, 12B and 27B models! The smallest 1B model is text only, whilst the rest are capable of vision and text input! We provide GGUFs, a guide on how to run Gemma 3 effectively, and how to fine-tune it (including reasoning fine-tuning)!

NEW: We uploaded new quants using Google's new Gemma 3 **QAT** method. See the full [collection here](https://huggingface.co/collections/unsloth/gemma-3-67d12b7e8816ec6efa7e4e5b).

**Unsloth is the only framework which works in float16 machines for Gemma 3 inference and training.** This means Colab Notebooks with free Tesla T4 GPUs also work!

- Fine-tune Gemma 3 (4B) using our [free Colab notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3_(4B).ipynb)

According to the Gemma team, the optimal config for inference is `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0`.

**Unsloth Gemma 3 uploads with optimal configs:**

GGUF:

- [1B](https://huggingface.co/unsloth/gemma-3-1b-it-GGUF)
- [4B](https://huggingface.co/unsloth/gemma-3-4b-it-GGUF)
- [12B](https://huggingface.co/unsloth/gemma-3-12b-it-GGUF)
- [27B](https://huggingface.co/unsloth/gemma-3-27b-it-GGUF)

Unsloth Dynamic 4-bit Instruct:

- [1B](https://huggingface.co/unsloth/gemma-3-1b-it-bnb-4bit)
- [4B](https://huggingface.co/unsloth/gemma-3-4b-it-bnb-4bit)
- [12B](https://huggingface.co/unsloth/gemma-3-27b-it-unsloth-bnb-4bit)
- [27B](https://huggingface.co/unsloth/gemma-3-27b-it-bnb-4bit)

16-bit Instruct:

- [1B](https://huggingface.co/unsloth/gemma-3-1b)
- [4B](https://huggingface.co/unsloth/gemma-3-4b)
- [12B](https://huggingface.co/unsloth/gemma-3-12b)
- [27B](https://huggingface.co/unsloth/gemma-3-27b)

We fixed an issue with our Gemma 3 GGUF uploads where previously they did not support vision. Now they do.

## [Direct link to heading](https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune\#official-recommended-inference-settings) ⚙️ Official Recommended Inference Settings

According to the Gemma team, the official recommended settings for inference are:

- Temperature of 1.0
- Top\_K of 64
- Min\_P of 0.00 (optional, but 0.01 works well; llama.cpp's default is 0.1)
- Top\_P of 0.95
- Repetition Penalty of 1.0 (1.0 means disabled in llama.cpp and transformers)
- Chat template:

```
<bos><start_of_turn>user\nHello!<end_of_turn>\n<start_of_turn>model\nHey there!<end_of_turn>\n<start_of_turn>user\nWhat is 1+1?<end_of_turn>\n<start_of_turn>model\n
```

- Chat template with `\n` newlines rendered (except for the last):

```
<bos><start_of_turn>user
Hello!<end_of_turn>
<start_of_turn>model
Hey there!<end_of_turn>
<start_of_turn>user
What is 1+1?<end_of_turn>
<start_of_turn>model\n
```

llama.cpp and other inference engines automatically add a `<bos>` – do NOT add TWO `<bos>` tokens! You should omit the `<bos>` when prompting the model.

## [Direct link to heading](https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune\#tutorial-how-to-run-gemma-3-27b-in-ollama) 🦙 Tutorial: How to Run Gemma 3 27B in Ollama

1. Install `ollama` if you haven't already!

```bash
apt-get update
apt-get install pciutils -y
curl -fsSL https://ollama.com/install.sh | sh
```

2. Run the model! Note you can call `ollama serve` in another terminal if it fails!
We include all our fixes and suggested parameters (temperature etc.) in `params` in our Hugging Face upload!

```bash
ollama run hf.co/unsloth/gemma-3-27b-it-GGUF:Q4_K_M
```

## [Direct link to heading](https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune\#tutorial-how-to-run-gemma-3-27b-in-llama.cpp) 📖 Tutorial: How to Run Gemma 3 27B in llama.cpp

1. Obtain the latest `llama.cpp` on [GitHub here](https://github.com/ggml-org/llama.cpp). You can follow the build instructions below as well. Change `-DGGML_CUDA=ON` to `-DGGML_CUDA=OFF` if you don't have a GPU or just want CPU inference.

```bash
apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggerganov/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=ON -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split llama-mtmd-cli
cp llama.cpp/build/bin/llama-* llama.cpp
```

2. If you want to use `llama.cpp` directly to load models, you can do the below (`:Q4_K_XL` is the quantization type). You can also download via Hugging Face (step 3). This is similar to `ollama run`:

```bash
./llama.cpp/llama-mtmd-cli \
    -hf unsloth/gemma-3-4b-it-GGUF:Q4_K_XL
```

3. **OR** download the model with the script below (after installing `pip install huggingface_hub hf_transfer`). You can choose Q4\_K\_M, or other quantized versions (like BF16 full precision). More versions at: [https://huggingface.co/unsloth/gemma-3-27b-it-GGUF](https://huggingface.co/unsloth/gemma-3-27b-it-GGUF)

```python
# !pip install huggingface_hub hf_transfer
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download
snapshot_download(
    repo_id = "unsloth/gemma-3-27b-it-GGUF",
    local_dir = "unsloth/gemma-3-27b-it-GGUF",
    allow_patterns = ["*Q4_K_M*", "mmproj-BF16.gguf"], # For Q4_K_M
)
```

4. Run Unsloth's Flappy Bird test.
5. Edit `--threads 32` for the number of CPU threads, `--ctx-size 16384` for context length (Gemma 3 supports 128K context length!), and `--n-gpu-layers 99` for how many layers to offload to the GPU. Try lowering it if your GPU runs out of memory, and remove it entirely for CPU-only inference.
6. For conversation mode:

```bash
./llama.cpp/llama-mtmd-cli \
    --model unsloth/gemma-3-27b-it-GGUF/gemma-3-27b-it-Q4_K_M.gguf \
    --mmproj unsloth/gemma-3-27b-it-GGUF/mmproj-BF16.gguf \
    --threads 32 \
    --ctx-size 16384 \
    --n-gpu-layers 99 \
    --seed 3407 \
    --prio 2 \
    --temp 1.0 \
    --repeat-penalty 1.0 \
    --min-p 0.01 \
    --top-k 64 \
    --top-p 0.95
```

7. For non-conversation mode, to test Flappy Bird:

```bash
./llama.cpp/llama-cli \
    --model unsloth/gemma-3-27b-it-GGUF/gemma-3-27b-it-Q4_K_M.gguf \
    --threads 32 \
    --ctx-size 16384 \
    --n-gpu-layers 99 \
    --seed 3407 \
    --prio 2 \
    --temp 1.0 \
    --repeat-penalty 1.0 \
    --min-p 0.01 \
    --top-k 64 \
    --top-p 0.95 \
    -no-cnv \
    --prompt "<start_of_turn>user\nCreate a Flappy Bird game in Python. You must include these things:\n1. You must use pygame.\n2. The background color should be randomly chosen and is a light shade. Start with a light blue color.\n3.
Pressing SPACE multiple times will accelerate the bird.\n4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.\n5. Place on the bottom some land colored as dark brown or yellow chosen randomly.\n6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.\n7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.\n8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.\nThe final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<end_of_turn>\n<start_of_turn>model\n"
```

The full input from our [https://unsloth.ai/blog/deepseekr1-dynamic](https://unsloth.ai/blog/deepseekr1-dynamic) 1.58-bit blog is below. Remember to remove `<bos>`, since Gemma 3 automatically adds one!

```
<start_of_turn>user
Create a Flappy Bird game in Python. You must include these things:
1. You must use pygame.
2. The background color should be randomly chosen and is a light shade. Start with a light blue color.
3. Pressing SPACE multiple times will accelerate the bird.
4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.
5. Place on the bottom some land colored as dark brown or yellow chosen randomly.
6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.
7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.
8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.
The final game should be inside a markdown section in Python. Check your code for error
```

## [Direct link to heading](https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune\#unsloth-fine-tuning-fixes-for-gemma-3) 🦥 Unsloth Fine-tuning Fixes for Gemma 3

Our solution in Unsloth is threefold:

1. Keep all intermediate activations in bfloat16 format – they can be float32, but this uses 2x more VRAM or RAM (via Unsloth's async gradient checkpointing).
2. Do all matrix multiplies in float16 with tensor cores, but manually upcast / downcast without the help of PyTorch's mixed-precision autocast.
3. Upcast all other operations that don't need matrix multiplies (layernorms) to float32.

**Unsloth is the only framework which works in float16 machines for Gemma 3 inference and training.** This means Colab Notebooks with free Tesla T4 GPUs also work!
- Fine-tune Gemma 3 (4B) using our [free Colab notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3_(4B).ipynb)

## [Direct link to heading](https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune\#gemma-3-fixes-analysis) 🤔 Gemma 3 Fixes Analysis

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FpQGE6CEsuvGcQaOKrQFQ%252Foutput%281%29.png%3Falt%3Dmedia%26token%3D5f741769-3591-4a79-bb83-d6d58a4e9818&width=768&dpr=4&quality=100&sign=65c761c0&sv=2)

Gemma 3 1B to 27B exceed float16's maximum of 65504.

First, before we finetune or run Gemma 3, we found that when using float16 mixed precision, gradients and **activations become infinity** unfortunately. This happens on T4 GPUs, RTX 20x-series and V100 GPUs, which only have float16 tensor cores. For newer GPUs like the RTX 30x series or higher, A100s, H100s etc., these GPUs have bfloat16 tensor cores, so this problem does not happen!

**But why?**

![](https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FXmN6s9dA64N3nvmi4Y4x%252Ffloat16%2520bfloat16.png%3Falt%3Dmedia%26token%3D3e1cb682-49d0-4083-b791-589cf01a05a8&width=768&dpr=4&quality=100&sign=b86bca81&sv=2)

Wikipedia: [https://en.wikipedia.org/wiki/Bfloat16\_floating-point\_format](https://en.wikipedia.org/wiki/Bfloat16_floating-point_format)

Float16 can only represent numbers up to **65504**, whilst bfloat16 can represent huge numbers up to **10^38**! Notice that both formats use only 16 bits! This is because float16 allocates more bits to the mantissa, so it represents small decimals better, whilst bfloat16 cannot represent fractions as precisely.

But why float16? Why not just use float32? Unfortunately, float32 in GPUs is very slow for matrix multiplications – sometimes 4 to 10x slower! So we cannot do this.
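The overflow is easy to reproduce in PyTorch; a tiny sketch (the value is chosen only for illustration):

```python
import torch

x = torch.tensor(70000.0)                # larger than float16's max finite value (65504)

print(x.to(torch.float16))               # tensor(inf, dtype=torch.float16)
print(x.to(torch.bfloat16))              # finite (rounded), since bfloat16 reaches ~3.4e38
print(torch.finfo(torch.float16).max)    # 65504.0
print(torch.finfo(torch.bfloat16).max)   # ~3.39e38
```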
{ "color-scheme": "light dark", "description": "How to run Gemma 3 effectively with our GGUFs on llama.cpp, Ollama, Open WebUI and how to fine-tune with Unsloth!", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "How to run Gemma 3 effectively with our GGUFs on llama.cpp, Ollama, Open WebUI and how to fine-tune with Unsloth!", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Gemma 3: How to Run & Fine-tune | Unsloth Documentation", "ogDescription": "How to run Gemma 3 effectively with our GGUFs on llama.cpp, Ollama, Open WebUI and how to fine-tune with Unsloth!", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Gemma 3: How to Run & Fine-tune | Unsloth Documentation", "robots": "index, follow", "scrapeId": "4b9328fa-f7af-44e1-a10d-781739647230", "sourceURL": "https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune", "statusCode": 200, "title": "Gemma 3: How to Run & Fine-tune | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "How to run Gemma 3 effectively with our GGUFs on llama.cpp, Ollama, Open WebUI and how to fine-tune with Unsloth!", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Gemma 3: How to Run & Fine-tune | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune", "viewport": "width=device-width, initial-scale=1" }
**Locally**

To save to GGUF, use the below to save locally:

```python
model.save_pretrained_gguf("dir", tokenizer, quantization_method = "q4_k_m")
model.save_pretrained_gguf("dir", tokenizer, quantization_method = "q8_0")
model.save_pretrained_gguf("dir", tokenizer, quantization_method = "f16")
```

To push to the Hub:

```python
model.push_to_hub_gguf("hf_username/dir", tokenizer, quantization_method = "q4_k_m")
model.push_to_hub_gguf("hf_username/dir", tokenizer, quantization_method = "q8_0")
```

All supported quantization options for `quantization_method` are listed below:

```python
# https://github.com/ggerganov/llama.cpp/blob/master/examples/quantize/quantize.cpp#L19
# From https://mlabonne.github.io/blog/posts/Quantize_Llama_2_models_using_ggml.html
ALLOWED_QUANTS = \
{
    "not_quantized"  : "Recommended. Fast conversion. Slow inference, big files.",
    "fast_quantized" : "Recommended. Fast conversion. OK inference, OK file size.",
    "quantized"      : "Recommended. Slow conversion. Fast inference, small files.",
    "f32"     : "Not recommended. Retains 100% accuracy, but super slow and memory hungry.",
    "f16"     : "Fastest conversion + retains 100% accuracy. Slow and memory hungry.",
    "q8_0"    : "Fast conversion. High resource use, but generally acceptable.",
    "q4_k_m"  : "Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K",
    "q5_k_m"  : "Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K",
    "q2_k"    : "Uses Q4_K for the attention.vw and feed_forward.w2 tensors, Q2_K for the other tensors.",
    "q3_k_l"  : "Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K",
    "q3_k_m"  : "Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K",
    "q3_k_s"  : "Uses Q3_K for all tensors",
    "q4_0"    : "Original quant method, 4-bit.",
    "q4_1"    : "Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models.",
    "q4_k_s"  : "Uses Q4_K for all tensors",
    "q4_k"    : "alias for q4_k_m",
    "q5_k"    : "alias for q5_k_m",
    "q5_0"    : "Higher accuracy, higher resource usage and slower inference.",
    "q5_1"    : "Even higher accuracy, resource usage and slower inference.",
    "q5_k_s"  : "Uses Q5_K for all tensors",
    "q6_k"    : "Uses Q8_K for all tensors",
    "iq2_xxs" : "2.06 bpw quantization",
    "iq2_xs"  : "2.31 bpw quantization",
    "iq3_xxs" : "3.06 bpw quantization",
    "q3_k_xs" : "3-bit extra small quantization",
}
```

**Manual Saving**

First save your model to 16-bit:

```python
model.save_pretrained_merged("merged_model", tokenizer, save_method = "merged_16bit",)
```

Then use the terminal and do:

```bash
git clone --recursive https://github.com/ggerganov/llama.cpp
make clean -C llama.cpp
make all -j -C llama.cpp
pip install gguf protobuf
python llama.cpp/convert-hf-to-gguf.py FOLDER --outfile OUTPUT --outtype f16
```

Or follow the steps at [https://rentry.org/llama-cpp-conversions#merging-loras-into-a-model](https://rentry.org/llama-cpp-conversions#merging-loras-into-a-model) using the model name "merged\_model" to merge to GGUF.
[PreviousRunning & Saving Models](https://docs.unsloth.ai/basics/running-and-saving-models) [NextSaving to Ollama](https://docs.unsloth.ai/basics/running-and-saving-models/saving-to-ollama) Last updated 10 months ago Was this helpful?
{ "color-scheme": "light dark", "description": "Saving models to 16bit for GGUF so you can use it for Ollama, Jan AI, Open WebUI and more!", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "Saving models to 16bit for GGUF so you can use it for Ollama, Jan AI, Open WebUI and more!", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Saving to GGUF | Unsloth Documentation", "ogDescription": "Saving models to 16bit for GGUF so you can use it for Ollama, Jan AI, Open WebUI and more!", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Saving to GGUF | Unsloth Documentation", "robots": "index, follow", "scrapeId": "4fadd594-1f58-4e09-9cb4-960e709a6aeb", "sourceURL": "https://docs.unsloth.ai/basics/running-and-saving-models/saving-to-gguf", "statusCode": 200, "title": "Saving to GGUF | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "Saving models to 16bit for GGUF so you can use it for Ollama, Jan AI, Open WebUI and more!", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Saving to GGUF | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/running-and-saving-models/saving-to-gguf", "viewport": "width=device-width, initial-scale=1" }
## [Direct link to heading](https://docs.unsloth.ai/basics/errors-troubleshooting\#running-in-unsloth-works-well-but-after-exporting-and-running-on-other-platforms-the-results-are-poo) Running in Unsloth works well, but after exporting & running on other platforms, the results are poor

You might sometimes encounter an issue where your model runs and produces good results in Unsloth, but when you use it on another platform like Ollama or vLLM, the results are poor or you might get gibberish, endless/infinite generations, or repeated outputs.

- The most common cause of this error is using an incorrect chat template. It's essential to use the SAME chat template that was used when training the model in Unsloth and when you later run it in another framework, such as llama.cpp or Ollama. When inferencing from a saved model, it's crucial to apply the correct template (see the sketch at the end of this page).
- It might also be because your inference engine adds an extra "start of sequence" token (or, conversely, omits a required one), so make sure you check both possibilities!

## [Direct link to heading](https://docs.unsloth.ai/basics/errors-troubleshooting\#saving-to-gguf-vllm-16bit-crashes) Saving to GGUF / vLLM 16bit crashes

You can try reducing the maximum GPU usage during saving by changing `maximum_memory_usage`. The default is `model.save_pretrained(..., maximum_memory_usage = 0.75)`. Reduce it to, say, 0.5 to use 50% of peak GPU memory, or lower. This can reduce OOM crashes during saving.

## [Direct link to heading](https://docs.unsloth.ai/basics/errors-troubleshooting\#evaluation-loop-also-oom-or-crashing) Evaluation Loop - also OOM or crashing

A common cause of OOM here is setting the evaluation batch size too high; set it lower than 3 to use less VRAM. First split your training dataset into a train and test split, then set the trainer's evaluation settings to:

```python
new_dataset = dataset.train_test_split(test_size = 0.01)

trainer = SFTTrainer(
    # ... plus your other SFTTrainer arguments (model, tokenizer, etc.)
    args = TrainingArguments(
        fp16_full_eval = True,
        per_device_eval_batch_size = 2,
        eval_accumulation_steps = 4,
        eval_strategy = "steps",
        eval_steps = 1,
    ),
    train_dataset = new_dataset["train"],
    eval_dataset = new_dataset["test"],
)
```

This will cause no OOMs and make evaluation somewhat faster, with no upcasting to float32.

## [Direct link to heading](https://docs.unsloth.ai/basics/errors-troubleshooting\#notimplementederror-a-utf-8-locale-is-required.-got-ansi) NotImplementedError: A UTF-8 locale is required. Got ANSI

See [https://github.com/googlecolab/colabtools/issues/3409](https://github.com/googlecolab/colabtools/issues/3409)

In a new cell, run the below:

```python
import locale
locale.getpreferredencoding = lambda: "UTF-8"
```
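One way to keep the chat template consistent between training and deployment is to always build prompts with the tokenizer's own template rather than hand-written strings. A small sketch (it assumes a `tokenizer` loaded as in the earlier examples; the messages are illustrative, and `apply_chat_template` is the standard `transformers` tokenizer method):

```python
messages = [
    {"role": "user", "content": "Hello! What is 1+1?"},
]

# Render the exact template the model was trained with, including the
# generation prompt for the assistant turn
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize = False,
    add_generation_prompt = True,
)

# Compare this string with what Ollama / llama.cpp / vLLM sends to the model
print(prompt)
```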
{ "color-scheme": "light dark", "description": "To fix any errors with your setup, see below:", "favicon": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Forganizations%252FHpyELzcNe0topgVLGCZY%252Fsites%252Fsite_mXXTe%252Ficon%252FUdYqQdRpLXMz6O8ejTLD%252Funsloth%2520sticker.png%3Falt%3Dmedia%26token%3D2a5ba961-51cb-4094-8545-6eb20ad1e06e&width=48&height=48&sign=277417f7&sv=2", "generator": "GitBook (e15757d)", "language": "en", "og:description": "To fix any errors with your setup, see below:", "og:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "og:title": "Errors/Troubleshooting | Unsloth Documentation", "ogDescription": "To fix any errors with your setup, see below:", "ogImage": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "ogTitle": "Errors/Troubleshooting | Unsloth Documentation", "robots": "index, follow", "scrapeId": "4f9c274c-58cf-439d-9ada-b8359b9a4e4b", "sourceURL": "https://docs.unsloth.ai/basics/errors-troubleshooting", "statusCode": 200, "title": "Errors/Troubleshooting | Unsloth Documentation", "twitter:card": "summary_large_image", "twitter:description": "To fix any errors with your setup, see below:", "twitter:image": "https://docs.unsloth.ai/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fsocialpreview%252FTs1n0uVSd2mdNRN58fhY%252Fdocumentation%25202.png%3Falt%3Dmedia%26token%3Ddd0c4b3f-da1d-4678-aec2-23331f675a34&width=1200&height=630&sign=7a665d5&sv=2", "twitter:title": "Errors/Troubleshooting | Unsloth Documentation", "url": "https://docs.unsloth.ai/basics/errors-troubleshooting", "viewport": "width=device-width, initial-scale=1" }