---
library_name: transformers
tags:
- medical
license: apache-2.0
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
language:
- en
base_model:
- meta-llama/Llama-4-Scout-17B-16E-Instruct
pipeline_tag: text-generation
---

# Fine-tuning Llama 4 (Scout 17B 16E) in 4-bit Quantization for Medical Reasoning

This project fine-tunes the [`meta-llama/Llama-4-Scout-17B-16E-Instruct`](https://huggingface.co/meta-llama/Llama-4-Scout-17B-16E-Instruct) model on the medical reasoning dataset `FreedomIntelligence/medical-o1-reasoning-SFT`, using **4-bit quantization** for memory-efficient training.

---

## Setup

1. Install the required libraries:

```bash
pip install -U datasets accelerate peft trl bitsandbytes
pip install transformers==4.51.0
pip install "huggingface_hub[hf_xet]"
```

2. Authenticate with the Hugging Face Hub. Make sure your Hugging Face token is stored in an environment variable:

```bash
export HF_TOKEN=your_huggingface_token
```

The notebook logs you in automatically using this token.

---

## How to Run

1. **Load the Model and Tokenizer**
   The script downloads the Llama 4 Scout model and applies 4-bit quantization with `BitsAndBytesConfig` for efficient memory usage.

2. **Prepare the Dataset**
   - The notebook uses `FreedomIntelligence/medical-o1-reasoning-SFT` (first 500 samples).
   - It formats each example into an **instruction-following prompt** with step-by-step chain-of-thought reasoning.

3. **Fine-tuning**
   - Fine-tuning is set up with PEFT (LoRA-style adapter tuning), so only a small subset of model parameters is updated.
   - TRL (Transformer Reinforcement Learning) provides the supervised fine-tuning loop, as sketched after this list.

4. **Push the Fine-tuned Model**
   - After training, the fine-tuned adapter and tokenizer are pushed back to your Hugging Face account.
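The exact cells live in the training notebook linked below; the following is an untested sketch of steps 1–4. The dataset config name (`en`) and column names (`Question`, `Complex_CoT`, `Response`) are taken from the dataset card, while the LoRA hyperparameters, output directory, and helper name `format_example` are illustrative assumptions, not values confirmed by this repo.

```python
import os

import torch
from datasets import load_dataset
from huggingface_hub import login
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

login(token=os.environ["HF_TOKEN"])  # token exported during Setup

base_model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"

# Step 1: load the base model with 4-bit NF4 quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Step 2: take the first 500 samples and render each one into the
# instruction-following template (full text under "Example Prompt Format").
dataset = load_dataset(
    "FreedomIntelligence/medical-o1-reasoning-SFT", "en", split="train[:500]"
)

PROMPT = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. Before answering, think carefully about the question and create a step-by-step chain of thoughts to ensure a logical and accurate response.

### Instruction:
You are a medical expert with advanced knowledge in clinical reasoning, diagnostics, and treatment planning. Please answer the following medical question.

### Question:
{question}

### Response:
{cot}
{answer}"""

def format_example(example):
    # Column names as listed on the dataset card (assumption).
    return {
        "text": PROMPT.format(
            question=example["Question"],
            cot=example["Complex_CoT"],
            answer=example["Response"],
        )
    }

dataset = dataset.map(format_example)

# Step 3: attach a LoRA adapter and run supervised fine-tuning with TRL.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)
trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(
        output_dir="Llama-4-Scout-17B-16E-Instruct-Medical-ChatBot",
        dataset_text_field="text",
        num_train_epochs=1,
        per_device_train_batch_size=1,
    ),
)
trainer.train()

# Step 4: push the trained adapter and tokenizer to the Hub.
trainer.push_to_hub()
```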
---

Here is the training notebook: [Fine_tuning_llama_4](https://huggingface.co/kingabzpro/Llama-4-Scout-17B-16E-Instruct-Medical-ChatBot/blob/main/Fine_tuning_llama4%20(Original).ipynb)

## Model Configuration

- **Base Model**: `meta-llama/Llama-4-Scout-17B-16E-Instruct`
- **Quantization**: 4-bit (NF4)
- **Training**: PEFT + TRL
- **Dataset**: 500 examples from `FreedomIntelligence/medical-o1-reasoning-SFT`

---

## Notes

- **GPU Required**: You need access to 3x NVIDIA H200 GPUs; you can rent them by the hour from a provider such as RunPod. Training took only about 7 minutes.
- **Environment**: The notebook expects NVIDIA CUDA drivers to be available (an `nvidia-smi` check is included).
- **Memory Efficiency**: 4-bit loading greatly reduces the memory footprint.

---

## Example Prompt Format

```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. Before answering, think carefully about the question and create a step-by-step chain of thoughts to ensure a logical and accurate response.

### Instruction:
You are a medical expert with advanced knowledge in clinical reasoning, diagnostics, and treatment planning. Please answer the following medical question.

### Question:

### Response:
```

---

## Usage Script (untested)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
import torch

# Base model (original model from Meta)
base_model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"

# Fine-tuned LoRA adapter repository
lora_adapter_id = "kingabzpro/Llama-4-Scout-17B-16E-Instruct-Medical-ChatBot"

# 4-bit NF4 quantization config, matching the training setup
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the base model in 4-bit
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    quantization_config=bnb_config,
)

# Attach the LoRA adapter
model = PeftModel.from_pretrained(base_model, lora_adapter_id)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Inference example: the model completes the text after "### Response:"
prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. Before answering, think carefully about the question and create a step-by-step chain of thoughts to ensure a logical and accurate response.

### Instruction:
You are a medical expert with advanced knowledge in clinical reasoning, diagnostics, and treatment planning. Please answer the following medical question.

### Question:
What is the initial management for a patient presenting with diabetic ketoacidosis (DKA)?

### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=500)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
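If you prefer a single standalone checkpoint over a base-model-plus-adapter pair, you can merge the LoRA weights into the base model. This is likewise untested here: merging works best with the base model loaded in half precision rather than 4-bit, which requires substantially more memory, and the output path below is illustrative.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel
import torch

# Load the base model in bf16 (no 4-bit quantization) so the merge is lossless.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-4-Scout-17B-16E-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    base, "kingabzpro/Llama-4-Scout-17B-16E-Instruct-Medical-ChatBot"
)
model = model.merge_and_unload()  # folds the LoRA weights into the base layers
model.save_pretrained("llama4-scout-medical-merged")  # illustrative output path
```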