---
base_model:
- openthaigpt/openthaigpt1.5-14b-instruct
datasets:
- Thaweewat/thai-med-pack
language:
- th
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- text-generation-inference
- sft
- trl
- 4-bit precision
- bitsandbytes
- LoRA
- Fine-Tuning with LoRA
- LLM
- GenAI
- NT GenAI
- ntgenai
- lahnmah
- NT Thai GPT
- ntthaigpt
- medical
- medtech
- HealthGPT
- หลานม่า
- NT Academy
new_version: amornpan/openthaigpt-MedChatModelv11
---

# ✨ Fine-Tuning the MedChat Model for GPU Efficiency ✨

# 🇹🇭 **Model Card for openthaigpt1.5-14b-medical-tuned**

## ℹ️ This version is optimized for GPU. A CPU version will be available soon.

This model is fine-tuned from `openthaigpt1.5-14b-instruct` using Supervised Fine-Tuning (SFT) on the Thaweewat/thai-med-pack dataset. It is designed for medical question-answering tasks in Thai and specializes in providing accurate, contextual answers grounded in medical information.

## Model Description

This model was fine-tuned with Supervised Fine-Tuning (SFT) to optimize it for medical question answering in Thai. The base model is `openthaigpt1.5-14b-instruct`, enhanced with domain-specific knowledge from the Thaweewat/thai-med-pack dataset (an illustrative sketch of this kind of LoRA SFT setup appears below).

- **Model type:** Causal Language Model (AutoModelForCausalLM)
- **Language(s):** Thai
- **License:** Apache License 2.0
- **Fine-tuned from model:** openthaigpt1.5-14b-instruct
- **Dataset used for fine-tuning:** Thaweewat/thai-med-pack

### Model Sources

- **Repository:** https://huggingface.co/amornpan
- **Contributor:** https://huggingface.co/Aekanun
- **Base Model:** https://huggingface.co/openthaigpt/openthaigpt1.5-14b-instruct
- **Dataset:** https://huggingface.co/datasets/Thaweewat/thai-med-pack

## Uses

### Direct Use

The model can be used directly for generating medical responses in Thai. It has been optimized for:

- Medical question answering
- Providing clinical information
- Health-related dialogue generation

### Downstream Use

This model can serve as a foundation for medical assistance systems, chatbots, and healthcare applications, specifically in the Thai language.

### Out-of-Scope Use

- This model should not be used for real-time diagnosis or emergency medical scenarios.
- Avoid using it for critical clinical decisions without human oversight; it is not intended to replace professional medical advice.

## Bias, Risks, and Limitations

### Bias

- The model may reflect biases present in the dataset, particularly when addressing underrepresented medical conditions or topics.

### Risks

- Responses may contain inaccuracies due to the inherent limitations of the model and the fine-tuning dataset.
- The model should not be used as the sole source of medical advice.

### Limitations

- Limited to the medical domain.
- The model is sensitive to prompts and may generate off-topic responses to non-medical queries.
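## Fine-Tuning Setup (Illustrative Sketch)

The tags above (`sft`, `trl`, `LoRA`, `bitsandbytes`, 4-bit precision) indicate LoRA-based supervised fine-tuning on a 4-bit quantized base. The authors' exact training script and hyperparameters are not published in this card, so the sketch below is only a minimal illustration of such a setup: the LoRA rank, batch sizes, and dataset text field are assumptions, not the actual configuration.

```python
# Minimal illustrative sketch of 4-bit LoRA SFT with trl + peft + bitsandbytes.
# All hyperparameters here are assumptions, not the authors' configuration.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

base_model = "openthaigpt/openthaigpt1.5-14b-instruct"

# Quantize the base model to 4-bit NF4 so the 14B model fits on a single GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)

dataset = load_dataset("Thaweewat/thai-med-pack", split="train")

# LoRA adapters: only small low-rank matrices are trained on top of the base
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                         task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(
        output_dir="openthaigpt-med-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        # dataset_text_field="text",  # adjust to the dataset's actual schema
    ),
)
trainer.train()
```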
## Model Training Results

```
[985/985 8:34:43, Epoch 134/141]

Step    Training Loss    Validation Loss
50      1.883700         1.708532
100     1.792500         1.528184
150     1.555000         1.296583
200     1.403900         1.251281
250     1.374300         1.225630
300     1.321000         1.195238
350     1.313900         1.187670
400     1.299000         1.181292
450     1.296400         1.177670
500     1.285000         1.173616
550     1.272800         1.170705
600     1.251200         1.169226
650     1.262600         1.166078
700     1.255300         1.165633
750     1.251600         1.165041
800     1.252300         1.162943
850     1.232700         1.164691
900     1.247300         1.163449
950     1.246300         1.163610
```

![image/png](https://cdn-uploads.huggingface.co/production/uploads/663ce15f197afc063058dc3a/YGFmHdnura4iCxWwvHrlq.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/663ce15f197afc063058dc3a/bqzHSvriZV3uxwsx949JM.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/663ce15f197afc063058dc3a/7IqYD4YVfI-NAGBCmztQ6.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/663ce15f197afc063058dc3a/nCmd5f7p5q7UXFEkAmAD1.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/663ce15f197afc063058dc3a/gBixy-3XobOvFd21JaYaM.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/663ce15f197afc063058dc3a/OnXan8Pli6ju3z-Ecllca.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/663ce15f197afc063058dc3a/nlMGPZb05Z9BeOD3NUbo5.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/663ce15f197afc063058dc3a/6fZPHTNzM6Wjia0kYBKQ2.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/663ce15f197afc063058dc3a/LZucvA_GwdZWpd9ljWSCF.png)

## How to Get Started with the Model

Here's how to load and use the model for generating medical responses in Thai.

## Using Google Colab Pro or Pro+ for Fine-Tuning and Inference

![image/png](https://cdn-uploads.huggingface.co/production/uploads/663ce15f197afc063058dc3a/XbUTda-Gdvl1DeUs82xoX.png)

## 1. Install the Required Packages

First, make sure the required libraries are installed:

```python
!pip install --upgrade torch transformers accelerate bitsandbytes
```

## 2. Load the Tokenizer

Load the tokenizer directly from Hugging Face (the model itself is loaded in step 5, after the quantization config is defined):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Define the model path
model_path = 'amornpan/openthaigpt1.5-14b-MedChatModelV1'

# Load the tokenizer and set the padding token
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token
```

## 3. Prepare Your Input (Custom Prompt)

Create a custom medical prompt for the model to respond to:

```python
# "Please describe the characteristics of early-stage oral cancer"
custom_prompt = "โปรดอธิบายลักษณะช่องปากที่เป็นมะเร็งในระยะเริ่มต้น"
PROMPT = f'[INST] {custom_prompt}[/INST]'

# Tokenize the input prompt
inputs = tokenizer(PROMPT, return_tensors="pt", padding=True, truncation=True)
```

## 4. Configure the Model for Efficient Loading (4-bit Quantization)

The model runs in 4-bit precision for efficient inference. Set up the configuration as follows:

```python
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16
)
```

## 5. Load the Model with Quantization Support

Now load the model with the 4-bit quantization settings. `device_map="auto"` dispatches the quantized weights to the available GPU:

```python
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True
)
```
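As an optional sanity check, you can confirm how much memory the 4-bit weights occupy; `get_memory_footprint()` is a standard `transformers` model method, and the exact figure will vary with your environment:

```python
# Optional: report the approximate memory used by the quantized model
print(f"Model memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```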
## 6. Move the Inputs to the GPU (If Available)

For faster inference, move the input tensors to the GPU. Note that the 4-bit model is already placed on the GPU by `device_map="auto"`; calling `.to()` on a bitsandbytes-quantized model is not supported:

```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
inputs = {k: v.to(device) for k, v in inputs.items()}
```

## 7. Generate a Response from the Model

Now generate the medical response:

```python
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True)
```

## 8. Decode the Generated Text

Finally, decode and print the model's response:

```python
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```

## 9. Output

```
[INST] โปรดอธิบายลักษณะช่องปากที่เป็นมะเร็งในระยะเริ่มต้น[/INST]
สำหรับมะเร็งช่องปากในระยะแรกอาจรวมถึงอาการหรือลักษณะดังต่อไปนี้:
1. แผลบนช่องปากหรือกระพูดนิ่มที่อยู่กับที่และไม่หายไปแม้จะผ่านการรักษาด้วยตนเอง
2. บวมที่อยู่กับที่ที่ข้างใดข้างหนึ่งของริมฝีปาก
3. แผลเปื่อยหรือพังผืดที่เกิดขึ้นที่กระพูดหรือฟันที่ไม่หาย
4. ความเปลี่ยนแปลงของผิว pigment ในช่องปาก เช่น สีของกระพืดหรือริมฝีปากที่เปลี่ยนเป็นสีขาวหรือขาว
5. ปัญหาในการพูดหรือกินอาหาร
6. ขดลวดที่ด้านข้างหรือใต้คอที่เจริญเติบโต
7. อาการเจ็บจี๊ด
```

### 👤 **Authors**

* Amornpan Phornchaicharoen (amornpan@gmail.com)
* Aekanun Thongtae (cto@bangkokfirsttech.com)
* Montita Somsoo (montita.fonn@gmail.com)
* Phongsatorn Somjai (ipongdev@gmail.com)
* Jiranuwat Songpad (jiranuwat.song64@gmail.com)
* Benchawan Wangphoomyai (benchaa.27@gmail.com)