🦷 Qwen2.5-1.5B Dental Model (LoRA Adapter & Full Model)

This model is a fine-tuned version of Qwen/Qwen2.5-1.5B, trained with LoRA adapters on a dental procedure instruction dataset. It explains ADA dental procedure codes in a way patients can understand.

Two versions are available:

  • BirdieByte1024/Qwen2.5-1.5B-LoRA-dental: LoRA adapter only
  • BirdieByte1024/Qwen2.5-1.5B-dental-full: Fully merged standalone model

πŸ” Model Details

  • Base model: Qwen/Qwen2.5-1.5B
  • Training method: PEFT (LoRA)
  • Tokenizer: Inherited from base model
  • Model size: 1.5B parameters
  • Precision: fp32
  • Training hardware: GTX 1060 (6GB)

📚 Dataset

This model was trained on a dental procedure instruction dataset. The dataset includes ADA codes with short and long descriptions, useful for generating patient-friendly explanations.


💬 Prompt Format

This is an instruction-tuned model using a simple text format:

### Instruction:
Explain the following dental code.

### Code:
D7140 - Extraction, erupted tooth

### Response:
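
The format above can be assembled with a small helper; `build_prompt` is a hypothetical convenience function, not part of the model repo:

```python
def build_prompt(code: str, description: str) -> str:
    """Build a prompt in the instruction format this model was trained on."""
    return (
        "### Instruction:\n"
        "Explain the following dental code.\n\n"
        f"### Code:\n{code} - {description}\n\n"
        "### Response:"
    )

print(build_prompt("D7140", "Extraction, erupted tooth"))
```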

✅ How to Use (Transformers)

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "BirdieByte1024/Qwen2.5-1.5B-dental-full"  # or load the LoRA version with PEFT

# Fall back to CPU if no GPU is available
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to(device)

prompt = """### Instruction:
Explain the following dental code.

### Code:
D7140 - Extraction, erupted tooth

### Response:"""

inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

🛠 Deployment Options

🖥 Local with Transformers (Python)

Use the example above with transformers for local inference.

🧠 Use with Ollama / llama.cpp (GGUF format)

To deploy via Ollama, first convert the merged model to GGUF using llama.cpp's export tools (the adapter version must be merged into the base model before conversion), then run:

ollama run qwen2.5-dental
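
A rough sketch of the conversion and registration steps, assuming a local llama.cpp checkout and an Ollama install (paths, model directory, and the q8_0 quantization type are illustrative; script names may differ by llama.cpp version):

```shell
# Convert the merged HF model to GGUF with llama.cpp's conversion script
python llama.cpp/convert_hf_to_gguf.py ./Qwen2.5-1.5B-dental-full \
  --outfile qwen2.5-dental.gguf --outtype q8_0

# Minimal Ollama Modelfile pointing at the converted weights
echo 'FROM ./qwen2.5-dental.gguf' > Modelfile
ollama create qwen2.5-dental -f Modelfile
ollama run qwen2.5-dental
```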


⚠️ Limitations

  • English-only dental domain coverage
  • Not a diagnostic or real clinical system
  • May hallucinate or oversimplify medical terms

✍️ Author

Created by BirdieByte1024 as part of a patient-education AI project using LoRA + Qwen models.


📜 License

Apache 2.0
