
Phi-4-mini-instruct QLoRA Adapter (jp-disease-finding)

TL;DR – A 4-bit QLoRA adapter that specializes the small Phi-4-mini-instruct model for span-level extraction of disease names and clinical findings from Japanese medical journal text.
The adapter was trained on a subset (vol. 98–102) of the jp-disease-finding-dataset.

Access & Usage Conditions

This repository is a manual-approval gated model. You may browse the file list, but downloading any file requires an approved access request on Hugging Face. Requests are reviewed manually by the authors; please allow 1–3 business days for a decision.

1. Model Details

🚀 Quick-start notebook and code are available in the GitHub repo.

2. Intended Use & Scope

  • Direct use: automatic extraction of Disease and Finding spans from Japanese medical literature for database construction or downstream NLP pipelines.
  • Users: clinical NLP researchers, medical informatics engineers, healthcare data scientists.
  • Out-of-scope: deployment without domain-expert supervision in clinical decision support systems.
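To make the span-extraction task concrete, here is a hypothetical illustration of an annotated example. The field names and character-offset convention are assumptions for illustration only; they are not taken from the actual schema of the jp-disease-finding-dataset.

```python
# Hypothetical annotated example (schema assumed, not the dataset's actual format).
example = {
    "text": "慢性関節リウマチの患者に関節腫脹を認めた。",
    "spans": [
        {"label": "Disease", "start": 0, "end": 8},    # 慢性関節リウマチ
        {"label": "Finding", "start": 12, "end": 16},  # 関節腫脹
    ],
}

# Recover each annotated span from its character offsets.
for s in example["spans"]:
    print(s["label"], example["text"][s["start"]:s["end"]])
```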

3. How to Use

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE    = "microsoft/Phi-4-mini-instruct"
ADAPTER = "seiya/Phi-4-mini-instruct-qlora-adapter-jp-disease-finding"

# Load the base model, then attach the QLoRA adapter weights on top of it.
model = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(model, ADAPTER)

# The adapter reuses the base model's tokenizer.
tokenizer = AutoTokenizer.from_pretrained(BASE)
prompt = "慢性関節リウマチの診断と管理……"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))

For an end-to-end demonstration (prompt design, post-processing, evaluation), see the notebook linked above.
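The model's raw generation still has to be converted into structured spans. The actual output format is defined by the training prompt and is documented in the linked notebook; the sketch below assumes a simple "<label>: <span>" line format purely for illustration.

```python
import re

# Hypothetical post-processing sketch. The real output format depends on the
# training prompt (see the notebook); here we assume one "Label: span" pair
# per line, accepting either an ASCII or a full-width colon.
def parse_spans(generated: str):
    spans = []
    for line in generated.splitlines():
        m = re.match(r"\s*(Disease|Finding)\s*[:：]\s*(.+)", line)
        if m:
            spans.append((m.group(1), m.group(2).strip()))
    return spans

demo = "Disease: 慢性関節リウマチ\nFinding: 関節腫脹"
print(parse_spans(demo))
# → [('Disease', '慢性関節リウマチ'), ('Finding', '関節腫脹')]
```

Validating parsed spans against the source text (e.g. checking that each extracted string actually occurs in the input) is a cheap guard against hallucinated spans, per the limitations noted below.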

4. Limitations & Bias

  • Trained on journal language; may underperform on electronic health records or layperson text.
  • Japanese only – no guarantee of accuracy on other languages.
  • Hallucination and boundary errors are possible; always validate critical outputs.

Acknowledgment

This work was supported by JSPS KAKENHI Grant Number JP22K12263.

Citation

If you use this model, please cite:
Currently under preparation...


This model was trained 2x faster with Unsloth and Hugging Face's TRL library.
