---
base_model: unsloth/Qwen2.5-1.5B-Instruct
library_name: peft
license: mit
datasets:
- ituperceptron/turkish_medical_reasoning
language:
- tr
pipeline_tag: question-answering
tags:
- medical
- biology
---

# Model Card for Turkish-Medical-R1


## Model Details

This model is a fine-tuned version of Qwen2.5-1.5B-Instruct for medical reasoning in Turkish. It was trained on the ituperceptron/turkish_medical_reasoning dataset, which contains instruction-style examples focused on clinical reasoning, diagnosis, patient care, and medical decision-making.

### Model Description


- **Developed by:** Rustam Shiriyev
- **Language(s) (NLP):** Turkish
- **License:** MIT
- **Finetuned from model:** unsloth/Qwen2.5-1.5B-Instruct
 

## Uses

### Direct Use

- Medical Q&A in Turkish
- Clinical reasoning tasks (educational or non-diagnostic)
- Research on medical domain adaptation and multilingual LLMs

### Out-of-Scope Use

This model is intended for research and educational purposes only. It should not be used for real-world medical decision-making or patient care.


## How to Get Started with the Model

Use the code below to get started with the model.

```python
from huggingface_hub import login
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

login(token="")  # paste your Hugging Face access token here

# Load the tokenizer and base model, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-1.5B-Instruct")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-1.5B-Instruct",
    device_map={"": 0},
)
model = PeftModel.from_pretrained(base_model, "Rustamshry/Turkish-Medical-R1")

# "What is the specific feature observed in electron microscopy of
# medullary thyroid carcinoma samples?"
question = "Medüller tiroid karsinomu örneklerinin elektron mikroskopisinde gözlemlenen spesifik özellik nedir?"

# Instruction: "You are an AI assistant specialized in the field of medicine.
# Answer incoming questions only in Turkish, in an explanatory manner."
prompt = (
    "### Talimat:\n"
    "Siz tıp alanında uzmanlaşmış bir yapay zeka asistanısınız. Gelen soruları yalnızca Türkçe olarak, "
    "açıklayıcı bir şekilde yanıtlayın.\n\n"
    f"### Soru:\n{question.strip()}\n\n"
    "### Cevap:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=2048,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
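
If you want to serve the model without a PEFT dependency at inference time, the adapter can be folded into the base weights. Below is a minimal sketch using PEFT's `merge_and_unload`; the output directory name is just an example:

```python
# Optional: merge the LoRA adapter into the base weights so the result can be
# loaded with plain transformers. The output path is arbitrary.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("Turkish-Medical-R1-merged")
tokenizer.save_pretrained("Turkish-Medical-R1-merged")
```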

## Training Data

- Dataset: ituperceptron/turkish_medical_reasoning, a Turkish translation (~7K examples) of FreedomIntelligence/medical-o1-reasoning-SFT
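
To inspect the data, it can be loaded with the `datasets` library. A quick sketch; check `column_names` for the dataset's actual schema:

```python
from datasets import load_dataset

# Load the training split and inspect its schema and a sample row.
ds = load_dataset("ituperceptron/turkish_medical_reasoning", split="train")
print(ds.column_names)
print(ds[0])
```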


## Evaluation

No formal quantitative evaluation has been run yet.
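
Until a proper benchmark is run, a rough sanity check is to score a slice of the dataset with the fine-tuned model. This is only a sketch, reusing `model` and `tokenizer` from the snippet above: the `question`/`answer` column names are assumptions, and loss on training data measures fit rather than generalization.

```python
import math
import torch
from datasets import load_dataset

eval_set = load_dataset("ituperceptron/turkish_medical_reasoning", split="train[:50]")

losses = []
for ex in eval_set:
    # Column names are assumed; adjust them to the dataset's actual schema.
    text = f"### Soru:\n{ex['question']}\n\n### Cevap:\n{ex['answer']}"
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024).to(model.device)
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    losses.append(loss.item())

mean_loss = sum(losses) / len(losses)
print(f"mean loss: {mean_loss:.3f}  perplexity: {math.exp(mean_loss):.2f}")
```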


### Framework versions

- PEFT 0.15.2