---
library_name: transformers
tags:
- medical
- SOAP_notes_generation
license: apache-2.0
datasets:
- SubashNeupane/dataset_SOAP_summary
metrics:
- bertscore
- rouge
base_model:
- Qwen/Qwen2.5-14B-Instruct
pipeline_tag: text-generation
---

# Model Card for Qwen2.5-14B-Instruct-SOAP-tuned-Q8

This model is a LoRA fine-tuned version of the base model Qwen/Qwen2.5-14B-Instruct, trained to generate SOAP (Subjective, Objective, Assessment, Plan) notes from an input doctor-patient dialogue. This is an 8-bit quantized version.
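
To illustrate the task, the model takes a dialogue transcript as input and produces a SOAP note. Below is a minimal sketch of prompt construction; the exact instruction wording used during fine-tuning is an assumption, not documented here:

```python
# Hypothetical prompt format (illustrative only; the training template is not documented).
dialogue = (
    "Doctor: What brings you in today?\n"
    "Patient: I've had a sore throat and a fever for two days."
)
prompt = f"Generate a SOAP note for the following doctor-patient dialogue:\n\n{dialogue}"
```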

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** Yehia Zakaria
- **Finetuned from model:** Qwen/Qwen2.5-14B-Instruct

### Model Sources [optional]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load model and tokenizer from the same repository
model_id = "yehiazak/Qwen2.5-14B-Instruct-SOAP-tuned-Q8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Generate text
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.2)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
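
Since the base model is instruction-tuned, wrapping the dialogue in the Qwen chat template will usually match the expected input format better than a raw string. A minimal sketch, assuming a chat-style fine-tune (the system message wording is hypothetical):

```python
# Hypothetical chat-style usage; the exact system prompt used in training is an assumption.
messages = [
    {"role": "system", "content": "You generate SOAP notes from doctor-patient dialogues."},
    {"role": "user", "content": dialogue},  # dialogue: the transcript string
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
soap_note = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(soap_note)
```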

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The dataset (1473 samples in total) was shuffled and split as follows; a sketch of the split is shown after the list:

- Training samples: 1300
- Validation samples: 173
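
A minimal sketch of reproducing such a shuffle-and-split with the 🤗 datasets library, assuming the dataset exposes a single train split; the seed is arbitrary, as the original value is not documented:

```python
from datasets import load_dataset

# Load the SOAP summary dataset (1473 samples in total)
dataset = load_dataset("SubashNeupane/dataset_SOAP_summary", split="train")

# Shuffle, then take 1300 samples for training and the remaining 173 for validation
dataset = dataset.shuffle(seed=42)  # seed chosen arbitrarily for illustration
train_data = dataset.select(range(1300))
val_data = dataset.select(range(1300, 1473))
```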

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

LoRA fine-tuning for 3 epochs. A configuration sketch is given below.
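
A minimal sketch of a LoRA setup with the 🤗 PEFT library; the rank, alpha, dropout, and target modules are assumptions, since the model card does not document them:

```python
from peft import LoraConfig, get_peft_model

# Illustrative LoRA hyperparameters (assumed, not the documented configuration)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# base_model: the loaded Qwen/Qwen2.5-14B-Instruct model
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```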

#### Preprocessing [optional]

#### Metrics

Evaluation used ROUGE (1, 2, and L) and BERTScore; BERTScore was computed with "microsoft/deberta-xlarge-mnli". A computation sketch follows.
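
A minimal sketch of computing these metrics with the 🤗 evaluate library (the variable names are placeholders):

```python
import evaluate

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")

predictions = ["..."]  # generated SOAP notes
references = ["..."]   # reference SOAP notes

rouge_scores = rouge.compute(predictions=predictions, references=references)
bert_scores = bertscore.compute(
    predictions=predictions,
    references=references,
    model_type="microsoft/deberta-xlarge-mnli",
)
```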

### Results

| Metric              | Score  |
|---------------------|--------|
| ROUGE-1             | 0.7017 |
| ROUGE-2             | 0.4914 |
| ROUGE-L             | 0.6132 |
| BERTScore Precision | 0.8494 |
| BERTScore Recall    | 0.8288 |
| BERTScore F1        | 0.8382 |

#### Summary