diff for compatibility

- README.md +11 -143
- config.json +5 -2
- generation_config.json +1 -1
- tokenizer_config.json +2 -2

README.md
CHANGED
@@ -22,155 +22,23 @@ datasets:
 pipeline_tag: image-text-to-text
 ---
 
-Parameter Count: 8 billion
-
-Training Data: Custom high-quality biomedical text and image dataset
-
-Number of Entries in Dataset: 500,000+
-
-Dataset Composition: The dataset comprises text and image samples, both synthetic and manually curated, ensuring diverse and comprehensive coverage of biomedical knowledge.
-
-## Model description
-
-Bio-Medical-MultiModal-Llama-3-8B-V1 is a specialized large language model designed for biomedical applications. It is fine-tuned from the Llama-3-8B-Instruct model on a custom dataset containing over 500,000 diverse entries. These entries include a mix of synthetic and manually curated data, ensuring high quality and broad coverage of biomedical topics.
-
-The model is trained to understand and generate text related to various biomedical fields, making it a valuable tool for researchers, clinicians, and other professionals in the biomedical domain.
+<!-- header start -->
+<p align="center">
+  <img src="https://huggingface.co/datasets/FriendliAI/documentation-images/resolve/main/model-card-assets/friendliai.png" width="100%" alt="FriendliAI Logo">
+</p>
+<!-- header end -->
+
+# ContactDoctor/Bio-Medical-MultiModal-Llama-3-8B-V1
+
+* Model creator: [ContactDoctor](https://huggingface.co/ContactDoctor)
+* Original model: [Bio-Medical-MultiModal-Llama-3-8B-V1](https://huggingface.co/ContactDoctor/Bio-Medical-MultiModal-Llama-3-8B-V1)
+
+## Differences
+
+* Added missing eos_token (`<|eot_id|>`) to config.json.
 
 ## License
 
-## Quick Demo
-
-<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/653f5b93cd52f288490edc83/RpdFKs3mBY9ZIxvUUWOKc.mp4"></video>
-
-## How to use
-
-```python
-import torch
-from PIL import Image
-from transformers import AutoModel, AutoTokenizer, BitsAndBytesConfig
-
-bnb_config = BitsAndBytesConfig(
-    load_in_4bit=True,
-    bnb_4bit_quant_type="nf4",
-    bnb_4bit_use_double_quant=True,
-    bnb_4bit_compute_dtype=torch.float16,
-)
-
-model = AutoModel.from_pretrained(
-    "ContactDoctor/Bio-Medical-MultiModal-Llama-3-8B-V1",
-    quantization_config=bnb_config,
-    device_map="auto",
-    torch_dtype=torch.float16,
-    trust_remote_code=True,
-    attn_implementation="flash_attention_2",
-)
-
-tokenizer = AutoTokenizer.from_pretrained("ContactDoctor/Bio-Medical-MultiModal-Llama-3-8B-V1", trust_remote_code=True)
-
-image = Image.open("Path to Your image").convert('RGB')
-question = 'Give the modality, organ, analysis, abnormalities (if any), treatment (if abnormalities are present)?'
-msgs = [{'role': 'user', 'content': [image, question]}]
-
-res = model.chat(
-    image=image,
-    msgs=msgs,
-    tokenizer=tokenizer,
-    sampling=True,
-    temperature=0.95,
-    stream=True
-)
-
-generated_text = ""
-for new_text in res:
-    generated_text += new_text
-    print(new_text, flush=True, end='')
-```
-
-> Sample Response
-
-The modality is Magnetic Resonance Imaging (MRI), the organ being analyzed is the cervical spine, and there are no abnormalities present in the image.
-
-## Intended uses & limitations
-
-Bio-Medical-MultiModal-Llama-3-8B-V1 is intended for a wide range of applications within the biomedical field, including:
-
-1. Research Support: Assisting researchers with literature review and data extraction from biomedical texts.
-2. Clinical Decision Support: Providing information to support clinical decision-making processes.
-3. Educational Tool: Serving as a resource for medical students and professionals seeking to expand their knowledge base.
-
-## Limitations and Ethical Considerations
-
-While Bio-Medical-MultiModal-Llama-3-8B-V1 performs well on various biomedical NLP tasks, users should be aware of the following limitations:
-
-1. Biases: The model may inherit biases present in the training data. Efforts have been made to curate a balanced dataset, but some biases may persist.
-2. Accuracy: The model's responses are based on patterns in the data it has seen and may not always be accurate or up to date. Users should verify critical information from reliable sources.
-3. Ethical Use: The model should be used responsibly, particularly in clinical settings where the stakes are high. It should complement, not replace, professional judgment and expertise.
-
-## Training and evaluation
-
-Bio-Medical-MultiModal-Llama-3-8B-V1 was trained on NVIDIA H100 GPUs, which provide the computational power necessary for handling large-scale data and model parameters efficiently. Rigorous evaluation protocols were used to benchmark its performance against similar models, ensuring robustness and reliability in real-world applications.
-
-The model was trained using **MiniCPM**, which allowed efficient handling of the multimodal data and provided the ability to process and learn from visual information.
-
-### Contact Information
-
-For further information, inquiries, or issues related to Bio-Medical-MultiModal-Llama-3-8B-V1, please contact:
-
-Email: [email protected]
-
-Website: https://www.contactdoctor.in
-
-### Training hyperparameters
-
-The following hyperparameters were used during training:
-
-- learning_rate: 0.0002
-- train_batch_size: 4
-- eval_batch_size: 4
-- num_epochs: 3
-- seed: 42
-- gradient_accumulation_steps: 4
-- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
-- lr_scheduler_type: cosine
-- lr_scheduler_warmup_ratio: 0.03
-- mixed_precision_training: Native AMP
-
-### Framework versions
-
-- PEFT 0.11.0
-- Transformers 4.40.2
-- PyTorch 2.1.2
-- Datasets 2.19.1
-- Tokenizers 0.19.1
-
-### Citation
-
-If you use Bio-Medical-MultiModal-Llama-3-8B-V1 in your research or applications, please cite it as follows:
-
-@misc{ContactDoctor_MEDLLM,
-  author       = {ContactDoctor},
-  title        = {Bio-Medical-MultiModal-Llama-3-8B-V1: A High-Performance Biomedical Multimodal LLM},
-  year         = {2024},
-  howpublished = {https://huggingface.co/ContactDoctor/Bio-Medical-MultiModal-Llama-3-8B-V1},
-}
+Refer to the license of the original model card.
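The eos_token fix noted under `Differences` can be verified without loading model weights. A minimal sketch, assuming the `transformers` library and using the original repo id (substitute the id of the repository that carries this fix):

```python
# Minimal sketch: verify the eos_token fix without loading model weights.
from transformers import AutoTokenizer

# trust_remote_code is required because the model ships custom MiniCPM-V code.
tokenizer = AutoTokenizer.from_pretrained(
    "ContactDoctor/Bio-Medical-MultiModal-Llama-3-8B-V1",
    trust_remote_code=True,
)

print(tokenizer.eos_token)                            # expected: <|eot_id|>
print(tokenizer.convert_tokens_to_ids("<|eot_id|>"))  # expected: 128009
```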
config.json
CHANGED
@@ -13,7 +13,10 @@
   "batch_vision_input": true,
   "bos_token_id": 128000,
   "drop_vision_last_layer": false,
-  "eos_token_id": 128001,
+  "eos_token_id": [
+    128001,
+    128009
+  ],
   "hidden_act": "silu",
   "hidden_size": 4096,
   "image_size": 448,
@@ -52,4 +55,4 @@
     "patch_size": 14
   },
   "vocab_size": 128256
-}
+}
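With a list-valued `eos_token_id`, generation stops at whichever of `<|end_of_text|>` (128001) or `<|eot_id|>` (128009) is produced first. A minimal sketch for inspecting the patched file, assuming `huggingface_hub` is installed; the repo id below is a placeholder for wherever this fixed config.json is hosted:

```python
# Sketch: read the patched config.json directly (no model download needed).
import json

from huggingface_hub import hf_hub_download

# Placeholder repo id: point this at the repository carrying the fix.
config_path = hf_hub_download(
    "your-org/Bio-Medical-MultiModal-Llama-3-8B-V1", "config.json"
)
with open(config_path) as f:
    config = json.load(f)

# Generation can now stop on either <|end_of_text|> or <|eot_id|>.
assert config["eos_token_id"] == [128001, 128009]
```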
generation_config.json
CHANGED
@@ -3,4 +3,4 @@
   "bos_token_id": 128000,
   "eos_token_id": 128001,
   "transformers_version": "4.41.2"
-}
+}
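Note that generation_config.json itself still lists only 128001, so stopping at `<|eot_id|>` comes from the tokenizer and config.json changes; the eos ids can also be overridden per call if needed. A minimal sketch of inspecting the shipped defaults, assuming `transformers`:

```python
# Sketch: inspect the shipped generation defaults.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained(
    "ContactDoctor/Bio-Medical-MultiModal-Llama-3-8B-V1"  # original repo id
)
print(gen_config.eos_token_id)  # 128001; pass eos_token_id=[128001, 128009]
                                # to generate() to stop on <|eot_id|> as well.
```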
tokenizer_config.json
CHANGED
@@ -2058,7 +2058,7 @@
   "bos_token": "<|begin_of_text|>",
   "chat_template": "{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}",
   "clean_up_tokenization_spaces": true,
-  "eos_token": "<|end_of_text|>",
+  "eos_token": "<|eot_id|>",
   "model_input_names": [
     "input_ids",
     "attention_mask"
@@ -2069,4 +2069,4 @@
   "tokenizer_class": "MiniCPMVTokenizerFast",
   "truncation_side": "right",
   "unk_token": "<unk>"
-}
+}
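For reference, the `chat_template` above is the standard Llama-3 chat format; with the corrected eos_token, every rendered turn now terminates on `<|eot_id|>`. A short sketch of what the template produces, assuming `transformers` (the message text is illustrative only):

```python
# Sketch: render the chat template to see the <|eot_id|>-terminated turns.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "ContactDoctor/Bio-Medical-MultiModal-Llama-3-8B-V1",
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Give the modality and organ shown."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
# <|begin_of_text|><|start_header_id|>user<|end_header_id|>
#
# Give the modality and organ shown.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```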