dcampanini committed
Commit e96d323
Parent(s): c2fac4d
adding model info
README.md CHANGED
@@ -1,4 +1,15 @@
1     ---
2     license: unknown
3     ---
4   - # LLaVA-Med model for multimodal radiology report generation
1     ---
2     license: unknown
3     ---
4   + # LLaVA-Med model for multimodal radiology report generation
5   +
6   + This model is based on LLaVA-Med 1.0, finetuned to generate medical reports from a chest X-ray and a prompt;
7   + in our case, the instruction was "write the finding section of chest x-ray radiology report".
8   +
9   + The dataset used for finetuning was the MIMIC-CXR data shared for the Radiology Report Generation challenge
10  + of the BioNLP Workshop at the Association for Computational Linguistics 2024.
11  +
12  + We used the 148,374 findings sections of MIMIC-CXR for finetuning over 3 epochs.
13  +
14  + More details about the challenge can be found on the [challenge web page](https://stanford-aimi.github.io/RRG24/)
15  + or on the [workshop site](https://aclweb.org/aclwiki/BioNLP_Workshop).
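As a rough illustration of how the instruction above could be paired with an image at inference time, here is a minimal sketch of a LLaVA-style single-turn prompt. The image token, role labels, and template here are assumptions for illustration only, not the actual format used by this model's code.

```python
# Hypothetical sketch: combining the finetuning instruction with an image
# placeholder into a LLaVA-style conversation prompt. Token names and the
# USER/ASSISTANT template are assumptions, not taken from the model's code.
IMAGE_TOKEN = "<image>"
INSTRUCTION = "write the finding section of chest x-ray radiology report"

def build_prompt(instruction: str = INSTRUCTION) -> str:
    """Build a single-turn prompt with the image placeholder first."""
    return f"USER: {IMAGE_TOKEN}\n{instruction}\nASSISTANT:"

print(build_prompt())
```

The model would then be expected to continue the text after `ASSISTANT:` with the generated findings section.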