Update README.md
---

<p align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/McxwesdwA45FqWJyO7evz.png">
    <img alt="aloe_70b" src="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/udSFjP3wdCu3liH_VXhBk.png" width=50%>
  </picture>
</p>
---

Llama3.1-Aloe-Beta-70B is an **open healthcare LLM** achieving **state-of-the-art performance** on several medical tasks. Aloe Beta is made available in two model sizes: [8B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-8B) and [70B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-70B). Both models are trained using the same recipe.

Aloe is trained on 20 medical tasks, resulting in a robust and versatile healthcare model. Evaluations show Aloe models to be among the best in their class. When combined with a RAG system ([also released](https://github.com/HPAI-BSC/prompt_engine)), the 8B version approaches the performance of closed models such as MedPalm-2 and GPT-4. With the same RAG system, Aloe-Beta-70B outperforms those private alternatives, producing state-of-the-art results.
- **Developed by:** [HPAI](https://hpai.bsc.es/)
- **Model type:** Causal decoder-only transformer language model
- **Language(s) (NLP):** English (capable but not formally evaluated on other languages)
- **License:** This model is based on Meta Llama 3.1 70B and is governed by the [Meta Llama 3 License](https://www.llama.com/llama3_1/license/). All our modifications are available under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license, making the Aloe Beta models **compatible with commercial use**.
- **Base model:** [meta-llama/Llama-3.1-70B](https://huggingface.co/meta-llama/Llama-3.1-70B)
- **Paper:** (more coming soon)
- **RAG Repository:** https://github.com/HPAI-BSC/prompt_engine
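For readers who want to try the model, a minimal usage sketch with the standard `transformers` chat pipeline is shown below. Only the model ID comes from this card; the system prompt, question, and generation settings are illustrative assumptions, and running the 70B model requires substantial GPU memory (or quantization).

```python
# Hedged usage sketch for Llama3.1-Aloe-Beta-70B via the `transformers`
# chat-style text-generation pipeline. Everything except MODEL_ID is
# illustrative, not prescribed by the model card.
MODEL_ID = "HPAI-BSC/Llama3.1-Aloe-Beta-70B"


def build_messages(question: str) -> list[dict]:
    """Wrap a question in the chat format Llama 3.1 instruct models expect."""
    return [
        {"role": "system", "content": "You are an expert medical assistant."},
        {"role": "user", "content": question},
    ]


def ask_aloe(question: str, max_new_tokens: int = 256) -> str:
    """Run one chat turn. Needs the model weights and ample GPU memory."""
    from transformers import pipeline  # heavyweight import kept local

    chat = pipeline("text-generation", model=MODEL_ID, device_map="auto")
    out = chat(build_messages(question), max_new_tokens=max_new_tokens)
    # With chat-format input, generated_text is the message list including
    # the new assistant turn; return that turn's content.
    return out[0]["generated_text"][-1]["content"]
```

The imports are kept inside `ask_aloe` so that `build_messages` can be reused (e.g. with the RAG engine linked above) without loading the model.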