Update README.md
README.md CHANGED
@@ -83,24 +83,21 @@ Aloe Beta has been tested on the most popular healthcare QA datasets, with and w
-<!---
-The Beta model has been developed to excel in several different medical tasks. For this reason, we evaluated the model in many different medical tasks:
+The Beta model has been developed to excel in several different medical tasks. For this reason, we evaluated the model in many different medical benchmarks:
 
 We also compared the performance of the model in the general domain, using the OpenLLM Leaderboard benchmark. Aloe-Beta gets competitive results with the current SOTA general models in the most used general benchmarks and outperforms the medical models:
 
 ## Uses

@@ -263,7 +260,7 @@ The training set consists of around 1.8B tokens, having 3 different types of dat
 - [HPAI-BSC/headqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/headqa-cot-llama31)
 - [HPAI-BSC/MMLU-medical-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/MMLU-medical-cot-llama31)
 - [HPAI-BSC/Polymed-QA](https://huggingface.co/datasets/HPAI-BSC/Polymed-QA)
-- General data. It includes maths, STEM, code, function calling, and instruction
+- General data. It includes maths, STEM, code, function calling, and instruction with very long context.
 - [HPAI-BSC/Aloe-Beta-General-Collection](https://huggingface.co/datasets/HPAI-BSC/Aloe-Beta-General-Collection)
 
 #### Training parameters
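
The second hunk lists the Hugging Face Hub datasets that make up the roughly 1.8B-token training mix. A minimal sketch of pulling two of them down for inspection with the `datasets` library, assuming the repositories are public and expose a default `train` split (split names and field layouts are not stated in the diff):

```python
# Sketch only: the repo IDs come from the diff above; the "train" split name
# and the record structure are assumptions, not confirmed by the README.
from datasets import load_dataset

# One of the medical chain-of-thought sets listed under the training data.
medical = load_dataset("HPAI-BSC/MMLU-medical-cot-llama31", split="train")

# The general-domain collection (maths, STEM, code, function calling,
# long-context instructions) referenced by the changed bullet.
general = load_dataset("HPAI-BSC/Aloe-Beta-General-Collection", split="train")

print(medical)     # row count and column names
print(medical[0])  # a single example, to see the prompt/answer fields
```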