G-AshwinKumar committed
Commit 42d69ee (verified)
Parent: 91ee1fa

Change references from 8B to 70B

Change references from 8B to 70B

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -22,7 +22,7 @@ tags:
  - healthcare
  pipeline_tag: question-answering
  base_model:
- - meta-llama/Llama-3.1-8B
+ - meta-llama/Llama-3.1-70B
  ---
  <p align="center">
  <picture>
@@ -191,7 +191,7 @@ print(outputs[0]["generated_text"][len(prompt):])
  from transformers import AutoTokenizer, AutoModelForCausalLM
  import torch
 
- model_id = "HPAI-BSC/Llama31-Aloe-Beta-8B"
+ model_id = "HPAI-BSC/Llama31-Aloe-Beta-70B"
 
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(
@@ -351,7 +351,7 @@ To compare Aloe with the most competitive open models (both general purpose and
 
  Benchmark results indicate the training conducted on Aloe has boosted its performance achieving comparable results with SOTA models like Llama3-OpenBioLLLM, Llama3-Med42, MedPalm-2 and GPT-4. Llama31-Aloe-Beta-70B also outperforms the other existing medical models in the OpenLLM Leaderboard and in the evaluation of other medical tasks like Medical Factualy and Medical Treatment recommendations among others. All these results make Llama31-Aloe-Beta-70B one of the best existing models for healthcare.
 
- With the help of prompting techniques the performance of Llama3-Aloe-8B-Beta is significantly improved. Medprompting in particular provides a 4% increase in reported accuracy, after which Llama31-Aloe-Beta-70B outperforms all the existing models that do not use RAG evaluation.
+ With the help of prompting techniques the performance of Llama3-Aloe-70B-Beta is significantly improved. Medprompting in particular provides a 4% increase in reported accuracy, after which Llama31-Aloe-Beta-70B outperforms all the existing models that do not use RAG evaluation.
 
 
  ## Environmental Impact
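The second hunk above updates the `model_id` used in the README's transformers snippet. A minimal sketch of that snippet as a helper is below; the `torch_dtype` and `device_map` arguments are assumed typical settings (the hunk cuts off inside the `from_pretrained(` call, so they are not shown in this diff), and actually loading the 70B checkpoint requires the weights to be downloadable from the Hub.

```python
# Model id as updated by this commit (previously "HPAI-BSC/Llama31-Aloe-Beta-8B").
MODEL_ID = "HPAI-BSC/Llama31-Aloe-Beta-70B"

def load_aloe(model_id: str = MODEL_ID):
    """Load the Aloe tokenizer and model.

    Imports are local so merely defining this helper does not require
    torch/transformers to be installed. The dtype/device settings are
    assumed typical usage, not taken from this diff.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # half-precision halves memory for 70B weights
        device_map="auto",           # shard layers across available GPUs
    )
    return tokenizer, model
```

Calling `load_aloe()` with the default argument pulls the 70B checkpoint; pass the old 8B id only if you intend the pre-commit behavior.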