Galen
Galen is fine-tuned from Mistral-7B-Instruct-v0.2 on a medical question-answering dataset.
Get Started
Install "accelerate" to run the model on a CUDA GPU:
pip install accelerate
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained('ahmed-ai/galen')
# do_sample=True is required for temperature and top_p to take effect
model_pipeline = pipeline(task="text-generation", model='ahmed-ai/galen', tokenizer=tokenizer, max_length=256, do_sample=True, temperature=0.5, top_p=0.6)
prompt = 'What is squamous cell carcinoma'
result = model_pipeline(prompt)
# print the generated text, excluding the echoed prompt
print(result[0]['generated_text'][len(prompt):])
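The final slice works because text-generation pipelines return the prompt followed by the completion in `generated_text`, so dropping the first `len(prompt)` characters leaves only the model's answer. A minimal self-contained sketch of that logic (the strings here are hypothetical, not real model output):

```python
prompt = 'What is squamous cell carcinoma'
# the pipeline echoes the prompt at the start of generated_text
generated_text = prompt + '? It is a common form of skin cancer.'
# slice off the prompt to keep only the completion
completion = generated_text[len(prompt):]
print(completion)
```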