# OpenBioLLM-Llama3-8B-GGUF
To run the model with a simple setup, do the following. (For what it's worth, the model didn't work all that well for me.)
```bash
pip install llama-cpp-python
pip install huggingface-hub
```
You can of course swap in a different GGUF file to download. Just don't download the entire repository, as the files are fairly large:
```bash
huggingface-cli download aaditya/OpenBioLLM-Llama3-8B-GGUF openbiollm-llama3-8b.Q4_K_M.gguf --local-dir ./models --local-dir-use-symlinks False
```
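If you'd rather download from Python than the CLI, the same file can be fetched with `hf_hub_download` from `huggingface_hub`. A minimal sketch, assuming the same repo and filename as the command above:

```python
from huggingface_hub import hf_hub_download

# Fetch the single GGUF file into ./models (same repo/filename as the CLI command above)
model_path = hf_hub_download(
    repo_id="aaditya/OpenBioLLM-Llama3-8B-GGUF",
    filename="openbiollm-llama3-8b.Q4_K_M.gguf",
    local_dir="./models",
)
print(model_path)  # path you can pass to Llama(model_path=...)
```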
To start generating, run the following Python script:
```python
from llama_cpp import Llama

# Set chat_format according to the model you are using
llm = Llama(
    model_path="./models/openbiollm-llama3-8b.Q4_K_M.gguf",
    chat_format="llama-3",
)

response = llm.create_chat_completion(
    max_tokens=250,
    messages=[
        {"role": "system", "content": "You are a biomedical AI assistant."},
        {"role": "user", "content": "Name 5 diabetes medications."},
    ],
)

print(response["choices"][0]["message"]["content"])
```
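If you want tokens to appear as they're generated rather than all at once, `create_chat_completion` also accepts `stream=True`, in which case it yields incremental chunks instead of one final response. A minimal sketch along the same lines as the script above:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/openbiollm-llama3-8b.Q4_K_M.gguf",
    chat_format="llama-3",
)

# With stream=True the call yields chunks; each chunk carries a "delta" dict
# that may contain the next piece of generated text
for chunk in llm.create_chat_completion(
    max_tokens=250,
    messages=[
        {"role": "system", "content": "You are a biomedical AI assistant."},
        {"role": "user", "content": "Name 5 diabetes medications."},
    ],
    stream=True,
):
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="", flush=True)
print()
```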