seapoe1809 committed
Commit 389c32b
Parent(s): 8f5396c
Update README.md
Sorry, the readme file wasn't updated. Trying again.
README.md CHANGED
---
license: mit
---
To run a simple model, do the following. (The model of course didn't work that well for me.)

```bash
pip install llama-cpp-python
pip install huggingface-hub
```
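Optionally, if you have a supported NVIDIA GPU, llama-cpp-python can be rebuilt with CUDA enabled. Treat this as a sketch, since the exact CMake flag depends on your llama-cpp-python version:

```bash
# Rebuild llama-cpp-python with CUDA support (recent versions use GGML_CUDA;
# older releases used LLAMA_CUBLAS instead)
CMAKE_ARGS="-DGGML_CUDA=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
```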
You can of course change which GGUF file to download. Please don't download all the files, as they can be fairly big.

```bash
huggingface-cli download aaditya/OpenBioLLM-Llama3-8B-GGUF openbiollm-llama3-8b.Q4_K_M.gguf --local-dir ./models --local-dir-use-symlinks False
```
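If you'd rather stay in Python, the same file can be fetched with huggingface_hub. A minimal sketch, assuming the same repo and filename as above:

```python
from huggingface_hub import hf_hub_download

# Fetch just the one quantized GGUF file into ./models
# (equivalent to the huggingface-cli command above)
hf_hub_download(
    repo_id="aaditya/OpenBioLLM-Llama3-8B-GGUF",
    filename="openbiollm-llama3-8b.Q4_K_M.gguf",
    local_dir="./models",
)
```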
Then, to start generating, run the following:

```python
from llama_cpp import Llama

# Load the downloaded GGUF model; set chat_format according to the model you are using
llm = Llama(model_path="./models/openbiollm-llama3-8b.Q4_K_M.gguf", chat_format="llama-3")

response = llm.create_chat_completion(
    max_tokens=250,
    messages=[
        {"role": "system", "content": "You are biomed ai"},
        {"role": "user", "content": "name 5 diabetes medications"},
    ],
)

print(response["choices"][0]["message"]["content"])
```
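If you want tokens printed as they are generated rather than all at once, llama-cpp-python also supports streaming. A minimal sketch reusing the `llm` object from above (the chunks mirror the OpenAI streaming format):

```python
# Stream the completion token by token; each chunk's "delta" dict
# may carry a piece of the generated content.
stream = llm.create_chat_completion(
    max_tokens=250,
    stream=True,
    messages=[
        {"role": "system", "content": "You are biomed ai"},
        {"role": "user", "content": "name 5 diabetes medications"},
    ],
)
for chunk in stream:
    delta = chunk["choices"][0]["delta"]
    print(delta.get("content", ""), end="", flush=True)
print()
```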