---
language: en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- ruslanmv
- llama
- trl
- llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
- healthcare
- medical
- clinical
- med
- lifescience
- Pharmaceutical
- Pharma
- llama-cpp
- gguf-my-repo
base_model: ruslanmv/Medical-Llama3-8B
datasets:
- ruslanmv/ai-medical-chatbot
widget:
- example_title: Medical-Llama3-8B
  messages:
  - role: system
    content: You are an expert and experienced from the healthcare and biomedical
      domain with extensive medical knowledge and practical experience.
  - role: user
    content: How long does it take for newborn jaundice to go away?
  output:
    text: Newborn jaundice, also known as neonatal jaundice, is a common condition
      in newborns where the yellowing of the skin and eyes occurs due to an elevated
      level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when
      red blood cells break down. In most cases, newborn jaundice resolves on its
      own without any specific treatment. The duration of newborn jaundice can vary
      depending on several factors such as the underlying cause, gestational age at
      birth, and individual variations in bilirubin metabolism. Here are some general
      guidelines
model-index:
- name: Medical-Llama3-8B
  results: []
---

# m1guelperez/Medical-Llama3-8B-Q8_0-GGUF
This model was converted to GGUF format from [`ruslanmv/Medical-Llama3-8B`](https://huggingface.co/ruslanmv/Medical-Llama3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ruslanmv/Medical-Llama3-8B) for more details on the model.

## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo m1guelperez/Medical-Llama3-8B-Q8_0-GGUF --hf-file medical-llama3-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
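
For an interactive chat that matches the system/user format shown in the widget above, llama.cpp's conversation mode can be used. This is a minimal sketch assuming a reasonably recent llama.cpp build; check `llama-cli --help` if your version's flags differ.

```bash
# -cnv enables interactive conversation mode; in this mode -p is treated as the system prompt
llama-cli --hf-repo m1guelperez/Medical-Llama3-8B-Q8_0-GGUF --hf-file medical-llama3-8b-q8_0.gguf \
  -cnv -p "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."
```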

### Server:
```bash
llama-server --hf-repo m1guelperez/Medical-Llama3-8B-Q8_0-GGUF --hf-file medical-llama3-8b-q8_0.gguf -c 2048
```
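
Once the server is up, it exposes an OpenAI-compatible chat endpoint that you can query, for example with curl. The sketch below assumes llama-server's default port 8080; adjust the URL if you start the server with `--port`.

```bash
# Send a chat request to the local llama-server instance (default: http://localhost:8080)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."},
      {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}
    ]
  }'
```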

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
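
Note: newer llama.cpp releases have moved the build system from Make to CMake, so `make` may not work on a recent checkout. Under that assumption, an equivalent CMake build looks roughly like this:

```bash
# Configure with CURL support enabled, then build the release binaries into ./build
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```

With the CMake build, the resulting binaries land under `build/bin/`, so the commands in Step 3 would be run from there.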

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo m1guelperez/Medical-Llama3-8B-Q8_0-GGUF --hf-file medical-llama3-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
or 
```bash
./llama-server --hf-repo m1guelperez/Medical-Llama3-8B-Q8_0-GGUF --hf-file medical-llama3-8b-q8_0.gguf -c 2048
```