---
base_model: NousResearch/DeepHermes-3-Llama-3-3B-Preview
language:
- en
library_name: transformers
license: llama3
tags:
- Llama-3
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- roleplaying
- chat
- reasoning
- r1
- vllm
- llama-cpp
- gguf-my-repo
widget:
- example_title: Hermes 3
  messages:
  - role: system
    content: You are a sentient, superintelligent artificial general intelligence,
      here to teach and assist me.
  - role: user
    content: What is the meaning of life?
model-index:
- name: DeepHermes-3-Llama-3.1-3B
  results: []
---

# Triangle104/DeepHermes-3-Llama-3-3B-Preview-Q4_K_M-GGUF
This model was converted to GGUF format from [`NousResearch/DeepHermes-3-Llama-3-3B-Preview`](https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-3B-Preview) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-3B-Preview) for more details on the model.

---
DeepHermes 3 Preview is the latest version of our flagship Hermes series of LLMs by Nous Research, and one of the first models in the world to unify Reasoning (long chains of thought that improve answer accuracy) and normal LLM response modes into one model. We have also improved LLM annotation, judgement, and function calling.


DeepHermes 3 Preview is a hybrid reasoning model, and one of the first LLMs to unify both "intuitive", traditional-mode responses and long chain-of-thought reasoning responses in a single model, toggled by a system prompt.


Hermes 3, the predecessor of DeepHermes 3, is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board.


The ethos of the Hermes series of models is focused on aligning LLMs to the user, with powerful steering capabilities and control given to the end user.


This is a preview Hermes with early reasoning capabilities, distilled from R1 across a variety of tasks that benefit from reasoning and objectivity. Some quirks may be discovered! Please let us know about any interesting findings or issues you discover!

Note: To toggle REASONING ON, you must use the following system prompt:

```
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
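When serving this checkpoint with `llama-server`, which exposes an OpenAI-compatible `/v1/chat/completions` endpoint, toggling reasoning is just a matter of placing the prompt above in the system role. A minimal sketch (the `REASONING_PROMPT` constant and `build_chat_request` helper are illustrative names, not part of llama.cpp or any official API):

```python
# Build an OpenAI-compatible chat payload that switches DeepHermes 3
# into reasoning mode via the special system prompt.

REASONING_PROMPT = (
    "You are a deep thinking AI, you may use extremely long chains of thought "
    "to deeply consider the problem and deliberate with yourself via systematic "
    "reasoning processes to help come to a correct solution prior to answering. "
    "You should enclose your thoughts and internal monologue inside <think> "
    "</think> tags, and then provide your solution or response to the problem."
)

def build_chat_request(user_message: str, reasoning: bool = True) -> dict:
    """Return a JSON-serializable payload for POST /v1/chat/completions."""
    messages = []
    if reasoning:
        # The system prompt is what toggles the model's reasoning mode.
        messages.append({"role": "system", "content": REASONING_PROMPT})
    messages.append({"role": "user", "content": user_message})
    return {"messages": messages, "temperature": 0.6}

payload = build_chat_request("What is the meaning of life?")
print(payload["messages"][0]["role"])  # system
```

The resulting payload can then be POSTed to a running `llama-server` instance, e.g. with `requests.post("http://localhost:8080/v1/chat/completions", json=payload)` (port 8080 is the llama-server default).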

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/DeepHermes-3-Llama-3-3B-Preview-Q4_K_M-GGUF --hf-file deephermes-3-llama-3-3b-preview-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/DeepHermes-3-Llama-3-3B-Preview-Q4_K_M-GGUF --hf-file deephermes-3-llama-3-3b-preview-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/DeepHermes-3-Llama-3-3B-Preview-Q4_K_M-GGUF --hf-file deephermes-3-llama-3-3b-preview-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/DeepHermes-3-Llama-3-3B-Preview-Q4_K_M-GGUF --hf-file deephermes-3-llama-3-3b-preview-q4_k_m.gguf -c 2048
```
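In reasoning mode the model wraps its deliberation in `<think> </think>` tags, so downstream code typically wants to separate that internal monologue from the final answer. A minimal sketch (the `split_reasoning` helper is an illustrative name, not part of llama.cpp or any official tooling):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split model output into (reasoning, answer).

    Everything inside the first <think>...</think> span is treated as
    reasoning; whatever remains is the user-facing answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        # No reasoning block: the whole output is the answer.
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = (text[:match.start()] + text[match.end():]).strip()
    return reasoning, answer

out = "<think>2+2 is 4.</think>The answer is 4."
print(split_reasoning(out))  # ('2+2 is 4.', 'The answer is 4.')
```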