---
language:
- en
license: llama3
tags:
- Llama-3
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- roleplaying
- chat
- llama-cpp
- gguf-my-repo
base_model: NousResearch/Hermes-3-Llama-3.2-3B
widget:
- example_title: Hermes 3
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: Write a short story about Goku discovering kirby has teamed up with Majin
Buu to destroy the world.
library_name: transformers
model-index:
- name: Hermes-3-Llama-3.2-3B
results: []
---
# THOTH Experiment
Completed model @ https://huggingface.co/IntelligentEstate/Thoth_Warding-Llama-3B-IQ5_K_S-GGUF
![thoth2.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/5hpy4IHflPFigikhFPAKj.png)
# Experimental imatrix quant using the "THE_KEY" dataset with QAT
This model was converted to GGUF format from [`NousResearch/Hermes-3-Llama-3.2-3B`](https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B) using llama.cpp.
Refer to the [original model card](https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B) for more details on the model.
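For context, a typical llama.cpp conversion and imatrix quantization pass looks roughly like the sketch below. This is an assumed standard workflow, not the exact commands used for this repo: the calibration file, output names, and quant type are placeholders (the card only names the "THE_KEY" dataset).
```bash
# Assumed standard llama.cpp workflow; file names and quant type are placeholders.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# 1. Convert the Hugging Face checkpoint to a full-precision GGUF
python convert_hf_to_gguf.py /path/to/Hermes-3-Llama-3.2-3B \
  --outfile hermes-3b-f16.gguf --outtype f16

# 2. Compute an importance matrix over a calibration text
#    ("the_key.txt" stands in for the "THE_KEY" dataset named above)
./build/bin/llama-imatrix -m hermes-3b-f16.gguf -f the_key.txt -o imatrix.dat

# 3. Quantize using the imatrix (quant type shown is only illustrative)
./build/bin/llama-quantize --imatrix imatrix.dat \
  hermes-3b-f16.gguf thoth-3b-quant.gguf Q5_K_S
```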
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
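For example, both the CLI and the server can pull a GGUF directly from the Hub. The `<model-file>.gguf` value below is a placeholder; substitute the actual .gguf file name from the completed-model repo linked above.
```bash
# CLI: run a single prompt (file name is a placeholder)
llama-cli --hf-repo IntelligentEstate/Thoth_Warding-Llama-3B-IQ5_K_S-GGUF \
  --hf-file <model-file>.gguf \
  -p "Write a haiku about quantization."

# Server: expose a local HTTP endpoint with a 2048-token context
llama-server --hf-repo IntelligentEstate/Thoth_Warding-Llama-3B-IQ5_K_S-GGUF \
  --hf-file <model-file>.gguf \
  -c 2048
```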