Update README.md
README.md (CHANGED)
@@ -7,47 +7,35 @@ tags:
- gguf-my-repo
base_model: Nitral-AI/Poppy_Porpoise-1.4-L3-8B
---

Removed:

# Nitral-AI/Poppy_Porpoise-1.4-L3-8B-Q4_K_M-GGUF

This model was converted to GGUF format from [`Nitral-AI/Poppy_Porpoise-1.4-L3-8B`](https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.4-L3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.4-L3-8B) for more details on the model.

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama --hf-repo Nitral-AI/Poppy_Porpoise-1.4-L3-8B-Q4_K_M-GGUF --hf-file poppy_porpoise-1.4-l3-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Nitral-AI/Poppy_Porpoise-1.4-L3-8B-Q4_K_M-GGUF --hf-file poppy_porpoise-1.4-l3-8b-q4_k_m.gguf -c 2048
```
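
Once the server is running you can query it over HTTP. A minimal sketch, assuming the server's default address (127.0.0.1:8080) and llama.cpp's built-in `/completion` endpoint:

```bash
# Request a short completion from the running llama-server.
curl -s http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```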

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for example: `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./main --hf-repo Nitral-AI/Poppy_Porpoise-1.4-L3-8B-Q4_K_M-GGUF --hf-file poppy_porpoise-1.4-l3-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./server --hf-repo Nitral-AI/Poppy_Porpoise-1.4-L3-8B-Q4_K_M-GGUF --hf-file poppy_porpoise-1.4-l3-8b-q4_k_m.gguf -c 2048
```

Added:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/Boje781GkTdYgORTYGI6r.png)

# "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailored to each user's individual preferences.

# Note: This variant is an attempt to get something closer to 0.72 while maintaining the improvements of 1.30.

# Presets: [Presets in repo folder](https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.0-L3-8B/tree/main/Porpoise_1.0-Presets).

# If you want to use vision functionality: you must use the latest version of [Koboldcpp](https://github.com/LostRuins/koboldcpp) and load the specified **mmproj** file: [Llava MMProj](https://huggingface.co/Nitral-AI/Llama-3-Update-2.0-mmproj-model-f16).
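
A minimal sketch of loading the mmproj alongside the model in Koboldcpp; the `.gguf` file names below are placeholders for whatever you downloaded, not the exact artifact names:

```bash
# Hypothetical file names; point these at your local downloads.
python koboldcpp.py --model poppy_porpoise-1.4-l3-8b-q4_k_m.gguf \
  --mmproj llama-3-update-2.0-mmproj-model-f16.gguf
```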
### Configuration
The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: Nitral-AI/Pp-72xra1
        layer_range: [0, 32]
      - model: Nitral-AI/Poppy-1.35-Phase1
        layer_range: [0, 32]
merge_method: slerp
base_model: Nitral-AI/Pp-72xra1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
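
The `slices`/`slerp` layout above is mergekit's config format, so assuming mergekit was the tool used, a merge like this could be reproduced roughly as follows (the output directory is a placeholder):

```bash
pip install mergekit
# Save the YAML above as config.yaml, then run the merge.
mergekit-yaml config.yaml ./Poppy_Porpoise-merged --cuda
```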