Updating README.md to ensure it follows the original model card's prompt format
README.md CHANGED

@@ -19,14 +19,27 @@ tags:
 koboldcpp.exe Luna-AI-Llama2-Uncensored-ggmlv3.Q2_K --threads 6 --stream --smartcontext --unbantokens --noblas
 ```
 
-**
+**Prompt format (refer to the original model for additional details):**
 ```
-### Instruction:
-You're a digital assistant designed to provide helpful and accurate responses to the user.
+USER: {input}
 
-### Input:
-{input}
-
-### Response:
+ASSISTANT:
 ```
 
+<details>
+
+<summary>(Clickable) I tested the model with the following format. This format was specified in the older version of the model's card. It works, but I'm leaving it behind the spoiler tag, as it's better to follow the format above to ensure the model works as intended.</summary>
+
+**Tested with the following format (refer to [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) for additional details):**
+
+```
+### Instruction:
+You're a digital assistant designed to provide helpful and accurate responses to the user.
+
+### Input:
+{input}
+
+### Response:
+```
+
+</details>
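
As a quick sketch of how the USER/ASSISTANT prompt format from this change is assembled (a hypothetical `build_prompt` helper for illustration only, not part of the model card or of koboldcpp):

```python
# Minimal sketch of the USER/ASSISTANT prompt format shown in the diff above.
# `build_prompt` is a hypothetical helper name, not an API from koboldcpp.
def build_prompt(user_input: str) -> str:
    """Wrap a user message in the model's expected chat format."""
    return f"USER: {user_input}\n\nASSISTANT:"

prompt = build_prompt("Why is the sky blue?")
# The model then generates its completion after the trailing "ASSISTANT:".
print(prompt)
```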