parameters guide
samplers guide
model generation
role play settings
quant selection
arm quants
iq quants vs q quants
optimal model setting
gibberish fixes
coherence
instruction following
quality generation
chat settings
quality settings
llamacpp server
llamacpp
lmstudio
sillytavern
koboldcpp
backyard
ollama
model generation steering
steering
model generation fixes
text generation webui
ggufs
exl2
full precision
quants
imatrix
neo imatrix
llama
llama-3
gemma
gemma2
gemma3
llama-2
llama-3.1
llama-3.2
mistral
Mixture of Experts
mixture of experts
mixtral
Update README.md
README.md CHANGED
@@ -64,6 +64,44 @@ to MOE/Mixture of expert models - both GGUF and source.
[ https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts ]
<B>#4 How to Set the "System Role" / "System Prompt" / "System Message"</B>

For some of my models I have "system prompt(s)" to enhance model operation and/or to invoke and control reasoning/thinking in the model.

This section covers how to set these.

The System Role / System Prompt / System Message (called the "System Prompt" in this section) is "root access" to the model and controls its internal workings - both instruction following and output generation - and, in the case of "reasoning models", it also controls reasoning and turns reasoning on/off.

For reasoning models that require it:

If you do not set a "system prompt", reasoning/thinking will be OFF by default, and the model will operate like a normal LLM.

HOW TO SET:

Depending on your AI "app", you may have to copy/paste one of the "code(s)" into the "System Prompt" or "System Role" window to enable reasoning/thinking.
For "normal" models the "code(s)" can enhance operation of the model - this works for all models, and all model types. The instruction-following "power" of the model will directly influence the overall effect(s) of such code.

Likewise, the more parameters a model has, the stronger the effect.

In LMStudio, activate "Power User" or "Developer" mode to access the System Prompt box, then copy/paste the text into it.

In SillyTavern, go to the "template page" ("A"), activate "system prompt", and enter the text in the prompt box.

In Ollama, see [ https://github.com/ollama/ollama/blob/main/README.md ] and its instructions for setting the "system message"; a sketch of one scripted route follows below.
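One scripted route (an illustration, not the only method Ollama supports): its local REST API accepts a "system" message in a chat request. The port below is Ollama's default; the model name is a placeholder.

<PRE>
# Sketch: pass the "System Prompt" to a locally running Ollama instance via its
# chat endpoint. The model name is a placeholder for whichever model you pulled.
import requests

SYSTEM_PROMPT = "Paste the model's system prompt text here, formatting intact."

resp = requests.post(
    "http://localhost:11434/api/chat",              # Ollama's default local port
    json={
        "model": "your-model-name",
        "stream": False,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Hello - briefly introduce yourself."},
        ],
    },
    timeout=120,
)
print(resp.json()["message"]["content"])
</PRE>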
In Koboldcpp, load the model, start it, go to Settings -> select a template, and enter the text in the "sys prompt" box.

SYSTEM PROMPTS - Add Them / Edit Them:

When you copy/paste, PRESERVE the formatting, including line breaks.

If you want to edit/adjust these, only do so in NOTEPAD or directly in the LLM app.
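If you script the request rather than pasting by hand, one simple way to keep that formatting intact is to store the system prompt in a plain text file (edited in Notepad or similar) and read it verbatim. A small sketch; "system_prompt.txt" is a hypothetical file name.

<PRE>
# Sketch: keep the system prompt in a plain text file so line breaks survive
# editing and copy/paste. "system_prompt.txt" is a hypothetical file name.
from pathlib import Path

SYSTEM_PROMPT = Path("system_prompt.txt").read_text(encoding="utf-8")
# Pass SYSTEM_PROMPT as the "system" message exactly as read - do not strip or
# re-wrap it, since the line breaks are part of the prompt.
</PRE>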
---
<H2>MAIN DOCUMENT:</H2>