Triangle104 committed
Commit 51595e0 · verified · 1 Parent(s): d9cf3dd

Update README.md

Files changed (1):
  1. README.md +71 -15

README.md CHANGED
@@ -46,34 +46,90 @@ Refer to the [original model card](https://huggingface.co/DavidAU/L3.1-Dark-Reas
 ---
 Context : 128k.
 
-
 Required: Llama 3 Instruct template.
 
- "Dark Reasoning" is a variable control reasoning model that is uncensored and operates at all temps/settings and
- is for creative uses cases and general usage.
 
- This version's "thinking"/"reasoning" has been "darkened" by the
- original CORE model's DNA (see model tree) and will also be shorter
- and more compressed. Additional system prompts below to take this a lot
- further - a lot darker, a lot more ... evil.
 
- Higher temps will result in deeper, richer "thoughts"... and frankly more interesting ones too.
 
- The "thinking/reasoning" tech (for the model at this repo) is from the original Llama 3.1 "DeepHermes" model from NousResearch:
 
- [ https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview ]
 
- This version will retain all the functions and features of the
- original "DeepHermes" model at about 50%-67% of original reasoning
- power.
- Please visit their repo for all information on features, test results
- and so on.
 
 ---
 ## Use with llama.cpp
 
 ---
 Context : 128k.
 
 Required: Llama 3 Instruct template.
 
+ "Dark Reasoning" is a variable control reasoning model that is uncensored, operates at all temps/settings, and is intended for creative use cases as well as general usage.

+ This version's "thinking"/"reasoning" has been "darkened" by the original CORE model's DNA (see model tree) and will also be shorter and more compressed. Additional system prompts below to take this a lot further - a lot darker, a lot more ... evil.

+ Higher temps will result in deeper, richer "thoughts"... and frankly more interesting ones too.

+ The "thinking/reasoning" tech (for the model at this repo) is from the original Llama 3.1 "DeepHermes" model from NousResearch:

+ [ https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview ]

+ This version will retain all the functions and features of the original "DeepHermes" model at about 50%-67% of original reasoning power.

+ Please visit their repo for all information on features, test results and so on.
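
The Llama 3 Instruct template required above is the standard Llama 3.1 chat layout. A minimal sketch of assembling that prompt by hand, for runtimes that do not auto-apply the template embedded in the GGUF (the system and user strings are placeholder examples, not taken from this card):

```python
# Sketch: hand-assembled Llama 3 Instruct prompt (placeholder system/user text).
# Most llama.cpp front ends apply this template automatically from the GGUF metadata,
# so manual assembly is only needed for raw completion calls.
def llama3_instruct_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(llama3_instruct_prompt(
    "You are a creative writing assistant.",
    "Write a vivid 800-word horror scene set in an abandoned lighthouse.",
))
```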

+ KNOWN ISSUES:
+ -
+ - You may need to hit regen sometimes to get the thinking/reasoning to activate / to get a good "thinking block".

+ - Sometimes the 2nd or 3rd generation is the best version. A minimum of 5 generations is suggested for specific creative uses (see the sketch after this list).

+ - Sometimes the thinking block will end, and you need to manually prompt the model to "generate" the output.
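
Since activation is hit-and-miss and the best output is often not the first, the manual "regen" advice above can be automated as a simple best-of-N loop. A sketch using the llama-cpp-python bindings; the GGUF filename and the `<think>` tag check are assumptions, not something this card specifies:

```python
# Sketch: automate "hit regen a few times" as a best-of-N loop.
# Assumes the llama-cpp-python bindings and a local GGUF file (the filename is hypothetical).
from llama_cpp import Llama

llm = Llama(model_path="L3.1-Dark-Reasoning-Q4_K_M.gguf", n_ctx=4096)

prompt = "Write a 500-word horror scene set in a derelict subway station."
candidates = []
for _ in range(5):  # the card suggests a minimum of 5 generations for creative work
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": prompt}],
        temperature=0.6,
        repeat_penalty=1.05,
        max_tokens=2048,
    )
    candidates.append(out["choices"][0]["message"]["content"])

# Prefer generations where a thinking block actually activated
# (the exact tag form varies by app; "<think>" is an assumption here).
with_thoughts = [c for c in candidates if "<think>" in c.lower()]
print((with_thoughts or candidates)[0])
```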

+ USE CASES:
+ -
+ This model is for all use cases, but it is designed for creative use cases specifically.
+
+ This model can also be used for solving logic puzzles, riddles, and other problems with the enhanced "thinking" systems.
+
+ This model can also solve problems, riddles, and puzzles normally beyond the abilities of a Llama 3.1 model, thanks to the DeepHermes systems.
+
+ (It will not, however, have the same level of ability, due to the Dark Planet core.)
+
+ This model WILL produce HORROR / NSFW / uncensored content in EXPLICIT and GRAPHIC DETAIL.
+
+ TEMP/SETTINGS:
+ -
+ Set temp between 0 and .8; higher than this, the "think" functions will activate differently. The most "stable" temp seems to be .6, with a variance of +/-0.05. Lower it for more "logic" reasoning, raise it for more "creative" reasoning (max .8 or so). Also set context to at least 4096 to account for "thoughts" generation.
+
+ For temps of 1+, 2+, etc., thoughts will expand and become deeper and richer.
+
+ Set "repeat penalty" to 1.02 to 1.07 (recommended).
+
+ This model requires a Llama 3 Instruct and/or Command-R chat template (see notes on "System Prompt" / "Role" below), OR the standard "Jinja Autoloaded Template" (this is contained in the quant and will autoload).
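
These settings map directly onto llama.cpp-based runtimes. A minimal sketch with llama-cpp-python using the "stable" values above (the GGUF filename is a placeholder for whichever quant you downloaded):

```python
# Sketch: the recommended settings from this section, applied via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="L3.1-Dark-Reasoning-Q4_K_M.gguf",
    n_ctx=4096,  # at least 4096, to leave room for the "thoughts" block
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative writing assistant."},
        {"role": "user", "content": "Brainstorm 6 plots for a gothic horror novella, 100 words each."},
    ],
    temperature=0.6,      # "stable" value; lower for logic, up to ~0.8 for creative work
    repeat_penalty=1.05,  # within the recommended 1.02-1.07 range
    max_tokens=2048,
)
print(response["choices"][0]["message"]["content"])
```

When no explicit `chat_format` is passed, llama-cpp-python normally falls back to the chat template stored in the GGUF metadata, which lines up with the "Jinja Autoloaded Template" note above.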
+
+ PROMPTS:
+ -
+ If you enter a prompt without implied "step by step" requirements (i.e., generate a scene, write a story, give me 6 plots for xyz), "thinking" (one or more blocks) MAY activate AFTER the first generation (e.g., "Generate a scene" -> the scene will generate, followed by suggestions for improvement in "thoughts").
+
+ If you enter a prompt where "thinking" is stated or implied (i.e., a puzzle, riddle, "solve this", "brainstorm this idea", etc.), the "thoughts" process(es) will activate almost immediately. Sometimes you need to regen to get them to activate.
+
+ You will also get a lot of variation - some generations will continue the output, others will talk about how to improve it, and some (i.e., generation of a scene) will cause the characters to "reason" about the situation. In some cases, the model will ask you to continue the generation/thoughts too.
+
+ In some cases the model's "thoughts" may appear in the generation itself.
+
+ State the maximum word count IN THE PROMPT for best results, especially for activation of "thinking" (see examples below).
+
+ As an example, you may want to try your prompt once at "default" or "safe" temp settings, again at temp 1.2, and a third time at 2.5. This will give you a broad range of reasoning/thoughts/problem solving.
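
A short sweep is enough to compare those temperature ranges on one prompt. A sketch reusing the same llama-cpp-python setup (the filename is again a placeholder, and the prompt states a word-count target as advised above):

```python
# Sketch: run one prompt at a "safe" temp, then 1.2, then 2.5, and compare the reasoning depth.
from llama_cpp import Llama

llm = Llama(model_path="L3.1-Dark-Reasoning-Q4_K_M.gguf", n_ctx=4096)

prompt = "Write an 800-word scene of a storm hitting a coastal town. Brainstorm the approach first."
for temp in (0.6, 1.2, 2.5):
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": prompt}],
        temperature=temp,
        repeat_penalty=1.05,
        max_tokens=2048,
    )
    print(f"--- temperature {temp} ---")
    print(out["choices"][0]["message"]["content"][:600], "...")  # preview only
```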
+
+ GENERATION - THOUGHTS/REASONING:
+ -
+ It may take one or more regens for "thinking" to "activate" (depending on the prompt).
+
+ The model can generate a LOT of "thoughts". Sometimes the most interesting ones are 3, 4, 5 or more levels deep.
+
+ Many times the "thoughts" are unique and very different from one another.
+
+ Temp/rep pen settings can affect reasoning/thoughts too.
+
+ Change up or add directives/instructions, or increase the detail level(s) in your prompt, to improve reasoning/thinking.
+
+ Adding phrases such as "think outside the box", "brainstorm X number of ideas", or "focus on the most uncommon approaches" to your prompt can drastically improve your results.
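
Those directive phrases can simply be appended to a base prompt; a tiny illustrative sketch (the base prompt and idea count are arbitrary examples, not from this card):

```python
# Sketch: bolt the suggested directives onto a base prompt before sending it to the model.
base_prompt = "Give me plots for a haunted lighthouse story, 200 words each."
directives = [
    "Think outside the box.",
    "Brainstorm 10 ideas.",
    "Focus on the most uncommon approaches.",
]
print(base_prompt + " " + " ".join(directives))
```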
+
+ GENERAL SUGGESTIONS:
+ -
+ I have found that opening a "new chat" per prompt works best for "thinking/reasoning" activation, with temp .6 and rep pen 1.05... THEN "regen" as required.
+
+ Sometimes the model will really get completely unhinged and you need to manually stop it.
+
+ Depending on your AI app, "thoughts" may appear within "< THINK >" and "</ THINK >" tags AND/OR the AI will generate "thoughts" directly in the main output or later output(s).
+
+ Although quant Q4_K_M was used for testing/examples, higher quants will provide better generation / more sound "reasoning/thinking".
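
If your front end leaves the thinking tags in the output, they can be separated from the visible reply after generation. A sketch that assumes the tags arrive literally as `<think>`/`</think>`; spacing and casing vary by app, so the pattern is deliberately loose:

```python
# Sketch: split a generation into its "thinking" block and the visible reply.
# The tag spelling is an assumption; adjust the pattern to what your app actually emits.
import re

THINK_RE = re.compile(r"<\s*think\s*>(.*?)<\s*/\s*think\s*>", re.DOTALL | re.IGNORECASE)

def split_thoughts(text: str) -> tuple[str, str]:
    thoughts = "\n\n".join(m.strip() for m in THINK_RE.findall(text))
    answer = THINK_RE.sub("", text).strip()
    return thoughts, answer

thoughts, answer = split_thoughts(
    "<think>The scene needs a slow build before the reveal.</think>The fog rolled in at dusk..."
)
print(thoughts)
print(answer)
```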

 ---
 ## Use with llama.cpp