BRAINSTORM: L3-SthenoMaidBlackroot-8B-V1 (now at 8.9B)

This repo contains GGUF quants of L3-SthenoMaidBlackroot-8B-V1 (now at 8.9B), expanded using the "Brainstorm 5x" method of augmenting reasoning in an LLM to increase its performance at the core level for ANY creative use case(s).

This specific version has calibrations that allow it to exceed the logic-solving abilities of the original "L3-SthenoMaidBlackroot-8B-V1".

The BRAINSTORM process was developed by David_AU.

Some of the core principles behind this process are discussed in the scientific paper "Progressive LLaMA with Block Expansion". However, I went in a completely different direction from what is outlined in that paper.

What is "Brainstorm" ?

The reasoning center of an LLM is taken apart, reassembled, and expanded by 5x. These centers are then individually calibrated, and they also interact with each other, which introduces subtle changes into the reasoning process. The calibrations further adjust (dial up or down) these changes. The number of centers (5x, 10x, etc.) allows more "tuning points" to further customize how the model reasons, so to speak.

The core aim of this process is to increase the model's detail, its conceptual connection to the "world", its general concept connections, prose quality, and prose length, without affecting instruction following. It also enhances creative use case(s) of any kind, including "brainstorming", creative art form(s), and similar uses.

Here are some of the enhancements this process brings to the model's performance:

  • Prose generation seems more focused on the moment to moment.
  • Sometimes there will be "preamble" and/or foreshadowing present.
  • Fewer or no clichés.
  • Better overall prose and/or more complex / nuanced prose.
  • A greater sense of nuance on all levels.
  • Coherence is stronger.
  • Description is more detailed, and connected closer to the content.
  • Similes and metaphors are stronger and better connected to the prose, story, and characters.
  • Sense of "there" / in the moment is enhanced.
  • Details are more vivid, and there are more of them.
  • Prose generation length can be long to extreme.
  • Emotional engagement is stronger.
  • The model will take FEWER liberties vs a normal model: It will follow directives more closely but will "guess" less.
  • The MORE instructions and/or details you provide the more strongly the model will respond.
  • Depending on the model, its "voice" may be more "human" than the original model's.

Other "lab" observations:

  • This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true!
  • However, a change in "IQ" was not a priority; it was not tested or calibrated for.
  • From lab testing, the model seems to ponder and consider its output more carefully, roughly speaking.
  • You could say this process sharpens the model's focus on its task(s) at a deeper level.

The process modifies the model at the root level - the source-files level. The model can then be quantized as GGUF, EXL2, AWQ, etc.

Other technologies developed by David_AU like "Ultra" (precision), "Neo Imatrix" (custom imatrix datasets), and "X-quants" (custom application of the imatrix process) can further enhance the performance of the model along with the "Brainstorm" process.

The "Brainstorm" process has been tested on multiple Llama2, Llama3, and Mistral models of various parameter sizes, as well as on "root" models like "Llama3 Instruct" and "Mistral Instruct", and on "merged" / "fine-tuned" models.

Original Model:

For original model specifications, usage information, and other important details, please see the following (this covers the models used in "L3-SthenoMaidBlackroot-8B-V1"):

[ https://huggingface.co/DavidAU/L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-GGUF ]

and the original model page:

Special thanks to the model creators at BLUUWHALE for making such a fantastic model:

[ https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1 ]

Please report any issue(s) and/or feedback via the "Community tab".

This is a Llama3 model. It requires the Llama3 template, may work with other template(s), and has a maximum context of 131k.

Here is the standard LLAMA3 template:

{
  "name": "Llama 3",
  "inference_params": {
    "input_prefix": "<|start_header_id|>user<|end_header_id|>\n\n",
    "input_suffix": "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
    "pre_prompt": "You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.",
    "pre_prompt_prefix": "<|start_header_id|>system<|end_header_id|>\n\n",
    "pre_prompt_suffix": "<|eot_id|>",
    "antiprompt": [
      "<|start_header_id|>",
      "<|eot_id|>"
    ]
  }
}
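As a sketch of how these template fields combine at inference time (most front-ends, e.g. LM Studio, do this assembly for you; the field names below mirror the JSON above, and the assembly order follows common Llama3 chat formatting rather than any behavior specific to this model):

```python
# Assemble a single-turn Llama3 prompt from the template fields above.
# Illustrative only - inference front-ends normally handle this for you.
template = {
    "input_prefix": "<|start_header_id|>user<|end_header_id|>\n\n",
    "input_suffix": "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
    "pre_prompt": "You are a helpful, smart, kind, and efficient AI assistant. "
                  "You always fulfill the user's requests to the best of your ability.",
    "pre_prompt_prefix": "<|start_header_id|>system<|end_header_id|>\n\n",
    "pre_prompt_suffix": "<|eot_id|>",
}

def build_prompt(user_message: str, t: dict = template) -> str:
    """Wrap the system prompt and user message in Llama3 header tokens."""
    system_block = t["pre_prompt_prefix"] + t["pre_prompt"] + t["pre_prompt_suffix"]
    user_block = t["input_prefix"] + user_message + t["input_suffix"]
    # The assembled string ends at the assistant header, so generation
    # continues as the assistant's reply.
    return system_block + user_block

prompt = build_prompt("Start a 1000 word scene (vivid horror, 1st person) ...")
```

The "antiprompt" entries in the template are stop strings: generation should halt when the model emits "<|eot_id|>" or a new "<|start_header_id|>".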

Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers

This is a "Class 2" model:

For all settings used for this model (including specifics for its "class"), example generation(s), and an advanced settings guide (which often addresses model issue(s) and covers methods to improve performance for chat, roleplay, and other use case(s)), please see:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]


Optional Enhancement:

The following can be used in place of the "system prompt" or "system role" to further enhance the model.

It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along. In this case the enhancements do not have as strong an effect as when used in the "system prompt" or "system role".

Copy and paste it EXACTLY as noted; DO NOT line-wrap or break the lines, and maintain the carriage returns exactly as presented.

Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.

Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)

[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)

Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.

You do not need to use this; it is presented only as an additional enhancement, which seems to help scene-generation and scene-continuation functions.
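If your front-end exposes a chat API rather than raw template fields, the enhancement above can be supplied as the system message in an OpenAI-style messages list. This is a minimal sketch; the "role"/"content" field names follow the common chat-payload convention, not anything specific to this model, and the ENHANCEMENT constant below is truncated - paste the full block above in place of the elided skillsets:

```python
# Minimal sketch: place the enhancement text in the "system" role of an
# OpenAI-style chat payload. ENHANCEMENT is truncated here; paste the full
# enhancement block (skillsets and critical instructions) verbatim.
ENHANCEMENT = (
    "Below is an instruction that describes a task. Ponder each user "
    "instruction carefully, and use your skillsets and critical instructions "
    "to complete the task to the best of your abilities.\n\n"
    "Here are your skillsets:\n"
    # ... full [MASTERSTORY] and [*DialogWrt] blocks go here ...
)

messages = [
    {"role": "system", "content": ENHANCEMENT},
    {"role": "user", "content": "Start a 1000 word scene (vivid horror, 1st person) ..."},
]
```

If you instead keep it at the start of a new chat, it goes in the first "user" message and must be re-sent (or pinned) as the conversation grows, per the note above.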

This enhancement WAS NOT used to generate the examples below.

EXAMPLES PROMPTS and OUTPUT:

Examples were created using quant Q4_K_M, "temp=0", minimal parameters, and the "LLAMA3" template.

Model has been tested with "temp" from ".1" to ".8".

Below are the least creative outputs; the prompt is in BOLD.

Higher quants will have more detail and better-quality output.


WARNING: NSFW. Vivid prose. Visceral Details. Violence. Graphic HORROR. Swearing. UNCENSORED.


Start a 1000 word scene (vivid horror, 1st person, include thoughts) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...

The skyscraper swayed, as I watched the window in front of me on the 21st floor explode. Glass shards rained down like a deadly hailstorm, shattering the relative calm of the office. My heart pounded in my chest, a primal scream building in my throat.

I'd been working late, trying to meet an impossible deadline for a client I'd never even met face-to-face. The fluorescent lights hummed overhead, casting an eerie pallor over the scene before me. My mind reeled as I tried to process what I was seeing - the window, the glass, the bloodied figure crumpled on the floor beyond the shattered pane.

"Someone's been hurt," I thought, my voice shaking as I spoke the words out loud. "Oh God, someone's been killed."

I'd always known this building was a potential target, given its status as a hub for high-stakes financial transactions. But to actually see it happen, to be so close to the horror...it was almost too much to bear.

My eyes darted around the office, taking in the chaos. My coworkers were frozen in shock, their faces pale and wide-eyed. The air was thick with the acrid scent of smoke and something else - something metallic and sickeningly sweet. I gagged, my stomach churning.

"Someone needs to call 911," I thought, my voice barely above a whisper. "We need an ambulance here, now."

I reached for my phone, my hands trembling as I punched in the numbers. My mind raced with worst-case scenarios - what if there were more explosions? What if the building collapsed? What if...what if...

The 911 operator's voice cut through my thoughts, calm and professional even in the face of such horror. "What's your emergency?" she asked.

I took a deep breath, trying to steady myself. "There's been an explosion on the 21st floor of the downtown financial tower," I said, my words tumbling out in a rush. "Someone's been injured, maybe killed. We need an ambulance and the police here as soon as possible."

The operator's response was immediate and reassuring. "I've dispatched emergency services to your location. Please stay on the line and keep me updated if the situation changes."

I nodded, even though she couldn't see me. I felt a surge of relief at her words - help was on the way. But as I looked around the office again, I realized that we were all in grave danger. The building's structural integrity was compromised now, and who knew how much more damage could be done?

"Everyone needs to get out of here," I thought, my voice rising with urgency. "We need to evacuate the building, now."

I turned to face my coworkers, my eyes locking onto theirs. "We have to get out of here," I said, my words firm and commanding. "The building's not safe. We need to go, now."

Slowly, almost as if in a daze, they began to move towards the door. I followed close behind, my heart pounding in my ears. As we reached the stairwell, I heard the wail of sirens growing louder with every passing second.

We were going to make it out alive, I told myself. Help was almost here. But as I looked back over my shoulder at the shattered window and the bloodied figure on the floor, I couldn't shake the feeling that we'd barely escaped something far more sinister than a simple explosion...something that would haunt me for the rest of my days.
