
Switching things up a bit since the last slew of models were all 12B, we now have NovaSpark! NovaSpark is an 8B model trained on GrimJim's abliterated version of arcee's SuperNova-lite. The hope is that abliteration removes some of the inherent refusals and censorship of the original model; however, I noticed that finetuning on GrimJim's model undid some of the abliteration, so abliteration will more than likely have to be reapplied to the resulting model to reinforce it.

Quants!

full / exl2 / gguf

Prompting

This model is trained on the Llama instruct template; the prompting structure goes a little something like this:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
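The template above can be assembled programmatically. A minimal sketch, assuming you are building the raw prompt string yourself rather than using a chat template from a library; the `build_prompt` helper name is illustrative, not part of the model card:

```python
# Illustrative helper that fills in the Llama instruct template shown above.
def build_prompt(system_prompt: str, prompt: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# The model's reply is then generated after the final assistant header.
text = build_prompt("You are a helpful assistant.", "Hello!")
```

Most inference frontends (and `tokenizer.apply_chat_template` in transformers) handle this formatting for you; the sketch is only useful when sending raw strings to a completion endpoint.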

Context and Instruct

This model is trained on llama-instruct; please use the matching Context and Instruct templates.

Current Top Sampler Settings

Smooth Creativity: Credit to Juelsman for researching this one!
Variant Chimera: Credit to Numbra!
Spicy_Temp
Violet_Twilight-Nitral-Special

Safetensors · Model size: 8.03B params · Tensor type: BF16

Model tree for Epiculous/NovaSpark

Finetuned (2): this model
Adapters: 1 model
Quantizations: 3 models

Datasets used to train Epiculous/NovaSpark

Collection including Epiculous/NovaSpark