---
base_model:
- Casual-Autopsy/L3-Super-Nova-RP-8B
- allknowingroger/DeepHermes-3-Llama-3-slerp-8B
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- story-writing
- adventure
- llama-3
- rp
- nsfw
language:
- en
- zh
- ja
- fr
- ko
- de
- ru
---
| **For RP & story gen, Llama-3 choked hard at 8B. It's super fast but quite crispy. <br> When the task is armoring an F1 car into a rampaging tank, fine-tunes suffer not only in nuance but also in fortune. <br> Thank God the plasticity is solid. <br> To some bean counters the result seems acceptable; to me, it's all hallucination... <br> I hated 99% of the experience I went through, and only recognized [Casual-Autopsy/L3-Super-Nova-RP-8B](https://huggingface.co/Casual-Autopsy/L3-Super-Nova-RP-8B) and [allknowingroger/DeepHermes-3-Llama-3-slerp-8B](https://huggingface.co/allknowingroger/DeepHermes-3-Llama-3-slerp-8B) as the two lucky bastards. <br> In multilingual scenarios they are miracle workers. <br> And their baby has proven stable. <br> Sometimes it behaves very much like its larger competitors, as long as you tighten the reins. <br> Consider it a mobile assassin. Or a copycat criminal.** |
|:---:|
*"In those dark times, we did 1, 3...and 70B models."*
```yaml
models:
  - model: allknowingroger/DeepHermes-3-Llama-3-slerp-8B
  - model: Casual-Autopsy/L3-Super-Nova-RP-8B
base_model: allknowingroger/DeepHermes-3-Llama-3-slerp-8B
merge_method: slerp
parameters:
  # Interpolation factor varied across layer groups, peaking mid-stack.
  t: [0.3, 0.6, 0.9, 0.6, 0.3]
  # Also interpolate the embedding layers with SLERP.
  embed_slerp: true
dtype: bfloat16
```
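
The config above can be reproduced by saving it as `config.yml` and running it through mergekit's `mergekit-yaml` entry point (e.g. `mergekit-yaml config.yml ./merged`). Below is a minimal inference sketch with Transformers; the repo id is a placeholder assumption, not the actual repository name for this merge, so point it at wherever the merged weights live.

```python
# Minimal inference sketch. REPO_ID is a placeholder -- replace it with the
# actual repository (or local path) holding the merged weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO_ID = "your-username/your-merge-repo"  # placeholder, not the real repo name

tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForCausalLM.from_pretrained(
    REPO_ID,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

# Llama-3-style chat formatting via the tokenizer's built-in chat template.
messages = [
    {"role": "system", "content": "You are a vivid roleplay narrator."},
    {"role": "user", "content": "Open a scene in a rain-soaked neon city."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

bfloat16 here simply mirrors the merge dtype; drop `device_map="auto"` and move the model explicitly if you prefer manual placement.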