---
base_model:
- Casual-Autopsy/L3-Super-Nova-RP-8B
- allknowingroger/DeepHermes-3-Llama-3-slerp-8B
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- story-writing
- adventure
- llama-3
- rp
- nsfw
language:
- en
- zh
- ja
- fr
- ko
- de
- ru
---
| <img style="float:left;margin-right:0.4em" src="https://qu.ax/TxFLs.webp"> **For RP & story gen,<br/>Llama-3 chokes hard at 8B. It's super fast but quite crispy. When the task is armoring an F1 car into a rampaging tank, fine-tunes lose not only nuance but also fortune.<br/>Thank God the plasticity is solid.<br/>To some bean counters the result looks acceptable;<br/>to me, it's all hallucination...<br/><br/>I hated 99% of the experiments I went through,<br/>recognizing only [Casual-Autopsy/L3-Super-Nova-RP-8B](https://huggingface.co/Casual-Autopsy/L3-Super-Nova-RP-8B) and [allknowingroger/DeepHermes-3-Llama-3-slerp-8B](https://huggingface.co/allknowingroger/DeepHermes-3-Llama-3-slerp-8B) as the two lucky bastards.<br/>In multilingual scenarios they are miracle workers.<br/>And their baby proves stable.<br/><br/>Sometimes it behaves very much like its larger competitors, as long as you tighten the reins.<br/>Consider it a mobile assassin.<br/>Or a copycat criminal.** |
|:---:|
<small>*"In those dark times, we did 1, 3...and 70B models."*</small>
```yaml
models:
  - model: allknowingroger/DeepHermes-3-Llama-3-slerp-8B
  - model: Casual-Autopsy/L3-Super-Nova-RP-8B
base_model: allknowingroger/DeepHermes-3-Llama-3-slerp-8B
merge_method: slerp
parameters:
  t: [0.3, 0.6, 0.9, 0.6, 0.3]
  embed_slerp: true
dtype: bfloat16
```
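
The `t` list in the config is a five-point curve that gets stretched across the layer stack: layers near the ends stay close to the base model (`t = 0.3`), while the middle layers lean toward Super-Nova (`t = 0.9`). The sketch below illustrates the idea on plain Python lists; `layer_t` and its linear interpolation between anchors are illustrative assumptions about how the curve maps onto layers, not mergekit's exact internals.

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors."""
    n0 = math.sqrt(sum(x * x for x in v0))
    n1 = math.sqrt(sum(x * x for x in v1))
    # Angle between the vectors, clamped against float drift.
    dot = sum(a * b for a, b in zip(v0, v1)) / max(n0 * n1, eps)
    dot = max(-1.0, min(1.0, dot))
    omega = math.acos(dot)
    if omega < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

def layer_t(layer_idx, n_layers, anchors=(0.3, 0.6, 0.9, 0.6, 0.3)):
    """Map a layer index onto the t-curve by linearly
    interpolating between the configured anchor points
    (hypothetical mapping for illustration)."""
    pos = layer_idx / max(n_layers - 1, 1) * (len(anchors) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(anchors) - 1)
    frac = pos - lo
    return anchors[lo] * (1 - frac) + anchors[hi] * frac
```

With a 32-layer Llama-3-8B stack, `layer_t(0, 32)` and `layer_t(31, 32)` both land on 0.3 (mostly DeepHermes), and the midpoint layers sit near 0.9 (mostly Super-Nova), which is the shape the config's `t` schedule describes.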