---
base_model:
- princeton-nlp/Llama-3-Instruct-8B-SimPO
- Nitral-AI/Poppy_Porpoise-0.72-L3-8B
- Blackroot/Llama-3-8B-Abomination-LORA
- jondurbin/bagel-8b-v1.0
- lodrick-the-lafted/Fuselage-8B
- lodrick-the-lafted/Limon-8B
- Nitral-AI/Hathor_Stable-v0.2-L3-8B
- jeiku/Orthocopter_8B
- mpasila/Llama-3-LimaRP-Instruct-LoRA-8B
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- Vdr1/L3-8B-Sunfall-v0.3-Stheno-v3.2
- HPAI-BSC/Llama3-Aloe-8B-Alpha
- maldv/llama-3-fantasy-writer-8b
- ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA
- v000000/component
- cgato/L3-TheSpice-8b-v0.8.3
library_name: transformers
tags:
- mergekit
- merge
- llama
- not-for-all-audiences
---
### Llama-3-8B-MegaSerpentine
RP / storywriting / general-purpose model.
```
,=e
 `-.
   _,-'
,=e
 `-.
   _,-'
,=e
 `-.
   _,-'
```
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/cDHqxVwPfzlc8FCD8MCUl.png)
# Thanks to mradermacher for the quants!
* [GGUF](https://huggingface.co/mradermacher/L3-8B-MegaSerpentine-GGUF)
* [GGUF imatrix](https://huggingface.co/mradermacher/L3-8B-MegaSerpentine-i1-GGUF)
# Quants
* [GGUF imatrix: Q8_0, Q6_K, Q5_K_S, IQ4_XS](https://huggingface.co/v000000/L3-8B-MegaSerpentine-imat-GGUFs)
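To try one of the quants quickly, a rough sketch with `llama-cpp-python` is below. The exact `.gguf` filename is an assumption; check the quant repo's file list for the split you want.

```python
# Sketch: download one GGUF quant and chat with it via llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/L3-8B-MegaSerpentine-GGUF",
    filename="L3-8B-MegaSerpentine.Q6_K.gguf",  # assumed filename, verify in the repo
)
llm = Llama(model_path=gguf_path, n_ctx=8192)
out = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You are a creative storyteller."},
    {"role": "user", "content": "Write the opening of a sea-serpent tale."},
])
print(out["choices"][0]["message"]["content"])
```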
# Merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [princeton-nlp/Llama-3-Instruct-8B-SimPO](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO) as a base.
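
For a rough intuition of what Model Stock does per layer: it averages the fine-tuned weights, then interpolates that average back toward the base model with a ratio derived from the angle between the task vectors. The sketch below is a simplified illustration of that formula, not mergekit's exact implementation.

```python
# Conceptual sketch of Model Stock interpolation for a single weight tensor.
import torch
import torch.nn.functional as F

def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    n = len(finetuned)
    deltas = [w - base for w in finetuned]  # task vectors relative to the base model
    # Estimate cos(theta) as the average pairwise cosine similarity of the task vectors.
    cos_vals = [
        F.cosine_similarity(deltas[i].flatten(), deltas[j].flatten(), dim=0)
        for i in range(n) for j in range(i + 1, n)
    ]
    cos_theta = torch.stack(cos_vals).mean()
    # Interpolation ratio from the paper: t = N*cos / ((N-1)*cos + 1).
    t = n * cos_theta / ((n - 1) * cos_theta + 1)
    w_avg = torch.stack(finetuned).mean(dim=0)  # simple average of the fine-tuned weights
    return t * w_avg + (1 - t) * base           # pull the average back toward the base
```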
### Models Merged
The following models were included in the merge:
* [Nitral-AI/Poppy_Porpoise-0.72-L3-8B](https://huggingface.co/Nitral-AI/Poppy_Porpoise-0.72-L3-8B) + [Blackroot/Llama-3-8B-Abomination-LORA](https://huggingface.co/Blackroot/Llama-3-8B-Abomination-LORA)
* [jondurbin/bagel-8b-v1.0](https://huggingface.co/jondurbin/bagel-8b-v1.0)
* [lodrick-the-lafted/Fuselage-8B](https://huggingface.co/lodrick-the-lafted/Fuselage-8B)
* [lodrick-the-lafted/Limon-8B](https://huggingface.co/lodrick-the-lafted/Limon-8B)
* [Nitral-AI/Hathor_Stable-v0.2-L3-8B](https://huggingface.co/Nitral-AI/Hathor_Stable-v0.2-L3-8B)
* [jeiku/Orthocopter_8B](https://huggingface.co/jeiku/Orthocopter_8B) + [mpasila/Llama-3-LimaRP-Instruct-LoRA-8B](https://huggingface.co/mpasila/Llama-3-LimaRP-Instruct-LoRA-8B)
* [Hastagaras/Jamet-8B-L3-MK.V-Blackroot](https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot)
* [Vdr1/L3-8B-Sunfall-v0.3-Stheno-v3.2](https://huggingface.co/Vdr1/L3-8B-Sunfall-v0.3-Stheno-v3.2)
* [HPAI-BSC/Llama3-Aloe-8B-Alpha](https://huggingface.co/HPAI-BSC/Llama3-Aloe-8B-Alpha)
* [maldv/llama-3-fantasy-writer-8b](https://huggingface.co/maldv/llama-3-fantasy-writer-8b) + [ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA](https://huggingface.co/ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA)
* [v000000/sauce](https://huggingface.co/v000000/sauce)
* [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Nitral-AI/Poppy_Porpoise-0.72-L3-8B+Blackroot/Llama-3-8B-Abomination-LORA
- model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- model: cgato/L3-TheSpice-8b-v0.8.3
- model: Vdr1/L3-8B-Sunfall-v0.3-Stheno-v3.2
- model: jondurbin/bagel-8b-v1.0
- model: lodrick-the-lafted/Fuselage-8B
- model: HPAI-BSC/Llama3-Aloe-8B-Alpha
- model: maldv/llama-3-fantasy-writer-8b+ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA
- model: lodrick-the-lafted/Limon-8B
- model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
- model: princeton-nlp/Llama-3-Instruct-8B-SimPO
- model: jeiku/Orthocopter_8B+mpasila/Llama-3-LimaRP-Instruct-LoRA-8B
- model: jondurbin/bagel-8b-v1.0
- model: v000000/sauce
base_model: princeton-nlp/Llama-3-Instruct-8B-SimPO
merge_method: model_stock
dtype: bfloat16
```
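The config above can be fed to mergekit to reproduce the merge. For simply using the result, a minimal loading snippet in the same `bfloat16` dtype looks like this; the repo id is a guess, adjust it to wherever the weights live.

```python
# Assumed loading snippet for the merged weights; repo id is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "v000000/L3-8B-MegaSerpentine"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
```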
# Prompt Template:
```bash
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
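Assuming the merged tokenizer keeps the stock Llama 3 Instruct chat template, the same prompt format can be produced with `apply_chat_template` instead of building it by hand; the repo id below is an assumption.

```python
# Sketch: render the Llama 3 Instruct prompt format from a message list.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("v000000/L3-8B-MegaSerpentine")  # assumed repo id
messages = [
    {"role": "system", "content": "You are a creative storyteller."},
    {"role": "user", "content": "Continue the scene."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should match the <|start_header_id|> layout shown above
```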