---
license: llama3
language:
- en
tags:
- moe
---
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/oHq8uPY_H6SC-sfA-Wx7L.png)
|
|
|
> [!IMPORTANT]
> Check out [ChaoticSoliloquy-v1.5-4x8B](https://huggingface.co/xxx777xxxASD/L3-ChaoticSoliloquy-v1.5-4x8B)
|
|
|
Experimental RP-oriented MoE. The idea was to get a model equal to or better than Mixtral 8x7B and its finetunes at RP/ERP tasks.
|
|
|
[GGUF, Exl2](https://huggingface.co/collections/xxx777xxxASD/chaoticsoliloquy-4x8b-6628a759b5a60d8d3f51ed62)
|
|
|
### Llama 3 ChaoticSoliloquy-4x8B
|
```
base_model: jeiku_Chaos_RP_l3_8B
gate_mode: random        # router weights are initialized randomly rather than computed from prompt hidden states
dtype: bfloat16
experts_per_token: 2     # two experts are active for each token
experts:
- source_model: ChaoticNeutrals_Poppy_Porpoise-v0.6-L3-8B
- source_model: jeiku_Chaos_RP_l3_8B
- source_model: openlynn_Llama-3-Soliloquy-8B
- source_model: Sao10K_L3-Solana-8B-v1
```
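
A minimal sketch for loading the merged model with `transformers`. The repo id below is an assumption, not confirmed by this card; substitute the actual repository name.

```python
# Minimal loading sketch; the repo id is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xxx777xxxASD/L3-ChaoticSoliloquy-4x8B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)
```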
|
|
|
## Models used
|
|
|
- [ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B)
- [jeiku/Chaos_RP_l3_8B](https://huggingface.co/jeiku/Chaos_RP_l3_8B)
- [openlynn/Llama-3-Soliloquy-8B](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B)
- [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1)
|
|
|
## Vision
|
|
|
[llama3_mmproj (OLD)](https://huggingface.co/ChaoticNeutrals/Llava_1.5_Llama3_mmproj)

[llama3_mmproj (NEW)](https://huggingface.co/ChaoticNeutrals/LLaVA-Llama-3-8B-mmproj)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/yv4C6NalqORLjvY3KKZk8.png)
|
|
|
|
|
## Prompt format: Llama 3
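
A minimal sketch of building a Llama 3 style prompt via the tokenizer's chat template, assuming the tokenizer ships the standard Llama 3 instruct template (the repo id is the same assumption as above):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xxx777xxxASD/L3-ChaoticSoliloquy-4x8B")  # assumed repo id

messages = [
    {"role": "system", "content": "You are a roleplay assistant."},
    {"role": "user", "content": "Hello!"},
]

# With the standard Llama 3 instruct template this produces:
# <|begin_of_text|><|start_header_id|>system<|end_header_id|>
#
# ...<|eot_id|><|start_header_id|>user<|end_header_id|>
#
# ...<|eot_id|><|start_header_id|>assistant<|end_header_id|>
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```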