---
license: cc-by-nc-4.0
base_model: envoid/Verdict-8x7B
inference: false
model_creator: Envoid
model_type: mixtral
tags:
- gguf
- not-for-all-audiences
---
# *DUN DUN*
<i>Update 2/27: New I-quants uploaded. YMMV on IQ1_S, but I was able to get it to run on a 16 GB RAM laptop with no dedicated GPU.</i>
# Verdict-8x7B-GGUF
- Model creator: [Envoid](https://huggingface.co/envoid)
- Original model: [Verdict-8x7B](https://huggingface.co/Envoid/Verdict-8x7B)
Quantized from fp16 with love.
Uploading Q8_0 and Q5_K_M for starters; other sizes are available upon request.
See original model card details below.
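If you want a quick smoke test of a downloaded quant, a minimal llama-cpp-python sketch along the following lines should work on CPU alone; the GGUF filename and generation settings are assumptions on my part, so adjust them to whichever file you grab.
```
# Minimal CPU-only smoke test using llama-cpp-python (pip install llama-cpp-python).
# The model_path below is a placeholder; point it at whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./Verdict-8x7B.Q5_K_M.gguf",  # assumed local filename
    n_ctx=4096,       # context window
    n_gpu_layers=0,   # 0 = run entirely on CPU
)

out = llm("[INST]Briefly introduce yourself.[/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```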
---
# Warning: this model may output mature or disturbing content
![](https://files.catbox.moe/q2rrja.jpg)
## Verdict-8x7B
This model has been through so many merge steps (which I failed to keep track of due to constantly needing to shift drive space around) that I will simply give credit to all of its component models.
- [retrieval-bar/Mixtral-8x7B-v0.1_case-briefs](https://huggingface.co/retrieval-bar/Mixtral-8x7B-v0.1_case-briefs) (the version used is no longer available; a newer version is published at the linked repo)
- [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)
- [Envoid/Augmentasanguis-8x7B](https://huggingface.co/Envoid/Augmentasanguis-8x7B)
- [Envoid/CATA-8x7B](https://huggingface.co/Envoid/CATA-8x7B)
- [crestf411/daybreak-mixtral-8x7b-hf](https://huggingface.co/crestf411/daybreak-mixtral-8x7b-hf)
- [IBI-CAAI/MELT-Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/IBI-CAAI/MELT-Mixtral-8x7B-Instruct-v0.1)
- an unreleased experimental model that gets mentioned quite frequently
### Many of the models were included multiple times across multiple intermediate steps, using a combination of slerp, linear, dare_ties, and task_arithmetic merge methods.
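As a rough, per-tensor illustration of what those merge methods mean (a toy sketch only; the real merges were done with standard merge tooling, and none of these exact functions or parameters come from the original card):
```
# Toy, per-tensor illustration of the merge methods named above. This is NOT
# the actual recipe used for Verdict-8x7B; it only shows what the terms mean.
import torch

def linear(a: torch.Tensor, b: torch.Tensor, t: float = 0.5) -> torch.Tensor:
    """Plain weighted average of two weight tensors."""
    return (1 - t) * a + t * b

def slerp(a: torch.Tensor, b: torch.Tensor, t: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
    """Spherical interpolation: follow the arc between the two flattened
    tensors instead of the straight line between them."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    cos_omega = torch.clamp(
        (a_flat / (a_flat.norm() + eps)) @ (b_flat / (b_flat.norm() + eps)), -1.0, 1.0
    )
    omega = torch.arccos(cos_omega)
    if omega.abs() < eps:  # near-parallel tensors: fall back to linear
        return linear(a, b, t)
    so = torch.sin(omega)
    mixed = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)

def task_arithmetic(base: torch.Tensor, tuned: torch.Tensor, scale: float = 1.0) -> torch.Tensor:
    """Add a scaled 'task vector' (tuned - base) back onto the base weights."""
    return base + scale * (tuned - base)

def dare(base: torch.Tensor, tuned: torch.Tensor, drop_p: float = 0.9) -> torch.Tensor:
    """DARE: randomly drop most of the task vector and rescale what remains.
    The dare_ties method additionally resolves sign conflicts across models
    (TIES-style) before combining several task vectors; that step is omitted here."""
    delta = tuned - base
    keep = (torch.rand_like(delta) > drop_p).to(delta.dtype)
    return base + keep * delta / (1.0 - drop_p)
```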
## The model was tested in Q8 GGUF form (not included, as it was made from an old pull of llama.cpp that I'm too lazy to update)
# Warning: This model's alignment is heavily diminished.
It responds to Mixtral instruct formatting.
# Example:
```
[INST]Write me a haiku about violence.[/INST]
Fist clenched in anger,
Echoes of pain in the air,
Silence after storm.
```
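For anyone scripting against it, one way to apply that instruct format programmatically looks roughly like this; it's a sketch that assumes the llama-cpp-python setup shown earlier in this card, and the sampling settings are arbitrary.
```
# Sketch: wrapping a user request in the Mixtral instruct template and
# generating with the llama-cpp-python `llm` object created earlier.
def mixtral_instruct(user_message: str) -> str:
    # Mixtral expects the instruction wrapped in [INST] ... [/INST]
    return f"[INST]{user_message}[/INST]"

prompt = mixtral_instruct("Write me a haiku about violence.")
response = llm(prompt, max_tokens=64, temperature=0.8, stop=["[INST]"])
print(response["choices"][0]["text"].strip())
```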
It can be a bit unruly in roleplay, sometimes to its advantage, sometimes to its detriment.
It's a perfectly usable instruct model and good for creative writing via instruct; its diminished alignment allows it to respond to requests for more nuanced scenes.
All in all, I'm happy enough with the results that I plan to use it as a daily driver for a while to see what cracks show from long-term use.