# Model Card for ohno-8x7B-GGUF
- Model creator: [rAIfle](https://huggingface.co/rAIfle)
- Original model: [ohno-8x7B-fp16](https://huggingface.co/rAIfle/ohno-8x7B-fp16)
ohno-8x7B quantized with love.
Upload Notes: Wanted to give this one a spin after seeing its unique merge recipe; I was curious how the `Mixtral-8x7B-v0.1_case-briefs` component affected the output.
Starting out with Q5_K_M, taking requests for any other quants.
**All quantizations are based on the original fp16 model.**
Any feedback is greatly appreciated!
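If you want to try the quant locally, here is a minimal loading sketch using llama-cpp-python; the filename, context size, and generation parameters are assumptions, so adjust them to the actual file you download:

```python
# Minimal sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python) and a Q5_K_M file has been downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="ohno-8x7b.Q5_K_M.gguf",  # hypothetical filename; use the real one
    n_ctx=4096,                          # context window
    n_gpu_layers=-1,                     # offload all layers to GPU if available
)

out = llm("Write a short scene set in a rainy city.", max_tokens=128)
print(out["choices"][0]["text"])
```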
---
# Original Model Card
# ohno-8x7b
this... will either be my magnum opus... or terrible. no in-betweens!
Post-test verdict: It's mostly brain-damaged. Might be my settings or something, idk.
The `./output` mentioned below is my own merge, using an identical recipe to [Envoid/Mixtral-Instruct-ITR-8x7B](https://huggingface.co/Envoid/Mixtral-Instruct-ITR-8x7B).
# output_merge2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Envoid/Mixtral-Instruct-ITR-8x7B](https://huggingface.co/Envoid/Mixtral-Instruct-ITR-8x7B) as the base.
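For intuition: DARE randomly drops a fraction of each fine-tune's parameter delta from the base model and rescales the surviving entries so the expected delta is preserved; `density` in the config below is the keep fraction, and `weight` scales each model's contribution before the TIES-style sign election and summation. A toy sketch of the drop-and-rescale step (not mergekit's actual implementation):

```python
import torch

def dare_delta(delta: torch.Tensor, density: float, weight: float) -> torch.Tensor:
    """Toy DARE step: keep each delta entry with probability `density`,
    rescale survivors by 1/density to preserve the expected delta,
    then scale by the per-model merge `weight`."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return weight * (delta * mask) / density

# delta = finetuned_param - base_param; dare_ties then sign-elects and
# sums the processed deltas from all models onto the base parameters.
```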
### Models Merged
The following models were included in the merge:
* ./output/ + /ai/LLM/tmp/pefts/daybreak-peft/mixtral-8x7b
* [Envoid/Mixtral-Instruct-ITR-8x7B](https://huggingface.co/Envoid/Mixtral-Instruct-ITR-8x7B) + [Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
* [Envoid/Mixtral-Instruct-ITR-8x7B](https://huggingface.co/Envoid/Mixtral-Instruct-ITR-8x7B) + [retrieval-bar/Mixtral-8x7B-v0.1_case-briefs](https://huggingface.co/retrieval-bar/Mixtral-8x7B-v0.1_case-briefs)
* [NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss](https://huggingface.co/NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: ./output/+/ai/LLM/tmp/pefts/daybreak-peft/mixtral-8x7b
    parameters:
      density: 0.66
      weight: 1.0
  - model: Envoid/Mixtral-Instruct-ITR-8x7B+retrieval-bar/Mixtral-8x7B-v0.1_case-briefs
    parameters:
      density: 0.1
      weight: 0.25
  - model: Envoid/Mixtral-Instruct-ITR-8x7B+Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
    parameters:
      density: 0.66
      weight: 0.5
  - model: NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss
    parameters:
      density: 0.15
      weight: 0.3
merge_method: dare_ties
base_model: Envoid/Mixtral-Instruct-ITR-8x7B
dtype: float16
```
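Assuming a standard mergekit installation, a configuration like this is run with the `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yaml ./output_merge2` (the config filename here is illustrative).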