---
base_model: rAIfle/Sloppy-Wingman-8x7B-hf
inference: false
model_creator: rAIfle
model_name: Sloppy-Wingman-8x7B-hf
model_type: mixtral
tags:
- gguf
---

# Model Card for Sloppy-Wingman-8x7B-GGUF
- Model creator: [rAIfle](https://huggingface.co/rAIfle)
- Original model: [Sloppy-Wingman-8x7B-hf](https://huggingface.co/rAIfle/Sloppy-Wingman-8x7B-hf)

# Sloppy-Wingman-8x7B-GGUF 

Quantized from fp16 with love.

Uploading Q5_K_M for starters; other sizes available upon request.

*Update: Q8_0 now uploaded per request. Shooting to have an imatrix.dat file and some IQ quantizations up in the coming days as a convenience for users with less VRAM available.*
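
If you'd rather grab a quant from code, something like the sketch below should do it. The repo id and filename are placeholders, not the actual paths here; check this repo's file list for the real ones.

```python
# Sketch: download a GGUF with huggingface_hub.
# The repo_id and filename are placeholders, not the actual paths in this repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="your-user/Sloppy-Wingman-8x7B-GGUF",  # placeholder repo id
    filename="sloppy-wingman-8x7b.Q5_K_M.gguf",    # placeholder filename
)
print(path)  # local path to the downloaded GGUF
```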

See original model card details below.


---
# Sloppy-Wingman-8x7B-hf
![Sloppy Wingman](https://files.catbox.moe/7ay3me.png)

Big slop, good model.
It runs better at a slightly higher temp (around 1.1) than usual, along with 0.05 MinP and 0.28 snoot.
Bog-standard ChatML works best imo, but Alpaca and Mixtral formats also work to some degree.
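
A rough usage sketch with llama-cpp-python, wiring in the settings above. The model path and context/offload values are assumptions, `min_p` and `chat_format` follow recent llama-cpp-python versions, and the snoot/smoothing setting has no direct knob here, so it's left out:

```python
# Minimal sketch: run a GGUF quant with llama-cpp-python using the
# suggested samplers (temp ~1.1, MinP 0.05) and ChatML formatting.
from llama_cpp import Llama

llm = Llama(
    model_path="./sloppy-wingman-8x7b.Q5_K_M.gguf",  # assumed local filename
    n_ctx=4096,           # context size; adjust to taste and VRAM
    n_gpu_layers=-1,      # offload everything if VRAM allows
    chat_format="chatml", # ChatML, as suggested above
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful writing partner."},
        {"role": "user", "content": "Write a short scene set on a windy pier."},
    ],
    temperature=1.1,  # slightly higher temp, per the note above
    min_p=0.05,       # MinP, per the note above
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```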

Parts:
```yaml
models:
  - model: mistralai/Mixtral-8x7B-v0.1+retrieval-bar/Mixtral-8x7B-v0.1_case-briefs
    parameters:
      weight: 0.33
  - model: mistralai/Mixtral-8x7B-v0.1+wandb/Mixtral-8x7b-Remixtral
    parameters:
      weight: 0.33
merge_method: task_arithmetic
base_model: mistralai/Mixtral-8x7B-v0.1
dtype: float16
```
and
```yaml
models:
  - model: mistralai/Mixtral-8x7B-Instruct-v0.1+/ai/LLM/tmp/pefts/daybreak-peft/mixtral-8x7b
    parameters:
      weight: 0.85
  - model: notstoic/Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES
    parameters:
      weight: 0.25
  - model: ycros/BagelWorldTour-8x7B
    parameters:
      weight: 0.1
merge_method: task_arithmetic
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
dtype: float16
```
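
For intuition, `task_arithmetic` adds weighted task vectors (fine-tune minus base) onto the base weights. A minimal per-tensor sketch of that idea, assuming plain state dicts of tensors; this is not mergekit's actual implementation:

```python
# Sketch of the task_arithmetic idea: merged = base + sum_i w_i * (model_i - base).
# Illustrative only; mergekit additionally handles shards, dtypes, and tokenizers.
import torch

def task_arithmetic(base, finetunes, weights):
    """base: {name: tensor}; finetunes: list of such dicts; weights: list of floats."""
    merged = {}
    for name, base_t in base.items():
        delta = torch.zeros_like(base_t)
        for ft, w in zip(finetunes, weights):
            delta += w * (ft[name] - base_t)  # weighted task vector
        merged[name] = base_t + delta
    return merged
```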
The two parts above are then SLERPed together as per below.

---
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* ./02-friend2-instruct
* ./01-friend2-base

### Configuration

The following YAML configurations were used to produce this model:

```yaml
models:
  - model: ./01-friend2-base
  - model: ./02-friend2-instruct
merge_method: slerp
base_model: ./01-friend2-base
parameters:
  t:
    - value: 0.5
dtype: float16
```

```yaml
models:
  - model: ./temp-output-base
  - model: ./temp-output-instruct
merge_method: slerp
base_model: ./temp-output-base
parameters:
  t:
    - value: 0.5
dtype: float16
```
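
For reference, SLERP walks the arc between the two weight tensors instead of the straight line, and `t: 0.5` above takes the angular midpoint. A minimal per-tensor sketch (mergekit's real implementation also handles per-layer `t` schedules and other edge cases):

```python
# Sketch of spherical linear interpolation between two weight tensors.
# At t=0.5 this is the angular midpoint used in the configs above.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.acos((a_n * b_n).sum().clamp(-1.0, 1.0))  # angle between tensors
    if omega.abs() < 1e-4:                # nearly parallel: fall back to lerp
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    mixed = (torch.sin((1 - t) * omega) / so) * a_flat \
          + (torch.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)
```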