Update README.md
README.md CHANGED
````diff
@@ -117,9 +117,51 @@ model-index:
 ---
 # Gonzo-Chat-7B
 
+Gonzo-Chat-7B is a merged LLM that likes to chat, roleplay, work with agents, do some light programming, and then beat the brakes off you in the back alley...
+
+The ***BEST*** Open Source 7B **Street Fighting** LLM of 2024!!!
+
+![SF-III.jpg](https://cdn-uploads.huggingface.co/production/uploads/635bf4cfca038892de049862/txhGhwRWWbZAuKQET-v8F.jpeg)
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Badgids__Gonzo-Chat-7B)
+
+| Metric                            | Value |
+| --------------------------------- | ----: |
+| Avg.                              | 66.63 |
+| AI2 Reasoning Challenge (25-Shot) | 65.02 |
+| HellaSwag (10-Shot)               | 85.40 |
+| MMLU (5-Shot)                     | 63.75 |
+| TruthfulQA (0-shot)               | 60.23 |
+| Winogrande (5-shot)               | 77.74 |
+| GSM8k (5-shot)                    | 47.61 |
+
+## LLM-Colosseum Results
+
+All contestants fought using the same LLM-Colosseum default settings. Each contestant fought 25 rounds against every other contestant.
+
+https://github.com/OpenGenerativeAI/llm-colosseum
+
+### Gonzo-Chat-7B vs. Mistral v0.2, Dolphin-Mistral v0.2, Deepseek-Coder-6.7b-instruct
+
+![games-won.png](https://cdn-uploads.huggingface.co/production/uploads/635bf4cfca038892de049862/gZHRuz7KO6-czOEcPwZw_.png)
+
+![download.png](https://cdn-uploads.huggingface.co/production/uploads/635bf4cfca038892de049862/UubKr4WlnWjnmt8Eh9xkk.png)
+
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
 ## Merge Details
 ### Merge Method
 
 This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO) as a base.
````
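The hunk below only shows the tail of the actual mergekit configuration (`int8_mask` and `dtype`). For orientation on the DARE-TIES method named above, here is a minimal sketch of how such a merge is typically driven with mergekit; the second source model and the `density` and `weight` values are placeholders, not the settings Gonzo-Chat-7B was built with, and only the base model is taken from this card.

```python
# Hypothetical sketch: drive a DARE-TIES merge with the mergekit CLI from Python.
# The config is a placeholder showing the usual dare_ties layout; it is NOT the
# configuration Gonzo-Chat-7B was actually built with.
import subprocess
from pathlib import Path

config = """\
models:
  - model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
    # base model: no density/weight needed
  - model: someuser/some-other-7b        # placeholder source model
    parameters:
      density: 0.5                       # placeholder: fraction of delta weights kept
      weight: 0.5                        # placeholder merge weight
merge_method: dare_ties
base_model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
parameters:
  int8_mask: true
dtype: bfloat16
"""

Path("dare_ties_config.yml").write_text(config)

# mergekit installs a `mergekit-yaml` entry point: mergekit-yaml <config> <output-dir>
subprocess.run(["mergekit-yaml", "dare_ties_config.yml", "./merged-model"], check=True)
```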
````diff
@@ -157,16 +199,5 @@ parameters:
   int8_mask: true
 dtype: bfloat16
 ```
-# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
-Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Badgids__Gonzo-Chat-7B)
 
-| Metric                            | Value |
-| --------------------------------- | ----: |
-| Avg.                              | 66.63 |
-| AI2 Reasoning Challenge (25-Shot) | 65.02 |
-| HellaSwag (10-Shot)               | 85.40 |
-| MMLU (5-Shot)                     | 63.75 |
-| TruthfulQA (0-shot)               | 60.23 |
-| Winogrande (5-shot)               | 77.74 |
-| GSM8k (5-shot)                    | 47.61 |
 
````
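The card has no quick-start snippet, so below is a minimal chat-inference sketch. It assumes the model is published as `Badgids/Gonzo-Chat-7B` (inferred from the leaderboard details link) and that the tokenizer ships a chat template; adjust the repo id and prompting if either differs.

```python
# Minimal inference sketch with transformers. The repo id is inferred from the
# leaderboard details link above; the chat template is assumed to exist.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Badgids/Gonzo-Chat-7B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype shown in the config above
    device_map="auto",
)

messages = [{"role": "user", "content": "Meet me in the back alley and tell me a joke."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```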