johannhartmann committed commit 6a96dee (parent: 604916a): Update README.md

README.md CHANGED
@@ -22,6 +22,28 @@ Wiederchat-7b is a merge of the following models using [LazyMergekit](https://co
 * [mayflowergmbh/Wiedervereinigung-7b-dpo-laser](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo-laser)
 * [cognitivecomputations/openchat-3.5-0106-laser](https://huggingface.co/cognitivecomputations/openchat-3.5-0106-laser)
 
+# Benchmark mt-bench-de
+Even before DPO alignment, this model performs quite well:
+```json
+{
+  "first_turn": 7.46875,
+  "second_turn": 6.7875,
+  "categories": {
+    "writing": 8.55,
+    "roleplay": 8,
+    "reasoning": 5.3,
+    "math": 4.35,
+    "coding": 4.6,
+    "extraction": 8.4,
+    "stem": 8.575,
+    "humanities": 9.25
+  },
+  "average": 7.128125
+}
+
+```
+
+
 ## 🧩 Configuration
 
 ```yaml