base_model:
- yam-peleg/Experiment28-7B
---

# 🧪 YamshadowExperiment28-7B

YamshadowExperiment28-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [automerger/YamShadow-7B](https://huggingface.co/automerger/YamShadow-7B)
* [yam-peleg/Experiment28-7B](https://huggingface.co/yam-peleg/Experiment28-7B)

## 🔍 Applications

This model uses a context window of 8k. I recommend using it with the Alpaca chat template (works perfectly with LM Studio).

The model can sometimes break and output a lot of "INST". From my experience, its excellent results on the Open LLM Leaderboard are probably a sign of overfitting.
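
For reference, a minimal generation sketch with `transformers`, assuming the standard Alpaca prompt format; the instruction text, generation settings, and device placement below are illustrative assumptions, not something pinned down by this repo.

```python
# Minimal sketch: load the merge and prompt it with an Alpaca-style template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "automerger/YamshadowExperiment28-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Standard Alpaca prompt format (assumed; the repo does not pin an exact string).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain model merging in one sentence.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```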

## ⚡ Quantized models

* **GGUF**: https://huggingface.co/automerger/YamshadowExperiment28-7B-GGUF
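
As one way to run the GGUF weights locally (besides LM Studio), here is a sketch using `llama-cpp-python`; the quant filename is an assumption, so check the repo above for the files it actually ships.

```python
# Sketch: run a GGUF quant locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="automerger/YamshadowExperiment28-7B-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant name; pick one that exists in the repo
    n_ctx=8192,               # matches the 8k context window noted above
)

out = llm(
    "### Instruction:\nSay hello.\n\n### Response:\n",
    max_tokens=64,
    stop=["### Instruction:"],  # simple guard against template run-on
)
print(out["choices"][0]["text"])
```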

## 🏆 Evaluation

### Open LLM Leaderboard

As of 8 April 2024, YamshadowExperiment28-7B is the best-performing 7B model on the Open LLM Leaderboard.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/ONmehD2GXYefb-O3zHbp5.png)

### EQ-bench

Thanks to [Samuel J. Paech](https://twitter.com/sam_paech), who kindly ran the evaluation.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/e6cg_7TD35JveTjx_KoTT.png)

### Nous

Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/s4oKdK3FfaDsagXe7tEM2.png)

## 🌳 Model Family Tree

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/fEA4EdtSa_fssdvsUXPf1.png)

## 🧩 Configuration

```yaml