---
license: apache-2.0
language:
- en
- zh
base_model:
- deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
- Qwen/Qwen3-8B
pipeline_tag: text-generation
tags:
- merge
---

# *Model Highlights:*

- ***merge method**: `slerp`*

- ***Highest precision**: `dtype: float32` + `out_dtype: bfloat16`*

- ***Brand-new chat template**: ensures normal operation on LM Studio*

- ***Context length**: `32768`*

## *Model Selection Table:*

|Model|Context|Uses Base Model|
|---|---|---|
|[Qwen3-8B-YOYO-slerp](https://huggingface.co/YOYO-AI/Qwen3-8B-YOYO-slerp)|32K|Yes|
|[Qwen3-8B-YOYO-slerp-128K](https://huggingface.co/YOYO-AI/Qwen3-8B-YOYO-slerp-128K)|128K|Yes|
|[Qwen3-8B-YOYO-nuslerp](https://huggingface.co/YOYO-AI/Qwen3-8B-YOYO-nuslerp)|32K|No|
|[Qwen3-8B-YOYO-nuslerp-128K](https://huggingface.co/YOYO-AI/Qwen3-8B-YOYO-nuslerp-128K)|128K|No|
|[Qwen3-8B-YOYO-nuslerp-plus](https://huggingface.co/YOYO-AI/Qwen3-8B-YOYO-nuslerp-plus)|32K|Yes|
|[Qwen3-8B-YOYO-nuslerp-plus-128K](https://huggingface.co/YOYO-AI/Qwen3-8B-YOYO-nuslerp-plus-128K)|128K|Yes|

> **Warning**:
> *Models with `128K` context may show slight quality loss; in most cases, prefer the native `32K` context.*

# *Parameter Settings*:

## *Thinking Mode:*

> [!NOTE]
> *`Temperature=0.6`, `TopP=0.95`, `TopK=20`, `MinP=0`.*

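To see what these four settings actually do to the next-token distribution, here is a minimal NumPy sketch of the standard temperature / top-k / top-p / min-p filtering pipeline. This is illustrative only, not the sampler of any particular inference engine; the toy `logits` vector is an assumption for demonstration.

```python
# Illustrative sketch of the recommended thinking-mode sampling filters
# (temperature=0.6, top_p=0.95, top_k=20, min_p=0). Pure NumPy; real
# inference engines apply equivalent filters inside their samplers.
import numpy as np

def filter_logits(logits, temperature=0.6, top_p=0.95, top_k=20, min_p=0.0):
    """Return renormalized probabilities after applying all four filters."""
    # Temperature: divide logits before softmax (T < 1 sharpens).
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]          # token ids, most probable first
    # Top-k: keep only the k most probable tokens.
    keep = np.zeros_like(probs, dtype=bool)
    keep[order[:top_k]] = True
    # Top-p (nucleus): keep the smallest prefix whose cumulative mass >= top_p.
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    nucleus = np.zeros_like(keep)
    nucleus[order[:cutoff]] = True
    keep &= nucleus
    # Min-p: drop tokens below min_p * max probability (no-op at min_p = 0).
    keep &= probs >= min_p * probs.max()
    probs = np.where(keep, probs, 0.0)
    return probs / probs.sum()

logits = np.array([2.0, 1.0, 0.5, -1.0, -3.0])  # toy 5-token vocabulary
p = filter_logits(logits)                        # filtered, renormalized probs
```

With this toy vocabulary, the two least likely tokens fall outside the `top_p=0.95` nucleus and are zeroed out before sampling.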
# *Configuration*:

*The following YAML configuration was used to produce this model:*

```yaml
slices:
- sources:
  - model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
    layer_range: [0, 36]
  - model: Qwen/Qwen3-8B
    layer_range: [0, 36]
merge_method: slerp
base_model: Qwen/Qwen3-8B
parameters:
  t:
  - filter: self_attn
    value: [0, 0.5, 0.3, 0.7, 1]
  - filter: mlp
    value: [1, 0.5, 0.7, 0.3, 0]
  - value: 0.5
tokenizer_source: base
dtype: float32
out_dtype: bfloat16
```
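For readers unfamiliar with the `slerp` merge method, the core idea is spherical linear interpolation between the two models' weight tensors, with the interpolation factor `t` varied per layer group (the `value` lists above) and per module (`self_attn` vs. `mlp`). The following is a minimal standalone NumPy sketch of slerp on a pair of toy weight matrices; it is a conceptual illustration, not mergekit's actual implementation.

```python
# Minimal sketch of spherical linear interpolation (slerp) between two
# flattened weight tensors, as used conceptually by the merge config above.
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Interpolate on the sphere: t=0 returns v0, t=1 returns v1."""
    a, b = v0.ravel(), v1.ravel()
    dot = np.clip(
        np.dot(a / np.linalg.norm(a), b / np.linalg.norm(b)), -1.0, 1.0
    )
    theta = np.arccos(dot)              # angle between the two tensors
    if theta < eps:                     # nearly parallel: fall back to lerp
        return (1 - t) * v0 + t * v1
    s = np.sin(theta)
    out = (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
    return out.reshape(v0.shape)

w_a = np.array([[1.0, 0.0], [0.0, 1.0]])  # stand-in for a DeepSeek-R1 weight
w_b = np.array([[0.0, 1.0], [1.0, 0.0]])  # stand-in for a Qwen3 weight
w_merged = slerp(0.5, w_a, w_b)           # t=0.5, the config's default value
```

Unlike plain averaging, slerp preserves the geometric "magnitude along the arc" between the two weight directions, which is why the config can sweep `t` across layers (e.g. `[0, 0.5, 0.3, 0.7, 1]` for attention) to blend the models differently at different depths. With mergekit installed, a config like the one above is typically run with its `mergekit-yaml` command, e.g. `mergekit-yaml config.yaml ./output-model-directory`.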