---
license: apache-2.0
language:
- en
- zh
base_model:
- deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
- AXCXEPT/Qwen3-EZO-8B-beta
pipeline_tag: text-generation
tags:
- merge
---

# *Model Highlights:*

- ***merge method**: `slerp`*

- ***Highest precision**: `dtype: float32` + `out_dtype: bfloat16`*

- ***Brand-new chat template**: ensures normal operation on LM Studio*

- ***Context length**: `131072`*

## *Model Selection Table:*
|Model|Context|Uses Base Model|
|---|---|---|
|[Qwen3-EZO-8B-YOYO-slerp](https://huggingface.co/YOYO-AI/Qwen3-EZO-8B-YOYO-slerp)|32K|Yes|
|[Qwen3-EZO-8B-YOYO-slerp-128K](https://huggingface.co/YOYO-AI/Qwen3-EZO-8B-YOYO-slerp-128K)|128K|Yes|
|[Qwen3-EZO-8B-YOYO-nuslerp](https://huggingface.co/YOYO-AI/Qwen3-EZO-8B-YOYO-nuslerp)|32K|No|
|[Qwen3-EZO-8B-YOYO-nuslerp-128K](https://huggingface.co/YOYO-AI/Qwen3-EZO-8B-YOYO-nuslerp-128K)|128K|No|
|[Qwen3-EZO-8B-YOYO-nuslerp-plus](https://huggingface.co/YOYO-AI/Qwen3-EZO-8B-YOYO-nuslerp-plus)|32K|Yes|
|[Qwen3-EZO-8B-YOYO-nuslerp-plus-128K](https://huggingface.co/YOYO-AI/Qwen3-EZO-8B-YOYO-nuslerp-plus-128K)|128K|Yes|

> **Warning**:
> *Models with `128K` context may have slight quality loss. In most cases, please use the `32K` native context!*

# *Parameter Settings*:
## *Thinking Mode:*
> [!NOTE]
> *`Temperature=0.6`, `TopP=0.95`, `TopK=20`, `MinP=0`.*
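
*For intuition, here is a minimal numpy sketch of how these four settings filter the next-token distribution. This is a hypothetical illustration, not the sampler used by LM Studio or transformers:*

```python
import numpy as np

def filter_and_sample(logits, temperature=0.6, top_p=0.95, top_k=20,
                      min_p=0.0, rng=None):
    """Apply temperature, top-k, min-p and top-p filtering, then sample a token id."""
    rng = rng or np.random.default_rng(0)
    logits = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # top-k: keep only the k most likely tokens
    if 0 < top_k <= len(probs):
        kth = np.sort(probs)[-top_k]
        probs[probs < kth] = 0.0
    # min-p: drop tokens whose probability is below min_p * max probability
    if min_p > 0:
        probs[probs < min_p * probs.max()] = 0.0
    # top-p (nucleus): keep the smallest set covering top_p of remaining mass
    order = np.argsort(probs)[::-1]
    csum = np.cumsum(probs[order])
    cutoff = np.searchsorted(csum, top_p * csum[-1]) + 1
    probs[order[cutoff:]] = 0.0
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```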

# *Configuration*:
*The following YAML configuration was used to produce this model:*

```yaml
slices:
  - sources:
      - model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
        layer_range: [0, 36]
      - model: AXCXEPT/Qwen3-EZO-8B-beta
        layer_range: [0, 36]
merge_method: slerp
base_model: AXCXEPT/Qwen3-EZO-8B-beta
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
tokenizer_source: base
dtype: float32
out_dtype: bfloat16
```
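
*The per-filter `t` lists above set the interpolation weight per layer group. For reference, a minimal numpy sketch of the slerp formula itself (an illustration, not mergekit's implementation):*

```python
import numpy as np

def slerp(v0, v1, t, eps=1e-8):
    """Spherical linear interpolation between two weight vectors at fraction t."""
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    theta = np.arccos(np.clip(np.dot(v0n, v1n), -1.0, 1.0))
    if theta < eps:
        # vectors are nearly parallel: fall back to linear interpolation
        return (1 - t) * v0 + t * v1
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * v0 + s1 * v1
```

*At `t=0` the result is the first model's weights, at `t=1` the second's; intermediate `t` follows the arc between them rather than the straight line, which preserves vector norm better than plain averaging.*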