---
license: apache-2.0
language:
- en
- zh
base_model:
- Qwen/Qwen3-30B-A3B-Thinking-2507
- Qwen/Qwen3-30B-A3B-Instruct-2507
- Qwen/Qwen3-Coder-30B-A3B-Instruct
- Qwen/Qwen3-30B-A3B-Base
pipeline_tag: text-generation
tags:
- merge
---
> *This is the initial unified version of the Qwen3-30B-A3B series models. As more fine-tuned models emerge and merging methods advance, we will continue to improve it. Stay tuned!*
# *Model Highlights:*

- ***Merge methods**: `nuslerp`, `della`*

- ***Precision**: `bfloat16`*

- ***Context length**: `1,010,000` tokens*

# *Parameter Settings:*
> [!TIP]
> *`Temperature=0.7`, `TopP=0.8`, `TopK=20`, `MinP=0`.*
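
*As a minimal usage sketch (not part of the original release), the recommended settings map onto a standard `transformers` generation call roughly as follows; the model path below is a placeholder for wherever the merged model is stored:*

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path / repo id for the merged model.
model_id = "Qwen3-30B-A3B-YOYO-V2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that checks whether a number is prime."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Recommended sampling settings from this card.
output_ids = model.generate(
    input_ids,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    min_p=0.0,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```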

## *Step 1: Merge the Code Model with the Instruction & Thinking Models Separately*
- *Adopt the nuslerp method to improve how well the target model absorbs the code model's capabilities.*
- *Set a 9:1 merging ratio so that an excessively high proportion of the code model does not degrade general capabilities (a sketch of running these configs with mergekit follows the second config below).*
```yaml
models:
  - model: Qwen/Qwen3-30B-A3B-Instruct-2507
    parameters:
      weight: 0.9
  - model: Qwen/Qwen3-Coder-30B-A3B-Instruct
    parameters:
      weight: 0.1
merge_method: nuslerp
tokenizer_source: Qwen/Qwen3-30B-A3B-Instruct-2507
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
name: Qwen3-30B-A3B-Coder-Instruct-nuslerp
```
```yaml
models:
  - model: Qwen/Qwen3-30B-A3B-Thinking-2507
    parameters:
      weight: 0.9
  - model: Qwen/Qwen3-Coder-30B-A3B-Instruct
    parameters:
      weight: 0.1
merge_method: nuslerp
tokenizer_source: Qwen/Qwen3-30B-A3B-Thinking-2507
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
name: Qwen3-30B-A3B-Coder-Thinking-nuslerp
```
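
*The configs above are standard mergekit configurations. As a sketch only (paths are placeholders, and the `mergekit-yaml` CLI is an equivalent alternative), each one can be executed with mergekit's Python entry point roughly like this:*

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load one of the nuslerp configs above, saved locally as a YAML file (placeholder name).
with open("qwen3-coder-instruct-nuslerp.yml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Qwen3-30B-A3B-Coder-Instruct-nuslerp",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the tokenizer_source files into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```
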
## *Step 2: Merge the Code-Instruction & Code-Thinking Models into the Base Model Together*
- *Merge the two intermediate models into the base model with the della merge method to make the result more versatile and stable.*
- *Since the merged model is closer to the instruction model, we reuse the chat template of Qwen3-30B-A3B-Instruct-2507 (see the sketch after the config below).*
```yaml
models:
  - model: Qwen3-30B-A3B-Coder-Instruct-nuslerp
    parameters:
      density: 1
      weight: 1
      lambda: 0.9
  - model: Qwen3-30B-A3B-Coder-Thinking-nuslerp
    parameters:
      density: 1
      weight: 1
      lambda: 0.9
merge_method: della
base_model: Qwen/Qwen3-30B-A3B-Base
dtype: bfloat16
name: Qwen3-30B-A3B-YOYO-V2
```
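
*To make the reuse of the Instruct chat template explicit, one option (a sketch; paths are placeholders) is to copy the template from Qwen3-30B-A3B-Instruct-2507 into the merged model's tokenizer files:*

```python
from transformers import AutoTokenizer

# Source of the chat template: the Instruct model this card refers to.
instruct_tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-30B-A3B-Instruct-2507")

# Placeholder path to the local output of the della merge above.
merged_dir = "./Qwen3-30B-A3B-YOYO-V2"
merged_tok = AutoTokenizer.from_pretrained(merged_dir)

# Reuse the Instruct chat template and write it back into tokenizer_config.json.
merged_tok.chat_template = instruct_tok.chat_template
merged_tok.save_pretrained(merged_dir)
```
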
## *Step 3: Further Extend the Context Length*
- *Following the config_1m.json of Qwen3-30B-A3B-Instruct-2507, we modified the merged model's config.json to extend the maximum context length to 1M (see the sketch below).*
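
*The sketch below illustrates the kind of edit involved; the YaRN values shown are placeholders, and the exact fields should be copied from the official config_1m.json rather than from here:*

```python
from transformers import AutoConfig

# Placeholder path to the merged model produced in Step 2.
merged_dir = "./Qwen3-30B-A3B-YOYO-V2"
cfg = AutoConfig.from_pretrained(merged_dir)

# Mirror the 1M-context settings from Qwen/Qwen3-30B-A3B-Instruct-2507's config_1m.json.
# The values below are illustrative placeholders, not the official numbers.
cfg.max_position_embeddings = 1010000
cfg.rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,                               # assumed scaling factor
    "original_max_position_embeddings": 262144,  # assumed native context of the 2507 models
}

cfg.save_pretrained(merged_dir)  # overwrites config.json in place
```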