# DS-R1-0528-Qwen3-YOYO-merge
- Merge method: `nuslerp`
- Highest precision: `dtype: float32` + `out_dtype: bfloat16`
- Brand-new chat template: ensures normal operation in LM Studio (see the sketch below)
- Context length: 131072 tokens
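As an illustration of the chat template, the following sketch renders a prompt with `AutoTokenizer.apply_chat_template`; the repo id is a placeholder for whichever variant of this collection you download:

```python
from transformers import AutoTokenizer

# Placeholder repo id; substitute the actual model you downloaded.
tokenizer = AutoTokenizer.from_pretrained("YOYO-AI/Qwen3-8B-YOYO-nuslerp")

messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]

# Render the bundled chat template without tokenizing, to inspect the prompt format.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```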
| Model | Context | Uses Base Model |
|---|---|---|
| Qwen3-8B-YOYO-slerp | 32K | Yes |
| Qwen3-8B-YOYO-slerp-128K | 128K | Yes |
| Qwen3-8B-YOYO-nuslerp | 32K | No |
| Qwen3-8B-YOYO-nuslerp-128K | 128K | No |
| Qwen3-8B-YOYO-nuslerp-plus | 32K | Yes |
| Qwen3-8B-YOYO-nuslerp-plus-128K | 128K | Yes |
**Warning**: Models with 128K context may have slight quality loss. In most cases, please use the native 32K context!
Recommended inference settings: `Temperature=0.6`, `TopP=0.95`, `TopK=20`, `MinP=0`.
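A minimal generation sketch using these settings with Hugging Face Transformers (the repo id is a placeholder; `min_p` requires a recent Transformers release):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "YOYO-AI/Qwen3-8B-YOYO-nuslerp"  # placeholder; use the variant you downloaded
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Explain what a model merge is in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Recommended sampling settings from this card.
output_ids = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```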
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
    parameters:
      weight: 1
  - model: Qwen/Qwen3-8B
    parameters:
      weight: 1
merge_method: nuslerp
base_model: Qwen/Qwen3-8B-Base
tokenizer_source: Qwen/Qwen3-8B
parameters:
  normalize: true
  int8_mask: true
dtype: float32
out_dtype: bfloat16
```
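The merge can be reproduced with mergekit, either through the `mergekit-yaml` CLI or its Python API. A minimal sketch, assuming the configuration above is saved as `config.yaml` and that the mergekit API matches the version documented in its README (details may differ across releases):

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge recipe shown above (assumed to be saved as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the nuslerp merge; the output directory name is a placeholder.
run_merge(
    merge_config,
    out_path="./DS-R1-0528-Qwen3-YOYO-merge",
    options=MergeOptions(
        cuda=True,            # set to False to merge on CPU
        copy_tokenizer=True,  # copy tokenizer_source (Qwen/Qwen3-8B) into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```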