Model Highlights:

  • merge method: nuslerp (see the SLERP sketch after this list)

  • Highest precision: merged in float32 (dtype: float32) and written out in bfloat16 (out_dtype: bfloat16)

  • Brand-new chat template: ensures the model works correctly in LM Studio

  • Context length: 32768
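nuslerp is mergekit's normalized spherical linear interpolation (SLERP) merge: corresponding weight tensors from the two models are interpolated along an arc on the sphere rather than averaged linearly. Below is a minimal, illustrative SLERP sketch in PyTorch; it is not mergekit's actual implementation, which differs in details such as normalization and row-wise interpolation options.

```python
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two tensors, treated as flat vectors.

    An illustrative sketch of the idea behind nuslerp; not mergekit's exact code.
    """
    a_unit = a / (a.norm() + eps)
    b_unit = b / (b.norm() + eps)
    dot = torch.clamp((a_unit * b_unit).sum(), -1.0, 1.0)
    theta = torch.acos(dot)          # angle between the two weight tensors
    if theta < eps:                  # nearly parallel: fall back to plain lerp
        return (1 - t) * a + t * b
    sin_theta = torch.sin(theta)
    return (torch.sin((1 - t) * theta) / sin_theta) * a + \
           (torch.sin(t * theta) / sin_theta) * b

# Equal weights (1 and 1, as in the config below) correspond to the midpoint t = 0.5.
w_r1 = torch.randn(4096, 4096)    # stand-ins for corresponding weight tensors
w_base = torch.randn(4096, 4096)
merged = slerp(w_r1, w_base, t=0.5)
```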

Model Selection Table:

Warning: variants with 128K context may show slight quality loss. In most cases, please use the native 32K context!

Parameter Settings:

Thinking Mode:

Temperature=0.6, TopP=0.95, TopK=20, MinP=0.
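
As a hedged illustration, the snippet below applies these settings with Hugging Face transformers (min_p sampling needs a recent transformers release; enable_thinking is the Qwen3-style chat-template flag and is assumed to carry over to this merge):

```python
# A minimal sketch using Hugging Face transformers (requires torch and accelerate).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "YOYO-AI/Qwen3-8B-YOYO-nuslerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Briefly explain spherical interpolation."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=True,   # assumption: inherited from the Qwen3 chat template
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.6,        # recommended thinking-mode settings from this card
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```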

Configuration:

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
    parameters:
      weight: 1
  - model: Qwen/Qwen3-8B
    parameters:
      weight: 1
merge_method: nuslerp
tokenizer_source: Qwen/Qwen3-8B
parameters:
  normalize: true
  int8_mask: true
dtype: float32
out_dtype: bfloat16
```
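
For reference, a merge from a config like this is typically reproduced with mergekit's `mergekit-yaml` command (e.g. `mergekit-yaml config.yaml ./output-model`, assuming the YAML above is saved as config.yaml).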