---
base_model:
- Qwen/Qwen2.5-Coder-32B
- Qwen/Qwen2.5-Coder-32B-Instruct
- tanliboy/lambda-qwen2.5-32b-dpo-test
- deepcogito/cogito-v1-preview-qwen-32B
- Qwen/Qwen2.5-32B-Instruct
- Qwen/QwQ-32B
- fblgit/TheBeagle-v2beta-32B-MGS
- Skywork/Skywork-OR1-32B-Preview
- qihoo360/Light-R1-32B
- AXCXEPT/EZO-Qwen2.5-32B-Instruct
- Qwen/Qwen2.5-32B
- EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
- arcee-ai/Virtuoso-Medium-v2
- Azure99/Blossom-V6-32B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-generation
---
# Qwen2.5-32B-YOYO-V2
*The second-generation YOYO **32B** model is released!*
***Highlights:***
*1. Uses the **Karcher** merge method (sketched just below).*
*2. Integrates **high-performance 32B models** from the open-source community.*
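*For context: the `karcher` merge method in mergekit is built around the Karcher (Fréchet) mean, the point that minimizes the sum of squared geodesic distances to the input checkpoints, rather than a plain Euclidean average of weights. A minimal sketch of that objective (notation is ours, not mergekit's):*
```latex
% Karcher (Frechet) mean of N checkpoints \theta_1, ..., \theta_N on a manifold M
% with geodesic distance d(.,.); in flat Euclidean space this reduces to the
% ordinary weight average.
\theta^{*} = \arg\min_{\theta \in M} \sum_{i=1}^{N} d\left(\theta, \theta_i\right)^{2}
```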
## First stage:
*Make a code model:*
```yaml
models:
  - model: Qwen/Qwen2.5-Coder-32B-Instruct
    parameters:
      density: 1
      weight: 1
      lambda: 0.9
merge_method: della
base_model: Qwen/Qwen2.5-Coder-32B
parameters:
  density: 1
  weight: 1
  lambda: 0.9
  normalize: true
  int8_mask: true
dtype: bfloat16
name: YOYO-AI/Qwen2.5-Coder-32B-YOYO
```
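*Each stage is an ordinary mergekit recipe, so it can be run with the `mergekit-yaml` CLI or from Python. Below is a minimal sketch using mergekit's documented Python entry points; the config path and output directory are placeholder names, and the same pattern applies to the later stages.*
```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "stage1-della.yaml"          # placeholder: the first-stage recipe above
OUTPUT_PATH = "./Qwen2.5-Coder-32B-YOYO"  # placeholder output directory

# Parse the YAML recipe into mergekit's configuration object.
with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge; use CUDA if available and copy the base tokenizer into the output.
run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```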
## Second stage:
*Make an instruction model:*
```yaml
models:
  - model: YOYO-AI/Qwen2.5-Coder-32B-YOYO
  - model: Qwen/QwQ-32B
  - model: Skywork/Skywork-OR1-32B-Preview
  - model: deepcogito/cogito-v1-preview-qwen-32B
  - model: qihoo360/Light-R1-32B
  - model: AXCXEPT/EZO-Qwen2.5-32B-Instruct
  - model: fblgit/TheBeagle-v2beta-32B-MGS
  - model: tanliboy/lambda-qwen2.5-32b-dpo-test
  - model: Qwen/Qwen2.5-32B-Instruct
merge_method: karcher
base_model: Qwen/Qwen2.5-32B-Instruct
parameters:
  max_iter: 1000
  normalize: true
  int8_mask: true
tokenizer_source: base
dtype: float16
name: YOYO-AI/Qwen2.5-32B-YOYO-karcher
```
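*The `max_iter: 1000` setting bounds the iterative computation of the Karcher mean, which has no closed form in general. A generic sketch of the usual fixed-point update is shown below; this is the textbook iteration, not necessarily mergekit's exact implementation. Exp and Log denote the manifold's exponential and logarithm maps.*
```latex
% Fixed-point iteration for the Karcher mean: average the tangent vectors at the
% current estimate, then map back onto the manifold. Stop when the update is
% small or after max_iter (here 1000) iterations.
\theta^{(t+1)} = \operatorname{Exp}_{\theta^{(t)}}\left( \frac{1}{N} \sum_{i=1}^{N} \operatorname{Log}_{\theta^{(t)}}\left( \theta_i \right) \right),
\qquad t = 0, 1, \ldots, \texttt{max\_iter} - 1
```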
## Third stage:
*Make a base model:*
```yaml
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
  - model: Azure99/Blossom-V6-32B
  - model: arcee-ai/Virtuoso-Medium-v2
merge_method: karcher
base_model: Qwen/Qwen2.5-32B
parameters:
  max_iter: 1000
  normalize: true
  int8_mask: true
tokenizer_source: base
dtype: float16
name: YOYO-AI/Qwen2.5-32B-YOYO-karcher-base
```
## Final stage:
```yaml
models:
  - model: YOYO-AI/Qwen2.5-32B-YOYO-karcher
    parameters:
      density: 1
      weight: 1
      lambda: 0.9
merge_method: della
base_model: YOYO-AI/Qwen2.5-32B-YOYO-karcher-base
parameters:
  density: 1
  weight: 1
  lambda: 0.9
  normalize: true
  int8_mask: true
dtype: bfloat16
tokenizer_source: base
name: YOYO-AI/Qwen2.5-32B-YOYO-V2
```
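*Once the final merge is produced (or downloaded from the Hub under the `name` given above), it behaves like any other Qwen2.5-style causal LM. A minimal usage sketch with 🤗 Transformers:*
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from the `name` field of the final merge recipe above.
model_id = "YOYO-AI/Qwen2.5-32B-YOYO-V2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the merged bfloat16 weights
    device_map="auto",   # shard the 32B model across available GPUs
)

# Qwen2.5-style chat templating for a simple coding prompt.
messages = [{"role": "user", "content": "Write a Python function that checks whether a number is prime."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```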