---
base_model:
- sometimesanotion/Qwenvergence-14B-v13-Prose-DS
- Sao10K/14B-Qwen2.5-Kunou-v1
- deepcogito/cogito-v1-preview-qwen-14B
library_name: transformers
tags:
- mergekit
- merge

---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Model Breadcrumbs](https://arxiv.org/abs/2312.06795) merge method, with [Sao10K/14B-Qwen2.5-Kunou-v1](https://huggingface.co/Sao10K/14B-Qwen2.5-Kunou-v1) as the base model.
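To give a sense of what the breadcrumbs method does, the sketch below illustrates the core idea for a single tensor: form the task vector (fine-tuned minus base weights), drop the largest-magnitude outliers and the smallest entries, and add the surviving mid-magnitude band back to the base with a weight and a global `lambda` scale. This is only an illustrative Python sketch under one plausible reading of the `density`/`gamma` parameters used in the config below; it is not mergekit's implementation.

```python
import torch

def breadcrumbs_delta(base: torch.Tensor, finetuned: torch.Tensor,
                      density: float, gamma: float) -> torch.Tensor:
    """Sketch of breadcrumbs sparsification: keep a mid-magnitude band of the
    task vector, dropping the top `gamma` fraction of outliers and enough of
    the smallest entries that roughly `density` of the tensor survives."""
    delta = finetuned - base
    flat = delta.abs().flatten()
    n = flat.numel()
    k_top = int(n * gamma)                  # largest-magnitude entries to discard
    k_keep = int(n * density)               # entries to retain overall
    order = flat.argsort(descending=True)   # indices sorted large -> small
    keep_idx = order[k_top:k_top + k_keep]  # the mid-magnitude band
    mask = torch.zeros(n, dtype=torch.bool)
    mask[keep_idx] = True
    return delta * mask.view_as(delta)

def breadcrumbs_merge(base, finetuned_models, weights, density, gamma, lam):
    """Weighted sum of sparsified task vectors, scaled by lambda, added to base."""
    merged = base.clone()
    for m, w, d, g in zip(finetuned_models, weights, density, gamma):
        merged += lam * w * breadcrumbs_delta(base, m, d, g)
    return merged
```

In the configuration below, `density` and `weight` are given as lists, which mergekit interpolates across layers, so the two non-base models contribute more strongly in different parts of the network.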

### Models Merged

The following models were included in the merge:
* [sometimesanotion/Qwenvergence-14B-v13-Prose-DS](https://huggingface.co/sometimesanotion/Qwenvergence-14B-v13-Prose-DS)
* [deepcogito/cogito-v1-preview-qwen-14B](https://huggingface.co/deepcogito/cogito-v1-preview-qwen-14B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Sao10K/14B-Qwen2.5-Kunou-v1
  - model: sometimesanotion/Qwenvergence-14B-v13-Prose-DS
    parameters:
      density: [0.16, 0.26, 0.36, 0.46, 0.56, 0.46, 0.36, 0.26, 0.16]
      weight: [0.166, 0.496, 0.496, 0.166, 0.166, 0.496, 0.496, 0.166]
  - model: deepcogito/cogito-v1-preview-qwen-14B
    parameters:
      density: [0.56, 0.46, 0.36, 0.26, 0.16, 0.26, 0.36, 0.46, 0.56]
      weight: [0.496, 0.166, 0.166, 0.496, 0.496, 0.166, 0.166, 0.496]
merge_method: breadcrumbs
base_model: Sao10K/14B-Qwen2.5-Kunou-v1
parameters:
    gamma: 0.06
    lambda: 0.96
tokenizer_source: base
dtype: bfloat16
```
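### Usage

The merge itself can be reproduced by saving the configuration above to a file and running mergekit's `mergekit-yaml` command on it. Below is a minimal sketch of loading the resulting checkpoint with `transformers`; `your-namespace/merged-model` is a placeholder repository id, not the actual location of this model's weights.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/merged-model"  # placeholder: replace with the real repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Write a short scene set in a rainy harbor town."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```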