---
base_model:
  - sometimesanotion/Qwenvergence-14B-v13-Prose-DS
  - Sao10K/14B-Qwen2.5-Kunou-v1
  - deepcogito/cogito-v1-preview-qwen-14B
library_name: transformers
tags:
  - mergekit
  - merge
---

# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the Model Breadcrumbs merge method, with Sao10K/14B-Qwen2.5-Kunou-v1 as the base.
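
The core idea of the breadcrumbs method: for each donor model, take the difference from the base (the "task vector"), discard both the largest-magnitude outliers and the bulk of near-zero entries, then add the weighted remainder back onto the base. The sketch below is only a conceptual illustration in PyTorch, not mergekit's implementation; the scalar `density`, `gamma`, and `weight` arguments stand in for the per-layer lists in the configuration further down, and the exact parameter semantics inside mergekit may differ.

```python
# Conceptual sketch of Model Breadcrumbs masking for a single tensor.
# NOT mergekit's code: parameter semantics are approximated from the paper.
import torch

def breadcrumbs_delta(base: torch.Tensor,
                      finetuned: torch.Tensor,
                      density: float,
                      gamma: float,
                      weight: float) -> torch.Tensor:
    """Sparsify the task vector (finetuned - base): drop the top `gamma`
    fraction of entries by magnitude (outliers), keep roughly a `density`
    fraction of the next-largest entries, and zero out the rest."""
    tau = finetuned - base
    flat = tau.abs().flatten()
    n = flat.numel()

    k_top = int(gamma * n)                    # outliers to remove
    k_keep = int(density * n)                 # entries to retain

    order = torch.argsort(flat, descending=True)
    keep_idx = order[k_top:k_top + k_keep]    # the "middle band" of magnitudes

    mask = torch.zeros(n, dtype=torch.bool)
    mask[keep_idx] = True
    return weight * tau * mask.view_as(tau)

# The merged tensor is then, roughly:
#   base + lambda * sum(breadcrumbs_delta(...) for each donor model)
# with lambda = 0.96 and gamma = 0.06 as set in the configuration below.
```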

### Models Merged

The following models were included in the merge:

- sometimesanotion/Qwenvergence-14B-v13-Prose-DS
- deepcogito/cogito-v1-preview-qwen-14B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Sao10K/14B-Qwen2.5-Kunou-v1
  - model: sometimesanotion/Qwenvergence-14B-v13-Prose-DS
    parameters:
      density: [0.16, 0.26, 0.36, 0.46, 0.56, 0.46, 0.36, 0.26, 0.16]
      weight: [0.166, 0.496, 0.496, 0.166, 0.166, 0.496, 0.496, 0.166]
  - model: deepcogito/cogito-v1-preview-qwen-14B
    parameters:
      density: [0.56, 0.46, 0.36, 0.26, 0.16, 0.26, 0.36, 0.46, 0.56]
      weight: [0.496, 0.166, 0.166, 0.496, 0.496, 0.166, 0.166, 0.496]
merge_method: breadcrumbs
base_model: Sao10K/14B-Qwen2.5-Kunou-v1
parameters:
    gamma: 0.06
    lambda: 0.96
tokenizer_source: base
dtype: bfloat16
```
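
Below is a minimal sketch of loading the merged model with transformers, matching the `dtype: bfloat16` setting above. The repository id `AIgotahole/Queen-2.5-14B-aka` is assumed from this model card's location; adjust it if the weights live elsewhere.

```python
# Minimal usage sketch; the repo id is an assumption, not taken from the config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AIgotahole/Queen-2.5-14B-aka"  # assumed from the model card path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches `dtype: bfloat16` in the merge config
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short scene set in a rain-soaked harbor town."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```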