RP-Final-Merges
This is a merge of pre-trained language models created using mergekit.
This model was merged using the DELLA merge method, with /media/administrator/oiseauxai1data1/modelout/Smart-base-v2 as the base model.
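To make the global parameters below concrete, here is a toy numpy sketch of what a DELLA-style merge computes: each donor's delta from the base is stochastically pruned with magnitude-aware keep probabilities (controlled by `density` and `epsilon`), rescaled to stay unbiased, combined as a weighted sum, scaled by `lambda`, and added back to the base. This is an illustrative simplification under assumed semantics for the parameters, not mergekit's actual implementation, and all function names here are hypothetical.

```python
import numpy as np

def della_prune(delta, density=0.58, epsilon=0.1, rng=None):
    """Toy sketch of DELLA-style magnitude-aware pruning of a task delta.

    Larger-magnitude entries get a higher keep probability; probabilities
    are centred on `density` and spread over +/- epsilon/2 by magnitude
    rank. Kept entries are rescaled by 1/keep_prob so the result is
    unbiased in expectation. (Assumed semantics, not mergekit's code.)
    """
    rng = rng or np.random.default_rng(0)
    d = delta.ravel()
    n = d.size
    # Rank entries by magnitude: rank 0 = smallest magnitude.
    ranks = np.abs(d).argsort().argsort()
    keep_prob = np.clip(
        density - epsilon / 2 + epsilon * ranks / max(n - 1, 1), 0.0, 1.0
    )
    mask = rng.random(n) < keep_prob
    pruned = np.where(mask, d / np.maximum(keep_prob, 1e-12), 0.0)
    return pruned.reshape(delta.shape)

def della_merge(base, models, weights, density=0.58, epsilon=0.1, lam=1.02):
    """Weighted sum of pruned deltas, scaled by lambda, added to the base."""
    merged_delta = np.zeros_like(base)
    for m, w in zip(models, weights):
        merged_delta += w * della_prune(m - base, density, epsilon)
    return base + lam * merged_delta
```

With `density: 1.0` and `epsilon: 0.0` nothing is dropped and this reduces to a plain weighted delta merge; lowering `density` sparsifies each donor's contribution before summing, which is what reduces interference between donors.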
The following models were included in the merge:
- /media/administrator/oiseauxai1data1/modelout/Dark-Base-V1
- /media/administrator/oiseauxai1data/modelout/story-Base-V1
- /media/administrator/oiseauxai1data1/modelout/Middle-Base-V1
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: /media/administrator/oiseauxai1data1/modelout/Dark-Base-V1
    parameters:
      weight: 0.4 # Scalar weight for this model's overall contribution
  - model: /media/administrator/oiseauxai1data/modelout/story-Base-V1
    parameters:
      weight: 0.3 # Scalar weight
  - model: /media/administrator/oiseauxai1data1/modelout/Middle-Base-V1
    parameters:
      weight: 0.3 # Scalar weight
  # Smart-base-v2 is the base_model, so it is not listed here as a donor to itself.
merge_method: della
base_model: /media/administrator/oiseauxai1data1/modelout/Smart-base-v2
parameters: # Global parameters, including those for the DELLA method
  density: 0.58 # Single density for the DELLA pruning process
  epsilon: 0.1 # Single epsilon for the pruning
  lambda: 1.02 # Single lambda for scaling the final merged deltas
  normalize: false # If true, the weights above would be normalized to sum to 1
  int8_mask: true
  # Different density, epsilon, and lambda values can be tried experimentally.
dtype: bfloat16
out_dtype: bfloat16
chat_template: llama3
tokenizer:
  source: base
  pad_to_multiple_of: 8
```
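If the configuration above is saved to a file, the merge can be reproduced with mergekit's CLI. The filename and output directory below are placeholders, and the two flags are optional conveniences:

```shell
# Reproduce the merge from the YAML above. "config.yaml" and
# "./merged-model" are placeholder paths; --cuda runs the merge on GPU
# and --lazy-unpickle reduces peak RAM while loading checkpoints.
mergekit-yaml config.yaml ./merged-model --cuda --lazy-unpickle
```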