kunoichi-lemon-royale-hamansu-v1-32k-7B
This is a merge of pre-trained language models created using mergekit.
The model is subtly damaged, but the result might still have entertainment value.
Merge Details
Merge Method
This model was merged using the SLERP (spherical linear interpolation) merge method, with grimjim/kunoichi-lemon-royale-v2experiment1-32K-7B as the base model.
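As a rough intuition, SLERP blends each pair of corresponding weight tensors along an arc rather than a straight line, using the interpolation factor t given in the configuration below. The Python sketch that follows is illustrative only (it assumes PyTorch; the function name and numerical details are not mergekit's exact implementation):

import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Work on flattened float32 copies; keep the originals for the final blend.
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    # Measure the angle between the two weight vectors via their normalized directions.
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(torch.dot(a_dir, b_dir), -1.0, 1.0)
    # Nearly colinear tensors: fall back to plain linear interpolation.
    if dot.abs() > 0.9995:
        return (1.0 - t) * a + t * b
    omega = torch.arccos(dot)
    so = torch.sin(omega)
    # Interpolate along the arc between the original (unnormalized) vectors.
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)

With t = 0.5, as used throughout this merge, each tensor lands halfway along the arc between the two parent models' weights.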
Models Merged
The following models were included in the merge:
- Delta-Vector/Hamanasu-7B-instruct
Configuration
The following YAML configuration was used to produce this model:
base_model: grimjim/kunoichi-lemon-royale-v2experiment1-32K-7B
dtype: bfloat16
merge_method: slerp
slices:
- sources:
  - model: grimjim/kunoichi-lemon-royale-v2experiment1-32K-7B
    layer_range: [0, 32]
  - model: Delta-Vector/Hamanasu-7B-instruct
    layer_range: [0, 32]
parameters:
  t:
  - filter: embed_tokens
    value: 0.5
  - filter: lm_head
    value: 0.5
  - value: 0.5
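Assuming the YAML above is saved as config.yml, the merge should be reproducible with the mergekit CLI (for example, mergekit-yaml config.yml ./output-model-directory). The resulting model can then be loaded like any other Hugging Face causal LM; the snippet below is a minimal sketch that assumes the transformers and accelerate packages and the repository id from this card:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "grimjim/kunoichi-lemon-royale-hamansu-v1-32k-7B"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# bfloat16 matches the dtype used for the merge; device_map="auto" requires accelerate.
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Write a short scene set in a lemon orchard."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))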