# kunoichi-lemon-royale-v2experiment1-32K-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
The result appears to be a successful adaptation to the Mistral v0.3 tokenizer: the merged model is coherent, although some damage is evident.
## Merge Details

### Merge Method
This model was merged using the SLERP merge method.
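SLERP (spherical linear interpolation) blends two weight tensors along the great-circle arc between them rather than averaging them linearly, which preserves vector magnitude better when the tensors point in different directions. A simplified NumPy sketch of the idea follows; it is illustrative only, not mergekit's exact implementation, and the flattening and fallback behavior are assumptions:

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherically interpolate between two weight tensors a and b at fraction t."""
    a_flat, b_flat = a.ravel(), b.ravel()
    # Compute the angle between the two tensors from their normalized directions.
    a_norm = a_flat / (np.linalg.norm(a_flat) + eps)
    b_norm = b_flat / (np.linalg.norm(b_flat) + eps)
    omega = np.arccos(np.clip(np.dot(a_norm, b_norm), -1.0, 1.0))
    if omega < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    # Weight each endpoint by the sine of its share of the arc.
    out = (np.sin((1.0 - t) * omega) / so) * a_flat + (np.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)
```

At `t=0` this returns the first tensor and at `t=1` the second, so a per-tensor `t` schedule (as in the configuration below) smoothly trades one model's weights for the other's.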
### Models Merged
The following models were included in the merge:
- grimjim/mistralai-Mistral-7B-Instruct-v0.3
- grimjim/kunoichi-lemon-royale-v2ext-32K-7B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: grimjim/mistralai-Mistral-7B-Instruct-v0.3
dtype: bfloat16
merge_method: slerp
slices:
- sources:
  - model: grimjim/mistralai-Mistral-7B-Instruct-v0.3
    layer_range: [0, 32]
  - model: grimjim/kunoichi-lemon-royale-v2ext-32K-7B
    layer_range: [0, 32]
parameters:
  t:
  - filter: embed_tokens
    value: 0.0
  - filter: lm_head
    value: 0.0
  - value: 0.8
```
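The `t` schedule above pins `embed_tokens` and `lm_head` to `0.0` (keeping the base model's vocabulary-facing tensors, which is what preserves the v0.3 tokenizer) while interpolating all other tensors at `0.8` toward the second model. A small sketch of how such a filter list could be resolved per tensor; the matching-by-substring rule is an assumption for illustration, not mergekit's exact logic:

```python
def resolve_t(tensor_name, t_rules):
    """Return the interpolation weight for a tensor: first matching
    filter wins, with an unfiltered entry acting as the default."""
    for rule in t_rules:
        if "filter" in rule:
            if rule["filter"] in tensor_name:
                return rule["value"]
        else:
            # Entry with no filter: applies to everything not matched above.
            return rule["value"]
    return 0.0

# The schedule from the configuration above.
t_rules = [
    {"filter": "embed_tokens", "value": 0.0},
    {"filter": "lm_head", "value": 0.0},
    {"value": 0.8},
]
```

Under this scheme, embedding and output-head tensors stay at the base model's weights while every transformer layer is interpolated 80% toward kunoichi-lemon-royale-v2ext-32K-7B.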