Finch 7B Merge
A SLERP merge of my two current favorite 7B models: macadeliccc/WestLake-7B-v2-laser-truthy-dpo and SanjiWatsuki/Kunoichi-DPO-v2-7B.
A set of GGUF quants of Finch is also available.
Settings
I recommend using the ChatML format. As for samplers, I recommend the following (a sketch of applying them follows the list):
Temperature: 1.2
Min P: 0.2
Smoothing Factor: 0.2
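For reference, here is a minimal sketch of loading one of the GGUF quants and applying these settings with llama-cpp-python (assuming a recent version that supports min_p). The model path is a placeholder, and Smoothing Factor is not exposed by llama-cpp-python itself (frontends such as text-generation-webui expose it as smoothing_factor), so only temperature and Min P are set here.

```python
# Sketch: run a Finch GGUF quant with the recommended format and samplers.
# The model file name is a placeholder, not an actual release artifact.
from llama_cpp import Llama

llm = Llama(
    model_path="finch-7b.Q6_K.gguf",  # placeholder path to a Finch GGUF quant
    n_ctx=4096,
    chat_format="chatml",             # the recommended prompt format
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Introduce yourself in one sentence."},
    ],
    temperature=1.2,  # recommended temperature
    min_p=0.2,        # recommended Min P
    # Smoothing Factor 0.2 would be set here on backends that support it.
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```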
Mergekit Config
base_model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
dtype: float16
merge_method: slerp
parameters:
  t:
    - filter: self_attn
      value: [0.0, 0.5, 0.3, 0.7, 1.0]
    - filter: mlp
      value: [1.0, 0.5, 0.7, 0.3, 0.0]
    - value: 0.5
slices:
  - sources:
      - layer_range: [0, 32]
        model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
      - layer_range: [0, 32]
        model: SanjiWatsuki/Kunoichi-DPO-v2-7B
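Here t is the SLERP interpolation weight between the two models (0 keeps the base model's weights, 1 the other's), scheduled across layer blocks separately for the self_attn and mlp tensors, with 0.5 everywhere else. To reproduce the merge, save the config as config.yml and run it with mergekit; below is a minimal sketch using mergekit's Python API, with the output path and options as placeholders.

```python
# Sketch: run the merge config above with mergekit's Python API.
# Paths are placeholders; `mergekit-yaml config.yml ./finch-7b` is the
# equivalent CLI invocation.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./finch-7b",             # placeholder output directory
    options=MergeOptions(
        cuda=False,           # set True to merge on a GPU
        copy_tokenizer=True,  # reuse the base model's tokenizer
    ),
)
```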