---
base_model:
- Undi95/PsyMedRP-v1-20B
- Undi95/MXLewd-L2-20B
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP merge method, with [Undi95/PsyMedRP-v1-20B](https://huggingface.co/Undi95/PsyMedRP-v1-20B) as the base; a brief sketch of the interpolation appears below the configuration.

### Models Merged

The following models were included in the merge:
* [Undi95/PsyMedRP-v1-20B](https://huggingface.co/Undi95/PsyMedRP-v1-20B)
* [Undi95/MXLewd-L2-20B](https://huggingface.co/Undi95/MXLewd-L2-20B)

### Configuration

The following YAML configuration was used to produce this model. Note that in mergekit's SLERP, `t = 0` returns the base model (PsyMedRP here) and `t = 1` returns the other model (MXLewd):

```yaml
slices:
  - sources:
      - model: Undi95/PsyMedRP-v1-20B
        layer_range: [0, 62]  # PsyMedRP has 62 layers
      - model: Undi95/MXLewd-L2-20B
        layer_range: [0, 62]  # MXLewd has 62 layers
merge_method: slerp
base_model: Undi95/PsyMedRP-v1-20B
parameters:
  t:
    - filter: self_attn
      value: [0.7, 0.6, 0.8, 0.9, 1]    # attention blend leans toward MXLewd, increasingly so in deeper layers
    - filter: mlp
      value: [0.6, 0.7, 0.8, 0.4, 0.5]  # MLP blend leans toward MXLewd early on, back toward PsyMedRP in later layers
    - value: 0.65                       # default for all remaining tensors: a slight lean toward MXLewd
dtype: bfloat16  # merge in bfloat16; switch to float16 if the target hardware lacks bf16 support
```
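### SLERP in brief

SLERP (spherical linear interpolation) blends each pair of corresponding weight tensors along the arc between them rather than along a straight line, which preserves the magnitude of the weights better than plain averaging. Below is a minimal NumPy sketch of the idea; it is illustrative only, not mergekit's actual implementation, and the function name, flattening, and epsilon handling are assumptions made for the example:

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between weight tensors a (t=0) and b (t=1)."""
    a_flat, b_flat = a.ravel(), b.ravel()
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_unit, b_unit), -1.0, 1.0)
    omega = np.arccos(dot)  # angle between the two weight vectors
    if omega < eps:
        # Near-parallel tensors: fall back to ordinary linear interpolation
        return ((1 - t) * a_flat + t * b_flat).reshape(a.shape)
    so = np.sin(omega)
    out = (np.sin((1 - t) * omega) / so) * a_flat + (np.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)
```

With `base_model: Undi95/PsyMedRP-v1-20B`, the per-filter `t` lists in the configuration above interpolate this factor across layer depth for the attention and MLP tensors separately.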
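### Reproducing and using the merge

To reproduce this model, save the configuration above as `config.yaml` and run mergekit's CLI, e.g. `mergekit-yaml config.yaml ./output-model-directory` (adding `--cuda` to merge on GPU). The merged model can then be loaded with `transformers` as sketched below; the repo id is a placeholder, so substitute the actual Hub id or a local path:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id: replace with the published repo or the local merge output directory.
model_id = "your-username/merged-20b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Hello,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```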