# merge

This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method

This model was merged using the linear merge method.
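A linear merge computes an element-wise weighted average of the input models' parameters; with `normalize: true` and every weight set to 1.0, each of the four models listed below contributes equally. The snippet below is a minimal sketch of that idea on plain state dicts, for illustration only; mergekit's actual implementation also applies the LoRA adapters and handles sharded checkpoints and tokenizers.

```python
# Minimal sketch of a normalized linear merge: an element-wise weighted
# average of same-architecture checkpoints. Illustration only; mergekit
# additionally resolves LoRA adapters, sharded weights, and tokenizers.
import torch

def linear_merge(state_dicts, weights, normalize=True):
    """Merge a list of state dicts into one by weighted averaging."""
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]  # rescale so weights sum to 1
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].to(torch.float32)
                          for w, sd in zip(weights, state_dicts))
    return merged
```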
### Models Merged

The following models were included in the merge:
- MrRobotoAI/Nord-v1.2-8b-Uncensored-BASE-128k + svjack/Genshin_Impact_aya_23_8B_v3_Plot_Chat_roleplay_chat_lora_small
- MrRobotoAI/Nord-v1.2-8b-Uncensored-BASE-128k + svjack/DPO_Genshin_Impact_Mistral_Plot_Engine_Step_Json_Short_lora_small
- MrRobotoAI/Nord-v1.2-8b-Uncensored-BASE-128k + svjack/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_lora_small
- MrRobotoAI/Nord-v1.2-8b-Uncensored-BASE-128k + multimodalai/talent-critique-llama3_1_8b-tt_lora-model_4_2k-adapter-rev_3
### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: MrRobotoAI/Nord-v1.2-8b-Uncensored-BASE-128k+svjack/Genshin_Impact_aya_23_8B_v3_Plot_Chat_roleplay_chat_lora_small
  - model: MrRobotoAI/Nord-v1.2-8b-Uncensored-BASE-128k+svjack/DPO_Genshin_Impact_Mistral_Plot_Engine_Step_Json_Short_lora_small
  - model: MrRobotoAI/Nord-v1.2-8b-Uncensored-BASE-128k+svjack/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_lora_small
  - model: MrRobotoAI/Nord-v1.2-8b-Uncensored-BASE-128k+multimodalai/talent-critique-llama3_1_8b-tt_lora-model_4_2k-adapter-rev_3
parameters:
  weight: 1.0
merge_method: linear
normalize: true
dtype: float16
```
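To reproduce the merge, the configuration above can be saved (here assumed as `config.yaml`) and run through mergekit. The sketch below uses mergekit's documented Python API; option names and defaults may differ between mergekit versions, and the output directory is a placeholder.

```python
# Sketch: running the configuration above through mergekit's Python API.
# Assumes mergekit is installed; option names follow its documented example
# and may vary by version. "./merged-model" is a placeholder output path.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./merged-model",           # placeholder output directory
    options=MergeOptions(
        cuda=False,             # set True to run the merge on GPU
        copy_tokenizer=True,    # copy the base model's tokenizer to the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```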