---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# merged_model_output

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with /media/administrator/oiseauxai1data/modelweights/Negative_LLAMA_70B as the base model.

### Models Merged

The following models were included in the merge:

* /media/administrator/oiseauxai1data/modelweights/Fallen-Llama-3.3-R1-70B-v1
* /media/administrator/oiseauxai1data/modelweights/fallen-safeword-70b-r1-v4.1
* /media/administrator/oiseauxai1data/modelweights/Bigger-Body-70b
* /media/administrator/oiseauxai1data/modelweights/Forgotten-Abomination-70B-v5.0

### Configuration

The following YAML configuration was used to produce this model:

```yaml
# --- Mergekit Example: model_stock ---
# Method: Averages "stock" models and combines the result with a base model.

models:
  - model: /media/administrator/oiseauxai1data/modelweights/Forgotten-Abomination-70B-v5.0
  - model: /media/administrator/oiseauxai1data/modelweights/fallen-safeword-70b-r1-v4.1
  - model: /media/administrator/oiseauxai1data/modelweights/Fallen-Llama-3.3-R1-70B-v1
  - model: /media/administrator/oiseauxai1data/modelweights/Bigger-Body-70b
  - model: /media/administrator/oiseauxai1data/modelweights/Negative_LLAMA_70B
base_model: /media/administrator/oiseauxai1data/modelweights/Negative_LLAMA_70B
model_name: Dark-Base-V2          # Name of your merge
dtype: float32                    # Working dtype during the merge: float32, float16, or bfloat16
out_dtype: bfloat16               # Dtype of the saved output weights: float32, float16, or bfloat16
merge_method: model_stock
parameters:
  filter_wise: false              # Default
tokenizer_source: base            # 'base' uses the base model's tokenizer; 'union' merges vocabularies (use with care)
chat_template: llama3             # Chat template (chatml, llama3, etc.)
license: apache-2.0               # License type
```
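Assuming mergekit is installed, a merge like this one can be reproduced by saving the configuration above to a file and running the `mergekit-yaml` CLI. The model paths in the config are local to the author's machine and would need to point at weights you have downloaded; the output directory name below is only an illustration.

```shell
# Install mergekit (sketch; consider pinning a version in practice)
pip install mergekit

# Save the YAML configuration above as config.yaml, then run the merge.
# --cuda performs the merge arithmetic on GPU; --lazy-unpickle lowers peak RAM use.
mergekit-yaml config.yaml ./Dark-Base-V2 \
    --cuda \
    --lazy-unpickle
```

Note that merging five 70B models at float32 working precision is memory-intensive; `--lazy-unpickle` (and, if needed, a larger swap or sharded loading) helps keep the process within host RAM.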