# The-Omega-Directive-12B-v1.0
This is a merge of pre-trained language models created using mergekit. The model is very repetitive and does not function well: after removing layers from the base model (the configuration below drops layers 25-26 and 29-30), I found it close to unusable. I am currently building a small RP dataset from synthetic data generated by Claude 3.7 Sonnet and Claude 3.5 Haiku, with the aim of retraining the smaller models.
## Merge Details

### Merge Method
This model was merged using the Passthrough merge method.
### Models Merged
The following models were included in the merge:
- /storage/bases/The-Omega-Directive-M-12B-v1.0
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
modules:
  default:
    slices:
      - sources:
          - layer_range: [0, 25]
            model: /storage/bases/The-Omega-Directive-M-12B-v1.0
      - sources:
          - layer_range: [27, 29]
            model: /storage/bases/The-Omega-Directive-M-12B-v1.0
      - sources:
          - layer_range: [31, 40]
            model: /storage/bases/The-Omega-Directive-M-12B-v1.0
```
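
For reference, a minimal sketch of reproducing this merge with mergekit's Python API, assuming the configuration above is saved as `merge-config.yml`. The output path and option values here are illustrative, not the exact settings used for this model:

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "merge-config.yml"  # the YAML configuration shown above
OUTPUT_PATH = "./The-Omega-Directive-12B-v1.0"  # illustrative output directory

# Parse the YAML file into mergekit's configuration object.
with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the passthrough merge; copy_tokenizer carries the base tokenizer
# over into the output directory alongside the merged weights.
run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=False,  # set True to run the merge on GPU
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The equivalent can also be done from the command line with `mergekit-yaml`.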
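
A minimal sketch of loading the merged model for local inference with Hugging Face transformers; the model path is illustrative (point it at the merge output directory or the hosted repo id):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "./The-Omega-Directive-12B-v1.0"  # illustrative local path

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```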