# MN-12b-RP-Ink

This is a merge of pre-trained language models created using mergekit. As a test, I removed several of the unused layers. The model still works, but it can get stuck in a loop. I am attempting to finetune the model on a longer conversational dataset to see whether that resolves the issue.

I would NOT use this model... It is for testing purposes only.
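For anyone testing it anyway, a quick way to reproduce the looping failure is to compare generations at different repetition penalties. This is a minimal sketch, assuming the merged weights are saved locally and the tokenizer ships a chat template; the `model_path`, prompt, and penalty values are placeholders:

```python
# Minimal loop-check: generate the same prompt at two repetition penalties.
# model_path is a placeholder; point it at wherever the merged weights live.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./MN-12b-RP-Ink-pruned"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Describe a rainy harbor town in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

for penalty in (1.0, 1.15):
    out = model.generate(
        input_ids,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        repetition_penalty=penalty,
    )
    print(f"--- repetition_penalty={penalty} ---")
    print(tokenizer.decode(out[0, input_ids.shape[-1]:], skip_special_tokens=True))
```

If the unpenalized run degenerates into a repeated phrase while the penalized one does not, you are seeing the issue described above.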

## Merge Details

### Merge Method

This model was merged using the Passthrough merge method.

### Models Merged

The following models were included in the merge:

* /storage/bases/MN-12b-RP-Ink

### Configuration

The following YAML configuration was used to produce this model (it keeps 36 of the base model's 40 layers, dropping layers 27-28 and 30-31):

```yaml
dtype: bfloat16
merge_method: passthrough
modules:
  default:
    slices:
    - sources:
      - layer_range: [0, 27]
        model: /storage/bases/MN-12b-RP-Ink
    - sources:
      - layer_range: [29, 30]
        model: /storage/bases/MN-12b-RP-Ink
    - sources:
      - layer_range: [32, 40]
        model: /storage/bases/MN-12b-RP-Ink
```
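To reproduce the merge, the YAML above can be run through mergekit. Below is a minimal sketch using mergekit's Python API; the config path and output directory are placeholders, and the exact `MergeOptions` fields may vary between mergekit versions:

```python
# Re-run the passthrough merge from the YAML config above.
# config.yml and the output path are placeholders.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    "./MN-12b-RP-Ink-pruned",
    options=MergeOptions(
        copy_tokenizer=True,  # carry the base model's tokenizer into the output
        lazy_unpickle=True,   # lower peak memory while reading shards
        cuda=False,           # set True to run the merge on GPU
    ),
)
```

Equivalently, the CLI does the same job: `mergekit-yaml config.yml ./MN-12b-RP-Ink-pruned`.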