
# FrankenCRIA v1.3-m.2

## What is FrankenCRIA?

This is a frankenmerge of davzoku/cria-llama2-7b-v1.3.

The configuration is the same as vilm/vinallama-12.5b-chat-DUS.

Please be aware that this model is highly experimental: no further training has been conducted after the merge, so its performance may not meet expectations, as discussed in the SOLAR paper.

πŸ“¦ FrankenCRIA Model Release

FrankenCRIA v1.3 comes with several variants.

## 🧩 Merge Details

### Merge Method

This model was merged using the passthrough merge method, which stacks the selected layer slices into a deeper model without interpolating any weights.
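As a minimal sketch, this kind of passthrough merge can be reproduced with mergekit's Python API (signatures assumed from the mergekit README and may vary between versions; `config.yml` is a hypothetical filename holding the YAML configuration shown below):

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# "config.yml" is a hypothetical filename for the YAML configuration
# listed in the Configuration section of this card.
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Stack the layer slices and write the merged checkpoint to disk.
run_merge(
    merge_config,
    out_path="./frankencria-llama2-12.5b-v1.3-m.2",
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```

The `mergekit-yaml config.yml ./output-model-directory` command-line invocation achieves the same result.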

### Models Merged

The following model was included in the merge:

- davzoku/cria-llama2-7b-v1.3

### Configuration

The following YAML configuration was used to produce this model:

```yaml
# https://huggingface.co/vilm/vinallama-12.5b-chat-DUS
slices:
  - sources:
      - model: davzoku/cria-llama2-7b-v1.3
        layer_range: [0, 16]
  - sources:
      - model: davzoku/cria-llama2-7b-v1.3
        layer_range: [8, 16]
  - sources:
      - model: davzoku/cria-llama2-7b-v1.3
        layer_range: [8, 16]
  - sources:
      - model: davzoku/cria-llama2-7b-v1.3
        layer_range: [16, 24]
  - sources:
      - model: davzoku/cria-llama2-7b-v1.3
        layer_range: [16, 24]
  - sources:
      - model: davzoku/cria-llama2-7b-v1.3
        layer_range: [24, 28]
  - sources:
      - model: davzoku/cria-llama2-7b-v1.3
        layer_range: [24, 28]
  - sources:
      - model: davzoku/cria-llama2-7b-v1.3
        layer_range: [28, 32]
merge_method: passthrough
dtype: bfloat16
```
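The slices above stack 16 + 8 + 8 + 8 + 8 + 4 + 4 + 4 = 60 decoder layers, versus 32 in the base model; the duplicated middle blocks account for the growth from about 7B to roughly 12.4B parameters. Below is a minimal usage sketch with transformers (it assumes torch and accelerate are installed and that enough memory is available for a 12.4B-parameter bfloat16 checkpoint; the prompt is illustrative):

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "davzoku/frankencria-llama2-12.5b-v1.3-m.2"

# The config alone confirms the stacked depth without downloading weights.
config = AutoConfig.from_pretrained(model_id)
print(config.num_hidden_layers)  # 60 = 16 + 8 + 8 + 8 + 8 + 4 + 4 + 4

# Load in bfloat16 to match the merge dtype.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("What is a llama?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```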