gemma-2-13b-it

gemma-2-13b-it is a passthrough self-merge of Qwen/Qwen2-7B created using mergekit: overlapping slices of the base model's layers are stacked to produce a deeper model.

🧩 Configuration

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 8]
    model: Qwen/Qwen2-7B
- sources:
  - layer_range: [4, 12]
    model: Qwen/Qwen2-7B
- sources:
  - layer_range: [8, 16]
    model: Qwen/Qwen2-7B
- sources:
  - layer_range: [12, 20]
    model: Qwen/Qwen2-7B
- sources:
  - layer_range: [16, 24]
    model: Qwen/Qwen2-7B
- sources:
  - layer_range: [20, 28]
    model: Qwen/Qwen2-7B
```
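As a rough sanity check on the configuration above, the snippet below counts the layers the passthrough merge stacks together. The slice ranges come straight from the config; the layer count of Qwen2-7B (28) and the idea that the extra layers account for the jump from ~7B to ~12.3B parameters are assumptions for illustration, not figures from this card.

```python
# Layer ranges copied from the mergekit config above.
slices = [(0, 8), (4, 12), (8, 16), (12, 20), (16, 24), (20, 28)]

# Each slice is half-open [start, end), so it contributes end - start layers.
layers_per_slice = [end - start for start, end in slices]
total_layers = sum(layers_per_slice)

print(layers_per_slice)  # every slice contributes 8 layers
print(total_layers)      # 48 layers in the merged model
```

Consecutive slices overlap by 4 layers, so the merged model has 48 transformer layers versus the base model's 28; duplicating roughly 20 layers' worth of weights (plus the untouched embeddings) is consistent with the reported ~12.3B parameter count.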
Model size: 12.3B params · Tensor type: BF16 · Safetensors