Eyas 17B

Overview

Eyas 17B (published as qingy2024/Eyas-17B-Base, 17.4B parameters) is a frankenmerge built on the Falcon3-10B architecture. Created with the mergekit library, it depth-upscales tiiuae/Falcon3-10B-Base by stacking overlapping slices of its layers, and is intended as a general-purpose base model for a range of natural language processing tasks.

Merge Details

Merge Method

This model was created using the passthrough merge method. Passthrough copies the selected layer slices from the source model verbatim and stacks them into a single deeper network, without averaging or otherwise modifying any weights, so the result is a standard checkpoint that remains compatible with the Hugging Face transformers library.
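As a rough illustration of what the passthrough stacking does (a sketch, not part of the model card's tooling): the slice boundaries below mirror the Configuration section, and `layer_range` is treated as a half-open interval, so each `[start, end)` slice contributes `end - start` layers.

```python
# Minimal sketch of how a passthrough merge assembles layers.
# The slice boundaries mirror the YAML in the Configuration section;
# layer_range is a half-open interval [start, end).

slices = [(0, 10), (5, 15), (10, 20), (15, 25), (20, 30), (25, 35), (30, 40)]

# Each slice contributes its layers verbatim, in order, so the merged
# model's layer list is simply the concatenation of all slices.
stacked = [layer for start, end in slices for layer in range(start, end)]

print(len(stacked))   # 70 layers, up from 40 in Falcon3-10B-Base
print(stacked[:12])   # [0, 1, ..., 9, 5, 6] -- layers 5-9 appear twice
```

Stacking 70 layers of a 40-layer, ~10B-parameter base gives roughly 10B × 70/40 ≈ 17.5B parameters; since the embeddings are not duplicated, the actual size comes out slightly lower, consistent with the published 17.4B figure.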

Models Merged

The following models were included in the merge:

- tiiuae/Falcon3-10B-Base

Eyas 17B is a self-merge: every slice in the configuration below comes from this single base model.

Configuration

The following YAML configuration was used to produce Eyas 17B:

```yaml
slices:
- sources:
  - layer_range: [0, 10]
    model: tiiuae/Falcon3-10B-Base
- sources:
  - layer_range: [5, 15]
    model: tiiuae/Falcon3-10B-Base
- sources:
  - layer_range: [10, 20]
    model: tiiuae/Falcon3-10B-Base
- sources:
  - layer_range: [15, 25]
    model: tiiuae/Falcon3-10B-Base
- sources:
  - layer_range: [20, 30]
    model: tiiuae/Falcon3-10B-Base
- sources:
  - layer_range: [25, 35]
    model: tiiuae/Falcon3-10B-Base
- sources:
  - layer_range: [30, 40]
    model: tiiuae/Falcon3-10B-Base
merge_method: passthrough
dtype: float16
```
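For reproducibility, here is one way to run the configuration above, assuming mergekit is installed (`pip install mergekit`) and the YAML is saved as `eyas-17b.yaml`; the `run_merge` usage follows mergekit's documented Python API, and the output path is illustrative.

```python
# Sketch: reproducing the merge with mergekit's Python API.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("eyas-17b.yaml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    out_path="./Eyas-17B-Base",   # output directory for the merged model
    options=MergeOptions(
        copy_tokenizer=True,      # copy the base model's tokenizer alongside
        lazy_unpickle=True,       # reduce peak memory while loading shards
    ),
)
```

The `mergekit-yaml` command-line entry point accepts the same configuration file as an alternative to the Python API.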
Inference Examples
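A minimal local-inference sketch using the transformers library; the prompt and generation settings are illustrative, not recommendations from the model authors.

```python
# Sketch: loading Eyas 17B locally with Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qingy2024/Eyas-17B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # matches the checkpoint's FP16 dtype
    device_map="auto",           # requires `pip install accelerate`
)

inputs = tokenizer("An eyas is a young", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```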
