Luminatium-L3-8b: Overpowered.
Recommended Settings
temperature: 1.3
min_p: 0.08
rep_pen: 1.1
top_k: 50
max_tokens / context: 8192
template: Llama-3 Instruct
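As a quick sanity check, here is a minimal sketch of applying these sampler settings with Hugging Face transformers; the prompt is illustrative, and min_p sampling assumes a recent transformers release (>= 4.41).

```python
# Minimal sketch: the recommended sampler settings via transformers.
# Assumes transformers >= 4.41 (which added min_p support).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Entropicengine/Luminatium-L3-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The tokenizer applies the Llama-3 Instruct chat template.
messages = [{"role": "user", "content": "Write a short scene set on a moonlit pier."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    do_sample=True,
    temperature=1.3,
    min_p=0.08,
    top_k=50,
    repetition_penalty=1.1,
    max_new_tokens=512,  # the full context window is 8192 tokens
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```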
Merge Details
Merge Method
This model was created using SLERP (Spherical Linear Interpolation), a merge technique that blends model weights along the great-circle arc between them in weight space instead of averaging them linearly. Interpolating along the arc avoids the norm shrinkage that plain linear averaging can cause, giving a smoother transition between the two parent models' capabilities.
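For intuition, here is an illustrative Python sketch of SLERP between two weight tensors. This is not mergekit's internal code; the linear fallback for near-parallel tensors is a common convention, not something specified by this card.

```python
# Illustrative SLERP between two weight tensors at interpolation factor t.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Angle between the two weight vectors, computed on normalized copies.
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0)
    omega = torch.acos(dot)
    if omega.item() < 1e-4:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1 - t) * a + t * b
    sin_omega = torch.sin(omega)
    # Interpolate along the great-circle arc, then restore shape and dtype.
    out = (torch.sin((1 - t) * omega) / sin_omega) * a_flat \
        + (torch.sin(t * omega) / sin_omega) * b_flat
    return out.reshape(a.shape).to(a.dtype)

# e.g. merged = slerp(0.5, state_dict_a[name], state_dict_b[name])
```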
Models Merged
The following models were included in the merge:
- Sao10K/L3-8B-Stheno-v3.2
- Sao10K/L3-8B-Lunaris-v1
Configuration
base_model: Sao10K/L3-8B-Stheno-v3.2
dtype: bfloat16
merge_method: slerp
modules:
  default:
    slices:
      - sources:
          - layer_range: [0, 32]
            model: Sao10K/L3-8B-Stheno-v3.2
          - layer_range: [0, 32]
            model: Sao10K/L3-8B-Lunaris-v1
        parameters:
          t:
            - filter: self_attn
              value: [0.0, 0.5, 0.3, 0.7, 1.0]
            - filter: mlp
              value: [1.0, 0.5, 0.7, 0.3, 0.0]
            - value: 0.5
This model was created using mergekit.
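A note on the `t` schedules above: each five-value list is a gradient of interpolation factors that mergekit spreads across the model's 32 layers, with the self_attn and mlp schedules being mirror images of each other and `value: 0.5` acting as an even split for all remaining weights. A rough sketch of that expansion, assuming simple linear interpolation between the anchor points (an assumption, not mergekit's exact code):

```python
# Rough sketch: expanding a short gradient of t anchors into one
# interpolation factor per transformer layer.
import numpy as np

def expand_gradient(anchors, num_layers=32):
    anchor_pos = np.linspace(0.0, 1.0, num=len(anchors))
    layer_pos = np.linspace(0.0, 1.0, num=num_layers)
    return np.interp(layer_pos, anchor_pos, anchors)

print(expand_gradient([0.0, 0.5, 0.3, 0.7, 1.0]))  # self_attn t per layer
print(expand_gradient([1.0, 0.5, 0.7, 0.3, 0.0]))  # mlp t per layer
```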