
L3-Scrambled-Eggs-On-Toast-8B

L3-Scrambled-Eggs-On-Toast-8B is a role-play model merge of 18 models, built in 11 merging steps.

The goal is to create a model that is both creative and smart by using weight gradients. Each model has its own section of the gradient where it carries a larger weight to preserve its intelligence, while the remaining models in that section carry small weights to promote creativity.
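
As an illustration of how such a weight list behaves, here is a minimal sketch assuming mergekit's usual treatment of a parameter list as a gradient interpolated across the layer stack (an assumption about its internals, for intuition only):

# Minimal sketch: how a 5-point weight gradient such as
# [0.33, 0.0825, 0.0825, 0.0825, 0.0825] spreads across the 32 hidden layers
# of an 8B Llama 3 model. Assumes linear interpolation of the list over the
# layer stack; illustration only, not mergekit's exact internals.
import numpy as np

def expand_gradient(points, num_layers=32):
    """Interpolate a short gradient list to one weight per layer."""
    anchors = np.linspace(0.0, 1.0, num=len(points))  # positions of the list entries
    depths = np.linspace(0.0, 1.0, num=num_layers)    # relative depth of each layer
    return np.interp(depths, anchors, points)

# Poppy_Porpoise's gradient in Eggs-and-Bread-RP-pt.1: it peaks (0.33) in the
# earliest section of the model and stays small (0.0825) everywhere else.
print(expand_gradient([0.33, 0.0825, 0.0825, 0.0825, 0.0825]).round(3))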

The following models were used as inspiration:

Instruct Format

Llama 3
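
If you run the model directly through transformers instead of SillyTavern, the tokenizer's chat template should produce the Llama 3 Instruct layout. A hedged loading example, assuming the merge ships the stock Llama 3 Instruct chat template with its tokenizer:

# Hedged example: load the model and build a Llama 3 Instruct prompt via the
# tokenizer's chat template. Requires transformers (and accelerate for
# device_map="auto"); the system/user messages are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Casual-Autopsy/L3-Scrambled-Eggs-On-Toast-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a creative role-play partner."},
    {"role": "user", "content": "Describe the tavern we just walked into."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))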

Settings/Presets

Instruct/Context

Virt-io's SillyTavern Presets are recommended.

Sampler Settings

Here are the currently recommended settings for more creativity:

Top K: 60
Min P: 0.035
Rep Pen: 1.05
Rep Pen Range: 2048
Pres Pen: 0.15
Smoothing Factor: 0.25
Dyna Temp:
  Min Temp: 0.75
  Max Temp: 1.5
  Expo: 0.85

If you want more adherence, the Naive preset is recommended.
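
Not every backend exposes all of these samplers. Continuing from the loading example above, here is a hedged mapping of the directly supported ones onto a plain transformers generate() call; Dynamic Temperature, Smoothing Factor, presence penalty, and the repetition penalty range are frontend/backend features (e.g. SillyTavern with koboldcpp or text-generation-webui) with no one-to-one generate() argument:

# Hedged sampler mapping; only parameters transformers supports directly are
# shown (min_p needs a recent transformers release). A fixed mid-range
# temperature stands in for DynaTemp's 0.75-1.5 span.
output = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,          # stand-in for the DynaTemp range
    top_k=60,
    min_p=0.035,
    repetition_penalty=1.05,
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))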

Quants

Weighted Quants by:

Static Quants by:

Secret Sauce

Models Used

L3-Scrambled-Eggs-On-Toast-8B is a merge of the following models using LazyMergekit:

YAML Configs Used

The following YAML configs were used to make this model; a hedged sketch of how they could be rerun with mergekit follows the final config.

Eggs-and-Bread-RP-pt.1

models:
  - model: Sao10K/L3-8B-Stheno-v3.2
  - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: openlynn/Llama-3-Soliloquy-8B-v2
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-RP-pt.2

models:
  - model: Sao10K/L3-8B-Stheno-v3.2
  - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: openlynn/Llama-3-Soliloquy-8B-v2
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-RP

models:
  - model: Casual-Autopsy/Eggs-and-Bread-RP-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-RP-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-RP-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
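
In these SLERP steps, t is the interpolation factor between the two parents: values below 0.5 lean toward the base (pt.1), values above 0.5 toward pt.2, and the lists act as gradients across the layer stack with self_attn and mlp tensors alternating in opposite directions. A minimal sketch of the standard SLERP formula these steps rely on (an illustration, not mergekit's exact implementation, which also falls back to plain linear interpolation for nearly parallel tensors):

# Minimal SLERP sketch for two flattened weight tensors.
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between vectors a and b."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))  # angle between them
    if omega < eps:                        # nearly parallel: fall back to lerp
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)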

Eggs-and-Bread-IQ-pt.1

models:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
  - model: turboderp/llama3-turbcat-instruct-8b
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: TIGER-Lab/MAmmoTH2-8B-Plus
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: jondurbin/bagel-8b-v1.0
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: abacusai/Llama-3-Smaug-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-IQ-pt.2

models:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
  - model: turboderp/llama3-turbcat-instruct-8b
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: TIGER-Lab/MAmmoTH2-8B-Plus
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: jondurbin/bagel-8b-v1.0
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: abacusai/Llama-3-Smaug-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-IQ

models:
  - model: Casual-Autopsy/Eggs-and-Bread-IQ-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-IQ-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-IQ-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16

Eggs-and-Bread-Uncen-pt.1

models:
  - model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
  - model: AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: lodrick-the-lafted/Limon-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: vicgalle/Configurable-Llama-3-8B-v0.3
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Undi95/Llama3-Unholy-8B-OAS
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Undi95/Unholy-8B-DPO-OAS
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-Uncen-pt.2

models:
  - model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
  - model: AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: lodrick-the-lafted/Limon-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: vicgalle/Configurable-Llama-3-8B-v0.3
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Undi95/Llama3-Unholy-8B-OAS
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Undi95/Unholy-8B-DPO-OAS
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-Uncen

models:
  - model: Casual-Autopsy/Eggs-and-Bread-Uncen-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-Uncen-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-Uncen-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16

Scrambled-Eggs-On-Toast-1

models:
  - model: Casual-Autopsy/Eggs-and-Bread-RP
  - model: Casual-Autopsy/Eggs-and-Bread-Uncen
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-RP
parameters:
  t:
    - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
dtype: bfloat16

L3-Scrambled-Eggs-On-Toast-8B

models:
  - model: Casual-Autopsy/Scrambled-Eggs-On-Toast-1
  - model: Casual-Autopsy/Eggs-and-Bread-IQ
merge_method: slerp
base_model: Casual-Autopsy/Scrambled-Eggs-On-Toast-1
parameters:
  t:
    - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
dtype: bfloat16
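
For completeness, a hedged sketch of how the configs above could be rerun in order with the mergekit CLI. File and directory names are placeholders, and each later config would need to reference the local output directories in place of the Casual-Autopsy/* repo names:

# Hedged reproduction sketch (not the exact commands originally used).
import subprocess

steps = [
    ("eggs-and-bread-rp-pt1.yml", "out/Eggs-and-Bread-RP-pt.1"),
    ("eggs-and-bread-rp-pt2.yml", "out/Eggs-and-Bread-RP-pt.2"),
    ("eggs-and-bread-rp.yml", "out/Eggs-and-Bread-RP"),
    # ...the IQ and Uncen trios follow the same pattern...
    ("scrambled-eggs-on-toast-1.yml", "out/Scrambled-Eggs-On-Toast-1"),
    ("l3-scrambled-eggs-on-toast-8b.yml", "out/L3-Scrambled-Eggs-On-Toast-8B"),
]
for config, out_dir in steps:
    subprocess.run(["mergekit-yaml", config, out_dir, "--copy-tokenizer"], check=True)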