KiloNovaSynth-12B

Where matter itself is squeezed to the edge of meaning, two dying stars spiral inward.
The silence breaks with a final collision, forging new worlds in the ashes.
Light blooms briefly, soft but searing, as gravity sings in frequencies never heard.
This is not fire, but memory collapsed into mass.

🔧 Recommended Sampling Settings:

Temperature: 0.75 to 1.25
Min P: 0.035
Context Length: Stable up to 12k tokens; longer contexts may work but are untested
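
For reference, here is a minimal sketch of these settings expressed as transformers generation kwargs (the dict name is illustrative, and min_p sampling requires a reasonably recent transformers release):

generation_kwargs = {
    "do_sample": True,       # enable sampling rather than greedy decoding
    "temperature": 1.0,      # anywhere in the recommended 0.75-1.25 range
    "min_p": 0.035,          # the recommended Min P cutoff
    "max_new_tokens": 256,
}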

💬 Prompt Format

Supports ChatML-style messages. Example:

<|im_start|>user
Your question here.
<|im_end|>
<|im_start|>assistant
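
Assuming the bundled tokenizer ships a ChatML chat template (the usage example below makes the same assumption), apply_chat_template reproduces this format automatically:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Marcjoni/KiloNovaSynth-12B")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Your question here."}],
    tokenize=False,
    add_generation_prompt=True,  # appends the trailing <|im_start|>assistant header
)
print(prompt)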

KiloNovaSynth-12B is a merge of the following models using LazyMergekit:

- yamatazen/LorablatedStock-12B
- yamatazen/EtherealAurora-12B-v2

with DreadPoor/Irix-12B-Model_Stock as the base model.

🧩 Configuration

merge_method: dare_ties
base_model:
  model: DreadPoor/Irix-12B-Model_Stock
  name: Irix
models:
  - model: yamatazen/LorablatedStock-12B
    name: Lorablated
    parameters:
      weight: 0.4
      density: 0.7
  - model: yamatazen/EtherealAurora-12B-v2
    name: Aurora
    parameters:
      weight: 0.3
      density: 0.6
parameters:
  normalize: true
  int8_mask: true
  anneal_factor: 0.2
dtype: bfloat16

layer_parameters:
  - filter: "attn"
    sources:
      - model: Irix
        weight: 0.8
      - model: Aurora
        weight: 0.15
      - model: Lorablated
        weight: 0.05
  
  - filter: "mlp"
    sources:
      - model: Lorablated
        weight: 0.6
      - model: Aurora
        weight: 0.3
      - model: Irix
        weight: 0.1
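
For intuition only, here is a toy NumPy sketch of the core dare_ties idea, not mergekit's actual implementation (it also ignores the layer_parameters overrides and anneal_factor above): DARE sparsifies each task vector by random dropping and rescaling by 1/density, a TIES-style sign election keeps only sign-consistent contributions, and normalize: true rescales by the weight mass that actually contributed.

import numpy as np

rng = np.random.default_rng(0)

def dare(delta, density):
    # DARE: drop each entry with probability (1 - density),
    # then rescale survivors by 1/density to preserve the expected value.
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def dare_ties(base, tuned, weights, densities):
    # Sparsified, weighted task vectors (fine-tune minus base).
    deltas = np.stack([
        w * dare(t - base, d) for t, w, d in zip(tuned, weights, densities)
    ])
    # TIES-style sign election: keep only contributions whose sign
    # agrees with the sign of the summed delta at each parameter.
    elected = np.sign(deltas.sum(axis=0))
    agree = np.sign(deltas) == elected
    merged = np.where(agree, deltas, 0.0).sum(axis=0)
    # normalize: true -> divide by the weight mass that actually contributed.
    w = np.array(weights).reshape(-1, *([1] * base.ndim))
    mass = np.where(agree, w, 0.0).sum(axis=0)
    return base + merged / np.clip(mass, 1e-8, None)

# Toy run on a single 4-parameter "tensor".
base = np.zeros(4)
lorablated = np.array([0.5, -0.2, 0.1, 0.4])
aurora = np.array([0.3, 0.2, -0.1, 0.5])
print(dare_ties(base, [lorablated, aurora], weights=[0.4, 0.3], densities=[0.7, 0.6]))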

💻 Usage

!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Marcjoni/KiloNovaSynth-12B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the chat messages into a ChatML prompt with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,  # half precision; the checkpoint itself is stored in bfloat16
    device_map="auto",          # spread layers across available devices
)

# top_k=0 and top_p=1 disable those filters, leaving pure temperature sampling;
# see the recommended sampling settings above for alternative values.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=1, top_k=0, top_p=1)
print(outputs[0]["generated_text"])