What is this?

The next-generation version of Mimicore, balancing roleplay performance, intelligence, and model size. I like this model; it reaches roughly 80% of my 24B MiniusLight v2.1.

GGUF quants: Normal - IMatrix

Template: Although the Mistral Tekken template yields a smarter model, I recommend the ChatML format for roleplaying.

If raw intelligence matters less to you, ChatML is often the better choice, since it produces more creative output. Mistral Tekken gives a smarter model, so giving it a try now and then is not a bad idea.
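For reference, ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` markers. A minimal sketch of building such a prompt (the helper function and example messages are illustrative, not part of the model card):

```python
# Sketch: render a chat history in ChatML format.
# The to_chatml helper is a hypothetical illustration of the template,
# not an official utility shipped with this model.
def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    out = []
    for m in messages:
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Open an assistant turn so the model generates the reply.
    out.append("<|im_start|>assistant\n")
    return "\n".join(out)

prompt = to_chatml([
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "The tavern door creaks open..."},
])
print(prompt)
```

Most frontends (e.g. SillyTavern) already ship a ChatML preset, so you normally just select it rather than build the string yourself.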

Configuration

models:
  - model: Delta-Vector/Francois-PE-V2-Huali-12B
    parameters:
      density: 0.9
      weight: 1
  - model: DoppelReflEx/MN-12B-Mimicore-GreenSnake
    parameters:
      density: 0.6
      weight: 0.8
  - model: yamatazen/EtherealAurora-12B-v2
    parameters:
      density: 0.8
      weight: 0.6
merge_method: dare_ties
base_model: Delta-Vector/Francois-PE-V2-Huali-12B
tokenizer_source: base
parameters:
  rescale: true
dtype: bfloat16
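This is a mergekit configuration (a DARE-TIES merge of the three listed models onto the Francois-PE base). A minimal sketch of reproducing it, assuming mergekit is installed and the config above is saved as `config.yaml` (both the filename and output path are placeholders):

```shell
# Sketch: run the merge with the mergekit CLI.
# config.yaml holds the YAML block above; ./LilithCore-v1-12B is the output dir.
pip install mergekit
mergekit-yaml config.yaml ./LilithCore-v1-12B
```

The `density`/`weight` parameters control, per source model, how many delta parameters are kept and how strongly they are mixed in; `rescale: true` renormalizes the merged weights.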
Model size: 12.2B params (safetensors, BF16)

Model tree for DoppelReflEx/LilithCore-v1-12B