
Winter Garden 7B - β - "The Writer"

It was mentioned that we are in the OpenAI dark winter, so I thought I would make myself a nice winter garden.

An experiment

I've merged four partitions successfully in the past, so let's go for 9! I started with:

  • Mistral-7B-v0.1

and merged in

  • ZySec-7B-v1
  • LemonadeRP-4.5.3
  • dpo-binarized-NeutrixOmnibe-7B
  • Multi-Verse-RP-7B
  • AlphaMonarch-7B
  • opus-v1.2-7b
  • Kunoichi-DPO-v2-7B
  • Noromaid-7B-0.4-DPO
  • ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2

9-partition merge

All of the layers were partitioned into 9 random bins. Alternating models were slerped with [0...1] and [1...0] gradients, except attention, which was slerped at 0.03.

This means that the model is still predominantly ordered around base Mistral, including half of the input and output layers and 28% of attention.
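For reference, the SLERP (spherical linear interpolation) operation behind this kind of merge can be sketched as below. This is a minimal illustration, not the actual merge script: the function name, the 9-entry gradient, and the 0.03 attention weight are taken from the description above, and everything operates on flattened NumPy arrays rather than real checkpoint tensors.

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel weights: fall back to plain linear interpolation.
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)

# Illustrative per-bin interpolation weights for a 9-bin merge:
ts = np.linspace(0.0, 1.0, 9)   # [0...1] gradient
ts_rev = ts[::-1]               # [1...0] gradient for the alternating model
ATTN_T = 0.03                   # attention stays very close to the base model
```

At t = 0 the result is exactly the first tensor and at t = 1 exactly the second, which is why a 0.03 weight on attention keeps those layers dominated by the base model.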

Other

Includes fast tokenizer.

Chat Template

There is currently a chat template for conversational turns, though it may need more tuning before it works well with this model. Initial tests show this model is a real talker and writer.

  • If you prompt with [WP] <prompt>\n\n it will take right off.

It doesn't seem to know ### Instruction: or <|im_start|>; <|user|> / <|assistant|> works but tends to get quickly mixed up with HTML, and back-and-forth conversation devolves into long narratives.
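Putting the prompting advice above into code, a writing-prompt request can be built as a plain string. This is a sketch; the helper name writing_prompt is illustrative, and the resulting string would be passed to whatever generation call you use (e.g. a transformers pipeline).

```python
def writing_prompt(topic: str) -> str:
    """Format a story prompt in the [WP] style this model responds to."""
    return f"[WP] {topic}\n\n"

prompt = writing_prompt("A gardener discovers a door beneath the frost.")
# Feed `prompt` to the model; it should "take right off" into a narrative.
```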
