
Nous-Capybara-34B and Tess-M-Creative-v1.0 merged, then quantized with exllamav2 using a custom calibration dataset of 200 rows (400K tokens): a long Vicuna-format chat, a sci-fi story, and a fantasy story. This should yield better chat performance than the default wikitext calibration.
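
For context, a calibration parquet in the shape convert.py expects can be assembled with a few lines of pandas. This is a sketch, not the script used for this quant: the input file names are hypothetical, and the single `text` column is assumed to match the layout of the stock wikitext calibration parquets.

```python
# Sketch: build a calibration parquet for exllamav2's convert.py.
# File names are hypothetical; the single "text" column is assumed to match
# the layout of the stock wikitext calibration parquets.
import pandas as pd

sources = ["vicuna_chat.txt", "scifi_story.txt", "fantasy_story.txt"]

rows = []
for path in sources:
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # Chunk by characters as a rough proxy for row length; convert.py
    # re-tokenizes and truncates each row to -l tokens anyway.
    chunk = 8192
    rows.extend(text[i : i + chunk] for i in range(0, len(text), chunk))

pd.DataFrame({"text": rows}).to_parquet("smol.parquet", index=False)
```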

Quantized to 4bpw, enough for ~47K tokens of context on a 24GB GPU.
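
As a rough sanity check on that figure, here is a back-of-the-envelope estimate. The attention config (60 layers, 8 KV heads, 128-dim heads) is taken from Yi-34B's published config, and an 8-bit KV cache is assumed; neither is stated in this card.

```python
# Back-of-the-envelope VRAM estimate for 4bpw weights plus KV cache.
# Yi-34B attention config assumed: 60 layers, 8 KV heads, head_dim 128.
weights_gb = 34e9 * 4 / 8 / 1e9               # ~17 GB at 4 bits per weight

kv_bytes_per_token = 2 * 60 * 8 * 128 * 1     # K+V, 1 byte/elem (8-bit cache)
cache_gb = kv_bytes_per_token * 47_000 / 1e9  # ~5.8 GB at 47K context

print(round(weights_gb + cache_gb, 1))        # ~22.8 GB, near a 24 GB budget
```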

The following merge config was used:

```yaml
models:
  - model: /home/alpha/Storage/Models/Raw/larryvrh_Yi-34B-200K-Llamafied
    # no parameters necessary for base model
  - model: /home/alpha/Storage/Models/Raw/migtissera_Tess-M-v1.0
    parameters:
      density: 0.6
      weight: 1.0
  - model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
    parameters:
      density: 0.6
      weight: 1.0
merge_method: ties
base_model: /home/alpha/Storage/Models/Raw/larryvrh_Yi-34B-200K-Llamafied
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
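
With a recent mergekit checkout, a config like this is applied with the `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yml ./Capybara-Tess-Yi-34B-200K --cuda`. The exact invocation varies by mergekit version and is an assumption here, not taken from the card.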

First exllamav2 quantization pass, which also writes the measurement file (`-om`) so the second pass can reuse it:

```sh
python convert.py \
  --in_dir /home/alpha/FastModels/Capybara-Tess-Yi-34B-200K \
  -o /home/alpha/FastModels/Capybara-Tess-Yi-34B-200K-exl2 \
  -om /home/alpha/FastModels/capytessmes.json \
  --cal_dataset /home/alpha/Documents/smol.parquet \
  -l 2048 -r 80 -ml 2048 -mr 40 -gr 40 -ss 4096 -nr -b 3.5 -hb 6
```

Second exllamav2 quantization pass, reusing the measurement (`-m`) and compiling the finished quantization to its own directory (`-cf`):

```sh
python convert.py \
  --in_dir /home/alpha/FastModels/Capybara-Tess-Yi-34B-200K \
  -o /home/alpha/FastModels/Capybara-Tess-Yi-34B-200K-exl2 \
  -m /home/alpha/FastModels/capytessmes.json \
  --cal_dataset /home/alpha/Documents/medium.parquet \
  -l 2048 -r 200 -ml 2048 -mr 40 -gr 200 -ss 4096 -b 3.1 -hb 6 -nr \
  -cf /home/alpha/FastModels/Capybara-Tess-Yi-34B-200K-exl2-31bpw
```
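
For loading the result, here is a minimal inference sketch against exllamav2's Python API, following its bundled examples; the model path and sampler settings are placeholders, not the author's setup.

```python
# Minimal exllamav2 inference sketch; path and settings are placeholders.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/Capybara-Tess-Yi-34B-200K-exl2"
config.prepare()
config.max_seq_len = 47104  # trim to whatever your VRAM allows

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

prompt = "SYSTEM: You are a helpful assistant.\nUSER: Hello!\nASSISTANT:"
print(generator.generate_simple(prompt, settings, 200))
```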

Both source models use Vicuna prompt syntax, so the merge does as well.

Prompt Format:

```
SYSTEM: ...
USER: ...
ASSISTANT: ...
```

Stop token: `</s>`
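
A small helper for assembling multi-turn prompts in this format (a sketch; appending `</s>` after each completed assistant turn follows the usual Vicuna convention):

```python
# Sketch of a Vicuna-style prompt builder; </s> after each finished
# assistant turn follows the usual Vicuna convention.
def build_prompt(system: str, turns: list[tuple[str, str]], user_msg: str) -> str:
    parts = [f"SYSTEM: {system}"]
    for user, assistant in turns:   # prior (user, assistant) exchanges
        parts.append(f"USER: {user}")
        parts.append(f"ASSISTANT: {assistant}</s>")
    parts.append(f"USER: {user_msg}")
    parts.append("ASSISTANT:")      # model completes from here; stop on </s>
    return "\n".join(parts)
```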


Credits:

- https://github.com/cg123/mergekit
- https://huggingface.co/NousResearch/Nous-Capybara-34B/discussions
- https://huggingface.co/migtissera/Tess-M-Creative-v1.0
- https://huggingface.co/larryvrh/Yi-34B-200K-Llamafied
- https://huggingface.co/01-ai/Yi-34B-200K
