
Quants available from Bartowski <3: GGUF. Exl2 quants from me: 4bpw Exl2, 6bpw Exl2.
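
If you run one of the GGUF quants, a minimal llama-cpp-python sketch might look like the following; the quant filename, context size, and sampling settings are placeholders, not part of this card:

```python
# Minimal sketch: running a GGUF quant with llama-cpp-python.
# The filename below is hypothetical; point model_path at whichever quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Captain_BMO-12B-Q4_K_M.gguf",  # hypothetical quant file
    n_ctx=8192,        # context window; adjust to your RAM/VRAM
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

# Mistral-style prompt (see the formatting note below).
prompt = "[INST] Introduce yourself, Captain BMO. [/INST]"
out = llm(prompt, max_tokens=128, temperature=0.8)
print(out["choices"][0]["text"])
```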


Uses Mistral formatting; text completion preset here.
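
As a rough illustration of Mistral-style prompting with transformers (assuming the repo's tokenizer ships a chat template; the generation settings here are placeholders, not a recommendation):

```python
# Minimal sketch: Mistral-style prompting via the tokenizer's chat template.
# If the repo does not ship a chat template, format "[INST] ... [/INST]" by hand instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Nitral-AI/Captain_BMO-12B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself, Captain BMO."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```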

Notes: Most likely a one-off train; this was done purely for internal testing purposes but seemed good enough to release. I do not plan to offer any kind of extended support for this model, so your mileage may vary depending on use case and context size.

  • (Nemo 12B instruct as base)
  • 200k randomized subset of GU_instruct-Remastered-1.1, with a splash of 25k hathor/poppy sauce, slow-cooked for 3 epochs on medium heat (a rough sketch of what such a run might look like follows below).
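
Purely as an illustration of what that recipe could look like, here is a hypothetical TRL-based sketch; the dataset files, base repo id, and hyperparameters are all assumptions, not the actual training setup:

```python
# Hypothetical sketch of a 3-epoch SFT run on a Nemo 12B instruct base using TRL.
# Dataset paths, base repo id, and hyperparameters are placeholders, not the card author's settings.
from datasets import load_dataset, concatenate_datasets
from trl import SFTConfig, SFTTrainer

base = "mistralai/Mistral-Nemo-Instruct-2407"  # assumed Nemo 12B instruct base repo id

# Assumed local JSONL exports with a "text" field.
main = load_dataset("json", data_files="gu_instruct_remastered_1.1_200k.jsonl", split="train")
extra = load_dataset("json", data_files="hathor_poppy_25k.jsonl", split="train")
train_ds = concatenate_datasets([main, extra]).shuffle(seed=42)

config = SFTConfig(
    output_dir="captain-bmo-12b",
    dataset_text_field="text",
    num_train_epochs=3,              # "slow-cooked for 3 epochs"
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=1e-5,              # "medium heat" -- placeholder value
    bf16=True,
)

trainer = SFTTrainer(model=base, train_dataset=train_ds, args=config)
trainer.train()
```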
Model size: 12.2B params · Tensor type: BF16 · Safetensors