# THE PRIMÉTOILE ENGINE
*Visual Novel generation under starlight*
| Version | Type | Strengths | Weaknesses | Recommended Use |
|---|---|---|---|---|
| Secunda-0.1-GGUF / RAW | Instruction | Most precise; coherent code; perfected Modelfile | Smaller context / limited flexibility | Production / baseline |
| Secunda-0.3-F16-QA | QA-based input | Acceptable for question-based generation | Less accurate than 0.1; not as coherent | Prototyping (QA mode) |
| Secunda-0.3-F16-TEXT | Text-to-text | Flexible for freeform tasks | Slightly off; Modelfile-dependent | Experimental / text rewrite |
| Secunda-0.3-GGUF | GGUF build | Portable GGUF of 0.3 | Inherits 0.3 weaknesses | Lightweight local testing |
| Secunda-0.5-RAW | QA (natural) | Best QA understanding; long-form generation potential | Inconsistent output length; some instability | Research / LoRA testing |
| Secunda-0.5-GGUF | GGUF build | Portable, inference-ready version of 0.5 | Shares the issues of 0.5 | Offline experimentation |
| Secunda-0.1-RAW | Instruction | Same base as 0.1-GGUF | Same as 0.1 | Production backup |
## Overview

Secunda-0.3-GGUF is a fully merged and quantized release of Secunda's original Ren'Py `.rpy` story generator, built from the LoRA adapters of Secunda-0.3-RAW + LLaMA 3.1 8B, and now packaged in GGUF format for lightweight local inference via llama.cpp, llamafile, Ollama, or LM Studio.
Available variants: Q8_0, TQ2_0, TQ1_0

This model produces:

- Full `define` character blocks with color
- Backgrounds and sprite `image` declarations
- A narrative arc starting from `label start:`
- Menus, jumps, and emotional dialogue
- A Ren'Py script that actually runs
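To illustrate the pieces listed above, here is a minimal sketch of the kind of script the model emits. The character names, colors, and image filenames are hypothetical examples, not actual model output:

```renpy
# Hypothetical example of a generated .rpy script.
define e = Character("Elara", color="#a3c9f9")   # character block with color

image bg observatory = "bg_observatory.png"      # background declaration
image elara smile = "elara_smile.png"            # sprite declaration

label start:
    scene bg observatory
    show elara smile
    e "I can read dreams the way you read books."
    menu:
        "Ask about her gift":
            jump gift
        "Stay silent":
            jump silence

label gift:
    e "Every dream is a story waiting to be told."
    return

label silence:
    "The stars fill the quiet between you."
    return
```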
/!\ NO HUMAN-MADE DATA WAS USED TO TRAIN THIS AI! Secunda takes great pride in ensuring that the training data is fully scripted! /!\

If you like Visual Novels, please visit itch.io and support independent creators!
## GGUF Model Variants

| Variant | Quantization Type | Filename | Notes |
|---|---|---|---|
| 8-bit Quantized | `q8_0` | `secunda-0.3-q8_0.gguf` | Balanced. Great quality/performance tradeoff. |
| 2-bit Tiny | `tq2_0` | `secunda-0.3-tq2_0.gguf` | Ultra-light. Use on small devices; lower fidelity. |
| 1-bit Minimalist | `tq1_0` | `secunda-0.3-tq1_0.gguf` | Experimental. For extreme edge deployments. |
## Run Locally with Ollama

First, make sure you've installed Ollama and downloaded the model files, then create and run the model:

```shell
ollama create secunda -f Modelfile
ollama run secunda
```

Example prompt:

```
> A lonely girl who can read dreams like books.
```

ALWAYS USE THE MODELFILE!
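For reference, a Modelfile for a GGUF build generally looks like the following sketch. The filename, parameters, and system prompt here are illustrative assumptions, not the Modelfile shipped with this repository, so use the provided one instead:

```
# Hypothetical Modelfile sketch; the repository's Modelfile may differ.
FROM ./secunda-0.3-q8_0.gguf

# Illustrative generation parameters.
PARAMETER temperature 0.8
PARAMETER num_ctx 4096

# Hypothetical system prompt.
SYSTEM """You are Secunda, a Ren'Py visual novel script generator. Given a one-line premise, emit a complete, runnable .rpy script."""
```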
## Evaluation

This model has:

- Generated 1000+ `.rpy` files
- Passed human review for structure, creativity & syntax
- Produced ~90% valid output with minimal manual tweaks
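A rough structural check for "valid output" can be sketched as below. These heuristics are assumptions for illustration; the actual review described above was done by humans:

```python
import re

def looks_like_runnable_rpy(script: str) -> bool:
    """Heuristic structural check for a generated Ren'Py script.

    Illustrative criteria (not the project's actual rubric): a runnable
    script should define at least one character, declare a start label,
    and contain at least one line of dialogue.
    """
    has_character = bool(re.search(r'^define\s+\w+\s*=\s*Character\(', script, re.M))
    has_start = bool(re.search(r'^label\s+start\s*:', script, re.M))
    has_dialogue = bool(re.search(r'^\s+\w*\s*".+"', script, re.M))
    return has_character and has_start and has_dialogue

sample = '''define e = Character("Elara", color="#a3c9f9")
label start:
    e "Hello, stargazer."
'''
print(looks_like_runnable_rpy(sample))  # True
```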
## Citation
@misc{secunda2025gguf,
title={Secunda-0.3-GGUF},
author={Yaroster},
year={2025},
note={https://huggingface.co/Yaroster}
}
## Constellation Companions

- Secunda-0.3-F16-QA: experimental question-answer variant (the F16 base of this GGUF)
- Secunda-0.3-F16-TEXT: for less structured generation
- Primétoile: the full VN pipeline
**Secunda-0.3-GGUF**
*Because stories can spark from a single phrase*
> ⚠️ This repo contains only the LoRA adapter weights. To use the model, download the base LLaMA 3.1 from Meta (terms apply): https://ai.meta.com/resources/models-and-libraries/llama-downloads/