β–„β–„β–„β–„β–„    β–„β–ˆβ–ˆβ–ˆβ–„   β–„β–ˆβ–„      β–„      β–„   β–ˆβ–ˆβ–„   β–ˆβ–ˆ       β–ˆβ–ˆβ–ˆβ–ˆβ–„     β–„β–ˆβ–ˆβ–ˆ
       β–ˆ    β–€β–„  β–ˆβ–€   β–€  β–ˆβ–€ β–€β–„     β–ˆ      β–ˆ  β–ˆ  β–ˆ  β–ˆ β–ˆ     β–ˆ   β–ˆ       β–ˆβ–ˆ 
    β–„   β–€β–€β–€β–€β–„   β–ˆβ–ˆβ–„β–„    β–ˆ   β–€  β–ˆ   β–ˆ β–ˆβ–ˆ   β–ˆ β–ˆ   β–ˆ β–ˆβ–„β–„β–ˆ     β–ˆ   β–ˆ    β–„β–ˆβ–ˆβ–ˆβ–„
     β–€β–€β–„β–„β–„β–„β–€    β–ˆβ–„   β–„β–€ β–ˆβ–„  β–„β–€ β–ˆ   β–ˆ β–ˆ β–ˆ  β–ˆ β–ˆ  β–ˆ  β–ˆ  β–ˆ     β–ˆ   β–ˆ       β–β–ˆ 
               β–€β–ˆβ–ˆβ–ˆβ–€   β–€β–ˆβ–ˆβ–ˆβ–€  β–ˆβ–„ β–„β–ˆ β–ˆ  β–ˆ β–ˆ β–ˆβ–ˆβ–ˆβ–€      β–ˆ     β–ˆβ–ˆβ–ˆβ–ˆβ–€ β–β–ˆ  β–„β–ˆβ–ˆβ–€  
                               β–€β–€β–€  β–ˆ   β–ˆβ–ˆ         β–ˆ                          
                                                 β–€                           
   
                      β‹†β‹†ΰ­¨ΰ­§Λš THE PRIMΓ‰TOILE ENGINE Λšΰ­¨ΰ­§β‹†ο½‘Λšβ‹†
                  β€” Visual Novel generation under starlight β€”
| Version | Type | Strengths | Weaknesses | Recommended Use |
|---|---|---|---|---|
| Secunda-0.1-GGUF / RAW | Instruction | Most precise; coherent code; perfected Modelfile | Smaller context / limited flexibility | Production / baseline |
| Secunda-0.3-F16-QA | QA-based input | Acceptable for question-based generation | Less accurate than 0.1; not as coherent | Prototyping (QA mode) |
| Secunda-0.3-F16-TEXT | Text-to-text | Flexible for freeform tasks | Slightly off; Modelfile-dependent | Experimental / text rewrite |
| Secunda-0.3-GGUF | GGUF build | Portable GGUF of 0.3 | Inherits 0.3 weaknesses | Lightweight local testing |
| Secunda-0.5-RAW | QA natural | Best QA understanding; long-form generation potential | Inconsistent output length; some instability | Research / LoRA testing |
| Secunda-0.5-GGUF | GGUF build | Portable, inference-ready version of 0.5 | Shares issues of 0.5 | Offline experimentation |
| Secunda-0.1-RAW | Instruction | Same base as 0.1-GGUF | Same as 0.1-GGUF | Production backup |

## πŸŒ™ Overview

Secunda-0.3-GGUF is a fully merged and quantized release of Secunda’s original Ren’Py .rpy story generator, built from the LoRA adapters of Secunda-0.3-RAW + LLaMA 3.1 8B β€” now packaged in GGUF format for lightweight local inference via llama.cpp, llamafile, ollama, or LM Studio.

✧ Available variants: Q8_0, TQ2_0, TQ1_0

This model produces (a hand-written sample is sketched below):

- Full `define` character blocks with color
- Background and sprite `image` declarations
- A narrative arc starting from `label start:`
- Menus, jumps, and emotional dialogue
- A Ren'Py script that actually runs
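For orientation, here is a hand-written sketch of those structures; the character, assets, and dialogue are illustrative placeholders, not actual model output:

```renpy
# Illustrative hand-written sketch, not actual model output.
define m = Character("Mira", color="#c8a2c8")

# Background and sprite declarations (placeholder asset filenames)
image bg library = "bg_library.png"
image mira smile = "mira_smile.png"

label start:
    scene bg library
    show mira smile

    m "Every dream I open reads like a book nobody finished."

    menu:
        "Ask about her gift":
            m "It started the night the stars went quiet."
            jump chapter_one
        "Stay silent":
            m "You are a careful reader too, aren't you?"
            jump chapter_one

label chapter_one:
    "To be continued..."
    return
```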


/!\ NO HUMAN-MADE DATA WAS USED TO TRAIN THIS AI! Secunda takes great pride in making sure the training data is fully scripted! /!\

If you like Visual Novels, please visit itch.io and support independent creators!

## ☁️ GGUF Model Variants

| πŸŒ• Variant | πŸ”§ Quantization Type | πŸ’Ύ Filename | πŸ’¬ Notes |
|---|---|---|---|
| 8-bit quantized | `q8_0` | `secunda-0.3-q8_0.gguf` | Balanced. Great quality & performance tradeoff. |
| 2-bit tiny | `tq2_0` | `secunda-0.3-tq2_0.gguf` | Ultra-light. Use on small devices; lower fidelity. |
| 1-bit minimalist | `tq1_0` | `secunda-0.3-tq1_0.gguf` | Experimental. For extreme edge deployments. |
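
For example, to run the 8-bit variant with llama.cpp (the `llama-cli` binary name applies to recent llama.cpp builds, and the sampling flags here are illustrative, not tuned values):

```bash
./llama-cli -m secunda-0.3-q8_0.gguf \
    -p "A lonely girl who can read dreams like books." \
    -n 1024 --temp 0.8
```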

## πŸŒ’ Run Locally with Ollama

First, make sure you've installed Ollama and downloaded this repository's GGUF file and Modelfile into the same directory:

```bash
ollama create secunda -f Modelfile
ollama run secunda
>>> A lonely girl who can read dreams like books.
```

ALWAYS USE THE MODELFILE!
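
The tuned Modelfile ships with this repository; as a rough sketch of its shape (the parameters and system prompt below are illustrative assumptions, not the repository's actual values):

```
# Minimal Ollama Modelfile sketch; illustrative values only
FROM ./secunda-0.3-q8_0.gguf

# Sampling defaults shown for orientation, not the repo's tuned settings
PARAMETER temperature 0.8
PARAMETER num_ctx 4096

# Hypothetical system prompt; use the one shipped with the repository
SYSTEM """You are Secunda. Given a one-line premise, write a complete, runnable Ren'Py .rpy script."""
```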

## 🌌 Evaluation

This model has:

- Generated 1000+ `.rpy` files
- Passed human review for structure, creativity, and syntax
- Produced roughly 90% valid output with only minimal manual tweaks (one way to check validity is sketched below)
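
Validity can be checked with Ren'Py's built-in lint, assuming the Ren'Py SDK is installed; the project name and paths below are illustrative:

```bash
# Drop a generated script into a project's game/ directory, then lint it
cp generated_story.rpy MyProject/game/script.rpy
./renpy.sh MyProject lint
```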

## πŸ“š Citation

```bibtex
@misc{secunda2025gguf,
  title={Secunda-0.3-GGUF},
  author={Yaroster},
  year={2025},
  note={https://huggingface.co/Yaroster}
}
```

πŸͺ Constellation Companions


β‹†βΊβ‚Šβ‹† β˜Ύβ‹†βΊβ‚Šβ‹† Secunda-0.3-GGUF β‹†βΊβ‚Šβ‹† β˜Ύβ‹†βΊβ‚Šβ‹†

✧ Because stories can spark from a single phrase ✧

⚠️ This repo contains only the LoRA adapter weights. To use the model, download the base LLaMA 3.1 from Meta (terms apply): https://ai.meta.com/resources/models-and-libraries/llama-downloads/
