# Andy-4-tiny
Andy-4-tiny is a 360-million-parameter specialist model tuned for Minecraft gameplay via the Mindcraft framework.
The current version of Andy-4-tiny is Andy-4-tiny-0522.

This is the Safetensors repository.
## ⚠️ Certification

Andy-4 is not yet certified by the Mindcraft developers. Use it in production at your own discretion.
## Model Specifications
- Parameters: 360M
- Training Hardware: 1 × NVIDIA RTX 3070
- Duration: ~36 hours total
- Data Volumes:
  - Messages: 179,384
  - Tokens: 425,535,198
  - Conversations: 62,149
- Base Architecture: SmolLM2
- License: Andy 1.0 License
- Repository: https://huggingface.co/Sweaterdog/Andy-4
## Training Regimen
**Andy-4-base-1 dataset**
- Epochs: 2
- Learning Rate: 5e-5
- Dataset Size: 47.4k
**Andy-4-base-2 dataset**
- Epochs: 2
- Learning Rate: 7e-5
- Dataset Size: 49.2k
**Fine-tune (FT) dataset**
- Epochs: 2.5
- Learning Rate: 2e-5
- Dataset Size: 4.12k
- Optimizer: AdamW_8bit with cosine decay
- Quantization: 4-bit (bnb-4bit) for inference
- Warm-Up Steps: 0.1% of each dataset
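The warm-up figure above can be sanity-checked with a quick sketch. Only the 0.1% ratio and the dataset sizes come from this card; the batch size is a hypothetical assumption for illustration:

```python
# Rough sketch of the warm-up step count: warm-up is stated as 0.1% of
# each dataset. Dataset sizes are from the "Training Regimen" section;
# the batch size of 4 is an assumed value, not from the card.
def warmup_steps(dataset_size, epochs, batch_size, warmup_ratio=0.001):
    """Return the warm-up step count for a schedule over the full run."""
    total_steps = int(dataset_size * epochs / batch_size)
    return max(1, int(total_steps * warmup_ratio))

# Andy-4-base-1: 47.4k conversations, 2 epochs, assumed batch size 4
print(warmup_steps(47_400, 2, 4))  # -> 23
```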
## Installation
Andy-4-tiny is an edge-focused model, built to run on the CPU with minimal RAM.
| Quantization | RAM Required |
|---|---|
| F16 | CPU |
| Q8_0 | CPU |
| Q4_K_M | CPU |
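For a back-of-the-envelope sense of how much memory the weights alone occupy at each quantization level, a quick sketch (the bits-per-weight figures are approximate community values, not from this card, and KV cache / runtime overhead are not counted):

```python
# Approximate RAM footprint of the 360M weights at each quantization
# level in the table above. Bits-per-weight values are rough estimates
# for GGUF quants; actual usage will be higher due to context and runtime.
PARAMS = 360_000_000

def weight_ram_gib(bits_per_weight, params=PARAMS):
    """Approximate size of the raw weights in GiB."""
    return params * bits_per_weight / 8 / 2**30

for name, bits in [("F16", 16), ("Q8_0", 8.5), ("Q4_K_M", 4.85)]:
    print(f"{name}: ~{weight_ram_gib(bits):.2f} GiB")
```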
### 1. Installation directly on Ollama
- Visit Andy-4 on Ollama.
- Copy the command after choosing the model type / quantization.
- Run the command in the terminal.
- Set the profile's model to what you installed, e.g. `ollama/sweaterdog/andy-4:tiny-q8_0`.
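The last step above can be sketched as a Mindcraft profile fragment. Mindcraft profiles are JSON files; the profile name here is a placeholder assumption, and only the `model` field is the point of this sketch:

```json
{
  "name": "andy",
  "model": "ollama/sweaterdog/andy-4:tiny-q8_0"
}
```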
### 2. Manual Download & Modelfile
**Download**
- From the HF Files tab, grab your chosen `.GGUF` quant weights (e.g. `Andy-4-tiny.Q4_K_M.gguf`).
- Download the provided `Modelfile`.
**Edit**

Change `FROM YOUR/PATH/HERE` to `FROM /path/to/Andy-4-tiny.Q4_K_M.gguf`.
Optional: increase the `num_ctx` parameter to a higher value for longer conversations if you:

A. have extra VRAM,
B. quantized the context window, or
C. can use a smaller model.
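If you do raise `num_ctx`, it goes in the Modelfile alongside the `FROM` line. A minimal sketch — the weights path and the context value 8192 are placeholders, not recommendations from this card:

```
FROM /path/to/Andy-4-tiny.Q4_K_M.gguf
PARAMETER num_ctx 8192
```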
**Create**

Run `ollama create andy-4-tiny -f Modelfile`. This registers the Andy-4-tiny model locally.
## Acknowledgments
- Data & Models by: @Sweaterdog
- Framework: Mindcraft (https://github.com/kolbytn/mindcraft)
- LoRA Weights: https://huggingface.co/Sweaterdog/Andy-4-LoRA
- Explicit credit is not given to Meta, since this model was trained on a slightly different architecture, derived from DeepSeek-R1.
## License
See Andy 1.0 License.
This work uses data and models created by @Sweaterdog.
## Model tree for Sweaterdog/Andy-4-tiny-safetensors

Base model: HuggingFaceTB/SmolLM2-360M