# R1-Distill-Qwen-1.5B-Roblox-Luau

A fine-tune of deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B, trained on boatbomber/roblox-info-dump and boatbomber/the-luau-stack for Roblox domain knowledge.

This model is intended for speculative decoding with boatbomber/R1-Distill-Qwen-14B-Roblox-Luau. It can also be used standalone in memory-constrained environments, but it is far less capable than the 14B model: with so few weights, it cannot learn the same level of detail.
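
If you want to try the pairing, here is a minimal sketch using transformers' assisted generation, where the 1.5B model drafts tokens and the 14B model verifies them. It assumes transformers-format weights are available for both repos (this card lists GGUF quants; for GGUF files, use llama.cpp's speculative decoding support instead), and the user prompt is just an illustration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Target (14B) and draft (1.5B) models share the Qwen tokenizer,
# which assisted generation requires.
tokenizer = AutoTokenizer.from_pretrained("boatbomber/R1-Distill-Qwen-14B-Roblox-Luau")
target = AutoModelForCausalLM.from_pretrained(
    "boatbomber/R1-Distill-Qwen-14B-Roblox-Luau", device_map="auto", torch_dtype="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "boatbomber/R1-Distill-Qwen-1.5B-Roblox-Luau", device_map="auto", torch_dtype="auto"
)

messages = [
    {"role": "system", "content": "You are an expert Roblox developer and Luau software engineer."},
    {"role": "user", "content": "Write a Luau function that debounces a Touched event."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(target.device)

# assistant_model enables assisted (speculative) decoding in transformers.
output = target.generate(
    inputs,
    assistant_model=draft,
    do_sample=True,
    temperature=0.55,
    top_p=0.95,
    max_new_tokens=512,
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```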

Recommended inference settings:

| Parameter | Value | Notes |
| --- | --- | --- |
| System Prompt | `You are an expert Roblox developer and Luau software engineer.` | The model was fine-tuned with this prompt. |
| temperature | 0.5-0.7 | The underlying R1 Distill uses this range. I've found the best results with 0.55. |
| top_p | 0.95 | The underlying R1 Distill uses this. |
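
As a concrete example, here is a minimal sketch of these settings applied through llama-cpp-python. The GGUF filename and the user question are placeholders, not from this card; pick one of the quants listed below and verify the exact filename on the repo.

```python
from llama_cpp import Llama

# Hypothetical local path to one of the quants below -- check the
# repo's file list for the exact GGUF filename.
llm = Llama(model_path="R1-Distill-Qwen-1.5B-Roblox-Luau-Q4_K_M.gguf", n_ctx=4096)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": "You are an expert Roblox developer and Luau software engineer.",
        },
        {"role": "user", "content": "How do I fire a RemoteEvent from the client?"},
    ],
    temperature=0.55,  # sweet spot within the recommended 0.5-0.7 range
    top_p=0.95,
)
print(response["choices"][0]["message"]["content"])
```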

Quantization was done using Unsloth.

Available quants:

| Quant | Size | Notes |
| --- | --- | --- |
| F16 | 3.56GB | Retains 100% accuracy. Slow and memory hungry. |
| Q8_0 | 1.89GB | High resource use, but generally acceptable. Use when accuracy is crucial. |
| Q6_K | 1.46GB | Uses Q6_K for all tensors. Good for high-end GPUs. |
| Q5_K_M | 1.29GB | Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K. |
| Q4_K_M | 1.12GB | Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K. |
| Q3_K_M | 0.92GB | Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K. Quality is noticeably degraded. |
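
To fetch a quant programmatically, huggingface_hub's `hf_hub_download` works. The filename below follows the usual GGUF naming pattern but is an assumption, so verify it against the repo's file list.

```python
from huggingface_hub import hf_hub_download

# Filename assumed from the common GGUF naming convention -- verify on the repo.
path = hf_hub_download(
    repo_id="boatbomber/R1-Distill-Qwen-1.5B-Roblox-Luau",
    filename="R1-Distill-Qwen-1.5B-Roblox-Luau-Q4_K_M.gguf",
)
print(path)  # local cache path to pass to your GGUF runtime
```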