R1-Distill-Qwen-14B-Roblox-Luau
A fine-tune of deepseek-ai/DeepSeek-R1-Distill-Qwen-14B, trained on boatbomber/roblox-info-dump and boatbomber/the-luau-stack for Roblox domain knowledge.
Recommended inference settings:
Parameter | Value | Notes |
---|---|---|
System Prompt | You are an expert Roblox developer and Luau software engineer. | Model was fine-tuned with this prompt. |
temperature | 0.5-0.7 | The underlying R1 Distill uses this range. I've found best results with 0.55. |
top_p | 0.95 | The underlying R1 Distill uses this. |
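As a sketch, the settings above map directly onto a chat-completion request (here against the OpenAI-compatible API that llama.cpp's `llama-server` exposes; the endpoint and helper name are illustrative, not part of this repo):

```python
import json

SYSTEM_PROMPT = "You are an expert Roblox developer and Luau software engineer."

def build_request(user_prompt: str) -> dict:
    """Assemble a chat-completion payload using the recommended sampling settings."""
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.55,  # within the recommended 0.5-0.7 range
        "top_p": 0.95,
    }

# Serialize for POSTing to e.g. http://localhost:8080/v1/chat/completions
payload = json.dumps(build_request("Write a Luau function that debounces a Touched event."))
```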
Quantization was done using Unsloth.
Available quants:
Quant | Size | Notes |
---|---|---|
F16 | 29.55GB | Retains 100% accuracy. Slow and memory hungry. |
Q8_0 | 15.70GB | High resource use, but generally acceptable. Use when accuracy is crucial. |
Q6_K | 12.12GB | Uses Q6_K for all tensors. Good for high end GPUs. |
Q5_K_M | 10.51GB | Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K. |
Q4_K_M | 8.99GB | Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K. |
Q3_K_M | 7.34GB | Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K. Quality is noticeably degraded. |
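One way to choose among these is to pick the largest quant whose file fits your VRAM budget. This sketch hard-codes the file sizes from the table above; the 2GB headroom for KV cache and activations is an assumption you should tune to your context length:

```python
# File sizes (GB) from the quant table above.
QUANT_SIZES_GB = {
    "F16": 29.55,
    "Q8_0": 15.70,
    "Q6_K": 12.12,
    "Q5_K_M": 10.51,
    "Q4_K_M": 8.99,
    "Q3_K_M": 7.34,
}

def pick_quant(vram_gb: float, headroom_gb: float = 2.0):
    """Return the largest quant that fits in vram_gb after reserving headroom,
    or None if even Q3_K_M does not fit."""
    budget = vram_gb - headroom_gb
    for name, size in sorted(QUANT_SIZES_GB.items(), key=lambda kv: -kv[1]):
        if size <= budget:
            return name
    return None
```

For example, a 12GB card (budget 10GB after headroom) lands on Q4_K_M, while a 24GB card can take Q8_0.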