Refact-1.6B-fim-GGUF
- Model creator: Small Magellanic Cloud AI
- Original model: Refact-1.6B
Description
This repository contains quantized model files for Refact-1.6B in GGUF format.
Prompt: fill in the middle
<fim_prefix>def print_hello_world():\n """<fim_suffix>\n print("Hello world!")<fim_middle>
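As a sketch, the fill-in-the-middle template above can be assembled from the code before and after the cursor with a small helper (the function name is illustrative, not part of the model card):

```python
# Hypothetical helper: build a Refact-style FIM prompt from the code
# preceding the cursor (prefix) and following it (suffix), using the
# special tokens shown in the template above.
def build_fim_prompt(prefix: str, suffix: str) -> str:
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# Reproduces the example prompt from this card:
prompt = build_fim_prompt(
    'def print_hello_world():\n    """',
    '\n    print("Hello world!")',
)
```

The model then generates the "middle" (here, the docstring body) and stops, so the completion slots between prefix and suffix.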
Prompt: chat (experimental)
<empty_output>SYSTEM You are a programming assistant
<empty_output>USER How do I sort a list in Python?
<empty_output>ASSISTANT
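The experimental chat format can likewise be assembled programmatically. A minimal sketch, assuming each turn is the `<empty_output>` token followed by the role name and the message, with the prompt ending at the open `ASSISTANT` turn:

```python
# Hypothetical helper: build the experimental chat prompt shown above.
# Each turn starts with the <empty_output> special token and a role name;
# the prompt ends with an open ASSISTANT turn for the model to complete.
def build_chat_prompt(system: str, user: str) -> str:
    return (
        f"<empty_output>SYSTEM {system}\n"
        f"<empty_output>USER {user}\n"
        "<empty_output>ASSISTANT"
    )

chat_prompt = build_chat_prompt(
    "You are a programming assistant",
    "How do I sort a list in Python?",
)
```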
Example llama.cpp command
./main -m refact-1_6b-Q4_K_M.gguf -c 4096 -n -1 -p '<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>'
For other parameters and how to use them, please refer to the llama.cpp documentation.
Provided quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit GGUF files.
Model tree for oblivious/Refact-1.6B-fim-GGUF
- Base model: smallcloudai/Refact-1_6B-fim