Tags: Text Generation, GGUF, English, code, Inference Endpoints

Refact-1.6B-fim-GGUF

Description

This repository contains quantized GGUF-format model files for Refact-1.6B.

Prompt: fill in the middle

<fim_prefix>def print_hello_world():\n    """<fim_suffix>\n    print("Hello world!")<fim_middle>
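The fill-in-the-middle template above can be assembled programmatically. A minimal sketch in Python (the three token strings come from the template above; the `build_fim_prompt` helper name is illustrative, not part of the model's API):

```python
# FIM special-token strings, as shown in the prompt template above.
FIM_PREFIX = "<fim_prefix>"
FIM_SUFFIX = "<fim_suffix>"
FIM_MIDDLE = "<fim_middle>"

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt: the model generates the
    text that belongs between `prefix` and `suffix`, after <fim_middle>."""
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}"

prompt = build_fim_prompt(
    'def print_hello_world():\n    """',
    '\n    print("Hello world!")',
)
print(prompt)
```

The model's completion (here, the docstring body) is everything it emits after the trailing `<fim_middle>` token.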

Prompt: chat (experimental)

<empty_output>SYSTEM You are a programming assistant
<empty_output>USER How do I sort a list in Python?
<empty_output>ASSISTANT
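The experimental chat template can be built the same way. A hedged sketch, assuming the `<empty_output>` marker and the SYSTEM/USER/ASSISTANT role labels exactly as shown above (the `build_chat_prompt` helper is illustrative):

```python
# Role-turn marker, as shown in the experimental chat template above.
CHAT_TOKEN = "<empty_output>"

def build_chat_prompt(system: str, user: str) -> str:
    """Assemble the experimental chat prompt; generation continues
    after the trailing ASSISTANT marker."""
    return (
        f"{CHAT_TOKEN}SYSTEM {system}\n"
        f"{CHAT_TOKEN}USER {user}\n"
        f"{CHAT_TOKEN}ASSISTANT"
    )

prompt = build_chat_prompt(
    "You are a programming assistant",
    "How do I sort a list in Python?",
)
print(prompt)
```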

Example llama.cpp command

./main -m refact-1_6b-Q4_K_M.gguf -c 4096 -n -1 -p '<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>'

For other parameters and how to use them, please refer to the llama.cpp documentation.
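The command above can also be built from a script. A minimal sketch, assuming the `./main` binary name, model filename, and flags from the example command (newer llama.cpp builds ship the same binary as `llama-cli`; the `fim_command` helper is illustrative):

```python
def fim_command(model_path: str, prefix: str, suffix: str) -> list:
    """Build the llama.cpp argv for a fill-in-the-middle completion,
    mirroring the example command above."""
    prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
    return [
        "./main",
        "-m", model_path,
        "-c", "4096",   # context length
        "-n", "-1",     # generate until end-of-text
        "-p", prompt,
    ]

cmd = fim_command(
    "refact-1_6b-Q4_K_M.gguf",
    'def add(a, b):\n    """',
    "\n    return a + b",
)
# subprocess.run(cmd) would start generation; omitted here because it
# requires the compiled binary and the downloaded model file.
print(cmd)
```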

Model size: 1.59B params
Architecture: refact

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

