Tags: Text Generation · GGUF · English · code

Refact-1.6B-fim-GGUF

Description

This repository contains quantized GGUF-format model files for Refact-1.6B.

Prompt: fill in the middle

<fim_prefix>def print_hello_world():\n    """<fim_suffix>\n    print("Hello world!")<fim_middle>
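To make the template concrete, here is a minimal Python sketch that assembles a fill-in-the-middle prompt from the three tokens above. Only the <fim_*> tokens come from this card; the helper name build_fim_prompt and the snippet are illustrative, not part of this repository.

# Minimal sketch: assemble a FIM prompt from the special tokens shown above.
# The helper name is hypothetical; only the three <fim_*> tokens come from the card.
def build_fim_prompt(prefix: str, suffix: str) -> str:
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

prompt = build_fim_prompt(
    'def print_hello_world():\n    """',
    '\n    print("Hello world!")',
)
# The model should generate the missing middle: here, the docstring body
# and its closing quotes.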

Prompt: chat (experimental)

<empty_output>SYSTEM You are a programming assistant
<empty_output>USER How do I sort a list in Python?
<empty_output>ASSISTANT
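The same idea applies to the experimental chat format. A minimal sketch, assuming exactly one <empty_output>ROLE segment per line as shown above (the helper name is hypothetical):

# Minimal sketch: assemble the experimental chat prompt shown above.
# Only the <empty_output>ROLE layout comes from the card; the helper is hypothetical.
def build_chat_prompt(system: str, user: str) -> str:
    return (
        f"<empty_output>SYSTEM {system}\n"
        f"<empty_output>USER {user}\n"
        "<empty_output>ASSISTANT"
    )

prompt = build_chat_prompt(
    "You are a programming assistant",
    "How do I sort a list in Python?",
)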

Example llama.cpp command

./main -m refact-1_6b-Q4_K_M.gguf -c 4096 -n -1 -p '<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>'

In this example, -m selects the model file, -c 4096 sets the context size, -n -1 keeps generating until an end-of-sequence token, and -p supplies the prompt (replace {prefix} and {suffix} with the code before and after the gap). For other parameters and how to use them, please refer to the llama.cpp documentation.
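If you would rather drive the GGUF file from Python, the llama-cpp-python bindings expose the same runtime. A minimal sketch, assuming the Q4_K_M file from the command above sits in the working directory:

# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The file name is taken from the llama.cpp example above; adjust as needed.
from llama_cpp import Llama

llm = Llama(model_path="refact-1_6b-Q4_K_M.gguf", n_ctx=4096)

prompt = (
    '<fim_prefix>def print_hello_world():\n    """'
    '<fim_suffix>\n    print("Hello world!")<fim_middle>'
)
result = llm(prompt, max_tokens=64, temperature=0.2)
print(result["choices"][0]["text"])  # the generated middle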

Model details

Format: GGUF
Model size: 1.59B params
Architecture: refact

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
