
# ibm-granite/granite-20b-code-instruct-8k-GGUF

This is the Q4_K_M quantized version of the original ibm-granite/granite-20b-code-instruct-8k model, converted to GGUF format. Refer to the original model card for more details.

## Use with llama.cpp

```shell
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# build
make

# run generation
./main -m granite-20b-code-instruct-8k-GGUF/granite-20b-code-instruct.Q4_K_M.gguf -n 128 -p "def generate_random(x: int):" --color
```
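The generation command above assumes the GGUF file is already present on disk. One way to fetch it (a sketch, assuming the `huggingface_hub` CLI is installed via `pip install huggingface_hub`) is:

```shell
# Download just the Q4_K_M GGUF file from the Hub into a local directory
# matching the path used in the generation command above
huggingface-cli download ibm-granite/granite-20b-code-instruct-8k-GGUF \
  granite-20b-code-instruct.Q4_K_M.gguf \
  --local-dir granite-20b-code-instruct-8k-GGUF
```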
- Format: GGUF
- Model size: 20.1B params
- Architecture: starcoder
- Quantization: 4-bit (Q4_K_M)
