Bitsandbytes 4-bit (NF4) quantization of https://huggingface.co/bigcode/starcoder2-7b.

See https://huggingface.co/blog/4bit-transformers-bitsandbytes for background on 4-bit quantization with bitsandbytes. The model was produced with the following script:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization with nested (double) quantization;
# bfloat16 is the compute dtype used for the dequantized matmuls.
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the base model with on-the-fly quantization, then push the
# quantized checkpoint and tokenizer to the Hub.
model = AutoModelForCausalLM.from_pretrained(
    "bigcode/starcoder2-7b", quantization_config=nf4_config
)
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder2-7b")

model.push_to_hub("onekq-ai/starcoder2-7b-bnb-4bit")
tokenizer.push_to_hub("onekq-ai/starcoder2-7b-bnb-4bit")
```
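
Once pushed, the checkpoint can be loaded directly; the stored quantization config is picked up automatically. Below is a minimal usage sketch (not part of the original card), assuming a CUDA GPU and the `bitsandbytes` and `accelerate` packages installed; the prompt is an arbitrary example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the pre-quantized 4-bit checkpoint from the Hub.
model = AutoModelForCausalLM.from_pretrained(
    "onekq-ai/starcoder2-7b-bnb-4bit",
    device_map="auto",           # place the quantized weights on the available GPU(s)
    torch_dtype=torch.bfloat16,  # matches bnb_4bit_compute_dtype above
)
tokenizer = AutoTokenizer.from_pretrained("onekq-ai/starcoder2-7b-bnb-4bit")

print(model.get_memory_footprint())  # sanity-check the reduced memory usage

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```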