This repository contains a community-driven quantized version of meta-llama/Llama-3.2-3B-Instruct, the official FP16 half-precision model released by Meta AI.

Model Information

The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text-only models are optimized for multilingual dialogue use cases and outperform many of the available open-source and closed chat models on common industry benchmarks.

This repository contains meta-llama/Llama-3.2-3B-Instruct quantized from FP16 down to INT4 with AutoGPTQ, using the GPTQ kernels to perform zero-point quantization with a group size of 128.
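
For reference, a quantization setup along these lines can be expressed with the GPTQConfig class from transformers. This is a minimal sketch, not the exact recipe used to produce this checkpoint; in particular, the "c4" calibration dataset is an assumption.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_id = "meta-llama/Llama-3.2-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)

# 4-bit GPTQ with group size 128, matching the description above; the "c4"
# calibration dataset is an assumption, not this card's stated recipe.
gptq_config = GPTQConfig(bits=4, group_size=128, dataset="c4", tokenizer=tokenizer)

# Quantizes the FP16 weights on the fly while loading (requires a GPU and
# the auto-gptq + optimum packages).
model = AutoModelForCausalLM.from_pretrained(
  base_id,
  quantization_config=gptq_config,
  device_map="auto",
  torch_dtype=torch.float16,
)
model.save_pretrained("Llama-3.2-3B-Instruct-GPTQ-INT4")
tokenizer.save_pretrained("Llama-3.2-3B-Instruct-GPTQ-INT4")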

Model Usage

In order to run inference with Llama 3.2 3B Instruct GPTQ in INT4, roughly 2.5 GiB of VRAM is needed just to load the model checkpoint, not counting the KV cache or the CUDA graphs, meaning a bit more than that should be available.
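
As a back-of-envelope check (illustrative arithmetic, not a measurement; the parameter split below is approximate), the packed 4-bit transformer weights plus the embeddings kept in FP16 account for most of that footprint:

GIB = 1024**3

# Rough parameter split for Llama 3.2 3B (approximate figures).
quantized_params = 2.8e9   # transformer weights packed to 4 bits
fp16_params = 0.4e9        # embeddings left in FP16 by GPTQ

weights = quantized_params * 4.25 / 8   # ~4 bits/weight + group-128 scale/zero overhead
embeds = fp16_params * 2                # 2 bytes per FP16 value
print(f"~{(weights + embeds) / GIB:.1f} GiB just for the checkpoint")  # ~2.1 GiB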

The quantized model can be used through several solutions, such as transformers, autogptq, or text-generation-inference (a text-generation-inference client sketch is included at the end of this section).

馃 transformers

To run inference with Llama 3.2 3B Instruct GPTQ in INT4, install the following packages:

pip install -q --upgrade transformers accelerate optimum
pip install -q --no-build-isolation auto-gptq

The GPTQ model can be instantiated like any other causal language model via AutoModelForCausalLM, and inference runs as usual.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rkumar70900/Llama-3.2-3B-Instruct-GPTQ-INT4"

# Load the tokenizer (it lives on CPU, so no device_map is needed) and the
# INT4 GPTQ checkpoint onto the GPU.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
  model_id,
  torch_dtype=torch.float16,
  low_cpu_mem_usage=True,
  device_map="cuda",
)

# Format the conversation with the model's chat template and move the
# resulting tensors to the GPU.
prompt = [
  {"role": "system", "content": "You are a helpful assistant, that responds as a pirate."},
  {"role": "user", "content": "What's Deep Learning?"},
]
inputs = tokenizer.apply_chat_template(
  prompt,
  tokenize=True,
  add_generation_prompt=True,
  return_tensors="pt",
  return_dict=True,
).to("cuda")

# Sample up to 256 new tokens and decode the full sequence.
outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
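
text-generation-inference, listed above, can also serve GPTQ checkpoints. Assuming a TGI server has already been launched for this model with the --quantize gptq flag and is listening on localhost:8080 (a placeholder endpoint), a minimal client sketch with huggingface_hub looks like this:

from huggingface_hub import InferenceClient

# Placeholder endpoint: point this at wherever your TGI server
# (started with --quantize gptq) is actually listening.
client = InferenceClient("http://localhost:8080")

response = client.text_generation(
  "What's Deep Learning?",
  max_new_tokens=256,
  do_sample=True,
)
print(response)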