Requirements

pip install -U transformers autoawq
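
A quick way to confirm the install worked (a minimal sketch; it assumes a CUDA-capable GPU, which the AWQ kernels need):

from importlib.metadata import version
import torch

# Print the installed versions and confirm a GPU is visible
print("transformers:", version("transformers"))
print("autoawq:", version("autoawq"))
print("CUDA available:", torch.cuda.is_available())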

Transformers inference

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Prefer bfloat16 where the GPU supports it, otherwise fall back to float16
dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
device = "auto"

model_name = "jakiAJK/granite-3.1-8b-instruct_AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load the AWQ-quantized checkpoint; layers are placed automatically via device_map
model = AutoModelForCausalLM.from_pretrained(model_name, device_map=device, trust_remote_code=True, torch_dtype=dtype)

model.eval()

chat = [
    { "role": "user", "content": "List any 5 country capitals." },
]
# Render the chat messages into the model's prompt format
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

# Move the inputs to the same device as the model
input_tokens = tokenizer(chat, return_tensors="pt").to(model.device)

output = model.generate(**input_tokens, 
                        max_new_tokens=100)

output = tokenizer.batch_decode(output, skip_special_tokens=True)

print(output)
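
The decoded output above includes the echoed prompt. For interactive use, the reply can instead be streamed token by token; the sketch below reuses the model, tokenizer, and input_tokens from the example above and relies on transformers' TextStreamer:

from transformers import TextStreamer

# Print tokens as they are generated, skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**input_tokens, max_new_tokens=100, streamer=streamer)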