# llama-3b-instruct-ft-function-call

This model is a merged version of the base model [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) with the following LoRA adapter:

  • /home/ubuntu/zona/decision-step-model/train/mar16_sembalanced_data

## Description

## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "matt-bcny/llama-3b-instruct-ft-function-call"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Example inference
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
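Since this checkpoint is fine-tuned for function calling, the generated text will often be a structured tool call rather than free-form prose. Below is a minimal sketch of post-processing such output; the `get_weather` schema and the assumption that the model emits a single JSON object with `name`/`parameters` keys are illustrative, not guaranteed by this model:

```python
import json

# Hypothetical raw model output in JSON tool-call form
raw = '{"name": "get_weather", "parameters": {"city": "Paris"}}'

def parse_tool_call(text):
    """Parse a JSON tool call; return (name, params), or None if the text is not a call."""
    try:
        obj = json.loads(text)
    except json.JSONDecodeError:
        return None
    if isinstance(obj, dict) and "name" in obj:
        return obj["name"], obj.get("parameters", {})
    return None

call = parse_tool_call(raw)
print(call)  # → ('get_weather', {'city': 'Paris'})
```

Returning `None` for non-JSON text lets the caller fall back to treating the generation as a plain chat reply.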

## Model creation

This model was created by merging the base model with the LoRA adapter listed above on 2025-03-17.
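Merging folds the low-rank update into the frozen base weights, so the resulting checkpoint needs no adapter machinery at inference time (with the `peft` library this is what `PeftModel.merge_and_unload()` does). A minimal numerical sketch of the operation, with hypothetical shapes, rank, and scaling that are not those of this model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight and a low-rank adapter (hypothetical dimensions)
d_out, d_in, r, alpha = 8, 8, 2, 16
W = rng.standard_normal((d_out, d_in))  # base weight
A = rng.standard_normal((r, d_in))      # LoRA "A" matrix
B = rng.standard_normal((d_out, r))     # LoRA "B" matrix

# Merging folds the scaled adapter product into the base weight:
#   W_merged = W + (alpha / r) * B @ A
W_merged = W + (alpha / r) * B @ A

# The merged weight maps inputs identically to base-plus-adapter
x = rng.standard_normal(d_in)
y_separate = W @ x + (alpha / r) * B @ (A @ x)
y_merged = W_merged @ x
print(np.allclose(y_separate, y_merged))  # → True
```

Because the two paths are mathematically identical, the merged model trades the ability to hot-swap adapters for a simpler, single-weight deployment.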

Model size: 3.21B params (Safetensors, FP16)