---
base_model: google/gemma-3-4b-it
library_name: peft
---

To load the model and tokenizer:

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Load the PEFT adapter together with its base model, then the tokenizer
model_path = "d4nieldev/gemma-3-4b-it-qpl-decomposer"
model = AutoPeftModelForCausalLM.from_pretrained(model_path).cuda()
model = model.eval()
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Special tokens are inserted by the chat template, so skip them
# when encoding prompts manually
add_special_tokens = False
```
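Once loaded, the model can be queried through the standard chat-template API. The sketch below is an assumption-laden illustration, not part of this repository: the example question, the `max_new_tokens` value, and the `RUN_MODEL` guard (which lets the prompt-building logic be read without downloading the weights) are all hypothetical.

```python
# Hedged inference sketch. Set RUN_MODEL = True on a machine with a GPU
# and access to the model weights; the guard exists only so the
# prompt-construction part can be inspected without a download.
RUN_MODEL = False

# Chat-style input; the question text is illustrative only.
messages = [
    {"role": "user", "content": "List all employees hired after 2020."}
]

if RUN_MODEL:
    import torch
    from transformers import AutoTokenizer
    from peft import AutoPeftModelForCausalLM

    model_path = "d4nieldev/gemma-3-4b-it-qpl-decomposer"
    model = AutoPeftModelForCausalLM.from_pretrained(model_path).cuda()
    model = model.eval()
    tokenizer = AutoTokenizer.from_pretrained(model_path)

    # apply_chat_template wraps the messages in the model's turn markers
    # and appends the prompt for the assistant turn.
    inputs = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)

    with torch.no_grad():
        output_ids = model.generate(inputs, max_new_tokens=256)

    # Decode only the newly generated tokens, dropping the prompt.
    response = tokenizer.decode(
        output_ids[0][inputs.shape[-1]:], skip_special_tokens=True
    )
    print(response)
```

Since generation runs under `torch.no_grad()` and the model is in `eval()` mode, no gradients or dropout affect the output.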