# Flan V2 Adapter for Gemma-7B-IT
This is a LoRA adapter trained on `flan_v2`, compatible with the `google/gemma-7b-it` base model.
## Usage

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it")
model = PeftModel.from_pretrained(base, "RealSilvia/flan_v2-adapter")
```
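For generation, the adapted model is used like any other `transformers` causal LM: tokenize a prompt, call `generate`, and decode. The sketch below assumes the Gemma instruction-tuned chat format (`<start_of_turn>user … <end_of_turn>`); the `format_gemma_prompt` helper and the example prompt are illustrative, not part of this repository.

```python
def format_gemma_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma's instruction-tuned chat format
    (assumed here; tokenizer.apply_chat_template can do this as well)."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

if __name__ == "__main__":
    # Heavy imports and model download only happen when run as a script.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
    base = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it")
    model = PeftModel.from_pretrained(base, "RealSilvia/flan_v2-adapter")

    prompt = format_gemma_prompt("Summarize: LoRA adds low-rank update matrices to frozen weights.")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

You can also merge the adapter into the base weights with `model.merge_and_unload()` if you want a standalone model for inference.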