---
base_model: google/gemma-7b-it
tags:
  - adapter
  - lora
  - gemma
  - peft
  - causal-lm
---

# Flan V2 Adapter for Gemma-7B-IT

This is a LoRA adapter trained on **flan_v2**, compatible with `google/gemma-7b-it`.

## Usage

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it")
model = PeftModel.from_pretrained(base, "RealSilvia/flan_v2-adapter")
```
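
The snippet above stops after loading the adapter. Below is a minimal end-to-end inference sketch; it assumes the adapter id `RealSilvia/flan_v2-adapter` shown in this card, loads the tokenizer from the base model (the adapter does not change it), and uses illustrative settings (bfloat16 weights, `max_new_tokens=128`, an example prompt).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and attach the LoRA adapter (repo id from this card).
base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-7b-it",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "RealSilvia/flan_v2-adapter")

# The tokenizer is taken from the base model.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")

# Format the prompt with Gemma's chat template and generate a response.
messages = [{"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

If you prefer a standalone model without the PEFT dependency at inference time, `model.merge_and_unload()` can fold the adapter weights into the base model before saving.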