# Gemma 3 270M Fine-Tuned with LoRA
This model is a LoRA fine-tuned derivative of Google's Gemma 3 270M; this repository hosts the fp32 version. It was fine-tuned by Toheed Akhtar on a small dataset of Gen Z conversations in Hinglish, focusing on casual interactions among college students.

The fp16 version is available here: link to fp16
## Model Details
- Developed by: Toheed Akhtar
- Model type: Causal Language Model (text-generation)
- Language(s): Multilingual (Hinglish focus)
- License: Subject to Gemma Terms of Use
- Finetuned from model: google/gemma-3-270m-it
## Intended Use
This model is designed for casual text generation, simulating informal Gen Z conversations in Hinglish. It is mainly intended for personal experimentation.
## Out-of-Scope Use
- The model is not suitable for critical applications; it may produce inaccurate or unsafe content.
## How to Get Started

Load the base model, attach the LoRA adapter, and wrap everything in a `text-generation` pipeline:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel

# Set device for the pipeline: 0 = first GPU, -1 = CPU
device = 0 if torch.cuda.is_available() else -1

# Load the base model
base_model_name = "google/gemma-3-270m-it"
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    torch_dtype=torch.float16,
)

# Load the LoRA fine-tuned adapter from the Hugging Face Hub
peft_model_hf = "Tohidichi/gemma3-genz-270m"
model = PeftModel.from_pretrained(base_model, peft_model_hf)
model.eval()

# Load the tokenizer from the adapter repo
tokenizer = AutoTokenizer.from_pretrained(peft_model_hf)

# Create a text-generation pipeline
text_gen_pipeline = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device=device,
)
```
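Since the base model is instruction-tuned, prompts should be formatted with the Gemma chat template before generation. A minimal usage sketch; the example message and sampling parameters are illustrative assumptions, not tuned values:

```python
# Format a chat message with the Gemma chat template, then generate.
messages = [{"role": "user", "content": "Yaar, kal ka exam kaisa gaya?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
output = text_gen_pipeline(prompt, max_new_tokens=100, do_sample=True)
print(output[0]["generated_text"])
```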
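If you would rather ship a single standalone checkpoint (for example, to produce an fp16 variant like the one linked above), you can merge the adapter into the base weights. A minimal sketch using PEFT's `merge_and_unload`, continuing from the loading code above; the save path and the fp16 cast are illustrative assumptions:

```python
# Merge the LoRA adapter into the base model weights.
merged = model.merge_and_unload()
merged = merged.half()  # optional: cast to fp16 for a smaller checkpoint
merged.save_pretrained("gemma3-genz-270m-merged-fp16")
tokenizer.save_pretrained("gemma3-genz-270m-merged-fp16")
```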