---
base_model: google/gemma-3-270m-it
library_name: peft
pipeline_tag: text-generation
tags:
- lora
- sft
- transformers
- trl
license: gemma
---

# Gemma 3 270M Fine-Tuned with LoRA

This model is a **fine-tuned derivative of Google's Gemma 3 270M**, trained with LoRA and hosted here in **fp32**.

It was fine-tuned by **Toheed Akhtar** on a small dataset of Gen Z conversations in **Hinglish**, focusing on casual interactions among college students.

An **fp16** variant is available here: [Tohidichi/gemma3-genz16-270m](https://huggingface.co/Tohidichi/gemma3-genz16-270m/)

## Model Details

- **Developed by:** Toheed Akhtar
- **Model type:** Causal Language Model (text-generation)
- **Language(s):** Multilingual (Hinglish focus)
- **License:** Subject to [Gemma Terms of Use](https://ai.google.dev/gemma/terms)
- **Finetuned from model:** google/gemma-3-270m-it

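If you want to confirm which base checkpoint the adapter was trained against before downloading any weights, you can read the adapter config straight from the Hub. A minimal sketch, using the repo id from this card:

```python
from peft import PeftConfig

# Fetch only the adapter config (adapter_config.json), not the weights
config = PeftConfig.from_pretrained("Tohidichi/gemma3-genz-270m")

print(config.base_model_name_or_path)  # -> google/gemma-3-270m-it
print(config.peft_type)                # -> PeftType.LORA
```
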
## Intended Use

This model is designed for **casual text generation**, simulating informal Gen Z conversations in Hinglish. It is mainly intended for **personal experimentation**.

### Out-of-Scope Use

- The model may not produce accurate or safe content and should not be relied on for critical applications.

## How to Get Started

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel

# Set device for the pipeline
device = 0 if torch.cuda.is_available() else -1  # 0 = first GPU, -1 = CPU

# Load the base model in fp32 to match this repo's adapter
# (see the fp16 variant linked above for half precision)
base_model_name = "google/gemma-3-270m-it"
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    torch_dtype=torch.float32,
)

# Load the PEFT LoRA adapter from the Hugging Face Hub on top of the base model
peft_model_hf = "Tohidichi/gemma3-genz-270m"
model = PeftModel.from_pretrained(base_model, peft_model_hf)
model.eval()

# Load the tokenizer from the PEFT model repo
tokenizer = AutoTokenizer.from_pretrained(peft_model_hf)

# Create a text-generation pipeline
text_gen_pipeline = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device=device,
)
```
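Since the base checkpoint is instruction-tuned, the pipeline works best with chat-formatted input, which is passed through the tokenizer's Gemma chat template automatically. A minimal sketch; the prompt text and sampling settings below are purely illustrative:

```python
# Chat-formatted prompt; the pipeline applies the chat template for you
messages = [
    {"role": "user", "content": "Yaar, exams ke baad kya plan hai?"},
]

outputs = text_gen_pipeline(
    messages,
    max_new_tokens=100,  # illustrative sampling settings
    do_sample=True,
    temperature=0.7,
)

# For chat input, generated_text holds the whole conversation;
# the last message is the model's reply
print(outputs[0]["generated_text"][-1]["content"])
```

If you plan to run many generations, you can optionally fold the adapter into the base weights first with `model = model.merge_and_unload()`, which removes the LoRA indirection at inference time.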