---
base_model: google/gemma-3-270m-it
library_name: peft
pipeline_tag: text-generation
tags:
- lora
- sft
- transformers
- trl
license: gemma
---
# Gemma 3 270M Fine-Tuned with LoRA
This model is a **fine-tuned derivative of Google's Gemma 3 270M** trained with LoRA; this repository hosts the **fp32** version.
It was fine-tuned by **Toheed Akhtar** on a small dataset of Gen Z conversations in **Hinglish**, focusing on casual interactions among college students.
An **fp16** variant is available at [Tohidichi/gemma3-genz16-270m](https://huggingface.co/Tohidichi/gemma3-genz16-270m/).
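If you would rather ship a standalone checkpoint than load the base model plus adapter at runtime, PEFT's `merge_and_unload` can fold the LoRA weights into the base model. A minimal sketch, assuming you want a merged fp16 copy (this is not necessarily how the linked fp16 repo was produced, and the output path is hypothetical):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model in fp32 and attach the LoRA adapter
base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-270m-it", torch_dtype=torch.float32
)
model = PeftModel.from_pretrained(base, "Tohidichi/gemma3-genz-270m")

# Fold the adapter weights into the base weights, cast, and save
merged = model.merge_and_unload()
merged = merged.half()  # optional: fp16 copy for smaller downloads
merged.save_pretrained("gemma3-genz-merged")  # hypothetical output path
```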
## Model Details
- **Developed by:** Toheed Akhtar
- **Model type:** Causal Language Model (text-generation)
- **Language(s):** Multilingual (Hinglish focus)
- **License:** Subject to [Gemma Terms of Use](https://ai.google.dev/gemma/terms)
- **Finetuned from model:** google/gemma-3-270m-it
## Intended Use
This model is designed for **casual text generation**, simulating informal Gen Z conversations in Hinglish. It is mainly intended for **personal experimentation**.
### Out-of-Scope Use
- Not intended for critical or safety-sensitive applications; outputs may be inaccurate, biased, or otherwise unsafe.
## How to Get Started
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel

# Set device for the pipeline: 0 = first GPU, -1 = CPU
device = 0 if torch.cuda.is_available() else -1

# Load the base model
base_model_name = "google/gemma-3-270m-it"
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    torch_dtype=torch.float16,
)

# Apply the LoRA adapter from the Hugging Face Hub
peft_model_hf = "Tohidichi/gemma3-genz-270m"
model = PeftModel.from_pretrained(base_model, peft_model_hf)
model.eval()

# Load the tokenizer from the adapter repo
tokenizer = AutoTokenizer.from_pretrained(peft_model_hf)

# Create a text-generation pipeline
text_gen_pipeline = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device=device,
)
```