# Fine-tuned Gemma 3 Model (4B, 4-bit) by Webkul
This repository contains a fine-tuned version of Unsloth's `gemma-3-4b-it` model, optimized for lightweight 4-bit inference and instruction tuning using Hugging Face's TRL and Unsloth's speed-optimized framework.
## Model Details
- Base Model: `unsloth/gemma-3-4b-it-unsloth-bnb-4bit`
- Fine-tuned By: Webkul
- License: Apache 2.0
- Language: English (`en`)
- Model Size: 4B parameters (4-bit quantized)
- Frameworks Used: Unsloth, Hugging Face Transformers, TRL (see the loading sketch below)
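Because the checkpoint is published as a 4-bit Unsloth quantization, it can also be loaded through Unsloth's optimized loader for faster inference. A minimal sketch, assuming the `unsloth` package is installed; the `max_seq_length` value is illustrative:

```python
from unsloth import FastLanguageModel

# Load the 4-bit checkpoint with Unsloth's optimized loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="webkul/gemma-3-4b-it-unopim-docs",
    max_seq_length=2048,  # illustrative context length
    load_in_4bit=True,
)

# Enable Unsloth's faster inference kernels.
FastLanguageModel.for_inference(model)
```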
## Fine-tuning Dataset
This model was fine-tuned on the UnoPIM developer documentation available at https://devdocs.unopim.com/, focusing on structured software documentation and developer-support content.
## Intended Use
- Conversational AI assistants trained on UnoPIM developer docs
- API documentation question answering
- Developer tools and chatbot integrations
- Contextual helpdesk or onboarding bots for UnoPIM products
## How to Use
You can use this model with the Hugging Face `transformers` library:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "webkul/gemma-3-4b-it-unopim-docs"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# device_map="auto" places the model on a GPU when one is available
# (requires the accelerate package).
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

input_text = "How do I integrate the UnoPIM API for product syncing?"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
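Since this is an instruction-tuned Gemma 3 checkpoint, prompts generally perform better when formatted with the model's chat template. A minimal sketch reusing the `model` and `tokenizer` from above; the question string is illustrative:

```python
messages = [
    {"role": "user", "content": "How do I create a custom attribute in UnoPIM?"}
]

# Format the conversation with the chat template the model was trained on.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the marker for the assistant's turn
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=300)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```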
## License

This model is licensed under the Apache License 2.0.
## Model Tree for webkul/unopim-devdocs

- Base model: `google/gemma-3-4b-pt`
- Fine-tuned: `google/gemma-3-4b-it`
- Quantized: `unsloth/gemma-3-4b-it-unsloth-bnb-4bit`