# Gemma 3 (4B) Fine-Tuned on UnoPIM Docs by Webkul

This is a fine-tuned version of `unsloth/gemma-3-4b-it-unsloth-bnb-4bit`, optimized and accelerated with Unsloth and Hugging Face's TRL for instruction-based text generation tasks.
## Model Summary

- Base Model: `unsloth/gemma-3-4b-it-unsloth-bnb-4bit`
- Fine-Tuned By: Webkul
- License: Apache-2.0
- Language: English
- Model Type: Instruction-tuned (4-bit quantized)
- Training Boost: ~2x faster training with Unsloth optimizations
## Fine-Tuning Dataset

This model has been fine-tuned specifically on the official UnoPIM documentation and user guides.

Content covered:
- Product Information Management (PIM) workflows
- Admin dashboard and module configurations
- API usage and endpoints
- User roles and access control
- Product import/export and sync logic
- Custom field and attribute setups
- Troubleshooting and common use cases
## Use Cases

This model is designed for:

- Q&A on UnoPIM documentation
- Chatbots for UnoPIM technical support
- Contextual assistants inside dev tools
- Knowledge base automation for onboarding users
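For the chatbot use case, you typically need to carry conversation history between turns. The sketch below is illustrative and not part of the model card: the system prompt, function name, and truncation policy are all assumptions, and it presumes the tokenizer's chat template accepts a `system` message.

```python
# Illustrative sketch of bounded chat-history management for a support bot.
# SYSTEM_PROMPT and build_messages are assumed names, not part of this model.
SYSTEM_PROMPT = "Answer only from the UnoPIM documentation."

def build_messages(history, question, max_turns=6):
    """Return a chat message list: system prompt, recent turns, new question.

    Each completed turn in `history` is a user/assistant pair, so the slice
    keeps at most 2 * max_turns entries to bound the context length.
    """
    recent = history[-2 * max_turns:]
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + list(recent)
        + [{"role": "user", "content": question}]
    )

msgs = build_messages([], "How do I configure user roles?")
print(len(msgs))  # → 2 (system prompt plus the new user turn)
```

The resulting list can be passed to a `transformers` text-generation pipeline or to `tokenizer.apply_chat_template` before calling `model.generate`.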
## Quick Start

You can run this model with Hugging Face's `transformers` library:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "webkul/gemma-3-4b-it-unopim-docs"

# Load the tokenizer and the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Ask a question about UnoPIM and generate an answer
prompt = "How can I import products in bulk using UnoPIM?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
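Since this is an instruction-tuned Gemma checkpoint, it generally responds better when the prompt is wrapped in chat turn markers rather than passed as raw text. A minimal sketch, assuming the standard Gemma 3 turn markers; in practice, prefer `tokenizer.apply_chat_template`, which applies the correct template automatically:

```python
# Hedged sketch: wrap a single user question in Gemma-style turn markers.
# The marker strings assume the standard Gemma chat format.
def format_gemma_prompt(user_message: str) -> str:
    """Wrap one user turn in Gemma's turn markers and open the model turn."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma_prompt("How can I import products in bulk using UnoPIM?")
print(prompt)
```

The resulting string can be tokenized and passed to `model.generate` exactly as in the Quick Start example above.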
## License

This model is distributed under the Apache 2.0 License. See LICENSE for more information.
## Model Tree

`webkul/unopim-docs-gemma-finetuned` derives from:

- Base model: `google/gemma-3-4b-pt`
- Fine-tuned: `google/gemma-3-4b-it`
- Quantized: `unsloth/gemma-3-4b-it-unsloth-bnb-4bit`