---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---

# 🧠 Gemma 3 (4B) Fine-Tuned on UnoPIM Docs by Webkul

This is a fine-tuned version of [`unsloth/gemma-3-4b-it-unsloth-bnb-4bit`](https://huggingface.co/unsloth/gemma-3-4b-it-unsloth-bnb-4bit), optimized and accelerated with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL for instruction-based text generation tasks.

---

## πŸ” Model Summary

- **Base Model:** `unsloth/gemma-3-4b-it-unsloth-bnb-4bit`
- **Fine-Tuned By:** [Webkul](https://webkul.com)
- **License:** Apache-2.0
- **Language:** English
- **Model Type:** Instruction-tuned (4-bit quantized)
- **Training Boost:** ~2x faster training with Unsloth optimizations
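
Because the base checkpoint is 4-bit quantized, you will typically want to load this model with a matching quantization configuration. The snippet below is a minimal sketch using `transformers` with `bitsandbytes`; the specific quantization parameters (nf4, bfloat16 compute) are assumptions for illustration, not the settings used during training.

```python
# Hedged sketch: loading the model in 4-bit with bitsandbytes.
# The nf4/bfloat16 settings are assumptions, not the exact training config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "webkul/gemma-3-4b-it-unopim-docs"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # requires the accelerate package
)
```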

---

## 📚 Fine-Tuning Dataset

This model has been fine-tuned specifically on official UnoPIM documentation and user guides available at:

👉 **[https://docs.unopim.com/](https://docs.unopim.com/)**

### Content Covered:

- Product Information Management (PIM) workflows
- Admin dashboard and module configurations
- API usage and endpoints
- User roles and access control
- Product import/export and sync logic
- Custom field and attribute setups
- Troubleshooting and common use cases

---

## 💡 Use Cases

This model is designed for:

- 🧾 **Q&A on UnoPIM documentation**
- 💬 **Chatbots for UnoPIM technical support**
- 🧠 **Contextual assistants inside dev tools**
- πŸ› οΈ **Knowledge base automation for onboarding users**

---

## 🚀 Quick Start

You can run this model with Hugging Face's `transformers` library:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "webkul/gemma-3-4b-it-unopim-docs"

# Load the tokenizer and the model; device_map="auto" uses a GPU when available
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Tokenize the prompt and move it to the same device as the model
prompt = "How can I import products in bulk using UnoPIM?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate and decode the answer
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
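
Since the base model is instruction-tuned, chat-style prompting through the tokenizer's chat template generally works better than a raw text prompt. The following is a minimal sketch, assuming this repository ships the Gemma 3 chat template; the example question is illustrative.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "webkul/gemma-3-4b-it-unopim-docs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Format the conversation with the model's chat template
messages = [
    {"role": "user", "content": "Which user roles does UnoPIM support?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate, then decode only the newly generated tokens (skip the prompt)
outputs = model.generate(input_ids, max_new_tokens=300)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```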

---

## 📄 License

This model is distributed under the Apache 2.0 License. See LICENSE for more information.