# 🧠 Gemma 3 (4B) Fine-Tuned on UnoPIM Docs by Webkul

This is a fine-tuned version of [`unsloth/gemma-3-4b-it-unsloth-bnb-4bit`](https://huggingface.co/unsloth/gemma-3-4b-it-unsloth-bnb-4bit), optimized and accelerated with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL for instruction-based text generation tasks.

---

## 📌 Model Summary

- **Base Model:** `unsloth/gemma-3-4b-it-unsloth-bnb-4bit`
- **Fine-Tuned By:** [Webkul](https://webkul.com)
- **License:** Apache-2.0
- **Language:** English
- **Model Type:** Instruction-tuned (4-bit quantized)
- **Training Boost:** ~2x faster training with Unsloth optimizations

---

## 📚 Fine-Tuning Dataset

This model has been fine-tuned specifically on the official UnoPIM documentation and user guides available at:

🔗 **[https://docs.unopim.com/](https://docs.unopim.com/)**

### Content Covered:

- Product Information Management (PIM) workflows
- Admin dashboard and module configurations
- API usage and endpoints
- User roles and access control
- Product import/export and sync logic
- Custom field and attribute setups
- Troubleshooting and common use cases

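Documentation like the sections above is typically converted into instruction/response pairs before supervised fine-tuning. The record layout and helper below are a hypothetical sketch of that step, not Webkul's actual preprocessing pipeline:

```python
# Sketch: converting documentation sections into instruction/response pairs
# for supervised fine-tuning. The record layout is a hypothetical
# illustration, not Webkul's actual preprocessing pipeline.

def docs_to_pairs(sections):
    """Turn {title, body} doc sections into prompt/completion records."""
    pairs = []
    for section in sections:
        pairs.append({
            "prompt": f"Explain the following UnoPIM topic: {section['title']}",
            "completion": section["body"].strip(),
        })
    return pairs

sections = [
    {"title": "Bulk Product Import",
     "body": "Open the import module, upload a CSV, and map its columns to attributes."},
]
print(docs_to_pairs(sections)[0]["prompt"])
# → Explain the following UnoPIM topic: Bulk Product Import
```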
---

## 💡 Use Cases

This model is designed for:

- 🧾 **Q&A on UnoPIM documentation**
- 💬 **Chatbots for UnoPIM technical support**
- 🧠 **Contextual assistants inside dev tools**
- 🛠️ **Knowledge base automation for onboarding users**

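For documentation Q&A and support chatbots, the model is usually paired with a retrieval step that selects relevant doc snippets to place in the prompt. A minimal keyword-overlap retriever, shown purely as an illustration (this helper is not part of the model):

```python
# Minimal keyword-overlap retriever for documentation Q&A (illustrative only;
# real deployments typically use embedding-based search instead).

def top_snippet(question, snippets):
    """Return the snippet sharing the most whitespace-separated words with the question."""
    q_words = set(question.lower().split())
    return max(snippets, key=lambda s: len(q_words & set(s.lower().split())))

snippets = [
    "Products can be imported in bulk from the import screen using a CSV file.",
    "User roles control which admin modules each team member can access.",
]
question = "How do I import products in bulk?"
context = top_snippet(question, snippets)
# The retrieved snippet is then prepended to the model prompt:
prompt = f"Context: {context}\nQuestion: {question}"
```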
---

## 🚀 Quick Start

You can run this model with Hugging Face's `transformers` library:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "webkul/gemma-3-4b-it-unopim-docs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "How can I import products in bulk using UnoPIM?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
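Because the base model is instruction-tuned, prompts generally work better when wrapped in Gemma's chat-turn markers, which `tokenizer.apply_chat_template` produces automatically. As a sketch of the underlying turn structure:

```python
# Gemma-style chat-turn formatting. tokenizer.apply_chat_template() emits
# this structure automatically; the sketch below just makes it explicit.

def format_gemma_prompt(user_message):
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma_prompt("How can I import products in bulk using UnoPIM?")
```

The returned string can be passed to `tokenizer(...)` in place of the raw prompt in the Quick Start snippet.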

---

## 📄 License

This model is distributed under the Apache 2.0 License. See the LICENSE file for more information.