language:
- en
---
 
# 🧠 Gemma 3 (4B) Fine-Tuned on UnoPIM Docs by Webkul

This is a fine-tuned version of [`unsloth/gemma-3-4b-it-unsloth-bnb-4bit`](https://huggingface.co/unsloth/gemma-3-4b-it-unsloth-bnb-4bit), optimized and accelerated with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library for instruction-based text generation tasks.

---

## 🔍 Model Summary

- **Base Model:** `unsloth/gemma-3-4b-it-unsloth-bnb-4bit`
- **Fine-Tuned By:** [Webkul](https://webkul.com)
- **License:** Apache-2.0
- **Language:** English
- **Model Type:** Instruction-tuned (4-bit quantized); a loading sketch follows this list
- **Training Boost:** ~2x faster training with Unsloth optimizations
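
Since the base checkpoint is a bitsandbytes 4-bit quantization, memory-constrained setups can load the fine-tune the same way. Here is a minimal sketch, assuming `bitsandbytes` and `accelerate` are installed (the NF4/bfloat16 settings are common defaults, not values published on this card):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 loading, mirroring the bnb-4bit base checkpoint.
# Assumption: the GPU supports bfloat16; use torch.float16 otherwise.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "webkul/gemma-3-4b-it-unopim-docs",
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate
)
```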

---

## 📚 Fine-Tuning Dataset

This model has been fine-tuned specifically on the official UnoPIM documentation and user guides available at:

👉 **[https://docs.unopim.com/](https://docs.unopim.com/)**

### Content Covered

The material spans the following topics (an illustrative record format is sketched after the list):

- Product Information Management (PIM) workflows
- Admin dashboard and module configurations
- API usage and endpoints
- User roles and access control
- Product import/export and sync logic
- Custom field and attribute setups
- Troubleshooting and common use cases
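
The card doesn't publish the underlying training records, so the exact pair format is unknown. Purely as an illustration, documentation-grounded instruction tuning commonly uses records shaped like this (hypothetical example; the question, answer placeholder, and field names are assumptions, not data from the actual set):

```python
# Hypothetical shape of a single instruction-tuning record (illustrative only;
# the real dataset records are not published on this card).
example_record = {
    "instruction": "How do I create a custom attribute in UnoPIM?",
    "response": "<answer text drawn from https://docs.unopim.com/>",
}
```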

---

## 💡 Use Cases

This model is designed for:

- 🧾 **Q&A on UnoPIM documentation**
- 💬 **Chatbots for UnoPIM technical support** (a chat sketch follows this list)
- 🧠 **Contextual assistants inside dev tools**
- 🛠️ **Knowledge base automation for onboarding users**
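
For the support-chatbot case, here is a minimal sketch built on the `transformers` text-generation pipeline (the system prompt and question are illustrative; requires a transformers version recent enough to route message lists through the chat template):

```python
from transformers import pipeline

chatbot = pipeline("text-generation", model="webkul/gemma-3-4b-it-unopim-docs")

messages = [
    # Assumption: the chat template accepts a system turn; if it rejects one,
    # fold these instructions into the first user message instead.
    {"role": "system", "content": "You are a helpful support assistant for UnoPIM."},
    {"role": "user", "content": "How do I set up user roles and permissions?"},
]

result = chatbot(messages, max_new_tokens=256)
# The pipeline returns the full conversation; the last turn is the reply.
print(result[0]["generated_text"][-1]["content"])
```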

---

## 🚀 Quick Start

You can run this model with Hugging Face’s `transformers` library:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "webkul/gemma-3-4b-it-unopim-docs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map needs accelerate

# Gemma 3 is instruction-tuned, so format the question with the chat
# template rather than passing a bare string.
messages = [{"role": "user", "content": "How can I import products in bulk using UnoPIM?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
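
For interactive use, the reply can also be streamed token by token; a small variation reusing the `model`, `tokenizer`, and `inputs` objects from the snippet above:

```python
from transformers import TextStreamer

# Prints tokens to stdout as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(inputs, max_new_tokens=300, streamer=streamer)
```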

---

## 📄 License

This model is distributed under the Apache 2.0 License. See LICENSE for more information.