Upload model
#11 · by viswesz · opened

Files changed:
- README.md (+14, -12)
- adapter_model.bin (+1, -1)
README.md CHANGED

@@ -1,18 +1,20 @@
 ---
-
-language:
-- en
-library_name: transformers
-pipeline_tag: text-generation
+library_name: peft
 ---
+## Training procedure
 
-[Falcon 7b](https://huggingface.co/tiiuae/falcon-7b) model trained on [FAQ from an ecommerce website](https://www.kaggle.com/datasets/saadmakhdoom/ecommerce-faq-chatbot-dataset).
 
-
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: bfloat16
+### Framework versions
 
-- Text tutorial: https://www.mlexpert.io/prompt-engineering/fine-tuning-llm-on-custom-dataset-with-qlora
-- YouTube video: https://www.youtube.com/watch?v=DcBC4yGHV4Q
 
-
-
-Fine-tuned using the QLoRA technique (only adapter uploaded). With the help of bitsandbytes, peft and transformers. Full reproduction is available in the tutorials.
+- PEFT 0.4.0.dev0
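The quantization settings recorded in the new README map directly onto a `transformers` `BitsAndBytesConfig`. A minimal sketch of recreating that config and attaching the uploaded QLoRA adapter to the base model follows; the adapter repo id is a placeholder, since this PR does not name the Hub repo it lives in:

```python
# Sketch only: recreates the bitsandbytes config from the README diff above and
# loads the uploaded adapter on top of the base Falcon-7b model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

ADAPTER_REPO = "your-username/falcon-7b-qlora-faq"  # hypothetical repo id

# Mirrors the values listed in the README (4-bit NF4, double quantization,
# bfloat16 compute). Fields left at their defaults match the False/None entries.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Falcon-7b ships custom modeling code, hence trust_remote_code=True.
base_model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

# Attach the QLoRA adapter weights (adapter_model.bin) from this repo.
model = PeftModel.from_pretrained(base_model, ADAPTER_REPO)
```

Since only the adapter is uploaded, the base weights are always fetched from `tiiuae/falcon-7b`; the adapter file itself is only ~18 MB.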
adapter_model.bin CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:7fb23f5af873a5a1ddfc7ce9cc3cc6d183a39be505be9eb2152fa7eb7fedfaa8
 size 18898161
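For context: `adapter_model.bin` is stored via Git LFS, so the diff above changes only the pointer file (its `oid` and `size`), not the binary payload itself. A minimal sketch, assuming the adapter file has already been downloaded locally, for checking the artifact against the new pointer's sha256:

```python
# Sketch: verify a downloaded adapter_model.bin against the LFS pointer above.
# The local path is an assumption -- point it at wherever the file was saved.
import hashlib

EXPECTED_OID = "7fb23f5af873a5a1ddfc7ce9cc3cc6d183a39be505be9eb2152fa7eb7fedfaa8"

def sha256_of(path: str) -> str:
    """Stream the file in 1 MiB chunks and return its hex sha256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

digest = sha256_of("adapter_model.bin")
assert digest == EXPECTED_OID, f"hash mismatch: {digest}"
print("adapter_model.bin matches the LFS pointer oid")
```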