Commit 193e0b0
Parent(s): 93414a9
Update README.md
README.md CHANGED
@@ -1,8 +1,12 @@
 ---
 library_name: peft
 ---
-
+# Introduction
+Vicuna-style model based on LoRA tuning
+- Dataset: https://huggingface.co/datasets/Aeala/ShareGPT_Vicuna_unfiltered
+- Base model: https://huggingface.co/huggyllama/llama-7b
 
+# Training procedure
 
 The following `bitsandbytes` quantization config was used during training:
 - load_in_8bit: True
@@ -14,7 +18,5 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_quant_type: fp4
 - bnb_4bit_use_double_quant: False
 - bnb_4bit_compute_dtype: float32
-
-
-
+## Framework versions
 - PEFT 0.4.0
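For context, the quantization settings listed in the updated README correspond to a `BitsAndBytesConfig` in `transformers`, and the adapter itself is applied with `peft`. The sketch below is illustrative only, not part of the commit: it assumes the standard `transformers`/`peft` loading pattern from the PEFT 0.4.0 era, and the adapter id `<user>/<adapter-repo>` is a placeholder because the commit does not name the repository.

```python
# Illustrative sketch only: mirrors the quantization config recorded in the
# README and attaches the LoRA adapter to the base model. The adapter id
# "<user>/<adapter-repo>" is a placeholder, not the actual repository name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Values recorded in the model card. With load_in_8bit=True, the 4-bit fields
# (fp4, no double quantization, float32 compute dtype) are library defaults
# and are not actually used at load time.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

# Base model named in the README.
base_model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

# Load the LoRA weights from this repository (placeholder id).
model = PeftModel.from_pretrained(base_model, "<user>/<adapter-repo>")
model.eval()
```

With PEFT 0.4.0, the version listed under Framework versions, `PeftModel.from_pretrained` keeps the LoRA weights separate from the quantized base model rather than merging them.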