QLoRA merge and load requires that the base model isn't loaded in 4-bit or 8-bit
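The constraint in the title — adapter weights can only be folded back into a full-precision base model, so the merge step must not load it quantized — can be sketched as a toy guard. This is an illustrative function, not axolotl's actual API; only the two flag names come from the change below.

```python
def can_merge_lora(load_in_8bit: bool, load_in_4bit: bool) -> bool:
    """Merging LoRA weights rewrites the base model's weight tensors,
    which is only possible when they are held in full precision --
    not as packed 8-bit or 4-bit quantized buffers."""
    return not (load_in_8bit or load_in_4bit)

# Merge is only permitted when both quantized-load flags are off.
assert can_merge_lora(False, False)
assert not can_merge_lora(True, False)   # base loaded in 8-bit: refuse
assert not can_merge_lora(False, True)   # base loaded in 4-bit: refuse
```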
README.md CHANGED

@@ -24,7 +24,7 @@
 
 ## Quickstart ⚡
 
-**Requirements**: Python 3.9.
+**Requirements**: Python 3.9.
 
 ```bash
 git clone https://github.com/OpenAccess-AI-Collective/axolotl

@@ -45,7 +45,7 @@ accelerate launch scripts/finetune.py examples/4bit-lora-7b/config.yml \
 
 ### Environment
 
-- Docker
+- Docker
 ```bash
 docker run --gpus '"all"' --rm -it winglian/axolotl:main
 ```

@@ -332,7 +332,7 @@ seed:
 
 ### Accelerate
 
-Configure accelerate
+Configure accelerate
 
 ```bash
 accelerate config

@@ -363,12 +363,18 @@ Pass the appropriate flag to the train command:
 
 ### Merge LORA to base
 
-Add below flag to train command above
+Add below flag to train command above (and using LoRA)
 
 ```bash
 --merge_lora --lora_model_dir="./completed-model"
 ```
 
+Add below flag to train command above (and using QLoRA)
+
+```bash
+--merge_lora --lora_model_dir="./completed-model" --load_in_8bit False --load_in_4bit False
+```
+
 ## Common Errors 🧰
 
 > Cuda out of memory

@@ -383,7 +389,7 @@ Please reduce any below
 Try set `fp16: true`
 
 ## Need help? 🙋‍♂️
-
+
 Join our [Discord server](https://discord.gg/HhrNrHJPRb) where we can help you
 
 ## Contributing 🤝
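Why the base model must not be quantized here: merging a LoRA adapter amounts to folding the low-rank delta into the base weight, W' = W + (alpha / r) · B · A, which requires the weight tensor to be materialized in full precision rather than as a packed 4-bit/8-bit buffer. A minimal numpy sketch of that arithmetic — shapes and names are illustrative, not axolotl's code:

```python
import numpy as np

def merge_lora_weight(W, A, B, alpha, r):
    """Return the merged weight W + (alpha / r) * B @ A.

    W: (out, in) base weight; A: (r, in) down-projection;
    B: (out, r) up-projection; alpha / r is the LoRA scaling factor.
    """
    return W + (alpha / r) * (B @ A)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))   # toy base weight
A = rng.standard_normal((2, 16))   # rank-2 adapter, down-projection
B = rng.standard_normal((8, 2))    # up-projection
merged = merge_lora_weight(W, A, B, alpha=16, r=2)
```

After the fold, the adapter is no longer needed and the result can be saved as a plain, adapter-free checkpoint.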