Feat(readme): improve docs on multi-gpu
README.md CHANGED

@@ -36,8 +36,6 @@ git clone https://github.com/OpenAccess-AI-Collective/axolotl
 pip3 install -e .
 pip3 install -U git+https://github.com/huggingface/peft.git
 
-accelerate config
-
 # finetune lora
 accelerate launch scripts/finetune.py examples/openllama-3b/lora.yml
 
@@ -525,6 +523,21 @@ Run
 accelerate launch scripts/finetune.py configs/your_config.yml
 ```
 
+#### Multi-GPU Config
+
+- llama FSDP
+```yaml
+fsdp:
+  - full_shard
+  - auto_wrap
+fsdp_config:
+  fsdp_offload_params: true
+  fsdp_state_dict_type: FULL_STATE_DICT
+  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
+```
+
+- llama Deepspeed: prepend `ACCELERATE_USE_DEEPSPEED=true` to the finetune command
+
 ### Inference
 
 Pass the appropriate flag to the train command:
@@ -575,6 +588,10 @@ Try set `fp16: true`
 
 Try to turn off xformers.
 
+> Message about accelerate config missing
+
+It's safe to ignore it.
+
 ## Need help? 🙋♂️
 
 Join our [Discord server](https://discord.gg/HhrNrHJPRb) where we can help you
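The Deepspeed instruction in the diff works by prefixing an environment variable to a single command, which sets the variable for that child process only. A minimal sketch of the mechanism, using `python3` as a stand-in for `accelerate launch` (the real invocation needs GPUs and an axolotl checkout):

```shell
# Real command from the README (not run here):
#   ACCELERATE_USE_DEEPSPEED=true accelerate launch scripts/finetune.py examples/openllama-3b/lora.yml
# The VAR=value prefix exports the variable to that one command only:
ACCELERATE_USE_DEEPSPEED=true python3 -c 'import os; print(os.environ.get("ACCELERATE_USE_DEEPSPEED"))'
# prints: true

# The parent shell is unaffected:
echo "${ACCELERATE_USE_DEEPSPEED:-unset}"
# prints: unset
```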