---
license: other
language:
- en
pipeline_tag: text2text-generation
tags:
- alpaca
- llama
- chat
- gpt4
---
This is the HF format merged model for [chansung's gpt4-alpaca-lora-13b](https://huggingface.co/chansung/gpt4-alpaca-lora-13b).
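If you just want to run it, something like the following should work with 🤗 Transformers (a minimal sketch only; the model id is a placeholder for this repository's id or a local path, and the prompt follows the usual Alpaca instruction template):

```python
# Minimal sketch: loading the merged HF-format weights with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/or/repo-id-of-this-model"  # placeholder: substitute this repo's id or a local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 keeps the 13B weights to a manageable memory footprint
    device_map="auto",
)

# Standard Alpaca-style prompt, as used by the Alpaca-LoRA fine-tunes
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a haiku about llamas.\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```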
## Want to support my work?
I've had a lot of people ask if they can contribute. I love providing models and helping people, but the cloud computing bills are starting to add up.
So if you're able and willing to contribute, it would be most gratefully received and will help me keep providing models and working on various AI projects.
Donors will get priority support on any and all AI/LLM/model questions, and I'll gladly quantise any model you'd like to try.
* Patreon: coming soon! (just awaiting approval)
* Ko-Fi: https://ko-fi.com/TheBlokeAI
* Discord: https://discord.gg/UBgz4VXf
# Original model card
This repository comes with a LoRA checkpoint that turns LLaMA into a chatbot-like language model. The checkpoint is the output of instruction-following fine-tuning with the following settings on an 8xA100 (40G) DGX system.
- Training script: borrowed from the official [Alpaca-LoRA](https://github.com/tloen/alpaca-lora) implementation
- Training command:
```shell
python finetune.py \
--base_model='decapoda-research/llama-30b-hf' \
--data_path='alpaca_data_gpt4.json' \
--num_epochs=10 \
--cutoff_len=512 \
--group_by_length \
--output_dir='./gpt4-alpaca-lora-30b' \
--lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
--lora_r=16 \
--batch_size=... \
--micro_batch_size=...
```
You can see how the training went in the W&B report [here](https://wandb.ai/chansung18/gpt4_alpaca_lora/runs/w3syd157?workspace=user-chansung18).
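For reference, a LoRA checkpoint like the one above can be folded back into the base model with `peft` to produce plain HF-format weights, which is the kind of merge this repository contains. This is only an illustrative sketch, not the exact script used for this repo; the base model path is a placeholder:

```python
# Sketch only: merging a LoRA adapter into base LLaMA weights to get HF-format weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "path/to/llama-13b-hf"           # placeholder: base LLaMA weights in HF format
lora_id = "chansung/gpt4-alpaca-lora-13b"  # the LoRA adapter this repo is derived from

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, lora_id)

# Fold the LoRA deltas into the base weights and drop the adapter wrappers
merged = model.merge_and_unload()

merged.save_pretrained("./gpt4-alpaca-13b-merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("./gpt4-alpaca-13b-merged")
```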