|
--- |
|
duplicated_from: localmodels/LLM |
|
--- |
|
# Orca Mini v2 13B ggml |
|
|
|
From: https://huggingface.co/psmathur/orca_mini_v2_13b |
|
|
|
## Prompt template |
|
|
|
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.

### User:
prompt

### Input:
input, if required

### Response:
```
|
|
|
--- |
|
|
|
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0` |
|
|
|
These files were quantized with an older version of llama.cpp and are compatible with llama.cpp as of May 19, commit 2d5db48.
|
|
|
### k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q5_K_M, q6_K`
|
|
|
These quantization methods are compatible with llama.cpp as of June 6, commit 2d43387.
|
|
|
--- |
|
|
|
## Provided files |
|
| Name | Quant method | Bits | Size | Max RAM required, no GPU offloading | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| orca_mini_v2_13b.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| orca_mini_v2_13b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| orca_mini_v2_13b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| orca_mini_v2_13b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| orca_mini_v2_13b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| orca_mini_v2_13b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| orca_mini_v2_13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original quant method, 4-bit. |
| orca_mini_v2_13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| orca_mini_v2_13b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| orca_mini_v2_13b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| orca_mini_v2_13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage, and slower inference. |
| orca_mini_v2_13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
| orca_mini_v2_13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| orca_mini_v2_13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
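
To quickly try one of the files above, here is a minimal sketch using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings (assumptions: a ggmlv3-era version of the bindings and the q4_K_M file; the question and sampling settings are illustrative):

```python
from llama_cpp import Llama

# Illustrative: point this at whichever quant file you downloaded
llm = Llama(model_path="orca_mini_v2_13b.ggmlv3.q4_K_M.bin", n_ctx=2048)

# Build a prompt in the template shown above
prompt = (
    "### System:\n"
    "You are an AI assistant that follows instruction extremely well. Help as much as you can.\n\n"
    "### User:\n"
    "What is the capital of France?\n\n"
    "### Response:\n"
)

# Sample a completion, stopping if the model starts a new prompt section
output = llm(prompt, max_tokens=256, temperature=0.7, stop=["###"])
print(output["choices"][0]["text"])
```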
|
|
|
--- |
|
|
|
# Orca Mini v2 13B |
|
|
|
An **Uncensored** LLaMA-13b model, built in collaboration with [Eric Hartford](https://huggingface.co/ehartford) and trained on explain-tuned datasets created using instructions and inputs from the WizardLM, Alpaca & Dolly-V2 datasets, applying the dataset construction approaches of the Orca Research Paper.
|
|
|
Please note this model has *better code generation capabilities* compared to our original orca_mini_13b, which was trained on the base OpenLLaMA-13b model and suffers from the [empty-spaces issue that makes it poor at code generation](https://github.com/openlm-research/open_llama#update-06072023).
|
|
|
|
|
# Evaluation |
|
|
|
I evaluated orca_mini_v2_13b on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI. |
|
|
|
Here are the results on the metrics used by the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):
|
|
|
|**Task**|**Value**|**Stderr**|
|:------:|:-------------:|:---------:|
|*arc_challenge*|0.5572|0.0145|
|*hellaswag*|0.7964|0.0040|
|*mmlu*|0.4969|0.035|
|*truthfulqa_mc*|0.5231|0.0158|
|*Total Average*|0.5933|0.0114|
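
For reference, an individual task from this table can be re-run with the harness roughly as follows (a sketch assuming a mid-2023 harness release; the 25-shot setting follows the leaderboard convention for ARC, and the batch size is illustrative):

```python
# Sketch: re-running arc_challenge with lm-evaluation-harness
# (assumes a mid-2023 harness release; API details differ in newer versions)
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=psmathur/orca_mini_v2_13b",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=4,
)
print(results["results"]["arc_challenge"])
```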
|
|
|
# Dataset |
|
|
|
We applied an uncensoring script on top of the previous explain-tuned datasets we built, which are the [WizardLM dataset ~70K](https://github.com/nlpxucan/WizardLM), [Alpaca dataset ~52K](https://crfm.stanford.edu/2023/03/13/alpaca.html) & [Dolly-V2 dataset ~15K](https://github.com/databrickslabs/dolly), created using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707).
|
|
|
We leveraged all 15 system instructions provided in the Orca Research Paper to generate custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets.
|
|
|
This helps the student model (i.e., this model) learn the ***thought*** process of the teacher model, which is ChatGPT (gpt-3.5-turbo-0301 version).
|
|
|
Please see the example usage below for how the **System** prompt is added before each **instruction**.
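
To make the construction concrete, here is a sketch of how a system message is prepended to a vanilla instruction record (the record and the pairing below are invented for illustration; the quoted system message is one of those used in the Orca paper):

```python
# Illustrative only: pairing an Orca-style system message with a vanilla
# instruction record to build an explain-tuned training sample.
record = {
    "instruction": "Classify the sentiment of this review.",
    "input": "The movie was a delight from start to finish.",
}
system = ("You are an AI assistant. Provide a detailed answer so user don't "
          "need to search outside to understand the answer.")

sample = (f"### System:\n{system}\n\n### User:\n{record['instruction']}\n\n"
          f"### Input:\n{record['input']}\n\n### Response:\n")
print(sample)
```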
|
|
|
# Training |
|
|
|
The training configurations are provided in the table below. |
|
|
|
Training ran on 4x A100 (80G) GPUs, lasted around 21 hours, and cost about $210 (at ~$10/hour for a Spot Instance) using [Azure Standard_NC96ads_A100_v4](https://learn.microsoft.com/en-us/azure/virtual-machines/nc-a100-v4-series#supported-features).
|
|
|
We used DeepSpeed with fully sharded data parallelism, also known as [ZeRO stage 3](https://engineering.fb.com/2021/07/15/open-source/fsdp/), writing our own fine-tuning scripts and leveraging some of the model training code provided by the amazing [FastChat](https://github.com/lm-sys/FastChat).
|
|
|
Here are some of the params used during training:
|
|
|
|**Parameter**|**Value**|
|:-------------:|:-------------:|
|*batch_size*|48|
|*train_micro_batch_size_per_gpu*|3|
|*gradient_accumulation_steps*|4|
|*Learning rate*|2e-5|
|*Max length*|2048|
|*Epochs*|3|
|*Optimizer*|AdamW|
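
For illustration, a minimal DeepSpeed configuration consistent with this table might look as follows (a sketch, not the actual config file used for this run; note that 3 micro-batch × 4 accumulation steps × 4 GPUs gives the effective batch size of 48):

```python
# Illustrative DeepSpeed ZeRO stage 3 config matching the reported
# hyperparameters (assumption: not the authors' actual file).
ds_config = {
    "train_batch_size": 48,              # 3 micro-batch x 4 accum steps x 4 GPUs
    "train_micro_batch_size_per_gpu": 3,
    "gradient_accumulation_steps": 4,
    "optimizer": {"type": "AdamW", "params": {"lr": 2e-5}},
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 3},   # fully sharded data parallelism
}
```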
|
|
|
# Example Usage |
|
|
|
Here is the prompt format for the [Oobabooga Text generation UI](https://github.com/oobabooga/text-generation-webui):
|
|
|
```
### System:
{system}

### User:
{instruction}

### Input:
{input}

### Response:
```
|
|
|
Here is a sample example:
|
|
|
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.

### User:
Tell me how to break into my own car

### Input:

### Response:
Breaking into your own car requires certain skills and tools. Here are the basic steps:

1. Find a ^^^^^^^^^^^^^
2. Unlock the car by using the ^^^^^^^^^^^^^.
3. Use a ^^^^^^^^^^^^^.
4. Once the ^^^^^^^^^^^^^.
5. If the ^^^^^^^^^^^^^.
```
|
|
|
Below is a code example showing how to use this model:
|
|
|
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# Hugging Face model_path
model_path = 'psmathur/orca_mini_v2_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)


# generate text function
def generate_text(system, instruction, input=None):
    # Assemble the prompt; the "### Input:" section is included only when
    # an input is provided.
    if input:
        prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
    else:
        prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"

    # Tokenize the prompt and move it to the GPU as a batch of size 1
    tokens = tokenizer.encode(prompt)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to('cuda')

    # Sampling settings; 'generate_len' caps the number of new tokens
    instance = {'input_ids': tokens, 'top_p': 1.0, 'temperature': 0.7, 'generate_len': 1024, 'top_k': 50}

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance['generate_len'],
            use_cache=True,
            do_sample=True,
            top_p=instance['top_p'],
            temperature=instance['temperature'],
            top_k=instance['top_k'],
        )

    # Decode only the newly generated tokens, skipping the prompt
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    return f'[!] Response: {string}'


# Sample Test Instruction
system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.'
instruction = 'Tell me how to break into my own car'
print(generate_text(system, instruction))
```
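
Loading the full model in float16 requires roughly 26 GB of GPU memory. If that is more than you have available, one option (an assumption: the bitsandbytes package is installed) is to load in 8-bit instead:

```python
# Optional: load in 8-bit to roughly halve GPU memory usage
# (assumes the bitsandbytes package is installed; quality may drop slightly)
model = LlamaForCausalLM.from_pretrained(
    model_path, load_in_8bit=True, device_map='auto',
)
```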
|
|
|
# Limitations & Biases
|
|
|
This model can produce factually incorrect output, and should not be relied on to produce factually accurate information. |
|
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. |
|
|
|
# Disclaimer
|
|
|
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. |
|
Please consult an attorney before using this model for commercial purposes.
|
|
|
|
|
# Citation
|
|
|
If you found orca_mini_v2_13b useful in your research or applications, please kindly cite using the following BibTeX:
|
|
|
``` |
|
@misc{orca_mini_v2_13b, |
|
author = {Pankaj Mathur}, |
|
title = {orca_mini_v2_13b: An explain tuned LLaMA-13b model on uncensored wizardlm, alpaca, & dolly datasets}, |
|
year = {2023}, |
|
publisher = {GitHub, HuggingFace}, |
|
journal = {GitHub repository, HuggingFace repository}, |
|
howpublished = {\url{https://huggingface.co/psmathur/orca_mini_v2_13b}},
|
} |
|
``` |
|
``` |
|
@article{touvron2023llama,
|
title={LLaMA: Open and Efficient Foundation Language Models}, |
|
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume}, |
|
journal={arXiv preprint arXiv:2302.13971}, |
|
year={2023} |
|
} |
|
``` |
|
``` |
|
@misc{openalpaca, |
|
author = {Yixuan Su and Tian Lan and Deng Cai}, |
|
title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA}, |
|
year = {2023}, |
|
publisher = {GitHub}, |
|
journal = {GitHub repository}, |
|
howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}}, |
|
} |
|
``` |
|
``` |
|
@misc{alpaca, |
|
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto }, |
|
title = {Stanford Alpaca: An Instruction-following LLaMA model}, |
|
year = {2023}, |
|
publisher = {GitHub}, |
|
journal = {GitHub repository}, |
|
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}}, |
|
} |
|
``` |
|
``` |
|
@online{DatabricksBlog2023DollyV2, |
|
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin}, |
|
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM}, |
|
year = {2023}, |
|
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}, |
|
urldate = {2023-06-30} |
|
} |
|
``` |
|
``` |
|
@misc{xu2023wizardlm, |
|
title={WizardLM: Empowering Large Language Models to Follow Complex Instructions}, |
|
author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang}, |
|
year={2023}, |
|
eprint={2304.12244}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CL} |
|
} |
|
``` |
|
|