---
license: apache-2.0
library_name: peft
tags:
- mistral
datasets:
- jondurbin/airoboros-2.2.1
inference: false
pipeline_tag: text-generation
base_model: mistralai/Mistral-7B-v0.1
---
<div align="center">
<img src="./logo.png" width="110px">
</div>
# Mistral-7B-Instruct-v0.1
A generative language model with 7 billion parameters, finetuned for instruction following.
## Model Details
This model was built via parameter-efficient finetuning of the [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) base model on the [jondurbin/airoboros-2.2.1](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1) dataset.
- **Developed by:** Daniel Furman
- **Model type:** Causal language model (CLM)
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
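Since this repository ships only the PEFT adapter weights, the base model lineage can be verified programmatically. A minimal sketch (it loads only the adapter config, not the weights):
```python
from peft import PeftConfig

# Inspect the adapter's config without downloading any model weights.
config = PeftConfig.from_pretrained("dfurman/Mistral-7B-Instruct-v0.1")
print(config.base_model_name_or_path)  # mistralai/Mistral-7B-v0.1
print(config.peft_type)                # adapter type, e.g. PeftType.LORA
```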
## Evaluation Results
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | Coming |
| ARC (25-shot) | Coming |
| HellaSwag (10-shot) | Coming |
| TruthfulQA (0-shot) | Coming |
| Avg. | Coming |
We use EleutherAI's [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmarks above, pinned to the same version used by Hugging Face's [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
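As an illustrative sketch only (argument names follow recent v0.4+ harness releases and may differ from the leaderboard's pinned version), a single benchmark can be scored through the harness's Python entry point:
```python
import lm_eval

# Illustrative: evaluate the base model with this adapter applied,
# using the harness's Hugging Face backend.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=mistralai/Mistral-7B-v0.1,"
        "peft=dfurman/Mistral-7B-Instruct-v0.1"
    ),
    tasks=["mmlu"],
    num_fewshot=5,
)
print(results["results"])
```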
## Basic Usage
<details>
<summary>Setup</summary>
```python
!pip install -q -U transformers peft torch accelerate einops sentencepiece
```
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
)
```
```python
peft_model_id = "dfurman/Mistral-7B-Instruct-v0.1"
config = PeftConfig.from_pretrained(peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(
    peft_model_id,
    use_fast=True,
    trust_remote_code=True,
)

model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

model = PeftModel.from_pretrained(
    model,
    peft_model_id,
)
```
</details>
```python
messages = [
    {"role": "user", "content": "Tell me a recipe for a mai tai."},
]

print("\n\n*** Prompt:")
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    return_tensors="pt",
)
print(tokenizer.decode(input_ids[0]))
```
<details>
<summary>Prompt</summary>
```python
"<s> [INST] Tell me a recipe for a mai tai. [/INST]"
```
</details>
```python
print("\n\n*** Generate:")
with torch.autocast("cuda", dtype=torch.bfloat16):
    output = model.generate(
        input_ids=input_ids.cuda(),
        max_new_tokens=1024,
        do_sample=True,
        temperature=0.7,
        return_dict_in_generate=True,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
        repetition_penalty=1.2,
        no_repeat_ngram_size=5,
    )

response = tokenizer.decode(
    output["sequences"][0][len(input_ids[0]):],
    skip_special_tokens=True,
)
print(response)
```
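If you would rather watch tokens appear as they are generated, `transformers` provides a `TextStreamer` that can be passed straight to `generate()`. A minimal sketch reusing the objects above:
```python
from transformers import TextStreamer

# Prints decoded text to stdout as tokens are generated, omitting the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

with torch.autocast("cuda", dtype=torch.bfloat16):
    model.generate(
        input_ids=input_ids.cuda(),
        max_new_tokens=1024,
        do_sample=True,
        temperature=0.7,
        streamer=streamer,
    )
```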
<details>
<summary>Generation</summary>
```python
"""1 oz light rum
½ oz dark rum
¼ oz orange curaçao
2 oz pineapple juice
¾ oz lime juice
Dash of orgeat syrup (optional)
Splash of grenadine (for garnish, optional)
Lime wheel and cherry garnishes (optional)
Shake all ingredients except the splash of grenadine in a cocktail shaker over ice. Strain into an old-fashioned glass filled with fresh ice cubes. Gently pour the splash of grenadine down the side of the glass so that it sinks to the bottom. Add garnishes as desired."""
```
</details>
## Speeds, Sizes, Times
| runtime / 50 tokens (sec) | GPU | dtype | VRAM (GB) |
|:-----------------------------:|:---------------------:|:-------------:|:-----------------------:|
| 3.21 | 1x A100 (40 GB SXM) | torch.bfloat16 | 16 |
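The figure above can be reproduced with a simple wall-clock measurement around `generate()`. A sketch reusing `model` and `input_ids` from the usage section (illustrative, not the exact benchmarking script):
```python
import time

start = time.perf_counter()
with torch.autocast("cuda", dtype=torch.bfloat16):
    _ = model.generate(
        input_ids=input_ids.cuda(),
        min_new_tokens=50,  # force exactly 50 new tokens
        max_new_tokens=50,
        do_sample=False,
    )
torch.cuda.synchronize()  # wait for all CUDA work before stopping the clock
print(f"runtime / 50 tokens: {time.perf_counter() - start:.2f} sec")
```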
## Training
Training for 2 epochs took roughly 2 hours on 1x A100 (40 GB SXM).
### Prompt Format
This model was finetuned with the following format:
```python
tokenizer.chat_template = "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST] ' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token + ' ' }}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method. Here's an illustrative example:
```python
messages = [
    {"role": "user", "content": "Tell me a recipe for a mai tai."},
    {"role": "assistant", "content": "1 oz light rum\n½ oz dark rum\n¼ oz orange curaçao\n2 oz pineapple juice\n¾ oz lime juice\nDash of orgeat syrup (optional)\nSplash of grenadine (for garnish, optional)\nLime wheel and cherry garnishes (optional)\n\nShake all ingredients except the splash of grenadine in a cocktail shaker over ice. Strain into an old-fashioned glass filled with fresh ice cubes. Gently pour the splash of grenadine down the side of the glass so that it sinks to the bottom. Add garnishes as desired."},
    {"role": "user", "content": "How can I make it more upscale and luxurious?"},
]

print("\n\n*** Prompt:")
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    return_tensors="pt",
)
print(tokenizer.decode(input_ids[0]))
```
<details>
<summary>Output</summary>
```python
"""<s> [INST] Tell me a recipe for a mai tai. [/INST] 1 oz light rum\n½ oz dark rum\n (...) Add garnishes as desired.</s> [INST] How can I make it more upscale and luxurious? [/INST]"""
```
</details>
### Training Hyperparameters
We use the [SFTTrainer](https://huggingface.co/docs/trl/main/en/sft_trainer) from `trl` to fine-tune LLMs on instruction-following datasets.
See [here](https://github.com/daniel-furman/sft-demos/blob/main/src/sft/mistral/sft_Mistral_7B_Instruct_v0_1_peft.ipynb) for the finetuning code, which contains an exhaustive view of the hyperparameters employed.
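A minimal sketch of how such a run is typically wired up (the dataset variables, text field name, and `max_seq_length` are illustrative assumptions; the linked notebook is authoritative):
```python
from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,                  # 4-bit base model with LoRA adapters attached
    train_dataset=train_dataset,  # assumed: formatted airoboros-2.2.1 train split
    eval_dataset=eval_dataset,    # assumed: held-out eval split
    dataset_text_field="text",    # assumed column holding the formatted prompts
    max_seq_length=1024,          # illustrative value
    tokenizer=tokenizer,
    args=training_args,           # see the TrainingArguments sketch below
)
trainer.train()
```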
The following `TrainingArguments` config was used (see the equivalent code sketch after this list):
- output_dir = "./results"
- num_train_epochs = 3
- auto_find_batch_size = True
- gradient_accumulation_steps = 1
- optim = "paged_adamw_32bit"
- save_strategy = "epoch"
- learning_rate = 3e-4
- lr_scheduler_type = "cosine"
- warmup_ratio = 0.03
- logging_strategy = "steps"
- logging_steps = 25
- evaluation_strategy = "epoch"
- prediction_loss_only = True
- bf16 = True
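For convenience, the same settings expressed as a `TrainingArguments` object (a sketch mirroring the list above; the linked notebook is the authoritative source):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    auto_find_batch_size=True,
    gradient_accumulation_steps=1,
    optim="paged_adamw_32bit",
    save_strategy="epoch",
    learning_rate=3e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    logging_strategy="steps",
    logging_steps=25,
    evaluation_strategy="epoch",
    prediction_loss_only=True,
    bf16=True,
)
```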
The following `bitsandbytes` quantization config was used (see the equivalent `BitsAndBytesConfig` sketch after this list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
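Equivalently, as a `BitsAndBytesConfig` passed to `from_pretrained()` at training time (a sketch; the llm_int8_* values above are the library defaults and are omitted):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# e.g., AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)
```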
## Model Card Contact
dryanfurman at gmail
## Mistral Research Citation
```
@misc{jiang2023mistral,
    title={Mistral 7B},
    author={Albert Q. Jiang and Alexandre Sablayrolles and Arthur Mensch and Chris Bamford and Devendra Singh Chaplot and Diego de las Casas and Florian Bressand and Gianna Lengyel and Guillaume Lample and Lucile Saulnier and Lélio Renard Lavaud and Marie-Anne Lachaux and Pierre Stock and Teven Le Scao and Thibaut Lavril and Thomas Wang and Timothée Lacroix and William El Sayed},
    year={2023},
    eprint={2310.06825},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
## Framework versions
- PEFT 0.6.3.dev0