Commit ce59c50
Mistral-Nemo-Instruct-FP8-2407 release

Files changed:
- .gitattributes +37 -0
- README.md +135 -0
- consolidated.safetensors +3 -0
- params.json +12 -0
- tekken.json +3 -0
.gitattributes
ADDED
@@ -0,0 +1,37 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+tekken.json filter=lfs diff=lfs merge=lfs -text
+consolidated.safetensors filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,135 @@
+---
+language:
+- en
+- fr
+- de
+- es
+- it
+- pt
+- ru
+- zh
+- ja
+license: apache-2.0
+base_model: mistralai/Mistral-Nemo-Base-2407
+extra_gated_description: If you want to learn more about how we process your personal
+  data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
+---
+
+# Model Card for Mistral-Nemo-Instruct-FP8-2407
+
+The Mistral-Nemo-Instruct-FP8-2407 Large Language Model (LLM) is an FP8-quantized, instruction fine-tuned version of [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407). Trained jointly by Mistral AI and NVIDIA, it significantly outperforms existing models of smaller or similar size.
+
+For more details about this model, please refer to our release [blog post](https://mistral.ai/news/mistral-nemo/).
+
+## Key features
+- Released under the **Apache 2 License**
+- Available in pre-trained and instructed versions
+- Trained with a **128k context window**
+- Trained on a large proportion of **multilingual and code data**
+- Drop-in replacement for Mistral 7B
+
+## Model Architecture
+Mistral Nemo is a transformer model with the following architecture choices:
+- **Layers:** 40
+- **Dim:** 5,120
+- **Head dim:** 128
+- **Hidden dim:** 14,336
+- **Activation Function:** SwiGLU
+- **Number of heads:** 32
+- **Number of kv-heads:** 8 (GQA)
+- **Vocabulary size:** 2**17 ~= 128k
+- **Rotary embeddings (theta = 1M)**
+
+## Metrics
+
+### Main Benchmarks
+
+| Benchmark | Score |
+| --- | --- |
+| HellaSwag (0-shot) | 83.5% |
+| Winogrande (0-shot) | 76.8% |
+| OpenBookQA (0-shot) | 60.6% |
+| CommonSenseQA (0-shot) | 70.4% |
+| TruthfulQA (0-shot) | 50.3% |
+| MMLU (5-shot) | 68.0% |
+| TriviaQA (5-shot) | 73.8% |
+| NaturalQuestions (5-shot) | 31.2% |
+
+### Multilingual Benchmarks (MMLU)
+
+| Language | Score |
+| --- | --- |
+| French | 62.3% |
+| German | 62.7% |
+| Spanish | 64.6% |
+| Italian | 61.3% |
+| Portuguese | 63.3% |
+| Russian | 59.2% |
+| Chinese | 59.0% |
+| Japanese | 59.0% |
+
+## Usage
+
+The model can be used with the [vLLM](https://github.com/vllm-project/vllm) library.
+
+**_Installation_**
+
+Make sure to install a recent version of vLLM:
+
+```bash
+pip install --upgrade vllm
+```
+
+Also make sure to have `mistral_common` installed:
+
+```bash
+pip install --upgrade mistral_common
+```
+
+**_Example_**
+
+You can use Mistral-Nemo-Instruct-FP8-2407 in a server/client setting.
+
+1. Spin up the server:
+
+```bash
+vllm serve mistralai/Mistral-Nemo-Instruct-FP8-2407 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral
+```
+
+2. Once the server is running, query it with a simple Python snippet:
+
+```python
+import requests
+import json
+
+url = "http://localhost:8000/v1/chat/completions"
+headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
+
+model = "mistralai/Mistral-Nemo-Instruct-FP8-2407"
+
+messages = [
+    {
+        "role": "user",
+        "content": "How expensive would it be to ask a window cleaner to clean all windows in Paris. Make a reasonable guess in US Dollar."
+    },
+]
+
+data = {"model": model, "messages": messages}
+
+response = requests.post(url, headers=headers, data=json.dumps(data))
+print(response.json()["choices"][0]["message"]["content"])
+# To estimate the cost of hiring a window cleaner in Paris for all windows, we need to make several assumptions:
+#
+# 1. Paris has approximately 45,000 buildings, according to the city's official statistics...
+```
+
+## Limitations
+
+The Mistral Nemo Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
+It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
+make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
+
+## The Mistral AI Team
+
+Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Alok Kothari, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Bam4d, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Carole Rambaud, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gaspard Blanchet, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Hichem Sattouf, Ian Mack, Jean-Malo Delignon, Jessica Chudnovsky, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickaël Seznec, Nicolas Schuhl, Niklas Muhs, Olivier de Garrigues, Patrick von Platen, Paul Jacob, Pauline Buche, Pavan Kumar Reddy, Perry Savas, Pierre Stock, Romain Sauvestre, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibault Schueller, Thibaut Lavril, Thomas Wang, Théophile Gervet, Timothée Lacroix, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall
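As a sanity check, the hyperparameters in the Model Architecture section combine into a rough parameter count. This is a sketch, not anything stated in the card: it assumes a standard Mistral-style decoder (GQA attention plus a SwiGLU MLP) with untied input/output embeddings, and it ignores norm weights.

```python
# Rough parameter count from the "Model Architecture" section.
# Assumed layout (not stated in the card): Mistral-style decoder with
# GQA attention, SwiGLU MLP, untied embeddings; norm weights ignored.
dim, n_layers, head_dim = 5120, 40, 128
n_heads, n_kv_heads, hidden_dim = 32, 8, 14336
vocab_size = 2**17  # "~= 128k"; params.json confirms 131072

attn = 2 * dim * n_heads * head_dim        # q_proj and o_proj
attn += 2 * dim * n_kv_heads * head_dim    # k_proj and v_proj
mlp = 3 * dim * hidden_dim                 # gate, up and down projections
embeddings = 2 * vocab_size * dim          # input embedding + output head
total = n_layers * (attn + mlp) + embeddings
print(f"{total / 1e9:.1f}B parameters")    # ~12.2B
```

The ~12.2B result is consistent with Mistral NeMo's advertised 12B size.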
consolidated.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2a20f27139def27bfb7b7bc377f49040e4e28ffa41afd102021256c0e8d8147f
+size 13590473832
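The blob above is a Git LFS pointer file, not the tensor data itself: three `key value` lines in the spec-v1 format named by its `version` line. A minimal sketch of parsing that format, fed the pointer text from this commit:

```python
# Minimal parser for a Git LFS pointer file (spec v1):
# each line is "<key> <value>"; "oid" carries "<hash-algo>:<hex-digest>".
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "hash_algo": algo,
        "digest": digest,
        "size_bytes": int(fields["size"]),
    }

# Pointer text copied from the consolidated.safetensors entry above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:2a20f27139def27bfb7b7bc377f49040e4e28ffa41afd102021256c0e8d8147f
size 13590473832
"""
info = parse_lfs_pointer(pointer)
print(info["size_bytes"] / 1e9)  # ~13.6 GB for the FP8 checkpoint
```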
params.json
ADDED
@@ -0,0 +1,12 @@
+{
+    "dim": 5120,
+    "n_layers": 40,
+    "head_dim": 128,
+    "hidden_dim": 14336,
+    "n_heads": 32,
+    "n_kv_heads": 8,
+    "norm_eps": 1e-05,
+    "vocab_size": 131072,
+    "rope_theta": 1000000.0,
+    "quantization": {"qformat_weight": "fp8_e4m3"}
+}
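These values also line up with the size of `consolidated.safetensors`. Under one plausible (hypothetical, not documented here) layout for weight-only FP8 quantization, where transformer-layer weights are stored as 1-byte `fp8_e4m3` while the embedding and output matrices stay in a 2-byte format, a back-of-the-envelope estimate lands within a fraction of a percent of the 13,590,473,832-byte checkpoint:

```python
# Back-of-the-envelope size estimate for consolidated.safetensors.
# Assumption (hypothetical, not stated in the repo): per-layer weights
# are fp8_e4m3 (1 byte each); embedding + output head stay in a 16-bit
# format (2 bytes each); norms, FP8 scales and file header are ignored.
p = {"dim": 5120, "n_layers": 40, "head_dim": 128, "hidden_dim": 14336,
     "n_heads": 32, "n_kv_heads": 8, "vocab_size": 131072}

attn = 2 * p["dim"] * p["n_heads"] * p["head_dim"]      # q_proj + o_proj
attn += 2 * p["dim"] * p["n_kv_heads"] * p["head_dim"]  # k_proj + v_proj
mlp = 3 * p["dim"] * p["hidden_dim"]                    # gate/up/down
fp8_bytes = p["n_layers"] * (attn + mlp)                # 1 byte per weight
bf16_bytes = 2 * (2 * p["vocab_size"] * p["dim"])       # 2 matrices, 2 B each
estimate = fp8_bytes + bf16_bytes

actual = 13_590_473_832  # size from the LFS pointer above
print(f"estimate {estimate:,} vs actual {actual:,}")
```

The remaining gap (under 1 MB) is about the size of the ignored norm weights, quantization scales and safetensors header.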
tekken.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c604f35d1035f534519622c0ec83fed6184978d4fdee92a5bd2a50bc05438094
+size 14801330