---
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
---
|
|
|
This is [mistralai/Mistral-Large-Instruct-2407](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407), converted to GGUF and quantized to q8_0. Both the model and the embedding/output tensors are q8_0.
|
|
|
The model is split into shards no larger than 7 GB each using the `llama-gguf-split` CLI utility from `llama.cpp`, so that an interrupted download can be resumed without starting over.
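For local use, the shards can be merged back into a single file with the same utility. A minimal sketch, assuming the shards follow llama.cpp's `-00001-of-0000N` naming convention (the exact filenames and shard count below are illustrative, not the actual ones in this repo):

```shell
# Merge the downloaded shards back into one GGUF file.
# llama-gguf-split locates the remaining shards automatically
# when pointed at the first one; filenames here are illustrative.
./llama-gguf-split --merge \
    Mistral-Large-Instruct-2407-q8_0-00001-of-00010.gguf \
    Mistral-Large-Instruct-2407-q8_0.gguf
```

Merging is optional: recent `llama.cpp` builds can load a split model directly by pointing them at the first shard.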
|
|
|
The purpose of this upload is archival.
|
|
|
[GGUFv3](https://huggingface.co/ddh0/Mistral-Large-Instruct-2407-q8_0-q8_0-GGUF/blob/main/gguf.md)