---
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
language:
  - en
  - fr
  - de
  - es
  - it
  - pt
  - zh
  - ja
  - ru
  - ko

---

This is [mistralai/Mistral-Large-Instruct-2407](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407), converted to GGUF and quantized to q8_0. Both the main model weights and the embedding/output tensors are q8_0.
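
For context, a conversion and quantization along these lines is normally done with the llama.cpp tooling. The sketch below is an assumption about how such a file could be produced, not the exact commands used for this upload; the script name `convert_hf_to_gguf.py`, the `llama-quantize` binary, and their flags vary between llama.cpp versions, so check `--help` on your checkout.

```bash
# Hypothetical reproduction sketch using llama.cpp tooling
# (names and flags depend on the llama.cpp version).

# 1. Convert the Hugging Face checkpoint to an unquantized GGUF file.
python convert_hf_to_gguf.py /path/to/Mistral-Large-Instruct-2407 \
    --outfile Mistral-Large-Instruct-2407-f16.gguf \
    --outtype f16

# 2. Quantize to q8_0. For this type the embedding and output tensors are
#    typically stored as q8_0 as well (matching the description above);
#    some builds expose --token-embedding-type / --output-tensor-type to
#    set them explicitly.
./llama-quantize \
    Mistral-Large-Instruct-2407-f16.gguf \
    Mistral-Large-Instruct-2407-q8_0-q8_0.gguf \
    q8_0
```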

The model is split into shards no larger than 7GB using the `llama.cpp/llama-gguf-split` CLI utility, which makes it less painful to resume the download if it is interrupted.
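
A split of that kind can be produced (and undone) with the same utility. The commands below are a sketch assuming current `llama-gguf-split` options; the output file names and shard suffixes are illustrative.

```bash
# Split the quantized GGUF into shards of at most 7GB each
# (verify flag names with `llama-gguf-split --help`).
./llama-gguf-split --split-max-size 7G \
    Mistral-Large-Instruct-2407-q8_0-q8_0.gguf \
    Mistral-Large-Instruct-2407-q8_0-q8_0

# To reassemble a single file, point --merge at the first shard
# (the "-00001-of-XXXXX" suffix shown here is illustrative):
./llama-gguf-split --merge \
    Mistral-Large-Instruct-2407-q8_0-q8_0-00001-of-00010.gguf \
    Mistral-Large-Instruct-2407-q8_0-q8_0-merged.gguf
```

Merging is usually unnecessary for inference: recent llama.cpp builds can be pointed at the first shard and will locate the remaining shards themselves.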

The purpose of this upload is archival.

[GGUFv3](https://huggingface.co/ddh0/Mistral-Large-Instruct-2407-q8_0-q8_0-GGUF/blob/main/gguf.md)