---
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
language:
  - en
  - fr
  - de
  - es
  - it
  - pt
  - zh
  - ja
  - ru
  - ko
---

This is mistralai/Mistral-Large-Instruct-2407, converted to GGUF and quantized to q8_0. All tensors, including the embedding and output tensors, are q8_0.
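For reference, a minimal sketch of how a conversion like this is typically produced with llama.cpp. This is illustrative only; the exact script names, flags, and paths used for this upload are not recorded here and may differ between llama.cpp versions:

```sh
# Convert the Hugging Face checkpoint to GGUF (f16 intermediate is common).
python convert_hf_to_gguf.py /path/to/Mistral-Large-Instruct-2407 \
  --outfile Mistral-Large-Instruct-2407-f16.gguf --outtype f16

# Quantize to q8_0; at this quantization level the embedding and output
# tensors are also stored as q8_0.
./llama-quantize Mistral-Large-Instruct-2407-f16.gguf \
  Mistral-Large-Instruct-2407-q8_0.gguf q8_0
```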

The model is split into shards no larger than 7 GB using the llama-gguf-split CLI utility from llama.cpp, so that an interrupted download is easier to resume (see the sketch below).
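A hedged sketch of how such shards are produced and, if desired, merged back into a single file with llama-gguf-split. The file names below are placeholders, and the binary name and flags can vary across llama.cpp versions:

```sh
# Split into shards of at most 7 GB each; the output argument is a prefix,
# and shards are written as <prefix>-00001-of-0000N.gguf.
./llama-gguf-split --split --split-max-size 7G \
  Mistral-Large-Instruct-2407-q8_0.gguf Mistral-Large-Instruct-2407-q8_0

# Merging is optional: llama.cpp loaders accept the first shard directly.
# To recombine into a single file anyway:
./llama-gguf-split --merge \
  Mistral-Large-Instruct-2407-q8_0-00001-of-0000N.gguf \
  Mistral-Large-Instruct-2407-q8_0-merged.gguf
```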

The purpose of this upload is archival.

GGUFv3