---
library_name: transformers
pipeline_tag: text-generation
tags:
- 24b
- 4-bit
- Q4_K_S
- gguf
- instruct
- llama-cpp
- mistral
- small
- text-generation
---
# roleplaiapp/Mistral-Small-24B-Instruct-2501-Q4_K_S-GGUF
**Repo:** `roleplaiapp/Mistral-Small-24B-Instruct-2501-Q4_K_S-GGUF`
**Original Model:** `Mistral-Small-24B-Instruct-2501`
**Quantized File:** `Mistral-Small-24B-Instruct-2501-Q4_K_S.gguf`
**Format:** `GGUF`
**Quantization Method:** `Q4_K_S`
## Overview
This is a GGUF Q4_K_S quantized version of Mistral-Small-24B-Instruct-2501, intended for use with llama.cpp and other GGUF-compatible runtimes.
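
A minimal sketch of loading the quantized file with llama-cpp-python (an assumption, not the quantizer's prescribed workflow; it requires `llama-cpp-python` and `huggingface_hub` to be installed, and the `n_ctx` and `max_tokens` values below are illustrative, not tuned):

```python
from llama_cpp import Llama

# Download the quantized file from this repo and load it.
llm = Llama.from_pretrained(
    repo_id="roleplaiapp/Mistral-Small-24B-Instruct-2501-Q4_K_S-GGUF",
    filename="Mistral-Small-24B-Instruct-2501-Q4_K_S.gguf",
    n_ctx=4096,       # illustrative context window; raise as memory allows
    n_gpu_layers=-1,  # offload all layers to GPU; set to 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about quantization."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```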
## Quantization By
I often have idle GPUs while building and testing the RolePlai app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.
Andrew Webby @ [RolePlai](https://roleplai.app/).