---
library_name: transformers
pipeline_tag: text-generation
tags:
- 22b
- 8-bit
- Q8_0
- gguf
- llama-cpp
- pantheon
- pure
- small
- text-generation
---
# roleplaiapp/Pantheon-RP-Pure-1.6.2-22b-Small-Q8_0-GGUF
**Repo:** `roleplaiapp/Pantheon-RP-Pure-1.6.2-22b-Small-Q8_0-GGUF`
**Original Model:** `Pantheon-RP-Pure-1.6.2-22b-Small`
**Quantized File:** `Pantheon-RP-Pure-1.6.2-22b-Small-Q8_0.gguf`
**Format:** `GGUF`
**Quantization Method:** `Q8_0`
## Overview
This is a GGUF Q8_0 (8-bit) quantized version of Pantheon-RP-Pure-1.6.2-22b-Small.
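## Usage
A minimal sketch of running this quantized file with the `llama-cpp-python` bindings. The context size, GPU offload setting, and prompt below are illustrative assumptions, not values specified by the original model card.
```python
# Minimal sketch: download the GGUF file from this repo and run it with
# llama-cpp-python. Settings here are illustrative assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the quantized file from this repository.
model_path = hf_hub_download(
    repo_id="roleplaiapp/Pantheon-RP-Pure-1.6.2-22b-Small-Q8_0-GGUF",
    filename="Pantheon-RP-Pure-1.6.2-22b-Small-Q8_0.gguf",
)

# Load the model; n_gpu_layers=-1 offloads all layers to the GPU if one is available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Simple completion call; replace the prompt with your own roleplay setup.
output = llm("You are a storyteller. Begin a short scene:", max_tokens=128)
print(output["choices"][0]["text"])
```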
## Quantization By
I often have idle GPUs while building and testing the RP app, so I put them to work quantizing models.
I hope the community finds these quantizations useful.
Andrew Webby @ [RolePlai](https://roleplai.app/).