---
library_name: transformers
pipeline_tag: text-generation
tags:
- 4-bit
- 70b
- Q4_K_M
- gguf
- llama-cpp
- midnight
- miqu
- text-generation
- v15
---

# roleplaiapp/Midnight-Miqu-70B-v1.5-i1-Q4_K_M-GGUF

**Repo:** `roleplaiapp/Midnight-Miqu-70B-v1.5-i1-Q4_K_M-GGUF`
**Original Model:** `Midnight-Miqu-70B-v1.5-i1`
**Quantized File:** `Midnight-Miqu-70B-v1.5.i1-Q4_K_M.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q4_K_M`

## Overview
This is a GGUF Q4_K_M quantized version of Midnight-Miqu-70B-v1.5-i1.

## Quantization By
I often have idle GPUs while building and testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/).
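
## Usage (example)
The quantized file can be loaded with any GGUF-compatible runtime. Below is a minimal sketch using llama-cpp-python; the repo ID and filename are taken from this card, while values such as `n_ctx`, `n_gpu_layers`, and the prompt are illustrative assumptions rather than official usage instructions for this model.

```python
# Minimal sketch, assuming llama-cpp-python and huggingface_hub are installed:
#   pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the quantized file listed on this card.
model_path = hf_hub_download(
    repo_id="roleplaiapp/Midnight-Miqu-70B-v1.5-i1-Q4_K_M-GGUF",
    filename="Midnight-Miqu-70B-v1.5.i1-Q4_K_M.gguf",
)

# n_gpu_layers=-1 offloads all layers to the GPU; adjust to fit your VRAM (illustrative value).
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm("Write a short scene set at midnight.", max_tokens=128)
print(out["choices"][0]["text"])
```

A 70B model at Q4_K_M is still a large file, so downloading and loading it requires substantial disk space and memory; partial GPU offload via `n_gpu_layers` can be tuned down on smaller cards.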