---
library_name: transformers
pipeline_tag: text-generation
tags:
- f16
- gguf
- japmed
- llama-cpp
- slerp
- text-generation
---

# roleplaiapp/JapMed-SLERP-f16-GGUF

**Repo:** `roleplaiapp/JapMed-SLERP-f16-GGUF`
**Original Model:** `JapMed-SLERP`
**Quantized File:** `JapMed-SLERP.f16.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `f16`

## Overview
This is a GGUF f16 quantized version of JapMed-SLERP.

## Quantization By
I often have idle GPUs while building and testing the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/).
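
## Usage (llama-cpp-python)
A minimal sketch of loading the quantized file with llama-cpp-python, assuming the GGUF file has been downloaded locally from this repo; the context size, prompt, and generation settings below are illustrative assumptions, not part of this card.

```python
# Minimal sketch: loading JapMed-SLERP.f16.gguf with llama-cpp-python.
# Assumes llama-cpp-python is installed and the file was downloaded from
# this repo (e.g. via huggingface_hub); paths and settings are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="JapMed-SLERP.f16.gguf",  # local path to the quantized file
    n_ctx=4096,                          # context window (assumption)
)

# Simple text completion; swap in your own prompt.
output = llm("日本の医療制度について簡単に説明してください。", max_tokens=256)
print(output["choices"][0]["text"])
```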