---
base_model: R136a1/Bungo-L3-8B
quantized_by: Lewdiculous
library_name: transformers
license: unlicense
inference: false
language:
- en
tags:
- roleplay
---

[[Request #59 – Click here for more context.]](https://huggingface.co/Lewdiculous/Model-Requests/discussions/59)
**Request description:**
"An experimental model that turned really well. Scores high on Chai leaderboard (slerp8bv2 there). Feel smarter than average L3 merges for RP." **Model page:**
[R136a1/Bungo-L3-8B](https://huggingface.co/R136a1/Bungo-L3-8B)

> [!IMPORTANT]
> Use with the [**latest version of KoboldCpp**](https://github.com/LostRuins/koboldcpp/releases/latest), or [**this alternative fork**](https://github.com/Nexesenex/kobold.cpp) if you have issues.
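If you prefer to load one of these GGUF quants programmatically instead of through KoboldCpp, here is a minimal sketch using the llama-cpp-python bindings. This is an alternative, not the recommended workflow above; the quant filename and sampler settings are hypothetical placeholders, so substitute the file you actually downloaded.

```python
# Minimal sketch: loading a GGUF quant of Bungo-L3-8B with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Bungo-L3-8B-Q4_K_M-imat.gguf",  # hypothetical filename - use the quant you downloaded
    n_ctx=8192,        # context window; Llama 3 8B supports 8k natively
    n_gpu_layers=-1,   # offload all layers to GPU if a supported backend is available
)

# Chat-style request; the chat template bundled in the GGUF metadata handles role formatting.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative roleplay partner."},
        {"role": "user", "content": "Introduce your character in two sentences."},
    ],
    max_tokens=128,
    temperature=0.8,
)
print(response["choices"][0]["message"]["content"])
```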
**General chart with relative quant performance.**
> [!NOTE]
> **Recommended read:**
>
> [**"Which GGUF is right for me? (Opinionated)" by Artefact2**](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
>
> *Click the image to view full size.*
>
> !["Which GGUF is right for me? (Opinionated)" by Artefact2 - First Graph](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/fScWdHIPix5IzNJ8yswCB.webp)
[![image/webp](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/ezaxE50ef-7RsFi3gUbNp.webp)](https://huggingface.co/Lewdiculous/Bungo-L3-8B-GGUF-IQ-Imatrix-Request/blob/main/2x-upscaled-bunga-2x-realesrgan.webp)