---
license: apache-2.0
datasets:
- openerotica/mixed-rp
- kingbri/PIPPA-shareGPT
- flammenai/character-roleplay-DPO
language:
- en
base_model:
- N-Bot-Int/OpenElla3-Llama3.2B
pipeline_tag: text-generation
tags:
- unsloth
- Uncensored
- text-generation-inference
- transformers
- llama
- trl
- roleplay
- conversational
---

# Support Us Through

- [![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/J3J61D8NHV)
- [Official Ko-fi link!](https://ko-fi.com/nexusnetworkint)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6633a73004501e16e7896b86/vQ-n1-Y5H2yoPt9k45WTC.png)

# GGUF Version

This is the **GGUF** release of the model, shipped in several quantizations so you can run it with KoboldCPP and other llama.cpp-based AI environments! A minimal loading sketch follows the table below.

# Quantizations:

| Quant Type | Benefits | Cons |
|------------|----------------------------------------------------|----------------------------------------------------|
| **Q4_K_M** | ✅ Smallest size (fastest inference) | ❌ Lowest accuracy of the quants offered here |
|            | ✅ Requires the least VRAM/RAM | ❌ May struggle with complex reasoning |
|            | ✅ Ideal for edge devices & low-resource setups | ❌ Can produce slightly degraded text quality |
| **Q5_K_M** | ✅ Better accuracy than Q4 while still compact | ❌ Slightly larger model size than Q4 |
|            | ✅ Good balance between speed and precision | ❌ Needs a bit more VRAM than Q4 |
|            | ✅ Works well on mid-range GPUs | ❌ Still not as accurate as higher-bit quants |
| **Q8_0**   | ✅ Highest accuracy (closest to the full model) | ❌ Requires significantly more VRAM/RAM |
|            | ✅ Best for complex reasoning & detailed outputs | ❌ Slower inference than Q4 & Q5 |
|            | ✅ Suitable for high-end GPUs & serious workloads | ❌ Larger file size (takes more storage) |

# Model Details:

Read the full model details on Hugging Face: [Model Detail Here](https://huggingface.co/N-Bot-Int/OpenElla3-Llama3.2B-V2)
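
# Quick Start Example

A minimal sketch of loading one of the quants locally with the `llama-cpp-python` bindings. The `.gguf` filename and the sampling settings below are illustrative assumptions; check this repository's file list for the exact quant filenames.

```python
# Minimal sketch: run a GGUF quant locally with llama-cpp-python.
# NOTE: the model filename below is hypothetical; substitute the actual
# .gguf file you downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="OpenElla3-Llama3.2B-V2-Q5_K_M.gguf",  # hypothetical filename
    n_ctx=4096,        # context window; lower this on low-RAM machines
    n_gpu_layers=-1,   # offload all layers to the GPU; set to 0 for CPU-only
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in character."}],
    max_tokens=256,
    temperature=0.8,   # assumed setting; tune to taste for roleplay
)
print(response["choices"][0]["message"]["content"])
```

The same `.gguf` file should also load directly in KoboldCPP and other llama.cpp-based frontends, which handle the quant format automatically.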