---
license: apache-2.0
datasets:
  - openerotica/mixed-rp
  - kingbri/PIPPA-shareGPT
  - flammenai/character-roleplay-DPO
language:
  - en
base_model:
  - N-Bot-Int/OpenElla3-Llama3.2B
pipeline_tag: text-generation
tags:
  - unsloth
  - Uncensored
  - text-generation-inference
  - transformers
  - llama
  - trl
  - roleplay
  - conversational
---

## Support Us Through


## GGUF Version

GGUF builds with quantizations are available, so you can run the model in KoboldCPP and other local AI environments (a loading sketch follows the quantization table below).

Quantizations:

| Quant Type | Benefits | Cons |
|---|---|---|
| Q4_K_M | ✅ Smallest size (fastest inference)<br>✅ Requires the least VRAM/RAM<br>✅ Ideal for edge devices & low-resource setups | ❌ Lowest accuracy compared to other quants<br>❌ May struggle with complex reasoning<br>❌ Can produce slightly degraded text quality |
| Q5_K_M | ✅ Better accuracy than Q4, while still compact<br>✅ Good balance between speed and precision<br>✅ Works well on mid-range GPUs | ❌ Slightly larger model size than Q4<br>❌ Needs a bit more VRAM than Q4<br>❌ Still not as accurate as higher-bit models |
| Q8_0 | ✅ Highest accuracy (closest to full model)<br>✅ Best for complex reasoning & detailed outputs<br>✅ Suitable for high-end GPUs & serious workloads | ❌ Requires significantly more VRAM/RAM<br>❌ Slower inference compared to Q4 & Q5<br>❌ Larger file size (takes more storage) |
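
As a rough sketch of how one of these quants could be run locally with `huggingface_hub` and `llama-cpp-python` (the repository id and GGUF filename below are placeholders, not the actual names; check this repo's Files tab for the real ones):

```python
# Minimal sketch: download a GGUF quant and run a short roleplay prompt locally.
# The repo id and filename are PLACEHOLDERS; replace them with the actual values
# from this repository's "Files and versions" tab.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="your-username/your-gguf-repo",   # placeholder repo id
    filename="model-Q5_K_M.gguf",             # placeholder quant filename
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # Q5_K_M: good speed/accuracy balance

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative roleplay partner."},
        {"role": "user", "content": "Introduce your character in two sentences."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```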

## Model Details

Read the full model details on the Hugging Face model page: Model Detail Here
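
For non-GGUF use, here is a minimal sketch using the `transformers` text-generation pipeline. The model id shown is the base model listed in the metadata, used only as a stand-in; substitute this repository's own id:

```python
# Minimal sketch: run the model through the transformers text-generation pipeline.
# The model id below is the base model from the metadata, used as a stand-in;
# replace it with this repository's id.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="N-Bot-Int/OpenElla3-Llama3.2B",  # stand-in id; use this repo's id instead
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Introduce your character in two sentences."},
]

result = generator(messages, max_new_tokens=128)
# For chat-style input, generated_text holds the conversation with the reply appended.
print(result[0]["generated_text"][-1]["content"])
```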