Quantized version of: Nitral-AI/Irixxed-Magcap-12B-Slerp

'Make knowledge free for everyone'

Support this work: Buy Me a Coffee at ko-fi.com

Format: GGUF
Model size: 12.2B params
Architecture: llama

Available quantization levels: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
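
As a quick way to try one of these quantizations locally, here is a minimal sketch using `huggingface_hub` and `llama-cpp-python`. The exact GGUF filename (the Q4_K_M variant shown here) is an assumption; check the repository's file list for the actual names and pick the bit level that fits your hardware.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized file from this repo.
# The filename below is assumed -- verify it against the repo's file list.
model_path = hf_hub_download(
    repo_id="DevQuasar/Nitral-AI.Irixxed-Magcap-12B-Slerp-GGUF",
    filename="Nitral-AI.Irixxed-Magcap-12B-Slerp.Q4_K_M.gguf",
)

# Load the GGUF model; n_ctx is a modest default context size.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm("Explain what GGUF quantization does in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

Lower bit levels (2-bit to 4-bit) trade some output quality for a smaller memory footprint, while 8-bit and 16-bit stay closer to the original weights at a higher cost.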


Model tree: DevQuasar/Nitral-AI.Irixxed-Magcap-12B-Slerp-GGUF, one of 8 quantized derivatives of the base model Nitral-AI/Irixxed-Magcap-12B-Slerp.