roleplaiapp/Midnight-Miqu-70B-v1.5-i1-IQ3_XS-GGUF

Repo: roleplaiapp/Midnight-Miqu-70B-v1.5-i1-IQ3_XS-GGUF
Original Model: Midnight-Miqu-70B-v1.5-i1
Quantized File: Midnight-Miqu-70B-v1.5.i1-IQ3_XS.gguf
Quantization: GGUF
Quantization Method: IQ3_XS
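As a rough sanity check on download size: in llama.cpp, IQ3_XS stores weights at roughly 3.3 bits per weight (an approximate figure; the actual file also holds metadata and a few higher-precision tensors, so treat this as a ballpark only). For a 69B-parameter model that works out to about 28 GB:

```python
# Rough GGUF file-size estimate for an IQ3_XS quant.
# The ~3.3 bits-per-weight figure is an approximation; real files vary
# with the exact tensor mix and embedded metadata.
def estimate_gguf_size_gb(n_params: float, bits_per_weight: float = 3.3) -> float:
    """Return an approximate file size in decimal gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

print(f"{estimate_gguf_size_gb(69e9):.1f} GB")  # roughly 28.5 GB
```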

Overview

This is a GGUF IQ3_XS quantized version of Midnight-Miqu-70B-v1.5-i1.

Quantization By

I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai.

Model size: 69B params
Architecture: llama
Format: GGUF