M3.2-24B-Loki-V1.2-GGUF

GGUF model files for M3.2-24B-Loki-V1.2.

This repository contains GGUF models quantized using llama.cpp.

  • Base Model: M3.2-24B-Loki-V1.2
  • Quantizations provided: Q8_0, Q6_K, Q5_K_M, Q5_0, Q5_K_S, Q4_K_M, Q4_K_S, Q4_0, Q3_K_L, Q3_K_M, Q3_K_S, Q2_K, BF16
  • Importance Matrix Used: No

This specific upload is for the BF16 quantization.

  • Model size: 23.6B params
  • Architecture: llama
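Usage

The snippet below is a minimal sketch of downloading this BF16 file and running it with llama-cpp-python. The repo id, GGUF filename, and context size are assumptions, not details taken from this card; check the repository's file listing for the exact names.

```python
# Minimal sketch: fetch the BF16 GGUF and run it with llama-cpp-python.
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Both values below are hypothetical; replace them with the real repo id
# (including the owner) and the exact filename shown in the repo's file list.
model_path = hf_hub_download(
    repo_id="<owner>/M3.2-24B-Loki-V1.2-GGUF",
    filename="M3.2-24B-Loki-V1.2.BF16.gguf",
)

# n_ctx=4096 is a reasonable default, not a value from this card.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Q: What is the GGUF file format? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

Any of the other quantizations listed above can be loaded the same way by swapping in that file's name.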

