LFM2-1.2B • Quantized Version (GGUF)

Quantized GGUF version of the LiquidAI/LFM2-1.2B model.

  • ✅ Format: GGUF
  • ✅ Use with: liquid_llama.cpp
  • ✅ Supported precisions: Q4_0, Q4_K, etc.

Download

wget https://huggingface.co/yasserrmd/LFM2-1.2B-gguf/resolve/main/lfm2-1.2b.Q4_K.gguf

(Adjust the filename for other quant formats such as Q4_0, if available; check the repository's file list for the exact names.)
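
Alternatively, a minimal sketch using the Hugging Face CLI (from the huggingface_hub package); the filename here is an assumption and should be verified against the repo's file list:

pip install -U huggingface_hub
# Download one quant file into the current directory
# (filename assumed; check the repo's file list).
huggingface-cli download yasserrmd/LFM2-1.2B-gguf lfm2-1.2b.Q4_K.gguf --local-dir .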

Notes

  • Only compatible with liquid_llama.cpp (upstream llama.cpp will not load it); see the run sketch below.
  • Replace Q4_K in the filename with your chosen quant variant.
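
As a usage sketch, assuming liquid_llama.cpp follows the upstream llama.cpp CLI conventions (the llama-cli binary name and its flags are assumptions, not confirmed by this card):

# Minimal generation run: -m selects the GGUF file,
# -p sets the prompt, -n caps the generated tokens.
./llama-cli -m lfm2-1.2b.Q4_K.gguf -p "Explain GGUF in one sentence." -n 128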

Model details

  • Model size: 1.17B params
  • Architecture: lfm2
