LiquidAI LFM2 gguf
Quantized GGUF version of the LiquidAI/LFM2-1.2B model.
Format: GGUF
Runtime: liquid_llama.cpp
Quantization formats: Q4_0, Q4_K, etc.

Download a quantized file:
wget https://huggingface.co/yasserrmd/LFM2-1.2B-gguf/resolve/main/lfm2-700m.Q4_K.gguf
(Adjust the filename for other quant formats like Q4_0, if available.)
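If you prefer Hugging Face tooling over a direct wget, the same file can also be fetched with the huggingface-cli download command; this is an optional sketch that assumes the huggingface_hub package is installed and that the Q4_K file is published in the repo.

# Optional alternative: download via huggingface_hub's CLI (assumes `pip install -U huggingface_hub`)
huggingface-cli download yasserrmd/LFM2-1.2B-gguf lfm2-700m.Q4_K.gguf --local-dir .
# Swap the filename for another quant (e.g. lfm2-700m.Q4_0.gguf) if that file exists in the repo.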
Run the model with liquid_llama.cpp (not llama.cpp), replacing Q4_K with your chosen quant version; see the sketch below.

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 32-bit.
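As a rough illustration of the run step, here is a minimal sketch that assumes liquid_llama.cpp builds the same llama-cli binary and accepts the same basic flags as upstream llama.cpp; the actual binary name and options may differ, so check the fork's README.

# Minimal inference sketch (assumes a llama.cpp-style `llama-cli` binary built from liquid_llama.cpp)
./llama-cli -m lfm2-700m.Q4_K.gguf -p "Explain GGUF quantization in one sentence." -n 128
# -m: path to the downloaded GGUF file; -p: prompt; -n: max tokens to generate.
# Replace Q4_K in the filename with whichever quant you actually downloaded.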
Base model: LiquidAI/LFM2-1.2B