
Quantized Models?

#4
by PFnove - opened

Almost none of the people interested in running a 7B model want or need the f32 weights (VRAM isn't infinite on consumer GPUs).
It would be nice to have official quantized weights in GGUF format; most people don't want to download a file over 30 GB just to quantize it down to 8 or 4 bits themselves.
float32 weights only make sense for the base gemma-2b and gemma-7b models, i.e. the ones that aren't already instruction-tuned or shipped in GGUF, a format that doesn't allow finetuning.
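For a rough sense of the numbers behind this request, here's a back-of-the-envelope sketch (mine, not from the post) of the memory needed just for the weights of a nominally 7B-parameter model at different precisions; the bits-per-weight figures for the llama.cpp quant types are approximate averages, and the real gemma-7b checkpoint is somewhat larger than 7B parameters.

```python
# Approximate memory to hold the weights of a ~7B-parameter model
# at different precisions (ignores KV cache and runtime overhead).
PARAMS = 7e9  # nominal count; the actual gemma-7b checkpoint is a bit larger

for name, bits_per_weight in [
    ("f32", 32.0),     # the full-precision upload the post objects to
    ("f16", 16.0),     # common conversion target before quantizing
    ("Q8_0", 8.5),     # ~8.5 bpw including per-block scale factors
    ("Q4_K_M", 4.85),  # ~4.85 bpw, a popular llama.cpp quant type
]:
    gib = PARAMS * bits_per_weight / 8 / 2**30
    print(f"{name:8s} ~{gib:5.1f} GiB")
```

By this estimate the f32 weights alone need roughly 26 GiB, while an 8-bit quant fits on a 12 GB consumer card and a 4-bit quant on an 8 GB one, which is the poster's point.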
