MMLU Pro benchmark for GGUFs (1-shot) Collection. "Not all quantized models perform well." Serving frameworks: Ollama runs on an NVIDIA GPU; llama.cpp runs on a CPU with AVX & AMX. • 13 items