Quantize model to 4-bit or 8-bit

#1
by thefcraft - opened

Can you please quantize this model to 4-bit or 8-bit? I don't have access to high-memory hardware.

I have the Colab free version...

Other people have already uploaded quantized versions: there's a 4-bit quantization in the GPTQ format by anon82, and a 4-bit quantization in the GGML format done by me.
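For background on why 4-bit helps on low-memory machines like free Colab, here is a minimal, illustrative sketch of symmetric 4-bit quantization in plain Python. This is not the actual GPTQ or GGML algorithm (both are more sophisticated, with grouping and error compensation); it just shows the core idea of storing each weight as a small integer plus a shared scale.

```python
# Illustrative symmetric 4-bit quantization: each weight becomes a
# 4-bit signed integer in [-8, 7] plus one shared float scale.
# Storage drops roughly 8x versus float32 (half a byte per weight).

def quantize_4bit(weights):
    # Use 7 (not 8) so the positive and negative ranges stay symmetric;
    # fall back to scale 1.0 if all weights are zero.
    scale = max(abs(w) for w in weights) / 7 or 1.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate weights; error per weight is at most scale/2.
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.7, -0.02]
q, scale = quantize_4bit(weights)
approx = dequantize(q, scale)
```

Real quantizers like GPTQ additionally minimize the error on actual layer activations rather than rounding each weight independently, which is why they preserve quality much better at 4 bits.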

thefcraft changed discussion status to closed
