Quantized version
#10
by
baconnier
- opened
Hi, do you plan to make a quantized version, or would the results not be accurate?
That's on our todo list. Please stay tuned! :)
Currently we can't run it on a free-tier Colab notebook; it gives a RAM-limit-exceeded error. So we'd love a quantized version.
Any idea if this works on the Pro tier, and what the RAM and VRAM requirements to run this might be?
Any possibility of this model in the GGUF format, like the ones available in TheBloke's repositories?
Or... will I need to convert it manually?
Please share any link if you guys find one.
Alternatively, I am opening a separate discussion on this project about the same topic.
You can use the convert tool from llama.cpp to build the GGUF format, roughly as sketched below.
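A minimal sketch of that conversion flow, assuming you have a local clone of llama.cpp with the quantize binary built and the Hugging Face weights downloaded to a local directory. The paths and the Q4_K_M quantization type are illustrative, and the exact convert script name and flags vary between llama.cpp versions, so check your checkout before running.

```python
# Sketch: convert a downloaded HF checkpoint to GGUF with llama.cpp, then quantize it.
# Paths and the Q4_K_M choice are assumptions; verify the convert script name
# and quantize binary name in your llama.cpp version.
import subprocess
from pathlib import Path

LLAMA_CPP_DIR = Path("llama.cpp")         # local clone of llama.cpp (assumed path)
MODEL_DIR = Path("path/to/hf-model")      # downloaded HF weights (assumed path)
F16_GGUF = Path("model-f16.gguf")
QUANT_GGUF = Path("model-q4_k_m.gguf")

# Step 1: convert the HF checkpoint to an f16 GGUF file.
# Recent llama.cpp versions ship convert_hf_to_gguf.py; older ones used convert.py.
subprocess.run(
    [
        "python",
        str(LLAMA_CPP_DIR / "convert_hf_to_gguf.py"),
        str(MODEL_DIR),
        "--outtype", "f16",
        "--outfile", str(F16_GGUF),
    ],
    check=True,
)

# Step 2: quantize the f16 GGUF down to a smaller format.
# The binary is called llama-quantize in recent builds, plain quantize in older ones.
subprocess.run(
    [
        str(LLAMA_CPP_DIR / "llama-quantize"),
        str(F16_GGUF),
        str(QUANT_GGUF),
        "Q4_K_M",
    ],
    check=True,
)
```

The resulting .gguf file can then be loaded with llama.cpp itself or with wrappers that accept GGUF models.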
Yeah of course, thanks