Request: quantization with the latest llama.cpp

#1
by FrescoHF - opened

Could you please quantize PocketDoc/Dans-SakuraKaze-V1.0.0-12b with the latest llama.cpp? It is a very good model, and a fresh set of quants built with the current llama.cpp would make me (and many other users) very happy. Thank you in advance for your reply.
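For context, the usual flow with a recent llama.cpp checkout looks roughly like the sketch below. The convert_hf_to_gguf.py script and the llama-quantize binary come from the llama.cpp repository; the output file names and the Q4_K_M type are only illustrative choices, not anything specified in this request.

```python
# Minimal sketch of the standard llama.cpp GGUF quantization flow.
# Assumes it is run from a built llama.cpp checkout; paths and quant type are illustrative.
import subprocess
from huggingface_hub import snapshot_download

# 1. Download the original model weights from the Hub.
model_dir = snapshot_download("PocketDoc/Dans-SakuraKaze-V1.0.0-12b")

# 2. Convert the HF checkpoint to a full-precision GGUF file
#    (convert_hf_to_gguf.py lives in the llama.cpp repository root).
subprocess.run(
    ["python", "convert_hf_to_gguf.py", model_dir,
     "--outfile", "dans-sakurakaze-f16.gguf", "--outtype", "f16"],
    check=True,
)

# 3. Quantize the GGUF file (llama-quantize is built alongside llama.cpp;
#    Q4_K_M is one commonly used quantization type).
subprocess.run(
    ["./llama-quantize", "dans-sakurakaze-f16.gguf",
     "dans-sakurakaze-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```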
