Nitral Ultra-Mix 8b / Community Request 4 12B imatrix GGUFs Request

#801
by swiftwater - opened

Hello friends. Would you please create imatrix GGUFs for the following Nitral models? Thank you!

https://huggingface.co/Nitral-AI/Hathor_Ultra-Mix-Final-0.420-L3-8B
https://huggingface.co/Nitral-AI/Community_Request-04.20-12B

They are queued! :D

You can check for progress at http://hf.tst.eu/status.html or regularly check the model summary page at https://hf.tst.eu/model#/Hathor_Ultra-Mix-Final-0.420-L3-8B-GGUF and https://hf.tst.eu/model#/Community_Request-04.20-12B-GGUF for quants to appear.

Please pardon my request if premature, but the imatrix versions seem blocked or to have errored out. Would you please retry the imatrix versions? Thank you.

No worries, the imatrix quants will come soon. I currently lack the knowledge to restart a failed imatrix task, so we need to wait for mradermacher to come online and restart them. They failed because I unfortunately forgot to reinitialize the GPUs of StormPeak before starting the nico1 LXC container, so llama.cpp attempted the imatrix computation without a GPU, which would have been painfully slow and a waste of resources. I already informed @mradermacher at https://huggingface.co/mradermacher/BabyHercules-4x150M-GGUF/discussions/4#67e46117694994f97b9e6390. I could technically nuke and requeue the model to restart the imatrix tasks now that the static quants are done, but there has to be a better way, and mradermacher will likely be available soon.
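For illustration, the failure mode described above (imatrix silently falling back to CPU) could be guarded against with a small pre-flight check. This is only a hedged sketch: the `have_gpu` helper, file names, and the commented-out `llama-imatrix` invocation are hypothetical placeholders, not the actual queue scripts used on nico1.

```shell
# Hypothetical guard: only start an imatrix run when a CUDA GPU is visible,
# to avoid a painfully slow CPU-only computation.
have_gpu() {
    # nvidia-smi -L lists one line per visible GPU; empty/missing means no GPU.
    command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi -L 2>/dev/null | grep -q 'GPU'
}

if have_gpu; then
    echo "GPU detected, starting imatrix computation"
    # Illustrative invocation (model/calibration file names are placeholders):
    # llama-imatrix -m model.gguf -f calibration.txt -ngl 999 -o model.imatrix
else
    echo "no GPU visible; refusing to run imatrix on CPU" >&2
fi
```

A check like this would have caught the missing GPU reinitialization before the task started, instead of after it had already been queued.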