Hosts the files for a Google Colab notebook, intended to make it easier to create GGUF models with an importance matrix (imatrix).

Free Tier Colab

This is only for making the initial FP16 GGUF file and computing an imatrix.dat.
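
As a rough sketch of those two steps, here is the kind of thing the notebook runs under the hood. The paths, script name, and flags below are assumptions based on recent llama.cpp builds (older builds ship the converter as convert.py and the imatrix tool without the "llama-" prefix), so treat this as illustrative rather than exact.

```python
import subprocess

# 1. Convert the Hugging Face model to an FP16 GGUF file.
#    Script name and flags assume a recent llama.cpp checkout.
subprocess.run(
    [
        "python", "llama.cpp/convert_hf_to_gguf.py",
        "path/to/hf-model",            # local HF checkpoint (assumed path)
        "--outtype", "f16",
        "--outfile", "model-f16.gguf",
    ],
    check=True,
)

# 2. Compute the importance matrix from a calibration text file
#    (e.g. the Default, RP, or Extended dataset mentioned below).
subprocess.run(
    [
        "llama.cpp/llama-imatrix",
        "-m", "model-f16.gguf",
        "-f", "calibration-data.txt",
        "-o", "imatrix.dat",
    ],
    check=True,
)
```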

Quantizing is too slow on Colab, since the free tier only provides two CPU cores.
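
Quantization is therefore expected to happen on your own machine afterwards. A minimal sketch, assuming a local llama.cpp build whose quantize binary is named llama-quantize (the binary name, flags, and the Q4_K_M target used here are assumptions; pick whatever quant type you want):

```python
import subprocess

# Quantize the FP16 GGUF locally, guided by the imatrix computed on Colab.
subprocess.run(
    [
        "llama.cpp/llama-quantize",
        "--imatrix", "imatrix.dat",
        "model-f16.gguf",
        "model-Q4_K_M.gguf",
        "Q4_K_M",
    ],
    check=True,
)
```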

Details

Thanks to mlabonne for the initial code

Default Imatrix is from kalomaze

RP Imatrix is from Lewdiculous

Extended imatrix is a mix of all the data with added alphabets, from ParasiticRogue
