---
license: cc-by-nc-4.0
inference: false
pipeline_tag: text-generation
tags:
- gguf
- quantized
- text-generation-inference
---
|
|
|
**GGUF-IQ-Imatrix-Quantization-Script:**
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65ddabb9bbffb280f4b45d8e/vwlPdqxrSdILCHM24n_M2.png)
|
|
|
A simple Python script (`gguf-imat.py`) to generate various GGUF-IQ-Imatrix quantizations from a Hugging Face `author/model` input, intended for Windows machines with NVIDIA hardware.
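
As a rough illustration of the script's first step, the model download can be done with `huggingface_hub`. This is a minimal sketch under that assumption, not the script's exact code, and `author/model` is a placeholder:

```
from pathlib import Path
from huggingface_hub import snapshot_download

# Placeholder repo id; pass your own "author/model" string.
repo_id = "author/model"
model_name = repo_id.split("/")[-1]

# Download the full model repository into a local working folder.
local_dir = Path("models") / model_name
snapshot_download(repo_id=repo_id, local_dir=str(local_dir))
print(f"Model files downloaded to {local_dir}")
```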
|
|
|
This is set up for a Windows machine with 8GB of VRAM, assuming an NVIDIA GPU. If you want to change the `-ngl` (number of GPU layers) amount, you can do so at **line 120**. This only matters during the `--imatrix` data generation. If you don't have enough VRAM, you can decrease the `-ngl` amount, or set it to 0 to run all layers on your system RAM instead.
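
For reference, the imatrix generation step boils down to a call along these lines. This is a hedged sketch assuming a CUDA-enabled llama.cpp `imatrix` binary; the binary name, paths, and `-ngl` value here are assumptions and may differ from what the script actually runs:

```
import subprocess

# Assumed paths; adjust to your llama.cpp build and model location.
imatrix_binary = r"llama.cpp\build\bin\Release\imatrix.exe"
model_gguf = r"models\model-name-GGUF\model-name-F16.gguf"

subprocess.run(
    [
        imatrix_binary,
        "-m", model_gguf,              # input GGUF model
        "-f", r"imatrix\imatrix.txt",  # calibration text
        "-o", r"imatrix\imatrix.dat",  # output importance matrix
        "-ngl", "7",                   # GPU layers; lower this (or use 0) if VRAM is tight
    ],
    check=True,
)
```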
|
|
|
Your `imatrix.txt` is expected to be located inside the `imatrix` folder. The included file is considered a good option; it comes from [this discussion](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
|
|
|
Adjust `quantization_options` at **line 133**.
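
For context, `quantization_options` is just a list of llama.cpp quantization type names that the script loops over. The defaults may differ from this; the example below is illustrative, using valid type names:

```
# Illustrative selection of llama.cpp quantization types; edit to taste.
quantization_options = [
    "Q4_K_M", "Q4_K_S", "IQ4_XS",
    "Q5_K_M", "Q5_K_S",
    "Q6_K", "Q8_0",
    "IQ3_M", "IQ3_S", "IQ3_XXS",
]
```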
|
|
|
**Requirements:**

- Git
- Python 3.11
- `pip install huggingface_hub`
|
|
|
**Usage:**

```
python .\gguf-imat.py
```
|
Quantizations will be output into the created `models\{model-name}-GGUF` folder.
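
As an illustrative example (the model name below is hypothetical), a run with the quantization list above would produce files such as:

```
models\model-name-GGUF\model-name-Q4_K_M.gguf
models\model-name-GGUF\model-name-Q5_K_M.gguf
models\model-name-GGUF\model-name-Q8_0.gguf
```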
|
<br><br> |
|
|
|
### **Credits:**
|
|
|
**If this proves useful for you, feel free to credit and share the repository.**
|
|
|
**Made in conjunction with [@Lewdiculous](https://huggingface.co/Lewdiculous).** |