---
license: cc-by-4.0
tags:
- requests
- gguf
- quantized
---
|
# Welcome to my GGUF-IQ-Imatrix Model Quantization Requests card!

Read below for more information.
|
|
|
**Requirements to request model quantizations:**

For the model:

- Maximum model parameter size of **11B**. <br>
*At the moment I am unable to accept requests for larger models due to hardware/time limitations.*

Important:

- Fill in the request template as outlined in the next section.
|
|
|
#### How to request a model quantization:

1. Open a [**New Discussion**](https://huggingface.co/Lewdiculous/Model-Requests/discussions/new) with a title of "`Request: Model-Author/Model-Name`", for example, "`Request: Nitral-AI/Infinitely-Laydiculous-7B`".

2. Include the following template in your message and fill in the information ([example request here](https://huggingface.co/Lewdiculous/Model-Requests/discussions/1)):
|
|
|
```
**Model name:**


**Model link:**


**Brief description:**


**An image to represent the model (square shaped):**

```