---
license: cc-by-4.0
tags:
- requests
- gguf
- quantized
---
# Welcome to my GGUF-IQ-Imatrix Model Quantization Requests card!
Read below for more information.
**Requirements to request model quantizations:**
For the model:
- Maximum model parameter size of **11B**. <br>
*At the moment I am unable to accept requests for larger models due to hardware/time limitations.*
Important:
- Fill the request template as outlined in the next section.
#### How to request a model quantization:
1. Open a [**New Discussion**](https://huggingface.co/Lewdiculous/Model-Requests/discussions/new) titled "`Request: Model-Author/Model-Name`", for example, "`Request: Nitral-AI/Infinitely-Laydiculous-7B`", without the quotation marks.
2. Include the following template in your post and fill the required information ([example request here](https://huggingface.co/Lewdiculous/Model-Requests/discussions/1)):
```
**[Required] Model name:**
**[Required] Model link:**
**[Required] Brief description:**
**[Required] An image/direct image link to represent the model (square shaped):**
**[Optional] Additional quants (if you want any):**
Default list of quants for reference:
"Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
"Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
```
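If you prefer to script the request, the steps above can also be sketched programmatically. The snippet below builds the discussion title and the filled-in template, and shows (commented out) how it could be posted with `huggingface_hub`'s `create_discussion` API. The model details used here are purely illustrative placeholders; posting requires being logged in with `huggingface-cli login`.

```python
# Sketch: build a quantization request discussion title and body.
# All model details below are illustrative placeholders.

def build_request(model_name: str, model_link: str,
                  description: str, image_link: str) -> tuple[str, str]:
    """Return the (title, body) pair for a request discussion."""
    title = f"Request: {model_name}"
    body = (
        f"**[Required] Model name:** {model_name}\n"
        f"**[Required] Model link:** {model_link}\n"
        f"**[Required] Brief description:** {description}\n"
        f"**[Required] An image/direct image link to represent "
        f"the model (square shaped):** {image_link}\n"
        "**[Optional] Additional quants (if you want any):**\n"
    )
    return title, body

title, body = build_request(
    "Nitral-AI/Infinitely-Laydiculous-7B",
    "https://huggingface.co/Nitral-AI/Infinitely-Laydiculous-7B",
    "Example description.",          # placeholder
    "https://example.com/card.png",  # placeholder image link
)

# To actually open the discussion (requires authentication), uncomment:
# from huggingface_hub import HfApi
# HfApi().create_discussion(
#     repo_id="Lewdiculous/Model-Requests",
#     title=title,
#     description=body,
# )
```

This is only a convenience sketch; opening the discussion manually in the browser works just as well.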