Update README.md
README.md CHANGED
@@ -32,8 +32,9 @@ Use Jinja Template or CHATML template.
 
 IMPORTANT NOTES:
 
-- Due to the unique nature (MOE, Size, Activated experts) of this model GGUF quants can be run on the CPU, GPU or with GPU part "off-load", right up to full precision.
-- This model is difficult to Imatrix : You need a much larger imatrix file / multi-language / multi-content to imatrix it.
+- Due to the unique nature (MOE, Size, Activated experts, size of experts) of this model GGUF quants can be run on the CPU, GPU or with GPU part "off-load", right up to full precision.
+- This model is difficult to Imatrix : You need a much larger imatrix file / multi-language / multi-content (ie code/text) to imatrix it.
+- GPU speeds will be BLISTERING 4x-8x or higher than CPU only speeds AND this model will be BLISTERING too, relative to other "30B" models (Token per second speed equal roughly to 4.5B "normal" model speeds).
 
 Please refer the org model card for details, benchmarks, how to use, settings, system roles etc etc :
 
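The note on CPU/GPU "off-load" above can be made concrete with a small sketch. The snippet below uses the llama-cpp-python bindings, which are an assumption here (the bindings, the file name, and the layer count are placeholders, not values taken from this repo or commit), to load a GGUF quant with part of the layers on the GPU and the remainder on the CPU:

```python
# Hedged sketch: partial GPU "off-load" of a GGUF quant via llama-cpp-python.
# Assumptions: llama-cpp-python is installed with GPU support, and a GGUF quant
# of the model has been downloaded locally; the file name and layer count are
# placeholders, not values from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="path/to/this-model-Q4_K_M.gguf",  # any local GGUF quant
    n_gpu_layers=24,   # 0 = CPU only, -1 = all layers on GPU, in between = partial off-load
    n_ctx=4096,        # context window for the session
)

out = llm("Briefly explain what a Mixture-of-Experts model is.", max_tokens=128)
print(out["choices"][0]["text"])
```

Likewise, the imatrix note comes down to feeding the importance-matrix tool a larger, more varied calibration file. A minimal sketch of assembling such a multi-language / multi-content (text plus code) file, assuming the individual samples already exist on disk (the sample file names are invented for illustration):

```python
# Hedged sketch: concatenating varied samples into one large calibration file
# for importance-matrix (imatrix) generation. The sample file names are
# placeholders; use whatever multi-language text and code is on hand.
from pathlib import Path

samples = [
    "calib/english_prose.txt",
    "calib/non_english_prose.txt",
    "calib/python_code.txt",
    "calib/c_code.txt",
]

with open("imatrix_calibration.txt", "w", encoding="utf-8") as out:
    for src in samples:
        out.write(Path(src).read_text(encoding="utf-8").strip() + "\n\n")

# The combined file can then be passed to llama.cpp's llama-imatrix tool
# (its -f/--file option) to compute the importance matrix used for quantizing.
```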