auto-patch README.md
README.md
CHANGED
@@ -4,11 +4,9 @@ language:
 - en
 library_name: transformers
 license: apache-2.0
+mradermacher:
+  readme_rev: 1
 quantized_by: mradermacher
-tags:
-- generated_from_trainer
-- smol_llama
-- llama2
 ---
 ## About
 
@@ -20,6 +18,9 @@ tags:
 static quants of https://huggingface.co/BEE-spoke-data/NanoLlama-GQA-L10-A32_KV8-v13-KI
 
 <!-- provided-files -->
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#NanoLlama-GQA-L10-A32_KV8-v13-KI-GGUF).***
+
 weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
 ## Usage
 
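For reference, a sketch of the model-card frontmatter as it should read after this patch, reconstructed from the two hunks above. Only the lines visible in the diff are shown; the nesting of `readme_rev` under `mradermacher:` is assumed from the indentation of the added lines.

```yaml
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1        # key added by this patch (nesting assumed)
quantized_by: mradermacher
# the former tags block (generated_from_trainer, smol_llama, llama2) is removed by this patch
```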