Nitral committed
Delete quant
quant/README.md
DELETED
@@ -1,35 +0,0 @@
-
-Thanks to @s3nh for the great quantization notebook code.
----
-license: openrail
-pipeline_tag: text-generation
-library_name: transformers
-language:
-- en
----
-
-
-## Original model card
-
-Buy @s3nh a coffee if you like this project ;)
-<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
-
-#### Description
-
-GGUF format model files for [this project](https://huggingface.co/Test157t/Kunocchini-7b-128k-test).
-
-### GGUF Specs
-
-GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
-
-Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
-Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
-mmap compatibility: models can be loaded using mmap for fast loading and saving.
-Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
-Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
-The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.
-
-# Original model card
-
quant/kunocchini-7b-128k-test.IQ3_XXS.gguf
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:d52156cc82de97a3ff9756c04d15170649764b37003bb653a0b5327f693f2691
-size 3023378432
quant/kunocchini-7b-128k-test.fp16.bin
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:576dd511cd00c56f145d20f013ff19b8df15c088017171b6d7823fa6a3d76a86
-size 14484731872