Upload folder using huggingface_hub (#1)
- 2b75eb2ec30c24c3cb820557d4a6f3194e9a72634ff8b4990227c983f2e0693e (b3eaf67bf99a6dfccdbb89c609617c0274035a4b)
- c8892290a5f8fae649a63ba8e47d05a413ade17a59cdfc13e187db802ae0b63b (aecb4992080e9aa6fd74d3359f19026dc21d8a25)
- 725121eb33cc40b3b7812bd9fc6d7be6796e2de14a4c5315b4301e3afde20915 (c096d8aa3b9d9bafa34e1f2bd0ecc35cebce1733)
- 7a382008c7d38997a555d2d047dc6d4ff04536a696b9f2c2969b21b35cfd4402 (74bb249ef23a2de407d24225bf7d6b96d6866a5c)
- 335fd80d1dc2c93ef6c0d7fa5bd9008cdf6c556b95cb2a8f94a9d7c474a859d2 (587b016ec419f259aca83afdcc9f284748102fbf)
- 51a9256386be88f56cd1aeafbb68e4c8fd7d3eef462bfcbe2e40bb2a873d86fc (d378c9cd086c24206ba91c8e4353b52711c81417)
- 226b87da757759969680fd55b39d42c06e84db9bbaec265deb24c4cceb50fd0c (271ea5fbaac1fe220f2ddc30fc14d8dfe4b63b5e)
- 2e71f6a8700461bd695c0d826929a0cb684326073d9737a4ef96bb01265e58cb (987bbf04ec5152dfc097b594ffa341c4dd503f26)
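
For reference, a minimal sketch of the kind of call that produces a commit like this one, using huggingface_hub's `upload_folder` (the local folder path is illustrative):

```python
from huggingface_hub import HfApi

# Minimal sketch (local path is illustrative): push a directory of GGUF files
# to the model repo in a single commit, as huggingface_hub's upload_folder
# utility does for an upload like this one.
api = HfApi()
api.upload_folder(
    folder_path="./Yi-Coder-1.5B-Chat-GGUF",          # local dir holding the .gguf files
    repo_id="MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF",
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```

Files changed in this commit: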
- .gitattributes +15 -0
- README.md +46 -3
- Yi-Coder-1.5B-Chat-GGUF_imatrix.dat +3 -0
- Yi-Coder-1.5B-Chat.IQ1_M.gguf +3 -0
- Yi-Coder-1.5B-Chat.IQ1_S.gguf +3 -0
- Yi-Coder-1.5B-Chat.IQ2_XS.gguf +3 -0
- Yi-Coder-1.5B-Chat.IQ3_XS.gguf +3 -0
- Yi-Coder-1.5B-Chat.IQ4_XS.gguf +3 -0
- Yi-Coder-1.5B-Chat.Q2_K.gguf +3 -0
- Yi-Coder-1.5B-Chat.Q3_K_L.gguf +3 -0
- Yi-Coder-1.5B-Chat.Q3_K_M.gguf +3 -0
- Yi-Coder-1.5B-Chat.Q3_K_S.gguf +3 -0
- Yi-Coder-1.5B-Chat.Q4_K_M.gguf +3 -0
- Yi-Coder-1.5B-Chat.Q4_K_S.gguf +3 -0
- Yi-Coder-1.5B-Chat.Q5_K_M.gguf +3 -0
- Yi-Coder-1.5B-Chat.Q5_K_S.gguf +3 -0
- Yi-Coder-1.5B-Chat.fp16.gguf +3 -0
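
Any single quantization from the list above can be fetched without cloning the whole repo; a hedged sketch using huggingface_hub's `hf_hub_download` (the chosen filename is just one of the listed variants):

```python
from huggingface_hub import hf_hub_download

# Sketch: download one quantized variant from the list above. The Q4_K_M file
# is an arbitrary example; any of the listed .gguf names works the same way.
path = hf_hub_download(
    repo_id="MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF",
    filename="Yi-Coder-1.5B-Chat.Q4_K_M.gguf",
)
print(path)  # local cache path of the resolved (LFS-backed) file
```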
.gitattributes
@@ -33,3 +33,18 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Yi-Coder-1.5B-Chat-GGUF_imatrix.dat filter=lfs diff=lfs merge=lfs -text
+Yi-Coder-1.5B-Chat.IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
+Yi-Coder-1.5B-Chat.IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
+Yi-Coder-1.5B-Chat.IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Yi-Coder-1.5B-Chat.IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Yi-Coder-1.5B-Chat.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Yi-Coder-1.5B-Chat.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+Yi-Coder-1.5B-Chat.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Yi-Coder-1.5B-Chat.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Yi-Coder-1.5B-Chat.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Yi-Coder-1.5B-Chat.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Yi-Coder-1.5B-Chat.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Yi-Coder-1.5B-Chat.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Yi-Coder-1.5B-Chat.fp16.gguf filter=lfs diff=lfs merge=lfs -text
+Yi-Coder-1.5B-Chat.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
README.md
@@ -1,3 +1,46 @@
----
-
-
+---
+tags:
+- quantized
+- 2-bit
+- 3-bit
+- 4-bit
+- 5-bit
+- 6-bit
+- 8-bit
+- GGUF
+- text-generation
+- text-generation
+model_name: Yi-Coder-1.5B-Chat-GGUF
+base_model: 01-ai/Yi-Coder-1.5B-Chat
+inference: false
+model_creator: 01-ai
+pipeline_tag: text-generation
+quantized_by: MaziyarPanahi
+---
+# [MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF](https://huggingface.co/MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF)
+- Model creator: [01-ai](https://huggingface.co/01-ai)
+- Original model: [01-ai/Yi-Coder-1.5B-Chat](https://huggingface.co/01-ai/Yi-Coder-1.5B-Chat)
+
+## Description
+[MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF](https://huggingface.co/MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF) contains GGUF format model files for [01-ai/Yi-Coder-1.5B-Chat](https://huggingface.co/01-ai/Yi-Coder-1.5B-Chat).
+
+### About GGUF
+
+GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
+
+Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
+* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
+* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
+* [GPT4All](https://gpt4all.io/index.html), a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
+* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
+
+## Special thanks
+
+🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
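
The card above names llama.cpp and llama-cpp-python among the GGUF-capable runtimes; as a rough sketch (the quant choice and prompt are illustrative, not part of the card), loading one of these files with llama-cpp-python looks like:

```python
from llama_cpp import Llama

# Illustrative sketch: pull one of the quantized GGUF files straight from the
# repo and run a short chat completion. The Q4_K_M quant and the prompt are
# examples only.
llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF",
    filename="Yi-Coder-1.5B-Chat.Q4_K_M.gguf",
    n_ctx=4096,
)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(reply["choices"][0]["message"]["content"])
```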
Yi-Coder-1.5B-Chat-GGUF_imatrix.dat
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f97a0b0ae439b402f257d8dbd1b8d0414ae8033744fa025356a24c9fc60a07be
+size 1713578

Yi-Coder-1.5B-Chat.IQ1_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33c8e19dcd885cf0a41809c8dea9d83463596864d32ca8b436328f75eef67616
+size 508567744

Yi-Coder-1.5B-Chat.IQ1_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:de0a12140c4c9d23ff578ce6d4694ce4028b0b490ef83afe098e853768dba257
+size 491167936

Yi-Coder-1.5B-Chat.IQ2_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:504d9a63fbdddd6216c5f1e6d63583dfa42db0c972158303d6316cb8c1be7ace
+size 563912896

Yi-Coder-1.5B-Chat.IQ3_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e8c28ff353b90aa1fa89d99ae22b7a1a5d43d4f4b521e8a74b1c0b93fcb28c59
+size 694952128

Yi-Coder-1.5B-Chat.IQ4_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0a0403538de89c7e8cfd9a33602ccb2ad7dca81891e823a5994dcfb4cddd0c04
+size 832569536

Yi-Coder-1.5B-Chat.Q2_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:53c7933d7086bebb6d84f11446e18af07b948f29c69af23beb9553d7a351b6c1
+size 634699968

Yi-Coder-1.5B-Chat.Q3_K_L.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8ae1533dec84aacbe0664c11be69023aa9e2f916772c9182a964e12aafb53baa
+size 826040512

Yi-Coder-1.5B-Chat.Q3_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0f1465650566ecbf1edd1b9908a1345b9c9d55680363f11fbd6dfe4d17242ee8
+size 785719488

Yi-Coder-1.5B-Chat.Q3_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5b3f8e1eff80ad0ba1535f252a9b5c369e9b74ced9dee3096fa1f52188bce778
+size 723411136

Yi-Coder-1.5B-Chat.Q4_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e2e8fa659cd75c828d7783b5c2fb60d220e08836065901fad8edb48e537c1cec
+size 963674304

Yi-Coder-1.5B-Chat.Q4_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c5d2b101e296111893a706a132ddf887365e02506ffff5520eb16629ffeece97
+size 904184000

Yi-Coder-1.5B-Chat.Q5_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0fad1688ebd76354bed003546354909b3e4991f8cba9506402f8daf062d998d1
+size 1100185792

Yi-Coder-1.5B-Chat.Q5_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c9bb582a6d1245f80f2ab13442a8e98f2eda33e228ae57436d5dbcd623220af
+size 1051230400

Yi-Coder-1.5B-Chat.fp16.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cee248d28859cf0cd1bc34c4cce2d11f8033a603310d487720ef5595bcdd6cc7
+size 2954682336
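
Each *.gguf entry above is committed as a Git LFS pointer (version, sha256 oid, size) rather than the binary itself. A small sketch, assuming hypothetical local file paths, of checking a downloaded blob against such a pointer:

```python
import hashlib
from pathlib import Path

# Sketch (paths are hypothetical): parse a Git LFS pointer like the ones above
# and confirm that a locally downloaded file matches its recorded size and sha256.
def parse_lfs_pointer(text: str) -> dict:
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {"oid": fields["oid"].removeprefix("sha256:"), "size": int(fields["size"])}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify(pointer_path: Path, blob_path: Path) -> bool:
    ptr = parse_lfs_pointer(pointer_path.read_text())
    return blob_path.stat().st_size == ptr["size"] and sha256_of(blob_path) == ptr["oid"]

# Example (hypothetical filenames):
# verify(Path("Yi-Coder-1.5B-Chat.Q4_K_M.gguf.pointer"), Path("Yi-Coder-1.5B-Chat.Q4_K_M.gguf"))
```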