Upload folder using huggingface_hub (#1)
- 07ae574c03cd3cd83858695cb7e53df7c6bf812739fbe93f4663768f879901f5 (f2bd3a4d67f14999efe138cda7665cd4f2f9c92c)
- 82b79d8530939fcd4e1d925f963b6147f8d9639540a3b818a65d75755b768f26 (f4c01cc6306231ca9b7280fe7b18778fae0f36b0)
- 62e19f16ad9598390105ace6dda9feeffd56a65fedad65ea0a0e1103d38ab117 (a0dabaacef6f216a3882680f51b7425293e4fd33)
- e67a95f82ca186474c24ede16089d1ace1c8d35a9481bbd83441593c5c556b39 (6e81a1b4e272e123591ce531be0abb56ed5fa489)
- bb13c41f0d707272fe17ebe7142ce82f602ccb3a7113749a7b0e5af9edfde6ca (4ce2041393557ec043ae347ae80b1919f4d6a94e)
- aad9f219182a62bc3542aefc23585c0de1bfcdd396f2836c342ba1e325db2821 (70f617083531ff553c06fb6e1f05935a7ee0d8d7)
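For context, a commit with this title is the kind produced by the `huggingface_hub` upload API. A minimal sketch of such an upload, assuming the quant files sit in a local folder; the folder path and token setup here are illustrative assumptions, not details taken from this commit:

```python
# Sketch only: upload a folder of GGUF files to a Hugging Face model repo.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` by default
api.upload_folder(
    folder_path="./Ice0.101-20.03-RP-GRPO-2-GGUF",  # hypothetical local dir
    repo_id="MaziyarPanahi/Ice0.101-20.03-RP-GRPO-2-GGUF",
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```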
.gitattributes CHANGED
@@ -33,3 +33,8 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Ice0.101-20.03-RP-GRPO-2.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Ice0.101-20.03-RP-GRPO-2.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Ice0.101-20.03-RP-GRPO-2.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Ice0.101-20.03-RP-GRPO-2.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+Ice0.101-20.03-RP-GRPO-2.fp16.gguf filter=lfs diff=lfs merge=lfs -text
Ice0.101-20.03-RP-GRPO-2.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:738142bf96d35c7c6ee515572eb4b43c698bab9b54f4d533307ffdea1f3ee870
+size 5131411200
Ice0.101-20.03-RP-GRPO-2.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0802839ff2f2ff07c30cf88c7e2d1d7a6468399628aaff8fa5d135d8c4f89720
+size 4997717760
Ice0.101-20.03-RP-GRPO-2.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c6216c68086bbe28b962ae695b6008e9e85b5f89f7a4300b7fe999751919c3c0
+size 5942066944
Ice0.101-20.03-RP-GRPO-2.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b3ad875799993f9defd0ba55e5a2622524c0e53537fe20841f42a6b2f9d9f873
+size 7695859456
Ice0.101-20.03-RP-GRPO-2.fp16.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a6dff5c352ae37c0745de67c343b96c9a7b80db4fd986373c1f0882843828eb3
+size 14484733696
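The five files above are Git LFS pointers: each records the spec version, a SHA-256 `oid`, and the byte size of the actual payload stored in LFS. A minimal sketch, assuming `huggingface_hub` is installed, that downloads one of these quants and checks it against the fields recorded in its pointer (the Q5_K_S file is used here simply because it is the smallest):

```python
# Sketch only: fetch one of the GGUF files added in this commit and
# verify it against the LFS pointer fields shown above.
import hashlib
import os

from huggingface_hub import hf_hub_download

# oid and size copied from the Q5_K_S pointer in this commit
EXPECTED_SHA256 = "0802839ff2f2ff07c30cf88c7e2d1d7a6468399628aaff8fa5d135d8c4f89720"
EXPECTED_SIZE = 4_997_717_760

path = hf_hub_download(
    repo_id="MaziyarPanahi/Ice0.101-20.03-RP-GRPO-2-GGUF",
    filename="Ice0.101-20.03-RP-GRPO-2.Q5_K_S.gguf",
)

assert os.path.getsize(path) == EXPECTED_SIZE

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read 1 MiB at a time
        digest.update(chunk)
assert digest.hexdigest() == EXPECTED_SHA256
print("pointer fields match:", path)
```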
README.md ADDED
@@ -0,0 +1,45 @@
+---
+base_model: icefog72/Ice0.101-20.03-RP-GRPO-2
+inference: false
+model_creator: icefog72
+model_name: Ice0.101-20.03-RP-GRPO-2-GGUF
+pipeline_tag: text-generation
+quantized_by: MaziyarPanahi
+tags:
+- quantized
+- 2-bit
+- 3-bit
+- 4-bit
+- 5-bit
+- 6-bit
+- 8-bit
+- GGUF
+- text-generation
+---
+# [MaziyarPanahi/Ice0.101-20.03-RP-GRPO-2-GGUF](https://huggingface.co/MaziyarPanahi/Ice0.101-20.03-RP-GRPO-2-GGUF)
+- Model creator: [icefog72](https://huggingface.co/icefog72)
+- Original model: [icefog72/Ice0.101-20.03-RP-GRPO-2](https://huggingface.co/icefog72/Ice0.101-20.03-RP-GRPO-2)
+
+## Description
+[MaziyarPanahi/Ice0.101-20.03-RP-GRPO-2-GGUF](https://huggingface.co/MaziyarPanahi/Ice0.101-20.03-RP-GRPO-2-GGUF) contains GGUF format model files for [icefog72/Ice0.101-20.03-RP-GRPO-2](https://huggingface.co/icefog72/Ice0.101-20.03-RP-GRPO-2).
+
+### About GGUF
+
+GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
+
+Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
+* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of 27/11/2023.
+* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
+* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux, and macOS with full GPU accel.
+* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of writing (November 27th, 2023), ctransformers had not been updated in a long time and does not support many recent models.
+
+## Special thanks
+
+🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
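The README above lists several GGUF runtimes; as a concrete follow-up, here is a minimal sketch of loading one of this repo's quants with llama-cpp-python. It assumes a recent llama-cpp-python release that provides `Llama.from_pretrained`; the context size and prompt are arbitrary illustrative values:

```python
# Sketch only: load the Q5_K_M quant from this repo and generate text.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Ice0.101-20.03-RP-GRPO-2-GGUF",
    filename="Ice0.101-20.03-RP-GRPO-2.Q5_K_M.gguf",
    n_ctx=4096,     # illustrative context window
    verbose=False,
)

out = llm("Write a two-sentence greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```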
|