Updating model files
- tatsu-lab/alpaca
inference: false
---

<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/UBgz4VXf">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? Patreon coming soon!</a></p>
</div>
</div>

# StableVicuna-13B-GPTQ

```
CUDA_VISIBLE_DEVICES=0 python3 llama.py stable-vicuna-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors stable-vicuna-13B-GPTQ-4bit.act-order.safetensors
```
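The `--save_safetensors` flag writes the quantised weights in the safetensors format, which is deliberately simple: an 8-byte little-endian header length, a JSON header describing each tensor, then the raw tensor data. A minimal stdlib-only sketch of that layout (the tensor name is hypothetical; a real checkpoint has one entry per weight tensor):

```python
import json
import struct

def build_safetensors_blob(tensors):
    """Build a minimal safetensors-style blob from {name: (dtype, shape, raw_bytes)}."""
    header = {}
    payload = b""
    for name, (dtype, shape, raw) in tensors.items():
        start = len(payload)
        header[name] = {"dtype": dtype, "shape": shape,
                        "data_offsets": [start, start + len(raw)]}
        payload += raw
    header_bytes = json.dumps(header).encode("utf-8")
    # 8-byte little-endian header length, then the JSON header, then tensor data
    return struct.pack("<Q", len(header_bytes)) + header_bytes + payload

def read_safetensors_header(blob):
    """Read only the JSON header; the tensor data itself is never touched."""
    (header_len,) = struct.unpack("<Q", blob[:8])
    return json.loads(blob[8:8 + header_len].decode("utf-8"))

# Hypothetical tensor name, purely for illustration
blob = build_safetensors_blob({"layer0.weight": ("F32", [2, 2], bytes(16))})
print(read_safetensors_header(blob)["layer0.weight"])
# -> {'dtype': 'F32', 'shape': [2, 2], 'data_offsets': [0, 16]}
```

In practice you would use the `safetensors` library rather than hand-parsing, but this shows why the format loads quickly and safely: the header can be inspected without executing code or reading the tensor data.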

## Manual instructions for `text-generation-webui`

File `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors` can be loaded the same as any other GPTQ file, without requiring any updates to [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).
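As a sketch of what "loaded the same as any other GPTQ file" means in practice, the file goes into a model directory under the webui's `models/` folder and the server is launched with the GPTQ flags. Paths and flags below are assumptions based on the webui's GPTQ loader at the time, not instructions from this repo:

```python
from pathlib import Path

# Assumed layout: text-generation-webui expects each model in models/<model-name>/
model_dir = Path("text-generation-webui/models/stable-vicuna-13B-GPTQ")
model_dir.mkdir(parents=True, exist_ok=True)

# The chosen .safetensors file plus the tokenizer/config files go in model_dir, then:
#   python server.py --model stable-vicuna-13B-GPTQ --wbits 4 --groupsize 128 --model_type llama
print(model_dir.exists())
```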

If you can't update GPTQ-for-LLaMa or don't want to, you can use `stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors` as mentioned above, which should work without any upgrades to text-generation-webui.

## Want to support my work?

I've had a lot of people ask if they can contribute. I love providing models and helping people, but it is starting to rack up pretty big cloud computing bills.

So if you're able and willing to contribute, it'd be most gratefully received and will help me to keep providing models, and to work on various AI projects.

Donors will get priority support on any and all AI/LLM/model questions, and I'll gladly quantise any model you'd like to try.

* Patreon: coming soon! (just awaiting approval)
* Ko-Fi: https://ko-fi.com/TheBlokeAI
* Discord: https://discord.gg/UBgz4VXf

# Original StableVicuna-13B model card

## Model Description

  Zack Witten and
  alexandremuzio and
  crumb},
  title = {{CarperAI/trlx: v0.6.0: LLaMa (Alpaca), Benchmark
           Util, T5 ILQL, Tests}},
  month = mar,
  year = 2023,