Updating model files
README.md CHANGED
@@ -4,6 +4,17 @@ datasets:
   - ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
 inference: false
 ---
+<div style="width: 100%;">
+    <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+</div>
+<div style="display: flex; justify-content: space-between; width: 100%;">
+    <div style="display: flex; flex-direction: column; align-items: flex-start;">
+        <p><a href="https://discord.gg/UBgz4VXf">Chat & support: my new Discord server</a></p>
+    </div>
+    <div style="display: flex; flex-direction: column; align-items: flex-end;">
+        <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? Patreon coming soon!</a></p>
+    </div>
+</div>
 
 # WizardLM - uncensored: An Instruction-following LLM Using Evol-Instruct
 
@@ -52,7 +63,18 @@ It was created with the `--act-order` parameter. It may have slightly lower infe
 ```
 python llama.py /workspace/models/ehartford_WizardLM-30B-Uncensored wikitext2 --wbits 4 --true-sequential --act-order --save_safetensors /workspace/eric-30B/gptq/WizardLM-30B-Uncensored-GPTQ-4bit.act-order.safetensors
 ```
-
+
+## Want to support my work?
+
+I've had a lot of people ask if they can contribute. I love providing models and helping people, but it is starting to rack up pretty big cloud computing bills.
+
+So if you're able and willing to contribute, it'd be most gratefully received and will help me to keep providing models, and work on various AI projects.
+
+Donaters will get priority support on any and all AI/LLM/model questions, and I'll gladly quantise any model you'd like to try.
+
+* Patreon: coming soon! (just awaiting approval)
+* Ko-Fi: https://ko-fi.com/TheBlokeAI
+* Discord: https://discord.gg/UBgz4VXf
 # WizardLM-30B-Uncensored original model card
 
 This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.
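As a side note on the `--save_safetensors` output of the quantisation command above: a safetensors file begins with an 8-byte little-endian integer giving the length of a JSON header that maps each tensor name to its dtype, shape, and data offsets, so the file can be sanity-checked with the Python standard library alone, without loading torch. A minimal sketch (the filename in the commented usage is only illustrative):

```python
import json
import struct


def read_safetensors_header(path):
    """Read the JSON metadata header of a .safetensors file.

    The format starts with an 8-byte little-endian unsigned integer
    giving the byte length of a JSON header; the header maps tensor
    names to their dtype, shape, and data offsets within the file.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))


# Hypothetical usage against a locally downloaded copy:
# header = read_safetensors_header("WizardLM-30B-Uncensored-GPTQ-4bit.act-order.safetensors")
# for name, info in list(header.items())[:5]:
#     print(name, info.get("dtype"), info.get("shape"))
```

This only parses metadata, so it is a cheap way to confirm a multi-gigabyte download is intact and to list tensor names and shapes before committing to a full load.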