alim committed • Commit 52250d1 • 1 parent: 1feb2d3
Create README.md

README.md ADDED
@@ -0,0 +1,70 @@
---
license: llama2
pipeline_tag: text-generation
---
# Disclaimer: I do not own the weights of WizardLM-13B-V1.2, nor did I train the model. I only sharded or split the model weights.

The actual weights can be found [here](https://huggingface.co/WizardLM/WizardLM-13B-V1.2).

The rest of this README is copied from that original model page.
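
For reference, this kind of sharding can be done with Transformers' `save_pretrained`, which splits a checkpoint into size-capped weight files plus an index. A minimal sketch follows; the `max_shard_size` value and output directory are illustrative assumptions, not the exact commands used for this repo.

```python
# Minimal sketch: re-shard a full checkpoint with Hugging Face Transformers.
# The shard size and output path are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

source = "WizardLM/WizardLM-13B-V1.2"  # original full-weight repo

model = AutoModelForCausalLM.from_pretrained(source)
tokenizer = AutoTokenizer.from_pretrained(source)

# save_pretrained writes weight files no larger than max_shard_size,
# together with an index that maps each tensor to its shard.
model.save_pretrained("WizardLM-13B-V1.2-sharded", max_shard_size="2GB")
tokenizer.save_pretrained("WizardLM-13B-V1.2-sharded")
```

Smaller shards are easier to download and to load on memory-constrained machines; `from_pretrained` reassembles them transparently.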

This is the **Full-Weight** of the WizardLM-13B V1.2 model, trained from **Llama-2 13b**.

## WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions

<p align="center">
🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/bpmeZD7V" target="_blank">Discord</a>
</p>

<font size=4>

| <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> | <sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>WizardEval</sup> | <sup>HumanEval</sup> | <sup>License</sup> |
| ----- | ------ | ---- | ------ | ------- | ----- | ----- | ----- |
| <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a></sup> | | <sup>7.06</sup> | <sup>89.17%</sup> | <sup>101.4%</sup> | <sup>36.6 pass@1</sup> | <sup><a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a></sup> |
| <sup>WizardLM-13B-V1.1</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a></sup> | | <sup>6.76</sup> | <sup>86.32%</sup> | <sup>99.3%</sup> | <sup>25.0 pass@1</sup> | <sup>Non-commercial</sup> |
| <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | <sup>97.8%</sup> | <sup>37.8 pass@1</sup> | <sup>Non-commercial</sup> |
| <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a></sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | <sup>89.1%</sup> | <sup>24.0 pass@1</sup> | <sup>Non-commercial</sup> |
| <sup>WizardLM-7B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a></sup> | <sup>📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a></sup> | | | <sup>78.0%</sup> | <sup>19.1 pass@1</sup> | <sup>Non-commercial</sup> |
| <sup>WizardCoder-15B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a></sup> | <sup>📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a></sup> | | | | <sup>57.3 pass@1</sup> | <sup><a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a></sup> |
</font>

**Repository**: https://github.com/nlpxucan/WizardLM

**Twitter**:

- 🔥🔥🔥 [7/25/2023] We released the **WizardLM V1.2** models. The **WizardLM-13B-V1.2** is here ([Demo_13B-V1.2](https://b7a19878988c8c73.gradio.app), [Demo_13B-V1.2_bak-1](https://d0a37a76e0ac4b52.gradio.app/), [Full Model Weight](https://huggingface.co/WizardLM/WizardLM-13B-V1.2)). Please check out the [paper](https://arxiv.org/abs/2304.12244).
- 🔥🔥🔥 [7/25/2023] **WizardLM-13B-V1.2** achieves **7.06** on the [MT-Bench Leaderboard](https://chat.lmsys.org/?leaderboard), **89.17%** on the [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/), and **101.4%** on [WizardLM Eval](https://github.com/nlpxucan/WizardLM/blob/main/WizardLM/data/WizardLM_testset.jsonl). (Note: the MT-Bench and AlpacaEval results are self-tested; we will push updates and request official review. All tests were completed under the respective official settings.)

❗<b>Note for model system prompts usage:</b>

<b>WizardLM</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Hi
ASSISTANT: Hello.
USER: Who are you?
ASSISTANT: I am WizardLM.
......
```
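
For illustration, here is a minimal sketch of multi-turn generation with this prompt format via the Transformers `pipeline`. The dtype, sampling parameters, and the newline separator between turns (taken from the example above) are assumptions; verify them against the upstream inference code.

```python
# Minimal sketch: build the Vicuna-style prompt shown above and generate.
# Sampling settings and dtype are illustrative assumptions.
import torch
from transformers import pipeline

SYSTEM = ("A chat between a curious user and an artificial intelligence "
          "assistant. The assistant gives helpful, detailed, and polite "
          "answers to the user's questions.")

def build_prompt(turns):
    # turns: list of (user_message, assistant_reply); use None for the
    # reply that the model should produce next.
    parts = [SYSTEM]
    for user, assistant in turns:
        parts.append(f"USER: {user}")
        parts.append("ASSISTANT:" if assistant is None else f"ASSISTANT: {assistant}")
    return "\n".join(parts)

generator = pipeline(
    "text-generation",
    model="WizardLM/WizardLM-13B-V1.2",
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = build_prompt([("Hi", "Hello."), ("Who are you?", None)])
result = generator(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
# generated_text contains the prompt plus the completion; strip the prompt.
print(result[0]["generated_text"][len(prompt):])
```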

❗<b>Regarding common concerns about the dataset:</b>

Recently, there have been clear changes in our organization's overall open-source policy and regulations for code, data, and models.

Despite this, we have still worked hard to get the model weights released first; the data involves stricter auditing and is under review by our legal team.

Our researchers have no authority to release it publicly without authorization.

Thank you for your understanding.