---
license: llama2
---

# WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions

🤗 HF Repo • 🐦 Twitter • 📃 [WizardLM] • 📃 [WizardCoder]

👋 Join our Discord

| Model | Checkpoint | Paper | MT-Bench | AlpacaEval | GSM8k | HumanEval | License |
|-------|------------|-------|----------|------------|-------|-----------|---------|
| WizardLM-70B-V1.0 | 🤗 HF Link | 📃 Coming Soon | 7.78 | 92.91% | 77.6% | 50.6 pass@1 | Llama 2 License |
| WizardLM-13B-V1.2 | 🤗 HF Link | | 7.06 | 89.17% | 55.3% | 36.6 pass@1 | Llama 2 License |
| WizardLM-13B-V1.1 | 🤗 HF Link | | 6.76 | 86.32% | | 25.0 pass@1 | Non-commercial |
| WizardLM-30B-V1.0 | 🤗 HF Link | | 7.01 | | | 37.8 pass@1 | Non-commercial |
| WizardLM-13B-V1.0 | 🤗 HF Link | | 6.35 | 75.31% | | 24.0 pass@1 | Non-commercial |
| WizardLM-7B-V1.0 | 🤗 HF Link | 📃 [WizardLM] | | | | 19.1 pass@1 | Non-commercial |
| WizardCoder-15B-V1.0 | 🤗 HF Link | 📃 [WizardCoder] | | | | 57.3 pass@1 | OpenRAIL-M |
- 🔥🔥🔥 [08/09/2023] We released the WizardLM-70B-V1.0 model.

GitHub Repo: https://github.com/nlpxucan/WizardLM

Twitter: https://twitter.com/WizardLM_AI/status/1689270108747976704

Discord: https://discord.gg/bpmeZD7V

❗ Note on the model's system prompt usage:

WizardLM adopts the prompt format from Vicuna and supports multi-turn conversation. The prompt should be formatted as follows:

A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am WizardLM.</s>......
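The sketch below shows one way to assemble this template for multi-turn generation with the Transformers library. It is a minimal illustration, not an official inference script: the `WizardLM/WizardLM-70B-V1.0` repo id, the `build_prompt` helper, and the generation settings are assumptions for the example.

```python
# Minimal sketch (assumed repo id, illustrative helper): build the
# Vicuna-style multi-turn prompt and generate with Transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLM/WizardLM-70B-V1.0"  # assumed Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the "
    "user's questions."
)

def build_prompt(history, user_message):
    """Assemble the prompt from prior (user, assistant) turns,
    closing each completed assistant turn with </s>."""
    prompt = SYSTEM + " "
    for user_turn, assistant_turn in history:
        prompt += f"USER: {user_turn} ASSISTANT: {assistant_turn}</s>"
    prompt += f"USER: {user_message} ASSISTANT:"
    return prompt

history = [("Hi", "Hello.")]
prompt = build_prompt(history, "Who are you?")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```

When appending a completed exchange to `history`, the `</s>` end-of-sequence token closes the assistant turn, matching the template above.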

❗ Regarding the common concern about the dataset:

Recently, our organization has made clear changes to its open-source policies and regulations covering code, data, and models.

Despite this, we have worked hard to obtain permission to release the model weights first; the data is subject to stricter auditing and is still under review by our legal team.

Our researchers have no authority to release it publicly without authorization.

Thank you for your understanding.