TheBloke committed
Commit b0c1ed2
1 Parent(s): 0eda9da

Upload README.md

Files changed (1):
  README.md (+114 -30)
README.md CHANGED
@@ -5,16 +5,9 @@ license: llama2
model_creator: Xwin-LM
model_name: Xwin-LM 13B V0.1
model_type: llama
- prompt_template: 'Below is an instruction that describes a task. Write a response
- that appropriately completes the request.
-
-
- ### Instruction:
-
- {prompt}
-
-
- ### Response:
+ prompt_template: 'A chat between a curious user and an artificial intelligence assistant.
+ The assistant gives helpful, detailed, and polite answers to the user''s questions.
+ USER: {prompt} ASSISTANT:

  '
quantized_by: TheBloke
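This `prompt_template` front matter is machine-readable. A minimal sketch (not part of the README; assumes `huggingface_hub`'s `ModelCard` API) of loading it and filling in `{prompt}`:

```python
# Hedged sketch: read prompt_template from the card front matter and fill it.
# The example prompt is arbitrary.
from huggingface_hub import ModelCard

card = ModelCard.load("TheBloke/Xwin-LM-13B-V0.1-AWQ")
template = card.data.to_dict()["prompt_template"]
print(template.format(prompt="Tell me about AI"))
```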
@@ -63,15 +56,10 @@ It is also now supported by continuous batching server [vLLM](https://github.com
<!-- repositories-available end -->

<!-- prompt-template start -->
- ## Prompt template: Alpaca
+ ## Prompt template: Vicuna

```
- Below is an instruction that describes a task. Write a response that appropriately completes the request.
-
- ### Instruction:
- {prompt}
-
- ### Response:
+ A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:

```

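For reference, a minimal Python sketch (not from the README; the helper name is illustrative) of filling this single-turn template:

```python
# Minimal sketch: build a single-turn Vicuna-style prompt for this model.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(user_message: str) -> str:
    # Generation is expected to start right after "ASSISTANT:".
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

print(build_prompt("Tell me about AI"))
```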
@@ -99,7 +87,7 @@ Documentation on installing and using vLLM [can be found here](https://vllm.read
- When using vLLM as a server, pass the `--quantization awq` parameter, for example:

```shell
- python3 python -m vllm.entrypoints.api_server --model TheBloke/Xwin-LM-13B-V0.1-AWQ --quantization awq
+ python3 -m vllm.entrypoints.api_server --model TheBloke/Xwin-LM-13B-V0.1-AWQ --quantization awq --dtype half
```

When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
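That Python example continues in the next hunk. Once the server itself is running, it can also be smoke-tested over plain HTTP; a minimal sketch, assuming the default port 8000 and the `/generate` route of the legacy `vllm.entrypoints.api_server` (payload fields are forwarded to `SamplingParams` and may differ across vLLM versions):

```python
# Hedged sketch: POST a generation request to the running api_server.
# The prompt is shortened; use the full Vicuna template above in practice.
import json
import urllib.request

payload = {"prompt": "USER: Tell me about AI ASSISTANT:", "max_tokens": 256, "temperature": 0.7}
req = urllib.request.Request(
    "http://localhost:8000/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))
```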
@@ -115,7 +103,7 @@ prompts = [
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

- llm = LLM(model="TheBloke/Xwin-LM-13B-V0.1-AWQ", quantization="awq")
+ llm = LLM(model="TheBloke/Xwin-LM-13B-V0.1-AWQ", quantization="awq", dtype="half")

outputs = llm.generate(prompts, sampling_params)

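Since the hunk above shows only fragments of that example, here is a minimal end-to-end sketch of the same AWQ setup (the prompt is illustrative and shortened from the full Vicuna template):

```python
# Hedged sketch: the fragments above assembled into a complete, minimal run.
from vllm import LLM, SamplingParams

prompts = ["USER: Tell me about AI ASSISTANT:"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# dtype="half" matches the --dtype half flag used for the server above.
llm = LLM(model="TheBloke/Xwin-LM-13B-V0.1-AWQ", quantization="awq", dtype="half")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each RequestOutput holds the prompt and its generated completions.
    print(output.outputs[0].text)
```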
@@ -161,12 +149,7 @@ model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)

prompt = "Tell me about AI"
- prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
-
- ### Instruction:
- {prompt}
-
- ### Response:
+ prompt_template=f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:

'''

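The hunk ends before generation. A hedged sketch of the usual continuation, using the `model` and `tokenizer` loaded above (sampling values are illustrative, not taken from this diff):

```python
# Hedged sketch: tokenize the filled template and generate with the AutoAWQ
# model loaded above (requires a CUDA device).
tokens = tokenizer(prompt_template, return_tensors="pt").input_ids.cuda()

generation_output = model.generate(
    tokens,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    max_new_tokens=512,
)
print(tokenizer.decode(generation_output[0], skip_special_tokens=True))
```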
@@ -259,6 +242,9 @@ Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment
</h3>

<p align="center">
+ <a href="https://github.com/Xwin-LM/Xwin-LM">
+ <img src="https://img.shields.io/badge/GitHub-yellow.svg?style=social&logo=github">
+ </a>
<a href="https://huggingface.co/Xwin-LM">
<img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue">
</a>
@@ -268,13 +254,14 @@ Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment

**Step up your LLM alignment with Xwin-LM!**

- Xwin-LM aims to develop and open-source alignment technologies for large language models, including supervised fine-tuning (SFT), reward models, reject sampling, reinforcement learning, etc. Our first release, built-upon on the Llama2 base models, ranked **TOP-1** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Notably, it's **the first to surpass GPT-4** on this benchmark. The project will be continuously updated.
+ Xwin-LM aims to develop and open-source alignment technologies for large language models, including supervised fine-tuning (SFT), reward models (RM), rejection sampling, reinforcement learning from human feedback (RLHF), etc. Our first release, built upon the Llama2 base models, ranked **TOP-1** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Notably, it's **the first to surpass GPT-4** on this benchmark. The project will be continuously updated.

## News

- - :boom: [Sep, 2023] We released [Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1), which has achieved a win-rate against Davinci-003 of **95.57%** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmark, ranking as **TOP-1** on AlpacaEval. **It was the FIRST model surpassing GPT-4** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Also note its winrate v.s. GPT-4 is **60.61**.
- - :boom: [Sep, 2023] We released [Xwin-LM-13B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1), which has achieved **91.76%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking as **top-1** among all 13B models.
- - :boom: [Sep, 2023] We released [Xwin-LM-7B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1), which has achieved **87.82%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking as **top-1** among all 7B models.
+ - 💥 [Sep, 2023] We released [Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1), which has achieved a win-rate of **95.57%** against Davinci-003 on the [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmark, ranking **TOP-1** on AlpacaEval. **It was the FIRST model to surpass GPT-4** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Also note that its win-rate vs. GPT-4 is **60.61%**.
+ - 🔍 [Sep, 2023] RLHF plays a crucial role in the strong performance of the Xwin-LM-V0.1 release!
+ - 💥 [Sep, 2023] We released [Xwin-LM-13B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1), which has achieved a **91.76%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking top-1 among all 13B models.
+ - 💥 [Sep, 2023] We released [Xwin-LM-7B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1), which has achieved an **87.82%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking top-1 among all 7B models.


## Model Card
@@ -302,4 +289,101 @@ The table below displays the performance of Xwin-LM on [AlpacaEval](https://tats
| **Xwin-LM-7B-V0.1** | **87.35** | **76.40** | **47.57** |
| Llama-2-13B-Chat | 81.09 | 64.22 | 30.92 |

- ##
+ ### Case study on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/)
+
+ | Prompts | GPT-4 response | Xwin-LM response |
+ |----------------|------------|--------|
+ |I've put a plate on top of an apple, and take the plate from kitchen to dinning room. Where is the apple now|The apple is now in the dining room, as it was underneath the plate when you moved the plate from the kitchen to the dining room.|The apple is still in the kitchen, as you have only moved the plate. The apple is under the plate, so if you want to bring the apple to the dining room, you should either move the apple directly or put the apple on the plate and then move the plate to the dining room.|
+ | Answer the following question.<br> How do you say "good evening" in French. | In French, "good evening" is said as "bonsoir." | In French, "good evening" is translated to "bonsoir." This greeting is used in the evening and nighttime hours, typically after 6 p.m. It is a combination of "bon," which means "good," and "soir," which means "evening." To pronounce it, say "bone-swahr." |
+
+
+ ### Xwin-LM performance on NLP foundation tasks
+
+ The following table provides a comparison of Xwin-LM models with other LLMs on NLP foundation tasks from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
+
+ | Model | MMLU 5-shot | ARC 25-shot | TruthfulQA 0-shot | HellaSwag 10-shot | Average |
+ |------------------|-------------|-------------|-------------------|-------------------|------------|
+ | Text-davinci-003 | 56.9 | **85.2** | 59.3 | 82.2 | 70.9 |
+ | Vicuna-13b 1.1 | 51.3 | 53.0 | 51.8 | 80.1 | 59.1 |
+ | Guanaco 30B | 57.6 | 63.7 | 50.7 | 85.1 | 64.3 |
+ | WizardLM-7B 1.0 | 42.7 | 51.6 | 44.7 | 77.7 | 54.2 |
+ | WizardLM-13B 1.0 | 52.3 | 57.2 | 50.5 | 81.0 | 60.2 |
+ | WizardLM-30B 1.0 | 58.8 | 62.5 | 52.4 | 83.3 | 64.2 |
+ | Llama-2-7B-Chat | 48.3 | 52.9 | 45.6 | 78.6 | 56.4 |
+ | Llama-2-13B-Chat | 54.6 | 59.0 | 44.1 | 81.9 | 59.9 |
+ | Llama-2-70B-Chat | 63.9 | 64.6 | 52.8 | 85.9 | 66.8 |
+ | **Xwin-LM-7B-V0.1** | 49.7 | 56.2 | 48.1 | 79.5 | 58.4 |
+ | **Xwin-LM-13B-V0.1** | 56.6 | 62.4 | 45.5 | 83.0 | 61.9 |
+ | **Xwin-LM-70B-V0.1** | **69.6** | 70.5 | **60.1** | **87.1** | **71.8** |
+
+
+ ## Inference
+
+ ### Conversation templates
+ To obtain the desired results, please strictly follow the conversation template when using our model for inference. Our model adopts the prompt format established by [Vicuna](https://github.com/lm-sys/FastChat) and supports **multi-turn** conversations.
+ ```
+ A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi! ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am Xwin-LM.</s>......
+ ```
+
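A minimal sketch (not part of the upstream README; the helper is illustrative) of assembling this multi-turn format in Python, closing each finished assistant turn with `</s>`:

```python
# Minimal sketch: build a multi-turn Vicuna-style prompt as specified above.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_multi_turn(history, next_user_msg):
    # history: list of (user_msg, assistant_msg) pairs for completed turns.
    prompt = SYSTEM + " "
    for user_msg, assistant_msg in history:
        prompt += f"USER: {user_msg} ASSISTANT: {assistant_msg}</s>"
    prompt += f"USER: {next_user_msg} ASSISTANT:"
    return prompt

print(build_multi_turn([("Hi!", "Hello.")], "Who are you?"))
```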
+ ### HuggingFace Example
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ model = AutoModelForCausalLM.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1")
+ tokenizer = AutoTokenizer.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1")
+ prompt = (
+     "A chat between a curious user and an artificial intelligence assistant. "
+     "The assistant gives helpful, detailed, and polite answers to the user's questions. "
+     "USER: Hello, can you help me? "
+     "ASSISTANT:"
+ )
+ inputs = tokenizer(prompt, return_tensors="pt")
+ # do_sample=True is needed for temperature to take effect
+ samples = model.generate(**inputs, max_new_tokens=4096, temperature=0.7, do_sample=True)
+ output = tokenizer.decode(samples[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
+ print(output)
+ # Of course! I'm here to help. Please feel free to ask your question or describe the issue you're having, and I'll do my best to assist you.
+ ```
+
+
+ ### vLLM Example
+ Because Xwin-LM is based on Llama2, it also supports rapid inference with [vLLM](https://github.com/vllm-project/vllm). Please refer to [vLLM](https://github.com/vllm-project/vllm) for detailed installation instructions.
+ ```python
+ from vllm import LLM, SamplingParams
+
+ prompt = (
+     "A chat between a curious user and an artificial intelligence assistant. "
+     "The assistant gives helpful, detailed, and polite answers to the user's questions. "
+     "USER: Hello, can you help me? "
+     "ASSISTANT:"
+ )
+ sampling_params = SamplingParams(temperature=0.7, max_tokens=4096)
+ llm = LLM(model="Xwin-LM/Xwin-LM-7B-V0.1")
+ outputs = llm.generate([prompt], sampling_params)
+
+ for output in outputs:
+     generated_text = output.outputs[0].text
+     print(generated_text)
+ ```
+
+ ## TODO
+
+ - [ ] Release the source code
+ - [ ] Release more capabilities, such as math and reasoning.
+
+ ## Citation
+ Please consider citing our work if you use the data or code in this repo.
+ ```
+ @software{xwin-lm,
+   title = {Xwin-LM},
+   author = {Xwin-LM Team},
+   url = {https://github.com/Xwin-LM/Xwin-LM},
+   version = {pre-release},
+   year = {2023},
+   month = {9},
+ }
+ ```
+
+ ## Acknowledgements
+
+ Thanks to [Llama 2](https://ai.meta.com/llama/), [FastChat](https://github.com/lm-sys/FastChat), [AlpacaFarm](https://github.com/tatsu-lab/alpaca_farm), and [vLLM](https://github.com/vllm-project/vllm).