LoneStriker committed
Commit 8f817c6 · 1 Parent(s): ec1ef59

GGUF quants for LoneStriker/Yi-6B-200K-Airo-Claude-Puffin fine-tune

README.md ADDED
@@ -0,0 +1,94 @@
+ ---
+ license: other
+ license_name: yi-license
+ license_link: LICENSE
+ ---
+ <div align="center">
+
+ <img src="./Yi.svg" width="200px">
+
+ </div>
+
+ ## Introduction
+
+ The **Yi** series models are large language models trained from scratch by
+ developers at [01.AI](https://01.ai/). The first public release contains two
+ bilingual (English/Chinese) base models with parameter sizes of 6B
+ ([`Yi-6B`](https://huggingface.co/01-ai/Yi-6B)) and 34B
+ ([`Yi-34B`](https://huggingface.co/01-ai/Yi-34B)). Both are trained with a 4K
+ sequence length, which can be extended to 32K at inference time.
+ [`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K) and
+ [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) are base models with
+ a 200K context length.
+
+ ## News
+
+ - 🎯 **2023/11/06**: Released the base models [`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K)
+ and [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K), with 200K context length.
+ - 🎯 **2023/11/02**: Released the base models [`Yi-6B`](https://huggingface.co/01-ai/Yi-6B) and
+ [`Yi-34B`](https://huggingface.co/01-ai/Yi-34B).
+
+
+ ## Model Performance
+
+ | Model | MMLU | CMMLU | C-Eval | GAOKAO | BBH | Common-sense Reasoning | Reading Comprehension | Math & Code |
+ | :------------ | :------: | :------: | :------: | :------: | :------: | :--------------------: | :-------------------: | :---------: |
+ | | 5-shot | 5-shot | 5-shot | 0-shot | 3-shot@1 | - | - | - |
+ | LLaMA2-34B | 62.6 | - | - | - | 44.1 | 69.9 | 68.0 | 26.0 |
+ | LLaMA2-70B | 68.9 | 53.3 | - | 49.8 | 51.2 | 71.9 | 69.4 | 36.8 |
+ | Baichuan2-13B | 59.2 | 62.0 | 58.1 | 54.3 | 48.8 | 64.3 | 62.4 | 23.0 |
+ | Qwen-14B | 66.3 | 71.0 | 72.1 | 62.5 | 53.4 | 73.3 | 72.5 | **39.8** |
+ | Skywork-13B | 62.1 | 61.8 | 60.6 | 68.1 | 41.7 | 72.4 | 61.4 | 24.9 |
+ | InternLM-20B | 62.1 | 59.0 | 58.8 | 45.5 | 52.5 | 78.3 | - | 30.4 |
+ | Aquila-34B | 67.8 | 71.4 | 63.1 | - | - | - | - | - |
+ | Falcon-180B | 70.4 | 58.0 | 57.8 | 59.0 | 54.0 | 77.3 | 68.8 | 34.0 |
+ | Yi-6B | 63.2 | 75.5 | 72.0 | 72.2 | 42.8 | 72.3 | 68.7 | 19.8 |
+ | Yi-6B-200K | 64.0 | 75.3 | 73.5 | 73.9 | 42.0 | 72.0 | 69.1 | 19.0 |
+ | **Yi-34B** | **76.3** | **83.7** | 81.4 | 82.8 | **54.3** | **80.1** | 76.4 | 37.1 |
+ | Yi-34B-200K | 76.1 | 83.6 | **81.9** | **83.4** | 52.7 | 79.7 | **76.6** | 36.3 |
+
49
+ While benchmarking open-source models, we have observed a disparity between the
50
+ results generated by our pipeline and those reported in public sources (e.g.
51
+ OpenCompass). Upon conducting a more in-depth investigation of this difference,
52
+ we have discovered that various models may employ different prompts,
53
+ post-processing strategies, and sampling techniques, potentially resulting in
54
+ significant variations in the outcomes. Our prompt and post-processing strategy
55
+ remains consistent with the original benchmark, and greedy decoding is employed
56
+ during evaluation without any post-processing for the generated content. For
57
+ scores that were not reported by the original authors (including scores reported
58
+ with different settings), we try to get results with our pipeline.
59
+
60
+ To evaluate the model's capability extensively, we adopted the methodology
61
+ outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande,
62
+ ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ
63
+ were incorporated to evaluate reading comprehension. CSQA was exclusively tested
64
+ using a 7-shot setup, while all other tests were conducted with a 0-shot
65
+ configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1),
66
+ HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due
67
+ to technical constraints, we did not test Falcon-180 on QuAC and OBQA; the score
68
+ is derived by averaging the scores on the remaining tasks. Since the scores for
69
+ these two tasks are generally lower than the average, we believe that
70
+ Falcon-180B's performance was not underestimated.
71
+
72
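The evaluation note above mentions greedy decoding. As a minimal, self-contained sketch of what that means (`toy_logits` is a hypothetical stand-in for a real model's forward pass), greedy decoding simply takes the argmax token at each step, with no sampling:

```python
# Minimal sketch of greedy decoding over a toy next-token distribution.
# `toy_logits` is hypothetical; a real evaluation would call the model.

def toy_logits(tokens):
    # Deterministic toy "model": next token is (last token + 1) mod 5.
    logits = [0.0] * 5
    logits[(tokens[-1] + 1) % 5] = 1.0
    return logits

def greedy_decode(prompt, steps):
    tokens = list(prompt)
    for _ in range(steps):
        logits = toy_logits(tokens)
        # Greedy: always pick the highest-scoring token.
        tokens.append(max(range(len(logits)), key=logits.__getitem__))
    return tokens

print(greedy_decode([0], 4))  # [0, 1, 2, 3, 4]
```

Because greedy decoding is deterministic, repeated evaluation runs produce identical outputs, which is why it is a common choice for reproducible benchmarking.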
+ ## Usage
+
+ Please visit our [GitHub repository](https://github.com/01-ai/Yi) for general
+ guidance on how to use this model.
+
+ ## Disclaimer
+
+ Although we use data compliance checking algorithms during the training
+ process to ensure the compliance of the trained model to the best of our
+ ability, due to the complexity of the data and the diversity of language
+ model usage scenarios, we cannot guarantee that the model will generate
+ correct and reasonable output in all scenarios. Please be aware that there is
+ still a risk of the model producing problematic outputs. We will not be
+ responsible for any risks or issues resulting from misuse, misguidance,
+ illegal usage, or related misinformation, nor for any associated data
+ security concerns.
+
+ ## License
+
+ The Yi series models are fully open for academic research and free for
+ commercial use with permission via application. All usage must adhere to the
+ [Model License Agreement 2.0](https://huggingface.co/01-ai/Yi-6B-200K/blob/main/LICENSE).
+ To apply for the official commercial license, please contact us
Yi-6B-200K-Airo-Claude-Puffin-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b3600005caf6df4d76f14a7a8abe5def3634bd18171719ac273e5ba90e21dd67
+ size 2992846080
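The three lines above are a Git LFS pointer, not the model itself; the multi-gigabyte .gguf blob is stored out-of-band and referenced by its sha256 oid and byte size. A minimal sketch of parsing such a pointer (format per the git-lfs v1 spec shown in the `version` line):

```python
# Sketch: parse a Git LFS pointer file like the ones listed in this commit.

def parse_lfs_pointer(text):
    # Each line is "key value"; oid is "algorithm:hex-digest".
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:b3600005caf6df4d76f14a7a8abe5def3634bd18171719ac273e5ba90e21dd67
size 2992846080
"""
info = parse_lfs_pointer(pointer)
print(info["algo"], info["size"])  # sha256 2992846080
```

The `size` field is the exact byte count of the real file, so a quick length check on a downloaded quant is a cheap first sanity test before hashing.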
Yi-6B-200K-Airo-Claude-Puffin-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:88ca8d324be85633aa77e2c2955c0de638e752633b3db435a226bd1b85ccb4f9
+ size 3673979200
Yi-6B-200K-Airo-Claude-Puffin-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fcf0ce139623a88f3cba39a1263e2e18e33ebbf9b33f68d3417a3282531b5dcc
+ size 3502930240
Yi-6B-200K-Airo-Claude-Puffin-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b3cb2cc36901000f18362080664fcc67e29df9d44402849a1b4b8565d9b7447b
+ size 4974297472
Yi-6B-200K-Airo-Claude-Puffin-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:973105d8e73500bdf88632ceeb2a30c23a41a0eaa4e07d4958937427814b34cb
+ size 6442144000
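After downloading any of the quants above, its integrity can be checked against the sha256 oid in its LFS pointer. A minimal sketch using Python's standard `hashlib`, demonstrated on an in-memory buffer (hashing the real multi-GB file is the same loop over file chunks read from disk):

```python
import hashlib

# Sketch: compute a sha256 digest incrementally, as you would for a large
# .gguf file, then compare it to the oid from the LFS pointer.

def sha256_hex(chunks):
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)  # feed data incrementally; no need to load it all
    return h.hexdigest()

# Example on known data (not an actual model file):
digest = sha256_hex([b"hello ", b"world"])
print(digest)

# For a real file, stream it in fixed-size chunks, e.g.:
#   with open("model.gguf", "rb") as f:
#       digest = sha256_hex(iter(lambda: f.read(1 << 20), b""))
```

A mismatch between the computed digest and the pointer's oid indicates a corrupted or incomplete download.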