morriszms committed
Commit 3bc6061 · verified · 1 Parent(s): 7b08572

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ YuLan-Mini-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ YuLan-Mini-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ YuLan-Mini-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ YuLan-Mini-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ YuLan-Mini-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ YuLan-Mini-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ YuLan-Mini-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ YuLan-Mini-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ YuLan-Mini-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ YuLan-Mini-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ YuLan-Mini-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ YuLan-Mini-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,170 @@
+ ---
+ license: mit
+ library_name: transformers
+ pipeline_tag: text-generation
+ datasets:
+ - yulan-team/YuLan-Mini-Datasets
+ - HuggingFaceFW/fineweb-edu
+ - bigcode/the-stack-v2
+ - mlfoundations/dclm-baseline-1.0
+ - math-ai/AutoMathText
+ - gair-prox/open-web-math-pro
+ - RUC-AIBOX/long_form_thought_data_5k
+ - internlm/Lean-Workbook
+ - internlm/Lean-Github
+ - deepseek-ai/DeepSeek-Prover-V1
+ - ScalableMath/Lean-STaR-base
+ - ScalableMath/Lean-STaR-plus
+ - ScalableMath/Lean-CoT-base
+ - ScalableMath/Lean-CoT-plus
+ - opencsg/chinese-fineweb-edu
+ - liwu/MNBVC
+ - vikp/textbook_quality_programming
+ - HuggingFaceTB/smollm-corpus
+ - OpenCoder-LLM/opc-annealing-corpus
+ - OpenCoder-LLM/opc-sft-stage1
+ - OpenCoder-LLM/opc-sft-stage2
+ - XinyaoHu/AMPS_mathematica
+ - deepmind/math_dataset
+ - mrfakename/basic-math-10m
+ - microsoft/orca-math-word-problems-200k
+ - AI-MO/NuminaMath-CoT
+ - HuggingFaceTB/cosmopedia
+ - MU-NLPC/Calc-ape210k
+ - manu/project_gutenberg
+ - storytracer/LoC-PD-Books
+ - allenai/dolma
+ language:
+ - en
+ - zh
+ tags:
+ - code
+ - math
+ - TensorBlock
+ - GGUF
+ arxiv: 2412.17743
+ base_model: yulan-team/YuLan-Mini
+ model-index:
+ - name: YuLan-Mini
+   results:
+   - task:
+       type: text-generation
+     dataset:
+       name: HumanEval
+       type: openai_humaneval
+     metrics:
+     - type: pass@1
+       value: 0.64
+       name: pass@1
+       verified: false
+   - task:
+       type: text-generation
+     dataset:
+       name: MBPP
+       type: mbpp
+     metrics:
+     - type: pass@1
+       value: 0.659
+       name: pass@1
+       verified: false
+   - task:
+       type: text-generation
+     dataset:
+       name: MATH-500
+       type: math-500
+     metrics:
+     - type: maj@1
+       value: 0.378
+       name: maj@1
+       verified: false
+   - task:
+       type: text-generation
+     dataset:
+       name: GSM8K
+       type: gsm8k
+     metrics:
+     - type: maj@1
+       value: 0.684
+       name: maj@1
+       verified: false
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## yulan-team/YuLan-Mini - GGUF
+
+ This repo contains GGUF format model files for [yulan-team/YuLan-Mini](https://huggingface.co/yulan-team/YuLan-Mini).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4823](https://github.com/ggml-org/llama.cpp/commit/5bbe6a9fe9a8796a9389c85accec89dbc4d91e39).
+
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Run them on the TensorBlock client using your local machine ↗
+ </a>
+ </div>
+ ## Prompt template
116
+
117
+ ```
118
+
119
+ <s>
120
+
121
+ <|start_header_id|>system<|end_header_id|>
122
+
123
+ {system_prompt}<|eot_id|>
124
+
125
+ <|start_header_id|>user<|end_header_id|>
126
+
127
+ {prompt}<|eot_id|>
128
+
129
+ <|start_header_id|>assistant<|end_header_id|>
130
+ ```
131
+
132
+ ## Model file specification
133
+
134
+ | Filename | Quant type | File Size | Description |
135
+ | -------- | ---------- | --------- | ----------- |
136
+ | [YuLan-Mini-Q2_K.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q2_K.gguf) | Q2_K | 1.468 GB | smallest, significant quality loss - not recommended for most purposes |
137
+ | [YuLan-Mini-Q3_K_S.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q3_K_S.gguf) | Q3_K_S | 1.463 GB | very small, high quality loss |
138
+ | [YuLan-Mini-Q3_K_M.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q3_K_M.gguf) | Q3_K_M | 1.560 GB | very small, high quality loss |
139
+ | [YuLan-Mini-Q3_K_L.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q3_K_L.gguf) | Q3_K_L | 1.606 GB | small, substantial quality loss |
140
+ | [YuLan-Mini-Q4_0.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q4_0.gguf) | Q4_0 | 1.463 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
141
+ | [YuLan-Mini-Q4_K_S.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q4_K_S.gguf) | Q4_K_S | 1.746 GB | small, greater quality loss |
142
+ | [YuLan-Mini-Q4_K_M.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q4_K_M.gguf) | Q4_K_M | 1.846 GB | medium, balanced quality - recommended |
143
+ | [YuLan-Mini-Q5_0.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q5_0.gguf) | Q5_0 | 1.742 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
144
+ | [YuLan-Mini-Q5_K_S.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q5_K_S.gguf) | Q5_K_S | 1.882 GB | large, low quality loss - recommended |
145
+ | [YuLan-Mini-Q5_K_M.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q5_K_M.gguf) | Q5_K_M | 1.969 GB | large, very low quality loss - recommended |
146
+ | [YuLan-Mini-Q6_K.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q6_K.gguf) | Q6_K | 2.580 GB | very large, extremely low quality loss |
147
+ | [YuLan-Mini-Q8_0.gguf](https://huggingface.co/tensorblock/YuLan-Mini-GGUF/blob/main/YuLan-Mini-Q8_0.gguf) | Q8_0 | 2.580 GB | very large, extremely low quality loss - not recommended |
148
+
149
+
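The table above can also be used programmatically when choosing a quant for a given disk or RAM budget. The helper below is hypothetical (not part of this repo); the sizes are copied from the table.

```python
# File sizes in GB, copied from the "Model file specification" table above.
QUANTS = {
    "Q2_K": 1.468, "Q3_K_S": 1.463, "Q3_K_M": 1.560, "Q3_K_L": 1.606,
    "Q4_0": 1.463, "Q4_K_S": 1.746, "Q4_K_M": 1.846, "Q5_0": 1.742,
    "Q5_K_S": 1.882, "Q5_K_M": 1.969, "Q6_K": 2.580, "Q8_0": 2.580,
}

def largest_quant_under(budget_gb: float) -> str:
    """Pick the quant with the largest file size that fits the budget.

    Assumes the budget is at least the size of the smallest file.
    """
    fitting = {q: s for q, s in QUANTS.items() if s <= budget_gb}
    return max(fitting, key=fitting.get)
```

Larger files generally mean lower quality loss, so taking the largest file that fits is a reasonable first heuristic before consulting the per-quant descriptions.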
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/YuLan-Mini-GGUF --include "YuLan-Mini-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`):
+
+ ```shell
+ huggingface-cli download tensorblock/YuLan-Mini-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
YuLan-Mini-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eac2d20df9563a2254b2e3b70ec2ba7514503653843a931caee527c70c491898
+ size 1467847520
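The three-line blobs added for each `.gguf` file are Git LFS pointer files, not the weights themselves: each records the LFS spec version, the sha256 object id of the real blob, and its size in bytes. A minimal parser, assuming exactly the format shown (`parse_lfs_pointer` is a hypothetical helper):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into {'version', 'oid', 'size'}."""
    # Each line is "<key> <value>"; split only on the first space.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

# Pointer content copied from the Q2_K file above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:eac2d20df9563a2254b2e3b70ec2ba7514503653843a931caee527c70c491898
size 1467847520
"""
```

Note that the parsed size (1,467,847,520 bytes) matches the 1.468 GB listed for Q2_K in the specification table.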
YuLan-Mini-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:478114f8afb91b6454b7a21bd64d978644ef48e60951f33298a00bca2137b64b
+ size 1605903200
YuLan-Mini-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8170e40fa57faa82dbd23cccd0afd45d91eb2b573b7e13c51e1116371b997e47
+ size 1559984480
YuLan-Mini-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0e28433a594ecbbb74bd39234d19b7948876c07ff0061ddb16bcc85c4419254e
+ size 1462686560
YuLan-Mini-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:615a175886d5597980ff080729748b4c3f667129134cba61b5ce250b461844c7
+ size 1462686560
YuLan-Mini-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1c4e4f722b92749b7b4f336b412acae5c26d0f121025c0d67da0857ea0419500
+ size 1846423520
YuLan-Mini-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c6dd3f761e287da2a529989e514e878077aeb16439e1bf35ba7eeccf577944a7
+ size 1746130400
YuLan-Mini-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:37133857c840af80338a9824082985ff4da958fbe7876b2ffb96ff147ce99f52
+ size 1741914080
YuLan-Mini-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea583299985bf01ce71a0256729996be57692c48ca4b4631c6aa35957dcf60bb
+ size 1968619040
YuLan-Mini-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:13035e1ea064cb03e6c32a1dfbd4c2238e26c2c09838c367334be080b989fbbd
+ size 1881527840
YuLan-Mini-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0446fcef91063fbb3d4dcce9a336fa72c0845dfb5c8dac6c98d9000d5715b0b6
+ size 2579596640
YuLan-Mini-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a8f595e258272b84c867d0e1c8e6f6d7759c0e73fbf98b32ba3a8fb9c6ece9db
+ size 2579596640