morriszms committed · Commit e4a3fa1 · verified · 1 Parent(s): 6f74464

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-248M-Chat-v2-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-248M-Chat-v2-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-248M-Chat-v2-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-248M-Chat-v2-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-248M-Chat-v2-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-248M-Chat-v2-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-248M-Chat-v2-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-248M-Chat-v2-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-248M-Chat-v2-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-248M-Chat-v2-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-248M-Chat-v2-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyMistral-248M-Chat-v2-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,170 @@
+ ---
+ language:
+ - en
+ license: apache-2.0
+ base_model: Felladrin/TinyMistral-248M-Chat-v2
+ datasets:
+ - HuggingFaceH4/ultrachat_200k
+ - Felladrin/ChatML-ultrachat_200k
+ - Open-Orca/OpenOrca
+ - Felladrin/ChatML-OpenOrca
+ - hkust-nlp/deita-10k-v0
+ - Felladrin/ChatML-deita-10k-v0
+ - LDJnr/Capybara
+ - Felladrin/ChatML-Capybara
+ - databricks/databricks-dolly-15k
+ - Felladrin/ChatML-databricks-dolly-15k
+ - euclaise/reddit-instruct-curated
+ - Felladrin/ChatML-reddit-instruct-curated
+ - CohereForAI/aya_dataset
+ - Felladrin/ChatML-aya_dataset
+ - HuggingFaceH4/ultrafeedback_binarized
+ pipeline_tag: text-generation
+ widget:
+ - messages:
+   - role: system
+     content: You are a highly knowledgeable and friendly assistant. Your goal is to understand and respond to user inquiries with clarity. Your interactions are always respectful, helpful, and focused on delivering the most accurate information to the user.
+   - role: user
+     content: Hey! Got a question for you!
+   - role: assistant
+     content: Sure! What's it?
+   - role: user
+     content: What are some potential applications for quantum computing?
+ - messages:
+   - role: user
+     content: Heya!
+   - role: assistant
+     content: Hi! How may I help you?
+   - role: user
+     content: I'm interested in developing a career in software engineering. What would you recommend me to do?
+ - messages:
+   - role: user
+     content: Morning!
+   - role: assistant
+     content: Good morning! How can I help you today?
+   - role: user
+     content: Could you give me some tips for becoming a healthier person?
+ - messages:
+   - role: system
+     content: You are a very creative assistant. User will give you a task, which you should complete with all your knowledge.
+   - role: user
+     content: Hello! Can you please elaborate a background story of an RPG game about wizards and dragons in a sci-fi world?
+ tags:
+ - TensorBlock
+ - GGUF
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## Felladrin/TinyMistral-248M-Chat-v2 - GGUF
+
+ This repo contains GGUF format model files for [Felladrin/TinyMistral-248M-Chat-v2](https://huggingface.co/Felladrin/TinyMistral-248M-Chat-v2).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
+
+ ## Our projects
+ <table border="1" cellspacing="0" cellpadding="10">
+ <tr>
+ <th style="font-size: 25px;">Awesome MCP Servers</th>
+ <th style="font-size: 25px;">TensorBlock Studio</th>
+ </tr>
+ <tr>
+ <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
+ <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
+ </tr>
+ <tr>
+ <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
+ <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
+ </tr>
+ <tr>
+ <th>
+ <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ <th>
+ <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ </tr>
+ </table>
+
+ ## Prompt template
+
+ ```
+ <|im_start|>system
+ {system_prompt}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+ ```
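The template above is plain ChatML, so it can be rendered with ordinary string formatting before being passed to an inference runtime. A minimal sketch, assuming nothing beyond the template itself (`format_prompt` is a hypothetical helper name, not part of this repo or of llama.cpp):

```python
# Render the ChatML-style prompt template shown above.
# format_prompt is a hypothetical helper, not an official API.
def format_prompt(system_prompt: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

text = format_prompt(
    "You are a highly knowledgeable and friendly assistant.",
    "What are some potential applications for quantum computing?",
)
print(text)
```

The generated string ends right after the `<|im_start|>assistant` tag, which is where the model is expected to continue.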
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [TinyMistral-248M-Chat-v2-Q2_K.gguf](https://huggingface.co/tensorblock/Felladrin_TinyMistral-248M-Chat-v2-GGUF/blob/main/TinyMistral-248M-Chat-v2-Q2_K.gguf) | Q2_K | 0.105 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [TinyMistral-248M-Chat-v2-Q3_K_S.gguf](https://huggingface.co/tensorblock/Felladrin_TinyMistral-248M-Chat-v2-GGUF/blob/main/TinyMistral-248M-Chat-v2-Q3_K_S.gguf) | Q3_K_S | 0.120 GB | very small, high quality loss |
+ | [TinyMistral-248M-Chat-v2-Q3_K_M.gguf](https://huggingface.co/tensorblock/Felladrin_TinyMistral-248M-Chat-v2-GGUF/blob/main/TinyMistral-248M-Chat-v2-Q3_K_M.gguf) | Q3_K_M | 0.129 GB | very small, high quality loss |
+ | [TinyMistral-248M-Chat-v2-Q3_K_L.gguf](https://huggingface.co/tensorblock/Felladrin_TinyMistral-248M-Chat-v2-GGUF/blob/main/TinyMistral-248M-Chat-v2-Q3_K_L.gguf) | Q3_K_L | 0.137 GB | small, substantial quality loss |
+ | [TinyMistral-248M-Chat-v2-Q4_0.gguf](https://huggingface.co/tensorblock/Felladrin_TinyMistral-248M-Chat-v2-GGUF/blob/main/TinyMistral-248M-Chat-v2-Q4_0.gguf) | Q4_0 | 0.149 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [TinyMistral-248M-Chat-v2-Q4_K_S.gguf](https://huggingface.co/tensorblock/Felladrin_TinyMistral-248M-Chat-v2-GGUF/blob/main/TinyMistral-248M-Chat-v2-Q4_K_S.gguf) | Q4_K_S | 0.149 GB | small, greater quality loss |
+ | [TinyMistral-248M-Chat-v2-Q4_K_M.gguf](https://huggingface.co/tensorblock/Felladrin_TinyMistral-248M-Chat-v2-GGUF/blob/main/TinyMistral-248M-Chat-v2-Q4_K_M.gguf) | Q4_K_M | 0.156 GB | medium, balanced quality - recommended |
+ | [TinyMistral-248M-Chat-v2-Q5_0.gguf](https://huggingface.co/tensorblock/Felladrin_TinyMistral-248M-Chat-v2-GGUF/blob/main/TinyMistral-248M-Chat-v2-Q5_0.gguf) | Q5_0 | 0.176 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [TinyMistral-248M-Chat-v2-Q5_K_S.gguf](https://huggingface.co/tensorblock/Felladrin_TinyMistral-248M-Chat-v2-GGUF/blob/main/TinyMistral-248M-Chat-v2-Q5_K_S.gguf) | Q5_K_S | 0.176 GB | large, low quality loss - recommended |
+ | [TinyMistral-248M-Chat-v2-Q5_K_M.gguf](https://huggingface.co/tensorblock/Felladrin_TinyMistral-248M-Chat-v2-GGUF/blob/main/TinyMistral-248M-Chat-v2-Q5_K_M.gguf) | Q5_K_M | 0.179 GB | large, very low quality loss - recommended |
+ | [TinyMistral-248M-Chat-v2-Q6_K.gguf](https://huggingface.co/tensorblock/Felladrin_TinyMistral-248M-Chat-v2-GGUF/blob/main/TinyMistral-248M-Chat-v2-Q6_K.gguf) | Q6_K | 0.204 GB | very large, extremely low quality loss |
+ | [TinyMistral-248M-Chat-v2-Q8_0.gguf](https://huggingface.co/tensorblock/Felladrin_TinyMistral-248M-Chat-v2-GGUF/blob/main/TinyMistral-248M-Chat-v2-Q8_0.gguf) | Q8_0 | 0.264 GB | very large, extremely low quality loss - not recommended |
+
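A common way to read a table like this is to take the largest quant that still fits your memory budget. A throwaway sketch of that heuristic, using only the sizes listed above (`pick_quant` and `QUANT_SIZES_GB` are our own illustrative names, not part of any tool):

```python
# File sizes in GB, copied from the model file specification table above.
QUANT_SIZES_GB = {
    "Q2_K": 0.105, "Q3_K_S": 0.120, "Q3_K_M": 0.129, "Q3_K_L": 0.137,
    "Q4_0": 0.149, "Q4_K_S": 0.149, "Q4_K_M": 0.156, "Q5_0": 0.176,
    "Q5_K_S": 0.176, "Q5_K_M": 0.179, "Q6_K": 0.204, "Q8_0": 0.264,
}

def pick_quant(budget_gb: float):
    """Return the largest quant type whose file fits within budget_gb,
    or None if nothing fits. Hypothetical helper, for illustration only."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(0.16))  # -> Q4_K_M
```

For a 248M-parameter model all of these files are tiny, so in practice Q4_K_M or above is the usual choice.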
+
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/Felladrin_TinyMistral-248M-Chat-v2-GGUF --include "TinyMistral-248M-Chat-v2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/Felladrin_TinyMistral-248M-Chat-v2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
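Each file can also be fetched over plain HTTPS from the Hub's `resolve` endpoint (`https://huggingface.co/<repo_id>/resolve/<revision>/<filename>`). A stdlib-only sketch that builds such a URL (`hub_file_url` is a hypothetical helper of ours, not an official API; for real downloads prefer `huggingface-cli` as shown above):

```python
# Build a direct download URL using the standard Hub resolve-endpoint layout:
#   https://huggingface.co/<repo_id>/resolve/<revision>/<filename>
# hub_file_url is a hypothetical helper, for illustration only.
def hub_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

url = hub_file_url(
    "tensorblock/Felladrin_TinyMistral-248M-Chat-v2-GGUF",
    "TinyMistral-248M-Chat-v2-Q4_K_M.gguf",
)
print(url)
```

The resulting URL can be passed to `curl`, `wget`, or any HTTP client.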
TinyMistral-248M-Chat-v2-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1e6bce51eac3e8334f9cebe4987f36116d23d8a155cd71f3c85d8dc84d3dfd36
+ size 105463328
TinyMistral-248M-Chat-v2-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:452bf4a080272dcd41b913caa31733e33d45035b45742f4a2532670eeb7bb0f0
+ size 137226272
TinyMistral-248M-Chat-v2-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:50d8509ba1ebcc547d3676c975a738ec227df82b99bd6aca0227bd0666a0398b
+ size 129034272
TinyMistral-248M-Chat-v2-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c2fb5a6b2ed5f1942e18c834b26c8154990493b07ec4048b89de3d9d29d10154
+ size 120195104
TinyMistral-248M-Chat-v2-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7308d1bf846df12fb25f42940e89273f638a55154d2c7f76304f289a818c6800
+ size 148779712
TinyMistral-248M-Chat-v2-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc8b366f5c1a8d0056556dd2692b2dd652655e1fbc68a49b490294a82e7cb88c
+ size 155673280
TinyMistral-248M-Chat-v2-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3ccda7c1aa1caf72f55ebb957718e184846e6845a23010694501442f8c9db17a
+ size 149435072
TinyMistral-248M-Chat-v2-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cf23b45882facc6edf9e44d9d99cabc73a8af2d1e42923d3f0eb0a1f430ed10a
+ size 175682880
TinyMistral-248M-Chat-v2-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c0198e14890229ec7ecb209b34247a35389f946ec94c7f32dbd7ee7273c6964d
+ size 179234112
TinyMistral-248M-Chat-v2-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9cc70ae072ad2148e6161c73f903d5668b8048ad2adea3894c4799289b433f9e
+ size 175682880
TinyMistral-248M-Chat-v2-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79b221219102a8303a4036249216a2282abccd1cdc8a29271f67c400112e0b62
+ size 204267520
TinyMistral-248M-Chat-v2-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fbc7af9d1df680e3bd56d4f309e11f588d86cc0ef128297585a1f5d954c01b0f
+ size 264329600