morriszms committed on
Commit 32537d1 · verified · 1 Parent(s): 3d4380b

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.1-platy-1k-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.1-platy-1k-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.1-platy-1k-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.1-platy-1k-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.1-platy-1k-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.1-platy-1k-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.1-platy-1k-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.1-platy-1k-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.1-platy-1k-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.1-platy-1k-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.1-platy-1k-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.1-platy-1k-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
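For context, these attribute lines are what route the new `*.gguf` files through Git LFS; entries like this are normally generated with `git lfs track` rather than edited by hand. A minimal sketch, assuming Git LFS is installed locally:

```shell
# Hypothetical way to reproduce one of the .gitattributes entries above
git lfs install
git lfs track "Mistral-7B-v0.1-platy-1k-Q2_K.gguf"   # appends a matching filter=lfs line to .gitattributes
```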
Mistral-7B-v0.1-platy-1k-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d46ad5124a2b4c98997601a5d924c7cc669471711972c862e61229e01da79c2f
+ size 2719242624
Mistral-7B-v0.1-platy-1k-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:40274679f865f43224127c7b89e5fd599fab3ed12b78df6baa238c55052a850d
+ size 3822025088
Mistral-7B-v0.1-platy-1k-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8b90b3554f3b8002d1ac710ea26c3f4a013cda6d8d8e6e90461fbdb26982af9a
+ size 3518986624
Mistral-7B-v0.1-platy-1k-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aa200dc0158427033c9ad5b9d7d0b7f498e383ffdfe044c8a92083bce2fd6d69
+ size 3164567936
Mistral-7B-v0.1-platy-1k-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:adf00f99fe8e87885920c2548317c714ac4c2f366b8df658e92a762feb863d45
+ size 4108917120
Mistral-7B-v0.1-platy-1k-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc836705b1794fca4c4ee9379c97dde5729bac770985eb7afb52052cbd29c3c3
+ size 4368439680
Mistral-7B-v0.1-platy-1k-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9adc16f0abe3a3b02b486f30bb8e915ab18c1ef4ca55ee853a68673272dba73e
+ size 4140374400
Mistral-7B-v0.1-platy-1k-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:294418c2c7035991b67b63c6769e33ec32665b0b7f18596dcd89ccb4b335274e
+ size 4997716352
Mistral-7B-v0.1-platy-1k-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8aa1209bd20be19d6dee71782d1dce58a9de400fca0049ee0ea5af10b8419d69
+ size 5131409792
Mistral-7B-v0.1-platy-1k-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d85631d1ce1e2a8346ff9ed95f82261dcf9d76be4e81eeb4e2cd51f30678cc54
+ size 4997716352
Mistral-7B-v0.1-platy-1k-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fddeb0869862aede95292adfdbd56b4dc82f028d61129a06c0eb2228b9e8dc61
+ size 5942065536
Mistral-7B-v0.1-platy-1k-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d315804a5b97658b19385df32bad01aa206765b7c4409732ff7a35a71b49f519
+ size 7695858048
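Each of the `ADDED` entries above is a Git LFS pointer file (spec version, SHA-256 object ID, and byte size) rather than the model weights themselves; the actual `.gguf` payloads live in LFS storage. When working with a plain `git` clone, individual payloads can be fetched selectively, for example (a sketch assuming `git-lfs` is installed):

```shell
# Clone the repo without downloading LFS payloads, then fetch a single quant file
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/tensorblock/mncai_Mistral-7B-v0.1-platy-1k-GGUF
cd mncai_Mistral-7B-v0.1-platy-1k-GGUF
git lfs pull --include="Mistral-7B-v0.1-platy-1k-Q4_K_M.gguf"
```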
README.md ADDED
@@ -0,0 +1,120 @@
+ ---
+ pipeline_tag: text-generation
+ license: mit
+ language:
+ - en
+ - ko
+ library_name: transformers
+ tags:
+ - MindsAndCompany
+ - TensorBlock
+ - GGUF
+ datasets:
+ - kyujinpy/KOpen-platypus
+ base_model: mncai/Mistral-7B-v0.1-platy-1k
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## mncai/Mistral-7B-v0.1-platy-1k - GGUF
+
+ This repo contains GGUF format model files for [mncai/Mistral-7B-v0.1-platy-1k](https://huggingface.co/mncai/Mistral-7B-v0.1-platy-1k).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
+
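As a quick sanity check, assuming one of the quantized files below has already been downloaded (see the downloading instructions further down) and a local llama.cpp build at or newer than the commit above, a run might look like the following sketch; binary name and flags can vary between llama.cpp versions, and Q4_K_M is only an example choice:

```shell
# Hypothetical smoke test: load the Q4_K_M quant with llama.cpp's CLI and
# generate a short completion. Adjust the path to wherever the file was saved.
./llama-cli -m ./Mistral-7B-v0.1-platy-1k-Q4_K_M.gguf -p "The capital of France is" -n 32
```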
+ ## Our projects
+ <table border="1" cellspacing="0" cellpadding="10">
+ <tr>
+ <th style="font-size: 25px;">Awesome MCP Servers</th>
+ <th style="font-size: 25px;">TensorBlock Studio</th>
+ </tr>
+ <tr>
+ <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
+ <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
+ </tr>
+ <tr>
+ <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
+ <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
+ </tr>
+ <tr>
+ <th>
+ <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ <th>
+ <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ </tr>
+ </table>
+
+ ## Prompt template
+
+ ```
+
+ ```
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [Mistral-7B-v0.1-platy-1k-Q2_K.gguf](https://huggingface.co/tensorblock/mncai_Mistral-7B-v0.1-platy-1k-GGUF/blob/main/Mistral-7B-v0.1-platy-1k-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [Mistral-7B-v0.1-platy-1k-Q3_K_S.gguf](https://huggingface.co/tensorblock/mncai_Mistral-7B-v0.1-platy-1k-GGUF/blob/main/Mistral-7B-v0.1-platy-1k-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
+ | [Mistral-7B-v0.1-platy-1k-Q3_K_M.gguf](https://huggingface.co/tensorblock/mncai_Mistral-7B-v0.1-platy-1k-GGUF/blob/main/Mistral-7B-v0.1-platy-1k-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
+ | [Mistral-7B-v0.1-platy-1k-Q3_K_L.gguf](https://huggingface.co/tensorblock/mncai_Mistral-7B-v0.1-platy-1k-GGUF/blob/main/Mistral-7B-v0.1-platy-1k-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
+ | [Mistral-7B-v0.1-platy-1k-Q4_0.gguf](https://huggingface.co/tensorblock/mncai_Mistral-7B-v0.1-platy-1k-GGUF/blob/main/Mistral-7B-v0.1-platy-1k-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [Mistral-7B-v0.1-platy-1k-Q4_K_S.gguf](https://huggingface.co/tensorblock/mncai_Mistral-7B-v0.1-platy-1k-GGUF/blob/main/Mistral-7B-v0.1-platy-1k-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
+ | [Mistral-7B-v0.1-platy-1k-Q4_K_M.gguf](https://huggingface.co/tensorblock/mncai_Mistral-7B-v0.1-platy-1k-GGUF/blob/main/Mistral-7B-v0.1-platy-1k-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
+ | [Mistral-7B-v0.1-platy-1k-Q5_0.gguf](https://huggingface.co/tensorblock/mncai_Mistral-7B-v0.1-platy-1k-GGUF/blob/main/Mistral-7B-v0.1-platy-1k-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [Mistral-7B-v0.1-platy-1k-Q5_K_S.gguf](https://huggingface.co/tensorblock/mncai_Mistral-7B-v0.1-platy-1k-GGUF/blob/main/Mistral-7B-v0.1-platy-1k-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
+ | [Mistral-7B-v0.1-platy-1k-Q5_K_M.gguf](https://huggingface.co/tensorblock/mncai_Mistral-7B-v0.1-platy-1k-GGUF/blob/main/Mistral-7B-v0.1-platy-1k-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
+ | [Mistral-7B-v0.1-platy-1k-Q6_K.gguf](https://huggingface.co/tensorblock/mncai_Mistral-7B-v0.1-platy-1k-GGUF/blob/main/Mistral-7B-v0.1-platy-1k-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
+ | [Mistral-7B-v0.1-platy-1k-Q8_0.gguf](https://huggingface.co/tensorblock/mncai_Mistral-7B-v0.1-platy-1k-GGUF/blob/main/Mistral-7B-v0.1-platy-1k-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face Hub CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/mncai_Mistral-7B-v0.1-platy-1k-GGUF --include "Mistral-7B-v0.1-platy-1k-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/mncai_Mistral-7B-v0.1-platy-1k-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
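Because each file in this commit is stored through Git LFS, its expected SHA-256 digest is recorded in the pointer shown earlier on this page, so a download can be checked against that value. A minimal sketch using the Q2_K file as the example:

```shell
# Verify a downloaded file against the SHA-256 recorded in its LFS pointer above
sha256sum MY_LOCAL_DIR/Mistral-7B-v0.1-platy-1k-Q2_K.gguf
# expected digest: d46ad5124a2b4c98997601a5d924c7cc669471711972c862e61229e01da79c2f
```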