morriszms committed · verified
Commit 389ee8e · 1 Parent(s): b2daf6f

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ 7B-Orfini-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ 7B-Orfini-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ 7B-Orfini-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ 7B-Orfini-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ 7B-Orfini-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ 7B-Orfini-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ 7B-Orfini-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ 7B-Orfini-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ 7B-Orfini-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ 7B-Orfini-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ 7B-Orfini-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ 7B-Orfini-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
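
These new attribute rules route each uploaded `.gguf` file through Git LFS. As a hedged aside (the upload itself was made with `huggingface_hub`, so this is an equivalent manual workflow rather than the command actually used here), entries like these are typically produced with the `git lfs track` command:

```shell
# Track GGUF weights with Git LFS; this appends the corresponding
# "filter=lfs diff=lfs merge=lfs -text" rules to .gitattributes
git lfs install
git lfs track "*.gguf"
git add .gitattributes
```
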
7B-Orfini-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a1a3be81cec58b551af3e6f6edafb511ed44fd4e5811b5dcfbb3dfb39b033e7b
+ size 2532865472

7B-Orfini-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e64d0815dd532b1b3e86df1a5f9d64dcbb19e37983a9dbabc22a541e56a4b38d
+ size 3597112768

7B-Orfini-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:277b290aaaeeb213f98c1329590d39dc5566f419420c5ca64436302f48ecc4d7
+ size 3298006464

7B-Orfini-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:87e4d09eaf6b0e5cca4ad2f16e02fa9d674a0726d2ac12618b2cabfdc93d562d
+ size 2948306368

7B-Orfini-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b629a70b85fd54b1f29a61e5f90d3788ad2e065b0aea75b2600e42778a445dbc
+ size 3825808832

7B-Orfini-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7493e31bd6a24731f883fd8601c6a7a0369bb4a1df6bd0bf8c1d26ce4e474ba
+ size 4081006016

7B-Orfini-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e8fbeb56ba9547a828e058abcc04ba1eecaeb19daab2036a8a2e070ccfbfda09
+ size 3856741824

7B-Orfini-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f51682a2683823d8d903549b6681813b175d4a569fc7a5aa46f905fe525744a7
+ size 4651693504

7B-Orfini-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d21f66cc8bd83e89a751df2ea3136733bcb50434939789d06cd5c61bd7df4f46
+ size 4783158720

7B-Orfini-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6de3cfda61e20335a7ae47e16700a99bff6fbc08b172616d8ae787b3bd573788
+ size 4651693504

7B-Orfini-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b9d7c9374cd60fd2a61de1fb7aa3b5ac70b4533ca4961382719d8bf1f340b438
+ size 5529195968

7B-Orfini-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ebc4cd20329a3101b954dfcb9c8511e142b895edac30dd68ddcb746b69dd49a6
+ size 7161091520

README.md ADDED
@@ -0,0 +1,127 @@
+ ---
+ license: mit
+ datasets:
+ - Open-Orca/OpenOrca
+ - conceptofmind/cot_submix_original
+ - conceptofmind/t0_submix_original
+ - conceptofmind/niv2_submix_original
+ - conceptofmind/flan2021_submix_original
+ - ehartford/dolphin
+ language:
+ - en
+ tags:
+ - merge
+ - slerp
+ - TensorBlock
+ - GGUF
+ inference: false
+ metrics:
+ - accuracy
+ - bleu
+ base_model: NewstaR/7B-Orfini
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## NewstaR/7B-Orfini - GGUF
+
+ This repo contains GGUF format model files for [NewstaR/7B-Orfini](https://huggingface.co/NewstaR/7B-Orfini).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
+
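+ A quick way to smoke-test a downloaded file is to run it directly with llama.cpp. A minimal sketch, assuming a llama.cpp build at or after the commit above (which provides the `llama-cli` binary); the quant choice and prompt below are purely illustrative:
+
+ ```shell
+ # Generate 64 tokens from the Q4_K_M quant as a basic sanity check
+ ./llama-cli -m ./7B-Orfini-Q4_K_M.gguf -p "The capital of France is" -n 64
+ ```
+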
+ ## Our projects
+ <table border="1" cellspacing="0" cellpadding="10">
+ <tr>
+ <th style="font-size: 25px;">Awesome MCP Servers</th>
+ <th style="font-size: 25px;">TensorBlock Studio</th>
+ </tr>
+ <tr>
+ <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
+ <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
+ </tr>
+ <tr>
+ <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
+ <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
+ </tr>
+ <tr>
+ <th>
+ <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ <th>
+ <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ </tr>
+ </table>
+
+ ## Prompt template
+
+ ```
+ Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
+ ```
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [7B-Orfini-Q2_K.gguf](https://huggingface.co/tensorblock/NewstaR_7B-Orfini-GGUF/blob/main/7B-Orfini-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [7B-Orfini-Q3_K_S.gguf](https://huggingface.co/tensorblock/NewstaR_7B-Orfini-GGUF/blob/main/7B-Orfini-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
+ | [7B-Orfini-Q3_K_M.gguf](https://huggingface.co/tensorblock/NewstaR_7B-Orfini-GGUF/blob/main/7B-Orfini-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
+ | [7B-Orfini-Q3_K_L.gguf](https://huggingface.co/tensorblock/NewstaR_7B-Orfini-GGUF/blob/main/7B-Orfini-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
+ | [7B-Orfini-Q4_0.gguf](https://huggingface.co/tensorblock/NewstaR_7B-Orfini-GGUF/blob/main/7B-Orfini-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [7B-Orfini-Q4_K_S.gguf](https://huggingface.co/tensorblock/NewstaR_7B-Orfini-GGUF/blob/main/7B-Orfini-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
+ | [7B-Orfini-Q4_K_M.gguf](https://huggingface.co/tensorblock/NewstaR_7B-Orfini-GGUF/blob/main/7B-Orfini-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
+ | [7B-Orfini-Q5_0.gguf](https://huggingface.co/tensorblock/NewstaR_7B-Orfini-GGUF/blob/main/7B-Orfini-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [7B-Orfini-Q5_K_S.gguf](https://huggingface.co/tensorblock/NewstaR_7B-Orfini-GGUF/blob/main/7B-Orfini-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
+ | [7B-Orfini-Q5_K_M.gguf](https://huggingface.co/tensorblock/NewstaR_7B-Orfini-GGUF/blob/main/7B-Orfini-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
+ | [7B-Orfini-Q6_K.gguf](https://huggingface.co/tensorblock/NewstaR_7B-Orfini-GGUF/blob/main/7B-Orfini-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
+ | [7B-Orfini-Q8_0.gguf](https://huggingface.co/tensorblock/NewstaR_7B-Orfini-GGUF/blob/main/7B-Orfini-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/NewstaR_7B-Orfini-GGUF --include "7B-Orfini-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/NewstaR_7B-Orfini-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
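+
+ After downloading, you can optionally verify a file against the SHA-256 digest recorded in its Git LFS pointer (for example, the pointer for `7B-Orfini-Q2_K.gguf` above lists oid `a1a3be81…`). A minimal sketch, assuming a Unix-like shell with the standard `sha256sum` utility:
+
+ ```shell
+ # Compare the local file's digest with the oid shown in the LFS pointer
+ sha256sum MY_LOCAL_DIR/7B-Orfini-Q2_K.gguf
+ ```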