morriszms committed · verified
Commit b300dd9 · 1 parent: 0a8159f

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ AceGPT-v2-32B-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ AceGPT-v2-32B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ AceGPT-v2-32B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ AceGPT-v2-32B-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ AceGPT-v2-32B-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ AceGPT-v2-32B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ AceGPT-v2-32B-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ AceGPT-v2-32B-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ AceGPT-v2-32B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ AceGPT-v2-32B-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ AceGPT-v2-32B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ AceGPT-v2-32B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
AceGPT-v2-32B-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c6b2b8234ba7a793539b1012f16942db844b3c176a7fb55fbef3e9c987ef8202
+ size 12223315488
AceGPT-v2-32B-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:531657a393eb33c3baaf2d7f629c83a77717032ae3d3bd823f101d35e4172a92
+ size 17118629408
AceGPT-v2-32B-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b61cffe79a6878d656fe2abab7423e774974b5436f7b73b07c63000ff0fc2a96
+ size 15816429088
AceGPT-v2-32B-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7e0d894090f6166f90379d1f246e651a0ce8d238c9741f079f69c209184c6cde
+ size 14285508128
AceGPT-v2-32B-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:95c39e196da287e11e6eef9774794312b1caf25b82e959470e5b46a5df83bcab
+ size 18499984928
AceGPT-v2-32B-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c32e7bfddfa3ab8dd1137dcf4194bdd2b125fc086be3cfdded505438f64a6f52
+ size 19700276768
AceGPT-v2-32B-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb5daef804336eb55442b58cb82b21caffa27eb226e69840a6b7661871a6e545
+ size 18642853408
AceGPT-v2-32B-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:08918991daf71ca9d698eb359957988b1ef0ab05d33629e01feb6e966c172673
+ size 22466551328
AceGPT-v2-32B-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:be66160c4947bc5d43dfc7e904fac9433366957bcb8c3c0dec628aa7edb66532
+ size 23084883488
AceGPT-v2-32B-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:323355a67400fe35e9edc176e7482339984dd07c2aa4c5d2768b7df12a0c7f9f
+ size 22466551328
AceGPT-v2-32B-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8958c4087eb536ec88064b6f2a6d5bbb4cad6e815dc7e642a8617da431341047
+ size 26681028128
AceGPT-v2-32B-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:844a45e5c91a6daf0af85d8b7e89345f8878fb558011d424d0bae94eec4bff05
+ size 34554809888
README.md ADDED
@@ -0,0 +1,120 @@
+ ---
+ license: apache-2.0
+ language:
+ - ar
+ - zh
+ - en
+ tags:
+ - TensorBlock
+ - GGUF
+ base_model: FreedomIntelligence/AceGPT-v2-32B
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## FreedomIntelligence/AceGPT-v2-32B - GGUF
+
+ This repo contains GGUF format model files for [FreedomIntelligence/AceGPT-v2-32B](https://huggingface.co/FreedomIntelligence/AceGPT-v2-32B).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
+
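+ To reproduce that environment, llama.cpp can be built from the referenced commit. A minimal sketch using llama.cpp's standard CMake build, pinned to the commit hash linked above:
+
+ ```shell
+ # Fetch llama.cpp and pin it to the compatible commit (b5165)
+ git clone https://github.com/ggml-org/llama.cpp
+ cd llama.cpp
+ git checkout 1d735c0b4fa0551c51c2f4ac888dd9a01f447985
+
+ # Standard CMake release build; binaries land in build/bin
+ cmake -B build
+ cmake --build build --config Release
+ ```
+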
+ ## Our projects
+ <table border="1" cellspacing="0" cellpadding="10">
+ <tr>
+ <th style="font-size: 25px;">Awesome MCP Servers</th>
+ <th style="font-size: 25px;">TensorBlock Studio</th>
+ </tr>
+ <tr>
+ <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
+ <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
+ </tr>
+ <tr>
+ <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
+ <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
+ </tr>
+ <tr>
+ <th>
+ <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ <th>
+ <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ </tr>
+ </table>
+
+ ## Prompt template
+
+ ```
+ <|im_start|>system
+ {system_prompt}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+ ```
+
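+ For illustration, a filled-in template can be passed to llama.cpp's `llama-cli` as a plain completion prompt. A minimal sketch; the model file, prompt text, and token count are placeholders, and exact behavior can vary between llama.cpp versions (builds that detect the model's chat template may enter interactive chat mode instead, in which case the template is applied automatically):
+
+ ```shell
+ # Bash $'...' quoting turns \n into real newlines inside the ChatML template
+ ./build/bin/llama-cli -m AceGPT-v2-32B-Q4_K_M.gguf -n 256 \
+   -p $'<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nWhat is the GGUF format?<|im_end|>\n<|im_start|>assistant\n'
+ ```
+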
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [AceGPT-v2-32B-Q2_K.gguf](https://huggingface.co/tensorblock/FreedomIntelligence_AceGPT-v2-32B-GGUF/blob/main/AceGPT-v2-32B-Q2_K.gguf) | Q2_K | 12.223 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [AceGPT-v2-32B-Q3_K_S.gguf](https://huggingface.co/tensorblock/FreedomIntelligence_AceGPT-v2-32B-GGUF/blob/main/AceGPT-v2-32B-Q3_K_S.gguf) | Q3_K_S | 14.286 GB | very small, high quality loss |
+ | [AceGPT-v2-32B-Q3_K_M.gguf](https://huggingface.co/tensorblock/FreedomIntelligence_AceGPT-v2-32B-GGUF/blob/main/AceGPT-v2-32B-Q3_K_M.gguf) | Q3_K_M | 15.816 GB | very small, high quality loss |
+ | [AceGPT-v2-32B-Q3_K_L.gguf](https://huggingface.co/tensorblock/FreedomIntelligence_AceGPT-v2-32B-GGUF/blob/main/AceGPT-v2-32B-Q3_K_L.gguf) | Q3_K_L | 17.119 GB | small, substantial quality loss |
+ | [AceGPT-v2-32B-Q4_0.gguf](https://huggingface.co/tensorblock/FreedomIntelligence_AceGPT-v2-32B-GGUF/blob/main/AceGPT-v2-32B-Q4_0.gguf) | Q4_0 | 18.500 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [AceGPT-v2-32B-Q4_K_S.gguf](https://huggingface.co/tensorblock/FreedomIntelligence_AceGPT-v2-32B-GGUF/blob/main/AceGPT-v2-32B-Q4_K_S.gguf) | Q4_K_S | 18.643 GB | small, greater quality loss |
+ | [AceGPT-v2-32B-Q4_K_M.gguf](https://huggingface.co/tensorblock/FreedomIntelligence_AceGPT-v2-32B-GGUF/blob/main/AceGPT-v2-32B-Q4_K_M.gguf) | Q4_K_M | 19.700 GB | medium, balanced quality - recommended |
+ | [AceGPT-v2-32B-Q5_0.gguf](https://huggingface.co/tensorblock/FreedomIntelligence_AceGPT-v2-32B-GGUF/blob/main/AceGPT-v2-32B-Q5_0.gguf) | Q5_0 | 22.467 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [AceGPT-v2-32B-Q5_K_S.gguf](https://huggingface.co/tensorblock/FreedomIntelligence_AceGPT-v2-32B-GGUF/blob/main/AceGPT-v2-32B-Q5_K_S.gguf) | Q5_K_S | 22.467 GB | large, low quality loss - recommended |
+ | [AceGPT-v2-32B-Q5_K_M.gguf](https://huggingface.co/tensorblock/FreedomIntelligence_AceGPT-v2-32B-GGUF/blob/main/AceGPT-v2-32B-Q5_K_M.gguf) | Q5_K_M | 23.085 GB | large, very low quality loss - recommended |
+ | [AceGPT-v2-32B-Q6_K.gguf](https://huggingface.co/tensorblock/FreedomIntelligence_AceGPT-v2-32B-GGUF/blob/main/AceGPT-v2-32B-Q6_K.gguf) | Q6_K | 26.681 GB | very large, extremely low quality loss |
+ | [AceGPT-v2-32B-Q8_0.gguf](https://huggingface.co/tensorblock/FreedomIntelligence_AceGPT-v2-32B-GGUF/blob/main/AceGPT-v2-32B-Q8_0.gguf) | Q8_0 | 34.555 GB | very large, extremely low quality loss - not recommended |
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face Hub CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/FreedomIntelligence_AceGPT-v2-32B-GGUF --include "AceGPT-v2-32B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
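+ Since the commit above records a SHA-256 digest for every file (the `oid sha256:` lines in the LFS pointers), a finished download can be verified locally. A minimal sketch using GNU `sha256sum`, with the Q2_K digest taken from the pointer shown earlier:
+
+ ```shell
+ # Compare the local file against the digest published in the LFS pointer
+ echo "c6b2b8234ba7a793539b1012f16942db844b3c176a7fb55fbef3e9c987ef8202  MY_LOCAL_DIR/AceGPT-v2-32B-Q2_K.gguf" | sha256sum -c -
+ ```
+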
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/FreedomIntelligence_AceGPT-v2-32B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```