Upload README.md with huggingface_hub
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Qwen2-VL-72B-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q2_K.gguf) | Q2_K | 2 | 29.8 GB | smallest, significant quality loss - not recommended for most purposes |
| [Qwen2-VL-72B-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 39.5 GB | small, substantial quality loss |
| [Qwen2-VL-72B-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 37.7 GB | very small, high quality loss |
| [Qwen2-VL-72B-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 34.5 GB | very small, high quality loss |
| [Qwen2-VL-72B-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q4_0.gguf) | Q4_0 | 4 | 41.2 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Qwen2-VL-72B-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 47.4 GB | medium, balanced quality - recommended |
| [Qwen2-VL-72B-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 43.9 GB | small, greater quality loss |
| [Qwen2-VL-72B-Instruct-Q5_0-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q5_0-00001-of-00002.gguf) | Q5_0 | 5 | 29.9 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Qwen2-VL-72B-Instruct-Q5_0-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q5_0-00002-of-00002.gguf) | Q5_0 | 5 | 20.2 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Qwen2-VL-72B-Instruct-Q5_K_M-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q5_K_M-00001-of-00002.gguf) | Q5_K_M | 5 | 29.9 GB | large, very low quality loss - recommended |
| [Qwen2-VL-72B-Instruct-Q5_K_M-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q5_K_M-00002-of-00002.gguf) | Q5_K_M | 5 | 24.5 GB | large, very low quality loss - recommended |
| [Qwen2-VL-72B-Instruct-Q5_K_S-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q5_K_S-00001-of-00002.gguf) | Q5_K_S | 5 | 29.8 GB | large, low quality loss - recommended |
| [Qwen2-VL-72B-Instruct-Q5_K_S-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q5_K_S-00002-of-00002.gguf) | Q5_K_S | 5 | 21.5 GB | large, low quality loss - recommended |
| [Qwen2-VL-72B-Instruct-Q6_K-00001-of-00003.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q6_K-00001-of-00003.gguf) | Q6_K | 6 | 29.9 GB | very large, extremely low quality loss |
| [Qwen2-VL-72B-Instruct-Q6_K-00002-of-00003.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q6_K-00002-of-00003.gguf) | Q6_K | 6 | 29.9 GB | very large, extremely low quality loss |
| [Qwen2-VL-72B-Instruct-Q6_K-00003-of-00003.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q6_K-00003-of-00003.gguf) | Q6_K | 6 | 4.55 GB | very large, extremely low quality loss |
| [Qwen2-VL-72B-Instruct-Q8_0-00001-of-00003.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q8_0-00001-of-00003.gguf) | Q8_0 | 8 | 29.9 GB | very large, extremely low quality loss - not recommended |
| [Qwen2-VL-72B-Instruct-Q8_0-00002-of-00003.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q8_0-00002-of-00003.gguf) | Q8_0 | 8 | 29.8 GB | very large, extremely low quality loss - not recommended |
| [Qwen2-VL-72B-Instruct-Q8_0-00003-of-00003.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q8_0-00003-of-00003.gguf) | Q8_0 | 8 | 17.6 GB | very large, extremely low quality loss - not recommended |
| [Qwen2-VL-72B-Instruct-f16-00001-of-00005.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-f16-00001-of-00005.gguf) | f16 | 16 | 29.9 GB | |
| [Qwen2-VL-72B-Instruct-f16-00002-of-00005.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-f16-00002-of-00005.gguf) | f16 | 16 | 29.7 GB | |
| [Qwen2-VL-72B-Instruct-f16-00003-of-00005.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-f16-00003-of-00005.gguf) | f16 | 16 | 29.7 GB | |
| [Qwen2-VL-72B-Instruct-f16-00004-of-00005.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-f16-00004-of-00005.gguf) | f16 | 16 | 29.5 GB | |
| [Qwen2-VL-72B-Instruct-f16-00005-of-00005.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-f16-00005-of-00005.gguf) | f16 | 16 | 26.6 GB | |
| [Qwen2-VL-72B-Instruct-vision-encoder.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-vision-encoder.gguf) | f16 | 16 | 2.8 GB | |

*Quantized with llama.cpp b4329*
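
The larger quants above are split into numbered shards, and every shard must be present locally before the model can be loaded. As a minimal sketch (not part of the original card), the two Q5_K_M shards could be fetched with `huggingface_hub`; the `models` directory is an arbitrary choice for this example:

```python
# Minimal sketch: fetch both shards of the Q5_K_M quant from the Hub.
# Repo and file names are taken from the table above; "models" is an
# arbitrary local directory chosen for this example.
from huggingface_hub import hf_hub_download

repo_id = "second-state/Qwen2-VL-72B-Instruct-GGUF"
shards = [
    "Qwen2-VL-72B-Instruct-Q5_K_M-00001-of-00002.gguf",
    "Qwen2-VL-72B-Instruct-Q5_K_M-00002-of-00002.gguf",
]

for filename in shards:
    path = hf_hub_download(repo_id=repo_id, filename=filename, local_dir="models")
    print(f"downloaded {filename} -> {path}")
```

Recent llama.cpp builds can load a split GGUF directly when pointed at the first shard (the `-00001-of-...` file), so no manual merge step should be needed as long as all shards sit in the same directory.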