apepkuss79 committed
Commit 6594cc9 · verified · 1 parent: 6648a84

Upload README.md with huggingface_hub

Files changed (1): README.md (+19, −23)
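
The commit message indicates the file was pushed with the `huggingface_hub` client. For context, a minimal sketch of such an upload (the local path and authentication setup are assumptions, not details recorded in this commit):

```python
from huggingface_hub import HfApi

# Hypothetical reconstruction of this kind of upload. Assumes you are
# already authenticated (e.g. via `huggingface-cli login` or HF_TOKEN).
api = HfApi()
api.upload_file(
    path_or_fileobj="README.md",                        # local file (assumed path)
    path_in_repo="README.md",                           # destination inside the repo
    repo_id="second-state/Qwen2-VL-72B-Instruct-GGUF",  # this model repo
    repo_type="model",
    commit_message="Upload README.md with huggingface_hub",
)
```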
README.md CHANGED
@@ -73,34 +73,30 @@ library_name: transformers
 
 | Name | Quant method | Bits | Size | Use case |
 | ---- | ---- | ---- | ---- | ----- |
- | [Qwen2-VL-72B-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 37.7 GB| very small, high quality loss |
- | [Qwen2-VL-72B-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q4_0.gguf) | Q4_0 | 4 | 41.2 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
- | [Qwen2-VL-72B-Instruct-Q5_K_M-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q5_K_M-00001-of-00002.gguf) | Q5_K_M | 5 | 29.9 GB| large, very low quality loss - recommended |
- | [Qwen2-VL-72B-Instruct-Q5_K_M-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q5_K_M-00002-of-00002.gguf) | Q5_K_M | 5 | 24.5 GB| large, very low quality loss - recommended |
- | [Qwen2-VL-72B-Instruct-vision-encoder.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-vision-encoder.gguf) | f16 | 16 | 2.8 GB| |
- <!-- | [Qwen2-VL-72B-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q2_K.gguf) | Q2_K | 2 | 29.8 GB| smallest, significant quality loss - not recommended for most purposes |
+ | [Qwen2-VL-72B-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q2_K.gguf) | Q2_K | 2 | 29.8 GB| smallest, significant quality loss - not recommended for most purposes |
 | [Qwen2-VL-72B-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 39.5 GB| small, substantial quality loss |
 | [Qwen2-VL-72B-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 37.7 GB| very small, high quality loss |
 | [Qwen2-VL-72B-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 34.5 GB| very small, high quality loss |
 | [Qwen2-VL-72B-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q4_0.gguf) | Q4_0 | 4 | 41.2 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
 | [Qwen2-VL-72B-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 47.4 GB| medium, balanced quality - recommended |
 | [Qwen2-VL-72B-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 43.9 GB| small, greater quality loss |
- | [Qwen2-VL-72B-Instruct-Q5_0-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q5_0-00001-of-00002.gguf) | Q5_0 | 5 | 32.2 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
- | [Qwen2-VL-72B-Instruct-Q5_0-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q5_0-00002-of-00002.gguf) | Q5_0 | 5 | 18 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [Qwen2-VL-72B-Instruct-Q5_0-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q5_0-00001-of-00002.gguf) | Q5_0 | 5 | 29.9 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [Qwen2-VL-72B-Instruct-Q5_0-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q5_0-00002-of-00002.gguf) | Q5_0 | 5 | 20.2 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
 | [Qwen2-VL-72B-Instruct-Q5_K_M-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q5_K_M-00001-of-00002.gguf) | Q5_K_M | 5 | 29.9 GB| large, very low quality loss - recommended |
 | [Qwen2-VL-72B-Instruct-Q5_K_M-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q5_K_M-00002-of-00002.gguf) | Q5_K_M | 5 | 24.5 GB| large, very low quality loss - recommended |
- | [Qwen2-VL-72B-Instruct-Q5_K_S-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q5_K_S-00001-of-00002.gguf) | Q5_K_S | 5 | 32.1 GB| large, low quality loss - recommended |
- | [Qwen2-VL-72B-Instruct-Q5_K_S-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q5_K_S-00002-of-00002.gguf) | Q5_K_S | 5 | 32.1 GB| large, low quality loss - recommended |
- | [Qwen2-VL-72B-Instruct-Q6_K-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q6_K-00001-of-00002.gguf) | Q6_K | 6 | 32.2 GB| very large, extremely low quality loss |
- | [Qwen2-VL-72B-Instruct-Q6_K-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q6_K-00002-of-00002.gguf) | Q6_K | 6 | 32.2 GB| very large, extremely low quality loss |
- | [Qwen2-VL-72B-Instruct-Q8_0-00001-of-00003.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q8_0-00001-of-00003.gguf) | Q8_0 | 8 | 32.1 GB| very large, extremely low quality loss - not recommended |
- | [Qwen2-VL-72B-Instruct-Q8_0-00002-of-00003.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q8_0-00002-of-00003.gguf) | Q8_0 | 8 | 32.1 GB| very large, extremely low quality loss - not recommended |
- | [Qwen2-VL-72B-Instruct-Q8_0-00003-of-00003.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q8_0-00003-of-00003.gguf) | Q8_0 | 8 | 32.1 GB| very large, extremely low quality loss - not recommended |
- | [Qwen2-VL-72B-Instruct-f16-00001-of-00005.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-f16-00001-of-00005.gguf) | f16 | 16 | 31.9 GB| |
- | [Qwen2-VL-72B-Instruct-f16-00002-of-00005.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-f16-00002-of-00005.gguf) | f16 | 16 | 32.1 GB| |
- | [Qwen2-VL-72B-Instruct-f16-00003-of-00005.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-f16-00003-of-00005.gguf) | f16 | 16 | 32.1 GB| |
- | [Qwen2-VL-72B-Instruct-f16-00004-of-00005.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-f16-00004-of-00005.gguf) | f16 | 16 | 32.1 GB| |
- | [Qwen2-VL-72B-Instruct-f16-00005-of-00005.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-f16-00005-of-00005.gguf) | f16 | 16 | 17.3 GB| |
- | [Qwen2-VL-72B-Instruct-f16-00005-of-00005.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-f16-00005-of-00005.gguf) | f16 | 16 | 17.3 GB| | -->
-
- *Quantized with llama.cpp b4372*
+ | [Qwen2-VL-72B-Instruct-Q5_K_S-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q5_K_S-00001-of-00002.gguf) | Q5_K_S | 5 | 29.8 GB| large, low quality loss - recommended |
+ | [Qwen2-VL-72B-Instruct-Q5_K_S-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q5_K_S-00002-of-00002.gguf) | Q5_K_S | 5 | 21.5 GB| large, low quality loss - recommended |
+ | [Qwen2-VL-72B-Instruct-Q6_K-00001-of-00003.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q6_K-00001-of-00003.gguf) | Q6_K | 6 | 29.9 GB| very large, extremely low quality loss |
+ | [Qwen2-VL-72B-Instruct-Q6_K-00002-of-00003.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q6_K-00002-of-00003.gguf) | Q6_K | 6 | 29.9 GB| very large, extremely low quality loss |
+ | [Qwen2-VL-72B-Instruct-Q6_K-00003-of-00003.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q6_K-00003-of-00003.gguf) | Q6_K | 6 | 4.55 GB| very large, extremely low quality loss |
+ | [Qwen2-VL-72B-Instruct-Q8_0-00001-of-00003.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q8_0-00001-of-00003.gguf) | Q8_0 | 8 | 29.9 GB| very large, extremely low quality loss - not recommended |
+ | [Qwen2-VL-72B-Instruct-Q8_0-00002-of-00003.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q8_0-00002-of-00003.gguf) | Q8_0 | 8 | 29.8 GB| very large, extremely low quality loss - not recommended |
+ | [Qwen2-VL-72B-Instruct-Q8_0-00003-of-00003.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-Q8_0-00003-of-00003.gguf) | Q8_0 | 8 | 17.6 GB| very large, extremely low quality loss - not recommended |
+ | [Qwen2-VL-72B-Instruct-f16-00001-of-00005.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-f16-00001-of-00005.gguf) | f16 | 16 | 29.9 GB| |
+ | [Qwen2-VL-72B-Instruct-f16-00002-of-00005.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-f16-00002-of-00005.gguf) | f16 | 16 | 29.7 GB| |
+ | [Qwen2-VL-72B-Instruct-f16-00003-of-00005.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-f16-00003-of-00005.gguf) | f16 | 16 | 29.7 GB| |
+ | [Qwen2-VL-72B-Instruct-f16-00004-of-00005.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-f16-00004-of-00005.gguf) | f16 | 16 | 29.5 GB| |
+ | [Qwen2-VL-72B-Instruct-f16-00005-of-00005.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-f16-00005-of-00005.gguf) | f16 | 16 | 26.6 GB| |
+ | [Qwen2-VL-72B-Instruct-vision-encoder.gguf](https://huggingface.co/second-state/Qwen2-VL-72B-Instruct-GGUF/blob/main/Qwen2-VL-72B-Instruct-vision-encoder.gguf) | f16 | 16 | 2.8 GB| |
+
+ *Quantized with llama.cpp b4329*
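
Several of the quantizations above (Q5_0, Q5_K_S, Q6_K, Q8_0, f16) ship as split GGUF files (`-0000N-of-0000M`); every shard of the chosen quantization has to end up in the same local directory before the model can be loaded. A minimal sketch of fetching one quantization's shards with `huggingface_hub` (the chosen quantization and target directory are illustrative assumptions):

```python
from huggingface_hub import snapshot_download

# Download both Q5_K_M shards plus the separate vision encoder into one
# directory; split GGUF parts must sit side by side to be loadable.
snapshot_download(
    repo_id="second-state/Qwen2-VL-72B-Instruct-GGUF",
    allow_patterns=[
        "Qwen2-VL-72B-Instruct-Q5_K_M-*.gguf",        # matches both shards
        "Qwen2-VL-72B-Instruct-vision-encoder.gguf",  # f16 vision encoder
    ],
    local_dir="Qwen2-VL-72B-Instruct-GGUF",           # assumed local path
)
```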