Llamacpp quants
- README.md +5 -3
- gemma-2-27b-it-IQ2_M.gguf +2 -2
- gemma-2-27b-it-IQ2_S.gguf +2 -2
- gemma-2-27b-it-IQ2_XS.gguf +2 -2
- gemma-2-27b-it-IQ3_M.gguf +2 -2
- gemma-2-27b-it-IQ3_XS.gguf +2 -2
- gemma-2-27b-it-IQ3_XXS.gguf +2 -2
- gemma-2-27b-it-IQ4_XS.gguf +2 -2
- gemma-2-27b-it-Q2_K.gguf +2 -2
- gemma-2-27b-it-Q2_K_L.gguf +2 -2
- gemma-2-27b-it-Q3_K_L.gguf +2 -2
- gemma-2-27b-it-Q3_K_M.gguf +2 -2
- gemma-2-27b-it-Q3_K_S.gguf +2 -2
- gemma-2-27b-it-Q3_K_XL.gguf +2 -2
- gemma-2-27b-it-Q4_K_L.gguf +2 -2
- gemma-2-27b-it-Q4_K_M.gguf +2 -2
- gemma-2-27b-it-Q4_K_S.gguf +2 -2
- gemma-2-27b-it-Q5_K_L.gguf +2 -2
- gemma-2-27b-it-Q5_K_M.gguf +2 -2
- gemma-2-27b-it-Q5_K_S.gguf +2 -2
- gemma-2-27b-it-Q6_K.gguf +2 -2
- gemma-2-27b-it-Q6_K_L.gguf +2 -2
- gemma-2-27b-it-Q8_0.gguf +2 -2
- gemma-2-27b-it-Q8_0_L.gguf +2 -2
- gemma-2-27b-it-f32.gguf/gemma-2-27b-it-f32-00001-of-00003.gguf +2 -2
- gemma-2-27b-it-f32.gguf/gemma-2-27b-it-f32-00002-of-00003.gguf +2 -2
- gemma-2-27b-it-f32.gguf/gemma-2-27b-it-f32-00003-of-00003.gguf +2 -2
- gemma-2-27b-it.imatrix +1 -1
README.md
CHANGED
@@ -25,6 +25,8 @@ All quants made using imatrix option with dataset from [here](https://gist.githu
 <bos><start_of_turn>user
 {prompt}<end_of_turn>
 <start_of_turn>model
+<end_of_turn>
+<start_of_turn>model
 
 ```
 
@@ -38,7 +40,7 @@ Note that this model does not support a System prompt.
 | [gemma-2-27b-it-Q8_0.gguf](https://huggingface.co/bartowski/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q8_0.gguf) | Q8_0 | 28.93GB | Extremely high quality, generally unneeded but max available quant. |
 | [gemma-2-27b-it-Q6_K_L.gguf](https://huggingface.co/bartowski/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q6_K_L.gguf) | Q6_K_L | 23.73GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Very high quality, near perfect, *recommended*. |
 | [gemma-2-27b-it-Q6_K.gguf](https://huggingface.co/bartowski/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q6_K.gguf) | Q6_K | 22.34GB | Very high quality, near perfect, *recommended*. |
-| [gemma-2-27b-it-Q5_K_L.gguf](https://huggingface.co/bartowski/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q5_K_L.gguf) | Q5_K_L | 20.
+| [gemma-2-27b-it-Q5_K_L.gguf](https://huggingface.co/bartowski/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q5_K_L.gguf) | Q5_K_L | 20.79GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. High quality, *recommended*. |
 | [gemma-2-27b-it-Q5_K_M.gguf](https://huggingface.co/bartowski/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q5_K_M.gguf) | Q5_K_M | 19.40GB | High quality, *recommended*. |
 | [gemma-2-27b-it-Q5_K_S.gguf](https://huggingface.co/bartowski/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q5_K_S.gguf) | Q5_K_S | 18.88GB | High quality, *recommended*. |
 | [gemma-2-27b-it-Q4_K_L.gguf](https://huggingface.co/bartowski/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q4_K_L.gguf) | Q4_K_L | 18.03GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Good quality, uses about 4.83 bits per weight, *recommended*. |
@@ -52,10 +54,10 @@ Note that this model does not support a System prompt.
 | [gemma-2-27b-it-Q3_K_S.gguf](https://huggingface.co/bartowski/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q3_K_S.gguf) | Q3_K_S | 12.16GB | Low quality, not recommended. |
 | [gemma-2-27b-it-IQ3_XS.gguf](https://huggingface.co/bartowski/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-IQ3_XS.gguf) | IQ3_XS | 11.55GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
 | [gemma-2-27b-it-IQ3_XXS.gguf](https://huggingface.co/bartowski/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-IQ3_XXS.gguf) | IQ3_XXS | 10.75GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
-| [gemma-2-27b-it-Q2_K.gguf](https://huggingface.co/bartowski/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q2_K.gguf) | Q2_K | 10.
+| [gemma-2-27b-it-Q2_K.gguf](https://huggingface.co/bartowski/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q2_K.gguf) | Q2_K | 10.44GB | Very low quality but surprisingly usable. |
 | [gemma-2-27b-it-IQ2_M.gguf](https://huggingface.co/bartowski/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-IQ2_M.gguf) | IQ2_M | 9.39GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
 | [gemma-2-27b-it-IQ2_S.gguf](https://huggingface.co/bartowski/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-IQ2_S.gguf) | IQ2_S | 8.65GB | Very low quality, uses SOTA techniques to be usable. |
-| [gemma-2-27b-it-IQ2_XS.gguf](https://huggingface.co/bartowski/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-IQ2_XS.gguf) | IQ2_XS | 8.
+| [gemma-2-27b-it-IQ2_XS.gguf](https://huggingface.co/bartowski/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-IQ2_XS.gguf) | IQ2_XS | 8.39GB | Very low quality, uses SOTA techniques to be usable. |
 
 ## Downloading using huggingface-cli
 
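The prompt template added by this README change can be sketched as a small Python helper. This is illustrative only: the function name and the exact newline placement are assumptions, and `<bos>` is normally inserted by the tokenizer rather than typed into the prompt.

```python
# Sketch (not part of the repo): assemble the Gemma-2 chat prompt
# using the template from the README diff above. Newline placement
# between template tokens is an assumption.
def build_prompt(user_message: str) -> str:
    return (
        "<bos><start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
        "<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(build_prompt("Why is the sky blue?"))
```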
gemma-2-27b-it-IQ2_M.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:2ac5e0423a14bd124310a9bbff4dfd2d0a6a8165162fd3c9f5090869a692d2e4
+size 9398878560
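Each git-lfs pointer diff pairs an `oid sha256:` with a byte `size`. After downloading a quant, both values can be checked locally; a minimal Python sketch (the helper name is hypothetical, not from this repo):

```python
# Sketch: verify a downloaded file against the oid/size fields of its
# git-lfs pointer (e.g. the IQ2_M pointer above).
import hashlib
import os

def matches_pointer(path: str, oid: str, size: int) -> bool:
    """True if the file's byte size and sha256 digest match the pointer."""
    if os.path.getsize(path) != size:
        return False
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in 1 MiB chunks so multi-GB GGUF files fit in memory.
        while chunk := f.read(1 << 20):
            h.update(chunk)
    return h.hexdigest() == oid
```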
gemma-2-27b-it-IQ2_S.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:eb8acc19032705252b5891883ef339107a4e7236da266307c3f2ff485414c45c
+size 8652161376

gemma-2-27b-it-IQ2_XS.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:0667fdc5bc7635eb379f9cce9f3225719f87d3e062a4227be127f154d6843c00
+size 8399716704

gemma-2-27b-it-IQ3_M.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:b56de27535c598d0419eb2e93502d8be65baac7b5cdfda610b8541cd2adf67dc
+size 12454830432

gemma-2-27b-it-IQ3_XS.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:3dfc33b9619f6c141a2b680fa5d625ce3f9b514b5e62b233ad5fd58985e056c8
+size 11550630240

gemma-2-27b-it-IQ3_XXS.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:e9dd8bcb4c6cec2c35bd1cdeaa295f6788ff8a7c656a2448b8f67db557fabc05
+size 10750755168

gemma-2-27b-it-IQ4_XS.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:136db68716327171facb0da75f5396e61175d99b953bdcbed8c1b228c366c85f
+size 14814421344

gemma-2-27b-it-Q2_K.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:589b098ead37615a95d83181ce0f0d3a162b7bd720eae370029b5f452ef36612
+size 10449576288

gemma-2-27b-it-Q2_K_L.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:ae47e19306cbc099f405720e84d65f5168cd1e315de886436bbd7e9b2fb00012
+size 11841192288

gemma-2-27b-it-Q3_K_L.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:ef6c739293a83b428950abda322dbcd20893002b404f244f4c5881898e38a0bf
+size 14519361888

gemma-2-27b-it-Q3_K_M.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:f0cc746e782593662436433a1541bd3dfcf3fc64b8ba9564361b2eeab5475937
+size 13424648544

gemma-2-27b-it-Q3_K_S.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:8fe7885f7bdb3bf0bc8548c296f437fb576ed906043fd9f9c0c7ac84246d0972
+size 12169060704

gemma-2-27b-it-Q3_K_XL.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:042f847a41a3dfcf83414b203054ca044acab33d741114c864f15dd0c5539338
+size 15910977888

gemma-2-27b-it-Q4_K_L.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:c7d35766b774effed0fb93535c2ec4a19a52a1ac263a865388ce8a1f4217297d
+size 18036998496

gemma-2-27b-it-Q4_K_M.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:be6739763f1b7661d32bd63e05bc1131e5bb9dac436b249faf6c6edffa601c96
+size 16645382496

gemma-2-27b-it-Q4_K_S.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:4605b0667036d027a179e718a7182118d00db6242ca6f08d9e1ab0ba4afecd54
+size 15739265376

gemma-2-27b-it-Q5_K_L.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:da959f7228be368760c7b0e55ae68b522582966b1d1b22ef4c6b2994a482ee18
+size 20799734112

gemma-2-27b-it-Q5_K_M.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:96c90d894225c760044fc37e0b0912c67268a7870fe01d445b1821b561ee0bab
+size 19408118112

gemma-2-27b-it-Q5_K_S.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:bd0a3f5a02417eeb0e3a6bc45b0bce6b9b1f24d36afed1e8b1c93ff5d82f17a0
+size 18884206944

gemma-2-27b-it-Q6_K.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:460d88c4ad506163673fde237fef6be2c231a056926a40ef859ce32456c7a424
+size 22343524704

gemma-2-27b-it-Q6_K_L.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:a7f00d33b1feb33184d50cabf047cc9d53fb56ecb5f839e0cb8fbfba0f826a11
+size 23735140704

gemma-2-27b-it-Q8_0.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:784146b99cc08e237292638d064150909a7e82cf6ecc55bed7d81328c58d42d2
+size 28937388384

gemma-2-27b-it-Q8_0_L.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:96f1c2a3807b2e8d56a705713a064fc03879da21564472fcdfc0477c0b65ba05
+size 30043308384

gemma-2-27b-it-f32.gguf/gemma-2-27b-it-f32-00001-of-00003.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:40cffae24f3b2ef81b71f3f9395641e2d7b3b648c81ff6b37d16b496b655463b
+size 39605588416

gemma-2-27b-it-f32.gguf/gemma-2-27b-it-f32-00002-of-00003.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:ee1299b5d2d728b578c7d9e309ff87d13b4701dd01a7b268c4d2393e128e881a
+size 39864004064

gemma-2-27b-it-f32.gguf/gemma-2-27b-it-f32-00003-of-00003.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:f7c97c2904b8af2f09fa1d83304f6b6d3640f3b00474685e994b49fed6c31e8f
+size 29444981280

gemma-2-27b-it.imatrix
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:fa62436bc9274e797e6a525815823c8c9f3baf1a011c4da34ee3252bc77cefd8
 size 11786697
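The README's "Downloading using huggingface-cli" section is not shown in this diff. A typical invocation for fetching a single quant from this repo looks like the following; the choice of Q4_K_M and the target directory are illustrative, not taken from the truncated section.

```shell
# Install the Hugging Face CLI, then download one quant file.
pip install -U "huggingface_hub[cli]"
huggingface-cli download bartowski/gemma-2-27b-it-GGUF \
  --include "gemma-2-27b-it-Q4_K_M.gguf" \
  --local-dir ./
```

For the split f32 files, a broader pattern such as `--include "gemma-2-27b-it-f32.gguf/*"` would pull all three shards into a local folder.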