QuantStack / InternVL3_5-38B-gguf
GGUF · conversational
License: apache-2.0
Files and versions
InternVL3_5-38B-gguf · 108 GB · 1 contributor · History: 13 commits
Latest commit by wsbagnsv1: Rename internvl3_5-38b-q2_k.gguf to InternVL3_5-38b-q2_k.gguf (eb66e7a, verified) · 2 months ago
File                              Size       Last commit                                                      Updated
.gitattributes                    2.04 kB    Rename internvl3_5-38b-q2_k.gguf to InternVL3_5-38b-q2_k.gguf    2 months ago
InternVL3_5-38B-iq4_xs.gguf       17.9 GB    Upload InternVL3_5-38B-iq4_xs.gguf                               2 months ago
InternVL3_5-38b-q2_k.gguf         12.3 GB    Rename internvl3_5-38b-q2_k.gguf to InternVL3_5-38b-q2_k.gguf    2 months ago
README.md                         324 Bytes  Update README.md                                                 2 months ago
internvl3_5-38b-q3_k_s.gguf       14.4 GB    Upload internvl3_5-38b-q3_k_s.gguf with huggingface_hub          2 months ago
internvl3_5-38b-q8_0.gguf         34.8 GB    Upload internvl3_5-38b-q8_0.gguf with huggingface_hub            2 months ago
mmproj-InternVL3_5-38B-bf16.gguf  11.3 GB    Upload 2 files                                                   2 months ago
mmproj-InternVL3_5-38B-f16.gguf   11.3 GB    Upload 2 files                                                   2 months ago
mmproj-InternVL3_5-38B-q8_0.gguf  6 GB       Upload mmproj-InternVL3_5-38B-q8_0.gguf                          2 months ago
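
The individual GGUF files can be fetched with huggingface_hub, the same library referenced in the upload commit messages above. A minimal download sketch, assuming you want the q2_k quantization plus the q8_0 multimodal projector; any other filename from the table works the same way:

```python
# Minimal download sketch using huggingface_hub (install with: pip install huggingface_hub).
# Filenames are taken from the listing above; pick whichever quantization fits your hardware.
from huggingface_hub import hf_hub_download

REPO_ID = "QuantStack/InternVL3_5-38B-gguf"

# Main language-model weights (q2_k shown as an example; iq4_xs, q3_k_s, q8_0 are also listed).
model_path = hf_hub_download(repo_id=REPO_ID, filename="InternVL3_5-38b-q2_k.gguf")

# Multimodal projector (mmproj) file used alongside the model weights for image input.
mmproj_path = hf_hub_download(repo_id=REPO_ID, filename="mmproj-InternVL3_5-38B-q8_0.gguf")

print(model_path)
print(mmproj_path)
```

The downloaded model and mmproj paths are then passed together to a GGUF-capable multimodal runtime such as llama.cpp; the exact invocation depends on the tool and version in use.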