Impulse2000/Dolphin-Mistral-24B-Venice-Edition-Q6_K-GGUF
Likes: 0
Tags: GGUF, llama-cpp, gguf-my-repo, conversational
License: apache-2.0
Files and versions (branch: main)
1 contributor, 3 commits
Latest commit by Impulse2000: "Upload README.md with huggingface_hub" (8b2c8ff, verified, 6 days ago)
File                                          Size     Last commit                                                                 Age
.gitattributes                                1.6 kB   Upload dolphin-mistral-24b-venice-edition-q6_k.gguf with huggingface_hub   6 days ago
README.md                                     2.03 kB  Upload README.md with huggingface_hub                                      6 days ago
dolphin-mistral-24b-venice-edition-q6_k.gguf  19.3 GB  Upload dolphin-mistral-24b-venice-edition-q6_k.gguf with huggingface_hub   6 days ago
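
The repository ships a single Q6_K quantization tagged for llama-cpp, so the most direct way to run it is to download the 19.3 GB GGUF file and load it with a llama.cpp-compatible runtime. The sketch below uses the llama-cpp-python binding, which is an assumption (the page only confirms the llama-cpp/GGUF tags); the context size, GPU offload setting, and prompt are illustrative, not taken from the repo.

```python
# Minimal sketch: loading the Q6_K GGUF locally with llama-cpp-python.
# Assumes the file from this repo has already been downloaded to the
# current directory; settings below are examples, not repo defaults.
from llama_cpp import Llama

llm = Llama(
    model_path="./dolphin-mistral-24b-venice-edition-q6_k.gguf",  # 19.3 GB file listed above
    n_ctx=4096,        # context window; adjust to available RAM/VRAM
    n_gpu_layers=-1,   # offload all layers to GPU if a GPU-enabled build is installed
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```

Q6_K is one of llama.cpp's k-quant formats: roughly 6.5 bits per weight, trading a small quality loss for a much smaller footprint than the original full-precision checkpoint.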