CappyHermes (Test Upload)

โš ๏ธ Note: Despite the repo name, this model is based on a 7B architecture (Mistral) with Q8 quantization.
The file name reflects the correct base model.

This upload is for testing Hugging Face workflows, public linking, and Vast.ai/Oobabooga integration.
Do not use it as a benchmark reference, and do not assume it is an 8B model.
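For the public-linking test mentioned above, a direct download URL for the GGUF file can be built from the repo id and file name. The repo id below (`Babs/CappyHermes`) is a hypothetical placeholder, since the actual repo id is not stated here; the file name is the one listed in this card.

```python
# Build a Hugging Face "resolve" URL for a file in a model repo.
# NOTE: repo_id is a hypothetical example; substitute the real repo id.
repo_id = "Babs/CappyHermes"
filename = "capybarahermes-2.5-mistral-7b.Q8_0.gguf"

# Files in a HF repo are served at /<repo_id>/resolve/<revision>/<filename>
url = f"https://huggingface.co/{repo_id}/resolve/main/{filename}"
print(url)
```

This is the same URL scheme the Hub uses for its "download" links, so it is a quick way to sanity-check that a file is publicly reachable (e.g. with `curl -I` on the printed URL).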

Uploaded by Babs as part of a personal LoRA and infrastructure prototyping process.

File:

  • capybarahermes-2.5-mistral-7b.Q8_0.gguf (7.7GB, 32K context)

License: OpenRAIL++

Model details:

  • Format: GGUF
  • Model size: 7.24B params
  • Architecture: llama
  • Quantization: 8-bit
