#Roleplay #Multimodal #Vision

These are quants for an experimental model.

```python
quantization_options = [
    "Q4_K_M", "Q4_K_S", "IQ4_XS",
    "Q5_K_M", "Q5_K_S",
    "Q6_K", "Q8_0"
]
```
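For reference, one GGUF file per entry in this list could be produced with llama.cpp's quantize tool. The sketch below is a minimal Python wrapper, assuming an f16 GGUF conversion of the original model and the `llama-quantize` binary on your PATH; the file names are placeholders, not files shipped in this repo.

```python
import subprocess

quantization_options = [
    "Q4_K_M", "Q4_K_S", "IQ4_XS",
    "Q5_K_M", "Q5_K_S",
    "Q6_K", "Q8_0",
]

# Hypothetical input: an f16 GGUF conversion of the original model weights.
base = "Eris_PrimeV4.69-Vision-32k-7B"
source_gguf = f"{base}-f16.gguf"

for quant in quantization_options:
    output_gguf = f"{base}-{quant}.gguf"
    # llama.cpp usage: llama-quantize <input.gguf> <output.gguf> <type>
    subprocess.run(["llama-quantize", source_gguf, output_gguf, quant], check=True)
```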

Original model weights and information:
https://huggingface.co/Nitral-AI/Eris_PrimeV4.69-Vision-32k-7B

MMPROJ:
./mmproj/mmproj-model-f16.gguf


Vision/multimodal capabilities:

If you want to use vision functionality:

  • You must use the latest version of KoboldCpp.

To use the multimodal/vision capabilities of this model, you need to load the specified mmproj file, which can be found inside this model repo.

  • You can load the mmproj using the corresponding section in the KoboldCpp interface (a command-line launch is sketched after the screenshot below):

[Screenshot: the model-files section of the KoboldCpp interface where the mmproj is loaded]
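Alternatively, KoboldCpp can be pointed at the model and the mmproj when it is launched. This is a minimal sketch, assuming KoboldCpp is available as `koboldcpp.py` in the current directory, that a quant (here Q5_K_M, as a placeholder) has been downloaded next to it, and that the usual `--model`, `--mmproj`, and `--contextsize` flags are present in your KoboldCpp version; check `python koboldcpp.py --help` to confirm.

```python
import subprocess

# Placeholder file names; substitute the quant you actually downloaded.
model_gguf = "Eris_PrimeV4.69-Vision-32k-7B-Q5_K_M.gguf"
mmproj_gguf = "mmproj-model-f16.gguf"  # from the ./mmproj/ folder of this repo

# Launch KoboldCpp with the vision projector loaded (assumed flags:
# --model, --mmproj, --contextsize).
subprocess.run([
    "python", "koboldcpp.py",
    "--model", model_gguf,
    "--mmproj", mmproj_gguf,
    "--contextsize", "32768",
], check=True)
```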
