Quantized GGUF of Tifa-DeepsexV2-7b-NoCot-0325-F16.gguf from https://huggingface.co/ValueFX9507/Tifa-DeepsexV2-7b-MGRPO-GGUF-F16
Quantized with llama.cpp from https://github.com/ggml-org/llama.cpp
From my perspective, this version is significantly better than older versions.
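
One way to try these quants locally is with llama-cpp-python. Below is a minimal sketch, not an official usage example; the glob pattern for the filename is an assumption, so check the repo's file list for the actual GGUF names:

```python
# Minimal sketch: load a quant from this repo with llama-cpp-python.
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Misaka27260/Tifa-DeepsexV2-7b-NoCot-0325-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern; assumed quant naming
    n_ctx=4096,               # context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```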
Available quantizations: 4-bit, 5-bit, 6-bit, and 8-bit.
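
For reference, quants like these are typically produced with llama.cpp's `llama-quantize` tool. A minimal sketch follows; the build path and output filename are assumptions about a local llama.cpp checkout:

```python
# Minimal sketch: produce a 4-bit quant from the F16 GGUF with
# llama.cpp's llama-quantize tool. Paths assume a local checkout
# built into ./build (e.g. via cmake).
import subprocess

subprocess.run(
    [
        "./build/bin/llama-quantize",                # assumed build path
        "Tifa-DeepsexV2-7b-NoCot-0325-F16.gguf",     # source F16 GGUF
        "Tifa-DeepsexV2-7b-NoCot-0325-Q4_K_M.gguf",  # output name (assumed)
        "Q4_K_M",                                    # 4-bit k-quant type
    ],
    check=True,
)
```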
Base model: ValueFX9507/Tifa-DeepsexV2-7b-MGRPO-GGUF-F16