A Q8_0 quantization (GGUF) of Tifa-DeepsexV2-7b-Cot-0317-F16.gguf from https://huggingface.co/ValueFX9507/Tifa-DeepsexV2-7b-MGRPO-GGUF-F16

Quantized with llama.cpp from https://github.com/ggml-org/llama.cpp
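For reference, a quantization like this can be reproduced with llama.cpp's `llama-quantize` tool. This is a sketch, not the exact command used here; the file paths assume you have built llama.cpp locally and downloaded the F16 GGUF into the current directory.

```shell
# Convert the F16 GGUF to 8-bit (Q8_0) with llama.cpp's quantize tool.
# Paths are illustrative; adjust to your llama.cpp build directory.
./llama-quantize \
  Tifa-DeepsexV2-7b-Cot-0317-F16.gguf \
  Tifa-DeepsexV2-7b-Cot-0317-Q8_0.gguf \
  Q8_0
```

The resulting Q8_0 file can then be loaded with any llama.cpp-compatible runtime, e.g. `./llama-cli -m Tifa-DeepsexV2-7b-Cot-0317-Q8_0.gguf`.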
Model tree for Misaka27260/Tifa-DeepsexV2-7b-Cot-0317-Q8_0_gguf

Base model: ValueFX9507/Tifa-DeepsexV2-7b-MGRPO-GGUF-F16