A Q8_0-quantized GGUF of Tifa-DeepsexV2-7b-Cot-0317-F16.gguf from https://huggingface.co/ValueFX9507/Tifa-DeepsexV2-7b-MGRPO-GGUF-F16

Quantized with llama.cpp from https://github.com/ggml-org/llama.cpp
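As a sketch of how such a quant is typically produced and used with llama.cpp's bundled tools (file names here are assumptions based on the model names above, not exact paths from this repo):

```shell
# Quantize the F16 source GGUF down to Q8_0 (8-bit) with llama.cpp's
# llama-quantize tool. Input/output file names are illustrative.
./llama-quantize Tifa-DeepsexV2-7b-Cot-0317-F16.gguf \
    Tifa-DeepsexV2-7b-Cot-0317-Q8_0.gguf Q8_0

# Run the quantized model locally with llama-cli:
# -m  path to the GGUF file
# -p  prompt text
# -n  maximum number of tokens to generate
./llama-cli -m Tifa-DeepsexV2-7b-Cot-0317-Q8_0.gguf \
    -p "Hello, introduce yourself." -n 256
```

Q8_0 keeps near-F16 quality at roughly half the file size, which is why it is a common choice when disk and RAM allow.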

Format: GGUF
Model size: 7.62B params
Architecture: qwen2
Quantization: 8-bit (Q8_0)

Model: Misaka27260/Tifa-DeepsexV2-7b-Cot-0317-Q8_0_gguf