AndreyRGW/saiga_tlite_8b_abliterated-GGUF
This is a quantized version of IlyaGusev/saiga_tlite_8b_abliterated_sft_m1_d9, created with llama.cpp.
Quantization: 4-bit
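
The GGUF file can be run locally with llama.cpp or its Python bindings. Below is a minimal sketch using llama-cpp-python; the file name and the Russian system prompt are assumptions, so check the repository's file list for the exact quantized file before running it.

```python
# Minimal sketch: running the 4-bit GGUF locally with llama-cpp-python.
# The file name below is an assumption; use the actual .gguf file from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="saiga_tlite_8b_abliterated.Q4_K_M.gguf",  # assumed file name
    n_ctx=8192,       # context window; lower it if memory is tight
    n_gpu_layers=-1,  # offload all layers to GPU when one is available
)

# Saiga is a Russian-language chat model, so the chat-completion API is the
# natural entry point; the system prompt here is a typical example, not prescribed.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Ты — Сайга, русскоязычный ассистент."},
        {"role": "user", "content": "Привет! Кто ты?"},
    ],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```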