# NitroFusion GGUF
This repository contains GGUF versions of the ChenDY/NitroFusion models, converted for efficient inference and merged with the sdxl-vae-fp16-fix VAE.
The models are available in the following quantization formats (a download sketch follows the list):
- q2_k
- q3_k
- q4_0
- q5_0
- q6_k
- q8_0
- f16
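
A minimal sketch of fetching one quantized file with `huggingface_hub` for use in a GGUF-capable runtime (e.g. stable-diffusion.cpp or ComfyUI-GGUF). The repo id and filename below are placeholders, not the real names in this repository; check the "Files and versions" tab for the actual values.

```python
# Download a single GGUF quantization from the Hub.
# repo_id and filename are assumptions; replace them with the
# values shown in this repository's file listing.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="<this-gguf-repo-id>",        # hypothetical: the id of this GGUF repo
    filename="nitrofusion-q4_0.gguf",     # hypothetical: the q4_0 file name
)
print(f"Model saved to {model_path}")
```

Smaller quants (q2_k, q3_k) trade quality for memory; q8_0 and f16 stay closest to the original weights at the cost of larger downloads.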