FLUX schnell Quantized Models
This repo contains quantized versions of the FLUX schnell transformer for use in InvokeAI.
Contents:
transformer/base/
- Transformer in bfloat16 copied from here
transformer/bnb_nf4/
- Transformer quantized to bitsandbytes NF4 format using this script
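For reference, below is a minimal sketch of how a model's linear layers can be quantized to NF4 with bitsandbytes. This is an illustration of the general technique, not the actual script used to produce the files in this repo; the function name and recursion strategy are hypothetical.

```python
# Hypothetical sketch of bitsandbytes NF4 quantization -- NOT the
# actual script used to produce the checkpoints in this repo.
import torch
import bitsandbytes as bnb


def quantize_linears_to_nf4(module: torch.nn.Module) -> torch.nn.Module:
    """Recursively replace nn.Linear layers with bitsandbytes NF4 layers."""
    for name, child in module.named_children():
        if isinstance(child, torch.nn.Linear):
            qlinear = bnb.nn.Linear4bit(
                child.in_features,
                child.out_features,
                bias=child.bias is not None,
                compute_dtype=torch.bfloat16,  # matches the bf16 base weights
                quant_type="nf4",
            )
            # Wrap the existing weights; the actual block-wise 4-bit
            # packing happens when the module is moved to a CUDA device.
            qlinear.weight = bnb.nn.Params4bit(
                child.weight.data, requires_grad=False, quant_type="nf4"
            )
            if child.bias is not None:
                qlinear.bias = child.bias
            setattr(module, name, qlinear)
        else:
            quantize_linears_to_nf4(child)
    return module
```

Moving the returned model to a GPU (e.g. `model.cuda()`) triggers the 4-bit packing; at inference time the NF4 weights are dequantized on the fly to the bfloat16 compute dtype.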