Work in progress. Please read the P.S. below.

Model Details

This is the SVDQuant int4 quantized variant of the base model mikeyandfriends/PixelWave_FLUX.1-schnell_04.

It was quantized with the official SVDQuant toolset, using both the fast and gptq presets.
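For readers unfamiliar with the method, here is a toy numpy sketch of the core SVDQuant idea: split the weight into a small high-precision low-rank branch plus an int4-quantized residual, so that outliers don't blow up the quantization scale. This is only an illustration of the principle, not the official toolset's actual pipeline (which also handles activation smoothing and kernel-level details); all names below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_int4(w):
    # symmetric int4 quantization: round to 16 levels in [-8, 7],
    # with a single per-tensor scale set by the largest weight
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -8, 7)
    return q * scale  # dequantized weights

def svdquant_sketch(w, rank=8):
    # SVDQuant idea (simplified): keep a rank-`rank` branch of the
    # weight in full precision and int4-quantize only the residual
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank]
    residual = w - low_rank
    return low_rank + quantize_int4(residual)

w = rng.standard_normal((64, 64))
w[0, 0] = 25.0  # an outlier that would dominate the plain int4 scale

err_plain = np.abs(w - quantize_int4(w)).mean()
err_svd = np.abs(w - svdquant_sketch(w, rank=8)).mean()
print(f"plain int4 error: {err_plain:.4f}, svd+int4 error: {err_svd:.4f}")
```

Because the low-rank branch absorbs the outlier energy, the residual's quantization scale is much smaller and the reconstruction error drops accordingly.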

P.S. It yields worse generation results than expected, so it is not recommended for now. I will take another shot at quantizing it in slow mode.

P.P.S. I ran the full quantization, but due to the way the toolset is implemented, a bug at the very end, in the evaluation part of the workflow, caused it to exit with an error without saving a single byte of the final quantization result. Everything was lost.

Since I paid for the compute myself, I consider the loss of roughly $60 a valuable lesson, but I will not redo the run, as I am short on free cash at the moment. Sorry. For those who would like to see it done: feel free to generate a redeemable credit code at RunPod and send it to me via Telegram (t.me/WaveCut), and I'll be happy to take another shot at it.


Model tree for WaveCut/PixelWave_FLUX.1-schnell_04_SVDQuant-int4