Specific Size Mismatch Detail - LTXV 13b GGUF Quants

by PyrateGFX - opened

Hi,

Just wanted to follow up on your note that the LTXV 13b GGUF quants aren't working yet. I also encountered a loading error in ComfyUI.

While trying to understand the problem and how to fix it, I had Gemini analyze the traceback. It consistently shows a size mismatch: parameters in the GGUF checkpoint have a dimension of 4096, but the model in ComfyUI expects 2048 for those same parameters (e.g., in scale_shift_table and various transformer_blocks weights such as attn1.q_norm.weight).

This specific 4096 vs. 2048 discrepancy strongly suggests the issue lies in the conversion process, likely in how a core internal dimension was handled by the conversion scripts.
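The kind of check the traceback implies can be sketched in plain Python: compare the shapes stored in the checkpoint against what the model config expects and list every tensor that differs. The tensor names and shapes below are illustrative, taken from the error messages quoted above, not read from the actual files.

```python
def find_shape_mismatches(checkpoint_shapes, expected_shapes):
    """Return (name, found, expected) for every tensor whose shape differs."""
    return [
        (name, checkpoint_shapes[name], expected)
        for name, expected in expected_shapes.items()
        if name in checkpoint_shapes and checkpoint_shapes[name] != expected
    ]

# Shapes mimicking the reported error: the checkpoint carries an inner
# dimension of 4096, while the ComfyUI-side model is built with 2048.
checkpoint = {
    "scale_shift_table": (2, 4096),
    "transformer_blocks.0.attn1.q_norm.weight": (4096,),
}
expected = {
    "scale_shift_table": (2, 2048),
    "transformer_blocks.0.attn1.q_norm.weight": (2048,),
}

for name, found, want in find_shape_mismatches(checkpoint, expected):
    print(f"{name}: checkpoint has {found}, model expects {want}")
```

Every affected tensor differing by exactly the same factor (4096 vs. 2048) is what points at a single misconfigured core dimension rather than a corrupted file.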

Thanks for working on this!

Best regards,

I'm getting the exact same error loading the non-GGUF weights with Diffusers, so something may be going wrong even before the GGUF conversion step.

As far as I know, it's probably a config issue with their models, but I'm still investigating.
Edit: nope, the issue lies in the way it is loaded. The config seems to be hardcoded for 2048, but the model has 4096. Their .safetensors has the exact same issue when I load it as a diffusion model, as you said, tintwotin /:

Have you installed the kernel they recommend (https://github.com/Lightricks/LTX-Video-Q8-Kernels)? Is it necessary to install it to run the GGUF quants?

You need it for one of the nodes they provide, but I don't think it's needed for the GGUFs. I installed it anyway, though.

The loader issue has nothing to do with that, though.

How does their original model work in ComfyUI without this size mismatch error, then? A few people on Reddit claim they have been running it on RunPod. Do those LTXV custom nodes hide some dirty tricks that aren't compatible with mainstream libraries and conversion methods?

It only happens with the checkpoint loader, since that one doesn't use the standard model config, I guess.

For upscaling, Get VAE doesn't work. I linked the VAE in the correct node (that part is fine), but the Grain node throws an error if I use the DisTorch loader.

I'll check it; I didn't really use that part and just left it in from the original example workflow.

Otherwise, some people just have weird issues that no one can really explain, lol. A reinstall of ComfyUI portable could help; the newest version, 0.3.33, should work without issues. I've linked a Sage Attention installer in the workflow too, and you can even change the model location to reuse the models from your old install. If you need help with that, I could help you (;

Great, where did you find an installer for Sage Attention? (This? https://github.com/thu-ml/SageAttention)
0.3.33 is good, but you have to force-update the dependencies to update to 0.3.33.

Edit: I added a "Clean VRAM Used" node before the Grain node and it works fine.

Good job, everything works. I solved the problem by updating the ComfyUI dependencies; I had to run the update 3 or 4 times in a row because the updates installed sequentially, one at a time.
Laptop with 32 GB RAM and a 4070 with 8 GB VRAM.
4-5 minutes for an I2V video generation without the upscaler.
Now I'll test whether it follows prompts, and try the upscaler.

@bicio78ita at what resolution? And what do you have in your workflow?

768 x 512 at 25 frames per second, 97 frames total, with SageAttention; the workflow is the same as the one here.

Only your CLIP file works; the other CLIP doesn't.


I have a batch file that installs Sage Attention (though you might also need Git and CMake, I don't remember) if you place it in the correct folder. The link is in one of the note nodes in the workflow (;

Yeah, that Clear VRAM node can be helpful. I had similar issues, though people with bigger VRAM didn't have them.
