Error When Trying To Run In ComfyUI

#11
by inferior321 - opened

I get the following error when trying to load the model and run it in ComfyUI:

!!! Exception during processing!!! module 'torch' has no attribute 'float8_e4m3fn'

Actually, when examining a few things, I think I may know the issue. Will report back once I've tested my hypothesis.
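In the meantime, a quick sanity check (just a sketch, assuming a standard PyTorch install) is to see whether your torch build even exposes the fp8 dtype ComfyUI is asking for:

```python
import torch

# torch.float8_e4m3fn only exists in newer PyTorch builds (around 2.1+);
# older builds raise exactly this AttributeError when fp8 weights are loaded.
print(torch.__version__)
print(hasattr(torch, "float8_e4m3fn"))  # False -> upgrade torch or use the fp16 files
```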

Stability AI org

Hi @inferior321 we made some small updates to the ComfyUI workflow to make sure it loads the correct checkpoints and text encoders. Please update your cloned repo to the latest commit and try again, thanks!

It turns out that I needed to download the version of the files that didn't include FP8. I suppose my card (an Nvidia RTX 3060) doesn't have native support for it, and that might have been causing the errors. After downloading the CLIP models separately and using the triple CLIP loader, things seem to be working without throwing errors.

@leemeng I assume my AMD GPU is going to be useless for quite a while then? I have ComfyUI and can load other models that work, but when I try to load these SD3 models and prompt them, the KSampler looks like it's generating the image, and then I get an image of pure artifacts along with an error in the terminal:

tokens = clip.tokenize(text)
AttributeError: 'NoneType' object has no attribute 'tokenize'

Stability AI org

@Tiemnota not 100% sure about the AMD setup, but the error message you're getting

AttributeError: 'NoneType' object has no attribute 'tokenize'

means that your text encoder (the clip object) is not properly instantiated. I would suggest you check whether you have both CLIP L and CLIP G downloaded and moved into the clip/ subfolder under your ComfyUI directory, and then run the triple CLIP node to see whether CLIP loads.
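Something like this can confirm the files are in place (a minimal sketch; the directory and filenames are assumptions, adjust them to your install):

```python
from pathlib import Path

# Assumed default ComfyUI layout -- change clip_dir if yours differs.
clip_dir = Path("ComfyUI/models/clip")
for name in ("clip_g.safetensors", "clip_l.safetensors", "t5xxl_fp16.safetensors"):
    status = "found" if (clip_dir / name).exists() else "MISSING"
    print(f"{name}: {status}")
```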

I tried several combinations:

sd3_medium_incl_clips_t5xxlfp8.safetensors, using the CLIP it contains -> Error

sd3_medium.safetensors + TripleCLIPLoader(clip_g, clip_l, t5xxl_fp8_e4m3fn) -> Error

sd3_medium.safetensors + DualCLIPLoader(clip_g, clip_l, sd3) -> Success

sd3_medium_incl_clips.safetensors, using the CLIP it contains -> Success

I think the problem is t5xxl_fp8_e4m3fn.
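One way to check what a given file actually contains is to read the safetensors header directly, which works even when torch lacks fp8 support (a sketch; the path is an assumption, adjust to your setup):

```python
import json
import struct

# A .safetensors file begins with an 8-byte little-endian header length,
# followed by a JSON header listing every tensor's dtype.
path = "ComfyUI/models/clip/t5xxl_fp8_e4m3fn.safetensors"  # assumed location
with open(path, "rb") as f:
    header_len = struct.unpack("<Q", f.read(8))[0]
    header = json.loads(f.read(header_len))
dtypes = {entry["dtype"] for name, entry in header.items() if name != "__metadata__"}
print(dtypes)  # expect {'F8_E4M3'} for the fp8 variant, {'F16'} for fp16
```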

@tww448 I've tried all of those with every workflow I've seen in YouTube videos, including the basic one. I put the CLIPs in the clip folder and the SD3 models in the checkpoints folder. I keep mixing them in different combinations and getting the same ridiculous images of artifacts. I'm using the DirectML approach on Windows. This is frustrating. I think Stability AI should make a video showing things step by step. There's always a problem with AMD GPUs.


@inferior321 I started ComfyUI with --fp16-unet, and no more "!!! Exception during processing!!! module 'torch' has no attribute 'float8_e4m3fn'".

When running the simplest ComfyUI script provided, I got this error message:

"Error occurred when executing TripleCLIPLoader:
Error while deserializing header: InvalidHeaderDeserialization"

To get it working, I switched the default t5xxl_fp8_e4m3fn.safetensors in the TripleCLIPLoader for t5xxl_fp16.safetensors, which you can also download from this repo.

@qaraleza that is also how I solved my issue. I tried the other suggestions above and had no success. I'm assuming that my hardware simply isn't compatible with FP8, since I've been made aware that it's relatively new compared to FP16, which has been around longer.

I've got an RTX 4090, the most recent Nvidia consumer card, and FP8 doesn't work for me. So it's maybe not an issue of hardware age.

Interesting. Hopefully it will be worked out at some point, but for now FP16 will suffice. This error is the only reason I didn't download the all-in-one model, so it would definitely be nice to have it addressed so I don't have to import multiple files into the UI just to run the model.

I had the same error on an Nvidia RTX 4090. I upgraded torch with: pip install --upgrade torch torchvision torchaudio -r requirements.txt

The combination of these models with the basic workflow now works:

checkpoint: sd3_medium_incl_clips_t5xxlfp8.safetensors
clip 1: clip_g.safetensors
clip 2: clip_l.safetensors
clip 3: t5xxl_fp16.safetensors

Nice, @cat3y3, that pip upgrade helped, and now FP8 works.

I think I might have had to upgrade xformers as well.

I did some more research, and it does appear that my graphics card (the 3060) does not support FP8, but the 4090s do.
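If you want to check your own card, here's a rough heuristic I put together (an assumption on my part, not an official support matrix):

```python
import torch

# Rough heuristic: native FP8 tensor cores arrived with Ada/Hopper
# (compute capability 8.9 / 9.0). A 3060 is sm_86, a 4090 is sm_89.
# Older cards may still load fp8-stored weights on a new enough torch
# build, which upcasts for the actual math.
major, minor = torch.cuda.get_device_capability(0)
support = "native fp8" if (major, minor) >= (8, 9) else "no native fp8"
print(f"sm_{major}{minor}: {support}")
```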

inferior321 changed discussion status to closed

What exactly is the difference between fp8 and fp16?

They're floating-point formats: FP8 stores each number in 8 bits, FP16 in 16 bits, so FP8 has much less precision. Using FP8 does offer advantages in certain ways, I've heard, such as in training, but not all cards support it since it's newer than FP16. Also, the FP8 version uses less memory than the FP16 one, making it less resource-intensive.
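Here's a tiny illustration of the rounding difference (assuming a torch build new enough to expose the fp8 dtype):

```python
import torch

# FP16: 1 sign + 5 exponent + 10 mantissa bits.
# FP8 (e4m3): 1 sign + 4 exponent + 3 mantissa bits, so far coarser rounding.
x = torch.tensor([0.1234567])
print(x.half().item())                           # ~0.12347
print(x.to(torch.float8_e4m3fn).float().item())  # ~0.125
```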

The pip upgrade and model combination posted above work for me as well. Thanks!
