Furkan Gözükara PRO

MonsterMMORPG

AI & ML interests

Check out my YouTube channel SECourses for Stable Diffusion tutorials. They will help you tremendously with every topic.


Organizations

Social Post Explorers · Hugging Face Discord Community

Posts 76

I have Compared Kohya vs OneTrainer for FLUX Dev Finetuning / DreamBooth Training

OneTrainer can train FLUX Dev with text encoders, unlike Kohya, so I wanted to try it.

Unfortunately, the developer doesn't want to add a feature to save the trained CLIP L or T5 XXL as safetensors, or to merge them into the output, so they are basically useless without a lot of extra effort.

I still went ahead and tested EMA training. EMA normally improves quality significantly in SD 1.5 training. With FLUX I had to use the CPU for EMA, which was really slow, but I wanted to test it.
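For reference, EMA of weights is just a running average maintained alongside the trained weights. Here is a minimal plain-Python sketch of the general technique (a scalar illustration, not OneTrainer's or Kohya's actual implementation; real trainers do this per tensor, and the `decay` value is an assumed typical default):

```python
def ema_update(ema_weights, weights, decay=0.999):
    """One EMA step, in place: ema <- decay * ema + (1 - decay) * w.

    Plain-Python sketch of the usual EMA-of-weights technique; real
    trainers apply this per tensor on GPU (or, as with FLUX here, on CPU).
    """
    for i, w in enumerate(weights):
        ema_weights[i] = decay * ema_weights[i] + (1 - decay) * w
    return ema_weights

# "EMA update every N steps" (as in the grids below) just means calling
# ema_update once per N optimizer steps instead of after every step.
```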

I have tried to replicate my Kohya config. Below you will see the results. Sadly, the quality is nowhere close. More research has to be done, and since we still don't get usable text-encoder training due to the developer's decision, I don't see any benefit to using OneTrainer for FLUX training instead of Kohya.

1st image : Kohya best config : https://www.patreon.com/posts/112099700

2nd image : OneTrainer Kohya config with EMA update every 1 step

3rd image : OneTrainer Kohya config with EMA update every 5 steps

4th image : OneTrainer Kohya config

5th image : OneTrainer Kohya config but with Timestep Shift 1 instead of 3.1582

I am guessing that OneTrainer's Timestep Shift is not the same as Kohya's Discrete Flow Shift.
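For context, the common flow-matching timestep shift used in SD3/FLUX-style pipelines is the map sketched below. Whether OneTrainer's "Timestep Shift" and Kohya's "Discrete Flow Shift" both apply exactly this map is my assumption here, not something the comparison above confirms:

```python
def shift_timestep(t, shift=3.1582):
    # Common flow-matching shift: t' = s*t / (1 + (s-1)*t).
    # shift=1 is the identity; larger shifts push training/sampling
    # toward noisier timesteps. It is an assumption that both trainers'
    # settings mean this exact map.
    return shift * t / (1 + (shift - 1) * t)
```

If the two settings parameterize this curve differently (or one applies it in discrete sigma space), the same numeric value would produce different noise schedules, which could explain part of the quality gap.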

I probably need to do more tests and could improve the results, but I don't see any reason to at the moment. If CLIP training plus merging it into the safetensors file were working, I would have pursued it.

These are not cherry-picked results; all are from the first test grid.

Articles 7


I have Compared Kohya vs OneTrainer for FLUX Dev Finetuning / DreamBooth Training