Feedback: Successful LoRA training and appreciation (with Japanese guide)

#39
by rara2183ry9

Dear HiDream development team,

Thank you very much for releasing such a powerful and inspiring model.
I was able to train a LoRA model locally using an RTX 4090 (24GB VRAM), and I was especially impressed by how well the character identity was preserved during generation.

I’ve written a detailed note in Japanese summarizing my training process and settings, and I hope it can serve as a useful reference:
👉 https://note.com/fztyhdssise/n/nd2b08f8abdd3

At the same time, I found that the current model structure required some technical adjustments and workarounds before LoRA training would work. If future versions of HiDream included more LoRA-aware design considerations, the model would become even more approachable and valuable to a wider range of users.
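
For anyone curious about what the adapter-attachment step looks like in principle, here is a minimal sketch of injecting LoRA into attention projections with the PEFT library. The `ToyBlock` module and the target names (`to_q`, `to_k`, `to_v`, `to_out`), as well as the rank and alpha values, are placeholders I am assuming purely for illustration; they are not HiDream's actual layer names or my exact training settings (those are in the linked note), so please check the real model's `named_modules()` before adapting this.

```python
# Minimal sketch: attaching LoRA adapters to attention projections with PEFT.
# ToyBlock is a hypothetical stand-in for one transformer block, NOT HiDream's
# real architecture; module names and hyperparameters below are assumptions.
import torch
import torch.nn as nn
from peft import LoraConfig, get_peft_model


class ToyBlock(nn.Module):
    """Hypothetical stand-in for a single attention block."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        attn = torch.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)
        return self.to_out(attn @ v)


base = ToyBlock()

# Illustrative rank/alpha only; not the settings from my actual run.
config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["to_q", "to_k", "to_v", "to_out"],  # assumed names, verify on the real model
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trainable

# Sanity check: the wrapped model still runs a normal forward pass.
out = model(torch.randn(1, 8, 64))
print(out.shape)
```

In an actual run the adapters would be attached to the model's own transformer blocks rather than a toy module, but the mechanism is the same, which is why clearly named and documented projection layers would make LoRA support much smoother.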

I'm truly grateful for the continued development of HiDream, and I look forward to future updates. Thank you again for this incredible work.
