Post 305
My in-development Surge training methodology and paradigm is powerful. Preliminary tests will be available for debugging soon, using a customized sd-scripts and a series of full finetunes that use SDXL as a catalyst for the training paradigm.
https://civitai.com/articles/14195/the-methodology-of-surge-training-loss-math
The datasets I'm sourcing will serve as catalysts and tests of Surge's power to teach very sticky or hard-to-understand elements, such as text, positioning, offset, and ControlNet poses, directly into the very stubborn SDXL infrastructure without additional tools.
It should be noted that my currently running finetunes based on BeatriXL are not Surge-trained, so you won't gain any insight into Surge from them.
GPT and I have prototyped a new version of SD1.5 that uses additional attention heads to match the Surge formula, the reformed Omega-ViT-L text encoder, a zero-initialized UNet, and the Flux 16-channel AE.
I'll call it SD-SURGE, as it's not SD1.5 anymore.
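To make the component swaps above concrete, here is a minimal, hedged sketch of the config deltas involved. The SD1.5 values are the stock ones; every SD-SURGE value is an assumption inferred from the post (extra attention heads, Omega-ViT-L in place of the stock CLIP ViT-L, a zero-initialized UNet, and a Flux-style 16-channel autoencoder), not the actual SD-SURGE configuration:

```python
# Hypothetical config sketch: stock SD1.5 vs. the SD-SURGE variant described
# above. SD-SURGE values are assumptions inferred from the post, not confirmed.

SD15_CONFIG = {
    "unet_in_channels": 4,       # stock SD1.5 VAE emits 4 latent channels
    "attention_head_dim": 8,     # stock SD1.5 attention heads per block
    "cross_attention_dim": 768,  # CLIP ViT-L hidden size
    "vae_latent_channels": 4,
    "zero_init_unet": False,
}

SD_SURGE_CONFIG = {
    "unet_in_channels": 16,      # must match the 16-channel AE latents
    "attention_head_dim": 16,    # "additional attention heads" (assumed value)
    "cross_attention_dim": 768,  # Omega-ViT-L, assumed ViT-L-sized
    "vae_latent_channels": 16,   # Flux-style 16-channel autoencoder
    "zero_init_unet": True,      # "a zeroed unet"
}

def validate(cfg: dict) -> None:
    """The UNet must consume exactly what the autoencoder produces."""
    if cfg["unet_in_channels"] != cfg["vae_latent_channels"]:
        raise ValueError(
            f"UNet expects {cfg['unet_in_channels']} latent channels but "
            f"the AE emits {cfg['vae_latent_channels']}"
        )

validate(SD15_CONFIG)
validate(SD_SURGE_CONFIG)
```

The one hard constraint this illustrates is the channel match: swapping in a 16-channel AE forces the UNet's input/output convolutions to be rebuilt for 16 latent channels, which is part of why the result is no longer SD1.5.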
The first Surge trainings are already underway.