AbstractPhil committed (verified) · Commit 334922e · 1 parent: c329373

Update README.md

Files changed (1): README.md (+1 −1)

README.md CHANGED
@@ -28,7 +28,7 @@ This repository contains code, configuration, and weights for the Dual Shunt Ada
 The adapter bridges T5 (or other transformer) text encoders with CLIP-based pooled embedding spaces, providing delta, gate, log_sigma, anchor, and guidance outputs for per-token, per-field semantic modulation.
 Compatible with custom and parallel CLIP streams (e.g., SDXL’s CLIP-L/CLIP-G), the system enables targeted latent field steering, dynamic classifier-free guidance, and localized prompt injection for advanced generative workflows—including direct integration with ComfyUI and HuggingFace Diffusers.
 
-The "no captions" versions learned to conform entirely to a zero-prompt state: given no prompt as a baseline, they are forced to learn the null space. This has been the most robust implementation so far; the outcomes show the best visual results, and these weights are the most difficult to corrupt or damage through additional training.
+The "no captions" versions learned to conform entirely to a zero-prompt state: given no prompt as a baseline, they are forced to learn the null space via Flan-T5-Base encodings compared against the anchored prompt. This has been the most robust implementation so far; the outcomes show the best visual results, and these weights are the most difficult to corrupt or damage through additional training.
 
 Adding the "noise" variation to the "no captions" normalized weights yields a stronger response to very short or brief prompts, based on random tokens planted throughout the caption at various locations. This bleeds additional information into the model slowly while still allowing it to converge more rapidly, without conforming to the hard-commit encoding memorization of the non-noise alternative.
 
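The README text above describes the adapter as mapping per-token T5 encoder states into a CLIP pooled embedding space and emitting five fields (delta, gate, log_sigma, anchor, guidance). The sketch below illustrates that interface shape only; the class name, layer layout, dimensions, and head definitions are all assumptions for illustration and are not the repository's actual API.

```python
import torch
import torch.nn as nn


class ShuntAdapterSketch(nn.Module):
    """Hypothetical sketch of a shunt-adapter head stack.

    Projects per-token T5 hidden states (t5_dim) into a CLIP-sized
    space (clip_dim) and emits the five output fields named in the
    README. All names and shapes are illustrative assumptions.
    """

    def __init__(self, t5_dim: int = 768, clip_dim: int = 1280):
        super().__init__()
        self.proj = nn.Linear(t5_dim, clip_dim)         # bridge T5 -> CLIP space
        self.delta = nn.Linear(clip_dim, clip_dim)      # additive steering offset
        self.gate = nn.Linear(clip_dim, clip_dim)       # per-token mixing gate
        self.log_sigma = nn.Linear(clip_dim, clip_dim)  # log-variance estimate
        self.anchor = nn.Linear(clip_dim, clip_dim)     # reference embedding
        self.guidance = nn.Linear(clip_dim, 1)          # scalar guidance signal

    def forward(self, t5_states: torch.Tensor) -> dict:
        # t5_states: (batch, tokens, t5_dim)
        h = self.proj(t5_states)                        # (batch, tokens, clip_dim)
        return {
            "delta": self.delta(h),
            "gate": torch.sigmoid(self.gate(h)),        # bounded to (0, 1)
            "log_sigma": self.log_sigma(h),
            "anchor": self.anchor(h),
            "guidance": self.guidance(h).mean(dim=1),   # pooled per sample
        }


if __name__ == "__main__":
    adapter = ShuntAdapterSketch()
    out = adapter(torch.randn(2, 77, 768))
    print(out["delta"].shape)  # torch.Size([2, 77, 1280])
```

Under this reading, the per-token `delta` and `gate` fields would drive localized prompt injection, while the pooled `guidance` scalar would feed dynamic classifier-free guidance; the real repository may structure these heads differently.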