---
license: other
base_model: "black-forest-labs/FLUX.1-dev"
tags:
- flux
- flux-diffusers
- text-to-image
- diffusers
- simpletuner
- safe-for-work
- lora
- template:sd-lora
- lycoris
inference: true
widget:
- text: 'unconditional (blank prompt)'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_0_0.png
- text: 'Klimt Style Painting, a hipster man with a beard, building a chair.'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_1_0.png
- text: 'Klimt Style Painting, a brown haired woman with a bronze helmet and scaled bronze torso armor. her armor has a mask like face design on the chest. She is holding a spear in her left hand. She holds a small nude female figurine with arms outstretched in her right hand. A simple dark background with shadowy human figures and hints of green foliage.'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_2_0.png
- text: 'Klimt Style Painting, a hamster.'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_3_0.png
- text: 'a man holding a sign that says, ''this is a sign'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_4_0.png
- text: 'a pig, in a post apocalyptic world, with a shotgun, in a leather jacket, in a desert, with a motorcycle'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_5_0.png
- text: 'Klimt Style Painting, woman holding a sign that says ''I LOVE PROMPTS!'''
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_6_0.png
- text: 'Klimt Style Painting, a sleeping nude red haired woman laying in a fetal position. She is wrapped in purple silks and laying on a white sheet. Her right hand is gripping the sheet. there is a signature at the bottom right corner.'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_7_0.png
---

# Klimt-Phase2-2e-5-ss3.0

This is a LyCORIS adapter derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).

No validation prompt was used during training.

## Validation settings

- CFG: `4.0`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `FlowMatchEulerDiscreteScheduler`
- Seed: `42`
- Resolution: `1024x1024`
- Skip-layer guidance:

Note: The validation settings are not necessarily the same as the [training settings](#training-settings).

You can find some example images in the following gallery:

<Gallery />

The text encoder **was not** trained. You may reuse the base model text encoder for inference.
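For quick reference, the validation settings above map directly onto a diffusers call. This is a minimal sketch, assuming `pipeline` is the FLUX pipeline with this adapter already merged in (see the [Inference](#inference) section below for the complete setup); FLUX.1-dev's default scheduler is already `FlowMatchEulerDiscreteScheduler`, matching the sampler listed above.

```python
# Minimal sketch mapping the validation settings onto a diffusers call.
# Assumes `pipeline` is already loaded with the adapter merged in.
import torch

image = pipeline(
    prompt="Klimt Style Painting, a hamster.",      # one of the widget prompts above
    num_inference_steps=20,                         # Steps: 20
    guidance_scale=4.0,                             # CFG: 4.0
    width=1024,
    height=1024,                                    # Resolution: 1024x1024
    generator=torch.Generator().manual_seed(42),    # Seed: 42
).images[0]
```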
## Training settings

- Training epochs: 21
- Training steps: 9500
- Learning rate: 2e-05
- Learning rate schedule: constant
- Warmup steps: 100
- Max grad norm: 0.1
- Effective batch size: 3
- Micro-batch size: 3
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Gradient checkpointing: True
- Prediction type: flow-matching (extra parameters=['shift=3.0', 'flux_guidance_mode=constant', 'flux_guidance_value=4.0', 'flow_matching_loss=compatible'])
- Optimizer: adamw_bf16
- Trainable parameter precision: Pure BF16
- Caption dropout probability: 10.0%

### LyCORIS Config:

```json
{
    "algo": "lokr",
    "multiplier": 1.0,
    "linear_dim": 10000,
    "linear_alpha": 1,
    "factor": 16,
    "apply_preset": {
        "target_module": [
            "Attention",
            "FeedForward"
        ],
        "module_algo_map": {
            "Attention": {
                "factor": 16
            },
            "FeedForward": {
                "factor": 8
            }
        }
    }
}
```
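If you want to train a similar LoKr adapter yourself, the JSON above corresponds roughly to the following `lycoris-lora` calls. This is a minimal sketch, not the exact SimpleTuner training code; `transformer` stands in for an already-loaded FLUX transformer module (e.g. `pipeline.transformer`).

```python
# Sketch: applying the LoKr config above with the lycoris-lora library.
# Illustrative only; SimpleTuner drives this internally during training.
from lycoris import create_lycoris, LycorisNetwork

# Restrict injection to attention and feed-forward blocks, with the
# per-module factors from the config above.
LycorisNetwork.apply_preset(
    {
        "target_module": ["Attention", "FeedForward"],
        "module_algo_map": {
            "Attention": {"factor": 16},
            "FeedForward": {"factor": 8},
        },
    }
)

lycoris_network = create_lycoris(
    transformer,        # assumed: an already-loaded FLUX transformer
    1.0,                # multiplier
    linear_dim=10000,   # effectively "full dimension" for LoKr
    linear_alpha=1,
    algo="lokr",
    factor=16,
)
lycoris_network.apply_to()  # inject adapter modules before training
```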
## Datasets

### klimt-background-512
- Repeats: 22
- Total number of images: 7
- Total number of aspect buckets: 4
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### klimt-background-768
- Repeats: 22
- Total number of images: 7
- Total number of aspect buckets: 4
- Resolution: 0.589824 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### klimt-background-1024
- Repeats: 11
- Total number of images: 7
- Total number of aspect buckets: 5
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### klimt-background-1536
- Repeats: 5
- Total number of images: 5
- Total number of aspect buckets: 3
- Resolution: 2.359296 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### klimt-background-512-crop
- Repeats: 11
- Total number of images: 7
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
- Used for regularisation data: No

### klimt-background-768-crop
- Repeats: 11
- Total number of images: 6
- Total number of aspect buckets: 1
- Resolution: 0.589824 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
- Used for regularisation data: No

### klimt-background-512-tight-crop
- Repeats: 11
- Total number of images: 7
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
- Used for regularisation data: No

### klimt-background-768-tight-crop
- Repeats: 11
- Total number of images: 6
- Total number of aspect buckets: 1
- Resolution: 0.589824 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
- Used for regularisation data: No

### klimt-background-1024-crop
- Repeats: 5
- Total number of images: 5
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
- Used for regularisation data: No

## Inference

```python
import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris_from_weights


def download_adapter(repo_id: str):
    """Download the adapter weights from the Hub and return the local file path."""
    import os
    from huggingface_hub import hf_hub_download
    adapter_filename = "pytorch_lora_weights.safetensors"
    cache_dir = os.environ.get('HF_PATH', os.path.expanduser('~/.cache/huggingface/hub/models'))
    cleaned_adapter_path = repo_id.replace("/", "_").replace("\\", "_").replace(":", "_")
    path_to_adapter = os.path.join(cache_dir, cleaned_adapter_path)
    path_to_adapter_file = os.path.join(path_to_adapter, adapter_filename)
    os.makedirs(path_to_adapter, exist_ok=True)
    hf_hub_download(
        repo_id=repo_id, filename=adapter_filename, local_dir=path_to_adapter
    )

    return path_to_adapter_file


model_id = 'black-forest-labs/FLUX.1-dev'
adapter_repo_id = 'mipat12/Klimt-Phase2-2e-5-ss3.0'
adapter_file_path = download_adapter(repo_id=adapter_repo_id)
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)  # loading directly in bf16

# Build the LyCORIS wrapper from the downloaded weights and merge it
# into the transformer at full strength.
lora_scale = 1.0
wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_file_path, pipeline.transformer)
wrapper.merge_to()

prompt = "An astronaut is riding a horse through the jungles of Thailand."

## Optional: quantise the model to save on VRAM.
## Note: the model was quantised during training, so it is recommended to do the same at inference time.
from optimum.quanto import quantize, freeze, qint8
quantize(pipeline.transformer, weights=qint8)
freeze(pipeline.transformer)

device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
pipeline.to(device)  # the pipeline is already in its target precision level
image = pipeline(
    prompt=prompt,
    num_inference_steps=20,
    generator=torch.Generator(device=device).manual_seed(42),
    width=1024,
    height=1024,
    guidance_scale=4.0,
).images[0]

image.save("output.png", format="PNG")
```
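A note on adapter strength: with the LyCORIS wrapper above, `lora_scale` is fixed when `create_lycoris_from_weights` is called and then folded into the transformer weights by `merge_to()`. To try a different strength, reload the base pipeline and re-create the wrapper; the snippet below is a hypothetical variation on the example above, not part of the upstream template.

```python
# Hypothetical variation: merge the adapter at 80% strength. Start from a
# freshly loaded pipeline -- merge_to() has already folded the previous
# scale into the transformer weights, so scales would otherwise stack.
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
wrapper, _ = create_lycoris_from_weights(0.8, adapter_file_path, pipeline.transformer)
wrapper.merge_to()
```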