metadata
license: apache-2.0
base_model: fal/AuraFlow
tags:
  - auraflow
  - auraflow-diffusers
  - text-to-image
  - image-to-image
  - diffusers
  - simpletuner
  - not-for-all-audiences
  - lora
  - template:sd-lora
  - standard
pipeline_tag: text-to-image
inference: true
widget:
  - text: unconditional (blank prompt)
    parameters:
      negative_prompt: ugly, cropped, blurry, low-quality, mediocre average
    output:
      url: ./assets/image_0_0.png
  - text: An domokun running through a field with flowers all around him.
    parameters:
      negative_prompt: ugly, cropped, blurry, low-quality, mediocre average
    output:
      url: ./assets/image_1_0.png

Auraflow-DomoKun-LoRA-rank8

This is a PEFT LoRA derived from fal/AuraFlow.

The main validation prompt used during training was:

An domokun running through a field with flowers all around him.

Validation settings

  • CFG: 4.0
  • CFG Rescale: 0.0
  • Steps: 30
  • Sampler: FlowMatchEulerDiscreteScheduler
  • Seed: 42
  • Resolution: 512x512

Note: The validation settings are not necessarily the same as the training settings.
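
These values map directly onto a diffusers pipeline call; see the Inference section below for the full example. If you want to pin the validation sampler explicitly as well, here is a minimal sketch, assuming the stock FlowMatchEulerDiscreteScheduler (which fal/AuraFlow already uses by default, so this is normally a no-op):

import torch
from diffusers import DiffusionPipeline, FlowMatchEulerDiscreteScheduler

# Load the base model and re-instantiate the validation sampler from the
# pipeline's own scheduler config.
pipeline = DiffusionPipeline.from_pretrained('fal/AuraFlow', torch_dtype=torch.bfloat16)
pipeline.scheduler = FlowMatchEulerDiscreteScheduler.from_config(pipeline.scheduler.config)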

You can find some example images in the following gallery:

Prompt: unconditional (blank prompt)
Negative prompt: ugly, cropped, blurry, low-quality, mediocre average

Prompt: An domokun running through a field with flowers all around him.
Negative prompt: ugly, cropped, blurry, low-quality, mediocre average

The text encoder was not trained. You may reuse the base model text encoder for inference.
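
If you want to verify this, you can inspect the adapter's weight file directly. A minimal sketch, assuming the repository stores the adapter under the usual diffusers filename pytorch_lora_weights.safetensors (an assumption, not confirmed by this card); every key should target the transformer and none should reference a text encoder:

from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Filename is an assumption (the common diffusers LoRA default).
lora_path = hf_hub_download('bghira/Auraflow-DomoKun-LoRA-rank8', 'pytorch_lora_weights.safetensors')
state_dict = load_file(lora_path)

# List the top-level key prefixes and check for text-encoder entries.
print(sorted({key.split('.')[0] for key in state_dict}))
print(any('text_encoder' in key for key in state_dict))  # expected: False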

Training settings

  • Training epochs: 1
  • Training steps: 1000
  • Learning rate: 0.0001
    • Learning rate schedule: constant
    • Warmup steps: 100
  • Max grad value: 0.01
  • Effective batch size: 1
    • Micro-batch size: 1
    • Gradient accumulation steps: 1
    • Number of GPUs: 1
  • Gradient checkpointing: True
  • Prediction type: flow_matching (extra parameters=['shift=3'])
  • Optimizer: adamw_bf16
  • Trainable parameter precision: Pure BF16
  • Base model precision: no_change
  • Caption dropout probability: 0.1%
  • LoRA Rank: 8
  • LoRA Alpha: 8.0
  • LoRA Dropout: 0.1
  • LoRA initialisation style: default
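
For reference, the LoRA hyperparameters above correspond roughly to the following PEFT adapter configuration. This is an illustrative sketch, not the exact SimpleTuner call; in particular, target_modules is an assumption for illustration, since SimpleTuner selects the target layers of the AuraFlow transformer itself.

from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                     # LoRA Rank
    lora_alpha=8,            # LoRA Alpha
    lora_dropout=0.1,        # LoRA Dropout
    init_lora_weights=True,  # "default" initialisation style
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # assumption, for illustration only
)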

Datasets

domokun-cropped-512-NonReg

  • Repeats: 10
  • Total number of images: 27
  • Total number of aspect buckets: 3
  • Resolution: 0.262144 megapixels
  • Cropped: False
  • Crop style: None
  • Crop aspect: None
  • Used for regularisation data: No

domokun-cropped-512

  • Repeats: 10
  • Total number of images: 27
  • Total number of aspect buckets: 7
  • Resolution: 0.262144 megapixels
  • Cropped: False
  • Crop style: None
  • Crop aspect: None
  • Used for regularisation data: Yes
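
For context, each dataset above corresponds to one entry in SimpleTuner's dataloader configuration. The snippet below is a hypothetical sketch of such an entry for the first dataset, expressed as a Python dict; the key names and values are assumptions based on the fields listed above and should be checked against the SimpleTuner dataloader documentation.

# Hypothetical dataloader entry; key names are assumptions, not verified against SimpleTuner.
dataset_entry = {
    "id": "domokun-cropped-512-NonReg",
    "type": "local",
    "instance_data_dir": "/path/to/domokun",  # placeholder path
    "repeats": 10,
    "resolution": 0.262144,   # megapixels (512x512)
    "crop": False,
    "crop_style": None,
    "crop_aspect": None,
}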

Inference

import torch
from diffusers import DiffusionPipeline

model_id = 'fal/AuraFlow'
adapter_id = 'bghira/Auraflow-DomoKun-LoRA-rank8'

# Load the base model directly in bf16 and attach the LoRA adapter.
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipeline.load_lora_weights(adapter_id)

prompt = "An domokun running through a field with flowers all around him."
negative_prompt = 'ugly, cropped, blurry, low-quality, mediocre average'

## Optional: quantise the transformer to save on VRAM.
## Note: the model was not quantised during training, so quantisation is not required at inference time.
# from optimum.quanto import quantize, freeze, qint8
# quantize(pipeline.transformer, weights=qint8)
# freeze(pipeline.transformer)

# Pick the best available device; the pipeline is already in its target precision.
device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
pipeline.to(device)

model_output = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=30,
    generator=torch.Generator(device=device).manual_seed(42),
    width=512,
    height=512,
    guidance_scale=4.0,
).images[0]

model_output.save("output.png", format="PNG")
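
If you plan to serve the model with this adapter permanently attached, the LoRA can optionally be merged into the base weights after loading. A short sketch using the standard diffusers LoRA helpers, assuming your installed diffusers version supports LoRA for AuraFlow:

# Optional: merge the adapter into the base transformer weights for a small inference speed-up.
pipeline.fuse_lora()
# ...and to revert to the un-merged base model later:
# pipeline.unfuse_lora()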