Finetuning - AhmetTek41/output
This pipeline was finetuned from kandinsky-community/kandinsky-2-2-decoder on the AhmetTek41/logo dataset. Below are some example images generated with the finetuned pipeline using the following prompt: "A logo for a zero waste club featuring a simple image of a closed loop of arrows, with each arrow made from different recyclable materials like paper, plastic, and metal."
Pipeline usage
You can use the pipeline like so:
from diffusers import AutoPipelineForText2Image
import torch

# Load the finetuned pipeline in half precision
pipeline = AutoPipelineForText2Image.from_pretrained("AhmetTek41/output", torch_dtype=torch.float16)

prompt = "A logo for a zero waste club featuring a simple image of a closed loop of arrows, with each arrow made from different recyclable materials like paper, plastic, and metal."
image = pipeline(prompt).images[0]
image.save("my_image.png")
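If a CUDA GPU is available, the pipeline can be moved to the device and generation made reproducible with a seeded generator. This is a minimal sketch on top of the snippet above; the seed and step count are arbitrary example values, not settings from this model.

pipeline = pipeline.to("cuda")

# Fix the seed so the same prompt produces the same logo on every run
generator = torch.Generator(device="cuda").manual_seed(0)
image = pipeline(prompt, num_inference_steps=50, generator=generator).images[0]
image.save("my_seeded_image.png")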
Training info
These are the key hyperparameters used during training:
- Epochs: 77
- Learning rate: 1e-05
- Batch size: 16
- Gradient accumulation steps: 1
- Image resolution: 512
- Mixed-precision: None
More information on all the CLI arguments and the environment is available on the wandb run page for this training run.
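For orientation, the hyperparameters above map onto a launch of the diffusers Kandinsky 2.2 decoder finetuning example roughly as sketched below. This is a hedged reconstruction, not the exact command used for this run: the script name train_text_to_image_decoder.py, the flag names, and the output directory are assumptions based on the diffusers examples.

# Hedged sketch of an equivalent training launch; flag names and script path
# follow the diffusers Kandinsky 2.2 text-to-image examples and are assumptions.
import subprocess

subprocess.run(
    [
        "accelerate", "launch", "train_text_to_image_decoder.py",
        "--dataset_name=AhmetTek41/logo",
        "--resolution=512",                 # Image resolution: 512
        "--train_batch_size=16",            # Batch size: 16
        "--gradient_accumulation_steps=1",  # Gradient accumulation steps: 1
        "--learning_rate=1e-05",            # Learning rate: 1e-05
        "--num_train_epochs=77",            # Epochs: 77
        "--report_to=wandb",                # CLI arguments and environment logged to wandb
        "--output_dir=output",              # Assumed output directory
    ],
    check=True,
)

Since mixed precision was not enabled for this run, no mixed-precision flag is passed in the sketch.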