---
datasets:
- Norod78/Yarn-art-style
base_model:
- stabilityai/stable-diffusion-xl-base-1.0
tags:
- art
- comic
pipeline_tag: text-to-image
library_name: diffusers
license: apache-2.0
---
iamno-one/sdxl_dreambooth_lora_yarn_art_v3 is a DreamBooth LoRA fine-tuned on the SDXL base model (stabilityai/stable-diffusion-xl-base-1.0) for yarn-art style transfer.
# GitHub Repo
[Check Git Repo](https://github.com/aryama-ray/yarn-comic-SDXL-DreamboothLora)
# Usage
<pre>from diffusers import DiffusionPipeline, AutoencoderKL
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# fp16-safe VAE to avoid artifacts when running SDXL in half precision
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

# Load the SDXL base pipeline
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True
).to(device)

# Add the yarn-art LoRA weights on top of the base model
pipe.load_lora_weights("iamno-one/sdxl_dreambooth_lora_yarn_art_v3")
</pre>
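The strength of the yarn-art style can be tuned at inference time. Below is a minimal sketch, assuming the standard diffusers LoRA-scaling options (`fuse_lora` with `lora_scale`, or the `scale` entry of `cross_attention_kwargs`); the 0.8 value is only an illustrative choice.
<pre># Optional: reduce the LoRA influence (1.0 = full strength; 0.8 is illustrative)
pipe.fuse_lora(lora_scale=0.8)

# Alternatively, keep the LoRA unfused and scale it per call:
# image = pipe(prompt, cross_attention_kwargs={"scale": 0.8}).images[0]
</pre>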
## Sample Prompts
<pre>prompt = "a mother playing with a toddler girl in the park, yarn art style"
negative_prompt = "blurry, low quality, distorted, extra limbs, bad anatomy, deformed face, realistic face"
image = pipe(prompt, negative_prompt=negative_prompt, height=512, width=512).images[0]
image.show()
</pre>
![Sample Output1](sample_3.png)
<pre>prompt = "a girl playing with a balloon, yarn art style"
image = pipe(prompt, height=512, width=512).images[0]
image.show()
</pre>
![Sample Output2](sample_2.png)
<pre>prompt = "a toddler girl went to the zoo, sunny background with lots of trees. The girl is looking at a monkey. The monkey is inside the cage. The toddler girl is standing outside the cage, in yarn art style"
negative_prompt = "blurry, low quality, distorted, extra limbs, bad anatomy, deformed face, realistic face"
image = pipe(prompt, negative_prompt=negative_prompt, height=512, width=512).images[0]
image.show()
</pre>
![Sample Output3](sample_6.png)
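## Reproducible Generation
For repeatable outputs, a fixed seed and explicit sampler settings can be passed to the pipeline. This is a minimal sketch using the pipeline loaded above; the seed, step count, and guidance scale are illustrative values, not the settings used for the samples shown here.
<pre>import torch

# Fixed seed so the same prompt produces the same image across runs (illustrative value)
generator = torch.Generator(device=device).manual_seed(42)

image = pipe(
    "a girl playing with a balloon, yarn art style",
    negative_prompt="blurry, low quality, distorted, extra limbs, bad anatomy, deformed face, realistic face",
    height=512,
    width=512,
    num_inference_steps=30,  # sampler steps (illustrative)
    guidance_scale=7.5,      # classifier-free guidance strength (illustrative)
    generator=generator,
).images[0]
image.save("yarn_art_sample.png")
</pre>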