
I fine-tuned DMD2 for Stable Diffusion v1.5, using the Dreamshaper model as the base.

I set the CFG scale to 7 during distillation, which significantly improved generation quality. Because the guidance is baked into the distilled model, the inference example below uses guidance_scale=0.0.

Example:

```python
from diffusers import StableDiffusionPipeline

# Load the fine-tuned pipeline and move it to the GPU
pipe = StableDiffusionPipeline.from_pretrained("georgefen/DMD2-dreamshaper-v1-5").to("cuda")

prompt = "masterpiece, extremely intricate, realistic, portrait of a girl, medieval armor, metal reflections, upper body, outdoors, intense sunlight, far away castle, professional photograph of a stunning woman detailed, sharp focus, dramatic, award winning, cinematic lighting, octane render unreal engine, volumetrics dtx, film grain, blurry background, blurry foreground, bokeh, depth of field, sunset, motion blur, chainmail"

# DMD2 is a distilled model: a single inference step with guidance disabled is enough
image = pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0.0).images[0]
image.save("test.png")
```

Credits to:

- Dreamshaper
- DMD2
