---
license: creativeml-openrail-m
pipeline_tag: text-to-image
---

I fine-tuned DMD2 Stable Diffusion v1.5 based on the Dreamshaper model. I set the CFG scale to 7 during fine-tuning, which significantly improved generation quality; the distilled model itself runs in a single step with `guidance_scale=0.0` at inference.

Example:

```python
from diffusers import StableDiffusionPipeline

# Load the distilled pipeline and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained("georgefen/DMD2-dreamshaper-v1-5").to("cuda")

prompt = "masterpiece, extremely intricate, realistic, portrait of a girl, medieval armor, metal reflections, upper body, outdoors, intense sunlight, far away castle, professional photograph of a stunning woman detailed, sharp focus, dramatic, award winning, cinematic lighting, octane render unreal engine, volumetrics dtx, film grain, blurry background, blurry foreground, bokeh, depth of field, sunset, motion blur, chainmail"

# Guidance is distilled into the model, so use one step and no CFG at inference.
image = pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0.0).images[0]
image.save("test.png")
```

Credits to:

- [Dreamshaper](https://civitai.com/models/4384/dreamshaper)
- [DMD2](https://huggingface.co/tianweiy/DMD2)