Came out pretty good. Generated at 2048x1024 then upscaled 2x in img2img. No Refiner.
```
cinematic photo breathtaking High Quality, Award Winning, photorealistic, realistic, photograph, landscape of a Overwrought Seductive The Nidhogg Dragon from inside of a Savanna, Hazy conditions, 50s Art, Alabaster lighting, film grain, Canon EF, Circular polarizer, Kodachrome, matte, subsurface scattering, radiosity, studio quality . award-winning, professional, highly detailed . 35mm photograph, film, bokeh, professional, 4k, highly detailed

Negative prompt: (low quality:1.4), ugly, deformed, noisy, blurry, distorted, grainy, drawing, painting, crayon, sketch, graphite, impressionist, noisy, blurry, soft, deformed, ugly

Steps: 20, Sampler: DPM2 Karras, CFG scale: 7, Seed: 1207805303, Size: 2048x1024, Model hash: 31e35c80fc, Model: sd_xl_base_1.0, Variation seed: 3557558652, Variation seed strength: 1, Clip skip: 2, Token merging ratio: 0.1, NGMS: 0.4, Version: v1.5.0
```
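If anyone wants to approximate this workflow outside A1111, here's a minimal sketch using Hugging Face `diffusers`. This is an assumption-heavy translation, not the exact A1111 pipeline: `KDPM2DiscreteScheduler` with Karras sigmas stands in for "DPM2 Karras", the resize-then-img2img pass at strength 0.3 is my own choice for the 2x upscale, and A1111-specific options (Clip skip 2, token merging, variation seed, NGMS, `(word:1.4)` weighting) aren't reproduced.

```python
import torch
from diffusers import (
    StableDiffusionXLPipeline,
    StableDiffusionXLImg2ImgPipeline,
    KDPM2DiscreteScheduler,
)

prompt = "cinematic photo ... landscape of the Nidhogg Dragon inside a savanna ..."
# Note: diffusers does not parse A1111-style (term:1.4) weighting natively.
negative = "low quality, ugly, deformed, noisy, blurry, distorted, grainy, drawing, sketch"

# Base txt2img pass at 2048x1024, no refiner.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
# Rough equivalent of A1111's "DPM2 Karras" sampler.
pipe.scheduler = KDPM2DiscreteScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
image = pipe(
    prompt=prompt,
    negative_prompt=negative,
    width=2048,
    height=1024,
    num_inference_steps=20,
    guidance_scale=7.0,
    generator=torch.Generator("cuda").manual_seed(1207805303),
).images[0]

# 2x upscale via img2img: resize the base image, then a low-strength pass.
img2img = StableDiffusionXLImg2ImgPipeline(**pipe.components).to("cuda")
img2img.enable_vae_tiling()  # helps with VRAM at 4096x2048
final = img2img(
    prompt=prompt,
    negative_prompt=negative,
    image=image.resize((4096, 2048)),
    strength=0.3,  # low denoise so the composition is preserved (my choice)
    num_inference_steps=20,
    guidance_scale=7.0,
).images[0]
final.save("dragon_upscaled.png")
```

Even with VAE tiling, the 4096x2048 img2img pass needs a lot of VRAM; a tiled upscaler (or doing the second pass in A1111 as the OP did) is the more practical route on smaller cards.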
What are you running SDXL in? I tried it in ComfyUI yesterday and it seems really powerful, but iterating on images always takes a long time. I haven't tried it in SD.Next or Auto1111 yet.
A1111. I don't like the other UIs that much. I used Comfy before, but I find it gets very, very messy with workflows.
Agreed on the Auto1111 UI; I like the idea of ComfyUI but making quick changes + testing rapidly feels like a pain. I always feel like I must be doing something wrong. I do appreciate how easy it is to replicate a workflow, though.