[-] SpeakingColors@beehaw.org 1 points 4 weeks ago

That was the gist of a fever dream I had as a kid. One penny. It multiplies. I'm quickly dwarfed by a mountain of that which should be small. Sounds benign but I was legit shook after waking up.

The mountains here make it more pleasant🫠

[-] SpeakingColors@beehaw.org 1 points 4 months ago

I gather that the "conversation" point is whether being trans has an impact on judgement or something? Because differing from a perceived social norm is an incorrect choice people make...

I wonder if they would feel better if they found out the zodiac killer was cis?

[-] SpeakingColors@beehaw.org 3 points 6 months ago

What a cheerful read. An arguably poetic detailing of what was before a rather simple and casual endeavor: your intentions for the sand and the intentions of everything it’s comprised of. Thanks for sharing it

[-] SpeakingColors@beehaw.org 1 points 6 months ago

Super rad! I’ve played with FM synthesis in musical contexts but understanding it at a deeper level than just monkeying around has already unlocked more precise sounds. Thanks for posting
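
(In case it helps anyone else poking at it: the core of FM is just a carrier sine whose phase gets pushed around by a modulator sine. A tiny NumPy sketch - the carrier/modulator frequencies and modulation index here are arbitrary values, not anything from the linked post:)

```python
# Minimal FM synthesis: a sine carrier whose phase is modulated by a
# second sine. Higher modulation index = more/brighter sidebands.
import numpy as np

sr = 44100                         # sample rate in Hz
t = np.arange(sr) / sr             # one second of time
fc, fm, index = 440.0, 110.0, 3.0  # carrier, modulator, modulation index

# Classic FM: y(t) = sin(2*pi*fc*t + index * sin(2*pi*fm*t))
y = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))
```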

[-] SpeakingColors@beehaw.org 2 points 9 months ago

Img2img is one of many ways to constrain the AI's efforts to your compositional desires, it’s rad. You can control the amount of “dreaming” the AI does on the base image to get subtle changes, or a radically different image based on the elements of the previous (sometimes to trippy cool results, often to horrendous mutations if the desired image is supposed to be humanoid xD).
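
(If you want the same knob outside the UI, here’s a minimal img2img sketch using the diffusers library rather than Automatic 1111 - the model name, prompt and filenames are just placeholder examples, but `strength` is the same dial A1111 exposes as denoising strength:)

```python
# Minimal img2img sketch with diffusers; `strength` controls how much
# "dreaming" the model does on top of the base image.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

base = Image.open("base.png").convert("RGB")  # placeholder filename

# Low strength = subtle changes, high strength = wanders far from the base.
subtle = pipe("watercolor landscape", image=base, strength=0.3).images[0]
wild   = pipe("watercolor landscape", image=base, strength=0.8).images[0]
```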

Inpainting is another tool, it’s like a precise img2img on an area you mask. Hands are often the most garbled thing from the AI, so a brute force technique is to img2img the hands - but the process works a lot better if you help the AI out and manually fix the hands. So I’ll throw the image into Photoshop, make a list (if I remember :P) of everything I need to fix, address them directly and then toss it back into Automatic 1111. Often the shading and overall style are hard things for me to get right so I’ll inpaint over my edits to get the style and shading back.
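
(Script version of the same idea, again a rough diffusers sketch rather than the A1111 workflow, with illustrative filenames and prompt - the white region of the mask is the only part that gets regenerated:)

```python
# Inpainting sketch: regenerate only the masked area (e.g. hands that were
# roughly fixed by hand first), keep everything else untouched.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("edited_in_photoshop.png").convert("RGB")  # placeholder
mask = Image.open("hands_mask.png").convert("RGB")            # white = repaint

fixed = pipe(
    prompt="detailed hands, consistent shading",
    image=image,
    mask_image=mask,
).images[0]
```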

[-] SpeakingColors@beehaw.org 2 points 9 months ago

Thank you! Essentially I’ll come in with a visual idea, some sketches already, or I’ll do one with AI in mind (keep the lines simple so it doesn’t get confused). Generate a batch of images with img2img and cherry-pick the ones that fit closest to the idea or are surprising and wonderful. Rework those for anatomical errors or other things I want to fix or omit -> send it back through img2img if it needs it or to inject detail -> upscale and put it as my desktop/phone wallpaper :P

(I’m using Automatic 1111 which is a webui for Stable Diffusion btw)
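
(For the curious, this is roughly what the "generate a batch and cherry-pick" step looks like as a script - a sketch with the diffusers library instead of the A1111 UI, and the prompt, seeds and filenames are purely illustrative:)

```python
# Batch img2img over a simple line sketch: different seeds give a spread
# of candidates to cherry-pick from.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

sketch = Image.open("simple_lines.png").convert("RGB")  # placeholder

for seed in range(8):
    gen = torch.Generator(device="cuda").manual_seed(seed)
    out = pipe(
        "clean ink illustration, flat colors",
        image=sketch,
        strength=0.6,
        generator=gen,
    ).images[0]
    out.save(f"candidate_{seed}.png")
```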

[-] SpeakingColors@beehaw.org 3 points 9 months ago

I replied to a previous comment about the “assistance” part, which is sorta an abridged version of my workflow (“workflow” is also a term used in Comfy UI, a visual layout that processes the image sequentially through modules). It’s super fun, I highly recommend it! Feel free to PM me anytime, I’d be glad to help!

Really it was looking up terms and areas of Automatic 1111 I was unsure of and finding various sites and guides. Civitai has LOTS of guides often written by model makers or people with lots of hours in the field - it’s also my main resource for LoRAs and Models. But there’s tons of info on there. The most helpful ones were settings and workflows on actual image generation (I can definitely find some links for you there) to get quality results without too much “and if I change this, what happens?” But honestly I love poking around like that so I still spend hours tweaking just to see what happens xD
