A lot of the time it looks far too sharp, with too much contrast.
So in my opinion, AI-generated images are basically the opposite of the experience of looking at good art.
A good piece of art has loads of details and nuances that came from the artist's taste and vision. The longer you look at it the more there is to appreciate. You keep finding new details you missed as you study it.
Ai gens are the opposite. They look fine and competent at a cursory glance, but the longer you look the less meaning you find, and the more defects you discover. Personally I believe this is why they feel so wrong to look at for any length of time.
AI doesn't know basic drawing principles like perspective, how to scale body parts, or that "symmetry doesn't look as great as you think it does." It just has data, without any sense of why the data exists in the first place.
AI adds too many details. When a person draws an anime/cartoon character, they will usually put in minimal details, or they'll simply paste the character onto an existing background (one that could've been drawn by a different artist).
AI doesn't have human limitations so it'll often add a ton of unnecessary details to a given scene. This is why the most convincing AI-generated anime pictures are of one or two characters in a very simple setting (e.g. a plain street/sidewalk) or even a white or gradient background.
Humans can tell when art was put together by different artists, such as when the background is in a completely different style. AI doesn't differentiate like that and will make the entire image using the exact style given by the prompt. So it'll all look like it was "drawn" in the same exact style... even though anime/cartoons IRL aren't that uniform.
There's a community here on Lemmy that I used to follow, something like "share anime art." It's all AI generated. Or at least, the user who is keeping it going is just posting that. They are not being disingenuous; while they don't tag it as AI (that I have seen), they DO include the prompt, which is pretty transparent in my book. Nothing against that user at all.
In fact, the art looks pretty official. That said... it maybe looks too perfect? Official art usually has copyright tags in the lower right corner. The prompt specifically avoids any kind of copyright or artist tags. Fan art typically does have tags of some kind.
If someone were trying to fool me, they probably could. AI art has gotten to that point. But at this point, my old ass just doesn't trust anything without verifying. I was among the first on the Web and we didn't trust it then. There was a time when we grew to trust the Web. Now we can't trust it again and that's fine with me. I've always tried to be genuine, but I also wouldn't recommend anyone blindly trust me, either. Just take everything with a grain of salt and it's fine.
Generative LLMs and the like are just pattern recognition and generation. They may do this several levels deep, but they don't break free of this fundamental limitation.
You are noticing that it is just producing patterns, and you're picking up on them yourself. Lines flow a bit oddly. Real objects have recognizable textures but are missing parts of the coherent whole. Comic panels that an artist would copy and paste are actually "redrawn" by the generative algorithm, and that feels odd. Context changes oddly, e.g. in the backgrounds.
It's mostly just parlor tricks. Entertaining but rarely actually that useful.
Basically: signal-to-noise ratio
Cory Doctorow explains it better than I can: https://pluralistic.net/2025/03/25/communicative-intent/
For me, an obvious tell for AI videos is that the humans and animals move too smoothly. There's a herky-jerkiness to real-life movement.
I think generators have some kind of inherent style that we somehow learn to recognise
Sure, they have been trained on thousands of styles for each type of image, and you have some control over the style through the prompt, but one issue with the transformer decoder model (the principle behind almost all genAI at this point) is that at each generation step it receives everything generated so far as input.
This feedback loop can induce repeated choices in the later stages of generation, even across different prompts. It isn't apparent in images because they are seen all at once, but it is pretty evident in Suno (at least v3): later parts of different songs can share sounds, at least in my experiments making it generate EDM. I'm now able to spot the synth it often ends up creating.
In terms of pictures and videos, that might be a reason generated content is consistently uncanny across image types.
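The feedback loop described above can be sketched with a toy autoregressive sampler. This is purely illustrative: the vocabulary, the reinforcement rule, and all names are made up for the demo, and a real transformer decoder is vastly more complex, but the structural point is the same: each step conditions on the model's own earlier output, so early choices can snowball into repeated ones.

```python
import random

# Toy autoregressive decoder (hypothetical stand-in for a real model):
# at every step, the "model" conditions on everything generated so far,
# so tokens that appear early get reinforced later in the sequence.

def sample_next(context, vocab, rng):
    # Bias each candidate token by how often it already appears in the
    # context -- a crude analogue of the feedback loop described above.
    weights = [1 + context.count(tok) for tok in vocab]
    return rng.choices(vocab, weights=weights)[0]

def generate(prompt, vocab, steps=30, seed=0):
    rng = random.Random(seed)  # deterministic for the demo
    out = list(prompt)
    for _ in range(steps):
        # Feedback: the output so far re-enters as the next input.
        out.append(sample_next(out, vocab, rng))
    return out

# Two different "prompts" can still drift toward the same dominant
# token, because whichever token gets sampled early snowballs.
vocab = ["kick", "snare", "pad", "supersaw"]
song_a = generate(["kick"], vocab)
song_b = generate(["pad"], vocab, seed=1)
```

The self-reinforcing weights here are an exaggeration, of course, but they show why two generations from different prompts can still converge on the same "house sound" late in the sequence.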
I second this, especially with Suno. As soon as a generated song comes on my Spotify, I recognize the specific synths used by the Suno model.
It's an inhuman facsimile of the expression of humanity.
For me at least it's that the perspective is off. When someone is learning to draw or is just a shitty artist and the perspective isn't very good, you can immediately identify it. It looks like someone drew "front" eyes on the side of the head, or something like that. When AI makes an image, the perspective is off too, just in different ways. The eye doesn't look like a "front" eye or a "side" eye or a "top" eye. It looks like all of them and none of them. It makes the entire thing unsettling.
I think it's that comics/cartoons don't really have a "world model" for the machine to build. With photos, the lighting and physics and such all follow rules, and one could build a 3D model from a photo. But with comics/cartoons, everything is exaggerated: 3D models don't exist, lighting is vibes-based, and every character is only drawn from certain angles. Say the machine determines it needs to draw a cartoon character at a 45-degree angle, but all the training data only had 0- and 60-degree angles. It would try to base the result on the 3D model it should have, but trying to build a 3D model of a cartoon character just produces contradictions. So it renders the contradictory result, which is of course completely wrong.
So I have a latex fetish. A lot of the AI art i am exposed to is latex art for that reason and, sadly, people who make AI art like to make shiny/latex art because it actually simplifies the image and makes the generation superficially better.
The problem is, the lighting is always all fucked up. The most important part is all fucked up and wrong, the angles are random and bad and there are random light sources, backlights and nonsensical shadows everywhere. It's terrible to look at, feels wrong.
To me, this effect is also present in other generated images, and I have kinda learned to recognize it now. Comes in handy.
But yeah, what I just said about lighting and shadow applies to every other aspect of the art as well. Too many details, details in weird places, things merging together. It's just gibberish in the form of a picture.
And when it comes to comics, i.e. art with more than one panel that is supposed to have a recognizable style and design to each element, this is absolute poison. A character with a robotic arm will just have a human hand in the third depiction, because that's how the generation went and the cretin writing the prompts didn't care. No one in the chain knows anything about the art form, and so it is bad.
No creativity: AI can't make new things, it just remixes old ones.