Welcome to the Unpopular Opinion community!
How voting works:
Vote the opposite of the norm.
If you agree that the opinion is unpopular, give it an arrow up. If it's something that's widely accepted, give it an arrow down.
Guidelines:
Tag your post, if possible (not required)
Rules:
1. NO POLITICS
Politics is everywhere. Let's make this about [general]- and [lemmy]-specific topics, and keep politics out of it.
2. Be civil.
Disagreements happen, but that doesn’t provide the right to personally attack others. No racism/sexism/bigotry. Please also refrain from gatekeeping others' opinions.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Shitposts and memes are allowed but...
Only until they prove to be a problem. They can and will be removed at moderator discretion.
5. No trolling.
This shouldn't need an explanation. If your post or comment is made just to get a rise, with no real value, it will be removed. If you do this too often, you will get a vacation away from this community to touch grass for one or more days. Repeat offenses will result in a perma-ban.
Instance-wide rules always apply. https://legal.lemmy.world/tos/
Nah, we're cool. You and I can tell, but AI won't. So it will enshittify itself into uselessness.
We should all strive to cause confusion in all sorts of databases so AI can't unfuck itself.
Is this controversial?
I suggest banning AI images from communities that aren't specifically made for AI images
I suggest banning photoshop images from communities that aren't made specifically for photoshop images
Ideally, photoshopped images should all be flagged, as impractical as that might be.
So, anything taken on a current-gen mobile phone, is what you're saying.
Yup, I know it's impractical. Not only that, but because it's a digital recreation, it will never be a completely truthful representation of anything. It was the same for film, but those changes were understood and accepted. Doctored/manipulated images, though, were expected to be identified as such, for the most part.
Current-gen mobile phones aren't adding fake objects to images
Photoshop, not AI. Read the context of the discussion
By "adding fake objects", I was referring to photoshop
Nobody claimed it does, so again: read the context of the discussion
Someone could photoshop an image to add things that aren't actually there. In that case, it shouldn't be hidden that the image isn't real. A filter isn't a major enough change for a tag
Dude, you seem to have missed the original point of the comment. It was sarcasm, and the point was that AI is just the next technological step after photoshop. We had the same discussions decades ago when photoshop was new, with all the purists complaining that photoshop was a horrible technology, that it would put photographers out of business, that it was 'soulless', etc. Now we're rehashing all the same old stupid and tired arguments, but with AI instead of photoshop.
If something that was made relatively effortlessly can be confused with something made with effort, it should be marked
I suggested banning AI from communities that aren't made for AI because it requires so little effort and is so easy to mass-produce that it could flood a community
See my point? Replace 'AI' with photoshop and your exact argument could have come from 2 decades ago.
Could it? I don't think you can mass-produce photoshopped images, and it's much harder to pretend you put effort into something when you didn't (that's why I said 'relatively')
Even if you could mass produce them and it was easier to fake effort, is anyone doing that? There are people doing those things with AI
Photoshop is magnitudes of effort less than pre-photoshop technology. The amount of images you can churn out quickly with photoshop is certainly mass production when compared with what you can get done without photoshop, for much less effort.
is anyone doing that? There are people doing those things with AI
Is anyone photoshopping images? Hell yes. Do you think we just invented fake images?
By "that", I meant mass-producing images and making it hard to tell whether something was done the hard way or the easy way (while people assume it was done the hard way)
People don't automatically assume a picture is made by AI the same way they would assume a picture is photoshopped if it was clearly faked
A future where images are all assumed to be made with AI is, hopefully obviously, bad
By "that", I meant mass-producing images and making it hard to tell whether something was done the hard way or the easy way (while people assume it was done the hard way)
No, because the common sense assumption at this point is that everything is done with photoshop or similar technology.
People don't automatically assume a picture is made by AI the same way they would assume a picture is photoshopped if it was clearly faked
They will in a few years. We've been through this whole cycle, is my point. The technology is here to stay, and we're gonna have to adapt to it. No amount of complaining will change it.
A future where images are all assumed to be made with AI is, hopefully obviously, bad
Why so?
People who don't use photoshop forget it's used for:
I'm not saying it's the most practical software for those applications but it's a primary tool for many photographers and artists.
There is a paradox here, as there are 2 possibilities: either
A) AI generated "slop" is obviously bad quality, theirfor a label is unnecessary as it is obvious.
Or
B) the AI generated content looks as good as human creations therfore is not slop and a label is unnecessary.
If someone makes an AI clip of a politician saying something they didn't, should we believe it because the AI was convincing enough?
Really, photoshopped images meant to seem as real as possible should be flagged too. It only sounds ridiculous because it has become the norm to accept them.
It is absolutely critical that the capabilities are broadly adopted because in the future, the difference will be indiscernible.
This is absolutely a lose-lose, nuclear proliferation-like situation and cannot be avoided. If the technology is unknown to the general populace, it holds great power over them. There are more nuanced uses of this new toolset than anyone has yet realized. One could respond to popular media and digital social misalignment in complex ways that none of us can see or filter. The tool use is not a simple polar dichotomy. One could use a tool to monitor social sentiment and respond in ways to steer the conversation using one's own likeness and social presence at any instance of qualia, from a corporate account, to a think tank, or a political figure.
Those of us with the time and ability to explore such things should be welcomed and listened to carefully. Most people in this space are not the assholes you are angry or frustrated with. I don't give a @#$% about tricking anyone or replacing anything. I only care about what I am curious about and learning new things to occupy my time in social isolation from physical disability. I could share a lot more, but when people act stupid, I do not share much at all.

I'm capable of independence in exploring unique paths and applications. The more grounded I am from engaging with others, the more effective I am at doing useful things and sharing them. I'm not some savant genius type at all. I'm a persistent rogue that explores off the beaten path in empirically useful but often unexpected ways. It is very easy to misunderstand the context of things I talk about and might share. I am often wrong about several assumptions and details, but if one takes the time to look into my results, the empirical patterns that ground what I am saying will emerge, and those nuggets are often useful. This is the real, messy edge of amateur and hobby culture.

When I encounter negative prejudice, I'm not going to endure the stupidity of those that fail to contextualize and see the value of my abstractions through the haze of my explorations. I just want to share something I find interesting or useful as I understand it in my contiguously moving target of learning. Anyone that responds to that kind of post or comment negatively, as if a person's knowledge is some kind of static state, is beyond useless and stupid to me. I do not care about egos and narcissism. I do not care about oversimplified idealism of right or wrong. I care about curiosity and empirical usefulness, because we live in the universe of irrational numbers, where booleans and integers do not exist except in fantasies of the mind and the limited registers of computational machines that are always wrong in their truncation of reality.
It is just a tool. Some are sour because evolution dictates they must be. The culture of artificial scarcity and unnecessary pressure produces and rewards assholes. It is this culture that is the problem, not the tool. We live in a dystopia that is ruled by assholes. Sam Altman is the asshole funding the culture of blaming this new tool. Monopoly in this space can be used to exploit the status quo for more profit. This exploitation only works in a monopoly where the tool is proprietary.

In the real world with an open source tool, the time it saves opens up great wealth to the average person and business. Our culture can expand by reinvesting our newly acquired wealth. This is the intelligent use of the new tool. Those that can only see the present as some kind of final state to extract value from are idiotic parasites of humanity. We can become something more, like has occurred for thousands of years of human innovation. These proprietary parasites of humanity are twisting reality to subject us to their vampirism of extracted wealth and subjugation. I reject this narrative and stupidity because I can clearly see the big picture. I wish y'all would disconnect, step back, and see the big picture too. Nothing about AI tools is a negative unless you fall in line with Altman's dystopian vision.
A) Some people are really really bad at noticing AI slop. I’ve seen some really obvious AI generated images with people debating if it’s real or not. Unless those comments were AI and I’m the one who can’t tell…
B) Honestly even good AI generated content should come with a disclaimer IMO.
*therefore
Even if it looks good, it's slop.
Images of me with my wife - Stalin are not AI slop!!!
so what AI detection tool should we use to detect it?
Blaming the AI for plagiarism is like blaming a calculator for a wrong answer. It depends on how it’s used. AI is a tool — and like any tool, it can be misused or used ethically depending on the person behind it.
Plagiarism involves intent and deception, usually a person taking someone else's work and claiming it as their own. AI can't have intent, so it seems like your concerns in this regard should be directed at the user, not the content, and Lemmy already has tools to address that, i.e. blocking an individual.
Nah, it's more like "I'm sick of the eyeball stabbing machine that was built to stab eyeballs, I wish people would stop using it and stabbing my eyeballs"
AI, by its nature, does nothing but plagiarize.
Again, plagiarism isn’t just 'using others’ work' — it’s about copying and passing it off as your own, often without transformation. AI doesn't memorize or intentionally replicate specific works. It generates outputs based on probabilities, not stored text. That’s a big difference in mechanism.
There’s a meaningful distinction between training and theft. A human artist studies other art — that doesn’t make their work plagiarism, even if it’s derivative.
As I said, if the outputs are used irresponsibly — like someone passing off AI writing as their own research or using it to flood markets with low-effort content — that’s where it becomes a tool for exploitation. But the problem then is how it's used, not the tech itself.
Whatever, mate. People didn't volunteer their art to be scraped by AI, so even if it's not plagiarism exactly, as defined by you or whomever, that doesn't mean it's ethical or that people like it.
And most don't.
And again, this isn't just about images: there's also the environment, misinformation, plagiarism in academia (and that fits your definition), and a plethora of other issues which are not related to capitalism at all.
Most of the data used to train AI, especially image models, came from publicly available content accessible by anyone. Artists have been doing this kind of thing for centuries: looking at existing work, internalizing styles, and creating something new. AI is doing that at scale — it’s not copying, it’s learning patterns. Just like humans do.
Consent is important, absolutely, but if your art is posted publicly, you're already consenting to it being seen and learned from. That’s how influence works. If someone draws in your style after following you online, that’s not theft. You might not like it, but it’s not unethical in itself.
Also, let's not pretend this conversation is only about artists' rights. It's become a catch-all for every fear around new tech. People are worried about the impact of AI on the environment? Understandable and totally valid, although the impact is way less than you might think:
https://www.nature.com/articles/s41598-024-54271-x
https://www.nature.com/articles/s41598-024-76682-6
Misinformation? Agreed, serious concern and one I share. But saying AI is inherently unethical because of how some people use it is like saying the internet is inherently unethical because people post lies.
We should absolutely talk about regulation, transparency, and compensation, but let’s not throw out the entire field because it challenges the comfort zone of some industries. Ethics matter, yes, but so does clarity. Not everything that feels unfair is a violation.
Ok, first of all, AI doesn't "learn" the way humans do. That's not how AI imaging works. It basically translates images into a form of static computers can read, uses an algorithm to mix those into a new static, then translates it back. That's entirely different from someone studying what negative space is or learning how to draw hands.
Second, posting a picture implies consent for people to see and learn from it, but that doesn't imply consent for people to use it however they want. A 16-year-old girl posting pictures of her birthday party isn't consenting to people using them to generate pornography based on her body. There's also the issue of copyright, which is there to protect your works from just being used by anyone. (Yes, it's abused by corporations, don't bother trying to bring that up, I'm already pissed at Disney.) But even when people say specifically that they don't want their art used for AI, even prominent artists like Miyazaki, that doesn't stop AI companies from taking those images and doing something they don't consent to, scraping, with them.
Third, trying to say that it's only fear over new tech is a bullshit, hand-waving way of dismissing people's legitimate concerns with the issue. I like new technology and how it can help people. I even like some applications for AI; using a bread-checkout tool to detect breast cancer is awesome. But the problems that have come up with other applications of it are pretty terrible, and you shouldn't stick your head in the sand about them.
(As an aside, comparing AI-generated slop to all other art is apples and oranges. There's much more to art than digital images, so saying that an AI image takes less energy to make than a Ming vase, or literally any other pottery for that matter, is a false equivalence. They are not the same even if they have similarities, so comparing their physical costs doesn't track.)
Fourth, I'm not just talking about people using AI to make lies, I'm talking about AI making lies unintentionally. Like putting glue on pizza to keep the cheese on. Or telling people to eat rocks. AI doesn't know what's a joke or misinformation, and will present it as true, and people will believe it if they don't know any better. It's inaccurate, and can't be accurate, because it doesn't have a filter for its summaries. It's like typing using only the suggested next word on your cell phone.
I didn't say to get rid of AI entirely; like I said, some applications are great, like with the breast cancer detection. But to say that the only issues people have with AI are because of capitalism is incorrect. It's a poorly working machine, and saying that communism will magically make it not broken, when the problems are intrinsic to it, is a false and delusional statement.
Ok, first of all, AI doesn't "learn" the way humans do. That's not how AI imaging works. It basically translates images into a form of static computers can read, uses an algorithm to mix those into a new static, then translates it back. That's entirely different from someone studying what negative space is or learning how to draw hands.
The comparison to human learning isn’t about identical processes; it’s about function. Human artists absorb influences and styles, often without realizing it, and create new works based on that synthesis. AI models, in a very different but still meaningful way, also synthesize patterns based on what they’re exposed to. When people say AI 'learns from art,' they aren’t claiming it mimics human cognition. They mean that, functionally, it analyzes patterns and structures in vast amounts of data, just as a human might analyze color, composition, and form across many works. So no, AI doesn’t learn "what negative space means"; it learns that certain pixel distributions tend to occur in successful compositions. That’s not emotional or intellectual, but it’s not random either.
Second, posting a picture implies consent for people to see and learn from it, but that doesn't imply consent for people to use it however they want. A 16-year-old girl posting pictures of her birthday party isn't consenting to people using them to generate pornography based on her body. There's also the issue of copyright, which is there to protect your works from just being used by anyone. (Yes, it's abused by corporations, don't bother trying to bring that up, I'm already pissed at Disney.) But even when people say specifically that they don't want their art used for AI, even prominent artists like Miyazaki, that doesn't stop AI companies from taking those images and doing something they don't consent to, scraping, with them.
I agree, posting art online doesn't give others the right to do anything they want with it. However, there’s a difference between viewing and learning from art versus directly copying or redistributing it. AI models don’t store or reproduce exact images — they extract statistical representations and blend features across many sources. They aren’t taking a single image and copying it. That’s why, legally and technically, it isn’t considered theft. Equating all AI art generation with nonconsensual exploitation like kiddie porn is conflating separate issues: ethical misuse of outputs is not the same as the core technology being inherently unethical.
Also, re your point on copyright, it's important to remember that copyright is designed to protect specific expressions of ideas not general styles or patterns. AI-generated content that does not directly replicate existing images does not typically violate copyright, which is why lawsuits over this remain unresolved or unsuccessful so far.
(As an aside, comparing AI-generated slop to all other art is apples and oranges. There's much more to art than digital images, so saying that an AI image takes less energy to make than a Ming vase, or literally any other pottery for that matter, is a false equivalence. They are not the same even if they have similarities, so comparing their physical costs doesn't track.)
This thread and conversation is specifically talking about AI art, so the comparison and data are still apt.
Fourth, I'm not just talking about people using AI to make lies, I'm talking about AI making lies unintentionally. Like putting glue on pizza to keep the cheese on. Or telling people to eat rocks. AI doesn't know what's a joke or misinformation, and will present it as true, and people will believe it if they don't know any better. It's inaccurate, and can't be accurate, because it doesn't have a filter for its summaries. It's like typing using only the suggested next word on your cell phone.
Concerns about misinformation, environmental impact, and misuse are real. That’s why the responsible use of AI must involve regulation, transparency, and ethical boundaries. But that’s very different from claiming that AI is an 'eyeball stabbing machine'. That kind of absolutist framing isn’t helpful. It stifles productive discussion about how we can use these tools in ways that are helpful, including in medicine like you mention.
I didn't say to get rid of AI entirely; like I said, some applications are great, like with the breast cancer detection. But to say that the only issues people have with AI are because of capitalism is incorrect. It's a poorly working machine, and saying that communism will magically make it not broken, when the problems are intrinsic to it, is a false and delusional statement.
I have never once mentioned capitalism or communism.
Yeah, sorry, I was arguing with communists over another pro-AI meme.
It seems you're pretty entrenched in being pro-AI, so I don't see much point in going on about this with you. I guess enjoy your slop or whatever.
The analogy with a calculator does not work at all :(( ... AI is always predicated on someone else's work; that's the problem!
AI doesn’t reproduce individual works the way plagiarism does. It's not like it's pulling out someone’s article and copying it. It’s trained on patterns — like how a person who reads hundreds of books starts to pick up how stories are structured, but doesn't memorize them word-for-word.
If AI is trained on copyrighted material without permission, there's a real argument about whether that’s fair or exploitative, but that’s more of a legal and ethical issue than a plagiarism one.
At the rate it’s advancing, pretty soon you won’t be able to tell which is which.
I agree with you so hard, I actually have to downvote your post because of the community rules.
I spent hours Photoshopping Elon Musk's face onto Scarlett O'Hara (took so long because I made myself do it with Gimp 3). If I could have done it with AI, the results would likely have been better and that time wasted making a meme is something I won't ever get back.
It's not completely wasted time... you gained Gimp skills
Now... put on the mask and ballgag, we have work to do!
You and the previous^2^ poster who complained about people complaining about AI slop should have a rap battle.