this post was submitted on 23 Aug 2024
419 points (91.0% liked)

Technology

[–] stoy@lemmy.zip 143 points 2 months ago (5 children)

TL;DR: The new Reimage feature on the Google Pixel 9 phones is really good at AI manipulation, while being very easy to use. This is bad.

[–] kernelle@lemmy.world 9 points 2 months ago (1 children)

This is bad

Some serious old-man-yelling-at-cloud energy

[–] sorghum@sh.itjust.works 58 points 2 months ago (12 children)

It'll sink in for you when photographic evidence is no longer admissible in court

[–] Th4tGuyII@fedia.io 87 points 2 months ago (4 children)

Image manipulation has always been a thing, and there are ways to counter it...

But we already know that a shocking number of people will simply take what they see at face value, even if it does look suspicious. The volume of AI-generated misinformation online is already too damn high without it gaining more strings to its bow.

Governments don't seem to be anywhere near on top of keeping up with these AI developments either, so by the time the law starts accounting for all of this, the damage will be long done already.

[–] Badeendje@lemmy.world 17 points 2 months ago

On our vacation two weeks ago, my wife took an awesome picture with just one guy annoyingly in the background. She just tapped him and clicked the button... poof, gone, perfect photo.

[–] gravitas_deficiency@sh.itjust.works 11 points 2 months ago (2 children)

But it’s never been this absolutely trivial to generate and distribute completely synthetic media. THAT is the real problem here.

[–] FinishingDutch@lemmy.world 62 points 2 months ago (15 children)

I work at a newspaper as both a writer and photographer. I deal with images all day.

Photo manipulation has been around as long as the medium itself. And throughout the decades, people have worried about the veracity of images. When PhotoShop became popular, some decried it as the end of truthful photography. And now here’s AI, making things up entirely.

So, as a professional, am I worried? Not really. Because at the end of the day, it all comes down to ‘trust and verify when possible’. We generally receive our images from people who are wholly reliable. They have no reason to deceive us and know that burning that bridge will hurt their organisation and career. It’s not worth it.

If someone was to send us an image that’s ‘too interesting’, we’d obviously try to verify it through other sources. If a bunch of people photographed that same incident from different angles, clearly it’s real. If we can’t verify it, well, we either trust the source and run it, or we don’t.

[–] Knock_Knock_Lemmy_In@lemmy.world 14 points 2 months ago (3 children)

If a bunch of people photographed that same incident from different angles, clearly it’s real.

I don't think you can assume this anymore.

[–] golli@lemm.ee 8 points 2 months ago* (last edited 2 months ago)

Photo manipulation has been around as long as the medium itself. And throughout the decades, people have worried about the veracity of images. When PhotoShop became popular, some decried it as the end of truthful photography. And now here’s AI, making things up entirely.

I actually think it isn't the AI photo or video manipulation part that makes it a bigger issue nowadays (at least not primarily), but the way in which they are consumed. AI making things easier is just another puzzle piece in this trend.


Information volume and speed have increased dramatically, resulting in an overflow that significantly shortens the timespan dedicated to each piece of content. If I slowly read my Sunday newspaper during breakfast, I'll give it much more attention than I would scrolling through my social media feed. That lack of engagement makes it much easier for misinformation to have the desired effect.

There's also the increased complexity of the world. Things can seem reasonable and true on the surface, but have knock-on consequences that aren't immediately apparent, or only hold true within a narrow picture and fall apart once viewed from a wider perspective. This just gets worse combined with the point above.

Then there's the decline in relevance of high-profile leading news outlets and the increased fragmentation of the information landscape. Instead of carefully curated and verified content, immediacy and clickbait take priority. And this, imo, also has a negative effect on those more classical outlets, which have to compete with it.

You also have increased populism, especially in politics, and many more trends, all compounding the same issue of misinformation.

And even if caught and corrected, usually the damage is done and the correction reaches far fewer people.

[–] echodot@feddit.uk 58 points 2 months ago (3 children)

Okay so it's the verge so I'm not exactly expecting much but seriously?

No one on Earth today has ever lived in a world where photographs were not the linchpin of social consensus

People have been faking photographs basically since day one, with techniques like double exposure. Even more sophisticated photo manipulation has been possible with Photoshop, which has existed for decades.

There's a photo of me taken in the '90s on Thunder Mountain at Disneyland which was edited to look like I'm actually on a mountainside rather than in a theme park. I think we can deal with fakeable photographs. The only difference here is that the process is automatable, which honestly doesn't make a blind bit of difference. It's quicker, but so what?

[–] TheFriar@lemm.ee 34 points 2 months ago (13 children)

It used to take professionals or serious hobbyists to make something fake look believable. Now it’s at the tip of everyone’s fingers. Fake photos were already a smaller issue, but this very well could become a tidal wave of fakes trying to grab attention.

Think about how many scammers there are. Think about how many horny boys there are. Think about how much online political fuckery goes around these days. When believable photographs of whatever you want people to believe are at the tips of anyone’s fingers, it’s very, very easy to start a wildfire of misinformation. And think about the young girls being tormented in middle school and high school. And all the scammable old people. And all the fascists willing to use any tool at their disposal to sow discord and hatred.

It’s not a nothing problem. It could very well become a torrent of lies.

[–] LucidNightmare@lemm.ee 53 points 2 months ago (5 children)

There was actually a user on Lemmy who asked if the original photo of the massacre was AI. It hadn't occurred to me that people who had never heard of the 1989 Tiananmen Square protests and massacre would find the image and question whether it was real.

A very sad sight, a very sad future.

[–] randy@lemmy.ca 50 points 2 months ago (5 children)

Relevant XKCD. Humans have always been able to lie. Having a single form of irrefutable proof is the historical exception, not the rule.

[–] samus12345@lemmy.world 9 points 2 months ago (1 children)

Regarding that last panel, why would multiple people go through the trouble of carving lies about Ea-Nasir's shitty copper? And even if they did, why would he keep them? No, his copper definitely sucked.

[–] hperrin@lemmy.world 49 points 2 months ago (6 children)

We literally lived for thousands of years without photos. And we’ve lived for 30 years with Photoshop.

[–] Squizzy@lemmy.world 31 points 2 months ago (3 children)

The article takes a doom-laden tone, for sure, but the reality is that we know how dangerous and prolific misinformation is.

[–] Alexstarfire@lemmy.world 48 points 2 months ago (2 children)
[–] xavier666@lemm.ee 9 points 2 months ago

Clickbait 101

[–] WoahWoah@lemmy.world 35 points 2 months ago* (last edited 2 months ago) (2 children)

This is a hyperbolic article to be sure. But many in this thread are missing the point. It's not that photo manipulation is new.

It's the volume and quality of photo manipulation that's new. "Flooding the zone with bullshit," i.e. decreasing the signal-to-noise ratio, can have a demonstrable social effect.

[–] Ilovethebomb@lemm.ee 28 points 2 months ago (1 children)

Meh, those edited photos could have been created in Photoshop as well.

This makes editing and retouching photos easier, and that's a concern, but it's not new.

[–] FlihpFlorp@lemm.ee 24 points 2 months ago (2 children)

Something I heard in the Photoshop vs. AI argument is that it makes an already existing process much faster, and almost anyone can do it, which increases the sheer amount that one person or a group could produce, much like how the printing press sped up the production of books (if you're into history).

I’m too tired to take a stance so I’m just sharing some arguments I’ve heard

[–] Ilovethebomb@lemm.ee 7 points 2 months ago (1 children)

Making fake images even easier to create definitely isn't great, I agree with you there, but it's nothing that couldn't already be done in Photoshop.

I definitely don't like the idea that you can do this on your phone.

[–] Bimbleby@lemmy.world 8 points 2 months ago

Exactly, it was already established that pictures from untrusted sources are to be disregarded unless they can be verified by trusted sources.

It's basically how it has been forever with the written press: just as everyone now has the capability to manipulate a picture, everyone can write that we're being invaded by aliens, but whether we should believe it is another thing.

It might take some time for the general public to learn this, but it should be a focus area of general schooling within the area of source criticism.

[–] Blackmist@feddit.uk 26 points 2 months ago (3 children)

We've had fake photos for over 100 years at this point.

https://en.wikipedia.org/wiki/Cottingley_Fairies

Maybe it's time to do something about confirming authenticity, rather than just accepting any old nonsense as evidence of anything.

At this point anything can be presented as evidence, and now can be equally refuted as an AI fabrication.

We need a new generation of secure cameras with internal signing of images and video (to prevent manipulation), built in LIDAR (to make sure they're not filming a screen), periodic external timestamps of data (so nothing can be changed after the supposed date), etc.
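The signing idea in the comment above can be sketched in a few lines. This is a hypothetical illustration, not any real camera vendor's scheme: it uses a symmetric HMAC as a stand-in for the asymmetric device key a real secure camera would keep in a hardware enclave, and the `DEVICE_KEY` and function names are invented for the example.

```python
import hashlib
import hmac
import json
import time

# Hypothetical device key; a real camera would use an asymmetric
# private key in a secure element, not a shared secret like this.
DEVICE_KEY = b"example-device-secret"

def sign_capture(image_bytes: bytes, key: bytes = DEVICE_KEY) -> dict:
    """Bundle an image hash with a capture timestamp and sign both."""
    record = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "captured_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(image_bytes: bytes, record: dict, key: bytes = DEVICE_KEY) -> bool:
    """Recompute the hash and check the signature; any pixel edit fails."""
    expected = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "captured_at": record["captured_at"],
    }
    payload = json.dumps(expected, sort_keys=True).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, record["signature"])

photo = b"raw sensor data"
record = sign_capture(photo)
assert verify_capture(photo, record)             # untouched image verifies
assert not verify_capture(photo + b"x", record)  # any edit breaks it
```

Because the timestamp is inside the signed payload, it also can't be altered after the fact, which is the "periodic external timestamps" part of the proposal in miniature.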

[–] ricdeh@lemmy.world 22 points 2 months ago (2 children)

I am very opposed to this. It means surrendering all trust in pictures to Big Tech. If at some time only photos signed by Sony, Samsung, etc. are considered genuine, then photos taken with other equipment, e.g., independently manufactured cameras or image sensors, will be dismissed out of hand. If, however, you were to accept photos signed by the operating system on those devices regardless of who is the vendor, that would invalidate the entire purpose because everyone could just self-sign their pictures. This means that the only way to effectively enforce your approach is to surrender user freedom, and that runs contrary to the Free Software Movement and the many people around the world aligned with it. It would be a very dystopian world.

[–] Blackmist@feddit.uk 9 points 2 months ago

It would also involve trusting those corporations not to fudge evidence themselves.

I mean, not everything photo related would have to be like this.

But if you wanted your photo to be able to document things, to provide evidence that could send people to prison or get them executed...

The other choice is that we no longer accept photographic, audio or video evidence in court at all. If it can no longer be trusted and even a complete novice can convincingly fake things, I don't see how it can be used.

[–] RageAgainstTheRich@lemmy.world 25 points 2 months ago (1 children)

Even a few months ago, using AI on photos was hard without specialist knowledge. I don't like the idea of this, but it's unavoidable. There is already so much misinformation, and this will make it so much worse.

[–] cley_faye@lemmy.world 23 points 2 months ago (1 children)

This is only a threat to people who take random pictures at face value, which shouldn't have been a thing for a long while now, generative AI or not.

The source of a piece of information or a picture, as well as how it was checked, has been the most important part of handling online content for decades. The fact that it is now easier for some people to make edits doesn't change that.

[–] yamanii@lemmy.world 22 points 2 months ago* (last edited 2 months ago) (2 children)

These Photoshop comments are missing the point. It's just like art: a good edit that can fool everyone takes someone who has practiced a lot and has lots of experience. Now even the lazy asses on the right can fake it easily.

[–] Drewelite@lemmynsfw.com 13 points 2 months ago (2 children)

I think this comment misses the point that even one doctored photo created by a team of highly skilled individuals can change the course of history. And when that's what it takes, it's easier to sell it to the public.

What matters is the source. What we're being forced to reckon with now is: the assumption that photos capture indisputable reality has never and will never be true. That's why we invented journalism. Ethically driven people to investigate and be impartial sources of truth on what's happening in the world. But we've neglected and abused the profession so much that it's a shell of what we need it to be.

[–] endofline@lemmy.ca 19 points 2 months ago (1 children)

Photo manipulation has existed almost since the invention of photography; it was just much harder. See this famous example of photo retouching: https://www.history.com/news/josef-stalin-great-purge-photo-retouching

[–] JackGreenEarth@lemm.ee 18 points 2 months ago (2 children)

People can write things that aren't true! Oh no, now we can't trust trustworthy texts such as scientific papers that have undergone peer review!

[–] echodot@feddit.uk 12 points 2 months ago

The Verge is well versed in writing things that are untrue

[–] adam_y@lemmy.world 17 points 2 months ago (7 children)

It's always been about context and provenance. Who took the image? Are there supporting accounts?

But also, it has always been about the knowledge that no one... absolutely no one... does lines of coke off a woven mat floor covering.

Don't do drugs, kids.

[–] conciselyverbose@sh.itjust.works 11 points 2 months ago (3 children)

I think this is a good thing.

Pictures/video without verified provenance have not constituted legitimate evidence for anything with meaningful stakes for several years. Perfect fakes have been possible at the level of serious actors already.

Putting it in the hands of everyone brings awareness that pictures aren't evidence, lowering their impact over time. Not being possible for anyone would be great, but that isn't and hasn't been reality for a while.

[–] reksas@sopuli.xyz 9 points 2 months ago* (last edited 2 months ago) (1 children)

While this is a good thing, not being able to tell what is real and what is not would be a disaster. What if every comment here but yours were generated by some really advanced AI? What they can do now will be laughable compared to what they can do years from now. And at that point, it will be too late to demand anything be done about it.

AI-generated content should have some kind of tag or mark inherently tied to it that can be used to identify it as AI-generated, even if only part of it is used. No idea how that would work, though, if it's even possible.
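The "even if only part is used" caveat is exactly where naive tagging schemes break. A minimal sketch (hypothetical, not any deployed system) of the simplest possible tag, a registry of content hashes, shows why: the tag only matches exact byte-for-byte copies, so any crop, re-encode, or partial reuse evades it.

```python
import hashlib

# Naive "tag": register the SHA-256 of generated content in a lookup
# set. Illustrates the weakness the comment anticipates: hashing only
# identifies exact copies, not derivatives or fragments.
registry = set()

def tag_generated(content: bytes) -> str:
    """Record a piece of AI-generated content and return its tag."""
    digest = hashlib.sha256(content).hexdigest()
    registry.add(digest)
    return digest

def is_tagged(content: bytes) -> bool:
    """Check whether this exact content was registered as generated."""
    return hashlib.sha256(content).hexdigest() in registry

fake = b"ai-generated image bytes"
tag_generated(fake)
assert is_tagged(fake)           # exact copy is caught
assert not is_tagged(fake[:10])  # partial reuse slips through
```

A tag that survives partial reuse would need to live in the content itself (a robust watermark) rather than alongside it, which is the hard, possibly unsolvable part the commenter is gesturing at.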

[–] conciselyverbose@sh.itjust.works 11 points 2 months ago (12 children)

You already can't. You can't close Pandora's box.

Adding labels just creates a false sense of security.

[–] PenisDuckCuck9001@lemmynsfw.com 8 points 2 months ago* (last edited 2 months ago) (1 children)

The world's billionaires probably know there's lots of photographic evidence of stuff they did at Epstein's island floating around out there. This is why they're trying to make AI produce art so realistic that photographs are no longer considered evidence, so they can just claim it's AI-generated if any of that stuff ever gets out.
