this post was submitted on 30 Nov 2023
157 points (91.5% liked)


Fake babies, real horror: Deepfakes from the Gaza war increase fears about AI's power to mislead::Among images of the bombed out homes and ravaged streets of Gaza, some stood out for the utter horror: Bloodied, abandoned infants.

top 45 comments
[–] idiocracy@lemmy.zip 37 points 11 months ago (2 children)

this just shows that people should be skeptical of everything they read and see, even if it confirms their view/bias.

be humble enough to realize the human brian is easily fooled.

i keep the misspelling to prove the point!

[–] WhiteHawk@lemmy.world 29 points 11 months ago (2 children)

the human brian is easily fooled

I can confirm that Brian is an idiot

[–] 0x0@programming.dev 10 points 11 months ago (1 children)
[–] BrandoGil@lemmy.world 10 points 11 months ago (1 children)

Yeah, but he did maintain a positive outlook all the way through.

[–] muntedcrocodile@lemmy.world 2 points 11 months ago

Bold thing to say about a very naughty boy.

[–] AreaSIX@lemm.ee 1 points 11 months ago

I beg to differ. That's only true for the human Brian, my friend's dog Brian is a very sharp fella.

[–] Wahots@pawb.social 10 points 11 months ago

Really, just avoid consuming news from social media. Pay for a reliable newspaper, treat news posted to social media (e.g., even Lemmy) from BagPipeFucker.biz with skepticism, and validate the reputability of the site. I've seen the likes of Fox News and the NYPost posted more often than not. Also the South China Morning Post, Falun Gong-associated papers, and news sites paid for by Middle Eastern royal families, not exactly the pinnacle of independent journalism.

[–] kurwa@lemmy.world 35 points 11 months ago* (last edited 11 months ago) (1 children)

It's not like babies aren't being killed throughout all of this; Israel has literally leveled neighborhoods where people lived, children and babies among them.

This feels like some sort of ploy to suggest that innocent people aren't being killed, using AI as an excuse. I don't doubt people are faking images, but I do wonder who is creating them, because anyone from any side could be producing intentionally misleading AI-generated images.

[–] TheRealLinga@sh.itjust.works 18 points 11 months ago (1 children)

My thoughts exactly. Seems like one of those tried-and-true misinformation campaigns to sow doubt about anything Gaza-related and thus create more apathy and disinterest.

[–] BlueBockser@programming.dev -1 points 11 months ago

Absolutely true, everything is a conspiracy theory and the most convoluted version of events you can come up with to make it fit your preconceived notions is always correct.

[–] JackGreenEarth@lemm.ee 17 points 11 months ago (1 children)

There will be no way to watermark all AI images, as someone could just mod Stable Diffusion to remove the watermark. The best we can do is to doubt any photographic evidence we see.

[–] Doorbook@lemmy.world 3 points 11 months ago (2 children)

Intentionally, they sabotaged and killed journalists, defunded public media and privatized the rest, and bought out and censored social media; now it's hard to tell which images are real and which are not.

The only option, in my opinion, is for camera manufacturers to include a cryptographic hash that can be passed to an algorithm to authenticate a photograph and its metadata.
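A minimal sketch of what that could look like, assuming a per-device Ed25519 key and Python's `cryptography` library; `sign_capture` and the metadata handling are illustrative, not any real camera API:

```python
# Hypothetical sketch of in-camera signing: the manufacturer fuses a
# private key into the device, and every capture is signed on the spot.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_capture(image_bytes: bytes, metadata: dict,
                 camera_key: Ed25519PrivateKey) -> bytes:
    """Hash the pixels and the metadata together, then sign the digest."""
    digest = hashlib.sha256(
        image_bytes + json.dumps(metadata, sort_keys=True).encode()
    ).digest()
    return camera_key.sign(digest)

# Anyone with the vendor's published public key can later check the file:
# camera_key.public_key().verify(signature, digest) raises
# InvalidSignature if the image or metadata was altered after capture.
```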

[–] JackGreenEarth@lemm.ee 6 points 11 months ago (1 children)

That could very easily be abused as some sort of DRM or vendor lock-in for photos. I would rather not.

[–] bobgusford@lemmy.world 1 points 11 months ago (1 children)

Well, not necessarily. How about just embedding the following in the EXIF data: a digital signature from the original camera; a digital hash of the original image; and digital sigs for the publisher and the article where the pics will appear.

Any additional processing by a "social media content creator" - for example, adding captions to make a meme out of it - will also include the prior chain of digital sigs and hashes.

Now when it pops up on social media sites/apps, there can be little info bubbles that link to the original pic or article, or provide info on ownership of the camera along with date and timestamps of the pics.

Garbage will always exist on social media, but at least we can have these little tools to verify authentic images.
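A minimal sketch of such a signature chain, again assuming Ed25519 keys and Python's `cryptography` library; the record layout and the `append_record` helper are made up for illustration:

```python
# Sketch of the chain described above: each republisher appends a record
# covering the current image hash plus the previous signature, so edits
# stay traceable back to the camera.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def append_record(chain: list, image_bytes: bytes, signer: str,
                  key: Ed25519PrivateKey) -> list:
    record = {
        "signer": signer,                                  # e.g. "AP"
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "prev_sig": chain[-1]["sig"] if chain else None,   # links the chain
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = key.sign(payload).hex()
    return chain + [record]

# photographer -> wire service -> publisher, each signing in turn:
# chain = append_record([], raw_photo, "AhmedMohammed", photographer_key)
# chain = append_record(chain, raw_photo, "AP", ap_key)
# chain = append_record(chain, captioned_photo, "NyTimes", nyt_key)
```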

[–] 15Redstones@sh.itjust.works 2 points 11 months ago (1 children)

How would they be made secure against faking?

If the cryptographic key itself was extractable, it'd be easy to sign fake images with just a bit of custom software.

If it isn't, there's still workarounds. Buy a professional photography camera, disassemble it, extract the chip that does the signature, feed it fake GPS and image data, and you have a modified image signed as legit. A country's intelligence agency could easily do that.

Even if the camera was made completely unmodifiable, you could put it in a Faraday cage, feed it a spoofed GPS signal for fake date/time/location data, and take a picture of a high resolution screen showing your photoshopped image.

Building a system where end users are told "this image is cryptographically confirmed to be legit" just makes it easier to convince users that your fake images are legit.

[–] bobgusford@lemmy.world 1 points 11 months ago (1 children)

Oh no. No social media site should ever claim that a post, story, or image is legit.

For some viral pics/posts, it should probably show a warning that the image has no signature, an invalid signature, or a revoked signature. Otherwise, it just shows a verified signature chain, for example: BleedingHeartInfluencer*[edited]* → NyTimes*[edited]* → AP*[story]* → AhmedMohammed*[photographer,2023-12-03]*.

We can always assume nation states and other powerful people will know how to fake images, GPS, reality, etc. We can also always assume fakes will still be shared by many people without any proper authentication.

The main goal here would just be to reduce proliferation.

[–] 15Redstones@sh.itjust.works 1 points 11 months ago

In this case you'd still need a way to know who the photographer is and whether they can be trusted. The photographer at the beginning of the chain can sign anything, regardless of whether it's a real photograph, an edited one, or a real photograph of a staged scene with fake location/time data. The cryptography system could only tell you that the image originates with the same person or organisation associated with a specific cryptographic key.
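A toy verifier makes that limit concrete, assuming the same Ed25519 scheme sketched above; `signed_by` is a hypothetical helper, and a passing check says nothing about whether the scene is real:

```python
# Illustrative verifier: success proves only that the holder of this key
# signed these bytes, not that the photo depicts anything true.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def signed_by(image_bytes: bytes, signature: bytes,
              key: Ed25519PublicKey) -> bool:
    digest = hashlib.sha256(image_bytes).digest()
    try:
        key.verify(signature, digest)
        return True    # key holder signed it; trust is a separate question
    except InvalidSignature:
        return False   # altered bytes, or a different key entirely
```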

[–] CommanderCloon@lemmy.ml 1 points 11 months ago

Film a high res screen projection then

[–] tsonfeir@lemm.ee 14 points 11 months ago

The GQP in the future: The Middle East doesn’t exist, it’s just AI

[–] FluorideMind@lemmy.world 4 points 11 months ago

Oh. Here we go.

[–] autotldr@lemmings.world 2 points 11 months ago

This is the best summary I could come up with:


Other examples of AI-generated images include videos showing supposed Israeli missile strikes, or tanks rolling through ruined neighborhoods, or families combing through rubble for survivors.

In the bloody first days of the war, supporters of both Israel and Hamas alleged the other side had victimized children and babies; deepfake images of wailing infants offered photographic ‘evidence’ that was quickly held up as proof.

The propagandists who create such images are skilled at targeting people’s deepest impulses and anxieties, said Imran Ahmed, CEO of the Center for Countering Digital Hate, a nonprofit that has tracked disinformation from the war.

Around the world a number of startup tech firms are working on new programs that can sniff out deepfakes, affix watermarks to images to prove their origin, or scan text to flag specious claims that may have been inserted by AI.

While this technology shows promise, those using AI to lie are often a step ahead, according to David Doermann, a computer scientist who led an effort at the Defense Advanced Research Projects Agency to respond to the national security threats posed by AI-manipulated images.

Doermann, who is now a professor at the University at Buffalo, said effectively responding to the political and social challenges posed by AI disinformation will require both better technology and better regulations, voluntary industry standards and extensive investments in digital literacy programs to help internet users figure out ways to tell truth from fantasy.


The original article contains 953 words, the summary contains 238 words. Saved 75%. I'm a bot and I'm open source!
