516
submitted 9 months ago by L4s@lemmy.world to c/technology@lemmy.world

TikTok ran a deepfake ad of an AI MrBeast hawking iPhones for $2 — and it's the 'tip of the iceberg'::As AI spreads, it brings new challenges for influencers like MrBeast and platforms like TikTok aiming to police unauthorized advertising.

[-] Asudox@lemmy.world 14 points 9 months ago

And that is why we need a pixel poisoner but for videos.

[-] KairuByte@lemmy.dbzer0.com 19 points 9 months ago

I’m not familiar with the term, and Google shows nothing that makes sense in context. Can you explain the concept?

[-] Omniraptor@lemm.ee 10 points 9 months ago* (last edited 9 months ago)

Here specifically it's a technique for altering images so that they're distorted to the "perception" of generative neural networks, and therefore unusable as training data, while still looking normal to a human.

The general term is https://en.wikipedia.org/wiki/Adversarial_machine_learning#Data_poisoning

One example of a tool that does this is https://glaze.cs.uchicago.edu/, though I have doubts about its imperceptibility.
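
For the curious, here's a rough sketch of the general idea in PyTorch. To be clear, this is not Glaze's actual algorithm (which targets style mimicry specifically); it's just the adversarial-perturbation recipe such tools build on. The file paths are placeholders, and a pretrained ResNet-18 stands in for whatever feature extractor a real tool would target.

```python
# Rough sketch of feature-space "cloaking" -- NOT Glaze's actual algorithm,
# just the general adversarial-perturbation idea it builds on.
# Assumptions: "artwork.png" is a placeholder path, and a pretrained
# ResNet-18 stands in for the targeted feature extractor.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

eps = 4 / 255            # max per-pixel change, small enough to be subtle
steps, step_size = 50, 1 / 255

extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in extractor.parameters():
    p.requires_grad_(False)

img = TF.to_tensor(Image.open("artwork.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    clean_features = extractor(img)   # logits, used loosely as "features"

delta = torch.zeros_like(img, requires_grad=True)
for _ in range(steps):
    features = extractor(img + delta)
    # Gradient *ascent*: push the perturbed image's features away
    # from the clean image's features.
    loss = F.mse_loss(features, clean_features)
    loss.backward()
    with torch.no_grad():
        delta += step_size * delta.grad.sign()
        delta.clamp_(-eps, eps)       # stay inside the pixel budget
    delta.grad.zero_()

poisoned = (img + delta).clamp(0, 1).detach()
TF.to_pil_image(poisoned.squeeze(0)).save("artwork_cloaked.png")
```

And the imperceptibility doubt in a nutshell: the bigger you make eps, the more visible the artifacts get, while a small eps is easier for preprocessing or retraining to wash out.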

[-] SoaringDE@feddit.de 9 points 9 months ago

Yeah, I'm at a loss as well. Is it a way to prove the source of a video?

[-] wildginger@lemmy.myserv.one 3 points 9 months ago

It's AI poison. You alter the data in such a way that the image looks unchanged to the human eye, but when image-generation AI ingests it as training data, the alterations ruin its ability to make correlations and recognize patterns.

It's toxic to the entire data set too, so it can degrade the AI's output across the board, as long as it's among the images used to train the AI.
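
You can see the whole-data-set effect with a toy experiment. The sketch below uses label flipping on synthetic data as a crude stand-in for the subtler pixel-level poisoning discussed above; the dataset and numbers are purely illustrative.

```python
# Toy illustration of data poisoning: flip 10% of the training labels
# and compare held-out accuracy. Label flipping is a crude stand-in
# for the subtler pixel-level attacks discussed above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
y_bad = y_tr.copy()
flip = rng.choice(len(y_bad), size=len(y_bad) // 10, replace=False)
y_bad[flip] = 1 - y_bad[flip]          # poison 10% of the labels

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)

print("clean   :", clean.score(X_te, y_te))
print("poisoned:", poisoned.score(X_te, y_te))
```

How much the poisoned model degrades depends on the model and the fraction poisoned, but the shape is always the same: a minority of bad samples drags down behavior learned from the whole set.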

[-] p03locke@lemmy.dbzer0.com 1 points 9 months ago

That seems about as effective as those No-AI pictures artists like to pretend will poison AI data sets. A few altered pixels aren't going to fool an AI, and anything more than that makes a real image look AI-generated, ironically.

[-] wildginger@lemmy.myserv.one 3 points 9 months ago

It can seem like whatever you want it to; it's already been used and has poisoned data sets.

[-] p03locke@lemmy.dbzer0.com 1 points 9 months ago

Wake me up when orgs like Stability AI or OpenAI bitch about this technology. As it stands now, it's not even worth mentioning, and people are freely generating whatever pictures, models, deepfakes, etc. they want.

[-] stolid_agnostic@lemmy.ml 1 points 9 months ago

It’s a bit unclear what you’re after here. Don’t do it unless it’s already perfect?

[-] wildginger@lemmy.myserv.one -1 points 9 months ago

Why would they openly bitch about it? That's free advertising that it works. Not to mention, you can't poison food someone has already eaten: they have full sets of scrubbed data they can revert to if a newly added batch turns out to be poisoned. They just need to be cautious about newly added data, something like the sketch below.

It's not worth mentioning if you don't understand the tech, sure. But for people who make publicly viewable content, this is pretty important.
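
The clean-snapshot part is mundane engineering. A minimal sketch, assuming a local dataset/ directory (all paths and file names here are hypothetical): record a content hash for every vetted file, then treat anything that isn't in the frozen manifest as unvetted.

```python
# Minimal sketch of the "clean snapshot" idea: freeze a manifest of
# content hashes for a vetted dataset, then flag files added since.
# "dataset/" and "clean_manifest.json" are hypothetical paths.
import hashlib
import json
from pathlib import Path

def manifest(dataset_dir: str) -> dict:
    """Map each file's relative path to a SHA-256 of its bytes."""
    root = Path(dataset_dir)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

# Freeze the known-clean state once...
Path("clean_manifest.json").write_text(json.dumps(manifest("dataset/")))

# ...then, after pulling in a new batch, anything absent from the frozen
# manifest is "new food" that still needs vetting before training on it.
clean = json.loads(Path("clean_manifest.json").read_text())
current = manifest("dataset/")
unvetted = sorted(set(current) - set(clean))
print(f"{len(unvetted)} new files need review:", unvetted[:5])
```

Hashes won't tell you whether a new file is poisoned, only that it hasn't been vetted yet; the actual vetting is the hard part.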

[-] stolid_agnostic@lemmy.ml 1 points 9 months ago

It’s sort of like CAPTCHAs: a human brain can recognize photos of crosswalks or bikes or whatever, but it’s really hard to train a bot to do that. This is similar, but in video format.
