this post was submitted on 18 Jan 2024
429 points (95.2% liked)

Technology

Rep. Joe Morelle, D.-N.Y., appeared with a New Jersey high school victim of nonconsensual sexually explicit deepfakes to discuss a bill stalled in the House.

[–] kibiz0r@lemmy.world 11 points 9 months ago (1 children)

What does the method matter? If the result is an artifact convincing enough for the average person to believe that the subject knowingly posed for sex acts that never occurred, the personal experience and the social stigma are traumatizing no matter how it was made.

As the sociologist Brooke Harrington puts it, if there was an E = mc^2^ of social science, it would be SD > PD, “social death is more frightening than physical death.”

[–] guyrocket@kbin.social 1 points 9 months ago (2 children)

> What does the method matter?

That's my point. If we're going to ban AI fakes, should we then ban ALL fakes? Where do we draw the line, and how do we do that without limiting free speech? I'm not sure it is possible.

And the days of believing everything you see are over, but most don't know it yet.

[–] kibiz0r@lemmy.world 5 points 9 months ago* (last edited 9 months ago)

> Where do we draw the line

It's ever-changing. We're social animals, not math equations, so it's all according to the kind of society we want.

> how do we do that without limiting free speech?

All freedoms are in tension between "freedom to" and "freedom from". I can have the freedom to fire my gun in the air. I can have the freedom from my neighbor's randomly-falling bullets. I can't have both of those codified in law (unless I'm granted some special status over my neighbors).

I think that, many times, what we run into is a mismatch between a group thinking in terms of "freedom to" and a group thinking in terms of "freedom from".

The "freedom to" folks feel like any restriction on their ability to act is a breach of liberty, because they aren't worried about "freedom from". If, for example, I live in the middle of nowhere and have no neighbors, what falling bullets do I have to fear except my own?

The "freedom from" folks feel like having to endure the effects of others' actions is a breach of liberty, because they aren't worried about "freedom to". If I spend my life dodging falling bullets, I'm not likely to fire more into the sky.

> And the days of believing everything you see are over, but most don't know it yet.

We said the same thing about the printing press. And it plunged us into a long period of epistemic chaos, with rampant plagiarism and reverse plagiarism (attributing words to someone who never spoke them). The fallout eventually led the Crown to seize presses and allocate exclusive printing rights to a chartered monopoly (with some censorship just for funsies).

We can either complain that it's too hard and do nothing, eventually inviting an overreaction: a policy that is obviously not sustainable... Or we can learn from history, get our heads in the game, and start imagining a framework that embraces the transformative power of large-scale computing while respecting the humanity of our comrades.

C2PA is a good start, but it's probably DOA in the hacker zeitgeist. We tend to view even an opt-in standard for proof of authenticity as a gateway to universal requirements for proof of authenticity and a locked-down, tyrannical internet forever and ever. Possibly because a substantial portion of us are terminally online selfish assholes who never have to spend a second worrying about deepfakes of ourselves, and who fancy ourselves utilitarian techno-solutionists willing to sacrifice the squishy, unquantifiable, touchy-feely human emotions that just get in the way of objective rational progress toward a transhuman future. It's a noble sacrifice, we say, while profiting disproportionately and suffering none of the fallout.
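For what it's worth, the core idea behind provenance standards like C2PA is simple: bind a cryptographically signed claim to the exact bytes of a piece of media, so that any later edit breaks the binding. The sketch below is purely illustrative and is NOT the real C2PA API (actual C2PA uses X.509 certificate chains and CBOR manifests embedded in the file, not a shared HMAC key); every name and key in it is made up.

```python
import hashlib
import hmac

# Hypothetical signing key for illustration only. Real C2PA signs with
# asymmetric keys backed by a certificate chain, not a shared secret.
SIGNING_KEY = b"creator-secret-key"

def make_manifest(media_bytes: bytes, creator: str) -> dict:
    """Produce a provenance claim bound to the exact media bytes."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    claim = f"{creator}:{digest}"
    signature = hmac.new(SIGNING_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"creator": creator, "sha256": digest, "signature": signature}

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Any edit to the media invalidates the recorded hash and signature."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    if digest != manifest["sha256"]:
        return False  # media was altered after signing
    claim = f"{manifest['creator']}:{digest}"
    expected = hmac.new(SIGNING_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

photo = b"\x89PNG...raw image bytes..."
manifest = make_manifest(photo, "alice@example.com")
print(verify_manifest(photo, manifest))         # True: untouched media verifies
print(verify_manifest(photo + b"x", manifest))  # False: any edit breaks the binding
```

The point of the opt-in design is visible even in this toy version: verification only tells you whether a claim checks out, it says nothing about media that carries no manifest at all.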

[–] nybble41@programming.dev 1 points 9 months ago

You're restricting speech whether or not you confine your censorship to only AI-generated images.