[-] CosmicCleric@lemmy.world 21 points 3 months ago

From the article...

The real danger lies in those images that are crafted with the explicit intention of deceiving people — the ones that are so convincingly realistic that they could easily pass for authentic historical photographs.

Fundamentally, at a meta level, the issue is this: are people allowed to deceive other people by using AI to do so?

Should all realistic AI generated things be labeled as such?

[-] Drewelite@lemmynsfw.com 31 points 3 months ago

There's no realistic way to enforce that. The answer is to go the other way. We used to have systems in place for accountability of information. We need to bring back institutions for journalism and historians to be trustworthy sources that cite their work and show their research.

[-] CosmicCleric@lemmy.world 6 points 3 months ago* (last edited 3 months ago)

There’s no realistic way to enforce that.

You can still mandate through laws that any AI generated product would have to have a label on it, identifying itself as such. We do the same thing today with other products that are manufactured and sold (recycling icons, etc).
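As a rough illustration of what a machine-readable label might look like, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the field names, the signing scheme, and the key handling are invented, not any real standard (real-world provenance efforts such as C2PA work along broadly similar lines, with proper key infrastructure).

```python
import hashlib
import hmac
import json

# Illustrative only: a shared signing key stands in for what would, in
# practice, be an asymmetric key pair held by the generator.
SECRET_KEY = b"generator-signing-key"

def attach_label(image_bytes: bytes) -> dict:
    """Produce a provenance manifest declaring the content AI-generated."""
    manifest = {
        "ai_generated": True,
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_label(image_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, and that the manifest matches these exact bytes."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed.get("content_sha256") == hashlib.sha256(image_bytes).hexdigest()
    )

image = b"\x89PNG...fake image bytes for illustration"
label = attach_label(image)
assert verify_label(image, label)             # untouched image passes
assert not verify_label(image + b"x", label)  # edited image fails
```

The point of the signature is that a label which can simply be stripped or edited out is worthless; tying it cryptographically to the exact bytes is the part any real mandate would have to get right.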

As far as enforcement goes, the public themselves would ultimately (or in addition to) be the enforcers, as the recent British royal family photos scandal suggests.

But ultimately, humanity has to start considering laws that affect the whole species, ones that don't just stop at an individual country's border.

[-] Drewelite@lemmynsfw.com 7 points 3 months ago

Don't get me started on the sham that is recycling icons 😂

I'm all for making regulation that would require media companies to disclose that something is fake if it could be reasonably taken as truth. But that doesn't solve the problem of anyone with a computer pumping fake images on to the web. What you're suggesting would require a world government that has chip level access to anything with a CPU.

As for the public enforcing the truth; that's what I'm suggesting. Assume anything you see online could be fake. Only trust trustworthy institutions that back up their media with verifiable facts.

[-] CosmicCleric@lemmy.world 1 points 3 months ago

What you’re suggesting would require a world government that has chip level access to anything with a CPU.

Well, not something that harsh, but I think we're looking at losing some of the faux anonymity that we have (no more sock puppet accounts, etc.).

Most people haven't thought far enough ahead on what this means, all of the ramifications, if we let AI run rampant on the human 'public square'.

Instead of duplicating my other comment on this subject, I'll just link to it here.

[-] Antagnostic@lemmy.world 6 points 3 months ago

Physical products are not the same as digital products. Your suggestions are very unrealistic.

[-] T156@lemmy.world 1 points 3 months ago

The problem is that with digital data, it's much easier to lie and get away with it. If a bot throws up an unlabelled AI-generated image, law enforcement agencies would have a much harder time tracking down who made it.

There could be hundreds, or even thousands, and the moment they pin one down, more will appear.

By comparison, physical products can only be made and enter the country so quickly. There are physical factories where they can be tracked down, and it's prohibitively expensive to spin up a new product line every time the other one is shut down.

[-] CosmicCleric@lemmy.world 1 points 3 months ago* (last edited 3 months ago)

Hot take incoming...

If a bot throws up an unlabelled AI generated image, law enforcement agencies would have a much harder time tracking down who made it.

Well they would just start with the person who has the user account, or the site that the user account is associated with (we might end the days of being able to have sock puppet accounts). Or they get that information from the NSA (the government knows every one of your porn fetishes).

Honestly, I realize what I'm stating is not as easy to do as I'm saying it is, and making it actually work would be kind of ugly and not completely fair to all parties, but it is something that is actually doable, and needed.

We shouldn't just throw up our hands on day one and say "fuck it, nothing can be done about it". If we do, we all suffer in the pollution of the human conversational-sphere, to the point that no one can converse with anyone anymore because of all the garbage.

When we stop talking to each other, because we think everything is just AI generated, that's a formula for destruction for the human race. We have to be able to talk to each other, and be confident that we're actually talking to each other, and not a robot.

/getsoffsoapbox

[-] Drewelite@lemmynsfw.com 1 points 3 months ago

Well for the majority of human existence we got by on talking to each other in person. So I think the collapse of humanity is a bit dramatic.

Now, as we've seen with torrenting, if any country doesn't comply or enforce laws against how their citizens should interact with the internet you can just VPN through that country to do what you want.

Ok so

  1. Create the infrastructure for an entire world government.
  2. Force every country to join and fully enforce laws tying every person to their online accounts.
  3. Of course this will create a dangerous police-state like China's government for many countries where speaking out against your government is dealt with harshly. So either abolish free speech or fix all corruption in all the countries in the world.
  4. Of course this level of control over the world will attract a lot of corruption itself, so build an unassailable global set of checks and balances for how this government should be run that literally everyone on earth can agree on.

Or

Proper journalism.

[-] CosmicCleric@lemmy.world 2 points 3 months ago* (last edited 3 months ago)

Well for the majority of human existence we got by on talking to each other in person. So I think the collapse of humanity is a bit dramatic.

We have never had the ability to con each other so completely, and in such large numbers, as we do today with the Internet and specialized networks.

And more importantly, you always knew you were talking to another person, and not a conflict bot or an astroturfing bot, or a political party bot, etc. Now, you don't, which is my point. We can't solve problems if we don't know we're talking to a person versus a not person.

I wouldn't be so quick to dismiss what I'm saying.

[-] CosmicCleric@lemmy.world 1 points 3 months ago

Proper journalism.

If the last couple of years proves anything, that's not going to save us, not that alone.

You're making an assumption that 100% of people are aware enough to consume the proper journalism and make the proper decisions.

Right now, large swaths of people are being convinced of things that are not true through improper journalism.

this post was submitted on 22 Mar 2024
496 points (93.8% liked)

Technology
