this post was submitted on 17 Mar 2025
325 points (99.4% liked)

Technology


The Chinese Communist Party’s (CCP’s) national internet censor has announced that all AI-generated content will be required to carry labels that are explicitly visible or audible to its audience and embedded in metadata. The Cyberspace Administration of China (CAC) has released the transcript of the media questions and answers (akin to an FAQ) on its Measures for the Identification of Artificial Intelligence Generated and Synthetic Content [machine translated]. We saw the first signs of this policy move last September, when the CAC’s draft plans emerged.

The regulation takes effect on September 1, 2025, and will compel all service providers (i.e., providers of generative AI models such as LLMs) to “add explicit labels to generated and synthesized content.” The directive covers all types of output: text, images, video, audio, and even virtual scenes. It also orders app stores to verify whether the apps they host follow the regulations.

Users will still be able to request unlabeled AI-generated content for “social concerns and industrial needs.” However, the generating app must reiterate the labeling requirement to the user and log the request so the content is easier to trace. In that case, the responsibility for adding the AI-generated label and metadata falls on the requesting user or entity.
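The metadata half of that requirement is easy to picture in code. Below is a minimal, hypothetical sketch of how a generating service might embed such a label in a PNG's text chunks using Python's Pillow library; the field names (`AIGC`, `Label`, `ContentProducer`) are illustrative assumptions, since the actual required fields and format are defined by the CAC's accompanying labeling standard, not by this sketch.

```python
# Minimal sketch: embedding an "AI-generated" label in PNG metadata with Pillow.
# The chunk key and JSON fields below are illustrative assumptions, not the
# official CAC-mandated schema.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def save_with_aigc_label(img: Image.Image, path: str, producer: str) -> None:
    meta = PngInfo()
    # Hypothetical label payload marking the content as AI-generated
    meta.add_text("AIGC", json.dumps({
        "Label": "AI-generated",
        "ContentProducer": producer,
        "EffectiveRegulation": "CAC Measures, effective 2025-09-01",
    }))
    img.save(path, pnginfo=meta)


if __name__ == "__main__":
    img = Image.new("RGB", (512, 512), "white")  # stand-in for generated output
    save_with_aigc_label(img, "labeled_output.png", "example-ai-service")
    # Reading the file back shows the embedded label in its text chunks
    print(Image.open("labeled_output.png").text["AIGC"])
```

The explicit label the measures also require (something the audience can directly see or hear) would be applied separately, for example as an on-image caption or a spoken disclosure, in addition to the metadata.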

top 24 comments
[–] futatorius@lemm.ee 2 points 53 minutes ago

I suggest the shit emoji be used as the indicator.

[–] lorty@lemmy.ml 0 points 37 minutes ago (1 children)

China does good thing.
Average lemmitor: this is bad actually.

[–] cm0002@lemmy.world 1 points 26 minutes ago

Says the .ml tankie in a thread full of lemmings expressing support and agreement with it

LMAO why don't y'all take care of your censorship problems on .ml first

[–] JustZ@lemmy.world 16 points 16 hours ago

Meanwhile best we can do in America is hide tracking dots in every color printer.

[–] singletona@lemmy.world 67 points 1 day ago (3 children)

...I'm...

In full agreement with this*

*with the provision that there are ways to ensure this isn't weaponized so that dissident or oppositional speech/photos/art isn't flagged as AI so that it can be filtered out.

[–] ygajbm2sjcxbggbc0zfb@lemmy.world 5 points 19 hours ago (1 children)

Or propaganda that doesn’t have it is taken as legitimate.

[–] DragonTypeWyvern@midwest.social 3 points 18 hours ago

That doesn't change anything though.

[–] avidamoeba@lemmy.ca 11 points 1 day ago* (last edited 23 hours ago) (1 children)

It doesn't matter whether this is used against dissidents or not; their speech is censored either way. That shouldn't detract from the much larger positive effect this will have on the majority of people.

[–] pycorax@lemmy.world 2 points 14 hours ago

This does give them another tool, though: instead of censoring dissidents' content outright, they can label it as AI to hurt their credibility. I don't think it doesn't matter.

[–] CosmoNova@lemmy.world 9 points 1 day ago (1 children)

So, in short, you disagree, which is reasonable given the circumstances.

Besides, wouldn't it make much more sense to verify and mark genuine content rather than the slop that is becoming the majority of content?

[–] Imgonnatrythis@sh.itjust.works 4 points 20 hours ago

I like that approach better. Just like I'd rather know what doesn't cause cancer in the state of California at this point.

[–] filister@lemmy.world 8 points 18 hours ago (1 children)

They don't want to pollute their training data.

[–] DragonTypeWyvern@midwest.social 18 points 18 hours ago

Honestly?

Good. I assume this is more about controlling narratives but it's something that should be happening no matter what side of the AI debate you're on.

[–] avidamoeba@lemmy.ca 32 points 1 day ago (1 children)

When the dirty commies do the reforms we all know we need in our countries...

We're so fucked. ⚰️

[–] febra@lemmy.world 3 points 7 hours ago

As a dirty commie: you’ll get over it someday.

[–] riskable@programming.dev 17 points 1 day ago* (last edited 1 day ago) (2 children)

Not a bad law if applied to companies and public figures. Complete wishful thinking if applied to individuals.

For companies it's actually enforceable, but for individuals it's basically impossible, and even if you do catch someone uploading AI-generated stuff: who cares? It's the intent that matters when it comes to individuals.

Were they trying to besmirch someone's reputation by uploading false images of that person in compromising situations? That's clear bad intent.

Were they trying to incite a riot or intentionally spreading disinformation? Again, clear bad intent.

Were they showing off something cool they made with AI generation? It is of no consequence and should be treated as such.

[–] RandomVideos@programming.dev 1 points 7 hours ago (1 children)

Would applying a watermark to all the training images force the AI to add a watermark?

[–] riskable@programming.dev 1 points 22 minutes ago

Nope. In fact, if you generate a lot of images with AI you'll sometimes notice something resembling a watermark in the output, demonstrating that the images used to train the model did indeed have watermarks.

Removing such imaginary watermarks is trivial in image2image tools though (it's just a quick extra step after generation).

[–] lily33@lemm.ee 9 points 1 day ago (2 children)

I agree that it's difficult to enforce such a requirement on individuals. That said, I don't agree that nobody cares for the content they post. If they have "something cool they made with AI generation" - then it's not a big deal to have to mark it as AI-generated.

[–] Imgonnatrythis@sh.itjust.works 5 points 20 hours ago

Notice: Those are not my girlfriend's boobs. I used Photoshop with an AI plug-in to make them look fuller.

No thanks, mate. Government and anyone selling anything should be held to those standards. If you are an influencer pushing a product for profit, that applies to you too.

[–] riskable@programming.dev 4 points 1 day ago (2 children)

Why stop at "AI-generated"? Why not have the individual post their entire workflow, showing which model they used, the prompt, and any follow-up editing or post-processing they did to the image?

In the 90s we went through this same shit with legislators trying to ban photoshopped images (hah: They still try this from time to time). Then there were attempts at legislating mandatory watermarks and similar concepts. It's all the same concept: New technology scary, regulate and restrict it.

In a few years AI-generated content will be as common as photoshopped images and no one will bat an eye because it'll "just be normal". A photographer might take a picture of a model (or a number of them) for a cover or something then they'll use AI to change the image after. Or they'll use AI to generate an image from scratch and then have models try to copy it. Or they'll just use AI to change small details in the image such as improving lighting conditions or changing eye color.

AI is very rapidly becoming just another tool in photo/video editing and soon it will be just another tool in document writing and audio recording/music creation.

[–] Stanley_Pain@lemmy.dbzer0.com 1 points 14 hours ago

This really underscores the need for complete reform of the entire media apparatus....

[–] Buelldozer 3 points 21 hours ago* (last edited 21 hours ago)

In a few years AI-generated content will be as common as photoshopped images and no one will bat an eye because it’ll “just be normal”.

We're already there; you just aren't noticing them because they've progressed beyond the six-fingers / halo-ring-in-the-eyes level of believability.

[–] ShinkanTrain@lemmy.ml 2 points 20 hours ago* (last edited 20 hours ago)

My favorite genre of comment section is when every other post is talking about how someone/thing the poster doesn't like does something they think is good, but they gotta reassure everyone that it'll still be bad.

Yeah, he saved the kitten from the tree, But at what cost? 😔