this post was submitted on 26 Jul 2024
230 points (96.7% liked)

[–] Evotech@lemmy.world 1 points 1 month ago (3 children)

As long as you verify the output to be correct before feeding it back, it's probably not bad.

[–] eleitl@lemm.ee 3 points 1 month ago (1 children)

How do you verify novel content generated by AI? How do you verify content harvested from the Internet to "be correct"?

[–] Evotech@lemmy.world 2 points 1 month ago

Same way you verified the input to begin with: human labor.

[–] pennomi@lemmy.world 3 points 1 month ago

That’s correct, and the paper supports this. But people don’t want to believe it’s true, so they keep propagating this myth.

Training on AI outputs is fine as long as you filter the outputs to only things you want to see.
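
To make that concrete, here's a minimal sketch of the filtering loop (toy code, not from the paper; `train` and `verifier` are made-up stand-ins for a real training run and whatever quality check you trust, whether that's human review, tests, or a reward model):

```python
import random

def train(dataset):
    # Stand-in for a real training run: the "model" just resamples
    # its training data with a little added noise (its "mistakes").
    return lambda: random.choice(dataset) + random.gauss(0, 0.05)

def verifier(sample):
    # Hypothetical filter: in practice human review, tests, a reward
    # model, etc. Here we only accept values in a known-good range.
    return 0.0 <= sample <= 1.0

data = [random.uniform(0.2, 0.8) for _ in range(100)]  # verified seed data

for generation in range(5):
    model = train(data)
    candidates = [model() for _ in range(100)]
    # The filter is the whole point: only verified outputs go back in.
    data += [c for c in candidates if verifier(c)]
```

The filter is what keeps the feedback loop from amplifying garbage: outputs that fail the check never re-enter the training set.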

[–] Binette@lemmy.ml 1 points 1 month ago

The issue is that A.I. always makes a certain number of mistakes when outputting something. They may even be the tiniest, most insignificant mistakes. But if it internalizes them, its next output will add new mistakes on top of the ones it internalized. So on and so forth.
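
You can watch that compounding happen in a toy experiment (just an illustration of the dynamic, not any real model's training setup): treat "training" as fitting a mean and standard deviation to the data, "generating" as sampling from the fit, and feed every output, tiny errors included, back in.

```python
import random, statistics

# Toy version of unfiltered feedback: "training" = fitting a
# mean/stdev, "generating" = sampling from that fit. Each new
# generation trains only on the previous generation's outputs.
data = [random.gauss(0.0, 1.0) for _ in range(500)]  # original data

for gen in range(10):
    mu, sigma = statistics.mean(data), statistics.stdev(data)
    print(f"gen {gen}: mean={mu:+.3f} stdev={sigma:.3f}")
    # No filter: every output, small errors included, is fed back.
    data = [random.gauss(mu, sigma) for _ in range(500)]
```

Each generation's estimation error is tiny, but nothing ever corrects it, so the fitted distribution slowly drifts away from the original data's.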

Also, this is more with scraping in mind. So like, the A.I. goes on the internet, scrapes other A.I. images because there are a lot of them now, and becomes worse.