Technology
This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.
Ask in a DM before posting product reviews or ads. Otherwise, such posts are subject to removal.
Rules:
1: All Lemmy rules apply
2: Do not make low-effort posts
3: NEVER post naziped*gore stuff
4: Always post article URLs or their archived version URLs as sources, NOT screenshots. This helps blind users.
5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)
6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: Crypto-related posts, unless essential, are disallowed
I just don't think this bodes well for Facebook if a popular post or account is discovered to be fake, AI-generated drivel. And I don't think it will remain obvious once active countermeasures are put into place. It really, truly isn't very hard to generate something that is mostly "original" with these tools with a little effort, and I frankly don't think we've reached the top of the S-curve with these models yet. The authors of this article make the same point: outside of personally affected individuals recognizing their own adapted work, there's only a slim chance these hoax accounts are recognized before they reach viral popularity, especially as the models get better.
Relying on AI content being 'obvious' is not a long-term solution to the problem. You have to assume it'll only get more challenging to identify.
I just don't think there's any replacement for shrinking social media circles and abandoning the 'viral' nature of online platforms. But I don't even think it'll take a concerted effort; I think people will naturally grow distrustful of large accounts and popular posts and fall back on what and who they are familiar with.