this post was submitted on 31 Aug 2023
24 points (100.0% liked)

Fediverse


This magazine is dedicated to discussions on the federated social networking ecosystem, which includes decentralized and open-source social media platforms. Whether you are a user, developer, or simply interested in the concept of decentralized social media, this is the place for you. Here you can share your knowledge, ask questions, and engage in discussions on topics such as the benefits and challenges of decentralized social media, new and existing federated platforms, and more. From the latest developments and trends to ethical considerations and the future of federated social media, this category covers a wide range of topics related to the Fediverse.


Threads, Meta's new microblogging platform, is updating its terms to focus on data collection from "Third Party Users".

[–] Kaldo@kbin.social 14 points 1 year ago* (last edited 1 year ago) (1 children)

Did we really need an LLM summary of an already short article? Why do you assume it can even capture the point of the article correctly in the first place? For example, it says:

The article claims that these changes are harmful to the Fediverse for several reasons:
They violate the Fediverse’s ethos of user autonomy and privacy, by forcing users to give up their data and follow Threads’ rules.

The article never said this. If anything, the author even concedes: "Granted, these sound like basic table stakes for federation to work well within the Fediverse. Most Mastodon servers collect roughly about the same amount of data for basic features to work correctly."

So how can this be "violating the fediverse's ethos" when it's something the fediverse already does? The issue is whether Facebook can be trusted with this data, not the principle of data collection itself. Because of subtle nuances like this, I'd say the summary misrepresents the original point and ends up generating incorrect clickbait. There's other stuff in it that seems made up entirely, since it isn't mentioned in the article at all.

TL;DR Fuck LLMs, stop thinking they understand context. They are just glorified autocomplete algorithms.

[–] bionicjoey@lemmy.ca 5 points 1 year ago (1 children)

A huge number of people were just waiting for a computer to get good enough at simulating coherence that they could abandon critical thinking forever. People are utterly opposed to using their brains at all.

[–] delcake@kbin.social 1 point 1 year ago

This is why I die a little inside each time I see someone post an LLM summary of an article.

As if generating a summary in the first place and then reading it is somehow less work than just reading the article to begin with.