TikTok is the absolute worst at irrational censorship. It's a shame, because the site is immensely popular and that means it's full of very interesting content. Yet this is far from the first unreasonable thing they've removed. It's well known how TikTok users came up with alternative words to dodge the filters likely to get their content removed (e.g., "unalived" instead of "killed").
Strongly agreed. I think a lot of commenters in this thread are getting derailed by their feelings towards Meta. This is truly a dumb, dumb law and it's extremely embarrassing that it even passed.
It's not just Meta. No company wants to comply with this poorly thought out law, written by people who apparently have no idea how the internet works.
I think most of the people in the comments cheering this on haven't read the bill. It requires platforms to pay news sites just for linking to them, which is utterly insane. Linking to news sites is a win-win: Facebook or Google gets to show relevant content, and the news site gets readers. This bill is going to hurt Canadian news sites, because sites like Google and Facebook will simply avoid linking to them.
There's lots of useful bots besides just summarizers. Reminder bots can be great. Some linkifying bots are also useful (like Marv in r/SCP). Bots can detect malicious spam bots. Subs like AITA use bots to tally up user votes. There's bots for moderation actions, too.
But we really could use a way to get rid of the absolutely useless bots. We don't need terrible spelling correcting bots, a bot whose sole purpose is to tell people not to put "the" in front of "Ukraine", or a bot that lectures people on AMP links.
While you're right that that's a downside of downvotes, I think that it's far better than the alternative.
Downvotes mean we have a way to discourage really bad behavior and let others see that it's discouraged. For example, suppose someone posts something bigoted. It sucks to see those kinda comments (especially when they affect you personally). When those comments are heavily downvoted, it feels better, since it tells you that the views expressed in the comment are not acceptable. It's extremely discouraging when I see bigoted posts with a positive score. Without downvoting, they all have positive scores and the most disapproval a post can get is "less positive".
It'd be nice if reporting was able to remove such comments before anyone sees them, but that will never be the case. Too many communities don't remove comments fast enough and many more simply won't remove comments unless they're really bad, if at all. Some moderators are bigots themselves and others simply don't have the ability to recognize dog whistles that may be in comments. Or they're not personally affected by the malicious comment, so they can be more easily convinced that if the comment was politely worded, it's acceptable even if it's blatantly bigoted.
To be clear, it does suck that users will use it as a disagree button on comments that are otherwise good, but that trade-off is far, far worth it. The presence of downvotes was a major reason why I used Reddit (and now this) while disliking the likes of Twitter.
The importance of jump-starting can't be overstated. Most people will go to the community that already has content. If a community is empty, a lot of people won't even start participating in it. Plenty of people who make posts want them to be discussed, so they're only looking for active communities.
That would put load on PeerTube that it otherwise wouldn't have, though. I'm not sure how well it will be able to scale; hosting videos is really expensive. If something is already on YouTube, it may be best to just leave it there so as not to put all our weight on a new, untested product.
Honestly, I kinda question whether it's a good investment of time to try to allow deletion from the public-facing parts of the internet, given the numerous places where your content will be cached or otherwise stored.
There is certainly some value in simply making it as hard as possible to find things you want to delete. Why let perfect be the enemy of good, after all. There's plenty of types of content we certainly want to do our best at deleting even if we can't be perfect. Eg, do you wanna be the one to tell a revenge porn victim, "sorry, we can't make it harder to find the content that harms you because we can't delete all of it anyway"?
But at the same time, development time is limited. Everything is a trade off. We do have to decide what is most important, because we can't do it all immediately. The fact we can't actually delete everything does have to be a factor in this prioritization, too.
There is something to be said about ensuring people know and understand that nothing can truly be 100% deleted once it's posted on the internet. Not that Lemmy is doing good about that, either (especially since deleted comments apparently lie about being deleted).
All this said, I do think federated, reliable deletion is critical for illegal content. Such content needs to be removed quickly and easily from as many places as possible. Without this, instance owners are put at considerable legal risk. This risk poses a threat to the scalability of the Fediverse.
I love my smart lights. It's convenient controlling my lights with my voice and setting up automation rules for them.
Yes, there are some privacy concerns. Personally, I just assume the worst might happen and consider it worth it. Honestly, I just don't really care much if Philips knows when I turn my lights on. I mean, my neighbours can figure that out just by looking at my place.
But on the other hand, Rust is a highly desirable language, whereas PHP has a historically bad rap. I don't think devs necessarily want the easiest language. They want whatever is most enjoyable to use. Tooling support also matters. Static typing, for example, makes unfamiliar code way easier to understand. I've contributed to a lot of unfamiliar server codebases, and I've noticed that ones in languages like Go are a lot easier because the static typing makes the code easier to read. In particular, I found servers written in Python hard to work with, and it's not for lack of experience with the language (I've been using Python for longer than Go).
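To illustrate the static-typing point with a tiny, made-up example (the `Comment` type and `totalScore` function here are hypothetical, not from any real server codebase): in Go, the struct definition and function signature alone tell a newcomer exactly what data flows in and out, which in an untyped Python codebase you'd have to learn by tracing callers.

```go
package main

import "fmt"

// Comment is a hypothetical type for one threaded comment.
// The field types document the data shape for any reader.
type Comment struct {
	ID       int64
	Author   string
	Score    int
	Children []Comment
}

// totalScore sums a comment's score with all of its descendants'.
// The signature alone says: takes a Comment, returns an int.
func totalScore(c Comment) int {
	sum := c.Score
	for _, child := range c.Children {
		sum += totalScore(child)
	}
	return sum
}

func main() {
	c := Comment{
		ID: 1, Author: "alice", Score: 3,
		Children: []Comment{{ID: 2, Author: "bob", Score: 2}},
	}
	fmt.Println(totalScore(c)) // 5
}
```

The equivalent dynamically typed function would work fine, but nothing in its source tells you whether `Children` holds comment objects, dicts, or IDs; here the compiler enforces it.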
How easy it is to run the code also matters. Has anyone tried that with Lemmy? I was gonna run a dev kbin instance to try to make some changes, but the amount of work it seemed to require just to run the server was more than I wanted to do at the time (I really just want something as close as possible to a single-command way to run the server locally, so I can verify my changes work). Ease of contributing matters a lot for whether I actually bother to contribute.
You've never heard of Rust? Today's lucky ten thousand, then. I've personally never had a chance to use Rust, but it's the language I'm most interested in, based on all the things I've heard about it.
Though I'm personally on kbin, and naturally there's the most interest in fixing issues on your own instance. Kbin sadly is just PHP, but whatever. I was gonna make a bug fix yesterday, but the steps to spin up a dev instance are so long that I got lazy and didn't bother. I'm spoiled by all the servers at my work that I can start with a single command, so having to spend potentially a few hours setting up a server feels like too much now (and let's be honest, setting up a dev env is the most boring and annoying part of our job).
What I'm most happy about is that the Fediverse so far seems to be mostly actually pretty good people (though I've been largely chilling in kbin since the blackout started -- it only just turned on federation). Most past attempts to abandon reddit only saw the most toxic, horrible people leave. Sites like Voat were never an option because the users were awful. It's nice that so far, I haven't really seen any of that. In fact, it feels the opposite, with the people who left reddit being disproportionately great people, with the toxic people being more likely to stay on reddit.
I wonder if it'll last? I hope so. I wanted to leave reddit in the past but never felt like there was anywhere comparable to go that wasn't shit.
Strongly agreed. I view this as the biggest issue with LLMs. They will hallucinate a confidently incorrect answer for those cases. It makes them misinformation machines.