People, if an instance is crumbling, sign up on another instance! When you are able to use lemmy.world again, use lemmy2opml or lemmy_migrate (or any other tool that works; there's a list on the Awesome Lemmy GitHub page) to migrate your followed communities to the new instance.
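If you're curious what those migration tools are roughly doing under the hood, it's something like the sketch below. This is only an outline: the endpoint paths are from the Lemmy v3 HTTP API as I remember it, and auth handling has changed between Lemmy versions, so don't treat it as a drop-in script.

```python
# Rough outline of a lemmy_migrate-style subscription move (assumed Lemmy v3
# HTTP API paths; newer versions send the JWT as a Bearer header instead of
# the `auth` parameter).
import requests

def login(instance: str, user: str, password: str) -> str:
    """Log in and return the JWT auth token."""
    r = requests.post(f"https://{instance}/api/v3/user/login",
                      json={"username_or_email": user, "password": password})
    r.raise_for_status()
    return r.json()["jwt"]

def subscribed_communities(instance: str, jwt: str) -> list[str]:
    """Return handles like 'weirdstuff@lemmy.world' for everything you follow."""
    handles, page = [], 1
    while True:
        r = requests.get(f"https://{instance}/api/v3/community/list",
                         params={"type_": "Subscribed", "limit": 50,
                                 "page": page, "auth": jwt})
        r.raise_for_status()
        batch = r.json()["communities"]
        if not batch:
            return handles
        for c in batch:
            actor = c["community"]["actor_id"]   # e.g. https://lemmy.world/c/weirdstuff
            host = actor.split("/")[2]
            handles.append(f'{c["community"]["name"]}@{host}')
        page += 1

def follow(instance: str, jwt: str, handle: str) -> None:
    """On the new instance, resolve a community by handle and subscribe to it."""
    r = requests.get(f"https://{instance}/api/v3/community",
                     params={"name": handle, "auth": jwt})
    r.raise_for_status()
    community_id = r.json()["community_view"]["community"]["id"]
    requests.post(f"https://{instance}/api/v3/community/follow",
                  json={"community_id": community_id, "follow": True, "auth": jwt})

# Usage: pull the list from the old instance, re-follow everything on the new one.
# old_jwt = login("lemmy.world", "me", "hunter2")
# new_jwt = login("other.instance", "me", "hunter2")
# for handle in subscribed_communities("lemmy.world", old_jwt):
#     follow("other.instance", new_jwt, handle)
```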
I don't think the problem is so much on the identity side as on the communities side
Like, I can't access any community on .world while the instance is down
You actually can; you just append @lemmy.world to the community name when accessing it from another instance that's federated with lemmy.world, and once lemmy.world comes back up your contributions will be there. For that matter, any instance that's federated with the instance you're posting from will be able to participate in the discussion with you. The only thing you can't do with a community while the host instance is down is subscribe to it. It would still get added to your subscriptions if you try; the hosting instance just won't know until it comes back up and eats through the outboxes of federated instances to "catch up".
Edit: When it does come back up it'll also get any messages sitting in federated outboxes, so your posts will ultimately show up on the host instance, just posted by your alt account.
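To make the "just append @lemmy.world" part a bit more concrete, here's a toy sketch of the idea (made-up names and structures, not Lemmy's actual code): your home instance keeps its own copy of every remote community it has already seen, so browsing and posting to name@host works against that local copy even while the host is unreachable.

```python
# Toy model of how a home instance resolves "community@host" from its own
# cache; only brand-new communities need the remote host to be online.
from dataclasses import dataclass, field

@dataclass
class Community:
    name: str                                   # "weirdstuff"
    host: str                                   # "lemmy.world"
    posts: list = field(default_factory=list)   # locally cached copies

local_cache: dict[tuple[str, str], Community] = {}

def resolve(handle: str, home_host: str) -> Community:
    """Look up 'name' or 'name@host' in the home instance's local cache."""
    name, _, host = handle.partition("@")
    host = host or home_host
    try:
        return local_cache[(name, host)]        # works even if `host` is down
    except KeyError:
        # Discovering (and therefore subscribing to) a community the instance
        # has never seen requires fetching it from `host`, which is the one
        # thing that breaks while that server is offline.
        raise LookupError(f"{handle} not cached; need {host} online to fetch it")
```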
wait so how does this work on a technical level?
say I am part of a community, /c/weirdstuff, that's hosted on lemmy.world.
if lemmy.world is down, how do my comments get to lemmy.world? are they stored on whatever instance I am registered on and then synced to lemmy.world once it's up?
Yes
Ok, but how will other people know I replied to a comment or posted, if the community on the original server is down?
They'll get it when it eventually comes back up
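Roughly like this, if you want the store-and-forward picture (a toy sketch, not Lemmy's actual implementation): the comment is written to your home instance's database immediately, and a delivery job sits in an outbox queue that gets retried until lemmy.world answers again.

```python
# Store-and-forward sketch: save locally first, queue the ActivityPub
# delivery, retry until the target instance is reachable again.
from collections import deque
import requests

local_db: list[dict] = []          # stand-in for the home instance's database
outbox: deque[dict] = deque()      # queued deliveries to remote inboxes

def post_comment(body: str, community: str, target_host: str) -> None:
    comment = {"body": body, "community": community}
    local_db.append(comment)       # visible on your own instance right away
    outbox.append({"activity": {"type": "Create", "object": comment},
                   "inbox": f"https://{target_host}/inbox"})

def try_deliver(job: dict) -> bool:
    """One delivery attempt; fails while the target host is down."""
    try:
        return requests.post(job["inbox"], json=job["activity"], timeout=10).ok
    except requests.RequestException:
        return False

def deliver_outbox_once() -> None:
    """One pass over the queue; anything undeliverable stays queued for later."""
    for _ in range(len(outbox)):
        job = outbox.popleft()
        if not try_deliver(job):
            outbox.append(job)

# A real server runs something like deliver_outbox_once() on a timer with
# backoff, which is how everything "catches up" once lemmy.world is back.
```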
Ok, but the question that arises is: if the community is duplicated on every server that accesses it, isn't that a bit of a waste of computational power and disk space?
Especially considering Lemmy is pretty small right now, but in the future you could hopefully have a much larger audience
Well, in a way yes, but that's how federation/decentralization works. It's like with email: everyone gets a copy, and if a message doesn't go through to someone it can be redelivered.
Centralized services are usually more efficient than decentralized ones, but efficiency isn't the primary goal of the fediverse
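Just to put rough numbers on the "waste" question (all figures made up, purely illustrative): every instance with subscribers stores its own copy of each post, so the aggregate storage grows with the number of instances, but the load on any single instance stays about the same as in the centralized case.

```python
# Back-of-the-envelope duplication cost; every number here is an assumption.
avg_post_size_kb = 5              # text + metadata per post
posts_per_day = 20_000            # across all communities
subscribing_instances = 300       # instances that keep a copy

per_instance_gb_per_day = avg_post_size_kb * posts_per_day / 1_000_000
total_gb_per_day = per_instance_gb_per_day * subscribing_instances

print(f"each instance stores ~{per_instance_gb_per_day:.1f} GB/day")   # ~0.1 GB/day
print(f"network-wide total  ~{total_gb_per_day:.0f} GB/day")           # ~30 GB/day
```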
My main concern with this is: if only a handful of centralized social networks have reached long-term stability, and most of them are unprofitable, how can Lemmy (or any other FOSS fediverse project) hold itself up in the long run on 2 unpaid developers and an immense amount of unpaid work from volunteers?
Because ok, lemmy.world is looking for an experienced sysadmin, and that post already got a little backlash, but this isn't sustainable long term; it's impossible to keep scaling like that.
And I feel that's one of the biggest things holding back the fediverse
On the 2 developers and the volunteers: the same can be asked about a lot of FOSS software. Typically what stabilizes FOSS development is when developers start getting paid to contribute to the project by a company they work for, but lots of FOSS software has made it purely through donations (the easiest example being MediaWiki and Wikipedia).
Web hosting is definitely the harder question. In the grand scheme of things, Lemmy instances and other fediverse tech will likely end up pseudo-centralized around a handful of companies, like email. Lemmy is very resource intensive, as you guessed. The good news is that a very large share of that resource consumption is storage, and storage is cheap. Though I know I've seen TheDude, the owner of the sh.itjust.works instance (another very stable one), comment on how CPU-, network- and memory-intensive a busy instance can get. A lot of the early 500 errors instances were seeing were definitely caused by resource constraints.
Not sure whether other instances can communicate among themselves to stay up to date before the original server comes back up and everything gets synced.