this post was submitted on 21 Dec 2023
Thanks!
A warning, though... after your restart, I commented in that other lemmy.ml thread about the 0.19 federation problems on lemmy.ml and linked to this thread -- since you identified an important data point -- and that comment doesn't appear to have propagated. That would be a comment added to the queue after your restart, with no further restart after the comment was made. Now, I only made the comment 5 minutes ago, so maybe I'm just being excessively impatient, but...
The local view of the thread:
https://lemmy.today/comment/4245923
The remote view of the thread:
https://lemmy.ml/post/9624005?scrollToComments=true
My comment text:
I did another restart and your comment shows up in the thread.
So it seems to be some bug that makes it stop federating after it has polled the queue once.
Hmmm.
A couple thoughts:
As I commented above, this doesn't appear to be impacting every 0.19.1 instance, so there may be something on lemmy.today that is tickling it (or it and some other instances).
If you decide that you want to move back to 0.18.x, I have no idea whether Lemmy supports rolling back while continuing to use the current PostgreSQL databases, or whether there were any schema changes in the move to 0.19.x.
Something that also just occurred to me -- I don't know what kind of backup system, if any, you have rigged up, but backup systems for servers running databases normally need to be database-aware so that they can get an atomic snapshot. If you have something that just backs up files nightly, it may not have a valid, atomic snapshot of the PostgreSQL databases. If you do attempt a rollback, you might want to bring all of the services down and back up the PostgreSQL database only while they are down (a rough sketch of what I mean is below). That way, if the rollback fails, it's at least possible to get back to a valid copy of the current 0.19.1 state as it is at this moment.
If all that's old hat and you've spent a bunch of time thinking about it, apologies. Just didn't want a failed rollback to wind up in a huge mess, wiping out lemmy.today's data.
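Something along these lines, for example. This is only a sketch: it assumes a Docker Compose install, and the service names, database name, and user (`lemmy`, `postgres`) are guesses that would need to match your actual setup.

```python
#!/usr/bin/env python3
"""Rough sketch: stop Lemmy, take a pg_dump while nothing is writing, restart.

Assumes a Docker Compose install; the service names ("lemmy", "postgres") and
the database name/user are guesses -- adjust them to the actual setup.
"""
import subprocess
from datetime import datetime

COMPOSE_DIR = "/srv/lemmy"             # assumed location of docker-compose.yml
DB_SERVICE = "postgres"                # assumed name of the database service
APP_SERVICES = ["lemmy", "lemmy-ui"]   # assumed names of the app services


def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=COMPOSE_DIR, check=True)


def main():
    dump_name = f"lemmy-backup-{datetime.now():%Y%m%d-%H%M%S}.sql"
    # Stop everything that writes to the database, but leave postgres up.
    run(["docker", "compose", "stop", *APP_SERVICES])
    try:
        # Take the dump while the database is quiescent.
        with open(dump_name, "w") as out:
            print("+ docker compose exec", DB_SERVICE, "pg_dump ...")
            subprocess.run(
                ["docker", "compose", "exec", "-T", DB_SERVICE,
                 "pg_dump", "-U", "lemmy", "lemmy"],
                cwd=COMPOSE_DIR, stdout=out, check=True,
            )
    finally:
        # Bring the app back up whether or not the dump succeeded.
        run(["docker", "compose", "start", *APP_SERVICES])


if __name__ == "__main__":
    main()
```

pg_dump itself takes a transactionally consistent snapshot, so stopping the app is mostly about making sure nothing is mid-migration while the dump runs.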
I would like to roll back, but the database schema changes would mean having to restore a backup, and that could end up causing more issues, just like you were thinking.
I guess it's best to wait for a fix, and I will also see if I can troubleshoot this myself a bit. I'm guessing it's a database issue, since I can see very long-running UPDATE statements on every restart, and they may not be able to complete for some reason.
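For anyone who wants to look at the same thing, something like this lists what is currently running and for how long via pg_stat_activity -- a minimal sketch, with placeholder connection details:

```python
#!/usr/bin/env python3
"""Minimal sketch: list long-running statements via pg_stat_activity.

Connection details are placeholders; point them at the actual Lemmy database.
"""
import psycopg2

conn = psycopg2.connect(
    host="localhost", dbname="lemmy", user="lemmy", password="..."
)
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT pid,
               now() - query_start AS runtime,
               state,
               left(query, 120) AS query
        FROM pg_stat_activity
        WHERE state <> 'idle'
          AND now() - query_start > interval '1 minute'
        ORDER BY runtime DESC;
        """
    )
    for pid, runtime, state, query in cur.fetchall():
        print(f"{pid:>7}  {runtime}  {state:<10}  {query}")
```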
Possibly relevant:
https://github.com/LemmyNet/lemmy/issues/4288
This was the bug for the original 0.19.0 federation problems, and admins are reporting problems with 0.19.1 there as well.
The lemmy devs reopened the bug four hours ago, so I'm guessing that they're looking at it. Not sure if you want to submit any diagnostic data there or whatnot.
Thank you, very good to know.
My idea was to try to see what specific query is failing in the database and go from there, so I'm currently enabling logging of failed Postgres queries. Hopefully I'll see something in those logs...
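Roughly this kind of thing -- a sketch of turning on logging of failing and slow statements with ALTER SYSTEM. The connection details are placeholders and the 30-second threshold is an arbitrary guess:

```python
#!/usr/bin/env python3
"""Sketch: log failing and slow statements via ALTER SYSTEM.

Connection details are placeholders; the 30s threshold is an arbitrary guess.
"""
import psycopg2

conn = psycopg2.connect(
    host="localhost", dbname="lemmy", user="postgres", password="..."
)
conn.autocommit = True  # ALTER SYSTEM can't run inside a transaction block
with conn.cursor() as cur:
    # Log the statement text whenever a statement raises an error.
    cur.execute("ALTER SYSTEM SET log_min_error_statement = 'error';")
    # Log any statement that takes longer than 30 seconds.
    cur.execute("ALTER SYSTEM SET log_min_duration_statement = '30s';")
    # Pick up the new settings without a restart.
    cur.execute("SELECT pg_reload_conf();")
```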
Indeed, it hasn't federated. So the restart of Lemmy polls the queue once, and then it stops working again. :/