[-] interolivary@beehaw.org 26 points 6 months ago

The same thing is happening globally, and it's becoming clear that moderate conservatives are very much in the minority. It worries me to no end that fascism seems to be winning, and there doesn't seem to be much we can do about it

[-] interolivary@beehaw.org 28 points 6 months ago

Moderate conservatives pretty much don't exist anymore. I had a "fiscally conservative" and "moderate" acquaintance tell me that the world would be better if there were no sexual or gender minorities

[-] interolivary@beehaw.org 29 points 7 months ago* (last edited 7 months ago)

Not sure that's much better. Did I handle that tactfully? No. Is that grounds for comparing me to a rapist? Also no.

I just really don't have much patience with people who assume that it's everybody's responsibility to shield them from things they find offensive or traumatizing, or that they feel make fun of their trauma. While I'm not unsympathetic to trauma or completely unwilling to accommodate it, if we scrub the meme community of everything that someone somewhere finds offensive or triggering, there won't be much left here – and considering how much you have to reach to say this meme is grievously making fun of trauma, or is even related to trauma at all, the bar for removing "offensive" content isn't going to be high.

So, tl;dr, just about anything can be offensive or triggering to someone, so where on earth do we draw the line? Again, I'm not unsympathetic, but isn't it a bit ridiculous to come barging in with the apparent expectation that something as inoffensive as this meme should be removed, or that they should get some sort of apology for it, or whatever their end goal was?

Edit: just to drive the point home because I'm irritated, but despite what could be assumed based on my twattery in some comments, I'm a fairly sensitive person. There's a bunch of subjects that I'm very sensitive about, but I'm not going to go around telling people that their post about $SUBJECT_MATTER is offensive to me; my sensitivities aren't anybody else's problem, they're my problem (well, mine and my therapist's). That doesn't mean I'd eg. shut up about seeing blatant racism or whatever, just about things I figure aren't going to be more widely offensive or "touchy".

[-] interolivary@beehaw.org 27 points 9 months ago

Doubtful you even understand what a Marxist is, honestly

[-] interolivary@beehaw.org 30 points 9 months ago* (last edited 9 months ago)

Shameless plug for just using your little fingies to operate the light switches and thermostats. Everything is controlled locally and you only have to pay for the light and the switch (fingers should be included in your default setup)

[-] interolivary@beehaw.org 24 points 10 months ago

Seems like you had a "when it rains, it pours" start to your week, ugh.

[-] interolivary@beehaw.org 29 points 10 months ago

But isn't this study exactly about asking trans folks how they feel about their top surgery? Not doing a study, however, would mean ignoring the ones who didn't feel satisfied with their surgery, and now those voices are included as well. They're in the minority, as expected, but at least now we have some sort of statistical validation for it too

[-] interolivary@beehaw.org 22 points 11 months ago

All you have to do is add a wide ALU to process more data at once.

Oh that's all? 😄

[-] interolivary@beehaw.org 29 points 11 months ago* (last edited 11 months ago)

The Zuck always manages to give me some pretty heavy "hello fellow humans" vibes.

"See, just like a regular human being, I also enjoy this 'eating'. It's such great fun to put nourishment in our mouth orifices, isn't it? *gags*"

[-] interolivary@beehaw.org 27 points 11 months ago* (last edited 11 months ago)

"So, what Valve invested in was WiNE, a protocol […]". Ah, game journalists; the profession where sniffing glue will actually give you an advantage

93
mmm tasty (beehaw.org)
submitted 11 months ago* (last edited 11 months ago) by interolivary@beehaw.org to c/memes@lemmy.ml

Meme (??) image. On the left side there's a picture of a cylindrical metallic container with the text

DANGER
RADIATION
☢️ (radioactivity warning symbol)
DROP
&
RUN

Co 60
3540
CURIES
7-1-63

And below it the top part of a similar container is just visible.

On the right side next to the first container are 4 lines that look like they measure out portions of the container, each with the text "mmm tasty."

Then below those lines, "measuring out" the empty space between the two containers is the text "sadness", and then below it where the next container starts is the text "another :D yeey".

(This might possibly be the worst description of a meme ever, but goddamn was this not easy 😅 Corrections / edits welcome)

209
Again (beehaw.org)
submitted 11 months ago by interolivary@beehaw.org to c/memes@lemmy.ml

Screenshot of a social media post – maybe Mastodon? – by "Cuttlefish Brand Ambassador" @Sir__Ian.

It has the text "Upstaged by cuttlefish yet again" and below it a screenshot of a preview of a news article. It has a photo of a pair of cuttlefish and below it the headline reads "Cuttlefish have ability to exert self-control, study finds"

163

Screenshot of a Mastodon post by @nikitonsky@mastodon.online. The post has text on top and an image on the bottom. The text reads:

"How are you gonna watch Oppenheimer?"

"In Emacs"

"You mean IMAX?"

"No"

Below the text is what looks like a screenshot of Emacs. There are a couple of text panes open with indistinct green text on black background, and one pane is playing Oppenheimer.

200
submitted 11 months ago* (last edited 11 months ago) by interolivary@beehaw.org to c/memes@lemmy.ml

Meme image. The top part has text on a white background:

Android: file saved successfully.

Me: and where exactly it is saved

Android:

Below that is a black and white picture of a chimpanzee (or is that a bonobo?) dressed in a long sleeve shirt and smoking a cigarette, with the caption "who the fuck knows"

[-] interolivary@beehaw.org 24 points 11 months ago

For me, the whole point of paying for streaming was so that I could support the film makers without dealing with ads.

That doesn't sound profitable. How about artists and content creators only getting 0.1% of the profits and you have to watch ads? That sounds like it'd make the executives much richer.

3
submitted 11 months ago by interolivary@beehaw.org to c/memes@sopuli.xyz
111
submitted 11 months ago by interolivary@beehaw.org to c/memes@lemmy.ml

I just saw the "slutstation" post and it reminded me of this ancient photo of mine

2
submitted 1 year ago* (last edited 1 year ago) by interolivary@beehaw.org to c/programming@beehaw.org

In a comment on my "right to be forgotten" proposal I mentioned causality tracking (so eg. figuring out whether event A happened before or after event B) in distributed networks as an example of a hard problem, and I figured I'd share a blog post (not mine) on one of the more modern techniques that's still very much underutilized. This class of algorithms is called logical clocks, and the first of them was the Lamport timestamp by Leslie Lamport. Note that many of these algorithms can be used for tracking version changes and not just logical time, often with some changes like in the case of interval tree clocks.

In many cases just plopping a timestamp on a message and using that to establish causality isn't good enough. If you rely on clients to attach that timestamp, you have to trust that their clocks are correct, or that they don't simply lie about the time for whatever reason (although of course that's a problem with logical clocks too.) Also, you might not want to base your causality on when a message was sent but on when it was received; even if message A is sent before message B, there's no telling whether A actually makes it to your system before B does. These are just a few common reasons for needing logical clocks, and they're necessary in a surprising number of cases when you deal with distributed systems.
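To give an idea of how simple the basic building block is, here's a bare-bones Lamport clock sketch (my own illustration, not from the linked post):

```python
class LamportClock:
    """Minimal Lamport timestamp: a single counter that only ever moves forward."""

    def __init__(self):
        self.time = 0

    def tick(self):
        # Local event: just advance the counter
        self.time += 1
        return self.time

    def send(self):
        # Attach the current logical time to an outgoing message
        return self.tick()

    def receive(self, message_time):
        # On receive, jump past both our own time and the sender's
        self.time = max(self.time, message_time) + 1
        return self.time


# If A sends to B, B's timestamp for the receive is always greater than the send's,
# no matter what either node's wall clock says
a, b = LamportClock(), LamportClock()
sent_at = a.send()
assert b.receive(sent_at) > sent_at
```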

The advantage of interval tree clocks compared to eg. vector clocks is that they're designed to work well in networks where participants are constantly leaving and coming back online and where you can't know the number of nodes in the network beforehand. These are cases most other algorithms don't deal with too well. Of course this means more complexity in the algorithm, but this is a case of "them's the breaks" as the problem is definitely not a simple one to solve.
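For comparison, the reason you'd carry a vector (or, in the case of interval tree clocks, a tree) instead of a single counter is that it lets you tell "happened before" apart from "concurrent", which a plain Lamport timestamp can't do. A rough sketch, again mine and not from the post:

```python
def happened_before(vc_a: dict, vc_b: dict) -> bool:
    """True if the event stamped with vc_a causally precedes the one stamped with vc_b:
    every counter in vc_a is <= the corresponding one in vc_b, and at least one is strictly smaller."""
    keys = set(vc_a) | set(vc_b)
    all_le = all(vc_a.get(k, 0) <= vc_b.get(k, 0) for k in keys)
    some_lt = any(vc_a.get(k, 0) < vc_b.get(k, 0) for k in keys)
    return all_le and some_lt

def concurrent(vc_a: dict, vc_b: dict) -> bool:
    """Neither event precedes the other, so they're causally unrelated."""
    return not happened_before(vc_a, vc_b) and not happened_before(vc_b, vc_a)

# Two nodes that each did something without having seen the other's event first:
print(concurrent({"node1": 2, "node2": 0}, {"node1": 1, "node2": 1}))  # True
```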

1
submitted 1 year ago* (last edited 1 year ago) by interolivary@beehaw.org to c/chat@beehaw.org

cross-posted from: https://beehaw.org/post/923434

I hope this isn't too technical for this community. I'm looking for feedback on an idea, and figured !chat might also be a good place to ask.

(TTL stands for "time-to-live", which should be fairly self-explanatory)

Original post

Hey fellow nerds, I have an idea that I'd like to discuss with you. All feedback – positive or negative – is welcome. Consider this a baby RFC (Request for Comments).

So. I've been having a think on how to implement the right to be forgotten (one of the cornerstones of eg. the GDPR) in the context of federated services. Currently, it's not possible to remove your comments, posts, etc. from the Fediverse as a whole – and not just your "home instance" – without manually contacting every node in the network. In my opinion, this is a fairly pressing problem, and there would already be a GDPR case here if someone were to bring the "eye of Sauron" (ie. a national data protection authority) upon us.

Please note that this is very much a draft and it does have some issues and downsides, some of which I've outlined towards the end.

The problem

In a nutshell, the problem I'm trying to solve is how to guarantee that "well-behaved" instances, which support this proposal, will delete user content even in the most common exceptional cases, such as changes in network topology, network errors, and server downtime. These are situations where you'd typically expect messages about content or user deletion to be lost. It's important to note that I've specifically approached this from the "right to be forgotten" perspective, so the current version of the proposal solely deals with "mass deletion" when user accounts are deleted. It doesn't currently integrate or work with the normal content deletion flow (I'll further discuss this below).

While I understand that in a federated or decentralized network it's impossible to guarantee that your content will be deleted (and the Wayback Machine exists), we can't let "perfect be the enemy of good enough". Making a concerted effort to ensure that in most cases user content is deleted from systems under our control when the user so wishes (initially this could even just be a Lemmy thing and not a wider Fediverse thing) would already be a big step in the right direction.

I haven't yet looked into "prior art" beyond some very cursory searches – I had banged the outline of this proposal out before I even went looking – but I now know that eg. Mastodon has the ability to set TTLs on posts. This proposal is sort of adjacent, and could be massaged a bit to support the same thing on Lemmy (or whatever other service) too.

1. The proposal: TTLs on user content

  1. Every comment, post etc. (content) must by default have an associated TTL (eg. a live_until timestamp). This TTL can be long, on the order of weeks or even a couple of months. Users can also opt out (see below)
  2. well before the content's TTL runs out (eg. even halfway through the TTL, with some random jitter to prevent "thundering herds"), an instance asks the "home instance" of the user who created the content whether the user account is still live. If it is, great, update the TTL and go on with life (see the sketch after this list)
    1. in cases where the "home instance" of a content creator can't be reached due to eg. network problems, this "liveness check" must be repeated at random long-ish intervals (eg. every 20 – 30h) until an answer is received or the TTL runs out
    2. information about user liveness should be cached, but with a much shorter TTL than content
    3. liveness check requests to other instances should be batched, with some sensible time limit on how long to wait for the batch to fill up, and an upper limit for the batch size
    4. in cases where the user's home instance isn't in an instance's linked instance list or is in their blocked instance list, this liveness check may be skipped
  3. when a user liveness check hasn't succeeded and a content's TTL runs out, or when a user liveness check specifically comes back as negative, the content must be deleted
    1. when a liveness check comes back as negative and the user has been removed, instances must delete the rest of that user's content and not just the one whose TTL ran out
    2. when a liveness check fails (eg. the user's home instance doesn't respond), instances may delete the rest of that user's content. Or maybe should? My reason for handling this differently from an explicit negative liveness check is to prevent the spurious deletion of all of a user's content in cases where their home instance experiences a long outage, but I'm not sure if this distinction really matters. Needs more thinkifying
  4. user accounts must have a TTL, on the order of several years
    1. when a user performs any activity on the instance, this TTL must be updated
    2. when this TTL runs out, the account must be deleted. The user's content must be deleted if the user hasn't opted out of the content deletion (see below)
    3. instances may eg. ping users via email to remind them about their account expiring before the TTL runs out
  5. users may opt out of the content deletion mechanism, both on a per-user basis or on a per-content basis
    1. if a user has opted out of the mechanism completely, their content must not be marked with a TTL. However, this does present a problem if they later change their mind
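To make item 2 a bit more concrete, here's a rough sketch of the check-and-refresh logic an instance might run. This is my own illustration: only the live_until field name comes from the proposal, the rest (function names, constants) is made up and the numbers are placeholders.

```python
import random
from datetime import datetime, timedelta

# Placeholder values; the proposal deliberately leaves the concrete numbers open
CONTENT_TTL = timedelta(weeks=8)
RETRY_INTERVAL = timedelta(hours=25)  # somewhere in the proposed 20-30h window

def next_check_time(live_until: datetime, now: datetime) -> datetime:
    """Schedule the liveness check well before the TTL runs out, with jitter so
    a batch of content created at the same time doesn't trigger all checks at once."""
    remaining = live_until - now
    return now + remaining * random.uniform(0.4, 0.6)

def apply_liveness_result(live_until: datetime, now: datetime, result: str):
    """result is 'alive', 'deleted' (an explicit negative answer) or 'unreachable'
    (the home instance didn't respond). Returns (action, new_live_until)."""
    if result == "alive":
        return ("keep", now + CONTENT_TTL)         # refresh the TTL, go on with life
    if result == "deleted":
        return ("delete_all_by_user", live_until)  # item 3.1: purge everything by that user
    # Home instance unreachable: keep retrying until the TTL actually runs out (item 2.1)
    if now >= live_until:
        return ("delete", live_until)              # item 3: TTL ran out without an answer
    return ("retry", now + RETRY_INTERVAL)

# An unreachable home instance only leads to deletion once the TTL has expired:
now = datetime(2023, 9, 1)
print(apply_liveness_result(now + CONTENT_TTL, now, "unreachable"))  # ('retry', ...)
```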

2. Advantages of this proposal

  1. guarantees that user content is deleted from "well behaved" instances, even in the face of changing network topologies when instances defederate or disappear, hiccups in message delivery, server downtime and so on
  2. would allow supporting Mastodon-like general content TTLs with a little modification, hence why it has TTLs per content and not just per user. Maybe something like a refresh_liveness boolean field on content (sketched below) that says whether an instance should do user liveness checks and refresh the content's TTL based on it or not?
  3. with some modification this probably could (and should) be made to work with and support the regular content deletion flow. Something for draft v0.2 in case this gets any traction?
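As a hypothetical sketch of what the per-content fields might look like – only live_until and the refresh_liveness idea from advantage 2 come from this proposal, everything else is illustrative:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Content:
    author: str          # used to find the author's home instance for liveness checks
    body: str
    # None would mean the author opted out of the deletion mechanism entirely (item 5.1)
    live_until: Optional[datetime]
    # If False, this is a fixed, Mastodon-style content TTL: liveness checks
    # don't refresh live_until and the content simply expires
    refresh_liveness: bool = True
```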

3. Disadvantages of this proposal

  1. more network traffic, DB activity, and CPU usage, even during "normal" operation and not just when something gets deleted. Not a huge amount but the impact should probably be estimated so we'd have at least an idea of what it'd mean
    1. however, considering the nature of the problem, some extra work is to be expected
  2. as noted, the current form of this proposal does not support or work with the regular deletion flow for individual comments or posts, and only addresses the more drastic scenario when a user account is deleted or disappears
  3. spurious deletions of content are theoretically possible, although with long TTLs and persistent liveness check retries they shouldn't happen except in rare cases. Whether this is actually a problem requires more thinkifying
  4. requires buy-in from the rest of the Fediverse as long as it's not a protocol-level feature (and there's more protocols than just ActivityPub). This same disadvantage would naturally apply to all proposals that aren't protocol-level. The end goal would definitely be to have this feature be a protocol thing and not just a Lemmy thing, but one step at a time
  5. need to deal with the case where a user opts out of having their content deleted when they delete their account (whether they did this for all of their content or specific posts/comments) and then later changes their mind. Any fix will have limitations, such as not having any effect on instances that are no longer federated with their home instance

3.1 "It's a feature, not a bug"

  1. when an instance defederates or otherwise leaves the network, content from users on that instance will eventually disappear from instances no longer connected to its network. This is a feature: when you lose contact with an instance for a long time, you have to assume that it's been "lost at sea" to make sure that the users' right to be forgotten is respected. As a side note, this would also help prune content from long-gone instances
  2. content can't be assumed to be forever. This is by design: in my opinion Lemmy shouldn't try to be a permanent archive of all content, like the Wayback Machine
  3. content can be copied to eg. the Wayback Machine (as noted above), so you can't actually guarantee deletion of all of a user's content from the whole Internet. As noted in the problem statement this is absolutely true, but what I'm looking for here is best effort to make sure content is deleted from compliant instances. Just because it's impossible to guarantee total deletion of content from everywhere does not mean no effort at all should be made to delete it from places that are under our control
  4. this solution is more complex than simply actually deleting content when the user so wishes, instead of just hiding it from view like it's done now in Lemmy. While "true deletion" definitely needs to also be implemented, it's not enough to guarantee eventual content deletion in cases like defederation, or network and server errors leading to an instance not getting the message about content or a user being deleted
5
submitted 1 year ago* (last edited 1 year ago) by interolivary@beehaw.org to c/programming@beehaw.org

20
Yevgeni Gump (beehaw.org)
submitted 1 year ago by interolivary@beehaw.org to c/memes@lemmy.ml
5

Good explanation of the difference between work efficiency and step efficiency when talking about parallel algorithms.
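The classic example is parallel prefix sum: a sequential scan does O(n) work in O(n) steps, the step-efficient Hillis–Steele scan does O(log n) steps but O(n log n) additions in total, and a work-efficient parallel scan (Blelloch) gets O(n) work in O(log n) steps. A quick Python sketch of the step-efficient version (mine, not from the article):

```python
def hillis_steele_scan(xs):
    """Step-efficient (but work-inefficient) inclusive prefix sum:
    O(log n) steps, but O(n log n) additions in total."""
    result = list(xs)
    n = len(result)
    step = 1
    while step < n:
        # On a parallel machine, every addition in this inner loop happens in the same step
        nxt = list(result)
        for i in range(step, n):
            nxt[i] = result[i] + result[i - step]
        result = nxt
        step *= 2
    return result

print(hillis_steele_scan([1, 2, 3, 4]))  # [1, 3, 6, 10]
```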

5
[-] interolivary@beehaw.org 26 points 1 year ago

I mean, it's a protocol. Nobody needs to "allow" you to use it any more than HTTP; Meta can set up a service and they're good to go.

Whether others will want to federate with them is the question.
