Just curious, if some fascists came to your house citing historical claims to your land, how much would you care about the validity of that claim? How about when they burn your house down, kill your family, and arrest you for objecting? I truly, deeply would not give a flying fuck who lived near my house 300 years ago.
How many of the "terrorists" (the Islamic ones, not the Judaic ones) were actually from the oppressed populations, though? There are a lottttttt of people in that region that hate the Israeli government... Not sure how many of the displaced peoples you're telling "this isn't the right way to avenge violent state oppression" are actually participating in the fighting.
(raises hand) It was me, I did.
Idk if the mushrooms are stronger now (the cannabis sure is), but in high school a commonly-suggested dose was 2-4 grams, even up to 7 grams if one is 'not feeling it' sufficiently :)
Same, this sounds like what the homeowner/killer is going to be telling himself the next day to rationalize it (if he even thinks about it that deeply).
Thanks, God! Keep up the good work
I feel like "shots fired" is one of the times where it probably would be called for... But totally agreed. This is how things are in quite a few non-US countries, and literally every one of them (I think?) has a lot fewer of its residents being murdered by police.
Can confirm, Jerboa has been really solid (minor hiccups occasionally w/ feeds and inbox not loading -- usually fixed by a refresh, sometimes by an app restart) for the few weeks I've been using it. Well-featured, and looks/"feels" nice!!
What they're getting at (one thing, anyway) is that "indistinguishable to the model" and "the same" are two very different things.
IIRC, one possibility is that LLMs trained on one another's output make incremental changes to what's considered "acceptable" or "normal" language structure, and over time those small shifts compound into larger linguistic drift that the models themselves can't detect.
As this continues, it creates a "positive feedback loop": each round of training on slightly degraded output widens the gap further -- still undetected, because the declining quality is in the training data itself -- until the models basically "collapse" in effectiveness.
So even if their output is indistinguishable now, how the tech is used (I guess?) will determine whether or not a self-destructive LLM echo chamber is produced.
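For anyone curious, here's a toy sketch of that feedback loop (my own illustration, not from any actual paper -- a "model" here is just a Gaussian refit to a finite sample from its predecessor, and n_samples / the generation count are picked arbitrarily):

```python
import numpy as np

# Toy illustration of the feedback loop (my own sketch, not a real training
# setup): each "generation" of model is a Gaussian fit to a finite sample
# drawn from the previous generation. Sampling error compounds across
# generations, so the fitted distribution drifts and, on average, narrows.

rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.0   # generation 0: the "human" text distribution
n_samples = 50         # each model trains on a finite slice of its predecessor

for generation in range(1, 21):
    samples = rng.normal(mu, sigma, n_samples)  # "train" on predecessor output
    mu, sigma = samples.mean(), samples.std()   # refit: this is the new model
    print(f"gen {generation:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")
```

Run it a few times with different seeds: mu wanders and sigma tends to drift toward zero, i.e. each generation sees a slightly narrower idea of "normal" than the one before it -- the quiet, undetected quality decline described above.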
Pretty appropriate way to handle it with current "systems and institutions", too, nice.