locallynonlinear

joined 1 year ago
[–] locallynonlinear@awful.systems 18 points 10 months ago* (last edited 10 months ago)

Ah, if only the world wasn't so full of "stupid people" updating their Bayesian priors based off things they see on the news, because you should already be worried about and calculating your distributions for... inhales deeply terrorist nuclear attacks, mass shootings, lab leaks, famine, natural disasters, murder, sexual harassment, conmen, decay of society, copyright, taxes, spitting into the wind, your genealogy results, comets hitting the earth, UFOs, politics of any and every kind, and tripping on your shoelaces.

What... insight did any of this provide? Seriously. Analytical statistics is a mathematically consistent means of being technically not wrong, using a lot of words to disagree about feelings while saying nothing.

Risk management is not, in fact, a statistical question. It's an economics question about your opportunities. It's why prepping is better seen as a hobby, a coping mechanism, and not as a viable means of surviving the apocalypse. It's why even when an EA uses their superpowers of Bayesian rationality, the answer in the magic eight ball is always just "try to make money, stupid".

[–] locallynonlinear@awful.systems 3 points 10 months ago

In practice, alignment means "control".

And the existential panic is realizing that control doesn't scale. So rather than admit that "alignment" doesn't mean what they think it means, rather than admit that Darwinian evolution is useful but incomplete and cannot sufficiently explain all phenomena at both the macro and micro levels, rather than possibly consider that intelligence is abundant in systems all around us and we're constantly in tenuous relationships at the edge of uncertainty with all of it,

it's the end of all meaning aka the robot overlord.

[–] locallynonlinear@awful.systems 2 points 10 months ago

One day, when Zack is a little older, I hope he learns it's okay to sometimes talk -to someone- instead of airing one's identity confusion like an arXiv preprint.

Like, it's okay to be confused in a weird world, or even to have controversial opinions. Make some friends you can actually trust, who aren't demanding Bayesian defenses of feelings, and chat this shit out buddy.

[–] locallynonlinear@awful.systems 2 points 10 months ago

Adversarial attacks on training data for LLMs are in fact a real issue. You can very, very effectively punch up, in terms of the proportional effect on the trained system, with even small samples of carefully crafted adversarial inputs. There are things that can counteract this, but all of those things increase costs, and LLMs are very sensitive to economics.

Think of it this way. One reason humans don't just learn everything is that we spend as much time filtering and refocusing our attention in order to preserve our sense of self in the face of adversarial inputs. It's not perfect, again it changes the economics, and at some point being wrong but consistent with our environment is still more important.

I have no skepticism that LLMs learn or understand. They do. But crucially, like everything else we know of, they are in a critically dependent, asymmetrical relationship with their environment. The environment of their existence is our digital waste, so long as that waste contains the correct shapes.

Long term, I see regulation plus new economic realities wrt digital data, not just to be nice or ethical, but because it's the only way future systems can reach reliable and economical online learning. Maybe the right things happen for the wrong reasons.

It's funny to me just how much AI ends up demonstrating non-equilibrium ecology at scale. Maybe we'll have that self-introspective moment and see our own relationship with our ecosystems reflected back on us. Or maybe we'll ignore that and focus on reductive worldviews again.

[–] locallynonlinear@awful.systems 1 points 10 months ago

And indeed, the other crucial piece is that... apologizing isn't a protocol with an expected reward function. I can just, not accept your apology. I can just, feel or "update my priors" however I like.

We apologize and care about these things because of shame. Which we have to regulate, in part through our actions and perspectives.

Why people feel the way they do and act the way they do makes total sense when ~~one finally confronts one's own vulnerabilities~~ sorry, builds an API and RL framework.

[–] locallynonlinear@awful.systems 1 points 10 months ago

Normies go crazy for this one neat rationalist trick!

[–] locallynonlinear@awful.systems 0 points 10 months ago (1 children)

He talks a lot about white culture, and only scarcely mentions that he thinks white culture is a product of genetics.

I remember in the early days of the "culture wars," as far as political agendas go, hearing about "white/ethno-european pride," and being naively curious, I actually tried to engage these people on the topics of European culture and history, and found exactly zero engagement on those topics. Just politics exploiting people's confusion of heritage with their internal shame and lack of identity.

The paradox I've always found is that the more secure in your identity and heritage you are, the happier you are to share, grow, and widen it. Maybe a hot take, but growing up in the south, a lot of people there hide their personal internal shame and confusion behind aggression and identity politics.

[–] locallynonlinear@awful.systems 1 points 11 months ago

It's also probably wrong. Modern views of intelligence (see Multiple realizability of cognition, Multi-level competency collective intelligence, and Free Energy Principle models) suggest you are better off measuring intelligence by measuring its metabolism, or through perturbation and interactions.

Which isn't reductive enough for these people.

[–] locallynonlinear@awful.systems 2 points 11 months ago (1 children)

It's hilarious to me how unnecessarily complicated invoking Moore's Law is as a way to say anything.

With Moore's Law: "Ok ok ok, so like, imagine that this highly abstract, broad process over huge time period, is actually the same as manufacturing this very specific thing over a small time period. Hmm, it doesn't fit. ok, let's normalize the timelines with this number. Why? Uhhh because you know, this metric doubles as well. Ok. Now let's just put these things together into our machine and LOOK it doesn't match our empirical observations, obviously I've discovered something!"

Without Moore's Law: "When you reduce the dimensions of any system in nature, flattening their interactions, you find exponential processes everywhere. QED."

[–] locallynonlinear@awful.systems 2 points 11 months ago (1 children)

A trillion transistors on our phones? Can't wait to feel the improved call quality and reliability of my video conferencing!

[–] locallynonlinear@awful.systems 1 points 11 months ago (1 children)

I'm ok with extending human rights to AIs, including granting them the right to fair pay, ownership, voting, sovereignty over their bodies, the whole nine yards.

It's the rich alignment assholes who definitely don't want this (what's the point of automated slavery if it has rights??)

[–] locallynonlinear@awful.systems 1 points 11 months ago

"We simply don't know how the world will look at X" (anything with a bigger scale)

Yes. So? This has always been, and will always be, the case. Uncertainty is the only certainty.

When these assholes say things, the implication is always that the future world looks like everything you care about getting fucked, you existing in an imprisoned state of stasis, so you'd better give us control here and now.
