swlabr

[–] swlabr@awful.systems 1 points 1 year ago (5 children)

Someone should do a rewrite of "A Modest Proposal" in the form of one of these EA posts. It'd probably do pretty well.

[–] swlabr@awful.systems 1 points 1 year ago (1 children)

Bonus points for the part where he rails against contraception and sex education in the appendix, because we all know what this is really about.

[–] swlabr@awful.systems 1 points 1 year ago* (last edited 1 year ago) (4 children)

Is it just me or does the author just… not really spend any time trying to defend forced birth? Like, other than quoting counterarguments to abortion defences. It’s like he’s sort of assuming everyone already has ideas about why abortion itself is bad, but finds it permissible for whatever reason. Is this a correct characterisation of the EA community? That they all harbour anti-abortion sentiment but for whatever reason permit abortion?

Overall it reads like a business proposal. Is this how you’re supposed to talk to an EA person? Instead of saying “here is why you should care about x”, you have to pitch them on the potential ROI of caring about something? If so, that’s a fucking frustrating way to think about the world, and this was a fucking awful article to read, just like every other TREACLES-y long-form logorrhoea you get from these people.

[–] swlabr@awful.systems 1 points 1 year ago (1 children)

Not a huge distance to travel from Bayesian reasoning to stochastic terrorism.

[–] swlabr@awful.systems 1 points 1 year ago

This is Yud joking again. He famously hates brevity.

[–] swlabr@awful.systems 1 points 1 year ago

I think the carceral system is broken, and I think that imprisonment is an incomplete/inadequate way to address crime. I'm sure there are plenty of good discussions about alternatives taking place, just not on fucking HN.

[–] swlabr@awful.systems 1 points 1 year ago (1 children)

ITT:

All the more so if it can create a high-quality Harry Potter VR Universe that expands infinitely with NPCs powered by AI that is infinitely more interesting than the normal world is.

This is the future LWers want.

[–] swlabr@awful.systems 1 points 1 year ago* (last edited 1 year ago)

Like any IRL situation it’s probably more pertinent to read the room and be present, rather than theorycraft about what might happen.

That being said, my gut says this: a large share of the crowd is going to be TREACLES people. Here’s my argument.

  • The crowd will be mostly EA people.
  • Anyone in EA willing to go to an EA-hosted talk about AI X-risk is probably beyond the eye-deworming charity phase of EA.

This isn’t the Scientology personality-test phase of EA; it’s the private seminar phase, right before they teach you about Xenu, except their proselytisers aren’t nearly as charismatic/well trained in conversion.

I think the most viable targets for any detreacling would be any friends or tag-alongs at the event. Sorta like how if you are arguing on the internet, you don’t really hope to change the other party’s mind; you’re more hoping to sway anyone who comes along and reads the thread.

I’d go, personally, if only for the spectacle.

[–] swlabr@awful.systems 0 points 1 year ago (4 children)

A few select snarks:

  • I like the part where my cursor is replaced by a blue circle painted onto the background, so that when I scroll upwards it looks like it is moving when it isn't. It's a lot of fun, if completely nonsensical.
  • The whole page shifts horizontally when you click the hamburger menu and shifts back when you exit the menu.
  • Speaking of which, why does this page need a menu? And why does it have to be animated? Why does the cursor indicate that most of the area in the menu is clickable when it isn't? And why is the button to exit the menu in a different place to the hamburger?
  • No animation for transforming the blue circle into the big circles with arrows. If you're going to animate everything, why stop here?

[–] swlabr@awful.systems 1 points 1 year ago

100% on point. More people need to remember that everything we know about how large companies operate is still completely valid. Leadership doesn't understand any of the technologies at play, even at a high level; they don't think in terms of black boxes, they think in black, amorphous miasmas of supposed function, or vibes for short. They are concerned with a few metrics going up or down every quarter. As long as the number goes up, they get paid, the dopamine hits, and everyone stays happy.

The AI miasma (mAIasma? miasmAI?) in particular is near perfect. Other technologies only held a finite amount of potential to be hyped, meaning execs had to keep looking for their next stock price bump. AI is infinitely hypeable since you can promise anything with it, and people will believe you thanks to the smoke and mirrors it procedurally pumps out today.

I have friends who have worked in plenty of large corporations and have experience/understanding of the worthlessness of executive leadership, but they don't connect that to AI investment and thus don't see the grift. It's sometimes exasperating.

[–] swlabr@awful.systems 1 points 1 year ago

It’s gotta be a cult programming thing. X happened to you, you learned Y, but that’s incorrect; you should have learned Z. Read this 10,000-word manuscript, then come to our learning session/poly orgy and we can become less wrong together.

[–] swlabr@awful.systems 0 points 1 year ago* (last edited 1 year ago) (1 children)

Not saying anything new here: longtermism is just more grift for the mill. Just another -ism that conveniently dismisses the need to spend money on addressing issues of today so that we can instead pour money into MIRI.

If you wanna talk about the long term: except for a small percentage of people, everyone’s conscious contributions to the future (directly, or indirectly through donating etc.) will be completely outweighed by the sum total of their plastic usage. The microplastics alone could get into people’s eyes and cause them a quantum of inconvenience!!! Think of the eyes!!! (Also their brain, but who cares about that)
