Soyweiser

[–] Soyweiser@awful.systems 4 points 15 hours ago

I'd assume that is very intentional; nominative determinism is one of those things a lot of LW-style people like. (Scott Alexander being a big one, which has some really iffy implications (which I fully think is a coincidence, btw).)

[–] Soyweiser@awful.systems 3 points 1 day ago* (last edited 1 day ago)

It wasn't really done that much during the era when Scott A was called the new leader of LessWrong, so I'm not sure if it has increased again. I assume a lot still do, as I assume a lot also pretend to have read it. Never looked into any stats, or whether those stats are public. I know they put them all on a specific site in 2015 (https://www.readthesequences.com/). The bibliography is a treat (esp. as it starts with pop-sci books and an SSC blog post, but also: "Banks, Iain. The Player of Games. Orbit, 1989.", and not one but three of the Doc E. E. Smith Lensman books).

[–] Soyweiser@awful.systems 4 points 1 day ago* (last edited 1 day ago)

I did a quick search on Ribbonfarm (I couldn't quickly recall what his blog was called) myself. And I see how much I had forgotten: it should have been called meta-rationality, and yes, insight porn, that was the term. (Linking to two posts where Ribbonfarm/this stuff was discussed.)

E: Sad feels when you click on a name in the sub from years ago and see them now being a full-blast AI bro.

[–] Soyweiser@awful.systems 3 points 1 day ago* (last edited 1 day ago)

The CAPTCHA failed to load properly for me at first, and then was mega slow. Quality custom implementation of a (wrapper around a) CAPTCHA, millions of EA money well spent.

[–] Soyweiser@awful.systems 6 points 1 day ago (2 children)

We talked about that on r/sneerclub in the past; can't recall the specific consensus. Seems post-rational; it innovates on rationalism by going from the binary 'object vs meta' to 2x2 grids.

[–] Soyweiser@awful.systems 9 points 1 day ago* (last edited 1 day ago) (3 children)

Instead of increasing the capabilities of LLMs, a lot of work is done in the field of downplaying human capabilities to make LLMs look better in comparison. You would assume that the 'be aware of biases, and learn to think rationally' place would notice this trap. But nope, nobody reads the Sequences anymore. (E: for the people not in the know, the Sequences are the Rationalist bible written by Yud (extremely verbose; the new bits are not good and the good bits are not new), used here as a joke; reading them (and saying you should) used to be part of the cultic milieu of LW.)

[–] Soyweiser@awful.systems 6 points 2 days ago

AI is going to wreck the world without even being asked to turn things into paperclips, just by giving all coders out-of-the-loop performance problems.

[–] Soyweiser@awful.systems 6 points 2 days ago

Oh look, the thing I worried about on r/scc (yes, I know, my own fault for touching it), which could not happen 'because you don't understand how LLMs work', happened.

[–] Soyweiser@awful.systems 7 points 3 days ago* (last edited 3 days ago)

Tucker's quest for a second facial expression continues. The help he got from Ben Stiller turned out not to be effective at all.

[–] Soyweiser@awful.systems 10 points 4 days ago* (last edited 4 days ago)

Yeah, he seems more neonazi than NRx to me, that and lashing out at people who he imagines wronged him. The FAA, for example, cancelled a contract with him to do some Starlink stuff because it was unreliable and slow (approved under Trump, cancelled under Biden); USAID was investigating him, etc. His DOGE goons are NRx and are dismantling everything using AI and dumb regexps, but his focus seems very personal. (Not that it matters, as both the neonazis and the NRx are bad, delenda est, and they have no trouble working together (so far; the backlash when it all fails is going to be interesting). Musk is already on his default 'shit is going bad' moves, which is trying to change the way the metrics are calculated so you can't notice the fraud (his GDP bs).)

E: the nazi memes thing reminds me of the fear of 'the left' cancelling classical literature. Which in 99% of the cases turned out to be things like some random science fiction book being changed by the publisher/author/their estate, and the thing changed was the removal of slurs or just iffily racist chapters.

[–] Soyweiser@awful.systems 6 points 5 days ago (3 children)

Was this the Musk and Rogan episode where Musk revealed a powerful new Grok ability (it can now say shit and fuck) and laughed really oddly about it?

Also, despite following LW-style people (he follow(s/ed) slatestarcodex on twitter), he again shows that he doesn't understand the LW fears of AGI: worrying about a woke nanny AGI, and not the woke wirehead AGI (wireheading being a lot scarier).

[–] Soyweiser@awful.systems 7 points 5 days ago

Padishah Emperor Shaddam IV: "This is genocide! The systematic extermination of all life on Arrakis!"

Pinker emerges from a sietch water basin: "Achtually, while using the juice of Sapho I shape-rotated many graphs and ..."

 

The interview itself

Got the interview via Dr. Émile P. Torres on twitter

Somebody else sneered: 'Makings of some fantastic sitcom skits here.

"No, I can't wash the skidmarks out of my knickers, love. I'm too busy getting some incredibly high-EV worrying done about the Basilisk. Can't you wash them?"'

https://mathbabe.org/2024/03/16/an-interview-with-someone-who-left-effective-altruism/

 

Some light sneerclub content in these dark times.

Eliezer compliments Musk on the creation of Community Notes, a project which predates the takeover of Twitter by a couple of years (see the join date: https://twitter.com/CommunityNotes).

In reaction, Musk admits he never read HPMOR, and he suggests a watered-down Turing test involving HPMOR.

Eliezer invents HPMOR wireheads in reaction to this.
