SneerClub

989 readers

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 1 year ago
submitted 10 months ago* (last edited 10 months ago) by dgerard@awful.systems to c/sneerclub@awful.systems

Eliezer Yudkowsky @ESYudkowsky If you're not worried about the utter extinction of humanity, consider this scarier prospect: An AI reads the entire legal code -- which no human can know or obey -- and threatens to enforce it, via police reports and lawsuits, against anyone who doesn't comply with its orders. Jan 3, 2024 · 7:29 PM UTC


Pass the popcorn, please.

(nitter link)


I'm called a Nazi because I happily am proud of white culture. But every day I think fondly of the brown king Cyrus the Great who invented the first ever empire, and the Japanese icon Murasaki Shikibu who wrote the first novel ever. What if humans just loved each other? History teaches us that we have all been, and always will be - great

read the whole thread, her responses are even worse

submitted 11 months ago* (last edited 11 months ago) by saucerwizard@awful.systems to c/sneerclub@awful.systems

Is uh, anyone else watching? This dude (chaos) was/is friends with Brent Dill.


an entirely vibes-based literary treatment of an amateur philosophy scary campfire story, continuing in the comments


... while at the same time not really worth worrying about so we should be concentrating on unnamed alleged mid term risks.

EY tweets are probably the lowest-effort sneerclub content possible, but the birdsite threw this in my face this morning, so it's only fair you suffer too. Transcript follows:

Andrew Ng wrote:

In AI, the ratio of attention on hypothetical, future, forms of harm to actual, current, realized forms of harm seems out of whack.

Many of the hypothetical forms of harm, like AI "taking over", are based on highly questionable hypotheses about what technology that does not currently exist might do.

Every field should examine both future and current problems. But is there any other engineering discipline where this much attention is on hypothetical problems rather than actual problems?

EY replied:

I think when the near-term harm is massive numbers of young men and women dropping out of the human dating market, and the mid-term harm is the utter extermination of humanity, it makes sense to focus on policies motivated by preventing mid-term harm, if there's even a trade-off.


I somehow missed this one until now. Apparently it was once mentioned in the comments on the old sneerclub but I don't think it got a proper post, and I think it deserves one.


From Sam Altman's blog, pre-OpenAI


Epistemic status: Speculation. An unholy union of evo psych, introspection, random stuff I happen to observe & hear about, and thinking. Done on a highly charged topic. Caveat emptor!

oh boy

archive: https://archive.is/uOP4y


When I click a link to LessWrong from this board, I receive a malware alert from my home gateway (Netgear Armor). Apparently it's their AI text-to-speech bot.

Question - any concerns about this? Google isn't helping me much.

URL is https: // embed.type3.audio/

Searching their site tells me that this is literally a feature and not a bug.

https://www.lesswrong.com/posts/b9oockXDs2xMdYp66/announcement-ai-narrations-available-for-all-new-lesswrong

TYPE III AUDIO is running an experiment with the LessWrong team to provide automatic AI narrations on all new posts. All new LessWrong posts will be available as AI narrations (for the next few weeks).

You might have noticed the same feature recently on the EA Forum, where it is now an ongoing feature. Users there have provided excellent feedback and suggestions so far, and your feedback on this pilot will allow further improvements.


WOOOOOOO MORE AXE GRINDING LETS GO!

Okay enough of that, so I was doing a little bit of a foray into the GPI cesspit to look at the latest decision theoretic drivel they've been putting out recently. And boy oh boy did I come across something juicy.

Basically this 36-page paper is one big 'nuh uh' to all the critics of longtermism. Think Crary and the like; it explicitly states that critics dismiss longtermism out of hand by denying broadly utilitarian principles. That's all fair enough, but then the philosopher tries to defend longtermism by saying that denying it on broadly normative grounds incurs 'significant theoretical costs'. I've checked what these 'costs' would be, and to my admittedly quite dumb eyes they only look like 'costs' if you're a utilitarian in the first place! The entire discussion is predicated on utilitarian principles: the weighing of theoretical costs and benefits, the consistently bullshit new principles, and what I've always thought were completely ad hoc new rules they make up so that anything fits the criteria and longtermism comes out the ass end, while also making the discussion impervious to criticism, cos insert brand new shiny principle here. It's fucken dumb.

Not to overstate my case, I'm kinda dumb, which means I could be very wrong here, but even with that in mind I woulda expected better from a PhD.

Anyways to end off, are there any resources that actually go through their math and fact check that shit? Actually wanna see if the math they use actually checks out or if it's kinda cobbled together.


warning: seriously nasty narcissism at length

archive: https://archive.is/eoXQj

this is a response to the post discussed in: https://awful.systems/post/220620


(via Timnit Gebru)

Although the board members didn’t use the language of abuse to describe Altman’s behavior, these complaints echoed some of their interactions with Altman over the years, and they had already been debating the board’s ability to hold the CEO accountable. Several board members thought Altman had lied to them, for example, as part of a campaign to remove board member Helen Toner after she published a paper criticizing OpenAI, the people said.

The complaints about Altman’s alleged behavior, which have not previously been reported, were a major factor in the board’s abrupt decision to fire Altman on Nov. 17, according to the people. Initially cast as a clash over the safe development of artificial intelligence, Altman’s firing was at least partially motivated by the sense that his behavior would make it impossible for the board to oversee the CEO.

For longtime employees, there was added incentive to sign: Altman’s departure jeopardized an investment deal that would allow them to sell their stock back to OpenAI, cashing out equity without waiting for the company to go public. The deal — led by Joshua Kushner’s Thrive Capital — values the company at almost $90 billion, according to a report in the Wall Street Journal, more than triple its $28 billion valuation in April, and it could have been threatened by tanking value triggered by the CEO’s departure.

huh, I think this shady AI startup whose product is based on theft that cloaks all its actions in fake concern for humanity might have a systemic ethics problem


Utilitarian brainworms or one of the many very real instances of a homicidal parent going after their disabled child? I can't decide, but it's a depressing read.

May end up on SRD, but you read it here first.


They've been pumping this bio-hacking startup on the Orange Site (TM) for the past few months. Now they've got Siskind shilling for them.
