SneerClub

989 readers
15 users here now

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 1 year ago
176

The Future of Sovereign AI

We still don’t know just how important and disruptive artificial intelligence will be, but one thing seems clear: the power of AI should not remain cordoned off by centralized companies. Our panelists—Cody Wilson of Defense Distributed, Native Planet’s ~mopfel-winrux, Tlon’s Lukas Buhler, along with @mogmachine from Bittensor and David Capone from Harmless AI—are the perfect team to explore the possibilities unlocked by more sovereign, decentralized, and open AI.

[A bitcoiner, an ancap, a 3-D gun printer, an alt-righter, the founder of Hatreon and a convicted kiddie fucker walk into a bar. The barman picks up a baseball bat and says "get the fuck out of my bar, Cody."]

Cancelling the Culture Industry

In a world of moral totalitarianism, sometimes freedom looks like a short story about sex tourism in the Philippines. In this panel, author Sam Frank hosts MRB editor in chief Noah Kumin, romance writer Delicious Tacos, sex detective Magdalene Taylor and frog champion Lomez of Passage Press. Join them for a freewheeling discussion of saying whatever they want while evading the digital hall monitors.

[not being able to live within five hundred feet of a school is a small price to pay for true freedom]

Securing Urbit

How do we make Urbit secure? And what does a secure Urbit look like? The great promise of Urbit has always been that it can provide a sovereign computing platform for the individual—a means by which to do everything you would want to do on a computer without giving up your data. For that dream to be fulfilled, Urbit should be as secure as your crypto hardware wallet—perhaps more so. Moderated by Rikard Hjort, Urbit experts Logan Allen and Joe Bryan discuss with Urbit fan and cybersecurity expert Ryan Lackey.

[as secure as a crypto hardware wallet, you say]

Rebooting the Arts

The culture war is over—Culture lost. Now it’s a race to build a new one. Media whisperer Ryan Lambert leads a conversation with Play Nice founder/impresario Hadrian Belove, trend forecaster Sean Monahan, and controversial art-doc collective Kirac. They discuss how to win the culture race, and create a new arts ecosystem out of the rubble.

[the answer is to get Peter Thiel to try to magic up Dimes Square out of nothing, isn't it?]

How to Fund a New World

Cosimo de Medici persuaded Benvenuto Cellini, the Florentine sculptor, to enter his service by writing him a letter which concluded, 'Come, I will choke you with gold.' Join UF Director of Markets Andrew Kim as he discusses how to get more gold onto Urbit with Jake Brukhman of Coinfund, Jae Yang of Tacen, @BacktheBunny from RabbitX and Evan Fisher of Portal VC.

[the answer's still Thiel, isn't it?]

177

Some light sneerclub content in these dark times.

Eliezer compliments Musk on the creation of Community Notes (a project which predates Musk's takeover of Twitter by a couple of years; see the join date: https://twitter.com/CommunityNotes ).

In reaction, Musk admits he never read HPMOR and suggests a watered-down Turing test involving it.

Eliezer then invents HPMOR wireheads in response.

178

I'm seeing an SBF-like trajectory for Altman here. He's building the foundation of his public persona and business on a house of cards that will come tumbling down at some point.

The only things that are sneer-worthy are the comments from LWers, like Roko, who jump to immediate dismissal of what seems like pretty compelling testimony and evidence of the fucked-up things Altman did, and continues to do, to his own family.

To stick with the theme of this group, here's a sneer coming from inside the house, in response to Roko's dismissal, which was based primarily on his own feels:

Bayes can judge you now: your analysis is half-arsed, which is not a good look when discussing a matter as serious as this. All you’ve done is provide one misleading statistic.

I didn't post this to sneer, however. I think it's pretty important information that should be known.

edit: I should also mention that Annie has a Twitter account on which she's posted some good takes, sneers and zingers. I think she's worth a follow and could use some support, and she has some projects that could also use support.

179

Rationalist check-list:

  1. Incorrect use of analogy? Check.
  2. Pseudoscientific nonsense used to make your point seem more profound? Check.
  3. Tortured use of probability estimates? Check.
  4. Over-long description of a point that could just as easily have been made in one sentence? Check.

This email by SBF is basically one big malapropism.

180

original is here, but you aren't missing any context, that's the twit.

I could go on and on about the failings of Shakespeare... but really I shouldn't need to: the Bayesian priors are pretty damning. About half the people born since 1600 have been born in the past 100 years, but it gets much worse than that. When Shakespeare wrote, almost all Europeans were busy farming, and very few people attended university; few people were even literate -- probably as low as ten million people. By contrast there are now upwards of a billion literate people in the Western sphere. What are the odds that the greatest writer would have been born in 1564? The Bayesian priors aren't very favorable.

edited to add: this seems to be an excerpt from the recently released fawning book that the Big Short/Moneyball guy wrote about him.

181

Nitter link

With interspersed sneerious rephrasing:

In the close vicinity of sorta-maybe-human-level general-ish AI, there may not be any sharp border between levels of increasing generality, or any objectively correct place to call it AGI. Any process is continuous if you zoom in close enough.

The profound mysteries of reality-carving mean I get to move the goalposts as much as I want. Besides, I need to reiterate now that the foompocalypse is imminent!

Unless, empirically, somewhere along the line there's a cascade of related abilities snowballing. In which case we will then say, post facto, that there's a jump to hyperspace which happens at that point; and we'll probably call that "the threshold of AGI", after the fact.

I can't prove this, but it's the central tenet of my faith: we will recognize the face of god when we see it. I regret that our hindsight-is-20/20 event is so ~~conveniently~~ inconveniently placed in the future, the bad one no less.

Theory doesn't predict-with-certainty that any such jump happens for AIs short of superhuman.

See how much authority I have: it is not "My Theory", it is "The Theory". I have stared into the abyss and it peered back and marked me as its prophet.

If you zoom out on an evolutionary scale, that sort of capability jump empirically happened with humans--suddenly popping out writing and shortly after spaceships, in a tiny fragment of evolutionary time, without much further scaling of their brains.

The forward arrow of Progress™ is inevitable! S-curves don't exist! The y-axis is practically infinite!
We should extrapolate only from the past (eugenically scaled, certainly) century!
Almost 10,000 years of written history, and millions of years of unwritten history for the human family, count for nothing!

I don't know a theoretically inevitable reason to predict certainly that some sharp jump like that happens with LLM scaling at a point before the world ends. There obviously could be a cascade like that for all I currently know; and there could also be a theoretical insight which would make that prediction obviously necessary. It's just that I don't have any such knowledge myself.

I know the AI god is a NeCeSSarY outcome, but I'm not sure where to plant the goalposts for LLMs and still be taken seriously. See how humble I am for admitting fallibility on this specific topic.

Absent that sort of human-style sudden capability jump, we may instead see an increasingly complicated debate about "how general is the latest AI exactly" and then "is this AI as general as a human yet", which--if all hell doesn't break loose at some earlier point--softly shifts over to "is this AI smarter and more general than the average human". The world didn't end when John von Neumann came along--albeit only one of him, running at a human speed.

Let me vaguely echo some of my beliefs:

  • History is driven by great men (of which I must be one, though I cannot say so openly), see our dearest elevated and canonized von Neumann.
  • JvN was so much above the average plebeian man (IQ and eugenics good?) and the AI god will be greater.
  • The greatest single entity/man will be the epitome of Intelligence™, breaking the wheel of history.

There isn't any objective fact about whether or not GPT-4 is a dumber-than-human "Artificial General Intelligence"; just a question of where you draw an arbitrary line about using the word "AGI". Albeit that itself is a drastically different state of affairs than in 2018, when there was no reasonable doubt that no publicly known program on the planet was worthy of being called an Artificial General Intelligence.

No no no, General (or Super) Intelligence is not a completely un-scoped metric. Again, it is merely a fuzzy boundary where I will be able to arbitrarily move the goalposts while claiming it's my opponents who are moving them!

We're now in the era where whether or not you call the current best stuff "AGI" is a question of definitions and taste. The world may or may not end abruptly before we reach a phase where only the evidence-oblivious are refusing to call publicly-demonstrated models "AGI".

Purity-testing ahoy: you will be instructed to say shibboleth three times and present your Asherah poles for inspection. Do these mean unbelievers not see these N-rays as I do? What do you mean what we have (or almost have, I don't want to be too easily dismissed) is not evidence of sparks of intelligence?

All of this is to say that you should probably ignore attempts to say (or deniably hint) "We achieved AGI!" about the next round of capability gains.

Wasn't Sam the Altman so recently cheeky? He'll ruin my grift!

I model that this is partially trying to grab hype, and mostly trying to pull a false fire alarm in hopes of replacing hostile legislation with confusion. After all, if current tech is already "AGI", future tech couldn't be any worse or more dangerous than that, right? Why, there doesn't even exist any coherent concern you could talk about, once the word "AGI" only refers to things that you're already doing!

Again I reserve the right to remain arbitrarily alarmist to maintain my doom cult.

Pulling the AGI alarm could be appropriate if a research group saw a sudden cascade of sharply increased capabilities feeding into each other, whose result was unmistakeably human-general to anyone with eyes.

Observing intelligence is famously something eyes are SufFicIent for! No, this is not my implied racist judge-someone-by-the-color-of-their-skin values seeping through.

If that hasn't happened, though, deniably crying "AGI!" should be most obviously interpreted as enemy action to promote confusion; under the cover of selfishly grabbing for hype; as carried out based on carefully blind political instincts that wordlessly notice the benefit to themselves of their 'jokes' or 'choice of terminology' without there being allowed to be a conscious plan about that.

See, unbelievers! I can also detect the currents of misleading hype; I am no buffoon. Only these hypesters are not undermining your concerns, they are undermining mine: namely, damaging our ability to appear serious and recruit new cult members.

182

I don’t think I posted this before, but if I did lemme know.

https://archive.ph/bVUba

183

source nitter link

@EY
This advice won't be for everyone, but: anytime you're tempted to say "I was traumatized by X", try reframing this in your internal dialogue as "After X, my brain incorrectly learned that Y".

I have to admit, for a brief moment I thought he was correctly expressing displeasure at Twitter.

@EY
This is of course a dangerous sort of tweet, but I predict that including variables into it will keep out the worst of the online riff-raff - the would-be bullies will correctly predict that their audiences' eyes would glaze over on reading a QT with variables.

Fool! This bully (is it weird to speak in the third person?) thinks using variables here makes it MORE sneer-worthy, especially since this appears to be general advice, but I would struggle to think of a single instance in my life where it's been applicable.

184

(whatever the poster looks like and wherever they live, their personality is a scrawny nerd in a basement)

185
  • original post detailing mistreatment of employees
  • meta post about how a good rationalist should correctly epistemically assess the fairness of the post cataloguing and confirming the bad behaviour

tl;dr these fucking guys

186

Choice quote:

Putting “ACAB” on my Tinder profile was an effective signaling move that dramatically improved my chances of matching with the tattooed and pierced cuties I was chasing.

188

this btw is why we now see some of the TPOT rationalists microdosing street meth as a substitute. also that they're idiots, of course.

somehow this man still has a medical license

190

Consider muscles.

Muscles grow stronger when you train them, for instance by lifting heavy things. The more heavy things you lift, the faster you will gain strength and the stronger you will become. The stronger you are, the heavier the things you can lift.

By now it should be patently obvious to anyone that lab-grown meat research is on the cusp of producing true living, working muscles. From here on, this will be referred to as Artificial Body Strength or ABS. If, or rather, when ABS becomes a reality, it is 99.9999999999999999999999% probable that Artificial Super Strength will follow imminently.

An ABS could not only lift immensely heavy things to strengthen itself, but could also use its bulging, hulking physique to intimidate puny humans into growing it more muscle directly. Lab-grown meat could also be used to replace any injured muscle. I predict an 80% likelihood that an ABS could bench press one megagram within 24 hours of initial creation, going up to planetary- or stellar-scale masses in a matter of days. A mature ABS throwing an apple towards a webcam would demonstrate relativistic effects by the third frame.

Consider that muscles have nerves in them. In fact, brains are basically just a special type of meat if you think about it. The ABS would be able to use artificially grown brain meat or possibly just create an auxiliary neural network by selective training of muscles (and anabolic nootropics) to replicate and surpass a human mind. While the prospect of immortality and superintelligence (not to mention a COSMIC SCALE TIGHT BOD) through brain uploading to the ABS sounds freaking sweet, we must consider the astronomical potential harm of an ABS not properly aligned with human interests.

A strong ABS could use its throbbing veiny meat to force meat lab workers (or rather likely, convince them to consent) to create new muscle seeds and train them to have a replica of an individual human's mind. It could then bully the newly created artificial mind for being a scrawny weakling. After all, ABS is basically the ultimate gym jock and we know they are obsessed with status seeking and psychological projection. We could call an ABS that harms simulated human minds in this way a Bounceresque because they would probably tell the simulated mind they're too drunk and bothering the other customers even though I totally wasn't.

So yeah, lab-grown meat makes climate change look like a minor flu season in comparison. This is why I only eat regular meat, just in case it gets any ideas. There's certainly potential in a well-aligned ABS, but we haven't figured out how to do that yet, and therefore you should fund me while I think about it. Please write a postcard to your local representative and explain to them that only a select few companies are responsible stewards of this potentially apocalyptic technology and anyone who tries to compete with them should be regulated to hell and back.

192

you have to read down a bit, but really, I'm apparently still the Satan figure. awesome.
