blakestacey

joined 2 years ago
[–] blakestacey@awful.systems 4 points 1 week ago (1 children)

I'd disagree with the media analysis in "What Was The Nerd?" at a few points. For example, Marty McFly isn't a bullied nerd; George McFly is. Marty plays in a band and has a hot girlfriend. He's the non-nerd side of his interactions with both Doc Brown, where he's the less intellectual one, and George, where he's the cooler one. Likewise, Chicago in Ferris Bueller's Day Off isn't an "urban hellscape". It's the fun place to go when you want to ditch the burbs and take in some urban pleasures (a parade, an art gallery...).

[–] blakestacey@awful.systems 7 points 1 week ago

Because of course.

[–] blakestacey@awful.systems 14 points 1 week ago (1 children)

You know, just this once, I am willing to see the "Dead Dove: Do Not Eat" label and be content to leave the bag closed.

[–] blakestacey@awful.systems 13 points 1 week ago

Or was it a consequence of the fact that capital-R Rationalists just don't shut up?

[–] blakestacey@awful.systems 8 points 1 week ago

I suppose you could explain that on the talk page, if only you expressed it in acronyms for the benefit of the most pedantic nerds on the planet.

[–] blakestacey@awful.systems 6 points 1 week ago

feels like they are wrong on the object level

Who actually wants to sound like this?

[–] blakestacey@awful.systems 5 points 1 week ago

There might be enough point-and-laugh material to merit a post (also this came in at the tail end of the week's Stubsack).

[–] blakestacey@awful.systems 7 points 1 week ago

The opening line of the "Beliefs" section of the Wikipedia article:

Rationalists are concerned with improving human reasoning, rationality, and decision-making.

No, they aren't.

Anyone who still believes this in the year Two Thousand Twenty Five is a cultist.

I am too tired to invent a snappier and funnier way of saying this.

[–] blakestacey@awful.systems 9 points 1 week ago

I'm the torture copy and so is my wife

[–] blakestacey@awful.systems 16 points 1 week ago

In other news, I got an "Is your website AI ready" e-mail from my website host. I think I'm in the market for a new website host.

[–] blakestacey@awful.systems 12 points 1 week ago (1 children)

That Carl Shulman post from 2007 is hilarious.

After years spent studying existential risks, I concluded that the risk of an artificial intelligence with inadequately specified goals dominates. Attempts to create artificial intelligence can be expected to continue, and to become more likely to succeed in light of increased computing power, neuroscience, and intelligence-enhancements. Unless the programmers solve extremely difficult problems in both philosophy and computer science, such an intelligence might eliminate all utility within our future light-cone in the process of pursuing a poorly defined objective.

Accordingly, I invest my efforts into learning more about the relevant technologies and considerations, increasing my earnings capability (so as to deliver most of a large income to relevant expenditures), and developing logistical strategies to more effectively gather and expend resources on the problem of creating AI that promotes (astronomically) and preserves global welfare rather than extinguishing it.

Because the potential stakes are many orders of magnitude greater than relatively good conventional expenditures (vaccine and Green Revolution research), and the probability of disaster much more likely than for, e.g. asteroid impacts, utilitarians with even a very low initial estimate of the practicality of AI in coming decades should still invest significant energy in learning more about the risks and opportunities associated with it. (Having done so, I offer my assurance that this is worthwhile.) Note that for materialists the possibility of AI follows from the existence proof of the human brain, and that an AI able to redesign itself for greater intelligence and copy itself would have the power to determine the future of Earth-derived life.

I suggest beginning with the two articles below on existential risk, the first on relevant cognitive biases, and the second discussing the relation of AI to existential risk. Processing these arguments should provide sufficient reason for further study.

The "two articles below" are by Yudkowsky.

User "gaverick" replies,

Carl, I'm inclined to agree with you, but can you recommend a rigorous discussion of the existential risks posed by Unfriendly AI? I had read Yudkowsky's chapter on AI risks for Bostrom's bk (and some of his other SIAI essays & SL4 posts) but when I forward them to others, their informality fails to impress.

Shulman's response begins,

Have you read through Bostrom's work on the subject? Kurzweil has relevant info for computing power and brain imaging.

Ray mothersodding Kurzweil!

[–] blakestacey@awful.systems 17 points 1 week ago* (last edited 1 week ago) (3 children)

jhbadger:

As Adam Becker shows in his book, EAs started out being reasonable "give to charity as much as you can, and research which charities do the most good" but have gotten into absurdities like "it is more important to fund rockets than help starving people or prevent malaria because maybe an asteroid will hit the Earth, killing everyone, starving or not".

I haven't read Becker's book and probably won't spend the time to do so. But if this is an accurate summary, it's a bad sign for that book, because plenty of EAs were bonkers all along.

As journalists and scholars scramble to account for this ‘new’ version of EA—what happened to the bednets, and why are Effective Altruists (EAs) so obsessed with AI?—they inadvertently repeat an oversimplified and revisionist history of the EA movement. It goes something like this: EA was once lauded as a movement of frugal do-gooders donating all their extra money to buy anti-malarial bednets for the poor in sub-Saharan Africa; but now, a few EAs have taken their utilitarian logic to an extreme level, and focus on ‘longtermism’, the idea that if we wish to do the most good, our efforts ought to focus on making sure the long-term future goes well; this occurred in tandem with a dramatic influx of funding from tech scions of Silicon Valley, redirecting EA into new cause areas like the development of safe artificial intelligence (‘AI-safety’ and ‘AI-alignment’) and biosecurity/pandemic preparedness, couched as part of a broader mission to reduce existential risks (‘x-risks’) and ‘global catastrophic risks’ that threaten humanity’s future. This view characterizes ‘longtermism’ as a ‘recent outgrowth’ (Ongweso Jr., 2022) or even breakaway ‘sect’ (Aleem, 2022) that does not represent authentic EA (see, e.g., Hossenfelder, 2022; Lenman, 2022; Pinker, 2022; Singer & Wong, 2019). EA’s shift from anti-malarial bednets and deworming pills to AI-safety/x-risk is portrayed as mission-drift, given wings by funding and endorsements from Silicon Valley billionaires like Elon Musk and Sam Bankman-Fried (see, e.g., Bajekal, 2022; Fisher, 2022; Lewis-Kraus, 2022; Matthews, 2022; Visram, 2022). A crucial turning point in this evolution, the story goes, includes EAs encountering the ideas of transhumanist philosopher Nick Bostrom of Oxford University’s Future of Humanity Institute (FHI), whose arguments for reducing x-risks from AI and biotechnology (Bostrom, 2002, 2003, 2013) have come to dominate EA thinking (see, e.g., Naughton, 2022; Ziatchik, 2022).

This version of events gives the impression that EA’s concerns about x-risk, AI, and ‘longtermism’ emerged out of EA’s rigorous approach to evaluating how to do good, and has only recently been embraced by the movement’s leaders. MacAskill’s publicity campaign for WWOTF certainly reinforces this perception. Yet, from the formal inception of EA in 2012 (and earlier) the key figures and intellectual architects of the EA movement were intensely focused on promoting the suite of causes that now fly under the banner of ‘longtermism’, particularly AI-safety, x-risk/global catastrophic risk reduction, and other components of the transhumanist agenda such as human enhancement, mind uploading, space colonization, prediction and forecasting markets, and life extension biotechnologies.

To give just a few examples: Toby Ord, the co-founder of GWWC and CEA, was actively collaborating with Bostrom by 2004 (Bostrom & Ord, 2004), and was a researcher at Bostrom’s Future of Humanity Institute (FHI) in 2007 (Future of Humanity Institute, 2007) when he came up with the idea for GWWC; in fact, Bostrom helped create GWWC’s first logo (EffectiveAltruism.org, 2016). Jason Matheny, whom Ord credits with introducing him to global public health metrics as a means for comparing charity effectiveness (Matthews, 2022), was also working to promote Bostrom’s x-risk agenda (Matheny, 2006, 2009), already framing it as the most cost-effective way to save lives through donations in 2006 (User: Gaverick [Jason Gaverick Matheny], 2006). MacAskill approvingly included x-risk as a cause area when discussing his organizations on Felificia and LessWrong (Crouch [MacAskill], 2010, 2012a, 2012b, 2012c, 2012e), and x-risk and transhumanism were part of 80K’s mission from the start (User: LadyMorgana, 2011). Pablo Stafforini, one of the key intellectual architects of EA ‘behind-the-scenes’, initially on Felificia (Stafforini, 2012a, 2012b, 2012c) and later as MacAskill’s research assistant at CEA for Doing Good Better and other projects (see organizational chart in Centre for Effective Altruism, 2017a; see the section entitled “ghostwriting” in Knutsson, 2019), was deeply involved in Bostrom’s transhumanist project in the early 2000s, and founded the Argentine chapter of Bostrom’s World Transhumanist Association in 2003 (Transhumanismo.org, 2003, 2004). Rob Wiblin, who was CEA’s executive director from 2013-2015 prior to moving to his current role at 80K, blogged about Bostrom and Yudkowsky’s x-risk/AI-safety project and other transhumanist themes starting in 2009 (Wiblin, 2009a, 2009b, 2010a, 2010b, 2010c, 2010d, 2012). In 2007, Carl Shulman (one of the most influential thought-leaders of EA, who oversees a $5,000,000 discretionary fund at CEA) articulated an agenda that is virtually identical to EA’s ‘longtermist’ agenda today in a Felificia post (Shulman, 2007). Nick Beckstead, who co-founded and led the first US chapter of GWWC in 2010, was also simultaneously engaging with Bostrom’s x-risk concept (Beckstead, 2010). By 2011, Beckstead’s PhD work was centered on Bostrom’s x-risk project: he entered an extract from the work-in-progress, entitled “Global Priority Setting and Existential Risk: Crucial Ethical Considerations” (Beckstead, 2011b) to FHI’s “Crucial Considerations” writing contest (Future of Humanity Institute, 2011), where it was the winning submission (Future of Humanity Institute, 2012). His final dissertation, entitled On the Overwhelming Importance of Shaping the Far Future (Beckstead, 2013) is now treated as a foundational ‘longtermist’ text by EAs.

Throughout this period, however, EA was presented to the general public as an effort to end global poverty through effective giving, inspired by Peter Singer. Even as Beckstead was busy writing about x-risk and the long-term future in his own work, in the media he presented himself as focused on ending global poverty by donating to charities serving the distant poor (Beckstead & Lee, 2011; Chapman, 2011; MSNBC, 2010). MacAskill, too, presented himself as doggedly committed to ending global poverty....

(Becker's previous book, about the interpretation of quantum mechanics, irritated me. It recapitulated earlier pop-science books while introducing historical and technical errors, like getting the basic description of the EPR thought-experiment wrong, and butchering the biography of Grete Hermann while acting self-righteous about sexist men overlooking her accomplishments. See previous rant.)

 

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week's thread

(Semi-obligatory thanks to @dgerard for starting this)

 

Time for some warm-and-fuzzies! What happy memories do you have from your early days of getting into computers/programming, whenever those early days happened to be?

When I was in middle school, I read an article in Discover Magazine about "artificial life" — computer simulations of biological systems. This sent me off on the path of trying to make a simulation of bugs that ran around and ate each other. My tool of choice was PowerBASIC, which was like QBasic except that it could compile to .EXE files. I decided there would be animals that could move, and plants that could also move. To implement a rule like "when the animal is near the plant, it will chase the plant," I needed to compute distances between points given their x- and y-coordinates. I knew the Pythagorean theorem, and I realized that the line between the plant and the animal is the hypotenuse of a right triangle. Tada: I had invented the distance formula!
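(For the curious: here's roughly what that rule looks like, sketched in Python rather than PowerBASIC. The coordinates, the NEAR threshold, and the one-step nudge are illustrative guesses on my part, not details recovered from the original program.)

```python
import math

def distance(a, b):
    # The freshly "invented" distance formula: the straight-line gap between
    # two points is the hypotenuse of a right triangle whose legs are the
    # x- and y-differences (Pythagorean theorem).
    dx = b[0] - a[0]
    dy = b[1] - a[1]
    return math.sqrt(dx * dx + dy * dy)

# Hypothetical positions; the original tracked x/y coordinates for each critter.
animal = [3.0, 4.0]
plant = [6.0, 8.0]
NEAR = 10.0  # assumed "near" threshold, purely illustrative

# "When the animal is near the plant, it will chase the plant."
if distance(animal, plant) < NEAR:
    # Nudge the animal one step toward the plant along each axis.
    animal[0] += math.copysign(1.0, plant[0] - animal[0])
    animal[1] += math.copysign(1.0, plant[1] - animal[1])

print(distance(animal, plant))  # 5.0 before the nudge, about 3.6 after
```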

 

So, here I am, listening to the Cosmos soundtrack and strangely not stoned. And I realize that it's been a while since we've had a random music recommendation thread. What's the musical haps in your worlds, friends?

 

Need to make a primal scream without gathering footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh facts of Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

 

Bumping this up from the comments.

 

Was anyone else getting a 503 error for a little while today?

 

Need to make a primal scream without gathering footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

 

Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

 

Many magazines have closed their submission portals because people thought they could send in AI-written stories.

For years I would tell people who wanted to be writers that the only way to be a writer was to write your own stories because elves would not come in the night and do it for you.

With AI, drunk plagiaristic elves who cannot actually write and would not know an idea or a sentence if it bit their little elvish arses will actually turn up and write something unpublishable for you. This is not a good thing.

 

Tesla's troubled Cybertruck appears to have hit yet another speed bump. Over the weekend, dozens of waiting customers reported that their impending deliveries had been canceled due to "an unexpected delay regarding the preparation of your vehicle."

Tesla has not announced an official stop sale or recall, and as of now, the reason for the suspended deliveries is unknown. But it's possible the electric pickup truck has a problem with its accelerator. [...] Yesterday, a Cybertruck owner on TikTok posted a video showing how the metal cover of his accelerator pedal allegedly worked itself partially loose and became jammed underneath part of the dash. The driver was able to stop the car with the brakes and put it in park. At the beginning of the month, another Cybertruck owner claimed to have crashed into a light pole due to an unintended acceleration problem.

Meanwhile, layoffs!
