Oh FFS, that couple have managed to break into Sweden's public broadcasting site
gerikson
Here's LWer "johnswentworth", who has more than 57k karma on the site and can be characterized as a big cheese:
My Empathy Is Rarely Kind
I usually relate to other people via something like suspension of disbelief. Like, they’re a human, same as me, they presumably have thoughts and feelings and the like, but I compartmentalize that fact. I think of them kind of like cute cats. Because if I stop compartmentalizing, if I start to put myself in their shoes and imagine what they’re facing… then I feel not just their ineptitude, but the apparent lack of desire to ever move beyond that ineptitude. What I feel toward them is usually not sympathy or generosity, but either disgust or disappointment (or both).
"why do people keep saying we sound like fascists? I don't get it!"
The artillery branch of most militaries has long been a haven for the more brainy types. Napoleon was a gunner, for example.
Oh, but LW has the comeback for you in the very first paragraph
Outside of niche circles on this site and elsewhere, the public's awareness about AI-related "x-risk" remains limited to Terminator-style dangers, which they brush off as silly sci-fi. In fact, most people's concerns are limited to things like deepfake-based impersonation, their personal data training AI, algorithmic bias, and job loss.
Silly people! Worrying about problems staring them in the face, instead of the future omnicidal AI that is definitely coming!
LessWronger discovers that the great unwashed masses, who inconveniently still indirectly affect policy through outmoded concepts like "voting" instead of writing blogs, might need some easily digested media pablum to be convinced that Big Bad AI is gonna kill them all.
https://www.lesswrong.com/posts/4unfQYGQ7StDyXAfi/someone-should-fund-an-agi-blockbuster
Cites such cultural touchstones as "The Day After Tomorrow", "An Inconvenient Truth" (truly a GenZ hit), and "Slaughterbots", which I've never heard of.
Listen to the plot summary
- Slowburn realism: The movie should start off in mid-2025. Stupid agents. Flawed chatbots, algorithmic bias. Characters discussing these issues behind the scenes while the world is focused on other issues (global conflicts, Trump, celebrity drama, etc). [ok so basically LW: the Movie]
- Explicit exponential growth: A VERY slow build-up of AI progress such that the world only ends in the last few minutes of the film. This seems very important to drill home the part about exponential growth. [ah yes, exponential growth, a concept that lends itself readily to drama]
- Concrete parallels to real actors: Themes like "OpenBrain" or "Nole Tusk" or "Samuel Allmen" seem fitting. ["we need actors to portray real actors!" is genuine Hollywood film talk]
- Fear: There's a million ways people could die, but featuring ones that require the fewest jumps in practicality seem the most fitting. Perhaps microdrones equipped with bioweapons that spray urban areas. Or malicious actors sending drone swarms to destroy crops or other vital infrastructure. [so basically people will watch a conventional thriller except in the last few minutes everyone dies. No motivation. No clear "if we don't cut these wires everyone dies!"]
OK so what should be shown in the film?
compute/reporting caps, robust pre-deployment testing mandates (THESE are all topics that should be covered in the film!)
Again, these are the core components of every blockbuster. I can't wait to see "Avengers vs the AI" where Captain America discusses robust pre-deployment testing mandates with Tony Stark.
All the cited URLs in the footnotes end with "utm_source=chatgpt.com". 'nuff said.
At this point in time, having a substack is in itself a red flag.
The targets are informed, via a grammatically invalid sentence.
Sam Kriss (author of the ‘Laurentius Clung’ piece) has posted a critique. I don’t think it’s good, but I do think it’s representative of a view that I do encounter in the wild but haven’t really seen written up.
FWIW the search term 'Laurentius Clung' gets no hits on LW, so I'm to assume everyone there also is Extremely Online on Xitter and instantly knows the reference.
https://www.lesswrong.com/posts/3GbM9hmyJqn4LNXrG/yams-s-shortform?commentId=MzkAjd8EWqosiePMf
This was a good read. I also read the post/story/essay that got the rats upset and it's good too.
https://samkriss.substack.com/p/the-law-that-can-be-named-is-not
Yud's sputtering reaction can be read among the comments here
Remember FizzBuzz? That was originally a simple filter exercise a recruiter came up with to weed out candidates with multi-year CS degrees but zero actual programming experience.
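For anyone who somehow missed it: the whole exercise fits in a few lines. A minimal sketch of the usual spec (multiples of 3 print "Fizz", multiples of 5 print "Buzz", both print "FizzBuzz"; the function name is mine):

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        # Build the word from the divisibility rules; fall back to the number.
        word = ("Fizz" if i % 3 == 0 else "") + ("Buzz" if i % 5 == 0 else "")
        out.append(word or str(i))
    return out

print(fizzbuzz(15))
```

That this ever worked as a filter says more about hiring pipelines than about programming.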
The argument would be stronger (not strong, but stronger) if he could point to an existing numbering system that is little-endian and somehow show it's better
The guy who thinks it's important to communicate clearly (https://awful.systems/comment/7904956) wants to flip the number order around
https://www.lesswrong.com/posts/KXr8ys8PYppKXgGWj/english-writes-numbers-backwards
I'll consider that when the Yanks abandon middle-endian date formatting.
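For what it's worth, "little-endian" here just means writing the least-significant digit first, so 1234 becomes 4321. A toy sketch of the proposed order (the function name is mine, not the post's):

```python
def little_endian_digits(n, base=10):
    """Return the digits of a non-negative integer, least-significant first."""
    if n == 0:
        return [0]
    digits = []
    while n:
        # divmod peels off the least-significant digit each iteration.
        n, d = divmod(n, base)
        digits.append(d)
    return digits

print(little_endian_digits(1234))
```

Note that reading such a number aloud left to right would require revising the place value of everything you've already heard, which is roughly the objection.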
Edit: it's now tagged as "Humor" on LW. Cowards. Own your cranks.
Nothing expresses the inherent atomism and libertarian nature of the rat community like this:
https://www.lesswrong.com/posts/HAzoPABejzKucwiow/alcohol-is-so-bad-for-society-that-you-should-probably-stop
A rundown of the health risks of alcohol usage, coupled with actual real proposals (a consumption tax), finishes with the conclusion that the individual reader (statistically well-off and well-socialized) should abstain from alcohol altogether.
No calls for campaigning for a national (US) alcohol tax. No calls to fund orgs fighting alcohol abuse. Just individual, statistically meaningless "action".
Oh well, AGI will solve it (or the robot god will be a raging alcoholic)