Last Week Tonight's rant of the week is about AI slop. A YouTube video is available here. Their presentation is down-to-earth enough to be shareable with parents and extended family, focusing on fake viral videos spreading via Facebook, Instagram, and Pinterest, and dissecting several examples of slop to help inoculate the audience.
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
on the topic of bunk wiki articles, what is this lmao https://en.wikipedia.org/wiki/Risk_of_astronomical_suffering
According to some scholars, s-risks warrant serious consideration as they are not extremely unlikely and can arise from unforeseen scenarios.
Guys I have found a way to phrase my anxiety in a way where every single word is extremely load-bearing
Minor bit of personal news: Newgrounds got hit with a wave of AI slop games recently.
I caught onto it back on Wednesday, but didn't get official confirmation until yesterday, when another user investigated the games and discovered the exact slop-generator used to shit them out - VIDEOGAME.ai.
Thanks for the work you do on Newgrounds!
Appreciate it - keeping one of the last bastions of creativity free of slop is a thankless task.
This sentence stuck out to me
No more worrying about lack of content or fickle UGC creators
Oh they’re just publicly advertising their company to be anti-union. Bold.
What is AI if not a tool built to abuse the proletariat?
That's fucking abominable. I was originally going to ask why anyone would bother throwing their slop on Newgrounds of all sites, but given the business model here I think we can be pretty confident they were hoping to use it to advertise.
Also, fully general bullshit detection question no.142 applies: if this turnkey game studio works as well as you claim, why are you selling it to me instead of doing it yourself? (Hint: it's because it doesn't actually work)
Dan McQuillan just dropped the text of a seminar he gave: The role of the University is to resist AI
Starting this off with Baldur Bjarnason sneering at his fellow techies for their "reading" of Dante's Inferno:
Reading through my feed reader and seeing tech dilettantes “doing” Dante in a week and change, I’m reminded of the time in university when we spent half a semester discussing Dante’s Divine Comedy, followed by tracing its impact and influence over the centuries
I don’t think these assholes even bother to read their footnotes, and their writing all sounds like it comes from ChatGPT. Naturally so, because I believe them when they claim they don’t use it for writing. They’re just genuinely that dull
At least read the footnotes FFS
If they were reading Dante for pleasure, that’d be different—genuinely awesome, even. But all of this is framed as doing the entirety of “humanities” in the space of a few weeks.
The field of artificial intelligence has come full circle.
“It’s really hard to think about alignment. Maybe we need to redesign thinking” type shit
Fucking rude to drag lisp into this. How dare they.
PZ Myers boosted the pivot-to-ai piece on veo3: https://freethoughtblogs.com/pharyngula/2025/06/23/so-much-effort-spiraling-down-the-drain-of-ai/
This Thiel interview clip is amazing
Watch Ross Douthat realize for a moment in real time that he's spent a decade making ideological bedfellows with a techno-futurist, fascist Right that wants to see the birth of a "machine god" & is in no way enthusiastic about the survival of the human race in universal terms.
New Yorker put out an article on how AI use is homogenizing thought processes and writing ability.
Our friends on the orange site have clambered over each other to all make very similar counterarguments. Kind of proves the article, no?
I love this one:
All connection technology is a force for homogeneity. Television was the death of the regional accent, for example.
Holy shit. Yes, TV has reduced the strength of accents. But "the death"? Tell me again how little you pay attention to the people you inevitably interact with day to day.
Also tell me more about how you don't have a lower-class or nonwhite-coded accent.
Following up on the thread that spawned from my comment yesterday:
https://awful.systems/comment/7777035
(I'm in vacation mode and forgot it was late on Sunday)
I wonder if Habryka, the LWer who posted both there and on Xhitter that "someone should do something about this troublesome page", realized there would have been less pushback if he'd simply coordinated in the background and gotten the edits in place without forewarning others. Was it intentional to try to pick a fight with Wikipedians?
Or was it a consequence of the fact that capital-R Rationalists just don't shut up?
The wikipedia talk page is some solid sneering material. It's like Habryka and HandofLixue can't imagine any legitimate reason why Wikipedia has the norms it does, and they can't imagine how a neutral Wikipedian could come to write that article about lesswrong.
Eigenbra accurately calling them out...
"I also didn't call for any particular edits". You literally pointed to two sentences that you wanted edited.
Your twitter post also goes against Wikipedia practices by casting WP:ASPERSIONS. I can't speak for any of the other editors, but I can say I have never read nor edited RationalWiki, so you might be a little paranoid in that regard.
As to your question:
Was it intentional to try to pick a fight with Wikipedians?
It seems to be ignorance on Habryka's part, but judging by the talk page, instead of acknowledging their ignorance of Wikipedia's reasonable policies, they seem to be doubling down.
Following up because the talk page keeps providing good material...
Hand of Lixue keeps trying to throw around the Wikipedia rules like the other editors haven't seen people try to weaponize the rules to push their views many times before.
Particularly for the unflattering descriptions I included, I made sure they reflect the general view in multiple sources, which is why they might have multiple citations attached. Unfortunately, that has now led to complaints about overcitation from @Hand of Lixue. You can't win with some people...
Looking back on the original lesswrong ~~brigade organizing~~ discussion of how to improve the wikipedia article, someone tried explaining the rules to Habryka at the time, and they were dismissive.
I don’t think it counts as canvassing in the relevant sense, as I didn’t express any specific opinion on how the article should be edited.
Yes Habryka, because you clearly have such a good understanding of the Wikipedia rules and norms...
Also, heavily downvoted on the lesswrong discussion is someone suggesting Wikipedia is irrelevant because LLMs will soon be the standard for "access to ground truth". I guess even lesswrong knows that is bullshit.
Wow, this is shit: https://en.wikipedia.org/wiki/Inner_alignment
Edit: I have been informed that the correct statement in line with Wikipedia's policies is WP:WOWTHISISSHIT
Rather than trying to participate in the "article for deletion" dispute with the most pedantic nerds on Earth (complimentary) and the most pedantic nerds on Earth (derogatory), I will content myself with pointing and laughing at the citation to Scientific Reports, aka "we have Nature at home"
Habryka doesn't really know how not to start fights
Maybe instead of worrying about obscure wiki pages, Habryka should reflect why a linkpost titled Racial Dating Preferences and Sexual Racism is on the front page of his precious community now, with 48 karma and 22 comments.
You know, just this once, I am willing to see the "Dead Dove: Do Not Eat" label and be content to leave the bag closed.
Is it praxis when you put theory into inaction?
The bullshit engine has convinced my dirtbag sib-in-law that they can claim squatter's rights on (and take ownership of) the house that they aren't paying rent to live in.
They've been there a year.
They're gonna be homeless before this is over and I can't get them to see reason. I feel totally helpless, real big Cassandra vibes. LLMs are sooooo unhealthy for assholes.
depends on jurisdiction of course, but where i live you can pull something like this. it takes something like 30 years of living in the same place at minimum tho
Yeah, it's nuts. They'd have to be resident, pay land taxes, and make improvements for 7 years here. They don't even mow the grass, the owner does.
these idiots made me feel sympathy for a landlord. I might never recover.
...
As an aside, it's fun to imagine the similar sort of brain damage a chatbot would cause Fox Mulder.
I'd tell them to contact local squatters who have experience in this stuff over trusting LLMs myself. But those people will probably not tell them what they want to hear.
Another response to Ptacek. "Vibe coding as contempt for materiality" part is particularly good.
Probably worth a thread in its own right. I find the "contempt" framing to be particularly powerful. Contempt as illustrated herein is the necessary shadow of the relentlessly positivist "you can do/be anything!" cultural messaging that accompanied the rise of the current tech industry. (I'm tempted to use Neil Postman's term "technopoly," but I feel the need to reread his book at least once more before appropriating it wholesale into these discussions.) The positivism is the seed that drives people to take an aggressively technical approach to reality, and contempt is one possible response to reality imposing constraints through technical limitations. Not necessarily one that I have ever chosen myself, but I see now that much of what we discuss here comes from people who have.
Overall I think this essay is going to be a bedrock reference for a lot of people going forward.
We were joking about this last week if memory serves, but at least one person out there has started a rough aggregator of different sources of pre-AI internet dumps.
It's all gotta be in the models by now, but it's gonna be a cool resource for something, right?
It’s all gotta be in the models by now, but it’s gonna be a cool resource for something, right?
It'll also help the 'Net recover from the slop-nami once AI finally dies.
AI-powered lie detectors spotted in the wild - https://pimagazine.com/wp-content/uploads/2025/05/100.png => https://eyecanknow.com/
Brought to you by researchers at the University of Utah. smh.
The folks over at futurism are continuing to do their damnedest to spotlight the ongoing mental health crisis being spurred by chatbot sycophants.
I think the real problem this poses for OpenAI is that in order to address it they basically need to back out of their entire sales pitch. Like, these are basically people who fully believe the hype and it pretty clearly is part of sending them down a very bad road.
Thomas Claburn writes in The Register:
IT consultancy Gartner predicts that more than 40 percent of agentic AI projects will be cancelled by the end of 2027 due to rising costs, unclear business value, or insufficient risk controls.
That implies something like 60 percent of agentic AI projects would be retained, which is actually remarkable given that the rate of successful task completion for AI agents, as measured by researchers at Carnegie Mellon University (CMU) and at Salesforce, is only about 30 to 35 percent for multi-step tasks.
Apparently Jan Marsalek worked for the GRU. Trashfuture is going to feast:
Was checking out the QOI image format and the politics of the dev, and found that he is pretty comfortable around the Ladybird people. (sigh) Also the r slur on twitter.
Really amazing that such a simple format achieves PNG-comparable sizes with faster encoding. The "1-page specification" is really more like 2 pages with slightly bigger text, for bragging rights.
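The simplicity claim is pretty literal, by the way. Here's a rough Python sketch of the two tricks that do most of the work in QOI as I read the spec (run-length chunks plus a 64-slot hash index of recently seen pixels). It's a toy, not a conforming encoder: it skips the header/footer and the DIFF/LUMA ops, and only handles RGBA pixels.

```python
# Toy sketch of two core QOI ideas: run-length chunks and a 64-slot
# "recently seen pixel" hash index. Not a conforming encoder: no
# header/footer, no DIFF/LUMA ops, RGBA input only.

def qoi_sketch_encode(pixels):  # pixels: list of (r, g, b, a) tuples
    out = bytearray()
    index = [(0, 0, 0, 0)] * 64       # recently-seen pixel table
    prev = (0, 0, 0, 255)             # spec's initial "previous" pixel
    run = 0

    def flush_run():
        nonlocal run
        while run > 0:
            n = min(run, 62)          # a run chunk stores lengths 1..62
            out.append(0b11000000 | (n - 1))
            run -= n

    for px in pixels:
        if px == prev:                # same as last pixel: extend the run
            run += 1
            continue
        flush_run()
        r, g, b, a = px
        h = (r * 3 + g * 5 + b * 7 + a * 11) % 64   # spec's index hash
        if index[h] == px:
            out.append(0b00000000 | h)    # index chunk: 1 byte for a repeat
        else:
            index[h] = px
            out.extend((0xFF, r, g, b, a))  # literal RGBA chunk (5 bytes)
        prev = px
    flush_run()
    return bytes(out)
```

Feeding it 100 identical red pixels, e.g. `qoi_sketch_encode([(255, 0, 0, 255)] * 100)`, comes out to 7 bytes: one literal RGBA chunk plus two run chunks.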
Wait Ladybird is anti-woke? Sigh am I going to have to make my own browser?
(I know I know I'm a lot better about posting about wanting to do cool stuff than actually doing it, hazard of having a full time job)
Dominic Szablewski also founded the German image board pr0gramm where he is known under the name cha0s. It's similar to 4chan in many ways. That he enjoys the Ladybird crowd isn't surprising.
Ed Zitron summarizes his premium post in the better offline subreddit: Why Did Microsoft Invest In OpenAI?
Summary of the summary: they fully expected OpenAI would've gone bust by now and MS would be looting the corpse for all it's worth.
I also feel like while it's absolutely true that the whole "we'll make AGI and get a ton of money" narrative was always bullshit (whether or not anyone relevant believed it) it is also another kind of evil. Like, assuming we could reach a sci-fi vision of AGI just as capable as a human being, the primary business case here is literally selling (or rather, licensing out) digital slaves. Like, if they did believe their own hype and weren't grifting their hearts out then they're a whole different class of monster. From an ethical perspective, the grift narrative lets everyone involved be better people.