Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Found a thread doing numbers on Bluesky, about Google's AI summaries producing hot garbage (as usual):
Also on the BlueSky-o-tubes today, I saw this from Ketan Joshi:
gnnnnnngh
this shit pisses me off so bad
there's actually quantifiable shit you can use across vendors[0]. there's even some software[1] you can just slap in place and get some good free easy numbers with! these things are real! and are usable!
I'm sorry you fucking what? just how exactly are you getting wattage out of openai? are you lovingly coaxing the model to lie to you about total flops spent?
[0] - intel's def been better on this for a while but leaving that aside for now..
[1] - it's very open source! (when I last looked there was no continual in-process sampling so you got hella at-observation sampling problems; but, y'know, can be dealt with)
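For anyone curious what "quantifiable shit" looks like in practice: on Linux, Intel's RAPL energy counters are exposed through the powercap sysfs interface, and average wattage is just the energy delta over time. A minimal sketch (assuming a readable `intel-rapl:0` package domain, which depends on your kernel config and permissions — the sysfs paths here are the standard ones, but verify them on your box):

```python
# Minimal sketch: estimate average package power from Intel RAPL energy
# counters via the Linux powercap sysfs interface.
# Assumption: /sys/class/powercap/intel-rapl:0 exists and is readable;
# on many systems energy_uj requires root.
import time

RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"
RAPL_MAX = "/sys/class/powercap/intel-rapl:0/max_energy_range_uj"

def average_watts(e0_uj, e1_uj, seconds, max_range_uj):
    """Average power between two energy readings (microjoules),
    handling the counter wrapping around its max range."""
    delta = e1_uj - e0_uj
    if delta < 0:  # counter wrapped past max_energy_range_uj
        delta += max_range_uj
    return (delta / 1_000_000) / seconds  # uJ -> J -> W

def sample_package_power(interval=1.0):
    """Read the package-domain counter twice, return average watts."""
    with open(RAPL_MAX) as f:
        max_range = int(f.read())
    with open(RAPL_ENERGY) as f:
        e0 = int(f.read())
    time.sleep(interval)
    with open(RAPL_ENERGY) as f:
        e1 = int(f.read())
    return average_watts(e0, e1, interval, max_range)
```

Note this only covers what's on your own hardware — which is exactly the point of the sneer: none of this tells you anything about wattage inside OpenAI's datacenters, and sampling only at observation time still gives you the at-observation sampling problems mentioned above.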
I tried this a couple of times and got a few "AI summary not available" replies.
Ed: heh
Generative AI is experimental.
NOT THE (PORK-FILLED) BEES!
The link opened up another google search with the same query, tho without the AI summary.
[image of a google search result description]
Query: “a bear fries bacon meaning”
AI summary:
It really aggressively tries to match it up to something with similar keywords and structure, which is kind of interesting in its own right. It pattern-matched every variant I could come up with for "when all you have is..." for example.
Honestly it's kind of an interesting question and limitation for this kind of LLM. How should you respond when someone asks about an idiom neither of you knows? The answer is really contextual. Sometimes it's better to try and help them piece together what it means; other times it's more important to acknowledge that this isn't actually a common expression, or to try and provide accurate sourcing. The LLM, of course, has none of that context, and because the patterns it replicates don't allow for expressions of uncertainty or digressions, it can't actually do both.