this post was submitted on 20 Apr 2025
22 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] BlueMonday1984@awful.systems 5 points 5 hours ago (2 children)

Found a thread doing numbers on Bluesky, about Google's AI summaries producing hot garbage (as usual):

[–] blakestacey@awful.systems 5 points 2 hours ago (1 children)

Also on the BlueSky-o-tubes today, I saw this from Ketan Joshi:

Used [hugging face]'s new tool to multiply 2 five digit numbers

Chatbot: wrong answer, 0.3 watthours

Calc: right answer, 0.00000011 watthours (2.5 million times less energy)
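
Taking the quoted figures at face value, the gap is easy to sanity-check. A back-of-the-envelope sketch, not Joshi's methodology (his unrounded inputs presumably land nearer the 2.5 million he cites):

```python
# Sanity check of the quoted energy gap, using the tweet's figures as given.
chatbot_wh = 0.3            # chatbot: wrong answer
calculator_wh = 0.00000011  # calculator: right answer
print(f"{chatbot_wh / calculator_wh:,.0f}x")  # ~2,727,273x, i.e. roughly 2.7 million
```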

[–] froztbyte@awful.systems 2 points 1 hour ago* (last edited 1 hour ago)

Julien Delavande, an engineer at AI research firm Hugging Face, has developed a tool that shows in real time the power consumption of the chatbot generating

gnnnnnngh

this shit pisses me off so bad

there's actually quantifiable shit you can use across vendors[0]. there's even some software[1] you can just slap in place and get some good free easy numbers with! these things are real! and are usable!

"measure the power consumption of the chatbot generating"

I'm sorry you fucking what? just how exactly are you getting wattage out of openai? are you lovingly coaxing the model to lie to you about total flops spent?

[0] - intel's def been better on this for a while but leaving that aside for now..

[1] - it's very open source! (when I last looked there was no continual in-process sampling so you got hella at-observation sampling problems; but, y'know, can be dealt with)
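
For the curious, here's a minimal sketch of the kind of counters being gestured at above: on Linux, Intel's RAPL energy counters are exposed through the powercap sysfs interface, so you can read cumulative package energy before and after a workload. This is an illustration only, not the unnamed tool from footnote [1]; it assumes a Linux box with RAPL support, reading energy_uj usually needs root, and counter wraparound is ignored for brevity.

```python
#!/usr/bin/env python3
"""Rough sketch: measure CPU package energy via the Linux powercap/RAPL interface."""

import glob

# Package-level domains only (intel-rapl:0, intel-rapl:1, ...), skipping subzones.
RAPL_GLOB = "/sys/class/powercap/intel-rapl:[0-9]/energy_uj"

def read_energy_uj() -> int:
    """Sum cumulative energy (microjoules) across all RAPL package domains."""
    return sum(int(open(path).read()) for path in glob.glob(RAPL_GLOB))

def measure_wh(fn, *args, **kwargs):
    """Run fn and return (result, watt-hours drawn by the CPU packages meanwhile)."""
    before = read_energy_uj()
    result = fn(*args, **kwargs)
    after = read_energy_uj()
    joules = (after - before) / 1e6   # counters wrap at max_energy_range_uj; ignored here
    return result, joules / 3600.0    # 1 Wh = 3600 J

if __name__ == "__main__":
    # Stand-in workload: multiply two five-digit numbers a few million times.
    _, wh = measure_wh(lambda: [12345 * 67890 for _ in range(5_000_000)])
    print(f"~{wh:.9f} Wh")
```

Which, as froztbyte says, only works on hardware you control — it gets you nothing about wattage on the far side of a remote API.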

[–] YourNetworkIsHaunted@awful.systems 7 points 3 hours ago* (last edited 3 hours ago) (2 children)

I tried this a couple of times and got a few "AI summary not available" replies.

Ed: heh

The phrase "any pork in a swarm" is an idiom, likely meant to be interpreted figuratively. It's not a literal reference to a swarm of bees or other animals containing pork. The most likely interpretation is that it is being used to describe a situation or group where someone is secretly taking advantage of resources, opportunities, or power for their own benefit, often in a way that is not transparent or ethical. It implies that individuals within a larger group are actively participating in corruption or exploitation.

Generative AI is experimental.

[–] blakestacey@awful.systems 7 points 2 hours ago

NOT THE (PORK-FILLED) BEES!

[–] swlabr@awful.systems 3 points 2 hours ago* (last edited 2 hours ago) (1 children)

The link opened up another Google search with the same query, tho without the AI summary.

[Image of a Google search result]

Query: “a bear fries bacon meaning”

AI summary:

The phrase "a bear fries bacon" is a play on the saying "a cat dreams of fish" which is a whimsical way to express a craving. In this case, the "bear" and "bacon" are just random pairings. It's not meant to be a literal description of a bear cooking bacon. It's a fun, nonsensical phrase that people may use to express an unusual or unexpected thought or craving, according to Google Search.

It really aggressively tries to match it up to something with similar keywords and structure, which is kind of interesting in its own right. It pattern-matched every variant I could come up with for "when all you have is..." for example.

Honestly it's kind of an interesting question and limitation for this kind of LLM. How should you respond when someone asks about an idiom neither of you knows? The answer is really contextual. Sometimes it's better to try and help them piece together what it means; other times it's more important to acknowledge that this isn't actually a common expression, or to try and provide accurate sourcing. The LLM, of course, has none of that context, and because the patterns it replicates don't allow for expressions of uncertainty or digressions, it can't actually do both.