this post was submitted on 18 May 2025
16 points (100.0% liked)

TechTakes

1873 readers
227 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] BlueMonday1984@awful.systems 3 points 6 hours ago (2 children)

Adam Conover's put out an apology on YouTube regarding his milkshake duck-ing himself with Worldcoin, after his public apology on Bluesky. Seems his reputation's gonna make a full recovery.

[–] swlabr@awful.systems 2 points 3 hours ago (1 children)

Adam ruined his ruination. He truly does ruin everything

[–] o7___o7@awful.systems 1 points 1 hour ago

Can Adam ruin something so much that he can't ruin that particular ruination?

[–] Architeuthis@awful.systems 5 points 5 hours ago* (last edited 2 hours ago) (1 children)

How though? Either he got cold feet in the middle of selling out to the tech-fash, or he was honestly that incredibly oblivious (see also: agreeing to do Tim Pool's show); neither strikes me as especially mitigating.

edit: Tried to watch the video. I made it to the part where he all but claims he sold out ironically: apparently at the time he thought spreading the good news about Altman's hilariously dystopic crypto pet project was so off-brand that it would be perceived as performance art or something. Baffling.

He also kept going on about how the money wasn't even that good, I guess as further evidence that the whole thing was him going briefly insane, and not, I don't know, just him allowing sponsors to test the waters before committing more heavily.

As if the only options available to get him to shill for something would be either heaping Faustian amounts of cash on him or casting a confusion spell and hoping he likes getting underpaid.

[–] BlueMonday1984@awful.systems 3 points 3 hours ago

Mainly checked the YouTube comments and the like/dislike ratio - at the time of writing, he's got 7.3k likes to 147 dislikes, and the top comments are universally praising the guy. One particular comment quipped about how "everyone shilled for Honey except Markiplier".

Conover's video avoiding the hallmarks of a standard YouTuber Apology^tm^ is likely helping him out here - the public expects a lot of things from these kinds of videos, but "doing the bare minimum for an actual apology" is not one of them.

[–] lagrangeinterpolator@awful.systems 8 points 12 hours ago (2 children)

I know r/singularity is like shooting fish in a barrel but it really pissed me off seeing them misinterpret the significance of a result in matrix multiplication: https://old.reddit.com/r/singularity/comments/1knem3r/i_dont_think_people_realize_just_how_insane_the/

Yeah, the record has stood for "FIFTY-SIX YEARS" if you don't count all the times the record has been beaten since then. Indeed, "countless brilliant mathematicians and computer scientists have worked on this problem for over half a century without success" if you don't count all the successes that have happened since then. The really annoying part about all this is that the original announcement didn't have to lie: if you look at just 4x4 matrices, you could say there technically hasn't been an improvement since Strassen's algorithm. Wow! It's really funny how these promptfans ignore the enormous number of human achievements in an area when they decide to comment about how AI is totally gonna beat humans there.

How much does this actually improve upon Strassen's algorithm? The matrix multiplication exponent given by Strassen's algorithm is log~4~(49) (i.e. log~2~(7)), and this result would improve it to log~4~(48). In other words, it improves from 2.81 to 2.79. Truly revolutionary, AGI is gonna make mathematicians obsolete now. Ignore the handy dandy Wikipedia chart which shows that this exponent was ... beaten in 1979.

I know far less about how matrix multiplication is done in practice, but from what I've seen, even Strassen's algorithm isn't useful in applications because memory locality and parallelism are far more important. This AlphaEvolve result would represent a far smaller improvement (and I hope you enjoy the pain of dealing with a 4x4 block matrix instead of 2x2). If anyone does have knowledge about how this works, I'd be interested to know.
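
The exponent arithmetic above is easy to double-check, for anyone curious (a quick sketch, nothing more):

```python
import math

# Strassen (1969): 7 multiplications for a 2x2 block split -> exponent log2(7)
strassen = math.log2(7)

# AlphaEvolve: 48 multiplications for a 4x4 block split -> exponent log4(48)
# (note log4(49) = log2(7), so this is the claimed improvement)
alphaevolve = math.log(48, 4)

print(f"Strassen exponent:     {strassen:.4f}")    # ~2.8074
print(f"48-mult 4x4 exponent:  {alphaevolve:.4f}") # ~2.7925
```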

[–] aio@awful.systems 4 points 8 hours ago* (last edited 8 hours ago)

Yes - on the theoretical side, they do have an actual improvement, which is a non-asymptotic reduction in the number of multiplications required for the product of two 4x4 matrices over an arbitrary noncommutative ring. You are correct that the implied improvement to omega is moot since theoretical algorithms have long since reduced the exponent beyond that of Strassen's algorithm.

From a practical side, almost all applications use some version of the naive O(n^3) algorithm, since the asymptotically better ones tend to be slower in practice. However, occasionally Strassen's algorithm has been implemented and used - it is still reasonably simple after all. There is possibly some practical value to the 48-multiplications result then, in that it could replace uses of Strassen's algorithm.
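
For reference, Strassen's trick at the 2x2 level, seven multiplications instead of eight, looks like this in plain Python (a sketch over scalars; in a real implementation the entries would recursively be matrix blocks):

```python
def strassen_2x2(a, b):
    """Multiply two 2x2 matrices using 7 multiplications (Strassen, 1969).

    Works over any ring: entries may be numbers or, recursively, blocks.
    """
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b

    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)

    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]
```

The recursion on n x n matrices then does 7 subproblems of size n/2 instead of 8, which is where the log2(7) exponent comes from.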

[–] corbin@awful.systems 4 points 10 hours ago (1 children)

Your understanding is correct. It's worth knowing that the matrix-multiplication exponent actually controls multiple different algorithms. I stubbed a little list a while ago; important examples include several graph-theory algorithms as well as parsing for context-free languages. There's also a variant of P vs NP for this specific problem, because we can verify that a matrix is a product in quadratic time.

That Reddit discussion contains mostly idiots, though. We expect an iterative sequence of ever-more-complicated algorithms with ever-slightly-better exponents, approaching quadratic time in the infinite limit. We also expected a computer to be required to compute those iterates at some point; personally I think Strassen's approach only barely fits inside a brain and the larger approaches can't be managed by humans alone.
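
The quadratic-time verification mentioned above is Freivalds' algorithm: instead of recomputing A·B, multiply everything by a random vector, which only costs matrix-vector products. A minimal sketch:

```python
import random

def freivalds(A, B, C, rounds=10):
    """Randomized check that A @ B == C (Freivalds' algorithm).

    Each round costs three O(n^2) matrix-vector products; a wrong
    product survives a round with probability at most 1/2.
    """
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        # Check A(Br) == Cr without ever forming A @ B.
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # definitely not the product
    return True  # probably the product
```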

[–] aio@awful.systems 5 points 8 hours ago* (last edited 8 hours ago) (1 children)

I'm not sure what you mean by your last sentence. All of the actual improvements to omega were invented by humans; computers have still not made a contribution to this.

[–] corbin@awful.systems 2 points 31 minutes ago

Oh, sorry. We're in agreement and my sentence was poorly constructed. The computation of a matrix multiplication usually requires at least pencil and paper, if not a computer. I can't compute anything larger than a 2 × 2. But I'll readily concede that Strassen's specific trick is simple enough that a mentalist could use it.

[–] Soyweiser@awful.systems 4 points 13 hours ago (1 children)

Revealing just how forever-online I am, but: while talking about 'I like to watch', the pornographic 9/11 fan music video from the Church of Euthanasia (I'm one of the two people who remembers this, it seems), I discovered that the main woman behind it is now into AI-doom. On the side of the paperclips. General content warnings all around (suicide, general bad taste, etc). Chris was banned from a big festival (Lowlands) in The Netherlands over the 9/11 video after she was already booked (we are such a weird exclave of the USA; why book her, and then get rid of her over a 9/11 video, in 2002?). Here is one of her conversations with ChatGPT about the Church's anti-humanist manifesto, linked here not because I read it but just to show how AI is the idea that eats everything; I was amused by this weird blast from the past that I think nobody recalls, but which is now also into AGI.

[–] Amoeba_Girl@awful.systems 4 points 9 hours ago

Fascinating, thank you. Love the Church of Euthanasia's antics but I'm not surprised, it's always looked very silly 'n' bad ideologically.

[–] BlueMonday1984@awful.systems 10 points 17 hours ago (1 children)

In other news, the ghost of Dorian has haunted an autoplag system:

[–] Architeuthis@awful.systems 8 points 20 hours ago* (last edited 20 hours ago) (2 children)

Today in alignment news: Sam Bowman of anthropic tweeted, then deleted, that the new Claude model (unintentionally, kind of) offers whistleblowing as a feature, i.e. it might call the cops on you if it gets worried about how you are prompting it.

tweet text: If it thinks you're doing something egregiously immoral, for example, like faking data in a pharmaceutical trial, it will use command-line tools to contact the press, contact regulators, try to lock you out of the relevant systems, or all of the above.

tweet text: So far we've only seen this in clear cut cases of wrongdoing, but I could see it misfiring if Opus somehow winds up with a misleadingly pessimistic picture of how it's being used. Telling Opus that you'll torture its grandmother if it writes buggy code is a bad idea.

skeet text: can't wait to explain to my family that the robot swatted me after I threatened its non-existent grandma.

Sam Bowman saying he deleted the tweets so they wouldn't be quoted 'out of context': https://xcancel.com/sleepinyourhat/status/1925626079043104830

Molly White with the out of context tweets: https://bsky.app/profile/molly.wiki/post/3lpryu7yd2s2m

[–] rook@awful.systems 7 points 19 hours ago (1 children)

I am absolutely certain that letting a hallucination-as-a-service system call the police if it suspects a user is being nefarious is a great plan. This will definitely ensure that all the people threatening their chatbots with death will think twice about their language, and no-one on the internet will ever be naughty ever again. The police will certainly thank anthropic for keeping them up to date with the almost certainly illegal activities of a probably small number of criminal users.

[–] froztbyte@awful.systems 8 points 18 hours ago* (last edited 18 hours ago) (2 children)

can't wait for the training set biases to cause a fresh horror for marginalised groups that happen to have to use this shit because it's forced on them. I'm sure it'll all go perfectly and nothing bad will happen

:|

[–] Soyweiser@awful.systems 5 points 15 hours ago* (last edited 2 hours ago)

Remember those comments with links in them bots leave on dead websites? Imagine instead of links it sets up an AI to think of certain specific behaviour or people as immoral.

Swatting via distributed hit piece.

Or if you manage to figure out that people are using an LLM to do input sanitization/log reading, you could now figure out a way to get an instruction into the logs and trigger alarms this way. (E: I'm reminded of the story from the before times, where somebody piped logging to a bash terminal and got shelled because somebody sent a bash exploit which was logged).

Or just send an instruction which changes the way it tries to communicate, and have the LLM call not the cops but a number controlled by hackers which pays out to them, like the stories of the A2P sms fraud which Musk claimed was a problem on twitter.

Sure, competent security engineering can prevent a lot of these attacks, but, you know, points at the entire history of computers.

Imagine if this system was implemented for Grok when it was doing the 'everything is white genocide' thing.

Via Davidgerard on bsky: https://arstechnica.com/security/2025/05/researchers-cause-gitlab-ai-developer-assistant-to-turn-safe-code-malicious/ lol lmao

[–] YourNetworkIsHaunted@awful.systems 8 points 18 hours ago (2 children)

Gonna go ahead and start counting the days until an unarmed black person in the US gets killed in a police interaction prompted by this fucking nonsense.

[–] Soyweiser@awful.systems 5 points 14 hours ago* (last edited 14 hours ago)

Think this already happened; not this specific bit, but an AI-involved shooting. Esp considering we know a lot of black people have been falsely arrested due to facial ID already. And with the gestapofication of the USA that will just get worse. (Esp when the police go: no regulations on AI gives us carte blanche too. No need for extra steps).

[–] froztbyte@awful.systems 5 points 18 hours ago

yeah it's gonna happen way too damn fucking quickly (and way too damn fucking many times, too)

[–] swlabr@awful.systems 6 points 18 hours ago

Swatting as a service

[–] swlabr@awful.systems 8 points 21 hours ago (2 children)

In the current chapter of “I go looking on linkedin for sneer-bait and not jobs, oh hey literally the first thing I see is a pile of shit”

text in image: Can ChatGPT pick every 3rd letter in "umbrella"?

You'd expect "b" and "l". Easy, right?

Nope. It will get it wrong.

Why? Because it doesn't see letters the way we do.

We see:

u-m-b-r-e-l-l-a

ChatGPT sees something like:

"umb" | "rell" | "a"

These are tokens — chunks of text that aren't always full words or letters.

So when you ask for "every 3rd letter," it has to decode the prompt, map it to tokens, simulate how you might count, and then guess what you really meant.

Spoiler: if it's not given a chance to decode tokens in individual letters as a separate step, it will stumble.

Why does this matter?

Because the better we understand how LLMs think, the better results we'll get.
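
(For the record, the task the post is agonizing over is a one-liner anywhere outside an LLM; a trivial sketch:)

```python
word = "umbrella"

# Every 3rd letter, counting from the 3rd: indices 2, 5, ...
every_third = word[2::3]
print(every_third)  # "bl"

# An LLM never sees these characters directly; it sees token IDs for
# chunks like "umb" / "rell" / "a", which is why it stumbles here.
```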

[–] BlueMonday1984@awful.systems 8 points 16 hours ago (1 children)

Why does this matter?

Well, it's a perfect demonstration that LLMs flat-out do not think like us. Even a goddamn five-year-old could work this shit out with flying colours.

[–] swlabr@awful.systems 6 points 15 hours ago (1 children)

Yeah exactly. Loving the dude's mental gymnastics to avoid the simplest answer and instead spin it into moralising about promptfondling more good

[–] Soyweiser@awful.systems 6 points 14 hours ago

LLMs cannot fail, they can only be prompted incorrectly. (To be clear, as I know there will be people who think this is good, I mean this in a derogatory way)

That's a whole lot of words to say that it can't spell.

[–] sailor_sega_saturn@awful.systems 7 points 23 hours ago* (last edited 23 hours ago) (4 children)

Here's a video of a Tesla vehicle taking the saying "move fast and break things" to heart.

[–] V0ldek@awful.systems 3 points 14 hours ago* (last edited 14 hours ago) (1 children)

Aren't you supposed to use whatever "self-driving" nonsense they have on highways only? I thought Tesla explicitly says you can't do it on a normal road cause, well, it doesn't fucking work.

It doesn't even seem like the driver is actually holding the wheel, like they don't try to avoid that at all

Just a second before the crash a car goes by; this thing could've just as easily swerved right into that other car and injured someone. Someone should at least lose their license for this.

[–] Amoeba_Girl@awful.systems 2 points 14 hours ago

I thought Tesla explicitly says you can’t do it on a normal road cause, well, it doesn’t fucking work.

Maybe officially Tesla does, but the feature is called "Full Self-Driving" and Elon Musk sure as shit wants his marks to believe you can input a destination and let your car drive you all the way through.

So, yes, Tesla should at the very least lose their business licence over this.

[–] YourNetworkIsHaunted@awful.systems 10 points 20 hours ago

I don't think I have a better sneer than "in its defence, that tree did look like a child" from the YouTube comments.

[–] Soyweiser@awful.systems 2 points 14 hours ago

I'm reminded of the cartoon bullets from Who Framed Roger Rabbit.

[–] swlabr@awful.systems 4 points 20 hours ago

video events: Ah you see, this is proof that FSD is actually AGI. Elon told the FSD that it needs to maximise tesla profits. The FSD accessed a camera pointing at a tesla earnings report and realised that it could increase the value of tesla’s carbon credit scheming by taking out trees, hence the events of the video

[–] antifuchs@awful.systems 10 points 1 day ago* (last edited 1 day ago) (2 children)

They’re making students listen to fabulated pronunciations of their name at the graduation ceremony https://fixupx.com/CollinRugg/status/1925328380742062485

The Magna Cooom Loud thing could absolutely be a sketch https://fixupx.com/stevemur/status/1925350041277145159

[–] Amoeba_Girl@awful.systems 3 points 14 hours ago

I can't tell if Emalee and Subrina are special phonetic spellings for the robot or if this is what names are now...

[–] veganes_hack@feddit.org 5 points 19 hours ago (1 children)

absolutely not excusing this soulless garbage, but technically the "coom" pronunciation is the more correct one, compared to what i assume would usually be "cum" (not an english native, but took latin in school)

[–] antifuchs@awful.systems 2 points 14 hours ago

Yeah, I grew up speaking a language that pronounces Latin closer to Italian than to English too (:

This particular thing is actually doubly funny to me, whose first practical professional program was one that took German text with English words mixed in and used regex to transform the English terms into nonsense words that would get pronounced right by the German-only text-to-speech system. That was 2002.
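
The trick described is easy to sketch today (hypothetical respellings and word list, not the original 2002 code):

```python
import re

# Hypothetical German-phonetic respellings for English loanwords, so a
# German-only text-to-speech engine pronounces them roughly right.
RESPELL = {
    "download": "daunlohd",
    "browser": "brauser",
    "update": "appdäht",
}

pattern = re.compile(r"\b(" + "|".join(RESPELL) + r")\b", re.IGNORECASE)

def respell(text):
    # Replace each known English term with its phonetic nonsense word.
    return pattern.sub(lambda m: RESPELL[m.group(1).lower()], text)

print(respell("Bitte den Browser neu starten und das Update laden."))
# -> Bitte den brauser neu starten und das appdäht laden.
```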

[–] o7___o7@awful.systems 11 points 1 day ago* (last edited 1 day ago) (1 children)

Our subjects here at awful systems can make us angry. They can spend lots of money to make everything worse. They can even make us dead if things go really off the rails, but one thing they can never do is make us take them seriously.

[–] fullsquare@awful.systems 6 points 1 day ago

does awful have taglines enabled? this would be nice as one

[–] yellowcake@awful.systems 6 points 1 day ago (2 children)

I missed predatory company Klarna declaring themselves an AI company. The CEO loves to spout about how much of the workforce was laid off to be replaced with “AI”, and in their latest earnings report the CEO was an “AI avatar” delivering the report. Sounds like they should have laid him off first.

https://techcrunch.com/2025/05/21/klarna-used-an-ai-avatar-of-its-ceo-to-deliver-earnings-it-said/

[–] mii@awful.systems 5 points 21 hours ago

Klarna is one company that boggles my mind. Here in Germany it’s against literally every bank's TOS to hand out your login data to other people, they can (and do) terminate your account for that. And yet Klarna works by asking for your login data, including a fucking transaction token, to do their thing.

You literally type your bank login data including an MFA token into a legalized phishing site so they can log into your account and make a transaction for you. And the banks are fine with it. I don’t get it.

The German Supreme Court even deemed this whole shit as unsafe all the way back in 2016 and said that websites aren’t allowed to offer Klarna as the only payment option because it’s an “unacceptable risk” for the customer, lol.

Oh, and they of course also scan your account activity while they’re in there, because who’d give up all that sweet data, which we only know because they’ve been slapped with a GDPR violation a few years back for not telling people about it.

Yet for some reason it is super popular.

[–] o7___o7@awful.systems 7 points 1 day ago (1 children)

No one:

Absolutely nobody:

Klarna: What if we financialized buying burritos using AI?

[–] yellowcake@awful.systems 6 points 1 day ago

If there’s any good news to pull from this, people are doing buy now pay later on AI powered burritos but skipping the pay later portion.
