this post was submitted on 08 Jun 2025
71 points (100.0% liked)

TechTakes

[–] scruiser@awful.systems 25 points 19 hours ago* (last edited 17 hours ago) (3 children)

The promptfondlers on places like /r/singularity are trying so hard to spin this paper. "It's still doing reasoning, it just somehow mysteriously fails when its reasoning gets too long!" or "LRMs improved with an intermediate number of reasoning tokens" or some other excuse. They are missing the point that short and medium-length "reasoning" traces are potentially the result of pattern memorization.

If the LLMs were actually reasoning and not just pattern-matching against memorized examples, then extending the number of reasoning tokens proportionally with the task length should let them maintain performance on the tasks instead of catastrophically failing. Because this isn't the case, Apple's paper is evidence for what big names like Gary Marcus and Yann LeCun, along with many pundits and analysts, have been repeatedly saying: LLMs achieve their results through memorization, not generalization, and especially not out-of-distribution generalization.
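For anyone who wants the scaling argument spelled out: here's a minimal sketch (mine, not from the paper) using Tower of Hanoi, one of the puzzle families in the Apple paper. The optimal solution for n disks takes 2^n - 1 moves, so any faithful reasoning trace has to grow exponentially with puzzle size. The tokens-per-move figure below is a made-up number for illustration, not anything measured.

```python
# Tower of Hanoi: the minimal solution is 2^n - 1 moves for n disks,
# so the length of a full reasoning trace must grow exponentially
# with puzzle size. A model that genuinely executes the algorithm
# just needs a proportionally larger token budget; a pattern-memorizer
# fails once n leaves its training distribution, budget or no budget.

def hanoi_moves(n: int, src: str = "A", dst: str = "C", aux: str = "B") -> list[tuple[str, str]]:
    """Return the optimal move sequence for n disks (length 2^n - 1)."""
    if n == 0:
        return []
    return (
        hanoi_moves(n - 1, src, aux, dst)   # move n-1 disks out of the way
        + [(src, dst)]                      # move the largest disk
        + hanoi_moves(n - 1, aux, dst, src) # move n-1 disks back on top
    )

# Rough trace-length estimate: assume ~10 tokens to verbalize each move
# (an assumption for illustration, not a measured number).
TOKENS_PER_MOVE = 10

for n in range(1, 13):
    moves = len(hanoi_moves(n))
    print(f"{n:2d} disks: {moves:5d} moves, ~{moves * TOKENS_PER_MOVE:6d} trace tokens")
```

Run it and the point is obvious: any fixed token budget only covers the first handful of disk counts, after which a model that can't actually execute the algorithm has nowhere to hide.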

[–] paraphrand@lemmy.world 12 points 15 hours ago* (last edited 15 hours ago) (2 children)

promptfondlers

Holy shit, I love it.

[–] blakestacey@awful.systems 15 points 19 hours ago (1 children)
[–] scruiser@awful.systems 5 points 17 hours ago

Just one more training run bro. Just gotta make the model bigger, then it can do bigger puzzles, obviously!

[–] Architeuthis@awful.systems 7 points 18 hours ago* (last edited 18 hours ago)

Hey now, there's plenty of generalization going on with LLM networks, it's just that we've taken to calling it hallucinations these days.