this post was submitted on 08 Jun 2025
TechTakes
As the BioWare nerd I am, it makes my heart glad to see the Towers of Hanoi doing their part in this fight. And it seems like the published paper undersells how significant this problem is for the promptfondlers' preferred narratives. Given how simple it is to scale up the problem complexity in these scenarios, it seems likely that there isn't a viable scaling-based fix here: no matter how big you make the context window and how many steps the system can process, it's going to get outscaled by simply increasing some Ns in the puzzle itself.
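To make the scaling point concrete, here's a quick Python sketch (my own illustration, not anything from the paper): the optimal solution for n disks takes exactly 2^n − 1 moves, so the transcript a model has to produce blows up exponentially while the puzzle statement itself barely grows.

```python
# The standard recursive Tower of Hanoi solution. For n disks the
# optimal solution takes exactly 2**n - 1 moves, so the full move
# transcript grows exponentially in n -- any fixed context window
# gets outscaled by bumping n up a few disks.

def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Append the optimal move sequence for n disks to `moves`."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)  # clear the top n-1 disks
        moves.append((source, target))              # move the largest disk
        hanoi(n - 1, spare, target, source, moves)  # restack the n-1 disks
    return moves

for n in (5, 10, 15, 20, 25):
    assert len(hanoi(n)) == 2**n - 1
    print(f"{n} disks -> {2**n - 1:>10,} moves")
```

Going from 20 disks to 25 adds a handful of tokens to the problem statement but roughly 32 million moves to the answer.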
Diz and others with a better understanding of what's actually under the hood have frequently pointed out how bad transformer models are at recursion, and this seems like a pretty straightforward way to demonstrate it, and one I'd expect to be pretty consistent.