this post was submitted on 12 Jun 2025
275 points (96.9% liked)

Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago
top 41 comments
[–] jsomae@lemmy.ml 3 points 10 hours ago (1 children)

Admittedly, I also can't solve Towers of Hanoi.

[–] YesButActuallyMaybe@lemmy.ca 1 points 3 hours ago

First you take the Shrek and the cabbage, you leave Shrek and return for the wolf. You leave the cabbage and take the wolf. Now you take Shrek to the cabbage and bring both to the wolf. Now you row into the sunset; your work here is done.

[–] some_guy@lemmy.sdf.org 7 points 17 hours ago (1 children)

The tide is turning. I can't wait to see it all come crashing down.

[–] Ledericas@lemm.ee 3 points 11 hours ago

It will, once all the VC money dries up and the companies are desperately staving off that debt by enshittifying their services.

[–] 4am@lemm.ee 58 points 1 day ago (4 children)

Why did anyone think that a LLM would be able to solve logic or math problems?

They’re literally autocomplete. Like, 100% autocomplete that is based on an enormous statistical model. They don’t think, they don’t reason, they don’t compute. They lay words out in the most likely order.

To be fair it’s pretty amazing they can do that from a user prompt - but it’s not doing whatever it is that our brains do. It’s not a brain. It’s not “intelligent”. LLMs are machine learning algorithms but they are not AI.
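
A caricature of that idea, as a minimal sketch: a bigram table that always emits the most likely next word. (A real LLM is an enormous neural network over tokens, not a lookup table, but the "lay words out in the most likely order" framing looks roughly like this.)

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which: a tiny "statistical model" of the text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word, length=5):
    out = [word]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]   # most likely next word
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))   # "the cat sat on the cat" - fluent-ish, no understanding
```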

It’s a fucking hornswoggle, always has been 🔫🧑‍🚀

[–] WanderingThoughts@europe.pub 4 points 10 hours ago* (last edited 10 hours ago)

They got very good results just by making the model bigger and training it on more data. It started doing things that were not explicitly programmed into it at all, like writing songs and holding conversations, the sort of thing nobody expected an autocomplete to do. The reasoning was that if they kept making it bigger and feeding it even more data, the line would keep going up. The fanboys believed it, investors believed it, and many business leaders believed it. Until they ran out of data and datacenters.

[–] danielquinn@lemmy.ca 27 points 1 day ago (1 children)

Because that's how they're marketed and hyped. "The next version of ChatGPT will be smarter than a Nobel laureate" etc. This article is an indictment of the claims these companies make.

[–] ChapulinColorado@lemmy.world 13 points 1 day ago (1 children)

So fraud. It would be nice to get another FTX verdict at the very least. It could make those shit CEOs think twice before lying to people's faces if it means years in prison.

[–] homesweethomeMrL@lemmy.world 7 points 1 day ago

In this administration? heh.

[–] some_guy@lemmy.sdf.org 3 points 17 hours ago

If you're not yet familiar with Ed Zitron, I think you'd enjoy either his newsletter or his podcast (or both).

[–] ignirtoq@fedia.io 13 points 1 day ago (2 children)

My running theory is that human evolution developed a heuristic in our brains that associates language sophistication with general intelligence, and especially with humanity. The very fact that LLMs are so good at composing sophisticated sentences triggers this heuristic and makes people anthropomorphize them far more than other kinds of AI, so they ascribe more capability to them than evidence justifies.

I actually think this may explain some earlier reporting of weird behavior by AI researchers as well. I seem to recall reports of Google researchers believing they had created sentient AI (a quick search produced this article). The researcher was fooled by his own AI not because he drank the Kool-Aid, but because he fell prey to this neural heuristic that's in all of us.

[–] null_dot@lemmy.dbzer0.com 1 points 13 hours ago (1 children)

I don't think the mechanisms of evolution are necessarily involved.

We're just not used to interacting with this type of pseudo intelligence.

[–] ignirtoq@fedia.io 1 points 3 hours ago

My point is that this kind of pseudo intelligence has never existed on Earth before, so evolution has had free rein to use language sophistication as a proxy for humanity and intelligence without encountering anything that would put selective pressure against this heuristic.

Human language is old. Way older than the written word. Our brains have evolved specialized regions for language processing, so evolution has clearly had time to operate while language has existed.

And LLMs are not the first sophisticated AI that's been around. We've had AI for decades, and really good AI for a while. But people don't anthropomorphize other kinds of AI nearly as much as LLMs. Sure, people ascribe some human-like intelligence to any sophisticated technology, and some people throughout history have claimed this or that technology is alive or sentient. But with LLMs we're seeing a larger share of the population believing that than we have ever seen before.

[–] homesweethomeMrL@lemmy.world 9 points 1 day ago (1 children)

I think you're right about that.

It doesn't help that The Average Person has just shy of absolutely zero understanding of how computers work, despite using them all day, every day.

Put the two together and it's a grifter's dream.

[–] Aceticon@lemmy.dbzer0.com 1 points 6 hours ago* (last edited 6 hours ago)

IMHO, if one's approach to the world is just to take it as it is and go with it, then probabilistic parrots generating the perceived elements of reality will work on that person, because that's what they use to decide what to do next. But if one has an analytical approach to the world, wanting to figure out what's behind the façade in order to understand it and predict what might happen, then one will spot that the "logic" behind the façades the probabilistic parrots create is segmented into little pieces that don't match each other and don't add up to a larger structure of logic. Phrases are logical in the sense that every phrase has an inherent logic in how it is put together, and that logic is fairly general; but the choice of which phrases get used follows a higher-level logic that is far more varied than the logic within phrases. LLMs lose consistency at that level, because the training material goes in many more directions there than it does at the level of how phrases are put together.

[–] Goodmorningsunshine@lemmy.world 67 points 1 day ago (1 children)

And the obscene levels of water waste when we were already facing a future of scarcity. Can we please stop destroying economies, ecologies, and lives for this now?

[–] 6nk06@sh.itjust.works 19 points 1 day ago (1 children)

But how will you be able to auto-complete this "2 sentences long email" to your team at work without killing humanity?

[–] tuhriel@infosec.pub 1 points 6 hours ago

So...in the future it is "this e-mail could have been a meeting"

[–] LodeMike 4 points 19 hours ago

The time for that was two years ago actually

[–] Tartas1995@discuss.tchncs.de 6 points 23 hours ago* (last edited 19 hours ago) (1 children)

Hey, for anyone who doesn't know how to solve the Tower of Hanoi, there is a simple algorithm.

1  |  |
2  |  |
3  |  |

Let's say we want to move the tower to the center rod.

Count the disks in the stack you need to move, e.g. 3.

If the count is even, start by placing the first disk on the spot that you don't want to move the tower to. If it is odd, start by placing the first disk on the spot that you do want to move the tower to.


|  |  |
2  |  |
3  1  |

|  |  |
|  |  |
3  1  2

|  |  |
|  |  1
3  |  2

|  |  |
|  |  1
|  3  2

Now the 2-stack is basically a new Hanoi tower.

That tower is even, so we start by placing the first disk on the spot that we don't want it to land on.


|  |  |
|  |  |
1  3  2

|  |  |
|  2  |
1  3  |

|  1  |
|  2  |
|  3  |

And we've solved the tower. It's that easy.
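
A minimal Python sketch of that parity rule, using the standard iterative formulation (disc 1 cycles through the pegs in one fixed direction chosen by the odd/even count, and every other move is the only legal move that doesn't touch disc 1):

```python
def hanoi_moves(n, source="A", target="C", spare="B"):
    """Optimal move list for an n-disc tower, built iteratively.

    Disc 1's first move goes to the target peg when n is odd and to the
    spare peg when n is even - the same parity rule as above.
    """
    pegs = {source: list(range(n, 0, -1)), target: [], spare: []}
    cycle = [source, target, spare] if n % 2 == 1 else [source, spare, target]
    moves = []
    for step in range(1, 2 ** n):                     # 2**n - 1 moves in total
        if step % 2 == 1:
            # Odd-numbered moves: shift disc 1 one peg along its cycle.
            i = next(j for j, p in enumerate(cycle) if pegs[p] and pegs[p][-1] == 1)
            frm, to = cycle[i], cycle[(i + 1) % 3]
        else:
            # Even-numbered moves: the only legal move not involving disc 1.
            a, b = [p for p in pegs if not (pegs[p] and pegs[p][-1] == 1)]
            if pegs[a] and (not pegs[b] or pegs[a][-1] < pegs[b][-1]):
                frm, to = a, b
            else:
                frm, to = b, a
        pegs[to].append(pegs[frm].pop())
        moves.append((frm, to))
    return moves

print(hanoi_moves(3))   # 7 moves; the first puts disc 1 on the target peg
```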

[–] ZDL@lazysoci.al 1 points 16 hours ago (1 children)

Now do it for a stack of 12.

[–] Tartas1995@discuss.tchncs.de 3 points 12 hours ago (1 children)

It works the same way.

1  |  |
2  |  |
3  |  |
4  |  |
5  |  |
6  |  |
7  |  |
8  |  |
9  |  |
10 |  |
11 |  |
12 |  |

|  |  |
2  |  |
3  |  |
4  |  |
5  |  |
6  |  |
7  |  |
8  |  |
9  |  |
10 |  |
11 |  |
12 |  1

|  |  |
|  |  |
3  |  |
4  |  |
5  |  |
6  |  |
7  |  |
8  |  |
9  |  |
10 |  |
11 |  |
12 2  1

|  |  |
|  |  |
3  |  |
4  |  |
5  |  |
6  |  |
7  |  |
8  |  |
9  |  |
10 |  |
11 1  |
12 2  |

|  |  |
|  |  |
|  |  |
4  |  |
5  |  |
6  |  |
7  |  |
8  |  |
9  |  |
10 |  |
11 1  |
12 2 3

|  |  |
|  |  |
1  |  |
4  |  |
5  |  |
6  |  |
7  |  |
8  |  |
9  |  |
10 |  |
11 |  |
12 2  3

|  |  |
|  |  |
1  |  |
4  |  |
5  |  |
6  |  |
7  |  |
8  |  |
9  |  |
10 |  |
11 |  2
12 |  3

|  |  |
|  |  |
|  |  |
4  |  |
5  |  |
6  |  |
7  |  |
8  |  |
9  |  |
10 |  1
11 |  2
12 |  3

|  |  |
|  |  |
|  |  |
|  |  |
5  |  |
6  |  |
7  |  |
8  |  |
9  |  |
10 |  1
11 |  2
12 4  3

|  |  |
|  |  |
|  |  |
|  |  |
5  |  |
6  |  |
7  |  |
8  |  |
9  |  |
10 |  |
11 1  2
12 4  3


|  |  |
|  |  |
|  |  |
2  |  |
5  |  |
6  |  |
7  |  |
8  |  |
9  |  |
10 |  |
11 1  |
12 4  3

|  |  |
|  |  |
1  |  |
2  |  |
5  |  |
6  |  |
7  |  |
8  |  |
9  |  |
10 |  |
11 |  |
12 4  3

|  |  |
|  |  |
1  |  |
2  |  |
5  |  |
6  |  |
7  |  |
8  |  |
9  |  |
10 |  |
11 3  |
12 4  |

|  |  |
|  |  |
|  |  |
|  |  |
5  |  |
6  |  |
7  |  |
8  |  |
9  |  |
10 2  |
11 3  |
12 4  1

|  |  |
|  |  |
|  |  |
|  |  |
5  |  |
6  |  |
7  |  |
8  |  |
9  1  |
10 2  |
11 3  |
12 4  |

|  |  |
|  |  |
|  |  |
|  |  |
|  |  |
6  |  |
7  |  |
8  |  |
9  1  |
10 2  |
11 3  |
12 4  5

And so on. As you can see, when there was a 3-stack on the right pole and we needed to move the 4, the way to create space for the 5 was to move the 3-stack as if it were just its own 3-tower; that left us with a 4-stack, allowing us to move the 5, and now we need to move the 4-stack onto the 5. As the 4-stack is even, we start by moving the 1 to the left pole, place the 2 on the 5 and then the 1 on the 2, creating a 2-stack; now we can move the 3 to the left pole. Next we solve the 2-stack: as it is even, we move the 1 onto the 4, the 2 onto the 3, and then the 1 onto the 2, creating a 3-stack and allowing us to move the 4 onto the 5. Now we solve the 3-stack onto the 4. It is odd, so we solve it the same way we solved the previous 3-tower.

A bigger tower is basically just solving a tower one size smaller, twice.

So to solve 12, you solve 11, move the 12 to its spot, and then solve 11 again. To solve 11, you solve 10, move the 11, and solve 10 again...
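
That "solve a one-smaller tower twice" description is exactly the textbook recursive solution; a minimal sketch (the peg names are just placeholders):

```python
def solve(n, source, target, spare, moves):
    """Move an n-disc stack from source to target by solving n-1 twice."""
    if n == 0:
        return
    solve(n - 1, source, spare, target, moves)   # move the smaller tower out of the way
    moves.append((n, source, target))            # move the largest disc of this sub-tower
    solve(n - 1, spare, target, source, moves)   # put the smaller tower back on top

moves = []
solve(12, "left", "right", "center", moves)
print(len(moves))   # 4095 moves for 12 discs, i.e. 2**12 - 1
```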

[–] ZDL@lazysoci.al 1 points 6 hours ago

I was mostly hoping to have you print out all 4,095 moves as a prank. :D

[–] LordOfLocksley@lemmy.world 6 points 1 day ago (3 children)

Anyone got a version of the article that doesn't require me paying them so they won't track me across the Internet?

[–] teft@lemmy.world 19 points 1 day ago

A research paper by Apple has taken the tech world by storm, all but eviscerating the popular notion that large language models (LLMs, and their newest variant, LRMs, large reasoning models) are able to reason reliably. Some are shocked by it, some are not. The well-known venture capitalist Josh Wolfe went so far as to post on X that “Apple [had] just GaryMarcus’d LLM reasoning ability” – coining a new verb (and a compliment to me), referring to “the act of critically exposing or debunking the overhyped capabilities of artificial intelligence … by highlighting their limitations in reasoning, understanding, or general intelligence”.

Apple did this by showing that leading models such as ChatGPT, Claude and Deepseek may “look smart – but when complexity rises, they collapse”. In short, these models are very good at a kind of pattern recognition, but often fail when they encounter novelty that forces them beyond the limits of their training, despite being, as the paper notes, “explicitly designed for reasoning tasks”.

As discussed later, there is a loose end that the paper doesn’t tie up, but on the whole, its force is undeniable. So much so that LLM advocates are already partly conceding the blow while hinting at, or at least hoping for, happier futures ahead.

In many ways the paper echoes and amplifies an argument that I have been making since 1998: neural networks of various kinds can generalise within a distribution of data they are exposed to, but their generalisations tend to break down beyond that distribution. A simple example of this is that I once trained an older model to solve a very basic mathematical equation using only even-numbered training data. The model was able to generalise a little bit: solve for even numbers it hadn’t seen before, but unable to do so for problems where the answer was an odd number.

More than a quarter of a century later, when a task is close to the training data, these systems work pretty well. But as they stray further away from that data, they often break down, as they did in the Apple paper’s more stringent tests. Such limits arguably remain the single most important serious weakness in LLMs.

The hope, as always, has been that “scaling” the models by making them bigger, would solve these problems. The new Apple paper resoundingly rebuts these hopes. They challenged some of the latest, greatest, most expensive models with classic puzzles, such as the Tower of Hanoi – and found that deep problems lingered. Combined with numerous hugely expensive failures in efforts to build GPT-5 level systems, this is very bad news.

The Tower of Hanoi is a classic game with three pegs and multiple discs, in which you need to move all the discs on the left peg to the right peg, never stacking a larger disc on top of a smaller one. With practice, though, a bright (and patient) seven-year-old can do it.

What Apple found was that leading generative models could barely do seven discs, getting less than 80% accuracy, and pretty much can’t get scenarios with eight discs correct at all. It is truly embarrassing that LLMs cannot reliably solve Hanoi.
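
For scale, a perfect run on an n-disc tower takes 2^n − 1 moves, so seven discs means 127 correct moves in a row and eight means 255 (a quick check, not a figure from the paper):

```python
# Minimum number of moves to solve an n-disc Tower of Hanoi: 2**n - 1.
for n in (7, 8):
    print(n, "discs ->", 2 ** n - 1, "moves")   # 7 -> 127, 8 -> 255
```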

And, as the paper’s co-lead-author Iman Mirzadeh told me via DM, “it’s not just about ‘solving’ the puzzle. We have an experiment where we give the solution algorithm to the model, and [the model still failed] … based on what we observe from their thoughts, their process is not logical and intelligent”.

The new paper also echoes and amplifies several arguments that Arizona State University computer scientist Subbarao Kambhampati has been making about the newly popular LRMs. He has observed that people tend to anthropomorphise these systems, to assume they use something resembling “steps a human might take when solving a challenging problem”. And he has previously shown that in fact they have the same kind of problem that Apple documents.

If you can’t use a billion-dollar AI system to solve a problem that Herb Simon (one of the actual godfathers of AI) solved with classical (but out of fashion) AI techniques in 1957, the chances that models such as Claude or o3 are going to reach artificial general intelligence (AGI) seem truly remote.

So what’s the loose thread that I warn you about? Well, humans aren’t perfect either. On a puzzle like Hanoi, ordinary humans actually have a bunch of (well-known) limits that somewhat parallel what the Apple team discovered. Many (not all) humans screw up on versions of the Tower of Hanoi with eight discs.

But look, that’s why we invented computers, and for that matter calculators: to reliably compute solutions to large, tedious problems. AGI shouldn’t be about perfectly replicating a human, it should be about combining the best of both worlds; human adaptiveness with computational brute force and reliability. We don’t want an AGI that fails to “carry the one” in basic arithmetic just because sometimes humans do.

Whenever people ask me why I actually like AI (contrary to the widespread myth that I am against it), and think that future forms of AI (though not necessarily generative AI systems such as LLMs) may ultimately be of great benefit to humanity, I point to the advances in science and technology we might make if we could combine the causal reasoning abilities of our best scientists with the sheer compute power of modern digital computers.

What the Apple paper shows, most fundamentally, regardless of how you define AGI, is that these LLMs that have generated so much hype are no substitute for good, well-specified conventional algorithms. (They also can’t play chess as well as conventional algorithms, can’t fold proteins like special-purpose neurosymbolic hybrids, can’t run databases as well as conventional databases, etc.)

What this means for business is that you can’t simply drop o3 or Claude into some complex problem and expect them to work reliably. What it means for society is that we can never fully trust generative AI; its outputs are just too hit-or-miss.

One of the most striking findings in the new paper was that an LLM may well work in an easy test set (such as Hanoi with four discs) and seduce you into thinking it has built a proper, generalisable solution when it has not.

To be sure, LLMs will continue to have their uses, especially for coding and brainstorming and writing, with humans in the loop.

But anybody who thinks LLMs are a direct route to the sort of AGI that could fundamentally transform society for the good is kidding themselves.

This essay was adapted from Gary Marcus’s newsletter, Marcus on AI

Gary Marcus is a professor emeritus at New York University, the founder of two AI companies, and the author of six books, including Taming Silicon Valley

[–] Cheradenine@sh.itjust.works 11 points 1 day ago (1 children)

https://archive.vn/YUdhb

Are you on Safari? Firefox based browsers don't seem to have this issue on Android, Windows, Linux (even Arch BTW)

[–] LordOfLocksley@lemmy.world 0 points 1 day ago (2 children)
[–] 4am@lemm.ee 12 points 1 day ago

Cool, that narrows it down to “all of the above”

[–] can@sh.itjust.works 1 points 1 day ago (1 children)

So the default? But which phone?

[–] LordOfLocksley@lemmy.world 2 points 22 hours ago (1 children)
[–] can@sh.itjust.works 3 points 21 hours ago (1 children)

So Samsung Internet? Have you tried installing Firefox Mobile?

[–] LordOfLocksley@lemmy.world 2 points 20 hours ago (1 children)

I have not. I'll give it a shot

[–] can@sh.itjust.works 2 points 20 hours ago

It's worth it for uBlock Origin alone

[–] Mearuu@kbin.melroy.org 1 points 1 day ago (1 children)

You can visit all the websites without tracking if you use https://mullvad.net/en/browser. There are other options, such as extensions for Firefox, but I think the Mullvad browser offers even more protection.

[–] ZDL@lazysoci.al 0 points 1 day ago

So what you're saying is you don't want to actually answer the question asked.

Check.