[–] Kyrgizion@lemmy.world 15 points 5 days ago (3 children)

It's not happening anytime soon. It can get like 90% of the way there, but that final 10% is the real bitch.

[–] WhatAmLemmy@lemmy.world 44 points 5 days ago* (last edited 5 days ago) (4 children)

The AI we know is missing the I. It does not understand anything. All it does is find patterns in 1's and 0's. It has no concept of anything but the 1's and 0's in its input data. It has no concept of correlation vs. causation, which is why it hallucinates (presents erroneous, illogical patterns) constantly.

Turns out finding patterns in 1's and 0's can do some really cool shit, but it's not intelligence.
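
To make "finding patterns" concrete, here's a toy sketch (the corpus and everything in this code are made up for illustration; real LLMs use neural networks over tokens, not word counts, but the predict-from-observed-patterns idea is the same): a bigram model that "writes" by picking each next word purely from how often it followed the previous one.

```python
# Toy bigram model: predicts the next word purely from co-occurrence
# counts. The corpus is invented for illustration.
import random
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug the dog ate the bone").split()

# Count which word follows which. This is the entire "model".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, length=8):
    out = [word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        # Sample in proportion to how often each word followed before.
        words, weights = zip(*candidates.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat ate the fish the dog sat on"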

[–] Gullible@sh.itjust.works 12 points 5 days ago (2 children)

This is why I hate calling it AI.

[–] idunnololz@lemmy.world 7 points 5 days ago

You can call it an LLM.

[–] magic_lobster_party@fedia.io 5 points 5 days ago

It’s artificial in the sense that it’s not real. It’s “not real” intelligence masquerading as “real” intelligence.

[–] Monstrosity@lemm.ee 4 points 5 days ago (2 children)

This is not necessarily true. While it's using pattern recognition on a surface level, we're not entirely sure how AI comes up with its output.

But beyond that, a lot of talk has centered around the threshold where AI begins training other AI and can improve through iterations. Once that happens, people believe AI will not only improve extremely rapidly, but we will understand even less of what is happening when AI black boxes train other AI black boxes.

[–] Coldcell@sh.itjust.works 1 points 5 days ago (3 children)

I can't quite wrap my head around this. These systems were coded, written by humans to call functions, assign weights, and parse data. How do we not know what they're doing?

[–] MangoCats@feddit.it 3 points 5 days ago

It's a bit of "emergent properties" - so many things are happening under the hood that even the people building these systems don't understand exactly how it's doing what it's doing, or why one type of network performs better on a particular class of problems than another.

The equations of the Lorenz attractor are simple and well studied, but its output is less than predictable, and even those who study it are at a loss to explain "where it's going to go next" with any precision.
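
If you want to see that for yourself, here's a minimal pure-Python sketch (classic chaotic parameters sigma=10, rho=28, beta=8/3; the step size and iteration counts are arbitrary choices): two trajectories that start one part in a billion apart become completely decorrelated within a few dozen time units.

```python
# The Lorenz system: three short, simple ODEs whose output is chaotic.

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def step(state, dt=0.01):
    """Advance one step with fourth-order Runge-Kutta."""
    def nudge(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = lorenz(state)
    k2 = lorenz(nudge(state, k1, dt / 2))
    k3 = lorenz(nudge(state, k2, dt / 2))
    k4 = lorenz(nudge(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)  # differs by one part in a billion
for i in range(4001):
    if i % 1000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {i * 0.01:4.0f}   separation = {gap:.2e}")
    a, b = step(a), step(b)
```

The separation grows exponentially until it's as large as the attractor itself, at which point the two runs are effectively unrelated - even though every line of the "model" is right there to read.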

[–] Kyrgizion@lemmy.world 3 points 5 days ago (1 children)

Same way anesthesiology works. We don't know. We know how to sedate people, but we have no idea why it works. AI is much the same. That doesn't mean it's sentient yet, but to call it merely a text predictor is also selling it short. It's a black box under the hood.

[–] Coldcell@sh.itjust.works 0 points 5 days ago (1 children)

Writing code to process data is absolutely not the same way anesthesiology works 😂 Comparing state-specific, logic-bound systems to the messy biological processes of a nervous system is what got us this misattribution of 'AI' in the first place. Currently it's just glorified auto-correct working off statistical data about human language, and I'm still not sure how a written program can have a voodoo spooky black box that does things we don't understand as a core part of it.

[–] irmoz@lemmy.world 2 points 5 days ago* (last edited 5 days ago)

The uncertainty comes from reverse-engineering how a specific output relates to the prompt input. The model uses extremely fuzzy logic to compute the answer to "What is the closest planet to the Sun?" We can't meaningfully trace which nodes in the neural network were triggered, or in what order, so we can't precisely say how the answer was computed.
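
As a hypothetical miniature of that problem (random made-up weights, nothing resembling any real model): you can print every single activation, and the numbers still explain nothing.

```python
# A tiny 2-layer network with random stand-in weights. Every intermediate
# value is visible, yet none of them is human-interpretable.
import random

random.seed(0)
w1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]  # 3 inputs -> 4 hidden
w2 = [random.uniform(-1, 1) for _ in range(4)]                      # 4 hidden -> 1 output

x = [0.2, -0.7, 0.5]  # some input
hidden = [max(0.0, sum(xi * w1[i][j] for i, xi in enumerate(x)))    # ReLU
          for j in range(4)]
output = sum(h * w for h, w in zip(hidden, w2))

print("hidden activations:", [round(h, 3) for h in hidden])
print("output:", round(output, 3))
```

Multiply that by billions of weights and dozens of layers, and "we can dump the numbers" stops being anything like "we can explain the answer".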

[–] The_Decryptor@aussie.zone 1 points 5 days ago (1 children)

Yeah, there's a mysticism that's sprung up around LLMs, as if they're some magic black box rather than a well-understood construct, to the point where you can buy books on Amazon about how to write one from scratch.

It's not like ChatGPT or Claude appeared from nowhere; the people who built them give talks about them all the time.

[–] Monstrosity@lemm.ee 1 points 5 days ago* (last edited 5 days ago) (1 children)

What a load of horseshit lol

EDIT: Sorry, I'll expand. When AI researchers give talks about how AI works, they say things like, "on a fundamental level, we don't actually know what's going on."

Also, even if there are books available about how to write an AI from scratch(?) somehow, what happens deep within the neural networks is still a "magic black box". They'll crack it open eventually, but not yet.

The idea some people have that AI is simple, stupid, and a passing fad is naive.

[–] The_Decryptor@aussie.zone 1 points 4 days ago

If these AI researchers really have no idea how these things work, then how can they possibly improve the models or techniques?

Like how they now claim that, after upgrades, these LLMs can "reason" about problems. How did they actually go and add that if it's a black box?

[–] MangoCats@feddit.it 0 points 5 days ago

Steam locomotive operators would notice behaviors of their machines that they couldn't entirely explain. They were out there shoveling the coal, filling the boilers, and turning the valves, but some aspects of how the engines performed - why they would run stronger in some circumstances than others - were a mystery to the men on the front lines. Decades later, intense theoretical study could explain most of the observed phenomena by things like local boiling inside the boiler insulating the surface against heat transfer from the firebox, etc. But at the time, when the tech was new, it was just a mystery.

Most of the "mysteries" of AI are similarly due to the fact that the operators are "vibe coding" - they go through the motions and they see what comes out. They're focused on their objectives, the input-output transform, and most of them aren't too caught up in the how and why of what it is doing.

People will study the how and why, but like any new tech, their understanding is going to lag behind the actions of the doers who are out there breaking new ground.

[–] MangoCats@feddit.it 2 points 5 days ago

Distill intelligence - what is it, really? Predicting what comes next based on... patterns. Patterns you learn in life, from experience, from books, from genetic memories, but that's all your intelligence is too: pattern recognition / prediction.

As massive as current AI systems are, consider that you have ~86 billion neurons in your head: devices that evolved over the span of billions of years, ultimately enabling you to survive in a competitive world with trillions of other living creatures, eating without being eaten at least long enough to reproduce, back and back and back for millions of generations.

Current AI is a bunch of highly simplified computers with up to hundreds of thousands of cores. Just as planes fly faster than birds, AI can do some tricks better than human brains, but mostly it can't.

[–] Pornacount128@lemmynsfw.com 1 points 5 days ago

Humans are just neurons; we don't "understand" either, until so many stack on top of each other that we have a sort of consciousness. Then it seems like we CAN understand, but do we? Or are we just a bunch of meat computers? Also, LLMs handle language, or correlations of words; don't humans just do that too (with maybe body language added)? We're all just communicating. If LLMs can communicate, isn't that enough conceptually to do anything? If LLMs can program and talk to other LLMs, what can't they do?

[–] UnderpantsWeevil@lemmy.world 4 points 5 days ago

> It can get like 90% of the way there

I'm still waiting for the first 10%

[–] JeremyHuntQW12@lemmy.world 1 points 4 days ago

So logarithmic then.