this post was submitted on 28 Oct 2024
1535 points (98.7% liked)

Technology
[–] NABDad@lemmy.world 106 points 1 month ago (2 children)

I had a professor in college who said that when an AI problem is solved, it is no longer AI.

Computers do all sorts of things today that 30 years ago were the stuff of science fiction. Back then, many of those things were considered to be in the realm of AI. Now they're just tools we use without thinking about them.

I'm sitting here using gesture typing on my phone to enter these words. The computer is analyzing my motions and predicting which words I want to type, based on the statistical likelihood of what comes next among the possible words my gesture could represent. This would once have been the realm of AI, but now it's just the keyboard app on my phone.
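The statistical idea behind that prediction can be sketched in a few lines. This is a toy bigram model, not how any real keyboard engine works (those also score candidate words against the swipe path itself); the corpus, function names, and candidate set here are all made up for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus to learn word-to-word transition frequencies from.
corpus = "the cat sat on the mat the cat ran on the road".split()

# Count how often each word follows each other word (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word, candidates=None):
    """Return the most frequent follower of `word`, optionally restricted
    to a candidate set (e.g. the words a swipe gesture could plausibly mean)."""
    counts = bigrams[word]
    if candidates is not None:
        counts = Counter({w: c for w, c in counts.items() if w in candidates})
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # the word that most often follows "the" in the corpus
```

A real keyboard combines something like this language-model score with a geometric score for how well each candidate word matches the traced gesture.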

[–] designatedhacker@lemm.ee 15 points 1 month ago (1 children)

LLMs without some sort of symbolic reasoning layer aren't actually able to hold a model of their context and the relationships within it. They predict the next token, but they fall apart when you change the numbers in a problem or add a negation to the prompt.

Awesome for protein research, summarization, speech recognition, speech generation, deep fakes, spam creation, RAG document summary, brainstorming, content classification, etc. I don't even think we've found all the patterns they'd be great at predicting.

There are tons of great uses, but just throwing more data, memory, compute, and power at transformers is likely to hit a wall without new models. All the AGI hype is a bit overblown. That's not from me; that's Noam Chomsky: https://youtu.be/axuGfh4UR9Q?t=9271.

[–] NABDad@lemmy.world 12 points 1 month ago (1 children)

I've often thought LLMs could replace the entire C-suite and upper and middle management.

Funny how no companies push that as a possibility.

[–] Zink@programming.dev 7 points 1 month ago

I almost expect that we’ll see some company reveal it has been letting an AI control the top level decision making for the business itself, including if and when to reveal the AI.

But the funny thing will be that all the executives and board members will still have jobs and huge stock awards. They will all pat each other on the back for getting paid more money to do less work, having been bold enough to take the risk of letting the computer do half their jobs for them.

[–] marzhall@lemmy.world 4 points 1 month ago

There's a name for the phenomenon: the AI effect.