this post was submitted on 19 Mar 2025
571 points (98.3% liked)

Technology

[–] melpomenesclevage@lemmy.dbzer0.com 9 points 1 hour ago* (last edited 1 hour ago)

I have been shouting this for years. Turing and Minsky were pretty up front about this when they dropped this line of research back around 1952; even Lovelace predicted this would be bullshit before the first computer had even been built.

The fact that nothing got optimized, and it still didn't collapse, after DeepSeek? That kind of gave the whole game away. There's something else going on here. This isn't about the technology, because there is no meaningful technology here.

I have been called a killjoy Luddite by reddit-brained morons almost every time.

[–] iAvicenna@lemmy.world 2 points 1 hour ago

The funny thing is, with so much money you could probably do lots of great stuff with existing AI as it is. Instead they put all the money into compute so they can overfit their LLMs to look human.

[–] brucethemoose@lemmy.world 43 points 4 hours ago* (last edited 4 hours ago)

It's ironic how conservative the spending actually is.

Awesome ML papers and ideas come out every week. Low power training/inference optimizations, fundamental changes in the math like bitnet, new attention mechanisms, cool tools to make models more controllable and steerable and grounded. This is all getting funded, right?

No.

Universities and such are seeding and putting out all this research, but the big model trainers holding the purse strings/GPU clusters are not using it. They just keep releasing very similar, mostly bog-standard transformer models over and over again, bar a tiny expense for a little experiment here and there. In other words, it's gone full corporate: tiny, guaranteed incremental improvements without changing much, and no sharing with each other. It's hilariously inefficient. And it relies on lies and jawboning from people like Sam Altman.

DeepSeek is what happens when a company is smart but resource-constrained: an order of magnitude more efficient, and even their architecture was very conservative.
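For readers unfamiliar with the "bitnet" reference above: BitNet b1.58 replaces full-precision weights with ternary values in {-1, 0, +1}, so most multiplications in a matmul reduce to additions. A minimal numpy sketch of that quantization step; illustrative only, and the function name `bitnet_quantize` is ours, not from the paper:

```python
import numpy as np

def bitnet_quantize(w: np.ndarray, eps: float = 1e-8):
    """Ternary (1.58-bit) quantization in the style of BitNet b1.58."""
    scale = np.abs(w).mean() + eps             # absmean scaling factor
    w_q = np.clip(np.round(w / scale), -1, 1)  # round to {-1, 0, +1}
    return w_q, scale                          # dequantize as w_q * scale

# Toy usage: the quantized matrix contains only -1, 0, and +1.
w = np.random.randn(4, 4)
w_q, scale = bitnet_quantize(w)
print(w_q)
```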

[–] fossilesque@lemmy.dbzer0.com 176 points 7 hours ago (1 children)
[–] tetris11@lemmy.ml 53 points 6 hours ago* (last edited 6 hours ago) (4 children)

I like my project manager: they find me work, ask how I'm doing, and talk straight.

It's when the CEO/CTO/CFO speaks that my eyes glaze over, my mouth sags, and I bounce my neck at prompted intervals as my brain retreats into itself, frantically tossing words and phrases into the meaning grinder and cranking the wheel, only for nothing to come out of it time and time again.

[–] killeronthecorner@lemmy.world 14 points 4 hours ago* (last edited 4 hours ago)

COs are corporate politicians, media-trained to say only things that are completely unrevealing and devoid of substance.

This is by design, so that sensitive information is centrally controlled, leaks are difficult, and sudden changes in direction cause as little whiplash to ICs as possible.

I have the same reaction as you, but the system is working as intended. Better to just shut it out as you described and use the time to think about that issue you're having on a personal project or what toy to buy for your cat's birthday.

[–] spooky2092@lemmy.blahaj.zone 5 points 4 hours ago

The number of times my CTO says we're going to do THING, only to have to be told that this isn't how things work...

[–] MonkderVierte@lemmy.ml 10 points 6 hours ago* (last edited 5 hours ago)

Right, that sweet spot between too few stimuli, so your brain just wants to sleep or run away, and just enough stimuli that you can't simply zone out (or sleep).

I just turn off my camera and turn on Forza Motorsport or something like that.

[–] ABetterTomorrow@lemm.ee 9 points 6 hours ago

Current big tech is going to keep pushing limits, have social media influencers/YouTubers do the marketing, and let consumers pick up the R&D bill. Emotionally I want to say "stop innovating," but really: cut your speed by 75%. We are going to witness an era of optimization and efficiency. Most users just need a Pi 5 16 GB, an Intel NUC, or a base-model MacBook Air; those are easy 7-10 year computers. No need to rush and get the latest and greatest. I'm talking about everything in computing in general.

Case in point, gaming: more people are waking up and realizing they don't need every new GPU, studios are burnt out, IPs are dying because there's no lingering core base left to keep the franchises afloat, and consumers can't keep opening their wallets. Hence studios like Square Enix starting to support all platforms instead of pulling the late-stage-capitalism move of launching their own launcher with a store. It's over.

[–] Not_mikey@lemmy.dbzer0.com 56 points 9 hours ago (5 children)

The actual survey result:

Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed. 

So they're not saying the entire industry is a dead end, or even that the newest phase is. They're just saying they don't think this current technology will produce AGI when scaled. I think most people agree, including the investors pouring billions into this. They aren't betting this will turn into AGI; they're betting that there is some application for the current AI. Are some of those applications dead ends? Most definitely. Are some of them revolutionary? Maybe.

This would be like asking a researcher in the '90s whether, if we scaled up the bandwidth and computing power of the average internet user, we would see a vastly connected media-sharing network; they'd probably say no. It took more than a decade of software, cultural, and societal development to discover the applications for the internet.

[–] 10001110101@lemm.ee 3 points 4 hours ago

I think most people agree, including the investors pouring billions into this.

The same investors that poured (and are still pouring) billions into crypto, invested in sub-prime loans, and valued pets.com at $300M? I don't see any way the companies will be able to recoup the cost of their investment in "AI" datacenters (e.g., the $500B Stargate project or Microsoft's $80B; probably upwards of a trillion dollars invested globally in these datacenters).

[–] pennomi@lemmy.world 3 points 4 hours ago

Right, simply scaling won’t lead to AGI, there will need to be some algorithmic changes. But nobody in the world knows what those are yet. Is it a simple framework on top of LLMs like the “atom of thought” paper? Or are transformers themselves a dead end? Or is multimodality the secret to AGI? I don’t think anyone really knows.

[–] Flocklesscrow@lemm.ee 2 points 4 hours ago

The bigger loss is the ENORMOUS amount of energy required to train these models. Training an AI can use up more than half the entire output of the average nuclear plant.

AI datacenters also generate a ton of CO₂. For example, training an AI produces more CO₂ than a 55-year-old human has produced since birth.

Complete waste.
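A rough sanity check on that last comparison; both figures below are ballpark assumptions (global-average per-capita emissions and published GPT-3-era training estimates), not measurements:

```python
# Back-of-envelope arithmetic, all numbers approximate:
person_tonnes_per_year = 4.7                    # global avg CO2 per person per year
lifetime_tonnes = 55 * person_tonnes_per_year   # ~259 t over 55 years

# Published estimates for one GPT-3-scale training run land around 500 t CO2e,
# though this varies a lot with the grid powering the datacenter.
training_run_tonnes = 500

print(f"55-year human footprint: ~{lifetime_tonnes:.0f} t CO2")
print(f"One large training run:  ~{training_run_tonnes} t CO2e")
```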

[–] cantstopthesignal@sh.itjust.works 16 points 7 hours ago (1 children)

It's becoming clear from the data that each further increment of error correction needs exponentially more data. I suspect that pretty soon we will realize that what's been built is a glorified homework cheater and a better search engine.
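To make that exponential claim concrete: if error falls as a power law in dataset size, error ∝ D^(-α) with α around 0.1 (the rough range reported in neural scaling-law studies; the exact exponent here is an assumption), then every halving of error costs a fixed, large multiple of data:

```python
# Toy scaling-law arithmetic under the assumption error ~ D**(-alpha).
alpha = 0.1

halving_factor = 2 ** (1 / alpha)  # data multiplier per halving of error
print(f"Data needed per error halving: {halving_factor:.0f}x")

# Successive halvings compound geometrically:
for k in range(1, 4):
    print(f"{k} halving(s) of error -> {halving_factor ** k:,.0f}x the data")
```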

[–] Sturgist@lemmy.ca 26 points 7 hours ago

what's been built is a glorified homework cheater and an ~~better~~ unreliable search engine.

[–] stormeuh@lemmy.world 11 points 9 hours ago (2 children)

I agree that it's editorialized compared to the very neutral way the survey puts it. That said, I think you also have to take into account how AI has been marketed by the industry.

They have been claiming AGI is right around the corner pretty much since ChatGPT first came to market. Either it's implied (e.g., "you'll be able to replace workers with this") or they stay vague on the timeline (e.g., OpenAI saying they believe their research will eventually lead to AGI).

With that context I think it's fair to editorialize this as a dead end, because even with billions of dollars being poured in, they won't be able to deliver AGI on the timeline they are promising.

[–] jj4211@lemmy.world 2 points 4 hours ago

Yeah, it does some tricks, some of them even useful, but the investment is not for the demonstrated capability or a realistic extrapolation of it; it is for the sort of product OpenAI is promising: the equivalent of a full-time research assistant for $20k a month. That's way more expensive than an actual research assistant, but it's not stopping them from making the pitch.

AI isn't going to figure out what a customer wants when the customer doesn't know what they want.

[–] Coreidan@lemmy.world 5 points 6 hours ago

Good, let them waste all their money.

[–] Korhaka@sopuli.xyz 12 points 9 hours ago

There are some nice things I have done with AI tools, but I do have to wonder if the amount of money poured into it justifies the result.

[–] lemmydividebyzero@reddthat.com 40 points 11 hours ago (16 children)

Me and my 5,000 closest friends don't like that the website and its 1,300 partners all need my data.

[–] Ledericas@lemm.ee 12 points 9 hours ago

It's because customers don't want it or care about it; it's only the corporations themselves that are obsessed with it.

[–] TommySoda@lemmy.world 52 points 13 hours ago* (last edited 13 hours ago) (1 children)

Technology in most cases progresses on a logarithmic curve when innovation isn't prioritized. We've basically reached the plateau of what LLMs can currently do without a breakthrough. They could absorb all the information on the internet and still not come close to what the companies say they are. These days we're in the "bells and whistles" phase, where they add unnecessary bullshit to make it seem new, like adding five cameras to a phone or touchscreens to cars: things that make a product seem fancy by slapping on buzzwords and features nobody needs, without actually changing anything except the price.
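A toy illustration of that logarithmic-progress claim (the curve is an assumption chosen for illustration, not a measurement): if capability grows as the log of resources, every doubling of resources buys the same fixed gain, which reads as a plateau once resources are already enormous:

```python
import math

# Illustrative only: capability = log2(resources) means each doubling
# of resources adds exactly one "unit" of capability.
for resources in [1, 2, 4, 8, 1024, 2048]:
    print(f"resources {resources:>5}x -> capability {math.log2(resources):4.1f}")
# 1x -> 2x and 1024x -> 2048x buy the same +1.0 gain; the latter
# costs 1024 times as much.
```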

[–] balder1991@lemmy.world 6 points 4 hours ago

I remember listening to a podcast about explaining things according to what we know today (scientifically). The guy explaining is just so knowledgeable about this stuff; he does his research and talks to experts when the subject involves something he isn't an expert in himself.

There was this episode where he got into the topic of how technology only evolves with science (because you need to understand the stuff you're doing, and you need a theory of how it works before you can make new assumptions and test them). He gave the example of the Apple Vision Pro: a machine that, despite being new (in its hardware capabilities, at least), uses an eye-tracking algorithm that was developed decades ago and was already well understood and proven correct in other applications.

So his point in the episode is that real innovation just can’t be rushed by throwing money or more people at a problem. Because real innovation takes real scientists having novel insights and experiments to expand the knowledge we have. Sometimes those insights are completely random, often you need to have a whole career in that field and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).

Even the current wave of LLMs is simply a product of Google's 2017 paper showing that training language models could be parallelized (illustrated in the sketch below), leading to the creation of "larger language models." That was Google doing science. But you can't control when some new breakthrough is discovered, and LLMs are subject to this constraint.

In fact, the only practice we know of that actually accelerates science is collaboration among scientists around the world, publishing reproducible papers so that others can expand on them and have insights you didn't even think of, and so on.
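The Google paper referenced above is "Attention Is All You Need" (2017). What made training parallelizable is that self-attention processes every position of a sequence in one batch of matrix multiplies, instead of stepping through tokens one at a time as an RNN must. A minimal numpy sketch of single-head attention, illustrative rather than the paper's code:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over a whole sequence.

    All positions are handled in a few matrix multiplies, which is what
    lets transformers train in parallel across the sequence.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # (seq, seq) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ V                                 # (seq, d) outputs

# Toy dimensions: 5-token sequence, width 8.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 8)
```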
