this post was submitted on 21 May 2025
300 points (96.6% liked)

Technology


Absolutely needed: getting high efficiency for this beast ... as it gets better, we'll become too dependent on it.

"all of this growth is for a new technology that’s still finding its footing, and in many applications—education, medical advice, legal analysis—might be the wrong tool for the job,,,"

[–] WanderingThoughts@europe.pub 32 points 1 month ago (6 children)

Historically, AI has always gotten much better. Usually only after the field collapsed in an AI winter and several years went by searching for a new technique, at which point the hype cycle repeated. Tech bros want it to get better without the winter stage, though.

[–] Jesus_666@lemmy.world 28 points 1 month ago (2 children)

AI usually got better once people realized it wasn't going to do everything it was hyped up for, but was useful for a certain set of tasks.

Then it turned from world-changing hotness to super boring tech your washing machine uses to fine-tune its washing program.

[–] WanderingThoughts@europe.pub 32 points 1 month ago (1 children)

Like the cliché goes: when it works, we don't call it AI anymore.

[–] technocrit@lemmy.dbzer0.com 5 points 1 month ago (1 children)

The smart move is never calling it "AI" in the first place.

[–] Enkers@sh.itjust.works 10 points 1 month ago* (last edited 1 month ago)

Unless you're in comp sci, where AI is a field, not a marketing term. And in that case, everyone already knows that this isn't "it".

[–] frezik@midwest.social 6 points 1 month ago* (last edited 1 month ago) (2 children)

The major thing that killed 1960s/70s AI was the Vietnam War. MIT's AI Lab (the precursor of today's CSAIL) was funded heavily by DARPA. When public opinion turned against Vietnam and Congress started shutting off funding, DARPA stopped putting money into the lab. Congress didn't create an alternative funding path, so the whole thing dried up.

That lab basically created computing as we know it today. It bore fruit, and many companies owe their success to it. There were plenty of promising lines of research still going on.

[–] IsaamoonKHGDT_6143@lemmy.zip 4 points 1 month ago

I wish there were an alternate-history forum or novel that explored this scenario.

[–] technocrit@lemmy.dbzer0.com -4 points 1 month ago (2 children)

Pretty sure "AI" didn't exist in the 60s/70s either.

[–] frezik@midwest.social 9 points 1 month ago* (last edited 1 month ago)

Yes, it did. Most of the basic research came from there. The first section of the book "Hackers" by Steven Levy is a good intro.

[–] Feathercrown@lemmy.world 4 points 1 month ago

The perceptron was created in 1957, and a physical model of it was built a year later.

The spice must flow
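
For reference, the perceptron's learning rule fits in a few lines. A minimal Python sketch, with toy data made up for illustration (not Rosenblatt's original Mark I implementation):

```python
# Minimal perceptron (Rosenblatt, 1957): a linear classifier trained
# by nudging the weights toward misclassified examples.
def train_perceptron(samples, labels, epochs=10, lr=0.1):
    w = [0.0] * len(samples[0])  # weights
    b = 0.0                      # bias
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # labels are +1 or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # misclassified: move the boundary toward x
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Toy example: learn logical AND, which is linearly separable.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [-1, -1, -1, 1]
w, b = train_perceptron(X, Y)
```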

[–] IsaamoonKHGDT_6143@lemmy.zip 4 points 1 month ago (1 children)

Each winter marks the end of one generation of AI and the beginning of the next. We're now seeing more progress, and as long as it doesn't hit a technical limit, that progress seems unlikely to be interrupted.

[–] msage@programming.dev 6 points 1 month ago (2 children)
[–] FreedomAdvocate@lemmy.net.au 11 points 1 month ago* (last edited 1 month ago) (3 children)

In what area of AI? Image generation is improving in leaps and bounds. Video generation even more so. Image reconstruction for games (DLSS, XeSS, FSR) is seeing generational improvements almost every year. AI chatbots seem to be getting much smarter every month.

What’s one main application of AI that hasn’t improved?

[–] msage@programming.dev 4 points 1 month ago (2 children)

Which chatbots are getting smarter?

I know AI has potential, but specifically LLMs (which are what most people mean when they talk about AI) seem to have hit their technological limits.

[–] FreedomAdvocate@lemmy.net.au 2 points 1 month ago (1 children)

Copilot, ChatGPT, pretty much all of them.

[–] msage@programming.dev 1 points 1 month ago (1 children)

Smarter how? Synthetic benchmarks?

Because I've heard the opposite from users and bloggers.

[–] FreedomAdvocate@lemmy.net.au 1 points 1 month ago (1 children)

So you want me to provide some evidence that it's getting smarter, but you can't provide any that it's getting worse other than anecdotal evidence?

What evidence would you accept?

[–] msage@programming.dev 1 points 1 month ago (1 children)

Any proof that we have moved past the current architecture.

[–] FreedomAdvocate@lemmy.net.au 0 points 1 month ago (1 children)

What does "architecture" mean in this scenario?

[–] msage@programming.dev 1 points 1 month ago (1 children)

Any significant shift in the model, or a complete restructuring of the approach.

As it is, it's not going to grow any further.

[–] FreedomAdvocate@lemmy.net.au 0 points 1 month ago* (last edited 1 month ago) (1 children)

So you’ve got access to all this stuff’s source code and know what has and hasn’t changed with every update?

[–] msage@programming.dev 1 points 1 month ago (1 children)

No, if there was any major breakthrough, it would be advertised everywhere.

[–] FreedomAdvocate@lemmy.net.au 1 points 1 month ago (2 children)

They’re constantly advertising updates to what these chatbots can do.

[–] msage@programming.dev 0 points 1 month ago

Small incremental updates on little tasks mean nothing; the underlying issues are still the same.

It has no intelligence, and as such carries big risks.

[–] msage@programming.dev 0 points 1 month ago (1 children)

No, they are spreading lies about shit that doesn't matter so as not to lose the hype.

If anyone made a significant advance, it would be all over the news worldwide.

[–] FreedomAdvocate@lemmy.net.au 1 points 1 month ago (1 children)

You’re not backing this up with anything. Those of us who use them know they’ve been making big updates regularly.

[–] msage@programming.dev 1 points 1 month ago (1 children)

You're not backing anything up either, just 'my experience'.

[–] FreedomAdvocate@lemmy.net.au 1 points 1 month ago

Do you want me to just link to Microsoft and OpenAI’s pages about their AI chatbots updates?

[–] Jakeroxs@sh.itjust.works 2 points 1 month ago (1 children)

Advanced Reasoning models came out like 4 months ago lol

[–] msage@programming.dev 4 points 1 month ago (2 children)

Advanced reasoning? Having LLM talk to itself?

[–] theterrasque@infosec.pub 2 points 1 month ago

Yes, and it has measurably improved some tasks: a ~20% improvement on programming tasks, as a practical example. It has also improved tool use and agentic tasks, allowing the LLM to plan ahead and adjust its initial approach based on later steps.

Having the LLM talk through the task lets it improve on or fix bad decisions made early, based on realizations that come at later stages. Sort of like when a human thinks through how to do something.
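
In code terms, the idea is just a loop of draft, critique, revise. A minimal sketch, where `llm()` is a stand-in for whatever chat-completion call you use and the prompts are purely illustrative:

```python
# Sketch of "reasoning" as a draft/critique/revise loop. llm() is a
# placeholder for any text-in, text-out model call, not a real API.
def answer_with_reflection(llm, task):
    draft = llm(f"Think step by step, then solve:\n{task}")
    critique = llm(
        f"Task: {task}\nDraft answer:\n{draft}\n"
        "List any mistakes or bad early decisions in the draft."
    )
    return llm(
        f"Task: {task}\nDraft:\n{draft}\nCritique:\n{critique}\n"
        "Write a corrected final answer."
    )
```

Production reasoning models bake this behavior in via training rather than literal prompt loops, but the effect described above is similar.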

[–] Jakeroxs@sh.itjust.works -1 points 1 month ago* (last edited 1 month ago) (1 children)

Lul yes but no, but they are clearly better at many types of tasks.

[–] technocrit@lemmy.dbzer0.com -1 points 1 month ago* (last edited 1 month ago) (2 children)

For example? Citations?

Pretty sure these "tasks" are meaningless metrics made up by pseudo-scientific grifters.

[–] Jakeroxs@sh.itjust.works 3 points 1 month ago

Small bits of code, language-related tasks, basic context understanding. Not metrics I've literally measured, just things I've noticed improving compared to non-reasoning models in my homelab testing. 🤷‍♂️

[–] IsaamoonKHGDT_6143@lemmy.zip 2 points 1 month ago

AlphaFold 3, which can help predict the structures of some proteins. It has limitations, though: it can't be used in all cases, only where it performs reliably.

[–] Almacca@aussie.zone -5 points 1 month ago* (last edited 1 month ago) (1 children)

They've been a boon for medical diagnoses as well, I believe.

Has anyone made AI-powered accounting software yet? I'd love to tell my computer, 'Here's all my financial information in a big heap. Do my taxes.' The numbers and tax laws are all known things. It shouldn't be hard.

[–] MagicShel@lemmy.zip 17 points 1 month ago (1 children)

Any strictly rule-based system, like accounting and taxes, is a job for traditional software, not AI. Particularly when the laws change every year.
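
To make the point concrete, here's what the deterministic version looks like; a sketch with made-up brackets (real ones change yearly, which is exactly why you want them in plain, auditable code):

```python
# Rule-based tax computation: deterministic, auditable, and trivial to
# update when the law changes. Brackets are invented for illustration.
BRACKETS = [  # (upper bound of bracket, marginal rate)
    (10_000, 0.00),
    (40_000, 0.20),
    (float("inf"), 0.40),
]

def tax_owed(income: float) -> float:
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        owed += max(0.0, min(income, upper) - lower) * rate
        lower = upper
        if income <= upper:
            break
    return round(owed, 2)

# tax_owed(50_000) -> 10_000.0  (0% of 10k + 20% of 30k + 40% of 10k)
```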

[–] Almacca@aussie.zone 4 points 1 month ago (1 children)

Once it has the information in a recognisable format. Reading and recognising random receipts, bank statements, payment slips, and whatever and sorting it into a coherent format is what I'm trying to avoid.
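
That ingestion step is the plausible AI (or at least OCR) niche. A rough sketch using pytesseract; the regexes and field names are assumptions and would need tuning per document format:

```python
# Sketch: OCR a receipt image, then pull a couple of fields out of the
# raw text. Assumes the Tesseract binary plus the pytesseract and
# Pillow packages are installed; the patterns are illustrative only.
import re
from PIL import Image
import pytesseract

def extract_receipt(path):
    text = pytesseract.image_to_string(Image.open(path))
    date = re.search(r"\d{2}[/-]\d{2}[/-]\d{2,4}", text)
    total = re.search(r"(?i)total\D*(\d+[.,]\d{2})", text)
    return {
        "raw_text": text,                         # keep for auditing
        "date": date.group(0) if date else None,
        "total": total.group(1) if total else None,
    }
```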

[–] MagicShel@lemmy.zip 2 points 1 month ago (1 children)

I see. So AI for gathering the information to put into the accounting/tax software?

That's a more reasonable ask, but I wouldn't personally trust AI with that. I've done something similar in games where I take a picture of something on screen and ask AI to collect all the information from many similar pictures into a table. It's definitely good enough for gaming, but it makes mistakes often enough I wouldn't sign my name attesting to the truth of anything it produced, you know?

[–] Almacca@aussie.zone 2 points 1 month ago* (last edited 1 month ago) (1 children)

Fair point, but I feel like that's technologically solvable. This is dealing only with text, a lot of which is already digital, just in multiple formats, and it's all easily checkable against the final figures if anyone so desires.

As a random aside, I saw a clip recently where someone had asked an 'AI' model to reproduce a photo with zero changes one hundred times. There were more than zero changes.

[–] MagicShel@lemmy.zip 2 points 1 month ago

Surprisingly, the mistakes ChatGPT made weren't related to picture processing. Every time I've sent a picture, it has flawlessly analyzed the text (even a screenshot of a massive Linux log, or one with multiple windows and arbitrary text placement). The problems were more that the markdown table I created wouldn't be reproduced perfectly with new changes/additions. It's pretty reliable early on, but as the chat or the table gets longer, fidelity can be lost. Not very often, but it does happen.

Just to clarify: I find that as long as you're paying close attention and can catch mistakes or verify the output, AI does make such tasks much less tedious.

[–] Xaphanos@lemmy.world 0 points 1 month ago

NVL72 will be enormously impactful on high-end performance.

[–] frezik@midwest.social 3 points 1 month ago* (last edited 1 month ago) (1 children)

The issue this time around is infrastructure. The current AI Summer depends on massive datacenters with equally massive electricity needs. If companies can't monetize them enough, they'll pull the plug, and none of this will be available to the general public anymore.

This system can go backwards. Yes, the R&D will still be there after the AI Winter cycle hits, but none of the infrastructure.

[–] theterrasque@infosec.pub 3 points 1 month ago

We'll still have models like DeepSeek, and (hopefully) discounted used server hardware.

[–] ZILtoid1991@lemmy.world 2 points 1 month ago

That's part of why they installed Donald Trump as the dictator of the United States. The other is the network states.

[–] technocrit@lemmy.dbzer0.com -2 points 1 month ago (1 children)

Historically "AI" still doesn't exist.

[–] WanderingThoughts@europe.pub 11 points 1 month ago

Technically even 1950s computer chess is classified as AI.
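
For context, the "AI" in 1950s-era chess programs was mostly game-tree search. A toy minimax sketch; `game` here is a hypothetical interface, not any historical program:

```python
# Toy minimax, the core idea behind 1950s game-playing AI. `game` is a
# hypothetical interface with moves(state), apply(state, move),
# is_terminal(state), and score(state) from the maximizer's viewpoint.
def minimax(game, state, depth, maximizing=True):
    if depth == 0 or game.is_terminal(state):
        return game.score(state)
    values = [
        minimax(game, game.apply(state, m), depth - 1, not maximizing)
        for m in game.moves(state)
    ]
    return max(values) if maximizing else min(values)
```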