this post was submitted on 19 Jul 2025

Futurology

Capitalism is a long succession of booms and busts stretching back hundreds of years. We're now at the peak of another boom; that means a crash is inevitable. It's just a question of when. But there are other questions to ask too.

If many of the current AI players are destined to crash and burn, what does this mean for the type of AI we will end up with in the 2030s?

Is AGI destined to be created by an as-yet-unknown post-crash company?

Will open-source AI become the bedrock of global AI during the crash & post-crash period?

Crashes mean recessions, which means cost-cutting. Is this when AI will make a big impact on employment?

AI Bubble Warning: Sløk Raises Concerns Over Market Valuations

[–] hendrik@palaver.p3x.de 2 points 1 day ago* (last edited 1 day ago) (2 children)

Or maybe AGI turns out to be harder than some people thought. That might simultaneously be the prospect and the reason the bubble bursts. That hypothetical future looks much like today, minus some burnt money, plus a slightly more "intelligent" version of ChatGPT that can handle some tasks and fails at others. It would keep affecting jobs like call center agents, artists and web designers, but we'd still need a lot of human labor.

[–] Lugh@futurology.today 3 points 1 day ago (1 children)

Or maybe AGI turns out to be harder than some people thought.

Yes. It seems very unlikely to arise from current LLMs. AGI hypers keep expecting signs of independent reasoning to emerge, and it keeps not happening.

[–] hendrik@palaver.p3x.de 1 points 22 hours ago

I'd be surprised if current-day LLMs reach AGI. It's more of a welcome side effect that they give factual answers more often than not. They don't have a proper state of mind, they can't learn from interacting with the world while running, and they substitute a genuine thought process and reasoning with a weird variant that all has to happen within the context window. I believe once it comes to a household robot learning to operate the toaster and the microwave, that won't scale any more. It'd be complicated to do that learning out-of-band in a datacenter, or to fetch the required movements and information from a database. I guess we can cheat a bit to achieve similar things, but I'd question whether that's really AGI and suitable for any arbitrary task in the world. So I'd expect several major breakthroughs before we can speak of AGI.