this post was submitted on 18 Jun 2025
21 points (88.9% liked)

Ask Lemmy


Original question by @BalakeKarbon@lemmy.ml

It seems like a lot of professionals think we will reach AGI within my lifetime. Some credible sources say within 5 years, but who knows.

Either way, I suspect it is inevitable. Who knows what may follow: runaway wealth-gap growth, mass job loss, post-work reforms. I'm not sure.

A bunch of questions bounce around in my head, examples may be:

  • Will private property rights be honored in said future?
  • Could Amish communities still exist?
  • Is it something we can prepare for as individuals?

I figured it was important to talk about, seeing as it will likely occur within my lifetime and many of yours.

all 30 comments
[–] JackbyDev@programming.dev 2 points 20 hours ago

Anyone telling you it's five years away? Check their investments.

Marketing tool. LLMs are not magic, no matter what people think.

[–] DirigibleProtein@aussie.zone 16 points 1 day ago (1 children)

25 years away and always will be.

[–] LovableSidekick@lemmy.world 6 points 1 day ago* (last edited 1 day ago)

Just like Fusion power! What if AI and fusion invent each other at the same time?

Maybe that's what the aliens have been trying to tell us ALL ALONG!!!

[–] Arkouda@lemmy.ca 22 points 2 days ago (1 children)

I don't think we will be able to achieve AGI through anything other than an absolute accident. We don't understand our own brains well enough to create one from scratch.

[–] amelia@feddit.org 2 points 1 day ago (1 children)

What makes you think a human brain has anything to do with general intelligence? Have you ever talked to people with a human brain?

[–] Arkouda@lemmy.ca 1 points 1 day ago

I have talked to many people. All have demonstrated having a human brain with varying degrees of intelligence.

[–] fubarx@lemmy.world 4 points 1 day ago

Not without a major breakthrough in knowledge representation.

LLMs aren't it.

[–] Tar_alcaran@sh.itjust.works 14 points 1 day ago (1 children)

It won't happen while I'm alive. Current LLMs are basically parrots with a lot of experience, and will never get close to AGI. We're no closer today than when a computer first passed the Turing test in the 60s.

Experienced parrots that are constantly wrong.

[–] Feyd@programming.dev 8 points 1 day ago

I don't see any reason to believe anything currently being done is a direct path to AGI. Sam Altman and Dario Amodei are straight up liars and the fact so many people lap up their shameless hype marketing is just sad.

[–] nickwitha_k@lemmy.sdf.org 3 points 1 day ago

It may or may not happen. What I do know is that it will never spontaneously arise from an LLM, no matter how much data they dump into it or how many tons of potable water they carelessly waste.

[–] rickdg@lemmy.world 7 points 1 day ago

I'm more worried about jobs getting nuked no matter what AGI turns out to be. It can be vapourware and the capitalist cult will still sacrifice labour on that altar.

[–] Dadifer@lemmy.world 7 points 2 days ago

I think it is inevitable. The main flaw I see from a lay perspective in current methodology is trying to make one neural network that does everything. Our own brains are composed of multiple neural networks with different jobs interacting with each other, so I assume that AGI will require this approach.

For example: we are currently struggling with LLM hallucinations. What could reduce this? A separate fact-checking neural network.
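Something like the sketch below, maybe. Both "models" here are throwaway stand-ins, not real networks or any real API; it only illustrates the generator-plus-verifier wiring:

```python
# Toy generator-plus-verifier loop. Both "models" are hypothetical
# placeholders; the point is only the architecture: one network
# drafts an answer, a separate one vets it before anything is
# returned to the user.

def generate(prompt: str) -> str:
    """Stand-in for a generative model producing a draft answer."""
    return f"Draft answer to: {prompt}"

def fact_check(draft: str) -> bool:
    """Stand-in for a second, independently trained verifier network."""
    return draft.startswith("Draft answer")  # trivially accepts drafts here

def answer(prompt: str, max_retries: int = 3) -> str:
    for _ in range(max_retries):
        draft = generate(prompt)
        if fact_check(draft):  # only release output the verifier accepts
            return draft
    return "Not confident enough to answer."

print(answer("When did humans first land on the moon?"))
```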

Please keep in mind that my opinion is almost worthless, but you asked.

[–] leftzero@lemmynsfw.com 2 points 1 day ago

We were on track for it, but LLMs derailed that.

Now we'll have to wait for the bubble to burst, which will poison the concept of AI (since LLMs are being sold as AI despite being practically the opposite) in the minds of both users and investors for decades.

It'd probably take a couple generations for any funding for AI research to be available after that (not to mention cleaning up all the LLM slop spillage from our knowledge repositories)... but by that time we'll almost certainly be extinct due to global warming.

The LLM peddlers murdered the future for short term profits, and doomed us all in the process.

[–] throwawayacc0430@sh.itjust.works 6 points 2 days ago* (last edited 2 days ago)

Is a lab grown genetically modified human-brain hooked to a computer technically considered "Artificial Intelligence"?

[–] truxnell@aussie.zone 1 points 1 day ago

As others have said, AGI won't come from LLMs. "AGI" is their current buzzword to hype stocks. If they declare they've "reached" AGI, read the fine print: it will be by some arbitrary measure.

The computer doesn't even understand things, nor ask questions unprompted. I don't think people understand that it doesn't understand, lol. Intelligence seems to be non-computational!

[–] Lembot_0003@lemmy.zip 4 points 2 days ago (2 children)
[–] SGGeorwell@lemmy.world 3 points 1 day ago (1 children)
[–] KittenBiscuits@lemm.ee 1 points 1 day ago

I can't think of this any other way. So many accountants and EAs are going to have to be careful to remember other people aren't suddenly hip to tax lingo.

[–] cm0002@lemmy.world 5 points 2 days ago (1 children)

Artificial General Intelligence, or AI that can match or exceed human-level generalization.

[–] Today@lemmy.world 2 points 1 day ago

Thank you. I was confused about the concern regarding my adjusted gross income.

[–] Norin@lemmy.world 3 points 1 day ago

Why would AGI threaten the existence of the Amish and/or change laws regarding property rights?

[–] Gerudo@lemm.ee 2 points 1 day ago (1 children)

In a single person's lifetime, we went from not flying to landing on the moon. We absolutely can produce AGI in most of our lifetimes. I predict within 15-20 years, we will have a functioning AGI. It may also need to coincide with actually figuring out quantum computing just for sheer computational needs.

This all hinges on whether investment in AI continues at its current pace, though we're already seeing cracks there.

[–] leftzero@lemmynsfw.com 1 points 1 day ago

You're not taking into account the fact that LLMs are an obvious dead end.

Once that bubble bursts, it'll take decades before anyone invests in AI research again, and before anything attached to the term "AI" stops being seen as a scam (LLMs are obviously not AI or anything close, but they're being sold as such, and that's what the term will be associated with). Not to mention we'll need decades to clean up all the LLM slop spillage before proper research of any kind can proceed.

What you said was valid before the well got poisoned.

Now it's extremely unlikely we'll survive long enough to get back on track.

LLM peddlers murdered the future, in the name of short term profits.

[–] qantravon@startrek.website 2 points 1 day ago

I agree with most of the other comments here. Is actual AGI something to be worried about? I'm not sure. I don't know if it's even possible on our current technology path.

Based on what I know, it's almost certainly not going to come from the current crop of LLMs and related research. Despite many claims, they don't actually think or reason. They're just really complicated statistical models. And while they can do some interesting and impressive things, I don't think there is any path of progression that will make them jump beyond what they currently are to actual intelligence.
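To make "statistical model" concrete: generation really is just sampling the next token from a learned probability distribution, over and over. A toy illustration with a made-up probability table (the numbers bear no resemblance to any real model's weights):

```python
import random

# Toy next-token model: a fixed table of P(next word | previous word).
# Real LLMs learn these distributions over huge vocabularies and long
# contexts, but the mechanism, sample-append-repeat, is the same.
MODEL = {
    "the": [("cat", 0.5), ("dog", 0.5)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("ran", 0.6), ("sat", 0.4)],
    "sat": [("down", 1.0)],
    "ran": [("away", 1.0)],
}

def sample_next(word: str) -> str:
    tokens, weights = zip(*MODEL[word])
    return random.choices(tokens, weights=weights)[0]

def generate(start: str, length: int = 4) -> str:
    out = [start]
    for _ in range(length):
        if out[-1] not in MODEL:  # no continuation known
            break
        out.append(sample_next(out[-1]))
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat down"
```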

Could we develop something in my lifetime (the next 50-ish years or so for me)? Maybe. I think the chances are slim without a major shift, and it would take a public effort akin to the Manhattan Project or the Internet to achieve, but it's possible. In the next 5 years? Definitely not, some random, massive, lucky break notwithstanding.

As others have said here, even without AGI, current capitalist practices are already using the limited capabilities of LLMs to upend the labor market and put lots of people out of a job. Even when the LLMs can't really replace the people effectively. But that's not a problem with AI, it's a problem with capitalism that happens with any kind of advancement. They'll take literally any excuse to extract extra value.

In summary, I wouldn't worry about AGI. There are so many other things that are problems now, and are already existential threats, that worrying about this big old "maybe in 50 years" isn't really worth your time and energy.

[–] LovableSidekick@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

I have no doubt software will achieve general intelligence, but I think the point where it does will be hard to define. Software can already outdo humans at lots of specific reasoning tasks where the problems are well defined. But how do you measure the generality of problems, so you can say last week our AI wasn't general enough to call it AGI, but now it is?

Not happening IMO. Though it's important to note that the general public and business sentiment already treat LLMs as some kind of legitimate intelligence. So I think a pretty ugly acceptance of, and hard dependence on, these technologies, in the form of altering our public infrastructure and destroying the planet, will lead to some hellscape of a future for sure... all the stuff you mentioned and more, all without even reaching AGI as it is currently understood.

Who knows if AGI is possible. Maybe it wouldn't cause the future you described in your post, but would instead help us avoid this nonsense road we're on now.