[–] blindsight@beehaw.org 25 points 10 months ago* (last edited 10 months ago)

I liked this article, and I think a lot of the commenters here are missing that the general public is treating LLMs as AGI. I have a whole 5-10 minutes I spend on why this is when I present about LLMs.

"The I in LLM stands for Intelligence" is a joke I read (and include in my presentation to hammer the point home). Laymen have no idea what AI or LLMs are, but they expect it to work similarly to human intelligence, since that's the only model they know, and are surprised to learn it doesn't work that way.

Edit: Forgot what I came to the comments to post, before I read everyone else's complaints about this, lol.

A small correction: the Air Canada example wasn't an LLM, it was just an old "dumb" chatbot that was likely sharing outdated policies.

[–] tal 25 points 10 months ago* (last edited 10 months ago) (2 children)

As an Artificial Intelligence proponent, I want to see the field succeed and go on to do great things. That is precisely why the current exaggerated publicity and investment around "AI" concerns me. I use quotation marks there because what is often referred to as AI today is not whatsoever what the term once described. The recent surge of interest in AI owing to Large Language Models (LLMs) like ChatGPT has put this vaguely defined term at the forefront of dialogue on technology. But LLMs are not meaningfully intelligent (we will get into that), yet it has become common parlance to refer to these chatbots as AI.

Pretty sure this has been happening for as long as AI and related fields like machine learning have been around: overstated promises, people consistently presenting research or products or investments in the sexiest terms they can manage. Then a new term comes out (e.g. "Artificial General Intelligence") to differentiate more sophisticated AI, and it gets latched onto and dragged down into the muck too.

I think the fix is to come up with terms attached to concrete technical capabilities, where there's no fuzziness for people to exploit when promoting their not-as-sophisticated-as-they'd-like-them-to-appear things.

[–] GenderNeutralBro@lemmy.sdf.org 17 points 10 months ago (2 children)

AGI is not a new term. It's been in use since the 90s and the concept has been around for much longer.

I agree that we should use more specific terms whenever possible. I call LLMs "LLMs" or "language models". Not that it's inaccurate to call them AI, but it's not useful either. AI is an extraordinarily broad term. Pac-Man had AI. And there's a large portion of the population who thinks it means something much, much more lofty and specific than it ever really has. At this point, the term should probably be abandoned. Any attempt to reclaim it is bound to fail.
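Funnily enough, the entire "AI" in a game like Pac-Man boils down to a few hard-coded rules. Here's a hypothetical sketch (not the actual arcade code) of that kind of chase logic:

```python
# Hypothetical sketch of classic arcade "chase" logic, not the real Pac-Man code.
# The red ghost's whole strategy amounts to: take the legal move that most
# reduces the distance to the player.

def chase_move(ghost, player, legal_moves):
    """Return the move (dx, dy) that minimizes squared distance to the player."""
    def dist_after(move):
        dx, dy = move
        return (ghost[0] + dx - player[0]) ** 2 + (ghost[1] + dy - player[1]) ** 2
    return min(legal_moves, key=dist_after)

# Ghost at (5, 5), player at (2, 5): moving left closes the gap.
print(chase_move((5, 5), (2, 5), [(1, 0), (-1, 0), (0, 1), (0, -1)]))  # -> (-1, 0)
```

That's it. That counts as AI, and always has.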

I see this as yet another example of a technical term being bastardized by mainstream press who do not understand the field. It happens all the time with tech. I remember when "virus" actually meant something; the industry eventually abandoned the term because it was bastardized to the point of uselessness; now we just say "malware" and if we need to refer to viruses specifically...well we just don't for the most part.

This is a linguistic problem more than a technical problem.

[–] tal@kbin.social 5 points 10 months ago (1 children)

AGI is not a new term. It’s been in use since the 90s and the concept has been around for much longer.

It's not new today, but it post-dates "AI" and hit the same problem then.

[–] jansk@beehaw.org 1 points 10 months ago

And before AI we had "Thinking Machines".

Perhaps we should go back to that. OpenAI et al can brand themselves "Think-Tech"

[–] Powderhorn@beehaw.org 5 points 10 months ago

I also go to great lengths to say LLMs vs. AI.

But, I also spent most of my career in the "mainstream press," and reporters can be surprisingly blasé about what technology means if that isn't their beat. I've had to spike a story or two about new police tech that includes zero quotes from anyone outside the PD and their vendor. I've held an order of magnitude more so they could be fixed ahead of publication.

And this was 15-20 years ago, when newsrooms employed people with more than three years of experience. I heavily curate my news diet on an ongoing basis, as outlets can go down the shitter in a matter of weeks with buyouts.

What we get today from many supposedly reliable outlets is not helpful to society.

[–] eveninghere@beehaw.org 3 points 10 months ago (1 children)

What's funny is that we complain about how the term AI is used, but nobody can actually define intelligence.

[–] vexikron@lemmy.zip 4 points 10 months ago* (last edited 10 months ago) (1 children)

https://en.m.wikipedia.org/wiki/Intelligence

Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving.

LLMs are pretty capable of abstraction and understanding.

Though they obviously use logic in the sense that they are constructed from it, they are not really capable of actual logical analysis, beyond emulating it.

They can't really do any of the other attributes of intelligence at all, beyond emulating them somewhere between decently and poorly.

[–] eveninghere@beehaw.org 4 points 10 months ago

The problem with these definitions is that they are verbal. Some could argue ChatGPT is capable of understanding, while others could argue the opposite. I don't even believe it is capable of abstraction.

The Turing test was novel in that we could test the intelligence of AIs without actually defining intelligence. And it's still useful because researchers probably can't agree on a rigorous definition of intelligence.

[–] megopie@beehaw.org 21 points 10 months ago (3 children)

gasp you mean, industry is lying to investors about a new technology to get more investment and creating a false narrative for the public to undermine criticism? Who could have seen this coming!

[–] scrubbles@poptalk.scrubbles.tech 7 points 10 months ago

Self driving cars are only 2 years away!

[–] eveninghere@beehaw.org 3 points 10 months ago* (last edited 9 months ago) (1 children)

I'm a scientist entering the industry and couldn't agree more. Too many lies. There are a handful of companies that do deliver, but, generally speaking, many businesses seem to bet on the naivety of investors. Some even do it unintentionally, because the bar is simply too low.

But here's the thing. I've noticed in my life that there are naive people who follow weak narratives no matter what. These people won't question your arguments. And if you run a business, these people are apparently your most solid support.

Edit: it's perhaps also true that this majority of investors forces companies to lie, by investing almost solely in LLMs

[–] megopie@beehaw.org 1 points 10 months ago (1 children)

I think there is just too much faith from the current crop of investors in tech start ups. Many got burned in the past by not investing in things after not getting good answers to “ok but how does this make money” or “can this actually do what you’re claiming”.

And larger more established companies like Google and Amazon are happy to feed the hype for a lot of these trends, particularly when all the new start ups are going to be buying stuff from them, so even if the start ups fail because they can’t make money or don’t do what they claim, the big companies still made money selling them server space, computing time, or huge amounts of data. I think investors who hold stakes in the big companies also lean in to the hype for this reason.

Everyone has a pretty good incentive to lean in to the hype, so they do.

[–] eveninghere@beehaw.org 2 points 9 months ago

Until like 3 months ago, I felt the ChatGPT revolution was going on. Every 10-year plan my colleagues in AI research had was actually completed in a few weeks(!) by a completely different research team on the opposite side of the planet. The hype was so high that every expert had the same plans, resulting in surreal competitions.

After that, the LLM businesses entered the B2B space, with all the potential customers asking ChatGPT to search for information in their piles of documents. That was the next big thing.

We haven't heard back from the pile of garbage so far...

[–] Powderhorn@beehaw.org 2 points 10 months ago

hustles to find pearls to clutch

[–] FaceDeer@kbin.social 17 points 10 months ago (4 children)

I use quotation marks there because what is often referred to as AI today is not whatsoever what the term once described.

The field of AI has been around for decades and covers a wide range of technologies, many of them much "simpler" than the current crop of generative AI. What is often referred to as AI today is absolutely what the term once described, and still does describe.

What people seem to be conflating is the general term "AI" and the more specific "AGI", or Artificial General Intelligence. AGI is the stuff you see on Star Trek. Nobody is claiming that current LLMs are AGI, though they may be a significant step along the way to that.

I may be sounding nitpicky here, but this is the fundamental issue that the article is complaining about. People are not well educated about what AI actually is and what it's good at. It's good at a huge amount of stuff, it's really revolutionary, but it's not good at everything. It's not the fault of AI when people fail to grasp that, any more than it's the fault of the car when someone gets into it and is then annoyed it won't take them to the Moon.

[–] t3rmit3@beehaw.org 8 points 10 months ago

People are not well educated about what AI actually is and what it’s good at.

And half the reason they're not educated about it is that AI companies are actively and intentionally misinforming them about it. AI companies sell people these products using words like "thinking", "assessing", "reasoning", and "learning", none of which accurately describe current AI, though they would describe AGI.

[–] scrubbles@poptalk.scrubbles.tech 7 points 10 months ago (1 children)

The problem is that the average person and politician don't know the difference, and are running around like Skynet is about to kick off any second.

[–] NigelFrobisher@aussie.zone 5 points 10 months ago

The LLM CEOs and evangelists are going on like this too, because they need hype to make number go up.

[–] derbis@beehaw.org 2 points 10 months ago

Oop, wish I'd read this comment before mine. 100% right

[–] exocrinous@lemm.ee 2 points 10 months ago* (last edited 10 months ago) (1 children)

AGI is the stuff you see on Star Trek.

Clarification: AGI describes Data, Moriarty, and Peanut Hamper, but it doesn't describe the Enterprise's computer, which has speech recognition but is less intelligent than an LLM.

[–] FaceDeer@kbin.social 2 points 10 months ago (1 children)

I didn't say that everything in Star Trek was AGI, just that you can find examples there.

[–] exocrinous@lemm.ee 2 points 10 months ago

I shall amend my comment to say clarification instead of correction.

[–] derbis@beehaw.org 14 points 10 months ago (2 children)

"...AI" concerns me. I use quotation marks there because what is often referred to as AI today is not whatsoever what the term once described.

Lost me right there. Not only was and is this AI, but the term gets narrower over time, not broader. If you want to go by "what the term once described," you have to include computer vision, text to speech, optical character recognition, behavior trees for video game enemies, etc etc etc.
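To underline how low that bar has always been, here's a toy behavior-tree sketch of the kind long sold as enemy "AI" (hypothetical names and thresholds, not from any real game):

```python
# Toy behavior tree, illustrative only: a "selector" node tries its children in
# priority order and stops at the first one that succeeds.

def selector(*children):
    def run(state):
        return any(child(state) for child in children)  # short-circuits on success
    return run

def flee_if_hurt(state):
    if state["hp"] < 30:
        state["action"] = "flee"
        return True
    return False

def attack_if_close(state):
    if state["dist"] < 5:
        state["action"] = "attack"
        return True
    return False

def patrol(state):
    state["action"] = "patrol"  # fallback: always succeeds
    return True

enemy_ai = selector(flee_if_hurt, attack_if_close, patrol)

state = {"hp": 80, "dist": 3}
enemy_ai(state)
print(state["action"])  # -> "attack"
```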

When I see people complain about calling LLMs "AI," I think the only definition that would satisfy them is "things computers can do that we aren't used to yet."

[–] exocrinous@lemm.ee 2 points 10 months ago

Yeah, bruh is using the Halo definition of AI. Probably played too many video games instead of actually paying attention to the history of computing.

[–] redxef@feddit.de 11 points 10 months ago

I've seen so many bots on Lemmy summarising the contents of websites, and because of this I've blocked all of them. They are not reliable, and I still caught myself reading those summaries. I don't even want to know how many of the summaries that appear in post bodies are just generated by an LLM.

[–] Kwakigra@beehaw.org 8 points 10 months ago

I think it's less of an issue of LLMs being drunk and more that ostensibly sober people put them behind the wheel totally aware of how drunk they are while telling everyone that they're stone cold sober.

[–] furrowsofar@beehaw.org 4 points 10 months ago (1 children)

This is the reason I balk at personifying these things with human terms. It sounds cool, but it is both inaccurate and misleading, especially in the hands of the media and the general public.

[–] eveninghere@beehaw.org 1 points 10 months ago* (last edited 10 months ago) (1 children)

The dilemma is, ChatGPT can write better reports than most graduate students in my country, because what the problematic vast majority of students do is remember, not analyze.

Specifically in this context, students are not trained to analyze what they are asked (the input query, in ChatGPT terms). When I ask a unique question in an assignment, they can't even form a response. They just write generic text that doesn't try to answer my question.

They seem to copy and paste what's in their brains. And when it comes to copying and pasting, i.e. mimicking what people do, ChatGPT is the champion in some sense. Hell, OpenAI even tuned it to take a balanced stance, and that's also something students can't do.

Finally, 90% of the population actually perform worse than these graduate students.

[–] furrowsofar@beehaw.org 2 points 9 months ago

It is sad, but most people seem to go to school for certification, not learning. I used to grade when I was in grad school... the lazy, sloppy work was nuts. And working at a company... the terrible writing some people produce, even with advanced degrees.

[–] exocrinous@lemm.ee 2 points 10 months ago

I for one think LLMs are more intelligent than an ant. The writer of this piece is using the movie definition of AI instead of the real-world definition of AI.