this post was submitted on 31 Dec 2024
1811 points (98.0% liked)

Fuck AI

1610 readers
293 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 10 months ago
MODERATORS
 
(page 2) 50 comments
sorted by: hot top controversial new old
[–] Zacryon@feddit.org -2 points 6 days ago

Not a good argument. Applying a specific technology in a specific setting does not invalidate its use or power in other application settings. It also doesn't tell how good or bad an entire branch of technology is.

It's like saying "fuck tools", because someone tried to loosen a screw with a hammer.

[–] Nuke_the_whales@lemmy.world -2 points 6 days ago (7 children)

Tbh if I told half the doctors and top scientists in the world to take my burger order, or flip the patty, they'd fall apart and fuck it up. It's apples and oranges

load more comments (7 replies)
[–] Jimmycakes@lemmy.world 0 points 6 days ago (2 children)

This is BBC UK no ai is gonna be able to understand drunk uk mumbles. This shit works prefect in the US

load more comments (2 replies)
[–] kibiz0r@midwest.social 164 points 1 week ago (3 children)

In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate.

But that’s not the AI business model. AI pitchmen are explicit on this score: The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do more, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both. The entire investor case for AI is “companies will buy our products so they can do more with less.” It’s not “business custom­ers will buy our products so their products will cost more to make, but will be of higher quality.”

Cory Doctorow: What Kind of Bubble is AI?

[–] dance_ninja@lemmy.world 42 points 1 week ago (1 children)

AI tools like this should really be viewed as a calculator. Helpful for speeding up analysis, but you still require an expert to sign off.

[–] Frozengyro@lemmy.world 34 points 1 week ago (2 children)

Honestly anything they are used for should be validated by someone with a brain.

[–] droporain@lemmynsfw.com 14 points 1 week ago

A good brain or just any brain?

load more comments (1 replies)
[–] Apytele@sh.itjust.works 9 points 1 week ago (6 children)

Very much so. As a nurse the AI components I like are things that bring my attention to critical results (and combinations of results) faster. So if my tech gets vitals and the blood pressure is low and the heart rate is high and they're running a temperature, I want it to call both me and the rapid response nurse right away and we can all sort out whether it's sepsis or not when we get to the room together. I DON'T want it to be making decisions for me. I just want some extra heads up here and there.

load more comments (6 replies)
load more comments (1 replies)
[–] Rooskie91@discuss.online 63 points 1 week ago (10 children)
[–] NaibofTabr@infosec.pub 25 points 1 week ago (2 children)

I mean... duh? The purpose of an LLM is to map words to meanings... to derive what a human intends from what they say. That's it. That's all.

It's not a logic tool or a fact regurgitator. It's a context interpretation engine.

The real flaw is that people expect that because it can sometimes (more than past attempts) understand what you mean, it is capable of reasoning.

[–] vithigar@lemmy.ca 20 points 1 week ago (3 children)

Not even that. LLMs have no concept of meaning or understanding. What they do in essence is space filling based on previously trained patterns.

Like showing a bunch of shapes to someone, then drawing a few lines and asking them to complete the shape. And all the shapes are lamp posts but you haven't told them that and they have no idea what a lamp post is. They will just produce results like the shapes you've shown them, which generally end up looking like lamp posts.

Except the "shape" in this case is a sentence or poem or self insert erotic fan fiction, none of which an LLM "understands", it just matches the shape of what's been written so far with previous patterns and extrapolates.

load more comments (3 replies)
load more comments (1 replies)
load more comments (9 replies)
[–] finitebanjo@lemmy.world 39 points 1 week ago (1 children)

You know, OpenAI published a paper in 2020 modelling how far they were from human language error rate and it correctly predicted the accuracy of GPT 4. Deepmind also published a study in 2023 with the same metrics and discovered that even with infinite training and power it would still never break 1.69% error rate.

These companies knew that their basic model was failing and that overfitying trashed their models.

Sam Altman and all these other fuckers knew, they've always known, that their LLMs would never function perfectly. They're convincing all the idiots on earth that they're selling an AGI prototype while they already know that it's a dead-end.

[–] JasminIstMuede@lemmy.blahaj.zone 19 points 1 week ago (4 children)

As far as I know, the Deepmind paper was actually a challenge of the OpenAI paper, suggesting that models are undertrained and underperform while using too much compute due to this. They tested a model with 70B params and were able to outperform much larger models while using less compute by introducing more training. I don't think there can be any general conclusion about some hard ceiling for LLM performance drawn from this.

However, this does not change the fact that there are areas (ones that rely on correctness) that simply cannot be replaced by this kind of model, and it is a foolish pursuit.

load more comments (4 replies)
[–] Imgonnatrythis@sh.itjust.works 37 points 1 week ago (1 children)

Does it rat out CEO hunters though?

[–] MiDaBa@lemmy.ml 31 points 1 week ago (3 children)

That's probably it's primary function. That and maximizing profits through charging flex pricing based on who's the biggest sucker.

load more comments (3 replies)
[–] activ8r@sh.itjust.works 36 points 1 week ago (4 children)

If I've said it once, I've said it a thousand times. LLMs are not AI. It is a natural language tool that would allow an AI to communicate with us using natural language...

What it is being used for now is just completely inappropriate. At best this makes a neat (if sometimes inaccurate) home assistant.

To be clear: LLMs are incredibly cool, powerful and useful. But they are not intelligent, which is a pretty fundamental requirement of artificial intelligence.
I think we are pretty close to AI (in a very simple sense), but marketing has just seen the fun part (natural communication with a computer) and gone "oh yeah, that's good enough. People will buy that because it looks cool". Nevermind that it's not even close to what the term "AI" implies to the average person and it's not even technically AI either so...

I don't remember where I was going with this, but capitalism has once again fucked a massive technical breakthrough by marketing it as something that it's not.

Probably preaching to the choir here though...

[–] glitchdx@lemmy.world 11 points 1 week ago

We also have hoverboards. Well, "hoverboards", because that's the branding. They have wheels, and don't hover.

[–] swordgeek@lemmy.ca 7 points 1 week ago (2 children)

Yep, a great summary.

I keep telling people that what they call AI (e.g. LLMs) are fancy autocomplete. Little more.

load more comments (2 replies)
load more comments (2 replies)
[–] Bluefalcon@discuss.tchncs.de 35 points 1 week ago* (last edited 1 week ago) (2 children)

Bitch just takes orders and you want to make movies with it? No AI wants to work hard anymore. Always looking for a handout.

[–] edgemaster72@lemmy.world 12 points 1 week ago (1 children)

This AI just needs to pull itself up by its bootstraps so it can move up from working fast food

load more comments (1 replies)
load more comments (1 replies)
[–] ch00f@lemmy.world 29 points 1 week ago (1 children)

What blows my mind about all this AI shit is that these bots are “programmed” by just telling them what to do. “You are an employee working at McDonald’s” and they take it from there.

Insanity.

[–] BradleyUffner@lemmy.world 24 points 1 week ago

Yeah, all the control systems are in-band, making them impossible to control. Users can just modify them as part of the normal conversation. It's like they didn't learn anything from phone phreaking.

[–] MystikIncarnate@lemmy.ca 23 points 1 week ago (1 children)

Lol. AI can't do "unskilled labor" jobs.

Hyuck. Let's put it in everything!

[–] const_void@lemmy.ml 0 points 6 days ago

“It’s gonna take everyone’s jobs!” though

[–] uberdroog@lemmy.world 19 points 1 week ago (1 children)

The automated response when you pull up to multiple placed gives me the heebie jeebies. It's nonsense no one asked for.

[–] TORFdot0@lemmy.world 1 points 6 days ago

Cheery woman’s voice- “Hi will you be using your mobile app to check in today”

Me- “no thank you”

Voice of chain smoking grizzled dude who is tired of this- “Go ahead and order”

[–] pennomi@lemmy.world 16 points 1 week ago (2 children)

To be fair, humans also regularly mess up this task. I’d be curious to see comparisons of error rates.

[–] FlyingSquid@lemmy.world 36 points 1 week ago (5 children)

McDonald's did not factor in the same thing you are apparently not factoring in- when humans at McDonald's fuck up your order, you can tell them about it.

load more comments (5 replies)
[–] uberdroog@lemmy.world 13 points 1 week ago

Maybe if they hired enough people.

[–] wizblizz@lemmy.world 11 points 1 week ago (1 children)

It doesn't have to be good, just good enough to replace paying a human being a living wage.

[–] edgemaster72@lemmy.world 7 points 1 week ago

Or in this case, replacing humans being paid starvation wages

[–] inv3r5ion@lemmy.dbzer0.com 11 points 1 week ago* (last edited 1 week ago) (7 children)

When chat gpt first was released to the public I thought I’d test it out by asking it questions about something I’m an expert in. The results I got back were a Frankenstein of the worst possible answers from the internet. What I asked wasn’t very technical or obscure, and what I received was useless garbage. Haven’t used it since, I think it’s fraud like NFTs were fraud, only worse because these fraudsters convinced the business class that they have a tech solution to the problem of labor lowering their already obscene profits.

If it got my thing wrong I can only imagine what else it gets wrong. And our elites want to replace us with this? Ok lol good luck with that

load more comments (6 replies)
[–] RizzRustbolt@lemmy.world 10 points 1 week ago

Hey... the ones we use on drones to target terrorists work perfectly.

[–] babybus@sh.itjust.works 7 points 1 week ago (1 children)

As if AI was a single tool and not an umbrella term.

[–] frayedpickles@lemmy.cafe -3 points 6 days ago (2 children)

It was an umbrella term for stuff we didn't have yet..then marketing teams remembered it existed. So now it's either a nonsense term (like a PID is AI now) or it means large language models. Since that is clearly both what mcd's used and what the follow up message is referring to, I don't think you need to gatekeep this message. Take that cape off, hero of AI-term-correctness. Flip the dictionary-signal off and turn in to bed.

load more comments (2 replies)
load more comments
view more: ‹ prev next ›