this post was submitted on 13 Mar 2025
722 points (98.9% liked)

Fuck AI

2333 readers
393 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago
[–] Wilco@lemm.ee 75 points 1 month ago (6 children)

Seriously, TRY and get an AI chat to give an answer without making stuff up. It is impossible. You can tell it "you made that data up, do not do that" ... and it will apologize and say you were right, then make up more dumb shit.

[–] Denvil@lemmy.one 19 points 1 month ago (1 children)

At one point I googled the minimum required depth for a cable running under a building per NEC code. It told me it was 0 inches. I laughed and called it stupid, wtf do you mean 0 inches?? Upon further research, 0 inches is the correct answer. I felt real stupid after that -_-

[–] couch1potato@lemmy.dbzer0.com 5 points 1 month ago (2 children)

Seems like a depth of 0 inches means you can just lay it on the floor?

[–] yucandu@lemmy.world 12 points 1 month ago (1 children)

No, it means 50% of the cable must be submerged or buried. Little speed bumps all around.

[–] Denvil@lemmy.one 5 points 1 month ago

As mentioned with the other guy, 0 inches is the requirement to the top of the cable or raceway used. So at minimum, you're allowed to be perfectly flush with the ground. Obviously you can and likely would go a little lower, although I don't have any experience with trenching myself.

[–] Comtief@lemm.ee 9 points 1 month ago

Yeah, LLMs are great if you treat them like a tool to create drafts or give you ideas, rather than like an encyclopedia.

[–] Zetta@mander.xyz 6 points 1 month ago* (last edited 1 month ago) (1 children)

I'll get hate for this but in most tasks people use them for they are pretty dang accurate. I'm talking about frontier models fyi

[–] shalafi@lemmy.world 6 points 1 month ago (1 children)

Google Gemini gives me solid results, but I stick to strictly factual questions, nothing ambiguous. Got a couple of responses I thought were wrong, turns out I was wrong.

[–] Gradually_Adjusting@lemmy.world 63 points 1 month ago (2 children)

Capitalism breeds innovation! Sometimes innovating means summoning... mindless lie demons... Who drink all our water. 🙃

[–] postmateDumbass@lemmy.world 17 points 1 month ago

A thousand wrong answers are more innovative than a single correct one.

[–] Aceticon@lemmy.dbzer0.com 8 points 1 month ago

The core of the scam is making people believe that "novel" is the same as "better".

[–] floquant@lemmy.dbzer0.com 40 points 1 month ago (2 children)

Hilarious that Gemini is so bad. Not like Google had a good starting position on internet search

[–] frezik@midwest.social 21 points 1 month ago (2 children)

The only thing Gemini is good for is bringing up sources that don't appear in the regular Google search results. Which only leads to another question: why are those links not in the regular Google search results?

[–] Cort@lemmy.world 7 points 1 month ago (1 children)

My only guess is that they're trying to see if de-enshittifying results for AI can make it profitable

[–] Yoga@lemmy.ca 5 points 1 month ago (1 children)

I was talking about this with a webdev buddy the other day, wondering if webmasters might start optimizing for AI indexing rather than SEO.

[–] pyre@lemmy.world 6 points 1 month ago* (last edited 1 month ago) (3 children)

Infinite money, all the data on the internet, and nothing to show for it. I wrote about my experience with Gemini assistant for people who enjoy suffering.

[–] SARGE@startrek.website 36 points 1 month ago (1 children)

It's making ten billion calculations per second and they're all wrong!

[–] Zron@lemmy.world 5 points 1 month ago

That’s one of my skills as a certified genius. I’m wicked fast at math.

37/2.4 boom 16.38.

Is it right? Maybe, maybe not. But I did it fast.

[–] shyguyblue@lemmy.world 32 points 1 month ago (4 children)

I was trying to see if I could sync my entire Calibre ebook library to my Kobo, so I googled it. The dumbass AI result told me to hit the "sync library" button, which doesn't friggin exist...

[–] spankmonkey@lemmy.world 9 points 1 month ago (1 children)

This is the most common response from AI on search pages when I'm trying to find some kind of setting.

[–] shyguyblue@lemmy.world 8 points 1 month ago

Yeah, even Google's own operating system.

"To disable Network Notification sounds, do a bunch of shit that doesn't exist anywhere in the settings!"

Orc from Warcraft 1: "Job's done!"

[–] Monument@lemmy.sdf.org 6 points 1 month ago (1 children)

That’s the most infuriating thing.

I’m trying to learn how to do new things, well, basically all the time.
Right now I’m stalled out on a sorta important personal project to teach myself about containers/micro-services/certs in a homelab environment. And what I’m discovering is that I don’t know enough to know I don’t know enough - it used to be that I’d take on an ambitious project, mess up, figure out how to overcome that, then learn by looking at what did work, and do better in the future.
But every technical project lately has gotten to the point where I'm just trying to get something, anything, to work or make sense. Every convincing-enough AI-generated page sets me back several days as I troubleshoot its convincing-enough steps, only to realize they reference YAML settings from apps that aren't even part of the service, or direct me to install Python, Node, or some other helper app directly on my machine that would normally run in a container (which defeats the purpose of trying to containerize things - some stuff I want to use relies on non-compatible versions/configurations). There's a very clear disconnect between what I'm seeing and what I'm understanding, and the utter lack of authoritative information/proliferation of useless info has crippled my ability to identify and resolve that disconnect. It's honestly soul crushing.

[–] shalafi@lemmy.world 4 points 1 month ago

Keep going! It was worse before the internet, slightly better once it started gaining content. When you're ignorant as a stump on a given tech, starting from 0 is hell.

When I began learning SQL I didn't know the search terms I wanted and my questions were too simple to get results. My first script took me 8 hours, for 8 very short lines. A year later I stumbled on that script at work and laughed, all stuff I could write from memory, easily.

Sounds like you need to back up and parse your ambitions into smaller chunks. That's too much to digest at once. You know how to eat an elephant, right?

[–] prototype_g2@lemmy.ml 26 points 1 month ago (1 children)

How does this surprise anyone?

LLMs are just pattern recognition machines. You give them a sequence of words and they tell you what is the most statistically likely word to follow based solely on probability, no logic or reasoning.
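
To illustrate the "most statistically likely next word" point, here's a toy bigram sketch in Python. This is nothing like a frontier model (those use neural networks over huge corpora), but the core idea of prediction-by-frequency, with no logic or reasoning step anywhere, is the same:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count which word follows which in a toy corpus."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, word):
    # Return the most frequent follower - however wrong it is in context.
    return counts[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat the cat ate the fish")
print(most_likely_next(model, "the"))  # "cat" - it followed "the" most often
```

The model will happily emit "cat" after "the" forever, because frequency is all it has. Scaling that up doesn't add a fact-checking step.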

[–] Lifter@discuss.tchncs.de 6 points 1 month ago

It's amazing that they get it right 40% of the time then.

[–] taiyang@lemmy.world 25 points 1 month ago (1 children)

Yes, having tested this myself it is absolutely correct. Hell, even when it finds something, it's usually a secondary or tertiary source that's nearly unusable-- or even one of those "we did our own research and vaccines cause autism" type sources. It's awful and idiots seem to think otherwise.

[–] Angelusz@lemmy.world 8 points 1 month ago (1 children)

You shouldn't use them to keep up with the news. They make that option available because it's wanted, but they shouldn't.

It should only be used to research older data from its original dataset, perhaps adding to it a bit with newer knowledge if you're a specialist in the field.

When you ask the right questions in the right way, you'll get the right answers, or at least mostly - and you should always check the sources after. But it's a specialist's tool at this time. And most people are not specialists.

So this whole "Fuck AI" movement is actually pretty damn stupid. It's good to point out its flaws, try and make people aware and help guide it better into the future.

But it's actually useful, and not going away. You're just using it wrong, and as the tech progresses, ways to use it wrong will decrease. You can't stop progress, humanity will always come with new things, evolution is designed that way.

[–] taiyang@lemmy.world 5 points 1 month ago (2 children)

Well, no, because what I'm referring to isn't even news, it's research. I'm an adjunct professor and trying to get old articles doesn't even work, even when they're readily available publicly. The linked article here is referencing citations and it doesn't get more citation-y than that. It doesn't change that when you ask differently, either, because LLMs aren't good at that even if tech bros want it to be.

Now, the information itself could be valid, and in the basics it usually is. I was at least able to use it to get some basic ideas on a subject before ultimately having to browse abstracts for what I need. Still, you need the source if you're doing anything serious, and the best I've gotten from AI is a list of authors prevalent in the field, which at least is useful for my own database searches.

[–] dojan@lemmy.world 19 points 1 month ago (7 children)

I’m confused. These are large language models, not search engines?

[–] Tartas1995@discuss.tchncs.de 27 points 1 month ago (1 children)

But they are used like search engines... A lot... That is a huge issue.

[–] FauxLiving@lemmy.world 6 points 1 month ago (1 children)

If people were using Photoshop to create spreadsheets you don't say Photoshop is terrible spreadsheet software, you say the people are dumb for using the tool for something that it isn't designed for.

People are using LLMs as search engines and then pointing out that they're bad search engines. This is mass user error.

[–] TheBeesKnees@lemmy.sdf.org 6 points 1 month ago

Correction: companies are implementing it into their search engines. Users are just providing feedback.

Ironically, Google's original non-LLM summary was pretty great. That's gone now.

[–] erytau@programming.dev 12 points 1 month ago

They do have search functionality. For Perplexity it's even the main focus. Yeah, it's hard to stop them from confidently making things up.

[–] homesweethomeMrL@lemmy.world 4 points 1 month ago (1 children)

Google and to some extent Micro$oft (and Amazon) have all sunk hundreds of billions of dollars into this bullshit technidiocy because they want AI to go out and suck up all the data on the Internet and then come back to (google or wherever) and present it as if it's "common knowledge".

Thereby rendering all authoritative (read; human, expensive) sources unnecessary.

Search and making human workers redundant has always been the goal of AI.

AI does not understand what any words mean. AI does not understand what the word "word" means. It was never going to work. It's been an insanity money pit from day one. This simple fact is only now beginning to leak out because they can't hide it anymore.

[–] danc4498@lemmy.world 18 points 1 month ago (4 children)

Is this an ad for Perplexity? I’ve never heard of it, and now I’m googling it. So effective ad if so.

[–] ChaoticNeutralCzech@feddit.org 7 points 1 month ago (1 children)

Would be weird for an ad to bash on the paid tier

[–] danc4498@lemmy.world 8 points 1 month ago (1 children)

Yeah, it’s one of those “no bad press” kind of things. It’s bashing on AI, but Perplexity actually looks pretty good by comparison.

[–] ChaoticNeutralCzech@feddit.org 5 points 1 month ago* (last edited 1 month ago) (1 children)

I'm saying the Perplexity paid tier is about 2x more likely to be confidently wrong than free Perplexity

[–] selokichtli@lemmy.ml 11 points 1 month ago (3 children)

Perplexity is not looking bad, IMHO.

[–] shawn1122@lemm.ee 11 points 1 month ago

Perplexity is by far the best for searching but still copiously hallucinates.

[–] PeteWheeler@lemmy.world 11 points 1 month ago (1 children)

AI as a search engine is terrible.

Because if you treat it as such, it will just look at the first result, which is usually wrong or has incomplete info.

If you give the AI a source document, then it is amazing as a search engine. But if the source doc is the entire internet.... its fucking bad.

Shit quality in, shit quality out. And we/corporations have filled the internet with shit.

[–] Flocklesscrow@lemm.ee 10 points 1 month ago

Copilot is such garbage. Microsoft swirling the drain on business capabilities that they should be dominating is very on brand.

[–] clonedhuman@lemmy.world 9 points 1 month ago (3 children)

Now guess how much power it took for each one of those wrong answers.

The upper limit for AI right now has nothing to do with the coding or with the companies programming it. The upper limit is dictated by the amount of power it takes to generate even simple answers (and it doesn't take any less power to generate wrong answers).

Training a large language model like GPT-3, for example, is estimated to use just under 1,300 megawatt hours (MWh) of electricity; about as much power as consumed annually by 130 US homes. To put that in context, streaming an hour of Netflix requires around 0.8 kWh (0.0008 MWh) of electricity. That means you’d have to watch 1,625,000 hours to consume the same amount of power it takes to train GPT-3.

https://www.theverge.com/24066646/ai-electricity-energy-watts-generative-consumption
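
The quoted figures are at least internally consistent; a quick back-of-envelope check (the per-home number implies roughly 10 MWh per US home per year, which is in the right ballpark):

```python
# Sanity-check the figures quoted from The Verge.
GPT3_TRAINING_MWH = 1300      # estimated GPT-3 training energy
NETFLIX_HOUR_MWH = 0.0008     # ~0.8 kWh per streamed hour
US_HOME_ANNUAL_MWH = 10       # rough annual usage of one US home (assumed)

hours = GPT3_TRAINING_MWH / NETFLIX_HOUR_MWH
homes = GPT3_TRAINING_MWH / US_HOME_ANNUAL_MWH
print(hours)  # 1625000.0 streamed hours
print(homes)  # 130.0 homes
```

And that's training alone; every query afterward draws more power on top of it, right or wrong.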

If the AI wars between powerful billionaire factions in the United States continue, get ready for rolling blackouts.

[–] Showroom7561@lemmy.ca 6 points 1 month ago

Musk's gork is as stupid as he is! And he claims it's waaaaaayyyyy better than other AI. 🤡🤡🤡

[–] argon 4 points 1 month ago

Identifying the source of an article is very different from the common use case for search engines.

Finding 1:1 quotes of web pages is something conventional search engines are very good at. But usually you aren't quoting pages 1:1.
