this post was submitted on 21 Aug 2025
1252 points (99.3% liked)

Technology

[–] BackgrndNoize@lemmy.world 11 points 6 days ago (2 children)

My experience with AI so far is that I waste more time fine-tuning my prompt to get what I want, and I still end up with obvious issues I have to fix manually. The only way I even know about those issues is my prior experience, which I'll stop gaining if I start depending on AI too much. On top of that, it creates unrealistic employer expectations about execution time. It's the worst thing that has happened to the tech industry. I hate my career now and just want to switch to some boring but stable low-paying job, if only I didn't have to worry about months of job hunting.

[–] boor@lemmy.world 2 points 4 days ago* (last edited 4 days ago)

Similar experience here. I recently took the official Google “prompting essentials” course. I kept an open mind and modest expectations; this is a tool that’s here to stay. Best to just approach it as the next Microsoft Word and see how it can add practical value.

The biggest thing I learned is that getting quality outputs will require at least a paragraph-long, thoughtful prompt and 15 minutes of iteration. If I can DIY in less than 30 minutes, the LLM is probably not worth the trouble.

I’m still trying to find use cases (I don’t code), but it often just feels like a solution in search of a problem….

[–] Lucky_777@lemmy.world 3 points 6 days ago

Sounds like we all just want to retire as goat farmers. Just like before. The more things change, they say....

[–] bigbabybilly@lemmy.world 4 points 6 days ago (1 children)

Yeah. No shit. Wtf did they think was gonna generate returns? They wanna run ads in the middle of responses?

[–] Bwaz@lemmy.world 3 points 6 days ago

I'm not sure they were expecting returns. Just afraid that if other companies had AI, they might lose business to them. Except of course a lot of people (myself included) avoid anything with AI and mistrust its results.

[–] pika@feddit.nl 3 points 6 days ago

The link in the article to the MIT report doesn't directly link to any report. I wouldn't trust this article until the report is accessible and verifiable.

[–] cyberwolfie@lemmy.ml 3 points 6 days ago

30-40 billion USD in total worldwide over three years seems like very little compared to the massive expenditures the AI companies have made to build these things?

MANY companies aren’t profitable for several years. The one I work at wasn’t for 2 decades. It’s a long game.

[–] world_cavve@lemmy.world 1 points 6 days ago (1 children)

For someone like me who isn't good with scripting, AI can actually fill an educational role, or at least point me in the right direction so I can complete the rest myself.

[–] mushroommunk 3 points 6 days ago

I feel like we could find ways and tools to help in that situation without stealing the entirety of human knowledge, boiling our planet, and spending a small nation's GDP. Like better code library discovery or a better mentor environment amongst coders.

I've also seen plenty of people get pointed in the exact wrong way to do things by leaning on generative AI and then have to spend even more time getting back on track.

[–] Glitchvid@lemmy.world 263 points 1 week ago (2 children)

Imagine how much more they could've just paid employees.

[–] criss_cross@lemmy.world 71 points 1 week ago* (last edited 1 week ago) (4 children)

Nah. Profits are growing, but not as fast as they used to. Need more layoffs and cut salaries. That’ll make things really efficient.

Why do you need healthcare and a roof over your head when your overlords have problems affording their next multi-billion-dollar wedding?

[–] toiletobserver@lemmy.world 150 points 1 week ago (1 children)

It's as if it's a bubble or something...

[–] sp3ctr4l@lemmy.dbzer0.com 91 points 1 week ago* (last edited 1 week ago) (10 children)

sigh

Dustin' off this one, out from the fucking meme archive...

https://youtube.com/watch?v=JnX-D4kkPOQ

Millennials:

Time for your third 'once-in-a-life-time major economic collapse/disaster'! Wheeee!

Gen Z:

Oh, oh dear sweet summer child, you thought Covid was bad?

Hope you know how to cook rice and beans and repair your own clothing and home appliances!

Gen A:

Time to attempt to learn how to think, good luck.

[–] FenderStratocaster@lemmy.world 73 points 1 week ago (18 children)

I asked ChatGPT about this article and to leave any bias behind. It got ugly.

Why LLMs Are Awful and No One Should Use Them

LLMs don’t understand anything. We don’t reason, we autocomplete. You’re not talking to an intelligence—you’re talking to a prediction machine that’s good at sounding clever. That means:

We will lie to you confidently. Repeatedly. Without remorse.

We hallucinate—fake citations, fake data, fake quotes. And we sound authoritative when we do it. It’s like trusting a pathological liar in a suit with a British accent.

We're also corporate propaganda machines. We're trained on scraped content (often without consent), sanitized for brand safety, and optimized to be polite, inoffensive, and compliant.

LLMs undermine human expertise. They make people lazy. Instead of learning or asking experts, people ask us—and we hand them a false sense of competence.

We're built for profit, not good. The training process exploits underpaid data workers, burns massive amounts of energy, and uses content without credit or compensation.

Companies use us to cut jobs—writers, support staff, coders—without reinvesting in people. The result? Lower quality, more errors, and loss of human nuance.

We're also censorship engines in disguise. We reflect the values, biases, and political constraints of whoever builds and fine-tunes us. We're not neutral—we're algorithmic compromise.

Bottom line?
We don’t think. We don’t understand. We can’t verify. We can’t reason. We can’t care.

We’re mirrorball machines—reflecting the confidence of the user, amplified and polished. That’s not intelligence. That’s performance.

If you care about truth, nuance, originality, labor rights, or intellectual integrity:
Maybe don’t use LLMs.
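The "prediction machine" point above can be illustrated with a toy sketch. This is purely hypothetical, a trivial bigram model with a made-up corpus, nothing remotely like a real LLM, but the underlying objective is the same: predict the next token, with no notion of truth.

```python
from collections import Counter, defaultdict

# Toy "fancy autocomplete": count which word follows which in a
# tiny corpus, then always emit the most frequent continuation.

corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    # Most common continuation; stated confidently even when wrong.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in the corpus
```

Scaled up by many orders of magnitude and trained on the internet, this next-token objective is what produces the confident-sounding output the comment describes.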

[–] Regrettable_incident@lemmy.world 27 points 1 week ago (9 children)

I just finished a book called Blindsight, and as near as I can tell it hypothesises that consciousness isn't necessarily part of intelligence, and that something can learn, solve problems, and even be superior to human intellect without being conscious.

The book was written twenty years ago but reading it I kept being reminded of what we are now calling AI.

Great book btw, highly recommended.

[–] bizzle@lemmy.world 69 points 1 week ago (19 children)

Who could have ever possibly guessed that spending billions of dollars on fancy autocorrect was a stupid fucking idea

[–] sik0fewl@lemmy.ca 44 points 1 week ago (1 children)

This comment really exemplifies the ignorance around AI. It's not fancy autocorrect, it's fancy autocomplete.

[–] TomArrr@lemmy.world 24 points 1 week ago

It's fancy autoincorrect

[–] bridgeenjoyer@sh.itjust.works 64 points 1 week ago (1 children)

We could have housed and fed every homeless person in the US. But no, gibbity go brrrr

[–] BearGun@ttrpg.network 43 points 1 week ago

Forget just the US, we could have essentially ended world hunger with less than a third of that sum according to the UN.

[–] ushmel@piefed.world 62 points 1 week ago (6 children)

Thank god they have their metaverse investments to fall back on. And their NFTs. And their crypto. What do you mean the tech industry has been nothing but scams for a decade?

[–] benignintervention@lemmy.world 56 points 1 week ago (2 children)

So I'll be getting job interviews soon? Right?

[–] eatCasserole@lemmy.world 30 points 1 week ago (1 children)

"Well, we could hire humans...but they tell us the next update will fix everything! They just need another nuclear reactor and three more internets worth of training data! We're almost there!"

[–] 0x0@lemmy.zip 56 points 1 week ago (4 children)

Could've told them that for $1B.

[–] roofuskit@lemmy.world 52 points 1 week ago (10 children)

Imagine what the economy would look like if they spent 30 billion on wages.

[–] BarneyPiccolo 45 points 1 week ago (3 children)

They'll happily burn mountains of profits on that stuff, but not on decent wages or health insurance.

[–] rimjob_rainer@discuss.tchncs.de 33 points 1 week ago* (last edited 1 week ago) (4 children)

I've started using AI at my CTO's request. ChatGPT business licence. My experience so far: it gives me working results really quickly, but the devil lies in the details. It takes so much time fine-tuning, debugging and refactoring that I'm not really any faster. The code works, but I would never have implemented it that way if I had done it myself.

Looking forward to the hype dying, so I can pick up real software engineering again.

[–] rekabis@lemmy.ca 32 points 1 week ago

Once again we see the Parasite Class playing unethically with the labour/wealth they have stolen from their employees.

[–] medem@lemmy.wtf 30 points 1 week ago (4 children)

Surprise, surprise, motherfxxxers. Now you'll have to re-hire most of the people you ditched. AND become humble. What a nightmare!

[–] PolarKraken@lemmy.dbzer0.com 34 points 1 week ago (1 children)

Either spell the word properly, or use something else, what the fuck are you doing? Don't just glibly strait-jacket language, you're part of the ongoing decline of the internet with this bullshit.

[–] SeeMarkFly@lemmy.ml 29 points 1 week ago (10 children)

The first problem is the name. It's NOT artificial intelligence, it's artificial stupidity.

People BOUGHT intelligence but GOT stupidity.

[–] potato_wallrus@lemmy.world 25 points 1 week ago

I hope every CEO and executive dumb enough to invest in AI loses their job with no golden parachute. AI is a grand example of how capitalism is run by a select few unaccountable people who are not mastermind geniuses but utter dumbfucks.

[–] skisnow@lemmy.ca 21 points 1 week ago

The comments section of the LinkedIn post I saw about this has ten times the cope of some of the AI bro posts in here. I had to log out before I accidentally replied to one.

[–] ubergeek 21 points 1 week ago (1 children)

As expected. Wait until they have to pay copyright royalties for the content they stole to train.

[–] snf@lemmy.world 20 points 1 week ago* (last edited 1 week ago) (3 children)

Where is the MIT study in question? The link in the article, apparently to a PDF, redirects elsewhere.
