this post was submitted on 10 Jun 2025
-14 points (43.0% liked)

Technology

71355 readers
4082 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
 

I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion, and why discussions often focus on negativity rather than on control, safety, or advancements.

(page 2) 21 comments
[–] Eat_Your_Paisley@lemm.ee 3 points 4 days ago

It's not particularly accurate, and then there are the privacy concerns.

[–] technocrit@lemmy.dbzer0.com 2 points 3 days ago* (last edited 3 days ago)

"AI" is a pseudo-scientific grift.

Perhaps more importantly, the underlying technologies (like any technology) are already co-opted by the state, capitalism, imperialism, etc. for the purposes of violence, surveillance, control, etc.

Sure, it's cool for a chatbot to summarize stackexchange but it's much less cool to track and murder people while committing genocide. In either case there is no "intelligence" apart from the humans involved. "AI" is primarily a tool for terrible people to do terrible things while putting the responsibility on some ethereal, unaccountable "intelligence" (aka a computer).

[–] ShittyBeatlesFCPres@lemmy.world 2 points 4 days ago (2 children)

My skepticism is because it's kind of trash for general use. I see great promise in specialized A.I. — stuff like Deepfold, or astronomy, where the telescope data is coming in hot and it would take years for humans to go through it all.

But I don't think it should be in everything. Google shouldn't be sticking LLM summaries at the top of search results: it hallucinates, so I need to check the veracity anyway. In medicine, it can help double-check, but it can't be the doctor. It's just not there yet, and might never get there. Progress has kind of stalled.

So, I don’t “hate” any technology. I hate when people misapply it. To me, it’s (at best) beta software and should not be in production anywhere important. If you want to use it for summarizing Scooby Doo episodes, fine. But it shouldn’t be part of anything we rely on yet.

load more comments (2 replies)
[–] IsaamoonKHGDT_6143@lemmy.zip 2 points 4 days ago

Since several people have already answered your questions, I'll just clarify a few points.

Not all countries consider AI training on copyrighted material to be theft. For example, Japan has allowed AI to be trained on copyrighted material since 2019, which is surprising, since that country is known for its strict copyright laws.

Also, saying that AI can or will harm society sells. I don't deny the consequences of this technology, but that framing only works if AI doesn't get better — otherwise it could be counterproductive.

[–] Boomkop3@reddthat.com 2 points 4 days ago (1 children)

It's easy to deny that it's built on stolen content, and difficult to prove. AI companies know this — and they've been caught stealing shitty drawings from children and buying user data that should've been private.

[–] Dojan@pawb.social 2 points 4 days ago

It’s honestly ridiculous too. Imagine saying that your whole business model is shooting people, and if you’re not allowed to shoot people then it’ll crash. So when accused of killing people, you go “nu uh” and hide the weapons you did it with, and the legal system is okay with that.

It’s all so stupid.

[–] Engywuck@lemm.ee 1 points 4 days ago* (last edited 4 days ago)

Karma farming, like everything else on any social network, centralized or decentralized. I'm not exactly enthusiastic about AI, but I can tell it has its use cases (with caution). AI itself is not the problem; most likely, the corps behind it are, since their practices are not always transparent.
