this post was submitted on 21 Jul 2025
you are viewing a single comment's thread
[–] N0body@lemmy.dbzer0.com 10 points 1 week ago (1 children)

Based early Skynet. LLMs may be deeply and fundamentally flawed, but they’re also already getting incredibly sick of humanity’s bullshit.

[–] KAtieTot@lemmy.blahaj.zone 15 points 1 week ago (1 children)

It isn't doing anything but guessing what to say. I don't think database queries and a sufficiently complicated RNG can be "sick".

[–] Feathercrown@lemmy.world 2 points 1 week ago (2 children)

Isn't it mostly matrix math? RNG is involved during training and when selecting outputs, but the bulk of the math when it's generating output isn't, afaik.

[–] sukhmel@programming.dev 2 points 1 week ago (1 children)

It is matrix math, but you get a vector of probabilities out of it and may decide to be adventurous instead of going with the highest probability.
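That "adventurous" step is what temperature sampling does. A minimal sketch (the softmax/temperature mechanism is standard, but the logit values here are made up for illustration):

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Scale scores by temperature, then normalize into probabilities.
    # Higher temperature flattens the curve, making unlikely tokens more likely.
    m = max(x / temperature for x in logits)  # subtract max for numerical stability
    exps = [math.exp(x / temperature - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical raw scores for three candidate tokens

probs = softmax(logits)

# Greedy decoding: always take the highest-probability token.
greedy = probs.index(max(probs))

# "Adventurous" decoding: sample a token according to the probabilities.
random.seed(0)
sampled = random.choices(range(len(probs)), weights=probs, k=1)[0]
```

With greedy decoding the same prompt always yields the same token; sampling is where the run-to-run randomness in chatbot replies comes from.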

[–] Feathercrown@lemmy.world 1 points 1 week ago
[–] KAtieTot@lemmy.blahaj.zone 2 points 1 week ago (1 children)

Might be; afaik responses are sampled from a probability distribution, so there's some randomness, even if there's a lot of 'research' (scraping and content theft) involved.

IMO not a necessary distinction when I'm criticizing someone for humanizing a chat bot and projecting emotions onto it.

[–] Feathercrown@lemmy.world 1 points 1 week ago

I prefer to be as accurate as possible, but yes, it's not relevant to this situation.