Black Mirror creator unafraid of AI because it’s “boring”: Charlie Brooker doesn’t think AI is taking his job any time soon because it only produces trash

[–] Clent@lemmy.world 7 points 11 months ago (1 children)

Yes. I've used them. I have used them beyond the point of hallucination.

I am also a software engineer and have a deeper understanding of how these systems work than the average user.

The software community tends to approach these things with more caution than the general population. The media overblows the capabilities of these systems.

A more concrete example is autonomous vehicles, which were promised for decades; even now, with a form of them on the road, they are still closer to remote-controlled vehicles than to the intelligent, self-contained systems we were promised.

The difference between predictive text on a smartphone and predictive text from an LLM is that my smartphone predicts what I am likely to type next based on things I have typed in the past, while the LLM predicts what comes next based on a much larger body of text pulled from sources all across the internet. The LLM is then tuned by humans. This tuning step is under-reported.
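To make the contrast concrete, here's a toy sketch (illustrative only, not how any real keyboard app or LLM is actually implemented): phone-style prediction is basically a frequency table over your own typing history, while an LLM scores the next token against weights trained on internet-scale text.

```python
from collections import Counter, defaultdict

# Phone-style predictive text: a bigram frequency table built from
# *your own* typing history. It can only suggest words it has already
# seen you type after the current word.
my_history = "see you at the gym see you at the office see you soon".split()

next_word = defaultdict(Counter)
for prev, nxt in zip(my_history, my_history[1:]):
    next_word[prev][nxt] += 1

def phone_suggest(word, k=3):
    """Return the k words most often typed after `word`."""
    return [w for w, _ in next_word[word].most_common(k)]

print(phone_suggest("you"))  # ['at', 'soon'] -- drawn only from my own messages

# An LLM performs the same "predict what comes next" step, but its
# probabilities come from weights trained on text scraped from across
# the internet and then tuned by humans. Conceptually:
#
#   logits = model(tokens_so_far)         # prior learned from internet text
#   next_token = sample(softmax(logits))  # still just next-token prediction
```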

The LLM is unable to determine the truth of its own output. I would argue that is key to any claim of intelligence, but what intelligence means is itself a philosophical question up for debate.

[–] banneryear1868@lemmy.world 2 points 11 months ago

The LLM is unable to determine the truth of its own output. I would argue that is key to any claim of intelligence, but what intelligence means is itself a philosophical question up for debate.

Yeah, exactly, and a great way to see this is by asking it to produce two viewpoints on the same subject; a negative and a positive review of something you're familiar with is perfect. It produces this hilarious "critic"-type jargon, but you can tell it doesn't actually understand. Coincidentally, it's drawing from a lot of text where the original human author(s) might not understand either and are merely reproducing jargon-heavy text for an assignment from their employer or academic institution.

If AI can so accurately replicate an academic paper that probably didn't need to be written for any reason other than to meet publishing standards for tenured professors, that's really a reflection on the source material. Since an LLM can only create something based on existing input, almost all of the criticisms of it are criticisms that also apply to its source material.
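If you want to reproduce the experiment yourself, here's a minimal sketch (assuming the OpenAI Python client; the film title and model name are arbitrary stand-ins, and any chat-style LLM endpoint works the same way): ask for both reviews and compare the jargon.

```python
from openai import OpenAI  # assumes the OpenAI Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two opposed viewpoints on the same subject -- pick something you know well.
PROMPTS = [
    "Write a glowing positive review of the film Blade Runner (1982).",
    "Write a scathing negative review of the film Blade Runner (1982).",
]

for prompt in PROMPTS:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in; substitute whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)
    print("---")
```

The model will argue both sides with equal fluency and confidence, which is exactly the point: it's reproducing critic-jargon patterns from its training text, not evaluating the film.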