this post was submitted on 18 Dec 2023
114 points (75.0% liked)

Technology


AI-screened eye pics diagnose childhood autism with 100% accuracy

[–] partial_accumen@lemmy.world 32 points 11 months ago (2 children)

We don’t know what they are identifying. We give it input and it gives output. What exactly is going on internally is a mystery.

Counterintuitively, that's also where the benefit comes from.

The reason most AI is powerful isn't because it can think like humans; it's because it doesn't. It makes associations that humans don't, simply by consuming massive amounts of data. We humans tell it "Here's a bajillion examples of X. Okay, got it? Good. Now here's 10 bajillion samples we don't know if they are X or not. What do you, AI, think?"
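That train-on-labeled, predict-on-unlabeled flow can be sketched with a toy classifier. Everything here is illustrative: the 1-D "features," the nearest-centroid rule, and the tiny dataset stand in for the far larger models and datasets real systems use.

```python
# Toy illustration of the supervised-learning flow described above:
# learn from labeled examples of X / not-X, then label unseen samples.
# The 1-D "features" and nearest-centroid rule are purely illustrative.

def train(labeled):
    """Compute the mean feature value (centroid) for each class label."""
    sums, counts = {}, {}
    for value, label in labeled:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, value):
    """Assign the label whose centroid is closest to the sample."""
    return min(centroids, key=lambda label: abs(centroids[label] - value))

# "Here's a bajillion examples of X"... (well, six)
training_data = [(1.0, "X"), (1.2, "X"), (0.9, "X"),
                 (5.0, "not-X"), (5.3, "not-X"), (4.8, "not-X")]
model = train(training_data)

# ..."now here's a pile of samples we don't know about. What do you think?"
unknowns = [1.1, 5.1, 0.95]
print([predict(model, v) for v in unknowns])  # → ['X', 'not-X', 'X']
```

The model never "understands" X; it only encodes which numeric patterns co-occurred with each label, which is exactly the correlation behavior described below.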

AI isn't really a causation machine but a correlation machine. Its output effectively says "This thing you gave me later has some similarities to the things you gave me before. I don't know if the similarities mean anything, but they ARE similarities."

It's up to us humans to evaluate the answer the AI gave us and determine whether the similarities it found are useful or just coincidental.

[–] SpaceNoodle@lemmy.world 1 points 11 months ago (1 children)

Sure, but if we could take the model generated by the AI and convert it into a set of quantifiable criteria - i.e., what is being correlated - we could use our human abilities of associative thought to gain an understanding of why this correlation may exist, possibly leading to a better understanding of autism overall.

[–] Trainguyrom@reddthat.com 2 points 11 months ago (1 children)

The problem is that identifying what an AI model is doing is basically impossible. You can't just decompile an AI model and see a bunch of logic, and you can't view the machine code and reverse-engineer it, because it isn't code in that sense. The best way to suss it out is to throw corner cases at it and try to figure out any common themes in the false negatives and false positives.
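That probing approach can be sketched as follows. The `black_box_model` here is a hypothetical stand-in for whatever opaque classifier is being examined, and the "brightness" feature is made up for illustration:

```python
# Probe an opaque classifier with hand-picked corner cases and bucket
# the misclassifications, looking for common themes. The model below is
# a hypothetical stand-in; swap in the real black box under study.

def black_box_model(sample):
    """Opaque classifier: returns True if it flags the sample as X."""
    return sample["brightness"] > 0.5  # placeholder internals

# Corner cases paired with known ground-truth labels.
corner_cases = [
    ({"id": "dim-but-X",    "brightness": 0.2}, True),
    ({"id": "bright-not-X", "brightness": 0.9}, False),
    ({"id": "borderline",   "brightness": 0.5}, True),
    ({"id": "ordinary-X",   "brightness": 0.8}, True),
]

false_positives, false_negatives = [], []
for sample, truth in corner_cases:
    predicted = black_box_model(sample)
    if predicted and not truth:
        false_positives.append(sample["id"])
    elif truth and not predicted:
        false_negatives.append(sample["id"])

# Shared traits within each bucket hint at what the model keys on.
print("false positives:", false_positives)
print("false negatives:", false_negatives)
```

If the false negatives all turn out to be dim or borderline samples, you've learned something about what the model is actually correlating on, without ever opening the black box.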

[–] SpaceNoodle@lemmy.world 1 points 11 months ago

No, we just haven't come up with a way of reverse-engineering AI models yet.

[–] uriel238@lemmy.blahaj.zone 0 points 11 months ago

Incidentally, to train an AI you need a bajillion samples of X and a bajillion-plus samples of not-X.