[–] medgremlin@midwest.social 44 points 3 days ago (2 children)

They don't use the generative models for this. The AIs that do this kind of work are trained on carefully curated data and have a very narrow scope that they're good at.
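
A minimal sketch of what such a narrow-scope model can look like (entirely illustrative: the dataset, features, and outcome below are synthetic, and real medical models are far more involved):

```python
# Illustrative sketch only: a narrow-scope classifier fit on a curated,
# synthetic dataset. Nothing here comes from a real clinical system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # e.g. 5 vetted lab measurements
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic outcome label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One model, one narrow question: "does this lab panel predict the outcome?"
model = LogisticRegression().fit(X_train, y_train)
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```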

[–] Xaphanos@lemmy.world 15 points 3 days ago (2 children)

That brings up a significant problem: there are widely different things that all get called AI. My company's customers are using AI for biochem and pharma research, protein folding, and other science stuff.

[–] medgremlin@midwest.social 2 points 2 days ago

I do have a tech background in addition to being a medical student, and it really drives me bonkers that we're calling these overgrown algorithms "AI". The generative AI models, I suppose, come a little closer to earning the label, since they're black-box programs that develop themselves to a certain extent, but all of the reputable "AI" programs used in science and medicine are very carefully curated algorithms with specific rules and parameters that they follow.

[–] jballs@sh.itjust.works 3 points 3 days ago

My company cut funding for traditional projects and has prioritized funding for AI projects. So now anything that involves any form of automation is "AI".

[–] Ephera@lemmy.ml 11 points 3 days ago (1 children)

Yeah, those models are referred to as "discriminative AI". Basically, if you heard about "AI" from around 2018 until 2022, that's what was meant.
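
For anyone unfamiliar with the term: a discriminative model learns to map an input to a label (roughly, p(label | input)) rather than to generate new content. A toy sketch of the idea (the layer sizes and the two-class setup are purely hypothetical):

```python
# Hypothetical sketch: a discriminative model only scores inputs; it
# generates nothing. All sizes and the example task are made up.
import torch
import torch.nn as nn

discriminative = nn.Sequential(   # e.g. scan features -> "finding / no finding"
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 2),             # two class logits; no content is generated
)

x = torch.randn(1, 64)                    # one input example
print(discriminative(x).softmax(dim=-1))  # class probabilities, the only output
```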

[–] medgremlin@midwest.social 2 points 2 days ago (1 children)

The discriminative AIs are just really complex algorithms, and to my understanding are not complete black boxes. As someone who receives care for a lot of medical problems, and who will be a physician in about 10 months, I refuse to trust any black-box programming with my health or anyone else's.

Right now, the only legitimate use generative AI has in medicine is as a note-taker to ease the burden of documentation on providers. Its output is easily checked and corrected, and if your note-taking robot develops weird biases, you can delete it and start over. I don't trust non-human things to actually make decisions.

[–] sobchak@programming.dev 3 points 2 days ago

They are black boxes, and can even use the same NN architectures as the generative models (variations of transformers). They're just not trained to be general-purpose all-in-one solutions, and they have much more well-defined and constrained objectives, so it's easier to evaluate how they will perform in the real world (though unforeseen deficiencies and unexpected failure modes are still a problem).
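
A rough sketch of that point, assuming a PyTorch-style setup (all dimensions and names below are invented for illustration): the same transformer encoder can sit under either a generative next-token head or a narrow discriminative classification head. What changes is the head and the training objective, not the backbone.

```python
# Illustrative only: one transformer backbone, two possible heads.
# Every size here is arbitrary and not from any real system.
import torch
import torch.nn as nn

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True),
    num_layers=2,
)

vocab_size, num_classes = 1000, 2
embed = nn.Embedding(vocab_size, 128)

lm_head = nn.Linear(128, vocab_size)    # generative: predict the next token
cls_head = nn.Linear(128, num_classes)  # discriminative: one narrow label

tokens = torch.randint(0, vocab_size, (1, 16))
hidden = encoder(embed(tokens))

next_token_logits = lm_head(hidden)          # shape (1, 16, vocab_size)
class_logits = cls_head(hidden.mean(dim=1))  # shape (1, num_classes)
print(next_token_logits.shape, class_logits.shape)
```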