this post was submitted on 13 Jul 2025
464 points (97.2% liked)

Comic Strips


Comic Strips is a community for those who love comic stories.


top 50 comments
[–] Olgratin_Magmatoe@slrpnk.net 1 points 4 hours ago (1 children)

Ok, I give up, where's loss?

[–] squaresinger@lemmy.world 3 points 4 hours ago

Booring. Find a new joke.

[–] ChaoticNeutralCzech@feddit.org 23 points 12 hours ago

They can't possibly train for every possible scenario.

AI: "Pregnant, 94% confidence"
Patient: "I confess, I shoved an umbrella up my asshole. Don't send me to a gynecologist please!"

[–] burgerpocalyse@lemmy.world 23 points 13 hours ago (1 children)

I want to see Dr House make a rude comment to the chatbot that replaced all of his medical staff

[–] squaresinger@lemmy.world 9 points 4 hours ago

Imagine an episode of House, but everyone except House is an AI. And he's getting more and more frustrated by them spewing one piece of nonsense after another, while they get more and more appeasing.

"You idiot AI, it is not lupus! It is never lupus!"

"I am very sorry, you are right. The condition referred to Lupus does obviously not exist, and I am sorry that I wasted your time with this incorrect suggestion. Further analysis of the patient's condition leads me to suspect it is lupus."

[–] logicbomb@lemmy.world 83 points 20 hours ago (9 children)

My knowledge on this is several years old, but back then, there were some types of medical imaging where AI consistently outperformed all humans at diagnosis. They used existing data to give both humans and AI the same images and asked them to make a diagnosis, already knowing the correct answer. Sometimes, even when humans reviewed the image after knowing the answer, they couldn't figure out why the AI was right. It would be hard to imagine that AI has gotten worse in the following years.

When it comes to my health, I simply want the best outcomes possible, so whatever method gets the best outcomes, I want to use that method. If humans are better than AI, then I want humans. If AI is better, then I want AI. I think this sentiment will not be uncommon, but I'm not going to sacrifice my health so that somebody else can keep their job. There's a lot of other things that I would sacrifice, but not my health.

[–] Nalivai@discuss.tchncs.de 5 points 25 minutes ago* (last edited 24 minutes ago)

My favourite story about it was the one where a neural network trained on x-rays to recognise tumours (I think) was performing amazingly in a study, better than any human could.
Later it turned out that the network had been trained on real-life x-rays from confirmed cases, and it was looking for pen marks. Pen marks mean the image was reviewed by several doctors, which means it was more likely a case that needed a second opinion, which more often than not means there is a tumour. Which obviously means that on cases no human had studied before, the machine performed worse than random chance.
That's the problem with neural networks: it's incredibly hard to figure out what exactly is happening under the hood, and you can never be sure about anything.
And I'm not even talking about LLMs; those are a completely different level of bullshit.
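You can sometimes catch that kind of shortcut with a crude occlusion check. A minimal sketch in PyTorch (`model` and `image` are placeholders for whatever classifier and scan you're probing, not anything from that study): slide a grey patch over the image and watch where the tumour score collapses. If it only collapses over the corner where the pen marks live, the network learned the annotations, not the pathology.

```python
import torch

def occlusion_sensitivity(model, image, target_class, patch=16, stride=16):
    """Grey out one patch at a time and record how much the target score drops.
    image: tensor of shape (1, C, H, W); model: any classifier returning logits."""
    model.eval()
    with torch.no_grad():
        baseline = model(image).softmax(dim=1)[0, target_class].item()
    _, _, H, W = image.shape
    ys = list(range(0, H - patch + 1, stride))
    xs = list(range(0, W - patch + 1, stride))
    heatmap = torch.zeros(len(ys), len(xs))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            occluded = image.clone()
            occluded[:, :, y:y + patch, x:x + patch] = image.mean()  # cover one patch
            with torch.no_grad():
                score = model(occluded).softmax(dim=1)[0, target_class].item()
            heatmap[i, j] = baseline - score  # big drop = the model relies on this region
    return heatmap
```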

[–] HubertManne@piefed.social 2 points 4 hours ago (1 children)

When it comes to AI, I want it to assist. Like, I prefer robotic surgery where the surgeon controls the robot, but I would likely skip a fully automated one.

[–] logicbomb@lemmy.world 2 points 4 hours ago

I think that's the same point the comic is making, which is why it's called "The four eyes principle," meaning two different people look at it.

I understand the sentiment, but I will maintain that I would choose anything that has the better health outcome.

[–] expr@programming.dev 2 points 4 hours ago (3 children)

Except we didn't call all of that AI then, and it's silly to call it AI now. In chess, they're called "chess engines". They are highly specialized tools for analyzing chess positions. In medical imaging, that's called computer vision, which is a specific, well-studied field of computer science.

The problem with using the same meaningless term for everything is the precise issue you're describing: associating specialized computer programs for solving specific tasks with the misapplication of the generative capabilities of LLMs to areas in which they have no business being applied.

[–] marcos@lemmy.world 3 points 2 hours ago

We absolutely did call it "AI" then. The same applies to chess engines when they were being researched.

[–] laranis@lemmy.zip 4 points 4 hours ago (1 children)

Machine Learning is the general field, and I think if we weren't wrapped up in the AI hype we could be training models to do important things like diagnosing disease and not writing shitty code or creating fantasy art work.

[–] hedgehog@ttrpg.network 1 points 1 hour ago

We are. Why do you think we stopped?

[–] jwmgregory@lemmy.dbzer0.com 2 points 4 hours ago (1 children)

chess engines are, and always have been, called AI. computer vision is, and always has been, AI.

the only reason you might think they're not is because, during the most recent AI winter in which those technologies experienced a boom, researchers avoided terminology like "AI" when requesting funding and advertising their work, because of people like you who had recently decided that they're the arbiters of what is and isn't intelligence.

turing once said if we were to gather the meaning of intelligence from a gallup poll it would be patently absurd, and i agree.

but sure, computer vision and chess engines, the two most prominent use cases for AI and ML technologies - aren’t actual artificial intelligence, because you said so. why? idk. i guess because we can do those things well and the moment we understand something well as a society people start getting offended if you call it intelligence rather than computation. can’t break the “i’m a special and unique snowflake” spell for people, god forbid…

[–] hedgehog@ttrpg.network 1 points 11 minutes ago

There’s a whole history of people, both inside and outside the field, shifting the definition of AI to exclude any problem that had been the focus of AI research as soon as it’s solved.

Bertram Raphael said “AI is a collective name for problems which we do not yet know how to solve properly by computer.”

Pamela McCorduck wrote “it’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, but that’s not thinking” (Page 204 in Machines Who Think).

In Gödel, Escher, Bach: An Eternal Golden Braid, Douglas Hofstadter named “AI is whatever hasn’t been done yet” Tesler’s Theorem (crediting Larry Tesler).

https://praxtime.com/2016/06/09/agi-means-talking-computers/ reiterates the “AI is anything we don’t yet understand” point, but also touches on one reason why LLMs are still considered AI - because in fiction, talking computers were AI.

The author also quotes Jeff Hawkins’ book On Intelligence:

Now we can see the entire picture. Nature first created animals such as reptiles with sophisticated senses and sophisticated but relatively rigid behaviors. It then discovered that by adding a memory system and feeding the sensory stream into it, the animal could remember past experiences. When the animal found itself in the same or a similar situation, the memory would be recalled, leading to a prediction of what was likely to happen next. Thus, intelligence and understanding started as a memory system that fed predictions into the sensory stream. These predictions are the essence of understanding. To know something means that you can make predictions about it. …

The human cortex is particularly large and therefore has a massive memory capacity. It is constantly predicting what you will see, hear, and feel, mostly in ways you are unconscious of. These predictions are our thoughts, and, when combined with sensory input, they are our perceptions. I call this view of the brain the memory-prediction framework of intelligence.

If Searle’s Chinese Room contained a similar memory system that could make predictions about what Chinese characters would appear next and what would happen next in the story, we could say with confidence that the room understood Chinese and understood the story. We can now see where Alan Turing went wrong. Prediction, not behavior, is the proof of intelligence.

Another reason why LLMs are still considered AI, in my opinion, is that we still don’t understand how they work - and by that, I of course mean that LLMs have emergent capabilities that we don’t understand, not that we don’t understand how the technology itself works.

[–] olafurp@lemmy.world 19 points 13 hours ago (1 children)

To expand on this a bit, AI in medicine is getting super good at cancer screening in specific use cases.

People now heavily associate it with LLMs hallucinating and speaking out of their ass, but forget how AI completely destroys people at chess. AI is already getting better than top physics-based models at weather prediction, hurricane path forecasting, protein folding, and a lot of other use cases.

On specific, well-defined problems with a clear outcome, AI can potentially become way more accurate than any human. It's not so much about removing humans as about handing humans tools to make medicine both more effective and more efficient at the same time.

[–] HubertManne@piefed.social 2 points 4 hours ago (1 children)

The problem is the use of "AI" as a generic term for everything. Algorithms have been around for a while, and I'm pretty sure the AI cancer detections are machine learning models that are not at all related to LLMs.

[–] olafurp@lemmy.world 1 points 1 hour ago (1 children)

Yeah absolutely, I'm specifically talking about AI as a neural network/reinforcement learning/machine learning and whatnot. Top of the line weather algorithms are now less accurate than neural networks.

LLMs as doctors are pretty garbage since they're predicting words instead of classifying a photo into yes/no or detecting which part of the sleep cycle a sleeping patient is in.

Fun fact: the closer you get to the actual math, the less magical the words become. Marketing says "AI", programming says "machine learning" or "neural network", mathematicians say "reinforcement learning".

[–] HubertManne@piefed.social 1 points 1 hour ago

I guess I worked with a guy working with algorithms and neural networks so I sorta just equated them. I was very obviously not a CS major.

[–] ILoveUnions@lemmy.world 11 points 14 hours ago (1 children)

One of the large issues was that while they had very good rates of correct diagnosis, they also had higher false-positive rates. A false cancer diagnosis can seriously hurt people, for example.

[–] droans@midwest.social 1 points 1 minute ago

Iirc the issue was that the researchers left the manufacturer's logo on the scans.

All of the negative scans were done by the researchers on the same equipment while the positive scans were pulled from various sources. So the AI only learned to identify which scans had the logo.
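If you know the confound is a burned-in logo, the blunt fix is to mask that region out of every image before training. A rough sketch (corner size and positions are guesses for illustration, not from the actual study):

```python
import numpy as np

def mask_scanner_overlays(scan: np.ndarray, corner: int = 48) -> np.ndarray:
    """Zero out the corners of a 2D scan where vendor logos and burned-in text
    usually sit, so the classifier can't use them as a shortcut."""
    out = scan.copy()
    out[:corner, :corner] = 0        # top-left
    out[:corner, -corner:] = 0       # top-right
    out[-corner:, :corner] = 0       # bottom-left
    out[-corner:, -corner:] = 0      # bottom-right
    return out
```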

[–] DarkSirrush@lemmy.ca 54 points 19 hours ago (2 children)

iirc the reason it still isn't used is that even though it was trained by highly skilled professionals, it had some pretty bad race and gender biases, and it was only as accurate as claimed with white, male patients.

Plus the publicly released results were fairly cherry picked for their quality.

[–] Ephera@lemmy.ml 14 points 15 hours ago (1 children)

Yeah, there were also several stories where the AI just detected that all the pictures of the illness had e.g. a ruler in them, whereas the control pictures did not. It's easy to produce impressive results when your methodology sucks. And unfortunately, those results will get reported on before peer reviews are in and before others have attempted to reproduce the results.

[–] DarkSirrush@lemmy.ca 7 points 13 hours ago

That reminds me: pretty sure in at least one of these AI medical tests, it was reading metadata that included the diagnosis on the input image.

[–] yes_this_time@lemmy.world 22 points 19 hours ago* (last edited 19 hours ago)

Medical sciences in general have terrible gender and racial biases. My basic understanding is that it has got better in the past 10 years or so, but past scientific literature is littered with inaccuracies that we are still going along with. I'm thinking drugs specifically, but I suspect it generalizes.

[–] Taleya@aussie.zone 22 points 19 hours ago* (last edited 19 hours ago) (1 children)

That's because the medical one (particularly good at spotting cancerous cell clusters) was a pattern- and image-recognition AI, not a plagiarism machine spewing out fresh word salad.

LLMs are not AI

[–] pennomi@lemmy.world 21 points 18 hours ago (1 children)

They are AI, but to be fair, it’s an extraordinarily broad field. Even the venerable A* Pathfinding algorithm technically counts as AI.
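For reference, this is roughly all the A* "AI" is: a best-first search a few lines long (a sketch; `neighbors` and `heuristic` are whatever your problem supplies, and the heuristic must never overestimate the remaining cost):

```python
import heapq
import itertools

def a_star(start, goal, neighbors, heuristic):
    """neighbors(node) yields (next_node, step_cost); returns (path, cost) or (None, inf)."""
    counter = itertools.count()  # tie-breaker so the heap never has to compare nodes
    frontier = [(heuristic(start, goal), next(counter), 0, start, [start])]
    best = {start: 0}
    while frontier:
        _, _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        for nxt, step in neighbors(node):
            new_cost = cost + step
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost + heuristic(nxt, goal),
                                          next(counter), new_cost, nxt, path + [nxt]))
    return None, float("inf")
```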

[–] logicbomb@lemmy.world 12 points 16 hours ago

When I was in college, expert systems were considered AI. Expert systems can be 100% programmed by a human. As long as they're making decisions that appear intelligent, they're AI.

One example of an expert system "AI" is called "game AI." If a bot in a game appears to be acting similar to a real human, that's considered AI. Or at least it was when I went to college.
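Something like this hypothetical guard logic is all a classic "game AI" often is: a hand-written rule cascade with no learning anywhere, which is exactly the expert-system flavour of AI.

```python
def guard_ai(state: dict) -> str:
    """A hand-coded rule cascade: looks 'intelligent' in-game, but every branch
    was written by a human."""
    if state["health"] < 20:
        return "retreat"
    if state["sees_player"] and state["has_ammo"]:
        return "attack"
    if state["heard_noise"]:
        return "investigate"
    return "patrol"

# The guard heard something but can't see the player yet -> "investigate"
print(guard_ai({"health": 80, "sees_player": False, "has_ammo": True, "heard_noise": True}))
```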

[–] medgremlin@midwest.social 14 points 20 hours ago

The important thing to know here is that those AIs were trained by very experienced radiologists, physicians who specialize in reading imaging. The AIs wouldn't have this capability if humans hadn't trained them.

Also, the imaging that AI performs well with is fairly specific, and there are many kinds of imaging techniques and diagnostic applications that the AI is still very bad at.

[–] Glytch@lemmy.world 5 points 19 hours ago

Yeah this is one of the few tasks that AI is really good at. It's not perfect and it should always have a human doctor to double check the findings, but diagnostics is something AI can greatly assist with.

[–] Buffalox@lemmy.world 53 points 21 hours ago (1 children)

It's called progress because the cost in frame 4 is just a tenth what it was in frame 1.
Of course prices will still increase, but think of the PROFITS!

[–] noerdman@discuss.tchncs.de 36 points 21 hours ago (1 children)

Also, there'll be no one to blame for mistakes! Failures are just software errors and can be shrugged off! Increase profits and pay less for insurance! What's not to like?

[–] rowdy@lemmy.zip 21 points 20 hours ago (10 children)

I hate AI slop as much as the next guy but aren’t medical diagnoses and detecting abnormalities in scans/x-rays something that generative models are actually good at?

[–] medgremlin@midwest.social 38 points 20 hours ago (2 children)

They don't use the generative models for this. The AIs that do this kind of work are trained on carefully curated data and have a very narrow scope that they are good at.

[–] Ephera@lemmy.ml 8 points 15 hours ago (1 children)

Yeah, those models are referred to as "discriminative AI". Basically, if you heard about "AI" from around 2018 until 2022, that's what was meant.
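Roughly, a discriminative model only learns p(label | input): it can score a scan but never produce one. A toy sketch with scikit-learn on made-up feature vectors (synthetic data, purely for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                 # 200 fake "scans", 16 features each
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "abnormal" label

clf = LogisticRegression().fit(X, y)           # learns p(label | features) only
print(clf.predict_proba(X[:1]))                # outputs a probability, never an image
```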

[–] medgremlin@midwest.social 1 points 1 hour ago

The discriminative AIs are just really complex algorithms and, to my understanding, are not complete black boxes. As someone who has a lot of medical problems I receive care for, as well as someone who will be a physician in about 10 months, I refuse to trust any black-box programming with my health or anyone else's.

Right now, the only legitimate use generative AI has in medicine is as a note-taker to ease the burden of documentation on providers. Their work is easily checked and corrected, and if your note-taking robot develops weird biases, you can delete it and start over. I don't trust non-human things to actually make decisions.

[–] Xaphanos@lemmy.world 12 points 19 hours ago (2 children)

That brings up a significant problem - there are widely different things that are called AI. My company's customers are using AI for biochem and pharm research, protein folding, and other science stuff.

[–] medgremlin@midwest.social 1 points 1 hour ago

I do have a tech background in addition to being a medical student and it really drives me bonkers that we're calling these overgrown algorithms "AI". The generative AI models I suppose are a little closer to earning the definition as they are black-box programs that develop themselves to a certain extent, but all of the reputable "AI" programs used in science and medicine are very carefully curated algorithms with specific rules and parameters that they follow.

[–] jballs@sh.itjust.works 2 points 12 hours ago

My company cut funding for traditional projects and has prioritized funding for AI projects. So now anything that involves any form of automation is "AI".

[–] Mitchie151@lemmy.world 7 points 20 hours ago

Image categorisation AI, or convolutional neural networks, have been in use since well before LLMs and other generative AI (rough sketch at the end of this comment). Some medical imaging machines use this technology to highlight features such as specific organs in a scan. CNNs could likely be trained to be extremely proficient at reading X-rays, CT and MRI scans, but these are generally the less operator-dependent types of scan, though they can get complicated. An ultrasound, for example, is highly dependent on the skill of the operator, and in certain circumstances things can be made to look worse or better than they are.

I don't know why the technology hasn't become more widespread in the domain. Probably because radiologists are paid really well and have a vested interest in preventing it... they're not going to want to tag the images for their replacement. It's probably also because medical data is hard to get permission for: to ethically train such a model you would need to ask every patient, for every type of scan, if their images can be used for medical research, which is just another form/hurdle to jump over for everyone.
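For what it's worth, the rough sketch mentioned above: a toy convolutional classifier in PyTorch of the kind used for scan triage (layer sizes and the two-class output are placeholders, nothing clinical):

```python
import torch
from torch import nn

class TinyScanCNN(nn.Module):
    """Minimal CNN: two conv blocks, then a linear head over pooled features."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, x):                          # x: (batch, 1, H, W) greyscale scans
        return self.head(self.features(x))

logits = TinyScanCNN()(torch.randn(4, 1, 224, 224))
print(logits.shape)                                # torch.Size([4, 2])
```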

[–] MartianSands@sh.itjust.works 6 points 20 hours ago

It's certainly not as bad as the problems generative AI tends to have, but it's still difficult to avoid strange and/or subtle biases.

Very promising technology, but likely to be good at diagnosing problems in Californian students and very hit-and-miss with demographics that don't tend to sign up for studies in Silicon Valley.

[–] Kirsche_z@lemmy.blahaj.zone 1 points 11 hours ago (1 children)

At first I thought this was an open house where the visitors slowly became replaced by AI. Honestly, I thought this was speaking to the fact that AI would be able to replace even the housing industry. Imagine the amount of land that would be bought up if it were given the resources to generate wealth off of unused land; imagine this scenario but replace the "crime" with anything else.

This IS our future if we let it be.

[–] Passerby6497@lemmy.world 2 points 6 hours ago (1 children)

Imagine the amount of land that would be bought up if it were given the resources to generate wealth off of unused land

It's funny: if I take out the parts that reference AI, I just see a description of today.

[–] Kirsche_z@lemmy.blahaj.zone 1 points 6 hours ago

Yeah, but increase that outcome by a factor of 10, and speed up the process it takes to do so just as much.
