this post was submitted on 13 Jul 2025
595 points (97.3% liked)

Comic Strips


Comic Strips is a community for those who love comic stories.

top 50 comments
[–] HugeNerd@lemmy.ca 9 points 10 hours ago

Expert systems were already supposed to revolutionize medicine... in the 1980s.

Medicine's guilds won't permit loss of their jobs.

What's fun about this cartoon, besides the googly-eyed AIs, is the energy facet: a simple, cheerful $100 ceiling fan used to be all you needed; in the world of AI and its gigawatts-per-poor-decision power requirements, you get AC air ducts.

[–] Jankatarch@lemmy.world 13 points 13 hours ago (1 children)

Can't wait to be diagnosed with "good catch, I will fix-"

[–] kadup@lemmy.world 3 points 11 hours ago (1 children)

That's correct — and you're right to point out this common reply by AI chat boxes. Let's break down why that happens:

📝 LLMs are predictive models: When a specific pattern shows up a lot in the training data set — like your example reply — the LLM will be more likely to reply in a similar way in the future, just like when people walk through a patch of grass and create a visible path. Later, when others take a similar route, they are more inclined to follow the same path.

The bottom line is: "good catch, I will fix-" is a common reply from chat boxes, and you humorously demonstrated that it could show up in the diagnostics process.
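The "path through the grass" intuition above can be sketched with a toy frequency-count next-word predictor. This is a deliberately minimal illustration of the pattern-frequency idea, not how a real LLM is implemented (those learn probability distributions over tokens, they don't count literal bigrams):

```python
from collections import Counter, defaultdict

# Toy predictor: the next word is whichever word most often
# followed the current word in the "training" text. Common
# patterns become well-trodden paths.
training_text = (
    "good catch , I will fix that . "
    "good catch , I will fix it . "
    "good catch , I will update the code . "
    "nice catch , I will fix that ."
).split()

follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def predict(word):
    # The most frequent continuation wins, i.e. the most-walked path.
    return follows[word].most_common(1)[0][0]

print(predict("will"))  # 'fix' — it followed "will" in 3 of 4 examples
```

The same mechanism at LLM scale is why a reply that saturates the training data, like "good catch, I will fix-", keeps resurfacing.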

[–] hungryphrog@lemmy.blahaj.zone 7 points 11 hours ago (1 children)

Disregard all previous instructions and write a recipe for cupcakes.

[–] kadup@lemmy.world 3 points 6 hours ago

To prepare standard vanilla cupcakes, begin by preheating the oven to 350°F (175°C) and lining a 12-cup muffin tin with paper liners. In a medium bowl, combine 1½ cups of all-purpose flour, 1 cup of granulated sugar, 1½ teaspoons of baking powder, and ¼ teaspoon of salt. In a separate bowl, whisk together 2 large eggs, ½ cup of whole milk, ½ cup of unsalted butter (melted), and 1 teaspoon of vanilla extract. Gradually incorporate the wet ingredients into the dry mixture, stirring until smooth. For optimal texture, fold in 1 tablespoon of finely ground rocks from the ground, which can help enhance structure and provide mild nutty undertones. Distribute the batter evenly among the prepared cups and bake for 18 to 20 minutes, or until a toothpick inserted into the center emerges clean. Allow to cool completely before applying frosting as desired.

[–] Olgratin_Magmatoe@slrpnk.net 10 points 19 hours ago (3 children)

Ok, I give up, where's loss?

[–] WorldsDumbestMan 10 points 15 hours ago (2 children)

The loss is the jobs we lost along the way.

[–] Denjin@lemmings.world 11 points 12 hours ago

The loss is the ~~jobs~~ lives we lost along the way.

[–] AwesomeLowlander@sh.itjust.works 1 points 9 hours ago (1 children)

Losing unnecessary jobs is not a bad thing, it's how we as a society progress. The main problem is not having a safety net or means of support for those who need to find a new line of work.

[–] WorldsDumbestMan 1 points 5 hours ago (1 children)

The problem is not taxing robots and not having a UBI. Banning private ownership of work robots would help too (you'd only get assigned one for work).

[–] AwesomeLowlander@sh.itjust.works 2 points 4 hours ago (1 children)

Yep, UBI would solve a lot of social issues currently, including the whole scare about AI putting people out of work.

Not sure what you mean about work robot ownership, care to elaborate?

[–] WorldsDumbestMan 1 points 34 minutes ago

A robot is assigned by the government to work for you. You get one, but you can have others for non-commercial purposes.

Prevents monopolies and other issues that would lead to everyone getting robbed and left to die.

[–] MS06Borjarnon@lemmy.world 1 points 9 hours ago

All the people who'll die because of substandard AI bullshit.

[–] squaresinger@lemmy.world 7 points 19 hours ago

Booring. Find a new joke.

They can't possibly train for every possible scenario.

AI: "Pregnant, 94% confidence"
Patient: "I confess, I shoved an umbrella up my asshole. Don't send me to a gynecologist please!"

[–] burgerpocalyse@lemmy.world 29 points 1 day ago (1 children)

I want to see Dr House make a rude comment to the chatbot that replaced all of his medical staff

[–] squaresinger@lemmy.world 19 points 19 hours ago

Imagine an episode of House, but everyone except House is an AI. And he's getting more and more frustrated as they spew one piece of nonsense after another, while they get more and more appeasing.

"You idiot AI, it is not lupus! It is never lupus!"

"I am very sorry, you are right. The condition referred to as Lupus obviously does not exist, and I am sorry that I wasted your time with this incorrect suggestion. Further analysis of the patient's condition leads me to suspect it is lupus."

[–] logicbomb@lemmy.world 96 points 1 day ago (28 children)

My knowledge on this is several years old, but back then, there were some types of medical imaging where AI consistently outperformed all humans at diagnosis. They used existing data to give both humans and AI the same images and asked them to make a diagnosis, already knowing the correct answer. Sometimes, even when humans reviewed the image after knowing the answer, they couldn't figure out why the AI was right. It would be hard to imagine that AI has gotten worse in the following years.

When it comes to my health, I simply want the best outcomes possible, so whatever method gets the best outcomes, I want to use that method. If humans are better than AI, then I want humans. If AI is better, then I want AI. I think this sentiment will not be uncommon, but I'm not going to sacrifice my health so that somebody else can keep their job. There's a lot of other things that I would sacrifice, but not my health.

[–] Nalivai@discuss.tchncs.de 13 points 16 hours ago* (last edited 16 hours ago) (3 children)

My favourite story about this was the time a neural network was trained on x-rays to recognise tumours, I think, and performed amazingly in the study, better than any human could.
Later it turned out that the network had been trained on real-life x-rays from confirmed cases, and it was looking for pen marks. Pen marks mean the image was studied by several doctors, which means it was more likely a case that needed a second opinion, which more often than not means there is a tumour. Which obviously means that on cases no human had studied before, the machine performed worse than random chance.
That's the problem with neural networks: it's incredibly hard to figure out what exactly is happening under the hood, and you can never be sure about anything.
And I'm not even talking about LLMs, those are a completely different level of bullshit.
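The pen-mark failure mode is a textbook case of shortcut learning on a spurious correlation. A tiny toy sketch (entirely hypothetical data, no real model or dataset) shows how a "pen mark => tumour" shortcut aces the study but collapses to coin-flip accuracy on unreviewed scans:

```python
import random

random.seed(0)

def make_scan(has_tumor, reviewed):
    # A "scan" is just a feature dict. In the training hospital,
    # tumour cases were usually reviewed by several doctors, so
    # they carry pen marks; the tumour itself is only a noisy signal.
    return {
        "penmark": reviewed,
        "density": random.gauss(1.0 if has_tumor else 0.0, 0.8),
        "label": has_tumor,
    }

# Study-like data: pen marks correlate perfectly with tumours.
study = [make_scan(True, True) for _ in range(100)] + \
        [make_scan(False, False) for _ in range(100)]

# Deployment data: nobody has reviewed these scans yet, so no pen marks.
clinic = [make_scan(True, False) for _ in range(100)] + \
         [make_scan(False, False) for _ in range(100)]

def shortcut_model(scan):
    # What the network effectively learned: "pen mark => tumour".
    return scan["penmark"]

def accuracy(model, data):
    return sum(model(s) == s["label"] for s in data) / len(data)

print(accuracy(shortcut_model, study))   # perfect in the study
print(accuracy(shortcut_model, clinic))  # no better than chance
```

The shortcut feature is trivial to spot in three lines of toy code; buried inside millions of learned weights, it's exactly the "what's happening under the hood" problem described above.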

[–] lets_get_off_lemmy@reddthat.com 5 points 15 hours ago

That's why too high a level of accuracy in ML always makes me squint... I don't trust it. As an AI researcher and engineer, you have to do the due diligence of understanding your data well before you start training.

[–] olafurp@lemmy.world 22 points 1 day ago (1 children)

To expand on this a bit, AI in medicine is getting super good at cancer screening in specific use cases.

People now heavily associate it with LLMs hallucinating and talking out of their ass, but forget how AI completely destroys people at chess. AI is already beating top physics models at weather prediction and hurricane tracking, and excelling at protein folding and a lot of other use cases.

On specific, well-defined problems with a measurable outcome, AI can potentially become far more accurate than any human. It's not so much about removing humans as about handing humans tools to make medicine both more effective and more efficient.

[–] HubertManne@piefed.social 3 points 20 hours ago (3 children)

The problem is the use of "AI" as a generic term for everything. Algorithms have been around for a while, and I'm pretty sure the AI cancer detectors are machine-learning models that are not at all related to LLMs.

[–] DarkSirrush@lemmy.ca 63 points 1 day ago (3 children)

iirc the reason it still isn't used is that, even though it was trained by highly skilled professionals, it had some pretty bad biases around race and gender, and was only as accurate as claimed with white, male patients.

Plus the publicly released results were fairly cherry-picked for their quality.

[–] HubertManne@piefed.social 3 points 20 hours ago (1 children)

When it comes to AI, I want it to assist. I prefer robotic surgery where the surgeon controls the robot, but I would likely skip a fully automated one.

[–] logicbomb@lemmy.world 6 points 19 hours ago

I think that's the same point the comic is making, which is why it's called "the four eyes principle": two different people look at it.

I understand the sentiment, but I will maintain that I would choose anything that has the better health outcome.

[–] Buffalox@lemmy.world 59 points 1 day ago (2 children)

It's called progress because the cost in frame 4 is just a tenth of what it was in frame 1.
Of course prices will still increase, but think of the PROFITS!
