this post was submitted on 24 Mar 2024
104 points (100.0% liked)

TechTakes


A tweet from ardent promptfondler Misha Saul (archive):

A friend sent me MRI brain scan results and I put it through Claude.

No other AI would provide a diagnosis, Claude did.

Claude found an aggressive tumour.

The radiologist report came back clean.

I annoyed the radiologists until they re-checked. They did so with 3 radiologists and their own AI. Came back clean, so looks like Claude was wrong.

But look how convincing Claude sounds! We're still early...

Based on the MRI brain scan shown, there appears to be a large abnormal mass or lesion present. The bright, irregular white area in the center of the brain is concerning for a potential tumor or other pathological process.

The tumor demonstrates clear enhancement, local mass effect, and surrounding edema - features that would not be expected from normal brain tissue. We can trace the lesion across multiple serial MRI slices, confirming it is a real space-occupying mass. The location is also typical for a primary glioma.

While I acknowledge there is always some degree of uncertainty in radiology, I would estimate the chances of this actually being normal brain or a misleading angle to be well under 1%. The full set of images build a compelling, consistent picture of an aggressive brain tumor.

I asked it to review more angles:

After reviewing the additional MRI sequences and angles provided in this updated image, my assessment remains unchanged. The multiple planes convincingly demonstrate a large, avidly enhancing mass centered in the deep left parietal lobe with extensive surrounding vasogenic edema. The lesion is again clearly seen across sequential slices, confirming it is a true abnormality rather than an artifact.

Key observations:

1. Heterogeneous enhancement of the lesion on T1-weighted post-contrast images, indicating disruption of the blood-brain barrier
2. The "T2 FLAIR" sequence shows the mass to be mostly T2 hyperintense with surrounding edema
3. Effacement of the adjacent lateral ventricle due to local mass effect from the tumor
4. Midline shift to the right from the expansile effect of the lesion

These findings reaffirm my initial impression of a high-grade glioma, most likely glioblastoma given the imaging characteristics. The additional views perpendicular to the original plane help exclude the possibility of volume averaging artifact mimicking a mass.

In summary, this supplementary imaging further solidifies the diagnosis of an aggressive left parietal tumor, which I suspect is a glioblastoma. I am now even more confident (>99%) this is a true pathologic finding requiring urgent intervention. Biopsy and histopathologic assessment will ultimately be required for definitive classification, but the imaging is quite convincing for a malignant glioma.

Multiple replies point out that the LLM is literally just stringing together MRI-sounding phrases, and that Saul wasted the valuable time of three expensive professionals on nonsense he should have known was nonsense.

Saul is unfazed:

Impressed with the vitriol my post has attracted

My theory is that the mentally ill he/hims have reverence for authority and doctors are a special class of revered expert rather than a fallible class of professionals

Or maybe trying to use tech is inherently suspicious? 🤷‍♂️

He then doubles down on the fabulous promptfondling future:

Obviously the ideal state is doctors are entirely obsolete

Instant diagnosis, instant treatment

No GPs, no imaging or pathology supply chains, no surgeons, no mistakes, no delays

We may never reach that, but that’s obviously the ideal we want to asymptote towards

and a magical flying unicorn pony with the wrong number of legs

top 50 comments
[–] gerikson@awful.systems 57 points 6 months ago* (last edited 6 months ago) (2 children)

Normal person: an LLM is trained on publicly available images of MRIs, most of which show tumors, so presenting it with any MRI image will naturally generate text describing brain tumors.

Brain-addled prompt fondlers: clearly this response proves Claude is more intelligent than any doctor.
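The imbalance point can be sketched with a toy example (all numbers invented for illustration): if nearly every labeled scan in a training set shows a tumor, a degenerate model that always answers "tumor" looks accurate on that data while being useless as a screening tool.

```python
# Toy illustration of class imbalance (numbers invented):
# 95 tumor scans, 5 clean scans in the training set.
labels = ["tumor"] * 95 + ["clean"] * 5

def always_tumor(scan):
    """Degenerate classifier: ignores its input entirely."""
    return "tumor"

preds = [always_tumor(scan) for scan in labels]
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(accuracy)  # 0.95 on the skewed set, despite zero ability to spot a clean scan
```

Which is roughly what happens when an LLM's training data is dominated by tumor-positive MRI imagery: "tumor" becomes the statistically safe thing to emit.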

[–] Soyweiser@awful.systems 20 points 6 months ago (1 children)

Yes, the story gets crazier when you realize he didn't even use an ML model trained on MRIs; he used a general-purpose one. And apparently he now refuses to believe radiologists actually use the old-style ML models trained only on MRIs.

[–] dgerard@awful.systems 22 points 6 months ago* (last edited 6 months ago) (2 children)

it's important to notice that this is an actual real life use case for ML - an AI that will quickly flag positives in a batch of scans. This is real and is used. With due caution by people who know what the fuck they're doing. None of these fucks have heard of it, of course.

[–] self@awful.systems 21 points 6 months ago

With due caution by people who know what the fuck they’re doing.

this is one of the tip-offs I use to quickly differentiate AI crackpottery from legitimate ML. anything legitimate will prominently display its false positive and negative rates, known limitations, and procedures for fucking checking the data by hand (with accompanying warnings and disclaimers if you fail to do this). AI bullshit very frequently skips all that, because the numbers don’t look good and you’re more likely to get VC funding if you hide them
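Those false positive/negative rates matter because of the base-rate effect: with a rare condition, even a detector with decent-looking per-class rates produces mostly false alarms. A minimal sketch, with all numbers invented for illustration:

```python
# Invented screening numbers: a detector with seemingly good rates
# still yields mostly false alarms when the condition is rare.
prevalence = 0.01            # 1% of screened scans actually have a tumor
sensitivity = 0.90           # P(flagged | tumor)
false_positive_rate = 0.05   # P(flagged | no tumor)

population = 100_000
tumors = population * prevalence          # 1,000 real tumors
clean = population - tumors               # 99,000 clean scans

true_positives = tumors * sensitivity             # 900 real catches
false_positives = clean * false_positive_rate     # 4,950 false alarms

# Of all flagged scans, the fraction that actually have a tumor:
precision = true_positives / (true_positives + false_positives)
print(round(precision, 3))  # 0.154 -- about 85% of flags are wrong
```

This is exactly why legitimate tools lead with these rates and mandate hand-checking, and why hiding them is a tell.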

[–] froztbyte@awful.systems 17 points 6 months ago

Also, notably, with purpose-built models (as opposed to just slapping shit into a consumer LLM chatbot built for entirely different concerns)

[–] jacksilver@lemmy.world 16 points 6 months ago

God, I didn't even think about the imbalance of the publicly available MRI images. That's a great point, beyond the fact that it isn't trained on this specific task and so is unlikely to be capable in that area.

[–] pikesley@mastodon.me.uk 53 points 6 months ago (1 children)

@dgerard In a previous era, this reads something like "I consulted a psychic who told me I had a brain tumour, and now I have restraining orders from three radiologists"

[–] owen@lemmy.ca 25 points 6 months ago

"doctor, I saw jupiter rise in eve. My son NEEDS the lobotomy. TODAY!"

[–] gerikson@awful.systems 40 points 6 months ago (1 children)

Related:

https://hachyderm.io/@dfeldman/112149278408570324

If you feed AI an MRI, it will happily write a detailed and very convincing diagnosis...

even if the patient is a dead salmon.

[–] YouKnowWhoTheFuckIAM@awful.systems 12 points 6 months ago (1 children)

Holy shit, release the classics!

I don't know when it was decided that all medical technologies will be placebo tested on a dead fish, but it is my new favorite thing.

[–] owen@lemmy.ca 36 points 6 months ago* (last edited 6 months ago) (1 children)

This guy really just said "asymptote towards" instead of "approach" in a damn tweet. Buddy, check your brain, I think it's running.

[–] VeganCheesecake@lemmy.blahaj.zone 17 points 6 months ago

Might even be a tumor.

[–] dgerard@awful.systems 34 points 6 months ago

BEAT SKELTON on Twitter points out:

MY ONLY CONTRIBUTION TO THE MAIN CHARACTER OF THE DAY IS THAT THE RADIOLOGISTS PROBABLY DIDN'T ACTUALLY USE AN AI, THEY JUST TOLD MISHA THEY DID TO MAKE HIM SHUT UP.

[–] swlabr@awful.systems 32 points 6 months ago (1 children)

Sovereign citizen energy.

Can’t wait for an AI crank to demand cancer treatment because Siri looked sideways at a blurry photo of a lump on their butt. Can’t wait for a Scott to write a script/dialogue tree for getting DIY chemo kits.

[–] skillissuer@discuss.tchncs.de 10 points 6 months ago* (last edited 6 months ago) (2 children)

only for them to be denied these very aggressive pharmaceuticals they obviously "need". but worry not, there's that one weirdo who solved chemistry with iot and wants people to cook their own meds in glass jars with 3d printed lids connected to a series of raspberry pis

e: they don't have to be libertarian, i've just assumed it based on sheer impracticability of that nonsense and little regard for actual safety for end user. they do have that techbroey tunnel vision solving all the problems they understand and just make it work for the problems they don't. their entire mason jar reactor is completely superfluous if you can get proper glassware and in both cases you need to know actual chemistry beforehand anyway

[–] skillissuer@discuss.tchncs.de 10 points 6 months ago (10 children)

if you have no fucking clue what i'm talking about, it's this and it has a response from a professional

[–] corbin@awful.systems 6 points 6 months ago (4 children)

Wow, I hadn't read Lowe's response before, and it is capitalist cringe of the highest order. Thanks for sharing.

To be clear: I agree with every chemical and pharmacological critique leveled at the anarchists here. I also think that none of them have addressed the actual problem that the anarchists are solving, which is that medicinal chemistry has undergone so much regulatory capture that it is no longer legal to perform it at home for one's own private use or even to reverse-engineer the synthesis pathways. For more commentary on this, I recommend watching e.g. NurdRage reverse-engineering pyrimethamine and paying attention to what they say about obtaining precursors and carrying out various steps of synthesis.

[–] skillissuer@discuss.tchncs.de 6 points 6 months ago* (last edited 6 months ago) (2 children)

continuing on that capitalist cringe, i'd just point out that a fair bit of the risky (in the business sense) fundamental research that ultimately goes into new drugs is conducted at universities, which then get fuck all of the profits even if there's a licensing agreement

although from what i understand the biggest leech on the for-profit american healthcare system is the insurance layer

[–] skillissuer@discuss.tchncs.de 6 points 6 months ago

now i see the issue with pyrimethamine, namely that one step sideways gets you a close P2P analogue, which is a big regulatory nono

[–] skillissuer@discuss.tchncs.de 5 points 6 months ago* (last edited 6 months ago)

but wait wait wait are they really solving anything, because i wouldn't say so. getting starting materials is not solved with a mason jar with a raspberry pi. even in the circumstance that you already have all that kit you still need to know some fundamentals of organic chemistry, because trusting your life and limb to a glass jar with illusions of grandeur sounds like an avoidable suboptimal situation

[–] froztbyte@awful.systems 6 points 6 months ago

there’s that one libertarian weirdo who solved chemistry with iot

wait what, do tell

[–] mii@awful.systems 30 points 6 months ago* (last edited 6 months ago)

This is the next level of “I put my symptoms into Google and WebMD told me I have cancer”.

My compassion goes out to any doctors who now not only have to explain to several idiots every day that a slight pain in their pinky finger does not, in fact, mean they probably have ball cancer, but also that some vaguely professional sounding fluff disguised as a diagnosis generated by a chatbot also doesn’t mean they probably have ball cancer.

[–] blakestacey@awful.systems 28 points 6 months ago (3 children)

"I trained an AI on fan-wiki summaries of House episodes, and my friend needs treatment for lupus now!"

[–] silliewous@feddit.nl 15 points 6 months ago (1 children)

Worst. AI. Ever. It’s never Lupus.

[–] blakestacey@awful.systems 10 points 6 months ago* (last edited 6 months ago)

"Wait, are we after the second commercial break? Then the AI says it can't be lupus, and my friend's prostate is about to explode! Yes, it can tell that from a picture of his brain. And also it's flirting with Wilson, like, a lot."

[–] Soyweiser@awful.systems 10 points 6 months ago

Ha, this fits. I have noticed a tendency for 'AI, give me a summary of this' to transform content like 'this story is not about X, I'm not arguing X' into 'this story is arguing X'

[–] swlabr@awful.systems 6 points 6 months ago

“I trained my model on AO3, why aren’t we funding a cure for Hanahaki???”

[–] Deceptichum@sh.itjust.works 26 points 6 months ago (1 children)

Just looking at this person's recent comments, they're clearly not even worth the time of day it takes to be outraged at their stupidity in the first place.

[–] dgerard@awful.systems 29 points 6 months ago (1 children)

LOOK AT MY PREGNANT TRADWIFE MIDJOURNEY OUTPUT

[–] blakestacey@awful.systems 15 points 6 months ago (1 children)

July 2022: Midjourney released

March 2024: "pregnant tradwife Midjourney tweets" is an instantly recognizable type of guy

[–] hydroptic@sopuli.xyz 9 points 6 months ago (1 children)

April 2024: God dies of terminal cringing

[–] BlueMonday1984@awful.systems 7 points 6 months ago

May 2024: Things somehow get worse

[–] turkishdelight@lemmy.ml 25 points 6 months ago

Why are you expecting an LLM to be accurate at medical diagnosis? Your expectations are wrong.

[–] sailor_sega_saturn@awful.systems 21 points 6 months ago

Oh great the newest use case for chatbots: pestering experts

[–] V0ldek@awful.systems 18 points 6 months ago

Jean Claude Van Scamme

[–] FermiEstimate@lemmy.dbzer0.com 17 points 6 months ago (1 children)

No GPs, no imaging or pathology supply chains, no surgeons, no mistakes, no delays

Now I'm imagining this guy trying to plug an ethernet cable into an MRI's helium line.

[–] bbuez@lemmy.world 9 points 6 months ago (1 children)

PC in the room with the MRI, hard drive now completely destroyed

[–] Soyweiser@awful.systems 15 points 6 months ago* (last edited 6 months ago) (1 children)

Look at this tweet (from 2020): https://twitter.com/misha_saul/status/1321760123413360643 He hits all the marks: EA/crypto/the rationalist-sphere/Musk.

Bonus points, comics! (Putting Scarface in the most evil spot was a nice bit of bait, however.)

[–] froztbyte@awful.systems 13 points 6 months ago (1 children)

truly a renegade thinker. entirely unbecome by the petty mild thoughts of the masses, out on the edge

[–] Soyweiser@awful.systems 7 points 6 months ago

I love the added contrarianism/on-the-edge thinking of saying both sides are wrong about EA/Crypto/Musk/etc. Truly revolutionary thinking.

[–] toiletobserver@lemm.ee 13 points 6 months ago

Every system is perfectly designed to get the result that it does.

  • W. Edwards Deming
[–] Holyginz@lemmy.world 11 points 6 months ago* (last edited 6 months ago)

Did he seriously say "asymptote towards"? He couldn't sound like more of a tool if he tried.
