this post was submitted on 26 Aug 2023
401 points (85.9% liked)

ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans

Researchers at Brigham and Women's Hospital found that cancer treatment plans generated by OpenAI's revolutionary chatbot were full of errors.

[–] UnbeatenDeployGoofy@lemmy.ml 10 points 1 year ago (2 children)

I suppose most sensible people already know that ChatGPT is not the answer for medical diagnosis.

From the study: "Prompts were input to the GPT-3.5-turbo-0301 model via the ChatGPT (OpenAI) interface."

If the researchers wanted to investigate whether an LLM could actually be helpful here, they should have built a model specifically trained on cancer treatment plans on top of GPT-4/3.5 and tested that thoroughly, rather than just entering prompts into the stock model available from OpenAI.
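As a rough illustration of what that could look like, here is a minimal sketch using OpenAI's fine-tuning API for gpt-3.5-turbo. The file name and training data are hypothetical; real examples would have to be vetted, de-identified oncology data.

```python
# Minimal sketch of domain-adapting gpt-3.5-turbo with OpenAI's fine-tuning API
# (openai Python client >= 1.0). "oncology_examples.jsonl" is a hypothetical file;
# it would hold chat-formatted question/treatment-plan pairs, one JSON object per line:
# {"messages": [{"role": "user", "content": "..."},
#               {"role": "assistant", "content": "..."}]}
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload the training data.
training_file = client.files.create(
    file=open("oncology_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on top of the base chat model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```

Only after evaluating a model like that against expert-written plans would it be fair to conclude how useful (or not) LLMs are for this task.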

[–] wewbull@feddit.uk 2 points 1 year ago

There have been a number of articles about how GPT has been out-diagnosing doctors in various domains. To me, that isn't surprising, as diagnosis is a pattern-matching problem, something a neural net will be very good at. Human doctors have been seen discounting rare conditions just because they were rare, and so "it was much more likely to be something else", even if the symptoms backed up the conclusion. A computer can be more objective about such things.

...but none of that needs AI/ML. We've had expert systems since the 60s.
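For illustration, a toy rule-based system in that spirit needs nothing more than hand-written rules. All symptoms and conditions below are made up, purely to show the idea:

```python
# Toy rule-based "expert system" sketch: hand-written rules map observed findings
# to candidate conditions, with no machine learning involved. All symptom and
# condition names are invented for illustration only.
RULES = [
    ({"fever", "stiff neck", "photophobia"}, "suspected meningitis"),
    ({"fatigue", "weight loss", "night sweats"}, "possible lymphoma"),
    ({"cough", "fever"}, "common respiratory infection"),
]

def diagnose(findings: set[str]) -> list[str]:
    """Return every condition whose rule is fully covered by the findings."""
    return [condition for required, condition in RULES if required <= findings]

print(diagnose({"fever", "stiff neck", "photophobia", "cough"}))
# -> ['suspected meningitis', 'common respiratory infection']
```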

Diagnosis is also very different from constructing a treatment plan, which is what we're discussing here.

[–] ours@lemmy.film 1 points 1 year ago (1 children)

Or they could feed the current model with a reputable source of medical information.
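One way to do that would be retrieval-augmented prompting: look up passages from a trusted source and make the model answer only from them. A minimal sketch, where the `search_guidelines` helper is hypothetical and stands in for a search over something like published clinical guidelines:

```python
# Sketch of grounding the model in a vetted medical source by pasting retrieved
# passages into the prompt (retrieval-augmented generation).
from openai import OpenAI

client = OpenAI()

def search_guidelines(query: str) -> list[str]:
    """Hypothetical retrieval step; would query an index built from a reputable source."""
    raise NotImplementedError

def grounded_answer(question: str) -> str:
    excerpts = search_guidelines(question)
    context = "\n\n".join(excerpts)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer only from the provided excerpts. "
                        "If they do not contain the answer, say so."},
            {"role": "user",
             "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```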

[–] testo12@lemmynsfw.com 2 points 1 year ago (1 children)

That wouldn't guarantee correct answers.

It's arguably more dangerous if ChatGPT gives mostly sound, specific medical advice, because that leads people to put more trust in it than they should.

[–] ours@lemmy.film 1 points 1 year ago

True, but it would reduce the chances of it making things up entirely.