this post was submitted on 14 Mar 2024
650 points (98.7% liked)

Science Memes


Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



all 49 comments
[–] Frenchy@aussie.zone 160 points 8 months ago (5 children)

Well that's… unfortunate. I'd like to know how the fuck that got past editors, typesetters, and peer reviewers. I hope this is some universally ignored, low-impact-factor, pay-to-print journal.

[–] fossilesque@mander.xyz 122 points 8 months ago (1 children)

We all know Elsevier only upholds the highest standards; after all, why would they have such a large market share?

[–] NegativeInf@lemmy.world 62 points 8 months ago

That name. Being a hobbyist with niche interests has made me hate them so very much. Sci-Hub forever.

[–] gregorum@lemm.ee 33 points 8 months ago* (last edited 8 months ago) (1 children)

because they're all as bad as most of us and only read the headline :(

[–] fossilesque@mander.xyz 63 points 8 months ago* (last edited 8 months ago) (1 children)
[–] Blackmist@feddit.uk 21 points 8 months ago

Editors, typesetters and peer reviewers have also been replaced with AI.

[–] GenEcon@lemm.ee 16 points 8 months ago

Since the rest of the paper looks decent (I am no expert in this field), I have a guess: it got to review and came back as a 'minor revision' with the comment 'please summarize XY at the end'.

In low-impact journals, minor revisions are handled by trusting the authors to address the changes accordingly. Afterwards it goes to production, where some badly paid people – most of the time from India – do the formatting, send out a proof with a deadline of at most 2 days, and then it gets published.

I don't want to defend this practice, but that's how something like this can get through.

[–] Nomecks@lemmy.ca 8 points 8 months ago

They were using AI to proof it

[–] puchaczyk@lemmy.blahaj.zone 125 points 8 months ago (2 children)

Just to remind everyone, Elsevier had £2 billion of NET INCOME in 2022, and yet this is the quality you get.

[–] OpenStars@startrek.website 35 points 8 months ago

The goal of any medical institution should be to generate profits.

- capitalists

[–] PositiveControl@feddit.it 81 points 8 months ago* (last edited 8 months ago) (1 children)

It's the second time in a few hours that I've seen a post about AI-written articles published in an Elsevier journal. Maybe I'm not super worried about these specific papers (since the journals are also kinda irrelevant), but I am worried about all the ones we're not seeing. And I fear the situation is only going to get worse as AI improves, especially regarding images. The peer review system is not ready to address all of this.

[–] Pyr_Pressure@lemmy.ca 4 points 8 months ago

There are so many different journals out there that it's hard to keep track of which ones are actually reputable anymore.

We almost need some overarching scientific body that can review and rate journals, so you know whether the articles within are even worth citing.

Like Science and Nature would be S-tier, whereas this journal is apparently F-tier, and people shouldn't even be allowed to cite articles found within it for their own papers.

[–] bjoern_tantau@swg-empire.de 77 points 8 months ago (3 children)

What's so puzzling about this stuff is that I get why they're using AI to write the text because writing is hard. But why don't they at least read it once before submitting?

[–] TxzK@lemmy.zip 62 points 8 months ago (1 children)

Reading is hard too. If only there was an AI that could do the reading

[–] lugal@sopuli.xyz 24 points 8 months ago (2 children)

And we need an Electric Monk that believes for us

[–] ech@lemm.ee 13 points 8 months ago (2 children)
[–] SpaceNoodle@lemmy.world 3 points 8 months ago

How am I just now learning about this

[–] lugal@sopuli.xyz 1 points 8 months ago

The future is now!

[–] Kratzkopf@discuss.tchncs.de 3 points 8 months ago (1 children)

/c/unexpecteddouglasadams or something

[–] lugal@sopuli.xyz 1 points 8 months ago* (last edited 8 months ago)

Someone got the reference

[–] FiniteBanjo 10 points 8 months ago

I don't even get the writing aspect. An LLM working at 100% still makes as many errors as an average human, so your product will always be worse with AI. Always. It's never good to use it.

[–] Swedneck@discuss.tchncs.de 4 points 8 months ago

CTRL+f, "AI", enter

but no, let's not take literally 5 seconds to check whether the AI got confused and included an admission of your shame in the paper.
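
For anyone who wants that five-second check to be repeatable, here's a minimal sketch in Python. It assumes the manuscript has been extracted to a plain-text file; the filename and phrase list are illustrative, not exhaustive.

```python
# Quick scan for telltale LLM boilerplate, equivalent to the CTRL+F check.
# "manuscript.txt" is a hypothetical filename; the phrases are sample strings.

TELLTALE_PHRASES = [
    "as an ai language model",
    "i'm sorry, but i can't",
    "regenerate response",
    "as of my last knowledge update",
]

def flag_llm_boilerplate(text: str) -> list[str]:
    """Return any telltale phrases found in the text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

if __name__ == "__main__":
    with open("manuscript.txt", encoding="utf-8") as f:
        hits = flag_llm_boilerplate(f.read())
    print(hits if hits else "No obvious LLM boilerplate found.")
```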

[–] PatFussy@lemm.ee 77 points 8 months ago (2 children)

Dang, that got published… I had to jump through fucking HOOPS to get my advisors to allow me to publish shit. This is ridiculous.

[–] RootBeerGuy@discuss.tchncs.de 29 points 8 months ago

Not sure you'd want to publish in Radiology Case Reports anyway. It has an impact factor of 0.8, and while I'm not saying impact factor is a good general quality metric, anything below 1 is probably not worth your time unless it's a very, very new journal that just doesn't have enough history yet.

[–] 52fighters@sopuli.xyz 4 points 8 months ago

So no peer review? Or did the peer just use a chatbot too?

[–] HootinNHollerin@lemmy.world 37 points 8 months ago (1 children)

4 MDs and not a single brain

[–] Risk@feddit.uk 18 points 8 months ago

I work in healthcare. Doesn't surprise me in the slightest.

[–] SpoopyKing@lemmy.sdf.org 31 points 8 months ago (1 children)

Why is the publication date June 2024?

[–] RootBeerGuy@discuss.tchncs.de 39 points 8 months ago* (last edited 8 months ago)

This practice is a remnant of the print era. Papers would get accepted and then printed in a later issue. Once online publishing started, this was no longer really necessary, which led to online publication ahead of print, but somehow still using the print date for the article, because a lot of journals still produce physical issues.

That said, I don't know if this journal does that; if not, it is simply stupid. They might do it because they limit "online" issues in size, like the printed ones, which is idiotic if you don't actually print anything.

[–] WalrusDragonOnABike@reddthat.com 24 points 8 months ago (1 children)

At least the AI saw personal medical info and Nope!'d out of that?

[–] Wirlocke@lemmy.blahaj.zone 13 points 8 months ago* (last edited 8 months ago) (2 children)

Come to think of it, I wonder if using ChatGPT violates HIPAA, since it sends the patient data to OpenAI?

I smell a lawsuit.

[–] hissingmeerkat@sh.itjust.works 9 points 8 months ago (1 children)

I don't think HIPAA applies in Jerusalem.

[–] Juviz@feddit.de 7 points 8 months ago

You're right about that, but other countries have similar protections. E.g. our board equivalent here in Germany would tear you a new one for this, and the GDPR would finish the job.

[–] survivalmachine@beehaw.org 2 points 8 months ago (1 children)
[–] Wirlocke@lemmy.blahaj.zone 1 points 8 months ago (1 children)

Typically, for the AI to do anything useful, you'd copy and paste the medical records into it, and that would be patient data.

Technically you could expunge enough data to keep it in line with HIPAA, but if people are careless enough not to proofread their own paper, I doubt they'd prep the data correctly.
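
A toy illustration of the kind of scrubbing meant here, assuming only identifiers that simple regexes can catch. The patterns are hypothetical examples; real HIPAA de-identification (the Safe Harbor method covers 18 identifier categories) takes far more than pattern matching.

```python
import re

# Toy PHI scrubber: masks a few regex-detectable identifiers before any
# text leaves your machine. Illustrative only, not a compliant pipeline.
PATTERNS = {
    "[DATE]":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "[MRN]":   re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(text: str) -> str:
    """Replace each matched identifier with its placeholder."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Pt MRN: 483920, seen 03/14/2024, call 555-867-5309."))
# -> Pt [MRN], seen [DATE], call [PHONE].
```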

[–] survivalmachine@beehaw.org 2 points 8 months ago (1 children)

ChatGPT has no burden to respect HIPAA in that scenario. The medical provider inputting your PHI into a cloud-based LLM is violating your HIPAA rights in that case.

[–] Wirlocke@lemmy.blahaj.zone 2 points 8 months ago

Just to clarify, I meant that the medical provider would be the one sued. I didn't think ChatGPT would be in the wrong.

ChatGPT has just done a great job of revealing how lazy and poorly thought out people are all over.

[–] MataVatnik@lemmy.world 8 points 8 months ago
[–] Omega_Haxors@lemmy.ml 7 points 8 months ago

Innovation under capitalism:

[–] Lucien@hexbear.net 6 points 8 months ago (1 children)

Being a lit review, it's not a refereed publication, so there's no one to call them out on their bullshit. Funny that the author didn't even bother reading their shit sandwich of a "review".

[–] hissingmeerkat@sh.itjust.works 7 points 8 months ago

It's not a literature review. It's a case report on a specific patient. It's impossible to imagine writing a discussion of your own patient in this way, or accepting an approximately five-page article without reading it.

The journal Radiology Case Reports is refereed by an editorial board led by University of Washington professors, associate professors, and doctors of medicine.

Radiology Case Reports is an open-access journal publishing exclusively case reports that feature diagnostic imaging. Categories in which case reports can be placed include the musculoskeletal system, spine, central nervous system, head and neck, cardiovascular, chest, gastrointestinal, genitourinary, multisystem, pediatric, emergency, women's imaging, oncologic, normal variants, medical devices, foreign bodies, interventional radiology, nuclear medicine, molecular imaging, ultrasonography, imaging artifacts, forensic, anthropological, and medical-legal. Articles must be well-documented and include a review of the appropriate literature.

$550 - Article publishing charge for open access

10 days - Time to first decision

18 days - Review time

19 days - Submission to acceptance

80% - Acceptance rate