this post was submitted on 14 Jul 2025
101 points (100.0% liked)

science


A community to post scientific articles, news, and civil discussion.

rule #1: be kind

[–] Squizzy@lemmy.world 0 points 1 month ago (1 children)

Those authors should face extra screening from now on. They do not respect the purpose of the scientific process if they are only trying to push themselves forward.

[–] DaTingGoBrrr@lemmy.ml 11 points 1 month ago (2 children)

Or maybe AI shouldn't review things? Who knows what they are hallucinating.

[–] MysteriousSophon21@lemmy.world 2 points 1 month ago

This is the biggest issue - peer review is supposed to be about critical analysis and domain expertise, not just following prompts blindly, and no AI today has actual scientific understanding to catch subtle methodological flaws.

[–] Squizzy@lemmy.world 0 points 1 month ago (1 children)

Yeah absolutely, but researchers who are attempting to skirt review processes to only receive positive feedback are not respecting the process.

[–] CrypticCoffee@lemmy.ml 2 points 1 month ago (1 children)

What's to respect in an AI review where the reviewer didn't even check the output? It's a lazy LLM review. It deserves to be gamed.

[–] Squizzy@lemmy.world -1 points 1 month ago

Yes, the reviewers should not be using it, and the researchers shouldn't be submitting papers with the intention of gaming it.

AI is not all LLM chatbots; there are legitimate AI implementations used in research.