Nature: "More than half of researchers now use AI for peer review"

nature.com/articles/d41586-025…

"GPT-5 could mimic the structure of a peer-review report and use polished language, but that it failed to produce constructive feedback and made factual errors."

Yep, that matches the recent review I had, on the basis of which my manuscript was rejected. Vague criticisms that sound damning but aren't actionable.

Bigger picture: Nobody has time to do peer review, so many reviews are shoddy. Now shoddy reviews can have AI help, but they're still shoddy reviews.
AI is making it worse, but the fundamental issue here is workload.

#academicchatter

in reply to Simon Waldman

UKRI now requires one to promise not to use generative AI in any way when agreeing to review UK-based proposals, which is a good step forward. But of course, it's hard to know how many will comply.

I guess it is based on this policy:
ukri.org/publications/generati…

in reply to Simon Waldman

The "research" that that Nature article is based on is just a glossy 29-page advertisement [1]. See page 28 [1], which admits that the authors include a Customer Intelligence Manager, a Brand Content Specialist, and a Senior Brand Manager, and instead of research, there was "content development", and the draft and refinement include #AISlop .

Seems like #misinformation to me.

@academia

#NatureMisinformation

[1] frontiersin.org/documents/unlo… (archive: web.archive.org/web/2025121613… )

in reply to Boud

@boud @academia

Indeed, I found that 50% number way too big. I had a quick glance at this Frontiers survey and it feels very cringe. Like, only 20% of people say that use of AI in the publication process increases their trust in the publisher, and their conclusion is "need for clearer communication to build trust" 😂 - when it should be "need to stop using AI in the publication process".

So I stopped reading, and I'm not going to trust that Nature "news" either. It probably wasn't even peer-reviewed.