As an academic, albeit in a low-stakes field (literature), this is bloody terrifying. It's clearly the result of students using ChatGPT to write their papers and the outdated "publish or perish" model of tenure/promotion metrics incentivizing quantity over quality. Peer review should always be double-blind to prevent nepotism, but reviewers should also be paid -- it's hard, essential work, and with how precarious many academics are right now, taking on extra unpaid labour just isn't feasible. Journals roll in cash but expect free labour from all sides of academia -- the writers don't get paid (or have to pay to publish) and the reviewers don't get paid, yet readers pay for access, either individually or through library/university subscriptions. It's bonkers.

Written by Dr. Casey Lawrence

Canadian author of three LGBT YA novels. PhD from Trinity College Dublin. Check out my lists for stories by genre/type.

Responses (4)


This does not come out of "technology" -- it comes out of lazy journalism and a general refusal to fact-check. Jonathan Haidt (of New York University) is making his millions with a book that fraudulently misrepresents almost all the research quoted…

"terrifying. It's clearly the result of students using ChatGPT" blaming chatGPT is a easy way out. This is issue is old, and well-known. SCIgen, as an example, was a random-paper generator.

The system you have just described sounds familiar.