As academic review bottlenecks worsen, AI examiners become the first authority on whether new science is reproducible enough to enter the record.
What begins as a response to fraud and overload becomes a new epistemic checkpoint. AI reviewers rerun code, inspect statistical assumptions, generate adversarial test cases, and compare claims against hidden libraries of failed replications. Publication speeds up for well-structured work and slows sharply for work in fields built on tacit judgment or fragile measurements. Young scientists learn to write for machine scrutiny before peer conversation. Trust rises in some disciplines, but so does conformity: if the examiner cannot parse your method, your method increasingly does not count.
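To make the checkpoint concrete, here is a minimal sketch, in Python, of what one examiner check might look like: rerun the submitted analysis, compare the recomputed statistics against the manuscript's reported claims within a tolerance, and flag any claim that has no documented procedure to rerun. Every name here (Claim, rerun_analysis, the verdict labels) is hypothetical and invented for illustration; no real examiner exposes this interface, and a deployed system would be far more elaborate.

```python
# Hypothetical sketch of one automated-examiner check: rerun an analysis
# and compare recomputed statistics to the values claimed in the paper.
import statistics
from dataclasses import dataclass

@dataclass
class Claim:
    name: str        # e.g. "mean_effect", as labeled in the manuscript
    reported: float  # value stated in the manuscript
    tolerance: float # allowed absolute deviation on rerun

def rerun_analysis(data: list[float]) -> dict[str, float]:
    """Stand-in for re-executing the authors' archived code on their data."""
    return {
        "mean_effect": statistics.mean(data),
        "std_dev": statistics.stdev(data),
    }

def examine(claims: list[Claim], data: list[float]) -> dict[str, str]:
    """Return a verdict per claim: green (reproduced) or yellow (flagged)."""
    recomputed = rerun_analysis(data)
    verdicts = {}
    for c in claims:
        if c.name not in recomputed:
            # The Luiza case: a step with no written-down procedure to rerun.
            verdicts[c.name] = "yellow: no documented procedure to rerun"
        elif abs(recomputed[c.name] - c.reported) <= c.tolerance:
            verdicts[c.name] = "green"
        else:
            verdicts[c.name] = f"yellow: recomputed {recomputed[c.name]:.3f}"
    return verdicts

if __name__ == "__main__":
    data = [0.9, 1.1, 1.0, 1.2, 0.8]
    claims = [
        Claim("mean_effect", reported=1.0, tolerance=0.05),
        # An undocumented calibration step: real, but not legible to the examiner.
        Claim("calibration_offset", reported=0.02, tolerance=0.01),
    ]
    for name, verdict in examine(claims, data).items():
        print(f"{name}: {verdict}")
```

Even in this toy version, the asymmetry the scenario describes is visible: the check rewards whatever can be packaged as rerunnable code and numeric claims, and marks everything else yellow by construction.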
At 11:20 p.m. in a university lab in São Paulo, doctoral student Luiza watches a dashboard mark her paper yellow instead of green. The AI examiner accepts her results but flags two undocumented calibration choices made by a retired technician whose methods were never written down. Her discovery is real, she thinks, yet not legible enough to exist.
Defenders of the system argue that science has always depended on gatekeepers and that automated examiners are at least consistent, tireless, and transparent about criteria. Opponents counter that consistency is not neutrality: entire traditions of fieldwork, craft knowledge, and exploratory inquiry may be sidelined because they resist formal packaging.