%0 Report
%A Pataranutaporn, Pat
%A Powdthavee, Nattavudh
%A Maes, Pattie
%T Can AI Solve the Peer Review Crisis? A Large-Scale Experiment on LLM's Performance and Biases in Evaluating Economics Papers
%D 2025
%8 2025 Jan
%I Institute of Labor Economics (IZA)
%C Bonn
%7 IZA Discussion Paper
%N 17659
%U https://www.iza.org/publications/dp17659
%X We investigate whether artificial intelligence can address the peer review crisis in economics by analyzing 27,090 evaluations of 9,030 unique submissions using a large language model (LLM). The experiment systematically varies author characteristics (e.g., affiliation, reputation, gender) and publication quality (e.g., top-tier, mid-tier, low-tier, AI-generated papers). The results indicate that LLMs effectively distinguish paper quality but exhibit biases favoring prominent institutions, male authors, and renowned economists. Additionally, LLMs struggle to differentiate high-quality AI-generated papers from genuine top-tier submissions. While LLMs offer efficiency gains, their susceptibility to bias necessitates cautious integration and hybrid peer review models to balance equity and accuracy.
%K Artificial Intelligence
%K peer review
%K large language model (LLM)
%K bias in academia
%K economics publishing
%K equity-efficiency trade-off