%0 Report
%A Irlenbusch, Bernd
%A Rau, Holger A.
%A Rilke, Rainer Michael
%T Human–AI Evaluation and Gender Transparency: Application Decisions in Competitive Hiring
%D 2026
%8 2026 Apr
%I Institute of Labor Economics (IZA)
%C Bonn
%7 IZA Discussion Paper
%N 18517
%U https://www.iza.org/publications/dp18517
%X We study how human versus LLM-based evaluation and gender transparency shape entry into competitive jobs. In a preregistered online experiment, participants first complete a Niederle and Vesterlund (2007) tournament task to measure competitive preferences, then prepare text-based job applications and decide whether to apply under each of four evaluation regimes (human only, LLM only, and two hybrid human-in-the-loop configurations), while gender disclosure is randomized between subjects. LLM involvement reduces application rates, with stronger effects for women than men, including under the hybrid designs. The effects are driven by non-competitive candidates: non-competitive women, the group most exposed to AI-induced deterrence, receive the strongest objective evaluations of all subgroups under pure AI assessment, yet are systematically underconfident and apply least often. Competitive men persistently apply and exhibit overconfidence-driven adverse selection, whereas competitive women are resilient to AI-induced deterrence, remain well-calibrated under AI evaluation, and self-select positively across regimes. We find no effects of gender transparency.
%K AI hiring
%K LLMs
%K algorithm aversion
%K gender differences