TY  - RPRT
AU  - Polachek, Solomon
AU  - Romano, Kenneth
AU  - Tonguc, Ozlem
TI  - Strategic Reasoning and Sensitivity to Stakes in the Dictator and Ultimatum Games: LLMs vs. Human Proposers
PY  - 2026/04//
PB  - Institute of Labor Economics (IZA)
CY  - Bonn
T2  - IZA Discussion Paper
IS  - 18545
UR  - https://www.iza.org/publications/dp18545
AB  - This study examines how large language models (LLMs) respond to varying stake sizes in the Dictator and Ultimatum games using the high-stakes design introduced by Andersen et al. (2011). We test ten leading LLMs chosen for their accessibility, prominence, and differences in reasoning capabilities. Results reveal substantial variation across models: only 5 of 10 models exhibit strategic behavior by offering more in the Ultimatum Game (UG) than in the Dictator Game (DG). Relative to humans, 4 models are consistently more generous, 2 consistently less, and 4 vary with stake size. Only 1 model shows a monotonic decline in UG offers as stakes increase; the remaining 9 are non-monotonic or stable. Unlike humans, most models reduce UG offers when endowed with wealth. Prompting for "human-like" decisions generally increases generosity in the UG. These findings are important for evaluating whether LLMs can serve as realistic proxies for human subjects in behavioral experiments and highlight key limitations and future directions for model development.
KW  - ultimatum game
KW  - dictator game
KW  - fairness
KW  - payoff stakes
KW  - artificial intelligence
ER  - 