March 2016

IZA DP No. 9785: Time Aggregation and State Dependence in Welfare Receipt

published in: Economic Journal, 2017, 127 (604), 1833-1873.

Dynamic discrete-choice models are an important tool in studies of state dependence in benefit receipt. A common assumption of such models is that benefit receipt sequences follow a conditional Markov process. This property has implications for how estimated period-to-period benefit transition probabilities should relate when receipt processes are aggregated over time. This paper assesses whether the conditional Markov property holds in welfare benefit receipt dynamics in Norway using high-quality monthly data from administrative records. We find that the standard conditional Markov model is seriously misspecified. Estimated state dependence is affected substantially by the chosen time unit of analysis, with the average treatment effect of past benefit receipt increasing with the level of aggregation. The model can be improved considerably by permitting richer types of benefit dynamics: we find strong evidence of both duration and occurrence dependence in benefit receipt. Allowing for heterogeneity in the entry and persistence processes, we find important disparities in the effects of observed and persistent unobserved characteristics. Based on our preferred model, the month-to-month persistence probability in benefit receipt for a first-time entrant is 37 percentage points higher than the entry rate of an individual without previous benefit receipt. Over a 12-month period, this corresponds to an average treatment effect of 5 percentage points.
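The cross-frequency restriction mentioned above can be illustrated with a minimal sketch: under a first-order (conditional) Markov assumption, the transition matrix over any longer period is the matrix power of the one-period matrix. The monthly entry and persistence probabilities below are hypothetical illustration values, not the paper's estimates.

```python
# Two states: 0 = no benefit receipt, 1 = benefit receipt.
# Under a first-order Markov process, the 12-month transition matrix
# equals the monthly matrix raised to the 12th power.

def mat_mul(a, b):
    """Multiply two 2x2 row-stochastic matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def aggregate(p_monthly, periods):
    """Transition matrix implied over `periods` months by the Markov property."""
    result = [[1.0, 0.0], [0.0, 1.0]]  # identity matrix
    for _ in range(periods):
        result = mat_mul(result, p_monthly)
    return result

entry = 0.02     # hypothetical monthly entry probability P(1 | previously 0)
persist = 0.80   # hypothetical monthly persistence probability P(1 | previously 1)
monthly = [[1 - entry, entry], [1 - persist, persist]]

annual = aggregate(monthly, 12)
# If the conditional Markov property held, annual transition probabilities
# estimated directly from yearly data would have to match `annual`;
# testing such restrictions across time units is what reveals the
# misspecification described in the abstract.
```

Note how the implied 12-month state dependence (`annual[1][1] - annual[0][1]`) is much smaller than the monthly gap `persist - entry`, because the influence of the initial state fades over the year; this is the sense in which the time unit of analysis matters for estimated state dependence.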