Generative AI (GenAI) and Large Language Models (LLMs) are moving into domains once seen as uniquely human: reasoning, synthesis, abstraction, and rhetoric. Addressed to labor economists and informed readers, this paper clarifies what is genuinely new about LLMs, what is not, and why it matters. Drawing an analogy to auto-regressive models in economics, we explain their stochastic nature and why their fluency is often mistaken for agency. We place LLMs in the longer history of human–machine outsourcing, from digestion to cognition, and examine their disruptive effects on white-collar labor, institutions, and epistemic norms. Risks emerge when synthetic content becomes both product and input, creating feedback loops that erode originality and reliability. Grounding the discussion in conceptual clarity rather than hype, we argue that while GenAI may substitute for some labor, statistical limits will likely, though not without major disruption, preserve a key role for human judgment. The question is not only how these tools are used, but which tasks we relinquish and how we reallocate expertise in a new division of cognitive labor.
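The auto-regressive analogy can be made concrete with a minimal sketch (our illustration, not a model from the paper): an AR(1) process generates each value stochastically from the one before it, much as an LLM samples each token conditioned on the preceding sequence. The parameter names (`phi`, `sigma`) are illustrative assumptions.

```python
import random

def ar1_series(phi=0.9, n=20, sigma=1.0, seed=0):
    """Simulate an AR(1) process: x_t = phi * x_{t-1} + eps_t.

    Each step is the deterministic carry-over of the previous value
    plus a random shock -- analogous to an LLM sampling the next
    token given everything generated so far.
    """
    rng = random.Random(seed)
    x, series = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)  # stochastic next step
        series.append(x)
    return series

series = ar1_series()
print(len(series))
```

The point of the analogy: the output can look coherent and purposeful, yet every step is a draw from a conditional distribution; nothing in the mechanism implies agency.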