RWP 25-19, November 2025
Large language models (LLMs) are now used for economic reasoning, but their implicit “preferences” are poorly understood. We study LLM preferences as revealed by their choices in simple allocation games and a job-search setting. Most models favor equal splits in dictator-style allocation games, consistent with inequality aversion. Structural estimates recover Fehr–Schmidt parameters indicating inequality aversion stronger than in comparable experiments with human participants. However, we find these preferences are malleable: reframing (e.g., masking social context) and learned control vectors shift choices toward payoff-maximizing behavior, while personas move them less effectively. We then turn to a more complex economic scenario. Extending a McCall job-search environment, we recover effective discounting from accept/reject policies, but observe that model responses may not always be rationalizable and in some cases suggest inconsistent preferences. Efforts to steer LLM responses in the McCall scenario are also less consistent. Together, our results suggest that (i) LLMs exhibit latent preferences that may not perfectly align with typical human preferences and (ii) LLMs can be steered toward desired preferences, though this is more difficult in complex economic tasks.
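For reference, the abstract draws on two standard models; the sketches below give their textbook forms, and the paper's exact specifications may differ. In the two-player Fehr–Schmidt model, player i's utility over payoffs (x_i, x_j) is

U_i(x_i, x_j) = x_i - \alpha_i \max(x_j - x_i, 0) - \beta_i \max(x_i - x_j, 0),

where \alpha_i measures aversion to disadvantageous inequality and \beta_i aversion to advantageous inequality; estimating these parameters from allocation choices is what "structural estimates recover Fehr–Schmidt parameters" refers to. In the canonical McCall search model with discount factor \delta, per-period unemployment income c, and wage offers drawn from F, the reservation wage w^* solves

\frac{w^*}{1-\delta} = c + \delta \int \max\left( \frac{w'}{1-\delta}, \frac{w^*}{1-\delta} \right) dF(w'),

so observed accept/reject policies can identify an effective discount factor.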
JEL classifications: C63, C68, C61, D14, D83, D91, E20, E21
Article Citation
Cook, Thomas R., Sophia Kazinnik, Zach Modig, and Nathan M. Palmer. “What Do LLMs Want?” Federal Reserve Bank of Kansas City, Research Working Paper no. 25-19, November 2025. Available at https://doi.org/10.18651/RWP2025-19
The views expressed are those of the authors and do not necessarily reflect the positions of the Federal Reserve Bank of Kansas City or the Federal Reserve System.