Rethinking Bounded Rationality: Computationally Rational Choice, Action, and Reward
Dr. Rick Lewis, University of Michigan
Monday, October 15 at 5:30 p.m., 118 Psychology
Abstract
Across the cognitive and behavioral sciences, a distinction is drawn between how we should choose or behave (according to a normative or rational analysis) and how we actually choose or behave (as observed in experiments and as described by theories of cognitive or neural mechanism). This talk presents models based on an alternative perspective that incorporates cognitive bounds into definitions of optimal decision and control, and that explains behavior as a rational adaptation to these bounds. The models offer novel explanations of phenomena in the domains of choice, action, and eye-movement control, including phenomena previously taken to be clear violations of rational decision theory (such as preference reversals). New results in (deep) reinforcement learning show how adapting the reward function itself to an agent's bounds can yield more effective computational agents. These results raise novel theoretical questions for understanding human cognitive development.
Suggested Readings
Singh, S., Lewis, R. L., Barto, A. G., & Sorg, J. (2010). Intrinsically motivated reinforcement learning: An evolutionary perspective. IEEE Transactions on Autonomous Mental Development, 2(2), 70-82.
Lewis, R. L., Howes, A., & Singh, S. (2014). Computational rationality: Linking mechanism and behavior through bounded utility maximization. Topics in Cognitive Science, 6, 279-311.
Howes, A., Warren, P. A., Farmer, G., El-Deredy, W., & Lewis, R. L. (2016). Why contextual preference reversals maximize expected value. Psychological Review, 123(4), 368-391.