Thomas Pouncy (Student Talk Series)

Date: Thursday, April 4, 2019, 12:00pm

Location: Room 105 William James Hall

What is the model in model-based reinforcement learning?


Human intelligence has long been explored in the context of complex, sequential decision-making tasks like chess and Go. While there has been substantial theoretical and empirical support for model-based reinforcement learning (MBRL) accounts of human behavior in these kinds of tasks, much of the existing work in cognitive science has been limited to MBRL algorithms with relatively simplistic representations of task dynamics. In this paper we argue that these simple representations fail to capture a hallmark of human intelligence: our remarkable ability to adapt to changes in task structure with little to no additional training. We demonstrate this key human behavior with a series of novel empirical benchmarks, and then show how a more psychologically plausible MBRL agent can match human performance on these benchmarks by using rule-based, theory-like representations grounded in core cognitive elements like objects, spaces, actions, relations, and object kinds.
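To make the representational contrast concrete, here is a minimal, hypothetical sketch (not the model described in the talk): a tabular transition model that memorizes state-action pairs and must relearn when the task changes, versus a rule stated over object kinds and relations that generalizes to configurations it has never seen. All class and rule names are illustrative assumptions.

```python
# Illustrative sketch only, not the speaker's actual model: contrasting a
# tabular transition model with a rule-based, object-level model of task
# dynamics. All names here are hypothetical.

from dataclasses import dataclass


class TabularModel:
    """A 'simple' MBRL representation: transitions memorized per (state, action)."""

    def __init__(self):
        self.table = {}  # (state, action) -> next_state

    def observe(self, state, action, next_state):
        self.table[(state, action)] = next_state

    def predict(self, state, action):
        # Fails to generalize: any unseen (state, action) pair is unknown,
        # so a change in task structure requires relearning from scratch.
        return self.table.get((state, action))


@dataclass(frozen=True)
class Obj:
    kind: str    # e.g. "agent", "wall"
    pos: tuple   # grid position (x, y)


MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}


class RuleBasedModel:
    """Rule-based, object-level dynamics: an agent moving toward a cell
    advances unless that cell contains a wall. One rule covers many states."""

    def predict(self, objects, action):
        walls = {o.pos for o in objects if o.kind == "wall"}
        dx, dy = MOVES[action]
        result = []
        for o in objects:
            if o.kind == "agent":
                target = (o.pos[0] + dx, o.pos[1] + dy)
                result.append(o if target in walls else Obj("agent", target))
            else:
                result.append(o)
        return result


if __name__ == "__main__":
    # The rule applies to wall layouts never seen before, with no extra training.
    scene = [Obj("agent", (0, 0)), Obj("wall", (1, 0))]
    model = RuleBasedModel()
    print(model.predict(scene, "right"))  # blocked by the wall
    print(model.predict(scene, "up"))     # moves to (0, 1)
```

In this toy contrast, the rule-based model's predictions carry over to rearranged object configurations, which is the kind of zero-shot adaptation to changed task structure that the abstract attributes to human learners.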