
Intrinsically Motivated Hierarchical Reinforcement Learning in Structured Environments

Thursday, 08/27/2009 6:00am to 8:00am
Ph.D. Dissertation Proposal Defense

Christopher Vigorito

Computer Science Building, Room 151

Designing reinforcement learning agents that exhibit versatile and robust performance across ensembles of related tasks in a given environment is an important open problem in machine learning. Such behavior is commonplace in humans and other mammals, and is largely the result of a substantial developmental period during which individuals are intrinsically motivated to explore their environment and learn a hierarchy of abstract skills that can be reused in many situations to achieve commonly encountered sub-goals. While there has been considerable research in the reinforcement learning community on identifying these sub-goals and learning skills to reach them, there has been much less focus on developmental scenarios in which an agent bootstraps the learning of new skills on its current skill set, drawing on already-acquired behavioral expertise as it expands its frontier of knowledge and "know-how." We propose a novel framework for achieving this type of behavior in artificial reinforcement learning agents. The abstract skill hierarchies learned in our framework allow an agent to transfer skills acquired during exploration to the solution of novel tasks in its environment, an essential feature of versatile and robust behavior. We aim to show that agents in our framework can, over time, solve increasingly difficult problems as their skill sets grow more sophisticated.
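The abstract does not specify an algorithm, so the sketch below is only a hypothetical illustration of one common instantiation of these ideas, not the framework proposed in the talk. It shows competence-based intrinsic motivation over option-like skills in a toy chain world: each skill is a tabular Q-learner aimed at a sub-goal, and the agent repeatedly practices whichever skill has the highest running TD error, a crude proxy for learning progress, so its effort tracks the frontier of its competence. All names, the environment, and the learning-progress signal are assumptions made for the example.

    import random
    from collections import defaultdict

    # Hypothetical toy environment: a 1-D chain of states; primitive
    # actions move the agent left or right, clamped at the ends.
    N_STATES = 10
    ACTIONS = [-1, +1]

    def step(state, action):
        return max(0, min(N_STATES - 1, state + action))

    class Skill:
        """An option-like skill: a tabular Q-learning policy toward a
        sub-goal, terminating when the sub-goal is reached. Tracks a
        running average of |TD error| as a crude learning-progress
        signal for intrinsic motivation."""
        def __init__(self, subgoal, alpha=0.1, gamma=0.95, eps=0.2):
            self.subgoal = subgoal
            self.q = defaultdict(float)   # (state, action) -> value
            self.recent_error = 1.0       # high so untrained skills get tried
            self.alpha, self.gamma, self.eps = alpha, gamma, eps

        def act(self, s):
            # epsilon-greedy action selection over primitive actions
            if random.random() < self.eps:
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: self.q[(s, a)])

        def run(self, s, max_steps=30):
            """Execute the option from state s, learning along the way;
            returns the state where the option terminated."""
            for _ in range(max_steps):
                a = self.act(s)
                s2 = step(s, a)
                r = 1.0 if s2 == self.subgoal else 0.0
                target = r if s2 == self.subgoal else (
                    self.gamma * max(self.q[(s2, b)] for b in ACTIONS))
                td = target - self.q[(s, a)]
                self.q[(s, a)] += self.alpha * td
                # exponential average of |TD error|: stays high while the
                # skill is still improving, decays once it is mastered
                self.recent_error = 0.95 * self.recent_error + 0.05 * abs(td)
                s = s2
                if s == self.subgoal:
                    break
            return s

    # Intrinsically motivated practice loop: always practice the skill
    # with the largest recent learning progress, so attention shifts away
    # from mastered skills toward ones still being acquired.
    skills = [Skill(subgoal=g) for g in (3, 6, 9)]
    state = 0
    for episode in range(500):
        skill = max(skills, key=lambda k: k.recent_error)
        state = skill.run(state)

    for k in skills:
        print(f"subgoal {k.subgoal}: residual error {k.recent_error:.3f}")

A fuller treatment in the spirit of the abstract would also let learned skills appear as actions inside other skills, so that new skills are bootstrapped on the existing hierarchy; the sketch omits that composition step for brevity.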

Advisor: Andrew Barto