
Learning structured representations for perception and control


dc.contributor Joshua B. Tenenbaum.
dc.contributor Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences.
dc.creator Kulkarni, Tejas Dattatraya
dc.date 2017-03-20T19:39:55Z
dc.date 2016
dc.identifier http://hdl.handle.net/1721.1/107557
dc.identifier 974640245
dc.description Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2016.
dc.description Cataloged from PDF version of thesis.
dc.description Includes bibliographical references (pages 117-129).
dc.description I argue that the intersection of deep learning, hierarchical reinforcement learning, and generative models provides a promising avenue towards building agents that learn to produce goal-directed behavior given sensations. I present models and algorithms that learn from raw observations, with an emphasis on minimizing their sample complexity and the number of training steps required for convergence. To this end, I introduce hierarchical variants of deep reinforcement learning algorithms, which produce and utilize temporally extended abstractions over actions. I also present a hybrid model-free and model-based deep reinforcement learning model, which can potentially be used to automatically extract subgoals for bootstrapping temporal abstractions. I then present a model-based approach to perception, unifying deep learning and probabilistic models, that learns powerful representations of images without labeled data or external rewards.

Learning goal-directed behavior with sparse and delayed rewards is a fundamental challenge for reinforcement learning algorithms. The primary difficulty arises from insufficient exploration, which leaves the agent unable to learn robust value functions. I present the Deep Hierarchical Reinforcement Learning (h-DQN) approach, which integrates hierarchical value functions operating at different time scales with goal-driven, intrinsically motivated behavior for efficient exploration. Intrinsically motivated agents explore new behavior for its own sake rather than to directly solve problems; such intrinsic behaviors can eventually help the agent solve tasks posed by the environment. h-DQN allows for flexible goal specifications, such as functions over entities and relations, which provides an efficient space for exploration in complicated environments. I demonstrate h-DQN's ability to learn optimal behavior from raw pixels in environments with very sparse and delayed feedback.

I then introduce the Deep Successor Reinforcement (DSR) learning approach, a hybrid model-free and model-based RL algorithm. DSR learns the value function of a state as the inner product between the state's expected future feature occupancy and the corresponding immediate rewards. This factorization of the value function has several appealing properties: increased sensitivity to changes in the reward structure and, potentially, the ability to automatically extract subgoals for learning temporal abstractions.

Finally, I argue for the need for better representations of images, both in reinforcement learning tasks and in general. Existing deep learning approaches learn useful representations given large amounts of labeled data or rewards, but they lack the inductive biases needed to disentangle causal structure in images, such as objects, shape, pose, and other intrinsic scene properties. I present generative models of vision, often referred to as analysis-by-synthesis approaches, which combine deep generative methods with probabilistic modeling. This approach aims to learn structured representations of images given raw observations. I argue that such intermediate representations will be crucial for scaling up deep reinforcement learning algorithms and for bridging the gap between machine and human learning.
dc.description by Tejas Dattatraya Kulkarni.
dc.description Ph. D.
dc.format 129 pages
dc.format application/pdf
dc.language eng
dc.publisher Massachusetts Institute of Technology
dc.rights MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.
dc.rights http://dspace.mit.edu/handle/1721.1/7582
dc.subject Brain and Cognitive Sciences.
dc.title Learning structured representations for perception and control
dc.type Thesis
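
Note on the DSR value factorization mentioned in the abstract: it has a compact mathematical form. The following is a minimal sketch in standard successor-representation notation; the symbols phi, psi, and w are an assumed convention and are not taken from the record itself.

    % Minimal sketch of the DSR value factorization (assumed notation)
    % \phi(s): learned feature vector for state s
    % w:       reward weights, so the immediate reward satisfies r(s) \approx \phi(s)^{\top} w
    % \psi(s): successor features, i.e. expected discounted future feature occupancy
    \begin{align*}
      r(s)   &\approx \phi(s)^{\top} w \\
      \psi(s) &= \mathbb{E}\Big[\textstyle\sum_{t=0}^{\infty} \gamma^{t} \phi(s_t) \,\Big|\, s_0 = s\Big] \\
      V(s)   &= \mathbb{E}\Big[\textstyle\sum_{t=0}^{\infty} \gamma^{t} r(s_t) \,\Big|\, s_0 = s\Big]
               \approx \psi(s)^{\top} w
    \end{align*}

Under this factorization, a change in the reward structure only requires re-estimating w while the successor features can be reused, which is one way to read the abstract's claim of increased sensitivity to reward changes.

The two-level structure of h-DQN described in the abstract can likewise be sketched in pseudocode. The Python sketch below is illustrative only; the names (env, meta_controller, controller, critic) are hypothetical placeholders, not APIs from the thesis code.

    # Hypothetical sketch of a two-level h-DQN interaction loop (illustrative names only).
    def run_episode(env, meta_controller, controller, critic):
        state = env.reset()
        done = False
        total_extrinsic = 0.0
        while not done:
            # Top level: choose a temporally extended goal (e.g. an entity to reach).
            goal = meta_controller.select_goal(state)
            start_state, extrinsic_sum = state, 0.0
            while not done and not critic.goal_reached(state, goal):
                # Bottom level: act on raw observations, conditioned on the current goal.
                action = controller.select_action(state, goal)
                next_state, extrinsic_reward, done = env.step(action)
                # The internal critic pays an intrinsic reward for progress toward the goal.
                intrinsic = critic.intrinsic_reward(state, action, next_state, goal)
                controller.store((state, goal, action, intrinsic, next_state, done))
                extrinsic_sum += extrinsic_reward
                state = next_state
            # The meta-controller learns over goal-length transitions from extrinsic reward only.
            meta_controller.store((start_state, goal, extrinsic_sum, state, done))
            total_extrinsic += extrinsic_sum
        return total_extrinsic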


Files in this item

File: 974640245-MIT.pdf
Size: 14.08 MB
Format: application/pdf
