Think! Evidence

Stochastic architectures for probabilistic computation

Show simple item record

dc.contributor Joshua B. Tenenbaum.
dc.contributor Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences.
dc.creator Jonas, Eric Michael
dc.date 2014-05-23T19:33:07Z
dc.date 2014
dc.identifier http://hdl.handle.net/1721.1/87457
dc.identifier 879661588
dc.description Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2014.
dc.description Cataloged from PDF version of thesis.
dc.description Includes bibliographical references (pages 107-111).
dc.description The brain interprets ambiguous sensory information faster and more reliably than modern computers, using neurons that are slower and less reliable than logic gates. But Bayesian inference, which is at the heart of many models for sensory information processing and cognition, as well as many machine intelligence systems, appears computationally challenging, even given modern transistor speeds and energy budgets. The computational principles and structures needed to narrow this gap are unknown. Here I show how to build fast Bayesian computing machines using intentionally stochastic, digital parts, narrowing this efficiency gap by multiple orders of magnitude. By connecting stochastic digital components according to simple mathematical rules, it is possible to rapidly, reliably, and accurately solve many Bayesian inference problems using massively parallel, low precision circuits. I show that our circuits can solve problems of depth and motion perception, perceptual learning and causal reasoning via inference over 10,000+ latent variables in real time - a 1,000x speed advantage over commodity microprocessors - by exploiting stochasticity. I show how this natively stochastic approach follows naturally from the probability algebra, giving rise to easy-to-understand rules for abstraction and composition. I have developed a compiler that automatically generates circuits for a wide variety of fixed-structure problems. I then present stochastic computing architectures that remain viable even when constrained by silicon area and by the dynamic creation and destruction of random variables. These results thus expose a new role for randomness and Bayesian inference in the engineering and reverse-engineering of computing machines.
dc.description by Eric Jonas.
dc.description Ph. D.
dc.format 111 pages
dc.format application/pdf
dc.language eng
dc.publisher Massachusetts Institute of Technology
dc.rights M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.
dc.rights http://dspace.mit.edu/handle/1721.1/7582
dc.subject Brain and Cognitive Sciences.
dc.title Stochastic architectures for probabilistic computation
dc.type Thesis
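The abstract's central idea - connecting stochastic elements so that each one resamples its latent variable from the conditional distribution given its neighbors - can be illustrated with a minimal Gibbs sampler over a small binary Markov random field. This is a hedged sketch of the general technique, not the thesis's actual circuit design; the grid size, coupling strength, and function names here are illustrative assumptions.

```python
import math
import random

# Illustrative sketch: each "stochastic element" holds one binary
# variable (+1/-1) and resamples it conditioned on its neighbors in a
# small Ising-style Markov random field. The only primitive each
# element needs is a biased coin flip.

def conditional_prob_up(state, i, neighbors, coupling=0.8):
    """P(x_i = +1 | neighbors), from the local energy difference."""
    field = coupling * sum(state[j] for j in neighbors[i])
    return 1.0 / (1.0 + math.exp(-2.0 * field))

def gibbs_sweep(state, neighbors, rng):
    """One full sweep of Gibbs updates over all variables."""
    for i in range(len(state)):
        p = conditional_prob_up(state, i, neighbors)
        state[i] = 1 if rng.random() < p else -1
    return state

if __name__ == "__main__":
    rng = random.Random(0)
    n = 16  # 4x4 grid, illustrative size
    # Grid adjacency: left/right neighbors only within the same row.
    neighbors = {
        i: [j for j in (i - 1, i + 1, i - 4, i + 4)
            if 0 <= j < n and (abs(i - j) != 1 or i // 4 == j // 4)]
        for i in range(n)
    }
    state = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(100):
        gibbs_sweep(state, neighbors, rng)
    print(state)  # one sample from the coupled field
```

In the thesis's framing, each variable's update is an independent stochastic digital unit, so a full sweep can be parallelized across non-adjacent variables - which is where the massive-parallelism advantage over a sequential microprocessor comes from.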


Files in this item

Files Size Format View
879661588-MIT.pdf 20.10Mb application/pdf View/Open


This item appears in the following Collection(s)
