Think! Evidence

The acquisition of inductive constraints


dc.contributor Joshua Tenenbaum.
dc.contributor Massachusetts Institute of Technology. Dept. of Brain and Cognitive Sciences.
dc.creator Kemp, Charles, Ph. D. Massachusetts Institute of Technology
dc.date 2008-09-02T17:59:07Z
dc.date 2008
dc.identifier http://hdl.handle.net/1721.1/42074
dc.identifier 238611597
dc.description Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2008.
dc.description This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
dc.description Includes bibliographical references (p. 197-216).
dc.description Human learners routinely make inductive inferences, or inferences that go beyond the data they have observed. Inferences like these must be supported by constraints, some of which are innate, although others are almost certainly learned. This thesis presents a hierarchical Bayesian framework that helps to explain the nature, use, and acquisition of inductive constraints. Hierarchical Bayesian models include multiple levels of abstraction, and the representations at the upper levels place constraints on the representations at the lower levels. The probabilistic nature of these models allows them to make statistical inferences at multiple levels of abstraction. In particular, they show how knowledge can be acquired at levels quite remote from the data of experience--levels where the representations learned are naturally described as inductive constraints. Hierarchical Bayesian models can address inductive problems from many domains, but this thesis focuses on models that address three aspects of high-level cognition. The first model is sensitive to patterns of feature variability, and acquires constraints similar to the shape bias in word learning. The second model acquires causal schemata--systems of abstract causal knowledge that allow learners to discover causal relationships given very sparse data. The final model discovers the structural form of a domain--for instance, it discovers whether the relationships between a set of entities are best described by a tree, a chain, a ring, or some other kind of representation. The hierarchical Bayesian approach captures several principles that go beyond traditional formulations of learning theory.
dc.description (cont.) It supports learning at multiple levels of abstraction, it handles structured representations, and it helps to explain how learning can succeed given sparse and noisy data. Principles like these are needed to explain how humans acquire rich systems of knowledge, and hierarchical Bayesian models point the way towards a modern learning theory that is better able to capture the sophistication of human learning.
dc.description by Charles Kemp.
dc.description Ph.D.
dc.format 216 p.
dc.format application/pdf
dc.language eng
dc.publisher Massachusetts Institute of Technology
dc.rights M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.
dc.rights http://dspace.mit.edu/handle/1721.1/7582
dc.subject Brain and Cognitive Sciences.
dc.title The acquisition of inductive constraints
dc.type Thesis
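The abstract above describes hierarchical Bayesian learning of inductive constraints, such as a shape-bias-like expectation that features are consistent within categories. As an illustration only -- using toy data and a simple beta-binomial hierarchy of my own choosing, not the thesis's actual models -- the sketch below shows the core idea: a learner infers an upper-level "consistency" hyperparameter from several known categories at once, and that acquired constraint then licenses confident generalization from a single example of a brand-new category.

```python
import math
from itertools import product

# Hypothetical toy data (not from the thesis): for each of four known
# object categories, how many observed members shared the category's
# dominant shape, out of the members seen.
categories = [(9, 10), (8, 10), (10, 10), (7, 10)]

def log_marglik(a, b, k, n):
    """Log beta-binomial marginal likelihood of k successes in n trials,
    integrating out the per-category consistency p ~ Beta(a, b).
    The n-choose-k constant is dropped: it is shared by all settings."""
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + math.lgamma(a + k) + math.lgamma(b + n - k)
            - math.lgamma(a + b + n))

def score(m, s):
    """Evidence for the upper-level hypothesis (mean m, concentration s):
    each category draws its shape consistency from Beta(m*s, (1-m)*s)."""
    a, b = m * s, (1 - m) * s
    return sum(log_marglik(a, b, k, n) for k, n in categories)

# Upper level: grid search over the constraint itself. A high
# concentration s means "shape is reliable within a category" --
# a learned analogue of the shape bias.
grid = list(product([0.1, 0.3, 0.5, 0.7, 0.9], [1, 2, 5, 10, 20, 50]))
m_hat, s_hat = max(grid, key=lambda ms: score(*ms))

# The acquired constraint supports one-shot generalization: after a
# single shape-consistent example of a new category, the posterior
# predictive that its next member shares that shape is already high.
a, b = m_hat * s_hat, (1 - m_hat) * s_hat
one_shot = (a + 1) / (a + b + 2)
print(m_hat, s_hat, round(one_shot, 3))
```

Because every known category is internally consistent, the grid search settles on a high concentration, and that abstract knowledge -- not the single new observation by itself -- is what drives the strong one-shot prediction. This mirrors the abstract's point that learning can succeed at levels quite remote from the data of experience.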


Files in this item

Files Size Format View
238611597-MIT.pdf 1.804Mb application/pdf View/Open



