Perceptually inspired image estimation and enhancement

dc.contributor Edward H. Adelson.
dc.contributor Massachusetts Institute of Technology. Dept. of Brain and Cognitive Sciences.
dc.creator Li, Yuanzhen, Ph. D. Massachusetts Institute of Technology
dc.date 2009-11-06T16:27:53Z
dc.date 2009
dc.identifier http://hdl.handle.net/1721.1/49739
dc.identifier 449225674
dc.description Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2009.
dc.description Includes bibliographical references (p. 137-144).
dc.description In this thesis, we present three image estimation and enhancement algorithms inspired by human vision. In the first part of the thesis, we propose an algorithm for mapping one image to another based on the statistics of a training set. Many vision problems can be cast as image mapping problems, such as estimating reflectance from luminance, estimating shape from shading, and separating signal from noise. Such problems are typically under-constrained, yet humans are remarkably good at solving them. Classic computational theories attribute this ability of the human visual system to the use of intuitive regularities of the world, e.g., that surfaces tend to be piecewise constant. In recent years, there has been considerable interest in deriving more sophisticated statistical constraints from natural images, but because of the high-dimensional nature of images, representing and utilizing the learned models remains a challenge. Our techniques produce models that are very easy to store and to query. We show these techniques to be effective for a number of applications: removing noise from images, estimating a sharp image from a blurry one, decomposing an image into reflectance and illumination, and interpreting lightness illusions. In the second part of the thesis, we present an algorithm for compressing the dynamic range of an image while retaining important visual detail. The human visual system confronts a serious challenge with dynamic range, in that the physical world has an extremely high dynamic range while neurons have a low dynamic range.
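The example-based image mapping described in the abstract above can be illustrated with a small sketch. The Python code below is a hypothetical, minimal illustration only, not the method or code from the thesis: it quantizes a simple hand-picked local feature (pixel value plus a crude local contrast) into a lookup table learned from aligned input/output training pairs, whereas the thesis uses richer features and models. All function and parameter names are invented for this sketch.

```python
# Illustrative sketch (not the thesis code): learn a mapping from a local
# feature of the input image to the corresponding output value, stored in
# a small lookup table that is cheap to keep and to query.
import numpy as np

def local_feature(img):
    # Assumed feature: the pixel value plus a crude local contrast measure.
    pad = np.pad(img, 1, mode="edge")
    local_mean = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                  pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    return np.stack([img, img - local_mean], axis=-1)

def train_lookup(inputs, targets, bins=32):
    # Quantize the 2-D feature and store the mean target value per cell.
    table_sum = np.zeros((bins, bins))
    table_cnt = np.zeros((bins, bins))
    for x, y in zip(inputs, targets):
        f = local_feature(x)
        idx0 = np.clip((f[..., 0] * bins).astype(int), 0, bins - 1)
        idx1 = np.clip(((f[..., 1] + 1) / 2 * bins).astype(int), 0, bins - 1)
        np.add.at(table_sum, (idx0, idx1), y)
        np.add.at(table_cnt, (idx0, idx1), 1)
    return table_sum / np.maximum(table_cnt, 1)

def apply_lookup(table, x, bins=32):
    # Map a new image by querying the learned table per pixel.
    f = local_feature(x)
    idx0 = np.clip((f[..., 0] * bins).astype(int), 0, bins - 1)
    idx1 = np.clip(((f[..., 1] + 1) / 2 * bins).astype(int), 0, bins - 1)
    return table[idx0, idx1]
```

The point of the sketch is only the flavor of "easy to store and to query": once training pairs (for example, noisy/clean images) are reduced to a small table over local feature statistics, estimation at test time is a direct table lookup.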
dc.description (cont.) The human visual system performs dynamic range compression by applying automatic gain control in both the retina and the visual cortex. Taking inspiration from this, we designed techniques that combine multi-scale subband transforms with smooth gain control on the subband coefficients, resembling the contrast gain control mechanism in the visual cortex. We show our techniques to be successful in producing dynamic-range-compressed images without compromising the visibility of detail or introducing artifacts. We also show that the techniques can be adapted to the related problem of "companding", in which a high dynamic range image is converted to a low dynamic range image and saved using fewer bits, and later expanded back to high dynamic range with minimal loss of visual quality. In the third part of the thesis, we propose a technique that enables a user to easily localize image and video editing by drawing a small number of rough scribbles. Image segmentation, usually treated as an unsupervised clustering problem, is extremely difficult to solve. With a minimal degree of user supervision, however, we are able to generate selection masks of good quality. Our technique learns a classifier from the user-scribbled pixels and uses it to assign the remaining pixels to distinct classes. It then uses the classification results as per-pixel data terms, combines them with a smoothness term that respects color discontinuities, and generates better results than state-of-the-art algorithms for interactive segmentation.
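Two further sketches, again purely illustrative and not taken from the thesis, show the general shape of the ideas in this part of the abstract. The first assumes a simple Gaussian-pyramid-style band decomposition as a stand-in for the multi-scale subband transform, and a smooth compressive gain applied to each subband coefficient; the particular transform, gain curve, and parameters used in the thesis are not reproduced here.

```python
# Illustrative sketch (not the thesis code): compress dynamic range by
# splitting the log-luminance into band-pass subbands and attenuating
# large coefficients with a smooth gain function.
import numpy as np
from scipy import ndimage

def gain(c, sigma=0.1, gamma=0.6):
    # Smooth compressive gain: roughly 1 for small coefficients and
    # progressively less than 1 as |c| grows (gamma < 1 compresses).
    return ((np.abs(c) + sigma) / sigma) ** (gamma - 1.0)

def compress(log_lum, levels=4):
    pyramid, current = [], log_lum
    for _ in range(levels):
        low = ndimage.gaussian_filter(current, sigma=2.0)
        pyramid.append(current - low)      # band-pass subband
        current = low
    out = current                          # low-pass residual kept as-is
    for band in reversed(pyramid):
        out = out + band * gain(band)      # smooth gain per coefficient
    return out                             # compressed log-luminance
```

The second sketch corresponds to the scribble-based selection: a toy nearest-mean color classifier trained on the scribbled pixels produces per-pixel labels, which in the approach described above would serve as data terms to be combined with a color-aware smoothness term (for example, via an MRF solved with graph cuts); that combination step is not implemented here.

```python
# Illustrative sketch (not the thesis code): classify unlabeled pixels from
# user scribbles with a nearest-mean color classifier.
import numpy as np

def scribble_classify(image, scribble_labels):
    # image: (H, W, 3) float RGB; scribble_labels: (H, W) ints, -1 = unlabeled.
    classes = np.unique(scribble_labels[scribble_labels >= 0])
    means = np.stack([image[scribble_labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(image[:, :, None, :] - means[None, None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=-1)]   # per-pixel hard labels
```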
dc.description by Yuanzhen Li.
dc.description Ph.D.
dc.format 144 p.
dc.format application/pdf
dc.language eng
dc.publisher Massachusetts Institute of Technology
dc.rights M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.
dc.rights http://dspace.mit.edu/handle/1721.1/7582
dc.subject Brain and Cognitive Sciences.
dc.title Perceptually inspired image estimation and enhancement
dc.type Thesis


Files in this item

Files Size Format
449225674-MIT.pdf 31.78 MB application/pdf

