Learning from Incomplete Data
Author(s)
Ghahramani, Zoubin; Jordan, Michael I.
Abstract
Real-world learning tasks often involve high-dimensional data sets with complex patterns of missing features. In this paper we review the problem of learning from incomplete data from two statistical perspectives---the likelihood-based and the Bayesian. The goal is two-fold: to place current neural network approaches to missing data within a statistical framework, and to describe a set of algorithms, derived from the likelihood-based framework, that handle clustering, classification, and function approximation from incomplete data in a principled and efficient manner. These algorithms are based on mixture modeling and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster, Laird, and Rubin 1977)---both for the estimation of mixture components and for coping with the missing data.
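The abstract's two uses of EM — fitting the mixture components and handling the missing features — can be illustrated with a minimal sketch. The code below is not the paper's exact algorithm (the paper treats general mixture models; this sketch assumes a diagonal-covariance Gaussian mixture and NaN-coded missing entries): in the E-step, responsibilities are computed from the observed dimensions only, and in the M-step, missing entries contribute their expected value and expected square under each component.

```python
import numpy as np

def em_mixture_missing(X, K, n_iter=50, seed=0):
    """EM for a diagonal-covariance Gaussian mixture with missing data.

    Missing entries in X are NaN. Responsibilities use the marginal
    likelihood over observed dimensions; the M-step fills in missing
    sufficient statistics with their conditional expectations.
    (Illustrative sketch; `em_mixture_missing` is not from the paper.)
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    obs = ~np.isnan(X)                       # mask of observed entries
    Xf = np.where(obs, X, 0.0)               # NaNs zeroed for arithmetic

    # Initialize: uniform weights, means near the observed column means.
    pi = np.full(K, 1.0 / K)
    col_means = np.nansum(X, axis=0) / obs.sum(axis=0)
    mu = col_means + 0.1 * rng.standard_normal((K, d))
    var = np.ones((K, d))

    for _ in range(n_iter):
        # E-step: log responsibilities from observed dimensions only.
        logr = np.tile(np.log(pi), (n, 1))
        for k in range(K):
            ll = -0.5 * (np.log(2 * np.pi * var[k])
                         + (Xf - mu[k]) ** 2 / var[k])
            logr[:, k] += np.where(obs, ll, 0.0).sum(axis=1)
        logr -= logr.max(axis=1, keepdims=True)
        r = np.exp(logr)
        r /= r.sum(axis=1, keepdims=True)    # (n, K) responsibilities

        # M-step: a missing entry contributes E[x | z=k] = mu_k and
        # E[x^2 | z=k] = mu_k^2 + var_k (second appeal to EM).
        Nk = r.sum(axis=0)
        pi = Nk / n
        for k in range(K):
            Ex = np.where(obs, Xf, mu[k])
            Ex2 = np.where(obs, Xf ** 2, mu[k] ** 2 + var[k])
            mu[k] = (r[:, k] @ Ex) / Nk[k]
            var[k] = (r[:, k] @ Ex2) / Nk[k] - mu[k] ** 2 + 1e-6
    return pi, mu, var
```

On well-separated synthetic clusters with a moderate fraction of entries deleted at random, this sketch recovers the component means from the incomplete data alone, which is the behavior the likelihood-based framework in the paper guarantees more generally.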
Date issued
1995-01-24
Other identifiers
AIM-1509
CBCL-108
Series/Report no.
AIM-1509
CBCL-108
Keywords
AI, MIT, Artificial Intelligence, missing data, mixture models, statistical learning, EM algorithm, maximum likelihood, neural networks