Learning from Incomplete Data
| Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Ghahramani, Zoubin | en_US |
| dc.contributor.author | Jordan, Michael I. | en_US |
| dc.date.accessioned | 2004-10-20T20:49:37Z | |
| dc.date.available | 2004-10-20T20:49:37Z | |
| dc.date.issued | 1995-01-24 | en_US |
| dc.identifier.other | AIM-1509 | en_US |
| dc.identifier.other | CBCL-108 | en_US |
| dc.identifier.uri | http://hdl.handle.net/1721.1/7202 | |
| dc.description.abstract | Real-world learning tasks often involve high-dimensional data sets with complex patterns of missing features. In this paper we review the problem of learning from incomplete data from two statistical perspectives---the likelihood-based and the Bayesian. The goal is two-fold: to place current neural network approaches to missing data within a statistical framework, and to describe a set of algorithms, derived from the likelihood-based framework, that handle clustering, classification, and function approximation from incomplete data in a principled and efficient manner. These algorithms are based on mixture modeling and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster, Laird, and Rubin 1977)---both for the estimation of mixture components and for coping with the missing data. | en_US |
| dc.format.extent | 11 p. | en_US |
| dc.format.extent | 388268 bytes | |
| dc.format.extent | 515095 bytes | |
| dc.format.mimetype | application/postscript | |
| dc.format.mimetype | application/pdf | |
| dc.language.iso | en_US | |
| dc.relation.ispartofseries | AIM-1509 | en_US |
| dc.relation.ispartofseries | CBCL-108 | en_US |
| dc.subject | AI | en_US |
| dc.subject | MIT | en_US |
| dc.subject | Artificial Intelligence | en_US |
| dc.subject | missing data | en_US |
| dc.subject | mixture models | en_US |
| dc.subject | statistical learning | en_US |
| dc.subject | EM algorithm | en_US |
| dc.subject | maximum likelihood | en_US |
| dc.title | Learning from Incomplete Data | en_US |
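
The abstract above describes mixture-model algorithms that appeal to EM twice: once to estimate the mixture components, and once to handle the missing features. Below is a minimal sketch of that idea for a Gaussian mixture with diagonal covariances, with missing entries encoded as NaN. The function name `em_gmm_missing`, the NaN encoding, and the diagonal-covariance simplification are illustrative assumptions, not the paper's exact algorithm or model details.

```python
import numpy as np

def em_gmm_missing(X, n_components, n_iter=50, seed=0):
    """Fit a diagonal-covariance Gaussian mixture to data with NaN entries.

    Both appeals to EM mentioned in the abstract appear here: responsibilities
    estimate the mixture components, and the expected sufficient statistics of
    the missing coordinates are filled in per component.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    obs = ~np.isnan(X)                       # observed-entry mask
    Xz = np.where(obs, X, 0.0)               # zeros in place of NaNs for arithmetic

    # Initialize parameters from per-column statistics of the observed entries.
    n_obs = obs.sum(axis=0)
    col_mean = np.nansum(X, axis=0) / n_obs
    col_var = ((Xz - col_mean) ** 2 * obs).sum(axis=0) / n_obs + 1e-6
    mu = col_mean + 0.1 * rng.standard_normal((n_components, d)) * np.sqrt(col_var)
    var = np.tile(col_var, (n_components, 1))
    pi = np.full(n_components, 1.0 / n_components)

    for _ in range(n_iter):
        # E-step, part 1: responsibilities from the likelihood of the
        # *observed* coordinates only (with a diagonal covariance, the
        # missing dimensions marginalize out of the component density).
        log_r = np.log(pi)[None, :] + np.zeros((n, n_components))
        for k in range(n_components):
            ll = -0.5 * (np.log(2 * np.pi * var[k]) + (Xz - mu[k]) ** 2 / var[k])
            log_r[:, k] += (ll * obs).sum(axis=1)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)

        # E-step, part 2 + M-step: replace each missing entry with its
        # conditional mean / second moment under component k, then apply
        # the closed-form weighted updates.
        for k in range(n_components):
            x_hat = np.where(obs, Xz, mu[k])                      # E[x | k]
            x2_hat = np.where(obs, Xz ** 2, mu[k] ** 2 + var[k])  # E[x^2 | k]
            w = r[:, k][:, None]
            Nk = r[:, k].sum() + 1e-12
            mu[k] = (w * x_hat).sum(axis=0) / Nk
            var[k] = (w * x2_hat).sum(axis=0) / Nk - mu[k] ** 2 + 1e-6
            pi[k] = Nk / n

    return pi, mu, var

# Example: 300 points from two clusters, with 30% of entries removed at random.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (150, 4)), rng.normal(4, 1, (150, 4))])
X[rng.random(X.shape) < 0.3] = np.nan
pi, mu, var = em_gmm_missing(X, n_components=2)
print(np.round(mu, 2))   # component means should land near 0 and 4
```

The diagonal-covariance choice keeps the sketch short: given a component, missing dimensions are independent of observed ones, so their conditional expectations are just the component's own mean and variance. With full covariances (closer to the paper's setting), the E-step would instead use the conditional Gaussian of the missing block given the observed block.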