Show simple item record

dc.contributor.author: Ghahramani, Zoubin
dc.contributor.author: Jordan, Michael I.
dc.date.accessioned: 2004-10-20T20:49:37Z
dc.date.available: 2004-10-20T20:49:37Z
dc.date.issued: 1995-01-24
dc.identifier.other: AIM-1509
dc.identifier.other: CBCL-108
dc.identifier.uri: http://hdl.handle.net/1721.1/7202
dc.description.abstract: Real-world learning tasks often involve high-dimensional data sets with complex patterns of missing features. In this paper we review the problem of learning from incomplete data from two statistical perspectives---the likelihood-based and the Bayesian. The goal is two-fold: to place current neural network approaches to missing data within a statistical framework, and to describe a set of algorithms, derived from the likelihood-based framework, that handle clustering, classification, and function approximation from incomplete data in a principled and efficient manner. These algorithms are based on mixture modeling and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster, Laird, and Rubin 1977)---both for the estimation of mixture components and for coping with the missing data.
dc.format.extent: 11 p.
dc.format.extent: 388268 bytes
dc.format.extent: 515095 bytes
dc.format.mimetype: application/postscript
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.relation.ispartofseries: AIM-1509
dc.relation.ispartofseries: CBCL-108
dc.subject: AI
dc.subject: MIT
dc.subject: Artificial Intelligence
dc.subject: missing data
dc.subject: mixture models
dc.subject: statistical learning
dc.subject: EM algorithm
dc.subject: maximum likelihood
dc.subject: neural networks
dc.title: Learning from Incomplete Data
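
The abstract's "two distinct appeals to the EM principle" --- fitting the mixture and coping with the missing values --- can be illustrated with a small sketch. This is not code from the paper; it assumes a diagonal-covariance Gaussian mixture and features marked missing with NaN (missing at random). In the E-step, responsibilities are computed from the observed coordinates only (trivial marginalization under diagonal covariance); in the M-step, missing coordinates contribute their expected sufficient statistics E[x_j | k] = mu_kj and E[x_j^2 | k] = mu_kj^2 + sigma_kj^2.

```python
import numpy as np

def em_mixture_missing(X, K=2, iters=50):
    """EM for a diagonal-covariance Gaussian mixture with NaN-marked
    missing features. Illustrative sketch only, not the paper's code."""
    n, d = X.shape
    obs = ~np.isnan(X)  # mask of observed entries
    # Deterministic init: K rows spread through the data, NaNs filled
    # with column means; per-component variance from observed columns.
    col_mean = np.nanmean(X, axis=0)
    idx = np.linspace(0, n - 1, K).astype(int)
    mu = np.where(np.isnan(X[idx]), col_mean, X[idx])
    var = np.tile(np.nanvar(X, axis=0) + 1e-3, (K, 1))
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: log responsibilities from observed coordinates only.
        logp = np.zeros((n, K))
        for k in range(K):
            Xk = np.where(obs, X, mu[k])  # placeholder keeps NaN out
            ll = -0.5 * (np.log(2 * np.pi * var[k]) + (Xk - mu[k]) ** 2 / var[k])
            logp[:, k] = np.log(pi[k]) + np.where(obs, ll, 0.0).sum(axis=1)
        logp -= logp.max(axis=1, keepdims=True)
        R = np.exp(logp)
        R /= R.sum(axis=1, keepdims=True)
        # M-step: missing entries enter via expected sufficient statistics.
        for k in range(K):
            Ex = np.where(obs, X, mu[k])                   # E[x_j | k]
            Ex2 = np.where(obs, X ** 2, mu[k] ** 2 + var[k])  # E[x_j^2 | k]
            w = R[:, k][:, None]
            Nk = R[:, k].sum()
            mu[k] = (w * Ex).sum(axis=0) / Nk
            var[k] = (w * Ex2).sum(axis=0) / Nk - mu[k] ** 2 + 1e-6
            pi[k] = Nk / n
    return pi, mu, var, R
```

Because the same EM iteration updates the mixture parameters and (implicitly) the distribution over the missing values, no separate imputation pass is needed --- the missing entries are never filled in, only their expected statistics under each component are used.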

