Show simple item record

dc.contributor.author    Kawaguchi, Kenji
dc.contributor.author    Bengio, Yoshua
dc.contributor.author    Verma, Vikas
dc.contributor.author    Kaelbling, Leslie Pack
dc.date.accessioned    2018-10-01T15:44:44Z
dc.date.available    2018-10-01T15:44:44Z
dc.date.issued    2018-10-01
dc.identifier.uri    http://hdl.handle.net/1721.1/118307
dc.description.abstract    This paper introduces a novel measure-theoretic theory for machine learning that does not require statistical assumptions. Based on this theory, a new regularization method in deep learning is derived and shown to outperform previous methods on CIFAR-10, CIFAR-100, and SVHN. Moreover, the proposed theory provides a theoretical basis for a family of practically successful regularization methods in deep learning. We discuss several consequences of our results on one-shot learning, representation learning, deep learning, and curriculum learning. Unlike statistical learning theory, the proposed learning theory analyzes each problem instance individually via measure theory, rather than a set of problem instances via statistics. As a result, it provides different types of results and insights when compared to statistical learning theory.    en_US
dc.language.iso    en_US    en_US
dc.relation.ispartofseries    MIT-CSAIL-TR-2018-019
dc.subject    Machine Learning    en_US
dc.subject    Measure Theory    en_US
dc.subject    Regularization method    en_US
dc.subject    Neural Network    en_US
dc.title    Towards Understanding Generalization via Analytical Learning Theory    en_US
dc.type    Article    en_US

