000 04356nam a22005295i 4500
001 978-3-031-01548-9
003 DE-He213
005 20240730165128.0
007 cr nn 008mamaa
008 220601s2009 sz | s |||| 0|eng d
020 _a9783031015489
_9978-3-031-01548-9
024 7 _a10.1007/978-3-031-01548-9
_2doi
050 4 _aQ334-342
050 4 _aTA347.A78
072 7 _aUYQ
_2bicssc
072 7 _aCOM004000
_2bisacsh
072 7 _aUYQ
_2thema
082 0 4 _a006.3
_223
100 1 _aZhu, Xiaojin.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_987555
245 1 0 _aIntroduction to Semi-Supervised Learning
_h[electronic resource] /
_cby Xiaojin Zhu, Andrew B. Goldberg.
250 _a1st ed. 2009.
264 1 _aCham :
_bSpringer International Publishing :
_bImprint: Springer,
_c2009.
300 _aXII, 116 p.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aSynthesis Lectures on Artificial Intelligence and Machine Learning,
_x1939-4616
505 0 _aIntroduction to Statistical Machine Learning -- Overview of Semi-Supervised Learning -- Mixture Models and EM -- Co-Training -- Graph-Based Semi-Supervised Learning -- Semi-Supervised Support Vector Machines -- Human Semi-Supervised Learning -- Theory and Outlook.
520 _aSemi-supervised learning is a learning paradigm concerned with the study of how computers and natural systems such as humans learn in the presence of both labeled and unlabeled data. Traditionally, learning has been studied either in the unsupervised paradigm (e.g., clustering, outlier detection) where all the data are unlabeled, or in the supervised paradigm (e.g., classification, regression) where all the data are labeled. The goal of semi-supervised learning is to understand how combining labeled and unlabeled data may change the learning behavior, and design algorithms that take advantage of such a combination. Semi-supervised learning is of great interest in machine learning and data mining because it can use readily available unlabeled data to improve supervised learning tasks when the labeled data are scarce or expensive. Semi-supervised learning also shows potential as a quantitative tool to understand human category learning, where most of the input is self-evidently unlabeled. In this introductory book, we present some popular semi-supervised learning models, including self-training, mixture models, co-training and multiview learning, graph-based methods, and semi-supervised support vector machines. For each model, we discuss its basic mathematical formulation. The success of semi-supervised learning depends critically on some underlying assumptions. We emphasize the assumptions made by each model and give counterexamples when appropriate to demonstrate the limitations of the different models. In addition, we discuss semi-supervised learning for cognitive psychology. Finally, we give a computational learning theoretic perspective on semi-supervised learning, and we conclude the book with a brief discussion of open questions in the field. 
650 0 _aArtificial intelligence.
_93407
650 0 _aMachine learning.
_91831
650 0 _aNeural networks (Computer science).
_987558
650 1 4 _aArtificial Intelligence.
_93407
650 2 4 _aMachine Learning.
_91831
650 2 4 _aMathematical Models of Cognitive Processes and Neural Networks.
_932913
700 1 _aGoldberg, Andrew B.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_987560
710 2 _aSpringerLink (Online service)
_987561
773 0 _tSpringer Nature eBook
776 0 8 _iPrinted edition:
_z9783031004209
776 0 8 _iPrinted edition:
_z9783031026768
830 0 _aSynthesis Lectures on Artificial Intelligence and Machine Learning,
_x1939-4616
_987562
856 4 0 _uhttps://doi.org/10.1007/978-3-031-01548-9
912 _aZDB-2-SXSC
942 _cEBK
999 _c86116
_d86116