Semi-Supervised Learning and Domain Adaptation in Natural Language Processing (Record no. 85483)

000 -LEADER
fixed length control field 03697nam a22005175i 4500
001 - CONTROL NUMBER
control field 978-3-031-02149-7
005 - DATE AND TIME OF LATEST TRANSACTION
control field 20240730164241.0
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION
fixed length control field 220601s2013 sz | s |||| 0|eng d
020 ## - INTERNATIONAL STANDARD BOOK NUMBER
ISBN 9783031021497
-- 978-3-031-02149-7
082 04 - CLASSIFICATION NUMBER
Call Number 006.3
100 1# - AUTHOR NAME
Author Søgaard, Anders.
245 10 - TITLE STATEMENT
Title Semi-Supervised Learning and Domain Adaptation in Natural Language Processing
250 ## - EDITION STATEMENT
Edition statement 1st ed. 2013.
300 ## - PHYSICAL DESCRIPTION
Number of Pages X, 93 p.
490 1# - SERIES STATEMENT
Series statement Synthesis Lectures on Human Language Technologies,
505 0# - FORMATTED CONTENTS NOTE
Remark 2 Introduction -- Supervised and Unsupervised Prediction -- Semi-Supervised Learning -- Learning under Bias -- Learning under Unknown Bias -- Evaluating under Bias.
520 ## - SUMMARY, ETC.
Summary, etc This book introduces basic supervised learning algorithms applicable to natural language processing (NLP) and shows how the performance of these algorithms can often be improved by exploiting the marginal distribution of large amounts of unlabeled data. One reason for that is data sparsity, i.e., the limited amounts of data we have available in NLP. However, in most real-world NLP applications our labeled data is also heavily biased. This book introduces extensions of supervised learning algorithms to cope with data sparsity and different kinds of sampling bias. This book is intended to be both readable by first-year students and interesting to the expert audience. My intention was to introduce what is necessary to appreciate the major challenges we face in contemporary NLP related to data sparsity and sampling bias, without wasting too much time on details about supervised learning algorithms or particular NLP applications. I use text classification, part-of-speech tagging, and dependency parsing as running examples, and limit myself to a small set of cardinal learning algorithms. I have worried less about theoretical guarantees ("this algorithm never does too badly") than about useful rules of thumb ("in this case this algorithm may perform really well"). In NLP, data is so noisy, biased, and non-stationary that few theoretical guarantees can be established and we are typically left with our gut feelings and a catalogue of crazy ideas. I hope this book will provide its readers with both. Throughout the book we include snippets of Python code and empirical evaluations, when relevant.
856 40 - ELECTRONIC LOCATION AND ACCESS
Uniform Resource Identifier https://doi.org/10.1007/978-3-031-02149-7
942 ## - ADDED ENTRY ELEMENTS (KOHA)
Koha item type eBooks
264 #1 -
-- Cham :
-- Springer International Publishing :
-- Imprint: Springer,
-- 2013.
336 ## -
-- text
-- txt
-- rdacontent
337 ## -
-- computer
-- c
-- rdamedia
338 ## -
-- online resource
-- cr
-- rdacarrier
347 ## -
-- text file
-- PDF
-- rda
650 #0 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Artificial intelligence.
650 #0 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Natural language processing (Computer science).
650 #0 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Computational linguistics.
650 14 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Artificial Intelligence.
650 24 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Natural Language Processing (NLP).
650 24 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Computational Linguistics.
830 #0 - SERIES ADDED ENTRY--UNIFORM TITLE
-- 1947-4059
912 ## -
-- ZDB-2-SXSC