
Deep learning for natural language processing / Stephan Raaijmakers.

Available Online:
O'Reilly Online Learning: Academic/Public Library Edition
Format:
Book
Author/Creator:
Raaijmakers, Stephan, author.
Language:
English
Subjects (All):
Natural language processing (Computer science).
Speech processing systems.
Machine learning.
Physical Description:
1 online resource (242 pages)
Edition:
[First edition].
Publication:
Shelter Island, New York : Manning Publications Co. LLC, [2022]
Summary:
Explore the most challenging issues of natural language processing, and learn how to solve them with cutting-edge deep learning! Deep learning has advanced natural language processing to exciting new levels and powerful new applications! For the first time, computer systems can achieve "human" levels of performance at summarizing, making connections, and other tasks that require comprehension and context. Deep Learning for Natural Language Processing reveals the groundbreaking techniques that make these innovations possible. Stephan Raaijmakers distills his extensive knowledge into useful best practices, real-world applications, and the inner workings of top NLP algorithms. Deep learning has transformed the field of natural language processing. Neural networks recognize not just words and phrases, but also patterns. Models infer meaning from context and determine emotional tone. Powerful deep learning-based NLP models open up a goldmine of potential uses. Deep Learning for Natural Language Processing teaches you how to create advanced NLP applications using Python and the Keras deep learning library. You'll learn to use state-of-the-art tools and techniques, including BERT and XLNet, multitask learning, and deep memory-based NLP. Fascinating examples give you hands-on experience with a variety of real-world NLP applications. Plus, the detailed code discussions show you exactly how to adapt each example to your own uses!
Contents:
Intro
Deep Learning for Natural Language Processing
Copyright
brief contents
contents
front matter
preface
acknowledgments
about this book
Who should read this book
How this book is organized: A road map
About the code
liveBook discussion forum
about the author
about the cover illustration
Part 1. Introduction
1 Deep learning for NLP
1.1 A selection of machine learning methods for NLP
1.1.1 The perceptron
1.1.2 Support vector machines
1.1.3 Memory-based learning
1.2 Deep learning
1.3 Vector representations of language
1.3.1 Representational vectors
1.3.2 Operational vectors
1.4 Vector sanitization
1.4.1 The hashing trick
1.4.2 Vector normalization
Summary
2 Deep learning and language: The basics
2.1 Basic architectures of deep learning
2.1.1 Deep multilayer perceptrons
2.1.2 Two basic operators: Spatial and temporal
2.2 Deep learning and NLP: A new paradigm
3 Text embeddings
3.1 Embeddings
3.1.1 Embedding by direct computation: Representational embeddings
3.1.2 Learning to embed: Procedural embeddings
3.2 From words to vectors: Word2Vec
3.3 From documents to vectors: Doc2Vec
Part 2. Deep NLP
4 Textual similarity
4.1 The problem
4.2 The data
4.2.1 Authorship attribution and verification data
4.3 Data representation
4.3.1 Segmenting documents
4.3.2 Word-level information
4.3.3 Subword-level information
4.4 Models for measuring similarity
4.4.1 Authorship attribution
4.4.2 Verifying authorship
5 Sequential NLP
5.1 Memory and language
5.1.1 The problem: Question Answering
5.2 Data and data processing
5.3 Question Answering with sequential models
5.3.1 RNNs for Question Answering
5.3.2 LSTMs for Question Answering
5.3.3 End-to-end memory networks for Question Answering
6 Episodic memory for NLP
6.1 Memory networks for sequential NLP
6.2 Data and data processing
6.2.1 PP-attachment data
6.2.2 Dutch diminutive data
6.2.3 Spanish part-of-speech data
6.3 Strongly supervised memory networks: Experiments and results
6.3.1 PP-attachment
6.3.2 Dutch diminutives
6.3.3 Spanish part-of-speech tagging
6.4 Semi-supervised memory networks
6.4.1 Semi-supervised memory networks: Experiments and results
Part 3. Advanced topics
7 Attention
7.1 Neural attention
7.2 Data
7.3 Static attention: MLP
7.4 Temporal attention: LSTM
7.5 Experiments
7.5.1 MLP
7.5.2 LSTM
8 Multitask learning
8.1 Introduction to multitask learning
8.2 Multitask learning
8.3 Multitask learning for consumer reviews: Yelp and Amazon
8.3.1 Data handling
8.3.2 Hard parameter sharing
8.3.3 Soft parameter sharing
8.3.4 Mixed parameter sharing
8.4 Multitask learning for Reuters topic classification
8.4.1 Data handling
8.4.2 Hard parameter sharing
8.4.3 Soft parameter sharing
8.4.4 Mixed parameter sharing
8.5 Multitask learning for part-of-speech tagging and named-entity recognition
8.5.1 Data handling
8.5.2 Hard parameter sharing
8.5.3 Soft parameter sharing
8.5.4 Mixed parameter sharing
9 Transformers
9.1 BERT up close: Transformers
9.2 Transformer encoders
9.2.1 Positional encoding
9.3 Transformer decoders
9.4 BERT: Masked language modeling
9.4.1 Training BERT
9.4.2 Fine-tuning BERT
9.4.3 Beyond BERT
10 Applications of Transformers: Hands-on with BERT
10.1 Introduction: Working with BERT in practice
10.2 A BERT layer
10.3 Training BERT on your data
10.4 Fine-tuning BERT
10.5 Inspecting BERT
10.5.1 Homonyms in BERT
10.6 Applying BERT
bibliography
index
Notes:
Description based on print version record.
Includes bibliographical references and index.
ISBN:
9781638353997
1638353999
OCLC:
1352970516
