Deep learning by example : a hands-on guide to implementing advanced machine learning algorithms and neural networks / Ahmed Menshawy.
- Format:
- Book
- Author/Creator:
- Menshawy, Ahmed, author.
- Language:
- English
- Subjects (All):
- Machine learning.
- Physical Description:
- 1 online resource (427 pages) : illustrations
- Edition:
- 1st ed.
- Place of Publication:
- Birmingham, [England] ; Mumbai, [India] : Packt, 2018.
- Summary:
- Deep learning is a subset of machine learning that has gained wide popularity in recent years. This book introduces you to the fundamentals of deep learning in a hands-on manner. You will use TensorFlow to train different types of neural networks for tasks in computer vision, language processing, and other real-world problems.
- Contents:
- Cover
- Copyright and Credits
- Packt Upsell
- Contributors
- Table of Contents
- Preface
- Chapter 1: Data Science - A Bird's-Eye View
- Understanding data science through an example
- Design procedure of data science algorithms
- Data pre-processing
- Data cleaning
- Feature selection
- Model selection
- Learning process
- Evaluating your model
- Getting to learn
- Challenges of learning
- Feature extraction - feature engineering
- Noise
- Overfitting
- Selection of a machine learning algorithm
- Prior knowledge
- Missing values
- Implementing the fish recognition/detection model
- Knowledge base/dataset
- Data analysis and pre-processing
- Model building
- Model training and testing
- Fish recognition - all together
- Different learning types
- Supervised learning
- Unsupervised learning
- Semi-supervised learning
- Reinforcement learning
- Data size and industry needs
- Summary
- Chapter 2: Data Modeling in Action - The Titanic Example
- Linear models for regression
- Motivation
- Advertising - a financial example
- Dependencies
- Importing data with pandas
- Understanding the advertising data
- Data analysis and visualization
- Simple regression model
- Learning model coefficients
- Interpreting model coefficients
- Using the model for prediction
- Linear models for classification
- Classification and logistic regression
- Titanic example - model building and training
- Data handling and visualization
- Data analysis - supervised machine learning
- Different types of errors
- Apparent (training set) error
- Generalization/true error
- Chapter 3: Feature Engineering and Model Complexity - The Titanic Example Revisited
- Feature engineering
- Types of feature engineering
- Dimensionality reduction
- Feature construction
- Titanic example revisited
- Removing any sample with missing values in it
- Missing value imputation
- Assigning an average value
- Using a regression or another simple model to predict the values of missing variables
- Feature transformations
- Dummy features
- Factorizing
- Scaling
- Binning
- Derived features
- Name
- Cabin
- Ticket
- Interaction features
- The curse of dimensionality
- Avoiding the curse of dimensionality
- Titanic example revisited - all together
- Bias-variance decomposition
- Learning visibility
- Breaking the rule of thumb
- Chapter 4: Get Up and Running with TensorFlow
- TensorFlow installation
- TensorFlow GPU installation for Ubuntu 16.04
- Installing NVIDIA drivers and CUDA 8
- Installing TensorFlow
- TensorFlow CPU installation for Ubuntu 16.04
- TensorFlow CPU installation for macOS
- TensorFlow GPU/CPU installation for Windows
- The TensorFlow environment
- Computational graphs
- TensorFlow data types, variables, and placeholders
- Variables
- Placeholders
- Mathematical operations
- Getting output from TensorFlow
- TensorBoard - visualizing learning
- Chapter 5: TensorFlow in Action - Some Basic Examples
- Capacity of a single neuron
- Biological motivation and connections
- Activation functions
- Sigmoid
- Tanh
- ReLU
- Feed-forward neural network
- The need for multilayer networks
- Training our MLP - the backpropagation algorithm
- Step 1 - forward propagation
- Step 2 - backpropagation and weight updating
- TensorFlow terminology - recap
- Defining multidimensional arrays using TensorFlow
- Why tensors?
- Operations
- Linear regression model - building and training
- Linear regression with TensorFlow
- Logistic regression model - building and training
- Utilizing logistic regression in TensorFlow
- Why use placeholders?
- Set model weights and bias
- Logistic regression model
- Training
- Cost function
- Chapter 6: Deep Feed-forward Neural Networks - Implementing Digit Classification
- Hidden units and architecture design
- MNIST dataset analysis
- The MNIST data
- Digit classification - model building and training
- Data analysis
- Building the model
- Model training
- Chapter 7: Introduction to Convolutional Neural Networks
- The convolution operation
- Applications of CNNs
- Different layers of CNNs
- Input layer
- Convolution step
- Introducing non-linearity
- The pooling step
- Fully connected layer
- Logits layer
- CNN basic example - MNIST digit classification
- Performance measures
- Chapter 8: Object Detection - CIFAR-10 Example
- Object detection
- CIFAR-10 - modeling, building, and training
- Used packages
- Loading the CIFAR-10 dataset
- Data analysis and preprocessing
- Building the network
- Testing the model
- Chapter 9: Object Detection - Transfer Learning with CNNs
- Transfer learning
- The intuition behind TL
- Differences between traditional machine learning and TL
- CIFAR-10 object detection - revisited
- Solution outline
- Loading and exploring CIFAR-10
- Inception model transfer values
- Analysis of transfer values
- Model building and training
- Chapter 10: Recurrent-Type Neural Networks - Language Modeling
- The intuition behind RNNs
- Recurrent neural network architectures
- Examples of RNNs
- Character-level language models
- Language model using Shakespeare data
- The vanishing gradient problem
- The problem of long-term dependencies
- LSTM networks
- Why does LSTM work?
- Implementation of the language model
- Mini-batch generation for training
- Stacked LSTMs
- Model architecture
- Inputs
- Building an LSTM cell
- RNN output
- Training loss
- Optimizer
- Model hyperparameters
- Training the model
- Saving checkpoints
- Generating text
- Chapter 11: Representation Learning - Implementing Word Embeddings
- Introduction to representation learning
- Word2Vec
- Building the Word2Vec model
- A practical example of the skip-gram architecture
- Skip-gram Word2Vec implementation
- Data analysis and pre-processing
- Chapter 12: Neural Sentiment Analysis
- General sentiment analysis architecture
- RNNs - sentiment analysis context
- Exploding and vanishing gradients - recap
- Sentiment analysis - model implementation
- Keras
- Model training and results analysis
- Chapter 13: Autoencoders - Feature Extraction and Denoising
- Introduction to autoencoders
- Examples of autoencoders
- Autoencoder architectures
- Compressing the MNIST dataset
- The MNIST dataset
- Convolutional autoencoder
- Dataset
- Denoising autoencoders
- Applications of autoencoders
- Image colorization
- More applications
- Chapter 14: Generative Adversarial Networks
- An intuitive introduction
- Simple implementation of GANs
- Model inputs
- Variable scope
- Leaky ReLU
- Generator
- Discriminator
- Building the GAN network
- Defining the generator and discriminator
- Discriminator and generator losses
- Optimizers
- Generator samples from training
- Sampling from the generator
- Chapter 15: Face Generation and Handling Missing Labels
- Face generation
- Getting the data
- Exploring the data
- Model losses
- Model optimizer
- Semi-supervised learning with Generative Adversarial Networks (GANs)
- Intuition
- Appendix: Implementing Fish Recognition
- Code for fish recognition
- Other Books You May Enjoy
- Index
- Notes:
- Includes index.
- Description based on online resource; title from PDF title page (EBC, viewed March 22, 2018).
- ISBN:
- 9781788395762
- 178839576X