Deep Learning Approaches for Security Threats in IoT Environments

by Mohamed Abdel-Basset; Nour Moustafa; Hossam Hawash
  • ISBN13: 9781119884149
  • ISBN10: 1119884144
  • Edition: 1st
  • Format: Hardcover
  • Copyright: 2022-12-08
  • Publisher: Wiley-IEEE Press
  • Purchase Benefits
  • Free Shipping On Orders Over $35!
    Your order must be $35 or more to qualify for free economy shipping. Bulk sales, POs, Marketplace items, eBooks, and apparel do not qualify for this offer.
  • Get Rewarded for Ordering Your Textbooks! Enroll Now
List Price: $160.00 Save up to $0.80
  • Buy New
    $159.20
    Add to Cart (Free Shipping)

    PRINT ON DEMAND: 2-4 WEEKS. THIS ITEM CANNOT BE CANCELLED OR RETURNED.

Summary

Deep Learning Approaches for Security Threats in IoT Environments

An expert discussion of the application of deep learning methods in the IoT security environment

In Deep Learning Approaches for Security Threats in IoT Environments, a team of distinguished cybersecurity educators deliver an insightful and robust exploration of how to approach and measure the security of Internet-of-Things (IoT) systems and networks. In this book, readers will examine critical concepts in artificial intelligence (AI) and IoT, and apply effective strategies to help secure and protect IoT networks. The authors discuss supervised, semi-supervised, and unsupervised deep learning techniques, as well as reinforcement and federated learning methods for privacy preservation.

This book applies deep learning approaches to IoT networks to solve the security problems professionals frequently encounter in the field, and it shows how smart devices themselves can help address cybersecurity issues.

Readers will also get access to a companion website with PowerPoint presentations, links to supporting videos, and additional resources. They’ll also find:

  • A thorough introduction to artificial intelligence and the Internet of Things, including key concepts like deep learning, security, and privacy
  • Comprehensive discussions of the architectures, protocols, and standards that form the foundation of deep learning for securing modern IoT systems and networks
  • In-depth examinations of the architectural design of cloud, fog, and edge computing networks
  • Detailed presentations of the security requirements, threats, and countermeasures relevant to IoT networks

Perfect for professionals working in the AI, cybersecurity, and IoT industries, Deep Learning Approaches for Security Threats in IoT Environments will also earn a place in the libraries of undergraduate and graduate students studying deep learning, cybersecurity, privacy preservation, and the security of IoT networks.

Author Biography

Mohamed Abdel-Basset, PhD, is an Associate Professor in the Faculty of Computers and Informatics at Zagazig University in Egypt. He is a Senior Member of the IEEE.

Nour Moustafa, PhD, is a Postgraduate Discipline Coordinator (Cyber) and Senior Lecturer in Cyber Security & Computing at the School of Engineering and Information Technology at the University of New South Wales, UNSW Canberra, Australia.

Hossam Hawash is a Research Assistant in the Department of Computer Science, Faculty of Computers and Informatics at Zagazig University in Egypt.

Table of Contents

Author Biography

About the Companion Website

1.            Chapter 1: Introducing Deep Learning for IoT Security

1.1.        Introduction

1.2.        Internet of Things (IoT) Architectures

1.2.1.     Physical layer

1.2.2.     Network layer

1.2.3.     Application Layer

1.3.        Internet of Things Vulnerabilities and attacks

1.3.1.     Passive attacks

1.3.2.     Active attacks

1.4.        Artificial Intelligence

1.5.        Deep Learning

1.6.        Taxonomy of Deep Learning Models

1.6.1.     Supervision criterion

1.6.1.1. Supervised deep learning

1.6.1.2. Unsupervised deep learning

1.6.1.3. Semi-supervised deep learning

1.6.1.4. Deep reinforcement learning

1.6.2.     Incrementality criterion

1.6.2.1. Batch Learning

1.6.2.2. Online Learning

1.6.3.     Generalization criterion

1.6.3.1. Model-based learning

1.6.3.2. Instance-based learning

1.7.        Supplementary Materials

2.            Chapter 2: Deep Neural Networks

2.1.        Introduction

2.2.        From Biological Neurons to Artificial Neurons

2.2.1.     Biological Neurons          

2.2.2.     Artificial Neurons

2.3.        Artificial Neural Network (ANN)

2.4.        Activation Functions

2.4.1.     Types of Activation Functions

2.4.1.1. Binary Step Function

2.4.1.2. Linear Activation Function

2.4.1.3. Non-Linear Activation Functions

2.5.        The Learning process of ANN

2.5.1.     Forward Propagation

2.5.2.     Backpropagation (Gradient Descent)

2.6.        Loss Functions

2.6.1.     Regression Loss Functions

2.6.1.1. Mean Absolute Error (MAE) Loss

2.6.1.2. Mean Squared Error (MSE) Loss

2.6.1.3. Huber Loss

2.6.1.4. Mean Bias Error (MBE) Loss

2.6.1.5. Mean Squared Logarithmic Error (MSLE)

2.6.2.     Classification Loss Functions

2.6.2.1. Binary Cross Entropy (BCE) Loss

2.6.2.2. Categorical Cross Entropy (CCE) Loss

2.6.2.3. Hinge Loss

2.6.2.4. Kullback Leibler Divergence (KL) Loss

2.7.        Supplementary Materials

3.            Chapter 3: Training Deep Neural Networks

3.1.        Introduction

3.2.        Gradient Descent revisited

3.2.1.     Gradient Descent

3.2.2.     Stochastic Gradient Descent

3.2.3.     Mini-batch Gradient Descent

3.3.        Vanishing and Exploding Gradients

3.4.        Gradient Clipping

3.5.        Parameter initialization

3.5.1.     Random initialization

3.5.2.     Lecun Initialization

3.5.3.     Xavier initialization

3.5.4.     Kaiming (He) initialization

3.6.        Faster Optimizers

3.6.1.     Momentum optimization

3.6.2.     Nesterov Accelerated Gradient

3.6.3.     AdaGrad

3.6.4.     RMSProp

3.6.5.     Adam optimizer

3.7.        Model training issues

3.7.1.     Bias

3.7.2.     Variance

3.7.3.     Overfitting issues

3.7.4.     Underfitting issues

3.7.5.     Model capacity

3.8.        Supplementary Materials

4.            Chapter 4: Evaluating Deep Neural Networks

4.1.        Introduction

4.2.        Validation dataset

4.3.        Regularization methods

4.3.1.     Early Stopping

4.3.2.     L1 & L2 Regularization

4.3.3.     Dropout

4.3.4.     Max-Norm Regularization

4.3.5.     Data Augmentation

4.4.        Cross-Validation

4.4.1.     Hold-out cross-validation

4.4.2.     K-folds cross-validation

4.4.3.     Repeated K-folds cross-validation

4.4.4.     Leave-one-out cross-validation

4.4.5.     Leave-p-out cross-validation

4.4.6.     Time series cross-validation

4.4.7.     Block cross-validation

4.5.        Performance Metrics

4.5.1.     Regression Metrics

4.5.1.1. Mean Absolute Error (MAE)

4.5.1.2. Root Mean Squared Error (RMSE)

4.5.1.3. Coefficient of determination (R-Squared)

4.5.1.4. Adjusted R2

4.5.2.     Classification Metrics

4.5.2.1. Confusion Matrix

4.5.2.2. Accuracy

4.5.2.3. Precision

4.5.2.4. Recall

4.5.2.5. Precision-Recall Curve

4.5.2.6. F1-score

4.5.2.7. Beta F1-score

4.5.2.8. False Positive Rate (FPR)

4.5.2.9. Specificity

4.5.2.10.              Receiver operating characteristic (ROC) curve

4.6.        Supplementary Materials

5.            Chapter 5: Convolutional Neural Networks

5.1.        Introduction

5.2.        Shift from fully connected to convolutional

5.3.        Basic Architecture

5.3.1.     The Cross-Correlation Operation

5.3.2.     Convolution operation

5.3.3.     Receptive Field

5.3.4.     Padding and Stride

5.3.4.1. Padding

5.3.4.2. Stride

5.4.        Multiple Channels

5.4.1.     Multi-channel Inputs

5.4.2.     Multi-channel Outputs

5.4.3.     1×1 Convolutional kernel

5.5.        Pooling Layers

5.5.1.     Max Pooling

5.5.2.     Average Pooling

5.6.        Normalization Layers

5.6.1.     Batch Normalization

5.6.2.     Layer Normalization

5.6.3.     Instance Normalization

5.6.4.     Group Normalization

5.6.5.     Weight Normalization

5.7.        Convolutional Neural Networks (LeNet)

5.8.        Case studies

5.8.1.     Handwritten Digit Classification (one channel input)

5.8.2.     Dog vs Cat Image Classification (Multi-channel input)

5.9.        Supplementary Materials

6.            Chapter 6: Dive into Convolutional Neural Networks

6.1.        Introduction

6.2.        One-dimensional Convolutional Network

6.2.1.     One-dimensional Convolution

6.2.2.     One-dimensional pooling

6.3.        Three-dimensional Convolutional Network

6.3.1.     Three-dimensional convolution

6.3.2.     Three-dimensional pooling

6.4.        Transposed Convolution Layer

6.5.        Atrous/Dilated Convolution

6.6.        Separable Convolutions

6.6.1.     Spatially Separable Convolutions

6.6.2.     Depth-wise Separable (DS) Convolutions

6.7.        Grouped Convolution

6.8.        Shuffled Grouped Convolution

6.9.        Supplementary Materials

7.            Chapter 7: Advanced Convolutional Neural Network

7.1.        Introduction

7.2.        AlexNet

7.3.        Block-wise Convolutional Network (VGG)

7.4.        Network-in-Network

7.5.        Inception Networks

7.5.1.     GoogLeNet

7.5.2.     Inception Network V2 (Inception V2)

7.5.3.     Inception Network V3 (Inception V3)

7.6.        Residual Convolutional Networks

7.7.        Dense Convolutional Networks

7.8.        Temporal Convolutional Network

7.8.1.     One-dimensional Convolutional Network

7.8.2.     Causal and Dilated Convolution

7.8.3.     Residual blocks

7.9.        Supplementary Materials

8.            Chapter 8: Introducing Recurrent Neural Networks

8.1.        Introduction

8.2.        Recurrent neural networks

8.2.1.     Recurrent Neurons

8.2.2.     Memory Cell

8.2.3.     Recurrent Neural Network

8.3.        Different Categories of RNNs

8.3.1.     One-to-one RNN

8.3.2.     One-to-many RNN

8.3.3.     Many-to-one RNN

8.3.4.     Many-to-many RNN

8.4.        Backpropagation Through Time

8.5.        Challenges facing simple RNNs

8.5.1.     Vanishing Gradient

8.5.2.     Exploding Gradient

8.5.2.1. Truncated Backpropagation Through Time (TBPTT)

8.5.3.     Clipping Gradients

8.6.        Case study: Malware Detection

8.7.        Supplementary Materials

9.            Chapter 9: Dive into Recurrent Neural Networks

9.1.        Introduction

9.2.        Long Short-term Memory (LSTM)

9.2.1.     LSTM gates

9.2.2.     Candidate Memory Cells

9.2.3.     Memory Cell

9.2.4.     Hidden state

9.3.        LSTM with Peephole Connections

9.4.        Gated Recurrent Units (GRU)

9.4.1.     GRU cell gates

9.4.2.     Candidate State

9.4.3.     Hidden state

9.5.        ConvLSTM

9.6.        Unidirectional vs Bi-directional Recurrent Network

9.7.        Deep Recurrent Network

9.8.        Insights

9.9.        Case study: Malware Detection

9.10.      Supplementary Materials

10.         Chapter 10: Attention Neural Networks

10.1.      Introduction

10.2.      From biological to computerized attention

10.2.1.  Biological Attention

10.2.2.  Queries, Keys, and Values

10.3.      Attention Pooling: Nadaraya-Watson Kernel Regression

10.4.      Attention Scoring Functions

10.4.1.  Masked Softmax Operation

10.4.2.  Additive Attention (AA)

10.4.3.  Scaled Dot-Product Attention

10.5.      Multi-Head Attention (MHA)

10.6.      Self-Attention Mechanism

10.6.1.  Self-Attention (SA) mechanism

10.6.2.  Positional encoding

10.7.      Transformer Network

10.8.      Supplementary Materials

11.         Chapter 11: Autoencoder Networks

11.1.      Introduction

11.2.      Introducing Autoencoders

11.2.1.  Definition of Autoencoder 

11.2.2.  Structural Design

11.3.      Convolutional Autoencoder

11.4.      Denoising Autoencoder

11.5.      Sparse autoencoders

11.6.      Contractive autoencoders

11.7.      Variational autoencoders

11.8.      Case study

11.9.      Supplementary Materials

12.         Chapter 12: Generative Adversarial Networks (GANs)

12.1.      Introduction

12.2.      Foundation of Generative Adversarial Network

12.3.      Deep Convolutional GAN

12.4.      Conditional GAN

12.5.      Supplementary Materials

13.         Chapter 13: Dive into Generative Adversarial Networks

13.1.      Introduction

13.2.      Wasserstein GAN

13.2.1.  Distance functions

13.2.2.  Distance function in GANs

13.2.3.  Wasserstein loss

13.3.      Least-squares GAN (LSGAN)

13.4.      Auxiliary Classifier GAN (ACGAN)

13.5.      Supplementary Materials

14.         Chapter 14: Disentangled Representation GANs

14.1.      Introduction

14.2.      Disentangled representations

14.3.      InfoGAN

14.4.      StackedGAN

14.5.      Supplementary Materials

15.         Chapter 15: Introducing Federated Learning for Internet of Things (IoT)

15.1.      Introduction

15.2.      Federated Learning in the Internet of Things

15.3.      Taxonomic view of Federated Learning

15.3.1.  Network Structure

15.3.1.1.              Centralized Federated Learning

15.3.1.2.              Decentralized Federated Learning

15.3.1.3.              Hierarchical Federated Learning

15.3.2.  Data Partition

15.3.3.  Horizontal Federated Learning

15.3.4.  Vertical Federated Learning

15.3.5.  Federated Transfer Learning

15.4.      Open-source Frameworks

15.4.1.  TensorFlow Federated

15.4.2.  FedML

15.4.3.  LEAF

15.4.4.  Paddle FL

15.4.5.  Federated AI Technology Enabler (FATE)

15.4.6.  OpenFL

15.4.7.  IBM Federated Learning

15.4.8.  NVIDIA FLARE

15.4.9.  Flower

15.4.10.               Sherpa.ai

15.5.      Supplementary Materials

16.         Chapter 16: Privacy-Preserved Federated Learning

16.1.      Introduction

16.2.      Statistical Challenges in Federated Learning

16.2.1.  Non-Independent and Identically Distributed (Non-IID) Data

16.2.1.1.              Class Imbalance

16.2.1.2.              Distribution Imbalance

16.2.1.3.              Size Imbalance

16.2.2.  Model Heterogeneity

16.2.3.  Block Cycles

16.3.      Security Challenges in Federated Learning

16.3.1.  Untargeted Attacks

16.3.2.  Targeted Attacks

16.4.      Privacy Challenges in Federated Learning

16.4.1.  Secure Aggregation

16.4.1.1.              Homomorphic Encryption (HE)

16.4.1.2.              Secure Multiparty Computation

16.4.1.3.              Blockchain

16.4.2.  Perturbation Method

16.5.      Supplementary Materials

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.
