

Adversarial Learning with Keras and TensorFlow (Part 3): Exploring Adversarial Attacks Using Neural Structured Learning (NSL)

In this tutorial, you will learn about adversarial attacks and how we use these attacks to generate adversarial samples using the TensorFlow Neural Structured Learning (NSL) framework. We will discuss the two most common adversarial attacks, namely the Projected Gradient Descent (PGD) attack and the Fast Gradient Sign Method (FGSM) attack, and understand their mathematical formulations.

Furthermore, we will use the TensorFlow NSL framework to perturb images in our dataset with these attacks to generate adversarial samples, which will act as neighbor samples in our NSL-based pipeline and allow us to regularize our model.

This lesson is the 3rd of a 4-part series on Adversarial Learning:

  1. Adversarial Learning with Keras and TensorFlow (Part 1): Overview of Adversarial Learning
  2. Adversarial Learning with Keras and TensorFlow (Part 2): Implementing the Neural Structured Learning (NSL) Framework and Building a Data Pipeline
  3. Adversarial Learning with Keras and TensorFlow (Part 3): Exploring Adversarial Attacks Using Neural Structured Learning (NSL) (this tutorial)
  4. Adversarial Learning with Keras and TensorFlow (Part 4): Enhancing Adversarial Defense and Comparing Models Trained With and Without Neural Structured Learning (NSL)

To learn more about adversarial attacks, just keep reading.


Introduction to Advanced Adversarial Techniques in Machine Learning

In the first tutorial of this series, we briefly discussed the process of generating perturbations (or noise) and tried to engineer our own adversarial examples using TensorFlow. Formally, the process of engineering adversarial noise and adding it to an image is referred to as an adversarial attack.


Harnessing NSL for Robust Model Training: Insights from Part 2

Furthermore, in the second part of this series, we discussed how the NSL framework uses other related samples or neighbor samples to regularize the training process of our model and allows us to train better systems. In addition to this, we discussed how we can use adversarial examples as neighbor samples to implement adversarial training and make our models robust to adversarial attacks.


Deep Dive into Adversarial Attack Formulations: PGD and FGSM Explored

In this tutorial, we will take a step further and formalize what we mean by adversarial attacks. We will discuss the formulation of the two most common types of adversarial attacks (i.e., PGD and FGSM attacks) and implement our own adversarial attacks to perturb our images using the TensorFlow NSL framework.


Building an End-to-End Adversarial Application with Keras and TensorFlow

Additionally, we will continue building our end-to-end adversarial application and implementing modules using Keras and TensorFlow. This will allow us to quantify the change in performance of our model when it is used to make predictions on adversarial examples generated using PGD and FGSM attacks.


Adversarial Attacks: Unraveling the Intricacies of Crafting Deceptive Samples


Recap of Adversarial Sample Generation: The Foundation of Adversarial Attacks

In the first part of this series, we discussed a basic overview of generating adversarial samples. We implemented the most basic approach to optimize engineered noise, which, when added to our original images, gives us an adversarial sample.


Formalizing PGD and FGSM Attacks: A Deep Dive into Advanced Adversarial Techniques

Let us now revisit that part and try to define and understand the PGD and FGSM adversarial attacks formally.


The Objective of Adversarial Attacks: Subtle Noise for Maximum Deception

As we discussed in the first tutorial of this series, the goal of any adversarial attack is to engineer noise, which, when added to our input image x, does not visibly change x but changes the model’s prediction. In terms of loss, this can be thought of as changing input x to x+noise such that we maximize the following loss expression:

loss = L(softmax(f(x + noise)), p_gt)

where f is our model, softmax converts its output logits into probabilities, L is the classification loss, and p_gt is the ground-truth label.


Balancing Attack Efficacy and Perceptibility: The Art of Noise Constraint

To ensure that the added noise does not visibly change the appearance of our image, we need to constrain the amount of noise we add to our original image to generate our adversarial examples.

Usually, this is done by defining a condition based on some norm of the noise. For example, we can constrain the noise so that its norm is bounded by some value epsilon. Under the L2 norm, this means the noise vector is constrained to lie within a ball of radius epsilon.


Leveraging L(infinity) Norm in Crafting Stealthy Adversarial Attacks

For formulating PGD and FGSM adversarial attacks, we usually consider the L(infinity) norm of the noise. The L(infinity) norm of a vector is simply the largest magnitude among all of its components.
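
As a quick illustration (a minimal sketch, not part of this project's code), the L(infinity) norm of a noise tensor is just the maximum absolute value of its components:

import tensorflow as tf

# a small, hypothetical noise tensor used purely for illustration
noise = tf.constant([0.02, -0.05, 0.01])

# L(infinity) norm: the largest magnitude among all components
l_inf_norm = tf.reduce_max(tf.abs(noise))
print(l_inf_norm.numpy())  # 0.05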

Summing up, this implies the above loss expression is maximized under the condition that the L(infinity) norm of the noise is bounded by some value epsilon:

loss = L(softmax(f(x + noise)), p_gt)

where L_infinity(noise) < epsilon

This condition ensures that every component of the noise vector stays in the range (-epsilon, epsilon), preventing the amount of noise from blowing up, which in turn avoids any perceptible or visible change to the image.
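
For intuition, bounding every component of the noise in (-epsilon, epsilon) amounts to an element-wise clip. Here is a minimal sketch with an illustrative epsilon value (not part of this project's code):

import tensorflow as tf

epsilon = 0.05  # illustrative bound on the L(infinity) norm of the noise

# hypothetical noise tensor; the first and last components exceed the bound
noise = tf.constant([0.10, -0.02, -0.30])
bounded_noise = tf.clip_by_value(noise, -epsilon, epsilon)
print(bounded_noise.numpy())  # [ 0.05 -0.02 -0.05]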


Optimization Techniques in Adversarial Attacks: Utilizing SGD for Noise Engineering

As we discussed in the first part of this series, we can use SGD to optimize the above equation, keeping the weights of the model f fixed and optimizing for the right noise tensor.

Note that this is similar to how we normally use SGD to optimize the weights of a model. The only difference here is that we are optimizing the noise and keeping the weights fixed. This simply means that in the SGD update equation we will have the noise getting updated (as shown below) instead of the weights.


Strategic Noise Update: The Key to Effective Adversarial Attacks

The expression for an SGD optimization step looks like:

noise_final ← noise_initial + eta * grad

where eta is our step size for the noise optimization problem, and grad refers to the gradient of the loss w.r.t. the noise.
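
To make this concrete, here is a minimal sketch of a single noise-update step. The tiny model, image, and label are hypothetical stand-ins used purely for illustration; this is not the NSL implementation we build later:

import tensorflow as tf

# hypothetical stand-ins: a tiny classifier, one image, and its label
model = tf.keras.Sequential([tf.keras.layers.Flatten(), tf.keras.layers.Dense(10)])
image = tf.random.uniform((1, 32, 32, 3))
label = tf.constant([3])

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
eta = 0.01  # step size for the noise update

# the noise is the quantity we optimize; the model weights stay fixed
noise = tf.Variable(tf.zeros_like(image))

with tf.GradientTape() as tape:
    loss = loss_fn(label, model(image + noise))

# gradient of the loss w.r.t. the noise (not the weights)
grad = tape.gradient(loss, noise)

# gradient *ascent* step on the noise to maximize the loss
noise.assign_add(eta * grad)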


Comparative Analysis of PGD and FGSM: Two Sides of Adversarial Attack Strategies

There are two ways to look at this optimization problem, and they give us the PGD and FGSM attack formulations, respectively. Let us discuss each in detail.


PGD Adversarial Attack: Crafting Precision Noise with Gradient Descent


Understanding Noise Constraints in PGD Attacks: The Role of Epsilon

Given the expression for the noise update using SGD, we also have to ensure that the noise is bounded in the range (-epsilon, epsilon), that is,

noise_final ← noise_initial + eta * grad

such that the change in noise (i.e., noise_final - noise_initial) stays bounded in the range (-epsilon, epsilon).


Implementing Effective Noise Clipping Strategies in PGD

To ensure this, we clip the updated noise value so that it remains in the desired range and does not cause any perceptible changes. This implies that our final update rule looks like

noise ← clip(eta * grad, range = (-epsilon, epsilon))


Decoding Projected Gradient Descent: The Core Mechanism of PGD Attacks

This is referred to as the Projected Gradient Descent or PGD attack formulation. Notice that this process is similar to the approach we briefly discussed in the first part of this series.


Parameterizing PGD Attacks: Fine-Tuning for Optimal Adversarial Efficacy

It is also worth noting that to define, formulate, and implement our PGD attack, we need the following parameters: the value of epsilon, the step size eta, the type of norm used to constrain the noise (i.e., the L(infinity) norm here), and the number of iterations for which we run the above SGD update step.
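
Putting these parameters together, a bare-bones PGD-style loop might look like the following sketch. The model, image, label, and all parameter values are hypothetical and purely illustrative; the actual attack in this series is generated by the NSL framework, as we will see below:

import tensorflow as tf

# hypothetical stand-ins and illustrative attack parameters
model = tf.keras.Sequential([tf.keras.layers.Flatten(), tf.keras.layers.Dense(10)])
image = tf.random.uniform((1, 32, 32, 3))
label = tf.constant([3])

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
epsilon = 0.05   # bound on the L(infinity) norm of the noise
eta = 0.01       # step size
iterations = 10  # number of update steps

noise = tf.zeros_like(image)
for _ in range(iterations):
    with tf.GradientTape() as tape:
        tape.watch(noise)
        loss = loss_fn(label, model(image + noise))
    grad = tape.gradient(loss, noise)

    # take a small ascent step, then clip the noise back into (-epsilon, epsilon)
    # (many PGD implementations use tf.sign(grad) here instead of the raw gradient)
    noise = tf.clip_by_value(noise + eta * grad, -epsilon, epsilon)

adversarial_image = tf.clip_by_value(image + noise, 0.0, 1.0)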


FGSM Adversarial Attack: Maximizing Impact with Single-Step Optimization


Strategizing Step Size in FGSM: The Pursuit of Efficient Adversarial Noise

In the PGD attack, we formulate the process as an SGD update and take small steps (i.e., small eta) to get our final engineered noise vector.

What if we take a large step at once to make this process faster? One way to think about this is that we take as large a step size as possible (i.e., large eta) since our goal is to maximize the loss with the minimum number of steps and time.


The Dynamics of Large Step Optimization in FGSM

Let us see what happens in such a case. We use the expression from above, which tells us the update rule for getting to our noise vector, that is,

noise ← clip(eta * grad, range = (-epsilon, epsilon))


Analyzing the Impact of Extreme Value Clipping in FGSM

Notice that if the step size is very large (i.e., eta is large), the magnitude of eta * grad will exceed epsilon. This means that when the clip operation is applied to a value whose magnitude exceeds epsilon, it simply clips it to either epsilon or -epsilon, depending on the sign of the gradient.


Unveiling Fast Gradient Sign Method: The Essence of FGSM Attacks

Thus, the expression becomes

noise ← epsilon * sign(grad)

This is referred to as the Fast Gradient Sign Method or FGSM attack formulation. This attack only depends on the sign of the gradient.


Tailoring FGSM Attacks: Defining Key Parameters for Success

It is worth noting that to define, formulate, and implement our FGSM attack, we need only the following parameters: the value of epsilon and the type of norm used to constrain the noise (i.e., the L(infinity) norm here).
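
A single-step FGSM perturbation can therefore be sketched as follows, again with hypothetical stand-ins and an illustrative epsilon (the actual FGSM samples in this tutorial are generated with the NSL framework):

import tensorflow as tf

# hypothetical stand-ins and an illustrative epsilon
model = tf.keras.Sequential([tf.keras.layers.Flatten(), tf.keras.layers.Dense(10)])
image = tf.random.uniform((1, 32, 32, 3))
label = tf.constant([3])

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
epsilon = 0.05  # bound on the L(infinity) norm of the noise

with tf.GradientTape() as tape:
    tape.watch(image)
    loss = loss_fn(label, model(image))
grad = tape.gradient(loss, image)

# FGSM: the noise depends only on the sign of the gradient
noise = epsilon * tf.sign(grad)
adversarial_image = tf.clip_by_value(image + noise, 0.0, 1.0)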


Configuring Your Development Environment

To follow this guide, you need to have the TensorFlow and OpenCV libraries installed on your system.

Luckily, both TensorFlow and OpenCV are pip-installable:

$ pip install tensorflow
$ pip install opencv-contrib-python

If you need help configuring your development environment for OpenCV, we highly recommend that you read our pip install OpenCV guide — it will have you up and running in minutes.


Need Help Configuring Your Development Environment?

Need help configuring your dev environment? Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch University — you’ll be up and running with this tutorial in minutes.

All that said, are you:

  • Short on time?
  • Learning on your employer’s administratively locked system?
  • Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?
  • Ready to run the code immediately on your Windows, macOS, or Linux system?

Then join PyImageSearch University today!

Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required.

And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux!


Project Structure

We first need to review our project directory structure.

Start by accessing this tutorial’s “Downloads” section to retrieve the source code and example images.

From there, take a look at the directory structure:

├── demo.py
├── inference.py
├── output
├── pyimagesearch
│   ├── __init__.py
│   ├── callbacks.py
│   ├── config.py
│   ├── data.py
│   ├── model.py
│   ├── robust.py
│   └── visualization.py
└── train.py

In this tutorial, we will discuss the model.py file, which implements the code to build our model architecture. Furthermore, we will discuss the robust.py file, which implements PGD and FGSM attacks using the TensorFlow NSL framework and generates adversarial examples, which we will use to evaluate our model’s performance.


Creating the Model: Laying the Foundation for Adversarial Learning


Implementing a Mini VGG Model for Image Classification: A Step-by-Step Guide

Let us start by implementing the architecture of the model we will use for our end-to-end adversarial learning application.

For this tutorial series, we use a smaller version of the Visual Geometry Group (VGG) model architecture. This model will be used to perform image classification on the CIFAR-10 dataset, which we discussed in the previous part of this series.


Code Analysis: Building the Backbone of the Mini VGG Model

Let us open the model.py file and get started.

# import the necessary packages
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Dense
from tensorflow.keras import Sequential

def get_mini_vgg(inputShape, numClasses):
    # initialize the model
    model = Sequential(name="base_model")

    # first CONV => RELU => CONV => RELU => POOL layer set
    model.add(Conv2D(32, (3, 3), padding="same",
        input_shape=inputShape))
    model.add(Activation("relu"))
    model.add(BatchNormalization())
    model.add(Conv2D(32, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))

    # second CONV => RELU => CONV => RELU => POOL layer set
    model.add(Conv2D(64, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization())
    model.add(Conv2D(64, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))

    # first (and only) set of FC => RELU layers
    model.add(Flatten())
    model.add(Dense(512))
    model.add(Activation("relu"))
    model.add(BatchNormalization())
    model.add(Dropout(0.5))

    # classifier head (outputs raw logits; softmax is applied in the loss)
    model.add(Dense(numClasses))

    # return the constructed network architecture
    return model

Importing Key Layers: The Building Blocks of the Model

We start by importing the layers and modules that will allow us to build our model architecture. On Lines 2-9, we import the various layers we will be using.


Defining the Mini VGG Architecture: Crafting the Model Structure

Next, we define the get_mini_vgg function (Lines 11-47), which implements the model architecture for our mini VGG model. It takes as input the shape of our input image (i.e., inputShape) and the total number of classes in our data (i.e., numClasses) (Line 11).

On Line 13, we define the Sequential layer and start adding layers to our model.


Layer-by-Layer Construction: Creating the Convolutional Framework

Then, we add the first CONV => RELU => CONV => RELU => POOL => DROPOUT layer set (Lines 16-24) and then another CONV => RELU => CONV => RELU => POOL => DROPOUT layer set (Lines 27-34).


Finalizing the Model: Adding Fully Connected Layers and Classifier

Finally, we add the fully connected layers with ReLU activation, BatchNorm, and Dropout regularization (Lines 37-41) and the final Dense layer with the numClasses number of nodes (Line 44).


Model Construction Complete: Ready for Adversarial Training

We return our model on Line 47.
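
As a quick sanity check, the model can be built with the CIFAR-10 input shape and compiled as in the following sketch. The optimizer and metric here are illustrative choices, but the from_logits=True loss matches the fact that the final Dense layer outputs raw logits:

from pyimagesearch.model import get_mini_vgg
import tensorflow as tf

# build the mini VGG for CIFAR-10 (32x32 RGB images, 10 classes)
model = get_mini_vgg(inputShape=(32, 32, 3), numClasses=10)

# the final Dense layer outputs logits, so the loss uses from_logits=True
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.summary()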


Harnessing TensorFlow NSL for Advanced Adversarial Attacks


Initiating Adversarial Attack Implementation: Setting the Stage with TensorFlow NSL

Now that we have defined the model architecture, let us go ahead and discuss how the TensorFlow NSL framework can be used to define and implement various adversarial attacks and generate adversarial examples.


Exploring FGSM-Based Adversarial Attack Configurations with NSL

In this section, we will discuss two functions that define the parameters and configurations for our adversarial attacks, use the NSL framework to implement these attacks and generate adversarial examples, and allow us to quantify how the model's performance changes when it is evaluated on adversarial examples.


Diving into the Code: Implementing Adversarial Robustness in TensorFlow

Let us open the robust.py file and get started.

# import the necessary packages
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras.metrics import SparseCategoricalAccuracy
import neural_structured_learning as nsl
import tensorflow as tf

def check_fgsm_robustness(advGradNorm, pgdEpsilon, testSetForAdvModel,
	labelInputName, imageInputName, modelName, model):
	# set up the neighbor config for FGSM
	fgsmNbrConfig = nsl.configs.AdvNeighborConfig(
		adv_grad_norm=advGradNorm,
		adv_step_size=pgdEpsilon,
		clip_value_min=0.0,
		clip_value_max=1.0,
	)

	# create a labelled loss function to calculate the gradient
	labeledLossFn = SparseCategoricalCrossentropy(from_logits=True)

	# initialize perturbed images, labels and predictions
	(perturbedImages, labels, predictions) = [], [], []

	# we want to record the accuracy
	metric = SparseCategoricalAccuracy()

	# loop over test set
	for batch in testSetForAdvModel:
		# record the loss calculation to get the gradient
		with tf.GradientTape() as tape:
			tape.watch(batch)
			losses = labeledLossFn(
				batch[labelInputName],
				model(batch[imageInputName])
			)
			
		# generate the adversarial example
		(fgsmImages, _) = nsl.lib.adversarial_neighbor.gen_adv_neighbor(
			batch[imageInputName],
			losses,
			fgsmNbrConfig,
			gradient_tape=tape
		)

		# update our accuracy metric
		yTrue = batch[labelInputName]
		yPred = model(fgsmImages)
		metric(yTrue, yPred)

		# store images for later use
		perturbedImages.append(fgsmImages)
		labels.append(tf.squeeze(yTrue).numpy())
		predictions.append(
			tf.argmax(tf.nn.softmax(yPred), axis=-1).numpy()
		)

	# calculate the accuracy on FGSM data
	accuracy = metric.result().numpy()
	print(f"[INFO] {modelName} accuracy on FGSM data: {accuracy:0.2f}")

	# return the perturbed images, labels, and predictions
	return (perturbedImages, labels, predictions)

Building the Robustness Checker Function: Analyzing FGSM Attack Performance

We start by importing the SparseCategoricalCrossentropy loss and the SparseCategoricalAccuracy metric from TensorFlow (Lines 2 and 3).


Configuring FGSM Attacks: Defining Adversarial Parameters in NSL

On Lines 4 and 5, we import the neural_structured_learning module and the tensorflow library, respectively.

Next, we define the check_fgsm_robustness function, which checks the performance of our model on images with FGSM-based perturbations (Lines 7-61).


Setting Up the Adversarial Attack Landscape: Inputs and Configurations

The function takes as input the adversarial norm (i.e., advGradNorm), the value of epsilon (i.e., pgdEpsilon), the test set for the adversarial model (i.e., testSetForAdvModel), the label and image name related arguments (i.e., labelInputName, imageInputName), and finally the model related arguments (i.e., modelName, model) (Lines 7 and 8).


Executing the FGSM Attack: Generating and Evaluating Perturbed Images

On Lines 10-15, we use the built-in AdvNeighborConfig class from nsl to define the configuration of our FGSM attack. Note that nsl will use this configuration to formulate the FGSM attack and generate adversarial examples.


Iterative Model Evaluation: Assessing Accuracy on Adversarial Examples

On Lines 18 and 21, we define labeledLossFn and initialize the empty perturbedImages, labels, and predictions lists. On Line 24, we define our metric, which is the SparseCategoricalAccuracy() function.


Leveraging TensorFlow and NSL for Adversarial Example Generation

Next, we iterate over the batches in our test set (i.e., testSetForAdvModel) (Line 27).


Gradient Tracking and Loss Computation: Essential Steps in Adversarial Crafting

We use tf.GradientTape() so TensorFlow can track the gradients of the loss, which we will need later to craft the perturbations (Line 29). On Lines 31-34, we compute the loss between the ground-truth labels (i.e., batch[labelInputName]) and the model's predictions on the input images (i.e., model(batch[imageInputName])) using the labeledLossFn we defined above.


Generating Adversarial Examples: Integrating NSL in Adversarial Workflow

Now, we are ready to generate adversarial examples, which will act as neighbor examples in our NSL-based pipeline.

We use the gen_adv_neighbor function from nsl, which takes as input the batch of input images (i.e., batch[imageInputName]), the loss (i.e., losses), the configuration for our FGSM attack (i.e., fgsmNbrConfig), and the tape parameter to have access to gradient information (i.e., gradient_tape) (Lines 37-42).


Producing FGSM-Perturbed Images: Crafting the Adversarial Samples

This function outputs our adversarial examples perturbed with FGSM attack (i.e., fgsmImages).


Evaluating Model Robustness: Testing Against Adversarial Inputs

Now that we have our adversarial examples, it is time to evaluate the performance of our model. We get the true labels for our batch (i.e., yTrue) (Line 45) and the predicted labels when adversarial samples are passed through our model (i.e., yPred = model(fgsmImages)) (Line 46). Then, we use the pre-defined metric to evaluate our model on these adversarial examples (Line 47).


Assessing Adversarial Impact: Accuracy Metrics and Data Storage

We store the adversarial examples (i.e., fgsmImages) in the perturbedImages list (Line 50), the ground-truth labels in the labels list (Line 51), and the predicted class labels (i.e., the argmax of the softmax output) of our model on the adversarial examples in the predictions list (Lines 52-54).


Finalizing Adversarial Example Production: Output and Model Accuracy Analysis

Finally, we calculate and print the final accuracy on the adversarial examples (Lines 57 and 58) and return the perturbedImages, labels, and model predictions lists that we created (Line 61).
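
For context, a call to this function from an evaluation script might look like the sketch below. The attack parameters, input-key strings, and the testSetForAdvModel and model variables are illustrative placeholders; the real values come from the project's config.py and training code:

from pyimagesearch.robust import check_fgsm_robustness

# testSetForAdvModel and model are assumed to be defined elsewhere
# (batches of {"image": ..., "label": ...} dictionaries and a trained model)
(perturbedImages, labels, predictions) = check_fgsm_robustness(
    advGradNorm="infinity",   # constrain the noise with the L(infinity) norm
    pgdEpsilon=0.05,          # illustrative epsilon
    testSetForAdvModel=testSetForAdvModel,
    labelInputName="label",
    imageInputName="image",
    modelName="base model",
    model=model,
)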


Implementing PGD Adversarial Attacks: Advanced Techniques in TensorFlow NSL

def check_pgd_robustness(advGradNorm, pgdEpsilon, advStepSize,
	pgdIterations, testSetForAdvModel, labelInputName, imageInputName,
	modelName, model):
	# set up the neighbor config for PGD
	pgd_nbr_config = nsl.configs.AdvNeighborConfig(
		adv_grad_norm=advGradNorm,
		adv_step_size=advStepSize,
		pgd_iterations=pgdIterations,
		pgd_epsilon=pgdEpsilon,
		clip_value_min=0.0,
		clip_value_max=1.0,
	)

	# create a loss function for repeated calculation of the gradient
	pgdLossFn = SparseCategoricalCrossentropy(from_logits=True)
	labeledLossFn = pgdLossFn

	# initialize perturbed images, labels and predictions
	(perturbedImages, labels, predictions) = [], [], []

	# we want to record the accuracy
	metric = SparseCategoricalAccuracy()

	# loop over the test set
	for batch in testSetForAdvModel:
		# gradient tape to calculate the loss on the first iteration
		with tf.GradientTape() as tape:
			tape.watch(batch)
			losses = labeledLossFn(
				batch[labelInputName],
				model(batch[imageInputName])
			)
			
		# generate the adversarial examples
		(pgdImages, _) = nsl.lib.adversarial_neighbor.gen_adv_neighbor(
			batch[imageInputName],
			losses,
			pgd_nbr_config,
			gradient_tape=tape,
			pgd_model_fn=model,
			pgd_loss_fn=pgdLossFn,
			pgd_labels=batch[labelInputName],
		)

		# update our accuracy metric
		yTrue = batch[labelInputName]
		yPred = model(pgdImages)
		metric(yTrue, yPred)

		# store images for visualization
		perturbedImages.append(pgdImages)
		labels.append(tf.squeeze(yTrue).numpy())
		predictions.append(
			tf.argmax(tf.nn.softmax(yPred), axis=-1).numpy()
		)

	# calculate the accuracy on PGD data
	accuracy = metric.result().numpy()
	print(f"[INFO] {modelName} accuracy on PGD data: {accuracy:0.2f}")

	# return the perturbed images, labels, and predictions
	return (perturbedImages, labels, predictions)

Defining the PGD Robustness Checker: A Comprehensive Approach for Model Evaluation

Similar to the process discussed above, we also define our check_pgd_robustness function, which checks the performance of our model on images with PGD-based perturbations.


Setting Up PGD Attack Parameters: Tailoring the Adversarial Challenge

It takes as input the adversarial norm (i.e., advGradNorm), the value of epsilon (i.e., pgdEpsilon), the step size for the PGD attack (i.e., advStepSize), the number of iterations in the attack (i.e., pgdIterations), the test set for the adversarial model (i.e., testSetForAdvModel), the label and image name related arguments (i.e., labelInputName, imageInputName), and finally the model-related arguments (i.e., modelName, model) (Lines 63 and 64).


Configuring PGD Attacks: Establishing the Framework for Adversarial Generation

Then, we use the built-in AdvNeighborConfig class from nsl to define the configuration of our PGD attack (Lines 67-74).


Initializing Data for PGD Analysis: Preparing for Adversarial Example Generation

On Lines 77 and 78, we define the pgdLossFn loss function as we had done in the previous function and initialize the empty perturbedImages, labels, and predictions lists (Line 81).


Calculating Loss and Generating Adversarial Examples: The Core of PGD Attacks

We then define our metric, which is SparseCategoricalAccuracy() (Line 84).

Following the same procedure as discussed in the check_fgsm_robustness function, we iterate over the batches in our test set (i.e., testSetForAdvModel) (Line 87).


Leveraging TensorFlow for PGD Attack Execution: Gradient Tracking and Adversarial Crafting

We use tf.GradientTape() so TensorFlow can track the gradients, and we compute the loss between the ground-truth labels and the model's predictions on the original input images using the labeledLossFn we defined above (Lines 89-94).


Evaluating Model Performance on PGD Perturbed Images

We then generate the adversarial examples using the PGD attack with the help of the gen_adv_neighbor function from nsl, which takes as input the batch of input images (i.e., batch[imageInputName]), the loss (i.e., losses), the configuration for our PGD attack (i.e., pgd_nbr_config), the tape parameter to access gradient information (i.e., gradient_tape), the model function (i.e., pgd_model_fn), the loss function (i.e., pgdLossFn), and the labels (i.e., pgd_labels) (Lines 97-105).


Analyzing Model Accuracy on PGD Adversarial Examples: A Critical Assessment

This function outputs our adversarial examples perturbed with PGD attacks (i.e., pgdImages).

Now that we have our adversarial examples, we evaluate the performance of our model.


Storing and Analyzing Adversarial Data: Insights into Model’s Adversarial Robustness

We get the true labels for our batch (i.e., yTrue) (Line 108) and the predicted labels when adversarial samples are passed through our model (i.e., yPred = model(pgdImages)) (Line 109). Then, we use the pre-defined metric to evaluate our model on these adversarial examples (Line 110).

We store the adversarial examples (i.e., pgdImages) in the perturbedImages list (Line 113), the ground-truth labels in the labels list (Line 114), and the predicted class labels (i.e., the argmax of the softmax output) of our model on the adversarial examples in the predictions list (Lines 115-117).


Concluding the PGD Attack Analysis: Summarizing Model’s Response to Adversarial Threats

Finally, we calculate and print the final accuracy on the adversarial examples (Lines 120 and 121) and return the perturbedImages, labels, and model predictions lists that we created (Line 124).
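
Analogously, the PGD variant might be invoked as in the sketch below, again with purely illustrative parameter values and with testSetForAdvModel and model assumed to be defined elsewhere:

from pyimagesearch.robust import check_pgd_robustness

# all values below are illustrative; the real ones live in the project config
(perturbedImages, labels, predictions) = check_pgd_robustness(
    advGradNorm="infinity",   # constrain the noise with the L(infinity) norm
    pgdEpsilon=0.05,          # bound on the total perturbation
    advStepSize=0.01,         # step size for each PGD iteration
    pgdIterations=10,         # number of PGD steps
    testSetForAdvModel=testSetForAdvModel,
    labelInputName="label",
    imageInputName="image",
    modelName="base model",
    model=model,
)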


What's next? We recommend PyImageSearch University.

Course information:
83 total classes • 113+ hours of on-demand code walkthrough videos • Last updated: December 2023
★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled

I strongly believe that if you had the right teacher you could master computer vision and deep learning.

Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science?

That’s not the case.

All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.

If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery.

Inside PyImageSearch University you'll find:

  • 83 courses on essential computer vision, deep learning, and OpenCV topics
  • 83 Certificates of Completion
  • 113+ hours of on-demand video
  • Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques
  • Pre-configured Jupyter Notebooks in Google Colab
  • Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!)
  • Access to centralized code repos for all 532+ tutorials on PyImageSearch
  • Easy one-click downloads for code, datasets, pre-trained models, etc.
  • Access on mobile, laptop, desktop, etc.

Click here to join PyImageSearch University


Summary

In this tutorial, we learned about the two most common adversarial attacks, namely the Projected Gradient Descent (PGD) and Fast Gradient Sign Method (FGSM) attacks.

We discussed the mathematical formulation of these attacks and used the TensorFlow NSL framework to generate adversarial examples using these attacks. Furthermore, we implemented modules to check the change in model performance when used on PGD and FGSM attack-based adversarial samples.


Citation Information

Chandhok, S. “Adversarial Learning with Keras and TensorFlow (Part 3): Exploring Adversarial Attacks Using Neural Structured Learning (NSL),” PyImageSearch, P. Chugh, A. R. Gosthipaty, S. Huot, K. Kidriavsteva, and R. Raha, eds., 2024, https://pyimg.co/7rpen

@incollection{Chandhok_2024_ALwKTF-pt3,
  author = {Shivam Chandhok},
  title = {Adversarial Learning with Keras and TensorFlow (Part 3): Exploring Adversarial Attacks Using Neural Structured Learning (NSL)},
  booktitle = {PyImageSearch},
  editor = {Puneet Chugh and Aritra Roy Gosthipaty and Susan Huot and Kseniia Kidriavsteva and Ritwik Raha},
  year = {2024},
  url = {https://pyimg.co/7rpen},
}





