https://pyimagesearch.com/2020/10/26/targeted-adversarial-attacks-with-keras-and-tensorflow/ | Our tf.keras imports include the following:
Adam optimizer
ResNet50 architecture
SparseCategoricalCrossentropy loss function
ImageNet label decoder function, decode_predictions
Image preprocessing utility, preprocess_input
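For reference, the import block described by this list would look something like the following sketch (the exact ordering in the original script may differ):
# import the necessary packages
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras.applications.resnet50 import decode_predictions
from tensorflow.keras.applications.resnet50 import preprocess_input
import tensorflow as tf
import numpy as np
import argparse
import cv2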
With our imports defined, let’s create a function used to preprocess our input image:
def preprocess_image(image):
    # swap color channels, resize the input image, and add a batch
    # dimension
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, (224, 224))
    image = np.expand_dims(image, axis=0)
    # return the preprocessed image
    return image
The preprocess_image function accepts a single required argument, image, the image we wish to preprocess. Our image is preprocessed by swapping the channel ordering from BGR to RGB, resizing the image to 224×224 pixels, and adding a batch dimension (the call to preprocess_input, which scales the pixel intensities, is applied later, just before the image is passed through the model). The preprocessed image is then returned to the calling function. Our next function, clip_eps, clips values of the input tensor to the range [-eps, eps]:
def clip_eps(tensor, eps):
    # clip the values of the tensor to a given range and return it
    return tf.clip_by_value(tensor, clip_value_min=-eps,
        clip_value_max=eps)
We accomplish this clipping by using TensorFlow’s clip_by_value method. We supply the tensor as an input, and then set -eps as the minimum clip value limit, along with eps as the positive clip value limit. This function will be used when we construct our perturbation vector, ensuring that the noise vector we construct falls within tolerable limits, and most importantly, does not significantly impact the visual quality of the output adversarial image. Keep in mind that adversarial images should be identical (to the human eye) to their original inputs — by clipping tensor values within tolerable limits, we are able to enforce this requirement. Next, we need to define the generate_targeted_adversaries function, which is the workhorse of this Python script:
def generate_targeted_adversaries(model, baseImage, delta, classIdx,
    target, steps=500):
    # iterate over the number of steps
    for step in range(0, steps):
        # record our gradients
        with tf.GradientTape() as tape:
            # explicitly indicate that our perturbation vector should
            # be tracked for gradient updates
            tape.watch(delta)
            # add our perturbation vector to the base image and
            # preprocess the resulting image
            adversary = preprocess_input(baseImage + delta)
Our generate_targeted_adversaries function accepts six parameters, the last of which is optional:
model: Our ResNet50 model (you could swap in a different pre-trained model such as VGG16, MobileNet, etc. if you prefer). |
https://pyimagesearch.com/2020/10/26/targeted-adversarial-attacks-with-keras-and-tensorflow/ | baseImage: The original non-perturbed input image that we wish to construct an adversarial attack for, causing our model to misclassify it.
delta: Our noise vector, which will be added to the baseImage, ultimately causing the misclassification. We’ll update this delta vector by means of gradient descent.
classIdx: The integer class label index we obtained by running the predict_normal.py script.
target: The integer class label index of the class we want the input image to be misclassified as.
steps: Number of gradient descent steps to perform (defaults to 500 steps).
Line 30 starts a loop over the number of steps of gradient descent we are going to apply. For each step, we will record our gradients (Line 32), and specifically, watch the delta variable (Line 35). The delta value is the perturbation vector we are generating. Line 39 creates our image adversary by adding the delta perturbation vector to the baseImage (i.e., original input image), the result of which is our adversary image. We then preprocess the generated adversary. |
https://pyimagesearch.com/2020/10/26/targeted-adversarial-attacks-with-keras-and-tensorflow/ | Next comes the gradient descent portion of applying a targeted adversarial attack:
            # run this newly constructed image tensor through our
            # model and calculate the loss with respect to
            # both the *original* class label and the *target*
            # class label
            predictions = model(adversary, training=False)
            originalLoss = -sccLoss(tf.convert_to_tensor([classIdx]),
                predictions)
            targetLoss = sccLoss(tf.convert_to_tensor([target]),
                predictions)
            totalLoss = originalLoss + targetLoss
            # check to see if we are logging the loss value, and if
            # so, display it to our terminal
            if step % 20 == 0:
                print("step: {}, loss: {}...".format(step,
                    totalLoss.numpy()))
        # calculate the gradients of loss with respect to the
        # perturbation vector
        gradients = tape.gradient(totalLoss, delta)
        # update the weights, clip the perturbation vector, and
        # update its value
        optimizer.apply_gradients([(gradients, delta)])
        delta.assign_add(clip_eps(delta, eps=EPS))
    # return the perturbation vector
    return delta
Line 45 makes predictions on the adversary image (i.e., probability predictions for each class label in the ImageNet dataset). We then compute three loss outputs on Lines 46-50:
originalLoss: Computes the negative sparse categorical cross-entropy loss with respect to the original class label.
targetLoss: Derives the positive sparse categorical cross-entropy loss with respect to the target class label (i.e., what we want the image adversary to be misclassified as, hence the term targeted adversarial attack). We take the negative/positive signs that way because our objective is to minimize the probability for the true class and maximize the probability of the target class.
totalLoss: Sum of the original loss and the targeted loss.
Every 20 steps, we display the loss to our terminal (Lines 54-56). Outside of the with statement now, we calculate the gradients of the loss with respect to our perturbation vector (Line 55). Given the gradients, we apply them to our delta, and then clip values inside delta to our epsilon (EPS) limits. Again, keep in mind that the clip_eps function is used to ensure that the noise vector we construct falls within tolerable limits, and most importantly, does not significantly impact the visual quality of the output adversarial image. Keep in mind that adversarial images should be identical (to the human eye) to their original inputs. Finally, we return the resulting perturbation vector to the calling function — the final delta value will allow us to construct the adversarial attack used to fool our model. |
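Written out, the quantity minimized at each gradient descent step is therefore (using CE to denote the sparse categorical cross-entropy): totalLoss = -CE(classIdx, predictions) + CE(target, predictions). Driving this value down simultaneously pushes the probability of the original class down and the probability of the target class up.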
https://pyimagesearch.com/2020/10/26/targeted-adversarial-attacks-with-keras-and-tensorflow/ | With all of our functions now defined, we can move to parsing command line arguments:
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--input", required=True,
help="path to original input image")
ap.add_argument("-o", "--output", required=True,
help="path to output adversarial image")
ap.add_argument("-c", "--class-idx", type=int, required=True,
help="ImageNet class ID of the predicted label")
ap.add_argument("-t", "--target-class-idx", type=int, required=True,
help="ImageNet class ID of the target adversarial label")
args = vars(ap.parse_args())
Our generate_targeted_adversary.py script requires four command line arguments:
--input: The path to our input image.
--output: The path to our output adversarial image after the targeted adversarial attack has been performed.
--class-idx: The integer class label index from the ImageNet dataset. We obtained this value by running predict_normal.py in the “Non-adversarial image classification results” section of the prior tutorial.
--target-class-idx: The ImageNet class label index of what we want the input image to be incorrectly classified as (you’ll see an example of how to select this class label integer value in the “Step #3: Targeted adversarial attack results” section below).
Let’s move on to a few initializations:
EPS = 2 / 255.0
LR = 5e-3
# load image from disk and preprocess it
print("[INFO] loading image...")
image = cv2.imread(args["input"])
image = preprocess_image(image)
Line 82 defines our epsilon (EPS) value used for clipping tensors when constructing the adversarial image. An EPS value of 2 / 255.0 is a standard value used in adversarial publications and tutorials. We then define our learning rate on Line 84. A value of LR = 5e-3 was obtained by empirical tuning — you may need to update this value when constructing your own targeted adversarial attacks. |
https://pyimagesearch.com/2020/10/26/targeted-adversarial-attacks-with-keras-and-tensorflow/ | Lines 88 and 89 load our input image and then preprocess it using our preprocess_image helper function. Next, we need to load the ResNet model and initialize our loss function:
# load the pre-trained ResNet50 model for running inference
print("[INFO] loading pre-trained ResNet50 model...")
model = ResNet50(weights="imagenet")
# initialize optimizer and loss function
optimizer = Adam(learning_rate=LR)
sccLoss = SparseCategoricalCrossentropy()
# create a tensor based off the input image and initialize the
# perturbation vector (we will update this vector via training)
baseImage = tf.constant(image, dtype=tf.float32)
delta = tf.Variable(tf.zeros_like(baseImage), trainable=True)
In this code block we:
Load ResNet50 from disk with weights pre-trained on the ImageNet dataset
Indicate that the Adam optimizer will be used when applying gradient descent
Initialize our sparse categorical cross-entropy loss function
Convert our input image to a TensorFlow constant (since the input image will not be updated during gradient descent)
Construct a variable for our delta (i.e., the perturbation vector) with the same spatial dimensions as the input image
If you would like more details on these variables and initializations, refer to last week’s tutorial where I cover them in more detail. With all of our variables constructed, we can now apply the targeted adversarial attack:
# generate the perturbation vector to create an adversarial example
print("[INFO] generating perturbation...")
deltaUpdated = generate_targeted_adversaries(model, baseImage, delta,
args["class_idx"], args["target_class_idx"])
# create the adversarial example, swap color channels, and save the
# output image to disk
print("[INFO] creating targeted adversarial example...")
adverImage = (baseImage + deltaUpdated).numpy().squeeze()
adverImage = np.clip(adverImage, 0, 255).astype("uint8")
adverImage = cv2.cvtColor(adverImage, cv2.COLOR_RGB2BGR)
cv2.imwrite(args["output"], adverImage)
A call to generate_targeted_adversaries generates our final deltaUpdated value, which is the perturbation vector used to construct the targeted adversarial attack. From there, we construct adverImage, our final adversarial image, by adding the perturbation vector to the original input image. We then clip any pixel values such that all pixels are in the range [0, 255], followed by converting the image to an unsigned 8-bit integer (such that OpenCV can operate on the image). The final adverImage is then written to disk. The question remains — have we fooled our original ResNet model into making an incorrect prediction? Let’s answer that question in the following code block:
# run inference with this adversarial example, parse the results,
# and display the top-1 predicted result
print("[INFO] running inference on the adversarial example...")
preprocessedImage = preprocess_input(baseImage + deltaUpdated)
predictions = model.predict(preprocessedImage)
predictions = decode_predictions(predictions, top=3)[0]
label = predictions[0][1]
confidence = predictions[0][2] * 100
print("[INFO] label: {} confidence: {:.2f}%".format(label,
confidence))
# write the top-most predicted label on the image along with the
# confidence score
text = "{}: {:.2f}%".format(label, confidence)
cv2.putText(adverImage, text, (3, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5,
(0, 255, 0), 2)
# show the output image
cv2.imshow("Output", adverImage)
cv2.waitKey(0)
Line 120 constructs a preprocessedImage by first constructing the adversarial image and then preprocessing it using ResNet’s preprocessing utility. Once the image is preprocessed, we make predictions on it using our model. |
https://pyimagesearch.com/2020/10/26/targeted-adversarial-attacks-with-keras-and-tensorflow/ | These predictions are then decoded and the top-1 prediction obtained — the class label and corresponding probability are then displayed to our terminal (Lines 121-126). Finally, we annotate our output image with the predicted label and confidence, and then display the output image to our screen. That was quite a lot of code to review! Take a second to congratulate yourself on a successful implementation of targeted adversarial attacks. In the next section, we’ll see the fruits of our hard work.
Step #3: Targeted adversarial attack results
We are now ready to perform a targeted adversarial attack! Make sure you’ve used the “Downloads” section of this tutorial to download the source code and example images. Next, open up the imagenet_class_index.json file and determine the integer index of the ImageNet class label we want to “fool” the network into predicting — the first few lines of the class label index file look like this:
{
"0": [
"n01440764",
"tench"
],
"1": [
"n01443537",
"goldfish"
],
"2": [
"n01484850",
"great_white_shark"
],
"3": [
"n01491361",
"tiger_shark"
],
...
Scroll through the file until you find a class label you want to use. In this case, I have chosen index 189, which corresponds to a “Lakeland terrier” (a type of dog):
...
"189": [
"n02095570",
"Lakeland_terrier"
],
...
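If you would rather not scroll through the file manually, a short helper snippet can search it for you. This is a sketch (the file path and the query string are assumptions for illustration):
# look up an ImageNet class index by (partial) label name
import json

query = "lakeland"  # hypothetical search term

with open("imagenet_class_index.json") as f:
    classIdx = json.load(f)

for idx, (wnid, label) in classIdx.items():
    # print any entry whose label contains the query string
    if query.lower() in label.lower():
        print("index: {}, wordnet ID: {}, label: {}".format(idx, wnid, label))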
From there, you can open up a terminal and execute the following command:
$ python generate_targeted_adversary.py --input pig.jpg --output adversarial.png --class-idx 341 --target-class-idx 189
[INFO] loading image...
[INFO] loading pre-trained ResNet50 model...
[INFO] generating perturbation...
step: 0, loss: 16.111093521118164...
step: 20, loss: 15.760734558105469...
step: 40, loss: 10.959839820861816...
step: 60, loss: 7.728139877319336...
step: 80, loss: 5.327273368835449...
step: 100, loss: 3.629972219467163...
step: 120, loss: 2.3259339332580566...
step: 140, loss: 1.259613037109375...
step: 160, loss: 0.30303144454956055...
step: 180, loss: -0.48499584197998047...
step: 200, loss: -1.158257007598877...
step: 220, loss: -1.759873867034912...
step: 240, loss: -2.321563720703125...
step: 260, loss: -2.910153865814209...
step: 280, loss: -3.470625877380371...
step: 300, loss: -4.021825313568115...
step: 320, loss: -4.589465141296387...
step: 340, loss: -5.136003017425537...
step: 360, loss: -5.707150459289551...
step: 380, loss: -6.300693511962891...
step: 400, loss: -7.014866828918457...
step: 420, loss: -7.820181369781494...
step: 440, loss: -8.733556747436523...
step: 460, loss: -9.780607223510742...
step: 480, loss: -10.977422714233398...
[INFO] creating targeted adversarial example...
[INFO] running inference on the adversarial example...
[INFO] label: Lakeland_terrier confidence: 54.82%
Figure 6: Our original input was correctly classified as “hog” (left); however, our targeted adversarial attack now results in the image being incorrectly classified as a “Lakeland terrier” (right). On the left, you can see our original input image, which was correctly classified as “hog”. |
https://pyimagesearch.com/2020/10/26/targeted-adversarial-attacks-with-keras-and-tensorflow/ | We then applied a targeted adversarial attack (right) that perturbed the input image such that it has been misclassified as a Lakeland terrier (a type of dog) with 54.82% confidence! For reference, a Lakeland terrier looks nothing like a pig:
Figure 7: A “Lakeland terrier” (right) looks nothing like a “hog” (left), thus demonstrating the power of targeted adversarial attacks. In last week’s tutorial on untargeted adversarial attacks, we saw that we have no control over the final predicted class label of the perturbed image; however, by applying a targeted adversarial attack, we are able to control what label is ultimately predicted. |
Summary
In this tutorial, you learned how to perform targeted adversarial learning using Keras, TensorFlow, and Deep Learning. |
https://pyimagesearch.com/2020/10/26/targeted-adversarial-attacks-with-keras-and-tensorflow/ | When applying untargeted adversarial learning, our goal is to perturb an input image such that:
The perturbed image is misclassified by our pre-trained CNN
Yet, to the human eye, the perturbed image is identical to the original
The problem with untargeted adversarial learning is that we have no control over the perturbed output class label. For example, if we have an input image of a “pig”, and we want to perturb that image such that it’s misclassified, we cannot control what the new class label will be. Targeted adversarial learning on the other hand allows us to control what the new class label will be — and it’s super easy to implement, requiring only an update to our loss function computation. So far, we have covered how to construct adversarial attacks, but what if we wanted to defend against them? Is that possible? It certainly is — I’ll cover defending against adversarial attacks in a future blog post. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below! |
https://pyimagesearch.com/2020/11/09/opencv-super-resolution-with-deep-learning/ | In this tutorial you will learn how to perform super resolution in images and real-time video streams using OpenCV and Deep Learning. Today’s blog post is inspired by an email I received from PyImageSearch reader, Hisham:
“Hi Adrian, I read your Deep Learning for Computer Vision with Python book and went through your super resolution implementation with Keras and TensorFlow. It was super helpful, thank you. I was wondering:
Are there any pre-trained super resolution models compatible with OpenCV’s dnn module? Can they work in real-time? If you have any suggestions, that would be a big help.” You’re in luck, Hisham — there are super resolution deep neural networks that are both:
Pre-trained (meaning you don’t have to train them yourself on a dataset)
Compatible with OpenCV
However, OpenCV’s super resolution functionality is actually “hidden” in a submodule named dnn_superres in an obscure function called DnnSuperResImpl_create. The function requires a bit of explanation to use, so I decided to author a tutorial on it; that way everyone can learn how to use OpenCV’s super resolution functionality. By the end of this tutorial, you’ll be able to perform super resolution with OpenCV in both images and real-time video streams! To learn how to use OpenCV for deep learning-based super resolution, just keep reading. |
https://pyimagesearch.com/2020/11/09/opencv-super-resolution-with-deep-learning/ | OpenCV Super Resolution with Deep Learning
In the first part of this tutorial, we will discuss:
What super resolution is
Why we can’t use simple nearest neighbor, linear, or bicubic interpolation to substantially increase the resolution of images
How specialized deep learning architectures can help us achieve super resolution in real-time
From there, I’ll show you how to implement OpenCV super resolution with both:
Images
Real-time video streams
We’ll wrap up this tutorial with a discussion of our results.
What is super resolution?
Super resolution encompasses a set of algorithms and techniques used to enhance, increase, and upsample the resolution of an input image. More simply, take an input image and increase the width and height of the image with minimal (and ideally zero) degradation in quality. That’s a lot easier said than done. Anyone who has ever opened a small image in Photoshop or GIMP and then tried to resize it knows that the output image ends up looking pixelated. That’s because Photoshop, GIMP, Image Magick, OpenCV (via the cv2.resize function), etc. all use classic interpolation techniques and algorithms (ex., nearest neighbor interpolation, linear interpolation, bicubic interpolation) to increase the image resolution. |
https://pyimagesearch.com/2020/11/09/opencv-super-resolution-with-deep-learning/ | These functions “work” in the sense that an input image is presented, the image is resized, and then the resized image is returned to the calling function …
… however, if you increase the spatial dimensions too much, then the output image appears pixelated, has artifacts, and in general, just looks “aesthetically unpleasing” to the human eye. For example, let’s consider the following figure:
Figure 1: On the top we have our original input image. We wish to increase the resolution of the area in the red rectangle. Applying bicubic interpolation to this region yields poor results. On the top we have our original image. The area highlighted in the red rectangle is the area we wish to extract and increase the resolution of (i.e., resize to a larger width and height without degrading the quality of the image patch). On the bottom we have the output of applying bicubic interpolation, the standard interpolation method used for increasing the size of input images (and what we commonly use in cv2.resize when needing to increase the spatial dimensions of an input image). However, take a second to note how pixelated, blurry, and just unreadable the image patch is after applying bicubic interpolation. That raises the question:
Is there a better way to increase the resolution of the image without degrading the quality? The answer is yes — and it’s not magic either. |
https://pyimagesearch.com/2020/11/09/opencv-super-resolution-with-deep-learning/ | By applying novel deep learning architectures, we’re able to generate high resolution images without these artifacts:
Figure 2: On the top we have our original input image. The middle shows the output of applying bicubic interpolation to the area in the red rectangle. Finally, the bottom displays the output of a super resolution deep learning model. The resulting image is significantly more clear. Again, on the top we have our original input image. In the middle we have low quality resizing after applying bicubic interpolation. And on the bottom we have the output of applying our super resolution deep learning model. The difference is like night and day. The output deep neural network super resolution model is crisp, easy to read, and shows minimal signs of resizing artifacts. In the rest of this tutorial, I’ll uncover this “magic” and show you how to perform super resolution with OpenCV! |
https://pyimagesearch.com/2020/11/09/opencv-super-resolution-with-deep-learning/ | OpenCV super resolution models
Figure 3: Example of a super resolution architecture compatible with the OpenCV library (image source). We’ll be utilizing four pre-trained super resolution models in this tutorial. A review of the model architectures, how they work, and the training process of each respective model is outside the scope of this guide (as we’re focusing on implementation only). If you would like to read more about these models, I’ve included their names, implementations, and paper links below:
EDSR: Enhanced Deep Residual Networks for Single Image Super-Resolution (implementation)
ESPCN: Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network (implementation)
FSRCNN: Accelerating the Super-Resolution Convolutional Neural Network (implementation)
LapSRN: Fast and Accurate Image Super-Resolution with Deep Laplacian Pyramid Networks (implementation)
A big thank you to Taha Anwar from BleedAI for putting together his guide on OpenCV super resolution, which curated much of this information — it was immensely helpful when authoring this piece.
Configuring your development environment for super resolution with OpenCV
In order to apply OpenCV super resolution, you must have OpenCV 4.3 (or greater) installed on your system. While the dnn_superres module was implemented in C++ back in OpenCV 4.1.2, the Python bindings were not implemented until OpenCV 4.3. Luckily, OpenCV 4.3+ is pip-installable:
$ pip install opencv-contrib-python
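Once installed, you can quickly confirm that your OpenCV build exposes the dnn_superres submodule. A minimal sanity check might look like this (a sketch, not part of the original tutorial code):
# verify that OpenCV 4.3+ with the dnn_superres submodule is available
import cv2

print(cv2.__version__)
print(hasattr(cv2, "dnn_superres"))  # should print True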
If you need help configuring your development environment for OpenCV 4.3+, I highly recommend that you read my pip install OpenCV guide — it will have you up and running in a matter of minutes. |
https://pyimagesearch.com/2020/11/09/opencv-super-resolution-with-deep-learning/ | Project structure
With our development environment configured, let’s move on to reviewing our project directory structure:
$ tree . --dirsfirst
.
├── examples
│ ├── adrian.png
│ ├── butterfly.png
│ ├── jurassic_park.png
│ └── zebra.png
├── models
│ ├── EDSR_x4.pb
│ ├── ESPCN_x4.pb
│ ├── FSRCNN_x3.pb
│ └── LapSRN_x8.pb
├── super_res_image.py
└── super_res_video.py
2 directories, 10 files
Here you can see that we have two Python scripts to review today:
super_res_image.py: Performs OpenCV super resolution in images loaded from disk
super_res_video.py: Applies super resolution with OpenCV to real-time video streams
We’ll be covering the implementation of both Python scripts in detail later in this post. From there, we have four super resolution models:
EDSR_x4.pb: Model from the Enhanced Deep Residual Networks for Single Image Super-Resolution paper — increases the input image resolution by 4x
ESPCN_x4.pb: Super resolution model from Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network — increases resolution by 4x
FSRCNN_x3.pb: Model from Accelerating the Super-Resolution Convolutional Neural Network — increases image resolution by 3x
LapSRN_x8.pb: Super resolution model from Fast and Accurate Image Super-Resolution with Deep Laplacian Pyramid Networks — increases image resolution by 8x
Finally, the examples directory contains example input images that we’ll be applying OpenCV super resolution to. |
https://pyimagesearch.com/2020/11/09/opencv-super-resolution-with-deep-learning/ | Implementing OpenCV super resolution with images
We are now ready to implement OpenCV super resolution in images! Open up the super_res_image.py file in your project directory structure, and let’s get to work:
# import the necessary packages
import argparse
import time
import cv2
import os
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-m", "--model", required=True,
help="path to super resolution model")
ap.add_argument("-i", "--image", required=True,
help="path to input image we want to increase resolution of")
args = vars(ap.parse_args())
Lines 2-5 import our required Python packages. We’ll use the dnn_superres submodule of cv2 (our OpenCV bindings) to perform super resolution later in this script. From there, Lines 8-13 parse our command line arguments. We only need two command line arguments here:
--model: The path to the input OpenCV super resolution model
--image: The path to the input image that we want to apply super resolution to
Given our super resolution model path, we now need to extract the model name and the model scale (i.e., factor by which we’ll be increasing the image resolution):
# extract the model name and model scale from the file path
modelName = args["model"].split(os.path.sep)[-1].split("_")[0].lower()
modelScale = args["model"].split("_x")[-1]
modelScale = int(modelScale[:modelScale.find(".")])
Line 16 extracts the modelName, which can be EDSR, ESPCN, FSRCNN, or LapSRN. The modelName has to be one of these model names; otherwise, the dnn_superres module and DnnSuperResImpl_create function will not work. We then extract the modelScale from the input --model path (Lines 17 and 18). Both the modelName and modelScale are displayed to our terminal (just in case we need to perform any debugging). |
https://pyimagesearch.com/2020/11/09/opencv-super-resolution-with-deep-learning/ | With the model name and scale parsed, we can now move on to loading the OpenCV super resolution model:
# initialize OpenCV's super resolution DNN object, load the super
# resolution model from disk, and set the model name and scale
print("[INFO] loading super resolution model: {}".format(
args["model"]))
print("[INFO] model name: {}".format(modelName))
print("[INFO] model scale: {}".format(modelScale))
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel(args["model"])
sr.setModel(modelName, modelScale)
We start by instantiating an instance of DnnSuperResImpl_create, which is our actual super resolution object. A call to readModel loads our OpenCV super resolution model from disk. We then have to make a call to setModel to explicitly set the modelName and modelScale. Failing to either read the model from disk or set the model name and scale will result in our super resolution script either erroring out or segfaulting. Let’s now perform super resolution with OpenCV:
# load the input image from disk and display its spatial dimensions
image = cv2.imread(args["image"])
print("[INFO] w: {}, h: {}".format(image.shape[1], image.shape[0]))
# use the super resolution model to upscale the image, timing how
# long it takes
start = time.time()
upscaled = sr.upsample(image)
end = time.time()
print("[INFO] super resolution took {:.6f} seconds".format(
end - start))
# show the spatial dimensions of the super resolution image
print("[INFO] w: {}, h: {}".format(upscaled.shape[1],
upscaled.shape[0]))
Lines 31 and 32 load our input --image from disk and display the original width and height. From there, Line 37 makes a call to sr.upsample, supplying the original input image. The upsample function, as the name suggests, performs a forward pass of our OpenCV super resolution model, returning the upscaled image. We take care to measure the wall time for how long the super resolution process takes, followed by displaying the new width and height of our upscaled image to our terminal. For comparison, let’s apply standard bicubic interpolation and time how long it takes:
# resize the image using standard bicubic interpolation
start = time.time()
bicubic = cv2.resize(image, (upscaled.shape[1], upscaled.shape[0]),
interpolation=cv2.INTER_CUBIC)
end = time.time()
print("[INFO] bicubic interpolation took {:.6f} seconds".format(
end - start))
Bicubic interpolation is the standard algorithm used to increase the resolution of an image. |
https://pyimagesearch.com/2020/11/09/opencv-super-resolution-with-deep-learning/ | This method is implemented in nearly every image processing tool and library, including Photoshop, GIMP, Image Magick, PIL/PIllow, OpenCV, Microsoft Word, Google Docs, etc. — if a piece of software needs to manipulate images, it more than likely implements bicubic interpolation. Finally, let’s display the output results to our screen:
# show the original input image, bicubic interpolation image, and
# super resolution deep learning output
cv2.imshow("Original", image)
cv2.imshow("Bicubic", bicubic)
cv2.imshow("Super Resolution", upscaled)
cv2.waitKey(0)
Here we display our original input image, the bicubic resized image, and finally our upscaled super resolution image. We display the three results to our screen so we can easily compare results.
OpenCV super resolution results
Start by making sure you’ve used the “Downloads” section of this tutorial to download the source code, example images, and pre-trained super resolution models. From there, open up a terminal, and execute the following command:
$ python super_res_image.py --model models/EDSR_x4.pb --image examples/adrian.png
[INFO] loading super resolution model: models/EDSR_x4.pb
[INFO] model name: edsr
[INFO] model scale: 4
[INFO] w: 100, h: 100
[INFO] super resolution took 1.183802 seconds
[INFO] w: 400, h: 400
[INFO] bicubic interpolation took 0.000565 seconds
Figure 5: Applying the EDSR model for super resolution with OpenCV. In the top we have our original input image. In the middle we have applied the standard bicubic interpolation image to increase the dimensions of the image. Finally, the bottom shows the output of the EDSR super resolution model (increasing the image dimensions by 4x). If you study the two images, you’ll see that the super resolution images appear “more smooth.” |
https://pyimagesearch.com/2020/11/09/opencv-super-resolution-with-deep-learning/ | In particular, take a look at my forehead region. In the bicubic image, there is a lot of pixelation going on — but in the super resolution image, my forehead is significantly more smooth and less pixelated. The downside to the EDSR super resolution model is that it’s a bit slow. Standard bicubic interpolation could take a 100x100px image and increase it to 400x400px at the rate of > 1700 frames per second. EDSR, on the other hand, takes greater than one second to perform the same upsampling. Therefore, EDSR is not suitable for real-time super resolution (at least not without a GPU). Note: All timings here were collected with a 3 GHz Intel Xeon W processor. A GPU was not used. Let’s try another image, this one of a butterfly:
$ python super_res_image.py --model models/ESPCN_x4.pb --image examples/butterfly.png
[INFO] loading super resolution model: models/ESPCN_x4.pb
[INFO] model name: espcn
[INFO] model scale: 4
[INFO] w: 400, h: 240
[INFO] super resolution took 0.073628 seconds
[INFO] w: 1600, h: 960
[INFO] bicubic interpolation took 0.000833 seconds
Figure 6: The result of applying the ESPCN for super resolution with OpenCV. Again, on the top we have our original input image. |
https://pyimagesearch.com/2020/11/09/opencv-super-resolution-with-deep-learning/ | After applying standard bicubic interpolation we have the middle image. And on the bottom we have the output of applying the ESPCN super resolution model. The best way you can see the difference between these two outputs is to study the butterfly’s wings. Notice how the bicubic interpolation method looks more noisy and distorted, while the ESPCN output image is significantly more smooth. The good news here is that the ESPCN model is significantly faster, capable of taking a 400x240px image and upsampling it to a 1600x960px image at a rate of roughly 13 FPS on a CPU. The next example applies the FSRCNN super resolution model:
$ python super_res_image.py --model models/FSRCNN_x3.pb --image examples/jurassic_park.png
[INFO] loading super resolution model: models/FSRCNN_x3.pb
[INFO] model name: fsrcnn
[INFO] model scale: 3
[INFO] w: 350, h: 197
[INFO] super resolution took 0.082049 seconds
[INFO] w: 1050, h: 591
[INFO] bicubic interpolation took 0.001485 seconds
Figure 7: Applying the FSRCNN model for OpenCV super resolution. Pause a second and take a look at Alan Grant’s jacket (the man wearing the blue denim shirt). In the bicubic interpolation image, this shirt is grainy. But in the FSRCNN output, the jacket is far more smoothed. Similar to the ESPCN super resolution model, FSRCNN took only 0.08 seconds to upsample the image (a rate of ~12 FPS). |
https://pyimagesearch.com/2020/11/09/opencv-super-resolution-with-deep-learning/ | Finally, let’s look at the LapSRN model, which will increase our input image resolution by 8x:
$ python super_res_image.py --model models/LapSRN_x8.pb --image examples/zebra.png
[INFO] loading super resolution model: models/LapSRN_x8.pb
[INFO] model name: lapsrn
[INFO] model scale: 8
[INFO] w: 400, h: 267
[INFO] super resolution took 4.759974 seconds
[INFO] w: 3200, h: 2136
[INFO] bicubic interpolation took 0.008516 seconds
Figure 8: Using the LapSRN model to increase the image resolution by 8x with OpenCV super resolution. Perhaps unsurprisingly, this model is the slowest, taking over 4.5 seconds to increase the resolution of a 400x267px input to an output of 3200x2136px. Given that we are increasing the spatial resolution by 8x, this timing result makes sense. That said, the output of the LapSRN super resolution model is fantastic. Look at the zebra stripes between the bicubic interpolation output (middle) and the LapSRN output (bottom). The stripes on the zebra are crisp and defined, unlike the bicubic output.
Implementing real-time super resolution with OpenCV
We’ve seen super resolution applied to single images — but what about real-time video streams? Is it possible to perform OpenCV super resolution in real-time? The answer is yes, it’s absolutely possible — and that’s exactly what our super_res_video.py script does. Note: Much of the super_res_video.py script is similar to our super_res_image.py script, so I will spend less time explaining the real-time implementation. |
https://pyimagesearch.com/2020/11/09/opencv-super-resolution-with-deep-learning/ | Refer back to the previous section on “Implementing OpenCV super resolution with images” if you need additional help understanding the code. Let’s get started:
# import the necessary packages
from imutils.video import VideoStream
import argparse
import imutils
import time
import cv2
import os
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-m", "--model", required=True,
help="path to super resolution model")
args = vars(ap.parse_args())
Lines 2-7 import our required Python packages. These are all near-identical to our previous script on super resolution with images, with the exception of my imutils library and the VideoStream implementation from it. We then parse our command line arguments. Only a single argument is required, --model, which is the path to our input super resolution model. Next, let’s extract the model name and model scale, followed by loading our OpenCV super resolution model from disk:
# extract the model name and model scale from the file path
modelName = args["model"].split(os.path.sep)[-1].split("_")[0].lower()
modelScale = args["model"].split("_x")[-1]
modelScale = int(modelScale[:modelScale.find(".")])
# initialize OpenCV's super
# resolution model from disk, and set the model name and scale
print("[INFO] loading super resolution model: {}".format(
args["model"]))
print("[INFO] model name: {}".format(modelName))
print("[INFO] model scale: {}".format(modelScale))
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel(args["model"])
sr.setModel(modelName, modelScale)
# initialize the video stream and allow the camera sensor to warm up
print("[INFO] starting video stream...")
vs = VideoStream(src=0).start()
time.sleep(2.0)
Lines 16-18 extract our modelName and modelScale from the input --model file path. Using that information, we instantiate our super resolution (sr) object, load the model from disk, and set the model name and scale (Lines 26-28). |
https://pyimagesearch.com/2020/11/09/opencv-super-resolution-with-deep-learning/ | We then initialize our VideoStream (such that we can read frames from our webcam) and allow the camera sensor to warm up. With our initializations taken care of, we can now loop over frames from the VideoStream:
# loop over the frames from the video stream
while True:
    # grab the frame from the threaded video stream and resize it
    # to have a maximum width of 300 pixels
    frame = vs.read()
    frame = imutils.resize(frame, width=300)
    # upscale the frame using the super resolution model and then
    # bicubic interpolation (so we can visually compare the two)
    upscaled = sr.upsample(frame)
    bicubic = cv2.resize(frame,
        (upscaled.shape[1], upscaled.shape[0]),
        interpolation=cv2.INTER_CUBIC)
Line 36 starts looping over frames from our video stream. We then grab the next frame and resize it to have a width of 300px. We perform this resizing operation for visualization/example purposes. Recall that the point of this tutorial is to apply super resolution with OpenCV. Therefore, our example should show how to take a low resolution input and then generate a high resolution output (which is exactly why we are reducing the resolution of the frame). Line 44 upscales the input frame using our OpenCV super resolution model, resulting in the upscaled image. Lines 45-47 apply basic bicubic interpolation so we can compare the two methods. Our final code block displays the results to our screen:
    # show the original frame, bicubic interpolation frame, and super
    # resolution frame
    cv2.imshow("Original", frame)
    cv2.imshow("Bicubic", bicubic)
    cv2.imshow("Super Resolution", upscaled)
    key = cv2.waitKey(1) & 0xFF
    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break
# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
Here we display the original frame, bicubic interpolation output, as well as the upscaled output from our super resolution model. We continue processing and displaying frames to our screen until an OpenCV window is in focus and the q key is pressed, causing our Python script to quit/exit. |
https://pyimagesearch.com/2020/11/09/opencv-super-resolution-with-deep-learning/ | Finally, we perform a bit of cleanup by closing all windows opened by OpenCV and stopping our video stream.
Real-time OpenCV super resolution results
Let’s now apply OpenCV super resolution in real-time video streams! Make sure you’ve used the “Downloads” section of this tutorial to download the source code, example images, and pre-trained models. From there, you can open up a terminal and execute the following command:
$ python super_res_video.py --model models/FSRCNN_x3.pb
[INFO] loading super resolution model: models/FSRCNN_x3.pb
[INFO] model name: fsrcnn
[INFO] model scale: 3
[INFO] starting video stream...
Here you can see that I’m able to run the FSRCNN model in real-time on my CPU (no GPU required!). Furthermore, if you compare the result of bicubic interpolation with super resolution, you’ll see that the super resolution output is much cleaner.
Suggestions
It’s hard to show all the subtleties that super resolution gives us in a blog post with limited dimensions to show example images and video, so I strongly recommend that you download the code/models and study the outputs close-up. What's next? We recommend PyImageSearch University. Course information:
Summary
In this tutorial you learned how to implement OpenCV super resolution in both images and real-time video streams. Basic image resizing algorithms such as nearest neighbor interpolation, linear interpolation, and bicubic interpolation can only increase the resolution of an input image to a certain factor — afterward, image quality degrades to the point where images look pixelated, and in general, the resized image is just aesthetically unpleasing to the human eye. Deep learning super resolution models are able to produce these higher resolution images while at the same time helping prevent much of these pixelations, artifacts, and unpleasing results. That said, you need to set the expectation that there are no magical algorithms like you see in TV/movies that take a blurry, thumbnail-sized image and resize it to be a poster that you could print out and hang on your wall — that simply isn’t possible. That said, OpenCV’s super resolution module can be used to apply super resolution. Whether or not that’s appropriate for your pipeline is something that should be tested:
Try first using cv2.resize and standard interpolation algorithms (and time how long the resizing takes). Then, run the same operation, but instead swap in OpenCV’s super resolution module (and again, time how long the resizing takes). Compare both the output and the amount of time it took both standard interpolation and OpenCV super resolution to run. |
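As a rough sketch of that comparison (the image path, model path, and 4x scale below are assumptions borrowed from the EDSR example earlier in this post):
# compare bicubic interpolation vs. OpenCV super resolution timings
import time
import cv2

image = cv2.imread("examples/adrian.png")

# standard bicubic upscale (4x)
start = time.time()
bicubic = cv2.resize(image, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)
print("[INFO] bicubic took {:.6f} seconds".format(time.time() - start))

# deep learning super resolution upscale (4x)
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("models/EDSR_x4.pb")
sr.setModel("edsr", 4)
start = time.time()
upscaled = sr.upsample(image)
print("[INFO] super resolution took {:.6f} seconds".format(time.time() - start))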
https://pyimagesearch.com/2020/11/09/opencv-super-resolution-with-deep-learning/ | From there, select the resizing mode that achieves the best balance between the quality of the output image along with the time it took for the resizing to take place. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below! |
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | In this tutorial you will learn how to implement Generative Adversarial Networks (GANs) using Keras and TensorFlow. Generative Adversarial Networks were first introduced by Goodfellow et al. in their 2014 paper, Generative Adversarial Networks. These networks can be used to generate synthetic (i.e., fake) images that are perceptually near identical to their ground-truth authentic originals. In order to generate synthetic images, we make use of two neural networks during training:
A generator that accepts an input vector of randomly generated noise and produces an output “imitation” image that looks similar, if not identical, to the authentic image
A discriminator or adversary that attempts to determine if a given image is “authentic” or “fake”
By training these networks at the same time, one giving feedback to the other, we can learn to generate synthetic images. Inside this tutorial we’ll be implementing a variation of Radford et al.’s paper, Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks — or more simply, DCGANs. As we’ll find out, training GANs can be a notoriously hard task, so we’ll implement a number of best practices recommended by both Radford et al. and Francois Chollet (creator of Keras and deep learning scientist at Google). By the end of this tutorial, you’ll have a fully functioning GAN implementation. |
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | To learn how to implement Generative Adversarial Networks (GANs) with Keras and TensorFlow, just keep reading.
GANs with Keras and TensorFlow
Note: This tutorial is a chapter from my book Deep Learning for Computer Vision with Python. If you enjoyed this post and would like to learn more about deep learning applied to computer vision, be sure to give my book a read — I have no doubt it will take you from deep learning beginner all the way to expert. In the first part of this tutorial, we’ll discuss what Generative Adversarial Networks are, including how they are different from more “vanilla” network architectures you have seen before for classification and regression. From there we’ll discuss the general GAN training process, including some guidelines and best practices you should follow when training your own GANs. Next, we’ll review our directory structure for the project and then implement our GAN architecture using Keras and TensorFlow. Once our GAN is implemented, we’ll train it on the Fashion MNIST dataset, thereby allowing us to generate fake/synthetic fashion apparel images. Finally, we’ll wrap up this tutorial on Generative Adversarial Networks with a discussion of our results.
What are Generative Adversarial Networks (GANs)? |
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | Figure 1: When training our GAN, the goal is for the generator to become progressively better and better at generating synthetic images, to the point where the discriminator is unable to tell the difference between the real vs. synthetic data (image source). The quintessential explanation of GANs typically involves some variant of two people working in collusion to forge a set of documents, replicate a piece of artwork, or print counterfeit money — the counterfeit money printer is my personal favorite, and the one used by Chollet in his work. In this example, we have two people:
Jack, the counterfeit printer (the generator)
Jason, an employee of the U.S. Treasury (which is responsible for printing money in the United States), who specializes in detecting counterfeit money (the discriminator)
Jack and Jason were childhood friends, both growing up without much money in the rough parts of Boston. After much hard work, Jason was awarded a college scholarship — Jack was not, and over time started to turn toward illegal ventures to make money (in this case, creating counterfeit money). Jack knew he wasn’t very good at generating counterfeit money, but he felt that with the proper training, he could replicate bills that were passable in circulation. One day, after a few too many pints at a local pub during the Thanksgiving holiday, Jason let it slip to Jack that he wasn’t happy with his job. He was underpaid. His boss was nasty and spiteful, often yelling and embarrassing Jason in front of other employees. Jason was even thinking of quitting. Jack saw an opportunity to use Jason’s access at the U.S. Treasury to create an elaborate counterfeit printing scheme. |
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | Their conspiracy worked like this:
Jack, the counterfeit printer, would print fake bills and then mix both the fake bills and real money together, then show them to the expert, Jason. Jason would sort through the bills, classifying each bill as “fake” or “authentic,” giving feedback to Jack along the way on how he could improve his counterfeit printing. At first, Jack is doing a pretty poor job at printing counterfeit money. But over time, with Jason’s guidance, Jack eventually improves to the point where Jason is no longer able to spot the difference between the bills. By the end of this process, both Jack and Jason have stacks of counterfeit money that can fool most people.
The general GAN training procedure
Figure 2: The steps involved in training a Generative Adversarial Network (GAN) with Keras and TensorFlow. We’ve discussed what GANs are in terms of an analogy, but what is the actual procedure to train them? Most GANs are trained using a six-step process. To start (Step 1), we randomly generate a vector (i.e., noise). We pass this noise through our generator, which generates an actual image (Step 2). |
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | We then sample authentic images from our training set and mix them with our synthetic images (Step 3). The next step (Step 4) is to train our discriminator using this mixed set. The goal of the discriminator is to correctly label each image as “real” or “fake.” Next, we’ll once again generate random noise, but this time we’ll purposely label each noise vector as a “real image” (Step 5). We’ll then train the GAN using the noise vectors and “real image” labels even though they are not actual real images (Step 6). The reason this process works is due to the following:
We have frozen the weights of the discriminator at this stage, implying that the discriminator is not learning when we update the weights of the generator.
We’re trying to “fool” the discriminator into being unable to determine which images are real vs. synthetic.
The feedback from the discriminator will allow the generator to learn how to produce more authentic images.
If you’re confused with this process, I would continue reading through our implementation covered later in this tutorial — seeing a GAN implemented in Python and then explained makes it easier to understand the process.
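To make those six steps more concrete, here is a heavily simplified training-loop sketch. The generator, discriminator, and gan (generator plus frozen discriminator) models, along with the trainImages array, are assumed to already exist and be compiled; the hyperparameter values are placeholders. This is an outline of the procedure, not the tutorial's actual implementation:
# assumptions: generator, discriminator, and gan are compiled Keras models,
# and trainImages holds real training images scaled to match the generator's
# output range
import numpy as np

latentDim = 100   # size of the random noise vector (assumption)
batchSize = 32    # assumption
numSteps = 10000  # assumption

for step in range(numSteps):
    # Steps 1 and 2: sample random noise and generate synthetic images
    noise = np.random.normal(0, 1, size=(batchSize, latentDim))
    fakeImages = generator.predict(noise)
    # Step 3: mix real and synthetic images (real = 1, fake = 0)
    idxs = np.random.randint(0, trainImages.shape[0], batchSize)
    X = np.concatenate([trainImages[idxs], fakeImages])
    y = np.concatenate([np.ones((batchSize, 1)), np.zeros((batchSize, 1))])
    # Step 4: train the discriminator on the mixed batch
    discLoss = discriminator.train_on_batch(X, y)
    # Steps 5 and 6: sample fresh noise, purposely label it as "real", and
    # train the generator through the gan model (the discriminator's weights
    # are frozen inside gan)
    noise = np.random.normal(0, 1, size=(batchSize, latentDim))
    ganLoss = gan.train_on_batch(noise, np.ones((batchSize, 1)))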
Guidelines and best practices when training GANs
Figure 3: Generative Adversarial Networks are incredibly hard to train due to the evolving loss landscape. |
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | Here are some tips to help you successfully train your GANs (image source). GANs are notoriously hard to train due to an evolving loss landscape. At each iteration of our algorithm we are:
Generating random images and then training the discriminator to correctly distinguish the two
Generating additional synthetic images, but this time purposely trying to fool the discriminator
Updating the weights of the generator based on the feedback of the discriminator, thereby allowing us to generate more authentic images
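To make these iteration steps concrete, here is a minimal sketch of a single training iteration. This is an illustrative sketch only; it assumes NumPy is imported as np, a batch of real images is available in imageBatch, a BATCH_SIZE constant is defined, and gen, disc, and gan are compiled Keras models like the ones we build later in this tutorial:
# one GAN training iteration (minimal sketch)
noise = np.random.uniform(-1, 1, size=(BATCH_SIZE, 100))      # Step 1: sample random noise vectors
fakeImages = gen.predict(noise)                               # Step 2: generator turns noise into images
X = np.concatenate([imageBatch, fakeImages])                  # Step 3: mix real and synthetic images
y = np.array(([1] * BATCH_SIZE) + ([0] * BATCH_SIZE))         # real = 1, fake = 0
discLoss = disc.train_on_batch(X, y)                          # Step 4: train the discriminator
noise = np.random.uniform(-1, 1, size=(BATCH_SIZE, 100))      # Step 5: fresh noise, purposely labeled "real"
ganLoss = gan.train_on_batch(noise, np.ones((BATCH_SIZE,)))   # Step 6: train the generator via the GAN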
From this process you’ll notice there are two losses we need to observe: one loss for the discriminator and a second loss for the generator. And since the loss landscape of the generator can be changed based on the feedback from the discriminator, we end up with a dynamic system. When training GANs, our goal is not to seek a minimum loss value but instead to find some equilibrium between the two (Chollet 2017). This concept of finding an equilibrium may make sense on paper, but once you try to implement and train your own GANs, you’ll find that this is a nontrivial process. In their paper, Radford et al. recommend the following architecture guidelines for more stable GANs:
Replace any pooling layers with strided convolutions (see this tutorial for more information on convolutions and strided convolutions).
Use batch normalization in both the generator and discriminator.
Remove fully-connected layers in deeper networks.
Use ReLU in the generator except for the final layer, which will utilize tanh.
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | Use Leaky ReLU in the discriminator. In his book, Francois Chollet then provides additional recommendations on training GANs:
Sample random vectors from a normal distribution (i.e., Gaussian distribution) rather than a uniform distribution.
Add dropout to the discriminator.
Add noise to the class labels when training the discriminator.
To reduce checkerboard pixel artifacts in the output image, use a kernel size that is divisible by the stride when utilizing convolution or transposed convolution in both the generator and discriminator.
If your adversarial loss rises dramatically while your discriminator loss falls to zero, try reducing the learning rate of the discriminator and increasing the dropout of the discriminator.
Keep in mind that these are all just heuristics found to work in a number of situations — we’ll be using some of the techniques suggested by both Radford et al. and Chollet, but not all of them. It is possible, and even probable, that the techniques listed here will not work on your GANs. Take the time now to set your expectations that you’ll likely be running orders of magnitude more experiments when tuning the hyperparameters of your GANs as compared to more basic classification or regression tasks.
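As one concrete example of these heuristics, the “add noise to the class labels” tip can be implemented in just a couple of lines. The snippet below is a hypothetical sketch (we do not apply it in this tutorial’s training script; y, X, BATCH_SIZE, and disc are placeholder names for the discriminator labels, the mixed image batch, the batch size, and the compiled discriminator):
# hypothetical sketch: soften the discriminator's hard 0/1 labels with a bit of noise
y = np.array(([1.0] * BATCH_SIZE) + ([0.0] * BATCH_SIZE))
y += 0.05 * np.random.uniform(size=y.shape)
discLoss = disc.train_on_batch(X, y)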
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | Configuring your development environment to train GANs with Keras and TensorFlow
We’ll be using Keras and TensorFlow to implement and train our GANs. I recommend you follow either of these two guides to install TensorFlow and Keras on your system:
How to install TensorFlow 2.0 on Ubuntu
How to install TensorFlow 2.0 on macOS
Either tutorial will help you configure your system with all the necessary software for this blog post in a convenient Python virtual environment. Having problems configuring your development environment? Figure 4: Having trouble configuring your dev environment? Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch Plus — you’ll be up and running with this tutorial in a matter of minutes. All that said, are you:
Short on time?
Learning on your employer’s administratively locked system?
Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?
Ready to run the code right now on your Windows, macOS, or Linux system?
Then join PyImageSearch Plus today! Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides that are pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required.
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux! Project structure
Now that we understand the fundamentals of Generative Adversarial Networks, let’s review our directory structure for the project. Make sure you use the “Downloads” section of this tutorial to download the source code to our GAN project:
$ tree . --dirsfirst
. ├── output
│ ├── epoch_0001_output.png
│ ├── epoch_0001_step_00000.png
│ ├── epoch_0001_step_00025.png
...
│ ├── epoch_0050_step_00300.png
│ ├── epoch_0050_step_00400.png
│ └── epoch_0050_step_00500.png
├── pyimagesearch
│ ├── __init__.py
│ └── dcgan.py
└── dcgan_fashion_mnist.py
3 directories, 516 files
The dcgan.py file inside the pyimagesearch module contains the implementation of our GAN in Keras and TensorFlow. The dcgan_fashion_mnist.py script will take our GAN implementation and train it on the Fashion MNIST dataset, thereby allowing us to generate “fake” examples of clothing using our GAN. The output of the GAN after every set number of steps/epochs will be saved to the output directory, allowing us to visually monitor and validate that the GAN is learning how to generate fashion items. Implementing our “generator” with Keras and TensorFlow
Now that we’ve reviewed our project directory structure, let’s get started implementing our Generative Adversarial Network using Keras and TensorFlow. Open up the dcgan.py file in our project directory structure, and let’s get started:
# import the necessary packages
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Conv2DTranspose
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Reshape
Lines 2-10 import our required Python packages. All of these classes should look fairly familiar to you, especially if you’ve read my Keras and TensorFlow tutorials or my book Deep Learning for Computer Vision with Python. |
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | The only exception may be the Conv2DTranspose class. Transposed convolutional layers, sometimes referred to as fractionally-strided convolution or (incorrectly) deconvolution, are used when we need a transform going in the opposite direction of a normal convolution. The generator of our GAN will accept an N dimensional input vector (i.e., a list of numbers, but a volume like an image) and then transform the N dimensional vector into an output image. This process implies that we need to reshape and then upscale this vector into a volume as it passes through the network — to accomplish this reshaping and upscaling, we’ll need transposed convolution. We can thus look at transposed convolution as the method to:
Accept an input volume from a previous layer in the network
Produce an output volume that is larger than the input volume
Maintain a connectivity pattern between the input and output
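To see that upscaling behavior concretely, here is a quick standalone shape check (a sketch separate from the DCGAN class we implement below):
import tensorflow as tf
from tensorflow.keras.layers import Conv2DTranspose
# a 7x7x64 input volume is upsampled to 14x14x32 by a 2x2-strided transposed convolution
x = tf.random.normal((1, 7, 7, 64))
y = Conv2DTranspose(32, (5, 5), strides=(2, 2), padding="same")(x)
print(y.shape)   # (1, 14, 14, 32)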
In essence our transposed convolution layer will reconstruct our target spatial resolution and perform a normal convolution operation, utilizing fancy zero-padding techniques to ensure our output spatial dimensions are met. To learn more about transposed convolution, take a look at the Convolution arithmetic tutorial in the Theano documentation along with An introduction to different Types of Convolutions in Deep Learning By Paul-Louis Pröve. Let’s now move into implementing our DCGAN class:
class DCGAN:
@staticmethod
def build_generator(dim, depth, channels=1, inputDim=100,
outputDim=512):
# initialize the model along with the input shape to be
# "channels last" and the channels dimension itself
model = Sequential()
inputShape = (dim, dim, depth)
chanDim = -1
Here we define the build_generator function inside DCGAN. The build_generator accepts a number of arguments:
dim: The target spatial dimensions (width and height) of the generator after reshaping
depth: The target depth of the volume after reshaping
channels: The number of channels in the output volume from the generator (i.e., 1 for grayscale images and 3 for RGB images)
inputDim: Dimensionality of the randomly generated input vector to the generator
outputDim: Dimensionality of the output fully-connected layer from the randomly generated input vector
The usage of these parameters will become more clear as we define the body of the network in the next code block. Line 19 defines the inputShape of the volume after we reshape it from the fully-connected layer. Line 20 sets the channel dimension (chanDim), which we assume to be “channels-last” ordering (the standard channel ordering for TensorFlow). |
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | Below we can find the body of our generator network:
# first set of FC => RELU => BN layers
model.add(Dense(input_dim=inputDim, units=outputDim))
model.add(Activation("relu"))
model.add(BatchNormalization())
# second set of FC => RELU => BN layers, this time preparing
# the number of FC nodes to be reshaped into a volume
model.add(Dense(dim * dim * depth))
model.add(Activation("relu"))
model.add(BatchNormalization())
Lines 23-25 define our first set of FC => RELU => BN layers — applying batch normalization to stabilize GAN training is a guideline from Radford et al. ( see the “Guidelines and best practices when training GANs” section above). Notice how our FC layer will have an input dimension of inputDim (the randomly generated input vector) and then output dimensionality of outputDim. Typically outputDim will be larger than inputDim. Lines 29-31 apply a second set of FC => RELU => BN layers, but this time we prepare the number of nodes in the FC layer to equal the number of units in inputShape (Line 29). Even though we are still utilizing a flattened representation, we need to ensure the output of this FC layer can be reshaped to our target volume size (i.e., inputShape). The actual reshaping takes place in the next code block:
# reshape the output of the previous layer set, upsample +
# apply a transposed convolution, RELU, and BN
model.add(Reshape(inputShape))
model.add(Conv2DTranspose(32, (5, 5), strides=(2, 2),
padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization(axis=chanDim))
A call to Reshape while supplying the inputShape allows us to create a 3D volume from the fully-connected layer on Line 29. Again, this reshaping is only possible due to the fact that the number of output nodes in the FC layer matches the target inputShape. We now reach an important guideline when training your own GANs:
To increase spatial resolution, use a transposed convolution with a stride > 1.
To create a deeper GAN without increasing spatial resolution, you can use either standard convolution or transposed convolution (but keep the stride equal to 1).
Here, our transposed convolution layer is learning 32 filters, each of which is 5×5, while applying a 2×2 stride — since our stride is > 1, we can increase our spatial resolution.
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | Let’s apply another transposed convolution:
# apply another upsample and transposed convolution, but
# this time output the TANH activation
model.add(Conv2DTranspose(channels, (5, 5), strides=(2, 2),
padding="same"))
model.add(Activation("tanh"))
# return the generator model
return model
Lines 43 and 44 apply another transposed convolution, again increasing the spatial resolution, but taking care to ensure the number of filters learned is equal to the target number of channels (1 for grayscale and 3 for RGB). We then apply a tanh activation function per the recommendation of Radford et al. The model is then returned to the calling function on Line 48. Understanding the “generator” in our GAN
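The quickest way to inspect the generator is to build it and call summary() on the result. Here is a short sketch (it assumes dcgan.py is saved inside the pyimagesearch module, as shown in the project structure above):
from pyimagesearch.dcgan import DCGAN
# build the generator with the same parameters we use later when training on Fashion MNIST
gen = DCGAN.build_generator(7, 64, channels=1)
gen.summary()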
Assuming dim=7, depth=64, channels=1, inputDim=100, and outputDim=512 (as we will use when training our GAN on Fashion MNIST later in this tutorial), I have included the model summary below:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 512) 51712
_________________________________________________________________
activation (Activation) (None, 512) 0
_________________________________________________________________
batch_normalization (BatchNo (None, 512) 2048
_________________________________________________________________
dense_1 (Dense) (None, 3136) 1608768
_________________________________________________________________
activation_1 (Activation) (None, 3136) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 3136) 12544
_________________________________________________________________
reshape (Reshape) (None, 7, 7, 64) 0
_________________________________________________________________
conv2d_transpose (Conv2DTran (None, 14, 14, 32) 51232
_________________________________________________________________
activation_2 (Activation) (None, 14, 14, 32) 0
_________________________________________________________________
batch_normalization_2 (Batch (None, 14, 14, 32) 128
_________________________________________________________________
conv2d_transpose_1 (Conv2DTr (None, 28, 28, 1) 801
_________________________________________________________________
activation_3 (Activation) (None, 28, 28, 1) 0
=================================================================
Let’s break down what’s going on here. First, our model will accept an input vector that is 100-d, then transform it to a 512-d vector via an FC layer. We then add a second FC layer, this one with 7x7x64 = 3,136 nodes. We reshape these 3,136 nodes into a 3D volume with shape 7×7×64 — this reshaping is only possible since our previous FC layer matches the number of nodes in the reshaped volume. Applying a transposed convolution with a 2×2 stride increases our spatial dimensions from 7×7 to 14×14. A second transposed convolution (again, with a stride of 2×2) increases our spatial resolution from 14×14 to 28×28 with a single channel, which matches the dimensions of our input images in the Fashion MNIST dataset. When implementing your own GANs, make sure the spatial dimensions of the output volume match the spatial dimensions of your input images.
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | Use transposed convolution to increase the spatial dimensions of the volumes in the generator. I also recommend using model.summary() often to help you debug the spatial dimensions. Implementing our “discriminator” with Keras and TensorFlow
The discriminator model is substantially simpler, similar to basic CNN classification architectures you may have read about in my book or elsewhere on the PyImageSearch blog. Keep in mind that while the generator is intended to create synthetic images, the discriminator is used to classify whether any given input image is real or fake. Continuing our implementation of the DCGAN class in dcgan.py, let’s take a look at the discriminator now:
@staticmethod
def build_discriminator(width, height, depth, alpha=0.2):
# initialize the model along with the input shape to be
# "channels last"
model = Sequential()
inputShape = (height, width, depth)
# first set of CONV => RELU layers
model.add(Conv2D(32, (5, 5), padding="same", strides=(2, 2),
input_shape=inputShape))
model.add(LeakyReLU(alpha=alpha))
# second set of CONV => RELU layers
model.add(Conv2D(64, (5, 5), padding="same", strides=(2, 2)))
model.add(LeakyReLU(alpha=alpha))
# first (and only) set of FC => RELU layers
model.add(Flatten())
model.add(Dense(512))
model.add(LeakyReLU(alpha=alpha))
# sigmoid layer outputting a single value
model.add(Dense(1))
model.add(Activation("sigmoid"))
# return the discriminator model
return model
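As with the generator, it is worth sanity checking the discriminator’s output shapes with summary(). Again, this is just an optional sketch that assumes the DCGAN class is importable from pyimagesearch.dcgan:
from pyimagesearch.dcgan import DCGAN
# 28x28x1 Fashion MNIST inputs -> a single sigmoid probability (real vs. fake)
disc = DCGAN.build_discriminator(28, 28, 1)
disc.summary()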
As we can see, this network is simple and straightforward. We first learn 32, 5×5 filters, followed by a second CONV layer, this one learning a total of 64, 5×5 filters. We only have a single FC layer here, this one with 512 nodes. All activation layers utilize a Leaky ReLU activation to stabilize training, except for the final activation function which is sigmoid. We use a sigmoid here to capture the probability of whether the input image is real or synthetic. Implementing our GAN training script
Now that we’ve implemented our DCGAN architecture, let’s train it on the Fashion MNIST dataset to generate fake apparel items. |
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | By the end of the training process, we will be unable to identify real images from synthetic ones. Open up the dcgan_fashion_mnist.py file in our project directory structure, and let’s get to work:
# import the necessary packages
from pyimagesearch.dcgan import DCGAN
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.datasets import fashion_mnist
from sklearn.utils import shuffle
from imutils import build_montages
import numpy as np
import argparse
import cv2
import os
We start off by importing our required Python packages. Notice that we’re importing DCGAN, which is our implementation of the GAN architecture from the previous section (Line 2). We also import the build_montages function (Line 8). This is a convenience function that will enable us to easily build a montage of generated images and then display them to our screen as a single image. You can read more about building montages in my tutorial Montages with OpenCV. Let’s move to parsing our command line arguments:
# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-o", "--output", required=True,
help="path to output directory")
ap.add_argument("-e", "--epochs", type=int, default=50,
help="# epochs to train for")
ap.add_argument("-b", "--batch-size", type=int, default=128,
help="batch size for training")
args = vars(ap.parse_args())
We require only a single command line argument for this script, --output, which is the path to the output directory where we’ll store montages of generated images (thereby allowing us to visualize the GAN training process). We can also (optionally) supply --epochs, the total number of epochs to train for, and --batch-size, used to control the batch size when training. Let’s now take care of a few important initializations:
# store the epochs and batch size in convenience variables, then
# initialize our learning rate
NUM_EPOCHS = args["epochs"]
BATCH_SIZE = args["batch_size"]
INIT_LR = 2e-4
We store both the number of epochs and batch size in convenience variables on Lines 26 and 27. |
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | We also initialize our initial learning rate (INIT_LR) on Line 28. This value was empirically tuned through a number of experiments and trial and error. If you choose to apply this GAN implementation to your own dataset, you may need to tune this learning rate. We can now load the Fashion MNIST dataset from disk:
# load the Fashion MNIST dataset and stack the training and testing
# data points so we have additional training data
print("[INFO] loading MNIST dataset...")
((trainX, _), (testX, _)) = fashion_mnist.load_data()
trainImages = np.concatenate([trainX, testX])
# add in an extra dimension for the channel and scale the images
# into the range [-1, 1] (which is the range of the tanh
# function)
trainImages = np.expand_dims(trainImages, axis=-1)
trainImages = (trainImages.astype("float") - 127.5) / 127.5
Line 33 loads the Fashion MNIST dataset from disk. We ignore class labels here, since we do not need them — we are only interested in the actual pixel data. Furthermore, there is no concept of a “test set” for GANs. Our goal when training a GAN isn’t minimal loss or high accuracy. Instead, we seek an equilibrium between the generator and the discriminator. To help us obtain this equilibrium, we combine both the training and testing images (Line 34) to give us additional training data. Lines 39 and 40 prepare our data for training by scaling the pixel intensities to the range [-1, 1], the output range of the tanh activation function.
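If you want to verify the scaling, a quick optional check (assuming the lines above have already run) confirms the images now live in the tanh range:
# sanity check: pixel intensities should fall in the range [-1, 1]
print(trainImages.min(), trainImages.max())   # -1.0 1.0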
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | Let’s now initialize our generator and discriminator:
# build the generator
print("[INFO] building generator...")
gen = DCGAN.build_generator(7, 64, channels=1)
# build the discriminator
print("[INFO] building discriminator...")
disc = DCGAN.build_discriminator(28, 28, 1)
discOpt = Adam(lr=INIT_LR, beta_1=0.5, decay=INIT_LR / NUM_EPOCHS)
disc.compile(loss="binary_crossentropy", optimizer=discOpt)
Line 44 initializes the generator that will transform the input random vector to a 7x7x64 volume. Lines 48-50 build the discriminator and then compile it using the Adam optimizer with binary cross-entropy loss. Keep in mind that we are using binary cross-entropy here, as our discriminator has a sigmoid activation function that will return a probability indicating whether the input image is real vs. fake. Since there are only two “class labels” (real vs. synthetic), we use binary cross-entropy. The learning rate and beta value for the Adam optimizer were experimentally tuned. I’ve found that a lower learning rate and beta value for the Adam optimizer improves GAN training on the Fashion MNIST dataset. Applying learning rate decay helps stabilize training as well. Given both the generator and discriminator, we can build our GAN:
# build the adversarial model by first setting the discriminator to
# *not* be trainable, then combine the generator and discriminator
# together
print("[INFO] building GAN...")
disc.trainable = False
ganInput = Input(shape=(100,))
ganOutput = disc(gen(ganInput))
gan = Model(ganInput, ganOutput)
# compile the GAN
ganOpt = Adam(lr=INIT_LR, beta_1=0.5, decay=INIT_LR / NUM_EPOCHS)
gan.compile(loss="binary_crossentropy", optimizer=discOpt)
The actual GAN consists of both the generator and the discriminator; however, we first need to freeze the discriminator weights (Line 56) before we combine the models to form our Generative Adversarial Network (Lines 57-59). Here we can see that the input to the gan will take a random vector that is 100-d. This value will be passed through the generator first, the output of which will go to the discriminator — we call this “model composition,” similar to “function composition” we learned about back in algebra class. The discriminator weights are frozen at this point so the feedback from the discriminator will enable the generator to learn how to generate better synthetic images. |
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | Lines 62 and 63 compile the gan. I again use the Adam optimizer with the same hyperparameters as the optimizer for the discriminator — this process worked for the purposes of these experiments, but you may need to tune these values on your own datasets and models. Additionally, I’ve often found that setting the learning rate of the GAN to be half that of the discriminator is often a good starting point. Throughout the training process we’ll want to see how our GAN evolves to construct synthetic images from random noise. To accomplish this task, we’ll need to generate some benchmark random noise used to visualize the training process:
# randomly generate some benchmark noise so we can consistently
# visualize how the generative modeling is learning
print("[INFO] starting training...")
benchmarkNoise = np.random.uniform(-1, 1, size=(256, 100))
# loop over the epochs
for epoch in range(0, NUM_EPOCHS):
# show epoch information and compute the number of batches per
# epoch
print("[INFO] starting epoch {} of {}...".format(epoch + 1,
NUM_EPOCHS))
batchesPerEpoch = int(trainImages.shape[0] / BATCH_SIZE)
# loop over the batches
for i in range(0, batchesPerEpoch):
# initialize an (empty) output path
p = None
# select the next batch of images, then randomly generate
# noise for the generator to predict on
imageBatch = trainImages[i * BATCH_SIZE:(i + 1) * BATCH_SIZE]
noise = np.random.uniform(-1, 1, size=(BATCH_SIZE, 100))
Line 68 generates our benchmarkNoise. Notice that the benchmarkNoise is generated from a uniform distribution in the range [-1, 1], the same range as our tanh activation function. Line 68 indicates that we’ll be generating 256 synthetic images, where each input starts as a 100-d vector. Starting on Line 71 we loop over our desired number of epochs. Line 76 computes the number of batches per epoch by dividing the number of training images by the supplied batch size. We then loop over each batch on Line 79. |
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | Line 85 subsequently extracts the next imageBatch, while Line 86 generates the random noise that we’ll be passing through the generator. Given the noise vector, we can use the generator to generate synthetic images:
# generate images using the noise + generator model
genImages = gen.predict(noise, verbose=0)
# concatenate the *actual* images and the *generated* images,
# construct class labels for the discriminator, and shuffle
# the data
X = np.concatenate((imageBatch, genImages))
y = ([1] * BATCH_SIZE) + ([0] * BATCH_SIZE)
y = np.reshape(y, (-1,))
(X, y) = shuffle(X, y)
# train the discriminator on the data
discLoss = disc.train_on_batch(X, y)
Line 89 takes our input noise and then generates synthetic apparel images (genImages). Given our generated images, we need to train the discriminator to recognize the difference between real and synthetic images. To accomplish this task, Line 94 concatenates the current imageBatch and the synthetic genImages together. We then need to build our class labels on Line 95 — each real image will have a class label of 1, while every fake image will be labeled 0. The concatenated training data is then jointly shuffled on Line 97 so our real and fake images do not sequentially follow each other one-by-one (which would cause problems during our gradient update phase). Additionally, I have found this shuffling process improves the stability of discriminator training. Line 100 trains the discriminator on the current (shuffled) batch. The final step in our training process is to train the gan itself:
# let's now train our generator via the adversarial model by
# (1) generating random noise and (2) training the generator
# with the discriminator weights frozen
noise = np.random.uniform(-1, 1, (BATCH_SIZE, 100))
fakeLabels = [1] * BATCH_SIZE
fakeLabels = np.reshape(fakeLabels, (-1,))
ganLoss = gan.train_on_batch(noise, fakeLabels)
We first generate a total of BATCH_SIZE random vectors. However, unlike in our previous code block, where we were nice enough to tell our discriminator what is real vs. fake, we’re now going to attempt to trick the discriminator by labeling the random noise as real images. |
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | The feedback from the discriminator enables us to actually train the generator (keeping in mind that the discriminator weights are frozen for this operation). Not only is looking at the loss values important when training a GAN, but you also need to examine the output of the gan on your benchmarkNoise:
# check to see if this is the end of an epoch, and if so,
# initialize the output path
if i == batchesPerEpoch - 1:
p = [args["output"], "epoch_{}_output.png".format(
str(epoch + 1).zfill(4))]
# otherwise, check to see if we should visualize the current
# batch for the epoch
else:
# create more visualizations early in the training
# process
if epoch < 10 and i % 25 == 0:
p = [args["output"], "epoch_{}_step_{}.png".format(
str(epoch + 1).zfill(4), str(i).zfill(5))]
# visualizations later in the training process are less
# interesting
elif epoch >= 10 and i % 100 == 0:
p = [args["output"], "epoch_{}_step_{}.png".format(
str(epoch + 1).zfill(4), str(i).zfill(5))]
If we have reached the end of the epoch, we’ll build the path, p, to our output visualization (Lines 112-114). Otherwise, I find it helpful to visually inspect the output of our GAN with more frequency in earlier steps rather than later ones (Lines 118-129). The output visualization will be totally random salt and pepper noise at the beginning but should quickly start to develop characteristics of the input data. These characteristics may not look real, but the evolving attributes will demonstrate to you that the network is actually learning. If your output visualizations are still salt and pepper noise after 5-10 epochs, it may be a sign that you need to tune your hyperparameters, potentially including the model architecture definition itself. Our final code block handles writing the synthetic image visualization to disk:
# check to see if we should visualize the output of the
# generator model on our benchmark data
if p is not None:
# show loss information
print("[INFO] Step {}_{}: discriminator_loss={:.6f}, "
"adversarial_loss={:.6f}".format(epoch + 1, i,
discLoss, ganLoss))
# make predictions on the benchmark noise, scale it back
# to the range [0, 255], and generate the montage
images = gen.predict(benchmarkNoise)
images = ((images * 127.5) + 127.5).astype("uint8")
images = np.repeat(images, 3, axis=-1)
vis = build_montages(images, (28, 28), (16, 16))[0]
# write the visualization to disk
p = os.path.sep.join(p)
cv2.imwrite(p, vis)
Line 141 uses our generator to generate images from our benchmarkNoise. We then scale our image data back from the range [-1, 1] (the boundaries of the tanh activation function) to the range [0, 255] (Line 142). Since we are generating single-channel images, we repeat the grayscale representation of the image three times to construct a 3-channel RGB image (Line 143). The build_montages function generates a 16×16 grid, with a 28×28 image in each cell of the montage.
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | The montage is then written to disk on Line 148. Training our GAN with Keras and TensorFlow
To train our GAN on the Fashion MNIST dataset, make sure you use the “Downloads” section of this tutorial to download the source code. From there, open up a terminal, and execute the following command:
$ python dcgan_fashion_mnist.py --output output
[INFO] loading MNIST dataset...
[INFO] building generator...
[INFO] building discriminator...
[INFO] building GAN...
[INFO] starting training...
[INFO] starting epoch 1 of 50...
[INFO] Step 1_0: discriminator_loss=0.683195, adversarial_loss=0.577937
[INFO] Step 1_25: discriminator_loss=0.091885, adversarial_loss=0.007404
[INFO] Step 1_50: discriminator_loss=0.000986, adversarial_loss=0.000562
...
[INFO] starting epoch 50 of 50...
[INFO] Step 50_0: discriminator_loss=0.472731, adversarial_loss=1.194858
[INFO] Step 50_100: discriminator_loss=0.526521, adversarial_loss=1.816754
[INFO] Step 50_200: discriminator_loss=0.500521, adversarial_loss=1.561429
[INFO] Step 50_300: discriminator_loss=0.495300, adversarial_loss=0.963850
[INFO] Step 50_400: discriminator_loss=0.512699, adversarial_loss=0.858868
[INFO] Step 50_500: discriminator_loss=0.493293, adversarial_loss=0.963694
[INFO] Step 50_545: discriminator_loss=0.455144, adversarial_loss=1.128864
Figure 5: Top-left: The initial random noise of 256 input noise vectors. Top-right: The same random noise after two epochs. We are starting to see the makings of clothes/apparel items. Bottom-left: We are now starting to do a good job generating synthetic images based on training on the Fashion MNIST dataset. Bottom-right: The final fashion/apparel items after 50 epochs look very authentic and realistic. Figure 5 shows our random noise vectors (i.e., benchmarkNoise during different moments of training):
The top-left contains 256 (in a 16×16 grid) of our initial random noise vectors before even starting to train the GAN. We can clearly see there is no pattern in this noise. No fashion items have been learned by the GAN.
However, by the end of the second epoch (top-right), apparel-like structures are starting to appear.
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | By the end of the fifth epoch (bottom-left), the fashion items are significantly more clear. And by the time we reach the end of the 50th epoch (bottom-right), our fashion items look authentic. Again, it’s important to understand that these fashion items are generated from random noise input vectors — they are totally synthetic images! What's next? We recommend PyImageSearch University. Course information:
84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024
★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled
I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science? That’s not the case. |
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught. If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. Inside PyImageSearch University you'll find:
✓ 84 courses on essential computer vision, deep learning, and OpenCV topics
✓ 84 Certificates of Completion
✓ 114+ hours of on-demand video
✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques
✓ Pre-configured Jupyter Notebooks in Google Colab
✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) ✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch
✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University
Summary
In this tutorial we discussed Generative Adversarial Networks (GANs). |
https://pyimagesearch.com/2020/11/16/gans-with-keras-and-tensorflow/ | We learned that GANs actually consist of two networks:
A generator that is responsible for generating fake images
A discriminator that tries to spot the synthetic images from the authentic ones
By training both of these networks at the same time, we can learn to generate very realistic output images. We then implemented Deep Convolutional Generative Adversarial Networks (DCGANs), a variation of Goodfellow et al.’s original GAN implementation. Using our DCGAN implementation, we trained both the generator and discriminator on the Fashion MNIST dataset, resulting in output images of fashion items that:
Are not part of the training set and are completely synthetic
Look nearly identical to and indistinguishable from any image in the Fashion MNIST dataset
The problem is that training GANs can be extremely challenging, more so than any other architecture or method we have discussed on the PyImageSearch blog. The reason GANs are notoriously hard to train is due to the evolving loss landscape — with every step, our loss landscape changes slightly and is thus ever-evolving. The evolving loss landscape is in stark contrast to other classification or regression tasks where the loss landscape is “fixed” and nonmoving. When training your own GANs, you’ll undoubtedly have to carefully tune your model architecture and associated hyperparameters — be sure to refer to the “Guidelines and best practices when training GANs” section at the top of this tutorial to help you tune your hyperparameters and run your own GAN experiments. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below! Download the Source Code and FREE 17-page Resource Guide
Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! |
https://pyimagesearch.com/2020/11/23/building-image-pairs-for-siamese-networks-with-python/ | Click here to download the source code to this post
In this tutorial, you will learn how to build image pairs for training siamese networks. We’ll implement our image pair generator using Python so that you can use the same code, regardless of whether you’re using TensorFlow, Keras, PyTorch, etc. This tutorial is part one in an introduction to siamese networks:
Part #1: Building image pairs for siamese networks with Python (today’s post)
Part #2: Training siamese networks with Keras, TensorFlow, and Deep Learning (next week’s tutorial)
Part #3: Comparing images using siamese networks (tutorial two weeks from now)
Siamese networks are incredibly powerful networks, responsible for significant increases in face recognition, signature verification, and prescription pill identification applications (just to name a few). In fact, if you’ve followed my tutorial on OpenCV Face Recognition or Face recognition with OpenCV, Python and deep learning, you will see that the deep learning models used in these posts were siamese networks! Deep learning models such as FaceNet, VGGFace, and dlib’s ResNet face recognition model are all examples of siamese networks. And furthermore, siamese networks make more advanced training procedures like one-shot learning and few-shot learning possible — in comparison to other deep learning architectures, siamese networks require very few training examples to be effective. Today we’re going to:
Review the basics of siamese networks
Discuss the concept of image pairs
See how we use image pairs to train a siamese network
Implement Python code to generate image pairs for siamese networks
Next week I’ll show you how to implement and train your own siamese network. Eventually, we’ll build up to the concept of image triplets and how we can use triplet loss and contrastive loss to train better, more accurate siamese networks. But for now, let’s understand image pairs, a fundamental requirement when implementing basic siamese networks. To learn how to build image pairs for siamese networks, just keep reading. |
https://pyimagesearch.com/2020/11/23/building-image-pairs-for-siamese-networks-with-python/ | Looking for the source code to this post? Jump Right To The Downloads Section
Building image pairs for siamese networks with Python
In the first part of this tutorial, I’ll provide a high-level overview of siamese networks, including:
What they are
Why we use them
When to use them
How they are trained
We’ll then discuss the concept of “image pairs” in siamese networks, including why constructing image pairs is a requirement when training siamese networks. From there we’ll review our project directory structure and then implement a Python script to generate image pairs. You can use this image pair generation function in your own siamese network training procedures, regardless of whether you are using Keras, TensorFlow, PyTorch, etc. Finally, we’ll wrap up this tutorial with a review of our results. A high-level overview of siamese networks
The term “siamese twins,” also known as “conjoined twins,” refers to two identical twins joined in utero. These twins are physically connected to each other (i.e., unable to separate), often sharing the same organs, predominantly the lower intestinal tract, liver, and urinary tract. Figure 1: Siamese networks share similarities with siamese twins/conjoined twins, where two people are conjoined and share some of the same organs (image source). Just as siamese twins are connected, so are siamese networks. Paraphrasing Sean Benhur, siamese networks are a special class of neural network:
Siamese networks contain two (or more) identical subnetworks. |
https://pyimagesearch.com/2020/11/23/building-image-pairs-for-siamese-networks-with-python/ | These subnetworks have the same architecture, parameters, and weights. Any parameter updates are mirrored across both subnetworks, meaning if you update the weights on one, then the weights in the other are updated as well. We use siamese networks when performing verification, identification, or recognition tasks, the most popular examples being face recognition and signature verification. For example, let’s suppose we are tasked with detecting signature forgeries. Instead of training a classification model to correctly classify signatures for each unique individual in our dataset (which would require significant training data), what if we instead took two images from our training set and asked the neural network if the signatures were from the same person or not? If the two signatures are the same, then siamese network reports “Yes”. Otherwise, if the two signatures are not the same, thereby implying a potential forgery, the siamese network reports “No”. This is an example of a verification task (versus classification, regression, etc.), and while it may sound like a harder problem, it actually becomes far easier in practice — we need significantly less training data, and our accuracy actually improves by using siamese networks rather than classification networks. Another added benefit is that we no longer need a “catch-all” class for when our classification model needs to select “none of the above” when making a classification (which in practice is quite error prone). |
https://pyimagesearch.com/2020/11/23/building-image-pairs-for-siamese-networks-with-python/ | Instead, our siamese network handles this problem gracefully by reporting that the two signatures are not the same. Keep in mind that the siamese network architecture doesn’t have to concern itself with classification in the traditional sense of having to select 1 of N possible classes. Rather, the siamese network just needs to be able to report “same” (belongs to the same class) or “different” (belongs to different classes). Below is a visualization of the siamese network architecture used in Dey et al. ’s 2017 publication, SigNet: Convolutional Siamese Network for Writer Independent Offline Signature Verification:
Figure 2: An example of a siamese network, SigNet, used for signature verification (image source: Figure 1 of Dey et al.) On the left we present two signatures to the SigNet model. Our goal is to determine if these signatures belong to the same person or not. The middle shows the siamese network itself. These two subnetworks have the same architecture and parameters and mirror each other — if the weights in one subnetwork are updated, then the weights in the other subnetwork(s) are updated as well. The final layers in these subnetworks are typically (but not always) embedding layers where we can compute the Euclidean distance between the outputs and adjust the weights of the subnetworks such that they output the correct decision (belong to the same class or not). |
https://pyimagesearch.com/2020/11/23/building-image-pairs-for-siamese-networks-with-python/ | The right then shows our loss function, which combines the outputs of the subnetworks and then checks to see if the siamese network made the correct decision. Popular loss functions when training siamese networks include:
Binary cross-entropy
Triplet loss
Contrastive loss
You might be surprised to see binary cross-entropy listed as a loss function to train siamese networks. Think of it this way:
Each image pair is either the “same” (1), meaning they belong to the same class or “different” (0), meaning they belong to different classes. That lends itself naturally to binary cross-entropy, since there are only two possible outputs (although triplet loss and contrastive loss tend to significantly outperform standard binary cross-entropy). Now that we have a high-level overview of siamese networks, let’s now discuss the concept of image pairs. The concept of “image pairs” in siamese networks
Figure 3: Top: An example of a “positive” image pair (since both images are an example of an “8”). Bottom: A “negative” image pair (since one image is a “6”, and the other is an “8”). After reviewing the previous section, you should understand that a siamese network consists of two subnetworks that mirror each other (i.e., when the weights update in one network, the same weights are updated in the other network). Since there are two subnetworks, we must have two inputs to the siamese model (as you saw in Figure 2 at the top of the previous section). When training siamese networks we need to have positive pairs and negative pairs:
Positive pairs: Two images that belong to the same class (ex., |
https://pyimagesearch.com/2020/11/23/building-image-pairs-for-siamese-networks-with-python/ | two images of the same person, two examples of the same signature, etc.)Negative pairs: Two images that belong to different classes (ex., two images of different people, two examples of different signatures, etc.) When training our siamese network, we randomly sample examples of positive and negative pairs. These pairs serve as our training data such that the siamese network can learn similarity. In the remainder of this tutorial, you will learn how to generate such image pairs. In next week’s tutorial, you will learn how to define the siamese network architecture and then train the siamese model on our dataset of pairs. Configuring your development environment
We’ll be using Keras and TensorFlow throughout this series of tutorials on siamese networks, so I suggest you take the time to configure your deep learning development environment now. I recommend you follow either of these two guides to install TensorFlow and Keras on your system:
How to install TensorFlow 2.0 on Ubuntu
How to install TensorFlow 2.0 on macOS
Either tutorial will help you configure your system with all the necessary software for this blog post in a convenient Python virtual environment. Having problems configuring your development environment? Figure 4: Having trouble configuring your dev environment? |
https://pyimagesearch.com/2020/11/23/building-image-pairs-for-siamese-networks-with-python/ | Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch Plus —- you’ll be up and running with this tutorial in a matter of minutes. All that said, are you:
Short on time?
Learning on your employer’s administratively locked system?
Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?
Ready to run the code right now on your Windows, macOS, or Linux system?
Then join PyImageSearch Plus today! Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides that are pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required. And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux! Project structure
Make sure you used the “Downloads” section of this tutorial to download the source code. From there, let’s inspect the project directory structure:
$ tree . --dirsfirst
. |
https://pyimagesearch.com/2020/11/23/building-image-pairs-for-siamese-networks-with-python/ | └── build_siamese_pairs.py
0 directories, 1 file
We only have a single Python file to review today, build_siamese_pairs.py. This script includes a helper function named make_pairs. As the name suggests, this function accepts an input set of images and labels and then constructs positive and negative pairs from it. We’ll be reviewing this function in its entirety today. Then, next week, we’ll learn how to use the make_pairs function to train your own siamese network. Implementing our image pair generator for siamese networks
Let’s get started implementing image pair generation for siamese networks. Open up the build_siamese_pairs.py file, and insert the following code:
# import the necessary packages
from tensorflow.keras.datasets import mnist
from imutils import build_montages
import numpy as np
import cv2
Lines 2-5 import our required Python packages. We’ll be using the MNIST digits dataset as our sample dataset (for convenience purposes). That said, our make_pairs function will work with any image dataset, provided you supply two separate image and labels arrays (which you’ll learn how to do in the next code block). To visually validate that our pair generation process is working correctly, we import the build_montages function (Line 3). |
https://pyimagesearch.com/2020/11/23/building-image-pairs-for-siamese-networks-with-python/ | This function generates a montage of images, which is super helpful when needing to visualize multiple images at once. You can learn more about image montages in my Montages with OpenCV guide. Let’s now start defining our make_pairs function:
def make_pairs(images, labels):
# initialize two empty lists to hold the (image, image) pairs and
# labels to indicate if a pair is positive or negative
pairImages = []
pairLabels = []
Our make_pairs method requires we pass in two parameters:
images: The images in our dataset
labels: The class labels associated with the images
In the case of the MNIST dataset, our images are the digits themselves, while the labels are the class label (0-9) for each image in the images array. The next step is to compute the total number of unique class labels in our dataset:
# calculate the total number of classes present in the dataset
# and then build a list of indexes for each class label that
# provides the indexes for all examples with a given label
numClasses = len(np.unique(labels))
idx = [np.where(labels == i)[0] for i in range(0, numClasses)]
Line 16 uses the np.unique function to find all unique class labels in our labels list. Taking the len of the np.unique output yields the total number of unique class labels in the dataset. In the case of the MNIST dataset, there are 10 unique class labels, corresponding to the digits 0-9. Line 17 then builds a list of indexes for each class label using a Python list comprehension. We use Python list comprehensions here for performance; however, this code can be a bit tricky to understand, so let’s break it down by writing it out in a dedicated for loop, along with a few print statements:
>>> for i in range(0, numClasses):
>>> idxs = np.where(labels == i)[0]
>>> print("{}: {} {}".format(i, len(idxs), idxs))
0: 5923 [ 1 21 34 ... 59952 59972 59987]
1: 6742 [ 3 6 8 ... 59979 59984 59994]
2: 5958 [ 5 16 25 ... 59983 59985 59991]
3: 6131 [ 7 10 12 ... 59978 59980 59996]
4: 5842 [ 2 9 20 ... 59943 59951 59975]
5: 5421 [ 0 11 35 ... 59968 59993 59997]
6: 5918 [ 13 18 32 ... 59982 59986 59998]
7: 6265 [ 15 29 38 ... 59963 59977 59988]
8: 5851 [ 17 31 41 ... 59989 59995 59999]
9: 5949 [ 4 19 22 ... 59973 59990 59992]
>>>
What this code is doing here is looping over all unique class labels in our labels list. For each unique label, we compute idxs, which is a list of all indexes that belong to the current class label, i.
The output of our print statement consists of three values:
The current class label, i
The total number of data points that belong to the current label, i
The indexes of each of these data points
Line 17 builds this list of indexes, but in a super compact, efficient manner. Given our idx lookup list, let’s now start generating our positive and negative pairs:
# loop over all images
for idxA in range(len(images)):
# grab the current image and label belonging to the current
# iteration
currentImage = images[idxA]
label = labels[idxA]
# randomly pick an image that belongs to the *same* class
# label
idxB = np.random.choice(idx[label])
posImage = images[idxB]
# prepare a positive pair and update the images and labels
# lists, respectively
pairImages.append([currentImage, posImage])
pairLabels.append([1])
On Line 20 we loop over all images in our dataset. |
https://pyimagesearch.com/2020/11/23/building-image-pairs-for-siamese-networks-with-python/ | Line 23 grabs the currentImage associated with idxA. Line 24 obtains the label associated with currentImage. Next, we randomly pick an image that belongs to the same class as the label (Lines 28 and 29). This posImage is the same class as label. Taken together, currentImage and posImage serve as our positive pair. We update our pairImages list with a 2-tuple of the currentImage and posImage (Line 33). We also update pairLabels with a value of 1, indicating that this is a positive pair (Line 34). Next, let’s generate our negative pair:
# grab the indices for each of the class labels *not* equal to
# the current label and randomly pick an image corresponding
# to a label *not* equal to the current label
negIdx = np.where(labels != label)[0]
negImage = images[np.random.choice(negIdx)]
# prepare a negative pair of images and update our lists
pairImages.append([currentImage, negImage])
pairLabels.append([0])
# return a 2-tuple of our image pairs and labels
return (np.array(pairImages), np.array(pairLabels))
Line 39 grabs all indices of labels not equal to the current label. We then randomly select one of these indexes as our negative image, negImage (Line 40). Again, we update our pairImages, this time supplying the currentImage and the negImage as our negative pair (Line 43). |
https://pyimagesearch.com/2020/11/23/building-image-pairs-for-siamese-networks-with-python/ | The pairLabels list is again updated, this time with a value of 0 to indicate that this is a negative pair example. Finally, we return our pairImages and pairLabels to the calling function on Line 47. With our make_pairs function defined, let’s move on to loading our MNIST dataset and generating image pairs from them:
# load the MNIST dataset
print("[INFO] loading MNIST dataset...")
(trainX, trainY), (testX, testY) = mnist.load_data()
# build the positive and negative image pairs
print("[INFO] preparing positive and negative pairs...")
(pairTrain, labelTrain) = make_pairs(trainX, trainY)
(pairTest, labelTest) = make_pairs(testX, testY)
# initialize the list of images that will be used when building our
# montage
images = []
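As a quick, optional sanity check (assuming the script above has run), you can confirm how many pairs were generated and their shapes. Each of MNIST’s 60,000 training images contributes one positive and one negative pair:
# inspect the shapes of the generated training pairs and labels
print(pairTrain.shape)    # (120000, 2, 28, 28)
print(labelTrain.shape)   # (120000, 1)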
Line 51 loads the MNIST training and testing split from disk. We then generate training and testing pairs on Lines 55 and 56. Line 60 initializes an images, a list that will be populated with example pairs and then visualized as a montage on our screen. We’ll be constructing this montage to visually validate that our make_pairs function is working properly. Let’s go ahead and populate the images list now:
# loop over a sample of our training pairs
for i in np.random.choice(np.arange(0, len(pairTrain)), size=(49,)):
# grab the current image pair and label
imageA = pairTrain[i][0]
imageB = pairTrain[i][1]
label = labelTrain[i]
# to make it easier to visualize the pairs and their positive or
# negative annotations, we're going to "pad" the pair with four
# pixels along the top, bottom, and right borders, respectively
output = np.zeros((36, 60), dtype="uint8")
pair = np.hstack([imageA, imageB])
output[4:32, 0:56] = pair
# set the text label for the pair along with what color we are
# going to draw the pair in (green for a "positive" pair and
# red for a "negative" pair)
text = "neg" if label[0] == 0 else "pos"
color = (0, 0, 255) if label[0] == 0 else (0, 255, 0)
# create a 3-channel RGB image from the grayscale pair, resize
# it from 60x36 to 96x51 (so we can better see it), and then
# draw what type of pair it is on the image
vis = cv2.merge([output] * 3)
vis = cv2.resize(vis, (96, 51), interpolation=cv2.INTER_LINEAR)
cv2.putText(vis, text, (2, 12), cv2.FONT_HERSHEY_SIMPLEX, 0.75,
color, 2)
# add the pair visualization to our list of output images
images.append(vis)
On Line 63 we loop over a sample of 49 randomly selected pairTrain images. Lines 65 and 66 grab the two images in the pair, while Line 67 accesses the corresponding label (1 for “same”, 0 for “different”). Lines 72-74 allocate a NumPy array for the side-by-side visualization, horizontally stack the two images, and then add the pair to the output array. If we are examining a negative pair, we’ll annotate the output image with the text neg drawn in “red”; otherwise, we’ll draw the text pos in “green” (Lines 79 and 80). |
https://pyimagesearch.com/2020/11/23/building-image-pairs-for-siamese-networks-with-python/ | MNIST example images are grayscale by default, so we construct vis, a three-channel RGB image, on Line 85. We then increase the resolution of the vis image from 60×36 to 96×51 pixels (so we can better see it on our screen) and then draw the text on the image (Lines 86-88). The vis image is then added to our images list. The last step here is to construct our montage and display it to our screen:
# construct the montage for the images
montage = build_montages(images, (96, 51), (7, 7))[0]
# show the output montage
cv2.imshow("Siamese Image Pairs", montage)
cv2.waitKey(0)
Line 94 constructs a 7×7 montage where each image in the montage is 96×51 pixels. The output siamese image pairs visualization is displayed to our screen on Lines 97 and 98.
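If you happen to be working on a headless machine (e.g., over SSH with no display attached), cv2.imshow may fail; a small optional tweak, not part of the original script, is to write the montage to disk instead:
# optional: save the montage to disk rather than displaying it
cv2.imwrite("siamese_image_pairs.png", montage)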
Siamese network image pair generation results
We are now ready to run our siamese network image pair generation script. Make sure you use the “Downloads” section of this tutorial to download the source code. From there, open up a terminal, and execute the following command:
$ python build_siamese_pairs.py
[INFO] loading MNIST dataset...
[INFO] preparing positive and negative pairs...
Figure 5: Generating image pairs for siamese networks with deep learning and Python. Figure 5 displays the output of our image pair generation script. For every pair of images, our script has marked them as being a positive pair (green) or a negative pair (red). |
https://pyimagesearch.com/2020/11/23/building-image-pairs-for-siamese-networks-with-python/ | For example, the pair located at row one, column one is a positive pair, since both digits are 9’s. However, the digit pair located at row one, column three is a negative pair because one digit is a “2”, and the other is a “0”. During the training process our siamese network will learn how to tell the difference between these two digits. And once you understand how to train siamese networks in this manner, you can swap out the MNIST digits dataset and include any dataset of your own where verification is important, including:
Face recognition: Given two separate images containing a face, determine if it’s the same person in both photos.
Signature verification: When presented with two signatures, determine if one is a forgery or not.
Prescription pill identification: Given two prescription pills, determine if they are the same medication or different medications.
Siamese networks make all of these applications possible — and I’ll show you how to train your very first siamese network next week! What's next? We recommend PyImageSearch University. Course information:
84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024
★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled
I strongly believe that if you had the right teacher you could master computer vision and deep learning. |
https://pyimagesearch.com/2020/11/23/building-image-pairs-for-siamese-networks-with-python/ | Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science? That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught. If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. |
https://pyimagesearch.com/2020/11/23/building-image-pairs-for-siamese-networks-with-python/ | Inside PyImageSearch University you'll find:
✓ 84 courses on essential computer vision, deep learning, and OpenCV topics
✓ 84 Certificates of Completion
✓ 114+ hours of on-demand video
✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques
✓ Pre-configured Jupyter Notebooks in Google Colab
✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) ✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch
✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University
Summary
In this tutorial you learned how to build image pairs for siamese networks using the Python programming language. Our implementation of image pair generation is library agnostic, meaning you can use this code regardless of whether your underlying deep learning library is Keras, TensorFlow, PyTorch, etc. Image pair generation is a fundamental aspect of siamese networks. A siamese network needs to understand the difference between two images of the same class (positive pairs) and two images from different classes (negative pairs). During the training process we can then update the weights of our network such that it can tell the difference between two images of the same class versus two images of a different class. It may sound like a complicated training procedure, but as we’ll see next week, it’s actually quite straightforward (once you have someone explain it to you, of course!). Stay tuned for next week’s tutorial on training siamese networks, you won’t want to miss it. |
https://pyimagesearch.com/2020/11/23/building-image-pairs-for-siamese-networks-with-python/ | To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below! Download the Source Code and FREE 17-page Resource Guide
Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Download the code!
https://pyimagesearch.com/2020/04/27/fine-tuning-resnet-with-keras-tensorflow-and-deep-learning/ | Click here to download the source code to this post
In this tutorial, you will learn how to fine-tune ResNet using Keras, TensorFlow, and Deep Learning. A couple of months ago, I posted on Twitter asking my followers for help creating a dataset of camouflage vs. noncamouflage clothes:
Figure 1: My request for a camouflage image dataset to use in my fine-tuning ResNet with Keras, TensorFlow, and deep learning blog post. This dataset was to be used on a special project that Victor Gevers, an esteemed ethical hacker from the GDI.Foundation, and I were working on (more on that in two weeks, when I’ll reveal the details on what we’ve built). Two PyImageSearch readers, Julia Riede and Nitin Rai, not only stepped up to the plate to help out but hit a home run! Both of them spent a couple of days downloading images for each class, organizing the files, and then uploading them so Victor and I could train a model on them — thank you so much, Julia and Nitin; we couldn’t have done it without you! A few days after I started working with the camouflage vs. noncamouflage dataset, I received an email from PyImageSearch reader Lucas:
Hi Adrian, I’m big fan of the PyImageSearch blog. It’s helped me tremendously with my undergrad project. I have a question for you: Do you have any tutorials on how to fine-tune ResNet? I’ve been going through your archives and it seems like you’ve covered fine-tuning other architectures (ex. VGGNet) but I couldn’t find anything on ResNet. |
https://pyimagesearch.com/2020/04/27/fine-tuning-resnet-with-keras-tensorflow-and-deep-learning/ | I’ve been trying to fine-tune ResNet with Keras/TensorFlow for the past few days and I just keep running into errors. If you can help me out I would appreciate it. I was already planning on fine-tuning a model on top of the camouflage vs. noncamouflage clothes dataset, so helping Lucas seemed like a natural fit. Inside the remainder of this tutorial you will:
Discover the seminal ResNet architecture
Learn how to fine-tune it using Keras and TensorFlow
Fine-tune ResNet for camouflage vs. noncamouflage clothes detection
And in two weeks, I’ll show you the practical, real-world use case that Victor and I applied camouflage detection to — it’s a great story, and you won’t want to miss it! To learn how to fine-tune ResNet with Keras and TensorFlow, just keep reading! Looking for the source code to this post? Jump Right To The Downloads Section
Fine-tuning ResNet with Keras, TensorFlow, and Deep Learning
In the first part of this tutorial, you will learn about the ResNet architecture, including how we can fine-tune ResNet using Keras and TensorFlow. From there, we’ll discuss our camouflage clothing vs. normal clothing image dataset in detail. We’ll then review our project directory structure and proceed to:
Implement our configuration file
Create a Python script to build/organize our image dataset
Implement a second Python script used to fine-tune ResNet with Keras and TensorFlow
Execute the training script and fine-tune ResNet on our dataset
Let’s get started! What is ResNet? |
https://pyimagesearch.com/2020/04/27/fine-tuning-resnet-with-keras-tensorflow-and-deep-learning/ | Figure 2: Variations of He et al. ’s residual module in their 2016 research led to a new variation of ResNet. In this blog post we fine-tune ResNet with Keras, TensorFlow, and deep learning to build a camouflage clothing classifier. ( image source: Figure 4 of He et al. 2016)
ResNet was first introduced by He et al. in their seminal 2015 paper, Deep Residual Learning for Image Recognition — that paper has been cited an astonishing 43,064 times! A follow-up paper in 2016, Identity Mappings in Deep Residual Networks, performed a series of ablation experiments, playing with the inclusion, removal, and ordering of various components in the residual module, ultimately resulting in a variation of ResNet that:
Is easier to train
Is more tolerant of hyperparameters, including regularization and initial learning rate
Generalizes better
ResNet is arguably the most important network architecture since:
AlexNet — which reignited researcher interest in deep neural networks back in 2012
VGGNet — which demonstrated how deeper neural networks could be trained successfully using only 3×3 convolutions (2014)
GoogLeNet — which introduced the inception module/micro-architecture (2014)
In fact, the techniques that ResNet employs have been successfully applied to noncomputer vision tasks, including audio classification and Natural Language Processing (NLP)! How does ResNet work? Note: The following section was adapted from Chapter 12 of my book, Deep Learning for Computer Vision with Python (Practitioner Bundle). The original residual module introduced by He et al. |
https://pyimagesearch.com/2020/04/27/fine-tuning-resnet-with-keras-tensorflow-and-deep-learning/ | relies on the concept of identity mappings, the process of taking the original input to the module and adding it to the output of a series of operations:
Figure 3: ResNet is based on a “residual module” as pictured. In this deep learning blog post, we fine-tune ResNet with Keras and TensorFlow. At the top of the module, we accept an input to the module (i.e., the previous layer in the network). The right branch is a “linear shortcut” — it connects the input to an addition operation at the bottom of the module. Then, on the left branch of the residual module, we apply a series of convolutions (both of which are 3×3), activations, and batch normalizations. This is a standard pattern to follow when constructing Convolutional Neural Networks. But what makes ResNet interesting is that He et al. suggested adding the original input to the output of the CONV, RELU, and BN layers. We call this addition an identity mapping since the input (the identity) is added to the output of a series of operations. It’s also why the term residual is used — the “residual” input is added to the output of a series of layer operations.
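To make the identity mapping concrete, here is a minimal Keras sketch of a basic residual block (an illustration of the idea only, not the exact code used inside tf.keras.applications.ResNet50):
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, Add

def basic_residual_block(x, filters):
    # left branch: two 3x3 convolutions, each followed by batch
    # normalization and a ReLU activation
    shortcut = x
    y = Conv2D(filters, (3, 3), padding="same")(x)
    y = BatchNormalization()(y)
    y = Activation("relu")(y)
    y = Conv2D(filters, (3, 3), padding="same")(y)
    y = BatchNormalization()(y)
    # identity mapping: add the original input (the shortcut) to the
    # output of the left branch, i.e., y = f(x) + x
    # note: this sketch assumes the input already has `filters` channels
    y = Add()([shortcut, y])
    return Activation("relu")(y)
Note that the element-wise addition requires the shortcut and the convolutional branch to have the same shape; when they differ (for example, when the number of filters changes), a 1×1 convolution is typically applied to the shortcut to match dimensions.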
https://pyimagesearch.com/2020/04/27/fine-tuning-resnet-with-keras-tensorflow-and-deep-learning/ | The connection between the input and addition node is called the shortcut. While traditional neural networks can be seen as learning a function y = f(x), a residual layer attempts to approximate y via f(x) + id(x) = f(x) + x where id(x) is the identity function. These residual layers start at the identity function and evolve to become more complex as the network learns. This type of residual learning framework allows us to train networks that are substantially deeper than previously proposed architectures. Furthermore, since the input is included in every residual module, it turns out the network can learn faster and with larger learning rates. In the original 2015 paper, He et al. also included an extension to the original residual module called bottlenecks:
Figure 4: He et al. ’s “bottlenecks” extension to the residual module. We use TensorFlow and Keras to build a deep learning camouflage classifier based on ResNet in this tutorial. Here we can see that the same identity mapping is taking place, only now the CONV layers in the left branch of the residual module have been updated:
We are utilizing three CONV layers rather than just two
The first and last CONV layers are 1×1 convolutions
The number of filters learned in the first two CONV layers is 1/4 the number of filters learned in the final CONV
This variation of the residual module serves as a form of dimensionality reduction, thereby reducing the total number of parameters in the network (and doing so without sacrificing accuracy). |
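Continuing the sketch above, the bottleneck variant described here might look something like the following (again, illustrative only; the real implementation also handles strides, layer naming, and shortcut projections):
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, Add

def bottleneck_residual_block(x, filters):
    # the first and last CONV layers are 1x1, and the first two CONV
    # layers learn 1/4 the number of filters of the final CONV layer
    shortcut = x
    y = Conv2D(filters // 4, (1, 1))(x)
    y = BatchNormalization()(y)
    y = Activation("relu")(y)
    y = Conv2D(filters // 4, (3, 3), padding="same")(y)
    y = BatchNormalization()(y)
    y = Activation("relu")(y)
    y = Conv2D(filters, (1, 1))(y)
    y = BatchNormalization()(y)
    # identity mapping (assumes the input already has `filters` channels)
    y = Add()([shortcut, y])
    return Activation("relu")(y)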
https://pyimagesearch.com/2020/04/27/fine-tuning-resnet-with-keras-tensorflow-and-deep-learning/ | This form of dimensionality reduction is called the bottleneck. He et al. ’s 2016 publication on Identity Mappings in Deep Residual Networks performed a series of ablation studies, playing with the inclusion, removal, and ordering of various components in the residual module, ultimately resulting in the concept of pre-activation:
Figure 5: Comparing the ResNet residual module with bottleneck vs. a pre-activation residual module. Be sure to read this tutorial to learn how to apply fine-tuning with deep learning and ResNet using TensorFlow/Keras. Without going into too much detail, the pre-activation residual module rearranges the order in which convolution, batch normalization, and activation are performed. The original residual module (with bottleneck) accepts an input (i.e., a RELU activation map) and then applies a series of (CONV => BN => RELU) * 2 => CONV => BN before adding this output to the original input and applying a final RELU activation. Their 2016 study demonstrated that instead, applying a series of (BN => RELU => CONV) * 3 led to higher accuracy models that were easier to train. We call this method of layer ordering pre-activation as our RELUs and batch normalizations are placed before the convolutions, which is in contrast to the typical approach of applying RELUs and batch normalizations after the convolutions. For a more complete review of ResNet, including how to implement it from scratch using Keras/TensorFlow, be sure to refer to my book, Deep Learning for Computer Vision with Python. How can we fine-tune it with Keras and TensorFlow? |
https://pyimagesearch.com/2020/04/27/fine-tuning-resnet-with-keras-tensorflow-and-deep-learning/ | In order to fine-tune ResNet with Keras and TensorFlow, we need to load ResNet from disk using the pre-trained ImageNet weights but leaving off the fully-connected layer head. We can do so using the following code:
>>> baseModel = ResNet50(weights="imagenet", include_top=False,
input_tensor=Input(shape=(224, 224, 3)))
Inspecting the baseModel.summary(), you’ll see the following:
...
conv5_block3_3_conv (Conv2D) (None, 7, 7, 2048) 1050624 conv5_block3_2_relu[0][0]
__________________________________________________________________________________________________
conv5_block3_3_bn (BatchNormali (None, 7, 7, 2048) 8192 conv5_block3_3_conv[0][0]
__________________________________________________________________________________________________
conv5_block3_add (Add) (None, 7, 7, 2048) 0 conv5_block2_out[0][0]
conv5_block3_3_bn[0][0]
__________________________________________________________________________________________________
conv5_block3_out (Activation) (None, 7, 7, 2048) 0 conv5_block3_add[0][0]
==================================================================================================
Here, we can observe that the final layer in the ResNet architecture (again, without the fully-connected layer head) is an Activation layer that is 7 x 7 x 2048. We can construct a new, freshly initialized layer head by accepting the baseModel.output and then applying a 7×7 average pooling, followed by our fully-connected layers:
headModel = baseModel.output
headModel = AveragePooling2D(pool_size=(7, 7))(headModel)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(256, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(len(config.CLASSES), activation="softmax")(headModel)
With the headModel constructed, we simply need to append it to the body of the ResNet model:
model = Model(inputs=baseModel.input, outputs=headModel)
Now, if we take a look at the model.summary(), we can conclude that we have successfully added a new fully-connected layer head to ResNet, making the architecture suitable for fine-tuning:
conv5_block3_3_conv (Conv2D) (None, 7, 7, 2048) 1050624 conv5_block3_2_relu[0][0]
__________________________________________________________________________________________________
conv5_block3_3_bn (BatchNormali (None, 7, 7, 2048) 8192 conv5_block3_3_conv[0][0]
__________________________________________________________________________________________________
conv5_block3_add (Add) (None, 7, 7, 2048) 0 conv5_block2_out[0][0]
conv5_block3_3_bn[0][0]
__________________________________________________________________________________________________
conv5_block3_out (Activation) (None, 7, 7, 2048) 0 conv5_block3_add[0][0]
__________________________________________________________________________________________________
average_pooling2d (AveragePooli (None, 1, 1, 2048) 0 conv5_block3_out[0][0]
__________________________________________________________________________________________________
flatten (Flatten) (None, 2048) 0 average_pooling2d[0][0]
__________________________________________________________________________________________________
dense (Dense) (None, 256) 524544 flatten[0][0]
__________________________________________________________________________________________________
dropout (Dropout) (None, 256) 0 dense[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 2) 514 dropout[0][0]
==================================================================================================
In the remainder of this tutorial, I will provide you with a fully working example of fine-tuning ResNet using Keras and TensorFlow. Our camouflage vs. normal clothing dataset
Figure 6: A camouflage clothing dataset will help us to build a camo vs. normal clothes detector. We’ll fine-tune a ResNet50 CNN using Keras and TensorFlow to build a camouflage clothing classifier in today’s tutorial. In this tutorial, we will be training a camouflage clothes vs. normal clothes detector. I’ll be discussing exactly why we’re building a camouflage clothes detector in two weeks, but for the time being, let this serve as a standalone example of how to fine-tune ResNet with Keras and TensorFlow. The dataset we’re using here was curated by PyImageSearch readers, Julia Riede and Nitin Rai. The dataset consists of two classes, each with an equal number of images:
camouflage_clothes: 7,949 images
normal_clothes: 7,949 images
A sample of the images for each class can be seen in Figure 6. |
https://pyimagesearch.com/2020/04/27/fine-tuning-resnet-with-keras-tensorflow-and-deep-learning/ | In the remainder of this tutorial, you’ll learn how to fine-tune ResNet to predict both of these classes — the knowledge that you gain will enable you to fine-tune ResNet on your own datasets as well. Downloading our camouflage vs. normal clothing dataset
Figure 7: We will download a normal vs. camouflage clothing dataset from Kaggle. We’ll then fine-tune ResNet on the deep learning dataset using Keras and TensorFlow. The camouflage clothes vs. normal clothes dataset can be downloaded directly from Kaggle:
https://www.kaggle.com/imneonizer/normal-vs-camouflage-clothes
Simply click the “Download” button (Figure 7) to download a .zip archive of the dataset. Project structure
Be sure to grab and unzip the code from the “Downloads” section of this blog post. Let’s take a moment to inspect the organizational structure of our project:
$ tree --dirsfirst --filelimit 10
. ├── 8k_normal_vs_camouflage_clothes_images
│ ├── camouflage_clothes [7949 entries]
│ └── normal_clothes [7949 entries]
├── pyimagesearch
│ ├── __init__.py
│ └── config.py
├── build_dataset.py
├── camo_detector.model
├── normal-vs-camouflage-clothes.zip
├── plot.png
└── train_camo_detector.py
4 directories, 7 files
As you can see, I’ve placed the dataset (normal-vs-camouflage-clothes.zip) in the root directory of our project and extracted the files. The images therein now reside in the 8k_normal_vs_camouflage_clothes_images directory. Today’s pyimagesearch module comes with a single Python configuration file (config.py) that houses our important paths and variables. We’ll review this file in the next section. |
https://pyimagesearch.com/2020/04/27/fine-tuning-resnet-with-keras-tensorflow-and-deep-learning/ | Our Python driver scripts consist of:
build_dataset.py: Splits our data into training, testing, and validation subdirectories
train_camo_detector.py: Trains a camouflage classifier with Python, TensorFlow/Keras, and fine-tuning
Our configuration file
Before we can (1) build our camouflage vs. noncamouflage image dataset and (2) fine-tune ResNet on our image dataset, let’s first create a simple configuration file to store all our important image paths and variables. Open up the config.py file in your project, and insert the following code:
# import the necessary packages
import os
# initialize the path to the *original* input directory of images
ORIG_INPUT_DATASET = "8k_normal_vs_camouflage_clothes_images"
# initialize the base path to the *new* directory that will contain
# our images after computing the training and testing split
BASE_PATH = "camo_not_camo"
# derive the training, validation, and testing directories
TRAIN_PATH = os.path.sep.join([BASE_PATH, "training"])
VAL_PATH = os.path.sep.join([BASE_PATH, "validation"])
TEST_PATH = os.path.sep.join([BASE_PATH, "testing"])
The os module import allows us to build dynamic paths directly in our configuration file. Our existing input dataset path should be placed on Line 5 (the Kaggle dataset you should have downloaded by this point). The path to our new dataset directory that will contain our training, testing, and validation splits is shown on Line 9. This path will be created by the build_dataset.py script. Three subdirectories per class (we have two classes) will also be created (Lines 12-14) — the paths to our training, validation, and testing dataset splits. Each will be populated with a subset of the images from our dataset. Next, we’ll define our split percentages and classes:
# define the amount of data that will be used training
TRAIN_SPLIT = 0.75
# the amount of validation data will be a percentage of the
# *training* data
VAL_SPLIT = 0.1
# define the names of the classes
CLASSES = ["camouflage_clothes", "normal_clothes"]
Training data will be represented by 75% of all the data available (Line 17), 10% of which will be marked for validation (Line 21). Our camouflage and normal clothes classes are defined on Line 24. We’ll wrap up with a few hyperparameters and our output model path:
# initialize the initial learning rate, batch size, and number of
# epochs to train for
INIT_LR = 1e-4
BS = 32
NUM_EPOCHS = 20
# define the path to the serialized output model after training
MODEL_PATH = "camo_detector.model"
The initial learning rate, batch size, and number of epochs to train for are set on Lines 28-30. |
https://pyimagesearch.com/2020/04/27/fine-tuning-resnet-with-keras-tensorflow-and-deep-learning/ | The path to the output serialized ResNet-based camouflage classification model after fine-tuning will be stored at the path defined on Line 33.
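As a quick sanity check on these values, here is the arithmetic the splits above imply for the full 15,898-image dataset (a back-of-the-envelope sketch, not part of the configuration file):
# approximate split sizes implied by TRAIN_SPLIT and VAL_SPLIT above
total = 7949 + 7949               # 15,898 images in the dataset
train_and_val = int(total * 0.75) # 11,923 images
val = int(train_and_val * 0.1)    # 1,192 validation images
train = train_and_val - val       # 10,731 training images
test = total - train_and_val      # 3,975 testing images
These numbers line up with the image counts reported when we run the training script later in this tutorial.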
Implementing our camouflage dataset builder script
With our configuration file implemented, let’s move on to creating our dataset builder, which will:
Split our dataset into training, validation, and testing sets, respectively
Organize our images on disk so we can use Keras’ ImageDataGenerator class and associated flow_from_directory function to easily fine-tune ResNet
Open up build_dataset.py, and let’s get started:
# import the necessary packages
from pyimagesearch import config
from imutils import paths
import random
import shutil
import os
We begin by importing our config from the previous section along with the paths module, which will help us to find the image files on disk. Three modules built into Python will be used for shuffling paths and creating directories/subdirectories. Let’s go ahead and grab the paths to all original images in our dataset:
# grab the paths to all input images in the original input directory
# and shuffle them
imagePaths = list(paths.list_images(config.ORIG_INPUT_DATASET))
random.seed(42)
random.shuffle(imagePaths)
# compute the training and testing split
i = int(len(imagePaths) * config.TRAIN_SPLIT)
trainPaths = imagePaths[:i]
testPaths = imagePaths[i:]
# we'll be using part of the training data for validation
i = int(len(trainPaths) * config.VAL_SPLIT)
valPaths = trainPaths[:i]
trainPaths = trainPaths[i:]
# define the datasets that we'll be building
datasets = [
("training", trainPaths, config. TRAIN_PATH),
("validation", valPaths, config. VAL_PATH),
("testing", testPaths, config. TEST_PATH)
]
We grab our imagePaths and randomly shuffle them with a seed for reproducibility (Lines 15-17).
https://pyimagesearch.com/2020/04/27/fine-tuning-resnet-with-keras-tensorflow-and-deep-learning/ | From there, we calculate the list index for our training/testing split (currently set to 75% in our configuration file) via Line 15. The list index, i, is used to form our trainPaths and testPaths. The next split index is calculated from the number of trainPaths — 10% of the paths are marked as valPaths for validation (Lines 20-22). Lines 25-29 define the dataset splits we’ll be building in the remainder of this script. Let’s proceed:
# loop over the datasets
for (dType, imagePaths, baseOutput) in datasets:
# show which data split we are creating
print("[INFO] building '{}' split".format(dType))
# if the output base output directory does not exist, create it
if not os.path.exists(baseOutput):
print("[INFO] 'creating {}' directory".format(baseOutput))
os.makedirs(baseOutput)
# loop over the input image paths
for inputPath in imagePaths:
# extract the filename of the input image along with its
# corresponding class label
filename = inputPath.split(os.path.sep)[-1]
label = inputPath.split(os.path.sep)[-2]
# build the path to the label directory
labelPath = os.path.sep.join([baseOutput, label])
# if the label output directory does not exist, create it
if not os.path.exists(labelPath):
print("[INFO] 'creating {}' directory".format(labelPath))
os.makedirs(labelPath)
# construct the path to the destination image and then copy
# the image itself
p = os.path.sep.join([labelPath, filename])
shutil.copy2(inputPath, p)
This last block of code handles copying images from their original location into their destination path; directories and subdirectories are created in the process. Let’s review in more detail:
We loop over each of the datasets, creating the directory if it doesn’t exist (Lines 32-39)
For each of our imagePaths, we proceed to:
Extract the filename and class label (Lines 45 and 46)
Build the path to the label directory (Line 49) and create the subdirectory, if required (Lines 52-54)
Copy the image from the source directory into its destination (Lines 58 and 59)
In the next section, we’ll build our dataset accordingly. Building the camouflage image dataset
Let’s now build and organize our camouflage image dataset. Make sure you have:
Used the “Downloads” section of this tutorial to download the source code
Followed the “Downloading our camouflage vs. normal clothing dataset” section above to download the dataset
From there, open a terminal, and execute the following command:
$ python build_dataset.py
[INFO] building 'training' split
[INFO] 'creating camo_not_camo/training' directory
[INFO] 'creating camo_not_camo/training/normal_clothes' directory
[INFO] 'creating camo_not_camo/training/camouflage_clothes' directory
[INFO] building 'validation' split
[INFO] 'creating camo_not_camo/validation' directory
[INFO] 'creating camo_not_camo/validation/camouflage_clothes' directory
[INFO] 'creating camo_not_camo/validation/normal_clothes' directory
[INFO] building 'testing' split
[INFO] 'creating camo_not_camo/testing' directory
[INFO] 'creating camo_not_camo/testing/normal_clothes' directory
[INFO] 'creating camo_not_camo/testing/camouflage_clothes' directory
You can then use the tree command to inspect camo_not_camo directory to validate that each of the training, testing, and validation splits was created:
$ tree camo_not_camo --filelimit 20
camo_not_camo
├── testing
│ ├── camouflage_clothes [2007 entries]
│ └── normal_clothes [1968 entries]
├── training
│ ├── camouflage_clothes [5339 entries]
│ └── normal_clothes [5392 entries]
└── validation
├── camouflage_clothes [603 entries]
└── normal_clothes [589 entries]
9 directories, 0 files
Implementing our ResNet fine-tuning script with Keras and TensorFlow
With our dataset created and properly organized on disk, let’s learn how we can fine-tune ResNet using Keras and TensorFlow. Open the train_camo_detector.py file, and insert the following code:
# set the matplotlib backend so figures can be saved in the background
import matplotlib
matplotlib.use("Agg")
# import the necessary packages
from pyimagesearch import config
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import AveragePooling2D
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.applications import ResNet50
from sklearn.metrics import classification_report
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse
Our most notable imports include the ResNet50 CNN architecture and Keras layers for building the head of our model for fine-tuning. Settings for the entire script are housed in the config. |
https://pyimagesearch.com/2020/04/27/fine-tuning-resnet-with-keras-tensorflow-and-deep-learning/ | Additionally, we’ll use the ImageDataGenerator class for data augmentation and scikit-learn’s classification_report to print statistics in our terminal. We also need matplotlib for plotting and paths which assists with finding image files on disk. With our imports ready to go, let’s go ahead and parse command line arguments:
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--plot", type=str, default="plot.png",
help="path to output loss/accuracy plot")
args = vars(ap.parse_args())
# determine the total number of image paths in training, validation,
# and testing directories
totalTrain = len(list(paths.list_images(config.TRAIN_PATH)))
totalVal = len(list(paths.list_images(config.VAL_PATH)))
totalTest = len(list(paths.list_images(config.TEST_PATH)))
We have a single command line argument --plot, the path to an image file that will have our accuracy/loss training curves. Our other configurations are in the Python configuration file we reviewed previously. Lines 30-32 determine the total number of training, validation, and testing images, respectively. Next, we’ll prepare for data augmentation:
# initialize the training data augmentation object
trainAug = ImageDataGenerator(
rotation_range=25,
zoom_range=0.1,
width_shift_range=0.1,
height_shift_range=0.1,
shear_range=0.2,
horizontal_flip=True,
fill_mode="nearest")
# initialize the validation/testing data augmentation object (which
# we'll be adding mean subtraction to)
valAug = ImageDataGenerator()
# define the ImageNet mean subtraction (in RGB order) and set the
# mean subtraction value for each of the data augmentation
# objects
mean = np.array([123.68, 116.779, 103.939], dtype="float32")
trainAug.mean = mean
valAug.mean = mean
Data augmentation allows for training time mutations of our images including random rotations, zooms, shifts, shears, flips, and mean subtraction. |
https://pyimagesearch.com/2020/04/27/fine-tuning-resnet-with-keras-tensorflow-and-deep-learning/ | Lines 35-42 initialize our training data augmentation object with a selection of these parameters. Similarly, Line 46 initializes the validation/testing data augmentation object (it will only be used for mean subtraction). Both of our data augmentation objects are set up to perform mean subtraction on-the-fly (Lines 51-53). We’ll now instantiate three Python generators from our data augmentation objects:
# initialize the training generator
trainGen = trainAug.flow_from_directory(
config.TRAIN_PATH,
class_mode="categorical",
target_size=(224, 224),
color_mode="rgb",
shuffle=True,
batch_size=config.BS)
# initialize the validation generator
valGen = valAug.flow_from_directory(
config.VAL_PATH,
class_mode="categorical",
target_size=(224, 224),
color_mode="rgb",
shuffle=False,
batch_size=config.BS)
# initialize the testing generator
testGen = valAug.flow_from_directory(
config.TEST_PATH,
class_mode="categorical",
target_size=(224, 224),
color_mode="rgb",
shuffle=False,
batch_size=config.BS)
Here, we’ve initialized training, validation, and testing image data generators. |
https://pyimagesearch.com/2020/04/27/fine-tuning-resnet-with-keras-tensorflow-and-deep-learning/ | Notice that both the valGen and testGen are derived from the same valAug object, which performs mean subtraction. Let’s load our ResNet50 classification model and prepare it for fine-tuning:
# load the ResNet-50 network, ensuring the head FC layer sets are left
# off
print("[INFO] preparing model...")
baseModel = ResNet50(weights="imagenet", include_top=False,
input_tensor=Input(shape=(224, 224, 3)))
# construct the head of the model that will be placed on top of the
# the base model
headModel = baseModel.output
headModel = AveragePooling2D(pool_size=(7, 7))(headModel)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(256, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(len(config.CLASSES), activation="softmax")(headModel)
# place the head FC model on top of the base model (this will become
# the actual model we will train)
model = Model(inputs=baseModel.input, outputs=headModel)
# loop over all layers in the base model and freeze them so they will
# *not* be updated during the training process
for layer in baseModel.layers:
layer.trainable = False
The process of fine-tuning allows us to reuse the filters learned during a previous training exercise. In our case, we load ResNet50 pre-trained on the ImageNet dataset, leaving off the fully-connected (FC) head (Lines 85 and 86). We then construct a new FC headModel (Lines 90-95) and append it to the baseModel (Line 99). The final step for fine-tuning is to ensure that the weights of the base of our CNN are frozen (Lines 103 and 104) — we only want to train (i.e., fine-tune) the head of the network. If you need to brush up on the concept of fine-tuning, please refer to my fine-tuning articles, in particular Fine-tuning with Keras and Deep Learning. We’re now ready to fine-tune our ResNet-based camouflage detector with TensorFlow, Keras, and deep learning:
# compile the model
opt = Adam(lr=config.INIT_LR, decay=config.INIT_LR / config.NUM_EPOCHS)
model.compile(loss="binary_crossentropy", optimizer=opt,
metrics=["accuracy"])
# train the model
print("[INFO] training model...")
H = model.fit_generator(
trainGen,
steps_per_epoch=totalTrain // config.BS,
validation_data=valGen,
validation_steps=totalVal // config.BS,
epochs=config.NUM_EPOCHS)
First, we compile our model with learning rate decay and the Adam optimizer using "binary_crossentropy" loss, since this is a two-class problem (Lines 107-109). If you are training with more than two classes of data, be sure to set your loss to "categorical_crossentropy". Lines 113-118 then train our model using our training and validation data generators. Upon the completion of training, we’ll evaluate our model on the testing set:
# reset the testing generator and then use our trained model to
# make predictions on the data
print("[INFO] evaluating network...")
testGen.reset()
predIdxs = model.predict_generator(testGen,
steps=(totalTest // config.BS) + 1)
# for each image in the testing set we need to find the index of the
# label with corresponding largest predicted probability
predIdxs = np.argmax(predIdxs, axis=1)
# show a nicely formatted classification report
print(classification_report(testGen.classes, predIdxs,
target_names=testGen.class_indices.keys()))
# serialize the model to disk
print("[INFO] saving model...")
model.save(config.MODEL_PATH, save_format="h5")
Lines 123-133 make predictions on the testing set and generate and print a classification report in your terminal for inspection. Then, we serialize our TensorFlow/Keras camouflage classifier to disk (Line 137). |
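Once the model has been serialized, you could later reload it for inference outside of this script. The following is a hedged sketch (the input filename is hypothetical, and this is not one of today's scripts) that mirrors the 224×224 RGB resize and ImageNet mean subtraction used by our generators:
# sketch: load the serialized model and classify a single (hypothetical) image
from tensorflow.keras.models import load_model
import numpy as np
import cv2

model = load_model("camo_detector.model")
image = cv2.imread("example.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (224, 224)).astype("float32")
image -= np.array([123.68, 116.779, 103.939], dtype="float32")
preds = model.predict(np.expand_dims(image, axis=0))
# class index order should match the training generator's class_indices
print(preds)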
https://pyimagesearch.com/2020/04/27/fine-tuning-resnet-with-keras-tensorflow-and-deep-learning/ | Finally, plot the training accuracy/loss history via matplotlib:
# plot the training loss and accuracy
N = config.NUM_EPOCHS
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy on Dataset")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
plt.savefig(args["plot"])
Once the plot is generated, Line 151 saves it to disk in the location specified by our --plot command line argument. Fine-tuning ResNet with Keras and TensorFlow results
We are now ready to fine-tune ResNet with Keras and TensorFlow. Make sure you have:
Used the “Downloads” section of this tutorial to download the source code
Followed the “Downloading our camouflage vs. normal clothing dataset” section above to download the dataset
Executed the build_dataset.py script to organize the dataset into the project directory structure for training
From there, open up a terminal, and run the train_camo_detector.py script:
$ python train_camo_detector.py
Found 10731 images belonging to 2 classes. Found 1192 images belonging to 2 classes. Found 3975 images belonging to 2 classes. [INFO] preparing model...
[INFO] training model...
Epoch 1/20
335/335 [==============================] - 311s 929ms/step - loss: 0.1736 - accuracy: 0.9326 - val_loss: 0.1050 - val_accuracy: 0.9671
Epoch 2/20
335/335 [==============================] - 305s 912ms/step - loss: 0.0997 - accuracy: 0.9632 - val_loss: 0.1028 - val_accuracy: 0.9586
Epoch 3/20
335/335 [==============================] - 305s 910ms/step - loss: 0.0729 - accuracy: 0.9753 - val_loss: 0.0951 - val_accuracy: 0.9730
...
Epoch 18/20
335/335 [==============================] - 298s 890ms/step - loss: 0.0336 - accuracy: 0.9878 - val_loss: 0.0854 - val_accuracy: 0.9696
Epoch 19/20
335/335 [==============================] - 298s 891ms/step - loss: 0.0296 - accuracy: 0.9896 - val_loss: 0.0850 - val_accuracy: 0.9679
Epoch 20/20
335/335 [==============================] - 299s 894ms/step - loss: 0.0275 - accuracy: 0.9905 - val_loss: 0.0955 - val_accuracy: 0.9679
[INFO] evaluating network...
precision recall f1-score support
normal_clothes 0.95 0.99 0.97 2007
camouflage_clothes 0.99 0.95 0.97 1968
accuracy 0.97 3975
macro avg 0.97 0.97 0.97 3975
weighted avg 0.97 0.97 0.97 3975
[INFO] saving model...
Here, you can see that we are obtaining ~97% accuracy on our normal clothes vs. camouflage clothes detector. Our training plot is shown below:
Figure 8: Training plot of our accuracy/loss curves when fine-tuning ResNet on a camouflage deep learning dataset using Keras and TensorFlow. Our training loss decreases at a much sharper rate than our validation loss; furthermore, it appears that validation loss may be rising toward the end of training, indicating that the model may be overfitting. Future experiments should look into applying additional regularization to the model as well as gathering additional training data. |
https://pyimagesearch.com/2020/04/27/fine-tuning-resnet-with-keras-tensorflow-and-deep-learning/ | In two weeks, I’ll show you how to take this fine-tuned ResNet model and use it in a practical, real-world application! Stay tuned for the post; you won’t want to miss it! Credits
This tutorial would not be possible without:
Victor Gevers of the GDI.Foundation, who brought this project to my attention
Nitin Rai, who curated the normal clothes vs. camouflage clothes dataset and posted it on Kaggle
Julia Riede, who curated a variation of the dataset
Additionally, I’d like to credit Han et al. for the ResNet-152 visualization used in this post’s header image.
Summary
In this tutorial you learned how to fine-tune ResNet with Keras and TensorFlow. Fine-tuning is the process of:
Taking a pre-trained deep neural network (in this case, ResNet)
Removing the fully-connected layer head from the network
Placing a new, freshly initialized layer head on top of the body of the network
Optionally freezing the weights for the layers in the body
Training the model, using the pre-trained weights as a starting point to help the model learn faster
Using fine-tuning, we can obtain a higher accuracy model, typically with much less effort, data, and training time. As a practical application, we fine-tuned ResNet on a dataset of camouflage vs. noncamouflage clothes images. This dataset was curated and put together for us by PyImageSearch readers, Julia Riede and Nitin Rai — without them, this tutorial, as well as the project Victor Gevers and I were working on, would not have been possible! Please thank both Julia and Nitin if you see them online. In two weeks, I’ll go into the details of the project that Victor Gevers and I have been working on, which wraps a nice little bow on the following topics that we’ve recently covered on PyImageSearch:
Face detection
Age detection
Removing duplicates from a deep learning dataset
Fine-tuning a model for camouflage clothes vs. noncamouflage clothes detection
It’s a great post with very real applications to make the world a better place with computer vision and deep learning — you won’t want to miss it! To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below! Download the Source Code and FREE 17-page Resource Guide
Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Download the code! |
https://pyimagesearch.com/2020/06/01/opencv-social-distancing-detector/ | Click here to download the source code to this post
In this tutorial, you will learn how to implement a COVID-19 social distancing detector using OpenCV, Deep Learning, and Computer Vision. Today’s tutorial is inspired by PyImageSearch reader Min-Jun, who emailed in asking:
Hi Adrian, I’ve seen a number of people in the computer vision community implementing “social distancing detectors”, but I’m not sure how they work. Would you consider writing a tutorial on the topic? Thank you. Min-Jun is correct — I’ve seen a number of social distancing detector implementations on social media, my favorite ones being from reddit user danlapko and Rohit Kumar Srivastava’s implementation. Today, I’m going to provide you with a starting point for your own social distancing detector. You can then extend it as you see fit to develop your own projects. To learn how to implement a social distancing detector with OpenCV, just keep reading. Looking for the source code to this post? Jump Right To The Downloads Section
OpenCV Social Distancing Detector
In the first part of this tutorial, we’ll briefly discuss what social distancing is and how OpenCV and deep learning can be used to implement a social distancing detector. |
https://pyimagesearch.com/2020/06/01/opencv-social-distancing-detector/ | We’ll then review our project directory structure including:
Our configuration file used to keep our implementation neat and tidy
Our detect_people utility function, which detects people in video streams using the YOLO object detector
Our Python driver script, which glues all the pieces together into a full-fledged OpenCV social distancing detector
We’ll wrap up the post by reviewing the results, including a brief discussion on limitations and future improvements. What is social distancing? Figure 1: Social distancing is important in times of epidemics and pandemics to prevent the spread of disease. Can we build a social distancing detector with OpenCV? ( image source)
Social distancing is a method used to control the spread of contagious diseases. As the name suggests, social distancing implies that people should physically distance themselves from one another, reducing close contact, and thereby reducing the spread of a contagious disease (such as coronavirus):
Figure 2: Social distancing is crucial to preventing the spread of disease. Using computer vision technology based on OpenCV and YOLO-based deep learning, we are able to estimate the social distance of people in video streams. ( image source)
Social distancing is not a new concept, dating back to the fifth century (source), and has even been referenced in religious texts such as the Bible:
And the leper in whom the plague is … he shall dwell alone; [outside] the camp shall his habitation be. — Leviticus 13:46
Social distancing is arguably the most effective nonpharmaceutical way to prevent the spread of a disease — by definition, if people are not close together, they cannot spread germs. Using OpenCV, computer vision, and deep learning for social distancing
Figure 3: The steps involved in an OpenCV-based social distancing application. |
https://pyimagesearch.com/2020/06/01/opencv-social-distancing-detector/ | We can use OpenCV, computer vision, and deep learning to implement social distancing detectors. The steps to build a social distancing detector include:
Apply object detection to detect all people (and only people) in a video stream (see this tutorial on building an OpenCV people counter)
Compute the pairwise distances between all detected people
Based on these distances, check to see if any two people are less than N pixels apart
For the most accurate results, you should calibrate your camera through intrinsic/extrinsic parameters so that you can map pixels to measurable units. An easier alternative (but less accurate) method would be to apply triangle similarity calibration (as discussed in this tutorial). Both of these methods can be used to map pixels to measurable units. Finally, if you do not want/cannot apply camera calibration, you can still utilize a social distancing detector, but you’ll have to rely strictly on the pixel distances, which won’t necessarily be as accurate. For the sake of simplicity, our OpenCV social distancing detector implementation will rely on pixel distances — I will leave it as an exercise for you, the reader, to extend the implementation as you see fit.
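For reference, the triangle similarity approach mentioned above boils down to two small formulas. The sketch below is a simplification (not part of this post's code) that assumes you can photograph an object of known width at a known distance once to calibrate:
# triangle similarity (simplified sketch)
def focal_length(perceived_width_px, known_distance, known_width):
    # calibrate once: an object of known physical width, placed at a known
    # distance, appears perceived_width_px pixels wide in the image
    return (perceived_width_px * known_distance) / known_width

def distance_to_camera(known_width, focal, perceived_width_px):
    # afterwards, estimate the distance to an object of the same known
    # width given how many pixels wide it appears
    return (known_width * focal) / perceived_width_px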
Project structure
Be sure to grab the code from the “Downloads” section of this blog post. From there, extract the files, and use the tree command to see how our project is organized:
$ tree --dirsfirst
. ├── pyimagesearch
│ ├── __init__.py
│ ├── detection.py
│ └── social_distancing_config.py
├── yolo-coco
│ ├── coco.names
│ ├── yolov3.cfg
│ └── yolov3.weights
├── output.avi
├── pedestrians.mp4
└── social_distance_detector.py
2 directories, 9 files
Our YOLO object detector files including the CNN architecture definition, pre-trained weights, and class names are housed in the yolo-coco/ directory. This YOLO model is compatible with OpenCV’s DNN module. |
https://pyimagesearch.com/2020/06/01/opencv-social-distancing-detector/ | Today’s pyimagesearch module (in the “Downloads”) consists of:
social_distancing_config.py: A Python file holding a number of constants in one convenient place.
detection.py: YOLO object detection with OpenCV involves more lines of code than some easier-to-use models. I’ve decided to put the object detection logic in a function in this file for convenience. Doing so frees up our driver script’s frame processing loop from becoming especially cluttered.
Our social distance detector application logic resides in the social_distance_detector.py script. This file is responsible for looping over frames of a video stream and ensuring that people are maintaining a healthy distance from one another during a pandemic. It is compatible with both video files and webcam streams. Our input video file is pedestrians.mp4 and comes from TRIDE’s Test video for object detection. The output.avi file contains the processed output file. Let’s dive into the Python configuration file in the next section.
https://pyimagesearch.com/2020/06/01/opencv-social-distancing-detector/ | Our configuration file
To help keep our code tidy and organized, we’ll be using a configuration file to store important variables. Let’s take a look at them now — open up the social_distancing_config.py file inside the pyimagesearch module, and take a peek:
# base path to YOLO directory
MODEL_PATH = "yolo-coco"
# initialize minimum probability to filter weak detections along with
# the threshold when applying non-maxima suppression
MIN_CONF = 0.3
NMS_THRESH = 0.3
Here, we have the path to the YOLO object detection model (Line 2). We also define the minimum object detection confidence and non-maxima suppression threshold. We have two more configuration constants to define:
# boolean indicating if NVIDIA CUDA GPU should be used
USE_GPU = False
# define the minimum safe distance (in pixels) that two people can be
# from each other
MIN_DISTANCE = 50
The USE_GPU boolean on Line 10 indicates whether your NVIDIA CUDA-capable GPU will be used to speed up inference (requires that OpenCV’s “dnn” module be installed with NVIDIA GPU support). Line 14 defines the minimum distance (in pixels) that people must stay from each other in order to adhere to social distancing protocols. Detecting people in images and video streams with OpenCV
Figure 4: Social distancing applications can be used by humanitarian and law enforcement processionals to gauge whether people are abiding by public health guidance. Pictured is an OpenCV social distancing detection application where the red boxes represent people who are too close to one another. We’ll be using the YOLO object detector to detect people in our video stream. Using YOLO with OpenCV requires a bit more output processing than other object detection methods (such as Single Shot Detectors or Faster R-CNN), so in order to keep our code tidy, let’s implement a detect_people function that encapsulates any YOLO object detection logic. Open up the detection.py file inside the pyimagesearch module, and let’s get started:
# import the necessary packages
from .social_distancing_config import NMS_THRESH
from .social_distancing_config import MIN_CONF
import numpy as np
import cv2
We begin with imports, including those needed from our configuration file on Lines 2 and 3 — the NMS_THRESH and MIN_CONF (refer to the previous section as needed). |
https://pyimagesearch.com/2020/06/01/opencv-social-distancing-detector/ | We’ll also take advantage of NumPy and OpenCV in this script (Lines 4 and 5). Our script consists of a single function definition for detecting people — let’s define that function now:
def detect_people(frame, net, ln, personIdx=0):
# grab the dimensions of the frame and initialize the list of
# results
(H, W) = frame.shape[:2]
results = []
Beginning on Line 7, we define detect_people; the function accepts four parameters:
frame: The frame from your video file or directly from your webcam
net: The pre-initialized and pre-trained YOLO object detection model
ln: The YOLO CNN output layer names
personIdx: The YOLO model can detect many types of objects; this index is specifically for the person class, as we won’t be considering other objects
Line 10 grabs the frame dimensions for scaling purposes. We then initialize our results list, which the function ultimately returns. The results consist of (1) the person prediction probability, (2) bounding box coordinates for the detection, and (3) the centroid of the object. Given our frame, now it is time to perform inference with YOLO:
# construct a blob from the input frame and then perform a forward
# pass of the YOLO object detector, giving us our bounding boxes
# and associated probabilities
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
swapRB=True, crop=False)
net.setInput(blob)
layerOutputs = net.forward(ln)
# initialize our lists of detected bounding boxes, centroids, and
# confidences, respectively
boxes = []
centroids = []
confidences = []
Pre-processing our frame requires that we construct a blob (Lines 16 and 17). From there, we are able to perform object detection with YOLO and OpenCV (Lines 18 and 19). Lines 23-25 initialize lists that will soon hold our bounding boxes, object centroids, and object detection confidences. Now that we’ve performed inference, let’s process the results:
# loop over each of the layer outputs
for output in layerOutputs:
# loop over each of the detections
for detection in output:
# extract the class ID and confidence (i.e., probability)
# of the current object detection
scores = detection[5:]
classID = np.argmax(scores)
confidence = scores[classID]
# filter detections by (1) ensuring that the object
# detected was a person and (2) that the minimum
# confidence is met
if classID == personIdx and confidence > MIN_CONF:
# scale the bounding box coordinates back relative to
# the size of the image, keeping in mind that YOLO
# actually returns the center (x, y)-coordinates of
# the bounding box followed by the boxes' width and
# height
box = detection[0:4] * np.array([W, H, W, H])
(centerX, centerY, width, height) = box.astype("int")
# use the center (x, y)-coordinates to derive the top
# and left corner of the bounding box
x = int(centerX - (width / 2))
y = int(centerY - (height / 2))
# update our list of bounding box coordinates,
# centroids, and confidences
boxes.append([x, y, int(width), int(height)])
centroids.append((centerX, centerY))
confidences.append(float(confidence))
Looping over each of the layerOutputs and detections (Lines 28-30), we first extract the classID and confidence (i.e., probability) of the current detected object (Lines 33-35). From there, we verify that (1) the current detection is a person and (2) the minimum confidence is met or exceeded (Line 40). Assuming so, we compute bounding box coordinates and then derive the center (i.e., centroid) of the bounding box (Lines 46 and 47). |
Notice how we scale (i.e., multiply) our detection by the frame dimensions we gathered earlier; YOLO returns coordinates normalized to the range [0, 1], so a center x-coordinate of 0.5 in a 700-pixel-wide frame maps back to centerX = 350. Using the bounding box coordinates, Lines 51 and 52 then derive the top-left coordinates for the object. We then update each of our lists (boxes, centroids, and confidences) via Lines 56-58. Next, we apply non-maxima suppression:
# apply non-maxima suppression to suppress weak, overlapping
# bounding boxes
idxs = cv2.dnn.NMSBoxes(boxes, confidences, MIN_CONF, NMS_THRESH)
# ensure at least one detection exists
if len(idxs) > 0:
# loop over the indexes we are keeping
for i in idxs.flatten():
# extract the bounding box coordinates
(x, y) = (boxes[i][0], boxes[i][1])
(w, h) = (boxes[i][2], boxes[i][3])
# update our results list to consist of the person
# prediction probability, bounding box coordinates,
# and the centroid
r = (confidences[i], (x, y, x + w, y + h), centroids[i])
results.append(r)
# return the list of results
return results
The purpose of non-maxima suppression is to suppress weak, overlapping bounding boxes. Line 62 applies this method (it is built into OpenCV) and results in the idxs of the detections. Assuming the result of NMS yields at least one detection (Line 65), we loop over them, extract bounding box coordinates, and update our results list consisting of the:
Confidence of each person detection
Bounding box of each person
Centroid of each person (an illustrative results entry is sketched just below)
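For concreteness, here is a minimal sketch of what one entry in results looks like once it is unpacked; the numbers below are placeholders for illustration, not output from this tutorial:
# illustrative structure of one entry in the list returned by detect_people;
# the values are made up
results = [(0.92, (120, 80, 245, 410), (182, 245))]
for (prob, bbox, centroid) in results:
	(startX, startY, endX, endY) = bbox
	print(prob, (startX, startY), (endX, endY), centroid)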
Finally, we return the results to the calling function.
Implementing a social distancing detector with OpenCV and deep learning
We are now ready to implement our social distancing detector with OpenCV. Open up a new file, name it social_distance_detector.py, and insert the following code:
# import the necessary packages
from pyimagesearch import social_distancing_config as config
from pyimagesearch.detection import detect_people
from scipy.spatial import distance as dist
import numpy as np
import argparse
import imutils
import cv2
import os
The most notable imports on Lines 2-9 include our config, our detect_people function, and the Euclidean distance metric (shortened to dist and to be used to determine the distance between centroids). With our imports taken care of, let’s handle our command line arguments:
# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--input", type=str, default="",
help="path to (optional) input video file")
ap.add_argument("-o", "--output", type=str, default="",
help="path to (optional) output video file")
ap.add_argument("-d", "--display", type=int, default=1,
help="whether or not output frame should be displayed")
args = vars(ap.parse_args())
This script requires the following arguments to be passed via the command line/terminal:
--input: The path to the optional video file. If no video file path is provided, your computer's first webcam will be used by default.
--output: The optional path to an output (i.e., processed) video file. If this argument is not provided, the processed video will not be exported to disk.
--display: By default, we'll display our social distance application on-screen as we process each frame. Alternatively, you can set this value to 0 to process the stream in the background.
Now we have a handful of initializations to take care of:
# load the COCO class labels our YOLO model was trained on
labelsPath = os.path.sep.join([config.MODEL_PATH, "coco.names"])
LABELS = open(labelsPath).read().strip().split("\n")
# derive the paths to the YOLO weights and model configuration
weightsPath = os.path.sep.join([config.MODEL_PATH, "yolov3.weights"])
configPath = os.path.sep.join([config.MODEL_PATH, "yolov3.cfg"])
Here, we load our COCO class labels (Lines 22 and 23) as well as define our YOLO paths (Lines 26 and 27).
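If you want to sanity-check the label ordering, the standard Darknet coco.names file lists "person" as its very first class, which is why the personIdx default of 0 in detect_people lines up. A quick check (assuming that standard file):
print(LABELS[:3])                # ['person', 'bicycle', 'car']
print(LABELS.index("person"))    # 0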
Using the YOLO paths, now we can load the model into memory:
# load our YOLO object detector trained on COCO dataset (80 classes)
print("[INFO] loading YOLO from disk...")
net = cv2.dnn.readNetFromDarknet(configPath, weightsPath)
# check if we are going to use GPU
if config.USE_GPU:
# set CUDA as the preferable backend and target
print("[INFO] setting preferable backend and target to CUDA...")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
Using OpenCV’s DNN module, we load our YOLO net into memory (Line 31). If you have the USE_GPU option set in the config, then the backend processor is set to be your NVIDIA CUDA-capable GPU. If you don’t have a CUDA-capable GPU, ensure that the configuration option is set to False so that your CPU is the processor used. Next, we’ll perform three more initializations:
# determine only the *output* layer names that we need from YOLO
ln = net.getLayerNames()
ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()]
# initialize the video stream and pointer to output video file
print("[INFO] accessing video stream...")
vs = cv2.VideoCapture(args["input"] if args["input"] else 0)
writer = None
Here, Lines 41 and 42 gather the output layer names from YOLO; we'll need them in order to process our results. We then start our video stream (either a video file via the --input command line argument or a webcam stream) on Line 46. For now, we initialize our output video writer to None. Further setup occurs in the frame processing loop.
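One version-related caveat: depending on your OpenCV release, getUnconnectedOutLayers() may return a flat array of indices rather than nested single-element arrays, which would break the i[0] indexing on Line 42. A version-agnostic alternative available in recent 3.4.x/4.x releases is to request the names directly:
# equivalent to Lines 41 and 42, but without the manual index arithmetic
ln = net.getUnconnectedOutLayersNames()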
Finally, we're ready to begin processing frames and determining if people are maintaining safe social distance:
# loop over the frames from the video stream
while True:
# read the next frame from the file
(grabbed, frame) = vs.read()
# if the frame was not grabbed, then we have reached the end
# of the stream
if not grabbed:
break
# resize the frame and then detect people (and only people) in it
frame = imutils.resize(frame, width=700)
results = detect_people(frame, net, ln,
personIdx=LABELS.index("person"))
# initialize the set of indexes that violate the minimum social
# distance
violate = set()
Lines 50-52 begin a loop over frames from our video stream. The dimensions of our input video for testing are quite large, so we resize each frame while maintaining aspect ratio (Line 60). Using our detect_people function implemented in the previous section, we grab results of YOLO object detection (Lines 61 and 62). If you need a refresher on the input parameters required or the format of the output results for the function call, be sure to refer to the listing in the previous section. We then initialize our violate set on Line 66; this set maintains a listing of people who violate social distance regulations set forth by public health professionals. We're now ready to check the distances among the people in the frame:
# ensure there are *at least* two people detections (required in
# order to compute our pairwise distance maps)
if len(results) >= 2:
# extract all centroids from the results and compute the
# Euclidean distances between all pairs of the centroids
centroids = np.array([r[2] for r in results])
D = dist.cdist(centroids, centroids, metric="euclidean")
# loop over the upper triangular of the distance matrix
for i in range(0, D.shape[0]):
for j in range(i + 1, D.shape[1]):
# check to see if the distance between any two
# centroid pairs is less than the configured number
# of pixels
if D[i, j] < config.MIN_DISTANCE:
# update our violation set with the indexes of
# the centroid pairs
violate.add(i)
violate.add(j)
Assuming that at least two people were detected in the frame (Line 70), we proceed to:
Compute the Euclidean distance between all pairs of centroids (Lines 73 and 74)
Loop over the upper triangle of the distance matrix (since the matrix is symmetrical), beginning on Lines 77 and 78
Check to see if the distance violates our minimum social distance set forth by public health professionals (Line 82). If two people are too close, we add them to the violate set (a short standalone sketch of this distance computation follows this list)
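If the pairwise distance step feels abstract, here is a standalone sketch of what dist.cdist produces (the centroid values are made up):
from scipy.spatial import distance as dist
import numpy as np
# three toy (x, y) centroids, in pixels
centroids = np.array([(100, 100), (130, 100), (400, 300)])
# D[i, j] is the Euclidean distance between centroid i and centroid j; the
# matrix is symmetric with zeros on the diagonal, which is why the main loop
# only needs to inspect its upper triangle
D = dist.cdist(centroids, centroids, metric="euclidean")
print(D)
# [[  0.    30.   360.56]
#  [ 30.     0.   336.  ]
#  [360.56 336.     0.  ]]  (values rounded)
# with MIN_DISTANCE = 50, only the pair (0, 1) would be flagged as a violation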
What fun would our app be if we couldn’t visualize results? No fun at all, I say! So let’s annotate our frame with rectangles, circles, and text:
# loop over the results
for (i, (prob, bbox, centroid)) in enumerate(results):
# extract the bounding box and centroid coordinates, then
# initialize the color of the annotation
(startX, startY, endX, endY) = bbox
(cX, cY) = centroid
color = (0, 255, 0)
# if the index pair exists within the violation set, then
# update the color
if i in violate:
color = (0, 0, 255)
# draw (1) a bounding box around the person and (2) the
# centroid coordinates of the person,
cv2.rectangle(frame, (startX, startY), (endX, endY), color, 2)
cv2.circle(frame, (cX, cY), 5, color, 1)
# draw the total number of social distancing violations on the
# output frame
text = "Social Distancing Violations: {}".format(len(violate))
cv2.putText(frame, text, (10, frame.shape[0] - 25),
cv2.FONT_HERSHEY_SIMPLEX, 0.85, (0, 0, 255), 3)
Looping over the results on Line 89, we proceed to:
Extract the bounding box and centroid coordinates (Lines 92 and 93)
Initialize the color of the bounding box to green (Line 94)
Check to see if the current index exists in our violate set, and if so, update the color to red (Lines 98 and 99)
Draw both the bounding box of the person and their object centroid (Lines 103 and 104). |
Each is color-coordinated, so we'll see which people are too close. Display the total number of social distancing violations (i.e., the length of our violate set) on the output frame (Lines 108-110). A quick note on OpenCV's color ordering follows this list.
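One small detail worth calling out: OpenCV expects colors in BGR order rather than RGB, which is why the tuples above may look reversed at first glance:
color = (0, 255, 0)    # green in BGR -> person is compliant
color = (0, 0, 255)    # red in BGR   -> person is in the violate set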
Let’s wrap up our OpenCV social distance detector:
# check to see if the output frame should be displayed to our
# screen
if args["display"] > 0:
# show the output frame
cv2.imshow("Frame", frame)
key = cv2.waitKey(1) & 0xFF
# if the `q` key was pressed, break from the loop
if key == ord("q"):
break
# if an output video file path has been supplied and the video
# writer has not been initialized, do so now
if args["output"] ! = "" and writer is None:
# initialize our video writer
fourcc = cv2.VideoWriter_fourcc(*"MJPG")
writer = cv2.VideoWriter(args["output"], fourcc, 25,
(frame.shape[1], frame.shape[0]), True)
# if the video writer is not None, write the frame to the output
# video file
if writer is not None:
writer.write(frame)
To close out, we:
Display the frame to the screen if required (Lines 114-116) while waiting for the q (quit) key to be pressed (Lines 117-121)
Initialize our video writer if necessary (Lines 125-129)
Write the processed (annotated) frame to disk (Lines 133 and 134)
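The listing ends with the frame loop itself; if you prefer to be explicit about cleanup once the loop exits, the standard OpenCV release calls look like this (not part of the original script, just housekeeping):
# release the video stream pointer and writer, and close any open windows
vs.release()
if writer is not None:
	writer.release()
cv2.destroyAllWindows()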
OpenCV social distancing detector results
We are now ready to test our OpenCV social distancing detector. Make sure you use the “Downloads” section of this tutorial to download the source code and example demo video. From there, open up a terminal, and execute the following command:
$ time python social_distance_detector.py --input pedestrians.mp4 \
--output output.avi --display 0
[INFO] loading YOLO from disk...
[INFO] accessing video stream...
real 3m43.120s
user 23m20.616s
sys 0m25.824s
Here, you can see that I was able to process the entire video in 3m43s on my CPU, and as the results show, our social distancing detector is correctly marking people who violate social distancing rules. The problem with this current implementation is speed. Our CPU-based social distancing detector is obtaining ~2.3 FPS, which is far too slow for real-time processing. You can obtain a higher frame processing rate by (1) utilizing an NVIDIA CUDA-capable GPU and (2) compiling/installing OpenCV’s “dnn” module with NVIDIA GPU support. Provided you already have OpenCV installed with NVIDIA GPU support, all you need to do is set USE_GPU = True in your social_distancing_config.py file:
# boolean indicating if NVIDIA CUDA GPU should be used
USE_GPU = True
Again, make sure USE_GPU = True if you wish to use your GPU. From there, you can re-run the social_distance_detector.py script:
$ time python social_distance_detector.py --input pedestrians.mp4 \
--output output.avi --display 0
[INFO] loading YOLO from disk...
[INFO] setting preferable backend and target to CUDA...
[INFO] accessing video stream...
real 0m56.008s
user 1m15.772s
sys 0m7.036s
Here, we processed the entire video in only 56 seconds, amounting to ~9.38 FPS, which is a 307% speedup! |
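As a quick sanity check on that figure, the speedup follows directly from the throughput numbers:
cpu_fps, gpu_fps = 2.3, 9.38
print("{:.0f}% speedup".format((gpu_fps / cpu_fps - 1) * 100))   # ~308%, in line with the ~307% reported above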
Limitations and future improvements
As already mentioned earlier in this tutorial, our social distancing detector did not leverage a proper camera calibration, meaning that we could not (easily) map distances in pixels to actual measurable units (i.e., meters, feet, etc.). Therefore, the first step to improving our social distancing detector is to utilize a proper camera calibration. Doing so will yield better results and enable you to compute actual measurable units (rather than pixels). Secondly, you should consider applying a top-down transformation of your viewing angle, as this implementation has done:
Figure 5: Applying a perspective transform or using stereo computer vision would allow you to get a more accurate representation of social distancing with OpenCV. While more accurate, the engineering involved in such a system is more complex and isn't always necessary. (image source)
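As a rough illustration of that idea, a ground-plane perspective transform can be built with OpenCV's own functions; the four source points below are placeholders you would measure for your particular camera view, not values taken from this tutorial:
import cv2
import numpy as np
# four points on the ground plane in the original frame (placeholders), and
# the rectangle they should map to in the top-down ("bird's eye") view
src = np.float32([(120, 400), (580, 400), (700, 700), (20, 700)])
dst = np.float32([(0, 0), (400, 0), (400, 600), (0, 600)])
M = cv2.getPerspectiveTransform(src, dst)
# project detected centroids into the top-down view before measuring distances
centroids = np.float32([[(350, 520)], [(410, 540)]])   # shape (N, 1, 2)
topdown = cv2.perspectiveTransform(centroids, M)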
From there, you can apply the distance calculations to the top-down view of the pedestrians, leading to a better distance approximation. My third recommendation is to improve the people detection process. OpenCV's YOLO implementation is quite slow not because of the model itself but because of the additional post-processing required by the model. To further speed up the pipeline, consider utilizing a Single Shot Detector (SSD) running on your GPU — that will improve frame throughput rate considerably. To wrap up, I'd like to mention that there are a number of social distancing detector implementations you'll see online — the one I've covered here today should be considered a template and starting point that you can build off of.