url | text
---|---
https://pyimagesearch.com/blog/ | Table of Contents Understanding Tasks in Diffusers: Part 3 Introduction Why Not Image-to-Image? ControlNet Models Configuring Your Development Environment Setup and Imports Installation Imports Utility Functions Canny ControlNet Setting Up Loading the Model Optimizing the Pipeline Image Generation Cleaning Up… |
https://pyimagesearch.com/2017/09/25/configuring-ubuntu-for-deep-learning-with-python/ |
When it comes to learning new technology such as deep learning, configuring your development environment tends to be half the battle. Different operating systems, hardware, dependencies, and the actual libraries themselves can lead to many headaches before you’re even able to get started studying deep learning. These issues are further compounded by the speed of deep learning library updates and releases — new features push innovation, but oftentimes break previous versions. Your environment can quickly become obsolete, so it is imperative to become an expert in installing and configuring your deep learning environment. Now that Deep Learning for Computer Vision with Python has officially released, I’ll be publishing three posts this week where I will demonstrate how to stand up your own deep learning environments so that you can get a head start before you dive into reading. I’ll be demonstrating how to configure your own native development environment for the following operating systems and peripherals:
Configuring Ubuntu for deep learning with Python (i.e., the post you are currently reading)
Setting up Ubuntu (with GPU support) for deep learning with Python
Configuring macOS for deep learning with Python
As you start to walk the path to deep learning and computer vision mastery, I’ll be right there with you. To get started configuring your Ubuntu machine for deep learning with Python, just keep reading. Configuring Ubuntu for deep learning with Python
Accompanying my new deep learning book is a downloadable pre-configured Ubuntu VirtualBox virtual machine with Keras, TensorFlow, OpenCV, and other computer vision/machine learning libraries pre-installed. By far, this is the fastest way to get up and running with Deep Learning for Computer Vision with Python. That being said, it is often desirable to install your environment on the bare metal so that you can take advantage of your physical hardware. |
https://pyimagesearch.com/2017/09/25/configuring-ubuntu-for-deep-learning-with-python/ | For the GPU install tutorial part of this series it is a requirement that you be on the metal — a VM just won’t cut it since it doesn’t have access to your physical GPU. Today, our blog post is broken down into four relatively easy steps:
Step #1: Install Ubuntu system dependencies
Step #2: Create your Python 3 virtual environment
Step #3: Compile and Install OpenCV
Step #4: Install Keras
Taking note of the steps, you will see that Deep Learning for Computer Vision with Python supports Python 3. Python 3 will be the standard on PyImageSearch going forward as it is stable and, quite frankly, the future. Many organizations have been hesitant to adopt Python 3 at first (me included, as there was no Python 3 support for OpenCV until OpenCV 3), but at this point if you don’t adopt Python 3 you will be left in the dust. Expect PyImageSearch Gurus course content to be compatible with Python 3 in the near future as well. Notice that we have chosen Keras as our deep learning library. Keras “stands out from the rest” of the available libraries for its ease of use and compatibility with both TensorFlow and Theano. My deep learning book focuses on fundamentals and breaking into the field with ease rather than introducing you to a bunch of libraries — so for the Starter Bundle and Practitioner Bundle, I demonstrate various tasks and exercises with Keras (as well as implementing some basic neural network concepts by hand). The ImageNet Bundle takes advantage of mxnet as well. While we will be primarily using Keras in my book, there are many deep learning libraries for Python, and I encourage you to become familiar with my top 9 favorite Python deep learning libraries. |
https://pyimagesearch.com/2017/09/25/configuring-ubuntu-for-deep-learning-with-python/ | To get started, you’ll want to have some time on your hands and access to an Ubuntu machine’s terminal — SSH is perfectly suitable if your box is in the cloud or elsewhere. Let’s begin! Step #1: Install Ubuntu system dependencies
The purpose of this step is to prepare your system with the dependencies necessary for OpenCV. All steps in this tutorial will be accomplished using your terminal. To start, open up your command line and update the apt-get package manager to refresh and upgrade any pre-installed packages/libraries:
$ sudo apt-get update
$ sudo apt-get upgrade
We’ll also need to install some developer tools as well as prerequisites required for image and video I/O, optimizations, and creating plots/visualizations:
$ sudo apt-get install build-essential cmake git unzip pkg-config
$ sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
$ sudo apt-get install libxvidcore-dev libx264-dev
$ sudo apt-get install libgtk-3-dev
$ sudo apt-get install libhdf5-serial-dev graphviz
$ sudo apt-get install libopenblas-dev libatlas-base-dev gfortran
$ sudo apt-get install python-tk python3-tk python-imaging-tk
We’ll wrap up Step #1 by installing the Python development headers and libraries for both Python 2.7 and Python 3.5 (that way you have both):
$ sudo apt-get install python2.7-dev python3-dev
Note: If you do not install the Python development headers and static library, you’ll run into issues during Step #3 where we run cmake to configure our build. If these headers are not installed, then the cmake command will be unable to automatically determine the proper values of the Python interpreter and Python libraries. In short, the output of this section will look “empty” and you will not be able to build the Python bindings. When you get to Step #3, take the time to compare your output of the command to mine. Let’s continue on by creating a virtual environment to house OpenCV and Keras. |
https://pyimagesearch.com/2017/09/25/configuring-ubuntu-for-deep-learning-with-python/ | Step #2: Create your Python virtual environment
In this section we will set up a Python virtual environment on your system. Installing pip
We are now ready to start configuring our Python development environment for the build. The first step is to install pip , a Python package manager:
$ wget https://bootstrap.pypa.io/get-pip.py
$ sudo python get-pip.py
$ sudo python3 get-pip.py
Installing virtualenv and virtualenvwrapper
I’ve mentioned this in every single install tutorial I’ve ever done, but I’ll say it again here today: I’m a huge fan of both virtualenv and virtualenvwrapper. These Python packages allow you to create separate, independent Python environments for each project that you are working on. In short, using these packages allows you to solve the “Project X depends on version 1.x, but Project Y needs 4.x” dilemma. A fantastic side effect of using Python virtual environments is that you can keep your system Python neat, tidy, and free from clutter. While you can certainly install OpenCV with Python bindings without Python virtual environments, I highly recommend you use them as other PyImageSearch tutorials leverage Python virtual environments. I’ll also be assuming that you have both virtualenv and virtualenvwrapper installed throughout the remainder of this guide. If you would like a full, detailed explanation of why Python virtual environments are a best practice, you should absolutely give this excellent blog post on RealPython a read. I also provide some commentary on why I personally prefer Python virtual environments in the first half of this tutorial. |
https://pyimagesearch.com/2017/09/25/configuring-ubuntu-for-deep-learning-with-python/ | Again, let me reiterate that it’s standard practice in the Python community to be leveraging virtual environments of some sort, so I suggest you do the same:
$ sudo pip install virtualenv virtualenvwrapper
$ sudo rm -rf ~/.cache/pip get-pip.py
Once we have virtualenv and virtualenvwrapper installed, we need to update our ~/.bashrc file to include the following lines at the bottom of the file:
# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh
The ~/.bashrc file is simply a shell script that Bash runs whenever you launch a new terminal. You normally use this file to set various configurations. In this case, we are setting an environment variable called WORKON_HOME to point to the directory where our Python virtual environments live. We then load any necessary configurations from virtualenvwrapper . To update your ~/.bashrc file simply use a standard text editor. I would recommend using nano , vim , or emacs . You can also use graphical editors, but if you’re just getting started, nano is likely the easiest to operate. A simpler solution is to use the echo command and avoid editors entirely:
$ echo -e "\n# virtualenv and virtualenvwrapper" >> ~/.bashrc
$ echo "export WORKON_HOME=$HOME/.virtualenvs" >> ~/.bashrc
$ echo "export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3" >> ~/.bashrc
$ echo "source /usr/local/bin/virtualenvwrapper.sh" >> ~/.bashrc
After editing our ~/.bashrc file, we need to reload the changes:
$ source ~/.bashrc
Note: Calling source on ~/.bashrc only has to be done once for our current shell session. Anytime we open up a new terminal, the contents of ~/.bashrc will be automatically executed (including our updates). Creating a virtual environment for deep learning and computer vision
Now that we have installed virtualenv and virtualenvwrapper, the next step is to actually create the Python virtual environment — we do this using the mkvirtualenv command. |
https://pyimagesearch.com/2017/09/25/configuring-ubuntu-for-deep-learning-with-python/ | In past install tutorials, I’ve presented the choice of Python 2.7 or Python 3. At this point in the Python 3 development cycle, I consider it stable and the right choice. You may elect to use Python 2.7 if you have specific compatibility requirements, but for the purposes of my new deep learning book we will use Python 3. With that said, for the following command, ensure your Python (-p ) flag is set to python3 :
$ mkvirtualenv dl4cv -p python3
Regardless of which Python version you decide to use, the end result is that we have created a Python virtual environment named dl4cv (short for “deep learning for computer vision”). You can name this virtual environment whatever you like (and create as many Python virtual environments as you want), but for the time being, I would suggest sticking with the dl4cv name as that is what I’ll be using throughout the rest of this tutorial as well as the remaining install guides in this series. Verifying that you are in the “dl4cv” virtual environment
If you ever reboot your Ubuntu system; log out and log back in; or open up a new terminal, you’ll need to use the workon command to re-access your dl4cv virtual environment. An example of the workon command follows:
$ workon dl4cv
To validate that you are in the dl4cv virtual environment, simply examine your command line — if you see the text (dl4cv) preceding your prompt, then you are in the dl4cv virtual environment:
Figure 1: Inside the dl4cv virtual environment denoted by ‘(dl4cv)’ in the prompt. Otherwise if you do not see the dl4cv text, then you are not in the dl4cv virtual environment:
Figure 2: Outside of the dl4cv virtual environment. Simply execute the ‘workon dl4cv’ command to get into the environment. Installing NumPy
The final step before we compile OpenCV is to install NumPy, a Python package used for numerical processing. |
https://pyimagesearch.com/2017/09/25/configuring-ubuntu-for-deep-learning-with-python/ | To install NumPy, ensure you are in the dl4cv virtual environment (otherwise NumPy will be installed into the system version of Python rather than the dl4cv environment). From there execute the following command:
$ pip install numpy
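To confirm that NumPy landed inside the dl4cv virtual environment rather than the system Python, you can run a quick optional check. This snippet is my own illustration and is not part of the original tutorial; the exact path printed will vary by machine:
$ python
>>> import numpy
>>> numpy.__version__
>>> numpy.__file__
The path reported by numpy.__file__ should live somewhere under ~/.virtualenvs/dl4cv/ if the installation went to the right place.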
Step #3: Compile and Install OpenCV
In this section we will install and compile OpenCV. We’ll start by downloading and unarchiving OpenCV 3.3. Then we will build and compile OpenCV from source. Finally we will test that OpenCV has been installed. Downloading OpenCV
First let’s download opencv and opencv_contrib into your home directory:
$ cd ~
$ wget -O opencv.zip https://github.com/Itseez/opencv/archive/3.3.0.zip
$ wget -O opencv_contrib.zip https://github.com/Itseez/opencv_contrib/archive/3.3.0.zip
You may need to expand the commands above to copy and paste the full path to the opencv_contrib file. Then, let’s unzip both files:
$ unzip opencv.zip
$ unzip opencv_contrib.zip
Running CMake
Let’s create a build directory and run CMake:
$ cd ~/opencv-3.3.0/
$ mkdir build
$ cd build
$ cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D WITH_CUDA=OFF \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.3.0/modules \
-D BUILD_EXAMPLES=ON ..
For CMake, it is important that your flags match mine for compatibility. Also, make sure that your opencv_contrib version is exactly the same as the OpenCV version you downloaded (in this case version 3.3.0 ). Before we move on to the actual compilation step, make sure you examine the output of CMake! Start by scrolling to the section titled Python 3 . |
https://pyimagesearch.com/2017/09/25/configuring-ubuntu-for-deep-learning-with-python/ | Make sure that your Python 3 section looks like the figure below:
Figure 3: Checking that Python 3 will be used when compiling OpenCV 3 for Ubuntu. Notice that the Interpreter points to our python3.5 binary located in the dl4cv virtual environment while numpy points to our NumPy install. In either case, if you do not see the dl4cv virtual environment in these variables’ paths, then it’s almost certainly because you were NOT in the dl4cv virtual environment when you ran CMake! If this is the case, access the dl4cv virtual environment using workon dl4cv and re-run the command outlined above (I would also suggest deleting the build directory and re-creating it). Compiling OpenCV
Now we are ready to compile OpenCV with 4 cores:
$ make -j4
Note: You can adjust the -j flag to correspond to the number of cores on your CPU to achieve compile-time speedups. In this case I used -j4 since my machine has four cores. If you run into compilation errors, you may run the command make clean and then just compile without the parallel flag: make . From there, all you need to do is install OpenCV 3.3 and then free up some disk space if you so desire:
$ sudo make install
$ sudo ldconfig
$ cd ~
$ rm -rf opencv-3.3.0 opencv.zip
$ rm -rf opencv_contrib-3.3.0 opencv_contrib.zip
When your compilation is complete you should see output that looks similar to the following:
Figure 4: OpenCV compilation is complete. Symbolic linking OpenCV to your virtual environment
To sym-link our OpenCV bindings into the dl4cv virtual environment, issue the following commands:
$ cd ~/.virtualenvs/dl4cv/lib/python3.5/site-packages/
$ ln -s /usr/local/lib/python3.5/site-packages/cv2.cpython-35m-x86_64-linux-gnu.so cv2.so
$ cd ~
Notice that I am using Python 3.5 in this example. If you are using Python 3.6 (or newer) you’ll want to update the paths to use your version of Python. |
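If you are unsure what your bindings file is actually named, you can list the candidates before creating the sym-link. This is a quick, hypothetical check of my own (not from the original post); it assumes OpenCV was installed under the default /usr/local prefix and looks in both site-packages and dist-packages:
$ python
>>> import glob
>>> glob.glob("/usr/local/lib/python3*/*-packages/cv2*.so")
Whatever filename prints there is the one to point the ln -s command at.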
https://pyimagesearch.com/2017/09/25/configuring-ubuntu-for-deep-learning-with-python/ | Secondly, your .so file (i.e., the actual OpenCV bindings) may be some variant of what is shown above, so be sure to use the appropriate file by double-checking the path. Testing your OpenCV 3.3 install
Now that we’ve got OpenCV 3.3 installed and linked, let’s do a quick sanity test to see if things work:
$ python
>>> import cv2
>>> cv2.__version__
'3.3.0'
Make sure you are in the dl4cv virtual environment before firing up Python (workon dl4cv ). When you print out the version, it should match the version of OpenCV that you installed (in our case, OpenCV 3.3.0 ). That’s it — assuming you didn’t encounter an import error, you’re ready to go on to Step #4 where we will install Keras. Step #4: Install Keras
For this step, be sure that you are in the dl4cv environment by issuing the workon dl4cv command. Then install our various Python computer vision, image processing, and machine learning libraries:
$ pip install scipy matplotlib pillow
$ pip install imutils h5py requests progressbar2
$ pip install scikit-learn scikit-image
Next, install Tensorflow:
$ pip install tensorflow
Notice how we are using the CPU version of TensorFlow. I will be covering the GPU version in a separate tutorial. Installing Keras is extremely simple, thanks to pip :
$ pip install keras
Again, do this in the dl4cv virtual environment. You can test our Keras install from a Python shell:
$ python
>>> import keras
Using TensorFlow backend.
>>>
You should see that Keras has been imported with no errors and the TensorFlow backend is being used. |
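If you want a slightly deeper sanity check than a bare import, you can build and compile a tiny throwaway model. This is an optional extra of my own (not part of the original post), and the layer sizes are arbitrary:
$ python
>>> from keras.models import Sequential
>>> from keras.layers import Dense
>>> model = Sequential()
>>> model.add(Dense(8, input_dim=4, activation="relu"))
>>> model.add(Dense(3, activation="softmax"))
>>> model.compile(loss="categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
>>> model.summary()
If the model compiles and the summary prints without errors, Keras and the TensorFlow backend are wired up correctly.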
https://pyimagesearch.com/2017/09/25/configuring-ubuntu-for-deep-learning-with-python/ | Before you wrap up the install tutorial take a second to familiarize yourself with the ~/.keras/keras.json file:
{
"image_data_format": "channels_last",
"backend": "tensorflow",
"epsilon": 1e-07,
"floatx": "float32"
}
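If you would rather verify these settings programmatically than by eye, here is a small optional snippet of my own (it assumes the config file lives in the default ~/.keras/ location):
$ python
>>> import json, os
>>> cfg = json.load(open(os.path.expanduser("~/.keras/keras.json")))
>>> cfg["image_data_format"], cfg["backend"]
('channels_last', 'tensorflow')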
Ensure that image_data_format is set to channels_last and backend is tensorflow . Congratulations! You are now ready to begin your Deep learning for Computer Vision with Python journey. Summary
In today’s blog post, I demonstrated how to set up your deep learning environment on an Ubuntu machine using only the CPU. Configuring your development environment is half the battle when it comes to learning new techniques, algorithms, and libraries. If you’re interested in studying deep learning in more detail, be sure to take a look at my new book, Deep Learning for Computer Vision with Python. The next few blog posts in this series will cover alternative environments including macOS and Ubuntu (with GPU support). Of course, if you’re interested in pre-configured deep learning development environments, take a look at my Ubuntu virtual machine and Amazon EC2 instance. |
https://pyimagesearch.com/2017/09/23/getting-started-deep-learning-computer-vision-python/ |
This blog post is intended for readers who have purchased a copy of my new book, Deep Learning for Computer Vision with Python. Inside this tutorial you’ll learn how to:
Download the books, code, datasets, and any extras associated with your purchase. Obtain your email receipt and invoice. Access the companion website associated with Deep Learning for Computer Vision with Python. Post an issue, submit a bug, or report a typo using the companion website. Reactivate an expired download link. If you have any other questions related to the book, please send me an email or use the contact form. Getting started with Deep Learning for Computer Vision with Python
Thank you for picking up a copy of Deep Learning for Computer Vision with Python! I appreciate your support of both myself and the PyImageSearch blog. Without you, PyImageSearch would not be possible. |
https://pyimagesearch.com/2017/09/23/getting-started-deep-learning-computer-vision-python/ | My goal is to ensure you receive a huge return on both your investment of time and finances. To ensure you get off on the right foot, this guide will help you get started with your brand new copy of Deep Learning for Computer Vision with Python. Downloading the files
After you successfully checkout and purchase your copy of Deep Learning for Computer Vision with Python you will be redirected to a page that looks similar to the one below:
Figure 1: The “Downloads Page” you can use to download the files associated with your purchase of Deep Learning for Computer Vision with Python. This is your purchase page and where you will be able to download your files. Left click on each file and your download will start. All files that start with the prefix SB are part of the Starter Bundle. Files that start with PB are part of the Practitioner Bundle. And finally, file names that start with IB are part of the ImageNet Bundle. File names that include *_Book.zip contain the PDF of the respective bundle. File names that include *_Videos.zip contain the videos for the bundle. |
https://pyimagesearch.com/2017/09/23/getting-started-deep-learning-computer-vision-python/ | File names including *_Code.zip contain your code/datasets associated for the bundle. For example, the file name SB_Code.zip contains all code/datasets associated with the Starter Bundle. The file name SB_Book.zip contains your PDF of the Starter Bundle. Finally, the VirtualMachine.zip file contains your pre-configured Ubuntu VirtualBox virtual machine. Note: At this time only the Starter Bundle contents have been released. The contents of the Practitioner Bundle and ImageNet Bundle will be released in October. If you close this tab in your browser and need to access it again, simply:
Open up your inbox. Find the email receipt (see section below). Click on the “View Purchase Online” link. From there you’ll be able to access the downloads page. |
https://pyimagesearch.com/2017/09/23/getting-started-deep-learning-computer-vision-python/ | Please go ahead and download these files at your earliest convenience. The service I use to handle payments and distribution of digital downloads automatically expires URLs after four days for security reasons. If your download ever expires, no problem at all, just refer to the “Reactivating an expired download” section below. Your email receipt and invoice
A few minutes after you purchase your copy of Deep Learning for Computer Vision with Python you’ll receive an email with the subject: “Your purchase from PyImageSearch is complete”. Inside this email you’ll find links to view/print your invoice as well as access the downloads page:
Figure 2: After purchasing your copy of Deep Learning for Computer Vision with Python you will receive an email containing your receipt/invoice and link to re-access the downloads page. If you did not receive this email, please ensure you are examining the inbox/email address you used when checking out. If you used PayPal you’ll want to check the email address associated with your PayPal account. If you still cannot find the email, no worries! Please just email me or send me a message via the contact form and include any pertinent information, such as:
The email address the purchase should be listed under. Your name. |
https://pyimagesearch.com/2017/09/23/getting-started-deep-learning-computer-vision-python/ | Any other relevant information you may have (purchase number, whether the payment was made via credit card or PayPal, if a friend/colleague purchased for you etc.). From there I can double-check the database and ensure you receive your email receipt and downloads link. Accessing the companion website
Your purchase of Deep Learning for Computer Vision with Python includes access to the supplementary material/companion website. To access the companion website:
Download the PDF of the Starter Bundle. Open the Starter Bundle to the “Companion Website” section (page 15 of the PDF). Follow the link to the companion website. Register your account on the companion website by creating a username and password. From there you’ll be able to access the companion website:
Figure 3: The Deep Learning for Computer Vision with Python companion website. Right now the companion website includes links to (1) configure your development environment and (2) report a bug. In the future this website will contain additional supplementary material. |
https://pyimagesearch.com/2017/09/23/getting-started-deep-learning-computer-vision-python/ | Posting an issue, bug report, or typo
The most important reason you should create your account on the companion website is to report an issue, bug, or typo. You can do this by clicking the “Issues” button in the header of the companion website:
Figure 4: If you encounter an error when using the book, please check the “Issues” page inside the companion website. You’ll then see a list of all open tickets. You can search these tickets by clicking the “Apply Filters” button. If no ticket matches your query, click “Create New Ticket” and fill out the required fields:
Figure 5: If no (already submitted) bug report matches your error, please create a new ticket so that I and others in the PyImageSearch community can help you. From there, the rest of the PyImageSearch community and I can help you with the problem. You can always email me regarding any issues as well; however, I may refer you to the companion website to post the bug so:
I can keep track of the issue and ensure your problem is resolved in a timely manner. Other readers can learn from the issue if they encounter it as well. Since Deep Learning for Computer Vision with Python is a brand new book, there are bound to be many questions. By using the issue tracker we can keep all bugs organized while ensuring the community can learn from other questions as well. |
https://pyimagesearch.com/2017/09/23/getting-started-deep-learning-computer-vision-python/ | Reactivating an expired download
The service I use to handle payments and distribution of digital downloads automatically expires URLs after four days for security reasons. If your URL ever expires, no problem at all — simply email me or send me a message and I can reactivate your purchase for you. Summary
In this tutorial you learned how to get started with your new purchase of Deep Learning for Computer Vision with Python. If you have a question that is not discussed in this guide, please shoot me an email or send me a message — I’ll be happy to discuss the problem with you. Otherwise, if your question is specifically related to a chapter, a piece of code, an error message, or anything pertinent to the actual contents of the book, please refer to the “Posting an issue, bug report, or typo” section above. Thank you again for purchasing a copy of Deep Learning for Computer Vision with Python. I feel incredibly excited and privileged to guide you on your journey to deep learning mastery. Without you, this blog would not be possible. Have a wonderful day and happy reading! P.S. If you haven’t already purchased a copy of Deep Learning for Computer Vision with Python, you can do so here. |
https://pyimagesearch.com/2016/11/21/raspbian-opencv-pre-configured-and-pre-installed/ |
Updated: September 16th, 2019
Ever since I wrote the first PyImageSearch tutorial on installing OpenCV + Python on the Raspberry Pi B+ back in February 2015 it has been my dream to offer a downloadable, pre-configured Raspbian .img file with OpenCV pre-installed. Since November 2016 (the original publish date of this post), this dream has been a reality. I am pleased to announce that the following products ship with my downloadable Raspbian .img files pre-configured and pre-installed:
Practical Python and OpenCV (Quickstart and Hardcopy Bundles)
Raspberry Pi for Computer Vision (Hobbyist, Hacker, and Complete Bundles)
There are two files included:
Raspbian3B_4B.img.gz (compatible with RPi 3B, 3B+, 4B [1GB, 2GB, and 4GB models])
RaspbianZeroW.img.gz (compatible with Pi Zero W)
All you have to do is download the .img files, flash the appropriate one to your SD card using BalenaEtcher, and boot your Pi. From there, you’ll have a complete Python + OpenCV development environment at your fingertips, all without the hassle of configuring, compiling, and installing OpenCV. To learn more about the Raspbian .img files, keep reading. Raspbian Buster + OpenCV 4 out-of-the-box
I went back to my recent tutorials on installing OpenCV on the Raspberry Pi and computed the amount of time it takes to perform each step. You know what I found? Even if you know exactly what you are doing it can take significant time to compile and install OpenCV on your Raspberry Pi:
Over 55 minutes on the Raspberry Pi 4B
Over 2.2 hours to compile on the Raspberry Pi 3B+
Over 14 hours to compile on the Raspberry Pi Zero W
In the past, I’ve emailed a sample of novice readers who successfully installed OpenCV on their Pi and asked how long it took them to complete the compile and installation process. Perhaps not surprisingly, I found that the amount of time for novice readers to install OpenCV on their Raspberry Pi 3B+ jumped nearly 4x to over 8.7 hours (even longer for the Zero W). |
https://pyimagesearch.com/2016/11/21/raspbian-opencv-pre-configured-and-pre-installed/ | I haven’t conducted a similar survey for the Raspberry Pi 4, but my guess is that it would take most people about 4 hours to configure their Raspberry Pi 4. Clearly, the barrier to entry for many PyImageSearch readers trying to learn OpenCV and computer vision is getting OpenCV itself installed on their Raspberry Pi. In an effort to help these readers get the most out of their Raspberry Pi, I have decided to release my own personal Raspbian .img files that have OpenCV pre-configured and pre-installed. By bundling the pre-configured Raspbian .img together with either (1) Practical Python and OpenCV, and/or (2) Raspberry Pi for Computer Vision my goal is to:
Jumpstart your computer vision education by skipping the tedious process of installing OpenCV + Python on your Raspberry Pi. Provide you with a book with the best introduction to the world of computer vision and image processing that you can possibly get. Of course, I will continue to create, support, and provide help to any PyImageSearch reader who is using the many free tutorials I offer on installing OpenCV + Python on the Raspberry Pi. Again, this pre-configured Raspbian .img is intended for PyImageSearch readers who want to save time and jumpstart their computer vision education. If this doesn’t sound like you, no worries, I totally understand — I’ll still be providing free tutorials to help you get OpenCV up and running on your Raspberry Pi. Just keep in mind that customers of mine receive priority support from me (while you’re on that page be sure to check out my other FAQs). Raspbian Buster + OpenCV 4 pre-configured and pre-installed
The rest of this document describes how to install and use the Raspbian pre-configured .img file included in your purchase of either:
Practical Python and OpenCV (Quickstart and Hardcopy Bundles)
Raspberry Pi for Computer Vision (Hobbyist, Hacker, and Complete Bundles)
At the end of this guide, you’ll also find answers to frequently asked questions regarding the Raspbian + OpenCV .img file. |
https://pyimagesearch.com/2016/11/21/raspbian-opencv-pre-configured-and-pre-installed/ | If you have a question that is not covered in FAQ, please send me a message. Download and unpack the archive
When you receive the link to your purchase, be sure to download the book, code, videos, and Raspbian. Each file is in the form of a .zip. The Raspbian.zip contains the preconfigured images and a README.txt file. Go ahead and unzip the files using your favorite unarchiving utility (7zip, Keka, etc.). There is no need to extract the included .gz files since we will flash with them directly. After you unzip Raspbian.zip your folder should look like this:
Figure 1: After downloading the Raspbian.zip file, unpack it to obtain the .img.gz file that you’ll flash to your SD card directly with BalenaEtcher. Write OS image to a 32GB microSD card using BalenaEtcher
This Raspbian .img will work only on 32GB microSD cards. The .imgs are too large for 8GB or 16GB cards. To my knowledge, the Raspberry Pi does not support 64GB+ microSD cards. |
https://pyimagesearch.com/2016/11/21/raspbian-opencv-pre-configured-and-pre-installed/ | I recommend the high quality Sandisk 32GB 98MB/s cards. They are available at Amazon and many online distributors. To write the pre-configured Raspbian .img to your card simply follow the official Raspberry Pi documentation. The recommended tool is BalenaEtcher (compatible with Mac, Linux, and Windows). BalenaEtcher can handle compressed files such as .gz (no need to extract the .img.gz before loading into Etcher). Figure 2: Flashing your pre-configured Raspbian .img with BalenaEtcher for your Raspberry Pi. Booting your Pi for the first time
After writing the Raspbian .img to your card, insert the card into your Pi and boot it up. The username is pi and the password is raspberry. On the first boot, your Raspbian filesystem needs to be expanded to fit the SD card. This means that you have to run raspi-config => Advanced => Expand Filesystem manually. |
https://pyimagesearch.com/2016/11/21/raspbian-opencv-pre-configured-and-pre-installed/ | After the expansion has completed your Pi will reboot and you will be able to use it as normal (the expansion of the filesystem is only necessary on the first boot). Here is a screenshot of the disk utilization on my Pi after it has been auto-expanded:
Figure 3: After booting my Raspberry Pi for the first time your filesystem will be expanded to utilize the entire disk. Notice that my entire 32GB card is available and 35% is in use. Default WiFi
By default, your Raspberry Pi will attempt to connect to a network named pyimagesearch with passphrase computervision . This is useful if you are in a pickle:
Maybe you just flashed your microSD and you need to get connected quickly. Perhaps you aren’t near your typical wireless and you want to hotspot your phone so that your Pi and laptop connect through your phone’s wireless network. Works with iPhone and Android. Maybe you forgot your keyboard/mouse/HDMI screen and you need to do everything via SSH and VNC, but you can’t easily connect to your Pi at the moment. Refer to this tutorial about remote development and connectivity with your Raspberry Pi. We’ve used this method to get connected many times in the field. |
https://pyimagesearch.com/2016/11/21/raspbian-opencv-pre-configured-and-pre-installed/ | It is convenient, but it is a security risk. While we do not recommend using this wireless network long term since this password is public (in nearly all deployment applications you should delete the network + password from your Pi), it is a great way to get connected if you just flashed a microSD. We also recommend changing the default password associated with the username of your Raspberry Pi. Using Python and OpenCV on your Raspberry Pi
In order to gain access to OpenCV 4 (and OpenCV 3) with Python 3 bindings we leverage Python virtual environments. Each Python virtual environment is totally independent from one another ensuring there are no dependency or versioning issues. In the remainder of this section, I explain (1) what Python virtual environments are and (2) how to access the Python 3 + OpenCV 3/4 environments. What are Python virtual environments? At the very core, Python virtual environments allow us to create isolated, independent environments for each of our Python projects. This implies that each project can have its own set of dependencies, regardless of which dependencies another project has. In the context of OpenCV, this allows us to have one virtual environment for OpenCV 4 and then another virtual environment for OpenCV 3. |
https://pyimagesearch.com/2016/11/21/raspbian-opencv-pre-configured-and-pre-installed/ | Furthermore, we can have Intel OpenVINO and Google Coral virtual environments. For a detailed look at Python virtual environments please refer to this tutorial. Python 2.7 support is deprecated
On January 1, 2020, Python.org will no longer be updating Python 2.7 (that goes for security updates too). Read Python 2.7’s sunset announcement here. PyImageSearch officially no longer supports Python 2.7. All future code is Python 3-compatible only. What virtual environments are on the .imgs? The Raspberry Pi 3B/3B+/4B .img contains the following environments:
py3cv4 : Python 3.7 and OpenCV 4.1.1
py3cv3 : Python 3.7 and OpenCV 3.4.7
openvino : Python 3.7 and OpenCV 4.1.1-openvino (OpenVINO is an Intel deep learning + hardware-optimized toolkit by Intel)
coral : Python 3.7 and OpenCV 4.1.1
gopigo : Python 3.7 and OpenCV 4.1.1
The Raspberry Pi Zero W .img contains the following environments:
py3cv4 : Python 3.7 and OpenCV 4.1.1
py3cv3 : Python 3.7 and OpenCV 3.4.7
Accessing a virtual environment
There are two ways to access our virtual environments on the Raspbian .imgs. Option 1: Use the workon command
If, for example, you desire to use the Python 3 + OpenCV 4.1.1 environment simply use the workon command and the environment name:
$ workon py3cv4
(py3cv4) $
Notice that the bash prompt is then preceded with the environment name in parentheses. Note: The OpenVINO environment requires that you use the Option 2 method below. |
https://pyimagesearch.com/2016/11/21/raspbian-opencv-pre-configured-and-pre-installed/ | Option 2: Use the source command
You can also just use the following command with the start scripts located in your home directory:
$ source ~/start_py3cv4.sh
Starting Python 3.7 with OpenCV 4.1.1 bindings...
(py3cv4) $
If you use OpenVINO, an additional Intel-provided script will be called automatically via the “start” script:
$ source ~/start_openvino.sh
Starting Python 3.7 with OpenCV-OpenVINO 4.1.1 bindings...
[setupvars.sh] OpenVINO environment initialized
(py3cv4) $
Your terminal will look similar to this (I’m SSH’ed into my Raspberry Pi from macOS):
Figure 4: Starting the OpenVINO environment for the Movidius NCS on a Raspberry Pi pre-configured .img. Executing code from my books on your Raspberry Pi
There are multiple methods to access the source code for Practical Python and OpenCV or Raspberry Pi for Computer Vision on your Pi. The first is to use Chromium, Raspbian’s built-in web browser, to download the .zip archive(s):
Figure 5: Downloading the source code from Practical Python and OpenCV using the Raspberry Pi web browser. Simply download the .zip directly to your Pi. If the code currently resides on your laptop/desktop, you may also use your favorite SFTP/FTP client and transfer the code from your system to your Pi:
Figure 6: Utilize a SFTP/FTP client to transfer the Practical Python and OpenCV code from your system to the Raspberry Pi. Or you may want to manually write the code on the Pi using the built-in text editor as you follow along with the book:
Figure 7: Using the built-in text editor that ships with the Raspberry Pi to write code. I would suggest either downloading the book’s source code via web browser or using SFTP/FTP as this also includes the datasets utilized in the book as well. However, manually coding along is a great way to learn and I highly recommend it as well! For more tips on how to work remotely with your Raspberry Pi, be sure to read my Remote development on the Raspberry Pi blog post. Frequently Asked Questions (FAQ)
In this section, I detail the answers to frequently asked questions regarding the Raspberry Pi .img file. |
https://pyimagesearch.com/2016/11/21/raspbian-opencv-pre-configured-and-pre-installed/ | Which Raspbian images are compatible with which respective hardware? Here is the compatibility listing:
Raspbian3B_4B.img.gz :
Raspberry Pi 4B (1GB, 2GB, and 4GB models)
Raspberry Pi 3B+
Raspberry Pi 3B
RaspbianZeroW.img.gz :
Raspberry Pi Zero W
What if I want to install OpenCV + Python on my Raspberry Pi by myself? By all means, I encourage you to do so. It’s a great exercise and you’ll learn a lot about the Linux environment. I would suggest you follow one of my many free tutorials on installing OpenCV + Python on your Raspberry Pi. Again, this pre-configured Raspbian image is intended for readers who want to skip the install process and jumpstart their education. How long will it take to install Python + OpenCV by hand? I’ve run the numbers and even if you know exactly what you are doing it will take a bare minimum of 55 minutes to compile and install OpenCV on a Raspberry Pi 4 and approximately 14 hours on the Raspberry Pi Zero W.
If you have never installed OpenCV before, or you are not familiar with Linux-based environments, those times can easily jump to several multiples of the numbers above, based on my survey of novice readers who successfully installed OpenCV on their Raspberry Pi. In fact, to install everything on the Raspberry Pi Zero W including 2 environments (2 compiles of OpenCV) it took approximately 6 days (including overnight compiles). It really comes down to how much you value your time and how quickly you want to get started learning computer vision. |
https://pyimagesearch.com/2016/11/21/raspbian-opencv-pre-configured-and-pre-installed/ | I always encourage you to use my free tutorials on installing OpenCV on the Raspberry Pi, but if you want to save yourself time (and headaches), then definitely consider going with the pre-configured Raspbian .img. Which Practical Python and OpenCV bundles is the Raspbian image included in? The pre-configured Raspbian image is included in both the Quickstart Bundle and Hardcopy Bundle of Practical Python and OpenCV. The pre-configured Raspbian image is not included in the Basic Bundle. Which Raspberry Pi for Computer Vision bundles is the Raspbian image included in? The pre-configured Raspbian image is included in all bundles: Hobbyist, Hacker, and Complete Bundles. After installing your distribution of Raspbian, how do I access Python + OpenCV? See the “Using Python and OpenCV on your Raspberry Pi” section above. Is Wolfram’s Mathematica included in your Raspbian distribution? No, I am not legally allowed to distribute a modified version of Raspbian (that is part of a product) with Mathematica installed. |
https://pyimagesearch.com/2016/11/21/raspbian-opencv-pre-configured-and-pre-installed/ | How did you reduce the size of the Raspbian image file? To start, I removed unneeded software such as Wolfram’s Mathematica and LibreOffice. Removing these two pieces of software alone saved nearly 1GB of space. From there, the size of the main partition was reduced by zeroing all bits and compressing the file to .gz format. Which Operating System version of Raspbian is included? The latest .imgs run Raspbian Buster. I have your previous StretchOS image. Why won’t the workon command work? My previous .imgs did not have the virtualenvwrapper settings in the ~/.bashrc (they were placed in ~/.profile ). Therefore, you need to either (1) copy the virtualenvwrapper settings from ~/.profile to ~/.bashrc , or (2) source the profile first via source ~/.profile. |
https://pyimagesearch.com/2016/11/21/raspbian-opencv-pre-configured-and-pre-installed/ | What Python packages are installed on the Raspberry Pi? After accessing any virtual environment (see “Accessing a virtual environment”) run pip freeze to see a full list of Python packages installed. In short, I have included all necessary Python packages you will need to be successful executing the examples in Raspberry Pi for Computer Vision and Practical Python and OpenCV, including OpenCV, NumPy, SciPy, scikit-learn, scikit-image, mahotas, and many others. Click the following image to enlarge it so you can see all the packages:
Figure 8: A listing of the packages installed in each of the environments on the Raspberry Pi Raspbian .img. Where can I learn more about Python virtual environments? My favorite resource and introduction to Python virtual environments can be found here. I also discuss them in the first half of this blog post. Where can I purchase a copy of Practical Python and OpenCV? To purchase your copy of Practical Python and OpenCV, simply click here, select your bundle (I recommend either the Quickstart Bundle or the Hardcopy Bundle), and checkout. Where can I purchase a copy of Raspberry Pi for Computer Vision? |
https://pyimagesearch.com/2016/11/21/raspbian-opencv-pre-configured-and-pre-installed/ | To purchase your copy of Raspberry Pi for Computer Vision, simply click here, select your bundle (I recommend either the Hacker Bundle or the Complete Bundle if you really want to master the Raspberry Pi), and checkout. Can I purchase the .img as a standalone product? The .img files are intended to accompany my books as added benefits. I would recommend purchasing a book to gain access to the .img. I have another question. If you have a question not listed in this FAQ, please send me a message. Sound good? Figure 9: Purchase (1) Raspberry Pi for Computer Vision, or (2) Practical Python and OpenCV + Case Studies to get ahold of the pre-configured Raspbian .img files! If you’re ready to put the .img to use on all of the Raspberry Pis you own, just purchase one of my books that come with the .img. To purchase your copy of Raspberry Pi for Computer Vision, just click here. |
https://pyimagesearch.com/2016/11/21/raspbian-opencv-pre-configured-and-pre-installed/ | All bundles come with the pre-configured Raspbian .img files. Or to purchase your copy of Practical Python and OpenCV, just use this link. You will find the pre-configured Raspbian .img files inside both the Quickstart Bundle and the Hardcopy Bundle (the Basic Bundle does not include the Raspbian .img). To see all the products I offer, click here. |
https://pyimagesearch.com/2015/02/04/train-custom-image-classifiers-object-detectors-object-trackers/ |
Did you watch the Super Bowl this past weekend? I did. Kind of. I spent Super Bowl Sunday (which is practically a holiday in the United States) at my favorite Indian bar. Pounding Kingfisher beers. Savoring a delicious dish of Tandoori chicken…
…all while hacking up a storm on my laptop and coding up some custom image classifiers, object detectors, & object trackers for the PyImageSearch Gurus computer vision course. And oh yeah — the Super Bowl game was on in the background. Can’t say I watched much of it though, I was too busy with my nose in my laptop, looking like a deranged programmer! I’m sure it was a peculiar sight to see. Anyway, I’m posting this article today because I wanted to share with you the results of my Indian beer fueled hacking binge…a preview of the image classifiers, object detectors, and object trackers we’ll be building inside PyImageSearch Gurus. |
https://pyimagesearch.com/2015/02/04/train-custom-image-classifiers-object-detectors-object-trackers/ | Let’s start off with something simple, training your own face detector:
Figure 1: Inside PyImageSearch Gurus you’ll learn how to train your own custom object detector to detect faces in images. Here you can see that I have trained my custom object detector using the Histogram of Oriented Gradients descriptor and a Linear SVM to detect faces from the cast of Back to the Future. And here I have trained another custom object detector using Histogram of Oriented Gradients to detect the presence of a car in an image:
Figure 2: Learn how to detect a car in an image inside the PyImageSearch Gurus course. Let’s do something a little more complicated now. In the below image I have trained a Pyramid of Bag of Visual Words (PBOW) and a Pyramid of Histogram of Oriented Gradients (PHOG) to recognize images from the popular CALTECH-101 dataset:
Figure 3: Learn how to train an image classifier on the popular 101 category CALTECH dataset. As another example, I have trained a classifier to tell the difference between Fido and Mrs. Whiskers on the ASIRRA Cats vs. Dogs dataset:
Figure 4: You’ll learn how to train a custom image classifier to recognize the difference between cats and dogs. Lastly, I utilized keypoint detection, local invariant descriptors, and keypoint matching to track the cover of a video game box in a real-time video stream:
Figure 5: Training your own custom object tracker for use in real-time video is a breeze. Discover how inside PyImageSearch Gurus. Really cool, right? These techniques aren’t magic — and I guarantee that you can learn them yourself. |
https://pyimagesearch.com/2015/02/04/train-custom-image-classifiers-object-detectors-object-trackers/ | Join PyImageSearch Gurus before the door closes…
As you can see, we’ll be learning a lot of actionable skills inside the PyImageSearch Gurus course. From custom image classifiers, to object detectors, to real-time object tracking, you’re guaranteed to become a computer vision master inside the PyImageSearch Gurus course. So if you’re interested in uncovering these techniques and becoming a computer vision master, I would definitely suggest joining me inside PyImageSearch Gurus! Once the Kickstarter ends you will not be able to enroll in PyImageSearch Gurus again until August! This is your chance! Be sure to get in now…
There are still a few seats left open in the PyImageSearch Gurus course, so definitely act now and claim your spot! |
https://pyimagesearch.com/2014/06/26/announcing-case-studies-solving-real-world-problems-computer-vision/ |
I have some big news to announce today…
Besides writing a ton of blog posts about computer vision, image processing, and image search engines, I’ve been behind the scenes, working on a second book. And you may be thinking, hey, didn’t you just finish up Practical Python and OpenCV? Yep. I did. Now, don’t get me wrong. The feedback for Practical Python and OpenCV has been amazing. And it’s done exactly what I thought it would — teach developers, programmers, and students just like you the basics of computer vision in a single weekend. But now that you know the fundamentals of computer vision and have a solid starting point, it’s time to move on to something more interesting…
Let’s take your knowledge of computer vision and solve some actual, real world problems. What type of problems? I’m happy you asked. |
https://pyimagesearch.com/2014/06/26/announcing-case-studies-solving-real-world-problems-computer-vision/ | Read on and I’ll show you. What does this book cover? This book covers five main topics related to computer vision in the real world. Check out each one below, along with a screenshot of each. #1. Face detection in photos and video
Figure 1: Learn how to use OpenCV and Python to detect faces in images. By far, the most requested tutorial of all time on this blog has been “How do I find faces in images?” If you’re interested in face detection and finding faces in images and video, then this book is for you. #2. Object tracking in video
Figure 2: My Case Studies book will show you how to track objects in video as they move along the screen. |
https://pyimagesearch.com/2014/06/26/announcing-case-studies-solving-real-world-problems-computer-vision/ | Another common question I get asked is “How can I track objects in video?” In this chapter, I discuss how you can use the color of an object to track its trajectory as it moves in the video. #3. Handwriting recognition with Histogram of Oriented Gradients (HOG)
Figure 3: Learn how to use HOG and a Linear Support Vector Machine to recognize handwritten text. This is probably my favorite chapter in the entire Case Studies book, simply because it is so practical and useful. Imagine you’re at a bar or pub with a group of friends, when all of a sudden a beautiful stranger comes up to you and hands you their phone number written on a napkin. Do you stuff the napkin in your pocket, hoping you don’t lose it? Do you take out your phone and manually create a new contact? Well you could. Or. |
https://pyimagesearch.com/2014/06/26/announcing-case-studies-solving-real-world-problems-computer-vision/ | You could take a picture of the phone number and have it automatically recognized and stored safely. In this chapter of my Case Studies book, you’ll learn how to use the Histogram of Oriented Gradients (HOG) descriptor and Linear Support Vector Machines to classify digits in an image. #4. Plant classification using color histograms and machine learning
Figure 4: Learn how to apply machine learning techniques to classify the species of flowers. A common use of computer vision is to classify the contents of an image. In order to do this, you need to utilize machine learning. This chapter explores how to extract color histograms using OpenCV and then train a Random Forest Classifier using scikit-learn to classify the species of a flower. #5. Building an Amazon.com book cover search
Figure 5: Applying keypoint detection and SIFT descriptors to recognize and identify book covers. Three weeks ago, I went out to have a few beers with my friend Gregory, a hot shot entrepreneur in San Francisco who has been developing a piece of software to instantly recognize and identify book covers — using only an image. |
https://pyimagesearch.com/2014/06/26/announcing-case-studies-solving-real-world-problems-computer-vision/ | Using this piece of software, users could snap a photo of books they were interested in, and then have them automatically added to their cart and shipped to their doorstep — at a substantially cheaper price than your standard Barnes & Noble! Anyway, I guess Gregory had one too many beers, because guess what? He clued me in on his secrets. Gregory begged me not to tell…but I couldn’t resist. In this chapter you’ll learn how to utilize keypoint extraction and SIFT descriptors to perform keypoint matching. The end result is a system that can recognize and identify the cover of a book in a snap…of your smartphone! All of these examples are covered in detail, from front to back, with lots of code. By the time you finish reading my Case Studies book, you’ll be a pro at solving real world computer vision problems. So who is this book for? This book is for people like yourself who have a solid foundation of computer vision and image processing. |
https://pyimagesearch.com/2014/06/26/announcing-case-studies-solving-real-world-problems-computer-vision/ | Ideally, you have already read through Practical Python and OpenCV and have a strong grasp on the basics (if you haven’t had a chance to read Practical Python and OpenCV, definitely pick up a copy). I consider my new Case Studies book to be the next logical step in your journey to learn computer vision. You see, this book focuses on taking the fundamentals of computer vision, and then applying them to solve, actual real-world problems. So if you’re interested in applying computer vision to solve real world problems, you’ll definitely want to pick up a copy. Reserve your spot in line to receive early access
If you sign up for my newsletter, I’ll be sending out previews of each chapter so you can see firsthand how you can use computer vision techniques to solve real-world problems. But if you simply can’t wait and want to lock in your spot in line to receive early access to my new Case Studies eBook, just click here. Sound good? Sign up now to receive an exclusive pre-release deal when the book launches. |
Enter your email address below to join the PyImageSearch Newsletter and download my FREE 17-page Resource Guide PDF on Computer Vision, OpenCV, and Deep Learning. Join the Newsletter! |
https://pyimagesearch.com/2021/02/01/opencv-histogram-equalization-and-adaptive-histogram-equalization-clahe/ | Click here to download the source code to this post
In this tutorial, you will learn to perform both histogram equalization and adaptive histogram equalization with OpenCV. Histogram equalization is a basic image processing technique that adjusts the global contrast of an image by updating the image histogram’s pixel intensity distribution. Doing so enables areas of low contrast to obtain higher contrast in the output image. Essentially, histogram equalization works by:
1. Computing a histogram of image pixel intensities
2. Evenly spreading out and distributing the most frequent pixel values (i.e., the ones with the largest counts in the histogram)
3. Giving a linear trend to the cumulative distribution function (CDF)
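To make these three steps concrete, here is a minimal NumPy sketch of the idea, assuming gray is an 8-bit grayscale NumPy array (an illustration only; throughout this tutorial we will rely on OpenCV’s built-in functions rather than this hand-rolled version):

import numpy as np

def equalize_manual(gray):
    # step 1: compute the histogram of pixel intensities (256 bins for an 8-bit image)
    hist = np.bincount(gray.ravel(), minlength=256)
    # steps 2 and 3: build the cumulative distribution function (CDF) and rescale
    # it to [0, 255], which pushes the output CDF toward a linear trend
    cdf = hist.cumsum().astype("float")
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255
    # remap every pixel through the rescaled CDF
    return cdf[gray].astype("uint8")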
The result of applying histogram equalization is an image with higher global contrast. We can further improve histogram equalization by applying an algorithm called Contrast Limited Adaptive Histogram Equalization (CLAHE), resulting in higher quality output images. Other than photographers using histogram equalization to correct under/over-exposed images, the most widely used histogram equalization application can be found in the medical field. You’ll typically see histogram equalization applied to X-ray scans and CT scans to improve the radiograph’s contrast. Doing so helps doctors and radiologists better interpret the scans and make an accurate diagnosis. By the end of this tutorial, you will be able to successfully apply both basic histogram equalization and adaptive histogram equalization to images with OpenCV. To learn to use histogram equalization and adaptive histogram equalization with OpenCV, just keep reading. |
https://pyimagesearch.com/2021/02/01/opencv-histogram-equalization-and-adaptive-histogram-equalization-clahe/ | Looking for the source code to this post? Jump Right To The Downloads Section
OpenCV Histogram Equalization and Adaptive Histogram Equalization (CLAHE)
In the first part of this tutorial, we’ll discuss what histogram equalization is and how we can apply histogram equalization with OpenCV. From there, we’ll configure our development environment and then review the project directory structure for this guide. We’ll then implement two Python scripts:
simple_equalization.py: Performs basic histogram equalization using OpenCV’s cv2.equalizeHist function.
adaptive_equalization.py: Uses OpenCV’s cv2.createCLAHE method to perform adaptive histogram equalization.
We’ll wrap up this guide with a discussion of our results. What is histogram equalization? Histogram equalization is a basic image processing technique that can improve an image’s overall contrast. Applying histogram equalization starts by computing the histogram of pixel intensities in an input grayscale/single-channel image:
Figure 1: Left: Our original input grayscale image. Right: Computing the histogram of the grayscale image. Notice how our histogram has numerous peaks, indicating there are a good number of pixels binned to those respective buckets. |
https://pyimagesearch.com/2021/02/01/opencv-histogram-equalization-and-adaptive-histogram-equalization-clahe/ | With histogram equalization, our goal is to spread these pixels to buckets that don’t have as many pixels binned to them. Mathematically, what this means is that we’re attempting to apply a linear trend to our cumulative distribution function (CDF):
Figure 2: The histogram equalization goal gives the output image a linear CDF (image source). The before and after histogram equalization application can be seen in Figure 3:
Figure 3: Left: Original input image before applying histogram equalization. Right: Output image after applying histogram equalization. Notice how the input image’s contrast has improved significantly but at the expense of also boosting the contrast of the noise in the input image. That raises the question:
Is it possible to improve image contrast without also boosting noise at the same time? The answer is “Yes,” you just need to apply adaptive histogram equalization. With adaptive histogram equalization, we divide an input image into an M x N grid. We then apply equalization to each cell in the grid, resulting in a higher quality output image:
Figure 4: Left: Basic histogram equalization. Right: Adaptive histogram equalization. |
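To build intuition for what “adaptive” means here, the sketch below naively equalizes each tile of an 8 x 8 grid on its own. Note that this is a simplification for illustration only: real CLAHE also clips each tile’s histogram using the clip limit and bilinearly interpolates between neighboring tiles to avoid visible seams, and the snippet ignores any leftover border pixels when the image dimensions are not divisible by the grid size (it assumes gray is a grayscale image and cv2 has already been imported):

# naive tile-by-tile equalization, for intuition only (not true CLAHE)
tilesY, tilesX = 8, 8
(h, w) = gray.shape[:2]
(stepY, stepX) = (h // tilesY, w // tilesX)
output = gray.copy()
for ty in range(tilesY):
    for tx in range(tilesX):
        (y0, x0) = (ty * stepY, tx * stepX)
        (y1, x1) = (y0 + stepY, x0 + stepX)
        # equalize this tile using only its local histogram
        output[y0:y1, x0:x1] = cv2.equalizeHist(gray[y0:y1, x0:x1])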
https://pyimagesearch.com/2021/02/01/opencv-histogram-equalization-and-adaptive-histogram-equalization-clahe/ | The downside is that adaptive histogram equalization is by definition more computationally complex (but given modern hardware, both implementations are still quite speedy). How can we use OpenCV for histogram equalization? Figure 5: OpenCV provides two functions for histogram equalization: cv2.equalizeHist and cv2.createCLAHE. OpenCV includes implementations of both basic histogram equalization and adaptive histogram equalization through the following two functions:
cv2.equalizeHist
cv2.createCLAHE
Applying the cv2.equalizeHist function is as simple as converting an image to grayscale and then calling cv2.equalizeHist on it:
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
equalized = cv2.equalizeHist(gray)
Performing adaptive histogram equalization requires that we:
Convert the input image to grayscale/extract a single channel from it
Instantiate the CLAHE algorithm using cv2.createCLAHE
Call the .apply method on the CLAHE object to apply histogram equalization
It’s a lot easier than it sounds, requiring only a few lines of code:
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
equalized = clahe.apply(gray)
Notice that we supply two parameters to cv2.createCLAHE:
clipLimit: This is the threshold for contrast limiting
tileGridSize: Divides the input image into M x N tiles and then applies histogram equalization to each local tile
You will get practice using both cv2.equalizeHist and cv2.createCLAHE in the remainder of this guide. Configuring your development environment
To learn how to apply histogram equalization with OpenCV, you need to have the OpenCV library installed. Luckily, OpenCV is pip-installable:
$ pip install opencv-contrib-python
If you need help configuring your development environment for OpenCV, I highly recommend that you read my pip install OpenCV guide — it will have you up and running in a matter of minutes. Having problems configuring your development environment? Figure 6: Having trouble configuring your development environment? Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch Plus — you’ll be up and running with this tutorial in a matter of minutes. |
https://pyimagesearch.com/2021/02/01/opencv-histogram-equalization-and-adaptive-histogram-equalization-clahe/ | All that said, are you:
Short on time?
Learning on your employer’s administratively locked system?
Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?
Ready to run the code right now on your Windows, macOS, or Linux system?
Then join PyImageSearch Plus today! Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides that are pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required. And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux! Project structure
Before we implement histogram equalization with OpenCV, let’s start by reviewing our project directory structure. Be sure to access the “Downloads” section of this tutorial to retrieve the source code and example images. From there, inspect the project directory structure:
$ tree . --dirsfirst
. ├── images
│ ├── boston.png
│ ├── dog.png
│ └── moon.png
├── adaptive_equalization.py
└── simple_equalization.py
1 directory, 5 files
We have two Python scripts that we’ll be reviewing today:
simple_equalization.py: Applies basic histogram equalization with OpenCV.
adaptive_equalization.py: Uses the CLAHE algorithm to perform adaptive histogram equalization. |
https://pyimagesearch.com/2021/02/01/opencv-histogram-equalization-and-adaptive-histogram-equalization-clahe/ | Our images directory contains example images to which we will apply histogram equalization. Implementing standard histogram equalization with OpenCV
With our project directory structure reviewed, let’s move on to implementing basic histogram equalization with OpenCV. Open the simple_equalization.py file in your project folder, and let’s get to work:
# import the necessary packages
import argparse
import cv2
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", type=str, required=True,
help="path to the input image")
args = vars(ap.parse_args())
Lines 2 and 3 import our required Python packages while Lines 6-9 parse our command line arguments. We only need a single argument here, --image, which is the path to our input image on disk, where we wish to apply the histogram equalization. With the command line arguments parsed, we can move on to the next step:
# load the input image from disk and convert it to grayscale
print("[INFO] loading input image...")
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# apply histogram equalization
print("[INFO] performing histogram equalization...")
equalized = cv2.equalizeHist(gray)
Line 13 loads our image from disk, while Line 14 converts our image from RGB to grayscale. Line 18 performs basic histogram equalization using the cv2.equalizeHist function. The only required argument we must pass in is the grayscale/single-channel image. Note: When performing histogram equalization with OpenCV, we must supply a grayscale/single-channel image. If we try to pass in a multi-channel image, OpenCV will throw an error. |
https://pyimagesearch.com/2021/02/01/opencv-histogram-equalization-and-adaptive-histogram-equalization-clahe/ | To perform histogram equalization on a multi-channel image, you would need to (1) split the image into its respective channels, (2) equalize each channel, and (3) merge the channels back together. A quick sketch of that idea follows below.
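For completeness, a minimal (and admittedly naive) channel-by-channel version might look like the sketch below. Keep in mind that equalizing the B, G, and R channels independently can shift the image’s colors; converting to a luminance/chroma color space (e.g., LAB) and equalizing only the lightness channel is usually the better choice:

# naive per-channel equalization of a BGR image (may introduce color shifts)
(B, G, R) = cv2.split(image)
B = cv2.equalizeHist(B)
G = cv2.equalizeHist(G)
R = cv2.equalizeHist(R)
equalizedBGR = cv2.merge([B, G, R])

Back in simple_equalization.py, the final step is to show our output images: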
# show the original grayscale image and equalized image
cv2.imshow("Input", gray)
cv2.imshow("Histogram Equalization", equalized)
cv2.waitKey(0)
Here, we are displaying our input gray image along with the histogram equalized image. OpenCV histogram equalization results
We are now ready to apply basic histogram equalization with OpenCV! Be sure to access the “Downloads” section of this tutorial to retrieve the source code and example images. From there, open a terminal and execute the following command:
$ python simple_equalization.py --image images/moon.png
[INFO] loading input image...
[INFO] performing histogram equalization...
Figure 7: Applying histogram equalization with OpenCV, we increase the global contrast in the output image. On the top, we have the original input image of the moon. The bottom shows the output after applying histogram equalization. Notice that we have boosted the image’s global contrast. Let’s try a different image, this one of an under-exposed photograph:
$ python simple_equalization.py --image images/dog.png
[INFO] loading input image...
[INFO] performing histogram equalization...
Figure 8: The original image (left) appears washed out. We can improve the contrast by applying histogram equalization with OpenCV (right). |
https://pyimagesearch.com/2021/02/01/opencv-histogram-equalization-and-adaptive-histogram-equalization-clahe/ | The dog (left) appears washed out due to underexposure. By applying histogram equalization (right), we correct this effect and improve the dog’s contrast. The following image highlights one of the limitations of global contrast adjustment via histogram equalization:
$ python simple_equalization.py --image images/boston.png
[INFO] loading input image...
[INFO] performing histogram equalization...
Figure 9: The original image (left) is very dark. It’s hard to see the faces of my wife and me. After applying histogram equalization with OpenCV (right), our faces become more visible — and you can even see another couple that was “hidden” in the shadows behind us! The image on the left shows my wife and me in Boston over the Christmas holiday a few years ago. Due to the auto-adjustment on the camera, our faces are quite dark, and it’s hard to see us. By applying histogram equalization (right), we can see that not only are our faces visible, but we can see another couple sitting behind us! Without histogram equalization, you may have missed the other couple. However, our output is not entirely desirable. |
https://pyimagesearch.com/2021/02/01/opencv-histogram-equalization-and-adaptive-histogram-equalization-clahe/ | To start, the fire in the fireplace is totally washed out. And if you study our faces, particularly mine, you’ll see that portions of my forehead are now totally washed out. To improve our results, we need to apply adaptive histogram equalization. Implementing adaptive histogram equalization with OpenCV
At this point, we’ve seen some of the limitations of basic histogram equalization. While a bit more computationally expensive, adaptive histogram equalization can yield better results than simple histogram equalization. But don’t take my word for it — you should see the results for yourself. Open the adaptive_equalization.py file in your project directory structure and insert the following code:
# import the necessary packages
import argparse
import cv2
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", type=str, required=True,
help="path to the input image")
ap.add_argument("-c", "--clip", type=float, default=2.0,
help="threshold for contrast limiting")
ap.add_argument("-t", "--tile", type=int, default=8,
help="tile grid size -- divides image into tile x time cells")
args = vars(ap.parse_args())
We only need two imports here, argparse for command line arguments and cv2 for our OpenCV bindings. We then have three command line arguments, one of which is required, the second two optional (but useful to tune and play with when experimenting with CLAHE):
--image: The path to our input image on disk, where we wish to apply histogram equalization.
--clip: The threshold for contrast limiting. You’ll typically want to leave this value in the range of 2-5. |
https://pyimagesearch.com/2021/02/01/opencv-histogram-equalization-and-adaptive-histogram-equalization-clahe/ | If you set the value too large, then effectively, what you’re doing is maximizing local contrast, which will, in turn, maximize noise (which is the opposite of what you want). Instead, try to keep this value as low as possible.
--tile: The tile grid size for CLAHE. Conceptually, what we are doing here is dividing our input image into tile x tile cells and then applying histogram equalization to each cell (with the additional bells and whistles that CLAHE provides).
Let’s now apply CLAHE with OpenCV:
# load the input image from disk and convert it to grayscale
print("[INFO] loading input image...")
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# apply CLAHE (Contrast Limited Adaptive Histogram Equalization)
print("[INFO] applying CLAHE...")
clahe = cv2.createCLAHE(clipLimit=args["clip"],
tileGridSize=(args["tile"], args["tile"]))
equalized = clahe.apply(gray)
Lines 17 and 18 load our input image from disk and convert it to grayscale, just like we did for basic histogram equalization. Lines 22 and 23 initialize our clahe object via the cv2.createCLAHE function. Here, we supply the clipLimit and our tileGridSize, which we provided via our command line arguments. A call to the .apply method applies adaptive histogram equalization to the gray image. The final step is to display the output images to our screen:
# show the original grayscale image and CLAHE output image
cv2.imshow("Input", gray)
cv2.imshow("CLAHE", equalized)
cv2.waitKey(0)
Here, we are displaying our input gray image along with the output equalized image from the CLAHE algorithm. Adaptive histogram equalization results
Let’s now apply adaptive histogram equalization with OpenCV! Access the “Downloads” section of this tutorial to retrieve the source code and example images. From there, open a shell and execute the following command:
$ python adaptive_equalization.py --image images/boston.png
[INFO] loading input image...
[INFO] applying CLAHE...
Figure 10: Using OpenCV to apply adaptive histogram equalization via the CLAHE algorithm. |
https://pyimagesearch.com/2021/02/01/opencv-histogram-equalization-and-adaptive-histogram-equalization-clahe/ | On the left, we have our original input image. We then apply adaptive histogram equalization on the right — compare these results to that of Figure 4, where we applied basic histogram equalization. Notice how adaptive histogram equalization has improved the contrast of the input image. My wife and I are more visible. The once near-invisible couple in the background can be seen. There are fewer artifacts on my forehead, etc. Histogram equalization suggestions
When building your own image processing pipelines and finding that histogram equalization should be applied, I suggest starting with simple histogram equalization using cv2.equalizeHist. But if you find that the results are poor and instead boost the input image’s noise, you should then try using adaptive histogram equalization through cv2.createCLAHE. Credits
I thank Aruther Cotse (University of Utah) for the fantastic report on using histograms for image processing. Cotse’s work inspired some of the example figures in this post. |
https://pyimagesearch.com/2021/02/01/opencv-histogram-equalization-and-adaptive-histogram-equalization-clahe/ | Additionally, I acknowledge the contributors to Wikipedia’s page on histogram equalization. If you’re interested in more mathematical details behind histogram equalization, be sure to refer to that page. The example moon.png image was obtained from this article on EarthSky, while the dog.png image came from this page. What's next? We recommend PyImageSearch University. Course information:
84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024
★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled
I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science? That’s not the case. |
https://pyimagesearch.com/2021/02/01/opencv-histogram-equalization-and-adaptive-histogram-equalization-clahe/ | All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught. If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. Inside PyImageSearch University you'll find:
✓ 84 courses on essential computer vision, deep learning, and OpenCV topics
✓ 84 Certificates of Completion
✓ 114+ hours of on-demand video
✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques
✓ Pre-configured Jupyter Notebooks in Google Colab
✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) ✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch
✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University
Summary
In this tutorial, you learned how to perform both basic histogram equalization and adaptive histogram equalization with OpenCV. |
https://pyimagesearch.com/2021/02/01/opencv-histogram-equalization-and-adaptive-histogram-equalization-clahe/ | Basic histogram equalization aims to improve the global contrast of an image by “spreading out” pixel intensities often used in the image. But while simple histogram equalization is easy to apply and computationally efficient, the problem is that it can increase noise. What would be basic noise that could be easily filtered out is now further contaminating the signal (i.e., the components of the image we want to process). If and when that happens, we can apply adaptive histogram equalization to obtain better results. Adaptive histogram equalization works by dividing an image into an M x N grid and then applying histogram equalization locally to each grid. The result is an output image that overall has higher contrast with (ideally) the noise still suppressed. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below! Download the Source Code and FREE 17-page Resource Guide
Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Download the code! |
https://pyimagesearch.com/2021/02/03/opencv-image-translation/ | Click here to download the source code to this post
In this tutorial, you will learn how to translate and shift images using OpenCV. Translation is the shifting of an image along the x- and y-axis. To translate an image using OpenCV, we must:
Load an image from disk
Define an affine transformation matrix
Apply the cv2.warpAffine function to perform the translation
This sounds like a complicated process, but as you will see, it can all be done in only two lines of code! To learn how to translate images with OpenCV, just keep reading. Looking for the source code to this post? Jump Right To The Downloads Section
OpenCV Image Translation
In the first part of this tutorial, we will discuss what a translation matrix is and how we can define it using OpenCV and NumPy. From there, we will configure our development environment and review our project directory structure. With our project directory structure reviewed, we will move on to implement a Python script to perform translation with OpenCV, opencv_translate.py. We will review this script in detail, along with our results generated by the script. By the end of this guide, you will understand how to perform image translation using OpenCV. |
https://pyimagesearch.com/2021/02/03/opencv-image-translation/ | Defining a translation matrix with OpenCV
To perform image translation with OpenCV, we first need to define a 2 x 3 matrix called an affine transformation matrix:
Figure 1: To translate an image with OpenCV, we must first construct an affine transformation matrix. For the purposes of translation, all we care about are the t_x and t_y values:
Negative values for the t_x value will shift the image to the left
Positive values for t_x shift the image to the right
Negative values for t_y shift the image up
Positive values for t_y will shift the image down
For example, let’s suppose we want to shift an image 25 pixels to the right and 50 pixels down. Our translation matrix would look like the following (implemented as a NumPy array):
M = np.float32([
[1, 0, 25],
[0, 1, 50]
])
Now, if we want to shift an image 7 pixels to the left and 23 pixels up, our translation matrix would look like the following:
M = np.float32([
[1, 0, -7],
[0, 1, -23]
])
And as a final example, let’s suppose we want to translate our image 30 pixels to the left and 12 pixels down:
M = np.float32([
[1, 0, -30],
[0, 1, 12]
])
As you can see, defining our affine transformation matrix for image translation is quite easy! And once our transformation matrix is defined, we can simply perform the image translation using the cv2.warpAffine function, like so:
shifted = cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))
We will see a complete example of defining our image translation matrix and applying the cv2.warpAffine function later in this guide. Configuring your development environment
To follow along with this guide, you need to have the OpenCV library installed on your system. Luckily, OpenCV is pip-installable:
$ pip install opencv-contrib-python
If you need help configuring your development environment for OpenCV, I highly recommend that you read my pip install OpenCV guide — it will have you up and running in a matter of minutes. Having problems configuring your development environment? Figure 2: Having trouble configuring your development environment? Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch Plus — you will be up and running with this tutorial in a matter of minutes. |
https://pyimagesearch.com/2021/02/03/opencv-image-translation/ | All that said, are you:
Short on time?
Learning on your employer’s administratively locked system?
Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?
Ready to run the code right now on your Windows, macOS, or Linux system?
Then join PyImageSearch Plus today! Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides that are pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required. And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux! Project structure
Before we can perform image translation with OpenCV, let’s first review our project directory structure:
$ tree . --dirsfirst
. ├── opencv_logo.png
└── opencv_translate.py
0 directories, 2 files
We have a single Python script, opencv_translate.py, which we will be reviewing in detail. This script will load the opencv_logo.png image from disk and then translate/shift it using the OpenCV library. Image translation with OpenCV
Translation is the shifting of an image along the x- and y-axis. |
https://pyimagesearch.com/2021/02/03/opencv-image-translation/ | Using translation, we can shift an image up, down, left, or right, along with any combination of the above. Mathematically, we define a translation matrix, M, that we can use to translate an image:
Figure 3: Defining an image translation matrix with OpenCV. This concept is better explained through some code:
# import the necessary packages
import numpy as np
import argparse
import imutils
import cv2
On Lines 2-5, we simply import the packages we will make use of. At this point, using NumPy, argparse, and cv2 should feel commonplace. However, I am introducing a new package here: imutils. This isn’t a package included in NumPy or OpenCV. Rather, it’s a library that I personally wrote containing a handful of “convenience” methods to more easily perform common tasks like translation, rotation, and resizing (and with less code). If you don’t already have imutils installed on your machine, you can install it with pip:
$ pip install imutils
Let’s now parse our command line arguments:
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", type=str, default="opencv_logo.png",
help="path to the input image")
args = vars(ap.parse_args())
We only need a single argument, --image, which points to the input image we want to load from disk and apply OpenCV translation operations to. By default, we will set the --image argument to be opencv_logo.png. |
https://pyimagesearch.com/2021/02/03/opencv-image-translation/ | Let’s now load our image from disk and perform our first translation:
# load the image and display it to our screen
image = cv2.imread(args["image"])
cv2.imshow("Original", image)
# shift the image 25 pixels to the right and 50 pixels down
M = np.float32([[1, 0, 25], [0, 1, 50]])
shifted = cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))
cv2.imshow("Shifted Down and Right", shifted)
Lines 14 and 15 load our input image from disk and then display it to our screen:
Figure 4: Our example input image that we will be applying translation to with OpenCV. The first actual translation takes place on Lines 18-20, where we start by defining our translation matrix, M.
This matrix tells us how many pixels to the left or right our image will be shifted, and then how many pixels up or down the image will be shifted, again keeping in mind that the translation matrix has the form:
M = np.float32([
[1, 0, shiftX],
[0, 1, shiftY]
])
Our translation matrix M is defined as a floating point array — this is important because OpenCV expects this matrix to be of floating point type. The first row of the matrix is [1, 0, t_x], where t_x is the number of pixels we will shift the image left or right. Negative values of t_x will shift the image to the left, and positive values will shift the image to the right. Then, we define the second row of the matrix as [0, 1, t_y], where t_y is the number of pixels we will shift the image up or down. Negative values of t_y will shift the image up, and positive values will shift the image down. Using this notation, on Line 18, we can see that t_x = 25 and t_y = 50, indicating that we are shifting the image 25 pixels to the right and 50 pixels down. Now that we have our translation matrix defined, the actual translation takes place on Line 19 using the cv2.warpAffine function. The first argument is the image we wish to shift, and the second argument is our translation matrix, M. Finally, we manually supply the image’s dimensions (width and height) as the third argument. Line 20 displays the results of the translation, which we can see below:
Figure 5: Using OpenCV to translate an image 25 pixels to the right and 50 pixels down. |
https://pyimagesearch.com/2021/02/03/opencv-image-translation/ | Notice how the image has clearly been “shifted” down and to the right. Let’s examine another example of image translation with OpenCV. # now, let's shift the image 50 pixels to the left and 90 pixels
# up by specifying negative values for the x and y directions,
# respectively
M = np.float32([[1, 0, -50], [0, 1, -90]])
shifted = cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))
cv2.imshow("Shifted Up and Left", shifted)
Line 25 sets t_x = -50 and t_y = -90, implying that we are shifting the image 50 pixels to the left and 90 pixels up. The image is shifted left and up rather than right and down because we are providing negative values for both t_x and t_y. Figure 6 shows the output of supplying negative values for both t_x and t_y:
Figure 6: Applying translation with OpenCV to shift an image 50 pixels to the left and 90 pixels up. Again, notice how our image is “shifted” to the left 50 pixels and up 90 pixels. However, manually constructing this translation matrix and calling the cv2.warpAffine method takes a bit of effort — and it’s not necessarily pretty code either! This is where my imutils package comes in. Instead of having to define our matrix M and make a call to cv2.warpAffine each time we want to translate an image, we can instead call imutils.translate to take care of the operation for us:
# use the imutils helper function to translate the image 100 pixels
# down in a single function call
shifted = imutils.translate(image, 0, 100)
cv2.imshow("Shifted Down", shifted)
cv2.waitKey(0)
The output of the translation operation can be seen in Figure 7:
Figure 7: Translating an image 100 pixels down using OpenCV. The benefit of using imutils.translate is cleaner code — the output of imutils.translate versus cv2.warpAffine will be the same, regardless. |
https://pyimagesearch.com/2021/02/03/opencv-image-translation/ | Note: If you are interested in seeing the implementation of the imutils.translate function, simply refer to my GitHub repo. OpenCV image translation results
To perform image translation with OpenCV, be sure to access the “Downloads” section of this tutorial to retrieve the source code and example image. You can then execute the following command:
$ python opencv_translate.py
Your results should look like mine from the previous section. What's next? We recommend PyImageSearch University. Course information:
84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024
★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled
I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science? That’s not the case. |
https://pyimagesearch.com/2021/02/03/opencv-image-translation/ | All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught. If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. Inside PyImageSearch University you'll find:
✓ 84 courses on essential computer vision, deep learning, and OpenCV topics
✓ 84 Certificates of Completion
✓ 114+ hours of on-demand video
✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques
✓ Pre-configured Jupyter Notebooks in Google Colab
✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) ✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch
✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University
Summary
In this tutorial, you learned how to perform image translation using OpenCV. |
https://pyimagesearch.com/2021/02/03/opencv-image-translation/ | You accomplished this task by first defining an affine transformation matrix:
Figure 9: To translate an image with OpenCV, we must first construct an affine transformation matrix. You then specified how you wanted to shift the image:
Negative values for the t_x value will shift the image to the left
Positive values for t_x shift the image to the right
Negative values for t_y shift the image up
Positive values for t_y will shift the image down
While performing image translation with OpenCV requires only two lines of code, it’s not exactly the most “pretty” code in the world. For convenience, you can use the imutils.translate function to perform image translation in a single, concise, and readable function call. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below! Download the Source Code and FREE 17-page Resource Guide
Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Download the code! |
https://pyimagesearch.com/2021/02/22/opencv-connected-component-labeling-and-analysis/ | Click here to download the source code to this post
In this tutorial, you will learn how to perform connected component labeling and analysis with OpenCV. Specifically, we will focus on OpenCV’s most used connected component labeling function, cv2.connectedComponentsWithStats. Connected component labeling (also known as connected component analysis, blob extraction, or region labeling) is an algorithmic application of graph theory used to determine the connectivity of “blob”-like regions in a binary image. We often use connected component analysis in the same situations that contours are used; however, connected component labeling can often give us more granular filtering of the blobs in a binary image. When using contour analysis, we are often restricted by the hierarchy of the outlines (i.e., one contour contained within another). With connected component analysis, we can more easily segment and analyze these structures. A great example of connected component analysis is computing the connected components of a binary (i.e., thresholded) license plate image and filtering the blobs based on their properties (e.g., width, height, area, solidity, etc.). This is exactly what we’ll be doing here today. Connected component analysis is another tool to add to your OpenCV toolbelt! To learn how to perform connected component labeling and analysis with OpenCV, just keep reading. |
https://pyimagesearch.com/2021/02/22/opencv-connected-component-labeling-and-analysis/ | Looking for the source code to this post? Jump Right To The Downloads Section
OpenCV Connected Component Labeling and Analysis
In the first part of this tutorial, we’ll review the four (yes, four) functions OpenCV provides to perform connected component analysis. The most popular of these functions is cv2.connectedComponentsWithStats. From there, we’ll configure our development environment and review our project directory structure. Next, we’ll implement two forms of connected component analysis:
The first method will demonstrate how to use OpenCV’s connected component analysis function, compute statistics for each component, and then extract/visualize each of the components individually. The second method shows a practical, real-world example of connected component analysis. We threshold a license plate and then use connected component analysis to extract just the license plate characters. We’ll wrap up this guide with a discussion of our results. OpenCV’s connected component functions
Figure 1: OpenCV implements four functions that can be used for connected component analysis and labeling. OpenCV provides four connected component analysis functions:
cv2.connectedComponents
cv2.connectedComponentsWithStats
cv2.connectedComponentsWithAlgorithm
cv2.connectedComponentsWithStatsWithAlgorithm
The most popular method is cv2.connectedComponentsWithStats which returns the following information:
The bounding box of the connected component
The area (in pixels) of the component
The centroid/center (x, y)-coordinates of the component
The first method, cv2.connectedComponents, is the same as the second, only it does not return the above statistical information. |
https://pyimagesearch.com/2021/02/22/opencv-connected-component-labeling-and-analysis/ | In the vast majority of situations, you will need the stats, so it’s worth simply using cv2.connectedComponentsWithStats instead. The third method, cv2.connectedComponentsWithAlgorithm, implements faster, more efficient algorithms for connected component analysis. If you have OpenCV compiled with parallel processing support then both cv2.connectedComponentsWithAlgorithm and cv2.connectedComponentsWithStatsWithAlgorithm will run faster than the first two. But in general, stick with cv2.connectedComponentsWithStats until you are comfortable working with connected component labeling. Configuring your development environment
To learn how to perform connected component analysis, you need to have OpenCV installed on your machine:
Luckily, OpenCV is pip-installable:
$ pip install opencv-contrib-python
If you need help configuring your development environment for OpenCV, I highly recommend that you read my pip install OpenCV guide — it will have you up and running in a matter of minutes. Having problems configuring your development environment? Figure 2: Having trouble configuring your dev environment? Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch Plus — you’ll be up and running with this tutorial in a matter of minutes. All that said, are you:
Short on time?
Learning on your employer’s administratively locked system?
Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?
Ready to run the code right now on your Windows, macOS, or Linux systems? |
https://pyimagesearch.com/2021/02/22/opencv-connected-component-labeling-and-analysis/ | Then join PyImageSearch Plus today! Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides that are pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required. And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux! Project structure
Before we implement connected component analysis with OpenCV, let’s first take a peek at our project directory structure. Start by accessing the “Downloads” section of this tutorial to retrieve the source code and example image:
$ tree . --dirsfirst
. ├── basic_connected_components.py
├── filtering_connected_components.py
└── license_plate.png
0 directories, 3 files
We’ll be applying connected component analysis to automatically filter out characters from a license plate (license_plate.png). To accomplish this task and to learn more about connected component analysis, we’ll implement two Python scripts:
basic_connected_components.py: Demonstrates how to apply connected component labeling, extract each of the components and their statistics, and visualize them on our screen.
filtering_connected_components.py: Applies connected component analysis but filters out non-license plate characters by examining each component’s width, height, and area (in pixels). Implementing basic connected components with OpenCV
Let’s get started implementing connected component analysis with OpenCV. |
https://pyimagesearch.com/2021/02/22/opencv-connected-component-labeling-and-analysis/ | Open up the basic_connected_components.py file in your project folder, and let’s get to work:
# import the necessary packages
import argparse
import cv2
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
help="path to input image")
ap.add_argument("-c", "--connectivity", type=int, default=4,
help="connectivity for connected component analysis")
args = vars(ap.parse_args())
Lines 2 and 3 import our required Python packages while Lines 6-11 parse our command line arguments. We have two command line arguments:
--image: The path to our input image residing on disk.
--connectivity: Either 4 or 8 connectivity (you can refer to this page for more details on four versus eight connectivity).
Let’s move on to preprocessing our input image:
# load the input image from disk, convert it to grayscale, and
# threshold it
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255,
cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
Lines 15-18 proceed to:
Load our input image from disk
Convert it to grayscale
Threshold it using Otsu’s thresholding method
After thresholding, our image will look like the following:
Figure 3: Top: The original input image of the license plate. Bottom: Output after applying Otsu’s thresholding to the image. Notice how the license plate characters appear as white on a black background. However, there is also a bunch of noise in the input image that appears as foreground too. Our goal is to apply connected component analysis to filter out these noise regions, leaving us with just the license plate characters. But before we can get to that, let’s first learn how to use the cv2.connectedComponentsWithStats function:
# apply connected component analysis to the thresholded image
output = cv2.connectedComponentsWithStats(
thresh, args["connectivity"], cv2.CV_32S)
(numLabels, labels, stats, centroids) = output
A call to cv2.connectedComponentsWithStats on Lines 21 and 22 performs connected component analysis with OpenCV. We pass in three arguments here:
The binary thresh image
The --connectivity command line argument
The data type (which you should leave as cv2.CV_32S)
The cv2.connectedComponentsWithStats function then returns a 4-tuple of:
The total number of unique labels (i.e., number of total components) that were detected
A mask named labels has the same spatial dimensions as our input thresh image. |
https://pyimagesearch.com/2021/02/22/opencv-connected-component-labeling-and-analysis/ | For each location in labels, we have an integer ID value that corresponds to the connected component where the pixel belongs. You’ll learn how to filter the labels matrix later in this section.
stats: Statistics on each connected component, including the bounding box coordinates and area (in pixels).
The centroids (i.e., center) (x, y)-coordinates of each connected component.
Let’s learn how to parse these values now:
# loop over the number of unique connected component labels
for i in range(0, numLabels):
    # if this is the first component then we examine the
    # *background* (typically we would just ignore this
    # component in our loop)
    if i == 0:
        text = "examining component {}/{} (background)".format(
            i + 1, numLabels)
    # otherwise, we are examining an actual connected component
    else:
        text = "examining component {}/{}".format(i + 1, numLabels)
    # print a status message update for the current connected
    # component
    print("[INFO] {}".format(text))
    # extract the connected component statistics and centroid for
    # the current label
    x = stats[i, cv2.CC_STAT_LEFT]
    y = stats[i, cv2.CC_STAT_TOP]
    w = stats[i, cv2.CC_STAT_WIDTH]
    h = stats[i, cv2.CC_STAT_HEIGHT]
    area = stats[i, cv2.CC_STAT_AREA]
    (cX, cY) = centroids[i]
Line 26 loops over the IDs of all unique connected components returned by OpenCV. We then encounter an if/else statement:
The first connected component, with an ID of 0, is always the background. We typically ignore the background, but if you ever need it, keep in mind that ID 0 contains it. Otherwise, if i > 0, then we know the component is worth exploring more. Lines 44-49 show us how to parse our stats and centroids lists, allowing us to extract:
The starting x coordinate of the component
The starting y coordinate of the component
The width (w) of the component
The height (h) of the component
The centroid (x, y)-coordinates of the component
Let’s now visualize the bounding box and centroid of the current component:
# clone our original image (so we can draw on it) and then draw
# a bounding box surrounding the connected component along with
# a circle corresponding to the centroid
output = image.copy()
cv2.rectangle(output, (x, y), (x + w, y + h), (0, 255, 0), 3)
cv2.circle(output, (int(cX), int(cY)), 4, (0, 0, 255), -1)
Line 54 creates an output image that we can draw on. We then draw the bounding box of the component as a green rectangle (Line 55) and the centroid as a red circle (Line 56). Our final code block demonstrates how to create a mask for the current connected component:
# construct a mask for the current connected component by
# finding all pixels in the labels array that have the current
# connected component ID
componentMask = (labels == i).astype("uint8") * 255
# show our output image and connected component mask
cv2.imshow("Output", output)
cv2.imshow("Connected Component", componentMask)
cv2.waitKey(0)
Line 61 first finds all locations in labels equal to the current component ID, i. We then convert the result to an unsigned 8-bit integer with a value of 0 for the background and a value of 255 for the foreground. The output image and componentMask are then displayed on our screen on Lines 64-66. |
https://pyimagesearch.com/2021/02/22/opencv-connected-component-labeling-and-analysis/ | OpenCV connected component analysis results
We are now ready to perform connected component labeling with OpenCV! Be sure to access the “Downloads” section of this tutorial to retrieve the source code and example image:
$ python basic_connected_components.py --image license_plate.png
[INFO] examining component 1/17 (background)
[INFO] examining component 2/17
[INFO] examining component 3/17
[INFO] examining component 4/17
[INFO] examining component 5/17
[INFO] examining component 6/17
[INFO] examining component 7/17
[INFO] examining component 8/17
[INFO] examining component 9/17
[INFO] examining component 10/17
[INFO] examining component 11/17
[INFO] examining component 12/17
[INFO] examining component 13/17
[INFO] examining component 14/17
[INFO] examining component 15/17
[INFO] examining component 16/17
[INFO] examining component 17/17
The animation below shows me cycling through each of the 17 detected components:
Figure 4: Using connected component analysis to find all structures on the license plate. The first connected component is actually our background. We typically skip this component as the background isn’t often needed. The rest of the 16 components are then displayed. For each component, we draw the bounding box (green rectangle) and centroid/center (red circle). You may have noticed that some of these connected components are license plate characters while others are simply “noise.” That raises the question:
Is it possible to detect just the license plate characters’ components? And if so, how do we do that? We’ll address that question in the next section. |
https://pyimagesearch.com/2021/02/22/opencv-connected-component-labeling-and-analysis/ | How to filter connected components with OpenCV
Our previous code example demonstrated how to extract connected components with OpenCV, but it didn’t demonstrate how to filter them. Let’s learn how we can filter connected components now:
# import the necessary packages
import numpy as np
import argparse
import cv2
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
help="path to input image")
ap.add_argument("-c", "--connectivity", type=int, default=4,
help="connectivity for connected component analysis")
args = vars(ap.parse_args())
Lines 2-4 import our required Python packages while Lines 7-12 parse our command line arguments. These command line arguments are identical to the ones from our previous script, so I suggest you refer to earlier in this tutorial for a detailed explanation of them. From there, we load our image, preprocess it, and apply connected component analysis:
# load the input image from disk, convert it to grayscale, and
# threshold it
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255,
cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
# apply connected component analysis to the thresholded image
output = cv2.connectedComponentsWithStats(
thresh, args["connectivity"], cv2.CV_32S)
(numLabels, labels, stats, centroids) = output
# initialize an output mask to store all characters parsed from
# the license plate
mask = np.zeros(gray.shape, dtype="uint8")
Lines 16-19 load our input image and preprocess it in the same manner as we did in our previous script. We then apply connected component analysis on Lines 22-24. Line 28 initializes an output mask to store all license plate characters we have found after performing connected component analysis. Speaking of which, let’s loop over each of the unique labels now:
# loop over the number of unique connected component labels, skipping
# over the first label (as label zero is the background)
for i in range(1, numLabels):
    # extract the connected component statistics for the current
    # label
    x = stats[i, cv2.CC_STAT_LEFT]
    y = stats[i, cv2.CC_STAT_TOP]
    w = stats[i, cv2.CC_STAT_WIDTH]
    h = stats[i, cv2.CC_STAT_HEIGHT]
    area = stats[i, cv2.CC_STAT_AREA]
Notice that our for loop starts from ID 1, implying that we are skipping over 0, our background value. We then extract the bounding box coordinates and area of the current connected component on Lines 35-39. We are now ready to filter our connected components:
# ensure the width, height, and area are all neither too small
# nor too big
keepWidth = w > 5 and w < 50
keepHeight = h > 45 and h < 65
keepArea = area > 500 and area < 1500
# ensure the connected component we are examining passes all
# three tests
if all((keepWidth, keepHeight, keepArea)):
    # construct a mask for the current connected component and
    # then take the bitwise OR with the mask
    print("[INFO] keeping connected component '{}'".format(i))
    componentMask = (labels == i).astype("uint8") * 255
    mask = cv2.bitwise_or(mask, componentMask)
Lines 43-45 demonstrate that we are filtering our connected components based on their width, height, and area, discarding components that are either too small or too large. |
https://pyimagesearch.com/2021/02/22/opencv-connected-component-labeling-and-analysis/ | Note: Wondering how I came up with these values? I used print statements to show the width, height, and area for each connected component while visualizing them individually to my screen. I noted the width, height, and area for the license plate characters and found their minimum/maximum values, with a bit of tolerance on each end. You should do the same for your own applications. Line 49 verifies that keepWidth, keepHeight, and keepArea are all True, implying that each of them passed the test. If that’s indeed the case, we compute the componentMask for the current label ID (just like we did in our basic_connected_components.py script) and add the license plate character to our mask. Finally, we display our input image and output license plate characters mask on our screen. # show the original input image and the mask for the license plate
# characters
cv2.imshow("Image", image)
cv2.imshow("Characters", mask)
cv2.waitKey(0)
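As for the note above about choosing the width, height, and area thresholds, a quick way to tune them for your own images is to temporarily print each component’s statistics inside the loop and eyeball which ranges correspond to actual characters. The snippet below is only a debugging aid I am suggesting, not part of the downloaded script:

# temporary debugging aid: dump each component's stats so you can pick
# sensible keepWidth/keepHeight/keepArea thresholds for your own images
print("[DEBUG] component {}: w={}, h={}, area={}".format(i, w, h, area))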
As we’ll see in the next section, our mask will only contain the license plate characters. Filtering connected components results
Let’s learn how to filter connected components with OpenCV! Be sure to access the “Downloads” section of this guide to retrieve the source code and example image — from there, you can execute the following command:
$ python filtering_connected_components.py --image license_plate.png
[INFO] keeping connected component 7
[INFO] keeping connected component 8
[INFO] keeping connected component 9
[INFO] keeping connected component 10
[INFO] keeping connected component 11
[INFO] keeping connected component 12
[INFO] keeping connected component 13
Figure 5: Top: The original input image containing the license plate. |
https://pyimagesearch.com/2021/02/22/opencv-connected-component-labeling-and-analysis/ | Bottom: Output after applying connected component filtering with OpenCV. Notice how we’ve been able to filter out just the characters from the license plate. Figure 5 displays the results of filtering our connected components. On the top, we have our original input image containing the license plate. The bottom has the results of filtering the connected components, resulting in just the license plate characters themselves. If we were building an Automatic License/Number Plate Recognition (ALPR/ANPR) system, we would take these characters and then pass them into an Optical Character Recognition (OCR) algorithm for recognition. But all of that hinges on us being able to binarize the characters and extract them, which connected component analysis enabled us to do! What's next? We recommend PyImageSearch University. Course information:
Summary
In this tutorial, you learned how to perform connected component analysis. OpenCV provides us with four functions for connected component labeling:
cv2.connectedComponents
cv2.connectedComponentsWithStats
cv2.connectedComponentsWithAlgorithm
cv2.connectedComponentsWithStatsWithAlgorithm
The most popular of these is the cv2.connectedComponentsWithStats function we used today. When dealing with blob-like structures in your images, connected component analysis can actually replace the process of contour detection, computing statistics over the contours, and filtering them. Connected component analysis is a handy tool to have in your toolbelt, so be sure you get some practice using it. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below!
https://pyimagesearch.com/2021/03/08/defending-against-adversarial-image-attacks-with-keras-and-tensorflow/ | Click here to download the source code to this post
In this tutorial, you will learn how to defend against adversarial image attacks using Keras and TensorFlow. So far, you have learned how to generate adversarial images using three different methods:
Adversarial images and attacks with Keras and TensorFlow
Targeted adversarial attacks with Keras and TensorFlow
Adversarial attacks with FGSM (Fast Gradient Sign Method)
Using adversarial images, we can trick our Convolutional Neural Networks (CNNs) into making incorrect predictions. While, according to the human eye, adversarial images may look identical to their original counterparts, they contain small perturbations that cause our CNNs to make wildly incorrect predictions. As I discuss in this tutorial, there are enormous consequences to deploying undefended models into the wild. For example, imagine a deep neural network deployed to a self-driving car. Nefarious users could generate adversarial images, print them, and then apply them to the road, signs, overpasses, etc., which would result in the model thinking there were pedestrians, cars, or obstacles when there are, in fact, none! The result could be disastrous, including car accidents, injuries, and loss of life. Given the risk that adversarial images pose, that raises the question:
What can we do to defend against these attacks? We’ll be addressing that question in a two-part series on adversarial image defense:
Defending against adversarial image attacks with Keras and TensorFlow (today’s tutorial)
Mixing normal images and adversarial images when training CNNs (next week’s guide)
Adversarial image defense is no joke. |
https://pyimagesearch.com/2021/03/08/defending-against-adversarial-image-attacks-with-keras-and-tensorflow/ | If you’re deploying models into the real-world, then be sure you have procedures in place to defend against adversarial attacks. By following these tutorials, you can train your CNNs to make correct predictions even if they are presented with adversarial images. To learn how to train a CNN to defend against adversarial attacks with Keras and TensorFlow, just keep reading. Looking for the source code to this post? Jump Right To The Downloads Section
Defending against adversarial image attacks with Keras and TensorFlow
In the first part of this tutorial, we’ll discuss the concept of adversarial images as an “arms race” and what we can do to defend against them. We’ll then discuss two methods that we can use to defend against adversarial images. We’ll implement the first method today and implement the second method next week. From there, we’ll configure our development environment and review our project directory structure. We then have several Python scripts to review, including:
Our CNN architecture
A function used to generate adversarial images using the FGSM
A data generator function used to generate batches of adversarial images such that we can fine-tune our CNN on them
A training script that puts all the pieces together, trains our model on the MNIST dataset, generates adversarial images, and then fine-tunes the CNN on them to improve accuracy
Let’s get started! Adversarial images are an “arms race,” and we need to defend against them
Figure 1: Defending against adversarial images is an arms race (image source). |
https://pyimagesearch.com/2021/03/08/defending-against-adversarial-image-attacks-with-keras-and-tensorflow/ | Defending against adversarial attacks has been and will continue to be an active research area. There is no “magic bullet” method that will make your model robust to adversarial attacks. Instead, you should reframe your thinking of adversarial attacks — it’s less of a “magic bullet” procedure and more like an arms race. During the Cold War between the United States and the Soviet Union, both countries spent tremendous sums of money and countless hours of research and development to both:
Build powerful weapons
While simultaneously creating systems to defend against these weapons
For every move on the nuclear weapon chessboard there was an equal attempt to defend against it. We see these types of arms races all the time:
One business creates a new product in the industry while the other company creates its own version. A great example of this is Honda and Toyota. When Honda launched Acura, their version of higher-end luxury cars in 1986, Toyota countered by creating Lexus in 1989, their version of luxury cars. Another example comes from anti-virus software, which continually defends against new attacks. When a new computer virus enters the digital world, anti-virus companies quickly release patches to their software to detect and remove these viruses. Whether we like it or not, we live in a world of constant escalation. |
https://pyimagesearch.com/2021/03/08/defending-against-adversarial-image-attacks-with-keras-and-tensorflow/ | For each action, there is an equal reaction. It’s not just physics; it’s the way of the world. It would not be wise to assume that our computer vision and deep learning models exist in a vacuum, devoid of manipulation. They can be (and are) manipulated. Just like our computers can contract viruses developed by hackers, our neural networks are also vulnerable to various types of attacks, the most prevalent being adversarial attacks. The good news is that we can defend against these attacks. How can you defend against adversarial image attacks? Figure 2: The process of training a model to defend against adversarial attacks. One of the easiest ways to defend against adversarial attacks is to train your model on these types of images. For example, if we are worried about nefarious users applying FGSM attacks to our model, then we can “inoculate” our neural network by training it on FGSM images of our own.
https://pyimagesearch.com/2021/03/08/defending-against-adversarial-image-attacks-with-keras-and-tensorflow/ | Typically, this type of adversarial inoculation is applied by either:
Training our model on a given dataset, generating a set of adversarial images, and then fine-tuning the model on the adversarial images
Generating mixed batches of both the original training images and adversarial images, followed by fine-tuning our neural network on these mixed batches
The first method is simpler and requires less computation (since we need to generate only one set of adversarial images). The downside is that this method tends to be less robust since we’re only fine-tuning the model on adversarial examples at the end of training. The second method is much more complicated and requires significantly more computation. We need to use the model to generate adversarial images for each batch where the network is trained. The second method’s benefit is that the model tends to be more robust because it sees both original training images and adversarial images during every single batch update during training. Furthermore, the model itself is being used to generate the adversarial images during each batch. As the model gets better at fooling itself, it can learn from its mistakes, resulting in a model that can better defend against adversarial attacks. We’ll be covering the first method here today. Next week we’ll implement the more advanced method. Problems and considerations with adversarial image defense
Both of the adversarial image defense methods mentioned in the previous section are dependent on:
The model architecture and weights used to generate the adversarial examples
The optimizer used to generate them
These training schemes might not generalize well if we simply create an adversarial image with a different model (potentially a more complex one). |
https://pyimagesearch.com/2021/03/08/defending-against-adversarial-image-attacks-with-keras-and-tensorflow/ | Additionally, if we train only on adversarial images then the model might not perform well on the regular images. This phenomenon is often referred to as catastrophic forgetting, and in the context of adversarial defense, means that the model has “forgotten” what a real image looks like. To mitigate this problem, we first generate a set of adversarial images, mix them with the regular training set, and then finally train the model (which we will do in next week’s blog post). Configuring your development environment
This tutorial on defending against adversarial image attacks uses Keras and TensorFlow. If you intend to follow this tutorial, I suggest you take the time to configure your deep learning development environment. You can utilize either of these two guides to install TensorFlow and Keras on your system:
How to install TensorFlow 2.0 on Ubuntu
How to install TensorFlow 2.0 on macOS
Either tutorial will help you configure your system with all the necessary software for this blog post in a convenient Python virtual environment. Having problems configuring your development environment? Figure 3: Having trouble configuring your dev environment? Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch University — you’ll be up and running with this tutorial in a matter of minutes. |
https://pyimagesearch.com/2021/03/08/defending-against-adversarial-image-attacks-with-keras-and-tensorflow/ | All that said, are you:
Short on time?
Learning on your employer’s administratively locked system?
Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?
Ready to run the code right now on your Windows, macOS, or Linux systems?
Then join PyImageSearch University today! Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides that are pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required. And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux! Project structure
Before we dive into any code, let’s first review our project directory structure. Be sure to access the “Downloads” section of this guide to retrieve the source code:
$ tree . --dirsfirst
. ├── pyimagesearch
│ ├── __init__.py
│ ├── datagen.py
│ ├── fgsm.py
│ └── simplecnn.py
└── train_adversarial_defense.py
1 directory, 5 files
Inside the pyimagesearch module, you’ll find three files:
datagen.py: Implements a function to generate batches of adversarial images at a time. We’ll use this function to train and evaluate our CNN on adversarial defense accuracy.
fgsm.py: Implements the Fast Gradient Sign Method (FGSM) for adversarial image generation.
simplecnn.py: Our CNN architecture we will train and evaluate for image adversary defense.
https://pyimagesearch.com/2021/03/08/defending-against-adversarial-image-attacks-with-keras-and-tensorflow/ | Finally, train_adversarial_defense.py glues all these pieces together and will demonstrate:
How to train our CNN architecture
How to evaluate the CNN on our testing set
How to generate batches of image adversaries using our trained CNN
How to evaluate the accuracy of our CNN on the image adversaries
How to fine-tune our CNN on image adversaries
How to re-evaluate the CNN on both the original training set and image adversaries
By the end of this guide, you’ll have a good understanding of training a CNN for basic image adversary defense. Our simple CNN architecture
We’ll be training a basic CNN architecture and use it to demonstrate adversarial image defense. While I’ve included this model’s implementation here today, I covered the architecture in detail in last week’s tutorial on the Fast Gradient Sign Method, so I suggest you refer there if you need a more comprehensive review. Open the simplecnn.py file in your pyimagesearch module, and you’ll find the following code:
# import the necessary packages
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Dense
The top of our file consists of our Keras and TensorFlow imports. We then define the SimpleCNN architecture. class SimpleCNN:
@staticmethod
def build(width, height, depth, classes):
# initialize the model along with the input shape
model = Sequential()
inputShape = (height, width, depth)
chanDim = -1
# first CONV => RELU => BN layer set
model.add(Conv2D(32, (3, 3), strides=(2, 2), padding="same",
input_shape=inputShape))
model.add(Activation("relu"))
model.add(BatchNormalization(axis=chanDim))
# second CONV => RELU => BN layer set
model.add(Conv2D(64, (3, 3), strides=(2, 2), padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization(axis=chanDim))
# first (and only) set of FC => RELU layers
model.add(Flatten())
model.add(Dense(128))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.5))
# softmax classifier
model.add(Dense(classes))
model.add(Activation("softmax"))
# return the constructed network architecture
return model
As you can see, this is a basic CNN model that includes two sets of CONV => RELU => BN layers followed by a softmax layer head. The softmax classifier will return the class label probability distribution for a given input image. Again, you should refer to last week’s tutorial for a more detailed explanation. The FGSM technique for generating adversarial images
We’ll use the Fast Gradient Sign Method (FGSM) to generate adversarial images. We covered this technique last week, but I’ve included the code here today as a matter of completeness. |
https://pyimagesearch.com/2021/03/08/defending-against-adversarial-image-attacks-with-keras-and-tensorflow/ | If you open the fgsm.py file in the pyimagesearch module, you will find the following code:
# import the necessary packages
from tensorflow.keras.losses import MSE
import tensorflow as tf
def generate_image_adversary(model, image, label, eps=2 / 255.0):
# cast the image
image = tf.cast(image, tf.float32)
# record our gradients
with tf.GradientTape() as tape:
# explicitly indicate that our image should be tracked for
# gradient updates
tape.watch(image)
# use our model to make predictions on the input image and
# then compute the loss
pred = model(image)
loss = MSE(label, pred)
# calculate the gradients of loss with respect to the image, then
# compute the sign of the gradient
gradient = tape.gradient(loss, image)
signedGrad = tf.sign(gradient)
# construct the image adversary
adversary = (image + (signedGrad * eps)).numpy()
# return the image adversary to the calling function
return adversary
Essentially, this function tracks the gradients of our image, makes predictions on it, computes the loss, and then uses the sign of the gradients to update the pixel intensities of the input image, such that:
The image is ultimately misclassified by our CNN
Yet the image looks identical to the original (according to the human eye)
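As a quick, hypothetical usage sketch (assuming a trained model and the preprocessed MNIST arrays from the training script later in this post), you could craft and inspect a single adversary like this:
# hypothetical usage sketch: craft one adversarial example with FGSM
# (assumes `model`, `trainX`, and `trainY` exist as in the training script)
from pyimagesearch.fgsm import generate_image_adversary
import numpy as np

# grab a single preprocessed MNIST digit and its one-hot label,
# adding a batch dimension so the model can process it
image = trainX[0].reshape(1, 28, 28, 1)
label = trainY[0]

# generate the adversary and compare the model's predictions
adversary = generate_image_adversary(model, image, label, eps=0.1)
print("clean prediction:      ", np.argmax(model.predict(image)))
print("adversarial prediction:", np.argmax(model.predict(adversary)))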
Refer to last week’s tutorial on the Fast Gradient Sign Method for more details on how this technique works and its implementation. Implementing a custom data generator used to generate adversarial images during training
Our most important function here today is the generate_adversarial_batch method. This function is a custom data generator that we’ll use during training. At a high level, this function:
Accepts a set of training images
Randomly samples a batch of size N from our training images
Applies the generate_image_adversary function to them to create our image adversary
Yields the batch of image adversaries to our training loop, thereby allowing our model to learn patterns from the image adversaries and ideally defend against them
Let’s take a look at our custom data generator now. Open the datagen.py file in our project directory structure and insert the following code:
# import the necessary packages
from .fgsm import generate_image_adversary
import numpy as np
def generate_adversarial_batch(model, total, images, labels, dims,
eps=0.01):
# unpack the image dimensions into convenience variables
(h, w, c) = dims
We start by importing our required packages. Notice that we’re using our FGSM implementation via the generate_image_adversary function we implemented earlier. Our generate_adversarial_batch function requires several parameters, including:
model: The CNN that we want to fool (i.e., the model we are training).
total: The size of the batch of adversarial images we want to generate.
images: The set of images we’ll be sampling from (typically either the training or testing set).
labels: The corresponding class labels for the images.
dims: The spatial dimensions of our input images.
eps: A small epsilon factor used to control the magnitude of the pixel intensity update when applying the Fast Gradient Sign Method.
Line 8 unpacks our dims into the height (h), width (w), and number of channels (c) so that we can easily reference them throughout the rest of our function. Let’s now build the data generator itself:
# we're constructing a data generator here so we need to loop
# indefinitely
while True:
# initialize our perturbed images and labels
perturbImages = []
perturbLabels = []
# randomly sample indexes (without replacement) from the
# input data
idxs = np.random.choice(range(0, len(images)), size=total,
replace=False)
Line 12 starts a loop that will continue indefinitely until training is complete. |
https://pyimagesearch.com/2021/03/08/defending-against-adversarial-image-attacks-with-keras-and-tensorflow/ | We then initialize two lists, perturbImages (to store the batch of adversarial images generated later in this while loop) and perturbLabels (to store the original class labels for the image). Lines 19 and 20 randomly sample a set of our images. We can now loop over the indexes of each of these randomly selected images:
# loop over the indexes
for i in idxs:
# grab the current image and label
image = images[i]
label = labels[i]
# generate an adversarial image
adversary = generate_image_adversary(model,
image.reshape(1, h, w, c), label, eps=eps)
# update our perturbed images and labels lists
perturbImages.append(adversary.reshape(h, w, c))
perturbLabels.append(label)
# yield the perturbed images and labels
yield (np.array(perturbImages), np.array(perturbLabels))
Lines 25 and 26 grab the current image and label. We then apply our generate_image_adversary function to create the image adversary using FGSM (Lines 29 and 30). With the adversary generated, we update both our perturbImages and perturbLabels lists, respectively. Our data generator rounds out by yielding a 2-tuple of our adversarial images and labels to the training process. This function can be summarized by:
Accepting an input set of images
Randomly selecting a subset of them
Generating image adversaries for the subset
Returning the image adversaries to the training process, such that our CNN can learn patterns from them
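Before wiring the generator into training, a small, hypothetical sanity check (again assuming a trained model and the MNIST arrays used later) can confirm the batch shapes:
# hypothetical sanity check: pull a single batch of 32 adversarial images
from pyimagesearch.datagen import generate_adversarial_batch

advGen = generate_adversarial_batch(model, 32, trainX, trainY,
	(28, 28, 1), eps=0.1)
(advImages, advLabels) = next(advGen)
print(advImages.shape)  # expect (32, 28, 28, 1)
print(advLabels.shape)  # expect (32, 10)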
Suppose we train our CNN on both the original training images and adversarial images. In that case, our CNN can make correct predictions on both sets, thereby making our model more robust against adversarial attacks. Training on normal images, fine-tuning on adversarial images
With all of our helper functions implemented, let’s move on to creating our training script to defend against adversarial images. Open the train_adversarial_defense.py file in your project structure, and let’s get to work:
# import the necessary packages
from pyimagesearch.simplecnn import SimpleCNN
from pyimagesearch.datagen import generate_adversarial_batch
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.datasets import mnist
import numpy as np
Lines 2-7 import our required Python packages. |
https://pyimagesearch.com/2021/03/08/defending-against-adversarial-image-attacks-with-keras-and-tensorflow/ | Notice that we’re importing our SimpleCNN architecture along with the generate_adversarial_batch function, which we just implemented. We then proceed to load the MNIST dataset and preprocess it:
# load MNIST dataset and scale the pixel values to the range [0, 1]
print("[INFO] loading MNIST dataset...")
(trainX, trainY), (testX, testY) = mnist.load_data()
trainX = trainX / 255.0
testX = testX / 255.0
# add a channel dimension to the images
trainX = np.expand_dims(trainX, axis=-1)
testX = np.expand_dims(testX, axis=-1)
# one-hot encode our labels
trainY = to_categorical(trainY, 10)
testY = to_categorical(testY, 10)
With the MNIST dataset loaded, we can compile our model and train it on our training set:
# initialize our optimizer and model
print("[INFO] compiling model...")
opt = Adam(lr=1e-3)
model = SimpleCNN.build(width=28, height=28, depth=1, classes=10)
model.compile(loss="categorical_crossentropy", optimizer=opt,
metrics=["accuracy"])
# train the simple CNN on MNIST
print("[INFO] training network...")
model.fit(trainX, trainY,
validation_data=(testX, testY),
batch_size=64,
epochs=20,
verbose=1)
The next step is to evaluate the model on the test set:
# make predictions on the testing set for the model trained on
# non-adversarial images
(loss, acc) = model.evaluate(x=testX, y=testY, verbose=0)
print("[INFO] normal testing images:")
print("[INFO] loss: {:.4f}, acc: {:.4f}\n".format(loss, acc))
# generate a set of adversarial images from our test set
print("[INFO] generating adversarial examples with FGSM...\n")
(advX, advY) = next(generate_adversarial_batch(model, len(testX),
testX, testY, (28, 28, 1), eps=0.1))
# re-evaluate the model on the adversarial images
(loss, acc) = model.evaluate(x=advX, y=advY, verbose=0)
print("[INFO] adversarial testing images:")
print("[INFO] loss: {:.4f}, acc: {:.4f}\n".format(loss, acc))
Lines 40-42 utilize our trained CNN to make predictions on the testing set. We then display the accuracy and loss on our terminal. Now, let’s see how our model performs on adversarial images. Lines 46 and 47 generate a set of adversarial images while Lines 50-52 re-evaluate our trained CNN on these adversary examples. As we’ll see in the next section, our prediction accuracy plummets on the adversarial images. That raises the question:
How can we defend against these adversarial attacks? A basic solution is to fine-tune our model on the adversarial images:
# lower the learning rate and re-compile the model (such that we can
# fine-tune it on the adversarial images)
print("[INFO] re-compiling model...")
opt = Adam(lr=1e-4)
model.compile(loss="categorical_crossentropy", optimizer=opt,
metrics=["accuracy"])
# fine-tune our CNN on the adversarial images
print("[INFO] fine-tuning network on adversarial examples...")
model.fit(advX, advY,
batch_size=64,
epochs=10,
verbose=1)
Lines 57-59 lower our optimizer’s learning rate and then re-compile the model. We then fine-tune our model on the adversarial examples (Lines 63-66). Finally, we’ll perform one last set of evaluations:
# now that our model is fine-tuned we should evaluate it on the test
# set (i.e., non-adversarial) again to see if performance has degraded
(loss, acc) = model.evaluate(x=testX, y=testY, verbose=0)
print("")
print("[INFO] normal testing images *after* fine-tuning:")
print("[INFO] loss: {:.4f}, acc: {:.4f}\n".format(loss, acc))
# do a final evaluation of the model on the adversarial images
(loss, acc) = model.evaluate(x=advX, y=advY, verbose=0)
print("[INFO] adversarial images *after* fine-tuning:")
print("[INFO] loss: {:.4f}, acc: {:.4f}".format(loss, acc))
After fine-tuning, we need to re-evaluate our model’s accuracy on both the original testing set (Lines 70-73) and our adversarial examples (Lines 76-78). |
https://pyimagesearch.com/2021/03/08/defending-against-adversarial-image-attacks-with-keras-and-tensorflow/ | As we’ll see in the next section, fine-tuning our CNN on these adversarial examples allows our model to make correct predictions for both the original images and images generated by adversarial techniques! Adversarial image defense results
We are now ready to train our CNN to defend against adversarial image attacks! Start by accessing the “Downloads” section of this guide to retrieve the source code. From there, open a terminal and execute the following command:
$ time python train_adversarial_defense.py
[INFO] loading MNIST dataset...
[INFO] compiling model...
[INFO] training network...
Epoch 1/20
938/938 [==============================] - 12s 13ms/step - loss: 0.1973 - accuracy: 0.9402 - val_loss: 0.0589 - val_accuracy: 0.9809
Epoch 2/20
938/938 [==============================] - 12s 12ms/step - loss: 0.0781 - accuracy: 0.9762 - val_loss: 0.0453 - val_accuracy: 0.9838
Epoch 3/20
938/938 [==============================] - 12s 13ms/step - loss: 0.0599 - accuracy: 0.9814 - val_loss: 0.0410 - val_accuracy: 0.9868
...
Epoch 18/20
938/938 [==============================] - 11s 12ms/step - loss: 0.0103 - accuracy: 0.9963 - val_loss: 0.0476 - val_accuracy: 0.9883
Epoch 19/20
938/938 [==============================] - 11s 12ms/step - loss: 0.0091 - accuracy: 0.9967 - val_loss: 0.0420 - val_accuracy: 0.9889
Epoch 20/20
938/938 [==============================] - 11s 12ms/step - loss: 0.0087 - accuracy: 0.9970 - val_loss: 0.0443 - val_accuracy: 0.9892
[INFO] normal testing images:
[INFO] loss: 0.0443, acc: 0.9892
Here, you can see that we have trained our CNN on the MNIST dataset for 20 epochs. We’ve obtained 99.70% accuracy on the training set and 98.92% accuracy on our testing set, implying that our CNN is doing a good job making digit predictions. However, this “high accuracy” model is woefully inadequate and inaccurate when we generate a set of 10,000 adversarial images and ask the CNN to classify them:
[INFO] generating adversarial examples with FGSM...
[INFO] adversarial testing images:
[INFO] loss: 17.2824, acc: 0.0170
As you can see, our accuracy plummets from the original 98.92% down to 1.7%. Clearly, our CNN has utterly failed on adversarial images. That said, hope is not lost! Let’s now fine-tune our CNN on the set of 10,000 adversarial images:
[INFO] re-compiling model...
[INFO] fine-tuning network on adversarial examples...
Epoch 1/10
157/157 [==============================] - 2s 12ms/step - loss: 8.0170 - accuracy: 0.2455
Epoch 2/10
157/157 [==============================] - 2s 11ms/step - loss: 1.9634 - accuracy: 0.7082
Epoch 3/10
157/157 [==============================] - 2s 11ms/step - loss: 0.7707 - accuracy: 0.8612
...
Epoch 8/10
157/157 [==============================] - 2s 11ms/step - loss: 0.1186 - accuracy: 0.9701
Epoch 9/10
157/157 [==============================] - 2s 12ms/step - loss: 0.0894 - accuracy: 0.9780
Epoch 10/10
157/157 [==============================] - 2s 12ms/step - loss: 0.0717 - accuracy: 0.9817
We’re now obtaining 98% accuracy on the adversarial images after fine-tuning. Let’s now go back and re-evaluate the CNN on both the original testing set and our adversarial images:
[INFO] normal testing images *after* fine-tuning:
[INFO] loss: 0.0594, acc: 0.9844
[INFO] adversarial images *after* fine-tuning:
[INFO] loss: 0.0366, acc: 0.9906
real 5m12.753s
user 12m42.125s
sys 10m0.498s
Initially, our CNN obtained 98.92% accuracy on our testing set. |
https://pyimagesearch.com/2021/03/08/defending-against-adversarial-image-attacks-with-keras-and-tensorflow/ | Accuracy has dropped on the testing set by 0.5%, but the good news is that we’re now hitting 99% accuracy when classifying our adversarial images, thereby implying that:
Our model can make correct predictions on the original, non-perturbed images from the MNIST dataset. We can also make accurate predictions on the generated adversarial images (meaning that we’ve successfully defended against them). How else can we defend against adversarial attacks? Fine-tuning a model on adversarial images is just one way to defend against adversarial attacks. A better way is to mix and incorporate adversarial images with the original images during the training process. The result is a more robust model capable of defending against adversarial attacks since the model generates its own adversarial images in each batch, thereby continually improving itself rather than relying on a single round of fine-tuning after training. We’ll be covering this “mixed batch adversarial training method” in next week’s tutorial. Credits and references
The FGSM and data generator implementations were inspired by Sebastian Theiler’s excellent article on adversarial attacks and defenses. A huge shoutout and thank you to Sebastian for sharing his knowledge. |
Summary
In this tutorial, you learned how to defend against adversarial image attacks using Keras and TensorFlow. Our adversarial image defense worked by:
Training a CNN on our dataset
Generating a set of adversarial images using the trained model
Fine-tuning our model on the adversarial images
The result is a model that is both:
Accurate on the original testing images
Capable of correctly classifying the adversarial images as well
The fine-tuning approach to adversarial image defense is essentially the most basic adversarial defense. Next week you’ll learn a more advanced method that incorporates batches of adversarial images generated on the fly, allowing the model to learn from the adversarial examples that “fooled” it during each epoch. If you enjoyed this guide, you certainly wouldn’t want to miss next week’s tutorial! To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below! |
https://pyimagesearch.com/2021/03/15/mixing-normal-images-and-adversarial-images-when-training-cnns/ | Click here to download the source code to this post
In this tutorial, you will learn how to generate image batches of (1) normal images and (2) adversarial images during the training process. Doing so improves your model’s ability to generalize and defend against adversarial attacks. Last week we learned a simple method to defend against adversarial attacks. This method was a simple three-step process:
Train the CNN on your original training set
Generate adversarial examples from the testing set (or equivalent holdout set)
Fine-tune the CNN on the adversarial examples
This method works fine but can be vastly improved simply by altering the training process. Instead of fine-tuning the network on a set of adversarial examples, we can alter the batch generation process itself. When we train neural networks, we do so in batches of data. Each batch is a subset of the training data and is typically sized in powers of two (8, 16, 32, 64, 128, etc.). For each batch, we perform a forward pass of the network, compute the loss, perform backpropagation, and then update the network’s weights. This is the standard training protocol of essentially any neural network. We can modify this standard training procedure to incorporate adversarial examples by:
Initializing our neural network
Selecting a total of N training examples
Using the model and a method like FGSM to generate a total of N adversarial examples as well
Combining the two sets, forming a batch of size Nx2
Training the model on both the adversarial examples and original training samples
The benefit of this approach is that the model can learn from itself. |
https://pyimagesearch.com/2021/03/15/mixing-normal-images-and-adversarial-images-when-training-cnns/ | After each batch update, the model has improved by two factors. First, the model has ideally learned more discriminating patterns in the training data. Secondly, the model has learned to defend against adversarial examples that the model itself generated. Throughout an entire training procedure (tens to hundreds of epochs with tens of thousands to hundreds of thousands of batch updates), the model naturally learns to defend itself against adversarial attacks. This method is more complex than the basic fine-tuning approach, but the benefits dramatically outweigh the negatives. To learn how to mix normal images with adversarial images during training to improve model robustness, just keep reading. Looking for the source code to this post? Jump Right To The Downloads Section
Mixing normal images and adversarial images when training CNNs
In the first part of this tutorial, we’ll learn how to mix normal images and adversarial images during the training process. From there, we’ll configure our development environment and then review our project directory structure. We’ll have several Python scripts to implement today, including:
Our CNN architecture
An adversarial image generator
A data generator that (1) samples training data points and (2) generates adversarial examples on the fly
A training script that puts all the pieces together
We’ll wrap up this tutorial by training our model on the mixed adversarial image generation process and then discuss the results. |
https://pyimagesearch.com/2021/03/15/mixing-normal-images-and-adversarial-images-when-training-cnns/ | Let’s get started! How can we mix normal images and adversarial images during training? Mixing training images with adversarial images is best explained visually. We start with both a neural network architecture and a training set:
Figure 1: To defend against adversarial attacks, we start with a neural network architecture and training set. The normal training process works by sampling batches of data from the training set and then training the model:
Figure 2: The normal process of training. However, we want to incorporate adversarial training, so we need a separate process that uses the model to generate adversarial examples:
Figure 3: To defend against adversarial attacks, we need to update our training procedure to sample batches of both normal training images and adversarial images (that are generated by the model during training). Now, during our training process, we sample the training set and generate adversarial examples, and then train the network:
Figure 4: The full training process of mixing normal images and adversarial images together. The training process is slightly more complex since we are sampling from our training set and generating adversarial examples on the fly. Still, the benefit is that the model can:
Learn patterns from the original training set
Learn patterns from the adversarial examples
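To make this concrete, here is a minimal, hypothetical sketch of what such a mixed batch generator could look like — it is an illustration of the procedure described above (reusing the generate_image_adversary helper from last week's post), not the implementation we will build later in this tutorial:
# hypothetical sketch of a mixed-batch generator: half normal images,
# half adversarial images generated on the fly by the current model
import numpy as np
from pyimagesearch.fgsm import generate_image_adversary

def generate_mixed_adversarial_batch(model, batchSize, images, labels,
	dims, eps=0.1):
	# unpack the image dimensions and compute the number of normal
	# samples (the other half of the batch will be adversarial)
	(h, w, c) = dims
	half = batchSize // 2

	while True:
		# randomly sample a half-batch of normal training images
		idxs = np.random.choice(range(0, len(images)), size=half,
			replace=False)
		mixedImages = [images[i] for i in idxs]
		mixedLabels = [labels[i] for i in idxs]

		# generate an adversarial counterpart for each sampled image
		# using the model in its current state
		for i in idxs:
			adversary = generate_image_adversary(model,
				images[i].reshape(1, h, w, c), labels[i], eps=eps)
			mixedImages.append(adversary.reshape(h, w, c))
			mixedLabels.append(labels[i])

		# yield the combined batch of normal + adversarial examples
		# (for simplicity, the batch is not shuffled here)
		yield (np.array(mixedImages), np.array(mixedLabels))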
Since the model has now been trained on adversarial examples, it will be more robust and generalize better when presented with adversarial images. Configuring your development environment
This tutorial on defending against adversarial image attacks uses Keras and TensorFlow. |