https://pyimagesearch.com/2021/01/04/opencv-augmented-reality-ar/
Our opencv_ar_image.py script is the primary script in this tutorial and will take care of constructing our augmented reality output. If you wish to purchase your own Pantone color correction card, you can do so on Pantone’s official website. But if you don’t want to purchase one, don’t sweat, you can still follow along with this guide! Inside our project directory structure, you’ll see that I’ve included markers.pdf, which is a scan of my own Pantone color match card: Figure 7: Don’t have a Pantone color match card? Don’t want to purchase one? No worries! Just use the scan that I included in the “Downloads” associated with this tutorial. While it won’t help you perform color matching, you can still use it for the purposes of this example (i.e., detecting ArUco markers on it and then transforming the source image onto the input). Simply print markers.pdf on a piece of paper, cut it out, and then place it in view of your camera. From there you’ll be able to follow along.
https://pyimagesearch.com/2021/01/04/opencv-augmented-reality-ar/
With our directory structure reviewed, let’s move on to implementing augmented reality with OpenCV. Implementing augmented reality with OpenCV We are now ready to implement augmented reality with OpenCV! Open up the opencv_ar_image.py file in your project directory structure, and let’s get to work: # import the necessary packages import numpy as np import argparse import imutils import sys import cv2 Lines 2-6 handle importing our required Python packages. We’ll use NumPy for numerical processing, argparse for parsing command line arguments, and imutils for basic image operations (such as resizing). The sys package will allow us to gracefully exit our script (in the event that we cannot find the Pantone card in the input image), while cv2 provides our OpenCV bindings. With our imports taken care of, let’s move on to our command line arguments: # construct the argument parser and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-i", "--image", required=True, help="path to input image containing ArUCo tag") ap.add_argument("-s", "--source", required=True, help="path to input source image that will be put on input") args = vars(ap.parse_args()) We have two command line arguments here: --image: The path to the input image on disk, containing the surface we’ll be applying augmented reality to --source: The path to the source image that will be transformed onto the input image surface, thus creating our augmented reality output Let’s load both of these images now: # load the input image from disk, resize it, and grab its spatial # dimensions print("[INFO] loading input image and source image...") image = cv2.imread(args["image"]) image = imutils.resize(image, width=600) (imgH, imgW) = image.shape[:2] # load the source image from disk source = cv2.imread(args["source"]) Lines 19 and 20 load the input image from disk and resize it to have a width of 600px. We grab the spatial dimensions (width and height) from the image after the resizing operation on Line 21. We’ll need these dimensions later in this script when we perform a perspective warp. Line 24 then loads the original --source image from disk.
https://pyimagesearch.com/2021/01/04/opencv-augmented-reality-ar/
With our images loaded from disk, let’s move on to detecting ArUco markers in the input image: # load the ArUCo dictionary, grab the ArUCo parameters, and detect # the markers print("[INFO] detecting markers...") arucoDict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_ARUCO_ORIGINAL) arucoParams = cv2.aruco.DetectorParameters_create() (corners, ids, rejected) = cv2.aruco.detectMarkers(image, arucoDict, parameters=arucoParams) # if we have not found four markers in the input image then we cannot # apply our augmented reality technique if len(corners) != 4: print("[INFO] could not find 4 corners...exiting") sys.exit(0) For reference, our input image looks like the following: Figure 8: Our example input image for applying augmented reality with OpenCV. The first step is to detect the four ArUco markers on the input image. Our goal is to detect the four ArUco markers on the Pantone card. Once we have the card and its ArUco markers, we can take the source image and transform it onto the card surface, thus forming the augmented reality output. The entire augmented reality process hinges on finding these ArUco markers first. If you haven’t yet, go back and read my previous tutorials on ArUco markers — those guides will help you get up to speed.
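As a quick sanity check (this snippet is my own addition, not part of the tutorial’s opencv_ar_image.py), you can visualize whatever markers were detected before the four-corner check runs, which makes it obvious whether lighting or print quality is causing missed detections:

# optional debugging aid: draw any detected markers on a copy of the input
# image so we can visually confirm all four ArUco tags are being found
debug = image.copy()
if len(corners) > 0:
    cv2.aruco.drawDetectedMarkers(debug, corners, ids)
cv2.imshow("Detected ArUco markers", debug)
cv2.waitKey(0)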
https://pyimagesearch.com/2021/01/04/opencv-augmented-reality-ar/
From here on out I will assume you are comfortable with ArUco markers. Lines 29-32 proceed to: Load our ArUco dictionary (from our previous set of tutorials on ArUco markers we know the Pantone card was generated using the DICT_ARUCO_ORIGINAL dictionary) Initialize our ArUco detector parameters Detect the ArUco markers in the input image In the event that the four ArUco markers were not found, we gracefully exit the script (Lines 36-38). Again, our augmented reality process here depends on all four markers being successfully found. Provided that our script is still executing, we can safely assume that all four ArUco markers were successfully detected. From there, we can grab the IDs of the ArUco markers and initialize refPts, a list to contain the (x, y)-coordinates of the ArUco tag bounding boxes: # otherwise, we've found the four ArUco markers, so we can continue # by flattening the ArUco IDs list and initializing our list of # reference points print("[INFO] constructing augmented reality visualization...") ids = ids.flatten() refPts = [] # loop over the IDs of the ArUco markers in top-left, top-right, # bottom-right, and bottom-left order for i in (923, 1001, 241, 1007): # grab the index of the corner with the current ID and append the # corner (x, y)-coordinates to our list of reference points j = np.squeeze(np.where(ids == i)) corner = np.squeeze(corners[j]) refPts.append(corner) On Line 49 we loop over our four ArUco marker IDs in the Pantone color image. These IDs were obtained using our ArUco marker detection blog post. If you are using your own ArUco marker IDs, you will need to update this list and insert the IDs. Line 52 grabs the index, j, of the current ID. This index is then used to extract the corner and add it to the refPts list (Lines 53 and 54). We’re almost ready to perform our perspective warp!
https://pyimagesearch.com/2021/01/04/opencv-augmented-reality-ar/
The next step is to unpack our reference point coordinates: # unpack our ArUco reference points and use the reference points to # define the *destination* transform matrix, making sure the points # are specified in top-left, top-right, bottom-right, and bottom-left # order (refPtTL, refPtTR, refPtBR, refPtBL) = refPts dstMat = [refPtTL[0], refPtTR[1], refPtBR[2], refPtBL[3]] dstMat = np.array(dstMat) # grab the spatial dimensions of the source image and define the # transform matrix for the *source* image in top-left, top-right, # bottom-right, and bottom-left order (srcH, srcW) = source.shape[:2] srcMat = np.array([[0, 0], [srcW, 0], [srcW, srcH], [0, srcH]]) # compute the homography matrix and then warp the source image to the # destination based on the homography (H, _) = cv2.findHomography(srcMat, dstMat) warped = cv2.warpPerspective(source, H, (imgW, imgH)) In order to perform augmented reality with OpenCV, we need to compute a homography matrix that is then used to perform a perspective warp. However, in order to compute the homography, we need both a source matrix and destination matrix. Lines 60-62 construct our destination matrix, dstMat. We take special care to ensure the reference points of the ArUco markers are provided in top-left, top-right, bottom-right, and bottom-left order. This is a requirement so take special care to ensure the proper ordering. Next, we do the same for the source matrix (Lines 67 and 68), but as you can see, the process here is more simple. All we need to do is provide the (x, y)-coordinates of the top-left, top-right, bottom-right, and bottom-left coordinates of the source image, all of which is quite trivial, once you have the width and height of the source. The next step is to take the source and destination matrices and use them to compute our homography matrix, H (Line 72). The homography matrix tells OpenCV’s cv2.warpPerspective function how to take the source image and then warp it such that it can fit into the area provided in the destination matrix. This warping process takes place on Line 73, the output of which can be seen below: Figure 9: The output of the warping operation.
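For readers who want the math behind cv2.findHomography and cv2.warpPerspective, the planar homography relation (standard notation, not reproduced from the original article) maps each source pixel to its destination location via the 3×3 matrix H:

\[
\begin{bmatrix} x' \\ y' \\ w \end{bmatrix} = H \begin{bmatrix} x_{src} \\ y_{src} \\ 1 \end{bmatrix},
\qquad
(x_{dst},\, y_{dst}) = \left( \frac{x'}{w},\, \frac{y'}{w} \right)
\]

cv2.findHomography estimates the eight free parameters of H from the four source/destination point correspondences, and cv2.warpPerspective then applies this mapping to every pixel of the source image.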
https://pyimagesearch.com/2021/01/04/opencv-augmented-reality-ar/
We now need to apply this image to the surface of the input image, thus forming the augmented reality output. Notice how the input source has now been warped to the surface of the input image! Now that we have our warped image, we need to overlay it on the original input image. We can accomplish this task using some basic image processing operations: # construct a mask for the source image now that the perspective warp # has taken place (we'll need this mask to copy the source image into # the destination) mask = np.zeros((imgH, imgW), dtype="uint8") cv2.fillConvexPoly(mask, dstMat.astype("int32"), (255, 255, 255), cv2.LINE_AA) # this step is optional, but to give the source image a black border # surrounding it when applied to the source image, you can apply a # dilation operation rect = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)) mask = cv2.dilate(mask, rect, iterations=2) # create a three channel version of the mask by stacking it depth-wise, # such that we can copy the warped source image into the input image maskScaled = mask.copy() / 255.0 maskScaled = np.dstack([maskScaled] * 3) # copy the warped source image into the input image by (1) multiplying # the warped image and masked together, (2) multiplying the original # input image with the mask (giving more weight to the input where # there *ARE NOT* masked pixels), and (3) adding the resulting # multiplications together warpedMultiplied = cv2.multiply(warped.astype("float"), maskScaled) imageMultiplied = cv2.multiply(image.astype(float), 1.0 - maskScaled) output = cv2.add(warpedMultiplied, imageMultiplied) output = output.astype("uint8") First, we create an empty mask with the same spatial dimensions as the input image (Line 78). We then fill the polygon area with white, implying that the area we just drew is foreground and the rest is background (Lines 79 and 80). The output mask looks like the following: Figure 10: In order to apply the warped image to the input, we need to generate a mask for the warped region. Lines 85 and 86 are optional, but I like to dilate the mask, thereby enlarging it slightly. Doing so creates a nice little black border surrounding the area where the warped source image will be applied to the input image. Again, it’s optional, but it provides a nice effect. Next, we take the mask, scale it from the range [0, 255] to [0, 1].
https://pyimagesearch.com/2021/01/04/opencv-augmented-reality-ar/
We then stack the mask depth-wise, creating a 3-channel representation of the mask. We perform this operation so we can copy the warped source image into the input image. All that’s left now is to: Multiply the warped image and the masked together (Line 98) Multiply the original input image with the mask, giving more weight to the input areas where there are not masked pixels (Line 99) Add the resulting multiplications together to form our output augmented reality image (Line 100) Convert the output image from a floating point data type to an unsigned 8-bit integer (Line 101) Finally, we can display the input image, source, and output to our screen: # show the input image, source image, output of our augmented reality cv2.imshow("Input", image) cv2.imshow("Source", source) cv2.imshow("OpenCV AR Output", output) cv2.waitKey(0) These three images will be displayed to our screen until a window opened by OpenCV is clicked on and a key on your keyboard is pressed. OpenCV augmented reality results We are now ready to perform augmented reality with OpenCV! Start by using the “Downloads” section of this tutorial to download the source code and example images. From there, open up a terminal, and execute the following command: $ python opencv_ar_image.py --image examples/input_01.jpg \ --source sources/squirrel.jpg [INFO] loading input image and source image... [INFO] detecting markers... [INFO] constructing augmented reality visualization... Figure 11: Applying augmented reality with OpenCV and Python. On the right you can see our source image of a squirrel. This source image will be transformed into the scene (via augmented reality) on the left. The left image contains an input color correction card with ArUco markers (i.e., markers/fiducial tags) that our opencv_ar_image.py script detects. Once the markers are found, we apply a transform that warps the source image into the input, thus generating the output (bottom).
https://pyimagesearch.com/2021/01/04/opencv-augmented-reality-ar/
Notice how the squirrel image has been transformed onto the color correction card itself, perfectly maintaining the aspect ratio, scale, viewing angle, etc. of the color correction card. Let’s try another example, this one with different source and input images: $ python opencv_ar_image.py --image examples/input_02.jpg \ --source sources/antelope_canyon.jpg [INFO] loading input image and source image... [INFO] detecting markers... [INFO] constructing augmented reality visualization... Figure 12: The results of building a simple augmented reality application with OpenCV. On the right (Figure 12) we have an example image from a few years back of myself exploring Antelope Canyon in Page, AZ. The image on the left contains our input image, where our input source image will be applied to construct the augmented reality scene. Our Python script is able to detect the four ArUco tag markers and then apply a transform, thus generating the image on the bottom. Again, notice how the source image has been perfectly transformed to the input, maintaining the scale, aspect ratio, and most importantly, viewing angle, of the input image. Let’s look at one final example: $ python opencv_ar_image.py --image examples/input_03.jpg \ --source sources/jp.jpg [INFO] loading input image and source image... [INFO] detecting markers... [INFO] constructing augmented reality visualization... Figure 13: One final example of augmented reality with OpenCV. Figure 13 displays our results. This time we have a source image of my favorite movie, Jurassic Park (right).
https://pyimagesearch.com/2021/01/04/opencv-augmented-reality-ar/
We then detect the AprilTag markers in the input image (left) and then apply a transform to construct our augmented reality image (bottom). Next week you’ll learn how to perform this same technique, only in real time, thus creating a more seamless and thus more interesting and immersive augmented reality experience. Credits The code used to perform the perspective warp and masking was inspired by Satya Mallick’s implementation at LearnOpenCV. I took their implementation as a reference and then modified it to work for my example images along with providing additional details and commentary within the code and article. Check out Satya’s article if you feel so inclined. What's next? We recommend PyImageSearch University. Course information: 84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024 ★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations?
https://pyimagesearch.com/2021/01/04/opencv-augmented-reality-ar/
Or requires a degree in computer science? That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught. If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. Inside PyImageSearch University you'll find: ✓ 84 courses on essential computer vision, deep learning, and OpenCV topics ✓ 84 Certificates of Completion ✓ 114+ hours of on-demand video ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques ✓ Pre-configured Jupyter Notebooks in Google Colab ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) ✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch ✓ Easy one-click downloads for code, datasets, pre-trained models, etc.
https://pyimagesearch.com/2021/01/04/opencv-augmented-reality-ar/
✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University Summary In this tutorial you learned the basics of augmented reality using OpenCV. However, to construct a true augmented reality experience, we need to create a more immersive environment, one that leverages real-time video streams. And in fact, that’s exactly what we’ll be covering next week! To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below! Download the Source Code and FREE 17-page Resource Guide Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Download the code! Website
https://pyimagesearch.com/2021/01/11/opencv-video-augmented-reality/
Click here to download the source code to this post In this tutorial you will learn how to perform real-time augmented reality in video streams using OpenCV. Last week we covered the basics of augmented reality with OpenCV; however, that tutorial only focused on applying augmented reality to images. That raises the question: “Is it possible to perform augmented reality in real-time video with OpenCV?” It absolutely is — and the rest of this tutorial will show you how. To learn how to perform real-time augmented reality with OpenCV, just keep reading. Looking for the source code to this post? Jump Right To The Downloads Section OpenCV: Real-time video augmented reality In the first part of this tutorial, you will learn how OpenCV can facilitate augmented reality in video streams in real time. From there, we’ll configure our development environment and review our project directory structure. We’ll then review two Python scripts: The first one will contain a helper function, find_and_warp, which will accept an input image, detect augmented reality markers, and then warp a source image onto the input. The second script will act as a driver script and utilize our find_and_warp function within a real-time video stream.
https://pyimagesearch.com/2021/01/11/opencv-video-augmented-reality/
We’ll wrap up the tutorial with a discussion of our real-time augmented reality results. Let’s get started! How can we apply augmented reality to real-time video streams with OpenCV? Figure 1: OpenCV can be used to apply augmented reality to real-time video streams. The very reason the OpenCV library exists is to facilitate real-time image processing. The library accepts input images/frames, processes them as quickly as possible, and then returns the results. Since OpenCV is geared to work with real-time image processing, we can also use OpenCV to facilitate real-time augmented reality. For the purposes of this tutorial we will: (1) access our video stream, (2) detect ArUco markers in each input frame, and (3) take a source image and apply a perspective transform to map the source input onto the frame, thus creating our augmented reality output! And just to make this project even more fun and interesting, we’ll utilize two video streams: The first video stream will act as our “eyes” into the real world (i.e., what our camera sees). We’ll then read frames from the second video stream and then transform them into the first. By the end of this tutorial, you will have a fully functional OpenCV augmented reality project running in real time!
https://pyimagesearch.com/2021/01/11/opencv-video-augmented-reality/
Configuring your development environment In order to perform real-time augmented reality with OpenCV, you need to have the OpenCV library installed. Luckily, OpenCV is pip-installable: $ pip install opencv-contrib-python If you need help configuring your development environment for OpenCV, I highly recommend that you read my pip install OpenCV guide — it will have you up and running in a matter of minutes. Having problems configuring your development environment? Figure 2: Having trouble configuring your dev environment? Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch Plus — you’ll be up and running with this tutorial in a matter of minutes. All that said, are you: Short on time?Learning on your employer’s administratively locked system?Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?Ready to run the code right now on your Windows, macOS, or Linux system? Then join PyImageSearch Plus today! Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides that are pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required.
https://pyimagesearch.com/2021/01/11/opencv-video-augmented-reality/
And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux! Project structure Before we can implement real-time augmented reality with OpenCV, we first need to review our project directory structure. Start by using the “Downloads” section of this tutorial to download the source code and example video files. Let’s now take a peek at the directory contents: $ tree . --dirsfirst . ├── pyimagesearch │ ├── __init__.py │ └── augmented_reality.py ├── videos │ └── jp_trailer_short.mp4 ├── markers.pdf └── opencv_ar_video.py 2 directories, 4 files Inside the pyimagesearch module you’ll see that we have a Python file named augmented_reality.py. This file contains a function named find_and_warp. The find_and_warp function encapsulates the logic used in our previous tutorial on OpenCV Augmented Reality and allows us to: Detect ArUco tags in our Pantone color match card Transform an input frame onto the match card surface Return the output augmented reality image to the calling function The output of which will look something like this: If you don’t have your own color match card, don’t worry! Inside our project directory structure, you’ll see that I’ve included markers.pdf, which is a scan of my own Pantone color match card: Figure 3: Don’t have a Pantone color match card? Don’t want to purchase one?
https://pyimagesearch.com/2021/01/11/opencv-video-augmented-reality/
No worries! Just use the scan that I included in the “Downloads” associated with this tutorial. While it won’t help you perform color matching, you can still use it for the purposes of this example (i.e., detecting ArUco markers on it and then transforming the source image onto the frame). Simply print markers.pdf on a piece of paper, cut it out, and then place it in view of your camera. From there you’ll be able to follow along. Finally, opencv_ar_video.py includes all logic required to implement augmented reality in real time with OpenCV. Implementing our marker detector/augmented reality utility function Before we can implement augmented reality with OpenCV in real-time video streams, we first need to create a helper function, find_and_warp, which as the name suggests, will: Accept an input image and source imageFind the four ArUco tags on the input imageConstruct and apply a homography matrix to warp the source image into the input surface Additionally, we’ll include logic to handle when all four ArUco reference points are not detected (and how to ensure there is no flickering/choppiness in our output). Open up the augmented_reality.py file inside the pyimagesearch module of our project directory structure, and let’s get to work: # import the necessary packages import numpy as np import cv2 # initialize our cached reference points CACHED_REF_PTS = None Our imports are taken care of on Lines 2 and 3. We need only two, NumPy for numerical array processing and cv2 for our OpenCV bindings. We then initialize a global variable, CACHED_REF_POINTS, which is our cached reference points (i.e., location of ArUco tag markers in the previous frames).
https://pyimagesearch.com/2021/01/11/opencv-video-augmented-reality/
Due to changes in lighting conditions, viewpoint, or motion blur, there will be times when our four reference ArUco markers cannot be detected in a given input frame. When that happens we have two courses of action: (1) Return from the function with empty output. The benefit to this approach is that it’s simple and easy to implement (and also logically sound). The problem is that it creates a “flickering” effect if the ArUco tags are found in frame #1, missed in #2, and then found again in frame #3. (2) Fall back on the previous known location of ArUco markers. This is the caching method. It reduces flickering and helps create a seamless augmented reality experience, but if the reference markers move quickly, then the effects may appear a bit “laggy.” Which approach you decide to use is totally up to you, but I personally like the caching method, as it creates a better user experience for augmented reality. With our imports and variable initializations taken care of, let’s move on to our find_and_warp function. def find_and_warp(frame, source, cornerIDs, arucoDict, arucoParams, useCache=False): # grab a reference to our cached reference points global CACHED_REF_PTS # grab the width and height of the frame and source image, # respectively (imgH, imgW) = frame.shape[:2] (srcH, srcW) = source.shape[:2] This function is responsible for accepting an input source and frame, finding the ArUco markers on the frame, and then constructing and applying a perspective warp to transform the source onto the frame. This function accepts six arguments: frame: The input frame from our video stream source: The source image/frame that will be warped onto the input frame cornerIDs: The IDs of the ArUco tags that we need to detect arucoDict: OpenCV’s ArUco tag dictionary arucoParams: The ArUco marker detector parameters useCache: A boolean indicating whether or not we should use the reference point caching method We then grab the width and height of both our frame and source image on Lines 15 and 16.
https://pyimagesearch.com/2021/01/11/opencv-video-augmented-reality/
Let’s now detect ArUco markers in our frame: # detect ArUco markers in the input frame (corners, ids, rejected) = cv2.aruco.detectMarkers( frame, arucoDict, parameters=arucoParams) # if we *did not* find our four ArUco markers, initialize an # empty IDs list, otherwise flatten the ID list ids = np.array([]) if len(corners) != 4 else ids.flatten() # initialize our list of reference points refPts = [] Lines 19 and 20 make a call to cv2.aruco.detectMarkers to detect ArUco markers in the input frame. Line 24 initializes a list of ids. If we found four corners, then our ids list is a 1-d NumPy array of the ArUco markers detected. Otherwise, we set ids to an empty array. Line 27 initializes our list of reference points (refPts), which correspond to the four detected ArUco markers. We can now loop over our cornerIDs: # loop over the IDs of the ArUco markers in top-left, top-right, # bottom-right, and bottom-left order for i in cornerIDs: # grab the index of the corner with the current ID j = np.squeeze(np.where(ids == i)) # if we receive an empty list instead of an integer index, # then we could not find the marker with the current ID if j.size == 0: continue # otherwise, append the corner (x, y)-coordinates to our list # of reference points corner = np.squeeze(corners[j]) refPts.append(corner) Line 33 finds the index, j, of the corner marker ID, i. If no such marker exists for the current marker ID, i, then we continue looping (Lines 37 and 38). Otherwise, we add the corner (x, y)-coordinates to our reference list (Lines 42 and 43). But what happens if we could not find all four reference points? What happens then?
https://pyimagesearch.com/2021/01/11/opencv-video-augmented-reality/
The next code block addresses that question: # check to see if we failed to find the four ArUco markers if len(refPts) != 4: # if we are allowed to use cached reference points, fall # back on them if useCache and CACHED_REF_PTS is not None: refPts = CACHED_REF_PTS # otherwise, we cannot use the cache and/or there are no # previous cached reference points, so return early else: return None # if we are allowed to use cached reference points, then update # the cache with the current set if useCache: CACHED_REF_PTS = refPts Line 46 makes a check to see if we failed to detect all four ArUco markers. When that happens we have two choices: (1) fall back on the cache and use our CACHED_REF_PTS (Lines 49 and 50), or (2) simply return None to the calling function, indicating that we could not perform the augmented reality transform (Lines 54 and 55). Provided we are using the reference point cache, we update our CACHED_REF_PTS on Lines 59 and 60 with the current set of refPts. Given our refPts (cached or otherwise) we now need to construct our homography matrix and apply a perspective warp: # unpack our ArUco reference points and use the reference points # to define the *destination* transform matrix, making sure the # points are specified in top-left, top-right, bottom-right, and # bottom-left order (refPtTL, refPtTR, refPtBR, refPtBL) = refPts dstMat = [refPtTL[0], refPtTR[1], refPtBR[2], refPtBL[3]] dstMat = np.array(dstMat) # define the transform matrix for the *source* image in top-left, # top-right, bottom-right, and bottom-left order srcMat = np.array([[0, 0], [srcW, 0], [srcW, srcH], [0, srcH]]) # compute the homography matrix and then warp the source image to # the destination based on the homography (H, _) = cv2.findHomography(srcMat, dstMat) warped = cv2.warpPerspective(source, H, (imgW, imgH)) The code above, as well as in the remainder of this function, is essentially identical to that of last week, so I will defer a detailed discussion of these code blocks to the previous guide. Lines 66-68 construct our destination matrix (i.e., where the source image will be mapped to in the input frame), while Line 72 creates the source matrix, which is simply the top-left, top-right, bottom-right, and bottom-left corners of the source image. Line 76 computes our homography matrix from the two matrices. This homography matrix is used on Line 77 to construct the warped image. From there we need to prepare a mask that will allow us to seamlessly apply the warped image to the frame: # construct a mask for the source image now that the perspective # warp has taken place (we'll need this mask to copy the source # image into the destination) mask = np.zeros((imgH, imgW), dtype="uint8") cv2.fillConvexPoly(mask, dstMat.astype("int32"), (255, 255, 255), cv2.LINE_AA) # this step is optional, but to give the source image a black # border surrounding it when applied to the source image, you # can apply a dilation operation rect = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)) mask = cv2.dilate(mask, rect, iterations=2) # create a three channel version of the mask by stacking it # depth-wise, such that we can copy the warped source image # into the input image maskScaled = mask.copy() / 255.0 maskScaled = np.dstack([maskScaled] * 3) Lines 82-84 allocate memory for a mask that we then fill in with white for the foreground and black for the background. A dilation operation is performed on Lines 89 and 90 to create a black border surrounding the source image (optional, but looks good for aesthetic purposes).
We then scale our mask from the range [0, 255] to [0, 1] and then stack it depth-wise, resulting in a 3-channel mask.
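Putting the masking step into a formula (my own summary, not from the original article), the blending performed in the next code block is a per-pixel alpha composite, where M(x, y) ∈ [0, 1] is the scaled mask:

\[
\text{output}(x, y) = M(x, y)\,\text{warped}(x, y) + \bigl(1 - M(x, y)\bigr)\,\text{frame}(x, y)
\]

Because the dilation enlarges the white mask region slightly beyond the warped quadrilateral, those extra pixels take their value from the (black) warped image, which is what produces the thin black border mentioned above.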
https://pyimagesearch.com/2021/01/11/opencv-video-augmented-reality/
The final step is to use the mask to apply the warped image to the input surface: # copy the warped source image into the input image by # (1) multiplying the warped image and masked together, # (2) then multiplying the original input image with the # mask (giving more weight to the input where there # *ARE NOT* masked pixels), and (3) adding the resulting # multiplications together warpedMultiplied = cv2.multiply(warped.astype("float"), maskScaled) imageMultiplied = cv2.multiply(frame.astype(float), 1.0 - maskScaled) output = cv2.add(warpedMultiplied, imageMultiplied) output = output.astype("uint8") # return the output frame to the calling function return output Lines 104-109 copy the warped image onto the output frame, which we then return to the calling function on Line 112. For a more detailed review of the actual homography matrix construction, warp transform, and post-processing tasks, refer to last week’s guide. Creating our OpenCV video augmented reality driver script With our find_and_warp helper function implemented, we can move on to creating our opencv_ar_video.py script, which is responsible for real-time augmented reality. Let’s open up the opencv_ar_video.py script and start coding: # import the necessary packages from pyimagesearch.augmented_reality import find_and_warp from imutils.video import VideoStream from collections import deque import argparse import imutils import time import cv2 Lines 2-8 handle importing our required Python packages. Notable imports include: find_and_warp: Responsible for constructing the actual augmented reality output VideoStream: Accesses our webcam video stream deque: Provides a queue data structure of source frames (read from a video file) to be applied to the output frame, thus creating our augmented reality output Let’s now parse our command line arguments: # construct the argument parser and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-i", "--input", type=str, required=True, help="path to input video file for augmented reality") ap.add_argument("-c", "--cache", type=int, default=-1, help="whether or not to use reference points cache") args = vars(ap.parse_args()) Our script accepts two command line arguments, one of which is required and the other optional: --input: Path to our input video residing on disk. We’ll read frames from this video file and then apply them to the frames read from our webcam. --cache: Whether or not to use our reference point caching method. Moving on, let’s now prepare our ArUco marker detector and video pointers: # load the ArUCo dictionary and grab the ArUCo parameters print("[INFO] initializing marker detector...") arucoDict = cv2.aruco.Dictionary_get(cv2.aruco.
https://pyimagesearch.com/2021/01/11/opencv-video-augmented-reality/
DICT_ARUCO_ORIGINAL) arucoParams = cv2.aruco.DetectorParameters_create() # initialize the video file stream print("[INFO] accessing video stream...") vf = cv2.VideoCapture(args["input"]) # initialize a queue to maintain the next frame from the video stream Q = deque(maxlen=128) # we need to have a frame in our queue to start our augmented reality # pipeline, so read the next frame from our video file source and add # it to our queue (grabbed, source) = vf.read() Q.appendleft(source) # initialize the video stream and allow the camera sensor to warm up print("[INFO] starting video stream...") vs = VideoStream(src=0).start() time.sleep(2.0) Lines 20 and 21 initialize our ArUco tag dictionary and detector parameters. The ArUco tags used on our input surface are DICT_ARUCO_ORIGINAL (which we know from our previous series of posts on ArUco marker detection). Line 25 opens our --input video file for reading. We also initialize Q, a FIFO (First In, First Out) deque data structure used to store frames read from our vf file pointer. We use a queue here to improve file I/O latency by ensuring a source frame is (nearly) always ready for the augmented reality transform. Later in this script we’ll make the assumption that our Q is populated, so we read an initial source from the vf and then update our Q (Lines 33 and 34). Lines 38 and 39 then initialize our webcam video stream and allow the camera sensor to warm up. Our next code block starts a while loop that will continue until our Q is empty (implying that the input video file ran out of frames and has reached the end of the file): # loop over the frames from the video stream while len(Q) > 0: # grab the frame from our video stream and resize it frame = vs.read() frame = imutils.resize(frame, width=600) # attempt to find the ArUCo markers in the frame, and provided # they are found, take the current source image and warp it onto # input frame using our augmented reality technique warped = find_and_warp( frame, source, cornerIDs=(923, 1001, 241, 1007), arucoDict=arucoDict, arucoParams=arucoParams, useCache=args["cache"] > 0) Lines 44 and 45 read a frame from our webcam video stream, which we resize to have a width of 600 pixels. We then apply our find_and_warp function to: (1) detect the ArUco markers on the input frame, (2) construct a homography matrix to map the source to the frame, (3) apply the perspective warp, and (4) return the final warped image to the calling function. Take special note of the cornerIDs and useCache parameters.
https://pyimagesearch.com/2021/01/11/opencv-video-augmented-reality/
The cornerIDs were obtained from our previous series of tutorials on ArUco markers, where we were tasked with detecting and identifying each of the four ArUco markers in our input image. If you are using your own custom ArUco marker, then you’ll likely need to update the cornerIDs accordingly. Secondly, the useCache parameter controls whether or not we are utilizing reference point caching (controlled via the --cache command line argument). Play with this parameter, and explore what happens when caching is turned on versus off. Our next code block handles updating our queue data structure: # if the warped frame is not None, then we know (1) we found the # four ArUCo markers and (2) the perspective warp was successfully # applied if warped is not None: # set the frame to the output augmented reality frame and then # grab the next video file frame from our queue frame = warped source = Q.popleft() # for speed/efficiency, we can use a queue to keep the next video # frame queue ready for us -- the trick is to ensure the queue is # always (or nearly) full if len(Q) != Q.maxlen: # read the next frame from the video file stream (grabbed, nextFrame) = vf.read() # if the frame was read (meaning we are not at the end of the # video file stream), add the frame to our queue if grabbed: Q.append(nextFrame) Lines 60-64 handle the case where our perspective warp was successful. In this case, we update our frame to be the warped output image (i.e., the output of applying our augmented reality process) and then read the next source frame from our queue. Lines 69-76 attempt to ensure our queue data structure is filled. If we haven’t reached the maximum length of the Q, we read the nextFrame from our video file and then add it to the queue. Our final code block handles displaying our output frame: # show the output frame cv2.imshow("Frame", frame) key = cv2.waitKey(1) & 0xFF # if the `q` key was pressed, break from the loop if key == ord("q"): break # do a bit of cleanup cv2.destroyAllWindows() vs.stop() Our real-time augmented reality script will continue to execute until either: (1) we press the q key on our keyboard, or (2) the source --input video file runs out of frames. Take a second to congratulate yourself on implementing real-time augmented reality with OpenCV!
https://pyimagesearch.com/2021/01/11/opencv-video-augmented-reality/
Augmented reality in real-time video streams with OpenCV Ready to perform augmented reality in real-time video streams with OpenCV? Start by using the “Downloads” section of this tutorial to download the source code and example video. From there, open up a terminal, and execute the following command: $ python opencv_ar_video.py --input videos/jp_trailer_short.mp4 [INFO] initializing marker detector... [INFO] accessing video stream... [INFO] starting video stream... As you can see from my output, we are: (1) reading frames from both my camera sensor as well as the Jurassic Park trailer video residing on disk, (2) detecting the ArUco tags on the card, and (3) applying a perspective warp to transform the video frame from the Jurassic Park trailer onto the real-world environment captured by my camera. Furthermore, note that our augmented reality application is running in real time! However, there is a bit of an issue … Notice there is considerable flickering that appears in the output frames — why is that? The reason is that the ArUco marker detection is not fully “stable.” In some frames all four markers are detected and in others they are not. An ideal solution would be to ensure all four markers are always detected, but that can’t be guaranteed in every scenario. Instead, what we can do is fall back on reference point caching: $ python opencv_ar_video.py --input videos/jp_trailer_short.mp4 --cache 1 [INFO] initializing marker detector... [INFO] accessing video stream... [INFO] starting video stream... Using reference point caching you can now see that our results are a bit better. When the four ArUco markers are not detected in the current frame, we fall back to their location in the previous frame where all four were detected. Another potential solution is to utilize optical flow to aid in reference point tracking (but that topic is outside the scope of this tutorial).
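Optical flow is indeed outside the article’s scope, but to make the idea concrete, here is a rough sketch of my own (untested, not part of the tutorial’s code) of what that fallback could look like: rather than reusing the cached points verbatim, it tracks the last known reference points into the current frame with Lucas-Kanade optical flow. It assumes refPts is the 4×4×2 corner structure used by find_and_warp:

import numpy as np
import cv2

def track_ref_pts(prevGray, gray, prevRefPts):
    # flatten the cached 4 markers x 4 corners into the Nx1x2 float32
    # format expected by cv2.calcOpticalFlowPyrLK
    pts = np.array(prevRefPts, dtype="float32").reshape(-1, 1, 2)

    # track each corner point from the previous grayscale frame into the
    # current grayscale frame
    (nextPts, status, _) = cv2.calcOpticalFlowPyrLK(prevGray, gray, pts,
        None, winSize=(21, 21), maxLevel=3)

    # if any corner could not be tracked, report failure so the caller
    # can fall back on the plain cache (or skip the frame)
    if nextPts is None or not status.all():
        return None

    # reshape the tracked points back into 4 markers x 4 corners x (x, y)
    return nextPts.reshape(4, 4, 2)

The tracked points would then stand in for refPts whenever the ArUco detector misses a frame, trading a small amount of drift for smoother motion than the static cache provides.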
https://pyimagesearch.com/2021/01/11/opencv-video-augmented-reality/
What's next? We recommend PyImageSearch University. Course information: 84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024 ★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science? That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.
https://pyimagesearch.com/2021/01/11/opencv-video-augmented-reality/
If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. Inside PyImageSearch University you'll find: ✓ 84 courses on essential computer vision, deep learning, and OpenCV topics ✓ 84 Certificates of Completion ✓ 114+ hours of on-demand video ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques ✓ Pre-configured Jupyter Notebooks in Google Colab ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) ✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch ✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University Summary In this tutorial you learned how to perform real-time augmented reality with OpenCV. Using OpenCV, we were able to access our webcam, detect ArUco tags, and then transform an input image/frame into our scene, all while running in real time! However, one of the biggest drawbacks to this augmented reality approach is that it requires we use markers/fiducials, such as ArUco tags, AprilTags, etc. There is an active area of augmented reality research called markerless augmented reality.
https://pyimagesearch.com/2021/01/11/opencv-video-augmented-reality/
With markerless augmented reality we do not need prior knowledge of the real-world environment, such as specific markers or objects that have to reside in our video stream. Markerless augmented reality makes for much more beautiful, immersive experiences; however, most markerless augmented reality systems require flat textures/regions in order to work. And furthermore, markerless augmented reality requires significantly more complex and computationally expensive algorithms. We’ll cover markerless augmented reality in a future set of tutorials on the PyImageSearch blog. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below! Download the Source Code and FREE 17-page Resource Guide Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Download the code! Website
https://pyimagesearch.com/2021/01/18/contrastive-loss-for-siamese-networks-with-keras-and-tensorflow/
Click here to download the source code to this post In this tutorial, you will learn about contrastive loss and how it can be used to train more accurate siamese neural networks. We will implement contrastive loss using Keras and TensorFlow. Previously, I authored a three-part series on the fundamentals of siamese neural networks: Building image pairs for siamese networks with Python; Siamese networks with Keras, TensorFlow, and Deep Learning; and Comparing images for similarity using siamese networks, Keras, and TensorFlow. This series covered the fundamentals of siamese networks, including: generating image pairs, implementing the siamese neural network architecture, and using binary cross-entropy to train the siamese network. But while binary cross-entropy is certainly a valid choice of loss function, it’s not the only choice (or even the best choice). State-of-the-art siamese networks tend to use some form of either contrastive loss or triplet loss when training — these loss functions are better suited for siamese networks and tend to improve accuracy. By the end of this guide, you will understand how to implement siamese networks and then train them with contrastive loss. To learn how to train a siamese neural network with contrastive loss, just keep reading. Looking for the source code to this post? Jump Right To The Downloads Section Contrastive Loss for Siamese Networks with Keras and TensorFlow In the first part of this tutorial, we will discuss what contrastive loss is and, more importantly, how it can be used to more accurately and effectively train siamese neural networks. We’ll then configure our development environment and review our project directory structure. We have a number of Python scripts to implement today, including: a configuration file; helper utilities for generating image pairs, plotting training history, and implementing custom layers; our contrastive loss implementation; a training script; and a testing/inference script. We’ll review each of these scripts; however, some of them have been covered in my previous guides on siamese neural networks, so when appropriate I’ll refer you to my other tutorials for additional details.
https://pyimagesearch.com/2021/01/18/contrastive-loss-for-siamese-networks-with-keras-and-tensorflow/
We’ll also spend a considerable amount of time discussing our contrastive loss implementation, ensuring you understand what it’s doing, how it works, and why we are utilizing it. By the end of this tutorial, you will have a fully functioning contrastive loss implementation that is capable of training a siamese neural network. What is contrastive loss? And how can contrastive loss be used to train siamese networks? In our previous series of tutorials on siamese neural networks, we learned how to train a siamese network using the binary cross-entropy loss function: Figure 1: The binary cross-entropy loss function (image source). Binary cross-entropy was a valid choice here because what we’re essentially doing is 2-class classification: either the two images presented to the network belong to the same class, or the two images belong to different classes. Framed in that manner, we have a classification problem. And since we only have two classes, binary cross-entropy makes sense. However, there is actually a loss function much better suited for siamese networks called contrastive loss: Figure 2: The contrastive loss function. Paraphrasing Harshvardhan Gupta, we need to keep in mind that the goal of a siamese network isn’t to classify a set of image pairs but instead to differentiate between them. Essentially, contrastive loss is evaluating how good a job the siamese network is doing at distinguishing between the image pairs.
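The formula shown in Figure 2 did not survive as text, so here it is reconstructed in LaTeX using this article’s labeling convention (Y = 1 for a positive pair, Y = 0 for a negative pair); some presentations include an extra factor of 1/2 on each term:

\[
L = Y \, D_W^{2} + (1 - Y)\,\bigl(\max(m - D_W,\ 0)\bigr)^{2}
\]

Here D_W is the Euclidean distance between the two sister-network embeddings and m is the margin.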
https://pyimagesearch.com/2021/01/18/contrastive-loss-for-siamese-networks-with-keras-and-tensorflow/
The difference is subtle but incredibly important. To break this equation down: The value Y is our label. It will be 1 if the image pairs are of the same class, and it will be 0 if the image pairs are of a different class. The variable D_W is the Euclidean distance between the outputs of the sister network embeddings. The max function takes the largest value of 0 and the margin, m, minus the distance. We’ll be implementing this loss function using Keras and TensorFlow later in this tutorial. If you would like more mathematically motivated details on contrastive loss, be sure to refer to Hadsell et al.’s paper, Dimensionality Reduction by Learning an Invariant Mapping. Configuring your development environment This series of tutorials on siamese networks utilizes Keras and TensorFlow. If you intend on following this tutorial or the previous two parts in this series, I suggest you take the time now to configure your deep learning development environment.
https://pyimagesearch.com/2021/01/18/contrastive-loss-for-siamese-networks-with-keras-and-tensorflow/
You can utilize either of these two guides to install TensorFlow and Keras on your system: How to install TensorFlow 2.0 on UbuntuHow to install TensorFlow 2.0 on macOS Either tutorial will help you configure your system with all the necessary software for this blog post in a convenient Python virtual environment. Having problems configuring your development environment? Figure 3: Having trouble configuring your dev environment? Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch Plus — you’ll be up and running with this tutorial in a matter of minutes. All that said, are you: Short on time?Learning on your employer’s administratively locked system?Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?Ready to run the code right now on your Windows, macOS, or Linux system? Then join PyImageSearch Plus today! Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides that are pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required. And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux!
https://pyimagesearch.com/2021/01/18/contrastive-loss-for-siamese-networks-with-keras-and-tensorflow/
Project structure Today’s tutorial on contrastive loss for siamese networks builds on my three previous tutorials that cover the fundamentals of building image pairs, implementing and training siamese networks, and using siamese networks for inference: Building image pairs for siamese networks with Python; Siamese networks with Keras, TensorFlow, and Deep Learning; and Comparing images for similarity using siamese networks, Keras, and TensorFlow. We’ll be building on the knowledge we gained from those guides (including the project directory structure itself) today, so consider the previous guides required reading before continuing. Once you’ve gotten caught up, we can proceed to review our project directory structure: $ tree . --dirsfirst . ├── examples │ ├── image_01.png │ ├── image_02.png │ ├── image_03.png ... │ └── image_13.png ├── output │ ├── contrastive_siamese_model │ │ ├── assets │ │ ├── variables │ │ │ ├── variables.data-00000-of-00001 │ │ │ └── variables.index │ │ └── saved_model.pb │ └── contrastive_plot.png ├── pyimagesearch │ ├── config.py │ ├── metrics.py │ ├── siamese_network.py │ └── utils.py ├── test_contrastive_siamese_network.py └── train_contrastive_siamese_network.py 6 directories, 23 files Inside the pyimagesearch module you’ll find four Python files: config.py: Contains our configuration of important variables, including batch size, epochs, output file paths, etc. metrics.py: Holds our implementation of the contrastive_loss function siamese_network.py: Contains the siamese network model architecture utils.py: Includes helper utilities, including a function to generate image pairs, compute the Euclidean distance as a layer inside of a CNN, and a training history plotting function We then have two Python driver scripts: train_contrastive_siamese_network.py: Trains our siamese neural network using contrastive loss and serializes the training history and model weights/architecture to disk inside the output directory test_contrastive_siamese_network.py: Loads our trained siamese network from disk and applies it to image pairs from inside the examples directory Again, I cannot stress the importance of reviewing my previous series of tutorials on siamese networks. Doing so is an absolute requirement before continuing here today. Implementing our configuration file Our configuration file holds important variables used to train our siamese network with contrastive loss. Open up the config.py file in your project directory structure, and let’s take a look inside: # import the necessary packages import os # specify the shape of the inputs for our network IMG_SHAPE = (28, 28, 1) # specify the batch size and number of epochs BATCH_SIZE = 64 EPOCHS = 100 # define the path to the base output directory BASE_OUTPUT = "output" # use the base output path to derive the path to the serialized # model along with training history plot MODEL_PATH = os.path.sep.join([BASE_OUTPUT, "contrastive_siamese_model"]) PLOT_PATH = os.path.sep.join([BASE_OUTPUT, "contrastive_plot.png"]) Line 5 sets our IMG_SHAPE dimensions. We’ll be working with the MNIST digits dataset, which has 28×28 grayscale (i.e., single channel) images. We then set our BATCH_SIZE and number of EPOCHS to train for.
https://pyimagesearch.com/2021/01/18/contrastive-loss-for-siamese-networks-with-keras-and-tensorflow/
These parameters were experimentally tuned. Lines 16-19 define the output file paths for both our serialized model and training history. For more details on the configuration file, refer to my tutorial on Siamese networks with Keras, TensorFlow, and Deep Learning. Creating our helper utility functions Figure 4: In order to train our siamese network, we need to generate positive and negative image pairs. In order to train our siamese network model, we’ll need three helper utilities: make_pairs: Generates a set of image pairs from the MNIST dataset that will serve as our training set euclidean_distance: A custom layer implementation that computes the Euclidean distance between two volumes inside of a CNN plot_training: Plots the training and validation contrastive loss over the course of the training process Let’s start off with our imports: # import the necessary packages import tensorflow.keras.backend as K import matplotlib.pyplot as plt import numpy as np We then have our make_pairs function, which I discussed in detail in my Building image pairs for siamese networks with Python tutorial (make sure you read that guide before continuing): def make_pairs(images, labels): # initialize two empty lists to hold the (image, image) pairs and # labels to indicate if a pair is positive or negative pairImages = [] pairLabels = [] # calculate the total number of classes present in the dataset # and then build a list of indexes for each class label that # provides the indexes for all examples with a given label numClasses = len(np.unique(labels)) idx = [np.where(labels == i)[0] for i in range(0, numClasses)] # loop over all images for idxA in range(len(images)): # grab the current image and label belonging to the current # iteration currentImage = images[idxA] label = labels[idxA] # randomly pick an image that belongs to the *same* class # label idxB = np.random.choice(idx[label]) posImage = images[idxB] # prepare a positive pair and update the images and labels # lists, respectively pairImages.append([currentImage, posImage]) pairLabels.append([1]) # grab the indices for each of the class labels *not* equal to # the current label and randomly pick an image corresponding # to a label *not* equal to the current label negIdx = np.where(labels ! 
= label)[0] negImage = images[np.random.choice(negIdx)] # prepare a negative pair of images and update our lists pairImages.append([currentImage, negImage]) pairLabels.append([0]) # return a 2-tuple of our image pairs and labels return (np.array(pairImages), np.array(pairLabels)) I’ve already covered this function in detail previously, but the gist here is that: In order to train siamese networks, we need examples of positive and negative image pairs A positive pair is two images that belong to the same class (i.e., two examples of the digit “8”) A negative pair is two images that belong to different classes (i.e., one image containing a “1” and the other image containing a “3”) The make_pairs function accepts an input set of images and associated labels and then constructs the positive and negative image pairs The next function, euclidean_distance, accepts a 2-tuple of vectors and then computes the Euclidean distance between them, utilizing Keras/TensorFlow functions such that the Euclidean distance can be computed inside the siamese neural network: def euclidean_distance(vectors): # unpack the vectors into separate lists (featsA, featsB) = vectors # compute the sum of squared distances between the vectors sumSquared = K.sum(K.square(featsA - featsB), axis=1, keepdims=True) # return the euclidean distance between the vectors return K.sqrt(K.maximum(sumSquared, K.epsilon())) Finally, we have a helper utility, plot_training, which accepts a plotPath, plots our training and validation contrastive loss over the course of training, and then saves the plot to disk: def plot_training(H, plotPath): # construct a plot that plots and saves the training history plt.style.use("ggplot") plt.figure() plt.plot(H.history["loss"], label="train_loss") plt.plot(H.history["val_loss"], label="val_loss") plt.title("Training Loss") plt.xlabel("Epoch #") plt.ylabel("Loss") plt.legend(loc="lower left") plt.savefig(plotPath) Let’s move on to implementing the siamese network architecture itself. Implementing our siamese network architecture Figure 5: Siamese networks with Keras and TensorFlow. Our siamese neural network architecture is essentially a basic CNN: # import the necessary packages from tensorflow.keras.models import Model from tensorflow.keras.layers import Input from tensorflow.keras.layers import Conv2D from tensorflow.keras.layers import Dense from tensorflow.keras.layers import Dropout from tensorflow.keras.layers import GlobalAveragePooling2D from tensorflow.keras.layers import MaxPooling2D def build_siamese_model(inputShape, embeddingDim=48): # specify the inputs for the feature extractor network inputs = Input(inputShape) # define the first set of CONV => RELU => POOL => DROPOUT layers x = Conv2D(64, (2, 2), padding="same", activation="relu")(inputs) x = MaxPooling2D(pool_size=(2, 2))(x) x = Dropout(0.3)(x) # second set of CONV => RELU => POOL => DROPOUT layers x = Conv2D(64, (2, 2), padding="same", activation="relu")(x) x = MaxPooling2D(pool_size=2)(x) x = Dropout(0.3)(x) # prepare the final outputs pooledOutput = GlobalAveragePooling2D()(x) outputs = Dense(embeddingDim)(pooledOutput) # build the model model = Model(inputs, outputs) # return the model to the calling function return model You can refer to my tutorial on Siamese networks with Keras, TensorFlow, and Deep Learning for more details on the model architecture and implementation. 
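As a quick, optional sanity check that is not part of the tutorial’s code, you could instantiate the sister network on its own and confirm that it maps a 28×28×1 input to a 48-d embedding (48 is simply the default value of embeddingDim above):

# hypothetical check: build the sister network and inspect the embedding size
sisterNetwork = build_siamese_model((28, 28, 1))
print(sisterNetwork.output_shape)  # expected: (None, 48)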
Implementing contrastive loss with Keras and TensorFlow With our helper utilities and model architecture implemented, we can move on to defining the contrastive_loss function in Keras/TensorFlow. For reference, here is the equation for the contrastive loss function that we’ll be implementing in Keras/TensorFlow code: Figure 6: Implementing the contrastive loss function with Keras and TensorFlow.
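Written out in a form consistent with the implementation below, the loss is:

L = \frac{1}{N} \sum_{i=1}^{N} \Big[ Y_i \, D_i^2 + (1 - Y_i) \, \max(m - D_i, 0)^2 \Big]

where Y_i is the ground-truth pair label (1 for a positive pair, 0 for a negative pair), D_i is the Euclidean distance the siamese network outputs for the i-th pair, m is the margin, and N is the number of pairs in the batch.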
The full implementation of contrastive loss is concise, spanning only 18 lines, including comments: # import the necessary packages import tensorflow.keras.backend as K import tensorflow as tf def contrastive_loss(y, preds, margin=1): # explicitly cast the true class label data type to the predicted # class label data type (otherwise we run the risk of having two # separate data types, causing TensorFlow to error out) y = tf.cast(y, preds.dtype) # calculate the contrastive loss between the true labels and # the predicted labels squaredPreds = K.square(preds) squaredMargin = K.square(K.maximum(margin - preds, 0)) loss = K.mean(y * squaredPreds + (1 - y) * squaredMargin) # return the computed contrastive loss to the calling function return loss Line 5 defines our contrastive_loss function, which accepts three arguments, two of which are required and the third optional: y: The ground-truth labels from our dataset. A value of 1 indicates that the two images in the pair are of the same class, while a value of 0 indicates that the images belong to two different classes. preds: The predictions from our siamese network (i.e., distances between the image pairs). margin: Margin used for the contrastive loss function (typically this value is set to 1). Line 9 ensures our ground-truth labels are of the same data type as our preds. Failing to perform this explicit casting may result in TensorFlow erroring out when we try to perform mathematical operations on y and preds. We then proceed to compute the contrastive loss by: Taking the square of the preds (Line 13) Computing the squaredMargin, which is the square of the maximum value of either 0 or margin - preds (Line 14) Computing the final loss (Line 15) The computed contrastive loss value is then returned to the calling function. I suggest you review the “What is contrastive loss? And how can contrastive loss be used to train siamese networks?” section above and compare our implementation to the equation so you can better understand how contrastive loss is implemented.
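As a quick, hypothetical sanity check (not part of the downloadable code), you can call the function on a couple of toy values and verify the result by hand — the import path below assumes you run the snippet from the project root described earlier:

# toy sanity check for contrastive_loss
from pyimagesearch.metrics import contrastive_loss
import tensorflow as tf

# one positive pair (y=1) at distance 0.2 and one negative pair (y=0) at distance 0.4
y = tf.constant([[1.0], [0.0]])
d = tf.constant([[0.2], [0.4]])

# positive term: 1 * 0.2^2 = 0.04; negative term: max(1 - 0.4, 0)^2 = 0.36; mean = 0.2
print(contrastive_loss(y, d).numpy())  # ~0.2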
Creating our contrastive loss training script We are now ready to implement our training script! This script is responsible for: (1) loading the MNIST digits dataset from disk, (2) preprocessing it and constructing image pairs, (3) instantiating the siamese neural network architecture, (4) training the siamese network with contrastive loss, and (5) serializing both the trained network and training history plot to disk. The majority of this code is identical to our previous post on Siamese networks with Keras, TensorFlow, and Deep Learning, so while I’m still going to cover our implementation in full, I’m going to defer a detailed discussion to the previous post (while, of course, pointing out the details along the way). Open up the train_contrastive_siamese_network.py file in your project directory structure, and let’s get to work: # import the necessary packages from pyimagesearch.siamese_network import build_siamese_model from pyimagesearch import metrics from pyimagesearch import config from pyimagesearch import utils from tensorflow.keras.models import Model from tensorflow.keras.layers import Dense from tensorflow.keras.layers import Input from tensorflow.keras.layers import Lambda from tensorflow.keras.datasets import mnist import numpy as np Lines 2-11 import our required Python packages. Note how we are importing the metrics submodule of pyimagesearch, which contains our contrastive_loss implementation. From there we can load the MNIST dataset from disk: # load MNIST dataset and scale the pixel values to the range of [0, 1] print("[INFO] loading MNIST dataset...") (trainX, trainY), (testX, testY) = mnist.load_data() trainX = trainX / 255.0 testX = testX / 255.0 # add a channel dimension to the images trainX = np.expand_dims(trainX, axis=-1) testX = np.expand_dims(testX, axis=-1) # prepare the positive and negative pairs print("[INFO] preparing positive and negative pairs...") (pairTrain, labelTrain) = utils.make_pairs(trainX, trainY) (pairTest, labelTest) = utils.make_pairs(testX, testY) Line 15 loads the MNIST dataset with the pre-supplied training and testing splits. We then preprocess the dataset by scaling the input pixel intensities in the images from the range [0, 255] to [0, 1] (Lines 16 and 17), adding a channel dimension (Lines 20 and 21), and constructing our image pairs (Lines 25 and 26). Next, we can instantiate the siamese network architecture: # configure the siamese network print("[INFO] building siamese network...") imgA = Input(shape=config. IMG_SHAPE) imgB = Input(shape=config. IMG_SHAPE) featureExtractor = build_siamese_model(config. IMG_SHAPE) featsA = featureExtractor(imgA) featsB = featureExtractor(imgB) # finally, construct the siamese network distance = Lambda(utils.euclidean_distance)([featsA, featsB]) model = Model(inputs=[imgA, imgB], outputs=distance) Lines 30-34 create our sister networks: We start by creating two inputs, one for each image in the image pair (Lines 30 and 31). We then build the sister network architecture, which acts as our feature extractor (Line 32). Each image in the pair will be passed through our feature extractor, resulting in a vector that quantifies each image (Lines 33 and 34). Using the 48-d vector generated by the sister networks, we proceed to compute the euclidean_distance between our two vectors (Line 37) — this distance serves as our output from the siamese network: The smaller the distance is, the more similar the two images are.
The larger the distance is, the less similar the images are. Line 38 defines the model by specifying imgA and imgB, our two images in the image pair, as inputs, and our distance layer as the output. Finally, we can train our siamese network using contrastive loss: # compile the model print("[INFO] compiling model...") model.compile(loss=metrics.contrastive_loss, optimizer="adam") # train the model print("[INFO] training model...") history = model.fit( [pairTrain[:, 0], pairTrain[:, 1]], labelTrain[:], validation_data=([pairTest[:, 0], pairTest[:, 1]], labelTest[:]), batch_size=config. BATCH_SIZE, epochs=config. EPOCHS) # serialize the model to disk print("[INFO] saving siamese model...") model.save(config. MODEL_PATH) # plot the training history print("[INFO] plotting training history...") utils.plot_training(history, config. PLOT_PATH) Line 42 compiles our model architecture using the contrastive_loss function. We then proceed to train the model using our training/validation image pairs (Lines 46-50) and then serialize the model to disk (Line 54) and plot the training history (Line 58). Training a siamese network with contrastive loss We are now ready to train our siamese neural network with contrastive loss using Keras and TensorFlow. Make sure you use the “Downloads” section of this guide to download the source code, helper utilities, and contrastive loss implementation.
From there, you can execute the following command: $ python train_contrastive_siamese_network.py [INFO] loading MNIST dataset... [INFO] preparing positive and negative pairs... [INFO] building siamese network... [INFO] compiling model... [INFO] training model... Epoch 1/100 1875/1875 [==============================] - 81s 43ms/step - loss: 0.2038 - val_loss: 0.1755 Epoch 2/100 1875/1875 [==============================] - 80s 43ms/step - loss: 0.1756 - val_loss: 0.1571 Epoch 3/100 1875/1875 [==============================] - 80s 43ms/step - loss: 0.1619 - val_loss: 0.1394 Epoch 4/100 1875/1875 [==============================] - 81s 43ms/step - loss: 0.1548 - val_loss: 0.1356 Epoch 5/100 1875/1875 [==============================] - 81s 43ms/step - loss: 0.1501 - val_loss: 0.1262 ... Epoch 96/100 1875/1875 [==============================] - 81s 43ms/step - loss: 0.1264 - val_loss: 0.1066 Epoch 97/100 1875/1875 [==============================] - 80s 43ms/step - loss: 0.1262 - val_loss: 0.1100 Epoch 98/100 1875/1875 [==============================] - 82s 44ms/step - loss: 0.1262 - val_loss: 0.1078 Epoch 99/100 1875/1875 [==============================] - 81s 43ms/step - loss: 0.1268 - val_loss: 0.1067 Epoch 100/100 1875/1875 [==============================] - 80s 43ms/step - loss: 0.1261 - val_loss: 0.1107 [INFO] saving siamese model... [INFO] plotting training history... Figure 7: Training our siamese network with contrastive loss. Each epoch took ~80 seconds on my 3 GHz Intel Xeon W processor. Training would be even faster with a GPU. Our training history can be seen in Figure 7. Notice how our validation loss is actually lower than our training loss, a phenomenon that I discuss in this tutorial. Having our validation loss lower than our training loss implies that we can “train harder” to improve our siamese network accuracy, typically by relaxing regularization constraints, deepening the model, and using a more aggressive learning rate. But for now, our training model is more than sufficient. Implementing our contrastive loss test script The final script we need to implement is test_contrastive_siamese_network.py. This script is essentially identical to the one covered in our previous tutorial on Comparing images for similarity using siamese networks, Keras, and TensorFlow, so while I’ll still cover the script in its entirety today, I’ll defer a detailed discussion to my previous guide. Let’s get started: # import the necessary packages from pyimagesearch import config from pyimagesearch import utils from tensorflow.keras.models import load_model from imutils.paths import list_images import matplotlib.pyplot as plt import numpy as np import argparse import cv2 Lines 2-9 import our required Python packages.
We’ll be using load_model to load our serialized siamese network from disk. The list_images function will be used to grab image paths and facilitate building sample image pairs. Let’s move on to our command line arguments: # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-i", "--input", required=True, help="path to input directory of testing images") args = vars(ap.parse_args()) The only command line argument we need is --input, the path to our directory containing sample images we want to build pairs from (i.e., the examples directory in our project directory). Speaking of building image pairs, let’s do that now: # grab the test dataset image paths and then randomly generate a # total of 10 image pairs print("[INFO] loading test dataset...") testImagePaths = list(list_images(args["input"])) np.random.seed(42) pairs = np.random.choice(testImagePaths, size=(10, 2)) # load the model from disk print("[INFO] loading siamese model...") model = load_model(config. MODEL_PATH, compile=False) Line 20 grabs the paths to all images in our --input directory. We then randomly generate a total of 10 pairs of images (Line 22). Line 26 loads our trained siamese network from disk. With the siamese network loaded from disk, we can now compare images: # loop over all image pairs for (i, (pathA, pathB)) in enumerate(pairs): # load both the images and convert them to grayscale imageA = cv2.imread(pathA, 0) imageB = cv2.imread(pathB, 0) # create a copy of both the images for visualization purposes origA = imageA.copy() origB = imageB.copy() # add a channel dimension to both the images imageA = np.expand_dims(imageA, axis=-1) imageB = np.expand_dims(imageB, axis=-1) # add a batch dimension to both images imageA = np.expand_dims(imageA, axis=0) imageB = np.expand_dims(imageB, axis=0) # scale the pixel values to the range of [0, 1] imageA = imageA / 255.0 imageB = imageB / 255.0 # use our siamese model to make predictions on the image pair, # indicating whether or not the images belong to the same class preds = model.predict([imageA, imageB]) proba = preds[0][0] Line 29 loops over all pairs. For each pair, we: (1) load the two images from disk (Lines 31 and 32), (2) clone the images such that we can visualize/draw on them (Lines 35 and 36), (3) add a channel dimension to both images, a requirement for inference (Lines 39 and 40), (4) add a batch dimension to the images, again a requirement for inference (Lines 43 and 44), and (5) scale the pixel intensities from the range [0, 255] to [0, 1], just like we did during training. The image pairs are then passed through our siamese network on Lines 52 and 53, resulting in the computed Euclidean distance between the vectors generated by the sister networks.
Again, keep in mind that the smaller the distance is, the more similar the two images are. Conversely, the larger the distance, the less similar the images are. The final code block handles visualizing the two images in the pair along with their computed distance: # initialize the figure fig = plt.figure("Pair #{}".format(i + 1), figsize=(4, 2)) plt.suptitle("Distance: {:.2f}".format(proba)) # show first image ax = fig.add_subplot(1, 2, 1) plt.imshow(origA, cmap=plt.cm.gray) plt.axis("off") # show the second image ax = fig.add_subplot(1, 2, 2) plt.imshow(origB, cmap=plt.cm.gray) plt.axis("off") # show the plot plt.show() Congratulations on implementing an inference script for siamese networks! For more details on this implementation, refer to my previous tutorial, Comparing images for similarity using siamese networks, Keras, and TensorFlow. Making predictions using our siamese network with contrastive loss model Let’s put our test_contrastive_siamese_network.py script to work. Make sure you use the “Downloads” section of this tutorial to download the source code, pre-trained model, and example images. From there, you can run the following command: $ python test_contrastive_siamese_network.py --input examples [INFO] loading test dataset... [INFO] loading siamese model... Figure 8: Results of applying our siamese network inference script. Image pairs with smaller distances are considered to belong to the same class, while image pairs with larger distances belong to different classes. Looking at Figure 8, you’ll see that we have sets of example image pairs presented to our siamese network trained with contrastive loss. Images that are of the same class have lower distances while images of different classes have larger distances.
You can thus set a threshold value, T, to act as a cutoff on distance. If the computed distance, D, is < T, then the image pair must belong to the same class. Otherwise, if D >= T, then the images belong to different classes. Setting the threshold T should be done empirically through experimentation: Train the network. Compute distances for image pairs. Manually visualize the pairs and their corresponding distances. Find a cutoff value that maximizes correct classifications and minimizes incorrect ones. In this case, setting T=0.16 would be an appropriate threshold, since it allows us to correctly mark all image pairs that belong to the same class, while all image pairs of different classes are treated as such. A short sketch of what that thresholding step could look like in code follows below.
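To make that idea concrete, here is a minimal, hypothetical sketch of the thresholding step. It is not part of the tutorial’s downloadable code and assumes the trained model plus the pairTest and labelTest arrays from the training script are available in the same session; T = 0.16 is simply the empirical cutoff discussed above:

# hypothetical sketch: threshold siamese distances into same/different-class decisions
T = 0.16
distances = model.predict([pairTest[:, 0], pairTest[:, 1]])
predictedSame = (distances.flatten() < T).astype("int")
accuracy = (predictedSame == labelTest.flatten()).mean()
print("[INFO] accuracy at T={}: {:.4f}".format(T, accuracy))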
Summary In this tutorial, you learned about contrastive loss, including how it’s a better loss function than binary cross-entropy for training siamese networks. What you need to keep in mind here is that a siamese network isn’t specifically designed for classification. Instead, it’s utilized for differentiation, meaning that it should not only be able to tell if an image pair belongs to the same class or not but whether the two images are identical/similar or not. Contrastive loss works far better in this situation. I recommend you experiment with both binary cross-entropy and contrastive loss when training your own siamese neural networks, but I think you’ll find that overall, contrastive loss does a much better job.
https://pyimagesearch.com/2021/01/19/crop-image-with-opencv/
In this tutorial, you will learn how to crop images using OpenCV. As the name suggests, cropping is the act of selecting and extracting the Region of Interest (or simply, ROI) — the part of the image in which we are interested. For instance, in a face detection application, we may want to crop the face from an image. And if we were developing a Python script to recognize dogs in images, we may want to crop the dog from the image once we have found it. We already utilized cropping in our tutorial, Getting and setting pixels with OpenCV, but we’ll review it again for completeness. To learn how to crop images with OpenCV, just keep reading. Crop Image with OpenCV In the first part of this tutorial, we’ll discuss how we represent OpenCV images as NumPy arrays. Since each image is a NumPy array, we can leverage NumPy array slicing to crop an image. From there, we’ll configure our development environment and review our project directory structure.
I’ll then demonstrate how simple it is to crop images with OpenCV! Understanding image cropping with OpenCV and NumPy array slicing Figure 1: We accomplish image cropping by using NumPy array slicing (image source). When we crop an image, we want to remove the outer parts of the image we are not interested in. We commonly refer to this process as selecting our Region of Interest, or more simply, our ROI. We can accomplish image cropping by using NumPy array slicing. Let’s start by initializing a NumPy list with values ranging from [0, 24]: >>> import numpy as np >>> I = np.arange(0, 25) >>> I array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]) >>> And let’s now reshape this 1D list into a 2D matrix, pretending that it is an image: >>> I = I.reshape((5, 5)) >>> I array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) >>> Now, let’s suppose I want to extract the “pixels” starting at x = 0, y = 0 and ending at x = 2, y = 3. Doing so can be accomplished using the following code: >>> I[0:3, 0:2] array([[ 0, 1], [ 5, 6], [10, 11]]) >>> Notice how we have extracted three rows (y = 3) and two columns (x = 2). Now, let’s extract the pixels starting at x = 1, y = 3 and ending at x = 5 and y = 5: >>> I[3:5, 1:5] array([[16, 17, 18, 19], [21, 22, 23, 24]]) >>> This result provides the final two rows of the image, minus the first column. Are you noticing a pattern here? When applying NumPy array slicing to images, we extract the ROI using the following syntax: roi = image[startY:endY, startX:endX] The startY:endY slice provides our rows (since the y-axis is our number of rows) while startX:endX provides our columns (since the x-axis is the number of columns) in the image.
Take a second now to convince yourself that the above statement is true. But if you’re a bit more confused and need more convincing, don’t worry! I’ll show you some code examples later in this guide to make image cropping with OpenCV more clear and concrete for you. Configuring your development environment To follow this guide, you need to have the OpenCV library installed on your system. Luckily, OpenCV is pip-installable: $ pip install opencv-contrib-python If you need help configuring your development environment for OpenCV, I highly recommend that you read my pip install OpenCV guide — it will have you up and running in a matter of minutes.
Project structure Before we can implement image cropping with OpenCV, let’s first review our project directory structure. Start by using the “Downloads” section of this guide to access the source code and example images: $ tree . --dirsfirst . ├── adrian.png └── opencv_crop.py 0 directories, 2 files We only have a single Python script to review today, opencv_crop.py, which will load the input adrian.png image from disk and then crop out the face and body from the image using NumPy array slicing. Implementing image cropping with OpenCV We are now ready to implement image cropping with OpenCV. Open the opencv_crop.py file in your project directory structure and insert the following code: # import the necessary packages import argparse import cv2 # construct the argument parser and parse the arguments ap = argparse.
ArgumentParser() ap.add_argument("-i", "--image", type=str, default="adrian.png", help="path to the input image") args = vars(ap.parse_args()) Lines 2 and 3 import our required Python packages while Lines 6-9 parse our command line arguments. We only need one command line argument, --image, which is the path to the input image we wish to crop. For this example, we’ll default the --image switch to the adrian.png file in our project directory. Next, let’s load our image from disk: # load the input image and display it to our screen image = cv2.imread(args["image"]) cv2.imshow("Original", image) # cropping an image with OpenCV is accomplished via simple NumPy # array slices in startY:endY, startX:endX order -- here we are # cropping the face from the image (these coordinates were # determined using photo editing software such as Photoshop, # GIMP, Paint, etc.) face = image[85:250, 85:220] cv2.imshow("Face", face) cv2.waitKey(0) Lines 12 and 13 load our original image and then display it to our screen: Figure 3: The original input image that we will be cropping using OpenCV. Our goal here is to extract my face and body from the region using simple cropping methods. We would normally apply object detection techniques to detect my face and body in the image. However, since we are still relatively early in our OpenCV education course, we will use our a priori knowledge of the image and manually supply the NumPy array slices where the body and face reside. Again, we can, of course, use object detection methods to detect and extract faces from images automatically, but for the time being, let’s keep things simple. We extract my face from the image on a single line of code (Line 20).
We are supplying NumPy array slices to extract a rectangular region of the image, starting at (85, 85) and ending at (220, 250). The order in which we supply the indexes to the crop may seem counterintuitive; however, remember that OpenCV represents images as NumPy arrays with the height first (# of rows) and the width second (# of columns). To perform our cropping, NumPy expects four indexes: Start y: The starting y-coordinate. In this case, we start at y = 85. End y: The ending y-coordinate. We will end our crop at y = 250. Start x: The starting x-coordinate of the slice. We start the crop at x = 85. End x: The ending x-axis coordinate of the slice. Our slice ends at x = 220. We can see the result of cropping my face below: Figure 4: Cropping the face using OpenCV. Similarly, we can crop my body from the image: # apply another image crop, this time extracting the body body = image[90:450, 0:290] cv2.imshow("Body", body) cv2.waitKey(0) Cropping my body is accomplished by starting the crop from coordinates (0, 90) and ending at (290, 450) of the original image. Below you can see the output of cropping with OpenCV: Figure 5: Cropping the body from the image using OpenCV.
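As a further, hypothetical illustration (not part of opencv_crop.py), you do not have to hard-code crop coordinates — they can be computed from the image dimensions instead. For example, to grab a 100 x 100 region around the center of the image:

# compute the image center and crop a 100x100 region around it
(h, w) = image.shape[:2]
(cX, cY) = (w // 2, h // 2)
center = image[cY - 50:cY + 50, cX - 50:cX + 50]
cv2.imshow("Center Crop", center)
cv2.waitKey(0)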
While simple, cropping is an extremely important skill that we will utilize throughout this series. If you are still feeling uneasy with cropping, definitely take the time to practice now and hone your skills. From here on, cropping will be an assumed skill that you will need to understand! OpenCV image cropping results To crop images with OpenCV, be sure you have gone to the “Downloads” section of this tutorial to access the source code and example images. From there, open a shell and execute the following command: $ python opencv_crop.py Your cropping output should match mine from the previous section.
Summary In this tutorial, you learned how to crop an image using OpenCV. Since OpenCV represents images as NumPy arrays, cropping is as simple as supplying the crop’s starting and ending ranges as a NumPy array slice. All you need to do is remember the following syntax: cropped = image[startY:endY, startX:endX] As long as you remember the order in which to supply the starting and ending (x, y)-coordinates, cropping images with OpenCV is a breeze!
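One small usage note that is not part of the tutorial script: because a crop is just a regular NumPy array, you can write it to disk like any other image (the output filename here is hypothetical):

# save the cropped face ROI as its own image file
face = image[85:250, 85:220]
cv2.imwrite("face_crop.png", face)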
https://pyimagesearch.com/2021/01/19/image-arithmetic-opencv/
In this tutorial, you will learn how to perform image arithmetic (addition and subtraction) with OpenCV. Remember way, way back when you studied how to add and subtract numbers in grade school? Well, it turns out, performing arithmetic with images is quite similar — with only a few caveats, of course. In this blog post, you’ll learn how to add and subtract images, along with two important differences you need to understand regarding arithmetic operations in OpenCV and Python. To learn how to perform image arithmetic with OpenCV, just keep reading. Image Arithmetic OpenCV In the first part of this guide, we’ll discuss what image arithmetic is, including where you see image arithmetic in real-world applications. From there, we’ll configure our development environment and review our project directory structure. I’ll then show you two ways to perform image arithmetic: The first way is to use OpenCV’s cv2.add and cv2.subtract. The second way is to use NumPy’s basic addition and subtraction operators. There are very important caveats you need to understand between the two, so be sure you pay attention as you review this tutorial! What is image arithmetic?
Image arithmetic is simply matrix addition (with an added caveat on data types, which we’ll explain later). Let’s take a second and review some very basic linear algebra. Suppose we were to add two small matrices of the same shape — say, for illustration, [[9, 3], [4, 5]] and [[1, 2], [6, 7]]. What would the output of the matrix addition be? The answer is simply the element-wise sum of the matrix entries: [[10, 5], [10, 12]]. Pretty simple, right? So it’s obvious at this point that we all know basic arithmetic operations like addition and subtraction. But when working with images, we need to keep in mind the numerical limits of our color space and data type. For example, RGB images have pixels that fall within the range [0, 255]. What happens if we examine a pixel with an intensity of 250 and try to add 10 to it? Under normal arithmetic rules, we would end up with a value of 260. However, since we represent RGB images as 8-bit unsigned integers that can only take on values in the range [0, 255], 260 is not a valid value.
So what should happen? Should we perform a check of some sort to ensure no pixel falls outside the range of [0, 255], thus clipping all pixels to have a minimum value of 0 and a maximum value of 255? Or do we apply a modulus operation and “wrap around” (which is what NumPy does)? Under modulus rules, adding 10 to 255 would simply wrap around to a value of 9. Which way is the “correct” way to handle image additions and subtractions that fall outside the range of [0, 255]? The answer is that there is no “correct way” — it simply depends on how you are manipulating your pixels and what you want the desired results to be. However, be sure to keep in mind that there is a difference between OpenCV and NumPy addition. NumPy will perform modulus arithmetic and “wrap around.” On the other hand, OpenCV will perform clipping and ensure pixel values never fall outside the range [0, 255]. But don’t worry!
These nuances will become more clear as we explore some code below. What is image arithmetic used for? Figure 1: Image arithmetic is applied to create functions that can adjust brightness and contrast, apply alpha blending and transparency, and create Instagram-like filters. Now that we understand the basics of image arithmetic, you may be wondering where we would use image arithmetic in the real world. Basic examples include: adjusting brightness and contrast by adding or subtracting a set amount (for example, adding 50 to all pixel values to increase the brightness of an image), working with alpha blending and transparency, as we do in this tutorial, and creating Instagram-like filters — these filters are simply mathematical functions applied to the pixel intensities. While you may be tempted to quickly gloss over this guide on image arithmetic and move on to more advanced topics, I strongly encourage you to read this tutorial in detail. While simplistic, image arithmetic is used in many computer vision and image processing applications (whether you realize it or not). Configuring your development environment To follow this guide, you need to have the OpenCV library installed on your system. Luckily, OpenCV is pip-installable: $ pip install opencv-contrib-python If you need help configuring your development environment for OpenCV, I highly recommend that you read my pip install OpenCV guide — it will have you up and running in a matter of minutes.
Project structure Ready to learn the fundamentals of image arithmetic with OpenCV? Great, let’s get going. Start by using the “Downloads” section of this tutorial to access the source code and example images: $ tree .
--dirsfirst . ├── grand_canyon.png └── image_arithmetic.py 0 directories, 2 files Our image_arithmetic.py file will demonstrate the differences/caveats between addition and subtraction operations in OpenCV versus NumPy. You’ll then learn how to manually adjust the brightness of an image, grand_canyon.png, using image arithmetic with OpenCV. Implementing image arithmetic with OpenCV We are now ready to explore image arithmetic with OpenCV and NumPy. Open the image_arithmetic.py file in your project folder, and let’s get started: # import the necessary packages import numpy as np import argparse import cv2 # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-i", "--image", type=str, default="grand_canyon.png", help="path to the input image") args = vars(ap.parse_args()) Lines 2-4 import our required Python packages. Notice how we are importing NumPy for numerical array processing. Lines 7-10 then parse our command line arguments. We need only a single switch here, --image, which points to the image on disk where we’ll be applying image arithmetic operations. We’ll default the image path to the grand_canyon.png image on disk, but you can easily update the switch if you wish to use your own image(s).
Remember how I mentioned the difference between OpenCV and NumPy arithmetic above? Well, now we are going to explore it further and provide a concrete example to ensure we fully understand it: # images are NumPy arrays stored as unsigned 8-bit integers (uint8) # with values in the range [0, 255]; when using the add/subtract # functions in OpenCV, these values will be *clipped* to this range, # even if they fall outside the range [0, 255] after applying the # operation added = cv2.add(np.uint8([200]), np.uint8([100])) subtracted = cv2.subtract(np.uint8([50]), np.uint8([100])) print("max of 255: {}".format(added)) print("min of 0: {}".format(subtracted)) On Line 17, we define two NumPy arrays that are 8-bit unsigned integers. The first array has one element: a value of 200. The second array also has only one element, but with a value of 100. We then use OpenCV’s cv2.add method to add the values together. What do you think the output is going to be? According to standard arithmetic rules, we would think the result should be 300, but remember that we are working with 8-bit unsigned integers that only have a range between [0, 255]. Since we are using the cv2.add method, OpenCV takes care of clipping for us and ensures that the addition produces a maximum value of 255. When we execute this code, we can see the result on the first line in the listing below: max of 255: [[255]] Sure enough, the addition returned a value of 255. Line 20 then performs subtraction using cv2.subtract.
Again, we define two NumPy arrays, each with a single element, and of the 8-bit unsigned integer data type. The first array has a value of 50 and the second a value of 100. According to our arithmetic rules, the subtraction should return a value of -50; however, OpenCV once again performs clipping for us. We find that the value is clipped to a value of 0. Our output below verifies this: min of 0: [[0]] Subtracting 100 from 50 using cv2.subtract returns a value of 0. But what happens if we use NumPy to perform the arithmetic instead of OpenCV? Let’s explore that now: # using NumPy arithmetic operations (rather than OpenCV operations) # will result in a modulo ("wrap around") instead of being clipped # to the range [0, 255] added = np.uint8([200]) + np.uint8([100]) subtracted = np.uint8([50]) - np.uint8([100]) print("wrap around: {}".format(added)) print("wrap around: {}".format(subtracted)) First, we define two NumPy arrays, each with a single element, and of the 8-bit unsigned integer data type. The first array has a value of 200, and the second has a value of 100. If we use the cv2.add function, our addition would be clipped and a value of 255 returned; however, NumPy does not perform clipping — it instead performs modulo arithmetic and “wraps around.” Once a value of 255 is reached, NumPy wraps around to zero and then starts counting up again until 100 steps have been reached.
You can see this is true via the first line of output below: wrap around: [44] Line 26 defines two more NumPy arrays: one has a value of 50 and the other 100. When using the cv2.subtract method, this subtraction would be clipped to return a value of 0; however, we know that NumPy performs modulo arithmetic rather than clipping. Instead, once 0 is reached during the subtraction, the modulo operation wraps around and starts counting backward from 255 — we can verify this from the output below: wrap around: [206] It is important to keep your desired output in mind when performing integer arithmetic: Do you want all values to be clipped if they fall outside the range [0, 255]? Then use OpenCV’s built-in methods for image arithmetic. Do you want modulus arithmetic operations and have values wrap around if they fall outside the range of [0, 255]? Then simply add and subtract the NumPy arrays as you usually would. Now that we have explored the caveats of image arithmetic in OpenCV and NumPy, let’s perform the arithmetic on actual images and view the results: # load the original input image and display it to our screen image = cv2.imread(args["image"]) cv2.imshow("Original", image) We start on Lines 31 and 32 by loading our original input image from disk and then displaying it to our screen: Figure 3: Our original image loaded from disk. With our image loaded from disk, let’s proceed to increasing the brightness: # increasing the pixel intensities in our input image by 100 is # accomplished by constructing a NumPy array that has the *same # dimensions* as our input image, filling it with ones, multiplying # it by 100, and then adding the input image and matrix together M = np.ones(image.shape, dtype="uint8") * 100 added = cv2.add(image, M) cv2.imshow("Lighter", added) Line 38 defines a NumPy array of ones, with the same dimensions as our image. Again, we are sure to use 8-bit unsigned integers as our data type. To fill our matrix with values of 100s rather than 1s, we simply multiply our matrix of 1s by 100.
Finally, we use the cv2.add function to add our matrix of 100s to the original image, thus increasing every pixel intensity in the image by 100, but ensuring all values are clipped to the range [0, 255] if they attempt to exceed 255. The result of our operation can be seen below: Figure 4: Adding a value of 100 to every pixel value. Notice how the image now looks washed out. Notice how the image looks more “washed out” and is substantially brighter than the original. This is because we increase the pixel intensities by adding 100 to them and pushing them toward brighter colors. Let’s now darken our image by using cv2.subtract: # similarly, we can subtract 50 from all pixels in our image and make it # darker M = np.ones(image.shape, dtype="uint8") * 50 subtracted = cv2.subtract(image, M) cv2.imshow("Darker", subtracted) cv2.waitKey(0) Line 44 creates another NumPy array filled with 50s and then uses cv2.subtract to subtract 50 from each pixel in the image. Figure 5 shows the results of this subtraction: Figure 5: Subtracting a value of 50 from every pixel. Notice how the image now looks considerably darker. Our image now looks considerably darker than the original photo of the Grand Canyon. Pixels that were once white now look gray.
This is because we subtract 50 from the pixels and push them toward the darker regions of the RGB color space. OpenCV image arithmetic results To perform image arithmetic with OpenCV and NumPy, be sure you have gone to the “Downloads” section of this tutorial to access the source code and example images. From there, open a shell and execute the following command: $ python image_arithmetic.py max of 255: [[255]] min of 0: [[0]] wrap around: [44] wrap around: [206] Your output should match mine from the previous section.
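As a quick sanity check on the two “wrap around” values in that output (this arithmetic is implied by the discussion above rather than spelled out explicitly):

200 + 100 = 300, and 300 mod 256 = 44
50 - 100 = -50, and -50 mod 256 = 206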
Summary In this tutorial, we learned how to apply image addition and subtraction with OpenCV, two basic (but important) image arithmetic operations.
As we saw, image arithmetic operations are nothing more than basic matrix addition and subtraction. We also explored the peculiarities of image arithmetic using OpenCV and NumPy. Remember that OpenCV addition and subtraction clip values outside the range [0, 255] to fit inside the unsigned 8-bit integer range, whereas NumPy performs a modulus operation and “wraps around.” These caveats are important to keep in mind. Otherwise, you may get unwanted results when performing arithmetic operations on your images.
https://pyimagesearch.com/2021/01/19/opencv-bitwise-and-or-xor-and-not/
In this tutorial, you will learn how to apply bitwise AND, OR, XOR, and NOT with OpenCV. In our previous tutorial on Cropping with OpenCV, you learned how to crop and extract a Region of Interest (ROI) from an image. In that particular example, our ROI had to be rectangular . . . but what if you wanted to crop a non-rectangular region? What would you do then? The answer is to apply both bitwise operations and masking (we’ll discuss how to do that in our guide on image masking with OpenCV). For now, we’ll cover the basic bitwise operations — and in the next blog post, we’ll learn how to utilize these bitwise operations to construct masks. To learn how to apply bitwise operators with OpenCV, just keep reading. OpenCV Bitwise AND, OR, XOR, and NOT Before we get too far into this tutorial, I’m going to assume that you understand the four basic bitwise operators: AND, OR, XOR (exclusive OR), and NOT. If you’ve never worked with bitwise operators before, I suggest you read this excellent (and highly detailed) guide from RealPython.
While you don’t have to review that guide, I find that readers who understand the basics of applying bitwise operators to digits can quickly grasp bitwise operators applied to images. Regardless, computer vision and image processing are highly visual, and I’ve crafted the examples in this tutorial to ensure you understand how bitwise operators are applied to images with OpenCV. We’ll start this guide by configuring our development environment and then reviewing our project directory structure. From there, we’ll implement a Python script to perform the AND, OR, XOR, and NOT bitwise operators with OpenCV. We’ll conclude this guide with a discussion of our results. Configuring your development environment To follow this guide, you need to have the OpenCV library installed on your system. Luckily, OpenCV is pip-installable: $ pip install opencv-contrib-python If you need help configuring your development environment for OpenCV, I highly recommend that you read my pip install OpenCV guide — it will have you up and running in a matter of minutes.
Project structure Ready to learn how to apply bitwise operators using OpenCV? Great, let’s get started. Be sure to use the “Downloads” section of this guide to access the source code, and from there, take a look at our project directory structure: $ tree . --dirsfirst .
└── opencv_bitwise.py 0 directories, 1 file We have just a single script to review today, opencv_bitwise.py, which will apply the AND, OR, XOR, and NOT operators to example images. By the end of this guide, you’ll have a good understanding of how to apply bitwise operators with OpenCV. Implementing OpenCV AND, OR, XOR, and NOT bitwise operators In this section, we will review four bitwise operations: AND, OR, XOR, and NOT. While very basic and low level, these four operations are paramount to image processing — especially when working with masks later in this series. Bitwise operations function in a binary manner and are represented as grayscale images. A given pixel is turned “off” if it has a value of zero, and it is turned “on” if the pixel has a value greater than zero. Let’s proceed and jump into some code: # import the necessary packages import numpy as np import cv2 # draw a rectangle rectangle = np.zeros((300, 300), dtype="uint8") cv2.rectangle(rectangle, (25, 25), (275, 275), 255, -1) cv2.imshow("Rectangle", rectangle) # draw a circle circle = np.zeros((300, 300), dtype="uint8") cv2.circle(circle, (150, 150), 150, 255, -1) cv2.imshow("Circle", circle) The first few lines of code import the packages we will need: NumPy and our OpenCV bindings. We initialize our rectangle image as a 300 x 300 NumPy array on Line 6. We then draw a 250 x 250 white rectangle at the center of the image. Similarly, on Line 11, we initialize another image to contain our circle, which we draw on Line 12 again centered in the middle of the image, with a radius of 150 pixels.
Figure 2 displays our two shapes: Figure 2: Our initial input images onto which we’ll perform bitwise operations. If we consider these input images, we’ll see that they only have two pixel intensity values — either the pixel is 0 (black) or the pixel is greater than zero (white). We call images that only have two pixel intensity values binary images. Another way to think of binary images is like an on/off switch in our living room. Imagine each pixel in the 300 x 300 image is a light switch. If the switch is off, then the pixel has a value of zero. But if the switch is on, then the pixel has a value greater than zero. In Figure 2, we can see that the white pixels that comprise the rectangle and circle, respectively, all have pixel values that are on, whereas the surrounding pixels are off. Keep this notion of on/off as we demonstrate bitwise operations: # a bitwise 'AND' is only 'True' when both inputs have a value that # is 'ON' -- in this case, the cv2.bitwise_and function examines # every pixel in the rectangle and circle; if *BOTH* pixels have a # value greater than zero then the pixel is turned 'ON' (i.e., 255) # in the output image; otherwise, the output value is set to # 'OFF' (i.e., 0) bitwiseAnd = cv2.bitwise_and(rectangle, circle) cv2.imshow("AND", bitwiseAnd) cv2.waitKey(0) As I mentioned above, a given pixel is turned “on” if it has a value greater than zero, and it is turned “off” if it has a value of zero. Bitwise functions operate on these binary conditions.
To utilize bitwise functions, we assume (in most cases) that we are comparing two pixels (the only exception is the NOT function). We’ll compare each of the pixels and then construct our bitwise representation. Let’s quickly review our binary operations:
AND: A bitwise AND is true if and only if both pixels are greater than zero.
OR: A bitwise OR is true if either of the two pixels is greater than zero.
XOR: A bitwise XOR is true if and only if one of the two pixels is greater than zero, but not both.
NOT: A bitwise NOT inverts the “on” and “off” pixels in an image.
On Line 21, we apply a bitwise AND to our rectangle and circle images using the cv2.bitwise_and function. As the list above mentions, a bitwise AND is true if and only if both pixels are greater than zero. The output of our bitwise AND can be seen in Figure 3: Figure 3: Applying a bitwise AND with OpenCV. We can see that the corners of our square are lost — this makes sense because the AND keeps only the region where the two shapes overlap, and the corners of the rectangle fall outside the circle, so both pixels are not “on” there.
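To make the per-pixel behavior concrete before looking at the full shapes, here is a small illustrative sketch of my own (not part of the tutorial script) that applies the same four functions to a pair of 1 x 4 “images” containing every off/on combination:

import numpy as np
import cv2

# every combination of off/on pixel pairs: (0, 0), (0, 255), (255, 0), (255, 255)
a = np.array([[0, 0, 255, 255]], dtype="uint8")
b = np.array([[0, 255, 0, 255]], dtype="uint8")

print("AND:", cv2.bitwise_and(a, b).flatten())  # [  0   0   0 255]
print("OR: ", cv2.bitwise_or(a, b).flatten())   # [  0 255 255 255]
print("XOR:", cv2.bitwise_xor(a, b).flatten())  # [  0 255 255   0]
print("NOT:", cv2.bitwise_not(a).flatten())     # [255 255   0   0]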
Let’s now apply a bitwise OR: # a bitwise 'OR' examines every pixel in the two inputs, and if # *EITHER* pixel in the rectangle or circle is greater than 0, # then the output pixel has a value of 255, otherwise it is 0 bitwiseOr = cv2.bitwise_or(rectangle, circle) cv2.imshow("OR", bitwiseOr) cv2.waitKey(0) We apply a bitwise OR on Line 28 using the cv2.bitwise_or function. A bitwise OR is true if either of the two pixels is greater than zero. Take a look at the output of the bitwise OR in Figure 4: Figure 4: Applying a bitwise OR with OpenCV. In this case, our square and circle have been combined. Next is the bitwise XOR: # the bitwise 'XOR' is identical to the 'OR' function, with one # exception: the rectangle and circle are not allowed to *BOTH* # have values greater than 0 (at least one of the two must be 0) bitwiseXor = cv2.bitwise_xor(rectangle, circle) cv2.imshow("XOR", bitwiseXor) cv2.waitKey(0) We apply the bitwise XOR on Line 35 using the cv2.bitwise_xor function. An XOR operation is true if and only if one of the two pixels is greater than zero, but both pixels cannot be greater than zero. The output of the XOR operation is displayed in Figure 5: Figure 5: Applying a bitwise XOR with OpenCV. Here, we see that the center of the square has been removed. Again, this makes sense because an XOR operation cannot have both pixels greater than zero. Finally, we arrive at the bitwise NOT function: # finally, the bitwise 'NOT' inverts the values of the pixels; # pixels with a value of 255 become 0, and pixels with a value of 0 # become 255 bitwiseNot = cv2.bitwise_not(circle) cv2.imshow("NOT", bitwiseNot) cv2.waitKey(0) We apply a bitwise NOT on Line 42 using the cv2.bitwise_not function.
Essentially, the bitwise NOT function flips pixel values. All pixels that are greater than zero are set to zero, and all pixels that are equal to zero are set to 255: Figure 6: Applying a bitwise NOT with OpenCV. Notice how our circle has been inverted — initially, the circle was white on a black background, and now the circle is black on a white background. OpenCV bitwise AND, OR, XOR, and NOT results To perform bitwise operations with OpenCV, be sure to access the “Downloads” section of this tutorial to download the source code. From there, open a shell and execute the following command: $ python opencv_bitwise.py Your output should match mine from the previous section.
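If you would like to sanity-check these results numerically as well as visually, the following sketch of my own (not part of the downloaded script) rebuilds the two shapes and confirms that the XOR region is exactly the OR region with the AND region removed:

import numpy as np
import cv2

# rebuild the same binary shapes used in the tutorial
rectangle = np.zeros((300, 300), dtype="uint8")
cv2.rectangle(rectangle, (25, 25), (275, 275), 255, -1)
circle = np.zeros((300, 300), dtype="uint8")
cv2.circle(circle, (150, 150), 150, 255, -1)

bitwiseAnd = cv2.bitwise_and(rectangle, circle)
bitwiseOr = cv2.bitwise_or(rectangle, circle)
bitwiseXor = cv2.bitwise_xor(rectangle, circle)

# count the "on" pixels in each output
print("AND pixels:", np.count_nonzero(bitwiseAnd))
print("OR pixels: ", np.count_nonzero(bitwiseOr))
print("XOR pixels:", np.count_nonzero(bitwiseXor))

# for binary images, XOR = OR minus AND
assert np.count_nonzero(bitwiseXor) == \
    np.count_nonzero(bitwiseOr) - np.count_nonzero(bitwiseAnd)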
Summary In this tutorial, you learned how to perform bitwise AND, OR, XOR, and NOT using OpenCV. While bitwise operators may not seem useful by themselves, they’re necessary when you start working with alpha blending and masking, a concept that we’ll begin to discuss in another blog post. Take the time to practice and become familiar with bitwise operations now before proceeding.
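Since masking is where these operators really earn their keep, here is a brief preview, a sketch of my own under the assumption that you have some color image on disk (the path below is just an example), of the common idiom of passing a mask to cv2.bitwise_and:

import numpy as np
import cv2

# load any color image and build a circular mask of the same size
image = cv2.imread("opencv_logo.png")  # example path; substitute your own image
(h, w) = image.shape[:2]
mask = np.zeros((h, w), dtype="uint8")
cv2.circle(mask, (w // 2, h // 2), min(h, w) // 3, 255, -1)

# keep only the pixels where the mask is "on"
masked = cv2.bitwise_and(image, image, mask=mask)
cv2.imshow("Masked", masked)
cv2.waitKey(0)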
https://pyimagesearch.com/2021/01/20/opencv-getting-and-setting-pixels/
In this tutorial, you will learn how to get and set pixel values using OpenCV and Python. You will also learn:
What pixels are
How the image coordinate system works in OpenCV
How to access/get individual pixel values in an image
How to set/update pixels in an image
How to use array slicing to grab regions of an image
By the end of this tutorial, you will have a strong understanding of how to access and manipulate pixels in an image using OpenCV. To learn how to get and set pixels with OpenCV, just keep reading. OpenCV Getting and Setting Pixels In the first part of this tutorial, you will discover what pixels are (i.e., the building blocks of an image). We’ll also review the image coordinate system in OpenCV, including the proper notation to access individual pixel values. From there, we’ll configure our development environment and review our project directory structure. With our project directory structure reviewed, we’ll implement a Python script, opencv_getting_setting.py. As the name suggests, this allows us to access and manipulate pixels using OpenCV. We’ll wrap up this tutorial with a discussion of our results.
Let’s get started! What are pixels? Pixels are the raw building blocks of an image. Every image consists of a set of pixels. There is no finer granularity than the pixel. Normally, a pixel is considered the “color” or the “intensity” of light that appears in a given place in our image. If we think of an image as a grid, each square in the grid contains a single pixel. Let’s look at the example image in Figure 1: Figure 1: This image is 600 pixels wide and 450 pixels tall for a total of 600 x 450 = 270,000 pixels. Most pixels are represented in two ways: grayscale/single channel and color. In a grayscale image, each pixel has a value between 0 and 255, where 0 corresponds to “black” and 255 corresponds to “white.” The values between 0 and 255 are varying shades of gray, where values closer to 0 are darker and values closer to 255 are lighter: Figure 2: Image gradient demonstrating pixel values going from black (0) to white (255).
The grayscale gradient image in Figure 2 demonstrates darker pixels on the left-hand side and progressively lighter pixels on the right-hand side. Color pixels, however, are normally represented in the RGB color space — one value for the Red component, one for Green, and one for Blue, leading to a total of 3 values per pixel: Figure 3: The RGB cube. Other color spaces exist (HSV (Hue, Saturation, Value), L*a*b*, etc.), but let’s start with the basics and move our way up from there. Each of the three Red, Green, and Blue colors is represented by an integer in the range from 0 to 255, which indicates how “much” of the color there is. Given that the pixel value only needs to be in the range [0, 255], we normally use an 8-bit unsigned integer to represent each color intensity. We then combine these values into an RGB tuple in the form (red, green, blue). This tuple represents our color. To construct a white color, we would completely fill each of the red, green, and blue buckets, like this: (255, 255, 255) — since white is the presence of all colors. Then, to create a black color, we would completely empty each of the buckets: (0, 0, 0) — since black is the absence of color.
To create a pure red color, we would completely fill the red bucket (and only the red bucket): (255, 0, 0). Are you starting to see a pattern? Look at the following image to make this concept more clear: Figure 4: Here, we have four examples of colors and the “bucket” amounts for each of the Red, Green, and Blue components, respectively. In the top-left example, we have the color white — each of the Red, Green, and Blue buckets has been completely filled to form the white color. And on the top-right, we have the color black — the Red, Green, and Blue buckets are now totally empty. Similarly, to form the color red in the bottom-left, we simply fill the Red bucket completely, leaving the other Green and Blue buckets totally empty. Finally, blue is formed by filling only the Blue bucket, as demonstrated in the bottom-right. For your reference, here are some common colors represented as RGB tuples:
Black: (0, 0, 0)
White: (255, 255, 255)
Red: (255, 0, 0)
Green: (0, 255, 0)
Blue: (0, 0, 255)
Aqua: (0, 255, 255)
Fuchsia: (255, 0, 255)
Maroon: (128, 0, 0)
Navy: (0, 0, 128)
Olive: (128, 128, 0)
Purple: (128, 0, 128)
Teal: (0, 128, 128)
Yellow: (255, 255, 0)
Now that we have a good understanding of pixels, let’s have a quick review of the coordinate system. Overview of the image coordinate system in OpenCV As I mentioned in Figure 1, an image is represented as a grid of pixels. Imagine our grid as a piece of graph paper.
Using this graph paper, the point (0, 0) corresponds to the top-left corner of the image (i.e., the origin). As we move down and to the right, both the x and y-values increase. Let’s look at the image in Figure 5 to make this point more clear: Figure 5: In OpenCV, pixels are accessed by their (x, y)-coordinates. The origin, (0, 0), is located at the top-left of the image. OpenCV images are zero-indexed, where the x-values go left-to-right (column number) and y-values go top-to-bottom (row number). Here, we have the letter “I” on a piece of graph paper. We see that we have an 8 x 8 grid with 64 total pixels. The point at (0, 0) corresponds to the top-left pixel in our image, whereas the point (7, 7) corresponds to the bottom-right corner. It is important to note that we are counting from zero rather than one. The Python language is zero-indexed, meaning that we always start counting from zero.
Keep this in mind, and you will avoid a lot of confusion later on. Finally, the pixel 4 columns to the right and 5 rows down is indexed by the point (3, 4), keeping in mind that we are counting from zero rather than one. Configuring your development environment To follow this guide, you need to have the OpenCV library installed on your system. Luckily, OpenCV is pip-installable: $ pip install opencv-contrib-python If you need help configuring your development environment for OpenCV, I highly recommend that you read my pip install OpenCV guide — it will have you up and running in a matter of minutes.
Project structure Before we start looking at code, let’s review our project directory structure: $ tree . --dirsfirst . ├── adrian.png └── opencv_getting_setting.py 0 directories, 2 files We have a single Python script to review today, opencv_getting_setting.py, which will allow us to access and manipulate the image pixels from the image adrian.png. Getting and setting pixels with OpenCV Let’s learn how to get and set pixels with OpenCV. Open the opencv_getting_setting.py file in your project directory structure, and let’s get to work: # import the necessary packages import argparse import cv2 # construct the argument parser and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-i", "--image", type=str, default="adrian.png", help="path to the input image") args = vars(ap.parse_args()) Lines 2 and 3 import our required Python packages. We only need argparse for our command line arguments and cv2 for our OpenCV bindings.
The --image command line argument points to the image we want to manipulate residing on disk. By default, the --image command line argument is set to adrian.png. Next, let’s load this image and start accessing pixel values: # load the image, grab its spatial dimensions (width and height), # and then display the original image to our screen image = cv2.imread(args["image"]) (h, w) = image.shape[:2] cv2.imshow("Original", image) Lines 13-15 load our input image from disk, grab its width and height, and display the image to our screen: Figure 7: Loading our input image from disk and displaying it with OpenCV. Images in OpenCV are represented by NumPy arrays. To access a particular image pixel, all we need to do is pass in the (x, y)-coordinates as image[y, x]: # images are simply NumPy arrays -- with the origin (0, 0) located at # the top-left of the image (b, g, r) = image[0, 0] print("Pixel at (0, 0) - Red: {}, Green: {}, Blue: {}".format(r, g, b)) # access the pixel located at x=50, y=20 (b, g, r) = image[20, 50] print("Pixel at (50, 20) - Red: {}, Green: {}, Blue: {}".format(r, g, b)) # update the pixel at (50, 20) and set it to red image[20, 50] = (0, 0, 255) (b, g, r) = image[20, 50] print("Pixel at (50, 20) - Red: {}, Green: {}, Blue: {}".format(r, g, b)) Line 19 accesses the pixel located at (0, 0), which is the top-left corner of the image. In return, we receive the Blue, Green, and Red intensities (BGR), in that order. The big question is: Why does OpenCV represent images in BGR channel ordering rather than the standard RGB? The answer is that back when OpenCV was originally developed, BGR ordering was the standard! It was only later that the RGB order was adopted. The BGR ordering is standard in OpenCV, so get used to seeing it.
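One practical consequence of BGR ordering: if you hand an OpenCV image to a library that assumes RGB (matplotlib, for example), the red and blue channels will appear swapped. Here is a minimal sketch of my own showing the usual fix with cv2.cvtColor; it assumes matplotlib is installed, which is not required anywhere else in this tutorial:

import cv2
import matplotlib.pyplot as plt

# OpenCV loads images in BGR order
image = cv2.imread("adrian.png")

# reorder the channels so RGB-based libraries display the colors correctly
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

plt.imshow(rgb)
plt.axis("off")
plt.show()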
Line 23 then accesses the pixel located at x = 50, y = 20 using the array indexing of image[20, 50]. But wait . . . isn’t that backward? Shouldn’t it instead be image[50, 20] since x = 50 and y = 20? Not so fast! Let’s back up a step and consider that an image is simply a matrix with a width (number of columns) and height (number of rows). If we were to access an individual location in that matrix, we would denote it as the x value (column number) and y value (row number). Therefore, to access the pixel located at x = 50, y = 20, you pass the y-value first (the row number) followed by the x-value (the column number), resulting in image[y, x]. Note: I’ve found that the concept of accessing individual pixels with the syntax of image[y, x] is what trips up many students. Take a second to convince yourself that image[y, x] is the correct syntax based on the fact that the x-value is your column number (i.e., width), and the y-value is your row number (i.e., height).
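If the reversed order still feels error-prone, one option is to wrap the indexing in a pair of tiny helpers so the rest of your code can think in (x, y) terms. This is purely an illustrative sketch (the helper names are my own, not part of the tutorial code):

def get_pixel(image, x, y):
    # NumPy indexes rows first, so (x, y) becomes image[y, x]
    return image[y, x]

def set_pixel(image, x, y, value):
    # value is a BGR tuple for color images, e.g. (0, 0, 255) for red
    image[y, x] = value

# usage, assuming image was loaded with cv2.imread as above:
# (b, g, r) = get_pixel(image, 50, 20)
# set_pixel(image, 50, 20, (0, 0, 255))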
Lines 27 and 28 update the pixel located at x = 50, y = 20, setting it to red, which is (0, 0, 255) in BGR ordering. Line 29 then prints the updated pixel value to our terminal, thereby demonstrating that it has been updated. Next, let’s learn how to use NumPy array slicing to grab large chunks/regions of interest from an image: # compute the center of the image, which is simply the width and height # divided by two (cX, cY) = (w // 2, h // 2) # since we are using NumPy arrays, we can apply array slicing to grab # large chunks/regions of interest from the image -- here we grab the # top-left corner of the image tl = image[0:cY, 0:cX] cv2.imshow("Top-Left Corner", tl) On Line 33, we compute the center (x, y)-coordinates of the image. This is accomplished by simply dividing the width and height by two, ensuring integer conversion (since we cannot access “fractional pixel” locations). Then, on Line 38, we use simple NumPy array slicing to extract the region where x is in [0, cX) and y is in [0, cY). In fact, this region corresponds to the top-left corner of the image! To grab chunks of an image, NumPy expects we provide four indexes:
Start y: The first value is the starting y-coordinate. This is where our array slice will start along the y-axis. In our example above, our slice starts at y = 0.
End y: Just as we supplied a starting y-value, we must provide an ending y-value. Our slice stops along the y-axis when y = cY.
Start x: The third value we must supply is the starting x-coordinate for the slice. To grab the top-left region of the image, we start at x = 0.
End x: Lastly, we need to provide the x-axis value for our slice to stop. We stop when x = cX.
Once we have extracted the top-left corner of the image, Line 39 shows the cropping result. Notice how our image is just the top-left corner of our original image: Figure 8: Extracting the top-left corner of the image using array slicing. Let’s extend this example a little further so we can get some practice using NumPy array slicing to extract regions from images: # in a similar fashion, we can crop the top-right, bottom-right, and # bottom-left corners of the image and then display them to our # screen tr = image[0:cY, cX:w] br = image[cY:h, cX:w] bl = image[cY:h, 0:cX] cv2.imshow("Top-Right Corner", tr) cv2.imshow("Bottom-Right Corner", br) cv2.imshow("Bottom-Left Corner", bl) In a similar fashion to the example above, Line 44 extracts the top-right corner of the image, Line 45 extracts the bottom-right corner, and Line 46 the bottom-left. Finally, all four corners of the image are displayed on screen on Lines 47-49, like this: Figure 9: Using array slicing to extract the four corners of an image with OpenCV. Understanding NumPy array slicing is a very important skill that you will use time and time again as a computer vision practitioner. If you are unfamiliar with NumPy array slicing, I would suggest taking a few minutes and reading this page on the basics of NumPy indexes, arrays, and slicing. The last task we are going to do is use array slices to change the color of a region of pixels: # set the top-left corner of the original image to be green image[0:cY, 0:cX] = (0, 255, 0) # show our updated image cv2.imshow("Updated", image) cv2.waitKey(0) On Line 52, you can see that we are again accessing the top-left corner of the image; however, this time, we are setting this region to have a value of (0, 255, 0) (green). Lines 55 and 56 then show the results of our work: Figure 10: Setting the top-left corner of the image to be “green.” OpenCV pixel getting and setting results Let’s now learn how to get and set individual pixel values using OpenCV!
Be sure you have used the “Downloads” section of this tutorial to access the source code and example images. From there, you can execute the following command: $ python opencv_getting_setting.py --image adrian.png Pixel at (0, 0) - Red: 233, Green: 240, Blue: 246 Pixel at (50, 20) - Red: 229, Green: 238, Blue: 245 Pixel at (50, 20) - Red: 255, Green: 0, Blue: 0 Once our script starts running, you should see some output printed to your console. The first line of output tells us that the pixel located at (0, 0) has a value of R = 233, G = 240, and B = 246. The buckets for all three channels are nearly full, indicating that the pixel is very bright (nearly white). The next two lines of output show that we have successfully changed the pixel located at (50, 20) to be red rather than the (nearly) white color. You can refer to the images and screenshots from the “Getting and setting pixels with OpenCV” section for the image visualizations from each step of our image processing pipeline.
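As a small extension of the array slicing examples, you could wrap the four slice indexes in a helper that takes a top-left corner plus a width and height, which is often less error-prone than remembering the start/end ordering by hand. This is a sketch of my own, not part of the downloaded script, and the example coordinates are arbitrary:

def crop(image, x, y, w, h):
    # array slicing is image[startY:endY, startX:endX]
    return image[y:y + h, x:x + w]

# usage, assuming image is the adrian.png image loaded earlier:
# roi = crop(image, 50, 20, 100, 100)
# cv2.imshow("ROI", roi)
# cv2.waitKey(0)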
Summary In this tutorial, you learned how to get and set pixel values using OpenCV. You also learned about pixels, the building blocks of an image, along with the image coordinate system OpenCV uses. Unlike the coordinate system you studied in basic algebra, where the origin, denoted as (0, 0), is at the bottom-left, the origin for images is actually located at the top-left of the image. As the x-value increases, we go farther to the right of the image. And as the y-value increases, we go farther down the image.
https://pyimagesearch.com/2021/01/20/opencv-flip-image-cv2-flip/
In this tutorial, you will learn how to flip images using OpenCV and the cv2.flip function. Similar to image rotation, OpenCV also provides methods to flip an image across its x- or y-axis. Though flipping operations are used less often, they are still very valuable to learn — and for reasons that you may not consider off the top of your head. For example, let’s imagine working for a small startup company that wants to build a machine learning classifier to detect faces within images. We would need a dataset of example faces that our algorithm could use to “learn” what a face is. But unfortunately, the company has only provided us with a tiny dataset of 20 faces, and we don’t have the means to acquire more data. So what do we do? We apply flipping operations to augment our dataset! We can horizontally flip each face image (since a face is still a face, whether mirrored or not) and use these mirrored versions as additional training data. While this example sounds silly and contrived, it’s not.
Powerful, data-hungry deep learning algorithms purposely use flipping to generate extra data during training time (through a technique called data augmentation). So, as you see, the image processing techniques you’re learning here really are the building blocks for larger computer vision systems! To learn how to flip images with OpenCV and cv2.flip, just keep reading. OpenCV Flip Image (cv2.flip) In the first part of this tutorial, we’ll discuss what image flipping is and how OpenCV can help us flip images. From there, we’ll configure our development environment and review our project directory structure. We’ll then implement a Python script to perform image flipping with OpenCV. What is image flipping? We can flip an image around either the x-axis, y-axis, or even both. Flipping an image is better explained by viewing an image flip’s output before starting the code.
Check out Figure 1 to see an image flipped horizontally: Figure 1: Using OpenCV to flip an image horizontally. Notice how on the left, we have our original image, and on the right, the image has been mirrored horizontally. We can do the same vertically: Figure 2: Flipping an image vertically with OpenCV. And we can combine horizontal and vertical flips as well: Figure 3: Flipping an image both horizontally and vertically with OpenCV. Later in this tutorial, you’ll discover how to perform these image flipping operations with OpenCV. Configuring your development environment To follow this guide, you need to have the OpenCV library installed on your system. Luckily, OpenCV is pip-installable: $ pip install opencv-contrib-python If you need help configuring your development environment for OpenCV, I highly recommend that you read my pip install OpenCV guide — it will have you up and running in a matter of minutes.
Project structure Before reviewing any code for flipping an image with OpenCV, let’s first review our project directory structure. Be sure to access the “Downloads” section of this tutorial to retrieve the source code and example image. From there, take a peek at your project folder: $ tree . --dirsfirst .
├── opencv_flip.py └── opencv_logo.png 0 directories, 2 files Our opencv_flip.py script will load the opencv_logo.png image from disk and then demonstrate how to use the cv2.flip function to flip an image. Implementing image flipping with OpenCV Next on our list of image transformations to explore is flipping. We can flip an image around either the x- or y-axis, or even both. Flipping an image is better explained by viewing the output of an image flip before we get into the code. Check out Figure 5 to see an image flipped horizontally: Figure 5: Horizontal image flipping with OpenCV and cv2.flip. Now that you see what an image flip looks like, we can explore the code: # import the necessary packages import argparse import cv2 # construct the argument parser and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-i", "--image", type=str, default="opencv_logo.png", help="path to the input image") args = vars(ap.parse_args()) Lines 2 and 3 import our required Python packages while Lines 6-9 parse our command line arguments. We only need a single argument here, --image, which is the path to the input image we want to flip. We default this value to the opencv_logo.png image in our project directory. Let's now flip the image horizontally: # load the original input image and display it to our screen image = cv2.imread(args["image"]) cv2.imshow("Original", image) # flip the image horizontally print("[INFO] flipping image horizontally...") flipped = cv2.flip(image, 1) cv2.imshow("Flipped Horizontally", flipped) We start on Lines 12 and 13 by loading our input image from disk and displaying it to our screen.
Flipping an image horizontally is accomplished by making a call to the cv2.flip function on Line 17, the output of which is seen in Figure 5. The cv2.flip method requires two arguments: the image we want to flip and a specific code/flag used to determine how we flip the image. Using a flip code value of 1 indicates that we want to flip the image horizontally, around the y-axis. Specifying a flip code of 0 indicates that we want to flip the image vertically, around the x-axis: # flip the image vertically flipped = cv2.flip(image, 0) print("[INFO] flipping image vertically...") cv2.imshow("Flipped Vertically", flipped) Figure 6 displays the output of flipping an image vertically: Figure 6: Flipping an image vertically with OpenCV and cv2.flip. Finally, using a negative flip code flips the image around both axes. # flip the image along both axes flipped = cv2.flip(image, -1) print("[INFO] flipping image horizontally and vertically...") cv2.imshow("Flipped Horizontally & Vertically", flipped) cv2.waitKey(0) Here, you can see that our image is flipped both horizontally and vertically: Figure 7: Supply a negative flip code to cv2.flip to flip an image both horizontally and vertically. Flipping an image is very simple — perhaps one of the simplest examples in this series! OpenCV image flipping results To flip images with OpenCV, be sure to access the “Downloads” section of this tutorial to retrieve the source code and example image. From there, open a shell and execute the following command: $ python opencv_flip.py [INFO] flipping image horizontally... [INFO] flipping image vertically... [INFO] flipping image horizontally and vertically... Your OpenCV flipping results should match mine from the previous section.
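To connect this back to the data augmentation motivation from the introduction, a horizontal flip is often enough to double a small training set of face crops. The following is my own illustrative sketch (the faces/ folder and file naming are hypothetical, not part of this tutorial's download):

import glob
import os
import cv2

# hypothetical folder of face crops; adjust the path for your own data
paths = glob.glob("faces/*.png")

for path in paths:
    image = cv2.imread(path)
    mirrored = cv2.flip(image, 1)  # flip code 1 = horizontal (mirror) flip

    # write the mirrored copy alongside the original
    name, ext = os.path.splitext(os.path.basename(path))
    out = os.path.join("faces", "{}_flipped{}".format(name, ext))
    cv2.imwrite(out, mirrored)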
Summary In this tutorial, you learned how to flip images horizontally and vertically with OpenCV and the cv2.flip function. Admittedly, image flipping is one of the easiest image processing concepts we have covered. However, just because a concept is simple does not mean that it’s not used for more powerful purposes. As I mentioned in the introduction to this tutorial, flipping is consistently used in machine learning/deep learning to generate more training data samples, thus creating more powerful and robust image classifiers.
https://pyimagesearch.com/2021/01/23/splitting-and-merging-channels-with-opencv/
In this tutorial, you will learn how to split and merge channels with OpenCV. As we know, an image is represented by three components: a Red, Green, and Blue channel. And while we’ve briefly discussed grayscale and binary representations of an image, you may be wondering: How do I access each individual Red, Green, and Blue channel of an image? Since images in OpenCV are internally represented as NumPy arrays, accessing each channel can be accomplished in multiple ways, implying multiple ways to skin this cat. However, we’ll focus on the two main methods that you should use: cv2.split and cv2.merge. By the end of this tutorial, you will have a good understanding of how to split images into channels using cv2.split and merge the individual channels back together with cv2.merge. To learn how to split and merge channels with OpenCV, just keep reading. Splitting and Merging Channels with OpenCV In the first part of this tutorial, we will configure our development environment and review our project structure. We’ll then implement a Python script that will:
Load an input image from disk
Split it into its respective Red, Green, and Blue channels
Display each channel onto our screen for visualization purposes
Merge the individual channels back together to form the original image
Let’s get started!
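As a quick preview of the two functions before we configure our environment (a minimal sketch of my own, not the tutorial's full script; the image path is just an example):

import cv2

# load an image and split it into its Blue, Green, and Red channels
# (remember that OpenCV stores channels in BGR order)
image = cv2.imread("opencv_logo.png")
(B, G, R) = cv2.split(image)

# each channel is a single-channel (grayscale) image of the same size
print("image shape:  ", image.shape)
print("channel shape:", B.shape)

# merging the channels back together reproduces the original image
merged = cv2.merge([B, G, R])
print("round trip matches original:", (merged == image).all())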
Configuring your development environment To follow this guide, you need to have the OpenCV library installed on your system. Luckily, OpenCV is pip-installable: $ pip install opencv-contrib-python If you need help configuring your development environment for OpenCV, I highly recommend that you read my pip install OpenCV guide — it will have you up and running in a matter of minutes.