https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/ | The map function takes in the tf.data.Dataset instance and a function that is applied to each element of the dataset. The latter part of pyimagesearch/data.py defines the functions used with map to transform the dataset. class GetImages():
    def __init__(self, imageWidth, imageHeight):
        # define the image width and height
        self.imageWidth = imageWidth
        self.imageHeight = imageHeight
    def __call__(self, imagePath):
        # read the image file
        image = read_file(imagePath)
        # decode the image string
        image = decode_jpeg(image, 3)
        # convert the image dtype from uint8 to float32
        image = convert_image_dtype(image, dtype=tf.float32)
        # resize the image to the height and width in config
        image = resize(image, (self.imageWidth, self.imageHeight))
        image = reshape(image, (self.imageWidth, self.imageHeight, 3))
        # return the image
        return image
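Once the class is defined, an instance of it can be handed directly to map. The following is a minimal usage sketch (not from the original post); the image paths and the 100x100 image size are placeholder assumptions.
# import the necessary packages
import tensorflow as tf
# hypothetical image paths and a placeholder 100x100 image size
imagePaths = ["images/img_0.jpg", "images/img_1.jpg"]
# GetImages is the class defined above (in pyimagesearch/data.py)
getImages = GetImages(imageWidth=100, imageHeight=100)
# build a dataset of file paths and map the callable object over each element
imageDs = tf.data.Dataset.from_tensor_slices(imagePaths)
imageDs = imageDs.map(getImages)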
Before moving ahead, let’s discuss why we chose to build a class with a __call__ method instead of a plain function to pass to map. The problem is that the function passed to map receives only the element of the dataset, so there is no way to hand it extra arguments such as the image dimensions. To overcome this, we create a class that holds these properties (here, imageWidth and imageHeight) and uses them during the call. On Lines 39-60, we build the GetImages class with a custom __call__ and __init__ function. __init__: we will be using this function to initialize the parameters imageWidth and imageHeight (Lines 40-43)
__call__: this method makes the object callable. We will be using this function to read the images from the imagePaths (Line 47). |
https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/ | Next, it is decoded into a usable JPEG format (Line 50). We then convert the image from uint8 to float32, and resize and reshape it (Lines 53-57). Generate Rays
A ray in computer graphics can be parameterized as
$r(t) = o + td$
where
$r(t)$ is the ray, $o$ is the origin of the ray, $d$ is the unit vector for the direction of the ray, and $t$ is the parameter (e.g., time)
To build the ray equation, we need the origin and the direction. In the context of NeRF, we generate rays by taking the origin of the ray as the pixel position on the image plane and the direction as the straight line joining the pixel and the camera aperture. This is illustrated in Figure 10. Figure 10: The process of ray generation. We can easily derive the pixel positions of the 2D image with respect to the camera coordinate frame: $x_c = (x - W/2)/f$ and $y_c = (y - H/2)/f$, where $W$ and $H$ are the image width and height and $f$ is the focal length. It is easy to locate the origin of the pixel points but a little more challenging to get the direction of the rays. From the previous section, we have the camera-to-world transformation that maps points from the camera coordinate frame to the world coordinate frame.
The camera-to-world matrix from the dataset is exactly the transformation we need. To define the direction vector, we do not need the entire camera-to-world matrix; instead, we use its upper 3×3 submatrix, which defines the camera’s orientation. |
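Before looking at the vectorized TensorFlow implementation below, here is a minimal NumPy sketch of the full recipe for a single pixel; the image size, focal length, and camera-to-world matrix are placeholder values, not numbers from the post.
# import the necessary packages
import numpy as np
# placeholder image size, focal length, and camera-to-world matrix
H = W = 100
focal = 138.9
c2w = np.eye(4)
# pick an arbitrary pixel and move it into the camera coordinate frame
(u, v) = (64, 32)
xC = (u - W * 0.5) / focal
yC = (v - H * 0.5) / focal
dirCamera = np.array([xC, -yC, -1.0])
# the upper 3x3 block is the orientation, the last column is the position
rotation = c2w[:3, :3]
translation = c2w[:3, -1]
# rotate the direction into world space, normalize it, and set the origin
rayD = rotation @ dirCamera
rayD = rayD / np.linalg.norm(rayD)
rayO = translation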
https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/ | With the rotation matrix $R$, we can get the unit direction vector by rotating the camera-space direction vector $d_c$ into world space and normalizing it: $d = \frac{R d_c}{\lVert R d_c \rVert}$. The difficult calculations are now over. For the easy part, the rays’ origin will be the translation vector of the camera-to-world matrix. Let’s see how we can translate this to code. We will be continuing with the pyimagesearch/data.py file. class GetRays:
    def __init__(self, focalLength, imageWidth, imageHeight, near,
        far, nC):
        # define the focal length, image width, and image height
        self.focalLength = focalLength
        self.imageWidth = imageWidth
        self.imageHeight = imageHeight
        # define the near and far bounding values
        self.near = near
        self.far = far
        # define the number of samples for coarse model
        self.nC = nC
On Lines 62-75, we create the class GetRays with a custom __call__ and __init__ function. __init__: we initialize the focalLength, imageWidth, and imageHeight on Lines 66-68, the near and far bounds of the camera viewing field (Lines 71 and 72), and the number of samples for the coarse model, nC. We will need these to construct the rays to be marched into the scene, as shown in Figure 8.
    def __call__(self, camera2world):
        # create a meshgrid of image dimensions
        (x, y) = tf.meshgrid(
            tf.range(self.imageWidth, dtype=tf.float32),
            tf.range(self.imageHeight, dtype=tf.float32),
            indexing="xy",
        )
        # define the camera coordinates
        xCamera = (x - self.imageWidth * 0.5) / self.focalLength
        yCamera = (y - self.imageHeight * 0.5) / self.focalLength
        # define the camera vector
        xCyCzC = tf.stack([xCamera, -yCamera, -tf.ones_like(x)],
            axis=-1)
        # slice the camera2world matrix to obtain the rotation and
        # translation matrix
        rotation = camera2world[:3, :3]
        translation = camera2world[:3, -1]
__call__: we input the camera2world matrix to this method, which in turn returns:
rayO: the origin points of the rays
rayD: the set of direction vectors of the rays
tVals: the sampled points along the rays
On Lines 79-83, we create a meshgrid of the image dimensions. This is the same as the Image Plane shown in Figure 10. |
https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/ | Next, we obtain the camera coordinates (Lines 86 and 87) using the equation derived in our previous blog post. We define a homogeneous representation (Lines 90 and 91) of the camera vector xCyCzC by stacking the camera coordinates. On Lines 95 and 96, we extract the rotation matrix and the translation vector from the camera-to-world matrix.
        # expand the camera coordinates to shape (height, width, 1, 3)
        # for broadcasting against the rotation matrix
        xCyCzC = xCyCzC[..., None, :]
        # get the world coordinates
        xWyWzW = xCyCzC * rotation
        # calculate the direction vector of the ray
        rayD = tf.reduce_sum(xWyWzW, axis=-1)
        rayD = rayD / tf.norm(rayD, axis=-1, keepdims=True)
        # calculate the origin vector of the ray
        rayO = tf.broadcast_to(translation, tf.shape(rayD))
        # get the sample points from the ray
        tVals = tf.linspace(self.near, self.far, self.nC)
        noiseShape = list(rayO.shape[:-1]) + [self.nC]
        noise = (tf.random.uniform(shape=noiseShape) *
            (self.far - self.near) / self.nC)
        tVals = tVals + noise
        # return ray origin, direction, and the sample points
        return (rayO, rayD, tVals)
We then transform the camera coordinates to world coordinates using the rotation matrix (Lines 99-102). Next, we calculate the direction vector rayD (Lines 105 and 106) and the origin vector rayO (Line 109). On Lines 112-116, we sample points from the ray. Note: We will learn about sampling points on a ray in the following section. Finally, we return rayO, rayD, and tVals on Line 119. Sample Points
After generating the rays, we need to sample 3D points along them. To do this, we suggest two ways. |
https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/ | Sample points at regular intervals: The name of the method is self-explanatory. Here, we sample points on the ray at regular intervals, as shown in Figure 11. Figure 11: Sample points at regular intervals. The sampling equation is as follows:
$t_i = t_n + \frac{i - 1}{N - 1}\left(t_f - t_n\right), \quad i = 1, \dots, N$
where $t_f$ and $t_n$ are the farthest and nearest points on the ray, respectively. We divide the entire ray into equidistant parts, and the divisions serve as the sample points. Sample points randomly: In this method, we add randomness to the process of sampling points. The idea here is that if the sample points come from random positions on the ray, the model will be exposed to new data. This will regularize it to produce better results. The strategy is shown in Figure 12. Figure 12: Sample points at random. |
https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/ | This is demonstrated by the equation below:
$t_i \sim \mathcal{U}\left[t_n + \frac{i - 1}{N}\left(t_f - t_n\right),\ t_n + \frac{i}{N}\left(t_f - t_n\right)\right]$
where $\mathcal{U}$ refers to uniform sampling. Here, we take a random point from the space between two adjacent points. NeRF Multi-Layer Perceptron
Each sample point is of 5 dimensions. The spatial location of the point is a 3D vector (), and the direction of the point is a 2D vector (). Mildenhall et al. ( 2020) advocate expressing the viewing direction as a 3D Cartesian unit vector . These 5D points serve as the input to the MLP. This field of rays with 5D points is referred to as the neural radiance field in the paper. The MLP network predicts each input point’s color and volume density . Color refers to the () content of the point. |
https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/ | The volume density can be interpreted as the differential probability of a ray terminating at an infinitesimal particle at that point. The MLP architecture is displayed in Figure 13. Figure 13: MLP architecture (inspired by Mildenhall et al., 2020). An important point to note here is that:
We encourage the representation to be multiview consistent by restricting the network to predict the volume density as a function of only the location, while allowing the RGB color to be predicted as a function of both location and viewing direction. With all that theory out of the way, we can start building the NeRF architecture in TensorFlow. So, let’s open the file pyimagesearch/nerf.py and start digging. # import the necessary packages
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import concatenate
from tensorflow.keras import Input
from tensorflow.keras import Model
We begin with importing our necessary packages on Lines 2-5. def get_model(lxyz, lDir, batchSize, denseUnits, skipLayer):
    # build input layer for rays
    rayInput = Input(shape=(None, None, None, 2 * 3 * lxyz + 3),
        batch_size=batchSize)
    # build input layer for direction of the rays
    dirInput = Input(shape=(None, None, None, 2 * 3 * lDir + 3),
        batch_size=batchSize)
    # creating an input for the MLP
    x = rayInput
    for i in range(8):
        # build a dense layer
        x = Dense(units=denseUnits, activation="relu")(x)
        # check if we have to include residual connection
        if i % skipLayer == 0 and i > 0:
            # inject the residual connection
            x = concatenate([x, rayInput], axis=-1)
    # get the sigma value
    sigma = Dense(units=1, activation="relu")(x)
    # create the feature vector
    feature = Dense(units=denseUnits)(x)
    # concatenate the feature vector with the direction input and put
    # it through a dense layer
    feature = concatenate([feature, dirInput], axis=-1)
    x = Dense(units=denseUnits//2, activation="relu")(feature)
    # get the rgb value
    rgb = Dense(units=3, activation="sigmoid")(x)
    # create the nerf model
    nerfModel = Model(inputs=[rayInput, dirInput],
        outputs=[rgb, sigma])
    # return the nerf model
    return nerfModel
Next, on Lines 7-46, we create our MLP model in the function called get_model. This method takes in the following inputs:
lxyz: the number of dimensions used for positional encoding of the xyz coordinates
lDir: the number of dimensions used for positional encoding of the direction vector
batchSize: the batch size of the data
denseUnits: the number of units in each layer of MLP
skipLayer: the layer at which we want the skip connection
On Lines 9-14, we define the rayInput and the dirInput layers. |
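As a quick sanity check (not from the post's training script), we can instantiate the coarse and fine models and confirm the channel sizes that rayInput and dirInput expect. The hyperparameter values below are common NeRF settings assumed only for illustration, and get_model is assumed to be available (e.g., imported from pyimagesearch/nerf.py).
# assumed hyperparameters for illustration only
lxyz = 10
lDir = 4
batchSize = 1
# build identical coarse and fine models (the fine model is used later
# for hierarchical sampling)
coarseModel = get_model(lxyz=lxyz, lDir=lDir, batchSize=batchSize,
    denseUnits=256, skipLayer=4)
fineModel = get_model(lxyz=lxyz, lDir=lDir, batchSize=batchSize,
    denseUnits=256, skipLayer=4)
# rayInput expects 2 * 3 * lxyz + 3 = 63 channels and dirInput expects
# 2 * 3 * lDir + 3 = 27 channels in their last dimension
print(coarseModel.input_shape)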
https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/ | Next, we create the MLP with the skip connection (Lines 17-25). To align with the paper (multiview consistency), only the rayInput is passed through the model to produce sigma (volume density) and a feature vector on Lines 28-31. Finally, the feature vector is concatenated with the dirInput (Line 35) to produce color (Line 39). On Lines 42 and 43, we build the nerfModel using the Keras functional API. Finally, we return the nerfModel on Line 46. Volume Rendering
In this section, we study how to achieve volume rendering. We use the predicted color and volume density from the MLP to render the 3D scene. The predictions from the network are plugged into the classical volume rendering equation to derive the color of a ray. The equation is given below:
$C(r) = \int_{t_n}^{t_f} T(t)\, \sigma(r(t))\, c(r(t), d)\, dt$
Sounds complicated? Let us break this equation down into simple parts. |
https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/ | The term $C(r)$ is the color of the ray (the rendered pixel). $r(t) = o + td$ is the ray that is fed into the network, where the variables stand for the following:
$o$ is the origin of the ray
$d$ is the direction of the ray
$t$ indexes the set of uniform samples between the near ($t_n$) and far ($t_f$) points used for the integral
$\sigma(r(t))$ is the volume density, which can also be interpreted as the differential probability of the ray terminating at the point $r(t)$
$c(r(t), d)$ is the color of the ray at the point $r(t)$
These are the building blocks of the equation. Apart from these, there is another term:
$T(t) = \exp\left(-\int_{t_n}^{t} \sigma(r(s))\, ds\right)$
This represents the transmittance along the ray from the near point $t_n$ to the current point $t$. Think of this as a measure of how much the ray can penetrate the 3D space up to a certain point. Now that we have all the terms together, we can finally make sense of the equation. The color of a ray through the 3D space is the integral, taken over all points between the near ($t_n$) and far ($t_f$) bounds of the viewing plane, of the product of the transmittance ($T(t)$), the volume density ($\sigma(r(t))$), and the color at the current point along the viewing direction ($c(r(t), d)$). Let’s look at how to express this in code. First, we will look at the render_image_depth function in the pyimagesearch/utils.py file. def render_image_depth(rgb, sigma, tVals):
    # squeeze the last dimension of sigma
    sigma = sigma[..., 0]
    # calculate the delta between adjacent tVals
    delta = tVals[..., 1:] - tVals[..., :-1]
    deltaShape = [BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, 1]
    delta = tf.concat(
        [delta, tf.broadcast_to([1e10], shape=deltaShape)], axis=-1)
    # calculate alpha from sigma and delta values
    alpha = 1.0 - tf.exp(-sigma * delta)
    # calculate the exponential term for easier calculations
    expTerm = 1.0 - alpha
    epsilon = 1e-10
    # calculate the transmittance and weights of the ray points
    transmittance = tf.math.cumprod(expTerm + epsilon, axis=-1,
        exclusive=True)
    weights = alpha * transmittance
    # build the image and depth map from the points of the rays
    image = tf.reduce_sum(weights[..., None] * rgb, axis=-2)
    depth = tf.reduce_sum(weights * tVals, axis=-1)
    # return rgb, depth map and weights
    return (image, depth, weights)
On Lines 15-42, we are building a render_image_depth function which takes as inputs:
rgb: the red-green-blue color matrix of the ray points
sigma: the volume density of the sample points
tVals: the sample points
It produces the volume-rendered image (image), its depth map (depth), and the weights (required for hierarchical sampling). |
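Before the line-by-line walkthrough, here is a hedged usage sketch with dummy tensors; the shapes below (a batch of 1, a 100x100 image, and 32 samples per ray) are placeholders and must agree with the config constants used inside render_image_depth.
# import the necessary packages
import tensorflow as tf
# placeholder shapes: batch, height, width, and samples per ray
(B, H, W, N) = (1, 100, 100, 32)
rgb = tf.random.uniform((B, H, W, N, 3))
sigma = tf.random.uniform((B, H, W, N, 1))
tVals = tf.broadcast_to(tf.linspace(2.0, 6.0, N), (B, H, W, N))
# volume render the dummy predictions
(image, depth, weights) = render_image_depth(rgb=rgb, sigma=sigma, tVals=tVals)
# image: (B, H, W, 3), depth: (B, H, W), weights: (B, H, W, N); the rendered
# image can then be compared to the ground-truth view with a mean squared
# error, which is the photometric loss discussed below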
https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/ | On Line 17, we reshape sigma for ease of calculation.
Next, we calculate the space (delta) between adjacent tVals (Lines 20-23).
We then create alpha using sigma and delta (Line 26).
We create the transmittance and weight vector (Lines 33-35).
On Lines 38 and 39, we create the image and depth map.
Finally, we return image, depth, and weights on Line 42.
Photometric Loss
We refer to the loss function used by NeRF as the photometric loss. This is computed by comparing the colors of the synthesized image with those of the ground-truth image. Mathematically, this can be expressed as:
$\mathcal{L} = \left\lVert I - \hat{I} \right\rVert_2^2$
where $I$ is the real image and $\hat{I}$ is the synthesized image. This function, when applied to the entire pipeline, is still fully differentiable. This allows us to train the model parameters ($\theta$) using backpropagation. Breather
Let’s take a moment here to realize how far we have come. Take a deep breath like our friend in Figure 14. |
https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/ | Figure 14: Pause and Ponder (image source). We have learned about computer graphics and their fundamentals in the first part of our blog series. In this tutorial, we have taken those concepts and applied them to 3D scene representation. Here we have:
Built an image and a ray dataset from the given JSON files.
Sampled points from the rays using the random sampling strategy.
Passed these points into the NeRF MLP.
Rendered a novel image using the color and volume density predicted by the MLP.
Established a loss function (photometric loss) with which we will optimize the parameters of the MLP.
These steps are sufficient to train a NeRF model and render novel views. However, this vanilla architecture will eventually produce renderings of low quality. To mitigate these issues, Mildenhall et al. (2020) propose additional enhancements. |
https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/ | In the next section, we will learn about these enhancements and their implementation using TensorFlow. Enhancing NeRF
Mildenhall et al. (2020) propose two methods to enhance the renderings from NeRF:
positional encoding
hierarchical sampling
Positional Encoding
Positional encoding is a popular encoding format used in architectures like transformers. Mildenhall et al. (2020) justify using it to better render high-frequency features such as texture and fine details. Rahaman et al. (2019) suggest that deep networks are biased toward learning low-frequency functions. To bypass this problem, NeRF proposes mapping the input vector to a higher-dimensional representation. Since the 5D input space is the position of the points, we are essentially encoding positions, which is where the name comes from. |
https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/ | Let’s say we have 10 positions indexed as $0, 1, \dots, 9$. The indices are in the decimal system. If we encode the digits in the binary system, we will get something like what is shown in Figure 15. Figure 15: Binary encoding. The binary system is an easy encoding system. The only problem we face here is that it is filled with zeros, making the representation sparse. We would want to make this system continuous and compact. The encoding function used in NeRF is as follows:
$\gamma(p) = \left(p, \sin(2^0 p), \cos(2^0 p), \dots, \sin(2^{L-1} p), \cos(2^{L-1} p)\right)$
To draw a parallel between the binary and the NeRF encoding, let’s look at Figure 16. Figure 16: Similarity between binary encoding and NeRF’s positional encoding. The sine and cosine functions make the encoding continuous, and the $2^i$ term makes it similar to the binary system. |
https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/ | A visualization of the positional encoding function is given in Figure 17. The blue line depicts the cosine component, while the red line is the sine component. Figure 17: Visualization of the sinusoids used for positional encoding. We can create this fairly simply in a function called encoder_fn in the pyimagesearch/encode.py file. # import the necessary packages
import tensorflow as tf
def encoder_fn(p, L):
    # build the list of positional encodings
    gamma = [p]
    # iterate over the number of dimensions in time
    for i in range(L):
        # insert sine and cosine of the product of current dimension
        # and the position vector
        gamma.append(tf.sin((2.0 ** i) * p))
        gamma.append(tf.cos((2.0 ** i) * p))
    # concatenate the positional encodings into a positional vector
    gamma = tf.concat(gamma, axis=-1)
    # return the positional encoding vector
    return gamma
We start with importing tensorflow (Line 2). On Lines 4-19, we define the encoder function, which takes in the following parameters:
p: position of each element to be encoded
L: the dimension into which the encoding will take place
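As a quick sanity check (illustrative values, not from the post), encoding a batch of xyz points with L = 10 produces the 2 * 3 * L + 3 = 63 channels that the rayInput layer defined earlier expects.
# import the necessary packages
import tensorflow as tf
# four arbitrary xyz points
points = tf.random.uniform((4, 3))
encoded = encoder_fn(points, L=10)
# prints (4, 63): the original 3 channels plus a sine and a cosine per dimension
print(encoded.shape)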
On Line 6, we define a list that will hold the positional encoding. Next, we iterate over dimensions and append the encoded values into the list (Lines 9-13). Lines 16-19 are used to convert the same list into a tensor and finally return it. Hierarchical Sampling
Mildenhall et al. (2020) found another problem with the original structure. |
https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/ | The random sampling method would sample N points along each camera ray. This means we don’t have any prior understanding of where it should sample. That ultimately leads to an inefficient rendering. They propose the following solution to remedy this:
Build two identical NeRF MLP models, the coarse and the fine network.
Sample a set of $N_c$ points along the camera ray using the random sampling strategy, as shown in Figure 12. These points will be used to query the coarse network.
The output of the coarse network is used to produce a more informed sampling of points along each ray. These samples are biased toward the more relevant parts of the 3D scene. To do this, we rewrite the color equation as a weighted sum of all sample colors $c_i$:
$\hat{C}_c(r) = \sum_{i=1}^{N_c} w_i c_i$
where the term $w_i = T_i\left(1 - \exp(-\sigma_i \delta_i)\right)$.
The weights, when normalized, produce a piecewise-constant probability density function. The entire procedure of turning the weights into a probability density function is visualized in Figure 18. |
https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/ | Figure 18: From weights to PDF. From the probability density function, we sample the second set of locations using the inverse transform sampling method, as shown in Figure 19. Figure 19: Entire process of hierarchical sampling (Arandjelović and Zisserman, 2021). Now we have both the $N_c$ and the $N_f$ sets of sampled points. We send these points to the fine network to produce the final rendered color of the ray. This process of converting weights to a new set of sample points can be expressed through a function called sample_pdf. First, let’s refer to the utils.py file inside the pyimagesearch folder. def sample_pdf(tValsMid, weights, nF):
    # add a small value to the weights to prevent it from nan
    weights += 1e-5
    # normalize the weights to get the pdf
    pdf = weights / tf.reduce_sum(weights, axis=-1, keepdims=True)
    # from pdf to cdf transformation
    cdf = tf.cumsum(pdf, axis=-1)
    # start the cdf with 0s
    cdf = tf.concat([tf.zeros_like(cdf[..., :1]), cdf], axis=-1)
    # get the sample points
    uShape = [BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, nF]
    u = tf.random.uniform(shape=uShape)
    # get the indices of the points of u when u is inserted into cdf in a
    # sorted manner
    indices = tf.searchsorted(cdf, u, side="right")
    # define the boundaries
    below = tf.maximum(0, indices-1)
    above = tf.minimum(cdf.shape[-1]-1, indices)
    indicesG = tf.stack([below, above], axis=-1)
    # gather the cdf according to the indices
    cdfG = tf.gather(cdf, indicesG, axis=-1,
        batch_dims=len(indicesG.shape)-2)
    # gather the tVals according to the indices
    tValsMidG = tf.gather(tValsMid, indicesG, axis=-1,
        batch_dims=len(indicesG.shape)-2)
    # create the samples by inverting the cdf
    denom = cdfG[..., 1] - cdfG[..., 0]
    denom = tf.where(denom < 1e-5, tf.ones_like(denom), denom)
    t = (u - cdfG[..., 0]) / denom
    samples = (tValsMidG[..., 0] + t *
        (tValsMidG[..., 1] - tValsMidG[..., 0]))
    # return the samples
    return samples
This code snippet has been inspired by the official NeRF implementation. On Lines 44-86, we create a function called sample_pdf that takes in the following parameters:
tValsMid: the midpoints between two adjacent tVals
weights: the weights used in the volume rendering function
nF: number of points used by the fine model
On Lines 46-49, we define the probability density function from the weights and then convert the same into a cumulative distribution function (cdf). This is then converted back into sample points for the fine model using inverse transform sampling (Lines 52-86). |
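To make the data flow concrete, here is a hedged sketch of how sample_pdf is typically wired up. The shapes are placeholders (and must agree with the config constants used inside sample_pdf), and the midpoint and merge steps follow the standard NeRF recipe rather than code shown in this excerpt.
# import the necessary packages
import tensorflow as tf
# placeholder coarse-pass outputs: sample positions and rendering weights
(B, H, W, nC) = (1, 100, 100, 32)
tVals = tf.broadcast_to(tf.linspace(2.0, 6.0, nC), (B, H, W, nC))
weights = tf.random.uniform((B, H, W, nC))
# midpoints between adjacent coarse samples define the bins of the pdf
tValsMid = 0.5 * (tVals[..., 1:] + tVals[..., :-1])
# draw 64 fine samples, dropping the two edge weights to match the bins
tValsFine = sample_pdf(tValsMid=tValsMid, weights=weights[..., 1:-1], nF=64)
# merge and sort the coarse and fine samples before querying the fine model
tValsFine = tf.sort(tf.concat([tVals, tValsFine], axis=-1), axis=-1)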
https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/ | We recommend this supplementary reading material to understand hierarchical sampling better. Credits
This tutorial was inspired by the work of Mildenhall et al. (2020). |
Summary
We have gone through the core concepts proposed in the paper NeRF and also implemented them using TensorFlow. |
https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/ | We can recall what we have learned so far in the following steps:
Building the image and ray dataset for 5D scene representation
Sampling points from the rays using any of the sampling strategies
Passing these points through the NeRF MLP model
Volume rendering based on the output of the MLP model
Calculating the photometric loss
Using positional encoding and hierarchical sampling to improve the quality of rendering
In next week’s tutorial, we will cover how to utilize all of these concepts to train the NeRF model. In addition, we will also render a 360-degree video of a 3D scene from 2D images. We hope you enjoyed this week’s tutorial, and as always, you can download the source code and try it out yourself. Citation Information
Gosthipaty, A. R., and Raha, R. “Computer Graphics and Deep Learning with NeRF using TensorFlow and Keras: Part 2,” PyImageSearch, 2021, https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/
@article{Gosthipaty_Raha_2021_pt2,
author = {Aritra Roy Gosthipaty and Ritwik Raha},
title = {Computer Graphics and Deep Learning with {NeRF} using {TensorFlow} and {Keras}: Part 2},
journal = {PyImageSearch},
year = {2021},
note = {https://pyimagesearch.com/2021/11/17/computer-graphics-and-deep-learning-with-nerf-using-tensorflow-and-keras-part-2/},
}
https://pyimagesearch.com/2024/05/06/introduction-to-machine-learning-why-there-are-no-programmed-answers/ |
Table of Contents
Introduction to Machine Learning: Why There Are No Programmed Answers
Machine Learning Explained: Moving Beyond Hard-Coded Logic
How Machine Learning Transforms Data into Insights: The Learning Mechanics
The Crucial Role of Data in Machine Learning Insights
The Impact of Data Variety and Volume on Machine Learning
Overcoming Real-World Data Challenges in Machine Learning
Addressing Bias and Ensuring Fairness in Machine Learning Models
Machine Learning’s Continuous Learning Loop: A Cycle of Improvement
The Power of Unpredictability in Machine Learning
The Critical Role of Adaptation and Insight in Machine Learning
The Transformative Power of Machine Learning
How Adaptive Intelligence Is Revolutionizing Industries
Navigating the Societal Implications of Machine Learning
Ethics and Governance in the Age of Machine Learning
Machine Learning and Bridging the Digital Divide: Strategies and Solutions
Forging the Future: Adaptation, Innovation, and Responsibility in Machine Learning
Embracing a New Paradigm in Machine Learning
Conclusion: Navigating the Future with Machine Learning
Introduction to Machine Learning: Why There Are No Programmed Answers
Welcome to the exciting world where the predictability of traditional programming meets its match: machine learning (ML). If you’ve ever tinkered with programming, you’re accustomed to the reliability of if/then statements. Feed the same input, receive the same output — this predictability is the bedrock of software development. Yet, there’s a realm within computer science that dares to defy this norm, opening the door to a future where software isn’t just coded, but taught. Let’s embark on a journey to uncover the essence of machine learning and its transformative impact on technology and society. Machine Learning Explained: Moving Beyond Hard-Coded Logic
At its core, machine learning challenges the fundamental principles of traditional programming. Where conventional algorithms operate within the confines of explicitly programmed logic, machine learning thrives on the ability to learn and adapt from data. This shift from deterministic outputs to dynamic learning introduces a new era of computing, one that mimics human learning processes more closely than ever before. More people than ever stand to benefit from machine learning (see Figure 1). Figure 1: Community of machine learning developers (source: image generated using DALL-E). |
https://pyimagesearch.com/2024/05/06/introduction-to-machine-learning-why-there-are-no-programmed-answers/ | How Machine Learning Transforms Data into Insights: The Learning Mechanics
Imagine trying to create a program that can identify animals in images. In a traditional setting, you’d painstakingly define features like whiskers or fur patterns. It’s precise but inherently limited. Machine learning, however, starts with a blank canvas — a model that learns from examples rather than following rigid rules. For instance, if you wanted to have a machine learning application that can spot cats, Figure 2 shows how you would do it. from PIL import Image
import requests
from transformers import pipeline
from matplotlib import pyplot as plt
On Lines 1-4, we import the necessary packages to create an image detector. checkpoint = "openai/clip-vit-large-patch14"
detector = pipeline(model=checkpoint, task="zero-shot-image-classification")
url = "https://huggingface.co/datasets/pyimagesearch/blog-post-images/resolve/main/cat.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
plt.imshow(image)
On Line 5, we define the model to use. Here, we are using the openai/clip-vit-large-patch14 model. Do not worry if you do not understand the architecture of the model. Essentially, we want a minimum viable product (here, image classifier) to work out of the box. |
https://pyimagesearch.com/2024/05/06/introduction-to-machine-learning-why-there-are-no-programmed-answers/ | To make it work, we are using a Deep Learning Model. Line 6 creates a Hugging Face pipeline. A pipeline consists of the entire pipeline of the model, from preprocessing, model computation, to postprocessing. We are using a zero-shot-image-classification pipeline with our model, which helps in the image classification task. Lines 8-10 download the image from a url and plot the image for visualization purposes. Figure 2: A cat image used in machine learning (source: Wikipedia). predictions = detector(image, candidate_labels=["cat", "dog"])
print(predictions)
Now, we pass the image into the detector on Line 11. The predictions that we get from the detector will be printed on Line 12. [
{'score': 0.9929975271224976, 'label': 'cat'},
{'score': 0.0070024654269218445, 'label': 'dog'}
]
Looking at the output, it is quite evident that the image passed into the model is that of a cat. Notice how we get the probability of the image being a cat. |
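As a small, hypothetical follow-up (not part of the original post), you could pull the top-scoring label out of that list directly:
# grab the label with the highest score from the predictions list
topPrediction = max(predictions, key=lambda p: p["score"])
print(f"{topPrediction['label']}: {topPrediction['score']:.4f}")
# cat: 0.9930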
https://pyimagesearch.com/2024/05/06/introduction-to-machine-learning-why-there-are-no-programmed-answers/ | This probability is what creates stochasticity in a deep learning pipeline. Here’s how it works: you feed the model thousands of images, each labeled “cat” or “not cat.” Over time, the model discerns patterns, perhaps noticing the subtle curvature of ears or the specific texture of fur, identifying cats in a way that seems almost intuitive. This process, akin to teaching a child through examples, highlights the power and unpredictability of machine learning. A model could surprise you by recognizing a cat breed it was never explicitly shown, thanks to the patterns it has learned. The Crucial Role of Data in Machine Learning Insights
In the world of machine learning, data is not just a resource; it’s the foundation upon which the entire learning process is built. The comparison to a child learning from examples is apt, but perhaps it understates the complexity and depth of what happens within a machine learning model. To fully appreciate the role of data, we must explore its multifaceted impact on the learning journey of these models. The Impact of Data Variety and Volume on Machine Learning
A machine learning model’s ability to identify patterns, make predictions, or generate insights is profoundly influenced by the quality, quantity, and diversity of the data it is exposed to during its training phase. Here’s why each of these factors is crucial:
Quality: High-quality data is clean, well-annotated, and representative of the real-world scenarios the model will encounter. |
https://pyimagesearch.com/2024/05/06/introduction-to-machine-learning-why-there-are-no-programmed-answers/ | Poor quality data, filled with errors or irrelevant information, can mislead the model, akin to teaching a child with incorrect or incomplete information. Quantity: The adage “practice makes perfect” holds for machine learning. A larger volume of data provides more examples from which the model can learn, improving its ability to generalize from the training data to new, unseen data. It’s like giving a child a broader range of experiences to learn from, enriching their understanding and skills. Diversity: Diverse data encompasses a wide range of examples, scenarios, and variations within the dataset. This ensures the model is not only familiar with a broad spectrum of cases but also resilient to variations and exceptions. Training a model on a diverse dataset is akin to exposing a child to different cultures, languages, and perspectives, fostering a well-rounded and adaptable learning process. Overcoming Real-World Data Challenges in Machine Learning
The pursuit of high-quality, voluminous, and diverse data is not without its challenges. Real-world data is often messy, incomplete, and biased. The process of collecting, cleaning, and preparing this data for training can be as critical as the algorithmic innovations in machine learning itself. |
https://pyimagesearch.com/2024/05/06/introduction-to-machine-learning-why-there-are-no-programmed-answers/ | Addressing issues such as missing values, imbalanced datasets, and removing biases are essential for creating models that are fair, accurate, and truly insightful. Figure 3 shows a utopian tech-powered future, but the data to achieve such a future could be clearer. Figure 3: A green future with autonomous vehicles (source: image generated using DALL-E). Addressing Bias and Ensuring Fairness in Machine Learning Models
One of the most significant challenges in working with real-world data is the inherent biases that may be present. These biases can skew the model’s learning process, leading to unfair or prejudiced outcomes. For example, if a facial recognition system is trained predominantly on images of people from a single ethnic background, it may perform poorly on images of people from other ethnicities. Combatting these biases requires deliberate efforts to curate diverse and representative datasets, as well as employing techniques like algorithmic fairness to ensure the model’s decisions are equitable. Machine Learning’s Continuous Learning Loop: A Cycle of Improvement
Machine learning is not a one-time event but a continuous cycle of learning, evaluating, and refining. As new data becomes available, models can be retrained or fine-tuned, enhancing their accuracy and adaptability. This iterative process mirrors the ongoing learning journey of a human, where new experiences and information lead to growth and improvement over time. |
https://pyimagesearch.com/2024/05/06/introduction-to-machine-learning-why-there-are-no-programmed-answers/ | The dynamic nature of data means that machine learning models are always a work in progress, striving for better understanding and performance as they ingest more data. The pivotal role of data in machine learning cannot be overstated. It is the lens through which models perceive the world, the guide that directs their learning pathways, and the yardstick by which their performance is measured. As we continue to push the boundaries of what machine learning can achieve, our focus must remain on curating, understanding, and ethically utilizing the data that fuels these advanced algorithms. In doing so, we ensure that our models learn from the best teacher available: a diverse, rich, and ever-expanding dataset that mirrors the complexity of the world around us. The Power of Unpredictability in Machine Learning
The unpredictability of machine learning stems from several factors:
Stochastic Processes: Introducing randomness, akin to shaking a snow globe, can help models explore a wider range of solutions, enhancing their ability to discover and learn. Complex Data: The vast and intricate nature of real-world data offers a playground for models to detect subtle trends and correlations, often uncovering insights that humans might miss. Iterative Learning: Machine learning models aren’t static; they evolve with new data, constantly refining their understanding and predictions. The Critical Role of Adaptation and Insight in Machine Learning
Expanding on the significance of machine learning’s capacity for adaptation and insight, we delve into how this technology’s non-deterministic nature is reshaping industries, revolutionizing societal norms, and challenging us to rethink our approach to ethics and accountability in the digital age. This exploration will provide a deeper understanding of why machine learning matters, both in practical applications and broader societal implications. |
https://pyimagesearch.com/2024/05/06/introduction-to-machine-learning-why-there-are-no-programmed-answers/ | The Transformative Power of Machine Learning
Machine learning’s ability to process and learn from vast amounts of data offers unparalleled advantages in various sectors, from healthcare and transportation to finance and environmental protection. Its adaptive nature allows for solutions that are not only innovative but also incredibly responsive to the complexities of real-world challenges. How Adaptive Intelligence Is Revolutionizing Industries
Healthcare: In the medical field depicted in Figure 4, machine learning is pioneering personalized medicine, improving diagnostic accuracy, and optimizing treatment plans. By analyzing patient data and medical records, algorithms can identify patterns and predict outcomes, such as the likelihood of disease or the response to specific treatments. This leads to earlier interventions, tailored therapies, and better patient outcomes. Autonomous Vehicles: Self-driving technology exemplifies machine learning’s potential to handle unpredictable environments. By continuously learning from vast amounts of data collected from sensors and cameras, these vehicles can make split-second decisions, navigate complex road conditions, and improve safety. The technology adapts to new scenarios, enhancing its decision-making processes over time. Environmental Protection: Machine learning aids in climate modeling, conservation efforts, and predicting natural disasters. By analyzing data from satellites and sensors, algorithms can track deforestation, monitor wildlife populations, and predict extreme weather events with improved accuracy, helping to mitigate environmental risks and guide conservation strategies. |
https://pyimagesearch.com/2024/05/06/introduction-to-machine-learning-why-there-are-no-programmed-answers/ | Figure 4: A modern healthcare setting where doctors and AI-powered systems collaborate to diagnose and treat patients, emphasizing the integration of machine learning in enhancing medical care (source: image generated using DALL-E). Navigating the Societal Implications of Machine Learning
The integration of machine learning into our daily lives and industries carries profound societal implications, necessitating a careful consideration of ethics, governance, and the digital divide. Ethics and Governance in the Age of Machine Learning
The unpredictability and complexity of machine learning models bring forward critical ethical considerations. As these systems increasingly make decisions that impact human lives, from job applications to loan approvals, the need for ethical frameworks and governance structures becomes paramount. These frameworks should ensure that machine learning applications respect privacy and consent, and are free from biases that could lead to discrimination. Machine Learning and Bridging the Digital Divide: Strategies and Solutions
The rapid advancement of machine learning technologies also accentuates the digital divide. Ensuring equitable access to the benefits of these technologies requires concerted efforts to address disparities in education, infrastructure, and resources. Initiatives to democratize access to data, provide digital literacy training, and support open-source machine learning projects are vital steps toward an inclusive digital future. Forging the Future: Adaptation, Innovation, and Responsibility in Machine Learning
The journey of integrating machine learning into the fabric of society is fraught with challenges but also brimming with potential. As we stand on the brink of this technological frontier, three pillars should guide our approach:
Adaptive Innovation: Continuously pushing the boundaries of what machine learning can achieve while being responsive to the ethical, societal, and environmental implications of these advancements. |
https://pyimagesearch.com/2024/05/06/introduction-to-machine-learning-why-there-are-no-programmed-answers/ | Responsible Deployment: Implementing machine learning solutions with a commitment to transparency, fairness, and accountability. This includes developing explainable AI that allows stakeholders to understand how decisions are made. Collaborative Governance: Building multi-stakeholder partnerships to craft policies and frameworks that govern the use of machine learning. This collaborative effort should involve policymakers, technologists, ethicists, and the public to ensure that the deployment of machine learning technologies benefits society as a whole. Machine learning’s power of adaptation and insight opens up a world of possibilities for addressing some of the most pressing challenges of our time. However, realizing this potential requires more than technological innovation; it demands a collective commitment to ethical responsibility, inclusive access, and global collaboration. As we navigate this evolving landscape, the decisions we make today will shape the future of machine learning and its impact on society for generations to come. Embracing a New Paradigm in Machine Learning
Adopting machine learning requires a shift in mindset. Success no longer hinges on achieving perfect predictability but on embracing the potential for continuous improvement and insights that the model itself might reveal. This shift is accompanied by an ongoing discussion about ensuring that machine learning systems are not only effective but also fair and understandable. |
https://pyimagesearch.com/2024/05/06/introduction-to-machine-learning-why-there-are-no-programmed-answers/ |
Conclusion: Navigating the Future with Machine Learning
Machine learning represents a significant shift in how we approach problem-solving in technology and beyond. By embracing its non-deterministic nature, we open the door to innovations that adapt, learn, and unveil insights in ways previously unimaginable. As we move forward, the challenge will be to harness this power responsibly, ensuring that as our machines learn, they do so in ways that are transparent, equitable, and aligned with our collective values. In some cases, we can be proud of our values that show up in useful machine learning applications and use them to their fullest potential. |
https://pyimagesearch.com/2024/05/06/introduction-to-machine-learning-why-there-are-no-programmed-answers/ | In other cases, we may find our own human biases are creeping into our machine-learning data and solutions. In such cases, leadership and good judgment remain imperative to prevent bias, and such is the nature of the “trust in AI” debate. It is our privilege to sort out this issue today and for future generations that will inherit our data and values. May they be worth preserving. |
https://pyimagesearch.com/2020/11/30/siamese-networks-with-keras-tensorflow-and-deep-learning/ |
In this tutorial you will learn how to implement and train a siamese network using Keras, TensorFlow, and Deep Learning. This tutorial is part two in our three-part series on the fundamentals of siamese networks:
Part #1: Building image pairs for siamese networks with Python (last week’s post)
Part #2: Training siamese networks with Keras, TensorFlow, and Deep Learning (this week’s tutorial)
Part #3: Comparing images using siamese networks (next week’s tutorial)
Using our siamese network implementation, we will be able to:
Present two input images to our network. The network will predict whether or not these two images belong to the same class (i.e., verification).
We’ll then be able to check the confidence score of the network to confirm the verification.
Practical, real-world use cases of siamese networks include face recognition, signature verification, prescription pill identification, and more! Furthermore, siamese networks can be trained with astoundingly little data, making more advanced applications such as one-shot learning and few-shot learning possible. A dataset with pair samples is crucial for training and understanding siamese networks. It helps us observe how the network learns to differentiate between similar and dissimilar pairs. |
https://pyimagesearch.com/2020/11/30/siamese-networks-with-keras-tensorflow-and-deep-learning/ | To learn how to implement and train siamese networks with Keras and TensorFlow, just keep reading.
Siamese network with Keras, TensorFlow, and Deep Learning
In the first part of this tutorial, we will discuss siamese networks, how they work, and why you may want to use them in your own deep learning applications. From there, you’ll learn how to configure your development environment such that you can follow along with this tutorial and learn how to train your own siamese networks. We’ll then review our project directory structure and implement a configuration file, followed by three helper functions:
A method used to generate image pairs such that we can train our siamese network
A custom CNN layer to compute Euclidean distances between vectors inside of the network (a rough sketch of this distance computation follows below)
A utility used to plot the siamese network training history to disk
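The distance helper lives in utils.py; its exact implementation is not shown in this excerpt, so the following is only a rough sketch of what such a Euclidean distance function typically looks like.
# import the necessary packages
import tensorflow.keras.backend as K

def euclidean_distance(vectors):
    # unpack the two sister-network embeddings
    (featsA, featsB) = vectors
    # sum the squared differences between the embeddings
    sumSquared = K.sum(K.square(featsA - featsB), axis=1, keepdims=True)
    # return the euclidean distance (with a small epsilon for numerical stability)
    return K.sqrt(K.maximum(sumSquared, K.epsilon()))
Inside the siamese model, a function like this is typically wrapped in a Keras Lambda layer so the distance computation becomes part of the graph.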
Given our helper utilities, we’ll implement our training script used to load the MNIST dataset from disk and train a siamese network on the data. We’ll wrap up this tutorial with a discussion of our results. What is a siamese network and how do they work? Figure 1: A basic siamese network architecture implementation accepts two input images (left), has identical CNN subnetworks for each input with each subnetwork ending in a fully-connected layer (middle), computes the Euclidean distance between the fully-connected layer outputs, and then passes the distance through a sigmoid activation function to determine similarity (right) (figure inspiration). Last week’s tutorial covered the fundamentals of siamese networks, how they work, and what real-world applications are applicable to them. |
https://pyimagesearch.com/2020/11/30/siamese-networks-with-keras-tensorflow-and-deep-learning/ | I’ll provide a quick review of them here, but I highly suggest that you read last week’s guide for a more in-depth review of siamese networks. Figure 1 at the top of this section shows the basic architecture of a siamese network. You’ll immediately notice that the siamese network architecture is different from most standard classification architectures. Notice how there are two inputs to the network along with two branches (i.e., “sister networks”). Each of these sister networks is identical to the other. The outputs of the two subnetworks are combined, and then the final output similarity score is returned. To make this concept a bit more concrete, let’s break it down further in the context of Figure 1 above:
On the left we present two example digits (from the MNIST dataset) to the siamese model. Our goal is to determine if these digits belong to the same class or not.
The middle shows the siamese network itself. These two subnetworks have the same architecture and same parameters, and they mirror each other — if the weights in one subnetwork are updated, then the weights in the other subnetwork(s) are updated as well. |
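A minimal sketch of that weight sharing follows; the tiny feature extractor below is a toy stand-in for the real sister network (which lives in siamese_network.py and is not shown in this excerpt).
# import the necessary packages
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, Flatten

# a toy stand-in for the real sister network
featureExtractor = Sequential([
    Flatten(input_shape=(28, 28, 1)),
    Dense(48, activation="relu"),
])
# two image inputs, one shared extractor: because the same instance is called
# on both inputs, the two branches share (and update) the exact same weights
imgA = Input(shape=(28, 28, 1))
imgB = Input(shape=(28, 28, 1))
featsA = featureExtractor(imgA)
featsB = featureExtractor(imgB)
# featsA and featsB are then combined (e.g., via a distance), as described next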
https://pyimagesearch.com/2020/11/30/siamese-networks-with-keras-tensorflow-and-deep-learning/ | The output of each subnetwork is a fully-connected (FC) layer. We typically compute the Euclidean distance between these outputs and feed them through a sigmoid activation such that we can determine how similar the two input images are. Sigmoid activation values closer to "1" imply the two images are more similar, while values closer to "0" indicate they are less similar. To actually train the siamese network architecture, we have a number of loss functions that we can utilize, including binary cross-entropy, triplet loss, and contrastive loss. Triplet loss requires image triplets (three input images to the network), which is different from the image pairs (two input images) that we are using today. We’ll be using binary cross-entropy to train our siamese networks today. In the future I will cover intermediate/advanced siamese networks, including image triplets, triplet loss, and contrastive loss — but for now, let’s walk before we run. Configuring your development environment
We’ll be using Keras and TensorFlow throughout this series of tutorials on siamese networks. I suggest you take the time to configure your deep learning development environment now. I recommend you follow either of these two guides to install TensorFlow and Keras on your system (I recommend you install TensorFlow 2.3 for this guide):
How to install TensorFlow 2.0 on Ubuntu
How to install TensorFlow 2.0 on macOS
Either tutorial will help you configure your system with all the necessary software for this blog post in a convenient Python virtual environment. |
https://pyimagesearch.com/2020/11/30/siamese-networks-with-keras-tensorflow-and-deep-learning/ | Having problems configuring your development environment? Figure 2: Having trouble configuring your dev environment? Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch Plus —- you’ll be up and running with this tutorial in a matter of minutes. All that said, are you:
Short on time?
Learning on your employer's administratively locked system?
Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?
Ready to run the code right now on your Windows, macOS, or Linux system?
Then join PyImageSearch Plus today! Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides that are pre-configured to run on Google Colab's ecosystem right in your web browser! No installation required. And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux! Project structure
Before we can train our siamese network, we first need to review our project directory structure. |
Start by using the "Downloads" section of this tutorial to download the source code, pre-trained siamese network model, etc. From there, let's take a peek at what's inside:
$ tree . --dirsfirst
. ├── output
│ ├── siamese_model
│ │ ├── variables
│ │ │ ├── variables.data-00000-of-00001
│ │ │ └── variables.index
│ │ └── saved_model.pb
│ └── plot.png
├── pyimagesearch
│ ├── config.py
│ ├── siamese_network.py
│ └── utils.py
└── train_siamese_network.py
2 directories, 6 files
Inside the pyimagesearch module we have three Python scripts:
config.py: A configuration file used to store important parameters, including input image spatial dimensions, batch size, number of epochs, etc.
siamese_network.py: Our implementation of the base network (i.e., "sister network") in the siamese model architecture
utils.py: Contains helper utilities used to create image pairs (which we covered last week), compute the Euclidean distance as a custom Keras/TensorFlow layer, and plot training history to disk
The train_siamese_network.py uses the three Python scripts in our pyimagesearch module to:
Load the MNIST dataset from disk
Create positive and negative image pairs from MNIST
Build the siamese network architecture
Train the siamese network on the image pairs
Serialize the siamese network model and training history plot to our output directory
With our project directory structure reviewed, let’s move on to creating our configuration file. Note: The pre-trained siamese_model included in the “Downloads” associated with this tutorial was created using TensorFlow 2.3. I recommend you use TensorFlow 2.3 for this guide. If you instead wish to use another version of TensorFlow, that’s perfectly okay, but you will need to execute train_siamese_network.py to train and serialize the model. You’ll also need to keep this model for next week’s tutorial when we use the trained siamese network to compare images. Creating our siamese network configuration file
Our configuration file is short and sweet. |
Open up config.py, and insert the following code:
# import the necessary packages
import os
# specify the shape of the inputs for our network
IMG_SHAPE = (28, 28, 1)
# specify the batch size and number of epochs
BATCH_SIZE = 64
EPOCHS = 100
Line 5 initializes our input IMG_SHAPE spatial dimensions. Since we are working with the MNIST digits dataset, our images are 28×28 pixels with a single grayscale channel. We then define our BATCH_SIZE and the total number of epochs we are training for. In our own experiments we found that training for only 10 epochs yielded good results, but training for longer yielded higher accuracy. If you’re short on time, or if your machine doesn’t have a GPU, updating EPOCHS to 10 will still yield good results. Next, let’s define our output paths:
# define the path to the base output directory
BASE_OUTPUT = "output"
# use the base output path to derive the path to the serialized
# model along with training history plot
MODEL_PATH = os.path.sep.join([BASE_OUTPUT, "siamese_model"])
PLOT_PATH = os.path.sep.join([BASE_OUTPUT, "plot.png"])
Line 12 initializes the BASE_OUTPUT path to be our output directory. We then use the BASE_OUTPUT path to derive the path to our MODEL_PATH, which is our serialized Keras/TensorFlow model. Since our siamese network implementation requires that we use a Lambda layer, we’ll be using SavedModel format, which according to the TensorFlow documentation, handles custom objects and implementations better. The SavedModel format results in an output model directory containing the optimizer, losses, and metrics (saved_model.pb) along with the model weights themselves (stored in a variables/ directory). Implementing the siamese network architecture with Keras and TensorFlow
Figure 3: We'll be implementing the basic ConvNet architecture used for our sister networks when building a siamese model.
A siamese network architecture consists of two or more sister networks (highlighted in Figure 3 above). Essentially, a sister network is a basic Convolutional Neural Network that results in a fully-connected (FC) layer, sometimes called an embedding layer. When we go to construct the siamese network architecture itself, we will:
Instantiate our sister networks
Create a Lambda layer that computes the Euclidean distances between the outputs of the sister networks
Create an FC layer with a single node and a sigmoid activation function
The result will be a fully-constructed siamese network. But before we get there, we first need to implement our sister network component of the siamese network architecture. Open up siamese_network.py in your project directory structure, and let’s get to work:
# import the necessary packages
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import GlobalAveragePooling2D
from tensorflow.keras.layers import MaxPooling2D
We start on Lines 2-8 by importing our required Python packages. These imports should all feel pretty standard to you if you’ve ever trained a CNN with Keras/TensorFlow before. If you need a refresher on CNNs, I recommend you read my Keras tutorial along with my book Deep Learning for Computer Vision with Python. With our imports taken care of, we can now define the build_siamese_model function responsible for constructing the sister networks:
def build_siamese_model(inputShape, embeddingDim=48):
# specify the inputs for the feature extractor network
inputs = Input(inputShape)
# define the first set of CONV => RELU => POOL => DROPOUT layers
x = Conv2D(64, (2, 2), padding="same", activation="relu")(inputs)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Dropout(0.3)(x)
# second set of CONV => RELU => POOL => DROPOUT layers
x = Conv2D(64, (2, 2), padding="same", activation="relu")(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.3)(x)
Our build_siamese_model function accepts two parameters:
inputShape: The spatial dimensions (width, height, and number of channels) of input images. For the MNIST dataset, our input images will have the shape 28x28x1.
embeddingDim: Output dimensionality of the final fully-connected layer in the network.
Line 12 initializes the input spatial dimensions to our sister network. From there, Lines 15-22 define two sets of CONV => RELU => POOL => DROPOUT layers. Each CONV layer learns a total of 64 2×2 filters. We then apply a ReLU activation function and apply max pooling with a 2×2 stride. We can now finish constructing the sister network architecture:
# prepare the final outputs
pooledOutput = GlobalAveragePooling2D()(x)
outputs = Dense(embeddingDim)(pooledOutput)
# build the model
model = Model(inputs, outputs)
# return the model to the calling function
return model
Line 25 applies global average pooling to the 7x7x64 volume (assuming a 28×28 input to the network), resulting in an output of 64-d.
We take this pooledOutput and then apply a fully-connected layer with the specified embeddingDim (Line 26) — this Dense layer serves as the output of the sister network. Line 29 then builds the sister network Model, which is then returned to the calling function. I’ve included a summary of the model below:
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_3 (InputLayer) [(None, 28, 28, 1)] 0
_________________________________________________________________
conv2d (Conv2D) (None, 28, 28, 64) 320
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 64) 0
_________________________________________________________________
dropout (Dropout) (None, 14, 14, 64) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 14, 14, 64) 16448
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 7, 7, 64) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 7, 7, 64) 0
_________________________________________________________________
global_average_pooling2d (Gl (None, 64) 0
_________________________________________________________________
dense (Dense) (None, 48) 3120
=================================================================
Total params: 19,888
Trainable params: 19,888
Non-trainable params: 0
_________________________________________________________________
Here’s a quick review of the model we just constructed:
Each sister network will accept a 28x28x1 input. We then apply a CONV layer to learn a total of 64 filters. Max pooling is applied with a 2×2 stride to reduce the spatial dimensions to 14x14x64.
Another CONV layer (again, learning 64 filters) and POOL layer are applied, reducing the spatial dimensions further to 7x7x64.
Global average pooling is applied to average the 7x7x64 volume down to 64-d.
This 64-d pooling output is passed into an FC layer that has 48 nodes. The 48-d vector serves as the output of our sister network.
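If you want to verify this quickly on your own machine, here is a short sanity check — my own snippet, not one of the tutorial's scripts — that assumes the project structure shown above:
# quick sanity check: build one sister network and inspect its output size
from pyimagesearch.siamese_network import build_siamese_model

sisterNetwork = build_siamese_model((28, 28, 1), embeddingDim=48)
sisterNetwork.summary()            # should match the summary printed above
print(sisterNetwork.output_shape)  # expected: (None, 48)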
In the train_siamese_network.py script, you will learn how to instantiate two instances of our sister network and then finish constructing the siamese network architecture itself. Implementing our pair generation, Euclidean distance, and plot history utility functions
With our configuration file and sister network component of the siamese network architecture implemented, let’s now move on to our helper functions and methods located in the utils.py file of the pyimagesearch module. Open up utils.py, and let’s review it:
# import the necessary packages
import tensorflow.keras.backend as K
import matplotlib.pyplot as plt
import numpy as np
We start off on Lines 2-4 importing our required Python packages. We import our Keras/TensorFlow backend so that we can construct our custom Euclidean distance Lambda layer. The matplotlib library will be used to create a helper function to plot our training history. Next, we have our make_pairs function, which we discussed in detail last week:
def make_pairs(images, labels):
# initialize two empty lists to hold the (image, image) pairs and
# labels to indicate if a pair is positive or negative
pairImages = []
pairLabels = []
# calculate the total number of classes present in the dataset
# and then build a list of indexes for each class label that
# provides the indexes for all examples with a given label
numClasses = len(np.unique(labels))
idx = [np.where(labels == i)[0] for i in range(0, numClasses)]
# loop over all images
for idxA in range(len(images)):
# grab the current image and label belonging to the current
# iteration
currentImage = images[idxA]
label = labels[idxA]
# randomly pick an image that belongs to the *same* class
# label
idxB = np.random.choice(idx[label])
posImage = images[idxB]
# prepare a positive pair and update the images and labels
# lists, respectively
pairImages.append([currentImage, posImage])
pairLabels.append([1])
# grab the indices for each of the class labels *not* equal to
# the current label and randomly pick an image corresponding
# to a label *not* equal to the current label
negIdx = np.where(labels != label)[0]
negImage = images[np.random.choice(negIdx)]
# prepare a negative pair of images and update our lists
pairImages.append([currentImage, negImage])
pairLabels.append([0])
# return a 2-tuple of our image pairs and labels
return (np.array(pairImages), np.array(pairLabels))
I'm not going to perform a full review of this function, as again, we covered it in great detail in Part 1 of this series on siamese networks; however, the high-level gist is that:
In order to train siamese networks, we need both positive and negative pairs
A positive pair is two images that belong to the same class (i.e., two examples of the digit “8”)
A negative pair is two images that belong to different classes (i.e., one image containing a “1” and the other image containing a “3”)
The make_pairs function accepts an input set of images and associated labels and then constructs these positive and negative image pairs for training, returning them to the calling function
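To make the pair format concrete, here is a quick shape check you could run. This is my own illustrative snippet rather than code from the tutorial, and it assumes the pyimagesearch module from the project structure above:
# make_pairs generates two pairs (one positive, one negative) per input image
from tensorflow.keras.datasets import mnist
from pyimagesearch import utils

(trainX, trainY), _ = mnist.load_data()
(pairs, labels) = utils.make_pairs(trainX[:1000], trainY[:1000])
print(pairs.shape)   # (2000, 2, 28, 28) -> 2,000 pairs, each holding two 28x28 images
print(labels.shape)  # (2000, 1)         -> 1 = positive pair, 0 = negative pair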
For a more detailed review on the make_pairs function, refer to my tutorial Building image pairs for siamese networks with Python. Our next function, euclidean_distance, accepts a 2-tuple of vectors and then computes the Euclidean distance between them, utilizing Keras/TensorFlow functions to do so:
def euclidean_distance(vectors):
# unpack the vectors into separate lists
(featsA, featsB) = vectors
# compute the sum of squared distances between the vectors
sumSquared = K.sum(K.square(featsA - featsB), axis=1,
keepdims=True)
# return the euclidean distance between the vectors
return K.sqrt(K.maximum(sumSquared, K.epsilon()))
The euclidean_distance function accepts a single parameter, vectors, which are the outputs from the fully-connected layers of both our sister networks in the siamese network architecture. We unpack the vectors into featsA and featsB (Line 50) and then compute the sum of squared differences between the vectors (Lines 53 and 54). We round out the function by taking the square root of the sum of squared differences, yielding the Euclidean distance (Line 57).
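In equation form, what euclidean_distance computes for a pair of sister network embeddings a and b (including the epsilon guard on Line 57) is:

d(a, b) = \sqrt{\max\left(\sum_{i}(a_i - b_i)^2,\ \epsilon\right)}

where the sum runs over the 48 embedding dimensions and the small \epsilon simply keeps the square root numerically stable when the two embeddings are identical.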
Take note that we are using Keras/TensorFlow functions to compute the Euclidean distance rather than using NumPy or SciPy. Why is that? Wouldn't it just be simpler to use the Euclidean distance functions built into NumPy and SciPy? Why go through all the hassle of reimplementing the Euclidean distance with Keras/TensorFlow? The reason will become more clear once we get to the train_siamese_network.py script, but the gist is that in order to construct our siamese network architecture, we need to be able to compute the Euclidean distance between the sister network outputs inside the siamese architecture itself. To accomplish this task we'll use a custom Lambda layer that can be used to embed arbitrary Keras/TensorFlow functions inside of a model (hence why Keras/TensorFlow functions are used to implement the Euclidean distance). Our final function, plot_training, accepts (1) the training history from calling model.fit and (2) an output plotPath:
def plot_training(H, plotPath):
# construct a plot that plots and saves the training history
plt.style.use("ggplot")
plt.figure()
plt.plot(H.history["loss"], label="train_loss")
plt.plot(H.history["val_loss"], label="val_loss")
plt.plot(H.history["accuracy"], label="train_acc")
plt.plot(H.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
plt.savefig(plotPath)
Given our training history variable, H, we plot both our training and validation loss and accuracy. The output plot is then saved to disk at plotPath. Creating our siamese network training script with Keras and TensorFlow
We are now ready to implement our siamese network training script! Inside train_siamese_network.py we will:
Load the MNIST dataset from disk
Construct our training and testing image pairs
Create two instances of our build_siamese_model to serve as our sister networks
Finish constructing the siamese network architecture by piping the outputs of the sister networks through our custom euclidean_distance function (using a Lambda layer)
Apply a sigmoid activation to the output of the Euclidean distance
Train the siamese network architecture on our image pairs
It sounds like a complicated process, but we’ll be able to accomplish all of these tasks in under 60 lines of code! |
Open up train_siamese_network.py, and let's get to work:
# import the necessary packages
from pyimagesearch.siamese_network import build_siamese_model
from pyimagesearch import config
from pyimagesearch import utils
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import Lambda
from tensorflow.keras.datasets import mnist
import numpy as np
Lines 2-10 import our required Python packages. Notable imports include:
build_siamese_model: Constructs the sister network components of the siamese network architecture
config: Stores our training configurations
utils: Holds our helper function utilities used to create image pairs, plot training history, and compute the Euclidean distance using Keras/TensorFlow functions
Lambda: Takes our implementation of the Euclidean distances and embeds it inside the siamese network architecture itself
With our imports taken care of, we can move on to loading the MNIST dataset from disk, preprocessing it, and constructing our image pairs:
# load MNIST dataset and scale the pixel values to the range of [0, 1]
print("[INFO] loading MNIST dataset...")
(trainX, trainY), (testX, testY) = mnist.load_data()
trainX = trainX / 255.0
testX = testX / 255.0
# add a channel dimension to the images
trainX = np.expand_dims(trainX, axis=-1)
testX = np.expand_dims(testX, axis=-1)
# prepare the positive and negative pairs
print("[INFO] preparing positive and negative pairs...")
(pairTrain, labelTrain) = utils.make_pairs(trainX, trainY)
(pairTest, labelTest) = utils.make_pairs(testX, testY)
Line 14 loads the MNIST digits dataset from disk. We then preprocess the MNIST images by scaling them from the range [0, 255] to [0, 1] (Lines 15 and 16) and then adding a channel dimension (Lines 19 and 20). We use our make_pairs function to create positive and negative image pairs for our training and testing sets, respectively (Lines 24 and 25). If you need a refresher on the make_pairs function, I suggest you read Part 1 of this series, which covers image pairs in detail. Let’s now construct our siamese network architecture:
# configure the siamese network
print("[INFO] building siamese network...")
imgA = Input(shape=config.IMG_SHAPE)
imgB = Input(shape=config.IMG_SHAPE)
featureExtractor = build_siamese_model(config.IMG_SHAPE)
featsA = featureExtractor(imgA)
featsB = featureExtractor(imgB)
Lines 29-33 create our sister networks:
First, we create two inputs, one for each image in the pair (Lines 29 and 30). Line 31 then builds the sister network architecture, which serves as featureExtractor. |
Each image in the pair will be passed through the featureExtractor, resulting in a 48-d feature vector (Lines 32 and 33). Since there are two images in a pair, we thus have two 48-d feature vectors. Perhaps you're wondering why we didn't call build_siamese_model twice? We have two sister networks in our architecture, right? Well, keep in mind what you learned last week:
“These two sister networks have the same architecture and same parameters and mirror each other — if the weights in one subnetwork are updated, then the weights in the other network(s) are updated as well.” So, even though there are two sister networks, we actually implement them as a single instance. Essentially, this single network is treated as a feature extractor (hence why we named it featureExtractor). The weights of the network are then updated via backpropagation as we train the network. Let’s now finish constructing our siamese network architecture:
# finally, construct the siamese network
distance = Lambda(utils.euclidean_distance)([featsA, featsB])
outputs = Dense(1, activation="sigmoid")(distance)
model = Model(inputs=[imgA, imgB], outputs=outputs)
Line 36 utilizes a Lambda layer to compute the euclidean_distance between featsA and featsB (remember, these values are the outputs of passing each image in the pair through the sister network feature extractor). We then apply a Dense layer with a single node and a sigmoid activation function.
The sigmoid activation function is used here because the output range of the function is [0, 1]. An output closer to 0 implies that the image pairs are less similar (and therefore from different classes), while a value closer to 1 implies they are more similar (and more likely to be from the same class). Line 38 then constructs the siamese network Model. The inputs consist of our image pair, imgA and imgB. The output of the network is the sigmoid activation.
# compile the model
print("[INFO] compiling model...")
model.compile(loss="binary_crossentropy", optimizer="adam",
metrics=["accuracy"])
# train the model
print("[INFO] training model...")
history = model.fit(
[pairTrain[:, 0], pairTrain[:, 1]], labelTrain[:],
validation_data=([pairTest[:, 0], pairTest[:, 1]], labelTest[:]),
batch_size=config.BATCH_SIZE,
epochs=config.EPOCHS)
Lines 42 and 43 compile our siamese network using binary cross-entropy as our loss function. We use binary cross-entropy here because this is essentially a two-class classification problem — given a pair of input images, we seek to determine how similar these two images are and, more specifically, if they are from the same or different class. More advanced loss functions can be used here as well, including triplet loss and contrastive loss. I’ll be covering how to use these loss functions, including constructing image triplets, in a future series on the PyImageSearch blog (which will cover more advanced siamese networks). |
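If you'd like a rough preview before that future series, the sketch below shows one common formulation of contrastive loss in Keras/TensorFlow. This is my own illustration rather than code from this tutorial: it assumes the same label convention used here (1 = same class, 0 = different class) and would be applied to the raw Euclidean distance output rather than the sigmoid similarity score:
import tensorflow as tf

def contrastive_loss(y, distances, margin=1.0):
    # y = 1 for positive (same-class) pairs, 0 for negative pairs
    y = tf.cast(y, distances.dtype)
    # pull positive pairs together...
    positiveTerm = y * tf.square(distances)
    # ...and push negative pairs at least `margin` apart
    negativeTerm = (1.0 - y) * tf.square(tf.maximum(margin - distances, 0.0))
    return tf.reduce_mean(positiveTerm + negativeTerm)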
Lines 47-51 then train the siamese network on the image pairs. Once the model is trained, we can serialize it to disk and plot the training history:
# serialize the model to disk
print("[INFO] saving siamese model...")
model.save(config.MODEL_PATH)
# plot the training history
print("[INFO] plotting training history...")
utils.plot_training(history, config.PLOT_PATH)
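Making predictions with the saved model is next week's topic, but as a quick sanity check you can usually re-load the serialized SavedModel like this — a minimal sketch, assuming the paths from config.py:
from tensorflow.keras.models import load_model
from pyimagesearch import config

# re-load the serialized siamese network and confirm it deserializes cleanly
model = load_model(config.MODEL_PATH)
model.summary()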
Congrats on implementing our siamese network training script! Training our siamese network with Keras and TensorFlow
We are now ready to train our siamese network using Keras and TensorFlow! Make sure you use the “Downloads” section of this tutorial to download the source code. From there, open up a terminal, and execute the following command:
$ python train_siamese_network.py
[INFO] loading MNIST dataset...
[INFO] preparing positive and negative pairs...
[INFO] building siamese network...
[INFO] training model...
Epoch 1/100
1875/1875 [==============================] - 11s 6ms/step - loss: 0.6210 - accuracy: 0.6469 - val_loss: 0.5511 - val_accuracy: 0.7541
Epoch 2/100
1875/1875 [==============================] - 11s 6ms/step - loss: 0.5433 - accuracy: 0.7335 - val_loss: 0.4749 - val_accuracy: 0.7911
Epoch 3/100
1875/1875 [==============================] - 11s 6ms/step - loss: 0.5014 - accuracy: 0.7589 - val_loss: 0.4418 - val_accuracy: 0.8040
Epoch 4/100
1875/1875 [==============================] - 11s 6ms/step - loss: 0.4788 - accuracy: 0.7717 - val_loss: 0.4125 - val_accuracy: 0.8173
Epoch 5/100
1875/1875 [==============================] - 11s 6ms/step - loss: 0.4581 - accuracy: 0.7847 - val_loss: 0.3882 - val_accuracy: 0.8331
...
Epoch 95/100
1875/1875 [==============================] - 11s 6ms/step - loss: 0.3335 - accuracy: 0.8565 - val_loss: 0.3076 - val_accuracy: 0.8630
Epoch 96/100
1875/1875 [==============================] - 11s 6ms/step - loss: 0.3326 - accuracy: 0.8564 - val_loss: 0.2821 - val_accuracy: 0.8764
Epoch 97/100
1875/1875 [==============================] - 11s 6ms/step - loss: 0.3333 - accuracy: 0.8566 - val_loss: 0.2807 - val_accuracy: 0.8773
Epoch 98/100
1875/1875 [==============================] - 11s 6ms/step - loss: 0.3335 - accuracy: 0.8554 - val_loss: 0.2717 - val_accuracy: 0.8836
Epoch 99/100
1875/1875 [==============================] - 11s 6ms/step - loss: 0.3307 - accuracy: 0.8578 - val_loss: 0.2793 - val_accuracy: 0.8784
Epoch 100/100
1875/1875 [==============================] - 11s 6ms/step - loss: 0.3329 - accuracy: 0.8567 - val_loss: 0.2751 - val_accuracy: 0.8810
[INFO] saving siamese model...
[INFO] plotting training history...
Figure 4: Training our siamese network model on the MNIST dataset using Keras, TensorFlow, and Deep Learning. As you can see, our model is obtaining ~88.10% accuracy on our validation set, implying that 88% of the time, the model is able to correctly determine if two input images belong to the same class or not. Figure 4 above shows our training history over the course of 100 epochs. Our model appears fairly stable, and given that our validation loss is lower than our training loss, it appears that we could further improve accuracy by “training harder” (something I cover here). |
Examining your output directory, you should now see a directory named siamese_model:
$ ls output/
plot.png siamese_model
$ ls output/siamese_model/
saved_model.pb variables
This directory contains our serialized siamese network. Next week you will learn how to take this trained model and use it to make predictions on input images — stay tuned for the final part in our intro to siamese network series; you won’t want to miss it! What's next? We recommend PyImageSearch University. Course information:
84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024
★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled
I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science? That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. |
And that's exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught. If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you'll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. Inside PyImageSearch University you'll find:
✓ 84 courses on essential computer vision, deep learning, and OpenCV topics
✓ 84 Certificates of Completion
✓ 114+ hours of on-demand video
✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques
✓ Pre-configured Jupyter Notebooks in Google Colab
✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) ✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch
✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University
Summary
In this tutorial you learned how to implement and train siamese networks using Keras, TensorFlow, and Deep Learning. We trained our siamese network on the MNIST dataset. |
Our network accepts a pair of input images (digits) and then attempts to determine if these two images belong to the same class or not. For example, if we were to present two images, each containing a "9" to the model, then the siamese network would report high similarity between the two, indicating that they are indeed part of the same class. However, if we provided two images, one containing a "9" and the other containing a "2", then the network should report low similarity, given that the two digits belong to separate classes. We used the MNIST dataset here for convenience such that we can learn the fundamentals of siamese networks; however, this same type of training procedure can be applied to face recognition, signature verification, prescription pill identification, etc. Next week you'll learn how to actually take our trained, serialized siamese network model and use it to make similarity predictions. I'll then do a future series of posts on more advanced siamese networks, including image triplets, triplet loss, and contrastive loss. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below! Download the Source Code and FREE 17-page Resource Guide
Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Download the code! |
In this tutorial, you will learn how to use the GridSearchCV class to do grid search hyperparameter tuning using the scikit-learn machine learning library. We’ll apply the grid search to a computer vision project. This blog post is part two in our four-part series on hyperparameter tuning:
Introduction to hyperparameter tuning with scikit-learn and Python (last week's tutorial)
Grid search hyperparameter tuning with scikit-learn's GridSearchCV class (today's post)
Hyperparameter tuning for Deep Learning with scikit-learn, Keras, and TensorFlow (next week's post)
Easy Hyperparameter Tuning with Keras Tuner and TensorFlow (tutorial two weeks from now)
Last week we learned how to tune hyperparameters to a Support Vector Machine (SVM) trained to predict the age of a marine snail. This was a good introduction to the concept of hyperparameter tuning, but it didn’t demonstrate how to apply hyperparameter tuning to a computer vision project. Today, we’ll build a computer vision system to automatically recognize the texture of an object in an image. We will use hyperparameter tuning to find the optimal set of hyperparameters that yields the highest accuracy. You can use the code included with this post as a starting point when you need to tune hyperparameters in your own projects. To learn how to grid search hyperparameters with GridSearchCV and scikit-learn, just keep reading. Grid search hyperparameter tuning with scikit-learn’s GridSearchCV
In the first part of this tutorial, we’ll discuss:
What a grid search is
How a grid search can be applied to hyperparameter tuning
How the scikit-learn machine learning library implements grid search through the GridSearchCV class
From there, we’ll configure our development environment and review our project directory structure. I’ll then show you how to use computer vision, machine learning, and grid search hyperparameter tuning to tune the parameters to a texture recognition pipeline, resulting in a system with near 100% texture recognition accuracy. |
By the end of this guide, you'll have a strong understanding of how to apply a grid search to the hyperparameters of a computer vision project. What is a hyperparameter grid search? Figure 1: Hyperparameter tuning using a grid search (image source). A grid search allows us to exhaustively test all possible hyperparameter configurations that we are interested in tuning. Later in this tutorial, we'll tune the hyperparameters of a Support Vector Machine (SVM) to obtain high accuracy. The hyperparameters of an SVM include:
Kernel choice: linear, polynomial, radial basis function
Strictness (C): Typical values are in the range of 0.0001 to 1000
Kernel-specific parameters: degree (for polynomial) and gamma (RBF)
For example, consider the following list of possible hyperparameters:
parameters = [
{"kernel":
["linear"],
"C": [0.0001, 0.001, 0.1, 1, 10, 100, 1000]},
{"kernel":
["poly"],
"degree": [2, 3, 4],
"C": [0.0001, 0.001, 0.1, 1, 10, 100, 1000]},
{"kernel":
["rbf"],
"gamma": ["auto", "scale"],
"C": [0.0001, 0.001, 0.1, 1, 10, 100, 1000]}
]
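As a quick sense of scale — my own snippet, assuming the parameters list above — you can count exactly how many configurations an exhaustive search over this grid involves using scikit-learn's ParameterGrid helper:
from sklearn.model_selection import ParameterGrid

# 7 (linear) + 3 * 7 (poly) + 2 * 7 (rbf) = 42 candidate SVMs,
# each then trained once per cross-validation fold
print(len(ParameterGrid(parameters)))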
A grid search will exhaustively test all possible combinations of these hyperparameters, training an SVM for each set. The grid search will then report the best hyperparameters (i.e., the ones that maximized accuracy). Configuring your development environment
To follow this guide, you need to have the following libraries installed on your machine:
OpenCV
scikit-learn
scikit-image
imutils
Luckily, all of these packages are pip-installable:
$ pip install opencv-contrib-python
$ pip install scikit-learn
$ pip install scikit-image
$ pip install imutils
If you need help configuring your development environment for OpenCV, I highly recommend that you read my pip install OpenCV guide — it will have you up and running in a matter of minutes. Having problems configuring your development environment? Figure 2: Having trouble configuring your dev environment? |
Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch University — you'll be up and running with this tutorial in a matter of minutes. All that said, are you:
Short on time?
Learning on your employer's administratively locked system?
Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?
Ready to run the code right now on your Windows, macOS, or Linux systems?
Then join PyImageSearch University today! Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides that are pre-configured to run on Google Colab's ecosystem right in your web browser! No installation required. And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux! Our example texture dataset
Figure 3: Our example texture image dataset includes three classes: brick, marble, and sand. We’ll create a computer vision and machine learning model capable of automatically recognizing the texture of an object in an image. There are three textures we’ll train our model to recognize:
Brick
Marble
Sand
Each class has 30 images, for a total of 90 images in the dataset.
Our goal is now to:
Quantify the texture of each image in the dataset
Define the set of hyperparameters we're going to search over
Use a grid search to tune the hyperparameters and find the values that maximize our texture recognition accuracy
Note: This dataset was created by following my tutorial on creating an image dataset with Google Images. I’ve provided the example texture dataset inside the “Downloads” associated with this tutorial. That way, you don’t have to recreate the dataset yourself. Project structure
Before we can implement a grid search for hyperparameter tuning, let’s take a second to review our project directory structure. Start with the “Downloads” section of this tutorial to access the source code and example texture dataset. From there, unzip the archive, and you should find the following project directory:
$ tree . --dirsfirst --filelimit 10
. ├── pyimagesearch
│ ├── __init__.py
│ └── localbinarypatterns.py
├── texture_dataset
│ ├── brick [30 entries exceeds filelimit, not opening dir]
│ ├── marble [30 entries exceeds filelimit, not opening dir]
│ └── sand [30 entries exceeds filelimit, not opening dir]
└── train_model.py
5 directories, 3 files
The texture_dataset contains the dataset where we’ll train our model. We have three subdirectories, brick, marble, and sand, each with 30 images. We’ll use Local Binary Patterns (LBPs) to quantify the contents of each image in the texture dataset. |
The LBP image descriptor is implemented inside the localbinarypatterns.py file inside the pyimagesearch module. The train_model.py script is responsible for:
Loading all images in texture_dataset from disk
Quantifying each of the images using LBPs
Performing a grid search on the hyperparameter space to determine the values that optimize accuracy
Let’s get started implementing our Python scripts. Our Local Binary Pattern (LBP) descriptor
Figure 4: We'll use the Local Binary Patterns descriptor to quantify the texture of our input image. (Thanks to Bikramjot of Hanzra Tech for the inspiration on this visualization!) The Local Binary Patterns implementation we'll follow today comes from my previous tutorial. While I've included the full code here as a matter of completeness, I will defer a detailed review of the implementation to my previous blog post. With that said, open the localbinarypatterns.py file in the pyimagesearch module of your project directory structure, and we can get started:
# import the necessary packages
from skimage import feature
import numpy as np
class LocalBinaryPatterns:
def __init__(self, numPoints, radius):
# store the number of points and radius
self.numPoints = numPoints
self.radius = radius
Lines 2 and 3 import our required Python packages. The feature submodule of scikit-image contains the local_binary_pattern function — this method computes the LBPs from an input image. Next, we define our describe function:
def describe(self, image, eps=1e-7):
# compute the Local Binary Pattern representation
# of the image, and then use the LBP representation
# to build the histogram of patterns
lbp = feature.local_binary_pattern(image, self.numPoints,
self.radius, method="uniform")
(hist, _) = np.histogram(lbp.ravel(),
bins=np.arange(0, self.numPoints + 3),
range=(0, self.numPoints + 2))
# normalize the histogram
hist = hist.astype("float")
hist /= (hist.sum() + eps)
# return the histogram of Local Binary Patterns
return hist
This method accepts an input image (i.e., the image we want to compute LBPs for) along with a small epsilon value. As we’ll see, the eps value prevents division by zero errors when normalizing the resulting LBP histogram to the range [0, 1]. |
From there, Lines 15 and 16 compute the uniform LBPs from the input image. Given the LBPs, we then use NumPy to construct a histogram of each LBP type (Lines 17-19). The resulting histogram is then scaled to the range [0, 1] (Lines 22 and 23). For a more detailed review of our LBP implementation, be sure to refer to my tutorial, Local Binary Patterns with Python & OpenCV. Implementing our grid search for hyperparameter tuning using GridSearchCV. With our LBP image descriptor implemented, we can create our grid search hyperparameter tuning script. Open the train_model.py file in your project directory, and we'll get started:
# import the necessary packages
from pyimagesearch.localbinarypatterns import LocalBinaryPatterns
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from imutils import paths
import argparse
import time
import cv2
import os
Lines 2-11 import our required Python packages. Our notable imports include:
LocalBinaryPatterns: Responsible for computing LBPs for each input image, thereby quantifying the texture
GridSearchCV: scikit-learn's implementation of a grid search for hyperparameter tuning
SVC: Our Support Vector Machine (SVM) used for classification
paths: Grabs the paths of all images in our input dataset directory
time: Used to time how long the grid search takes
Next, we have our command line arguments:
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True,
help="path to input dataset")
args = vars(ap.parse_args())
We only have a single command line argument here, --dataset, which will point to our texture_dataset residing on disk. Let’s grab our image paths and initialize our LBP descriptor:
# grab the image paths in the input dataset directory
imagePaths = list(paths.list_images(args["dataset"]))
# initialize the local binary patterns descriptor along with
# the data and label lists
print("[INFO] extracting features...")
desc = LocalBinaryPatterns(24, 8)
data = []
labels = []
Line 20 grabs the paths to all input images in our --dataset directory. |
We then initialize our LocalBinaryPatterns descriptor, along with two lists:
data: Stores the LBPs extracted from each image
labels: Contains the class label of the particular image
Let’s populate both data and labels now:
# loop over the dataset of images
for imagePath in imagePaths:
# load the image, convert it to grayscale, and quantify it
# using LBPs
image = cv2.imread(imagePath)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
hist = desc.describe(gray)
# extract the label from the image path, then update the
# label and data lists
labels.append(imagePath.split(os.path.sep)[-2])
data.append(hist)
# partition the data into training and testing splits using 75% of
# the data for training and the remaining 25% for testing
print("[INFO] constructing training/testing split...")
(trainX, testX, trainY, testY) = train_test_split(data, labels,
random_state=22, test_size=0.25)
On Line 30, we loop over our input images. For each image we:
Load it from disk (Line 33)
Convert it to grayscale (Line 34)
Compute LBPs for the image (Line 35)
We then update our labels list with the class label of the particular image along with our data list with the computed LBP histogram. Note: Confused on how we determined the class label from the image path? Recall that inside the texture_dataset directory, there are three subdirectories, one for each of the three texture classes: brick, marble, and sand. Since the class label of a given image is contained within the file path, all we need to do is extract the subdirectory name, which is exactly what Line 39 does. Before we can run GridSearchCV we first need to define the hyperparameters to search over:
# construct the set of hyperparameters to tune
parameters = [
{"kernel":
["linear"],
"C": [0.0001, 0.001, 0.1, 1, 10, 100, 1000]},
{"kernel":
["poly"],
"degree": [2, 3, 4],
"C": [0.0001, 0.001, 0.1, 1, 10, 100, 1000]},
{"kernel":
["rbf"],
"gamma": ["auto", "scale"],
"C": [0.0001, 0.001, 0.1, 1, 10, 100, 1000]}
]
Line 49 defines a parameters list that the grid search will run over. As you can see, we’re testing three different types of SVM kernels: linear, polynomial, and radial basis function (RBF). Each kernel has its own set of associated hyperparameters to search over as well. SVMs tend to be quite sensitive to hyperparameter choices; that is especially true for the non-linear kernels. If we want high texture classification accuracy, we need to get these hyperparameter choices correct. |
The values listed above are the ones you'll typically want to tune for an SVM and a given kernel. Let's now run GridSearchCV over the hyperparameter space:
# tune the hyperparameters via a cross-validated GridSearchCV
print("[INFO] tuning hyperparameters via gridsearchcv")
grid = GridSearchCV(estimator=SVC(), param_grid=parameters, n_jobs=-1)
start = time.time()
grid.fit(trainX, trainY)
end = time.time()
# show GridSearchCV information
print("[INFO] GridSearchCV took {:.2f} seconds".format(
end - start))
print("[INFO] GridSearchCV best score: {:.2f}%".format(
grid.best_score_ * 100))
print("[INFO] GridSearchCV best parameters: {}".format(
grid.best_params_))
Line 65 initializes our GridSearchCV, which accepts three parameters:
estimator: The model we are tuning (in this case, a Support Vector Machine classifier).
param_grid: The hyperparameter space we wish to search (i.e., our parameters list).
n_jobs: The number of parallel jobs to run. A value of -1 implies that all processors/cores of your machine will be used, thereby speeding up the GridSearchCV process.
Line 67 starts the grid search of the hyperparameter space. We wrap the .fit call with the time() function to measure how long the hyperparameter search takes. Once GridSearchCV is complete, we display three important pieces of information on our terminal:
How long GridSearchCV took
The best accuracy we obtained during the grid search
The hyperparameters associated with our highest accuracy model
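If you want more than just the single best configuration, GridSearchCV also exposes a cv_results_ dictionary containing the scores for every candidate it evaluated; the pandas step below is simply my own convenience for viewing it:
import pandas as pd

# inspect every configuration the grid evaluated, best ranked first
results = pd.DataFrame(grid.cv_results_)
print(results[["params", "mean_test_score", "rank_test_score"]]
    .sort_values("rank_test_score")
    .head())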
From there, we do a full evaluation of the best model:
# grab the best model and evaluate it
print("[INFO] evaluating...")
model = grid.best_estimator_
predictions = model.predict(testX)
print(classification_report(testY, predictions))
Line 80 grabs the best_estimator_ from the grid search. This is the SVM with the highest accuracy. Note: After a hyperparameter search is complete, the scikit-learn library always populates the best_estimator_ variable of the grid with our highest accuracy model. Line 81 uses the best model found to make predictions on our testing data. We then display a full classification report on Line 82.
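One step this tutorial doesn't cover, but that you'll often want in practice, is persisting the winning model so you don't have to re-run the search. A minimal sketch (the filename is my own choice) using joblib, scikit-learn's recommended persistence tool:
import joblib

# save the best SVM found by the grid search...
joblib.dump(grid.best_estimator_, "texture_svm.joblib")
# ...and load it back later for inference
model = joblib.load("texture_svm.joblib")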
GridSearchCV for computer vision project results
We are now ready to apply a grid search to tune the hyperparameters to our texture recognition system. Be sure to access the “Downloads” section of this tutorial to retrieve the source code and example texture dataset. From there, you can execute the train_model.py script:
$ time python train_model.py --dataset texture_dataset
[INFO] extracting features...
[INFO] constructing training/testing split...
[INFO] tuning hyperparameters via gridsearchcv
[INFO] GridSearchCV took 1.17 seconds
[INFO] GridSearchCV best score: 86.81%
[INFO] GridSearchCV best parameters: {'C': 1000, 'degree': 3,
'kernel': 'poly'}
[INFO] evaluating...
precision recall f1-score support
brick 1.00 1.00 1.00 10
marble 1.00 1.00 1.00 5
sand 1.00 1.00 1.00 8
accuracy 1.00 23
macro avg 1.00 1.00 1.00 23
weighted avg 1.00 1.00 1.00 23
real 1m39.581s
user 1m45.836s
sys 0m2.896s
As you can see, we’ve obtained 100% accuracy on our testing set, meaning that our SVM was capable of recognizing the texture inside every one of our images. Furthermore, running the tuning script took only 1m39s. A grid search worked well here, but as I mentioned in last week’s tutorial, a random search tends to work just as well and requires less time to run — the more hyperparameters in your search space, the longer GridSearchCV takes (growing exponentially). To illustrate this point, next week, I’ll show you how to use a random search to tune the hyperparameters in a deep learning model. What's next? We recommend PyImageSearch University. Course information:
84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024
★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled
I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? |
Or has to involve complex mathematics and equations? Or requires a degree in computer science? That's not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms.
✓ 84 courses on essential computer vision, deep learning, and OpenCV topics
✓ 84 Certificates of Completion
✓ 114+ hours of on-demand video
✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques
✓ Pre-configured Jupyter Notebooks in Google Colab
✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) |
✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch
✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University
Summary
In this tutorial, you learned how to use GridSearchCV to automatically tune the hyperparameters of a machine learning model. To implement the grid search, we used the scikit-learn library and the GridSearchCV class. Our goal was to train a computer vision model that can automatically recognize the texture of an object in an image (brick, marble, or sand). The training pipeline itself included:
Looping over all images in our dataset
Quantifying the texture of each image using the Local Binary Patterns descriptor (a popular image descriptor often used for quantifying texture)
Using a grid search to explore the hyperparameters of our Support Vector Machine
After tuning our SVM hyperparameters, we obtained 100% classification accuracy on our texture recognition dataset. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below! Download the Source Code and FREE 17-page Resource Guide
Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Download the code! |
In this tutorial, you will learn how to perform AprilTag detection with Python and the OpenCV library. AprilTags are a type of fiducial marker. Fiducials, or more simply "markers," are reference objects that are placed in the field of view of the camera when an image or video frame is captured. The computer vision software running behind the scenes then takes the input image, detects the fiducial marker, and performs some operation based on the type of marker and where the marker is located in the input image. AprilTags are a specific type of fiducial marker, consisting of a black square with a white foreground that has been generated in a particular pattern (as seen in the figure at the top of this tutorial). The black border surrounding the marker makes it easier for computer vision and image processing algorithms to detect the AprilTags in a variety of scenarios, including variations in rotation, scale, lighting conditions, etc. You can conceptually think of an AprilTag as similar to a QR code — a 2D binary pattern that can be detected using computer vision algorithms. However, an AprilTag only holds 4-12 bits of data, multiple orders of magnitude less than a QR code (a typical QR code can hold up to 3KB of data). So, why bother using AprilTags at all? Why not simply use QR codes if AprilTags hold so little data?
The fact that AprilTags store less data is actually a feature and not a bug/limitation. To paraphrase the official AprilTag documentation, since AprilTag payloads are so small, they can be more easily detected, more robustly identified, and less difficult to detect at longer ranges. Basically, if you want to store data in a 2D barcode, use QR codes. But if you need to use markers that can be more easily detected in your computer vision pipeline, use AprilTags. Fiducial markers such as AprilTags are an integral part of many computer vision systems, including but not limited to:
Camera calibration
Object size estimation
Measuring the distance between the camera and an object
3D positioning
Object orientation
Robotics (i.e., autonomously navigating to a specific marker)
etc.
One of the primary benefits of AprilTags is that they can be created using basic software and a printer. Just generate the AprilTag on your system, print it out, and include it in your image processing pipeline — Python libraries exist to automatically detect the AprilTags for you! In the rest of this tutorial, I will show you how to detect AprilTags using Python and OpenCV. To learn how to detect AprilTags with OpenCV and Python, just keep reading.
AprilTag with Python
In the first part of this tutorial, we will discuss what AprilTags and fiducial markers are. We’ll then install apriltag, the Python package we’ll be using to detect AprilTags in input images. Next, we’ll review our project directory structure and then implement our Python script used to detect and identify AprilTags. We’ll wrap up the tutorial by reviewing our results, including a discussion on some of the limitations (and frustrations) associated with AprilTags specifically. What are AprilTags and fiducial markers? Figure 1: Examples of generated AprilTags (image source)
AprilTags are a type of fiducial marker. Fiducials are special markers we place in the view of the camera such that they are easily identifiable. For example, all of the following tutorials used fiducial markers to measure either the size of an object in an image or the distance between specific objects:
Find distance from camera to object/marker using Python and OpenCVMeasuring size of objects in an image with OpenCVMeasuring distance between objects in an image with OpenCV
Successfully implementing these projects was only possible because a marker/reference object was placed in view of the camera. Once I detected the object, I could derive the width and height of other objects because I already know the size of the reference object. AprilTags are a special type of fiducial marker. |
https://pyimagesearch.com/2020/11/02/apriltag-with-python/ | These markers have the following properties:
They are a square with binary values. The background is “black. ”The foreground is a generated pattern displayed in “white. ”There is a black border surrounding the pattern, thereby making it easier to detect. They can be generated in nearly any size. Once generated, they can be printed out and added to your application. Once detected in a computer vision pipeline, AprilTags can be used for:
Camera calibration3D applicationsSLAMRoboticsAutonomous navigationObject size measurementDistance measurementObject orientation… and more! A great example of using fiducials could be in a large fulfillment warehouse (i.e., Amazon) where you’re using autonomous forklifts. You could place AprilTags on the floor to define “lanes” for the forklifts to drive on. Specific markers could be placed on large shelves such that the forklift knows which crate to pull down. |
https://pyimagesearch.com/2020/11/02/apriltag-with-python/ | And markers could even be used for “emergency shutdowns” where if that “911” marker is detected, the forklift automatically stops, halts operations, and shuts down. There are an incredible number of use cases for AprilTags and the closely related ArUco tags. I’ll be covering the basics of how to detect AprilTags in this tutorial. Future tutorials on the PyImageSearch blog will then build off this one and show you how to implement real-world applications using them. Installing the “apriltag” Python package on your system
In order to detect AprilTags in our images, we first need to install a Python package to facilitate AprilTag detection. The library we’ll be using is apriltag, which, lucky for us, is pip-installable. To start, make sure you follow my pip install opencv guide to install OpenCV on your system. If you are using a Python virtual environment (which I recommend, since it is a Python best practice), make sure you use the workon command to access your Python environment and then install apriltag into that environment:
$ workon your_env_name
$ pip install apriltag
From there, validate that you can import both cv2 (your OpenCV bindings) and apriltag (your AprilTag detector library) into your Python shell:
$ python
>>> import cv2
>>> import apriltag
>>>
Congrats on installing both OpenCV and AprilTag on your system! Having problems configuring your development environment? All that said, are you:
Short on time? Learning on your employer’s administratively locked system? Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments? Ready to run the code right now on your Windows, macOS, or Linux system?
https://pyimagesearch.com/2020/11/02/apriltag-with-python/ | Then join PyImageSearch Plus today! Gain access to PyImageSearch tutorial Jupyter Notebooks that run on Google Colab’s ecosystem right in your browser! No installation required. And best of all, these notebooks will run on Windows, macOS, and Linux! Project structure
Before we implement our Python script to detect AprilTags in images, let’s first review our project directory structure:
$ tree . --dirsfirst
. ├── images
│ ├── example_01.png
│ └── example_02.png
└── detect_apriltag.py
1 directory, 3 files
Here you can see that we have a single Python file, detect_apriltag.py. As the name suggests, this script is used to detect AprilTags in input images. We then have an images directory that contains two example images. These images each contain one or more AprilTags. |
https://pyimagesearch.com/2020/11/02/apriltag-with-python/ | We’ll use our detect_apriltag.py script to detect the AprilTags in each of these images. Implementing AprilTag detection with Python
With the apriltag Python package installed, we are now ready to implement AprilTag detection with OpenCV! Open up the detect_apriltag.py file in your project directory structure, and let’s get started:
# import the necessary packages
import apriltag
import argparse
import cv2
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
help="path to input image containing AprilTag")
args = vars(ap.parse_args())
We start off on Lines 2-4 importing our required Python packages. We have:
apriltag: Our Python library to detect and identify AprilTags in an input image
argparse: Used to parse command line arguments
cv2: Our OpenCV bindings used to interact with the OpenCV library
From here, Lines 7-10 parse our command line arguments. We only need a single argument here, --image, the path to our input image containing the AprilTags we want to detect. Next, let’s load our input image and preprocess it:
# load the input image and convert it to grayscale
print("[INFO] loading image...")
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
Line 14 loads our input image from disk using the supplied --image path. We then convert the image to grayscale, the only preprocessing step required for AprilTag detection. Speaking of AprilTag detection, let’s go ahead and perform the detection step now:
# define the AprilTags detector options and then detect the AprilTags
# in the input image
print("[INFO] detecting AprilTags...")
options = apriltag.DetectorOptions(families="tag36h11")
https://pyimagesearch.com/2020/11/02/apriltag-with-python/ | detector = apriltag.Detector(options)
results = detector.detect(gray)
print("[INFO] {} total AprilTags detected".format(len(results)))
In order to detect AprilTags in an image, we first need to specify options, and more specifically, the AprilTag family:
Figure 2: The set of six possible AprilTag families, which our AprilTag detector can detect. A family in AprilTags defines the set of tags the AprilTag detector will assume in the input image. The standard/default AprilTag family is “Tag36h11”; however, there are a total of six families in AprilTags:
Tag36h11
TagStandard41h12
TagStandard52h13
TagCircle21h7
TagCircle49h12
TagCustom48h12
You can read more about the AprilTag families on the official AprilTag website, but for the most part, you typically use “Tag36h11”. Line 20 initializes our options with the default AprilTag family of tag36h11. From there, we:
Initialize the detector with these options (Line 21)
Detect AprilTags in the input image using the detector object (Line 22)
Display the total number of detected AprilTags to our terminal (Line 23)
The final step here is to loop over the AprilTags and display the results:
# loop over the AprilTag detection results
for r in results:
# extract the bounding box (x, y)-coordinates for the AprilTag
# and convert each of the (x, y)-coordinate pairs to integers
(ptA, ptB, ptC, ptD) = r.corners
ptB = (int(ptB[0]), int(ptB[1]))
ptC = (int(ptC[0]), int(ptC[1]))
ptD = (int(ptD[0]), int(ptD[1]))
ptA = (int(ptA[0]), int(ptA[1]))
# draw the bounding box of the AprilTag detection
cv2.line(image, ptA, ptB, (0, 255, 0), 2)
cv2.line(image, ptB, ptC, (0, 255, 0), 2)
cv2.line(image, ptC, ptD, (0, 255, 0), 2)
cv2.line(image, ptD, ptA, (0, 255, 0), 2)
# draw the center (x, y)-coordinates of the AprilTag
(cX, cY) = (int(r.center[0]), int(r.center[1]))
cv2.circle(image, (cX, cY), 5, (0, 0, 255), -1)
# draw the tag family on the image
tagFamily = r.tag_family.decode("utf-8")
cv2.putText(image, tagFamily, (ptA[0], ptA[1] - 15),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
print("[INFO] tag family: {}".format(tagFamily))
# show the output image after AprilTag detection
cv2.imshow("Image", image)
cv2.waitKey(0)
We start looping over our AprilTag detections on Line 26. Each AprilTag is specified by a set of corners. Lines 29-33 extract the four corners of the AprilTag square, while Lines 36-39 draw the AprilTag bounding box on the image. We also compute the center (x, y)-coordinates of the AprilTag bounding box and then draw a circle representing the center of the AprilTag (Lines 42 and 43). The last annotation we’ll perform is grabbing the detected tagFamily from the result object and then drawing it on the output image as well. Finally, we wrap up our Python by displaying the results of our AprilTag detection. |
https://pyimagesearch.com/2020/11/02/apriltag-with-python/ | AprilTag Python detection results
Let’s put our Python AprilTag detector to the test! Make sure you use the “Downloads” section of this tutorial to download the source code and example image. From there, open up a terminal, and execute the following command:
$ python detect_apriltag.py --image images/example_01.png
[INFO] loading image...
[INFO] detecting AprilTags...
[INFO] 1 total AprilTags detected
[INFO] tag family: tag36h11
Figure 3: Detecting a single AprilTag with Python. Despite the fact that the AprilTag has been rotated, we were still able to detect it in the input image, thereby demonstrating that AprilTags have a certain level of robustness that makes them easier to detect. Let’s try another image, this one with multiple AprilTags:
$ python detect_apriltag.py --image images/example_02.png
[INFO] loading image...
[INFO] detecting AprilTags...
[INFO] 5 total AprilTags detected
[INFO] tag family: tag36h11
[INFO] tag family: tag36h11
[INFO] tag family: tag36h11
[INFO] tag family: tag36h11
[INFO] tag family: tag36h11
Figure 4: Detecting multiple AprilTags in an image with Python. Here we have a fleet of autonomous vehicles, each with an AprilTag placed on it. We are able to detect all AprilTags in the input image, except for the ones that are partially obscured by other robots (which makes sense — the entire AprilTag has to be in view for us to detect it; occlusion creates a big problem for many fiducial markers). Be sure to use this code as a starting point for when you need to detect AprilTags in your own input images! Limitations and frustrations
You may have noticed that I did not cover how to manually generate your own AprilTag images. That’s for two reasons:
All possible AprilTags across all AprilTag families can be downloaded from the official AprilRobotics repo. |
https://pyimagesearch.com/2020/11/02/apriltag-with-python/ | Additionally, the AprilTags repo contains Java source code that you can use to generate your own tags. And if you really want to dive down the rabbit hole, the TagSLAM library contains a special Python script that can be used to generate tags — you can read more about this script here. All that said, I find generating AprilTags to be a pain in the ass. Instead, I prefer to use ArUco tags, which OpenCV can both detect and generate using its cv2.aruco submodule. I’ll be showing you how to use the cv2.aruco module to detect both AprilTags and ArUco tags in a tutorial in late-2020/early-2021. Be sure to stay tuned for that tutorial! In the meantime, a brief, hedged cv2.aruco sketch follows below.
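As a quick preview of that future tutorial, here is a minimal sketch of detecting these markers through OpenCV’s cv2.aruco submodule. It assumes an OpenCV build that ships the aruco module (e.g., opencv-contrib-python) and uses the pre-4.7 function names (Dictionary_get, DetectorParameters_create); OpenCV 4.7+ renamed these to getPredefinedDictionary, DetectorParameters, and ArucoDetector, so adapt the calls to your installed version. The image path is only a placeholder.
# minimal sketch: detect AprilTag 36h11 markers with OpenCV's aruco module (pre-4.7 API)
import cv2
image = cv2.imread("images/example_01.png")  # placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# the aruco module ships a dictionary for the AprilTag 36h11 family
arucoDict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_APRILTAG_36h11)
arucoParams = cv2.aruco.DetectorParameters_create()
# corners holds the 4x2 corner arrays, ids holds the decoded marker IDs (or None)
(corners, ids, rejected) = cv2.aruco.detectMarkers(gray, arucoDict, parameters=arucoParams)
print("[INFO] {} markers detected".format(0 if ids is None else len(ids)))
The returned corners and ids can then be drawn and labeled exactly as we did with the apriltag results above.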
Credits
In this tutorial, we used example images of AprilTags from other websites. I would like to take a second and credit the official AprilTag website as well as Bernd Pfrommer from the TagSLAM documentation for the examples of AprilTags.
Summary
In this tutorial, you learned about AprilTags, a set of fiducial markers that are often used for robotics, calibration, and 3D computer vision projects. We use AprilTags (as well as the closely related ArUco tags) in these situations because they tend to be very easy to detect in real time. Libraries exist to detect AprilTags and ArUco tags in nearly any programming language used to perform computer vision, including Python, Java, C++, etc. In our case, we used the apriltag Python package. This package is pip-installable and allows us to pass in images loaded by OpenCV, making it quite effective and efficient in many Python-based computer vision pipelines. Later this year/in early 2021, I’ll be showing you real-world projects of using AprilTags and ArUco tags, but I wanted to introduce them now so you have a chance to familiarize yourself with them.
https://pyimagesearch.com/2020/11/02/apriltag-with-python/ | AprilTag FAQ:
Can an AprilTag be customized for specific applications, and if so, how can one modify it? Customization of an AprilTag is indeed possible and often essential for tailoring to specific applications that require varying sizes or encoding capabilities. While the typical AprilTag holds limited data, the size and error-correction levels can be adjusted. The customization generally involves selecting different tag families or creating a unique configuration of the tag matrix. This process can be handled via tools provided in the AprilTag library or by generating tags using specific software that supports customization features. Users might need to experiment with tag generation parameters to balance detectability with the amount of information each tag carries, adapting the tags to the constraints of their particular application.
What are the computational requirements for detecting an AprilTag in real-time applications? Detecting an AprilTag in real-time demands a combination of efficient software and capable hardware. The computational load largely depends on factors such as the resolution of the input video, the number of tags being tracked, and the complexity of the scene. Real-time detection typically requires a modern processor and may benefit from hardware acceleration, such as using GPUs, especially in environments where multiple tags are present or high-resolution video feeds are processed. |
https://pyimagesearch.com/2020/11/02/apriltag-with-python/ | For robotics and autonomous systems, where swift and accurate detection is crucial, investing in more robust computing resources ensures that the detection algorithms run smoothly and quickly, thereby supporting timely responses to the tags’ data. For developers working on such applications, profiling the software implementation on target hardware during the development phase is critical to ensure that performance requirements are met. This might include testing with various camera inputs under different operating conditions to optimize performance and reliability.
How does the performance of AprilTag detection vary among different camera types or imaging conditions? The performance of AprilTag detection can vary significantly across different camera types and imaging conditions. Generally, the quality of the camera sensor and the lens can affect how well the tags are captured, which in turn influences the detection accuracy. High-resolution cameras tend to perform better, as they capture more detailed images, making it easier for the detection algorithms to recognize the patterns on the AprilTag even from greater distances or smaller tag sizes. Imaging conditions also play a crucial role. Good lighting is essential for optimal detection as it affects the camera’s ability to capture the contrast between the black and white parts of the tag. Under low light conditions, the camera might struggle to distinguish the tag from the background, leading to lower detection rates. |
https://pyimagesearch.com/2020/11/02/apriltag-with-python/ | Conversely, very bright conditions can cause glare or overexposure, which might obscure the tag’s details. Additionally, the angle and distance of the camera relative to the tag can affect detection. Ideally, the camera should capture the tag head-on. Angled views can distort the tag in the image, making detection more challenging, although AprilTags are designed to be robust against such variations to a certain extent. In practice, selecting the right camera and adjusting the imaging setup to ensure consistent lighting and minimal angle distortion are crucial steps in deploying a system that uses AprilTag detection effectively. This may involve using specialized camera equipment or enhancing the environmental conditions to maintain a high level of detection accuracy.
https://pyimagesearch.com/2024/04/26/pandas-melt-pd-melt/ | Pandas Melt (pd.melt)
Introduction to pandas melt() function
In this tutorial, you will learn about the Pandas melt() function, a powerful tool in Python’s Pandas library for reshaping your dataframes. Whether you’re new to data manipulation or looking to enhance your data preparation skills, understanding how to use Pandas melt() can significantly simplify your data transformation tasks. Pandas is a staple in the data science community for its robust capabilities in data manipulation and analysis. The Pandas melt() function specifically is invaluable for turning wide data into long format, making it easier to analyze, visualize, and model. If you’ve ever struggled with cumbersome datasets or needed to restructure your data for better insight, this function will become a crucial part of your data wrangling toolkit. Throughout this guide, we’ll dive deep into practical examples that not only demonstrate the syntax of pd.melt() but also illustrate its applications in real-world scenarios. By the end of this post, you’ll be equipped to reshape any dataframe to fit your analytical needs, ensuring your data tells the story you want to hear. Configuring Your Development Environment
To follow this guide, you need to have the Pandas library installed on your system. Luckily, Pandas is pip-installable:
$ pip install pandas
Need Help Configuring Your Development Environment? Figure 3: Having trouble configuring your development environment? |
https://pyimagesearch.com/2024/04/26/pandas-melt-pd-melt/ | Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch University — you will be up and running with this tutorial in a matter of minutes. All that said, are you:
Short on time? Learning on your employer’s administratively locked system? Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments? Ready to run the code immediately on your Windows, macOS, or Linux system? Then join PyImageSearch University today! Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required. And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux! Project Structure
We first need to review our project directory structure. Start by accessing this tutorial’s “Downloads” section to retrieve the source code and example images. From there, take a look at the directory structure:
$ tree . |
https://pyimagesearch.com/2024/04/26/pandas-melt-pd-melt/ | --dirsfirst
. └── pandas_melt_examples.py
0 directories, 1 files
Implementing Pandas melt()
Let’s start with a simple example to demonstrate the pandas melt() function. We’ll create a small dataset representing sales data for different products across multiple months, and then use pd.melt() to reshape this data. Simple pd.melt Example:
Suppose we have a dataframe that lists the monthly sales figures for three different products: A, B, and C. The data is structured with columns for each month and rows for each product. # Import Pandas library
import pandas as pd
import numpy as np
# Creating the example dataframe
data = {
'Product': ['A', 'B', 'C'],
'January': [100, 150, 200],
'February': [120, 160, 210],
'March': [130, 170, 220]
}
df = pd.DataFrame(data)
print("Original Data:")
print(df)
#Apply Pandas melt
melted_df = pd.melt(df, id_vars=['Product'], var_name='Month', value_name='Sales')
print("\nMelted Data:")
print(melted_df)
Lines 1-3 – Import Libraries: The script begins by importing Pandas and NumPy, the packages we need for building and reshaping the example data. Lines 4-10 – Create Sample Data:
Product: An array containing the names of the products (‘A’, ‘B’, ‘C’). This represents the different items being sold. January, February, March: These keys correspond to arrays containing integers ([100, 150, 200], [120, 160, 210], [130, 170, 220]), representing the sales figures for each product for each month. |
https://pyimagesearch.com/2024/04/26/pandas-melt-pd-melt/ | Line 11 – Creating a Pandas DataFrame: Constructs a DataFrame from the data dictionary. The DataFrame, df, organizes the data in a tabular form conducive to manipulation and visualization. Line 12-13 – Print the sample data. Line 16 – Implement pd.melt():
id_vars: Specifies the columns that will remain vertical (as identifier variables), which in this case are [‘Product’].
var_name: Names the new column that will contain the former column headers of melted columns, which are designated as Month.
value_name: Names the new column that will contain the values from the melted columns, here called Sales.
Lines 17-18 – Printing Results: The original and melted DataFrames are printed so we can compare the two formats. When we run this code, the output will show the original wide-format dataframe and then the melted long-format dataframe. The pandas melt() function takes the ‘Product’ column as the identifier variable (id_vars), and it treats the other columns (‘January’, ‘February’, ‘March’) as value variables, which are unpivoted to the row axis, forming two new columns: ‘Month’ and ‘Sales’. The output should look similar to below:
Original Data:
Product January February March
A 100 120 130
B 150 160 170
C 200 210 220
Melted Data:
Product Month Sales
A January 100
B January 150
C January 200
A February 120
B February 160
C February 210
A March 130
B March 170
C March 220
This transformation is particularly useful for statistical modeling or plotting functions that expect data in a long format, making pd.melt() an indispensable tool for data scientists. Advanced Example Using Pandas melt()
Building on our previous example, let’s explore more complex scenarios where pd.melt() can be particularly useful. This time, we’ll introduce additional variables and demonstrate how to handle multiple identifier variables and filter which columns to melt. |
https://pyimagesearch.com/2024/04/26/pandas-melt-pd-melt/ | This gives us more control over the reshaping process. Imagine we have a more detailed dataset that includes not only the sales data for products A, B, and C across several months but also their corresponding categories and target sales figures. # Extended example dataframe
data = {
'Product': ['A', 'B', 'C'],
'Category': ['Electronics', 'Furniture', 'Electronics'],
'Target_Sales': [300, 450, 500],
'January': [100, 150, 200],
'February': [120, 160, 210],
'March': [130, 170, 220]
}
df = pd.DataFrame(data)
print("Original Extended Data:")
print(df)
# Melting the dataframe with multiple identifier variables
melted_df = pd.melt(df, id_vars=['Product', 'Category', 'Target_Sales'], var_name='Month', value_name='Sales')
print("\nMelted Extended Data:")
print(melted_df)
Line 17-24 – Sample Data with more Complex Arrays:
Product: An array containing the names of the products (‘A’, ‘B’, ‘C’). This represents the different items being sold. Category: An array indicating the category of each product (‘Electronics’, ‘Furniture’, ‘Electronics’). This provides a classification for each product. Target_Sales: An array of integers representing the target sales figures for each product ([300, 450, 500]). This is the sales goal for each item. January, February, March: These keys correspond to arrays containing integers ([100, 150, 200], [120, 160, 210], [130, 170, 220]), representing the sales figures for each product for each month. |
https://pyimagesearch.com/2024/04/26/pandas-melt-pd-melt/ | Line 25 – Creating the Pandas DataFrame (df):
Constructs a DataFrame from the data dictionary. The DataFrame, df, organizes the data in a tabular form conducive to manipulation and visualization. Lines 26-27 – Printing Results: We print the original extended data. Line 30 – Implement pd.melt():
id_vars: Specifies the columns that will remain vertical (as identifier variables), which in this case are [‘Product’, ‘Category’, ‘Target_Sales’].
var_name: Names the new column that will contain the former column headers of melted columns, which are designated as Month.
value_name: Names the new column that will contain the values from the melted columns, here called Sales.
Lines 31-32 – Printing Results: We print the melted DataFrame so we can see how the category and target sales information is carried through the reshape. Running this code will produce an output that includes both the original extended dataframe and the melted dataframe, now including category and target sales data aligned with each month’s sales. Here’s the output reflecting the melted dataframe using the pd.melt function:
Original Extended Data:
Product Category Target_Sales January February March
A Electronics 300 100 120 130
B Furniture 450 150 160 170
C Electronics 500 200 210 220
Melted Extended Data:
Product Category Target_Sales Month Sales
A Electronics 300 January 100
B Furniture 450 January 150
C Electronics 500 January 200
A Electronics 300 February 120
B Furniture 450 February 160
C Electronics 500 February 210
A Electronics 300 March 130
B Furniture 450 March 170
C Electronics 500 March 220
This enhanced example illustrates the flexibility of pd.melt(), allowing us to keep essential categorical information alongside the reshaped sales data, which can be pivotal for more detailed analysis or reporting. This should clarify how each part of the pandas melt() function works and how the variables are structured to facilitate understanding and enable effective application in data manipulation tasks. If you only need to reshape some of the columns, the value_vars sketch below shows how to restrict the melt.
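Here is a small illustrative sketch (not part of the original script) of the value_vars parameter, which lets you melt only a subset of the columns. It reuses the extended df defined above and unpivots just the January and February columns:
# melt only January and February; every column not listed in id_vars or
# value_vars (Category, Target_Sales, March) is simply dropped from the result
subset_df = pd.melt(
    df,
    id_vars=['Product'],
    value_vars=['January', 'February'],
    var_name='Month',
    value_name='Sales'
)
print(subset_df)
This is handy when only a few measurement columns are relevant to the analysis at hand.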
Considerations while using Pandas melt
When using the pandas melt function, there are several considerations and common issues that users should be aware of to effectively manage and avoid potential pitfalls:
Common Issues and Considerations with pd.melt():
Loss of Data Integrity:
When melting data, ensure that the identifier variables (id_vars) adequately summarize the necessary key columns. If not, the melting process can lead to a loss of context about the data values, making the dataset harder to understand or analyze accurately. |
https://pyimagesearch.com/2024/04/26/pandas-melt-pd-melt/ | Performance Issues:
Melting can significantly increase the size of the DataFrame because it transforms it into a longer format. This can lead to performance issues, especially with very large datasets. It’s important to consider whether the long format is necessary for your specific analysis or if there are ways to aggregate the data before melting. Repetitive Variable Names:
If the column names that are being melted are not unique or are only numerically different (e.g., ‘January_1’, ‘January_2’, etc.), it can be challenging to distinguish between them in the melted DataFrame. It’s essential to rename these columns meaningfully before melting to maintain clarity. Alternative Function: pd.pivot_table()
While pd.melt() is useful for transforming data from wide to long format, sometimes you might need to perform the inverse operation or need a more controlled reshaping of your DataFrame. In such cases, pd.pivot_table() can be a more suitable alternative. Example of pd.pivot_table():
Suppose we want to pivot the melted data back into a wide format where we summarize the sales by the average per month and category. # Assuming melted_df is already defined from previous steps
pivot_df = pd.pivot_table(melted_df, values='Sales', index=['Category'], columns=['Month'], aggfunc=np.mean)
print("Pivoted Data:")
print(pivot_df)
This code will summarize the sales by calculating the average for each product category per month, effectively pivoting the data back to a format that might be more useful for certain types of analysis. |
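For reference, with the sales figures used throughout this tutorial the category averages work out to 150, 165, and 175 for Electronics and 150, 160, and 170 for Furniture, so the printed result should look roughly like the block below (pandas orders the Month columns alphabetically by default, and the exact spacing will differ):
Pivoted Data:
Month        February  January  March
Category
Electronics     165.0    150.0  175.0
Furniture       160.0    150.0  170.0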
https://pyimagesearch.com/2024/04/26/pandas-melt-pd-melt/ | This example demonstrates the flexibility of pd.pivot_table() for aggregating and reshaping data, providing an alternative method that might better suit certain analytical needs compared to pd.melt(). Next, we’ll provide a summary of what has been covered in this tutorial and highlight important considerations for using pandas melt.
84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024
★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled
I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science? That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. |
https://pyimagesearch.com/2024/04/26/pandas-melt-pd-melt/ | And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught. If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. Inside PyImageSearch University you'll find:
✓ 84 courses on essential computer vision, deep learning, and OpenCV topics
✓ 84 Certificates of Completion
✓ 114+ hours of on-demand video
✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques
✓ Pre-configured Jupyter Notebooks in Google Colab
Summary
This tutorial provides an in-depth exploration of the Pandas melt() function, a key tool in the Python Pandas library for reshaping dataframes from a wide to a long format. The function is particularly useful for making datasets easier to analyze, visualize, and model. |
https://pyimagesearch.com/2024/04/26/pandas-melt-pd-melt/ | The guide includes practical examples to demonstrate how to use pd.melt() effectively in real-world scenarios, illustrating the transformation of a dataset of sales data for different products across multiple months. The tutorial also discusses common issues and considerations, such as potential loss of data integrity and performance issues due to the increase in DataFrame size. It offers insights into alternative methods like pd.pivot_table(), useful for the inverse operation of reshaping data back into a wide format, where it summarizes sales by the average per month and category. By the end of this tutorial, you should have a solid understanding of how to reshape any dataframe to fit your analytical needs with pd.melt(). To learn more about all pandas melt capabilities check out the developer doc. Unleash the potential of computer vision with Roboflow - Free! Step into the realm of the future by signing up or logging into your Roboflow account. Unlock a wealth of innovative dataset libraries and revolutionize your computer vision operations. Jumpstart your journey by choosing from our broad array of datasets, or benefit from PyimageSearch’s comprehensive library, crafted to cater to a wide range of requirements. Transfer your data to Roboflow in any of the 40+ compatible formats. |
https://pyimagesearch.com/2024/05/13/harnessing-power-at-the-edge-an-introduction-to-local-large-language-models/ | Harnessing Power at the Edge: An Introduction to Local Large Language Models
Table of Contents
Harnessing Power at the Edge: An Introduction to Local Large Language Models
Introduction to Large Language Models (LLMs)
What Are Large Language Models? Historical Context and Technological Evolution
The Development of OpenAI’s Generative Pre-Trained Transformers
Key Training Methodologies
Broad Spectrum of Applications
Future Prospects and Ethical Considerations
Introduction to Local LLMs
The Emergence of Local LLMs
Advantages of Local LLMs
Data Privacy and Security
Low Latency
Cost
Always-on Availability
Technical Considerations for Local Deployment
Hardware
Maintenance
Scalability
Framework Support and Accessibility
The Future of Local LLMs
Common Model Formats for LLMs
PyTorch Models
SafeTensors
GGML and GGUF
Background and Development of GGML
Transition to GGUF
Key Features and Benefits of GGUF
Practical Implications and Framework Support
Advantages and Challenges
Conclusion
Generalized Post-Training Quantization
Introduction to Generalized Post-Training Quantization
Quantization Process
GPTQ’s Impact on LLMs
Research and Development
AutoGPTQ Library
Conclusion
AWQ
Introduction to AWQ
Core Principle of AWQ
Framework Support and Adoption
Future Outlook and Coexistence with Other Methods
Conclusion
Frameworks for Local LLMs
Ollama
LM Studio
NVIDIA ChatRTX
Text Generation Web UI
GPT4All
AnythingLLM
Continue.dev
Llama.cpp
Final Perspectives on Local LLM Ecosystem
GPTQ Format Model Implementations
GGUF Model Usage
User Interfaces for Non-Technical Users
Personal Recommendations
Summary
Citation Information
Harnessing Power at the Edge: An Introduction to Local Large Language Models
Why pay a monthly fee when you can run powerful bots equivalent to ChatGPT on your local machine? In this series, we will embark on an in-depth exploration of Local Large Language Models (LLMs), focusing on the array of frameworks and technologies that empower these models to function efficiently at the network’s edge. Each installment of the series will explore a different framework that enables Local LLMs, detailing how to configure it on our workstations, feed prompts and data to generate actionable insights, perform retrieval-augmented generations, and much more. We will discuss each framework’s architecture, usability, and applications in real-world scenarios in depth. In today’s post, we’ll start by discussing what Local LLMs are, exploring their advantages and limitations, and explaining why their deployment is becoming increasingly essential for localized, real-time AI processing. After establishing this foundation, we will then delve into a handful of prominent Local LLM frameworks currently available in the market. Join us as we navigate through the transformative landscape of local data processing and decision-making enhanced by LLMs. This lesson is the 1st in a 4-part series on Local LLMs:
Harnessing Power at the Edge: An Introduction to Local Large Language Models (this tutorial)
Lesson 2
Lesson 3
Lesson 4
To learn about Local LLMs, their advantages and limitations, and the different frameworks available, just keep reading. Introduction to Large Language Models (LLMs)
In the rapidly evolving landscape of artificial intelligence (AI), Large Language Models (LLMs) have emerged as one of the most transformative technologies, particularly in the field of natural language processing (NLP). |
https://pyimagesearch.com/2024/05/13/harnessing-power-at-the-edge-an-introduction-to-local-large-language-models/ | These models, built on sophisticated neural network architectures, are designed to understand, interpret, and generate human-like text, opening new frontiers in AI applications. This section delves into the intricacies of LLMs, exploring their development, functionality, and the profound impact they have on various industries. What Are Large Language Models? Large Language Models (LLMs) are advanced AI systems trained on extensive datasets comprising text from a myriad of sources, including books, articles, websites, and other digital content. These models use architectures like the Transformer, a deep learning model introduced in 2017. The Transformer relies on self-attention mechanisms to process words in relation to all other words in a sentence, thereby capturing nuances of language that were previously elusive. This capability allows LLMs to generate text that is not only coherent but also contextually relevant to the input provided. A tiny sketch of that attention computation follows below.
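To make the idea of self-attention slightly more concrete, here is a minimal NumPy sketch of the scaled dot-product attention at the heart of the Transformer. It is purely illustrative and not taken from any particular model: a real LLM adds learned query/key/value projections, multiple attention heads, causal masking, and dozens of stacked layers.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # score how strongly each token should attend to every other token
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # softmax turns the scores into attention weights that sum to 1 per token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # each output vector is a weighted mix of all the value vectors
    return weights @ V

# toy example: a "sentence" of 4 tokens with 8-dimensional embeddings
tokens = np.random.default_rng(0).normal(size=(4, 8))
print(scaled_dot_product_attention(tokens, tokens, tokens).shape)  # (4, 8)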
Historical Context and Technological Evolution
The journey of LLMs began with foundational models such as GPT (Generative Pre-trained Transformer) and BERT (Bidirectional Encoder Representations from Transformers). These models laid the groundwork for the development of more sophisticated systems. Today, models like GPT-3, which features over 175 billion parameters, and its even more advanced successors, exemplify the pinnacle of current LLM technology.
https://pyimagesearch.com/2024/05/13/harnessing-power-at-the-edge-an-introduction-to-local-large-language-models/ | Training such models requires an enormous amount of computational power and data, making it a resource-intensive endeavor. This comprehensive timeline presents a visual representation of the key milestones in the evolution of Large Language Models. It begins with the advent of models like GPT-3, which set the stage for the future of automated language processing. Moving through the timeline, we observe a proliferation of models from various tech giants and research institutions, each contributing unique enhancements and specializations. The image categorizes models by year, showing a clear trajectory of advancement and complexity. Notable mentions include T5 and BERT, which introduced new methodologies for understanding context and semantics. The timeline also includes industry-specific models like Codex and AlphaCode, which are tailored for programming-related tasks. As we approach the present, the timeline introduces models such as ChatGPT and GPT-4, which incorporate more profound learning capabilities and broader contextual understanding. The diversity of models reflects the specialized needs of different sectors, from web-based applications to more technical, research-oriented tasks. Each model is a stepping stone toward more sophisticated, nuanced, and ethically aware AI communication tools. |
https://pyimagesearch.com/2024/05/13/harnessing-power-at-the-edge-an-introduction-to-local-large-language-models/ | Figure 1: A timeline of existing large language models (source: https://arxiv.org/pdf/2303.18223.pdf). The Development of OpenAI’s Generative Pre-Trained Transformers
This timeline chart traces the pivotal developments in OpenAI’s generative pre-trained transformers, commencing with the debut of GPT-1 in June 2018. GPT-1 was notable for its decoder-only architecture and its pioneering approach to generative pre-training. Advancing to GPT-2, which was unveiled in February 2019, we witnessed an upscaling in the model’s size and an enhancement in its multitasking learning capabilities. May 2020 marked the arrival of GPT-3, which further expanded the horizons of in-context learning and tested the limits of scaling with an unprecedented number of parameters. The progression continued with the inception of Codex in July 2021, an AI tailored for understanding and generating code, paving the way for specialized iterations like code-davinci-002. The davinci series progressed, with text-davinci-002 advancing instruction-following capabilities, text-davinci-003 aiming for human alignment, and later models augmenting chat functionalities and overall comprehension. March 2023 introduced GPT-3.5 as a bridge to the more sophisticated GPT-4, which boasted enhanced reasoning skills. This was quickly followed by variants like GPT-4 Turbo, offering an extended context window, and GPT-4 Turbo with vision, bringing in multimodal capabilities — both launched in September 2023. These iterations underscore OpenAI’s commitment to refining the complexity and real-world utility of their language models. |
https://pyimagesearch.com/2024/05/13/harnessing-power-at-the-edge-an-introduction-to-local-large-language-models/ | Accompanying the timeline is a visual guide illustrating the developmental trajectory of OpenAI’s models. Solid lines denote direct and explicitly stated evolution paths between models, as per official announcements. For example, the solid line connecting GPT-2 to GPT-3 indicates a directly acknowledged progression. Conversely, dashed lines suggest a less definitive evolutionary link, signifying observed technological progress that may not have been formally documented as a direct lineage by OpenAI. Understanding these nuances is key to appreciating the deliberate and methodical advancement of each model. While GPT-3’s direct descent from GPT-2 is well-documented, the dashed line from GPT-1 to GPT-2 implies an evolutionary step that is inferred from technological strides rather than explicitly delineated by OpenAI. Figure 2: Evolution of OpenAI’s GPT-series LLMs (source: https://arxiv.org/pdf/2303.18223.pdf). Key Training Methodologies
Training LLMs involves two major stages: pre-training and fine-tuning. During pre-training, the model learns a broad understanding of language by predicting words in sentences from the training corpus. Fine-tuning adjusts the model’s parameters to specific tasks, such as question answering or text summarization, using smaller, task-specific datasets. |
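As a rough illustration of the pre-training objective described above, the snippet below (a PyTorch sketch of our own, not code from any specific model) computes the standard next-token cross-entropy loss: the model’s prediction at position t is scored against the token that actually appears at position t+1.
import torch
import torch.nn.functional as F

def next_token_loss(logits, token_ids):
    # logits: (batch, seq_len, vocab_size) scores produced by the model
    # token_ids: (batch, seq_len) integer ids of the training text
    pred = logits[:, :-1, :].reshape(-1, logits.size(-1))  # predictions for positions 1..t-1
    target = token_ids[:, 1:].reshape(-1)                  # the "next" token at each position
    return F.cross_entropy(pred, target)

# toy usage with random tensors standing in for a real model and corpus
logits = torch.randn(2, 16, 1000)
token_ids = torch.randint(0, 1000, (2, 16))
print(next_token_loss(logits, token_ids))
Fine-tuning typically minimizes the same kind of loss, just on a smaller, task-specific dataset.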
https://pyimagesearch.com/2024/05/13/harnessing-power-at-the-edge-an-introduction-to-local-large-language-models/ | This method of training allows LLMs to adapt to a wide range of applications without losing the general language understanding acquired during pre-training. Broad Spectrum of Applications
The versatility of LLMs enables their use in a diverse array of applications:
Content Generation: LLMs assist in composing textual content, from fictional stories to marketing copy, dramatically reducing the time and effort required by human creators.
Customer Service: They power sophisticated chatbots that offer personalized customer interactions, capable of handling complex queries with ease.
Programming and Code Generation: Tools like GitHub’s Copilot utilize LLMs to suggest code snippets and entire functions, helping programmers write code more efficiently.
Translation and Localization: LLMs provide fast and accurate translation services that are crucial for global communication, supporting a multitude of languages and dialects.
Education and Tutoring: In education, these models can personalize learning by providing tutoring or generating practice tests tailored to the student’s level.
Future Prospects and Ethical Considerations
The potential future developments of LLMs are boundless, with ongoing research aiming to enhance their efficiency, accuracy, and generalizability. However, the deployment of these models also raises significant ethical concerns. Issues such as data privacy, model bias, and the generation of misleading information are critical challenges that researchers and developers continue to address. Additionally, the environmental impact of training large-scale models is a growing concern, prompting a search for more sustainable AI practices. |
https://pyimagesearch.com/2024/05/13/harnessing-power-at-the-edge-an-introduction-to-local-large-language-models/ | Introduction to Local LLMs
The landscape of machine learning and, more specifically, the utilization of Large Language Models (LLMs) is experiencing a paradigm shift. Initially, the use of LLMs was largely dominated by cloud-based services, where the computational load of running such expansive models was handled by powerful remote servers. However, there is a growing trend toward the deployment and usage of LLMs on local infrastructures. This movement is driven by various factors, including concerns over data privacy, the need for lower latency, and the desire for greater control over the models. This “Shift to Local LLMs” signifies a substantial turn in how organizations and individuals leverage these powerful AI tools. The conversation around AI has begun to pivot from cloud reliance to embracing the feasibility and independence of local LLMs. The allure of steering clear of subscription models and having unrestricted usage of AI tools has led to a surge in interest in local deployment. This not only aligns with a cost-saving approach but also champions privacy and immediate access — attributes highly valued in our current technological climate. The Emergence of Local LLMs
Local LLMs refer to instances where the large language models are deployed directly on-premises or on local machines and servers. This enables direct access to the computational capabilities required to run these models without the need for constant internet connectivity or reliance on external cloud providers. |
https://pyimagesearch.com/2024/05/13/harnessing-power-at-the-edge-an-introduction-to-local-large-language-models/ | Advantages of Local LLMs
Data Privacy and Security
Local deployment of LLMs offers enhanced data security and privacy since sensitive information does not need to traverse the internet or be stored on external servers. This is critically important for industries like healthcare and finance, where data confidentiality is paramount. Low Latency
Organizations can significantly reduce latency by running LLMs locally. The immediate availability of computational resources translates into quicker responses, which is vital for applications that demand real-time processing, such as automated trading systems or emergency response services. Cost
Shifting to local LLMs eliminates the costs associated with cloud-hosted APIs and the infrastructure typically required for LLM inference. Organizations can directly utilize their existing compute resources, thereby reducing operational expenses and leveraging investments in their own hardware. Always-on Availability
Local LLMs ensure that the capabilities of these models are always accessible, independent of network connectivity. This is especially advantageous in environments where high-bandwidth connections are unreliable or unavailable, allowing users to maintain productivity with uninterrupted AI assistance. These advantages collectively forge a path to a more autonomous and resilient AI infrastructure, ensuring that organizations and users can enjoy the power of Large Language Models with the added advantages of cost-efficiency, privacy, control, and uninterrupted access. Technical Considerations for Local Deployment
Deploying LLMs locally requires addressing several technical considerations. |
https://pyimagesearch.com/2024/05/13/harnessing-power-at-the-edge-an-introduction-to-local-large-language-models/ | Hardware
Significant hardware investments are necessary to facilitate the computation-heavy workload of LLMs. This includes high-end GPUs and specialized neural network processors. Maintenance
Locally deployed LLMs require a dedicated team to manage and update the models, handle data security, and ensure the infrastructure’s integrity. Scalability
As the demand for AI’s computational power grows, scaling local hardware can be challenging and expensive compared to scalable cloud solutions. Framework Support and Accessibility
It’s important to note that there are numerous frameworks available that are compatible with various operating systems (e.g., Windows, Linux, and macOS), and they support a wide range of hardware from AMD, NVIDIA, and Apple M series GPUs. Many of these Local LLM frameworks have matured significantly, making LLMs more accessible and easier to run than ever before. We will discuss these frameworks at a high level today, highlighting their ease of use and robustness. This diversity in support ensures that regardless of your specific environment or hardware capabilities, there are viable options available to successfully deploy and manage LLMs locally. The Future of Local LLMs
The shift to local LLMs does not suggest a complete move away from cloud-based models but rather indicates a hybrid approach where organizations choose the deployment strategy that best fits their needs. In the future, we may see more sophisticated methods of optimizing LLMs for local use, including compression techniques that reduce the model size without compromising performance and specialized hardware that can run these models more efficiently. |