4 Implementing Text Classification Using Perceptron and Logistic Regression
In the previous chapters we have discussed the theory behind the perceptron and logistic regression, including mathematical explanations of how and why they are able to learn from examples.
In this chapter we will transition from math to code.
Specifically, we will discuss how to implement these models in the Python programming language.
All the code that we will introduce throughout this book is available online as well: http://clulab.github.io/gentlenlp/. The reader who is not familiar with the Python programming language is encouraged to read first Appendix A, for a brief introduction to the language, and Appendix B, for a discussion on how computers encode and preprocess text.
Once done, please return here.
To get a better understanding of how these algorithms work under the hood, we will start by implementing them from scratch.
However, as the book progresses, we will introduce some of the popular tools and libraries that make Python the language of choice for machine learning, e.g., PyTorch,1 and Hugging Face’s transformers.2
The code for all the examples in the book is provided in the form of Jupyter notebooks.3
Important fragments of these notebooks will be presented in the implementation chapters so that the reader has the whole picture just by reading the book.
However, we strongly encourage you to download the notebooks and execute them yourself.
We also encourage you to modify them to conduct your own experiments!
1 https://pytorch.org
2 https://huggingface.co 3 https://jupyter.org/
4.1 Binary Classification
We begin this chapter with binary classification.
That is, we aim to train classifiers that assign one of two labels to a given text.
As the example for this task, we will train a review classifier using the Large Movie Review Dataset (Maas et al., 2011).4 We tackle this task by first implementing a binary perceptron classifier, followed by a binary logistic regression one.
We will implement the latter both from scratch and using PyTorch, so the reader has a clearer understanding of how PyTorch works “under the hood.”
4.1.1 Large Movie Review Dataset
This dataset contains movie reviews and their associated scores (between 1 and 10) as provided by IMDb.5 Maas et al. (2011) converted these scores to binary labels by assigning each review a positive or negative label if the review score was above 6 or below 5, respectively.
Reviews with scores 5 and 6 were considered too neutral and thus excluded.
We follow the same protocol in this chapter.
The dataset is divided into two even partitions called train and test, each containing 25,000 reviews.
The dataset also provides additional unlabeled reviews, but we will not use those here.
Each partition contains two directories called pos and neg where the positive and negative examples are stored.
Each review is stored in an independent text file, whose name is composed of an id unique to the partition and the score associated with the review, separated by an underscore.
An example of a positive and a negative review is shown in Table 4.1.
4.1.2 Bag-of-words Model
As discussed in Section 2.2, we will encode the text to classify as a bag of words.
That is, we encode each review as a list of numbers, with each position in the list corresponding to a word in our vocabulary, and the value stored in that position corresponding to the number of times the word appears in the review.
For example, say we want to encode the following two reviews:
4 https://ai.stanford.edu/~amaas/data/sentiment/ 5 https://www.imdb.com/
Table 4.1 Two examples of movie reviews from IMDb. The first is a positive review of the movie Puss in Boots (1988). The second is a negative review of the movie Valentine (2001). These reviews can be found at https://www.imdb.com/review/rw0606396/ and https://www.imdb.com/review/rw0721861/, respectively.

Filename: train/pos/24_8.txt | Score: 8/10 | Binary Label: Positive
Review Text: Although this was obviously a low-budget production, the performances and the songs in this movie are worth seeing. One of Walken’s few musical roles to date. (he is a marvelous dancer and singer and he demonstrates his acrobatic skills as well - watch for the cartwheel!) Also starring Jason Connery. A great children’s story and very likable characters.

Filename: train/neg/141_3.txt | Score: 3/10 | Binary Label: Negative
Review Text: This stalk and slash turkey manages to bring nothing new to an increasingly stale genre. A masked killer stalks young, pert girls and slaughters them in a variety of gruesome ways, none of which are particularly inventive. It’s not scary, it’s not clever, and it’s not funny. So what was the point of it?
Review 1: "I liked the movie. My friend liked it too."
Review 2: "I hated it. Would not recommend."
First, we need to create a vocabulary that maps each word to an id that uniquely identifies it.
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary.
For example, one possible vocabulary that encodes the previous reviews is:
{'would': 0,
'hated': 1,
'my': 2,
'liked': 3,
'not': 4,
'it': 5,
'movie': 6,
'recommend': 7,
'the': 8,
'I': 9,
'too': 10,
'friend': 11}
Using this mapping, we can encode the two reviews as follows:
Review1:
[0,0,1,2,0,1,1,0,1,1,1,1]
Review2:
[1,1,0,0,1,1,0,1,0,1,0,0]
Note that the word liked (fourth position) in the first review has a value of two.
This is because this word appears twice in that review.
This is a small example with a vocabulary of only 12 terms.
Of course, the same process needs to be implemented for our whole training dataset.
For this purpose we will use scikit-learn’s CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach.
However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single-character tokens are removed). Some of these may not be adequate for other tasks.
First, we need to obtain the filenames for the reviews in the training set:
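The corresponding fragment from the chap4_perceptron notebook uses glob to collect the review filenames:

```python
from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')
print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))
```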
Once we have acquired the filenames for the training reviews, we need
to read them using the CountVectorizer.
In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly).
The CountVectorizer provides three methods that will be useful for us: a method called fit() that is used to acquire the vocabulary, a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step.
The resulting object is referred to as a document-term matrix, where each row corresponds to a document, and each column corresponds to a term in the vocabulary.
6 https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
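The corresponding fragment from the notebook acquires the vocabulary and builds the document-term matrix in a single call:

```python
from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer, indicating that we will give it a list of filenames to read
cv = CountVectorizer(input='filename')
# learn the vocabulary and return the sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix
```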
As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term).
You may also note that this matrix is sparse, with 3,445,861 stored elements.
A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements.
However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document.
A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it.
Thus, sparse matrices are convenient, especially when dealing with lots of data.
Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array.
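In the notebook this is a single call:

```python
X_train = doc_term_matrix.toarray()
X_train.shape
```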
Finally, we also need the labels of the reviews.
We assign a label of one to positive reviews, and a label of zero to negative ones.
Note that the first half of the reviews are positive and the second half are negative.
The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix.
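The notebook builds the label array as follows:

```python
# training labels: one for positive reviews, zero for negative ones
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
```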
4.1.3 Perceptron
Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative.
The entire code discussed in this section is available in the chap4_perceptron notebook.
Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b.
These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term.
Both will be initialized with zeros.
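The corresponding initialization from the notebook:

```python
# initialize model: the weight vector and bias term are populated with zeros
n_examples, n_features = X_train.shape
w = np.zeros(n_features)
b = 0
```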
The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2:
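Here is the training loop from the chap4_perceptron notebook:

```python
n_epochs = 10
indices = np.arange(n_examples)
for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    # traverse the training data
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # the perceptron decision based on the current model
        score = x @ w + b
        y_pred = 1 if score > 0 else 0
        # update the model if the prediction was incorrect
        if y_true == y_pred:
            continue
        elif y_true == 1 and y_pred == 0:
            w = w + x
            b = b + 1
            n_errors += 1
        elif y_true == 0 and y_pred == 1:
            w = w - x
            b = b - 1
            n_errors += 1
    if n_errors == 0:
        break
```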
There are a couple of details to point out.
Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence.
Theoretically, convergence is defined as predicting all training examples correctly.
This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs.
Another crucial difference between our implementation here and the theoretical Algorithm 2 is that we randomize the order in which the training examples are seen at the beginning of each epoch.
This simple (but highly recommended!)
change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch.
We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels.
The training loop aligns closely with Algorithm 2.
We start by iterating over each example in our training data, storing the current example in the variable x,8
and its corresponding label in the variable y_true.
Next, we compute the perceptron decision function shown in Algorithm 1.
Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types.
Here we use it to calculate the dot product of the example x and the weights w.
To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label.
If the prediction is correct, then no update is needed, and we can move on to the next training example.
However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2.
Sidebar 4.1 The tqdm function
This is our first exposure to the tqdm function.
tqdm is a progress bar that “make[s] your loops show a smart progress meter.”9
The name tqdm comes from the Arabic word taqaddum which can mean “progress.”
Using tqdm is as simple as wrapping it around the collection to be traversed.
After training, we evaluate the model’s performance on the held-out test partition.
The test data is loaded similarly to the training partition, but with one notable difference: we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data.
We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section.
7 As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover.
8 We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters.
9 https://github.com/tqdm/tqdm
Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias.
Scores greater than zero indicate a positive review, and those less than zero are negative.
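In the notebook this is a single line, where X_test is the dense document-term matrix of the test partition:

```python
y_pred = (X_test @ w + b) > 0
```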
At this point we can evaluate the classifier’s performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3).
For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary:
We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores.
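The call itself is simply:

```python
binary_classification_report(y_test, y_pred)
```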
Our F1 score here is 86.8%, much higher than a baseline that assigns labels randomly, which yields an F1 score of about 50%.
This is a good result, especially considering the simplicity of the perceptron!
In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance.
4.1.4 Binary Logistic Regression from Scratch
Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3.
To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy.
All the code shown in this section is available in the chap4_logistic_regression_numpy notebook.
In the perceptron implementation, we represented the weights and the bias as two different variables.
Here, however, we will use a different approach that will allow us to unify them into a single vector variable.
Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15).
\frac{d}{dw_j} C_i(\mathbf{w}, b) = (\sigma_i - y_i) x_{ij} \qquad \text{(3.14 revisited)}

\frac{d}{db} C_i(\mathbf{w}, b) = \sigma_i - y_i \qquad \text{(3.15 revisited)}
Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not.
However, since

\sigma_i - y_i = (\sigma_i - y_i) \cdot 1
we can multiply the derivative of the cost with respect to the bias by one without changing the semantics.
This gives an opportunity for combining the computations, doing them both in a single pass.
The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one.
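A minimal sketch of this step (the notebook may differ in its details):

```python
# append a constant feature of ones so the bias can be learned as a regular weight
n_examples = X_train.shape[0]
X_train = np.column_stack((X_train, np.ones(n_examples)))
```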
As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix).
Then we add this array as a new column to the data matrix, using NumPy’s column_stack function.
Next, we need to initialize our model.
This time we will use a single NumPy array w of the same length as the number of columns in the data matrix.
The weight vector w is initialized randomly with values between 0 and 1:
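One way to do this:

```python
# one weight per column of the augmented data matrix, initialized uniformly in [0, 1)
n_features = X_train.shape[1]
w = np.random.random(n_features)
```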
Before implementing the learning algorithm, we need an implementation of the logistic function.
Recall that the logistic function is
\sigma(x) = \frac{1}{1 + e^{-x}} \qquad \text{(3.1 revisited)}
This function can be easily implemented in NumPy as follows:
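A direct translation of Equation 3.1:

```python
def logistic(x):
    return 1 / (1 + np.exp(-x))
```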
However, this naive implementation may produce the following warning during training:
The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can’t be represented by a float (specifically, we’re using float64 numbers).
We will avoid this issue by not calling exp with values that will overflow.
NumPy provides the function finfo that can be consulted to find the limits of floating point numbers:
The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values:
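One possible safe implementation (a sketch; the notebook's exact guard may differ):

```python
# largest exponent that a float64 can represent without overflowing
max_exp = np.log(np.finfo(np.float64).max)

def logistic(x):
    # clip the argument so that np.exp never receives a value larger than max_exp
    x = np.clip(x, -max_exp, max_exp)
    return 1 / (1 + np.exp(-x))
```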
We now have everything we need to implement Algorithm 4.
The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient.
The size of the update is controlled by the learning rate.
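A sketch of this training loop, reusing the augmented X_train, the weight vector w, and the logistic function from above (the learning rate and number of epochs are assumed values):

```python
learning_rate = 0.1
n_epochs = 10
indices = np.arange(X_train.shape[0])

for epoch in range(n_epochs):
    np.random.shuffle(indices)
    for i in indices:
        x = X_train[i]
        y = y_train[i]
        # (1) use the model to make a prediction
        sigma = logistic(x @ w)
        # (2) gradient of the loss with respect to w (Equations 3.14 and 3.15;
        #     the bias is the last element of w, paired with the constant one feature)
        gradient = (sigma - y) * x
        # (3) update the model parameters
        w = w - learning_rate * gradient
```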
Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section.
Loading and preprocessing the test dataset follows the same
steps as with the previous classifier.
We omit the code for brevity.
These are the results:
The performance is comparable with that of the perceptron.
The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant.
This parity is probably attributable to the fact that the signal distinguishing the two classes is easy to learn, so the simpler perceptron training algorithm is sufficient for this task.
Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually.
Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.
4.1.5 Binary Logistic Regression Utilizing PyTorch
While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this approach will not scale well to arbitrary neural architectures.
Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!)
for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures.
To this end, we will use the PyTorch deep learning library.10
The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce.
Our model for logistic regression corresponds to PyTorch’s Linear layer.
When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we’re doing binary classification).
The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch.
In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm.
Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1.
This is equivalent to the discussion in Section 3.2.
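Putting these pieces together might look as follows (n_features is the vocabulary size, as before; the exact notebook code may differ):

```python
import torch
from torch import nn, optim

model = nn.Linear(n_features, 1)        # one output neuron for binary classification
loss_func = nn.BCEWithLogitsLoss()      # binary cross-entropy over raw scores (logits)
optimizer = optim.SGD(model.parameters(), lr=0.1)
```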
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters.
10 https://pytorch.org/
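A sketch of these five steps, assuming X_train and y_train have been converted to float tensors:

```python
for x, y_true in zip(X_train, y_train):
    # (1) ensure the gradients are set to zeros
    model.zero_grad()
    # (2) apply the model to obtain a prediction (a raw score)
    y_pred = model(x)
    # (3) calculate the loss
    loss = loss_func(y_pred, y_true.unsqueeze(0))
    # (4) compute the gradient of the loss by back-propagation
    loss.backward()
    # (5) update the model parameters
    optimizer.step()
```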
Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters.
Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction.
This means that we are describing the logical steps without specifying a particular implementation.
Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer.
Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification.
This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch.
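Evaluation then reduces to a forward pass over the test data (a sketch; X_test is assumed to be a float tensor built like X_train):

```python
with torch.no_grad():
    y_pred = model(X_test) > 0   # positive scores correspond to positive labels
```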
As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores.
Once again, a positive score corresponds to a positive label.
When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models:
Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms.
However, this becomes cumbersome for more complex neural architectures.
For this reason, from this point on, we will use PyTorch for all our coding examples.
4.2 Multiclass Classification
So far, in this chapter we have discussed implementing binary classifiers.
Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.
4.2.1 AG News Dataset
Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification.
To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11
The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing).
11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html
The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.
4.2.2 Preparing the Dataset
The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description.
The dataset also provides a text file that maps the above class indexes to more descriptive class labels.
Because of the tabular nature of the dataset, pandas, a Python library for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation. First, we show how to load the CSV, add column names, and inspect the result:
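A sketch of this step (the CSV path is an assumption; the files have no header row):

```python
import pandas as pd

train_df = pd.read_csv(
    'data/ag_news_csv/train.csv',
    header=None,
    names=['class index', 'title', 'description'],
)
train_df
```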
[pandas output: 120,000 rows × 3 columns (class index, title, description). For example, row 0 holds class index 3, the title "Wall St. Bears Claw Back Into the Black (Reuters)", and the description "Reuters - Short-sellers, Wall Street's dwindli..."; row 119999 holds class index 2, the title "Nets get Carter from Raptors", and the description "INDIANAPOLIS -- All-Star Vince Carter was trad...".]
Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas’ terminology) to increase the interpretability of the data.
We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position.
Note that the label indices are one-based, so we subtract one to align them with their labels.
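A sketch of this mapping (the label file name is an assumption):

```python
# read the descriptive class labels, one per line: World, Sports, Business, Sci/Tech
labels = open('data/ag_news_csv/classes.txt').read().splitlines()

# class indices are one-based, so subtract one before looking up the label
classes = train_df['class index'].map(lambda i: labels[i - 1])
train_df.insert(1, 'class', classes)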
12 https://pandas.pydata.org
[pandas output: 120,000 rows × 4 columns (class index, class, title, description). The new class column shows the descriptive labels: rows with class index 3 are now labeled Business, index 1 World, and index 2 Sports.]
Next we will preprocess the text.
First we lowercase the title and description, and then we concatenate them into a single string.
Then we remove some spurious backslashes from the text.
Once this is done, the preprocessed text is added to the dataframe as a new column.
Note that pandas allows these steps to be applied to all rows simultaneously.
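A sketch of this preprocessing, applied to all rows at once:

```python
# lowercase the title and description, concatenate them, and remove spurious backslashes
title = train_df['title'].str.lower()
description = train_df['description'].str.lower()
text = title + ' ' + description
train_df['text'] = text.str.replace('\\', ' ', regex=False)
```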
[pandas output: 120,000 rows × 5 columns (class index, class, title, description, text). The new text column holds the lowercased concatenation of title and description, e.g., "wall st. bears claw back into the black (reute..." for row 0.]
At this point, the text is ready to be tokenized.
For this purpose we will use NLTK’s word_tokenize function.
This function can be applied to the whole column at once using the pandas map function, which returns a new column that we add to the dataframe.
However, here we actually use the progress_map function, which provides a visual progress bar.
This visual feedback is especially helpful for tasks that take more time to complete.
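A sketch of the tokenization step:

```python
from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

tqdm.pandas()  # makes progress_map() available on pandas objects
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)
```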
[pandas output: 120,000 rows × 6 columns (class index, class, title, description, text, tokens). The new tokens column holds the token list for each article, e.g., [wall, st., bears, claw, back, into, the, blac... for row 0.]
From the tokens we just created, we then create a vocabulary for our corpus.
Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens.
Note that each row in the tokens column contains a list of tokens.
In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() Pandas method.
Then we will use the value_counts() method to create a Series object in which the index are the tokens and the values are the number of times they appear in the corpus.
The next step is removing the tokens with a count lower than our chosen threshold.
Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list).
We include in the vocabulary a special token
[UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning.
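A sketch of the vocabulary construction (the position of [UNK] in the list is an assumption):

```python
threshold = 10

# flatten the lists of tokens into a single Series and count token frequencies
counts = train_df['tokens'].explode().value_counts()

# keep tokens that appear at least `threshold` times, plus the unknown-token placeholder
vocabulary = counts[counts >= threshold].index.tolist() + ['[UNK]']
token_to_id = {token: i for i, token in enumerate(vocabulary)}
```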
Using this vocabulary, we construct a feature vector for each news article in the corpus.
This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article.
As above, the feature vectors will be stored as a new column in the dataframe.
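A sketch of the feature construction:

```python
from collections import Counter

unk_id = token_to_id['[UNK]']

def make_feature_vector(tokens):
    # map each token to its id, falling back to [UNK] for out-of-vocabulary tokens
    ids = [token_to_id.get(t, unk_id) for t in tokens]
    return dict(Counter(ids))

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)
```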
[pandas output: 120,000 rows × 7 columns (class index, class, title, description, text, tokens, features). The new features column maps token ids to counts, e.g., {427: 2, 563: 1, 1607: 1, 15062: 1, 120: 1, 73... for row 0.]
The final preprocessing step is converting the features and the class indices into PyTorch tensors.
Recall that we need to subtract one from the class indices to make them zero-based.
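A sketch of this conversion:

```python
import torch

def to_dense(features, size=len(vocabulary)):
    # expand the sparse dictionary of counts into a dense vector
    x = torch.zeros(size)
    for token_id, count in features.items():
        x[token_id] = count
    return x

X_train = torch.stack([to_dense(f) for f in train_df['features']])
y_train = torch.tensor(train_df['class index'].values) - 1  # zero-based class indices
```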
At this point, the data is fully processed and we are ready to begin training.
4.2.3 Multiclass Logistic Regression Using PyTorch
The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus.
PyTorch’s Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example.
The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression.
However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class.
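A sketch of the multiclass setup (the learning rate is an assumed value):

```python
from torch import nn, optim

n_classes = 4
model = nn.Linear(len(vocabulary), n_classes)   # one output score per class
loss_func = nn.CrossEntropyLoss()               # softmax plus negative log-likelihood
optimizer = optim.SGD(model.parameters(), lr=0.1)
```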
For each example, the model predicts 4 scores – one for each label.
The label with the highest score is selected using the argmax function.
We evaluate the predictions of our model for each class using scikit-learn’s classification_report, which handles the results of multiclass classification.
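A sketch of the evaluation (X_test, y_test, and labels are assumed to be built the same way as their training counterparts):

```python
from sklearn.metrics import classification_report

with torch.no_grad():
    scores = model(X_test)                 # one score per class for each example
    y_pred = torch.argmax(scores, dim=1)   # pick the label with the highest score

print(classification_report(y_test.numpy(), y_pred.numpy(), target_names=labels))
```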
4.3 Summary
In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression.
For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch.
We hope that through this series of exercises the reader has noted several key takeaways.
First, data preparation is important and should be done thoughtfully.
Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful.
However, what works for one dataset and one language may not be suitable for another scenario.
For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization.
Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves.
This becomes cumbersome quickly.
For example, even the derivative of the softmax is non-trivial.
Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained.
That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing.
These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
#!/usr/bin/env python
# coding: utf-8
# # Binary Text Classification with Perceptron
# In[1]:
import random
import numpy as np
from tqdm.notebook import tqdm
# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234
if seed is not None:
    random.seed(seed)
    np.random.seed(seed)
# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.
# In[2]:
get_ipython().system('ls -lh data/aclImdb/')
# Both the `train` and `test` directories contain two directories called `pos` and `neg` that contain text files with the positive and negative reviews, respectively.
# In[3]:
get_ipython().system('ls -lh data/aclImdb/train/')
# We will now read the filenames of the positive and negative examples.
# In[4]:
from glob import glob
pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')
print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))
# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) to read the text files, tokenize them, acquire a vocabulary from the training data, and encode it in a document-term matrix in which each row represents a review, and each column represents a term in the vocabulary. Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$.
# In[5]:
from sklearn.feature_extraction.text import CountVectorizer
# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')
# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix
# Note in the message printed above that the matrix is of shape (25000, 74849).
# In other words, a dense version of it would have 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the English language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:
# In[6]:
X_train = doc_term_matrix.toarray()
X_train.shape
# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.
# In[7]:
# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train
# Now we will initialize our model, in the form of an array of weights `w` of the same size as the number of features in our dataset (i.e., the number of words in the vocabulary acquired by [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)), and a bias term `b`.
# Both are initialized to zeros.
# In[8]:
# initialize model: the feature vector and bias term are populated with zeros
n_examples, n_features = X_train.shape
w = np.zeros(n_features)
b = 0
# Now we will use the perceptron learning algorithm to learn the values of `w` and `b` from our training data.
# In[9]:
n_epochs = 10
indices = np.arange(n_examples)
for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    # traverse the training data
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # the perceptron decision based on the current model
        score = x @ w + b
        y_pred = 1 if score > 0 else 0
        # update the model if the prediction was incorrect
        if y_true == y_pred:
            continue
        elif y_true == 1 and y_pred == 0:
            w = w + x
            b = b + 1
            n_errors += 1
        elif y_true == 0 and y_pred == 1:
            w = w - x
            b = b - 1
            n_errors += 1
    if n_errors == 0:
        break
# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform) method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform) method that we used above. This is because we want to use the learned vocabulary in the test set, instead of learning a new one.
# In[10]:
pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')
doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])
# Using the model is easy: multiply the document-term matrix by the learned weights and add the bias.
# We use Python's `@` operator to perform the matrix-vector multiplication.
# In[11]:
y_pred = (X_test @ w + b) > 0
# Now we evaluate the prediction results using our own `binary_classification_report()` function, which computes precision, recall, F1, and accuracy for the binary case.
# In[12]:
def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }
# In[13]:
binary_classification_report(y_test, y_pred)
chap04-1 | chap04-1 | 4
Implementing Text Classification Using Perceptron and Logistic Regression
In the previous chapters we have discussed the theory behind the perceptron and logistic regression, including mathematical explanations of how and why they are able to learn from examples.
In this chapter we will transition from math to code.
Specifically, we will discuss how to implement these models in the Python programming language.
All the code that we will introduce throughout this book is available online as well: http://clulab.github.io/gentlenlp/. The reader who is not familiar with the Python programming language is encouraged to read first Appendix A, for a brief introduction to the language, and Appendix B, for a discussion on how computers encode and preprocess text.
Once done, please return here.
To get a better understanding of how these algorithms work under the hood, we will start by implementing them from scratch.
However, as the book progresses, we will introduce some of the popular tools and libraries that make Python the language of choice for machine learning, e.g., PyTorch,1 and Hugging Face’s transformers.2
The code for all the examples in the book is provided in the form of Jupyter notebooks.3
Important fragments of these notebooks will be presented in the implementation chapters so that the reader has the whole picture just by reading the book.
However, we strongly encourage you to download the notebooks and execute them yourself.
We also encourage you to modify them to conduct your own experiments!
1 https://pytorch.org
2 https://huggingface.co 3 https://jupyter.org/
55
56 Implementing Text Classification Using Perceptron and LR 4.1 Binary Classification
We begin this chapter with binary classification.
That is, we aim to train classifiers that assign one of two labels to a given text.
As the example for this task, we will train a review classifier using the the Large Movie Review Dataset (Maas et al., 2011).4 We tackle this task by implementing first a binary perceptron classifier, followed by a binary logistic regression one.
We will implement the latter both from scratch as well as using PyTorch, so the reader has a clearer understanding on how PyTorch works “under the hood.”
4.1.1 Large Movie Review Dataset
This dataset contains movie reviews and their associated scores (between 1 and 10) as provided by IMDb.5 converted these scores to binary labels by assigning each review a positive or negative label if the review score was above 6 or below 5, respectively.
Reviews with scores 5 and 6 were considered too neutral and thus excluded.
We follow the same protocol in this chapter.
The dataset is divided in two even partitions called train and test, each containing 25,000 reviews.
The dataset also provides additional unlabeled reviews, but we will not use those here.
Each partition contains two directories called pos and neg where the positive and negative examples are stored.
Each review is stored in an independent text file, whose name is composed of an id unique to the partition and the score associated with the review, separated by an underscore.
An example of a positive and a negative review is shown in Table 4.1.
4.1.2 Bag-of-words Model
As discussed in Section 2.2, we will encode the text to classify as a bag of words.
That is, we encode each review as a list of numbers, with each position in the list corresponding to a word in our vocabulary, and the value stored in that position corresponding to the number of times the word appears in the review.
For example, say we want to encode the following two reviews:
4 https://ai.stanford.edu/~amaas/data/sentiment/ 5 https://www.imdb.com/
Maas et al.
4.1 Binary Classification 57
Table 4.1 Two examples of movie reviews from IMDb.
The first is a positive review of the movie Puss in Boots (1988).
The second is a negative review of the movie Valentine (2001).
These reviews can be found at https://www.imdb.com/review/rw0606396/ and https://www.imdb.com/review/rw0721861/, respectively.
Filename Score Binary Label
train/pos/24_8.txt 8/10 Positive
train/neg/141_3.txt 3/10 Negative
Review Text
Although this was obviously a low-budget production, the performances and the songs in this movie are worth seeing.
One of Walken’s few musical roles to date.
(he is a marvelous dancer and singer and he demonstrates his acrobatic skills as well - watch for the cartwheel!)
Also starring Jason Connery.
A great children’s story and very likable characters.
This stalk and slash turkey manages to bring nothing new to an increasingly stale genre.
A masked killer stalks young, pert girls and slaughters them in a variety of gruesome ways, none of which are particularly inventive.
It’s not scary, it’s not clever, and it’s not funny.
So what was the point of it?
Review 1: Review 2:
"I liked the movie.
My friend liked it too.
"
"I hated it.
Would not recommend.
"
First, we need to create a vocabulary that maps each word to an id that uniquely identifies it.
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary.
For example, one possible vocabulary that encodes the previous reviews is:
{'would': 0,
'hated': 1,
58
Implementing Text Classification Using Perceptron and LR
'my': 2,
'liked': 3,
'not': 4,
'it': 5,
'movie': 6,
'recommend': 7,
'the': 8,
'I': 9,
'too': 10,
'friend': 11}
Using this mapping, we can encode the two reviews as follows:
Review1:
[0,0,1,2,0,1,1,0,1,1,1,1]
Review2:
[1,1,0,0,1,1,0,1,0,1,0,0]
Note that the word liked (fourth position) in the first review has a value of two.
This is because this word appears twice in that review.
This is a small example with a vocabulary of only 12 terms.
Of course, the same process needs to be implemented for our whole training dataset.
For this purpose we will use scikit-learn’s CountVectorizer
class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach.
However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single character tokens are removed).
Some of these may not be adequate to other tasks.
First, we need to obtain the filenames for the reviews in the training set:
Once we have acquired the filenames for the training reviews, we need
to read them using the CountVectorizer.
In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly).
The CountVectorizer provides three methods that will be use-
ful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step.
The resulting object is referred to as a document-term matrix, where each row corre-
6 https://scikitlearn.org/stable/modules/generated/sklearn.feature_ extraction.text.CountVectorizer.html
4.1 Binary Classification 59
sponds to a document, and each column corresponds to a term in the vocabulary.
As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term).
Also you may note that this matrix is sparse, with 3,445,861 stored elements.
A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements.
However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document.
A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it.
Thus, sparse matrices are convenient, especially when dealing with lots of data.
Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array.
Finally, we also need the labels of the reviews.
We assign a label of one to positive reviews, and a label of zero to negative ones.
Note that the first half of the reviews are positive and the second half are negative.
The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix.
4.1.3 Perceptron
Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative.
The entire code discussed in this section is available in the chap4_perceptron notebook.
Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b.
These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term.
Both will be initialized with zeros.
The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2:
There are a couple of details to point out.
Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence.
Theoretically, convergence is defined as predicting all training examples correctly.
This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs.
Another crucial difference between our implementation here and the theoretical Algorithm 2, is that we randomize the order in which the training examples are seen at the beginning of
60 Implementing Text Classification Using Perceptron and LR
each epoch.
This simple (but highly recommended!)
change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch.
We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels.
The training loop aligns closely with Algorithm 2.
We start by iterating over each example in our training data, storing the current example in the variable x,8
and its corresponding label in the variable y_true.
Next, we compute the perceptron decision function shown in Algorithm 1.
Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types.
Here we use it to calculate the dot product of the example x and the weights w.
To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label.
If the prediction is correct, then no update is needed, and we can move on to the next training example.
However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2.
Sidebar 4.1 The tqdm function
This is our first exposure to the tqdm function.
tqdm is a progress bar that “make your loops show a smart progress meter.”9
The name tqdm comes from the Arabic word taqaddum which can mean “progress.”
Using tqdm is as simple as wrapping it around the collection to be traversed.
After training, we evaluate the model’s performance on the heldout test partition.
The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data.
We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section.
.
7
As an extreme example, consider a dataset where all the positive examples appear first in the training partition.
This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover.
.
8 We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters.
9 https://github.com/tqdm/tqdm
4.1 Binary Classification 61
Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias.
Scores greater than zero indicate a positive review, and those less than zero are negative.
At this point we can evaluate the classifier’s performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3).
For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary:
We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores.
Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%.
This is a good result, especially considering the simplicity of the perceptron!
In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance.
4.1.4 Binary Logistic Regression from Scratch
Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3.
To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy.
All the code shown in this section is available in the chap4_logistic_regression_numpy notebook.
In the perceptron implementation, we represented the weights and the bias as two different variables.
Here, however, we will use a different approach that will allow us to unify them into a single vector variable.
Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15).
\frac{d}{dw_j} C_i(w, b) = (\sigma_i - y_i) x_{ij} \quad \text{(3.14 revisited)}
\frac{d}{db} C_i(w, b) = \sigma_i - y_i \quad \text{(3.15 revisited)}
Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not.
However, since
\sigma_i - y_i = (\sigma_i - y_i) \cdot 1,
we can multiply the derivative of the cost with respect to the bias by one without changing the semantics.
This gives an opportunity for combining the computations, doing them both in a single pass.
The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one.
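A sketch of this step, assuming X_train is the dense document-term matrix built earlier:

import numpy as np

# one artificial feature with a constant value of 1 plays the role of the bias
ones = np.ones(X_train.shape[0])
X_train = np.column_stack((X_train, ones))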
As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix).
Then we add this array as a new column to the data matrix, using NumPy’s column_stack function.
Next, we need to initialize our model.
This time we will use a single NumPy array w of the same length as the number of columns in the data matrix.
The weight vector w is initialized randomly with values between 0 and 1:
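For example (the last weight corresponds to the constant feature added above and therefore acts as the bias):

n_examples, n_features = X_train.shape
w = np.random.random(n_features)  # random values in [0, 1)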
Before implementing the learning algorithm, we need an implementation of the logistic function.
Recall that the logistic function is
\sigma(x) = \frac{1}{1 + e^{-x}} \quad \text{(3.1 revisited)}
This function can be easily implemented in NumPy as follows:
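For instance, a direct translation of the formula is:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))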
However, this naive implementation may produce the following warning during training:
The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can’t be represented by a float (specifically, we’re using float64 numbers).
We will avoid this issue by not calling exp with values that will overflow.
NumPy provides the function finfo that can be consulted to find the limits of floating point numbers:
The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values:
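One way to implement this guard is the following sketch, which returns 0 whenever exp(-z) would overflow (in that regime the logistic function is vanishingly close to 0 anyway):

def sigmoid(z):
    # if -z exceeds the log of the largest float, exp(-z) would overflow
    if -z > np.log(np.finfo(float).max):
        return 0.0
    return 1 / (1 + np.exp(-z))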
We now have everything we need to implement Algorithm 4.
The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient.
The size of the update is controlled by the learning rate.
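Putting these pieces together, the training loop might be sketched as follows (the learning rate and the number of epochs are illustrative choices):

lr = 0.1
n_epochs = 10
indices = np.arange(X_train.shape[0])

for epoch in range(n_epochs):
    np.random.shuffle(indices)
    for i in indices:
        x = X_train[i]
        y = y_train[i]
        # gradient of the loss for this example; the bias is the last weight
        gradient = (sigmoid(x @ w) - y) * x
        # stochastic gradient descent update
        w = w - lr * gradient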
Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section.
Loading and preprocessing the test dataset follows the same steps as with the previous classifier.
We omit the code for brevity.
These are the results:
The performance is comparable with that of the perceptron.
The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant.
This parity is probably attributable to the fact that the signal distinguishing the two classes is easy to learn, so the simpler perceptron training algorithm is sufficient in this case.
Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually.
Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.
4.1.5 Binary Logistic Regression Utilizing PyTorch
While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this will not scale well to arbitrary neural architectures.
Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!)
for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures.
To this end, we will use the PyTorch deep learning library.10
The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce.
Our model for logistic regression corresponds to PyTorch’s Linear layer.
When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we’re doing binary classification).
The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch.
In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm.
Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1.
This is equivalent to the discussion in Section 3.2.
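A sketch of how these components might be instantiated (vocab_size stands for the number of features in the document-term matrix used above):

from torch import nn, optim

vocab_size = X_train.shape[1]

model = nn.Linear(vocab_size, 1)               # a single output neuron for binary classification
loss_func = nn.BCEWithLogitsLoss()             # binary cross-entropy computed from raw scores
optimizer = optim.SGD(model.parameters(), lr=0.1)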
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters.
10 https://pytorch.org/
Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters.
Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction.
This means that we are describing the logical steps without specifying a particular implementation.
Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer.
Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification.
This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch.
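Under these assumptions, a per-example training loop could be sketched as follows (X_train_tensor and y_train_tensor are assumed to be float tensors holding the feature vectors and the 0/1 labels, with each label stored as a one-element tensor so that it matches the shape of the model output):

n_epochs = 10

for epoch in range(n_epochs):
    for x, y in zip(X_train_tensor, y_train_tensor):
        # (1) clear the gradients accumulated in the previous step
        optimizer.zero_grad()
        # (2) apply the model to obtain a predicted score (a raw logit)
        y_pred = model(x)
        # (3) compute the loss between the prediction and the gold label
        loss = loss_func(y_pred, y)
        # (4) back-propagate to compute the gradients
        loss.backward()
        # (5) let the optimizer update the model parameters
        optimizer.step()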
As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores.
Once again, a positive score corresponds to a positive label.
When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models:
Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms.
However, this becomes cumbersome for more complex neural architectures.
For this reason, from this point on, we will use PyTorch for all our coding examples.
4.2 Multiclass Classification
So far, in this chapter we have discussed implementing binary classifiers.
Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.
4.2.1 AG News Dataset
Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification.
To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11
The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing).
11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html
The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.
4.2.2 Preparing the Dataset
The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description.
The dataset also provides a text file that maps the above class indexes to more descriptive class labels.
Because of the tabular nature of the dataset, pandas, a Python library for tabular data analysis,12 is a natural choice for loading and transforming it.
To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation.
First, we show how to load the CSV, add column names, and inspect the result:
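For instance (the file path is hypothetical, and the CSV files are assumed to have no header row):

import pandas as pd

train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None)
train_df.columns = ['class index', 'title', 'description']
train_df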
[Output: a dataframe with 120,000 rows × 3 columns (class index, title, description), showing the first and last five rows of the training data.]
Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas’ terminology) to increase the interpretability of the data.
We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position.
Note that the label indices are one-based, so we subtract one to align them with their labels.
12 https://pandas.pydata.org
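A sketch of this step (the labels file name is hypothetical, with one label per line in class-index order):

# read the descriptive class labels, one per line
labels = open('data/ag_news_csv/classes.txt').read().splitlines()

# map one-based class indices to labels and insert the new column at position 1
classes = train_df['class index'].map(lambda i: labels[i - 1])
train_df.insert(1, 'class', classes)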
[Output: the dataframe with 120,000 rows × 4 columns, now including the class column (e.g., Business, World, Sports) next to the class index.]
Next we will preprocess the text.
First we lowercase the title and description, and then we concatenate them into a single string.
Then we remove some spurious backslashes from the text.
Once this is done, the preprocessed text is added to the dataframe as a new column.
Note that pandas allows these steps to be applied to all rows simultaneously.
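One way to express these steps with pandas' vectorized string operations (a sketch; the exact cleanup in the notebook may differ):

# lowercase, concatenate title and description, and drop the stray backslashes
text = train_df['title'].str.lower() + ' ' + train_df['description'].str.lower()
train_df['text'] = text.str.replace('\\', ' ', regex=False)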
[Output: the dataframe with 120,000 rows × 5 columns, now including the text column that holds the lowercased, concatenated title and description.]
At this point, the text is ready to be tokenized.
For this purpose we will use NLTK’s word_tokenize function.
This function can be applied to the whole column at once using the pandas map function, which returns a new column that we then add to the dataframe.
However, here we actually use the progress_map function, which provides a visual progress bar.
This visual feedback is especially helpful for tasks that take more time to complete.
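A sketch of this step (tqdm.pandas() registers progress_map on pandas objects, and NLTK's punkt tokenizer models are assumed to be installed):

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

tqdm.pandas()  # adds progress_map / progress_apply to pandas objects
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)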
[Output: the dataframe with 120,000 rows × 6 columns, now including the tokens column with the list of tokens for each article.]
From the tokens we just created, we then create a vocabulary for our corpus.
Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens.
Note that each row in the tokens column contains a list of tokens.
In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using pandas' explode() method.
Then we will use the value_counts() method to create a Series object in which the index are the tokens and the values are the number of times they appear in the corpus.
The next step is removing the tokens with a count lower than our chosen threshold.
Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list).
We include in the vocabulary a special token
[UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning.
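These steps might be sketched as follows (placing [UNK] at position zero is an arbitrary choice):

threshold = 10

# one row per token, then count how often each token occurs in the corpus
counts = train_df['tokens'].explode().value_counts()

# keep the frequent tokens and reserve an id for the unknown-token placeholder
vocabulary = ['[UNK]'] + counts[counts >= threshold].index.tolist()
token_to_id = {token: i for i, token in enumerate(vocabulary)}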
Using this vocabulary, we construct a feature vector for each news article in the corpus.
This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article.
As above, the feature vectors will be stored as a new column in the dataframe.
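For example, reusing the token_to_id mapping from above (a sketch):

from collections import Counter

def make_features(tokens):
    # map tokens to ids, falling back to the [UNK] id for out-of-vocabulary tokens
    unk_id = token_to_id['[UNK]']
    return dict(Counter(token_to_id.get(t, unk_id) for t in tokens))

train_df['features'] = train_df['tokens'].map(make_features)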
[Output: the dataframe with 120,000 rows × 7 columns, now including the features column that maps token ids to counts for each article.]
The final preprocessing step is converting the features and the class indices into PyTorch tensors.
Recall that we need to subtract one from the class indices to make them zero-based.
At this point, the data is fully processed and we are ready to begin training.
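One possible way to build these tensors, assuming the features and vocabulary from above and a dense representation for simplicity:

import torch

def to_dense(features, size):
    # expand a sparse {token id: count} dictionary into a dense vector
    x = torch.zeros(size)
    for token_id, count in features.items():
        x[token_id] = count
    return x

X_train = torch.stack([to_dense(f, len(vocabulary)) for f in train_df['features']])
y_train = torch.tensor(train_df['class index'].values) - 1   # zero-based class ids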
4.2.3 Multiclass Logistic Regression Using PyTorch
The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus.
PyTorch’s Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example.
The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression.
However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class.
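A sketch of the multiclass setup (the learning rate is an illustrative choice):

from torch import nn, optim

n_classes = 4
model = nn.Linear(len(vocabulary), n_classes)   # the bias is included by default
loss_func = nn.CrossEntropyLoss()               # softmax + cross-entropy over the class scores
optimizer = optim.SGD(model.parameters(), lr=0.1)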
For each example, the model predicts 4 scores – one for each label.
The label with the highest score is selected using the argmax function.
We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.
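For instance, assuming X_test and y_test are tensors built the same way as the training data:

import torch
from sklearn.metrics import classification_report

with torch.no_grad():
    scores = model(X_test)                # one score per class for each article
    y_pred = torch.argmax(scores, dim=1)  # pick the highest-scoring class

print(classification_report(y_test, y_pred,
                            target_names=['World', 'Sports', 'Business', 'Sci/Tech']))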
4.3 Summary
In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression.
For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch.
We hope that through this series of exercises the reader has noted several key takeaways.
First, data preparation is important and should be done thoughtfully.
Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful.
However, what works for one dataset and one language may not be suitable for another scenario.
For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, and we removed diacritics during normalization.
Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves.
This becomes cumbersome quickly.
For example, even the derivative of the softmax is non-trivial.
Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained.
That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing.
These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch.
We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels.
The training loop aligns closely with Algorithm 2.
We start by iterating over each example in our training data, storing the current example in the variable x,8
and its corresponding label in the variable y_true.
Next, we compute the perceptron decision function shown in Algorithm 1.
Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types.
Here we use it to calculate the dot product of the example x and the weights w.
To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label.
If the prediction is correct, then no update is needed, and we can move on to the next training example.
However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2.
Sidebar 4.1 The tqdm function
This is our first exposure to the tqdm function.
tqdm is a progress bar that “make your loops show a smart progress meter.”9
The name tqdm comes from the Arabic word taqaddum which can mean “progress.”
Using tqdm is as simple as wrapping it around the collection to be traversed.
After training, we evaluate the model’s performance on the heldout test partition.
The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data.
We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section.
.
7
As an extreme example, consider a dataset where all the positive examples appear first in the training partition.
This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover.
.
8 We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters.
9 https://github.com/tqdm/tqdm
4.1 Binary Classification 61
Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias.
Scores greater than zero indicate a positive review, and those less than zero are negative.
At this point we can evaluate the classifier’s performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3).
For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary:
We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores.
Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%.
This is a good result, especially considering the simplicity of the perceptron!
In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance.
4.1.4 Binary Logistic Regression from Scratch
Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3.
To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy.
All the code shown in this section is available in the chap4_logistic_regression_numpy notebook.
In the perceptron implementation, we represented the weights and the bias as two different variables.
Here, however, we will use a different approach that will allow us to unify them into a single vector variable.
Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15).
d Ci(w, b) = (σi − yi)xij (3.14 revisited) dwj
d Ci(w, b) = σi − yi (3.15 revisited) db
Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not.
However,
62 Implementing Text Classification Using Perceptron and LR since
σi − yi = (σi − yi)1
we can multiply the derivative of the cost with respect to the bias by one without changing the semantics.
This gives an opportunity for combining the computations, doing them both in a single pass.
The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one.
As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix).
Then we add this array as a new column to the data matrix, using NumPy’s column_stack function.
Next, we need to initialize our model.
This time we will use a single NumPy array w of the same length as the number of columns in the data matrix.
The weight vector w is initialized randomly with values between 0 and 1:
Before implementing the learning algorithm, we need an implementation of the logistic function.
Recall that the logistic function is
σ(x) = 1 (3.1 revisited) 1+e−x
This function can be easily implemented in NumPy as follows:
However, this naive implementation may produce the following warning during training:
The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can’t be represented by a float (specifically, we’re using float64 numbers).
We will avoid this issue by not calling exp with values that will overflow.
NumPy provides the function finfo that can be consulted to find the limits of floating point numbers:
The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values:
We now have everything we need to implement Algorithm 4.
The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient.
The size of the update is controlled by the learning rate.
Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section.
Loading and preprocessing the test dataset follows the same
4.1 Binary Classification 63
steps as with the previous classifier.
We omit the code for brevity.
These are the results:
The performance is comparable with that of the perceptron.
The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant.
Classifier parity is probably attributable to the fact that the signal distinguishing the two classes being easy to learn and the simpler perceptron training algorithm being sufficient in this case.
Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually.
Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.
4.1.5 Binary Logistic Regression Utilizing PyTorch
While it is fairly straightforward to compute the derivatives for logistic regression and implement then directly in NumPy, this will not scale well to arbitrary neural architectures.
Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!)
for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures.
To this end, we will use the PyTorch deep learning library10.
The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce.
Our model for logistic regression corresponds to PyTorch’s Linear layer.
When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we’re doing binary classification).
The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch.
In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm.
Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1.
This is equivalent to the discussion in Section 3.2.
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate
10 https://pytorch.org/
64 Implementing Text Classification Using Perceptron and LR
the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters.
Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters.
Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction.
This means that we are describing the logical steps without specifying a particular implementation.
Instead, implementation details are the responsability of the chosen model, loss function, and optimizer.
Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification.
This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch.
As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores.
Once again, a positive score corresponds to a positive label.
When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models:
Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms.
However, this becomes cumbersome for more complex neural architectures.
For this reason, from this point on, we will use PyTorch for all our coding examples.
4.2 Multiclass Classification
So far, in this chapter we have discussed implementing binary classifiers.
Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.
4.2.1 AG News Dataset
Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification.
To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11
The classification dataset consists of four
11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html
4.2 Multiclass Classification 65
classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing).
The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.
4.2.2 Preparing the Dataset
The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description.
The dataset also provides a text file that maps the above class indexes to more descriptive class labels.
Because of the tabular nature of the dataset, pandas, a Python library
for tabular data analysis,12 is a natural choice for loading and transform-
ing it.
To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well
as model training and evaluation.
First, we show how to load the CSV,
add column names, and inspect the result:
class index
.
0 3
.
1 3
.
2 3
.
3 3
.
4 3
... ...
.
119995 1
.
119996 2
.
119997 2
.
119998 2
.
119999 2
title Wall St. Bears Claw Back Into the Black (Reuters) Carlyle Looks Toward Commercial Aerospace (Reu... Oil and Economy Cloud Stocks' Outlook (Reuters) Iraq Halts Oil Exports from Main Southern Pipe...
Oil prices soar to all-time record, posing new... ...
Pakistan's Musharraf Says Won't Quit as Army C...
Renteria signing a top-shelf deal Saban not going to Dolphins yet
Today's NFL games Nets get Carter from Raptors
description Reuters - Short-sellers, Wall Street's dwindli...
Reuters - Private investment firm Carlyle Grou... Reuters - Soaring crude prices plus worries\ab... Reuters - Authorities have halted oil export\f... AFP - Tearaway world oil prices, toppling reco... ...
KARACHI (Reuters) - Pakistani President Perve...
Red Sox general manager Theo Epstein acknowled...
The Miami Dolphins will put their courtship of... PITTSBURGH at NY GIANTS Time: 1:30 p.m. Line: ...
INDIANAPOLIS -- All-Star Vince Carter was trad...
120000 rows × 3 columns
Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas’ terminology) to increase the interpretability of the data.
We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position.
Note that the label indices are one-based, so we subtract one to align them with their labels.
12 https://pandas.pydata.org
66 Implementing Text Classification Using Perceptron and LR
class index
.
0 3
.
1 3
.
2 3
.
3 3
.
4 3
... ...
.
119995 1
.
119996 2
.
119997 2
.
119998 2
.
119999 2
class Business Business Business Business Business ...
World Sports Sports Sports Sports
title Wall St. Bears Claw Back Into the Black (Reuters)
Oil and Economy Cloud Stocks' Outlook (Reuters)
Oil prices soar to all-time record, posing new...
Pakistan's Musharraf Says Won't Quit as Army C...
Saban not going to Dolphins yet Nets get Carter from Raptors
description Reuters - Short-sellers, Wall Street's dwindli...
Reuters - Soaring crude prices plus worries\ab... AFP - Tearaway world oil prices, toppling reco...
KARACHI (Reuters) - Pakistani President Perve...
The Miami Dolphins will put their courtship of... INDIANAPOLIS -- All-Star Vince Carter was trad...
Iraq Halts Oil Exports from Main Southern Pipe... Reuters - Authorities have halted oil export\f...
... ...
Renteria signing a top-shelf deal Red Sox general manager Theo Epstein acknowled...
120000 rows × 4 columns
Carlyle Looks Toward Commercial Aerospace (Reu... Reuters - Private investment firm Carlyle Grou...
Today's NFL games PITTSBURGH at NY GIANTS Time: 1:30 p.m.
Line: ...
Next we will preprocess the text.
First we lowercase the title and description, and then we concatenate them into a single string.
Then we remove some spurious backslashes from the text.
Once this is done, the preprocessed text is added to the dataframe as a new column.
Note that pandas allows these steps to be applied to all rows simultaneously.
The dataframe now contains 120,000 rows and five columns, with the new text column holding the lowercased title and description concatenated into a single string.
At this point, the text is ready to be tokenized.
For this purpose we will use NLTK’s word_tokenize function.
This function can be applied to the whole column at once using the pandas map function, which returns a new column that we then add to the dataframe.
However, here we actually use the progress_map function, which provides a visual progress bar.
This visual feedback is especially helpful for tasks that take more time to complete.
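The corresponding code is brief (note that progress_map requires tqdm's pandas integration to have been enabled with tqdm.pandas(), as in the accompanying notebook):

from nltk.tokenize import word_tokenize

# tokenize every row of the text column, displaying a progress bar
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)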
The dataframe now has 120,000 rows and six columns, with the new tokens column storing the list of tokens for each news article.
From the tokens we just created, we then create a vocabulary for our corpus.
Here, we only keep the words that occur more than 10 times, which decreases the memory needed and reduces the likelihood that our vocabulary contains noisy tokens.
Note that each row in the tokens column contains a list of tokens.
In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() Pandas method.
Then we will use the value_counts() method to create a Series object in which the index are the tokens and the values are the number of times they appear in the corpus.
The next step is removing the tokens with a count lower than our chosen threshold.
Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list).
We include in the vocabulary a special token
[UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning.
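A sketch of the vocabulary construction, following the accompanying notebook (which keeps tokens whose count is strictly greater than the threshold of 10):

threshold = 10
# one row per token occurrence, then count the occurrences of each token
counts = train_df['tokens'].explode().value_counts()
# discard infrequent tokens
counts = counts[counts > threshold]
# token list and token-to-id mapping; id 0 is reserved for [UNK]
id_to_token = ['[UNK]'] + counts.index.tolist()
token_to_id = {t: i for i, t in enumerate(id_to_token)}
vocabulary_size = len(id_to_token)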
Using this vocabulary, we construct a feature vector for each news article in the corpus.
This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article.
As above, the feature vectors will be stored as a new column in the dataframe.
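A sketch of the feature extraction, assuming the token_to_id mapping built above (with id 0 reserved for [UNK]):

from collections import defaultdict

def make_feature_vector(tokens, unk_id=0):
    # map each token to its id (or to [UNK]) and count its occurrences
    vector = defaultdict(int)
    for t in tokens:
        vector[token_to_id.get(t, unk_id)] += 1
    return vector

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)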
The dataframe now has 120,000 rows and seven columns, with the new features column storing, for each article, a dictionary that maps token ids to counts.
The final preprocessing step is converting the features and the class indices into PyTorch tensors.
Recall that we need to subtract one from the class indices to make them zero-based.
At this point, the data is fully processed and we are ready to begin training.
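A sketch of this conversion, following the accompanying notebook (it assumes numpy as np and torch are imported, and that vocabulary_size is the size of the vocabulary built above); the helper make_dense expands each count dictionary into a dense vector:

def make_dense(feats):
    x = np.zeros(vocabulary_size)
    for k, v in feats.items():
        x[k] = v
    return x

X_train = torch.tensor(np.stack(train_df['features'].map(make_dense)), dtype=torch.float32)
# the class indices are one-based, so subtract one to obtain zero-based labels
y_train = torch.tensor(train_df['class index'].to_numpy() - 1)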
4.2.3 Multiclass Logistic Regression Using PyTorch
The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus.
PyTorch’s Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example.
The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression.
However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class.
For each example, the model predicts 4 scores – one for each label.
The label with the highest score is selected using the argmax function.
We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.
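A condensed sketch of the model, loss function, and evaluation (variable names such as n_classes, device, X_test, y_test, and labels follow the accompanying notebook):

from torch import nn
from sklearn.metrics import classification_report

model = nn.Linear(vocabulary_size, n_classes).to(device)
# CrossEntropyLoss applies a softmax over the class scores internally
loss_func = nn.CrossEntropyLoss()

# after training, predict by taking the argmax over the class scores
with torch.no_grad():
    y_pred = torch.argmax(model(X_test.to(device)), dim=1).cpu().numpy()
print(classification_report(y_test, y_pred, target_names=labels))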
4.3 Summary
In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression.
For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch.
We hope that through this series of exercises the reader has noted several key takeaways.
First, data preparation is important and should be done thoughtfully.
Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful.
However, what works for one dataset and one language may not be suitable for another scenario.
For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, and removed diacritics during normalization.
Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves.
This becomes cumbersome quickly.
For example, even the derivative of the softmax is non-trivial.
Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained.
That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing.
These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
#!/usr/bin/env python
# coding: utf-8
# # Multiclass Text Classification with
# # Logistic Regression Implemented with PyTorch and CE Loss
# First, we will do some initialization.
# In[1]:
import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm
# enable tqdm in pandas
tqdm.pandas()
# set to True to use the gpu (if there is one available)
use_gpu = True
# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')
# random seed
seed = 1234
# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
# We will be using the AG's News Topic Classification Dataset.
# It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict.
#
# First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at the data.
# In[2]:
train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None)
train_df.columns = ['class index', 'title', 'description']
train_df
# The dataset consists of 120,000 examples, each consisting of a class index, a title, and a description.
# The class labels are distributed in a separate file. We will add the labels to the dataset so that we can interpret the data more easily. Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list.
# In[3]:
labels = open('data/ag_news_csv/classes.txt').read().splitlines()
classes = train_df['class index'].map(lambda i: labels[i-1])
train_df.insert(1, 'class', classes)
train_df
# Let's inspect how balanced our examples are by using a bar plot.
# In[4]:
train_df['class'].value_counts().plot.bar()
# The classes are evenly distributed. That's great!
#
# However, some parts of the text contain spurious backslashes.
# They are meant to represent newlines in the original text.
# An example can be seen below, between the words "dwindling" and "band".
# In[5]:
print(train_df.loc[0, 'description'])
# We will replace the backslashes with spaces on the whole column using pandas replace method.
# In[6]:
title = train_df['title'].str.lower()
descr = train_df['description'].str.lower()
text = title + " " + descr
train_df['text'] = text.str.replace('\\', ' ', regex=False)
train_df
# Now we will proceed to tokenize the text column using NLTK's word_tokenize().
# We will add a new column to our dataframe with the list of tokens.
# In[7]:
from nltk.tokenize import word_tokenize
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)
train_df
# Now we will create a vocabulary from the training data. We will only keep the terms that repeat beyond some threshold established below.
# In[8]:
threshold = 10
tokens = train_df['tokens'].explode().value_counts()
tokens = tokens[tokens > threshold]
id_to_token = ['[UNK]'] + tokens.index.tolist()
token_to_id = {w:i for i,w in enumerate(id_to_token)}
vocabulary_size = len(id_to_token)
print(f'vocabulary size: {vocabulary_size:,}')
# In[9]:
from collections import defaultdict
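# build a dictionary-based feature vector: token id -> number of occurrences (unknown tokens map to [UNK], id 0)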
def make_feature_vector(tokens, unk_id=0):
    vector = defaultdict(int)
    for t in tokens:
        i = token_to_id.get(t, unk_id)
        vector[i] += 1
    return vector
train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)
train_df
# In[10]:
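# expand the dictionary of counts into a dense vector of vocabulary size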
def make_dense(feats):
    x = np.zeros(vocabulary_size)
    for k,v in feats.items():
        x[k] = v
    return x
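# build the training tensors; the class labels are made zero-based by subtracting one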
X_train = np.stack(train_df['features'].progress_map(make_dense))
y_train = train_df['class index'].to_numpy() - 1
X_train = torch.tensor(X_train, dtype=torch.float32)
y_train = torch.tensor(y_train)
# In[11]:
from torch import nn
from torch import optim
# hyperparameters
lr = 1.0
n_epochs = 5
n_examples = X_train.shape[0]
n_feats = X_train.shape[1]
n_classes = len(labels)
# initialize the model, loss function, and optimizer
model = nn.Linear(n_feats, n_classes).to(device)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=lr)
# train the model
indices = np.arange(n_examples)
for epoch in range(n_epochs):
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        # clear gradients
        model.zero_grad()
        # send datum to right device
        x = X_train[i].unsqueeze(0).to(device)
        y_true = y_train[i].unsqueeze(0).to(device)
        # predict label scores
        y_pred = model(x)
        # compute loss
        loss = loss_func(y_pred, y_true)
        # backpropagate
        loss.backward()
        # optimize model parameters
        optimizer.step()
# Next, we evaluate on the test dataset
# In[12]:
# repeat all preprocessing done above, this time on the test set
test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None)
test_df.columns = ['class index', 'title', 'description']
test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower()
test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False)
test_df['tokens'] = test_df['text'].progress_map(word_tokenize)
test_df['features'] = test_df['tokens'].progress_map(make_feature_vector)
X_test = np.stack(test_df['features'].progress_map(make_dense))
y_test = test_df['class index'].to_numpy() - 1
X_test = torch.tensor(X_test, dtype=torch.float32)
y_test = torch.tensor(y_test)
# In[13]:
from sklearn.metrics import classification_report
# set model to evaluation mode
model.eval()
# don't store gradients
with torch.no_grad():
    X_test = X_test.to(device)
    y_pred = torch.argmax(model(X_test), dim=1)
y_pred = y_pred.cpu().numpy()
print(classification_report(y_test, y_pred, target_names=labels))
#!/usr/bin/env python
# coding: utf-8
# # Binary Text Classification with
# # Logistic Regression Implemented from Scratch
# In[1]:
import random
import numpy as np
from tqdm.notebook import tqdm
# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234
if seed is not None:
    random.seed(seed)
    np.random.seed(seed)
# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.
# In[2]:
get_ipython().system('ls -lh data/aclImdb/')
# Both the `train` and `test` directories contain two directories called `pos` and `neg` that contain text files with the positive and negative reviews, respectively.
# In[3]:
get_ipython().system('ls -lh data/aclImdb/train/')
# We will now read the filenames of the positive and negative examples.
# In[4]:
from glob import glob
pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')
print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))
# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) to read the text files, tokenize them, acquire a vocabulary from the training data, and encode it in a document-term matrix in which each row represents a review, and each column represents a term in the vocabulary. Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$.
# In[5]:
from sklearn.feature_extraction.text import CountVectorizer
# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')
# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix
# Note in the message printed above that the matrix is of shape (25000, 74849).
# In other words, it has 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the English language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:
# In[6]:
X_train = doc_term_matrix.toarray()
X_train.shape
# In[7]:
# Append 1s to the xs; this will allow us to multiply by the weights and
# the bias in a single pass.
# Make an array with a one for each row/data point
ones = np.ones(X_train.shape[0])
# Concatenate these ones to existing feature vectors
X_train = np.column_stack((X_train, ones))
X_train.shape
# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.
# In[8]:
# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train
# Now we will initialize our model, in the form of an array of weights `w` of the same length as the number of columns in the data matrix, i.e., one weight per word in the vocabulary acquired by [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), plus one extra weight that plays the role of the bias thanks to the column of ones appended above.
# The weights are initialized with random values between 0 and 1.
# In[9]:
# initialize the model: one weight per feature, drawn uniformly at random from [0, 1); the last weight acts as the bias
n_examples, n_features = X_train.shape
w = np.random.random(n_features)
# Now we will use the logistic regression learning algorithm to learn the values of `w` (which, thanks to the appended column of ones, also contains the bias) from our training data.
# In[10]:
# from scipy.special import expit as sigmoid
def sigmoid(z):
if -z > np.log(np.finfo(float).max):
return 0.0
return 1 / (1 + np.exp(-z))
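# A brief check of the overflow guard above (the threshold is `log(np.finfo(float).max)`, roughly 709 for 64-bit floats): without it, `np.exp(-z)` would overflow for very negative `z` and trigger a runtime warning.
print(sigmoid(0.0))      # 0.5
print(sigmoid(50.0))     # very close to 1.0
print(sigmoid(-1000.0))  # 0.0, returned by the guard instead of overflowing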
# In[11]:
lr = 1e-1
n_epochs = 10
indices = np.arange(n_examples)
for epoch in range(n_epochs):
# randomize the order in which training examples are seen in this epoch
np.random.shuffle(indices)
# traverse the training data
for i in tqdm(indices, desc=f'epoch {epoch+1}'):
x = X_train[i]
y = y_train[i]
        # calculate the derivative of the cost function for this example
deriv_cost = (sigmoid(x @ w) - y) * x
# update the weights
w = w - lr * deriv_cost
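# The update above is stochastic gradient descent on the binary cross-entropy loss: for a single example, the gradient of the loss with respect to the weights is $(\sigma(\mathbf{x} \cdot \mathbf{w}) - y)\,\mathbf{x}$, which is exactly what `deriv_cost` computes, so each step moves `w` against that gradient, scaled by the learning rate `lr`.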
# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform) method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform) method that we used above. This is because we want to use the learned vocabulary in the test set, instead of learning a new one.
# In[12]:
pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')
doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()
X_test = np.column_stack((X_test, np.ones(X_test.shape[0])))
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])
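# As a sanity check, the test matrix should have exactly as many columns as the training matrix, since `transform()` reuses the vocabulary learned on the training data (plus the extra column of ones we appended to both):
print('train shape:', X_train.shape)
print('test shape:', X_test.shape)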
# Using the model is easy: multiply the document-term matrix (which already includes the appended column of ones) by the learned weights; the last weight supplies the bias.
# We use Python's `@` operator to perform the matrix-vector multiplication.
# In[13]:
y_pred = X_test @ w > 0
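# If probability-style scores are preferred over hard labels, we could apply the sigmoid to the raw scores and threshold at 0.5, which is equivalent to thresholding the raw scores at 0 as done above; the clipping below is just a vectorized stand-in for the overflow guard in our scalar `sigmoid()`.
y_prob = 1 / (1 + np.exp(-np.clip(X_test @ w, -500, 500)))
print(np.array_equal(y_pred, y_prob > 0.5))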
# Now we print an evaluation of the prediction results using our own implementation of a binary classification report, which mirrors the output of scikit-learn's [`classification_report()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) function.
# In[14]:
def binary_classification_report(y_true, y_pred):
# count true positives, false positives, true negatives, and false negatives
tp = fp = tn = fn = 0
for gold, pred in zip(y_true, y_pred):
if pred == True:
if gold == True:
tp += 1
else:
fp += 1
else:
if gold == False:
tn += 1
else:
fn += 1
# calculate precision and recall
precision = tp / (tp + fp)
recall = tp / (tp + fn)
# calculate f1 score
fscore = 2 * precision * recall / (precision + recall)
# calculate accuracy
accuracy = (tp + tn) / len(y_true)
# number of positive labels in y_true
support = sum(y_true)
return {
"precision": precision,
"recall": recall,
"f1-score": fscore,
"support": support,
"accuracy": accuracy,
}
# In[15]:
binary_classification_report(y_test, y_pred)
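# For comparison, scikit-learn ships an equivalent report (we cast the labels to integers to avoid mixing boolean and float label types); apart from formatting, its precision, recall, and F1 for the positive class should match the numbers above.
from sklearn.metrics import classification_report
print(classification_report(y_test.astype(int), y_pred.astype(int)))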