id (stringlengths 11-11) | channel (stringclasses, 2 values) | channel_id (stringclasses, 2 values) | title (stringlengths 12-100) | categories (sequence) | tags (sequence) | description (stringlengths 66-5k) | text (stringlengths 577-90.4k) | segments (list) |
---|---|---|---|---|---|---|---|---|
eYgPJ_7BkEw | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"arxiv",
"google",
"semi-supervised",
"unlabeled",
"augmentation",
"research",
"randaugment"
] | FixMatch is a simple, yet surprisingly effective approach to semi-supervised learning. It combines two previous methods in a clever way and achieves state-of-the-art in regimes with few and very few labeled examples.
Paper: https://arxiv.org/abs/2001.07685
Code: https://github.com/google-research/fixmatch
Abstract:
Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance. In this paper, we demonstrate the power of a simple combination of two common SSL methods: consistency regularization and pseudo-labeling. Our algorithm, FixMatch, first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10 with 250 labels and 88.61% accuracy with 40 -- just 4 labels per class. Since FixMatch bears many similarities to existing SSL methods that achieve worse performance, we carry out an extensive ablation study to tease apart the experimental factors that are most important to FixMatch's success. We make our code available at this https URL.
Authors: Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, Colin Raffel
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi, today we're looking at FixMatch simplifying semi-supervised learning with consistency and confidence by Kyuk Son, David Berthelot and others of Google research. So this paper concerns semi-supervised learning. So what does semi-supervised learning mean? In semi-supervised learning you have a data set of labeled samples. So right, you have this data set of X's and corresponding Y labels. But this data set sometimes is very small. Now you have a much bigger data set of unlabeled examples, just X's with no labels, right? So you don't know what the labels of the unlabeled examples are, but what you would like to do is you would like to use this really large data set in order to help you with learning the association between the data points and the labels. So for example, in this case you would have something like an image classification data set. And I'm going to take the example here of medical data. So you have pictures of lungs. Let's draw a lung here. That is an ugly lung. You have pictures of lungs and whether or not they have a tumor in them. So medical data is very hard to get, especially labeled medical data. Because first of all you need the data itself, but then you also need at least one, but ideally three radiologists to look at whether or not this is a good or a bad image and label it. So it's usually very expensive to collect that data. But you might have plenty of unlabeled data, right? You might just be able to go through some database and find like anonymized, undiagnosed lung scans somewhere lying around. The same with image, like other images. So labeling images is pretty human intensive, but the internet contains like a whole bunch of unlabeled images. So the task of semi-supervised learning is how do you use this unlabeled data set in order to make your classification on the labeled data set easier. And FixMatch combines two approaches to this in a smart way, namely consistency and confidence approach. So what does... we'll jump right into the method. So basically what you want to do is you want to say my loss that I optimize, this is my loss, consists of two parts, namely a supervised loss, which is your classic classification loss, plus an unsupervised loss, right? And then you have like some sort of a trade-off parameter in front. Now your supervised loss here, this is just the cross entropy, let's call it H, between your predicted labels and the actual true labels, right? And the predicted labels, they can be, you know, kind of a distribution over labels. Now the magic of course is here in the unsupervised loss. And this unsupervised loss, this is what's described here in this part, right? So the unsupervised loss is going to be this H between P and Q, and we'll see what P and Q is. So if for the unsupervised loss you of course want to start with an unlabeled example, then you have the same sample go into two different pipelines. In the first pipeline up here, what you do is you so called weakly augmented. And here we're dealing with images, so we have to talk about image augmentation. So image augmentation has long been used in supervised learning to kind of give you more, it's kind of a cheat to give you more training data. So if you have an image, right, of let's say our famous cat, you can obtain more training data if you, for example, by random cropping. So you can random crop, let's say we just take this bottom right corner here, and then we enlarge it to the original size, right? 
Then it is still sort of a cat, but it's just a part of a cat, right? But usually that helps because you say, okay, my image data set is just pictures of animals, right? It's entirely conceivable that someone held the camera like this or like this, right? So technically in terms of generalizing to a test set, these both data points should be valid. So I'm just going to add both to my training data. So you can see how from one training data point you can get many training data points just by doing this cropping. What you can also do is you can flip it left right, right? You just swap the pixels left right, and usually these kind of... So a cat that has a little dark spot here is still a cat when it has the little dark spot over there, right? But to your classifier, those are two different samples. So you can do many of those things, and they have two kind of augmentations. They have what they call weakly augmented and strongly augmented, right? So in the weakly augmented pipeline, I think they just they crop and they shift and they rotate or something like this. So you can see here this horsey here, it is something like it's cropped here about, then it is turned slightly to the left, and then... Yeah, I think that's it. So they crop, they rotate, and then they also flip horizontally at random in like 50% of the time. So these are what's called weakly augmented. The goal here is just to kind of obtain a bit more training data, alright? So you run this through your model, through your classification model as you would a regular sample, and you get a prediction. Now from your prediction, you can take the highest prediction here, and that is going to be your pseudo-label. So this is P of Y, this is your distribution that you estimate, right? So and this, if you just take the max, this is going to be your Y hat, right? And this is what they call a pseudo-label, sorry. You'll see why it is called a pseudo-label. So the other pipeline here is the strong augmentation pipeline. Now in weak augmentation, we just wanted to get some more training data in strong augmentation. Now the goal is to really screw up that picture to the point where it's still, you know, you could recognize in the same class, but you can see here the augmentations, they go wild. So you play around with the color, with the hue, you play around with the light intensity, right? With the contrast, you can do many, many things. You can see this image looks basically nothing like this image, but you can still kind of recognize it as a horse. But the strongly augmented data is much more distorted than the weakly augmented data. And that's the point. So also you send the strongly augmented data through the model, and again you get a prediction, right? And now the trick is you take the label from here, and you take that as if it were the true label, right? You take that as if it were the true label, and you form a loss from this prediction being the model prediction, as if this thing here that also comes from the model, as if that was the true label, right? That's why it's called a pseudo label, because it is a label that you produce from the model itself. Now of course if these were to be the same picture, it would be kind of pointless, right? That's why you see there needs to be a weakly and a strongly augmented pipeline. I'm pretty sure if you want a more basic version of this, make this just clean, so no augmentation, and make this augmented, right? That's how you can think of it. 
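A minimal sketch of that unsupervised term, in PyTorch-style pseudocode, assuming `weak_aug` and `strong_aug` are callables for the two augmentation pipelines and that the model returns logits; the function names and the confidence threshold value here are illustrative assumptions, not the paper's reference implementation:

```python
import torch
import torch.nn.functional as F

def fixmatch_unsupervised_loss(model, x_unlabeled, weak_aug, strong_aug, threshold=0.95):
    # 1) Pseudo-label from the weakly augmented image (no gradient through this step).
    with torch.no_grad():
        weak_logits = model(weak_aug(x_unlabeled))
        probs = F.softmax(weak_logits, dim=-1)
        max_prob, pseudo_label = probs.max(dim=-1)
        # Keep only pseudo-labels where the model is confident enough.
        mask = (max_prob >= threshold).float()

    # 2) Prediction on the strongly augmented version of the same images.
    strong_logits = model(strong_aug(x_unlabeled))

    # 3) Cross-entropy against the pseudo-labels, masked by confidence.
    per_example = F.cross_entropy(strong_logits, pseudo_label, reduction="none")
    return (mask * per_example).mean()

# Total training loss (sketch): supervised cross-entropy on the labeled batch
# plus a trade-off weight lambda_u times the unsupervised term above, e.g.
# loss = F.cross_entropy(model(x_labeled), y_labeled) + lambda_u * fixmatch_unsupervised_loss(...)
```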
The fact that there is weak and here strong augmentation I think is just your classic trick to get more training data. But in essence you can think of it as this is here, the clean thing, you just want to produce a label, and then you want that an augmented version of the image has the same label. Now you can think of it shortly, what does this model learn? If you just have this, you remember. I think the important thing is always to remember that there are two components here, right? There is first the supervised loss, this is the important one ultimately, because we have the true labels, right? And then second there is the unsupervised loss, which is just an auxiliary loss that is supposed to just kind of tune our model to the nature of the data, right? So don't forget that this down here just concerns the unsupervised part of that loss. So if you think what does the model actually learn whenever you train it like this, it basically learns to revert this strong augmentation, right? So it basically says, hey model, whenever I give you a weak augmented image and I distort it heavily, right? Whenever I give you an image and I distort it heavily, I want the label to be the same. So the model basically learns that whatever the image, the whatever the image, the model at the end of the training will be able to basically map any strongly augmented picture to the same class as a weakly augmented picture if it comes from the same source, right? So the model basically learns to ignore these kinds of augmentations. That's what this loss over here does. It basically says these sorts of augmentations, these sorts of distortions of images, please ignore those because I always want you to output the same label here in the prediction here as if I had not distorted or just weakly distorted the image. So that's what you have to keep in mind that this loss is designed to make the model not distinguish between differently augmented versions of the same image. And interestingly, that really seems to help with the supervised loss, right? My kind of hypothesis is that all these methods, what they're kind of trying to do is to just tune the neural network to the, let's say the orders of magnitude of the input data and also to the kinds of augmentations that the humans come up with. And that's a very important point. So the augmentations, and here we said, you know, it's kind of a rotation and the crop, the kind of augmentation really seemed to play a role. So this paper finds that on CIFAR-10, where the state of the art I believe is something like 96, 97 percent accuracy, on CIFAR-10 with just 250 labeled examples, right? Now the usual data set size is about 50,000. It goes to 94.9%. So almost 95 percent accuracy with the state of the art being like 97. This is incredible with just 250 labeled examples. Crazy, right? And with only four labels per class, it gets 88.6 percent. So that's just 40 images with labels. They get 88.6 percent of accuracy compared to the 97 percent that you get with like 50,000 images. That is pretty pretty cool, right? Simply by having all other images not labeled but pseudo labeled and consistency regularized, right? So the two things that are combined by FixMatch again are consistency regularization, which basically it means that the model should output similar predictions when fed perturbed versions of the same image, right? They're really forthcoming that they are not the ones who invented this. They just combine the consistency regularization with the pseudo labeling. 
Now the pseudo labeling they have also not invented. The pseudo labeling leverages the idea that we should use the model itself to obtain artificial labels for unlabeled data. We've seen a lot of papers in the last few months or years where it's like the teacher teaches the student and then the student teaches the teacher model again and so on. So they simply combine the two methods in a clever way. They have one last thing that is not in this drawing, namely they only use the pseudo label. They have a break right here and they only use the pseudo label if the confidence, so if this P of Y here is above a certain threshold. So they don't take all the pseudo labels but they only take the labels where the model is fairly sure about, right? So they have actually an ablation study where they show that this is reasonably important. And if you go down here where they say ablation, where is it? Ablation study, oh yeah something I also find cool. If you just give one image per class, one image per class, ten images that are labeled, it still gets like 78% accuracy. I think the images are chosen as good representations of their class but still one image per class. Pretty pretty cool. An important part of this is the ablation study where they say okay we want to tease apart why this algorithm, why this semi-supervised learning technique works so well. And they find several important factors. They find for example that their augmentation strategy is extremely important. So how they augment the images is very important. You see here the error of this 4.8% on the 250 label split. If you change up the augmentation strategies your error gets higher, right? So they say we use this cutout and we measure the effect of cutout. We find that both cutout and CCT augment are required to obtain the best performance. Removing either results in a comparable increase in error rate. Now you've seen before for example they went from this 93, sorry, 93 point something percent to 94 point something percent from the previous state-of-the-art semi-supervised learning. And here they find that simply changing the augmentation strategy changes the error by more than a percent. So you can just see this in context of what's important here. They say again the ratio of unlabeled data seems pretty important. We observe a significant decrease in error rates by using large amounts of unlabeled data. Then the optimizer and learning rate schedule seems to be very important as well in that they use this, they say SGD with momentum works much better than Adam and then they use this decreasing learning rate schedule, this cosine learning rate schedule. So there seem to be a lot of things, a lot of hyperparameters that are fairly important here. And you can see that the gains are substantial sometimes but they aren't like through the roof substantial, where you can make a good argument that it is unclear how much really comes from this clever combination that FixMatch proposes and how much also just comes from whether or not you set the hyperparameters correctly and exactly how much computation are you able to throw at selecting your hyper parameters. So that seems to be a bit of a pain point for me. They also say we find that tuning the weight decay is exceptionally important for low label regimes. Choosing a value that is just one order of magnitude larger or smaller than optimal can cost 10 percentage points or more. 
And so that all of that seems to me that this kind of research where you're nibbling for half or single percentage points in accuracy while a single misstep in a choice of hyper parameter might cost you 10 times that gain is a bit sketchy. Now I recognize they get numbers like no one else has gotten before but where exactly the gains come from and if the gains really come from this architecture or actually just more from throwing computers at it I don't know. Alright with that I hope you enjoyed this and I invite you to check out the paper. Bye bye. | [
{
"start": 0,
"end": 5.5600000000000005,
"text": " Hi, today we're looking at FixMatch simplifying semi-supervised learning"
},
{
"start": 5.5600000000000005,
"end": 13.280000000000001,
"text": " with consistency and confidence by Kihyuk Sohn, David Berthelot and others of"
},
{
"start": 13.280000000000001,
"end": 19.76,
"text": " Google research. So this paper concerns semi-supervised learning. So what does"
},
{
"start": 19.76,
"end": 24.2,
"text": " semi-supervised learning mean? In semi-supervised learning you have a"
},
{
"start": 24.2,
"end": 30.64,
"text": " data set of labeled samples. So right, you have this data set of X's and"
},
{
"start": 30.64,
"end": 38.2,
"text": " corresponding Y labels. But this data set sometimes is very small. Now you have a"
},
{
"start": 38.2,
"end": 47.28,
"text": " much bigger data set of unlabeled examples, just X's with no labels, right?"
},
{
"start": 47.28,
"end": 53.32,
"text": " So you don't know what the labels of the unlabeled examples are, but"
},
{
"start": 53.32,
"end": 58.24,
"text": " what you would like to do is you would like to use this really large data set"
},
{
"start": 58.24,
"end": 65.28,
"text": " in order to help you with learning the association between the data points and"
},
{
"start": 65.28,
"end": 72.64,
"text": " the labels. So for example, in this case you would have something like an"
},
{
"start": 72.64,
"end": 75.76,
"text": " image classification data set. And I'm going to take the example here of"
},
{
"start": 75.76,
"end": 82.6,
"text": " medical data. So you have pictures of lungs. Let's draw a lung here. That is an"
},
{
"start": 82.6,
"end": 89.64,
"text": " ugly lung. You have pictures of lungs and whether or not they have"
},
{
"start": 89.64,
"end": 94.72,
"text": " a tumor in them. So medical data is very hard to get, especially"
},
{
"start": 94.72,
"end": 100.11999999999999,
"text": " labeled medical data. Because first of all you need the data itself, but"
},
{
"start": 100.11999999999999,
"end": 106.44,
"text": " then you also need at least one, but ideally three radiologists"
},
{
"start": 106.44,
"end": 113.84,
"text": " to look at whether or not this is a good or a bad image and label it. So it's"
},
{
"start": 113.84,
"end": 118.03999999999999,
"text": " usually very expensive to collect that data. But you might have plenty of"
},
{
"start": 118.03999999999999,
"end": 122.03999999999999,
"text": " unlabeled data, right? You might just be able to go through some"
},
{
"start": 122.03999999999999,
"end": 128.52,
"text": " database and find like anonymized, undiagnosed lung scans somewhere lying"
},
{
"start": 128.52,
"end": 135.56,
"text": " around. The same with image, like other images. So labeling images is pretty"
},
{
"start": 135.56,
"end": 139.96,
"text": " human intensive, but the internet contains like a whole bunch of unlabeled"
},
{
"start": 139.96,
"end": 145,
"text": " images. So the task of semi-supervised learning is how do you use this"
},
{
"start": 145,
"end": 150.76,
"text": " unlabeled data set in order to make your classification on the labeled data set"
},
{
"start": 150.76,
"end": 156.56,
"text": " easier. And FixMatch combines two approaches to this in a smart way, namely"
},
{
"start": 156.56,
"end": 166.24,
"text": " consistency and confidence approach. So what does... we'll jump right"
},
{
"start": 166.24,
"end": 171.44,
"text": " into the method. So basically what you want to do is you want to say my loss"
},
{
"start": 171.44,
"end": 178.44,
"text": " that I optimize, this is my loss, consists of two parts, namely a"
},
{
"start": 178.44,
"end": 184.88,
"text": " supervised loss, which is your classic classification loss, plus an"
},
{
"start": 184.88,
"end": 189.44,
"text": " unsupervised loss, right? And then you have like some sort of a trade-off"
},
{
"start": 189.44,
"end": 194.48,
"text": " parameter in front. Now your supervised loss here, this is just the"
},
{
"start": 194.48,
"end": 200.76,
"text": " cross entropy, let's call it H, between your predicted labels and the"
},
{
"start": 200.76,
"end": 206.44,
"text": " actual true labels, right? And the predicted labels, they can be, you know,"
},
{
"start": 206.44,
"end": 212.76,
"text": " kind of a distribution over labels. Now the magic of course is here in the"
},
{
"start": 212.76,
"end": 217.84,
"text": " unsupervised loss. And this unsupervised loss, this is what's described here in"
},
{
"start": 217.84,
"end": 224.84,
"text": " this part, right? So the unsupervised loss is going to be this H between P and Q,"
},
{
"start": 224.84,
"end": 232.79999999999998,
"text": " and we'll see what P and Q is. So if for the unsupervised loss you of course"
},
{
"start": 232.79999999999998,
"end": 239.76,
"text": " want to start with an unlabeled example, then you have the same sample go into"
},
{
"start": 239.76,
"end": 244.92,
"text": " two different pipelines. In the first pipeline up here, what you do is you so"
},
{
"start": 244.92,
"end": 252.2,
"text": " called weakly augmented. And here we're dealing with images, so we have to talk"
},
{
"start": 252.2,
"end": 255.76,
"text": " about image augmentation. So image augmentation has long been used in"
},
{
"start": 255.76,
"end": 260.88,
"text": " supervised learning to kind of give you more, it's kind of a cheat to give you"
},
{
"start": 260.88,
"end": 269.24,
"text": " more training data. So if you have an image, right, of let's say our famous cat,"
},
{
"start": 269.24,
"end": 279.8,
"text": " you can obtain more training data if you, for example, by random cropping. So you"
},
{
"start": 279.8,
"end": 285.32,
"text": " can random crop, let's say we just take this bottom right corner here, and then"
},
{
"start": 285.32,
"end": 293.12,
"text": " we enlarge it to the original size, right? Then it is still sort of a cat, but it's"
},
{
"start": 293.12,
"end": 298.52,
"text": " just a part of a cat, right? But usually that helps because you say, okay,"
},
{
"start": 298.52,
"end": 303.96,
"text": " my image data set is just pictures of animals, right? It's entirely conceivable"
},
{
"start": 303.96,
"end": 309.32,
"text": " that someone held the camera like this or like this, right? So technically in"
},
{
"start": 309.32,
"end": 313.91999999999996,
"text": " terms of generalizing to a test set, these both data points should be valid."
},
{
"start": 313.91999999999996,
"end": 317.59999999999997,
"text": " So I'm just going to add both to my training data. So you can see how from"
},
{
"start": 317.59999999999997,
"end": 322.4,
"text": " one training data point you can get many training data points just by doing this"
},
{
"start": 322.4,
"end": 326.52,
"text": " cropping. What you can also do is you can flip it left right, right? You just"
},
{
"start": 326.52,
"end": 334.76,
"text": " swap the pixels left right, and usually these kind of... So a cat that has a"
},
{
"start": 334.76,
"end": 339.44,
"text": " little dark spot here is still a cat when it has the little dark spot over"
},
{
"start": 339.44,
"end": 344.28,
"text": " there, right? But to your classifier, those are two different samples. So you can do"
},
{
"start": 344.28,
"end": 350.35999999999996,
"text": " many of those things, and they have two kind of augmentations. They have what"
},
{
"start": 350.35999999999996,
"end": 355.84,
"text": " they call weakly augmented and strongly augmented, right? So in the weakly"
},
{
"start": 355.84,
"end": 361.79999999999995,
"text": " augmented pipeline, I think they just they crop and they shift and they"
},
{
"start": 361.79999999999995,
"end": 367.15999999999997,
"text": " rotate or something like this. So you can see here this horsey here, it is"
},
{
"start": 367.15999999999997,
"end": 374.35999999999996,
"text": " something like it's cropped here about, then it is turned slightly to the left,"
},
{
"start": 374.35999999999996,
"end": 383.88,
"text": " and then... Yeah, I think that's it. So they crop, they rotate, and then they also flip"
},
{
"start": 383.88,
"end": 389.44,
"text": " horizontally at random in like 50% of the time. So these are what's called"
},
{
"start": 389.44,
"end": 394.24,
"text": " weakly augmented. The goal here is just to kind of obtain a bit more training"
},
{
"start": 394.24,
"end": 399.44,
"text": " data, alright? So you run this through your model, through your classification"
},
{
"start": 399.44,
"end": 405.44,
"text": " model as you would a regular sample, and you get a prediction. Now from your"
},
{
"start": 405.44,
"end": 409.76,
"text": " prediction, you can take the highest prediction here, and that is going to be"
},
{
"start": 409.76,
"end": 416.59999999999997,
"text": " your pseudo-label. So this is P of Y, this is your distribution that you"
},
{
"start": 416.59999999999997,
"end": 423.68,
"text": " estimate, right? So and this, if you just take the max, this is going to be"
},
{
"start": 423.68,
"end": 431.36,
"text": " your Y hat, right? And this is what they call a pseudo-label, sorry. You'll see why"
},
{
"start": 431.36,
"end": 436.12,
"text": " it is called a pseudo-label. So the other pipeline here is the strong"
},
{
"start": 436.12,
"end": 440.28000000000003,
"text": " augmentation pipeline. Now in weak augmentation, we just wanted to get some"
},
{
"start": 440.28000000000003,
"end": 444.96,
"text": " more training data in strong augmentation. Now the goal is to really"
},
{
"start": 444.96,
"end": 450.16,
"text": " screw up that picture to the point where it's still, you know, you could recognize"
},
{
"start": 450.16,
"end": 455.24,
"text": " in the same class, but you can see here the augmentations, they go wild. So you"
},
{
"start": 455.24,
"end": 460.24,
"text": " play around with the color, with the hue, you play around with the light intensity,"
},
{
"start": 460.24,
"end": 469.44,
"text": " right? With the contrast, you can do many, many things. You can see this image"
},
{
"start": 469.44,
"end": 475.16,
"text": " looks basically nothing like this image, but you can still kind of recognize it"
},
{
"start": 475.16,
"end": 482.12,
"text": " as a horse. But the strongly augmented data is much more distorted than the"
},
{
"start": 482.12,
"end": 486.92,
"text": " weakly augmented data. And that's the point. So also you send the strongly"
},
{
"start": 486.92,
"end": 493.04,
"text": " augmented data through the model, and again you get a prediction, right? And now"
},
{
"start": 493.04,
"end": 502.20000000000005,
"text": " the trick is you take the label from here, and you take that as if it"
},
{
"start": 502.20000000000005,
"end": 508.12,
"text": " were the true label, right? You take that as if it were the true label, and you"
},
{
"start": 508.12,
"end": 515.9200000000001,
"text": " form a loss from this prediction being the model prediction, as if this thing"
},
{
"start": 515.92,
"end": 521.1999999999999,
"text": " here that also comes from the model, as if that was the true label, right? That's"
},
{
"start": 521.1999999999999,
"end": 526.7199999999999,
"text": " why it's called a pseudo label, because it is a label that you produce from the"
},
{
"start": 526.7199999999999,
"end": 531.88,
"text": " model itself. Now of course if these were to be the same picture, it would be kind"
},
{
"start": 531.88,
"end": 535.7199999999999,
"text": " of pointless, right? That's why you see there needs to be a weakly and a"
},
{
"start": 535.7199999999999,
"end": 543.3199999999999,
"text": " strongly augmented pipeline. I'm pretty sure if you want a more basic version"
},
{
"start": 543.32,
"end": 551.5200000000001,
"text": " of this, make this just clean, so no augmentation, and make this augmented,"
},
{
"start": 551.5200000000001,
"end": 556.12,
"text": " right? That's how you can think of it. The fact that there is weak and"
},
{
"start": 556.12,
"end": 560.8000000000001,
"text": " here strong augmentation I think is just your classic trick to get more"
},
{
"start": 560.8000000000001,
"end": 564.84,
"text": " training data. But in essence you can think of it as this is here, the clean"
},
{
"start": 564.84,
"end": 570.5600000000001,
"text": " thing, you just want to produce a label, and then you want that an augmented"
},
{
"start": 570.56,
"end": 576.28,
"text": " version of the image has the same label. Now you can think of it shortly, what"
},
{
"start": 576.28,
"end": 581.28,
"text": " does this model learn? If you just have this, you remember. I think the important"
},
{
"start": 581.28,
"end": 585.0999999999999,
"text": " thing is always to remember that there are two components here, right? There is"
},
{
"start": 585.0999999999999,
"end": 590.7199999999999,
"text": " first the supervised loss, this is the important one ultimately, because we have"
},
{
"start": 590.7199999999999,
"end": 596,
"text": " the true labels, right? And then second there is the unsupervised loss, which is"
},
{
"start": 596,
"end": 602.88,
"text": " just an auxiliary loss that is supposed to just kind of tune our model to the"
},
{
"start": 602.88,
"end": 607.16,
"text": " nature of the data, right? So don't forget that this down here just"
},
{
"start": 607.16,
"end": 614.08,
"text": " concerns the unsupervised part of that loss. So if you think what does the model"
},
{
"start": 614.08,
"end": 621.08,
"text": " actually learn whenever you train it like this, it basically learns to"
},
{
"start": 621.08,
"end": 629.88,
"text": " revert this strong augmentation, right? So it basically says, hey model, whenever I"
},
{
"start": 629.88,
"end": 636,
"text": " give you a weak augmented image and I distort it heavily, right? Whenever I"
},
{
"start": 636,
"end": 640.08,
"text": " give you an image and I distort it heavily, I want the label to be the same."
},
{
"start": 640.08,
"end": 650.1600000000001,
"text": " So the model basically learns that whatever the image, the whatever the"
},
{
"start": 650.16,
"end": 657.68,
"text": " image, the model at the end of the training will be able to basically map"
},
{
"start": 657.68,
"end": 663.92,
"text": " any strongly augmented picture to the same class as a weakly augmented"
},
{
"start": 663.92,
"end": 670.64,
"text": " picture if it comes from the same source, right? So the model basically learns to"
},
{
"start": 670.64,
"end": 677.28,
"text": " ignore these kinds of augmentations. That's what this loss over here does. It"
},
{
"start": 677.28,
"end": 681.68,
"text": " basically says these sorts of augmentations, these sorts of distortions"
},
{
"start": 681.68,
"end": 688.92,
"text": " of images, please ignore those because I always want you to output the same label"
},
{
"start": 688.92,
"end": 695.56,
"text": " here in the prediction here as if I had not distorted or just weakly distorted"
},
{
"start": 695.56,
"end": 701.56,
"text": " the image. So that's what you have to keep in mind that this"
},
{
"start": 701.56,
"end": 707.8,
"text": " loss is designed to make the model not distinguish between differently"
},
{
"start": 707.8,
"end": 714,
"text": " augmented versions of the same image. And interestingly, that really seems to help"
},
{
"start": 714,
"end": 720.3199999999999,
"text": " with the supervised loss, right? My kind of hypothesis is that all"
},
{
"start": 720.3199999999999,
"end": 724.56,
"text": " these methods, what they're kind of trying to do is to just tune the neural"
},
{
"start": 724.56,
"end": 731.3599999999999,
"text": " network to the, let's say the orders of magnitude of the input data and also"
},
{
"start": 731.36,
"end": 736.08,
"text": " to the kinds of augmentations that the humans come up with. And that's a very"
},
{
"start": 736.08,
"end": 743.48,
"text": " important point. So the augmentations, and here we said, you know, it's kind of a"
},
{
"start": 743.48,
"end": 748.88,
"text": " rotation and the crop, the kind of augmentation really seemed to play a"
},
{
"start": 748.88,
"end": 756.08,
"text": " role. So this paper finds that on CIFAR-10, where the state of the art I believe is"
},
{
"start": 756.08,
"end": 763.6800000000001,
"text": " something like 96, 97 percent accuracy, on CIFAR-10 with just 250 labeled"
},
{
"start": 763.6800000000001,
"end": 774.32,
"text": " examples, right? Now the usual data set size is about 50,000. It goes to 94.9%."
},
{
"start": 774.32,
"end": 779.36,
"text": " So almost 95 percent accuracy with the state of the art being like 97."
},
{
"start": 779.36,
"end": 790.28,
"text": " This is incredible with just 250 labeled examples. Crazy, right? And with"
},
{
"start": 790.28,
"end": 798.96,
"text": " only four labels per class, it gets 88.6 percent. So that's just 40 images with"
},
{
"start": 798.96,
"end": 809.88,
"text": " labels. They get 88.6 percent of accuracy compared to the 97 percent that"
},
{
"start": 809.88,
"end": 815.84,
"text": " you get with like 50,000 images. That is pretty pretty cool, right? Simply by"
},
{
"start": 815.84,
"end": 821.48,
"text": " having all other images not labeled but pseudo labeled and consistency"
},
{
"start": 821.48,
"end": 830,
"text": " regularized, right? So the two things that are combined by FixMatch again"
},
{
"start": 830,
"end": 836.6,
"text": " are consistency regularization, which basically it means that the model"
},
{
"start": 836.6,
"end": 841.16,
"text": " should output similar predictions when fed perturbed versions of the same image,"
},
{
"start": 841.16,
"end": 847.24,
"text": " right? They're really forthcoming that they are not the ones who"
},
{
"start": 847.24,
"end": 851.48,
"text": " invented this. They just combine the consistency regularization with the"
},
{
"start": 851.48,
"end": 857.16,
"text": " pseudo labeling. Now the pseudo labeling they have also not invented. The pseudo"
},
{
"start": 857.16,
"end": 862.6800000000001,
"text": " labeling leverages the idea that we should use the model itself to obtain"
},
{
"start": 862.6800000000001,
"end": 866.88,
"text": " artificial labels for unlabeled data. We've seen a lot of papers in the last"
},
{
"start": 866.88,
"end": 872.12,
"text": " few months or years where it's like the teacher teaches the student and then the"
},
{
"start": 872.12,
"end": 879.12,
"text": " student teaches the teacher model again and so on. So they simply combine"
},
{
"start": 879.12,
"end": 884.5600000000001,
"text": " the two methods in a clever way. They have one last thing that is not in this"
},
{
"start": 884.5600000000001,
"end": 890.64,
"text": " drawing, namely they only use the pseudo label. They have a break right here and"
},
{
"start": 890.64,
"end": 898,
"text": " they only use the pseudo label if the confidence, so if this P of Y here is"
},
{
"start": 898,
"end": 904.76,
"text": " above a certain threshold. So they don't take all the pseudo labels but they only"
},
{
"start": 904.76,
"end": 910.28,
"text": " take the labels where the model is fairly sure about, right? So they have"
},
{
"start": 910.28,
"end": 914.48,
"text": " actually an ablation study where they show that this is reasonably"
},
{
"start": 914.48,
"end": 923.38,
"text": " important. And if you go down here where they say ablation, where is it?"
},
{
"start": 923.38,
"end": 929.04,
"text": " Ablation study, oh yeah something I also find cool. If you just give one"
},
{
"start": 929.04,
"end": 935.36,
"text": " image per class, one image per class, ten images that are labeled, it still gets"
},
{
"start": 935.36,
"end": 943.96,
"text": " like 78% accuracy. I think the images are chosen as good representations of their"
},
{
"start": 943.96,
"end": 951.28,
"text": " class but still one image per class. Pretty pretty cool. An important part of"
},
{
"start": 951.28,
"end": 958,
"text": " this is the ablation study where they say okay we want to tease apart why this"
},
{
"start": 958,
"end": 963.4,
"text": " algorithm, why this semi-supervised learning technique works so well. And"
},
{
"start": 963.4,
"end": 967.8399999999999,
"text": " they find several important factors. They find for example that their"
},
{
"start": 967.8399999999999,
"end": 973.0799999999999,
"text": " augmentation strategy is extremely important. So how they augment the"
},
{
"start": 973.08,
"end": 983.2,
"text": " images is very important. You see here the error of this 4.8% on the"
},
{
"start": 983.2,
"end": 993.48,
"text": " 250 label split. If you change up the augmentation"
},
{
"start": 993.48,
"end": 999.5600000000001,
"text": " strategies your error gets higher, right?"
},
{
"start": 999.56,
"end": 1011.1199999999999,
"text": " So they say we use this cutout and we measure the effect of cutout. We find"
},
{
"start": 1011.1199999999999,
"end": 1015.28,
"text": " that both Cutout and CTAugment are required to obtain the best performance."
},
{
"start": 1015.28,
"end": 1023.0799999999999,
"text": " Removing either results in a comparable increase in error rate. Now you've"
},
{
"start": 1023.08,
"end": 1030.1200000000001,
"text": " seen before for example they went from this 93, sorry, 93 point something"
},
{
"start": 1030.1200000000001,
"end": 1035.52,
"text": " percent to 94 point something percent from the previous state-of-the-art"
},
{
"start": 1035.52,
"end": 1041.08,
"text": " semi-supervised learning. And here they find that simply changing the"
},
{
"start": 1041.08,
"end": 1046.52,
"text": " augmentation strategy changes the error by more than a percent. So you can just"
},
{
"start": 1046.52,
"end": 1056.28,
"text": " see this in context of what's important here. They say again the ratio"
},
{
"start": 1056.28,
"end": 1062.04,
"text": " of unlabeled data seems pretty important. We observe a significant decrease in"
},
{
"start": 1062.04,
"end": 1066.68,
"text": " error rates by using large amounts of unlabeled data. Then the"
},
{
"start": 1066.68,
"end": 1071.8,
"text": " optimizer and learning rate schedule seems to be very important as well in"
},
{
"start": 1071.8,
"end": 1079.04,
"text": " that they use this, they say SGD with momentum works much better than Adam and"
},
{
"start": 1079.04,
"end": 1084.84,
"text": " then they use this decreasing learning rate schedule, this cosine learning rate"
},
{
"start": 1084.84,
"end": 1092.76,
"text": " schedule. So there seem to be a lot of things, a lot of hyperparameters that are"
},
{
"start": 1092.76,
"end": 1101.56,
"text": " fairly important here. And you can see that the gains are substantial sometimes"
},
{
"start": 1101.56,
"end": 1109.72,
"text": " but they aren't like through the roof substantial, where you can make a good"
},
{
"start": 1109.72,
"end": 1115.84,
"text": " argument that it is unclear how much really comes from this clever"
},
{
"start": 1115.84,
"end": 1121.8799999999999,
"text": " combination that FixMatch proposes and how much also just comes from"
},
{
"start": 1121.8799999999999,
"end": 1127.6,
"text": " whether or not you set the hyperparameters correctly and exactly how"
},
{
"start": 1127.6,
"end": 1134.76,
"text": " much computation are you able to throw at selecting your hyper"
},
{
"start": 1134.76,
"end": 1143.7199999999998,
"text": " parameters. So that seems to be a bit of a pain point for me. They also"
},
{
"start": 1143.7199999999998,
"end": 1150.8799999999999,
"text": " say we find that tuning the weight decay is exceptionally important for low label"
},
{
"start": 1150.8799999999999,
"end": 1157.08,
"text": " regimes. Choosing a value that is just one order of magnitude larger or"
},
{
"start": 1157.08,
"end": 1164.8,
"text": " smaller than optimal can cost 10 percentage points or more. And so that"
},
{
"start": 1164.8,
"end": 1170.6,
"text": " all of that seems to me that this kind of research where you're"
},
{
"start": 1170.6,
"end": 1179,
"text": " nibbling for half or single percentage points in accuracy while a single"
},
{
"start": 1179,
"end": 1186,
"text": " misstep in a choice of hyper parameter might cost you 10 times that gain is"
},
{
"start": 1186,
"end": 1192.48,
"text": " a bit sketchy. Now I recognize they get numbers like no one else has gotten"
},
{
"start": 1192.48,
"end": 1197.72,
"text": " before but where exactly the gains come from and if the gains really come from"
},
{
"start": 1197.72,
"end": 1203.6,
"text": " this architecture or actually just more from throwing computers at it I don't"
},
{
"start": 1203.6,
"end": 1209.72,
"text": " know. Alright with that I hope you enjoyed this and I invite you to check"
},
{
"start": 1209.72,
"end": 1216.28,
"text": " out the paper. Bye bye."
}
] |
AU30czb4iQA | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Imputer: Sequence Modelling via Imputation and Dynamic Programming | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"nlp",
"natural language processing",
"machine translation",
"arxiv",
"google",
"attention mechanism",
"attention",
"transformer",
"seq2seq",
"autoregressive",
"independence",
"decoding"
] | The imputer is a sequence-to-sequence model that strikes a balance between fully autoregressive models with long inference times and fully non-autoregressive models with fast inference. The imputer achieves constant decoding time independent of sequence length by exploiting dynamic programming.
https://arxiv.org/abs/2002.08926
Abstract:
This paper presents the Imputer, a neural sequence model that generates output sequences iteratively via imputations. The Imputer is an iterative generative model, requiring only a constant number of generation steps independent of the number of input or output tokens. The Imputer can be trained to approximately marginalize over all possible alignments between the input and output sequences, and all possible generation orders. We present a tractable dynamic programming training algorithm, which yields a lower bound on the log marginal likelihood. When applied to end-to-end speech recognition, the Imputer outperforms prior non-autoregressive models and achieves competitive results to autoregressive models. On LibriSpeech test-other, the Imputer achieves 11.1 WER, outperforming CTC at 13.0 WER and seq2seq at 12.5 WER.
Authors: William Chan, Chitwan Saharia, Geoffrey Hinton, Mohammad Norouzi, Navdeep Jaitly
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there! Today we're looking at the imputer sequence modeling via imputation and dynamic programming by William Chan, Chitwan Sariah, Jeffrey Hinton, Mohamed Nourouzi and Navdeep Jaitley. So this is a model to perform sequence-to-sequence tasks. Now sequence-to-sequence tasks are very very common in NLP, but in this case it's kind of a subset of sequence-to-sequence tasks. So a classic sequence-to-sequence task is a machine translation. Here for example the sentence I like you. If you want to translate it to German, sorry you, if you want to translate it to German that would become Ich mag dich. And you see that the input is a sequence right and the output is a sequence. Now the imputer deals with very special kind of sequence-to-sequence tasks. Namely it deals with sequence-to-sequence tasks where there is a monotonic alignment. So you see that this is given here. The first word is corresponding to the first word here, the second to the second and the third to the third. This is not always the case in machine translation. You know different languages have different sentence structures. So for example in French this would be je d'aime. And you can see that the first word is still the first word, however the third word has become the second, the you and the verb goes to the end. So the imputer would not be able to deal with this task very well. A different task where the imputer would be useful for would be something like speech recognition. So if someone were to speak the words I like you and you would measure the waveform of that it would look something like I like you. So if you have this waveform let's actually make some chunk samples out of this. Let's say this is a sample right here and here is a break here and here. So we have five samples on the bottom. You can see pretty easily that this sample here, this is the I and then this is silence, this is the like, this is silence and this is the you. So the imputer deals with these kind of sequence to sequence tasks where first of all there is a monotonic alignment, sorry monotonic alignment and second of all this is an engineering constraint where the length of the input sequence X is larger or equal to the length of the input sequence Y and you'll see why mainly because we rely on being able to compute this alignment here. The alignment of input samples to output samples. You can see that the monotonic alignment is actually given fairly well in speech recognition because if something is later down here it is also later in the sequence up here. That is a monotonic alignment and also usually we have more wave samples then we have words in the output sequence. So that would be a task for the imputer. Now let's think about how we would do something like this. So let's put X at the top here and we said X has five tokens in it and let's put Y at the bottom. Y actually has three tokens. So this here is I like you. This is the waveform and we want the I like you at the bottom. So what could we do? First of all what the imputer does is it represents I like you not as this form right here but as a form where you have the same length as X divided into the same amount of things and then it does the following. So for this this is an example. This is how it would represent Y. It would say I have as many chunks on the top as on the bottom. I know this chunk here corresponds to this token then this here to this and this here to this and then these are these intermediate ones. 
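A toy illustration of that representation, assuming a made-up blank symbol and helper name: the target "I like you" is stretched to the length of the input by inserting blank ("silence") slots, and collapsing an alignment simply drops the blanks, so several alignments of input length are compatible with the same output.

```python
BLANK = "_"  # placeholder for the empty / silence token (symbol chosen for illustration)

def collapse(alignment):
    # Map an alignment of input length back to the output sequence by dropping blanks.
    return [token for token in alignment if token != BLANK]

# Two of the length-5 alignments that are compatible with the target "I like you":
a1 = ["I", BLANK, "like", BLANK, "you"]
a2 = ["I", "like", BLANK, BLANK, "you"]
assert collapse(a1) == collapse(a2) == ["I", "like", "you"]
```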
So you can see these correspond to those. These are silents right here. Now it doesn't always need to be that there is always one token and a silence then a token and a silence. The task of the imputer is actually to see whether this is more likely than for example I like and then silence silence and then you. So the task of the imputer is to distinguish these two from each other and then of course also produce the actual tokens. Now if you think about how would you go about taking X and producing something like Y. So this is Y let's call it tilde. This is the actual Y right but you can see that this here is a deterministic function in one way. It's actually not a deterministic function in the other way and that becomes interesting when you have to compute a loss for this. But how would we go about doing this? What we could do is we could just take a big transformer BERT. That's actually drawn arrow. We could just take BERT and we could simply so in BERT you have in if you if you construct it correctly you have as many input tokens as output tokens. So what we could simply say is for each of the outputs that we get we simply make this as a softmax classifier over our vocabulary with the silence being one special token and we simply classify each of the outputs into this vocabulary. This would be one step right? So we could do one step BERT bang bang input to output and there is more there are more sophisticated approaches to doing this in one step like CTC but ultimately we could just do one step but then you'd have the same problem like for example XL net if you haven't seen my XL net video I recommend it that they exactly take the problem if you do this right then at the moment where you decode the word like you have no idea that there is an I over here all you know is the the vector you have here that you sample the I from right but this could be a distribution where I is pretty high but some other word is also pretty high so this process over here that samples the word like has no idea which of the two here you actually would sample so it cannot condition on it so it is the the assumption here is that the sampling of the word like is independent of the sampling of the word I and of course that's not the case the you need to know what word is there if you want to sample the word like otherwise you can end up with some very confusing sentences so this one step process is pretty quick but it has the drawback that there are these conditional independence assumptions and again I invite you to watch the XL net video if you want to dive more into this problem the second thing we could do is we could just decode one after another right so we could say all right I'll make sorry I'll make my five slots here and I just leave them empty for now and I'm just going to decode the one that I am most sure about and let's say the the speech at the back here is very clear and you say other I'm I know this is a you right so I'm gonna fill in you right here right and make this alignment that this goes here this is the you right I still don't know what the others are but now what I did they do a second step and in the second step I get as an input not only the original input like this this thing here but I also get the fact that I already decoded the word you to here right in this step so now I say given that I already decoded the word you which one am I now most sure about and I might be most sure about to say I'm most sure about this now being an eye because there's a you at the end and this kind of 
sounds like an eye so an eye here right it goes to the next step and then the next step it already has the information that it decoded I and you and now it's a might say ah okay given these that's so probably this thing so I here probably the thing here the thing right here is silence right makes the most sense I kind of hear some noise but there's already a word after so now I'm pretty sure that this here is a silent token right and you go this until the end until you're actually at this so this here would be n step decoding this here would be n steps of decoding which now no longer has the problem of these conditional independence assumptions but of course now you have the problem that you need n steps right the imputer does something in the middle of this the imputer will as you can see here it will form this into blocks right blocks of size B and this is the empty symbol here right and what it will do is it will make a step where in each block for each block it will conditioned on the previous alignment and conditioned on the input it will decode whatever it feels it is most certain about in each block and then it does this for as long as there are still empty tokens right you can see here the first block and then in the second step it will decode this this this and this so the imputer can trade off between the conditional independence assumption of the one step BERT and the full conditional independence assumption of the n step decoding right so it will compute this alignment and the actual tokens at the same time in this process so how many steps does this take this takes now B steps and this is pretty cool because B is the block size so this is independent of the sequence length so it is able to compute this alignment and output in a constant number of steps right so you're by modulating this B you're now able to trade off speed versus let's say performance in the imputer and this is pretty cool so I think actually I think the the bigger point to understand here is how to actually use the assumption that there is a monotonic alignment right because if there is a monotonic alignment and if this thing is given here then you can do this you can do this representation right here with the silence tokens and that allows you to basically represent the output in a form that is of the same length as the input and do this kind of token by token decoding while still allowing you to have variable lengths output as long as they're smaller in length than the input so that's pretty cool and then the the next pretty cool thing is the fact that they do this in blocks now of course my issue with this so this is how the system works my issue with this is how the system is trained so if you think about how you train this you must train this first of all the loss function right has to revert this and how they do it as they marginalize you see this down here you want to marginalize over all the possible alignments right here so this is how you train you sample an alignment from the alignment policy and this alignment policy is I think they have some heuristics of how they construct the alignments during during training or you have experts actually giving you this alignment I think they use in the speech recognition they use something like CTC to give you the alignments from the alignment policy and then you have a masking policy and I think they also they just do random masking and then they use that for training and then they marginalize over the alignments this I'm pretty sure is not the same 
distribution as the decoding procedure I just described right so the decoding procedure if you do this in B steps right that means each of the step is dependent on the step before so that means the distribution of whatever you whatever the imputer sees is actually dependent on itself while these people are proposing a training framework where you have here you have a heuristic in order to come up with the training sample alignments and here you have a random I think a random masking policy that comes up with the with where the empty tokens are so this is not the same distribution and then also it marginalizes over all compatible alignments which I'm I'm pretty sure this is not the same distribution this is not the correct loss distribution they have some math to show that in expectation it's the same but yeah this is this is over there over their role in policy and role and expert and and marginalization this I don't want to go too deep into this I've given it some thought but it will make this video too long and boring if I actually go into the details here suffice to say I invite you to look at the loss computation and ask yourself if you think that is the correct way to produce the data set for training given how you do the inference later the architecture of the imputer is actually pretty similar to BERT in that first of all well okay you're dealing with audio in the input so you're going to have some convolutional network here and you also need to take as an input the prior alignment that you've already produced right so this you embed and but then you simply do an attention network a transformer which will which is pretty close to to the bird example we've made and so I mean they stress that that their that their loss is actually a lower bound on the loss so I shouldn't be I shouldn't be too hard when I say it's not the correct distribution they do minimize something some loss that actually makes sense but yeah I mainly wanted to go over the over the how the imputer works and how the it is structured and I think it's pretty cool and it lends itself very well to these tasks and most of all I like the fact that it exploits the these assumptions here so not all tasks fit these assumptions but if a task does fit the assumption then I think it should be you know it it should be fairly obvious that one should exploit that in order to perform better all right that was it for me thanks | [
{
"start": 0,
"end": 6.12,
"text": " Hi there! Today we're looking at the imputer sequence modeling via imputation"
},
{
"start": 6.12,
"end": 12.72,
"text": " and dynamic programming by William Chan, Chitwan Saharia, Geoffrey Hinton, Mohammad"
},
{
"start": 12.72,
"end": 18.96,
"text": " Norouzi and Navdeep Jaitly. So this is a model to perform sequence-to-sequence"
},
{
"start": 18.96,
"end": 28.2,
"text": " tasks. Now sequence-to-sequence tasks are very very common in NLP, but in this"
},
{
"start": 28.2,
"end": 33.44,
"text": " case it's kind of a subset of sequence-to-sequence tasks. So a classic"
},
{
"start": 33.44,
"end": 38.04,
"text": " sequence-to-sequence task is a machine translation. Here for example the"
},
{
"start": 38.04,
"end": 45.32,
"text": " sentence I like you. If you want to translate it to German, sorry you, if you"
},
{
"start": 45.32,
"end": 55.92,
"text": " want to translate it to German that would become Ich mag dich. And you see that the"
},
{
"start": 55.92,
"end": 62.56,
"text": " input is a sequence right and the output is a sequence. Now the imputer deals with"
},
{
"start": 62.56,
"end": 66.76,
"text": " very special kind of sequence-to-sequence tasks. Namely it deals with"
},
{
"start": 66.76,
"end": 71.88,
"text": " sequence-to-sequence tasks where there is a monotonic alignment. So you see"
},
{
"start": 71.88,
"end": 75.88,
"text": " that this is given here. The first word is corresponding to the first word here,"
},
{
"start": 75.88,
"end": 82.64,
"text": " the second to the second and the third to the third. This is not always the case"
},
{
"start": 82.64,
"end": 86.24,
"text": " in machine translation. You know different languages have different sentence"
},
{
"start": 86.24,
"end": 93.86,
"text": " structures. So for example in French this would be je d'aime. And you can see that"
},
{
"start": 93.86,
"end": 99.6,
"text": " the first word is still the first word, however the third word has become the"
},
{
"start": 99.6,
"end": 104.96000000000001,
"text": " second, the you and the verb goes to the end. So the imputer would not be able to"
},
{
"start": 104.96000000000001,
"end": 110.92,
"text": " deal with this task very well. A different task where the imputer would be"
},
{
"start": 110.92,
"end": 117.2,
"text": " useful for would be something like speech recognition. So if someone were to speak"
},
{
"start": 117.2,
"end": 121.4,
"text": " the words I like you and you would measure the waveform of that it would"
},
{
"start": 121.4,
"end": 129.64,
"text": " look something like I like you. So if you have this waveform let's actually"
},
{
"start": 129.64,
"end": 136.12,
"text": " make some chunk samples out of this. Let's say this is a sample right here and"
},
{
"start": 136.12,
"end": 143.6,
"text": " here is a break here and here. So we have five samples on the bottom."
},
{
"start": 143.6,
"end": 150.72,
"text": " You can see pretty easily that this sample here, this is the I and then this"
},
{
"start": 150.72,
"end": 157.28,
"text": " is silence, this is the like, this is silence and this is the you. So the"
},
{
"start": 157.28,
"end": 161.04000000000002,
"text": " imputer deals with these kind of sequence to sequence tasks where first"
},
{
"start": 161.04,
"end": 167.64,
"text": " of all there is a monotonic alignment, sorry monotonic alignment and second of"
},
{
"start": 167.64,
"end": 173.68,
"text": " all this is an engineering constraint where the length of the input sequence X"
},
{
"start": 173.68,
"end": 179.07999999999998,
"text": " is larger or equal to the length of the input sequence Y and you'll see"
},
{
"start": 179.07999999999998,
"end": 185.95999999999998,
"text": " why mainly because we rely on being able to compute this alignment here. The"
},
{
"start": 185.96,
"end": 193.92000000000002,
"text": " alignment of input samples to output samples. You can see that the"
},
{
"start": 193.92000000000002,
"end": 197.96,
"text": " monotonic alignment is actually given fairly well in speech recognition"
},
{
"start": 197.96,
"end": 202.88,
"text": " because if something is later down here it is also later in the sequence up here."
},
{
"start": 202.88,
"end": 210.68,
"text": " That is a monotonic alignment and also usually we have more wave samples"
},
{
"start": 210.68,
"end": 217.84,
"text": " then we have words in the output sequence. So that would be a task for the"
},
{
"start": 217.84,
"end": 225.36,
"text": " imputer. Now let's think about how we would do something like this. So let's"
},
{
"start": 225.36,
"end": 233.68,
"text": " put X at the top here and we said X has five tokens in it and let's put Y at the"
},
{
"start": 233.68,
"end": 246.28,
"text": " bottom. Y actually has three tokens. So this here is I like you."
},
{
"start": 246.28,
"end": 252.44,
"text": " This is the waveform and we want the I like you at the bottom. So what could we"
},
{
"start": 252.44,
"end": 259.36,
"text": " do? First of all what the imputer does is it represents I like you not as this"
},
{
"start": 259.36,
"end": 267.88,
"text": " form right here but as a form where you have the same length as X divided into"
},
{
"start": 267.88,
"end": 276.16,
"text": " the same amount of things and then it does the following. So for this this is"
},
{
"start": 276.16,
"end": 278.68,
"text": " an example."
},
{
"start": 278.68,
"end": 291,
"text": " This is how it would represent Y. It would say I have as many chunks on"
},
{
"start": 291,
"end": 296.24,
"text": " the top as on the bottom. I know this chunk here corresponds to this token"
},
{
"start": 296.24,
"end": 302.48,
"text": " then this here to this and this here to this and then these are these"
},
{
"start": 302.48,
"end": 308.6,
"text": " intermediate ones. So you can see these correspond to those. These are"
},
{
"start": 308.6,
"end": 314,
"text": " silents right here. Now it doesn't always need to be that there is always one"
},
{
"start": 314,
"end": 318.32000000000005,
"text": " token and a silence then a token and a silence. The task of the imputer is"
},
{
"start": 318.32000000000005,
"end": 329.20000000000005,
"text": " actually to see whether this is more likely than for example I like and then"
},
{
"start": 329.20000000000005,
"end": 334.52000000000004,
"text": " silence silence and then you. So the task of the imputer is to distinguish"
},
{
"start": 334.52,
"end": 339.24,
"text": " these two from each other and then of course also produce the actual tokens."
},
{
"start": 339.24,
"end": 346.32,
"text": " Now if you think about how would you go about taking X and producing something"
},
{
"start": 346.32,
"end": 351.84,
"text": " like Y. So this is Y let's call it tilde. This is the actual Y right but you can"
},
{
"start": 351.84,
"end": 356.2,
"text": " see that this here is a deterministic function in one way. It's actually not a"
},
{
"start": 356.2,
"end": 360.79999999999995,
"text": " deterministic function in the other way and that becomes interesting when you"
},
{
"start": 360.8,
"end": 365,
"text": " have to compute a loss for this. But how would we go about doing this? What"
},
{
"start": 365,
"end": 370.08,
"text": " we could do is we could just take a big transformer BERT. That's actually"
},
{
"start": 370.08,
"end": 379.12,
"text": " drawn arrow. We could just take BERT and we could simply so in BERT you have"
},
{
"start": 379.12,
"end": 385.64,
"text": " in if you if you construct it correctly you have as many input tokens as output"
},
{
"start": 385.64,
"end": 390.36,
"text": " tokens. So what we could simply say is for each of the outputs that we get we"
},
{
"start": 390.36,
"end": 397.24,
"text": " simply make this as a softmax classifier over our vocabulary with the silence"
},
{
"start": 397.24,
"end": 404.08000000000004,
"text": " being one special token and we simply classify each of the outputs into this"
},
{
"start": 404.08000000000004,
"end": 412.16,
"text": " vocabulary. This would be one step right? So we could do one step BERT bang bang"
},
{
"start": 412.16,
"end": 418.16,
"text": " input to output and there is more there are more sophisticated approaches to"
},
{
"start": 418.16,
"end": 423.20000000000005,
"text": " doing this in one step like CTC but ultimately we could just do one step but"
},
{
"start": 423.20000000000005,
"end": 428.16,
"text": " then you'd have the same problem like for example XL net if you haven't seen"
},
{
"start": 428.16,
"end": 434.20000000000005,
"text": " my XL net video I recommend it that they exactly take the problem if you do this"
},
{
"start": 434.20000000000005,
"end": 441.04,
"text": " right then at the moment where you decode the word like you have no idea"
},
{
"start": 441.04,
"end": 446.32000000000005,
"text": " that there is an I over here all you know is the the vector you have here"
},
{
"start": 446.32,
"end": 453.52,
"text": " that you sample the I from right but this could be a distribution where I is"
},
{
"start": 453.52,
"end": 458.2,
"text": " pretty high but some other word is also pretty high so this process over here"
},
{
"start": 458.2,
"end": 464.56,
"text": " that samples the word like has no idea which of the two here you actually would"
},
{
"start": 464.56,
"end": 470.08,
"text": " sample so it cannot condition on it so it is the the assumption here is that"
},
{
"start": 470.08,
"end": 473.68,
"text": " the sampling of the word like is independent of the sampling of the word"
},
{
"start": 473.68,
"end": 480,
"text": " I and of course that's not the case the you need to know what word is there if"
},
{
"start": 480,
"end": 486.12,
"text": " you want to sample the word like otherwise you can end up with some very"
},
{
"start": 486.12,
"end": 492.6,
"text": " confusing sentences so this one step process is pretty quick but it has the"
},
{
"start": 492.6,
"end": 495.8,
"text": " drawback that there are these conditional independence assumptions and"
},
{
"start": 495.8,
"end": 500.88,
"text": " again I invite you to watch the XL net video if you want to dive more into this"
},
{
"start": 500.88,
"end": 507.04,
"text": " problem the second thing we could do is we could just decode one after another"
},
{
"start": 507.04,
"end": 516.64,
"text": " right so we could say all right I'll make sorry I'll make my five slots here"
},
{
"start": 516.64,
"end": 521.52,
"text": " and I just leave them empty for now and I'm just going to decode the one that I"
},
{
"start": 521.52,
"end": 527.72,
"text": " am most sure about and let's say the the speech at the back here is very clear"
},
{
"start": 527.72,
"end": 533.48,
"text": " and you say other I'm I know this is a you right so I'm gonna fill in you right"
},
{
"start": 533.48,
"end": 540.36,
"text": " here right and make this alignment that this goes here this is the you right I"
},
{
"start": 540.36,
"end": 546.6800000000001,
"text": " still don't know what the others are but now what I did they do a second step and"
},
{
"start": 546.6800000000001,
"end": 556.6,
"text": " in the second step I get as an input not only the original input like this this"
},
{
"start": 556.6,
"end": 562.8000000000001,
"text": " thing here but I also get the fact that I already decoded the word you to here"
},
{
"start": 562.8000000000001,
"end": 568.48,
"text": " right in this step so now I say given that I already decoded the word you which"
},
{
"start": 568.48,
"end": 575.36,
"text": " one am I now most sure about and I might be most sure about to say I'm most sure"
},
{
"start": 575.36,
"end": 578.52,
"text": " about this now being an eye because there's a you at the end and this kind"
},
{
"start": 578.52,
"end": 584.2,
"text": " of sounds like an eye so an eye here right it goes to the next step and then"
},
{
"start": 584.2,
"end": 589.12,
"text": " the next step it already has the information that it decoded I and you"
},
{
"start": 589.12,
"end": 597.4000000000001,
"text": " and now it's a might say ah okay given these that's so probably this thing so I"
},
{
"start": 597.4000000000001,
"end": 604.2800000000001,
"text": " here probably the thing here the thing right here is silence right makes the"
},
{
"start": 604.2800000000001,
"end": 608.2,
"text": " most sense I kind of hear some noise but there's already a word after so now I'm"
},
{
"start": 608.2,
"end": 613.76,
"text": " pretty sure that this here is a silent token right and you go this until the"
},
{
"start": 613.76,
"end": 621.96,
"text": " end until you're actually at this so this here would be n step decoding this"
},
{
"start": 621.96,
"end": 628.24,
"text": " here would be n steps of decoding which now no longer has the problem of these"
},
{
"start": 628.24,
"end": 632.72,
"text": " conditional independence assumptions but of course now you have the problem that"
},
{
"start": 632.72,
"end": 641.16,
"text": " you need n steps right the imputer does something in the middle of this the"
},
{
"start": 641.16,
"end": 648.12,
"text": " imputer will as you can see here it will form this into blocks right blocks of"
},
{
"start": 648.12,
"end": 655.3199999999999,
"text": " size B and this is the empty symbol here right and what it will do is it will"
},
{
"start": 655.3199999999999,
"end": 661.36,
"text": " make a step where in each block for each block it will conditioned on the"
},
{
"start": 661.36,
"end": 665.36,
"text": " previous alignment and conditioned on the input it will decode whatever it"
},
{
"start": 665.36,
"end": 673.24,
"text": " feels it is most certain about in each block and then it does this for as long"
},
{
"start": 673.24,
"end": 678.64,
"text": " as there are still empty tokens right you can see here the first block and then"
},
{
"start": 678.64,
"end": 686.4,
"text": " in the second step it will decode this this this and this so the imputer can"
},
{
"start": 686.4,
"end": 692,
"text": " trade off between the conditional independence assumption of the one step"
},
{
"start": 692,
"end": 697.48,
"text": " BERT and the full conditional independence assumption of the n step"
},
{
"start": 697.48,
"end": 705.44,
"text": " decoding right so it will compute this alignment and the actual tokens at the"
},
{
"start": 705.44,
"end": 712.4,
"text": " same time in this process so how many steps does this take this takes now B"
},
{
"start": 712.4,
"end": 721.36,
"text": " steps and this is pretty cool because B is the block size so this is independent"
},
{
"start": 721.36,
"end": 727.4,
"text": " of the sequence length so it is able to compute this alignment and output in a"
},
{
"start": 727.4,
"end": 734.2,
"text": " constant number of steps right so you're by modulating this B you're now able to"
},
{
"start": 734.2,
"end": 741.84,
"text": " trade off speed versus let's say performance in the imputer and this is"
},
{
"start": 741.84,
"end": 747.12,
"text": " pretty cool so I think actually I think the the bigger point to understand here"
},
{
"start": 747.12,
"end": 753.28,
"text": " is how to actually use the assumption that there is a monotonic alignment"
},
{
"start": 753.28,
"end": 757.28,
"text": " right because if there is a monotonic alignment and if this thing is given"
},
{
"start": 757.28,
"end": 765.6,
"text": " here then you can do this you can do this representation right here with the"
},
{
"start": 765.6,
"end": 773.72,
"text": " silence tokens and that allows you to basically represent the output in a"
},
{
"start": 773.72,
"end": 778.48,
"text": " form that is of the same length as the input and do this kind of token by token"
},
{
"start": 778.48,
"end": 784.84,
"text": " decoding while still allowing you to have variable lengths output as long as"
},
{
"start": 784.84,
"end": 792.08,
"text": " they're smaller in length than the input so that's pretty cool and then the the"
},
{
"start": 792.08,
"end": 799.88,
"text": " next pretty cool thing is the fact that they do this in blocks now of course my"
},
{
"start": 799.88,
"end": 805.92,
"text": " issue with this so this is how the system works my issue with this is how"
},
{
"start": 805.92,
"end": 812.76,
"text": " the system is trained so if you think about how you train this you must train"
},
{
"start": 812.76,
"end": 820.56,
"text": " this first of all the loss function right has to revert this and how they"
},
{
"start": 820.56,
"end": 829.48,
"text": " do it as they marginalize you see this down here you want to marginalize over"
},
{
"start": 829.48,
"end": 838.96,
"text": " all the possible alignments right here so this is how you train you sample an"
},
{
"start": 838.96,
"end": 848.16,
"text": " alignment from the alignment policy and this alignment policy is I think they"
},
{
"start": 848.16,
"end": 853.2,
"text": " have some heuristics of how they construct the alignments during during"
},
{
"start": 853.2,
"end": 858.32,
"text": " training or you have experts actually giving you this alignment I think they"
},
{
"start": 858.32,
"end": 864.84,
"text": " use in the speech recognition they use something like CTC to give you the"
},
{
"start": 864.84,
"end": 872.1600000000001,
"text": " alignments from the alignment policy and then you have a masking policy and I"
},
{
"start": 872.1600000000001,
"end": 877.5200000000001,
"text": " think they also they just do random masking and then they use that for"
},
{
"start": 877.5200000000001,
"end": 884.7600000000001,
"text": " training and then they marginalize over the alignments this I'm pretty sure is"
},
{
"start": 884.76,
"end": 892.72,
"text": " not the same distribution as the decoding procedure I just described"
},
{
"start": 892.72,
"end": 901.16,
"text": " right so the decoding procedure if you do this in B steps right that means each"
},
{
"start": 901.16,
"end": 908.04,
"text": " of the step is dependent on the step before so that means the distribution of"
},
{
"start": 908.04,
"end": 914.4,
"text": " whatever you whatever the imputer sees is actually dependent on itself while"
},
{
"start": 914.4,
"end": 921.68,
"text": " these people are proposing a training framework where you have here you have a"
},
{
"start": 921.68,
"end": 928.16,
"text": " heuristic in order to come up with the training sample alignments and here you"
},
{
"start": 928.16,
"end": 936,
"text": " have a random I think a random masking policy that comes up with the with where"
},
{
"start": 936,
"end": 941.3199999999999,
"text": " the empty tokens are so this is not the same distribution and then also it"
},
{
"start": 941.32,
"end": 947.44,
"text": " marginalizes over all compatible alignments which I'm I'm pretty sure"
},
{
"start": 947.44,
"end": 952.2800000000001,
"text": " this is not the same distribution this is not the correct loss distribution"
},
{
"start": 952.2800000000001,
"end": 959.5200000000001,
"text": " they have some math to show that in expectation it's the same but yeah this"
},
{
"start": 959.5200000000001,
"end": 967.7600000000001,
"text": " is this is over there over their role in policy and role and expert and and"
},
{
"start": 967.76,
"end": 974.2,
"text": " marginalization this I don't want to go too deep into this I've given it some"
},
{
"start": 974.2,
"end": 979.28,
"text": " thought but it will make this video too long and boring if I actually go into"
},
{
"start": 979.28,
"end": 984.96,
"text": " the details here suffice to say I invite you to look at the loss computation and"
},
{
"start": 984.96,
"end": 992.68,
"text": " ask yourself if you think that is the correct way to produce the data set for"
},
{
"start": 992.68,
"end": 999.7199999999999,
"text": " training given how you do the inference later the architecture of the imputer is"
},
{
"start": 999.7199999999999,
"end": 1006.52,
"text": " actually pretty similar to BERT in that first of all well okay you're dealing"
},
{
"start": 1006.52,
"end": 1011.3599999999999,
"text": " with audio in the input so you're going to have some convolutional network here"
},
{
"start": 1011.3599999999999,
"end": 1015.8,
"text": " and you also need to take as an input the prior alignment that you've already"
},
{
"start": 1015.8,
"end": 1021.9599999999999,
"text": " produced right so this you embed and but then you simply do an attention"
},
{
"start": 1021.96,
"end": 1029.1200000000001,
"text": " network a transformer which will which is pretty close to to the bird example"
},
{
"start": 1029.1200000000001,
"end": 1039.44,
"text": " we've made and so I mean they stress that that their that their loss is"
},
{
"start": 1039.44,
"end": 1044.44,
"text": " actually a lower bound on the loss so I shouldn't be I shouldn't be too hard when"
},
{
"start": 1044.44,
"end": 1050.64,
"text": " I say it's not the correct distribution they do minimize something some loss"
},
{
"start": 1050.64,
"end": 1058.4,
"text": " that actually makes sense but yeah I mainly wanted to go over the over the"
},
{
"start": 1058.4,
"end": 1064.3200000000002,
"text": " how the imputer works and how the it is structured and I think it's pretty cool"
},
{
"start": 1064.3200000000002,
"end": 1072.48,
"text": " and it lends itself very well to these tasks and most of all I like the fact"
},
{
"start": 1072.48,
"end": 1079.96,
"text": " that it exploits the these assumptions here so not all tasks fit these"
},
{
"start": 1079.96,
"end": 1085.8400000000001,
"text": " assumptions but if a task does fit the assumption then I think it should be you"
},
{
"start": 1085.8400000000001,
"end": 1090.32,
"text": " know it it should be fairly obvious that one should exploit that in order to"
},
{
"start": 1090.32,
"end": 1110,
"text": " perform better all right that was it for me thanks"
}
] |
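The block decoding scheme described in the imputer transcript above can be made concrete with a small sketch. This is a hedged illustration only, not the authors' implementation: `model`, `EMPTY`, and `block_size` are hypothetical names, and `model` is assumed to return, for the input features and the current partial alignment, a per-position distribution over the vocabulary.

```python
import numpy as np

EMPTY = 0  # hypothetical id for the not-yet-decoded ("empty") slot


def block_decode(model, x, seq_len, block_size):
    """Sketch of imputer-style decoding: fill one token per block per step,
    so the full alignment is produced in at most `block_size` steps,
    independent of the sequence length."""
    alignment = np.full(seq_len, EMPTY, dtype=np.int64)
    for _ in range(block_size):
        if not (alignment == EMPTY).any():
            break  # everything decoded already
        # the model conditions on the input AND on all tokens decoded so far
        probs = model(x, alignment)            # assumed shape: (seq_len, vocab_size)
        confidence = probs.max(axis=-1)
        tokens = probs.argmax(axis=-1)
        for start in range(0, seq_len, block_size):
            block = slice(start, start + block_size)
            empty_idx = np.where(alignment[block] == EMPTY)[0]
            if empty_idx.size == 0:
                continue
            # commit the single most confident still-empty position in this block
            best = empty_idx[np.argmax(confidence[block][empty_idx])]
            alignment[start + best] = tokens[block][best]
    return alignment
```

With `block_size = 1` this degenerates to the fully parallel one-step decoding with its conditional independence assumption, and with `block_size = seq_len` it becomes the n-step, one-token-at-a-time decoding, which is exactly the speed/performance trade-off discussed in the transcript.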
ZVVnvZdUMUk | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"neural networks",
"pruning",
"distillation",
"quantization",
"size",
"weights",
"optimization",
"training",
"generalization",
"overparameterization",
"winning ticket",
"winning lottery ticket",
"arxiv"
] | Stunning evidence for the hypothesis that neural networks work so well because their random initialization almost certainly contains a nearly optimal sub-network that is responsible for most of the final performance.
https://arxiv.org/abs/1803.03635
Abstract:
Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance.
We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective.
We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.
Authors: Jonathan Frankle, Michael Carbin
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. Today we're looking at The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks by Jonathan Frankle and Michael Carbin. So this paper is sort of an empirical paper into what makes neural networks train successfully. And it comes out of the literature of pruning. So they say neural network pruning techniques have been around for a while. They can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. So what does this mean? If you have a neural network, let's say you just have three nodes in each layer and you have two layers here. You have a neural network here. If you have a fully connected neural network, every node is going to be connected with every node in the next layer, right? And these connections are your weights, your thetas. And you're going to train them, which means you take a number of steps in this direction. And let's say you have a test set accuracy right here. So here is steps. You're going to train them. And if you train them, your accuracy will reach a certain point, right? I'm just going to draw the end point here. Let's say you reach a 90% test accuracy. So your network generalizes pretty well. That's pretty good. So people have been wondering: these networks require quite a lot of storage. You know, this is nine connections right here, so three times three, and this is also nine connections. Can we make it smaller but still retain the accuracy? And this is where pruning comes in. So with pruning, people would go and, after you train them... So the first step is: train the full network, right? And then the second step is: prune. Now when you prune, you basically select among the weights that you have trained, you select the best ones in some form or another. In this case, people just select the ones with the largest magnitudes, but there are multiple techniques to do this. And this is very related to things like quantization or distillation. So with pruning, you just leave away some of the weights or most of the weights, and you hope that you still retain a pretty good accuracy right here. Sorry, actually, we don't need this steps thing. So you leave away weights and you retain a good accuracy. So pruning methods have been deployed successfully to make networks use less space or be faster to evaluate, because, of course, with fewer numbers, you need to do fewer calculations. So this paper builds on top of this and it basically says: all right, if we do the following, if we now take this network that we identified after training and we just take this network and train it from the beginning, only this sub network. Right. So step three is: retrain. Then it will also perform pretty well or even better, under one condition. Right. So if you only train this thing, it will perform well under one condition, and the condition is that you transfer over the initial weights. So, right, the question is: can we train just the small network from the beginning so that we don't have to train the big network? And the paper identifies that this works if your initial weights, theta zero of the small network, are equal to the initial weights of the large network. Right. Just the ones where you ported them over. But basically, the short answer is no.
And the reason is, if you only want to train the small network, you need to know the good initialization of these weights here. And the good initialization you only know after you've trained the large network and actually identified which of these connections make sense. You can't just take a smaller network from the beginning. You have to train the larger one; then you know which weights and which initializations make sense. So this is the winning lottery ticket hypothesis. Basically, it states, and we can read it out in full: The lottery ticket hypothesis: a randomly initialized, dense neural network contains a sub network that is initialized such that, when trained in isolation, it can match the test accuracy of the original network after training for at most the same number of iterations. Right. Now, the important part here is that it contains a sub network that is initialized such that when trained in isolation. So two things are important: the structure of the sub network is important, but the initialization of its connections is also important. So the paper kind of hints at why neural networks work at all. We've often thought: neural networks have so many parameters, how can they even generalize? The reason is the following. If we have a neural network, we throw so many parameters at it. Some of the parameters, one subset of the parameters, namely the red ones here, are going to be initialized in such a beneficial way that training will make the network perform well. So it's initialization plus SGD on that sub network. So it is actually only a very small sub network that is responsible for the performance of the neural network. But that sub network needs to be initialized at the correct position. And by over parameterizing these neural networks so much, we actually give it combinatorially many sub networks to choose from where the initialization could be good. So because of this combinatorics, it means that if we over parameterize by some margin, then there's almost guaranteed to be a good sub network in there that can then perform well. So I hope this makes sense. It is not a magic trick where we can now train the smaller networks from the start. It is an explanation of why the over parameterization in neural networks makes sense: by over parameterizing, we allow the neural networks to exploit the combinatorics to find a good, well initialized sub network that will perform well. And the evidence for this is exactly the fact that if we transfer over the sub network, it by itself will reach the same performance or actually exceed the performance, but only if we initialize it at the same point as it was initialized in the original network. So here is how these sub networks are identified. We've already hinted at that, but here is how the paper does it. So it says: identifying winning tickets. First, randomly initialize a neural network. This is the full neural network. Then train the network for j iterations, arriving at some parameters. These are the trained parameters. Prune p% of the parameters. So of these parameters, prune some. And in order to know which ones to prune, you need to have first trained the full neural network. So this is the catch here: you need to train the full neural network to know which ones you must prune. And thereby you create a mask m.
And then they say reset the remaining parameters to their value in theta 0. Actually you don't need to say remaining. You can just say reset the parameters to their values in theta 0. Now this is also important. This is the same theta 0 as it was at the beginning of the training. So you need to actually set them back to those exact values. And thereby you create the winning ticket. If you just want to end up with a trained network, then this remaining thing here is important. But if you then want to retrain, you can set everything back and only train the masked version of the network. And they say this will identify these winning tickets. And it will actually work better if you don't do this in what they call one shot. But if you do this iterative pruning, that means it repeatedly trains, prunes and resets the network over n rounds. Each round prunes p to the 1 over n percent of the weights that survived the previous round. Now why might that be? It might be. And this is I think a somewhat valid hypothesis that I myself put forth here. It might be that if you prune some of the weights, let's say you prune this one and this one, what you'll do is you'll put the responsibility of these weights onto other weights. So maybe on this one and this one. So as we said, they prune by looking at which weights are large. So let's say here we have the weights of the layer and these are the magnitudes of the weights. So you would prune, let's say you only want to keep two of those around. So you would prune this one and this one because these are pretty small. Here's the magnitude. And you would also prune that one. If you just do this one shot and then you would retrain and maybe these weights would end up somewhat different. But if you do this in multiple rounds, let's say you first prune one of them. So you would actually prune the smallest one, this one here. And then you retrain and then your weights actually change. And all of the responsibility that this weight carried before is now transferred onto this. So your new weights look like this. And you prune another one like this. And again, all the responsibility of this would, in my hypothetical example, fall on this one. And now if you prune a third one, you would actually prune this one because you realize this weight here, in absence of these two other weights, is actually important. So you would prune this one as well. So I think that is why this iterative pruning method might work a bit better than the one shot pruning method that they say here. So they do a lot of empirical investigation. And I just want to highlight very few of them. So that you get the gist and then the paper goes into a lot of detail and a lot of different architectures that you can check out yourself. So here we have a plot that deals with percent of weights remaining. So as you go to the right here, they drop more and more weights and realize this is a log plot. So if the dashed lines here are random pruning, which means you just drop out a certain number of weights and then you retrain. And you can see that the dashed line here, it starts dropping and just becomes worse as you have less and less weights remaining, which is exactly what's expected. You prune the network, you make it smaller, you make it less performant. And the more weights you take away, the less performing it is. 
But interestingly enough, if you do this pruning that they suggest and then retrain with the correct initialization, not only do you retain the same level of accuracy for very long, you see here this is 2.9 or 1.2 percent of weights remaining, but you actually go higher. So you can see here, when you have 16 percent of weights remaining, there's actually a significant difference between the full network and the pruned network. And that's only by simply training this winning hypothesis. So this I find very, very fascinating. And again, this is not a magic bullet that you can apply from the beginning, but it does give a clue that if you could train these from the beginning, then you might actually end up at a better point. So it does actually give a practical application. Also, you see they train faster. So the blue line here is the full network over the course of training. Sorry, this should be blue. So here is training iterations and this is test accuracy. So you see the full network does something like this. Now, if you prune to 20 percent of the weights, you actually train faster and you go higher. And even if you have 7 percent of the weights, you go almost as high. So this is very interesting. Only when you go to like 1.9 percent of the weights does your performance degrade again and eventually actually go lower than the original network. So that is pretty, pretty, pretty cool, I think. Now, as I said, they do a lot of investigation, and I think one of the main takeaways is that it is not only the structure of the winning hypothesis. So it's not only the structure of the sub network that makes it a winning hypothesis; it is actually the initialization. Here I want to show one of these plots. They have lots of plots. You can see here, for example, sorry, this is from my own annotations. Again, this is percent of weights remaining and this is test accuracy at the final iteration. And if we initialize the sub network at its original position, like this method suggests, you see, we first increase the accuracy and then decrease it only after a long time. If we take the same sub network, right, but we randomly reinitialize it, then it drops much faster and actually immediately drops. So it really is about not only the structure of the sub network, but about its initialization. I think that is the core of the hypothesis here. A very interesting related finding that I just want to mention is that they actually look at the weights. So if you have two kinds of weights, let's actually go up to my original drawing here. If you compare how fast or how far the weights travel in optimization space, so you can basically look at how far weights travel during optimization. So you take the full neural network here and you look at a parameter that ends up being in the winning hypothesis: theta zero, and it goes to, let's say, theta final. And you also look at parameters that don't end up in the winning hypothesis. Let's call these theta one, going to theta one final, prime. I'm not too good at labeling. And if you look at how far they travel, you'll find that the weights that end up in the winning hypothesis travel much further in optimization space during optimization than weights that are not in the winning hypothesis, right? Those just stay around much more. So it's not that the kind of good network is already contained in initialization.
It's much more that the good network lends itself very favorably to being optimized by SGD, right? Because it travels farther, it means SGD has a bigger pull on it, right? I think there are a lot of things that are yet to be explored in this space, and I think this paper is a very cool contribution to our understanding of how neural networks work. All right, I invite you to check out all the experiments. They do a very thorough job. And with that, I say bye bye. | [
{
"start": 0,
"end": 11,
"text": " Hi there. Today we're looking at the lottery ticket hypothesis finding sparse trainable neural networks by Jonathan Frankel and Michael Carbon."
},
{
"start": 11,
"end": 21,
"text": " So this paper is sort of an empirical paper into what makes neural networks train successfully."
},
{
"start": 21,
"end": 29,
"text": " And it comes out of the literature of pruning. So they say neural network pruning techniques, right, they have been around for a while."
},
{
"start": 29,
"end": 44,
"text": " They can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance or inference without compromising accuracy."
},
{
"start": 44,
"end": 57,
"text": " So what does this mean? If you have a neural network, let's say you just have three nodes, each layer, you have two layers here."
},
{
"start": 57,
"end": 69,
"text": " You have a neural network here. If you have a fully connected neural network, every node is going to be connected with every node in the next layer, right?"
},
{
"start": 69,
"end": 84,
"text": " And these connections are your weights, your thetas. And you're going to train them, which means you have a number of steps in this direction."
},
{
"start": 84,
"end": 94,
"text": " And let's say you have a test set accuracy right here. So here is steps. You're going to train them."
},
{
"start": 94,
"end": 103,
"text": " And if you train them, your accuracy will reach a certain point, right? I'm just going to draw the end point here."
},
{
"start": 103,
"end": 109,
"text": " Let's say you reach a 90% test accuracy. So your network generalizes pretty well. That's pretty good."
},
{
"start": 109,
"end": 118,
"text": " So people have been wondering, these networks, they require quite a lot of storage. You know, this is nine connections right here."
},
{
"start": 118,
"end": 126,
"text": " So three times three. And this is also nine connections. Can we make it smaller but still retain the accuracy?"
},
{
"start": 126,
"end": 131,
"text": " And this is where pruning comes in. So with pruning, people would go and after you train them."
},
{
"start": 131,
"end": 140,
"text": " So the first step is train the full network, right? And then the second step is prune."
},
{
"start": 140,
"end": 155,
"text": " Now when you prune, you basically select among the weights that you have that you have trained, you select the best ones in some form or another."
},
{
"start": 155,
"end": 162,
"text": " In this case, people just select the ones with the largest magnitudes. But there are multiple techniques to do this."
},
{
"start": 162,
"end": 172,
"text": " And this is very related to things like quantization or distillation. So with pruning, you just leave away some of the weights or most of the weights."
},
{
"start": 172,
"end": 184,
"text": " And you hope that you still retain a pretty good accuracy right here, right? Sorry, actually, we don't need these steps thing."
},
{
"start": 184,
"end": 195,
"text": " So you leave away weights and you retain a good accuracy. So pruning methods have been deployed successfully to make networks use less space or be faster to evaluate."
},
{
"start": 195,
"end": 200,
"text": " Because, of course, with less numbers, you need to do less calculations."
},
{
"start": 200,
"end": 228,
"text": " So this paper builds on top of this and it basically says, all right, if we do the following, if we now take this network that we identified after training and we just take this network and we train it from the beginning, only this sub network."
},
{
"start": 228,
"end": 234,
"text": " Right. So three is retrain."
},
{
"start": 234,
"end": 247,
"text": " Then it will also perform pretty well or even better under one condition. Right. So if you only train this thing, it will perform well under one condition."
},
{
"start": 247,
"end": 262,
"text": " And the condition is that you transfer over the initial weights. So right. The question is, can we train just the small network from the beginning so that we don't have to train the big network?"
},
{
"start": 262,
"end": 278,
"text": " Right. And the paper identifies that this works if if your initial weights, theta zero of the small network are equal to the initial weights of the large network."
},
{
"start": 278,
"end": 285,
"text": " Right. Just so just the ones where you ported them over. But basically, the short answer is no."
},
{
"start": 285,
"end": 297,
"text": " And the reason is, if you only want to train the small network, you need to know the good initialization of these of these weights all here."
},
{
"start": 297,
"end": 307,
"text": " And the good initialization, you only know after you've trained the large network and actually identified which of these connections make sense."
},
{
"start": 307,
"end": 316,
"text": " You can't just take a smaller network from the beginning. You have to train the larger one. Then you know which weights and which initializations make sense."
},
{
"start": 316,
"end": 324,
"text": " So this is the winning lottery ticket hypothesis. Basically, it states and we can read it out in full."
},
{
"start": 324,
"end": 337,
"text": " The lottery ticket hypothesis is a randomly initialized dense neural network contains a sub network that is initialized such that when trained in isolation,"
},
{
"start": 337,
"end": 345,
"text": " it can match the test accuracy of the original network after trading for at most the same number of iterations."
},
{
"start": 345,
"end": 356,
"text": " Right. Now, the important part here is that it contains a sub network that is initialized such that when trained in isolation."
},
{
"start": 356,
"end": 369,
"text": " So two things are important. It is important. The structure of the network of the sub network, but it is also important."
},
{
"start": 369,
"end": 378,
"text": " What are the initialization of the connections? So the paper kind of hints at why neural networks work at all."
},
{
"start": 378,
"end": 387,
"text": " And the reason why neural networks work is because we've often thought of neural networks have so many parameters, how can they even generalize?"
},
{
"start": 387,
"end": 394,
"text": " The reason is the following. If we have a neural network, we throw so many parameters at it."
},
{
"start": 394,
"end": 403,
"text": " Some of the parameters, one subset of the parameters, namely the red ones here, are going to be initialized in such a way,"
},
{
"start": 403,
"end": 412,
"text": " in such a beneficial way that training will perform, will make the network perform well."
},
{
"start": 412,
"end": 421,
"text": " So it's initialization plus SGD on that sub network."
},
{
"start": 421,
"end": 428,
"text": " So it is actually only a very small sub network that is responsible for the performance of the neural network."
},
{
"start": 428,
"end": 435,
"text": " But that sub network needs to be initialized at the correct position."
},
{
"start": 435,
"end": 448,
"text": " And by over parameterizing these neural networks so much, we actually give it combinatorically many sub networks to choose from where the initialization could be well."
},
{
"start": 448,
"end": 455,
"text": " So because of this combinatorics, it means that if we over parameterize by some margin,"
},
{
"start": 455,
"end": 462,
"text": " then there's almost guaranteed to be a good sub network in there that can then perform well."
},
{
"start": 462,
"end": 472,
"text": " So I hope this makes sense. It is basically not a way, it is not a magic thing where we now can train the smaller networks."
},
{
"start": 472,
"end": 479,
"text": " It is an explanation of why the over parameterization in neural networks makes sense."
},
{
"start": 479,
"end": 493,
"text": " Because by over parameterizing, we allow the neural networks to exploit the combinatorics to find a good, well initialized sub network that will perform well."
},
{
"start": 493,
"end": 506,
"text": " And the evidence for this is exactly the fact that if we transfer over the sub network, it by itself will reach the same performance or actually exceed the performance."
},
{
"start": 506,
"end": 515,
"text": " But only if we initialize it at the same point as it was initialized in the original network."
},
{
"start": 515,
"end": 520,
"text": " So here is how these sub networks are identified."
},
{
"start": 520,
"end": 524,
"text": " We've already hinted at that, but here is how the paper does it."
},
{
"start": 524,
"end": 529,
"text": " So it says identifying winning tickets. First randomly initialize a neural network."
},
{
"start": 529,
"end": 531,
"text": " This is the full neural network."
},
{
"start": 531,
"end": 537,
"text": " Then train the network for j iterations arriving at some parameters."
},
{
"start": 537,
"end": 540,
"text": " These are the trained parameters."
},
{
"start": 540,
"end": 545,
"text": " Prune p% of the parameters."
},
{
"start": 545,
"end": 548,
"text": " So of these parameters, prune some."
},
{
"start": 548,
"end": 558,
"text": " And this is in order to know which ones you prune, you need to have first trained the full neural network."
},
{
"start": 558,
"end": 564,
"text": " So this is the catch here. You need to train the full neural network to know which ones you must prune."
},
{
"start": 564,
"end": 568,
"text": " And thereby you create a mask m."
},
{
"start": 568,
"end": 574,
"text": " And then they say reset the remaining parameters to their value in theta 0."
},
{
"start": 574,
"end": 580,
"text": " Actually you don't need to say remaining. You can just say reset the parameters to their values in theta 0."
},
{
"start": 580,
"end": 587,
"text": " Now this is also important. This is the same theta 0 as it was at the beginning of the training."
},
{
"start": 587,
"end": 592,
"text": " So you need to actually set them back to those exact values."
},
{
"start": 592,
"end": 596,
"text": " And thereby you create the winning ticket."
},
{
"start": 596,
"end": 606,
"text": " If you just want to end up with a trained network, then this remaining thing here is important."
},
{
"start": 606,
"end": 616,
"text": " But if you then want to retrain, you can set everything back and only train the masked version of the network."
},
{
"start": 616,
"end": 620,
"text": " And they say this will identify these winning tickets."
},
{
"start": 620,
"end": 626,
"text": " And it will actually work better if you don't do this in what they call one shot."
},
{
"start": 626,
"end": 634,
"text": " But if you do this iterative pruning, that means it repeatedly trains, prunes and resets the network over n rounds."
},
{
"start": 634,
"end": 641,
"text": " Each round prunes p to the 1 over n percent of the weights that survived the previous round."
},
{
"start": 641,
"end": 645,
"text": " Now why might that be? It might be."
},
{
"start": 645,
"end": 655,
"text": " And this is I think a somewhat valid hypothesis that I myself put forth here."
},
{
"start": 655,
"end": 665,
"text": " It might be that if you prune some of the weights, let's say you prune this one and this one,"
},
{
"start": 665,
"end": 671,
"text": " what you'll do is you'll put the responsibility of these weights onto other weights."
},
{
"start": 671,
"end": 679,
"text": " So maybe on this one and this one. So as we said, they prune by looking at which weights are large."
},
{
"start": 679,
"end": 689,
"text": " So let's say here we have the weights of the layer and these are the magnitudes of the weights."
},
{
"start": 689,
"end": 699,
"text": " So you would prune, let's say you only want to keep two of those around."
},
{
"start": 699,
"end": 703,
"text": " So you would prune this one and this one because these are pretty small."
},
{
"start": 703,
"end": 709,
"text": " Here's the magnitude. And you would also prune that one."
},
{
"start": 709,
"end": 717,
"text": " If you just do this one shot and then you would retrain and maybe these weights would end up somewhat different."
},
{
"start": 717,
"end": 723,
"text": " But if you do this in multiple rounds, let's say you first prune one of them."
},
{
"start": 723,
"end": 729,
"text": " So you would actually prune the smallest one, this one here."
},
{
"start": 729,
"end": 733,
"text": " And then you retrain and then your weights actually change."
},
{
"start": 733,
"end": 741,
"text": " And all of the responsibility that this weight carried before is now transferred onto this."
},
{
"start": 741,
"end": 745,
"text": " So your new weights look like this."
},
{
"start": 745,
"end": 747,
"text": " And you prune another one like this."
},
{
"start": 747,
"end": 753,
"text": " And again, all the responsibility of this would, in my hypothetical example, fall on this one."
},
{
"start": 753,
"end": 759,
"text": " And now if you prune a third one, you would actually prune this one because you realize this weight here,"
},
{
"start": 759,
"end": 763,
"text": " in absence of these two other weights, is actually important."
},
{
"start": 763,
"end": 765,
"text": " So you would prune this one as well."
},
{
"start": 765,
"end": 775,
"text": " So I think that is why this iterative pruning method might work a bit better than the one shot pruning method that they say here."
},
{
"start": 775,
"end": 779,
"text": " So they do a lot of empirical investigation."
},
{
"start": 779,
"end": 783,
"text": " And I just want to highlight very few of them."
},
{
"start": 783,
"end": 793,
"text": " So that you get the gist and then the paper goes into a lot of detail and a lot of different architectures that you can check out yourself."
},
{
"start": 793,
"end": 799,
"text": " So here we have a plot that deals with percent of weights remaining."
},
{
"start": 799,
"end": 807,
"text": " So as you go to the right here, they drop more and more weights and realize this is a log plot."
},
{
"start": 807,
"end": 817,
"text": " So if the dashed lines here are random pruning, which means you just drop out a certain number of weights and then you retrain."
},
{
"start": 817,
"end": 831,
"text": " And you can see that the dashed line here, it starts dropping and just becomes worse as you have less and less weights remaining,"
},
{
"start": 831,
"end": 833,
"text": " which is exactly what's expected."
},
{
"start": 833,
"end": 837,
"text": " You prune the network, you make it smaller, you make it less performant."
},
{
"start": 837,
"end": 843,
"text": " And the more weights you take away, the less performing it is."
},
{
"start": 843,
"end": 854,
"text": " But interestingly enough, if you do this pruning that they suggest and then retrain with the correct initialization,"
},
{
"start": 854,
"end": 863,
"text": " not only do you retain the same level of accuracy for very long, you see here this is 2.9 or 1.2 percent of weights remaining,"
},
{
"start": 863,
"end": 867,
"text": " but you actually go higher."
},
{
"start": 867,
"end": 879,
"text": " So you can see here when you have 16 percent of weights remaining, there's actually a significant difference between the full network and the prune network."
},
{
"start": 879,
"end": 884,
"text": " And that's only by simply training this winning hypothesis."
},
{
"start": 884,
"end": 887,
"text": " So this I find very, very fascinating."
},
{
"start": 887,
"end": 892,
"text": " And again, this is not a magic bullet that you can do from the beginning,"
},
{
"start": 892,
"end": 904,
"text": " but it does give a clue that if you could train these from the beginning, then you might actually end up at a better point."
},
{
"start": 904,
"end": 906,
"text": " So it does actually give a practical application."
},
{
"start": 906,
"end": 908,
"text": " Also, you see they train faster."
},
{
"start": 908,
"end": 913,
"text": " So the blue line here is the full network over the course of training."
},
{
"start": 913,
"end": 915,
"text": " Sorry, this should be blue."
},
{
"start": 915,
"end": 919,
"text": " So here is training iterations and this is test accuracy."
},
{
"start": 919,
"end": 922,
"text": " So you see the full network does something like this."
},
{
"start": 922,
"end": 929,
"text": " Now, if you prune to 20 percent of the weights, actually train faster and you go higher."
},
{
"start": 929,
"end": 934,
"text": " And even if you have 7 percent of the weights, you go almost as high."
},
{
"start": 934,
"end": 937,
"text": " So this is very interesting."
},
{
"start": 937,
"end": 948,
"text": " Only when you go to like 1.9 percent of the weights does your performance degrade again and eventually actually go lower than the original network."
},
{
"start": 948,
"end": 954,
"text": " So that is pretty, pretty, pretty cool, I think."
},
{
"start": 954,
"end": 958,
"text": " Now, as I said, they do a lot of investigation."
},
{
"start": 958,
"end": 965,
"text": " And I think one of the main takeaways is that it is not only the structure of the winning hypothesis."
},
{
"start": 965,
"end": 971,
"text": " So it's not only the structure of the sub network that makes it to be a winning hypothesis."
},
{
"start": 971,
"end": 974,
"text": " It is actually the initialization."
},
{
"start": 974,
"end": 978,
"text": " Here I want to show one of these plots."
},
{
"start": 978,
"end": 980,
"text": " They have lots of plots."
},
{
"start": 980,
"end": 987,
"text": " You can see here, for example, sorry, this is from my own annotations."
},
{
"start": 987,
"end": 994,
"text": " Again, this is percent of weights remaining and this is test accuracy at the final iteration."
},
{
"start": 994,
"end": 1001,
"text": " And if we initialize the sub network at its original position, like this method suggests, you see,"
},
{
"start": 1001,
"end": 1007,
"text": " we first increase the accuracy and then decrease it after a long time."
},
{
"start": 1007,
"end": 1018,
"text": " If we take the same sub network, right, but we randomly reinitialize it, then it drops much faster and actually immediately drops."
},
{
"start": 1018,
"end": 1025,
"text": " So it really is about not only the structure of the sub network, but about its initialization."
},
{
"start": 1025,
"end": 1029,
"text": " I think that is that is the core of the hypothesis here."
},
{
"start": 1029,
"end": 1039,
"text": " A very interesting related finding that I just want to mention, I find, to be that they actually discover that the weights,"
},
{
"start": 1039,
"end": 1048,
"text": " so if you have a weight of the, so if you have two kinds of weights, let's actually go up to my original drawing here."
},
{
"start": 1048,
"end": 1056,
"text": " If you compare how fast or how far do the weights travel in optimization space, right,"
},
{
"start": 1056,
"end": 1062,
"text": " so you can basically look at how far weights travel during optimization."
},
{
"start": 1062,
"end": 1074,
"text": " So you take the full neural network here and you look at a parameter that ends up being in the winning hypothesis, theta,"
},
{
"start": 1074,
"end": 1080,
"text": " theta zero, and it goes to theta end, which let's say theta final."
},
{
"start": 1080,
"end": 1086,
"text": " And you also look at parameters that don't end up in the winning hypothesis."
},
{
"start": 1086,
"end": 1091,
"text": " Let's call these theta one, two, theta, also final, prime."
},
{
"start": 1091,
"end": 1093,
"text": " I'm not too good at labeling."
},
{
"start": 1093,
"end": 1101,
"text": " And you look at how far they travel, you'll find that the weights that end up in the winning hypothesis,"
},
{
"start": 1101,
"end": 1110,
"text": " they, during optimization, they travel much further in optimization space than weights that are not in the winning hypothesis, right?"
},
{
"start": 1110,
"end": 1112,
"text": " They just stay around much more."
},
{
"start": 1112,
"end": 1117,
"text": " So it's not that the kind of good network is already contained in initialization."
},
{
"start": 1117,
"end": 1129,
"text": " It's much more than the good network lends itself very favorably to be initialized by SGD, right?"
},
{
"start": 1129,
"end": 1132,
"text": " Because it travels farther."
},
{
"start": 1132,
"end": 1137,
"text": " It means SGD has a bigger pull on it, right?"
},
{
"start": 1137,
"end": 1142,
"text": " I think there is a lot of things that are yet to be explored in this space,"
},
{
"start": 1142,
"end": 1148,
"text": " and I think this paper is a very cool contribution to our understanding of how neural networks work."
},
{
"start": 1148,
"end": 1150,
"text": " All right, I invite you to check out all the experiments."
},
{
"start": 1150,
"end": 1152,
"text": " They do a very thorough job."
},
{
"start": 1152,
"end": 1162,
"text": " And with that, I say bye bye."
}
] |
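The segments above describe the winning-ticket experiment: prune the trained network down to a given fraction of its weights, restart either from the original initialization or from a random re-initialization, and also measure how far individual weights travel during training. Below is a minimal numpy sketch of those measurements, assuming a simple global magnitude-pruning criterion; the array sizes and the criterion itself are illustrative guesses, not the paper's exact procedure.

```python
import numpy as np

def magnitude_mask(weights, keep_fraction):
    """Keep only the largest-magnitude weights (a simple global pruning criterion)."""
    threshold = np.quantile(np.abs(weights).ravel(), 1.0 - keep_fraction)
    return (np.abs(weights) >= threshold).astype(weights.dtype)

# Hypothetical weights at initialization and after training.
rng = np.random.default_rng(0)
theta_init = rng.normal(size=(256, 256))                              # saved at init
theta_final = theta_init + rng.normal(scale=0.5, size=(256, 256))     # after training

mask = magnitude_mask(theta_final, keep_fraction=0.2)   # e.g. keep 20% of the weights

# Winning-ticket restart: same sparse structure, original initial values.
ticket_original_init = mask * theta_init
# Control restart: same sparse structure, but randomly re-initialized values.
ticket_random_init = mask * rng.normal(size=theta_init.shape)

# The "weight travel" statistic mentioned above: how far surviving vs. pruned
# weights moved between initialization and the end of training.
travel = np.abs(theta_final - theta_init)
print("mean travel, kept weights   :", travel[mask == 1].mean())
print("mean travel, pruned weights :", travel[mask == 0].mean())
```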
-0aM99dMu_4 | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"reinforcement learning",
"deep rl",
"auxiliary",
"reward",
"distance",
"value function",
"shortest path",
"neural networks",
"maze",
"unsupervised",
"discovery",
"exploration"
] | DDL is an auxiliary task for an agent to learn distances between states in episodes. This can then be used further to improve the agent's policy learning procedure.
Paper: https://arxiv.org/abs/1907.08225
Blog: https://sites.google.com/view/dynamical-distance-learning/home
Abstract:
Reinforcement learning requires manual specification of a reward function to learn a task. While in principle this reward function only needs to specify the task goal, in practice reinforcement learning can be very time-consuming or even infeasible unless the reward function is shaped so as to provide a smooth gradient towards a successful outcome. This shaping is difficult to specify by hand, particularly when the task is learned from raw observations, such as images. In this paper, we study how we can automatically learn dynamical distances: a measure of the expected number of time steps to reach a given goal state from any other state. These dynamical distances can be used to provide well-shaped reward functions for reaching new goals, making it possible to learn complex tasks efficiently. We show that dynamical distances can be used in a semi-supervised regime, where unsupervised interaction with the environment is used to learn the dynamical distances, while a small amount of preference supervision is used to determine the task goal, without any manually engineered reward function or goal examples. We evaluate our method both on a real-world robot and in simulation. We show that our method can learn to turn a valve with a real-world 9-DoF hand, using raw image observations and just ten preference labels, without any other supervision. Videos of the learned skills can be found on the project website: this https URL.
Authors: Kristian Hartikainen, Xinyang Geng, Tuomas Haarnoja, Sergey Levine
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there! If you look at this robot, this robot has learned to turn this valve by itself. Now by itself isn't really correct, but it has learned it in a semi-supervised way with only 10 human inputs along the entire learning trajectory. So only 10 times was there a true reward for this reinforcement learning procedure and the rest is unsupervised discovery of this skill. And the paper we're going to look at today and the technique by which this was achieved is dynamical distance learning for semi-supervised and unsupervised skill discovery by Kristian Hartikeinen, Xin Yang Geng, Thomas Harnoja and Sergei Levine. So this is a technique for reinforcement learning. So they claim reinforcement learning requires manual specification of a reward function to learn a task. And they say while in principle this reward function only needs to specify the task goal, in practice reinforcement learning can be very time-consuming or even infeasible unless the reward function is shaped so as to provide a smooth gradient towards a successful outcome. So what does this mean? Let's look at it. So if you want the robot here to turn the valve to the right, ideally you simply want to say, so the robot is here, this is the start state, ideally you just want to say I want this, I want the thing to be at the right, so this is good. All of this I don't want any of that. And the reinforcement learning, I mean this is enough, this is a reward function, all of this is zero and this is one. This is a reward function and in theory if you apply any sort of reinforcement learning algorithm with any sort of guarantee, this should get you there. But of course we all know that it's not that easy, right? There is basically an exploration bottleneck where your robot has these three digits and lots of joints to move around and the probability that by itself it discovered that it needs to do this here and get this reward is very very slim. So what you want to do is in your reward function that you're providing to the robot, you would want to say okay so this here I see the blue thing is a bit more turned so I'm maybe going to give this a 0.1 and then here it's a bit more turned so maybe this is 0.2 and this I really like 0.3 here is 0.6 maybe because it's even more right and then one at the end right so this is what they would call a smooth gradient in the reward function where it's kind of the reward function ramps up until the goal is reached but oftentimes this isn't really possible because if you already knew how exactly to do the task which then you could you can only shape the reward function truly if you know how to perform the task in the first hand and then why exactly do you do reinforcement learning except for as an academic exercise. So the issue this paper has is clear right? What they want to say is like let's assume that your reward function is actually pretty bad can we provide artificially a way that this discovery of these of these what they call of these new skills is facilitated as if the reward function had some sort of a gradient. So that's the the outset let's actually go back to the to this for a second and they have these mazes as a kind of an example. So if you look at these mazes what we want to keep in mind is let's actually draw this over here. 
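Before the maze example, here is a tiny sketch of the sparse-versus-shaped reward contrast just described for the valve task: a sparse reward that only fires once the valve reaches the target angle, and a hand-shaped reward that ramps up with progress, like the 0.1 / 0.2 / 0.6 / 1.0 example above. The angles, the target value and the linear ramp are invented purely for illustration.

```python
def sparse_reward(valve_angle, target=3.14):
    """1 only once the valve has actually reached the target angle, 0 otherwise."""
    return 1.0 if valve_angle >= target else 0.0

def shaped_reward(valve_angle, target=3.14):
    """Hand-crafted smooth gradient: partial credit for partial progress."""
    return max(0.0, min(valve_angle / target, 1.0))

for angle in [0.0, 0.6, 1.5, 2.5, 3.14]:
    print(f"angle={angle:4.2f}  sparse={sparse_reward(angle):.1f}  shaped={shaped_reward(angle):.2f}")
```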
So let's say you have one of these mazes right and always there is a start state so you're here and then there is a goal state right let's say over here and the task is you can move up down left right and the task is to reach the goal right but if the reward function is simply that if you reach the goal you get a reward of one and otherwise you get a reward of zero then all the agent can do is kind of explore around right until it reaches the goal. Now if you do random exploration like a lot of reinforcement learning algorithms for example Q learning or policy gradient they'll have some sort of a just of a random exploration element where they if they don't if they don't absent of what they of the when they know what to do they just kind of boogle around like up up up right left right left down right up that doesn't work okay down down left down so it's sort of and then up again right and then they just kind of wonk around so this this method takes issue with that and it says okay while the agent is doing its thing trying to reach the goal right what we can do is we can learn a distance function between states now we'll reduce the problem for now and just say the task is always that the goal state is reached in the shortest amount of steps right so let's say the agent does something right it goes here here here here and then here right it that's that's one rollout of the policy and then it crashes into a wall okay that's bad so it gets a negative reward right but in addition to that we can we can learn so it has visited all of these states here in between right these are intermediate states this paper wants us now to to learn a distance function between the states so this distance function let's call it D it learns how far two states are away so it'll you can you can tell it okay this state here let's call that state a and this state here state B how far are those away now this is not super well defined yet but you want to say how far are they away for this agent here so the agent has maybe policy pi like that's what it used to create this trajectory under policy pi how far away are states a and B and how far away is simply going to be the amount of steps that it takes the agent to go from a to B so in this case that would be two right so the the and you can do this between any two states right this and this this and this right here here these all of these states you can also start from this state right let's do it in a different color and do every so the the this distance function D can actually has a pretty tight reward signal like a pretty wealth of information that it can learn these things from right that so the policy pi in this case can't learn much because it just got a reward of zero or something because it didn't reach the goal but the distance function has very very big reward or a big rework it has a very dense reward signal where it can learn distances between two states right now let's say we've explored a bunch right a bunch we've had many trajectories some here like to here and then here sometimes we even reach the goal right so so sometimes we actually reach the goal so we learn the two distances between all of the states now if we had a perfect distance function let's assume we have a perfect distance function our task now becomes very very simple so let's assume that's so let's assume I am here where the green agent is and I have these I can either go up or down and let's go that's up let's say that's X and the down is Y right which one should I choose now without even 
asking my policy per se what I can do is I can ask hey distance function so I can ask the distance function two different things so first let's do it like this distance function what do you think of the distance between X to the goal and what do you think of the distance from Y to the goal and the distance function if it's learned correctly it will tell you the distance of X to the goal is whatever maybe you need eight steps the distance of white the goal is ten steps right so definitely you would go with X right so if you had a good distance function then you could solve the task fairly fairly easily now this by itself isn't super interesting you will quickly notice that if you are able to learn such a good distance function especially with the goal state here then you might as well learn a good policy because that means you've reached the goal a fair number of times right so that the kind of information theoretic signal of D versus the signal on pi if you just want to reach the same goal to me it seems the same this this paper it tries to talk this up I feel but to me if you are in the situation where you have a fixed goal and that's it then this doesn't seem too interesting or too beneficial with compared to let's say just learning a value function right like you would do in a 3c or something the difference between this and a value function so if if if the number of steps is actually your reward so your negative reward you want to reach the goal in the shortest amount of time then learning a value function is the same the difference is for a value function the value function simply takes a state s right and the policy pi while the distance function takes a state s and a goal state for the policy pi the goal state for the value function is implicit right so it implicitly has the goal state because you assume that the goal is always the same with the distance function you can technically change your goal and this is where it becomes interesting so let's say you've explored but you haven't quite reached the goal yet right but we said okay most of these are algorithms they have some sort of some notion of random of random exploration right in order to to reach the goal what if you went from here to here and to here and to here and you learn the distances fairly well for the trajectories that you can do but you just haven't been able to go any further what you can say is you can go to your replay buffer write your memory of everything you've done and you can ask which of these states has the furthest distance from my starting state and the answer will be okay this state here as the furthest distance so now what you can do is you can make this your goal right you can just try to reach that state right and once you reach the state you can explore from that state right because this is the farthest away from your original starting state that probably means that you know if you that's kind of the frontier of what you know so if you explore from here you can go even further noticeably because it is the farthest that you know so it might turn out that from here you can only go back right so that's a possibility but probably you could go even further right so then you go further and you might reach this state here right and again you ask your your replay buffer it tells you this state here is the farthest so far so you take this as your new goal and now you're just trying to reach that and explore from here this is extremely similar to an algorithm like go explorer that I already made a video about 
where it remembers what it did and then it it will always travel to the furthest states it has seen so far and then from there try to go farther right so this this if you if you can learn a good distance function here that will help you in exploring the space and eventually of course you think you might actually reach this goal state so you might go far enough into in this maze you might explore it enough such that you you stumble over the goal state by itself alright so this is this is sort of the the goal this can be used in a number of different ways now instead of always going for the furthest what they did in the robot is they just let the algorithm explore right you explore explore explore if this is like a state tree and then at some point it it asked the human which one is closest to what you want and then the human says this one and then they say okay cool so this is now the new goal right so we'll try to reach this as much as possible and then explore from here right so this in the case of the robot the robot simply just like does some things it explores in in the unsupervised manner and then at some point you ask the human which of these things that the robot has done you like the most and then that becomes the new intermediate goal state and the algorithm explores from there right so that's the the main gist and how you can use this now the entire learning thing is actually pretty simple so what they propose is simply to to learn the distance function that they put it pretty formal here they say okay if you're two states that were visited after one another in an episode then you can define the distance function as the sum from i to j if if the they were visited at time steps I and J respectively this is a discounted cost function across this but ultimately they consider problems where it's shortest path problems so the cost function simply becomes how many steps does it take you to reach to reach the goal so the cost function so this becomes this this becomes the identity I guess you can you can set it to to one and this you can also set to one so this simply becomes J minus I how many steps does it take you to reach state state in time step J from the state you visited in time step I and then they simply train a pot a neural network or I'm not even sure if it's a neural network but you train a bunch of a parameterized function that learns to map the distance between these states to how many steps it took you from one to the other right and you do this simply by having by regressing so mean squared regression mean squared loss regression simple as that and that's how you learn the distance function and then you can use the distance function in the ways we discussed to either to improve your shortest path policy by giving it by providing it so what you want to do is you want to provide the distance function as the negative reward right so they say they they they provide the distance function as a negative reward for this or you can do this in an unsupervised fashion where you always propose the furthest away goals or you can do this in the semi supervised fashion so they have a bunch of things that they did here they have a bunch of videos of things that they trained this is from the human sorry from the semi supervised where the humans were simply selecting the hoppers that went furthest to the right and you can see over time this hops to the right with very very sparse input only so this is semi supervised right and then it goes to the right and it also has an unsupervised 
video where you simply let it perform and it on in unsupervised fashion it tries to discover states that are as far away as possible from its initial states and you can see it actually learns to move to the right and to the left because these are these rich states that are very far from its original state right so that's it's pretty cool that it turns out that the unsupervised method will discover such states alright so what to make of this this if you recognize this already it's very plausible because I had seen this some sort of this idea in many many papers before so and they make some connections in their related work so if you know for example universal value functions sorry universal value estimation universal value functions and so on where basically it's also an unsupervised way where you always just you'd select two states you say this and this agent now try try to go from here to here right just try that and so it is and then you select two new states so you basically teach your agent to go between two states that you choose at random and it's supposed to in an unsupervised fashion learn something about the environment very similar to what we have here right also a bunch of other a bunch of other things like just pure value functions are also pretty similar I think to this go explore there is a big connection to go explore so this has been around in one way or the other but possibly not in this specific formulation and what I think is cool applied to this specific semi supervised task so if I had to formulate a criticism to this method I would guess that it probably doesn't work when let's say the branching factor of the task is super high you see here you can you can only really turn the valve in one way or another of course the digits and the joints are are they have they have degrees of freedom but if you think if the branching factor is super high right so from a from a given state here you can go in many many many different ways and then from each of those you can go in many many different ways right then the the notion of something being far away right you go to this thing and use what's the farthest away all right is is almost meaningless because you have so much not explored right so if you have if you are three steps deep here right it will always tell you well this state here is the farthest away but you haven't explored these you know 15 directions here right so it might be that you actually miss so that you you go so here's the goal and here's the start and you go a long way but you miss this obvious shortcut here because you always want to go along the longest path around so it seems like there is there there are probably environments where this works well right but they're right but but it appears that if if either the branching factor is super high or if there are maybe this this kind of loops in the game loops between states non obvious combinatorial things it might be somewhat even counterproductive sometimes not not sure about that but it seems to be very specific environments where this would work all right so this was my commentary I invite you to read the paper check it out and bye bye | [
{
"start": 0,
"end": 7.34,
"text": " Hi there! If you look at this robot, this robot has learned to turn this valve by"
},
{
"start": 7.34,
"end": 12.200000000000001,
"text": " itself. Now by itself isn't really correct, but it has learned it in a"
},
{
"start": 12.200000000000001,
"end": 17.52,
"text": " semi-supervised way with only 10 human inputs along the entire learning"
},
{
"start": 17.52,
"end": 23.68,
"text": " trajectory. So only 10 times was there a true reward for this reinforcement"
},
{
"start": 23.68,
"end": 28.92,
"text": " learning procedure and the rest is unsupervised discovery of this skill."
},
{
"start": 28.92,
"end": 33.400000000000006,
"text": " And the paper we're going to look at today and the technique by which this was"
},
{
"start": 33.400000000000006,
"end": 38.84,
"text": " achieved is dynamical distance learning for semi-supervised and unsupervised"
},
{
"start": 38.84,
"end": 46.08,
"text": " skill discovery by Kristian Hartikeinen, Xin Yang Geng, Thomas Harnoja and Sergei"
},
{
"start": 46.08,
"end": 53.2,
"text": " Levine. So this is a technique for reinforcement learning. So they claim"
},
{
"start": 53.2,
"end": 58.72,
"text": " reinforcement learning requires manual specification of a reward function to"
},
{
"start": 58.72,
"end": 64.44,
"text": " learn a task. And they say while in principle this reward function only"
},
{
"start": 64.44,
"end": 70.03999999999999,
"text": " needs to specify the task goal, in practice reinforcement learning can be"
},
{
"start": 70.03999999999999,
"end": 75.68,
"text": " very time-consuming or even infeasible unless the reward function is shaped so"
},
{
"start": 75.68,
"end": 80.24,
"text": " as to provide a smooth gradient towards a successful outcome. So what does this"
},
{
"start": 80.24,
"end": 85.44,
"text": " mean? Let's look at it. So if you want the robot here to turn the valve to the"
},
{
"start": 85.44,
"end": 92.03999999999999,
"text": " right, ideally you simply want to say, so the robot is here, this is the"
},
{
"start": 92.03999999999999,
"end": 97.75999999999999,
"text": " start state, ideally you just want to say I want this, I want the"
},
{
"start": 97.75999999999999,
"end": 103.08,
"text": " thing to be at the right, so this is good. All of this I don't"
},
{
"start": 103.08,
"end": 109.68,
"text": " want any of that. And the reinforcement learning, I mean this"
},
{
"start": 109.68,
"end": 114.88,
"text": " is enough, this is a reward function, all of this is zero and this is one."
},
{
"start": 114.88,
"end": 121.03999999999999,
"text": " This is a reward function and in theory if you apply any sort of reinforcement"
},
{
"start": 121.03999999999999,
"end": 125.08,
"text": " learning algorithm with any sort of guarantee, this should get you there. But"
},
{
"start": 125.08,
"end": 129.96,
"text": " of course we all know that it's not that easy, right? There is basically an"
},
{
"start": 129.96,
"end": 138.28,
"text": " exploration bottleneck where your robot has these three digits and lots of"
},
{
"start": 138.28,
"end": 145.08,
"text": " joints to move around and the probability that by itself it discovered that"
},
{
"start": 145.08,
"end": 150.16,
"text": " it needs to do this here and get this reward is very very slim. So what you"
},
{
"start": 150.16,
"end": 154.72,
"text": " want to do is in your reward function that you're providing to the robot, you"
},
{
"start": 154.72,
"end": 161.44,
"text": " would want to say okay so this here I see the blue thing is a bit more"
},
{
"start": 161.44,
"end": 166.4,
"text": " turned so I'm maybe going to give this a 0.1 and then here it's a bit more"
},
{
"start": 166.4,
"end": 173.12,
"text": " turned so maybe this is 0.2 and this I really like 0.3 here is 0.6 maybe"
},
{
"start": 173.12,
"end": 179.52,
"text": " because it's even more right and then one at the end right so this is what"
},
{
"start": 179.52,
"end": 185.48000000000002,
"text": " they would call a smooth gradient in the reward function where it's kind of the"
},
{
"start": 185.48000000000002,
"end": 191.24,
"text": " reward function ramps up until the goal is reached but oftentimes this isn't"
},
{
"start": 191.24,
"end": 198.56,
"text": " really possible because if you already knew how exactly to do the task which"
},
{
"start": 198.56,
"end": 202.84,
"text": " then you could you can only shape the reward function truly if you know how to"
},
{
"start": 202.84,
"end": 208.44,
"text": " perform the task in the first hand and then why exactly do you do reinforcement"
},
{
"start": 208.44,
"end": 215.24,
"text": " learning except for as an academic exercise. So the issue this paper has"
},
{
"start": 215.24,
"end": 220.68,
"text": " is clear right? What they want to say is like let's assume that your"
},
{
"start": 220.68,
"end": 226.6,
"text": " reward function is actually pretty bad can we provide artificially a way that"
},
{
"start": 226.6,
"end": 232.8,
"text": " this discovery of these of these what they call of these new skills is"
},
{
"start": 232.8,
"end": 240.16,
"text": " facilitated as if the reward function had some sort of a gradient. So that's"
},
{
"start": 240.16,
"end": 248.16,
"text": " the the outset let's actually go back to the to this for a second and they have"
},
{
"start": 248.16,
"end": 254.84,
"text": " these mazes as a kind of an example. So if you look at these mazes what we want"
},
{
"start": 254.84,
"end": 261.76,
"text": " to keep in mind is let's actually draw this over here. So let's say you have one"
},
{
"start": 261.76,
"end": 271.8,
"text": " of these mazes right and always there is a start state so you're here and"
},
{
"start": 271.8,
"end": 277.68,
"text": " then there is a goal state right let's say over here and the task is you"
},
{
"start": 277.68,
"end": 283.92,
"text": " can move up down left right and the task is to reach the goal right but if the"
},
{
"start": 283.92,
"end": 287.04,
"text": " reward function is simply that if you reach the goal you get a reward of one"
},
{
"start": 287.04,
"end": 291.40000000000003,
"text": " and otherwise you get a reward of zero then all the agent can do is kind of"
},
{
"start": 291.40000000000003,
"end": 297.92,
"text": " explore around right until it reaches the goal. Now if you do random"
},
{
"start": 297.92,
"end": 302.48,
"text": " exploration like a lot of reinforcement learning algorithms for"
},
{
"start": 302.48,
"end": 306.6,
"text": " example Q learning or policy gradient they'll have some sort of a just of a"
},
{
"start": 306.6,
"end": 312.08000000000004,
"text": " random exploration element where they if they don't if they don't absent of what"
},
{
"start": 312.08000000000004,
"end": 317.68,
"text": " they of the when they know what to do they just kind of boogle around like up"
},
{
"start": 317.68,
"end": 326,
"text": " up up right left right left down right up that doesn't work okay down down left"
},
{
"start": 326,
"end": 332.20000000000005,
"text": " down so it's sort of and then up again right and then they just kind of wonk"
},
{
"start": 332.2,
"end": 340.32,
"text": " around so this this method takes issue with that and it says okay while the"
},
{
"start": 340.32,
"end": 346.88,
"text": " agent is doing its thing trying to reach the goal right what we can do is we can"
},
{
"start": 346.88,
"end": 352.71999999999997,
"text": " learn a distance function between states now we'll reduce the problem for now and"
},
{
"start": 352.71999999999997,
"end": 358.88,
"text": " just say the task is always that the goal state is reached in the shortest"
},
{
"start": 358.88,
"end": 366.4,
"text": " amount of steps right so let's say the agent does something right it goes here"
},
{
"start": 366.4,
"end": 372.32,
"text": " here here here and then here right it that's that's one rollout of the policy"
},
{
"start": 372.32,
"end": 376.68,
"text": " and then it crashes into a wall okay that's bad so it gets a negative reward"
},
{
"start": 376.68,
"end": 382.12,
"text": " right but in addition to that we can we can learn so it has visited all of these"
},
{
"start": 382.12,
"end": 388.68,
"text": " states here in between right these are intermediate states this paper wants us"
},
{
"start": 388.68,
"end": 395.36,
"text": " now to to learn a distance function between the states so this distance"
},
{
"start": 395.36,
"end": 404.94,
"text": " function let's call it D it learns how far two states are away so it'll you can"
},
{
"start": 404.94,
"end": 410.16,
"text": " you can tell it okay this state here let's call that state a and this state"
},
{
"start": 410.16,
"end": 417.72,
"text": " here state B how far are those away now this is not super well defined yet but"
},
{
"start": 417.72,
"end": 422.72,
"text": " you want to say how far are they away for this agent here so the agent has"
},
{
"start": 422.72,
"end": 428.20000000000005,
"text": " maybe policy pi like that's what it used to create this trajectory under policy"
},
{
"start": 428.20000000000005,
"end": 435.76000000000005,
"text": " pi how far away are states a and B and how far away is simply going to be the"
},
{
"start": 435.76000000000005,
"end": 444.96000000000004,
"text": " amount of steps that it takes the agent to go from a to B so in this case that"
},
{
"start": 444.96,
"end": 451.91999999999996,
"text": " would be two right so the the and you can do this between any two states right"
},
{
"start": 451.91999999999996,
"end": 458.08,
"text": " this and this this and this right here here these all of these states you can"
},
{
"start": 458.08,
"end": 463.76,
"text": " also start from this state right let's do it in a different color and do every"
},
{
"start": 463.76,
"end": 469.35999999999996,
"text": " so the the this distance function D can actually has a pretty tight reward"
},
{
"start": 469.35999999999996,
"end": 473.67999999999995,
"text": " signal like a pretty wealth of information that it can learn these"
},
{
"start": 473.68,
"end": 478.6,
"text": " things from right that so the policy pi in this case can't learn much because it"
},
{
"start": 478.6,
"end": 484.12,
"text": " just got a reward of zero or something because it didn't reach the goal but the"
},
{
"start": 484.12,
"end": 490.6,
"text": " distance function has very very big reward or a big rework it has a very"
},
{
"start": 490.6,
"end": 496.44,
"text": " dense reward signal where it can learn distances between two states right now"
},
{
"start": 496.44,
"end": 503.66,
"text": " let's say we've explored a bunch right a bunch we've had many trajectories some"
},
{
"start": 503.66,
"end": 508.96000000000004,
"text": " here like to here and then here sometimes we even reach the goal right"
},
{
"start": 508.96000000000004,
"end": 513.84,
"text": " so so sometimes we actually reach the goal so we learn the two distances"
},
{
"start": 513.84,
"end": 520.76,
"text": " between all of the states now if we had a perfect distance function let's assume"
},
{
"start": 520.76,
"end": 527.1600000000001,
"text": " we have a perfect distance function our task now becomes very very simple so"
},
{
"start": 527.16,
"end": 535.56,
"text": " let's assume that's so let's assume I am here where the green agent is and I have"
},
{
"start": 535.56,
"end": 540.7199999999999,
"text": " these I can either go up or down and let's go that's up let's say that's X"
},
{
"start": 540.7199999999999,
"end": 547.16,
"text": " and the down is Y right which one should I choose now without even asking my"
},
{
"start": 547.16,
"end": 555.76,
"text": " policy per se what I can do is I can ask hey distance function so I can ask the"
},
{
"start": 555.76,
"end": 565.48,
"text": " distance function two different things so first let's do it like this distance"
},
{
"start": 565.48,
"end": 570.16,
"text": " function what do you think of the distance between X to the goal and what"
},
{
"start": 570.16,
"end": 574.52,
"text": " do you think of the distance from Y to the goal and the distance function if"
},
{
"start": 574.52,
"end": 578.28,
"text": " it's learned correctly it will tell you the distance of X to the goal is"
},
{
"start": 578.28,
"end": 585.08,
"text": " whatever maybe you need eight steps the distance of white the goal is ten steps"
},
{
"start": 585.08,
"end": 592.1600000000001,
"text": " right so definitely you would go with X right so if you had a good distance"
},
{
"start": 592.1600000000001,
"end": 599.24,
"text": " function then you could solve the task fairly fairly easily now this by itself"
},
{
"start": 599.24,
"end": 604.44,
"text": " isn't super interesting you will quickly notice that if you are able to learn"
},
{
"start": 604.44,
"end": 609.08,
"text": " such a good distance function especially with the goal state here then you might"
},
{
"start": 609.08,
"end": 614.1600000000001,
"text": " as well learn a good policy because that means you've reached the goal a fair"
},
{
"start": 614.16,
"end": 619.9599999999999,
"text": " number of times right so that the kind of information theoretic signal of D"
},
{
"start": 619.9599999999999,
"end": 625.6,
"text": " versus the signal on pi if you just want to reach the same goal to me it seems"
},
{
"start": 625.6,
"end": 632,
"text": " the same this this paper it tries to talk this up I feel but to me if you are"
},
{
"start": 632,
"end": 637.24,
"text": " in the situation where you have a fixed goal and that's it then this doesn't"
},
{
"start": 637.24,
"end": 647.24,
"text": " seem too interesting or too beneficial with compared to let's say just learning"
},
{
"start": 647.24,
"end": 652.8,
"text": " a value function right like you would do in a 3c or something the difference"
},
{
"start": 652.8,
"end": 659.5600000000001,
"text": " between this and a value function so if if if the number of steps is actually"
},
{
"start": 659.5600000000001,
"end": 662.64,
"text": " your reward so your negative reward you want to reach the goal in the shortest"
},
{
"start": 662.64,
"end": 670.92,
"text": " amount of time then learning a value function is the same the difference is"
},
{
"start": 670.92,
"end": 676.24,
"text": " for a value function the value function simply takes a state s right and the"
},
{
"start": 676.24,
"end": 683.36,
"text": " policy pi while the distance function takes a state s and a goal state for the"
},
{
"start": 683.36,
"end": 689.52,
"text": " policy pi the goal state for the value function is implicit right so it"
},
{
"start": 689.52,
"end": 693.36,
"text": " implicitly has the goal state because you assume that the goal is always the"
},
{
"start": 693.36,
"end": 700.16,
"text": " same with the distance function you can technically change your goal and this is"
},
{
"start": 700.16,
"end": 706.4399999999999,
"text": " where it becomes interesting so let's say you've explored but you haven't"
},
{
"start": 706.4399999999999,
"end": 712.4399999999999,
"text": " quite reached the goal yet right but we said okay most of these are algorithms"
},
{
"start": 712.4399999999999,
"end": 718.14,
"text": " they have some sort of some notion of random of random exploration right in"
},
{
"start": 718.14,
"end": 725.04,
"text": " order to to reach the goal what if you went from here to here and to here and"
},
{
"start": 725.04,
"end": 729.24,
"text": " to here and you learn the distances fairly well for the trajectories that"
},
{
"start": 729.24,
"end": 733.5,
"text": " you can do but you just haven't been able to go any further what you can say"
},
{
"start": 733.5,
"end": 737.12,
"text": " is you can go to your replay buffer write your memory of everything you've"
},
{
"start": 737.12,
"end": 744.3199999999999,
"text": " done and you can ask which of these states has the furthest distance from my"
},
{
"start": 744.32,
"end": 749.5600000000001,
"text": " starting state and the answer will be okay this state here as the furthest"
},
{
"start": 749.5600000000001,
"end": 755.84,
"text": " distance so now what you can do is you can make this your goal right you can"
},
{
"start": 755.84,
"end": 763.08,
"text": " just try to reach that state right and once you reach the state you can explore"
},
{
"start": 763.08,
"end": 767.44,
"text": " from that state right because this is the farthest away from your original"
},
{
"start": 767.44,
"end": 772.72,
"text": " starting state that probably means that you know if you that's kind of the"
},
{
"start": 772.72,
"end": 776.6800000000001,
"text": " frontier of what you know so if you explore from here you can go even"
},
{
"start": 776.6800000000001,
"end": 781.8000000000001,
"text": " further noticeably because it is the farthest that you know so it might turn"
},
{
"start": 781.8000000000001,
"end": 786.2,
"text": " out that from here you can only go back right so that's a possibility but"
},
{
"start": 786.2,
"end": 791.84,
"text": " probably you could go even further right so then you go further and you might"
},
{
"start": 791.84,
"end": 797.12,
"text": " reach this state here right and again you ask your your replay buffer it tells"
},
{
"start": 797.12,
"end": 801.1800000000001,
"text": " you this state here is the farthest so far so you take this as your new goal"
},
{
"start": 801.18,
"end": 806.2399999999999,
"text": " and now you're just trying to reach that and explore from here this is extremely"
},
{
"start": 806.2399999999999,
"end": 812.3199999999999,
"text": " similar to an algorithm like go explorer that I already made a video about where"
},
{
"start": 812.3199999999999,
"end": 817.7199999999999,
"text": " it remembers what it did and then it it will always travel to the furthest"
},
{
"start": 817.7199999999999,
"end": 823.64,
"text": " states it has seen so far and then from there try to go farther right so this"
},
{
"start": 823.64,
"end": 829.4,
"text": " this if you if you can learn a good distance function here that will help"
},
{
"start": 829.4,
"end": 834.4399999999999,
"text": " you in exploring the space and eventually of course you think you might"
},
{
"start": 834.4399999999999,
"end": 839.24,
"text": " actually reach this goal state so you might go far enough into in this maze"
},
{
"start": 839.24,
"end": 845.4,
"text": " you might explore it enough such that you you stumble over the goal state by"
},
{
"start": 845.4,
"end": 851.88,
"text": " itself alright so this is this is sort of the the goal this can be used in a"
},
{
"start": 851.88,
"end": 855.88,
"text": " number of different ways now instead of always going for the furthest what they"
},
{
"start": 855.88,
"end": 861.6,
"text": " did in the robot is they just let the algorithm explore right you explore"
},
{
"start": 861.6,
"end": 867.88,
"text": " explore explore if this is like a state tree and then at some point it it asked"
},
{
"start": 867.88,
"end": 873.48,
"text": " the human which one is closest to what you want and then the human says this"
},
{
"start": 873.48,
"end": 880.52,
"text": " one and then they say okay cool so this is now the new goal right so we'll try"
},
{
"start": 880.52,
"end": 886.6,
"text": " to reach this as much as possible and then explore from here right so this in"
},
{
"start": 886.6,
"end": 893.48,
"text": " the case of the robot the robot simply just like does some things it explores"
},
{
"start": 893.48,
"end": 897.52,
"text": " in in the unsupervised manner and then at some point you ask the human which of"
},
{
"start": 897.52,
"end": 901.68,
"text": " these things that the robot has done you like the most and then that becomes the"
},
{
"start": 901.68,
"end": 908.28,
"text": " new intermediate goal state and the algorithm explores from there right so"
},
{
"start": 908.28,
"end": 916.24,
"text": " that's the the main gist and how you can use this now the entire learning thing"
},
{
"start": 916.24,
"end": 922.3199999999999,
"text": " is actually pretty simple so what they propose is simply to to learn the"
},
{
"start": 922.3199999999999,
"end": 925.68,
"text": " distance function that they put it pretty formal here they say okay if"
},
{
"start": 925.68,
"end": 931.28,
"text": " you're two states that were visited after one another in an episode then you"
},
{
"start": 931.28,
"end": 938.4,
"text": " can define the distance function as the sum from i to j if if the they were"
},
{
"start": 938.4,
"end": 944.24,
"text": " visited at time steps I and J respectively this is a discounted cost"
},
{
"start": 944.24,
"end": 950.24,
"text": " function across this but ultimately they consider problems where it's shortest"
},
{
"start": 950.24,
"end": 954.56,
"text": " path problems so the cost function simply becomes how many steps does it"
},
{
"start": 954.56,
"end": 962.8399999999999,
"text": " take you to reach to reach the goal so the cost function so this becomes this"
},
{
"start": 962.8399999999999,
"end": 968.28,
"text": " this becomes the identity I guess you can you can set it to to one and this"
},
{
"start": 968.28,
"end": 974.8399999999999,
"text": " you can also set to one so this simply becomes J minus I how many steps does"
},
{
"start": 974.8399999999999,
"end": 981.4,
"text": " it take you to reach state state in time step J from the state you visited in"
},
{
"start": 981.4,
"end": 989.48,
"text": " time step I and then they simply train a pot a neural network or I'm not even"
},
{
"start": 989.48,
"end": 992.48,
"text": " sure if it's a neural network but you train a bunch of a parameterized"
},
{
"start": 992.48,
"end": 1000.48,
"text": " function that learns to map the distance between these states to how many steps"
},
{
"start": 1000.48,
"end": 1007.1999999999999,
"text": " it took you from one to the other right and you do this simply by having by"
},
{
"start": 1007.2,
"end": 1015.9200000000001,
"text": " regressing so mean squared regression mean squared loss regression simple as"
},
{
"start": 1015.9200000000001,
"end": 1019.44,
"text": " that and that's how you learn the distance function and then you can use"
},
{
"start": 1019.44,
"end": 1023.2800000000001,
"text": " the distance function in the ways we discussed to either to improve your"
},
{
"start": 1023.2800000000001,
"end": 1030.72,
"text": " shortest path policy by giving it by providing it so what you want to do is"
},
{
"start": 1030.72,
"end": 1037.1200000000001,
"text": " you want to provide the distance function as the negative reward right so"
},
{
"start": 1037.12,
"end": 1042.3999999999999,
"text": " they say they they they provide the distance function as a negative reward"
},
{
"start": 1042.3999999999999,
"end": 1046.9199999999998,
"text": " for this or you can do this in an unsupervised fashion where you always"
},
{
"start": 1046.9199999999998,
"end": 1051.28,
"text": " propose the furthest away goals or you can do this in the semi supervised"
},
{
"start": 1051.28,
"end": 1057.8799999999999,
"text": " fashion so they have a bunch of things that they did here they have a bunch of"
},
{
"start": 1057.8799999999999,
"end": 1065.2399999999998,
"text": " videos of things that they trained this is from the human sorry from the semi"
},
{
"start": 1065.24,
"end": 1072,
"text": " supervised where the humans were simply selecting the hoppers that went furthest"
},
{
"start": 1072,
"end": 1079.8,
"text": " to the right and you can see over time this hops to the right with very very"
},
{
"start": 1079.8,
"end": 1085.04,
"text": " sparse input only so this is semi supervised right and then it goes to the"
},
{
"start": 1085.04,
"end": 1094.24,
"text": " right and it also has an unsupervised video where you simply let it perform"
},
{
"start": 1094.24,
"end": 1100.92,
"text": " and it on in unsupervised fashion it tries to discover states that are as far"
},
{
"start": 1100.92,
"end": 1106.28,
"text": " away as possible from its initial states and you can see it actually learns to"
},
{
"start": 1106.28,
"end": 1112.96,
"text": " move to the right and to the left because these are these rich states that"
},
{
"start": 1112.96,
"end": 1117.72,
"text": " are very far from its original state right so that's it's pretty cool that it"
},
{
"start": 1117.72,
"end": 1125.48,
"text": " turns out that the unsupervised method will discover such states alright so"
},
{
"start": 1125.48,
"end": 1131.76,
"text": " what to make of this this if you recognize this already it's very"
},
{
"start": 1131.76,
"end": 1140.16,
"text": " plausible because I had seen this some sort of this idea in many many papers"
},
{
"start": 1140.16,
"end": 1144.8,
"text": " before so and they make some connections in their related work so if you know for"
},
{
"start": 1144.8,
"end": 1152.9199999999998,
"text": " example universal value functions sorry universal value estimation universal"
},
{
"start": 1152.9199999999998,
"end": 1159.24,
"text": " value functions and so on where basically it's also an unsupervised way"
},
{
"start": 1159.24,
"end": 1163.96,
"text": " where you always just you'd select two states you say this and this agent now"
},
{
"start": 1163.96,
"end": 1172.36,
"text": " try try to go from here to here right just try that and so it is and then you"
},
{
"start": 1172.36,
"end": 1177.6,
"text": " select two new states so you basically teach your agent to go between two"
},
{
"start": 1177.6,
"end": 1182.6399999999999,
"text": " states that you choose at random and it's supposed to in an unsupervised"
},
{
"start": 1182.6399999999999,
"end": 1186.84,
"text": " fashion learn something about the environment very similar to what we have"
},
{
"start": 1186.84,
"end": 1191.8,
"text": " here right also a bunch of other a bunch of other things like just pure value"
},
{
"start": 1191.8,
"end": 1197.28,
"text": " functions are also pretty similar I think to this go explore there is a big"
},
{
"start": 1197.28,
"end": 1202,
"text": " connection to go explore so this has been around in one way or the other but"
},
{
"start": 1202,
"end": 1207.24,
"text": " possibly not in this specific formulation and what I think is cool"
},
{
"start": 1207.24,
"end": 1216.4,
"text": " applied to this specific semi supervised task so if I had to formulate a"
},
{
"start": 1216.4,
"end": 1224.36,
"text": " criticism to this method I would guess that it probably doesn't work when let's"
},
{
"start": 1224.36,
"end": 1231.04,
"text": " say the branching factor of the task is super high you see here you can you can"
},
{
"start": 1231.04,
"end": 1236.44,
"text": " only really turn the valve in one way or another of course the digits and the"
},
{
"start": 1236.44,
"end": 1243.1599999999999,
"text": " joints are are they have they have degrees of freedom but if you think if"
},
{
"start": 1243.1599999999999,
"end": 1249.6399999999999,
"text": " the branching factor is super high right so from a from a given state here you"
},
{
"start": 1249.6399999999999,
"end": 1254.3999999999999,
"text": " can go in many many many different ways and then from each of those you can go"
},
{
"start": 1254.3999999999999,
"end": 1260.24,
"text": " in many many different ways right then the the notion of something being far"
},
{
"start": 1260.24,
"end": 1265.96,
"text": " away right you go to this thing and use what's the farthest away all right is is"
},
{
"start": 1265.96,
"end": 1271.36,
"text": " almost meaningless because you have so much not explored right so if you have"
},
{
"start": 1271.36,
"end": 1275.6,
"text": " if you are three steps deep here right it will always tell you well this state"
},
{
"start": 1275.6,
"end": 1279.72,
"text": " here is the farthest away but you haven't explored these you know 15"
},
{
"start": 1279.72,
"end": 1289.96,
"text": " directions here right so it might be that you actually miss so that you"
},
{
"start": 1289.96,
"end": 1297.68,
"text": " you go so here's the goal and here's the start and you go a long way but you miss"
},
{
"start": 1297.68,
"end": 1304.16,
"text": " this obvious shortcut here because you always want to go along the longest path"
},
{
"start": 1304.16,
"end": 1310.3600000000001,
"text": " around so it seems like there is there there are probably environments where"
},
{
"start": 1310.3600000000001,
"end": 1318.2,
"text": " this works well right but they're right but but it appears that if if either the"
},
{
"start": 1318.2,
"end": 1323.56,
"text": " branching factor is super high or if there are maybe this this kind of loops"
},
{
"start": 1323.56,
"end": 1334.3600000000001,
"text": " in the game loops between states non obvious combinatorial things it might be"
},
{
"start": 1334.3600000000001,
"end": 1339.96,
"text": " somewhat even counterproductive sometimes not not sure about that but it"
},
{
"start": 1339.96,
"end": 1345.88,
"text": " seems to be very specific environments where this would work all right so this"
},
{
"start": 1345.88,
"end": 1356.24,
"text": " was my commentary I invite you to read the paper check it out and bye bye"
}
] |
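The transcript and segments above walk through the whole dynamical-distance recipe: every pair of states visited i and j steps into an episode gives a regression target j - i for a parameterized distance function, the learned distance to the current goal can be fed back as a dense negative reward, and the visited state judged farthest from the start can be proposed as the next exploration goal. Below is a minimal end-to-end sketch of those three steps; the tiny MLP, the random toy states and all hyperparameters are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DistanceModel(nn.Module):
    """d_theta(s, g): predicted number of steps from state s to state g."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, s, g):
        return self.net(torch.cat([s, g], dim=-1)).squeeze(-1)

def pairs_from_episode(states):
    """Every ordered pair (s_i, s_j) with i <= j, labeled with the j - i steps between them."""
    s_i, s_j, steps = [], [], []
    for i in range(len(states)):
        for j in range(i, len(states)):
            s_i.append(states[i]); s_j.append(states[j]); steps.append(float(j - i))
    return torch.stack(s_i), torch.stack(s_j), torch.tensor(steps)

state_dim = 4
model = DistanceModel(state_dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
episode = [torch.randn(state_dim) for _ in range(20)]       # made-up visited states

# 1) Supervise the distance function by mean squared regression on actual step counts.
s_i, s_j, steps = pairs_from_episode(episode)
for _ in range(100):
    loss = ((model(s_i, s_j) - steps) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# 2) Dense shaped reward for a goal-reaching policy: negative predicted distance to the goal.
goal = episode[-1]
reward = -model(episode[3].unsqueeze(0), goal.unsqueeze(0)).item()

# 3) Unsupervised goal proposal: the visited state farthest from the start state.
start = episode[0].unsqueeze(0).expand(len(episode), -1)
with torch.no_grad():
    d_from_start = model(start, torch.stack(episode))
next_goal = episode[int(torch.argmax(d_from_start))]
print("regression loss:", loss.item(), " shaped reward sample:", reward)
```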
hg2Q_O5b9w4 | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"rl",
"reinforcement learning",
"unsupervised",
"contrast",
"contrastive",
"encoder",
"self-supervised",
"deep rl",
"representation",
"representation learning",
"query",
"key"
] | Contrastive Learning has been an established method in NLP and Image classification. The authors show that with relatively minor adjustments, CL can be used to augment and improve RL dramatically.
Paper: https://arxiv.org/abs/2004.04136
Code: https://github.com/MishaLaskin/curl
Abstract:
We present CURL: Contrastive Unsupervised Representations for Reinforcement Learning. CURL extracts high-level features from raw pixels using contrastive learning and performs off-policy control on top of the extracted features. CURL outperforms prior pixel-based methods, both model-based and model-free, on complex tasks in the DeepMind Control Suite and Atari Games showing 2.8x and 1.6x performance gains respectively at the 100K interaction steps benchmark. On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency and performance of methods that use state-based features.
Authors: Aravind Srinivas, Michael Laskin, Pieter Abbeel
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there! Today we're going to look at CURL, Contrastive Unsupervised Representations for Reinforcement Learning, by Aravind Srinivas, Michael Laskin and Pieter Abbeel. So this is a general framework for unsupervised representation learning for RL. So let's untangle the title a little bit. It is FOR reinforcement learning, which if you don't know what reinforcement learning is, I've done a bunch of videos on RL frameworks. So it's for general reinforcement learning. That means it can be paired with almost any RL algorithm out there. So we're not going to dive into specific RL algorithms today. It is unsupervised, which means it doesn't need any sort of labels, and it also doesn't need a reward signal for RL, which is pretty cool because usually the entire RL pipelines rely on some sort of a reward or auxiliary reward signal. Now there is a training objective here, but it doesn't have to do with the RL reward. And then it is learning representations, which means it learns intermediate representations of the input data that is useful. And in the end it is contrastive, and that is the secret sauce in here. The training objective is what's called contrastive learning, and that's what we're going to spend most of our time on today, exploring what that means. So here's the general framework. You can see it down here. Sorry about that. So you can see that reinforcement learning is just a box, which is we don't care about the RL algorithm you use, that's just what comes at the end. What comes at the beginning, oh, here is the observation. So the observation in an RL algorithm is kind of fundamental. Now if someone explains RL to you, reinforcement learning, usually what they'll say is there is some kind of actor and there is some kind of environment. And the environment will give you an observation, observation O, which is some sort of, let's say here is an image. So in this RL framework specifically, the examples they give are of image-based reinforcement learning. Let's say the Atari game where you have this little spaceship here and there are meteorites up here, and you need to shoot them. So there is a little shot here. You need to shoot those meteorites. So this is the observation O. And then as an age, as an actor, you have to come up with some sort of action. And the actions here can be something like move to the left, move to the right, press the button that does the shooting. So you have to come up with an action somehow given this observation. And then the environment will give you back a reward along with the next observation, like the next frame of the game. And you're going to have to come up with another action in response to that. And the environment is going to give you back another reward and the next observation and so on. So what you want to do is you want to find a mapping from observation to action, such that your reward is going to be as high as possible. This is the fundamental problem of RL. And usually what people do is they take this mapping here from observation to action to be some sort of function, some sort of function that is parameterized maybe. Nowadays, of course, it's often a neural network. But you're trying to learn, given the input observation, what output action you need to do. And you can think of the same here. So you have this input observation up here. And down here, after the reinforcement learning, the output is going to be an action. And so this function we talked about up here is usually implemented. 
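As a minimal sketch of the interaction loop just described, an environment that emits observations and rewards, and a parameterized mapping f_theta from observation to action, here is a toy version. The fake pixel environment, the linear scoring function and all shapes are stand-ins invented for illustration, not any real RL library.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_env_step(action):
    """Stand-in environment: returns (next_observation, reward) for a chosen action."""
    next_obs = rng.random((84, 84, 3))      # fake pixels of the next game frame
    reward = float(action == 2)             # pretend the "shoot" action is sometimes rewarded
    return next_obs, reward

theta = rng.normal(size=(84 * 84 * 3, 3))   # parameters of f_theta: observation -> action scores

def f_theta(obs):
    """Map the raw observation to one of three discrete actions (left, right, shoot)."""
    scores = obs.ravel() @ theta
    return int(np.argmax(scores))

obs = rng.random((84, 84, 3))
total_reward = 0.0
for t in range(10):
    action = f_theta(obs)
    obs, reward = toy_env_step(action)
    total_reward += reward
print("return over 10 steps:", total_reward)
```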
It's usually implemented like this. You put the observation into the RL framework. And then the RL framework learns this f of theta function to give you an action. Now here you can see the pipeline is a bit different. We don't want to shove the observation in directly, right? We don't want the observation directly. But what we put into the RL framework is this queue thing. Now the queue is supposed to be a representation of the observation and a useful representation. So if we think of this game here, of this Atari game up here, what could be a useful representation if I had to craft one by hand? How would I construct a useful representation? Keep in mind the goal is to have a representation of the observation that is more useful to the RL algorithm than just the pure pixels of the image. So if I had to craft a representation, let's say it's a vector. Let's say our representations need to be vectors. What I would do is I would probably take the x and y coordinates of the little spaceship, x and y, and put it in the vector. That's pretty useful. Then I would probably take the x and y coordinates of the meteorites that are around. Let's say there's a maximum of two, so x, y, x, y here. I would probably take the angle where my spaceship is pointing to. That should be pretty useful because if I shoot, I want to know where I shoot. So theta here, and then probably the x and y coordinates of the red shot that I fired, if there is one. I'm also going to put that into my representation. So x and y, and maybe delta x, delta y. Something like this. You can see if I had to handcraft something, I can pretty much guarantee that if I put in this representation right here into the RL algorithm, if I put this in here, it would turn out guaranteed, it would turn out to be a better RL agent that learns faster than if I put in the original observation, which is the pixel image of the game. Because, of course, in order to play the game correctly, in order to play the game to win, you need to extract this information. You need to get, ah, there's something like a spaceship, there's something like meteorites. This is all things that the RL algorithm doesn't know per se, and would have to learn from the pixels. But if I already give it the information that is useful, it can learn much faster. So you can see if I handcraft a good representation, it's pretty easy for the RL algorithm to improve. Now we want to come up with a framework that automatically comes up with a good representation. So it alleviates the RL algorithm here, the reinforcement learning. It alleviates that from having to learn a good representation. It already is burdened with learning what a good action is in any given situation. We want to alleviate it of the burden to also extract useful information from the observation space. So how do we do this? This Q here is supposed to be exactly that. It's supposed to be a good representation, but not one that we handcrafted, but used with a technique that can be employed pretty much everywhere. The goal, sorry, the secret sauce here is this contrastive loss thing. Okay, this bombed. Contrastive learning is this kind of magic thing that will make us good representations. What is contrastive learning? In this case, I'm going to explain it. In this case, for image-based reinforcement learning, but just for image-based neural networks, how can we come up with a contrastive loss? So you see there's a two pipeline thing going on here. This and this, and then one of them is going to be the good encoding. 
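To make the handcrafted-representation thought experiment concrete, here is a tiny sketch: if we could read out the game state directly, the "useful representation" q would just be a fixed-length vector of positions and angles. The field names, the padding scheme and the numbers are invented for illustration; CURL's point is precisely that we want an encoder to learn something like this from pixels instead of crafting it by hand.

```python
import numpy as np

def handcrafted_features(ship_xy, ship_angle, meteor_xys, shot_xy):
    """Pack the game state into a fixed-length vector (max two meteorites, one shot)."""
    meteors = (list(meteor_xys) + [(0.0, 0.0)] * 2)[:2]   # pad or truncate to two meteorites
    shot = shot_xy if shot_xy is not None else (0.0, 0.0)
    return np.array([*ship_xy, ship_angle,
                     *meteors[0], *meteors[1], *shot], dtype=np.float32)

q = handcrafted_features(ship_xy=(40.0, 180.0), ship_angle=1.2,
                         meteor_xys=[(30.0, 20.0)], shot_xy=(42.0, 120.0))
print(q.shape, q)   # a (9,) feature vector fed to the RL algorithm instead of raw pixels
```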
So let's check it out. Let's say we have this image that we had before. Draw it again. This little spaceship. This and this. And shot. We want to do this. What we need to do is we need to produce three different things from it. We need to produce an anchor, what's called an anchor. We need to produce a positive sample. And we need to produce negative samples. Let's just go with one negative sample for now. The goal is to come up with a task where we produce our own labels. Since we're training an encoder, and the encoder is a neural network that is parameterized, we need some sort of loss function. The goal is to come up with a method where we can create our own labels to a task, but we construct the task in a way such that the neural network has no choice and we can create something meaningful, even though we made the task up ourselves. I hope this was kind of clear. How are we going to do this? Our method of choice here is going to be random cropping. Random cropping means that I take an image and I crop a piece from it. A smaller piece from the image. I take a view inside the image. In case of the anchor, I'm going to draw the same picture here. Bear with me, I'm going to draw the same picture here a couple of times. This is all supposed to be the same picture. With the negative sample, I'm just going to leave it empty for now. Ta-da! Two meteorites. Two meteorites. Shot. Shot. For the anchor, we're going to center crop. We're going to take the center image. The assumption is that if I center crop, I won't lose too much of the image. I can actually make the crop bigger, such that almost everything of the image is somewhat contained in this. This is going to be my anchor. The positive sample is going to be a random crop of the same image. I'm just randomly going to select a same size section from that image. Let's say this is up right here. The negative sample is going to be a random crop from a different image. A different image might be from the same game, but there is a meteorite here and there is no shot. I don't shoot. I'm going to take a random crop from this. Let's say I'm going to take a random crop here. Let's put a meteorite here as well, just for fun. These are going to be our three samples. Now the question is going to be if I give the anchor to the neural network. I give you the anchor, but I'm also going to give you this and this thing. I'm not going to give any of this. I'm just going to give whatever I cropped. Just these things. I ask the neural network, I give you the anchor. Which one of these two crops comes from the same image? As a human you look at this and if you just see the center crop, you see down here there is this tip of this thing and then there is the shot. In relation to the shot there is a meteor here. Then you look at the second one and you say I don't see the spaceship, but there is the same relation here from the shot to the meteor. I can kind of see the meteor up here. This also fits with that. The spaceship must be down here somewhere. Then I go over here and I try to do the same thing. Here is the meteor. In the original image it might be over here somewhere. That's possible. I don't see it. That's possible, but then there should be a shot somewhere here. There should be a shot somewhere here. I'm pretty sure because there is one over here and I don't see it. I am fairly sure that this image here is the positive sample, while this image here is the negative sample. This is the task that you ask of the neural network. 
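Here is a minimal NumPy sketch of how such anchor / positive / negative crops could be produced; the image sizes, the 84-pixel crop and the random noise standing in for real observations are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop(img, size):
    """Take a random size x size crop from an (H, W, C) image."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size]

def center_crop(img, size):
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

obs = rng.random((100, 100, 3))        # the observation the anchor comes from
other_obs = rng.random((100, 100, 3))  # some different observation

anchor = center_crop(obs, 84)          # large, centered view of the image
positive = random_crop(obs, 84)        # another view of the SAME image
negative = random_crop(other_obs, 84)  # a view of a DIFFERENT image
```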
Give it the anchor and you ask which one of these two comes from the same image. This is called contrastive learning. It is a bit more complicated in that of course what you do is you encode these things using neural networks. Each of the things you encode. The anchor you are going to encode all of these things using a neural network. Then this is what's going to become the query. These are becoming the keys. Key 1 or key 2. Then you are going to feed always two of them into a bilinear product. A bilinear product is simply an inner product in a perturbed space that you can learn. You are going to have these two here. These go into Q, W, K, 1. Then these two here, sorry, this and this go into Q, W, K, 2. Now W here is a learnable parameter. You have some freedom. Then you basically take whichever one of those two is highest. This might be this high and this might only be this high. Then you say, aha, cool, this one is higher so this one must be the positive. You train the W specifically to make the positive ones higher and the negative ones lower. This is a supervised learning task. These things here are going to be the logits. They are inner products but you basically then pick the one that is highest in a softmax way. They put this in the paper. If we go down here, the objective that they use to do the contrastive learning is this one. As you can see, it's a softmax like in multiclass classification. The inner product, the bilinear product with the positive samples over the bilinear product with the positive samples plus the bilinear product with all of the negative samples. You are going to come up with more than one negative sample. The only thing left that we don't have here is that the encoding, how you are going to come from the image space to this space here, is going to be slightly different depending on whether you are talking on the anchor or on what are called the keys, the things you compare to. This is out of a stability criterion. Maybe you know something like double Q-learning or things like this. Sometimes when you train with your own thing, in Q-learning you are trying to come up with an actor and a critic. It's not the same thing, but you are using the same neural network twice in your setup. Then you compare the outputs to each other, which leads to instability. In our case, we took it three times here, or multiple times. Especially for the same objective here, we have twice something that was encoded by the same neural network and is on the two sides of this bilinear product. If we were to use the same neural network, that tends to be somewhat unstable. We have different neural networks, one that will encode the query, which is this FQ, and one which will encode the keys, sorry, FK. We don't want to learn two neural networks. That's why there's a bit of a compromise, where we say it is the same neural network, but basically this one is the one we learn. Every now and then we transfer over the parameters to that one. In fact, each step we transfer over the parameters and do an exponentially moving average with the parameters of this momentum encoder from the step before. The momentum encoder parameters are a moving average of the parameters of the query encoder. You get the best of both worlds. You don't have to learn a second neural network, but your second neural network is not the same as your first neural network. It kind of lags behind, but it is also performing almost as well. I don't know if that makes sense, but it is the best I can explain it. 
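To pin down the bilinear score and the softmax-style objective just described, here is a small NumPy sketch; the dimensions, the number of negatives and the momentum value are illustrative assumptions. The query is scored against one positive and several negative keys via q·W·k, the logits go through a softmax, and the loss is the cross entropy with the positive key as the "correct class"; the key (momentum) encoder is then kept as an exponential moving average of the query encoder rather than being trained directly.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

W = rng.normal(size=(dim, dim))          # learnable bilinear matrix
q = rng.normal(size=dim)                 # encoded anchor (the query)
k_pos = q + 0.1 * rng.normal(size=dim)   # encoded positive key
k_negs = rng.normal(size=(5, dim))       # encoded negative keys

def bilinear(q, k, W):
    return q @ W @ k                     # q^T W k

logits = np.array([bilinear(q, k_pos, W)] + [bilinear(q, k, W) for k in k_negs])
logits -= logits.max()                   # numerical stability
probs = np.exp(logits) / np.exp(logits).sum()
loss = -np.log(probs[0])                 # cross entropy: the positive sits at index 0
print(float(loss))

# Momentum (key) encoder: not trained directly, but tracked as an EMA of the query encoder.
momentum = 0.95
theta_q = rng.normal(size=100)                           # stand-in query-encoder parameters
theta_k = theta_q.copy()                                 # key encoder starts out identical
theta_q = theta_q - 0.01 * rng.normal(size=100)          # pretend gradient step on the query encoder
theta_k = momentum * theta_k + (1 - momentum) * theta_q  # key encoder lags behind
```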
To recap, you take your observation, you encode it as a query, sorry, you crop here for your anchor, that gets your query, and then you random crop for your keys into positive and negative samples. Random crop from the same observation or from different observations. These become your positive and negative samples. Then you push these through your encoders for the query and for the keys respectively. You end up with the q, which is the encoded anchor, and the k's, which are the encoded positive and negative samples. Then you learn, you update this encoder here using the contrastive loss. At the same time, you feed the q into the reinforcement learning algorithm, and you learn your reinforcement learning algorithm. Instead of having the observation directly as an input here, you now have the q here as an input. The reinforcement learning works exactly the same, except instead of having the pixel input O, you now have the representation input Q. You don't have to worry about anything else in terms of the reinforcement learning algorithm. It works exactly the same. This whole thing here can run either in parallel, or you can think of it before, you can think of it off-policy, on-policy. It is sort of modular how you fit this in. It simply comes up with good representations. That is basically the deal here. You hope that the whole procedure of this contrastive learning then gives you a good representation of this anchor thing here. If you encode that to the q, you hope that this representation now is a good representation as a basis for the RL algorithm. It turns out, at least in their experiments, it is. Here you see the same thing. You can do something more where in RL you usually deal with a stack of observations, not just a single observation. For example, in Atari, people always concatenate something like the four last frames. Their point is, if we have this stack here, if we do this data augmentation, these crops, we need to do them consistently. We need to crop every single image at the same point for the query. Also, if we do a random crop, let's say a random crop down here, we need to do this same random crop for all of the stack of images here. That is the additional thing they introduce with respect to RL that deals with stacked timeframes. It's the same diagram as above here. They explain the RL algorithms they use and exactly their thing. Here you can see that the anchor is a crop, and the positive sample is a random crop from the same image. This would be up here somewhere. The anchor is cropped from the middle. Then the negative would be a random crop from a different image or a different stack of images. They have a pseudocode here. It's pretty simple. We'll just go through it quickly. You start off with FQ and FK. These are the encoders for the query and keys. You start them off the same. Then you go through your data loader. You do this random augmentation of your query and your keys. I'm not even sure if the random augmentation needs to be a center crop for the anchor, but it's just two different crops from the same image. I guess it's a thing you could choose. I don't know what exactly is the best thing. Then I forward the query through the FQ and I forward the keys through the FK. It's important to detach this so I don't want to train the FK. I only want to train the FQ. Then I do the bilinear product here with the W. These are the bilinear products. Then I put all of this into a cross entropy loss. 
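One detail from this recap that is worth making concrete is the stacked-frame augmentation: the same crop window has to be applied to every frame of a stacked observation, so that objects stay aligned across time. A minimal NumPy sketch follows; the stack length, frame size and crop size are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def crop_stack(stack, size):
    """Apply ONE randomly chosen crop window to every frame in the stack.

    stack: array of shape (T, H, W), e.g. the four last frames.
    """
    _, h, w = stack.shape
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    # Same (top, left) for all T frames, so objects stay aligned across time.
    return stack[:, top:top + size, left:left + size]

frames = rng.random((4, 100, 100))
query_stack = crop_stack(frames, 84)  # one consistent crop for the query
key_stack = crop_stack(frames, 84)    # an independent, but again consistent, crop for the key
print(query_stack.shape, key_stack.shape)  # (4, 84, 84) (4, 84, 84)
```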
In the end I update my FQ and my W and I do this exponential moving average for my key encoder. They test on two different things. They test on the DeepMind control tasks. They always test 100k time steps. Their big point is data efficiency. They claim they can learn useful representations with not much data. The task is here, how good are you at 100k time steps? You don't optimize until the end. You get 100k time steps and then the question is how good are you? CURL here outperforms all of the baselines handily in the DeepMind control tasks. It also outperforms a lot of the baselines in the Atari tasks. If you look at the results, it doesn't outperform everything. For example, the red is CURL and the dashed grey is state SAC. State SAC has access to the state. CURL only works from pixels. If I had to craft a representation, state SAC has access to that. You see that in many of the tasks, CURL comes close or performs equally well to state SAC. That's pretty impressive. Especially if you look at pixel SAC, which is the same algorithm but does not have access to the state, it often fails terribly. That is pretty interesting to see. Even to me, it's pretty interesting to see that this kind of self-labeled algorithm comes up with such useful representations. I hope I have explained this satisfactorily. Check out the paper for more experiments, ablation studies and general reading. I wish you a good day. | [
{
"start": 0,
"end": 7.5,
"text": " Hi there! Today we're going to look at CURL, Contrastive Unsupervised Representations for Reinforcement Learning,"
},
{
"start": 7.5,
"end": 12.5,
"text": " by Aravind Srinivas, Michael Laskin and Pieter Abbeel."
},
{
"start": 12.5,
"end": 19,
"text": " So this is a general framework for unsupervised representation learning for RL."
},
{
"start": 19,
"end": 22.5,
"text": " So let's untangle the title a little bit."
},
{
"start": 22.5,
"end": 28.5,
"text": " It is FOR reinforcement learning, which if you don't know what reinforcement learning is,"
},
{
"start": 28.5,
"end": 32,
"text": " I've done a bunch of videos on RL frameworks."
},
{
"start": 32,
"end": 35,
"text": " So it's for general reinforcement learning."
},
{
"start": 35,
"end": 41,
"text": " That means it can be paired with almost any RL algorithm out there."
},
{
"start": 41,
"end": 46,
"text": " So we're not going to dive into specific RL algorithms today."
},
{
"start": 46,
"end": 53,
"text": " It is unsupervised, which means it doesn't need any sort of labels,"
},
{
"start": 53,
"end": 57,
"text": " and it also doesn't need a reward signal for RL,"
},
{
"start": 57,
"end": 65.5,
"text": " which is pretty cool because usually the entire RL pipelines rely on some sort of a reward or auxiliary reward signal."
},
{
"start": 65.5,
"end": 71,
"text": " Now there is a training objective here, but it doesn't have to do with the RL reward."
},
{
"start": 71,
"end": 83,
"text": " And then it is learning representations, which means it learns intermediate representations of the input data that is useful."
},
{
"start": 83,
"end": 88,
"text": " And in the end it is contrastive, and that is the secret sauce in here."
},
{
"start": 88,
"end": 91.5,
"text": " The training objective is what's called contrastive learning,"
},
{
"start": 91.5,
"end": 97,
"text": " and that's what we're going to spend most of our time on today, exploring what that means."
},
{
"start": 97,
"end": 103,
"text": " So here's the general framework. You can see it down here."
},
{
"start": 103,
"end": 107,
"text": " Sorry about that."
},
{
"start": 107,
"end": 116,
"text": " So you can see that reinforcement learning is just a box, which is we don't care about the RL algorithm you use,"
},
{
"start": 116,
"end": 120,
"text": " that's just what comes at the end."
},
{
"start": 120,
"end": 123.5,
"text": " What comes at the beginning, oh, here is the observation."
},
{
"start": 123.5,
"end": 128,
"text": " So the observation in an RL algorithm is kind of fundamental."
},
{
"start": 128,
"end": 132,
"text": " Now if someone explains RL to you, reinforcement learning,"
},
{
"start": 132,
"end": 138,
"text": " usually what they'll say is there is some kind of actor and there is some kind of environment."
},
{
"start": 138,
"end": 152,
"text": " And the environment will give you an observation, observation O, which is some sort of, let's say here is an image."
},
{
"start": 152,
"end": 158.5,
"text": " So in this RL framework specifically, the examples they give are of image-based reinforcement learning."
},
{
"start": 158.5,
"end": 168,
"text": " Let's say the Atari game where you have this little spaceship here and there are meteorites up here,"
},
{
"start": 168,
"end": 172,
"text": " and you need to shoot them. So there is a little shot here."
},
{
"start": 172,
"end": 174,
"text": " You need to shoot those meteorites."
},
{
"start": 174,
"end": 176,
"text": " So this is the observation O."
},
{
"start": 176,
"end": 181,
"text": " And then as an age, as an actor, you have to come up with some sort of action."
},
{
"start": 181,
"end": 185,
"text": " And the actions here can be something like move to the left, move to the right,"
},
{
"start": 185,
"end": 189,
"text": " press the button that does the shooting."
},
{
"start": 189,
"end": 194,
"text": " So you have to come up with an action somehow given this observation."
},
{
"start": 194,
"end": 200,
"text": " And then the environment will give you back a reward along with the next observation,"
},
{
"start": 200,
"end": 202,
"text": " like the next frame of the game."
},
{
"start": 202,
"end": 206,
"text": " And you're going to have to come up with another action in response to that."
},
{
"start": 206,
"end": 211,
"text": " And the environment is going to give you back another reward and the next observation and so on."
},
{
"start": 211,
"end": 218.5,
"text": " So what you want to do is you want to find a mapping from observation to action,"
},
{
"start": 218.5,
"end": 223,
"text": " such that your reward is going to be as high as possible."
},
{
"start": 223,
"end": 226,
"text": " This is the fundamental problem of RL."
},
{
"start": 226,
"end": 232.5,
"text": " And usually what people do is they take this mapping here from observation to action"
},
{
"start": 232.5,
"end": 239,
"text": " to be some sort of function, some sort of function that is parameterized maybe."
},
{
"start": 239,
"end": 242,
"text": " Nowadays, of course, it's often a neural network."
},
{
"start": 242,
"end": 249,
"text": " But you're trying to learn, given the input observation, what output action you need to do."
},
{
"start": 249,
"end": 251,
"text": " And you can think of the same here."
},
{
"start": 251,
"end": 254,
"text": " So you have this input observation up here."
},
{
"start": 254,
"end": 261,
"text": " And down here, after the reinforcement learning, the output is going to be an action."
},
{
"start": 261,
"end": 267,
"text": " And so this function we talked about up here is usually implemented."
},
{
"start": 267,
"end": 271,
"text": " It's usually implemented like this. You put the observation into the RL framework."
},
{
"start": 271,
"end": 276,
"text": " And then the RL framework learns this f of theta function to give you an action."
},
{
"start": 276,
"end": 279,
"text": " Now here you can see the pipeline is a bit different."
},
{
"start": 279,
"end": 283,
"text": " We don't want to shove the observation in directly, right?"
},
{
"start": 283,
"end": 286,
"text": " We don't want the observation directly."
},
{
"start": 286,
"end": 291,
"text": " But what we put into the RL framework is this queue thing."
},
{
"start": 291,
"end": 296,
"text": " Now the queue is supposed to be a representation of the observation"
},
{
"start": 296,
"end": 298,
"text": " and a useful representation."
},
{
"start": 298,
"end": 304,
"text": " So if we think of this game here, of this Atari game up here,"
},
{
"start": 304,
"end": 310,
"text": " what could be a useful representation if I had to craft one by hand?"
},
{
"start": 310,
"end": 314,
"text": " How would I construct a useful representation?"
},
{
"start": 314,
"end": 320,
"text": " Keep in mind the goal is to have a representation of the observation"
},
{
"start": 320,
"end": 327,
"text": " that is more useful to the RL algorithm than just the pure pixels of the image."
},
{
"start": 327,
"end": 331,
"text": " So if I had to craft a representation, let's say it's a vector."
},
{
"start": 331,
"end": 336,
"text": " Let's say our representations need to be vectors."
},
{
"start": 336,
"end": 343,
"text": " What I would do is I would probably take the x and y coordinates of the little spaceship,"
},
{
"start": 343,
"end": 347,
"text": " x and y, and put it in the vector. That's pretty useful."
},
{
"start": 347,
"end": 355,
"text": " Then I would probably take the x and y coordinates of the meteorites that are around."
},
{
"start": 355,
"end": 360,
"text": " Let's say there's a maximum of two, so x, y, x, y here."
},
{
"start": 360,
"end": 370,
"text": " I would probably take the angle where my spaceship is pointing to."
},
{
"start": 370,
"end": 375,
"text": " That should be pretty useful because if I shoot, I want to know where I shoot."
},
{
"start": 375,
"end": 386,
"text": " So theta here, and then probably the x and y coordinates of the red shot that I fired, if there is one."
},
{
"start": 386,
"end": 389,
"text": " I'm also going to put that into my representation."
},
{
"start": 389,
"end": 395,
"text": " So x and y, and maybe delta x, delta y."
},
{
"start": 395,
"end": 397,
"text": " Something like this."
},
{
"start": 397,
"end": 400,
"text": " You can see if I had to handcraft something,"
},
{
"start": 400,
"end": 409,
"text": " I can pretty much guarantee that if I put in this representation right here into the RL algorithm,"
},
{
"start": 409,
"end": 414,
"text": " if I put this in here, it would turn out guaranteed,"
},
{
"start": 414,
"end": 422,
"text": " it would turn out to be a better RL agent that learns faster than if I put in the original observation,"
},
{
"start": 422,
"end": 427,
"text": " which is the pixel image of the game."
},
{
"start": 427,
"end": 433,
"text": " Because, of course, in order to play the game correctly, in order to play the game to win,"
},
{
"start": 433,
"end": 436,
"text": " you need to extract this information."
},
{
"start": 436,
"end": 441,
"text": " You need to get, ah, there's something like a spaceship, there's something like meteorites."
},
{
"start": 441,
"end": 448,
"text": " This is all things that the RL algorithm doesn't know per se, and would have to learn from the pixels."
},
{
"start": 448,
"end": 453,
"text": " But if I already give it the information that is useful, it can learn much faster."
},
{
"start": 453,
"end": 461,
"text": " So you can see if I handcraft a good representation, it's pretty easy for the RL algorithm to improve."
},
{
"start": 461,
"end": 468,
"text": " Now we want to come up with a framework that automatically comes up with a good representation."
},
{
"start": 468,
"end": 473,
"text": " So it alleviates the RL algorithm here, the reinforcement learning."
},
{
"start": 473,
"end": 480,
"text": " It alleviates that from having to learn a good representation."
},
{
"start": 480,
"end": 487,
"text": " It already is burdened with learning what a good action is in any given situation."
},
{
"start": 487,
"end": 498,
"text": " We want to alleviate it of the burden to also extract useful information from the observation space."
},
{
"start": 498,
"end": 500,
"text": " So how do we do this?"
},
{
"start": 500,
"end": 504,
"text": " This Q here is supposed to be exactly that."
},
{
"start": 504,
"end": 510,
"text": " It's supposed to be a good representation, but not one that we handcrafted,"
},
{
"start": 510,
"end": 516,
"text": " but used with a technique that can be employed pretty much everywhere."
},
{
"start": 516,
"end": 522,
"text": " The goal, sorry, the secret sauce here is this contrastive loss thing."
},
{
"start": 522,
"end": 524,
"text": " Okay, this bombed."
},
{
"start": 524,
"end": 532,
"text": " Contrastive learning is this kind of magic thing that will make us good representations."
},
{
"start": 532,
"end": 534,
"text": " What is contrastive learning?"
},
{
"start": 534,
"end": 537,
"text": " In this case, I'm going to explain it."
},
{
"start": 537,
"end": 550,
"text": " In this case, for image-based reinforcement learning, but just for image-based neural networks,"
},
{
"start": 550,
"end": 554,
"text": " how can we come up with a contrastive loss?"
},
{
"start": 554,
"end": 558,
"text": " So you see there's a two pipeline thing going on here."
},
{
"start": 558,
"end": 566,
"text": " This and this, and then one of them is going to be the good encoding."
},
{
"start": 566,
"end": 569,
"text": " So let's check it out."
},
{
"start": 569,
"end": 575,
"text": " Let's say we have this image that we had before."
},
{
"start": 575,
"end": 578,
"text": " Draw it again."
},
{
"start": 578,
"end": 583,
"text": " This little spaceship."
},
{
"start": 583,
"end": 585,
"text": " This and this."
},
{
"start": 585,
"end": 588,
"text": " And shot."
},
{
"start": 588,
"end": 590,
"text": " We want to do this."
},
{
"start": 590,
"end": 595,
"text": " What we need to do is we need to produce three different things from it."
},
{
"start": 595,
"end": 602,
"text": " We need to produce an anchor, what's called an anchor."
},
{
"start": 602,
"end": 607,
"text": " We need to produce a positive sample."
},
{
"start": 607,
"end": 610,
"text": " And we need to produce negative samples."
},
{
"start": 610,
"end": 614,
"text": " Let's just go with one negative sample for now."
},
{
"start": 614,
"end": 621,
"text": " The goal is to come up with a task where we produce our own labels."
},
{
"start": 621,
"end": 627,
"text": " Since we're training an encoder, and the encoder is a neural network that is parameterized,"
},
{
"start": 627,
"end": 629,
"text": " we need some sort of loss function."
},
{
"start": 629,
"end": 635,
"text": " The goal is to come up with a method where we can create our own labels to a task,"
},
{
"start": 635,
"end": 640,
"text": " but we construct the task in a way such that the neural network has no choice"
},
{
"start": 640,
"end": 645,
"text": " and we can create something meaningful, even though we made the task up ourselves."
},
{
"start": 645,
"end": 649,
"text": " I hope this was kind of clear."
},
{
"start": 649,
"end": 651,
"text": " How are we going to do this?"
},
{
"start": 651,
"end": 655,
"text": " Our method of choice here is going to be random cropping."
},
{
"start": 655,
"end": 664,
"text": " Random cropping means that I take an image and I crop a piece from it."
},
{
"start": 664,
"end": 667,
"text": " A smaller piece from the image."
},
{
"start": 667,
"end": 670,
"text": " I take a view inside the image."
},
{
"start": 670,
"end": 676,
"text": " In case of the anchor, I'm going to draw the same picture here."
},
{
"start": 676,
"end": 680,
"text": " Bear with me, I'm going to draw the same picture here a couple of times."
},
{
"start": 680,
"end": 684,
"text": " This is all supposed to be the same picture."
},
{
"start": 684,
"end": 689,
"text": " With the negative sample, I'm just going to leave it empty for now."
},
{
"start": 689,
"end": 694,
"text": " Ta-da! Two meteorites. Two meteorites."
},
{
"start": 694,
"end": 696,
"text": " Shot. Shot."
},
{
"start": 696,
"end": 702,
"text": " For the anchor, we're going to center crop."
},
{
"start": 702,
"end": 708,
"text": " We're going to take the center image."
},
{
"start": 708,
"end": 716,
"text": " The assumption is that if I center crop, I won't lose too much of the image."
},
{
"start": 716,
"end": 721,
"text": " I can actually make the crop bigger, such that almost everything of the image"
},
{
"start": 721,
"end": 726,
"text": " is somewhat contained in this."
},
{
"start": 726,
"end": 728,
"text": " This is going to be my anchor."
},
{
"start": 728,
"end": 734,
"text": " The positive sample is going to be a random crop of the same image."
},
{
"start": 734,
"end": 743,
"text": " I'm just randomly going to select a same size section from that image."
},
{
"start": 743,
"end": 747,
"text": " Let's say this is up right here."
},
{
"start": 747,
"end": 753,
"text": " The negative sample is going to be a random crop from a different image."
},
{
"start": 753,
"end": 757,
"text": " A different image might be from the same game,"
},
{
"start": 757,
"end": 763,
"text": " but there is a meteorite here and there is no shot."
},
{
"start": 763,
"end": 765,
"text": " I don't shoot."
},
{
"start": 765,
"end": 768,
"text": " I'm going to take a random crop from this."
},
{
"start": 768,
"end": 772,
"text": " Let's say I'm going to take a random crop here."
},
{
"start": 772,
"end": 777,
"text": " Let's put a meteorite here as well, just for fun."
},
{
"start": 777,
"end": 784,
"text": " These are going to be our three samples."
},
{
"start": 784,
"end": 792,
"text": " Now the question is going to be if I give the anchor to the neural network."
},
{
"start": 792,
"end": 801,
"text": " I give you the anchor, but I'm also going to give you this and this thing."
},
{
"start": 801,
"end": 803,
"text": " I'm not going to give any of this."
},
{
"start": 803,
"end": 813,
"text": " I'm just going to give whatever I cropped."
},
{
"start": 813,
"end": 816,
"text": " Just these things."
},
{
"start": 816,
"end": 820,
"text": " I ask the neural network, I give you the anchor."
},
{
"start": 820,
"end": 829,
"text": " Which one of these two crops comes from the same image?"
},
{
"start": 829,
"end": 833,
"text": " As a human you look at this and if you just see the center crop,"
},
{
"start": 833,
"end": 838,
"text": " you see down here there is this tip of this thing and then there is the shot."
},
{
"start": 838,
"end": 842,
"text": " In relation to the shot there is a meteor here."
},
{
"start": 842,
"end": 847,
"text": " Then you look at the second one and you say I don't see the spaceship,"
},
{
"start": 847,
"end": 851,
"text": " but there is the same relation here from the shot to the meteor."
},
{
"start": 851,
"end": 854,
"text": " I can kind of see the meteor up here."
},
{
"start": 854,
"end": 857,
"text": " This also fits with that."
},
{
"start": 857,
"end": 861,
"text": " The spaceship must be down here somewhere."
},
{
"start": 861,
"end": 865,
"text": " Then I go over here and I try to do the same thing."
},
{
"start": 865,
"end": 867,
"text": " Here is the meteor."
},
{
"start": 867,
"end": 874,
"text": " In the original image it might be over here somewhere."
},
{
"start": 874,
"end": 877,
"text": " That's possible. I don't see it."
},
{
"start": 877,
"end": 887,
"text": " That's possible, but then there should be a shot somewhere here."
},
{
"start": 887,
"end": 893,
"text": " There should be a shot somewhere here."
},
{
"start": 893,
"end": 898,
"text": " I'm pretty sure because there is one over here and I don't see it."
},
{
"start": 898,
"end": 905,
"text": " I am fairly sure that this image here is the positive sample,"
},
{
"start": 905,
"end": 909,
"text": " while this image here is the negative sample."
},
{
"start": 909,
"end": 912,
"text": " This is the task that you ask of the neural network."
},
{
"start": 912,
"end": 921,
"text": " Give it the anchor and you ask which one of these two comes from the same image."
},
{
"start": 921,
"end": 925,
"text": " This is called contrastive learning."
},
{
"start": 925,
"end": 934,
"text": " It is a bit more complicated in that of course what you do is you encode these things using neural networks."
},
{
"start": 934,
"end": 939,
"text": " Each of the things you encode."
},
{
"start": 939,
"end": 947,
"text": " The anchor you are going to encode all of these things using a neural network."
},
{
"start": 947,
"end": 952,
"text": " Then this is what's going to become the query."
},
{
"start": 952,
"end": 956,
"text": " These are becoming the keys. Key 1 or key 2."
},
{
"start": 956,
"end": 963,
"text": " Then you are going to feed always two of them into a bilinear product."
},
{
"start": 963,
"end": 970,
"text": " A bilinear product is simply an inner product in a perturbed space that you can learn."
},
{
"start": 970,
"end": 975,
"text": " You are going to have these two here."
},
{
"start": 975,
"end": 979,
"text": " These go into Q, W, K, 1."
},
{
"start": 979,
"end": 986,
"text": " Then these two here, sorry, this and this go into Q, W, K, 2."
},
{
"start": 986,
"end": 990,
"text": " Now W here is a learnable parameter."
},
{
"start": 990,
"end": 993,
"text": " You have some freedom."
},
{
"start": 993,
"end": 999,
"text": " Then you basically take whichever one of those two is highest."
},
{
"start": 999,
"end": 1004,
"text": " This might be this high and this might only be this high."
},
{
"start": 1004,
"end": 1010,
"text": " Then you say, aha, cool, this one is higher so this one must be the positive."
},
{
"start": 1010,
"end": 1019,
"text": " You train the W specifically to make the positive ones higher and the negative ones lower."
},
{
"start": 1019,
"end": 1023,
"text": " This is a supervised learning task."
},
{
"start": 1023,
"end": 1030,
"text": " These things here are going to be the logits."
},
{
"start": 1030,
"end": 1037,
"text": " They are inner products but you basically then pick the one that is highest in a softmax way."
},
{
"start": 1037,
"end": 1040,
"text": " They put this in the paper."
},
{
"start": 1040,
"end": 1048,
"text": " If we go down here, the objective that they use to do the contrastive learning is this one."
},
{
"start": 1048,
"end": 1054,
"text": " As you can see, it's a softmax like in multiclass classification."
},
{
"start": 1054,
"end": 1061,
"text": " The inner product, the bilinear product with the positive samples"
},
{
"start": 1061,
"end": 1067,
"text": " over the bilinear product with the positive samples plus the bilinear product with all of the negative samples."
},
{
"start": 1067,
"end": 1071,
"text": " You are going to come up with more than one negative sample."
},
{
"start": 1071,
"end": 1078,
"text": " The only thing left that we don't have here is that the encoding,"
},
{
"start": 1078,
"end": 1086,
"text": " how you are going to come from the image space to this space here,"
},
{
"start": 1086,
"end": 1092,
"text": " is going to be slightly different depending on whether you are talking on the anchor"
},
{
"start": 1092,
"end": 1097,
"text": " or on what are called the keys, the things you compare to."
},
{
"start": 1097,
"end": 1100,
"text": " This is out of a stability criterion."
},
{
"start": 1100,
"end": 1106,
"text": " Maybe you know something like double Q-learning or things like this."
},
{
"start": 1106,
"end": 1112,
"text": " Sometimes when you train with your own thing,"
},
{
"start": 1112,
"end": 1119,
"text": " in Q-learning you are trying to come up with an actor and a critic."
},
{
"start": 1119,
"end": 1130,
"text": " It's not the same thing, but you are using the same neural network twice in your setup."
},
{
"start": 1130,
"end": 1138,
"text": " Then you compare the outputs to each other, which leads to instability."
},
{
"start": 1138,
"end": 1145,
"text": " In our case, we took it three times here, or multiple times."
},
{
"start": 1145,
"end": 1151,
"text": " Especially for the same objective here, we have twice something that was encoded by the same neural network"
},
{
"start": 1151,
"end": 1154,
"text": " and is on the two sides of this bilinear product."
},
{
"start": 1154,
"end": 1160,
"text": " If we were to use the same neural network, that tends to be somewhat unstable."
},
{
"start": 1160,
"end": 1166,
"text": " We have different neural networks, one that will encode the query, which is this FQ,"
},
{
"start": 1166,
"end": 1172,
"text": " and one which will encode the keys, sorry, FK."
},
{
"start": 1172,
"end": 1176,
"text": " We don't want to learn two neural networks."
},
{
"start": 1176,
"end": 1181,
"text": " That's why there's a bit of a compromise, where we say it is the same neural network,"
},
{
"start": 1181,
"end": 1188,
"text": " but basically this one is the one we learn."
},
{
"start": 1188,
"end": 1196,
"text": " Every now and then we transfer over the parameters to that one."
},
{
"start": 1196,
"end": 1203,
"text": " In fact, each step we transfer over the parameters and do an exponentially moving average"
},
{
"start": 1203,
"end": 1209,
"text": " with the parameters of this momentum encoder from the step before."
},
{
"start": 1209,
"end": 1217,
"text": " The momentum encoder parameters are a moving average of the parameters of the query encoder."
},
{
"start": 1217,
"end": 1221,
"text": " You get the best of both worlds."
},
{
"start": 1221,
"end": 1227,
"text": " You don't have to learn a second neural network, but your second neural network"
},
{
"start": 1227,
"end": 1231,
"text": " is not the same as your first neural network."
},
{
"start": 1231,
"end": 1239,
"text": " It kind of lags behind, but it is also performing almost as well."
},
{
"start": 1239,
"end": 1246,
"text": " I don't know if that makes sense, but it is the best I can explain it."
},
{
"start": 1246,
"end": 1253,
"text": " To recap, you take your observation, you encode it as a query, sorry,"
},
{
"start": 1253,
"end": 1260,
"text": " you crop here for your anchor, that gets your query,"
},
{
"start": 1260,
"end": 1269,
"text": " and then you random crop for your keys into positive and negative samples."
},
{
"start": 1269,
"end": 1274,
"text": " Random crop from the same observation or from different observations."
},
{
"start": 1274,
"end": 1277,
"text": " These become your positive and negative samples."
},
{
"start": 1277,
"end": 1286,
"text": " Then you push these through your encoders for the query and for the keys respectively."
},
{
"start": 1286,
"end": 1291,
"text": " You end up with the queue, which is the encoded anchor,"
},
{
"start": 1291,
"end": 1296,
"text": " and the k's, which are the encoded positive and negative samples."
},
{
"start": 1296,
"end": 1307,
"text": " Then you learn, you update this encoder here using the contrastive loss."
},
{
"start": 1307,
"end": 1316,
"text": " At the same time, you feed the queue into the reinforcement learning algorithm,"
},
{
"start": 1316,
"end": 1321,
"text": " and you learn your reinforcement learning algorithm."
},
{
"start": 1321,
"end": 1326,
"text": " Instead of having the observation directly as an input here,"
},
{
"start": 1326,
"end": 1332,
"text": " you now have the queue here as an input."
},
{
"start": 1332,
"end": 1336,
"text": " The reinforcement learning works exactly the same,"
},
{
"start": 1336,
"end": 1343,
"text": " except having the pixel input O, you now have the representation input Q."
},
{
"start": 1343,
"end": 1348,
"text": " You don't have to worry about anything else in terms of the reinforcement learning algorithm."
},
{
"start": 1348,
"end": 1351,
"text": " It works exactly the same."
},
{
"start": 1351,
"end": 1357,
"text": " This whole thing here can run either in parallel, or you can think of it before,"
},
{
"start": 1357,
"end": 1360,
"text": " you can think of it off-policy, on-policy."
},
{
"start": 1360,
"end": 1363,
"text": " It is sort of modular how you fit this in."
},
{
"start": 1363,
"end": 1366,
"text": " It simply comes up with good representation."
},
{
"start": 1366,
"end": 1371,
"text": " That is basically the deal here."
},
{
"start": 1371,
"end": 1381,
"text": " You hope that the whole procedure of this contrastive learning then gives you good representation of this anchor thing here."
},
{
"start": 1381,
"end": 1391,
"text": " If you encode that to the queue, you hope that this representation now is a good representation as a basis for the RL algorithm."
},
{
"start": 1391,
"end": 1396,
"text": " It turns out, at least in their experiments, it is."
},
{
"start": 1396,
"end": 1398,
"text": " Here you see the same thing."
},
{
"start": 1398,
"end": 1404,
"text": " You can do something more where in RL you usually deal with a stack of observations,"
},
{
"start": 1404,
"end": 1407,
"text": " not just a single observation."
},
{
"start": 1407,
"end": 1413,
"text": " For example, in Atari, people always concatenate something like the four last frames."
},
{
"start": 1413,
"end": 1420,
"text": " Their point is, if we have this stack here, if we do this data augmentation, these crops,"
},
{
"start": 1420,
"end": 1422,
"text": " we need to do them consistently."
},
{
"start": 1422,
"end": 1429,
"text": " We need to crop every single image at the same point for the query."
},
{
"start": 1429,
"end": 1433,
"text": " Also, if we do a random crop, let's say a random crop down here,"
},
{
"start": 1433,
"end": 1440,
"text": " we need to do this same random crop for all of the stack of images here."
},
{
"start": 1440,
"end": 1453,
"text": " That is the additional thing they introduce with respect to RL that deals with stacked timeframes."
},
{
"start": 1453,
"end": 1460,
"text": " It's the same diagram as above here."
},
{
"start": 1460,
"end": 1467,
"text": " They explain the RL algorithms they use and exactly their thing."
},
{
"start": 1467,
"end": 1475,
"text": " Here you can see that the anchor is a crop, and the positive sample is a random crop from the same image."
},
{
"start": 1475,
"end": 1477,
"text": " This would be up here somewhere."
},
{
"start": 1477,
"end": 1479,
"text": " The anchor is cropped from the middle."
},
{
"start": 1479,
"end": 1485,
"text": " Then the negative would be a random crop from a different image or a different stack of images."
},
{
"start": 1485,
"end": 1488,
"text": " They have a pseudocode here."
},
{
"start": 1488,
"end": 1494,
"text": " It's pretty simple. We'll just go through it quickly."
},
{
"start": 1494,
"end": 1500,
"text": " You start off with FQ and FK. These are the encoders for the query and keys."
},
{
"start": 1500,
"end": 1503,
"text": " You start them off the same."
},
{
"start": 1503,
"end": 1505,
"text": " Then you go through your data loader."
},
{
"start": 1505,
"end": 1511,
"text": " You do this random augmentation of your query and your keys."
},
{
"start": 1511,
"end": 1517,
"text": " I'm not even sure if the random augmentation needs to be a center crop for the anchor,"
},
{
"start": 1517,
"end": 1527,
"text": " but it's just two different crops from the same image."
},
{
"start": 1527,
"end": 1532,
"text": " I guess it's a thing you could choose. I don't know what exactly is the best thing."
},
{
"start": 1532,
"end": 1541,
"text": " Then I forward the query through the FQ and I forward the keys through the FK."
},
{
"start": 1541,
"end": 1547,
"text": " It's important to detach this so I don't want to train the FK."
},
{
"start": 1547,
"end": 1550,
"text": " I only want to train the FQ."
},
{
"start": 1550,
"end": 1557,
"text": " Then I do the bilinear product here with the W."
},
{
"start": 1557,
"end": 1559,
"text": " These are the bilinear product."
},
{
"start": 1559,
"end": 1569,
"text": " Then I put all of this into a cross entropy loss."
},
{
"start": 1569,
"end": 1578,
"text": " In the end I update my FQ and my W and I do this exponentially moving average for my key encoder."
},
{
"start": 1578,
"end": 1581,
"text": " They test on two different things."
},
{
"start": 1581,
"end": 1586,
"text": " They test on the DeepMind control tasks."
},
{
"start": 1586,
"end": 1591,
"text": " They always test 100k time steps."
},
{
"start": 1591,
"end": 1594,
"text": " Their big point is data efficiency."
},
{
"start": 1594,
"end": 1600,
"text": " They claim they can learn useful representations with not much data."
},
{
"start": 1600,
"end": 1606,
"text": " The task is here, how good are you at 100k time steps?"
},
{
"start": 1606,
"end": 1608,
"text": " You don't optimize until the end."
},
{
"start": 1608,
"end": 1614,
"text": " You get 100k time steps and then the question is how good are you?"
},
{
"start": 1614,
"end": 1623,
"text": " The curl here outperforms all of the baselines handily in the DeepMind control tasks."
},
{
"start": 1623,
"end": 1631,
"text": " It also outperforms a lot of the baselines in the Atari tasks."
},
{
"start": 1631,
"end": 1638,
"text": " If you look at the results, it doesn't outperform everything."
},
{
"start": 1638,
"end": 1645,
"text": " For example, the red is curl and the dashed grey is stateSAC."
},
{
"start": 1645,
"end": 1651,
"text": " StateSAC has access to the state."
},
{
"start": 1651,
"end": 1654,
"text": " Curl only works from pixels."
},
{
"start": 1654,
"end": 1661,
"text": " If I had to craft a representation, stateSAC has access to that."
},
{
"start": 1661,
"end": 1673,
"text": " You see that in many of the tasks, the curl comes close or performs equally well to stateSAC."
},
{
"start": 1673,
"end": 1676,
"text": " That's pretty impressive."
},
{
"start": 1676,
"end": 1684,
"text": " Especially if you look at pixelSAC, which is the same algorithm but does not have access to the state,"
},
{
"start": 1684,
"end": 1690,
"text": " it often fails terribly."
},
{
"start": 1690,
"end": 1693,
"text": " That is pretty interesting to see."
},
{
"start": 1693,
"end": 1705,
"text": " Even to me, it's pretty interesting to see that this kind of self-labeled algorithm comes up with such useful representations."
},
{
"start": 1705,
"end": 1713,
"text": " I hope I have explained this satisfactorily."
},
{
"start": 1713,
"end": 1720,
"text": " Check out the paper for more experiments, ablation studies and general reading."
},
{
"start": 1720,
"end": 1735,
"text": " I wish you a good day."
}
] |
gbG1X8Xq-T8 | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Enhanced POET: Open-Ended RL through Unbounded Invention of Learning Challenges and their Solutions | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"unbounded",
"open-ended",
"evolution",
"evolutionary",
"uber",
"uber ai",
"distributed",
"reinforcement learning",
"rl",
"generative"
] | The enhanced POET makes some substantial and well-crafted improvements over the original POET algorithm and excels at open-ended learning like no system before.
https://arxiv.org/abs/2003.08536
https://youtu.be/RX0sKDRq400
Abstract:
Creating open-ended algorithms, which generate their own never-ending stream of novel and appropriately challenging learning opportunities, could help to automate and accelerate progress in machine learning. A recent step in this direction is the Paired Open-Ended Trailblazer (POET), an algorithm that generates and solves its own challenges, and allows solutions to goal-switch between challenges to avoid local optima. However, the original POET was unable to demonstrate its full creative potential because of limitations of the algorithm itself and because of external issues including a limited problem space and lack of a universal progress measure. Importantly, both limitations pose impediments not only for POET, but for the pursuit of open-endedness in general. Here we introduce and empirically validate two new innovations to the original algorithm, as well as two external innovations designed to help elucidate its full potential. Together, these four advances enable the most open-ended algorithmic demonstration to date. The algorithmic innovations are (1) a domain-general measure of how meaningfully novel new challenges are, enabling the system to potentially create and solve interesting challenges endlessly, and (2) an efficient heuristic for determining when agents should goal-switch from one problem to another (helping open-ended search better scale). Outside the algorithm itself, to enable a more definitive demonstration of open-endedness, we introduce (3) a novel, more flexible way to encode environmental challenges, and (4) a generic measure of the extent to which a system continues to exhibit open-ended innovation. Enhanced POET produces a diverse range of sophisticated behaviors that solve a wide range of environmental challenges, many of which cannot be solved through other means.
Authors: Rui Wang, Joel Lehman, Aditya Rawal, Jiale Zhi, Yulun Li, Jeff Clune, Kenneth O. Stanley
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | There, before we jump into today's paper, I just want to give a shout out to Machine Learning Street Talk, where every week we talk about current or big trends or topics in machine learning. The first discussion that we launched is actually on today's paper, The Enhanced Poet. So if you like the following video, you might want to jump over to Machine Learning Street Talk and check out our discussion about it. It's very interesting. Alright, have fun. Hi there. What you're seeing here are many different environments from a single run of a system that's called The Enhanced Poet. Last time we've taken a look at a system called Poet, and The Enhanced Poet is kind of an improvement over the original Poet, fixing some of its shortcomings. And you see here that the agent is able to solve this very, very diverse set of environments. And the notable thing is, this is from a single run of this algorithm. So one run will produce all these different environments and will produce agents that are able to solve all the different environments at the same time in parallel. So it's a population-based method. If you haven't seen the video I did on Poet, I suggest you go and see that now. This is simply an enhancement to it, and I expect people to know kind of what I'm talking about. Alright, it's going to be a short video, but I think it is a good addendum to Poet. So it's The Enhanced Poet, Open-ended Reinforcement Learning Through Unbounded Invention of Learning Challenges and Their Solutions by Rui Wang, Joel Lehman, Aditya Rawal, Jiale Zhi, Yulun Li, Jeff Clune, and Kenneth O. Stanley. So we'll jump right in. They make a number of improvements to the original Poet, and I simply want to discuss the most important ones. So you know, they have a nice graphic down here of what happens in Poet. Poet builds up this tree of environments, and to each environment it has an agent that it trains to solve that environment at the same time. So at the same time it will kind of start out here. It will generate offspring. It will continuously generate offspring, and then it will also continuously train agents in each environment that it produced in order to solve that environment. And it keeps doing that while producing more and more offspring. And then once in a while it does what is called a transfer. So that means that, for example, you see here the offspring produced here from this environment. You kind of see that the lineage here kind of focuses on squiggly environments, right? You see that there's a bit of a squiggle here and a bit of a squiggle here. And then the offspring all of a sudden is a bit more smooth, but has this little step here. And then this offspring of this environment has this large step here. Now the agents that come from here have kind of been optimized to solve the squiggliness problem. But here, over here, this lineage has specified or specialized more and more in kind of like these kind of large drops or steep hills. So the agent that was trained over here was found to be very effective in this environment and therefore can be transferred. So this kind of population branching out into the different trees and then transferring solutions between the parts of the trees, that's what makes Poet a very, very powerful mechanism to solve these kinds of tasks. All right, so how does this improve? Now the first thing that Poet does is it generates these environments and it always wants to generate new environments. So it always generates offspring to each environment. 
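As a rough, heavily simplified sketch of the loop described above, here is schematic Python pseudocode; every function name (mutate, train_step, minimal_criterion, most_novel, evaluate), the iteration counts and the data layout are placeholders I made up, not the authors' implementation.

```python
def poet_style_loop(init_env, init_agent, iterations, mutate, train_step,
                    minimal_criterion, most_novel, evaluate):
    """Schematic POET-style loop: grow environments, train paired agents, transfer."""
    population = [(init_env, init_agent)]  # list of (environment, paired agent)
    for it in range(iterations):
        # 1) Every so often, propose offspring environments and keep only those that
        #    pass the minimal criterion (not too easy, not too hard) and are most novel.
        if it % 20 == 0:
            candidates = [(mutate(env), agent) for env, agent in population]
            candidates = [(e, a) for e, a in candidates if minimal_criterion(e, population)]
            for child_env, parent_agent in most_novel(candidates, population):
                population.append((child_env, parent_agent))  # child starts from its parent's agent
        # 2) One optimisation step (e.g. an evolution-strategies step) per pair, in parallel.
        population = [(env, train_step(env, agent)) for env, agent in population]
        # 3) Transfers: if an agent paired with another environment scores better here, adopt it.
        for i, (env, agent) in enumerate(population):
            best = max((a for _, a in population), key=lambda a: evaluate(env, a))
            if evaluate(env, best) > evaluate(env, agent):
                population[i] = (env, best)
    return population
```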
So let's say we are here, it will generate offspring to each environment here, each that we have. Let's see, we have only seen so far. And then it only picks the most novel ones, the ones that are most novel, which is this, probably this. Then there are other criteria, namely that it can be solved by some agents, but it cannot be solved by others. It's not too difficult, but also not too hard. But one of the aspects is it must be novel, right? So we're not seeing any here, which means that those weren't novel enough. How does it measure novel? In the original implementation of Poet, you had this environment generator, which was like a level generator, which made these gaps here and the stumps here. And you could specify, I believe, five numbers. So there was a five-point scale in which you could specify how high the stumps were. You get this kind of pentagon here, how high the stumps were and how deep the gaps were and how rough the terrain was. And the level generator would generate this level. And so basically your distance metric between environments was a vector of size five, right? This is environment one. And you had environment two, which if it's more, it has higher stumps, right? Than this particular number here, maybe would be higher than this number here. So it was kind of limited to taking the Euclidean distance between two environment encodings in order to measure the distance between environments. This is very, very domain specific. And the authors here argue what we should rather do is have a general environment agnostic distance metric, right? So here is what they propose. They propose the following. Why don't we, if we have a new environment, right? Let's say we have a new environment. We measure all of the agents, the current agents and the ones we've already seen, right? We measure all the agents in our database on this new environment. That's this. And they come up with scores, right? Each of them gets a score. And then we, you know, clip and bound the score. So the max here is 300 and the minimum is 50. But in any case, we then rank them, right? So we evaluate them and then we rank them from best to worst. And then we normalize, which simply means that the best one gets a score of 0.5 and the worst one gets a score of negative 0.5. And now this vector here, this is now used to compare environments. So if we have another environment, right? Right here, we have E2 and that gets a different ordering, right? So maybe agent one is now the best agent two is really bad and so on, right? That gets a different ordering. Then the resulting vector here will be very, very different from from this vector right here. And this is very agnostic. So no matter which environment it is, if the ordering of agents in it, the score they get, the order of it is the same, the environments aren't really different from each other, the authors argue. But if the scores are very differently ranked, right? So imagine the environment is harder but essentially the same, then the scores will be lower, but still the agents would be ranked the same. So you can argue, well, that's just kind of the same environment, except a step like this now has a super steep step, right? It's not very different. But if instead of that, you get an environment that is like this, like you say, wow, that's qualitatively different. 
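To make this rank-based comparison concrete, here is a small NumPy sketch; the clipping bounds follow the 50 / 300 numbers mentioned above, and the example scores are invented. Each environment is characterised by how it orders the current archive of agents, and two environments are compared by the distance between those normalised rank vectors, independently of how the environments themselves are encoded.

```python
import numpy as np

def environment_characterization(scores, low=50.0, high=300.0):
    """Rank-normalised characterisation of an environment.

    scores: one score per agent in the archive, all evaluated on this environment.
    Returns a vector in [-0.5, 0.5]: best-performing agent -> 0.5, worst -> -0.5.
    """
    s = np.clip(np.asarray(scores, dtype=float), low, high)
    order = s.argsort()                  # agent indices from worst to best
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(s))     # rank 0 = worst, n - 1 = best
    return ranks / (len(s) - 1) - 0.5

def environment_distance(scores_a, scores_b):
    a = environment_characterization(scores_a)
    b = environment_characterization(scores_b)
    return np.linalg.norm(a - b)

# Same ordering of agents -> distance 0, even though the raw scores differ a lot.
print(environment_distance([300, 120, 80], [200, 150, 60]))
# Reversed ordering -> large distance: a qualitatively different environment.
print(environment_distance([300, 120, 80], [60, 150, 290]))
```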
And you would expect from this one to this one that the agents would be ranked the same, the agents' performance would roughly stay the same, but you would expect from the middle one to this one that an entirely different set of agents might perform well right in this one. So that's how novelty is measured and I think it's a pretty cool way. I don't have coronavirus, by the way, maybe, who knows? No, I just have a dry throat. All right, so this is the first enhancement they make is that they now measure novelty in this domain agnostic way. Pretty cool so far. And what this allows them to do, this allows them to actually not rely on this level generator with the five parameters in order to generate these levels. But these levels can now be produced however they want with different generators and that's exactly what they do. They now employ neural networks. Well, it's kind of a prototypical, it's called a CPPN, a compositional pattern-producing network, that generates these things. You might have seen in the examples the Enhanced Poet doesn't have these gaps and stumps anymore. It simply has these landscapes that are super diverse, but they're still just landscapes. And what it does is it evolves neural networks at the same time as it evolves the population. It evolves these, so the architecture of these networks isn't fixed. It's actually evolving along with the agent to make the challenges harder and harder. So you see there are like cosines and sines in here and you can add them and subtract and so on. And that will give you a mapping from x, which is the x coordinate here, to y, which is the y coordinate. And that will give you kind of a continuous landscape depending on the architecture here and on the internal parameters of course. I guess there would also be a node, some here like times a lambda factor and then the lambda would also be a thing that is evolved. So pretty cool. Of course the internals of this now aren't just described by a fixed vector anymore, but you don't need that anymore because we have a method to compare environments even if they come from completely different architectures of generators. So it's pretty cool that the agnostic comparison of environments allows you to now have a much more general level generator and of course now produce much more diverse environments. And that's exactly what they see. Of course you see here the environments get super crazy. So they also propose kind of a novel metric to measure novelty, sorry to measure progress. So the question is how do we measure progress in these algorithms, in these open-ended algorithms? And what they propose is this ANNECS score, which is, I have to go and look it up, the ANNECS score I think is the number of new environments that are solved. Yes, so exactly. The question is whether a system continues to generate interesting new things. And the way they measure it is by the accumulated number of novel environments created and solved. So the question here is accumulated, that means over the entire run they count up how many environments they've seen that are novel, and we've already had the definition of novel. And in this case it basically means that it must pass the minimal criterion. It's neither too hard nor too easy. We've already seen this in how the offspring of environments is generated. There's a minimal criterion and it must be eventually solved. So that means the novel environments created and solved. So how many new environments are created and solved? And then at a later point solved. 
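Before looking at the results, here is a toy sketch of the CPPN-style idea mentioned above: a small composition of functions (sines, cosines, a slope) maps the x coordinate to a terrain height y. The particular functions, weights and their combination below are arbitrary illustrations; in the actual system both the architecture and the weights of these networks are what evolution searches over.

```python
import numpy as np

def cppn_terrain(x, genome):
    """Map x coordinates to terrain heights with a tiny function-composition 'network'.

    genome: list of (function, weight) pairs whose weighted outputs are summed --
    a crude stand-in for an evolved CPPN architecture.
    """
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    for fn, w in genome:
        y += w * fn(x)
    return y

# One made-up "genome": a gentle slope plus two oscillations of different frequency.
genome = [(np.sin, 0.6), (lambda x: np.cos(3.0 * x), 0.3), (lambda x: 0.1 * x, 1.0)]
xs = np.linspace(0.0, 10.0, 200)
heights = cppn_terrain(xs, genome)
print(heights.min(), heights.max())
```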
You can see the difference to the original poet in this graph. So the original poet eventually runs out of new environments because its generator is just not powerful enough. It can only modify these five variables and eventually the environments aren't substantially novel from the old environments. Whereas the enhanced poet you can see even after this run, and I'm sure they have large infrastructure to do these experiments, it just continues to innovate new more elaborate environments continuously. So this I think are the main things. They also do some improvement to the transfers and so on. I don't want to go into that. I just wanted to show these improvements so that you have the complete picture of how such an algorithm runs. My criticism to this is that if you just look at their thing is that with the leaving out of the gaps and the stumps and so on, in a weird way, of course the levels are diverse, but they have become even more similar it seems. Like you're really relying on your ability to kind of continuously create these levels. Kind of like a GAN for levels, right? And you're relying on your ability to smoothly increase the difficulty of the levels, right? To actually have a diversity in your level generator, but also a kind of a smoothness with regard to the difficulty in order to build this whole curriculum. And I think even though the environments look more diverse, it might be going into a direction where you kind of engineer yourself into a corner where you are now even more and more relying on these evolving and parameterizable generators. Nonetheless, the ideas I think are pretty cool and that's all I have to say about it. Bye bye! | [
{
"start": 0,
"end": 5.44,
"text": " There, before we jump into today's paper, I just want to give a shout out to Machine Learning Street"
},
{
"start": 5.44,
"end": 11.68,
"text": " Talk, where every week we talk about current or big trends or topics in machine learning."
},
{
"start": 12.4,
"end": 19.36,
"text": " The first discussion that we launched is actually on today's paper, The Enhanced Poet. So if you like"
},
{
"start": 19.36,
"end": 24.8,
"text": " the following video, you might want to jump over to Machine Learning Street Talk and check out our"
},
{
"start": 24.8,
"end": 32.8,
"text": " discussion about it. It's very interesting. Alright, have fun. Hi there. What you're seeing here are"
},
{
"start": 32.8,
"end": 38.4,
"text": " many different environments from a single run of a system that's called The Enhanced Poet."
},
{
"start": 39.760000000000005,
"end": 46.56,
"text": " Last time we've taken a look at a system called Poet, and The Enhanced Poet is kind of an improvement"
},
{
"start": 47.28,
"end": 53.84,
"text": " over the original Poet, fixing some of its shortcomings. And you see here that the"
},
{
"start": 53.84,
"end": 66,
"text": " agent is able to solve this very, very diverse set of environments. And the notable thing is,"
},
{
"start": 66,
"end": 71.84,
"text": " this is from a single run of this algorithm. So one run will produce all these different"
},
{
"start": 71.84,
"end": 77.04,
"text": " environments and will produce agents that are able to solve all the different environments"
},
{
"start": 77.92,
"end": 82.16,
"text": " at the same time in parallel. So it's a population-based method. If you haven't"
},
{
"start": 82.16,
"end": 89.36,
"text": " seen the video I did on Poet, I suggest you go and see that now. This is simply an enhancement to it,"
},
{
"start": 89.36,
"end": 95.03999999999999,
"text": " and I expect people to know kind of what I'm talking about. Alright, it's going to be a short"
},
{
"start": 95.03999999999999,
"end": 101.28,
"text": " video, but I think it is a good addendum to Poet. So it's The Enhanced Poet, Open-ended"
},
{
"start": 101.28,
"end": 107.67999999999999,
"text": " Reinforcement Learning Through Unbounded Invention of Learning Challenges and Their Solutions"
},
{
"start": 107.68,
"end": 115.76,
"text": " by Rui Wangchou, Leymann Adhitar Wahl, Jial Qi, Julun Li, Jeff Klun, and Kenneth O. Stanley."
},
{
"start": 117.76,
"end": 124.64000000000001,
"text": " So we'll jump right in. They make a number of improvements to the original Poet, and I simply"
},
{
"start": 124.64000000000001,
"end": 132.4,
"text": " want to discuss the most important ones. So you know, they have a nice graphic down here of what"
},
{
"start": 132.4,
"end": 139.92000000000002,
"text": " happens in Poet. Poet builds up this tree of environments, and to each environment it has an"
},
{
"start": 139.92000000000002,
"end": 146.16,
"text": " agent that it trains to solve that environment at the same time. So at the same time it will kind of"
},
{
"start": 146.16,
"end": 152.96,
"text": " start out here. It will generate offspring. It will continuously generate offspring, and then it will"
},
{
"start": 152.96,
"end": 160.24,
"text": " also continuously train agents in each environment that it produced in order to solve that environment."
},
{
"start": 160.24,
"end": 165.20000000000002,
"text": " And it keeps doing that while producing more and more offspring. And then once in a while"
},
{
"start": 167.28,
"end": 174.4,
"text": " it does what is called a transfer. So that means that, for example, you see here the offspring"
},
{
"start": 174.4,
"end": 183.36,
"text": " produced here from this environment. You kind of see that the lineage here kind of focuses on"
},
{
"start": 183.36,
"end": 188.4,
"text": " squiggly environments, right? You see that there's a bit of a squiggle here and a bit of a squiggle"
},
{
"start": 188.4,
"end": 193.68,
"text": " here. And then the offspring all of a sudden is a bit more smooth, but has this little step here."
},
{
"start": 194.24,
"end": 199.92000000000002,
"text": " And then this offspring of this environment has this large step here. Now the agents that come"
},
{
"start": 199.92000000000002,
"end": 208.8,
"text": " from here have kind of been optimized to solve the squiggliness problem. But here, over here,"
},
{
"start": 208.8,
"end": 215.28,
"text": " this lineage has specified or specialized more and more in kind of like these kind of large"
},
{
"start": 215.28,
"end": 223.92000000000002,
"text": " drops or steep hills. So the agent that was trained over here was found to be very effective"
},
{
"start": 223.92000000000002,
"end": 232,
"text": " in this environment and therefore can be transferred. So this kind of population branching out into the"
},
{
"start": 232,
"end": 239.44,
"text": " different trees and then transferring solutions between the parts of the trees, that's what makes"
},
{
"start": 239.44,
"end": 250.96,
"text": " Poet very very powerful mechanism to solve these kind of tasks. All right, so how does this improve?"
},
{
"start": 250.96,
"end": 258.48,
"text": " Now the first thing that Poet does is it generates these environments and it always wants to generate"
},
{
"start": 258.48,
"end": 265.44,
"text": " new environments. So it always generates offspring to each environment. So let's say we are here,"
},
{
"start": 265.44,
"end": 271.84,
"text": " it will generate offspring to each environment here, each that we have. Let's see, we have only seen"
},
{
"start": 271.84,
"end": 279.36,
"text": " so far. And then it only picks the most novel ones, the ones that are most novel, which is this,"
},
{
"start": 279.36,
"end": 285.12,
"text": " probably this. Then there are other criteria, namely that it can be solved by some agents,"
},
{
"start": 285.12,
"end": 290.96,
"text": " but it cannot be solved by others. It's not too difficult, but also not too hard. But one of the"
},
{
"start": 290.96,
"end": 297.35999999999996,
"text": " aspects is it must be novel, right? So we're not seeing any here, which means that those weren't"
},
{
"start": 297.35999999999996,
"end": 304.4,
"text": " novel enough. How does it measure novel? In the original implementation of Poet, you had this"
},
{
"start": 304.4,
"end": 311.76,
"text": " environment generator, which was like a level generator, which made these gaps here and the"
},
{
"start": 311.76,
"end": 319.76,
"text": " stumps here. And you could specify, I believe, five numbers. So there was a five-point scale in"
},
{
"start": 319.76,
"end": 326.08,
"text": " which you could specify how high the stumps were. You get this kind of pentagon here, how high the"
},
{
"start": 326.08,
"end": 332.64,
"text": " stumps were and how deep the gaps were and how rough the terrain was. And the level generator"
},
{
"start": 332.64,
"end": 340.4,
"text": " would generate this level. And so basically your distance metric between environments was"
},
{
"start": 341.12,
"end": 348.08,
"text": " a vector of size five, right? This is environment one. And you had environment two, which if it's"
},
{
"start": 348.08,
"end": 353.84,
"text": " more, it has higher stumps, right? Than this particular number here, maybe would be higher"
},
{
"start": 353.84,
"end": 361.68,
"text": " than this number here. So it was kind of limited to taking the Euclidean distance between two"
},
{
"start": 361.68,
"end": 370.47999999999996,
"text": " environment encodings in order to measure the distance between environments. This is very,"
},
{
"start": 370.48,
"end": 379.44,
"text": " very domain specific. And the authors here argue what we should rather do is have a general"
},
{
"start": 379.44,
"end": 387.68,
"text": " environment agnostic distance metric, right? So here is what they propose. They propose the"
},
{
"start": 387.68,
"end": 394.8,
"text": " following. Why don't we, if we have a new environment, right? Let's say we have a new"
},
{
"start": 394.8,
"end": 401.6,
"text": " environment. We measure all of the agents, the current agents and the ones we've already seen,"
},
{
"start": 401.6,
"end": 407.92,
"text": " right? We measure all the agents in our database on this new environment. That's this. And they"
},
{
"start": 407.92,
"end": 414.08000000000004,
"text": " come up with scores, right? Each of them gets a score. And then we, you know, clip and bound the"
},
{
"start": 414.08000000000004,
"end": 422,
"text": " score. So the max here is 300 and the minimum is 50. But in any case, we then rank them, right? So"
},
{
"start": 422,
"end": 430.16,
"text": " we evaluate them and then we rank them from best to worst. And then we normalize, which simply"
},
{
"start": 430.16,
"end": 441.04,
"text": " means that the best one gets a score of 0.5 and the worst one gets a score of negative 0.5. And"
},
{
"start": 441.04,
"end": 447.6,
"text": " now this vector here, this is now used to compare environments. So if we have another environment,"
},
{
"start": 447.6,
"end": 458.32000000000005,
"text": " right? Right here, we have E2 and that gets a different ordering, right? So maybe agent one is"
},
{
"start": 458.32000000000005,
"end": 464.32000000000005,
"text": " now the best agent two is really bad and so on, right? That gets a different ordering. Then the"
},
{
"start": 464.32000000000005,
"end": 473.92,
"text": " resulting vector here will be very, very different from from this vector right here. And this is very"
},
{
"start": 473.92,
"end": 482.08000000000004,
"text": " agnostic. So no matter which environment it is, if the ordering of agents in it, the score they get,"
},
{
"start": 482.08000000000004,
"end": 487.92,
"text": " the order of it is the same, the environments aren't really different from each other, the"
},
{
"start": 487.92,
"end": 495.20000000000005,
"text": " authors argue. But if the scores are very differently ranked, right? So imagine the"
},
{
"start": 495.20000000000005,
"end": 502,
"text": " environment is harder but essentially the same, then the scores will be lower, but still the"
},
{
"start": 502,
"end": 507.52,
"text": " agents would be ranked the same. So you can argue, well, that's just kind of the same environment,"
},
{
"start": 507.52,
"end": 515.76,
"text": " except a step like this now has a super steep step, right? It's not very different. But if"
},
{
"start": 516.72,
"end": 524.24,
"text": " instead of that, you get an environment that is like this, like you say, wow, that's qualitatively"
},
{
"start": 524.24,
"end": 531.28,
"text": " different. And you would expect from this one to this one that the agents would be ranked"
},
{
"start": 531.28,
"end": 536.9599999999999,
"text": " the same, the agents performance would roughly stay the same, but you would expect from the middle"
},
{
"start": 536.9599999999999,
"end": 543.36,
"text": " one to this one that an entirely different set of agents might perform well right in this one."
},
{
"start": 543.36,
"end": 551.76,
"text": " So that's how novelty is measured and I think it's a pretty cool way. I don't have coronavirus,"
},
{
"start": 551.76,
"end": 564.3199999999999,
"text": " by the way, maybe, who knows? No, I just have a dry throat. All right, so this is the first"
},
{
"start": 564.3199999999999,
"end": 570.48,
"text": " enhancement they make is that they now measure novelty in this domain agnostic way. Pretty cool"
},
{
"start": 570.48,
"end": 577.76,
"text": " so far. And what this allows them to do, this allows them to actually not rely on this level"
},
{
"start": 577.76,
"end": 586.24,
"text": " generator with the five parameters in order to generate these levels. But these levels can now"
},
{
"start": 586.24,
"end": 591.52,
"text": " be produced however they want with different generators and that's exactly what they do."
},
{
"start": 591.52,
"end": 602.4,
"text": " They now employ neural networks. Well, it's kind of a prototypical, it's called a CP&N that generates"
},
{
"start": 602.4,
"end": 608.56,
"text": " these things. You might have seen in the examples the enhanced poet doesn't have these gaps and"
},
{
"start": 608.56,
"end": 614.8,
"text": " stumps anymore. It simply has these landscapes that are super diverse, but they're still just"
},
{
"start": 614.8,
"end": 623.04,
"text": " their landscapes. And what it does is it evolves neural networks at the same time as it evolves"
},
{
"start": 623.04,
"end": 629.36,
"text": " the population. It evolves these, so the architecture of these networks isn't fixed. It's actually"
},
{
"start": 629.36,
"end": 636.16,
"text": " evolving along with the agent to make the challenges harder and harder. So you see there"
},
{
"start": 636.16,
"end": 641.6800000000001,
"text": " are like cosines and sines in here and you can add them and subtract and so on. And that will give"
},
{
"start": 641.6800000000001,
"end": 649.92,
"text": " you a mapping from x, which is the x coordinate here, to y, which is the y coordinate. And that"
},
{
"start": 649.92,
"end": 657.36,
"text": " will give you kind of a continuous landscape depending on the architecture here and on the"
},
{
"start": 657.36,
"end": 663.92,
"text": " internal parameters of course. I guess there would also be a node, some here like times a lambda"
},
{
"start": 663.92,
"end": 671.92,
"text": " factor and then the lambda would also be a thing that is evolved. So pretty cool. Of course the"
},
{
"start": 671.92,
"end": 677.2,
"text": " internals of this now aren't just described by a fixed vector anymore, but you don't need that"
},
{
"start": 677.2,
"end": 682.32,
"text": " anymore because we have a method to compare environments even if they come from completely"
},
{
"start": 682.32,
"end": 691.12,
"text": " different architectures of generators. So it's pretty cool that the agnostic"
},
{
"start": 691.12,
"end": 698.5600000000001,
"text": " comparison of environments allows you to now have a much more general level generator and of course"
},
{
"start": 698.5600000000001,
"end": 704.8000000000001,
"text": " now produce much more diverse environments. And that's exactly what they see. Of course you see"
},
{
"start": 704.8,
"end": 715.52,
"text": " here the environments get super crazy. So they also propose kind of a novel metric to measure"
},
{
"start": 715.52,
"end": 722.56,
"text": " novelty, sorry to measure progress. So the question is how do we measure progress in these"
},
{
"start": 722.56,
"end": 729.76,
"text": " algorithms, in these open-ended algorithms? And what they propose is this ANNX score, which is,"
},
{
"start": 729.76,
"end": 739.52,
"text": " I have to go and look it up, the ANNX score I think is the number of new environments that are solved."
},
{
"start": 746.08,
"end": 754.56,
"text": " Yes, so exactly. The question is whether a system continues to generate interesting new things."
},
{
"start": 754.56,
"end": 762.88,
"text": " And the way they measure it is by the accumulated number of novel environments created and solved."
},
{
"start": 764,
"end": 771.4399999999999,
"text": " So the question here is accumulated, that means over the entire run they count up how many"
},
{
"start": 771.4399999999999,
"end": 778.88,
"text": " environments that they've seen that are novel, and we've already had the definition of novel."
},
{
"start": 778.88,
"end": 787.12,
"text": " And in this case it basically means that it must pass the minimal criterion. It's neither too hard"
},
{
"start": 787.12,
"end": 792.48,
"text": " nor too easy. We've already seen this in how the offspring of environments is generated."
},
{
"start": 792.48,
"end": 801.84,
"text": " There's a minimal criterion and it must be eventually solved. So that means the novel"
},
{
"start": 801.84,
"end": 808.8,
"text": " environments created and solved. So how many new environments are created and solved?"
},
{
"start": 808.8,
"end": 815.92,
"text": " And then at a later point solved. You can see the difference to the original poet in this graph."
},
{
"start": 817.04,
"end": 825.52,
"text": " So the original poet eventually runs out of new environments because its generator is just not"
},
{
"start": 825.52,
"end": 831.8399999999999,
"text": " powerful enough. It can only modify these five variables and eventually the environments aren't"
},
{
"start": 831.8399999999999,
"end": 837.92,
"text": " substantially novel from the old environments. Whereas the enhanced poet you can see even after"
},
{
"start": 837.92,
"end": 842.9599999999999,
"text": " this run, and I'm sure they have large infrastructure to do these experiments,"
},
{
"start": 842.9599999999999,
"end": 850.9599999999999,
"text": " it just continues to innovate new more elaborate environments continuously."
},
{
"start": 852,
"end": 858.24,
"text": " So this I think are the main things. They also do some improvement to the transfers and so on."
},
{
"start": 858.24,
"end": 862.0799999999999,
"text": " I don't want to go into that. I just wanted to show these improvements so that you have"
},
{
"start": 862.08,
"end": 870.32,
"text": " the complete picture of how such an algorithm runs. My criticism to this is that if you just"
},
{
"start": 870.32,
"end": 877.76,
"text": " look at their thing is that with the leaving out of the gaps and the stumps and so on,"
},
{
"start": 878.8000000000001,
"end": 884.72,
"text": " in a weird way, of course the levels are diverse, but they have become even more similar it seems."
},
{
"start": 884.72,
"end": 891.5200000000001,
"text": " Like you're really relying on your ability to kind of continuously create these levels. Kind of like"
},
{
"start": 891.52,
"end": 902.64,
"text": " a GAN for levels, right? And you're relying on your ability to smoothly increase the difficulty"
},
{
"start": 902.64,
"end": 909.36,
"text": " of the levels, right? To actually have a diversity in your level generator, but also a kind of a"
},
{
"start": 909.36,
"end": 916.8,
"text": " smoothness with regard to the difficulty in order to build this whole curriculum. And I think"
},
{
"start": 916.8,
"end": 922.16,
"text": " even though the environments look more diverse, it might be going into a direction where you kind of"
},
{
"start": 922.16,
"end": 930,
"text": " engineer yourself into a corner where you are now even more and more relying on these evolving"
},
{
"start": 930,
"end": 936.4799999999999,
"text": " and parameterizable generators. Nonetheless, the ideas I think are pretty cool and that's"
},
{
"start": 936.48,
"end": 947.84,
"text": " all I have to say about it. Bye bye!"
}
] |
klPuEHCKG9M | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Evolving Normalization-Activation Layers | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"cnn",
"resnet",
"residual",
"efficientnet",
"mobilenet",
"cifar10",
"imagenet",
"batch normalization",
"batchnorm",
"relu",
"sigmoid",
"evolution",
"architecture",
"transfer",
"image classification",
"supervised learning",
"population",
"activation",
"normalization",
"google",
"deepmind"
] | Normalization and activation layers have seen a long history of hand-crafted variants with various results. This paper proposes an evolutionary search to determine the ultimate, final and best combined normalization-activation layer... in a very specific setting.
https://arxiv.org/abs/2004.02967
Abstract:
Normalization layers and activation functions are critical components in deep neural networks that frequently co-locate with each other. Instead of designing them separately, we unify them into a single computation graph, and evolve its structure starting from low-level primitives. Our layer search algorithm leads to the discovery of EvoNorms, a set of new normalization-activation layers that go beyond existing design patterns. Several of these layers enjoy the property of being independent from the batch statistics. Our experiments show that EvoNorms not only excel on a variety of image classification models including ResNets, MobileNets and EfficientNets, but also transfer well to Mask R-CNN for instance segmentation and BigGAN for image synthesis, outperforming BatchNorm and GroupNorm based layers by a significant margin in many cases.
Authors: Hanxiao Liu, Andrew Brock, Karen Simonyan, Quoc V. Le
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there! Today we're looking at Evolving Normalization-Activation Layers by Hanxiao Liu, Andrew Brock, Karen Simonyan and Quoc V. Le. These are people from Google Brain and Google DeepMind. The topic of this paper is, as you can see, it's about normalization and activation layers, and we want to evolve them. I think the title says a lot, but let's go down here and see what this is about. We'll look at image neural networks, and current architectures are kind of focused around the same principles. What they'll have is, ever since ResNet, these neural networks will be composed of these kinds of blocks that come one after another. There will be a block up here, and then the signal will propagate, and there will be another block down here. These blocks usually consist of what's called a skip connection. This is the fundamental ingredient of ResNets that made ResNets so effective; it seems to be the introduction of this skip connection. You see all of these have the skip connection here. These are variants on ResNets, and then we see that these are always mixed between convolutional layers and then other things that here are called EvoNorm. In a classic ResNet you would have something like a convolutional layer, then you would have a batch normalization, and then you would have a non-linearity, for example a ReLU, and then you would go on to the next convolutional layer. You see that the paper mainly cares about these two layers here, the batch norm and the ReLU, and it combines them into what is called an EvoNorm. The EvoNorm layers here are supposed to replace the normalization and the activation layers, combine them, and make them better. How does it do that? Through evolutionary search. These three models here are the ResNet, MobileNet and EfficientNet architectures. They're all image classifier architectures. Let's see how it does that. What it does is it evolves these layers from single primitives. If you've seen the batch normalization paper, then you know that batch normalization is just kind of a formula you can write down. The same goes for the other normalization methods people have developed besides batch norm; for example, this is GroupNorm with a ReLU activation function, and you can write these two layers down as this mathematical expression. It features things like: this is the input signal, I think this is the mean across some groups, this is a bias term that you can train, this is the standard deviation across the same groups, and so on. This here is the ReLU term. So you can write this down as a combination of these primitives; you can write it as a graph. This graph here is actually an activation function that this paper has found. It's called EvoNorm-S0, and the mathematical equation is the thing down here. It's not that different, you can see, from previous activations. It also has the input signal, it has this variance or standard deviation across groups, it has a non-linearity here, and the graph here is simply a graph of mathematical operations made out of primitives. This paper takes all of the primitives that they can think of and puts them in this table, and they say: okay, our search space for these layers, so we want to evolve these layers, our search space is a combination of any of these primitives. You can see here you have something like addition, multiplication, negation, so you can subtract things, you can take the log of things, you can take the square root and so on.
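To give a feeling for this search space, here is a rough sketch of such primitives written as plain functions on a feature map. This is my own illustration, not the paper's code; the names are made up and the paper's actual table of primitives is larger.

```python
import numpy as np

def _group_std(x, groups=8, eps=1e-5):
    """Standard deviation over (H, W, channels-in-a-group), per sample."""
    n, h, w, c = x.shape
    xg = x.reshape(n, h, w, groups, c // groups)
    std = np.sqrt(xg.var(axis=(1, 2, 4), keepdims=True) + eps)
    return np.broadcast_to(std, xg.shape).reshape(n, h, w, c)

# Layer-search primitives as functions on a feature map x of shape (N, H, W, C).
PRIMITIVES = {
    "add":        lambda a, b: a + b,
    "mul":        lambda a, b: a * b,
    "neg":        lambda a: -a,
    "max":        np.maximum,                            # max(x, 0) gives a ReLU
    "sigmoid":    lambda a: 1.0 / (1.0 + np.exp(-a)),
    "log":        lambda a: np.log(np.abs(a) + 1e-5),
    "sqrt":       lambda a: np.sqrt(np.abs(a) + 1e-5),
    "batch_mean": lambda a: a.mean(axis=(0, 1, 2), keepdims=True) * np.ones_like(a),
    "group_std":  _group_std,
}

x = np.random.randn(2, 4, 4, 16)
relu_out  = PRIMITIVES["max"](x, 0.0)                        # ReLU
swish_out = PRIMITIVES["mul"](x, PRIMITIVES["sigmoid"](x))   # x * sigmoid(x)
print(relu_out.shape, swish_out.shape, PRIMITIVES["group_std"](x).shape)
```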
Here you have a max which is the ReLU activation function, but if you put 0 as one of them, you have the sigmoid which is another non-linearity. Then you can also do something like I want to compute the batch mean or I want to compute a group standard deviation, pretty much anything that current activation functions that have been handcrafted use are available as primitives across this search. So how does this method search? This is the process of how the method searches, it does this in an evolutionary way and evolutionary protocols it means that you don't develop one layer like you would do if you were to do something like gradient descent. You develop a whole population of layers, so these can be maybe a couple of hundred or a couple of thousands, different layer architectures that you develop at the same time. What you do each time you put them into a tournament, which basically means you want to sample a couple of them, I don't think they do all at the same time, they sample a couple of them, right, and then they train on what they call a proxy task. Their proxy task is CIFAR-10. So CIFAR-10 is a fairly small image classification task and they train on CIFAR-10 because it's pretty fast, right, you can train something on CIFAR-10 in like a couple of minutes or an hour or so and get a pretty good feeling for how good the final accuracy will be. You can get that pretty fast. So this is a fast classification task because they need to do this a lot, right, the population is large and they need to repeat this over time, right. In any case they take a sample, they train it on CIFAR-10 and then the winner, the winning layer is picked from this sample and only the winning layer is allowed to mutate, right. So the winning layer is mutated then and mutation means you kind of change it a bit. Now you don't change it in an informed way, you just change it at random and of course the, and then you put the mutated layers back into the population. Of course the hope is that by repeating this process, right, you repeat and repeat and repeat that the, simply by picking the winning layers over and over and over again is a selective pressure such that through the random mutations but the tournament style evaluation and picking of the winner, that over time the best performing models in your population, right, the best scoring model here will get better and better and better, right. So the assumption is that this isn't like a pure combinatorial optimization or like a pure random function, is that if I take something that works well there are ways that I can perturb it that make it work even better, right. So even if most of the perturbations are worse there are some and the tournament style will always find these ones for me that perform better and then I can modify these again at random and then among these I can again find the ones that perform even better. 
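And to show what a discovered layer can look like once you wire a few of these primitives together, here is a rough numpy sketch in the spirit of the EvoNorm-S0 layer mentioned above: the input times a sigmoid of the input, divided by a group standard deviation, with a trainable scale and shift. This is only a sketch; the exact grouping and epsilon placement should be checked against the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def evonorm_s0_like(x, gamma, beta, v, groups=8, eps=1e-5):
    """Sketch of an EvoNorm-S0-style layer on x of shape (N, H, W, C):
    y = x * sigmoid(v * x) / group_std(x) * gamma + beta,
    where gamma, beta, v are per-channel trainable vectors of shape (C,)."""
    n, h, w, c = x.shape
    xg = x.reshape(n, h, w, groups, c // groups)
    var = xg.var(axis=(1, 2, 4), keepdims=True)
    std = np.broadcast_to(np.sqrt(var + eps), xg.shape).reshape(n, h, w, c)
    return (x * sigmoid(v * x)) / std * gamma + beta

x = np.random.randn(2, 8, 8, 16).astype(np.float32)
c = x.shape[-1]
gamma, beta, v = np.ones(c), np.zeros(c), np.ones(c)
y = evonorm_s0_like(x, gamma, beta, v)
print(y.shape)   # (2, 8, 8, 16)
```

Note that nothing in this layer depends on batch statistics, which is one of the properties the abstract above highlights for several of the discovered layers.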
So that is the method and so the question, there are two questions, how do you mutate a layer, right, and mutation I believe is done in sort of different ways here but if you look at this here, at this expression, so what you have here is the input is this signal here, right, and you always start out I believe with the input with a layer that just emits the number one, with a layer that just emits the number zero or a component and then you have two trainable vectors that you can include and you just start out with these four things and then every time you mutate you add one of these blocks and I believe there's also a method like a randomness of changing the individual things or of actually starting from scratch again, it's pretty important otherwise you just grow bigger and bigger monsters and but the way you mutate is the following, you add a new block, let's say I add one here, and you decide on one of the primitives from the table, right, here I'm going to simply decide on a minus operation, so a subtraction operation and then once you've done that you choose two children, sorry two parents, however you see it, you choose two parents because the minus operation needs two parents, you choose two of the parents at random here, so I'm going to choose this thing here to be a parent and I'm going to choose this thing here to be a parent at random, right, and then this new node will become the new output of the layer, so you see that this was the previous output here, this multiplication node between this and this, now this is no longer the output, now this is obsolete, right, this is no longer part of the final mathematical expression here, so you see all the gray nodes here were actually sort of obsolete nodes but they are still kept because in subsequent steps you can choose them as parents and then they become part of the expression again, you can see here this tanh node, it was just a node that was sort of a dead end in the expression before but now with the new mutation it is again included in the expression because I've randomly selected it as a parent but then this node here and that was reset this node here, they are now obsolete nodes because they are no longer part of the expression, the expression in this case would go from here to here, right, including this node and it would go from here over here, right, so these nodes are now part of the expression, so this is how you mutate and as I said you can also mutate such that you start from scratch and so that's how you mutate, the second part in this thing is how do you exactly determine the winner and what is the tournament, so how do you do that, the tournament exactly is what we've seen before when we looked at the different layers, so we said we train on CIFAR-10 and what we do is we train these three architectures on CIFAR-10, so the ResNet, the MobileNet and the EfficientNet, we train these three architectures on CIFAR-10 with the EVO norm layer instantiated by you know that particular sample from the population and then we look at their accuracies and we do, we determine what is called the Pareto frontier of the population, so I think this is further up, oh right here, okay, so the dots here, the red and the grey dots would be our sample, so all of this would be our samples and their performance, here it's on actually on two models but in practice we have three just to graph it better, so we plot them here and we determine the Pareto frontier, now here A, B and C are part of the Pareto frontier because A outperforms 
everything else on model 1, C outperforms everything else on model 2 and B outperforms C on model 1 but also outperforms A on model 2, so it's what's called the Pareto frontier and we pick one of those as the winner, so they all are kind of one-third winners here, so this is how you do the tournament, you pick the winner like this and then you allow the winner to mutate, the last part that is not drawn in here actually is somewhere here-ish which is called the rejection step, so the rejection step is important because what they want to do is they say, hi we have these mutated layers but some of the mutations are probably going to be just terrible, like destroying everything, not trainable layers, it's just horrible, horrible, such that the layer is useless, they don't want to keep, they don't want to put them back here into the population because that might either deteriorate or severely slow down this progress here, so they want to stop them and only the good ones, only the ones that are somewhat fairly okay get back to the population, right, they don't always have to improve but they have to be at minimally useful, so the rejection step they describe down here in the rejection protocols, they have two criteria for rejecting mutated architectures, first they have a quality criterion, say we discard layers that achieve less than 20% validation accuracy in 100 training steps on any of the three anchor architectures, right, so the reasoning behind this is if you have a hundred training steps and you achieve less than 20% validation accuracy you're not going anywhere, right, you're just because 10% is already random performance, if you are less than 20% after a hundred steps your layer is pretty useless and can be discarded, right, so they say this simple mechanism ensures the compute resources to concentrate on the full training process of a small subset of promising candidates, oh sorry, yeah, so the hundred training steps of course is not enough to train fully but you can see after a hundred training steps whether or not the layer even does something, so you reject those, so this makes pretty much sense, right, the second criterion is what they call stability, right, they say we reject layers that are subject to numerical instability, right, and how do they find numerical instability? 
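Before getting to the stability test, here is a rough sketch, with made-up names, of the Pareto-frontier tournament and of the quality check just described; the 20% threshold and the three anchor architectures follow the description above, and this is my own illustration rather than the authors' code.

```python
import numpy as np

def pareto_frontier(scores):
    """scores: (num_layers, num_architectures) accuracy matrix.
    A layer is on the frontier if no other layer is at least as good on
    every architecture and strictly better on at least one."""
    scores = np.asarray(scores, dtype=float)
    frontier = []
    for i, s in enumerate(scores):
        dominated = any(
            np.all(scores[j] >= s) and np.any(scores[j] > s)
            for j in range(len(scores)) if j != i
        )
        if not dominated:
            frontier.append(i)
    return frontier

def passes_quality_check(short_run_accs, threshold=0.20):
    """Reject a mutated layer if, after ~100 training steps, it is below 20%
    validation accuracy on any anchor architecture (CIFAR-10 has 10 classes,
    so 10% is chance level)."""
    return all(a >= threshold for a in short_run_accs)

# accuracies of candidate layers on (ResNet, MobileNet, EfficientNet)
accs = [[0.93, 0.90, 0.91],   # A: best on model 1
        [0.92, 0.92, 0.92],   # B: trade-off
        [0.90, 0.93, 0.90],   # C: best on model 2
        [0.89, 0.89, 0.89]]   # D: dominated by B
frontier = pareto_frontier(accs)              # -> [0, 1, 2], i.e. A, B, C
winner = int(np.random.choice(frontier))      # pick one frontier layer to mutate
print(frontier, winner)
print(passes_quality_check([0.35, 0.28, 0.31]))   # survives
print(passes_quality_check([0.35, 0.12, 0.31]))   # rejected
```

Only frontier layers get to mutate, and freshly mutated layers that fail the quick quality check never make it back into the population.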
They define this numerical instability as follows. What they do is they take the parameters: theta are the convolutional weights of the model, and G is the computation graph, which is the EvoNorm layer in this case, and there is a loss defined across them, the loss of the neural network on the samples. So these are the convolutional layers and these are the normalization layers, and what we want to do is see how this loss changes when we change the convolutional layers. You have to imagine: here are the convolutional layers, then there are these weird normalization layers, and then again there are the convolutional layers. Now we want to see how the loss changes if we change the weights of the convolution by a little bit; we just change it a little bit and see how the loss changes. This is basically the gradient with respect to the weights, and this entire thing here is how you train the neural network. So you want to see how large this gradient is, and you kind of want to do this in an adversarial way: you want to find the maximum perturbation you can achieve. You say, okay, if I change this a little bit in the worst direction I possibly can, how large is the perturbation going to be? And that's how they define numerical instability. It basically means that if this is very high, then the network might be doing well right where it is, but just a little change will make it terrible. So they say they ascend the loss in this direction for 100 steps, and layers with a worst-case gradient norm greater than 10 to the 8th are rejected. Now, this seems a bit strange: the quality criterion made sense, but the stability criterion, while reasonable, feels oddly specific here. The reason, they say, is that the two tests are complementary with each other: for example, they found a layer like this that is able to achieve reasonable accuracies on CIFAR-10 across all the anchor architectures, so it passes the quality criterion above, but its gradients quickly explode on ImageNet, possibly due to the absence of normalization operations. And then you see: aha, okay, so what probably happened is the following. They did their experiment without this, just with the quality criterion, which I guess makes sense; they trained on CIFAR-10, that's how they do their evolutionary search, then they took their best performing things, among them this one, and they went to ImageNet and said, let's test these now on ImageNet classification, like, we found these new architectures, let's see. And then they got exploding gradients. And then they went back into their original problem formulation: okay, what can we build into the evolution such that this won't happen? And here you already see kind of the problem with this. What you would like to have is an algorithm that is general, such that it does not depend on the architectures and so on that are used. But you see already here that the authors don't direct the search, the search is evolution, but they guide it: the evolution is very much guided by what these rejection protocols are, and you see here the authors tailoring their rejection protocols to the specific data sets and architectures that they use and to the specific problems they experienced when trying to apply their method. And that, I think, weakens a bit the application of this method, because
it seems that this particular form of protocols, of this particular form of rejection protocols is very much tailored to this, let's do these three architectures on CIFAR-10 and then go to ImageNet and that tells me if I want to do this in a very different domain that I would have to, couldn't, it is not very clear that I could just to plop whatever they found works in and it would just work just as outperformingly of the others as in their experiment, it tells me that there is pretty like a somewhat large dependence on the specifics here. Yeah so but that being said these are the rejection criteria so they reject each step here, the worst ones and they go back into the population and then that process repeats and repeats and repeats and then at the end you hopefully end up with some very good normalization layers. Now I have to see here if you compare now these these found normalization layers with the classic variant so the classic thing here is this red line this is batch norm and relu, this is a classic activation normalization combo you put in a neural network and you see that these methods outperform this on a very kind of a stable basis right. So that's pretty cool but that is as we said on CIFAR-10 that is on the exact thing that they search over right there so it's not really a surprise that if I you know search a bunch of combinations and always get the best ones I would outperform just one of them. The interesting thing is what happens now if we take what we found and put them into a different architecture for a different data set. Now here the architecture isn't really different because it's kind of the same but they do evaluate it on ImageNet right. ImageNet different data set than CIFAR-10 much larger and so they put their their architectures which here evoNorm into ImageNet and evaluate it and you can see that it has fairly competitive results across right. So I find that to be to be fairly cool that the best performing ones on CIFAR-10 would also perform better than the corresponding ones on ImageNet. But you already see as well that it's not super high right. So the the differences here are I would say it is improving but sometimes you know it's the same sometimes it's actually worse. It doesn't it doesn't appear to know it to me that those kind of things are not super convincing especially because this is the paper that suggests these methods so they're naturally going to present them in the best possible way. So it seems like the the massive outperformance on CIFAR translates only marginally to ImageNet and these are the same architectures right the ResNet-50 and MobileNet and EfficientNet. These were already the architectures that they searched over so my trust that this new normalization layer put into a an actual different architecture is less still. Now they do actually do some experiments on that as well but I just this is my thoughts when reading this and as well and this I find very interesting this column here are random search so if you just do a random search which means you just produce random layers then it doesn't work at all right. 
So you take the best ones of the random ones you found, and it doesn't transfer at all. But interestingly, if you do random search plus rejection, so the same rejection that they do, just without the tournament-evolution-mutation style, you simply random search and then reject, that gives you fairly competitive numbers, and in some cases, see here, it even outperforms some of the classic methods. So just that will give you fairly decent results, and to me that seems to be even more a sign that what this method is mostly doing is just searching like mad for something that works on these particular architectures. Of course you can find things that work better if you search like mad, but then what do you do with it, what does it mean, can we generalize? Now they do two additional tasks to show that it does generalize to other architectures and tasks. First of all, they do object detection and instance segmentation on COCO, so this is a very different task, this is a Mask R-CNN, and they just put their layer in there, and you can see here that they generally outperform the baseline. I can't speak to how large this outperformance is; it seems like the numbers are fairly close together, but they are consistently better. And again, I don't necessarily trust these kinds of experiments too much, because who knows how much effort you can spend on making your method better, but in any case they show that they are better, which is already something. But again, the R50 here indicates that we're again dealing with ResNet-50 and ResNet-101 architectures, which are fairly similar to the ones the method was searching over. The second thing is they say we generalize to GAN training, so they take a BigGAN, a BigGAN-deep, and they show that their method will outperform these other methods on the IS and FID metrics, that is, inception score and Frechet inception distance. So it will outperform them, but in kind of a weird way: okay, here it outperforms them consistently, but then on the inception score this batch norm plus ReLU still seems to be a lot higher than this EvoNorm-B0, and then this thing here, that was performing worse on ImageNet, is now performing somewhat better. So it is a cool result, and definitely cool that you can pop this in here. I just think that the things that turn out here are tuned to very specific architectures and very specific tasks. So I think with BigGAN-deep, the kind of architectures will always be kind of the same, it will always be kind of ResNet-ish style neural networks, and the tasks here will always be sort of CIFAR and ImageNet style things, and therefore, I believe, given the results we've seen, the fact that it outperforms so much on CIFAR-10 but then the gains on ImageNet become more marginal, I think that indicates that the gains here most probably don't translate the further away you go. So I'm not sure that the EvoNorm that they find, this particular thing here, will remain the best thing across tasks. I think they just found this to work well in their particular setting here, and if I ran the same thing with slightly different architectures and slightly different tasks, I would come up with a different best thing. Yeah, all right, so these were my comments. They do some interesting experiments where they show that if they just do random layers it's not as performant, which I can believe, if you just jumble these
things around probably not as good so you need some kind of search criterion and yeah that was my thoughts on this paper I invite you to read it look at it look at the additional experiment it is a very good evaluated paper and that bye bye | [
{
"start": 0,
"end": 5.76,
"text": " Hi there! Today we're looking at evolving normalization activation layers by"
},
{
"start": 5.76,
"end": 13.44,
"text": " Hanjiao Liu, Andrew Brock, Karen Simonian and Guo Vili. These are people from"
},
{
"start": 13.44,
"end": 20.080000000000002,
"text": " Google Brain and Google DeepMind. The topic of this paper is, as you can see,"
},
{
"start": 20.080000000000002,
"end": 26.080000000000002,
"text": " it's about normalization activation layers and we want to evolve them."
},
{
"start": 26.08,
"end": 31.04,
"text": " I think the title says a lot, but let's go down here and see what this is about."
},
{
"start": 31.04,
"end": 41.92,
"text": " We'll look at image neural networks and current architectures are kind of"
},
{
"start": 41.92,
"end": 46.959999999999994,
"text": " focused around the same principles. What they'll have is, ever since ResNet,"
},
{
"start": 46.959999999999994,
"end": 52.239999999999995,
"text": " these neural networks will be composed of these kind of blocks that come"
},
{
"start": 52.24,
"end": 56.56,
"text": " one after another. There will be a block up here and then the signal will"
},
{
"start": 56.56,
"end": 61.760000000000005,
"text": " propagate and there will be another block down here. These blocks usually"
},
{
"start": 61.760000000000005,
"end": 67.2,
"text": " consist of what's called a skip connection. This is the fundamental"
},
{
"start": 67.2,
"end": 74.72,
"text": " ingredient of ResNets that made ResNets so effective, it seems to be the"
},
{
"start": 74.72,
"end": 78.64,
"text": " introduction of this skip connection. You see all of these have the skip"
},
{
"start": 78.64,
"end": 85.76,
"text": " connection here. These are variants on ResNets and then we see that these"
},
{
"start": 85.76,
"end": 90.88,
"text": " are always mixed between convolutional layers and then other things that here"
},
{
"start": 90.88,
"end": 95.84,
"text": " are called evoNorm. In a classic ResNet you would have something like a"
},
{
"start": 95.84,
"end": 101.92,
"text": " convolutional layer, then you would have a batch normalization and then you"
},
{
"start": 101.92,
"end": 105.92,
"text": " would have a non-linearity, for example a ReLU, and then you would go on to the"
},
{
"start": 105.92,
"end": 113.36,
"text": " next convolutional layer. You see that the paper mainly cares about these two"
},
{
"start": 113.36,
"end": 118.4,
"text": " layers here, the batch norm and the ReLU, and it combines them into what it's"
},
{
"start": 118.4,
"end": 125.36,
"text": " called an evoNorm. The evoNorm layers here are supposed to replace"
},
{
"start": 125.36,
"end": 132.88,
"text": " the normalization and the activation layers, combine them and make them"
},
{
"start": 132.88,
"end": 140.4,
"text": " better. How does it do that? Through evolutionary search. These three"
},
{
"start": 140.4,
"end": 147.51999999999998,
"text": " models here are the ResNet, MobileNet and EfficientNet architectures. They're all"
},
{
"start": 147.51999999999998,
"end": 154.88,
"text": " image classifier architectures. Let's see how it does that. What it does"
},
{
"start": 154.88,
"end": 162.32,
"text": " is it evolves these layers from single primitives. If you've seen the batch"
},
{
"start": 162.32,
"end": 171.28,
"text": " normalization paper, then you know that the batch normalization is"
},
{
"start": 171.28,
"end": 176.79999999999998,
"text": " just kind of a formula you can write down. These other normalization"
},
{
"start": 176.79999999999998,
"end": 181.04,
"text": " methods, people have developed other ones than batch norm, for example this is"
},
{
"start": 181.04,
"end": 186.79999999999998,
"text": " groupNorm with a ReLU activation function. You can write these two layers"
},
{
"start": 186.79999999999998,
"end": 191.51999999999998,
"text": " down as this mathematical expression. It features things like, this is the"
},
{
"start": 191.52,
"end": 198.08,
"text": " input signal, I think this is the mean across some groups, this is a bias term"
},
{
"start": 198.08,
"end": 203.44,
"text": " that you can train, this is the standard deviation across the same groups and so"
},
{
"start": 203.44,
"end": 210.56,
"text": " on. This here is the ReLU term. You can write this down as a"
},
{
"start": 210.56,
"end": 218.4,
"text": " combination of these primitives. You can write it in a graph. This"
},
{
"start": 218.4,
"end": 225.84,
"text": " graph here is actually an activation function that this paper has found. It's"
},
{
"start": 225.84,
"end": 233.48000000000002,
"text": " called EVO norm S0 and the mathematical equation is the thing down here. It's not"
},
{
"start": 233.48000000000002,
"end": 238.12,
"text": " that different, you can see from previous activations. It also has the input signal,"
},
{
"start": 238.12,
"end": 244.64000000000001,
"text": " it has this variance or standard deviation across groups, it has a"
},
{
"start": 244.64,
"end": 252.92,
"text": " non-linearity here and the graph here is simply a graph of mathematical"
},
{
"start": 252.92,
"end": 259.59999999999997,
"text": " operations made out of primitives. This paper takes all of the"
},
{
"start": 259.59999999999997,
"end": 264.28,
"text": " primitives that they can think of and puts them in this table and they say,"
},
{
"start": 264.28,
"end": 269.64,
"text": " okay, our search space for these layers, so we want to evolve these layers, our"
},
{
"start": 269.64,
"end": 275.15999999999997,
"text": " search space is a combination of any of these primitives. You can see here"
},
{
"start": 275.15999999999997,
"end": 282.47999999999996,
"text": " you have something like addition, multiplication, negation, so you"
},
{
"start": 282.47999999999996,
"end": 287.84,
"text": " can subtract things, you can take the log of things, you can take the square"
},
{
"start": 287.84,
"end": 294.52,
"text": " root and so on. Here you have a max which is the ReLU activation function,"
},
{
"start": 294.52,
"end": 299.56,
"text": " but if you put 0 as one of them, you have the sigmoid which is another"
},
{
"start": 299.56,
"end": 304,
"text": " non-linearity. Then you can also do something like I want to compute the"
},
{
"start": 304,
"end": 309.32,
"text": " batch mean or I want to compute a group standard deviation, pretty much anything"
},
{
"start": 309.32,
"end": 315.04,
"text": " that current activation functions that have been handcrafted use are available"
},
{
"start": 315.04,
"end": 321.92,
"text": " as primitives across this search. So how does this method search? This is the"
},
{
"start": 321.92,
"end": 326.2,
"text": " process of how the method searches, it does this in an evolutionary way and"
},
{
"start": 326.2,
"end": 332.28,
"text": " evolutionary protocols it means that you don't develop one layer like you would"
},
{
"start": 332.28,
"end": 336.96,
"text": " do if you were to do something like gradient descent. You develop a whole"
},
{
"start": 336.96,
"end": 341.88,
"text": " population of layers, so these can be maybe a couple of hundred or a couple of"
},
{
"start": 341.88,
"end": 347.88,
"text": " thousands, different layer architectures that you develop at the same time."
},
{
"start": 347.88,
"end": 353.08,
"text": " What you do each time you put them into a tournament, which basically means you"
},
{
"start": 353.08,
"end": 359.71999999999997,
"text": " want to sample a couple of them, I don't think they do all at the same time, they"
},
{
"start": 359.71999999999997,
"end": 364.8,
"text": " sample a couple of them, right, and then they train on what they call a proxy"
},
{
"start": 364.8,
"end": 371.12,
"text": " task. Their proxy task is CIFAR-10. So CIFAR-10 is a fairly small image"
},
{
"start": 371.12,
"end": 378.4,
"text": " classification task and they train on CIFAR-10 because it's pretty fast, right,"
},
{
"start": 378.4,
"end": 382.08,
"text": " you can train something on CIFAR-10 in like a couple of minutes or an hour or"
},
{
"start": 382.08,
"end": 391.52,
"text": " so and get a pretty good feeling for how good the final accuracy will be. You can"
},
{
"start": 391.52,
"end": 395.59999999999997,
"text": " get that pretty fast. So this is a fast classification task because they need to"
},
{
"start": 395.59999999999997,
"end": 400.47999999999996,
"text": " do this a lot, right, the population is large and they need to repeat this over"
},
{
"start": 400.47999999999996,
"end": 405.18,
"text": " time, right. In any case they take a sample, they train it on CIFAR-10 and then the"
},
{
"start": 405.18,
"end": 411.08,
"text": " winner, the winning layer is picked from this sample and only the winning layer"
},
{
"start": 411.08,
"end": 416.56,
"text": " is allowed to mutate, right. So the winning layer is mutated then and"
},
{
"start": 416.56,
"end": 420.4,
"text": " mutation means you kind of change it a bit. Now you don't change it in an"
},
{
"start": 420.4,
"end": 426.44,
"text": " informed way, you just change it at random and of course the, and then you"
},
{
"start": 426.44,
"end": 433.03999999999996,
"text": " put the mutated layers back into the population. Of course the hope is"
},
{
"start": 433.03999999999996,
"end": 437.79999999999995,
"text": " that by repeating this process, right, you repeat and repeat and repeat that the,"
},
{
"start": 437.8,
"end": 441.84000000000003,
"text": " simply by picking the winning layers over and over and over again is a"
},
{
"start": 441.84000000000003,
"end": 447.36,
"text": " selective pressure such that through the random mutations but the tournament"
},
{
"start": 447.36,
"end": 454.12,
"text": " style evaluation and picking of the winner, that over time the best"
},
{
"start": 454.12,
"end": 458.56,
"text": " performing models in your population, right, the best scoring model here will"
},
{
"start": 458.56,
"end": 462.7,
"text": " get better and better and better, right. So the assumption is that this isn't like"
},
{
"start": 462.7,
"end": 469,
"text": " a pure combinatorial optimization or like a pure random function, is that if"
},
{
"start": 469,
"end": 475.08,
"text": " I take something that works well there are ways that I can perturb it that make"
},
{
"start": 475.08,
"end": 479.64,
"text": " it work even better, right. So even if most of the perturbations are worse"
},
{
"start": 479.64,
"end": 485.56,
"text": " there are some and the tournament style will always find these ones"
},
{
"start": 485.56,
"end": 490.15999999999997,
"text": " for me that perform better and then I can modify these again at random and"
},
{
"start": 490.16,
"end": 497.84000000000003,
"text": " then among these I can again find the ones that perform even better. So that"
},
{
"start": 497.84000000000003,
"end": 503.36,
"text": " is the method and so the question, there are two questions, how do you"
},
{
"start": 503.36,
"end": 509.6,
"text": " mutate a layer, right, and mutation I believe is done in sort of different ways"
},
{
"start": 509.6,
"end": 515.64,
"text": " here but if you look at this here, at this expression, so what you have here"
},
{
"start": 515.64,
"end": 523.28,
"text": " is the input is this signal here, right, and you always start out I believe with"
},
{
"start": 523.28,
"end": 530.28,
"text": " the input with a layer that just emits the number one, with a layer that just"
},
{
"start": 530.28,
"end": 537.4,
"text": " emits the number zero or a component and then you have two trainable vectors that"
},
{
"start": 537.4,
"end": 544.2,
"text": " you can include and you just start out with these four things and then every"
},
{
"start": 544.2,
"end": 548.88,
"text": " time you mutate you add one of these blocks and I believe there's also a"
},
{
"start": 548.88,
"end": 553.6,
"text": " method like a randomness of changing the individual things or of actually"
},
{
"start": 553.6,
"end": 558.5600000000001,
"text": " starting from scratch again, it's pretty important otherwise you just grow bigger"
},
{
"start": 558.5600000000001,
"end": 568,
"text": " and bigger monsters and but the way you mutate is the following, you add a new"
},
{
"start": 568,
"end": 574.1800000000001,
"text": " block, let's say I add one here, and you decide on one of the primitives from the"
},
{
"start": 574.18,
"end": 578.8,
"text": " table, right, here I'm going to simply decide on a minus operation, so a"
},
{
"start": 578.8,
"end": 586,
"text": " subtraction operation and then once you've done that you choose two"
},
{
"start": 586,
"end": 591.12,
"text": " children, sorry two parents, however you see it, you choose two parents because"
},
{
"start": 591.12,
"end": 597.12,
"text": " the minus operation needs two parents, you choose two of the parents at random"
},
{
"start": 597.12,
"end": 603.04,
"text": " here, so I'm going to choose this thing here to be a parent and I'm going to"
},
{
"start": 603.04,
"end": 610.56,
"text": " choose this thing here to be a parent at random, right, and then this new node will"
},
{
"start": 610.56,
"end": 616.68,
"text": " become the new output of the layer, so you see that this was the previous"
},
{
"start": 616.68,
"end": 622.4399999999999,
"text": " output here, this multiplication node between this and this, now this is no"
},
{
"start": 622.4399999999999,
"end": 626.5999999999999,
"text": " longer the output, now this is obsolete, right, this is no longer part of the"
},
{
"start": 626.5999999999999,
"end": 632.68,
"text": " final mathematical expression here, so you see all the gray nodes here were"
},
{
"start": 632.68,
"end": 638.12,
"text": " actually sort of obsolete nodes but they are still kept because in subsequent"
},
{
"start": 638.12,
"end": 643.0799999999999,
"text": " steps you can choose them as parents and then they become part of the"
},
{
"start": 643.0799999999999,
"end": 651.4799999999999,
"text": " expression again, you can see here this tanh node, it was just a node that"
},
{
"start": 651.4799999999999,
"end": 657.8,
"text": " was sort of a dead end in the expression before but now with the new mutation it"
},
{
"start": 657.8,
"end": 662.1999999999999,
"text": " is again included in the expression because I've randomly selected it as a"
},
{
"start": 662.2,
"end": 667.5600000000001,
"text": " parent but then this node here and that was reset this node here, they are now"
},
{
"start": 667.5600000000001,
"end": 671.24,
"text": " obsolete nodes because they are no longer part of the expression, the"
},
{
"start": 671.24,
"end": 678.1600000000001,
"text": " expression in this case would go from here to here, right, including this node"
},
{
"start": 678.1600000000001,
"end": 688.2,
"text": " and it would go from here over here, right, so these nodes are now part of the"
},
{
"start": 688.2,
"end": 692.6800000000001,
"text": " expression, so this is how you mutate and as I said you can also mutate such"
},
{
"start": 692.6800000000001,
"end": 700.5200000000001,
"text": " that you start from scratch and so that's how you mutate, the second part in this"
},
{
"start": 700.5200000000001,
"end": 708.08,
"text": " thing is how do you exactly determine the winner and what is the tournament, so"
},
{
"start": 708.08,
"end": 714.0400000000001,
"text": " how do you do that, the tournament exactly is what we've seen before when"
},
{
"start": 714.04,
"end": 718.5999999999999,
"text": " we looked at the different layers, so we said we train on CIFAR-10 and what we do"
},
{
"start": 718.5999999999999,
"end": 724.88,
"text": " is we train these three architectures on CIFAR-10, so the ResNet, the MobileNet"
},
{
"start": 724.88,
"end": 731.76,
"text": " and the EfficientNet, we train these three architectures on CIFAR-10 with the"
},
{
"start": 731.76,
"end": 737.3199999999999,
"text": " EVO norm layer instantiated by you know that particular sample from the"
},
{
"start": 737.3199999999999,
"end": 743.56,
"text": " population and then we look at their accuracies and we do, we determine what"
},
{
"start": 743.56,
"end": 750.7199999999999,
"text": " is called the Pareto frontier of the population, so I think this is further up,"
},
{
"start": 750.7199999999999,
"end": 758.5999999999999,
"text": " oh right here, okay, so the dots here, the red and the grey dots would be our sample,"
},
{
"start": 758.5999999999999,
"end": 764.56,
"text": " so all of this would be our samples and their performance, here it's on"
},
{
"start": 764.56,
"end": 770.7199999999999,
"text": " actually on two models but in practice we have three just to graph it better, so"
},
{
"start": 770.72,
"end": 774.76,
"text": " we plot them here and we determine the Pareto frontier, now here A, B and C are"
},
{
"start": 774.76,
"end": 779.8000000000001,
"text": " part of the Pareto frontier because A outperforms everything else on"
},
{
"start": 779.8000000000001,
"end": 787.64,
"text": " model 1, C outperforms everything else on model 2 and B outperforms C on model 1"
},
{
"start": 787.64,
"end": 793.0400000000001,
"text": " but also outperforms A on model 2, so it's what's called the Pareto frontier"
},
{
"start": 793.0400000000001,
"end": 800,
"text": " and we pick one of those as the winner, so they all are kind of one-third winners"
},
{
"start": 800,
"end": 805.88,
"text": " here, so this is how you do the tournament, you pick the winner like this"
},
{
"start": 805.88,
"end": 814.88,
"text": " and then you allow the winner to mutate, the last part that is not drawn in here"
},
{
"start": 814.88,
"end": 824.12,
"text": " actually is somewhere here-ish which is called the rejection step, so the"
},
{
"start": 824.12,
"end": 831.36,
"text": " rejection step is important because what they want to do is they say, hi we have"
},
{
"start": 831.36,
"end": 836.96,
"text": " these mutated layers but some of the mutations are probably going to be just"
},
{
"start": 836.96,
"end": 843.08,
"text": " terrible, like destroying everything, not trainable layers, it's just"
},
{
"start": 843.08,
"end": 849.64,
"text": " horrible, horrible, such that the layer is useless, they don't"
},
{
"start": 849.64,
"end": 853.72,
"text": " want to keep, they don't want to put them back here"
},
{
"start": 853.72,
"end": 860.9200000000001,
"text": " into the population because that might either deteriorate or severely slow"
},
{
"start": 860.9200000000001,
"end": 866.24,
"text": " down this progress here, so they want to stop them and only the good ones,"
},
{
"start": 866.24,
"end": 873.08,
"text": " only the ones that are somewhat fairly okay get back to the population, right,"
},
{
"start": 873.08,
"end": 878.98,
"text": " they don't always have to improve but they have to be at minimally useful, so"
},
{
"start": 878.98,
"end": 887.4,
"text": " the rejection step they describe down here in the rejection protocols, they"
},
{
"start": 887.4,
"end": 893.4,
"text": " have two criteria for rejecting mutated architectures, first they have a quality"
},
{
"start": 893.4,
"end": 899.6800000000001,
"text": " criterion, say we discard layers that achieve less than 20% validation"
},
{
"start": 899.6800000000001,
"end": 905.88,
"text": " accuracy in 100 training steps on any of the three anchor architectures, right, so"
},
{
"start": 905.88,
"end": 910.92,
"text": " the reasoning behind this is if you have a hundred training steps and you achieve"
},
{
"start": 910.92,
"end": 917.12,
"text": " less than 20% validation accuracy you're not going anywhere, right, you're just"
},
{
"start": 917.12,
"end": 923.04,
"text": " because 10% is already random performance, if you are less than 20%"
},
{
"start": 923.04,
"end": 928.72,
"text": " after a hundred steps your layer is pretty useless and can be discarded,"
},
{
"start": 928.72,
"end": 934.4,
"text": " right, so they say this simple mechanism ensures the compute resources to"
},
{
"start": 934.4,
"end": 939.28,
"text": " concentrate on the full training process of a small subset of promising candidates,"
},
{
"start": 939.28,
"end": 945.68,
"text": " oh sorry, yeah, so the hundred training steps of course is not enough to train"
},
{
"start": 945.68,
"end": 949.56,
"text": " fully but you can see after a hundred training steps whether or not the layer"
},
{
"start": 949.56,
"end": 954.8,
"text": " even does something, so you reject those, so this makes pretty much sense, right,"
},
{
"start": 954.8,
"end": 961.28,
"text": " the second criterion is what they call stability, right, they say we reject"
},
{
"start": 961.28,
"end": 967.6,
"text": " layers that are subject to numerical instability, right, and how do they find"
},
{
"start": 967.6,
"end": 975.4399999999999,
"text": " numerical instability? They define it like this, so what they do is they take"
},
{
"start": 975.4399999999999,
"end": 986.36,
"text": " the parameters, so the layers, and this is an architecture, yeah, so the model,"
},
{
"start": 986.36,
"end": 996,
"text": " the model, these are the convolutional weights, are the theta, right, and the G"
},
{
"start": 996,
"end": 1003.16,
"text": " is the computation graph which is the EVO norm in this case and there is a"
},
{
"start": 1003.16,
"end": 1007.04,
"text": " loss defined across them, of course, right, this is the loss of the neural"
},
{
"start": 1007.04,
"end": 1011.96,
"text": " network on the samples, right, so these are the convolutional"
},
{
"start": 1011.96,
"end": 1015.64,
"text": " layers and these are the normalization layers, now what we want to do is we"
},
{
"start": 1015.64,
"end": 1021.28,
"text": " want to see how does this loss change when we change the convolutional layers,"
},
{
"start": 1021.28,
"end": 1027,
"text": " so you have to imagine, here are the convolutional layers and then there are"
},
{
"start": 1027,
"end": 1031.12,
"text": " these weird normalization layers and then again there are the convolutional"
},
{
"start": 1031.12,
"end": 1041.96,
"text": " layers, now we want to see how does the loss change if we change the weights"
},
{
"start": 1041.96,
"end": 1046.1200000000001,
"text": " of the convolution by a little bit, right, we just change it a little bit and see"
},
{
"start": 1046.1200000000001,
"end": 1051.16,
"text": " how does the loss change, this is the gradient of the weights basically,"
},
{
"start": 1051.16,
"end": 1056.8,
"text": " this is how you train, this entire thing here is how you train the"
},
{
"start": 1056.8,
"end": 1063.1200000000001,
"text": " neural network, right, so you want to see how large is this gradient and you kind"
},
{
"start": 1063.1200000000001,
"end": 1067.72,
"text": " of want to do this in an adversarial way, so you want to find the maximum"
},
{
"start": 1067.72,
"end": 1074.56,
"text": " perturbation you can achieve, right, you say okay if I change this a little"
},
{
"start": 1074.56,
"end": 1082.68,
"text": " bit in the worst direction I possibly can, how large is the"
},
{
"start": 1082.68,
"end": 1088.44,
"text": " perturbation going to be and that's how they define numerical"
},
{
"start": 1088.44,
"end": 1095,
"text": " instability, so it basically means if this is very high then the network might"
},
{
"start": 1095,
"end": 1101.36,
"text": " be doing well right where it is but just a little bit changing it will make it"
},
{
"start": 1101.36,
"end": 1111.92,
"text": " terrible, right, so they say we ascend the value on this direction for 100 steps and"
},
{
"start": 1111.92,
"end": 1116.52,
"text": " layer with the worst-case gradient norm greater than 10 to the 8th are rejected,"
},
{
"start": 1116.52,
"end": 1121.68,
"text": " in addition, so as a reason, this seems pretty strange, right, this"
},
{
"start": 1121.68,
"end": 1128.0800000000002,
"text": " quality criterion, it made sense but the stability criterion, it kind of seems, I"
},
{
"start": 1128.0800000000002,
"end": 1135.8400000000001,
"text": " mean reasonable but strange in here, the reason now, so the two tests are"
},
{
"start": 1135.8400000000001,
"end": 1140.28,
"text": " complementary with each other, for example we found a layer like this is"
},
{
"start": 1140.28,
"end": 1145.3600000000001,
"text": " able to achieve reasonable accuracies on C for 10 across all the anchor"
},
{
"start": 1145.36,
"end": 1152.08,
"text": " architectures, so it passes the quality criterion above but its gradients"
},
{
"start": 1152.08,
"end": 1156.6399999999999,
"text": " quickly explode on ImageNet possibly due to the absence of normalization"
},
{
"start": 1156.6399999999999,
"end": 1162,
"text": " operations, so and then you see aha, okay, so what probably happened is the"
},
{
"start": 1162,
"end": 1166.9199999999998,
"text": " following, they did their experiment without this, right, just with this quality"
},
{
"start": 1166.9199999999998,
"end": 1172.12,
"text": " criterion which I guess makes sense, they did this, right, they trained on C for 10"
},
{
"start": 1172.12,
"end": 1175.4799999999998,
"text": " that's how they do their evolutionary research, then they took their best"
},
{
"start": 1175.4799999999998,
"end": 1181.8799999999999,
"text": " performing things among them is this one and they went to ImageNet and they said"
},
{
"start": 1181.8799999999999,
"end": 1186.2399999999998,
"text": " let's test these now on ImageNet class first, like we found these new"
},
{
"start": 1186.2399999999998,
"end": 1192.12,
"text": " architectures, let's see, and then they got exploding gradients, right, and then"
},
{
"start": 1192.12,
"end": 1196.6999999999998,
"text": " they went back into their original problem formulation, okay, what can we"
},
{
"start": 1196.6999999999998,
"end": 1201.84,
"text": " build in to the evolution such that this won't happen and here you already see"
},
{
"start": 1201.84,
"end": 1206.1599999999999,
"text": " kind of the problem with this, what you would like to have is kind of an"
},
{
"start": 1206.1599999999999,
"end": 1212.28,
"text": " algorithm that is general such as to not depend on the architectures and so on"
},
{
"start": 1212.28,
"end": 1218.84,
"text": " that is used but you see already here that the authors, they don't direct the"
},
{
"start": 1218.84,
"end": 1223.8,
"text": " search, right, the search is evolution but they guide, the evolution is very much"
},
{
"start": 1223.8,
"end": 1228.6399999999999,
"text": " guided by what these rejection protocols are and you see here the authors"
},
{
"start": 1228.64,
"end": 1233.3600000000001,
"text": " tailoring their rejection protocols to the specific data sets and"
},
{
"start": 1233.3600000000001,
"end": 1239.16,
"text": " architectures that they use and the specific problems they experienced when"
},
{
"start": 1239.16,
"end": 1245.5600000000002,
"text": " trying to apply their method and that I think weakens a bit the"
},
{
"start": 1245.5600000000002,
"end": 1251.48,
"text": " application of this method because it seems that this particular form of"
},
{
"start": 1251.48,
"end": 1256.6000000000001,
"text": " protocols, of this particular form of rejection protocols is very much"
},
{
"start": 1256.6,
"end": 1262,
"text": " tailored to this, let's do these three architectures on CIFAR-10 and then go to"
},
{
"start": 1262,
"end": 1269.8,
"text": " ImageNet and that tells me if I want to do this in a very different domain that"
},
{
"start": 1269.8,
"end": 1277.8,
"text": " I would have to, couldn't, it is not very clear that I could just to plop whatever"
},
{
"start": 1277.8,
"end": 1283.08,
"text": " they found works in and it would just work just as outperformingly of the"
},
{
"start": 1283.08,
"end": 1290.96,
"text": " others as in their experiment, it tells me that there is pretty like a somewhat"
},
{
"start": 1290.96,
"end": 1301.52,
"text": " large dependence on the specifics here. Yeah so but that being said these are"
},
{
"start": 1301.52,
"end": 1308.08,
"text": " the rejection criteria so they reject each step here, the worst ones and they"
},
{
"start": 1308.08,
"end": 1312.52,
"text": " go back into the population and then that process repeats and repeats and"
},
{
"start": 1312.52,
"end": 1316.92,
"text": " repeats and then at the end you hopefully end up with some very good"
},
{
"start": 1316.92,
"end": 1328.24,
"text": " normalization layers. Now I have to see here if you compare now these these"
},
{
"start": 1328.24,
"end": 1334.36,
"text": " found normalization layers with the classic variant so the classic thing"
},
{
"start": 1334.36,
"end": 1339.72,
"text": " here is this red line this is batch norm and relu, this is a classic"
},
{
"start": 1339.72,
"end": 1344.48,
"text": " activation normalization combo you put in a neural network and you see that"
},
{
"start": 1344.48,
"end": 1355.16,
"text": " these methods outperform this on a very kind of a stable basis right. So that's"
},
{
"start": 1355.16,
"end": 1359.48,
"text": " pretty cool but that is as we said on CIFAR-10 that is on the exact thing"
},
{
"start": 1359.48,
"end": 1364.48,
"text": " that they search over right there so it's not really a surprise that if I you"
},
{
"start": 1364.48,
"end": 1368.72,
"text": " know search a bunch of combinations and always get the best ones I would"
},
{
"start": 1368.72,
"end": 1376.24,
"text": " outperform just one of them. The interesting thing is what happens now if"
},
{
"start": 1376.24,
"end": 1383.96,
"text": " we take what we found and put them into a different architecture for a different"
},
{
"start": 1383.96,
"end": 1388.92,
"text": " data set. Now here the architecture isn't really different because it's kind of"
},
{
"start": 1388.92,
"end": 1393.72,
"text": " the same but they do evaluate it on ImageNet right. ImageNet different"
},
{
"start": 1393.72,
"end": 1401.4,
"text": " data set than CIFAR-10 much larger and so they put their their architectures"
},
{
"start": 1401.4,
"end": 1407.64,
"text": " which here evoNorm into ImageNet and evaluate it and you can see that it has"
},
{
"start": 1407.64,
"end": 1417,
"text": " fairly competitive results across right. So I find that to be to be fairly cool"
},
{
"start": 1417,
"end": 1424.48,
"text": " that the best performing ones on CIFAR-10 would also perform better than the"
},
{
"start": 1424.48,
"end": 1431.28,
"text": " corresponding ones on ImageNet. But you already see as well that it's not super"
},
{
"start": 1431.28,
"end": 1439.48,
"text": " high right. So the the differences here are I would say it is improving but"
},
{
"start": 1439.48,
"end": 1446.04,
"text": " sometimes you know it's the same sometimes it's actually worse. It doesn't"
},
{
"start": 1446.04,
"end": 1452.2,
"text": " it doesn't appear to know it to me that those kind of things are not super"
},
{
"start": 1452.2,
"end": 1455.8,
"text": " convincing especially because this is the paper that suggests these methods so"
},
{
"start": 1455.8,
"end": 1462.8,
"text": " they're naturally going to present them in the best possible way. So it seems"
},
{
"start": 1462.8,
"end": 1470.08,
"text": " like the the massive outperformance on CIFAR translates only marginally to"
},
{
"start": 1470.08,
"end": 1474,
"text": " ImageNet and these are the same architectures right the ResNet-50 and"
},
{
"start": 1474,
"end": 1477.32,
"text": " MobileNet and EfficientNet. These were already the architectures that they"
},
{
"start": 1477.32,
"end": 1483.04,
"text": " searched over so my trust that this new normalization layer put into a an"
},
{
"start": 1483.04,
"end": 1488.8,
"text": " actual different architecture is less still. Now they do actually do"
},
{
"start": 1488.8,
"end": 1494.32,
"text": " some experiments on that as well but I just this is my thoughts when reading"
},
{
"start": 1494.32,
"end": 1501.08,
"text": " this and as well and this I find very interesting this column here are random"
},
{
"start": 1501.08,
"end": 1505.6,
"text": " search so if you just do a random search which means you just produce"
},
{
"start": 1505.6,
"end": 1511.1599999999999,
"text": " random layers then it doesn't work at all right. So you take the best ones of"
},
{
"start": 1511.1599999999999,
"end": 1518.8799999999999,
"text": " the random ones you found and it doesn't transfer at all but interestingly if you"
},
{
"start": 1518.8799999999999,
"end": 1526.3999999999999,
"text": " do random search plus rejection so the same rejection that they do just you"
},
{
"start": 1526.4,
"end": 1533.48,
"text": " don't do this tournament evolution mutation style you simply random search"
},
{
"start": 1533.48,
"end": 1541.2,
"text": " and then do rejection that gives you fairly competitive numbers right and in"
},
{
"start": 1541.2,
"end": 1549.96,
"text": " some cases even see here it does it outperforms some of the classic methods"
},
{
"start": 1549.96,
"end": 1558.2,
"text": " so just that will give you fairly decent results right and that is to me"
},
{
"start": 1558.2,
"end": 1567.3600000000001,
"text": " that that seems to be even more a sign of okay this what this method is mostly"
},
{
"start": 1567.3600000000001,
"end": 1571.2,
"text": " doing is just searching like mad for something that works on these"
},
{
"start": 1571.2,
"end": 1577.4,
"text": " particular architectures and of course you can find things that work better if"
},
{
"start": 1577.4,
"end": 1584.88,
"text": " you search like mad but then what do you do with it like what does it mean it can"
},
{
"start": 1584.88,
"end": 1591.5600000000002,
"text": " we generalize now they do two additional tasks to show that it does generalize"
},
{
"start": 1591.5600000000002,
"end": 1597.72,
"text": " to other architecture and tasks so first of all they do object detection and"
},
{
"start": 1597.72,
"end": 1605.88,
"text": " instance segmentation right on cocoa so this is a very different task this is a"
},
{
"start": 1605.88,
"end": 1611.68,
"text": " mask or CNN right and they just put in their layer there and you can see here"
},
{
"start": 1611.68,
"end": 1618.3200000000002,
"text": " that they generally outperform the baseline I don't I can't speak to how"
},
{
"start": 1618.3200000000002,
"end": 1624.1200000000001,
"text": " much this is this outperformance is here it seems like the numbers are fairly"
},
{
"start": 1624.1200000000001,
"end": 1629.64,
"text": " close together but they are consistently better and again I don't I don't"
},
{
"start": 1629.64,
"end": 1635.48,
"text": " necessarily trust these kind of experiments too much because who knows"
},
{
"start": 1635.48,
"end": 1640.32,
"text": " how much effort you can spend on making your method better but in any case they"
},
{
"start": 1640.32,
"end": 1643.68,
"text": " show that they are better which is already something but again here the"
},
{
"start": 1643.68,
"end": 1648.64,
"text": " the r50 indicates that we're again dealing with like resin at 50 a resident"
},
{
"start": 1648.64,
"end": 1655.28,
"text": " 101 architectures which are fairly similar to the ones that we that the"
},
{
"start": 1655.28,
"end": 1662.2,
"text": " method was searching over so the second thing is they say we generalize to gan"
},
{
"start": 1662.2,
"end": 1672.3600000000001,
"text": " training so they take a big gan a big gan deep and they show that their method"
},
{
"start": 1672.3600000000001,
"end": 1681.0800000000002,
"text": " will outperform these other methods on the IS and FID metrics I don't even know"
},
{
"start": 1681.0800000000002,
"end": 1688.6000000000001,
"text": " what inception score and fresh lit inception distance yay so it will out"
},
{
"start": 1688.6,
"end": 1696.08,
"text": " perform them but in kind of a weird way okay here it outperforms them"
},
{
"start": 1696.08,
"end": 1703.1599999999999,
"text": " consistently but then in the inception score this batch norm plus reluces still"
},
{
"start": 1703.1599999999999,
"end": 1711.76,
"text": " seems to be like a lot higher than this evil norm be zero and then this thing"
},
{
"start": 1711.76,
"end": 1718.92,
"text": " here that was performing worse in the image net is now performing somewhat"
},
{
"start": 1718.92,
"end": 1727.28,
"text": " better it just so it is a cool result and definitely cool that you can pop"
},
{
"start": 1727.28,
"end": 1733.76,
"text": " this in here I I just think that the things that turn out here that they are"
},
{
"start": 1733.76,
"end": 1740.44,
"text": " tuned to very specific architectures to very specific tasks so I think the big"
},
{
"start": 1740.44,
"end": 1745.16,
"text": " gan deep the kind of architectures will always be kind of the same it will"
},
{
"start": 1745.16,
"end": 1750.16,
"text": " always be kind of resonant ish style neural networks and the tasks here will"
},
{
"start": 1750.16,
"end": 1758.4,
"text": " always be sort of C for image net style things and therefore I believe with the"
},
{
"start": 1758.4,
"end": 1762.6000000000001,
"text": " results we've seen the fact that it outperforms so much on C for 10 but then"
},
{
"start": 1762.6000000000001,
"end": 1768.64,
"text": " the gains on image net become more marginal I think that indicates that the"
},
{
"start": 1768.64,
"end": 1775.96,
"text": " gains here most probably don't translate the further away you go so I'm not sure"
},
{
"start": 1775.96,
"end": 1783.88,
"text": " that the evil norm that they find like that this particular thing here will"
},
{
"start": 1783.88,
"end": 1791.2800000000002,
"text": " remain the best thing across across tasks I think they just found this to"
},
{
"start": 1791.2800000000002,
"end": 1797.3000000000002,
"text": " work well in their particular setting here and if I run the same thing with"
},
{
"start": 1797.3,
"end": 1800.68,
"text": " the slightly different architectures and slightly different tasks I will come up"
},
{
"start": 1800.68,
"end": 1807.62,
"text": " with a different best thing yeah all right so these were my comments they do"
},
{
"start": 1807.62,
"end": 1811.6,
"text": " some interesting experiments where they show that if they just do random layers"
},
{
"start": 1811.6,
"end": 1818.6,
"text": " it it's not as performant which I can believe if you just jumble these things"
},
{
"start": 1818.6,
"end": 1826.12,
"text": " around probably not as good so you need some kind of search criterion and yeah"
},
{
"start": 1826.12,
"end": 1831.04,
"text": " that was my thoughts on this paper I invite you to read it look at it look at"
},
{
"start": 1831.04,
"end": 1857.52,
"text": " the additional experiment it is a very good evaluated paper and that bye bye"
}
] |
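The segments above describe the search procedure only in words: sample candidates from a population, keep the Pareto-frontier winners across several anchor architectures, mutate a winner's expression graph at random, and reject mutants that fail a quality or a numerical-stability check. The following is a minimal illustrative sketch of that kind of loop, not code from the paper; every name in it (`evaluate_on_anchors`, `worst_case_grad_norm`, the thresholds, the primitive list) is a hypothetical placeholder, and the two evaluation functions are stubbed with random numbers where the real method would briefly train ResNet/MobileNet/EfficientNet-style anchor models on CIFAR-10.

```python
import random
from dataclasses import dataclass, field

# Hypothetical stand-ins; the real method trains anchor models for a few steps.
ANCHORS = ["anchor_a", "anchor_b", "anchor_c"]
ACC_THRESHOLD = 0.20      # quality criterion: reject if any anchor stays below this
GRAD_NORM_LIMIT = 1e8     # stability criterion: reject if the worst-case gradient norm explodes
PRIMITIVES = ["add", "sub", "mul", "max", "tanh", "sigmoid"]

@dataclass
class Layer:
    """A candidate normalization-activation layer encoded as a tiny expression graph."""
    nodes: list = field(default_factory=lambda: ["x", "0", "1", "v0", "v1"])
    scores: dict = field(default_factory=dict)  # accuracy per anchor architecture

def evaluate_on_anchors(layer: Layer) -> dict:
    """Placeholder for 'train each anchor briefly with this layer, read off accuracy'."""
    return {a: random.random() for a in ANCHORS}

def worst_case_grad_norm(layer: Layer) -> float:
    """Placeholder for the adversarial ascent that estimates numerical instability."""
    return random.uniform(0.0, 2.0 * GRAD_NORM_LIMIT)

def mutate(parent: Layer) -> Layer:
    """Add one random primitive node with two random existing nodes as parents."""
    child = Layer(nodes=list(parent.nodes))
    op = random.choice(PRIMITIVES)
    a, b = random.sample(child.nodes, k=2)
    child.nodes.append(f"{op}({a},{b})")  # the new node becomes the layer's output
    return child

def on_pareto_frontier(layer: Layer, population: list) -> bool:
    """A layer survives if no other layer beats it on every anchor simultaneously."""
    return not any(
        all(other.scores[a] > layer.scores[a] for a in ANCHORS)
        for other in population if other is not layer
    )

def evolve(steps: int = 50, population_size: int = 16, sample_size: int = 5) -> list:
    population = [Layer() for _ in range(population_size)]
    for layer in population:
        layer.scores = evaluate_on_anchors(layer)

    for _ in range(steps):
        sample = random.sample(population, k=sample_size)            # tournament sample
        frontier = [l for l in sample if on_pareto_frontier(l, sample)]
        winner = random.choice(frontier)                             # pick a frontier member
        child = mutate(winner)
        child.scores = evaluate_on_anchors(child)

        # Rejection step: drop clearly useless or numerically unstable candidates.
        if min(child.scores.values()) < ACC_THRESHOLD:
            continue
        if worst_case_grad_norm(child) > GRAD_NORM_LIMIT:
            continue

        population.append(child)
        population.pop(0)                                            # age out the oldest member
    return population

if __name__ == "__main__":
    final = evolve()
    best = max(final, key=lambda l: sum(l.scores.values()))
    print("best candidate:", best.nodes[-1], best.scores)
```

The point of the sketch is the control flow: selection pressure comes only from repeatedly picking Pareto-frontier winners and mutating them at random, while the rejection step keeps obviously useless or unstable mutants from re-entering the population.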
DRy_Mr732yA | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | [Drama] Who invented Contrast Sets? | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"nlp",
"natural language processing",
"arxiv",
"twitter",
"drama",
"credit",
"related",
"lipton",
"gardner",
"counterfactual",
"augmentation",
"plagiarism"
] | Funny Twitter spat between researchers arguing who was the first to invent an idea that has probably been around since 1990 :D
References:
https://arxiv.org/abs/2004.02709
https://twitter.com/nlpmattg/status/1247326213296672768
https://arxiv.org/abs/1909.12434
https://twitter.com/zacharylipton/status/1247357810410762240
https://twitter.com/nlpmattg/status/1247373386839252992
https://twitter.com/zacharylipton/status/1247383141075083267
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | I love me some good Twitter drama look at this this is awesome so after this contrast set paper appeared and I've done a video on that the author of it tweeted it out with one of the long Twitter threads with screenshots and all this seems to be the new marketing tool of academics and as you know I'm not a fan of this paper I think that the number that comes out of such a contrast set is very either useless or counterproductive and you can see my video on that in any case there there was another researcher Zachary Lipton who felt like he needed to jump in here saying before the media blitz and retweet party gets out of control this idea exists has been published it has a name and a clear justification is called counterfactually augmented data this is amazing look at that and here's the published paper of course and if we look at the published paper this is it right here of course Zach Lipton is an author on that paper and so let's just just read the abstract I haven't read the paper but let's just read the abstract it so I have it classically I have it here my nifty thing here so we can analyze it so this paper if you read the abstract it does sound similar right despite alarm over the reliance of union learning systems blah blah blah blah spurious correlations so it talks about the same problems now what do they say given documents and their initial labels we task humans with revising each document so that it accords with a counterfactual target label retains internal coherence and avoids unnecessary changes right so this sounds very similar to what these contrast sets do so the counterfactual target label would be the necessary of the contrast set to change the label retains internal coherence which is the in the contrast that this simply given by it supposed to conform to the intent of the data set makers which the intent probably includes internal coherence and it avoids unnecessary changes that conforms to the contrast set only searching in the local environment of a test set sample so you see that the definition of these is pretty similar then we go on and say they say class first trained on original data fail on their counterfactually revised counterparts and vice versa this experiment was also done by the contrast that paper and then they say class first trained on combined data sets performed remarkably well just chive those specialized in either domain so immediately we see some differences as well right the main difference I see is they say we task humans and then they train on the the train on the counterfactually revised counterparts which probably means they use some mechanical Turks here when they say humans because if you want to create a training data set you need lots of data so they probably take a data set and run its training data set again through something like mechanical Turk to get annotations this is exactly what the people of the of the contrast sets claim is wrong with the current pipeline and they so here we have this this thing counterfactually augmented stuff so the contrast sets what they say is we actually need the experts to do this that this the these humans are exactly the wrong people to make the data sets so it has the CFA has some elements correctly the same namely how they construct these labels but who construct the labels and for what reason so here it's experts and for what reason it's for testing it's they say the experts that make the data set should provide an additional contrast test set so this is I 
mean if if this is just my opinion if this is the same idea of course it's very similar but if this counts as the same idea then 95% of all research counts as the same idea as something that Jürgen Schmidhuber has done in the 1990s which of course Jürgen Schmidhuber will eloquently argue exactly he invented GANs basically the same thing so yeah so if this is it's not the same like I have to say this is very close but it's not as I understand they even cited the other ones so then the bickering starts and this is funny I'm like this this is just funny to me so Zach Lippen jumps here it says this has been published has a name and a clearer justification it's called contractual augmented data here is the published paper we just looked at this right and then Matt Gardner answers and he says Zach and Divyansh work is excellent recommend you all go look at it our work provides a different concurrent take on similar issues right and I think here someone comments that so he says it is in the related work section although mischaracterized and misattributed as contemporary work so position is really that it is kind of a stolen idea and they were apparently in contact with each other during that so this Matt Gardner here says what the differences are he says we take a geometrical view we demonstrate such a wider variety I mean for all intents and purposes if you go through any of the research go to computer vision go to NLP you'll find like the exact I have like I have I review two papers each year that want to produce data that better defines the decision boundary like these people here I mean this is this idea is just get rehashed over and over in the slightly different form these two are particularly close but and then see how they pick our paper was finished two months after theirs and then they say we started the project well before and so on why do we feel defensive and then he answers again with this is absolutely false our paper was drafted in July your paper was finished the night before the ACL deadline this is not two months ago but a half a year it is nothing to do it says why do you presume to know when we started drop the nonsense we did this work in May 2019 present the public results in July posted it's a better drop the posturing so much of what you're doing here is the very cancer in the system I mean I agree just you know slightly refining ideas that previously were there is very bad problem in academia so this is actually correct to point out but I don't think that this particular instance is particularly bad and then he says I'm afraid you're simply mistaken I have a history of publishing similar so I've I've something like the last thing I say I just invite you to read this beautiful but the last thing to say here if if this counterfactually augmented data if this is in fact the first instance of this general idea to produce counterfactually augmented data that that that does actually fulfill these criteria I would be extremely surprised because this is nothing to do with deep learning right and the real novelty in her field is mostly deep learning so I'm pretty sure someone must have thought of something like this when everyone was just doing grammars and manual features and things like this so I'm I would be extremely surprised if this hasn't been there in one form or another and why the authors of that shouldn't make exactly the same argument that being said it is fairly close like that the fun part here is that it is actually a fairly similar idea except after so the idea 
itself is fairly similar but here the focus is on different things and it's also on different data sets and I believe yeah as I said 95% of research falls into exactly this category so much fun check it out yeah bye bye | [
{
"start": 0,
"end": 7.24,
"text": " I love me some good Twitter drama look at this this is awesome so after this"
},
{
"start": 7.24,
"end": 13.4,
"text": " contrast set paper appeared and I've done a video on that the author of it"
},
{
"start": 13.4,
"end": 19.72,
"text": " tweeted it out with one of the long Twitter threads with screenshots and all"
},
{
"start": 19.72,
"end": 26.2,
"text": " this seems to be the new marketing tool of academics and as you know I'm not a"
},
{
"start": 26.2,
"end": 30.4,
"text": " fan of this paper I think that the number that comes out of such a contrast"
},
{
"start": 30.4,
"end": 35.6,
"text": " set is very either useless or counterproductive and you can see my"
},
{
"start": 35.6,
"end": 42.96,
"text": " video on that in any case there there was another researcher Zachary Lipton"
},
{
"start": 42.96,
"end": 50,
"text": " who felt like he needed to jump in here saying before the media blitz and"
},
{
"start": 50,
"end": 56,
"text": " retweet party gets out of control this idea exists has been published it has"
},
{
"start": 56,
"end": 61.6,
"text": " a name and a clear justification is called counterfactually augmented data"
},
{
"start": 61.6,
"end": 69.36,
"text": " this is amazing look at that and here's the published paper of course and if we"
},
{
"start": 69.36,
"end": 76.08,
"text": " look at the published paper this is it right here of course Zach Lipton is an"
},
{
"start": 76.08,
"end": 82.8,
"text": " author on that paper and so let's just just read the abstract I haven't read"
},
{
"start": 82.8,
"end": 88.12,
"text": " the paper but let's just read the abstract it so I have it classically I"
},
{
"start": 88.12,
"end": 98.03999999999999,
"text": " have it here my nifty thing here so we can analyze it so this paper if you read"
},
{
"start": 98.03999999999999,
"end": 105,
"text": " the abstract it does sound similar right despite alarm over the reliance of"
},
{
"start": 105,
"end": 108.12,
"text": " union learning systems blah blah blah blah spurious correlations so it talks"
},
{
"start": 108.12,
"end": 114,
"text": " about the same problems now what do they say given documents and their initial"
},
{
"start": 114,
"end": 119.08000000000001,
"text": " labels we task humans with revising each document so that it accords with a"
},
{
"start": 119.08000000000001,
"end": 123.36,
"text": " counterfactual target label retains internal coherence and avoids"
},
{
"start": 123.36,
"end": 129.64000000000001,
"text": " unnecessary changes right so this sounds very similar to what these contrast sets"
},
{
"start": 129.64000000000001,
"end": 135.8,
"text": " do so the counterfactual target label would be the necessary of the"
},
{
"start": 135.8,
"end": 143.36,
"text": " contrast set to change the label retains internal coherence which is the in the"
},
{
"start": 143.36,
"end": 148.92000000000002,
"text": " contrast that this simply given by it supposed to conform to the intent of the"
},
{
"start": 148.92000000000002,
"end": 154.52,
"text": " data set makers which the intent probably includes internal coherence and"
},
{
"start": 154.52,
"end": 160.12,
"text": " it avoids unnecessary changes that conforms to the contrast set only"
},
{
"start": 160.12,
"end": 166.72,
"text": " searching in the local environment of a test set sample so you see that the"
},
{
"start": 166.72,
"end": 174.04,
"text": " definition of these is pretty similar then we go on and say they say class"
},
{
"start": 174.04,
"end": 177.16,
"text": " first trained on original data fail on their counterfactually revised"
},
{
"start": 177.16,
"end": 180.72,
"text": " counterparts and vice versa this experiment was also done by the"
},
{
"start": 180.72,
"end": 186.56,
"text": " contrast that paper and then they say class first trained on combined data"
},
{
"start": 186.56,
"end": 190.44,
"text": " sets performed remarkably well just chive those specialized in either"
},
{
"start": 190.44,
"end": 197.76,
"text": " domain so immediately we see some differences as well right the main"
},
{
"start": 197.76,
"end": 203.92000000000002,
"text": " difference I see is they say we task humans and then they train on the the"
},
{
"start": 203.92000000000002,
"end": 208.24,
"text": " train on the counterfactually revised counterparts which probably means they"
},
{
"start": 208.24,
"end": 213.28,
"text": " use some mechanical Turks here when they say humans because if you want to create"
},
{
"start": 213.28,
"end": 218.48,
"text": " a training data set you need lots of data so they probably take a data set"
},
{
"start": 218.48,
"end": 222.24,
"text": " and run its training data set again through something like mechanical Turk"
},
{
"start": 222.24,
"end": 230.72,
"text": " to get annotations this is exactly what the people of the of the contrast sets"
},
{
"start": 230.72,
"end": 237.88,
"text": " claim is wrong with the current pipeline and they so here we have this this thing"
},
{
"start": 237.88,
"end": 243.2,
"text": " counterfactually augmented stuff so the contrast sets what they say is we"
},
{
"start": 243.2,
"end": 248.44,
"text": " actually need the experts to do this that this the these humans are exactly"
},
{
"start": 248.44,
"end": 255,
"text": " the wrong people to make the data sets so it has the CFA has some elements"
},
{
"start": 255,
"end": 260.8,
"text": " correctly the same namely how they construct these labels but who construct"
},
{
"start": 260.8,
"end": 266.24,
"text": " the labels and for what reason so here it's experts and for what reason it's"
},
{
"start": 266.24,
"end": 271.91999999999996,
"text": " for testing it's they say the experts that make the data set should provide an"
},
{
"start": 271.92,
"end": 280.44,
"text": " additional contrast test set so this is I mean if if this is just my opinion if"
},
{
"start": 280.44,
"end": 285.04,
"text": " this is the same idea of course it's very similar but if this counts as the"
},
{
"start": 285.04,
"end": 290.20000000000005,
"text": " same idea then 95% of all research counts as the same idea as something"
},
{
"start": 290.20000000000005,
"end": 294.88,
"text": " that Jürgen Schmidhuber has done in the 1990s which of course Jürgen Schmidhuber"
},
{
"start": 294.88,
"end": 304,
"text": " will eloquently argue exactly he invented GANs basically the same thing"
},
{
"start": 304,
"end": 310.52,
"text": " so yeah so if this is it's not the same like I have to say this is very close"
},
{
"start": 310.52,
"end": 316.86,
"text": " but it's not as I understand they even cited the other ones so then the"
},
{
"start": 316.86,
"end": 321.15999999999997,
"text": " bickering starts and this is funny I'm like this this is just funny to me so"
},
{
"start": 321.16,
"end": 326.40000000000003,
"text": " Zach Lippen jumps here it says this has been published has a name and a clearer"
},
{
"start": 326.40000000000003,
"end": 330.48,
"text": " justification it's called contractual augmented data here is the published"
},
{
"start": 330.48,
"end": 336.52000000000004,
"text": " paper we just looked at this right and then Matt Gardner answers and he says"
},
{
"start": 336.52000000000004,
"end": 345.64000000000004,
"text": " Zach and Divyansh work is excellent recommend you all go look at it our work"
},
{
"start": 345.64000000000004,
"end": 350.84000000000003,
"text": " provides a different concurrent take on similar issues right and I think here"
},
{
"start": 350.84,
"end": 356.91999999999996,
"text": " someone comments that so he says it is in the related work section although"
},
{
"start": 356.91999999999996,
"end": 362.35999999999996,
"text": " mischaracterized and misattributed as contemporary work so position is really"
},
{
"start": 362.35999999999996,
"end": 369.47999999999996,
"text": " that it is kind of a stolen idea and they were apparently in contact with"
},
{
"start": 369.47999999999996,
"end": 374.55999999999995,
"text": " each other during that so this Matt Gardner here says what the differences"
},
{
"start": 374.55999999999995,
"end": 379.64,
"text": " are he says we take a geometrical view we demonstrate such a wider variety I"
},
{
"start": 379.64,
"end": 383.32,
"text": " mean for all intents and purposes if you go through any of the research go to"
},
{
"start": 383.32,
"end": 388.71999999999997,
"text": " computer vision go to NLP you'll find like the exact I have like I have I"
},
{
"start": 388.71999999999997,
"end": 396.96,
"text": " review two papers each year that want to produce data that better defines the"
},
{
"start": 396.96,
"end": 401.96,
"text": " decision boundary like these people here I mean this is this idea is just get"
},
{
"start": 401.96,
"end": 409.2,
"text": " rehashed over and over in the slightly different form these two are"
},
{
"start": 409.2,
"end": 414.47999999999996,
"text": " particularly close but and then see how they pick our paper was finished two"
},
{
"start": 414.47999999999996,
"end": 423.15999999999997,
"text": " months after theirs and then they say we started the project well before and so"
},
{
"start": 423.15999999999997,
"end": 434.24,
"text": " on why do we feel defensive and then he answers again with this is absolutely"
},
{
"start": 434.24,
"end": 438.91999999999996,
"text": " false our paper was drafted in July your paper was finished the night before the"
},
{
"start": 438.92,
"end": 443.8,
"text": " ACL deadline this is not two months ago but a half a year it is nothing to do"
},
{
"start": 443.8,
"end": 450.08000000000004,
"text": " it says why do you presume to know when we started drop the nonsense we did this"
},
{
"start": 450.08000000000004,
"end": 454.88,
"text": " work in May 2019 present the public results in July posted it's a better"
},
{
"start": 454.88,
"end": 460.32,
"text": " drop the posturing so much of what you're doing here is the very cancer in"
},
{
"start": 460.32,
"end": 468.76,
"text": " the system I mean I agree just you know slightly refining ideas that previously"
},
{
"start": 468.76,
"end": 473.71999999999997,
"text": " were there is very bad problem in academia so this is actually correct to"
},
{
"start": 473.71999999999997,
"end": 478,
"text": " point out but I don't think that this particular instance is particularly bad"
},
{
"start": 478,
"end": 480.71999999999997,
"text": " and then he says I'm afraid you're simply mistaken I have a history of"
},
{
"start": 480.71999999999997,
"end": 485.24,
"text": " publishing similar so I've I've something like the last thing I say I"
},
{
"start": 485.24,
"end": 492.32,
"text": " just invite you to read this beautiful but the last thing to say here if if"
},
{
"start": 492.32,
"end": 499.15999999999997,
"text": " this counterfactually augmented data if this is in fact the first instance of"
},
{
"start": 499.15999999999997,
"end": 505.08,
"text": " this general idea to produce counterfactually augmented data that"
},
{
"start": 505.08,
"end": 512.4,
"text": " that that does actually fulfill these criteria I would be extremely surprised"
},
{
"start": 512.4,
"end": 517.96,
"text": " because this is nothing to do with deep learning right and the real novelty in"
},
{
"start": 517.96,
"end": 523.48,
"text": " her field is mostly deep learning so I'm pretty sure someone must have thought of"
},
{
"start": 523.48,
"end": 529.48,
"text": " something like this when everyone was just doing grammars and manual features"
},
{
"start": 529.48,
"end": 536.5600000000001,
"text": " and things like this so I'm I would be extremely surprised if this hasn't been"
},
{
"start": 536.5600000000001,
"end": 541.2,
"text": " there in one form or another and why the authors of that shouldn't make exactly"
},
{
"start": 541.2,
"end": 545.72,
"text": " the same argument that being said it is fairly close like that the fun part here"
},
{
"start": 545.72,
"end": 551.76,
"text": " is that it is actually a fairly similar idea except after so the idea itself is"
},
{
"start": 551.76,
"end": 558.12,
"text": " fairly similar but here the focus is on different things and it's also on"
},
{
"start": 558.12,
"end": 563.36,
"text": " different data sets and I believe yeah as I said 95% of research falls into"
},
{
"start": 563.36,
"end": 576.88,
"text": " exactly this category so much fun check it out yeah bye bye"
}
] |
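Both the counterfactually-augmented-data paper quoted above and the contrast-set paper in the next record revolve around the same evaluation recipe: take a test example, minimally edit it so the gold label flips, and check whether a model's prediction tracks the edit. Below is a small hedged sketch of that evaluation; the `Example` and `ContrastGroup` containers, the keyword model, and the metric names are my own illustrative placeholders, not the authors' code or their official metrics.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Example:
    text: str
    label: int  # e.g. 1 = the sentence matches the image(s), 0 = it does not

@dataclass
class ContrastGroup:
    original: Example
    perturbed: List[Example]  # small manual edits that (typically) flip the gold label

def evaluate(model: Callable[[str], int], groups: List[ContrastGroup]) -> dict:
    """Accuracy on originals, on perturbations, and whole-group 'consistency'."""
    orig_correct = pert_correct = pert_total = consistent = 0
    for g in groups:
        ok_orig = model(g.original.text) == g.original.label
        ok_perts = [model(p.text) == p.label for p in g.perturbed]
        orig_correct += ok_orig
        pert_correct += sum(ok_perts)
        pert_total += len(ok_perts)
        consistent += ok_orig and all(ok_perts)
    return {
        "original_accuracy": orig_correct / len(groups),
        "contrast_accuracy": pert_correct / pert_total,
        "consistency": consistent / len(groups),
    }

if __name__ == "__main__":
    # Toy stand-in for the dog example from the video: swapping species, count,
    # or pose should flip the label.
    groups = [
        ContrastGroup(
            original=Example("two similarly colored dogs are face to face", 1),
            perturbed=[
                Example("two similarly colored cats are face to face", 0),
                Example("three similarly colored dogs are face to face", 0),
                Example("two similarly colored dogs are back to back", 0),
            ],
        )
    ]

    def keyword_model(s: str) -> int:
        # Deliberately shallow heuristic of the kind these evaluations try to expose.
        return int("dogs" in s and "two" in s)

    print(evaluate(keyword_model, groups))
```

In this toy run the keyword heuristic gets the original and two of the edits right but misses the "back to back" rewrite, which is exactly the kind of local failure around a test example that the consistency number is meant to surface.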
qeEO2GECQk0 | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Evaluating NLP Models via Contrast Sets | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"nlp",
"natural language processing",
"arxiv",
"attention",
"evaluation",
"cheat",
"easy",
"hard",
"adversarial",
"counterfactual",
"hand-crafted",
"test set",
"supervised"
] | Current NLP models are often "cheating" on supervised learning tasks by exploiting correlations that arise from the particularities of the dataset. Therefore they often fail to learn the original intent of the dataset creators. This paper argues that NLP models should be evaluated on Contrast Sets, which are hand-crafted perturbations by the dataset authors that capture their intent in a meaningful way.
https://arxiv.org/abs/2004.02709
Abstract:
Standard test sets for supervised learning evaluate in-distribution generalization. Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities. We propose a new annotation paradigm for NLP that helps to close systematic gaps in the test data. In particular, after a dataset is constructed, we recommend that the dataset authors manually perturb the test instances in small but meaningful ways that (typically) change the gold label, creating contrast sets. Contrast sets provide a local view of a model's decision boundary, which can be used to more accurately evaluate a model's true linguistic capabilities. We demonstrate the efficacy of contrast sets by creating them for 10 diverse NLP datasets (e.g., DROP reading comprehension, UD parsing, IMDb sentiment analysis). Although our contrast sets are not explicitly adversarial, model performance is significantly lower on them than on the original test sets---up to 25\% in some cases. We release our contrast sets as new evaluation benchmarks and encourage future dataset construction efforts to follow similar annotation processes.
Authors: Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, Ben Zhou
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there! Today we're looking at evaluating NLP models via contrast sets. These are too many authors from too many places for me to read out. We'll just jump right into the problem. What is the problem? Let's jump into the solution. Here you see a visual question answering task. Visual question answering in this case. You have two pictures right here. Picture one, picture two and a sentence. Two similarly colored and similarly posed chow dogs are face-to-face in one image. I guess the task here is to have the system answer. Is this correct or incorrect? As you see here I believe that's a correct statement. Or you're maybe tasked to ask which is the image that this applies to. Is it image one or image two? Of course here it's image one. The problem with such systems is that there are a lot of easy things that the models can do that will usually get them the answer. What we like to imagine is that the model will look at this and recognize that this is a dog here. This is a dog. Here is its face and this is a dog and here is its face. It will see there's a count. There's two of them. There's two of them. There's a notion of face and there's notion of pose and so on. Usually there are tricks that the models can do to get this easier. For example I know that in a particular visual question answering system whenever there is a question of what is the ground covered in or something like this. The answer is always snow. You don't even have to look at the image. Similarly there are a lot of these kind of tricks that the models learn and the authors recognize correctly that this is mostly a data set problem. Usually what you do in these data sets is you have an image that you scrape from the web or something and it has some mountains and then there's snow on the mountains, on the ground. You give this to a bunch of mechanical turks or someone like a raider and you instruct them. You produce a question to this image. You give them a couple of examples and they're usually kind of lazy and they will just look at it and be like what questions could I ask? You need to ask something. Usually the instructions are it must be visual and it must maybe be answerable with a one word answer or something like this. Or it must be a multiple choice question. There are these number of instructions and they will usually be like what's kind of special about this picture? There's snow so I'm gonna ask about that. Snow right? The problem is mainly the process of data set generation. That will lead to biases and easy solutions for the models where the models will simply learn statistical correlations between things and the intention. We have a big divergence between the intention of what the data set creators want. The intention is in this case is visual understanding, visual of the world. There's a big difference between this and between how the data set is really constructed. The authors are trying to address this with what they call contrast sets. They say you get out of this process a data set. You get a training data set and a test data set. Maybe here a smaller test data set. What they say is what we should do is we should additionally have these things called contrast sets. This is train and this is test. Usually these two come from the same distribution. You simply make them and then you split them somehow and you take the test from the train. But these here are not from the same distribution. This is the contrast. 
What they argue is that the authors of the data set should create the contrast set. You see that there's a split here where the data set comes from. They argue that the authors of the data set with knowing what intention they have, they should create the contrast data set manually by hand in order to make really hard examples that show what they want out of a system. They capture this here in their example. If we go back to the example, here are things. They suggest to do this via perturbations. What they would do is they would start at this example up here. They would start and they would perturb it textually or via image. They would perturb it to make it change the gold label. This is different from adversarial examples. In adversarial examples you would want to perturb a sample such that it is still the same but to the classifier it's different. Here you have the opposite gold. You want to make something that is means kind of the opposite but you want to test whether your classifier can pick up on that. In this case the one example would be two similarly colored and similarly posed cats instead of dogs are face to face in one image. That would change the label. Whereas before the answer was yes that's a correct sentence. Now it's no that's not a correct sentence. There are no cats in these images. Also here three similarly colored dogs. The intention of the authors, you have to view it through this lens, the intention here is that the system can recognize the species of the entities in the images. The system can count and the system can compare right compare in this case colors. You want to kind of make perturbations on these attributes from a test image. You can also think about image perturbations where you keep the sentence but you modify the image such that there are still two dogs and they're still facing each other. But they're not similarly colored anymore. So the similarly colored here would be the attribute that where before it was true now it's false with the new image. You get the gist that the people that created the data set that know their intention will create manually these samples. The authors they propose a new metric to track this but essentially the authors propose how well the models do on these contrast sets will be a reflection. It should be kind of an additional thing that people do with their NLP models. Alright so you get the picture. That is I believe the entire gist of this paper and I have some problems. First of all here they say alright let's give a toy example in two dimensions. Say you have this data set right and the red one is the correct decision boundary right and you want to capture that but because you only have limited training data and because you in in this generation processes you have systematic biases. So if we had non-systematic biases we would think that okay we maybe pick this and this one and this one here and this one here and this one here right. We don't get all of them but we kind of get an IID sample right. That wouldn't be so much of a problem. You could still kind of recover the decision boundary but because we have systematic biases the authors argue we actually introduce biases. So the systematic bias here is that we would of the blue ones we would only capture things on this sorry on the on this layer up here and of the red ones orange ones we'd only capture things of the level down here and thereby we introduce the kind of data set, the bias. It doesn't require this complex decision boundary anymore. 
Right and if we now the problem is if we collect the data set like this and we simply say well these ones are the test set and these ones are the train set right it will generalize well to the test set but it will not generalize well to what we actually want and therefore the authors say well if we introduce these contrast sets here then you see that the decision boundary that we found will not perform well on these contrast sets right. So they would say we take one example of the test set right. This is you can see this is this example right here and we would perturb it to make it change its label or in this case one of them where the label remains. We would kind of perturb it meaningfully and yeah so as I said I have multiple problems with this. First 2D toy examples very very bad for NLP models. First of all low-dimensional intuition does not generalize to high-dimensional intuition like very very little. Second of all even worse usually these NLP models have so many parameters much more parameters than you have data set which means that your decision boundary is incidentally going to be simple even if you had all the data you could possibly want. It's just a very different kind of problem and then the next problem is if even with by doing this contrast set and you already see it here right you already see it you can only kind of bicker about the data okay but with the contrast that you only really capture this one aspect so if that was actually well adhered to you could measure very locally whether or not this this would work or not and the ability to come up with meaningful contrast sets to ever capture what the model is doing is almost impossible because you have to create them manually and then you suggest that the authors themselves make these contrast sets. 
Remember, the authors are the ones that gave these instructions, these instructions right here; the authors provided them to the data set annotators. So the authors will probably be even more biased if they have to create their own contrast examples. Even though they know their intention, they will probably be more biased, because at least this here is a distributed process across people, so you get things that you wouldn't have thought of. But if just the three authors of the paper make the contrast examples, I would argue that that's an even more biased measure. So all of this just strikes me as the paper basically saying, let's try on a few things, and I think the fundamental problem is much, much deeper, and it goes with this intention part. Like, I get it, the visual question answering data set doesn't capture what you want. It doesn't make the model suddenly understand that there are dogs and that there are species of animal and so on; it simply makes it correlate things. But that's what deep learning, and especially NLP, does. It's like saying you build an ImageNet classifier, and then, if my test requires my computer to fly and my ImageNet model can't do this, it doesn't serve my intention. I mean, it's a crass example, but ultimately the correct approach should be to better encapsulate your intention into the data set generating process and then correctly interpret the results, which means: okay, on this data set, as far as we can tell, given the way we created it, this is the performance of the model. The model will never learn to fulfill your intention, and I get it, that's what they're saying, but still, even with this contrast set, I think it's a really bad measure to formally propose. I think you should much more propose: how is the data set generating process different from what you want, and what are the limitations there? That, I think, will lead to much more meaningful results than simply the authors providing a few manually made examples that they feel capture their intention. It will not. The reason we do deep learning instead of straightforward if-else programming is because we cannot capture even our intentions, and therefore data set generation is the only method we have, so to say. All right, so ultimately I believe this whole NLP, especially the visual question answering and so on, the natural language understanding part, needs to have a grounding. Ultimately I think grounded NLP basically means that you're not only doing NLP, which is simply: you take text and you take images and you correlate them somehow, you just make a statistical connection. Grounded NLP models are the hope that you could build something that actually understands the world, understands that there are entities that interact, that there's something like a pose, that there is something like what a color means, what a dog is, and so on, as entities. I think we're not there yet, and I think that will be the ultimate solution to these kinds of tasks, not any sort of very local, very low dimensional perturbation. I mean, yeah, let's say you create a contrast set, you will be able to capture one tiny little bit of your intention, one tiny little bit, even though you know your intention you will capture a tiny little 
bit; all of the thousand other degrees of freedom of your own intention you won't be able to capture in the contrast set, I guarantee you. All right, those were my quarrels with that. I invite you to read the whole paper. They actually do this for NLP datasets, it's a lot of work, and they show that the models perform much worse on their contrast sets and, interestingly, the humans don't. The humans are able to solve the contrast sets, of course, because you tell the humans what the task is. That's like: humans succeed on contrast sets, how surprising. What you should do is you should just provide the humans with the data set and not tell them what the task is. Even worse, just provide them with the encoded data set, not the text itself but actually the token IDs, and then make them do the thing, and the humans will just as well make a statistical correlation between the tokens and the images or whatnot, and the humans will fail just as well on these contrast sets, because maybe they'll figure out what the task is, but probably not. So humans succeed on contrast sets, how surprising: you tell them the intention while you don't tell it to the model. Yes, I'm critical here, but yeah, please read the paper, it's an interesting paper, and with that, goodbye | [
{
"start": 0,
"end": 5.68,
"text": " Hi there! Today we're looking at evaluating NLP models via contrast sets."
},
{
"start": 5.68,
"end": 12.8,
"text": " These are too many authors from too many places for me to read out."
},
{
"start": 12.8,
"end": 22.32,
"text": " We'll just jump right into the problem. What is the problem? Let's jump into"
},
{
"start": 22.32,
"end": 28.92,
"text": " the solution. Here you see a visual question answering task. Visual question"
},
{
"start": 28.92,
"end": 34.32,
"text": " answering in this case. You have two pictures right here. Picture one, picture"
},
{
"start": 34.32,
"end": 42.6,
"text": " two and a sentence. Two similarly colored and similarly posed chow dogs are"
},
{
"start": 42.6,
"end": 51.72,
"text": " face-to-face in one image. I guess the task here is to have the"
},
{
"start": 51.72,
"end": 57.68000000000001,
"text": " system answer. Is this correct or incorrect? As you see here I believe"
},
{
"start": 57.68,
"end": 65.48,
"text": " that's a correct statement. Or you're maybe tasked to ask which is the"
},
{
"start": 65.48,
"end": 70.2,
"text": " image that this applies to. Is it image one or image two? Of course"
},
{
"start": 70.2,
"end": 78.52,
"text": " here it's image one. The problem with such systems is that there are a"
},
{
"start": 78.52,
"end": 84.16,
"text": " lot of easy things that the models can do that will usually get them the"
},
{
"start": 84.16,
"end": 89,
"text": " answer. What we like to imagine is that the model will look at this and recognize"
},
{
"start": 89,
"end": 94.39999999999999,
"text": " that this is a dog here. This is a dog. Here is its face and this is a dog and"
},
{
"start": 94.39999999999999,
"end": 100.47999999999999,
"text": " here is its face. It will see there's a count. There's two of them."
},
{
"start": 100.47999999999999,
"end": 110.19999999999999,
"text": " There's two of them. There's a notion of face and there's notion of pose and so"
},
{
"start": 110.2,
"end": 117.72,
"text": " on. Usually there are tricks that the models can do to get this easier."
},
{
"start": 117.72,
"end": 122.24000000000001,
"text": " For example I know that in a particular visual question answering system"
},
{
"start": 122.24000000000001,
"end": 135.12,
"text": " whenever there is a question of what is the ground covered in or something like"
},
{
"start": 135.12,
"end": 142.64000000000001,
"text": " this. The answer is always snow. You don't even have to look at the image."
},
{
"start": 142.64000000000001,
"end": 148.88,
"text": " Similarly there are a lot of these kind of tricks that the models learn and the"
},
{
"start": 148.88,
"end": 154.20000000000002,
"text": " authors recognize correctly that this is mostly a data set problem."
},
{
"start": 154.20000000000002,
"end": 160.08,
"text": " Usually what you do in these data sets is you have an image"
},
{
"start": 160.08,
"end": 163.8,
"text": " that you scrape from the web or something"
},
{
"start": 163.8,
"end": 170.92000000000002,
"text": " and it has some mountains and then there's snow on the mountains, on the ground."
},
{
"start": 170.92000000000002,
"end": 181,
"text": " You give this to a bunch of mechanical turks or someone like a raider and you"
},
{
"start": 181,
"end": 186.36,
"text": " instruct them. You produce a question to this image. You give them a couple of"
},
{
"start": 186.36,
"end": 190.92000000000002,
"text": " examples and they're usually kind of lazy and they will just look at it and"
},
{
"start": 190.92,
"end": 196.72,
"text": " be like what questions could I ask? You need to ask something."
},
{
"start": 196.72,
"end": 204.32,
"text": " Usually the instructions are it must be visual and it must maybe be answerable"
},
{
"start": 204.32,
"end": 210.79999999999998,
"text": " with a one word answer or something like this. Or it must be a"
},
{
"start": 210.79999999999998,
"end": 214.79999999999998,
"text": " multiple choice question. There are these number of instructions and they will"
},
{
"start": 214.79999999999998,
"end": 218.92,
"text": " usually be like what's kind of special about this picture? There's snow"
},
{
"start": 218.92,
"end": 228.44,
"text": " so I'm gonna ask about that. Snow right? The problem is mainly the process"
},
{
"start": 228.44,
"end": 235.64,
"text": " of data set generation. That will lead to biases and easy"
},
{
"start": 235.64,
"end": 240.16,
"text": " solutions for the models where the models will simply learn"
},
{
"start": 240.16,
"end": 245.72,
"text": " statistical correlations between things and the intention. We have a big"
},
{
"start": 245.72,
"end": 257.64,
"text": " divergence between the intention of what the data set creators"
},
{
"start": 257.64,
"end": 268.88,
"text": " want. The intention is in this case is visual understanding, visual of the"
},
{
"start": 268.88,
"end": 275.96,
"text": " world. There's a big difference between this and between how the data"
},
{
"start": 275.96,
"end": 282.68,
"text": " set is really constructed. The authors are trying to address this with"
},
{
"start": 282.68,
"end": 287.64,
"text": " what they call contrast sets. They say you get out of this process"
},
{
"start": 287.64,
"end": 292.56,
"text": " a data set. You get a training data set and a test data set."
},
{
"start": 292.56,
"end": 298.68,
"text": " Maybe here a smaller test data set. What they say is what we should do is we"
},
{
"start": 298.68,
"end": 306.88,
"text": " should additionally have these things called contrast sets. This is train"
},
{
"start": 306.88,
"end": 313.56,
"text": " and this is test. Usually these two come from the same distribution. You"
},
{
"start": 313.56,
"end": 318.76,
"text": " simply make them and then you split them somehow and you take the test from the"
},
{
"start": 318.76,
"end": 326.15999999999997,
"text": " train. But these here are not from the same distribution. This is the contrast."
},
{
"start": 326.15999999999997,
"end": 334.08,
"text": " What they argue is that the authors of the data set should create the contrast"
},
{
"start": 334.08,
"end": 341.08,
"text": " set. You see that there's a split here where the data set comes from."
},
{
"start": 341.08,
"end": 345.64,
"text": " They argue that the authors of the data set with knowing what intention they"
},
{
"start": 345.64,
"end": 351.88,
"text": " have, they should create the contrast data set manually by hand in order to"
},
{
"start": 351.88,
"end": 357.71999999999997,
"text": " make really hard examples that show what they want out of a system."
},
{
"start": 357.71999999999997,
"end": 364.96,
"text": " They capture this here in their example. If we go back to the example, here"
},
{
"start": 364.96,
"end": 371.74,
"text": " are things. They suggest to do this via perturbations. What they would do"
},
{
"start": 371.74,
"end": 377.56,
"text": " is they would start at this example up here. They would start and they would"
},
{
"start": 377.56,
"end": 386.56,
"text": " perturb it textually or via image. They would perturb it to make it change"
},
{
"start": 386.56,
"end": 391.28000000000003,
"text": " the gold label. This is different from adversarial examples. In"
},
{
"start": 391.28000000000003,
"end": 397.40000000000003,
"text": " adversarial examples you would want to perturb a sample such that it is still"
},
{
"start": 397.4,
"end": 402.03999999999996,
"text": " the same but to the classifier it's different. Here you have the opposite gold."
},
{
"start": 402.03999999999996,
"end": 408.67999999999995,
"text": " You want to make something that is means kind of the opposite but you want to"
},
{
"start": 408.67999999999995,
"end": 414.12,
"text": " test whether your classifier can pick up on that. In this case the one example"
},
{
"start": 414.12,
"end": 418.28,
"text": " would be two similarly colored and similarly posed cats instead of dogs"
},
{
"start": 418.28,
"end": 423.59999999999997,
"text": " are face to face in one image. That would change the label. Whereas"
},
{
"start": 423.6,
"end": 429.16,
"text": " before the answer was yes that's a correct sentence. Now it's no that's not"
},
{
"start": 429.16,
"end": 435.44,
"text": " a correct sentence. There are no cats in these images. Also here three similarly"
},
{
"start": 435.44,
"end": 440.28000000000003,
"text": " colored dogs. The intention of the authors, you have to view it through"
},
{
"start": 440.28000000000003,
"end": 446.92,
"text": " this lens, the intention here is that the system can recognize the species of"
},
{
"start": 446.92,
"end": 454.04,
"text": " the entities in the images. The system can count and the system can compare"
},
{
"start": 454.04,
"end": 460.08000000000004,
"text": " right compare in this case colors. You want to kind of make perturbations on"
},
{
"start": 460.08000000000004,
"end": 465.36,
"text": " these attributes from a test image. You can also think about image"
},
{
"start": 465.36,
"end": 471.08000000000004,
"text": " perturbations where you keep the sentence but you modify the image such"
},
{
"start": 471.08000000000004,
"end": 475.84000000000003,
"text": " that there are still two dogs and they're still facing each other."
},
{
"start": 475.84,
"end": 481.59999999999997,
"text": " But they're not similarly colored anymore. So the similarly"
},
{
"start": 481.59999999999997,
"end": 489.23999999999995,
"text": " colored here would be the attribute that where before it was true now it's false"
},
{
"start": 489.23999999999995,
"end": 495.28,
"text": " with the new image. You get the gist that the people that created the"
},
{
"start": 495.28,
"end": 503.2,
"text": " data set that know their intention will create manually these samples. The"
},
{
"start": 503.2,
"end": 508.64,
"text": " authors they propose a new metric to track this but essentially the authors"
},
{
"start": 508.64,
"end": 515.04,
"text": " propose how well the models do on these contrast sets will be a reflection."
},
{
"start": 515.04,
"end": 521.16,
"text": " It should be kind of an additional thing that people do with their NLP"
},
{
"start": 521.16,
"end": 530.24,
"text": " models. Alright so you get the picture. That is I believe the entire gist of"
},
{
"start": 530.24,
"end": 540.04,
"text": " this paper and I have some problems. First of all here they say alright let's"
},
{
"start": 540.04,
"end": 544.6,
"text": " give a toy example in two dimensions. Say you have this data set right and the red"
},
{
"start": 544.6,
"end": 549.16,
"text": " one is the correct decision boundary right and you want to capture that but"
},
{
"start": 549.16,
"end": 555.48,
"text": " because you only have limited training data and because you in in this"
},
{
"start": 555.48,
"end": 562.6,
"text": " generation processes you have systematic biases. So if we had non-systematic"
},
{
"start": 562.6,
"end": 569.32,
"text": " biases we would think that okay we maybe pick this and this one and this one here"
},
{
"start": 569.32,
"end": 573.16,
"text": " and this one here and this one here right. We don't get all of them but we"
},
{
"start": 573.16,
"end": 577.4,
"text": " kind of get an IID sample right. That wouldn't be so much of a problem. You"
},
{
"start": 577.4,
"end": 580.88,
"text": " could still kind of recover the decision boundary but because we have"
},
{
"start": 580.88,
"end": 588.96,
"text": " systematic biases the authors argue we actually introduce biases. So the"
},
{
"start": 588.96,
"end": 594.04,
"text": " systematic bias here is that we would of the blue ones we would only capture"
},
{
"start": 594.04,
"end": 603.04,
"text": " things on this sorry on the on this layer up here and of the red ones orange"
},
{
"start": 603.04,
"end": 608.72,
"text": " ones we'd only capture things of the level down here and thereby we introduce"
},
{
"start": 608.72,
"end": 615.64,
"text": " the kind of data set, the bias. It doesn't require this complex decision"
},
{
"start": 615.64,
"end": 623.6,
"text": " boundary anymore. Right and if we now the problem is if we collect the data set"
},
{
"start": 623.6,
"end": 628.9200000000001,
"text": " like this and we simply say well these ones are the test set and these ones"
},
{
"start": 628.9200000000001,
"end": 633.12,
"text": " are the train set right it will generalize well to the test set but it"
},
{
"start": 633.12,
"end": 640.4,
"text": " will not generalize well to what we actually want and therefore the authors"
},
{
"start": 640.4,
"end": 645.84,
"text": " say well if we introduce these contrast sets here then you see that the decision"
},
{
"start": 645.84,
"end": 652.96,
"text": " boundary that we found will not perform well on these contrast sets right. So"
},
{
"start": 652.96,
"end": 659.12,
"text": " they would say we take one example of the test set right. This is you can see"
},
{
"start": 659.12,
"end": 665.36,
"text": " this is this example right here and we would perturb it to make it change its"
},
{
"start": 665.36,
"end": 670.8,
"text": " label or in this case one of them where the label remains. We would kind of"
},
{
"start": 670.8,
"end": 678.76,
"text": " perturb it meaningfully and yeah so as I said I have multiple problems with this."
},
{
"start": 678.76,
"end": 687.12,
"text": " First 2D toy examples very very bad for NLP models. First of all low-dimensional"
},
{
"start": 687.12,
"end": 692.2,
"text": " intuition does not generalize to high-dimensional intuition like very very"
},
{
"start": 692.2,
"end": 699.8,
"text": " little. Second of all even worse usually these NLP models have so many parameters"
},
{
"start": 699.8,
"end": 704.84,
"text": " much more parameters than you have data set which means that your decision"
},
{
"start": 704.84,
"end": 710.92,
"text": " boundary is incidentally going to be simple even if you had all the data you"
},
{
"start": 710.92,
"end": 719.8399999999999,
"text": " could possibly want. It's just a very different kind of problem and then the"
},
{
"start": 719.8399999999999,
"end": 727.68,
"text": " next problem is if even with by doing this contrast set and you already see it"
},
{
"start": 727.68,
"end": 733.18,
"text": " here right you already see it you can only kind of bicker about the data okay"
},
{
"start": 733.18,
"end": 737.68,
"text": " but with the contrast that you only really capture this one aspect so if"
},
{
"start": 737.68,
"end": 746.1999999999999,
"text": " that was actually well adhered to you could measure very locally whether or"
},
{
"start": 746.1999999999999,
"end": 752.16,
"text": " not this this would work or not and the ability to come up with meaningful"
},
{
"start": 752.16,
"end": 758.16,
"text": " contrast sets to ever capture what the model is doing is almost impossible"
},
{
"start": 758.16,
"end": 764.7199999999999,
"text": " because you have to create them manually and then you suggest that the authors"
},
{
"start": 764.72,
"end": 769.64,
"text": " themselves make these contrast sets. Remember the authors are the ones that"
},
{
"start": 769.64,
"end": 774.28,
"text": " gave these instructions right these instructions right here the authors"
},
{
"start": 774.28,
"end": 782.48,
"text": " provided them to the to the data set annotators so the authors will probably"
},
{
"start": 782.48,
"end": 787.72,
"text": " be even more biased if they have to do their own right if they have to now"
},
{
"start": 787.72,
"end": 793.4,
"text": " create their own contrast examples they will probably even though they know"
},
{
"start": 793.4,
"end": 799.0799999999999,
"text": " their intention they will probably be like more biased than if you at least"
},
{
"start": 799.0799999999999,
"end": 803.4399999999999,
"text": " this here at least this here is a distributed process across people right"
},
{
"start": 803.4399999999999,
"end": 807.36,
"text": " so you get things that you wouldn't have thought of but if just the three authors"
},
{
"start": 807.36,
"end": 811.28,
"text": " of the date of the paper make the contrast examples I would argue that"
},
{
"start": 811.28,
"end": 819.56,
"text": " that's an even more biased measure often so all of this it just strikes me as as"
},
{
"start": 819.56,
"end": 825.4799999999999,
"text": " the paper is basically saying let's try on a few things and I think the"
},
{
"start": 825.4799999999999,
"end": 831.4,
"text": " fundamental problem is much much deeper and it goes with this intention part"
},
{
"start": 831.4,
"end": 839.92,
"text": " like I get it the the visual question answering data set doesn't capture the"
},
{
"start": 839.92,
"end": 845.52,
"text": " doesn't capture what you want it doesn't make the model suddenly understand that"
},
{
"start": 845.52,
"end": 849.1999999999999,
"text": " there are dogs and there are species of animal and so on it simply makes it"
},
{
"start": 849.2,
"end": 855,
"text": " correlate things but that's what deep learning and especially NLP does so"
},
{
"start": 855,
"end": 861.72,
"text": " right it's like it's like saying you you build a build an image net classifier"
},
{
"start": 861.72,
"end": 870.24,
"text": " and it can't fly and identify if I try it on my tests that it requires my"
},
{
"start": 870.24,
"end": 876.2,
"text": " computer to fly and my image net model can't do this then it doesn't serve my"
},
{
"start": 876.2,
"end": 883.12,
"text": " intention right and I mean it's it's a crass example but ultimately you the"
},
{
"start": 883.12,
"end": 889.8000000000001,
"text": " correct approach should be to better encapsulate your intention into the"
},
{
"start": 889.8000000000001,
"end": 894.76,
"text": " data set generating process and then correctly interpreting the results that"
},
{
"start": 894.76,
"end": 900.08,
"text": " mean okay on this data set as far as we can tell the way we created it this is"
},
{
"start": 900.08,
"end": 906.24,
"text": " the performance of the model it doesn't the model will never learn to fulfill"
},
{
"start": 906.24,
"end": 910.6,
"text": " your intention and I get it that's what you're saying but still even with this"
},
{
"start": 910.6,
"end": 919.2,
"text": " contrast set I think it's a really bad measure to formally propose it's I think"
},
{
"start": 919.2,
"end": 923.96,
"text": " you should much more propose how is the data set generating process different"
},
{
"start": 923.96,
"end": 931.4000000000001,
"text": " from what you want and what are the limitations there right and so that's"
},
{
"start": 931.4000000000001,
"end": 938.5600000000001,
"text": " that that I think that will lead to much more meaningful meaningful results than"
},
{
"start": 938.5600000000001,
"end": 943.9200000000001,
"text": " simply the authors providing a few manually put examples that they feel"
},
{
"start": 943.9200000000001,
"end": 948.76,
"text": " capture their intention it will not will not the reason we do deep learning"
},
{
"start": 948.76,
"end": 954.64,
"text": " instead of straightforward if else programming is because we cannot"
},
{
"start": 954.64,
"end": 961.2,
"text": " capture even our intentions and therefore data set generation is the"
},
{
"start": 961.2,
"end": 969.64,
"text": " only is the only method we have so to say all right so ultimately I believe"
},
{
"start": 969.64,
"end": 973.8,
"text": " these these whole NLP especially the visual question answering and so on the"
},
{
"start": 973.8,
"end": 980.5999999999999,
"text": " natural language understanding part needs to have a grounding so ultimately I"
},
{
"start": 980.5999999999999,
"end": 988.8399999999999,
"text": " think grounding grounded NLP it means basically that you're not only doing NLP"
},
{
"start": 988.8399999999999,
"end": 992.76,
"text": " which is simply you take text and you take images and you correlate them"
},
{
"start": 992.76,
"end": 997.92,
"text": " somehow right you just make a statistical connection grounded NLP"
},
{
"start": 997.92,
"end": 1001.92,
"text": " models is the hope that you could build something that actually understands the"
},
{
"start": 1001.92,
"end": 1005.76,
"text": " world understands that there's entities that is interacted there's something"
},
{
"start": 1005.76,
"end": 1011,
"text": " like a pose that there is something like what the color means right what a dog is"
},
{
"start": 1011,
"end": 1017.4399999999999,
"text": " and so on and as entities I think we're not there yet and I think that will be"
},
{
"start": 1017.4399999999999,
"end": 1026.6,
"text": " the ultimate solution to these kind of tasks not not any sort of local very"
},
{
"start": 1026.6,
"end": 1032.08,
"text": " local very low dimensional perturbation I mean yeah let's say you create a"
},
{
"start": 1032.08,
"end": 1039.08,
"text": " contrast set you will be able to capture one tiny little bit of your intention"
},
{
"start": 1039.08,
"end": 1043.36,
"text": " one tiny little bit even though you know your intention you will capture a tiny"
},
{
"start": 1043.36,
"end": 1048.7199999999998,
"text": " little bit all of the thousand other degrees of freedom of your own intention"
},
{
"start": 1048.7199999999998,
"end": 1053.76,
"text": " you won't be able in there to capture in the contrast set I guarantee you all"
},
{
"start": 1053.76,
"end": 1058.96,
"text": " right that was my quarrels with that I invite you to read the whole paper they"
},
{
"start": 1058.96,
"end": 1065.4,
"text": " actually do this for NLP datasets it's a lot of work and they show that the"
},
{
"start": 1065.4,
"end": 1070.08,
"text": " models perform much worse on their contrast sets and interestingly the"
},
{
"start": 1070.08,
"end": 1073.8799999999999,
"text": " humans don't the humans are able to solve the contrast set of course of"
},
{
"start": 1073.8799999999999,
"end": 1080.36,
"text": " course because you tell the humans what the task is right that's like humans"
},
{
"start": 1080.36,
"end": 1087,
"text": " succeed on contrasts at like how surprising what you should do is you"
},
{
"start": 1087,
"end": 1091.8799999999999,
"text": " should just provide the humans with the data set not tell them what the task is"
},
{
"start": 1091.8799999999999,
"end": 1096.6399999999999,
"text": " even worse just provide them with the encoded data set like not the text"
},
{
"start": 1096.6399999999999,
"end": 1102.1599999999999,
"text": " itself but actually the token IDs right and then and then make them do the thing"
},
{
"start": 1102.1599999999999,
"end": 1107.56,
"text": " and the humans will just as well make a statistical correlation between the"
},
{
"start": 1107.56,
"end": 1113.32,
"text": " tokens and the images or whatnot and the humans will fail just as well on the"
},
{
"start": 1113.32,
"end": 1118.98,
"text": " test on these contrast sets because the humans maybe they'll figure out what the"
},
{
"start": 1118.98,
"end": 1124.32,
"text": " task is but probably not so humans succeed on contrasts at how surprising"
},
{
"start": 1124.32,
"end": 1131.6799999999998,
"text": " you tell them the intention while you don't tell it to the model yes I see"
},
{
"start": 1131.6799999999998,
"end": 1136.6,
"text": " critical but yeah please read the paper it's an interesting paper and with that"
},
{
"start": 1136.6,
"end": 1139.6,
"text": " goodbye"
}
] |
8wkgDnNxiVs | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and Solutions | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"arxiv",
"evolution",
"reinforcement learning",
"neat",
"open-ended",
"never ending",
"population",
"bipedal walker"
] | From the makers of Go-Explore, POET is a mixture of ideas from novelty search, evolutionary methods, open-ended learning and curriculum learning.
https://arxiv.org/abs/1901.01753
Abstract:
While the history of machine learning so far largely encompasses a series of problems posed by researchers and algorithms that learn their solutions, an important question is whether the problems themselves can be generated by the algorithm at the same time as they are being solved. Such a process would in effect build its own diverse and expanding curricula, and the solutions to problems at various stages would become stepping stones towards solving even more challenging problems later in the process. The Paired Open-Ended Trailblazer (POET) algorithm introduced in this paper does just that: it pairs the generation of environmental challenges and the optimization of agents to solve those challenges. It simultaneously explores many different paths through the space of possible problems and solutions and, critically, allows these stepping-stone solutions to transfer between problems if better, catalyzing innovation. The term open-ended signifies the intriguing potential for algorithms like POET to continue to create novel and increasingly complex capabilities without bound. Our results show that POET produces a diverse range of sophisticated behaviors that solve a wide range of environmental challenges, many of which cannot be solved by direct optimization alone, or even through a direct-path curriculum-building control algorithm introduced to highlight the critical role of open-endedness in solving ambitious challenges. The ability to transfer solutions from one environment to another proves essential to unlocking the full potential of the system as a whole, demonstrating the unpredictable nature of fortuitous stepping stones. We hope that POET will inspire a new push towards open-ended discovery across many domains, where algorithms like POET can blaze a trail through their interesting possible manifestations and solutions.
Authors: Rui Wang, Joel Lehman, Jeff Clune, Kenneth O. Stanley
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Alright, so what you're seeing here are solutions found to this bipedal walker problem by a new algorithm called PoET. So as you might guess, the challenge is to keep this little thing here walking to the right as far as you can while it encounters various obstacles. And it is and remains a challenging reinforcement learning problem to have an agent learn to overcome various obstacles and walk well in different environments. So the paper we're going to look at is called PoET. It's by Uber Engineering. And the full name is the Paired Open-Ended Trailblazer, endlessly generating increasingly complex and diverse learning environments and their solutions, by Rui Wang, Joel Lehman, Jeff Clune and Kenneth O. Stanley, as I said from Uber AI Labs. So as you already saw, the challenge they take on is this bipedal walker problem. Now their method is very general and not limited to this problem, but this is the problem that they focus on. I'm going to skip some of the explanations here and dig right into the problem. As you can see, the problem is the following. You have this thing here, which is the walker, and it has two legs and specifically it has four joints. So the four joints are here, two, and here, two. And you can give torque on all of the four joints. So it's basically a four-output problem. And you do have sensors as input. So the input, I believe, is a LIDAR. So the LIDAR is this red line you see here. I think it has 16 of those in various angles. And also it has pressure detection on the feet, I believe, to see whether or not they are in contact with the ground. And it might also have a gyroscope that tells you at which angle with respect to the ground the head is. So you have various sensors on these things, and you're able to basically control what the legs are doing. And your goal is to make this go as far to the right and as fast as possible. You see the reward down here is negative 100 if the robot falls over. That means if the head hits the ground. And then it is 130 times delta x. That's how far you go to the right, minus the hull angle. And the hull angle, as I said, is this angle here. So you want to keep it as stable as possible, because if there's a difference in the angle per step, then you get penalized. And also you get penalized for each torque you apply. So you want to kind of apply minimal force on the joints in order to go very far. But by far the most important point is to go to the right as far and as fast as you can. There is an end here somewhere. And if you reach it, you get a score that is above 230. They choose the limit of 230 here to determine success. So if the agent gets 230 or more, then it has solved the environment. That's what they claim, from experience. So as you see, the environment has various obstacles here. There are holes that you can fall into that you need to jump or step over. There are these kinds of stumps here. They can be of various heights. So this is a bit shorter and this is a bit longer. And the general terrain has a roughness, and this can go from very smooth to very rough. So this is a parameterized environment. And obviously they are able to generate these environments from parameters. And the goal now is to have an agent that walks well in any environment that you can think of. Right, so here on the left you see this is very challenging, down the stairs. This also isn't too easy because there is a gap here. And there are five parameters of these environments. 
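Before getting to those parameters, here is a quick Python sketch of the reward structure just described. The exact constants in the real BipedalWalker environment differ in detail, so the coefficients, function names and the fall check below are placeholders that only follow the spoken description: progress to the right, a hull-angle penalty, a torque penalty, minus 100 on falling, and 230 as the "solved" threshold.

```python
import numpy as np


def step_reward(delta_x, hull_angle, torques, fell_over,
                angle_coef=5.0, torque_coef=0.00035):
    """Per-step reward following the description above (coefficients are placeholders)."""
    if fell_over:                                     # the hull/head touched the ground
        return -100.0
    r = 130.0 * delta_x                               # progress to the right dominates
    r -= angle_coef * abs(hull_angle)                 # keep the hull level
    r -= torque_coef * float(np.abs(torques).sum())   # apply as little torque as possible
    return float(r)


def solved(episode_rewards):
    """POET's criterion as described in the video: a return of 230 or more counts as solved."""
    return sum(episode_rewards) >= 230.0


# Example: one small step to the right with a slightly tilted hull and modest torques.
print(step_reward(0.01, 0.05, np.array([0.4, -0.2, 0.1, 0.3]), False))
```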
So there is the general roughness of the terrain. That means how many hills it has and how fast they are coming. There is the stump lower bound and stump upper bound, I believe. So how high the stumps are. And also how long the gaps are. And with these parameters you control how difficult an environment is. So the straightforward thing to do is simply to sample environments and have a reinforcement learn approach to this. And that usually doesn't work. I already want to see this without having talked about what the algorithm is. This is the approach where you try this thing. It's called evolution strategies. But you can think of it as just a straightforward optimization procedure. So there is an agent and there is an environment and you are trying to solve the environment using just straightforward optimization. Now the evolution strategies are not your classic algorithm but you can compare it to it. It's just that these people, they like the more, I have a feeling they like the more esoteric learning algorithms. In any case, you see in these environments large gap, rough surface and so on. These are supposed to be the platinum figures. So these two environments and also these environments here. The evolution strategy, so the classic approach if you just straight forward optimize, they get very low scores on average, whereas poet gets here very high scores above the 230 threshold. So what's happening? If you're trying to just solve these environments from scratch, you basically don't really have a big chance of solving them. Because let's say you're here and you're trying to move to the right, you know, you might learn how to do this and you see this from scratch solution actually manages to get to the right. But then as soon as you reach this, you're in this gap and you just fall down the gap because all you've learned so far is how to move right. So what you would need to do is you would need to plan ahead like what poet does. You need to see that there is a gap. You need to plan ahead and already lift up a leg in order to then step over the gap here and then do a little jump right here. And this sequence of action, this kind of planning ahead, it is very difficult to learn this for a classic RL algorithm because you basically get reward for everything you do. So initially you get reward for moving to the right. So that's 10 if you reach here, another 10 if you reach here. And so there is another 10 if you reach here and another 10 if you reach here. Whereas if you lift up your leg, that's like minus five because now this you've changed this angle and we saw this is negative reward, right? So a classic optimization algorithm will always fall into the hole because that is where you get the immediate reward. Whereas you'd have to you'd have to do a sequence of action that doesn't give you a reward right now, but it gives you more reward later. And in order to learn this, we need a kind of a better algorithm that just straightforward optimization. So maybe I can explain this if you have a maze, here is the start and here is the goal and there is like walls and the walls are something like this. What you need to do is go around here. But what a classic optimization algorithm does is always like goes here because that's ever so closer to the goal. And then it just gets stuck because it can't fathom that it needs to go around here. So it needs to go farther away before it gets closer. So these people we've talked about this before in like open ended learning novelty search. 
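As a side note, the evolution strategies optimizer mentioned here (and explained in more detail later in the video) can be sketched in a few lines: perturb the current parameters with Gaussian noise, evaluate each perturbed agent, and move the parameters toward the noise directions that produced high returns. This is a generic ES estimator under my own assumptions about names and hyperparameters, not the paper's exact implementation.

```python
import numpy as np


def es_step(theta, evaluate, sigma=0.1, lr=0.01, population=64, rng=None):
    """One evolution-strategies update on a flat parameter vector theta."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal((population, theta.size))
    returns = np.array([evaluate(theta + sigma * n) for n in noise])
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)   # normalize for stability
    grad_estimate = noise.T @ returns / (population * sigma)
    return theta + lr * grad_estimate


if __name__ == "__main__":
    # Toy objective standing in for an episode return: maximize -(distance to a target vector).
    target = np.full(10, 3.0)
    episode_return = lambda w: -np.sum((w - target) ** 2)
    w = np.zeros(10)
    for _ in range(500):
        w = es_step(w, episode_return)
    print(np.round(w, 1))   # should end up close to the target vector of threes
```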
What you would want to do is you would want to gradually build up solutions that can explore the space like to go here, go here, go here and basically build up these solutions. And there are two components to what this poet algorithm does. So the first component is curriculum learning. Curriculum learning. What does curriculum learning mean? Curriculum learning means that you start off with easy tasks and you increasingly build up more and more and more complex tasks. So let's say I have an environment here and I'm going to draw and at the beginning we just kind of start off with this flat surface right and here is our little walker right here. And we'll just train it to move right on that and that should be doable with kind of a classic approach. And then we gradually move to more difficult environments. So maybe we'll make it a bit more rough right. And an agent that can already walk to the right already kind of has think of it as a pre-training in like NLP. You can then get more and more challenging and then maybe at some point you can build in a gap right. So you build in one of these gaps and now it already knows how to move to the right and now it might actually learn to jump a small gap right if you make it small at the beginning not like this one down here. There's a very large gap. But if you make it small by accident it might stumble over it and then learn and continuously how to master the gap. So this is the curriculum learning approach. It means that from environment to environment you get harder, harder and harder challenges. So first flat then more rough then more rough with a gap and so on. The second approach, the second ingredient to POET is what they call stepping stone learning or transfer learning or things like this. And that's where you kind of have to think of this not as a single agent optimizing but as a population of agents. So let's say you do this curriculum learning right. And you're getting fairly well here at rough terrains right. More and more rough terrains. But in parallel you also have a second optimization procedure. You also start out kind of flat but with this thing you go as we said before small gap you keep it flat but you just increase the number of gaps here right. Whereas over here you just keep making the terrain rougher and rougher. So what the philosophy is that an agent that might be able to master this rougher terrain it might actually that skill because here you this kind of this kind of looks like a gap here. The skill of hopping over this gap here might actually transfer to the environment over here where you do have a proper you know a gap in the environment or the skill that you learn from an environment where you have one of these stumps right. So here let's draw in one of these stumps where you have to go over and if you have a walker that can successfully walk over this that skill now might transfer over here in order to get over this over this peaky terrain here. So the idea of poet is to start off with a generic flat very easy environment and then spawn new ones so you want to spawn new environments in kind of a hereditary way. 
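To tie the two ingredients together, here is a heavily simplified sketch of the POET outer loop; the concrete mutation, optimization and transfer steps are described right after this. All names, parameter ranges and scheduling constants are placeholder assumptions, the split of the five environment knobs into these particular fields is my guess, and checks from the real algorithm (for example the not-too-easy/not-too-hard criterion and the novelty test discussed below) are left out.

```python
import random
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class EnvParams:
    roughness: float = 0.0
    stump_height_low: float = 0.0
    stump_height_high: float = 0.0
    gap_width_low: float = 0.0
    gap_width_high: float = 0.0


def mutate(env: EnvParams) -> EnvParams:
    # Each knob has a chance to move a little; clamp at zero so the terrain stays valid.
    changes = {}
    for name in ("roughness", "stump_height_low", "stump_height_high",
                 "gap_width_low", "gap_width_high"):
        if random.random() < 0.5:
            changes[name] = max(0.0, getattr(env, name) + random.uniform(-0.2, 0.4))
    return replace(env, **changes)


def poet_loop(init_agent, optimize, evaluate, iterations=100, mutate_every=10, max_pairs=20):
    # One agent per environment; start from the flat, easy environment.
    pairs = [(EnvParams(), init_agent())]
    for t in range(iterations):
        if t % mutate_every == 0:                       # 1) spawn mutated child environments
            parent_env, parent_agent = random.choice(pairs)
            pairs.append((mutate(parent_env), parent_agent))
            pairs = pairs[-max_pairs:]                  # bounded compute: drop the oldest pairs
        pairs = [(env, optimize(agent, env)) for env, agent in pairs]   # 2) a few ES steps each
        for i, (env, agent) in enumerate(pairs):        # 3) transfer attempts across the population
            best = max((a for _, a in pairs), key=lambda a: evaluate(a, env))
            if evaluate(best, env) > evaluate(agent, env):
                pairs[i] = (env, best)
    return pairs
```

The caller supplies `init_agent`, `optimize` and `evaluate` (hypothetical hooks standing in for the ES training and rollout code), which keeps the sketch independent of any particular environment implementation.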
So this one might get a bit rougher, this one might include this, and this one might include a gap or something like this, and then again you want to spawn new environments, more rough, more rough, more rough with a stump here, and this one retains the gap, and this one now gets two gaps, and so on. And you want to continuously train these, and then you always want to check whether or not the skill that you learn over here might actually transfer to anyone over here. So you get this continuous tree of solutions, and once you improve on one branch, this might actually be good on another branch. They always make the comparison to, let's say, biological evolution, where a strategy that works over here for birds can all of a sudden be cross-adopted by mammals for an entirely different problem, but the same skill might be valuable. Yeah, so these are basically the two ingredients of POET, and now I want to show you the complete POET algorithm. So what does it do? You start off with an initial environment, and in POET every environment is paired with an agent, so there is one agent per environment. So for a number of time steps, what you do is: first of all, you go through your environments and you mutate them. We have already seen that these environments can be generated from a parameter vector, so we have five numbers, how rough, how stumpy and how wide the gaps are. Let's say we have three numbers, and this might be one, this might be two, this might be five. So what you want to do is you want to mutate them, you want to spawn children, and each of these parameters has a chance of mutating. This might be one three five, and this environment might be one four six, and this one might be two two five. You spawn new ones; you already see that the requirement here is that you can actually have environments that are procedurally generated and mutated like this, where a small mutation probably is going to lead to a small change in the environment. In any case, you mutate them, and then you want to optimize each agent. So each of these new environments is paired with a new agent that always tries to solve that particular environment. So now, within one environment, you simply do your classic optimization; we already saw that the evolution strategy here is akin to a classic optimization algorithm from reinforcement learning. All right, so each agent you optimize for a couple of steps, not fully every time, but for a couple of steps. So each agent, including the one in the original environment, is continuously trained on its environment throughout the process. Of course, you have bounded computation, so you need to drop out the very old ones, but in principle, continuously, as all of this goes on, all the agents are always trained on their environments. So the agent here, this walker, will always try to solve this particular environment, and the walker here, that is newly generated when the environment is generated, will only try to solve this particular environment throughout the whole algorithm. All right, so you do mutations, you spawn new ones, and then you do a couple of steps of optimization, an ES step, and then you do this transfer attempt. What you want to do is you want to evaluate all the candidates on all the environments. In principle you can cut this down, but in principle you want to go through the environments and say, okay, this environment right here, I'm going to evaluate all of the other 
agents in this environment you can do this in a couple of different ways where you just straight up try them or try to optimize them for a few steps to see whether they can be adapted easily to that environment but ultimately you have to come up with a criterion to say for each agent is the agent better or worse than the agent that is continuously trained on this environment if it's worse then you keep this one if if anyone is better then you transfer that better one to replace this one right and you basically copy it over to this new environment and that's where this transfer learning comes in so you're continuously trying all the agents on all the environments and if they are better you transfer them right so here you say if the environment score is better than the one that you have you transfer it all right now there is a lot hidden here for example in this mutate environment step they do check whether or not the new mutated environments are not too hard and not too easy and that basically means whether or not the agents can solve them but not solve them too easily they also check whether the environments are enough novel so you need a couple of checks here you solvable and that that means not too easy and not too hard right so they need to pass like a certain score but they need to be kind of solvable to a to an okay score so there's a score range and also novel they check whether or not the out the mutated environments are novel enough and I believe they just do this by calculating the the distance between two environments in terms of their parameter vectors so to determine whether or not these are novel and sorry I don't mean the distance just between two but the distance of all of the ones you've seen so far so if we go to original very beautiful drawing here where is my tree if you create a new environment let's say you create a new environment right here then you want to check it against all environments you've seen so far to determine whether or not it is new or not so you want to create the distance to all of these and if you have enough distance to your nearest neighbors then you are novel and that's kind of how they they determine whether environment is new all right so that's basically the poet algorithm you continuously create new environments by mutation you ensure that they are solvable not hard enough sorry not too hard but hard enough ensure that they are novel and then you optimize each agent for its own environment continuously as the process goes on and so it's not I want to stress this it's not only the frontier so you're not only looking at the newest generation but you're always looking at all of the generation of the because the older ones while the environments are easier they have been optimized for longer on this environment so the skills might be very handy so you always want to look at your entire population and then you do crucially you do this these transfer attempts so that's the poet algorithm there is a lot hidden here and I kind of want to stress that just if you just look at the amount of hyper parameters there is so many hyper parameters in this how much you transfer how much you mutate how many steps you do each of these subroutines here has a billion hyper parameters and learning rates and and so on so to me that's a that is kind of if I look at this algorithm I am very scared if I attempted to do something like this myself it's it's going to be a long and hard thing to evaluate all of these different hyper parameters that you have to do shortly 
want to dip into what the evolution strategy does just so you know because you just might be familiar with your classic your classic reinforce algorithm so in policy gradient methods what you do is you scale your parameters of your neural network which is you can if this is your policy then your policy network here you want to scale the gradient according to your reward so in classic reinforcement learning this here would be the reward you got which basically means if you did an action and you got higher reward you want to make your network do that action more right here in evolution strategies what you do is you spawn it's a different way of doing the same thing basically you spawn different environments and sorry you spawn you spawn different agents so you have your current parameters and you want to spawn a number of noisy versions of those parameters and then you want to evaluate each one right and now you want to adjust your parameters into the direction of that particular so basically you are here with your parameters you create a bunch of noisy versions of it right and let's say these two performed really well you want to adjust your parameters into the direction of those two right that's basically what this says so this is the noisy version and then this is the noise that produced the noisy version so if this is high if this number here is high then you will adjust your parameters into that direction it's a fairly cool way if you especially if you can't back prop through your policy as it's pretty neat thing so this is the ES step algorithm but you can think of it just as a RL algorithm all right so they do various experiments to show that this actually has merits I've already shown you if you're trying if you take the same environments and try to solve them directly by this evolution step then it will not succeed because of the problems we've discussed before now the comparison is a bit unfair because um of course these environments for poet poet the problem here is you can't have it solve a particular environments because the environments they constantly change right you constantly mutate the environments you never know where it's going it's not directed so if your goal is to solve a particular environment you cannot do it with poet you can hope that the agent that comes out will perform well right you can do something like this but I believe I believe that these environments that they test on here are ones that appeared during the poet run right so it's kind of an unfair comparison I feel to to do this on an environment that you know this environment this poet agent actually comes from an environment that poet has generated in its all mutation tree curriculum while building it up and then the poor ES algorithm is simply tasked with solving that particular environment from scratch so yes always keep in mind this is this can have a goal this doesn't have a goal right that's kind of the drawback but as you can see poet does get super high scores whereas es the classic algorithm completely fails and they also investigate the importance of transfer learning so they compare to like a classic classic curriculum learning algorithms there are curriculum learning algorithms where you can continuously try to build up the difficulties of these environments but you also do it in a goal-directed way so as I said if you have an environment that has like a gap and then a stump a high stump or two high stumps you want to start out flat and then maybe build in a small gap and a small stump and so 
on until you're here it's very much goal-directed but it doesn't have this kind of population with transfer learning aspect of poet so if they compare this you can see here the red the red the red one sorry colored it blue stupidly the red one is whatever poet was able to solve now these are the five dimensions of the parameters and the more on the outside it is the harder the environment and for the same for the same environment the blue one is what the curriculum learning algorithm has managed so it's the best environment the curriculum learning algorithm has been able to solve while trying to build up to the so if we take this here is the environment that poet solved again the comparison is kind of unfair because we're starting out from an environment that poet has already solved and then we're trying to build our way up to it with the classic algorithm by basically again this is it's comparing a non goal-directed thing something that just happened to a goal-directed process that needs to get this particular environment to work in any case at some point this curriculum learning algorithm will fail like let's say that's here that's the environment that has somewhat of a gap but no stump right and that would be the the blue line here they do like five runs and they plot them here and you can see every time the classic curriculum learning algorithm manages to only solve a much much less challenging environment than the poet algorithm achieved even though it's it's trying to reach exactly that right and so here they show the difference so if you just the classified environment if it's just challenging then the classic algorithm the curriculum learning algorithm can solve it somewhat so the distance is close to zero but as you go more and more challenging the distance between poet and the classic becomes larger and larger they do give some examples of what this transfer learning does so they have this parent environment that just kind of slouches forward on the ground and then the child environment has a mutation that has now little stumps in it right so you can't get over it right now but the child environment because it's it's a small stump so it might stumble across learns to lift its leg here and it transfers this back to the parent right at a later iteration which is pretty cool and then the parent gets even better as a result of that transfer so we have two transfer learning events here that mutually help these agents remember both the parent and the child are continuously trained as the process goes on all right and they do some more things where they do actual poet not a classic algorithm but poet without transfer learning and they see that okay the poet without transfer is able to solve some of the very challenging problems but never reaches the extremely challenging stage and that's kind of their argument why the transfer learning is necessary so in total I would say this is a cool algorithm it has many many many many many many hyper parameters and these experimental results with that many hyper parameters you need to take it with a grain of salt because it's always possible that they just haven't put as much effort into their comparisons as they have into their own thing to get it to work all right with that I wish you a nice day and check out the paper they have lots of descriptions check out the blog post where they have animations and the YouTube video and with that bye bye | [
{
"start": 0,
"end": 6.88,
"text": " Alright, so what you're seeing here are solutions found to this bipedal walker problem by a"
},
{
"start": 6.88,
"end": 10.52,
"text": " new algorithm called PoET."
},
{
"start": 10.52,
"end": 16.84,
"text": " So as you might guess, the challenge is to keep this little thing here walking to the"
},
{
"start": 16.84,
"end": 21.72,
"text": " right as far as you can while it encounters various obstacles."
},
{
"start": 21.72,
"end": 30.92,
"text": " And it is and remains a challenging reinforcement learning problem to have an agent learn to"
},
{
"start": 30.92,
"end": 35.96,
"text": " overcome various obstacles and walk well in different environments."
},
{
"start": 35.96,
"end": 41.2,
"text": " So the paper we're going to look at is called PoET."
},
{
"start": 41.2,
"end": 46.08,
"text": " It's by Uber Engineering."
},
{
"start": 46.08,
"end": 52.96,
"text": " And the full pronunciation is the Paired Open-Ended Trail Blazer, endlessly generating increasingly"
},
{
"start": 52.96,
"end": 57.96,
"text": " complex and diverse learning environments and their solutions by Roy Wang, Joel Lehmann,"
},
{
"start": 57.96,
"end": 64.56,
"text": " Jeff Klun and Kenneth O. Stanley, as I said from Uber AI Labs."
},
{
"start": 64.56,
"end": 70.48,
"text": " So as you already saw, the challenge they take on is this bipedal walker problem."
},
{
"start": 70.48,
"end": 75.6,
"text": " Now their method is very general and not limited to this problem, but this is the problem that"
},
{
"start": 75.6,
"end": 76.6,
"text": " they focus on."
},
{
"start": 76.6,
"end": 83.67999999999999,
"text": " I'm going to jump some of the explanations here and dig right into the problem."
},
{
"start": 83.67999999999999,
"end": 86.03999999999999,
"text": " As you can see, the problem is the following."
},
{
"start": 86.03999999999999,
"end": 91.36,
"text": " You have this thing here, which is the walker, and it has two legs and specifically it has"
},
{
"start": 91.36,
"end": 93.03999999999999,
"text": " four joints."
},
{
"start": 93.03999999999999,
"end": 97.56,
"text": " So the four joints are here too, and here too."
},
{
"start": 97.56,
"end": 102.56,
"text": " And you can give torque on all of the four joints."
},
{
"start": 102.56,
"end": 109.68,
"text": " So it's basically a four output problem."
},
{
"start": 109.68,
"end": 112.72,
"text": " And you do have sensors as input."
},
{
"start": 112.72,
"end": 116.26,
"text": " So the inputs, I believe, is a LIDAR."
},
{
"start": 116.26,
"end": 118.84,
"text": " So the LIDAR is this red line you see here."
},
{
"start": 118.84,
"end": 123.48,
"text": " I think it has 16 of those in various angles."
},
{
"start": 123.48,
"end": 129.88,
"text": " And also it has pressure detection on the feet, I believe, to see whether or not they"
},
{
"start": 129.88,
"end": 132.76,
"text": " are in contact with the ground."
},
{
"start": 132.76,
"end": 143.72,
"text": " And it might also have a gyroscope in that tells you which angle with respect to the"
},
{
"start": 143.72,
"end": 146.68,
"text": " ground the head is."
},
{
"start": 146.68,
"end": 151.07999999999998,
"text": " So you have various sensors on these things, and you're able to basically control what"
},
{
"start": 151.07999999999998,
"end": 153.68,
"text": " the legs are doing."
},
{
"start": 153.68,
"end": 161.76000000000002,
"text": " And your goal is to make this go as far to the right and as fast as possible."
},
{
"start": 161.76000000000002,
"end": 170.06,
"text": " You see the reward down here is negative 100 if the robot falls over."
},
{
"start": 170.06,
"end": 173.52,
"text": " That means if the head hits the ground."
},
{
"start": 173.52,
"end": 178.20000000000002,
"text": " And then it is 130 times delta x."
},
{
"start": 178.2,
"end": 184.64,
"text": " That's how far you go to the right minus the whole angle."
},
{
"start": 184.64,
"end": 187.39999999999998,
"text": " And the whole angle, as I said, is this angle here."
},
{
"start": 187.39999999999998,
"end": 190.48,
"text": " So you want to keep it as stable as possible."
},
{
"start": 190.48,
"end": 197.07999999999998,
"text": " Because if there's a difference in the angle per step, then you get penalized."
},
{
"start": 197.07999999999998,
"end": 200.94,
"text": " And also you get penalized for each torque you apply."
},
{
"start": 200.94,
"end": 209.64,
"text": " So you want to kind of apply minimal force on the joints in order to go very far."
},
{
"start": 209.64,
"end": 216.35999999999999,
"text": " But by far the most important point is to go to the right as far and as fast as you"
},
{
"start": 216.35999999999999,
"end": 217.36,
"text": " can."
},
{
"start": 217.36,
"end": 220.56,
"text": " There is an end here somewhere."
},
{
"start": 220.56,
"end": 227.36,
"text": " And if you reach it, you get a score that is above 230."
},
{
"start": 227.36,
"end": 233.24,
"text": " They choose the limit of 230 here to determine."
},
{
"start": 233.24,
"end": 238.12,
"text": " So if the agent gets 230 or more, then it has solved the environment."
},
{
"start": 238.12,
"end": 240.72000000000003,
"text": " That's what they claim."
},
{
"start": 240.72000000000003,
"end": 242.04000000000002,
"text": " That's from experience."
},
{
"start": 242.04000000000002,
"end": 244.76000000000002,
"text": " So as you see, the environment has various obstacles here."
},
{
"start": 244.76000000000002,
"end": 251.24,
"text": " There are holes that you can fall into that you need to jump or step over."
},
{
"start": 251.24,
"end": 253.76000000000002,
"text": " There are these kind of stumps here."
},
{
"start": 253.76000000000002,
"end": 255.86,
"text": " They can be of various height."
},
{
"start": 255.86,
"end": 259.36,
"text": " So this is a bit shorter and this is a bit longer."
},
{
"start": 259.36,
"end": 262.28000000000003,
"text": " And the general terrain has a roughness."
},
{
"start": 262.28000000000003,
"end": 268.7,
"text": " And this can go to very rough from very smooth."
},
{
"start": 268.7,
"end": 273.04,
"text": " So this is a parameterized environment."
},
{
"start": 273.04,
"end": 280.88,
"text": " And obviously they are able to generate these environments from parameters."
},
{
"start": 280.88,
"end": 288.71999999999997,
"text": " And the goal now is to have an agent that walks well in any environment that you can"
},
{
"start": 288.71999999999997,
"end": 289.71999999999997,
"text": " think of."
},
{
"start": 289.71999999999997,
"end": 295.48,
"text": " Right, so here on the left you see this is very challenging down the stairs."
},
{
"start": 295.48,
"end": 301.8,
"text": " This also isn't too easy because there is a gap here."
},
{
"start": 301.8,
"end": 306.96,
"text": " And there are five parameters of these environments."
},
{
"start": 306.96,
"end": 310.32,
"text": " So there is the general roughness of the terrain."
},
{
"start": 310.32,
"end": 314.2,
"text": " That means how many hills it has and how fast they are coming."
},
{
"start": 314.2,
"end": 319.4,
"text": " There is the stump lower bound and stump upper bound, I believe."
},
{
"start": 319.4,
"end": 322.52,
"text": " So how high the stumps are."
},
{
"start": 322.52,
"end": 326.84,
"text": " And also how long the gaps are."
},
{
"start": 326.84,
"end": 332.76,
"text": " And with these parameters you control how difficult an environment is."
},
{
"start": 332.76,
"end": 342.03999999999996,
"text": " So the straightforward thing to do is simply to sample environments and have a reinforcement"
},
{
"start": 342.03999999999996,
"end": 344.44,
"text": " learn approach to this."
},
{
"start": 344.44,
"end": 347.2,
"text": " And that usually doesn't work."
},
{
"start": 347.2,
"end": 353.8,
"text": " I already want to see this without having talked about what the algorithm is."
},
{
"start": 353.8,
"end": 358.24,
"text": " This is the approach where you try this thing."
},
{
"start": 358.24,
"end": 360.5,
"text": " It's called evolution strategies."
},
{
"start": 360.5,
"end": 364.68,
"text": " But you can think of it as just a straightforward optimization procedure."
},
{
"start": 364.68,
"end": 371.4,
"text": " So there is an agent and there is an environment and you are trying to solve the environment"
},
{
"start": 371.4,
"end": 374.76,
"text": " using just straightforward optimization."
},
{
"start": 374.76,
"end": 380.72,
"text": " Now the evolution strategies are not your classic algorithm but you can compare it to"
},
{
"start": 380.72,
"end": 381.72,
"text": " it."
},
{
"start": 381.72,
"end": 386.08,
"text": " It's just that these people, they like the more, I have a feeling they like the more"
},
{
"start": 386.08,
"end": 390.96,
"text": " esoteric learning algorithms."
},
{
"start": 390.96,
"end": 399,
"text": " In any case, you see in these environments large gap, rough surface and so on."
},
{
"start": 399,
"end": 402.68,
"text": " These are supposed to be the platinum figures."
},
{
"start": 402.68,
"end": 408.88,
"text": " So these two environments and also these environments here."
},
{
"start": 408.88,
"end": 414.91999999999996,
"text": " The evolution strategy, so the classic approach if you just straight forward optimize, they"
},
{
"start": 414.92,
"end": 424.6,
"text": " get very low scores on average, whereas poet gets here very high scores above the 230 threshold."
},
{
"start": 424.6,
"end": 426.28000000000003,
"text": " So what's happening?"
},
{
"start": 426.28000000000003,
"end": 434.34000000000003,
"text": " If you're trying to just solve these environments from scratch, you basically don't really have"
},
{
"start": 434.34000000000003,
"end": 437.02000000000004,
"text": " a big chance of solving them."
},
{
"start": 437.02000000000004,
"end": 441.68,
"text": " Because let's say you're here and you're trying to move to the right, you know, you might"
},
{
"start": 441.68,
"end": 447.72,
"text": " learn how to do this and you see this from scratch solution actually manages to get to"
},
{
"start": 447.72,
"end": 448.72,
"text": " the right."
},
{
"start": 448.72,
"end": 453.84000000000003,
"text": " But then as soon as you reach this, you're in this gap and you just fall down the gap"
},
{
"start": 453.84000000000003,
"end": 457.82,
"text": " because all you've learned so far is how to move right."
},
{
"start": 457.82,
"end": 464.74,
"text": " So what you would need to do is you would need to plan ahead like what poet does."
},
{
"start": 464.74,
"end": 466.24,
"text": " You need to see that there is a gap."
},
{
"start": 466.24,
"end": 472.92,
"text": " You need to plan ahead and already lift up a leg in order to then step over the gap here"
},
{
"start": 472.92,
"end": 476,
"text": " and then do a little jump right here."
},
{
"start": 476,
"end": 481.24,
"text": " And this sequence of action, this kind of planning ahead, it is very difficult to learn"
},
{
"start": 481.24,
"end": 488.06,
"text": " this for a classic RL algorithm because you basically get reward for everything you do."
},
{
"start": 488.06,
"end": 490.76,
"text": " So initially you get reward for moving to the right."
},
{
"start": 490.76,
"end": 494.72,
"text": " So that's 10 if you reach here, another 10 if you reach here."
},
{
"start": 494.72,
"end": 502.8,
"text": " And so there is another 10 if you reach here and another 10 if you reach here."
},
{
"start": 502.8,
"end": 508.12,
"text": " Whereas if you lift up your leg, that's like minus five because now this you've changed"
},
{
"start": 508.12,
"end": 512.44,
"text": " this angle and we saw this is negative reward, right?"
},
{
"start": 512.44,
"end": 517.24,
"text": " So a classic optimization algorithm will always fall into the hole because that is where you"
},
{
"start": 517.24,
"end": 519.6800000000001,
"text": " get the immediate reward."
},
{
"start": 519.6800000000001,
"end": 524.5600000000001,
"text": " Whereas you'd have to you'd have to do a sequence of action that doesn't give you a reward right"
},
{
"start": 524.56,
"end": 528.7199999999999,
"text": " now, but it gives you more reward later."
},
{
"start": 528.7199999999999,
"end": 534.8399999999999,
"text": " And in order to learn this, we need a kind of a better algorithm that just straightforward"
},
{
"start": 534.8399999999999,
"end": 536.4399999999999,
"text": " optimization."
},
{
"start": 536.4399999999999,
"end": 542.68,
"text": " So maybe I can explain this if you have a maze, here is the start and here is the goal"
},
{
"start": 542.68,
"end": 547.66,
"text": " and there is like walls and the walls are something like this."
},
{
"start": 547.66,
"end": 550.3599999999999,
"text": " What you need to do is go around here."
},
{
"start": 550.3599999999999,
"end": 554.52,
"text": " But what a classic optimization algorithm does is always like goes here because that's"
},
{
"start": 554.52,
"end": 557.12,
"text": " ever so closer to the goal."
},
{
"start": 557.12,
"end": 563.6,
"text": " And then it just gets stuck because it can't fathom that it needs to go around here."
},
{
"start": 563.6,
"end": 567.96,
"text": " So it needs to go farther away before it gets closer."
},
{
"start": 567.96,
"end": 574.0799999999999,
"text": " So these people we've talked about this before in like open ended learning novelty search."
},
{
"start": 574.0799999999999,
"end": 581.24,
"text": " What you would want to do is you would want to gradually build up solutions that can explore"
},
{
"start": 581.24,
"end": 589.28,
"text": " the space like to go here, go here, go here and basically build up these solutions."
},
{
"start": 589.28,
"end": 595.24,
"text": " And there are two components to what this poet algorithm does."
},
{
"start": 595.24,
"end": 602.6800000000001,
"text": " So the first component is curriculum learning."
},
{
"start": 602.6800000000001,
"end": 606.62,
"text": " Curriculum learning."
},
{
"start": 606.62,
"end": 608.42,
"text": " What does curriculum learning mean?"
},
{
"start": 608.42,
"end": 615.04,
"text": " Curriculum learning means that you start off with easy tasks and you increasingly build"
},
{
"start": 615.04,
"end": 620.28,
"text": " up more and more and more complex tasks."
},
{
"start": 620.28,
"end": 627.68,
"text": " So let's say I have an environment here and I'm going to draw and at the beginning we"
},
{
"start": 627.68,
"end": 632.8399999999999,
"text": " just kind of start off with this flat surface right and here is our little walker right"
},
{
"start": 632.8399999999999,
"end": 633.9599999999999,
"text": " here."
},
{
"start": 633.96,
"end": 644,
"text": " And we'll just train it to move right on that and that should be doable with kind of a classic"
},
{
"start": 644,
"end": 645.6800000000001,
"text": " approach."
},
{
"start": 645.6800000000001,
"end": 649.94,
"text": " And then we gradually move to more difficult environments."
},
{
"start": 649.94,
"end": 653.4000000000001,
"text": " So maybe we'll make it a bit more rough right."
},
{
"start": 653.4000000000001,
"end": 657.88,
"text": " And an agent that can already walk to the right already kind of has think of it as a"
},
{
"start": 657.88,
"end": 661.36,
"text": " pre-training in like NLP."
},
{
"start": 661.36,
"end": 666.92,
"text": " You can then get more and more challenging and then maybe at some point you can build"
},
{
"start": 666.92,
"end": 670.64,
"text": " in a gap right."
},
{
"start": 670.64,
"end": 675.44,
"text": " So you build in one of these gaps and now it already knows how to move to the right"
},
{
"start": 675.44,
"end": 682.32,
"text": " and now it might actually learn to jump a small gap right if you make it small at the"
},
{
"start": 682.32,
"end": 684.5600000000001,
"text": " beginning not like this one down here."
},
{
"start": 684.5600000000001,
"end": 686.5600000000001,
"text": " There's a very large gap."
},
{
"start": 686.56,
"end": 692.7199999999999,
"text": " But if you make it small by accident it might stumble over it and then learn and continuously"
},
{
"start": 692.7199999999999,
"end": 695.4399999999999,
"text": " how to master the gap."
},
{
"start": 695.4399999999999,
"end": 698.2399999999999,
"text": " So this is the curriculum learning approach."
},
{
"start": 698.2399999999999,
"end": 703.9599999999999,
"text": " It means that from environment to environment you get harder, harder and harder challenges."
},
{
"start": 703.9599999999999,
"end": 711.1199999999999,
"text": " So first flat then more rough then more rough with a gap and so on."
},
{
"start": 711.12,
"end": 721.36,
"text": " The second approach, the second ingredient to POET is what they call stepping stone learning"
},
{
"start": 721.36,
"end": 726,
"text": " or transfer learning or things like this."
},
{
"start": 726,
"end": 731.82,
"text": " And that's where you kind of have to think of this not as a single agent optimizing but"
},
{
"start": 731.82,
"end": 734.64,
"text": " as a population of agents."
},
{
"start": 734.64,
"end": 738.1800000000001,
"text": " So let's say you do this curriculum learning right."
},
{
"start": 738.18,
"end": 744.8,
"text": " And you're getting fairly well here at rough terrains right."
},
{
"start": 744.8,
"end": 746.04,
"text": " More and more rough terrains."
},
{
"start": 746.04,
"end": 751.12,
"text": " But in parallel you also have a second optimization procedure."
},
{
"start": 751.12,
"end": 761.76,
"text": " You also start out kind of flat but with this thing you go as we said before small gap you"
},
{
"start": 761.76,
"end": 770.16,
"text": " keep it flat but you just increase the number of gaps here right."
},
{
"start": 770.16,
"end": 776.96,
"text": " Whereas over here you just keep making the terrain rougher and rougher."
},
{
"start": 776.96,
"end": 786.4399999999999,
"text": " So what the philosophy is that an agent that might be able to master this rougher terrain"
},
{
"start": 786.44,
"end": 791.6,
"text": " it might actually that skill because here you this kind of this kind of looks like a"
},
{
"start": 791.6,
"end": 793.8800000000001,
"text": " gap here."
},
{
"start": 793.8800000000001,
"end": 802.6800000000001,
"text": " The skill of hopping over this gap here might actually transfer to the environment over"
},
{
"start": 802.6800000000001,
"end": 809.4000000000001,
"text": " here where you do have a proper you know a gap in the environment or the skill that you"
},
{
"start": 809.4000000000001,
"end": 813.32,
"text": " learn from an environment where you have one of these stumps right."
},
{
"start": 813.32,
"end": 821.9200000000001,
"text": " So here let's draw in one of these stumps where you have to go over and if you have"
},
{
"start": 821.9200000000001,
"end": 830.72,
"text": " a walker that can successfully walk over this that skill now might transfer over here in"
},
{
"start": 830.72,
"end": 836.88,
"text": " order to get over this over this peaky terrain here."
},
{
"start": 836.88,
"end": 849.8,
"text": " So the idea of poet is to start off with a generic flat very easy environment and then"
},
{
"start": 849.8,
"end": 859.28,
"text": " spawn new ones so you want to spawn new environments in kind of a hereditary way."
},
{
"start": 859.28,
"end": 869.68,
"text": " So this one might get a bit rougher this one might include this and this one might include"
},
{
"start": 869.68,
"end": 876.6,
"text": " a gap or something like this and then again you want to spawn new environments and more"
},
{
"start": 876.6,
"end": 887.72,
"text": " rough more rough more rough with a stump here and this one retains the gap sorry and um"
},
{
"start": 887.72,
"end": 897.08,
"text": " this one now gets two gaps and so on and you want to continuously train these and then"
},
{
"start": 897.08,
"end": 902.52,
"text": " always you want to check whether or not the skill that you learn over here might actually"
},
{
"start": 902.52,
"end": 905.26,
"text": " transfer to anyone over here."
},
{
"start": 905.26,
"end": 914.48,
"text": " So you get this tree of this continuous tree of solutions and once you improve on one branch"
},
{
"start": 914.48,
"end": 920.32,
"text": " this might actually be good on another branch right they always make the comparison to let's"
},
{
"start": 920.32,
"end": 926.88,
"text": " say biological evolution where a strategy that works over here for birds is all of a"
},
{
"start": 926.88,
"end": 935,
"text": " sudden can be cross adopted by mammals for an entirely different problem but the same"
},
{
"start": 935,
"end": 938.6,
"text": " skill might be valuable."
},
{
"start": 938.6,
"end": 948.64,
"text": " Yeah so this this is basically the two ingredients of poet and now I want to show you the complete"
},
{
"start": 948.64,
"end": 951.88,
"text": " poet algorithm."
},
{
"start": 951.88,
"end": 960.64,
"text": " So what does it do you start off with an initial environment right and in poet every environment"
},
{
"start": 960.64,
"end": 969.9399999999999,
"text": " is paired with an agent so there is one agent per environment right so for the time steps"
},
{
"start": 969.9399999999999,
"end": 976.68,
"text": " what you do is first of all you go through your environments and you mutate them and"
},
{
"start": 976.68,
"end": 982.48,
"text": " we already seen these environments they can be generated from a parameter vector so we"
},
{
"start": 982.48,
"end": 994.76,
"text": " have five numbers right how rough how stumpy and how wide the gaps are let's say we have"
},
{
"start": 994.76,
"end": 999.6800000000001,
"text": " three numbers to two and this might be one this might be two this might be five right"
},
{
"start": 999.6800000000001,
"end": 1006.64,
"text": " so what you want to do is you want to mutate them right you want to spawn children and"
},
{
"start": 1006.64,
"end": 1013.4399999999999,
"text": " each of these parameters has a chance of mutating this might be one three five and this environment"
},
{
"start": 1013.4399999999999,
"end": 1025.92,
"text": " might be one four six and this one might be two two five right you spawn new ones you"
},
{
"start": 1025.92,
"end": 1031.52,
"text": " already see that the requirement here is that you can actually have environments that are"
},
{
"start": 1031.52,
"end": 1038.92,
"text": " procedurally generated and mutated like this where a small mutation probably is going to"
},
{
"start": 1038.92,
"end": 1050.52,
"text": " lead to a small change in the environment in any case you mutate them and then you you"
},
{
"start": 1050.52,
"end": 1061.84,
"text": " want to let's you want to optimize your eight each agent so each of these environments is"
},
{
"start": 1061.84,
"end": 1069.82,
"text": " paired with a new agent that always tries to solve that particular environment so now"
},
{
"start": 1069.82,
"end": 1075.74,
"text": " within one environment you simply do your classic optimization we already saw here the"
},
{
"start": 1075.74,
"end": 1084.16,
"text": " evolution strategy is akin to a classic optimization algorithm from reinforcement learning all"
},
{
"start": 1084.16,
"end": 1090.36,
"text": " right so each agent you optimize for a couple of steps right not fully every time but for"
},
{
"start": 1090.36,
"end": 1097,
"text": " a couple of steps so each agent including the one in the original environment each agent"
},
{
"start": 1097,
"end": 1104.36,
"text": " is continuously trained on its environment throughout the process of course you like"
},
{
"start": 1104.36,
"end": 1110.2199999999998,
"text": " you have to be you have bounded computation so you need to drop out the very old ones"
},
{
"start": 1110.2199999999998,
"end": 1117.32,
"text": " but in principle continuously as all of this goes on all the agents are always trained"
},
{
"start": 1117.32,
"end": 1122.8799999999999,
"text": " on their environments so the agent here this Walker will always try to solve this particular"
},
{
"start": 1122.8799999999999,
"end": 1128.6999999999998,
"text": " environment and the Walker here that is now newly generated when the environment is generated"
},
{
"start": 1128.7,
"end": 1135.28,
"text": " will only try to solve this particular environment throughout the whole algorithm right and then"
},
{
"start": 1135.28,
"end": 1144.88,
"text": " all right so you do mutations you spawn new ones and then you do a couple of steps in"
},
{
"start": 1144.88,
"end": 1153.04,
"text": " optimization right and yes step and then you do this transfer attempt right what you want"
},
{
"start": 1153.04,
"end": 1159.32,
"text": " to do is you want to evaluate all the candidates on all the environments in principle you can"
},
{
"start": 1159.32,
"end": 1167.6,
"text": " you can cut this down but in principle you want to go through the environments and say"
},
{
"start": 1167.6,
"end": 1174.32,
"text": " okay this environment right here I'm going to evaluate all of the other agents in this"
},
{
"start": 1174.32,
"end": 1179.5,
"text": " environment you can do this in a couple of different ways where you just straight up"
},
{
"start": 1179.5,
"end": 1186.52,
"text": " try them or try to optimize them for a few steps to see whether they can be adapted easily"
},
{
"start": 1186.52,
"end": 1193.52,
"text": " to that environment but ultimately you have to come up with a criterion to say for each"
},
{
"start": 1193.52,
"end": 1199.6,
"text": " agent is the agent better or worse than the agent that is continuously trained on this"
},
{
"start": 1199.6,
"end": 1208.5,
"text": " environment if it's worse then you keep this one if if anyone is better then you transfer"
},
{
"start": 1208.5,
"end": 1215.8,
"text": " that better one to replace this one right and you basically copy it over to this new"
},
{
"start": 1215.8,
"end": 1220.7,
"text": " environment and that's where this transfer learning comes in so you're continuously trying"
},
{
"start": 1220.7,
"end": 1228.08,
"text": " all the agents on all the environments and if they are better you transfer them right"
},
{
"start": 1228.08,
"end": 1235.72,
"text": " so here you say if the environment score is better than the one that you have you transfer"
},
{
"start": 1235.72,
"end": 1245.88,
"text": " it all right now there is a lot hidden here for example in this mutate environment step"
},
{
"start": 1245.88,
"end": 1252.92,
"text": " they do check whether or not the new mutated environments are not too hard and not too"
},
{
"start": 1252.92,
"end": 1262.08,
"text": " easy and that basically means whether or not the agents can solve them but not solve them"
},
{
"start": 1262.08,
"end": 1268.9199999999998,
"text": " too easily they also check whether the environments are enough novel so you need a couple of checks"
},
{
"start": 1268.9199999999998,
"end": 1279.04,
"text": " here you solvable and that that means not too easy and not too hard right so they need"
},
{
"start": 1279.04,
"end": 1285.96,
"text": " to pass like a certain score but they need to be kind of solvable to a to an okay score"
},
{
"start": 1285.96,
"end": 1293.04,
"text": " so there's a score range and also novel they check whether or not the out the mutated environments"
},
{
"start": 1293.04,
"end": 1299.72,
"text": " are novel enough and I believe they just do this by calculating the the distance between"
},
{
"start": 1299.72,
"end": 1307.3600000000001,
"text": " two environments in terms of their parameter vectors so to determine whether or not these"
},
{
"start": 1307.3600000000001,
"end": 1313.76,
"text": " are novel and sorry I don't mean the distance just between two but the distance of all of"
},
{
"start": 1313.76,
"end": 1323.44,
"text": " the ones you've seen so far so if we go to original very beautiful drawing here where"
},
{
"start": 1323.44,
"end": 1329.4,
"text": " is my tree if you create a new environment let's say you create a new environment right"
},
{
"start": 1329.4,
"end": 1337.56,
"text": " here then you want to check it against all environments you've seen so far to determine"
},
{
"start": 1337.56,
"end": 1342.96,
"text": " whether or not it is new or not so you want to create the distance to all of these and"
},
{
"start": 1342.96,
"end": 1348.1200000000001,
"text": " if you have enough distance to your nearest neighbors then you are novel and that's kind"
},
{
"start": 1348.1200000000001,
"end": 1356.64,
"text": " of how they they determine whether environment is new all right so that's basically the poet"
},
{
"start": 1356.64,
"end": 1363.72,
"text": " algorithm you continuously create new environments by mutation you ensure that they are solvable"
},
{
"start": 1363.72,
"end": 1371.54,
"text": " not hard enough sorry not too hard but hard enough ensure that they are novel and then"
},
{
"start": 1371.54,
"end": 1380.72,
"text": " you optimize each agent for its own environment continuously as the process goes on and so"
},
{
"start": 1380.72,
"end": 1385.76,
"text": " it's not I want to stress this it's not only the frontier so you're not only looking at"
},
{
"start": 1385.76,
"end": 1391.44,
"text": " the newest generation but you're always looking at all of the generation of the because the"
},
{
"start": 1391.44,
"end": 1397.52,
"text": " older ones while the environments are easier they have been optimized for longer on this"
},
{
"start": 1397.52,
"end": 1403.16,
"text": " environment so the skills might be very handy so you always want to look at your entire"
},
{
"start": 1403.16,
"end": 1411.96,
"text": " population and then you do crucially you do this these transfer attempts so that's the"
},
{
"start": 1411.96,
"end": 1418.48,
"text": " poet algorithm there is a lot hidden here and I kind of want to stress that just if"
},
{
"start": 1418.48,
"end": 1427.04,
"text": " you just look at the amount of hyper parameters there is so many hyper parameters in this"
},
{
"start": 1427.04,
"end": 1433.08,
"text": " how much you transfer how much you mutate how many steps you do each of these subroutines"
},
{
"start": 1433.08,
"end": 1443.08,
"text": " here has a billion hyper parameters and learning rates and and so on so to me that's a that"
},
{
"start": 1443.08,
"end": 1449.3999999999999,
"text": " is kind of if I look at this algorithm I am very scared if I attempted to do something"
},
{
"start": 1449.4,
"end": 1457.64,
"text": " like this myself it's it's going to be a long and hard thing to evaluate all of these different"
},
{
"start": 1457.64,
"end": 1465.1200000000001,
"text": " hyper parameters that you have to do shortly want to dip into what the evolution strategy"
},
{
"start": 1465.1200000000001,
"end": 1473.68,
"text": " does just so you know because you just might be familiar with your classic your classic"
},
{
"start": 1473.68,
"end": 1482.72,
"text": " reinforce algorithm so in policy gradient methods what you do is you scale your parameters"
},
{
"start": 1482.72,
"end": 1492.88,
"text": " of your neural network which is you can if this is your policy then your policy network"
},
{
"start": 1492.88,
"end": 1501.76,
"text": " here you want to scale the gradient according to your reward so in classic reinforcement"
},
{
"start": 1501.76,
"end": 1507.04,
"text": " learning this here would be the reward you got which basically means if you did an action"
},
{
"start": 1507.04,
"end": 1514.44,
"text": " and you got higher reward you want to make your network do that action more right here"
},
{
"start": 1514.44,
"end": 1521.92,
"text": " in evolution strategies what you do is you spawn it's a different way of doing the same"
},
{
"start": 1521.92,
"end": 1530.9,
"text": " thing basically you spawn different environments and sorry you spawn you spawn different agents"
},
{
"start": 1530.9,
"end": 1537.68,
"text": " so you have your current parameters and you want to spawn a number of noisy versions of"
},
{
"start": 1537.68,
"end": 1545.76,
"text": " those parameters and then you want to evaluate each one right and now you want to adjust"
},
{
"start": 1545.76,
"end": 1553.74,
"text": " your parameters into the direction of that particular so basically you are here with"
},
{
"start": 1553.74,
"end": 1564.16,
"text": " your parameters you create a bunch of noisy versions of it right and let's say these two"
},
{
"start": 1564.16,
"end": 1571.84,
"text": " performed really well you want to adjust your parameters into the direction of those two"
},
{
"start": 1571.84,
"end": 1579.36,
"text": " right that's basically what this says so this is the noisy version and then this is the"
},
{
"start": 1579.36,
"end": 1586.3999999999999,
"text": " noise that produced the noisy version so if this is high if this number here is high"
},
{
"start": 1586.3999999999999,
"end": 1594.56,
"text": " then you will adjust your parameters into that direction it's a fairly cool way if you"
},
{
"start": 1594.56,
"end": 1603.52,
"text": " especially if you can't back prop through your policy as it's pretty neat thing so this"
},
{
"start": 1603.52,
"end": 1614,
"text": " is the ES step algorithm but you can think of it just as a RL algorithm all right so"
},
{
"start": 1614,
"end": 1619.28,
"text": " they do various experiments to show that this actually has merits I've already shown you"
},
{
"start": 1619.28,
"end": 1626.28,
"text": " if you're trying if you take the same environments and try to solve them directly by this evolution"
},
{
"start": 1626.28,
"end": 1633,
"text": " step then it will not succeed because of the problems we've discussed before now the comparison"
},
{
"start": 1633,
"end": 1641.04,
"text": " is a bit unfair because um of course these environments for poet poet the problem here"
},
{
"start": 1641.04,
"end": 1646,
"text": " is you can't have it solve a particular environments because the environments they constantly change"
},
{
"start": 1646,
"end": 1651.32,
"text": " right you constantly mutate the environments you never know where it's going it's not directed"
},
{
"start": 1651.32,
"end": 1657.26,
"text": " so if your goal is to solve a particular environment you cannot do it with poet you can hope that"
},
{
"start": 1657.26,
"end": 1662.48,
"text": " the agent that comes out will perform well right you can do something like this but I"
},
{
"start": 1662.48,
"end": 1672,
"text": " believe I believe that these environments that they test on here are ones that appeared"
},
{
"start": 1672,
"end": 1680.1200000000001,
"text": " during the poet run right so it's kind of an unfair comparison I feel to to do this"
},
{
"start": 1680.1200000000001,
"end": 1685.64,
"text": " on an environment that you know this environment this poet agent actually comes from an environment"
},
{
"start": 1685.64,
"end": 1692.44,
"text": " that poet has generated in its all mutation tree curriculum while building it up and then"
},
{
"start": 1692.44,
"end": 1699.56,
"text": " the poor ES algorithm is simply tasked with solving that particular environment from scratch"
},
{
"start": 1699.56,
"end": 1706.76,
"text": " so yes always keep in mind this is this can have a goal this doesn't have a goal right"
},
{
"start": 1706.76,
"end": 1713.8,
"text": " that's kind of the drawback but as you can see poet does get super high scores whereas"
},
{
"start": 1713.8,
"end": 1722.72,
"text": " es the classic algorithm completely fails and they also investigate the importance of transfer"
},
{
"start": 1722.72,
"end": 1733.2,
"text": " learning so they compare to like a classic classic curriculum learning algorithms there"
},
{
"start": 1733.2,
"end": 1738.44,
"text": " are curriculum learning algorithms where you can continuously try to build up the difficulties"
},
{
"start": 1738.44,
"end": 1744.04,
"text": " of these environments but you also do it in a goal-directed way so as I said if you have"
},
{
"start": 1744.04,
"end": 1751.16,
"text": " an environment that has like a gap and then a stump a high stump or two high stumps you"
},
{
"start": 1751.16,
"end": 1758.68,
"text": " want to start out flat and then maybe build in a small gap and a small stump and so on"
},
{
"start": 1758.68,
"end": 1764.96,
"text": " until you're here it's very much goal-directed but it doesn't have this kind of population"
},
{
"start": 1764.96,
"end": 1774.64,
"text": " with transfer learning aspect of poet so if they compare this you can see here the red"
},
{
"start": 1774.64,
"end": 1785.1200000000001,
"text": " the red the red one sorry colored it blue stupidly the red one is whatever poet was"
},
{
"start": 1785.1200000000001,
"end": 1791.96,
"text": " able to solve now these are the five dimensions of the parameters and the more on the outside"
},
{
"start": 1791.96,
"end": 1802.72,
"text": " it is the harder the environment and for the same for the same environment the blue one"
},
{
"start": 1802.72,
"end": 1808.24,
"text": " is what the curriculum learning algorithm has managed so it's the best environment the"
},
{
"start": 1808.24,
"end": 1815.24,
"text": " curriculum learning algorithm has been able to solve while trying to build up to the so"
},
{
"start": 1815.24,
"end": 1821.56,
"text": " if we take this here is the environment that poet solved again the comparison is kind of"
},
{
"start": 1821.56,
"end": 1826.12,
"text": " unfair because we're starting out from an environment that poet has already solved and"
},
{
"start": 1826.12,
"end": 1833.28,
"text": " then we're trying to build our way up to it with the classic algorithm by basically again"
},
{
"start": 1833.28,
"end": 1840.9199999999998,
"text": " this is it's comparing a non goal-directed thing something that just happened to a goal-directed"
},
{
"start": 1840.9199999999998,
"end": 1848.8,
"text": " process that needs to get this particular environment to work in any case at some point"
},
{
"start": 1848.8,
"end": 1853.76,
"text": " this curriculum learning algorithm will fail like let's say that's here that's the environment"
},
{
"start": 1853.76,
"end": 1861.8,
"text": " that has somewhat of a gap but no stump right and that would be the the blue line here they"
},
{
"start": 1861.8,
"end": 1868.76,
"text": " do like five runs and they plot them here and you can see every time the classic curriculum"
},
{
"start": 1868.76,
"end": 1874.48,
"text": " learning algorithm manages to only solve a much much less challenging environment than"
},
{
"start": 1874.48,
"end": 1884.08,
"text": " the poet algorithm achieved even though it's it's trying to reach exactly that right and"
},
{
"start": 1884.08,
"end": 1889.08,
"text": " so here they show the difference so if you just the classified environment if it's just"
},
{
"start": 1889.08,
"end": 1895.24,
"text": " challenging then the classic algorithm the curriculum learning algorithm can solve it"
},
{
"start": 1895.24,
"end": 1900.96,
"text": " somewhat so the distance is close to zero but as you go more and more challenging the"
},
{
"start": 1900.96,
"end": 1911,
"text": " distance between poet and the classic becomes larger and larger they do give some examples"
},
{
"start": 1911,
"end": 1917.8,
"text": " of what this transfer learning does so they have this parent environment that just kind"
},
{
"start": 1917.8,
"end": 1923.4,
"text": " of slouches forward on the ground and then the child environment has a mutation that"
},
{
"start": 1923.4,
"end": 1930.16,
"text": " has now little stumps in it right so you can't get over it right now but the child environment"
},
{
"start": 1930.16,
"end": 1936.52,
"text": " because it's it's a small stump so it might stumble across learns to lift its leg here"
},
{
"start": 1936.52,
"end": 1943.3200000000002,
"text": " and it transfers this back to the parent right at a later iteration which is pretty cool"
},
{
"start": 1943.3200000000002,
"end": 1949.0800000000002,
"text": " and then the parent gets even better as a result of that transfer so we have two transfer"
},
{
"start": 1949.0800000000002,
"end": 1955.8400000000001,
"text": " learning events here that mutually help these agents remember both the parent and the child"
},
{
"start": 1955.84,
"end": 1964.6799999999998,
"text": " are continuously trained as the process goes on all right and they do some more things"
},
{
"start": 1964.6799999999998,
"end": 1970.76,
"text": " where they do actual poet not a classic algorithm but poet without transfer learning and they"
},
{
"start": 1970.76,
"end": 1977.48,
"text": " see that okay the poet without transfer is able to solve some of the very challenging"
},
{
"start": 1977.48,
"end": 1983.36,
"text": " problems but never reaches the extremely challenging stage and that's kind of their argument why"
},
{
"start": 1983.36,
"end": 1991.7199999999998,
"text": " the transfer learning is necessary so in total I would say this is a cool algorithm it has"
},
{
"start": 1991.7199999999998,
"end": 1999.56,
"text": " many many many many many many hyper parameters and these experimental results with that many"
},
{
"start": 1999.56,
"end": 2004.6399999999999,
"text": " hyper parameters you need to take it with a grain of salt because it's always possible"
},
{
"start": 2004.6399999999999,
"end": 2010.84,
"text": " that they just haven't put as much effort into their comparisons as they have into their"
},
{
"start": 2010.84,
"end": 2019.76,
"text": " own thing to get it to work all right with that I wish you a nice day and check out the"
},
{
"start": 2019.76,
"end": 2025.1999999999998,
"text": " paper they have lots of descriptions check out the blog post where they have animations"
},
{
"start": 2025.2,
"end": 2041.88,
"text": " and the YouTube video and with that bye bye"
}
] |
awyuuJoHawo | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Dream to Control: Learning Behaviors by Latent Imagination | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"arxiv",
"google",
"rnn",
"recurrent",
"reinforcement learning",
"deep reinforcement learning",
"imagination",
"latent space",
"world model",
"control",
"deepmind",
"deep mind"
] | Dreamer is a new RL agent by DeepMind that learns a continuous control task through forward-imagination in latent space.
https://arxiv.org/abs/1912.01603
Videos: https://dreamrl.github.io/
Abstract:
Learned world models summarize an agent's experience to facilitate learning complex behaviors. While learning world models from high-dimensional sensory inputs is becoming feasible through deep learning, there are many potential ways for deriving behaviors from them. We present Dreamer, a reinforcement learning agent that solves long-horizon tasks from images purely by latent imagination. We efficiently learn behaviors by propagating analytic gradients of learned state values back through trajectories imagined in the compact state space of a learned world model. On 20 challenging visual control tasks, Dreamer exceeds existing approaches in data-efficiency, computation time, and final performance.
Authors: Danijar Hafner, Timothy Lillicrap, Jimmy Ba, Mohammad Norouzi
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there! Today we're looking at Dream to Control: Learning Behaviors by Latent Imagination by Danijar Hafner, Timothy Lillicrap, Jimmy Ba and Mohammad Norouzi. This is a reinforcement learning paper that iterates on a series of previous papers where the goal is to learn a policy. In this case they want to learn policies for these kinds of continuous control tasks with physics-based robots, these hopper or walker types of tasks where you have to control the joints in order to move forward. The setup is that you have multiple observations, as you do in reinforcement learning, and from each observation you need to somehow come up with an action of what to do. That will then give you the next observation as well as a reward. If your goal is to move this spider, maybe the reward is proportional to how far you move. So your goal is to collect the maximum reward, which would mean you have to move the spider as far as possible simply by doing the correct actions. The goal of this paper is to do this by learning to plan ahead in latent space. As you can see here, the way they do it is they take the observation and feed it through an encoder. You can think of this as a convolutional neural network or anything else that can take an image as input and give you a hidden representation. This here is the hidden representation. From this hidden representation you can determine what the next action is going to be. Then you get a new observation, and again you can feed that, along with the last hidden state, into a new hidden state. Previous models do this a lot: you encode your observation, you have a recurrent neural network that incorporates all of the observations into a hidden state along with the actions you take, and then you always decide on the next action to do. What does this model do differently? This model wants to do all of this in hidden space. This model wants to say: I am here, I have this observation. Now my encoder tells me that this is going to give me this hidden state. What it then wants to do is take in the action that it's doing and, without seeing the next observation, predict the next hidden state. It wants to say: if I am here and I do this action, what might the next state be? The action might be to put the joystick to the right, so it will learn the hidden state corresponding to the spider being a bit more to the right than it is right now. It will do so for a number of time steps into the future, and it will learn from its own imagination. It will imagine into the future how the hidden states look, and then it will learn from that instead of having to really do the actions in the real world. We've already looked at a number of papers with this flavor, such as MuZero or I2A. This now is slightly different; you can see what's different here. What is different is that in MuZero we used this latent model in order to plan ahead, like in order to do our decision-tree planning and so on. This model doesn't do this. This model still wants to come up with a single policy where you encode your state. On the right is the final result: you encode your state, it gets you to a hidden representation, and from that you determine what your action is going to be, and you have your next state, and so on. The final goal is simply going to be a single-shot policy, without any Monte Carlo tree expansion and so on. 
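To make this idea of stepping forward purely in latent space concrete, here is a tiny Python sketch. Everything in it is made up for illustration (the dimensions and the tanh-of-a-random-matrix stand-ins are placeholders, not the architecture from the paper), but it shows the key move: after encoding one real observation, the next hidden state is predicted from the current hidden state and the chosen action alone, without ever looking at the next observation.

import numpy as np

rng = np.random.default_rng(0)
obs_dim, h_dim, act_dim = 64 * 64, 16, 4

W_enc = rng.normal(size=(h_dim, obs_dim)) * 0.01            # encoder (stand-in for a CNN)
W_pol = rng.normal(size=(act_dim, h_dim)) * 0.1             # policy head on the hidden state
W_trans = rng.normal(size=(h_dim, h_dim + act_dim)) * 0.1   # learned latent dynamics model

obs = rng.normal(size=obs_dim)                       # one real (flattened) image
h = np.tanh(W_enc @ obs)                             # hidden representation of the state
a = np.tanh(W_pol @ h)                               # decide on an action from the latent state
h_next = np.tanh(W_trans @ np.concatenate([h, a]))   # predict the next hidden state
print(h_next[:4])                                    # no second observation was needed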
What it wants to do is learn this policy not by interacting with the real world, like here on the left, but by interacting only with the dream world right here. The crucial part, if you want to learn from your dreams, is to make sure that your dreams are an accurate representation of the real world. We already saw this in a paper called World Models by Jürgen Schmidhuber. In that paper, what they did was first collect experience, like this one, and then learn to predict the next observations, or the next hidden states, from the current one. They did so by basically moving in the world at random. They have this little spider thingy and they just do random movements. They randomly move around and thus collect these trajectories, and then they learn from the random trajectories. The difference in this paper is that it does these steps iteratively. It does not only learn from a random policy; it starts out roughly at random, learns a good policy for its current environment model, then goes back and acts with that policy in order to learn a better environment model, and then again uses the better environment model in order to learn a better policy. If this wasn't clear enough, we'll jump to the algorithm. The algorithm isn't actually too complicated. As I said, I think it's a relatively minor iteration on previous research, but it appears to work, and it works in these kinds of continuous control tasks. You see, you have three models here that you need to learn, and that's what you see over here. There is a representation, a transition and a reward model, and you'll see they all share the same parameters. That gives you an indication that these things form a single model. Now what are the representation, transition and reward models? This is the thing on the left here. In this part of the algorithm you assume that you have a policy: you already know what action you take, or you can even assume that you have some experience. Your agent is running with a given policy, you simply collect that, and now you're trying to learn from it. Let me scratch all of this. What are you given? Given is the observation sequence, the actions you took and the rewards you got. Each action gives you a reward. These things are given, provided to you, and now what do you want to learn? You want to learn a representation, a transition and, let's say, a reward model; you also want to predict the next reward. As we already said, you can do this by encoding the state using, for example, a CNN and then using an LSTM in order to incorporate this over time. What you learn is the transition from one hidden state to the next hidden state, and you also learn how the observation goes into the hidden state. Thirdly, you learn that if I'm in this hidden state and I take this particular action, I will get this reward in the future. You can learn this from a set of pre-collected experience that you have in, let's say, your replay buffer. This is one model, and you learn it here in this first step, the so-called dynamics learning section. You see: while not converged, you do dynamics learning, you draw data sequences from your experience, then you compute the model states (these are the hidden states), and then you update this parameter theta using representation learning. They don't really specify what representation learning is, but they do give examples of what you can do. 
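To illustrate what that dynamics-learning step is fitting, here is a small runnable toy sketch. The linear models, the dimensions and the single gradient step are my own placeholders, not the paper's networks: a representation model that sees the real observations, a transition model that has to predict the same hidden states without them, and a reward head fit on the resulting model states.

import numpy as np

rng = np.random.default_rng(0)
T, obs_dim, act_dim, h_dim = 50, 32, 4, 16

# One replayed sequence of observations, actions and rewards (random toy data here).
obs = rng.normal(size=(T, obs_dim))
acts = rng.normal(size=(T, act_dim))
rews = rng.normal(size=T)

W_repr = rng.normal(size=(h_dim, h_dim + act_dim + obs_dim)) * 0.1  # representation model
W_trans = rng.normal(size=(h_dim, h_dim + act_dim)) * 0.1           # transition model (no obs)
w_rew = np.zeros(h_dim)                                             # reward model

def repr_step(h, a, o):            # uses the real observation
    return np.tanh(W_repr @ np.concatenate([h, a, o]))

def trans_step(h, a):              # must manage without the observation
    return np.tanh(W_trans @ np.concatenate([h, a]))

h = np.zeros(h_dim)
states, trans_err = [], 0.0
for t in range(T):
    prior = trans_step(h, acts[t])            # what the transition model predicts
    h = repr_step(h, acts[t], obs[t])         # what the representation model computes
    trans_err += np.mean((prior - h) ** 2)    # the transition is trained to match it
    states.append(h)
states = np.stack(states)

# Fit the reward head on the model states (one explicit least-squares gradient step).
grad = states.T @ (states @ w_rew - rews) / T
w_rew -= 0.5 * grad
print(trans_err / T, np.mean((states @ w_rew - rews) ** 2))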
I think their point is: whatever you need to do in order to learn this representation. One example is actually drawn here. One example is that you can learn a model that reconstructs the next state, or actually, sorry, reconstructs the same state. If you give the observation as an input, it goes through the hidden state, and you can learn a decoder that reconstructs that observation. This is usually done in things like variational autoencoders in order to produce generative models. This part here would be the generator, and that would be the thing of interest if you were building a variational autoencoder. Of course, here our quantity of interest is the encoder model, because we want a good representation of the state, but it comes down to the same thing: if you can learn a model that accurately reconstructs the observation, then your representation here in the middle is probably an informative one, because you learn the same model across multiple observations, which means it has to accurately encode what makes one observation different from another. This is how you learn the theta parameters. The other models here are the action and the value parameters. This is the step called behavior learning. In behavior learning, what they say is: imagine trajectories from each of the states that you have. What you're going to do is, from each of the observations here, you're going to obtain the hidden states. From each of these hidden states (here is an observation, and this is its hidden state), you're going to use the model that you learned, through the LSTM (this drawing is terrible), to imagine future trajectories of hidden states. What you have given now is the observation here and its hidden state. You're going to imagine future hidden states, you're also going to imagine future rewards, and you are going to use your policy in order to determine which actions you take. The ultimate goal here is to learn a good policy, a policy that will give you better rewards in the future. This is regular reinforcement learning, except that in regular reinforcement learning I have my observation, I encode it, and then I determine what action I want to take. Then I feed that action back into the environment, which gives me the next observation, and I use that to determine, maybe in conjunction with the last hidden state, the next action. Here, since we learned a dynamics model of the hidden states, we can simply determine the action and then compute what the probable next hidden state is going to be, then use that to determine an action again, and so on. There's no need to go through the environment, which means potentially we can learn much faster without having to expensively interact with the environment. Also, these encoder models here might be quite large, so our backprop now only needs to happen through this path basically, if we want to, or through this path here, in case we have discrete actions. That is the behavior learning. As you can see, we predict the rewards and the values and compute value estimates, and then we update these parameters. What we have here is a value function. The value function depends on this psi here, and we update it using a gradient of its output minus this quantity here, which is an estimate of the value. As you know, a value function is supposed to tell you the complete future reward given a state. 
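As a toy illustration of that imagination step, here is a short runnable sketch, again with made-up linear models rather than the paper's networks. Starting from one hidden state, the policy and the learned transition model generate a trajectory of imagined states, rewards and values, and the discounted sum of imagined rewards plus a bootstrapped value at the horizon is the kind of quantity the action model is trained to push up.

import numpy as np

rng = np.random.default_rng(1)
h_dim, act_dim, horizon = 16, 4, 15

W_trans = rng.normal(size=(h_dim, h_dim + act_dim)) * 0.1   # learned latent dynamics
W_pol = rng.normal(size=(act_dim, h_dim)) * 0.1             # action model (policy)
w_rew = rng.normal(size=h_dim) * 0.1                        # learned reward model
w_val = rng.normal(size=h_dim) * 0.1                        # learned value model

def imagine(h, gamma=0.99):
    rewards, values = [], []
    for _ in range(horizon):
        a = np.tanh(W_pol @ h)                              # policy acts on the latent state
        h = np.tanh(W_trans @ np.concatenate([h, a]))       # imagined next state, no env call
        rewards.append(float(w_rew @ h))                    # imagined reward
        values.append(float(w_val @ h))                     # value model's opinion of that state
    # A simple imagined return: discounted rewards plus a value bootstrap at the horizon.
    ret = sum(gamma ** t * r for t, r in enumerate(rewards)) + gamma ** horizon * values[-1]
    return rewards, values, ret

h0 = rng.normal(size=h_dim)        # would come from encoding a real observation
print(imagine(h0)[2])              # the action model is trained to increase this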
It's important for us to have a function that can estimate that, because then we can choose actions accordingly. If we can make this function go high, and it is an accurate function, that means we get a lot of reward in the future, so it's important to learn this function. Here you can see we adjust it in the direction of matching this quantity better; we'll get to this quantity in a second. You can also see we update this other parameter, which is the action model. Here you see that the action model depends on this; this is our policy, the thing that determines which action we take. We update it in the direction of the gradient of this value estimate, so we train the policy to maximize the value, which is all the future reward that we get. We can do this because we can now backpropagate through all of these time steps: we have this transition model, and we can backpropagate through all of it, which is pretty cool. In my opinion, the workhorse of this paper might be this quantity here: how exactly do you compute the value of a state? Especially in these continuous control tasks you sometimes have a lot of steps. The trajectories might be pretty long, longer than what you can reasonably backpropagate through from time step to time step; even an LSTM might only be able to backprop through a couple of dozen or maybe a few hundred steps in time, and maybe you have longer trajectories here. I think this value estimate is a main component of extending that range. They say this is according to equation 6, and this is what it does; in my opinion, this is the workhorse of the method. It's a three-step process, and it's pretty heavy. This is the quantity they estimate with the value function. It is an exponentially weighted mixture of k-step estimates, for k from one up to the imagination horizon. Each of those k-step estimates is again a sum over time steps n from tau up to h minus one, where h is the minimum of tau plus k and the end of the horizon, so each one looks at most k steps into the future. And what do you sum? The discounted rewards you get at those steps, plus your own estimate of the value function at the final step h, discounted accordingly. Imagine you have a number of time steps, and at each one you get a reward. This is a fairly involved way of going into the future, summing up the rewards, going more steps, summing up the rewards again in a different fashion, and then mixing these individual quantities, the one-step one, the two-step one, the k-step one, that you got from accumulating all of these. That allows you to look way beyond the horizon, because your estimate of the value function at the last step itself looks into the future: what you accumulate at the end of your time horizon already includes information from all the future steps, since you bootstrap with your own value estimate. It's quite convoluted, but I think this value estimate is what allows you to estimate values far into the future. They do show some samples here of what the agent can do. I haven't found any videos of it, unfortunately, but it appears to work pretty well. 
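To make that estimate a bit less abstract, here is a small self-contained sketch of this kind of multi-step target. The indexing is my own simplification rather than the paper's exact equation 6: k-step returns bootstrapped with the learned value model, mixed with exponentially decaying weights controlled by a parameter lambda.

def lambda_return(rewards, values, gamma=0.99, lam=0.95):
    # rewards[n]: imagined reward after step n; values[n]: learned value of the
    # imagined state reached after step n. Both lists have length H (the horizon).
    H = len(rewards)

    def k_step(k):
        h = min(k, H)                                         # cut off at the horizon
        ret = sum(gamma ** n * rewards[n] for n in range(h))  # discounted imagined rewards
        return ret + gamma ** h * values[h - 1]               # bootstrap with the value model

    weights = [(1 - lam) * lam ** (k - 1) for k in range(1, H)] + [lam ** (H - 1)]
    return sum(w * k_step(k) for w, k in zip(weights, range(1, H + 1)))

# Toy check: constant reward 1 and value 0 gives roughly the discounted reward sum.
print(lambda_return([1.0] * 15, [0.0] * 15))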
They have a discussion of different representation learning methods and different experiments and ablations and so on. So I invite you to look at this paper and I hope this was somewhat clear. Bye bye. | [
{
"start": 0,
"end": 5.92,
"text": " Hi there! Today we're looking at Dream to Control Learning Behaviors by Latent"
},
{
"start": 5.92,
"end": 13.08,
"text": " Imagination by Dani Jarhofner, Timothy Lillikrup, Jimmy Baa and"
},
{
"start": 13.08,
"end": 21.2,
"text": " Mohamed Nerozi. This is a reinforcement learning paper that iterates on a"
},
{
"start": 21.2,
"end": 31.439999999999998,
"text": " series of previous papers where the goal is to learn a policy. In this"
},
{
"start": 31.439999999999998,
"end": 35.76,
"text": " case they want to learn policies for these kind of continuous control tasks"
},
{
"start": 35.76,
"end": 42.76,
"text": " of these physics-based robots, these hopper or walker types of tasks where"
},
{
"start": 42.76,
"end": 53.26,
"text": " you have to control these joints in order to move forward. The"
},
{
"start": 53.26,
"end": 57.72,
"text": " goal is that you have multiple observations as you do in reinforcement"
},
{
"start": 57.72,
"end": 64.08,
"text": " learning and from each observation you need to somehow come up with an action"
},
{
"start": 64.08,
"end": 71.88,
"text": " of what to do. Then that will give you the next observation as well as a"
},
{
"start": 71.88,
"end": 80.52,
"text": " reward. If your goal is to move this spider, maybe the reward is"
},
{
"start": 80.52,
"end": 85.64,
"text": " proportional to how far you move. So your goal is to collect the maximum reward,"
},
{
"start": 85.64,
"end": 91.47999999999999,
"text": " which would mean you have to move the spider as far as possible simply by"
},
{
"start": 91.47999999999999,
"end": 100.08,
"text": " doing the correct actions. The goal of this paper now is to do this by"
},
{
"start": 100.08,
"end": 108.6,
"text": " learning to plan ahead in this latent space. As you can see"
},
{
"start": 108.6,
"end": 115.28,
"text": " here, the way they do it is they take the observation and they feed it through an"
},
{
"start": 115.28,
"end": 121.03999999999999,
"text": " encoder. You can think of this as maybe a convolutional neural network or"
},
{
"start": 121.03999999999999,
"end": 125.92,
"text": " something. Anything that can work, that can take an image as an input and give"
},
{
"start": 125.92,
"end": 132.72,
"text": " you a hidden representation. This here is the hidden representation. From"
},
{
"start": 132.72,
"end": 137.64000000000001,
"text": " this hidden representation you can determine what the next action is going"
},
{
"start": 137.64000000000001,
"end": 144.24,
"text": " to be. Then you get a new observation and then again you can feed that along"
},
{
"start": 144.24,
"end": 151.08,
"text": " with the last hidden state into a new hidden state. Previous"
},
{
"start": 151.08,
"end": 157.52,
"text": " models do this a lot. You encode your observation and you have a"
},
{
"start": 157.52,
"end": 163.72000000000003,
"text": " recurrent neural network that incorporates all of the observations"
},
{
"start": 163.72000000000003,
"end": 167.8,
"text": " into a hidden state along with the actions you take. Then you always"
},
{
"start": 167.8,
"end": 176,
"text": " decide on a next action to do. What does this model do differently? This model"
},
{
"start": 176,
"end": 187.16,
"text": " wants to do this all in hidden space. This model wants to say"
},
{
"start": 187.16,
"end": 193.16,
"text": " I am here, I have this observation. Now my encoder tells me that this is going to"
},
{
"start": 193.16,
"end": 198.44,
"text": " give me this hidden state. Now what it wants to do is it wants to take in the"
},
{
"start": 198.44,
"end": 205.04,
"text": " action that it's doing and without seeing the next observation, it wants to"
},
{
"start": 205.04,
"end": 211.56,
"text": " predict it. It wants to say if I am here and I do this action, what"
},
{
"start": 211.56,
"end": 215.72,
"text": " might the action be? The action might be to put the joystick to the right. It will"
},
{
"start": 215.72,
"end": 221.88,
"text": " learn the hidden state corresponding to the spider being a bit more to the right."
},
{
"start": 221.88,
"end": 228.68,
"text": " This is a bit more to the right than it is right now. It will need to"
},
{
"start": 228.68,
"end": 235.28,
"text": " do so a number of time steps into the future and it will learn from"
},
{
"start": 235.28,
"end": 243.4,
"text": " its own imagination. It will imagine into the future how the hidden"
},
{
"start": 243.4,
"end": 250.16,
"text": " states look and then it will learn from that instead of having to really do the"
},
{
"start": 250.16,
"end": 254.72,
"text": " actions in the real world. We've already looked at a number of papers"
},
{
"start": 254.72,
"end": 262.88,
"text": " including something like mu0 or I2A or something like this. This now is"
},
{
"start": 262.88,
"end": 268.64,
"text": " slightly different. You can see what's different here."
},
{
"start": 268.64,
"end": 275.44,
"text": " What is different is in mu0 we used this latent model in order to"
},
{
"start": 275.44,
"end": 280.24,
"text": " plan ahead, like in order to do our decision tree planning ahead and so on."
},
{
"start": 280.24,
"end": 284.88,
"text": " This model doesn't do this. This model still wants to come up with a single"
},
{
"start": 284.88,
"end": 291.04,
"text": " policy where you encode your state. On the right is the final result."
},
{
"start": 291.04,
"end": 295.28000000000003,
"text": " You encode your state, it gets you to a hidden representation and then from that"
},
{
"start": 295.28000000000003,
"end": 301.8,
"text": " you determine what your actions going to be and you have your next state and so on."
},
{
"start": 301.8,
"end": 308.24,
"text": " The final goal is simply going to be a policy like a single shot policy"
},
{
"start": 308.24,
"end": 315.92,
"text": " without any Monte Carlo tree expansion and so on. What it wants to do is it"
},
{
"start": 315.92,
"end": 321.64,
"text": " wants to learn this policy not by interacting in the real world like here"
},
{
"start": 321.64,
"end": 330.76,
"text": " on the left but actually by interacting only in the dream world right here."
},
{
"start": 330.76,
"end": 335.88,
"text": " The crucial part if you want to learn from your dreams is to make sure"
},
{
"start": 335.88,
"end": 345.2,
"text": " that your dreams are an accurate representation of the real world."
},
{
"start": 345.2,
"end": 351.12,
"text": " We already saw this in a paper called World Models by Jürgen Schmidhuber."
},
{
"start": 351.12,
"end": 359.96,
"text": " In that paper what they did was they first collected experience,"
},
{
"start": 359.96,
"end": 367.08,
"text": " like this one, and then they learned from the one observation"
},
{
"start": 367.08,
"end": 376.52,
"text": " to predict the next ones or to predict the next hidden states."
},
{
"start": 376.52,
"end": 383.03999999999996,
"text": " They did so by basically moving in the world at random. They have this"
},
{
"start": 383.03999999999996,
"end": 389.4,
"text": " little spider thingy and they just do random movements. They randomly"
},
{
"start": 389.4,
"end": 394.35999999999996,
"text": " move around and thus they collect these trajectories and then they learn from"
},
{
"start": 394.35999999999996,
"end": 399.91999999999996,
"text": " the random trajectories. The difference that this paper does is it does these"
},
{
"start": 399.91999999999996,
"end": 405.56,
"text": " steps iteratively. It will not learn from random policy but it will"
},
{
"start": 405.56,
"end": 412.59999999999997,
"text": " actually first start out learning this random, learning a good policy for its"
},
{
"start": 412.6,
"end": 420.24,
"text": " environment model, then acting going back and using that policy in order to learn"
},
{
"start": 420.24,
"end": 425.12,
"text": " a better environment model and then again learn using the better environment"
},
{
"start": 425.12,
"end": 433.28000000000003,
"text": " model in order to learn a better policy. If this wasn't clear enough we'll jump"
},
{
"start": 433.28000000000003,
"end": 441.64000000000004,
"text": " to the algorithm. The algorithm isn't actually too complicated. As I said"
},
{
"start": 441.64,
"end": 447.76,
"text": " I think it's a relatively minor iteration on previous research but it"
},
{
"start": 447.76,
"end": 454.03999999999996,
"text": " appears to work and it works in these kind of continuous control tasks."
},
{
"start": 454.03999999999996,
"end": 458.44,
"text": " You see you have three models here that you need to learn and that's what you see"
},
{
"start": 458.44,
"end": 463.32,
"text": " over here. There is representation, transition and reward and you'll see"
},
{
"start": 463.32,
"end": 468.24,
"text": " they all have the same parameters. That gives you an indication that these"
},
{
"start": 468.24,
"end": 474.16,
"text": " things are a single model. Now what is the model representation,"
},
{
"start": 474.16,
"end": 482.64,
"text": " transition and reward? This is the thing on the left here."
},
{
"start": 482.64,
"end": 491.24,
"text": " In this part of the algorithm you assume that you have a policy. You"
},
{
"start": 491.24,
"end": 497.76,
"text": " already know what action you do or you can even assume that you have some"
},
{
"start": 497.76,
"end": 503.92,
"text": " experience. You have your agent is running with a given policy and you"
},
{
"start": 503.92,
"end": 512.28,
"text": " simply collect that and now you're trying to learn. Let me scratch all of"
},
{
"start": 512.28,
"end": 523.48,
"text": " this. What do you have given? Given is the observation sequence and the actions"
},
{
"start": 523.48,
"end": 534.32,
"text": " you took and the rewards you got. That's also given. Each action gives"
},
{
"start": 534.32,
"end": 542.36,
"text": " you reward. These things are given, provided to you and now what do"
},
{
"start": 542.36,
"end": 552.32,
"text": " you want to learn? You want to learn a representation and a transition and"
},
{
"start": 552.32,
"end": 562.48,
"text": " let's say a reward. You also want to predict the next reward. This thing,"
},
{
"start": 562.48,
"end": 573.12,
"text": " this thing. As we already said you can do this by encoding the state using"
},
{
"start": 573.12,
"end": 580.6,
"text": " for example a CNN and then using an LSTM in order to incorporate this over time."
},
{
"start": 580.6,
"end": 587.28,
"text": " What you learn is the transition from one hidden state to the next hidden"
},
{
"start": 587.28,
"end": 594.6800000000001,
"text": " state and you also learn how the observation goes into the hidden state."
},
{
"start": 594.6800000000001,
"end": 602.2,
"text": " Thirdly you learn that if I'm in this hidden state and I take this particular"
},
{
"start": 602.2,
"end": 608.8000000000001,
"text": " action I will get this reward in the future. You can learn this from"
},
{
"start": 608.8,
"end": 615.04,
"text": " just a set of pre-computed or from a set of experience that you have in your"
},
{
"start": 615.04,
"end": 621.28,
"text": " let's say your replay buffer. This is one model and you learn this here"
},
{
"start": 621.28,
"end": 627.3199999999999,
"text": " in this first step in this called dynamics learning section. You see"
},
{
"start": 627.3199999999999,
"end": 637.56,
"text": " while not converged, you do dynamics learning, you draw data sequences from"
},
{
"start": 637.56,
"end": 643.64,
"text": " your experience, then you compute the model states. These are the hidden"
},
{
"start": 643.64,
"end": 651.68,
"text": " states and then you update this parameter theta using representation"
},
{
"start": 651.68,
"end": 656.64,
"text": " learning. They don't really specify what representation learning is but they"
},
{
"start": 656.64,
"end": 663.0799999999999,
"text": " do give examples of what you can do. I think their point is whatever you need"
},
{
"start": 663.08,
"end": 668.84,
"text": " to do in order to learn this representation. One example is"
},
{
"start": 668.84,
"end": 679.2800000000001,
"text": " actually drawn here. One example is you can learn a model that reconstructs the"
},
{
"start": 679.2800000000001,
"end": 685.2800000000001,
"text": " next state or actually sorry reconstructs the same state. You can learn a"
},
{
"start": 685.2800000000001,
"end": 691.72,
"text": " model that predicts. If you give the observation as an input it goes"
},
{
"start": 691.72,
"end": 699.4,
"text": " through the hidden state. You can learn a decoder that reconstructs that"
},
{
"start": 699.4,
"end": 705.24,
"text": " observation. This is usually done in things like variational auto encoders in"
},
{
"start": 705.24,
"end": 710.44,
"text": " order to produce generative models. This part here would be the"
},
{
"start": 710.44,
"end": 714.64,
"text": " generator and that would be kind of the thing of interest if you are doing a"
},
{
"start": 714.64,
"end": 720.9200000000001,
"text": " variational auto encoder. Of course here our quantity of interest is this"
},
{
"start": 720.92,
"end": 729.1999999999999,
"text": " encoder model because we want a good representation of the state."
},
{
"start": 729.1999999999999,
"end": 734.4799999999999,
"text": " It comes down to the same thing. If you can learn a model that learns to"
},
{
"start": 734.4799999999999,
"end": 740.68,
"text": " accurately reconstruct the observation then your representation here in the"
},
{
"start": 740.68,
"end": 746.76,
"text": " middle is probably an informative one. Because you learn the same model"
},
{
"start": 746.76,
"end": 753.28,
"text": " across multiple observations that means it can accurately encode what makes one"
},
{
"start": 753.28,
"end": 759.3,
"text": " observation different from another one. This is how you learn the"
},
{
"start": 759.3,
"end": 768.36,
"text": " theta parameters. The other models here are the action and the value"
},
{
"start": 768.36,
"end": 775.08,
"text": " parameters. This is here in the step called behavior learning. In the"
},
{
"start": 775.08,
"end": 780.2800000000001,
"text": " behavior learning what they say is imagine trajectories from each of the"
},
{
"start": 780.2800000000001,
"end": 785.32,
"text": " states that you have. What you're going to do is from each of the observations"
},
{
"start": 785.32,
"end": 791.64,
"text": " here you're going to obtain the hidden states. From each"
},
{
"start": 791.64,
"end": 797.48,
"text": " of the hidden states here, here is an observation from its hidden state,"
},
{
"start": 797.48,
"end": 806.12,
"text": " you're going to use the model that you learned here through the LSTM."
},
{
"start": 806.12,
"end": 812.52,
"text": " This is terrible. Through the LSTM you're going to use that model to imagine future"
},
{
"start": 812.52,
"end": 820.9200000000001,
"text": " trajectories of hidden states. You have given, or now is the"
},
{
"start": 820.9200000000001,
"end": 826.72,
"text": " observation here, and the hidden state. You're going to imagine future hidden"
},
{
"start": 826.72,
"end": 838.6,
"text": " states, you're also going to imagine future rewards. You are going to use"
},
{
"start": 838.6,
"end": 846.4,
"text": " your policy in order to determine which actions you're"
},
{
"start": 846.4,
"end": 852.88,
"text": " going to take. The ultimate goal here is to learn a good policy, so a"
},
{
"start": 852.88,
"end": 858.56,
"text": " policy that will give you better rewards in the future. This is"
},
{
"start": 858.56,
"end": 867.36,
"text": " regular reinforcement learning, except that the difference is in regular"
},
{
"start": 867.36,
"end": 873.36,
"text": " reinforcement learning I have my observation, I encode it and then I"
},
{
"start": 873.36,
"end": 878,
"text": " determine what action I want to take. Then I feed that action back into the"
},
{
"start": 878,
"end": 883.28,
"text": " environment, which would give me the next observation. Then I'd use that to"
},
{
"start": 883.28,
"end": 888.48,
"text": " determine, maybe in conjunction with the last hidden state, the next action."
},
{
"start": 888.48,
"end": 894,
"text": " In this thing, since we learned a dynamics model of the hidden states, we can simply"
},
{
"start": 894,
"end": 899.76,
"text": " determine the action and then simply compute what the probable next hidden"
},
{
"start": 899.76,
"end": 906.32,
"text": " state is going to be. Then use that to determine an action again and so on."
},
{
"start": 906.32,
"end": 910.7600000000001,
"text": " There's no need to go through the environment, which means potentially we"
},
{
"start": 910.7600000000001,
"end": 916.36,
"text": " can learn much faster without having to expensively interact with the"
},
{
"start": 916.36,
"end": 925.7600000000001,
"text": " environment. That allows us to basically... Also these models here, they might be"
},
{
"start": 925.7600000000001,
"end": 931.72,
"text": " quite large, so our backprop now only needs to happen through this path"
},
{
"start": 931.72,
"end": 938.6,
"text": " basically, if we want to, or through this path here, in case we have"
},
{
"start": 938.6,
"end": 948.28,
"text": " discrete actions. That will be the dynamics learning."
},
{
"start": 948.28,
"end": 957.76,
"text": " As you can see, we predict the rewards and the values and"
},
{
"start": 957.76,
"end": 964.8,
"text": " compute value estimates. Then we update these parameters. What we have"
},
{
"start": 964.8,
"end": 971.4399999999999,
"text": " is here a value function. The value function is dependent on this psi here."
},
{
"start": 971.4399999999999,
"end": 981.52,
"text": " This we update using a gradient of its output minus the true value."
},
{
"start": 981.52,
"end": 985.68,
"text": " This here is an estimate of the value. As you know, a value function is"
},
{
"start": 985.68,
"end": 993.28,
"text": " supposed to tell you the complete future reward given a state."
},
{
"start": 993.28,
"end": 998.0799999999999,
"text": " It's important for us that we have a function that can estimate that, because of"
},
{
"start": 998.0799999999999,
"end": 1004.16,
"text": " course then we can take actions. If we can make this function go high and this"
},
{
"start": 1004.16,
"end": 1011.12,
"text": " is an accurate function, that means we get a lot of reward in the future."
},
{
"start": 1011.12,
"end": 1015,
"text": " It's important to learn this function. Here you can see we adjust it into the"
},
{
"start": 1015,
"end": 1020.76,
"text": " direction of matching this quantity better. We'll get to this quantity in a"
},
{
"start": 1020.76,
"end": 1028.92,
"text": " second. You can also see we update this parameter, which is the action model."
},
{
"start": 1028.92,
"end": 1034.96,
"text": " Here you see that the action model depends on this. This is our policy."
},
{
"start": 1034.96,
"end": 1042.16,
"text": " This thing here determines which action we take. We update it into the"
},
{
"start": 1042.16,
"end": 1046.88,
"text": " direction. This is a gradient with respect to this value function."
},
{
"start": 1046.88,
"end": 1053.68,
"text": " We train the policy to maximize the value, which is all the future rewards that we get."
},
{
"start": 1053.68,
"end": 1059.52,
"text": " Of course we can do this because we can now back propagate through all of these"
},
{
"start": 1059.52,
"end": 1065.52,
"text": " time steps. We have this transition model. We can back"
},
{
"start": 1065.52,
"end": 1073.8,
"text": " propagate through all of this, which is pretty cool. I think in my opinion the"
},
{
"start": 1073.8,
"end": 1080.4,
"text": " workhorse of this paper might be this quantity here."
},
{
"start": 1080.4,
"end": 1088.6399999999999,
"text": " How exactly do you compute the value of a state? Especially in these continuous"
},
{
"start": 1088.64,
"end": 1096.3600000000001,
"text": " control tasks you sometimes have a lot of steps. These trajectories"
},
{
"start": 1096.3600000000001,
"end": 1101.96,
"text": " might be pretty long and they might be longer than what you can back propagate"
},
{
"start": 1101.96,
"end": 1111.6000000000001,
"text": " here reasonably from time step to time step. Even an LSTM might only be"
},
{
"start": 1111.6000000000001,
"end": 1117.2,
"text": " able to back prop through a couple of dozen or maybe a few hundred steps in"
},
{
"start": 1117.2,
"end": 1125.1200000000001,
"text": " time. Maybe you have longer trajectories here. I think this"
},
{
"start": 1125.1200000000001,
"end": 1132.88,
"text": " value estimate here is a main component of extending that range. They say this"
},
{
"start": 1132.88,
"end": 1140.32,
"text": " is according to equation 6 and this is what it does. This is my"
},
{
"start": 1140.32,
"end": 1145.48,
"text": " opinion that this here is the workhorse of the method. It's a"
},
{
"start": 1145.48,
"end": 1151.28,
"text": " three-step process actually. It's pretty heavy. You see this is the"
},
{
"start": 1151.28,
"end": 1160.24,
"text": " quantity they estimate with the value function. It is set between an"
},
{
"start": 1160.24,
"end": 1167.6,
"text": " average over... H is the time horizon that you're looking for. It is"
},
{
"start": 1167.6,
"end": 1177,
"text": " set between these two things across the sum over the time horizon. Now each of"
},
{
"start": 1177,
"end": 1189.6,
"text": " those things again here is a sum over this tau here, which is this"
},
{
"start": 1189.6,
"end": 1199.8,
"text": " tau and H minus 1. H here is the minimum of tau plus K and tau plus horizon."
},
{
"start": 1199.8,
"end": 1206.84,
"text": " This quantity looks K steps into the future. For each"
},
{
"start": 1206.84,
"end": 1219.8,
"text": " step to the horizon we look K steps into the future. For each step we"
},
{
"start": 1219.8,
"end": 1225.76,
"text": " look into the future we sum again across these quantities here. These"
},
{
"start": 1225.76,
"end": 1231.12,
"text": " quantities here, what is that? It's a mixture of the reward you get in that"
},
{
"start": 1231.12,
"end": 1239.6,
"text": " particular step plus your own your estimate of the value function at the"
},
{
"start": 1239.6,
"end": 1246.36,
"text": " at the horizon step discounted by that. So it's a pretty... Imagine you have"
},
{
"start": 1246.36,
"end": 1252.12,
"text": " like a time number of steps that you took and each time you get a reward."
},
{
"start": 1252.12,
"end": 1258.32,
"text": " This is a very complicated way of going into the future,"
},
{
"start": 1258.32,
"end": 1264.24,
"text": " summing up the rewards, going more steps, summing up the rewards again in different"
},
{
"start": 1264.24,
"end": 1269.4399999999998,
"text": " fashion and then mixing these individual quantities. So this one, this"
},
{
"start": 1269.4399999999998,
"end": 1273.36,
"text": " one, this one that you got from accumulating all of these in a weird"
},
{
"start": 1273.36,
"end": 1282.2,
"text": " fashion. That allows you to look way beyond. Especially you see here your"
},
{
"start": 1282.2,
"end": 1290.64,
"text": " estimate of the value function will actually include your own value function"
},
{
"start": 1290.64,
"end": 1298.1200000000001,
"text": " that again probably looks into the future. So what you accumulate from the"
},
{
"start": 1298.1200000000001,
"end": 1304.16,
"text": " last step in your time horizon already includes information from all the future"
},
{
"start": 1304.16,
"end": 1311.0800000000002,
"text": " steps because you take your own value estimate into account. This is I think"
},
{
"start": 1311.08,
"end": 1319.24,
"text": " it's very convoluted but again I think this complicated value"
},
{
"start": 1319.24,
"end": 1326.9199999999998,
"text": " estimate allows you to have a better value estimate far into the future."
},
{
"start": 1327.72,
"end": 1336.48,
"text": " They do show some kind of samples here of what they can do. I haven't found any"
},
{
"start": 1336.48,
"end": 1342.92,
"text": " videos of it unfortunately but it appears to work pretty well. They have a"
},
{
"start": 1342.92,
"end": 1346.8,
"text": " discussion of different representation learning methods and different"
},
{
"start": 1346.8,
"end": 1353.24,
"text": " experiments and ablations and so on. So I invite you to look at this paper and I"
},
{
"start": 1353.24,
"end": 1369.24,
"text": " hope this was somewhat clear. Bye bye."
}
] |
XdpF9ZixIbI | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Can we Contain Covid-19 without Locking-down the Economy? | [
"Science & Technology"
] | [
"machine learning",
"epidemiology",
"worst case",
"statistics",
"hypothesis test",
"covid",
"corona",
"coronavirus"
] | My thoughts on the let-the-young-get-infected argument.
https://medium.com/amnon-shashua/can-we-contain-covid-19-without-locking-down-the-economy-2a134a71873f
Abstract:
In this article, we present an analysis of a risk-based selective quarantine model where the population is divided into low and high-risk groups. The high-risk group is quarantined until the low-risk group achieves herd-immunity. We tackle the question of whether this model is safe, in the sense that the health system can contain the number of low-risk people that require severe ICU care (such as life support systems).
Authors: Shai Shalev-Shwartz, Amnon Shashua
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Can we contain COVID-19 without locking down the economy? This is the question, and I do care about this article because Shai Shalev-Shwartz is one of the bigger names in machine learning theory. So it was interesting for me to see what he and his collaborator here had to say about the outbreak and the strategy to contain it. Contain maybe isn't quite the right word; I think the way they ask the question is rather: how are we going to survive this the best? This is by no means an endorsement by me. I'm not a medical professional. Please just view this as a commentary and an explanation of what they are saying. I'll give my opinions along the way, of course. So they identify three different models for handling the spread of COVID-19, and we'll start with the third one, because they argue for the first one, and this builds more suspense. So they say there is countrywide lockdown, right, until the spread of the virus is under control. They say it could take anywhere from weeks to months. It is the safest route, but it does not prevent a second wave from occurring. Now, of course, if you have people, let's say these are people, right, then the idea is that everyone just stays in there, stay in your house, everybody, until it's kind of gone. Now they say, correctly, that there is a risk of a second wave, because there is no immunity, so a single infected person still has the potential of creating another epicenter. So they don't consider this option. The next option is called containment-based selective quarantine, which means: find all the positive cases and put them in quarantine. So let's say we go here and we let you roam around freely, but we can test people and we know that some of them are positive, so we simply tell those to stay at home. Now this depends a lot on how well you can test people, and it also depends on what they claim about the contagious time interval. We know that there are people that are contagious without showing symptoms. So unless you can test every single person all the time, this is likely to not really help a lot. There is various data from various countries that actually shows it can reduce the load, but they basically argue against this option because there are these contagious people and you can never test fast enough, accurately enough, or thoroughly enough. And then they say there is risk-based selective quarantine, which means what? It means that some of these people are going to be at risk, and in this case we obviously mean old people. So old people, I'm going to draw them with a cane, not because old people aren't fit, just because they have better taste in canes. And then there are young people, and they run a smartphone with TikTok. And what we're going to say is: you youngsters, you're not really at risk from this, so you go out, you sneeze on each other, you go about your life normally, and you old people basically stay at home until all the young people have immunity. So we ramp up the cases and then it flattens out eventually in the low-risk population. And at that point there is enough herd immunity, right? All these people are now immune, so that the old person here, even if they now go out again, won't catch it, because everyone has already had it. So they argue for this particular strategy, or at least they analyze this particular strategy. 
Now, I have to say at the beginning that the core assumption here is that this quarantine of the high-risk people can be done basically in a perfect way. So the assumption is that you are able to perfectly quarantine all the high-risk people, and that the level of infection in the low-risk population has no influence on the level of infection in the high-risk population. And in my opinion, I simply don't believe that. I simply don't believe you can build this quarantine. Even these old people need food sometimes, and the nursing home needs staff. So even if they can reduce their contact with the outside world, they cannot fully be sheltered. And that means the more infections you have in the low-risk population, the more infections you will have in the high-risk population. So I think the fundamental core assumption of this model is quite flawed. That being said, let's analyze it. So we assume that none of the high-risk people is going to get sick, because they all stay at home. The math in this paper is actually pretty basic, so we'll go through it in a bit more detail so that we understand the core argument. They introduce the following quantities. M here is the low-risk population, right? This is the population size. Then there is ν, nu; nu here is a probability, namely the probability that if you are sick, you need to go to the ICU. Right? So sick simply means you have the virus, and ICU means that the symptoms are so bad that you need help from the medical system in order to overcome the disease. So if we multiply the population size by the probability that if you get sick you need to go to the ICU, what do we get? We get a worst-case scenario. So basically the authors here, and I find this is the good part of this analysis, really don't rely on pandemic dynamics, epidemiology, exponential growth and so on. They simply consider the worst case. So M times nu here, if you multiply these two numbers, what does that mean? That is the number of severe cases, severe meaning you need the ICU, if everybody gets sick, and everybody gets sick at the same time. So this is the worst case. Let's say the whole low-risk population goes out and we all sneeze in each other's faces as much as we can, and we just all get sick at the same time. Then this here is the number of people going to the ICU. Right? And then they introduce this quantity B here; B is the number of beds in the ICU. If the number of beds in the ICU is larger than this worst-case number of severe cases, then we are safe. So that's the argument. Safe here doesn't really mean that we are safe; it means no one will die from lack of an ICU bed, which is kind of the lever we have as a population if you assume everyone is going to get sick anyway. If the number of beds is larger than the worst-case number of ICU patients, we are safe. That's at least how they define safe. Alright, so that's their premise. Now what are they going to do? They're going to find a quantity with which they can bound this thing. So they are going to find an upper bound on the number of severe cases, and if this upper bound is lower than the number of beds, then they can say we're safe with this method. See, this is a worst-case analysis under their assumptions. Alright, so as I said, they don't resort to any kind of epidemiological dynamics. They simply estimate this thing from current numbers. I'm going to introduce two more quantities here: P star and K. 
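Before getting to those two quantities, here is the worst-case condition so far restated as a tiny code sketch; the symbols follow the description above, and the numbers in the example are invented for illustration, not taken from the article.

```python
def worst_case_safe(M, nu, B):
    # Worst case: the whole low-risk group of size M gets sick at the same time,
    # so M * nu of them need an ICU bed; "safe" (in the paper's sense) means
    # that number still fits into the B available beds.
    return M * nu <= B

# Invented numbers: 5 million low-risk people, 0.01% severe-case probability,
# 2,000 ICU beds available for them.
print(worst_case_safe(M=5_000_000, nu=0.0001, B=2_000))  # 500 <= 2000 -> True
```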
Now K is the current number of severe cases. So this is kind of an analog to this thing here; these two are connected. This is the current number of severe cases, and the one up here is the total possible, the worst-case number of severe cases in the future. Likewise, P star here is the percentage of people that are currently sick, and they correctly claim that this is unknown; we would only know it if we could test everybody who is sick, not severe, just sick. And up here there is no corresponding factor, because of course you could imagine another factor here, let's call it P plus or something, which is the fraction of people who are sick in the worst case, and that is of course one in this worst-case scenario. That's why they don't include it. So P star is the current percentage of sick people. Note that this here is a percentage and this here is an actual number; keep that in mind. All right, now we do some basic reformulation. If we take this P star and multiply it, you see it in this corner here, by the total size of the population, we get the number of people who are currently sick: this is the percentage of currently sick people, this is the total size of the population, and the product is the number of people who are currently sick. If we put that in the denominator and put K, the number of people who are currently severe, on top, then we get an estimate of this quantity nu. Remember what nu is: nu is the probability that if you are sick, you go to the ICU, and ICU means you're severe, right? So this is the current number of sick people and this is the current number of severe people, and the ratio gives you an estimate of: if you are sick, what's the probability that you're severe? Now they argue that this quantity is a constant. The probability that you go to the ICU if you are sick from this virus doesn't change over time, so we can estimate it with current numbers, which is a pretty reasonable thing to assume, unless the virus mutates or something. So we know the total size of the population, and we know the current number of severe cases. You can make an argument about that: do we really know the current number of severe cases? Because there is exponential growth involved, this might be difficult to estimate, and they say the same thing. This is the only time where they reference the dynamics of the situation: it grows at an exponential rate. So what we can do, they say, is take a worst-case upper bound, to be on the safe side, and perform a worst-case analysis. So instead of taking K, they add a confidence interval on it that is based on concentration inequalities. They don't use K; they use this K tilde here, which has two additional summands, and that is supposed to be an upper bound with confidence at least one minus delta. And this delta you can set, for example, to 0.05, which gives you a 95 percent confidence that this is an upper bound. Now, this comes from some concentration bound, and there are certain assumptions behind this upper bound which I don't know enough about to critique here. I'm going to assume they are reasonable; if they are not, then of course that is an additional point of criticism of this work. All right. So instead of using K, we are on the safe side and use this K tilde. So we know this as well. Now, the unknown quantity, of course, is this thing here, P star. 
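As a small sketch of the estimate we just built (my own illustration; the exact concentration-inequality form of K tilde is not spelled out in the video, so I simply take the upper bound as a given input):

```python
def estimate_nu(K_tilde, p_star, M):
    # Probability of becoming a severe (ICU) case given infection:
    # (upper bound on) current severe cases divided by the current number
    # of infected people, which is p_star * M. p_star is the unknown
    # fraction of the low-risk population that is currently infected.
    return K_tilde / (p_star * M)

# Invented numbers: 300 severe cases (upper bound), 2% currently infected,
# 5 million low-risk people -> nu estimate of 0.3%.
print(estimate_nu(K_tilde=300, p_star=0.02, M=5_000_000))
```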
What is the percentage of people that are currently sick? The goal is now to find that. So they say, OK, if we plug in this upper bound K tilde, then with this probability we can upper bound this quantity nu, which is exactly what we wanted, because we need to upper bound M times nu. That's what they say here. Since at the top we saw that M times nu is the worst-case number of severe cases, and we want to upper bound that, we can rearrange this thing. If we plug these two together, we see that the M cancels out, and we can upper bound the worst-case number of severe cases by this quantity here: the upper bound on the current severe cases divided by the percentage of currently sick people. So again, they reformulate and they plug in. This, of course, needs to be smaller than the number of beds. So they plug this in here and they say: what we have to check is whether this quantity is larger than this ratio of two quantities we know; if it is, then we are safe. Now our goal is going to be to show that P star is at least as large as this quantity here, and they do this via hypothesis testing. They call this quantity P tilde, and they do a classic statistical hypothesis test where they ask: is P star significantly larger than P tilde? If that's the case, then we're safe. If not, we're not. And how do they do that? They say, OK, we have the population. I did draw this at one point; let's go back there. We have the population here, right? And what we can do is just go out and uniformly test people, just randomly select people. Now, this is an old person, and old people stay at home. So we randomly select people to test and their test results come back: this one's healthy, this one's healthy, this one's healthy, this one's not healthy. So we have four tests, and out of the four, one was positive. Can we work out a hypothesis test from that? Can we decide whether P star is probably much larger than P tilde or not? And the answer is yes, because this is a uniform sample. You can work out, using classic statistical tools, whether or not you can reject the null hypothesis. And they actually work this out and they do give a number here, and it's this: they say, if we test N people, where N is, let's say, four and a half times this quantity B divided by K tilde, so the number of beds divided by the upper bound on the current severe cases, so we test four point five times this many people, and if we then find at least 10 positive cases, then with a probability of 95 percent we know that the risk-based model is safe. And of course, the more infected people you find in this sample, the better, because the number of severe cases stays constant at any given time, so finding more positives means a lot more people are infected per severe case, which means the probability that you are going to become severe is lower. That's why it says at least. So again, you go out, you test N people according to this formula, plugging in the numbers for your current situation, and if you find at least 10 positive people, then with a probability of at least 95 percent you know that this model is safe. Cool. And this is done using the classic statistical hypothesis-testing literature. So I think that is a pretty cool result. But I do severely criticize the underlying assumption, which is that you can perfectly enforce this quarantine. 
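For completeness, here is that decision rule written out as a sketch, with the factor 4.5 and the threshold of 10 taken from the numbers quoted above; everything else (names, example figures) is my own illustration, not the authors' code.

```python
import math

def required_sample_size(B, K_tilde, factor=4.5):
    # Number of uniformly sampled people to test:
    # roughly 4.5 times (ICU beds / upper bound on current severe cases).
    return math.ceil(factor * B / K_tilde)

def risk_model_declared_safe(num_positive, threshold=10):
    # At least 10 positives in that sample -> the risk-based model is
    # declared safe at the 95% confidence level claimed in the article.
    return num_positive >= threshold

# Invented numbers: 20,000 beds, upper bound of 300 current severe cases.
n = required_sample_size(B=20_000, K_tilde=300)      # 300 people to test
print(n, risk_model_declared_safe(num_positive=14))  # 300 True
```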
Of course, if you can't enforce that quarantine perfectly, it means that there is a direct correlation between the number of sick people in your low-risk population and the number of sick people in your high-risk population, which means that more of the high-risk population are going to get infected as well. That in turn means that your number B of available ICU beds is going to drop severely, because high-risk people have a higher hospitalization rate, and that makes the entire model we developed above less valid: what used to be a constant in the model is no longer a constant, it's shrinking, and the worse it gets, the more it shrinks. So that may turn what you initially thought was a safe model into a very unsafe model very quickly. And that doesn't even include all the high-risk people that are additionally in danger because you can't enforce the quarantine. All right. So this was my take on that. Take it for what it's worth. And I wish you a healthy pandemic. Bye bye. | [
{
"start": 0,
"end": 6,
"text": " Can we contain COVID-19 without locking down the economy?"
},
{
"start": 6,
"end": 11,
"text": " This is a question and I do care about this article because"
},
{
"start": 11,
"end": 16,
"text": " Shai Shalef-Schwarz is one of the bigger names in machine learning theory."
},
{
"start": 16,
"end": 21,
"text": " So it was interesting for me to see what he and his collaborator here"
},
{
"start": 21,
"end": 28,
"text": " had to say about the kind of outbreak and the strategy to contain it."
},
{
"start": 28,
"end": 35,
"text": " So contain maybe isn't the right word that they ask."
},
{
"start": 35,
"end": 44,
"text": " I think the way they ask the question is how are we going to survive this the best?"
},
{
"start": 44,
"end": 49,
"text": " And so this in no means is an endorsement by me."
},
{
"start": 49,
"end": 51,
"text": " I'm not a medical professional."
},
{
"start": 51,
"end": 59,
"text": " Please just view this as a commentary and an explanation of what they are saying."
},
{
"start": 59,
"end": 63,
"text": " I'll give my opinions along the way, of course."
},
{
"start": 63,
"end": 70,
"text": " So they identify three different models for handling the spread of COVID-19."
},
{
"start": 70,
"end": 78,
"text": " And we'll start with the third one because they argue for the first one and this builds more suspense."
},
{
"start": 78,
"end": 86,
"text": " So they say there is countrywide lockdown, right, until the spread of the virus is under control."
},
{
"start": 86,
"end": 89,
"text": " They say it could take anywhere from weeks to months."
},
{
"start": 89,
"end": 96,
"text": " It is the safest route, but it does not prevent a second wave from occurring."
},
{
"start": 96,
"end": 103,
"text": " Now, of course, if you have people, let's say these are people, right,"
},
{
"start": 103,
"end": 115,
"text": " then the thing is everyone just stays in there, stay in your house, right, everybody, right, until it's kind of gone."
},
{
"start": 115,
"end": 122,
"text": " Now they say correctly there is a risk of a second wave because only a single infected person,"
},
{
"start": 122,
"end": 129,
"text": " because there's no immunity, still has the potential of creating another epicenter."
},
{
"start": 129,
"end": 133,
"text": " So they don't consider this option."
},
{
"start": 133,
"end": 138,
"text": " The next option is called containment-based selective quarantine,"
},
{
"start": 138,
"end": 144,
"text": " which means find all the positive cases and put them in quarantine."
},
{
"start": 144,
"end": 150,
"text": " So let's say we go here and we let you roam around freely,"
},
{
"start": 150,
"end": 158,
"text": " but we know we can test people and we know that some of them are positive."
},
{
"start": 158,
"end": 162,
"text": " So we simply tell them to stay at home, right."
},
{
"start": 162,
"end": 167,
"text": " Now this depends a lot on how well you can test people,"
},
{
"start": 167,
"end": 173,
"text": " and it also depends on what they claim the contagious time interval."
},
{
"start": 173,
"end": 178,
"text": " We know that there are people that are contagious without showing symptoms."
},
{
"start": 178,
"end": 187,
"text": " So unless you can test every single person all the time, this is likely to not really help a lot."
},
{
"start": 187,
"end": 192,
"text": " There's various data from various countries that actually shows it can reduce the load,"
},
{
"start": 192,
"end": 200,
"text": " but they basically argue against that because there are these contagious people"
},
{
"start": 200,
"end": 206,
"text": " and you can never test fast enough or accurate or thoroughly enough."
},
{
"start": 206,
"end": 213,
"text": " And then they say there is risk-based selective quarantine, which means what?"
},
{
"start": 213,
"end": 219,
"text": " It means that some of these people are going to be at risk."
},
{
"start": 219,
"end": 223,
"text": " And in this case, we obviously mean old people."
},
{
"start": 223,
"end": 230,
"text": " So old people, I'm going to draw them with a cane, not because old people aren't fit,"
},
{
"start": 230,
"end": 235,
"text": " just because they have better tastes in canes."
},
{
"start": 235,
"end": 241,
"text": " And then there are young people and they run a smartphone with TikTok."
},
{
"start": 241,
"end": 248,
"text": " And what we're going to say is that you youngsters, you're not really at risk from this."
},
{
"start": 248,
"end": 253,
"text": " So you go out, you sneeze on each other, you go about your life normally,"
},
{
"start": 253,
"end": 262,
"text": " and you old people basically stay at home until all the young people have immunity."
},
{
"start": 262,
"end": 270,
"text": " So we ramp up the cases and then it flattens out eventually in the low-risk population."
},
{
"start": 270,
"end": 273,
"text": " And at that point, there is enough herd immunity, right?"
},
{
"start": 273,
"end": 279,
"text": " All these people are now immune so that the old person here,"
},
{
"start": 279,
"end": 284,
"text": " even if they now go out again, they won't catch it because everyone's already had it."
},
{
"start": 284,
"end": 295,
"text": " So they argue for this particular strategy, or at least they analyze this particular strategy."
},
{
"start": 295,
"end": 304,
"text": " Now, I have to say at the beginning that the core assumption here is that this quarantine of the high-risk people,"
},
{
"start": 304,
"end": 309,
"text": " you can do basically in a perfect way."
},
{
"start": 309,
"end": 317,
"text": " So the assumption here is that you are able to perfectly quarantine all the high-risk people"
},
{
"start": 317,
"end": 328,
"text": " and that the level of infection in the low-risk population has no influence on the level of infection in the high-risk population."
},
{
"start": 328,
"end": 332,
"text": " And in my opinion, I simply don't believe that."
},
{
"start": 332,
"end": 335,
"text": " I simply don't believe you can build this quarantine."
},
{
"start": 335,
"end": 343,
"text": " I think even these old people, they need food sometimes, the nursing home needs staff."
},
{
"start": 343,
"end": 350,
"text": " So even if they can reduce their contact to the outside world, they cannot fully be sheltered."
},
{
"start": 350,
"end": 356,
"text": " And that means the more infections you have in the low-risk population,"
},
{
"start": 356,
"end": 360,
"text": " the more infections you will have in the high-risk population."
},
{
"start": 360,
"end": 366,
"text": " So I think the fundamental core assumption of this model is quite flawed."
},
{
"start": 366,
"end": 368,
"text": " That being said, let's analyze it."
},
{
"start": 368,
"end": 379,
"text": " So we assume that all the high-risk people, none of them is going to get sick because they all stay at home."
},
{
"start": 379,
"end": 383,
"text": " So the math in this paper is actually pretty basic."
},
{
"start": 383,
"end": 386,
"text": " So we'll go through it a bit more detailed."
},
{
"start": 386,
"end": 389,
"text": " So we'll understand the core argument."
},
{
"start": 389,
"end": 393,
"text": " So they introduce the following quantities, M here."
},
{
"start": 393,
"end": 397,
"text": " M is the low-risk population, right?"
},
{
"start": 397,
"end": 402,
"text": " This is the population size."
},
{
"start": 402,
"end": 408,
"text": " V or new, let's call it new."
},
{
"start": 408,
"end": 412,
"text": " New here is the probability."
},
{
"start": 412,
"end": 421,
"text": " So that's the probability that if you are sick, you need to go to the ICU."
},
{
"start": 421,
"end": 424,
"text": " Right? So sick means simply you have the virus."
},
{
"start": 424,
"end": 435,
"text": " And ICU means that the symptoms are so bad that you need help from the medical system in order to overcome the disease."
},
{
"start": 435,
"end": 445,
"text": " So if we multiply the population size by the probability that if you get sick, you need to go to the ICU, what do we get?"
},
{
"start": 445,
"end": 448,
"text": " We get a worst-case scenario."
},
{
"start": 448,
"end": 455,
"text": " So basically the authors here, and I find this is the good part of this analysis."
},
{
"start": 455,
"end": 464,
"text": " They really don't rely on kind of pandemic dynamics, epidemiology, exponential growth and so on."
},
{
"start": 464,
"end": 468,
"text": " They simply consider the worst case."
},
{
"start": 468,
"end": 473,
"text": " So MD here, if you multiply these two numbers, what does that mean?"
},
{
"start": 473,
"end": 478,
"text": " That is the number of severe cases."
},
{
"start": 478,
"end": 482,
"text": " Severe meaning you need ICU cases."
},
{
"start": 482,
"end": 492,
"text": " If everybody gets sick, if all get sick."
},
{
"start": 492,
"end": 498,
"text": " If everybody gets sick at the same time, right?"
},
{
"start": 498,
"end": 501,
"text": " Same time."
},
{
"start": 501,
"end": 510,
"text": " So this is the work. So let's say we all go out, the lowest population, and we all sneeze in each other's faces as much as we can."
},
{
"start": 510,
"end": 513,
"text": " And we just all get sick at the same time."
},
{
"start": 513,
"end": 519,
"text": " Then this here is the number of people going to the ICU."
},
{
"start": 519,
"end": 530,
"text": " Right? And if this, so they introduce this quantity B here, B is the number of beds in the ICU."
},
{
"start": 530,
"end": 539,
"text": " If the number of beds in the ICU is larger than the worst case, severe cases, right?"
},
{
"start": 539,
"end": 541,
"text": " Then we are safe."
},
{
"start": 541,
"end": 543,
"text": " So that's the argument."
},
{
"start": 543,
"end": 545,
"text": " Basically it's not that we are safe."
},
{
"start": 545,
"end": 549,
"text": " It is no one will die from lack of an ICU bed."
},
{
"start": 549,
"end": 553,
"text": " Which is kind of the lever we have as a population."
},
{
"start": 553,
"end": 557,
"text": " If you assume everyone's going to get sick anyway and so on."
},
{
"start": 557,
"end": 566,
"text": " If the number of beds is larger than the worst case number of ICU patients, we are safe."
},
{
"start": 566,
"end": 569,
"text": " That's at least how they define safe."
},
{
"start": 569,
"end": 572,
"text": " Alright, so that's their premise."
},
{
"start": 572,
"end": 574,
"text": " Now what are they going to do?"
},
{
"start": 574,
"end": 579,
"text": " They're going to find a quantity where they can bound this thing."
},
{
"start": 579,
"end": 585,
"text": " So they are going to find a bound, an upper bound on the number of severe cases."
},
{
"start": 585,
"end": 594,
"text": " And if this upper bound is lower than the number of beds, then they can say we're safe with this method."
},
{
"start": 594,
"end": 601,
"text": " See this is a worst case analysis under their assumptions."
},
{
"start": 601,
"end": 607,
"text": " Alright, so I said they don't resort to any kind of epidemiological dynamics."
},
{
"start": 607,
"end": 611,
"text": " They simply estimate this thing from current numbers."
},
{
"start": 611,
"end": 615,
"text": " I'm going to introduce two more quantities here. P star and K."
},
{
"start": 615,
"end": 626,
"text": " Now K is the current number of severe cases."
},
{
"start": 626,
"end": 632,
"text": " So this is kind of an analog to this thing here."
},
{
"start": 632,
"end": 635,
"text": " So these two are connected."
},
{
"start": 635,
"end": 646,
"text": " This is the current number of severe cases and this up here is the total possible, like the worst case number of severe cases in the future."
},
{
"start": 646,
"end": 661,
"text": " Likewise, P star here is the percentage of people, the percent of people that are sick."
},
{
"start": 661,
"end": 666,
"text": " And they claim correctly this is unknown."
},
{
"start": 666,
"end": 672,
"text": " So if we could test everybody who is sick, not severe, just sick."
},
{
"start": 672,
"end": 685,
"text": " And up here this has no connection because of course you can imagine here another factor times, let's call this P plus or something."
},
{
"start": 685,
"end": 693,
"text": " Which is the number of people who are sick in the worst case, which of course in our worst case scenario is one."
},
{
"start": 693,
"end": 695,
"text": " So that's why they don't include it here."
},
{
"start": 695,
"end": 702,
"text": " So this is the current percentage of sick people."
},
{
"start": 702,
"end": 706,
"text": " So this here is a percentage and this here is an actual number."
},
{
"start": 706,
"end": 709,
"text": " Keep that in mind."
},
{
"start": 709,
"end": 726,
"text": " All right, now if we do some basic reformulation here, if we take this P star and multiply that by, you see it in this corner here,"
},
{
"start": 726,
"end": 731,
"text": " multiply it by the total size of the population, right?"
},
{
"start": 731,
"end": 739,
"text": " We get the number of people who are currently sick."
},
{
"start": 739,
"end": 743,
"text": " This is a percentage of current sick ones, this is the total size of the population."
},
{
"start": 743,
"end": 746,
"text": " Get the number of people who are currently sick."
},
{
"start": 746,
"end": 760,
"text": " If we take that in the denominator and put K here, which is the number of people who are currently severe,"
},
{
"start": 760,
"end": 764,
"text": " then we get an estimate of this quantity new."
},
{
"start": 764,
"end": 766,
"text": " So remember what new is?"
},
{
"start": 766,
"end": 773,
"text": " New is the probability if you are sick, you go to the ICU."
},
{
"start": 773,
"end": 777,
"text": " So the ICU means you're severe, right?"
},
{
"start": 777,
"end": 783,
"text": " So these are the current number of sick people and these are the current number of severe people."
},
{
"start": 783,
"end": 791,
"text": " This gives you an estimate of if you are sick, what's the probability that you're severe?"
},
{
"start": 791,
"end": 800,
"text": " Now they argue that this number, it doesn't change, independent of, so this quantity here is a constant."
},
{
"start": 800,
"end": 805,
"text": " So the probability that if you are sick from this virus, you go to the ICU, doesn't change over time."
},
{
"start": 805,
"end": 809,
"text": " So we can estimate it with current numbers, right?"
},
{
"start": 809,
"end": 817,
"text": " Which is a pretty reasonable thing to assume that this stays constant unless the virus mutates or something."
},
{
"start": 817,
"end": 820,
"text": " So we know the total size of the population."
},
{
"start": 820,
"end": 826,
"text": " We know the current number of severe cases."
},
{
"start": 826,
"end": 829,
"text": " You can make an argument about that."
},
{
"start": 829,
"end": 832,
"text": " So do we really know the current number of severe cases?"
},
{
"start": 832,
"end": 837,
"text": " Because there is an exponential growth involved, this might be difficult to estimate."
},
{
"start": 837,
"end": 840,
"text": " And they say the same thing."
},
{
"start": 840,
"end": 845,
"text": " So they say this is the only time where they reference the dynamics of the situation."
},
{
"start": 845,
"end": 847,
"text": " It grows at an exponential rate."
},
{
"start": 847,
"end": 857,
"text": " So what we can do is we can take a worst case upper bound, they say, to be on the safe side, perform a worst case analysis."
},
{
"start": 857,
"end": 869,
"text": " So instead of taking K, they add this confidence interval on it that is based on concentration inequalities."
},
{
"start": 869,
"end": 878,
"text": " So they don't use K, they use this K tilde here, which has two additional summons here."
},
{
"start": 878,
"end": 885,
"text": " That is supposed to be an upper bound with confidence, at least one over delta here."
},
{
"start": 885,
"end": 888,
"text": " And this you can set, for example, to be 0.05."
},
{
"start": 888,
"end": 906,
"text": " That gives you a 95 percent confidence that this is an upper bound on that."
},
{
"start": 906,
"end": 910,
"text": " Now, this comes from some concentration bound."
},
{
"start": 910,
"end": 921,
"text": " And there are certain assumptions behind this upper bound here, which I don't know enough about to critique them here."
},
{
"start": 921,
"end": 923,
"text": " I'm going to assume they are reasonable."
},
{
"start": 923,
"end": 929,
"text": " If they are not, then of course, that is an additional point of criticism of this work."
},
{
"start": 929,
"end": 938,
"text": " All right. So instead of using K here, we are saying we're on the safe side and we use this K tilde."
},
{
"start": 938,
"end": 940,
"text": " So we know this as well."
},
{
"start": 940,
"end": 945,
"text": " Now, the unknown quantity, of course, is this thing here, P star."
},
{
"start": 945,
"end": 956,
"text": " What is the percentage of people that are currently sick?"
},
{
"start": 956,
"end": 960,
"text": " So the goal is now to find that."
},
{
"start": 960,
"end": 967,
"text": " So they say, OK, if we plug in this upper bound of K tilde, then with this probability,"
},
{
"start": 967,
"end": 977,
"text": " we can upper bound this quantity, nu, which is exactly what we wanted, because we need to upper bound MD."
},
{
"start": 977,
"end": 979,
"text": " That's what they say here."
},
{
"start": 979,
"end": 990,
"text": " So since at the top we saw that M times nu equals MD and we want to upper bound MD,"
},
{
"start": 990,
"end": 1001,
"text": " we can rearrange this thing. If we plug in these two together, we see that the M cancels out."
},
{
"start": 1001,
"end": 1005,
"text": " We can upper bound MD by this quantity here."
},
{
"start": 1005,
"end": 1019,
"text": " The upper bound on the current severe cases divided by the percentage of the currently sick people."
},
{
"start": 1019,
"end": 1027,
"text": " So again, they reformulate and they plug in."
},
{
"start": 1027,
"end": 1033,
"text": " This, of course, needs to be smaller than the number of beds."
},
{
"start": 1033,
"end": 1043,
"text": " So they plug this in here and they say, now what we have to do to see is if this quantity is larger than this quantity of two quantities we know, then we are safe."
},
{
"start": 1043,
"end": 1056,
"text": " Now, again, our goal is going to be to find a quantity that lower bounds P star, but up, but is larger than this quantity here."
},
{
"start": 1056,
"end": 1064,
"text": " And they do this via hypothesis testing. They call this quantity here, they call it P tilde."
},
{
"start": 1064,
"end": 1075,
"text": " And they do a hypothesis test for classic statistics where they ask, is P star significantly larger than P tilde?"
},
{
"start": 1075,
"end": 1081,
"text": " If that's the case, then we're safe. If not, we're not."
},
{
"start": 1081,
"end": 1087,
"text": " And how do they do that? They say, OK, we have the population."
},
{
"start": 1087,
"end": 1091,
"text": " I did draw this at one point."
},
{
"start": 1091,
"end": 1095,
"text": " Let's go back there. We have the population here, right?"
},
{
"start": 1095,
"end": 1105,
"text": " And what we can do is we can just go out and uniformly, uniformly test people, like just randomly select people."
},
{
"start": 1105,
"end": 1108,
"text": " Now, this is an old person, old people stay at home."
},
{
"start": 1108,
"end": 1113,
"text": " So we randomly select people to test and their test results come back."
},
{
"start": 1113,
"end": 1120,
"text": " And this one, this one's healthy, this one's healthy, this one's healthy, this one's not healthy."
},
{
"start": 1120,
"end": 1127,
"text": " And so we have four tests and out of the four, one was positive."
},
{
"start": 1127,
"end": 1131,
"text": " Can we work out a hypothesis test from that?"
},
{
"start": 1131,
"end": 1139,
"text": " So can we decide whether P star is probably much larger than P tilde or not?"
},
{
"start": 1139,
"end": 1143,
"text": " And the answer is yes, because this is a uniform sample."
},
{
"start": 1143,
"end": 1152,
"text": " You can work out using classic statistical tools whether or not you can reject an old hypothesis or not."
},
{
"start": 1152,
"end": 1160,
"text": " And they actually work this out and they do give a number here."
},
{
"start": 1160,
"end": 1162,
"text": " And that's this."
},
{
"start": 1162,
"end": 1173,
"text": " So they say if we test N, which is four, let's say four and a half times this quantity B divided by K."
},
{
"start": 1173,
"end": 1178,
"text": " So the number of beds divided by the upper bound on the current severe cases."
},
{
"start": 1178,
"end": 1184,
"text": " So we test four point five times this many people."
},
{
"start": 1184,
"end": 1196,
"text": " Then if we find at least 10 positive cases or more, then with a probability of 95 percent,"
},
{
"start": 1196,
"end": 1200,
"text": " we know that the risk based model is safe."
},
{
"start": 1200,
"end": 1205,
"text": " So the more, of course, the more infected people you find in this case, the better,"
},
{
"start": 1205,
"end": 1212,
"text": " because that means because the number of severe cases stays constant at any given time."
},
{
"start": 1212,
"end": 1214,
"text": " It means that a lot more people are infected."
},
{
"start": 1214,
"end": 1219,
"text": " That means the probability that you are going to become severe is lower."
},
{
"start": 1219,
"end": 1222,
"text": " That's why it says at least."
},
{
"start": 1222,
"end": 1226,
"text": " So again, you go out, you test N people and according to this formula,"
},
{
"start": 1226,
"end": 1229,
"text": " plug in the numbers here for your current situation."
},
{
"start": 1229,
"end": 1237,
"text": " If you find at least 10 people, then with a probability of at least 95 percent, you know that this model is safe."
},
{
"start": 1237,
"end": 1239,
"text": " Cool."
},
{
"start": 1239,
"end": 1248,
"text": " And this is done using, you know, classic statistical testing hypothesis testing literature."
},
{
"start": 1248,
"end": 1252,
"text": " So I think that is a pretty cool result."
},
{
"start": 1252,
"end": 1262,
"text": " But I do severely criticize the underlying assumption, which is that you can perfectly enforce this quarantine."
},
{
"start": 1262,
"end": 1268,
"text": " Of course, if you can't, it means that there is a direct correlation between the number of sick people"
},
{
"start": 1268,
"end": 1273,
"text": " in your low risk population, the number of sick people in your high risk population,"
},
{
"start": 1273,
"end": 1279,
"text": " which means that more of the high risk population are going to get infected as well,"
},
{
"start": 1279,
"end": 1288,
"text": " which again means that your number B of ICU beds is going to drop severely because they have a higher hospitalization rate,"
},
{
"start": 1288,
"end": 1295,
"text": " which makes your entire model that we developed down there less valid,"
},
{
"start": 1295,
"end": 1298,
"text": " because now this used to be a constant in the model."
},
{
"start": 1298,
"end": 1300,
"text": " It's now no longer a constant."
},
{
"start": 1300,
"end": 1301,
"text": " It's sinking."
},
{
"start": 1301,
"end": 1303,
"text": " And the worse it gets, the more it's sinking."
},
{
"start": 1303,
"end": 1316,
"text": " And yes, so that that may make what you initially thought was a safe model into a very non safe model very quickly."
},
{
"start": 1316,
"end": 1322,
"text": " And that doesn't include all the high risk people that are going to be in danger additionally"
},
{
"start": 1322,
"end": 1325,
"text": " because you can't enforce the quarantine."
},
{
"start": 1325,
"end": 1326,
"text": " All right."
},
{
"start": 1326,
"end": 1328,
"text": " So this was my take on that."
},
{
"start": 1328,
"end": 1330,
"text": " Take it for what it's worth."
},
{
"start": 1330,
"end": 1334,
"text": " And I wish you a healthy pandemic."
},
{
"start": 1334,
"end": 1353,
"text": " Bye bye."
}
] |
lqtlua-Ylts | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | State-of-Art-Reviewing: A Radical Proposal to Improve Scientific Publication | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"nlp",
"natural language processing",
"arxiv",
"attention",
"peer review",
"automate",
"distributed",
"scalable",
"neurips",
"score",
"objective"
] | Peer Review is outdated and ineffective. SOAR is a new and revolutionary way to distribute scientific reviewing and scale to the new age of faster, better and more significant research.
https://arxiv.org/abs/2003.14415
Abstract:
Peer review forms the backbone of modern scientific manuscript evaluation. But after two hundred and eighty-nine years of egalitarian service to the scientific community, does this protocol remain fit for purpose in 2020? In this work, we answer this question in the negative (strong reject, high confidence) and propose instead State-Of-the-Art Review (SOAR), a neoteric reviewing pipeline that serves as a 'plug-and-play' replacement for peer review. At the heart of our approach is an interpretation of the review process as a multi-objective, massively distributed and extremely-high-latency optimisation, which we scalarise and solve efficiently for PAC and CMT-optimal solutions. We make the following contributions: (1) We propose a highly scalable, fully automatic methodology for review, drawing inspiration from best-practices from premier computer vision and machine learning conferences; (2) We explore several instantiations of our approach and demonstrate that SOAR can be used to both review prints and pre-review pre-prints; (3) We wander listlessly in vain search of catharsis from our latest rounds of savage CVPR rejections.
Authors: Samuel Albanie, Jaime Thewmore, Robert McCraith, Joao F. Henriques
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi everyone. Today we're looking at state-of-the-art reviewing a radical proposal to improve scientific publication. This has been on my mind for a while. The review process for modern science, especially machine learning, is just broken. I've spoken numerous times about the fact that we need to replace it with a better system. Samuel Albany at Al have actually come up with such a system and we're going to explore it today. I am a big fan of this work and I'm 100% on board with this. They basically say peer review forms the backbone of modern scientific manuscript evaluation. If you don't know what peer review is in machine learning right now, if you have some genius idea, so here is your idea, that's a light bulb by the way, you write it up into an eight-page PDF. Yes, it must be a PDF and yes, it must be eight pages. You submit it to a conference, which is some kind of... So you submit it to be accepted in a conference proceeding. And if conference organizers, of course, they're just a bunch of people, they can't review these 1,000 million applications that come by themselves. So what they do is they recruit experts, which are called peers. So peers are other people. These are called peers and they have usually written up their own papers and they can critique each other's paper and they decide what gets accepted and what doesn't. Now, I've spoken numerous times of how this is super noisy right now. They're way, way, they're not enough peers, they're not experienced enough. So whether or not your particular idea gets accepted is extremely dependent on probability, on a coin flip usually. And it's just overloaded and just makes no sense. And they ask the same question, is this fit for purpose in 2020? And we need to replace it and I support. So they, you can already see they kind of want to automate this away with their state of the art review score and the score will be an out of 10 score that can be integrated into something like archive. And, you know, display right away. So they have some requirements to this new system. What should it be done? It should have the ability to scale. Very important. Our current review system doesn't have this, right? Our current review system relies on other humans reviewing your paper. And that means that the reviewers need to scale with the amount of papers, which just isn't the case currently. So a new review system must have the ability to scale. Right. And then, you know, automating the reviews away or scaling it up in a distributed fashion does this. Speed. Yes, because right now, if I submit my manuscript for review, it takes them months to review it. And our science progress is faster than that. So a speedy, more speedy version of peer review is definitely required. And then consistency. And this is the most shocking part, right? There is a the grand 2014 NURIPS experiment, which concluded that 57 percent of papers accepted by one committee were rejected by another committee and vice versa. Reviewing the exact same paper, different committees came to completely different conclusions to an astounding degree. So basically, you're flipping a coin of whether or not your paper gets accepted or not. And I think this is just not acceptable. And so they propose these three things, speed, scale, consistency, and their new method certainly has this. Now, let's jump down here where they introduce this state of the art reviewing SOAR. 
So they say, OK, the quality of a scientific work can be judged along three axes, efficacy, significance and novelty. So there are these three pillars, right? Efficacy, which means is is kind of how how effective is your work in achieving the goal in machine learning? That's usually to train some good classifier or something like this. Then the other one, sorry, is significance, right? Significance is how relevant is what you've done to the to the field. Right. And the third one is novelty. So, you know, in your scientific work should be an original contribution to the knowledge of mankind and therefore it should be novel. Right. So the more of these three things you have, of course, the higher your score should be. And here in the middle is where the highest scores should be. So imagine this is kind of a landscape. And so you want to grade papers along these three axes. But they have a pretty good method of of of assessing these in an automated fashion. So, first of all, assessing efficacy, efficacy, they say, is best assessed by determining if the proposed method achieves a new state of the art. I think that's not really I don't think you can really doubt this. I mean, this this is this is kind of the gold standard of of whether your paper is effective, is whether or not it achieves state of the art. I mean, I it might be a bit of a controversial opinion, but if a paper doesn't achieve a state of the art, it's you know, why? Why do you even care? Like no one cares. So from they say from an implementation perspective, they can they can use they can kind of abuse a fact of the current research environment is that you don't actually have to review this yourself. But the authors themselves can be relied upon to state this repeatedly in the text. Right. And this this is important. So the authors will state that they have state of the art many, many times in the text if they have actually achieved it. If they haven't achieved it or not so sure about it, they probably won't repeat it as many times. But this is can be can kind of abuse now to distribute it. Basically, you don't imagine now these these all of these reviewers. They don't they don't have to do this work anymore. They can just distribute to all the authors of their own papers, right? Because the authors in the text by the way, the text is structures is kind of an NLP approach to reviewing kind of NLP mixed with game theory. Right. So the other authors themselves if they have state of the art, you have to do some stemming and stuff, but they will put that into the text a lot. So it's a bit controversial, but the the authors here propose to simply count the number of word occurrences of state of the art case and sensitive very important in the text, right? It stands to reason that a higher state of the art count is preferable. Of course. All right. So the second thing so this might be a bit controversial. The second thing significance and they now make the claim significance is measured by efficacy. So they simply the efficacy term. So if your paper is effective at achieving its goal, you can also say it's significant for the community because again, significance should like if you have state of the art, then your paper is significant. If you do not have state of the art, then your paper is obviously not significant because why should it matter if you don't have state of the art in a given task? It's useless. All right. So we weigh it twice. That's pretty good. And then novelty now here they take much of the same approach. 
They say the authors probably state this. So how much they use the word novel in their manuscript will dictate. So here they say, okay, they novel. Wow, this is failing me. Hello. How much they use the word novel in the text will probably be an indication. I don't think so though. They do do the smart thing of they include the works. They include the related work section from this. Sorry, they exclude the related work section. They say we make the key observation that individuals best play to make the judgment are the authors themselves since they have likely read at least one of the works cited in the bibliography. I don't agree here. I think a better method would be to simply count the number of references and the lower the amount of references to related work, the higher the novelty. Because if you think, if these are current papers and your paper is here, you'll have a lot of related work. So it's not as novel. If you're way out here, you'll have maybe one or two related works. So it's way more novel if you have less references. So this would be my criticism of this. So this novelty thing here, I think this term should be replaced by a graph centrality measure or simply a count of how many references you have would be enough. All right, so they define their score. Their score, as we saw, is the SOTA term weighted twice. A geometric mean between that and the novelty term, which I've criticized. They add the suffix out of 10 because out of 10 score is pretty interpretable. So they divide by 10 here. So yeah, they say that here. We attach a suffix out of 10 because that's easy to interpret. And as you saw in the kind of archive implementation right here, sorry, this will be then easy to integrate right here. So they even give code, right? They give code in the paper themselves of how to implement this. It's pretty easy. And I think, yeah, even though it's quite a short paper, it's thorough and it's a good new method. And I think this could revolutionize publishing. And they even, so as a bit of a bonus, they give the official pronunciation of state of the art reviewing, which is something like state of the art reviewing pretty smooth. And yeah, with that, I hope you enjoyed this. And if the authors could just be a little more subtle next time, that would be great. And I guess you'd have to go. Yeah, nothing more. Bye. | [
{
"start": 0,
"end": 8.6,
"text": " Hi everyone. Today we're looking at state-of-the-art reviewing a radical proposal to improve scientific publication."
},
{
"start": 8.6,
"end": 17.8,
"text": " This has been on my mind for a while. The review process for modern science, especially machine learning, is just broken."
},
{
"start": 17.8,
"end": 23.400000000000002,
"text": " I've spoken numerous times about the fact that we need to replace it with a better system."
},
{
"start": 23.4,
"end": 30,
"text": " Samuel Albany at Al have actually come up with such a system and we're going to explore it today."
},
{
"start": 30,
"end": 35.6,
"text": " I am a big fan of this work and I'm 100% on board with this."
},
{
"start": 35.6,
"end": 42.4,
"text": " They basically say peer review forms the backbone of modern scientific manuscript evaluation."
},
{
"start": 42.4,
"end": 47.8,
"text": " If you don't know what peer review is in machine learning right now, if you have some genius idea,"
},
{
"start": 47.8,
"end": 54.199999999999996,
"text": " so here is your idea, that's a light bulb by the way, you write it up into an eight-page PDF."
},
{
"start": 54.199999999999996,
"end": 64.19999999999999,
"text": " Yes, it must be a PDF and yes, it must be eight pages. You submit it to a conference, which is some kind of..."
},
{
"start": 64.19999999999999,
"end": 68.6,
"text": " So you submit it to be accepted in a conference proceeding."
},
{
"start": 68.6,
"end": 79.39999999999999,
"text": " And if conference organizers, of course, they're just a bunch of people, they can't review these 1,000 million applications that come by themselves."
},
{
"start": 79.39999999999999,
"end": 87,
"text": " So what they do is they recruit experts, which are called peers. So peers are other people."
},
{
"start": 87,
"end": 95,
"text": " These are called peers and they have usually written up their own papers and they can critique each other's paper"
},
{
"start": 95,
"end": 102.2,
"text": " and they decide what gets accepted and what doesn't. Now, I've spoken numerous times of how this is super noisy right now."
},
{
"start": 102.2,
"end": 106.4,
"text": " They're way, way, they're not enough peers, they're not experienced enough."
},
{
"start": 106.4,
"end": 117.6,
"text": " So whether or not your particular idea gets accepted is extremely dependent on probability, on a coin flip usually."
},
{
"start": 117.6,
"end": 126,
"text": " And it's just overloaded and just makes no sense. And they ask the same question, is this fit for purpose in 2020?"
},
{
"start": 126,
"end": 136.79999999999998,
"text": " And we need to replace it and I support. So they, you can already see they kind of want to automate this away"
},
{
"start": 136.79999999999998,
"end": 145.4,
"text": " with their state of the art review score and the score will be an out of 10 score that can be integrated into something like archive."
},
{
"start": 145.4,
"end": 156.4,
"text": " And, you know, display right away. So they have some requirements to this new system."
},
{
"start": 156.4,
"end": 164.20000000000002,
"text": " What should it be done? It should have the ability to scale. Very important. Our current review system doesn't have this, right?"
},
{
"start": 164.20000000000002,
"end": 171.8,
"text": " Our current review system relies on other humans reviewing your paper."
},
{
"start": 171.8,
"end": 178,
"text": " And that means that the reviewers need to scale with the amount of papers, which just isn't the case currently."
},
{
"start": 178,
"end": 182.60000000000002,
"text": " So a new review system must have the ability to scale. Right."
},
{
"start": 182.60000000000002,
"end": 190,
"text": " And then, you know, automating the reviews away or scaling it up in a distributed fashion does this."
},
{
"start": 190,
"end": 197.4,
"text": " Speed. Yes, because right now, if I submit my manuscript for review, it takes them months to review it."
},
{
"start": 197.4,
"end": 207.6,
"text": " And our science progress is faster than that. So a speedy, more speedy version of peer review is definitely required."
},
{
"start": 207.6,
"end": 211,
"text": " And then consistency. And this is the most shocking part, right?"
},
{
"start": 211,
"end": 219,
"text": " There is a the grand 2014 NURIPS experiment, which concluded that"
},
{
"start": 219,
"end": 229,
"text": " 57 percent of papers accepted by one committee were rejected by another committee and vice versa. Reviewing the exact same paper,"
},
{
"start": 229,
"end": 233.8,
"text": " different committees came to completely different conclusions to an astounding degree."
},
{
"start": 233.8,
"end": 239.6,
"text": " So basically, you're flipping a coin of whether or not your paper gets accepted or not."
},
{
"start": 239.6,
"end": 243.2,
"text": " And I think this is just not acceptable."
},
{
"start": 243.2,
"end": 251.39999999999998,
"text": " And so they propose these three things, speed, scale, consistency, and their new method certainly has this."
},
{
"start": 251.39999999999998,
"end": 260.8,
"text": " Now, let's jump down here where they introduce this state of the art reviewing SOAR."
},
{
"start": 260.8,
"end": 270.8,
"text": " So they say, OK, the quality of a scientific work can be judged along three axes, efficacy, significance and novelty."
},
{
"start": 270.8,
"end": 275.8,
"text": " So there are these three pillars, right?"
},
{
"start": 275.8,
"end": 286,
"text": " Efficacy, which means is is kind of how how effective is your work in achieving the goal in machine learning?"
},
{
"start": 286,
"end": 292.2,
"text": " That's usually to train some good classifier or something like this."
},
{
"start": 292.2,
"end": 299.8,
"text": " Then the other one, sorry, is significance, right?"
},
{
"start": 299.8,
"end": 308.40000000000003,
"text": " Significance is how relevant is what you've done to the to the field."
},
{
"start": 308.40000000000003,
"end": 314.2,
"text": " Right. And the third one is novelty."
},
{
"start": 314.2,
"end": 323,
"text": " So, you know, in your scientific work should be an original contribution to the knowledge of mankind and therefore it should be novel."
},
{
"start": 323,
"end": 327.8,
"text": " Right. So the more of these three things you have, of course,"
},
{
"start": 327.8,
"end": 333.6,
"text": " the higher your score should be. And here in the middle is where the highest scores should be."
},
{
"start": 333.6,
"end": 340.40000000000003,
"text": " So imagine this is kind of a landscape. And so you want to grade papers along these three axes."
},
{
"start": 340.40000000000003,
"end": 348.7,
"text": " But they have a pretty good method of of of assessing these in an automated fashion."
},
{
"start": 348.7,
"end": 355.5,
"text": " So, first of all, assessing efficacy, efficacy, they say,"
},
{
"start": 355.5,
"end": 361.9,
"text": " is best assessed by determining if the proposed method achieves a new state of the art."
},
{
"start": 361.9,
"end": 367,
"text": " I think that's not really I don't think you can really doubt this."
},
{
"start": 367,
"end": 373.9,
"text": " I mean, this this is this is kind of the gold standard of of whether your paper is effective,"
},
{
"start": 373.9,
"end": 376.3,
"text": " is whether or not it achieves state of the art."
},
{
"start": 376.3,
"end": 381,
"text": " I mean, I it might be a bit of a controversial opinion,"
},
{
"start": 381,
"end": 385.9,
"text": " but if a paper doesn't achieve a state of the art, it's you know, why?"
},
{
"start": 385.9,
"end": 389.6,
"text": " Why do you even care? Like no one cares."
},
{
"start": 389.6,
"end": 393.1,
"text": " So from they say from an implementation perspective,"
},
{
"start": 393.1,
"end": 402.3,
"text": " they can they can use they can kind of abuse a fact of the current research environment is that you don't actually have to review this yourself."
},
{
"start": 402.3,
"end": 409.5,
"text": " But the authors themselves can be relied upon to state this repeatedly in the text."
},
{
"start": 409.5,
"end": 412.1,
"text": " Right. And this this is important."
},
{
"start": 412.1,
"end": 416.2,
"text": " So the authors will state that they have state of the art many,"
},
{
"start": 416.2,
"end": 419.8,
"text": " many times in the text if they have actually achieved it."
},
{
"start": 419.8,
"end": 422.2,
"text": " If they haven't achieved it or not so sure about it,"
},
{
"start": 422.2,
"end": 424.6,
"text": " they probably won't repeat it as many times."
},
{
"start": 424.6,
"end": 431.3,
"text": " But this is can be can kind of abuse now to distribute it."
},
{
"start": 431.3,
"end": 436.8,
"text": " Basically, you don't imagine now these these all of these reviewers."
},
{
"start": 436.8,
"end": 439.7,
"text": " They don't they don't have to do this work anymore."
},
{
"start": 439.7,
"end": 443.90000000000003,
"text": " They can just distribute to all the authors of their own papers,"
},
{
"start": 443.90000000000003,
"end": 448.90000000000003,
"text": " right? Because the authors in the text by the way,"
},
{
"start": 448.90000000000003,
"end": 456.8,
"text": " the text is structures is kind of an NLP approach to reviewing kind of NLP mixed with game theory."
},
{
"start": 456.8,
"end": 461.1,
"text": " Right. So the other authors themselves if they have state of the art,"
},
{
"start": 461.1,
"end": 466.7,
"text": " you have to do some stemming and stuff, but they will put that into the text a lot."
},
{
"start": 466.7,
"end": 468.8,
"text": " So it's a bit controversial,"
},
{
"start": 468.8,
"end": 478.7,
"text": " but the the authors here propose to simply count the number of word occurrences of state of the art case"
},
{
"start": 478.7,
"end": 483,
"text": " and sensitive very important in the text, right?"
},
{
"start": 483,
"end": 485.9,
"text": " It stands to reason that a higher state of the art count is preferable."
},
{
"start": 485.9,
"end": 489.09999999999997,
"text": " Of course."
},
{
"start": 489.09999999999997,
"end": 489.8,
"text": " All right."
},
{
"start": 489.8,
"end": 492.59999999999997,
"text": " So the second thing so this might be a bit controversial."
},
{
"start": 492.6,
"end": 500.6,
"text": " The second thing significance and they now make the claim significance is measured by efficacy."
},
{
"start": 500.6,
"end": 503.70000000000005,
"text": " So they simply the efficacy term."
},
{
"start": 503.70000000000005,
"end": 506.70000000000005,
"text": " So if your paper is effective at achieving its goal,"
},
{
"start": 506.70000000000005,
"end": 510.90000000000003,
"text": " you can also say it's significant for the community because again,"
},
{
"start": 510.90000000000003,
"end": 516.8000000000001,
"text": " significance should like if you have state of the art,"
},
{
"start": 516.8000000000001,
"end": 519.3000000000001,
"text": " then your paper is significant."
},
{
"start": 519.3,
"end": 523.5999999999999,
"text": " If you do not have state of the art, then your paper is obviously not significant"
},
{
"start": 523.5999999999999,
"end": 530.4,
"text": " because why should it matter if you don't have state of the art in a given task?"
},
{
"start": 530.4,
"end": 532.1999999999999,
"text": " It's useless."
},
{
"start": 532.1999999999999,
"end": 532.5,
"text": " All right."
},
{
"start": 532.5,
"end": 534.3,
"text": " So we weigh it twice."
},
{
"start": 534.3,
"end": 535.5,
"text": " That's pretty good."
},
{
"start": 535.5,
"end": 541.5,
"text": " And then novelty now here they take much of the same approach."
},
{
"start": 541.5,
"end": 543.5999999999999,
"text": " They say the authors probably state this."
},
{
"start": 543.5999999999999,
"end": 549.0999999999999,
"text": " So how much they use the word novel in their manuscript will dictate."
},
{
"start": 549.1,
"end": 554.6,
"text": " So here they say, okay, they novel."
},
{
"start": 554.6,
"end": 557,
"text": " Wow, this is failing me."
},
{
"start": 557,
"end": 557.9,
"text": " Hello."
},
{
"start": 557.9,
"end": 563.4,
"text": " How much they use the word novel in the text will probably be an indication."
},
{
"start": 563.4,
"end": 565.2,
"text": " I don't think so though."
},
{
"start": 565.2,
"end": 574.9,
"text": " They do do the smart thing of they include the works."
},
{
"start": 574.9,
"end": 579.6,
"text": " They include the related work section from this."
},
{
"start": 579.6,
"end": 584.1999999999999,
"text": " Sorry, they exclude the related work section."
},
{
"start": 584.1999999999999,
"end": 587.5,
"text": " They say we make the key observation that individuals best play to make the judgment"
},
{
"start": 587.5,
"end": 591.3,
"text": " are the authors themselves since they have likely read at least one of the works"
},
{
"start": 591.3,
"end": 593.9,
"text": " cited in the bibliography."
},
{
"start": 593.9,
"end": 595,
"text": " I don't agree here."
},
{
"start": 595,
"end": 601.5,
"text": " I think a better method would be to simply count the number of references"
},
{
"start": 601.5,
"end": 607.7,
"text": " and the lower the amount of references to related work, the higher the novelty."
},
{
"start": 607.7,
"end": 617.7,
"text": " Because if you think, if these are current papers and your paper is here,"
},
{
"start": 617.7,
"end": 620.2,
"text": " you'll have a lot of related work."
},
{
"start": 620.2,
"end": 622.3,
"text": " So it's not as novel."
},
{
"start": 622.3,
"end": 627.2,
"text": " If you're way out here, you'll have maybe one or two related works."
},
{
"start": 627.2,
"end": 631.1,
"text": " So it's way more novel if you have less references."
},
{
"start": 631.1,
"end": 634,
"text": " So this would be my criticism of this."
},
{
"start": 634,
"end": 640,
"text": " So this novelty thing here, I think this term should be replaced by a graph"
},
{
"start": 640,
"end": 646.6,
"text": " centrality measure or simply a count of how many references you have would be enough."
},
{
"start": 646.6,
"end": 649.4,
"text": " All right, so they define their score."
},
{
"start": 649.4,
"end": 653.6,
"text": " Their score, as we saw, is the SOTA term weighted twice."
},
{
"start": 653.6,
"end": 661,
"text": " A geometric mean between that and the novelty term, which I've criticized."
},
{
"start": 661,
"end": 669.2,
"text": " They add the suffix out of 10 because out of 10 score is pretty interpretable."
},
{
"start": 669.2,
"end": 673.6,
"text": " So they divide by 10 here."
},
{
"start": 673.6,
"end": 677,
"text": " So yeah, they say that here."
},
{
"start": 677,
"end": 682.8,
"text": " We attach a suffix out of 10 because that's easy to interpret."
},
{
"start": 682.8,
"end": 689.3,
"text": " And as you saw in the kind of archive implementation right here,"
},
{
"start": 689.3,
"end": 694.5999999999999,
"text": " sorry, this will be then easy to integrate right here."
},
{
"start": 694.5999999999999,
"end": 700.0999999999999,
"text": " So they even give code, right?"
},
{
"start": 700.0999999999999,
"end": 705,
"text": " They give code in the paper themselves of how to implement this."
},
{
"start": 705,
"end": 708.8,
"text": " It's pretty easy."
},
{
"start": 708.8,
"end": 714.0999999999999,
"text": " And I think, yeah, even though it's quite a short paper,"
},
{
"start": 714.0999999999999,
"end": 719.1999999999999,
"text": " it's thorough and it's a good new method."
},
{
"start": 719.2,
"end": 722.6,
"text": " And I think this could revolutionize publishing."
},
{
"start": 722.6,
"end": 725,
"text": " And they even, so as a bit of a bonus,"
},
{
"start": 725,
"end": 728.9000000000001,
"text": " they give the official pronunciation of state of the art reviewing,"
},
{
"start": 728.9000000000001,
"end": 734.8000000000001,
"text": " which is something like state of the art reviewing pretty smooth."
},
{
"start": 734.8000000000001,
"end": 739.8000000000001,
"text": " And yeah, with that, I hope you enjoyed this."
},
{
"start": 739.8000000000001,
"end": 744.4000000000001,
"text": " And if the authors could just be a little more subtle next time,"
},
{
"start": 744.4000000000001,
"end": 747.2,
"text": " that would be great."
},
{
"start": 747.2,
"end": 756.6,
"text": " And I guess you'd have to go."
},
{
"start": 756.6,
"end": 758.4000000000001,
"text": " Yeah, nothing more."
},
{
"start": 758.4,
"end": 784.4,
"text": " Bye."
}
] |
U3zmekzQ8WQ | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Agent57: Outperforming the Atari Human Benchmark | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"arxiv",
"google",
"rnn",
"recurrent",
"deepmind",
"r2d2",
"ngu",
"reinforcement learning",
"deep q learning",
"replay buffer",
"exploration",
"exploitation",
"tradeoff",
"policy",
"lstm",
"atari"
] | DeepMind's Agent57 is the first RL agent to outperform humans in all 57 Atari benchmark games. It extends previous algorithms like Never Give Up and R2D2 by meta-learning the exploration-exploitation tradeoff controls.
https://arxiv.org/abs/2003.13350
https://deepmind.com/blog/article/Agent57-Outperforming-the-human-Atari-benchmark
Abstract:
Atari games have been a long-standing benchmark in the reinforcement learning (RL) community for the past decade. This benchmark was proposed to test general competency of RL algorithms. Previous work has achieved good average performance by doing outstandingly well on many games of the set, but very poorly in several of the most challenging games. We propose Agent57, the first deep RL agent that outperforms the standard human benchmark on all 57 Atari games. To achieve this result, we train a neural network which parameterizes a family of policies ranging from very exploratory to purely exploitative. We propose an adaptive mechanism to choose which policy to prioritize throughout the training process. Additionally, we utilize a novel parameterization of the architecture that allows for more consistent and stable learning.
Authors: Adrià Puigdomènech Badia, Bilal Piot, Steven Kapturowski, Pablo Sprechmann, Alex Vitvitskyi, Daniel Guo, Charles Blundell
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there, you're looking at Solaris, which is a game in the Atari benchmark, and it has been one of the hardest games for reinforcement learning agents to solve. What you're seeing is Agent 57, which is a new agent by DeepMind that is the first one to beat all of the 57 games in the Atari suite to a human or superhuman performance. So some of these games have been pretty easy for RL agents, but some of them, look at this one here, have been pretty hard, mainly because of the reward structure. Now you see on the top edge, the reward, it's not going up for a long time, and this kind of games where the reward doesn't go up for a long time is very hard for RL agents. So Agent 57 builds on a number of previous improvements to the original DeepQ networks of DeepMind, and today we'll look into this. So it's called Agent 57, as I said, because it beats all of these 57 games. They're quite diverse, and it's a cool thing that a single system can beat it. So they go into this. This is a printout of the website, so I can scribble on it. And this here, it's been cut off, but it should say DQN here. DQN from 2015. All right, so this DQN paper of 2015 was kind of the original paper that popularized this Atari benchmark and introduced neural networks to reinforcement learning, basically, that made it work. Since then, there have been a number of improvements. So maybe we'll just go into what DeepQ learning is. So in reinforcement learning, usually, you have an agent here, and you have an environment over here, right? And the environment will give you an observation. Now, the observation in our case would be something like the frame of a game, right? And you're here, you're a little rocket, and there is a bunch of meteors, right? And then the agent needs to somehow give back an action. So an action, and the actions in the Atari benchmark are always defined. So you can, in Atari, you used to have this kind of joystick thing. You can put it up, down, left, right, or you can put it upright, up, left, and so on. And also you have a button, I think, one or two buttons. I don't actually remember, but you can press at least one button. So these are the actions. Let's say there is something like 20 different actions. So all of the directions here, and then you can always press or not press a button with it. So you have to give, you send back this action here. You say, I want to put the joystick up, and I want to press the button at the same time. And then the environment will give you back a, it will say, okay, we'll give you back a new observation, which would be the next frame of the game. You've pressed up, so your little rocket is a bit more forward. You've pressed the button, so you fired a shot, and the meteors are still here. And it will also give you back a reward. So the reward, different games give different rewards. For example, in Pac-Man, every time your Pac-Man eats a little one of these dots, you get a reward. But in other games, most famously games like Montezuma's Revenge, you're in this room, and there are these platforms and ladders and stuff, and you're here, and there are opponents rolling around, and there's a door over here. You need to go down, jump over here, get up, get some key, and then go to the door. And only then will you get a reward. So games vary in many ways in how you get this reward. So that's kind of the intrinsic problem here. So deep queue learning is the following. We have a neural network taking in the observation. 
So we have a neural network, let's designate it as this. And the observation goes in here, right? This is O, the observation goes in here. And also, the action goes in here. Now let's call this AI, because we have different actions. And O, observation at step T. And it will give you a queue value. So the queue value for the observation at time T and action I. Now you do this for every single action. So you put observation with action A, J in the same network, right? You get an output that is the queue value for observation, the same observation with this different action. You do this for every action. And wherever the queue value is the highest, right? Wherever that's the highest, that's the action you go with. So what you have to do is you have to train this neural network to predict the queue value as accurate as possible. And the queue value is basically the reward that you expect from now until the end of the episode by performing this action in this situation, right? That's queue learning. Simply predicting if I do action I right now, how much reward am I going to get from now until the end of the episode, right? That's basically it. That's deep queue and deep queue learning simply because you have a neural network doing the learning. So that was deep queue networks and they work pretty well, but they don't work for these long time horizons because you always just learn. You just see one observation, right? And you kind of learn one step at a time and you rely on these queue values propagating through from your experience. It doesn't work very well for these long credit assignments. Now a significant improvement upon that is this R2D2 algorithm that incorporated LSTMs or GRUs, which are recurrent neural networks. So not only does your observation go into the neural network, right? Now your history of observations, so what happened before, so not only the current game state, right? But here you have the observation from step one, the action you did at that step, then the observation time two, the action you did at time two and so on. They now all, so this is encoded and this is encoded and then you have a recurrent neural network that incorporates all of these things that happened previously, right? To your current representation. So now not only does the agent see what is happening right now, it also gets the information of what happened previously, what did it do previously and it can also now back propagate through these things and kind of learn a longer range credit assignment. Credit assignment means it gets to figure out which actions had actually an influence on the final reward. If you incorporate the history, right, you can have a direct gradient flow across that history. So notably these LSTMs or GRUs, you can let them, you know, compute over maybe 10 or 100 steps, right? And then you get a pretty good idea of which of these actions within those 100 steps led to which rewards. And the other thing on R2D2 is of course it is now more distributed. So this was already here improvements to DQN, but the R2D2 agent is also distributed, meaning that you have like a central instance. So this is now engineering, right? You have a central instance that is called the learner. And the learner has the main weights, which I'm going to designate with theta here. And it just takes in experience from all of these workers. So there's worker one, worker two, worker three, four, and so on. And these, they will all just run episodes. 
They will all do work, work, work, work, work, work, work, work, work independently of each other and then send back their experience to the learner. And every now and then the learner sinks out the weights of the neural networks to the workers. So that's kind of distributed RL in this sense. You have a central learner, then you have many, many workers doing the actual interaction with the environment. So one of the main pitfalls of R2D2 is still it has a poor exploration, exploitation strategy, which I believe it is still just kind of epsilon greedy. What does it mean? So in order to understand this, maybe consider again our screen here, right? So let's say you're here with your space ship, right? And there are, there's a meteor coming right here and one right here. And there is a gold coin right here. Let's make this gold, right? So let's say you get a reward for collecting the coin, but you also get a reward for shooting the meteors, right? So what happens if you shoot right now? So if you shoot, then let's say you shoot and this meteor explodes, right? So you get a reward. Yeah. So you get one reward, but then the meteor right behind it will hit you, right? It's coming toward you. You'll have no way, no time to get out of the way. So one reward and then death, right? Let's make a little arrow here, right? So in total you get one reward. Now what happens instead if you move to the right? So move, right? So the next, in the next frame, the meteor will fly past you. You are over here, right? But the gold coin is here. Now this has given you so far zero reward, right? Oops. This has given you zero reward, but then in the next frame, you know, these meteors have passed now and you are going to get that gold coin. And that gives you one reward and no death, right? So you can technically go on here and maybe you'll get five more rewards down the line. So the, the, this is, here's the exploration exploitation dilemma. If an agent has for some reason learned that the shooting action in this situation will give it a one reward and the move action will give it zero reward, but has not learned to look past this. So this is kind of nebulous here. It has only experienced, it has only experienced one frame of here. Yeah. It has only experienced one frame of experience. It will say, wait a minute, shoot here appears to be like really good. It gives me one reward and move gives me zero reward. So from now on I'll just always do shoot, right? Shoot, shoot, shoot. Now what you would like to do. So this is called exploitation, right? Exploitation. It has learned something that gives it a reward. So it will just do that over and over again. Whereas here you could say, ah, I, I might go this way, even though it's zero word, because I can hope, right? I don't know yet, but I can hope that I will get a more reward down here. This is exploration. And the question in, in reinforcement learning is always how to trade off these two, right? And ideally you would want your agent to collect maximum reward that speaks for exploitation of what it has already learned. But also you never want to discard the possibility that, um, down the line of things that you don't yet know, there might be even more reward. And that speaks for exploration. I'm just, this both are abbreviated, same exploit, explore. This was dumb. Um, so in the original deep QN formulation, and I believe also in R2D2, this is done with Epsilon greedy, um, which is surprisingly performing well. 
Uh, so in Epsilon greedy, you simply say, I'm going to have a constant Epsilon. This is E Epsilon. Um, this is maybe 5% or something. I'm going to simply do something at random and the other one minus Epsilon. I'm just going to go with the, um, with the thing I have already learned. And this performs pretty well, but you might imagine that there is something smarter to do. So never give up. Um, these, this algorithm, it kind of goes into this, um, exploration, uh, mode where it tries to get, get to smarter ways to do exploration. And the keywords here are things like intrinsic motivation. So intrinsic motivation and curiosity refer to the fact that, um, it is so in addition to the reward you get from the environment here, right? This, this reward right here, you can also interject at this point and say, ah, I'm going to give some R prime, some reward of myself, right? To to kind of encourage some behavior in the agent. And this here we call intrinsic intrinsic. Um, so that means you add to the reward of the environment, you add some reward of your own that has nothing to do with the environment or not much, um, but just encourages certain behavior in the agent that is now also trying to maximize this intrinsic reward. Um, and in curiosity and intrinsic motivation formulations, usually you are rewarded for novelty. Novelty, which means the agent is rewarded for finding things that it has not yet seen. Um, so you, in this situation over here, you might see why this encourages the agent to go this route here because it says, wait a minute, there's a bunch of stuff like here. I just die, right? But there is a bunch of stuff I haven't seen yet down here. So I might want to go explore that and we give it extra intrinsic reward or prime for seeing things it hasn't seen yet. So it will learn if I do things that I have never done, I will get this sweet intrinsic reward and then it will go explore. Now, of course it's a, it's a big engineering question of how exactly to set this intrinsic reward. And there are many, many different formulations of that, um, that fall under this term of, let's say curiosity or something like this. Um, nevertheless, this never give up has, has, um, improved over R2D2, uh, using ideas like that. And now agent 57 improves again. Now how does agent 57 improve again? And it is mainly, um, it is mainly in, in the, in the, in this, what I just said. So how exactly do you apply this intrinsic reward? How exactly do you navigate the exploration, exploitation trade off? That's where agent 57 comes in because what they've realized this, that for these different Atari games right here, uh, some are very easy. Some you don't need much exploration. Some you need a lot. Some you need it over a large time scale and simply one agent, um, one never give up agent with the same settings of this curiosity of how long it looks into the future is not going to solve all the games. So agent 57 learns, um, how to, to modulate this exploration, exploitation trade off. So let's jump into the paper a bit more. I encourage you to read the blog post that is quite thorough and, um, the paper is a bit more technical. Sorry. Let me switch over. This is the paper agent 57 up forming the Atari human benchmark by Google deep mind. And um, here they say improvements to end you to never give up. So the first improvement they do is, um, so we've, we've already talked about how this is classic Q learning, right? 
So you're trying to learn this function, uh, that gives you the Q value of an action and the state. Um, now since we're going to deal with intrinsic reward in addition to extrinsic reward, uh, it makes sense. That's what they argue to split the Q learning function into two different parts. One part that learns the extrinsic reward and one part that learns the intrinsic reward. Right. And then you have a parameter beta, um, in front of it. Now beta in this case is the trade off. How much do you want to value this intrinsic reward? Right. Um, and here we see our first lever on the exploitation, exploration trade off. If an agent gets lots of reward for, uh, for exploring, right, it might never exploit and exploiting might actually be a good, a good option in the game that you're in. So you might want to set beta small, but in other games you might want to encourage exploration to the max and therefore set beta very high. Um, all right, another, uh, constant along with that, that they modulate is the, is the, um, the discount factor. So which is called this gamma here. So you already see here this beta we've already seen and they also modulate this gamma. Now what does gamma do, um, if I have my state and action, we already said, so here is an observation one and I do action one and that gives me observation two and I do action two and that gives me observation three and I do action three and each time I get a reward, right? An extrinsic reward and an intrinsic reward. So reward one, reward two, reward three and so on. Now usually, um, an RL agent will look at these rewards and let's say you are here, you are at observation one and you're trying to estimate your future rewards. Um, what will be most important will be the reward that you're getting right now, right? Because that's the most sure because, um, this reward here that you might get two steps from now, you know, a lot of things could happen, right? You are pretty sure that if you do action one, you're going to get to this state, but you're not entirely sure. You could also get to another state and therefore you had to do another action and therefore this reward here could be something different. Um, so these algorithms are, are having what's known as a discount factor. That means the value of a state, uh, of a state S is going to be the sum from time, uh, zero, let's say K equals T that's stated time T up until some horizon. I think they call it H in the paper. You could also think of this as infinity of the reward at step K, but discounted by this factor. Um, and you raise it to the, to the power of K usually or T T minus, uh, yeah, K minus T. So basically means that you, this is if T is one, so it's the reward at the at this time step plus let's say gamma here is 0.99, right? Plus 0.99 the reward at the next time step plus 0.99 squared, uh, the reward of that after that. And you see that the more, the more into the future you look, the less, um, value these rewards have. So little bars here indicate that you're going to value future rewards less and less. This is called a discount factor right here. And it's, um, how to set it is very important because if you set it very low, let's say you set it to 0.1, that means all that you want to do is maximize the rewards that you're getting in the likely the next and next, next step. Uh, you're not really looking into the future. Um, this is very good for games that give you immediate reward for good actions. But if you, uh, if you set it very high, let's say 0.999, right? 
That means a reward a hundred steps from now doesn't, you know, is, is almost the same to you as a reward one step from now. And this is very valuable for games that don't give you a reward immediately or that kind of trying to trick you as we saw before. Like if you shoot the meteor now, then you get one reward, but if you don't and pass on the opportunity, you might get much more later. So the modulation of the discount factor is also very important, uh, to set and really depends on the game. So we have two quantities here that really depend on what kind of game it is. And also they argue, um, it, it also depends where in the learning process you are. So if you're at the very beginning of the learning process, you might want to have a very high goal, the high intrinsic reward to go explore. And you want, might want to get, have a very low discount factor in order to learn a good immediate value function. But then as time goes on, you might want to bring down the intrinsic reward because now you really want actually, because your end goal is to maximize the extrinsic reward and you want to up this discount factor to look more into the future. Now that you have already learned the immediate values very well. So if I had to summarize and simplify what agent 57 does is it builds a neural network that adjusts these two quantities across the training, right? Um, so, so it adjusts the beta and gamma across the training and it does this in a so-called bandit setting. Now there is no real good picture in this paper that I can show you. So I'm just going to have to, to draw. So you have an agent, right? It interacts with this environment here and it always gets these rewards. Now what you have here is a meta controller, right? So the agents, it has two parameters. It has this beta and this gamma and the meta controller now observes this. It observes this interaction and it outputs values for these two constants and the does this dynamically as the training progresses, right? So the agent, the agent will, will kind of learn, the agent will change its behavior over time. Now this is actually implemented in a slightly different way in that the meta controller doesn't control the values directly, but it, it has kind of options. So what you do is you define a bunch of possibilities for beta and gamma. So you say I have strategy one, strategy one has beta at 0.1 and gamma at 0.9. Strategy two has beta at 0.2 and gamma at 0.8 and so on. Right? And now the meta controller has to choose between one of these, in this case, six different strategies across training. So it might start off, as we said, with a high beta, which might be over here, 0.9, 0.1. It might start off with a high beta and then transition to the lower ends. And it can do so depending on the game and depending on the progress in the game. So this is, this is dynamic and this is the improvement over never give up over this other agent, because this other agent simply had these strategies and trained them at the same time. And now this meta controller here controls which strategy is currently trained and which one is used to generate the experience. So this is, this is basically, I mean, there's a, they also, of course, they also say, well, we also increase the window of, let me go back. So this LSTM, these, I've shown you these things here that incorporate experience over time. They also say, well, we increase the window of how long the LSTM, the time window of how much experience is incorporated. 
And they do a bunch of other things, which I always find kind of annoying because it's always really, really hard to see where the improvements come from that they claim they made. So, but, you know, barring that, basically they built this meta controller to choose the strategies for the agent over time. Now of course, this meta controller again is trained by the rewards that you get back from the environment. So the meta controller as an action has the choice of strategy, right? And the reward, it gets back from the agent environment interaction, right? So in itself, it is a reinforcement learning problem. Now why, like, to me it seems just shifts the, it just shifts the problem of exploration exploitation one level higher. They use a sliding window bandit algorithm to do this. But again, you have hyper parameters there, like how long is the sliding window and how does the bandit algorithm do the exploration exploitation tradeoff. So it seems to me you're just shifting it one level higher. And it also seems like we're getting into the region of where we are meta over engineering our approaches to the specifics of this Atari benchmark. Because we're kind of observing, oh, okay, these agents do this wrong, these agents do this wrong. So let's just build an agent that can do both sort of. And then the kind of audastic thing I find that they open with how to measure artificial general intelligence, which, I mean, come on, you're just it's kind of amnest right now you're just kind of over and over and overfitting on this one benchmark, there's not really a need to, to make this into a story on artificial general intelligence. Alright, so this was my two cents to this. I hope you enjoyed this and bye bye. | [
{
"start": 0,
"end": 9.120000000000001,
"text": " Hi there, you're looking at Solaris, which is a game in the Atari benchmark, and it has"
},
{
"start": 9.120000000000001,
"end": 14.74,
"text": " been one of the hardest games for reinforcement learning agents to solve."
},
{
"start": 14.74,
"end": 22,
"text": " What you're seeing is Agent 57, which is a new agent by DeepMind that is the first one"
},
{
"start": 22,
"end": 31.92,
"text": " to beat all of the 57 games in the Atari suite to a human or superhuman performance."
},
{
"start": 31.92,
"end": 38.2,
"text": " So some of these games have been pretty easy for RL agents, but some of them, look at this"
},
{
"start": 38.2,
"end": 42.8,
"text": " one here, have been pretty hard, mainly because of the reward structure."
},
{
"start": 42.8,
"end": 52.72,
"text": " Now you see on the top edge, the reward, it's not going up for a long time, and this kind"
},
{
"start": 52.72,
"end": 58.8,
"text": " of games where the reward doesn't go up for a long time is very hard for RL agents."
},
{
"start": 58.8,
"end": 66.4,
"text": " So Agent 57 builds on a number of previous improvements to the original DeepQ networks"
},
{
"start": 66.4,
"end": 71.24,
"text": " of DeepMind, and today we'll look into this."
},
{
"start": 71.24,
"end": 76.36,
"text": " So it's called Agent 57, as I said, because it beats all of these 57 games."
},
{
"start": 76.36,
"end": 84.28,
"text": " They're quite diverse, and it's a cool thing that a single system can beat it."
},
{
"start": 84.28,
"end": 86.16,
"text": " So they go into this."
},
{
"start": 86.16,
"end": 91.06,
"text": " This is a printout of the website, so I can scribble on it."
},
{
"start": 91.06,
"end": 95.6,
"text": " And this here, it's been cut off, but it should say DQN here."
},
{
"start": 95.6,
"end": 98.28,
"text": " DQN from 2015."
},
{
"start": 98.28,
"end": 108.32000000000001,
"text": " All right, so this DQN paper of 2015 was kind of the original paper that popularized this"
},
{
"start": 108.32000000000001,
"end": 116.28,
"text": " Atari benchmark and introduced neural networks to reinforcement learning, basically, that"
},
{
"start": 116.28,
"end": 119.24000000000001,
"text": " made it work."
},
{
"start": 119.24000000000001,
"end": 121.18,
"text": " Since then, there have been a number of improvements."
},
{
"start": 121.18,
"end": 125.78,
"text": " So maybe we'll just go into what DeepQ learning is."
},
{
"start": 125.78,
"end": 134.24,
"text": " So in reinforcement learning, usually, you have an agent here, and you have an environment"
},
{
"start": 134.24,
"end": 136.1,
"text": " over here, right?"
},
{
"start": 136.1,
"end": 139.44,
"text": " And the environment will give you an observation."
},
{
"start": 139.44,
"end": 146,
"text": " Now, the observation in our case would be something like the frame of a game, right?"
},
{
"start": 146,
"end": 150.52,
"text": " And you're here, you're a little rocket, and there is a bunch of meteors, right?"
},
{
"start": 150.52,
"end": 156.16,
"text": " And then the agent needs to somehow give back an action."
},
{
"start": 156.16,
"end": 163.02,
"text": " So an action, and the actions in the Atari benchmark are always defined."
},
{
"start": 163.02,
"end": 169.16000000000003,
"text": " So you can, in Atari, you used to have this kind of joystick thing."
},
{
"start": 169.16000000000003,
"end": 177.12,
"text": " You can put it up, down, left, right, or you can put it upright, up, left, and so on."
},
{
"start": 177.12,
"end": 181.88,
"text": " And also you have a button, I think, one or two buttons."
},
{
"start": 181.88,
"end": 187.28,
"text": " I don't actually remember, but you can press at least one button."
},
{
"start": 187.28,
"end": 188.56,
"text": " So these are the actions."
},
{
"start": 188.56,
"end": 192.76,
"text": " Let's say there is something like 20 different actions."
},
{
"start": 192.76,
"end": 198.28,
"text": " So all of the directions here, and then you can always press or not press a button with"
},
{
"start": 198.28,
"end": 201.12,
"text": " it."
},
{
"start": 201.12,
"end": 204.64000000000001,
"text": " So you have to give, you send back this action here."
},
{
"start": 204.64,
"end": 210.44,
"text": " You say, I want to put the joystick up, and I want to press the button at the same time."
},
{
"start": 210.44,
"end": 215.95999999999998,
"text": " And then the environment will give you back a, it will say, okay, we'll give you back"
},
{
"start": 215.95999999999998,
"end": 220.35999999999999,
"text": " a new observation, which would be the next frame of the game."
},
{
"start": 220.35999999999999,
"end": 224,
"text": " You've pressed up, so your little rocket is a bit more forward."
},
{
"start": 224,
"end": 229.17999999999998,
"text": " You've pressed the button, so you fired a shot, and the meteors are still here."
},
{
"start": 229.17999999999998,
"end": 232.5,
"text": " And it will also give you back a reward."
},
{
"start": 232.5,
"end": 238.52,
"text": " So the reward, different games give different rewards."
},
{
"start": 238.52,
"end": 247.16,
"text": " For example, in Pac-Man, every time your Pac-Man eats a little one of these dots, you get a"
},
{
"start": 247.16,
"end": 248.16,
"text": " reward."
},
{
"start": 248.16,
"end": 253.8,
"text": " But in other games, most famously games like Montezuma's Revenge, you're in this room,"
},
{
"start": 253.8,
"end": 258.6,
"text": " and there are these platforms and ladders and stuff, and you're here, and there are"
},
{
"start": 258.6,
"end": 262,
"text": " opponents rolling around, and there's a door over here."
},
{
"start": 262,
"end": 267.24,
"text": " You need to go down, jump over here, get up, get some key, and then go to the door."
},
{
"start": 267.24,
"end": 271.12,
"text": " And only then will you get a reward."
},
{
"start": 271.12,
"end": 277.52,
"text": " So games vary in many ways in how you get this reward."
},
{
"start": 277.52,
"end": 280.96,
"text": " So that's kind of the intrinsic problem here."
},
{
"start": 280.96,
"end": 284.52,
"text": " So deep queue learning is the following."
},
{
"start": 284.52,
"end": 288.28,
"text": " We have a neural network taking in the observation."
},
{
"start": 288.28,
"end": 291.4,
"text": " So we have a neural network, let's designate it as this."
},
{
"start": 291.4,
"end": 293.91999999999996,
"text": " And the observation goes in here, right?"
},
{
"start": 293.91999999999996,
"end": 296.59999999999997,
"text": " This is O, the observation goes in here."
},
{
"start": 296.59999999999997,
"end": 299.4,
"text": " And also, the action goes in here."
},
{
"start": 299.4,
"end": 302.2,
"text": " Now let's call this AI, because we have different actions."
},
{
"start": 302.2,
"end": 306.2,
"text": " And O, observation at step T."
},
{
"start": 306.2,
"end": 309.91999999999996,
"text": " And it will give you a queue value."
},
{
"start": 309.91999999999996,
"end": 315.23999999999995,
"text": " So the queue value for the observation at time T and action I."
},
{
"start": 315.23999999999995,
"end": 318.15999999999997,
"text": " Now you do this for every single action."
},
{
"start": 318.16,
"end": 325.64000000000004,
"text": " So you put observation with action A, J in the same network, right?"
},
{
"start": 325.64000000000004,
"end": 331.04,
"text": " You get an output that is the queue value for observation, the same observation with"
},
{
"start": 331.04,
"end": 332.28000000000003,
"text": " this different action."
},
{
"start": 332.28000000000003,
"end": 334.32000000000005,
"text": " You do this for every action."
},
{
"start": 334.32000000000005,
"end": 338.36,
"text": " And wherever the queue value is the highest, right?"
},
{
"start": 338.36,
"end": 341.90000000000003,
"text": " Wherever that's the highest, that's the action you go with."
},
{
"start": 341.90000000000003,
"end": 347.96000000000004,
"text": " So what you have to do is you have to train this neural network to predict the queue value"
},
{
"start": 347.96,
"end": 348.96,
"text": " as accurate as possible."
},
{
"start": 348.96,
"end": 356.68,
"text": " And the queue value is basically the reward that you expect from now until the end of"
},
{
"start": 356.68,
"end": 361.64,
"text": " the episode by performing this action in this situation, right?"
},
{
"start": 361.64,
"end": 364.32,
"text": " That's queue learning."
},
{
"start": 364.32,
"end": 372.67999999999995,
"text": " Simply predicting if I do action I right now, how much reward am I going to get from now"
},
{
"start": 372.67999999999995,
"end": 376.08,
"text": " until the end of the episode, right?"
},
{
"start": 376.08,
"end": 379.76,
"text": " That's basically it."
},
{
"start": 379.76,
"end": 384.03999999999996,
"text": " That's deep queue and deep queue learning simply because you have a neural network doing"
},
{
"start": 384.03999999999996,
"end": 385.7,
"text": " the learning."
},
{
"start": 385.7,
"end": 389.97999999999996,
"text": " So that was deep queue networks and they work pretty well, but they don't work for these"
},
{
"start": 389.97999999999996,
"end": 393.56,
"text": " long time horizons because you always just learn."
},
{
"start": 393.56,
"end": 395.96,
"text": " You just see one observation, right?"
},
{
"start": 395.96,
"end": 402.88,
"text": " And you kind of learn one step at a time and you rely on these queue values propagating"
},
{
"start": 402.88,
"end": 404.84,
"text": " through from your experience."
},
{
"start": 404.84,
"end": 408.4,
"text": " It doesn't work very well for these long credit assignments."
},
{
"start": 408.4,
"end": 417.28,
"text": " Now a significant improvement upon that is this R2D2 algorithm that incorporated LSTMs"
},
{
"start": 417.28,
"end": 420.84,
"text": " or GRUs, which are recurrent neural networks."
},
{
"start": 420.84,
"end": 427.23999999999995,
"text": " So not only does your observation go into the neural network, right?"
},
{
"start": 427.23999999999995,
"end": 433.94,
"text": " Now your history of observations, so what happened before, so not only the current game"
},
{
"start": 433.94,
"end": 435.08,
"text": " state, right?"
},
{
"start": 435.08,
"end": 441.16,
"text": " But here you have the observation from step one, the action you did at that step, then"
},
{
"start": 441.16,
"end": 446.84,
"text": " the observation time two, the action you did at time two and so on."
},
{
"start": 446.84,
"end": 454.36,
"text": " They now all, so this is encoded and this is encoded and then you have a recurrent neural"
},
{
"start": 454.36,
"end": 461.8,
"text": " network that incorporates all of these things that happened previously, right?"
},
{
"start": 461.8,
"end": 464.32,
"text": " To your current representation."
},
{
"start": 464.32,
"end": 472.92,
"text": " So now not only does the agent see what is happening right now, it also gets the information"
},
{
"start": 472.92,
"end": 481.28000000000003,
"text": " of what happened previously, what did it do previously and it can also now back propagate"
},
{
"start": 481.28000000000003,
"end": 486.28000000000003,
"text": " through these things and kind of learn a longer range credit assignment."
},
{
"start": 486.28,
"end": 493.28,
"text": " Credit assignment means it gets to figure out which actions had actually an influence"
},
{
"start": 493.28,
"end": 496.76,
"text": " on the final reward."
},
{
"start": 496.76,
"end": 503.59999999999997,
"text": " If you incorporate the history, right, you can have a direct gradient flow across that"
},
{
"start": 503.59999999999997,
"end": 504.59999999999997,
"text": " history."
},
{
"start": 504.59999999999997,
"end": 513.0799999999999,
"text": " So notably these LSTMs or GRUs, you can let them, you know, compute over maybe 10 or 100"
},
{
"start": 513.0799999999999,
"end": 514.0799999999999,
"text": " steps, right?"
},
{
"start": 514.08,
"end": 520.12,
"text": " And then you get a pretty good idea of which of these actions within those 100 steps led"
},
{
"start": 520.12,
"end": 523.72,
"text": " to which rewards."
},
{
"start": 523.72,
"end": 531.0600000000001,
"text": " And the other thing on R2D2 is of course it is now more distributed."
},
{
"start": 531.0600000000001,
"end": 537.88,
"text": " So this was already here improvements to DQN, but the R2D2 agent is also distributed, meaning"
},
{
"start": 537.88,
"end": 540.1400000000001,
"text": " that you have like a central instance."
},
{
"start": 540.1400000000001,
"end": 541.32,
"text": " So this is now engineering, right?"
},
{
"start": 541.32,
"end": 546,
"text": " You have a central instance that is called the learner."
},
{
"start": 546,
"end": 551.6400000000001,
"text": " And the learner has the main weights, which I'm going to designate with theta here."
},
{
"start": 551.6400000000001,
"end": 557.0600000000001,
"text": " And it just takes in experience from all of these workers."
},
{
"start": 557.0600000000001,
"end": 561.7600000000001,
"text": " So there's worker one, worker two, worker three, four, and so on."
},
{
"start": 561.7600000000001,
"end": 565.5200000000001,
"text": " And these, they will all just run episodes."
},
{
"start": 565.5200000000001,
"end": 569.6800000000001,
"text": " They will all do work, work, work, work, work, work, work, work, work independently"
},
{
"start": 569.68,
"end": 573.3199999999999,
"text": " of each other and then send back their experience to the learner."
},
{
"start": 573.3199999999999,
"end": 579.4399999999999,
"text": " And every now and then the learner sinks out the weights of the neural networks to the"
},
{
"start": 579.4399999999999,
"end": 580.4399999999999,
"text": " workers."
},
{
"start": 580.4399999999999,
"end": 583.4399999999999,
"text": " So that's kind of distributed RL in this sense."
},
{
"start": 583.4399999999999,
"end": 590.3199999999999,
"text": " You have a central learner, then you have many, many workers doing the actual interaction"
},
{
"start": 590.3199999999999,
"end": 593.8599999999999,
"text": " with the environment."
},
{
"start": 593.86,
"end": 609,
"text": " So one of the main pitfalls of R2D2 is still it has a poor exploration, exploitation strategy,"
},
{
"start": 609,
"end": 613.08,
"text": " which I believe it is still just kind of epsilon greedy."
},
{
"start": 613.08,
"end": 614.24,
"text": " What does it mean?"
},
{
"start": 614.24,
"end": 623.84,
"text": " So in order to understand this, maybe consider again our screen here, right?"
},
{
"start": 623.84,
"end": 628.6,
"text": " So let's say you're here with your space ship, right?"
},
{
"start": 628.6,
"end": 635.28,
"text": " And there are, there's a meteor coming right here and one right here."
},
{
"start": 635.28,
"end": 638.16,
"text": " And there is a gold coin right here."
},
{
"start": 638.16,
"end": 641.28,
"text": " Let's make this gold, right?"
},
{
"start": 641.28,
"end": 646,
"text": " So let's say you get a reward for collecting the coin, but you also get a reward for shooting"
},
{
"start": 646,
"end": 648.8399999999999,
"text": " the meteors, right?"
},
{
"start": 648.8399999999999,
"end": 652.24,
"text": " So what happens if you shoot right now?"
},
{
"start": 652.24,
"end": 664.6999999999999,
"text": " So if you shoot, then let's say you shoot and this meteor explodes, right?"
},
{
"start": 664.6999999999999,
"end": 665.8399999999999,
"text": " So you get a reward."
},
{
"start": 665.8399999999999,
"end": 666.8399999999999,
"text": " Yeah."
},
{
"start": 666.84,
"end": 672.5600000000001,
"text": " So you get one reward, but then the meteor right behind it will hit you, right?"
},
{
"start": 672.5600000000001,
"end": 673.72,
"text": " It's coming toward you."
},
{
"start": 673.72,
"end": 676.2800000000001,
"text": " You'll have no way, no time to get out of the way."
},
{
"start": 676.2800000000001,
"end": 680.12,
"text": " So one reward and then death, right?"
},
{
"start": 680.12,
"end": 683.5,
"text": " Let's make a little arrow here, right?"
},
{
"start": 683.5,
"end": 687.34,
"text": " So in total you get one reward."
},
{
"start": 687.34,
"end": 692.26,
"text": " Now what happens instead if you move to the right?"
},
{
"start": 692.26,
"end": 694.4000000000001,
"text": " So move, right?"
},
{
"start": 694.4,
"end": 699.84,
"text": " So the next, in the next frame, the meteor will fly past you."
},
{
"start": 699.84,
"end": 701.8,
"text": " You are over here, right?"
},
{
"start": 701.8,
"end": 704,
"text": " But the gold coin is here."
},
{
"start": 704,
"end": 707.56,
"text": " Now this has given you so far zero reward, right?"
},
{
"start": 707.56,
"end": 708.56,
"text": " Oops."
},
{
"start": 708.56,
"end": 717.92,
"text": " This has given you zero reward, but then in the next frame, you know, these meteors have"
},
{
"start": 717.92,
"end": 723.04,
"text": " passed now and you are going to get that gold coin."
},
{
"start": 723.04,
"end": 728.0799999999999,
"text": " And that gives you one reward and no death, right?"
},
{
"start": 728.0799999999999,
"end": 734.4399999999999,
"text": " So you can technically go on here and maybe you'll get five more rewards down the line."
},
{
"start": 734.4399999999999,
"end": 740.04,
"text": " So the, the, this is, here's the exploration exploitation dilemma."
},
{
"start": 740.04,
"end": 746.52,
"text": " If an agent has for some reason learned that the shooting action in this situation will"
},
{
"start": 746.52,
"end": 753.76,
"text": " give it a one reward and the move action will give it zero reward, but has not learned to"
},
{
"start": 753.76,
"end": 754.96,
"text": " look past this."
},
{
"start": 754.96,
"end": 757.36,
"text": " So this is kind of nebulous here."
},
{
"start": 757.36,
"end": 764.84,
"text": " It has only experienced, it has only experienced one frame of here."
},
{
"start": 764.84,
"end": 765.84,
"text": " Yeah."
},
{
"start": 765.84,
"end": 768.6999999999999,
"text": " It has only experienced one frame of experience."
},
{
"start": 768.6999999999999,
"end": 773.6,
"text": " It will say, wait a minute, shoot here appears to be like really good."
},
{
"start": 773.6,
"end": 777.08,
"text": " It gives me one reward and move gives me zero reward."
},
{
"start": 777.08,
"end": 781.08,
"text": " So from now on I'll just always do shoot, right?"
},
{
"start": 781.08,
"end": 783.44,
"text": " Shoot, shoot, shoot."
},
{
"start": 783.44,
"end": 785.76,
"text": " Now what you would like to do."
},
{
"start": 785.76,
"end": 789,
"text": " So this is called exploitation, right?"
},
{
"start": 789,
"end": 790.28,
"text": " Exploitation."
},
{
"start": 790.28,
"end": 794.24,
"text": " It has learned something that gives it a reward."
},
{
"start": 794.24,
"end": 798.0400000000001,
"text": " So it will just do that over and over again."
},
{
"start": 798.04,
"end": 806.16,
"text": " Whereas here you could say, ah, I, I might go this way, even though it's zero word, because"
},
{
"start": 806.16,
"end": 807.8,
"text": " I can hope, right?"
},
{
"start": 807.8,
"end": 813.28,
"text": " I don't know yet, but I can hope that I will get a more reward down here."
},
{
"start": 813.28,
"end": 815.92,
"text": " This is exploration."
},
{
"start": 815.92,
"end": 821.76,
"text": " And the question in, in reinforcement learning is always how to trade off these two, right?"
},
{
"start": 821.76,
"end": 829.36,
"text": " And ideally you would want your agent to collect maximum reward that speaks for exploitation"
},
{
"start": 829.36,
"end": 831.56,
"text": " of what it has already learned."
},
{
"start": 831.56,
"end": 838.6,
"text": " But also you never want to discard the possibility that, um, down the line of things that you"
},
{
"start": 838.6,
"end": 843.52,
"text": " don't yet know, there might be even more reward."
},
{
"start": 843.52,
"end": 844.96,
"text": " And that speaks for exploration."
},
{
"start": 844.96,
"end": 852.88,
"text": " I'm just, this both are abbreviated, same exploit, explore."
},
{
"start": 852.88,
"end": 854.76,
"text": " This was dumb."
},
{
"start": 854.76,
"end": 862.88,
"text": " Um, so in the original deep QN formulation, and I believe also in R2D2, this is done with"
},
{
"start": 862.88,
"end": 869.08,
"text": " Epsilon greedy, um, which is surprisingly performing well."
},
{
"start": 869.08,
"end": 875.48,
"text": " Uh, so in Epsilon greedy, you simply say, I'm going to have a constant Epsilon."
},
{
"start": 875.48,
"end": 877.96,
"text": " This is E Epsilon."
},
{
"start": 877.96,
"end": 882.84,
"text": " Um, this is maybe 5% or something."
},
{
"start": 882.84,
"end": 889.2800000000001,
"text": " I'm going to simply do something at random and the other one minus Epsilon."
},
{
"start": 889.2800000000001,
"end": 895.2,
"text": " I'm just going to go with the, um, with the thing I have already learned."
},
{
"start": 895.2,
"end": 901.6,
"text": " And this performs pretty well, but you might imagine that there is something smarter to"
},
{
"start": 901.6,
"end": 902.88,
"text": " do."
},
{
"start": 902.88,
"end": 904.96,
"text": " So never give up."
},
{
"start": 904.96,
"end": 913.48,
"text": " Um, these, this algorithm, it kind of goes into this, um, exploration, uh, mode where"
},
{
"start": 913.48,
"end": 918.08,
"text": " it tries to get, get to smarter ways to do exploration."
},
{
"start": 918.08,
"end": 923.96,
"text": " And the keywords here are things like intrinsic motivation."
},
{
"start": 923.96,
"end": 933.76,
"text": " So intrinsic motivation and curiosity refer to the fact that, um, it is so in addition"
},
{
"start": 933.76,
"end": 938.24,
"text": " to the reward you get from the environment here, right?"
},
{
"start": 938.24,
"end": 945.1800000000001,
"text": " This, this reward right here, you can also interject at this point and say, ah, I'm going"
},
{
"start": 945.1800000000001,
"end": 951.48,
"text": " to give some R prime, some reward of myself, right?"
},
{
"start": 951.48,
"end": 955.08,
"text": " To to kind of encourage some behavior in the agent."
},
{
"start": 955.08,
"end": 960.52,
"text": " And this here we call intrinsic intrinsic."
},
{
"start": 960.52,
"end": 967.6800000000001,
"text": " Um, so that means you add to the reward of the environment, you add some reward of your"
},
{
"start": 967.6800000000001,
"end": 974.2,
"text": " own that has nothing to do with the environment or not much, um, but just encourages certain"
},
{
"start": 974.2,
"end": 980.44,
"text": " behavior in the agent that is now also trying to maximize this intrinsic reward."
},
{
"start": 980.44,
"end": 988.96,
"text": " Um, and in curiosity and intrinsic motivation formulations, usually you are rewarded for"
},
{
"start": 988.96,
"end": 990.6400000000001,
"text": " novelty."
},
{
"start": 990.6400000000001,
"end": 998.0400000000001,
"text": " Novelty, which means the agent is rewarded for finding things that it has not yet seen."
},
{
"start": 998.0400000000001,
"end": 1004.24,
"text": " Um, so you, in this situation over here, you might see why this encourages the agent to"
},
{
"start": 1004.24,
"end": 1009.12,
"text": " go this route here because it says, wait a minute, there's a bunch of stuff like here."
},
{
"start": 1009.12,
"end": 1010.84,
"text": " I just die, right?"
},
{
"start": 1010.84,
"end": 1014.24,
"text": " But there is a bunch of stuff I haven't seen yet down here."
},
{
"start": 1014.24,
"end": 1022.12,
"text": " So I might want to go explore that and we give it extra intrinsic reward or prime for"
},
{
"start": 1022.12,
"end": 1024.78,
"text": " seeing things it hasn't seen yet."
},
{
"start": 1024.78,
"end": 1030.64,
"text": " So it will learn if I do things that I have never done, I will get this sweet intrinsic"
},
{
"start": 1030.64,
"end": 1033.72,
"text": " reward and then it will go explore."
},
{
"start": 1033.72,
"end": 1040.8,
"text": " Now, of course it's a, it's a big engineering question of how exactly to set this intrinsic"
},
{
"start": 1040.8,
"end": 1042.08,
"text": " reward."
},
{
"start": 1042.08,
"end": 1048.04,
"text": " And there are many, many different formulations of that, um, that fall under this term of,"
},
{
"start": 1048.04,
"end": 1051.04,
"text": " let's say curiosity or something like this."
},
{
"start": 1051.04,
"end": 1059.6000000000001,
"text": " Um, nevertheless, this never give up has, has, um, improved over R2D2, uh, using ideas"
},
{
"start": 1059.6000000000001,
"end": 1061.2,
"text": " like that."
},
{
"start": 1061.2,
"end": 1065.0800000000002,
"text": " And now agent 57 improves again."
},
{
"start": 1065.0800000000002,
"end": 1069.64,
"text": " Now how does agent 57 improve again?"
},
{
"start": 1069.64,
"end": 1079.1200000000001,
"text": " And it is mainly, um, it is mainly in, in the, in the, in this, what I just said."
},
{
"start": 1079.1200000000001,
"end": 1082.56,
"text": " So how exactly do you apply this intrinsic reward?"
},
{
"start": 1082.56,
"end": 1087.68,
"text": " How exactly do you navigate the exploration, exploitation trade off?"
},
{
"start": 1087.68,
"end": 1093.1200000000001,
"text": " That's where agent 57 comes in because what they've realized this, that for these different"
},
{
"start": 1093.1200000000001,
"end": 1097.76,
"text": " Atari games right here, uh, some are very easy."
},
{
"start": 1097.76,
"end": 1099.8,
"text": " Some you don't need much exploration."
},
{
"start": 1099.8,
"end": 1101.64,
"text": " Some you need a lot."
},
{
"start": 1101.64,
"end": 1109.04,
"text": " Some you need it over a large time scale and simply one agent, um, one never give up agent"
},
{
"start": 1109.04,
"end": 1115.1200000000001,
"text": " with the same settings of this curiosity of how long it looks into the future is not going"
},
{
"start": 1115.1200000000001,
"end": 1116.96,
"text": " to solve all the games."
},
{
"start": 1116.96,
"end": 1127.88,
"text": " So agent 57 learns, um, how to, to modulate this exploration, exploitation trade off."
},
{
"start": 1127.88,
"end": 1130.88,
"text": " So let's jump into the paper a bit more."
},
{
"start": 1130.88,
"end": 1138.6000000000001,
"text": " I encourage you to read the blog post that is quite thorough and, um, the paper is a"
},
{
"start": 1138.6000000000001,
"end": 1139.6000000000001,
"text": " bit more technical."
},
{
"start": 1139.6000000000001,
"end": 1140.6000000000001,
"text": " Sorry."
},
{
"start": 1140.6000000000001,
"end": 1143.4,
"text": " Let me switch over."
},
{
"start": 1143.4,
"end": 1150.6000000000001,
"text": " This is the paper agent 57 up forming the Atari human benchmark by Google deep mind."
},
{
"start": 1150.6000000000001,
"end": 1160.88,
"text": " And um, here they say improvements to end you to never give up."
},
{
"start": 1160.88,
"end": 1166.6000000000001,
"text": " So the first improvement they do is, um, so we've, we've already talked about how this"
},
{
"start": 1166.6000000000001,
"end": 1169.4,
"text": " is classic Q learning, right?"
},
{
"start": 1169.4,
"end": 1176.1200000000001,
"text": " So you're trying to learn this function, uh, that gives you the Q value of an action and"
},
{
"start": 1176.1200000000001,
"end": 1177.1200000000001,
"text": " the state."
},
{
"start": 1177.1200000000001,
"end": 1185.92,
"text": " Um, now since we're going to deal with intrinsic reward in addition to extrinsic reward, uh,"
},
{
"start": 1185.92,
"end": 1187.64,
"text": " it makes sense."
},
{
"start": 1187.64,
"end": 1193.3200000000002,
"text": " That's what they argue to split the Q learning function into two different parts."
},
{
"start": 1193.32,
"end": 1199.9199999999998,
"text": " One part that learns the extrinsic reward and one part that learns the intrinsic reward."
},
{
"start": 1199.9199999999998,
"end": 1200.9199999999998,
"text": " Right."
},
{
"start": 1200.9199999999998,
"end": 1206.6799999999998,
"text": " And then you have a parameter beta, um, in front of it."
},
{
"start": 1206.6799999999998,
"end": 1211,
"text": " Now beta in this case is the trade off."
},
{
"start": 1211,
"end": 1215.8,
"text": " How much do you want to value this intrinsic reward?"
},
{
"start": 1215.8,
"end": 1216.8,
"text": " Right."
},
{
"start": 1216.8,
"end": 1221.3999999999999,
"text": " Um, and here we see our first lever on the exploitation, exploration trade off."
},
{
"start": 1221.4,
"end": 1229.44,
"text": " If an agent gets lots of reward for, uh, for exploring, right, it might never exploit and"
},
{
"start": 1229.44,
"end": 1233.88,
"text": " exploiting might actually be a good, a good option in the game that you're in."
},
{
"start": 1233.88,
"end": 1241.46,
"text": " So you might want to set beta small, but in other games you might want to encourage exploration"
},
{
"start": 1241.46,
"end": 1245.68,
"text": " to the max and therefore set beta very high."
},
{
"start": 1245.68,
"end": 1258.2,
"text": " Um, all right, another, uh, constant along with that, that they modulate is the, is the,"
},
{
"start": 1258.2,
"end": 1260.64,
"text": " um, the discount factor."
},
{
"start": 1260.64,
"end": 1265.5600000000002,
"text": " So which is called this gamma here."
},
{
"start": 1265.5600000000002,
"end": 1271.8400000000001,
"text": " So you already see here this beta we've already seen and they also modulate this gamma."
},
{
"start": 1271.84,
"end": 1280.52,
"text": " Now what does gamma do, um, if I have my state and action, we already said, so here is an"
},
{
"start": 1280.52,
"end": 1289,
"text": " observation one and I do action one and that gives me observation two and I do action two"
},
{
"start": 1289,
"end": 1295.8799999999999,
"text": " and that gives me observation three and I do action three and each time I get a reward,"
},
{
"start": 1295.8799999999999,
"end": 1296.8799999999999,
"text": " right?"
},
{
"start": 1296.8799999999999,
"end": 1299.6599999999999,
"text": " An extrinsic reward and an intrinsic reward."
},
{
"start": 1299.66,
"end": 1306.72,
"text": " So reward one, reward two, reward three and so on."
},
{
"start": 1306.72,
"end": 1316.38,
"text": " Now usually, um, an RL agent will look at these rewards and let's say you are here,"
},
{
"start": 1316.38,
"end": 1321.92,
"text": " you are at observation one and you're trying to estimate your future rewards."
},
{
"start": 1321.92,
"end": 1327.66,
"text": " Um, what will be most important will be the reward that you're getting right now, right?"
},
{
"start": 1327.66,
"end": 1333.76,
"text": " Because that's the most sure because, um, this reward here that you might get two steps"
},
{
"start": 1333.76,
"end": 1337.2,
"text": " from now, you know, a lot of things could happen, right?"
},
{
"start": 1337.2,
"end": 1341.28,
"text": " You are pretty sure that if you do action one, you're going to get to this state, but"
},
{
"start": 1341.28,
"end": 1342.4,
"text": " you're not entirely sure."
},
{
"start": 1342.4,
"end": 1347.8000000000002,
"text": " You could also get to another state and therefore you had to do another action and therefore"
},
{
"start": 1347.8000000000002,
"end": 1350.8400000000001,
"text": " this reward here could be something different."
},
{
"start": 1350.84,
"end": 1358.1599999999999,
"text": " Um, so these algorithms are, are having what's known as a discount factor."
},
{
"start": 1358.1599999999999,
"end": 1366.48,
"text": " That means the value of a state, uh, of a state S is going to be the sum from time,"
},
{
"start": 1366.48,
"end": 1374.4399999999998,
"text": " uh, zero, let's say K equals T that's stated time T up until some horizon."
},
{
"start": 1374.4399999999998,
"end": 1377.86,
"text": " I think they call it H in the paper."
},
{
"start": 1377.86,
"end": 1385.6999999999998,
"text": " You could also think of this as infinity of the reward at step K, but discounted by this"
},
{
"start": 1385.6999999999998,
"end": 1387.12,
"text": " factor."
},
{
"start": 1387.12,
"end": 1398.36,
"text": " Um, and you raise it to the, to the power of K usually or T T minus, uh, yeah, K minus"
},
{
"start": 1398.36,
"end": 1407.84,
"text": " T. So basically means that you, this is if T is one, so it's the reward at the"
},
{
"start": 1407.84,
"end": 1416.6799999999998,
"text": " at this time step plus let's say gamma here is 0.99, right?"
},
{
"start": 1416.6799999999998,
"end": 1428.76,
"text": " Plus 0.99 the reward at the next time step plus 0.99 squared, uh, the reward of that"
},
{
"start": 1428.76,
"end": 1429.76,
"text": " after that."
},
{
"start": 1429.76,
"end": 1436.6,
"text": " And you see that the more, the more into the future you look, the less, um, value these"
},
{
"start": 1436.6,
"end": 1442.8799999999999,
"text": " rewards have. So little bars here indicate that you're going to value future rewards"
},
{
"start": 1442.8799999999999,
"end": 1444.76,
"text": " less and less."
},
{
"start": 1444.76,
"end": 1448.28,
"text": " This is called a discount factor right here."
},
{
"start": 1448.28,
"end": 1454.08,
"text": " And it's, um, how to set it is very important because if you set it very low, let's say"
},
{
"start": 1454.08,
"end": 1461.6799999999998,
"text": " you set it to 0.1, that means all that you want to do is maximize the rewards that you're"
},
{
"start": 1461.6799999999998,
"end": 1465.9199999999998,
"text": " getting in the likely the next and next, next step."
},
{
"start": 1465.92,
"end": 1469.16,
"text": " Uh, you're not really looking into the future."
},
{
"start": 1469.16,
"end": 1475.8400000000001,
"text": " Um, this is very good for games that give you immediate reward for good actions."
},
{
"start": 1475.8400000000001,
"end": 1483.5600000000002,
"text": " But if you, uh, if you set it very high, let's say 0.999, right?"
},
{
"start": 1483.5600000000002,
"end": 1490.16,
"text": " That means a reward a hundred steps from now doesn't, you know, is, is almost the same"
},
{
"start": 1490.16,
"end": 1492.76,
"text": " to you as a reward one step from now."
},
{
"start": 1492.76,
"end": 1500.08,
"text": " And this is very valuable for games that don't give you a reward immediately or that kind"
},
{
"start": 1500.08,
"end": 1502.64,
"text": " of trying to trick you as we saw before."
},
{
"start": 1502.64,
"end": 1508.92,
"text": " Like if you shoot the meteor now, then you get one reward, but if you don't and pass"
},
{
"start": 1508.92,
"end": 1512.4,
"text": " on the opportunity, you might get much more later."
},
{
"start": 1512.4,
"end": 1519.06,
"text": " So the modulation of the discount factor is also very important, uh, to set and really"
},
{
"start": 1519.06,
"end": 1520.2,
"text": " depends on the game."
},
{
"start": 1520.2,
"end": 1526.6000000000001,
"text": " So we have two quantities here that really depend on what kind of game it is."
},
{
"start": 1526.6000000000001,
"end": 1532.7,
"text": " And also they argue, um, it, it also depends where in the learning process you are."
},
{
"start": 1532.7,
"end": 1538.6000000000001,
"text": " So if you're at the very beginning of the learning process, you might want to have a"
},
{
"start": 1538.6000000000001,
"end": 1545,
"text": " very high goal, the high intrinsic reward to go explore."
},
{
"start": 1545,
"end": 1551.56,
"text": " And you want, might want to get, have a very low discount factor in order to learn a good"
},
{
"start": 1551.56,
"end": 1553.92,
"text": " immediate value function."
},
{
"start": 1553.92,
"end": 1559.78,
"text": " But then as time goes on, you might want to bring down the intrinsic reward because now"
},
{
"start": 1559.78,
"end": 1565.96,
"text": " you really want actually, because your end goal is to maximize the extrinsic reward and"
},
{
"start": 1565.96,
"end": 1570.16,
"text": " you want to up this discount factor to look more into the future."
},
{
"start": 1570.16,
"end": 1575.72,
"text": " Now that you have already learned the immediate values very well."
},
{
"start": 1575.72,
"end": 1588.72,
"text": " So if I had to summarize and simplify what agent 57 does is it builds a neural network"
},
{
"start": 1588.72,
"end": 1596.3600000000001,
"text": " that adjusts these two quantities across the training, right?"
},
{
"start": 1596.36,
"end": 1605.36,
"text": " Um, so, so it adjusts the beta and gamma across the training and it does this in a so-called"
},
{
"start": 1605.36,
"end": 1608.3,
"text": " bandit setting."
},
{
"start": 1608.3,
"end": 1614.1799999999998,
"text": " Now there is no real good picture in this paper that I can show you."
},
{
"start": 1614.1799999999998,
"end": 1616.4799999999998,
"text": " So I'm just going to have to, to draw."
},
{
"start": 1616.4799999999998,
"end": 1618.8,
"text": " So you have an agent, right?"
},
{
"start": 1618.8,
"end": 1626.02,
"text": " It interacts with this environment here and it always gets these rewards."
},
{
"start": 1626.02,
"end": 1630.44,
"text": " Now what you have here is a meta controller, right?"
},
{
"start": 1630.44,
"end": 1633.76,
"text": " So the agents, it has two parameters."
},
{
"start": 1633.76,
"end": 1640.48,
"text": " It has this beta and this gamma and the meta controller now observes this."
},
{
"start": 1640.48,
"end": 1648.52,
"text": " It observes this interaction and it outputs values for these two constants and the does"
},
{
"start": 1648.52,
"end": 1652.84,
"text": " this dynamically as the training progresses, right?"
},
{
"start": 1652.84,
"end": 1662.36,
"text": " So the agent, the agent will, will kind of learn, the agent will change its behavior"
},
{
"start": 1662.36,
"end": 1663.36,
"text": " over time."
},
{
"start": 1663.36,
"end": 1668.9599999999998,
"text": " Now this is actually implemented in a slightly different way in that the meta controller"
},
{
"start": 1668.9599999999998,
"end": 1673.62,
"text": " doesn't control the values directly, but it, it has kind of options."
},
{
"start": 1673.62,
"end": 1680.56,
"text": " So what you do is you define a bunch of possibilities for beta and gamma."
},
{
"start": 1680.56,
"end": 1687.1399999999999,
"text": " So you say I have strategy one, strategy one has beta at 0.1 and gamma at 0.9."
},
{
"start": 1687.1399999999999,
"end": 1691.76,
"text": " Strategy two has beta at 0.2 and gamma at 0.8 and so on."
},
{
"start": 1691.76,
"end": 1692.76,
"text": " Right?"
},
{
"start": 1692.76,
"end": 1700.6399999999999,
"text": " And now the meta controller has to choose between one of these, in this case, six different"
},
{
"start": 1700.6399999999999,
"end": 1702.82,
"text": " strategies across training."
},
{
"start": 1702.82,
"end": 1708.12,
"text": " So it might start off, as we said, with a high beta, which might be over here, 0.9,"
},
{
"start": 1708.12,
"end": 1709.12,
"text": " 0.1."
},
{
"start": 1709.12,
"end": 1717.1599999999999,
"text": " It might start off with a high beta and then transition to the lower ends."
},
{
"start": 1717.1599999999999,
"end": 1723.6399999999999,
"text": " And it can do so depending on the game and depending on the progress in the game."
},
{
"start": 1723.6399999999999,
"end": 1729.8,
"text": " So this is, this is dynamic and this is the improvement over never give up over this other"
},
{
"start": 1729.8,
"end": 1734.84,
"text": " agent, because this other agent simply had these strategies and trained them at the same"
},
{
"start": 1734.84,
"end": 1736.56,
"text": " time."
},
{
"start": 1736.56,
"end": 1743.56,
"text": " And now this meta controller here controls which strategy is currently trained and which"
},
{
"start": 1743.56,
"end": 1748.32,
"text": " one is used to generate the experience."
},
{
"start": 1748.32,
"end": 1757.52,
"text": " So this is, this is basically, I mean, there's a, they also, of course, they also say, well,"
},
{
"start": 1757.52,
"end": 1764.84,
"text": " we also increase the window of, let me go back."
},
{
"start": 1764.84,
"end": 1771.9599999999998,
"text": " So this LSTM, these, I've shown you these things here that incorporate experience over"
},
{
"start": 1771.9599999999998,
"end": 1772.9599999999998,
"text": " time."
},
{
"start": 1772.9599999999998,
"end": 1779.48,
"text": " They also say, well, we increase the window of how long the LSTM, the time window of how"
},
{
"start": 1779.48,
"end": 1783.28,
"text": " much experience is incorporated."
},
{
"start": 1783.28,
"end": 1787.72,
"text": " And they do a bunch of other things, which I always find kind of annoying because it's"
},
{
"start": 1787.72,
"end": 1793.9199999999998,
"text": " always really, really hard to see where the improvements come from that they claim they"
},
{
"start": 1793.92,
"end": 1794.92,
"text": " made."
},
{
"start": 1794.92,
"end": 1802.4,
"text": " So, but, you know, barring that, basically they built this meta controller to choose"
},
{
"start": 1802.4,
"end": 1807.0800000000002,
"text": " the strategies for the agent over time."
},
{
"start": 1807.0800000000002,
"end": 1816.02,
"text": " Now of course, this meta controller again is trained by the rewards that you get back"
},
{
"start": 1816.02,
"end": 1817.6000000000001,
"text": " from the environment."
},
{
"start": 1817.6,
"end": 1825.3999999999999,
"text": " So the meta controller as an action has the choice of strategy, right?"
},
{
"start": 1825.3999999999999,
"end": 1831.56,
"text": " And the reward, it gets back from the agent environment interaction, right?"
},
{
"start": 1831.56,
"end": 1835.6,
"text": " So in itself, it is a reinforcement learning problem."
},
{
"start": 1835.6,
"end": 1847.6799999999998,
"text": " Now why, like, to me it seems just shifts the, it just shifts the problem of exploration"
},
{
"start": 1847.6799999999998,
"end": 1850.6,
"text": " exploitation one level higher."
},
{
"start": 1850.6,
"end": 1854.08,
"text": " They use a sliding window bandit algorithm to do this."
},
{
"start": 1854.08,
"end": 1859.98,
"text": " But again, you have hyper parameters there, like how long is the sliding window and how"
},
{
"start": 1859.98,
"end": 1863.8799999999999,
"text": " does the bandit algorithm do the exploration exploitation tradeoff."
},
{
"start": 1863.88,
"end": 1867.5400000000002,
"text": " So it seems to me you're just shifting it one level higher."
},
{
"start": 1867.5400000000002,
"end": 1876.22,
"text": " And it also seems like we're getting into the region of where we are meta over engineering"
},
{
"start": 1876.22,
"end": 1883.14,
"text": " our approaches to the specifics of this Atari benchmark."
},
{
"start": 1883.14,
"end": 1887.88,
"text": " Because we're kind of observing, oh, okay, these agents do this wrong, these agents do"
},
{
"start": 1887.88,
"end": 1888.88,
"text": " this wrong."
},
{
"start": 1888.88,
"end": 1893.8200000000002,
"text": " So let's just build an agent that can do both sort of."
},
{
"start": 1893.82,
"end": 1901.32,
"text": " And then the kind of audastic thing I find that they open with how to measure artificial"
},
{
"start": 1901.32,
"end": 1907.04,
"text": " general intelligence, which, I mean, come on, you're just it's kind of amnest right"
},
{
"start": 1907.04,
"end": 1913.08,
"text": " now you're just kind of over and over and overfitting on this one benchmark, there's"
},
{
"start": 1913.08,
"end": 1922.1599999999999,
"text": " not really a need to, to make this into a story on artificial general intelligence."
},
{
"start": 1922.16,
"end": 1924.68,
"text": " Alright, so this was my two cents to this."
},
{
"start": 1924.68,
"end": 1952.4,
"text": " I hope you enjoyed this and bye bye."
}
] |
lmAj0SU_bW0 | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Axial Attention & MetNet: A Neural Weather Model for Precipitation Forecasting | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"arxiv",
"google",
"attention mechanism",
"attention",
"transformer",
"rnn",
"recurrent",
"weather",
"long-range",
"layers",
"convolutions",
"cnns",
"rain",
"physics"
] | MetNet is a predictive neural network model for weather prediction. It uses axial attention to capture long-range dependencies. Axial attention decomposes attention layers over images into row-attention and column-attention in order to save memory and computation.
https://ai.googleblog.com/2020/03/a-neural-weather-model-for-eight-hour.html
https://arxiv.org/abs/1912.12180
Abstract:
Weather forecasting is a long standing scientific challenge with direct social and economic impact. The task is suitable for deep neural networks due to vast amounts of continuously collected data and a rich spatial and temporal structure that presents long range dependencies. We introduce MetNet, a neural network that forecasts precipitation up to 8 hours into the future at the high spatial resolution of 1 km2 and at the temporal resolution of 2 minutes with a latency in the order of seconds. MetNet takes as input radar and satellite data and forecast lead time and produces a probabilistic precipitation map. The architecture uses axial self-attention to aggregate the global context from a large input patch corresponding to a million square kilometers. We evaluate the performance of MetNet at various precipitation thresholds and find that MetNet outperforms Numerical Weather Prediction at forecasts of up to 7 to 8 hours on the scale of the continental United States.
Authors: Casper Kaae Sønderby, Lasse Espeholt, Jonathan Heek, Mostafa Dehghani, Avital Oliver, Tim Salimans, Shreya Agrawal, Jason Hickey, Nal Kalchbrenner
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. So what you're looking at here is a weather forecast model. Specifically the very top row is a new weather forecast model called NetNet by Google Research. So the goal of weather prediction is pretty simple. You want to know what the weather is going to be in the future. Specifically here you want to know precipitation rates. And so this is a new work that uses neural network instead of physical models in order to predict precipitation. So in the middle here you see this is the ground truth of what really happened at that particular time. You see precipitation rates in red here moving across the country. Now the bottom there is a physical model and as far as I understand it, physical models have been used so far to make weather predictions. Which basically means that you simulate these rain clouds and the movement of them across the country. And you do a physical simulation like a particle simulation type of thing and then that allows you to predict and then you run that maybe multiple times and you get an idea of the kind of distribution you're going to get. Now what MetNet does is it simply uses a neural network to predict the outcome directly. So there's no physical simulation involved. There is just a neural network that takes as input what's the situation now and maybe over a stretch of time. And then you ask it please make a prediction in eight hours or something. And then the MetNet will make that prediction and it will just output it like snap. No physical simulation needed. And you also see here that MetNet outputs things in kind of a cloud way, in a probabilistic way. In one forward pass you don't need to run it multiple times. But we'll get to that. On the bottom here you see the measurement. So the axis is F1. F1 is kind of the overlap of how well you're able to predict the precipitation. And you see here the MetNet is above the HRR baseline for most of this time. Up to 480 minutes into the future. Which is eight hours I believe. All right. So the paper is the following. It's called MetNet, a Neural Weather Model for Precipitation Forecasting. And I'm not going to read all the names here. The main corresponding authors are Caspar K. Sonderby and Nalkalke Brenner. And it's a team of Google research. So specifically they use the input of these two things here. So one is this GOS16, which is what you see here on the left. And the precipitation rates are here depicted on the right. So you want to take these things as input into your model. Now how do you do that? Of course we want to build a neural network. And this is the architecture they come up with. So on the bottom here they feed in the data. And they feed in the data in 15 minute interval from 90 minutes into the pass. So you have to imagine it like this. So there's a timeline. I'm going to use a little bit of a finer thing. So there's a timeline. And you let's say are here. This is now. And then here in the future, this is maybe one hour into the future. This is your target, right? This is you are here and you're looking out. You would like to know what's the precipitation going to be in one hour from now. What Metnet does is it takes an input. And specifically it takes the last 90 minutes before now as an input. And it samples it in frequencies of 15 minute intervals. So each one of these is going to be 15 minutes. And each 15 minutes you get like a snapshot of this entire of the input region. 
Now the input region, if I can jump back here to the website for a second, they show what the input region is. The input region, if you want to predict in the middle of this small square, is actually the entire 1024 by 1024 kilometers around it. So it's a very big input, though the actual region you consider is the inside 64 by 64 kilometers. But you take in information from the big region. And the main point of the paper, I believe, is how to do that. All right, so each 15 minutes you take in a snapshot. And these are these snapshots here on the bottom. So you have to imagine, in here, every 15 minutes there's a stack of these inputs. So what are these inputs? These inputs are some kind of features that you have. So there is the target time, which in this case would be this one hour here. There is the month, day and hour, which is important for weather prediction, right? So the time of year, time of day and so on. Longitude and latitude are probably pretty important. The elevation map is probably pretty important. So these you can see, these are all maps. Now this is how you encode things in these planes. Since it's a neural network, you know, all of these things must be of the same dimensions here. So if you have 256 dimensions here and probably 256 dimensions here, then all of these things must be of the same dimension. And if you want to give a feature such as the target time, which in this case, let's say, is one hour, you just put here one hour. What's one hour? Let's say 60 minutes. So you just put the number 60 here, 60, 60, 60, 60, 60, 256 times 256 times. So this is how you encode features. It's pretty primitive, but it turns out it works the best if you do it this way. All right. So you have these planes and some, as I said, are just features such as the target time, month, day and hour and so on. Elevation, I guess, is a map, like an elevation map of the region you consider. And this corresponds now to these 64 kilometers times 64 kilometers here. And that's exactly what these center crops are here. So this center crop thing, this plane here, is the 64 by 64 region. That's this plane here. And also the precipitation and the GOES, that's this thing here. Now we also have these downsampled things, which are the 1024 kilometers. So this here and this here, these are the 1024 by 1024 kilometer patches, but they are downsampled. So everything is downsampled, I guess, to 256 by 256 pixels. So you don't really take into account every nuance of that very big input, but you do downsample it. So you kind of get the big picture of the outer frame, and in the inner frame you take it in a much higher resolution in order to get the details. All right. So you stack all of this up into a big tensor and then you feed it into a spatial downsampler, which, as I have read, is just a convolutional neural network, right? So this is your typical image processing pipeline. So you do this for each of these stacks, right? And then what you get out of it is a lower size representation right here. So you get these representations and then you let a temporal encoder run over it. What does a temporal encoder do? This in particular is a convolutional LSTM. And if you already know what an LSTM is, a convolutional LSTM is nothing more than an LSTM that has convolutional layers as its intermediate layers.
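Since the temporal encoder is described as a convolutional LSTM, here is a minimal sketch of one ConvLSTM step: an ordinary LSTM cell whose gate computations are 2D convolutions instead of matrix multiplications, so the hidden state keeps its spatial layout. All shapes and the hidden size are illustrative, not the paper's actual values.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """LSTM cell whose input/hidden transforms are convolutions (keeps spatial layout)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # One conv produces all four gates at once from the [input, hidden] concatenation.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g            # update the cell state
        h = o * torch.tanh(c)        # new hidden state, still an image-shaped tensor
        return h, c

# Run it over a toy sequence of 7 downsampled frames.
B, T, C, H, W, HID = 1, 7, 16, 64, 64, 32
cell = ConvLSTMCell(C, HID)
h = torch.zeros(B, HID, H, W)
c = torch.zeros(B, HID, H, W)
for t in range(T):
    frame = torch.randn(B, C, H, W)   # stand-in for the CNN-downsampled snapshot at step t
    h, c = cell(frame, (h, c))
print(h.shape)                        # final spatial representation: (1, 32, 64, 64)
```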
So it's pretty suited to do, for example, videos or any sort of image processing that goes over time like this one. So the temporal encoder simply starts out here with an initial state. My pens are screwing me today. So it starts out here with an initial state and then it simply inputs each of these representations, takes them one by one, runs across time, right? And each time producing a new intermediate representation of the input until it finally reaches this here, final representation. So this thing here is a single final representation of all of this input, right? Of this entire time span of all of these stacks here. Yeah, so you can press this into a single input with first a convolutional network to downsample each time point individually and then with a recurrent neural network, an LSTM, to integrate the information over time. You end up with this single piece here. And then what you do, so you still, here you still retain kind of an image sort of thing. So this representation here, you can see it in the background. Maybe I'll get down my scribbles here. This here is still sort of an image tensor, though I guess it's a hidden representation, so you couldn't really look at it. But it still has dimensions of images. So this here is still, I think, the same or corresponding to these dimensions here. So this still has some spatial information where this might be north-south here in this axis, might be east and west, right? And then these are just the hidden channels, the channels of the hidden representations, right? So what you would like to do now is to basically encode information from the space around you. If you look at, let's look at one of these, one of the big pictures. What you would like to do in weather prediction, let's say you are right here. What's a good example, you are right here, right? Now if you want to know if this particular cloud over here is going to move to your direction, what you want to know is, for example, is there a mountain range here, right? Because then it's more probable that this cloud is going maybe to move up there. You would also want to know how this cloud here moves, right? If this cloud here moves somewhere here around, then it's probably this cloud down here might be pulled with it or something like this. So you're very much, sorry, you're there. You're very much kind of want to look out into each of the directions here and you want to incorporate kind of what's happening across the space. We're already used to kind of convolutional networks being able to do this, but in here the authors use attention to do that. So if you don't know what attention is, my most popular video is about attention and you can do attention for images. So the way that works is that you have a series of images of stacked blocks of a neural network. Let me draw this here. So you have an image here and let's say it has just four pixels, right? So you have the next layer of these four pixels, right? So you have layers of this. So the next layers of the four pixels, they all emit what are called queries and queries are just vectors. So each pixel emits a single vector. Let's say this, that, that, this, right? And each of the lower layers emits what is called a key. This, this, this, this. And now the keys and the queries are routed together based on their inner product. So these two would be routed together. This would probably be routed here. This as well. This would probably routed here. 
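Here is a small numpy sketch of that routing: every pixel emits a query and a key, the inner products between them decide how strongly each pixel attends to each other pixel, and a softmax turns that into mixing weights. The toy dimensions and the random projection matrices are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, d_model, d_head = 16, 8, 4            # e.g. a 4x4 image flattened to 16 pixels

x = rng.normal(size=(n_pixels, d_model))        # lower-layer pixel features
Wq = rng.normal(size=(d_model, d_head))         # "learned" projections (random here)
Wk = rng.normal(size=(d_model, d_head))
Wv = rng.normal(size=(d_model, d_head))

Q, K, V = x @ Wq, x @ Wk, x @ Wv
scores = Q @ K.T / np.sqrt(d_head)              # (16, 16): every pixel scores every pixel
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the lower-layer pixels
out = weights @ V                               # each pixel is a weighted mix of all pixels
print(weights.shape, out.shape)                 # (16, 16) (16, 4)
```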
So in effect, each of the pixels of the higher layer can look at specific pixels of the lower layer. Now you can imagine this is exactly what we want here: if there is a mountain range here, we might be interested in that. So we'd be able, from our point here, to specifically attend to that location using attention, right? So the authors here build basically a stacked model of attention layers. And that's what's happening in the third part here. And this attention is there in order to incorporate long range dependencies. As I made the example with the mountain range, this might be far away, but it might actually influence your weather very much. So the attention is to incorporate these long range dependencies. But the problem with attention is, as you saw in the example, each of these pixels can attend to each of the pixels in the lower layer. So what you'd end up with: each can attend to that. This can attend to each. This can attend to each. You'll see you'll end up with 16 connections. Can't even draw them. So you end up with 16 connections. In general, if you have D here, you will end up with a D squared number of things you need to calculate, right? And now of course we have images. So generally we'll think of D by D pixels. Now we have D by D pixels and that thing squared number of things we need to calculate. This quickly gets too much. So in, for example, MNIST, you have 28 by 28 pixel images. This is 784 pixels. But you'll have to calculate this squared many connections between things. This is simply impossible pretty quickly, especially if you scale up the images and then have some channels in here as well. So attention for image processing has been a bit lagging compared to natural language processing. In natural language processing, you usually have maybe 500 tokens or something. In images you have much more. So attention is much more expensive. So you can't really do it on current hardware. Now this paper uses something called axial attention. Axial attention is kind of the trick of how to make this attention happen for images. And for that I want to switch over to this paper. It's called Axial Attention in Multidimensional Transformers, by some of the same authors. So Jonathan Ho and Nal Kalchbrenner, also of Google Brain and UC Berkeley, proposed this axial transformer. Now they originally proposed axial attention for autoregressive models. If you know transformers, they also started by making autoregressive models, so language modeling and so on. But we can decouple the axial attention from the autoregressivity of these models. So I'm not going to talk about autoregressive models, it's just axial attention. So what is axial attention? It's pretty simple actually. And I want to start by talking about convolutions. So what does a convolution do? Let's just take a one-dimensional image, which is pretty boring, but let's say it has these eight pixels here. So this is an image, it just has one row of eight pixels. What do I do when I run a convolutional filter across that? This is the lower layer, and now this is the next layer that is produced by a convolution. So for each of the pixels in the next layer, what I can do with the convolutional layer, I can look at its neighbors in the lower layer. So these three would be part of that. And then I go on to this, and again I look at its neighbors at these three. I might have done this in a different color.
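Before following the convolution analogy further, the blow-up mentioned above is easy to see by just counting query-key pairs; this little snippet compares full attention with the row-plus-column cost of axial attention for a few image sizes (pure arithmetic, no model involved).

```python
# Number of query-key pairs one attention layer has to score.
for side in (28, 64, 256):           # MNIST-sized grids up to larger ones
    n = side * side                  # number of pixels
    full = n * n                     # every pixel attends to every pixel
    axial = n * (2 * side)           # each pixel attends only to its row and its column
    print(f"{side}x{side}: full={full:,} axial={axial:,} ratio={full / axial:.0f}x")
```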
And then I look at this, and it can look at itself and its neighbors. So a convolution is pretty smart. And then of course in the next layer, that repeats. Now if you think, what's the difference between doing this and a fully connected layer? So if I have a fully connected layer, a classic neural network, a fully connected layer, then this pixel here would incorporate information from all of the pixels here. And this pixel here would incorporate information from all the pixels. Now why might this be better? Because the information that I want here for this pixel might depend on this pixel over here. So I might benefit from a connection over there, or it might benefit from this pixel here, which it can't reach. And with a convolutional network, I can't do that. Why, then, are convolutional networks preferable? Because the convolutional network can do the same thing across multiple layers. So let's assume again that this pixel here needs information from this pixel right here. And as you can see in just one layer, it can only get information from those, right? But now take the next layer, so the same pixel here, it can attend to these three, right? Now these three can each in turn attend to their neighbors, right? And I'm not going to draw everything, but the receptive field for this pixel here will end up being all of this, right? Now we still don't have our desired pixel in here, but if we just go one layer more, then this pixel right here, in a different color, this pixel right here can reach it, right? The receptive field across the layers increases, because it's always incorporating information from downstream, and the downstream again incorporates information from the downstream, so eventually you can aggregate the same information. So instead of having a single layer with all of these connections, we have convolutional layers, which seem like a worse idea, because they can only do less, attend to fewer things, but across the layers they actually can do the same thing, right? And that turns out to be a huge advantage of these convolutional layers, and that's why convolutional layers are used for image processing, and not the multi-layer perceptrons. So the same exact thing happens with axial attention, just in a different form. It is a bit poorly drawn here, I believe, but this is how you have to imagine it. As before, this pixel, the red pixel here, if I just have a normal transformer layer, the red pixel can attend to all of the other pixels in the image, right? That's basically full attention, and each of the pixels can do that, so that's your d squared computation right here. Now, what we want to do is, in a convolutional layer, what we would do is, okay, you can only attend to your neighbors, and then in the next layer the neighbors can attend to their neighbors, and thereby you go out and out. In axial attention, you say, okay, this thing can only attend to its row and its column, right? That's it. You can only do attention to your row and your column, and I believe they don't even do it at the same time. So in one layer you can attend to the row you're in, and in the other you can attend to the column you're in. Now, let's see how the same thing happens as with a convolutional layer. So basically, if the red pixel needs access to information in this green pixel, how does it do that? So in the first layer it can attend to its row and its column, right? 
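In code, one such axis-restricted attention layer could look roughly like this. Again, this is a toy, single-head sketch with random stand-in projections, not the paper's implementation; the point is only that the attention matrices stay small because every pixel sees just its own row or its own column.

```python
import torch

def axial_attention(x, axis):
    """Attention restricted to one axis of the image: with axis=1 every pixel
    attends only to the pixels in its own row, with axis=0 only to its column."""
    if axis == 0:                       # column attention: swap so columns become rows
        x = x.transpose(0, 1)
    H, W, C = x.shape
    Wq, Wk, Wv = (torch.randn(C, C) for _ in range(3))   # stand-ins for learned projections
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # one small (W x W) attention matrix per row instead of one huge (H*W x H*W) matrix
    weights = torch.softmax(q @ k.transpose(-1, -2) / C ** 0.5, dim=-1)
    out = weights @ v
    return out.transpose(0, 1) if axis == 0 else out

x = torch.randn(64, 64, 8)
x = axial_attention(x, axis=1)   # row attention layer
x = axial_attention(x, axis=0)   # column attention layer
print(x.shape)                   # torch.Size([64, 64, 8])
```

Stacking a row layer and a column layer is exactly the two-step game we are about to play with the red and green squares.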
And so can every other pixel, including, of course, this square here. So let's say this square here can also attend to its row and its column, and its row happens to include the green one, right? So in layer one, this red square here gets information from the green square via row attention, right? And then in layer two, our red square of interest can now row-attend to this other red square here, so they get connected in layer two. I'm sorry, I don't want that. So you see that within just two layers we've transferred information from the green square via this red square to that red square. So, in the same way as with a convolution, you can replace the long-range arbitrary dependencies between pixels by simply having multiple layers of restricted dependence. The same goes for this axial attention. So you can replace the arbitrary attention in one layer, right? You can replace that by a two-step process where you first transfer information via the column and then transfer it via the row. It's a bit like, you know, in chess you can have a queen that can move in any direction, especially diagonally, and then if you just have a rook you kind of need to do two moves. So the queen is like the full attention and the rook is the multi-layer axial attention. They can achieve the same thing, you just need more layers. But as a trade-off you get a super, super saving in the requirement of memory and computation, right? So they stress that, you know, you can kind of represent the same distributions with the axial attention. And you know, the trade-off is you just have to do multiple layers of it. Right, so this is axial attention and they are now able to incorporate this into their model right here. So they have, I believe, eight blocks, so four row attention, you see this right here, and four column attention blocks in their model. And finally they output this distribution here across their region of interest. Now this again is, I believe, this 64 by 64 resolution. So you can see how they kind of aggregated information across the 64 using this axial attention. And then that makes their prediction in this one hour. So this is this. Alright, so this was a long way. So recap, they have 15-minute snapshots of this input data across time, along with some features. They use a spatial downsampler, which is a CNN, on each of them individually. Then they use a convolutional LSTM to encode this across time to end up with a single representation here at the end. Then they use axial attention in order to aggregate information across the spatial dimensions. They do this in multiple stages and at the end they make a precipitation prediction, which is a distribution, as you can see here. So as an output you directly get a distribution of results, which is also cool because with the physical simulation, you have to let it run many, many times in order to get a distribution of results. And this neural network can simply give you a distribution right away. That's what they say right here. So they go a bit into the architecture compared to baselines. I want to get back to what I showed you at the beginning. This here is just the picture, kind of the picture book example. So left is the ground truth, in the middle is MetNet, and on the right is a baseline method. This here is, as you can see, at two hours, at four, six and eight. So you can see that MetNet gives you as an output this distribution. What I find interesting, for example, is this sample two right here. 
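To put rough numbers on that queen-versus-rook trade-off for a 64 by 64 grid like the one here, a back-of-the-envelope count of attention-matrix entries looks like this (it ignores channels, heads and constant factors, so take it only as an order-of-magnitude sketch):

```python
# rough count of attention-matrix entries for a 64 x 64 grid of positions
H = W = 64
full_attention = (H * W) ** 2             # every position attends to every position
axial_row_layer = H * W * W               # each of the H*W positions attends to its W row-mates
print(full_attention)                     # 16777216
print(axial_row_layer)                    # 262144
print(full_attention // axial_row_layer)  # 64  -> 64x fewer entries per axial layer
```

So each axial layer is far cheaper, and the price you pay is needing a stack of row and column layers, which is what the row and column blocks here are for.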
So in sample one here you can see there is a consistent difference. And this is the forecast time, so how much in advance you want the prediction; this would be one hour, but it can go up to eight hours. Here is a consistent gap in F1, which means that MetNet does it better across this span of time, and that is for the top sample right here. For the bottom sample though, you can see here, there is a big gap at the beginning, again, there is a big gap at the beginning, and then this gap gets smaller and smaller and smaller. And this, I think, might give you an indication of, let's say, the weakness of this approach, doing it with neural networks. So with neural networks you kind of rely on regularities, you kind of rely on broad-scale, correct things that you can learn from the data, and this might work well as long as things are regular, and of course across shorter time spans things tend to be more regular, right? But if you go for longer time spans, I believe there is more of a chaos element to it, like weather can be very dependent on very subtle things, and the physics simulation that is really taking into account the actual physics might be able to account for that much, much better. And that's why I believe across time here you'll see that the two models get closer together. That being said, MetNet of course is still on top here. But it would be interesting to forecast for even longer, though I haven't actually dug through their results, through their numerical results, but you can do that if you want. Alright, so this was it for MetNet and axial attention. I hope you liked this, and bye bye. | [
{
"start": 0,
"end": 8.52,
"text": " Hi there. So what you're looking at here is a weather forecast model. Specifically the"
},
{
"start": 8.52,
"end": 15.76,
"text": " very top row is a new weather forecast model called NetNet by Google Research. So the goal"
},
{
"start": 15.76,
"end": 19.96,
"text": " of weather prediction is pretty simple. You want to know what the weather is going to"
},
{
"start": 19.96,
"end": 27.68,
"text": " be in the future. Specifically here you want to know precipitation rates. And so this is"
},
{
"start": 27.68,
"end": 36.04,
"text": " a new work that uses neural network instead of physical models in order to predict precipitation."
},
{
"start": 36.04,
"end": 41.519999999999996,
"text": " So in the middle here you see this is the ground truth of what really happened at that"
},
{
"start": 41.519999999999996,
"end": 48.64,
"text": " particular time. You see precipitation rates in red here moving across the country. Now"
},
{
"start": 48.64,
"end": 55.22,
"text": " the bottom there is a physical model and as far as I understand it, physical models have"
},
{
"start": 55.22,
"end": 62.92,
"text": " been used so far to make weather predictions. Which basically means that you simulate these"
},
{
"start": 62.92,
"end": 68.96,
"text": " rain clouds and the movement of them across the country. And you do a physical simulation"
},
{
"start": 68.96,
"end": 74.72,
"text": " like a particle simulation type of thing and then that allows you to predict and then you"
},
{
"start": 74.72,
"end": 80.6,
"text": " run that maybe multiple times and you get an idea of the kind of distribution you're"
},
{
"start": 80.6,
"end": 88.24,
"text": " going to get. Now what MetNet does is it simply uses a neural network to predict the outcome"
},
{
"start": 88.24,
"end": 95.83999999999999,
"text": " directly. So there's no physical simulation involved. There is just a neural network that"
},
{
"start": 95.83999999999999,
"end": 101.91999999999999,
"text": " takes as input what's the situation now and maybe over a stretch of time. And then you"
},
{
"start": 101.92,
"end": 111.4,
"text": " ask it please make a prediction in eight hours or something. And then the MetNet will make"
},
{
"start": 111.4,
"end": 119.24000000000001,
"text": " that prediction and it will just output it like snap. No physical simulation needed."
},
{
"start": 119.24000000000001,
"end": 124.8,
"text": " And you also see here that MetNet outputs things in kind of a cloud way, in a probabilistic"
},
{
"start": 124.8,
"end": 133.04,
"text": " way. In one forward pass you don't need to run it multiple times. But we'll get to that."
},
{
"start": 133.04,
"end": 142.68,
"text": " On the bottom here you see the measurement. So the axis is F1. F1 is kind of the overlap"
},
{
"start": 142.68,
"end": 151.44,
"text": " of how well you're able to predict the precipitation. And you see here the MetNet is above the HRR"
},
{
"start": 151.44,
"end": 161.6,
"text": " baseline for most of this time. Up to 480 minutes into the future. Which is eight hours"
},
{
"start": 161.6,
"end": 169.32,
"text": " I believe. All right. So the paper is the following. It's called MetNet, a Neural Weather"
},
{
"start": 169.32,
"end": 176.72,
"text": " Model for Precipitation Forecasting. And I'm not going to read all the names here. The"
},
{
"start": 176.72,
"end": 182.8,
"text": " main corresponding authors are Caspar K. Sonderby and Nalkalke Brenner. And it's a"
},
{
"start": 182.8,
"end": 194.28,
"text": " team of Google research. So specifically they use the input of these two things here. So"
},
{
"start": 194.28,
"end": 203.96,
"text": " one is this GOS16, which is what you see here on the left. And the precipitation rates are"
},
{
"start": 203.96,
"end": 213.4,
"text": " here depicted on the right. So you want to take these things as input into your model."
},
{
"start": 213.4,
"end": 218.64000000000001,
"text": " Now how do you do that? Of course we want to build a neural network. And this is the"
},
{
"start": 218.64000000000001,
"end": 226.52,
"text": " architecture they come up with. So on the bottom here they feed in the data. And they"
},
{
"start": 226.52,
"end": 232.44,
"text": " feed in the data in 15 minute interval from 90 minutes into the pass. So you have to imagine"
},
{
"start": 232.44,
"end": 238.8,
"text": " it like this. So there's a timeline. I'm going to use a little bit of a finer thing. So there's"
},
{
"start": 238.8,
"end": 246.52,
"text": " a timeline. And you let's say are here. This is now. And then here in the future, this"
},
{
"start": 246.52,
"end": 252.56,
"text": " is maybe one hour into the future. This is your target, right? This is you are here and"
},
{
"start": 252.56,
"end": 258.68,
"text": " you're looking out. You would like to know what's the precipitation going to be in one"
},
{
"start": 258.68,
"end": 268.88,
"text": " hour from now. What Metnet does is it takes an input. And specifically it takes the last"
},
{
"start": 268.88,
"end": 278.04,
"text": " 90 minutes before now as an input. And it samples it in frequencies of 15 minute intervals."
},
{
"start": 278.04,
"end": 287.4,
"text": " So each one of these is going to be 15 minutes. And each 15 minutes you get like a snapshot"
},
{
"start": 287.4,
"end": 296.4,
"text": " of this entire of the input region. Now the input region, if I can jump back here to the"
},
{
"start": 296.4,
"end": 304.47999999999996,
"text": " website for a second, they show it what the input region is. The input region, if you"
},
{
"start": 304.47999999999996,
"end": 310.12,
"text": " want to predict in the middle of this small square, the input region is actually the entire"
},
{
"start": 310.12,
"end": 318.32,
"text": " 1024 square kilometers around it. So it's very big input. Though the actual region you"
},
{
"start": 318.32,
"end": 327.2,
"text": " consider is the inside 64 square kilometers. But you take in information from the big region."
},
{
"start": 327.2,
"end": 335.28000000000003,
"text": " And the main point of the paper, I believe, is how to do that. All right, so each 15 minutes"
},
{
"start": 335.28,
"end": 340.15999999999997,
"text": " you take in a snapshot. And these are these snapshots here on the bottom. So these are,"
},
{
"start": 340.15999999999997,
"end": 345.91999999999996,
"text": " and you have to imagine in here, every 15 minutes there's a stack of these inputs. So"
},
{
"start": 345.91999999999996,
"end": 352.2,
"text": " what are these inputs? These inputs are some kind of features that you have. So there is"
},
{
"start": 352.2,
"end": 359.59999999999997,
"text": " the target time, which in this case would be this one hour here. There is the month,"
},
{
"start": 359.59999999999997,
"end": 365.03999999999996,
"text": " day and hour, which is important for weather prediction, right? So the time of year, time"
},
{
"start": 365.04,
"end": 371.66,
"text": " of day and so on. Longitude latitude is probably pretty important. Elevation map is probably"
},
{
"start": 371.66,
"end": 379.64000000000004,
"text": " pretty important. So these you can see, these are all maps. Now sometimes, and this is how"
},
{
"start": 379.64000000000004,
"end": 384.20000000000005,
"text": " you encode things in these. Since it's a neural network, you know, all of these things must"
},
{
"start": 384.20000000000005,
"end": 389.96000000000004,
"text": " be of the same dimensions here. So if you have 256 dimensions here and probably 256"
},
{
"start": 389.96,
"end": 396.84,
"text": " dimensions here, then all of these things must be of the same dimension. And if you"
},
{
"start": 396.84,
"end": 401.56,
"text": " want to give a feature such as the target time, which in this case, let's say it's one"
},
{
"start": 401.56,
"end": 408.35999999999996,
"text": " hour, you just put here one hour, what's one hour? Let's say 60 minutes. So you just put"
},
{
"start": 408.36,
"end": 421.48,
"text": " the number 60 here, 60, 60, 60, 60, 60, 256 times and 256 times 250, 65, sorry, 265 times,"
},
{
"start": 421.48,
"end": 429.8,
"text": " no, 56. I'm confusing with German. So this is how you encode features. It's pretty primitive,"
},
{
"start": 429.8,
"end": 435.48,
"text": " but it turns out it works the best if you do it this way. All right. So you have these"
},
{
"start": 435.48,
"end": 440.84000000000003,
"text": " planes and some, as I said, are just features such as the target time, month, day and hour"
},
{
"start": 440.84000000000003,
"end": 449.66,
"text": " and so on. Elevation, I guess is a map, is like an elevation map of the region you consider."
},
{
"start": 449.66,
"end": 458.88,
"text": " And this corresponds now to this, these 64 kilometers times 64 kilometers here. And that's"
},
{
"start": 458.88,
"end": 466.04,
"text": " exactly what these center crops are here. So this center crop thing, that now, this thing"
},
{
"start": 466.04,
"end": 476.96,
"text": " here, this plane, sorry, is these 64 by 64 region. That's this plane here. And also the,"
},
{
"start": 476.96,
"end": 484.96,
"text": " that's the precipitation and the GOES, that's this thing here. Now we also have these down"
},
{
"start": 484.96,
"end": 499.2,
"text": " sampled things, which these are the 1024 kilometers. So this here and this here, these are the"
},
{
"start": 499.2,
"end": 507.32,
"text": " 1024 square kilometer patches, but they are down sampled. So everything is down sampled,"
},
{
"start": 507.32,
"end": 516.24,
"text": " I guess, to 256 by 256 pixels. So you don't really take into account every nuance of that"
},
{
"start": 516.24,
"end": 522.04,
"text": " very big, of that very big input, but you do down sample it. So you kind of get the"
},
{
"start": 522.04,
"end": 527.88,
"text": " big picture of the outer frame and in the inner frame, you take it in a much higher"
},
{
"start": 527.88,
"end": 534.88,
"text": " resolution in order to get the details. All right. So you stack all of this up into a"
},
{
"start": 534.88,
"end": 542.16,
"text": " big tensor and then you feed it into here into a spatial down sampler, which I guess,"
},
{
"start": 542.16,
"end": 550.36,
"text": " no, I have read is a, some just a convolutional neural network, right? So this is your typical"
},
{
"start": 550.36,
"end": 557.08,
"text": " image processing pipeline. So you do this for each of these stacks, right? And then"
},
{
"start": 557.08,
"end": 566.24,
"text": " what you get out of it is a lower size representation right here. So you get these representations"
},
{
"start": 566.24,
"end": 571.72,
"text": " and then you let a temporal encoder run over it. What does a temporal encoder do? This"
},
{
"start": 571.72,
"end": 579.0400000000001,
"text": " in particular is a convolutional LSTM. And if you already know what an LSTM is, a convolutional"
},
{
"start": 579.0400000000001,
"end": 587,
"text": " LSTM is nothing more than an LSTM that has as intermediate layers, convolutional layers."
},
{
"start": 587,
"end": 593.64,
"text": " So it's pretty suited to do, for example, videos or any sort of image processing that"
},
{
"start": 593.64,
"end": 601.44,
"text": " goes over time like this one. So the temporal encoder simply starts out here with an initial"
},
{
"start": 601.44,
"end": 608.12,
"text": " state. My pens are screwing me today. So it starts out here with an initial state and"
},
{
"start": 608.12,
"end": 616.92,
"text": " then it simply inputs each of these representations, takes them one by one, runs across time, right?"
},
{
"start": 616.92,
"end": 625.04,
"text": " And each time producing a new intermediate representation of the input until it finally"
},
{
"start": 625.04,
"end": 634.1999999999999,
"text": " reaches this here, final representation. So this thing here is a single final representation"
},
{
"start": 634.1999999999999,
"end": 645.9599999999999,
"text": " of all of this input, right? Of this entire time span of all of these stacks here. Yeah,"
},
{
"start": 645.96,
"end": 652.5600000000001,
"text": " so you can press this into a single input with first a convolutional network to downsample"
},
{
"start": 652.5600000000001,
"end": 659.96,
"text": " each time point individually and then with a recurrent neural network, an LSTM, to integrate"
},
{
"start": 659.96,
"end": 666.2800000000001,
"text": " the information over time. You end up with this single piece here. And then what you"
},
{
"start": 666.2800000000001,
"end": 673.6,
"text": " do, so you still, here you still retain kind of an image sort of thing. So this representation"
},
{
"start": 673.6,
"end": 682,
"text": " here, you can see it in the background. Maybe I'll get down my scribbles here. This here"
},
{
"start": 682,
"end": 688.12,
"text": " is still sort of an image tensor, though I guess it's a hidden representation, so you"
},
{
"start": 688.12,
"end": 698.4,
"text": " couldn't really look at it. But it still has dimensions of images. So this here is still,"
},
{
"start": 698.4,
"end": 705.76,
"text": " I think, the same or corresponding to these dimensions here. So this still has some spatial"
},
{
"start": 705.76,
"end": 713.76,
"text": " information where this might be north-south here in this axis, might be east and west,"
},
{
"start": 713.76,
"end": 720.88,
"text": " right? And then these are just the hidden channels, the channels of the hidden representations,"
},
{
"start": 720.88,
"end": 733.72,
"text": " right? So what you would like to do now is to basically encode information from the space"
},
{
"start": 733.72,
"end": 742.56,
"text": " around you. If you look at, let's look at one of these, one of the big pictures. What"
},
{
"start": 742.56,
"end": 750,
"text": " you would like to do in weather prediction, let's say you are right here. What's a good"
},
{
"start": 750,
"end": 756.2,
"text": " example, you are right here, right? Now if you want to know if this particular cloud"
},
{
"start": 756.2,
"end": 765.4,
"text": " over here is going to move to your direction, what you want to know is, for example, is"
},
{
"start": 765.4,
"end": 770.32,
"text": " there a mountain range here, right? Because then it's more probable that this cloud is"
},
{
"start": 770.32,
"end": 778.62,
"text": " going maybe to move up there. You would also want to know how this cloud here moves, right?"
},
{
"start": 778.62,
"end": 786.76,
"text": " If this cloud here moves somewhere here around, then it's probably this cloud down here might"
},
{
"start": 786.76,
"end": 793.28,
"text": " be pulled with it or something like this. So you're very much, sorry, you're there. You're"
},
{
"start": 793.28,
"end": 802.36,
"text": " very much kind of want to look out into each of the directions here and you want to incorporate"
},
{
"start": 802.36,
"end": 809.16,
"text": " kind of what's happening across the space. We're already used to kind of convolutional"
},
{
"start": 809.16,
"end": 817.44,
"text": " networks being able to do this, but in here the authors use attention to do that. So if"
},
{
"start": 817.44,
"end": 823,
"text": " you don't know what attention is, my most popular video is about attention and you can"
},
{
"start": 823,
"end": 832.12,
"text": " do attention for images. So the way that works is that you have a series of images of stacked"
},
{
"start": 832.12,
"end": 838.32,
"text": " blocks of a neural network. Let me draw this here. So you have an image here and let's"
},
{
"start": 838.32,
"end": 845.16,
"text": " say it has just four pixels, right? So you have the next layer of these four pixels,"
},
{
"start": 845.16,
"end": 851.32,
"text": " right? So you have layers of this. So the next layers of the four pixels, they all emit"
},
{
"start": 851.32,
"end": 860.38,
"text": " what are called queries and queries are just vectors. So each pixel emits a single vector."
},
{
"start": 860.38,
"end": 868,
"text": " Let's say this, that, that, this, right? And each of the lower layers emits what is called"
},
{
"start": 868,
"end": 877.52,
"text": " a key. This, this, this, this. And now the keys and the queries are routed together based"
},
{
"start": 877.52,
"end": 881.14,
"text": " on their inner product. So these two would be routed together. This would probably be"
},
{
"start": 881.14,
"end": 888.16,
"text": " routed here. This as well. This would probably routed here. So what in effect each of the"
},
{
"start": 888.16,
"end": 896.76,
"text": " pixels of the higher layer can look at specific pixels of the lower layer. Now you can imagine"
},
{
"start": 896.76,
"end": 904.68,
"text": " this is exactly what we want here in that if there is a mountain range here and we might"
},
{
"start": 904.68,
"end": 911.64,
"text": " be interested in that. So we'd be able from our, from our point here to specifically attend"
},
{
"start": 911.64,
"end": 920.84,
"text": " to that location using, using attention, right? So the authors here build basically a stacked"
},
{
"start": 920.84,
"end": 927.48,
"text": " model of attention layers. And that's what's happening in the third part here. And this"
},
{
"start": 927.48,
"end": 935.76,
"text": " is the attention is in order to incorporate long range dependencies. As I made the example"
},
{
"start": 935.76,
"end": 940.64,
"text": " with the mountain range, this might be far away, but it might actually influence your"
},
{
"start": 940.64,
"end": 946.88,
"text": " weather very much. So the attention is to incorporate these long range dependencies."
},
{
"start": 946.88,
"end": 955.92,
"text": " But the problem with attention is, is as you saw in the example, each of these pixels can"
},
{
"start": 955.92,
"end": 963.3199999999999,
"text": " attend to each of the pixels in the lower layer. So what you'd end up with, so each"
},
{
"start": 963.3199999999999,
"end": 968.58,
"text": " can attend to that. This can attend to each. This can attend to each. You'll see you'll"
},
{
"start": 968.58,
"end": 973.88,
"text": " end up with 16 connections. Can't even draw them. So you end up with 16 connections. In"
},
{
"start": 973.88,
"end": 983.76,
"text": " general, if you have D here, you will end up with a D squared number of things you need"
},
{
"start": 983.76,
"end": 990.5200000000001,
"text": " to calculate, right? So if this here, and now of course we have images. So generally"
},
{
"start": 990.52,
"end": 1001.04,
"text": " we'll think of D by D pixels. Now we have D by D pixels and that thing squared number"
},
{
"start": 1001.04,
"end": 1008.72,
"text": " of things we need to calculate. This quickly gets too much. So in, for example, MNIST,"
},
{
"start": 1008.72,
"end": 1025.3600000000001,
"text": " you have 28 by 28 pixel images. This is 780 or 2 or something. I don't quite remember."
},
{
"start": 1025.3600000000001,
"end": 1034.8,
"text": " But you'll have to calculate this squared many connections between things. This is simply"
},
{
"start": 1034.8,
"end": 1040.2,
"text": " impossible pretty quickly, especially if you scale up the images and then have some channels"
},
{
"start": 1040.2,
"end": 1049.32,
"text": " in here as well. So attention for image processing has been a bit lagging compared to natural"
},
{
"start": 1049.32,
"end": 1056,
"text": " language processing. In natural language processing, you usually have maybe 500 tokens or something."
},
{
"start": 1056,
"end": 1059.8,
"text": " Images you have much more. So attention is much more expensive. So you can't really do"
},
{
"start": 1059.8,
"end": 1067.2,
"text": " it on current hardware. Now this paper uses something called axial attention. Axial attention"
},
{
"start": 1067.2,
"end": 1074.1599999999999,
"text": " is kind of the trick of how to make this tension happen for images. And for that I want to"
},
{
"start": 1074.1599999999999,
"end": 1080.48,
"text": " switch over to this paper. It's called Axial Attention in Multidimensional Transformers"
},
{
"start": 1080.48,
"end": 1088.56,
"text": " by some of the same authors. So Jonathan Ho and Nell Coutt Brenner, also of Google Brain"
},
{
"start": 1088.56,
"end": 1097.8,
"text": " and UC Berkeley, proposed this axial transformer. Now they originally proposed axial attention"
},
{
"start": 1097.8,
"end": 1105.6399999999999,
"text": " for autoregressive models. If you know transformers, they also started by making autoregressive"
},
{
"start": 1105.6399999999999,
"end": 1113.9199999999998,
"text": " models, so language modeling and so on. But we can decouple the axial attention from the"
},
{
"start": 1113.92,
"end": 1118.72,
"text": " autoregressivity of these models. So I'm not going to talk about autoregressive models,"
},
{
"start": 1118.72,
"end": 1126.6000000000001,
"text": " it's just axial attention. So what is axial attention? It's pretty simple actually. And"
},
{
"start": 1126.6000000000001,
"end": 1132.64,
"text": " I want to start by talking about convolutions. So what does a convolution do? Let's just"
},
{
"start": 1132.64,
"end": 1142.4,
"text": " take a one-dimensional image, which is pretty boring, but let's say it has these eight pixels"
},
{
"start": 1142.4,
"end": 1149.88,
"text": " here. So this is an image, it just has one row of eight pixels. What do I do when I run"
},
{
"start": 1149.88,
"end": 1157.0800000000002,
"text": " a convolutional filter across that? This is the lower layer, and now this is the next"
},
{
"start": 1157.0800000000002,
"end": 1165.24,
"text": " layer that is produced by a convolution. So for each of the pixels in the next layer,"
},
{
"start": 1165.24,
"end": 1173.16,
"text": " what I can do with the convolutional layer, I can look at its neighbors in the lower layer."
},
{
"start": 1173.16,
"end": 1180.04,
"text": " So these three would be part of that. And then I go on to this, and again I look at"
},
{
"start": 1180.04,
"end": 1187.64,
"text": " its neighbors at these three. I might have done this in a different color. And then I"
},
{
"start": 1187.64,
"end": 1197.5600000000002,
"text": " look at this, and it can look at itself and its neighbors. So a convolution is pretty"
},
{
"start": 1197.5600000000002,
"end": 1208.6000000000001,
"text": " smart. And then of course in the next layer, that repeats. Now if you think, what's the"
},
{
"start": 1208.6000000000001,
"end": 1213.4,
"text": " difference between doing this and a fully connected layer? So if I have a fully connected"
},
{
"start": 1213.4,
"end": 1223.8400000000001,
"text": " layer, a classic neural network, a fully connected layer, then this pixel here would incorporate"
},
{
"start": 1223.8400000000001,
"end": 1233.4,
"text": " information from all of the pixels here. And this pixel here would incorporate information"
},
{
"start": 1233.4,
"end": 1240.24,
"text": " from all the pixels. Now why might this be better? Because the information that I want"
},
{
"start": 1240.24,
"end": 1248.6,
"text": " here for this pixel might depend on this pixel over here. So I might benefit from a connection"
},
{
"start": 1248.6,
"end": 1256.08,
"text": " over there, or it might benefit from this pixel here, which it can't reach. And with"
},
{
"start": 1256.08,
"end": 1262.56,
"text": " a convolutional network, I can't do that. Why are then convolutional networks preferable?"
},
{
"start": 1262.56,
"end": 1269.4,
"text": " Because the convolutional network can do the same thing across multiple layers. So let's"
},
{
"start": 1269.4,
"end": 1279.3200000000002,
"text": " assume again that this pixel here needs information from this pixel right here. And as you can"
},
{
"start": 1279.3200000000002,
"end": 1291.3200000000002,
"text": " see in just one layer, it can only get information from those, right? But now take the next layer,"
},
{
"start": 1291.32,
"end": 1300.6399999999999,
"text": " so the same pixel here, it can attend to these three, right? Now these three can each in"
},
{
"start": 1300.6399999999999,
"end": 1309.04,
"text": " turn attend to their neighbors, right? And I'm not going to draw everything, but the"
},
{
"start": 1309.04,
"end": 1317.08,
"text": " resolution field for this pixel here will end up being all of this, right? Now we still"
},
{
"start": 1317.08,
"end": 1327.8,
"text": " don't have our desired pixel in here, but if we just go one layer more, then this pixel"
},
{
"start": 1327.8,
"end": 1336.72,
"text": " right here, a different color, this pixel right here, right? The resolution field across"
},
{
"start": 1336.72,
"end": 1347.04,
"text": " the layers increases, because it's always incorporating information from downstream,"
},
{
"start": 1347.04,
"end": 1351.96,
"text": " and the downstream again incorporates information from the downstream, so eventually you can"
},
{
"start": 1351.96,
"end": 1357.24,
"text": " aggregate the same information. So instead of having a single layer with all of these"
},
{
"start": 1357.24,
"end": 1362.64,
"text": " connections, we have convolutional layers, which seem like a worse idea, because they"
},
{
"start": 1362.64,
"end": 1370.0600000000002,
"text": " can only do less things, attend to less things, but across the layers they actually can do"
},
{
"start": 1370.0600000000002,
"end": 1378,
"text": " the same thing, right? And that turns out to be a huge advantage of these convolutional"
},
{
"start": 1378,
"end": 1383.1200000000001,
"text": " layers, and that's why convolutional layers are used for image processing, and not the"
},
{
"start": 1383.1200000000001,
"end": 1391.18,
"text": " multi-layer perceptrons. So the same exact thing happens with axial attention, just in"
},
{
"start": 1391.18,
"end": 1398.04,
"text": " a different form. It is a bit poorly drawn here, I believe, but this is how you have"
},
{
"start": 1398.04,
"end": 1412,
"text": " to imagine it. As before, this pixel, the red pixel here, if I just have a normal transformer"
},
{
"start": 1412,
"end": 1420.2,
"text": " layer, the red pixel can attend to all of the other pixels in the image, right? That's"
},
{
"start": 1420.2,
"end": 1425.76,
"text": " the, that's basically, and each of the pixels can do that, so that's your d squared computation"
},
{
"start": 1425.76,
"end": 1433,
"text": " right here. Now, what we want to do is, in a convolutional layer, what we would do is,"
},
{
"start": 1433,
"end": 1437.88,
"text": " okay, you can only attend to your neighbors, and then in the next layer the neighbors can"
},
{
"start": 1437.88,
"end": 1443.8400000000001,
"text": " attend to their neighbors, and thereby you go out and out. In axial attention, you say,"
},
{
"start": 1443.84,
"end": 1455.1999999999998,
"text": " okay, this thing can only attend to its row and its column, right? That's it. You can"
},
{
"start": 1455.1999999999998,
"end": 1460.9199999999998,
"text": " only do attention to your row and your column, and I believe they don't even do it at the"
},
{
"start": 1460.9199999999998,
"end": 1466.1599999999999,
"text": " same time. So in one layer you can attend to the row you're in, and in the other you"
},
{
"start": 1466.1599999999999,
"end": 1472.8799999999999,
"text": " can attend to the column you're in. Now, let's see how the same thing happens as for a convolutional"
},
{
"start": 1472.88,
"end": 1479.96,
"text": " layer. So in the, basically, how then, if the red pixel needs access to information"
},
{
"start": 1479.96,
"end": 1486.4,
"text": " in this green pixel, how does it do that? So in the first layer it can attend to its"
},
{
"start": 1486.4,
"end": 1499.2800000000002,
"text": " row and its column, right? And so can every other pixel, including, sorry, including,"
},
{
"start": 1499.28,
"end": 1509.56,
"text": " of course, the pixel where that, so let's say this square here can also attend to its"
},
{
"start": 1509.56,
"end": 1517.6,
"text": " row and its column, and its row happens to be including the green one, right? So in layer"
},
{
"start": 1517.6,
"end": 1531.6799999999998,
"text": " one, this red square here gets information from the green square via row attention, right?"
},
{
"start": 1531.6799999999998,
"end": 1543.3799999999999,
"text": " And then in layer two now, this, our red square of interest now can row attend to this other"
},
{
"start": 1543.38,
"end": 1555.0600000000002,
"text": " red square here, so they get connected in layer two. I'm sorry, I don't want that. So"
},
{
"start": 1555.0600000000002,
"end": 1561.1200000000001,
"text": " you see that within just two layers we've transferred information from the green square"
},
{
"start": 1561.1200000000001,
"end": 1569.2600000000002,
"text": " via this red square to that red square. So we can, in the same way as a convolution,"
},
{
"start": 1569.26,
"end": 1578.72,
"text": " you can replace the long-range arbitrary dependencies between pixels by simply having multiple layers"
},
{
"start": 1578.72,
"end": 1587.84,
"text": " of restricted dependence. The same goes for this axial attention. So you can replace the"
},
{
"start": 1587.84,
"end": 1598.56,
"text": " arbitrary attention in layers, right? You can replace that by a two-step process where"
},
{
"start": 1598.56,
"end": 1608.08,
"text": " you first transfer information via the column and then transfer it via the row. It's a bit"
},
{
"start": 1608.08,
"end": 1617.12,
"text": " like, you know, in chess you can have a queen that can move any direction, especially diagonally,"
},
{
"start": 1617.12,
"end": 1623.12,
"text": " and then if you just have a rook you kind of need to do two moves. So in the queen is"
},
{
"start": 1623.12,
"end": 1630.7199999999998,
"text": " like the full attention and the rook is the multi-layer axial attention. They can achieve"
},
{
"start": 1630.7199999999998,
"end": 1639.7199999999998,
"text": " the same thing, you just need more layers. But as a trade-off you get a super, super"
},
{
"start": 1639.7199999999998,
"end": 1647.4799999999998,
"text": " saving in requirement of memory and computation, right? So they stress that, you know, kind"
},
{
"start": 1647.48,
"end": 1653.56,
"text": " of you can represent the same distributions with the axial attention. And you know, the"
},
{
"start": 1653.56,
"end": 1660.3600000000001,
"text": " trade-off is you just have to do multiple layers of it. Right, so this is axial attention"
},
{
"start": 1660.3600000000001,
"end": 1667.28,
"text": " and they are now able to incorporate this into their model right here. So they have,"
},
{
"start": 1667.28,
"end": 1674.4,
"text": " I believe, eight blocks, so four row attention, you see this right here, and four column attention"
},
{
"start": 1674.4,
"end": 1685.68,
"text": " blocks in their model. And finally they output this distribution here across their region"
},
{
"start": 1685.68,
"end": 1695.68,
"text": " of interest. Now this again is your, I believe, this 64 by 64 resolution. So you can see how"
},
{
"start": 1695.68,
"end": 1703.3200000000002,
"text": " they kind of aggregated information across the 64 using this axial attention. And then"
},
{
"start": 1703.32,
"end": 1712.28,
"text": " that makes their prediction in this one hour. So this is this. Alright, so this was a long"
},
{
"start": 1712.28,
"end": 1719.6399999999999,
"text": " way. So recap, they have 15-minute snapshots of this input data across along with some"
},
{
"start": 1719.6399999999999,
"end": 1726.8,
"text": " features. They use a spatial down sampler, which is a CNN, on each of them individually."
},
{
"start": 1726.8,
"end": 1735.24,
"text": " Then they use a convolutional LSTM to encode this across time to end up with a single representation"
},
{
"start": 1735.24,
"end": 1743.8,
"text": " here at the end. Then they use axial attention in order to aggregate information across the"
},
{
"start": 1743.8,
"end": 1750.08,
"text": " spatial dimensions. They do this in multiple stages and at the end they make a participation"
},
{
"start": 1750.08,
"end": 1760.24,
"text": " prediction, which is a distribution, as you can see here. So as an output you directly"
},
{
"start": 1760.24,
"end": 1766.6,
"text": " get a distribution of results, which is also cool because the physical simulation, you"
},
{
"start": 1766.6,
"end": 1772.04,
"text": " have to let it run many, many times in order to get a distribution of results. And this"
},
{
"start": 1772.04,
"end": 1780.04,
"text": " neural network can simply give you a distribution right away. That's what they say right here."
},
{
"start": 1780.04,
"end": 1787.92,
"text": " So they go a bit into the architecture compared to baseline. I want to get back to what I"
},
{
"start": 1787.92,
"end": 1792.8799999999999,
"text": " showed you at the beginning. This here is just the picture, kind of the picture book"
},
{
"start": 1792.8799999999999,
"end": 1798.6,
"text": " example. So left is the ground truth, in the middle is MatNet, and on the right is a baseline"
},
{
"start": 1798.6,
"end": 1810.24,
"text": " method. This here is in, as you can see, in two hours, in four, six and eight. So you"
},
{
"start": 1810.24,
"end": 1815.04,
"text": " can see the MatNet gives you as an output this distribution. What I find interesting,"
},
{
"start": 1815.04,
"end": 1823.3999999999999,
"text": " for example, is this sample two right here. So in this sample one you can see there is"
},
{
"start": 1823.3999999999999,
"end": 1828.56,
"text": " a consistent difference and this is the forecast time, so how much in advance you want to get"
},
{
"start": 1828.56,
"end": 1834.6399999999999,
"text": " it? No, this would be a one hour, but it can go up to eight hours. Here is a consistent"
},
{
"start": 1834.6399999999999,
"end": 1844.32,
"text": " gap in F1, which means the MatNet does it better across this span of time, which is"
},
{
"start": 1844.32,
"end": 1851.28,
"text": " for the top sample right here. For the bottom sample though, you can see here, there is"
},
{
"start": 1851.28,
"end": 1857.6799999999998,
"text": " a big gap at the beginning, again, there is a big gap at the beginning, and then this"
},
{
"start": 1857.68,
"end": 1865.1200000000001,
"text": " gap gets smaller and smaller and smaller. And this, I think, might give you an indication"
},
{
"start": 1865.1200000000001,
"end": 1870.6000000000001,
"text": " of, let's say, the weakness of this approach, doing it with neural networks. So with neural"
},
{
"start": 1870.6000000000001,
"end": 1878.88,
"text": " networks you kind of rely on regularities, you kind of rely on broad scale, correct things"
},
{
"start": 1878.88,
"end": 1885.8400000000001,
"text": " that you can learn from the data, and this might work well as long as things are regular,"
},
{
"start": 1885.84,
"end": 1892.28,
"text": " which of course across shorter time spans things tend to be more regular, right? But"
},
{
"start": 1892.28,
"end": 1898.36,
"text": " if you go for longer time spans, I believe there is more of a chaos element to it, like"
},
{
"start": 1898.36,
"end": 1906.12,
"text": " weather can be very dependent on very subtle things, and the physics simulation that is"
},
{
"start": 1906.12,
"end": 1912,
"text": " really taking into account the actual physics might be able to much, much better account"
},
{
"start": 1912,
"end": 1920.56,
"text": " for that. And that's why I believe across time here you'll see that the two models get"
},
{
"start": 1920.56,
"end": 1927.96,
"text": " closer together. That being said, MetNet of course is still on top here. But it will be"
},
{
"start": 1927.96,
"end": 1939.04,
"text": " interesting to forecast for longer even, though I haven't actually dig through their results,"
},
{
"start": 1939.04,
"end": 1945.76,
"text": " through their numerical results, but you can do that if you want. Alright, so this was"
},
{
"start": 1945.76,
"end": 1969.68,
"text": " it for MetNet and axial attention. I hope you liked this, and bye bye."
}
] |
wAgO2WZzjn4 | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | [Rant] coronavirus | [
"Science & Technology"
] | [
"corona",
"covid",
"covid19",
"lockdown",
"social distancing"
] | A rant about toilet paper and lockdowns.
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | This video is going to be a rant. There is not really a script and I have not really thought this through. But I would like to talk about some things that are on my mind and that I don't see discussed very often with respect to coronavirus. I'm not a medical expert. I don't play a doctor on the internet. And there absolutely is no need to follow any of my advice or take anything as advice that I say. I just want to talk and maybe someone else will have a good idea of what I talk. So it is a crazy world we live in. I would have never thought at the beginning of this year that this would be the year where everyone stays at their house and works from home. I've always thought that in a time like this, when the economy is going down, basically the thing of value would be something like Bitcoin or Ethereum or alternative things. But no, everything is going down and the actual new currency of choice is toilet paper. Everyone is going to grab the toilet paper. What a crazy world where the most trusted news source is someone like Tucker Carlson. Yeah, didn't see that one coming. Thanks, Tucker, for saving us. So I don't know what to make of this. And I do know that this is a serious situation. And you should definitely do everything you can to take care of yourself and to take care of your community. What I want to talk about is the question of what is it going to do long term? So if we think about this, we often think about this right now. We have an exponential increase in number of cases. You've probably seen this and you've probably seen graphics like these where the goal is to flatten the curve. The sense behind this being that if this rises exponentially, of course, at some point it will affect the entire population. So it's going to flatten out. And if you look at the number of new cases daily, it might be some curve like this. The problem is that we only have a finite capacity of health care systems. So all of these people are basically going to be screwed once we get to this point. Now, the goal is to flatten the curve, that we can take some measures to keep this curve under or at the capacity of our health care system. These measures are varying wildly. So it is these measures that I want to talk about a bit. Now, these measures range from something like social distancing, where you basically say, all right, no big events, no groups of large people, social distancing. And just kind of avoid contact with other people. Now, of course, all the CS departments of the world go like, well, this is business as usual. Like, yay, we've practiced for this our entire lives. So it is mildly inconvenient. But we can keep it up all the way to lockdown. Lockdown comes also in various forms. But the most drastic sense is stay home or you'll get shot or locked up or something like this. And it is this discrepancy. Of course, the more down on the curve you go, the more you're going to theoretically flatten this out. The more the less you do, the higher your peak is going to be. But it's not that easy, I find. If you look at the cases here, of course, they're exponentially rising. But if you look at where the outbreak started in China, the orange curve here, you actually see the number of cases flattening out. Now, you see it flattening out at something like 100 K. And last I know China has more people than 100 K. So that means not everyone's infected. 
Now, with a disease that infects this easily and spreads this easily from person to person, as appears to be the case, there are two possibilities. Either the rest of China, and China is over a billion people while this is 100 K, so the entire rest of China, basically almost all of China, is asymptomatic, and the latest numbers I hear are that maybe 50 percent of cases are asymptomatic. Or the other possibility is that most of China has yet to be infected. Now, with a virus like this, if you look at the distribution, it has basically arrived everywhere in the world. So there is very, very little hope of snuffing this thing out, of actually making it stop, for which you'd have to lock every single person down for two to three weeks. And then a single person that doesn't keep to that can start a new outbreak. So what I fully expect to happen, if these numbers are correct and if China has actually done this successfully, so flattened this curve successfully, is this. Let's say the green thing here is China. Okay, they get to a point where they feel they have no new cases for a while, so they lift the restrictions, right, they remove the restrictions. There's going to be some person somewhere in some CS department that now goes outside and meets another person. And in that particular person, the virus happens to have an incubation period of 21 days instead of 14. And they're going to transmit that to two, three, four, five people. After these measures, everyone's going to be longing for social contacts and large groups. And we might gradually loosen the restrictions, but still a new outbreak is inevitable, it seems. So what you'll have again is a spike. And then a country might enact measures again and so on. But I believe the world we're going to live in, if we really lock down people, if we really enforce these measures, is a world of multiple repeated seasonal peaks of this disease. And that means we are in for the long term. I don't ever want to say that we shouldn't do that, because it of course effectively reduces the number of deaths, which should be our ultimate goal here. But just know that flattening the curve once, like these graphics here, is a bit misleading, I believe. We need to be thinking about a long-term plan here. And since we're going long term, and with long term I mean months, I mean multiple years, with long term the problem here is the people. And I want to elaborate on that. So the largest problem is the people. People aren't just machines that you can command around. People are individuals. They have their own ideas. They have their own goals that they want to fulfill, right? At some point, you want to go on a vacation. This is an island with a tree. So let's talk about lockdown. Lockdown, it appears to be a thing that is necessary in some places if you ask some people. Again, I don't want to give advice on this. I just want to give some thoughts. So what do you get with lockdown? With lockdown, you get OMG, it's happening, and so on. That's day one. Day three, you get funny YouTube videos. Everyone that is in lockdown will be like, oh, I'm stuck at home. It's so boring. Already forgetting that other people have major issues with being locked down. A lot of people sitting on top of each other is going to create a lot of problems. And eventually, more and more people are going to long for this to end. And I'm not saying that, you know, the response to a virus should be fun. 
But what I'm saying is that people are going to break this. It is inevitable. First some are going to break, then more. You have a very delicate balance going on here. Right now, there is a lot of support. A lot of people are on the side of locking things down, of a lockdown. A lot of people are conscientious, staying home, avoiding social contact as much as possible. But some are going to be the first ones to go over there. Some are going to break. Some are going to find excuses not to keep to it. And the problem is, the harder the measures are, the harder you are down here, the stronger the pull is going to be for people to go to this other side. And I guarantee you, the people on social media that are shaming others the most, that are yelling the loudest for others not to break the lockdown, either they have an extremely comfortable living situation at their own homes, which is an extreme privilege, or they are the worst ones at breaking it themselves, finding every excuse they can why they are exempt from it. And people are going to see this. More and more people are going to be over here, and with more people over here, look, they have the sunshine, they are out and about, they are doing their things more like normal. The people over here, they are going to see this. And more and more people will be like, hey, why am I keeping to this? Why am I not over there? Why can these people do that? And they will go. And at some point, the scale is going to tip, and any lockdown, barring martial law and the threat of being shot if you go outside, will be ineffective. And at that point, wherever you are, the cases are going to spike. And it will be even worse than when you did nothing, or as bad. So I believe that it is a very delicate balance that you have to strike here. Total lockdown, people aren't going to take this for a long time, and you need to think about a long time here. I don't know what the answer is. I don't know where exactly on the scale from just keep apart, to stay home whatever it takes, the right point is. I just think that measures that are too harsh can also be counterproductive. I'm very fortunate to live in Switzerland. Most of our neighbors have instituted total lockdowns, and the Swiss government has recently decided not to do so at this time, with, I believe, much of the same reasoning as I'm just laying out. We need to think about this long term, and people are not going to keep to a lockdown long term, and it will be worse if they don't. Now, I believe the best response to something like this is a distributed one. I believe the best response is to go to people in their networks. People usually care about the people around them, enough so that they will take responsibility into their own hands. I believe you should give the people the responsibility, as much responsibility as you can, and I believe the network of people, each one arranging themselves in the most pro-social way, can be the best response, better than anything a government could do. Governments can do things such as prohibit large gatherings. Sometimes, if you don't do that, even the individual people can't do anything against that. But to actually believe in your citizens, and believe in the fundamental goodness of humans, and the fundamental care for other humans, is a strong suit here. On the other hand, you see other governments. 
I have read that a city in Norway is thinking about employing a monitoring system, where they track everyone's phone, and if more than a certain number of people are in the same place, they will basically send everyone a text message saying you should disperse. While this is an effective measure, and I believe it can definitely help, it is something that you need to be very careful about. As we saw with 9-11, as soon as governments get power, they rarely let it go, as Edward Snowden finally demonstrated. If you enact something like this, you must definitely make sure that there is a time limit on it. Any government measure right now, be that spending to help the economy, which is certainly a good thing, or be it measures to increase social distancing, to prohibit public gatherings: support this, but it must be time limited. Otherwise, governments aren't going to let this go. Finally, I would like to come to a more global scale of long-term thinking: countries and other countries. As you go on, you need to think about your economy. Our economies were growing at a fairly good pace until this hit, and now they're plunging. At any point, there are going to be opportunists. There are going to be personal opportunists, hoarding toilet paper and hand sanitizer, and trying to sell them for marked-up prices. There are going to be country opportunists. When everything's falling down, if you're the country that locks things down now, your economy is going to fall. Eventually, though, you'll have to get back. Countries that get back sooner will be in an upswing sooner. Basically, the question is, where is the ideal point here? To leave the... To not react anymore, to let people do their thing, to get back on track. I don't know where that is, but I believe you're going to see a Cold War-like situation in the world where countries are going to accuse other countries of not doing enough or doing too much, of not playing fairly, of helping to spread the virus. And I believe that will be the case for years to come. Because what happens over the long term? Of course, right now, you can afford to not fix that pipe under your house that's broken. You can afford to not get the person to clean the chimney. You can afford to not get dental work done. I don't even know how to draw a tooth. Let's say this is a tooth. It probably has some peaks here. Over the long term, though, all of these things are going to break. And we need to get back to normal. And the longer a state keeps up these measures, the worse it's going to get. Finally, we need to talk about people at risk. People at risk tend to be older, tend to be ones with health issues. Think about this. If you're an old person with health issues, you're looking at the long term. Once you realize this is not going to be over in a few weeks, what do you do? You're old. And the next year or so in lockdown mode is going to be hard for you. And for everyone. But a year, if you're that old and sick, is probably more quality life than what you have left after it. So you need to be thinking: either I'm going to survive this because I bunker in my house and don't get the virus, but what is it worth, because my other diseases will get me afterwards. Or, I could be spending the quality time I have with my family, with my children, with my grandchildren. I could be spending it with my friends. And if I die, I die. It is not an easy question, but I'm absolutely sure there are people right now who are asking themselves this. 
If you're a government and you're thinking about mandatory lockdowns, I do see that this is in order to save people, in order to not have people walking around that spread the virus to vulnerable populations. But you need to be thinking about the people you're trying to help. Some of them would actually be on this side. I don't know what the best response is to everything here. I think we're just going to see and I don't want to give advice. This is just some of the things I think. I wish everyone the absolute healthiest season they can have right now. Take care. Please think about others. Please do not make the problem worse yourself. You're part of a network and you can be a powerful force for good during this time. Think about long-term, if you're asking your government to do things, think about what's the best situation and how we are going to get there. Thanks and stay healthy. | [
{
"start": 0,
"end": 7,
"text": " This video is going to be a rant. There is not really a script and I have not really thought this through."
},
{
"start": 7,
"end": 18,
"text": " But I would like to talk about some things that are on my mind and that I don't see discussed very often with respect to coronavirus."
},
{
"start": 18,
"end": 22,
"text": " I'm not a medical expert. I don't play a doctor on the internet."
},
{
"start": 22,
"end": 30,
"text": " And there absolutely is no need to follow any of my advice or take anything as advice that I say."
},
{
"start": 30,
"end": 37,
"text": " I just want to talk and maybe someone else will have a good idea of what I talk."
},
{
"start": 37,
"end": 41,
"text": " So it is a crazy world we live in."
},
{
"start": 41,
"end": 50,
"text": " I would have never thought at the beginning of this year that this would be the year where everyone stays at their house and works from home."
},
{
"start": 50,
"end": 65,
"text": " I've always thought that in a time like this, when the economy is going down, basically the thing of value would be something like Bitcoin or Ethereum or alternative things."
},
{
"start": 65,
"end": 71,
"text": " But no, everything is going down and the actual new currency of choice is toilet paper."
},
{
"start": 71,
"end": 74,
"text": " Everyone is going to grab the toilet paper."
},
{
"start": 74,
"end": 84,
"text": " What a crazy world where the most trusted news source is someone like Tucker Carlson."
},
{
"start": 84,
"end": 88,
"text": " Yeah, didn't see that one coming."
},
{
"start": 88,
"end": 91,
"text": " Thanks, Tucker, for saving us."
},
{
"start": 91,
"end": 95,
"text": " So I don't know what to make of this."
},
{
"start": 95,
"end": 99,
"text": " And I do know that this is a serious situation."
},
{
"start": 99,
"end": 108,
"text": " And you should definitely do everything you can to take care of yourself and to take care of your community."
},
{
"start": 108,
"end": 116,
"text": " What I want to talk about is the question of what is it going to do long term?"
},
{
"start": 116,
"end": 122,
"text": " So if we think about this, we often think about this right now."
},
{
"start": 122,
"end": 125,
"text": " We have an exponential increase in number of cases."
},
{
"start": 125,
"end": 134,
"text": " You've probably seen this and you've probably seen graphics like these where the goal is to flatten the curve."
},
{
"start": 134,
"end": 145,
"text": " The sense behind this being that if this rises exponentially, of course, at some point it will affect the entire population."
},
{
"start": 145,
"end": 147,
"text": " So it's going to flatten out."
},
{
"start": 147,
"end": 152,
"text": " And if you look at the number of new cases daily, it might be some curve like this."
},
{
"start": 152,
"end": 157,
"text": " The problem is that we only have a finite capacity of health care systems."
},
{
"start": 157,
"end": 162,
"text": " So all of these people are basically going to be screwed once we get to this point."
},
{
"start": 162,
"end": 172,
"text": " Now, the goal is to flatten the curve, that we can take some measures to keep this curve under or at the capacity of our health care system."
},
{
"start": 172,
"end": 175,
"text": " These measures are varying wildly."
},
{
"start": 175,
"end": 179,
"text": " So it is these measures that I want to talk about a bit."
},
{
"start": 179,
"end": 196,
"text": " Now, these measures range from something like social distancing, where you basically say, all right, no big events, no groups of large people, social distancing."
},
{
"start": 196,
"end": 202,
"text": " And just kind of avoid contact with other people."
},
{
"start": 202,
"end": 209,
"text": " Now, of course, all the CS departments of the world go like, well, this is business as usual."
},
{
"start": 209,
"end": 214,
"text": " Like, yay, we've practiced for this our entire lives."
},
{
"start": 214,
"end": 217,
"text": " So it is mildly inconvenient."
},
{
"start": 217,
"end": 223,
"text": " But we can keep it up all the way to lockdown."
},
{
"start": 223,
"end": 226,
"text": " Lockdown comes also in various forms."
},
{
"start": 226,
"end": 233,
"text": " But the most drastic sense is stay home or you'll get shot or locked up or something like this."
},
{
"start": 233,
"end": 235,
"text": " And it is this discrepancy."
},
{
"start": 235,
"end": 243,
"text": " Of course, the more down on the curve you go, the more you're going to theoretically flatten this out."
},
{
"start": 243,
"end": 248,
"text": " The more the less you do, the higher your peak is going to be."
},
{
"start": 248,
"end": 252,
"text": " But it's not that easy, I find."
},
{
"start": 252,
"end": 257,
"text": " If you look at the cases here, of course, they're exponentially rising."
},
{
"start": 257,
"end": 267,
"text": " But if you look at where the outbreak started in China, the orange curve here, you actually see the number of cases flattening out."
},
{
"start": 267,
"end": 271,
"text": " Now, you see it flattening out at something like 100 K."
},
{
"start": 271,
"end": 276,
"text": " And last I know China has more people than 100 K."
},
{
"start": 276,
"end": 279,
"text": " So that means not everyone's infected."
},
{
"start": 279,
"end": 288,
"text": " Now, with a disease that infects this easily and spreads this easily from person to person, as it appears to be the case, there are two possibilities."
},
{
"start": 288,
"end": 296,
"text": " Either the rest of China, which China is over a billion people, and this is 100 K."
},
{
"start": 296,
"end": 308,
"text": " So the entire rest of China, basically almost all of China is asymptomatic, which the latest numbers I hear are that maybe 50 percent of cases are asymptomatic."
},
{
"start": 308,
"end": 316,
"text": " Or the other possibility is that most of China has yet to be infected."
},
{
"start": 316,
"end": 322,
"text": " Now, with a virus like this, if you look at the distribution, it's basically arrived everywhere in the world."
},
{
"start": 322,
"end": 335,
"text": " So there is very, very little hope of snuffing this thing out, actually making it stop, which what you'd have to do is you'd have to lock every single person down for two to three weeks."
},
{
"start": 335,
"end": 341,
"text": " And now only a single person that doesn't keep to that can start a new outbreak."
},
{
"start": 341,
"end": 356,
"text": " So what I fully expect to happen if these numbers are correct and if China actually has done this successfully, so flattened this curve successfully, is that let's say the green thing here is China, is that, okay,"
},
{
"start": 356,
"end": 365,
"text": " they get to a point where they feel they have no new cases for a while, so they let the restriction up, right, they remove the restriction."
},
{
"start": 365,
"end": 374,
"text": " There's going to be some person somewhere in some CS department that now goes outside and meets another person."
},
{
"start": 374,
"end": 382,
"text": " And in that particular person here, the virus happens to have an incubation period of 21 days instead of 14."
},
{
"start": 382,
"end": 388,
"text": " And they're going to transmit that to two, three, four, five people."
},
{
"start": 388,
"end": 393,
"text": " After these measures, everyone's going to be longing for social contacts and large groups."
},
{
"start": 393,
"end": 400,
"text": " And we might gradually loosen the restrictions, but still a new outbreak is inevitable, it seems."
},
{
"start": 400,
"end": 403,
"text": " So what you'll have again is a spike."
},
{
"start": 403,
"end": 407,
"text": " And then a country might enact measures again and so on."
},
{
"start": 407,
"end": 417,
"text": " But I believe the world we're going to live in, if we really lock down people, if we really enforce these measures,"
},
{
"start": 417,
"end": 423,
"text": " is a world of multiple repeated seasonal peaks of this disease."
},
{
"start": 423,
"end": 427,
"text": " And that means we are in for the long term."
},
{
"start": 427,
"end": 439,
"text": " I don't want to say ever that we shouldn't do that, because it of course effectively reduces the number of deaths, which should be our ultimate goal here."
},
{
"start": 439,
"end": 449,
"text": " But just know that flattening the curve once, like these graphics here, is a bit misleading, I believe."
},
{
"start": 449,
"end": 452,
"text": " We need to be thinking about a long term plan here."
},
{
"start": 452,
"end": 459,
"text": " And since we're going long term, and with long term, I mean months, I mean multiple years,"
},
{
"start": 459,
"end": 464,
"text": " with long term, the problem here is the people."
},
{
"start": 464,
"end": 467,
"text": " And I want to elaborate on that."
},
{
"start": 467,
"end": 471,
"text": " So the largest problem are the people."
},
{
"start": 471,
"end": 475,
"text": " People aren't just machines that you can command around."
},
{
"start": 475,
"end": 478,
"text": " People are individuals. They have their own ideas."
},
{
"start": 478,
"end": 482,
"text": " They have their own goals that they want to fulfill, right?"
},
{
"start": 482,
"end": 485,
"text": " At some point, you want to go on a vacation."
},
{
"start": 485,
"end": 490,
"text": " This is an island with a tree."
},
{
"start": 490,
"end": 492,
"text": " So let's talk about lockdown."
},
{
"start": 492,
"end": 501,
"text": " Lockdown, it appears to be a thing that is necessary in some parts if you ask some people."
},
{
"start": 501,
"end": 503,
"text": " Again, I don't want to give advice on this."
},
{
"start": 503,
"end": 507,
"text": " I just want to give some thoughts."
},
{
"start": 507,
"end": 509,
"text": " So what do you get with lockdown?"
},
{
"start": 509,
"end": 514,
"text": " With lockdown, you get OMG, it's happening, and so on."
},
{
"start": 514,
"end": 516,
"text": " That's day one."
},
{
"start": 516,
"end": 520,
"text": " Day three, you get funny YouTube videos."
},
{
"start": 520,
"end": 527,
"text": " Everyone that is in lockdown will be like, oh, I'm stuck at home."
},
{
"start": 527,
"end": 529,
"text": " It's so boring."
},
{
"start": 529,
"end": 534,
"text": " Already forgetting that other people have major issues with being locked down."
},
{
"start": 534,
"end": 540,
"text": " A lot of people sitting on top of each other is going to create a lot of problems."
},
{
"start": 540,
"end": 546,
"text": " And eventually, more and more people are going to long for this to end."
},
{
"start": 546,
"end": 553,
"text": " And I'm not saying that, you know, that response to a virus should be fun."
},
{
"start": 553,
"end": 557,
"text": " But what I'm saying is that people are going to break this."
},
{
"start": 557,
"end": 558,
"text": " It is inevitable."
},
{
"start": 558,
"end": 560,
"text": " First some are going to break, then more."
},
{
"start": 560,
"end": 564,
"text": " You have a very delicate balance going on here."
},
{
"start": 564,
"end": 566,
"text": " Right now, there is a lot of support."
},
{
"start": 566,
"end": 571,
"text": " A lot of people are on the side of locking things down in a lockdown."
},
{
"start": 571,
"end": 577,
"text": " A lot of people are conscientious, staying home, avoiding social contact as much as possible."
},
{
"start": 577,
"end": 581,
"text": " But some are going to be the first ones to go over there."
},
{
"start": 581,
"end": 583,
"text": " Some are going to break."
},
{
"start": 583,
"end": 588,
"text": " Some are going to find excuses not to keep to it."
},
{
"start": 588,
"end": 593,
"text": " And the problem is, the harder the measures are, the harder you are down here."
},
{
"start": 593,
"end": 597,
"text": " The stronger the pull is going to be for people to go on this other side."
},
{
"start": 597,
"end": 603,
"text": " And I guarantee you, the people on social media that are shaming others the most,"
},
{
"start": 603,
"end": 609,
"text": " that are yelling out the loudest for others to not break the lockdown,"
},
{
"start": 609,
"end": 613,
"text": " either they have an extremely comfortable living at their own homes,"
},
{
"start": 613,
"end": 615,
"text": " which is an extreme privilege,"
},
{
"start": 615,
"end": 620,
"text": " or they are the worst ones to break it themselves,"
},
{
"start": 620,
"end": 624,
"text": " to find every excuse they can, why they are exempt from it."
},
{
"start": 624,
"end": 626,
"text": " And people are going to see this."
},
{
"start": 626,
"end": 630,
"text": " More and more people are going to be over here, and with more people over here."
},
{
"start": 630,
"end": 632,
"text": " Look, they have the sunshine."
},
{
"start": 632,
"end": 634,
"text": " They are out and about."
},
{
"start": 634,
"end": 637,
"text": " They are doing their things more like normal."
},
{
"start": 637,
"end": 641,
"text": " The people over here, they are going to see this."
},
{
"start": 641,
"end": 646,
"text": " And more and more people will be, hey, why am I keeping to this?"
},
{
"start": 646,
"end": 648,
"text": " Why am I not over there?"
},
{
"start": 648,
"end": 650,
"text": " Why can these people do that?"
},
{
"start": 650,
"end": 651,
"text": " And they will go."
},
{
"start": 651,
"end": 654,
"text": " And at some point, the scale is going to tip,"
},
{
"start": 654,
"end": 660,
"text": " and any lockdown, barring martial law and the threat of being shot,"
},
{
"start": 660,
"end": 663,
"text": " if you go outside, will be ineffective."
},
{
"start": 663,
"end": 668,
"text": " And at that point, wherever you are, the cases are going to spike."
},
{
"start": 668,
"end": 673,
"text": " And it will be even worse than when you did nothing, or as bad."
},
{
"start": 673,
"end": 678,
"text": " So I believe that it is a very delicate balance that you have to strike here."
},
{
"start": 678,
"end": 682,
"text": " Total lockdown, people aren't going to take this for a long time,"
},
{
"start": 682,
"end": 685,
"text": " and you need to think about a long time here."
},
{
"start": 685,
"end": 687,
"text": " I don't know what the answer is."
},
{
"start": 687,
"end": 693,
"text": " I don't know where exactly the scale of just keep apart,"
},
{
"start": 693,
"end": 698,
"text": " to stay home, whatever it takes, is."
},
{
"start": 698,
"end": 704,
"text": " I just think that two harsh measures can also be counterproductive."
},
{
"start": 704,
"end": 708,
"text": " I'm very fortunate to live in Switzerland."
},
{
"start": 708,
"end": 711,
"text": " Most of our neighbors have instituted total lockdowns,"
},
{
"start": 711,
"end": 716,
"text": " and the Swiss government has recently decided not to do so at this time,"
},
{
"start": 716,
"end": 721,
"text": " with, I believe, much of the same reasoning as I'm just laying out."
},
{
"start": 721,
"end": 723,
"text": " We need to think about this long term,"
},
{
"start": 723,
"end": 726,
"text": " and people are not going to keep to a lockdown long term,"
},
{
"start": 726,
"end": 730,
"text": " and it will be worse if they don't."
},
{
"start": 730,
"end": 734,
"text": " Now, I believe the best response to something like this is a distributed one."
},
{
"start": 734,
"end": 738,
"text": " I believe the best response is to go to people in their networks."
},
{
"start": 738,
"end": 741,
"text": " People usually care about the people around them,"
},
{
"start": 741,
"end": 746,
"text": " enough so that they will take responsibility into the hand."
},
{
"start": 746,
"end": 751,
"text": " I believe you should give the people the responsibility,"
},
{
"start": 751,
"end": 754,
"text": " as much responsibility as you can,"
},
{
"start": 754,
"end": 756,
"text": " and I believe the network of people,"
},
{
"start": 756,
"end": 760,
"text": " each one arranging themselves in the most pro-social way,"
},
{
"start": 760,
"end": 765,
"text": " can be the best response, better than any government could do."
},
{
"start": 765,
"end": 770,
"text": " Governments can do things such as prohibit large gatherings."
},
{
"start": 770,
"end": 773,
"text": " Sometimes, if you don't do that,"
},
{
"start": 773,
"end": 777,
"text": " even the individual people can't do anything against that."
},
{
"start": 777,
"end": 781,
"text": " But to actually believe in your citizens,"
},
{
"start": 781,
"end": 784,
"text": " and believe in the fundamental goodness of humans,"
},
{
"start": 784,
"end": 788,
"text": " and the fundamental care for other humans,"
},
{
"start": 788,
"end": 791,
"text": " is a strong suit here."
},
{
"start": 791,
"end": 795,
"text": " On the other hand, you see other governments."
},
{
"start": 795,
"end": 799,
"text": " I have read that a city in Norway"
},
{
"start": 799,
"end": 804,
"text": " is thinking about employing a monitoring system,"
},
{
"start": 804,
"end": 807,
"text": " where they track everyone's phone,"
},
{
"start": 807,
"end": 811,
"text": " and if more than a certain amount of people are in the same place,"
},
{
"start": 811,
"end": 815,
"text": " they will basically send everyone a text message,"
},
{
"start": 815,
"end": 818,
"text": " saying you should disperse."
},
{
"start": 818,
"end": 820,
"text": " While this is an effective measure,"
},
{
"start": 820,
"end": 823,
"text": " and I believe can definitely help,"
},
{
"start": 823,
"end": 827,
"text": " and it is something that you need to be very careful about."
},
{
"start": 827,
"end": 831,
"text": " As we saw with 9-11, as soon as governments get power,"
},
{
"start": 831,
"end": 836,
"text": " they rarely let it go, as Edward Snowden finally demonstrated."
},
{
"start": 836,
"end": 838,
"text": " If you enact something like this,"
},
{
"start": 838,
"end": 842,
"text": " you must definitely make sure that there is a time limit on it."
},
{
"start": 842,
"end": 844,
"text": " Any government measure right now,"
},
{
"start": 844,
"end": 848,
"text": " be that spending to help the economy, which is certainly a good thing,"
},
{
"start": 848,
"end": 852,
"text": " be this measures to increase social distancing,"
},
{
"start": 852,
"end": 855,
"text": " to prohibit public gatherings."
},
{
"start": 855,
"end": 859,
"text": " Support this, but it must be time limited."
},
{
"start": 859,
"end": 863,
"text": " Otherwise, governments aren't going to let this go."
},
{
"start": 863,
"end": 867,
"text": " Finally, I would like to come to a more global scale"
},
{
"start": 867,
"end": 872,
"text": " of long-term thinking, countries and other countries."
},
{
"start": 872,
"end": 878,
"text": " As you go on, you need to think about your economy."
},
{
"start": 878,
"end": 883,
"text": " Our economies were growing at a fairly good pace until this hit,"
},
{
"start": 883,
"end": 885,
"text": " and now they're plunging."
},
{
"start": 885,
"end": 887,
"text": " At any point, they're going to be opportunists."
},
{
"start": 887,
"end": 889,
"text": " They're going to be personal opportunists,"
},
{
"start": 889,
"end": 891,
"text": " hoarding toilet paper and hand sanitizer,"
},
{
"start": 891,
"end": 895,
"text": " and trying to sell them for marked-up prices."
},
{
"start": 895,
"end": 898,
"text": " They're going to be country opportunists."
},
{
"start": 898,
"end": 901,
"text": " When everything's falling down,"
},
{
"start": 901,
"end": 905,
"text": " if you're the country that locks things down now,"
},
{
"start": 905,
"end": 907,
"text": " your economy is going to fall."
},
{
"start": 907,
"end": 910,
"text": " Eventually, though, you'll have to get back."
},
{
"start": 910,
"end": 914,
"text": " Countries that get back sooner will be in an upswing sooner."
},
{
"start": 914,
"end": 919,
"text": " Basically, the question is, where is the ideal point here?"
},
{
"start": 919,
"end": 921,
"text": " To leave the..."
},
{
"start": 921,
"end": 925,
"text": " To not react anymore, to let people do their thing,"
},
{
"start": 925,
"end": 927,
"text": " to get back on track."
},
{
"start": 927,
"end": 929,
"text": " I don't know where that is,"
},
{
"start": 929,
"end": 935,
"text": " but I believe you're going to see a Cold War-like situation in the world"
},
{
"start": 935,
"end": 938,
"text": " where countries are going to accuse other countries"
},
{
"start": 938,
"end": 940,
"text": " of not doing enough or doing too much,"
},
{
"start": 940,
"end": 944,
"text": " of not playing fairly, of helping to spread the virus."
},
{
"start": 944,
"end": 949,
"text": " And I believe that will be the case for the years to come."
},
{
"start": 949,
"end": 951,
"text": " Because what happens over the long time?"
},
{
"start": 951,
"end": 955,
"text": " Of course, right now, you can afford to not fix that pipe"
},
{
"start": 955,
"end": 958,
"text": " under your house that's broken."
},
{
"start": 958,
"end": 961,
"text": " You can afford to not clean the..."
},
{
"start": 961,
"end": 964,
"text": " To not get the person to clean the chimney."
},
{
"start": 964,
"end": 967,
"text": " You can afford to not get dental work done."
},
{
"start": 967,
"end": 970,
"text": " I don't even know how to draw a tooth."
},
{
"start": 970,
"end": 973,
"text": " Let's say this is a tooth."
},
{
"start": 973,
"end": 976,
"text": " It probably has some peaks here."
},
{
"start": 976,
"end": 980,
"text": " Over the long term, though, all of these things are going to break."
},
{
"start": 980,
"end": 982,
"text": " And we need to get back to normal."
},
{
"start": 982,
"end": 987,
"text": " And the longer a state keeps up these measures,"
},
{
"start": 987,
"end": 991,
"text": " the worse it's going to get."
},
{
"start": 991,
"end": 996,
"text": " Finally, we need to talk about risk people."
},
{
"start": 996,
"end": 1002,
"text": " People at risk tend to be older, tend to be ones with health issues."
},
{
"start": 1002,
"end": 1003,
"text": " Think about this."
},
{
"start": 1003,
"end": 1008,
"text": " If you're an old person having health issues,"
},
{
"start": 1008,
"end": 1010,
"text": " you're looking at long term."
},
{
"start": 1010,
"end": 1014,
"text": " Once you realize this is not going to be over in a few weeks,"
},
{
"start": 1014,
"end": 1015,
"text": " what do you do?"
},
{
"start": 1015,
"end": 1016,
"text": " You're old."
},
{
"start": 1016,
"end": 1021,
"text": " And the next year or so in lockdown mode"
},
{
"start": 1021,
"end": 1024,
"text": " is going to be hard for you."
},
{
"start": 1024,
"end": 1025,
"text": " And for everyone."
},
{
"start": 1025,
"end": 1029,
"text": " But a year, if you're that old and sick,"
},
{
"start": 1029,
"end": 1036,
"text": " is probably more quality life you have left than after it."
},
{
"start": 1036,
"end": 1038,
"text": " So you need to be thinking either,"
},
{
"start": 1038,
"end": 1043,
"text": " I'm going to survive this because I bunker in my house,"
},
{
"start": 1043,
"end": 1045,
"text": " don't get the virus."
},
{
"start": 1045,
"end": 1046,
"text": " But what is it worth?"
},
{
"start": 1046,
"end": 1050,
"text": " Because my other diseases will get me afterwards."
},
{
"start": 1050,
"end": 1053,
"text": " Otherwise, I could be spending the quality time I have"
},
{
"start": 1053,
"end": 1056,
"text": " with my family, with my children, with my grandchildren."
},
{
"start": 1056,
"end": 1059,
"text": " I could be spending it with my friends."
},
{
"start": 1059,
"end": 1061,
"text": " And if I die, I die."
},
{
"start": 1061,
"end": 1063,
"text": " It is not an easy question,"
},
{
"start": 1063,
"end": 1067,
"text": " but I'm absolutely sure there are people right now"
},
{
"start": 1067,
"end": 1069,
"text": " who are asking themselves this."
},
{
"start": 1069,
"end": 1073,
"text": " If you're a government and you're thinking about mandatory lockdowns,"
},
{
"start": 1073,
"end": 1078,
"text": " I do see that this is in order to save people,"
},
{
"start": 1078,
"end": 1084,
"text": " in order to not have people walking around that spread the virus"
},
{
"start": 1084,
"end": 1087,
"text": " to vulnerable populations."
},
{
"start": 1087,
"end": 1090,
"text": " But you need to be thinking about the people you're trying to help."
},
{
"start": 1090,
"end": 1098,
"text": " Some of them would actually be on this side."
},
{
"start": 1098,
"end": 1104,
"text": " I don't know what the best response is to everything here."
},
{
"start": 1104,
"end": 1108,
"text": " I think we're just going to see and I don't want to give advice."
},
{
"start": 1108,
"end": 1112,
"text": " This is just some of the things I think."
},
{
"start": 1112,
"end": 1119,
"text": " I wish everyone the absolute healthiest season they can have right now."
},
{
"start": 1119,
"end": 1120,
"text": " Take care."
},
{
"start": 1120,
"end": 1122,
"text": " Please think about others."
},
{
"start": 1122,
"end": 1125,
"text": " Please do not make the problem worse yourself."
},
{
"start": 1125,
"end": 1132,
"text": " You're part of a network and you can be a powerful force for good during this time."
},
{
"start": 1132,
"end": 1139,
"text": " Think about long-term, if you're asking your government to do things,"
},
{
"start": 1139,
"end": 1144,
"text": " think about what's the best situation and how we are going to get there."
},
{
"start": 1144,
"end": 1166,
"text": " Thanks and stay healthy."
}
] |
H3Bhlan0mE0 | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Online Education - How I Make My Videos | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"online video",
"university",
"online",
"create",
"lecture"
] | Just a short overview of tools I use to make my videos.
OneNote - https://www.onenote.com
iSpring Free Cam - https://www.ispringsolutions.com/ispring-cam
Shotcut - https://shotcut.org
Slack - https://slack.com
RocketChat - https://rocket.chat
Zoom - https://zoom.us
Jitsi - https://jitsi.org
GDocs - https://www.google.com/docs/about
Piazza - https://piazza.com
CMT - https://cmt3.research.microsoft.com/About
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there! So a lot of people have been asking me how I make these videos. And this is of course relevant now that everyone's work from home and all the schools are converted into online schools. All of a sudden a lot of people have to make these online education happen. And I think this style of video lends itself to online education. So I'll quickly go over the process of how to do this and maybe also how to run a university class online. Alright, so the process is pretty simple of how I make my videos. This might not work for everyone, but it works for me. I use the Microsoft OneNote in order to scribble on papers basically. So the thing is, in OneNote you have this insert thing here and you can print out a PDF onto your notebook here. So the way this looks then is you'll get the PDF in your notebook and you can scribble on it with this using this draw tab here. You can choose a pen and scribble on it. You can highlight things and so on. And I do this while I record the screen, so that's pretty much all there is to it. You can then print out again this notebook and you can distribute those annotated PDF if you want. Now I'm pretty sure this is here inserted as some sort of an image. So I don't know about the copy paste ability of the resulting product. But here you see this is a paper I actually made a video about. And that's basically all there is to it. It's OneNote. It's a free program from Microsoft. In order to do the annotating I use a last generation Microsoft Surface tablet that I got for cheap. At some point it comes with a nice pen and touch screen so you can basically zoom around and zip around while you do these things. In order to record the screen I use this iSpring FreeCam software. It might not be the best but it does work for me well and they have a cool Pro edition if you need more features. But it works really well for recording your screen. You can record parts of your screen or the full screen. You can record with sound. So I use a microphone and then I just record the sound from that with the same tool. And at the end you get a video file that you can upload to YouTube. Easy as that. If I need to do some editing, which is rarely because I am lazy, I use either iMovie from Apple which comes with an Apple operating system. So I have a MacBook that I run iMovie on. iMovie is really easy to edit movies on. I don't know if there's anything on Windows where it's that easy that comes pre-packaged. But if I need to do more complicated things I use Shotcut which is an open source editor. I believe that's available for all the platforms. You can do fairly complicated things with Shotcut if you ever need to do that. But if I just need to stitch like two or three things together I use iMovie. And that's pretty much it for making and recording videos, I believe. One note is that in order to do a class online not all people will just be able to record a video and then upload. Some of the things you need to do are actually live. A lot of people right now use Zoom for live teleconferencing. But you can also do this sort of presenter mode where you present and people can do questions. Of course you can do this via YouTube streaming as well. But then it's of course it's kind of public on YouTube or link accessible with Zoom. I believe you have more control. But of course Zoom is a proprietary solution and with the free account you can only get so far. So they limit your meetings in length if you have more than I believe three or four people. 
An alternative is Jitsi which is open source video conferencing. And the cool thing here is you can actually run your own server such that you can truly have control over everything. In order to communicate with lots of people, of course people use Slack. But again Slack is a proprietary service and an alternative to that would be Rocket Chat. Again where you can run your own server and it is fairly similar to Slack. In order to collaborate or share just general notes, of course Google's suite of docs and sheets and so on is excellent. And for classes especially, Piazza is a good place. You can sign up as a class. You can have TAs sign up as TAs. You can have your students sign up as students and then the students can ask questions and then other students or the TAs can answer those questions. Basically a bit of a forum. But you can also announce things there for your classes. It's pretty cool and it's really geared towards online classes and it's free. So I know a lot of universities are using that right now. So if you're looking for some sort of announcement or discussion board for your class, Piazza is definitely a good place to be. And lastly we sometimes have classes where students have to submit projects. And we actually use CMT for this because it's really neat where you can set deadlines and everything. Students can upload and then you can assign reviewers to those projects, which in our case are us, the TAs. And you know you can have meta reviews and so on. So CMT is actually very good. Maybe a bit of an overkill if you just run a single class. But it has lots and lots of features. And of course the big conferences also use CMT. So it's definitely stress tested. Alright, so that was it for my videos. Or at least how I make them. I just print out the PDF, sit down for half an hour and rant about it. And that's pretty much it. And then you throw it on YouTube or distribute the file however you want. And with that I hope I answered a little bit of these questions. And I all wish you a healthy rest of the Corona season. Bye. | [
{
"start": 0,
"end": 5,
"text": " Hi there! So a lot of people have been asking me how I make these videos."
},
{
"start": 5,
"end": 13,
"text": " And this is of course relevant now that everyone's work from home and all the schools are converted into online schools."
},
{
"start": 13,
"end": 19,
"text": " All of a sudden a lot of people have to make these online education happen."
},
{
"start": 19,
"end": 24,
"text": " And I think this style of video lends itself to online education."
},
{
"start": 24,
"end": 31,
"text": " So I'll quickly go over the process of how to do this and maybe also how to run a university class online."
},
{
"start": 31,
"end": 35,
"text": " Alright, so the process is pretty simple of how I make my videos."
},
{
"start": 35,
"end": 38,
"text": " This might not work for everyone, but it works for me."
},
{
"start": 38,
"end": 44,
"text": " I use the Microsoft OneNote in order to scribble on papers basically."
},
{
"start": 44,
"end": 56,
"text": " So the thing is, in OneNote you have this insert thing here and you can print out a PDF onto your notebook here."
},
{
"start": 56,
"end": 67,
"text": " So the way this looks then is you'll get the PDF in your notebook and you can scribble on it with this using this draw tab here."
},
{
"start": 67,
"end": 72,
"text": " You can choose a pen and scribble on it. You can highlight things and so on."
},
{
"start": 72,
"end": 77,
"text": " And I do this while I record the screen, so that's pretty much all there is to it."
},
{
"start": 77,
"end": 86,
"text": " You can then print out again this notebook and you can distribute those annotated PDF if you want."
},
{
"start": 86,
"end": 93,
"text": " Now I'm pretty sure this is here inserted as some sort of an image."
},
{
"start": 93,
"end": 97,
"text": " So I don't know about the copy paste ability of the resulting product."
},
{
"start": 97,
"end": 102,
"text": " But here you see this is a paper I actually made a video about."
},
{
"start": 102,
"end": 108,
"text": " And that's basically all there is to it. It's OneNote. It's a free program from Microsoft."
},
{
"start": 108,
"end": 119,
"text": " In order to do the annotating I use a last generation Microsoft Surface tablet that I got for cheap."
},
{
"start": 119,
"end": 128,
"text": " At some point it comes with a nice pen and touch screen so you can basically zoom around and zip around while you do these things."
},
{
"start": 128,
"end": 136,
"text": " In order to record the screen I use this iSpring FreeCam software."
},
{
"start": 136,
"end": 144,
"text": " It might not be the best but it does work for me well and they have a cool Pro edition if you need more features."
},
{
"start": 144,
"end": 151,
"text": " But it works really well for recording your screen. You can record parts of your screen or the full screen."
},
{
"start": 151,
"end": 159,
"text": " You can record with sound. So I use a microphone and then I just record the sound from that with the same tool."
},
{
"start": 159,
"end": 164,
"text": " And at the end you get a video file that you can upload to YouTube. Easy as that."
},
{
"start": 164,
"end": 178,
"text": " If I need to do some editing, which is rarely because I am lazy, I use either iMovie from Apple which comes with an Apple operating system."
},
{
"start": 178,
"end": 189,
"text": " So I have a MacBook that I run iMovie on. iMovie is really easy to edit movies on."
},
{
"start": 189,
"end": 194,
"text": " I don't know if there's anything on Windows where it's that easy that comes pre-packaged."
},
{
"start": 194,
"end": 201,
"text": " But if I need to do more complicated things I use Shotcut which is an open source editor."
},
{
"start": 201,
"end": 205,
"text": " I believe that's available for all the platforms."
},
{
"start": 205,
"end": 211,
"text": " You can do fairly complicated things with Shotcut if you ever need to do that."
},
{
"start": 211,
"end": 217,
"text": " But if I just need to stitch like two or three things together I use iMovie."
},
{
"start": 217,
"end": 226,
"text": " And that's pretty much it for making and recording videos, I believe."
},
{
"start": 226,
"end": 240,
"text": " One note is that in order to do a class online not all people will just be able to record a video and then upload."
},
{
"start": 240,
"end": 244,
"text": " Some of the things you need to do are actually live."
},
{
"start": 244,
"end": 249,
"text": " A lot of people right now use Zoom for live teleconferencing."
},
{
"start": 249,
"end": 255,
"text": " But you can also do this sort of presenter mode where you present and people can do questions."
},
{
"start": 255,
"end": 259,
"text": " Of course you can do this via YouTube streaming as well."
},
{
"start": 259,
"end": 266,
"text": " But then it's of course it's kind of public on YouTube or link accessible with Zoom."
},
{
"start": 266,
"end": 269,
"text": " I believe you have more control."
},
{
"start": 269,
"end": 276,
"text": " But of course Zoom is a proprietary solution and with the free account you can only get so far."
},
{
"start": 276,
"end": 281,
"text": " So they limit your meetings in length if you have more than I believe three or four people."
},
{
"start": 281,
"end": 287,
"text": " An alternative is Jitsi which is open source video conferencing."
},
{
"start": 287,
"end": 296,
"text": " And the cool thing here is you can actually run your own server such that you can truly have control over everything."
},
{
"start": 296,
"end": 303,
"text": " In order to communicate with lots of people, of course people use Slack."
},
{
"start": 303,
"end": 310,
"text": " But again Slack is a proprietary service and an alternative to that would be Rocket Chat."
},
{
"start": 310,
"end": 318,
"text": " Again where you can run your own server and it is fairly similar to Slack."
},
{
"start": 318,
"end": 331,
"text": " In order to collaborate or share just general notes, of course Google's suite of docs and sheets and so on is excellent."
},
{
"start": 331,
"end": 337,
"text": " And for classes especially, Piazza is a good place."
},
{
"start": 337,
"end": 342,
"text": " You can sign up as a class. You can have TAs sign up as TAs."
},
{
"start": 342,
"end": 351,
"text": " You can have your students sign up as students and then the students can ask questions and then other students or the TAs can answer those questions."
},
{
"start": 351,
"end": 356,
"text": " Basically a bit of a forum. But you can also announce things there for your classes."
},
{
"start": 356,
"end": 361,
"text": " It's pretty cool and it's really geared towards online classes and it's free."
},
{
"start": 361,
"end": 366,
"text": " So I know a lot of universities are using that right now."
},
{
"start": 366,
"end": 375,
"text": " So if you're looking for some sort of announcement or discussion board for your class, Piazza is definitely a good place to be."
},
{
"start": 375,
"end": 382,
"text": " And lastly we sometimes have classes where students have to submit projects."
},
{
"start": 382,
"end": 389,
"text": " And we actually use CMT for this because it's really neat where you can set deadlines and everything."
},
{
"start": 389,
"end": 396,
"text": " Students can upload and then you can assign reviewers to those projects, which in our case are us, the TAs."
},
{
"start": 396,
"end": 400,
"text": " And you know you can have meta reviews and so on."
},
{
"start": 400,
"end": 408,
"text": " So CMT is actually very good. Maybe a bit of an overkill if you just run a single class."
},
{
"start": 408,
"end": 412,
"text": " But it has lots and lots of features."
},
{
"start": 412,
"end": 416,
"text": " And of course the big conferences also use CMT."
},
{
"start": 416,
"end": 419,
"text": " So it's definitely stress tested."
},
{
"start": 419,
"end": 423,
"text": " Alright, so that was it for my videos."
},
{
"start": 423,
"end": 430,
"text": " Or at least how I make them. I just print out the PDF, sit down for half an hour and rant about it."
},
{
"start": 430,
"end": 435,
"text": " And that's pretty much it. And then you throw it on YouTube or distribute the file however you want."
},
{
"start": 435,
"end": 442,
"text": " And with that I hope I answered a little bit of these questions."
},
{
"start": 442,
"end": 451,
"text": " And I all wish you a healthy rest of the Corona season. Bye."
}
] |
p3sAF3gVMMA | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Deep Learning for Symbolic Mathematics | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"nlp",
"natural language processing",
"machine translation",
"arxiv",
"attention mechanism",
"attention",
"transformer",
"rnn",
"recurrent",
"seq2seq",
"facebook",
"fair",
"research",
"math",
"integral",
"ode"
] | This model solves integrals and ODEs by doing seq2seq!
https://arxiv.org/abs/1912.01412
https://ai.facebook.com/blog/using-neural-networks-to-solve-advanced-mathematics-equations/
Abstract:
Neural networks have a reputation for being better at solving statistical or approximate problems than at performing calculations or working with symbolic data. In this paper, we show that they can be surprisingly good at more elaborated tasks in mathematics, such as symbolic integration and solving differential equations. We propose a syntax for representing mathematical problems, and methods for generating large datasets that can be used to train sequence-to-sequence models. We achieve results that outperform commercial Computer Algebra Systems such as Matlab or Mathematica.
Authors: Guillaume Lample, François Charton
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there! Can you solve this? Neither can I. But Wolfram Alpha can. So this is the thing that probably I have most to thank for for passing university, especially the math classes in it. If you don't know Wolfram Alpha, it is an engine from the creators of Mathematica, but it is online. It can do symbolic math. So it can integrate this expression, for example, and you'll get the solution here. And if you have the pro version, it can even give you a step-by-step solution of how to get there. So this part of math is an entirely different part than we usually do with computers. Usually we do numeric math, working with actual values. But here it's about symbolic math. It's about manipulating expression, in this case integrating them. So here is a paper by Facebook AI Research called Deep Learning for Symbolic Mathematics by Guillaume Lampe and François Gertot. These people have basically tackled the task of doing these mathematical reasoning, solving these mathematical problems in a symbolic way using neural networks. So they start out by saying here, neural networks have a reputation for being better at solving statistical or proximate problems. That's what I meant by numeric, then at performing calculations or working with symbolic data. And in this case, they go about this other than other people have. So let's look at how they did it. We can express symbolic mathematics in these kind of trees. So an expression like these up here would be expressed into this tree. So you would have a plus, this 2 plus 3. Sorry, of course there's an implicit bracket here. So you'd have this plus right here, the 2 here and the entire right hand side here. So you can basically decompose it into trees like this or this or this. Here you also can have the differentiation operator as a symbol in there, just like any other operator. Moreover, you can basically decompose everything into everything they have here, into binary and unary nodes in a tree. What that means is either like a plus sign, it has two components, so a left and a right hand side that should be added together. Or like the cosine, it has one argument, namely the thing that it should take the cosine of. So a lot of people have tried going about this problem by working with these trees and basically training neural networks to... So first they use kind of a parser to decompose such a thing into a tree like this. And then use neural networks, let's say tree recursive neural networks or so on, to kind of make sense of the tree and solve it in a recursive manner or something like this. But that has its limitations. So what these people from Facebook AI did is they viewed it as a natural language expression problem. So they say, no, no, let's actually go with trees as sequences. So you can see that this mathematical expression, for example, is already a sequence. It's simply a sequence of tokens. But there are many different ways of expressing this. So you can say 2 plus 3 times the parentheses, you can say 3 times parentheses plus 2. You can turn many things around and there's always these parentheses make it harder and so on. So what they do is they say, OK, let's actually go from this thing to a tree. So let's go to the tree representation and then let's take the tree representation because the tree representation can be normalized. And then let's put that again into a sequence representation such as this one. And this is called reverse polish notation. And it has multiple advantages over the old expression. 
So let's keep that on the right hand side here. This is the same thing, except it's what is called a prefix notation, whereas the thing on the right here is called an infix notation. Infix because the operator, such as the plus, is always between its arguments. So its left hand argument and its right hand argument. In prefix notation, the operator is always in front of its arguments. So this operator here has this as a first argument and this as a second argument. Right, and now the cool thing is, if you express a tree like this, you can simply go and use a stack machine to solve it. So you can basically go from the right here, and you see you select two and five plus. And let's do it by hand. Actually, this is fun. So we have plus two times three. If you're a boomer like me, you remember you had to use calculators like this and couldn't use the infix notation. So you go from the right, right? You say two, five plus. Cool. That's seven. So scratch that. Put seven here, right? So your new stack is three, two times. This, right. Then you go again from the right and you go seven, three times. OK, that's twenty one. Cool. Twenty one. Scratch this. Now it's twenty one. Two plus twenty one is twenty three. I'm fairly sure that's the solution. Well, correct me if I'm wrong. But this is how you would go about solving it like this. So it is the same expression as the original one, but it doesn't use any parentheses. And it is derived from the tree, basically. So you can normalize it much more in order to find unique expressions. So what this system does is it transforms any expression into a prefix notation such as this one. Oops. And then it uses a sequence to sequence model in order to derive the solution. Now, just how crazy is this? Right. So we go from this thing here, right? From this thing. And the solution is twenty one. Right. And the neural network is simply trained to do sequence to sequence from this to that, sequence to sequence. That means it basically parses this at a token level. Right. And then it outputs these tokens. So during training, you simply give it the input here and you give it the output. And it's supposed to learn how to transform one into the other without you giving it any sort of mathematical ability. Right. Without you telling it what a plus sign means, without you telling it this algorithm that I just told you. Now, this by itself is already pretty astounding that you would try such a thing. It really transforms the string. So this is not the mathematical equation, but the string of this into the string of that. Now, they don't do it on numbers. Like, I don't think that would work as well if you were to make it kind of calculate numerical things like this. As we said, this is symbolic. So what it can do is it can, for example, integrate. So if you have an expression like. Let's see some on the bottom here. So if you had an expression such as a polynomial. Here, an expression like this. Right. You would like to find its integral. That is a problem. That's one of the problems we had at the beginning. Right. This integral right here. You can write this in a string like we said. And then derive its solution right here. And have the neural network learn to map one to the other, right, to map this to that. So the way it goes is it would map this into its tree representation. It would map this into its prefix notation. Right. It would also map this to. 
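A quick aside before the walkthrough continues: the right-to-left stack evaluation done by hand above can be written down in a few lines. This is again just an illustrative sketch for the toy numeric example; the actual system never evaluates anything numerically, it only manipulates the symbols.

```python
# Evaluate a prefix token sequence with a stack, scanning from the right:
# numbers are pushed, an operator pops its two arguments and pushes the result.
def eval_prefix(tokens):
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    stack = []
    for tok in reversed(tokens):
        if tok in ops:
            a = stack.pop()   # first (left) argument; order matters for non-commutative ops
            b = stack.pop()   # second (right) argument
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_prefix(["+", "2", "*", "3", "+", "5", "2"]))  # 23.0, i.e. 2 + 3 * (5 + 2)
```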
Let's take another color here. This into its tree. Then it will map this into its prefix notation. And then that's the training data. The training data is take this, derive that. Right. And at inference time, of course, you won't have this here. You'll simply be asked to output a sequence, as in normal natural language processing. Like you can think of machine translation. This thing translates problems into solutions. It's crazy. I mean, it's not technically super challenging, but it's crazy that it works or that it could work. Right. So we'll see how this actually works. They use a transformer model, which is just a classic model. If you don't know what a transformer is, I have a video called Attention is All You Need about transformers. You can basically use them to do these kinds of tasks, to map one string into another string. Yeah, so they go into detail here of how they construct the data set and how big the problem space is and so on. Ultimately, they compare their system to Mathematica, I think, and Maple and Matlab, which do the same thing. So Mathematica, which is the kind of desktop version of Wolfram Alpha that I've shown you, you have it here. So integration is the task of integrating, let's say, these symbolic expressions. ODE order one and order two are slightly different tasks where you're asked to solve an ordinary differential equation, which is also a task in symbolic mathematics. If you compare it to Mathematica here, and they give Mathematica a limit of 30 seconds, what Mathematica will do is it will kind of search the manipulations that it knows. So the advantage of this is it can always give you, let's say, a step by step solution if it finds a solution. Right. It will just start and it will do a tree search, manipulating the expression you give it until it reaches a satisfactory solution. But then once it has that, it can give you a path through the tree, which leads to the solution, which will give you a step by step solution. So you can understand it. The system that Facebook designs here doesn't do that. It simply takes the input tokens, like this is the input, and it just gives you an output that is learned, so the network per se doesn't understand math. It simply learns from many, many examples how to transform them, to come up with good hypotheses. So if you compare here, Mathematica, for example, it can integrate 84 percent of the things that they put into it. It's not said whether it gets it wrong or simply times out in the rest. I would say it times out because probably Mathematica never gets it wrong because it's an actual symbolic manipulator with defined rules. So I guess for the remaining 16 percent, it simply times out, doesn't find a solution. Whereas this Facebook system, and they say it usually finds the solution in less than a second, finds these solutions 98.4 percent of the time with a beam size of one. Now, what does the beam size mean? It concerns the way you generate the output at decoding time. So if you have a sequence of input, you can always choose to do a beam search. So when you have a sequence of input, let's actually give an example, a cat jumps. The task is simply to continue the sentence, right, so you can generate an output sequence. The output sequence could be over the dog. What you can do is just generate this one output; this would be called beam size one, or no beam search at all. 
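To make the "it is just translation" point concrete, here is a heavily simplified sketch of what such a sequence-to-sequence setup could look like. This is my own illustration under stated assumptions, not the paper's code: a tiny made-up vocabulary, a single toy training pair, no positional encodings, and PyTorch's built-in nn.Transformer; the paper's actual tokenization, model size and training setup differ. Beam search, which is explained next, is then just a different way of decoding from such a model.

```python
# Hypothetical sketch: integration as sequence-to-sequence "translation" over
# prefix-notation tokens. Toy vocabulary and data; not the authors' setup.
import torch
import torch.nn as nn

VOCAB = ["<pad>", "<s>", "</s>", "+", "*", "pow", "x", "2", "3"]
stoi = {t: i for i, t in enumerate(VOCAB)}

def encode(tokens):
    return torch.tensor([stoi["<s>"]] + [stoi[t] for t in tokens] + [stoi["</s>"]])

class MathSeq2Seq(nn.Module):
    def __init__(self, vocab_size, d_model=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_model)   # positional encodings omitted for brevity
        self.tr = nn.Transformer(d_model=d_model, nhead=4,
                                 num_encoder_layers=2, num_decoder_layers=2,
                                 dim_feedforward=128)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src, tgt):
        # src: (src_len, batch), tgt: (tgt_len, batch) token indices
        tgt_mask = self.tr.generate_square_subsequent_mask(tgt.size(0))
        h = self.tr(self.emb(src), self.emb(tgt), tgt_mask=tgt_mask)
        return self.out(h)

# One toy problem -> solution pair: integrate 2*x  ->  x^2, both in prefix notation.
src = encode(["*", "2", "x"]).unsqueeze(1)
tgt = encode(["pow", "x", "2"]).unsqueeze(1)

model = MathSeq2Seq(len(VOCAB))
logits = model(src, tgt[:-1])                        # teacher forcing on the shifted target
loss = nn.functional.cross_entropy(logits.reshape(-1, len(VOCAB)),
                                   tgt[1:].reshape(-1))
loss.backward()
```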
You can do what's called a beam search, in that at each step you actually generate multiple hypotheses and then keep the best ones in memory. So with a beam size of 10, you would always consider the 10 most probable solutions and you would kind of evaluate all 10 and then always keep the 10 best. Let's see how this goes. Let's do a beam size of three in our case. So a cat jumps and then you could come up with three different things. This sentence could continue: a cat jumps over, a cat jumps between, and a cat jumps swiftly. Right. So these are your three hypotheses. Then we go to the next step. We have to evaluate each of those, each of them. So: a cat jumps over the, over a, over me. A cat jumps between the, between two, and a cat jumps between many. A cat jumps swiftly end of sentence, a cat jumps swiftly over, a cat jumps swiftly and. Right, these are all valid. So of these nine, you would now select again the three that overall have the highest likelihood. Maybe that's the following: a cat jumps over the, a cat jumps over a, and a cat jumps between two. These three. Right. So you just keep these three. And then in the next step, again from these three, you would form three hypotheses for each, and so on. So this is what's called a beam search. And if you give it a beam size of 10 or 50, this system tends to improve even more. The way this system works is quite different from Mathematica in that Mathematica, as I said, is a symbolic solver that never makes mistakes, but can fail to give you a solution. This system simply generates an output sequence that is not guaranteed to be actually a solution to the problem. It's just a hypothesis. But then you can quickly check whether the hypothesis is correct. By the nature of these math problems, with integration you can simply differentiate. And with ODEs, you can simply plug them in to see if they are a solution. It's kind of like your classic, let's say, NP-hard problems or like SAT solving, where you can quickly check whether something is a solution. So if you have a system that generates 50 hypotheses, you could quickly check which one is actually correct. So these numbers here mean that one of these 50 that the system came up with was a correct solution. And if you allow for this many hypotheses, you can see it goes up quite a bit. For example, the ODE solving is almost the same. And here it's even worse if you take ODEs of order 2. It's even worse than Mathematica. But if you allow for larger beam sizes, you see it dramatically goes up. And so it's a different approach. I wouldn't be surprised if Mathematica would actually implement something like this very soon or just buy this off of Facebook or something, or Facebook buy Mathematica, in whatever way. This clearly is a different approach and it appears to work better. But there is a caveat. So here's the caveat that I see with this kind of thing. These evaluations are done on data sets, of course. And this paper goes into big detail on how to generate these data sets. So they have to pay attention to many things, like the fact that many solutions are equivalent. For example, here, you know that this solution and this solution to this equation, to this differential equation, are the same. So they have to use a symbolic framework to check whether the solutions are the same and so on. It is very good work, but they do evaluate on expressions that fit into their data set. 
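Before going into the data set caveat in detail, here is a small illustration of the "cheap to check" property just mentioned, using SymPy. This is my own sketch, not the paper's code: a hypothesis for an integral is accepted if differentiating it gives back the integrand. The same differentiation trick is also what the backward data generation method discussed below relies on: differentiate a random expression and you get a training pair for free.

```python
# Check candidate antiderivatives by differentiating them back (illustrative only).
import sympy as sp

x = sp.symbols("x")
f = x * sp.cos(x)                         # integrand the model was asked to integrate
candidates = [                            # e.g. hypotheses coming out of a beam search
    x * sp.sin(x),                        # wrong
    x * sp.sin(x) + sp.cos(x),            # correct antiderivative
]

for F in candidates:
    ok = sp.simplify(sp.diff(F, x) - f) == 0
    print(F, "->", "accepted" if ok else "rejected")
```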
So here in their data set, they say, OK, we evaluate, you know, expressions with up to 15 internal nodes, 11 leaf values, these four binary operators, then these 15 unary operators. So the expressions that they train on fall into this data set. Right. Also, just numbers from negative five to five. So it is kind of to be expected that a system that is trained on these things would perform very well on these things, as opposed to Mathematica, which is, you know, a general purpose tool. Moreover, if you look at... Sorry, I think this is further down. For example, for the integration task, they have three different ways of generating data. They have the forward way where they simply use a symbolic integrator to generate expressions. They have the backward way where they start from the integral and then differentiate it in order to obtain a training pair. And they have an integration by parts method. These are three different methods to come up with problems for this system to be trained on. And they have very different properties, to the effect that if you train with just one, it won't work well on the others. So if you train with the forward method, it will work very well on data that has been generated with the forward method. So this is down here. This is what it's trained on. And this is what it's evaluated on. Right. You can see the diagonal is very, very strong. But if you train with the backward method and you evaluate on data generated with the forward method, it is actually very poor. That's because in one case, generally, the solutions are longer than the input. In the other case, the solutions are shorter. So not only does this system only work on the particular task here, it is actually very attuned to the way that this data was generated. Right. So in fact, I would postulate that this training data is only probably a very small subset of all of the things that we would like to integrate. And again, the problem is made kind of worse because their evaluation set would also come from their distribution. So what they've ultimately shown is that they can do this on a very skewed, probably very biased subset of this mathematical problem. And on that biased subset, they can outperform something like Mathematica. Right. They kind of defeat themselves. Yeah. If you look here, even with the different integration data generating methods, if you only train on one of them, it doesn't generalize. If you only train on forward data, then if you evaluate on backward generated data, it doesn't work. So even the integrator can't really generalize. So they have to kind of combine different methods. And even now, we can probably easily find examples that this integrator can't solve. So, I mean, there are a lot of cool things here, and they show a number of properties that the model learns just from the data, without them telling it to. And it's cool that it works anyway. As I said, this model has no programmed-in notion of how math works. But also it kind of shows the problems if you do this via a training data set in that if your training data set is very skewed and then your evaluation set follows the same generation process, the claims you can make at the end are limited. And to be fair, I don't know what claims they made in the press generally. So I think this is a pretty cool work. Check it out. And that was it. Thanks. | [
{
"start": 0,
"end": 16,
"text": " Hi there! Can you solve this? Neither can I. But Wolfram Alpha can. So this is the thing that probably I have most to thank for for passing university, especially the math classes in it."
},
{
"start": 16,
"end": 33,
"text": " If you don't know Wolfram Alpha, it is an engine from the creators of Mathematica, but it is online. It can do symbolic math. So it can integrate this expression, for example, and you'll get the solution here."
},
{
"start": 33,
"end": 48,
"text": " And if you have the pro version, it can even give you a step-by-step solution of how to get there. So this part of math is an entirely different part than we usually do with computers."
},
{
"start": 48,
"end": 60,
"text": " Usually we do numeric math, working with actual values. But here it's about symbolic math. It's about manipulating expression, in this case integrating them."
},
{
"start": 60,
"end": 75,
"text": " So here is a paper by Facebook AI Research called Deep Learning for Symbolic Mathematics by Guillaume Lampe and François Gertot."
},
{
"start": 75,
"end": 86,
"text": " These people have basically tackled the task of doing these mathematical reasoning, solving these mathematical problems in a symbolic way using neural networks."
},
{
"start": 86,
"end": 95,
"text": " So they start out by saying here, neural networks have a reputation for being better at solving statistical or proximate problems."
},
{
"start": 95,
"end": 101,
"text": " That's what I meant by numeric, then at performing calculations or working with symbolic data."
},
{
"start": 101,
"end": 111,
"text": " And in this case, they go about this other than other people have. So let's look at how they did it."
},
{
"start": 111,
"end": 124,
"text": " We can express symbolic mathematics in these kind of trees. So an expression like these up here would be expressed into this tree."
},
{
"start": 124,
"end": 132,
"text": " So you would have a plus, this 2 plus 3. Sorry, of course there's an implicit bracket here."
},
{
"start": 132,
"end": 138,
"text": " So you'd have this plus right here, the 2 here and the entire right hand side here."
},
{
"start": 138,
"end": 146,
"text": " So you can basically decompose it into trees like this or this or this."
},
{
"start": 146,
"end": 156,
"text": " Here you also can have the differentiation operator as a symbol in there, just like any other operator."
},
{
"start": 156,
"end": 165,
"text": " Moreover, you can basically decompose everything into everything they have here, into binary and unary nodes in a tree."
},
{
"start": 165,
"end": 173,
"text": " What that means is either like a plus sign, it has two components, so a left and a right hand side that should be added together."
},
{
"start": 173,
"end": 181,
"text": " Or like the cosine, it has one argument, namely the thing that it should take the cosine of."
},
{
"start": 181,
"end": 191,
"text": " So a lot of people have tried going about this problem by working with these trees and basically training neural networks to..."
},
{
"start": 191,
"end": 196,
"text": " So first they use kind of a parser to decompose such a thing into a tree like this."
},
{
"start": 196,
"end": 208,
"text": " And then use neural networks, let's say tree recursive neural networks or so on, to kind of make sense of the tree and solve it in a recursive manner or something like this."
},
{
"start": 208,
"end": 212,
"text": " But that has its limitations."
},
{
"start": 212,
"end": 218,
"text": " So what these people from Facebook AI did is they viewed it as a natural language expression problem."
},
{
"start": 218,
"end": 226,
"text": " So they say, no, no, let's actually go with trees as sequences."
},
{
"start": 226,
"end": 233,
"text": " So you can see that this mathematical expression, for example, is already a sequence."
},
{
"start": 233,
"end": 237,
"text": " It's simply a sequence of tokens."
},
{
"start": 237,
"end": 242,
"text": " But there are many different ways of expressing this."
},
{
"start": 242,
"end": 247,
"text": " So you can say 2 plus 3 times the parentheses, you can say 3 times parentheses plus 2."
},
{
"start": 247,
"end": 254,
"text": " You can turn many things around and there's always these parentheses make it harder and so on."
},
{
"start": 254,
"end": 259,
"text": " So what they do is they say, OK, let's actually go from this thing to a tree."
},
{
"start": 259,
"end": 272,
"text": " So let's go to the tree representation and then let's take the tree representation because the tree representation can be normalized."
},
{
"start": 272,
"end": 278,
"text": " And then let's put that again into a sequence representation such as this one."
},
{
"start": 278,
"end": 281,
"text": " And this is called reverse polish notation."
},
{
"start": 281,
"end": 286,
"text": " And it has multiple advantages over the old expression."
},
{
"start": 286,
"end": 291,
"text": " So let's keep that on the right hand side here."
},
{
"start": 291,
"end": 300,
"text": " This is the same thing, except it's what is called a prefix notation, whereas the thing on the right here is called an infix notation."
},
{
"start": 300,
"end": 306,
"text": " Infix because the operators such as the plus is always between its arguments."
},
{
"start": 306,
"end": 310,
"text": " So it's left hand argument and it's right hand argument."
},
{
"start": 310,
"end": 316,
"text": " In prefix notation, the operator is always in front of its arguments."
},
{
"start": 316,
"end": 321,
"text": " So this operator here is has a first argument."
},
{
"start": 321,
"end": 324,
"text": " This end as a second argument."
},
{
"start": 324,
"end": 329,
"text": " This right now, the cool thing is if you express a tree like this,"
},
{
"start": 329,
"end": 334,
"text": " you can simply go and use a stack machine to solve it."
},
{
"start": 334,
"end": 337,
"text": " So you can basically go."
},
{
"start": 337,
"end": 343,
"text": " I would say you can go from the from the right here and you see you select two and five plus."
},
{
"start": 343,
"end": 346,
"text": " And let's do it by hand."
},
{
"start": 346,
"end": 352,
"text": " Actually, this is fun. So we have plus two times three."
},
{
"start": 352,
"end": 359,
"text": " If you're a boomer like me, you remember you have to use calculators like this and couldn't use the infix notation."
},
{
"start": 359,
"end": 361,
"text": " So you go from the right, right?"
},
{
"start": 361,
"end": 364,
"text": " You say two, five plus. Cool."
},
{
"start": 364,
"end": 366,
"text": " That's seven. So scratch that."
},
{
"start": 366,
"end": 368,
"text": " Put seven here, right?"
},
{
"start": 368,
"end": 373,
"text": " So your new stack is three, two times."
},
{
"start": 373,
"end": 374,
"text": " This right."
},
{
"start": 374,
"end": 380,
"text": " Then you go again from the right and you go seven, three times."
},
{
"start": 380,
"end": 382,
"text": " OK, that's twenty one. Cool."
},
{
"start": 382,
"end": 384,
"text": " Twenty one. Scratch this."
},
{
"start": 384,
"end": 389,
"text": " Now it's twenty one. Two plus twenty one is twenty three."
},
{
"start": 389,
"end": 392,
"text": " I'm fairly sure that's the solution."
},
{
"start": 392,
"end": 394,
"text": " Well, correct me if I'm wrong."
},
{
"start": 394,
"end": 397,
"text": " But this is how you would would go about solving like this."
},
{
"start": 397,
"end": 404,
"text": " So it is the same expression as the original one, but it doesn't use any parentheses."
},
{
"start": 404,
"end": 411,
"text": " And it is it is derived from the from the tree, basically."
},
{
"start": 411,
"end": 420,
"text": " So it is you can you can normalize it much more in order to find unique expressions."
},
{
"start": 420,
"end": 430,
"text": " So what this system does is it it transforms any expression into a prefix notation such as this one."
},
{
"start": 430,
"end": 435,
"text": " Oops. And then it uses a sequence to sequence model."
},
{
"start": 435,
"end": 437,
"text": " In order to derive the solution."
},
{
"start": 437,
"end": 441,
"text": " Now, just how crazy is this? Right."
},
{
"start": 441,
"end": 446,
"text": " So we come we go from this thing here, right?"
},
{
"start": 446,
"end": 450,
"text": " From this thing. And the solution is twenty one."
},
{
"start": 450,
"end": 459,
"text": " Right. And the neural network is simply trained to do sequence to sequence from this to that sequence to sequence."
},
{
"start": 459,
"end": 467,
"text": " That means it basically parses this as a token level. Right."
},
{
"start": 467,
"end": 471,
"text": " And then it outputs these tokens without."
},
{
"start": 471,
"end": 480,
"text": " So during training, you simply give it the you give it the input here and you give it the output."
},
{
"start": 480,
"end": 488,
"text": " And it's supposed to learn how to transform one into the other without you giving it any sort of"
},
{
"start": 488,
"end": 490,
"text": " mathematical ability. Right."
},
{
"start": 490,
"end": 496,
"text": " Without you telling it what does a plus sign mean without you telling it this algorithm that I just told you."
},
{
"start": 496,
"end": 503,
"text": " Now, this by itself is already pretty astounding that you would try such a thing."
},
{
"start": 503,
"end": 506,
"text": " It really transforms the string."
},
{
"start": 506,
"end": 512,
"text": " So this is not the mathematical equation, but the string of this into the string of that."
},
{
"start": 512,
"end": 515,
"text": " Now, they don't do it on numbers."
},
{
"start": 515,
"end": 524,
"text": " Like, I don't think that would work as well if you were to to make it kind of calculate numerical things like this."
},
{
"start": 524,
"end": 526,
"text": " As we said, this is symbolic."
},
{
"start": 526,
"end": 530,
"text": " So what it can do is it can, for example, integrate."
},
{
"start": 530,
"end": 539,
"text": " So if you have an expression like."
},
{
"start": 539,
"end": 541,
"text": " Let's see some on the bottom here."
},
{
"start": 541,
"end": 548,
"text": " So if you had an expression such as a polynomial."
},
{
"start": 548,
"end": 552,
"text": " Here, an expression like this."
},
{
"start": 552,
"end": 556,
"text": " Right. You would like to find its integral."
},
{
"start": 556,
"end": 559,
"text": " That is a problem. That's one of the problems we had at the beginning."
},
{
"start": 559,
"end": 561,
"text": " Right. This integral right here."
},
{
"start": 561,
"end": 567,
"text": " You can write this in a string like we said."
},
{
"start": 567,
"end": 575,
"text": " And then derive its solution right here."
},
{
"start": 575,
"end": 582,
"text": " And have the neural network learn to map one to the other, right, to map this to that."
},
{
"start": 582,
"end": 591,
"text": " So the way it goes is it would map this into map this into its tree representation."
},
{
"start": 591,
"end": 598,
"text": " It would map this into its prefix notation."
},
{
"start": 598,
"end": 602,
"text": " Right. It would also map this to."
},
{
"start": 602,
"end": 604,
"text": " Let's take another color here."
},
{
"start": 604,
"end": 608,
"text": " This into its tree."
},
{
"start": 608,
"end": 612,
"text": " Then it will map this into its prefix notation."
},
{
"start": 612,
"end": 614,
"text": " And then that's the training data."
},
{
"start": 614,
"end": 619,
"text": " The training data is take this, derive that."
},
{
"start": 619,
"end": 624,
"text": " Right. And at inference time, of course, you won't have this here."
},
{
"start": 624,
"end": 630,
"text": " You'll simply be asked to output a sequence as a normal natural language."
},
{
"start": 630,
"end": 632,
"text": " Like you can think of machine translation."
},
{
"start": 632,
"end": 638,
"text": " This thing translates problems into solutions."
},
{
"start": 638,
"end": 640,
"text": " It's crazy."
},
{
"start": 640,
"end": 646,
"text": " I mean, it's not it's not technically super challenging, but it's crazy that it works or that it could work."
},
{
"start": 646,
"end": 647,
"text": " Right."
},
{
"start": 647,
"end": 650,
"text": " So we'll see how this actually works."
},
{
"start": 650,
"end": 654,
"text": " They use a transformer model, which is just which is a classic model."
},
{
"start": 654,
"end": 661,
"text": " If you don't know what a transformer is, I have a video called Attention is All You Need about transformers."
},
{
"start": 661,
"end": 667,
"text": " You can basically use them to do these kinds of tasks, to map one string into another string."
},
{
"start": 667,
"end": 671,
"text": " So."
},
{
"start": 671,
"end": 683,
"text": " Yeah, so they go into detail here of how they construct the data set and how big the problem space is and so on."
},
{
"start": 683,
"end": 688,
"text": " Ultimately, they compare their system to."
},
{
"start": 688,
"end": 696,
"text": " Mathematica, I think, and Maple and MathLab, which do the same thing."
},
{
"start": 696,
"end": 704,
"text": " So Mathematica, which is the kind of desktop version of Wolfram Alpha that I've shown you, you have it here."
},
{
"start": 704,
"end": 707,
"text": " So integration."
},
{
"start": 707,
"end": 712,
"text": " Is the task of integrating, let's say, these these symbolic expressions."
},
{
"start": 712,
"end": 725,
"text": " ODE order one and order two are slightly different tasks where you're asked to solve an ordinary differential equation, which is also a task in symbolic mathematics."
},
{
"start": 725,
"end": 737,
"text": " If you compare it to Mathematica here and they give it Mathematica a limit of 30 seconds, what Mathematica will do is it will kind of search the manipulations that it knows."
},
{
"start": 737,
"end": 745,
"text": " So the advantage of this is it can always give you, let's say, a step by step solution if it finds a solution."
},
{
"start": 745,
"end": 755,
"text": " Right. It will just start and it will do a tree search, manipulating the expression you give in until it reaches a satisfactory solution."
},
{
"start": 755,
"end": 764,
"text": " But then once it has that, it can give you a path through the tree, which leads to the solution, which will give you a step by step solution."
},
{
"start": 764,
"end": 766,
"text": " So you can understand it."
},
{
"start": 766,
"end": 768,
"text": " The system that Facebook designs here doesn't do that."
},
{
"start": 768,
"end": 770,
"text": " It simply takes right."
},
{
"start": 770,
"end": 780,
"text": " It simply takes the input tokens like this is the input and it just gives you an output that is learned so that the network per se doesn't understand math."
},
{
"start": 780,
"end": 790,
"text": " It simply learns from many, many examples that to transform to to come up with good hypotheses."
},
{
"start": 790,
"end": 800,
"text": " So if you compare here, Mathematica, for example, it can integrate 84 percent of the things that they put into it."
},
{
"start": 800,
"end": 805,
"text": " It's not said whether it gets it wrong or simply times out in the rest."
},
{
"start": 805,
"end": 815,
"text": " I would say it times out because probably Mathematica never gets it wrong because it's an actual symbolic manipulator with defined rules."
},
{
"start": 815,
"end": 822,
"text": " So I guess the rest of the rest 16 percent, it simply times out, doesn't find a solution."
},
{
"start": 822,
"end": 838,
"text": " Whereas this Facebook system and they say it usually finds the solution in less than a second, finds these solutions in 98.4 percent of the time with a beam size of one."
},
{
"start": 838,
"end": 840,
"text": " Now, what does the beam size mean?"
},
{
"start": 840,
"end": 846,
"text": " It means that the time that you have to generate the output is the time that you generate the output."
},
{
"start": 846,
"end": 852,
"text": " So if you have a sequence of input, you can always choose to do a beam search."
},
{
"start": 852,
"end": 865,
"text": " So when you have a sequence of input, let's actually give an example, a cat jumps."
},
{
"start": 865,
"end": 871,
"text": " The task is simply to continue the sentence, right, to continue the sentence so you can generate an output sequence."
},
{
"start": 871,
"end": 876,
"text": " The output sequence could be over the dog."
},
{
"start": 876,
"end": 884,
"text": " What you can do is you can this is beam size, would be called beam size one or no beam search at all."
},
{
"start": 884,
"end": 893,
"text": " You can do what's called a beam search in that each step you actually generate multiple hypotheses and then keep the best ones in memory."
},
{
"start": 893,
"end": 908,
"text": " So you in a beam size of 10, you would always consider the 10 most probable solutions and you would kind of evaluate all 10 and then always keep the 10 best."
},
{
"start": 908,
"end": 913,
"text": " Let's see how this goes. Let's do a beam size of three in our case."
},
{
"start": 913,
"end": 917,
"text": " So a cat jumps and then you could come up with three different things."
},
{
"start": 917,
"end": 930,
"text": " This sentence could continue cat jumps over a cat jumps between and a cat jumps swiftly."
},
{
"start": 930,
"end": 932,
"text": " Right. So these are your three hypotheses."
},
{
"start": 932,
"end": 937,
"text": " Then we go to the next step. We have to evaluate each of those, each of them."
},
{
"start": 937,
"end": 944,
"text": " So a cat jumps over the over a over me."
},
{
"start": 944,
"end": 956,
"text": " A cat jumps between the between two and a cat jumps between many."
},
{
"start": 956,
"end": 967,
"text": " The cat jumps swiftly end of sentence, that jumps swiftly over cat jumps swiftly."
},
{
"start": 967,
"end": 970,
"text": " And right, these are all valid."
},
{
"start": 970,
"end": 978,
"text": " So of these nine, you would now select again the three that overall have the highest likelihood."
},
{
"start": 978,
"end": 990,
"text": " Maybe that's the following cat jumps over the cat jumps over a and a cat jumps between two."
},
{
"start": 990,
"end": 992,
"text": " These three. Right. So you just keep these three."
},
{
"start": 992,
"end": 1000,
"text": " And then in the next step, you again from these three, you would want for each three hypotheses and so on."
},
{
"start": 1000,
"end": 1002,
"text": " So this is what's called a beam search."
},
{
"start": 1002,
"end": 1010,
"text": " And if you give it a beam size of 10 or 50, this system tends to improve even more."
},
{
"start": 1010,
"end": 1019,
"text": " The way this system works is quite different from Mathematica in that Mathematica, as I said, is a symbolic solver that never makes mistakes,"
},
{
"start": 1019,
"end": 1022,
"text": " but can fail to give you a solution."
},
{
"start": 1022,
"end": 1029,
"text": " This system simply generates an output sequence that is not guaranteed to be actually a solution to the problem."
},
{
"start": 1029,
"end": 1031,
"text": " It's just a hypothesis."
},
{
"start": 1031,
"end": 1035,
"text": " But then you can quickly check whether the hypothesis is correct."
},
{
"start": 1035,
"end": 1041,
"text": " So the nature of these math problems with integration, you can simply differentiate."
},
{
"start": 1041,
"end": 1046,
"text": " And with ODE, you can simply plug them in to see if there is solution."
},
{
"start": 1046,
"end": 1057,
"text": " It's kind of like your classic, let's say, NP-hard problems or like a SAT solving where you can quickly check whether something is a solution."
},
{
"start": 1057,
"end": 1065,
"text": " So if you have a system that generates 50 hypotheses, you could quickly check which one is actually correct."
},
{
"start": 1065,
"end": 1073,
"text": " So these numbers here mean that one of these 50 that the system came up with was a correct solution."
},
{
"start": 1073,
"end": 1079,
"text": " And if you allow for such many hypotheses, you can see it goes up quite a bit."
},
{
"start": 1079,
"end": 1083,
"text": " For example, the ODE solving is almost the same."
},
{
"start": 1083,
"end": 1087,
"text": " And here it's even worse if you take ODE's of order 2."
},
{
"start": 1087,
"end": 1089,
"text": " It's even worse than Mathematica."
},
{
"start": 1089,
"end": 1096,
"text": " But if you allow for larger beam sizes, you see it dramatically goes up."
},
{
"start": 1096,
"end": 1099,
"text": " And so it's a different approach."
},
{
"start": 1099,
"end": 1113,
"text": " I wouldn't be surprised if Mathematica would actually implement something like this very soon or just buy this off of Facebook or something, or Facebook by Mathematica in whatever way."
},
{
"start": 1113,
"end": 1117,
"text": " This clearly is a different approach and it appears to work better."
},
{
"start": 1117,
"end": 1119,
"text": " But there is a caveat."
},
{
"start": 1119,
"end": 1123,
"text": " So here's the caveat that I see with this kind of thing."
},
{
"start": 1123,
"end": 1130,
"text": " These evaluations are done on data sets, of course."
},
{
"start": 1130,
"end": 1136,
"text": " And this paper goes into big detail on how to generate these data sets."
},
{
"start": 1136,
"end": 1142,
"text": " So they have to pay attention to many things like many solutions are equivalent."
},
{
"start": 1142,
"end": 1156,
"text": " For example, here, you know, that this solution and this solution to this equation, to this differential equation are the same."
},
{
"start": 1156,
"end": 1164,
"text": " So they have to use a symbolic framework to check whether the solutions are the same and so on."
},
{
"start": 1164,
"end": 1176,
"text": " This it is very good work, but they do evaluate on expressions that fit into their data set."
},
{
"start": 1176,
"end": 1191,
"text": " So here in their data set, they say, OK, we evaluate, you know, expressions with up to 15 internal nodes, 11 leave values for these four binary operators, then these 15 unary operators."
},
{
"start": 1191,
"end": 1197,
"text": " So the expressions that they train on fall into this data set."
},
{
"start": 1197,
"end": 1205,
"text": " Right. Also, just numbers from negative five to five."
},
{
"start": 1205,
"end": 1217,
"text": " So it is it is kind of to be expected that a system that is trained on these things would meet would perform very well on these things as opposed to opposed to Mathematica."
},
{
"start": 1217,
"end": 1222,
"text": " That is, you know, a general purpose tool."
},
{
"start": 1222,
"end": 1226,
"text": " Moreover, if you look at."
},
{
"start": 1226,
"end": 1228,
"text": " Sorry, I think this is further down."
},
{
"start": 1228,
"end": 1239,
"text": " For example, in integration for the integration task, they have three different ways of solving of generating data."
},
{
"start": 1239,
"end": 1245,
"text": " They have the forward way where they simply use a symbolic integrator to generate expressions."
},
{
"start": 1245,
"end": 1253,
"text": " They have the backward way where they start from the integral and then differentiate it in order to obtain a training pair."
},
{
"start": 1253,
"end": 1256,
"text": " And they have an integration by parts method."
},
{
"start": 1256,
"end": 1261,
"text": " These are three different methods to come up with problems for this system to be trained on."
},
{
"start": 1261,
"end": 1271,
"text": " And they have very different properties to the effect that if you train with one just one, it won't work well on the other."
},
{
"start": 1271,
"end": 1282,
"text": " So if you train with the forward method, it will work very well on data that has been generated with the forward method."
},
{
"start": 1282,
"end": 1285,
"text": " So this is down here. This is what it's trained on."
},
{
"start": 1285,
"end": 1287,
"text": " And this is what it's evaluated on."
},
{
"start": 1287,
"end": 1291,
"text": " Right. You can see the diagonal is very, very strong."
},
{
"start": 1291,
"end": 1303,
"text": " But if you train with the backward method, but you evaluate on data generated with the forward method, it is actually very poor."
},
{
"start": 1303,
"end": 1309,
"text": " That's because in one case, generally, the solutions are longer than the input."
},
{
"start": 1309,
"end": 1311,
"text": " In the other case, the solutions are shorter."
},
{
"start": 1311,
"end": 1317,
"text": " So not only does this system only work on the particular task here,"
},
{
"start": 1317,
"end": 1325,
"text": " it is actually very attuned to the way that this data was generated."
},
{
"start": 1325,
"end": 1338,
"text": " Right. So in fact, I would postulate that this training data is only probably a very small subset of all of the things that we would like to integrate."
},
{
"start": 1338,
"end": 1349,
"text": " And again, the problem the problem is made kind of worse because they their evaluation set would also come from their distribution."
},
{
"start": 1349,
"end": 1361,
"text": " So what they've ultimately shown is that they can do this on a very skewed, probably very biased subset of this mathematical problem."
},
{
"start": 1361,
"end": 1366,
"text": " And on that biased subset, they can outperform something like Mathematica."
},
{
"start": 1366,
"end": 1369,
"text": " Right. They kind of defeat themselves."
},
{
"start": 1369,
"end": 1378,
"text": " Yeah. If you look here, they even the different integration data generating methods, if you only train on one of them, it doesn't generalize."
},
{
"start": 1378,
"end": 1388,
"text": " If you only train on forward data, then if you evaluate on backward generated data, it doesn't work."
},
{
"start": 1388,
"end": 1392,
"text": " So even the integrator can't really generalize."
},
{
"start": 1392,
"end": 1403,
"text": " So they have to kind of combine different method. And even now, we can probably easily find examples that this integrator can't solve."
},
{
"start": 1403,
"end": 1415,
"text": " So, I mean, there is a lot of cool things here and they show a number of properties that the model learns just from without them telling it to."
},
{
"start": 1415,
"end": 1421,
"text": " And it's cool that it works anyway. As I said, this model has no programmed in notion of how math works."
},
{
"start": 1421,
"end": 1443,
"text": " But also it kind of shows the problems if you do this via a training data set in that if your training data set is very skewed and then your evaluation set follows the same generation process, the claims you can make at the end are limited."
},
{
"start": 1443,
"end": 1447,
"text": " And to be fair, I don't know what claims they made in the press generally."
},
{
"start": 1447,
"end": 1455,
"text": " So I think there is a pretty cool work. Check it out. And that was it. Thanks."
}
] |
JPX_jSZtszY | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | NeurIPS 2020 Changes to Paper Submission Process | [
"Science & Technology"
] | [
"machine learning",
"deep learning",
"phd",
"papers",
"neurips",
"nips",
"conference",
"submission",
"society",
"ethics"
] | My thoughts on the changes to the paper submission process for NeurIPS 2020.
The main new changes are:
1. ACs can desk reject papers
2. All authors have to be able to review if asked
3. Resubmissions from other conferences must be marked and a summary of changes since the last submission must be provided
4. Broader societal / ethical impact must be discussed
5. Upon acceptance, all papers must link to an explanatory video and the PDFs for slides and poster
https://neurips.cc/Conferences/2020/CallForPapers
https://youtu.be/361h6lHZGDg
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. So I just wanted to give a few quick thoughts about the changes to the NeurIPS submission process this year as opposed to last year. They've announced this on the website, on Twitter, with the video and so on, and I thought I might share some thoughts on that. And maybe some of you haven't heard yet in case you're planning to submit or thinking about it. So desk rejections. ACs, area chairs, have the ability to desk reject papers that they feel strongly are not going to be passable to the reviewers. They did an experiment last year where the ACs were simply supposed to mark submissions that they would desk reject, and it turned out that ACs aren't very good at estimating which submissions are going to be rejected by the reviewers. That might be because there wasn't really anything at stake because it was just kind of a let's see how this works. But it is definitely a move to reduce the number of submissions because the field is exploding and we lack reviewing power, reviewing people. So this is a move to reduce the number of people that have to review something because there will be fewer papers. I don't know if this increases the quality overall. If your paper gets desk rejected, there's usually some obvious reason for it why an AC decided it's not worth it. They probably haven't read it in depth, but there might be some kind of overall structural issue that, or like the introduction has many typos, or you know, look for the obvious things even though your work might be good. Second, all authors of a paper have to be able to review if asked to do so. And again, this is a stab at this kind of reviewing crisis, but I have mixed feelings about this. I really think this is a move in the wrong direction. It will increase the number of authors because a lot of people have been kind of free riding in that they're submitting papers, but they aren't reviewing other papers even though they would be competent researchers simply because reviewing doesn't get you anything. So there's no incentive to do reviews. Maybe you can say you're a reviewer, but then there's every incentive to do bad reviews, like two line reviews where the first line says you should have compared to my paper, reject like, fuck you if you're a reviewer like this. In any case, like a lot of times, and this hits, for example, like universities where you maybe work with a master student and the master student does some of the pre-processing of the data and they don't really have a clue about the machine learning, but they still contribute it. So why shouldn't they be an author on the paper? They might even have written that section about the data pre-processing. And now they're asked to review entire papers about topics where they're not really familiar with or you have some outside collaborators or, you know, there are so many things wrong. I think this attracts the wrong kind of people and by forcing people to do it, you encourage even more, like all these reviewers that would not have reviewed, what will happen is they will give shitty reviews and you will have even worse quality of reviews as a result. I think this is the wrong move to reduce the number of load per reviewer. I'd rather see abolish peer review completely in computer science, in machine learning at least. That's my opinion, but that might be a video for another time. I have plans how to replace it another time. Resubmissions have to be clearly marked. 
So if your paper is a resubmission of, like if you had already submitted it in the last 12 months, it's been rejected, you have to say it is a resubmission and the changes you made to the paper. Again with a peer review process that actually works, this would make a lot of sense. You can say, well, it got rejected last time and here is how I corrected for what the reviewers criticized, but with the review quality right now, I mean most of the papers, what are they going to say? It got rejected for nefarious reasons because the reviewer had a bad bowel movement that morning and I didn't really change much. So you encourage people to kind of blow out of proportion the changes they made and put a lot of additional unnecessary work on two papers that would actually be already fine. So all of these things, they are forcing people to do things and then the incentives of what we want aren't aligned with what we give. So what you'll end up with is lower quality reviews and lower quality work. So the next two points are of a different nature. The first one though, that will probably, I mean even if the ACs aren't perfect, that's a good move. I like that. The fourth point and the fifth point are a bit different. The fourth point is there is a new section in CMT apparently where you have to describe the broader societal impact and ethics around your work. How will your work influence society? What are positives and negatives? Ethical outcomes? How can it be used? And this is targeted towards things like let's say facial recognition. If you develop a new facial recognition algorithm, you may be able to argue, well this could be better used to identify victims in a big crowd. There's a mass riot or something and then you don't know who is there. Is my relative one of the people in the mass that gets stomped on? Or you can also say this potentially helps a dictatorial state to govern their people because they can now recognize everyone. For most papers it will be a bit shaky. Like if your third order optimization algorithm achieves a slightly better convergence rate, I'm not sure what's here. But what I feel is that this is dumb in a way because this just means more work. Basically now you have to demonstrate and yeah it says you should discuss positive and negative aspects but in essence everyone will be demonstrating virtue signaling how good their work will be for society and what good can be done and maybe a bit of bad. But that can be mitigated and it just pushes into a more PR world. So it goes from the science world into a more PR world. It means extra work and who are the people that can afford to do extra work? It's mostly the big companies. They can just put an additional team member on that, maybe even do additional experiments to show the societal impact of the work and who will lose out are probably small universities, independent researchers. And so on that don't have that capacity that simply do their research because it's an interesting research question. And for almost every single thing in the world that has an application it will have good and bad applications. So yeah mixed feelings. So the fifth is you are now supposed if your paper gets accepted to make a video about it and upload the poster basically link to the poster that you would use and also link to slides that you would give your talk with. This is to make it more accessible to people that are not at the conference which again I have mixed feelings about. Again it pushes it into this more PR realm. 
Talks are already live streamed, at least for most of the large conferences, and I feel it just gets people one step further away from the actual paper. So it allows people to grandstand and PR up their work even more, because even people who don't attend the conference are now not going to read the paper, they're just going to watch the video. And in the video you can always leave away those things that a reviewer makes you put in the paper, right, and in the video you can oversell. It's camera ready. No one reviews the video. You can say whatever you want. So where before, if you didn't attend the conference, I think many people actually did read the paper and watched talks where people could ask questions, now it's just one more PR thing. And again, who has the time, energy and money to really invest a lot into this? It's mainly large companies, right; if you're small and you're time bound and so on, you might not have the equipment or time to do that. I am not for hire to do your NeurIPS videos, just saying. I don't have time to make these videos, really. As you can see, stellar quality, I think there's a bright glare right here. So that was it for my opinions on this, and I wish you a nice day. Bye bye. | [
{
"start": 0,
"end": 4.5200000000000005,
"text": " Hi there."
},
{
"start": 4.5200000000000005,
"end": 11.120000000000001,
"text": " So I just wanted to give a few quick thoughts about the changes to the NeurIPS submission"
},
{
"start": 11.120000000000001,
"end": 14.6,
"text": " process this year as opposed to last year."
},
{
"start": 14.6,
"end": 20.2,
"text": " They've announced this on the website, on Twitter, with the video and so on, and I thought"
},
{
"start": 20.2,
"end": 22.68,
"text": " I might share some thoughts on that."
},
{
"start": 22.68,
"end": 27.52,
"text": " And maybe some of you haven't heard yet in case you're planning to submit or thinking"
},
{
"start": 27.52,
"end": 28.6,
"text": " about it."
},
{
"start": 28.6,
"end": 31.360000000000003,
"text": " So desk rejections."
},
{
"start": 31.360000000000003,
"end": 39.760000000000005,
"text": " ACs, area chairs, have the ability to desk reject papers that they feel strongly are"
},
{
"start": 39.760000000000005,
"end": 45.400000000000006,
"text": " not going to be passable to the reviewers."
},
{
"start": 45.400000000000006,
"end": 50.88,
"text": " They did an experiment last year where the ACs were simply supposed to mark submissions"
},
{
"start": 50.88,
"end": 57.120000000000005,
"text": " that they would desk reject, and it turned out that ACs aren't very good at estimating"
},
{
"start": 57.12,
"end": 61.64,
"text": " which submissions are going to be rejected by the reviewers."
},
{
"start": 61.64,
"end": 65.42,
"text": " That might be because there wasn't really anything at stake because it was just kind"
},
{
"start": 65.42,
"end": 68.24,
"text": " of a let's see how this works."
},
{
"start": 68.24,
"end": 73.84,
"text": " But it is definitely a move to reduce the number of submissions because the field is"
},
{
"start": 73.84,
"end": 80.56,
"text": " exploding and we lack reviewing power, reviewing people."
},
{
"start": 80.56,
"end": 87.44,
"text": " So this is a move to reduce the number of people that have to review something because"
},
{
"start": 87.44,
"end": 91.16,
"text": " there will be fewer papers."
},
{
"start": 91.16,
"end": 94.68,
"text": " I don't know if this increases the quality overall."
},
{
"start": 94.68,
"end": 101.04,
"text": " If your paper gets desk rejected, there's usually some obvious reason for it why an"
},
{
"start": 101.04,
"end": 104.24000000000001,
"text": " AC decided it's not worth it."
},
{
"start": 104.24000000000001,
"end": 109.56,
"text": " They probably haven't read it in depth, but there might be some kind of overall structural"
},
{
"start": 109.56,
"end": 117.98,
"text": " issue that, or like the introduction has many typos, or you know, look for the obvious things"
},
{
"start": 117.98,
"end": 121.16,
"text": " even though your work might be good."
},
{
"start": 121.16,
"end": 129.56,
"text": " Second, all authors of a paper have to be able to review if asked to do so."
},
{
"start": 129.56,
"end": 134.72,
"text": " And again, this is a stab at this kind of reviewing crisis, but I have mixed feelings"
},
{
"start": 134.72,
"end": 135.72,
"text": " about this."
},
{
"start": 135.72,
"end": 139.6,
"text": " I really think this is a move in the wrong direction."
},
{
"start": 139.6,
"end": 144.24,
"text": " It will increase the number of authors because a lot of people have been kind of free riding"
},
{
"start": 144.24,
"end": 150.32,
"text": " in that they're submitting papers, but they aren't reviewing other papers even though"
},
{
"start": 150.32,
"end": 155.04,
"text": " they would be competent researchers simply because reviewing doesn't get you anything."
},
{
"start": 155.04,
"end": 158.14,
"text": " So there's no incentive to do reviews."
},
{
"start": 158.14,
"end": 162.4,
"text": " Maybe you can say you're a reviewer, but then there's every incentive to do bad reviews,"
},
{
"start": 162.4,
"end": 166.6,
"text": " like two line reviews where the first line says you should have compared to my paper,"
},
{
"start": 166.6,
"end": 172,
"text": " reject like, fuck you if you're a reviewer like this."
},
{
"start": 172,
"end": 179.32,
"text": " In any case, like a lot of times, and this hits, for example, like universities where"
},
{
"start": 179.32,
"end": 184,
"text": " you maybe work with a master student and the master student does some of the pre-processing"
},
{
"start": 184,
"end": 189.56,
"text": " of the data and they don't really have a clue about the machine learning, but they still"
},
{
"start": 189.56,
"end": 190.56,
"text": " contribute it."
},
{
"start": 190.56,
"end": 192.26,
"text": " So why shouldn't they be an author on the paper?"
},
{
"start": 192.26,
"end": 196.48,
"text": " They might even have written that section about the data pre-processing."
},
{
"start": 196.48,
"end": 202.64,
"text": " And now they're asked to review entire papers about topics where they're not really familiar"
},
{
"start": 202.64,
"end": 208.84,
"text": " with or you have some outside collaborators or, you know, there are so many things wrong."
},
{
"start": 208.84,
"end": 214.92,
"text": " I think this attracts the wrong kind of people and by forcing people to do it, you encourage"
},
{
"start": 214.92,
"end": 220.32,
"text": " even more, like all these reviewers that would not have reviewed, what will happen is they"
},
{
"start": 220.32,
"end": 227.07999999999998,
"text": " will give shitty reviews and you will have even worse quality of reviews as a result."
},
{
"start": 227.07999999999998,
"end": 233.2,
"text": " I think this is the wrong move to reduce the number of load per reviewer."
},
{
"start": 233.2,
"end": 239.12,
"text": " I'd rather see abolish peer review completely in computer science, in machine learning at"
},
{
"start": 239.12,
"end": 240.24,
"text": " least."
},
{
"start": 240.24,
"end": 245.48,
"text": " That's my opinion, but that might be a video for another time."
},
{
"start": 245.48,
"end": 250.07999999999998,
"text": " I have plans how to replace it another time."
},
{
"start": 250.08,
"end": 252.48000000000002,
"text": " Resubmissions have to be clearly marked."
},
{
"start": 252.48000000000002,
"end": 257.96000000000004,
"text": " So if your paper is a resubmission of, like if you had already submitted it in the last"
},
{
"start": 257.96000000000004,
"end": 263.92,
"text": " 12 months, it's been rejected, you have to say it is a resubmission and the changes"
},
{
"start": 263.92,
"end": 265.68,
"text": " you made to the paper."
},
{
"start": 265.68,
"end": 271.34000000000003,
"text": " Again with a peer review process that actually works, this would make a lot of sense."
},
{
"start": 271.34000000000003,
"end": 276.8,
"text": " You can say, well, it got rejected last time and here is how I corrected for what the reviewers"
},
{
"start": 276.8,
"end": 282.96000000000004,
"text": " criticized, but with the review quality right now, I mean most of the papers, what are they"
},
{
"start": 282.96000000000004,
"end": 284.88,
"text": " going to say?"
},
{
"start": 284.88,
"end": 291.52000000000004,
"text": " It got rejected for nefarious reasons because the reviewer had a bad bowel movement that"
},
{
"start": 291.52000000000004,
"end": 293.92,
"text": " morning and I didn't really change much."
},
{
"start": 293.92,
"end": 299.56,
"text": " So you encourage people to kind of blow out of proportion the changes they made and put"
},
{
"start": 299.56,
"end": 305.44,
"text": " a lot of additional unnecessary work on two papers that would actually be already fine."
},
{
"start": 305.44,
"end": 315.8,
"text": " So all of these things, they are forcing people to do things and then the incentives of what"
},
{
"start": 315.8,
"end": 320.22,
"text": " we want aren't aligned with what we give."
},
{
"start": 320.22,
"end": 326.15999999999997,
"text": " So what you'll end up with is lower quality reviews and lower quality work."
},
{
"start": 326.15999999999997,
"end": 330.56,
"text": " So the next two points are of a different nature."
},
{
"start": 330.56,
"end": 337.56,
"text": " The first one though, that will probably, I mean even if the ACs aren't perfect, that's"
},
{
"start": 337.56,
"end": 338.56,
"text": " a good move."
},
{
"start": 338.56,
"end": 340.72,
"text": " I like that."
},
{
"start": 340.72,
"end": 344.16,
"text": " The fourth point and the fifth point are a bit different."
},
{
"start": 344.16,
"end": 348.88,
"text": " The fourth point is there is a new section in CMT apparently where you have to describe"
},
{
"start": 348.88,
"end": 354.2,
"text": " the broader societal impact and ethics around your work."
},
{
"start": 354.2,
"end": 356.68,
"text": " How will your work influence society?"
},
{
"start": 356.68,
"end": 358.92,
"text": " What are positives and negatives?"
},
{
"start": 358.92,
"end": 360.32,
"text": " Ethical outcomes?"
},
{
"start": 360.32,
"end": 361.32,
"text": " How can it be used?"
},
{
"start": 361.32,
"end": 366.44,
"text": " And this is targeted towards things like let's say facial recognition."
},
{
"start": 366.44,
"end": 371.68,
"text": " If you develop a new facial recognition algorithm, you may be able to argue, well this could"
},
{
"start": 371.68,
"end": 378.8,
"text": " be better used to identify victims in a big crowd."
},
{
"start": 378.8,
"end": 382.56,
"text": " There's a mass riot or something and then you don't know who is there."
},
{
"start": 382.56,
"end": 390.03999999999996,
"text": " Is my relative one of the people in the mass that gets stomped on?"
},
{
"start": 390.04,
"end": 396.64000000000004,
"text": " Or you can also say this potentially helps a dictatorial state to govern their people"
},
{
"start": 396.64000000000004,
"end": 399.8,
"text": " because they can now recognize everyone."
},
{
"start": 399.8,
"end": 402.92,
"text": " For most papers it will be a bit shaky."
},
{
"start": 402.92,
"end": 409.76,
"text": " Like if your third order optimization algorithm achieves a slightly better convergence rate,"
},
{
"start": 409.76,
"end": 412.28000000000003,
"text": " I'm not sure what's here."
},
{
"start": 412.28,
"end": 423.2,
"text": " But what I feel is that this is dumb in a way because this just means more work."
},
{
"start": 423.2,
"end": 427.84,
"text": " Basically now you have to demonstrate and yeah it says you should discuss positive and"
},
{
"start": 427.84,
"end": 433.78,
"text": " negative aspects but in essence everyone will be demonstrating virtue signaling how good"
},
{
"start": 433.78,
"end": 439.84,
"text": " their work will be for society and what good can be done and maybe a bit of bad."
},
{
"start": 439.84,
"end": 446.2,
"text": " But that can be mitigated and it just pushes into a more PR world."
},
{
"start": 446.2,
"end": 449.03999999999996,
"text": " So it goes from the science world into a more PR world."
},
{
"start": 449.03999999999996,
"end": 453.79999999999995,
"text": " It means extra work and who are the people that can afford to do extra work?"
},
{
"start": 453.79999999999995,
"end": 455.64,
"text": " It's mostly the big companies."
},
{
"start": 455.64,
"end": 460.67999999999995,
"text": " They can just put an additional team member on that, maybe even do additional experiments"
},
{
"start": 460.67999999999995,
"end": 467.79999999999995,
"text": " to show the societal impact of the work and who will lose out are probably small universities,"
},
{
"start": 467.79999999999995,
"end": 469.55999999999995,
"text": " independent researchers."
},
{
"start": 469.56,
"end": 476.28000000000003,
"text": " And so on that don't have that capacity that simply do their research because it's an interesting"
},
{
"start": 476.28000000000003,
"end": 478,
"text": " research question."
},
{
"start": 478,
"end": 483.76,
"text": " And for almost every single thing in the world that has an application it will have good"
},
{
"start": 483.76,
"end": 485.68,
"text": " and bad applications."
},
{
"start": 485.68,
"end": 488.56,
"text": " So yeah mixed feelings."
},
{
"start": 488.56,
"end": 494.2,
"text": " So the fifth is you are now supposed if your paper gets accepted to make a video about"
},
{
"start": 494.2,
"end": 502.12,
"text": " it and upload the poster basically link to the poster that you would use and also link"
},
{
"start": 502.12,
"end": 504.76,
"text": " to slides that you would give your talk with."
},
{
"start": 504.76,
"end": 510.82,
"text": " This is to make it more accessible to people that are not at the conference which again"
},
{
"start": 510.82,
"end": 513.28,
"text": " I have mixed feelings about."
},
{
"start": 513.28,
"end": 517.56,
"text": " Again it pushes it into this more PR realm."
},
{
"start": 517.56,
"end": 521.16,
"text": " Talks are already live streamed."
},
{
"start": 521.16,
"end": 526.68,
"text": " Most of them are for most of the large conferences and I feel it just gets people one step more"
},
{
"start": 526.68,
"end": 530.68,
"text": " away from the actual paper."
},
{
"start": 530.68,
"end": 537.8399999999999,
"text": " So it allows people to grandstand and PR up even more of their work because even people"
},
{
"start": 537.8399999999999,
"end": 540.8399999999999,
"text": " who don't attend the conference now they're not going to read the paper, they're just"
},
{
"start": 540.8399999999999,
"end": 542.52,
"text": " going to watch the video."
},
{
"start": 542.52,
"end": 548.24,
"text": " And in the video you can always leave away those things that you would have to like that"
},
{
"start": 548.24,
"end": 553,
"text": " a reviewer makes you put in the paper right and in the video you can overbought."
},
{
"start": 553,
"end": 554.24,
"text": " It's camera ready."
},
{
"start": 554.24,
"end": 555.72,
"text": " No one reviews the video."
},
{
"start": 555.72,
"end": 556.84,
"text": " You can say whatever you want."
},
{
"start": 556.84,
"end": 562.2,
"text": " So it's just where before if you didn't attend the conference I think many people actually"
},
{
"start": 562.2,
"end": 569.84,
"text": " did read the paper, watched talks where people could ask questions and now it's just one"
},
{
"start": 569.84,
"end": 571.2,
"text": " more PR thing."
},
{
"start": 571.2,
"end": 578.4000000000001,
"text": " And again who has time, energy and money to really invest a lot into this?"
},
{
"start": 578.4000000000001,
"end": 584.4000000000001,
"text": " It's mainly large companies right if you're small and you're time bound and so on you"
},
{
"start": 584.4000000000001,
"end": 588.12,
"text": " might not have equipment or time to do that."
},
{
"start": 588.12,
"end": 593.36,
"text": " I am not for hire to do your NURBS videos just saying."
},
{
"start": 593.36,
"end": 597.7,
"text": " I don't have time to make these videos really."
},
{
"start": 597.7,
"end": 602.62,
"text": " As you can see stellar quality I think there's a bright glare right here."
},
{
"start": 602.62,
"end": 607.84,
"text": " So that was it for my opinions on this and I wish you a nice day."
},
{
"start": 607.84,
"end": 628.24,
"text": " Bye bye."
}
] |
9Kec_7WFyp0 | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Growing Neural Cellular Automata | [
"Science & Technology"
] | [
"machine learning",
"deep learning",
"cellular automata",
"game of life",
"conway",
"google",
"distill",
"interactive",
"colab",
"local",
"global",
"update"
] | The Game of Life on steroids! This model learns to grow complex patterns in an entirely local way. Each cell is trained to listen to its neighbors and update itself in a way such that, collectively, an overall goal is reached. Fascinating and interactive!
https://distill.pub/2020/growing-ca/
https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
Abstract:
Most multicellular organisms begin their life as a single egg cell - a single cell whose progeny reliably self-assemble into highly complex anatomies with many organs and tissues in precisely the same arrangement each time. The ability to build their own bodies is probably the most fundamental skill every living creature possesses. Morphogenesis (the process of an organism’s shape development) is one of the most striking examples of a phenomenon called self-organisation. Cells, the tiny building blocks of bodies, communicate with their neighbors to decide the shape of organs and body plans, where to grow each organ, how to interconnect them, and when to eventually stop. Understanding the interplay of the emergence of complex outcomes from simple rules and homeostatic 1 feedback loops is an active area of research. What is clear is that evolution has learned to exploit the laws of physics and computation to implement the highly robust morphogenetic software that runs on genome-encoded cellular hardware.
This process is extremely robust to perturbations. Even when the organism is fully developed, some species still have the capability to repair damage - a process known as regeneration. Some creatures, such as salamanders, can fully regenerate vital organs, limbs, eyes, or even parts of the brain! Morphogenesis is a surprisingly adaptive process. Sometimes even a very atypical development process can result in a viable organism - for example, when an early mammalian embryo is cut in two, each half will form a complete individual - monozygotic twins!
The biggest puzzle in this field is the question of how the cell collective knows what to build and when to stop. The sciences of genomics and stem cell biology are only part of the puzzle, as they explain the distribution of specific components in each cell, and the establishment of different types of cells. While we know of many genes that are required for the process of regeneration, we still do not know the algorithm that is sufficient for cells to know how to build or remodel complex organs to a very specific anatomical end-goal. Thus, one major lynch-pin of future work in biomedicine is the discovery of the process by which large-scale anatomy is specified within cell collectives, and how we can rewrite this information to have rational control of growth and form. It is also becoming clear that the software of life possesses numerous modules or subroutines, such as “build an eye here”, which can be activated with simple signal triggers. Discovery of such subroutines and a mapping out of the developmental logic is a new field at the intersection of developmental biology and computer science. An important next step is to try to formulate computational models of this process, both to enrich the conceptual toolkit of biologists and to help translate the discoveries of biology into better robotics and computational technology.
Imagine if we could design systems of the same plasticity and robustness as biological life: structures and machines that could grow and repair themselves. Such technology would transform the current efforts in regenerative medicine, where scientists and clinicians seek to discover the inputs or stimuli that could cause cells in the body to build structures on demand as needed. To help crack the puzzle of the morphogenetic code, and also exploit the insights of biology to create self-repairing systems in real life, we try to replicate some of the desired properties in an in silico experiment.
Authors: Alexander Mordvintsev, Ettore Randazzo, Eyvind Niklasson, Michael Levin
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. Today I thought we would be looking at growing neural cellular automata, which is an article on distill.pub, which I found pretty neat. So this is kind of an interactive article. If you don't know distill.pub, check it out. It is a cool new concept as an alternative to the classical journals or the conference system. So what it allows you to do is to write articles that are a bit more interactive, a bit more engaging: there are no PDFs, there are no pages, there are animations and so on. So I thought we'd be looking at this article today, which is about growing neural cellular automata. So if you don't know what cellular automata are, this is a very old concept. The most famous one is called the Game of Life, where you have these cells. Here you can see every pixel is a cell and they follow some kind of update rule. And usually the update rule is something like: if enough of my neighbors are alive, I'm going to be alive as well in the next time step, and if only very few neighbors are alive, I'm going to die. So this gives rise to these kinds of patterns. And here the same is done with color, and the update rules are a bit more complicated. So basically... ah, a traveler. Oh, nice. Okay. So in the Game of Life, if you play it, the most prestigious thing to get is these kinds of travelers. I've not... this is the first time I've managed to do this in this thing. So what does it do? So each pixel here is kind of an autonomous thing that is only allowed to look at its neighbors in order to decide whether or not in the next time step it is going to be alive. Look, it's like it's incorporating again. So each cell looks at its neighbors and then decides what its next state will be. And here it's not only alive or dead, where dead would be white and alive would be anything else, but it is also the color. So each cell decides on what color it should have. And then this is a live thing. So it kind of reproduces, right? You can see, if I start it new, if you double click here, it grows from somewhere else. And this is completely local. So these cells really only look at their neighbors. That's the special part, right? They don't look at the global structure. It's not like a GAN that can look at the entire picture and decide what's still missing. What these can also do: if you destroy part of it, they can kind of grow back, again, just out of local update rules at the level of the individual cells and their neighbors. They're trained to do these big structures. So let's look at how they do it. So basically, here's how they model a cell. And let's go over here. So each cell, as I said, is made up of 16 channels. And here it's modeled as three by three, but I think each cell is really one pixel. And each cell is allowed to look at its eight neighbors, right? So each cell is allowed to look at its eight neighbors across 16 different channels. And the 16 channels here mean: the first three are RGB, so this is the actual color that is seen. Then there is an alive or dead channel, what they call an alpha channel. So if this channel is high, the cell is considered alive; otherwise, it is considered dead and not part of the pattern. So a cell can come alive or die, depending on its neighbors. And then the remaining 12 channels are what they call hidden channels. So the cell is allowed to encode some hidden state there. 
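As a rough sketch of what such a cell state and its fixed perception step could look like in code (my own PyTorch illustration, not the authors' implementation, which is a TensorFlow Colab linked from the article; the grid size and the seed placement are illustrative assumptions): every one of the 16 channels is filtered with an identity kernel plus the two Sobel kernels, so each cell ends up with 48 numbers computed only from itself and its eight neighbors.

```python
import torch
import torch.nn.functional as F

CHANNELS = 16   # 3 RGB + 1 alpha ("alive") + 12 hidden channels per cell

def perceive(state):
    """state: (batch, 16, H, W) grid of cell vectors.
    Returns (batch, 48, H, W): per cell, its own channels plus Sobel-x and
    Sobel-y gradient estimates of every channel over the 3x3 neighborhood."""
    sobel_x = torch.tensor([[-1.0, 0.0, 1.0],
                            [-2.0, 0.0, 2.0],
                            [-1.0, 0.0, 1.0]]) / 8.0
    sobel_y = sobel_x.t()
    identity = torch.zeros(3, 3)
    identity[1, 1] = 1.0
    kernels = torch.stack([identity, sobel_x, sobel_y])      # (3, 3, 3)
    kernels = kernels.repeat(CHANNELS, 1, 1).unsqueeze(1)    # (48, 1, 3, 3), one fixed 3x3 kernel per output
    return F.conv2d(state, kernels, padding=1, groups=CHANNELS)  # depthwise convolution

grid = torch.zeros(1, CHANNELS, 64, 64)
grid[:, 3:, 32, 32] = 1.0          # a single "seed" cell: alpha and hidden channels set to one
print(perceive(grid).shape)        # torch.Size([1, 48, 64, 64])
```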
So there's each cell is represented by the 16 dimensional vector, which is not much right. And then each cell is allowed to look at three things. So from the bottom here, it's allowed to look at its own state, so at its own 16 dimensional vectors, and it is allowed to look at its neighbors. And it does this by doing a convolution with a sobel filter. And the sobel filter is simply a fixed filter that you do a three by three convolution with, as you can see here, is basically a gradient filter. So basically measures the difference between what's to the left of the cell and what's to the right of the cell. And here in the sobel y direction, the same in the y direction. So it's basically allowed to look at gradients in states of its neighbors. This is modeled after real cells kind of looking at chemical gradients in their neighborhoods. So this is all this, this is all that the cell has to decide what it's supposed to do next, right. And what we want is we want that each individual cell only looking at its neighbors produces in total, they will produce these kind of very complex pattern. So the update rule is the following, you convolute with the sobel filters and you take the cell identity, you put this all into a vector, you put it through a very, very small neural network. So this is one dense layer, one relu, and then another dense layer to get the next 16 dimensional vector, which is the next state. And that defines your update rules. That doesn't really define the next state that defines the Delta to the next state, kind of like a residual neural network. So basically, which cells need to come alive in the next time step, which cells need to die and how are they to change their colors, right. And then you get the output of the next step, right. So that's, that's basically the entire thing. So all that is learned here is the the update rule of the neural network, right. So basically, the neural network decides, it looks at a cell and its neighbors and decides what the information in the cell in the next step should be, right. And you do this for multiple time steps. That's I actually want to go down here, you do this for multiple time steps, the initial state is simply one cell that is alive here in the middle, everything else is dead, this cell is alive and black, you do this for many steps, right. And then at some point, you get an output. And you compare the output to your desired output, you compute a loss that is differentiable. And because your update rule is differentiable, and your loss is differentiable, you can backprop through time to the original pattern here. And you can basically learn this update rule by backproping through time. This is a bit like an LSTM. And if you see in the architecture here, I think this residual connection is really the key to making this work over time. Because usually, I would not expect something like this to easily emerge over time because you have the problem of vanishing and exploding gradients. And you have no way of mitigating this problem here, this problem here in this simple neural network. But in any case, they backprop through time here. So each of these update steps, which again, this isn't one neural network with many layers, this is the same neural network applied over and over and over and over again, and then there is a loss computed. So basically, the gradients will accumulate over these steps, and they tell the network what it needs to adjust to go from this one single black pixel to this final desired state. 
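As a rough sketch of that update rule in PyTorch (this is not the authors' code; the hidden width, the Sobel scaling and the zero-initialised last layer are assumptions on my part): fixed Sobel filters give each cell the gradients of all 16 channels over its 3x3 neighbourhood, a tiny per-pixel two-layer network turns that perception vector into a delta, and the delta is added residually.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]) / 8.0

class UpdateRule(nn.Module):
    def __init__(self, channels=16, hidden=128):
        super().__init__()
        # One fixed Sobel filter per channel, in x and in y direction.
        self.register_buffer("kx", SOBEL_X.repeat(channels, 1, 1, 1))
        self.register_buffer("ky", SOBEL_X.t().repeat(channels, 1, 1, 1))
        # "dense, relu, dense" applied to every pixel independently == 1x1 convolutions.
        self.net = nn.Sequential(
            nn.Conv2d(channels * 3, hidden, 1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 1, bias=False),
        )
        nn.init.zeros_(self.net[-1].weight)            # start out as "change nothing"

    def forward(self, state):                          # state: (batch, 16, H, W)
        c = state.shape[1]
        gx = F.conv2d(state, self.kx, padding=1, groups=c)
        gy = F.conv2d(state, self.ky, padding=1, groups=c)
        perception = torch.cat([state, gx, gy], dim=1) # what each cell "sees"
        return state + self.net(perception)            # residual update
```

Running it is just `state = rule(state)` applied for some number of steps, and because every operation is differentiable you can backpropagate through the whole unroll exactly as described.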
If you do this over and over again, you learn things, you learn a update rule that will give rise to that pattern, hopefully. Now, here is a kind of an illustration of this alive and dead thing. So what they do is they consider cells that have an alpha channel, one of these channels called alpha, they have an alpha channel above 0.1, it's considered alive, right, and part of the loss. Then the neighbors, the neighbors of these cells that are below 0.1, but are neighboring a cell that is mature, alive, they're called growing, they're also part of the loss, right. So simply by being close to something, someone that is alive, a cell that is alive, you are considered alive as well, but your neighbors aren't, right, only the neighbors of really alive. So there's really alive, kind of alive, and then there is dead. And dead, the meaning of dead here, the gray ones, is they're not, they won't become part of the pattern, part of the loss, right, they're dead. All right, so what will this get you initially? So here is an animation, if they train this just like that, just back up through time with a target pattern, and then they let it run, you see these patterns actually emerge. So that's pretty cool. But then if you let them run for longer than they've been trained, you basically have no guarantees on what's going to happen. Like these update rules are simply trained to achieve the pattern within a certain number of steps, right. If you run for more than that, and apply the update rules for longer than that, you you have like there's little like you have no guarantee what's going to happen, these update rules will simply continue, as you can see here and produce some weird stuff. So they are trying to fix this. So what they do is basically they train for longer, but they do it in a in a kind of different way. So at each at each step of training, and as a step, I mean, a batch over these number of time steps. So so they sample a batch, initially, it's just all black pixels, right, as we see above. And then they optimize for these number of time steps. And then they're at the end. So what they do is they don't always start from the black pixel. But sometimes they also start from a previously seen end state. So basically, they take the end state of a previous training run, and then they just continue from that instead of starting from the initial point. And you see after some training, they get better and better. So initially, you see the thing on the left here. The thing on the left here being a starting state. And then it progressively gets better. So basically, by starting from end states of other things, you learn to. So if the end state of the other thing isn't very good, you basically learn to go to the good pattern to the pattern you want. But of course, over time, there's going to be more and more of these end states that you train from that are already pretty close to the pattern you want. And so then what that means is you learn to reproduce the pattern. So you are already at a good point, you learn to stay at that good point. And then that enables you to basically learn update rules that if you're not at the pattern you want, they go towards the pattern you want. But also if you run for longer, if you are already are at the pattern you want, then you stay at the pattern you want. So that's what we basically saw in the very initial demonstration where you could, this is a live demonstration like this thing up here, this is a live, this is running, right. 
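A minimal sketch of that training protocol, often called a sample pool, could look like the following; the pool size, step range, learning rate and the number of fresh seeds per batch are my own guesses rather than the article's exact settings, and the alive/dead masking with the 0.1 threshold is left out for brevity. `update_rule` is meant to be something like the module sketched earlier.

```python
import random
import torch

def train(update_rule, seed_state, target_rgba,
          steps=(64, 96), pool_size=1024, batch=8, iters=2000):
    opt = torch.optim.Adam(update_rule.parameters(), lr=2e-3)
    pool = [seed_state.clone() for _ in range(pool_size)]   # everything starts as the seed

    for _ in range(iters):
        idx = random.sample(range(pool_size), batch)
        x = torch.stack([pool[i] for i in idx])             # mostly previously seen end states
        x[0] = seed_state                                    # but keep some fresh seeds too

        for _ in range(random.randint(*steps)):              # unroll the CA in time
            x = update_rule(x)

        loss = ((x[:, :4] - target_rgba) ** 2).mean()        # match RGB + alpha to the target
        opt.zero_grad()
        loss.backward()                                      # backprop through time
        opt.step()

        for j, i in enumerate(idx):                          # write the end states back
            pool[i] = x[j].detach()
```

With a pool like this the rule is trained both to grow the pattern from the seed and to hold it once it is there, which is exactly the persistence shown in the live demo.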
And you see the update rules data, they are continuously applied, they basically stay at the pattern where they are. And that is also that is learned because of this protocol that you train from end states as well as from beginning states. So the next thing is what I'm doing here is I can destroy part of the pattern, and it will kind of regrow right you see that here. So this is also a part so for now we've also only learned to go from a single pixel like here from a black pixel to the pattern. But now we also want to learn to go to regrow when destroyed because that is, you can see this is modeled after kind of live tissue. So here you can see the parts are cut away and then the cells try to regrow. So this is I think initially, this is initially when you just train them, they exhibit some of that property, but not like very satisfying in some cases. So what they do is they train not only do they use end states, like we saw before, but also some of their training samples are simply the pattern destroyed a bit. So as you can see in some of these samples, like these here, they in each sample, they kind of cut out part of the sample and they train the update rules to regrow that part that gives you that now gives you the ability to if you damage to pretty consistently regrow the pattern, as you can see here. And they also train for rotation, which is non trivial if you have these kind of pixel based, pixel based models. But I want to jump that because I want to keep it kind of short here. So the entire goal of this is to kind of model the behavior of natural cells, because the natural cells, they don't have an overarching view, they only have the view of their neighbors, right, and they are able to grow into very complex structures. I invite you to give this a try. The distilled out pop journal is very cool. It's very interactive, you can play around with it, you can reproduce things in a collab. And yeah, shout out to the authors here, Alexander Morbintsev, Ettore Randazzo, Evan Nicholson and Michael Levin. Yeah, that was it from me. Thanks for watching and bye bye. | [
{
"start": 0,
"end": 8.16,
"text": " Hi there. Today I thought we would be looking at growing neural cellular automata, which"
},
{
"start": 8.16,
"end": 16.48,
"text": " is an article on distill.pub, which I found pretty neat. So this is kind of an interactive"
},
{
"start": 16.48,
"end": 23,
"text": " article. If you don't know distill.pub, check it out. It is a cool new concept as an alternative"
},
{
"start": 23,
"end": 31.32,
"text": " to the classical journals or the conference system. So what it allows you to do is to"
},
{
"start": 31.32,
"end": 41.519999999999996,
"text": " kind of write articles that are a bit more interactive, a bit more engaging and don't"
},
{
"start": 41.519999999999996,
"end": 48,
"text": " have the... There's no PDFs, there's no pages, there are animations and so on. So I thought"
},
{
"start": 48,
"end": 55.12,
"text": " we'd be looking at this article today, which is kind of a growing neural cellular automata."
},
{
"start": 55.12,
"end": 61.76,
"text": " So if you don't know what cellular automata are, this is a very kind of old concept. The"
},
{
"start": 61.76,
"end": 66.84,
"text": " most famous one is called the game of life, where you have these cells. Here you can see"
},
{
"start": 66.84,
"end": 73.2,
"text": " every pixel is a cell and they follow some kind of update rule. And usually it's the"
},
{
"start": 73.2,
"end": 78.2,
"text": " update rule, something like if my neighbor is alive, I'm going to be alive as well in"
},
{
"start": 78.2,
"end": 84.28,
"text": " the next time step. Or if enough neighbors are alive and if only very few neighbors are"
},
{
"start": 84.28,
"end": 88.88,
"text": " alive, I'm going to die. So this gives rise to these kinds of patterns. And here the same"
},
{
"start": 88.88,
"end": 96.96000000000001,
"text": " is done with color. And the update rules are a bit more complicated. So basically, ah,"
},
{
"start": 96.96,
"end": 105.67999999999999,
"text": " traveler. Oh, nice. Okay. So in the game of life, if you play it, the most prestigious"
},
{
"start": 105.67999999999999,
"end": 112.6,
"text": " thing to get is are these kind of travelers. I've not... This is the first time I've managed"
},
{
"start": 112.6,
"end": 120.28,
"text": " to do this in this thing. So what does it do? So each pixel here is kind of an autonomous"
},
{
"start": 120.28,
"end": 125.75999999999999,
"text": " thing that is only allowed to look at its neighbors in order to decide whether or not"
},
{
"start": 125.76,
"end": 133.52,
"text": " in the next time step it is going to be alive. Look, it's like incorporating again. So each"
},
{
"start": 133.52,
"end": 139.28,
"text": " cell looks at its neighbors and then decides what its next state will be. And here it's"
},
{
"start": 139.28,
"end": 146.96,
"text": " not only alive or dead. Dead would be white and alive would be anything else. But it is"
},
{
"start": 146.96,
"end": 153.08,
"text": " also, I guess this white is... It is also the color. So each cell decides on what color"
},
{
"start": 153.08,
"end": 161.32000000000002,
"text": " it should have. And then this is a live thing. So it kind of reproduces, right? You can see"
},
{
"start": 161.32000000000002,
"end": 167.16000000000003,
"text": " if I start it new. If you double click here, it grows from somewhere else. And this is"
},
{
"start": 167.16000000000003,
"end": 172.08,
"text": " completely local. So these cells really only look at their neighbors. That's the special"
},
{
"start": 172.08,
"end": 176.8,
"text": " part, right? They don't look at the global structure. It's not like a GAN that can look"
},
{
"start": 176.8,
"end": 183.48000000000002,
"text": " at the entire picture and decide what's still missing. What these can also do if you destroy"
},
{
"start": 183.48000000000002,
"end": 190.56,
"text": " part of it, they can kind of grow back just, again, just out of local update rules at the"
},
{
"start": 190.56,
"end": 196.76000000000002,
"text": " level of the individual cells and their neighbors. They're trained to do these big structures."
},
{
"start": 196.76000000000002,
"end": 205.78,
"text": " So let's look at how they do it. So basically, here's how they model a cell. And let's go"
},
{
"start": 205.78,
"end": 213.56,
"text": " over here. So each cell, as I said, is made up of 16 channels. And here it's modeled as"
},
{
"start": 213.56,
"end": 220.84,
"text": " three by three, but I think each cell is really one pixel. And each cell is allowed to look"
},
{
"start": 220.84,
"end": 226.8,
"text": " at its eight neighbors, right? So each cell is allowed to look at its eight neighbors"
},
{
"start": 226.8,
"end": 235.32,
"text": " across 16 different channels. And the 16 channels here mean the first three are RGB. So this"
},
{
"start": 235.32,
"end": 240.95999999999998,
"text": " is the actual color that is seen. Then there is an alive or dead channel. So what they"
},
{
"start": 240.95999999999998,
"end": 248.35999999999999,
"text": " call an alpha channel. So if this channel is high, the cell is considered alive. Otherwise,"
},
{
"start": 248.35999999999999,
"end": 254.51999999999998,
"text": " it is considered dead and not part of the pattern. So a cell can come alive or die,"
},
{
"start": 254.51999999999998,
"end": 259.68,
"text": " depending on its neighbors. And then the rest, the rest 12 channels are what they call hidden"
},
{
"start": 259.68,
"end": 267.44,
"text": " channels. So the cell is allowed to encode some hidden state there. So there's each cell"
},
{
"start": 267.44,
"end": 272.6,
"text": " is represented by the 16 dimensional vector, which is not much right. And then each cell"
},
{
"start": 272.6,
"end": 278.78000000000003,
"text": " is allowed to look at three things. So from the bottom here, it's allowed to look at its"
},
{
"start": 278.78000000000003,
"end": 285.52,
"text": " own state, so at its own 16 dimensional vectors, and it is allowed to look at its neighbors."
},
{
"start": 285.52,
"end": 291.03999999999996,
"text": " And it does this by doing a convolution with a sobel filter. And the sobel filter is simply"
},
{
"start": 291.03999999999996,
"end": 298.2,
"text": " a fixed filter that you do a three by three convolution with, as you can see here, is"
},
{
"start": 298.2,
"end": 305.56,
"text": " basically a gradient filter. So basically measures the difference between what's to"
},
{
"start": 305.56,
"end": 309.96,
"text": " the left of the cell and what's to the right of the cell. And here in the sobel y direction,"
},
{
"start": 309.96,
"end": 316.56,
"text": " the same in the y direction. So it's basically allowed to look at gradients in states of"
},
{
"start": 316.56,
"end": 324.44,
"text": " its neighbors. This is modeled after real cells kind of looking at chemical gradients"
},
{
"start": 324.44,
"end": 330.64,
"text": " in their neighborhoods. So this is all this, this is all that the cell has to decide what"
},
{
"start": 330.64,
"end": 337.06,
"text": " it's supposed to do next, right. And what we want is we want that each individual cell"
},
{
"start": 337.06,
"end": 342.44,
"text": " only looking at its neighbors produces in total, they will produce these kind of very"
},
{
"start": 342.44,
"end": 348.9,
"text": " complex pattern. So the update rule is the following, you convolute with the sobel filters"
},
{
"start": 348.9,
"end": 354.28,
"text": " and you take the cell identity, you put this all into a vector, you put it through a very,"
},
{
"start": 354.28,
"end": 359.84000000000003,
"text": " very small neural network. So this is one dense layer, one relu, and then another dense"
},
{
"start": 359.84000000000003,
"end": 365.44,
"text": " layer to get the next 16 dimensional vector, which is the next state. And that defines"
},
{
"start": 365.44,
"end": 370.28,
"text": " your update rules. That doesn't really define the next state that defines the Delta to the"
},
{
"start": 370.28,
"end": 376.6,
"text": " next state, kind of like a residual neural network. So basically, which cells need to"
},
{
"start": 376.6,
"end": 381.24,
"text": " come alive in the next time step, which cells need to die and how are they to change their"
},
{
"start": 381.24,
"end": 389.1,
"text": " colors, right. And then you get the output of the next step, right. So that's, that's"
},
{
"start": 389.1,
"end": 395.64000000000004,
"text": " basically the entire thing. So all that is learned here is the the update rule of the"
},
{
"start": 395.64000000000004,
"end": 400.48,
"text": " neural network, right. So basically, the neural network decides, it looks at a cell and its"
},
{
"start": 400.48,
"end": 406.88,
"text": " neighbors and decides what the information in the cell in the next step should be, right."
},
{
"start": 406.88,
"end": 411.96000000000004,
"text": " And you do this for multiple time steps. That's I actually want to go down here, you do this"
},
{
"start": 411.96000000000004,
"end": 417.28000000000003,
"text": " for multiple time steps, the initial state is simply one cell that is alive here in the"
},
{
"start": 417.28,
"end": 422.59999999999997,
"text": " middle, everything else is dead, this cell is alive and black, you do this for many steps,"
},
{
"start": 422.59999999999997,
"end": 428.84,
"text": " right. And then at some point, you get an output. And you compare the output to your"
},
{
"start": 428.84,
"end": 434.28,
"text": " desired output, you compute a loss that is differentiable. And because your update rule"
},
{
"start": 434.28,
"end": 442.88,
"text": " is differentiable, and your loss is differentiable, you can backprop through time to the original"
},
{
"start": 442.88,
"end": 447.76,
"text": " pattern here. And you can basically learn this update rule by backproping through time."
},
{
"start": 447.76,
"end": 453.64,
"text": " This is a bit like an LSTM. And if you see in the architecture here, I think this residual"
},
{
"start": 453.64,
"end": 459.96,
"text": " connection is really the key to making this work over time. Because usually, I would not"
},
{
"start": 459.96,
"end": 465.04,
"text": " expect something like this to easily emerge over time because you have the problem of"
},
{
"start": 465.04,
"end": 470.68,
"text": " vanishing and exploding gradients. And you have no way of mitigating this problem here,"
},
{
"start": 470.68,
"end": 480.84000000000003,
"text": " this problem here in this simple neural network. But in any case, they backprop through time"
},
{
"start": 480.84000000000003,
"end": 487.6,
"text": " here. So each of these update steps, which again, this isn't one neural network with"
},
{
"start": 487.6,
"end": 493.24,
"text": " many layers, this is the same neural network applied over and over and over and over again,"
},
{
"start": 493.24,
"end": 498.88,
"text": " and then there is a loss computed. So basically, the gradients will accumulate over these steps,"
},
{
"start": 498.88,
"end": 504.32,
"text": " and they tell the network what it needs to adjust to go from this one single black pixel"
},
{
"start": 504.32,
"end": 511.24,
"text": " to this final desired state. If you do this over and over again, you learn things, you"
},
{
"start": 511.24,
"end": 518.96,
"text": " learn a update rule that will give rise to that pattern, hopefully. Now, here is a kind"
},
{
"start": 518.96,
"end": 525.4,
"text": " of an illustration of this alive and dead thing. So what they do is they consider cells"
},
{
"start": 525.4,
"end": 531.28,
"text": " that have an alpha channel, one of these channels called alpha, they have an alpha channel above"
},
{
"start": 531.28,
"end": 541.6,
"text": " 0.1, it's considered alive, right, and part of the loss. Then the neighbors, the neighbors"
},
{
"start": 541.6,
"end": 550.28,
"text": " of these cells that are below 0.1, but are neighboring a cell that is mature, alive,"
},
{
"start": 550.28,
"end": 554.68,
"text": " they're called growing, they're also part of the loss, right. So simply by being close"
},
{
"start": 554.68,
"end": 560.64,
"text": " to something, someone that is alive, a cell that is alive, you are considered alive as"
},
{
"start": 560.64,
"end": 565.88,
"text": " well, but your neighbors aren't, right, only the neighbors of really alive. So there's"
},
{
"start": 565.88,
"end": 572.3599999999999,
"text": " really alive, kind of alive, and then there is dead. And dead, the meaning of dead here,"
},
{
"start": 572.3599999999999,
"end": 578.12,
"text": " the gray ones, is they're not, they won't become part of the pattern, part of the loss,"
},
{
"start": 578.12,
"end": 590.36,
"text": " right, they're dead. All right, so what will this get you initially? So here is an animation,"
},
{
"start": 590.36,
"end": 595.68,
"text": " if they train this just like that, just back up through time with a target pattern, and"
},
{
"start": 595.68,
"end": 600.6,
"text": " then they let it run, you see these patterns actually emerge. So that's pretty cool. But"
},
{
"start": 600.6,
"end": 606.28,
"text": " then if you let them run for longer than they've been trained, you basically have no guarantees"
},
{
"start": 606.28,
"end": 612.68,
"text": " on what's going to happen. Like these update rules are simply trained to achieve the pattern"
},
{
"start": 612.68,
"end": 617.4399999999999,
"text": " within a certain number of steps, right. If you run for more than that, and apply the"
},
{
"start": 617.4399999999999,
"end": 624.0799999999999,
"text": " update rules for longer than that, you you have like there's little like you have no"
},
{
"start": 624.0799999999999,
"end": 629.06,
"text": " guarantee what's going to happen, these update rules will simply continue, as you can see"
},
{
"start": 629.06,
"end": 635.4399999999999,
"text": " here and produce some weird stuff. So they are trying to fix this. So what they do is"
},
{
"start": 635.44,
"end": 639.7600000000001,
"text": " basically they train for longer, but they do it in a in a kind of different way. So"
},
{
"start": 639.7600000000001,
"end": 649.7600000000001,
"text": " at each at each step of training, and as a step, I mean, a batch over these number of"
},
{
"start": 649.7600000000001,
"end": 656.5200000000001,
"text": " time steps. So so they sample a batch, initially, it's just all black pixels, right, as we see"
},
{
"start": 656.5200000000001,
"end": 663.44,
"text": " above. And then they optimize for these number of time steps. And then they're at the end."
},
{
"start": 663.44,
"end": 668.7600000000001,
"text": " So what they do is they don't always start from the black pixel. But sometimes they also"
},
{
"start": 668.7600000000001,
"end": 678.72,
"text": " start from a previously seen end state. So basically, they take the end state of a previous"
},
{
"start": 678.72,
"end": 684.7600000000001,
"text": " training run, and then they just continue from that instead of starting from the initial"
},
{
"start": 684.7600000000001,
"end": 693.32,
"text": " point. And you see after some training, they get better and better. So initially, you see"
},
{
"start": 693.32,
"end": 701.12,
"text": " the thing on the left here. The thing on the left here being a starting state. And then"
},
{
"start": 701.12,
"end": 708.1600000000001,
"text": " it progressively gets better. So basically, by starting from end states of other things,"
},
{
"start": 708.1600000000001,
"end": 715.6800000000001,
"text": " you learn to. So if the end state of the other thing isn't very good, you basically learn"
},
{
"start": 715.6800000000001,
"end": 722.34,
"text": " to go to the good pattern to the pattern you want. But of course, over time, there's going"
},
{
"start": 722.34,
"end": 726.94,
"text": " to be more and more of these end states that you train from that are already pretty close"
},
{
"start": 726.94,
"end": 734.8000000000001,
"text": " to the pattern you want. And so then what that means is you learn to reproduce the pattern."
},
{
"start": 734.8000000000001,
"end": 740.44,
"text": " So you are already at a good point, you learn to stay at that good point. And then that"
},
{
"start": 740.44,
"end": 747.6,
"text": " enables you to basically learn update rules that if you're not at the pattern you want,"
},
{
"start": 747.6,
"end": 753,
"text": " they go towards the pattern you want. But also if you run for longer, if you are already"
},
{
"start": 753,
"end": 759.5,
"text": " are at the pattern you want, then you stay at the pattern you want. So that's what we"
},
{
"start": 759.5,
"end": 765.36,
"text": " basically saw in the very initial demonstration where you could, this is a live demonstration"
},
{
"start": 765.36,
"end": 771.6,
"text": " like this thing up here, this is a live, this is running, right. And you see the update"
},
{
"start": 771.6,
"end": 776.6800000000001,
"text": " rules data, they are continuously applied, they basically stay at the pattern where they"
},
{
"start": 776.68,
"end": 782.4,
"text": " are. And that is also that is learned because of this protocol that you train from end states"
},
{
"start": 782.4,
"end": 791.88,
"text": " as well as from beginning states. So the next thing is what I'm doing here is I can destroy"
},
{
"start": 791.88,
"end": 799,
"text": " part of the pattern, and it will kind of regrow right you see that here. So this is also a"
},
{
"start": 799,
"end": 804.0799999999999,
"text": " part so for now we've also only learned to go from a single pixel like here from a black"
},
{
"start": 804.08,
"end": 811.8000000000001,
"text": " pixel to the pattern. But now we also want to learn to go to regrow when destroyed because"
},
{
"start": 811.8000000000001,
"end": 823.12,
"text": " that is, you can see this is modeled after kind of live tissue. So here you can see the"
},
{
"start": 823.12,
"end": 834,
"text": " parts are cut away and then the cells try to regrow. So this is I think initially, this"
},
{
"start": 834,
"end": 840.04,
"text": " is initially when you just train them, they exhibit some of that property, but not like"
},
{
"start": 840.04,
"end": 847.32,
"text": " very satisfying in some cases. So what they do is they train not only do they use end"
},
{
"start": 847.32,
"end": 854.36,
"text": " states, like we saw before, but also some of their training samples are simply the pattern"
},
{
"start": 854.36,
"end": 861.04,
"text": " destroyed a bit. So as you can see in some of these samples, like these here, they in"
},
{
"start": 861.04,
"end": 867.92,
"text": " each sample, they kind of cut out part of the sample and they train the update rules"
},
{
"start": 867.92,
"end": 875.76,
"text": " to regrow that part that gives you that now gives you the ability to if you damage to"
},
{
"start": 875.76,
"end": 884.52,
"text": " pretty consistently regrow the pattern, as you can see here. And they also train for"
},
{
"start": 884.52,
"end": 891.72,
"text": " rotation, which is non trivial if you have these kind of pixel based, pixel based models."
},
{
"start": 891.72,
"end": 898.64,
"text": " But I want to jump that because I want to keep it kind of short here. So the entire"
},
{
"start": 898.64,
"end": 905.76,
"text": " goal of this is to kind of model the behavior of natural cells, because the natural cells,"
},
{
"start": 905.76,
"end": 910.92,
"text": " they don't have an overarching view, they only have the view of their neighbors, right,"
},
{
"start": 910.92,
"end": 918.4399999999999,
"text": " and they are able to grow into very complex structures. I invite you to give this a try."
},
{
"start": 918.4399999999999,
"end": 923.5999999999999,
"text": " The distilled out pop journal is very cool. It's very interactive, you can play around"
},
{
"start": 923.5999999999999,
"end": 931.1999999999999,
"text": " with it, you can reproduce things in a collab. And yeah, shout out to the authors here,"
},
{
"start": 931.2,
"end": 944.12,
"text": " Alexander Morbintsev, Ettore Randazzo, Evan Nicholson and Michael Levin. Yeah, that was"
},
{
"start": 944.12,
"end": 961.48,
"text": " it from me. Thanks for watching and bye bye."
}
] |
tC01FRB0M7w | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Turing-NLG, DeepSpeed and the ZeRO optimizer | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"nlp",
"natural language processing",
"machine translation",
"arxiv",
"attention mechanism",
"attention",
"transformer",
"seq2seq",
"bert",
"long sequence",
"memory",
"gpt-2",
"Megatron",
"Microsoft",
"distributed",
"parallelism"
] | Microsoft has trained a 17-billion-parameter language model that achieves state-of-the-art perplexity. This video takes a look at the ZeRO optimizer that enabled this breakthrough. ZeRO lets you combine model and data parallelism without huge cuts in training speed.
https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/
https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/
https://github.com/microsoft/DeepSpeed
https://arxiv.org/abs/1910.02054
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi everyone, today we're going to look at Turing NLGA 17 billion parameter language model by Microsoft. The latest and greatest of language modeling by Microsoft. What is this? It is a language model. A language model is basically a model that learns to produce language, given language. So if you start a sentence it's supposed to finish a sentence. If you start a paragraph it's supposed to finish the paragraph. That's a language model. Ultimately you can make it do different things like answer questions, have a conversation with you, anything to do with understanding language. The special thing about this one is that it's ginormous. So if you look at the scale of language models, so BERT was quite a large thing back in its day. Ye Olde BERT, you can see here it has about 340 million parameters. Now I have to say all of these language models are transformers. This is kind of the state of the art today. So all of these are kind of our transformer based models. Then GPT-2 here, you can see that was the model that was so large it was too dangerous to be released into the world. That stands at 1.5 billion parameters. Megatron LM by Nvidia 8.3 billion and now we are at 17 billion parameters for this language model. And it is a bit better. People just throw more and more and more resources at this language problem. So what you can do with it, you can of course do language modeling. So what happens is you take a bunch of text like all of Wikipedia and all of the internet and all of Reddit and so on and you let the model train on it to understand, to basically produce that sort of language. And then you can measure it, for example it's a perplexity on a validation set. And the Turing NLG is currently state-of-the-art on that. It can also do for example question answering. So you can ask the question and give it a passage about that question and it will then tell you the answer that it deduced from that passage given the question as you can see here. What is more interesting is that a usual QA system will point to the passage. So it will point to the words Tristan Prediman. Whereas with a generative model like this one what you can do is you can make it actually output an answer as a sentence. So it will generate the text Jason Bras was engaged to Tristan Prediman. If you ask a question without giving it a context and just ask it to generate an answer it will do so as well. I don't know if these answers are cherry-picked but they call this zero-shot question answering. So if you ask when did World War II end and it can output World War II ended in 1945. Simply out of regularities it detected in the training data. So I mean that's what I'm kind of wondering. At what point are these models, do they have so many parameters that they simply reproduce the training data? I mean this clearly some article from the training data is about World War II or many are and it simply learned that following a question when did World War II end it needs to answer with the appropriate passage. I'm not sure that is a proper measure of language understanding if you simply can bake more and more of the training data into these many many parameters but I'm not the judge of that here. It can do it very well. So yeah what I'm actually more interested in is this thing is called the zero optimizer that they use to train the model. So the model is just a transformer, it's just a big big transformer model. 
There is nothing really special about the model except that it is larger than the last model and therefore a bit better. What is interesting is that this would have been pretty impossible to train if it weren't for this zero optimizer of this deep speed library and Microsoft has released this deep speed library. It's compatible for now with PyTorch. You can check this out. I'll put a link into the description and I want to dive into this a bit. There's a paper, it's by Samyam Raj Bandari and all by Microsoft. The paper describes in detail the optimizer but it's not very visual. That's why we're going to the blog post. You can see it gives many speed ups over the previous Megatron LM model that Nvidia just trained using what Nvidia has. Nvidia has machines that are interconnected within the machine with very fast buses between GPUs. But this zero optimizer can now also go over the network and make it pretty fast. Let's explore that a bit. I have the copy this here. We'll look how the zero optimizer works. Usually what you do is if you have multiple GPUs you can do something like this. This is called data parallelism. What you have is a model and the model in this case fits on your GPU. It fits on a single GPU. The blue thing here is the model. I'll actually draw this. The model is a neural network so it has a bunch of layers. Layer, layer, layer, layer. What you want to do is you pass data forward. Here is some loss and then right into the loss function and then backward again. That's basically what you need to do. You need to pass it forward and backward in order to do back propagation training. If this all fits into one box that's completely fine. If this fits into one machine, cool. We can just put many batches of data through batch one, batch two, batch three and so on. Train the model. If you want to do a speed up using this you can do so. If you have lots of data you can do what's called, and I'm always confused, I think this is called data parallelism or is it called model parallelism. In any case what you can do is you can take a second machine or many of those, replicate the model. These two models here are exactly the same. What you do is you take your data and you split it up. You take double the amount of data and you put one batch of data through the top part and you put the other through the bottom part. You do your forward passes on the machines and you do your backward passes. Then what you want to do is you want to sync between the machines what they learned from the data. Each machine has a different set of data points. Each machine calculates its own parameter updates. It learns from the data it has and then they communicate to keep because this here and this here should be the same. It's the same model. They have to keep in sync. This can be usually can be done fairly efficiently especially if these aren't actually two machines but just two GPUs inside of one large machine. If this is a large machine this is GPU 0 and this is GPU 1. This is pretty standard because especially on Nvidia machines they have these whatever I think they call them InfiniBand or so. Nvidia has these connectors that connects the GPUs together really fast. You can keep these in sync but now the problem becomes what if you want to train a model that is larger than this. Let's forget about the data parallelism for now if that is what it's called and just consider a model that is too large. A model that is too large will not fit into a machine. This is a model as a large model. 
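Before moving on to the too-large case, here is a toy, single-process Python sketch of that data-parallel sync step, just to pin down the mechanics: every replica sees its own slice of the batch, computes local gradients, the gradients are averaged, and every replica applies the same update so the copies stay identical. In a real setup the averaging is an all-reduce over NVLink or the network; the tiny linear model and the learning rate here are placeholders of my choosing.

```python
import copy
import torch
import torch.nn as nn

model = nn.Linear(32, 10)
replicas = [copy.deepcopy(model) for _ in range(2)]       # "GPU 0" and "GPU 1", same weights

data = torch.randn(16, 32)
labels = torch.randint(0, 10, (16,))
shards = zip(data.chunk(2), labels.chunk(2))              # each worker gets its own half of the batch

for replica, (x, y) in zip(replicas, shards):
    loss = nn.functional.cross_entropy(replica(x), y)
    loss.backward()                                        # local forward and backward pass

with torch.no_grad():                                      # the "keep in sync" step
    for params in zip(*[r.parameters() for r in replicas]):
        avg_grad = sum(p.grad for p in params) / len(params)
        for p in params:
            p.grad = avg_grad.clone()
            p -= 0.1 * p.grad                              # identical SGD step on every replica
```

After this sync both replicas have taken exactly the same step, so they remain interchangeable copies, which is the invariant data parallelism relies on.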
What you want to do is you want to pack some of the model onto your first machine and then take the other part of the model and pack it onto another machine. You separate the model and put it on different machines. If you have a batch of data what you have to do is you pass it pass it pass it forward propagate as you regularly would but then you have an intermediate result. You send that to the next machine and you forward propagate that. At the end here you have a loss. You want to back propagate regularly through this machine. You have an intermediate result of back propagation. Send it over the network and back prop all the way through the model. That's how you can train a model that is too large for one machine if you have multiple machines. The problem here of course is this part. Just as you had to keep in sync the model before, now your communication problem becomes one of... You have to send the intermediate stages to that model and you have to send the intermediate stage of the back propagation back to that part of the model. While this part is working this part is idling away. The network overhead is just very costly. Especially if your model is so large it can't even fit into one of these single boxes. This is very problematic here. It's still doable. But what the zero optimizer does is it does both data and model parallelism. It can train models that are too large for a single machine. It can do data parallelism at the same time. Basically everything is working all the time. There is not much wasted computation. The communication is efficient and so on. It's really a technical achievement. It's not so much a scientific advance. It's really a technical achievement this optimizer. We'll shortly go through. There is a kind of an animation on the website but it's super slow. I think this might be the first time that I will be faster at explaining something than a video. Let's see here. What you do is... Let's just consider these three GPUs. Before that it would all fit on one machine. But now let's say you don't actually have that much memory. You don't have these giant empty blocks here. You just have a bit of that. So you have to split your model. The blue parts here are your model. These are model parameters. The orange part here is memory you need to store gradients. You need as many gradients as you have model parameters. Because you do gradient descent. The green stuff here are what's called optimizer parameters. Now if you just have SGD these would be non-existent. But if you have something like AdaGrad or Atom they have additional parameters for each model parameter that they need to keep track of. So these are stored here. There can be significant overhead. There's also like a floating point 3216 conversion going on here. Don't want to go into that. So you split your model onto these three machines. Let's say that's your entire model. Your model is six blocks wide. You need to forward propagate now through everything. So here is what Xero does. I think it's pretty cool. What we need to do is we have these three different batches of data and we want to forward propagate them all through the model. Through the same model at the same time. As if the model were actually stored on all these machines. Like if all of these machines had the entire model. And we can do a bit of communication. So what we do first is... This one's easy. Data zero through the first two layers here is easy. Because we have them. 
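As a quick aside before finishing the ZeRO walkthrough, the plain model-parallel scheme described a moment ago looks roughly like this as a toy sketch; the two stages stand in for two machines, both run on CPU here purely for illustration, and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

stage0 = nn.Sequential(nn.Linear(32, 64), nn.ReLU())   # lives on machine 0
stage1 = nn.Sequential(nn.Linear(64, 10))              # lives on machine 1

x = torch.randn(8, 32)        # the batch enters machine 0
h = stage0(x)                 # forward on machine 0
# --- h would now be sent over the network to machine 1 ---
out = stage1(h)               # forward on machine 1, while machine 0 idles
loss = out.pow(2).mean()
loss.backward()               # backward runs through the same cut in reverse:
                              # the gradient w.r.t. h travels back to machine 0
```

Only the small intermediate activation and its gradient cross the cut, but while one machine computes the other one waits, which is exactly the idle time and network overhead being complained about here.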
So bang you go through the first you get an intermediate result here and here. How do we propagate data one through the first layer? We can't send data one here. That would be too expensive. And that's the whole point would be lost. We want to actually compute data one on this GPU at the same time. What we do is before we start we actually communicate these two blocks here to GPU one. We send these parameters around and fill them in here. We send them here and we also send them here. We send the parameters to all the machines. Then we can actually forward prop data one through this and data three through this. So we can do forward prop. After we've communicated all the GPUs can be working. Same with layer two. Layer two simply can send these two here. You can see that these two here to the other machines. Now while it's doing that we've already propagated through the first layer. We've already propagated here and here through the first layer. So we can actually delete these again. We can delete these first layer parameters that we sent around again. So here you see how we can save memory. We don't keep all the model in sync and all the machines. We send whatever we need on the other machines and then once the computation is done they can delete it again. Because there's always one machine, this one here for the middle parameters, that keeps track of the parameters and that can at any point if they're needed send them again. So that's the big kind of catch. You can forward prop now through these two. They're already present. Then you can delete those again on the machines where they're not natively stored. From here you can send those two. Also up here you can send those two and forward prop your model through to the end. That was a mistake. Then each machine calculates its own loss. The backward propagation happens in much the same way. If you follow so far you can already imagine. Now the loss is different because there's a different batch of data going through each machine. There's a different batch of data going through each machine but each machine has computed with the same model due to the communication of the zero optimizer. That's pretty cool. You get the benefits of data parallelism, lots of data on the different machines and you also split up the model across the machines. You don't actually store the model on any of these machines. You only send. From here you send as you need and then you delete again. For the backward propagation, same thing. You calculate gradients. You calculate gradients here and you send the gradients as needed to the other machines. You calculate gradients here and here and you send them to the machine where they're actually needed. This is a weird pen. You send them to that machine. That machine will aggregate all the gradients of all the machines. It will aggregate them and then locally it can compute using these optimizer parameters and so on. It can do all kinds of optimization locally because it has gathered gradients from all the other data. What you end up with, for example, GPU 2 here, for these two layers it has effectively broadcast the layers such that much much more data than it just had itself could run through the layers. It has aggregated gradients from all of that data and now it can use all of these gradients together to make a good update using the optimizer parameters. To make a good update to these model parameters and then in the next iteration it can go ahead and broadcast the model parameters. The new model parameters again. 
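Putting the whole walkthrough together, here is a toy, single-process sketch of the ZeRO idea: each worker permanently stores only the layer it owns (and would likewise hold the gradients and optimizer state for just that layer), the owner's weights are "broadcast" to the others right before they are needed, every worker pushes its own batch through the full model, and the gradients are summed back onto the owner, which alone applies the update. Real DeepSpeed does all of this with collective communication over the network and frees the broadcast copies as soon as possible; here they are kept until after backward only to keep the illustration short, and all names and sizes below are mine, not the library's API.

```python
import copy
import torch
import torch.nn as nn

layers = [nn.Linear(32, 32), nn.Linear(32, 32), nn.Linear(32, 10)]  # layer i is owned by worker i
workers = range(3)
batches = [torch.randn(4, 32) for _ in workers]   # data parallelism: one batch per worker

# "Broadcast": non-owners get a temporary copy of every layer they do not own.
copies = {w: [layers[i] if w == i else copy.deepcopy(layers[i]) for i in range(3)]
          for w in workers}

losses = {}
for w in workers:                                  # every worker computes at the same time
    h = batches[w]
    for layer in copies[w]:
        h = torch.relu(layer(h))
    losses[w] = h.pow(2).mean()
    losses[w].backward()                           # local gradients land on the local copies

# Backward sync: each layer's gradients are reduced (summed) onto the worker that owns it.
with torch.no_grad():
    for i, owned in enumerate(layers):
        others = [copies[w][i].parameters() for w in workers if w != i]
        for p_owner, *p_copies in zip(owned.parameters(), *others):
            p_owner.grad += sum(p.grad for p in p_copies)
        for p in owned.parameters():               # only the owner updates "its" layer,
            p -= 0.01 * p.grad                     # but with gradients from all the data
            p.grad = None
copies = None                                      # the broadcast copies are dropped again
```

The point of the exercise is the memory accounting: outside the brief window in which a layer is actually being used, each worker only ever holds its own share of the parameters, gradients and optimizer state, which is what lets this combined data- and model-parallel training scale to models that do not fit on any single machine.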
It is able to compute with much more data than it can just fit by itself. It is just doing its part. So Zero and DeepSpeed, Zero is the protocol and DeepSpeed is the actual library. They will do all of this communication and splitting and so on for you over the network in a way that is efficient, in a way that everything runs at the same time and the communication overhead is minimal. You can actually choose which stage you want, so what your trade-off of communication and memory saving will be. This is extremely cool. They say this goes up to whatever 100 billion parameter models if you use... This isn't something for your average Colab user. This is really something for big players. But that being said, I don't think language is solved by simply throwing more parameters at it. I think there's still a bit of a breakthrough ahead yet to come in language understanding with newer model architectures. Alright, that was it for me. Thanks. | [
{
"start": 0,
"end": 6.34,
"text": " Hi everyone, today we're going to look at Turing NLGA 17 billion parameter"
},
{
"start": 6.34,
"end": 11.78,
"text": " language model by Microsoft. The latest and greatest of language modeling by"
},
{
"start": 11.78,
"end": 18.5,
"text": " Microsoft. What is this? It is a language model. A language model is basically a"
},
{
"start": 18.5,
"end": 25.580000000000002,
"text": " model that learns to produce language, given language. So if you start a"
},
{
"start": 25.580000000000002,
"end": 28.66,
"text": " sentence it's supposed to finish a sentence. If you start a paragraph it's"
},
{
"start": 28.66,
"end": 33.6,
"text": " supposed to finish the paragraph. That's a language model. Ultimately you can make"
},
{
"start": 33.6,
"end": 37.44,
"text": " it do different things like answer questions, have a conversation with you,"
},
{
"start": 37.44,
"end": 41.08,
"text": " anything to do with understanding language. The special thing about this"
},
{
"start": 41.08,
"end": 47.760000000000005,
"text": " one is that it's ginormous. So if you look at the scale of language"
},
{
"start": 47.760000000000005,
"end": 55.68,
"text": " models, so BERT was quite a large thing back in its day. Ye Olde BERT, you"
},
{
"start": 55.68,
"end": 62.2,
"text": " can see here it has about 340 million parameters. Now I have to say all of"
},
{
"start": 62.2,
"end": 66.08,
"text": " these language models are transformers. This is kind of the state of the art"
},
{
"start": 66.08,
"end": 74.32,
"text": " today. So all of these are kind of our transformer based models. Then GPT-2"
},
{
"start": 74.32,
"end": 79.92,
"text": " here, you can see that was the model that was so large it was too dangerous to be"
},
{
"start": 79.92,
"end": 87.12,
"text": " released into the world. That stands at 1.5 billion parameters. Megatron LM by"
},
{
"start": 87.12,
"end": 94.64,
"text": " Nvidia 8.3 billion and now we are at 17 billion parameters for this language"
},
{
"start": 94.64,
"end": 103.76,
"text": " model. And it is a bit better. People just throw more and more and more"
},
{
"start": 103.76,
"end": 111.96000000000001,
"text": " resources at this language problem. So what you can do with it, you can"
},
{
"start": 111.96000000000001,
"end": 116.4,
"text": " of course do language modeling. So what happens is you take a bunch of text like"
},
{
"start": 116.4,
"end": 122.36000000000001,
"text": " all of Wikipedia and all of the internet and all of Reddit and so on and you let"
},
{
"start": 122.36000000000001,
"end": 128.76,
"text": " the model train on it to understand, to basically produce that sort of language."
},
{
"start": 128.76,
"end": 134.07999999999998,
"text": " And then you can measure it, for example it's a perplexity on a validation set."
},
{
"start": 134.07999999999998,
"end": 142.92,
"text": " And the Turing NLG is currently state-of-the-art on that. It can also do"
},
{
"start": 142.92,
"end": 146.56,
"text": " for example question answering. So you can ask the question and give it a"
},
{
"start": 146.56,
"end": 152.76,
"text": " passage about that question and it will then tell you the answer that it deduced"
},
{
"start": 152.76,
"end": 158.6,
"text": " from that passage given the question as you can see here. What is more interesting"
},
{
"start": 158.6,
"end": 165.64,
"text": " is that a usual QA system will point to the passage. So it will point to the"
},
{
"start": 165.64,
"end": 174.68,
"text": " words Tristan Prediman. Whereas with a generative model like this one what you"
},
{
"start": 174.68,
"end": 180.51999999999998,
"text": " can do is you can make it actually output an answer as a sentence. So it"
},
{
"start": 180.51999999999998,
"end": 186.95999999999998,
"text": " will generate the text Jason Bras was engaged to Tristan Prediman."
},
{
"start": 186.96,
"end": 197.92000000000002,
"text": " If you ask a question without giving it a context and just ask it to generate an"
},
{
"start": 197.92000000000002,
"end": 202.52,
"text": " answer it will do so as well. I don't know if these answers are cherry-picked"
},
{
"start": 202.52,
"end": 206.76000000000002,
"text": " but they call this zero-shot question answering. So if you ask when did World"
},
{
"start": 206.76000000000002,
"end": 214.84,
"text": " War II end and it can output World War II ended in 1945. Simply out of regularities"
},
{
"start": 214.84,
"end": 220.72,
"text": " it detected in the training data. So I mean that's what I'm kind of wondering."
},
{
"start": 220.72,
"end": 227,
"text": " At what point are these models, do they have so many parameters that they"
},
{
"start": 227,
"end": 234.24,
"text": " simply reproduce the training data? I mean this clearly some article"
},
{
"start": 234.24,
"end": 240.32,
"text": " from the training data is about World War II or many are and it simply learned"
},
{
"start": 240.32,
"end": 247.79999999999998,
"text": " that following a question when did World War II end it needs to answer with the"
},
{
"start": 247.79999999999998,
"end": 254.68,
"text": " appropriate passage. I'm not sure that is a proper measure of language"
},
{
"start": 254.68,
"end": 260.44,
"text": " understanding if you simply can bake more and more of the training data into"
},
{
"start": 260.44,
"end": 269.44,
"text": " these many many parameters but I'm not the judge of that here. It can do it very"
},
{
"start": 269.44,
"end": 276.28,
"text": " well. So yeah what I'm actually more interested in is this thing is called the"
},
{
"start": 276.28,
"end": 281.76,
"text": " zero optimizer that they use to train the model. So the model is just a"
},
{
"start": 281.76,
"end": 285.8,
"text": " transformer, it's just a big big transformer model. There is nothing really"
},
{
"start": 285.8,
"end": 291.52,
"text": " special about the model except that it is larger than the last model and"
},
{
"start": 291.52,
"end": 296.88,
"text": " therefore a bit better. What is interesting is that this would have been"
},
{
"start": 296.88,
"end": 303.12,
"text": " pretty impossible to train if it weren't for this zero optimizer of this deep"
},
{
"start": 303.12,
"end": 307.88,
"text": " speed library and Microsoft has released this deep speed library. It's compatible"
},
{
"start": 307.88,
"end": 313.32,
"text": " for now with PyTorch. You can check this out. I'll put a link into the description"
},
{
"start": 313.32,
"end": 320.4,
"text": " and I want to dive into this a bit. There's a paper, it's by Samyam Raj"
},
{
"start": 320.4,
"end": 331.84,
"text": " Bandari and all by Microsoft. The paper describes in detail the optimizer"
},
{
"start": 331.84,
"end": 338.91999999999996,
"text": " but it's not very visual. That's why we're going to the blog post. You can see"
},
{
"start": 338.91999999999996,
"end": 348.28,
"text": " it gives many speed ups over the previous Megatron LM model that"
},
{
"start": 348.28,
"end": 355.84,
"text": " Nvidia just trained using what Nvidia has. Nvidia has machines that"
},
{
"start": 355.84,
"end": 361.91999999999996,
"text": " are interconnected within the machine with very fast buses"
},
{
"start": 361.91999999999996,
"end": 371.67999999999995,
"text": " between GPUs. But this zero optimizer can now also go over the network and make it"
},
{
"start": 371.68,
"end": 378.88,
"text": " pretty fast. Let's explore that a bit. I have the copy this here. We'll"
},
{
"start": 378.88,
"end": 383.52,
"text": " look how the zero optimizer works. Usually what you do is if you have"
},
{
"start": 383.52,
"end": 391.52,
"text": " multiple GPUs you can do something like this. This is called data parallelism."
},
{
"start": 391.52,
"end": 398.6,
"text": " What you have is a model and the model in this case fits on your GPU."
},
{
"start": 398.6,
"end": 403.76000000000005,
"text": " It fits on a single GPU. The blue thing here is the model. I'll actually"
},
{
"start": 403.76000000000005,
"end": 410.64000000000004,
"text": " draw this. The model is a neural network so it has a bunch of"
},
{
"start": 410.64000000000004,
"end": 415.76000000000005,
"text": " layers. Layer, layer, layer, layer. What you want to do is you pass data"
},
{
"start": 415.76000000000005,
"end": 423.72,
"text": " forward. Here is some loss and then right into the loss function and then backward"
},
{
"start": 423.72,
"end": 428.28000000000003,
"text": " again. That's basically what you need to do. You need to pass it forward and"
},
{
"start": 428.28,
"end": 433.47999999999996,
"text": " backward in order to do back propagation training. If this all fits"
},
{
"start": 433.47999999999996,
"end": 440.44,
"text": " into one box that's completely fine. If this fits into one machine, cool."
},
{
"start": 440.44,
"end": 445.21999999999997,
"text": " We can just put many batches of data through batch one, batch two, batch three"
},
{
"start": 445.21999999999997,
"end": 451.15999999999997,
"text": " and so on. Train the model. If you want to do a speed up using this you can do so."
},
{
"start": 451.15999999999997,
"end": 456.4,
"text": " If you have lots of data you can do what's called, and I'm always confused, I"
},
{
"start": 456.4,
"end": 462,
"text": " think this is called data parallelism or is it called model parallelism."
},
{
"start": 462,
"end": 466.91999999999996,
"text": " In any case what you can do is you can take a second machine or many of those,"
},
{
"start": 466.91999999999996,
"end": 475.2,
"text": " replicate the model. These two models here are exactly the same."
},
{
"start": 475.2,
"end": 480.88,
"text": " What you do is you take your data and you split it up. You take double"
},
{
"start": 480.88,
"end": 486.32,
"text": " the amount of data and you put one batch of data through the top part and you"
},
{
"start": 486.32,
"end": 490.59999999999997,
"text": " put the other through the bottom part. You do your forward passes on the"
},
{
"start": 490.59999999999997,
"end": 496.24,
"text": " machines and you do your backward passes. Then what you want to do is you want"
},
{
"start": 496.24,
"end": 500.88,
"text": " to sync between the machines what they learned from the data. Each machine"
},
{
"start": 500.88,
"end": 506.92,
"text": " has a different set of data points. Each machine calculates its own parameter"
},
{
"start": 506.92,
"end": 513.08,
"text": " updates. It learns from the data it has and then they communicate to keep"
},
{
"start": 513.08,
"end": 518.6,
"text": " because this here and this here should be the same. It's the same model."
},
{
"start": 518.6,
"end": 524.6,
"text": " They have to keep in sync. This can be usually can be done fairly efficiently"
},
{
"start": 524.6,
"end": 529.96,
"text": " especially if these aren't actually two machines but just two GPUs inside of one"
},
{
"start": 529.96,
"end": 536.8000000000001,
"text": " large machine. If this is a large machine this is GPU 0 and this is GPU 1."
},
{
"start": 536.8000000000001,
"end": 541.9200000000001,
"text": " This is pretty standard because especially on Nvidia machines they have"
},
{
"start": 541.92,
"end": 548.52,
"text": " these whatever I think they call them InfiniBand or so."
},
{
"start": 548.52,
"end": 554.16,
"text": " Nvidia has these connectors that connects the GPUs together really fast."
},
{
"start": 554.16,
"end": 561.4399999999999,
"text": " You can keep these in sync but now the problem becomes what if you want to"
},
{
"start": 561.4399999999999,
"end": 567.24,
"text": " train a model that is larger than this. Let's forget about the data parallelism"
},
{
"start": 567.24,
"end": 572.36,
"text": " for now if that is what it's called and just consider a model that is too large."
},
{
"start": 572.36,
"end": 582,
"text": " A model that is too large will not fit into a machine. This is a model as a"
},
{
"start": 582,
"end": 589.48,
"text": " large model. What you want to do is you want to pack some of the model onto"
},
{
"start": 589.48,
"end": 597.36,
"text": " your first machine and then take the other part of the model and pack"
},
{
"start": 597.36,
"end": 602.44,
"text": " it onto another machine. You separate the model and put it on different"
},
{
"start": 602.44,
"end": 606.8000000000001,
"text": " machines. If you have a batch of data what you have to do is you pass it"
},
{
"start": 606.8000000000001,
"end": 611.08,
"text": " pass it pass it forward propagate as you regularly would but then you have an"
},
{
"start": 611.08,
"end": 615.9200000000001,
"text": " intermediate result. You send that to the next machine and you forward"
},
{
"start": 615.92,
"end": 622.0799999999999,
"text": " propagate that. At the end here you have a loss. You want to back propagate"
},
{
"start": 622.0799999999999,
"end": 625.68,
"text": " regularly through this machine. You have an intermediate result of back"
},
{
"start": 625.68,
"end": 631.9399999999999,
"text": " propagation. Send it over the network and back prop all the way through the model."
},
{
"start": 631.9399999999999,
"end": 637.88,
"text": " That's how you can train a model that is too large for one machine if you"
},
{
"start": 637.88,
"end": 645.1999999999999,
"text": " have multiple machines. The problem here of course is this part. Just as you had"
},
{
"start": 645.2,
"end": 650.0400000000001,
"text": " to keep in sync the model before, now your communication problem"
},
{
"start": 650.0400000000001,
"end": 660.24,
"text": " becomes one of... You have to send the intermediate stages to that model and"
},
{
"start": 660.24,
"end": 664.76,
"text": " you have to send the intermediate stage of the back propagation back to that"
},
{
"start": 664.76,
"end": 672.84,
"text": " part of the model. While this part is working this part is idling away."
},
{
"start": 672.84,
"end": 681.8000000000001,
"text": " The network overhead is just very costly. Especially if your model is so"
},
{
"start": 681.8000000000001,
"end": 690.12,
"text": " large it can't even fit into one of these single boxes. This is very"
},
{
"start": 690.12,
"end": 701.0400000000001,
"text": " problematic here. It's still doable. But what the zero optimizer does is it does"
},
{
"start": 701.04,
"end": 707.52,
"text": " both data and model parallelism. It can train models that are too large"
},
{
"start": 707.52,
"end": 718,
"text": " for a single machine. It can do data parallelism at the same time."
},
{
"start": 718,
"end": 724.8,
"text": " Basically everything is working all the time. There is not much wasted"
},
{
"start": 724.8,
"end": 728.8,
"text": " computation. The communication is efficient and so on. It's really a"
},
{
"start": 728.8,
"end": 733.4,
"text": " technical achievement. It's not so much a scientific advance. It's really a"
},
{
"start": 733.4,
"end": 739.28,
"text": " technical achievement this optimizer. We'll shortly go through. There is a"
},
{
"start": 739.28,
"end": 744.0799999999999,
"text": " kind of an animation on the website but it's super slow. I think"
},
{
"start": 744.0799999999999,
"end": 748.7199999999999,
"text": " this might be the first time that I will be faster at explaining something than a"
},
{
"start": 748.7199999999999,
"end": 755.4799999999999,
"text": " video. Let's see here. What you do is... Let's just consider these"
},
{
"start": 755.48,
"end": 759.28,
"text": " three GPUs. Before that it would all fit on one machine. But now let's say you"
},
{
"start": 759.28,
"end": 764.72,
"text": " don't actually have that much memory. You don't have these giant"
},
{
"start": 764.72,
"end": 769.84,
"text": " empty blocks here. You just have a bit of that. So you have to split your model."
},
{
"start": 769.84,
"end": 776.36,
"text": " The blue parts here are your model. These are model parameters."
},
{
"start": 776.36,
"end": 784.08,
"text": " The orange part here is memory you need to store gradients. You need as"
},
{
"start": 784.08,
"end": 789.6800000000001,
"text": " many gradients as you have model parameters. Because you do gradient"
},
{
"start": 789.6800000000001,
"end": 795.6800000000001,
"text": " descent. The green stuff here are what's called optimizer parameters. Now if you"
},
{
"start": 795.6800000000001,
"end": 801.96,
"text": " just have SGD these would be non-existent. But if you have something"
},
{
"start": 801.96,
"end": 806,
"text": " like AdaGrad or Atom they have additional parameters for each model"
},
{
"start": 806,
"end": 811.8000000000001,
"text": " parameter that they need to keep track of. So these are stored here. There"
},
{
"start": 811.8,
"end": 818.28,
"text": " can be significant overhead. There's also like a floating point 3216"
},
{
"start": 818.28,
"end": 822.3199999999999,
"text": " conversion going on here. Don't want to go into that. So you split your"
},
{
"start": 822.3199999999999,
"end": 825.9599999999999,
"text": " model onto these three machines. Let's say that's your entire model. Your model"
},
{
"start": 825.9599999999999,
"end": 832.76,
"text": " is six blocks wide. You need to forward propagate now through everything."
},
{
"start": 832.76,
"end": 838.68,
"text": " So here is what Xero does. I think it's pretty cool. What we need to do"
},
{
"start": 838.68,
"end": 843.68,
"text": " is we have these three different batches of data and we want to forward"
},
{
"start": 843.68,
"end": 850.0799999999999,
"text": " propagate them all through the model. Through the same model at the same time."
},
{
"start": 850.0799999999999,
"end": 856,
"text": " As if the model were actually stored on all these machines. Like if all of these"
},
{
"start": 856,
"end": 862.9599999999999,
"text": " machines had the entire model. And we can do a bit of communication. So what"
},
{
"start": 862.96,
"end": 870.2,
"text": " we do first is... This one's easy. Data zero through the first two layers"
},
{
"start": 870.2,
"end": 875.48,
"text": " here is easy. Because we have them. So bang you go through the first"
},
{
"start": 875.48,
"end": 886.24,
"text": " you get an intermediate result here and here. How do we propagate data one"
},
{
"start": 886.24,
"end": 892.1600000000001,
"text": " through the first layer? We can't send data one here. That would be"
},
{
"start": 892.16,
"end": 897.16,
"text": " too expensive. And that's the whole point would be lost. We want to"
},
{
"start": 897.16,
"end": 903.68,
"text": " actually compute data one on this GPU at the same time. What we do is before we"
},
{
"start": 903.68,
"end": 911.4399999999999,
"text": " start we actually communicate these two blocks here to GPU one. We send"
},
{
"start": 911.4399999999999,
"end": 919.4,
"text": " these parameters around and fill them in here. We send them here and we"
},
{
"start": 919.4,
"end": 925.12,
"text": " also send them here. We send the parameters to all the machines."
},
{
"start": 925.12,
"end": 931.48,
"text": " Then we can actually forward prop data one through this and data three through"
},
{
"start": 931.48,
"end": 937.84,
"text": " this. So we can do forward prop. After we've communicated all the GPUs can be"
},
{
"start": 937.84,
"end": 946.84,
"text": " working. Same with layer two. Layer two simply can send these"
},
{
"start": 946.84,
"end": 954.32,
"text": " two here. You can see that these two here to the other machines. Now while"
},
{
"start": 954.32,
"end": 958.48,
"text": " it's doing that we've already propagated through the first layer."
},
{
"start": 958.48,
"end": 964.64,
"text": " We've already propagated here and here through the first layer. So we can"
},
{
"start": 964.64,
"end": 970.8000000000001,
"text": " actually delete these again. We can delete these first layer"
},
{
"start": 970.8000000000001,
"end": 976.64,
"text": " parameters that we sent around again. So here you see how we can save memory."
},
{
"start": 976.64,
"end": 982.52,
"text": " We don't keep all the model in sync and all the machines. We send whatever we"
},
{
"start": 982.52,
"end": 989,
"text": " need on the other machines and then once the computation is done they can delete"
},
{
"start": 989,
"end": 993.84,
"text": " it again. Because there's always one machine, this one here for the"
},
{
"start": 993.84,
"end": 998.08,
"text": " middle parameters, that keeps track of the parameters and that can at any point"
},
{
"start": 998.08,
"end": 1003.6,
"text": " if they're needed send them again. So that's the big kind of catch. You can"
},
{
"start": 1003.6,
"end": 1008.08,
"text": " forward prop now through these two. They're already present."
},
{
"start": 1008.08,
"end": 1012.96,
"text": " Then you can delete those again on the machines where they're not natively"
},
{
"start": 1012.96,
"end": 1021.24,
"text": " stored. From here you can send those two. Also up here you can send"
},
{
"start": 1021.24,
"end": 1030.64,
"text": " those two and forward prop your model through to the end."
},
{
"start": 1030.64,
"end": 1039.3200000000002,
"text": " That was a mistake. Then each machine calculates its own loss."
},
{
"start": 1039.3200000000002,
"end": 1045.8000000000002,
"text": " The backward propagation happens in much the same way."
},
{
"start": 1045.8000000000002,
"end": 1053.0800000000002,
"text": " If you follow so far you can already imagine."
},
{
"start": 1053.0800000000002,
"end": 1057.8400000000001,
"text": " Now the loss is different because there's a different batch of data"
},
{
"start": 1057.84,
"end": 1061.76,
"text": " going through each machine. There's a different batch of data going"
},
{
"start": 1061.76,
"end": 1067.28,
"text": " through each machine but each machine has computed with the same model due to"
},
{
"start": 1067.28,
"end": 1074.1599999999999,
"text": " the communication of the zero optimizer. That's pretty cool. You get the"
},
{
"start": 1074.1599999999999,
"end": 1079.74,
"text": " benefits of data parallelism, lots of data on the different machines and you"
},
{
"start": 1079.74,
"end": 1086.84,
"text": " also split up the model across the machines. You don't actually store"
},
{
"start": 1086.84,
"end": 1092.24,
"text": " the model on any of these machines. You only send."
},
{
"start": 1092.24,
"end": 1100.12,
"text": " From here you send as you need and then you delete again. For the backward"
},
{
"start": 1100.12,
"end": 1106.52,
"text": " propagation, same thing. You calculate gradients."
},
{
"start": 1106.52,
"end": 1112.3999999999999,
"text": " You calculate gradients here and you send the gradients as needed to the"
},
{
"start": 1112.4,
"end": 1120,
"text": " other machines. You calculate gradients here and here and you send them to the"
},
{
"start": 1120,
"end": 1124.64,
"text": " machine where they're actually needed. This is a weird pen. You send them to"
},
{
"start": 1124.64,
"end": 1129.44,
"text": " that machine. That machine will aggregate all the gradients of all the machines."
},
{
"start": 1129.44,
"end": 1138.3200000000002,
"text": " It will aggregate them and then locally it can compute using"
},
{
"start": 1138.3200000000002,
"end": 1142.24,
"text": " these optimizer parameters and so on. It can do all kinds of optimization"
},
{
"start": 1142.24,
"end": 1148.48,
"text": " locally because it has gathered gradients from all the other data."
},
{
"start": 1148.48,
"end": 1157.44,
"text": " What you end up with, for example, GPU 2 here, for these two layers it has"
},
{
"start": 1157.44,
"end": 1164.72,
"text": " effectively broadcast the layers such that much much more data than it just"
},
{
"start": 1164.72,
"end": 1172.72,
"text": " had itself could run through the layers. It has aggregated gradients from all of"
},
{
"start": 1172.72,
"end": 1178.08,
"text": " that data and now it can use all of these gradients together to make a good"
},
{
"start": 1178.08,
"end": 1184.68,
"text": " update using the optimizer parameters. To make a good update to these model"
},
{
"start": 1184.68,
"end": 1189.08,
"text": " parameters and then in the next iteration it can go ahead and broadcast"
},
{
"start": 1189.08,
"end": 1193.3600000000001,
"text": " the model parameters. The new model parameters again. It is able to"
},
{
"start": 1193.36,
"end": 1200,
"text": " compute with much more data than it can just fit by itself. It is just doing"
},
{
"start": 1200,
"end": 1207.36,
"text": " its part. So Zero and DeepSpeed, Zero is the protocol and DeepSpeed is the"
},
{
"start": 1207.36,
"end": 1213.04,
"text": " actual library. They will do all of this communication and splitting and so on"
},
{
"start": 1213.04,
"end": 1218.8799999999999,
"text": " for you over the network in a way that is efficient, in a way that everything"
},
{
"start": 1218.88,
"end": 1225.96,
"text": " runs at the same time and the communication overhead is minimal. You"
},
{
"start": 1225.96,
"end": 1232.2800000000002,
"text": " can actually choose which stage you want, so what your trade-off of communication"
},
{
"start": 1232.2800000000002,
"end": 1238.96,
"text": " and memory saving will be. This is extremely cool. They say this goes up to"
},
{
"start": 1238.96,
"end": 1248.72,
"text": " whatever 100 billion parameter models if you use... This isn't something for"
},
{
"start": 1248.72,
"end": 1254.48,
"text": " your average Colab user. This is really something for big players."
},
{
"start": 1254.48,
"end": 1261.64,
"text": " But that being said, I don't think language is solved by simply throwing"
},
{
"start": 1261.64,
"end": 1265.28,
"text": " more parameters at it. I think there's still a bit of a breakthrough"
},
{
"start": 1265.28,
"end": 1274.2,
"text": " ahead yet to come in language understanding with newer model"
},
{
"start": 1274.2,
"end": 1278.8400000000001,
"text": " architectures. Alright, that was it for me. Thanks."
}
] |
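The segments above describe the ZeRO idea only in words: each worker owns a slice of the layers (parameters, gradients and optimizer state), parameters are broadcast to the other workers just before the layer that needs them and dropped right afterwards, and gradients are reduced back to the owning worker, which then applies the update. The following is a minimal, single-process Python sketch of that communication pattern, not the DeepSpeed implementation; the worker count, layer sizes, MSE loss and plain SGD update are all assumptions made for illustration, and the real library does this across GPUs with collective operations.

import numpy as np

NUM_WORKERS = 3        # three "GPUs"
NUM_LAYERS = 6         # six layers, two owned by each worker
DIM = 4
LR = 1e-2

rng = np.random.default_rng(0)

# Each layer is owned by exactly one worker; only the owner keeps the master
# copy of the weights (and, in real ZeRO, the optimizer state for them).
owner = {layer: layer * NUM_WORKERS // NUM_LAYERS for layer in range(NUM_LAYERS)}
weights = {layer: rng.normal(scale=0.5, size=(DIM, DIM)) for layer in range(NUM_LAYERS)}

# One batch of data per worker (the data-parallel part).
xs = [rng.normal(size=(8, DIM)) for _ in range(NUM_WORKERS)]
ys = [rng.normal(size=(8, DIM)) for _ in range(NUM_WORKERS)]

# Gradients are accumulated at the owning worker only.
grad_at_owner = {layer: np.zeros((DIM, DIM)) for layer in range(NUM_LAYERS)}

for w in range(NUM_WORKERS):
    # Forward pass: the owner "broadcasts" each layer right before it is used;
    # a non-owning worker could drop its temporary copy immediately afterwards.
    acts = [xs[w]]
    for layer in range(NUM_LAYERS):
        W = weights[layer]
        acts.append(np.maximum(acts[-1] @ W, 0.0))    # ReLU layer

    # Backward pass for a mean-squared-error loss: each per-layer gradient is
    # "reduced" (summed) into the accumulator of the worker that owns the layer.
    delta = (acts[-1] - ys[w]) / len(xs[w])
    for layer in reversed(range(NUM_LAYERS)):
        pre_act = acts[layer] @ weights[layer]
        delta = delta * (pre_act > 0)                  # back through the ReLU
        grad_at_owner[layer] += acts[layer].T @ delta  # goes to owner[layer]
        delta = delta @ weights[layer].T

# Each owner updates only the layers it holds, using gradients from all batches.
for layer in range(NUM_LAYERS):
    weights[layer] -= LR * grad_at_owner[layer]

print("owner of each layer:", owner)
print("layer 0 weight norm after update:", np.linalg.norm(weights[0]))

Running this prints the owner map and the updated norm of the first layer, which shows that every layer receives gradient contributions from all three data batches even though each batch is processed by only one worker.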
vB_hQ5NmtPs | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | [Interview] Mark Ledwich - Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization | [
"Science & Technology"
] | [
"machine learning",
"youtube",
"recommendation",
"algorithm",
"extremism",
"alt right",
"pipeline",
"pathway",
"mainstream",
"radicalization"
] | Interview with one of the authors of a widely reported study on YouTube's recommendation engine and where it leads its users.
https://arxiv.org/abs/1912.11211
https://www.recfluence.net/
https://github.com/markledwich2/Recfluence
https://www.patreon.com/ledwich
Abstract:
The role that YouTube and its behind-the-scenes recommendation algorithm plays in encouraging online radicalization has been suggested by both journalists and academics alike. This study directly quantifies these claims by examining the role that YouTube's algorithm plays in suggesting radicalized content. After categorizing nearly 800 political channels, we were able to differentiate between political schemas in order to analyze the algorithm traffic flows out and between each group. After conducting a detailed analysis of recommendations received by each channel type, we refute the popular radicalization claims. To the contrary, these data suggest that YouTube's recommendation algorithm actively discourages viewers from visiting radicalizing or extremist content. Instead, the algorithm is shown to favor mainstream media and cable news content over independent YouTube channels with slant towards left-leaning or politically neutral channels. Our study thus suggests that YouTube's recommendation algorithm fails to promote inflammatory or radicalized content, as previously claimed by several outlets.
Authors: Mark Ledwich, Anna Zaitsev | Alright, I'm very pleased to have Mark Ladoitch here today in In he's the he's one of the authors of this paper. That's called algorithmic extremism examining YouTube's rabbit hole of radicalization So I've done a video about a topic like this before actually several and this is basically one in a line of research that examines the recommendation algorithm of YouTube specifically but also kind of the general Social media platforms. So Mark, thanks for being here Could you maybe for people who do not know anything about this could you kind of Explain where your work fits into what's been done before or kind of also what comes out of the of the mainstream Media about this topic because there's been quite a bit of of talk Yeah, so I'm not a researcher by trade I'm a programmer and the reason why I got into this was because I could see clear bias in the way The YouTube recommendation system is being reported on and also in the research a There's some narratives. I think it might be because There's a lot of people worried about rhyming populism and this is a way to explain that They're looking for ways YouTube are radicalizing people and finding evidence for that or But that could be anecdotes or in some of the studies at sexual quantitative data But they're only looking to confirm it. So there's really obvious things. I think you covered in your video Some of them will just look for movement towards alright channels Through like centrist Or alt-light they call it instead of looking for both ways. Just really obvious things like that Calling it calling it an infection that cliche clearly shows that really looked at it Like a curious person would so I thought I could easily as a software engineer just collect all the data And without any complicated statistics just looking at the overall Flow of recommendations between the two the overall flow of recommendations between videos What their political influences? Yeah, this this was a thing that that bugged me of the paper that I made a video about is that they claim there's this radicalization pipeline, right and with pipeline everyone sort of understands a different thing But I think the general consensus is that the recommendation algorithm itself will steer you towards like more extreme content and in this case towards like the alt-right extremist content and the paper actually analyzed and said Okay, we found evidence that there is movement in this direction But they never shown that this is significantly more movement than like in in the other direction So in order to justify a pipeline one would need to show that the movement this way is about larger than this way in some notion and So I've I've found I've actually spoken to the author of that paper and he agrees with that but Obviously doesn't Doesn't have like energy to go into every you know go Refute everything that comes at them. They've also been a bunch of like they've also been exposed to a lot of Criticism, let's say as have you and I think even more when when your paper came out, I think The four days there was just a giant storm of people Attacking your paper basically Basically just just listing every single thing that's wrong with it and why this isn't valid and Things like this. So let's actually jump into what you did specifically so if I'm if I'm can summarize and you can then maybe correct so that we can Establish what happened you basically collected? 
recommendations, so you scrape these videos on YouTube and you collected these recommendations and We can we can see this so in your paper you then can make such diagrams Such as this one or these where in the middle the white bar is a Channel or a group that you're interested in and then to the left you can see where all the impressions of that Channel or group come from so what's where where basically the views come from? Through the recommendation system and on the right you can see of all the views the channel has retrieved Where do they go to so what what what is recommended next? Right, so it basically shows both directions for for every group and then you've also labeled these by multiple methods so that you can kind of establish these groups and What is pretty cool? We've built this website where you can analyze this and my computer is a bit overloaded at the moment But I promise it's really interactive. All right, so during the interview my computer crashed So I'm doing this in post-production Just to show you how this website operates So what you have here is an overview over all the channels of what rare recommendations were collected And they are grouped into groups for example here after partisan left Center left social justice partisan right and so on so you can see for each group or channel where recommendations come from and where they go to For example the large red one here. I happen to know that is Fox News You can see Fox News of the daily impression it received from itself 36 million impressions and it gives to itself 36 million these numbers have to agree by nature of how the data is collected of course But you can also see it gets 2.7 million impressions from CNN 2.6 million from the next news network and so on and it also gives quite a bit of recommendations to CNN and so on so You can go for example at some individual channel. Here's the daily wire the daily wire is Mainly run by Ben Shapiro So it's a bit more to the right of Fox News and a bit more on the direction of alternative media You can see the daily wire gets some most of its impression Count wise from itself from the daily wire, but it gives most of them to Fox News So actually you can see here that itself is a long way down Like in whatever sixth or seventh place So actually if you were to watch the daily wire the recommendation system would most likely steer you towards something like Fox News Whereas the the claim is that the YouTube algorithm would actually steer you towards more radical content Actually in in reality, it seems like it would steer towards more of these mainstream content So actually want to go to this tab you can see different groupings here and The radicalization pathways is the previous paper we have looked at So they have all these channels here of this radicalization pathway and you can see here the control group gives very very very few Impressions to the IDW The IDW gives much more impressions to the control group, right? Again, and the IDW gives very few impressions to the alt light compared to the amount of Impressions the alt light gives to the IDW and even to the control group And if you look at the alt right and we're going to zoom in on that here It's even more so the alt right of course receives most of its impressions from itself Which you could expect for any kind of group. 
This is your classic filter bubble situation But if we analyze the question of is there a pipeline you can see that Next most likely you are diverted to the IDW and to the control group much more Than you come from the IDW or the control group, right? Let's look at the the alt light so called this is kind of the so called gateway to the control group So called gateway to the alt right you can see here the alt light gives most of its impressions next to itself To the control group and the IDW so deradicalizing If you look at its way to the alt right, you'll see that it gets about four times as much impressions From the alt right as it gives to the alt right. So Basically, it's kind of taking the steam out of a quarter of all of these sessions and gives it To either the control group or the IDW or itself So this is exactly the opposite of what you would expect If you were to claim that there is a pipeline You would expect their most recommendations to come from more moderate content and go towards more extreme content But it's exactly the opposite and again, these are the exact channels that this original paper used Now what this paper find that the one that we're discussing if you go to media type here What you'll be able to see is the division into mainstream media youtube creator and so-called missing link media Which we'll leave out for a moment Let's focus on mainstream versus youtube creators. You can see the mainstream media gives most recommendations to itself While giving only very little recommendations to youtube creators and the missing link media While the youtube creators actually give almost half of their impressions. Look at that They they like give almost half of their impressions to the mainstream media Which means that there is a big big push by the algorithm to Towards these mainstream media away from youtube creators. So in general and I invite you to look at this website In general, you can pretty much see that the exact opposite of a radicalization pipeline is happening if you of course if you look at these recommendations and how they are distributed actually Most recommendation pathways are towards moderate centrist content and of course creating creating filter bubbles Which is a problem by itself, but is not a radicalization pipeline Lastly, I want to look at white identitarians because it's a one of the groups that people are referring to when they Claim that there are these radicalization pipelines. Look at that So of the white identitarian they get most of their impressions, of course from other white identitarian Videos which would be your filter bubble phenomenon But they give most and this is a group right the white identitarian channels give most of their Recommendations to the partisan right to the central and left mass mainstream media libertarians and and so on and uh Themselves are like really really really far down So to claim that there is this radicalization pipeline if you look at this data to me Seems not justified from this data and if I look at the other paper That really left out the important analysis Of the the backwards direction It seems that given everything it seems that the claim is not warranted All right back to the interview. Um Is that about like what you've done is that a good summary of of the data collection and analysis Um, there's a yeah, it's a good summary I can go into detail. 
Yeah, please Um, so youtube doesn't make it easy so I started this back in november in 2018 And I was using the youtube api And to get enough uh to get enough quota because they limit the amount of requests you can actually make to their api I created multiple keys, which is against their um policy Um, and they also asked you to delete all your data after 30 days That's also part of their policy. So um later about I think it was october 2019 they cut off my access because I was doing that So I had to move to just uh scraping websites and now My collection process actually just loads up the website and gets the recommendations from the actual page like a user would Um And that's difficult because they block access after a couple of hundred requests. They'll They'll stop you that machine from actually requesting from the website So I need to Use a proxy service that That's fairly expensive and what they do is they simulate or they have actual residential connections through your home connection like atnt and my requests get tunneled through that like a variety of locations in the states to get um A representative kind of sample Cool so so the data collection is Would you say that's that's the hardest part? I feel the labeling of channels is also not so easy But you've you've managed to kind of do that Half automated also half collecting things from kind of um sources that analyze these channels But at least for for most of the things that i've inspected I found the labeling to be pretty sane I think this is always something you can attack the the original paper was also attacked on how they label I find this to be kind of vicarish Mostly I think your labels are pretty good as well. The other papers labels are also mostly pretty okay Yeah, so let's let's go to it. Sorry Yeah, it's quite subjective I expected the labeling to be what I get my pushback on but it turns out it was um the anonymous collection So what you've actually found here what are what would you say are your your main results and I can maybe Show So you've analyzed a bit where do things come from where do things go to and I found this this part here to be One of the even though it's pretty simple One of the core things to say about this is that mostly what you found could be said is It's simply a recommendation algorithm working as a recommendation algorithm should which means it creates You know your typical filter bubbles if you if I watch one minute of this video All of a sudden my site is filled with makeup tutorials and things like this But also you found that there is quite some Over the top push towards what could be considered mainstream media and there is A bit of a draw away from the smaller YouTuber like channels is that is that something that like is that character? I don't know That's right. So it yeah, that's a good way to characterize it if that chart we're looking at now If it was a neutral algorithm The green bars would be the same as the gray ones. So you you receive the same amount of recommendations as you give That would be proportional to the views that you get the future organically The recommendations that you get from the green bars That you get the future organically. Um the recommendations you receive be equivalent to that but we find that it disproportionately recommends mainstream media channels That's not even though. 
So it's not like um, it doesn't look like it's consistently doing that So you can find exceptions to that rule is um, I I believe one of the main criticisms of your paper has been that you Only use data from 2019 onwards and I have actually looked at your website and your website a lot of times says that the data has been collected from way earlier than that um, so is it that you've almost only used 2019 data in your paper or what is in in the pipe the pipe is just from um november and december 2019 and the reason we did that um Is that we only had 400 channels before that And the collection process have changed over time So this is a clean set of data we could look at and I thought the most recent was the most relevant So what is it doing now? But um, i've provided i've got the same analysis over time So i've got a gift that I made that everyone can look at which goes through all the months i've been collecting Um, and you can see that chart for where it goes to and has gone through a bunch of changes so in about april 2019 That's when they really clamped down on conspiracies and other fringe channels Before that was it was much closer to neutral Okay, so but it never it never looked like a a rabbit hole it's never favoring Fringe channels. Yeah. I mean that that has been my experience also personally on youtube. I've I've joined youtube very early or i've i've watched youtube very early when Young earth creationism was still active and then these things were kind of completely discredited by simply having you having People exposed to other points of view and even I find this now Even though youtube makes it kind of easy to find let's say niche content It also exposes you to a bunch of of different views. Um, and and I've always found this to be very very optimistic in the sense of This is probably deradicalizing much more people than radicalizing But you've you've received like as I said a bunch of criticism in so if you could What was the The largest criticism irrespective of whether it was valid or not. What do you have you found was kind of what most people? were criticizing Most people criticizing that we were collecting anonymous recommendations. It wasn't the personalized ones Yeah, and it's actually like it is a valid limitation. We had it. There's a first limitation we talked about in this paper And It's still an open question How personalization would affect these aggregate results that we've got but I think it's reasonable To assume it will be quite similar once you average it out. So for any one person it might be different But you would expect personalization based on someone's history to even out because It's kind of the algorithms kind of like the average of all that when it's anonymous Yeah, I feel like you'll get that the the the notion of the the the notion that If because if you're not logged in the recommendation is like a person with only one video of history, right? So it's it's the same thing, but there's only one hit point of history instead of multiple I find Why should why should the behavior be qualitatively different if you have multiple points of history? 
like this is a strong claim that you have to you'd have to really show that there is a qualitative difference not just a more or less accuracy and I feel the people making this criticism are it's really on them to show that there is a substantial difference rather than saying that this is a giant limitation of the work Yeah, and it's also very hypocritical for a lot of the people saying it because some of them like Zion out who was mockingly saying that her article her original article in New York Times Used algo transparency, which is anonymous as well, but she doesn't she never looked into that I think a lot of this is completely motivated reasoning. They don't they don't care about the details I've I've seen this one this one twitter user She she comment she said something to the effect of if you've seen this article, please consult someone that works in this space like it's It's please don't don't read the article yourself. You must you must get your information through someone I've actually i've read the article I've I find it's pretty straightforward the limitations are clear But also the the results are pretty clear and it's it's actually mostly a boring article, right if if I'm sorry, like it's not a criticism. This is good. Like it's mostly you find that things work as expected There is a bit of a push towards mainstream which can be probably explaining that youtube wants to be Advertiser friendly right and these mainstream channels are already are Advertiser friendly so they probably get bumped a bit. Um, if what would you say is Maybe the most the most valid criticism that you've heard maybe not the biggest but the most Where do you where you say? Yeah, this is really This is really something that is you know I think um, I guess what's Um, there was criticism that i'm overclaiming not in the paper so much but in my tweets and medium I guess that's that's fair But I guess when I tweet and write in medium, those are what I believe in kind of a vasian way I'm not catching my claims that you would When you're writing a paper So I guess that's valid But I think a lot of people read into what I was saying More than what I was so when I say the algorithm Has a de-radicalizing influence. I'm just talking about the recommendations whereas a lot of people consider that to be Talking about all things considered so Even if it isn't doesn't have a bias towards a fringe maybe sociologically youtube Radicalizes people it could be the case. I don't know Um, but that's what i'm talking about. I'm talking about just the influence through recommendations And that's all we can hold google accountable for or at least it's what probably all could agree that google Should be held accountable for with its recommendation system Yeah, do you um, do you expect something to come or have you heard something to come out of youtube themselves? Like the the company any form of official statement to this? Nothing nothing at all. Um, the only I got a vague I got a vague a reporter was complaining that youtube sent them this So I think they've read it But I have no absolutely no contact with them Okay Cool, are you doing any anything in follow-up or do you have plans for more research? None of this i've just gone back to work i've applied a bunch for a bunch of independent grant money But i'm not optimistic. So if I don't get that i'll keep i'll keep it pattering along. I'll probably reduce the amount of recommendations because i'm spending like About 500 a month at the moment just keeping it running. 
So I gotta reduce my costs Yeah, and you do have a patreon for people to to chip into that, right? Yeah, so if you can link to that that'd be good. So if i'm getting something like Like 22 a month, so it doesn't really cover it Yeah all right, so Okay, this this has been very very pleasant. I think we've we've kind of looked at a lot of things is there anything you would like to amend To this that people should know about the research or about this this field No, I just have a um, I encourage you to have a play digging into data yourself. There's Um, if you're in this area the data is free to use the code's free to use Um, just consider this a contribution to knowledge Cool Well, thanks a lot mark. Um, I wish you a very pleasant evening for you, I guess and Cheers. Thanks Thanks for having me. Bye Bye | [
{
"start": 0,
"end": 2.94,
"text": " Alright, I'm very pleased to have Mark Ladoitch here today"
},
{
"start": 3.6,
"end": 4.72,
"text": " in"
},
{
"start": 4.72,
"end": 12.74,
"text": " In he's the he's one of the authors of this paper. That's called algorithmic extremism examining YouTube's rabbit hole of radicalization"
},
{
"start": 13.36,
"end": 21,
"text": " So I've done a video about a topic like this before actually several and this is basically one in a line of"
},
{
"start": 21.64,
"end": 27.66,
"text": " research that examines the recommendation algorithm of YouTube specifically but also kind of the"
},
{
"start": 28.2,
"end": 29.32,
"text": " general"
},
{
"start": 29.32,
"end": 32.84,
"text": " Social media platforms. So Mark, thanks for being here"
},
{
"start": 34.8,
"end": 40.08,
"text": " Could you maybe for people who do not know anything about this could you kind of"
},
{
"start": 40.64,
"end": 49.08,
"text": " Explain where your work fits into what's been done before or kind of also what comes out of the of the mainstream"
},
{
"start": 49.760000000000005,
"end": 54.04,
"text": " Media about this topic because there's been quite a bit of of talk"
},
{
"start": 54.04,
"end": 62.04,
"text": " Yeah, so I'm not a researcher by trade I'm a programmer and the reason why I got into this was because I"
},
{
"start": 62.56,
"end": 65.56,
"text": " could see clear bias in the way"
},
{
"start": 65.56,
"end": 69.68,
"text": " The YouTube recommendation system is being reported on and also in the research"
},
{
"start": 70.48,
"end": 71.96000000000001,
"text": " a"
},
{
"start": 71.96000000000001,
"end": 75.08,
"text": " There's some narratives. I think it might be because"
},
{
"start": 75.96000000000001,
"end": 80.68,
"text": " There's a lot of people worried about rhyming populism and this is a way to explain that"
},
{
"start": 80.68,
"end": 87,
"text": " They're looking for ways YouTube are radicalizing people and finding evidence for that or"
},
{
"start": 87.88000000000001,
"end": 91.08000000000001,
"text": " But that could be anecdotes or in some of the studies at sexual"
},
{
"start": 92.16000000000001,
"end": 93.76,
"text": " quantitative data"
},
{
"start": 93.76,
"end": 98.64000000000001,
"text": " But they're only looking to confirm it. So there's really obvious things. I think you covered in your video"
},
{
"start": 99.76,
"end": 104.36000000000001,
"text": " Some of them will just look for movement towards alright channels"
},
{
"start": 105.08000000000001,
"end": 106.88000000000001,
"text": " Through like centrist"
},
{
"start": 106.88,
"end": 112.72,
"text": " Or alt-light they call it instead of looking for both ways. Just really obvious things like that"
},
{
"start": 113.47999999999999,
"end": 118.6,
"text": " Calling it calling it an infection that cliche clearly shows that really looked at it"
},
{
"start": 118.6,
"end": 124.36,
"text": " Like a curious person would so I thought I could easily as a software engineer just collect all the data"
},
{
"start": 125.19999999999999,
"end": 130.48,
"text": " And without any complicated statistics just looking at the overall"
},
{
"start": 131.48,
"end": 133.76,
"text": " Flow of recommendations between the two"
},
{
"start": 133.76,
"end": 137.48,
"text": " the overall flow of recommendations between videos"
},
{
"start": 138.95999999999998,
"end": 140.95999999999998,
"text": " What their political influences?"
},
{
"start": 141.88,
"end": 149.44,
"text": " Yeah, this this was a thing that that bugged me of the paper that I made a video about is that they claim there's this"
},
{
"start": 150.07999999999998,
"end": 155.62,
"text": " radicalization pipeline, right and with pipeline everyone sort of understands a different thing"
},
{
"start": 155.62,
"end": 162.12,
"text": " But I think the general consensus is that the recommendation algorithm itself will steer you"
},
{
"start": 162.12,
"end": 168.48000000000002,
"text": " towards like more extreme content and in this case towards like the"
},
{
"start": 169,
"end": 172.32,
"text": " alt-right extremist content and the paper actually"
},
{
"start": 173.20000000000002,
"end": 174.88,
"text": " analyzed and said"
},
{
"start": 174.88,
"end": 178.52,
"text": " Okay, we found evidence that there is movement in this direction"
},
{
"start": 179.04000000000002,
"end": 186.32,
"text": " But they never shown that this is significantly more movement than like in in the other direction"
},
{
"start": 186.32,
"end": 193.84,
"text": " So in order to justify a pipeline one would need to show that the movement this way is about larger than"
},
{
"start": 194.35999999999999,
"end": 196.44,
"text": " this way in some notion and"
},
{
"start": 197.79999999999998,
"end": 205.68,
"text": " So I've I've found I've actually spoken to the author of that paper and he agrees with that but"
},
{
"start": 207.24,
"end": 209.24,
"text": " Obviously doesn't"
},
{
"start": 209.24,
"end": 213.01999999999998,
"text": " Doesn't have like energy to go into every you know go"
},
{
"start": 213.02,
"end": 220.22,
"text": " Refute everything that comes at them. They've also been a bunch of like they've also been exposed to a lot of"
},
{
"start": 221.34,
"end": 228.54000000000002,
"text": " Criticism, let's say as have you and I think even more when when your paper came out, I think"
},
{
"start": 229.34,
"end": 233.5,
"text": " The four days there was just a giant storm of people"
},
{
"start": 235.54000000000002,
"end": 237.54000000000002,
"text": " Attacking your paper basically"
},
{
"start": 237.54,
"end": 246.06,
"text": " Basically just just listing every single thing that's wrong with it and why this isn't valid and"
},
{
"start": 246.62,
"end": 251.7,
"text": " Things like this. So let's actually jump into what you did specifically"
},
{
"start": 252.5,
"end": 253.82,
"text": " so"
},
{
"start": 253.82,
"end": 259.9,
"text": " if I'm if I'm can summarize and you can then maybe correct so that we can"
},
{
"start": 260.46,
"end": 263.4,
"text": " Establish what happened you basically collected?"
},
{
"start": 263.4,
"end": 269.32,
"text": " recommendations, so you scrape these videos on YouTube and you collected these recommendations and"
},
{
"start": 270.84,
"end": 277.28,
"text": " We can we can see this so in your paper you then can make such diagrams"
},
{
"start": 278.08,
"end": 280.84,
"text": " Such as this one or these"
},
{
"start": 281.79999999999995,
"end": 285.67999999999995,
"text": " where in the middle the white bar is a"
},
{
"start": 286.44,
"end": 292.26,
"text": " Channel or a group that you're interested in and then to the left you can see where all the"
},
{
"start": 292.26,
"end": 294.26,
"text": " impressions of that"
},
{
"start": 294.86,
"end": 299.53999999999996,
"text": " Channel or group come from so what's where where basically the views come from?"
},
{
"start": 300.09999999999997,
"end": 305.62,
"text": " Through the recommendation system and on the right you can see of all the views the channel has retrieved"
},
{
"start": 305.62,
"end": 309.26,
"text": " Where do they go to so what what what is recommended next?"
},
{
"start": 309.26,
"end": 317.34,
"text": " Right, so it basically shows both directions for for every group and then you've also labeled these by multiple"
},
{
"start": 317.94,
"end": 321.38,
"text": " methods so that you can kind of establish these groups and"
},
{
"start": 321.38,
"end": 322.58,
"text": " What is pretty cool?"
},
{
"start": 322.58,
"end": 330.38,
"text": " We've built this website where you can analyze this and my computer is a bit overloaded at the moment"
},
{
"start": 330.38,
"end": 336.65999999999997,
"text": " But I promise it's really interactive. All right, so during the interview my computer crashed"
},
{
"start": 336.65999999999997,
"end": 338.98,
"text": " So I'm doing this in post-production"
},
{
"start": 339.7,
"end": 341.7,
"text": " Just to show you how this website operates"
},
{
"start": 342.02,
"end": 347.38,
"text": " So what you have here is an overview over all the channels of what rare recommendations were collected"
},
{
"start": 347.38,
"end": 351.98,
"text": " And they are grouped into groups for example here after partisan left"
},
{
"start": 352.3,
"end": 358.7,
"text": " Center left social justice partisan right and so on so you can see for each group or channel where"
},
{
"start": 359.02,
"end": 361.02,
"text": " recommendations come from and where they go to"
},
{
"start": 361.65999999999997,
"end": 366.42,
"text": " For example the large red one here. I happen to know that is Fox News"
},
{
"start": 367.74,
"end": 374.38,
"text": " You can see Fox News of the daily impression it received from itself"
},
{
"start": 374.38,
"end": 382.38,
"text": " 36 million impressions and it gives to itself 36 million these numbers have to agree by nature of how the data is collected"
},
{
"start": 382.38,
"end": 383.86,
"text": " of course"
},
{
"start": 383.86,
"end": 388.1,
"text": " But you can also see it gets 2.7 million impressions from CNN"
},
{
"start": 388.5,
"end": 395.76,
"text": " 2.6 million from the next news network and so on and it also gives quite a bit of recommendations to CNN and so on so"
},
{
"start": 396.54,
"end": 402.78,
"text": " You can go for example at some individual channel. Here's the daily wire the daily wire is"
},
{
"start": 402.78,
"end": 404.78,
"text": " Mainly run by Ben Shapiro"
},
{
"start": 404.78,
"end": 410.38,
"text": " So it's a bit more to the right of Fox News and a bit more on the direction of alternative media"
},
{
"start": 410.78,
"end": 415.82,
"text": " You can see the daily wire gets some most of its impression"
},
{
"start": 416.29999999999995,
"end": 422.38,
"text": " Count wise from itself from the daily wire, but it gives most of them to Fox News"
},
{
"start": 422.38,
"end": 429.02,
"text": " So actually you can see here that itself is a long way down"
},
{
"start": 429.02,
"end": 432.29999999999995,
"text": " Like in whatever sixth or seventh place"
},
{
"start": 432.85999999999996,
"end": 440.21999999999997,
"text": " So actually if you were to watch the daily wire the recommendation system would most likely steer you towards something like Fox News"
},
{
"start": 440.53999999999996,
"end": 448.46,
"text": " Whereas the the claim is that the YouTube algorithm would actually steer you towards more radical content"
},
{
"start": 448.85999999999996,
"end": 455.41999999999996,
"text": " Actually in in reality, it seems like it would steer towards more of these mainstream content"
},
{
"start": 455.42,
"end": 460.14000000000004,
"text": " So actually want to go to this tab you can see different groupings here and"
},
{
"start": 461.66,
"end": 465.66,
"text": " The radicalization pathways is the previous paper we have looked at"
},
{
"start": 465.66,
"end": 472.94,
"text": " So they have all these channels here of this radicalization pathway and you can see here the control group"
},
{
"start": 473.74,
"end": 476.62,
"text": " gives very very very few"
},
{
"start": 477.34000000000003,
"end": 479.34000000000003,
"text": " Impressions to the IDW"
},
{
"start": 479.34,
"end": 485.5,
"text": " The IDW gives much more impressions to the control group, right?"
},
{
"start": 486.29999999999995,
"end": 492.14,
"text": " Again, and the IDW gives very few impressions to the alt light compared to the amount of"
},
{
"start": 492.46,
"end": 497.26,
"text": " Impressions the alt light gives to the IDW and even to the control group"
},
{
"start": 497.26,
"end": 501.09999999999997,
"text": " And if you look at the alt right and we're going to zoom in on that here"
},
{
"start": 501.09999999999997,
"end": 506.29999999999995,
"text": " It's even more so the alt right of course receives most of its impressions from itself"
},
{
"start": 506.3,
"end": 510.7,
"text": " Which you could expect for any kind of group. This is your classic filter bubble situation"
},
{
"start": 511.26,
"end": 517.26,
"text": " But if we analyze the question of is there a pipeline you can see that"
},
{
"start": 518.46,
"end": 525.26,
"text": " Next most likely you are diverted to the IDW and to the control group much more"
},
{
"start": 525.26,
"end": 529.5,
"text": " Than you come from the IDW or the control group, right?"
},
{
"start": 529.5,
"end": 535.42,
"text": " Let's look at the the alt light so called this is kind of the so called gateway to the control group"
},
{
"start": 535.42,
"end": 542.2199999999999,
"text": " So called gateway to the alt right you can see here the alt light gives most of its impressions next to itself"
},
{
"start": 542.2199999999999,
"end": 546.38,
"text": " To the control group and the IDW so deradicalizing"
},
{
"start": 547.02,
"end": 553.66,
"text": " If you look at its way to the alt right, you'll see that it gets about four times as much impressions"
},
{
"start": 554.14,
"end": 557.9799999999999,
"text": " From the alt right as it gives to the alt right. So"
},
{
"start": 558.62,
"end": 563.98,
"text": " Basically, it's kind of taking the steam out of a quarter of all of these sessions and gives it"
},
{
"start": 563.98,
"end": 568.46,
"text": " To either the control group or the IDW or itself"
},
{
"start": 568.46,
"end": 573.1800000000001,
"text": " So this is exactly the opposite of what you would expect"
},
{
"start": 573.98,
"end": 576.46,
"text": " If you were to claim that there is a pipeline"
},
{
"start": 577.1800000000001,
"end": 584.14,
"text": " You would expect their most recommendations to come from more moderate content and go towards more extreme content"
},
{
"start": 584.38,
"end": 589.4200000000001,
"text": " But it's exactly the opposite and again, these are the exact channels that this original paper used"
},
{
"start": 589.42,
"end": 594.6999999999999,
"text": " Now what this paper find that the one that we're discussing if you go to media type here"
},
{
"start": 595.3399999999999,
"end": 601.9799999999999,
"text": " What you'll be able to see is the division into mainstream media youtube creator and so-called missing link media"
},
{
"start": 601.9799999999999,
"end": 603.9799999999999,
"text": " Which we'll leave out for a moment"
},
{
"start": 604.3,
"end": 611.5799999999999,
"text": " Let's focus on mainstream versus youtube creators. You can see the mainstream media gives most recommendations to itself"
},
{
"start": 611.58,
"end": 618.86,
"text": " While giving only very little recommendations to youtube creators and the missing link media"
},
{
"start": 618.86,
"end": 624.38,
"text": " While the youtube creators actually give almost half of their impressions. Look at that"
},
{
"start": 624.38,
"end": 629.5,
"text": " They they like give almost half of their impressions to the mainstream media"
},
{
"start": 631.0200000000001,
"end": 636.5400000000001,
"text": " Which means that there is a big big push by the algorithm to"
},
{
"start": 636.54,
"end": 644.14,
"text": " Towards these mainstream media away from youtube creators. So in general and I invite you to look at this website"
},
{
"start": 645.5,
"end": 650.9399999999999,
"text": " In general, you can pretty much see that the exact opposite of a"
},
{
"start": 651.5799999999999,
"end": 659.9,
"text": " radicalization pipeline is happening if you of course if you look at these recommendations and how they are distributed actually"
},
{
"start": 659.9,
"end": 669.66,
"text": " Most recommendation pathways are towards moderate centrist content and of course creating creating filter bubbles"
},
{
"start": 669.66,
"end": 673.42,
"text": " Which is a problem by itself, but is not a radicalization pipeline"
},
{
"start": 674.22,
"end": 681.98,
"text": " Lastly, I want to look at white identitarians because it's a one of the groups that people are referring to when they"
},
{
"start": 682.54,
"end": 686.14,
"text": " Claim that there are these radicalization pipelines. Look at that"
},
{
"start": 686.14,
"end": 693.42,
"text": " So of the white identitarian they get most of their impressions, of course from other white identitarian"
},
{
"start": 694.9399999999999,
"end": 697.98,
"text": " Videos which would be your filter bubble phenomenon"
},
{
"start": 699.1,
"end": 705.34,
"text": " But they give most and this is a group right the white identitarian channels give most of their"
},
{
"start": 705.98,
"end": 712.06,
"text": " Recommendations to the partisan right to the central and left mass mainstream media"
},
{
"start": 712.06,
"end": 715.66,
"text": " libertarians and and so on and uh"
},
{
"start": 716.1999999999999,
"end": 719.8199999999999,
"text": " Themselves are like really really really far down"
},
{
"start": 720.78,
"end": 727.02,
"text": " So to claim that there is this radicalization pipeline if you look at this data to me"
},
{
"start": 727.02,
"end": 732.06,
"text": " Seems not justified from this data and if I look at the other paper"
},
{
"start": 732.54,
"end": 735.42,
"text": " That really left out the important analysis"
},
{
"start": 736.14,
"end": 738.4599999999999,
"text": " Of the the backwards direction"
},
{
"start": 738.46,
"end": 743.1800000000001,
"text": " It seems that given everything it seems that the claim is not warranted"
},
{
"start": 743.9000000000001,
"end": 746.46,
"text": " All right back to the interview. Um"
},
{
"start": 748.5400000000001,
"end": 755.6600000000001,
"text": " Is that about like what you've done is that a good summary of of the data collection and analysis"
},
{
"start": 758.3000000000001,
"end": 762.94,
"text": " Um, there's a yeah, it's a good summary I can go into detail. Yeah, please"
},
{
"start": 762.94,
"end": 769.5200000000001,
"text": " Um, so youtube doesn't make it easy so I started this back in november in 2018"
},
{
"start": 770.22,
"end": 772.62,
"text": " And I was using the youtube api"
},
{
"start": 773.2600000000001,
"end": 778.7,
"text": " And to get enough uh to get enough quota because they limit the amount of requests you can actually make to their api"
},
{
"start": 779.4200000000001,
"end": 782.46,
"text": " I created multiple keys, which is against their um policy"
},
{
"start": 783.2600000000001,
"end": 787.4200000000001,
"text": " Um, and they also asked you to delete all your data after 30 days"
},
{
"start": 787.42,
"end": 793.18,
"text": " That's also part of their policy. So um later"
},
{
"start": 793.42,
"end": 795.42,
"text": " about I think it was october"
},
{
"start": 795.9799999999999,
"end": 799.02,
"text": " 2019 they cut off my access because I was doing that"
},
{
"start": 799.8199999999999,
"end": 803.42,
"text": " So I had to move to just uh scraping websites and now"
},
{
"start": 804.06,
"end": 809.42,
"text": " My collection process actually just loads up the website and gets the recommendations from the actual page like a user would"
},
{
"start": 811.0999999999999,
"end": 812.54,
"text": " Um"
},
{
"start": 812.54,
"end": 818.9399999999999,
"text": " And that's difficult because they block access after a couple of hundred requests. They'll"
},
{
"start": 819.5,
"end": 823.02,
"text": " They'll stop you that machine from actually requesting from the website"
},
{
"start": 823.74,
"end": 825.26,
"text": " So I need to"
},
{
"start": 825.26,
"end": 827.26,
"text": " Use a proxy service that"
},
{
"start": 828.14,
"end": 836.62,
"text": " That's fairly expensive and what they do is they simulate or they have actual residential connections through your home connection like atnt"
},
{
"start": 837.5,
"end": 838.62,
"text": " and"
},
{
"start": 838.62,
"end": 843.98,
"text": " my requests get tunneled through that like a variety of locations in the states to get um"
},
{
"start": 844.54,
"end": 846.54,
"text": " A representative kind of sample"
},
{
"start": 849.82,
"end": 852.86,
"text": " Cool so so the data collection is"
},
{
"start": 853.58,
"end": 855.82,
"text": " Would you say that's that's the hardest part?"
},
{
"start": 857.1,
"end": 860.38,
"text": " I feel the labeling of channels is also not so easy"
},
{
"start": 860.62,
"end": 863.82,
"text": " But you've you've managed to kind of do that"
},
{
"start": 863.82,
"end": 870.46,
"text": " Half automated also half collecting things from kind of um sources that analyze these channels"
},
{
"start": 871.0200000000001,
"end": 877.74,
"text": " But at least for for most of the things that i've inspected I found the labeling to be pretty sane"
},
{
"start": 878.22,
"end": 884.94,
"text": " I think this is always something you can attack the the original paper was also attacked on how they label"
},
{
"start": 884.94,
"end": 887.98,
"text": " I find this to be kind of vicarish"
},
{
"start": 887.98,
"end": 894.54,
"text": " Mostly I think your labels are pretty good as well. The other papers labels are also mostly pretty okay"
},
{
"start": 894.54,
"end": 897.5,
"text": " Yeah, so let's let's go to it. Sorry"
},
{
"start": 899.02,
"end": 907.02,
"text": " Yeah, it's quite subjective I expected the labeling to be what I get my pushback on but it turns out it was um"
},
{
"start": 907.82,
"end": 909.82,
"text": " the anonymous"
},
{
"start": 909.82,
"end": 911.82,
"text": " collection"
},
{
"start": 911.82,
"end": 919.74,
"text": " So what you've actually found here what are what would you say are your your main results and I can maybe"
},
{
"start": 921.1800000000001,
"end": 922.7,
"text": " Show"
},
{
"start": 922.7,
"end": 928.38,
"text": " So you've analyzed a bit where do things come from where do things go to and"
},
{
"start": 929.74,
"end": 933.2600000000001,
"text": " I found this this part here to be"
},
{
"start": 933.98,
"end": 936.3800000000001,
"text": " One of the even though it's pretty simple"
},
{
"start": 936.38,
"end": 943.66,
"text": " One of the core things to say about this is that mostly what you found"
},
{
"start": 944.86,
"end": 946.38,
"text": " could be"
},
{
"start": 946.38,
"end": 948.38,
"text": " said is"
},
{
"start": 948.54,
"end": 955.42,
"text": " It's simply a recommendation algorithm working as a recommendation algorithm should which means it creates"
},
{
"start": 956.06,
"end": 961.42,
"text": " You know your typical filter bubbles if you if I watch one minute of this video"
},
{
"start": 961.42,
"end": 965.8199999999999,
"text": " All of a sudden my site is filled with makeup tutorials and things like this"
},
{
"start": 965.9799999999999,
"end": 968.78,
"text": " But also you found that there is quite some"
},
{
"start": 969.74,
"end": 975.66,
"text": " Over the top push towards what could be considered mainstream media and there is"
},
{
"start": 976.3,
"end": 978.78,
"text": " A bit of a draw away from the smaller"
},
{
"start": 979.5,
"end": 986.2199999999999,
"text": " YouTuber like channels is that is that something that like is that character? I don't know"
},
{
"start": 986.22,
"end": 991.82,
"text": " That's right. So it yeah, that's a good way to characterize it if that chart we're looking at now"
},
{
"start": 992.62,
"end": 994.62,
"text": " If it was a neutral algorithm"
},
{
"start": 995.58,
"end": 1001.74,
"text": " The green bars would be the same as the gray ones. So you you receive the same amount of recommendations as you give"
},
{
"start": 1003.5,
"end": 1007.26,
"text": " That would be proportional to the views that you get the future organically"
},
{
"start": 1009.58,
"end": 1011.98,
"text": " The recommendations that you get from the green bars"
},
{
"start": 1011.98,
"end": 1019.9200000000001,
"text": " That you get the future organically. Um the recommendations you receive be equivalent to that but we find that it disproportionately"
},
{
"start": 1021,
"end": 1023,
"text": " recommends mainstream media channels"
},
{
"start": 1023.26,
"end": 1027.98,
"text": " That's not even though. So it's not like um, it doesn't look like it's consistently doing that"
},
{
"start": 1028.8600000000001,
"end": 1030.8600000000001,
"text": " So you can find exceptions to that"
},
{
"start": 1030.94,
"end": 1032.6200000000001,
"text": " rule"
},
{
"start": 1032.6200000000001,
"end": 1039.26,
"text": " is um, I I believe one of the main criticisms of your paper has been that you"
},
{
"start": 1039.26,
"end": 1042.7,
"text": " Only use data from 2019 onwards"
},
{
"start": 1043.42,
"end": 1044.3799999999999,
"text": " and"
},
{
"start": 1044.3799999999999,
"end": 1051.9,
"text": " I have actually looked at your website and your website a lot of times says that the data has been collected from way earlier than that"
},
{
"start": 1052.86,
"end": 1056.06,
"text": " um, so is it that you've almost only used"
},
{
"start": 1056.54,
"end": 1059.74,
"text": " 2019 data in your paper"
},
{
"start": 1060.3,
"end": 1064.3799999999999,
"text": " or what is in in the pipe the pipe is just from"
},
{
"start": 1065.34,
"end": 1067.58,
"text": " um november and december 2019"
},
{
"start": 1067.58,
"end": 1070.1399999999999,
"text": " and the reason we did that um"
},
{
"start": 1071.26,
"end": 1074.86,
"text": " Is that we only had 400 channels before that"
},
{
"start": 1075.8999999999999,
"end": 1078.86,
"text": " And the collection process have changed over time"
},
{
"start": 1078.86,
"end": 1083.26,
"text": " So this is a clean set of data we could look at and I thought the most recent was the most relevant"
},
{
"start": 1083.26,
"end": 1084.6999999999998,
"text": " So what is it doing now?"
},
{
"start": 1084.6999999999998,
"end": 1088.22,
"text": " But um, i've provided i've got the same analysis over time"
},
{
"start": 1088.22,
"end": 1093.1,
"text": " So i've got a gift that I made that everyone can look at which goes through all the months i've been collecting"
},
{
"start": 1093.1,
"end": 1099.26,
"text": " Um, and you can see that chart for where it goes to and has gone through a bunch of changes so in about april 2019"
},
{
"start": 1100.06,
"end": 1103.82,
"text": " That's when they really clamped down on conspiracies and other fringe channels"
},
{
"start": 1104.6999999999998,
"end": 1107.02,
"text": " Before that was it was much closer to neutral"
},
{
"start": 1108.6999999999998,
"end": 1112.86,
"text": " Okay, so but it never it never looked like a a rabbit hole it's never favoring"
},
{
"start": 1113.74,
"end": 1119.34,
"text": " Fringe channels. Yeah. I mean that that has been my experience also personally on youtube. I've"
},
{
"start": 1119.34,
"end": 1123.58,
"text": " I've joined youtube very early or i've i've watched youtube very early when"
},
{
"start": 1124.62,
"end": 1130.9399999999998,
"text": " Young earth creationism was still active and then these things were kind of completely discredited by simply"
},
{
"start": 1131.82,
"end": 1133.82,
"text": " having you having"
},
{
"start": 1134.22,
"end": 1139.1799999999998,
"text": " People exposed to other points of view and even I find this now"
},
{
"start": 1139.1799999999998,
"end": 1143.4199999999998,
"text": " Even though youtube makes it kind of easy to find let's say niche content"
},
{
"start": 1143.42,
"end": 1149.5800000000002,
"text": " It also exposes you to a bunch of of different views. Um, and and"
},
{
"start": 1150.38,
"end": 1152.38,
"text": " I've always found this to be very"
},
{
"start": 1152.7,
"end": 1153.98,
"text": " very"
},
{
"start": 1153.98,
"end": 1155.98,
"text": " optimistic in the sense of"
},
{
"start": 1156.3000000000002,
"end": 1159.9,
"text": " This is probably deradicalizing much more people than radicalizing"
},
{
"start": 1160.38,
"end": 1166.38,
"text": " But you've you've received like as I said a bunch of criticism in so if you could"
},
{
"start": 1167.26,
"end": 1168.7,
"text": " What was the"
},
{
"start": 1168.7,
"end": 1175.66,
"text": " The largest criticism irrespective of whether it was valid or not. What do you have you found was kind of what most people?"
},
{
"start": 1176.3,
"end": 1178.3,
"text": " were criticizing"
},
{
"start": 1179.42,
"end": 1184.94,
"text": " Most people criticizing that we were collecting anonymous recommendations. It wasn't the personalized ones"
},
{
"start": 1184.94,
"end": 1191.66,
"text": " Yeah, and it's actually like it is a valid limitation. We had it. There's a first limitation we talked about in this paper"
},
{
"start": 1193.18,
"end": 1194.54,
"text": " And"
},
{
"start": 1194.54,
"end": 1196.54,
"text": " It's still an open question"
},
{
"start": 1196.54,
"end": 1201.98,
"text": " How personalization would affect these aggregate results that we've got but I think it's reasonable"
},
{
"start": 1202.54,
"end": 1208.1399999999999,
"text": " To assume it will be quite similar once you average it out. So for any one person it might be different"
},
{
"start": 1209.18,
"end": 1213.8999999999999,
"text": " But you would expect personalization based on someone's history to even out because"
},
{
"start": 1214.46,
"end": 1217.98,
"text": " It's kind of the algorithms kind of like the average of all that when it's anonymous"
},
{
"start": 1218.54,
"end": 1220.54,
"text": " Yeah, I feel like you'll get that"
},
{
"start": 1221.5,
"end": 1222.78,
"text": " the"
},
{
"start": 1222.78,
"end": 1224.78,
"text": " the the notion of"
},
{
"start": 1224.78,
"end": 1227.1,
"text": " the the the notion that"
},
{
"start": 1228.06,
"end": 1234.94,
"text": " If because if you're not logged in the recommendation is like a person with only one video of history, right?"
},
{
"start": 1235.5,
"end": 1241.18,
"text": " So it's it's the same thing, but there's only one hit point of history instead of multiple I find"
},
{
"start": 1241.8999999999999,
"end": 1248.7,
"text": " Why should why should the behavior be qualitatively different if you have multiple points of history?"
},
{
"start": 1248.7,
"end": 1256.3,
"text": " like this is a strong claim that you have to you'd have to really show that there is a qualitative difference not just"
},
{
"start": 1256.8600000000001,
"end": 1263.98,
"text": " a more or less accuracy and I feel the people making this criticism are it's really on them to show that there is"
},
{
"start": 1264.8600000000001,
"end": 1271.18,
"text": " a substantial difference rather than saying that this is a giant limitation of the work"
},
{
"start": 1273.42,
"end": 1277.82,
"text": " Yeah, and it's also very hypocritical for a lot of the people saying it because"
},
{
"start": 1277.82,
"end": 1279.34,
"text": " some of them like"
},
{
"start": 1279.34,
"end": 1285.82,
"text": " Zion out who was mockingly saying that her article her original article in New York Times"
},
{
"start": 1286.3,
"end": 1291.4199999999998,
"text": " Used algo transparency, which is anonymous as well, but she doesn't she never looked into that"
},
{
"start": 1291.4199999999998,
"end": 1296.46,
"text": " I think a lot of this is completely motivated reasoning. They don't they don't care about the details"
},
{
"start": 1297.34,
"end": 1301.1,
"text": " I've I've seen this one this one twitter user"
},
{
"start": 1301.1,
"end": 1308.4599999999998,
"text": " She she comment she said something to the effect of if you've seen this article, please consult"
},
{
"start": 1308.62,
"end": 1311.02,
"text": " someone that works in this space like"
},
{
"start": 1312.2199999999998,
"end": 1313.4199999999998,
"text": " it's"
},
{
"start": 1313.4199999999998,
"end": 1319.26,
"text": " It's please don't don't read the article yourself. You must you must get your information through someone"
},
{
"start": 1320.9399999999998,
"end": 1322.9399999999998,
"text": " I've actually i've read the article"
},
{
"start": 1323.4199999999998,
"end": 1327.02,
"text": " I've I find it's pretty straightforward the limitations are clear"
},
{
"start": 1327.02,
"end": 1332.86,
"text": " But also the the results are pretty clear and it's it's actually mostly a boring article, right if if"
},
{
"start": 1334.06,
"end": 1340.7,
"text": " I'm sorry, like it's not a criticism. This is good. Like it's mostly you find that things work as expected"
},
{
"start": 1340.78,
"end": 1346.46,
"text": " There is a bit of a push towards mainstream which can be probably explaining that youtube wants to be"
},
{
"start": 1347.16,
"end": 1351.42,
"text": " Advertiser friendly right and these mainstream channels are already are"
},
{
"start": 1351.42,
"end": 1358.8000000000002,
"text": " Advertiser friendly so they probably get bumped a bit. Um, if what would you say is"
},
{
"start": 1359.92,
"end": 1361.92,
"text": " Maybe the most the most"
},
{
"start": 1362.24,
"end": 1366.24,
"text": " valid criticism that you've heard maybe not the biggest but the most"
},
{
"start": 1366.8000000000002,
"end": 1368.8000000000002,
"text": " Where do you where you say? Yeah, this is really"
},
{
"start": 1369.28,
"end": 1371.8400000000001,
"text": " This is really something that is you know"
},
{
"start": 1374.3200000000002,
"end": 1376.3200000000002,
"text": " I think um, I guess what's"
},
{
"start": 1376.32,
"end": 1383.36,
"text": " Um, there was criticism that i'm overclaiming not in the paper so much but in my tweets and medium"
},
{
"start": 1383.9199999999998,
"end": 1385.9199999999998,
"text": " I guess that's that's fair"
},
{
"start": 1386,
"end": 1390.72,
"text": " But I guess when I tweet and write in medium, those are what I believe in kind of a vasian way"
},
{
"start": 1391.12,
"end": 1393.28,
"text": " I'm not catching my claims that you would"
},
{
"start": 1394.6399999999999,
"end": 1396.6399999999999,
"text": " When you're writing a paper"
},
{
"start": 1398.8,
"end": 1400.8,
"text": " So I guess that's valid"
},
{
"start": 1401.36,
"end": 1403.36,
"text": " But I think a lot of people read into what I was saying"
},
{
"start": 1403.36,
"end": 1406.6399999999999,
"text": " More than what I was so when I say the algorithm"
},
{
"start": 1407.76,
"end": 1413.52,
"text": " Has a de-radicalizing influence. I'm just talking about the recommendations whereas a lot of people consider that to be"
},
{
"start": 1414.08,
"end": 1416.32,
"text": " Talking about all things considered so"
},
{
"start": 1417.28,
"end": 1422.24,
"text": " Even if it isn't doesn't have a bias towards a fringe maybe sociologically youtube"
},
{
"start": 1422.9399999999998,
"end": 1426.1599999999999,
"text": " Radicalizes people it could be the case. I don't know"
},
{
"start": 1426.9599999999998,
"end": 1431.54,
"text": " Um, but that's what i'm talking about. I'm talking about just the influence through recommendations"
},
{
"start": 1431.54,
"end": 1437.46,
"text": " And that's all we can hold google accountable for or at least it's what probably all could agree that google"
},
{
"start": 1437.7,
"end": 1440.74,
"text": " Should be held accountable for with its recommendation system"
},
{
"start": 1442.42,
"end": 1449.62,
"text": " Yeah, do you um, do you expect something to come or have you heard something to come out of youtube themselves?"
},
{
"start": 1449.62,
"end": 1454.5,
"text": " Like the the company any form of official statement to this?"
},
{
"start": 1456.6599999999999,
"end": 1460.26,
"text": " Nothing nothing at all. Um, the only I got a vague"
},
{
"start": 1460.26,
"end": 1463.78,
"text": " I got a vague a reporter was complaining that youtube sent them this"
},
{
"start": 1464.82,
"end": 1466.82,
"text": " So I think they've read it"
},
{
"start": 1467.78,
"end": 1469.86,
"text": " But I have no absolutely no contact with them"
},
{
"start": 1471.3799999999999,
"end": 1473.14,
"text": " Okay"
},
{
"start": 1473.14,
"end": 1477.54,
"text": " Cool, are you doing any anything in follow-up or do you have plans for more research?"
},
{
"start": 1479.62,
"end": 1484.9,
"text": " None of this i've just gone back to work i've applied a bunch for a bunch of independent grant money"
},
{
"start": 1484.9,
"end": 1492.1200000000001,
"text": " But i'm not optimistic. So if I don't get that i'll keep i'll keep it pattering along. I'll probably reduce the amount of recommendations"
},
{
"start": 1492.98,
"end": 1494.98,
"text": " because i'm spending like"
},
{
"start": 1495.6200000000001,
"end": 1500.9,
"text": " About 500 a month at the moment just keeping it running. So I gotta reduce my costs"
},
{
"start": 1501.6200000000001,
"end": 1506.26,
"text": " Yeah, and you do have a patreon for people to to chip into that, right?"
},
{
"start": 1507.5400000000002,
"end": 1510.66,
"text": " Yeah, so if you can link to that that'd be good. So if i'm getting something like"
},
{
"start": 1510.66,
"end": 1515.38,
"text": " Like 22 a month, so it doesn't really cover it"
},
{
"start": 1516.1000000000001,
"end": 1517.3000000000002,
"text": " Yeah"
},
{
"start": 1517.3000000000002,
"end": 1518.66,
"text": " all right, so"
},
{
"start": 1518.66,
"end": 1523.14,
"text": " Okay, this this has been very very pleasant. I think we've we've kind of looked at"
},
{
"start": 1523.94,
"end": 1526.18,
"text": " a lot of things is there anything you would like to"
},
{
"start": 1526.8200000000002,
"end": 1528.26,
"text": " amend"
},
{
"start": 1528.26,
"end": 1532.1000000000001,
"text": " To this that people should know about the research or about this this field"
},
{
"start": 1533.6200000000001,
"end": 1538.5800000000002,
"text": " No, I just have a um, I encourage you to have a play digging into data yourself. There's"
},
{
"start": 1538.58,
"end": 1542.58,
"text": " Um, if you're in this area the data is free to use the code's free to use"
},
{
"start": 1543.54,
"end": 1546.58,
"text": " Um, just consider this a contribution to knowledge"
},
{
"start": 1548.4199999999998,
"end": 1549.62,
"text": " Cool"
},
{
"start": 1549.62,
"end": 1554.82,
"text": " Well, thanks a lot mark. Um, I wish you a very pleasant evening for you, I guess"
},
{
"start": 1555.3799999999999,
"end": 1556.8999999999999,
"text": " and"
},
{
"start": 1556.8999999999999,
"end": 1558.8999999999999,
"text": " Cheers. Thanks"
},
{
"start": 1558.8999999999999,
"end": 1560.8999999999999,
"text": " Thanks for having me. Bye"
},
{
"start": 1560.9,
"end": 1568.9,
"text": " Bye"
}
] |
i4H0kjxrias | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Reformer: The Efficient Transformer | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"nlp",
"natural language processing",
"machine translation",
"arxiv",
"google",
"attention mechanism",
"attention",
"transformer",
"seq2seq",
"bert",
"memory",
"lsh",
"locality sensitive hashing",
"reversible",
"revertible",
"flow",
"long sequence"
] | The Transformer for the masses! Reformer solves the biggest problem with the famous Transformer model: Its huge resource requirements. By cleverly combining Locality Sensitive Hashing and ideas from Reversible Networks, the classically huge footprint of the Transformer is drastically reduced. Not only does that mean the model uses less memory, but it can process much longer input sequences, up to 16K tokens with just 16gb of memory!
https://arxiv.org/abs/2001.04451
https://ai.googleblog.com/2020/01/reformer-efficient-transformer.html
Abstract:
Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O(L²) to O(L log L), where L is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.
Authors: Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there! Today we'll look at Reformer, the efficient transformer by Nikita Kitaev, Lukas Kaiser and Anselm Levskaia. This is a paper that tries to reduce the extreme resource requirements of the transformer model. Now if you haven't seen the transformer model before, that's this thing, I suggest you go watch for example my video on it, Attention is All You Need, it's called, where the transformer is introduced. The most famous transformer is called BERT, B-E-R-T, and you can also look that up, I've made a video about this. So what's the issue here? If you remember transformers, they need a lot of memory. And why? That's because they compute, in each layer they compute these attention things. Let's recap shortly. In a transformer you propagate information layer by layer. So you have layer here with some signal, and then the next layer that you try to propagate the signal. Now what you do, you assign, you assign key queries to each of the next layer. So each of the next layer has queries, and queries are just vectors. This is a vector, this is a vector, this is a vector, and so on. So basically the next layer has the ability to ask, to ask the last layer what it wants. This is a kind of an intrinsic property of attention, and I, as I said, I explained this in detail in the video, Attention is All You Need. Basically these are what's called queries, Q. And then this layer is exposing what are called keys, and keys again are vectors. So vector, vector, vector, vector, and so on. So keys are vectors, and the way that the information is propagated to the next layer is whenever, whatever, we consider for example this node here, right, this node, let's make that yellow. When we consider this node here, it is going to look in the last layer which, which keys match my key the most. And in this case it will probably be this key and this key, right, they match the key the most. And here we look at the inner product, so the angle between the vectors. And then information is aggregated by simply having a weighted average of the values. So information is coming in here and here. Actually information is coming into all the nodes, but since only these keys match, the information will be propagated like this, to this unit. We could do this for another unit, for example this unit right here. What's the value of this unit? Well we have to look at the key here. Which key is it going to be matched to? It's probably going to be matched to this key right here. And probably no other key really. Maybe this key a little bit. So the information of that node in the next layer will be whatever's information is coming in here, routed there, and a little bit of this information. So this is kind of a, it's not a hard, it's called soft attention. So there's a little bit of information going everywhere, but the majority of the information is coming from the nodes where the keys match. So these are queries, these are keys, and technically these things coming in here are called values. But imagine the values simply as the information to be propagated, and the queries and the keys are responsible for routing that information to the next layer. All of these things are learned. So the queries, the keys, and the values. Now what's the problem? The problem is between the queries and the keys. As you can see, what you have to do is you have to match every single query with every single key in order to find out where information goes. 
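To make the quadratic step concrete, here is a minimal NumPy sketch of plain dot-product attention; shapes and names are illustrative and not taken from the Reformer code.

```python
import numpy as np

def full_attention(queries, keys, values):
    # queries, keys, values: (seq_len, dim)
    # the (seq_len x seq_len) score matrix below is the memory/compute bottleneck
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # soft attention over keys
    return weights @ values                                   # weighted average of values

seq_len, dim = 1024, 64
q, k, v = (np.random.randn(seq_len, dim) for _ in range(3))
out = full_attention(q, k, v)   # materializes a 1024 x 1024 score matrix
```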
So this becomes order of, if you have D keys and D queries, order of D squared operations that you have to do. And of course D squared values that you have to compute. And since these are all vectors, of course there is D will not only be the number of keys, but then again this is multiplied, so there is an inner multiplication with the dimensionality, let's call that capital D, of the... no sorry that's not an inner multiplication. Let's just remain at this. So D squared inner products between vectors of capital D dimensions. So it's not an easy thing for resources to do. You need a lot of resources to hold this, all of this in memory at the same time and to compute all of these things. The reformer aims to solve this problem. So this giant space problem that the transformers have, space, memory, also computational problem to a lesser degree. Mostly it's a memory issue. Alright, so what is happening here? And you see here that this product between two matrices clearly gives you this kind of squared thing. So what's happening in the reformer to do this? The trick is, if we go back to this drawing, the trick is to create what's called a hashing scheme or buckets. In creating buckets what you want to do is you want to group similar things together. So let's say we create four buckets. Bucket one, bucket two, bucket three, bucket four. And each bucket we label. And bucket one we label with the up direction, this with the right direction, with the down direction, the left direction as vectors. And now we simply put each of the things into the bucket where it belongs most. So let's for example this vector here, it goes here. Sorry, that is like absolutely not the right place. It goes probably here, right? This vector here, probably this one goes here, right? And so on. So you'll end up each of these assigning a bucket. So these all go into that bucket. Let's continue, actually let's also put the keys in the same buckets. So also the keys, this key here probably goes to this bucket. This key here probably goes to this bucket. Let's say this key here probably goes to the bucket over here. You already see, so before, right before, we cared about this particular query and this particular key. We just looked and we said those two will probably route information to each other because they're similar. And now you can see they both ended up in the same bucket. So the idea is to create a scheme where you throw these things into buckets such that if two vectors are similar they will end up in the same bucket with high probability. So you'll only have to really compare things within the same bucket and not across all of these d squared elements. That's the idea and the technique here is called locality sensitive hashing. So locality sensitive hashing. And short this is called LSH. The idea is the following, if you have two vectors v1 and v2 and they have and you have a distance measure distance measure d. D is a distance. What you want is if the distance between v1 and v2 is small, I'm getting confused with color, with small then you want them in the same bucket. And if the distance is large then you want them in a different bucket. Different buckets. You know with high probability. So all of these things where you say you want them in the same bucket with probability p with probability p with high probability p and here you want them in different buckets with high probability. Or you want them in the same pocket with low probability. That's an equivalent form of stating. 
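The saving this buys can be sketched in a few lines; the hash function below is deliberately left abstract, since any hash with the locality-sensitive property above would do.

```python
from collections import defaultdict

def bucketed_pairs(vectors, lsh_hash):
    """Group vectors by their hash and only compare items that share a bucket."""
    buckets = defaultdict(list)
    for i, v in enumerate(vectors):
        buckets[lsh_hash(v)].append(i)
    # all-pairs comparison would be len(vectors)**2; this keeps only within-bucket pairs
    return [(i, j) for members in buckets.values()
                   for i in members for j in members if i != j]
```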
This is all formalized and I can direct you to the Wikipedia page of that. It's pretty good. It gives a concise definition. Here you can see that and it gives a number of examples. So one example I'd like to give here for locality sensitive hashing is of course the scheme of bucketing will all depend on what your distance measure is. If you consider the distance measure simply to be the jacquard distance. So let's say we have two vectors 0 1 0 1 and here we have 1 0 1 1 0 1 and here it's 0 0 0 1. Alright so maybe you can see the first two vectors here are much more close together than the last vector. Now in terms of bit differences, one scheme to do locality sensitive hashing is to simply sub sample bits. So in this case this is a slightly constructed example. We will just sub sample the first two bits and then construct the buckets according to these bit values. So if since we sample two bits we have four buckets. Here is 0 0, here is 0 1, here is 1 0 and here is 1 1. That's the concept of locality sensitive hashing. You have these buckets and then you can say alright this vector has 1 0, goes into this, this goes into this and then that goes into the 0 1 bucket. And you end up with what you have. You have the two close vectors in the same bucket and the two far apart vectors in that bucket. Of course that doesn't always work. You can be unlucky in sub sampling but that's kind of trade-off you'll have to go for. If things that are close together happen with it's a low probability but if they happen to end up in the different buckets then basically you lose the fact that they are close to each other and that's the trade-off. The kind of locality sensitive hashing they use in the reformer now is what are called random projections. So let's say you have a bunch of vectors and that's really what we care about. You have a bunch of vectors and what you want, you want the keys and queries. So you have a bunch of vectors like this and you want to create buckets such that vectors that are close together will end up in the same bucket and vectors that are far apart will end up in the in different buckets. A cool way to do is, and this is in the cosine distance so we care about the angle between vectors, a cool way to do this is to use random plane projections and the cool thing about it is it works for the cosine distance and you can basically choose how many buckets you create. Let's say we want to create four buckets here again. What we need is two hyper planes and what we'll do is, so here is the origin, we'll simply create two hyper planes through the origin at random. So I'm gonna draw a random hyper plane here like this and then a second random hyper plane like this. So you would agree those are pretty random hyper planes as much as I can be a random generator and then we'll simply label, so this will label hyper plane one, this will label hyper plane two. Now we simply assign each vector bits according to the, on which side of the hyper plane they lie. So let's call this here the plus side and this here the minus side or even yeah let's call this the plus and the minus and here also we call this the plus side and this the minus side. So this vector here is, its signs are plus plus right because it's on the plus side of both of hyper planes. This vector plus plus, this one plus plus, this one here is called, it's on the negative side of plane two but on the positive side of plane one so it's plus minus, this one here minus minus, minus minus, minus minus and these are your buckets. 
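A minimal sketch of this sign-of-random-hyperplane hashing; the paper's own variant uses random rotations with an argmax, but the bucketing idea is the same, and all names here are illustrative.

```python
import numpy as np

def hyperplane_lsh(vectors, n_planes=2, seed=0):
    """Bucket id from the signs of projections onto random hyperplane normals."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_planes, vectors.shape[-1]))
    signs = (vectors @ planes.T) > 0                           # which side of each plane?
    return (signs * (2 ** np.arange(n_planes))).sum(axis=-1)   # 2**n_planes buckets

vecs = np.random.randn(8, 64)
print(hyperplane_lsh(vecs))   # e.g. bucket ids in {0, 1, 2, 3} for two planes
```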
So you would group these vectors together because they have they have the same signs. You would group that vector, you would group these vectors together. The combination of this with attention, since in attention you've seen attention uses a softmax and the softmax is dominated usually by the largest elements and since we compute inner products it means that this softmax thing is dominated by vectors that have small inner products. So basically you don't have to look at all of these d squared vectors if you can find the ones that have the closest distance. You can pretty much ignore the others. And LSH allows you to do this. So build buckets of vectors with similar directions. Then you only have to care about these vectors comparing them to each other. So that's not a lot of vectors generally and that's how you save a lot of work. So you will only have to care about these three vectors if your key vector for example is right here. You'll only have to care about these things in the same bucket and you can ignore all of that rest of the space. Of course the more hyperplanes you have the more buckets you'll have, the less vectors you'll have in the same bucket. That's the general idea. I find this explanation to be a bit easy. You can equivalently explain it by doing these kind of random rotations in the space. You can think about how that will end up actually being the exact same thing as what I just explained. I just like that my explanation better I think. Alright so the way they use this, they have an illustration right here, is the following. So they have these keys right? Sequence of queries and keys. So they do equivalent queries and keys which is a thing you can do in transformers. Don't worry too much about it whether they're different or not. But then they do this LSH bucketing and here the color of the cell is just the bucket, the LSH bucket which will end up. Then they sort that right as you can see and now they do an additional thing which is called the chunk. As you can see there are not the same amount of vectors in each bucket and that is sometimes a problem because even though you've reduced the memory, the memory requirements are still dominated by the largest bucket. By whatever bucket has the most number of vectors that will pretty much be your memory requirement. Because now you don't have to, if this is D, you have to compute all the D squared things anymore. But you'll only have to compute this quantity, let's call that B. So the maximum bucket size. But that could still be large right? If you look at a distribution it's probably going to be something like this right? Where most buckets have a kind of a standard number of vectors but some buckets will have a lot of vectors and that's, sorry, some few buckets will have a lot of vectors and your memory requirement is still dominated by this. So they do an additional thing which is called chunking which means they actually take fixed size chunks here, fixed size. Here they always take four and they say all right these are our chunks and we will only compute attention within the chunks right? So it could be that there's the same bucket is actually split between chunks and that's why they do an additional thing is that you can attend two things in a different chunk right here. You can attend two things in your neighboring chunks so you're restricted to either your own chunk or your neighboring chunk. Note that there aren't any any arrows going over here. 
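A rough sketch of the sort-then-chunk pattern, with shared query/key vectors; it leaves out the masking and multi-round hashing details of the actual method and just shows the shape of the computation.

```python
import numpy as np

def lsh_chunked_attention(qk, v, bucket_ids, chunk_size):
    """Sort by bucket, then attend within each chunk and to the chunk before it."""
    order = np.argsort(bucket_ids, kind="stable")
    qk_s, v_s = qk[order], v[order]
    out = np.zeros_like(v_s)
    for lo in range(0, len(qk_s), chunk_size):
        hi = lo + chunk_size
        ctx_lo = max(0, lo - chunk_size)                  # also look back one chunk
        scores = qk_s[lo:hi] @ qk_s[ctx_lo:hi].T / np.sqrt(qk.shape[-1])
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w = w / w.sum(axis=-1, keepdims=True)
        out[lo:hi] = w @ v_s[ctx_lo:hi]
    return out[np.argsort(order)]                         # undo the sort
```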
So you can attend, they have this diagram here, which things you can attend to. You can attend to yourself or attend to your neighboring thing but not to any other thing or the other way around right? So that's basically the the concept of saving memory. Now your memory requirements are, if we call this quantity now, we call the other one B, let's call this the chunk size C right? Your memory requirements are pretty much C squared plus whatever this unidirectional, so not this isn't squared, plus probably O of C something like this. So you bring your memory requirements down quite a bit. Now that's the general idea here. The problem they face again is, so they face another problem where they say hold on, I can't find it right here, they say hold on, we do have actually another problem and that is that these transformers have to back propagate. So you'll have to forward propagate these things and now we've kind of solved this D square computation issue but what you'll have to do is if you go from layer to layer right? Layer, layer, layer, layer. What you have to do is if you propagate information forward you still have to back propagate and in order to back propagate usually, usually you'll have to remember all of these activations right? So these activations, these activations. In order to do back prop it is often the case that you actually have to remember the activations because in each forward propagation, in each layer here you might lose some information. Imagine you have a layer that maps these two-dimensional vectors both to, so here actually let's make this blue, maps these three vectors to the following configuration. So a layer maps these vectors to this, this and this. So it maps two things to one thing which you know can be if you in a linear layer can decide to map it to a lower dimensional subspace. So you could actually decide to map it to in fact two points right? This is also a possibility. You could do dimension reduction. So because all of this in order to do back prop you actually have to remember these things in order to do proper back prop. This is a problem again for the transformer because all these activations even though we've gotten rid of the d-square computation they will have to be remembered and that takes a lot of memory. The way to solve this is actually to do invertible layers. What that means is that if I propagate information forward, forward, forward, forward, I can figure out what the information here was simply by looking at the back prop activations. And this happens if the layer is invertible. So if this function here is invertible. So if f here technically is invertible. So I can actually write down the inverse of f and that is defined. This of course is a pretty big restriction and the way they achieve it, I like to go to the blog here, the way they achieve it is they do what's called an idea from reversible networks where they always have two sets of activations. That's what you see here. X1 and X2. And in each layer only one of them is updated in a residual fashion. You can see here layer 1 updates X2 but X1 remains the same and goes to Y1. And then in the next layer, layer 2 only updates Y1 in order to construct Z1. But Y2 remains the same to be Z2. And then you can revert the layers. You can basically figure out what the activations were from the back prop signal. Now that's extremely good if you want to save memory but of course it restricts clearly. You have to be restricted to this kind of architecture similar. This idea actually isn't new. 
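The reversible trick itself fits in a few lines; f and g below are stand-ins for the attention and feed-forward sub-layers, so this is a sketch of the idea rather than the paper's exact residual arrangement.

```python
import numpy as np

def rev_block_forward(x1, x2, f, g):
    """Reversible residual block: the outputs alone are enough to recover the inputs."""
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def rev_block_inverse(y1, y2, f, g):
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

f = lambda t: np.tanh(t)          # stand-in for the attention sub-layer
g = lambda t: 0.5 * t             # stand-in for the feed-forward sub-layer
x1, x2 = np.random.randn(4), np.random.randn(4)
y1, y2 = rev_block_forward(x1, x2, f, g)
r1, r2 = rev_block_inverse(y1, y2, f, g)
assert np.allclose(r1, x1) and np.allclose(r2, x2)   # activations recomputed, not stored
```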
This has been used many times in things like normalizing flows and I want to highlight this paper. Actually want to highlight specific... I chose this paper because they have these nice diagrams where they show exactly this. You see they have two sets X1 and X2 that in forward propagation they only update one of them. And then in backward in what's called inverse propagation they can figure out what those were. And they couple these in exactly the same way. Like here this drawing might be even more similar where they alternate between updating the two activations. So you can think of this as a way to simply make the function that you're representing with the neural network invertible. That is a giant constraint on your architecture but these methods here, these normalizing flow methods, use that so they can actually define an invertible layer because they need the Jacobian inverse in order to compute their normalizing flow. So you see that's why they originally did it. And I'm sure that that's not a new idea or particularly new again. Strangely I haven't found any of the flow literature cited. They do cite the reversible residual net paper that they probably got the idea from. So with these two things now you can save the giant computation. And you can also not store the forward activations. So they say they can take now giant giant giant input sizes. You may remember transformers like BERT. So BERT it can use something like 512 tokens. In its input sequence. That means the sequence that you can look at with BERT at a time is 512 long and not a bit longer. There have been some extensions to that. For example I believe in XL net. So XL net has pushed this to something like C times 512 where C is a smallish constant. That where you can kind of carry over information between sequences. But this thing here as you can see they calculate it could take up something like 64,000 tokens and that would use in total 16 gigabytes of memory. Which is available on a high-end GPU. So this is a giant this is a giant step forward in in producing transformers that can actually take large models. And here you see the memory and time complexity. You can look at these things yourself but you can see maybe here that these squares here from the original transformer they now vanish from this. And all of these constants are a lot of these constants are actually smaller. For example that chunk size is in there instead of kind of the entire sequence length. So that's basically the the paper. They show that I can actually input those long sequences. They can apply this to images. You see there's image net pixel by pixel which is a lot of pixels and would have been absolutely unthinkable with one of the original transformers. And with that I invite you to check out the paper and the blog post and I'll see you next time. Bye bye. | [
{
"start": 0,
"end": 5.84,
"text": " Hi there! Today we'll look at Reformer, the efficient transformer by Nikita"
},
{
"start": 5.84,
"end": 13.72,
"text": " Kitaev, Lukas Kaiser and Anselm Levskaia. This is a paper that tries to reduce the"
},
{
"start": 13.72,
"end": 18.6,
"text": " extreme resource requirements of the transformer model. Now if you haven't"
},
{
"start": 18.6,
"end": 25.2,
"text": " seen the transformer model before, that's this thing, I suggest you go watch for"
},
{
"start": 25.2,
"end": 29.36,
"text": " example my video on it, Attention is All You Need, it's called, where the"
},
{
"start": 29.36,
"end": 36.56,
"text": " transformer is introduced. The most famous transformer is called BERT, B-E-R-T,"
},
{
"start": 36.56,
"end": 43.72,
"text": " and you can also look that up, I've made a video about this. So what's the issue"
},
{
"start": 43.72,
"end": 50.480000000000004,
"text": " here? If you remember transformers, they need a lot of memory. And why? That's"
},
{
"start": 50.480000000000004,
"end": 56.92,
"text": " because they compute, in each layer they compute these attention things. Let's"
},
{
"start": 56.92,
"end": 63.64,
"text": " recap shortly. In a transformer you propagate information layer by layer. So"
},
{
"start": 63.64,
"end": 71.48,
"text": " you have layer here with some signal, and then the next layer that you try to"
},
{
"start": 71.48,
"end": 80.44,
"text": " propagate the signal. Now what you do, you assign, you assign key queries to each of"
},
{
"start": 80.44,
"end": 84.92,
"text": " the next layer. So each of the next layer has queries, and queries are just"
},
{
"start": 84.92,
"end": 90.44,
"text": " vectors. This is a vector, this is a vector, this is a vector, and so on. So"
},
{
"start": 90.44,
"end": 97.48,
"text": " basically the next layer has the ability to ask, to ask the last layer what it"
},
{
"start": 97.48,
"end": 104.2,
"text": " wants. This is a kind of an intrinsic property of attention, and I, as I said, I"
},
{
"start": 104.2,
"end": 108.92,
"text": " explained this in detail in the video, Attention is All You Need. Basically"
},
{
"start": 108.92,
"end": 115.88,
"text": " these are what's called queries, Q. And then this layer is exposing what are"
},
{
"start": 115.88,
"end": 124.28,
"text": " called keys, and keys again are vectors. So vector, vector, vector, vector, and so on."
},
{
"start": 124.28,
"end": 130.48,
"text": " So keys are vectors, and the way that the information is propagated to the next"
},
{
"start": 130.48,
"end": 138.83999999999997,
"text": " layer is whenever, whatever, we consider for example this node here, right, this"
},
{
"start": 138.83999999999997,
"end": 144.79999999999998,
"text": " node, let's make that yellow. When we consider this node here, it is going to"
},
{
"start": 144.79999999999998,
"end": 152.48,
"text": " look in the last layer which, which keys match my key the most. And in this case"
},
{
"start": 152.48,
"end": 158.2,
"text": " it will probably be this key and this key, right, they match the key the most."
},
{
"start": 158.2,
"end": 164.11999999999998,
"text": " And here we look at the inner product, so the angle between the vectors. And then"
},
{
"start": 164.11999999999998,
"end": 171.56,
"text": " information is aggregated by simply having a weighted average of the values."
},
{
"start": 171.56,
"end": 176.56,
"text": " So information is coming in here and here. Actually information is coming into"
},
{
"start": 176.56,
"end": 181.48,
"text": " all the nodes, but since only these keys match, the information will be propagated"
},
{
"start": 181.48,
"end": 189.79999999999998,
"text": " like this, to this unit. We could do this for another unit, for example this unit"
},
{
"start": 189.79999999999998,
"end": 195.83999999999997,
"text": " right here. What's the value of this unit? Well we have to look at the key here."
},
{
"start": 195.83999999999997,
"end": 201,
"text": " Which key is it going to be matched to? It's probably going to be matched to"
},
{
"start": 201,
"end": 208.23999999999998,
"text": " this key right here. And probably no other key really. Maybe this key a little"
},
{
"start": 208.24,
"end": 213.04000000000002,
"text": " bit. So the information of that node in the next layer will be whatever's"
},
{
"start": 213.04000000000002,
"end": 218.08,
"text": " information is coming in here, routed there, and a little bit of this"
},
{
"start": 218.08,
"end": 223,
"text": " information. So this is kind of a, it's not a hard, it's called soft attention."
},
{
"start": 223,
"end": 228.48000000000002,
"text": " So there's a little bit of information going everywhere, but the majority of the"
},
{
"start": 228.48000000000002,
"end": 232.12,
"text": " information is coming from the nodes where the keys match. So these are"
},
{
"start": 232.12,
"end": 237.60000000000002,
"text": " queries, these are keys, and technically these things coming in here are called"
},
{
"start": 237.6,
"end": 243.48,
"text": " values. But imagine the values simply as the information to be propagated, and the"
},
{
"start": 243.48,
"end": 248.56,
"text": " queries and the keys are responsible for routing that information to the next"
},
{
"start": 248.56,
"end": 254.28,
"text": " layer. All of these things are learned. So the queries, the keys, and the values."
},
{
"start": 254.28,
"end": 259.15999999999997,
"text": " Now what's the problem? The problem is between the queries and the keys. As you"
},
{
"start": 259.15999999999997,
"end": 264.88,
"text": " can see, what you have to do is you have to match every single query with every"
},
{
"start": 264.88,
"end": 270.28,
"text": " single key in order to find out where information goes. So this becomes order"
},
{
"start": 270.28,
"end": 278.6,
"text": " of, if you have D keys and D queries, order of D squared operations that you"
},
{
"start": 278.6,
"end": 283.96,
"text": " have to do. And of course D squared values that you have to compute. And"
},
{
"start": 283.96,
"end": 290.96,
"text": " since these are all vectors, of course there is D will not only be the number"
},
{
"start": 290.96,
"end": 294.91999999999996,
"text": " of keys, but then again this is multiplied, so there is an inner"
},
{
"start": 294.91999999999996,
"end": 303.52,
"text": " multiplication with the dimensionality, let's call that capital D, of the... no"
},
{
"start": 303.52,
"end": 310.35999999999996,
"text": " sorry that's not an inner multiplication. Let's just remain at this. So D squared"
},
{
"start": 310.35999999999996,
"end": 317,
"text": " inner products between vectors of capital D dimensions. So it's not an"
},
{
"start": 317,
"end": 324.8,
"text": " easy thing for resources to do. You need a lot of resources to hold this, all of"
},
{
"start": 324.8,
"end": 331.22,
"text": " this in memory at the same time and to compute all of these things. The reformer"
},
{
"start": 331.22,
"end": 336.64,
"text": " aims to solve this problem. So this giant space problem that the"
},
{
"start": 336.64,
"end": 343.24,
"text": " transformers have, space, memory, also computational problem to a lesser degree."
},
{
"start": 343.24,
"end": 350.44,
"text": " Mostly it's a memory issue. Alright, so what is happening here? And you see"
},
{
"start": 350.44,
"end": 356.84000000000003,
"text": " here that this product between two matrices clearly gives you this"
},
{
"start": 356.84000000000003,
"end": 365.08,
"text": " kind of squared thing. So what's happening in the reformer to do this?"
},
{
"start": 365.08,
"end": 371.96000000000004,
"text": " The trick is, if we go back to this drawing, the trick is to create"
},
{
"start": 371.96,
"end": 378.35999999999996,
"text": " what's called a hashing scheme or buckets. In creating buckets what you"
},
{
"start": 378.35999999999996,
"end": 385.4,
"text": " want to do is you want to group similar things together. So let's say we create"
},
{
"start": 385.4,
"end": 395.88,
"text": " four buckets. Bucket one, bucket two, bucket three, bucket four. And each"
},
{
"start": 395.88,
"end": 402.56,
"text": " bucket we label. And bucket one we label with the up direction, this with the right"
},
{
"start": 402.56,
"end": 408.56,
"text": " direction, with the down direction, the left direction as vectors. And now we"
},
{
"start": 408.56,
"end": 415.36,
"text": " simply put each of the things into the bucket where it belongs most. So let's"
},
{
"start": 415.36,
"end": 422.76,
"text": " for example this vector here, it goes here. Sorry, that is like absolutely not"
},
{
"start": 422.76,
"end": 432.12,
"text": " the right place. It goes probably here, right? This vector here, probably this one"
},
{
"start": 432.12,
"end": 437.8,
"text": " goes here, right? And so on. So you'll end up each of these assigning a bucket. So"
},
{
"start": 437.8,
"end": 445.4,
"text": " these all go into that bucket. Let's continue, actually let's also"
},
{
"start": 445.4,
"end": 453,
"text": " put the keys in the same buckets. So also the keys, this key here probably goes"
},
{
"start": 453,
"end": 462.64,
"text": " to this bucket. This key here probably goes to this bucket. Let's say this key"
},
{
"start": 462.64,
"end": 468.12,
"text": " here probably goes to the bucket over here. You already see, so before, right"
},
{
"start": 468.12,
"end": 476.04,
"text": " before, we cared about this particular query and this particular key. We just"
},
{
"start": 476.04,
"end": 480.8,
"text": " looked and we said those two will probably route information to each other"
},
{
"start": 480.8,
"end": 486.72,
"text": " because they're similar. And now you can see they both ended up in the same"
},
{
"start": 486.72,
"end": 493.84000000000003,
"text": " bucket. So the idea is to create a scheme where you throw these things into"
},
{
"start": 493.84,
"end": 499.56,
"text": " buckets such that if two vectors are similar they will end up in the same"
},
{
"start": 499.56,
"end": 504.76,
"text": " bucket with high probability. So you'll only have to really compare things within"
},
{
"start": 504.76,
"end": 511.96,
"text": " the same bucket and not across all of these d squared elements. That's the idea"
},
{
"start": 511.96,
"end": 520.16,
"text": " and the technique here is called locality sensitive hashing. So locality"
},
{
"start": 520.16,
"end": 531.56,
"text": " sensitive hashing. And short this is called LSH. The idea is the following, if"
},
{
"start": 531.56,
"end": 539.92,
"text": " you have two vectors v1 and v2 and they have and you have a distance measure"
},
{
"start": 539.92,
"end": 551.64,
"text": " distance measure d. D is a distance. What you want is if the distance between v1"
},
{
"start": 551.64,
"end": 564.8399999999999,
"text": " and v2 is small, I'm getting confused with color, with small then you want them in the"
},
{
"start": 564.84,
"end": 579.0400000000001,
"text": " same bucket. And if the distance is large then you want them in a different bucket."
},
{
"start": 579.0400000000001,
"end": 589.88,
"text": " Different buckets. You know with high probability. So all of these things"
},
{
"start": 589.88,
"end": 597.4399999999999,
"text": " where you say you want them in the same bucket with probability p with"
},
{
"start": 597.4399999999999,
"end": 602.76,
"text": " probability p with high probability p and here you want them in different"
},
{
"start": 602.76,
"end": 606.88,
"text": " buckets with high probability. Or you want them in the same pocket with low"
},
{
"start": 606.88,
"end": 612.32,
"text": " probability. That's an equivalent form of stating. This is all formalized and I"
},
{
"start": 612.32,
"end": 618.56,
"text": " can direct you to the Wikipedia page of that. It's pretty good. It gives a concise"
},
{
"start": 618.56,
"end": 625.1199999999999,
"text": " definition. Here you can see that and it gives a number of examples. So one"
},
{
"start": 625.1199999999999,
"end": 630.04,
"text": " example I'd like to give here for locality sensitive hashing is of course"
},
{
"start": 630.04,
"end": 636.4799999999999,
"text": " the scheme of bucketing will all depend on what your distance measure is. If you"
},
{
"start": 636.4799999999999,
"end": 641.3599999999999,
"text": " consider the distance measure simply to be the jacquard distance. So let's say we"
},
{
"start": 641.36,
"end": 656.12,
"text": " have two vectors 0 1 0 1 and here we have 1 0 1 1 0 1 and here it's 0 0 0 1."
},
{
"start": 656.12,
"end": 664.16,
"text": " Alright so maybe you can see the first two vectors here are much more close"
},
{
"start": 664.16,
"end": 672.9599999999999,
"text": " together than the last vector. Now in terms of bit differences, one scheme"
},
{
"start": 672.9599999999999,
"end": 680.12,
"text": " to do locality sensitive hashing is to simply sub sample bits. So in this case"
},
{
"start": 680.12,
"end": 686.52,
"text": " this is a slightly constructed example. We will just sub sample the first two"
},
{
"start": 686.52,
"end": 691.88,
"text": " bits and then construct the buckets according to these bit values. So if"
},
{
"start": 691.88,
"end": 698.24,
"text": " since we sample two bits we have four buckets. Here is 0 0, here is 0 1,"
},
{
"start": 698.24,
"end": 703.76,
"text": " here is 1 0 and here is 1 1. That's the concept of locality sensitive hashing."
},
{
"start": 703.76,
"end": 708.12,
"text": " You have these buckets and then you can say alright this vector has 1 0,"
},
{
"start": 708.12,
"end": 716.76,
"text": " goes into this, this goes into this and then that goes into the 0 1 bucket."
},
{
"start": 716.76,
"end": 722,
"text": " And you end up with what you have. You have the two close vectors in the same"
},
{
"start": 722,
"end": 726.08,
"text": " bucket and the two far apart vectors in that bucket. Of course that doesn't"
},
{
"start": 726.08,
"end": 730.36,
"text": " always work. You can be unlucky in sub sampling but that's kind of"
},
{
"start": 730.36,
"end": 735.36,
"text": " trade-off you'll have to go for. If things that are close together"
},
{
"start": 735.36,
"end": 740.96,
"text": " happen with it's a low probability but if they happen to end up in the different"
},
{
"start": 740.96,
"end": 747.44,
"text": " buckets then basically you lose the fact that they are close to each other and"
},
{
"start": 747.44,
"end": 752.6,
"text": " that's the trade-off. The kind of locality sensitive hashing they use in"
},
{
"start": 752.6,
"end": 757.84,
"text": " the reformer now is what are called random projections. So let's say you have"
},
{
"start": 757.84,
"end": 761.48,
"text": " a bunch of vectors and that's really what we care about. You have a bunch"
},
{
"start": 761.48,
"end": 770.76,
"text": " of vectors and what you want, you want the keys and queries. So you have a"
},
{
"start": 770.76,
"end": 775.8,
"text": " bunch of vectors like this and you want to create buckets such that vectors that"
},
{
"start": 775.8,
"end": 780.64,
"text": " are close together will end up in the same bucket and vectors that are far"
},
{
"start": 780.64,
"end": 787.4,
"text": " apart will end up in the in different buckets. A cool way to do is,"
},
{
"start": 787.4,
"end": 791.72,
"text": " and this is in the cosine distance so we care about the angle between vectors,"
},
{
"start": 791.72,
"end": 799.48,
"text": " a cool way to do this is to use random plane projections and the cool"
},
{
"start": 799.48,
"end": 803.44,
"text": " thing about it is it works for the cosine distance and you can basically"
},
{
"start": 803.44,
"end": 810.4,
"text": " choose how many buckets you create. Let's say we want to create four"
},
{
"start": 810.4,
"end": 816.16,
"text": " buckets here again. What we need is two hyper planes and what we'll do is, so"
},
{
"start": 816.16,
"end": 822.04,
"text": " here is the origin, we'll simply create two hyper planes through the origin at"
},
{
"start": 822.04,
"end": 829.44,
"text": " random. So I'm gonna draw a random hyper plane here like this and then a second"
},
{
"start": 829.44,
"end": 837.24,
"text": " random hyper plane like this. So you would agree those are pretty random"
},
{
"start": 837.24,
"end": 843.12,
"text": " hyper planes as much as I can be a random generator and then we'll simply"
},
{
"start": 843.12,
"end": 848.8000000000001,
"text": " label, so this will label hyper plane one, this will label hyper plane two."
},
{
"start": 848.8000000000001,
"end": 857,
"text": " Now we simply assign each vector bits according to the, on which"
},
{
"start": 857,
"end": 862,
"text": " side of the hyper plane they lie. So let's call this here the plus side and"
},
{
"start": 862,
"end": 866.88,
"text": " this here the minus side or even yeah let's call this the plus and the minus"
},
{
"start": 866.88,
"end": 872.24,
"text": " and here also we call this the plus side and this the minus side. So this vector"
},
{
"start": 872.24,
"end": 880.8,
"text": " here is, its signs are plus plus right because it's on the plus side of both of"
},
{
"start": 880.8,
"end": 888.64,
"text": " hyper planes. This vector plus plus, this one plus plus, this one here is called,"
},
{
"start": 888.64,
"end": 894.12,
"text": " it's on the negative side of plane two but on the positive side of plane one so"
},
{
"start": 894.12,
"end": 902.12,
"text": " it's plus minus, this one here minus minus, minus minus, minus minus and these"
},
{
"start": 902.12,
"end": 907.12,
"text": " are your buckets. So you would group these vectors together because they have"
},
{
"start": 907.12,
"end": 911.48,
"text": " they have the same signs. You would group that vector, you would group these"
},
{
"start": 911.48,
"end": 918.64,
"text": " vectors together. The combination of this with attention, since in attention you've"
},
{
"start": 918.64,
"end": 926.44,
"text": " seen attention uses a softmax and the softmax is dominated usually by the"
},
{
"start": 926.44,
"end": 932.44,
"text": " largest elements and since we compute inner products it means that this softmax"
},
{
"start": 932.44,
"end": 938.48,
"text": " thing is dominated by vectors that have small inner products. So basically"
},
{
"start": 938.48,
"end": 944.6800000000001,
"text": " you don't have to look at all of these d squared vectors if you can find the"
},
{
"start": 944.6800000000001,
"end": 950.48,
"text": " ones that have the closest distance. You can pretty much ignore the others."
},
{
"start": 950.48,
"end": 957.8800000000001,
"text": " And LSH allows you to do this. So build buckets of vectors with"
},
{
"start": 957.88,
"end": 964.68,
"text": " similar directions. Then you only have to care about these vectors comparing them"
},
{
"start": 964.68,
"end": 971.32,
"text": " to each other. So that's not a lot of vectors generally and that's how you"
},
{
"start": 971.32,
"end": 976.32,
"text": " save a lot of work. So you will only have to care about these three vectors if"
},
{
"start": 976.32,
"end": 981.36,
"text": " your key vector for example is right here. You'll only have to care about these"
},
{
"start": 981.36,
"end": 988.4,
"text": " things in the same bucket and you can ignore all of that rest of the space. Of"
},
{
"start": 988.4,
"end": 992.72,
"text": " course the more hyperplanes you have the more buckets you'll have, the less"
},
{
"start": 992.72,
"end": 997.04,
"text": " vectors you'll have in the same bucket. That's the general idea. I find this"
},
{
"start": 997.04,
"end": 1001.36,
"text": " explanation to be a bit easy. You can equivalently explain it by doing these"
},
{
"start": 1001.36,
"end": 1007.84,
"text": " kind of random rotations in the space. You can think about how that will end up"
},
{
"start": 1007.84,
"end": 1012.5600000000001,
"text": " actually being the exact same thing as what I just explained. I just like that"
},
{
"start": 1012.5600000000001,
"end": 1020.48,
"text": " my explanation better I think. Alright so the way they use this, they have an"
},
{
"start": 1020.48,
"end": 1026.88,
"text": " illustration right here, is the following. So they have these keys right?"
},
{
"start": 1026.88,
"end": 1031.68,
"text": " Sequence of queries and keys. So they do equivalent queries and keys which is a"
},
{
"start": 1031.68,
"end": 1036.48,
"text": " thing you can do in transformers. Don't worry too much about it whether they're"
},
{
"start": 1036.48,
"end": 1042.16,
"text": " different or not. But then they do this LSH bucketing and here the color of the"
},
{
"start": 1042.16,
"end": 1048.84,
"text": " cell is just the bucket, the LSH bucket which will end up. Then they sort that"
},
{
"start": 1048.84,
"end": 1055.3600000000001,
"text": " right as you can see and now they do an additional thing which is called the"
},
{
"start": 1055.3600000000001,
"end": 1061.4,
"text": " chunk. As you can see there are not the same amount of vectors in each bucket"
},
{
"start": 1061.4,
"end": 1068.3200000000002,
"text": " and that is sometimes a problem because even though you've reduced the"
},
{
"start": 1068.3200000000002,
"end": 1073.4,
"text": " memory, the memory requirements are still dominated by the"
},
{
"start": 1073.4,
"end": 1080.3200000000002,
"text": " largest bucket. By whatever bucket has the most number of vectors that will"
},
{
"start": 1080.3200000000002,
"end": 1085.48,
"text": " pretty much be your memory requirement. Because now you don't have to, if"
},
{
"start": 1085.48,
"end": 1091.2800000000002,
"text": " this is D, you have to compute all the D squared things anymore. But you'll"
},
{
"start": 1091.28,
"end": 1099.84,
"text": " only have to compute this quantity, let's call that B. So the maximum"
},
{
"start": 1099.84,
"end": 1105.6399999999999,
"text": " bucket size. But that could still be large right? If you look at a"
},
{
"start": 1105.6399999999999,
"end": 1110.6,
"text": " distribution it's probably going to be something like this right? Where most"
},
{
"start": 1110.6,
"end": 1116.44,
"text": " buckets have a kind of a standard number of vectors but some buckets will have a"
},
{
"start": 1116.44,
"end": 1122.64,
"text": " lot of vectors and that's, sorry, some few buckets will have a lot of vectors and"
},
{
"start": 1122.64,
"end": 1126.16,
"text": " your memory requirement is still dominated by this. So they do an"
},
{
"start": 1126.16,
"end": 1129.04,
"text": " additional thing which is called chunking which means they actually take"
},
{
"start": 1129.04,
"end": 1136.24,
"text": " fixed size chunks here, fixed size. Here they always take four and they say all"
},
{
"start": 1136.24,
"end": 1143.8,
"text": " right these are our chunks and we will only compute attention within the chunks"
},
{
"start": 1143.8,
"end": 1149,
"text": " right? So it could be that there's the same bucket is actually split"
},
{
"start": 1149,
"end": 1153.2,
"text": " between chunks and that's why they do an additional thing is that you can attend"
},
{
"start": 1153.2,
"end": 1159.84,
"text": " two things in a different chunk right here. You can attend two things"
},
{
"start": 1159.84,
"end": 1165.52,
"text": " in your neighboring chunks so you're restricted to either your own chunk or"
},
{
"start": 1165.52,
"end": 1173.48,
"text": " your neighboring chunk. Note that there aren't any any arrows going over here."
},
{
"start": 1173.48,
"end": 1180.08,
"text": " So you can attend, they have this diagram here, which things you can"
},
{
"start": 1180.08,
"end": 1185.6,
"text": " attend to. You can attend to yourself or attend to your neighboring thing but not"
},
{
"start": 1185.6,
"end": 1192.4,
"text": " to any other thing or the other way around right? So that's basically the"
},
{
"start": 1192.4,
"end": 1201.32,
"text": " the concept of saving memory. Now your memory requirements are, if we call this"
},
{
"start": 1201.32,
"end": 1208.28,
"text": " quantity now, we call the other one B, let's call this the chunk size C right?"
},
{
"start": 1208.28,
"end": 1213.76,
"text": " Your memory requirements are pretty much C squared plus whatever this"
},
{
"start": 1213.76,
"end": 1220.52,
"text": " unidirectional, so not this isn't squared, plus probably O of C something"
},
{
"start": 1220.52,
"end": 1230.3999999999999,
"text": " like this. So you bring your memory requirements down quite a bit. Now"
},
{
"start": 1230.4,
"end": 1240.0400000000002,
"text": " that's the general idea here. The problem they face again is, so they face"
},
{
"start": 1240.0400000000002,
"end": 1249.92,
"text": " another problem where they say hold on, I can't find it right here, they say hold on,"
},
{
"start": 1249.92,
"end": 1254.72,
"text": " we do have actually another problem and that is that these transformers"
},
{
"start": 1254.72,
"end": 1260.64,
"text": " have to back propagate. So you'll have to forward propagate these things and now"
},
{
"start": 1260.64,
"end": 1264.48,
"text": " we've kind of solved this D square computation issue but what you'll have to"
},
{
"start": 1264.48,
"end": 1270.64,
"text": " do is if you go from layer to layer right? Layer, layer, layer, layer. What you"
},
{
"start": 1270.64,
"end": 1274.96,
"text": " have to do is if you propagate information forward you still have to"
},
{
"start": 1274.96,
"end": 1280.68,
"text": " back propagate and in order to back propagate usually, usually you'll have to"
},
{
"start": 1280.68,
"end": 1287.3600000000001,
"text": " remember all of these activations right? So these activations, these activations."
},
{
"start": 1287.3600000000001,
"end": 1292.4,
"text": " In order to do back prop it is often the case that you actually have to remember"
},
{
"start": 1292.4,
"end": 1296.96,
"text": " the activations because in each forward propagation, in each layer here you might"
},
{
"start": 1296.96,
"end": 1304.5600000000002,
"text": " lose some information. Imagine you have a layer that maps these"
},
{
"start": 1304.56,
"end": 1314.12,
"text": " two-dimensional vectors both to, so here actually let's make this blue, maps these"
},
{
"start": 1314.12,
"end": 1319.96,
"text": " three vectors to the following configuration. So a layer maps these"
},
{
"start": 1319.96,
"end": 1329.32,
"text": " vectors to this, this and this. So it maps two things to one thing which"
},
{
"start": 1329.32,
"end": 1335.32,
"text": " you know can be if you in a linear layer can decide to map it to a lower"
},
{
"start": 1335.32,
"end": 1340.6799999999998,
"text": " dimensional subspace. So you could actually decide to map it to in fact"
},
{
"start": 1340.6799999999998,
"end": 1346,
"text": " two points right? This is also a possibility. You could do dimension reduction."
},
{
"start": 1346,
"end": 1349.52,
"text": " So because all of this in order to do back prop you actually have to remember"
},
{
"start": 1349.52,
"end": 1357.32,
"text": " these things in order to do proper back prop. This is a problem again for the"
},
{
"start": 1357.32,
"end": 1361.4399999999998,
"text": " transformer because all these activations even though we've gotten rid"
},
{
"start": 1361.4399999999998,
"end": 1366.72,
"text": " of the d-square computation they will have to be remembered and that takes a"
},
{
"start": 1366.72,
"end": 1374.6,
"text": " lot of memory. The way to solve this is actually to do invertible layers. What"
},
{
"start": 1374.6,
"end": 1378.96,
"text": " that means is that if I propagate information forward, forward, forward,"
},
{
"start": 1378.96,
"end": 1385.76,
"text": " forward, I can figure out what the information here was simply by looking"
},
{
"start": 1385.76,
"end": 1392.56,
"text": " at the back prop activations. And this happens if the layer is invertible."
},
{
"start": 1392.56,
"end": 1400.48,
"text": " So if this function here is invertible. So if f here technically is invertible."
},
{
"start": 1400.48,
"end": 1408.6,
"text": " So I can actually write down the inverse of f and that is defined. This of course"
},
{
"start": 1408.6,
"end": 1419.1999999999998,
"text": " is a pretty big restriction and the way they achieve it, I like to go to the blog"
},
{
"start": 1419.1999999999998,
"end": 1430.4399999999998,
"text": " here, the way they achieve it is they do what's called an idea from reversible"
},
{
"start": 1430.4399999999998,
"end": 1434.4399999999998,
"text": " networks where they always have two sets of activations. That's what you see here."
},
{
"start": 1434.44,
"end": 1441.56,
"text": " X1 and X2. And in each layer only one of them is updated in a residual fashion."
},
{
"start": 1441.56,
"end": 1449.52,
"text": " You can see here layer 1 updates X2 but X1 remains the same and goes to Y1."
},
{
"start": 1449.52,
"end": 1458.76,
"text": " And then in the next layer, layer 2 only updates Y1 in order to"
},
{
"start": 1458.76,
"end": 1466.28,
"text": " construct Z1. But Y2 remains the same to be Z2. And then you can revert the layers."
},
{
"start": 1466.28,
"end": 1471.84,
"text": " You can basically figure out what the activations were from the back prop"
},
{
"start": 1471.84,
"end": 1479.24,
"text": " signal. Now that's extremely good if you want to save memory but of course it"
},
{
"start": 1479.24,
"end": 1483.4,
"text": " restricts clearly. You have to be restricted to this kind of architecture"
},
{
"start": 1483.4,
"end": 1490.52,
"text": " similar. This idea actually isn't new. This has been used many times in things"
},
{
"start": 1490.52,
"end": 1494.8000000000002,
"text": " like normalizing flows and I want to highlight this paper. Actually want to"
},
{
"start": 1494.8000000000002,
"end": 1501.16,
"text": " highlight specific... I chose this paper because they have these nice diagrams"
},
{
"start": 1501.16,
"end": 1509.5600000000002,
"text": " where they show exactly this. You see they have two sets X1 and X2 that in"
},
{
"start": 1509.56,
"end": 1514.52,
"text": " forward propagation they only update one of them. And then in backward in what's"
},
{
"start": 1514.52,
"end": 1520.44,
"text": " called inverse propagation they can figure out what those were. And they"
},
{
"start": 1520.44,
"end": 1527.32,
"text": " couple these in exactly the same way. Like here this drawing might be even more"
},
{
"start": 1527.32,
"end": 1534.04,
"text": " similar where they alternate between updating the two activations. So you can"
},
{
"start": 1534.04,
"end": 1539.76,
"text": " think of this as a way to simply make the function that you're representing"
},
{
"start": 1539.76,
"end": 1544.68,
"text": " with the neural network invertible. That is a giant constraint on your"
},
{
"start": 1544.68,
"end": 1549.24,
"text": " architecture but these methods here, these normalizing flow methods, use that"
},
{
"start": 1549.24,
"end": 1554.84,
"text": " so they can actually define an invertible layer because they need the"
},
{
"start": 1554.84,
"end": 1562.8799999999999,
"text": " Jacobian inverse in order to compute their normalizing flow. So you see that's"
},
{
"start": 1562.88,
"end": 1569.3600000000001,
"text": " why they originally did it. And I'm sure that that's not a new idea or"
},
{
"start": 1569.3600000000001,
"end": 1576.3600000000001,
"text": " particularly new again. Strangely I haven't found any of the flow"
},
{
"start": 1576.3600000000001,
"end": 1585,
"text": " literature cited. They do cite the reversible residual net paper that they"
},
{
"start": 1585,
"end": 1592.0800000000002,
"text": " probably got the idea from. So with these two things now you can save the"
},
{
"start": 1592.08,
"end": 1599.84,
"text": " giant computation. And you can also not store the forward activations. So"
},
{
"start": 1599.84,
"end": 1612.1599999999999,
"text": " they say they can take now giant giant giant input sizes. You may remember"
},
{
"start": 1612.1599999999999,
"end": 1622,
"text": " transformers like BERT. So BERT it can use something like 512 tokens."
},
{
"start": 1622,
"end": 1628,
"text": " In its input sequence. That means the sequence that you can look at with BERT"
},
{
"start": 1628,
"end": 1634.72,
"text": " at a time is 512 long and not a bit longer. There have been some"
},
{
"start": 1634.72,
"end": 1644.12,
"text": " extensions to that. For example I believe in XL net. So XL net has pushed this to"
},
{
"start": 1644.12,
"end": 1655.1599999999999,
"text": " something like C times 512 where C is a smallish constant. That where you"
},
{
"start": 1655.1599999999999,
"end": 1659.6399999999999,
"text": " can kind of carry over information between sequences. But this thing here"
},
{
"start": 1659.6399999999999,
"end": 1668.04,
"text": " as you can see they calculate it could take up something like 64,000 tokens and"
},
{
"start": 1668.04,
"end": 1675.32,
"text": " that would use in total 16 gigabytes of memory. Which is available on a high-end"
},
{
"start": 1675.32,
"end": 1687,
"text": " GPU. So this is a giant this is a giant step forward in in producing"
},
{
"start": 1687,
"end": 1693.12,
"text": " transformers that can actually take large models. And here you see the memory"
},
{
"start": 1693.12,
"end": 1698.9599999999998,
"text": " and time complexity. You can look at these things yourself but you can see"
},
{
"start": 1698.9599999999998,
"end": 1704.4399999999998,
"text": " maybe here that these squares here from the original transformer they now"
},
{
"start": 1704.4399999999998,
"end": 1710.3999999999999,
"text": " vanish from this. And all of these constants are a lot of these constants"
},
{
"start": 1710.3999999999999,
"end": 1715.12,
"text": " are actually smaller. For example that chunk size is in there instead of kind"
},
{
"start": 1715.12,
"end": 1724.3999999999999,
"text": " of the entire sequence length. So that's basically the the paper. They show that"
},
{
"start": 1724.3999999999999,
"end": 1729.76,
"text": " I can actually input those long sequences. They can apply this to images."
},
{
"start": 1729.76,
"end": 1735.8,
"text": " You see there's image net pixel by pixel which is a lot of pixels and would have"
},
{
"start": 1735.8,
"end": 1742.6799999999998,
"text": " been absolutely unthinkable with one of the original transformers. And with that"
},
{
"start": 1742.68,
"end": 1749.04,
"text": " I invite you to check out the paper and the blog post and I'll see you next time."
},
{
"start": 1749.04,
"end": 1775.84,
"text": " Bye bye."
}
] |
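The angular locality-sensitive hashing walked through in the transcript above (random hyperplanes through the origin, one sign bit per hyperplane, vectors grouped by their sign pattern) can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the Reformer implementation; the function name, the number of planes and the demo vectors are all illustrative.

```python
import numpy as np

def lsh_buckets(vectors, n_planes=2, seed=0):
    """Bucket vectors by the sign pattern of random hyperplane projections.

    vectors: (n, d) array. Vectors with a small angle between them tend to fall
    on the same side of every random hyperplane and therefore to land in the
    same of the 2**n_planes buckets.
    """
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_planes, vectors.shape[1]))  # random normal vectors
    signs = vectors @ planes.T > 0          # (n, n_planes) sign bits, True = "plus side"
    # Read the sign bits as a binary number to get one bucket id per vector.
    return (signs * (2 ** np.arange(n_planes))).sum(axis=1)

# Tiny demo: two nearly parallel vectors and one pointing roughly the other way.
vecs = np.array([[1.0, 0.1], [0.9, 0.2], [-1.0, -0.3]])
print(lsh_buckets(vecs))  # the first two will usually share a bucket id
```

In the Reformer the vectors being hashed are the shared queries and keys; after the sorting and chunking described above, attention is computed only within a chunk and its neighbor, which is where the saving over comparing every query with every key comes from.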
EbFosdOi5SY | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Go-Explore: a New Approach for Hard-Exploration Problems | [
"Science & Technology"
] | [
"machine learning",
"ml",
"reinforcement learning",
"rl",
"ai",
"artificial intelligence",
"uber",
"exploration",
"hard exploration",
"research",
"novelty",
"graph",
"robustify",
"explore",
"montezuma",
"montezuma's revenge",
"pitfall",
"atari"
] | This algorithm solves the hardest games in the Atari suite and makes it look so easy! This modern version of Dijkstra's shortest path algorithm is outperforming everything else by orders of magnitude, and all based on random exploration.
https://arxiv.org/abs/1901.10995
https://eng.uber.com/go-explore/
https://github.com/uber-research/go-explore
Abstract:
A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to improve performance on hard-exploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember previously visited states, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through any available means (including by introducing determinism), then robustify via imitation learning. The combined effect of these principles is a dramatic performance improvement on hard-exploration problems. On Montezuma's Revenge, Go-Explore scores a mean of over 43k points, almost 4 times the previous state of the art. Go-Explore can also harness human-provided domain knowledge and, when augmented with it, scores a mean of over 650k points on Montezuma's Revenge. Its max performance of nearly 18 million surpasses the human world record, meeting even the strictest definition of "superhuman" performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean score of almost 60k points exceeds expert human performance. Because Go-Explore produces high-performing demonstrations automatically and cheaply, it also outperforms imitation learning work where humans provide solution demonstrations. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in many domains, especially those that harness a simulator during training (e.g. robotics).
Authors: Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O. Stanley, Jeff Clune | Hi there, what you're seeing here is the game Montezuma's Revenge and it has been a problem for a long time for reinforcement learning algorithms. What you can see is this little person that has to kind of jump around, collect keys, collect these coins, kind of get over enemies and so on, and all of this is super hard because the reward is so sparse, so sometimes you have to do hundreds of actions until you get the next improvement in score. You can see on the top how your score is increasing and it seems like this algorithm is pretty efficient on this, but keep in mind this algorithm has to learn from just the pixel input. It has to learn every single move of the agent. So if you see here for example jumping over the enemies, stopping when these blue bars come and going down the ladders without hitting the spider, this is a really really hard problem. So far reinforcement learning algorithms have had a very hard time doing this until this algorithm showed up. GoExplore, which was the first one that actually surpassed I believe human experts or widely surpassed human experts at this game, in fact the first reinforcement learning algorithm that without human demonstration could do anything at all at this game. So let's dive in and see how this algorithm does what it does. And the paper to this is called GoExplore, a new approach for hard exploration problems by Adria Ecofe, Joost Huizinga, Joel Lehmann, Kenneth O. Stanley and Jeff Klun from Uber AI Labs. So they break down the problem into what they call two problems. So these hard exploration problems, they say they suffer from two things, detachment and derailment. You can see here detachment and derailment. So they explain those in detail. Detachment and derailment are related to each other. Detachment is when an exploration algorithm that has some sort of intrinsic motivation, right? This is how you usually do these hard exploration problems. You give intrinsic motivation to the agent to explore new things, like in absence of a reward, if there's no reward around, it should just reach some kind of new state. And you give the algorithm points for reaching states that it has never seen before. But this can come to this sort of detachment problem. They illustrate this here. So let's say your algorithm starts actually here in the middle, right? And everything that's green here is intrinsic reward. So you collect the green stuff that gives you points, right? So the goal might actually be in here or in here. But you have to teach the algorithm to go all this way around. And you do that by simply motivating it to go to new states by giving it a reward for every state it hasn't been. So it starts exploring, goes here, and maybe the first episode reaches here right before it is reset, usually reset after, well, like it bounces kind of around, it's like, ah, there's new stuff. And then it goes here and it will explore kind of it. And it will be motivated to explore because there's always this green stuff here. So after a while here, whatever is purple has been explored, right? Recently. So with purple, they mark what has been recently explored. All of this has been recently explored, right? So it is gone until here. But usually you also have like a component that isn't purely seeking this green stuff, but is also doing some kind of random exploration. 
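The "points for states the agent has never seen before" signal mentioned above can be made concrete with a simple count-based bonus. The sketch below is one generic form of such an intrinsic reward, not the specific bonus used by any particular method discussed here; `state_key` is assumed to be some hashable abstraction of the raw observation.

```python
from collections import defaultdict

visit_counts = defaultdict(int)

def intrinsic_reward(state_key):
    """Count-based novelty bonus: unseen or rarely seen states pay out more.

    state_key is assumed to be a hashable abstraction of the observation,
    e.g. a downsampled image (see the cell representation sketched further below).
    """
    visit_counts[state_key] += 1
    return 1.0 / visit_counts[state_key] ** 0.5  # 1 for a brand-new state, decaying afterwards
```

Detachment, illustrated next, is what happens when this bonus has already been paid out along one direction, so nothing pulls the agent back there once it wanders off the other way.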
And so what happens, what can happen in these algorithms is that if you at one of these times you start the episode here, by chance, it actually goes into the other direction. All right. And then it's like, wow, there's all this green stuff over here, right? And then it's like, woo, so much green stuff. Right. And then what usually happens is it kind of forgets that there's green stuff over here. So it explores all of this stuff around here. It explores, explores, explores, but there's no more stuff. And then it's stuck, right? It's stuck here. And it says, where, where am I going to go? Like I know over here, there's no more green stuff. And over here, there doesn't appear to be any green stuff because it's forgotten about this. So this, they claim these intrinsic motivation algorithms, what they can lead to is you can detach from your frontier of new knowledge, right? Like they can forget that there is, that here at one point they were here and the algorithm, what the algorithm did, it was it explored here until here, and then it explored over here. So it thinks that this thing over here is its most recent frontier of knowledge, right? This is, this is my state here. This is where I go explore from, but there is nowhere to explore from, right? What it should remember is that here it actually kind of jumped over by random chance. I hope this makes sense. This is called detachment of intrinsic motivation algorithms. And it happens when you, when you kind of give these points according to simply reaching new states. And then another thing is what they call derailment. And derailment is a bit of a more subtle problem. So in derailment, what happens is maybe you, maybe you've actually, let's say this same situation. You've discovered a promising state, right, by some miracle. Here is the goal, right? You've reached the goal. You've done this by exploration. You've explored a bunch and you've reached the goal. Now the problem is, can you do it again? Right? Especially if the environment is a bit stochastic, right? If there is noise, if the environment isn't always the same, can you actually learn how to do this robustly, like such that you can repeat your success? And in derailment is the problem that often these algorithms, while they find promising things, they kind of struggle to robustly reach those promising states. Go Explorer solves these problems in two separate phases, one for each, basically. So what it does is in a phase one, it explores, right? Explore and this is a crucial part, until solved. So this is an explorer, a method that explores until the problem is solved with the focus on explore, right? And then in stage two, robustify. And by robustify means that if stage one has resulted in trajectories that have solved the game or the environment, then phase two is simply tasked with robustly finding those. So let's look at phase one. Phase one is kind of like, think of Dijkstra's algorithm. So in Dijkstra's algorithm, this is a shortest path algorithm in graphs. So in Dijkstra's algorithm, you have a graph and you want to reach this from the start, let's call this the start. And this is the end or the goal. And the graph is connected with edges. And these edges have usually sometimes they have weights. We can simply, the goal is how to go the shortest path from start to the end. And what Dijkstra's algorithm does, it starts exploring. So it's like it goes here. All right, and then it says, ah, this is a new state. I reached the state in one step. All right, explore some more. 
I reached this state in two steps. And then it's like, I reached a state in three steps. Okay, but I can also go here, I reached this state in one step, in two steps. I've already been here. Okay. But then it can, it can say, okay, from here, I reached this state into this is a bad example. Let's say we actually have to make a shortest path. This is the graph, right? So it reaches this state in two steps, but then it explores this thing. It's like, ah, wait a minute, I've seen this state. But before I've reached it in two steps. Now I'm reaching it in one step. This is better. So this path here is better than this path here. And then it goes on from here. It goes on it says, okay, I'm reaching this goal in two steps. I've reached it in three steps before. So clearly, this bottom path here is better than what I've done before this top or this path. So this is this is what Go Explorer does. In a nutshell, what it does is has an archive of states, right? An archive of states that it has visited previously. And the crucial thing here is, and this is kind of necessary to their algorithm, that this is completely deterministic. So what they actually do is they will save the state of the game emulator, right? They are here, right? And they do some exploration, jumping some until their person is here, their game is in some state, and they will save the emulator to a buffer. This is kind of crucial, such that at a later point, they can select this, this exactly this state that they were in, and from here, run a bunch of explorations again, right? So if they say select state from archive, and then go to that state, this is simply restoring the emulator state. But you could also what you could also do if if this is a purely deterministic environment, you could simply save the sequence of actions that you've done to come here, and simply buy so maybe you gone right, right, and here you jump, and you go right, you can simply replay those to get to the exact same state, they discuss that this can be expanded to also handle a kind of stochastic environments. But in their case, at the phase one, the environment is completely deterministic. So they can do this, they can go, sorry, they can go to a state deterministically. So they'll select a state from an archive, they have an algorithm for selecting kind of promising states. They go to that state, and then they explore from that state and they simply do this random. So this is random. And then they update the archive. So what do they do? Right? So we saw so here, maybe a new graph, so they go to a state, this is their state, and then they explore. Now there, there are multiple things that can happen. One they can encounter a new state, right? New state never seen before. All right, what they do is they save it to the buffer. They say, okay, this new state, let's call it n, this new state, I've reached it in. And here we have done s steps, I've reached an s plus one step. And whatever here is the emulator state that we had before, right? So I can at any point, I can go back. If, however, the state has already been seen, let's call this m, they retrieve m, m prime from the buffer because they've already seen it, it's in the buffer, right? They compare, hey, these steps, so is s prime, is this smaller or larger than s plus one? So basically, I've seen this state before, but using this path, can I reach it in fewer steps than I've reached it before? If yes, then I'm going to replace this, replace this s by s plus one, and then save it again in the buffer. 
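The select, go, explore, update loop just described fits in a few lines. The sketch below makes simplifying assumptions: cells are selected uniformly at random rather than with the paper's weighted heuristic, and `env.save_state`, `env.restore_state` and `cell_fn` are illustrative names rather than the authors' actual API (a possible `cell_fn` based on the downsampling discussed below is sketched after it).

```python
import random

def go_explore_phase1(env, cell_fn, n_iterations=1000, explore_steps=100):
    """Sketch of the exploration phase: archive maps cell -> (steps, emulator state)."""
    obs = env.reset()
    archive = {cell_fn(obs): (0, env.save_state())}

    for _ in range(n_iterations):
        cell = random.choice(list(archive))      # select a state from the archive
        steps, saved = archive[cell]
        env.restore_state(saved)                 # go to that state deterministically

        for _ in range(explore_steps):           # then explore with random actions
            obs, _, done, _ = env.step(env.action_space.sample())
            steps += 1
            c = cell_fn(obs)
            # Keep the cell if it is new, or if this path reaches it in fewer steps.
            if c not in archive or steps < archive[c][0]:
                archive[c] = (steps, env.save_state())
            if done:
                break
    return archive
```

The archive that comes out of this loop contains, for every cell ever visited, the shortest way found so far to reach it, and the best trajectories are what phase two then robustifies.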
All right, so I can, I now have a better path to reach this state than before. So it's almost exactly like Dijkstra's algorithm in that you simply explore and every new state you find you've either already seen, so you just simply have a new way of getting to that state. If you haven't seen it, you simply remember it, and then you do it all again. So you can imagine with time, these number of states in this buffer will explode. And it's not feasible for Montezuma's revenge. Like imagine this game, right? You have to, you have to go everywhere and explore everything, right? This, I mean, every single action here could be a state. That's why, let me pause this. That's why what they do is they, they have to come up with a notion of state that is, doesn't simply include every single game state there is. And what they do is, this is sampled here, they down sample the image. And then this, sorry, I've tried drawing over a blog post, they down sample the image, and then they simply say, all right, so this, this thing would become this thing. And they simply say, okay, if two of these images have the same representation, so grayscale, down sampled, quantized, then they are the same state. And that's kind of the crux of the algorithm I find. So if two things have the same state, then the algorithm is prone to kind of confusing them for each other. It thinks one is the other, not exactly, but it does kind of assume that they are close actually here. But there is a crucial difference between the two. The algorithm will have a very hard time in some situations. I don't want to, like, you can think of, it needs to be kind of convoluted situations, but it can be the kind of crux of the algorithm very much if the state representation isn't done well. And they actually have two methods. One simply relies on this down sampling and the other one, they provide domain knowledge, which means kind of which level you're in, where the player is, and so on. But this is, this is pretty cool. So if you are able, so if, if your reinforcement learning problem, first of all, is deterministic. At least in a simulator. And second, allows for good state representations, kind of for, for low dimensional state representations. If those two things are given, you can use GoExplore. And as I said, this, this representation here is key. So now you know how they do it. They simply explore these states. And if they come on a new state, and every state is, is, is, so we don't mean this here, we actually mean this representation of it, they store it and they remember how to get to it. And simply by exploring like this and having a smart algorithm that picks which state to explore from, which of course is also a lot of domain knowledge, they are able to solve the game, right? So you see, goes way past human expert, and they're, they're able to, to actually perform really well simply by exploring. This is the exploration phase. This is simply random exploration from promising states. And then in the second part, in the second phase, they now robustify it. So now they introduce noise into their environment, right? Because usually environments have noise or some sort of stochasticity, and they run imitation learning on the best trajectories they found. And what that does is, what they do is they have a trajectory, let's say, let's say this is a trajectory, right? These are actions you need to reach this goal state. 
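As a rough illustration of the downsampled, grayscaled, quantized representation described above, the function below could serve as the `cell_fn` in the earlier sketch. The target resolution and the number of gray levels are illustrative guesses, since in the paper these are tuned hyperparameters, and the domain-knowledge variant replaces this entirely with features like the current room and player position.

```python
import numpy as np

def downscale_cell(frame, size=(11, 8), levels=8):
    """Map a raw RGB game frame to a coarse, hashable cell representation.

    Grayscale, downsample to a tiny resolution and quantize to a few gray levels,
    so that many near-identical frames collapse onto the same cell (dict key).
    """
    gray = frame.mean(axis=2)                          # (H, W, 3) -> (H, W)
    h, w = gray.shape
    rows = np.linspace(0, h - 1, size[0]).astype(int)  # crude nearest-neighbour downsample
    cols = np.linspace(0, w - 1, size[1]).astype(int)
    small = gray[np.ix_(rows, cols)]
    quantized = (small / 256.0 * levels).astype(int)   # e.g. 8 gray levels
    return tuple(quantized.flatten())
```

Two frames that map to the same tuple count as the same cell, which is exactly why, as noted above, a poorly chosen representation can make the algorithm confuse situations that actually differ. With that in place, the robustification of a found trajectory works backwards from the goal, as described next.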
This imitation learning algorithm, what they do is they take a few steps back, say here, and they just use imitation learning, which is basically a form of reinforcement learning to reach the goal state from here, simply reach the goal state, right? Once in under noise, right? So you can't just take the exact same actions. Once this has been learned, back up a few more steps, maybe here, and then try to reach the goal state. Now you've already learned how to do this part. So this this bigger part should become should be easier than simply starting from here. And you do that until you've kind of backed up your entire trajectory. This is a well known method from imitation learning. But usually you have usually this red thing is a human demonstration. But now this red trajectory has been found by go explore. It turns out if you have a bunch of these trajectories from go explore, you can do a pretty good job at that. All right, that's basically all that I wanted to say about go explore. It's basically Dijkstra's algorithm. It works under very specific circumstances, but I think it's super promising. And it's kind of a new way of thinking about it. So the video I've shown is actually go explore solving Montezuma's revenge getting like a new high score. And you can see how like skilled this this algorithm becomes. All right, with that, I say goodbye and hope to see you next time. | [
{
"start": 0,
"end": 7.8,
"text": " Hi there, what you're seeing here is the game Montezuma's Revenge and it has been a problem"
},
{
"start": 7.8,
"end": 11.120000000000001,
"text": " for a long time for reinforcement learning algorithms."
},
{
"start": 11.120000000000001,
"end": 17.64,
"text": " What you can see is this little person that has to kind of jump around, collect keys,"
},
{
"start": 17.64,
"end": 25.48,
"text": " collect these coins, kind of get over enemies and so on, and all of this is super hard because"
},
{
"start": 25.48,
"end": 31.16,
"text": " the reward is so sparse, so sometimes you have to do hundreds of actions until you get"
},
{
"start": 31.16,
"end": 33.88,
"text": " the next improvement in score."
},
{
"start": 33.88,
"end": 38.68,
"text": " You can see on the top how your score is increasing and it seems like this algorithm is pretty"
},
{
"start": 38.68,
"end": 45.64,
"text": " efficient on this, but keep in mind this algorithm has to learn from just the pixel input."
},
{
"start": 45.64,
"end": 49.84,
"text": " It has to learn every single move of the agent."
},
{
"start": 49.84,
"end": 56.480000000000004,
"text": " So if you see here for example jumping over the enemies, stopping when these blue bars"
},
{
"start": 56.480000000000004,
"end": 62.760000000000005,
"text": " come and going down the ladders without hitting the spider, this is a really really hard problem."
},
{
"start": 62.760000000000005,
"end": 69.48,
"text": " So far reinforcement learning algorithms have had a very hard time doing this until this"
},
{
"start": 69.48,
"end": 71.04,
"text": " algorithm showed up."
},
{
"start": 71.04,
"end": 79.68,
"text": " GoExplore, which was the first one that actually surpassed I believe human experts or widely"
},
{
"start": 79.68,
"end": 86.48,
"text": " surpassed human experts at this game, in fact the first reinforcement learning algorithm"
},
{
"start": 86.48,
"end": 92.12,
"text": " that without human demonstration could do anything at all at this game."
},
{
"start": 92.12,
"end": 97.16000000000001,
"text": " So let's dive in and see how this algorithm does what it does."
},
{
"start": 97.16000000000001,
"end": 103.04,
"text": " And the paper to this is called GoExplore, a new approach for hard exploration problems"
},
{
"start": 103.04,
"end": 112.04,
"text": " by Adria Ecofe, Joost Huizinga, Joel Lehmann, Kenneth O. Stanley and Jeff Klun from Uber"
},
{
"start": 112.04,
"end": 114.28,
"text": " AI Labs."
},
{
"start": 114.28,
"end": 121.38000000000001,
"text": " So they break down the problem into what they call two problems."
},
{
"start": 121.38000000000001,
"end": 126.52000000000001,
"text": " So these hard exploration problems, they say they suffer from two things, detachment and"
},
{
"start": 126.52000000000001,
"end": 127.86000000000001,
"text": " derailment."
},
{
"start": 127.86000000000001,
"end": 132.96,
"text": " You can see here detachment and derailment."
},
{
"start": 132.96,
"end": 137.72,
"text": " So they explain those in detail."
},
{
"start": 137.72,
"end": 143.12,
"text": " Detachment and derailment are related to each other."
},
{
"start": 143.12,
"end": 150.12,
"text": " Detachment is when an exploration algorithm that has some sort of intrinsic motivation,"
},
{
"start": 150.12,
"end": 151.12,
"text": " right?"
},
{
"start": 151.12,
"end": 153.76000000000002,
"text": " This is how you usually do these hard exploration problems."
},
{
"start": 153.76000000000002,
"end": 160.32,
"text": " You give intrinsic motivation to the agent to explore new things, like in absence of"
},
{
"start": 160.32,
"end": 165.54,
"text": " a reward, if there's no reward around, it should just reach some kind of new state."
},
{
"start": 165.54,
"end": 171.92,
"text": " And you give the algorithm points for reaching states that it has never seen before."
},
{
"start": 171.92,
"end": 177.51999999999998,
"text": " But this can come to this sort of detachment problem."
},
{
"start": 177.51999999999998,
"end": 179.16,
"text": " They illustrate this here."
},
{
"start": 179.16,
"end": 184.6,
"text": " So let's say your algorithm starts actually here in the middle, right?"
},
{
"start": 184.6,
"end": 190.4,
"text": " And everything that's green here is intrinsic reward."
},
{
"start": 190.4,
"end": 193.6,
"text": " So you collect the green stuff that gives you points, right?"
},
{
"start": 193.6,
"end": 197.76,
"text": " So the goal might actually be in here or in here."
},
{
"start": 197.76,
"end": 201.32,
"text": " But you have to teach the algorithm to go all this way around."
},
{
"start": 201.32,
"end": 207.68,
"text": " And you do that by simply motivating it to go to new states by giving it a reward for"
},
{
"start": 207.68,
"end": 209.32,
"text": " every state it hasn't been."
},
{
"start": 209.32,
"end": 214.44,
"text": " So it starts exploring, goes here, and maybe the first episode reaches here right before"
},
{
"start": 214.44,
"end": 219.32,
"text": " it is reset, usually reset after, well, like it bounces kind of around, it's like, ah,"
},
{
"start": 219.32,
"end": 220.32,
"text": " there's new stuff."
},
{
"start": 220.32,
"end": 224.28,
"text": " And then it goes here and it will explore kind of it."
},
{
"start": 224.28,
"end": 229.56,
"text": " And it will be motivated to explore because there's always this green stuff here."
},
{
"start": 229.56,
"end": 234.6,
"text": " So after a while here, whatever is purple has been explored, right?"
},
{
"start": 234.6,
"end": 235.6,
"text": " Recently."
},
{
"start": 235.6,
"end": 237.68,
"text": " So with purple, they mark what has been recently explored."
},
{
"start": 237.68,
"end": 240,
"text": " All of this has been recently explored, right?"
},
{
"start": 240,
"end": 242,
"text": " So it is gone until here."
},
{
"start": 242,
"end": 246.72,
"text": " But usually you also have like a component that isn't purely seeking this green stuff,"
},
{
"start": 246.72,
"end": 249.86,
"text": " but is also doing some kind of random exploration."
},
{
"start": 249.86,
"end": 254.44,
"text": " And so what happens, what can happen in these algorithms is that if you at one of these"
},
{
"start": 254.44,
"end": 260.2,
"text": " times you start the episode here, by chance, it actually goes into the other direction."
},
{
"start": 260.2,
"end": 261.2,
"text": " All right."
},
{
"start": 261.2,
"end": 265.08,
"text": " And then it's like, wow, there's all this green stuff over here, right?"
},
{
"start": 265.08,
"end": 268.16,
"text": " And then it's like, woo, so much green stuff."
},
{
"start": 268.16,
"end": 269.16,
"text": " Right."
},
{
"start": 269.16,
"end": 275.8,
"text": " And then what usually happens is it kind of forgets that there's green stuff over here."
},
{
"start": 275.8,
"end": 278.96000000000004,
"text": " So it explores all of this stuff around here."
},
{
"start": 278.96000000000004,
"end": 283.12,
"text": " It explores, explores, explores, but there's no more stuff."
},
{
"start": 283.12,
"end": 285.32000000000005,
"text": " And then it's stuck, right?"
},
{
"start": 285.32000000000005,
"end": 287.64000000000004,
"text": " It's stuck here."
},
{
"start": 287.64000000000004,
"end": 290.20000000000005,
"text": " And it says, where, where am I going to go?"
},
{
"start": 290.20000000000005,
"end": 294.74,
"text": " Like I know over here, there's no more green stuff."
},
{
"start": 294.74,
"end": 299.24,
"text": " And over here, there doesn't appear to be any green stuff because it's forgotten about"
},
{
"start": 299.24,
"end": 300.24,
"text": " this."
},
{
"start": 300.24,
"end": 304.56,
"text": " So this, they claim these intrinsic motivation algorithms, what they can lead to is you can"
},
{
"start": 304.56,
"end": 308.6,
"text": " detach from your frontier of new knowledge, right?"
},
{
"start": 308.6,
"end": 316.76,
"text": " Like they can forget that there is, that here at one point they were here and the algorithm,"
},
{
"start": 316.76,
"end": 321.88,
"text": " what the algorithm did, it was it explored here until here, and then it explored over"
},
{
"start": 321.88,
"end": 322.88,
"text": " here."
},
{
"start": 322.88,
"end": 331,
"text": " So it thinks that this thing over here is its most recent frontier of knowledge, right?"
},
{
"start": 331,
"end": 332.76,
"text": " This is, this is my state here."
},
{
"start": 332.76,
"end": 336.48,
"text": " This is where I go explore from, but there is nowhere to explore from, right?"
},
{
"start": 336.48,
"end": 342.2,
"text": " What it should remember is that here it actually kind of jumped over by random chance."
},
{
"start": 342.2,
"end": 343.88,
"text": " I hope this makes sense."
},
{
"start": 343.88,
"end": 348.8,
"text": " This is called detachment of intrinsic motivation algorithms."
},
{
"start": 348.8,
"end": 355.16,
"text": " And it happens when you, when you kind of give these points according to simply reaching"
},
{
"start": 355.16,
"end": 357.54,
"text": " new states."
},
{
"start": 357.54,
"end": 361.72,
"text": " And then another thing is what they call derailment."
},
{
"start": 361.72,
"end": 364.96000000000004,
"text": " And derailment is a bit of a more subtle problem."
},
{
"start": 364.96000000000004,
"end": 372.96000000000004,
"text": " So in derailment, what happens is maybe you, maybe you've actually, let's say this same"
},
{
"start": 372.96000000000004,
"end": 374.1,
"text": " situation."
},
{
"start": 374.1,
"end": 379.84000000000003,
"text": " You've discovered a promising state, right, by some miracle."
},
{
"start": 379.84000000000003,
"end": 381.92,
"text": " Here is the goal, right?"
},
{
"start": 381.92,
"end": 383.8,
"text": " You've reached the goal."
},
{
"start": 383.8,
"end": 386.20000000000005,
"text": " You've done this by exploration."
},
{
"start": 386.20000000000005,
"end": 389.24,
"text": " You've explored a bunch and you've reached the goal."
},
{
"start": 389.24,
"end": 392.32000000000005,
"text": " Now the problem is, can you do it again?"
},
{
"start": 392.32000000000005,
"end": 393.32000000000005,
"text": " Right?"
},
{
"start": 393.32000000000005,
"end": 396.08000000000004,
"text": " Especially if the environment is a bit stochastic, right?"
},
{
"start": 396.08000000000004,
"end": 402.42,
"text": " If there is noise, if the environment isn't always the same, can you actually learn how"
},
{
"start": 402.42,
"end": 407.48,
"text": " to do this robustly, like such that you can repeat your success?"
},
{
"start": 407.48,
"end": 414.04,
"text": " And in derailment is the problem that often these algorithms, while they find promising"
},
{
"start": 414.04,
"end": 420.52000000000004,
"text": " things, they kind of struggle to robustly reach those promising states."
},
{
"start": 420.52000000000004,
"end": 427.12,
"text": " Go Explorer solves these problems in two separate phases, one for each, basically."
},
{
"start": 427.12,
"end": 434.72,
"text": " So what it does is in a phase one, it explores, right?"
},
{
"start": 434.72,
"end": 437.68,
"text": " Explore and this is a crucial part, until solved."
},
{
"start": 437.68,
"end": 444.34000000000003,
"text": " So this is an explorer, a method that explores until the problem is solved with the focus"
},
{
"start": 444.34000000000003,
"end": 448.14,
"text": " on explore, right?"
},
{
"start": 448.14,
"end": 452.88,
"text": " And then in stage two, robustify."
},
{
"start": 452.88,
"end": 459.24,
"text": " And by robustify means that if stage one has resulted in trajectories that have solved"
},
{
"start": 459.24,
"end": 467.26,
"text": " the game or the environment, then phase two is simply tasked with robustly finding those."
},
{
"start": 467.26,
"end": 470.54,
"text": " So let's look at phase one."
},
{
"start": 470.54,
"end": 475.94,
"text": " Phase one is kind of like, think of Dijkstra's algorithm."
},
{
"start": 475.94,
"end": 480.86,
"text": " So in Dijkstra's algorithm, this is a shortest path algorithm in graphs."
},
{
"start": 480.86,
"end": 488.6,
"text": " So in Dijkstra's algorithm, you have a graph and you want to reach this from the start,"
},
{
"start": 488.6,
"end": 490.32,
"text": " let's call this the start."
},
{
"start": 490.32,
"end": 493.72,
"text": " And this is the end or the goal."
},
{
"start": 493.72,
"end": 497.66,
"text": " And the graph is connected with edges."
},
{
"start": 497.66,
"end": 500.88,
"text": " And these edges have usually sometimes they have weights."
},
{
"start": 500.88,
"end": 507.12,
"text": " We can simply, the goal is how to go the shortest path from start to the end."
},
{
"start": 507.12,
"end": 510.68,
"text": " And what Dijkstra's algorithm does, it starts exploring."
},
{
"start": 510.68,
"end": 511.88,
"text": " So it's like it goes here."
},
{
"start": 511.88,
"end": 514.52,
"text": " All right, and then it says, ah, this is a new state."
},
{
"start": 514.52,
"end": 516.44,
"text": " I reached the state in one step."
},
{
"start": 516.44,
"end": 518.2,
"text": " All right, explore some more."
},
{
"start": 518.2,
"end": 520.04,
"text": " I reached this state in two steps."
},
{
"start": 520.04,
"end": 523,
"text": " And then it's like, I reached a state in three steps."
},
{
"start": 523,
"end": 528.04,
"text": " Okay, but I can also go here, I reached this state in one step, in two steps."
},
{
"start": 528.04,
"end": 529.5600000000001,
"text": " I've already been here."
},
{
"start": 529.5600000000001,
"end": 530.5600000000001,
"text": " Okay."
},
{
"start": 530.5600000000001,
"end": 537.4,
"text": " But then it can, it can say, okay, from here, I reached this state into this is a bad example."
},
{
"start": 537.4,
"end": 540.44,
"text": " Let's say we actually have to make a shortest path."
},
{
"start": 540.44,
"end": 541.6400000000001,
"text": " This is the graph, right?"
},
{
"start": 541.6400000000001,
"end": 544.6,
"text": " So it reaches this state in two steps, but then it explores this thing."
},
{
"start": 544.6,
"end": 547.5,
"text": " It's like, ah, wait a minute, I've seen this state."
},
{
"start": 547.5,
"end": 550.08,
"text": " But before I've reached it in two steps."
},
{
"start": 550.08,
"end": 552.0600000000001,
"text": " Now I'm reaching it in one step."
},
{
"start": 552.0600000000001,
"end": 553.0600000000001,
"text": " This is better."
},
{
"start": 553.0600000000001,
"end": 557.6400000000001,
"text": " So this path here is better than this path here."
},
{
"start": 557.6400000000001,
"end": 561.2,
"text": " And then it goes on from here."
},
{
"start": 561.2,
"end": 566.2600000000001,
"text": " It goes on it says, okay, I'm reaching this goal in two steps."
},
{
"start": 566.2600000000001,
"end": 567.9000000000001,
"text": " I've reached it in three steps before."
},
{
"start": 567.9,
"end": 574.28,
"text": " So clearly, this bottom path here is better than what I've done before this top or this"
},
{
"start": 574.28,
"end": 575.28,
"text": " path."
},
{
"start": 575.28,
"end": 577.8,
"text": " So this is this is what Go Explorer does."
},
{
"start": 577.8,
"end": 583.48,
"text": " In a nutshell, what it does is has an archive of states, right?"
},
{
"start": 583.48,
"end": 586.68,
"text": " An archive of states that it has visited previously."
},
{
"start": 586.68,
"end": 591.84,
"text": " And the crucial thing here is, and this is kind of necessary to their algorithm, that"
},
{
"start": 591.84,
"end": 593.64,
"text": " this is completely deterministic."
},
{
"start": 593.64,
"end": 601.04,
"text": " So what they actually do is they will save the state of the game emulator, right?"
},
{
"start": 601.04,
"end": 602.64,
"text": " They are here, right?"
},
{
"start": 602.64,
"end": 609.9,
"text": " And they do some exploration, jumping some until their person is here, their game is"
},
{
"start": 609.9,
"end": 617.4,
"text": " in some state, and they will save the emulator to a buffer."
},
{
"start": 617.4,
"end": 623.9599999999999,
"text": " This is kind of crucial, such that at a later point, they can select this, this exactly"
},
{
"start": 623.9599999999999,
"end": 630.8199999999999,
"text": " this state that they were in, and from here, run a bunch of explorations again, right?"
},
{
"start": 630.8199999999999,
"end": 636.18,
"text": " So if they say select state from archive, and then go to that state, this is simply"
},
{
"start": 636.18,
"end": 638.16,
"text": " restoring the emulator state."
},
{
"start": 638.16,
"end": 643.16,
"text": " But you could also what you could also do if if this is a purely deterministic environment,"
},
{
"start": 643.16,
"end": 649.04,
"text": " you could simply save the sequence of actions that you've done to come here, and simply"
},
{
"start": 649.04,
"end": 655.6,
"text": " buy so maybe you gone right, right, and here you jump, and you go right, you can simply"
},
{
"start": 655.6,
"end": 661.88,
"text": " replay those to get to the exact same state, they discuss that this can be expanded to"
},
{
"start": 661.88,
"end": 664.4399999999999,
"text": " also handle a kind of stochastic environments."
},
{
"start": 664.4399999999999,
"end": 670.12,
"text": " But in their case, at the phase one, the environment is completely deterministic."
},
{
"start": 670.12,
"end": 676.82,
"text": " So they can do this, they can go, sorry, they can go to a state deterministically."
},
{
"start": 676.82,
"end": 680.8,
"text": " So they'll select a state from an archive, they have an algorithm for selecting kind"
},
{
"start": 680.8,
"end": 683.2,
"text": " of promising states."
},
{
"start": 683.2,
"end": 688.26,
"text": " They go to that state, and then they explore from that state and they simply do this random."
},
{
"start": 688.26,
"end": 692.5600000000001,
"text": " So this is random."
},
{
"start": 692.5600000000001,
"end": 694.08,
"text": " And then they update the archive."
},
{
"start": 694.08,
"end": 695.6,
"text": " So what do they do?"
},
{
"start": 695.6,
"end": 696.6,
"text": " Right?"
},
{
"start": 696.6,
"end": 704.16,
"text": " So we saw so here, maybe a new graph, so they go to a state, this is their state, and then"
},
{
"start": 704.16,
"end": 706.48,
"text": " they explore."
},
{
"start": 706.48,
"end": 710.9200000000001,
"text": " Now there, there are multiple things that can happen."
},
{
"start": 710.9200000000001,
"end": 713.44,
"text": " One they can encounter a new state, right?"
},
{
"start": 713.44,
"end": 714.96,
"text": " New state never seen before."
},
{
"start": 714.96,
"end": 718.36,
"text": " All right, what they do is they save it to the buffer."
},
{
"start": 718.36,
"end": 724.86,
"text": " They say, okay, this new state, let's call it n, this new state, I've reached it in."
},
{
"start": 724.86,
"end": 729.5600000000001,
"text": " And here we have done s steps, I've reached an s plus one step."
},
{
"start": 729.5600000000001,
"end": 734.12,
"text": " And whatever here is the emulator state that we had before, right?"
},
{
"start": 734.12,
"end": 736.48,
"text": " So I can at any point, I can go back."
},
{
"start": 736.48,
"end": 745.98,
"text": " If, however, the state has already been seen, let's call this m, they retrieve m, m prime"
},
{
"start": 745.98,
"end": 749.32,
"text": " from the buffer because they've already seen it, it's in the buffer, right?"
},
{
"start": 749.32,
"end": 762.24,
"text": " They compare, hey, these steps, so is s prime, is this smaller or larger than s plus one?"
},
{
"start": 762.24,
"end": 770.4000000000001,
"text": " So basically, I've seen this state before, but using this path, can I reach it in fewer"
},
{
"start": 770.4000000000001,
"end": 772.7600000000001,
"text": " steps than I've reached it before?"
},
{
"start": 772.76,
"end": 779.64,
"text": " If yes, then I'm going to replace this, replace this s by s plus one, and then save it again"
},
{
"start": 779.64,
"end": 780.64,
"text": " in the buffer."
},
{
"start": 780.64,
"end": 787.2,
"text": " All right, so I can, I now have a better path to reach this state than before."
},
{
"start": 787.2,
"end": 793.96,
"text": " So it's almost exactly like Dijkstra's algorithm in that you simply explore and every new state"
},
{
"start": 793.96,
"end": 799.72,
"text": " you find you've either already seen, so you just simply have a new way of getting to that"
},
{
"start": 799.72,
"end": 800.76,
"text": " state."
},
{
"start": 800.76,
"end": 806.28,
"text": " If you haven't seen it, you simply remember it, and then you do it all again."
},
{
"start": 806.28,
"end": 816.8,
"text": " So you can imagine with time, these number of states in this buffer will explode."
},
{
"start": 816.8,
"end": 819.3199999999999,
"text": " And it's not feasible for Montezuma's revenge."
},
{
"start": 819.3199999999999,
"end": 820.84,
"text": " Like imagine this game, right?"
},
{
"start": 820.84,
"end": 825.56,
"text": " You have to, you have to go everywhere and explore everything, right?"
},
{
"start": 825.56,
"end": 829.78,
"text": " This, I mean, every single action here could be a state."
},
{
"start": 829.78,
"end": 833.22,
"text": " That's why, let me pause this."
},
{
"start": 833.22,
"end": 840.3199999999999,
"text": " That's why what they do is they, they have to come up with a notion of state that is,"
},
{
"start": 840.3199999999999,
"end": 843.62,
"text": " doesn't simply include every single game state there is."
},
{
"start": 843.62,
"end": 848.54,
"text": " And what they do is, this is sampled here, they down sample the image."
},
{
"start": 848.54,
"end": 855.9599999999999,
"text": " And then this, sorry, I've tried drawing over a blog post, they down sample the image, and"
},
{
"start": 855.96,
"end": 864.72,
"text": " then they simply say, all right, so this, this thing would become this thing."
},
{
"start": 864.72,
"end": 871.52,
"text": " And they simply say, okay, if two of these images have the same representation, so grayscale,"
},
{
"start": 871.52,
"end": 876.22,
"text": " down sampled, quantized, then they are the same state."
},
{
"start": 876.22,
"end": 878.8000000000001,
"text": " And that's kind of the crux of the algorithm I find."
},
{
"start": 878.8000000000001,
"end": 885.26,
"text": " So if two things have the same state, then the algorithm is prone to kind of confusing"
},
{
"start": 885.26,
"end": 886.26,
"text": " them for each other."
},
{
"start": 886.26,
"end": 893.8199999999999,
"text": " It thinks one is the other, not exactly, but it does kind of assume that they are close"
},
{
"start": 893.8199999999999,
"end": 895.46,
"text": " actually here."
},
{
"start": 895.46,
"end": 897.68,
"text": " But there is a crucial difference between the two."
},
{
"start": 897.68,
"end": 902.06,
"text": " The algorithm will have a very hard time in some situations."
},
{
"start": 902.06,
"end": 907.06,
"text": " I don't want to, like, you can think of, it needs to be kind of convoluted situations,"
},
{
"start": 907.06,
"end": 913.4,
"text": " but it can be the kind of crux of the algorithm very much if the state representation isn't"
},
{
"start": 913.4,
"end": 914.4,
"text": " done well."
},
{
"start": 914.4,
"end": 915.6999999999999,
"text": " And they actually have two methods."
},
{
"start": 915.6999999999999,
"end": 920.54,
"text": " One simply relies on this down sampling and the other one, they provide domain knowledge,"
},
{
"start": 920.54,
"end": 927.22,
"text": " which means kind of which level you're in, where the player is, and so on."
},
{
"start": 927.22,
"end": 928.86,
"text": " But this is, this is pretty cool."
},
{
"start": 928.86,
"end": 937.42,
"text": " So if you are able, so if, if your reinforcement learning problem, first of all, is deterministic."
},
{
"start": 937.42,
"end": 944.8199999999999,
"text": " At least in a simulator."
},
{
"start": 944.8199999999999,
"end": 959.1999999999999,
"text": " And second, allows for good state representations, kind of for, for low dimensional state representations."
},
{
"start": 959.1999999999999,
"end": 965.28,
"text": " If those two things are given, you can use GoExplore."
},
{
"start": 965.28,
"end": 968.72,
"text": " And as I said, this, this representation here is key."
},
{
"start": 968.72,
"end": 971.78,
"text": " So now you know how they do it."
},
{
"start": 971.78,
"end": 974.38,
"text": " They simply explore these states."
},
{
"start": 974.38,
"end": 981.28,
"text": " And if they come on a new state, and every state is, is, is, so we don't mean this here,"
},
{
"start": 981.28,
"end": 986.8199999999999,
"text": " we actually mean this representation of it, they store it and they remember how to get"
},
{
"start": 986.8199999999999,
"end": 988.12,
"text": " to it."
},
{
"start": 988.12,
"end": 994.26,
"text": " And simply by exploring like this and having a smart algorithm that picks which state to"
},
{
"start": 994.26,
"end": 1000.54,
"text": " explore from, which of course is also a lot of domain knowledge, they are able to solve"
},
{
"start": 1000.54,
"end": 1002.9,
"text": " the game, right?"
},
{
"start": 1002.9,
"end": 1009.64,
"text": " So you see, goes way past human expert, and they're, they're able to, to actually perform"
},
{
"start": 1009.64,
"end": 1012.6,
"text": " really well simply by exploring."
},
{
"start": 1012.6,
"end": 1014.08,
"text": " This is the exploration phase."
},
{
"start": 1014.08,
"end": 1017.9,
"text": " This is simply random exploration from promising states."
},
{
"start": 1017.9,
"end": 1024.98,
"text": " And then in the second part, in the second phase, they now robustify it."
},
{
"start": 1024.98,
"end": 1029.58,
"text": " So now they introduce noise into their environment, right?"
},
{
"start": 1029.58,
"end": 1035.5,
"text": " Because usually environments have noise or some sort of stochasticity, and they run imitation"
},
{
"start": 1035.5,
"end": 1038.6,
"text": " learning on the best trajectories they found."
},
{
"start": 1038.6,
"end": 1045.1399999999999,
"text": " And what that does is, what they do is they have a trajectory, let's say, let's say this"
},
{
"start": 1045.1399999999999,
"end": 1046.7,
"text": " is a trajectory, right?"
},
{
"start": 1046.7,
"end": 1050.14,
"text": " These are actions you need to reach this goal state."
},
{
"start": 1050.14,
"end": 1054.32,
"text": " This imitation learning algorithm, what they do is they take a few steps back, say here,"
},
{
"start": 1054.32,
"end": 1058.8600000000001,
"text": " and they just use imitation learning, which is basically a form of reinforcement learning"
},
{
"start": 1058.8600000000001,
"end": 1063.66,
"text": " to reach the goal state from here, simply reach the goal state, right?"
},
{
"start": 1063.66,
"end": 1066.02,
"text": " Once in under noise, right?"
},
{
"start": 1066.02,
"end": 1068.72,
"text": " So you can't just take the exact same actions."
},
{
"start": 1068.72,
"end": 1074.5,
"text": " Once this has been learned, back up a few more steps, maybe here, and then try to reach"
},
{
"start": 1074.5,
"end": 1075.96,
"text": " the goal state."
},
{
"start": 1075.96,
"end": 1078.78,
"text": " Now you've already learned how to do this part."
},
{
"start": 1078.78,
"end": 1084.98,
"text": " So this this bigger part should become should be easier than simply starting from here."
},
{
"start": 1084.98,
"end": 1090.66,
"text": " And you do that until you've kind of backed up your entire trajectory."
},
{
"start": 1090.66,
"end": 1094.06,
"text": " This is a well known method from imitation learning."
},
{
"start": 1094.06,
"end": 1099.3,
"text": " But usually you have usually this red thing is a human demonstration."
},
{
"start": 1099.3,
"end": 1103.1000000000001,
"text": " But now this red trajectory has been found by go explore."
},
{
"start": 1103.1,
"end": 1107.62,
"text": " It turns out if you have a bunch of these trajectories from go explore, you can do a"
},
{
"start": 1107.62,
"end": 1110.06,
"text": " pretty good job at that."
},
{
"start": 1110.06,
"end": 1113.74,
"text": " All right, that's basically all that I wanted to say about go explore."
},
{
"start": 1113.74,
"end": 1116.06,
"text": " It's basically Dijkstra's algorithm."
},
{
"start": 1116.06,
"end": 1119.9599999999998,
"text": " It works under very specific circumstances, but I think it's super promising."
},
{
"start": 1119.9599999999998,
"end": 1123.1,
"text": " And it's kind of a new way of thinking about it."
},
{
"start": 1123.1,
"end": 1127.8799999999999,
"text": " So the video I've shown is actually go explore solving Montezuma's revenge getting like a"
},
{
"start": 1127.8799999999999,
"end": 1129.1599999999999,
"text": " new high score."
},
{
"start": 1129.16,
"end": 1136.78,
"text": " And you can see how like skilled this this algorithm becomes."
},
{
"start": 1136.78,
"end": 1163.78,
"text": " All right, with that, I say goodbye and hope to see you next time."
}
] |
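(Editor's note: the phase-one loop described in the Go-Explore transcript above can be sketched in a few lines of Python. This is purely an illustration, not the authors' code; the cell size, the uniform cell-selection rule, the old-gym-style step() return, and the env.clone_state / env.restore_state emulator hooks are all assumptions.)

import random
import numpy as np

def cell(frame, rows=11, cols=8, bins=8):
    # Coarse state representation: grayscale, downsample, quantize the frame.
    gray = np.asarray(frame, dtype=np.float32).mean(axis=2)
    h, w = gray.shape
    small = gray[::max(1, h // rows), ::max(1, w // cols)][:rows, :cols]
    return tuple((small / 256.0 * bins).astype(int).ravel())

def go_explore_step(env, archive, explore_steps=100):
    # Pick a stored cell (here uniformly at random; the paper weights promising cells),
    # restore the emulator to it, then explore with random actions.
    key = random.choice(list(archive.keys()))
    entry = archive[key]
    env.restore_state(entry["emulator_state"])           # assumed emulator hook
    trajectory = list(entry["trajectory"])
    total_return = entry["return"]
    for _ in range(explore_steps):
        action = env.action_space.sample()                # phase one explores randomly
        obs, reward, done, _ = env.step(action)           # assumed gym-style API
        trajectory.append(action)
        total_return += reward
        k = cell(obs)
        seen = archive.get(k)
        # Keep a cell if it is new, or reached in fewer steps, or with a higher score.
        if seen is None or len(trajectory) < len(seen["trajectory"]) or total_return > seen["return"]:
            archive[k] = {"trajectory": list(trajectory),
                          "return": total_return,
                          "emulator_state": env.clone_state()}   # assumed emulator hook
        if done:
            break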
waK7AD-AEyc | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | NeurIPS 19 Poster Session | [
"Science & Technology"
] | [
"machine learning",
"conference",
"posters",
"research",
"bubble"
] | I'm at the poster session and the amount of people here is just crazy | Hi there, we are here at the NeurIPS 2019 poster session, one of the poster sessions specifically. There are two poster sessions a day, three days, so this is day two, the first poster session. It's technically lunchtime, so most people are out, but you can see there's still so many people here. There are about 250 posters in this room, and every poster has a ball of people around it. This is not peak time. Yesterday they didn't even let people into this room. That's the kind of the only reason you come to the conference to actually talk to the people doing the work, but it's almost impossible because they're constantly trying to explain their work to about 20 people at a time, asking any meaningful questions, getting into a conversation is almost impossible. It's about 10 degrees warmer in here than outside. It is sweaty, it smells, it's absolutely beautiful. I don't know, there is a kind of a feeling in the air that this is a bubble, just a sheer amount of people attending this is crazy. I don't know what this looks like in a few years. Maybe this is peak, or maybe it's just going to grow and grow and grow. I don't know. So you can see what it looks like, and maybe I've described well what it feels like to be here. With that, I am going to dive in, and bye bye. | [
{
"start": 0,
"end": 8.540000000000001,
"text": " Hi there, we are here at the NURBS 2019 poster session, one of the poster sessions specifically."
},
{
"start": 8.540000000000001,
"end": 13.76,
"text": " There are two poster sessions a day, three days, so this is day two, the first poster"
},
{
"start": 13.76,
"end": 14.76,
"text": " session."
},
{
"start": 14.76,
"end": 18.02,
"text": " It's technically lunchtime, so most people are out, but you can see there's still so"
},
{
"start": 18.02,
"end": 20.52,
"text": " many people here."
},
{
"start": 20.52,
"end": 27.32,
"text": " There are about 250 posters in this room, and every poster has a ball of people around"
},
{
"start": 27.32,
"end": 29.36,
"text": " it."
},
{
"start": 29.36,
"end": 30.56,
"text": " This is not peak time."
},
{
"start": 30.56,
"end": 37.08,
"text": " Yesterday they didn't even let people into this room."
},
{
"start": 37.08,
"end": 41.04,
"text": " That's the kind of the only reason you come to the conference to actually talk to the"
},
{
"start": 41.04,
"end": 45.68,
"text": " people doing the work, but it's almost impossible because they're constantly trying to explain"
},
{
"start": 45.68,
"end": 58.2,
"text": " their work to about 20 people at a time, asking any meaningful questions, getting into a conversation"
},
{
"start": 58.2,
"end": 61.760000000000005,
"text": " is almost impossible."
},
{
"start": 61.760000000000005,
"end": 65.60000000000001,
"text": " It's about 10 degrees warmer in here than outside."
},
{
"start": 65.60000000000001,
"end": 73.48,
"text": " It is sweaty, it smells, it's absolutely beautiful."
},
{
"start": 73.48,
"end": 82,
"text": " I don't know, there is a kind of a feeling in the air that this is a bubble, just a sheer"
},
{
"start": 82,
"end": 87.60000000000001,
"text": " amount of people attending this is crazy."
},
{
"start": 87.6,
"end": 89.88,
"text": " I don't know what this looks like in a few years."
},
{
"start": 89.88,
"end": 93.96,
"text": " Maybe this is peak, or maybe it's just going to grow and grow and grow."
},
{
"start": 93.96,
"end": 96.39999999999999,
"text": " I don't know."
},
{
"start": 96.39999999999999,
"end": 103,
"text": " So you can see what it looks like, and maybe I've described well what it feels like to"
},
{
"start": 103,
"end": 106,
"text": " be here."
},
{
"start": 106,
"end": 123.72,
"text": " With that, I am going to dive in, and bye bye."
}
] |
RrvC8YW0pT0 | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Reinforcement Learning Upside Down: Don't Predict Rewards -- Just Map Them to Actions | [
"Science & Technology"
] | [
"rl",
"reinforcement learning",
"ai",
"artificial intelligence",
"udrl",
"schmidhuber",
"policy",
"value",
"reward"
] | Schmidhuber thinking outside the box! Upside-Down RL turns RL on its head and constructs a behavior function that uses the desired reward as an input. The new paradigm shows surprising performance compared to classic RL algorithms.
Abstract:
We transform reinforcement learning (RL) into a form of supervised learning (SL) by turning traditional RL on its head, calling this Upside Down RL (UDRL). Standard RL predicts rewards, while UDRL instead uses rewards as task-defining inputs, together with representations of time horizons and other computable functions of historic and desired future data. UDRL learns to interpret these input observations as commands, mapping them to actions (or action probabilities) through SL on past (possibly accidental) experience. UDRL generalizes to achieve high rewards or other goals, through input commands such as: get lots of reward within at most so much time! A separate paper [61] on first experiments with UDRL shows that even a pilot version of UDRL can outperform traditional baseline algorithms on certain challenging RL problems. We also introduce a related simple but general approach for teaching a robot to imitate humans. First videotape humans imitating the robot's current behaviors, then let the robot learn through SL to map the videos (as input commands) to these behaviors, then let it generalize and imitate videos of humans executing previously unknown behavior. This Imitate-Imitator concept may actually explain why biological evolution has resulted in parents who imitate the babbling of their babies.
Author: Juergen Schmidhuber
https://arxiv.org/abs/1912.02875
https://arxiv.org/abs/1912.02877 | He did it! Crazy son of a bitch did it again! What am I talking about? Jürgen Schmidhuber reinforcement learning upside down! New paper just dropped on the verge of the NeurIPS conference being presented at a workshop here. Presenting upside down reinforcement learning. I am pumped for this one, can you tell? It says we transform reinforcement learning into a form of supervised learning by turning traditional RL on its head. Calling this RL-lar. What do we call this? We'll just call it lar. Upside down reinforcement learning. And so this is upside down. Never mind. Okay, let's just check out how it works. So I'm going to give a brief overview before we go into this paper. Alright, so let's say you have a reinforcement learning problem. Let's say an Atari game for example. And in an Atari game you usually have a screen, right? And let's just say you're playing this marine commander. So there's water here, right? And there might be a bunch of... Here's your boat, right? There's a boat, a little boat. There might be a bunch of opponents right here. Fishy fish opponents, fishy fish opponents and so on. And there are a bunch of gold coins like here. That's a big gold coin, right? And you're kind of supposed to, I think you're supposed to like go get air. You have some air meter over here. Whatever. So there's this Atari game, right? You're supposed to get the reward which is this maybe this coin here. And stay alive as long as possible and so on. So this is a classic reinforcement learning problem. And there are various techniques for this. We've looked at a couple of them. And what upside down reinforcement learning does is basically what you do is you want to transform this input to a new representation. Which basically, well, if I can, maybe I can... Let me get this correctly. So then there's this over here and then there's a little fishy, a little fishy here. And there's a coin right here. So what you want to do is basically turn this input on its head like upside down. And so this way is kind of up or down or whatever in this new representation. And if you actually learn on this new representation with pretty the same techniques, it works much better than the classic RL setting. And this is not only for like these Atari games. Like this appears to hold throughout the RL space. So in robotics, like if you have a robot or whatever, this is a robot. It has a square head, as you can tell. You know, it's supposed to like open a door. You've seen this DARPA challenge. This doesn't work, right? But if you just transform this and actually turn the robot upside down, the robot will be able to open the door just fine. And even like if you have a chessboard and there's like a bunch of pieces on it. The problem in this case is you have to simulate this chessboard. And if you turn this around now, basically all the pieces will fall off. So what you need to do is you need to have a simulator that encodes a magnetic chessboard such that the pieces don't fall off. So it's a bit of programming effort. But if you do that... All right, I'm kidding. This is a new paradigm for RL, but it's unfortunately not as good. Someone should try the magnetic chessboard simulator. Upside down RL is a new paradigm for RL where basically the kind of notion of inputs and outputs of the RL algorithm are switched around a bit. So basic ideas here is that you have an RL algorithm that is also fed with a bunch of commands. So in classic RL what you'll have... 
Let's actually go back to this Atari game here, right? In classic RL, an RL algorithm will get the Atari game as a screen as an input and is asked from this to predict a bunch of outputs. So in classic Atari, these are eight actions. I'm going to draw three here, like go to the left, go to the right, or press the button for shoot, right? These are the actions you have and the algorithm is tasked. And there are different versions of this. In policy methods, policy gradient methods, typically the algorithm is tasked with outputting a distribution over these actions. In other methods like value learning, Q learning, the algorithm is tasked with assigning each of these actions a value. So in this situation, going to the left will be worth three in the future. Going to the right will be worth negative one and shooting will be worth zero. So you might want to go with this action here. Now in upside-down reinforcement learning, we've had observation going into the model and the model coming up with the value estimation of the different actions. In upside-down reinforcement learning, you'll have the observation and something else going into the model and the model coming up with an action. And this something else is the key. What you input here is your desire, your future desire. And in this paper, they call it a command. So you'll have a command as an input together with the observations. You basically say, here's my state and I would like to achieve, let's say five reward in the next five reward in the next two time steps, right? Make this happen. Right. This is this is your command going into the model and the model will then try to find actions such that in the next two time steps, you'll get five reward. You can easily see a model that learns this will actually be able to, you know, do various things, including doing the classic RL things like get as much reward as possible in given or in the shortest amount of time, but can also do much more. And in the general sense, the difference is how this is trained now. This model, when you train it, as you can see, you don't it's not trained with in my having in mind kind of only to get the maximum reward. It is trained to be much more a general kind of understanding of the world. I mean, learning what do I need to do to achieve a variety of goals? Specifically, what you want to do to train this is the following. Say you have a method of of moving in the world and collecting traces, right? So you go from state, state one, state two, state three. You go with like your action one, action two. Let's draw action three. The state four. And in each of these, you get a you get rewards, right? Reward one reward to reward three. Now, this in classic RL, this will kind of give you one training example, right? So this is if you consider this to be an episode, this will give you one training example to to run this sequence of actions. Upside down RL, you can actually consider this as many, many training examples. And here's what I mean. So if you, for example, start at state one, you can say, aha, within one time step, one one time step, I go to state two and I have achieved our one rewards by doing action a one. Right. So this now can be an input to your model. Your model could learn if you get as an observation, remember the previous thing as an observation, you get s one as a command. You get I want to achieve in one time step. Are one reward. Right. 
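Just to make that concrete, here is a tiny Python sketch of how one such trace can be chopped into many training examples. This is my own illustration, not code from the paper, the names are made up, and the paper allows more general time windows than the trailing segments used here.

def training_examples(states, actions, rewards):
    # states[t] is the state in which actions[t] was taken and rewards[t] was received.
    # Every later cut-off t2 gives one example: "from states[t], achieve the reward that
    # was actually collected between t and t2, within t2 - t steps" -> actions[t].
    examples = []
    for t in range(len(actions)):
        for t2 in range(t + 1, len(actions) + 1):
            desired_return = sum(rewards[t:t2])
            desired_horizon = t2 - t
            examples.append(((states[t], desired_return, desired_horizon), actions[t]))
    return examples

# The toy trace above, s1 -a1-> s2 -a2-> s3 -a3-> s4 with rewards r1, r2, r3,
# yields 3 + 2 + 1 = 6 supervised examples instead of a single one.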
And you train this goes into the model and the model is trained to say a one given if I am in s one and I do a one, I will achieve that. Right. So you train the model to give a one as an output. And this is valid because in the past you've observed going from s one using a one to a state where you get this this kind of reward in this kind of time. But you can also so you can do all of these single steps. They will all provide individual training examples to your model. Right. But then also you can consider a two step thing. So you can say I'm in state s one and I go I go in two time steps. I have achieved our one plus our two reward by doing actions a one then a two. Right. And a two I'm going to do in parents here because what you want to do is you want to always always consider the action that comes right after where you are now. So again your training sample let me draw this up here. Maybe your training sample would be the following. I am in state s one. This would be my observation. My command would be I would like to achieve in two time steps reward r one plus r two reward. Right. This reward this both goes into the model. Right. You tell the model please given the state s one achieve this reward in this time. And the model is supposed to output a one saying ha in the past I was in this state and I did achieve this goal by using that. So the model is supposed to learn to achieve different goals. Right. So now you can not only train from good episodes right. You can train for any episode any episode usually in classic or you kind of want to focus on the good episodes because you want to maximize your reward. But here you can tell the model hey if you've done something particularly stupid let's say here in s three you done something the a three was particularly stupid gave you. So r three here was really bad reward like a negative five billion trillion. And you can actually train the model to recognize this can be a hey look if you are in s three and within one time step you want to achieve negative five billion billion billion trillion. Reward you all you have to do is action a three right. And then the cool thing now is if you are at evaluation time you actually want the big reward what you'll do is you simply plug in a different command simply in one time step still I'm in state s three in one time step. I want to achieve actually three reward not negative a lot right. And the model will have learned that a three will lead to a situation where you get a lot of negative reward. So the model will be like I'm for sure not going to do a three right. I'm going to do something else here because I have learned to map a three to like this really low reward. So in essence this has connections to things like hindsight experience replay and kind of universal value function where you kind of learn to go from any state to any other state in this. But none of these do none of these have this kind of command what Schmidhuber calls command here as an input to the model. And I think actually this is this is really positive to input this because usually in universal value functions what you would say is let's consider a simple grid world right. Whatever your agent is here and you need to you need to reach a goal that's down here. But you might not be able to learn it because it's super sparse reward and so on. But what you can do is you can learn to reach this position and this position and this position from various positions like go here go from here to here. You can learn to go from here to here. 
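And the behavior function itself can simply be a small network that eats the observation together with the command. Here is a minimal PyTorch sketch with made-up layer sizes, assuming a discrete action space; it illustrates the idea, it is not the architecture from the paper.

import torch
import torch.nn as nn

class BehaviorFunction(nn.Module):
    # Maps (observation, command) to action logits, where the command is the
    # pair (desired return, desired horizon).
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs, desired_return, desired_horizon):
        command = torch.stack([desired_return, desired_horizon], dim=-1)
        return self.net(torch.cat([obs, command], dim=-1))

# At evaluation time you keep the state and just change the command, e.g.
#   logits = bf(obs, torch.tensor([5.0]), torch.tensor([2.0]))   # "5 reward in 2 steps"
#   action = torch.distributions.Categorical(logits=logits).sample()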
And you know in essence you would like it eventually to generalize to all the fields. So you basically learn to go from any position to any other position with your agent with these universal value or universal policy functions having sub goals. But they during that phase where they learn to go from anything to anything they don't they don't necessarily include this reward thing as a as an input. It's more like kind of either a sub goal or like the usual value function will simply approximate the reward. Whereas whereas in this technique we actually have a policy learning we actually output a an action value. Also hindsight experience replay what hindsight experience replay would do in the same situation right. You're here. We might do a videos on this in the future. You're here and you try right. And your agent actually it ends up here right ends up right here. What you can do is you can simply say oh well actually this this was my goal all along and then simply train train your model as if as if this thing here was your goal all along. And not this thing here and treat it as kind of a positive reward for this. At least that's how I understand it. Right. And both of these things are quite different than here where we have this command as input and I do I do like it. So I think this this is very much the basic things here. This it is extra extrapolated to kind of noisy inputs and noisy environments and so on. But this is the basic the basic gist of it. So here you see your you what you will learn is to map all and all is your representation of your input. So the screen for example or the chessboard. And I think also kind of the last action and there were you get in this step plus your horizon and desire. So in how much time you would like to achieve how much reward and then you can also get input some extra goals that you have. And so you can see basically any any episode that you've run in the past will give you a valid training example for this. Your model will simply learn to match the previous experience with the goals that were achieved in the previous experience. So there is lots of lots of generalizations here like how exactly these things are represented. This this time horizon can be a high dimensional object. The desire can be as I understand it somewhat a dimensional object. The extra commands can be like conditionals on these two things. It gets very complicated, but I want to jump ahead to a different paper where so this paper is basically just describing the algorithm. And then the next paper is doing experiments with this. Let's scroll past here. All right. So this paper training agents using up that down reinforcement learning released on the same day, but different authors that have used also made who was also here but have used this to implement a variant of this. And here you see again what I was trying to to explain. So in traditional RL, this especially here Q learning, you'll have this function which gets an observation as input and then Q learning especially. So you also get the action as an input and you're supposed to say for the given observation this particular action has this expected value as a return. Right. That's what I explained at the beginning. That's kind of value based reinforcement learning. Whereas the behavior function here, which would be upside down reinforcement learning gets the observation and a command and will map that to an action. And here again is what we've gone over. This is a bit of a different thing. 
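Training it is then plain supervised learning on those past examples: teach the network to output the action that was actually taken whenever it is given that state and that achieved command. A minimal sketch of one update, assuming the BehaviorFunction and the example format sketched above (again my own illustration, not the paper's code):

import torch.nn.functional as F

def train_step(behavior_fn, optimizer, obs, desired_return, desired_horizon, taken_actions):
    # obs: (batch, obs_dim); desired_return, desired_horizon: (batch,) floats;
    # taken_actions: (batch,) integer indices of the actions that were actually executed.
    logits = behavior_fn(obs, desired_return, desired_horizon)
    loss = F.cross_entropy(logits, taken_actions)   # imitate what was done under that command
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()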
So this agent has apparently run two different episodes. One point it did this sequence of actions and at the other point from the same starting state it did this sequence of action and you can see here on the right all the training samples we can we can derive from this. So we can say from state s 0 right. If I want two return in one time step, I have experienced this in the past right, two return in one time step. All I have to do is take action a one. But if I want one return in one time step, I have to take action a two and you teach your behavior function to learn these things to learn to output these actions with these things here as inputs. And then what you hope of course is that this will generalize that it will learn to generalize that you can say now give me more reward than I have ever seen before right. And it will kind of learn which things correspond to lower reward, which things correspond to higher reward and will be able to extrapolate which things will correspond to even higher reward. Sorry. So they have two algorithms and this is kind of this is reminiscent of the old of the old RL kind of world where you do kind of one algorithm is continuously learning from the experience gathered by another algorithm. So you have one set of algorithms and this even in modern RL this this this is how it's done right. You have two different boxes right. Actually you have probably one box learning the model like this is I'm going to represent this here learner right. And the learner distributes the model to many many machines interacting with the simulators and these machines all they do is run episodes with the learned model and they will send back their experience here. And then the learner can learn from it and then at the end send it again. So. All right here we go. So in each step what we do in order to generate a new episode we don't always want to kind of execute one given policy. What we do is we sample from the end of the replay buffer and the replay buffer is sorted by returns right. So the highest return episodes are on top. So we want to sample the highest return episodes then we want to say maybe some of them are 10 steps long maybe some of them are five steps long and so on. So we set the horizon to be the mean of the length of these right and we set the desired return how much return should be achieved in this time to be sampled from the uniform distribution between M and M plus S and M is the mean and S is the standard deviation of the returns of the selected episodes. So what this means is like here is a bunch of episodes from the start at the same time. Here's a bunch of episodes that I ran right from here is time zero and then time goes on that I ran that had really high returns right. Now I'm going to take the mean time that these episodes ran like this. This is maybe five time steps. So in five time steps I want to achieve now how much reward, now you look at all the rewards that were achieved. This is maybe a distribution that has some mean here like so and then you say I want to achieve a reward between here and one standard deviation higher than here. So right and this this would be the reward you want to achieve. So what you do is you kind of push your learned model to just go a bit beyond what it has seen so far is basically say look you can do this but you can just do a bit more in the same amount of time. Please do this and you hope the model has learned to kind of generalize to do this. 
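As a rough sketch of that sampling step, assuming the replay buffer is simply a list of episodes that each remember their total return and their length (my own simplification of what the paper describes, not the authors' exact procedure):

import random
import statistics

def sample_exploratory_command(replay_buffer, k=25):
    # Take the k highest-return episodes, set the desired horizon to their mean length,
    # and draw the desired return uniformly from [mean, mean + std] of their returns.
    best = sorted(replay_buffer, key=lambda ep: ep["return"], reverse=True)[:k]
    lengths = [ep["length"] for ep in best]
    returns = [ep["return"] for ep in best]
    desired_horizon = statistics.mean(lengths)
    m = statistics.mean(returns)
    s = statistics.stdev(returns) if len(returns) > 1 else 0.0
    desired_return = random.uniform(m, m + s)   # ask for a bit more than seen so far
    return desired_return, desired_horizon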
And if so you will execute these episodes and then these episodes will go back to the learner right. They'll go back to the learner here and the learner will learn from them and hopefully then you can like generalize even more and then you can say I now know how to achieve this bit more reward. Now I can if I run the episode I will achieve even more reward. I can push the model even further right. So at eval time you can always ask the model to produce as much reward as possible in the given time. And of course every episode sent back here is not only one training example as we saw but many many training examples can be derived from these, even beyond what's in this paper. All right. So I think this was a good first shot at describing this algorithm. I hope you get the gist of it. I enjoyed this. A bit of a criticism for me would be that it still doesn't touch the exploration dilemma. So it again deals with kind of incrementally getting better whereas I feel this can easily get stuck in some minimum where it's not possible to do this incremental generalization of the model where you really need a new approach. And that's why games like Montezuma's Revenge are solved using algorithms like Go Explore and not any of the classic algorithms. That being said they have experiments where they show that especially in sparse reward environments they do better than classic RL algorithms. So if you for example here take the lunar lander where A2C beats upside down RL, and I guess you didn't get matplotlib to do the upside down. Well, in other environments upside down RL clearly beats the classic algorithms. And what I like here is they took the lunar lander, in which basically at every time step you get a reward, and they hypothesized. Okay this is really good for these classic algorithms that do reward maximization instead of kind of learning this general behavior function. And what they did is they modified the game such that all the reward is given at the end of the episode. And then you see that upside down RL will actually outperform here the classic things where it's exactly the same game you just get the reward at the end. So upside down RL kind of learns the structure of the world, learns that you get this reward at the end after such and such many time steps. So it will learn please get me zero reward in 50 time steps like no problem. But please get me a thousand rewards in a hundred time steps. No problem. I just go to the end of the episode right. Whereas these pure reward maximization techniques somehow have a harder time to do that. I like this investigation. I like the thinking outside the box. The Schmidhuberism of the paper. It's just all great. It's a great time to be alive and check this out and I'll see you. Bye bye. | [
{
"start": 0,
"end": 6.4,
"text": " He did it! Crazy son of a bitch did it again!"
},
{
"start": 6.4,
"end": 12.8,
"text": " What am I talking about? Jürgen Schmidhuber reinforcement learning upside down!"
},
{
"start": 12.8,
"end": 20.6,
"text": " New paper just dropped on the verge of the NeurIPS conference being presented at a workshop here."
},
{
"start": 20.6,
"end": 26.2,
"text": " Presenting upside down reinforcement learning. I am pumped for this one, can you tell?"
},
{
"start": 26.2,
"end": 35.6,
"text": " It says we transform reinforcement learning into a form of supervised learning by turning traditional RL on its head."
},
{
"start": 35.6,
"end": 42.4,
"text": " Calling this RL-lar. What do we call this? We'll just call it lar."
},
{
"start": 42.4,
"end": 45.4,
"text": " Upside down reinforcement learning."
},
{
"start": 45.4,
"end": 52.6,
"text": " And so this is upside down. Never mind."
},
{
"start": 52.6,
"end": 56.6,
"text": " Okay, let's just check out how it works."
},
{
"start": 56.6,
"end": 62.400000000000006,
"text": " So I'm going to give a brief overview before we go into this paper."
},
{
"start": 62.400000000000006,
"end": 69.8,
"text": " Alright, so let's say you have a reinforcement learning problem. Let's say an Atari game for example."
},
{
"start": 69.8,
"end": 73,
"text": " And in an Atari game you usually have a screen, right?"
},
{
"start": 73,
"end": 79.4,
"text": " And let's just say you're playing this marine commander. So there's water here, right?"
},
{
"start": 79.4,
"end": 84.60000000000001,
"text": " And there might be a bunch of... Here's your boat, right?"
},
{
"start": 84.60000000000001,
"end": 88.4,
"text": " There's a boat, a little boat. There might be a bunch of opponents right here."
},
{
"start": 88.4,
"end": 92.60000000000001,
"text": " Fishy fish opponents, fishy fish opponents and so on."
},
{
"start": 92.60000000000001,
"end": 96.9,
"text": " And there are a bunch of gold coins like here. That's a big gold coin, right?"
},
{
"start": 96.9,
"end": 101.80000000000001,
"text": " And you're kind of supposed to, I think you're supposed to like go get air."
},
{
"start": 101.80000000000001,
"end": 104.30000000000001,
"text": " You have some air meter over here. Whatever."
},
{
"start": 104.30000000000001,
"end": 106.80000000000001,
"text": " So there's this Atari game, right?"
},
{
"start": 106.8,
"end": 111.2,
"text": " You're supposed to get the reward which is this maybe this coin here."
},
{
"start": 111.2,
"end": 114,
"text": " And stay alive as long as possible and so on."
},
{
"start": 114,
"end": 116.5,
"text": " So this is a classic reinforcement learning problem."
},
{
"start": 116.5,
"end": 120.7,
"text": " And there are various techniques for this. We've looked at a couple of them."
},
{
"start": 120.7,
"end": 128.7,
"text": " And what upside down reinforcement learning does is basically what you do is you want to transform this input to a new representation."
},
{
"start": 128.7,
"end": 138.6,
"text": " Which basically, well, if I can, maybe I can... Let me get this correctly."
},
{
"start": 138.6,
"end": 145.7,
"text": " So then there's this over here and then there's a little fishy, a little fishy here."
},
{
"start": 145.7,
"end": 147.89999999999998,
"text": " And there's a coin right here."
},
{
"start": 147.89999999999998,
"end": 153.39999999999998,
"text": " So what you want to do is basically turn this input on its head like upside down."
},
{
"start": 153.4,
"end": 159.1,
"text": " And so this way is kind of up or down or whatever in this new representation."
},
{
"start": 159.1,
"end": 166.4,
"text": " And if you actually learn on this new representation with pretty the same techniques,"
},
{
"start": 166.4,
"end": 169.70000000000002,
"text": " it works much better than the classic RL setting."
},
{
"start": 169.70000000000002,
"end": 172.3,
"text": " And this is not only for like these Atari games."
},
{
"start": 172.3,
"end": 177.5,
"text": " Like this appears to hold throughout the RL space."
},
{
"start": 177.5,
"end": 181.70000000000002,
"text": " So in robotics, like if you have a robot or whatever, this is a robot."
},
{
"start": 181.7,
"end": 184.79999999999998,
"text": " It has a square head, as you can tell."
},
{
"start": 184.79999999999998,
"end": 186.89999999999998,
"text": " You know, it's supposed to like open a door."
},
{
"start": 186.89999999999998,
"end": 190.6,
"text": " You've seen this DARPA challenge. This doesn't work, right?"
},
{
"start": 190.6,
"end": 198.7,
"text": " But if you just transform this and actually turn the robot upside down,"
},
{
"start": 198.7,
"end": 202.1,
"text": " the robot will be able to open the door just fine."
},
{
"start": 202.1,
"end": 207.6,
"text": " And even like if you have a chessboard and there's like a bunch of pieces on it."
},
{
"start": 207.6,
"end": 212.7,
"text": " The problem in this case is you have to simulate this chessboard."
},
{
"start": 212.7,
"end": 218.1,
"text": " And if you turn this around now, basically all the pieces will fall off."
},
{
"start": 218.1,
"end": 224,
"text": " So what you need to do is you need to have a simulator that encodes a magnetic chessboard"
},
{
"start": 224,
"end": 226.7,
"text": " such that the pieces don't fall off."
},
{
"start": 226.7,
"end": 230.4,
"text": " So it's a bit of programming effort. But if you do that..."
},
{
"start": 230.4,
"end": 234.1,
"text": " All right, I'm kidding."
},
{
"start": 234.1,
"end": 240.29999999999998,
"text": " This is a new paradigm for RL, but it's unfortunately not as good."
},
{
"start": 240.29999999999998,
"end": 244.4,
"text": " Someone should try the magnetic chessboard simulator."
},
{
"start": 244.4,
"end": 254,
"text": " Upside down RL is a new paradigm for RL where basically the kind of notion of inputs"
},
{
"start": 254,
"end": 259.5,
"text": " and outputs of the RL algorithm are switched around a bit."
},
{
"start": 259.5,
"end": 272.4,
"text": " So basic ideas here is that you have an RL algorithm that is also fed with a bunch of commands."
},
{
"start": 272.4,
"end": 275.2,
"text": " So in classic RL what you'll have..."
},
{
"start": 275.2,
"end": 279.3,
"text": " Let's actually go back to this Atari game here, right?"
},
{
"start": 279.3,
"end": 285.8,
"text": " In classic RL, an RL algorithm will get the Atari game as a screen as an input"
},
{
"start": 285.8,
"end": 290.3,
"text": " and is asked from this to predict a bunch of outputs."
},
{
"start": 290.3,
"end": 293.5,
"text": " So in classic Atari, these are eight actions."
},
{
"start": 293.5,
"end": 300.3,
"text": " I'm going to draw three here, like go to the left, go to the right, or press the button for shoot, right?"
},
{
"start": 300.3,
"end": 305.5,
"text": " These are the actions you have and the algorithm is tasked."
},
{
"start": 305.5,
"end": 307.5,
"text": " And there are different versions of this."
},
{
"start": 307.5,
"end": 312,
"text": " In policy methods, policy gradient methods, typically the algorithm is tasked"
},
{
"start": 312,
"end": 316,
"text": " with outputting a distribution over these actions."
},
{
"start": 316,
"end": 323.5,
"text": " In other methods like value learning, Q learning, the algorithm is tasked with assigning each of these actions a value."
},
{
"start": 323.5,
"end": 330,
"text": " So in this situation, going to the left will be worth three in the future."
},
{
"start": 330,
"end": 336.2,
"text": " Going to the right will be worth negative one and shooting will be worth zero."
},
{
"start": 336.2,
"end": 342.2,
"text": " So you might want to go with this action here."
},
{
"start": 342.2,
"end": 349.2,
"text": " Now in upside-down reinforcement learning, we've had observation going into the model"
},
{
"start": 349.2,
"end": 355.4,
"text": " and the model coming up with the value estimation of the different actions."
},
{
"start": 355.4,
"end": 363.09999999999997,
"text": " In upside-down reinforcement learning, you'll have the observation and something else going into the model"
},
{
"start": 363.1,
"end": 366.70000000000005,
"text": " and the model coming up with an action."
},
{
"start": 366.70000000000005,
"end": 368.8,
"text": " And this something else is the key."
},
{
"start": 368.8,
"end": 374.1,
"text": " What you input here is your desire, your future desire."
},
{
"start": 374.1,
"end": 377.1,
"text": " And in this paper, they call it a command."
},
{
"start": 377.1,
"end": 380.40000000000003,
"text": " So you'll have a command as an input together with the observations."
},
{
"start": 380.40000000000003,
"end": 386.90000000000003,
"text": " You basically say, here's my state and I would like to achieve,"
},
{
"start": 386.9,
"end": 393.4,
"text": " let's say five reward in the next five reward in the next two time steps, right?"
},
{
"start": 393.4,
"end": 394.59999999999997,
"text": " Make this happen."
},
{
"start": 394.59999999999997,
"end": 400.59999999999997,
"text": " Right. This is this is your command going into the model and the model will then try to find actions"
},
{
"start": 400.59999999999997,
"end": 406.09999999999997,
"text": " such that in the next two time steps, you'll get five reward."
},
{
"start": 406.09999999999997,
"end": 413,
"text": " You can easily see a model that learns this will actually be able to, you know, do various things,"
},
{
"start": 413,
"end": 418.9,
"text": " including doing the classic RL things like get as much reward as possible in given"
},
{
"start": 418.9,
"end": 424.4,
"text": " or in the shortest amount of time, but can also do much more."
},
{
"start": 424.4,
"end": 429.5,
"text": " And in the general sense, the difference is how this is trained now."
},
{
"start": 429.5,
"end": 436.8,
"text": " This model, when you train it, as you can see, you don't it's not trained with in my having in mind"
},
{
"start": 436.8,
"end": 440.1,
"text": " kind of only to get the maximum reward."
},
{
"start": 440.1,
"end": 445.20000000000005,
"text": " It is trained to be much more a general kind of understanding of the world."
},
{
"start": 445.20000000000005,
"end": 452.40000000000003,
"text": " I mean, learning what do I need to do to achieve a variety of goals?"
},
{
"start": 452.40000000000003,
"end": 457.90000000000003,
"text": " Specifically, what you want to do to train this is the following."
},
{
"start": 457.90000000000003,
"end": 464.90000000000003,
"text": " Say you have a method of of moving in the world and collecting traces, right?"
},
{
"start": 464.9,
"end": 472.79999999999995,
"text": " So you go from state, state one, state two, state three."
},
{
"start": 472.79999999999995,
"end": 478.29999999999995,
"text": " You go with like your action one, action two."
},
{
"start": 478.29999999999995,
"end": 481.79999999999995,
"text": " Let's draw action three."
},
{
"start": 481.79999999999995,
"end": 484.5,
"text": " The state four."
},
{
"start": 484.5,
"end": 487.5,
"text": " And in each of these, you get a you get rewards, right?"
},
{
"start": 487.5,
"end": 492,
"text": " Reward one reward to reward three."
},
{
"start": 492,
"end": 498.9,
"text": " Now, this in classic RL, this will kind of give you one training example, right?"
},
{
"start": 498.9,
"end": 508.1,
"text": " So this is if you consider this to be an episode, this will give you one training example to to run this sequence of actions."
},
{
"start": 508.1,
"end": 513.6,
"text": " Upside down RL, you can actually consider this as many, many training examples."
},
{
"start": 513.6,
"end": 515,
"text": " And here's what I mean."
},
{
"start": 515,
"end": 529.5,
"text": " So if you, for example, start at state one, you can say, aha, within one time step, one one time step,"
},
{
"start": 529.5,
"end": 537.3,
"text": " I go to state two and I have achieved our one rewards by doing action a one."
},
{
"start": 537.3,
"end": 538.8,
"text": " Right."
},
{
"start": 538.8,
"end": 541.9,
"text": " So this now can be an input to your model."
},
{
"start": 541.9,
"end": 552,
"text": " Your model could learn if you get as an observation, remember the previous thing as an observation, you get s one as a command."
},
{
"start": 552,
"end": 557.5,
"text": " You get I want to achieve in one time step."
},
{
"start": 557.5,
"end": 560.4,
"text": " Are one reward."
},
{
"start": 560.4,
"end": 570.6999999999999,
"text": " Right. And you train this goes into the model and the model is trained to say a one given if I am in s one"
},
{
"start": 570.7,
"end": 574.3000000000001,
"text": " and I do a one, I will achieve that."
},
{
"start": 574.3000000000001,
"end": 578,
"text": " Right. So you train the model to give a one as an output."
},
{
"start": 578,
"end": 590.8000000000001,
"text": " And this is valid because in the past you've observed going from s one using a one to a state where you get this this kind of reward in this kind of time."
},
{
"start": 590.8000000000001,
"end": 594,
"text": " But you can also so you can do all of these single steps."
},
{
"start": 594,
"end": 598.2,
"text": " They will all provide individual training examples to your model."
},
{
"start": 598.2,
"end": 601.5,
"text": " Right. But then also you can consider a two step thing."
},
{
"start": 601.5,
"end": 609,
"text": " So you can say I'm in state s one and I go I go in two time steps."
},
{
"start": 609,
"end": 618.4000000000001,
"text": " I have achieved our one plus our two reward by doing actions a one then a two."
},
{
"start": 618.4000000000001,
"end": 627.6,
"text": " Right. And a two I'm going to do in parents here because what you want to do is you want to always always consider the action that comes right after where you are now."
},
{
"start": 627.6,
"end": 631.9,
"text": " So again your training sample let me draw this up here."
},
{
"start": 631.9,
"end": 635.2,
"text": " Maybe your training sample would be the following."
},
{
"start": 635.2,
"end": 637.1,
"text": " I am in state s one."
},
{
"start": 637.1,
"end": 638.7,
"text": " This would be my observation."
},
{
"start": 638.7,
"end": 646.6,
"text": " My command would be I would like to achieve in two time steps reward r one plus r two reward."
},
{
"start": 646.6,
"end": 650.3000000000001,
"text": " Right. This reward this both goes into the model."
},
{
"start": 650.3000000000001,
"end": 656.2,
"text": " Right. You tell the model please given the state s one achieve this reward in this time."
},
{
"start": 656.2,
"end": 666.3000000000001,
"text": " And the model is supposed to output a one saying ha in the past I was in this state and I did achieve this goal by using that."
},
{
"start": 666.3000000000001,
"end": 670.2,
"text": " So the model is supposed to learn to achieve different goals."
},
{
"start": 670.2,
"end": 674,
"text": " Right. So now you can not only train from good episodes right."
},
{
"start": 674,
"end": 683.8000000000001,
"text": " You can train for any episode any episode usually in classic or you kind of want to focus on the good episodes because you want to maximize your reward."
},
{
"start": 683.8,
"end": 694.3,
"text": " But here you can tell the model hey if you've done something particularly stupid let's say here in s three you done something the a three was particularly stupid gave you."
},
{
"start": 694.3,
"end": 700.8,
"text": " So r three here was really bad reward like a negative five billion trillion."
},
{
"start": 700.8,
"end": 713.6999999999999,
"text": " And you can actually train the model to recognize this can be a hey look if you are in s three and within one time step you want to achieve negative five billion billion billion trillion."
},
{
"start": 713.7,
"end": 717.7,
"text": " Reward you all you have to do is action a three right."
},
{
"start": 717.7,
"end": 730.6,
"text": " And then the cool thing now is if you are at evaluation time you actually want the big reward what you'll do is you simply plug in a different command simply in one time step still I'm in state s three in one time step."
},
{
"start": 730.6,
"end": 736.3000000000001,
"text": " I want to achieve actually three reward not negative a lot right."
},
{
"start": 736.3,
"end": 744.4,
"text": " And the model will have learned that a three will lead to a situation where you get a lot of negative reward."
},
{
"start": 744.4,
"end": 750.3,
"text": " So the model will be like I'm for sure not going to do a three right."
},
{
"start": 750.3,
"end": 757.4,
"text": " I'm going to do something else here because I have learned to map a three to like this really low reward."
},
{
"start": 757.4,
"end": 772.6,
"text": " So in essence this has connections to things like hindsight experience replay and kind of universal value function where you kind of learn to go from any state to any other state in this."
},
{
"start": 772.6,
"end": 781.5,
"text": " But none of these do none of these have this kind of command what Schmidhuber calls command here as an input to the model."
},
{
"start": 781.5,
"end": 794.2,
"text": " And I think actually this is this is really positive to input this because usually in universal value functions what you would say is let's consider a simple grid world right."
},
{
"start": 794.2,
"end": 801.3,
"text": " Whatever your agent is here and you need to you need to reach a goal that's down here."
},
{
"start": 801.3,
"end": 805.3,
"text": " But you might not be able to learn it because it's super sparse reward and so on."
},
{
"start": 805.3,
"end": 814.8,
"text": " But what you can do is you can learn to reach this position and this position and this position from various positions like go here go from here to here."
},
{
"start": 814.8,
"end": 816.5,
"text": " You can learn to go from here to here."
},
{
"start": 816.5,
"end": 822,
"text": " And you know in essence you would like it eventually to generalize to all the fields."
},
{
"start": 822,
"end": 832.4,
"text": " So you basically learn to go from any position to any other position with your agent with these universal value or universal policy functions having sub goals."
},
{
"start": 832.4,
"end": 842.1999999999999,
"text": " But they during that phase where they learn to go from anything to anything they don't they don't necessarily include this reward thing as a as an input."
},
{
"start": 842.1999999999999,
"end": 853.8,
"text": " It's more like kind of either a sub goal or like the usual value function will simply approximate the reward."
},
{
"start": 853.8,
"end": 862.3,
"text": " Whereas whereas in this technique we actually have a policy learning we actually output a an action value."
},
{
"start": 862.3,
"end": 867.6999999999999,
"text": " Also hindsight experience replay what hindsight experience replay would do in the same situation right."
},
{
"start": 867.6999999999999,
"end": 869.9,
"text": " You're here."
},
{
"start": 869.9,
"end": 872.5999999999999,
"text": " We might do a videos on this in the future."
},
{
"start": 872.5999999999999,
"end": 875.4,
"text": " You're here and you try right."
},
{
"start": 875.4,
"end": 879.6999999999999,
"text": " And your agent actually it ends up here right ends up right here."
},
{
"start": 879.6999999999999,
"end": 892.1999999999999,
"text": " What you can do is you can simply say oh well actually this this was my goal all along and then simply train train your model as if as if this thing here was your goal all along."
},
{
"start": 892.2,
"end": 899,
"text": " And not this thing here and treat it as kind of a positive reward for this."
},
{
"start": 899,
"end": 901.5,
"text": " At least that's how I understand it."
},
{
"start": 901.5,
"end": 902.9000000000001,
"text": " Right."
},
{
"start": 902.9000000000001,
"end": 910.5,
"text": " And both of these things are quite different than here where we have this command as input and I do I do like it."
},
{
"start": 910.5,
"end": 918.9000000000001,
"text": " So I think this this is very much the basic things here."
},
{
"start": 918.9,
"end": 927.5,
"text": " This it is extra extrapolated to kind of noisy inputs and noisy environments and so on."
},
{
"start": 927.5,
"end": 933.1999999999999,
"text": " But this is the basic the basic gist of it."
},
{
"start": 933.1999999999999,
"end": 944.4,
"text": " So here you see your you what you will learn is to map all and all is your representation of your input."
},
{
"start": 944.4,
"end": 947,
"text": " So the screen for example or the chessboard."
},
{
"start": 947,
"end": 953.2,
"text": " And I think also kind of the last action and there were you get in this step plus your horizon and desire."
},
{
"start": 953.2,
"end": 962,
"text": " So in how much time you would like to achieve how much reward and then you can also get input some extra goals that you have."
},
{
"start": 962,
"end": 972.1,
"text": " And so you can see basically any any episode that you've run in the past will give you a valid training example for this."
},
{
"start": 972.1,
"end": 982.9,
"text": " Your model will simply learn to match the previous experience with the goals that were achieved in the previous experience."
},
{
"start": 982.9,
"end": 988.9,
"text": " So there is lots of lots of generalizations here like how exactly these things are represented."
},
{
"start": 988.9,
"end": 991.8000000000001,
"text": " This this time horizon can be a high dimensional object."
},
{
"start": 991.8000000000001,
"end": 996.2,
"text": " The desire can be as I understand it somewhat a dimensional object."
},
{
"start": 996.2,
"end": 1000.7,
"text": " The extra commands can be like conditionals on these two things."
},
{
"start": 1000.7,
"end": 1012.2,
"text": " It gets very complicated, but I want to jump ahead to a different paper where so this paper is basically just describing the algorithm."
},
{
"start": 1012.2,
"end": 1017.4000000000001,
"text": " And then the next paper is doing experiments with this."
},
{
"start": 1017.4000000000001,
"end": 1019,
"text": " Let's scroll past here."
},
{
"start": 1019,
"end": 1019.4000000000001,
"text": " All right."
},
{
"start": 1019.4000000000001,
"end": 1025.8,
"text": " So this paper training agents using up that down reinforcement learning released on the same day,"
},
{
"start": 1025.8,
"end": 1038.1,
"text": " but different authors that have used also made who was also here but have used this to implement a variant of this."
},
{
"start": 1038.1,
"end": 1041.3999999999999,
"text": " And here you see again what I was trying to to explain."
},
{
"start": 1041.3999999999999,
"end": 1051,
"text": " So in traditional RL, this especially here Q learning, you'll have this function which gets an observation as input and then Q learning especially."
},
{
"start": 1051,
"end": 1062.6,
"text": " So you also get the action as an input and you're supposed to say for the given observation this particular action has this expected value as a return."
},
{
"start": 1062.6,
"end": 1062.9,
"text": " Right."
},
{
"start": 1062.9,
"end": 1064.4,
"text": " That's what I explained at the beginning."
},
{
"start": 1064.4,
"end": 1068.4,
"text": " That's kind of value based reinforcement learning."
},
{
"start": 1068.4,
"end": 1079.8,
"text": " Whereas the behavior function here, which would be upside down reinforcement learning gets the observation and a command and will map that to an action."
},
{
"start": 1079.8,
"end": 1082,
"text": " And here again is what we've gone over."
},
{
"start": 1082,
"end": 1083.6,
"text": " This is a bit of a different thing."
},
{
"start": 1083.6,
"end": 1087.3999999999999,
"text": " So this agent has apparently run two different episodes."
},
{
"start": 1087.3999999999999,
"end": 1101.5,
"text": " One point it did this sequence of actions and at the other point from the same starting state it did this sequence of action and you can see here on the right all the training samples we can we can derive from this."
},
{
"start": 1101.5,
"end": 1106.5,
"text": " So we can say from state s 0 right."
},
{
"start": 1106.5,
"end": 1114.3,
"text": " If I want to return in one time step, I have experienced this in the past right to return in one time step."
},
{
"start": 1114.3,
"end": 1117.5,
"text": " All I have to do is take action a one."
},
{
"start": 1117.5,
"end": 1133.5,
"text": " But if I want one return in one time step, I have to take action a two and you teach your behavior function to learn these things to learn to output these actions with these things here as inputs."
},
{
"start": 1133.5,
"end": 1144.7,
"text": " And then what you hope of course is that this will generalize that it will learn to generalize that you can say now give me more reward than I have ever seen before right."
},
{
"start": 1144.7,
"end": 1158.5,
"text": " And it will kind of learn which things correspond to lower reward, which things correspond to higher award and will be able to extrapolate which things will correspond to even higher report reward."
},
{
"start": 1158.5,
"end": 1159.6,
"text": " Sorry."
},
{
"start": 1159.6,
"end": 1180.5,
"text": " So they have two algorithms and this is kind of this is reminiscent of the old of the old RL kind of world where you do kind of one algorithm is continuously learning from the experience gathered by another algorithm."
},
{
"start": 1180.5,
"end": 1185.6999999999998,
"text": " So you have one set of algorithms and this even in modern RL this this this is how it's done right."
},
{
"start": 1185.6999999999998,
"end": 1188.3,
"text": " You have two different boxes right."
},
{
"start": 1188.3,
"end": 1195.7,
"text": " Actually you have probably one box learning the model like this is I'm going to represent this here learner right."
},
{
"start": 1195.7,
"end": 1211.3999999999999,
"text": " And the learner distributes the model to many many machines interacting with the simulators and these machines all they do is run episodes with the learned model and they will send back their experience here."
},
{
"start": 1211.3999999999999,
"end": 1216.6,
"text": " And then the learner can learn from it and then at the end send it again."
},
{
"start": 1216.6,
"end": 1226.6,
"text": " So so."
},
{
"start": 1226.6,
"end": 1230.1,
"text": " All right here we go."
},
{
"start": 1230.1,
"end": 1242.6,
"text": " So in each step what we do in order to to generate a new episode we don't always want to want to kind of execute one given policy."
},
{
"start": 1242.6,
"end": 1249.3999999999999,
"text": " What we do is we sample from the end of the replay buffer and the replay buffer is sorted by returns right."
},
{
"start": 1249.3999999999999,
"end": 1252.1,
"text": " So the highest return episodes are on top."
},
{
"start": 1252.1,
"end": 1261.8,
"text": " So we want to sample the highest return episodes then we want to say maybe some of them are 10 steps long maybe some of them are five steps long and so on."
},
{
"start": 1261.8,
"end": 1286.1,
"text": " So we set the horizon to be the mean of the length of these right and we set the desired return how much return should be achieved in this time to be the unit to sample from the uniform distribution between M and M plus S and M is the mean and S is the standard deviation of the selected episode."
},
{
"start": 1286.1,
"end": 1292.6,
"text": " So so what this means is is like here is a bunch of episodes from the start at the same time."
},
{
"start": 1292.6,
"end": 1305,
"text": " Here's a bunch of episodes that I ran right from here is time zero and then time goes on that I ran that had really high returns right."
},
{
"start": 1305,
"end": 1310.8,
"text": " Now I'm going to take the mean time that these episodes ran like this."
},
{
"start": 1310.8,
"end": 1321.8,
"text": " This is maybe five time steps. So in five time I want to achieve now how much reward now you look at all the rewards that were achieved."
},
{
"start": 1321.8,
"end": 1334.5,
"text": " This is maybe a distribution that has some mean here like so and then you say I want to achieve a reward between here and one standard deviation higher than here."
},
{
"start": 1334.5,
"end": 1353.3,
"text": " So right and this this would be the reward you want to achieve. So what you do is you kind of push your learned model to just go a bit beyond what it has seen so far is basically say look I you can do this but you can just do a bit more in the same amount of time."
},
{
"start": 1353.3,
"end": 1357.4,
"text": " Please do this and you hope the model has learned to kind of generalize to do this."
},
{
"start": 1357.4,
"end": 1365,
"text": " And if so you will execute these episodes and then these episodes will go back to the learner right."
},
{
"start": 1365,
"end": 1377.8000000000002,
"text": " I'll go back to the learner here and the learner will learn from them and hopefully then you can like generalize even more and then you can say I now know how to achieve this bit more reward."
},
{
"start": 1377.8000000000002,
"end": 1381.5,
"text": " Now I can if I run the episode I will achieve even more reward."
},
{
"start": 1381.5,
"end": 1391,
"text": " I can push the model even further right. So at eval time you can always ask the model to produce as much reward as possible in the given time."
},
{
"start": 1391,
"end": 1404.9,
"text": " And of course every episode sent back here is not only one training example as we saw but many many training examples can be derived from these models even beyond what's in what's in this paper."
},
{
"start": 1404.9,
"end": 1413.7,
"text": " All right. So I think this was a good first shot at describing this algorithm. I hope you get the gist of it."
},
{
"start": 1413.7,
"end": 1419.6000000000001,
"text": " I enjoy this a bit of a criticism for me would be it's still kind of doesn't it."
},
{
"start": 1419.6000000000001,
"end": 1422.7,
"text": " So it doesn't touch the exploration dilemma."
},
{
"start": 1422.7,
"end": 1440.7,
"text": " So it again deals with kind of incremental incrementally getting better whereas I feel this can easily get stuck in some minimum where it's not possible to do this incremental generalization of the model where you really need a new approach."
},
{
"start": 1440.7,
"end": 1449.6000000000001,
"text": " And that's why games like Montezuma's Revenge are solved using algorithms like Go Explore and not any of the classic algorithms."
},
{
"start": 1449.6,
"end": 1459.3999999999999,
"text": " That being said they have experiments where they show that especially in sparse reward environments they do better than classic or algorithms."
},
{
"start": 1459.3999999999999,
"end": 1476.3,
"text": " So if you for example here take the lunar lander where A to C beats upside down RL and I guess you didn't get Matt Ploidlip to do the upside down."
},
{
"start": 1476.3,
"end": 1483.5,
"text": " Well the in other in other environments upside down RL clearly beats the classic algorithms."
},
{
"start": 1483.5,
"end": 1492.3,
"text": " And what I like here is they took a lunar lander and which basically at every time step you get a reward in lunar lander and they hypothesized."
},
{
"start": 1492.3,
"end": 1499.3999999999999,
"text": " Okay this is really good for these classic algorithms that do reward maximization instead of kind of learning this general behavior function."
},
{
"start": 1499.3999999999999,
"end": 1504.8,
"text": " And what they did is they modified the game such that all the reward is given at the end of the episode."
},
{
"start": 1504.8,
"end": 1515.8999999999999,
"text": " And then you see that upside down RL will actually outperform here the classic things where it's exactly the same game you just get the reward at the end."
},
{
"start": 1515.8999999999999,
"end": 1523.5,
"text": " So upside down RL kind of learns the structure of the world learns that you get this reward at the end after such and such many time steps."
},
{
"start": 1523.5,
"end": 1529.6,
"text": " So you can it will learn please get me zero reward in 50 time steps like no problem."
},
{
"start": 1529.6,
"end": 1532.5,
"text": " But please get me a thousand rewards in a hundred time steps."
},
{
"start": 1532.5,
"end": 1536.9,
"text": " No problem. I just go to the end of the episode right."
},
{
"start": 1536.9,
"end": 1543.4,
"text": " Whereas these pure reward maximization techniques they don't they somehow have a harder time to do that."
},
{
"start": 1543.4,
"end": 1548.1,
"text": " I like this investigation. I like the thinking outside the box."
},
{
"start": 1548.1,
"end": 1552.4,
"text": " The Schmidhuber ism of the paper. It's just all great."
},
{
"start": 1552.4,
"end": 1562.7,
"text": " It's a great time to be alive and check this out and I'll see you. Bye bye."
}
] |
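A minimal sketch of the exploratory command sampling described in the transcript above: take the highest-return episodes from the replay buffer, set the horizon to their mean length, and draw the desired return uniformly between the mean return M and M plus one standard deviation S. This is an illustrative reconstruction rather than the authors' code; the episode attributes `total_return` and `length` and the function name are made up for the example.

```python
import random
import statistics

def sample_exploratory_command(replay_buffer, k=10):
    """Sketch of the command sampling step for upside down RL described above."""
    # The replay buffer is kept sorted by return; take the k highest-return episodes.
    best = sorted(replay_buffer, key=lambda ep: ep.total_return, reverse=True)[:k]

    # Horizon: mean length of the selected high-return episodes.
    horizon = statistics.mean(ep.length for ep in best)

    # Desired return: uniform sample in [M, M + S], nudging the behavior
    # function slightly beyond what it has achieved so far.
    returns = [ep.total_return for ep in best]
    m = statistics.mean(returns)
    s = statistics.stdev(returns) if len(returns) > 1 else 0.0
    desired_return = random.uniform(m, m + s)

    return horizon, desired_return
```

A new episode would then be generated by conditioning the behavior function on the current observation together with `desired_return` and `horizon`, and the finished episode would be sent back to the learner as described in the transcript.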
Z6ea_AbnnCc | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | NeurIPS 2019 | [
"Science & Technology"
] | [
"machine learning",
"conference",
"ai",
"neurips",
"neurips2019",
"canada",
"research"
] | I'm at the 2019 conference on Neural Information Processing Systems in Vancouver, trying to register, but the line was just so long that I decided to bail :D | Good morning learners! We are here in beautiful Vancouver in Canada and attending the NURIPS conference 2019. Of course one of the largest conferences in machine learning of the year. It's actually there's been a lottery system for the tickets because so many people wanted to register. There were 8,000 people attending I think and it's Sunday morning so even before the conference starts I thought I was smart going really early to register but today is company expo day and I didn't register for that because you know usually companies will make fair bit of fuss about their research online so there's kind of little need to attend that in person you can just catch up later but everyone wants to get in on that and it's it's crazy here like the line starts so you go in here but actually have to go downstairs and the line starts somewhere like way back here underground then you go all line all the way queue all the way up there go there over there up the escalator circle a bunch of times go up some more I guess then you maybe see people all the way over there up until the registration desks that are finally I guess over there I didn't look but it's absolutely crazy these conferences exploding with people from all over the planet I don't even know what kind of the composition is I would be interested how many of them are students of course machine learning departments probably exploding right now with people every company wants to get in on that and I don't know where the trend is going that growth can't continue forever I feel and the it's it's kind of questionable how long we can uphold this how good this is I don't know any of these things I'll just try to get back later going to work a bit now get back later get my ticket and then I hope I can report a bit from the conference over the next few days I can get some good nuggets out of there that said I hope you're doing well and I'll see you later bye bye | [
{
"start": 0,
"end": 7.24,
"text": " Good morning learners! We are here in beautiful Vancouver in Canada and"
},
{
"start": 7.24,
"end": 13.56,
"text": " attending the NURIPS conference 2019. Of course one of the largest conferences"
},
{
"start": 13.56,
"end": 20.04,
"text": " in machine learning of the year. It's actually there's been a lottery system"
},
{
"start": 20.04,
"end": 24.44,
"text": " for the tickets because so many people wanted to register. There were 8,000"
},
{
"start": 24.44,
"end": 29.84,
"text": " people attending I think and it's Sunday morning so even before the conference"
},
{
"start": 29.84,
"end": 34.6,
"text": " starts I thought I was smart going really early to register but today is"
},
{
"start": 34.6,
"end": 39.68,
"text": " company expo day and I didn't register for that because you know usually"
},
{
"start": 39.68,
"end": 45.72,
"text": " companies will make fair bit of fuss about their research online so there's"
},
{
"start": 45.72,
"end": 53.6,
"text": " kind of little need to attend that in person you can just catch up later but"
},
{
"start": 53.6,
"end": 58.84,
"text": " everyone wants to get in on that and it's it's crazy here like the line"
},
{
"start": 58.84,
"end": 62.96,
"text": " starts so you go in here but actually have to go downstairs and the line"
},
{
"start": 62.96,
"end": 67.52000000000001,
"text": " starts somewhere like way back here underground then you go all line all"
},
{
"start": 67.52000000000001,
"end": 71.88000000000001,
"text": " the way queue all the way up there go there over there up the escalator circle"
},
{
"start": 71.88000000000001,
"end": 76.68,
"text": " a bunch of times go up some more I guess then you maybe see people all the way"
},
{
"start": 76.68,
"end": 83.36,
"text": " over there up until the registration desks that are finally I guess over"
},
{
"start": 83.36,
"end": 88.08000000000001,
"text": " there I didn't look but it's absolutely crazy these conferences exploding with"
},
{
"start": 88.08,
"end": 91.96,
"text": " people from all over the planet I don't even know what kind of the composition"
},
{
"start": 91.96,
"end": 96.08,
"text": " is I would be interested how many of them are students of course machine"
},
{
"start": 96.08,
"end": 103.24,
"text": " learning departments probably exploding right now with people every company"
},
{
"start": 103.24,
"end": 107.2,
"text": " wants to get in on that and I don't know where the trend is going that growth"
},
{
"start": 107.2,
"end": 114.6,
"text": " can't continue forever I feel and the it's it's kind of questionable how long"
},
{
"start": 114.6,
"end": 120.75999999999999,
"text": " we can uphold this how good this is I don't know any of these things I'll just"
},
{
"start": 120.75999999999999,
"end": 125.47999999999999,
"text": " try to get back later going to work a bit now get back later get my ticket and"
},
{
"start": 125.47999999999999,
"end": 131.72,
"text": " then I hope I can report a bit from the conference over the next few days I can"
},
{
"start": 131.72,
"end": 139,
"text": " get some good nuggets out of there that said I hope you're doing well and I'll"
},
{
"start": 139,
"end": 144.84,
"text": " see you later bye bye"
}
] |
We20YSAJZSE | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | MuZero: Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model | [
"Science & Technology"
] | [
"ml",
"ai",
"machine learning",
"reinforcement learning",
"deep rl",
"deepmind",
"google",
"alphago",
"alphazero",
"value function",
"policy",
"artificial intelligence",
"rl",
"deep reinforcement learning",
"model-free",
"model-based",
"environment model",
"hidden representation",
"latent state",
"transition",
"chess",
"shogi",
"go",
"atari"
] | MuZero harnesses the power of AlphaZero, but without relying on an accurate environment model. This opens up planning-based reinforcement learning to entirely new domains, where such environment models aren't available. The difference to previous work is that, instead of learning a model predicting future observations, MuZero predicts the future observations' latent representations, and thus learns to only represent things that matter to the task!
Abstract:
Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown. In this work we present the MuZero algorithm which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. MuZero learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy, and the value function. When evaluated on 57 different Atari games - the canonical video game environment for testing AI techniques, in which model-based planning approaches have historically struggled - our new algorithm achieved a new state of the art. When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.
Authors: Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy Lillicrap, David Silver
https://arxiv.org/abs/1911.08265
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there! Today we're looking at mastering Atari Go, Chess and Shogi by planning with a learned model by Julian Schrittweiser and people generally from DeepMind. So this paper is an extension to AlphaZero, the kind of famous algorithm that learned to play Go and Chess simply by playing itself and the kind of cool thing about this model is that it has a learned environment model. So what does this mean? Usually if you have a game such as chess, I believe there is a picture of chess down here, if you have a game such as chess and you want to learn to play it, you need to know the kind of the rules of chess, right? So in chess you have the rules like the pawn can move two or one, right? The bishop can move diagonally and so on. Similarly in Shogi or Go here, you know where you can place the stones and when you win everything is clearly defined. So what you can do is actually you can plan, right? You can now think of okay if I do this opening, right, my opponent could do either this or this or you know this and for each of the three moves I'll have response. So if they do, if they move this pawn, I'll go for like a gambit here and if they move this pawn then I can, you know, move on. Something like this, right? So what in a sense what you have is a tree search. So you start out with the state you're currently in, right? And then your opponent, sorry, this should be your state you're currently in, your opponent has the option of performing any one of these moves. Let's say there are three moves and then from each of these three moves you again have the option of performing any of these moves. And the good thing is in chess you know each exactly what they do. Like if I move my pawn then the new board configuration will be the pawn will no longer be here but here, right? So you know exactly what's going to happen. You can calculate that you have perfect simulator. And other domains you don't have that. For example in Atari all you have in Atari is this screen, right? Maybe you have a little submarine here, right? You have some opponents, right? The opponent, I don't know, what do your opponents look like? Are they fish? I don't even know in this game, right? And you can, I think you can shoot? There's coins to select? I don't know. Okay, in any case and sometimes you need to go up and there is like a health bar. But in essence you only have this screen here, right? You don't have more. And if you press a button you don't exactly know what's going to happen. You don't exactly know what the pixel space will look like as this shot moves forward, right? I guess you could know but you can't use that to plan because the kind of space is too big and your actions may be not clearly predictable. And when you win aren't clearly predictable and there may be randomness. So all of this stuff, usually what people do is here they do use a model-free reinforcement learning. We've had this discussion before. So this would be model-free and while chess here you'd go about model-based. Now what MuZero does is it uses a model-based planning but it learns the model. So it tries to construct a model for this here. It tries to say, okay if I have this screen A here, right? My thing is here and I press the button right then probably my submarine is going to be a bit more to the right. But it doesn't do this exactly. 
So this has been done before and this is what's kind of known as learning an environment model where you map current environment plus action to the next step in the environment, right? And this usually doesn't work too well because you're really trying to generate this entire pixel space here. What the cool thing about MuZero is it doesn't do that. It doesn't predict the next state. What it does predict is a hidden state and let's draw the hidden state as a little cloud here. It predicts a hidden state of the next step and from the hidden state it will predict things like the reward, the policy, the value and then it can use from that hidden state it'll predict the next hidden state. And from that it will again predict the reward. So the base idea is you only predict what you absolutely need to obtain the values that are important for doing reinforcement learning. You're not trying to predict the full environment. You're simply trying to predict whatever is necessary and this here is a learned quantity. Whatever is necessary to predict what your RL model is going to need. So that's the basic gist of it and we'll look at how they do it or how they describe what they're doing. So basically the picture A here is how MuZero plans. So imagine you have a configuration, a current state. This is an observation. This could be a chessboard. This could also be a position in shogi but it could also be a screen in an Atari game or a camera input of a self-driving car and so on. And the first thing it does it encodes that observation using this H here. I believe they call this a representation function. You encode that to this hidden state. Now the hidden state, this is appropriately sized, the hidden state here is supposed to capture everything you need about the state to predict the kind of RL quantities in the future. And you learn this function H which in this case of course is going to be a neural network in order to produce such a state. Now from this state you do two things. First of all you have this function F here and they call this the I don't remember but you have a function to predict the following two quantities. You predict the value function at that state and the value function simply means if you are in this state here, this is now not a true state but a hidden state, but still if you're in this state, in this hidden state that belongs to this observation, then in the future you're going to make this much reward on average with your current policy. That's the value function. So the value function basically tells you how good it is to be in a given state. And then the policy, this is a bit special, the policy is predicting how you would act in this state. Now this is a bit confusing or it was to me when I first learned it because we're going to see over here how a mu0 decides on how to act. Namely it does this entire tree search thing up to a certain depth, right? And then it creates this histogram and from that it produces the action. But in order to produce, to do this tree search, this is exactly this picture A. This is that tree search that is done. And in order to do that you need these p-values because we'll go there in a second, you need these p-values and they cannot themselves again do a tree search, right? That would be like infinite recursion. So what you need is you need kind of an estimate, right? Like if I were, and especially down, it makes more sense, if I were in that state how would I act, right? If I were to do a tree search like this. 
So you simply build a neural network that tells you with one evaluation without having to do the entire tree search down from here how you would act. This doesn't need to be a perfect approximation of how you would actually act but it needs to be good enough, right? So this simply tells you how you would act in that state. And that's important because what we do next is we use this policy to generate this action. And this is a simulated action. This isn't a real action because the real action would go here to the next actual observation. This is a simulated action saying if I'm in this hidden state, right, my policy approximately would be this thing. And so I can sample from that and say my action in that state would be this action. And so now I have a hidden state and an action and from that I can produce the next hidden state. Now of course if I were to apply the action up here to the observation, right, action one, I would get the next observation. And that is exactly how alpha zero works, right? You use your simulator, your perfect simulator, to take the current observation, the current state, with a given action that this policy gives you and you produce the next state. But we don't have a perfect simulator, right? And we don't want to learn a model that predicts the entire state. But what we want to do is we want to predict the following. If we were to take a one here, if, right, we would get an observation, can we predict the result when we would apply the function h to that, right, giving me s prime, right? This is observation prime. So this function h here, which is the function that maps from observation space to hidden space, if we were to apply this to the next hidden, to the next observation, we would obtain some hidden state for that observation. Can we predict that thing? So we need a function that maps from the hidden state given an action, right, to the next hidden state. And that's exactly what what happens down here, right? This function g here maps exactly this hidden state plus the action to the next hidden state. And also, also at the same time, it will predict a reward, right? Because in each step you might get a reward. So each transition here gives you a reward. And we're trying to predict that as well. Not that important, especially for games like chess or shogi, where there's only win or lose at the very end. But they incorporate this here to also be able to play these Atari games and like a broader range of reinforcement learning games. But in essence, that's what it is, right? We're trying to predict the next hidden state. And now we can basically recursively apply this. So from here, I have an idea of what my policy might be in that state, right? My proximate policy, my kind of mini policy that only needs one evaluation. I can sample an action from that policy. And if maybe it's action two here, and I can then predict the next hidden state that I would be in. Also the reward, right? And therefore, using this, I can do like a tree search. So I can simulate future trajectories, right? First, all of these policies, I can sample from them. I can sample from them, giving me different actions so that that will lead me down the tree different routes. So I can simulate future trajectories in this tree. And at the end, I have a pretty good idea. I can do this up to a certain depth, right? I don't have to do it until the very end, I can. 
And then I'll have a pretty good idea of how my immediate the immediate future looks right, which actions lead me to approximately which states and for each state, of course, especially for each bottom state here, I have an estimation of the value of that state. So basically, I can, the easiest thing would simply be to whatever search, how many steps is this? One, no, this is zero. One, two, three steps into the future. And for each of these states, obtain the value v here, v here, v, v, v, v, v. And then I simply pick the action up, the action up here. I'm running out of colors. And simply pick the action up here that will lead me eventually to the highest value state. So that's, we of course, we've not incorporated opponent plays here and so on. But that's the basic idea. You can do this more sophisticated this tree search. And this is a topic that we might cover in a video about AlphaGo or AlphaZero. But in essence, you can do the same thing as AlphaGo or AlphaZero, except if you're not working with the simulator, but you're working with a learned model on the hidden states of the true observations. So B is how you would actually act, right? So for each observation here, we'd say you'd run such a tree search, and you kind of get a histogram over visited actions. And again, we'll skip over that here. But this, this is part of the AlphaZero paper. And you decide on an action. And that will give you a reward and a next observation. And that's how you act. And then you train these things end to end. So you train the networks such that, of course, the reward, you know what the rewards are, right? The reward prediction of G, you know what that should be, right? From given a trajectory and action sequence, you know what the individual reward should be. So that's, you can train G for that. First of all, you can also train to predict the correct value functions like in classic reinforcement learning, you can do like an end step into the future prediction, or you can play until the end sample trajectories and so on. And the policy you predict, you, you predict the policy, your approximate policy to to match your true actions, right? Because your true actions you've generated by doing this entire tree search thing, which is, you know, the your what you're actually going to do. So you're training your approximate policy predictor that you use to run the tree search to match as close as possible to your actual actions, right? This in this fashion. So this policy resulting from hidden state zero should be as close as possible to the action you actually took in the observation that led to hidden state zero. Yeah, so this is how you search, search, act and train using mu zero. And this is pretty, this is it, right? This is the rest is experiments. The rest is simply showing that they can handle these games, they can keep the performance basically of the simulator based alpha zero in, in games. Sorry, where are the results here? Yeah, so in these games in these left hand games, they can keep the performance of alpha zero even exceeded here in go. And remember, they don't have a simulator like alpha zero, they have to learn this model. And in Atari, they actually out compete the current state of the art, which is I think, or to D two, or Impala. But it's it's some model, I guess some model free RL baseline here on the on Atari. So that's pretty cool. And I think that brings RL to kind of a new level with this hidden learning. 
And yeah, they so they compare it against against multiple ones are two D two different things. All right. Yeah, so that's that's that. For me, it's a cool paper. It's short. Read it if you if you want. I invite you to also look at the additional experiments where they basically ablate what they need is the learned model really as good or better as the real simulator? Does it take as much time actually takes less time, which for for higher elo, which is pretty cool. How many simulations are needed? Things like this. All right, that was it. I like this paper, check it out. Bye bye. | [
{
"start": 0,
"end": 5.82,
"text": " Hi there! Today we're looking at mastering Atari Go, Chess and Shogi by"
},
{
"start": 5.82,
"end": 12.120000000000001,
"text": " planning with a learned model by Julian Schrittweiser and people generally from"
},
{
"start": 12.120000000000001,
"end": 21.32,
"text": " DeepMind. So this paper is an extension to AlphaZero, the kind of famous"
},
{
"start": 21.32,
"end": 29.400000000000002,
"text": " algorithm that learned to play Go and Chess simply by playing itself and the"
},
{
"start": 29.4,
"end": 35.48,
"text": " kind of cool thing about this model is that it has a learned environment"
},
{
"start": 35.48,
"end": 40.92,
"text": " model. So what does this mean? Usually if you have a game such as chess, I believe"
},
{
"start": 40.92,
"end": 45.9,
"text": " there is a picture of chess down here, if you have a game such as chess and you"
},
{
"start": 45.9,
"end": 50.08,
"text": " want to learn to play it, you need to know the kind of the rules of chess,"
},
{
"start": 50.08,
"end": 58.16,
"text": " right? So in chess you have the rules like the pawn can move two or one, right?"
},
{
"start": 58.16,
"end": 65.11999999999999,
"text": " The bishop can move diagonally and so on. Similarly in Shogi or Go here, you know"
},
{
"start": 65.11999999999999,
"end": 70.6,
"text": " where you can place the stones and when you win everything is clearly defined."
},
{
"start": 70.6,
"end": 76.12,
"text": " So what you can do is actually you can plan, right? You can now think of"
},
{
"start": 76.12,
"end": 83.88,
"text": " okay if I do this opening, right, my opponent could do either this or"
},
{
"start": 83.88,
"end": 91.72,
"text": " this or you know this and for each of the three moves I'll have response. So if"
},
{
"start": 91.72,
"end": 98.84,
"text": " they do, if they move this pawn, I'll go for like a gambit here and if they move"
},
{
"start": 98.84,
"end": 106.39999999999999,
"text": " this pawn then I can, you know, move on. Something like this, right? So what in a"
},
{
"start": 106.39999999999999,
"end": 110.36,
"text": " sense what you have is a tree search. So you start out with the state you're"
},
{
"start": 110.36,
"end": 115.76,
"text": " currently in, right? And then your opponent, sorry, this should be your"
},
{
"start": 115.76,
"end": 120.68,
"text": " state you're currently in, your opponent has the option of performing any one of"
},
{
"start": 120.68,
"end": 125.6,
"text": " these moves. Let's say there are three moves and then from each of these three"
},
{
"start": 125.6,
"end": 131.24,
"text": " moves you again have the option of performing any of these moves. And the"
},
{
"start": 131.24,
"end": 137,
"text": " good thing is in chess you know each exactly what they do. Like if I move my"
},
{
"start": 137,
"end": 144.52,
"text": " pawn then the new board configuration will be the pawn will no longer be here"
},
{
"start": 144.52,
"end": 148.8,
"text": " but here, right? So you know exactly what's going to happen. You can calculate"
},
{
"start": 148.8,
"end": 154.56,
"text": " that you have perfect simulator. And other domains you don't have that. For example"
},
{
"start": 154.56,
"end": 162.2,
"text": " in Atari all you have in Atari is this screen, right? Maybe you have a"
},
{
"start": 162.2,
"end": 168.6,
"text": " little submarine here, right? You have some opponents, right? The"
},
{
"start": 168.6,
"end": 174.92,
"text": " opponent, I don't know, what do your opponents look like? Are they fish? I don't even know in this"
},
{
"start": 174.92,
"end": 181.23999999999998,
"text": " game, right? And you can, I think you can shoot? There's coins to select? I don't"
},
{
"start": 181.23999999999998,
"end": 185.6,
"text": " know. Okay, in any case and sometimes you need to go up and there is like a health"
},
{
"start": 185.6,
"end": 192.92,
"text": " bar. But in essence you only have this screen here, right? You don't"
},
{
"start": 192.92,
"end": 199.72,
"text": " have more. And if you press a button you don't"
},
{
"start": 199.72,
"end": 203.32,
"text": " exactly know what's going to happen. You don't exactly know what the pixel space"
},
{
"start": 203.32,
"end": 210,
"text": " will look like as this shot moves forward, right? I guess you could know but"
},
{
"start": 210,
"end": 215.76,
"text": " you can't use that to plan because the kind of space is too big and"
},
{
"start": 215.76,
"end": 221.32,
"text": " your actions may be not clearly predictable. And when you win aren't"
},
{
"start": 221.32,
"end": 226.08,
"text": " clearly predictable and there may be randomness. So all of this stuff, usually"
},
{
"start": 226.08,
"end": 229.6,
"text": " what people do is here they do use a model-free reinforcement learning. We've"
},
{
"start": 229.6,
"end": 237.72,
"text": " had this discussion before. So this would be model-free and while chess"
},
{
"start": 237.72,
"end": 248.16,
"text": " here you'd go about model-based. Now what MuZero does is it uses a model-based"
},
{
"start": 248.16,
"end": 254.96,
"text": " planning but it learns the model. So it tries to construct a model for this here."
},
{
"start": 254.96,
"end": 261.48,
"text": " It tries to say, okay if I have this screen A here, right? My thing is here and"
},
{
"start": 261.48,
"end": 270,
"text": " I press the button right then probably my submarine is going to be a bit more to"
},
{
"start": 270,
"end": 276.32,
"text": " the right. But it doesn't do this exactly. So this has been done before and this is"
},
{
"start": 276.32,
"end": 281.40000000000003,
"text": " what's kind of known as learning an environment model where you map current"
},
{
"start": 281.40000000000003,
"end": 288.68,
"text": " environment plus action to the next step in the environment, right? And this"
},
{
"start": 288.68,
"end": 294.44,
"text": " usually doesn't work too well because you're really trying to generate this"
},
{
"start": 294.44,
"end": 300.44,
"text": " entire pixel space here. What the cool thing about MuZero is it doesn't do that."
},
{
"start": 300.44,
"end": 306.04,
"text": " It doesn't predict the next state. What it does predict is a hidden state and"
},
{
"start": 306.04,
"end": 310.64,
"text": " let's draw the hidden state as a little cloud here. It predicts a hidden"
},
{
"start": 310.64,
"end": 315,
"text": " state of the next step and from the hidden state it will predict things like"
},
{
"start": 315,
"end": 323.04,
"text": " the reward, the policy, the value and then it can use from that hidden state it'll"
},
{
"start": 323.04,
"end": 329.2,
"text": " predict the next hidden state. And from that it will again predict the"
},
{
"start": 329.2,
"end": 334.76,
"text": " reward. So the base idea is you only predict what you"
},
{
"start": 334.76,
"end": 341.48,
"text": " absolutely need to obtain the values that are important for doing reinforcement"
},
{
"start": 341.48,
"end": 346.64000000000004,
"text": " learning. You're not trying to predict the full environment. You're simply trying"
},
{
"start": 346.64000000000004,
"end": 351.56,
"text": " to predict whatever is necessary and this here is a learned quantity. Whatever"
},
{
"start": 351.56,
"end": 358.04,
"text": " is necessary to predict what your RL model is going to need. So"
},
{
"start": 358.04,
"end": 367,
"text": " that's the basic gist of it and we'll look at how they do it or how"
},
{
"start": 367,
"end": 374.12,
"text": " they describe what they're doing. So basically the picture A here is how MuZero"
},
{
"start": 374.12,
"end": 380.16,
"text": " plans. So imagine you have a configuration, a current state. This is an"
},
{
"start": 380.16,
"end": 384.56,
"text": " observation. This could be a chessboard. This could also be a position in"
},
{
"start": 384.56,
"end": 389.8,
"text": " shogi but it could also be a screen in an Atari game or a camera input of a"
},
{
"start": 389.8,
"end": 394.92,
"text": " self-driving car and so on. And the first thing it does it encodes that"
},
{
"start": 394.92,
"end": 400.88,
"text": " observation using this H here. I believe they call this a representation"
},
{
"start": 400.88,
"end": 408.08000000000004,
"text": " function. You encode that to this hidden state. Now the hidden state, this is"
},
{
"start": 408.08000000000004,
"end": 416.88,
"text": " appropriately sized, the hidden state here is supposed to capture everything"
},
{
"start": 416.88,
"end": 422.56,
"text": " you need about the state to predict the kind of RL quantities in the future."
},
{
"start": 422.56,
"end": 428.2,
"text": " And you learn this function H which in this case of course is going to be a"
},
{
"start": 428.2,
"end": 434.48,
"text": " neural network in order to produce such a state. Now from this state you do two"
},
{
"start": 434.48,
"end": 440.72,
"text": " things. First of all you have this function F here and they call this the"
},
{
"start": 440.72,
"end": 446.36,
"text": " I don't remember but you have a function to predict the following two quantities."
},
{
"start": 446.36,
"end": 452.2,
"text": " You predict the value function at that state and the value function simply"
},
{
"start": 452.2,
"end": 458.15999999999997,
"text": " means if you are in this state here, this is now not a true state but a"
},
{
"start": 458.15999999999997,
"end": 463.47999999999996,
"text": " hidden state, but still if you're in this state, in this hidden state that belongs"
},
{
"start": 463.47999999999996,
"end": 471.96,
"text": " to this observation, then in the future you're going to make this much reward on"
},
{
"start": 471.96,
"end": 476.64,
"text": " average with your current policy. That's the value function. So the value"
},
{
"start": 476.64,
"end": 481.52,
"text": " function basically tells you how good it is to be in a given state. And"
},
{
"start": 481.52,
"end": 490.03999999999996,
"text": " then the policy, this is a bit special, the policy is predicting how you would"
},
{
"start": 490.03999999999996,
"end": 495.52,
"text": " act in this state. Now this is a bit confusing or it was to me when I"
},
{
"start": 495.52,
"end": 502.76,
"text": " first learned it because we're going to see over here how a mu0 decides on how"
},
{
"start": 502.76,
"end": 507.84,
"text": " to act. Namely it does this entire tree search thing up to a certain depth, right?"
},
{
"start": 507.84,
"end": 512.64,
"text": " And then it creates this histogram and from that it produces the action. But in"
},
{
"start": 512.64,
"end": 518.64,
"text": " order to produce, to do this tree search, this is exactly this picture A. This is"
},
{
"start": 518.64,
"end": 524.24,
"text": " that tree search that is done. And in order to do that you need these p-values"
},
{
"start": 524.24,
"end": 530.36,
"text": " because we'll go there in a second, you need these p-values and they cannot"
},
{
"start": 530.36,
"end": 535.24,
"text": " themselves again do a tree search, right? That would be like infinite recursion. So"
},
{
"start": 535.24,
"end": 542.08,
"text": " what you need is you need kind of an estimate, right? Like if I were, and"
},
{
"start": 542.08,
"end": 549.8,
"text": " especially down, it makes more sense, if I were in that state how would I"
},
{
"start": 549.8,
"end": 554.76,
"text": " act, right? If I were to do a tree search like this. So you simply build a neural"
},
{
"start": 554.76,
"end": 559.36,
"text": " network that tells you with one evaluation without having to do the"
},
{
"start": 559.36,
"end": 565.36,
"text": " entire tree search down from here how you would act. This doesn't need to be a"
},
{
"start": 565.36,
"end": 570.64,
"text": " perfect approximation of how you would actually act but it needs to be good"
},
{
"start": 570.64,
"end": 575.08,
"text": " enough, right? So this simply tells you how you would act in that state. And"
},
{
"start": 575.08,
"end": 581.5600000000001,
"text": " that's important because what we do next is we use this policy to generate this"
},
{
"start": 581.5600000000001,
"end": 586.16,
"text": " action. And this is a simulated action. This isn't a real action because the"
},
{
"start": 586.16,
"end": 590.48,
"text": " real action would go here to the next actual observation. This is a simulated"
},
{
"start": 590.48,
"end": 597.12,
"text": " action saying if I'm in this hidden state, right, my policy approximately"
},
{
"start": 597.12,
"end": 602.88,
"text": " would be this thing. And so I can sample from that and say my action in that"
},
{
"start": 602.88,
"end": 609.8,
"text": " state would be this action. And so now I have a hidden state and an action and"
},
{
"start": 609.8,
"end": 615.48,
"text": " from that I can produce the next hidden state. Now of course if I were to apply"
},
{
"start": 615.48,
"end": 620.72,
"text": " the action up here to the observation, right, action one, I would get the next"
},
{
"start": 620.72,
"end": 627,
"text": " observation. And that is exactly how alpha zero works, right? You use your"
},
{
"start": 627,
"end": 632.04,
"text": " simulator, your perfect simulator, to take the current observation, the current"
},
{
"start": 632.04,
"end": 637.9200000000001,
"text": " state, with a given action that this policy gives you and you produce the"
},
{
"start": 637.9200000000001,
"end": 641.48,
"text": " next state. But we don't have a perfect simulator, right? And we don't want to"
},
{
"start": 641.48,
"end": 646.44,
"text": " learn a model that predicts the entire state. But what we want to do is we want"
},
{
"start": 646.44,
"end": 654.04,
"text": " to predict the following. If we were to take a one here, if, right, we would get"
},
{
"start": 654.04,
"end": 663.28,
"text": " an observation, can we predict the result when we would apply the function h to"
},
{
"start": 663.28,
"end": 669.64,
"text": " that, right, giving me s prime, right? This is observation prime. So this"
},
{
"start": 669.64,
"end": 674.28,
"text": " function h here, which is the function that maps from observation space to"
},
{
"start": 674.28,
"end": 680.3199999999999,
"text": " hidden space, if we were to apply this to the next hidden, to the next observation,"
},
{
"start": 680.3199999999999,
"end": 688.3199999999999,
"text": " we would obtain some hidden state for that observation. Can we predict that"
},
{
"start": 688.3199999999999,
"end": 694.96,
"text": " thing? So we need a function that maps from the hidden state given an action,"
},
{
"start": 694.96,
"end": 701.2800000000001,
"text": " right, to the next hidden state. And that's exactly what what happens down"
},
{
"start": 701.2800000000001,
"end": 708.0400000000001,
"text": " here, right? This function g here maps exactly this hidden state plus the"
},
{
"start": 708.0400000000001,
"end": 717.84,
"text": " action to the next hidden state. And also, also at the same time, it will predict a"
},
{
"start": 717.84,
"end": 723.08,
"text": " reward, right? Because in each step you might get a reward. So each transition"
},
{
"start": 723.08,
"end": 727.32,
"text": " here gives you a reward. And we're trying to predict that as well. Not that"
},
{
"start": 727.32,
"end": 731.08,
"text": " important, especially for games like chess or shogi, where there's only win"
},
{
"start": 731.08,
"end": 735.1600000000001,
"text": " or lose at the very end. But they incorporate this here to also be able to"
},
{
"start": 735.1600000000001,
"end": 739.44,
"text": " play these Atari games and like a broader range of reinforcement learning"
},
{
"start": 739.44,
"end": 744.2,
"text": " games. But in essence, that's what it is, right? We're trying to predict the next"
},
{
"start": 744.2,
"end": 748.2800000000001,
"text": " hidden state. And now we can basically recursively apply this. So from here, I"
},
{
"start": 748.28,
"end": 754.28,
"text": " have an idea of what my policy might be in that state, right? My proximate policy,"
},
{
"start": 754.28,
"end": 761.28,
"text": " my kind of mini policy that only needs one evaluation. I can sample an action"
},
{
"start": 761.28,
"end": 766.68,
"text": " from that policy. And if maybe it's action two here, and I can then predict"
},
{
"start": 766.68,
"end": 774.76,
"text": " the next hidden state that I would be in. Also the reward, right? And therefore,"
},
{
"start": 774.76,
"end": 780.84,
"text": " using this, I can do like a tree search. So I can simulate future trajectories,"
},
{
"start": 780.84,
"end": 787.36,
"text": " right? First, all of these policies, I can sample from them. I can sample"
},
{
"start": 787.36,
"end": 791.92,
"text": " from them, giving me different actions so that that will lead me down"
},
{
"start": 791.92,
"end": 797.4399999999999,
"text": " the tree different routes. So I can simulate future trajectories in this"
},
{
"start": 797.4399999999999,
"end": 802.36,
"text": " tree. And at the end, I have a pretty good idea. I can do this up to a certain"
},
{
"start": 802.36,
"end": 807.64,
"text": " depth, right? I don't have to do it until the very end, I can. And then I'll have a"
},
{
"start": 807.64,
"end": 815.2,
"text": " pretty good idea of how my immediate the immediate future looks right, which"
},
{
"start": 815.2,
"end": 820.36,
"text": " actions lead me to approximately which states and for each state, of course,"
},
{
"start": 820.36,
"end": 824.2,
"text": " especially for each bottom state here, I have an estimation of the value of that"
},
{
"start": 824.2,
"end": 829.4,
"text": " state. So basically, I can, the easiest thing would simply be to whatever"
},
{
"start": 829.4,
"end": 837.68,
"text": " search, how many steps is this? One, no, this is zero. One, two, three steps into"
},
{
"start": 837.68,
"end": 844.6,
"text": " the future. And for each of these states, obtain the value v here, v here, v, v, v,"
},
{
"start": 844.6,
"end": 850.4399999999999,
"text": " v, v. And then I simply pick the action up, the action up here. I'm running out"
},
{
"start": 850.4399999999999,
"end": 855.84,
"text": " of colors. And simply pick the action up here that will lead me eventually to the"
},
{
"start": 855.84,
"end": 864.24,
"text": " highest value state. So that's, we of course, we've not incorporated opponent"
},
{
"start": 864.24,
"end": 868.2800000000001,
"text": " plays here and so on. But that's the basic idea. You can do this more"
},
{
"start": 868.2800000000001,
"end": 873.36,
"text": " sophisticated this tree search. And this is a topic that we might cover in a"
},
{
"start": 873.36,
"end": 880,
"text": " video about AlphaGo or AlphaZero. But in essence, you can do the same thing as"
},
{
"start": 880,
"end": 885.4000000000001,
"text": " AlphaGo or AlphaZero, except if you're not working with the simulator, but"
},
{
"start": 885.4,
"end": 890.4,
"text": " you're working with a learned model on the hidden states of the true"
},
{
"start": 890.4,
"end": 895.9599999999999,
"text": " observations. So B is how you would actually act, right? So for each"
},
{
"start": 895.9599999999999,
"end": 901.56,
"text": " observation here, we'd say you'd run such a tree search, and you kind of get a"
},
{
"start": 901.56,
"end": 906.88,
"text": " histogram over visited actions. And again, we'll skip over that here. But this,"
},
{
"start": 906.88,
"end": 912.8,
"text": " this is part of the AlphaZero paper. And you decide on an action. And that will"
},
{
"start": 912.8,
"end": 918.28,
"text": " give you a reward and a next observation. And that's how you act. And then you"
},
{
"start": 918.28,
"end": 931.04,
"text": " train these things end to end. So you train the networks such that, of"
},
{
"start": 931.04,
"end": 935.7199999999999,
"text": " course, the reward, you know what the rewards are, right? The reward prediction"
},
{
"start": 935.7199999999999,
"end": 940.24,
"text": " of G, you know what that should be, right? From given a trajectory and action"
},
{
"start": 940.24,
"end": 945.2,
"text": " sequence, you know what the individual reward should be. So that's, you can train"
},
{
"start": 945.2,
"end": 952.96,
"text": " G for that. First of all, you can also train to predict the correct value"
},
{
"start": 952.96,
"end": 957.4,
"text": " functions like in classic reinforcement learning, you can do like an end step"
},
{
"start": 957.6,
"end": 962.64,
"text": " into the future prediction, or you can play until the end sample trajectories"
},
{
"start": 962.64,
"end": 969.4,
"text": " and so on. And the policy you predict, you, you predict the policy, your"
},
{
"start": 969.4,
"end": 975.12,
"text": " approximate policy to to match your true actions, right? Because your true"
},
{
"start": 975.12,
"end": 981.68,
"text": " actions you've generated by doing this entire tree search thing, which is, you"
},
{
"start": 981.68,
"end": 987.0799999999999,
"text": " know, the your what you're actually going to do. So you're training your"
},
{
"start": 987.0799999999999,
"end": 993.8,
"text": " approximate policy predictor that you use to run the tree search to match as"
},
{
"start": 993.8,
"end": 1004.64,
"text": " close as possible to your actual actions, right? This in this fashion. So this"
},
{
"start": 1004.8,
"end": 1011.52,
"text": " policy resulting from hidden state zero should be as close as possible to the"
},
{
"start": 1011.52,
"end": 1016.4799999999999,
"text": " action you actually took in the observation that led to hidden state zero."
},
{
"start": 1016.48,
"end": 1025.3600000000001,
"text": " Yeah, so this is how you search, search, act and train using mu zero. And this is"
},
{
"start": 1025.3600000000001,
"end": 1033.76,
"text": " pretty, this is it, right? This is the rest is experiments. The rest is simply"
},
{
"start": 1033.76,
"end": 1039.04,
"text": " showing that they can handle these games, they can keep the performance basically"
},
{
"start": 1039.04,
"end": 1046.6399999999999,
"text": " of the simulator based alpha zero in, in games. Sorry, where are the results here?"
},
{
"start": 1046.6399999999999,
"end": 1050.3999999999999,
"text": " Yeah, so in these games in these left hand games, they can keep the"
},
{
"start": 1050.3999999999999,
"end": 1057.76,
"text": " performance of alpha zero even exceeded here in go. And remember, they don't have"
},
{
"start": 1057.76,
"end": 1064.1599999999999,
"text": " a simulator like alpha zero, they have to learn this model. And in Atari, they"
},
{
"start": 1064.16,
"end": 1073.0400000000002,
"text": " actually out compete the current state of the art, which is I think, or to D two, or"
},
{
"start": 1073.0400000000002,
"end": 1080.4,
"text": " Impala. But it's it's some model, I guess some model free RL baseline here on the"
},
{
"start": 1080.4,
"end": 1087.0400000000002,
"text": " on Atari. So that's pretty cool. And I think that brings RL to kind of a new"
},
{
"start": 1087.04,
"end": 1094.56,
"text": " level with this hidden learning. And yeah, they so they compare it against against"
},
{
"start": 1094.56,
"end": 1104.96,
"text": " multiple ones are two D two different things. All right. Yeah, so that's that's"
},
{
"start": 1104.96,
"end": 1112.8,
"text": " that. For me, it's a cool paper. It's short. Read it if you if you want. I"
},
{
"start": 1112.8,
"end": 1118,
"text": " invite you to also look at the additional experiments where they basically ablate"
},
{
"start": 1118,
"end": 1122.56,
"text": " what they need is the learned model really as good or better as the real"
},
{
"start": 1122.56,
"end": 1127.28,
"text": " simulator? Does it take as much time actually takes less time, which for for"
},
{
"start": 1127.28,
"end": 1131.84,
"text": " higher elo, which is pretty cool. How many simulations are needed? Things like"
},
{
"start": 1131.84,
"end": 1143.28,
"text": " this. All right, that was it. I like this paper, check it out. Bye bye."
}
] |
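As a companion to the MuZero transcript above, here is a minimal sketch of the latent-space rollout it walks through. Everything here is illustrative: `h`, `g`, `f` and `policy_sampler` are placeholders for the representation, dynamics and prediction networks and an action-sampling rule, not DeepMind's actual interfaces, and the real algorithm embeds these unrolls inside a Monte Carlo tree search rather than a single linear rollout.

```python
def rollout_latent(observation, h, g, f, policy_sampler, depth=3):
    """Unroll learned dynamics purely in hidden-state space, MuZero-style.

    h(observation)          -> s0                      (representation network)
    f(hidden_state)         -> (policy_logits, value)  (prediction network)
    g(hidden_state, action) -> (next_hidden, reward)   (dynamics network)
    policy_sampler(logits)  -> a simulated action
    """
    s = h(observation)  # encode the real observation once; no simulator is used afterwards
    trajectory = []
    for _ in range(depth):
        policy_logits, value = f(s)             # value and mini-policy from the latent state
        action = policy_sampler(policy_logits)  # simulated action, not an environment step
        s, reward = g(s, action)                # predicted next latent state and reward
        trajectory.append((action, reward, value))
    return trajectory
```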
KXEEqcwXn8w | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | A neurally plausible model learns successor representations in partially observable environments | [
"Science & Technology"
] | [
"ml",
"ai",
"machine learning",
"artificial ingelligence",
"deep learning",
"reinforcement learning",
"model-free",
"model-based",
"search",
"markov",
"mdp",
"pomdp",
"implicit",
"expectation",
"wake-sleep"
] | Successor representations are a mid-point between model-based and model-free reinforcement learning. This paper learns successor representations in environments where only incomplete information is available.
Abstract:
Animals need to devise strategies to maximize returns while interacting with their environment based on incoming noisy sensory observations. Task-relevant states, such as the agent's location within an environment or the presence of a predator, are often not directly observable but must be inferred using available sensory information. Successor representations (SR) have been proposed as a middle-ground between model-based and model-free reinforcement learning strategies, allowing for fast value computation and rapid adaptation to changes in the reward function or goal locations. Indeed, recent studies suggest that features of neural responses are consistent with the SR framework. However, it is not clear how such representations might be learned and computed in partially observed, noisy environments. Here, we introduce a neurally plausible model using distributional successor features, which builds on the distributed distributional code for the representation and computation of uncertainty, and which allows for efficient value function computation in partially observed environments via the successor representation. We show that distributional successor features can support reinforcement learning in noisy environments in which direct learning of successful policies is infeasible.
Authors: Eszter Vertes, Maneesh Sahani
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Alright, hi there! Today we're looking at a neurally plausible model that learns successor representations in partially observable environments, by Eszter Vertes and Maneesh Sahani. This paper is on a topic that has been interesting me for a while, and that's successor representations. So we'll dive into all of this. The title is fairly lengthy and complicated, but ultimately we're dealing with a setting of reinforcement learning. So if you know something about reinforcement learning, in reinforcement learning usually you have an agent, which, let's just say this is you, and there is an environment, which is a big black box that you don't know anything about. This is the environment. And what the environment gives you is what's called an observation. So an observation could be anything, but in this case let's just assume you get a little picture of what's in front of you. So in front of you might be a tree, and in front of you might be a house. And then you can perform an action, and this action in this case might be to enter the house. And then the environment in the next step gives you back a new picture and says, ah, you're now inside the house. So here is a door that leads you to this room, and a door that leads you to that room, and there's a little table in front of you. So it's just this cycle of action and observation. And with that you're trying to collect some reward over time. Now there are different ways of achieving this reward over time. So basically the reward is going to be, for example, you could get a reward for finding the kitchen, or for going into as many rooms as possible, or anything like this. So the objective is to learn what's called a policy, so which actions to take: action one, action two, action three, given the observations, such that it maximizes your rewards. So there are mainly two ways to go about this. There's the model-free and the model-based reinforcement learning approach. Let's split them. So in the model-free approach, what you're trying to do is simply learn a policy, and we call this here pi of s, and s is your state. And the state you can think of as the observation. So this policy will simply output an action. And this is kind of the simple setup of model-free reinforcement learning. The important thing here is you're trying to learn this. Usually there are parameters theta of this policy pi. This could be a neural network and the theta are then the weights of the neural network. So you're trying to learn the neural network such that if you give it a state it just outputs the action. So you have this neural network with your state, you input the state into layer, layer, layer, layer, layer, and then it outputs one of maybe three actions: go north, go south, go west, maybe go east. This could be four actions. You're just trying to train the neural network using backprop and the reward signal through what's called the REINFORCE trick or variants thereof. This is model-free reinforcement learning. It's very easy to implement, let's say, and it's very applicable. It will simply give you a mapping. You don't have to know anything about how the world works. It'll simply tell you at the end: if you're in this state, do that action and the reward will be high. In contrast there is the other world. This is the model-based reinforcement learning. So in model-based reinforcement learning what you have is a model of the world. 
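As a rough illustration of the model-free setup described above (this sketch is not from the video; the state and action counts, learning rate and reward are made up), a tabular softmax policy trained with a REINFORCE-style update could look like this:

```python
# Minimal sketch: tabular softmax policy over four actions, REINFORCE update.
import numpy as np

n_states, n_actions = 6, 4                 # hypothetical sizes
theta = np.zeros((n_states, n_actions))    # policy parameters

def policy(s):
    logits = theta[s]
    p = np.exp(logits - logits.max())      # softmax over actions
    return p / p.sum()

def reinforce_update(s, a, reward, lr=0.1):
    # grad of log pi(a|s) for a softmax policy is one_hot(a) - pi(s)
    grad_log = -policy(s)
    grad_log[a] += 1.0
    theta[s] += lr * reward * grad_log

# usage: sample an action, get a reward from the environment, update
s = 0
a = np.random.choice(n_actions, p=policy(s))
reinforce_update(s, a, reward=1.0)
```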
The model of the world is best described, for example, if you play chess. If you play chess, and let's do a simplified chess board here, four by four, and you have a pawn right here. You have a pawn and you know, if I do the action of moving the pawn forward, I know the pawn will then be in this square right here in the next time step. I know that because I have a model of the world, I know how the world works, and I can basically predict the results of my actions. So if you have a model-based reinforcement learning setup, if you know how the world works, you can do something like a search. So given you're here in a state, you know if I do action one I go to this state, if I do action two I go to that state, and if I do action three I go to this other state. From each of the states you can then say, ah, but again I have three actions and I can go into these three states, go into these maybe here two, and maybe here I can go into these, actually let's do three as well. Then the question becomes, can you find a path through this thing such that at the end you are in the state that you want to end up in? So for example here is outside, and then here you can go to the tree, to the house, or to the field, and in the house you can go to the bedroom, the bathroom, the kitchen, and you know all of this, you have a model. So you can actually kind of compute what would happen if I do something and then search for the best path. Whereas in the model-free reinforcement learning approach, what you'd simply do is you'd say, here is a state, and the state is for example I am in the house, and now give me the action that would maximize my future reward, and you're trying to learn this directly. So it's a very different style of reinforcement learning. Basically one is a pure machine learning approach, and the other one is a search problem. Now you can of course mix and match the two, like for example people in AlphaGo have done; they have a model-based reinforcement learning that also has kind of a machine learning element. But in between now we have the successor features. So the successor representations, they are, if you will, somewhere in between the two. So they kind of trade off the advantages of model-free, where you only have to learn a function, right, from state to something, with the advantages of model-based, the fact that you actually have a bit of an idea of how the world works, and can adjust quickly to, let's say, different reward structures or things like this. So what do successor representations do? Successor representations basically learn how states are connected, and this is a classic successor representation. So the successor representation M here of policy pi, and the policy, remember, is what tells you which action you should take in a given state. You define it as a connection between state i and state j, and M of si and sj means: given that I am in si, so this could be the kitchen, and your goal is to find the bedroom, and if this is the kitchen, given that I am in state si, what's the probability that in the future at some point I will transition to sj, right? Given that I'm in the kitchen, what's the probability that I'll end up in the bedroom at some point in the future? 
And this is formally expressed, this is the expectation over your policy, and it's the indicator function that the future state, sorry, this is the future state t plus k, you see k goes from zero to infinity, so for all of the future, and st is the one you're in now, so for any future state this is equal to sj. Now of course this makes no sense unless you kind of discount, have a discount factor here, so if you're in state, if you're in the bedroom further in the future, then this value would be lower. So this value is high if you will transition from si to sj with high probability in the near future, and this is a successor representation, right? It basically tells you if you want to go from state si to state sj, how likely is that in the near future, right? So if this number is high, you know that these two states are closely connected, that you can expect to end up in state sj somewhere down the line if you're in si now. One more representation, if you consider the vector m pi of si given all of the sj's, so I'm doing a dot here, so this is a vector, you can actually compare two states si, so if one is, if you plug in here, you plug in the kitchen, and then also you plug in the, I don't know, the garage. If they, and you'll get out two vectors, right? You get two vectors, if those vectors are very similar, then you know that if you're in the kitchen or in the garage, it doesn't matter, you're going to end up, you have a similar future trajectories basically. However, if those two vectors are far apart, you know that these two states are far apart with respect to your policy. So this is pretty cool things you can do with successor representations, and I hope this gives you kind of some insight. So another neat trick is that if you have a value function, so and the value function, in this case there's a simplified assumption, but you don't actually need it, the simplified assumption is that the reward only depends on the state you're in. Basically, it doesn't matter how you get to the state, like the actions you perform, if you're in a given state, if you're in a given room in the house, you'll get some reward. Like for example, if you find the bedroom, then you win. That's a reward that would only be characterized by the state. If that's the case, you can compute the value function of the reinforcement learning problem simply by integrating over the success representations. So for each state, you simply go over all of the possible other states, and you ask how likely am I to go to that state, and what reward will I have in that state, and that's your value function. So pretty simple. You can actually learn the successor representations by TD learning, by temporal difference learning, which is a method that's applied throughout reinforcement learning, especially in places like Q learning, and also for learning value functions. So pretty neat successor representations. This paper then goes from successor representations of individual state to successor representations over continuous space. So right now we have these states, state kitchen, you go to the bedroom, you go to somewhere, and these states were kind of discrete places. So there was a house and you have different rooms in the house, and you can go between them. Now we're dealing more with continuous states. So you can generalize these successor representations to continuous state by considering not the states themselves, but features of the state. And a feature, in this here you have to kind of imagine as binary features. 
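Before moving on to features, here is a minimal sketch of the discrete successor representation just described: its closed form under a fixed policy, the value function obtained from it, and the TD-style update. The three-state chain, rewards and learning rate are made up purely for illustration.

```python
# M(i, j) = E[ sum_k gamma^k * 1(s_{t+k} = j) | s_t = i ]   (successor representation)
import numpy as np

gamma, alpha = 0.9, 0.1
P = np.array([[0.5, 0.5, 0.0],            # policy-induced transition matrix
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
r = np.array([0.0, 0.0, 1.0])             # reward depends only on the state

M_closed = np.linalg.inv(np.eye(3) - gamma * P)   # exact SR for this policy
V = M_closed @ r                                   # value = SR integrated against rewards

# the same matrix learned by temporal difference learning from sampled transitions
M = np.zeros((3, 3))
for _ in range(5000):
    s = np.random.randint(3)
    s_next = np.random.choice(3, p=P[s])
    target = np.eye(3)[s] + gamma * M[s_next]
    M[s] += alpha * (target - M[s])

print(np.round(M_closed, 2), np.round(M, 2), np.round(V, 2))
```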
And the features, let me give like some really dumb examples, but maybe it helps you. Like one feature could be the smell. Does it smell in the room? Like just binary, does it smell or doesn't it smell? And then one feature could be, is there sunlight? And then one feature could be, is it warm? And these are all binary features. So you have to build the features such that if the features are the same, then the states should be fairly close in whatever sense. So for example, if it smells but there is no sunlight, you're probably somewhere in the bathroom. Like where exactly in xy coordinates you are in the bathroom, it doesn't really matter to this, as long as the features are high. And so if it smells and there is no sunlight, you're probably somewhere in the bathroom. And that makes all the states in the bathroom, all the coordinates, close together. So this is how you have to imagine these features. You can define your successor representations exactly the same over these features, except that the representation is now not from state i to state j, but from a state to a given feature. So that means, if I am in state st at the current time, what is the probability that in the near future this feature will be high? So if I am right now in or close to the bathroom, let's say, the probability that smell, oh sorry, this should be a highlight, the probability that smell is high in the future is very high, right? So this number would be high. So exactly the same, except for these continuous features now. And you can do the same thing including defining the value function as a simple linear multiplication with these features. That holds under the assumption that the reward is a linear function of the features of the states, which is the analogous assumption to saying that the reward only depends on the state in the linear case, or somewhat of an analogous assumption, not entirely. All right, so you can also learn this by temporal difference learning, exactly the same. So this is pretty cool. These are the successor representations, and if you learn them, you actually have kind of a model of how the world works. Not as much a model as in model-based reinforcement learning, where you know exactly how the world works, you have this model. In model-free, you don't know how the world works at all. You simply know, oh, if I'm in this state and do this action, that'll turn out really well. But in the successor representation framework, you have an idea of what states there are. We'll do the discrete case right now. So this could be kitchen, this could be outdoor, this could be bedroom. And so you have an idea what states there are and so on, and how they connect to each other. Like you say, from the kitchen I can easily go to the bedroom, but I cannot as well go to maybe the bathroom. From outdoors I can easily go to the kitchen, but I can't go to the bedroom, and so on. So you have kind of an idea of how all of these states connect to each other. And that is the successor representation. You can already see how that helps a learning agent a lot if you have the successor representation. Now what this paper deals with, in essence, is it says: okay, these successor representations are cool, but this has only so far been done in a case where you have full observability. And full observability is the case where you kind of know what state you're in, right? 
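Here is the analogous sketch for the feature-based version just described: successor features learned with the same TD rule, and a value function that is linear in the features under the linear-reward assumption. The features, transition matrix and reward weights are invented for illustration.

```python
# psi(s) = E[ sum_k gamma^k * phi(s_{t+k}) | s_t = s ],   V(s) = w . psi(s)
import numpy as np

gamma, alpha = 0.9, 0.1
phi = np.array([[1.0, 0.0],               # hypothetical binary features per state,
                [1.0, 1.0],               # e.g. [smell, sunlight]
                [0.0, 1.0]])
P = np.array([[0.5, 0.5, 0.0],            # policy-induced transitions
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
w = np.array([0.0, 1.0])                  # assumed linear reward weights

psi = np.zeros_like(phi)
for _ in range(5000):
    s = np.random.randint(3)
    s_next = np.random.choice(3, p=P[s])
    target = phi[s] + gamma * psi[s_next]
    psi[s] += alpha * (target - psi[s])

V = psi @ w                               # value under the linear-reward assumption
print(np.round(psi, 2), np.round(V, 2))
```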
You kind of know that, sorry, you are in the kitchen, you are outdoors, you are in the bedroom. That is not known. But what if you don't? And in most problems you don't. What if you just have a picture, like here, right? You just see a tree in the house, right? You don't, you kind of have to infer that you are outdoor, right? And if you're here, you just get this picture of a couple of doors and a table and you have to infer that you are now in the living room. So in essence there is an additional layer of complexity. Not only do you go from state to state to state, but you don't actually observe the states. What you observe is from each state you observe what are called observations, right? So you only observe these and you have to infer what the, you kind of have to guess what the underlying states are in order to know what you should do to get to the next state, right? You only ever observe the observations. So this here is the actual thing, this is kitchen, and this here could be a picture of the kitchen, right? There's a counter, there's a stove, yeah. And so you get kind of what I mean. In their example they simplify this to kind of a toy data setup where you have this environment and this is one beautiful picture. I don't know why. Oh well. Just you have one this setup and this is this box basically. This box and it has this wall, right? And then you have an agent that is able to walk around in here like with whatever policy. The policy determines how it walks around. But then what you observe is not the actual position, but what you observe is for example for this position you observe a random point here. So they basically add noise to each observer, to each state. And if you're in this state you will observe one of these points in this circle, right? So your trajectory might look to you as you observe it much more, much like for example from here to here to here to here. And you kind of have to guess what the underlying state is. And you see this here. This blue thing is what the agent actually does, but the gray thing is what it observes. And the observations are sometimes even outside of this boundary. And this orange thing is now the inferred thing. And that's what we actually want, is to go from the observed to these inferred. And we want that the inferred is as close as possible to this true latent state. So the way they do it is they introduce this distributional distributed coding for the expectation of the features. And basically what they say is they say we will build a framework where we represent the features as expectations over some distribution. And the expectation we'll call mu. And mu is simply the kind of mean of this feature under this distribution. This is very general so let's look at how to plug this in. So what they now have to do is they have to learn these two things. First of all if I draw this picture again these are the underlying states and they kind of transition into each other. So this is state one, state two, state three. And with action one, action two we transition from state to state. But also there are these observations. Observation one, observation two, observation three. So the agent needs to learn two different things. First of all it needs to learn, given an observation, what state am I probably in. This is the first thing it needs to learn. And then the second thing it needs to learn is given this state and this action what's the next state that I will go to. And of course these things down here they're not observed. 
So these things down here you can only do in distribution. So I'm going to represent this with a p here. You can only kind of do this in distribution and the way they handle it is they always maintain the expected value of these things. And that's, they do this in this wake-sleep algorithm. Alright so this is me re-recording this part because I have done a terrible job at the first time. So I want to understand this wake-sleep algorithm to compute the things that we don't know. Let me draw this actually again. So the way this algorithm does it is actually pretty cool. It has two phases, a sleep phase and a wake phase and it alternates between the two constantly. It's kind of like expectation maximization. Well ultimately what you want to learn are two different sets of parameters W and T. Now you, whenever you learn T you use W, the one that you've already learned. And whenever you learn W you use the T that you've already learned. So it's kind of a bootstrapping each other up. The two functions you learn here are this FW and the T here. So T is just a matrix and F of W is a function. The function has weights W. So you see in the sleep phase you update W and in the wake phase you update T. Now why is this called wake and sleep? It's because in the wake phase you're actually so called awake and you use real observations. So in the wake phase, and I find it easier to start actually at the wake phase, in the wake phase you collect observations. So you let your agent go around its environment and collect a bunch of observations. You don't know what the states are, but what you do is simply you collect these observations. Now it's not that important what the policy is here. So you basically follow some policy and you collect these observations. And then what you say is, okay I have the function F of W and remember since we're in the wake phase we're learning T so we assume we already have the W. In essence in practice we start out with a random one and then kind of alternate between the two phases until both get really good. So we already have a W and we use it to update T. How do we do this? We need to understand what this function F of W does. F of W takes this mu and the current observation and produces a new mu. So what is a mu? This mu here, this mu here as we saw above here, the mu is the expectation over the features. And in essence the mu is a guess. The mu is your best guess of what the features of the state are. Or in the discrete case you could also say a guess of what the state is. So you don't know the state, but what you want to maintain is a distribution over state. So you want to kind of maintain this distribution. But you can't calculate, you can't properly efficiently calculate with an entire distribution unless you assume it's some sort of Gaussian or so. But what you can do is you can simply take its mean, mu, and that's your best guess for what the state is. The state could be anywhere here according to this distribution, but you simply come up with mu which is your best guess. So the function F of W takes in the best guess of where you were up until the last step. And it also takes as an argument your current observation and it gives you the output of F is mu t. It's the best guess of where you are now. It's pretty straightforward if you think about it. So for every observation you want to have kind of a guess of what your state is. And that's mu. So what F does is it takes whatever observations you had, these observations gave rise to a mu that guess where you are. 
You take this mu and you take this observation and from that you derive the next guess of where you are. You just say I guessed I was in the kitchen before, now I moved, I observed that I moved through some sort of door and there's some sort of table. So given that I thought I was in the kitchen and that I observed this thing, now I'm probably in the living room. That's what FW does. So you input the observations that you had and you input your current observation to get the guess of where you're next. And these are real observations. And then you simply update t. What does t do? t relates your current and your next guess. And that's important. We already said that F takes your last guess and gives you the next guess. t does kind of the same thing, but t does it without relying on an additional observation. t simply says well if I am here or if my guess is that I am in the kitchen, then what's the probability that in the next step I'll be in the living room without observing anything? t is simply relating states to each other or relating guesses of states to each other. So it's simply saying well under the current policy that I am, what is the kind of distribution of going from one room to the next room? So in the wake phase you learn the t. The t simply represents how you move from state to state. So it's exactly basically this function here. Except that it's not from state to state, but it relates your guess about your guess, your mu of the state 1 to the mu of the state 2. And then in the sleep phase, you now assume that you have a good estimate of how the states relate to each other. And what you can then do is you can actually sample trajectories. And this is why it's called sleeping. It's kind of like dreaming. So given that you have a model t of how states transition to each other or your your guesses about states more precisely, you can now sample state trajectories. So you can dream up how you would move in an environment. And the assumption here is that you know the process that if you have a state that gives you an observation. For example in their experiments is always the state is x-y coordinates and that's corrupted by Gaussian noise. There is also ways to learn this transition. This is what's called the observation process. But you assume you know it. So you can sample trajectories of states and corresponding observations. Now this is not the real world, but this is using this t down here. You kind of know how or you kind of have some sort of model. You learn a model of how you move about the world. So you sample these trajectories and from these trajectories you can now learn the F of W function. So you see since you know what the state is, you can compute these features exactly. And then you can learn this F of W function that gives you a guess of the last state and the current observation and gives you the next the guess of the next state. And that you can then use temporal difference learning. This is always here. Also with the t here we have temporal difference kind of a temporal difference learning to learn the parameters W. So it's very kind of convoluted, but ultimately it's a simple process. In the wake phase you go into the world and actually collect real observations. And you have a method of deriving from these observations, deriving the guesses about the states. So what you can do is you can learn a transition between the states. 
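A very schematic sketch of this wake-sleep alternation follows. This is my own simplification with made-up linear forms for F_W and T and a made-up noisy box environment; it is not the paper's exact update, but it shows the two phases: wake runs F_W over real noisy observations and fits T on consecutive mus, while sleep dreams trajectories from the known dynamics and trains F_W so that mu tracks the true features of the dreamed states.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lr = 2, 0.05                           # feature dim (here just x-y), step size
T = np.eye(d)                             # relates mu_t to mu_{t+1}   (learned in wake)
W = np.zeros((d, 2 * d))                  # parameters of F_W           (learned in sleep)

def f_w(mu_prev, obs):                    # recursive belief update mu_t = F_W(mu_{t-1}, o_t)
    return W @ np.concatenate([mu_prev, obs])

def true_step(s):                         # latent random walk + known noisy observation
    s = np.clip(s + rng.normal(0, 0.1, d), 0, 1)
    return s, s + rng.normal(0, 0.3, d)

for _ in range(200):
    # wake phase: real observations, fit T on consecutive belief means
    s, mu = rng.random(d), np.zeros(d)
    for _ in range(50):
        s, o = true_step(s)
        mu_next = f_w(mu, o)
        T += lr * np.outer(mu_next - T @ mu, mu)   # regress mu_next on mu
        mu = mu_next
    # sleep phase: dreamed states (latents known), pull mu toward the true features
    s, mu = rng.random(d), np.zeros(d)
    for _ in range(50):
        s, o = true_step(s)
        x = np.concatenate([mu, o])
        mu_pred = W @ x
        W += lr * np.outer(s - mu_pred, x)         # here phi(s) is just s itself
        mu = mu_pred
```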
If you have a good guess of what the states are given each observation you can learn how to transition from one state to the next state. Except you don't do it in actual states, you do it in guesses about states. Then once you have a model of how you move from one state to the next state you can go and dream up such state trajectories. You can dream state trajectories and therefore also you can dream how you would observe them. And given that you can learn then a better function that relates your guess about a state given the observation to the actual features of the state. Since for this particular thing you know what the state is. So this is this two-step process. Notice the cool thing. We've never actually had to learn this mu explicitly. We never had to learn how to go from observations to your guesses about states because we can compute this recursively. So you simply start out with mu0 which is a guess about the initial state and then you go to mu1 and mu2 and you never actually have to learn that function. So that's how they learn these success representations and the experiments of this are fairly cool. Here is another diagram of how that looks like. You have a state this gives you an observation and from that you derive a guess of what this state is. So you can now look at what the agent learned. The agent actually learns dynamics of this room. It means if you're here you probably go somewhere. There is no clear direction but if you're close to the wall your next states are probably going to be inwards of this wall. And yeah I've already shown you this picture. So they have a last cool experiment here where what they do is they specify a reward and the reward is down here. And from each state you want to know which way do I have to go to get the reward. Now if they give the agent the value of the latent state and the latent state here are just your x y coordinates. If they give this to the agent and they let it run, they let it learn the structure of the world, it will correctly conclude these are the high value states, lower, lower, lower, lower, lower value states. Up until over here are the most low value states because you travel the longest to go to the reward. If you just give it the observation, the noisy observation, it will actually assign high value to states here. Because of course it doesn't infer the latent state. It simply takes the observation as the phase value says. Well I was here and I reached here pretty quickly so it must be a good state. But in fact it wasn't here, it was here and the added noise would just corrupt the observation. So you see it learns kind of a wrong model of the world. Whereas if you use this DDC you see, sorry about that, if you use this DDC you see you're much closer to the true state of the world, like to the one on the left here. So on the left here you actually kind of cheat, you give it the actual state. But here you give it the observation but tell it it's actually a noisy observation. You use what this paper proposes and again it will learn to assign a low value to these states because it needs to go all the way around. Even though it has supposedly seen the agent go from here to here directly, but it kind of understands that it's just a noisy observation. Alright so this was this from this paper. It's a very very cool approach I think to reinforcement learning and there's some more experiments where you can see that this DDC actually helps. 
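As a rough illustration of that last experiment (a hypothetical 5x5 grid, not the paper's environment), a value map can be read off directly from a successor representation once a reward location is specified:

```python
import numpy as np

n, gamma = 5, 0.95
S = n * n

def neighbors(i):                          # 4-neighbourhood inside the grid
    r, c = divmod(i, n)
    cand = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [rr * n + cc for rr, cc in cand if 0 <= rr < n and 0 <= cc < n]

P = np.zeros((S, S))                       # random-walk policy
for i in range(S):
    nb = neighbors(i)
    P[i, nb] = 1.0 / len(nb)

M = np.linalg.inv(np.eye(S) - gamma * P)   # successor representation
reward = np.zeros(S)
reward[0] = 1.0                            # reward in one corner
V = (M @ reward).reshape(n, n)             # value map: highest near the reward
print(np.round(V, 2))
```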
I'm excited about successor representations and how to incorporate them in reinforcement learning, because they seem like a perfect kind of middle ground between model-based and model-free RL. With that, thanks for listening, and bye bye! | [
{
"start": 0,
"end": 4.5600000000000005,
"text": " Alright, hi there! Today we're looking at a neurally plausible model,"
},
{
"start": 4.5600000000000005,
"end": 8.96,
"text": " learned successor representations in partially observable environments,"
},
{
"start": 8.96,
"end": 12.56,
"text": " by Esther Vertes and Manish Sani."
},
{
"start": 12.56,
"end": 20.080000000000002,
"text": " This paper is a paper on a topic that has been interesting me for a while,"
},
{
"start": 20.080000000000002,
"end": 22.400000000000002,
"text": " and that's successor representations."
},
{
"start": 22.400000000000002,
"end": 28.8,
"text": " So we'll dive into all of this. The title is fairly lengthy and complicated,"
},
{
"start": 28.8,
"end": 33.52,
"text": " but ultimately we're dealing with a setting of reinforcement learning."
},
{
"start": 33.52,
"end": 37.04,
"text": " So if you know something about reinforcement learning,"
},
{
"start": 37.04,
"end": 42.08,
"text": " in reinforcement learning usually you have an agent,"
},
{
"start": 42.08,
"end": 45.36,
"text": " which, let's just say this is you,"
},
{
"start": 45.36,
"end": 51.519999999999996,
"text": " and there is an environment which is a big black box"
},
{
"start": 51.519999999999996,
"end": 54.56,
"text": " that you don't know anything about. This is environment."
},
{
"start": 54.56,
"end": 57.92,
"text": " And what the environment gives you is what's called an observation."
},
{
"start": 57.92,
"end": 62.160000000000004,
"text": " So an observation could be anything, but in this case let's just assume"
},
{
"start": 62.160000000000004,
"end": 67.28,
"text": " you get a little picture of what's in front of you."
},
{
"start": 67.28,
"end": 72.96000000000001,
"text": " So in front of you might be a tree, and in front of you might be a house."
},
{
"start": 72.96000000000001,
"end": 78.24000000000001,
"text": " And then you can perform an action, and this action in this case might be to"
},
{
"start": 78.24000000000001,
"end": 82.4,
"text": " enter the house. And then the environment in the next step,"
},
{
"start": 82.4,
"end": 86.64,
"text": " it gives you back a new picture and says, ah you're now inside the house."
},
{
"start": 86.64,
"end": 91.36,
"text": " So here is a door that leads you to this room, and the door that leads you that"
},
{
"start": 91.36,
"end": 93.76,
"text": " room, and there's a little table in front of you."
},
{
"start": 93.76,
"end": 99.12,
"text": " So it's just this cycle of action observation."
},
{
"start": 99.12,
"end": 103.36,
"text": " And with that you're trying to collect some reward"
},
{
"start": 103.36,
"end": 107.84,
"text": " over time. Now there are different ways of achieving"
},
{
"start": 107.84,
"end": 113.28,
"text": " this reward over time. So basically the reward is going to be,"
},
{
"start": 113.28,
"end": 117.52,
"text": " for example, you could get a reward for finding the kitchen,"
},
{
"start": 117.52,
"end": 122.96000000000001,
"text": " or for going into as many rooms as possible, or"
},
{
"start": 122.96000000000001,
"end": 126.64,
"text": " you know anything like this. So the other objective is to learn"
},
{
"start": 126.64,
"end": 130.4,
"text": " what's called a policy. So which actions to take. So action one,"
},
{
"start": 130.4,
"end": 136.16,
"text": " action two, action three, given the observations that maximizes your rewards."
},
{
"start": 136.16,
"end": 139.84,
"text": " So there's mainly two ways to go about this. There's the model-free and the"
},
{
"start": 139.84,
"end": 144.64000000000001,
"text": " model-based reinforcement learning approach. Let's split them. So in the"
},
{
"start": 144.64000000000001,
"end": 150.16,
"text": " model-free approach, what you're trying to do"
},
{
"start": 150.16,
"end": 154.48000000000002,
"text": " is you're trying to simply learn a policy, and we call this here"
},
{
"start": 154.48000000000002,
"end": 159.92000000000002,
"text": " pi of s, and s is your state. And the state you can think of it as the"
},
{
"start": 159.92000000000002,
"end": 164.72,
"text": " observation. So in this policy we'll simply output"
},
{
"start": 164.72,
"end": 170.4,
"text": " an action. And this is the kind of the simple setup of"
},
{
"start": 170.4,
"end": 173.04,
"text": " model-free reinforcement learning. The important thing here is"
},
{
"start": 173.04,
"end": 177.04,
"text": " you're trying to learn this. Usually there's parameters theta"
},
{
"start": 177.04,
"end": 181.04,
"text": " of this policy pi. This could be a neural network and the theta"
},
{
"start": 181.04,
"end": 184.4,
"text": " are then the weights of the neural network. So you're trying to learn the"
},
{
"start": 184.4,
"end": 188.8,
"text": " neural network such that if you give it a state it just"
},
{
"start": 188.8,
"end": 192.56,
"text": " outputs the action. So you have this neural network with your state, you"
},
{
"start": 192.56,
"end": 197.28,
"text": " input the state into layer, layer, layer, layer, layer, and then it outputs one of"
},
{
"start": 197.28,
"end": 202.88,
"text": " maybe three actions. Go north, go south, go west, maybe go"
},
{
"start": 202.88,
"end": 206.24,
"text": " east. This could be four actions."
},
{
"start": 206.24,
"end": 209.68,
"text": " You're just trying to train the neural network using backprop"
},
{
"start": 209.68,
"end": 212.48000000000002,
"text": " and the reward signal through what's called the"
},
{
"start": 212.48000000000002,
"end": 217.68,
"text": " reinforce trick or variance thereof. This is model-free reinforcement learning."
},
{
"start": 217.68,
"end": 221.92000000000002,
"text": " It's very easy to implement, let's say,"
},
{
"start": 221.92,
"end": 226.79999999999998,
"text": " and it's very applicable. It will simply give you a mapping."
},
{
"start": 226.79999999999998,
"end": 230.48,
"text": " You don't have to know nothing about how the world works. It'll simply"
},
{
"start": 230.48,
"end": 233.2,
"text": " tell you at the end if you're in this state"
},
{
"start": 233.2,
"end": 236.64,
"text": " do that action and the reward will be high."
},
{
"start": 236.64,
"end": 240.79999999999998,
"text": " In contrast there is the other world. This is the model-based reinforcement"
},
{
"start": 240.79999999999998,
"end": 243.2,
"text": " learning."
},
{
"start": 243.51999999999998,
"end": 246.95999999999998,
"text": " So in model-based reinforcement learning what you have is a"
},
{
"start": 246.95999999999998,
"end": 250.79999999999998,
"text": " model of the world. The model of the world"
},
{
"start": 250.8,
"end": 254.08,
"text": " is best described for example if you play chess."
},
{
"start": 254.08,
"end": 258.88,
"text": " If you play chess, and this is a let's do a simplified chess board"
},
{
"start": 258.88,
"end": 263.28000000000003,
"text": " here, four by four, and you have a pawn right here."
},
{
"start": 263.28000000000003,
"end": 269.84000000000003,
"text": " You have a pawn and you know if I do the action of moving the"
},
{
"start": 269.84000000000003,
"end": 274.08000000000004,
"text": " pawn forward, I know the pawn will then be in this"
},
{
"start": 274.08000000000004,
"end": 277.92,
"text": " square right here, in the next time step. I know that"
},
{
"start": 277.92,
"end": 281.76,
"text": " because I have a model of the world, I know how the world works,"
},
{
"start": 281.76,
"end": 285.52000000000004,
"text": " and I can predict basically the results of my actions."
},
{
"start": 285.52000000000004,
"end": 289.28000000000003,
"text": " So if you have a model-based reinforcement learning setup,"
},
{
"start": 289.28000000000003,
"end": 292.88,
"text": " if you know how the world works, you can do something like a search."
},
{
"start": 292.88,
"end": 297.68,
"text": " So given you're here in a state, you know if I do action one"
},
{
"start": 297.68,
"end": 301.04,
"text": " I go to this state, if I do action two I go to that state,"
},
{
"start": 301.04,
"end": 305.28000000000003,
"text": " and if I do action three I go to this other state. From each of the states"
},
{
"start": 305.28,
"end": 310.47999999999996,
"text": " you can then say ah but again I have three actions and I can you know go"
},
{
"start": 310.47999999999996,
"end": 314.32,
"text": " into these three states, go into these maybe here two, and maybe"
},
{
"start": 314.32,
"end": 318.15999999999997,
"text": " here I can go into these, actually let's do three as well."
},
{
"start": 318.15999999999997,
"end": 322.64,
"text": " Then the question more becomes, can you find a path"
},
{
"start": 322.64,
"end": 328.71999999999997,
"text": " through this thing such that at the end you are in the state that you"
},
{
"start": 328.71999999999997,
"end": 333.76,
"text": " want to end up? So for example here is outside,"
},
{
"start": 333.76,
"end": 337.28,
"text": " and then here you can go to the tree, to the house,"
},
{
"start": 337.28,
"end": 342.48,
"text": " or to the field, and in the house you can go to the bedroom,"
},
{
"start": 342.48,
"end": 347.84,
"text": " the bathroom, the kitchen, and you know all of this, you have a model."
},
{
"start": 347.84,
"end": 351.03999999999996,
"text": " So you can actually kind of compute what would happen if I do"
},
{
"start": 351.03999999999996,
"end": 353.92,
"text": " something and then search for the best path."
},
{
"start": 353.92,
"end": 357.12,
"text": " Whereas in the model-free reinforcement learning approach,"
},
{
"start": 357.12,
"end": 361.36,
"text": " what you'd simply do is you'd say here is a state, and the state is for example"
},
{
"start": 361.36,
"end": 367.12,
"text": " I am in the house, and now give me the action that would"
},
{
"start": 367.12,
"end": 371.2,
"text": " maximize my future reward, and you're trying to learn this directly."
},
{
"start": 371.2,
"end": 374.88,
"text": " So it's a very different style of reinforcement"
},
{
"start": 374.88,
"end": 379.2,
"text": " learning. Basically one is a pure machine learning approach, and the"
},
{
"start": 379.2,
"end": 382.64,
"text": " other one is a search problem. Now you can of course mix and match the two,"
},
{
"start": 382.64,
"end": 386.88,
"text": " like for example people in AlphaGo have done, they have a model-based"
},
{
"start": 386.88,
"end": 391.2,
"text": " reinforcement learning that also has kind of a learning machine learning"
},
{
"start": 391.2,
"end": 395.44,
"text": " elements, but in between now we have the successor"
},
{
"start": 395.44,
"end": 400.71999999999997,
"text": " features. So the successor representations, they are,"
},
{
"start": 400.71999999999997,
"end": 404.15999999999997,
"text": " if you will, they are somewhere in between the two."
},
{
"start": 404.15999999999997,
"end": 410.24,
"text": " So they kind of trade off the advantages of model-free, where you"
},
{
"start": 410.24,
"end": 414.56,
"text": " you only have to learn a function, right, from state to something,"
},
{
"start": 414.56,
"end": 419.12,
"text": " with the advantages of model-based, the fact that you actually have a bit of an"
},
{
"start": 419.12,
"end": 422.56,
"text": " idea of how the world works, and can adjust quickly to"
},
{
"start": 422.56,
"end": 426.96,
"text": " let's say different reward structures or things like this."
},
{
"start": 426.96,
"end": 432.64,
"text": " So what do successor representations do? Successor representations basically"
},
{
"start": 432.64,
"end": 438.08,
"text": " learn how states are connected, and this is a classic successor"
},
{
"start": 438.08,
"end": 442.4,
"text": " representation. So the successor representation M here"
},
{
"start": 442.4,
"end": 447.44,
"text": " of policy pi, the policy remember is what tells you which action you should take"
},
{
"start": 447.44,
"end": 453.52,
"text": " in a given state. You define it as a"
},
{
"start": 453.52,
"end": 460.4,
"text": " connection between state i and state j, and M of si as j means"
},
{
"start": 460.4,
"end": 464.72,
"text": " given that I am in si, so this could be the kitchen,"
},
{
"start": 464.72,
"end": 471.92,
"text": " and your goal is to find the bedroom, and if this is the kitchen,"
},
{
"start": 471.92,
"end": 475.92,
"text": " given that I am in state si, what's the probability"
},
{
"start": 475.92,
"end": 479.84000000000003,
"text": " that in the future at some point I will transition"
},
{
"start": 479.84000000000003,
"end": 486.40000000000003,
"text": " to si, right? Given that I'm in the kitchen, what's the probability that"
},
{
"start": 486.40000000000003,
"end": 491.28000000000003,
"text": " I'll end up in the bedroom at some point in the future?"
},
{
"start": 491.28000000000003,
"end": 496.40000000000003,
"text": " And this is formally expressed, this is the expectation over your policy,"
},
{
"start": 496.40000000000003,
"end": 503.6,
"text": " and it's the indicator function that the future state,"
},
{
"start": 503.6,
"end": 509.20000000000005,
"text": " sorry, this is the future state t plus k, you see k goes from zero to infinity, so"
},
{
"start": 509.20000000000005,
"end": 513.12,
"text": " for all of the future, and st is the one you're in now,"
},
{
"start": 513.12,
"end": 516.96,
"text": " so for any future state this is equal to sj."
},
{
"start": 516.96,
"end": 520.16,
"text": " Now of course this makes no sense unless you kind of"
},
{
"start": 520.16,
"end": 525.52,
"text": " discount, have a discount factor here, so if you're in state, if you're in the"
},
{
"start": 525.52,
"end": 528.88,
"text": " bedroom further in the future, then this value would be lower."
},
{
"start": 528.88,
"end": 534.24,
"text": " So this value is high if you will transition from si to sj with high"
},
{
"start": 534.24,
"end": 537.28,
"text": " probability in the near future, and this is a"
},
{
"start": 537.28,
"end": 541.76,
"text": " successor representation, right? It basically tells you if you want to"
},
{
"start": 541.76,
"end": 547.04,
"text": " go from state si to state sj, how likely is that in the near future,"
},
{
"start": 547.04,
"end": 553.44,
"text": " right? So if this number is high, you know that"
},
{
"start": 553.44,
"end": 557.28,
"text": " these two states are closely connected, that you can"
},
{
"start": 557.28,
"end": 563.4399999999999,
"text": " expect to end up in state sj somewhere down the line if you're in si now."
},
{
"start": 563.4399999999999,
"end": 567.04,
"text": " One more representation, if you consider the vector"
},
{
"start": 567.04,
"end": 575.36,
"text": " m pi of si given all of the sj's, so I'm doing a dot here, so this is a vector,"
},
{
"start": 575.36,
"end": 581.68,
"text": " you can actually compare two states si, so if one is, if you plug in here,"
},
{
"start": 581.68,
"end": 586.16,
"text": " you plug in the kitchen, and then also you plug in"
},
{
"start": 586.16,
"end": 593.4399999999999,
"text": " the, I don't know, the garage. If they, and you'll get out two vectors,"
},
{
"start": 593.4399999999999,
"end": 596.88,
"text": " right? You get two vectors, if those vectors are very similar,"
},
{
"start": 596.88,
"end": 600.88,
"text": " then you know that if you're in the kitchen or in the garage, it doesn't"
},
{
"start": 600.88,
"end": 603.1999999999999,
"text": " matter, you're going to end up, you have a"
},
{
"start": 603.1999999999999,
"end": 608.24,
"text": " similar future trajectories basically. However, if those two"
},
{
"start": 608.24,
"end": 610.48,
"text": " vectors are far apart, you know that these two"
},
{
"start": 610.48,
"end": 613.76,
"text": " states are far apart with respect to your policy."
},
{
"start": 613.76,
"end": 618.08,
"text": " So this is pretty cool things you can do with successor representations,"
},
{
"start": 618.08,
"end": 621.36,
"text": " and I hope this gives you kind of some insight."
},
{
"start": 621.36,
"end": 629.12,
"text": " So another neat trick is that if you have a value function, so"
},
{
"start": 629.12,
"end": 632.72,
"text": " and the value function, in this case there's a simplified assumption, but you"
},
{
"start": 632.72,
"end": 635.6,
"text": " don't actually need it, the simplified assumption is that the"
},
{
"start": 635.6,
"end": 638.96,
"text": " reward only depends on the state you're in."
},
{
"start": 638.96,
"end": 642,
"text": " Basically, it doesn't matter how you get to the state, like the actions you"
},
{
"start": 642,
"end": 645.36,
"text": " perform, if you're in a given state, if you're in a given room in the house,"
},
{
"start": 645.36,
"end": 649.2,
"text": " you'll get some reward. Like for example, if you find the bedroom,"
},
{
"start": 649.2,
"end": 652.56,
"text": " then you win. That's a reward that would only be"
},
{
"start": 652.56,
"end": 656,
"text": " characterized by the state. If that's the case,"
},
{
"start": 656,
"end": 662.32,
"text": " you can compute the value function of the reinforcement learning problem"
},
{
"start": 662.32,
"end": 668.64,
"text": " simply by integrating over the success representations. So for each"
},
{
"start": 668.64,
"end": 674,
"text": " state, you simply go over all of the possible other states, and you ask how"
},
{
"start": 674,
"end": 678,
"text": " likely am I to go to that state, and what reward will I have in that state, and"
},
{
"start": 678,
"end": 682.56,
"text": " that's your value function. So pretty simple."
},
{
"start": 682.56,
"end": 685.6,
"text": " You can actually learn the successor representations"
},
{
"start": 685.6,
"end": 689.76,
"text": " by TD learning, by temporal difference learning,"
},
{
"start": 689.76,
"end": 694.96,
"text": " which is a method that's applied throughout reinforcement learning,"
},
{
"start": 694.96,
"end": 702.5600000000001,
"text": " especially in places like Q learning, and also for learning value functions."
},
{
"start": 702.5600000000001,
"end": 708,
"text": " So pretty neat successor representations."
},
{
"start": 708.72,
"end": 714.08,
"text": " This paper then goes from successor representations of individual state"
},
{
"start": 714.08,
"end": 720.64,
"text": " to successor representations over continuous space. So right now we have"
},
{
"start": 720.64,
"end": 723.9200000000001,
"text": " these states, state kitchen, you go to the"
},
{
"start": 723.92,
"end": 727.76,
"text": " bedroom, you go to somewhere, and these states were kind of"
},
{
"start": 727.76,
"end": 732.9599999999999,
"text": " discrete places. So there was a house and you have different"
},
{
"start": 732.9599999999999,
"end": 736.56,
"text": " rooms in the house, and you can go between them."
},
{
"start": 736.56,
"end": 743.1999999999999,
"text": " Now we're dealing more with continuous states. So you can generalize"
},
{
"start": 743.1999999999999,
"end": 746.88,
"text": " these successor representations to continuous state by considering"
},
{
"start": 746.88,
"end": 750.56,
"text": " not the states themselves, but features of the"
},
{
"start": 750.56,
"end": 755.92,
"text": " state. And a feature, in this here you have to kind of imagine as"
},
{
"start": 755.92,
"end": 761.8399999999999,
"text": " binary features. And the features, let me give like some really dumb"
},
{
"start": 761.8399999999999,
"end": 766.9599999999999,
"text": " examples, but maybe it helps you. Like one feature could be the smell."
},
{
"start": 766.9599999999999,
"end": 770.88,
"text": " Does it smell in the room? Like just binary. Does it smell or doesn't it smell?"
},
{
"start": 770.88,
"end": 776.7199999999999,
"text": " And then one feature could there be, is there sunlight?"
},
{
"start": 776.72,
"end": 784.1600000000001,
"text": " And then one feature could be, is it warm?"
},
{
"start": 784.96,
"end": 790.5600000000001,
"text": " And these are all binary features."
},
{
"start": 790.5600000000001,
"end": 796.5600000000001,
"text": " So you have to build the features such that if the"
},
{
"start": 796.5600000000001,
"end": 802.08,
"text": " features are the same, then the states should be fairly close in"
},
{
"start": 802.08,
"end": 808,
"text": " whatever sense. So for example, if it smells but there is no"
},
{
"start": 808,
"end": 812.32,
"text": " sunlight, you're probably somewhere in the bathroom. Like where exactly in xy"
},
{
"start": 812.32,
"end": 816.96,
"text": " coordinates you are in the bathroom, it doesn't really matter to this as long"
},
{
"start": 816.96,
"end": 821.5200000000001,
"text": " as the features are high. And so if it smells and there is no"
},
{
"start": 821.5200000000001,
"end": 825.9200000000001,
"text": " sunlight, you're probably somewhere in the bathroom. And that makes"
},
{
"start": 825.9200000000001,
"end": 830.88,
"text": " all the states in the bathroom, all the coordinates, close together."
},
{
"start": 830.88,
"end": 834.96,
"text": " So this is how you have to imagine these features. You can define your successor"
},
{
"start": 834.96,
"end": 839.28,
"text": " representations exactly the same over these features, except that the"
},
{
"start": 839.28,
"end": 845.12,
"text": " representation is now not from state i to state j, but from a state to"
},
{
"start": 845.12,
"end": 852.16,
"text": " a given feature. So that means if I am in state st at the current time, what is"
},
{
"start": 852.16,
"end": 858.24,
"text": " the probability that in the near future this feature will be high?"
},
{
"start": 858.24,
"end": 863.44,
"text": " So if I am right now in the or close to the bathroom, let's say,"
},
{
"start": 864.5600000000001,
"end": 870.72,
"text": " the probability that smell, oh sorry, this should be a highlight, the"
},
{
"start": 870.72,
"end": 876.72,
"text": " probability that smell is high in the future is very high, right? So this"
},
{
"start": 876.72,
"end": 881.36,
"text": " this number would be high. So exactly the same except for these continuous"
},
{
"start": 881.36,
"end": 887.84,
"text": " features now. And you can do the same thing including defining the value"
},
{
"start": 887.84,
"end": 893.44,
"text": " function as a simple linear multiplication with these features."
},
{
"start": 894.32,
"end": 898,
"text": " That is an assumption under the assumption that the reward is a linear"
},
{
"start": 898,
"end": 902.88,
"text": " function of the features of the states, which is the analogous assumption to"
},
{
"start": 902.88,
"end": 907.6,
"text": " saying that the reward only depends on the state in the linear case, or"
},
{
"start": 907.6,
"end": 910.1600000000001,
"text": " somewhat of an analogous function, not entirely."
},
{
"start": 912.96,
"end": 917.0400000000001,
"text": " All right, so you can also learn this by temporal difference learning exactly"
},
{
"start": 917.04,
"end": 922.56,
"text": " the same. So this is pretty cool. These are the successor representations and"
},
{
"start": 922.56,
"end": 929.28,
"text": " you can actually, if you learn them, you have kind of a model of how the world"
},
{
"start": 929.28,
"end": 935.4399999999999,
"text": " works. Not as much a model as the model based reinforcement learning where you"
},
{
"start": 935.4399999999999,
"end": 941.04,
"text": " know exactly how it works, right? Here you know exactly how the world works,"
},
{
"start": 941.04,
"end": 944.88,
"text": " you have this model. In model three, you don't know how the world works at all."
},
{
"start": 944.88,
"end": 949.28,
"text": " You simply know, oh if I'm in this state and do this action, that that'll turn out"
},
{
"start": 949.28,
"end": 953.76,
"text": " really well. But in the successor representation framework, you have"
},
{
"start": 956.08,
"end": 961.04,
"text": " you have an idea of what states there are. We'll do the discrete case right now."
},
{
"start": 961.04,
"end": 966.56,
"text": " So this could be kitchen, this could be outdoor, this could be bedroom."
},
{
"start": 967.6,
"end": 974.48,
"text": " And so you have an idea what states there are and so on, and how they connect to"
},
{
"start": 974.48,
"end": 979.12,
"text": " each other. Like you say, from the kitchen I can easily go to the bedroom, but I"
},
{
"start": 979.12,
"end": 986.72,
"text": " cannot as well go to maybe the bathroom. From outdoor I can easily go to the"
},
{
"start": 986.72,
"end": 991.84,
"text": " kitchen, but I can't go to the bedroom and so on. So you have kind of an idea"
},
{
"start": 991.84,
"end": 997.28,
"text": " of how all of these states connect to each other. And that is the success"
},
{
"start": 997.28,
"end": 1002.88,
"text": " representation. You can already see how that helps learning agent a lot if you"
},
{
"start": 1002.88,
"end": 1008.48,
"text": " introduce the successor, if you have the successor representation. Now what this"
},
{
"start": 1008.48,
"end": 1012.96,
"text": " this paper deals with in essence is it says, okay these successor"
},
{
"start": 1012.96,
"end": 1018.4,
"text": " representations are cool, but it has only so far been done in a case where you"
},
{
"start": 1018.4,
"end": 1024.4,
"text": " have full observability. And the full observability is the case where you kind"
},
{
"start": 1024.4,
"end": 1030.64,
"text": " of know what state you're in, right? You kind of know that, sorry, you are in the"
},
{
"start": 1030.64,
"end": 1037.68,
"text": " kitchen, you are outdoors, you are in the bedroom. That is not known. But what if"
},
{
"start": 1037.68,
"end": 1042.24,
"text": " you don't? And in most problems you don't. What if you just have a picture, like"
},
{
"start": 1042.24,
"end": 1046.88,
"text": " here, right? You just see a tree in the house, right? You don't, you kind of have"
},
{
"start": 1046.88,
"end": 1052,
"text": " to infer that you are outdoor, right? And if you're here, you just get this picture"
},
{
"start": 1052,
"end": 1057.8400000000001,
"text": " of a couple of doors and a table and you have to infer that you are now in the"
},
{
"start": 1057.84,
"end": 1064.3999999999999,
"text": " living room. So in essence there is an additional layer of complexity. Not"
},
{
"start": 1064.3999999999999,
"end": 1075.04,
"text": " only do you go from state to state to state, but you don't actually"
},
{
"start": 1075.04,
"end": 1081.1999999999998,
"text": " observe the states. What you observe is from each state you observe what are"
},
{
"start": 1081.2,
"end": 1089.92,
"text": " called observations, right? So you only observe these and you have to infer what"
},
{
"start": 1089.92,
"end": 1095.28,
"text": " the, you kind of have to guess what the underlying states are in order to know"
},
{
"start": 1095.28,
"end": 1099.92,
"text": " what you should do to get to the next state, right? You only ever observe the"
},
{
"start": 1099.92,
"end": 1106.8400000000001,
"text": " observations. So this here is the actual thing, this is kitchen, and this"
},
{
"start": 1106.84,
"end": 1113.36,
"text": " here could be a picture of the kitchen, right? There's a counter, there's a stove,"
},
{
"start": 1113.36,
"end": 1120.6399999999999,
"text": " yeah. And so you get kind of what I mean. In their example they"
},
{
"start": 1120.6399999999999,
"end": 1127.48,
"text": " simplify this to kind of a toy data setup where you have this environment"
},
{
"start": 1127.48,
"end": 1134.24,
"text": " and this is one beautiful picture. I don't know why. Oh well. Just you have"
},
{
"start": 1134.24,
"end": 1140.8,
"text": " one this setup and this is this box basically. This box and it has this wall,"
},
{
"start": 1140.8,
"end": 1148.72,
"text": " right? And then you have an agent that is able to walk around in here like with"
},
{
"start": 1148.72,
"end": 1152.8,
"text": " whatever policy. The policy determines how it walks around. But then what you"
},
{
"start": 1152.8,
"end": 1157.68,
"text": " observe is not the actual position, but what you observe is for example for this"
},
{
"start": 1157.68,
"end": 1163.14,
"text": " position you observe a random point here. So they basically add noise to each"
},
{
"start": 1163.14,
"end": 1168.0400000000002,
"text": " observer, to each state. And if you're in this state you will observe one of these"
},
{
"start": 1168.0400000000002,
"end": 1174.44,
"text": " points in this circle, right? So your trajectory might look to you as you"
},
{
"start": 1174.44,
"end": 1180.1200000000001,
"text": " observe it much more, much like for example from here to here to here to"
},
{
"start": 1180.1200000000001,
"end": 1186.42,
"text": " here. And you kind of have to guess what the underlying state is. And you see"
},
{
"start": 1186.42,
"end": 1193.0800000000002,
"text": " this here. This blue thing is what the agent actually does, but the gray"
},
{
"start": 1193.08,
"end": 1198.04,
"text": " thing is what it observes. And the observations are sometimes even outside"
},
{
"start": 1198.04,
"end": 1205.24,
"text": " of this boundary. And this orange thing is now the inferred thing."
},
{
"start": 1205.24,
"end": 1212.52,
"text": " And that's what we actually want, is to go from the observed to these inferred."
},
{
"start": 1212.52,
"end": 1218.24,
"text": " And we want that the inferred is as close as possible to this true latent"
},
{
"start": 1218.24,
"end": 1224.6,
"text": " state. So the way they do it is they introduce this distributional"
},
{
"start": 1224.6,
"end": 1234,
"text": " distributed coding for the expectation of the features."
},
{
"start": 1234,
"end": 1242.84,
"text": " And basically what they say is they say we will build a framework where"
},
{
"start": 1242.84,
"end": 1251.9199999999998,
"text": " we represent the features as expectations over some distribution."
},
{
"start": 1251.9199999999998,
"end": 1260.4399999999998,
"text": " And the expectation we'll call mu. And mu is simply the kind of mean of"
},
{
"start": 1260.4399999999998,
"end": 1266.6799999999998,
"text": " this feature under this distribution. This is very general so let's"
},
{
"start": 1266.68,
"end": 1278.28,
"text": " look at how to plug this in. So what they now have to do is they"
},
{
"start": 1278.28,
"end": 1283.5600000000002,
"text": " have to learn these two things. First of all if I draw this"
},
{
"start": 1283.5600000000002,
"end": 1290.5600000000002,
"text": " picture again these are the underlying states and they kind of transition into"
},
{
"start": 1290.5600000000002,
"end": 1295.5600000000002,
"text": " each other. So this is state one, state two, state three. And with action one,"
},
{
"start": 1295.56,
"end": 1299.96,
"text": " action two we transition from state to state. But also there are these"
},
{
"start": 1299.96,
"end": 1308.56,
"text": " observations. Observation one, observation two, observation three. So the agent needs"
},
{
"start": 1308.56,
"end": 1314.8799999999999,
"text": " to learn two different things. First of all it needs to learn, given an"
},
{
"start": 1314.8799999999999,
"end": 1321.12,
"text": " observation, what state am I probably in. This is the first thing it needs"
},
{
"start": 1321.12,
"end": 1325.6799999999998,
"text": " to learn. And then the second thing it needs to learn is given this state and"
},
{
"start": 1325.6799999999998,
"end": 1335.28,
"text": " this action what's the next state that I will go to. And of"
},
{
"start": 1335.28,
"end": 1339.76,
"text": " course these things down here they're not observed. So these things down here"
},
{
"start": 1339.76,
"end": 1345.32,
"text": " you can only do in distribution. So I'm going to represent this with a p here."
},
{
"start": 1345.32,
"end": 1349.8799999999999,
"text": " You can only kind of do this in distribution and the way they handle it"
},
{
"start": 1349.88,
"end": 1359.92,
"text": " is they always maintain the expected value of these things. And that's, they"
},
{
"start": 1359.92,
"end": 1365,
"text": " do this in this wake-sleep algorithm. Alright so this is me re-recording this"
},
{
"start": 1365,
"end": 1370.92,
"text": " part because I have done a terrible job at the first time. So I want to"
},
{
"start": 1370.92,
"end": 1376.68,
"text": " understand this wake-sleep algorithm to compute the things that we don't know."
},
{
"start": 1376.68,
"end": 1390,
"text": " Let me draw this actually again. So the way this algorithm does it is actually"
},
{
"start": 1390,
"end": 1396.3600000000001,
"text": " pretty cool. It has two phases, a sleep phase and a wake phase and it alternates"
},
{
"start": 1396.3600000000001,
"end": 1401.16,
"text": " between the two constantly. It's kind of like expectation maximization. Well"
},
{
"start": 1401.16,
"end": 1405.88,
"text": " ultimately what you want to learn are two different sets of parameters W and T."
},
{
"start": 1405.88,
"end": 1414.5200000000002,
"text": " Now you, whenever you learn T you use W, the one that you've already learned. And"
},
{
"start": 1414.5200000000002,
"end": 1419,
"text": " whenever you learn W you use the T that you've already learned. So it's kind of"
},
{
"start": 1419,
"end": 1426.8400000000001,
"text": " a bootstrapping each other up. The two functions you learn here are this FW"
},
{
"start": 1426.84,
"end": 1437.48,
"text": " and the T here. So T is just a matrix and F of W is a function. The function has"
},
{
"start": 1437.48,
"end": 1443.48,
"text": " weights W. So you see in the sleep phase you update W and in the wake"
},
{
"start": 1443.48,
"end": 1449.06,
"text": " phase you update T. Now why is this called wake and sleep? It's because in the"
},
{
"start": 1449.06,
"end": 1455.1599999999999,
"text": " wake phase you're actually so called awake and you use real observations. So"
},
{
"start": 1455.16,
"end": 1460.0400000000002,
"text": " in the wake phase, and I find it easier to start actually at the wake phase, in"
},
{
"start": 1460.0400000000002,
"end": 1465.8400000000001,
"text": " the wake phase you collect observations. So you let your agent go around its"
},
{
"start": 1465.8400000000001,
"end": 1469.88,
"text": " environment and collect a bunch of observations. You don't know what the"
},
{
"start": 1469.88,
"end": 1475.4,
"text": " states are, but what you do is simply you collect these observations. Now it's not"
},
{
"start": 1475.4,
"end": 1480.64,
"text": " that important what the policy is here. So you basically follow some policy and"
},
{
"start": 1480.64,
"end": 1490.6200000000001,
"text": " you collect these observations. And then what you say is, okay I have"
},
{
"start": 1490.6200000000001,
"end": 1495.48,
"text": " the function F of W and remember since we're in the wake phase we're learning"
},
{
"start": 1495.48,
"end": 1502.44,
"text": " T so we assume we already have the W. In essence in practice we start out with a"
},
{
"start": 1502.44,
"end": 1506.92,
"text": " random one and then kind of alternate between the two phases until"
},
{
"start": 1506.92,
"end": 1514.28,
"text": " both get really good. So we already have a W and we use it to update T. How"
},
{
"start": 1514.28,
"end": 1519.8400000000001,
"text": " do we do this? We need to understand what this function F of W does. F of"
},
{
"start": 1519.8400000000001,
"end": 1530.48,
"text": " W takes this mu and the current observation and produces a new mu. So"
},
{
"start": 1530.48,
"end": 1539.64,
"text": " what is a mu? This mu here, this mu here as we saw above here, the"
},
{
"start": 1539.64,
"end": 1548.1200000000001,
"text": " mu is the expectation over the features. And in essence the mu is a guess. The mu"
},
{
"start": 1548.1200000000001,
"end": 1553.56,
"text": " is your best guess of what the features of the state are. Or in the"
},
{
"start": 1553.56,
"end": 1560.76,
"text": " discrete case you could also say a guess of what the state is. So you"
},
{
"start": 1560.76,
"end": 1566.2,
"text": " don't know the state, but what you want to maintain is a distribution"
},
{
"start": 1566.2,
"end": 1570.6399999999999,
"text": " over state. So you want to kind of maintain this distribution. But you can't"
},
{
"start": 1570.6399999999999,
"end": 1575.48,
"text": " calculate, you can't properly efficiently calculate with an entire"
},
{
"start": 1575.48,
"end": 1580.56,
"text": " distribution unless you assume it's some sort of Gaussian or so. But what you can"
},
{
"start": 1580.56,
"end": 1588.6399999999999,
"text": " do is you can simply take its mean, mu, and that's your best guess"
},
{
"start": 1588.6399999999999,
"end": 1594.36,
"text": " for what the state is. The state could be anywhere here"
},
{
"start": 1594.36,
"end": 1599.56,
"text": " according to this distribution, but you simply come up with mu which is your"
},
{
"start": 1599.56,
"end": 1611.08,
"text": " best guess. So the function F of W takes in the best guess of where"
},
{
"start": 1611.08,
"end": 1617.72,
"text": " you were up until the last step. And it also takes as an argument your current"
},
{
"start": 1617.72,
"end": 1625.52,
"text": " observation and it gives you the output of F is mu t. It's the best guess"
},
{
"start": 1625.52,
"end": 1630.16,
"text": " of where you are now. It's pretty straightforward if you think"
},
{
"start": 1630.16,
"end": 1638.56,
"text": " about it. So for every observation you want to have kind of a guess of"
},
{
"start": 1638.56,
"end": 1645.04,
"text": " what your state is. And that's mu. So what F does is it"
},
{
"start": 1645.04,
"end": 1650.8799999999999,
"text": " takes whatever observations you had, these observations gave rise to a mu"
},
{
"start": 1650.88,
"end": 1655.64,
"text": " that guess where you are. You take this mu and you take this observation and"
},
{
"start": 1655.64,
"end": 1661.64,
"text": " from that you derive the next guess of where you are. You just say I guessed I"
},
{
"start": 1661.64,
"end": 1669.2800000000002,
"text": " was in the kitchen before, now I moved, I observed that I moved through some"
},
{
"start": 1669.2800000000002,
"end": 1674.2800000000002,
"text": " sort of door and there's some sort of table. So given that I thought I"
},
{
"start": 1674.2800000000002,
"end": 1677.8000000000002,
"text": " was in the kitchen and that I observed this thing, now I'm probably in the"
},
{
"start": 1677.8,
"end": 1687.6,
"text": " living room. That's what FW does. So you input the observations that you had"
},
{
"start": 1687.6,
"end": 1692.9199999999998,
"text": " and you input your current observation to get the guess of where you're"
},
{
"start": 1692.9199999999998,
"end": 1698.56,
"text": " next. And these are real observations. And then you simply update t. What"
},
{
"start": 1698.56,
"end": 1706.28,
"text": " does t do? t relates your current and your next guess. And that's important. We"
},
{
"start": 1706.28,
"end": 1713.56,
"text": " already said that F takes your last guess and gives you the next guess."
},
{
"start": 1713.56,
"end": 1720.56,
"text": " t does kind of the same thing, but t does it without relying on"
},
{
"start": 1720.56,
"end": 1726.8799999999999,
"text": " an additional observation. t simply says well if I am here or if my guess is that"
},
{
"start": 1726.8799999999999,
"end": 1732.52,
"text": " I am in the kitchen, then what's the probability that in the next step I'll"
},
{
"start": 1732.52,
"end": 1737.16,
"text": " be in the living room without observing anything? t is simply"
},
{
"start": 1737.16,
"end": 1743.84,
"text": " relating states to each other or relating guesses of states to each other."
},
{
"start": 1743.84,
"end": 1750.84,
"text": " So it's simply saying well under the current policy that I am,"
},
{
"start": 1750.84,
"end": 1756.76,
"text": " what is the kind of distribution of going from one room to the next room?"
},
{
"start": 1756.76,
"end": 1762.8,
"text": " So in the wake phase you learn the t. The t simply represents how"
},
{
"start": 1762.8,
"end": 1767.8,
"text": " you move from state to state. So it's exactly basically this function here."
},
{
"start": 1767.8,
"end": 1773.44,
"text": " Except that it's not from state to state, but it relates your guess about your"
},
{
"start": 1773.44,
"end": 1783.16,
"text": " guess, your mu of the state 1 to the mu of the state 2. And then in the"
},
{
"start": 1783.16,
"end": 1791.24,
"text": " sleep phase, you now assume that you have a good estimate of how"
},
{
"start": 1791.24,
"end": 1795.48,
"text": " the states relate to each other. And what you can then do is you can actually"
},
{
"start": 1795.48,
"end": 1799.92,
"text": " sample trajectories. And this is why it's called sleeping. It's kind of like"
},
{
"start": 1799.92,
"end": 1806.6000000000001,
"text": " dreaming. So given that you have a model t of how states transition to each other"
},
{
"start": 1806.6000000000001,
"end": 1812.5800000000002,
"text": " or your your guesses about states more precisely, you can now sample state"
},
{
"start": 1812.58,
"end": 1817.72,
"text": " trajectories. So you can dream up how you would move in an environment."
},
{
"start": 1817.72,
"end": 1824.6799999999998,
"text": " And the assumption here is that you know the process that if you have a"
},
{
"start": 1824.6799999999998,
"end": 1829.04,
"text": " state that gives you an observation. For example in their experiments is always"
},
{
"start": 1829.04,
"end": 1835.36,
"text": " the state is x-y coordinates and that's corrupted by Gaussian noise. There is"
},
{
"start": 1835.36,
"end": 1840.52,
"text": " also ways to learn this transition. This is what's called the"
},
{
"start": 1840.52,
"end": 1846.32,
"text": " observation process. But you assume you know it. So you can sample"
},
{
"start": 1846.32,
"end": 1853.48,
"text": " trajectories of states and corresponding observations. Now this is"
},
{
"start": 1853.48,
"end": 1860.52,
"text": " not the real world, but this is using this t down here. You kind of know how"
},
{
"start": 1860.52,
"end": 1864.68,
"text": " or you kind of have some sort of model. You learn a model of how you"
},
{
"start": 1864.68,
"end": 1868.98,
"text": " move about the world. So you sample these trajectories and from these"
},
{
"start": 1868.98,
"end": 1874.88,
"text": " trajectories you can now learn the F of W function. So you see since you know"
},
{
"start": 1874.88,
"end": 1881.52,
"text": " what the state is, you can compute these features exactly. And then you"
},
{
"start": 1881.52,
"end": 1888.96,
"text": " can learn this F of W function that gives you a guess of the"
},
{
"start": 1888.96,
"end": 1894.78,
"text": " last state and the current observation and gives you the next the guess of the"
},
{
"start": 1894.78,
"end": 1902.94,
"text": " next state. And that you can then use temporal difference learning. This is"
},
{
"start": 1902.94,
"end": 1907.8,
"text": " always here. Also with the t here we have temporal difference kind of a"
},
{
"start": 1907.8,
"end": 1917.76,
"text": " temporal difference learning to learn the parameters W. So it's very kind of"
},
{
"start": 1917.76,
"end": 1925.36,
"text": " convoluted, but ultimately it's a simple process. In the wake phase you go into"
},
{
"start": 1925.36,
"end": 1930.76,
"text": " the world and actually collect real observations. And you have a method"
},
{
"start": 1930.76,
"end": 1939.64,
"text": " of deriving from these observations, deriving the guesses about the states."
},
{
"start": 1939.64,
"end": 1945.72,
"text": " So what you can do is you can learn a transition between the states. If"
},
{
"start": 1945.72,
"end": 1950.72,
"text": " you have a good guess of what the states are given each observation you can learn"
},
{
"start": 1950.72,
"end": 1955.6000000000001,
"text": " how to transition from one state to the next state. Except you don't do it in"
},
{
"start": 1955.6000000000001,
"end": 1961.4,
"text": " actual states, you do it in guesses about states. Then once you have a model of how"
},
{
"start": 1961.4,
"end": 1967.56,
"text": " you move from one state to the next state you can go and dream up such state"
},
{
"start": 1967.56,
"end": 1973.6200000000001,
"text": " trajectories. You can dream state trajectories and therefore also you can"
},
{
"start": 1973.62,
"end": 1978.7399999999998,
"text": " dream how you would observe them. And given that you can learn then a better"
},
{
"start": 1978.7399999999998,
"end": 1985.32,
"text": " function that relates your guess about a state given the observation"
},
{
"start": 1985.32,
"end": 1990.76,
"text": " to the actual features of the state. Since for this particular thing you know"
},
{
"start": 1990.76,
"end": 2000.12,
"text": " what the state is. So this is this two-step process. Notice the cool thing."
},
{
"start": 2000.12,
"end": 2007.1999999999998,
"text": " We've never actually had to learn this mu explicitly. We never had to learn how"
},
{
"start": 2007.1999999999998,
"end": 2013.84,
"text": " to go from observations to your guesses about states because we can compute this"
},
{
"start": 2013.84,
"end": 2019.6,
"text": " recursively. So you simply start out with mu0 which is a guess about the"
},
{
"start": 2019.6,
"end": 2026.6,
"text": " initial state and then you go to mu1 and mu2 and you never actually have to"
},
{
"start": 2026.6,
"end": 2032,
"text": " learn that function. So that's how they"
},
{
"start": 2032,
"end": 2037.3999999999999,
"text": " learn these success representations and the experiments of this are"
},
{
"start": 2037.3999999999999,
"end": 2042.9599999999998,
"text": " fairly cool. Here is another diagram of how that looks like. You have a state"
},
{
"start": 2042.9599999999998,
"end": 2046.7199999999998,
"text": " this gives you an observation and from that you derive a guess of what this"
},
{
"start": 2046.7199999999998,
"end": 2052.88,
"text": " state is. So you can now look at what the agent learned. The agent actually"
},
{
"start": 2052.88,
"end": 2060.44,
"text": " learns dynamics of this room. It means if you're here you probably go somewhere."
},
{
"start": 2060.44,
"end": 2064.92,
"text": " There is no clear direction but if you're close to the wall your next"
},
{
"start": 2064.92,
"end": 2070.88,
"text": " states are probably going to be inwards of this wall. And yeah I've"
},
{
"start": 2070.88,
"end": 2078.76,
"text": " already shown you this picture. So they have a last cool experiment here where"
},
{
"start": 2078.76,
"end": 2085.76,
"text": " what they do is they specify a reward and the reward is down here. And from each"
},
{
"start": 2085.76,
"end": 2091.4,
"text": " state you want to know which way do I have to go to get the reward."
},
{
"start": 2091.4,
"end": 2098.48,
"text": " Now if they give the agent the value of the latent state and the latent state"
},
{
"start": 2098.48,
"end": 2102.6000000000004,
"text": " here are just your x y coordinates. If they give this to the agent and they let"
},
{
"start": 2102.6000000000004,
"end": 2106.76,
"text": " it run, they let it learn the structure of the world, it will correctly conclude"
},
{
"start": 2106.76,
"end": 2111.5600000000004,
"text": " these are the high value states, lower, lower, lower, lower, lower"
},
{
"start": 2111.5600000000004,
"end": 2116.6400000000003,
"text": " value states. Up until over here are the most low value states because you"
},
{
"start": 2116.6400000000003,
"end": 2124.84,
"text": " travel the longest to go to the reward. If you just give it the observation, the"
},
{
"start": 2124.84,
"end": 2129.6400000000003,
"text": " noisy observation, it will actually assign high value to states here."
},
{
"start": 2129.6400000000003,
"end": 2135.5200000000004,
"text": " Because of course it doesn't infer the latent state. It simply takes the"
},
{
"start": 2135.52,
"end": 2140,
"text": " observation as the phase value says. Well I was here and I reached here pretty"
},
{
"start": 2140,
"end": 2145.84,
"text": " quickly so it must be a good state. But in fact it wasn't here, it was here and"
},
{
"start": 2145.84,
"end": 2151.12,
"text": " the added noise would just corrupt the observation. So you see it learns kind of"
},
{
"start": 2151.12,
"end": 2158.6,
"text": " a wrong model of the world. Whereas if you use this DDC you see, sorry about"
},
{
"start": 2158.6,
"end": 2164.24,
"text": " that, if you use this DDC you see you're much closer to the true state of the"
},
{
"start": 2164.24,
"end": 2171,
"text": " world, like to the one on the left here. So on the left here you"
},
{
"start": 2171,
"end": 2175.2799999999997,
"text": " actually kind of cheat, you give it the actual state. But here you give it"
},
{
"start": 2175.2799999999997,
"end": 2179.3599999999997,
"text": " the observation but tell it it's actually a noisy observation. You use"
},
{
"start": 2179.3599999999997,
"end": 2183.68,
"text": " what this paper proposes and again it will learn to assign a low value to"
},
{
"start": 2183.68,
"end": 2188,
"text": " these states because it needs to go all the way around. Even though it has"
},
{
"start": 2188,
"end": 2193.9599999999996,
"text": " supposedly seen the agent go from here to here directly, but it kind of"
},
{
"start": 2193.96,
"end": 2199.32,
"text": " understands that it's just a noisy observation. Alright so this was this"
},
{
"start": 2199.32,
"end": 2204.2400000000002,
"text": " from this paper. It's a very very cool approach I think to reinforcement"
},
{
"start": 2204.2400000000002,
"end": 2207.16,
"text": " learning and there's some more experiments where you can see that this"
},
{
"start": 2207.16,
"end": 2212.7200000000003,
"text": " DDC actually helps. I'm excited about successor representations and how to"
},
{
"start": 2212.7200000000003,
"end": 2217.36,
"text": " incorporate them in reinforcement learning because it seems a perfect kind"
},
{
"start": 2217.36,
"end": 2222.88,
"text": " of middle ground between model-based and model-free RL. With that"
},
{
"start": 2222.88,
"end": 2227,
"text": " thanks for listening and bye bye!"
}
] |
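The transcript above walks through the discrete successor representation and the wake-sleep training loop in words only. As a rough companion, here is a minimal Python sketch of the tabular successor-representation update it alludes to; the three-state world, learning rate, and variable names are illustrative assumptions, and this is the textbook TD-style update rather than the paper's DDC-based algorithm for partially observed states.

```python
import numpy as np

def td_update_successor(M, s, s_next, gamma=0.95, lr=0.1):
    """One TD-style update of a tabular successor representation.

    M[s, s'] estimates the expected discounted number of future visits to
    state s' when starting from s and following the current policy.
    """
    n_states = M.shape[0]
    one_hot = np.eye(n_states)[s]            # indicator of the current state
    td_target = one_hot + gamma * M[s_next]  # bootstrap from the next state's row
    M[s] += lr * (td_target - M[s])          # move M[s] towards the target
    return M

# Hypothetical three-state world (kitchen, living room, bedroom) and a short
# trajectory of (state, next_state) pairs collected under some policy.
M = np.zeros((3, 3))
for s, s_next in [(0, 1), (1, 2), (2, 2)]:
    M = td_update_successor(M, s, s_next)

# Given a reward vector over states, values follow from one matrix product,
# which is the "middle ground" between model-free and model-based RL that
# the transcript mentions at the end.
r = np.array([0.0, 0.0, 1.0])
V = M @ r
```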
Xc9Rkbg6IZA | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | SinGAN: Learning a Generative Model from a Single Natural Image | [
"Science & Technology"
] | [
"ml",
"ai",
"machine learning",
"artificial ingelligence",
"gan",
"generative",
"image processing",
"deep learning",
"image editing",
"deep dream",
"style transfer",
"convolutional neural networks",
"generative adversarial networks",
"photoshop"
] | With just a single image as an input, this algorithm learns a generative model that matches the input image's patch distribution at multiple scales and resolutions. This enables sampling of extremely realistic looking variations on the original image and much more.
Abstract:
We introduce SinGAN, an unconditional generative model that can be learned from a single natural image. Our model is trained to capture the internal distribution of patches within the image, and is then able to generate high quality, diverse samples that carry the same visual content as the image. SinGAN contains a pyramid of fully convolutional GANs, each responsible for learning the patch distribution at a different scale of the image. This allows generating new samples of arbitrary size and aspect ratio, that have significant variability, yet maintain both the global structure and the fine textures of the training image. In contrast to previous single image GAN schemes, our approach is not limited to texture images, and is not conditional (i.e. it generates samples from noise). User studies confirm that the generated samples are commonly confused to be real images. We illustrate the utility of SinGAN in a wide range of image manipulation tasks.
Authors: Tamar Rott Shaham, Tali Dekel, Tomer Michaeli
https://arxiv.org/abs/1905.01164
https://github.com/tamarott/SinGAN
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there! Today we'll look at SINGAN, Learning a Generative Model from a Single Natural Image by Tamar Rott-Schaum, Tali Dekal and Tomer Mikhaili. So this paper, as it says, it's dealing with learning a generative model from just one image. And this kind of needs to be stressed because most generative models, even if they produce single image samples, they're kind of trained on a large image database beforehand to kind of learn what an image is. But this algorithm really starts out clean-slate, right? The algorithm starts out with nothing and then you give it this one single training image. And from that it can then generate all of these things, without ever having seen any other images during training. And the second row is simply a second example where you start clean-slate, input this image and then produce these. And you can see there's quite a bit of variety in the samples you produce from this image. So basically the task is, if you're just given one image, learn something about the distribution. And this paper specifically deals with patch distributions at different scales. So this could be learn about the distribution of these grass to sky here. So learn about the individual birds and so on. And then at lower scales learn about how the border of this grass looks. So the generative model learns that there's always kind of grass at the bottom, where there's just one image at the largest scale. But then at lower scales sometimes the border looks like a sharp corner and sometimes the border is relatively flat, like here. So it can vary up those things and it can make the border different. Also the birds, it kind of learns how the individual birds look and how they're distributed and therefore it can change that. You see there's quite a bit of variety here. You can also change the aspect ratio and you can actually do much more, much weirder things with it. For example, here are some examples of applications. First there is paint to image. So these are different tasks here. So the top row is always the training image. This is the single image you give the algorithm. And then you have a row of input and then this is what the algorithm outputs. So in paint to image you input a training image and you input a, you can do this in MS Paint or something, kind of the way you want the image to look. So what you want the algorithm to do is take the style of this image and put it into the form of that image and it produces this. Looks pretty good. In editing you can tell the algorithm, alright I want this, I want this tower to go lower down, right? I want this house to be more wide. So you'll get an image like this and you can see there are clear kind of contours here and here that are not nice and also the house is, you know, pixel stretched and so on. So this algorithm, this generative algorithm, can produce this image from it which looks much better here around the borders and kind of fills in missing windows to match of course the patch statistics that it sees in this top image, right? You always have to think that all this algorithm sees is the topmost image to learn from. Harmonization is a task where you have an input image and then you like copy paste some object in it and what it does is it will kind of adjust the patch statistics of that object to the surrounding image. And super resolution, finally, finally we get what every single action movie, just the NSA, can do. It's like, ah here is the security camera footage. Zoom in, enhance. 
Yeah, so I doubt that, you know, hidden number plates here, pixel-ish number plates, all of a sudden can become readable and identifiable but still this is very cool. And lastly you can do animation from this, as you can guess, I guess. It's not a movie. All right, let's look at how they do all of this kind of stuff. All of this is the same model that can be tasked to do these different things through various probing. At its essence it's this multi-scale GAN and the GAN is trained to have a series of generators and a series of discriminators and you always train them one by one. So first you train the lowest resolution and then you keep it fixed and then train the next resolution and so on until you're at the highest resolution. So in each layer, so at the bottom layer, we simply feed in, we simply feed in noise to a generator of a GAN and the generator generates an image. Now you take this image and you take a down sampled version of your training image. Remember you just have one training image. You take a down sampled version of that and you let the discriminator decide which one is real, which one's fake and you train the generator to fool the discriminator as much as possible. Now if you were to do this with the entire image, of course the generator would simply learn to reproduce the original image. So that's no good. So what this paper does more is that the discriminator actually doesn't work on the entire image but just on patches of the image. And that's so that they basically can't memorize the entire image. So the discriminator will pick these patches, these overlapping patches basically. You can imagine it's something like this overlapping patches and it will try to decide for each one is this patch real or is this patch fake? So the generator produces the entire image. This is what the generator produces the entire image but the discriminator can only see the image in patches, in overlapping patches. And that's what makes this paper kind of work. Otherwise they would just remember the single training image because you only have one training image. You kind of need some variety. This is at the lowest scale. Remember you input the noise and the lowest scale in this example is for example 25 by 25 pixel. You scale down your original image here also to 25 by 25 and then you let the discriminator decide. So once you've trained this generator to make very good 25 by 25 pixel images, that in this patch way fool the discriminator. You keep it fixed. For the next stage what you want to do is you always want to go through this layer first. So forget this discriminator now. We've trained this stage. Keep this generator fixed. Input noise, output, whatever the generator produces. Then take this upscale it. For example multiply each side by 2 to 50 by 50 pixels. Input this together with some new noise into the next stage generator. And then the same as before. This generator produces an image. You scale down your original image. You scale it down to now 50 by 50 pixels and you let the discriminator decide again in patches. Since the discriminator patches are always the same size but we scale down the image less and less, the effective patch size of the discriminator becomes much lower. Now this discriminator only sees the image in patches like so. Also the generated image that comes in here. It also sees in these patches and tries to decide are these patches from real or from fake images. 
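Since the paragraph above describes the coarse-to-fine pipeline only verbally, here is a minimal PyTorch-style sketch of how sampling through the trained stack of generators could look. The generator call signature, the 25-by-25 starting resolution, the scale factor of 2, and the noise amplitude are assumptions taken from the spoken example, not the authors' released code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def sample_pyramid(generators, coarsest_hw=(25, 25), scale_factor=2.0, noise_amp=0.1):
    """Sample one image by running noise through the pyramid of trained generators.

    generators[0] works at the coarsest resolution; every later generator
    refines an upsampled copy of the previous output plus fresh noise.
    """
    h, w = coarsest_hw
    z0 = torch.randn(1, 3, h, w)                    # pure noise at the coarsest scale
    img = generators[0](z0, torch.zeros_like(z0))   # no coarser image exists yet

    for gen in generators[1:]:
        h, w = int(h * scale_factor), int(w * scale_factor)
        prev_up = F.interpolate(img, size=(h, w), mode="bilinear", align_corners=False)
        z = noise_amp * torch.randn_like(prev_up)   # fresh noise at this scale
        img = gen(z, prev_up)                       # this scale only adds finer detail
    return img
```

Starting this loop at an intermediate scale with a downscaled copy of the training image, instead of a generated one, gives the "keep the global structure, vary only the fine details" behaviour discussed later in the video.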
You can see that the lowest layer here, this layer, is trained to kind of get the coarse-grained structure of the image. The discriminator will kind of see very large patches. So the generator must match the kind of large-scale structure. These patches won't be very very high resolution because we downscaled the image, but they will be large across the image. So the generator must match the coarse low resolution stuff in the image. But as you go up the layers, up and up the layers, your discriminator sees less and less of the picture at once. It sees less and less of the picture at once. So this discriminator here in the topmost layer can only concentrate on very small patches and therefore this generator will only have to produce things that look real at a very very small scale. So in essence you have this series of generators trained that each one is tasked with basically modeling details at a finer and finer scale until you come to the last final scale. But then each input of each one is the output of the last one. So basically you take whatever the last one has produced and the last one is really good at doing coarser grain things and you add to it your details of this level. And this will in the end give you a very realistic image that matches at every level of resolution, matches the kind of statistics, the patch statistics of this real image. So that's the whole point of this thing. To have this series of generators one after the other, each one adds their own details at its own scale. And this works super well apparently. So each generator is just built like this. It takes some noise and the image of the lower scale, it adds them, sorry for these artifacts, it puts it through five convolutional layers and then simply combines it with the input. And this will produce this image at this scale. That's each layer, it's just five conv layers. And since they're fully convolutional you can actually change the aspect ratio at inference time, you can change the resolution and so on. It seems pretty neat. Of course from experience I can tell you that this probably didn't work at the first try and there is a lot of work even though it seems pretty easy. Keep that in mind. So for training this there are actually two different losses. First of all you have what's called the adversarial loss. And the adversarial loss is your classic GAN loss, where the generator tries to fool the discriminator and the discriminator tries to catch the generator. But then also you have a reconstruction loss. And the reconstruction loss specifically deals at each layer. At each layer you train the generator to reconstruct the original image when you put in a zero noise, except at the lowest layer. But essentially what you want to do is you want to say well when I don't input any noise then please reconstruct the original image. And that seems to be important for the setup to include this noise so that the generative model is basically able to reconstruct the original image as a whole. So these two losses are combined to form the training objective. And again this is not trained on data set. It is trained on a single image. And the productions are pretty cool. So again here are more samples from just the single training images at the left side. And then you have random samples from the single image. You can do things like super resolution, where this picture has been super resoluted to that picture. And I like that they investigate the effects of kind of their setup. 
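The paragraph above says that each scale's generator is just five convolutional layers whose output is added back to the upsampled input, and that training combines an adversarial term with a reconstruction term computed at zero noise (with a fixed noise vector at the very coarsest scale). Below is a minimal sketch of such a per-scale generator and objective; the channel counts, kernel sizes, normalization, and the weighting alpha are guesses for illustration, not the official SinGAN implementation.

```python
import torch
import torch.nn as nn

class ScaleGenerator(nn.Module):
    """One generator of the pyramid: five conv blocks plus a residual connection."""

    def __init__(self, channels=32):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.LeakyReLU(0.2),
            )
        # Five convolutional stages, fully convolutional, so resolution and
        # aspect ratio can be changed freely at inference time.
        self.body = nn.Sequential(
            block(3, channels), block(channels, channels), block(channels, channels),
            block(channels, channels), nn.Conv2d(channels, 3, kernel_size=3, padding=1),
        )

    def forward(self, noise, prev_upsampled):
        # The noise and the upsampled coarser image go through the conv stack and
        # the result is added back, so this scale only models the missing details.
        return self.body(noise + prev_upsampled) + prev_upsampled

def generator_loss(disc, gen, noise, prev_up, real_downscaled, alpha=10.0):
    """Sketch of the per-scale objective: fool the patch discriminator and,
    with the noise zeroed out, reconstruct the downscaled training image."""
    adv = -disc(gen(noise, prev_up)).mean()
    rec = nn.functional.mse_loss(gen(torch.zeros_like(noise), prev_up), real_downscaled)
    return adv + alpha * rec
```

Stacking several such generators and feeding each one the upsampled output of the previous one, as in the sampling sketch above, reproduces the overall pyramid structure described in the transcript.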
So they ask okay what happens if we just have basically two different scales in this scaling setup. Then you see the kind of patch statistics will be very very fine-grained and it won't match any sort of coarse-grained structure. If you have very many scales, the more scales you have better basically. The more different scales you capture. Even more interesting is what if, so at this layer where we have G, G, G, you scale up, scale up, scale up and so on. What you could do is you could not start here, but you say okay scrap this layer. What we actually do is we take the original image and we scale it down and we input that into here instead of inputting the output from the lower layer. So basically you start at let's say the ground truth and that effect is shown here. So if you start at the lowest layer in this particular example you see that sometimes there are weird things. But what you can do is start at a let's say an intermediate layer with the original image and then the variety you get because you kind of keep the coarse-grained structure the same. The variety you get will only be in the right we said there are different layers and but you now eliminate these two layers and replace them with your original image at the scale. So the variety you get will only be from these finer grained lower resolution patches things. So for example as you can see here the zebra samples now differ in how exactly their stripes are manifested. This seems pretty cool. So you have kind of a handle on how fine grained you want your details or your changes to be. They do a bunch of more experiments where you can do a lot of kind of playful things with this thing. There is code available for example here you can see editing again as an example where they compare also with content aware move which I think is implemented in Photoshop and paint harmonization as we saw before. So all of these kind of things are very playful are very cool and I encourage you to check out this paper and the code it seems pretty easy. I have a remark though this again is only learned from a single image and that's the kind of cool part but it should be possible to combine this with some sort of approach over a data set. Like if I have a model that is really good at a single image right producing something that looks like a single image I should be able to combine it with a model that has been learned from a database. It's kind of like a Bayesian approach where you say okay I want to produce the best image so I want to maximize the probability of this image given the other image. But then you can also say aha but that's kind of proportional to j given i times p of i right you know Bayes rule and it seems that this paper is dealing mostly with kind of maximizing the likelihood of the output while you could probably combine it with some sort of prior over natural images and come up with an even better model. Of course then you'd need an actual database of images and training procedure and you need a way to combine these two models. So maybe that's a bit of a challenge. Anyway cool paper check it out bye bye. | [
{
"start": 0,
"end": 6,
"text": " Hi there! Today we'll look at SINGAN, Learning a Generative Model from a Single"
},
{
"start": 6,
"end": 13.96,
"text": " Natural Image by Tamar Rott-Schaum, Tali Dekal and Tomer Mikhaili. So this paper,"
},
{
"start": 13.96,
"end": 19.04,
"text": " as it says, it's dealing with learning a generative model from just one image. And"
},
{
"start": 19.04,
"end": 22.92,
"text": " this kind of needs to be stressed because most generative models, even if"
},
{
"start": 22.92,
"end": 27.28,
"text": " they produce single image samples, they're kind of trained on a large image"
},
{
"start": 27.28,
"end": 32.68,
"text": " database beforehand to kind of learn what an image is. But this"
},
{
"start": 32.68,
"end": 38.2,
"text": " algorithm really starts out clean-slate, right? The algorithm starts out with nothing"
},
{
"start": 38.2,
"end": 44.040000000000006,
"text": " and then you give it this one single training image. And from that it can then"
},
{
"start": 44.040000000000006,
"end": 49.44,
"text": " generate all of these things, without ever having seen any other images"
},
{
"start": 49.44,
"end": 55.120000000000005,
"text": " during training. And the second row is simply a second example where you start"
},
{
"start": 55.12,
"end": 61.519999999999996,
"text": " clean-slate, input this image and then produce these. And you can see there's"
},
{
"start": 61.519999999999996,
"end": 65.16,
"text": " quite a bit of variety in the samples you produce from this image. So basically"
},
{
"start": 65.16,
"end": 71,
"text": " the task is, if you're just given one image, learn something about the"
},
{
"start": 71,
"end": 75.8,
"text": " distribution. And this paper specifically deals with patch distributions at"
},
{
"start": 75.8,
"end": 81.2,
"text": " different scales. So this could be learn about the distribution of these"
},
{
"start": 81.2,
"end": 90.60000000000001,
"text": " grass to sky here. So learn about the individual birds and so on. And then at"
},
{
"start": 90.60000000000001,
"end": 97.28,
"text": " lower scales learn about how the border of this grass looks. So the"
},
{
"start": 97.28,
"end": 102.24000000000001,
"text": " generative model learns that there's always kind of grass at the"
},
{
"start": 102.24000000000001,
"end": 107.36,
"text": " bottom, where there's just one image at the largest scale. But then at lower"
},
{
"start": 107.36,
"end": 114,
"text": " scales sometimes the border looks like a sharp corner and sometimes the"
},
{
"start": 114,
"end": 119.88,
"text": " border is relatively flat, like here. So it can vary up those things and it can"
},
{
"start": 119.88,
"end": 125.8,
"text": " make the border different. Also the birds, it kind of learns how"
},
{
"start": 125.8,
"end": 130.2,
"text": " the individual birds look and how they're distributed and therefore it"
},
{
"start": 130.2,
"end": 135.16,
"text": " can change that. You see there's quite a bit of variety here. You can also change"
},
{
"start": 135.16,
"end": 139.88,
"text": " the aspect ratio and you can actually do much more, much weirder things with it."
},
{
"start": 139.88,
"end": 146.12,
"text": " For example, here are some examples of applications. First there is paint to"
},
{
"start": 146.12,
"end": 151,
"text": " image. So these are different tasks here. So the top row is always the training"
},
{
"start": 151,
"end": 155.96,
"text": " image. This is the single image you give the algorithm. And then you have a row of"
},
{
"start": 155.96,
"end": 160.56,
"text": " input and then this is what the algorithm outputs. So in paint to image"
},
{
"start": 160.56,
"end": 167.08,
"text": " you input a training image and you input a, you can do this in MS Paint or"
},
{
"start": 167.08,
"end": 173.24,
"text": " something, kind of the way you want the image to look. So what you want"
},
{
"start": 173.24,
"end": 178.32,
"text": " the algorithm to do is take the style of this"
},
{
"start": 178.32,
"end": 184.48000000000002,
"text": " image and put it into the form of that image and it produces this. Looks"
},
{
"start": 184.48,
"end": 192.88,
"text": " pretty good. In editing you can tell the algorithm, alright I want this, I want"
},
{
"start": 192.88,
"end": 199.17999999999998,
"text": " this tower to go lower down, right? I want this house to be more wide. So you'll get"
},
{
"start": 199.17999999999998,
"end": 204.35999999999999,
"text": " an image like this and you can see there are clear kind of contours here and here"
},
{
"start": 204.35999999999999,
"end": 210.28,
"text": " that are not nice and also the house is, you know, pixel stretched and so on. So"
},
{
"start": 210.28,
"end": 216.16,
"text": " this algorithm, this generative algorithm, can produce this image from it"
},
{
"start": 216.16,
"end": 220.52,
"text": " which looks much better here around the borders and kind of fills in missing"
},
{
"start": 220.52,
"end": 227.28,
"text": " windows to match of course the patch statistics that it sees in this top"
},
{
"start": 227.28,
"end": 232.36,
"text": " image, right? You always have to think that all this algorithm sees is the"
},
{
"start": 232.36,
"end": 237.52,
"text": " topmost image to learn from. Harmonization is a task where you have"
},
{
"start": 237.52,
"end": 243.76000000000002,
"text": " an input image and then you like copy paste some object in it and what it does"
},
{
"start": 243.76000000000002,
"end": 248.4,
"text": " is it will kind of adjust the patch statistics of that object to the"
},
{
"start": 248.4,
"end": 255.48000000000002,
"text": " surrounding image. And super resolution, finally, finally we get what every single"
},
{
"start": 255.48000000000002,
"end": 262.24,
"text": " action movie, just the NSA, can do. It's like, ah here is the security camera"
},
{
"start": 262.24,
"end": 272.56,
"text": " footage. Zoom in, enhance. Yeah, so I doubt that, you know, hidden"
},
{
"start": 272.56,
"end": 276.12,
"text": " number plates here, pixel-ish number plates, all of a sudden can become"
},
{
"start": 276.12,
"end": 283.2,
"text": " readable and identifiable but still this is very cool. And lastly you can do"
},
{
"start": 283.2,
"end": 292.92,
"text": " animation from this, as you can guess, I guess. It's not a movie."
},
{
"start": 292.92,
"end": 297.48,
"text": " All right, let's look at how they do all of this kind of stuff. All of this is the"
},
{
"start": 297.48,
"end": 301.8,
"text": " same model that can be tasked to do these different things through various"
},
{
"start": 301.8,
"end": 309,
"text": " probing. At its essence it's this multi-scale GAN and the GAN is trained"
},
{
"start": 309,
"end": 314.76,
"text": " to have a series of generators and a series of discriminators and you always"
},
{
"start": 314.76,
"end": 320.32,
"text": " train them one by one. So first you train the lowest resolution and then you keep"
},
{
"start": 320.32,
"end": 323.84,
"text": " it fixed and then train the next resolution and so on until you're at"
},
{
"start": 323.84,
"end": 330.68,
"text": " the highest resolution. So in each layer, so at the bottom layer, we simply feed in,"
},
{
"start": 330.68,
"end": 338.52,
"text": " we simply feed in noise to a generator of a GAN and the generator generates"
},
{
"start": 338.52,
"end": 345.47999999999996,
"text": " an image. Now you take this image and you take a down sampled version of"
},
{
"start": 345.47999999999996,
"end": 349.15999999999997,
"text": " your training image. Remember you just have one training image. You take a"
},
{
"start": 349.15999999999997,
"end": 355.47999999999996,
"text": " down sampled version of that and you let the discriminator decide which one is"
},
{
"start": 355.47999999999996,
"end": 359.64,
"text": " real, which one's fake and you train the generator to fool the discriminator as"
},
{
"start": 359.64,
"end": 363.64,
"text": " much as possible. Now if you were to do this with the entire image, of course the"
},
{
"start": 363.64,
"end": 369.12,
"text": " generator would simply learn to reproduce the original image. So that's"
},
{
"start": 369.12,
"end": 375.44,
"text": " no good. So what this paper does more is that the discriminator"
},
{
"start": 375.44,
"end": 380.8,
"text": " actually doesn't work on the entire image but just on patches of the image."
},
{
"start": 380.8,
"end": 388.8,
"text": " And that's so that they basically can't memorize the"
},
{
"start": 388.8,
"end": 396.36,
"text": " entire image. So the discriminator will pick these patches, these overlapping"
},
{
"start": 396.36,
"end": 400.5,
"text": " patches basically. You can imagine it's something like this overlapping patches"
},
{
"start": 400.5,
"end": 406.8,
"text": " and it will try to decide for each one is this patch real or is this patch fake?"
},
{
"start": 406.8,
"end": 412.76,
"text": " So the generator produces the entire image. This is what the"
},
{
"start": 412.76,
"end": 419.92,
"text": " generator produces the entire image but the discriminator can only see the image"
},
{
"start": 419.92,
"end": 426.4,
"text": " in patches, in overlapping patches. And that's what makes this paper kind of"
},
{
"start": 426.4,
"end": 432.64,
"text": " work. Otherwise they would just remember the single training image"
},
{
"start": 432.64,
"end": 437.88,
"text": " because you only have one training image. You kind of need some variety."
},
{
"start": 437.88,
"end": 445.24,
"text": " This is at the lowest scale. Remember you input the noise and the lowest"
},
{
"start": 445.24,
"end": 451.64,
"text": " scale in this example is for example 25 by 25 pixel. You scale down"
},
{
"start": 451.64,
"end": 456.44,
"text": " your original image here also to 25 by 25 and then you let the discriminator"
},
{
"start": 456.44,
"end": 461.92,
"text": " decide. So once you've trained this generator to make very good"
},
{
"start": 461.92,
"end": 469.64000000000004,
"text": " 25 by 25 pixel images, that in this patch way fool the discriminator. You keep"
},
{
"start": 469.64000000000004,
"end": 474.68,
"text": " it fixed. For the next stage what you want to do is you always want to go"
},
{
"start": 474.68,
"end": 480.8,
"text": " through this layer first. So forget this discriminator now. We've trained"
},
{
"start": 480.8,
"end": 487.28000000000003,
"text": " this stage. Keep this generator fixed. Input noise, output, whatever the"
},
{
"start": 487.28,
"end": 494.32,
"text": " generator produces. Then take this upscale it. For example multiply each"
},
{
"start": 494.32,
"end": 501.64,
"text": " side by 2 to 50 by 50 pixels. Input this together with some new noise into the"
},
{
"start": 501.64,
"end": 506.11999999999995,
"text": " next stage generator. And then the same as before. This generator produces an"
},
{
"start": 506.11999999999995,
"end": 512.4,
"text": " image. You scale down your original image. You scale it down to now 50 by 50"
},
{
"start": 512.4,
"end": 518.76,
"text": " pixels and you let the discriminator decide again in patches. Since the"
},
{
"start": 518.76,
"end": 523.0799999999999,
"text": " discriminator patches are always the same size but we scale down the image"
},
{
"start": 523.0799999999999,
"end": 527.72,
"text": " less and less, the effective patch size of the discriminator becomes much lower."
},
{
"start": 527.72,
"end": 537.36,
"text": " Now this discriminator only sees the image in patches like so. Also the"
},
{
"start": 537.36,
"end": 542.28,
"text": " generated image that comes in here. It also sees in these"
},
{
"start": 542.28,
"end": 549.88,
"text": " patches and tries to decide are these patches from real or from fake images."
},
{
"start": 549.88,
"end": 559.24,
"text": " You can see that the lowest layer here, this layer, is trained to kind of get the"
},
{
"start": 559.24,
"end": 566.9200000000001,
"text": " coarse-grained structure of the image. The discriminator will"
},
{
"start": 566.92,
"end": 573.5999999999999,
"text": " kind of see very large patches. So the generator must match the kind of"
},
{
"start": 573.5999999999999,
"end": 578.52,
"text": " large-scale structure. These patches won't be very very high resolution"
},
{
"start": 578.52,
"end": 582.8399999999999,
"text": " because we downscaled the image, but they will be large across the image. So the"
},
{
"start": 582.8399999999999,
"end": 589.7199999999999,
"text": " generator must match the coarse low resolution stuff in the image. But as you"
},
{
"start": 589.72,
"end": 597.6800000000001,
"text": " go up the layers, up and up the layers, your discriminator sees less and less of"
},
{
"start": 597.6800000000001,
"end": 604.1600000000001,
"text": " the picture at once. It sees less and less of the picture at once."
},
{
"start": 604.1600000000001,
"end": 610.44,
"text": " So this discriminator here in the topmost layer can only concentrate on"
},
{
"start": 610.44,
"end": 616.6,
"text": " very small patches and therefore this generator will only have to produce"
},
{
"start": 616.6,
"end": 625.44,
"text": " things that look real at a very very small scale. So in essence you have"
},
{
"start": 625.44,
"end": 631.6,
"text": " this series of generators trained that each one is tasked with basically"
},
{
"start": 631.6,
"end": 636.8000000000001,
"text": " modeling details at a finer and finer scale until you come to the last final"
},
{
"start": 636.8000000000001,
"end": 642.2,
"text": " scale. But then each input of each one is the output of the last one. So"
},
{
"start": 642.2,
"end": 646.52,
"text": " basically you take whatever the last one has produced and the last one is really"
},
{
"start": 646.52,
"end": 653.36,
"text": " good at doing coarser grain things and you add to it your details of this level."
},
{
"start": 653.36,
"end": 660.12,
"text": " And this will in the end give you a very realistic image that matches at every"
},
{
"start": 660.12,
"end": 666.4399999999999,
"text": " level of resolution, matches the kind of statistics, the patch statistics of this"
},
{
"start": 666.4399999999999,
"end": 674.3199999999999,
"text": " real image. So that's the whole point of this thing. To have"
},
{
"start": 674.32,
"end": 679.2,
"text": " this series of generators one after the other, each one adds their own details"
},
{
"start": 679.2,
"end": 685.5600000000001,
"text": " at its own scale. And this works super well apparently. So each generator is"
},
{
"start": 685.5600000000001,
"end": 690.96,
"text": " just built like this. It takes some noise and the image of the lower"
},
{
"start": 690.96,
"end": 696.7600000000001,
"text": " scale, it adds them, sorry for these artifacts, it puts it through five"
},
{
"start": 696.7600000000001,
"end": 704.2,
"text": " convolutional layers and then simply combines it with the input. And this"
},
{
"start": 704.2,
"end": 711.1600000000001,
"text": " will produce this image at this scale. That's each layer, it's just five"
},
{
"start": 711.1600000000001,
"end": 716.2,
"text": " conv layers. And since they're fully convolutional you can actually change"
},
{
"start": 716.2,
"end": 723.2800000000001,
"text": " the aspect ratio at inference time, you can change the resolution and so on."
},
{
"start": 723.2800000000001,
"end": 731.2,
"text": " It seems pretty neat. Of course from experience I can tell you that this"
},
{
"start": 731.2,
"end": 736.84,
"text": " probably didn't work at the first try and there is a lot of work even though"
},
{
"start": 736.84,
"end": 742.32,
"text": " it seems pretty easy. Keep that in mind. So for training this there are"
},
{
"start": 742.32,
"end": 746.76,
"text": " actually two different losses. First of all you have what's called the"
},
{
"start": 746.76,
"end": 753.12,
"text": " adversarial loss. And the adversarial loss is your classic GAN loss, where"
},
{
"start": 753.12,
"end": 756.84,
"text": " the generator tries to fool the discriminator and the"
},
{
"start": 756.84,
"end": 760.72,
"text": " discriminator tries to catch the generator. But then also you have a"
},
{
"start": 760.72,
"end": 765.76,
"text": " reconstruction loss. And the reconstruction loss specifically deals"
},
{
"start": 765.76,
"end": 775.6,
"text": " at each layer. At each layer you train the generator to reconstruct the"
},
{
"start": 775.6,
"end": 781.1600000000001,
"text": " original image when you put in a zero noise, except at the lowest layer. But"
},
{
"start": 781.1600000000001,
"end": 786.64,
"text": " essentially what you want to do is you want to say well when I don't input"
},
{
"start": 786.64,
"end": 792.48,
"text": " any noise then please reconstruct the original image. And that seems to be"
},
{
"start": 792.48,
"end": 797.76,
"text": " important for the setup to include this noise so that the"
},
{
"start": 797.76,
"end": 804.36,
"text": " generative model is basically able to reconstruct the original image as a whole."
},
{
"start": 804.36,
"end": 809.4399999999999,
"text": " So these two losses are combined to form the training objective. And"
},
{
"start": 809.4399999999999,
"end": 815.84,
"text": " again this is not trained on data set. It is trained on a single image."
},
{
"start": 815.84,
"end": 824.32,
"text": " And the productions are pretty cool. So again here are more samples from just"
},
{
"start": 824.32,
"end": 828.48,
"text": " the single training images at the left side. And then you have random samples"
},
{
"start": 828.48,
"end": 833.0600000000001,
"text": " from the single image. You can do things like super resolution, where this picture"
},
{
"start": 833.0600000000001,
"end": 840.7800000000001,
"text": " has been super resoluted to that picture. And I like that they investigate the"
},
{
"start": 840.7800000000001,
"end": 845.72,
"text": " effects of kind of their setup. So they ask okay what happens if we just have"
},
{
"start": 845.72,
"end": 851.9200000000001,
"text": " basically two different scales in this scaling setup. Then you see"
},
{
"start": 851.9200000000001,
"end": 859.24,
"text": " the kind of patch statistics will be very very fine-grained and it won't match"
},
{
"start": 859.24,
"end": 865.32,
"text": " any sort of coarse-grained structure. If you have very many scales, the"
},
{
"start": 865.32,
"end": 872.52,
"text": " more scales you have better basically. The more different scales you capture."
},
{
"start": 872.52,
"end": 881.56,
"text": " Even more interesting is what if, so at this layer where we have G, G, G,"
},
{
"start": 881.56,
"end": 886.52,
"text": " you scale up, scale up, scale up and so on. What you could do is you could not"
},
{
"start": 886.52,
"end": 892,
"text": " start here, but you say okay scrap this layer. What we actually do is we"
},
{
"start": 892,
"end": 896.92,
"text": " take the original image and we scale it down and we input that into here instead"
},
{
"start": 896.92,
"end": 901.12,
"text": " of inputting the output from the lower layer. So basically you start at let's"
},
{
"start": 901.12,
"end": 908.84,
"text": " say the ground truth and that effect is shown here. So if you"
},
{
"start": 908.84,
"end": 916.84,
"text": " start at the lowest layer in this particular example you see that"
},
{
"start": 916.84,
"end": 923.12,
"text": " sometimes there are weird things. But what you can do is start at a let's say"
},
{
"start": 923.12,
"end": 928.52,
"text": " an intermediate layer with the original image and then the variety you get"
},
{
"start": 928.52,
"end": 932.8,
"text": " because you kind of keep the coarse-grained structure the same. The"
},
{
"start": 932.8,
"end": 936.6,
"text": " variety you get will only be in the right we said there are different"
},
{
"start": 936.6,
"end": 941.52,
"text": " layers and but you now eliminate these two layers and replace them with your"
},
{
"start": 941.52,
"end": 945.68,
"text": " original image at the scale. So the variety you get will only be from these"
},
{
"start": 945.68,
"end": 951.72,
"text": " finer grained lower resolution patches things. So for example as you can see"
},
{
"start": 951.72,
"end": 958.76,
"text": " here the zebra samples now differ in how exactly their stripes are manifested."
},
{
"start": 958.76,
"end": 965.76,
"text": " This seems pretty cool. So you have kind of a handle on how fine"
},
{
"start": 965.76,
"end": 971.48,
"text": " grained you want your details or your changes to be. They do a bunch of"
},
{
"start": 971.48,
"end": 978.36,
"text": " more experiments where you can do a lot of kind of playful things with this"
},
{
"start": 978.36,
"end": 984.8000000000001,
"text": " thing. There is code available for example here you can see editing again"
},
{
"start": 984.8000000000001,
"end": 990.88,
"text": " as an example where they compare also with content aware move which I think is"
},
{
"start": 990.88,
"end": 999.76,
"text": " implemented in Photoshop and paint harmonization as we saw before. So all of"
},
{
"start": 999.76,
"end": 1003.88,
"text": " these kind of things are very playful are very cool and I encourage you to"
},
{
"start": 1003.88,
"end": 1008.6,
"text": " check out this paper and the code it seems pretty easy. I have a remark though"
},
{
"start": 1008.6,
"end": 1013.24,
"text": " this again is only learned from a single image and that's the kind of"
},
{
"start": 1013.24,
"end": 1020.24,
"text": " cool part but it should be possible to combine this with some sort of approach"
},
{
"start": 1020.24,
"end": 1028.42,
"text": " over a data set. Like if I have a model that is really good at a single"
},
{
"start": 1028.42,
"end": 1032.56,
"text": " image right producing something that looks like a single image I should be"
},
{
"start": 1032.56,
"end": 1039.24,
"text": " able to combine it with a model that has been learned from a database."
},
{
"start": 1039.24,
"end": 1043.72,
"text": " It's kind of like a Bayesian approach where you say okay I want to produce"
},
{
"start": 1043.72,
"end": 1052.6799999999998,
"text": " the best image so I want to maximize the probability of this image given the"
},
{
"start": 1052.6799999999998,
"end": 1060.32,
"text": " other image. But then you can also say aha but that's kind of"
},
{
"start": 1060.32,
"end": 1069.6399999999999,
"text": " proportional to j given i times p of i right you know Bayes rule and it seems"
},
{
"start": 1069.6399999999999,
"end": 1075.2,
"text": " that this paper is dealing mostly with kind of maximizing the likelihood of the"
},
{
"start": 1075.2,
"end": 1080.36,
"text": " output while you could probably combine it with some sort of prior over natural"
},
{
"start": 1080.36,
"end": 1086.32,
"text": " images and come up with an even better model. Of course then you'd need an"
},
{
"start": 1086.32,
"end": 1092.1599999999999,
"text": " actual database of images and training procedure and you need a way to combine"
},
{
"start": 1092.1599999999999,
"end": 1096.76,
"text": " these two models. So maybe that's a bit of a challenge. Anyway cool paper check"
},
{
"start": 1096.76,
"end": 1116.92,
"text": " it out bye bye."
}
] |
BTLCdge7uSQ | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning | [
"Science & Technology"
] | [
"ml",
"ai",
"machine learning",
"reinforcement learning",
"deep rl",
"deepmind",
"google",
"starcraft",
"alphastar",
"alphago",
"alphazero",
"value function",
"policy",
"vtrace",
"upgo",
"terran",
"protoss",
"zerg",
"build order",
"strategy",
"pointer network",
"transformer",
"league training",
"league",
"battlenet",
"artificial intelligence",
"bot",
"rl",
"deep reinforcement learning",
"model-free",
"exploiters",
"self-play",
"ficticious self-play",
"rts"
] | DeepMind's new agent to tackle yet another Esport: Starcraft II. This agent uses deep reinforcement learning with a new technique, called League Training, to catapult itself to Grandmaster-level skill at playing this game.
Abstract:
Many real-world applications require artificial agents to compete and coordinate with other agents in complex environments. As a stepping stone to this goal, the domain of StarCraft has emerged as an important challenge for artificial intelligence research, owing to its iconic and enduring status among the most difficult professional esports and its relevance to the real world in terms of its raw complexity and multi-agent challenges. Over the course of a decade and numerous competitions, the strongest agents have simplified important aspects of the game, utilized superhuman capabilities, or employed hand-crafted sub-systems. Despite these advantages, no previous agent has come close to matching the overall skill of top StarCraft players. We chose to address the challenge of StarCraft using general purpose learning methods that are in principle applicable to other complex domains: a multi-agent reinforcement learning algorithm that uses data from both human and agent games within a diverse league of continually adapting strategies and counter-strategies, each represented by deep neural networks. We evaluated our agent, AlphaStar, in the full game of StarCraft II, through a series of online games against human players. AlphaStar was rated at Grandmaster level for all three StarCraft races and above 99.8% of officially ranked human players.
Authors: Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, Junhyuk Oh, Dan Horgan, Manuel Kroiss, Ivo Danihelka, Aja Huang, Laurent Sifre, Trevor Cai, John P. Agapiou, Max Jaderberg, Alexander S. Vezhnevets, Rémi Leblond, Tobias Pohlen, Valentin Dalibard, David Budden, Yury Sulsky, James Molloy, Tom L. Paine, Caglar Gulcehre, Ziyu Wang, Tobias Pfaff, Yuhuai Wu, Roman Ring, Dani Yogatama, Dario Wünsch, Katrina McKinney, Oliver Smith, Tom Schaul, Timothy Lillicrap, Koray Kavukcuoglu, Demis Hassabis, Chris Apps, David Silver
https://www.deepmind.com/blog/article/AlphaStar-Grandmaster-level-in-StarCraft-II-using-multi-agent-reinforcement-learning
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Alright, let's talk about AlphaStar, Grandmaster level in StarCraft 2 using multi-agent reinforcement learning. The corresponding paper looks like this and is by Oriol Vinyals et al. from DeepMind and has been published in the journal of Nature recently. Now let me say this first. Stop publishing in Nature. This is a journal is not open access. It makes its readers pay for getting the article. So actually you can access this article or a public version of it for free but you can't print it, you can't download it unless you pay for it. And this to me, it seems ridiculous because none of this money goes to the authors of the article. None of this money goes to the reviewers. The review quality isn't notably better, at least in the field of computer science. All of this is a publicity stunt by DeepMind because Nature has been kind of impactful in the last decades. It's like, ooh, look at me, I got a big dick I publish in Nature. Nothing more than that. It's like OpenAI saying their model is too dangerous to release to the world. I guess DeepMind might make the same claim about AlphaStar. It's like too dangerous of a StarCraft player. Yeah, so stop this. Publish your research in open access. Nature or journals like these for computer science. It's a remnant of the last century. So go on and join everyone else in distributing knowledge. All right, rant over. Let's jump in into this article. So the article describes how to train a reinforcement learning agent to play the game of StarCraft 2. So StarCraft 2 is this game for everyone who doesn't know. Just very quickly explain the game. StarCraft 2 is a real time strategy game and you're kind of in this top third person view and you control your units and the goal is kind of to move your units around and first of all build up buildings and using those buildings you can then produce more and more diverse units and ultimately you want to kind of produce some sort of army that can go to the opponent and destroy the opponent's base. So you control all of this on a computer using a mouse and a keyboard and StarCraft is notable for being very balanced. So there are three different races you can play. So first are the Terran which are kind of human, human-ish. They have marines and tanks and helicopters I believe and things like this. Then the Protoss are some sort of alien race that are super advanced so they can teleport and have energy shields and things like that. And then last are the Zerg and the Zerg are kind of icky ground dwelling creatures that infect things and spread like a disease. So the interesting thing here is compared to other real-time strategy games is that the three races they play very different. So the game is almost a different game if you play as a different race but they are so well balanced that almost any matchup is kind of a fair game between equally skilled players. So that's makes StarCraft pretty unique. Also pretty unique is the very very high action per minute rates that pro players get. Like they play this insanely fast. So game lasts about 10 to 15 minutes and as I said the goal is to destroy the enemy base. So to train an RL agent to play this is very hard because the action space is very high. You have to target with your mouse part of the screen. You have to look what is on the screen, what can I do. There's this mini map down here. There are things you can do. There are opponents you can target and so on. So all of this is very very very difficult for an RL agent. 
And at the end, after 10 minutes, you play play play play play and after 10 minutes you either win or you lose. And the RL agent has to figure out which of the actions that I did during those 10 minutes right. Was it this one? Was it this one? Which led to me winning or losing? These are very hard problems for reinforcement learning. And DeepMind has combined almost every trick in the book known so far to RL to achieve this. Now the main contribution I'd say here that is novel is what is called league training and we'll get to that. So first of all, if you don't know what reinforcement learning is, reinforcement learning is basically what I just described. You have an input right, which could be this thing here and you have a set of actions that you can do, which the set of actions here is anywhere you can click right, you can click anywhere on the screen. And you have to do this over and over and over and over again until you either win or you lose. And from that you will see you will at the end receive Yeah, you win or you lose and then you have to kind of learn to play the game. So it's machine learning hardcore because you get minimal information and have to achieve a lot of things from it. So the first thing that DeepMind actually does is it does supervised learning. And we'll get into how exactly the model works later. But first thing DeepMind does is it trains an agent to simply imitate humans, right? So you have human data. And from the human data, you so these are games played by humans, good humans, right? Not not people like me. So these these are games played with humans from a significantly high ELO. And the first thing you extract is this Z here. Now Z is is called a statistics vector. And as I understand it, it's mainly the build order, which means in which order do you build your buildings and units and this is very important in StarCraft. This is a strategic decision where you say, okay, first, I'm going to build three worker units. This is like three workers, worker, worker, worker, and then I'm going to build a house and then I'm going to and so on. So these are major strategic decisions that that you kind of have to make with minutes, minutes ahead of time to plan in advance. And this this is kind of stays constant for the game. So this is extracted and provided to the model as an input. So what is the current strategy basically the current overall strategy? The second thing that is extracted is this is at every time step, the observation that the humans had so the screen that humans see, and also the actions that the human did, right? So the human takes its mouse and clicks somewhere, right? This is supposed to be a mouse pointer and clicks here, right? And then the model, this part here, this is the model. And this is the policy function of the model. So the policy decides what to do, right? Is trained to match the action that the human did. So in essence, first, you train an agent to simply imitate humans. And this you can do by supervised learning, right? This is classic machine learning. Each each step you have this input, which is an image, and you have the strategy you're trying to follow. And from these two, you're simply trying to match the action that the human did, assuming the human made a good decision. So this is how you initialize, right? You don't start from scratch. Now I have to say that even though this name is Alpha star, it has surprisingly little to do with Alpha Go or Alpha Zero that DeepMind has done before. 
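Just to make this supervised imitation phase concrete, here is a tiny sketch in PyTorch of what such a behavior-cloning step could look like. This is my own toy version and not DeepMind's code: the sizes, the two-layer network and every variable name below are invented for illustration, and the real observations, statistic z and action space are of course vastly bigger and more structured.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins (assumed sizes, not AlphaStar's): a flattened observation, a
# build-order statistic z, and a small discrete action space.
OBS_DIM, Z_DIM, N_ACTIONS = 128, 32, 10

policy = nn.Sequential(nn.Linear(OBS_DIM + Z_DIM, 256), nn.ReLU(),
                       nn.Linear(256, N_ACTIONS))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

def imitation_step(obs, z, human_action):
    # One supervised step: push the policy towards the action the human took here.
    logits = policy(torch.cat([obs, z], dim=-1))
    loss = F.cross_entropy(logits, human_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Random tensors standing in for one minibatch of human replay data.
obs = torch.randn(16, OBS_DIM)
z = torch.randn(16, Z_DIM)
human_action = torch.randint(0, N_ACTIONS, (16,))
print(imitation_step(obs, z, human_action))

The point is simply that this warm-up phase is plain supervised learning; everything after it is reinforcement learning.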
Mainly this is entirely model free reinforcement learning. And goes more into the direction of classic deep RL. And you can see with the human data, you can already get pretty far. So these down here are the leagues of StarCraft. And this this here are percentiles of players. And you see with the supervised training, you can get almost you can get better than 80 85% of human players already. Right? So pretty, pretty impressive already simply by imitating humans. Now so the the the way to to further improve this, and let's actually go first into how the model looks like. So down here, they describe this model. That's it. So the model is supposed to map from input to output. So from the screen that the agent sees, right, and some other things to what the agent is going to do to an action a. If you simply do this at every time step, then you have a game playing agent. So first, the question is, of course, how does this happen? Now the input isn't only the thing that the agencies which is this the mini map and the mini map? I believe that's the mini map or the entire map. Well, it's it's in essence, it is a picture. It is also a list of entities. So the the game engine extracts a list of entities. And these can be inside the screen here and outside the screen for friendly. So the assumption is the agent knows about all of its units and where they are and what their statistics are. So in this entity thing, for each entity, you have a list of what is its health, what is its type, what is its position, does it carry any items and so on all the things you need to know about this entity. This is in this list of entities. And along with that also opponent entities, but only the ones that are on screen. Right. So all of this goes into this list of entities. And then the next features are scalar features. And as I understand it, scalar features are things like what race are you playing currently? What time is it in the game and so on. So these are additional features. And also baseline features. And this is mainly used to train the value network. And if you this is not going to make sense if you know nothing about reinforcement learning. But one main contribution of this paper is or not contribution, but kind of thing that they claim is that for computing the value network, they also use the observations. So all of this of the opponent player, because you know this during training, because you're doing self play, and you don't need this value network during inference. You can actually do this and this improves performance significantly. Alright so that's just for people who know RL very well. Everyone else don't don't worry too much about these things. Alright so these are the inputs, the scalar features, the entity and the minimap. Each one goes through separate encoders. So the minimap goes through a ResNet which is a convolutional network. And the entities go through a transformer which is kind of a thing to, it's appropriate to encode a set of entities right. Scalar features go through a classic feed forward network MLP. All of these get combined here into a deep LSTM that goes over time. Now the deep LSTM is what really makes the strategy because each time step, each time step a screen like this is input into the into the thing. But the agent also needs to remember what did it do last steps two steps ago right. This is important because you don't have full observability. You need to know what did I do in the in the past. 
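To give a rough feeling for that encoder layout, here is a heavily simplified sketch of my own, with invented sizes, not the actual AlphaStar architecture: the real spatial encoder is a much deeper ResNet, the entity encoder a much larger transformer, and the LSTM core is deep and wide. What it shows is just the wiring: conv net for the minimap, transformer over the entity list, MLP for the scalars, everything merged and fed into an LSTM.

import torch
import torch.nn as nn

class TinyAlphaStarEncoder(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        # Spatial encoder: a small conv net standing in for the ResNet over the minimap.
        self.spatial = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                                     nn.Linear(8 * 4 * 4, d))
        # Entity encoder: a transformer over the set of entity feature vectors.
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.entities = nn.TransformerEncoder(layer, num_layers=2)
        # Scalar encoder: a plain MLP.
        self.scalars = nn.Sequential(nn.Linear(16, d), nn.ReLU())
        # Recurrent core that carries state from one game step to the next.
        self.core = nn.LSTM(input_size=3 * d, hidden_size=d, batch_first=True)

    def forward(self, minimap, entity_feats, scalar_feats, state=None):
        s = self.spatial(minimap)                            # (batch, d)
        e = self.entities(entity_feats).mean(dim=1)          # pool over the entity set
        c = self.scalars(scalar_feats)                       # (batch, d)
        merged = torch.cat([s, e, c], dim=-1).unsqueeze(1)   # one time step
        out, state = self.core(merged, state)
        return out.squeeze(1), state

enc = TinyAlphaStarEncoder()
out, state = enc(torch.randn(2, 3, 64, 64), torch.randn(2, 10, 64), torch.randn(2, 16))
print(out.shape)  # torch.Size([2, 64])

The LSTM state returned here is the memory that gets carried from one game step to the next.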
And that's where the so if the last step you saw this screen and the step before you saw this screen right then all of this would go through these encoding step into the LSTM right. So the LSTM will encode now over time all of these different steps. And so you can kind of say alright if I have just started building a building I should probably not build the same building again even though I can't see it on the screen right. Because I know that three steps ago I did start building a build build a building. So this is kind of the LSTM is basically where you integrate your strategy over time. So from the LSTM you have to make two predictions. You have to make a prediction of what to do. This is the action and how valuable is your current state and how valuable is your current state. This is called the value network. This is a core component of deep reinforcement learning. These two components one is called the policy which would be everything over here and what is called the value network which is called everything over here. These are the things you need to do actor critic learning and actor critic learning is the current state of the art in deep RL. So deep mind does nothing else here except as I said they use these baseline features for the value network. But if you don't know what a value network is don't worry about it. The important part for playing the game is actually the part over here that called the policy. So first you need to do to decide what action you do and that there are many action types in Starcraft as I already said you can build a building you can move a unit you can actually move the camera that's an action type right because you want to maybe see what's over here or over here or over here. So that's an action you can do and if you have decided on what action you want to do you have to decide when do I do it. So you see the action type once you figured it out it goes into the next neural network and that decides okay when do I do it when do I do this action. So it specifies a delay. Then once you've decided what to do and when to do it it goes into the next neural network and that decides should I put this into the queue of actions because the agent here is limited to a certain number of actions per second and I think it's 22 actions per five seconds or something like this so in order to mimic you know human limitations. So there's a queue of actions to be executed and the agent needs to decide do I really want is this action so important to put it into the queue. Alright if you have decided what to do when to do it whether you would like to do it at all right then you have to you have to say it goes into the next neural network and you have to say alright which units do I want to do it with right if you want to build a building you can have to choose one or many workers to do it. 
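Here is a small toy rendering of that chain of heads, just to illustrate the auto-regressive structure; the sizes and names are mine and the real heads are far richer. Each head samples its part of the action, and an embedding of what has already been decided is folded back into the context that the later heads see.

import torch
import torch.nn as nn

D, N_TYPES, N_DELAYS = 64, 6, 5          # made-up sizes for illustration

type_head  = nn.Linear(D, N_TYPES)       # what action to take
type_embed = nn.Embedding(N_TYPES, D)    # embedding of the chosen action type
delay_head = nn.Linear(D, N_DELAYS)      # when to take it
queue_head = nn.Linear(D, 2)             # whether to put it in the action queue

def sample_action(lstm_out):
    ctx = lstm_out
    a_type = torch.distributions.Categorical(logits=type_head(ctx)).sample()
    ctx = ctx + type_embed(a_type)        # later heads condition on what was already decided
    delay = torch.distributions.Categorical(logits=delay_head(ctx)).sample()
    queued = torch.distributions.Categorical(logits=queue_head(ctx)).sample()
    return a_type, delay, queued

print(sample_action(torch.randn(2, D)))

The unit-selection and target heads continue the same chain; for unit selection a pointer-style head over the entity embeddings is used, which is what comes next.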
I don't actually know how StarCraft works in this I'm a bit of a noob but you have to you have to select units with which to do the action for most of the thing and there I like the use of a pointer network here so what a pointer network is is a network that can point to its own inputs it's sort of like an attention network but not really in a pointer network if you have a set of inputs like we have here so entity entity entity entity right all these entities and you can see the entity embedding the entity encoder actually has skip connections that go here right so this network directly gets these these entities as input it can then write you then you have a neural network on top of that neural network that the neural network takes all of these things as an input and what the neural network will output is a pointer to one of these things right you can say look I point to this thing right here this is a called a pointer network and yeah as I said it's different from an attention network which might so an attention network is where you get a distribution actually get a distribution in both cases there is a difference but we don't have to really time to go into it here but in essence with a pointer network you can select which of these entities you want to do something with all right now you've decided on which action when whether to cue it with which unit to do it now you have to decide for some actions for example if the action is attack or heal or something this target unit which unit do you want to target or which which location on the map you want to target this is the target point here and you can see again here are skip connections from the entity encoder and from the spatial encoder to these things and while the target unit is an attention network that's this like much like a pointer network you will kind of point to places in lists the target point is a deconvolution or resnet what that means is so you have this spatial encoder here will embed the mini map so there will be a neural network right here actually let's draw the neural network in this color right here it will give you a an embedding of that right and that's what you what you feed into that's what you feed for example into the LSTM but then what you do is you have a deconvolutional network which again produces a mini map but on this mini map there there's not it's not the original mini map but it's kind of a distribution of locations so it said here here do I want to point all right so the that this neural network is responsible for producing this dot on the mini map basically saying okay I know what to do when to do it with which units to do it and so on I want to do it right here on the mini map okay and now you have it right you go from the input which are these things the mini map the entities and so on to what do I want to do where when with which units and so on right this is called a policy and it's extremely complicated every one of these boxes here is a neural network and you can see it's it's very it's a lot to train and they of course they have a lot of resources since they are deep mind but that's the the main thing all right they have a few tricks to train this and we won't go too much into this but one of the tricks is V trace from the Impala paper one of another trick is up go up going policy update and a third trick is TD lambda learning here and all of these are kind of improvements onto classic actor critic reinforcement learning style like a to see your a3c if you are interested then you can you know 
look into these things so that's how they train it and the question now is what's the protocol for training it we saw okay there is supervised learning cool then there is reinforcement learning all right but you can't just apply and this is in the reinforcement learning this is what we said you get kind of a reward and the reward goes into this TD lambda and V trace and and up going policy update to train the value function and the policy but the special thing that this paper introduces is what's called leak training now in in papers like alpha go or alpha zero what had been done is called self play and self play basically means you have an agent you have an agent right you have this how in a row an agent that's this is supposed to be an artificial intelligence right how to make it artificial okay it has a little hat right a funky hat it's a robot and the robot will play a copy of itself right and the copy it might be slightly different but the it basically these two these two play each other and thereby become better and better and better and you can see this like over time as as the purple one gets better the blue one gets better as well because they they kind of play against each other and when one falls behind right when one falls behind then they simply copy over from the other one they basically copy the other one and then they catch up again right they catch up right and they continue competing so by competing against each other they get better and this is called self play now people have noticed this kind of leads to instabilities because you can get kind of trapped get trapped in cycles like rock paper scissor cycles so what they do is they will actually as they get better so this is the first version right and the second version they are a bit better now so they have bigger hats right and here bigger bigger larger hats right and down here they are even better so they have like ginormous hats but they might have some weaknesses because they only play against each other right so this is the same players but over time what they will do is they will actually play occasionally play old versions of the other player or of themselves right occasionally the new versions will fall back and play old versions or not only the current versions of the agent or old versions of themselves right so this this is called fictitious self play in that you always play the you know not only play the your current kind of opponent or your current self i mean it's the same anyway because you keep copying the weights you also play the old ones and this paper goes a step further and says actually we we do this but we want to prioritize the good ones so for example we know that we know that the current ones are good right but we know that this particular one was also pretty good so far so we are we keep making we keep making these these new ones play against this one more often and this has led to kind of an improvement in these kind of self play algorithms and the real new part of this um alpha star paper is the fact that they do this league training and in the league training they this this is what it looks like but i find this graphic rather confusing i'd rather explain it like something like this all right so there is your current your current strategy and you have a hat right and you do all of the you do all of the all of the i play against myself with the smaller hat thing right i play against past versions of myself fine but then you also do you have what's called exploiters and exploiters an exploiter 
is a let's call it a triangle hat because it's very evil what it does is it specifically targets only the current good agent right so this this agent right here is tasked with playing old versions of itself and playing the exploiter both at the same time but the exploiter is only tasked with playing this thing so um what it can do is it can specialize in exploiting whatever weaknesses this player has of course the hope is that the this player will become better in response because there's a player trying to exploit it right so every and as this as this player becomes better than this player here is reinitialized and tries to find new weaknesses right so as this as this one continues to learn so the exploiters they are initialized you can see this here so these are called the main agents and you can see they play against each other right one of them they play against each other they play against past versions of themselves so these are past versions of themselves but then there are these main exploiters and the main exploiters they're constantly reinitialized from human data right you can see this here they're reinitialized and they only play against these main players right they don't have to deal with any of the past players or playing against themselves stuff they only try to exploit the main players and thereby the main players get better once they get better than an exploiter they are reinitialized so the exploiters are reinitialized to find new exploits of the main agents the third component is what's called a league exploiter and a league exploiter is the following so the league let's the league exploiter here and its hat is a wavy hat and what the league exploiter does is it plays against past versions of itself and others so it does play against the league exploiter sorry with smaller wavy hat it also plays against this thing by the way the this this here also plays against past versions of this and of everything else you can see here the past version arrows it goes against all past players so this this represents all the past players that ever existed and so does the so does the so here but also against past versions of this of this main exploiter here but the important thing is the current main exploiter doesn't play past versions of its of itself right so this also plays this and this place this and this place this and this also place this so the league exploiter they they do take part in this whole league business like playing against past versions of all the players but it only plays against the main ex against the main exploiters and this is a thing that i find missing here honestly i don't know if i don't understand this but i'm pretty sure i do like these also play these and that's an arrow missing in the in the drawing uh the league exploiters play the main agents but the main difference between the league exploiters and the main agents is the league exploiters they don't play themselves right there is no there's no playing themselves on the league exploiters so the league exploiters what they can do is they can find weaknesses of the entire league and kind of train train the by playing against the main opponents using those found weaknesses you bet that the main ex the main agents will get better against those major weaknesses of the entire league right so the main agents first of all they get better by playing the main exploiters because the main exploiters are mainly trying to exploit the main agents the main agents also get better by playing the league exploiters 
because the league exploiters find weaknesses of the entire league right so and the main agents they also get better by playing each So that makes these these main agents kind of... You can say they're trained against everything under the sun, against any possible exploit that can be found either in themselves or generally. And thereby they get really good at StarCraft, because they can counter pretty much everything. So this is how league training works and this is what I feel is the main contribution of this paper to the reinforcement learning world. Now they do an ablation study here. You can see where this ends up. So these final agents here, they end up in Grandmaster level StarCraft and beat 99. some percent of human players. So really really good. They do an ablation study of all of the tricks they use. So this is pretty much all tricks they use. And you can see here this includes this league composition. What happens if we only have main agents, then main exploiters, league exploiters, and you can see the elo going up. Then you can see multi-agent learning. How much does this fictitious self play? The fact that we prioritize to strong players and so on. How much does this help? And you again see the elo going up. How much does it help that we use human data? How much does it help that we use these different networks? They have very good ablation studies of how much each of the things help. Here they investigate what if we didn't have a camera interface? So what if we could see the entire game at once and not only the opponents that are within the camera? And what if we didn't need to move the camera? They investigate the off-policy learning corrections that we mentioned and so on. I find this very cool that they do these huge ablation studies to show really how much each of these tricks that they used helps in generating their superior performance. Here you can see how these agents develop. So over training and they have a massive infrastructure and they train for days. You can see this here. But you can see that the the main agents just get better and better and better and better. While the main exploiters of course they stay the same but they kind of keep getting reinitialized. So this main agent is trained to exploit these these sorry these main exploiters trained to exploit these main agents. This one is trying to exploit these ones. They're not by themselves really good agents but they're simply trained to to find and exploit weaknesses of the main agents. Likewise these league exploiters they do get better with the league but they are only concerned with exploiting current and past versions of the league. Also to make the main agents better. So everything is geared towards making these main agents better. And you can see it actually works. They have some analysis of which units these agents build. I'm not too versed in Starcraft to comment on this. But all in all I find this to be a very cool paper and I find it to be described fairly clear what they do. Though they do not release the source code. They release some kind of pseudo code. But the analysis and the ablations are very good. The results are let's say questionable because of course you can't compare machines to humans especially in a game where you have to make quick actions. Even if you limit the actions, they do this here. So they have this monitoring layer which limits the actions and introduces delay and so on. 
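Since the league is the main new ingredient here, a toy rendering of the matchmaking rules as I understand them from the description above: the three roles are from the paper, but the mixing probabilities, the names and the win-rate table below are all invented for illustration, and the real prioritized fictitious self-play weighting is more involved than this.

import random

# Made-up historical win rates of the current agent against frozen past players.
win_rate = {"old_main_1": 0.9, "old_main_2": 0.7, "old_league_exploiter_1": 0.55}

def prioritized_past_opponent():
    # Prioritized fictitious self-play: the worse we do against a past player,
    # the more often we get matched against it.
    past = list(win_rate)
    weights = [1.0 - win_rate[p] for p in past]
    return random.choices(past, weights=weights, k=1)[0]

def pick_opponent(role, current_mains, current_exploiters):
    if role == "main":
        r = random.random()
        if r < 0.35:
            return random.choice(current_mains)      # self-play against current main agents
        if r < 0.85:
            return prioritized_past_opponent()       # past versions of the whole league
        return random.choice(current_exploiters)     # the agents built to exploit the mains
    if role == "main_exploiter":
        return random.choice(current_mains)          # only tries to crack the current mains
    if role == "league_exploiter":
        return prioritized_past_opponent()           # hunts weaknesses of the league as a whole

mains = ["main_terran", "main_protoss", "main_zerg"]
exploiters = ["main_exploiter_1", "league_exploiter_1"]
print([pick_opponent("main", mains, exploiters) for _ in range(5)])

Anyway, back to the monitoring layer and its action limits.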
But still if it's not the same as a human who might not always be able to do these 22 actions per five seconds. If something quick happens they may need to have some kind of relaxation phase and so on. But they try with these kind of delays and action limits. They try to model these kind of limitations. I find this as fair as possible. This is what I find kind of problematic. So they own units as I said. The agent can also see the ones that are outside the camera. And that seems kind of shady. Because of course you can you can claim humans can do whatever command groups to also control units outside the camera. But it's not really the case. So that's sort of a distinct advantage that the machine has. But yeah in any case I find it to be very well done. And I hope this made it a bit clearer what the exact contributions are. And with that have a fun time playing against AlphaStar. Bye bye. | [
{
"start": 0,
"end": 7.28,
"text": " Alright, let's talk about AlphaStar, Grandmaster level in StarCraft 2 using multi-agent reinforcement"
},
{
"start": 7.28,
"end": 8.36,
"text": " learning."
},
{
"start": 8.36,
"end": 15.3,
"text": " The corresponding paper looks like this and is by Oriol Vinyals et al. from DeepMind and"
},
{
"start": 15.3,
"end": 19.56,
"text": " has been published in the journal of Nature recently."
},
{
"start": 19.56,
"end": 21.84,
"text": " Now let me say this first."
},
{
"start": 21.84,
"end": 24.12,
"text": " Stop publishing in Nature."
},
{
"start": 24.12,
"end": 26.38,
"text": " This is a journal is not open access."
},
{
"start": 26.38,
"end": 29.92,
"text": " It makes its readers pay for getting the article."
},
{
"start": 29.92,
"end": 36.440000000000005,
"text": " So actually you can access this article or a public version of it for free but you can't"
},
{
"start": 36.440000000000005,
"end": 40.760000000000005,
"text": " print it, you can't download it unless you pay for it."
},
{
"start": 40.760000000000005,
"end": 47.84,
"text": " And this to me, it seems ridiculous because none of this money goes to the authors of"
},
{
"start": 47.84,
"end": 48.92,
"text": " the article."
},
{
"start": 48.92,
"end": 50.88,
"text": " None of this money goes to the reviewers."
},
{
"start": 50.88,
"end": 56.160000000000004,
"text": " The review quality isn't notably better, at least in the field of computer science."
},
{
"start": 56.16,
"end": 61.68,
"text": " All of this is a publicity stunt by DeepMind because Nature has been kind of impactful"
},
{
"start": 61.68,
"end": 62.68,
"text": " in the last decades."
},
{
"start": 62.68,
"end": 68.44,
"text": " It's like, ooh, look at me, I got a big dick I publish in Nature."
},
{
"start": 68.44,
"end": 69.6,
"text": " Nothing more than that."
},
{
"start": 69.6,
"end": 74.08,
"text": " It's like OpenAI saying their model is too dangerous to release to the world."
},
{
"start": 74.08,
"end": 78.16,
"text": " I guess DeepMind might make the same claim about AlphaStar."
},
{
"start": 78.16,
"end": 81.36,
"text": " It's like too dangerous of a StarCraft player."
},
{
"start": 81.36,
"end": 85.12,
"text": " Yeah, so stop this."
},
{
"start": 85.12,
"end": 88.64,
"text": " Publish your research in open access."
},
{
"start": 88.64,
"end": 92.04,
"text": " Nature or journals like these for computer science."
},
{
"start": 92.04,
"end": 94.56,
"text": " It's a remnant of the last century."
},
{
"start": 94.56,
"end": 99.72,
"text": " So go on and join everyone else in distributing knowledge."
},
{
"start": 99.72,
"end": 102.4,
"text": " All right, rant over."
},
{
"start": 102.4,
"end": 104.36000000000001,
"text": " Let's jump in into this article."
},
{
"start": 104.36000000000001,
"end": 110.84,
"text": " So the article describes how to train a reinforcement learning agent to play the game of StarCraft"
},
{
"start": 110.84,
"end": 112.04,
"text": " 2."
},
{
"start": 112.04,
"end": 117.04,
"text": " So StarCraft 2 is this game for everyone who doesn't know."
},
{
"start": 117.04,
"end": 118.80000000000001,
"text": " Just very quickly explain the game."
},
{
"start": 118.80000000000001,
"end": 125,
"text": " StarCraft 2 is a real time strategy game and you're kind of in this top third person view"
},
{
"start": 125,
"end": 130.4,
"text": " and you control your units and the goal is kind of to move your units around and first"
},
{
"start": 130.4,
"end": 136,
"text": " of all build up buildings and using those buildings you can then produce more and more"
},
{
"start": 136,
"end": 142,
"text": " diverse units and ultimately you want to kind of produce some sort of army that can go to"
},
{
"start": 142,
"end": 145.76,
"text": " the opponent and destroy the opponent's base."
},
{
"start": 145.76,
"end": 152,
"text": " So you control all of this on a computer using a mouse and a keyboard and StarCraft is notable"
},
{
"start": 152,
"end": 154.16,
"text": " for being very balanced."
},
{
"start": 154.16,
"end": 157.6,
"text": " So there are three different races you can play."
},
{
"start": 157.6,
"end": 164.64,
"text": " So first are the Terran which are kind of human, human-ish."
},
{
"start": 164.64,
"end": 170.64,
"text": " They have marines and tanks and helicopters I believe and things like this."
},
{
"start": 170.64,
"end": 177.83999999999997,
"text": " Then the Protoss are some sort of alien race that are super advanced so they can teleport"
},
{
"start": 177.83999999999997,
"end": 182.11999999999998,
"text": " and have energy shields and things like that."
},
{
"start": 182.11999999999998,
"end": 189.35999999999999,
"text": " And then last are the Zerg and the Zerg are kind of icky ground dwelling creatures that"
},
{
"start": 189.35999999999999,
"end": 195.66,
"text": " infect things and spread like a disease."
},
{
"start": 195.66,
"end": 200.76,
"text": " So the interesting thing here is compared to other real-time strategy games is that"
},
{
"start": 200.76,
"end": 203.96,
"text": " the three races they play very different."
},
{
"start": 203.96,
"end": 209.8,
"text": " So the game is almost a different game if you play as a different race but they are"
},
{
"start": 209.8,
"end": 216.64,
"text": " so well balanced that almost any matchup is kind of a fair game between equally skilled"
},
{
"start": 216.64,
"end": 218.24,
"text": " players."
},
{
"start": 218.24,
"end": 220.5,
"text": " So that's makes StarCraft pretty unique."
},
{
"start": 220.5,
"end": 226.68,
"text": " Also pretty unique is the very very high action per minute rates that pro players get."
},
{
"start": 226.68,
"end": 229.48,
"text": " Like they play this insanely fast."
},
{
"start": 229.48,
"end": 238.08,
"text": " So game lasts about 10 to 15 minutes and as I said the goal is to destroy the enemy base."
},
{
"start": 238.08,
"end": 244.12,
"text": " So to train an RL agent to play this is very hard because the action space is very high."
},
{
"start": 244.12,
"end": 248.72,
"text": " You have to target with your mouse part of the screen."
},
{
"start": 248.72,
"end": 253.72,
"text": " You have to look what is on the screen, what can I do."
},
{
"start": 253.72,
"end": 256,
"text": " There's this mini map down here."
},
{
"start": 256,
"end": 259.32,
"text": " There are things you can do."
},
{
"start": 259.32,
"end": 261.28,
"text": " There are opponents you can target and so on."
},
{
"start": 261.28,
"end": 266.84,
"text": " So all of this is very very very difficult for an RL agent."
},
{
"start": 266.84,
"end": 274.32,
"text": " And at the end, after 10 minutes, you play play play play play and after 10 minutes you"
},
{
"start": 274.32,
"end": 276.96,
"text": " either win or you lose."
},
{
"start": 276.96,
"end": 282.96,
"text": " And the RL agent has to figure out which of the actions that I did during those 10 minutes"
},
{
"start": 282.96,
"end": 283.96,
"text": " right."
},
{
"start": 283.96,
"end": 284.96,
"text": " Was it this one?"
},
{
"start": 284.96,
"end": 285.96,
"text": " Was it this one?"
},
{
"start": 285.96,
"end": 287.76,
"text": " Which led to me winning or losing?"
},
{
"start": 287.76,
"end": 292.15999999999997,
"text": " These are very hard problems for reinforcement learning."
},
{
"start": 292.15999999999997,
"end": 299.64,
"text": " And DeepMind has combined almost every trick in the book known so far to RL to achieve"
},
{
"start": 299.64,
"end": 300.64,
"text": " this."
},
{
"start": 300.64,
"end": 308.03999999999996,
"text": " Now the main contribution I'd say here that is novel is what is called league training"
},
{
"start": 308.03999999999996,
"end": 310.86,
"text": " and we'll get to that."
},
{
"start": 310.86,
"end": 319.4,
"text": " So first of all, if you don't know what reinforcement learning is, reinforcement learning is basically"
},
{
"start": 319.4,
"end": 320.82,
"text": " what I just described."
},
{
"start": 320.82,
"end": 329.12,
"text": " You have an input right, which could be this thing here and you have a set of actions that"
},
{
"start": 329.12,
"end": 333.52,
"text": " you can do, which the set of actions here is anywhere you can click right, you can click"
},
{
"start": 333.52,
"end": 335.64,
"text": " anywhere on the screen."
},
{
"start": 335.64,
"end": 341.16,
"text": " And you have to do this over and over and over and over again until you either win or"
},
{
"start": 341.16,
"end": 342.68,
"text": " you lose."
},
{
"start": 342.68,
"end": 348.12,
"text": " And from that you will see you will at the end receive Yeah, you win or you lose and"
},
{
"start": 348.12,
"end": 350.72,
"text": " then you have to kind of learn to play the game."
},
{
"start": 350.72,
"end": 355.72,
"text": " So it's machine learning hardcore because you get minimal information and have to achieve"
},
{
"start": 355.72,
"end": 357.92,
"text": " a lot of things from it."
},
{
"start": 357.92,
"end": 366.16,
"text": " So the first thing that DeepMind actually does is it does supervised learning."
},
{
"start": 366.16,
"end": 371.86,
"text": " And we'll get into how exactly the model works later."
},
{
"start": 371.86,
"end": 378.08000000000004,
"text": " But first thing DeepMind does is it trains an agent to simply imitate humans, right?"
},
{
"start": 378.08000000000004,
"end": 381.28000000000003,
"text": " So you have human data."
},
{
"start": 381.28000000000003,
"end": 387.52000000000004,
"text": " And from the human data, you so these are games played by humans, good humans, right?"
},
{
"start": 387.52,
"end": 390.71999999999997,
"text": " Not not people like me."
},
{
"start": 390.71999999999997,
"end": 396.96,
"text": " So these these are games played with humans from a significantly high ELO."
},
{
"start": 396.96,
"end": 399.84,
"text": " And the first thing you extract is this Z here."
},
{
"start": 399.84,
"end": 403.44,
"text": " Now Z is is called a statistics vector."
},
{
"start": 403.44,
"end": 409.08,
"text": " And as I understand it, it's mainly the build order, which means in which order do you build"
},
{
"start": 409.08,
"end": 412.4,
"text": " your buildings and units and this is very important in StarCraft."
},
{
"start": 412.4,
"end": 418.76,
"text": " This is a strategic decision where you say, okay, first, I'm going to build three worker"
},
{
"start": 418.76,
"end": 419.76,
"text": " units."
},
{
"start": 419.76,
"end": 424.28,
"text": " This is like three workers, worker, worker, worker, and then I'm going to build a house"
},
{
"start": 424.28,
"end": 426.2,
"text": " and then I'm going to and so on."
},
{
"start": 426.2,
"end": 434.08,
"text": " So these are major strategic decisions that that you kind of have to make with minutes,"
},
{
"start": 434.08,
"end": 438.09999999999997,
"text": " minutes ahead of time to plan in advance."
},
{
"start": 438.1,
"end": 442.58000000000004,
"text": " And this this is kind of stays constant for the game."
},
{
"start": 442.58000000000004,
"end": 446.44,
"text": " So this is extracted and provided to the model as an input."
},
{
"start": 446.44,
"end": 451.28000000000003,
"text": " So what is the current strategy basically the current overall strategy?"
},
{
"start": 451.28000000000003,
"end": 457.72,
"text": " The second thing that is extracted is this is at every time step, the observation that"
},
{
"start": 457.72,
"end": 466.86,
"text": " the humans had so the screen that humans see, and also the actions that the human did, right?"
},
{
"start": 466.86,
"end": 472.96000000000004,
"text": " So the human takes its mouse and clicks somewhere, right?"
},
{
"start": 472.96000000000004,
"end": 477.6,
"text": " This is supposed to be a mouse pointer and clicks here, right?"
},
{
"start": 477.6,
"end": 482.62,
"text": " And then the model, this part here, this is the model."
},
{
"start": 482.62,
"end": 484.64,
"text": " And this is the policy function of the model."
},
{
"start": 484.64,
"end": 488.48,
"text": " So the policy decides what to do, right?"
},
{
"start": 488.48,
"end": 492.64,
"text": " Is trained to match the action that the human did."
},
{
"start": 492.64,
"end": 498.28,
"text": " So in essence, first, you train an agent to simply imitate humans."
},
{
"start": 498.28,
"end": 500.36,
"text": " And this you can do by supervised learning, right?"
},
{
"start": 500.36,
"end": 502.24,
"text": " This is classic machine learning."
},
{
"start": 502.24,
"end": 509.24,
"text": " Each each step you have this input, which is an image, and you have the strategy you're"
},
{
"start": 509.24,
"end": 510.56,
"text": " trying to follow."
},
{
"start": 510.56,
"end": 515.56,
"text": " And from these two, you're simply trying to match the action that the human did, assuming"
},
{
"start": 515.56,
"end": 518.02,
"text": " the human made a good decision."
},
{
"start": 518.02,
"end": 520.68,
"text": " So this is how you initialize, right?"
},
{
"start": 520.68,
"end": 524.04,
"text": " You don't start from scratch."
},
{
"start": 524.04,
"end": 530.76,
"text": " Now I have to say that even though this name is Alpha star, it has surprisingly little"
},
{
"start": 530.76,
"end": 537.4399999999999,
"text": " to do with Alpha Go or Alpha Zero that DeepMind has done before."
},
{
"start": 537.4399999999999,
"end": 542.88,
"text": " Mainly this is entirely model free reinforcement learning."
},
{
"start": 542.88,
"end": 549.56,
"text": " And goes more into the direction of classic deep RL."
},
{
"start": 549.56,
"end": 553.1199999999999,
"text": " And you can see with the human data, you can already get pretty far."
},
{
"start": 553.1199999999999,
"end": 557.3599999999999,
"text": " So these down here are the leagues of StarCraft."
},
{
"start": 557.3599999999999,
"end": 561.68,
"text": " And this this here are percentiles of players."
},
{
"start": 561.68,
"end": 566.0799999999999,
"text": " And you see with the supervised training, you can get almost you can get better than"
},
{
"start": 566.0799999999999,
"end": 569.9599999999999,
"text": " 80 85% of human players already."
},
{
"start": 569.9599999999999,
"end": 571.26,
"text": " Right?"
},
{
"start": 571.26,
"end": 576.52,
"text": " So pretty, pretty impressive already simply by imitating humans."
},
{
"start": 576.52,
"end": 589.52,
"text": " Now so the the the way to to further improve this, and let's actually go first into how"
},
{
"start": 589.52,
"end": 591.88,
"text": " the model looks like."
},
{
"start": 591.88,
"end": 597.52,
"text": " So down here, they describe this model."
},
{
"start": 597.52,
"end": 598.6999999999999,
"text": " That's it."
},
{
"start": 598.6999999999999,
"end": 603.66,
"text": " So the model is supposed to map from input to output."
},
{
"start": 603.66,
"end": 611.92,
"text": " So from the screen that the agent sees, right, and some other things to what the agent is"
},
{
"start": 611.92,
"end": 615.16,
"text": " going to do to an action a."
},
{
"start": 615.16,
"end": 619.68,
"text": " If you simply do this at every time step, then you have a game playing agent."
},
{
"start": 619.68,
"end": 623.9399999999999,
"text": " So first, the question is, of course, how does this happen?"
},
{
"start": 623.9399999999999,
"end": 631.72,
"text": " Now the input isn't only the thing that the agencies which is this the mini map and the"
},
{
"start": 631.72,
"end": 634.12,
"text": " mini map?"
},
{
"start": 634.12,
"end": 637.72,
"text": " I believe that's the mini map or the entire map."
},
{
"start": 637.72,
"end": 641.76,
"text": " Well, it's it's in essence, it is a picture."
},
{
"start": 641.76,
"end": 644.46,
"text": " It is also a list of entities."
},
{
"start": 644.46,
"end": 650.52,
"text": " So the the game engine extracts a list of entities."
},
{
"start": 650.52,
"end": 658.6,
"text": " And these can be inside the screen here and outside the screen for friendly."
},
{
"start": 658.6,
"end": 664.32,
"text": " So the assumption is the agent knows about all of its units and where they are and what"
},
{
"start": 664.32,
"end": 665.84,
"text": " their statistics are."
},
{
"start": 665.84,
"end": 672.12,
"text": " So in this entity thing, for each entity, you have a list of what is its health, what"
},
{
"start": 672.12,
"end": 677.9200000000001,
"text": " is its type, what is its position, does it carry any items and so on all the things you"
},
{
"start": 677.9200000000001,
"end": 679.88,
"text": " need to know about this entity."
},
{
"start": 679.88,
"end": 682,
"text": " This is in this list of entities."
},
{
"start": 682,
"end": 688.96,
"text": " And along with that also opponent entities, but only the ones that are on screen."
},
{
"start": 688.96,
"end": 690.54,
"text": " Right."
},
{
"start": 690.54,
"end": 695.08,
"text": " So all of this goes into this list of entities."
},
{
"start": 695.08,
"end": 697.44,
"text": " And then the next features are scalar features."
},
{
"start": 697.44,
"end": 703.28,
"text": " And as I understand it, scalar features are things like what race are you playing currently?"
},
{
"start": 703.28,
"end": 705.72,
"text": " What time is it in the game and so on."
},
{
"start": 705.72,
"end": 708.56,
"text": " So these are additional features."
},
{
"start": 708.56,
"end": 712.4799999999999,
"text": " And also baseline features."
},
{
"start": 712.4799999999999,
"end": 718.4799999999999,
"text": " And this is mainly used to train the value network."
},
{
"start": 718.4799999999999,
"end": 723.0999999999999,
"text": " And if you this is not going to make sense if you know nothing about reinforcement learning."
},
{
"start": 723.0999999999999,
"end": 730.0799999999999,
"text": " But one main contribution of this paper is or not contribution, but kind of thing that"
},
{
"start": 730.0799999999999,
"end": 735.6999999999999,
"text": " they claim is that for computing the value network, they also use the observations."
},
{
"start": 735.7,
"end": 741.22,
"text": " So all of this of the opponent player, because you know this during training, because you're"
},
{
"start": 741.22,
"end": 746.76,
"text": " doing self play, and you don't need this value network during inference."
},
{
"start": 746.76,
"end": 751.32,
"text": " You can actually do this and this improves performance significantly."
},
{
"start": 751.32,
"end": 759.76,
"text": " Alright so that's just for people who know RL very well."
},
{
"start": 759.76,
"end": 763,
"text": " Everyone else don't don't worry too much about these things."
},
{
"start": 763,
"end": 768.44,
"text": " Alright so these are the inputs, the scalar features, the entity and the minimap."
},
{
"start": 768.44,
"end": 771.04,
"text": " Each one goes through separate encoders."
},
{
"start": 771.04,
"end": 775.72,
"text": " So the minimap goes through a ResNet which is a convolutional network."
},
{
"start": 775.72,
"end": 781.8,
"text": " And the entities go through a transformer which is kind of a thing to, it's appropriate"
},
{
"start": 781.8,
"end": 784.2,
"text": " to encode a set of entities right."
},
{
"start": 784.2,
"end": 788.16,
"text": " Scalar features go through a classic feed forward network MLP."
},
{
"start": 788.16,
"end": 793.28,
"text": " All of these get combined here into a deep LSTM that goes over time."
},
{
"start": 793.28,
"end": 801.0799999999999,
"text": " Now the deep LSTM is what really makes the strategy because each time step, each time"
},
{
"start": 801.0799999999999,
"end": 806.74,
"text": " step a screen like this is input into the into the thing."
},
{
"start": 806.74,
"end": 811.64,
"text": " But the agent also needs to remember what did it do last steps two steps ago right."
},
{
"start": 811.64,
"end": 814.28,
"text": " This is important because you don't have full observability."
},
{
"start": 814.28,
"end": 818.48,
"text": " You need to know what did I do in the in the past."
},
{
"start": 818.48,
"end": 824.6,
"text": " And that's where the so if the last step you saw this screen and the step before you saw"
},
{
"start": 824.6,
"end": 831.12,
"text": " this screen right then all of this would go through these encoding step into the LSTM"
},
{
"start": 831.12,
"end": 832.12,
"text": " right."
},
{
"start": 832.12,
"end": 837.9599999999999,
"text": " So the LSTM will encode now over time all of these different steps."
},
{
"start": 837.9599999999999,
"end": 844.04,
"text": " And so you can kind of say alright if I have just started building a building I should"
},
{
"start": 844.04,
"end": 849.68,
"text": " probably not build the same building again even though I can't see it on the screen right."
},
{
"start": 849.68,
"end": 858.36,
"text": " Because I know that three steps ago I did start building a build build a building."
},
{
"start": 858.36,
"end": 865.4,
"text": " So this is kind of the LSTM is basically where you integrate your strategy over time."
},
{
"start": 865.4,
"end": 869.0799999999999,
"text": " So from the LSTM you have to make two predictions."
},
{
"start": 869.0799999999999,
"end": 873.64,
"text": " You have to make a prediction of what to do."
},
{
"start": 873.64,
"end": 880.88,
"text": " This is the action and how valuable is your current state and how valuable is your current"
},
{
"start": 880.88,
"end": 881.88,
"text": " state."
},
{
"start": 881.88,
"end": 883.72,
"text": " This is called the value network."
},
{
"start": 883.72,
"end": 887.76,
"text": " This is a core component of deep reinforcement learning."
},
{
"start": 887.76,
"end": 891.8,
"text": " These two components one is called the policy which would be everything over here and what"
},
{
"start": 891.8,
"end": 895.72,
"text": " is called the value network which is called everything over here."
},
{
"start": 895.72,
"end": 900.3199999999999,
"text": " These are the things you need to do actor critic learning and actor critic learning"
},
{
"start": 900.3199999999999,
"end": 902.64,
"text": " is the current state of the art in deep RL."
},
{
"start": 902.64,
"end": 907.06,
"text": " So deep mind does nothing else here except as I said they use these baseline features"
},
{
"start": 907.06,
"end": 909.1999999999999,
"text": " for the value network."
},
{
"start": 909.1999999999999,
"end": 912.4399999999999,
"text": " But if you don't know what a value network is don't worry about it."
},
{
"start": 912.4399999999999,
"end": 918.16,
"text": " The important part for playing the game is actually the part over here that called the"
},
{
"start": 918.16,
"end": 919.28,
"text": " policy."
},
{
"start": 919.28,
"end": 924.56,
"text": " So first you need to do to decide what action you do and that there are many action types"
},
{
"start": 924.56,
"end": 929.24,
"text": " in Starcraft as I already said you can build a building you can move a unit you can actually"
},
{
"start": 929.24,
"end": 933.12,
"text": " move the camera that's an action type right because you want to maybe see what's over"
},
{
"start": 933.12,
"end": 935.72,
"text": " here or over here or over here."
},
{
"start": 935.72,
"end": 943.72,
"text": " So that's an action you can do and if you have decided on what action you want to do"
},
{
"start": 943.72,
"end": 946.26,
"text": " you have to decide when do I do it."
},
{
"start": 946.26,
"end": 952.04,
"text": " So you see the action type once you figured it out it goes into the next neural network"
},
{
"start": 952.04,
"end": 957.32,
"text": " and that decides okay when do I do it when do I do this action."
},
{
"start": 957.32,
"end": 960.6800000000001,
"text": " So it specifies a delay."
},
{
"start": 960.6800000000001,
"end": 966.5600000000001,
"text": " Then once you've decided what to do and when to do it it goes into the next neural network"
},
{
"start": 966.5600000000001,
"end": 973.84,
"text": " and that decides should I put this into the queue of actions because the agent here is"
},
{
"start": 973.84,
"end": 980.62,
"text": " limited to a certain number of actions per second and I think it's 22 actions per five"
},
{
"start": 980.62,
"end": 987.6,
"text": " seconds or something like this so in order to mimic you know human limitations."
},
{
"start": 987.6,
"end": 993.36,
"text": " So there's a queue of actions to be executed and the agent needs to decide do I really"
},
{
"start": 993.36,
"end": 997.88,
"text": " want is this action so important to put it into the queue."
},
{
"start": 997.88,
"end": 1004,
"text": " Alright if you have decided what to do when to do it whether you would like to do it at"
},
{
"start": 1004,
"end": 1011.16,
"text": " all right then you have to you have to say it goes into the next neural network and you"
},
{
"start": 1011.16,
"end": 1016.28,
"text": " have to say alright which units do I want to do it with right if you want to build a"
},
{
"start": 1016.28,
"end": 1020.6,
"text": " building you can have to choose one or many workers to do it."
},
{
"start": 1020.6,
"end": 1025.6,
"text": " I don't actually know how StarCraft works in this I'm a bit of a noob but you have to"
},
{
"start": 1025.6,
"end": 1031.12,
"text": " you have to select units with which to do the action for most of the thing and there"
},
{
"start": 1031.12,
"end": 1037.6,
"text": " I like the use of a pointer network here so what a pointer network is is a network that"
},
{
"start": 1037.6,
"end": 1043.08,
"text": " can point to its own inputs it's sort of like an attention network but not really in a pointer"
},
{
"start": 1043.08,
"end": 1048.6,
"text": " network if you have a set of inputs like we have here so entity entity entity entity"
},
{
"start": 1048.6,
"end": 1054.3799999999999,
"text": " right all these entities and you can see the entity embedding the entity encoder actually"
},
{
"start": 1054.38,
"end": 1062.2800000000002,
"text": " has skip connections that go here right so this network directly gets these these entities"
},
{
"start": 1062.2800000000002,
"end": 1072.2,
"text": " as input it can then write you then you have a neural network on top of that neural network"
},
{
"start": 1072.2,
"end": 1081.24,
"text": " that the neural network takes all of these things as an input and what the neural network"
},
{
"start": 1081.24,
"end": 1089.44,
"text": " will output is a pointer to one of these things right you can say look I point to this thing"
},
{
"start": 1089.44,
"end": 1094.76,
"text": " right here this is a called a pointer network and yeah as I said it's different from an"
},
{
"start": 1094.76,
"end": 1104.04,
"text": " attention network which might so an attention network is where you get a distribution actually"
},
{
"start": 1104.04,
"end": 1108.08,
"text": " get a distribution in both cases there is a difference but we don't have to really time"
},
{
"start": 1108.08,
"end": 1115.76,
"text": " to go into it here but in essence with a pointer network you can select which of these entities"
},
{
"start": 1115.76,
"end": 1122.3799999999999,
"text": " you want to do something with all right now you've decided on which action when whether"
},
{
"start": 1122.3799999999999,
"end": 1128.8799999999999,
"text": " to cue it with which unit to do it now you have to decide for some actions for example"
},
{
"start": 1128.8799999999999,
"end": 1134.96,
"text": " if the action is attack or heal or something this target unit which unit do you want to"
},
{
"start": 1134.96,
"end": 1143.64,
"text": " target or which which location on the map you want to target this is the target point"
},
{
"start": 1143.64,
"end": 1150.56,
"text": " here and you can see again here are skip connections from the entity encoder and from the spatial"
},
{
"start": 1150.56,
"end": 1158.16,
"text": " encoder to these things and while the target unit is an attention network that's this like"
},
{
"start": 1158.16,
"end": 1166.68,
"text": " much like a pointer network you will kind of point to places in lists the target point"
},
{
"start": 1166.68,
"end": 1172.9,
"text": " is a deconvolution or resnet what that means is so you have this spatial encoder here will"
},
{
"start": 1172.9,
"end": 1179.0800000000002,
"text": " embed the mini map so there will be a neural network right here actually let's draw the"
},
{
"start": 1179.0800000000002,
"end": 1186.6000000000001,
"text": " neural network in this color right here it will give you a an embedding of that right"
},
{
"start": 1186.6,
"end": 1193.12,
"text": " and that's what you what you feed into that's what you feed for example into the LSTM but"
},
{
"start": 1193.12,
"end": 1202.84,
"text": " then what you do is you have a deconvolutional network which again produces a mini map but"
},
{
"start": 1202.84,
"end": 1208.9599999999998,
"text": " on this mini map there there's not it's not the original mini map but it's kind of a distribution"
},
{
"start": 1208.96,
"end": 1218.48,
"text": " of locations so it said here here do I want to point all right so the that this neural"
},
{
"start": 1218.48,
"end": 1225.4,
"text": " network is responsible for producing this dot on the mini map basically saying okay"
},
{
"start": 1225.4,
"end": 1231.52,
"text": " I know what to do when to do it with which units to do it and so on I want to do it right"
},
{
"start": 1231.52,
"end": 1239.76,
"text": " here on the mini map okay and now you have it right you go from the input which are these"
},
{
"start": 1239.76,
"end": 1245.5,
"text": " things the mini map the entities and so on to what do I want to do where when with which"
},
{
"start": 1245.5,
"end": 1252.68,
"text": " units and so on right this is called a policy and it's extremely complicated every one of"
},
{
"start": 1252.68,
"end": 1259.84,
"text": " these boxes here is a neural network and you can see it's it's very it's a lot to train"
},
{
"start": 1259.84,
"end": 1266.04,
"text": " and they of course they have a lot of resources since they are deep mind but that's the the"
},
{
"start": 1266.04,
"end": 1276.84,
"text": " main thing all right they have a few tricks to train this and we won't go too much into"
},
{
"start": 1276.84,
"end": 1286.72,
"text": " this but one of the tricks is V trace from the Impala paper one of another trick is"
},
{
"start": 1286.72,
"end": 1296.76,
"text": " up go up going policy update and a third trick is TD lambda learning here and all of these"
},
{
"start": 1296.76,
"end": 1302.24,
"text": " are kind of improvements onto classic actor critic reinforcement learning style like a"
},
{
"start": 1302.24,
"end": 1312.88,
"text": " to see your a3c if you are interested then you can you know look into these things so"
},
{
"start": 1312.88,
"end": 1321.24,
"text": " that's how they train it and the question now is what's the protocol for training it"
},
{
"start": 1321.24,
"end": 1327.16,
"text": " we saw okay there is supervised learning cool then there is reinforcement learning all right"
},
{
"start": 1327.16,
"end": 1331.66,
"text": " but you can't just apply and this is in the reinforcement learning this is what we said"
},
{
"start": 1331.66,
"end": 1337.92,
"text": " you get kind of a reward and the reward goes into this TD lambda and V trace and and up"
},
{
"start": 1337.92,
"end": 1346.3200000000002,
"text": " going policy update to train the value function and the policy but the special thing that"
},
{
"start": 1346.3200000000002,
"end": 1352.76,
"text": " this paper introduces is what's called leak training now in in papers like alpha go or"
},
{
"start": 1352.76,
"end": 1359,
"text": " alpha zero what had been done is called self play and self play basically means you have"
},
{
"start": 1359,
"end": 1366.64,
"text": " an agent you have an agent right you have this how in a row an agent that's this is"
},
{
"start": 1366.64,
"end": 1371.5200000000002,
"text": " supposed to be an artificial intelligence right how to make it artificial okay it has"
},
{
"start": 1371.5200000000002,
"end": 1382.5200000000002,
"text": " a little hat right a funky hat it's a robot and the robot will play a copy of itself right"
},
{
"start": 1382.5200000000002,
"end": 1390.5600000000002,
"text": " and the copy it might be slightly different but the it basically these two these two play"
},
{
"start": 1390.5600000000002,
"end": 1395.48,
"text": " each other and thereby become better and better and better and you can see this like over"
},
{
"start": 1395.48,
"end": 1401.84,
"text": " time as as the purple one gets better the blue one gets better as well because they"
},
{
"start": 1401.84,
"end": 1406.8,
"text": " they kind of play against each other and when one falls behind right when one falls behind"
},
{
"start": 1406.8,
"end": 1413.24,
"text": " then they simply copy over from the other one they basically copy the other one and"
},
{
"start": 1413.24,
"end": 1419.44,
"text": " then they catch up again right they catch up right and they continue competing so by"
},
{
"start": 1419.44,
"end": 1426.4,
"text": " competing against each other they get better and this is called self play now people have"
},
{
"start": 1426.4,
"end": 1431.16,
"text": " noticed this kind of leads to instabilities because you can get kind of trapped get trapped"
},
{
"start": 1431.16,
"end": 1439.06,
"text": " in cycles like rock paper scissor cycles so what they do is they will actually as they"
},
{
"start": 1439.06,
"end": 1445.2,
"text": " get better so this is the first version right and the second version they are a bit better"
},
{
"start": 1445.2,
"end": 1457.6000000000001,
"text": " now so they have bigger hats right and here bigger bigger larger hats right and down here"
},
{
"start": 1457.6000000000001,
"end": 1463.44,
"text": " they are even better so they have like ginormous hats but they might have some weaknesses because"
},
{
"start": 1463.44,
"end": 1469.1200000000001,
"text": " they only play against each other right so this is the same players but over time what"
},
{
"start": 1469.12,
"end": 1477.08,
"text": " they will do is they will actually play occasionally play old versions of the other player or of"
},
{
"start": 1477.08,
"end": 1485.36,
"text": " themselves right occasionally the new versions will fall back and play old versions or not"
},
{
"start": 1485.36,
"end": 1491.3999999999999,
"text": " only the current versions of the agent or old versions of themselves right so this this"
},
{
"start": 1491.3999999999999,
"end": 1498.52,
"text": " is called fictitious self play in that you always play the you know not only play the"
},
{
"start": 1498.52,
"end": 1503.4,
"text": " your current kind of opponent or your current self i mean it's the same anyway because you"
},
{
"start": 1503.4,
"end": 1509.2,
"text": " keep copying the weights you also play the old ones and this paper goes a step further"
},
{
"start": 1509.2,
"end": 1518.28,
"text": " and says actually we we do this but we want to prioritize the good ones so for example"
},
{
"start": 1518.28,
"end": 1524.32,
"text": " we know that we know that the current ones are good right but we know that this particular"
},
{
"start": 1524.32,
"end": 1533.4399999999998,
"text": " one was also pretty good so far so we are we keep making we keep making these these"
},
{
"start": 1533.4399999999998,
"end": 1540.8999999999999,
"text": " new ones play against this one more often and this has led to kind of an improvement"
},
{
"start": 1540.8999999999999,
"end": 1547.62,
"text": " in these kind of self play algorithms and the real new part of this um alpha star paper"
},
{
"start": 1547.62,
"end": 1554.4399999999998,
"text": " is the fact that they do this league training and in the league training they this this"
},
{
"start": 1554.4399999999998,
"end": 1559.7199999999998,
"text": " is what it looks like but i find this graphic rather confusing i'd rather explain it like"
},
{
"start": 1559.7199999999998,
"end": 1567.8799999999999,
"text": " something like this all right so there is your current your current strategy and you"
},
{
"start": 1567.88,
"end": 1577.48,
"text": " have a hat right and you do all of the you do all of the all of the i play against myself"
},
{
"start": 1577.48,
"end": 1584,
"text": " with the smaller hat thing right i play against past versions of myself fine but then you"
},
{
"start": 1584,
"end": 1596.88,
"text": " also do you have what's called exploiters and exploiters an exploiter is a let's call"
},
{
"start": 1596.88,
"end": 1603.68,
"text": " it a triangle hat because it's very evil what it does is it specifically targets only the"
},
{
"start": 1603.68,
"end": 1611.48,
"text": " current good agent right so this this agent right here is tasked with playing old versions"
},
{
"start": 1611.48,
"end": 1618.5200000000002,
"text": " of itself and playing the exploiter both at the same time but the exploiter is only"
},
{
"start": 1618.5200000000002,
"end": 1626.5200000000002,
"text": " tasked with playing this thing so um what it can do is it can specialize in exploiting"
},
{
"start": 1626.52,
"end": 1632.4,
"text": " whatever weaknesses this player has of course the hope is that the this player will become"
},
{
"start": 1632.4,
"end": 1639.84,
"text": " better in response because there's a player trying to exploit it right so every and as"
},
{
"start": 1639.84,
"end": 1645.16,
"text": " this as this player becomes better than this player here is reinitialized and tries to"
},
{
"start": 1645.16,
"end": 1651.44,
"text": " find new weaknesses right so as this as this one continues to learn so the exploiters they"
},
{
"start": 1651.44,
"end": 1658.52,
"text": " are initialized you can see this here so these are called the main agents and you can see"
},
{
"start": 1658.52,
"end": 1662.88,
"text": " they play against each other right one of them they play against each other they play"
},
{
"start": 1662.88,
"end": 1670.92,
"text": " against past versions of themselves so these are past versions of themselves but then there"
},
{
"start": 1670.92,
"end": 1675.66,
"text": " are these main exploiters and the main exploiters they're constantly reinitialized from human"
},
{
"start": 1675.66,
"end": 1684.24,
"text": " data right you can see this here they're reinitialized and they only play against these main players"
},
{
"start": 1684.24,
"end": 1688.3200000000002,
"text": " right they don't have to deal with any of the past players or playing against themselves"
},
{
"start": 1688.3200000000002,
"end": 1694.16,
"text": " stuff they only try to exploit the main players and thereby the main players get better once"
},
{
"start": 1694.16,
"end": 1700.6000000000001,
"text": " they get better than an exploiter they are reinitialized so the exploiters are reinitialized"
},
{
"start": 1700.6,
"end": 1706.84,
"text": " to find new exploits of the main agents the third component is what's called a league"
},
{
"start": 1706.84,
"end": 1714.36,
"text": " exploiter and a league exploiter is the following so the league let's the league exploiter here"
},
{
"start": 1714.36,
"end": 1725.28,
"text": " and its hat is a wavy hat and what the league exploiter does is it plays against past versions"
},
{
"start": 1725.28,
"end": 1734,
"text": " of itself and others so it does play against the league exploiter sorry with smaller wavy"
},
{
"start": 1734,
"end": 1742.56,
"text": " hat it also plays against this thing by the way the this this here also plays against"
},
{
"start": 1742.56,
"end": 1748.44,
"text": " past versions of this and of everything else you can see here the past version arrows it"
},
{
"start": 1748.44,
"end": 1753.44,
"text": " goes against all past players so this this represents all the past players that ever"
},
{
"start": 1753.44,
"end": 1762.96,
"text": " existed and so does the so does the so here but also against past versions of this of"
},
{
"start": 1762.96,
"end": 1769.44,
"text": " this main exploiter here but the important thing is the current main exploiter doesn't"
},
{
"start": 1769.44,
"end": 1777.28,
"text": " play past versions of its of itself right so this also plays this and this place this"
},
{
"start": 1777.28,
"end": 1784.16,
"text": " and this place this and this also place this so the league exploiter they they do take"
},
{
"start": 1784.16,
"end": 1793.12,
"text": " part in this whole league business like playing against past versions of all the players but"
},
{
"start": 1793.12,
"end": 1802.3999999999999,
"text": " it only plays against the main ex against the main exploiters and this is a thing that"
},
{
"start": 1802.4,
"end": 1807.52,
"text": " i find missing here honestly i don't know if i don't understand this but i'm pretty sure"
},
{
"start": 1807.52,
"end": 1814.3200000000002,
"text": " i do like these also play these and that's an arrow missing in the in the drawing uh"
},
{
"start": 1814.3200000000002,
"end": 1817.8000000000002,
"text": " the league exploiters play the main agents but the main difference between the league"
},
{
"start": 1817.8000000000002,
"end": 1823.16,
"text": " exploiters and the main agents is the league exploiters they don't play themselves right"
},
{
"start": 1823.16,
"end": 1828.72,
"text": " there is no there's no playing themselves on the league exploiters so the league exploiters"
},
{
"start": 1828.72,
"end": 1837.76,
"text": " what they can do is they can find weaknesses of the entire league and kind of train train"
},
{
"start": 1837.76,
"end": 1843.88,
"text": " the by playing against the main opponents using those found weaknesses you bet that"
},
{
"start": 1843.88,
"end": 1850.88,
"text": " the main ex the main agents will get better against those major weaknesses of the entire"
},
{
"start": 1850.88,
"end": 1858.96,
"text": " league right so the main agents first of all they get better by playing the main exploiters"
},
{
"start": 1858.96,
"end": 1864.24,
"text": " because the main exploiters are mainly trying to exploit the main agents the main agents"
},
{
"start": 1864.24,
"end": 1870.6000000000001,
"text": " also get better by playing the league exploiters because the league exploiters find weaknesses"
},
{
"start": 1870.6000000000001,
"end": 1877.3400000000001,
"text": " of the entire league right so and the main agents they also get better by playing each"
},
{
"start": 1877.34,
"end": 1884.22,
"text": " So that makes these these main agents kind of..."
},
{
"start": 1884.22,
"end": 1888.1399999999999,
"text": " You can say they're trained against everything under the sun,"
},
{
"start": 1888.1399999999999,
"end": 1893.02,
"text": " against any possible exploit that can be found either in themselves or"
},
{
"start": 1893.02,
"end": 1898.4599999999998,
"text": " generally. And thereby they get really good at StarCraft,"
},
{
"start": 1898.4599999999998,
"end": 1902.4599999999998,
"text": " because they can counter pretty much everything. So this is how"
},
{
"start": 1902.4599999999998,
"end": 1906.4599999999998,
"text": " league training works and this is what I feel is the main contribution of this"
},
{
"start": 1906.46,
"end": 1911.18,
"text": " paper to the reinforcement learning world."
},
{
"start": 1911.18,
"end": 1916.22,
"text": " Now they do an ablation study here. You can see"
},
{
"start": 1916.22,
"end": 1920.94,
"text": " where this ends up. So these final agents here,"
},
{
"start": 1920.94,
"end": 1928.7,
"text": " they end up in Grandmaster level StarCraft and beat 99."
},
{
"start": 1928.7,
"end": 1934.94,
"text": " some percent of human players. So really really good."
},
{
"start": 1934.94,
"end": 1939.1000000000001,
"text": " They do an ablation study of all of the tricks they use."
},
{
"start": 1939.1000000000001,
"end": 1942.6200000000001,
"text": " So this is pretty much all tricks they use."
},
{
"start": 1942.6200000000001,
"end": 1949.5,
"text": " And you can see here this includes this league composition."
},
{
"start": 1949.5,
"end": 1953.3400000000001,
"text": " What happens if we only have main agents, then main exploiters, league"
},
{
"start": 1953.3400000000001,
"end": 1959.5,
"text": " exploiters, and you can see the elo going up."
},
{
"start": 1959.5,
"end": 1965.66,
"text": " Then you can see multi-agent learning. How much does this fictitious"
},
{
"start": 1965.66,
"end": 1969.1,
"text": " self play? The fact that we prioritize to strong"
},
{
"start": 1969.1,
"end": 1973.58,
"text": " players and so on. How much does this help? And you again see the elo"
},
{
"start": 1973.58,
"end": 1978.54,
"text": " going up. How much does it help that we use human data?"
},
{
"start": 1978.54,
"end": 1982.46,
"text": " How much does it help that we use these different networks?"
},
{
"start": 1982.46,
"end": 1991.26,
"text": " They have very good ablation studies of how much each of the"
},
{
"start": 1991.26,
"end": 1996.38,
"text": " things help. Here they investigate what if we didn't have a camera"
},
{
"start": 1996.38,
"end": 2002.78,
"text": " interface? So what if we could see the entire game at once and not only"
},
{
"start": 2002.78,
"end": 2005.5,
"text": " the opponents that are within the camera?"
},
{
"start": 2005.5,
"end": 2009.02,
"text": " And what if we didn't need to move the camera?"
},
{
"start": 2009.02,
"end": 2014.22,
"text": " They investigate the off-policy learning corrections that we mentioned"
},
{
"start": 2014.22,
"end": 2018.7,
"text": " and so on. I find this very cool that they do these"
},
{
"start": 2018.7,
"end": 2023.34,
"text": " huge ablation studies to show really how much each of these tricks that they used"
},
{
"start": 2023.34,
"end": 2029.58,
"text": " helps in generating their superior performance."
},
{
"start": 2029.58,
"end": 2033.98,
"text": " Here you can see how these agents develop."
},
{
"start": 2033.98,
"end": 2038.7,
"text": " So over training and they have a massive infrastructure and they train for"
},
{
"start": 2038.7,
"end": 2041.9,
"text": " days. You can see this here. But you can see that the"
},
{
"start": 2041.9,
"end": 2045.3400000000001,
"text": " the main agents just get better and better and better"
},
{
"start": 2045.3400000000001,
"end": 2049.18,
"text": " and better. While the main exploiters of course"
},
{
"start": 2049.18,
"end": 2053.02,
"text": " they stay the same but they kind of keep getting reinitialized."
},
{
"start": 2053.02,
"end": 2058.46,
"text": " So this main agent is trained to exploit these"
},
{
"start": 2058.46,
"end": 2064.86,
"text": " these sorry these main exploiters trained to exploit these main agents."
},
{
"start": 2064.86,
"end": 2068.46,
"text": " This one is trying to exploit these ones. They're not by themselves"
},
{
"start": 2068.46,
"end": 2071.34,
"text": " really good agents but they're simply trained to"
},
{
"start": 2071.34,
"end": 2074.54,
"text": " to find and exploit weaknesses of the main agents."
},
{
"start": 2074.54,
"end": 2078.7,
"text": " Likewise these league exploiters they do get better with the league"
},
{
"start": 2078.7,
"end": 2085.26,
"text": " but they are only concerned with exploiting current and past versions of"
},
{
"start": 2085.26,
"end": 2089.26,
"text": " the league. Also to make the main agents better."
},
{
"start": 2089.26,
"end": 2092.94,
"text": " So everything is geared towards making these main agents better."
},
{
"start": 2092.94,
"end": 2099.34,
"text": " And you can see it actually works."
},
{
"start": 2099.34,
"end": 2105.02,
"text": " They have some analysis of which units these agents build."
},
{
"start": 2105.02,
"end": 2110.06,
"text": " I'm not too versed in Starcraft to comment on this."
},
{
"start": 2110.06,
"end": 2113.66,
"text": " But all in all I find this to be a very cool paper"
},
{
"start": 2113.66,
"end": 2119.5,
"text": " and I find it to be described fairly clear what they do."
},
{
"start": 2119.5,
"end": 2123.66,
"text": " Though they do not release the source code."
},
{
"start": 2123.66,
"end": 2128.78,
"text": " They release some kind of pseudo code. But the analysis and the ablations"
},
{
"start": 2128.78,
"end": 2134.86,
"text": " are very good. The results are let's say questionable because of course"
},
{
"start": 2134.86,
"end": 2140.54,
"text": " you can't compare"
},
{
"start": 2140.54,
"end": 2144.54,
"text": " machines to humans especially in a game where you have to make quick actions."
},
{
"start": 2144.54,
"end": 2148.46,
"text": " Even if you limit the actions, they do this here."
},
{
"start": 2148.46,
"end": 2154.54,
"text": " So they have this monitoring layer which limits the actions and"
},
{
"start": 2154.54,
"end": 2160.78,
"text": " introduces delay and so on. But still if it's not the same as a"
},
{
"start": 2160.78,
"end": 2165.58,
"text": " human who might not always be able to do these 22"
},
{
"start": 2165.58,
"end": 2170.46,
"text": " actions per five seconds. If something quick happens they may"
},
{
"start": 2170.46,
"end": 2174.06,
"text": " need to have some kind of relaxation phase and so on."
},
{
"start": 2174.06,
"end": 2178.14,
"text": " But they try with these kind of delays and action limits. They try to"
},
{
"start": 2178.14,
"end": 2182.54,
"text": " model these kind of limitations."
},
{
"start": 2182.54,
"end": 2187.58,
"text": " I find this as fair as possible."
},
{
"start": 2187.58,
"end": 2191.74,
"text": " This is what I find kind of problematic. So they own units as I said."
},
{
"start": 2191.74,
"end": 2196.06,
"text": " The agent can also see the ones that are outside the camera."
},
{
"start": 2196.06,
"end": 2202.3799999999997,
"text": " And that seems kind of shady. Because of course you can you can claim"
},
{
"start": 2202.3799999999997,
"end": 2205.1,
"text": " humans can do whatever command groups to also"
},
{
"start": 2205.1,
"end": 2211.8199999999997,
"text": " control units outside the camera. But it's not really the case."
},
{
"start": 2211.8199999999997,
"end": 2217.74,
"text": " So that's sort of a distinct advantage that the machine has."
},
{
"start": 2217.74,
"end": 2222.94,
"text": " But yeah in any case I find it to be very well done."
},
{
"start": 2222.94,
"end": 2228.7799999999997,
"text": " And I hope this made it a bit clearer what the exact contributions are."
},
{
"start": 2228.78,
"end": 2235.5,
"text": " And with that have a fun time playing against AlphaStar."
},
{
"start": 2235.5,
"end": 2263.02,
"text": " Bye bye."
}
] |
kOy49NqZeqI | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures | [
"Science & Technology"
] | [
"machine learning",
"ml",
"ai",
"artificial intellgence",
"deepmind",
"reinforcement learning",
"deep rl",
"a2c",
"a3c",
"actor",
"critic",
"distributed",
"scale",
"bias",
"off-policy",
"policy gradient",
"deepmind lab",
"vtrace"
] | Policy Gradient RL on a massively distributed scale with theoretical guarantees!
Abstract:
In this work we aim to solve a large collection of tasks using a single reinforcement learning agent with a single set of parameters. A key challenge is to handle the increased amount of data and extended training time. We have developed a new distributed agent IMPALA (Importance Weighted Actor-Learner Architecture) that not only uses resources more efficiently in single-machine training but also scales to thousands of machines without sacrificing data efficiency or resource utilisation. We achieve stable learning at high throughput by combining decoupled acting and learning with a novel off-policy correction method called V-trace. We demonstrate the effectiveness of IMPALA for multi-task reinforcement learning on DMLab-30 (a set of 30 tasks from the DeepMind Lab environment (Beattie et al., 2016)) and Atari-57 (all available Atari games in Arcade Learning Environment (Bellemare et al., 2013a)). Our results show that IMPALA is able to achieve better performance than previous agents with less data, and crucially exhibits positive transfer between tasks as a result of its multi-task approach.
Authors: Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, Koray Kavukcuoglu
https://arxiv.org/abs/1802.01561
https://github.com/deepmind/scalable_agent | Hi there! Today we're looking at Impala, scalable distributed deep RL with importance-weighted actor learner architectures by Lasse Espejolt, Hubert Sawyer, Remy Munoz and Al. So this paper deals with a new architecture for deep reinforcement learning, specifically distributed deep reinforcement learning. So that means settings where you go beyond one single machine or beyond one single accelerator like a GPU. So I want to introduce this by showing you this task here. This is called the DeepMind lab and the DeepMind lab is a kind of a 3D environment as you can see here. These are screenshots where they're very different goals but some of this as you can see are kind of labyrinth style things where you have to collect apples, some are platformers where you I guess have to jump around and so on or find objects. So the DeepMind introduced this as kind of a as an reinforcement learning environment and what you can do the agent as you can see here has a camera it perceives pixels and it can get rewards for performing actions. The actions it can perform is it can you know walk back and forth, it can jump, it can crouch, it can rotate. So this is kind of a limited set of actions that it can do but it can move around in this 3D world and it needs to achieve some goals. So that usually this is kind of a good setting for reinforcement learning and this paper doesn't do a whole lot of new things in terms of reinforcement learning but it does a lot of things to kind of make it work in a distributed setting. So usually what you would like to do is something like A2C. A2C is advantage actor critic learning and it's a very successful algorithm in reinforcement learning. We won't go into this much here but basic elements of it is you have are two things you have a policy and usually this is called PI, sorry about that, usually this is called PI policy that you input your current state so your current observation at time t and you want to score an action right action A. Now you might have maybe as we saw before you can walk left walk right and so on so you might have ten actions or so. So in here you would put action one or action two or action three and for this you would get probability distributions over each action so maybe in this particular state so each time with the same state. So you would get a distribution something like this right so here you should probably go with action three. That's your policy function. Policy function PI tells you in this particular state which action should you take how often kind of gives you distribution. The second thing you want is a what's called a value function so the value function V, capital V usually, you input your state and it will output it will output what the value is of that state and that's usually termed kind of as a lowercase V. The value of the state is given if you're in a maze right I'm gonna draw maze from the top here right you can't reach there like here so here is the goal and let's say you are oops you're right here the green right and you have the choice of going forward to the right or to the left. Now this would be your policy here. You would ask your policy and A1 would maybe be go forward A2 go to the left A3 to the right so your policy would decide what to do. 
Your value function however would decide in each of the states so where you are plus where you could go here here here so basically for each state in the system it would give you a value in particular in this case it would probably give you a very very high value here like yeah this is a good point because you're very close to the goal right here this is probably not so good a point and this is a very bad point because you're you're going to corner you're actually moving farther away from the goal so if your value function is trained well then you can you can use that also to assess your situation so the value function for each state s it will give you a numerical value of how good that state is in terms of reaching your goal and the A2C algorithm now deals with the interplay of these the A2C uses actually both of these in an interplay so it will use one to teach the other one right and this interplay between those gives makes for a very successful reinforcement learning algorithm now the way A2C does it is as you can see here what it does is it has to there are two variants here think synced step and synced trajectories but in essence it has to run these episodes and these here are steps in the episodes and let's say an episode as four steps before it can do the learning part and the learning part is here the orange thing once it has done a step of learning it has to run episodes again and then it has can do learning again and that's because of a limitation of this which is called on policy learning so on in on policy learning you always want to have your update step which is the orange part to be fed with data so the this all of these app all of these steps here go into this update steps and it's necessary that the steps that you make the updates from are computed with kind of the most current version of the of the agent right so that the agent will go into the world make some steps using its neural network maybe I should explain so that the agent right is this box and the agent has this policy right and with this policy as we saw it will go and it will interact with the world right outside of itself and it will kind of the world will give back observations and it will then interact again so you can move a step forward right first first thing is move the step step forward and then the world gives it back a high you are now no longer here you've moved here right and then it's on I want to move to the left and the world says okay so you're no longer here you've moved one to the left and this on the right here are the observations and is on the left here are the actions and for the a to see is kind of necessary that we always have a current version of the policy generating these steps in order to be able to learn from them and then the next steps also need to be kind of current to be learned now there have been attempts to decentralize this and is exactly what impala does impala splits this into multiple workers you can think of this as different machines so there is a split here and these are called actors and this is called a learner now the actors they will go ahead and they will run episodes on their own right occasionally or they will run episodes and they will communicate those episodes to the learner and the learner will continuously here learn so these orange steps can be made in much more quick success succession and don't have to be like synchronized as in the a to see here is another way of seeing this over here and we'll just concentrate on this on this left thing here so there 
is a learner and there are actors and every now and then the actor sinks its model from the learner these are different machines so this can happen over the network every now and then the actor gets like an update of the newest policy network and then the actor will just go ahead and run episodes using that policy right episode episode episode episode step steps without interfering with anything else and then once it has run an episode or multiple ones it will communicate this back to the learner and if all the actors do this all right the learner gets a whole bunch of these episodes and then can learn from all of them simultaneously and it can do so in kind of with in in kind of very fast succession as you see here so the work is split of course you run into a problem namely as we saw in the a to see algorithm this type of reinforcement learning requires basically that you always run the episode with the current model and that's not the case here right the actor may sink the parameters they sink the parameters once in a while but then it will run these episodes right when it runs these episodes here it has no idea or it the learner in the meantime has continually been updating the model while the actor kind of has an old model so these episodes here are run with an old model so the learner if it tries to learn from this must kind of correct for this fact and the big kind of theoretical contribution of this paper is how to correct for the fact that the data you learn from comes from an outdated policy model and this is what's called V trace correction so without going too much into the into the details here V trace correction happens as as fall it happens as follows so what you define are what's called V trace targets and these V trace targets are basically the targets that you train your value function towards right so the the the value function as we discussed before that is a that is the thing that tells you how good each state is and the targets you train this towards and you're also by the way using this V V trace corrections in policy updates but these are defined as follows so the V trace target for step s is the value function at step s plus this correction thing and the the correction thing basically well I've I want to break this down some more so the V at current s is your value function plus and this is a sum over all future steps over and this is a discount factor and this is kind of a delta from one step to the next so you're in an episode and you've made some steps right and let's say we are here right this is s and so your your little V s will be whatever your value function says of s plus kind of a correction for each step that you make go into the future like this and the main part of these is is this here which is basically the reward at the step plus the difference of the value functions of the steps after it and what V trace introduces now is this bit here and these CI again are computed as such so all of this kind of is very nested so there is a there's a big multiplication here it's a very nested thing but in the very very very core of it you can see the following these V trace corrections are a ratio between pi and mu and pi is the policy of the learner that is the current policy and mu is the policy that has been used to generate the to generate the episode and this is truncated by a minimum and usually the C bar is one so let's consider what happens here what happens is let's say that mu is higher than pi for a given pair of AI index a what does it mean it means 
that in the past you run an episode you come you are in this maze right such to them and you're here right now the and the goal let's say the goal is down here and the action is going over here that does the action that you're considering here now your mu which is your old policy that the actor has synced at some point mu might say this is very good right because it moves you towards the goal more but then your your pie the learner has been learning since the eight since the agent the actor has synchronized the weights the learner has been learning and the learner might know wait wait since you have decided this I have actually learned that this might not be such a good move because you know there's a wall here and I'd rather go down here and then over here so what it will do it will since pi is low and mu is higher it will down weigh this action and this is how you correct for the fact that there are old weights by basically down weighing wherever the old policy thought of an action as being worth more than the new policy does and this is how you make up for the fact that the new policy you assume it knows better because it has learned more and thereby you you give lower weight to the data points where the the policies have diverged a lot so that's at the core of it and you can think of in terms of here you can think of it as maybe here at this step you're at a point where the old policy that the actor has has updated itself to says we should do action one right but the new policy that the learner has in the meantime has learned more says now we should do action two and if this is the case then this whole rest of the episode is down weight because it is no longer current knowledge right and this is not just kind of a heuristic but they actually do prove that this this this comes with some guarantees especially reduces to kind of the classic reinforcement algorithms if you assume that mu is always pi so that current policy is the old policy and therefore you're in the old setting alright so this was a bit of a lengthy explanation of the math behind it and at the end what you do is following you train your value function using this update and you can see here it's simply the gradient of the value function scaled by the thing that contains this V trace target right you then you update your policy in this direction and this is the classic reinforcement learning reinforce style policy update where here you have the gradient of the of the policy and here you have the weighing by the reward and specifically here it is the reward plus this V trace target and this thing here is a bias correction or a bias reducing sorry variance reducing bias that was terrible the final form is what's called an entropy penalty where you want to push the entropy of your policy up such that the agent kind of is biased towards exploring more than exploiting if you know of the classic exploration exploitation dilemma so that's that's what you do compute these V trace targets update your value and policy according to these equations and there you go so what do what does Impala do specifically in this deep mind lab they have two architectures first of all they have this they have this small architecture second they have this large architecture and they just kind of try it out on these and they measure how many frames per second they can get in and you see here compared to on single machine compared to a 3c they bring in a lot more frames per second this is just on a single machine but then on distributed setting the 
scale up also is very significant that they reach that's because they don't have to wait for other things they can just go ahead everything runs at full speed basically and everything runs in parallel and the fact that that some of the information is old is corrected by V trace and the last thing I want to show is the wall clock time I think this is the important plot in this deep mind lab on over all the tasks the wall clock time compared to the score you can see a 3c while it does you know increase over time the Impala variants up here increase in much much faster wall clock time so that's the that's the paper they have a lot of proofs in the appendix which I'm not gonna go over if you want to give it a try then it is it is not called Impala on github it is called I think scalable agent so on github it is called scalable agent I think but you'll find it if you if you search for Impala github or something like this yeah other than that thanks for listening and see you next time | [
{
"start": 0,
"end": 5.88,
"text": " Hi there! Today we're looking at Impala, scalable distributed deep RL with"
},
{
"start": 5.88,
"end": 11.48,
"text": " importance-weighted actor learner architectures by Lasse Espejolt, Hubert"
},
{
"start": 11.48,
"end": 18.48,
"text": " Sawyer, Remy Munoz and Al. So this paper deals with a new architecture for deep"
},
{
"start": 18.48,
"end": 23.44,
"text": " reinforcement learning, specifically distributed deep reinforcement learning."
},
{
"start": 23.44,
"end": 29.88,
"text": " So that means settings where you go beyond one single machine or beyond one"
},
{
"start": 29.88,
"end": 35.4,
"text": " single accelerator like a GPU. So I want to introduce this by showing you this"
},
{
"start": 35.4,
"end": 41.6,
"text": " task here. This is called the DeepMind lab and the DeepMind lab is a kind of a"
},
{
"start": 41.6,
"end": 47.44,
"text": " 3D environment as you can see here. These are screenshots where they're very"
},
{
"start": 47.44,
"end": 51.239999999999995,
"text": " different goals but some of this as you can see are kind of labyrinth style"
},
{
"start": 51.239999999999995,
"end": 56.96,
"text": " things where you have to collect apples, some are platformers where you I guess"
},
{
"start": 56.96,
"end": 62.120000000000005,
"text": " have to jump around and so on or find objects. So the DeepMind introduced this"
},
{
"start": 62.120000000000005,
"end": 69.76,
"text": " as kind of a as an reinforcement learning environment and what you can do"
},
{
"start": 69.76,
"end": 75.76,
"text": " the agent as you can see here has a camera it perceives pixels and it can"
},
{
"start": 75.76,
"end": 81.6,
"text": " get rewards for performing actions. The actions it can perform is it can you"
},
{
"start": 81.6,
"end": 86.92,
"text": " know walk back and forth, it can jump, it can crouch, it can rotate. So this is"
},
{
"start": 86.92,
"end": 91.12,
"text": " kind of a limited set of actions that it can do but it can move around in this"
},
{
"start": 91.12,
"end": 97.28,
"text": " 3D world and it needs to achieve some goals. So that usually this is"
},
{
"start": 97.28,
"end": 103.96000000000001,
"text": " kind of a good setting for reinforcement learning and this paper doesn't"
},
{
"start": 103.96000000000001,
"end": 109.04,
"text": " do a whole lot of new things in terms of reinforcement learning but it does a lot"
},
{
"start": 109.04,
"end": 115.88,
"text": " of things to kind of make it work in a distributed setting. So usually what you"
},
{
"start": 115.88,
"end": 121.96,
"text": " would like to do is something like A2C. A2C is advantage actor critic learning"
},
{
"start": 121.96,
"end": 128.04,
"text": " and it's a very successful algorithm in reinforcement learning. We won't go into"
},
{
"start": 128.04,
"end": 135.2,
"text": " this much here but basic elements of it is you have are two things you have"
},
{
"start": 135.2,
"end": 141.07999999999998,
"text": " a policy and usually this is called PI, sorry about that, usually this is called"
},
{
"start": 141.08,
"end": 146.36,
"text": " PI policy that you input your current state so your current observation at"
},
{
"start": 146.36,
"end": 156.52,
"text": " time t and you want to score an action right action A. Now you might have maybe"
},
{
"start": 156.52,
"end": 160.8,
"text": " as we saw before you can walk left walk right and so on so you might have ten"
},
{
"start": 160.8,
"end": 169.52,
"text": " actions or so. So in here you would put action one or action two or action three"
},
{
"start": 169.52,
"end": 174.84,
"text": " and for this you would get probability distributions over each action so maybe"
},
{
"start": 174.84,
"end": 182.24,
"text": " in this particular state so each time with the same state. So you would get a"
},
{
"start": 182.24,
"end": 188.76000000000002,
"text": " distribution something like this right so here you should probably go with"
},
{
"start": 188.76000000000002,
"end": 195.44,
"text": " action three. That's your policy function. Policy function PI tells you in this"
},
{
"start": 195.44,
"end": 200.44,
"text": " particular state which action should you take how often kind of gives you"
},
{
"start": 200.44,
"end": 206.35999999999999,
"text": " distribution. The second thing you want is a what's called a value function so"
},
{
"start": 206.35999999999999,
"end": 212.72,
"text": " the value function V, capital V usually, you input your state and it will output"
},
{
"start": 212.72,
"end": 220.4,
"text": " it will output what the value is of that state and that's usually termed kind of"
},
{
"start": 220.4,
"end": 227.76000000000002,
"text": " as a lowercase V. The value of the state is given if you're in a maze right I'm"
},
{
"start": 227.76000000000002,
"end": 236.08,
"text": " gonna draw maze from the top here right you can't reach there like here so here"
},
{
"start": 236.08,
"end": 245.04000000000002,
"text": " is the goal and let's say you are oops you're right here the green right and"
},
{
"start": 245.04,
"end": 252.2,
"text": " you have the choice of going forward to the right or to the left. Now this"
},
{
"start": 252.2,
"end": 257.84,
"text": " would be your policy here. You would ask your policy and A1"
},
{
"start": 257.84,
"end": 264.12,
"text": " would maybe be go forward A2 go to the left A3 to the right so your policy"
},
{
"start": 264.12,
"end": 270.12,
"text": " would decide what to do. Your value function however would decide in each of"
},
{
"start": 270.12,
"end": 275.04,
"text": " the states so where you are plus where you could go here here here so basically"
},
{
"start": 275.04,
"end": 279.8,
"text": " for each state in the system it would give you a value in particular in this"
},
{
"start": 279.8,
"end": 285.88,
"text": " case it would probably give you a very very high value here like yeah this is"
},
{
"start": 285.88,
"end": 290.64,
"text": " a good point because you're very close to the goal right here this is probably"
},
{
"start": 290.64,
"end": 296.16,
"text": " not so good a point and this is a very bad point because you're you're going to"
},
{
"start": 296.16,
"end": 300.66,
"text": " corner you're actually moving farther away from the goal so if your value"
},
{
"start": 300.66,
"end": 306.68,
"text": " function is trained well then you can you can use that also to assess your"
},
{
"start": 306.68,
"end": 313.04,
"text": " situation so the value function for each state s it will give you a numerical"
},
{
"start": 313.04,
"end": 320.20000000000005,
"text": " value of how good that state is in terms of reaching your goal and the A2C"
},
{
"start": 320.2,
"end": 326.47999999999996,
"text": " algorithm now deals with the interplay of these the A2C uses actually both of"
},
{
"start": 326.47999999999996,
"end": 334.52,
"text": " these in an interplay so it will use one to teach the other one right and this"
},
{
"start": 334.52,
"end": 340,
"text": " interplay between those gives makes for a very successful reinforcement learning"
},
{
"start": 340,
"end": 347.2,
"text": " algorithm now the way A2C does it is as you can see here what it does is it has"
},
{
"start": 347.2,
"end": 352.64,
"text": " to there are two variants here think synced step and synced trajectories but"
},
{
"start": 352.64,
"end": 358.24,
"text": " in essence it has to run these episodes and these here are steps in the episodes"
},
{
"start": 358.24,
"end": 363.4,
"text": " and let's say an episode as four steps before it can do the learning part and"
},
{
"start": 363.4,
"end": 367.48,
"text": " the learning part is here the orange thing once it has done a step of"
},
{
"start": 367.48,
"end": 372.91999999999996,
"text": " learning it has to run episodes again and then it has can do learning again"
},
{
"start": 372.92,
"end": 377.96000000000004,
"text": " and that's because of a limitation of this which is called on policy learning"
},
{
"start": 377.96000000000004,
"end": 384.36,
"text": " so on in on policy learning you always want to have your update step which is"
},
{
"start": 384.36,
"end": 390.88,
"text": " the orange part to be fed with data so the this all of these app all of these"
},
{
"start": 390.88,
"end": 396.24,
"text": " steps here go into this update steps and it's necessary that the steps that you"
},
{
"start": 396.24,
"end": 402.84000000000003,
"text": " make the updates from are computed with kind of the most current version of the"
},
{
"start": 402.84,
"end": 407.71999999999997,
"text": " of the agent right so that the agent will go into the world make some steps"
},
{
"start": 407.71999999999997,
"end": 413.67999999999995,
"text": " using its neural network maybe I should explain so that the agent right is this"
},
{
"start": 413.67999999999995,
"end": 420.12,
"text": " box and the agent has this policy right and with this policy as we saw it will"
},
{
"start": 420.12,
"end": 425.64,
"text": " go and it will interact with the world right outside of itself and it will kind"
},
{
"start": 425.64,
"end": 430.55999999999995,
"text": " of the world will give back observations and it will then interact again so you"
},
{
"start": 430.56,
"end": 435.6,
"text": " can move a step forward right first first thing is move the step step forward"
},
{
"start": 435.6,
"end": 441.12,
"text": " and then the world gives it back a high you are now no longer here you've moved"
},
{
"start": 441.12,
"end": 445.56,
"text": " here right and then it's on I want to move to the left and the world says okay"
},
{
"start": 445.56,
"end": 450.72,
"text": " so you're no longer here you've moved one to the left and this on the right"
},
{
"start": 450.72,
"end": 455.76,
"text": " here are the observations and is on the left here are the actions and for the a"
},
{
"start": 455.76,
"end": 459.64,
"text": " to see is kind of necessary that we always have a current version of the"
},
{
"start": 459.64,
"end": 466.44,
"text": " policy generating these steps in order to be able to learn from them and then"
},
{
"start": 466.44,
"end": 472.15999999999997,
"text": " the next steps also need to be kind of current to be learned now there have"
},
{
"start": 472.15999999999997,
"end": 478.59999999999997,
"text": " been attempts to decentralize this and is exactly what impala does impala"
},
{
"start": 478.59999999999997,
"end": 486.8,
"text": " splits this into multiple workers you can think of this as different machines"
},
{
"start": 486.8,
"end": 492.92,
"text": " so there is a split here and these are called actors and this is called a"
},
{
"start": 492.92,
"end": 498.92,
"text": " learner now the actors they will go ahead and they will run episodes on"
},
{
"start": 498.92,
"end": 505.08000000000004,
"text": " their own right occasionally or they will run episodes and they will"
},
{
"start": 505.08000000000004,
"end": 510.36,
"text": " communicate those episodes to the learner and the learner will continuously"
},
{
"start": 510.36,
"end": 515.64,
"text": " here learn so these orange steps can be made in much more quick success"
},
{
"start": 515.64,
"end": 523.76,
"text": " succession and don't have to be like synchronized as in the a to see here is"
},
{
"start": 523.76,
"end": 527.88,
"text": " another way of seeing this over here and we'll just concentrate on this on this"
},
{
"start": 527.88,
"end": 534.24,
"text": " left thing here so there is a learner and there are actors and every now and"
},
{
"start": 534.24,
"end": 539.6,
"text": " then the actor sinks its model from the learner these are different machines so"
},
{
"start": 539.6,
"end": 543.52,
"text": " this can happen over the network every now and then the actor gets like an"
},
{
"start": 543.52,
"end": 549,
"text": " update of the newest policy network and then the actor will just go ahead and"
},
{
"start": 549,
"end": 555.88,
"text": " run episodes using that policy right episode episode episode episode step"
},
{
"start": 555.88,
"end": 561.06,
"text": " steps without interfering with anything else and then once it has run an episode"
},
{
"start": 561.06,
"end": 565.72,
"text": " or multiple ones it will communicate this back to the learner and if all the"
},
{
"start": 565.72,
"end": 571.1999999999999,
"text": " actors do this all right the learner gets a whole bunch of these episodes and"
},
{
"start": 571.2,
"end": 577.5200000000001,
"text": " then can learn from all of them simultaneously and it can do so in kind"
},
{
"start": 577.5200000000001,
"end": 583.36,
"text": " of with in in kind of very fast succession as you see here so the work"
},
{
"start": 583.36,
"end": 589.9200000000001,
"text": " is split of course you run into a problem namely as we saw in the a to see"
},
{
"start": 589.9200000000001,
"end": 596.84,
"text": " algorithm this type of reinforcement learning requires basically that you"
},
{
"start": 596.84,
"end": 602.32,
"text": " always run the episode with the current model and that's not the case here right"
},
{
"start": 602.32,
"end": 608.88,
"text": " the actor may sink the parameters they sink the parameters once in a while but"
},
{
"start": 608.88,
"end": 614.12,
"text": " then it will run these episodes right when it runs these episodes here it has"
},
{
"start": 614.12,
"end": 621.84,
"text": " no idea or it the learner in the meantime has continually been updating"
},
{
"start": 621.84,
"end": 627.2800000000001,
"text": " the model while the actor kind of has an old model so these episodes here are run"
},
{
"start": 627.2800000000001,
"end": 633.12,
"text": " with an old model so the learner if it tries to learn from this must kind of"
},
{
"start": 633.12,
"end": 638.4,
"text": " correct for this fact and the big kind of theoretical contribution of this"
},
{
"start": 638.4,
"end": 644.88,
"text": " paper is how to correct for the fact that the data you learn from comes from"
},
{
"start": 644.88,
"end": 653.76,
"text": " an outdated policy model and this is what's called V trace correction so"
},
{
"start": 653.76,
"end": 663.32,
"text": " without going too much into the into the details here V trace correction happens"
},
{
"start": 663.32,
"end": 669.76,
"text": " as as fall it happens as follows so what you define are what's called V trace"
},
{
"start": 669.76,
"end": 676.08,
"text": " targets and these V trace targets are basically the targets that you train"
},
{
"start": 676.08,
"end": 684.36,
"text": " your value function towards right so the the the value function as we discussed"
},
{
"start": 684.36,
"end": 690.76,
"text": " before that is a that is the thing that tells you how good each state is and the"
},
{
"start": 690.76,
"end": 696.16,
"text": " targets you train this towards and you're also by the way using this V V"
},
{
"start": 696.16,
"end": 704.52,
"text": " trace corrections in policy updates but these are defined as follows so the V"
},
{
"start": 704.52,
"end": 712.04,
"text": " trace target for step s is the value function at step s plus this correction"
},
{
"start": 712.04,
"end": 720.0799999999999,
"text": " thing and the the correction thing basically well I've I want to break this"
},
{
"start": 720.08,
"end": 730.5200000000001,
"text": " down some more so the V at current s is your value function plus and this is a"
},
{
"start": 730.5200000000001,
"end": 738.12,
"text": " sum over all future steps over and this is a discount factor and this is kind of"
},
{
"start": 738.12,
"end": 743.0600000000001,
"text": " a delta from one step to the next so you're in an episode and you've made"
},
{
"start": 743.06,
"end": 754.4,
"text": " some steps right and let's say we are here right this is s and so your your"
},
{
"start": 754.4,
"end": 766.8,
"text": " little V s will be whatever your value function says of s plus kind of a"
},
{
"start": 766.8,
"end": 773.4,
"text": " correction for each step that you make go into the future like this and the"
},
{
"start": 773.4,
"end": 781.3599999999999,
"text": " main part of these is is this here which is basically the reward at the step plus"
},
{
"start": 781.3599999999999,
"end": 787.8,
"text": " the difference of the value functions of the steps after it and what V trace"
},
{
"start": 787.8,
"end": 798.76,
"text": " introduces now is this bit here and these CI again are computed as such so"
},
{
"start": 798.76,
"end": 802.68,
"text": " all of this kind of is very nested so there is a there's a big multiplication"
},
{
"start": 802.68,
"end": 808.24,
"text": " here it's a very nested thing but in the very very very core of it you can see"
},
{
"start": 808.24,
"end": 816.3599999999999,
"text": " the following these V trace corrections are a ratio between pi and mu and pi is"
},
{
"start": 816.36,
"end": 824.32,
"text": " the policy of the learner that is the current policy and mu is the policy that"
},
{
"start": 824.32,
"end": 832.88,
"text": " has been used to generate the to generate the episode and this is truncated"
},
{
"start": 832.88,
"end": 838.64,
"text": " by a minimum and usually the C bar is one so let's consider what happens here"
},
{
"start": 838.64,
"end": 848.56,
"text": " what happens is let's say that mu is higher than pi for a given pair of AI"
},
{
"start": 848.56,
"end": 855.76,
"text": " index a what does it mean it means that in the past you run an episode you come"
},
{
"start": 855.76,
"end": 867.96,
"text": " you are in this maze right such to them and you're here right now the and the"
},
{
"start": 867.96,
"end": 879.2800000000001,
"text": " goal let's say the goal is down here and the action is going over here that"
},
{
"start": 879.2800000000001,
"end": 886.2,
"text": " does the action that you're considering here now your mu which is your old"
},
{
"start": 886.2,
"end": 892.52,
"text": " policy that the actor has synced at some point mu might say this is very good"
},
{
"start": 892.52,
"end": 901.84,
"text": " right because it moves you towards the goal more but then your your pie the"
},
{
"start": 901.84,
"end": 906.1999999999999,
"text": " learner has been learning since the eight since the agent the actor has"
},
{
"start": 906.1999999999999,
"end": 910.8,
"text": " synchronized the weights the learner has been learning and the learner might know"
},
{
"start": 910.8,
"end": 916.96,
"text": " wait wait since you have decided this I have actually learned that this might"
},
{
"start": 916.96,
"end": 922.24,
"text": " not be such a good move because you know there's a wall here and I'd rather go"
},
{
"start": 922.24,
"end": 930.4,
"text": " down here and then over here so what it will do it will since pi is low and mu"
},
{
"start": 930.4,
"end": 935.96,
"text": " is higher it will down weigh this action and this is how you correct for the fact"
},
{
"start": 935.96,
"end": 942.36,
"text": " that there are old weights by basically down weighing wherever the old policy"
},
{
"start": 942.36,
"end": 948.12,
"text": " thought of an action as being worth more than the new policy does and this is how"
},
{
"start": 948.12,
"end": 951.76,
"text": " you make up for the fact that the new policy you assume it knows better"
},
{
"start": 951.76,
"end": 956.68,
"text": " because it has learned more and thereby you you give lower weight to the data"
},
{
"start": 956.68,
"end": 963.36,
"text": " points where the the policies have diverged a lot so that's at the core of"
},
{
"start": 963.36,
"end": 974.24,
"text": " it and you can think of in terms of here you can think of it as maybe here at"
},
{
"start": 974.24,
"end": 980.2,
"text": " this step you're at a point where the old policy that the actor has has"
},
{
"start": 980.2,
"end": 988.5200000000001,
"text": " updated itself to says we should do action one right but the new policy that"
},
{
"start": 988.5200000000001,
"end": 993.84,
"text": " the learner has in the meantime has learned more says now we should do action"
},
{
"start": 993.84,
"end": 1002.84,
"text": " two and if this is the case then this whole rest of the episode is down weight"
},
{
"start": 1002.84,
"end": 1009.24,
"text": " because it is no longer current knowledge right and this is not just"
},
{
"start": 1009.24,
"end": 1014.5600000000001,
"text": " kind of a heuristic but they actually do prove that this this this comes with"
},
{
"start": 1014.5600000000001,
"end": 1018.24,
"text": " some guarantees especially reduces to kind of the classic reinforcement"
},
{
"start": 1018.24,
"end": 1023.84,
"text": " algorithms if you assume that mu is always pi so that current policy is the"
},
{
"start": 1023.84,
"end": 1027.76,
"text": " old policy and therefore you're in the old setting alright so this was a bit of"
},
{
"start": 1027.76,
"end": 1035.04,
"text": " a lengthy explanation of the math behind it and at the end what you do is"
},
{
"start": 1035.04,
"end": 1043.1599999999999,
"text": " following you train your value function using this update and you can see here"
},
{
"start": 1043.1599999999999,
"end": 1048.24,
"text": " it's simply the gradient of the value function scaled by the thing that"
},
{
"start": 1048.24,
"end": 1055.1599999999999,
"text": " contains this V trace target right you then you update your policy in this"
},
{
"start": 1055.1599999999999,
"end": 1060.68,
"text": " direction and this is the classic reinforcement learning reinforce style"
},
{
"start": 1060.68,
"end": 1068.72,
"text": " policy update where here you have the gradient of the of the policy and here"
},
{
"start": 1068.72,
"end": 1076.44,
"text": " you have the weighing by the reward and specifically here it is the reward plus"
},
{
"start": 1076.44,
"end": 1084.44,
"text": " this V trace target and this thing here is a bias correction or a bias reducing"
},
{
"start": 1084.44,
"end": 1092.92,
"text": " sorry variance reducing bias that was terrible the final form is what's called"
},
{
"start": 1092.92,
"end": 1099.28,
"text": " an entropy penalty where you want to push the entropy of your policy up such"
},
{
"start": 1099.28,
"end": 1105.92,
"text": " that the agent kind of is biased towards exploring more than exploiting if you"
},
{
"start": 1105.92,
"end": 1110,
"text": " know of the classic exploration exploitation dilemma so that's that's"
},
{
"start": 1110,
"end": 1115.72,
"text": " what you do compute these V trace targets update your value and policy"
},
{
"start": 1115.72,
"end": 1122.84,
"text": " according to these equations and there you go so what do what does Impala do"
},
{
"start": 1122.84,
"end": 1127.76,
"text": " specifically in this deep mind lab they have two architectures first of all they"
},
{
"start": 1127.76,
"end": 1132.32,
"text": " have this they have this small architecture second they have this large"
},
{
"start": 1132.32,
"end": 1138.24,
"text": " architecture and they just kind of try it out on these and they measure how"
},
{
"start": 1138.24,
"end": 1143.32,
"text": " many frames per second they can get in and you see here compared to on single"
},
{
"start": 1143.32,
"end": 1150.92,
"text": " machine compared to a 3c they bring in a lot more frames per second this is just"
},
{
"start": 1150.92,
"end": 1157.08,
"text": " on a single machine but then on distributed setting the scale up also is"
},
{
"start": 1157.08,
"end": 1163.52,
"text": " very significant that they reach that's because they don't have to wait for"
},
{
"start": 1163.52,
"end": 1168.44,
"text": " other things they can just go ahead everything runs at full speed basically"
},
{
"start": 1168.44,
"end": 1174.2,
"text": " and everything runs in parallel and the fact that that some of the information"
},
{
"start": 1174.2,
"end": 1182.4,
"text": " is old is corrected by V trace and the last thing I want to show is the wall"
},
{
"start": 1182.4,
"end": 1187.6,
"text": " clock time I think this is the important plot in this deep mind lab on over all"
},
{
"start": 1187.6,
"end": 1195.1599999999999,
"text": " the tasks the wall clock time compared to the score you can see a 3c while it"
},
{
"start": 1195.1599999999999,
"end": 1201.1999999999998,
"text": " does you know increase over time the Impala variants up here increase in much"
},
{
"start": 1201.1999999999998,
"end": 1210.6399999999999,
"text": " much faster wall clock time so that's the that's the paper they have a lot of"
},
{
"start": 1210.6399999999999,
"end": 1215.2199999999998,
"text": " proofs in the appendix which I'm not gonna go over if you want to give it a"
},
{
"start": 1215.22,
"end": 1223.04,
"text": " try then it is it is not called Impala on github it is called I think scalable"
},
{
"start": 1223.04,
"end": 1237.04,
"text": " agent so on github it is called scalable agent I think but you'll find it if you"
},
{
"start": 1237.04,
"end": 1242.96,
"text": " if you search for Impala github or something like this yeah other than that"
},
{
"start": 1242.96,
"end": 1247.16,
"text": " thanks for listening and see you next time"
}
] |
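A minimal sketch, assuming NumPy and per-trajectory arrays, of the V-trace targets that the IMPALA transcript above walks through: the target v_s is the current value estimate plus a sum of temporal-difference corrections, each scaled by truncated ratios between the learner's policy pi and the behaviour policy mu that actually generated the episode. Function and argument names here are illustrative assumptions, not the authors' released scalable_agent code.

```python
import numpy as np

def vtrace_targets(rewards, values, bootstrap_value, log_pi, log_mu,
                   gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """Compute V-trace targets v_s for a single trajectory (all inputs shape [T])."""
    ratios = np.exp(log_pi - log_mu)      # pi(a|x) / mu(a|x) importance ratios
    rhos = np.minimum(rho_bar, ratios)    # truncated rho_t
    cs = np.minimum(c_bar, ratios)        # truncated c_i (c_bar is usually 1)

    values_next = np.concatenate([values[1:], [bootstrap_value]])
    # delta_t V = rho_t * (r_t + gamma * V(x_{t+1}) - V(x_t))
    deltas = rhos * (rewards + gamma * values_next - values)

    # Backward recursion: v_s - V(x_s) = delta_s + gamma * c_s * (v_{s+1} - V(x_{s+1})).
    # Multiplying by c_s is exactly the "old data gets less weight" correction
    # described in the transcript: wherever mu overestimated an action relative
    # to the current pi, the rest of the episode contributes less.
    correction = np.zeros_like(values, dtype=float)
    acc = 0.0
    for t in reversed(range(len(rewards))):
        acc = deltas[t] + gamma * cs[t] * acc
        correction[t] = acc
    return values + correction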
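And a sketch of how the three update terms mentioned at the end of that transcript (regressing the value function towards the v_s targets, the REINFORCE-style policy term weighted by r_t + gamma * v_{t+1} - V(x_t), and the entropy bonus that biases the agent towards exploration) could be combined into one loss. The coefficients and names are assumptions for illustration, not the paper's exact hyperparameters.

```python
import numpy as np

def impala_loss_terms(log_pi_actions, entropies, values, vs, rewards, rhos,
                      bootstrap_vs, gamma=0.99, value_coef=0.5, entropy_coef=0.01):
    """Scalar loss for one trajectory (all per-step inputs shape [T]).
    vs are the V-trace targets; bootstrap_vs stands in for v_{s+1} at the last step."""
    vs_next = np.concatenate([vs[1:], [bootstrap_vs]])
    # Advantage used by the policy gradient; treated as a constant (no gradient flows through it).
    advantages = rhos * (rewards + gamma * vs_next - values)

    value_loss = 0.5 * np.sum((vs - values) ** 2)        # fit V(x_s) towards the V-trace target v_s
    policy_loss = -np.sum(log_pi_actions * advantages)   # reinforce-style policy term
    entropy_loss = -np.sum(entropies)                    # minimizing this maximizes entropy, i.e. explores more
    return value_coef * value_loss + policy_loss + entropy_coef * entropy_loss
```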
ctCv_NRpqvM | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | The Visual Task Adaptation Benchmark | [
"Science & Technology"
] | [
"ml",
"machine learning",
"cnn",
"imagenet",
"pretraining",
"finetuning",
"fine-tuning",
"google",
"benchmark",
"initialization",
"supervised",
"unsupervised",
"bert",
"artificial intelligence",
"score"
] | This paper presents a new benchmark for Visual Task Adaptation (i.e. BERT for images) and investigates several baseline methods for doing so.
Abstract:
Representation learning promises to unlock deep learning for the long tail of vision tasks without expansive labelled datasets. Yet, the absence of a unified yardstick to evaluate general visual representations hinders progress. Many sub-fields promise representations, but each has different evaluation protocols that are either too constrained (linear classification), limited in scope (ImageNet, CIFAR, Pascal-VOC), or only loosely related to representation quality (generation). We present the Visual Task Adaptation Benchmark (VTAB): a diverse, realistic, and challenging benchmark to evaluate representations. VTAB embodies one principle: good representations adapt to unseen tasks with few examples. We run a large VTAB study of popular algorithms, answering questions like: How effective are ImageNet representation on non-standard datasets? Are generative models competitive? Is self-supervision useful if one already has labels?
Authors: Xiaohua Zhai, Joan Puigcerver, Alexander Kolesnikov, Pierre Ruyssen, Carlos Riquelme, Mario Lucic, Josip Djolonga, Andre Susano Pinto, Maxim Neumann, Alexey Dosovitskiy, Lucas Beyer, Olivier Bachem, Michael Tschannen, Marcin Michalski, Olivier Bousquet, Sylvain Gelly, Neil Houlsby
https://arxiv.org/abs/1910.04867
https://github.com/google-research/task_adaptation | Hi there. Today we're looking at the visual task adaptation benchmark by a list of authors that's way too long to read out all from Google Brain. So what is this paper? This paper cares about a new benchmark that is abbreviated VTab and VTab is a benchmark for a task called visual task adaptation. So a benchmark, the meaning of a benchmark is it's kind of a number that you achieve with a model and whoever has the highest number is the best at this task. So the benchmark kind of standardizes how you evaluate models and the model is here. They do visual task adaptation. So what is visual task adaptation? So this is visual task adaptation. It's kind of illustrated in this figure. Imagine you have a bunch of what are called visual tasks and a visual task, and this is the right side here, a visual task is anything that can be solved from just visual input. So basically given a picture or many pictures and you ask kind of a question about it, if that question can be answered by just looking at the picture then that's called a visual task. For example in this data set you might be asked whether a picture contains a dog or a cat. In this data set you might be asked to outline where the objects are. So here the plane, you might be able to segment or you might be able to point out where buildings are in the images. Right here, here, there's no building here. So there's varieties of tasks that are possible. Or in the bottom domain you might be asked which one of the two red dots here is closer to the observer in 3D space. Or you might be asked in this picture please count the number of gray boxes. So there's a bunch of, all of these count as visual tasks. Now the setting that the authors imagine here is there are many of these visual tasks in the world for which there isn't much training data. Imagine something like this. These are aerial images so you kind of need a satellite or a plane to obtain them and then you need to label them. So all of this is isn't that cheap. Even more so in a for example medical domain where you have very expensive CT images of patients and then you need to obtain them and you need to convince the patients to release their data and someone needs to label it. So it's very costly to obtain lots of training data. Now what we want to do is we want to, for all of these tasks, we ideally want to build neural networks, deep neural networks because we know they're super accurate but they are only super accurate if you have lots of training data. So that conflicts with the fact that we might not have so much training data for these tasks. So the proposed solution here is what's called visual task adaptation and it's the following. Imagine you have lots and lots of what's called here upstream data. And upstream data, what they mean is data that is similar to the data here but not exactly the same but you have lots of it. And the example given is ImageNet. So imagine this here to be ImageNet. ImageNet is a data set with over a million images. All of them are labeled into one of a thousand classes and so you can build a very good model for ImageNet to predict the ImageNet class. And you can get very accurate, you have lots of data. Cool. So you build this model but now what you want to do is you want to use this what's here called an adaptation algorithm. And you want to use that model that you trained on ImageNet data and kind of change it just a bit. 
So you start from the model you have that works on ImageNet and with the few training data you have here on the right side and the author has actually standardized this in the benchmark to 1k samples. So you only have a thousand training samples compared to the millions that you potentially need. You have a thousand samples and you adapt your model to these tasks. So you train the model on ImageNet and you adapt it to predict whether or not there's a cat or a dog and you adapt it to segment these images and you adapt it to predict the depth of points. So you can consider this kind of as a pre-training thing. So you pre-train your model on ImageNet and then you adapt it to these others. That's what's called task adaptation. It's not exactly pre-training in the classic sense because pre-training in the classic sense means basically that you retain the same model but here it's a bit different. So in stage one you train a deep neural network on lots of training data. A deep neural network here this might be you know you have a bunch of layers layer layer layer layer layer and then here you have a thousand you classify into a thousand classes. This is your model. Then in stage two over here you adapt this model and what it ultimately means is you take for example this part here up until the second to last layer transfer it over put it here right bam bam bam bam bam you retain the weights you keep the weights but then you add just one or two new layers and classify your new tasks. This could be is it a cat or is it a dog? Then you train you can either elect to only train the green part here or you can train the whole thing. The second thing is called fine-tuning. The author is mostly elect to do fine-tuning in this work so you carry over the weights and you add a new head and then you train the entire thing with the 1000 samples that you have for this task and then you the kind of the goal is to get as good as possible on that one task where you only have a thousand samples. If your pre-training was good so if your stage one was good then you would expect that stage two would profit a lot from this pre-training which basically means that even though you only have a thousand samples you can reach accuracies that would usually only be possible with much more samples. That's the idea behind it. This is what's called visual task adaptation. The authors propose a benchmark for this. A benchmark for this part, for the adaptation algorithm. The adaptation algorithm they propose as a baseline is train on ImageNet and then fine-tune. That's an adaptation algorithm. They propose a score for this. If you come up with a better adaptation algorithm for example you could say no I'm going to train on YouTube data and then do fine-tune that and then maybe you'd reach better accuracies in these tasks over here and then your score would be higher. It's kind of a benchmark to compare adaptation algorithms. Here your benchmark score and this is conditioned on n, the number of samples that you have in the in the layer two tasks and this here is standardized to 1000 in their case. The score of an adaptation algorithm A is the following. It's the expectation over this is kind of an error measure and you can think of it basically as a test set classification error on the layer two tasks. Of that adaptation algorithm if given the data set of a layer two tasks of n samples and the layer two tasks here comes from a distribution of layer two tasks. What does it mean? 
This distribution of layer two tasks they imagine, they show this in this picture, they imagine the visual tasks like on this big landscape of visual tasks right here and what they ideally want to do is they want to sample a task here and this task corresponds to classifying these dog images and very close to it could be classifying bird images but then very far away could be a task of counting and depth estimation and so on. They imagine all the visual tasks have some kind of some sort of distribution. So what happens is you sample one of those visual tasks for each element in this expectation. You sample one of them, you build the data set with a thousand samples right you put it through your adaptation algorithms or your adaptation algorithm for example your pre-trained image net you adapt it to that task with a thousand samples and then you compute your error metric on that. Now if you do this over the whole distribution you get an expectation of this error metric in all the visual tasks and that will be your score. What does it mean in practice? I mean in practice you don't have this distribution right in practice you have a list so like list here is a list of tasks right there's this task this task this task this task there's whatever the pets task and then there is the aerial then there is the counting right you have a list of tasks and what is it like this stuff and this expectation ultimately right stage one train a model M stage two for each of these tasks adapt the model M or fine-tune your model M on these tasks then for each task get an error rate error rate one task two gives you error rate two tasks three gives you error rate three then jump simply one over n sum them up so take the take the average error rate of the of the of all of the tasks and that's your score that's kind of my first criticism of this thing like this this all just seems like super mathematized with like oh we imagine all of these tasks being in some distribution somewhere like that there is a distribution of tasks and we have an expectation over the distribution now like why just say here's a bunch of tasks right adapt your model to each one of them get the average error rate done that's your score that would have been first of all much easier and second of all they never actually care to characterize this distribution like if if they were to actually rigorously characterize this distribution of visual tasks I would agree that this formulation makes sense but all they say basically all they say is tasks that a human can solve from visual input alone and they give a bunch of examples of you know a good task would be the following right so label one one zero zero one right and you probably figured it out the task is is it a square or is it a triangle right that's a does a visual task in the classic sense human can solve it from visual input alone then the following task wouldn't be as easy labels one zero zero one so the task I had in mind was is there and spelling is the spelling of the shape over here does it contain an a so square contains an a circle doesn't line doesn't but triangle contains an a right so therefore this you kind of need world knowledge and you can't just solve it from visual input alone right especially not you can't generalize to new new shapes if you if you just from visually put so um they and they say appendix B they validate this right they validate that humans can solve it but I I actually disagree with this because just because humans can solve a task just from visual input 
doesn't mean that they don't use world knowledge in it like in this whatever pets example here right humans know how cats and dogs look anatomically right how they look from the side and from the back and so on even if they haven't seen it in a picture they they know how they behave and so on what is kind of realistic setting for a cat and a dog to be in so all of this it seems kind of a bit shady and the reason I'm saying this is if you make this distribution formulation you also you have to give a rigorous definition and because if a new task arrives now like one that's not in your list like never been before here in the world like new task arrives how do we know whether or not we should include it in the list or not right how do we know whether it's part of this distribution or not it just seems very very shaky so that being said they do give this list and this list has 19 tasks that's down here so there are 19 tasks their categorized as natural which means natural images these these yeah the examples here are pets flowers images house numbers and so on specialized images are for example images with that you special equipment for example medical images and then structured means where that's down here structured means that the model needs come to comprehend the structure of a scene so they give an example of object counting or 3d depth prediction I mean that's that's fair enough they have these 19 tasks but and they show kind of the tasks down here here's a list of tasks and kind of their baseline method on it but but why for me like the question is why exactly these tasks if they don't specify this distribution why these tasks and they don't really like they do some they do a lot of experimentation actually an investigation but what's kind of missing for me is to show that these tasks first of all are kind of internally consistent in that they're really visual tasks and second of all that they kind of cover this distribution or they represent this entire distribution that they're trying to model and it seems to me unclear why exactly these tasks why they left others out and included these ones in all fairness probably they simply took the ones that that they could get their hands on but still I feel that this is very shaky and that might that might lead to the benchmark not being adapted very widely but alright enough with the criticism let's go further in this so they do present this kind of baseline experiments and they they pre train always on image net and then they they they fine-tune on these layer two tasks and the way they pre train here is listed here for example so if they pre train a generative model it actually performs worse than if they just train from scratch for the layer two tasks on the thousand samples right self supervised is kind of a pre training method where if you have an image you do something like you rotate it to the right or to the left and then you ask a model some sort of a discriminator did it did I turn it to the right or to the left like zero is to the right left and one is to the right so you this is called self supervised you don't need labels for this right and it kind of works well semi supervised has some of the labels and supervised has is like image net with full labels and you kind of see unsurprisingly that the more information you have the the better you are going to be in all of these these kind of tasks interestingly the generative pre training works the worst worse than even from scratch training so that's kind of a sort of special what what 
I do really appreciate about this this investigation here is that they investigate a lot of variants of this of this benchmark and they come to the conclusion I think this encapsulated here one for example we find two models using 16 Google Cloud TPU hardware accelerators now that's expensive right but they say we conduct additional experiments to assess whether our result can be reproduced with a more basic hardware setup we evaluate on all the tasks using a single Nvidia P100 GPU with a thousand steps 64 images per mini batch right so they verify that you can do this benchmark you can take part in this benchmark even if you don't have much time or money or hardware right that's why for example they limit they limit the number of examples in the layer two tasks to a thousand they do investigate that this correlates with your performance if you were to include the full data sets of the layer two tasks so if you just include a thousand examples that correlates well they do investigate they do investigate whether you can put it on a single GPU they do investigate if you only run it for a thousand steps here you see this experiment you have to run it for a thousand steps basically and you're almost at the level if as if you were to run it for 50,000 steps so there's a lot of work to that goes into making sure that everybody can kind of participate in this benchmark and that I appreciate this a lot and there is actually code available so if you go to github and you just search for task adaptation actually I had it open before but I don't know so you go to github and you go to Google research and search for task adaptation to adaptation you'll you'll find it there is code that downloads all of the data sets for you prepares them and there is a script that runs your layer one model so you need to provide it a layer one model but then there is a script that that runs it on all of the different layer two tasks and at the end calculates your benchmark for you so that's pretty neat and I would encourage you if you have a good idea for a pre training or for a adaptation algorithm take part in the benchmark I suspect there will be a leaderboard kind of online leaderboard coming out at some point otherwise you simply can report the number in your papers and I hope you are going to be successful at that all right so that was it for me have lots of fun and bye bye | [
{
"start": 0,
"end": 6.3,
"text": " Hi there. Today we're looking at the visual task adaptation benchmark by a"
},
{
"start": 6.3,
"end": 14.34,
"text": " list of authors that's way too long to read out all from Google Brain. So what"
},
{
"start": 14.34,
"end": 20.86,
"text": " is this paper? This paper cares about a new benchmark that is abbreviated VTab"
},
{
"start": 20.86,
"end": 28.22,
"text": " and VTab is a benchmark for a task called visual task adaptation. So a"
},
{
"start": 28.22,
"end": 34.64,
"text": " benchmark, the meaning of a benchmark is it's kind of a number that you achieve"
},
{
"start": 34.64,
"end": 40.32,
"text": " with a model and whoever has the highest number is the best at this task."
},
{
"start": 40.32,
"end": 46.82,
"text": " So the benchmark kind of standardizes how you evaluate models and the"
},
{
"start": 46.82,
"end": 52.239999999999995,
"text": " model is here. They do visual task adaptation. So what is visual task"
},
{
"start": 52.24,
"end": 59.52,
"text": " adaptation? So this is visual task adaptation. It's kind of illustrated in"
},
{
"start": 59.52,
"end": 65.76,
"text": " this figure. Imagine you have a bunch of what are called visual tasks and a"
},
{
"start": 65.76,
"end": 70.76,
"text": " visual task, and this is the right side here, a visual task is anything that"
},
{
"start": 70.76,
"end": 76.38,
"text": " can be solved from just visual input. So basically given a picture or many"
},
{
"start": 76.38,
"end": 83.1,
"text": " pictures and you ask kind of a question about it, if that question can be"
},
{
"start": 83.1,
"end": 88.44,
"text": " answered by just looking at the picture then that's called a visual task. For"
},
{
"start": 88.44,
"end": 93.24,
"text": " example in this data set you might be asked whether a picture contains a dog"
},
{
"start": 93.24,
"end": 102.03999999999999,
"text": " or a cat. In this data set you might be asked to outline where the objects"
},
{
"start": 102.04,
"end": 107,
"text": " are. So here the plane, you might be able to segment or you might be able to point"
},
{
"start": 107,
"end": 112.68,
"text": " out where buildings are in the images. Right here, here, there's no building"
},
{
"start": 112.68,
"end": 117.08000000000001,
"text": " here. So there's varieties of tasks that are possible. Or in the"
},
{
"start": 117.08000000000001,
"end": 123.04,
"text": " bottom domain you might be asked which one of the two red dots here is closer"
},
{
"start": 123.04,
"end": 129.60000000000002,
"text": " to the observer in 3D space. Or you might be asked in this picture please count"
},
{
"start": 129.6,
"end": 135.76,
"text": " the number of gray boxes. So there's a bunch of, all of these count as visual"
},
{
"start": 135.76,
"end": 142.12,
"text": " tasks. Now the setting that the authors imagine here is there are many of these"
},
{
"start": 142.12,
"end": 148.76,
"text": " visual tasks in the world for which there isn't much training data. Imagine"
},
{
"start": 148.76,
"end": 152.88,
"text": " something like this. These are aerial images so you kind of need a satellite"
},
{
"start": 152.88,
"end": 156.64,
"text": " or a plane to obtain them and then you need to label them. So all of this is"
},
{
"start": 156.64,
"end": 163.72,
"text": " isn't that cheap. Even more so in a for example medical domain where you have"
},
{
"start": 163.72,
"end": 169.35999999999999,
"text": " very expensive CT images of patients and then you need to obtain them and you"
},
{
"start": 169.35999999999999,
"end": 174.48,
"text": " need to convince the patients to release their data and someone needs to label it."
},
{
"start": 174.48,
"end": 180,
"text": " So it's very costly to obtain lots of training data. Now what we want to do is"
},
{
"start": 180,
"end": 185,
"text": " we want to, for all of these tasks, we ideally want to build neural networks,"
},
{
"start": 185,
"end": 189.76,
"text": " deep neural networks because we know they're super accurate but they are only"
},
{
"start": 189.76,
"end": 194.88,
"text": " super accurate if you have lots of training data. So that conflicts with the"
},
{
"start": 194.88,
"end": 200.4,
"text": " fact that we might not have so much training data for these tasks. So the"
},
{
"start": 200.4,
"end": 204.84,
"text": " proposed solution here is what's called visual task adaptation and it's the"
},
{
"start": 204.84,
"end": 210.84,
"text": " following. Imagine you have lots and lots of what's called here upstream data."
},
{
"start": 210.84,
"end": 217.52,
"text": " And upstream data, what they mean is data that is similar to the data here but not"
},
{
"start": 217.52,
"end": 222.64000000000001,
"text": " exactly the same but you have lots of it. And the example given is ImageNet."
},
{
"start": 222.64000000000001,
"end": 231.12,
"text": " So imagine this here to be ImageNet. ImageNet is a data set with over a"
},
{
"start": 231.12,
"end": 237.88,
"text": " million images. All of them are labeled into one of a thousand classes and so"
},
{
"start": 237.88,
"end": 243.72,
"text": " you can build a very good model for ImageNet to predict the ImageNet class."
},
{
"start": 243.72,
"end": 250,
"text": " And you can get very accurate, you have lots of data. Cool. So you build"
},
{
"start": 250,
"end": 253.84,
"text": " this model but now what you want to do is you want to use this what's here"
},
{
"start": 253.84,
"end": 259.6,
"text": " called an adaptation algorithm. And you want to use that model that you trained"
},
{
"start": 259.6,
"end": 265.48,
"text": " on ImageNet data and kind of change it just a bit. So you start from the model"
},
{
"start": 265.48,
"end": 270.64000000000004,
"text": " you have that works on ImageNet and with the few training data you have here on"
},
{
"start": 270.64000000000004,
"end": 273.68,
"text": " the right side and the author has actually standardized this in the"
},
{
"start": 273.68,
"end": 278.76,
"text": " benchmark to 1k samples. So you only have a thousand training samples"
},
{
"start": 278.76,
"end": 283.20000000000005,
"text": " compared to the millions that you potentially need. You have a thousand"
},
{
"start": 283.20000000000005,
"end": 290.72,
"text": " samples and you adapt your model to these tasks. So you train the model"
},
{
"start": 290.72,
"end": 294.40000000000003,
"text": " on ImageNet and you adapt it to predict whether or not there's a cat or a dog"
},
{
"start": 294.4,
"end": 300.44,
"text": " and you adapt it to segment these images and you adapt it to predict the depth of"
},
{
"start": 300.44,
"end": 306.47999999999996,
"text": " points. So you can consider this kind of as a pre-training thing. So you pre-train"
},
{
"start": 306.47999999999996,
"end": 313.2,
"text": " your model on ImageNet and then you adapt it to these others. That's what's"
},
{
"start": 313.2,
"end": 318.12,
"text": " called task adaptation. It's not exactly pre-training in the classic sense"
},
{
"start": 318.12,
"end": 322.52,
"text": " because pre-training in the classic sense means basically that you"
},
{
"start": 322.52,
"end": 329.03999999999996,
"text": " retain the same model but here it's a bit different. So in stage one you"
},
{
"start": 329.03999999999996,
"end": 333.68,
"text": " train a deep neural network on lots of training data. A deep neural network"
},
{
"start": 333.68,
"end": 337.68,
"text": " here this might be you know you have a bunch of layers layer layer layer layer"
},
{
"start": 337.68,
"end": 343.96,
"text": " layer and then here you have a thousand you classify into a thousand classes."
},
{
"start": 343.96,
"end": 350.84,
"text": " This is your model. Then in stage two over here you adapt this model"
},
{
"start": 350.84,
"end": 357.03999999999996,
"text": " and what it ultimately means is you take for example this part here up until the"
},
{
"start": 357.03999999999996,
"end": 364.35999999999996,
"text": " second to last layer transfer it over put it here right bam bam bam bam bam"
},
{
"start": 364.35999999999996,
"end": 371.59999999999997,
"text": " you retain the weights you keep the weights but then you add just one or two"
},
{
"start": 371.59999999999997,
"end": 377.88,
"text": " new layers and classify your new tasks. This could be is it a cat or is it a dog?"
},
{
"start": 377.88,
"end": 383.44,
"text": " Then you train you can either elect to only train the green part here or"
},
{
"start": 383.44,
"end": 389.28,
"text": " you can train the whole thing. The second thing is called fine-tuning."
},
{
"start": 389.28,
"end": 394.71999999999997,
"text": " The author is mostly elect to do fine-tuning in this work so you carry"
},
{
"start": 394.71999999999997,
"end": 401.76,
"text": " over the weights and you add a new head and then you train the entire thing with"
},
{
"start": 401.76,
"end": 408,
"text": " the 1000 samples that you have for this task and then you the kind of the goal"
},
{
"start": 408,
"end": 412.44,
"text": " is to get as good as possible on that one task where you only have a thousand"
},
{
"start": 412.44,
"end": 420.03999999999996,
"text": " samples. If your pre-training was good so if your stage one was good then you"
},
{
"start": 420.03999999999996,
"end": 426.52,
"text": " would expect that stage two would profit a lot from this pre-training which"
},
{
"start": 426.52,
"end": 429.8,
"text": " basically means that even though you only have a thousand samples you can"
},
{
"start": 429.8,
"end": 437.08,
"text": " reach accuracies that would usually only be possible with much more samples."
},
{
"start": 437.08,
"end": 444.88,
"text": " That's the idea behind it. This is what's called visual task"
},
{
"start": 444.88,
"end": 452.04,
"text": " adaptation. The authors propose a benchmark for this. A benchmark for"
},
{
"start": 452.04,
"end": 457.88,
"text": " this part, for the adaptation algorithm. The adaptation algorithm they"
},
{
"start": 457.88,
"end": 463.4,
"text": " propose as a baseline is train on ImageNet and then fine-tune. That's an"
},
{
"start": 463.4,
"end": 468.76,
"text": " adaptation algorithm. They propose a score for this. If you come up with a"
},
{
"start": 468.76,
"end": 474.08,
"text": " better adaptation algorithm for example you could say no I'm going to train"
},
{
"start": 474.08,
"end": 480.64,
"text": " on YouTube data and then do fine-tune that and then maybe you'd reach"
},
{
"start": 480.64,
"end": 487.15999999999997,
"text": " better accuracies in these tasks over here and then your"
},
{
"start": 487.16,
"end": 490.72,
"text": " score would be higher. It's kind of a benchmark to compare adaptation"
},
{
"start": 490.72,
"end": 498.68,
"text": " algorithms. Here your benchmark score and this is conditioned on n, the number of"
},
{
"start": 498.68,
"end": 503.96000000000004,
"text": " samples that you have in the in the layer two tasks and this here is"
},
{
"start": 503.96000000000004,
"end": 512.9200000000001,
"text": " standardized to 1000 in their case. The score of an adaptation algorithm A is"
},
{
"start": 512.92,
"end": 522.92,
"text": " the following. It's the expectation over this is kind of an error"
},
{
"start": 522.92,
"end": 527.04,
"text": " measure and you can think of it basically as a test set"
},
{
"start": 527.04,
"end": 533.12,
"text": " classification error on the layer two tasks. Of that adaptation algorithm if"
},
{
"start": 533.12,
"end": 540.4,
"text": " given the data set of a layer two tasks of n samples and the layer two tasks here"
},
{
"start": 540.4,
"end": 548.8,
"text": " comes from a distribution of layer two tasks. What does it mean? This"
},
{
"start": 548.8,
"end": 553.12,
"text": " distribution of layer two tasks they imagine, they show this in this picture,"
},
{
"start": 553.12,
"end": 559.4399999999999,
"text": " they imagine the visual tasks like on this big landscape of visual"
},
{
"start": 559.4399999999999,
"end": 565,
"text": " tasks right here and what they ideally want to do is they want to sample a"
},
{
"start": 565,
"end": 570.64,
"text": " task here and this task corresponds to classifying these dog images and very"
},
{
"start": 570.64,
"end": 576.32,
"text": " close to it could be classifying bird images but then very far away could be a"
},
{
"start": 576.32,
"end": 581.24,
"text": " task of counting and depth estimation and so on. They imagine all the visual"
},
{
"start": 581.24,
"end": 587.24,
"text": " tasks have some kind of some sort of distribution. So what happens is"
},
{
"start": 587.24,
"end": 594.06,
"text": " you sample one of those visual tasks for each element in this"
},
{
"start": 594.06,
"end": 599.76,
"text": " expectation. You sample one of them, you build the data set with a thousand"
},
{
"start": 599.76,
"end": 604.16,
"text": " samples right you put it through your adaptation algorithms or your"
},
{
"start": 604.16,
"end": 609.1199999999999,
"text": " adaptation algorithm for example your pre-trained image net you adapt it to"
},
{
"start": 609.1199999999999,
"end": 614.4799999999999,
"text": " that task with a thousand samples and then you compute your error metric on"
},
{
"start": 614.4799999999999,
"end": 621.8399999999999,
"text": " that. Now if you do this over the whole distribution you get an expectation of"
},
{
"start": 621.84,
"end": 628.24,
"text": " this error metric in all the visual tasks and that will be your score."
},
{
"start": 628.24,
"end": 633.4,
"text": " What does it mean in practice? I mean in practice you don't have this"
},
{
"start": 633.4,
"end": 639.8000000000001,
"text": " distribution right in practice you have a list so like list here is a list of"
},
{
"start": 639.8000000000001,
"end": 644.36,
"text": " tasks right there's this task this task this task this task there's whatever the"
},
{
"start": 644.36,
"end": 651.1600000000001,
"text": " pets task and then there is the aerial then there is the counting right you"
},
{
"start": 651.16,
"end": 658.24,
"text": " have a list of tasks and what is it like this stuff and this expectation"
},
{
"start": 658.24,
"end": 665.68,
"text": " ultimately right stage one train a model M stage two for each of these tasks"
},
{
"start": 665.68,
"end": 671.12,
"text": " adapt the model M or fine-tune your model M on these tasks then for each"
},
{
"start": 671.12,
"end": 678.0799999999999,
"text": " task get an error rate error rate one task two gives you error rate two tasks"
},
{
"start": 678.08,
"end": 687.32,
"text": " three gives you error rate three then jump simply one over n sum them up so"
},
{
"start": 687.32,
"end": 693.48,
"text": " take the take the average error rate of the of the of all of the tasks and"
},
{
"start": 693.48,
"end": 698.5200000000001,
"text": " that's your score that's kind of my first criticism of this thing like this"
},
{
"start": 698.5200000000001,
"end": 703.44,
"text": " this all just seems like super mathematized with like oh we imagine all"
},
{
"start": 703.44,
"end": 708.7600000000001,
"text": " of these tasks being in some distribution somewhere like that there"
},
{
"start": 708.7600000000001,
"end": 714.24,
"text": " is a distribution of tasks and we have an expectation over the distribution"
},
{
"start": 714.24,
"end": 720.6,
"text": " now like why just say here's a bunch of tasks right adapt your model to each one"
},
{
"start": 720.6,
"end": 727.6800000000001,
"text": " of them get the average error rate done that's your score that would have been"
},
{
"start": 727.6800000000001,
"end": 732.08,
"text": " first of all much easier and second of all they never actually care to"
},
{
"start": 732.08,
"end": 736.2800000000001,
"text": " characterize this distribution like if if they were to actually rigorously"
},
{
"start": 736.2800000000001,
"end": 740.1600000000001,
"text": " characterize this distribution of visual tasks I would agree that this"
},
{
"start": 740.1600000000001,
"end": 749,
"text": " formulation makes sense but all they say basically all they say is tasks that a"
},
{
"start": 749,
"end": 754.44,
"text": " human can solve from visual input alone and they give a bunch of examples of"
},
{
"start": 754.44,
"end": 764.9200000000001,
"text": " you know a good task would be the following right so label one one zero"
},
{
"start": 764.9200000000001,
"end": 769.9200000000001,
"text": " zero one right and you probably figured it out the task is is it a square or is"
},
{
"start": 769.9200000000001,
"end": 774.5200000000001,
"text": " it a triangle right that's a does a visual task in the classic sense human"
},
{
"start": 774.5200000000001,
"end": 779.08,
"text": " can solve it from visual input alone then the following task wouldn't be as"
},
{
"start": 779.08,
"end": 792.1600000000001,
"text": " easy labels one zero zero one so the task I had in mind was is there and"
},
{
"start": 792.1600000000001,
"end": 799.32,
"text": " spelling is the spelling of the shape over here does it contain an a so square"
},
{
"start": 799.32,
"end": 806,
"text": " contains an a circle doesn't line doesn't but triangle contains an a right"
},
{
"start": 806,
"end": 810.12,
"text": " so therefore this you kind of need world knowledge and you can't just solve it"
},
{
"start": 810.12,
"end": 815.12,
"text": " from visual input alone right especially not you can't generalize to new new"
},
{
"start": 815.12,
"end": 824.48,
"text": " shapes if you if you just from visually put so um they and they say appendix B"
},
{
"start": 824.48,
"end": 831.44,
"text": " they validate this right they validate that humans can solve it but I I"
},
{
"start": 831.44,
"end": 836.48,
"text": " actually disagree with this because just because humans can solve a task just"
},
{
"start": 836.48,
"end": 840.8800000000001,
"text": " from visual input doesn't mean that they don't use world knowledge in it like in"
},
{
"start": 840.8800000000001,
"end": 848,
"text": " this whatever pets example here right humans know how cats and dogs look"
},
{
"start": 848,
"end": 852.32,
"text": " anatomically right how they look from the side and from the back and so on"
},
{
"start": 852.32,
"end": 857.72,
"text": " even if they haven't seen it in a picture they they know how they behave"
},
{
"start": 857.72,
"end": 864.84,
"text": " and so on what is kind of realistic setting for a cat and a dog to be in so"
},
{
"start": 864.84,
"end": 870.12,
"text": " all of this it seems kind of a bit shady and the reason I'm saying this is if"
},
{
"start": 870.12,
"end": 874.4,
"text": " you make this distribution formulation you also you have to give a rigorous"
},
{
"start": 874.4,
"end": 880.76,
"text": " definition and because if a new task arrives now like one that's not in your"
},
{
"start": 880.76,
"end": 886.24,
"text": " list like never been before here in the world like new task arrives how do we"
},
{
"start": 886.24,
"end": 891.64,
"text": " know whether or not we should include it in the list or not right how do we know"
},
{
"start": 891.64,
"end": 899.04,
"text": " whether it's part of this distribution or not it just seems very very shaky so"
},
{
"start": 899.04,
"end": 905.6800000000001,
"text": " that being said they do give this list and this list has 19 tasks that's down"
},
{
"start": 905.6800000000001,
"end": 910.36,
"text": " here so there are 19 tasks their categorized as natural which means"
},
{
"start": 910.36,
"end": 916.24,
"text": " natural images these these yeah the examples here are pets flowers images"
},
{
"start": 916.24,
"end": 923.08,
"text": " house numbers and so on specialized images are for example images with that"
},
{
"start": 923.08,
"end": 929.04,
"text": " you special equipment for example medical images and then structured means"
},
{
"start": 929.04,
"end": 936.12,
"text": " where that's down here structured means that the model needs come to comprehend"
},
{
"start": 936.12,
"end": 941.64,
"text": " the structure of a scene so they give an example of object counting or 3d depth"
},
{
"start": 941.64,
"end": 947.12,
"text": " prediction I mean that's that's fair enough they have these 19 tasks but and"
},
{
"start": 947.12,
"end": 955.6,
"text": " they show kind of the tasks down here here's a list of tasks and kind of their"
},
{
"start": 955.6,
"end": 963.12,
"text": " baseline method on it but but why for me like the question is why exactly these"
},
{
"start": 963.12,
"end": 969.32,
"text": " tasks if they don't specify this distribution why these tasks and they"
},
{
"start": 969.32,
"end": 973.12,
"text": " don't really like they do some they do a lot of experimentation actually an"
},
{
"start": 973.12,
"end": 978.16,
"text": " investigation but what's kind of missing for me is to show that these tasks first"
},
{
"start": 978.16,
"end": 983.08,
"text": " of all are kind of internally consistent in that they're really visual tasks and"
},
{
"start": 983.08,
"end": 988,
"text": " second of all that they kind of cover this distribution or they represent"
},
{
"start": 988,
"end": 993.52,
"text": " this entire distribution that they're trying to model and it seems to me"
},
{
"start": 993.52,
"end": 999.76,
"text": " unclear why exactly these tasks why they left others out and included these ones"
},
{
"start": 999.76,
"end": 1005.16,
"text": " in all fairness probably they simply took the ones that that they could get"
},
{
"start": 1005.16,
"end": 1014.4,
"text": " their hands on but still I feel that this is very shaky and that might that"
},
{
"start": 1014.4,
"end": 1020.28,
"text": " might lead to the benchmark not being adapted very widely but alright enough"
},
{
"start": 1020.28,
"end": 1027.36,
"text": " with the criticism let's go further in this so they do present this kind of"
},
{
"start": 1027.36,
"end": 1034.52,
"text": " baseline experiments and they they pre train always on image net and then they"
},
{
"start": 1034.52,
"end": 1041.12,
"text": " they they fine-tune on these layer two tasks and the way they pre train here is"
},
{
"start": 1041.12,
"end": 1046.1599999999999,
"text": " listed here for example so if they pre train a generative model it actually"
},
{
"start": 1046.1599999999999,
"end": 1050.36,
"text": " performs worse than if they just train from scratch for the layer two tasks on"
},
{
"start": 1050.36,
"end": 1056.1599999999999,
"text": " the thousand samples right self supervised is kind of a pre training"
},
{
"start": 1056.1599999999999,
"end": 1060.6,
"text": " method where if you have an image you do something like you rotate it to the"
},
{
"start": 1060.6,
"end": 1065.6399999999999,
"text": " right or to the left and then you ask a model some sort of a discriminator did"
},
{
"start": 1065.6399999999999,
"end": 1070,
"text": " it did I turn it to the right or to the left like zero is to the right left and"
},
{
"start": 1070,
"end": 1074.64,
"text": " one is to the right so you this is called self supervised you don't need"
},
{
"start": 1074.64,
"end": 1082.04,
"text": " labels for this right and it kind of works well semi supervised has some of"
},
{
"start": 1082.04,
"end": 1087.92,
"text": " the labels and supervised has is like image net with full labels and you kind"
},
{
"start": 1087.92,
"end": 1093.52,
"text": " of see unsurprisingly that the more information you have the the better you"
},
{
"start": 1093.52,
"end": 1098.52,
"text": " are going to be in all of these these kind of tasks interestingly the"
},
{
"start": 1098.52,
"end": 1105.84,
"text": " generative pre training works the worst worse than even from scratch training so"
},
{
"start": 1105.84,
"end": 1114.24,
"text": " that's kind of a sort of special what what I do really appreciate about this"
},
{
"start": 1114.24,
"end": 1121.44,
"text": " this investigation here is that they investigate a lot of variants of this"
},
{
"start": 1121.44,
"end": 1128.48,
"text": " of this benchmark and they come to the conclusion I think this encapsulated"
},
{
"start": 1128.48,
"end": 1134.64,
"text": " here one for example we find two models using 16 Google Cloud TPU hardware"
},
{
"start": 1134.64,
"end": 1139.32,
"text": " accelerators now that's expensive right but they say we conduct additional"
},
{
"start": 1139.32,
"end": 1143.6,
"text": " experiments to assess whether our result can be reproduced with a more basic"
},
{
"start": 1143.6,
"end": 1149.72,
"text": " hardware setup we evaluate on all the tasks using a single Nvidia P100 GPU"
},
{
"start": 1149.72,
"end": 1156.24,
"text": " with a thousand steps 64 images per mini batch right so they verify that you can"
},
{
"start": 1156.24,
"end": 1160.72,
"text": " do this benchmark you can take part in this benchmark even if you don't have"
},
{
"start": 1160.72,
"end": 1167.1200000000001,
"text": " much time or money or hardware right that's why for example they limit they"
},
{
"start": 1167.1200000000001,
"end": 1172.6,
"text": " limit the number of examples in the layer two tasks to a thousand they do"
},
{
"start": 1172.6,
"end": 1177.92,
"text": " investigate that this correlates with your performance if you were to include"
},
{
"start": 1177.92,
"end": 1182.56,
"text": " the full data sets of the layer two tasks so if you just include a thousand"
},
{
"start": 1182.56,
"end": 1187.36,
"text": " examples that correlates well they do investigate they do investigate whether"
},
{
"start": 1187.36,
"end": 1193.44,
"text": " you can put it on a single GPU they do investigate if you only run it for a"
},
{
"start": 1193.44,
"end": 1196.44,
"text": " thousand steps here you see this experiment you have to run it for a"
},
{
"start": 1196.44,
"end": 1202.28,
"text": " thousand steps basically and you're almost at the level if as if you were to"
},
{
"start": 1202.28,
"end": 1207.6,
"text": " run it for 50,000 steps so there's a lot of work to that goes into making sure"
},
{
"start": 1207.6,
"end": 1212.8,
"text": " that everybody can kind of participate in this benchmark and that I appreciate"
},
{
"start": 1212.8,
"end": 1220.1999999999998,
"text": " this a lot and there is actually code available so if you go to github and"
},
{
"start": 1220.1999999999998,
"end": 1225.08,
"text": " you just search for task adaptation actually I had it open before but I don't"
},
{
"start": 1225.08,
"end": 1231.7199999999998,
"text": " know so you go to github and you go to Google research and search for task"
},
{
"start": 1231.72,
"end": 1243.2,
"text": " adaptation to adaptation you'll you'll find it there is code that downloads all"
},
{
"start": 1243.2,
"end": 1249.2,
"text": " of the data sets for you prepares them and there is a script that runs your"
},
{
"start": 1249.2,
"end": 1253.64,
"text": " layer one model so you need to provide it a layer one model but then there is"
},
{
"start": 1253.64,
"end": 1261.1200000000001,
"text": " a script that that runs it on all of the different layer two tasks and at the end"
},
{
"start": 1261.12,
"end": 1267.1999999999998,
"text": " calculates your benchmark for you so that's pretty neat and I would encourage"
},
{
"start": 1267.1999999999998,
"end": 1272.4799999999998,
"text": " you if you have a good idea for a pre training or for a adaptation algorithm"
},
{
"start": 1272.4799999999998,
"end": 1277.28,
"text": " take part in the benchmark I suspect there will be a leaderboard kind of"
},
{
"start": 1277.28,
"end": 1282.12,
"text": " online leaderboard coming out at some point otherwise you simply can report"
},
{
"start": 1282.12,
"end": 1288.1999999999998,
"text": " the number in your papers and I hope you are going to be successful at that all"
},
{
"start": 1288.2,
"end": 1296.28,
"text": " right so that was it for me have lots of fun and bye bye"
}
] |
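To make the light-weight evaluation protocol discussed in the transcript above concrete, here is a minimal sketch of what "fine-tune on roughly a thousand downstream examples for roughly a thousand steps with mini-batches of 64 on a single GPU, then measure accuracy" could look like. This is not the authors' task_adaptation code; it assumes PyTorch, and names such as `backbone`, `downstream_train` and `downstream_test` are placeholders for whatever pre-trained layer-one model and layer-two task you plug in.

```python
# Hypothetical sketch of the cheap downstream ("layer two") evaluation protocol.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def evaluate_downstream(backbone, feat_dim, num_classes,
                        downstream_train, downstream_test,
                        steps=1000, batch_size=64, lr=1e-3, device="cuda"):
    # backbone is assumed to map an image batch to flat feat_dim features
    model = nn.Sequential(backbone, nn.Linear(feat_dim, num_classes)).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    loader = DataLoader(downstream_train, batch_size=batch_size, shuffle=True)

    model.train()
    it = iter(loader)
    for _ in range(steps):              # ~1000 update steps, as discussed above
        try:
            x, y = next(it)
        except StopIteration:           # the ~1000-example set is cycled
            it = iter(loader)
            x, y = next(it)
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in DataLoader(downstream_test, batch_size=batch_size):
            pred = model(x.to(device)).argmax(dim=1).cpu()
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total              # one task score; average over all tasks
```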
69IjNZaoeao | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | LeDeepChef 👨🍳 Deep Reinforcement Learning Agent for Families of Text-Based Games | [
"Science & Technology"
] | [
"ml",
"machine learning",
"reinforcement learning",
"recipe",
"text-based games",
"text games",
"natural language processing",
"nlp",
"actor",
"critic",
"GRU",
"embedding",
"pretraining",
"artificial intelligence",
"ai",
"competition",
"microsoft"
] | The AI cook is here! This agent learns to play a text-based game where the goal is to prepare a meal according to a recipe. Challenges? Many! The number of possible actions is huge, ingredients change and can include ones never seen before, you need to navigate rooms, use tools, manage an inventory and sequence everything correctly and all of this from a noisy textual description that the game engine throws at you. This paper mixes supervised explicit training with reinforcement learning in order to solve this task.
Abstract:
While Reinforcement Learning (RL) approaches lead to significant achievements in a variety of areas in recent history, natural language tasks remained mostly unaffected, due to the compositional and combinatorial nature that makes them notoriously hard to optimize. With the emerging field of Text-Based Games (TBGs), researchers try to bridge this gap. Inspired by the success of RL algorithms on Atari games, the idea is to develop new methods in a restricted game world and then gradually move to more complex environments. Previous work in the area of TBGs has mainly focused on solving individual games. We, however, consider the task of designing an agent that not just succeeds in a single game, but performs well across a whole family of games, sharing the same theme. In this work, we present our deep RL agent--LeDeepChef--that shows generalization capabilities to never-before-seen games of the same family with different environments and task descriptions. The agent participated in Microsoft Research's "First TextWorld Problems: A Language and Reinforcement Learning Challenge" and outperformed all but one competitor on the final test set. The games from the challenge all share the same theme, namely cooking in a modern house environment, but differ significantly in the arrangement of the rooms, the presented objects, and the specific goal (recipe to cook). To build an agent that achieves high scores across a whole family of games, we use an actor-critic framework and prune the action-space by using ideas from hierarchical reinforcement learning and a specialized module trained on a recipe database.
Authors: Leonard Adolphs, Thomas Hofmann
https://arxiv.org/abs/1909.01646 | Hi there. Today we're looking at Le Deep Chef, deep reinforcement learning agent for families of text-based games by Leonard Adolfs and Thomas Hoffmann. So this is a paper about engineering an agent for a particular family of tasks. This is different from reinforcement learning agents that for example are just good at one game, let's say Pong or whatnot and even I guess even things like Starcraft. Though this kind of depends on what you mean by game. So what are we talking about here? The following is a text-based games where the goal is to cook recipes. So let's just jump in and see what goes on. The game starts by telling you, you are hungry. Let's cook a delicious meal and so on. So the objective is basically always the same. It's find the cookbook, read the recipe that's in it, then collect all the things that are in the recipe, prepare them in certain ways that are also specified by the recipe and then at the end you have a meal and then you can eat the meal and that will give you points. But since it's a text-based games and the input doesn't come structured but it comes in natural text. So the game tells you for example kitchen. So basically you're in the kitchen. You are now in the kitchen. I guess you better just go and list everything you see here. You hear a noise, you spin around. So you see that the kind of input you get from the game is very playful, has a lot of descriptive elements. Sometimes it's like you see a closed oven. You make out a table. Then you can see on the counter you can make out a sliced fried red hot pepper and so on. So it's very much not trivial to kind of parse this in a traditional way. If you were to go about this by simply writing an algorithm extracting things it's very hard because for example you might see that there's an oven but it's a closed oven. You make out a table. So this is kind of a synonym for you see a table but you see like there is a table. You can make out a sliced fried red hot pepper and here it's important not only do you need to realize that there is a red hot pepper but also that its state is sliced and fried. This is important because you need all ingredients in a certain state. Right? You examine here you examine the stove so there is a stove. Right? So all these things you need to kind of understand. So if you now look there is a recipe book in here. Or no there isn't a recipe. You can examine recipe. I guess there is a recipe book in that room. If there is a recipe book then you can examine the recipe and that's the command. So the arrows here always indicate that that's a user command. And these you have to type. That's like the next thing that your agent needs to do. You can't select from a predefined set of actions. You actually need to type in the things you want to do. Right? And these are a lot. Like there are a lot of possibilities of what you could type in. Even if you restrict it to kind of what you know the game accepts there are still so many actions. It's way different than for example Atari games. They always have eight actions. Like there's eight buttons you could possibly press and that's it. And here there are like combinatorically many things you can do. Like you can prepare and take and all the ingredients. You don't know which ingredients come. So here you examine the recipe. Let's look at a recipe. It says you open the recipe. Start reading. Recipe number one. Here are the ingredients. Red hot pepper. Here for right now that's just one ingredient. 
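To get a feeling for why these observations are hard to consume, here is a toy sketch (not from the paper) of pulling an ingredient and its state out of one of the game's descriptions; the ingredient and state word lists are made-up examples, and the actual agent of course learns this instead of relying on a hand-written rule.

```python
# Toy illustration: the same ingredient can show up with different state
# adjectives ("sliced", "fried", ...) and different carrier phrases
# ("you see", "you can make out", ...). Word lists below are invented.
import re

STATES = ["sliced", "diced", "chopped", "fried", "roasted", "grilled", "raw"]
INGREDIENTS = ["red hot pepper", "carrot", "white onion", "purple potato"]

def find_ingredients(observation: str):
    """Return (ingredient, set_of_states) pairs mentioned in a game observation."""
    obs = observation.lower()
    found = []
    for ing in INGREDIENTS:
        for match in re.finditer(re.escape(ing), obs):
            # look at the few words right before the ingredient for state adjectives
            prefix = obs[:match.start()].split()[-3:]
            states = {w for w in prefix if w in STATES}
            found.append((ing, states))
    return found

obs = ("On the counter you can make out a sliced fried red hot pepper. "
       "You make out a table. There is a raw carrot on the stove.")
print(find_ingredients(obs))
# e.g. [('red hot pepper', {'sliced', 'fried'}), ('carrot', {'raw'})]
```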
Then there are directions. So what do you need to do? Slice the red hot pepper. Fry the red hot pepper and prepare the meal. Those are the directions of the recipe. You also have this inventory command which tells you which you're carrying. Next difficulty. The inventory is finite. So you can't carry everything. At some points you have to drop things that are unnecessary. You can't just take everything. Here you see the command take red hot pepper. That only works if there's a red hot pepper in the room. And here says you take the red hot pepper from the counter. Your score has just gone up by one point. And then if you type inventory it says you're carrying a sliced fried red hot pepper. Again here it says the state of the ingredient. So the ingredient is the red hot pepper and the state is sliced and fried. And then you can prepare meal and then you can eat meal and then it says your score has just gone up by one point. And these are the scores you collect. So there are a lot of difficulties that are actually not shown in this example. For example there are different rooms. You may have noticed here you're in the kitchen. But there could be other rooms and you start in a random room. You also need to navigate through the rooms. Close the doors to the rooms could be closed and then you need to open them and so on. You can only for example if this pepper here weren't already sliced and fried you need to find... You can only slice it if there is a knife in the room. You can only fry it if there is a frying pan or an oven or a stove in the room. So and then you'd have to notice that there is a knife. If there is no knife you need to take the red hot pepper bring it to a new room with a knife and then slice it. So this is vastly difficult game. The last difficulty is actually that in the test set there will be ingredients that you haven't seen during training. So also that there. Your agent needs to generalize. That's why it says a family of text-based games. Because the objective always the same to kind of cook the recipe. But the things you have to do and the things that appear and so on those are those change basically from episode to episode. And the test set will be different than the training set or kind of there will be unseen data. Alright so how does this paper go about solving this problem? This paper basically does the following and we are going here from high level to low level. On the highest level it's a reinforcement learning agent and that is sort of how you would imagine an RL agent to work. So here at the end you have a policy and the policy predicts an action. If you don't know what a kind of a policy and an action things are in RL these are basic RL concept and we'll kind of skip them here and I'll assume everyone knows what they are. But essentially a policy specifies which action you take next given the current game state. So the policy is made up, scores different actions. So at each step there are k actions available. And these k actions I foresaid there are almost infinitely many actions that you could take. The first difficulty and that's the thing that actually comes in here is to reduce all of the possible actions that you can't even list to just k commands. So we'll go into that later how this is done. But basically one of the main contributions of this paper is how do you even specify what is reasonable, what would be reasonable to do in the current situation. And then the policy over here only has to decide among those reasonable actions, not among all actions. 
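As a rough illustration of this action-space pruning, the sketch below builds a small candidate list from what is visible in the room, what the recipe still needs and which tools are around. This is my own simplification, not the paper's learned modules, and all of the argument names are hypothetical.

```python
# Hypothetical candidate-command generator: a handful of reasonable commands
# instead of a combinatorial space of strings.
def candidate_commands(recipe_ingredients, missing, inventory,
                       visible_items, directions, doors):
    cmds = ["look", "inventory", "examine cookbook"]
    # take ingredients the recipe still needs and that are visible in the room
    cmds += [f"take {item}" for item in visible_items if item in missing]
    # drop inventory items the recipe does not need at all (inventory is finite)
    cmds += [f"drop {item}" for item in inventory if item not in recipe_ingredients]
    # preparation steps only make sense when the required tool is in the room
    if "knife" in visible_items:
        cmds += [f"slice {item}" for item in inventory if item in recipe_ingredients]
    if "stove" in visible_items or "oven" in visible_items:
        cmds += [f"fry {item}" for item in inventory if item in recipe_ingredients]
    # navigation: open every closed door, walk in every detected direction
    cmds += [f"open {door}" for door in doors]
    cmds += [f"go {direction}" for direction in directions]
    if not missing:
        cmds.append("prepare meal")
    return cmds

print(candidate_commands(
    recipe_ingredients={"red hot pepper", "carrot", "white onion"},
    missing={"red hot pepper"},
    inventory=["carrot", "white onion", "water"],
    visible_items={"red hot pepper", "knife", "counter"},
    directions=["north"],
    doors=["sliding patio door"],
))  # roughly nine candidate commands for the policy to score
```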
But given that you have k reasonable commands you see here command one command, these are embedded and then fed into GRUs which are recurrent neural networks. So for each of these commands you'll get a 32 dimensional vector. This 32 dimensional vector is here C1 through Ck. Each are combined with an encoding of the current state. So these 32 dimensional vector are combined with encoding of the current state which is 256 dimensional and then fed into a neural network that will output a probability distribution over these actions. This is pretty classic in deep reinforcement learning. So you have action encoding and the state encoding and the policy decides on that. The state encoding you'll see here it's the same everywhere of course because the current game state is the current game state. This comes from this model up here. What this does is over here you have the what you would call the state the current observation. The current observation is composed of many things. Specifically the following eight things. The first one is actually called observation which is I would call all of this the current observation from an RL perspective. But the first is actually observation. It's whatever you saw the big text you saw before. Like you were in the kitchen it looks like this it smells like this you turn around and so on. This would be the observation. It's what the game engine says at the current time step. This is just a piece of text. Second missing items. Third unnecessary items. Now these things you might wonder okay how do I know what what items are missing and unnecessary. These things come from another model that this paper trains and we'll get into that later. But basically they have a method of specifying which items are still missing which are unnecessary and they list those here. Then description which is the output of the last look command. So in each room you can look you can type look and then it'll give you a description of the room and what's in there. The previous commands this is often used in RL either explicitly or implicitly through a recurrent network in order to give the agent an idea what what happened in the in the previous steps or what it did so that it doesn't repeat actions unnecessarily or so it learns to not repeat actions unnecessarily. Required utilities. Again this is a model that's kind of trained to predict what utilities are required to perform some actions. So as I said before if you want to slice the red hot pepper you need a knife. If you want to fry it you need a stove. Discovered locations. As I said there are different rooms you actually don't know what rooms there are before you actually go in in there. So before you go through a door you reach another room. So the list of previously discovered and visited locations is there and then the name of the current location it is also there. So these are eight things that make up the current observation. These eight things are just strings of text and these eight things are each one as you can see here these are that the eight things from observation to location each one are embedded and fed also into an RNN. So for each of these eight things you'll obtain a 32 dimensional vector and these are all concatenated to make up one big 256 dimensional vector. So this 256 dimensional vector will contain all the necessary information about the current room what's in there what what items are you still missing what items do you have in your inventory which ones are unnecessary and so on. 
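A minimal PyTorch sketch of this encoder-plus-policy setup is given below. The dimensions follow the description above (32-dimensional GRU encodings, eight observation strings concatenated into a 256-dimensional state, a recurrence over time steps, a scorer over the k candidate commands and a value head); the tokenization and the exact layer sizes of the scorer are my own guesses, not the paper's precise architecture.

```python
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Embed a sequence of token ids and return the GRU's final hidden state."""
    def __init__(self, vocab_size, emb_dim=32, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        _, h = self.gru(self.emb(token_ids))      # h: (1, batch, hidden)
        return h.squeeze(0)                       # (batch, hidden)

class LeDeepChefSketch(nn.Module):
    def __init__(self, vocab_size, n_obs_parts=8, hidden=32):
        super().__init__()
        self.obs_enc = TextEncoder(vocab_size, hidden=hidden)
        self.cmd_enc = TextEncoder(vocab_size, hidden=hidden)
        state_dim = n_obs_parts * hidden                    # 8 * 32 = 256
        self.history = nn.GRUCell(state_dim, state_dim)     # recurrence over time steps
        self.scorer = nn.Sequential(nn.Linear(state_dim + hidden, 64),
                                    nn.ReLU(),
                                    nn.Linear(64, 1))
        self.value = nn.Linear(state_dim, 1)                 # critic head

    def forward(self, obs_parts, commands, h_prev=None):
        # obs_parts: list of 8 tensors of shape (1, seq_len); commands: (k, seq_len)
        parts = [self.obs_enc(p) for p in obs_parts]         # eight (1, 32) vectors
        state = torch.cat(parts, dim=-1)                      # (1, 256)
        h = self.history(state, h_prev)                       # condition on the past
        cmd_vecs = self.cmd_enc(commands)                      # (k, 32)
        pair = torch.cat([h.expand(cmd_vecs.size(0), -1), cmd_vecs], dim=-1)
        logits = self.scorer(pair).squeeze(-1)                 # one score per command
        return torch.softmax(logits, dim=0), self.value(h), h
```

At each game step one would call this module with the eight tokenized observation strings, the k tokenized candidate commands and the previous hidden state, sample a command from the returned distribution, and train the policy and value outputs with the actor-critic objective described next in the transcript.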
So if you train this correctly this 256 dimensional vector will describe the current game state as it is relevant to your agent like everything about it every relevant information that's in here will be encoded in this vector. Now this vector isn't the final state encoding yet what you'll have is you feed this into an RNN that takes as input the last time steps you have to imagine the last time step already there was observation blah blah blah this entire thing was I'm just copying I'm just copying this box over here so this entire thing was already done last step and already fed into an RNN so this this is an RNN that actually goes over time and the last whatever the output here is it will be fed to the next step and this is a trick often done in reinforcement learning as well that you actually have a recurrent neural network over the time steps so each time step you have a certain observation you encode it and so on you get a description of that and then you feed this into an RNN what the RNN can learn to do is it can learn to react to different not only to the current observation but to the current observation conditioned on the history of previous observations so it can learn before I was in this room now I'm in this new room so I actually haven't you know taken all the items from this room yet because I just came into this room and so on so the the kind of component where you are able to look at the past and what happened in the past is in captured by this RNN here so it's fairly complicated architecture but this here this state encoding that is conditioned on the also on the history then goes into this into here that's it that's the vector that goes in here is combined with each action so all of these actions here these K actions and this is all fed through a neural network and that will give you the policy this is a fairly complicated thing but if you look at it it's not it's not too it's not too difficult actually so what you'll do is you will take your observations here this is all observation it will be encoded and combined with the history in order to give you this in order to give you an encoding of the current state on the other hand you'll take all of the possible commands that you could perform right now encode each one separately right into an embedding and then you combine each one of those with this encoding you specified previously that you and and from that you make your decision which action to take next and the action here is the one that's output is the action you take next sampled from this policy the last thing you need is a value network and this is just important for reinforcement learning which tells you from this state here so I'm getting weird with colors here from this state here which is the same as this one so you'd simply transfer this over from this state how valuable is that what's my value of the state and the value is if I'm in this state and I act as I normally act what are all my future rewards going to be combined so it basically gives you a value of this state you can think of this in for example terms of chess if you had this in chess and then this here is it would be a description of the chessboard this HT and the value would be how valuable is this position for you so if you're very much ahead and material and position and so on this value would be very high if you're behind this value would be very low and this is in a real network simply trying to predict that value so with all of this you now have a never good basis to do reinforcement 
learning you have a policy you have a value network and from that you can train an RL agent and this is done classically in an actor critic way where you do advantage learning here the advantage and the policy you train weighted by the advantage then the value network you train to be close to their reward and then you have an entropy penalty if you don't know what these things are the video will get bit too long if I were to go over these reinforcement learning concepts but these are very standard in reinforcement learning so you can train these you can basically train what it does is you can train these neural networks in absence of label training data because you don't know what the best action is in each step right there's no one telling you you just have a reward you just sometimes you get a point and you don't know which actions led to that so these things will actually allow you to train these neural networks by using just the reward without knowing which exact actions were right and wrong and that's the core of reinforcement learning obviously alright so the the core one of the core ingredients actually is this recipe manager and the recipe manager is a sub model that does the following so here it takes as an input the cookbook here and it also takes as an input the inventory and it outputs something like this and this this is a this is a table representation of what it outputs it will output all the ingredients that you need for the recipe whether or not this input that this ingredient is currently missing from your inventory and action to perform so which actions still need to be performed so let's look at the following let's look at this example the recipe tells you you need the ingredients are a carrot a red hot pepper and a white onion and the inventory says you care you're carrying a white onion and a carrot right so down here you see aha we we do actually have we do actually have a carrot so it's not missing the carrot isn't missing you have it in your inventory the red hot pepper is missing we don't have it in the inventory but we need it for the recipe the white onion we need for the recipe but it's not missing then it also is for each of the ingredients is supposed to tell you this recipe model which of the what you still need to perform on it so here it says slice the carrot roast the carrot and you simply have a carrot it doesn't say slice the roast that means it's not sliced and roasted so the recipe is supposed to output you still need to slice and roast the carrot here for example for the white onion says fry the white onion and as you can see in the inventory it says you're carrying a fried white onion so for the white onion you see we don't need to do anything anymore so that the recipe model is basically trying to to make this table here and this table you can see as an intermediary step in order to do all the other things and the difference here to a pure RL method and this is important the difference is that this representation this intermediate table representation is done explicitly so the recipe model really produces a table like this and not just in other RL methods people go about and make this recipe model output some sort of you know let's say a 200 dimensional vector that's supposed to encompass all of this information and that doesn't appear to work as well like often that if you simply train this end-to-end that will not pick up on the important information because the training signal tends to be way too weak you have to imagine you already have this 
really really big model construction here and you're trying to learn it you're trying to learn it from a tiny reward signal that you get at the end right this is very noisy signal now if if you're now trying to say well the inputs to these things right this command here and we also saw the inputs to these these depend on this recipe model also now are whatever giant neural network construction here and we'll all train this end-to-end and these will actually not be text these will actually be some sort of latent vectors that will often fail because you're now just trying to extract information from too noisy of a reward signal so the authors here do actually pretty neat separation of that and they train this recipe model with actually an augmented data set so they go to freebase and get more food items and then they construct a data set that resembles this and train it in a supervised way to output tables tables like this so this is is pretty smart and I think it's a good lesson if you ever attempt something like this that really really important information such as this one if you can train it in a supervised way as a kind of a pre-processing step to your RL procedure that's extremely helpful here you can you can see how this is then used so by combining this table that was output from the recipe model and your inventory and the output of this look command you can then generate these commands so before we said it's important to reduce the everything you could do which is infinite things to everything that is reasonable to do currently and this model here does that so given this given that and given the description of what's currently in the room you can now generate these commands and for example take knife if you have to slice something because you see a knife is in the room and you could conceivably take the knife right you can construct these commands but also since you know right since you know what's since you know what's in your inventory and since you know which things are still missing you can generate commands like take the white onion or drop the water because you don't need the water right so um the the offers also group these things here in this what they call high-level commands which take all required items from here simply means take everything that's in the room that is not in the inventory but you need it so these things which for an RL agent it makes sense to group these things together because it doesn't make sense to have them as two separate things if you need both take both if you don't need any what if you have a new entry drop all of these things so that makes sense that's a small optimization that apparently brought some gains but the kind of the the overarching message here is that once you have a once you have this information from the recipe model you can then use it in many useful ways in order to make life for your RL agent easier alright so that kind of is the entire model that's very it's quite convoluted but basically you start with this here this recipe manager you decide you output this table down here which ingredients are in the recipe are they still missing and which actions we need to perform you then combine it with this information here the information about the current room and your inventory in order to come up with a set of commands that are conceivable to do here you combine these commands with some commands that are always available so commands that are always available are things like look inventory prepare meal you have that right you add 
that if the recipe manager does not output any missing and the agents location is the kitchen so you can add these other items and also we're not even gonna get into that you add navigational items because there are doors in these rooms and you need to navigate around so they actually train another model to here you see to detect to detect directions that you could move into and open doors for every closed door in the room so that's another challenge that the agent needs to overcome they have to build an entire model to predict which doors are there and are they closed do you need to open them so these commands if there are doors and if you can move through them these commands are also added to this set of commands that are reasonable so now we have a set of commands that are reasonable over here then you describe the room here you put both into this embedding and then finally your policy outputs an action that's that that's the entire process very convoluted very big very astonishing that this works with our L but in order to need to get it to work you actually need to do this supervised training and the experimental evidence here is quite solid in that they compare to baseline systems that that use classic techniques and they do some ablation over over their individual parts and they get second place I think in a competition about these text-based games so that's pretty good and that was it for me and check it out and bye bye | [
{
"start": 0,
"end": 5.4,
"text": " Hi there. Today we're looking at Le Deep Chef, deep reinforcement learning agent"
},
{
"start": 5.4,
"end": 11.28,
"text": " for families of text-based games by Leonard Adolfs and Thomas Hoffmann. So"
},
{
"start": 11.28,
"end": 18.400000000000002,
"text": " this is a paper about engineering an agent for a particular family of tasks."
},
{
"start": 18.400000000000002,
"end": 22.400000000000002,
"text": " This is different from reinforcement learning agents that for example are"
},
{
"start": 22.4,
"end": 30.24,
"text": " just good at one game, let's say Pong or whatnot and even I guess even things"
},
{
"start": 30.24,
"end": 39.08,
"text": " like Starcraft. Though this kind of depends on what you mean by game. So what"
},
{
"start": 39.08,
"end": 45.32,
"text": " are we talking about here? The following is a text-based games where the goal is"
},
{
"start": 45.32,
"end": 55.7,
"text": " to cook recipes. So let's just jump in and see what goes on. The game"
},
{
"start": 55.7,
"end": 62.16,
"text": " starts by telling you, you are hungry. Let's cook a delicious meal and so on."
},
{
"start": 62.16,
"end": 68.52,
"text": " So the objective is basically always the same. It's find the cookbook, read the"
},
{
"start": 68.52,
"end": 75.16,
"text": " recipe that's in it, then collect all the things that are in the recipe, prepare"
},
{
"start": 75.16,
"end": 80.47999999999999,
"text": " them in certain ways that are also specified by the recipe and then at the"
},
{
"start": 80.47999999999999,
"end": 84.75999999999999,
"text": " end you have a meal and then you can eat the meal and that will give you points."
},
{
"start": 84.75999999999999,
"end": 91.52,
"text": " But since it's a text-based games and the input doesn't come structured but it"
},
{
"start": 91.52,
"end": 98.52,
"text": " comes in natural text. So the game tells you for example kitchen. So basically"
},
{
"start": 98.52,
"end": 102.72,
"text": " you're in the kitchen. You are now in the kitchen. I guess you better just go and"
},
{
"start": 102.72,
"end": 107.8,
"text": " list everything you see here. You hear a noise, you spin around. So you see that"
},
{
"start": 107.8,
"end": 113.84,
"text": " the kind of input you get from the game is very playful, has a lot of descriptive"
},
{
"start": 113.84,
"end": 123.6,
"text": " elements. Sometimes it's like you see a closed oven. You make out a table. Then"
},
{
"start": 123.6,
"end": 130.04,
"text": " you can see on the counter you can make out a sliced fried red hot pepper and so"
},
{
"start": 130.04,
"end": 136.92,
"text": " on. So it's very much not trivial to kind of parse this in a traditional way."
},
{
"start": 136.92,
"end": 141.84,
"text": " If you were to go about this by simply writing an algorithm extracting things"
},
{
"start": 141.84,
"end": 147.32,
"text": " it's very hard because for example you might see that there's an oven but it's"
},
{
"start": 147.32,
"end": 153.07999999999998,
"text": " a closed oven. You make out a table. So this is kind of a synonym for you see a"
},
{
"start": 153.08,
"end": 160.76000000000002,
"text": " table but you see like there is a table. You can make out a sliced fried red hot"
},
{
"start": 160.76000000000002,
"end": 164.48000000000002,
"text": " pepper and here it's important not only do you need to realize that there is a"
},
{
"start": 164.48000000000002,
"end": 170.8,
"text": " red hot pepper but also that its state is sliced and fried. This is important"
},
{
"start": 170.8,
"end": 179,
"text": " because you need all ingredients in a certain state. Right? You examine here you"
},
{
"start": 179,
"end": 186.96,
"text": " examine the stove so there is a stove. Right? So all these things you need to"
},
{
"start": 186.96,
"end": 193.48,
"text": " kind of understand. So if you now look there is a recipe book in here."
},
{
"start": 193.48,
"end": 200.2,
"text": " Or no there isn't a recipe. You can examine recipe. I guess there is a recipe"
},
{
"start": 200.2,
"end": 206.84,
"text": " book in that room. If there is a recipe book then you can examine the recipe and"
},
{
"start": 206.84,
"end": 211.32,
"text": " that's the command. So the arrows here always indicate that that's a user"
},
{
"start": 211.32,
"end": 217.16,
"text": " command. And these you have to type. That's like the next thing that"
},
{
"start": 217.16,
"end": 223.64000000000001,
"text": " your agent needs to do. You can't select from a predefined set of actions."
},
{
"start": 223.64000000000001,
"end": 228.96,
"text": " You actually need to type in the things you want to do. Right? And these are a"
},
{
"start": 228.96,
"end": 233.32,
"text": " lot. Like there are a lot of possibilities of what you could type in."
},
{
"start": 233.32,
"end": 237.48,
"text": " Even if you restrict it to kind of what you know the game accepts there are"
},
{
"start": 237.48,
"end": 243.4,
"text": " still so many actions. It's way different than for example Atari games."
},
{
"start": 243.4,
"end": 246.51999999999998,
"text": " They always have eight actions. Like there's eight buttons you could"
},
{
"start": 246.51999999999998,
"end": 252.64,
"text": " possibly press and that's it. And here there are like combinatorically many"
},
{
"start": 252.64,
"end": 259.03999999999996,
"text": " things you can do. Like you can prepare and take and all the ingredients. You"
},
{
"start": 259.04,
"end": 264.92,
"text": " don't know which ingredients come. So here you examine the recipe."
},
{
"start": 264.92,
"end": 269.6,
"text": " Let's look at a recipe. It says you open the recipe. Start reading. Recipe number"
},
{
"start": 269.6,
"end": 275.6,
"text": " one. Here are the ingredients. Red hot pepper. Here for right now that's just one"
},
{
"start": 275.6,
"end": 280.08000000000004,
"text": " ingredient. Then there are directions. So what do you need to do? Slice the red"
},
{
"start": 280.08000000000004,
"end": 285.32000000000005,
"text": " hot pepper. Fry the red hot pepper and prepare the meal. Those are"
},
{
"start": 285.32,
"end": 291.2,
"text": " the directions of the recipe. You also have this inventory command which"
},
{
"start": 291.2,
"end": 298.4,
"text": " tells you which you're carrying. Next difficulty. The inventory is finite. So"
},
{
"start": 298.4,
"end": 302.68,
"text": " you can't carry everything. At some points you have to drop things that are"
},
{
"start": 302.68,
"end": 308.6,
"text": " unnecessary. You can't just take everything. Here you see the command take"
},
{
"start": 308.6,
"end": 313.08,
"text": " red hot pepper. That only works if there's a red hot pepper in the room. And"
},
{
"start": 313.08,
"end": 318.12,
"text": " here says you take the red hot pepper from the counter. Your score has just gone"
},
{
"start": 318.12,
"end": 322.44,
"text": " up by one point. And then if you type inventory it says you're carrying a"
},
{
"start": 322.44,
"end": 330.08,
"text": " sliced fried red hot pepper. Again here it says the state of the ingredient."
},
{
"start": 330.08,
"end": 336.44,
"text": " So the ingredient is the red hot pepper and the state is sliced and fried. And"
},
{
"start": 336.44,
"end": 340.32,
"text": " then you can prepare meal and then you can eat meal and then it says your"
},
{
"start": 340.32,
"end": 345.92,
"text": " score has just gone up by one point. And these are the scores you collect. So"
},
{
"start": 345.92,
"end": 349.52,
"text": " there are a lot of difficulties that are actually not shown in this example. For"
},
{
"start": 349.52,
"end": 354.2,
"text": " example there are different rooms. You may have noticed here you're in the"
},
{
"start": 354.2,
"end": 359.48,
"text": " kitchen. But there could be other rooms and you start in a random room. You also"
},
{
"start": 359.48,
"end": 364.32,
"text": " need to navigate through the rooms. Close the doors to the rooms could be"
},
{
"start": 364.32,
"end": 373.08,
"text": " closed and then you need to open them and so on. You can only for example if"
},
{
"start": 373.08,
"end": 382.68,
"text": " this pepper here weren't already sliced and fried you need to find..."
},
{
"start": 382.68,
"end": 389.2,
"text": " You can only slice it if there is a knife in the room. You can only fry"
},
{
"start": 389.2,
"end": 395.12,
"text": " it if there is a frying pan or an oven or a stove in the room."
},
{
"start": 395.12,
"end": 402.59999999999997,
"text": " So and then you'd have to notice that there is a knife. If there is no knife"
},
{
"start": 402.59999999999997,
"end": 407.24,
"text": " you need to take the red hot pepper bring it to a new room with a knife and"
},
{
"start": 407.24,
"end": 415.2,
"text": " then slice it. So this is vastly difficult game. The last difficulty is"
},
{
"start": 415.2,
"end": 422.47999999999996,
"text": " actually that in the test set there will be ingredients that you haven't seen"
},
{
"start": 422.47999999999996,
"end": 428.92,
"text": " during training. So also that there. Your agent needs to generalize. That's why it"
},
{
"start": 428.92,
"end": 432.88,
"text": " says a family of text-based games. Because the objective always the same to"
},
{
"start": 432.88,
"end": 436.36,
"text": " kind of cook the recipe. But the things you have to do and the things that"
},
{
"start": 436.36,
"end": 443.36,
"text": " appear and so on those are those change basically from episode to episode. And"
},
{
"start": 443.36,
"end": 448.88,
"text": " the test set will be different than the training set or kind of there will be"
},
{
"start": 448.88,
"end": 454.84000000000003,
"text": " unseen data. Alright so how does this paper go about solving this problem?"
},
{
"start": 454.84000000000003,
"end": 465.2,
"text": " This paper basically does the following and we are going here from high level to"
},
{
"start": 465.2,
"end": 471.84000000000003,
"text": " low level. On the highest level it's a reinforcement learning agent and that is"
},
{
"start": 471.84,
"end": 481.64,
"text": " sort of how you would imagine an RL agent to work. So here at the end you have"
},
{
"start": 481.64,
"end": 487.71999999999997,
"text": " a policy and the policy predicts an action. If you don't know what a kind of"
},
{
"start": 487.71999999999997,
"end": 492.32,
"text": " a policy and an action things are in RL these are basic RL concept and we'll"
},
{
"start": 492.32,
"end": 498.2,
"text": " kind of skip them here and I'll assume everyone knows what they are. But"
},
{
"start": 498.2,
"end": 503.64,
"text": " essentially a policy specifies which action you take next given the current"
},
{
"start": 503.64,
"end": 511.64,
"text": " game state. So the policy is made up, scores different actions. So at each step"
},
{
"start": 511.64,
"end": 519.2,
"text": " there are k actions available. And these k actions I foresaid there are almost"
},
{
"start": 519.2,
"end": 524.84,
"text": " infinitely many actions that you could take. The first difficulty and that's the"
},
{
"start": 524.84,
"end": 534.2800000000001,
"text": " thing that actually comes in here is to reduce all of the possible actions that"
},
{
"start": 534.2800000000001,
"end": 541.48,
"text": " you can't even list to just k commands. So we'll go into that later how this is"
},
{
"start": 541.48,
"end": 547.52,
"text": " done. But basically one of the main contributions of this paper is how do"
},
{
"start": 547.52,
"end": 553.9200000000001,
"text": " you even specify what is reasonable, what would be reasonable to do in the current"
},
{
"start": 553.92,
"end": 559.92,
"text": " situation. And then the policy over here only has to decide among those reasonable"
},
{
"start": 559.92,
"end": 566.16,
"text": " actions, not among all actions. But given that you have k reasonable commands"
},
{
"start": 566.16,
"end": 572.64,
"text": " you see here command one command, these are embedded and then fed into GRUs which are"
},
{
"start": 572.64,
"end": 578.4399999999999,
"text": " recurrent neural networks. So for each of these commands you'll get a 32"
},
{
"start": 578.44,
"end": 588.48,
"text": " dimensional vector. This 32 dimensional vector is here C1 through Ck. Each are"
},
{
"start": 588.48,
"end": 596.36,
"text": " combined with an encoding of the current state. So these 32 dimensional"
},
{
"start": 596.36,
"end": 601.6800000000001,
"text": " vector are combined with encoding of the current state which is 256 dimensional"
},
{
"start": 601.6800000000001,
"end": 607.9200000000001,
"text": " and then fed into a neural network that will output a probability distribution"
},
{
"start": 607.92,
"end": 613.4,
"text": " over these actions. This is pretty classic in deep reinforcement learning."
},
{
"start": 613.4,
"end": 619.4399999999999,
"text": " So you have action encoding and the state encoding and the policy decides on that."
},
{
"start": 619.4399999999999,
"end": 623.52,
"text": " The state encoding you'll see here it's the same everywhere of course because"
},
{
"start": 623.52,
"end": 628.7199999999999,
"text": " the current game state is the current game state. This comes from this model up"
},
{
"start": 628.7199999999999,
"end": 636.3199999999999,
"text": " here. What this does is over here you have the what you would call the state"
},
{
"start": 636.32,
"end": 643.9200000000001,
"text": " the current observation. The current observation is composed of many"
},
{
"start": 643.9200000000001,
"end": 649.08,
"text": " things. Specifically the following eight things. The first one is actually"
},
{
"start": 649.08,
"end": 655.2800000000001,
"text": " called observation which is I would call all of this the current observation"
},
{
"start": 655.2800000000001,
"end": 661.08,
"text": " from an RL perspective. But the first is actually observation. It's whatever you"
},
{
"start": 661.08,
"end": 665.12,
"text": " saw the big text you saw before. Like you were in the kitchen it looks like this"
},
{
"start": 665.12,
"end": 669.16,
"text": " it smells like this you turn around and so on. This would be the observation."
},
{
"start": 669.16,
"end": 673.28,
"text": " It's what the game engine says at the current time step. This is just a piece of"
},
{
"start": 673.28,
"end": 683.64,
"text": " text. Second missing items. Third unnecessary items. Now these things you"
},
{
"start": 683.64,
"end": 688.28,
"text": " might wonder okay how do I know what what items are missing and unnecessary."
},
{
"start": 688.28,
"end": 695.3199999999999,
"text": " These things come from another model that this paper trains and we'll get"
},
{
"start": 695.3199999999999,
"end": 700.0799999999999,
"text": " into that later. But basically they have a method of specifying which items are"
},
{
"start": 700.0799999999999,
"end": 708.3199999999999,
"text": " still missing which are unnecessary and they list those here. Then description"
},
{
"start": 708.3199999999999,
"end": 713.36,
"text": " which is the output of the last look command. So in each room you can look you"
},
{
"start": 713.36,
"end": 717.4399999999999,
"text": " can type look and then it'll give you a description of the room and what's in"
},
{
"start": 717.44,
"end": 725.9200000000001,
"text": " there. The previous commands this is often used in RL either explicitly or"
},
{
"start": 725.9200000000001,
"end": 732.84,
"text": " implicitly through a recurrent network in order to give the agent an idea what"
},
{
"start": 732.84,
"end": 737.8800000000001,
"text": " what happened in the in the previous steps or what it did so that it doesn't"
},
{
"start": 737.8800000000001,
"end": 743.9200000000001,
"text": " repeat actions unnecessarily or so it learns to not repeat actions"
},
{
"start": 743.92,
"end": 750.8,
"text": " unnecessarily. Required utilities. Again this is a model that's kind of trained"
},
{
"start": 750.8,
"end": 757.8399999999999,
"text": " to predict what utilities are required to perform some actions. So as I said"
},
{
"start": 757.8399999999999,
"end": 762.52,
"text": " before if you want to slice the red hot pepper you need a knife. If you want to"
},
{
"start": 762.52,
"end": 770,
"text": " fry it you need a stove. Discovered locations. As I said there are different"
},
{
"start": 770,
"end": 776,
"text": " rooms you actually don't know what rooms there are before you actually go in in"
},
{
"start": 776,
"end": 782.08,
"text": " there. So before you go through a door you reach another room. So the list of"
},
{
"start": 782.08,
"end": 787.76,
"text": " previously discovered and visited locations is there and then the name of"
},
{
"start": 787.76,
"end": 795.04,
"text": " the current location it is also there. So these are eight things that make up the"
},
{
"start": 795.04,
"end": 801.24,
"text": " current observation. These eight things are just strings of text and these eight"
},
{
"start": 801.24,
"end": 807,
"text": " things are each one as you can see here these are that the eight things from"
},
{
"start": 807,
"end": 813.3199999999999,
"text": " observation to location each one are embedded and fed also into an RNN. So for"
},
{
"start": 813.3199999999999,
"end": 818.52,
"text": " each of these eight things you'll obtain a 32 dimensional vector and these are"
},
{
"start": 818.52,
"end": 824.88,
"text": " all concatenated to make up one big 256 dimensional vector. So this 256"
},
{
"start": 824.88,
"end": 830.4,
"text": " dimensional vector will contain all the necessary information about the current"
},
{
"start": 830.4,
"end": 835.52,
"text": " room what's in there what what items are you still missing what items do you have"
},
{
"start": 835.52,
"end": 839.96,
"text": " in your inventory which ones are unnecessary and so on. So if you train"
},
{
"start": 839.96,
"end": 846.4,
"text": " this correctly this 256 dimensional vector will describe the current game"
},
{
"start": 846.4,
"end": 851.76,
"text": " state as it is relevant to your agent like everything about it every"
},
{
"start": 851.76,
"end": 857.24,
"text": " relevant information that's in here will be encoded in this vector. Now this"
},
{
"start": 857.24,
"end": 863.4399999999999,
"text": " vector isn't the final state encoding yet what you'll have is you feed this into"
},
{
"start": 863.4399999999999,
"end": 869.92,
"text": " an RNN that takes as input the last time steps you have to imagine the last time"
},
{
"start": 869.92,
"end": 876.4,
"text": " step already there was observation blah blah blah this entire thing was I'm just"
},
{
"start": 876.4,
"end": 883.52,
"text": " copying I'm just copying this box over here so this entire thing was already"
},
{
"start": 883.52,
"end": 890.28,
"text": " done last step and already fed into an RNN so this this is an RNN that actually"
},
{
"start": 890.28,
"end": 896.8,
"text": " goes over time and the last whatever the output here is it will be fed to the"
},
{
"start": 896.8,
"end": 902.1999999999999,
"text": " next step and this is a trick often done in reinforcement learning as well that"
},
{
"start": 902.2,
"end": 908.44,
"text": " you actually have a recurrent neural network over the time steps so each"
},
{
"start": 908.44,
"end": 912.8000000000001,
"text": " time step you have a certain observation you encode it and so on you get a"
},
{
"start": 912.8000000000001,
"end": 917.88,
"text": " description of that and then you feed this into an RNN what the RNN can learn"
},
{
"start": 917.88,
"end": 925.84,
"text": " to do is it can learn to react to different not only to the current"
},
{
"start": 925.84,
"end": 929.96,
"text": " observation but to the current observation conditioned on the history"
},
{
"start": 929.96,
"end": 935.88,
"text": " of previous observations so it can learn before I was in this room now I'm in this"
},
{
"start": 935.88,
"end": 942.2800000000001,
"text": " new room so I actually haven't you know taken all the items from this room yet"
},
{
"start": 942.2800000000001,
"end": 949,
"text": " because I just came into this room and so on so the the kind of component where"
},
{
"start": 949,
"end": 954.52,
"text": " you are able to look at the past and what happened in the past is in captured"
},
{
"start": 954.52,
"end": 965.24,
"text": " by this RNN here so it's fairly complicated architecture but this here"
},
{
"start": 965.24,
"end": 973,
"text": " this state encoding that is conditioned on the also on the history then goes into"
},
{
"start": 973,
"end": 980.72,
"text": " this into here that's it that's the vector that goes in here is combined"
},
{
"start": 980.72,
"end": 988.1600000000001,
"text": " with each action so all of these actions here these K actions and this is all fed"
},
{
"start": 988.1600000000001,
"end": 994.64,
"text": " through a neural network and that will give you the policy this is a fairly"
},
{
"start": 994.64,
"end": 1000.48,
"text": " complicated thing but if you look at it it's not it's not too it's not too"
},
{
"start": 1000.48,
"end": 1010.48,
"text": " difficult actually so what you'll do is you will take your observations here this"
},
{
"start": 1010.48,
"end": 1016.28,
"text": " is all observation it will be encoded and combined with the history in order"
},
{
"start": 1016.28,
"end": 1022.6,
"text": " to give you this in order to give you an encoding of the current state on the"
},
{
"start": 1022.6,
"end": 1027.2,
"text": " other hand you'll take all of the possible commands that you could"
},
{
"start": 1027.2,
"end": 1033.1200000000001,
"text": " perform right now encode each one separately right into an embedding and"
},
{
"start": 1033.1200000000001,
"end": 1039.6,
"text": " then you combine each one of those with this encoding you specified previously"
},
{
"start": 1039.6,
"end": 1046.9199999999998,
"text": " that you and and from that you make your decision which action to take next and"
},
{
"start": 1046.9199999999998,
"end": 1052.76,
"text": " the action here is the one that's output is the action you take next sampled from"
},
{
"start": 1052.76,
"end": 1060.6799999999998,
"text": " this policy the last thing you need is a value network and this is just important"
},
{
"start": 1060.6799999999998,
"end": 1068.3999999999999,
"text": " for reinforcement learning which tells you from this state here so I'm getting"
},
{
"start": 1068.4,
"end": 1075.3600000000001,
"text": " weird with colors here from this state here which is the same as this one so"
},
{
"start": 1075.3600000000001,
"end": 1079.8000000000002,
"text": " you'd simply transfer this over from this state how valuable is that what's"
},
{
"start": 1079.8000000000002,
"end": 1085.3600000000001,
"text": " my value of the state and the value is if I'm in this state and I act as I"
},
{
"start": 1085.3600000000001,
"end": 1091.3200000000002,
"text": " normally act what are all my future rewards going to be combined so it"
},
{
"start": 1091.3200000000002,
"end": 1096.2800000000002,
"text": " basically gives you a value of this state you can think of this in for"
},
{
"start": 1096.28,
"end": 1102.48,
"text": " example terms of chess if you had this in chess and then this here is it would"
},
{
"start": 1102.48,
"end": 1108,
"text": " be a description of the chessboard this HT and the value would be how valuable"
},
{
"start": 1108,
"end": 1111.92,
"text": " is this position for you so if you're very much ahead and material and"
},
{
"start": 1111.92,
"end": 1116.92,
"text": " position and so on this value would be very high if you're behind this value"
},
{
"start": 1116.92,
"end": 1121.16,
"text": " would be very low and this is in a real network simply trying to predict that"
},
{
"start": 1121.16,
"end": 1130.3600000000001,
"text": " value so with all of this you now have a never good basis to do reinforcement"
},
{
"start": 1130.3600000000001,
"end": 1137.0400000000002,
"text": " learning you have a policy you have a value network and from that you can"
},
{
"start": 1137.0400000000002,
"end": 1142.52,
"text": " train an RL agent and this is done classically in an actor critic way where"
},
{
"start": 1142.52,
"end": 1149.8400000000001,
"text": " you do advantage learning here the advantage and the policy you train"
},
{
"start": 1149.84,
"end": 1155,
"text": " weighted by the advantage then the value network you train to be close to their"
},
{
"start": 1155,
"end": 1159.56,
"text": " reward and then you have an entropy penalty if you don't know what these"
},
{
"start": 1159.56,
"end": 1164.12,
"text": " things are the video will get bit too long if I were to go over these"
},
{
"start": 1164.12,
"end": 1169.04,
"text": " reinforcement learning concepts but these are very standard in reinforcement"
},
{
"start": 1169.04,
"end": 1175.6799999999998,
"text": " learning so you can train these you can basically train what it does is you can"
},
{
"start": 1175.68,
"end": 1181.1200000000001,
"text": " train these neural networks in absence of label training data because you don't"
},
{
"start": 1181.1200000000001,
"end": 1185.44,
"text": " know what the best action is in each step right there's no one telling you"
},
{
"start": 1185.44,
"end": 1189.64,
"text": " you just have a reward you just sometimes you get a point and you don't"
},
{
"start": 1189.64,
"end": 1195.64,
"text": " know which actions led to that so these things will actually allow you to train"
},
{
"start": 1195.64,
"end": 1201.52,
"text": " these neural networks by using just the reward without knowing which exact"
},
{
"start": 1201.52,
"end": 1206.52,
"text": " actions were right and wrong and that's the core of reinforcement learning"
},
{
"start": 1206.52,
"end": 1216,
"text": " obviously alright so the the core one of the core ingredients actually is this"
},
{
"start": 1216,
"end": 1225.48,
"text": " recipe manager and the recipe manager is a sub model that does the following so"
},
{
"start": 1225.48,
"end": 1234.64,
"text": " here it takes as an input the cookbook here and it also takes as an input the"
},
{
"start": 1234.64,
"end": 1241.72,
"text": " inventory and it outputs something like this and this this is a this is a table"
},
{
"start": 1241.72,
"end": 1248.32,
"text": " representation of what it outputs it will output all the ingredients that you"
},
{
"start": 1248.32,
"end": 1256.12,
"text": " need for the recipe whether or not this input that this ingredient is currently"
},
{
"start": 1256.12,
"end": 1265.72,
"text": " missing from your inventory and action to perform so which actions still need"
},
{
"start": 1265.72,
"end": 1272.56,
"text": " to be performed so let's look at the following let's look at this example the"
},
{
"start": 1272.56,
"end": 1276.56,
"text": " recipe tells you you need the ingredients are a carrot a red hot pepper"
},
{
"start": 1276.56,
"end": 1283.9199999999998,
"text": " and a white onion and the inventory says you care you're carrying a white onion"
},
{
"start": 1283.9199999999998,
"end": 1295.44,
"text": " and a carrot right so down here you see aha we we do actually have we do"
},
{
"start": 1295.44,
"end": 1301.6,
"text": " actually have a carrot so it's not missing the carrot isn't missing you"
},
{
"start": 1301.6,
"end": 1305.48,
"text": " have it in your inventory the red hot pepper is missing we don't have it in"
},
{
"start": 1305.48,
"end": 1309.56,
"text": " the inventory but we need it for the recipe the white onion we need for the"
},
{
"start": 1309.56,
"end": 1317.52,
"text": " recipe but it's not missing then it also is for each of the ingredients is"
},
{
"start": 1317.52,
"end": 1322.58,
"text": " supposed to tell you this recipe model which of the what you still need to"
},
{
"start": 1322.58,
"end": 1327.52,
"text": " perform on it so here it says slice the carrot roast the carrot and you simply"
},
{
"start": 1327.52,
"end": 1331.48,
"text": " have a carrot it doesn't say slice the roast that means it's not sliced and"
},
{
"start": 1331.48,
"end": 1336.16,
"text": " roasted so the recipe is supposed to output you still need to slice and roast"
},
{
"start": 1336.16,
"end": 1342.64,
"text": " the carrot here for example for the white onion says fry the white onion and"
},
{
"start": 1342.64,
"end": 1352.8,
"text": " as you can see in the inventory it says you're carrying a fried white onion so"
},
{
"start": 1352.8,
"end": 1358.6,
"text": " for the white onion you see we don't need to do anything anymore so that the"
},
{
"start": 1358.6,
"end": 1366.9599999999998,
"text": " recipe model is basically trying to to make this table here and this table you"
},
{
"start": 1366.9599999999998,
"end": 1372.9599999999998,
"text": " can see as an intermediary step in order to do all the other things and the"
},
{
"start": 1372.9599999999998,
"end": 1378.6,
"text": " difference here to a pure RL method and this is important the difference is that"
},
{
"start": 1378.6,
"end": 1384.4399999999998,
"text": " this representation this intermediate table representation is done explicitly"
},
{
"start": 1384.44,
"end": 1391.3600000000001,
"text": " so the recipe model really produces a table like this and not just in other RL"
},
{
"start": 1391.3600000000001,
"end": 1397.4,
"text": " methods people go about and make this recipe model output some sort of you"
},
{
"start": 1397.4,
"end": 1402.76,
"text": " know let's say a 200 dimensional vector that's supposed to encompass all of this"
},
{
"start": 1402.76,
"end": 1410.16,
"text": " information and that doesn't appear to work as well like often that if you"
},
{
"start": 1410.16,
"end": 1415.28,
"text": " simply train this end-to-end that will not pick up on the important information"
},
{
"start": 1415.28,
"end": 1420.2,
"text": " because the training signal tends to be way too weak you have to imagine you"
},
{
"start": 1420.2,
"end": 1426.0800000000002,
"text": " already have this really really big model construction here and you're"
},
{
"start": 1426.0800000000002,
"end": 1431.4,
"text": " trying to learn it you're trying to learn it from a tiny reward signal that"
},
{
"start": 1431.4,
"end": 1437.28,
"text": " you get at the end right this is very noisy signal now if if you're now trying"
},
{
"start": 1437.28,
"end": 1443.36,
"text": " to say well the inputs to these things right this command here and we also saw"
},
{
"start": 1443.36,
"end": 1448.36,
"text": " the inputs to these these depend on this recipe model also now are whatever"
},
{
"start": 1448.36,
"end": 1454.16,
"text": " giant neural network construction here and we'll all train this end-to-end and"
},
{
"start": 1454.16,
"end": 1458.48,
"text": " these will actually not be text these will actually be some sort of latent"
},
{
"start": 1458.48,
"end": 1464.32,
"text": " vectors that will often fail because you're now just trying to extract"
},
{
"start": 1464.32,
"end": 1469.36,
"text": " information from too noisy of a reward signal so the authors here do actually"
},
{
"start": 1469.36,
"end": 1477.52,
"text": " pretty neat separation of that and they train this recipe model with actually an"
},
{
"start": 1477.52,
"end": 1482.24,
"text": " augmented data set so they go to freebase and get more food items and"
},
{
"start": 1482.24,
"end": 1488.76,
"text": " then they construct a data set that resembles this and train it in a"
},
{
"start": 1488.76,
"end": 1496.56,
"text": " supervised way to output tables tables like this so this is is pretty smart and"
},
{
"start": 1496.56,
"end": 1503.28,
"text": " I think it's a good lesson if you ever attempt something like this that really"
},
{
"start": 1503.28,
"end": 1507.42,
"text": " really important information such as this one if you can train it in a"
},
{
"start": 1507.42,
"end": 1512.32,
"text": " supervised way as a kind of a pre-processing step to your RL"
},
{
"start": 1512.32,
"end": 1520.56,
"text": " procedure that's extremely helpful here you can you can see how this is then"
},
{
"start": 1520.56,
"end": 1528.28,
"text": " used so by combining this table that was output from the recipe model and your"
},
{
"start": 1528.28,
"end": 1537.4399999999998,
"text": " inventory and the output of this look command you can then generate these"
},
{
"start": 1537.44,
"end": 1542.56,
"text": " commands so before we said it's important to reduce the everything you could do"
},
{
"start": 1542.56,
"end": 1548.6000000000001,
"text": " which is infinite things to everything that is reasonable to do currently and"
},
{
"start": 1548.6000000000001,
"end": 1556.2,
"text": " this model here does that so given this given that and given the description of"
},
{
"start": 1556.2,
"end": 1563.04,
"text": " what's currently in the room you can now generate these commands and for example"
},
{
"start": 1563.04,
"end": 1567.44,
"text": " take knife if you have to slice something because you see a knife is in"
},
{
"start": 1567.44,
"end": 1573.8,
"text": " the room and you could conceivably take the knife right you can construct these"
},
{
"start": 1573.8,
"end": 1580.12,
"text": " commands but also since you know right since you know what's since you know"
},
{
"start": 1580.12,
"end": 1585.8,
"text": " what's in your inventory and since you know which things are still missing you"
},
{
"start": 1585.8,
"end": 1591.56,
"text": " can generate commands like take the white onion or drop the water because"
},
{
"start": 1591.56,
"end": 1597.9199999999998,
"text": " you don't need the water right so um the the offers also group these things here"
},
{
"start": 1597.9199999999998,
"end": 1602.04,
"text": " in this what they call high-level commands which take all required items"
},
{
"start": 1602.04,
"end": 1607.6399999999999,
"text": " from here simply means take everything that's in the room that is not in the"
},
{
"start": 1607.6399999999999,
"end": 1612.76,
"text": " inventory but you need it so these things which for an RL agent it makes"
},
{
"start": 1612.76,
"end": 1618.44,
"text": " sense to group these things together because it doesn't make sense to have"
},
{
"start": 1618.44,
"end": 1623.56,
"text": " them as two separate things if you need both take both if you don't need any"
},
{
"start": 1623.56,
"end": 1628.88,
"text": " what if you have a new entry drop all of these things so that makes sense that's"
},
{
"start": 1628.88,
"end": 1636.04,
"text": " a small optimization that apparently brought some gains but the kind of the"
},
{
"start": 1636.04,
"end": 1641.72,
"text": " the overarching message here is that once you have a once you have this"
},
{
"start": 1641.72,
"end": 1647.52,
"text": " information from the recipe model you can then use it in many useful ways in"
},
{
"start": 1647.52,
"end": 1656.44,
"text": " order to make life for your RL agent easier alright so that kind of is the"
},
{
"start": 1656.44,
"end": 1661.74,
"text": " entire model that's very it's quite convoluted but basically you start with"
},
{
"start": 1661.74,
"end": 1666.84,
"text": " this here this recipe manager you decide you output this table down here which"
},
{
"start": 1666.84,
"end": 1674.48,
"text": " ingredients are in the recipe are they still missing and which actions we need"
},
{
"start": 1674.48,
"end": 1679.64,
"text": " to perform you then combine it with this information here the information about"
},
{
"start": 1679.64,
"end": 1684.64,
"text": " the current room and your inventory in order to come up with a set of commands"
},
{
"start": 1684.64,
"end": 1690.32,
"text": " that are conceivable to do here you combine these commands with some"
},
{
"start": 1690.32,
"end": 1697.72,
"text": " commands that are always available so commands that are always available are"
},
{
"start": 1697.72,
"end": 1705.84,
"text": " things like look inventory prepare meal you have that right you add that if the"
},
{
"start": 1705.84,
"end": 1711.68,
"text": " recipe manager does not output any missing and the agents location is the"
},
{
"start": 1711.68,
"end": 1718.56,
"text": " kitchen so you can add these other items and also we're not even gonna get into"
},
{
"start": 1718.56,
"end": 1722.76,
"text": " that you add navigational items because there are doors in these rooms and you"
},
{
"start": 1722.76,
"end": 1728.8,
"text": " need to navigate around so they actually train another model to here you see to"
},
{
"start": 1728.8,
"end": 1738.32,
"text": " detect to detect directions that you could move into and open doors for every"
},
{
"start": 1738.32,
"end": 1742.18,
"text": " closed door in the room so that's another challenge that the agent needs to"
},
{
"start": 1742.18,
"end": 1746.84,
"text": " overcome they have to build an entire model to predict which doors are there"
},
{
"start": 1746.84,
"end": 1752.04,
"text": " and are they closed do you need to open them so these commands if there are"
},
{
"start": 1752.04,
"end": 1757.08,
"text": " doors and if you can move through them these commands are also added to this"
},
{
"start": 1757.08,
"end": 1761.24,
"text": " set of commands that are reasonable so now we have a set of commands that are"
},
{
"start": 1761.24,
"end": 1768.8799999999999,
"text": " reasonable over here then you describe the room here you put both into this"
},
{
"start": 1768.8799999999999,
"end": 1775.44,
"text": " embedding and then finally your policy outputs an action that's that that's the"
},
{
"start": 1775.44,
"end": 1781.72,
"text": " entire process very convoluted very big very astonishing that this works with our"
},
{
"start": 1781.72,
"end": 1788.2,
"text": " L but in order to need to get it to work you actually need to do this supervised"
},
{
"start": 1788.2,
"end": 1794.3600000000001,
"text": " training and the experimental evidence here is quite solid in that they compare"
},
{
"start": 1794.3600000000001,
"end": 1803.48,
"text": " to baseline systems that that use classic techniques and they do some"
},
{
"start": 1803.48,
"end": 1811.2,
"text": " ablation over over their individual parts and they get second place I think"
},
{
"start": 1811.2,
"end": 1817.56,
"text": " in a competition about these text-based games so that's pretty good and that was"
},
{
"start": 1817.56,
"end": 1847.1599999999999,
"text": " it for me and check it out and bye bye"
}
] |
BK3rv0MQMwY | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | [News] The Siraj Raval Controversy | [
"Science & Technology"
] | [
"machine learning",
"siraj",
"controversy",
"scam",
"scammer",
"fraud",
"plagiarism",
"plagiarized",
"course",
"refund",
"policy",
"ai",
"online",
"hype",
"credit",
"attribution",
"paper",
"scandal",
"news",
"twitter",
"neural qubit",
"intellectual property"
] | Popular ML YouTuber Siraj Raval is in the middle of not just one, but two controversies: First, a lot of students of his 200$ online-course have accused him of breaking major promises he made when advertising the course and denying them refunds. Second, his paper on "The Neural Qubit" appears to be plagiarized almost verbatim.
https://www.reddit.com/r/MachineLearning/comments/d7ad2y/d_siraj_raval_potentially_exploiting_students/
https://www.reddit.com/r/MachineLearning/comments/dh2xfs/d_siraj_has_a_new_paper_the_neural_qubit_its/ | There is a massive controversy going on right now and in the middle is Siraj Raval, a prominent YouTuber. So today I'll just be actually shortly reporting on this, not giving too much opinion, just kind of stating what's up in a very high level overview. Because if you haven't heard of this, I think it's important that you do. And this is both sad and funny to a degree, more sad actually, but you know, make your own opinions. So Siraj is this very prominent YouTuber that makes videos mostly, let's say coding tutorials or explaining short concepts in the field of machine learning. And recently also branched out into other fields like here Watch Me Build a Marketing Startup and so on. So what happened, it was two recent developments. First of all, he offered a course and the course was $200. And this is one of his students on Twitter and many more have come out. And he offered this course for $200 and basically said, make money with machine learning. That was the course. And he said he was going to take 500 students in this course and it would be personal and it would be a very, very high level. He said he was going to take 500 students in this course and it would be personalized learning, personalized support from basically from him or he said he is all in into this course. Then the students discovered that there were actually over a thousand people in the course and there was almost no personalized support. So there's only, he was giving 50 minutes of his weekly time to do Q&A, 30 minutes of video content and apparently he also replied to all the code submissions with the exact same email. So things like this. He actually split up the students into two different Slack groups so they wouldn't notice that there are over a thousand people. So about two, 500 people groups. Then people wanted a refund and then apparently when he hit the Slack limit, he transferred them to Discord and he added everyone that wanted a refund to Discord channel and then simply banned them. I mean, yeah. There are many more stories of students about this course apparently. This was kind of really a bit of a scam, this course, without especially the refunds. There was no refund policy and then he sent the students to Discord and they were like, oh, I want to see. Then about two weeks, I think, into the course there was a refund policy. After two weeks after the course started and the refund policy said you can get a refund within two weeks of the course starting. So this, I mean, this is all just kind of really, really weird. I encourage you to read up more on this because there are many more stories about this course. So he apologized publicly and said he shouldn't have done that, he should have hired TAs and so on. He apologized for it and that seemed to be kind of the end of that. I don't exactly know what happened to the students. Some claimed they never got a refund and so on. But then it went on and it went on badly for Siraj, if I may say, because he published a paper called The Neural Cubit and people have gone and it turns out that it is almost all plagiarized from one or two other papers. Actually, yeah, it turns out it's, I think it's two papers and it's almost all plagiarized from there. You can see on the left the green sections and on the right the red sections are exactly identical. 
For example, this table up here, I think it's on the next page of the other paper, is exactly this from the other paper. If you look at whatever these equations, they're all the same. The sentences are exactly the same and so on. He only changed, also the diagrams, you see here on the upper left, exactly taken from this other paper. I think he mentions this other paper, he cites it once and he says his work is kind of a derivative of that or leaned on that and so on. But these aren't explicitly quotes here. The only changes he made are changes like, so whenever the other paper says we can write the combined transformation, here you can see he says I can write. Thanks to the CV encoding, I get a nonlinear functional. There's a rule in computer science. The only person who's allowed to do this is Don Knuth. No one else. That's wholly rule broken. So more seriously, he changed that and then he also kind of used a couple of synonyms which make no sense. So, for example, he replaced the word gate by the word door and of course a logic gate then becomes a logic door. So here it's a non-Gaussian gate, phi. I don't know if in this instance, but in this instance he replaced it. Here it actually says gate, but sometimes it's replaced by door and also he replaced the word complex Hilbert space to complicated Hilbert space which makes no sense at all. So this, yeah, it's funny and sad at the same time. So this happened and again he's apologizing. He says I've seen claims that my neural qubit was partly plagiarized. This is true. And he basically claims it. He sort of blames it. He says he's doing too many videos a week which I agree. I mean, I can tell you that making videos is hard, even crappy videos like mine. And his are actually edited and so on. But the problem is many people more came out and said that he did the same thing to their project. Here you see someone. He did the exact same thing to our project. It took four people a couple of months to do. He acted like it was his own. And many more came out and said he plagiarized other things as well where he basically just takes code and gives minimal or no attribution to the original authors and then passed it off as his own. This after this course, yeah, everyone, this could not get any worse. Hold my gas in quantum doors. Yeah. So this all happened. I mean, I encourage you to go read up on it to make up your own mind. I just want to point out quickly the end. And I won't actually show the identity of the person. I'm posting this if you really want to find out. But it's not about that person. It's about the kind of sentiment. So there is a sentiment around that you should kind of unfollow him. And because that lends credibility to him. And there is a point to be made of that kind of if the kind of prominent researchers refer to him and so on that gives him some credibility. But I'm also very much against sort of cancel culture. It is also the case that he, like no matter how much he's plagiarized, has popularized the field more than anyone else. And maybe, you know, there is a conversation to be had and a lesson to be learned without immediately canceling someone. That's just so that I mean, there's, it's a it's a complicated issue, but just kind of want to get this out there. So go read up on this is all it's it's yeah, it's a wild world. So that being said, bye bye. Have fun. | [
{
"start": 0,
"end": 7,
"text": " There is a massive controversy going on right now and in the middle is Siraj Raval, a prominent"
},
{
"start": 7.6000000000000005,
"end": 14.6,
"text": " YouTuber. So today I'll just be actually shortly reporting on this, not giving too much opinion,"
},
{
"start": 15.08,
"end": 21.6,
"text": " just kind of stating what's up in a very high level overview. Because if you haven't heard"
},
{
"start": 21.6,
"end": 31.560000000000002,
"text": " of this, I think it's important that you do. And this is both sad and funny to a degree,"
},
{
"start": 31.560000000000002,
"end": 37.24,
"text": " more sad actually, but you know, make your own opinions. So Siraj is this very prominent"
},
{
"start": 37.24,
"end": 44.24,
"text": " YouTuber that makes videos mostly, let's say coding tutorials or explaining short concepts"
},
{
"start": 44.24,
"end": 51.24,
"text": " in the field of machine learning. And recently also branched out into other fields like here"
},
{
"start": 51.24,
"end": 58.24,
"text": " Watch Me Build a Marketing Startup and so on. So what happened, it was two recent developments."
},
{
"start": 58.28,
"end": 65.28,
"text": " First of all, he offered a course and the course was $200. And this is one of his students"
},
{
"start": 65.28,
"end": 72.28,
"text": " on Twitter and many more have come out. And he offered this course for $200 and basically"
},
{
"start": 73.64,
"end": 80.64,
"text": " said, make money with machine learning. That was the course. And he said he was going to"
},
{
"start": 81.64,
"end": 88.04,
"text": " take 500 students in this course and it would be personal and it would be a very, very"
},
{
"start": 88.04,
"end": 95.04,
"text": " high level. He said he was going to take 500 students in this course and it would be personalized"
},
{
"start": 95.80000000000001,
"end": 102.80000000000001,
"text": " learning, personalized support from basically from him or he said he is all in into this"
},
{
"start": 104.36000000000001,
"end": 111.36000000000001,
"text": " course. Then the students discovered that there were actually over a thousand people"
},
{
"start": 111.36,
"end": 118.36,
"text": " in the course and there was almost no personalized support. So there's only, he was giving 50"
},
{
"start": 119.72,
"end": 126.72,
"text": " minutes of his weekly time to do Q&A, 30 minutes of video content and apparently he also replied"
},
{
"start": 129.92,
"end": 136.92000000000002,
"text": " to all the code submissions with the exact same email. So things like this. He actually"
},
{
"start": 136.92,
"end": 142.92,
"text": " split up the students into two different Slack groups so they wouldn't notice that there"
},
{
"start": 142.92,
"end": 149.92,
"text": " are over a thousand people. So about two, 500 people groups. Then people wanted a refund"
},
{
"start": 153.92,
"end": 160.92,
"text": " and then apparently when he hit the Slack limit, he transferred them to Discord and"
},
{
"start": 160.92,
"end": 167.92,
"text": " he added everyone that wanted a refund to Discord channel and then simply banned them."
},
{
"start": 168.92,
"end": 175.92,
"text": " I mean, yeah. There are many more stories of students about this course apparently."
},
{
"start": 176.92,
"end": 183.92,
"text": " This was kind of really a bit of a scam, this course, without especially the refunds. There"
},
{
"start": 183.92,
"end": 189.92,
"text": " was no refund policy and then he sent the students to Discord and they were like, oh,"
},
{
"start": 189.92,
"end": 196.92,
"text": " I want to see. Then about two weeks, I think, into the course there was a refund policy."
},
{
"start": 197.04,
"end": 201.07999999999998,
"text": " After two weeks after the course started and the refund policy said you can get a refund"
},
{
"start": 201.07999999999998,
"end": 208.07999999999998,
"text": " within two weeks of the course starting. So this, I mean, this is all just kind of really,"
},
{
"start": 209.83999999999997,
"end": 216.83999999999997,
"text": " really weird. I encourage you to read up more on this because there are many more stories"
},
{
"start": 216.84,
"end": 223.84,
"text": " about this course. So he apologized publicly and said he shouldn't have done that, he"
},
{
"start": 227.8,
"end": 234.8,
"text": " should have hired TAs and so on. He apologized for it and that seemed to be kind of the end"
},
{
"start": 238.8,
"end": 242.8,
"text": " of that. I don't exactly know what happened to the students. Some claimed they never got"
},
{
"start": 242.8,
"end": 249.8,
"text": " a refund and so on. But then it went on and it went on badly for Siraj, if I may say,"
},
{
"start": 250.8,
"end": 257.8,
"text": " because he published a paper called The Neural Cubit and people have gone and it turns out"
},
{
"start": 257.8,
"end": 266.8,
"text": " that it is almost all plagiarized from one or two other papers. Actually, yeah, it turns"
},
{
"start": 266.8,
"end": 271.8,
"text": " out it's, I think it's two papers and it's almost all plagiarized from there. You can"
},
{
"start": 271.8,
"end": 276.8,
"text": " see on the left the green sections and on the right the red sections are exactly identical."
},
{
"start": 276.8,
"end": 283.8,
"text": " For example, this table up here, I think it's on the next page of the other paper, is exactly"
},
{
"start": 283.8,
"end": 289.8,
"text": " this from the other paper. If you look at whatever these equations, they're all the"
},
{
"start": 289.8,
"end": 296.8,
"text": " same. The sentences are exactly the same and so on. He only changed, also the diagrams,"
},
{
"start": 296.8,
"end": 303.8,
"text": " you see here on the upper left, exactly taken from this other paper. I think he mentions"
},
{
"start": 303.8,
"end": 310.8,
"text": " this other paper, he cites it once and he says his work is kind of a derivative of that"
},
{
"start": 310.8,
"end": 319.8,
"text": " or leaned on that and so on. But these aren't explicitly quotes here. The only changes"
},
{
"start": 319.8,
"end": 326.8,
"text": " he made are changes like, so whenever the other paper says we can write the combined"
},
{
"start": 326.8,
"end": 331.8,
"text": " transformation, here you can see he says I can write. Thanks to the CV encoding, I get"
},
{
"start": 331.8,
"end": 335.8,
"text": " a nonlinear functional. There's a rule in computer science. The only person who's allowed"
},
{
"start": 335.8,
"end": 345.8,
"text": " to do this is Don Knuth. No one else. That's wholly rule broken. So more seriously, he"
},
{
"start": 345.8,
"end": 353.8,
"text": " changed that and then he also kind of used a couple of synonyms which make no sense."
},
{
"start": 353.8,
"end": 359.8,
"text": " So, for example, he replaced the word gate by the word door and of course a logic gate"
},
{
"start": 359.8,
"end": 367.8,
"text": " then becomes a logic door. So here it's a non-Gaussian gate, phi. I don't know if in"
},
{
"start": 367.8,
"end": 376.8,
"text": " this instance, but in this instance he replaced it. Here it actually says gate, but sometimes"
},
{
"start": 376.8,
"end": 384.8,
"text": " it's replaced by door and also he replaced the word complex Hilbert space to complicated"
},
{
"start": 384.8,
"end": 393.8,
"text": " Hilbert space which makes no sense at all. So this, yeah, it's funny and sad at the same"
},
{
"start": 393.8,
"end": 405.8,
"text": " time. So this happened and again he's apologizing. He says I've seen claims that my neural"
},
{
"start": 405.8,
"end": 412.8,
"text": " qubit was partly plagiarized. This is true. And he basically claims it. He sort of blames"
},
{
"start": 412.8,
"end": 419.8,
"text": " it. He says he's doing too many videos a week which I agree. I mean, I can tell you that"
},
{
"start": 419.8,
"end": 426.8,
"text": " making videos is hard, even crappy videos like mine. And his are actually edited and"
},
{
"start": 426.8,
"end": 437.8,
"text": " so on. But the problem is many people more came out and said that he did the same thing"
},
{
"start": 437.8,
"end": 441.8,
"text": " to their project. Here you see someone. He did the exact same thing to our project. It"
},
{
"start": 441.8,
"end": 447.8,
"text": " took four people a couple of months to do. He acted like it was his own. And many more"
},
{
"start": 447.8,
"end": 457.8,
"text": " came out and said he plagiarized other things as well where he basically just takes code"
},
{
"start": 457.8,
"end": 464.8,
"text": " and gives minimal or no attribution to the original authors and then passed it off as"
},
{
"start": 464.8,
"end": 474.8,
"text": " his own. This after this course, yeah, everyone, this could not get any worse. Hold my gas"
},
{
"start": 474.8,
"end": 484.8,
"text": " in quantum doors. Yeah. So this all happened. I mean, I encourage you to go read up on it"
},
{
"start": 484.8,
"end": 489.8,
"text": " to make up your own mind. I just want to point out quickly the end. And I won't actually"
},
{
"start": 489.8,
"end": 495.8,
"text": " show the identity of the person. I'm posting this if you really want to find out. But it's"
},
{
"start": 495.8,
"end": 499.8,
"text": " not about that person. It's about the kind of sentiment. So there is a sentiment around"
},
{
"start": 499.8,
"end": 507.8,
"text": " that you should kind of unfollow him. And because that lends credibility to him. And"
},
{
"start": 507.8,
"end": 514.8,
"text": " there is a point to be made of that kind of if the kind of prominent researchers refer"
},
{
"start": 514.8,
"end": 520.8,
"text": " to him and so on that gives him some credibility. But I'm also very much against sort of cancel"
},
{
"start": 520.8,
"end": 526.8,
"text": " culture. It is also the case that he, like no matter how much he's plagiarized, has"
},
{
"start": 526.8,
"end": 533.8,
"text": " popularized the field more than anyone else. And maybe, you know, there is a conversation"
},
{
"start": 533.8,
"end": 542.8,
"text": " to be had and a lesson to be learned without immediately canceling someone. That's just"
},
{
"start": 542.8,
"end": 548.8,
"text": " so that I mean, there's, it's a it's a complicated issue, but just kind of want to get this out"
},
{
"start": 548.8,
"end": 558.8,
"text": " there. So go read up on this is all it's it's yeah, it's a wild world. So that being said,"
},
{
"start": 558.8,
"end": 579.8,
"text": " bye bye. Have fun."
}
] |
rvr143crpuU | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Accelerating Deep Learning by Focusing on the Biggest Losers | [
"Science & Technology"
] | [
"machine learning",
"deep learning",
"dl",
"neural network",
"training",
"convergence",
"loss",
"importance",
"speed-up",
"faster",
"ai",
"dnn",
"deep neural network",
"backprop",
"backpropagation",
"cifar10",
"svhn",
"classifier"
] | What if you could reduce the time your network trains by only training on the hard examples? This paper proposes to select samples with high loss and only train on those in order to speed up training.
Abstract:
This paper introduces Selective-Backprop, a technique that accelerates the training of deep neural networks (DNNs) by prioritizing examples with high loss at each iteration. Selective-Backprop uses the output of a training example's forward pass to decide whether to use that example to compute gradients and update parameters, or to skip immediately to the next example. By reducing the number of computationally-expensive backpropagation steps performed, Selective-Backprop accelerates training. Evaluation on CIFAR10, CIFAR100, and SVHN, across a variety of modern image models, shows that Selective-Backprop converges to target error rates up to 3.5x faster than with standard SGD and between 1.02--1.8x faster than a state-of-the-art importance sampling approach. Further acceleration of 26% can be achieved by using stale forward pass results for selection, thus also skipping forward passes of low priority examples.
Authors: Angela H. Jiang, Daniel L.-K. Wong, Giulio Zhou, David G. Andersen, Jeffrey Dean, Gregory R. Ganger, Gauri Joshi, Michael Kaminsky, Michael Kozuch, Zachary C. Lipton, Padmanabhan Pillai
https://arxiv.org/abs/1910.00762 | Hi there! Today we're looking at accelerating deep learning by focusing on the biggest losers by Angela Jiang et al. This paper is pretty simple, pretty short in idea and is a pretty much an engineering paper. So we'll go over this idea and give it a good look and discuss advantages, disadvantages and so on. What's the basic idea? The basic idea is the following. If you train a neural network, what do you do? Usually you have a training data set, which I represent here. Each line is a sample and usually your network has a bunch of layers. Each line here is a layer of weights. What you do is you group your training data set into mini batches. Let's say that's a mini batch, four samples and you pass it through the network. This is called the forward propagation. You then calculate the loss of your forward propagated signal and then you back propagate this loss. When back propagating, you want to back propagate the loss such that it reaches each of the layers and it tells each layer how to update itself. What you want to do is for each layer you actually need to back prop the loss once towards the layer below it and once towards itself in order for the layer below it to continue the back prop and for the layer itself to update its weights. Each time you back prop basically once towards the lower layer and once towards yourself. That's a lot of work. You see whatever work you have passing your samples through the network here, you basically double the work going back. The core idea of this paper is if you look at the following. In a traditional training neural network you'll have some overhead in each training step, some overhead of maybe putting the data to the GPU or something like this. Then you have a time that you require for a forward pass and then you have a big chunk that you require for the backward pass. You see it's about double the size of this forward pass. This paper asks how can we reduce this backward pass time. What they propose is the following. They propose if the backward pass is expensive and we do it here for each data point in these mini batches, why don't we stop doing this and only try to select examples that are important. Once we only have selected the important examples, only those examples get to do the backward pass. Thereby let's say if we can only select one third of the examples to do the backward pass, we can reduce the amount that's required in the backward pass, the amount of work, by one third or sorry by two thirds. The way they select the important examples is by looking at the loss. They basically say whichever examples have a high loss, these must be the important examples, these are the hard examples. If we only train on the hard examples or if we train on the hard examples more, then the network will learn on these hard examples faster. Of course there is an implication there that if your network is good on the hard examples, it's also going to be good on the easy examples. That's like the definition of hard and easy examples. Of course that's a kind of a simplifying assumption. The idea is only select the hard examples and only by how much loss they have and only then backprop these hard examples. That's how they can reduce this by a lot. There's several intricacies here. The setup time of course is the same. What they do next is they forward propagate the entire mini batch here, because they need the loss of each example and then therefore they need to forward propagate the entire mini batch. 
At the end of this they select the examples with the highest loss and they only use those in training. Training consists of another forward pass, but this one is much smaller because you only forward pass the examples that you're actually training on. Then the backward pass accordingly will also be much much smaller because now again you have less samples to actually train on. The reason that you even need this second forward pass is the following. When you do backprop you can't simply start with a signal back here and then backprop that through the network. That doesn't work usually with most network architectures. Namely what you need to do is actually while you forward pass you need to remember information at each layer. A good example of this is the MaxPool operation. In MaxPool what you do is you maybe have four pixels that are next to each other and you select one of them. Now you need to remember during the forward pass which one you selected. Otherwise the backward pass won't work. You need to know which pixel to back prop through. That's why at each point you need to remember information to inform the backward pass. That's why basically you need a second forward pass with only the examples that you want to train on. You forward pass once, calculate this loss, select the ones with the high loss, then forward pass these again and then backprop only these examples. That's the main gist of it. This is exactly what you see here. Forward pass everything, then forward pass those again that have high loss and then backprop them. There is actually an interesting thing in this graphic in that you see that this forward pass here also is shorter than this forward pass. I assume that's because this forward pass here actually needs to do those additional saving of information while this forward pass here is simply a forward pass without intention of backward passing. You can instruct the deep learning frameworks to then not remember this information. They have another improvement over their algorithm called stale selective backprop. This is called selective backprop. They have stale selective backprop. What stale selective backprop does is it says well we might not always need this forward pass. What we might be able to do is actually, first we take the entire data set, let's use a different color here, let's use this. We take the entire data set forward properly through the network and then save this save into some database the losses. Then we use these losses to select the individual points here for a while. We perform maybe this is training here. You start here, you do this loss calculation and then you run your training until a couple of epochs and then you say okay now this information here is really outdated, I should really update it. Then you do this entire thing again and then you run training some more until you again stop and say okay now this information is stale again. Thereby you can amortize the cost of these forward passes. You pass your entire training set once in a while and then use those losses to select the hard examples. That's amortized. You can then reduce this forward pass that's used for selecting again by a lot. Of course the paper shows that this doesn't hurt your performance too much if you have this stale information. This is the entire idea of the algorithm and the algorithm is detailed here. Very briefly you have this buffer and you go through the data, you forward pass every example. For each loss you calculate the probability that you should retain it. 
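To make the selection loop just described concrete, here is a minimal, hypothetical PyTorch-style sketch. The function name, the buffer size and the retention-probability helper are my own illustrative assumptions, not the authors' released code; one possible way to compute that retention probability is sketched after the next paragraph.

```python
import torch
import torch.nn.functional as F

def selective_backprop_epoch(model, loader, optimizer, retain_probability, buffer_size=64):
    """One epoch of (non-stale) Selective-Backprop, roughly following the loop described above.

    retain_probability: any callable mapping a tensor of per-example losses to
    keep-probabilities in [0, 1] (one possible choice is sketched further below).
    """
    buffer_x, buffer_y = [], []
    for x, y in loader:
        # 1) cheap selection forward pass over the whole mini-batch,
        #    without storing the activations needed for a backward pass
        with torch.no_grad():
            losses = F.cross_entropy(model(x), y, reduction="none")
        # 2) probabilistically keep the high-loss ("hard") examples
        probs = torch.as_tensor(retain_probability(losses), dtype=torch.float32)
        keep = torch.rand(len(losses)) < probs
        buffer_x.append(x[keep])
        buffer_y.append(y[keep])
        # 3) once enough hard examples have accumulated, do the expensive part:
        #    a second forward pass (this time storing activations) plus the backward pass
        if sum(b.shape[0] for b in buffer_x) >= buffer_size:
            bx, by = torch.cat(buffer_x), torch.cat(buffer_y)
            optimizer.zero_grad()
            F.cross_entropy(model(bx), by).backward()
            optimizer.step()
            buffer_x, buffer_y = [], []
    # The "stale" variant would replace step 1 with losses looked up from a
    # snapshot computed over the whole data set every few epochs.
```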
It's a probabilistic framework, it's not absolute cutoff. If you decide to choose it probabilistically you append it to this buffer. If this buffer is of a certain size then you do the back prop only on this buffer. This buffer now only has the high loss examples with higher probability. Don't forget within this backward here there is also an implicit forward pass. Then you clear the buffer and go on. There are lots of forward passes here to compute the losses and then every now and then there is a backward pass whenever the buffer is of certain size. The probabilistic calculation of how and when to retain a sample is very simple. You have a deck of recent losses, a history of recent losses. You simply calculate the percentile that a given loss has in this history and that percentile will then decide on the probability. If you raise it to a power and that looks something like this. What's often used in this paper is this 33% selection. That would be the blue curve and you see the median example here. If you are in the median then you have about a 33% chance of being retained. If you have a higher loss than that then your probability rises. The first interesting thing actually is this graphic here where they show what the algorithm deems the hardest and easiest examples. Examples chosen least frequently and this is the CIFAR-10 dataset which is images 32 by 32 in color and you need to classify them into 10 categories. You see the easiest images, the ones chosen least frequently, are almost all automobiles. Of the automobiles they're almost all where you see the full car with the wheels and whatnot like this one here. These are what the algorithm deems easy samples and if you look at the hard samples, examples chosen most frequently by selective backprop, it's these here. For example bird and bird in this case is just kind of a smear. They're just kind of smears on a blue background. It's understandably that this resolution is pretty hard to make out that this is a bird. Airplane and automobile here you see that it's only a partial view of the thing. It's not like the full car like we saw in the easy pictures. It's only partial and this seems to be pretty hard and it's understandable. This cat here to me it's even unclear if this is a cat or a dog and I think dog is also a category in CIFAR-10 so the algorithm is certainly understandably confused by this example and deems it a hard example. And here even more you see truck and this isn't a truck as far as I can make out. These are two humans on the picture with no truck anywhere visible. So this seems to be a mislabeled example and of course mislabeled examples are going to be of high loss to the algorithm. This is the first criticism or thing and the authors recognize this that if you up weigh your examples with high loss you are going to up weigh all the mislabeled examples as well and thereby you're going to train more on the mislabeled examples and thereby you're going to possibly degrade your test performance much more than had you given every sample the same weight. And the authors address this actually nicely by doing an experiment and the experiment is what if we artificially mislabel examples how much can these algorithms tolerate. And so they often have these graphics here where they show test error over time. So test error and the x-axis here is number of back propped images which is kind of a time dimension in training these algorithms. 
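Before continuing with the shuffled-label curves, here is one rough illustration of the percentile-based retention probability described above. It is an assumed sketch: the history size and the exponent are illustrative choices, not values taken from the paper.

```python
from collections import deque
import numpy as np

class PercentileRetention:
    """Keep-probability = (percentile of the loss among recent losses) ** beta."""

    def __init__(self, history_size=1024, beta=2.0):
        self.history = deque(maxlen=history_size)  # rolling window of recent losses
        self.beta = beta  # beta = 2 keeps roughly a third of examples on average

    def __call__(self, losses):
        losses = np.atleast_1d(np.asarray(losses, dtype=float))
        hist = np.fromiter(self.history, dtype=float) if self.history else losses
        # percentile: fraction of recent losses that each new loss is at least as large as
        percentiles = np.array([np.mean(hist <= l) for l in losses])
        self.history.extend(losses.tolist())
        return percentiles ** self.beta

# toy check: higher losses get a higher chance of being kept
retain = PercentileRetention()
print(retain(np.array([0.1, 1.0, 3.0])))
```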
You see the blue is a traditional model and the pink is a selective back prop with a 33% retain rate. So you see the selective back prop is much faster in reaching a low error and this first thing is simply with 1% of the labels shuffled. So 1% of the images are mislabeled. Selective back prop is still much faster than the traditional trajectory. If you go to 10% shuffled still you can see selective back prop is faster reaching a low error. Of course the error now generally is higher. But if you go to 20% here, 20% shuffled labels, what you can see is it starts to become clear that selective back prop retains 33% of the hardest examples right. So 20% of the examples have a wrong label. That means most of what it upweighs are wrongly labeled examples. Almost, let's say that there's still a lot of correctly labeled examples. But you see it gets to a low error but then it goes up again as it kind of massively overfits on these wrongly labeled examples because it upweighs them so much. Because here at the beginning still every example is hard right. So these wrongly labeled examples they'll get about the same weight as correctly labeled examples because the network isn't trained yet. But as you go lower it starts to massively overfit. The traditional model in comparison kind of just reaches this low error that okay is now corrupted by these wrong labels, but it doesn't hurt as much. So that's kind of my first criticism. If you have a lot of noisy labels or if you have a lot of mislabeled examples then this method might actually hurt more than it helps. But the level is interesting, that it can kind of tolerate 10% but it gets kind of into trouble at 20 or more percent. So this is the first criticism and that's how the authors address it. I really like this ablation study that they do. Here this is kind of the meat of the experiment. So what they show here, these curves on the bottom, and let's look at this curve, is on the x-axis you actually have wall clock time now. So how much time do you need in order to reach a kind of low error. Here is test set error. You see the traditional model in blue has a certain trajectory. Now Kath18 is a baseline, don't worry about it. What we're interested in is the selective backprop which is the pink, which you can see outperforms this traditional training. And what we're also interested in is the stale SB. So stale meaning it has this buffer of information that is supposed to reduce the time again. And you see that even more outperforms the traditional approach. You can also see that the staleness here apparently doesn't hurt the performance too much. You see the error is fairly close and it reaches this error in a much faster time. This is on CIFAR 10. They have this nice table up here where they show the speed up to reach a given error. So what they do is they take this error of the traditional model, this test set error, and they ask how fast are these methods in reaching this error times a constant. So times 1.1, times 1.2, times 1.4 now. Of course reaching 1.4 times the final error is much easier, but it's also easier for the traditional model of course. So that's the catch, but these are kind of benchmarks they chose to see how fast these models are in reaching 1.1, 1.2, 1.4 times the error of a traditionally trained model. You can see here on CIFAR 10 for example, actually let's go to SVHN. SVHN is the easiest of the data sets and it shows the most clear thing.
So the traditional error is 1.7% and you see that this selective back prop is 3.4 times faster in reaching 1.1 times the error of this traditional model, it's also 3.4 times faster reaching 1.2 times and it's 3.5 times faster in reaching 1.4 times. The stale selective back prop is even faster, so 4.3, 4.9, 5 times faster in reaching 1.4 times the error. And so what you can see here is that these methods really make it faster, but also there are two important things to note in this table. First of all you can see as you go to the right in the table the speed ups get higher, and what that means is that as you make the problem easier, so as you need to reach a higher error, which is as you need to reach a higher loss value, these methods get there faster. What that means is they're really fast at reaching a somewhat decent point, which is represented here, they're really fast, but if you need them to reach a more and more accurate performance they themselves get slower and slower. This is of course clear because what you're doing is you're no longer treating every data point the same, you are introducing a bias into your training by only training on the hard examples. So you're introducing a bias and this bias will give you a speed up but also hurt your performance, and thereby if you have to get more and more accurate you will lose much of that speed up because you need to reduce that bias at the end that you introduced. So that's the first caveat: as you want to get to a higher and higher performance these methods will help less and less, because they basically introduce the bias to gain speed at the beginning of training or to reach less accurate points. The second thing is, as you look at these problems here, so SVHN 1.7 percent error, CIFAR 10 is a slightly harder problem, 2.9 percent error, and CIFAR 100 is really a harder problem where a traditional model has 18 percent error. If you look at the speed ups now then you can see even at this rightmost end, here you have the 3.5 and 5x speed up, here we have a 1.5, 2x speed up, here we have a 1.2, 1.6x speed up. So as the problems get harder and as the kind of models get fancier, as the classes get more, then the speed up is much lower, and I believe that's because the bias you introduce by reweighing the samples will hurt you much more on a difficult and large problem with a large network than it will hurt you on an easy problem. Right, on an easy problem you were fine introducing some bias, but if you have a hard noisy problem then this bias you introduce will hurt you much more, and thereby the speed up that these methods give you is much much less. And so this means that the performance of these models is directly anti-correlated with the hardness of the problem, and that tells me it kind of makes it almost unusable, or it goes towards, if I look at the numbers over here and extrapolate that to something like ImageNet, it tells me that these methods are going to be almost useless on a data set of the size and complexity of ImageNet, and the interesting problems nowadays are very much in the domain of more hard, more complex problems. So the kind of usefulness of this method in practice is something that I wouldn't bet on just from reading this paper. I'm open to being convinced otherwise, but just from reading this paper it seems like the harder you make the problem the less these methods help.
And that's exactly not what you want, you want exactly the opposite. You want to say oh, if I scale this up it'll, you know, give me even more of a speed up and that's going to be even better, but this is the opposite. And given that they have basically no theoretical analysis of how much this bias hurts you, or how you can still make it kind of good in expectation, how you would need to correct at the end and so on, I would first of course test it. I'm very interested to see tests on larger, more complex problems, but from this I'm a bit skeptical, I'm sorry. Yeah, so they show, I mean they show that on these data sets it clearly helps, clearly speeds up the training, and that's of course already a good thing, and they do the required experiments, they do the ablation studies on these data sets and so on. So you can see here for example on these first graphics on all the data sets you see it clearly goes down as you introduce the more sophisticated algorithms, but again you can see on the hard data set it doesn't go down as much. All right, but they do discuss this, they're really fair to themselves, they discuss this in their paper of how practical this is and so on and what else they tried that didn't work, and I think that it's a really good paper in itself and it's a really good investigation. All right, so that was it for me, have a fun day, bye bye | [
{
"start": 0,
"end": 5.04,
"text": " Hi there! Today we're looking at accelerating deep learning by focusing on"
},
{
"start": 5.04,
"end": 12.6,
"text": " the biggest losers by Angela Jiang et al. This paper is pretty simple, pretty short"
},
{
"start": 12.6,
"end": 18.76,
"text": " in idea and is a pretty much an engineering paper. So we'll go over this"
},
{
"start": 18.76,
"end": 24.88,
"text": " idea and give it a good look and discuss advantages, disadvantages and so on."
},
{
"start": 24.88,
"end": 30.759999999999998,
"text": " What's the basic idea? The basic idea is the following. If you train a neural"
},
{
"start": 30.759999999999998,
"end": 37.44,
"text": " network, what do you do? Usually you have a training data set, which I represent"
},
{
"start": 37.44,
"end": 42.44,
"text": " here. Each line is a sample and usually your network has a bunch of"
},
{
"start": 42.44,
"end": 49,
"text": " layers. Each line here is a layer of weights. What you do is you group your"
},
{
"start": 49,
"end": 53.120000000000005,
"text": " training data set into mini batches. Let's say that's a mini batch, four"
},
{
"start": 53.12,
"end": 57.879999999999995,
"text": " samples and you pass it through the network. This is called the forward"
},
{
"start": 57.879999999999995,
"end": 66.24,
"text": " propagation. You then calculate the loss of your forward propagated"
},
{
"start": 66.24,
"end": 73.44,
"text": " signal and then you back propagate this loss. When back propagating, you"
},
{
"start": 73.44,
"end": 77.16,
"text": " want to back propagate the loss such that it reaches each of the layers and"
},
{
"start": 77.16,
"end": 81.56,
"text": " it tells each layer how to update itself. What you want to do is for each"
},
{
"start": 81.56,
"end": 86.32000000000001,
"text": " layer you actually need to back prop the loss once towards the layer below it and"
},
{
"start": 86.32000000000001,
"end": 91.64,
"text": " once towards itself in order for the layer below it to continue the back prop"
},
{
"start": 91.64,
"end": 97.08,
"text": " and for the layer itself to update its weights. Each time you back prop"
},
{
"start": 97.08,
"end": 103.56,
"text": " basically once towards the lower layer and once towards yourself. That's a"
},
{
"start": 103.56,
"end": 110.72,
"text": " lot of work. You see whatever work you have passing your samples through the"
},
{
"start": 110.72,
"end": 117.96,
"text": " network here, you basically double the work going back."
},
{
"start": 117.96,
"end": 124.2,
"text": " The core idea of this paper is if you look at the following. In a"
},
{
"start": 124.2,
"end": 129.92,
"text": " traditional training neural network you'll have some overhead in each"
},
{
"start": 129.92,
"end": 135.92,
"text": " training step, some overhead of maybe putting the data to the GPU or something"
},
{
"start": 135.92,
"end": 142.23999999999998,
"text": " like this. Then you have a time that you require for a forward pass and then you"
},
{
"start": 142.23999999999998,
"end": 145.95999999999998,
"text": " have a big chunk that you require for the backward pass. You see it's about"
},
{
"start": 145.95999999999998,
"end": 152.56,
"text": " double the size of this forward pass. This paper asks how can we reduce this"
},
{
"start": 152.56,
"end": 160.79999999999998,
"text": " backward pass time. What they propose is the following. They propose if"
},
{
"start": 160.79999999999998,
"end": 165.23999999999998,
"text": " the backward pass is expensive and we do it here for each data point in these"
},
{
"start": 165.24,
"end": 172.04000000000002,
"text": " mini batches, why don't we stop doing this and only try to select"
},
{
"start": 172.04000000000002,
"end": 177.64000000000001,
"text": " examples that are important. Once we only have selected the important"
},
{
"start": 177.64000000000001,
"end": 184.56,
"text": " examples, only those examples get to do the backward pass. Thereby let's say if"
},
{
"start": 184.56,
"end": 189.38,
"text": " we can only select one third of the examples to do the backward pass, we can"
},
{
"start": 189.38,
"end": 195.08,
"text": " reduce the amount that's required in the backward pass, the amount of work, by"
},
{
"start": 195.08,
"end": 202.24,
"text": " one third or sorry by two thirds. The way they select the important examples"
},
{
"start": 202.24,
"end": 208.32000000000002,
"text": " is by looking at the loss. They basically say whichever examples have a"
},
{
"start": 208.32000000000002,
"end": 213.28,
"text": " high loss, these must be the important examples, these are the hard examples."
},
{
"start": 213.28,
"end": 218,
"text": " If we only train on the hard examples or if we train on the hard"
},
{
"start": 218,
"end": 226.56,
"text": " examples more, then the network will learn on these hard examples faster."
},
{
"start": 226.56,
"end": 230.6,
"text": " Of course there is an implication there that if your network is good on the hard"
},
{
"start": 230.6,
"end": 234.64,
"text": " examples, it's also going to be good on the easy examples. That's like the"
},
{
"start": 234.64,
"end": 240.88,
"text": " definition of hard and easy examples. Of course that's a kind of a simplifying"
},
{
"start": 240.88,
"end": 247.36,
"text": " assumption. The idea is only select the hard examples and only by how much"
},
{
"start": 247.36,
"end": 252.16000000000003,
"text": " loss they have and only then backprop these hard examples. That's how"
},
{
"start": 252.16000000000003,
"end": 258.96000000000004,
"text": " they can reduce this by a lot. There's several intricacies here."
},
{
"start": 258.96000000000004,
"end": 263.8,
"text": " The setup time of course is the same. What they do next is they forward"
},
{
"start": 263.8,
"end": 268.72,
"text": " propagate the entire mini batch here, because they need the loss of each"
},
{
"start": 268.72,
"end": 273.8,
"text": " example and then therefore they need to forward propagate the entire mini batch."
},
{
"start": 273.8,
"end": 280.44,
"text": " At the end of this they select the examples with the highest loss and they"
},
{
"start": 280.44,
"end": 285.5,
"text": " only use those in training. Training consists of another forward"
},
{
"start": 285.5,
"end": 289.24,
"text": " pass, but this one is much smaller because you only forward pass the"
},
{
"start": 289.24,
"end": 293.96000000000004,
"text": " examples that you're actually training on. Then the backward pass accordingly"
},
{
"start": 293.96000000000004,
"end": 300.2,
"text": " will also be much much smaller because now again you have less samples to"
},
{
"start": 300.2,
"end": 306.92,
"text": " actually train on. The reason that you even need this second forward"
},
{
"start": 306.92,
"end": 312.36,
"text": " pass is the following. When you do backprop you can't simply start with a"
},
{
"start": 312.36,
"end": 316.91999999999996,
"text": " signal back here and then backprop that through the network. That doesn't work"
},
{
"start": 316.91999999999996,
"end": 322.76,
"text": " usually with most network architectures. Namely what you need to do is actually"
},
{
"start": 322.76,
"end": 328.56,
"text": " while you forward pass you need to remember information at each layer. A good"
},
{
"start": 328.56,
"end": 333.36,
"text": " example of this is the MaxPool operation. In MaxPool what you do is you"
},
{
"start": 333.36,
"end": 337.32,
"text": " maybe have four pixels that are next to each other and you select one of them."
},
{
"start": 337.32,
"end": 342.32,
"text": " Now you need to remember during the forward pass which one you selected."
},
{
"start": 342.32,
"end": 347.32,
"text": " Otherwise the backward pass won't work. You need to know which pixel to back"
},
{
"start": 347.32,
"end": 352.96,
"text": " prop through. That's why at each point you need to remember information to"
},
{
"start": 352.96,
"end": 358.88,
"text": " inform the backward pass. That's why basically you need a second forward"
},
{
"start": 358.88,
"end": 368.32,
"text": " pass with only the examples that you want to train on."
},
{
"start": 368.32,
"end": 373.64,
"text": " You forward pass once, calculate this loss, select the ones with the"
},
{
"start": 373.64,
"end": 378.59999999999997,
"text": " high loss, then forward pass these again and then backprop only these examples."
},
{
"start": 378.6,
"end": 384.40000000000003,
"text": " That's the main gist of it. This is exactly what you see here."
},
{
"start": 384.40000000000003,
"end": 390,
"text": " Forward pass everything, then forward pass those again that have high loss"
},
{
"start": 390,
"end": 394.20000000000005,
"text": " and then backprop them. There is actually an interesting thing in this"
},
{
"start": 394.20000000000005,
"end": 399.04,
"text": " graphic in that you see that this forward pass here also is shorter than"
},
{
"start": 399.04,
"end": 403.12,
"text": " this forward pass. I assume that's because this forward pass here actually"
},
{
"start": 403.12,
"end": 407.68,
"text": " needs to do those additional saving of information while this forward pass here"
},
{
"start": 407.68,
"end": 412.24,
"text": " is simply a forward pass without intention of backward passing. You can"
},
{
"start": 412.24,
"end": 419.40000000000003,
"text": " instruct the deep learning frameworks to then not remember this information."
},
{
"start": 419.40000000000003,
"end": 425.4,
"text": " They have another improvement over their algorithm called stale"
},
{
"start": 425.4,
"end": 429.48,
"text": " selective backprop. This is called selective backprop. They have stale"
},
{
"start": 429.48,
"end": 435.12,
"text": " selective backprop. What stale selective backprop does is it says well we might"
},
{
"start": 435.12,
"end": 441.2,
"text": " not always need this forward pass. What we might be able to do is"
},
{
"start": 441.2,
"end": 446.24,
"text": " actually, first we take the entire data set,"
},
{
"start": 446.24,
"end": 450.96,
"text": " let's use a different color here, let's use this. We take the"
},
{
"start": 450.96,
"end": 457.04,
"text": " entire data set forward properly through the network and then save this"
},
{
"start": 457.04,
"end": 463.6,
"text": " save into some database the losses. Then we use these losses to select"
},
{
"start": 463.6,
"end": 471.36,
"text": " the individual points here for a while. We perform maybe"
},
{
"start": 471.36,
"end": 477.36,
"text": " this is training here. You start here, you do this loss calculation and then"
},
{
"start": 477.36,
"end": 483,
"text": " you run your training until a couple of epochs and then you say okay now this"
},
{
"start": 483,
"end": 487.08000000000004,
"text": " information here is really outdated, I should really update it. Then you do this"
},
{
"start": 487.08000000000004,
"end": 492.8,
"text": " entire thing again and then you run training some more until you again stop"
},
{
"start": 492.8,
"end": 498.44,
"text": " and say okay now this information is stale again. Thereby you can amortize"
},
{
"start": 498.44,
"end": 504.36,
"text": " the cost of these forward passes. You pass your entire training set once"
},
{
"start": 504.36,
"end": 510.04,
"text": " in a while and then use those losses to select the hard examples. That's"
},
{
"start": 510.04,
"end": 516.72,
"text": " amortized. You can then reduce this forward pass that's used for selecting"
},
{
"start": 516.72,
"end": 521.16,
"text": " again by a lot. Of course the paper shows that this doesn't hurt your"
},
{
"start": 521.16,
"end": 526.28,
"text": " performance too much if you have this stale information. This is the"
},
{
"start": 526.28,
"end": 533.36,
"text": " entire idea of the algorithm and the algorithm is detailed here. Very"
},
{
"start": 533.36,
"end": 539.68,
"text": " briefly you have this buffer and you go through the data, you forward pass every"
},
{
"start": 539.68,
"end": 545.24,
"text": " example. For each loss you calculate the probability that you should retain it."
},
{
"start": 545.24,
"end": 551.6,
"text": " It's a probabilistic framework, it's not absolute cutoff. If you decide to"
},
{
"start": 551.6,
"end": 556.48,
"text": " choose it probabilistically you append it to this buffer. If this buffer is of a"
},
{
"start": 556.48,
"end": 561.64,
"text": " certain size then you do the back prop only on this buffer. This buffer"
},
{
"start": 561.64,
"end": 565,
"text": " now only has the high loss examples with higher"
},
{
"start": 565,
"end": 570.6800000000001,
"text": " probability. Don't forget within this backward here there is also an implicit"
},
{
"start": 570.68,
"end": 577.56,
"text": " forward pass. Then you clear the buffer and go on. There are lots of"
},
{
"start": 577.56,
"end": 583.76,
"text": " forward passes here to compute the losses and then every now"
},
{
"start": 583.76,
"end": 587.4799999999999,
"text": " and then there is a backward pass whenever the buffer is of certain size."
},
{
"start": 587.4799999999999,
"end": 592.8399999999999,
"text": " The probabilistic calculation of how and when to retain a sample is very"
},
{
"start": 592.8399999999999,
"end": 599.56,
"text": " simple. You have a deck of recent losses, a history of recent losses. You simply"
},
{
"start": 599.56,
"end": 605.4399999999999,
"text": " calculate the percentile that a given loss has in this history and that"
},
{
"start": 605.4399999999999,
"end": 609.88,
"text": " percentile will then decide on the probability. If you raise it to a power"
},
{
"start": 609.88,
"end": 615.88,
"text": " and that looks something like this. What's often used in this paper is this"
},
{
"start": 615.88,
"end": 620.9599999999999,
"text": " 33% selection. That would be the blue curve and you see the median example"
},
{
"start": 620.9599999999999,
"end": 627.0799999999999,
"text": " here. If you are in the median then you have about a 33% chance of being"
},
{
"start": 627.08,
"end": 633.8000000000001,
"text": " retained. If you have a higher loss than that then your probability rises."
},
{
"start": 633.8000000000001,
"end": 638.76,
"text": " The first interesting thing actually is this graphic here where they show"
},
{
"start": 638.76,
"end": 645.8000000000001,
"text": " what the algorithm deems the hardest and easiest examples. Examples chosen"
},
{
"start": 645.8000000000001,
"end": 652.2800000000001,
"text": " least frequently and this is the CIFAR-10 dataset which is images 32 by 32 in"
},
{
"start": 652.28,
"end": 658.48,
"text": " color and you need to classify them into 10 categories. You see the easiest"
},
{
"start": 658.48,
"end": 664.3199999999999,
"text": " images, the ones chosen least frequently, are almost all automobiles."
},
{
"start": 664.3199999999999,
"end": 670.3199999999999,
"text": " Of the automobiles they're almost all where you see the full car with the"
},
{
"start": 670.3199999999999,
"end": 676.3399999999999,
"text": " wheels and whatnot like this one here. These are what the"
},
{
"start": 676.34,
"end": 683.2,
"text": " algorithm deems easy samples and if you look at the hard samples, examples"
},
{
"start": 683.2,
"end": 689.4,
"text": " chosen most frequently by selective backprop, it's these here. For example"
},
{
"start": 689.4,
"end": 695.6,
"text": " bird and bird in this case is just kind of a smear. They're just kind of smears"
},
{
"start": 695.6,
"end": 701.12,
"text": " on a blue background. It's understandably that this resolution is pretty hard to"
},
{
"start": 701.12,
"end": 705.96,
"text": " make out that this is a bird. Airplane and automobile here you see that it's"
},
{
"start": 705.96,
"end": 713.1600000000001,
"text": " only a partial view of the thing. It's not like the full car like we saw"
},
{
"start": 713.1600000000001,
"end": 718.36,
"text": " in the easy pictures. It's only partial and this seems to be pretty hard and"
},
{
"start": 718.36,
"end": 724.36,
"text": " it's understandable. This cat here to me it's even unclear if this is a cat or a"
},
{
"start": 724.36,
"end": 731.2800000000001,
"text": " dog and I think dog is also a category in CIFAR-10 so the algorithm is"
},
{
"start": 731.28,
"end": 736.9599999999999,
"text": " certainly understandably confused by this example and deems it a hard example."
},
{
"start": 736.9599999999999,
"end": 743.48,
"text": " And here even more you see truck and this isn't a truck as far as I can make"
},
{
"start": 743.48,
"end": 750.36,
"text": " out. These are two humans on the picture with no truck anywhere visible. So this"
},
{
"start": 750.36,
"end": 755.68,
"text": " seems to be a mislabeled example and of course mislabeled examples are going to"
},
{
"start": 755.68,
"end": 762.64,
"text": " be of high loss to the algorithm. This is the first criticism or thing and the"
},
{
"start": 762.64,
"end": 769.92,
"text": " authors recognize this that if you up weigh your examples with high loss you"
},
{
"start": 769.92,
"end": 775.4399999999999,
"text": " are going to up weigh all the mislabeled examples as well and thereby you're going"
},
{
"start": 775.4399999999999,
"end": 780.1999999999999,
"text": " to train more on the mislabeled examples and thereby you're going to possibly"
},
{
"start": 780.2,
"end": 786.5600000000001,
"text": " degrade your test performance much more than had you given every sample the same"
},
{
"start": 786.5600000000001,
"end": 792.08,
"text": " weight. And the authors address this actually nicely by doing an experiment"
},
{
"start": 792.08,
"end": 797.32,
"text": " and the experiment is what if we artificially mislabel examples how much"
},
{
"start": 797.32,
"end": 802.96,
"text": " can these algorithms tolerate. And so they often have these graphics here"
},
{
"start": 802.96,
"end": 811,
"text": " where they show test error over time. So test error and the x-axis here is number"
},
{
"start": 811,
"end": 816,
"text": " of back propped images which is kind of a time dimension in training these"
},
{
"start": 816,
"end": 823.2,
"text": " algorithms. You see the blue is a traditional model and the pink is a"
},
{
"start": 823.2,
"end": 831.0400000000001,
"text": " selective back prop with a 33% retain rate. So you see the selective back prop"
},
{
"start": 831.04,
"end": 836.76,
"text": " is much faster in reaching a low error and this first thing is simply with 1%"
},
{
"start": 836.76,
"end": 841.5999999999999,
"text": " of the labels shuffled. So 1% of the images are mislabeled. Selective back"
},
{
"start": 841.5999999999999,
"end": 851.0799999999999,
"text": " prop is still much faster than the traditional trajectory. If you go to 10%"
},
{
"start": 851.0799999999999,
"end": 856.8399999999999,
"text": " shuffled still you can see selective back prop is faster reaching a low error."
},
{
"start": 856.84,
"end": 864.6800000000001,
"text": " Of course the error now generally is higher. But if you go to 20% here what"
},
{
"start": 864.6800000000001,
"end": 870.64,
"text": " you can see 20% shuffled labels what you can see it starts to become clear that"
},
{
"start": 870.64,
"end": 878.2,
"text": " these selective back prop it retains 33% of the hardest examples right. So and"
},
{
"start": 878.2,
"end": 885.52,
"text": " 20% of the examples have a wrong label. That means most of what it upweighs are"
},
{
"start": 885.52,
"end": 890.1999999999999,
"text": " wrongly labeled examples. Almost let's say that there's still a lot of"
},
{
"start": 890.1999999999999,
"end": 897.76,
"text": " correctly labeled examples. But you see it gets to a low error but then it gets"
},
{
"start": 897.76,
"end": 903.16,
"text": " up again as it kind of massively overfits on these wrongly labeled examples"
},
{
"start": 903.16,
"end": 909.76,
"text": " because it upweighs them so much. Because here in still every"
},
{
"start": 909.76,
"end": 914.4,
"text": " example is hard right. So these wrongly labeled examples they'll get about the"
},
{
"start": 914.4,
"end": 917.72,
"text": " same weight as correctly labeled examples because the network isn't"
},
{
"start": 917.72,
"end": 923.88,
"text": " trained yet. But as you go lower it starts to massively overfit. So compared"
},
{
"start": 923.88,
"end": 933.48,
"text": " to the traditional model kind of just reaches this low error that okay is now"
},
{
"start": 933.48,
"end": 938.64,
"text": " corrupted by these wrong labels but it doesn't it doesn't hurt as much. So"
},
{
"start": 938.64,
"end": 944.16,
"text": " that's kind of my first criticism. If you have a lot of noisy labels or if you"
},
{
"start": 944.16,
"end": 950.4399999999999,
"text": " have a lot of mislabeled examples then this method might actually hurt more"
},
{
"start": 950.4399999999999,
"end": 955.8399999999999,
"text": " than it helps. But the level is interesting that it can kind of tolerate"
},
{
"start": 955.8399999999999,
"end": 965.68,
"text": " 10% but it gets kind of into trouble at 20 or so more percent. So this is the"
},
{
"start": 965.68,
"end": 969.24,
"text": " first criticism and that's how the authors address it. I really like this"
},
{
"start": 969.24,
"end": 976.24,
"text": " ablation study that they do. Here this is kind of the meat of the experiment."
},
{
"start": 976.24,
"end": 980.28,
"text": " So what they show here these curves on the bottom and let's look at this curve"
},
{
"start": 980.28,
"end": 986.28,
"text": " is on the x-axis you actually have wall clock time now. So how much time do you"
},
{
"start": 986.28,
"end": 993.72,
"text": " need in order to reach a kind of low error. Here is test set error. You see the"
},
{
"start": 993.72,
"end": 999.2,
"text": " traditional model in blue has a certain trajectory. Now cath 18 is a baseline"
},
{
"start": 999.2,
"end": 1004.36,
"text": " don't worry about it. What we're interested in is the selective"
},
{
"start": 1004.36,
"end": 1010.76,
"text": " backprop which is the pink which you can see outperforms this traditional"
},
{
"start": 1010.76,
"end": 1016.84,
"text": " training. And what we're also interested in is the stale SB. So stale meaning it"
},
{
"start": 1016.84,
"end": 1021.36,
"text": " has this buffer of information that reduces it's supposed to reduce the time"
},
{
"start": 1021.36,
"end": 1027.66,
"text": " again. And you see that is even that even more outperforms the traditional"
},
{
"start": 1027.66,
"end": 1033.88,
"text": " approach. You can also see that the staleness here apparently doesn't hurt"
},
{
"start": 1033.88,
"end": 1039.3200000000002,
"text": " the performance too much. You see the error is fairly close and it reaches"
},
{
"start": 1039.3200000000002,
"end": 1046.3200000000002,
"text": " this error in a much faster time. This on CIFAR 10. They have this nice table up here"
},
{
"start": 1046.3200000000002,
"end": 1054.76,
"text": " where they show the speed up to reach a given error. So what they do is they take"
},
{
"start": 1054.76,
"end": 1060.04,
"text": " this error of the traditional model this test set error and they ask how fast are"
},
{
"start": 1060.04,
"end": 1066.68,
"text": " these methods in reaching this error times a constant. So times 1.1 times 1.2"
},
{
"start": 1066.68,
"end": 1072.2,
"text": " times 1.4 now. Of course the reaching 1.4 times the final error is much is"
},
{
"start": 1072.2,
"end": 1079.04,
"text": " easier and thereby but it's also easier for the traditional model of course. So"
},
{
"start": 1079.04,
"end": 1085.2,
"text": " that's the catch but these are kind of benchmarks they chose to how fast are"
},
{
"start": 1085.2,
"end": 1090.36,
"text": " these models in reaching 1.1 1.2 1.4 times the error of a traditionally"
},
{
"start": 1090.36,
"end": 1097.08,
"text": " trained model. You can see here on CIFAR 10 for example actually it's go to SVHN."
},
{
"start": 1097.08,
"end": 1102.92,
"text": " SVHN is the easiest of the of the data sets and it shows the most clear thing."
},
{
"start": 1102.92,
"end": 1111.92,
"text": " So the traditional error is 1.7% and you see that the speed up is so this"
},
{
"start": 1111.92,
"end": 1120.24,
"text": " selective back prop is 3.4 times faster in reaching this 1.1 times the"
},
{
"start": 1120.24,
"end": 1127.28,
"text": " error of this traditional model and it's also 3.4 times faster reaching 1.2"
},
{
"start": 1127.28,
"end": 1135.72,
"text": " times and it's 3.5 times faster in reaching it 1.4 times. The stale"
},
{
"start": 1135.72,
"end": 1143.32,
"text": " selective back prop is even faster so 4.3 4.9 5 times faster in reaching 1.4"
},
{
"start": 1143.32,
"end": 1152.24,
"text": " times this reaching 1.4 times the the error and so what you can what you can"
},
{
"start": 1152.24,
"end": 1157.76,
"text": " see here is that these methods really make it faster but also there's kind of"
},
{
"start": 1157.76,
"end": 1162.56,
"text": " two things two important things to note in this table. First of all you can see"
},
{
"start": 1162.56,
"end": 1170.1200000000001,
"text": " as you go to the right in the table the speed ups get higher and what it means"
},
{
"start": 1170.1200000000001,
"end": 1176.6,
"text": " is that as you need to reach as you make the problem easier so as you need to"
},
{
"start": 1176.6,
"end": 1184.8799999999999,
"text": " reach a higher error which is as you need to reach a higher loss value these"
},
{
"start": 1184.8799999999999,
"end": 1190.9199999999998,
"text": " methods are there faster what that means is they're really fast at reaching a"
},
{
"start": 1190.9199999999998,
"end": 1196.7199999999998,
"text": " somewhat decent point which is represented here they're really fast but"
},
{
"start": 1196.7199999999998,
"end": 1202.3999999999999,
"text": " if they need them to reach a more and more accurate performance they"
},
{
"start": 1202.4,
"end": 1209.16,
"text": " themselves get slower and slower so this this is of course clear because what"
},
{
"start": 1209.16,
"end": 1214.5600000000002,
"text": " you're doing is you're no longer treating every day to point the same you"
},
{
"start": 1214.5600000000002,
"end": 1219.48,
"text": " are introducing a bias into your training by only training on the hard"
},
{
"start": 1219.48,
"end": 1225.3200000000002,
"text": " examples so you're introducing a bias and this bias will give you a speed up"
},
{
"start": 1225.3200000000002,
"end": 1229.52,
"text": " but also hurt your performance and thereby if you have to get more and more"
},
{
"start": 1229.52,
"end": 1236.32,
"text": " accurate you will you will lose much of that speed up because you need to reduce"
},
{
"start": 1236.32,
"end": 1242.6399999999999,
"text": " that bias at the end that you introduced so that's the first caveat as you want"
},
{
"start": 1242.6399999999999,
"end": 1247.6399999999999,
"text": " to get to a higher and higher performance these methods will help less"
},
{
"start": 1247.6399999999999,
"end": 1253.36,
"text": " and less because they basically introduce the bias to gain speed at the"
},
{
"start": 1253.36,
"end": 1262.1599999999999,
"text": " beginning of training or to reach less accurate points the second thing is as"
},
{
"start": 1262.1599999999999,
"end": 1270.52,
"text": " you look at these problems here so SVH n 1.7 percent error C for 10 is a"
},
{
"start": 1270.52,
"end": 1276.1999999999998,
"text": " slightly harder problem 2.9 percent error and C for 100 is really a harder"
},
{
"start": 1276.1999999999998,
"end": 1280.9599999999998,
"text": " problem where a traditional model has 18 percent error if you look at the speed"
},
{
"start": 1280.96,
"end": 1290.48,
"text": " ups now then you can see even at this right most end here you have the 3.5 and"
},
{
"start": 1290.48,
"end": 1298.8400000000001,
"text": " 5x speed up here we have a 1.5 2x speed up here we have a 1.2 1.6x speed up so"
},
{
"start": 1298.8400000000001,
"end": 1305.8400000000001,
"text": " as the problems get harder and as the kind of models get get fancier as the"
},
{
"start": 1305.84,
"end": 1313.9199999999998,
"text": " classes get more then the the speed up is much lower and I believe that's"
},
{
"start": 1313.9199999999998,
"end": 1321.6,
"text": " because the the bias you introduce by reweighing the samples the bias you"
},
{
"start": 1321.6,
"end": 1327.32,
"text": " introduce will hurt you much more on a difficult and large problem with a large"
},
{
"start": 1327.32,
"end": 1333.52,
"text": " network then it will hurt you on an easy problem right easy problem you were fine"
},
{
"start": 1333.52,
"end": 1339.28,
"text": " introducing some bias but if you have a hard noisy problem then this bias you"
},
{
"start": 1339.28,
"end": 1345.6,
"text": " introduce will hurt you much more and thereby this the speed up that these"
},
{
"start": 1345.6,
"end": 1351.6,
"text": " methods give you is much much less and so this means that the performance of"
},
{
"start": 1351.6,
"end": 1357.2,
"text": " these models is directly anti correlated with the hardness of the problem and"
},
{
"start": 1357.2,
"end": 1364.4,
"text": " that tells me it kind of makes it almost unusable or it goes towards if I look at"
},
{
"start": 1364.4,
"end": 1370,
"text": " the numbers if I look at the numbers over here and extrapolate that to"
},
{
"start": 1370,
"end": 1374.16,
"text": " something like image net it tells me that these methods are going to be"
},
{
"start": 1374.16,
"end": 1381.24,
"text": " almost useless on a data set of the size and complexity as image net and the"
},
{
"start": 1381.24,
"end": 1387.1200000000001,
"text": " interesting problems nowadays are very much in the domain of more hard more"
},
{
"start": 1387.12,
"end": 1393.8,
"text": " complex problems so the the kind of usefulness of this method in practice"
},
{
"start": 1393.8,
"end": 1400.08,
"text": " is something that I wouldn't bet on just from reading this paper I'm open to be"
},
{
"start": 1400.08,
"end": 1403.8799999999999,
"text": " convinced otherwise but just from reading this papers it seems like the"
},
{
"start": 1403.8799999999999,
"end": 1407,
"text": " harder you make the problem the less these methods help and that's exactly"
},
{
"start": 1407,
"end": 1411.4399999999998,
"text": " not what you want you want exactly the opposite you want to say oh if I scale"
},
{
"start": 1411.4399999999998,
"end": 1416.28,
"text": " this up it'll it'll you know give me even more of a speed up and that's going"
},
{
"start": 1416.28,
"end": 1423.24,
"text": " to be even better but this is the opposite so and given that they have no"
},
{
"start": 1423.24,
"end": 1429.12,
"text": " basically no theoretical analysis of how much this bias hurts you or how you can"
},
{
"start": 1429.12,
"end": 1433.44,
"text": " still make it kind of good in expectation how you would need to correct"
},
{
"start": 1433.44,
"end": 1440.12,
"text": " at the end and so on I would I would I would first of course test it I'm very"
},
{
"start": 1440.12,
"end": 1445.6399999999999,
"text": " interested to see tests on larger more complex problems but from this I'm a bit"
},
{
"start": 1445.64,
"end": 1453.44,
"text": " skeptical I'm sorry yeah so they they show I mean they show that on these"
},
{
"start": 1453.44,
"end": 1457.3600000000001,
"text": " states that it clearly helps clearly speeds up the training and that's of"
},
{
"start": 1457.3600000000001,
"end": 1461.8400000000001,
"text": " course that's already a good good thing and they do the required experiments"
},
{
"start": 1461.8400000000001,
"end": 1466.5200000000002,
"text": " they do the ablation studies on these data sets and so on so you can see here"
},
{
"start": 1466.5200000000002,
"end": 1472.76,
"text": " for example on these first graphics on all the data sets see clearly goes down"
},
{
"start": 1472.76,
"end": 1479.4,
"text": " as you introduce the more sophisticated algorithms but again you can see on the"
},
{
"start": 1479.4,
"end": 1486.28,
"text": " hard data set it doesn't go down as much all right but they do discuss this"
},
{
"start": 1486.28,
"end": 1491.16,
"text": " they're really fair to themselves they do risk they discuss this in their paper"
},
{
"start": 1491.16,
"end": 1496.68,
"text": " of how you know how practical this is and so on and what they what else they"
},
{
"start": 1496.68,
"end": 1501.92,
"text": " tried and didn't work and and that's a I think that it's a really good paper in"
},
{
"start": 1501.92,
"end": 1506.24,
"text": " itself and it's a really good investigation all right so that was it"
},
{
"start": 1506.24,
"end": 1532.96,
"text": " for me have a fun day bye bye"
}
] |
MIEA8azwu1k | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | DEEP LEARNING MEME REVIEW - Episode 1 | [
"Comedy"
] | [
"deep learning",
"memes",
"meme review",
"artificial intelligence",
"review",
"discussion",
"reaction",
"ai",
"machine learning",
"ml",
"dnn",
"gpu",
"deep neural network",
"ml memes",
"deep learning memes",
"machine learning memes",
"funny",
"gpus",
"classifier",
"hinton",
"turing award",
"bert",
"xlnet",
"optimization",
"error rate",
"culture",
"community",
"research"
] | The wait is finally over! Antonio and I discuss the best, funniest and dankest memes of the machine learning world. Join us for a laugh! | What? You haven't done memes before? No. Don't you have this show on YouTube when you review memes and stuff? No. You haven't? What is that? I think that's an entirely new concept. We're just gonna steal this concept from PewDiePie. Okay. But first actual meme review deep learning theme. Welcome. I'm joined by Antonio who is a bit of a memester himself. And today we're just gonna kind of look at deep learning memes. Nice. Let's jump in. So. Oh no, that's a paper. That's the meme. That is code. Okay. Being a DL researcher is not stress at all. 26. That is incredible how he says like, but now, oh, I already, I always knew that it worked. Of course. Yeah, yeah. There was no other way. There was no AI winter or anything. This was, this was always, Tep Hinton is so cool. Yeah. All right. Nice. Next meme. Next meme. I guess my brain is just really big. Oh, what else is really big? I thought you never asked. I agree. Gradient update on the edge of a really steep cliff. Big gradients are always good. I mean, look at that. Why wouldn't you want to land over there? Yeah, yeah, it's perfect. It seems much more interesting than down there. So perfect. I guess it's an, oh, minus seven over four. Wow. That's a small epsilon. Very small epsilon. Yes. Almost optimal. Crazy. Take the scientist when he sees a new problem. Classifier fit. This is, this is the old days. The old days, yes. Of scikit-learn. It still works pretty well. No, we must use deep learning for everything. Oh, sorry. No, no, sorry. Let's just look at the next meme, please. I don't know this template. This is a cool template. Yeah, it's a good template. NLP researchers BERT and then XLNet. What is XLNet? So XLNet is BERT just trained differently. Okay. And it costs like 10 times more to train it. Okay. And it's a bit better. How much does it cost electricity? Why? So people have calculated this to train one XLNet costs about 250K. It's insane. But does it work 1% better? It's like, that is like five PhD students. That's almost as good a language model as XLNet. And how much is better than BERT? A bit. A bit? Oh, a bit. A bit. That's all that counts. Wow. State of the art. Search archive for preprint. Search GitHub for code. Ask random idiots on Facebook. Me. Go. Let's go, Burbus. Go. In some ways, actually, it is simpler to publish something on archive and not being completely like people just saying, oh, you're an idiot and stuff like that. Because we've probably got unnoticed. Probably gets unnoticed, right? Yeah. On Facebook, it doesn't get unnoticed. Yeah, that's a real peer review. Exactly. If you are in a very good meme page on Facebook about deep learning, you're going to get wrecked. Yes, exactly. That's not going to happen. This software engineer designed a chat board to chat with his girlfriend while he's busy at work. However, the girl eventually got suspicious over the speed she was receiving messages from her boyfriend. Modern problems require modern solutions. But also like pretty good chat board. Got suspicious with the timing. Yeah. And now for the actual content. Well, what fashion companies try to sell us. What we really want. Fashion MNIST. Fashion MNIST is the new cool thing. So cool. Does anyone use it? I use it. Cool. By the way, I found a huge saddle point. Nice. MNIST. Wow. Huge saddle. It is not very MNIST. Where is it? Places. How much accuracy do you get on fashion MNIST? 
Like as MNIST. Because it's so easy. Like it's basically as MNIST. I don't know. I'm not a fashion person. So I don't know what to call this. What? What? This? This is a pants sweat. Me and the boys after using dropouts. Me and the boys. Also, I don't know where they come from. Where do they come from? I don't know. Some comic. They are so, so beautiful. Are you still watching machine learning tutorials on YouTube? Did you check my internet history? Why can't you watch porn like a normal child? I'm addicted. Andrew NG? I'm addicted. What is this Andrew NG? I must use more Keras code. Yes. Please. What is wrong with you? Because Andrew NG, boy, I don't know. But I understand that it makes you comfortable. And respected and loved. He does. He says it's okay if I don't understand everything. Whereas in porn it's completely different. It's not okay. I'm really with my notes trying to follow the plot. Wait, what was the plot? Why? When your binary classifier predicts 51% accuracy. It ain't much, but it's honest work. That's what you want to get. Better than random. Exactly. Just change your random seed until you get 51%. Your method works. Yes, exactly. And also like, you know about in finance, but it's actually state of the art, right? In what? In finance. Prediction of the last, if you have a profit time series, if you predict the next time point as the last time point, that's probably the best thing you can do. I'm going to switch my PhD topic. Yeah, and also like some people with their fancy methods do worse. Because they say, yeah, because of this and that and then it's just to be, and then... Because it's just like, you just predict whatever was there and you're good. Okay, next meme. Next meme. Deep learning research rather than video. Cheap view, cheap view, cheap view. Oh, damn. Too bad I don't use cheap views. I will start though. You know this Math Lab Deep Learning toolbox? Yeah. Recently they introduced neuronal stuff with the networks and the graphs, which is basically as the brain. Yeah. And so basically you can learn stuff with Math Lab. With Math Lab? Exactly. Wow. Exactly. Can you learn to uninstall it? I look like all you need. No, you don't look like an Envy that hide and not. Because that's what we really want. Exactly. Me, I sure hope my model's error rate isn't super high. Error rate. Sorry. So sorry. Optimization is hard. Yeah, it's hard. Just hard. You do as fancy methods and then there's SGD. Yeah. That beats you every time. Yeah. Bastard. Me and the boys about to receive the Turing Award. Me and the boys. So fancy. Yeah. Look at them. It's probably thinking about capsules. Yeah. Oh, oh. But wasn't it like two years ago? Yeah. Yeah. What is the state of that? It's still the same. He's still thinking about it. Okay. I didn't get what capsules are. To be honest. Well, they sort of are different. Oh, they're different? Yeah. Okay. Yeah. They're not like the same. Ah, I see, I see. So that means that they work in another way. Yes, but only kind of. So to do other things. Sort of. Sort of. I see. But then they do it on the same tasks. Ah, I see. No, they're like trying to abstract concepts into these capsules and then the capsules can route the information to other capsules dynamically. Yeah. Does it work? No, I don't think so. Right? Kind of. It kind of works. Yeah. Ah, why are people... Okay. Like you can make it do something. Okay. Capsules. Capsules. And like meme. My desires are unconventional. So show me. RTX 2060, 2070 and 2080. Ah, yeah. 
No, don't let me look at them. I want them so badly. I just can't. Use a transformer instead of an LSTM. I have failed you. You again. You again. No. RNNs must come back. Yes, exactly. They're too touring complete. Not. Assistant, remember this location. Okay, I remember that. What did I ask you to remember? I remember what you told me. This location. What does this location mean? Visitor top results. Assistant, machines are about to take over the world. Definitely. This is this intelligence. Yeah, exactly. Yeah, we must be very, very careful. Also with jobs and stuff. What? What? You finished the memes? Not yet. There's one more. So I have to preface this. So basically this is a... So the robot is supposed to get the ball to the target. And in one setting it has a reference motion of a human doing the same thing. So it learns to learn from that. And then for comparison, there is no reference motion. And it just learns from scratch. So first is with and three times and then without. With reference motion. Nice. Nice, yeah. Wow. And now without. Get the ball there. Get it there. Get it there. It's so cute. Yes, yes. We are AI Doom. Yes, done already. The damage is done. Yeah, I mean I can see an army of robots. Their arms. Their guns. They just take the bullet and go like... All right, this was it for episode one of Deep Learning Meme Review. Thanks so much for being here with us. And have a good time. | [
{
"start": 0,
"end": 2,
"text": " What? You haven't done memes before?"
},
{
"start": 2,
"end": 2.5,
"text": " No."
},
{
"start": 2.5,
"end": 5,
"text": " Don't you have this show on YouTube when you review memes and stuff?"
},
{
"start": 5,
"end": 5.5,
"text": " No."
},
{
"start": 5.5,
"end": 6,
"text": " You haven't?"
},
{
"start": 6,
"end": 9,
"text": " What is that? I think that's an entirely new concept."
},
{
"start": 13,
"end": 16,
"text": " We're just gonna steal this concept from PewDiePie."
},
{
"start": 16,
"end": 17,
"text": " Okay."
},
{
"start": 17,
"end": 21,
"text": " But first actual meme review deep learning theme."
},
{
"start": 21,
"end": 22,
"text": " Welcome."
},
{
"start": 22,
"end": 34,
"text": " I'm joined by Antonio who is a bit of a memester himself."
},
{
"start": 34,
"end": 39,
"text": " And today we're just gonna kind of look at deep learning memes."
},
{
"start": 39,
"end": 40,
"text": " Nice."
},
{
"start": 40,
"end": 41,
"text": " Let's jump in."
},
{
"start": 42,
"end": 43,
"text": " So."
},
{
"start": 43,
"end": 44,
"text": " Oh no, that's a paper."
},
{
"start": 45,
"end": 46,
"text": " That's the meme."
},
{
"start": 46,
"end": 47,
"text": " That is code."
},
{
"start": 47,
"end": 48,
"text": " Okay."
},
{
"start": 48,
"end": 52,
"text": " Being a DL researcher is not stress at all."
},
{
"start": 54,
"end": 56,
"text": " 26."
},
{
"start": 59,
"end": 64,
"text": " That is incredible how he says like, but now, oh, I already, I always knew that it worked."
},
{
"start": 64,
"end": 65,
"text": " Of course."
},
{
"start": 65,
"end": 66,
"text": " Yeah, yeah."
},
{
"start": 66,
"end": 67,
"text": " There was no other way."
},
{
"start": 67,
"end": 70,
"text": " There was no AI winter or anything."
},
{
"start": 70,
"end": 73,
"text": " This was, this was always, Tep Hinton is so cool."
},
{
"start": 73,
"end": 74,
"text": " Yeah."
},
{
"start": 75,
"end": 76,
"text": " All right."
},
{
"start": 76,
"end": 77,
"text": " Nice."
},
{
"start": 77,
"end": 78,
"text": " Next meme."
},
{
"start": 78,
"end": 79,
"text": " Next meme."
},
{
"start": 79,
"end": 82,
"text": " I guess my brain is just really big."
},
{
"start": 82,
"end": 84,
"text": " Oh, what else is really big?"
},
{
"start": 84,
"end": 86,
"text": " I thought you never asked."
},
{
"start": 86,
"end": 87,
"text": " I agree."
},
{
"start": 87,
"end": 90,
"text": " Gradient update on the edge of a really steep cliff."
},
{
"start": 93,
"end": 95,
"text": " Big gradients are always good."
},
{
"start": 95,
"end": 96,
"text": " I mean, look at that."
},
{
"start": 96,
"end": 98,
"text": " Why wouldn't you want to land over there?"
},
{
"start": 98,
"end": 99,
"text": " Yeah, yeah, it's perfect."
},
{
"start": 99,
"end": 101,
"text": " It seems much more interesting than down there."
},
{
"start": 101,
"end": 102,
"text": " So perfect."
},
{
"start": 102,
"end": 104,
"text": " I guess it's an, oh, minus seven over four."
},
{
"start": 104,
"end": 105,
"text": " Wow."
},
{
"start": 105,
"end": 107,
"text": " That's a small epsilon."
},
{
"start": 107,
"end": 108,
"text": " Very small epsilon."
},
{
"start": 108,
"end": 109,
"text": " Yes."
},
{
"start": 109,
"end": 110,
"text": " Almost optimal."
},
{
"start": 110,
"end": 111,
"text": " Crazy."
},
{
"start": 111,
"end": 115,
"text": " Take the scientist when he sees a new problem."
},
{
"start": 116,
"end": 118,
"text": " Classifier fit."
},
{
"start": 120,
"end": 122,
"text": " This is, this is the old days."
},
{
"start": 122,
"end": 123,
"text": " The old days, yes."
},
{
"start": 123,
"end": 124,
"text": " Of scikit-learn."
},
{
"start": 124,
"end": 127,
"text": " It still works pretty well."
},
{
"start": 128,
"end": 130,
"text": " No, we must use deep learning for everything."
},
{
"start": 130,
"end": 131,
"text": " Oh, sorry."
},
{
"start": 131,
"end": 132,
"text": " No, no, sorry."
},
{
"start": 132,
"end": 133,
"text": " Let's just look at the next meme, please."
},
{
"start": 133,
"end": 134,
"text": " I don't know this template."
},
{
"start": 134,
"end": 135,
"text": " This is a cool template."
},
{
"start": 135,
"end": 136,
"text": " Yeah, it's a good template."
},
{
"start": 136,
"end": 140,
"text": " NLP researchers BERT and then XLNet."
},
{
"start": 140,
"end": 141,
"text": " What is XLNet?"
},
{
"start": 141,
"end": 144,
"text": " So XLNet is BERT just trained differently."
},
{
"start": 144,
"end": 145,
"text": " Okay."
},
{
"start": 145,
"end": 148,
"text": " And it costs like 10 times more to train it."
},
{
"start": 148,
"end": 149,
"text": " Okay."
},
{
"start": 149,
"end": 151,
"text": " And it's a bit better."
},
{
"start": 151,
"end": 153,
"text": " How much does it cost electricity?"
},
{
"start": 153,
"end": 154,
"text": " Why?"
},
{
"start": 154,
"end": 160,
"text": " So people have calculated this to train one XLNet costs about 250K."
},
{
"start": 160,
"end": 163,
"text": " It's insane."
},
{
"start": 163,
"end": 166,
"text": " But does it work 1% better?"
},
{
"start": 166,
"end": 169,
"text": " It's like, that is like five PhD students."
},
{
"start": 169,
"end": 173,
"text": " That's almost as good a language model as XLNet."
},
{
"start": 173,
"end": 175,
"text": " And how much is better than BERT?"
},
{
"start": 175,
"end": 176,
"text": " A bit."
},
{
"start": 176,
"end": 177,
"text": " A bit?"
},
{
"start": 177,
"end": 178,
"text": " Oh, a bit."
},
{
"start": 178,
"end": 179,
"text": " A bit."
},
{
"start": 179,
"end": 180,
"text": " That's all that counts."
},
{
"start": 180,
"end": 181,
"text": " Wow."
},
{
"start": 181,
"end": 182,
"text": " State of the art."
},
{
"start": 182,
"end": 184,
"text": " Search archive for preprint."
},
{
"start": 184,
"end": 186,
"text": " Search GitHub for code."
},
{
"start": 186,
"end": 190,
"text": " Ask random idiots on Facebook."
},
{
"start": 190,
"end": 191,
"text": " Me."
},
{
"start": 191,
"end": 192,
"text": " Go."
},
{
"start": 192,
"end": 193,
"text": " Let's go, Burbus."
},
{
"start": 193,
"end": 194,
"text": " Go."
},
{
"start": 194,
"end": 200,
"text": " In some ways, actually, it is simpler to publish something on archive and not being completely"
},
{
"start": 200,
"end": 203,
"text": " like people just saying, oh, you're an idiot and stuff like that."
},
{
"start": 203,
"end": 205,
"text": " Because we've probably got unnoticed."
},
{
"start": 205,
"end": 207,
"text": " Probably gets unnoticed, right?"
},
{
"start": 207,
"end": 208,
"text": " Yeah."
},
{
"start": 208,
"end": 209,
"text": " On Facebook, it doesn't get unnoticed."
},
{
"start": 209,
"end": 211,
"text": " Yeah, that's a real peer review."
},
{
"start": 211,
"end": 212,
"text": " Exactly."
},
{
"start": 212,
"end": 217,
"text": " If you are in a very good meme page on Facebook about deep learning, you're going to get wrecked."
},
{
"start": 217,
"end": 218,
"text": " Yes, exactly."
},
{
"start": 218,
"end": 220,
"text": " That's not going to happen."
},
{
"start": 220,
"end": 225,
"text": " This software engineer designed a chat board to chat with his girlfriend while he's busy"
},
{
"start": 225,
"end": 226,
"text": " at work."
},
{
"start": 226,
"end": 231,
"text": " However, the girl eventually got suspicious over the speed she was receiving messages"
},
{
"start": 231,
"end": 232,
"text": " from her boyfriend."
},
{
"start": 232,
"end": 237,
"text": " Modern problems require modern solutions."
},
{
"start": 237,
"end": 239,
"text": " But also like pretty good chat board."
},
{
"start": 239,
"end": 241,
"text": " Got suspicious with the timing."
},
{
"start": 241,
"end": 242,
"text": " Yeah."
},
{
"start": 242,
"end": 244,
"text": " And now for the actual content."
},
{
"start": 244,
"end": 249,
"text": " Well, what fashion companies try to sell us."
},
{
"start": 249,
"end": 250,
"text": " What we really want."
},
{
"start": 250,
"end": 251,
"text": " Fashion MNIST."
},
{
"start": 251,
"end": 254,
"text": " Fashion MNIST is the new cool thing."
},
{
"start": 254,
"end": 255,
"text": " So cool."
},
{
"start": 255,
"end": 256,
"text": " Does anyone use it?"
},
{
"start": 256,
"end": 257,
"text": " I use it."
},
{
"start": 257,
"end": 258,
"text": " Cool."
},
{
"start": 258,
"end": 262,
"text": " By the way, I found a huge saddle point."
},
{
"start": 262,
"end": 263,
"text": " Nice."
},
{
"start": 263,
"end": 264,
"text": " MNIST."
},
{
"start": 264,
"end": 265,
"text": " Wow."
},
{
"start": 265,
"end": 266,
"text": " Huge saddle."
},
{
"start": 266,
"end": 267,
"text": " It is not very MNIST."
},
{
"start": 267,
"end": 268,
"text": " Where is it?"
},
{
"start": 268,
"end": 269,
"text": " Places."
},
{
"start": 269,
"end": 272,
"text": " How much accuracy do you get on fashion MNIST?"
},
{
"start": 272,
"end": 273,
"text": " Like as MNIST."
},
{
"start": 273,
"end": 274,
"text": " Because it's so easy."
},
{
"start": 274,
"end": 277,
"text": " Like it's basically as MNIST."
},
{
"start": 277,
"end": 278,
"text": " I don't know."
},
{
"start": 278,
"end": 279,
"text": " I'm not a fashion person."
},
{
"start": 279,
"end": 280,
"text": " So I don't know what to call this."
},
{
"start": 280,
"end": 281,
"text": " What?"
},
{
"start": 281,
"end": 282,
"text": " What?"
},
{
"start": 282,
"end": 283,
"text": " This?"
},
{
"start": 283,
"end": 284,
"text": " This is a pants sweat."
},
{
"start": 284,
"end": 289,
"text": " Me and the boys after using dropouts."
},
{
"start": 289,
"end": 292,
"text": " Me and the boys."
},
{
"start": 292,
"end": 294,
"text": " Also, I don't know where they come from."
},
{
"start": 294,
"end": 295,
"text": " Where do they come from?"
},
{
"start": 295,
"end": 296,
"text": " I don't know."
},
{
"start": 296,
"end": 297,
"text": " Some comic."
},
{
"start": 297,
"end": 303,
"text": " They are so, so beautiful."
},
{
"start": 303,
"end": 307,
"text": " Are you still watching machine learning tutorials on YouTube?"
},
{
"start": 307,
"end": 309,
"text": " Did you check my internet history?"
},
{
"start": 309,
"end": 313,
"text": " Why can't you watch porn like a normal child?"
},
{
"start": 313,
"end": 314,
"text": " I'm addicted."
},
{
"start": 314,
"end": 315,
"text": " Andrew NG?"
},
{
"start": 315,
"end": 316,
"text": " I'm addicted."
},
{
"start": 316,
"end": 319,
"text": " What is this Andrew NG?"
},
{
"start": 319,
"end": 321,
"text": " I must use more Keras code."
},
{
"start": 321,
"end": 322,
"text": " Yes."
},
{
"start": 322,
"end": 323,
"text": " Please."
},
{
"start": 323,
"end": 325,
"text": " What is wrong with you?"
},
{
"start": 325,
"end": 327,
"text": " Because Andrew NG, boy, I don't know."
},
{
"start": 327,
"end": 330,
"text": " But I understand that it makes you comfortable."
},
{
"start": 330,
"end": 331,
"text": " And respected and loved."
},
{
"start": 331,
"end": 332,
"text": " He does."
},
{
"start": 332,
"end": 334,
"text": " He says it's okay if I don't understand everything."
},
{
"start": 334,
"end": 337,
"text": " Whereas in porn it's completely different."
},
{
"start": 337,
"end": 338,
"text": " It's not okay."
},
{
"start": 338,
"end": 342,
"text": " I'm really with my notes trying to follow the plot."
},
{
"start": 342,
"end": 344,
"text": " Wait, what was the plot?"
},
{
"start": 344,
"end": 345,
"text": " Why?"
},
{
"start": 345,
"end": 349,
"text": " When your binary classifier predicts 51% accuracy."
},
{
"start": 349,
"end": 353,
"text": " It ain't much, but it's honest work."
},
{
"start": 353,
"end": 354,
"text": " That's what you want to get."
},
{
"start": 354,
"end": 355,
"text": " Better than random."
},
{
"start": 355,
"end": 356,
"text": " Exactly."
},
{
"start": 356,
"end": 359,
"text": " Just change your random seed until you get 51%."
},
{
"start": 359,
"end": 361,
"text": " Your method works."
},
{
"start": 361,
"end": 362,
"text": " Yes, exactly."
},
{
"start": 362,
"end": 366,
"text": " And also like, you know about in finance, but it's actually state of the art, right?"
},
{
"start": 366,
"end": 367,
"text": " In what?"
},
{
"start": 367,
"end": 368,
"text": " In finance."
},
{
"start": 368,
"end": 373,
"text": " Prediction of the last, if you have a profit time series, if you predict the next time"
},
{
"start": 373,
"end": 378,
"text": " point as the last time point, that's probably the best thing you can do."
},
{
"start": 378,
"end": 381,
"text": " I'm going to switch my PhD topic."
},
{
"start": 381,
"end": 385,
"text": " Yeah, and also like some people with their fancy methods do worse."
},
{
"start": 385,
"end": 390,
"text": " Because they say, yeah, because of this and that and then it's just to be, and then..."
},
{
"start": 390,
"end": 394,
"text": " Because it's just like, you just predict whatever was there and you're good."
},
{
"start": 394,
"end": 396,
"text": " Okay, next meme."
},
{
"start": 396,
"end": 397,
"text": " Next meme."
},
{
"start": 397,
"end": 400,
"text": " Deep learning research rather than video."
},
{
"start": 400,
"end": 402,
"text": " Cheap view, cheap view, cheap view."
},
{
"start": 402,
"end": 405,
"text": " Oh, damn."
},
{
"start": 405,
"end": 407,
"text": " Too bad I don't use cheap views."
},
{
"start": 407,
"end": 408,
"text": " I will start though."
},
{
"start": 408,
"end": 411,
"text": " You know this Math Lab Deep Learning toolbox?"
},
{
"start": 411,
"end": 412,
"text": " Yeah."
},
{
"start": 412,
"end": 420,
"text": " Recently they introduced neuronal stuff with the networks and the graphs, which is basically"
},
{
"start": 420,
"end": 421,
"text": " as the brain."
},
{
"start": 421,
"end": 422,
"text": " Yeah."
},
{
"start": 422,
"end": 426,
"text": " And so basically you can learn stuff with Math Lab."
},
{
"start": 426,
"end": 427,
"text": " With Math Lab?"
},
{
"start": 427,
"end": 428,
"text": " Exactly."
},
{
"start": 428,
"end": 429,
"text": " Wow."
},
{
"start": 429,
"end": 430,
"text": " Exactly."
},
{
"start": 430,
"end": 431,
"text": " Can you learn to uninstall it?"
},
{
"start": 431,
"end": 433,
"text": " I look like all you need."
},
{
"start": 433,
"end": 438,
"text": " No, you don't look like an Envy that hide and not."
},
{
"start": 438,
"end": 440,
"text": " Because that's what we really want."
},
{
"start": 440,
"end": 441,
"text": " Exactly."
},
{
"start": 441,
"end": 447,
"text": " Me, I sure hope my model's error rate isn't super high."
},
{
"start": 447,
"end": 449,
"text": " Error rate."
},
{
"start": 449,
"end": 450,
"text": " Sorry."
},
{
"start": 450,
"end": 453,
"text": " So sorry."
},
{
"start": 453,
"end": 455,
"text": " Optimization is hard."
},
{
"start": 455,
"end": 457,
"text": " Yeah, it's hard."
},
{
"start": 457,
"end": 458,
"text": " Just hard."
},
{
"start": 458,
"end": 461,
"text": " You do as fancy methods and then there's SGD."
},
{
"start": 461,
"end": 462,
"text": " Yeah."
},
{
"start": 462,
"end": 463,
"text": " That beats you every time."
},
{
"start": 463,
"end": 464,
"text": " Yeah."
},
{
"start": 464,
"end": 465,
"text": " Bastard."
},
{
"start": 465,
"end": 469,
"text": " Me and the boys about to receive the Turing Award."
},
{
"start": 469,
"end": 471,
"text": " Me and the boys."
},
{
"start": 471,
"end": 472,
"text": " So fancy."
},
{
"start": 472,
"end": 473,
"text": " Yeah."
},
{
"start": 473,
"end": 474,
"text": " Look at them."
},
{
"start": 474,
"end": 476,
"text": " It's probably thinking about capsules."
},
{
"start": 476,
"end": 477,
"text": " Yeah."
},
{
"start": 477,
"end": 478,
"text": " Oh, oh."
},
{
"start": 478,
"end": 480,
"text": " But wasn't it like two years ago?"
},
{
"start": 480,
"end": 481,
"text": " Yeah."
},
{
"start": 481,
"end": 482,
"text": " Yeah."
},
{
"start": 482,
"end": 483,
"text": " What is the state of that?"
},
{
"start": 483,
"end": 484,
"text": " It's still the same."
},
{
"start": 484,
"end": 486,
"text": " He's still thinking about it."
},
{
"start": 486,
"end": 487,
"text": " Okay."
},
{
"start": 487,
"end": 489,
"text": " I didn't get what capsules are."
},
{
"start": 489,
"end": 490,
"text": " To be honest."
},
{
"start": 490,
"end": 493,
"text": " Well, they sort of are different."
},
{
"start": 493,
"end": 495,
"text": " Oh, they're different?"
},
{
"start": 495,
"end": 496,
"text": " Yeah."
},
{
"start": 496,
"end": 497,
"text": " Okay."
},
{
"start": 497,
"end": 498,
"text": " Yeah."
},
{
"start": 498,
"end": 499,
"text": " They're not like the same."
},
{
"start": 499,
"end": 501,
"text": " Ah, I see, I see."
},
{
"start": 501,
"end": 506,
"text": " So that means that they work in another way."
},
{
"start": 506,
"end": 507,
"text": " Yes, but only kind of."
},
{
"start": 507,
"end": 508,
"text": " So to do other things."
},
{
"start": 508,
"end": 509,
"text": " Sort of."
},
{
"start": 509,
"end": 510,
"text": " Sort of."
},
{
"start": 510,
"end": 511,
"text": " I see."
},
{
"start": 511,
"end": 513,
"text": " But then they do it on the same tasks."
},
{
"start": 513,
"end": 515,
"text": " Ah, I see."
},
{
"start": 515,
"end": 522,
"text": " No, they're like trying to abstract concepts into these capsules and then the capsules"
},
{
"start": 522,
"end": 525,
"text": " can route the information to other capsules dynamically."
},
{
"start": 525,
"end": 526,
"text": " Yeah."
},
{
"start": 526,
"end": 527,
"text": " Does it work?"
},
{
"start": 527,
"end": 528,
"text": " No, I don't think so."
},
{
"start": 528,
"end": 529,
"text": " Right?"
},
{
"start": 529,
"end": 530,
"text": " Kind of."
},
{
"start": 530,
"end": 531,
"text": " It kind of works."
},
{
"start": 531,
"end": 532,
"text": " Yeah."
},
{
"start": 532,
"end": 533,
"text": " Ah, why are people..."
},
{
"start": 533,
"end": 534,
"text": " Okay."
},
{
"start": 534,
"end": 536,
"text": " Like you can make it do something."
},
{
"start": 536,
"end": 537,
"text": " Okay."
},
{
"start": 537,
"end": 538,
"text": " Capsules."
},
{
"start": 538,
"end": 539,
"text": " Capsules."
},
{
"start": 539,
"end": 540,
"text": " And like meme."
},
{
"start": 540,
"end": 543,
"text": " My desires are unconventional."
},
{
"start": 543,
"end": 547,
"text": " So show me."
},
{
"start": 547,
"end": 552,
"text": " RTX 2060, 2070 and 2080."
},
{
"start": 552,
"end": 553,
"text": " Ah, yeah."
},
{
"start": 553,
"end": 555,
"text": " No, don't let me look at them."
},
{
"start": 555,
"end": 557,
"text": " I want them so badly."
},
{
"start": 557,
"end": 560,
"text": " I just can't."
},
{
"start": 560,
"end": 564,
"text": " Use a transformer instead of an LSTM."
},
{
"start": 564,
"end": 566,
"text": " I have failed you."
},
{
"start": 566,
"end": 567,
"text": " You again."
},
{
"start": 567,
"end": 568,
"text": " You again."
},
{
"start": 568,
"end": 569,
"text": " No."
},
{
"start": 569,
"end": 572,
"text": " RNNs must come back."
},
{
"start": 572,
"end": 573,
"text": " Yes, exactly."
},
{
"start": 573,
"end": 576,
"text": " They're too touring complete."
},
{
"start": 576,
"end": 577,
"text": " Not."
},
{
"start": 577,
"end": 580,
"text": " Assistant, remember this location."
},
{
"start": 580,
"end": 582,
"text": " Okay, I remember that."
},
{
"start": 582,
"end": 584,
"text": " What did I ask you to remember?"
},
{
"start": 584,
"end": 586,
"text": " I remember what you told me."
},
{
"start": 586,
"end": 588,
"text": " This location."
},
{
"start": 588,
"end": 591,
"text": " What does this location mean?"
},
{
"start": 591,
"end": 593,
"text": " Visitor top results."
},
{
"start": 593,
"end": 597,
"text": " Assistant, machines are about to take over the world."
},
{
"start": 597,
"end": 598,
"text": " Definitely."
},
{
"start": 598,
"end": 600,
"text": " This is this intelligence."
},
{
"start": 600,
"end": 602,
"text": " Yeah, exactly."
},
{
"start": 602,
"end": 604,
"text": " Yeah, we must be very, very careful."
},
{
"start": 604,
"end": 606,
"text": " Also with jobs and stuff."
},
{
"start": 606,
"end": 608,
"text": " What?"
},
{
"start": 608,
"end": 609,
"text": " What?"
},
{
"start": 609,
"end": 611,
"text": " You finished the memes?"
},
{
"start": 611,
"end": 612,
"text": " Not yet."
},
{
"start": 612,
"end": 613,
"text": " There's one more."
},
{
"start": 613,
"end": 616,
"text": " So I have to preface this."
},
{
"start": 616,
"end": 618,
"text": " So basically this is a..."
},
{
"start": 618,
"end": 622,
"text": " So the robot is supposed to get the ball to the target."
},
{
"start": 622,
"end": 628,
"text": " And in one setting it has a reference motion of a human doing the same thing."
},
{
"start": 628,
"end": 630,
"text": " So it learns to learn from that."
},
{
"start": 630,
"end": 634,
"text": " And then for comparison, there is no reference motion."
},
{
"start": 634,
"end": 636,
"text": " And it just learns from scratch."
},
{
"start": 636,
"end": 640,
"text": " So first is with and three times and then without."
},
{
"start": 640,
"end": 642,
"text": " With reference motion."
},
{
"start": 642,
"end": 643,
"text": " Nice."
},
{
"start": 643,
"end": 644,
"text": " Nice, yeah."
},
{
"start": 644,
"end": 645,
"text": " Wow."
},
{
"start": 645,
"end": 647,
"text": " And now without."
},
{
"start": 651,
"end": 652,
"text": " Get the ball there."
},
{
"start": 652,
"end": 653,
"text": " Get it there."
},
{
"start": 653,
"end": 654,
"text": " Get it there."
},
{
"start": 654,
"end": 657,
"text": " It's so cute."
},
{
"start": 657,
"end": 660,
"text": " Yes, yes."
},
{
"start": 660,
"end": 662,
"text": " We are AI Doom."
},
{
"start": 662,
"end": 664,
"text": " Yes, done already."
},
{
"start": 664,
"end": 665,
"text": " The damage is done."
},
{
"start": 665,
"end": 668,
"text": " Yeah, I mean I can see an army of robots."
},
{
"start": 668,
"end": 670,
"text": " Their arms."
},
{
"start": 670,
"end": 671,
"text": " Their guns."
},
{
"start": 671,
"end": 673,
"text": " They just take the bullet and go like..."
},
{
"start": 676,
"end": 680,
"text": " All right, this was it for episode one of Deep Learning Meme Review."
},
{
"start": 680,
"end": 682,
"text": " Thanks so much for being here with us."
},
{
"start": 682,
"end": 684,
"text": " And have a good time."
}
] |
nXGHJTtFYRU | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Dynamic Routing Between Capsules | [
"Science & Technology"
] | [
"machine learning",
"deep learning",
"capsules",
"capsule networks",
"google brain",
"hinton",
"jeff hinton",
"geoff hinton",
"routing",
"neural networks",
"convolution",
"convolutional neural networks",
"deep neural networks",
"cnns",
"mnist",
"multimnist",
"disentanglement",
"architecture",
"reconstruction",
"alternative",
"dnn",
"ml",
"ai",
"artificial intelligence",
"brain",
"visual system",
"classifier",
"image",
"nonlinearity",
"entities",
"objects",
"capsule",
"network"
] | Geoff Hinton's next big idea! Capsule Networks are an alternative way of implementing neural networks by dividing each layer into capsules. Each capsule is responsible for detecting the presence and properties of one particular entity in the input sample. This information is then allocated dynamically to higher-level capsules in a novel and unconventional routing scheme. While Capsule Networks are still in their infancy, they are an exciting and promising new direction.
Abstract:
A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discrimininatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.
Authors: Sara Sabour, Nicholas Frosst, Geoffrey E Hinton
https://arxiv.org/abs/1710.09829
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Minds: https://www.minds.com/ykilcher
BitChute: https://www.bitchute.com/channel/10a5ui845DOJ/ | Hi there! Today we're looking at Dynamic Routing Between Capsules by Sara Sabour, Nicholas Frosst and Geoffrey Hinton of Google Brain. This paper is a bit older, but it made quite the impact at the time, and so we'll go through it. I find this a pretty hard paper to read and kind of understand, because a lot of things are very implicit and hand-wavy. So we'll go through it and try to get the best out of it, try to explain what capsules are, what they do and how they stack up against current networks. So a capsule network, in essence, is a new type of neural network made of capsules. And here it says a capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. Kind of cryptic, but what they're saying is that in a capsule network, let me try to draw one here actually, you have what's called capsules. Capsules you can imagine as just little blobs of things, right? And they're also ordered in layers, in this case. Let's actually leave away the second layer. And each of these capsules will correspond to an entity in the input. Let's say the input is an image. So somewhere here there is an image, right? Then maybe this capsule here will be responsible for detecting: is there a wall in the image. And this one will be responsible for detecting: is there a roof. This one will be: is there a door. And this one will be responsible for detecting: is there a lake in the image, right? So now each of these capsules can, on one hand, either be high or low. So if you imagine now a situation where wall is high, roof is high, door is high and lake is low, it means probably the image has a house on it, right? But second of all, not only can it predict whether or not a given entity is present in an image, but the individual capsules are also responsible for encoding the exact way or shape or form that this entity takes. So the wall could have different aspects such as color: green. It could have size: tall. It could have orientation: I don't know, vertical. Cool. Then roof could have angle, right? Angle: wide. So it's a wide roof or a flat roof, right? These are kind of attributes of these things that the capsules would also encode. So ultimately, what these capsules that they are proposing will output is a vector; the roof capsule here, for example, would output a vector. So the output of the roof capsule is, let me draw a coordinate system, a vector. Now the length of the vector, this norm here, will represent the probability that the roof is in the image. That there is a roof in the image, right? The roof is an element of this input image. This is simply the length, and the individual coordinates will encode these attributes. So this axis here, for example, could be the angle of the roof, and this axis could be the color. Let's say the angle is like some degree number that can be positive or negative. Maybe a roof can be like this, right? So in essence this is a flat roof and this is a very narrow-angle roof. So you can imagine something like this, and then the color could also maybe be parameterized on a one-dimensional axis. It can have more dimensions than two, I just can't draw more. 
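As a tiny toy sketch of that (my own example; the "roof" capsule and its two pose dimensions are made up for illustration, this is not code from the paper):
```python
import numpy as np

# Hypothetical 2-D output of a "roof" capsule: dimension 0 ~ roof angle, dimension 1 ~ roof color.
roof_vector = np.array([0.6, 0.3])

length = np.linalg.norm(roof_vector)        # ~0.67 -> how likely it is that a roof is present
direction = roof_vector / (length + 1e-9)   # unit vector -> the attributes (angle, color) of that roof

short_roof = 0.1 * direction                # same angle and color, but "probably no roof here"
print(length, np.linalg.norm(short_roof))   # ~0.67, 0.1
```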
So the depending on where this where this arrow now points the for example this vector here has the same probability that there is a roof in the image like if the output is this but the color will be different. The angle will be the same because they're roughly on the same this axis here but the color of this will encode a different different colored roof. And then if the vector is something like this a very short vector it will encode the same the same angle and color directions. So maybe I shouldn't say the position on the axis it's more like this angle and this this angle that encode the attributes. So the kind of the angular components if you will encode the attributes and the length encodes the probability. So this small vector has the same direction in terms of color and angle of the roof but it's much less probable much less likely. So this if the capsule outputs the little blue vector here it says well if there is a roof it's going to be this color in this angle but I'm really that really don't think there's a roof in this image. Whereas if it outputs the large green one then it says I'm pretty sure that there's a roof and it's going to be this angle and this this this angle and this color. Alright so that's that is what each capsule is supposed to do. Each capsule takes the input and outputs a vector that encodes if the entity that the capsule is responsible for is present in the image A and B what properties this entity has. And then we get to the point where there's the next layer of capsules. So the next layer of capsules takes information that each capsule here takes information from each capsule in the lower layer like like you're used to from your neural network and integrates this information and we'll talk about how this works. It integrates all of this information right all of these are vectors now that come from the lower integrates all of this information and again each capsule in this next layer is responsible for a entity. Now these entities in the higher layers are usually composite entities of the lower layers. So this one here could be responsible for house, this one could be responsible for national park, national park and this one could be responsible for beach or something like this right. And then each of these will integrate all of this information from the lower layers and then come up with their own output vector encoding whether or not a given entity is present in the in the image. Of course the house class will pick up if there is a door a roof and a wall in the image the house classes will pick up on that or that's how it's meant to work house class is meant to pick up on that and then itself output a large vector saying there's probably a house in this in this image. So each of these capsules in by itself is responsible for encoding the presence and attributes of a object or object part or entity or part of entity in the given input data. And of course the last layer here it will simply be your classification layer. So in the last layer you have as many capsules as you have classes in your classification task. So this is mainly for a classification task and then you can classify and you can kind of train the whole system like this. So how exactly this happens we'll see next. Alright so they make kind of analogies to the visual system and so on. We'll jump these you can everyone that does deep learning in some way is trying to to make that. We're rather going to the specifics of how these capsules work and how their specific suggestions for them. 
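Before we go into their specific implementation, just to sketch how that last classification layer would be read out (again a toy example of mine, with made-up numbers standing in for real capsule outputs):
```python
import numpy as np

# Pretend the final capsule layer produced one 16-dimensional vector per digit class.
rng = np.random.default_rng(0)
class_capsules = rng.normal(scale=0.2, size=(10, 16))   # stand-in for real capsule outputs

class_lengths = np.linalg.norm(class_capsules, axis=1)  # vector length per class capsule
predicted_digit = int(np.argmax(class_lengths))          # the longest vector wins
print(predicted_digit, class_lengths[predicted_digit])
```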
Note that they say this is in no way the only implementation of capsules. It's just kind of an example to show how one could do it. Alright so first of all they present their what you might call non-linearity. So their non-linearity what it needs to do is if you look at these capsule networks the outputs here the length of the outputs of these vectors right they're supposed to represent probabilities and as such they they need to be so here it roof this door maybe a vector like this wall maybe a vector like that. So initially we simply specify the output is a vector and in essence these capsules are implemented in much the same way like your classic neural network layer would be implemented. So each of these capsules will be in essence a neural network layer by itself that outputs a vector. There's nothing constraining the length of the vector initially so their non-linearity does constrain the vector to be of maximum length 1 and of minimum length 0. That's this non-linearity here. So S here is the unscaled output of the capsule and you can see here if the length of S gets close to 1 or sorry gets really large then this here becomes irrelevant. This whole term will be 1 and then the length of the final output of V here will be 1. Right so if this is very large then the the length of the scaled output will be 1 however if the if the length is really small of the original output so if this goes towards 0 then this becomes irrelevant this becomes irrelevant this will go towards 0 and the entire length will go towards 0. So this is kind of a nice way to scale these outputs always to be between length 0 and 1. Then next thing is so how this I find I find the the most complicated part right so we'll jump ahead actually to how a capsule's network is implemented and this is the the capsule network they implement so first it's an MNIST classifier you have an MNIST image here and it first goes through a simple convolutional layer that's that's nothing new this is a classic convolutional layer is there's 256 channels it has a 9 by 9 filters and stride 1 so it will output a 20 by 20 time by 256 tensor then each of these so each of the outputs here is sent to each of these capsules and now they're convolutional capsules so that makes it a bit more complicated but don't you know don't worry primarily about them being convolutional capsules the analogy is exactly as in a classic neural network you can implement these capsules as void-feed-forward capsules or as convolutional capsules and maybe also as transformer capsules I don't think anyone's done that all right there's a paper for you the so you'll send you'll send the output of this convolution layer to each capsule and then you have basically just two layer of capsules here the first layer consists of 32 what they call primary caps sorry the these 32 capsules each will output an eight dimensional vector and I'm simplifying here it's it's convolutional but they will just for simplest they will each output an eight dimensional vector right and these are exactly as we said before so each of these will be responsible ultimately for a given entity or part of entity being there like in MNIST this could be is there a little curve on the bottom left side right this might indicate the presence of a six or an eight something like this and then the these capsules here each is they represented as a row so each of these rows here is a capsule and we have ten of these and these are your simply your final classification capsules so each capsule is responsible for 
indicating the presence or absence of one particular class of digits so this will be of a one of a two of a three of a four and so on of a zero I guess somewhere as well so these are ten capsules and the question is how does information go from a capsule here from the output of a capsule or to any of capsule here and the easy way to do this is simply to say as in a classical neural network the output here simply goes to the input here just you just put it there basically on on unchanged now there is a bit of an issue here with the dimensions but you can simply say well we simply put a weight matrix in to route into the capsules but the idea of these capsules and this paper is to say wait wait these capsules actually we want to make them decide to which capsule in the next layer will they send their input right so the capsules can kind of decide where they want to send their output to like where is this where is the capsule that detects the maybe this one detects is there a line in the right side of the image right indicating maybe a seven or a one this is probably most relevant for the one class and for the seven class so it might decide to route its output there and the idea of how this routing happens is basically the topic of this paper so the the capsules route their output to the appropriate next layers capsules how is this done all right this is done via the what's called the routing mechanism that I find it quite poorly described here so I will simply draw it I will simply try to make it up all right so we have capsules and as I've drawn them before right we have one two three capsules and we maybe have two parent capsules each of these capsules here will output a vector as we said and we'll only do it for this this one sorry vector here so this will output this vector and needs to decide where to here or to here do I send to this output now what it does is there is an iterative procedure that has multiple steps and this is I think this is at least the way I understand I think the important part to understand is that if we forward pass data through this network it actually doesn't go forward in a straight line what it actually does is it goes through a layer and then it does multiple steps in between layers until it has decided where it wants to go in the next layer and then it goes on to the next layer and if there's another capsule layers it does again multiple steps before it goes on so that's that's my take on it and the multiple steps are as follows first I'll send my output vector to to all of the all of the layers like equally all of the parent capsules and so will will everyone else right everyone will send theirs equally to the parent now this isn't just done and this may be here this isn't just done just by sending it but this is actually done by modulation of weight matrices so each thing here if this is capsule I and this is capsule J there is a weight matrix in between W I J that is learned right this is a static weight matrix and each one of these red red arrows you see here has such a weight matrix attached to it so each each line you see here is actually modulated by such a weight matrix so there is an a quadratic number of these weight matrices flying around and this will also then allow you that maybe this vector is eight dimensional but the input vector here is 16 dimensional what we saw before all right so the out the input of capsule J here it will receive let's see what it receives it will receive the output of capsule will the output of capsule 1 V 1 modulated 
by the let's let's call this yeah let's call this J modulated by 1 J W 1 J and it will also receive this is a set the output of capsule 2 modulated by the weight matrix for sorry weight matrix for capsule 2 and so on now what it does is it adds this these all up into a soft max so sorry let's write this so soft it will add those all up in a soft max weighted fashion so it will actually compute a a weighted average of those now the weights at the beginning are are just one because it gets each from each lower capsule it gets equal amount of this vector but then this will give you an output so this will give you some output let's put this in green this will give you an output that's I don't know how they call it in the paper let's just call it O J right and then what you do is all right you compare how much do each of the individual contributions agree with OJ so you actually compute for each of these you would compute the inner product so you would compute the inner product of W 1 J V 1 with OJ and you would compute the inner product of W 2 J V 2 with OJ all right the inner product and then these inner products here will become the weighting coefficients for the soft max in the next iteration all right so this I mean this this is a bit convoluted but ultimately what you're saying is if you're a capsule here you'll send your output forward you have an output you send it forward right to the other capsule and the other capsule will so this is this is your output and we'll forget about this weight matrix 6 for now this is your up the other capsule will output its own its own output computed from the lower layers now we do an iteration again if your output now aligns with this you will send more of it and these these two that I've drawn here actually align pretty well right so you'll send more of it is more more more right and now maybe the output that next computed output of the same capsule will be even more in that direction because you've contributed more right you'll send more and then you're like in the next iteration wow these two are really equal sorry this should be red here your ears just keeps being the same and then you say well I'm gonna send even more to that one right whereas another capsule that it's whose initial output was basically whose initial output was basically like this it will by itself compute the inner product with the original this original it will send it here right it will compute the inner product with the original output and it will realize well these do not align very much and then it will send less right it will send less to the next step and because it sends less in the next step of course the output will then probably align even less with that vector and then it will send less and less and less so this is called dynamic routing the the idea behind it is kind of that you route by agreement so you will route to the parent capsules that agree with your output and by agreement we mean kind of the inner product is high after modulating by this weight matrix and that sort of so that basically means this weight matrix is responsible for deciding which information is relevant together whenever you have two vectors that align in the same layer then the in the sense of the capsule networks those represent the same kind of information and those will be routed together to the same capsule in terms of the examples we made maybe if a door and a roof is present then these these these weight matrices that connect door and roof to the house class they will transform a high 
vector in door and roof into aligning vectors for the house class and thereby saying look these two if I look at them through if I look at a door and a roof through the perspective of trying to be a house right then they are in much agreement on the presence of a house so if I am a house right I am a house and I look at a door and I look at a roof through the kind of from the perspective of being a house right this is this is what these weight matrices do they always have a perspective of the parent capsule then these two things they make a lot of sense together and thus I will route them to the same place so they can both contribute to their being a house now from the perspective of a house if I look at a little beach with a tree on it right then that does not that is not the same that does not really is not the same information as a door or a roof so I will not route this to the house in the in the same strength that is sort of the best way I have of explaining it how these capsules work basically the lower entities will always be routed for the relevance of the higher entities that are trying to are trying to combine the lower entities if that wasn't it's not entirely clear to me either yet but it's the best shot I I can give and the routing is here formalized I find it hard to follow the important thing is that there is an inner loop in all of this so there is an like kind of an an inner iteration and this inner iteration is computed in every forward pass and so these routing where the information goes in the next layer that is only the prior probability for that is learned but the actual routing coefficients those are dynamically computed in every forward pass so every forward pass goes it goes information goes through a layer then it goes multiple steps between two layers until it decides exactly what the distribution for the next layer is and then the next layer computes its outputs and that goes again multiple steps between these layers and the next layer so that's the the basic thing to remember there's also some normalization involved the squash is the non-linearity we discussed so what do they actually train now at the end here they have a they have these ten capsules and each capsule will be responsible for recognizing one the presence of one digit in the MNIST data set of course and so what they do is they take the length of these vectors that are output by these capsules these capsules are feed-forward capsules as opposed to the convolutional capsules here so the feed-forward capsules output again a vector the length of this vector is taken and then it's basically trained like you would train a regression problem and the loss here is specified up here so if the if the image actually does contain this if the training label actually has this digit present this T here encodes that so if if K let's say K is 2 right so if K 2 if there is a 2 in the image when we know that because it's a training image then the length of the output of capsule number 2 should be high and this simply encodes that it should be very close to this M plus an M plus here is that I think they said it to 0.9 so they say you should be the length should be as close as possible to 0.9 whereas if the 2 is not present then TK will be 0 then this part will be active so it's only one of these two parts will be active then the length of the vector so of capsule number 2 should be close to this M negative which is 0.1 it's basically a regression problem saying if if there if the given entity is in the image then 
please make the length as close as possible to 0.9 and if it's not make it as close as possible to 0.1 so this this is a classic say regression loss on the length of the output vectors the the lambda is just a factor to to dampen the contribution for all the negative classes with respect to the one positive class of course per capsule it turns out this is actually not enough so this will be the classification output but it's it seems not enough they don't say it's not enough but they simply say we additionally do the following so they also do is they introduce a reconstruction loss now if this model is trained correctly then these capsules here these last capsules especially this one maybe that's the capsule corresponding to the class of the digit 8 will not only encode if an 8 is there or not as in the length of the vector output but it will also encode the properties of dates it is a 16 dimensional vector so it will encode hopefully things like the stroke width so then it might encode the maybe the rotation of the digit then it might be controlled the tightness of the of the loop so you can have an 8 with very large loops or it can have an 8 sorry this is a smaller rate I can have an 8 with very tight loops so it might you know encode things like this so technically it is it will be possible to reconstruct from this description reconstruct say the width is high the rotation is zero and the tightness is low then maybe I have a wide widely stroked not tight 8 that is not rotated right so it should be possible to reconstruct this and they they do exactly that so they take this last capsule of the class that is the actual training label that's called the reconstruction target and they feed this to a simple feed-forward neural network that at the end you see this is exactly the MNIST size will try to reconstruct the the image so if the image here this image goes in then it goes all through here it will take the class for here feed it through this network reshape it to an image again and hopefully what will come out is again this for here and it will then have an auxiliary auxiliary loss in addition to the loss of this of this classification loss here will auxiliary loss that tries to reconstruct the original image right and that's simply a I believe it's just an L2 reconstruction loss that is that is scaled down that it doesn't dominate so they also train the network basically to reconstruct this and I believe they do this because the length isn't quite enough to make it do what they want it to do thus they by having this reconstruction here they really kind of enforce that the individual capsules the individual dimensions must encode some kind of information about the original image and since the original images in the MNIST data set at least vary by those things by stroke width by rotation by tightness that by this loss will be reflected in the in the reconstruction all right so how are they doing here you see different examples of inputs and then reconstructed outputs and this you know seems pretty good actually so you see here all of these the input image is reconstructed fairly well so the numbers up here in the fall so the right are the failure cases here it the input image is a five labeled in the training data but the network actually classifies it as a three but then if you now you have two choices right this this is the same sample I have two choices for reconstruction either you reconstruct the capsule that is actually the is that you know is the true capsule that should be 
activated and you reconstruct from that or you reconstruct from the capsule that the network says the it classifies it as so here it mixed up a five four three if you still take the five the capsule and reconstructed you see it actually looks like the original image but it looks much more like a five and if you take the three capsule to reconstruct which is what the network classified this as it's still it looks like the original image but it looks much more like an actual three right it's it's missing the the part up here whereas over here it's it's missing this part here so that the network really seems to kind of learn the different variations of these digits and in an ambiguous case such as this one it you know it can it can actually go either way and it can actually reconstruct the original output in either interpretations once as a three and once as a five it will be interesting to see what the actual lengths of the vector of both of these classes were that were mixed up and here they compare their accuracies so they have a baseline model which I believe is just a CNN where they get a decent kind of error and then the capsule networks they get a lower error and here you see as you add the reconstruction loss and as you add routing more so one step of routing simply means the first step is where you send your output equally to each parent that is as in the classical neural network case but if you introduce three steps of routing then your error drops even lower so they they kind of are on par with baseline CNNs on MNIST here they also explore what their capsules learn so as I said the individual capsules the dimensions should encode kind of properties of the variations of the of the class class samples and here they explore this in the different capsules so they change some dimensions and they run it through their reconstruction networks and indeed they discover that there is like a scale and thickness dimension stroke thickness dimension there's a skew dimension and so on width and translation so that this is pretty remarkable these networks really if you train them in this way they really seem to learn about the entities and about the properties of the entities and that seems to be quite interesting you see that there's everything here stays well within the class that the capsule is assigned to they also yeah this robustness to affine transformations where they improve over the baseline it's kind of an auxiliary experiment the next interesting experiment is what they call the multi MNIST experiment the multi MNIST experiment is done by taking two different MNIST digits and basically just overlapping them so that they have you know shift them slightly but as you see here or here they are overlapped heavily and the task of the network is to figure out which two overlapping digits are in the image and the the network is very very good at doing this the capsule network that is and better than the the baselines because the capsule network simply encodes the presence and properties of a particular instance in the image if you simply take the top two length capsules and then reconstruct those independently then you're you can you can you can basically segment the image and you see this here so the different colorations come from two different reconstructions of the image from two different capsules so green is from one capsule and red from the other capsule so the network correctly identifies that it's a 6 and the zero right and it also correctly identifies not only which pixels belong to 
the 6 and which belong to 0 but also pixels that belong to both so that's not a not a problem if you use capsule networks as they are are notable to say here they the way they train is is they train the actual reconstruction by only reconstructing one at a time so the kind of the premise of the data set is that you actually have access to the underlying individual digits while training so like the images of the individual digits you don't only have this label here but that's a detail here are some kind of failure cases where it it misclassified or you miss specify the capsules and it's kind of unable use here you see to to assign the digits of the misclassified or the pixels of the misclassified thing it's quite interesting to look at the failure cases but I find it more interesting to look actually the success cases and the kind of ease at which the at which the capsule networks can do this simply by how they're structured alright so then lastly they also experiment on C for 10 and interestingly the C for 10 experiments show that the capsule networks don't perform as well there and as you know C for 10 is a data set that is about the same size as MNIST but it's first of all color and second of all is natural images and so they have quite a bit of clutter it's not black and white black background white digits it's actually there's a sky like on an image there's lots of things going on and right there's my tree and there's stuff here and there's stuff here and the the capsule networks they like to account for things in the image so they like to have a capsule corresponding to everything that's going on here and here and here and here and here if the whole background is black that is not a problem you can account for simply the background but if there's lots of things going on then these capsule networks get they get they get a bit over explanatory they want to explain everything and that degrades the performance now this paper basically says yeah you can have a something like a none of the above category and they found that it helped to introduce that in my opinion that it I think the the the solution will be more towards introduction of a better loss function for this because like such that you don't need kind of to explain the entire thing rather than here we'll hear what you do is you simply explain it by saying it's none of the above but it's incredibly hard to balance that my opinion yeah all right so that is basically the end of this they say they have a discussion here where they compare capsules against other related work but I hope that you kind of got an overview of how this works now and as much as possible and with that that was it for me and thanks for watching bye bye | [
{
"start": 0,
"end": 6,
"text": " Hi there! Today we're looking at dynamic routing between capsules by Sara Sabour,"
},
{
"start": 6,
"end": 11.96,
"text": " Nicholas Frost and Jeffrey Hinton of Google Brain. This paper is a bit older"
},
{
"start": 11.96,
"end": 18.8,
"text": " but it's made quite the impact at the time and so we'll go through it. I find"
},
{
"start": 18.8,
"end": 22.92,
"text": " this pretty hard paper to read and kind of understand because a lot of things"
},
{
"start": 22.92,
"end": 31.400000000000002,
"text": " are very implicit and hand wavy. So we'll kind of go through it and try to get the"
},
{
"start": 31.400000000000002,
"end": 35.96,
"text": " best out of it, try to explain what capsules are and what they do and how"
},
{
"start": 35.96,
"end": 41.44,
"text": " they stack against current networks. So capsule network in essence is a"
},
{
"start": 41.44,
"end": 46.32000000000001,
"text": " new type of neural network made of capsules. And here it says a capsule is a"
},
{
"start": 46.32000000000001,
"end": 50.120000000000005,
"text": " group of neurons whose activity vector represents the instantiation"
},
{
"start": 50.12,
"end": 56.12,
"text": " parameters of a specific type of entity such as an object or an object part. Kind"
},
{
"start": 56.12,
"end": 63.12,
"text": " of cryptic but so what they're saying is that in a capsule network, let me try to"
},
{
"start": 63.12,
"end": 68.52,
"text": " draw one here actually, in a capsule network you have what's called capsules."
},
{
"start": 68.52,
"end": 75.36,
"text": " Capsules you can imagine as just little blobs of things right? And they're also"
},
{
"start": 75.36,
"end": 81.4,
"text": " ordered in layers in this case. Let's actually leave away the second layer. And"
},
{
"start": 81.4,
"end": 89.8,
"text": " each of these of these capsules will correspond to an entity in the input."
},
{
"start": 89.8,
"end": 94.52,
"text": " Let's say the input is an image. So somewhere here there is an image right?"
},
{
"start": 94.52,
"end": 101.44,
"text": " Then maybe this capsule here will be responsible for detecting is there a"
},
{
"start": 101.44,
"end": 108.24,
"text": " wall in the image. And this one will be responsible for detecting is there a"
},
{
"start": 108.24,
"end": 117.16,
"text": " roof. This one will be is there a door. And this one will be responsible for"
},
{
"start": 117.16,
"end": 125.56,
"text": " detecting is there a lake in the image right? So now each of these each of these"
},
{
"start": 125.56,
"end": 133.2,
"text": " capsules can for on one hand can either be high or low. So if you if you imagine"
},
{
"start": 133.2,
"end": 142.32,
"text": " now a situation where wall high, roof high, door high, lake low. It means"
},
{
"start": 142.32,
"end": 150.56,
"text": " probably the image has a house on it right? But second of all not only can it"
},
{
"start": 150.56,
"end": 156.76,
"text": " predict whether or not a given entity is present in an image but the individual"
},
{
"start": 156.76,
"end": 162.68,
"text": " capsules are also responsible for encoding the exact way or shape or form"
},
{
"start": 162.68,
"end": 169.24,
"text": " that this entity takes. So the wall could have different aspects such as color"
},
{
"start": 169.24,
"end": 181.72,
"text": " color green. It could have size tall. It could have orientation. orientation is"
},
{
"start": 181.72,
"end": 191.96,
"text": " like I don't know vertical. Cool. Then roof could have angle right? Angle wide."
},
{
"start": 191.96,
"end": 196.64000000000001,
"text": " So it's a wide roof or a flat roof right? These are these are kind of attributes"
},
{
"start": 196.64,
"end": 203.23999999999998,
"text": " of these things that also the capsules would encode. So ultimately what these"
},
{
"start": 203.23999999999998,
"end": 209.6,
"text": " capsules that they are proposing will output is the roof capsule here for"
},
{
"start": 209.6,
"end": 215.56,
"text": " example would output a vector. So the output of the roof capsule is a let me"
},
{
"start": 215.56,
"end": 223.76,
"text": " draw a coordinate system is a vector. Now the length of the vector will"
},
{
"start": 223.76,
"end": 231.23999999999998,
"text": " represent so that the length draw this norm here will represent the probability"
},
{
"start": 231.23999999999998,
"end": 238.72,
"text": " that the roof is in the image. That there is a roof in an image right? The roof is"
},
{
"start": 238.72,
"end": 245.16,
"text": " element of this input image. This is simply the length and the individual"
},
{
"start": 245.16,
"end": 250.32,
"text": " coordinates will encode these attributes. So this here for example this axis could"
},
{
"start": 250.32,
"end": 257.44,
"text": " be the angle of the roof and this axis could be the color. Let's say just that"
},
{
"start": 257.44,
"end": 262.12,
"text": " the angle is like some degree number that can be positive or negative. Maybe a"
},
{
"start": 262.12,
"end": 268.84,
"text": " roof can be like this. Right this so this is but in essence this is a flat roof"
},
{
"start": 268.84,
"end": 273.68,
"text": " and this is a very narrow angle roof. So you can imagine something like this and"
},
{
"start": 273.68,
"end": 277.8,
"text": " then the color could also be maybe parameterized on a one-dimensional. It"
},
{
"start": 277.8,
"end": 282.36,
"text": " can have more dimensions than two I just can't draw more. So the depending on"
},
{
"start": 282.36,
"end": 289.8,
"text": " where this where this arrow now points the for example this vector here has the"
},
{
"start": 289.8,
"end": 294.92,
"text": " same probability that there is a roof in the image like if the output is this but"
},
{
"start": 294.92,
"end": 298.6,
"text": " the color will be different. The angle will be the same because they're roughly"
},
{
"start": 298.6,
"end": 303,
"text": " on the same this axis here but the color of this will encode a different"
},
{
"start": 303,
"end": 310.32,
"text": " different colored roof. And then if the vector is something like this a very"
},
{
"start": 310.32,
"end": 320.64,
"text": " short vector it will encode the same the same angle and color directions. So maybe"
},
{
"start": 320.64,
"end": 325.8,
"text": " I shouldn't say the position on the axis it's more like this angle and this this"
},
{
"start": 325.8,
"end": 330.4,
"text": " angle that encode the attributes. So the kind of the angular components if you"
},
{
"start": 330.4,
"end": 334.12,
"text": " will encode the attributes and the length encodes the probability. So this"
},
{
"start": 334.12,
"end": 339.59999999999997,
"text": " small vector has the same direction in terms of color and angle of the roof but"
},
{
"start": 339.59999999999997,
"end": 345.08,
"text": " it's much less probable much less likely. So this if the capsule outputs the"
},
{
"start": 345.08,
"end": 350.59999999999997,
"text": " little blue vector here it says well if there is a roof it's going to be this"
},
{
"start": 350.59999999999997,
"end": 354.52,
"text": " color in this angle but I'm really that really don't think there's a roof in"
},
{
"start": 354.52,
"end": 360.35999999999996,
"text": " this image. Whereas if it outputs the large green one then it says I'm pretty"
},
{
"start": 360.36,
"end": 365.2,
"text": " sure that there's a roof and it's going to be this angle and this this this"
},
{
"start": 365.2,
"end": 370.76,
"text": " angle and this color. Alright so that's that is what each capsule is supposed to"
},
{
"start": 370.76,
"end": 378.2,
"text": " do. Each capsule takes the input and outputs a vector that encodes if the"
},
{
"start": 378.2,
"end": 383.04,
"text": " entity that the capsule is responsible for is present in the image A and B"
},
{
"start": 383.04,
"end": 389.76,
"text": " what properties this entity has. And then we get to the point where there's the"
},
{
"start": 389.76,
"end": 394.92,
"text": " next layer of capsules. So the next layer of capsules takes information that each"
},
{
"start": 394.92,
"end": 402.4,
"text": " capsule here takes information from each capsule in the lower layer like like"
},
{
"start": 402.4,
"end": 407.36,
"text": " you're used to from your neural network and integrates this information and"
},
{
"start": 407.36,
"end": 411.4,
"text": " we'll talk about how this works. It integrates all of this information right"
},
{
"start": 411.4,
"end": 415.88,
"text": " all of these are vectors now that come from the lower integrates all of this"
},
{
"start": 415.88,
"end": 422.4,
"text": " information and again each capsule in this next layer is responsible for a"
},
{
"start": 422.4,
"end": 427.84,
"text": " entity. Now these entities in the higher layers are usually composite entities of"
},
{
"start": 427.84,
"end": 436.36,
"text": " the lower layers. So this one here could be responsible for house, this one could"
},
{
"start": 436.36,
"end": 444.4,
"text": " be responsible for national park, national park and this one could be"
},
{
"start": 444.4,
"end": 451.08,
"text": " responsible for beach or something like this right. And then each of these will"
},
{
"start": 451.08,
"end": 456.23999999999995,
"text": " integrate all of this information from the lower layers and then come up with"
},
{
"start": 456.23999999999995,
"end": 461.4,
"text": " their own output vector encoding whether or not a given entity is present in the"
},
{
"start": 461.4,
"end": 469,
"text": " in the image. Of course the house class will pick up if there is a door a roof"
},
{
"start": 469,
"end": 473.35999999999996,
"text": " and a wall in the image the house classes will pick up on that or that's"
},
{
"start": 473.36,
"end": 477.40000000000003,
"text": " how it's meant to work house class is meant to pick up on that and then itself"
},
{
"start": 477.40000000000003,
"end": 483,
"text": " output a large vector saying there's probably a house in this in this image."
},
{
"start": 483,
"end": 488.56,
"text": " So each of these capsules in by itself is responsible for encoding the presence"
},
{
"start": 488.56,
"end": 494.96000000000004,
"text": " and attributes of a object or object part or entity or part of entity in the"
},
{
"start": 494.96000000000004,
"end": 500.04,
"text": " given input data. And of course the last layer here it will simply be your"
},
{
"start": 500.04,
"end": 505.32,
"text": " classification layer. So in the last layer you have as many capsules as you"
},
{
"start": 505.32,
"end": 511.08000000000004,
"text": " have classes in your classification task. So this is mainly for a"
},
{
"start": 511.08000000000004,
"end": 517.84,
"text": " classification task and then you can classify and you can kind of train the"
},
{
"start": 517.84,
"end": 525.48,
"text": " whole system like this. So how exactly this happens we'll see next."
},
{
"start": 525.48,
"end": 533.96,
"text": " Alright so they make kind of analogies to the visual system and so on."
},
{
"start": 533.96,
"end": 541.6,
"text": " We'll jump these you can everyone that does deep learning in some way is trying"
},
{
"start": 541.6,
"end": 547.64,
"text": " to to make that. We're rather going to the specifics of how these capsules work"
},
{
"start": 547.64,
"end": 553.8000000000001,
"text": " and how their specific suggestions for them. Note that they say this is in no"
},
{
"start": 553.8,
"end": 558.92,
"text": " way the only implementation of capsules. It's just kind of an example to show how"
},
{
"start": 558.92,
"end": 565.56,
"text": " one could do it. Alright so first of all they present their what you might call"
},
{
"start": 565.56,
"end": 570.68,
"text": " non-linearity. So their non-linearity what it needs to do is if you look at"
},
{
"start": 570.68,
"end": 575.04,
"text": " these capsule networks the outputs here the length of the outputs of these"
},
{
"start": 575.04,
"end": 580.3199999999999,
"text": " vectors right they're supposed to represent probabilities and as such they"
},
{
"start": 580.32,
"end": 587,
"text": " they need to be so here it roof this door maybe a vector like this wall maybe"
},
{
"start": 587,
"end": 592.2,
"text": " a vector like that. So initially we simply specify the output is a vector"
},
{
"start": 592.2,
"end": 597,
"text": " and in essence these capsules are implemented in much the same way like"
},
{
"start": 597,
"end": 604.6800000000001,
"text": " your classic neural network layer would be implemented. So each of these"
},
{
"start": 604.68,
"end": 613.28,
"text": " capsules will be in essence a neural network layer by itself that outputs a"
},
{
"start": 613.28,
"end": 619.3599999999999,
"text": " vector. There's nothing constraining the length of the vector initially so"
},
{
"start": 619.3599999999999,
"end": 626.8,
"text": " their non-linearity does constrain the vector to be of maximum length 1 and of"
},
{
"start": 626.8,
"end": 631.3599999999999,
"text": " minimum length 0. That's this non-linearity here. So S here is the"
},
{
"start": 631.36,
"end": 638.6800000000001,
"text": " unscaled output of the capsule and you can see here if the length of S gets"
},
{
"start": 638.6800000000001,
"end": 646.2,
"text": " close to 1 or sorry gets really large then this here becomes irrelevant."
},
{
"start": 646.2,
"end": 653.8000000000001,
"text": " This whole term will be 1 and then the length of the final output of V here"
},
{
"start": 653.8000000000001,
"end": 661.12,
"text": " will be 1. Right so if this is very large then the the length of the scaled"
},
{
"start": 661.12,
"end": 666.92,
"text": " output will be 1 however if the if the length is really small of the original"
},
{
"start": 666.92,
"end": 672.92,
"text": " output so if this goes towards 0 then this becomes irrelevant this becomes"
},
{
"start": 672.92,
"end": 680,
"text": " irrelevant this will go towards 0 and the entire length will go towards 0."
},
{
"start": 680,
"end": 689.2,
"text": " So this is kind of a nice way to scale these outputs always to be between length 0"
},
{
"start": 689.2,
"end": 702.5200000000001,
"text": " and 1. Then next thing is so how this I find I find the the most complicated"
},
{
"start": 702.5200000000001,
"end": 710.48,
"text": " part right so we'll jump ahead actually to how a capsule's network is implemented"
},
{
"start": 710.48,
"end": 716.76,
"text": " and this is the the capsule network they implement so first it's an MNIST"
},
{
"start": 716.76,
"end": 721.84,
"text": " classifier you have an MNIST image here and it first goes through a simple"
},
{
"start": 721.84,
"end": 726.4,
"text": " convolutional layer that's that's nothing new this is a classic"
},
{
"start": 726.4,
"end": 734.84,
"text": " convolutional layer is there's 256 channels it has a 9 by 9 filters and"
},
{
"start": 734.84,
"end": 747.1600000000001,
"text": " stride 1 so it will output a 20 by 20 time by 256 tensor then each of these"
},
{
"start": 747.1600000000001,
"end": 752.6,
"text": " so each of the outputs here is sent to each of these capsules and now they're"
},
{
"start": 752.6,
"end": 758.2800000000001,
"text": " convolutional capsules so that makes it a bit more complicated but don't you"
},
{
"start": 758.2800000000001,
"end": 762.1600000000001,
"text": " know don't worry primarily about them being convolutional capsules the"
},
{
"start": 762.16,
"end": 765.28,
"text": " analogy is exactly as in a classic neural network you can implement these"
},
{
"start": 765.28,
"end": 772.4399999999999,
"text": " capsules as void-feed-forward capsules or as convolutional capsules and maybe also"
},
{
"start": 772.4399999999999,
"end": 777.3199999999999,
"text": " as transformer capsules I don't think anyone's done that all right there's a"
},
{
"start": 777.3199999999999,
"end": 785,
"text": " paper for you the so you'll send you'll send the output of this convolution"
},
{
"start": 785,
"end": 790.04,
"text": " layer to each capsule and then you have basically just two layer of capsules"
},
{
"start": 790.04,
"end": 797.64,
"text": " here the first layer consists of 32 what they call primary caps sorry the these"
},
{
"start": 797.64,
"end": 805.24,
"text": " 32 capsules each will output an eight dimensional vector and I'm simplifying"
},
{
"start": 805.24,
"end": 809.48,
"text": " here it's it's convolutional but they will just for simplest they will each"
},
{
"start": 809.48,
"end": 816.68,
"text": " output an eight dimensional vector right and these are exactly as we said before"
},
{
"start": 816.68,
"end": 821.8,
"text": " so each of these will be responsible ultimately for a given entity or part of"
},
{
"start": 821.8,
"end": 828.06,
"text": " entity being there like in MNIST this could be is there a little curve on the"
},
{
"start": 828.06,
"end": 831.64,
"text": " bottom left side right this might indicate the presence of a six or an"
},
{
"start": 831.64,
"end": 838.8399999999999,
"text": " eight something like this and then the these capsules here each is they"
},
{
"start": 838.8399999999999,
"end": 844.1999999999999,
"text": " represented as a row so each of these rows here is a capsule and we have ten"
},
{
"start": 844.2,
"end": 848.88,
"text": " of these and these are your simply your final classification capsules so each"
},
{
"start": 848.88,
"end": 854.76,
"text": " capsule is responsible for indicating the presence or absence of one particular"
},
{
"start": 854.76,
"end": 859.5600000000001,
"text": " class of digits so this will be of a one of a two of a three of a four and so on"
},
{
"start": 859.5600000000001,
"end": 865.9200000000001,
"text": " of a zero I guess somewhere as well so these are ten capsules and the question"
},
{
"start": 865.9200000000001,
"end": 871.5200000000001,
"text": " is how does information go from a capsule here from the output of a"
},
{
"start": 871.52,
"end": 877,
"text": " capsule or to any of capsule here and the easy way to do this is simply to say"
},
{
"start": 877,
"end": 884.12,
"text": " as in a classical neural network the output here simply goes to the input"
},
{
"start": 884.12,
"end": 891.92,
"text": " here just you just put it there basically on on unchanged now there is a"
},
{
"start": 891.92,
"end": 897.4,
"text": " bit of an issue here with the dimensions but you can simply say well we simply"
},
{
"start": 897.4,
"end": 903.88,
"text": " put a weight matrix in to route into the capsules but the idea of these capsules"
},
{
"start": 903.88,
"end": 912.28,
"text": " and this paper is to say wait wait these capsules actually we want to make them"
},
{
"start": 912.28,
"end": 920.84,
"text": " decide to which capsule in the next layer will they send their input right"
},
{
"start": 920.84,
"end": 926.84,
"text": " so the capsules can kind of decide where they want to send their output to like"
},
{
"start": 926.84,
"end": 932.48,
"text": " where is this where is the capsule that detects the maybe this one detects is"
},
{
"start": 932.48,
"end": 937.08,
"text": " there a line in the right side of the image right indicating maybe a seven or"
},
{
"start": 937.08,
"end": 945.4,
"text": " a one this is probably most relevant for the one class and for the seven class so"
},
{
"start": 945.4,
"end": 951.52,
"text": " it might decide to route its output there and the idea of how this routing"
},
{
"start": 951.52,
"end": 959.4399999999999,
"text": " happens is basically the topic of this paper so the the capsules route their"
},
{
"start": 959.4399999999999,
"end": 967,
"text": " output to the appropriate next layers capsules how is this done all right this"
},
{
"start": 967,
"end": 972.1999999999999,
"text": " is done via the what's called the routing mechanism that I find it quite"
},
{
"start": 972.1999999999999,
"end": 981.12,
"text": " poorly described here so I will simply draw it I will simply try to make it up"
},
{
"start": 981.12,
"end": 990.88,
"text": " all right so we have capsules and as I've drawn them before right we have one"
},
{
"start": 990.88,
"end": 1000.32,
"text": " two three capsules and we maybe have two parent capsules each of these capsules"
},
{
"start": 1000.32,
"end": 1006.16,
"text": " here will output a vector as we said and we'll only do it for this this one sorry"
},
{
"start": 1006.16,
"end": 1012.92,
"text": " vector here so this will output this vector and needs to decide where to here"
},
{
"start": 1012.92,
"end": 1020.04,
"text": " or to here do I send to this output now what it does is there is an iterative"
},
{
"start": 1020.04,
"end": 1027.68,
"text": " procedure that has multiple steps and this is I think this is at least the way"
},
{
"start": 1027.68,
"end": 1032.52,
"text": " I understand I think the important part to understand is that if we forward pass"
},
{
"start": 1032.52,
"end": 1037.24,
"text": " data through this network it actually doesn't go forward in a straight line"
},
{
"start": 1037.24,
"end": 1042.04,
"text": " what it actually does is it goes through a layer and then it does multiple steps"
},
{
"start": 1042.04,
"end": 1047.76,
"text": " in between layers until it has decided where it wants to go in the next layer"
},
{
"start": 1047.76,
"end": 1051.96,
"text": " and then it goes on to the next layer and if there's another capsule layers it"
},
{
"start": 1051.96,
"end": 1058.32,
"text": " does again multiple steps before it goes on so that's that's my take on it and"
},
{
"start": 1058.32,
"end": 1064.6,
"text": " the multiple steps are as follows first I'll send my output vector to to all of"
},
{
"start": 1064.6,
"end": 1070.12,
"text": " the all of the layers like equally all of the parent capsules and so will will"
},
{
"start": 1070.12,
"end": 1078.08,
"text": " everyone else right everyone will send theirs equally to the parent now this"
},
{
"start": 1078.08,
"end": 1082.8999999999999,
"text": " isn't just done and this may be here this isn't just done just by sending it"
},
{
"start": 1082.8999999999999,
"end": 1087.32,
"text": " but this is actually done by modulation of weight matrices so each thing here if"
},
{
"start": 1087.32,
"end": 1093.3999999999999,
"text": " this is capsule I and this is capsule J there is a weight matrix in between W I J"
},
{
"start": 1093.3999999999999,
"end": 1098.1599999999999,
"text": " that is learned right this is a static weight matrix and each one of these red"
},
{
"start": 1098.1599999999999,
"end": 1104.36,
"text": " red arrows you see here has such a weight matrix attached to it so each"
},
{
"start": 1104.36,
"end": 1108.76,
"text": " each line you see here is actually modulated by such a weight matrix so"
},
{
"start": 1108.76,
"end": 1113.9199999999998,
"text": " there is an a quadratic number of these weight matrices flying around and this"
},
{
"start": 1113.92,
"end": 1118.24,
"text": " will also then allow you that maybe this vector is eight dimensional but the"
},
{
"start": 1118.24,
"end": 1124.16,
"text": " input vector here is 16 dimensional what we saw before all right so the out the"
},
{
"start": 1124.16,
"end": 1129.48,
"text": " input of capsule J here it will receive let's see what it receives it will"
},
{
"start": 1129.48,
"end": 1140.5600000000002,
"text": " receive the output of capsule will the output of capsule 1 V 1 modulated by the"
},
{
"start": 1140.56,
"end": 1148.8,
"text": " let's let's call this yeah let's call this J modulated by 1 J W 1 J and it"
},
{
"start": 1148.8,
"end": 1155.6,
"text": " will also receive this is a set the output of capsule 2 modulated by the"
},
{
"start": 1155.6,
"end": 1162.8799999999999,
"text": " weight matrix for sorry weight matrix for capsule 2 and so on now what it does"
},
{
"start": 1162.88,
"end": 1174.4,
"text": " is it adds this these all up into a soft max so sorry let's write this so soft it"
},
{
"start": 1174.4,
"end": 1180.24,
"text": " will add those all up in a soft max weighted fashion so it will actually"
},
{
"start": 1180.24,
"end": 1188.5600000000002,
"text": " compute a a weighted average of those now the weights at the beginning are are"
},
{
"start": 1188.56,
"end": 1195.56,
"text": " just one because it gets each from each lower capsule it gets equal amount of"
},
{
"start": 1195.56,
"end": 1200.6,
"text": " this vector but then this will give you an output so this will give you some"
},
{
"start": 1200.6,
"end": 1207.84,
"text": " output let's put this in green this will give you an output that's I don't know"
},
{
"start": 1207.84,
"end": 1215.72,
"text": " how they call it in the paper let's just call it O J right and then what you do"
},
{
"start": 1215.72,
"end": 1224.08,
"text": " is all right you compare how much do each of the individual contributions"
},
{
"start": 1224.08,
"end": 1230.68,
"text": " agree with OJ so you actually compute for each of these you would compute the"
},
{
"start": 1230.68,
"end": 1239.48,
"text": " inner product so you would compute the inner product of W 1 J V 1 with OJ and"
},
{
"start": 1239.48,
"end": 1249.24,
"text": " you would compute the inner product of W 2 J V 2 with OJ all right the inner"
},
{
"start": 1249.24,
"end": 1254.76,
"text": " product and then these inner products here will become the weighting"
},
{
"start": 1254.76,
"end": 1261.2,
"text": " coefficients for the soft max in the next iteration all right so this I mean"
},
{
"start": 1261.2,
"end": 1265.44,
"text": " this this is a bit convoluted but ultimately what you're saying is if"
},
{
"start": 1265.44,
"end": 1273.0800000000002,
"text": " you're a capsule here you'll send your output forward you have an output you"
},
{
"start": 1273.0800000000002,
"end": 1280.0800000000002,
"text": " send it forward right to the other capsule and the other capsule will so"
},
{
"start": 1280.0800000000002,
"end": 1283.56,
"text": " this is this is your output and we'll forget about this weight matrix 6 for"
},
{
"start": 1283.56,
"end": 1290.24,
"text": " now this is your up the other capsule will output its own its own output"
},
{
"start": 1290.24,
"end": 1297.88,
"text": " computed from the lower layers now we do an iteration again if your output now"
},
{
"start": 1297.88,
"end": 1305,
"text": " aligns with this you will send more of it and these these two that I've drawn"
},
{
"start": 1305,
"end": 1309.36,
"text": " here actually align pretty well right so you'll send more of it is more more"
},
{
"start": 1309.36,
"end": 1316.04,
"text": " more right and now maybe the output that next computed output of the same capsule"
},
{
"start": 1316.04,
"end": 1319.4,
"text": " will be even more in that direction because you've contributed more right"
},
{
"start": 1319.4,
"end": 1323.3200000000002,
"text": " you'll send more and then you're like in the next iteration wow these two are"
},
{
"start": 1323.3200000000002,
"end": 1328.5600000000002,
"text": " really equal sorry this should be red here your ears just keeps being the same"
},
{
"start": 1328.5600000000002,
"end": 1333.0400000000002,
"text": " and then you say well I'm gonna send even more to that one right whereas"
},
{
"start": 1333.0400000000002,
"end": 1340.76,
"text": " another capsule that it's whose initial output was basically whose initial"
},
{
"start": 1340.76,
"end": 1348.6000000000001,
"text": " output was basically like this it will by itself compute the inner product with"
},
{
"start": 1348.6,
"end": 1353.36,
"text": " the original this original it will send it here right it will compute the inner"
},
{
"start": 1353.36,
"end": 1358.48,
"text": " product with the original output and it will realize well these do not align"
},
{
"start": 1358.48,
"end": 1363.48,
"text": " very much and then it will send less right it will send less to the next step"
},
{
"start": 1363.48,
"end": 1369.08,
"text": " and because it sends less in the next step of course the output will then"
},
{
"start": 1369.08,
"end": 1374.4399999999998,
"text": " probably align even less with that vector and then it will send less and"
},
{
"start": 1374.44,
"end": 1380.2,
"text": " less and less so this is called dynamic routing the the idea behind it is kind"
},
{
"start": 1380.2,
"end": 1388.24,
"text": " of that you route by agreement so you will route to the parent capsules that"
},
{
"start": 1388.24,
"end": 1393.8400000000001,
"text": " agree with your output and by agreement we mean kind of the inner product is"
},
{
"start": 1393.8400000000001,
"end": 1400.3200000000002,
"text": " high after modulating by this weight matrix and that sort of so that"
},
{
"start": 1400.32,
"end": 1405.6799999999998,
"text": " basically means this weight matrix is responsible for deciding which"
},
{
"start": 1405.6799999999998,
"end": 1411.08,
"text": " information is relevant together whenever you have two vectors that align"
},
{
"start": 1411.08,
"end": 1417.48,
"text": " in the same layer then the in the sense of the capsule networks those represent"
},
{
"start": 1417.48,
"end": 1423.8799999999999,
"text": " the same kind of information and those will be routed together to the same"
},
{
"start": 1423.8799999999999,
"end": 1429.8,
"text": " capsule in terms of the examples we made maybe if a door and a roof is"
},
{
"start": 1429.8,
"end": 1436.9199999999998,
"text": " present then these these these weight matrices that connect door and roof to"
},
{
"start": 1436.9199999999998,
"end": 1442.84,
"text": " the house class they will transform a high vector in door and roof into"
},
{
"start": 1442.84,
"end": 1449.76,
"text": " aligning vectors for the house class and thereby saying look these two if I look"
},
{
"start": 1449.76,
"end": 1457.28,
"text": " at them through if I look at a door and a roof through the perspective of trying"
},
{
"start": 1457.28,
"end": 1464.6,
"text": " to be a house right then they are in much agreement on the presence of a"
},
{
"start": 1464.6,
"end": 1476.12,
"text": " house so if I am a house right I am a house and I look at a door and I look at"
},
{
"start": 1476.12,
"end": 1482.72,
"text": " a roof through the kind of from the perspective of being a house right this"
},
{
"start": 1482.72,
"end": 1486.6399999999999,
"text": " is this is what these weight matrices do they always have a perspective of the"
},
{
"start": 1486.64,
"end": 1492.72,
"text": " parent capsule then these two things they make a lot of sense together and"
},
{
"start": 1492.72,
"end": 1500.16,
"text": " thus I will route them to the same place so they can both contribute to their"
},
{
"start": 1500.16,
"end": 1506.1200000000001,
"text": " being a house now from the perspective of a house if I look at a little beach"
},
{
"start": 1506.1200000000001,
"end": 1512.8000000000002,
"text": " with a tree on it right then that does not that is not the same that does not"
},
{
"start": 1512.8,
"end": 1521.36,
"text": " really is not the same information as a door or a roof so I will not route this"
},
{
"start": 1521.36,
"end": 1530.6399999999999,
"text": " to the house in the in the same strength that is sort of the best way I have of"
},
{
"start": 1530.6399999999999,
"end": 1535.48,
"text": " explaining it how these capsules work basically the lower entities will always"
},
{
"start": 1535.48,
"end": 1543.08,
"text": " be routed for the relevance of the higher entities that are trying to are"
},
{
"start": 1543.08,
"end": 1549.84,
"text": " trying to combine the lower entities if that wasn't it's not entirely clear to"
},
{
"start": 1549.84,
"end": 1557,
"text": " me either yet but it's the best shot I I can give and the routing is here"
},
{
"start": 1557,
"end": 1563.84,
"text": " formalized I find it hard to follow the important thing is that there is an"
},
{
"start": 1563.84,
"end": 1570.8799999999999,
"text": " inner loop in all of this so there is an like kind of an an inner iteration and"
},
{
"start": 1570.8799999999999,
"end": 1578.04,
"text": " this inner iteration is computed in every forward pass and so these routing"
},
{
"start": 1578.04,
"end": 1584.48,
"text": " where the information goes in the next layer that is only the prior probability"
},
{
"start": 1584.48,
"end": 1591.1599999999999,
"text": " for that is learned but the actual routing coefficients those are"
},
{
"start": 1591.16,
"end": 1597.88,
"text": " dynamically computed in every forward pass so every forward pass goes it goes"
},
{
"start": 1597.88,
"end": 1602.28,
"text": " information goes through a layer then it goes multiple steps between two layers"
},
{
"start": 1602.28,
"end": 1606.1200000000001,
"text": " until it decides exactly what the distribution for the next layer is and"
},
{
"start": 1606.1200000000001,
"end": 1610.64,
"text": " then the next layer computes its outputs and that goes again multiple steps"
},
{
"start": 1610.64,
"end": 1616.48,
"text": " between these layers and the next layer so that's the the basic thing to"
},
{
"start": 1616.48,
"end": 1621.76,
"text": " remember there's also some normalization involved the squash is the non-linearity"
},
{
"start": 1621.76,
"end": 1629.1200000000001,
"text": " we discussed so what do they actually train now at the end here they have a"
},
{
"start": 1629.1200000000001,
"end": 1634.56,
"text": " they have these ten capsules and each capsule will be responsible for"
},
{
"start": 1634.56,
"end": 1640.4,
"text": " recognizing one the presence of one digit in the MNIST data set of course"
},
{
"start": 1640.4,
"end": 1646.04,
"text": " and so what they do is they take the length of these vectors that are output"
},
{
"start": 1646.04,
"end": 1650,
"text": " by these capsules these capsules are feed-forward capsules as opposed to the"
},
{
"start": 1650,
"end": 1655.3999999999999,
"text": " convolutional capsules here so the feed-forward capsules output again a"
},
{
"start": 1655.3999999999999,
"end": 1661.1599999999999,
"text": " vector the length of this vector is taken and then it's basically trained"
},
{
"start": 1661.1599999999999,
"end": 1666.52,
"text": " like you would train a regression problem and the loss here is specified"
},
{
"start": 1666.52,
"end": 1673.52,
"text": " up here so if the if the image actually does contain this if the training label"
},
{
"start": 1673.52,
"end": 1683.28,
"text": " actually has this digit present this T here encodes that so if if K let's say K"
},
{
"start": 1683.28,
"end": 1691.92,
"text": " is 2 right so if K 2 if there is a 2 in the image when we know that because it's"
},
{
"start": 1691.92,
"end": 1698.24,
"text": " a training image then the length of the output of capsule number 2 should be"
},
{
"start": 1698.24,
"end": 1705.56,
"text": " high and this simply encodes that it should be very close to this M plus an"
},
{
"start": 1705.56,
"end": 1710.52,
"text": " M plus here is that I think they said it to 0.9 so they say you should be the"
},
{
"start": 1710.52,
"end": 1717.04,
"text": " length should be as close as possible to 0.9 whereas if the 2 is not present then"
},
{
"start": 1717.04,
"end": 1723.44,
"text": " TK will be 0 then this part will be active so it's only one of these two"
},
{
"start": 1723.44,
"end": 1730.04,
"text": " parts will be active then the length of the vector so of capsule number 2 should"
},
{
"start": 1730.04,
"end": 1735.48,
"text": " be close to this M negative which is 0.1 it's basically a regression problem"
},
{
"start": 1735.48,
"end": 1742.44,
"text": " saying if if there if the given entity is in the image then please make the"
},
{
"start": 1742.44,
"end": 1746.3600000000001,
"text": " length as close as possible to 0.9 and if it's not make it as close as possible"
},
{
"start": 1746.36,
"end": 1755.04,
"text": " to 0.1 so this this is a classic say regression loss on the length of the"
},
{
"start": 1755.04,
"end": 1761.7199999999998,
"text": " output vectors the the lambda is just a factor to to dampen the contribution for"
},
{
"start": 1761.7199999999998,
"end": 1768.1599999999999,
"text": " all the negative classes with respect to the one positive class of course per"
},
{
"start": 1768.16,
"end": 1776.76,
"text": " capsule it turns out this is actually not enough so this will be the"
},
{
"start": 1776.76,
"end": 1781.44,
"text": " classification output but it's it seems not enough they don't say it's not"
},
{
"start": 1781.44,
"end": 1786.0400000000002,
"text": " enough but they simply say we additionally do the following so they"
},
{
"start": 1786.0400000000002,
"end": 1791.8000000000002,
"text": " also do is they introduce a reconstruction loss now if this model is"
},
{
"start": 1791.8000000000002,
"end": 1796.68,
"text": " trained correctly then these capsules here these last capsules especially"
},
{
"start": 1796.68,
"end": 1800,
"text": " this one maybe that's the capsule corresponding to the class of the digit"
},
{
"start": 1800,
"end": 1808.02,
"text": " 8 will not only encode if an 8 is there or not as in the length of the vector"
},
{
"start": 1808.02,
"end": 1812.72,
"text": " output but it will also encode the properties of dates it is a 16"
},
{
"start": 1812.72,
"end": 1818.8400000000001,
"text": " dimensional vector so it will encode hopefully things like the stroke width"
},
{
"start": 1818.84,
"end": 1829.3999999999999,
"text": " so then it might encode the maybe the rotation of the digit then it might be"
},
{
"start": 1829.3999999999999,
"end": 1836.28,
"text": " controlled the tightness of the of the loop so you can have an 8 with very"
},
{
"start": 1836.28,
"end": 1841.08,
"text": " large loops or it can have an 8 sorry this is a smaller rate I can have an 8"
},
{
"start": 1841.08,
"end": 1846.8,
"text": " with very tight loops so it might you know encode things like this so"
},
{
"start": 1846.8,
"end": 1853.48,
"text": " technically it is it will be possible to reconstruct from this description"
},
{
"start": 1853.48,
"end": 1859.44,
"text": " reconstruct say the width is high the rotation is zero and the tightness is"
},
{
"start": 1859.44,
"end": 1870.3999999999999,
"text": " low then maybe I have a wide widely stroked not tight 8 that is not rotated"
},
{
"start": 1870.3999999999999,
"end": 1875.12,
"text": " right so it should be possible to reconstruct this and they they do exactly"
},
{
"start": 1875.12,
"end": 1880.9599999999998,
"text": " that so they take this last capsule of the class that is the actual training"
},
{
"start": 1880.9599999999998,
"end": 1888.08,
"text": " label that's called the reconstruction target and they feed this to a simple"
},
{
"start": 1888.08,
"end": 1893.1599999999999,
"text": " feed-forward neural network that at the end you see this is exactly the MNIST"
},
{
"start": 1893.1599999999999,
"end": 1899.84,
"text": " size will try to reconstruct the the image so if the image here this image"
},
{
"start": 1899.84,
"end": 1907.24,
"text": " goes in then it goes all through here it will take the class for here feed it"
},
{
"start": 1907.24,
"end": 1912.56,
"text": " through this network reshape it to an image again and hopefully what will come"
},
{
"start": 1912.56,
"end": 1920.56,
"text": " out is again this for here and it will then have an auxiliary auxiliary loss in"
},
{
"start": 1920.56,
"end": 1926.36,
"text": " addition to the loss of this of this classification loss here will auxiliary"
},
{
"start": 1926.36,
"end": 1932.8799999999999,
"text": " loss that tries to reconstruct the original image right and that's simply a"
},
{
"start": 1932.8799999999999,
"end": 1941.52,
"text": " I believe it's just an L2 reconstruction loss that is that is scaled down that it"
},
{
"start": 1941.52,
"end": 1947.1999999999998,
"text": " doesn't dominate so they also train the network basically to reconstruct this"
},
{
"start": 1947.1999999999998,
"end": 1952.28,
"text": " and I believe they do this because the length isn't quite enough to make it do"
},
{
"start": 1952.28,
"end": 1959.12,
"text": " what they want it to do thus they by having this reconstruction here they"
},
{
"start": 1959.12,
"end": 1964.36,
"text": " really kind of enforce that the individual capsules the individual"
},
{
"start": 1964.36,
"end": 1971.52,
"text": " dimensions must encode some kind of information about the original image"
},
{
"start": 1971.52,
"end": 1976.44,
"text": " and since the original images in the MNIST data set at least vary by those"
},
{
"start": 1976.44,
"end": 1983.2,
"text": " things by stroke width by rotation by tightness that by this loss will be"
},
{
"start": 1983.2,
"end": 1996.16,
"text": " reflected in the in the reconstruction all right so how are they doing here you"
},
{
"start": 1996.16,
"end": 2003.1200000000001,
"text": " see different examples of inputs and then reconstructed outputs and this you"
},
{
"start": 2003.12,
"end": 2009.1999999999998,
"text": " know seems pretty good actually so you see here all of these the input image is"
},
{
"start": 2009.1999999999998,
"end": 2016.9599999999998,
"text": " reconstructed fairly well so the numbers up here in the fall so the right are the"
},
{
"start": 2016.9599999999998,
"end": 2023,
"text": " failure cases here it the input image is a five labeled in the training data but"
},
{
"start": 2023,
"end": 2029.32,
"text": " the network actually classifies it as a three but then if you now you have two"
},
{
"start": 2029.32,
"end": 2032.6399999999999,
"text": " choices right this this is the same sample I have two choices for"
},
{
"start": 2032.64,
"end": 2038.8000000000002,
"text": " reconstruction either you reconstruct the capsule that is actually the is that"
},
{
"start": 2038.8000000000002,
"end": 2042.76,
"text": " you know is the true capsule that should be activated and you reconstruct from"
},
{
"start": 2042.76,
"end": 2049.2000000000003,
"text": " that or you reconstruct from the capsule that the network says the it classifies"
},
{
"start": 2049.2000000000003,
"end": 2054,
"text": " it as so here it mixed up a five four three if you still take the five the"
},
{
"start": 2054,
"end": 2058.96,
"text": " capsule and reconstructed you see it actually looks like the original image"
},
{
"start": 2058.96,
"end": 2064.32,
"text": " but it looks much more like a five and if you take the three capsule to"
},
{
"start": 2064.32,
"end": 2068.2400000000002,
"text": " reconstruct which is what the network classified this as it's still it looks"
},
{
"start": 2068.2400000000002,
"end": 2073.28,
"text": " like the original image but it looks much more like an actual three right it's"
},
{
"start": 2073.28,
"end": 2078.68,
"text": " it's missing the the part up here whereas over here it's it's missing this"
},
{
"start": 2078.68,
"end": 2083.76,
"text": " part here so that the network really seems to kind of learn the different"
},
{
"start": 2083.76,
"end": 2089.92,
"text": " variations of these digits and in an ambiguous case such as this one it you"
},
{
"start": 2089.92,
"end": 2094.48,
"text": " know it can it can actually go either way and it can actually reconstruct the"
},
{
"start": 2094.48,
"end": 2101,
"text": " original output in either interpretations once as a three and once"
},
{
"start": 2101,
"end": 2105.44,
"text": " as a five it will be interesting to see what the actual lengths of the vector of"
},
{
"start": 2105.44,
"end": 2112.6400000000003,
"text": " both of these classes were that were mixed up and here they compare their"
},
{
"start": 2112.64,
"end": 2118.48,
"text": " accuracies so they have a baseline model which I believe is just a CNN"
},
{
"start": 2118.48,
"end": 2125.92,
"text": " where they get a decent kind of error and then the capsule networks they get a"
},
{
"start": 2125.92,
"end": 2130.72,
"text": " lower error and here you see as you add the reconstruction loss and as you add"
},
{
"start": 2130.72,
"end": 2135.64,
"text": " routing more so one step of routing simply means the first step is where you"
},
{
"start": 2135.64,
"end": 2142.44,
"text": " send your output equally to each parent that is as in the classical neural"
},
{
"start": 2142.44,
"end": 2148.88,
"text": " network case but if you introduce three steps of routing then your error drops"
},
{
"start": 2148.88,
"end": 2159.96,
"text": " even lower so they they kind of are on par with baseline CNNs on MNIST here"
},
{
"start": 2162.2400000000002,
"end": 2167.04,
"text": " they also explore what their capsules learn so as I said the individual capsules"
},
{
"start": 2167.04,
"end": 2174.32,
"text": " the dimensions should encode kind of properties of the variations of the of"
},
{
"start": 2174.32,
"end": 2180.4,
"text": " the class class samples and here they explore this in the different capsules so"
},
{
"start": 2180.4,
"end": 2184.32,
"text": " they change some dimensions and they run it through their reconstruction networks"
},
{
"start": 2184.32,
"end": 2189.96,
"text": " and indeed they discover that there is like a scale and thickness dimension"
},
{
"start": 2189.96,
"end": 2196.04,
"text": " stroke thickness dimension there's a skew dimension and so on width and"
},
{
"start": 2196.04,
"end": 2204.44,
"text": " translation so that this is pretty remarkable these networks really if you"
},
{
"start": 2204.44,
"end": 2209.2,
"text": " train them in this way they really seem to learn about the entities and about"
},
{
"start": 2209.2,
"end": 2214.72,
"text": " the properties of the entities and that seems to be quite interesting you see"
},
{
"start": 2214.72,
"end": 2219.96,
"text": " that there's everything here stays well within the class that the capsule is"
},
{
"start": 2219.96,
"end": 2227.92,
"text": " assigned to they also yeah this robustness to affine transformations"
},
{
"start": 2227.92,
"end": 2232.92,
"text": " where they improve over the baseline it's kind of an auxiliary experiment the"
},
{
"start": 2232.92,
"end": 2238.44,
"text": " next interesting experiment is what they call the multi MNIST experiment the"
},
{
"start": 2238.44,
"end": 2245.44,
"text": " multi MNIST experiment is done by taking two different MNIST digits and basically"
},
{
"start": 2245.44,
"end": 2251.32,
"text": " just overlapping them so that they have you know shift them slightly but as you"
},
{
"start": 2251.32,
"end": 2257.8,
"text": " see here or here they are overlapped heavily and the task of the network is"
},
{
"start": 2257.8,
"end": 2265.12,
"text": " to figure out which two overlapping digits are in the image and the the"
},
{
"start": 2265.12,
"end": 2272.56,
"text": " network is very very good at doing this the capsule network that is and better"
},
{
"start": 2272.56,
"end": 2276.96,
"text": " than the the baselines because the capsule network simply encodes the"
},
{
"start": 2276.96,
"end": 2282.92,
"text": " presence and properties of a particular instance in the image if you simply take"
},
{
"start": 2282.92,
"end": 2288.7999999999997,
"text": " the top two length capsules and then reconstruct those independently then"
},
{
"start": 2288.7999999999997,
"end": 2296.6,
"text": " you're you can you can you can basically segment the image and you see this here"
},
{
"start": 2296.6,
"end": 2302.12,
"text": " so the different colorations come from two different reconstructions of the"
},
{
"start": 2302.12,
"end": 2306.7999999999997,
"text": " image from two different capsules so green is from one capsule and red from"
},
{
"start": 2306.7999999999997,
"end": 2311,
"text": " the other capsule so the network correctly identifies that it's a 6 and"
},
{
"start": 2311,
"end": 2316.04,
"text": " the zero right and it also correctly identifies not only which pixels belong"
},
{
"start": 2316.04,
"end": 2321.24,
"text": " to the 6 and which belong to 0 but also pixels that belong to both so that's not"
},
{
"start": 2321.24,
"end": 2325.2799999999997,
"text": " a not a problem if you use capsule networks as they are"
},
{
"start": 2325.2799999999997,
"end": 2330.2,
"text": " are notable to say here they the way they train is is they train the actual"
},
{
"start": 2330.2,
"end": 2336.2799999999997,
"text": " reconstruction by only reconstructing one at a time so the kind of the premise"
},
{
"start": 2336.2799999999997,
"end": 2340.12,
"text": " of the data set is that you actually have access to the underlying individual"
},
{
"start": 2340.12,
"end": 2345.9199999999996,
"text": " digits while training so like the images of the individual digits you don't"
},
{
"start": 2345.9199999999996,
"end": 2352.8799999999997,
"text": " only have this label here but that's a detail here are some kind of failure"
},
{
"start": 2352.8799999999997,
"end": 2359.68,
"text": " cases where it it misclassified or you miss specify the capsules and it's kind"
},
{
"start": 2359.68,
"end": 2367.8799999999997,
"text": " of unable use here you see to to assign the digits of the misclassified or the"
},
{
"start": 2367.8799999999997,
"end": 2372.8799999999997,
"text": " pixels of the misclassified thing it's quite interesting to look at the failure"
},
{
"start": 2372.8799999999997,
"end": 2378.3999999999996,
"text": " cases but I find it more interesting to look actually the success cases and the"
},
{
"start": 2378.3999999999996,
"end": 2384.8199999999997,
"text": " kind of ease at which the at which the capsule networks can do this simply by"
},
{
"start": 2384.82,
"end": 2392.04,
"text": " how they're structured alright so then lastly they also experiment on C for 10"
},
{
"start": 2392.04,
"end": 2397.4,
"text": " and interestingly the C for 10 experiments show that the capsule"
},
{
"start": 2397.4,
"end": 2404,
"text": " networks don't perform as well there and as you know C for 10 is a data set that"
},
{
"start": 2404,
"end": 2407.44,
"text": " is about the same size as MNIST but it's first of all color and second of all is"
},
{
"start": 2407.44,
"end": 2413.32,
"text": " natural images and so they have quite a bit of clutter it's not black and white"
},
{
"start": 2413.32,
"end": 2418.8,
"text": " black background white digits it's actually there's a sky like on an"
},
{
"start": 2418.8,
"end": 2425.2400000000002,
"text": " image there's lots of things going on and right there's my tree and there's"
},
{
"start": 2425.2400000000002,
"end": 2429.76,
"text": " stuff here and there's stuff here and the the capsule networks they like to"
},
{
"start": 2429.76,
"end": 2434.96,
"text": " account for things in the image so they like to have a capsule corresponding to"
},
{
"start": 2434.96,
"end": 2438.84,
"text": " everything that's going on here and here and here and here and here if the whole"
},
{
"start": 2438.84,
"end": 2442.52,
"text": " background is black that is not a problem you can account for simply the"
},
{
"start": 2442.52,
"end": 2447,
"text": " background but if there's lots of things going on then these capsule networks"
},
{
"start": 2447,
"end": 2455,
"text": " get they get they get a bit over explanatory they want to explain"
},
{
"start": 2455,
"end": 2459.6,
"text": " everything and that degrades the performance now this paper basically"
},
{
"start": 2459.6,
"end": 2465.12,
"text": " says yeah you can have a something like a none of the above category and they"
},
{
"start": 2465.12,
"end": 2473.92,
"text": " found that it helped to introduce that in my opinion that it I think the the"
},
{
"start": 2473.92,
"end": 2478.88,
"text": " the solution will be more towards introduction of a better loss function"
},
{
"start": 2478.88,
"end": 2486.24,
"text": " for this because like such that you don't need kind of to explain the entire"
},
{
"start": 2486.24,
"end": 2490.8199999999997,
"text": " thing rather than here we'll hear what you do is you simply explain it by"
},
{
"start": 2490.8199999999997,
"end": 2494.4,
"text": " saying it's none of the above but it's incredibly hard to balance that my"
},
{
"start": 2494.4,
"end": 2504.48,
"text": " opinion yeah all right so that is basically the end of this they say they"
},
{
"start": 2504.48,
"end": 2510.32,
"text": " have a discussion here where they compare capsules against other related"
},
{
"start": 2510.32,
"end": 2519.84,
"text": " work but I hope that you kind of got an overview of how this works now and as"
},
{
"start": 2519.84,
"end": 2525.48,
"text": " much as possible and with that that was it for me and thanks for watching bye"
},
{
"start": 2525.48,
"end": 2551.48,
"text": " bye"
}
] |
-MCYbmU9kfg | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | RoBERTa: A Robustly Optimized BERT Pretraining Approach | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"nlp",
"natural language processing",
"machine translation",
"arxiv",
"google",
"attention mechanism",
"attention",
"transformer",
"tensor2tensor",
"rnn",
"recurrent",
"seq2seq",
"bert",
"unsupervised",
"squad",
"wordpiece",
"embeddings",
"language",
"language modeling",
"attention layers",
"bidirectional",
"elmo",
"word vectors",
"pretrained",
"fine tuning"
] | This paper shows that the original BERT model, if trained correctly, can outperform all of the improvements that have been proposed lately, raising questions about the necessity and reasoning behind these.
Abstract:
Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.
Authors: Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov
https://arxiv.org/abs/1907.11692
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Minds: https://www.minds.com/ykilcher
BitChute: https://www.bitchute.com/channel/10a5ui845DOJ/ | Hello everyone, today we're looking at Roberta, a robustly optimized BERT pre-training approach by Yin-Han Liu at AL, mainly of Facebook research. So this paper is a pretty short, pretty simple paper and the main premise is we've seen a number of improvements over the initial BERT paper where different pre-training of the transformer architecture or extensions of the architecture have been shown to have better performance than the original BERT model. And this paper basically says if you get the design choices right, then BERT is able to basically be on par or exceed all of these other methods so far. So they're basically exploring design choices in the pre-training and training of BERT. Alright, so if you don't know what BERT is, by the way, I have made a video about BERT, I've also made a video about transformers. In very quick terms, BERT is a language neural network architecture that takes as input text such as this kind of thing you see here, text such as that, and it will kind of encode it out and it can do various things, for example, classify it into certain categories or kind of segment it, extract answers from questions and so on. The whole thing is pre-trained with what's called a masked language model objective where you don't need labels to train it. So in a masked language model objective, you basically mask out certain words during training and then you ask BERT to reconstruct these words from the surrounding information. And that kind of has given some improvements in the original BERT paper, but subsequent papers have claimed that you can improve even more by using different pre-training objectives and so on such as Excel, NET. But here, these researchers basically explore different things. So they use a regular BERT architecture, that's what they describe here, so they use both the BERT base, the 12-layer, as well as the 24-layer BERT that has originally been described. They use masked language modeling as a pre-training objective and they explore the necessity of this next sentence prediction loss that has been part of BERT. So along with the masked sentence modeling, BERT has also had an objective where if you input a piece of, actually you input two pieces of text, two sentences such as this, these are two sentences, and BERT has to decide if the second sentence follows the first sentence in the corpus or in 50% of the cases, the second sentence is sampled from a different document. This kind of is, so the original paper argued this is necessary to incorporate long-distance relationships between text. Yeah, here the NSP objective was designed to improve performance on downstream tasks such as natural language inference. And this paper kind of explores the necessity of that loss. In terms of optimization, there is of course kind of a pre-training scheme and then a training scheme using Adam here with certain parameters and also this paper explores the use of these parameters. Lastly you have data and of course these models sometimes they're trained on different data and that's why comparing them makes it a bit harder to compare them because the pre-training is done on differently sized and differently structured data. This paper also tries to investigate the influence of the training data and especially what happens if we keep the training data constant. So all right, so they implement BERT, they re-implement BERT and then they fix some hyperparameters while they tune others and first of all the data set. 
So they use different data sets. The original BERT has been trained on this Book Corpus and Wikipedia, English Wikipedia data set which is 16 gigabytes large. Now this paper here collects a, what's this CC News data set which is the subset of the Common Crawl News data set which is all in. So the subset is the English portion and that's 76 gigabytes which is on par with for example what GPT-2 used I believe. So this is a very large training set and kind of comparing this original data to the large corpus, kind of what influence that is should make very clear what the influence of more training of more pre-training data is. They also have a number of other corpora open web text as well as here I believe there's one more stories, yes. So these are also pretty sizable but these are like, yeah these are like, have very specific schemas to them. Then the evaluation here happens on several different kind of downstream tasks. So the idea is you first you pre-train this BERT model on with the masked language modeling and so on and then you have this GLU task which is actually a collection of nine tasks and you have some other tasks such as SQUAD which is a question answering task and here RACE I don't even know what that is in particular but suffice to say these are kind of downstream NLP tasks. The paper isn't about these downstream tasks but it's just a way to measure how well your pre-training worked if then you can fine tune on such a task and you get a good performance. But what the tasks are in particular isn't too important. Alright so here we get into the meat of the paper. First they decide on what they call static versus dynamic masking. So in the original BERT paper whenever they do masked language modeling they take a piece of text and they basically replicate it a bunch of times because they want to iterate through training data a bunch of times and then in each iteration they mask out different tokens. They compare this to what's called dynamic masking. So this is static masking. Dynamic masking would be where you basically on the fly generate your mask. You don't pre-compute it and save it you on the fly generate it. This allows you to go through kind of more or less of the data as you want and when you encounter the same sample twice even though you replicate it in the original BERT model you could still encounter it twice if you train for longer than the number of replications. Then you basically see the exact same mask again and the dynamic masking is actually much more useful. It's much more ad hoc. Each time you see a sample you generate the mask on the fly. So they compare this here and they see that there is a marginal improvement so here higher is better marginal improvement in two tasks and a less marginal decrease in performance in one task. So they decide that this dynamic masking is of use. Second thing they investigate is the kind of input format and this next sentence prediction. So as I already said the original BERT training objective always gets two sentences next to each other and has to decide if the second one follows from the first one. Actually it doesn't it observes two concatenated document segments which are either sampled contiguously from the same document or from distinct documents and this is half and half. So in addition to the masked language modeling the model is trained to predict whether the observed document segments come from the same or distinct document via an auxiliary next sentence prediction loss. 
They investigate different ways of including or excluding this loss. So first is what they define if here if it's plus NSP that means that this particular thing includes the next sentence or next segment prediction loss. So they have segment pair plus NSP which means that each input has a pair of segments and these segments now the difference the distinction between a segment and a sentence is important where the sentence is really a natural sentence a segment can actually be multiple natural sentences which is what the original BERT does. So as long as the combined length is less than 512 tokens there can also be multiple sentences but there's clearly two segments and you have to decide if they follow after each other or not. The second thing they try is the same thing so the next segment prediction but now it's just two sentences it's just natural sentences so it must be one sentence a period and then the next sentence a period and you have to distinguish these two if they follow or not. Then they investigate full sentences which is they leave away this next segment prediction loss and they simply fill up the 512 tokens with text from the corpus. So each input is packed with full sentences sampled continuously from one or more documents and the one or more document means if you so if you sample text right you sample here text you put all of this in the thing and you are at the end of a document you simply continue with the next one and go on until you have the 512 tokens. So you basically fill fill fill until you have 512 tokens and that's this variant here. And then in the last variant you do the same thing this called dock sentences but you basically you stop at the end. So even so you put all of this in your state and if you here you stop and then you have to be content by simply padding the rest of the 512 tokens or something like this so you don't have as much data but the all the text that you have in one sample is actually continuous text from the same document. So they pit these four things against each other. This is this table here and as you can see here the best thing is this dock sentences thing so on these things followed by the full sentences encoding. So there's some some ambiguities here but in general you can kind of rank them as best second best and then here third best and fourth best and they conclude that this next segment or next sentence prediction loss here is more hurtful than helpful in the ways we see here and they say even though this is most most effective they in their case they'd rather go with this one because it's well I guess easier to implement you get more data through the model in the same time and the performance decrease isn't that much. So but it's pretty interesting to see that this next next segment next sentence prediction isn't super super helpful in actuality. Here so removing the NSP loss matches or slightly improves the downstream task performance. This is yeah in contrast to what the original BERT authors found but you have to keep in mind this is also on hasn't a bunch of other changes in. 
Then next thing they investigate batch size so batch size sorry batch size pretty seems to be pretty interesting for these large models in that they love large batch sizes and they actually explore batch sizes 512 here as a smallest one and they go up to 8000 so this they do this actually in a in a data parallel way where they have many many machines with many GPUs and they parallelize the data and then they accumulate the gradient of all of these different samples and so they can go up to a batch size of about 8k and they find generally that the 2000 batch size here as you can see helps to improve the so perplexity lower is better and the other numbers higher is better helps to to improve the performances if you control the control for data set size so the number of times you go through the data set is the same but if you go with a larger batch size that seems to help up to a point here the 2000 seems to be the best they found so again marginal improvement you can make by training with larger batch sizes and then this the last thing they've looked at is actually is text encoding so how do you encode text and the the pit here is basically between byte pair encoding or word piece encoding to that to to decide how large your vocabulary is basically and as I understand it they didn't find a much of a difference between the different implementations of the text encoding so they decide they go with they decide to go with one I don't even remember which one I think they go decide to go with byte pair encoding instead of word pieces all right so they combine all of this into Roberta which is a robustly optimized Bert approach and they say Roberta is trained with dynamic masking so what they showed first full sentence without the next segment prediction loss large mini batches a larger byte level byte pair encoding as well as of course their collection of training data and then here they also investigate how long to pre train so if you look at the original Bert models or the XL net models and then compare it to Roberta so Roberta this is the original data and they already beat Bert yet they do not they do not yet beat Excel net with that so if they add data they get even better actually on par mostly with the with Excel net if they pre train longer they get even better and if they want to say pre train even longer right so that here's the the number of steps if your number of steps then match the number of steps that the Excel net does with the same additional data then or with their additional data then you outperform Excel net as well so this this kind of just an an overview of this and they evaluate on other downstream tasks and they basically show that in most of them they can reach state-of-the-art performance or exceed it with their approach and in conclusion they basically say well this only shows that kind of the the gains that these other models make and the reasons why they make gains may be questionable if you simply pre train Bert in a better way you can reach the same performances so I think the end is not reached yet most of all they publish their code their data I believe I have not looked into this but definitely check out their repository where this is implemented seems pretty easy seems pretty straightforward and that was it for me bye bye | [
{
"start": 0,
"end": 6.84,
"text": " Hello everyone, today we're looking at Roberta, a robustly optimized BERT pre-training approach"
},
{
"start": 6.84,
"end": 11.96,
"text": " by Yin-Han Liu at AL, mainly of Facebook research."
},
{
"start": 11.96,
"end": 18.84,
"text": " So this paper is a pretty short, pretty simple paper and the main premise is we've seen a"
},
{
"start": 18.84,
"end": 28.44,
"text": " number of improvements over the initial BERT paper where different pre-training of the"
},
{
"start": 28.44,
"end": 35.92,
"text": " transformer architecture or extensions of the architecture have been shown to have better"
},
{
"start": 35.92,
"end": 38.8,
"text": " performance than the original BERT model."
},
{
"start": 38.8,
"end": 48.56,
"text": " And this paper basically says if you get the design choices right, then BERT is able to"
},
{
"start": 48.56,
"end": 53.28,
"text": " basically be on par or exceed all of these other methods so far."
},
{
"start": 53.28,
"end": 60.28,
"text": " So they're basically exploring design choices in the pre-training and training of BERT."
},
{
"start": 60.28,
"end": 67.84,
"text": " Alright, so if you don't know what BERT is, by the way, I have made a video about BERT,"
},
{
"start": 67.84,
"end": 72.08,
"text": " I've also made a video about transformers."
},
{
"start": 72.08,
"end": 81.44,
"text": " In very quick terms, BERT is a language neural network architecture that takes as input text"
},
{
"start": 81.44,
"end": 90.4,
"text": " such as this kind of thing you see here, text such as that, and it will kind of encode it"
},
{
"start": 90.4,
"end": 99.12,
"text": " out and it can do various things, for example, classify it into certain categories or kind"
},
{
"start": 99.12,
"end": 106.03999999999999,
"text": " of segment it, extract answers from questions and so on."
},
{
"start": 106.04,
"end": 111.92,
"text": " The whole thing is pre-trained with what's called a masked language model objective where"
},
{
"start": 111.92,
"end": 113.52000000000001,
"text": " you don't need labels to train it."
},
{
"start": 113.52000000000001,
"end": 118.96000000000001,
"text": " So in a masked language model objective, you basically mask out certain words during training"
},
{
"start": 118.96000000000001,
"end": 126.32000000000001,
"text": " and then you ask BERT to reconstruct these words from the surrounding information."
},
{
"start": 126.32000000000001,
"end": 133.56,
"text": " And that kind of has given some improvements in the original BERT paper, but subsequent"
},
{
"start": 133.56,
"end": 138.88,
"text": " papers have claimed that you can improve even more by using different pre-training objectives"
},
{
"start": 138.88,
"end": 142.6,
"text": " and so on such as Excel, NET."
},
{
"start": 142.6,
"end": 150.52,
"text": " But here, these researchers basically explore different things."
},
{
"start": 150.52,
"end": 156.48000000000002,
"text": " So they use a regular BERT architecture, that's what they describe here, so they use both"
},
{
"start": 156.48,
"end": 167.07999999999998,
"text": " the BERT base, the 12-layer, as well as the 24-layer BERT that has originally been described."
},
{
"start": 167.07999999999998,
"end": 176.83999999999997,
"text": " They use masked language modeling as a pre-training objective and they explore the necessity of"
},
{
"start": 176.83999999999997,
"end": 180.79999999999998,
"text": " this next sentence prediction loss that has been part of BERT."
},
{
"start": 180.8,
"end": 187.36,
"text": " So along with the masked sentence modeling, BERT has also had an objective where if you"
},
{
"start": 187.36,
"end": 194.10000000000002,
"text": " input a piece of, actually you input two pieces of text, two sentences such as this, these"
},
{
"start": 194.10000000000002,
"end": 199.92000000000002,
"text": " are two sentences, and BERT has to decide if the second sentence follows the first sentence"
},
{
"start": 199.92000000000002,
"end": 205.04000000000002,
"text": " in the corpus or in 50% of the cases, the second sentence is sampled from a different"
},
{
"start": 205.04000000000002,
"end": 206.12,
"text": " document."
},
{
"start": 206.12,
"end": 212.76,
"text": " This kind of is, so the original paper argued this is necessary to incorporate long-distance"
},
{
"start": 212.76,
"end": 215.8,
"text": " relationships between text."
},
{
"start": 215.8,
"end": 222.6,
"text": " Yeah, here the NSP objective was designed to improve performance on downstream tasks"
},
{
"start": 222.6,
"end": 227.36,
"text": " such as natural language inference."
},
{
"start": 227.36,
"end": 231.24,
"text": " And this paper kind of explores the necessity of that loss."
},
{
"start": 231.24,
"end": 237.44,
"text": " In terms of optimization, there is of course kind of a pre-training scheme and then a training"
},
{
"start": 237.44,
"end": 245.32000000000002,
"text": " scheme using Adam here with certain parameters and also this paper explores the use of these"
},
{
"start": 245.32000000000002,
"end": 247.28,
"text": " parameters."
},
{
"start": 247.28,
"end": 254.56,
"text": " Lastly you have data and of course these models sometimes they're trained on different data"
},
{
"start": 254.56,
"end": 259.76,
"text": " and that's why comparing them makes it a bit harder to compare them because the pre-training"
},
{
"start": 259.76,
"end": 265.64,
"text": " is done on differently sized and differently structured data."
},
{
"start": 265.64,
"end": 271.4,
"text": " This paper also tries to investigate the influence of the training data and especially what happens"
},
{
"start": 271.4,
"end": 275.28,
"text": " if we keep the training data constant."
},
{
"start": 275.28,
"end": 287.8,
"text": " So all right, so they implement BERT, they re-implement BERT and then they fix some hyperparameters"
},
{
"start": 287.8,
"end": 291.88,
"text": " while they tune others and first of all the data set."
},
{
"start": 291.88,
"end": 295.28000000000003,
"text": " So they use different data sets."
},
{
"start": 295.28000000000003,
"end": 301.44,
"text": " The original BERT has been trained on this Book Corpus and Wikipedia, English Wikipedia"
},
{
"start": 301.44,
"end": 304.52,
"text": " data set which is 16 gigabytes large."
},
{
"start": 304.52,
"end": 311.92,
"text": " Now this paper here collects a, what's this CC News data set which is the subset of the"
},
{
"start": 311.92,
"end": 316.36,
"text": " Common Crawl News data set which is all in."
},
{
"start": 316.36,
"end": 326.2,
"text": " So the subset is the English portion and that's 76 gigabytes which is on par with for example"
},
{
"start": 326.2,
"end": 330.16,
"text": " what GPT-2 used I believe."
},
{
"start": 330.16,
"end": 338.8,
"text": " So this is a very large training set and kind of comparing this original data to the large"
},
{
"start": 338.8,
"end": 344.40000000000003,
"text": " corpus, kind of what influence that is should make very clear what the influence of more"
},
{
"start": 344.4,
"end": 347.64,
"text": " training of more pre-training data is."
},
{
"start": 347.64,
"end": 356.03999999999996,
"text": " They also have a number of other corpora open web text as well as here I believe there's"
},
{
"start": 356.03999999999996,
"end": 358.12,
"text": " one more stories, yes."
},
{
"start": 358.12,
"end": 366,
"text": " So these are also pretty sizable but these are like, yeah these are like, have very specific"
},
{
"start": 366,
"end": 369.79999999999995,
"text": " schemas to them."
},
{
"start": 369.8,
"end": 377.28000000000003,
"text": " Then the evaluation here happens on several different kind of downstream tasks."
},
{
"start": 377.28000000000003,
"end": 383.6,
"text": " So the idea is you first you pre-train this BERT model on with the masked language modeling"
},
{
"start": 383.6,
"end": 392.64,
"text": " and so on and then you have this GLU task which is actually a collection of nine tasks"
},
{
"start": 392.64,
"end": 402.24,
"text": " and you have some other tasks such as SQUAD which is a question answering task and here"
},
{
"start": 402.24,
"end": 408.4,
"text": " RACE I don't even know what that is in particular but suffice to say these are kind of downstream"
},
{
"start": 408.4,
"end": 410.08,
"text": " NLP tasks."
},
{
"start": 410.08,
"end": 417.47999999999996,
"text": " The paper isn't about these downstream tasks but it's just a way to measure how well your"
},
{
"start": 417.48,
"end": 425,
"text": " pre-training worked if then you can fine tune on such a task and you get a good performance."
},
{
"start": 425,
"end": 429.72,
"text": " But what the tasks are in particular isn't too important."
},
{
"start": 429.72,
"end": 433.88,
"text": " Alright so here we get into the meat of the paper."
},
{
"start": 433.88,
"end": 440.16,
"text": " First they decide on what they call static versus dynamic masking."
},
{
"start": 440.16,
"end": 446.16,
"text": " So in the original BERT paper whenever they do masked language modeling they take a piece"
},
{
"start": 446.16,
"end": 451.40000000000003,
"text": " of text and they basically replicate it a bunch of times because they want to iterate"
},
{
"start": 451.40000000000003,
"end": 457.6,
"text": " through training data a bunch of times and then in each iteration they mask out different"
},
{
"start": 457.6,
"end": 461.24,
"text": " tokens."
},
{
"start": 461.24,
"end": 468.40000000000003,
"text": " They compare this to what's called dynamic masking."
},
{
"start": 468.40000000000003,
"end": 471.28000000000003,
"text": " So this is static masking."
},
{
"start": 471.28,
"end": 480.96,
"text": " Dynamic masking would be where you basically on the fly generate your mask."
},
{
"start": 480.96,
"end": 484.41999999999996,
"text": " You don't pre-compute it and save it you on the fly generate it."
},
{
"start": 484.41999999999996,
"end": 490.91999999999996,
"text": " This allows you to go through kind of more or less of the data as you want and when you"
},
{
"start": 490.91999999999996,
"end": 498.67999999999995,
"text": " encounter the same sample twice even though you replicate it in the original BERT model"
},
{
"start": 498.68,
"end": 503.56,
"text": " you could still encounter it twice if you train for longer than the number of replications."
},
{
"start": 503.56,
"end": 511.08,
"text": " Then you basically see the exact same mask again and the dynamic masking is actually"
},
{
"start": 511.08,
"end": 513.2,
"text": " much more useful."
},
{
"start": 513.2,
"end": 514.32,
"text": " It's much more ad hoc."
},
{
"start": 514.32,
"end": 517.62,
"text": " Each time you see a sample you generate the mask on the fly."
},
{
"start": 517.62,
"end": 522.24,
"text": " So they compare this here and they see that there is a marginal improvement so here higher"
},
{
"start": 522.24,
"end": 533.04,
"text": " is better marginal improvement in two tasks and a less marginal decrease in performance"
},
{
"start": 533.04,
"end": 534.04,
"text": " in one task."
},
{
"start": 534.04,
"end": 542.94,
"text": " So they decide that this dynamic masking is of use."
},
{
"start": 542.94,
"end": 549.74,
"text": " Second thing they investigate is the kind of input format and this next sentence prediction."
},
{
"start": 549.74,
"end": 555.92,
"text": " So as I already said the original BERT training objective always gets two sentences next to"
},
{
"start": 555.92,
"end": 561.86,
"text": " each other and has to decide if the second one follows from the first one."
},
{
"start": 561.86,
"end": 569.16,
"text": " Actually it doesn't it observes two concatenated document segments which are either sampled"
},
{
"start": 569.16,
"end": 577.58,
"text": " contiguously from the same document or from distinct documents and this is half and half."
},
{
"start": 577.58,
"end": 581.62,
"text": " So in addition to the masked language modeling the model is trained to predict whether the"
},
{
"start": 581.62,
"end": 588.9000000000001,
"text": " observed document segments come from the same or distinct document via an auxiliary next"
},
{
"start": 588.9000000000001,
"end": 592.48,
"text": " sentence prediction loss."
},
{
"start": 592.48,
"end": 598.26,
"text": " They investigate different ways of including or excluding this loss."
},
{
"start": 598.26,
"end": 606.08,
"text": " So first is what they define if here if it's plus NSP that means that this particular thing"
},
{
"start": 606.08,
"end": 610.84,
"text": " includes the next sentence or next segment prediction loss."
},
{
"start": 610.84,
"end": 620.72,
"text": " So they have segment pair plus NSP which means that each input has a pair of segments and"
},
{
"start": 620.72,
"end": 628.5200000000001,
"text": " these segments now the difference the distinction between a segment and a sentence is important"
},
{
"start": 628.5200000000001,
"end": 635.36,
"text": " where the sentence is really a natural sentence a segment can actually be multiple natural"
},
{
"start": 635.36,
"end": 641.44,
"text": " sentences which is what the original BERT does."
},
{
"start": 641.44,
"end": 648.6800000000001,
"text": " So as long as the combined length is less than 512 tokens there can also be multiple"
},
{
"start": 648.6800000000001,
"end": 654.5600000000001,
"text": " sentences but there's clearly two segments and you have to decide if they follow after"
},
{
"start": 654.5600000000001,
"end": 656.6800000000001,
"text": " each other or not."
},
{
"start": 656.6800000000001,
"end": 661.96,
"text": " The second thing they try is the same thing so the next segment prediction but now it's"
},
{
"start": 661.96,
"end": 673,
"text": " just two sentences it's just natural sentences so it must be one sentence a period and then"
},
{
"start": 673,
"end": 678.72,
"text": " the next sentence a period and you have to distinguish these two if they follow or not."
},
{
"start": 678.72,
"end": 687,
"text": " Then they investigate full sentences which is they leave away this next segment prediction"
},
{
"start": 687,
"end": 695.04,
"text": " loss and they simply fill up the 512 tokens with text from the corpus."
},
{
"start": 695.04,
"end": 700.68,
"text": " So each input is packed with full sentences sampled continuously from one or more documents"
},
{
"start": 700.68,
"end": 706.48,
"text": " and the one or more document means if you so if you sample text right you sample here"
},
{
"start": 706.48,
"end": 711.82,
"text": " text you put all of this in the thing and you are at the end of a document you simply"
},
{
"start": 711.82,
"end": 717.4000000000001,
"text": " continue with the next one and go on until you have the 512 tokens."
},
{
"start": 717.4000000000001,
"end": 725.2800000000001,
"text": " So you basically fill fill fill until you have 512 tokens and that's this variant here."
},
{
"start": 725.2800000000001,
"end": 729.96,
"text": " And then in the last variant you do the same thing this called dock sentences but you basically"
},
{
"start": 729.96,
"end": 731.5200000000001,
"text": " you stop at the end."
},
{
"start": 731.5200000000001,
"end": 738.44,
"text": " So even so you put all of this in your state and if you here you stop and then you have"
},
{
"start": 738.44,
"end": 745.5200000000001,
"text": " to be content by simply padding the rest of the 512 tokens or something like this so you"
},
{
"start": 745.5200000000001,
"end": 752.6800000000001,
"text": " don't have as much data but the all the text that you have in one sample is actually continuous"
},
{
"start": 752.6800000000001,
"end": 755.1800000000001,
"text": " text from the same document."
},
{
"start": 755.1800000000001,
"end": 760.1,
"text": " So they pit these four things against each other."
},
{
"start": 760.1,
"end": 776.8000000000001,
"text": " This is this table here and as you can see here the best thing is this dock sentences"
},
{
"start": 776.8000000000001,
"end": 785.52,
"text": " thing so on these things followed by the full sentences encoding."
},
{
"start": 785.52,
"end": 794.68,
"text": " So there's some some ambiguities here but in general you can kind of rank them as best"
},
{
"start": 794.68,
"end": 803.92,
"text": " second best and then here third best and fourth best and they conclude that this next segment"
},
{
"start": 803.92,
"end": 812.8,
"text": " or next sentence prediction loss here is more hurtful than helpful in the ways we see here"
},
{
"start": 812.8,
"end": 819.8599999999999,
"text": " and they say even though this is most most effective they in their case they'd rather"
},
{
"start": 819.8599999999999,
"end": 824.28,
"text": " go with this one because it's well I guess easier to implement you get more data through"
},
{
"start": 824.28,
"end": 832,
"text": " the model in the same time and the performance decrease isn't that much."
},
{
"start": 832,
"end": 837.18,
"text": " So but it's pretty interesting to see that this next next segment next sentence prediction"
},
{
"start": 837.18,
"end": 847.0799999999999,
"text": " isn't super super helpful in actuality."
},
{
"start": 847.0799999999999,
"end": 855.56,
"text": " Here so removing the NSP loss matches or slightly improves the downstream task performance."
},
{
"start": 855.56,
"end": 859.68,
"text": " This is yeah in contrast to what the original BERT authors found but you have to keep in"
},
{
"start": 859.68,
"end": 868.04,
"text": " mind this is also on hasn't a bunch of other changes in."
},
{
"start": 868.04,
"end": 875.8,
"text": " Then next thing they investigate batch size so batch size sorry batch size pretty seems"
},
{
"start": 875.8,
"end": 882.4,
"text": " to be pretty interesting for these large models in that they love large batch sizes and they"
},
{
"start": 882.4,
"end": 891.68,
"text": " actually explore batch sizes 512 here as a smallest one and they go up to 8000 so this"
},
{
"start": 891.68,
"end": 895.88,
"text": " they do this actually in a in a data parallel way where they have many many machines with"
},
{
"start": 895.88,
"end": 904.3199999999999,
"text": " many GPUs and they parallelize the data and then they accumulate the gradient of all of"
},
{
"start": 904.3199999999999,
"end": 909.0799999999999,
"text": " these different samples and so they can go up to a batch size of about 8k and they find"
},
{
"start": 909.08,
"end": 916.88,
"text": " generally that the 2000 batch size here as you can see helps to improve the so perplexity"
},
{
"start": 916.88,
"end": 925.2,
"text": " lower is better and the other numbers higher is better helps to to improve the performances"
},
{
"start": 925.2,
"end": 929.5200000000001,
"text": " if you control the control for data set size so the number of times you go through the"
},
{
"start": 929.5200000000001,
"end": 936.44,
"text": " data set is the same but if you go with a larger batch size that seems to help up to"
},
{
"start": 936.44,
"end": 943.6800000000001,
"text": " a point here the 2000 seems to be the best they found so again marginal improvement you"
},
{
"start": 943.6800000000001,
"end": 951,
"text": " can make by training with larger batch sizes and then this the last thing they've looked"
},
{
"start": 951,
"end": 957.32,
"text": " at is actually is text encoding so how do you encode text and the the pit here is basically"
},
{
"start": 957.32,
"end": 968.84,
"text": " between byte pair encoding or word piece encoding to that to to decide how large your vocabulary"
},
{
"start": 968.84,
"end": 975.96,
"text": " is basically and as I understand it they didn't find a much of a difference between the different"
},
{
"start": 975.96,
"end": 984.6800000000001,
"text": " implementations of the text encoding so they decide they go with they decide to go with"
},
{
"start": 984.68,
"end": 991.04,
"text": " one I don't even remember which one I think they go decide to go with byte pair encoding"
},
{
"start": 991.04,
"end": 998.4,
"text": " instead of word pieces all right so they combine all of this into Roberta which is a robustly"
},
{
"start": 998.4,
"end": 1009.12,
"text": " optimized Bert approach and they say Roberta is trained with dynamic masking so what they"
},
{
"start": 1009.12,
"end": 1016.96,
"text": " showed first full sentence without the next segment prediction loss large mini batches"
},
{
"start": 1016.96,
"end": 1024.08,
"text": " a larger byte level byte pair encoding as well as of course their collection of training"
},
{
"start": 1024.08,
"end": 1038.28,
"text": " data and then here they also investigate how long to pre train so if you look at the original"
},
{
"start": 1038.28,
"end": 1045.2,
"text": " Bert models or the XL net models and then compare it to Roberta so Roberta this is the"
},
{
"start": 1045.2,
"end": 1053.3999999999999,
"text": " original data and they already beat Bert yet they do not they do not yet beat Excel net"
},
{
"start": 1053.3999999999999,
"end": 1062.78,
"text": " with that so if they add data they get even better actually on par mostly with the with"
},
{
"start": 1062.78,
"end": 1069.28,
"text": " Excel net if they pre train longer they get even better and if they want to say pre train"
},
{
"start": 1069.28,
"end": 1075.96,
"text": " even longer right so that here's the the number of steps if your number of steps then match"
},
{
"start": 1075.96,
"end": 1085.8799999999999,
"text": " the number of steps that the Excel net does with the same additional data then or with"
},
{
"start": 1085.88,
"end": 1095.64,
"text": " their additional data then you outperform Excel net as well so this this kind of just"
},
{
"start": 1095.64,
"end": 1104.7600000000002,
"text": " an an overview of this and they evaluate on other downstream tasks and they basically"
},
{
"start": 1104.7600000000002,
"end": 1115.8600000000001,
"text": " show that in most of them they can reach state-of-the-art performance or exceed it with their approach"
},
{
"start": 1115.86,
"end": 1123.6,
"text": " and in conclusion they basically say well this only shows that kind of the the gains"
},
{
"start": 1123.6,
"end": 1128.4799999999998,
"text": " that these other models make and the reasons why they make gains may be questionable if"
},
{
"start": 1128.4799999999998,
"end": 1135.1999999999998,
"text": " you simply pre train Bert in a better way you can reach the same performances so I think"
},
{
"start": 1135.1999999999998,
"end": 1142.8,
"text": " the end is not reached yet most of all they publish their code their data I believe I"
},
{
"start": 1142.8,
"end": 1148.8799999999999,
"text": " have not looked into this but definitely check out their repository where this is implemented"
},
{
"start": 1148.88,
"end": 1176.88,
"text": " seems pretty easy seems pretty straightforward and that was it for me bye bye"
}
] |
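(Aside on the large-batch training described in the segments above: going from a batch size of 512 up to roughly 8k by accumulating gradients across data-parallel workers boils down to gradient accumulation. Below is a minimal, hypothetical PyTorch-style sketch; `model`, `loader`, `optimizer` and `accum_steps` are placeholder assumptions, not the actual RoBERTa training code.)

```python
# Sketch of gradient accumulation: emulate a large effective batch (e.g. ~8k
# sequences) by summing gradients over many small micro-batches before taking
# a single optimizer step. Effective batch = micro-batch size * accum_steps
# (times the number of workers if you also do data parallelism).

def train_epoch(model, loader, optimizer, accum_steps=32):
    model.train()
    optimizer.zero_grad()
    for step, (input_ids, labels) in enumerate(loader):
        # Assumes a Hugging Face-style model that returns an object with a .loss
        outputs = model(input_ids, labels=labels)
        # Scale so the accumulated gradient equals the mean over the big batch.
        (outputs.loss / accum_steps).backward()
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```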
AR3W-nfcDe4 | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Auditing Radicalization Pathways on YouTube | [
"Science & Technology"
] | [
"machine learning",
"data science",
"empirical",
"study",
"youtube",
"radicalization",
"alt-right",
"alt-lite",
"idw",
"intellectual dark web",
"alt right",
"alt lite",
"jordan peterson",
"joe rogan",
"pipeline",
"recommended",
"network",
"diffusion",
"social graph",
"infected",
"ideology",
"radical",
"analysis",
"suggested",
"filter bubble",
"fringe"
] | This paper claims that there is a radicalization pipeline on YouTube pushing people towards the Alt-Right, backing up their claims with empirical analysis of channel recommendations and commenting behavior. I suggest that there is a much simpler explanation of this data: A basic diffusion process.
Abstract:
Non-profits and the media claim there is a radicalization pipeline on YouTube. Its content creators would sponsor fringe ideas, and its recommender system would steer users towards edgier content. Yet, the supporting evidence for this claim is mostly anecdotal, and there are no proper measurements of the influence of YouTube's recommender system. In this work, we conduct a large scale audit of user radicalization on YouTube. We analyze 331,849 videos of 360 channels which we broadly classify into: control, the Alt-lite, the Intellectual Dark Web (I.D.W.), and the Alt-right ---channels in the I.D.W. and the Alt-lite would be gateways to fringe far-right ideology, here represented by Alt-right channels. Processing more than 79M comments, we show that the three communities increasingly share the same user base; that users consistently migrate from milder to more extreme content; and that a large percentage of users who consume Alt-right content now consumed Alt-lite and I.D.W. content in the past. We also probe YouTube's recommendation algorithm, looking at more than 2 million recommendations for videos and channels between May and July 2019. We find that Alt-lite content is easily reachable from I.D.W. channels via recommendations and that Alt-right channels may be reached from both I.D.W. and Alt-lite channels. Overall, we paint a comprehensive picture of user radicalization on YouTube and provide methods to transparently audit the platform and its recommender system.
Authors: Manoel Horta Ribeiro, Raphael Ottoni, Robert West, Virgílio A. F. Almeida, Wagner Meira
https://arxiv.org/abs/1908.08313
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Minds: https://www.minds.com/ykilcher
BitChute: https://www.bitchute.com/channel/10a5ui845DOJ/ | Hi there! Today we're going to look at Auditing Radicalization Pathways on YouTube by Manuel Horta-Riberio at AL. So this paper is a bit different than the one we're usually looking at, but since I'm a YouTuber and this is in the kind of a data science realm, I thought it fits neatly. So yeah, we'll have a look. And this is mostly going to be an analysis and my opinion on it, so take that for what it is. This is, in my opinion, a paper where you can see very well what it looks like when you deceive yourself. So when you have a hypothesis of something and then only collect data that matches that, and you don't think of simpler solutions that explain the data, and therefore you don't think of experiments that could differentiate the simple solutions from what you propose. So it's a good example of how you can kind of trick yourself into believing you found something. And this isn't now about YouTube or anything. This happened to me so many times. It always pays off to take a step back and say, is there a simpler explanation for what's happening? And this is what I think is exactly happening here. So I'll present to you their hypothesis and then I'll present to you my kind of what I think is going on and a model that explains the data much much much easier and simpler and actually better. So let's dive in. This paper basically claims the following. So on YouTube there are channels and channels are, you know, independent channels. They make videos and you can actually arrange these channels. So each dot here is a channel. You can arrange these channels in kind of a network. And two channels you can claim they're connected and they can be a connection strength or whatever. For simplicity they can be connected if, for example, their topics are similar, if they reference each other, if they are recommended by YouTube from each other, if they have the same users watching those same channels or the videos of these channels. There are a number of metrics where you could make channels connected but all of them will turn out similar, like will give you the similar structure of channels being connected. Oh that's connected twice. So you can kind of build a graph of how these channels are connected and what you can do then is you can cluster them. You don't have to build a graph to cluster them but you can cluster the channels and what will emerge are parts of the graph that are very well connected. Right here this might be connected with this and with this. Parts of graph that are very well connected and are kind of well connected within and more sparsely connected to others, like also have a larger distance in between them. So if you start out from one channel and you're kind of watching recommended videos and recommended channels and so on, you'll stroll along here, you will get much faster to these things than to the other things. So these are called communities usually in these kind of social network analysis. So on YouTube you know there is a community for makeup, there's a community for sports, within sports there is a community for soccer, there's one for basketball and so on. So these are all these kind of communities that you can discover by clustering. This paper mainly deals with three communities. Namely the first of all is the IDW, which is the intellectual dark web. They discuss this here. 
So the intellectual dark web is they describe as a group of individuals that are in a rolling conversation with each other about topics that are, let's say, usually kind of difficult to talk about, such as gender differences or intelligence research in certain areas or even you know regular politics, but kind of the intellectual dark web are a wide variety of people that basically are conversing with each other about topics. The description is a bit vague but the main aspect is conversation and maybe topics that are kind of on the edge of what's acceptable to talk about. But the opinions range widely on these topics. The second group is the alt-right. And the alt-right here is kind of the, they're defined as ethno-nationalists. For example, here is an example, the fringe ideas such as white ethno-state, white supremacist ideology and so on. So specifically ethno-nationalists, nationalists that I think nations should be organized to along the lines of ethnicity. And the goal of the paper is actually to show that there is a kind of a dangerous pipeline on YouTube that will drive people to the alt-right and drive people into these radical ideas of the alt-right. Kind of in between is the alt-light, which is here defined as civic nationalists, which is simply as I understand it means that people should be organized into nations, not along ethnicity, but just should organize themselves into sovereign communities. And it would be more of your libertarian, classically liberal people, whereas the alt-right would be more of your, let's say, authoritarian right-wing person. So these three communities, they have a fourth community which is what they call a control group. And the control group consists of what they say are kind of mainstream channels on YouTube, simply to differentiate them from these three and two, see what's going on with them and if there is a difference. So this is kind of the setup and as I said the hypothesis is the following. People go on YouTube, so YouTube is here, YouTube, people come on YouTube, they go around, they explore a bit and all of a sudden they find IDW videos. These are recommended by YouTube on a fairly regular basis. That may mean they're interesting, people find it, they find it interesting and so on. And then there from the IDW there are recommendations and links to the alt-light. And the alt-light are still, so as I read this paper there is kind of an undertone, kind of the IDW and the alt-light are still okay. Like they discuss ideas that are sometimes political and so on, but the real worry is the alt-right, the kind of radical right-wing ethnic nationalists. And I mean yes, the formulation I can agree with. And then they claim, so you find IDW, that they have links to the alt-light or links, I mean recommendations and so on. And from the alt-light and to a certain degree also from the IDW you can then find the alt-right. So even though a user that goes on YouTube at first isn't likely to find the alt-right videos because it's fringe, it's extreme and so on, by through the YouTube recommendation algorithm basically by going to the IDW finding this, then from there they'll find the alt-light and from there and from the IDW they will then find the alt-right. So they claim that there's this pathway of radicalization here that kind of pushes people towards the alt-right. And that's their hypothesis. And they claim that they have evidence to support this and I claim that there is a simpler solution, namely... 
So first of all let me state I don't like the alt-right. I think their ideas are despicable. I should go without saying, though I have said it now, so you know, just as a disclaimer I'm not defending anyone here. I'm simply saying this paper has a simpler explanation for their data. Namely, what I think is happening here is YouTube again is channels. Each dot here is a channel. Channels can be clustered as such, right there, as we saw before. I'm just drawing more of them right now. Channels, channels, channels, channels, channels, channels, channels. So what I think is happening is there is a control group, what they call the control group. It's over here, it's large control, right? It's a bunch of channels. Then, which is kind of mainstream media, then over here there is, let's say, alternative media where all of these three groups belong into. So at some point you will have the IDW, then maybe a bit further away from the control group, but very close to the IDW you would have the alt-light, and very close to the two, maybe here you would have the alt-right. So notably, in my model, the IDW and the alt-light are kind of close together. They are in terms of comparative distance. So if you cluster these channels, let's say audience or topics or and so on, it will turn out that all of these three are far, far away from the control group. Those two are very close to each other and then here there is some distance, but how much distance is a question? But of course it's going to be smaller distance than the distance to the control group here. I mean I could draw the alt-right, maybe a more accurate picture would be something like this. So whatever, I mean it doesn't matter the details, but the distance here is smaller than the distance to the control group. In this model a second thing is also important, namely the alt-right, as you can see here, is much much smaller than the IDW and the alt-light. And these again are much smaller than the control group. And this I think accounts for most, so the distance relations between these and the size of the clusters account for most. So with size I mean mainly number of channels and also audience. This accounts for most of the data better than their model. So just keep this in mind. And my model of course doesn't include any kind of pipeline that they suggest. So first of all they go ahead and they say, alright, they collect channels. So they collect data for this and you know we could go over how they collect the data and criticize that and so on. They do human annotation and they start from already published reports and so on, which themselves can be criticized. I'm not gonna go into their data collection methodology. It can have mistakes, but then any collection methodology can have mistakes. What they end up with is a number of channels and here are the top channels from each category. And as you can see alt-right, alt-light, intellectual dark web, and control. So already here you can see pretty clearly the model I have in mind. They acknowledge all of this by the way. Look at the size of the alt-right channels, the biggest ones, compared to the size of the alt-light and the intellectual dark web. They're much much smaller in number of views. And then compare this to the size of the control group. The control group again is again larger than the other two groups. So just keep it in mind. Second thing to keep in mind, look at these channels. Maybe you know some of them. Joe Rogan, Sargon of Akkad, Paul Joseph Watson, Sticks Hexenhammer. 
These are youtubers. These are individuals making YouTube clips, creating content for YouTube, being on this platform. Whereas if you compare it with the control group, what's here? Vox, GQ, Wired, Business Insider. These aren't youtubers. These are websites or traditional media companies or their own kind of blogs and so on that have a YouTube channel where YouTube is one of the outlets of this media company. So I think there's a giant discrepancy here in the control group that can explain also some of this data that you see. So keep that in mind. I think the control group, they say they don't try to capture the user dynamic with the control group, but I think that there's many problems with this control group, including the fact that these are kind of traditional mainstream media that just have YouTube as an outlet. Moreover, a lot of these like Vox or Vice, they are clickbait media and rage bait media that it has worked for a number of years, but the algorithms are becoming more attuned to clickbait and these are crashing fast. Whereas the more youtuber people, they are not susceptible to that much to kind of the abolishment of clickbait. Alright, so this is the data. They have all these channels, they have all these videos and they first of all give some stats on it. Here you see on the bottom is always the year. So they do this over time and you see the active channels which are channels that have uploaded videos in some time. See the control group again is larger but has started to flatten out in the last few years. Whereas these communities, they are relatively flourishing. Another interesting point is that the paper somehow tries to tie this to the election of Donald Trump in 2016. But I think this is just kind of in there to gain relevance. A lot of these kind of trends and so on you'll see already start before that. So the start of the rise here, if you see these bumps here and so on, a lot of them start before 2016. So as we go through this make up your own mind of how much this is actually tied to the election or not. I think it's much more the years when clickbait started to go down as a business model. Never mind though. So the active channels growing, though the control group not growing as much. Videos published, even though the control group isn't growing so much, they still publish the most videos. But you can see generally the site is growing. Generally YouTube is growing. Like counts. And here you see something interesting starting to happen. Namely these communities, especially the alt-light and the intellectual dark web, they're starting to catch up. And this is one of the things that the paper also states is that if you look at for example comments per video, this light and the intellectual dark web outperform the control group vastly. Also if you look at views per video and likes per video, the control group simply don't have an engaged audience. Which I think first of all is because they produce clickbait. Second of all they're just not that interesting. And third of all they're not youtubers. Like this isn't their thing. They're just simply an outlet. But yeah so that's kind of a one, just kind of a bunch of metrics that they show here. The next table is a bit more interesting. In the next table they do a user intersection. So what they do is they collect all these videos and then they collect all the comments of these videos. And the comment of course always comes with a username. You need to be logged into YouTube to make a comment. 
And they see which users comment on multiple videos or on videos of multiple categories. And then they can look at how many users of category A also comment in category B and vice versa. So they have two metrics here. Jaccard similarity, which is, for two communities A and B, the number of users commenting on A and B divided by the number of users commenting on A or B. And the second, the overlap coefficient, is the number of users commenting on A and B divided by the minimum size of A and B. They say that the overlap coefficient is more useful to compare communities of different sizes. So we'll look at that. The top graphs are always the Jaccard similarity and the bottom ones are the overlap coefficient. The first graphs though are number of commenting users per year. And you already see that even though the control group has much more views and probably much more videos, much larger, the comments don't... so again the users of the alt-light and the intellectual dark web are much more engaged. Also comments per user. This is the cumulative distribution function. Most people that comment on control group videos maybe comment once, but these other communities, they comment more. Self-similarity means year after year. So always compared to the year before, how many users are similar. So how well do these communities retain users. And you can already see here the control group is actually very bad at retaining users. It does have this overlap coefficient high, but it has the Jaccard self-similarity low, which basically, if you think of the formula of the Jaccard similarity, means that this number is small and this number is high, which means that A and B are very disjoint, which means that last year's users aren't this year's users basically. So they constantly have to appeal to new users because they're losing old users, because well, I guess they're boring. Whereas the alt-light and intellectual dark web are much better at retaining users. Interestingly the alt-right is not as good at retaining users as the other two. This could also be an effect of size: if your community is smaller, the users might wander away more quickly. But I think this already speaks against the radicalization pipeline. If YouTube was radicalizing people towards the alt-right, I think we would see the alt-right being on top of user retention. Then here they have intersections between communities. So green here is alt-light and IDW, while the blue is alt-right and alt-light and the other blue is alt-right and IDW. So basically the green is alt-light and IDW and the blues are the other two. And we see that the overlap in terms of overlap coefficient is similar. In terms of Jaccard similarity, the alt-light and the IDW are very much more sharing users, which in the picture I painted makes sense if you think my model is valid. My model explains this very well in that these two communities are quite close together and therefore share a similar user base. The alt-right is smaller and a bit further apart, therefore not as similar, though more similar than the control group, which is the last graph. The last graph is, sorry, the last graph is how similar are these communities to the control group, and here we see the IDW and the alt-light kind of similar. The alt-right not as similar, though in the overlap coefficient they're about the same. So the paper here claims, oh, look at the similarity, this is definitely a radicalization.
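(To make the two metrics concrete, here is a minimal Python sketch. The commenter sets are made-up toy data, not the paper's actual user sets; the point is only to show how the two measures behave for communities of very different sizes.)

```python
# Toy illustration of the two user-overlap metrics described above.
# The commenter sets are made up; in the paper they would be the sets of
# users who commented on videos of each community in a given year.

def jaccard_similarity(a: set, b: set) -> float:
    """|A and B| / |A or B|: users commenting on both over users commenting on either."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)


def overlap_coefficient(a: set, b: set) -> float:
    """|A and B| / min(|A|, |B|): less sensitive to a large size difference."""
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))


# Hypothetical commenter sets of very different sizes (purely illustrative).
idw       = {f"user{i}" for i in range(0, 10_000)}
alt_light = {f"user{i}" for i in range(2_000, 12_000)}
alt_right = {f"user{i}" for i in range(9_000, 10_500)}

for name, a, b in [("alt-light vs IDW", alt_light, idw),
                   ("alt-right vs IDW", alt_right, idw),
                   ("alt-right vs alt-light", alt_right, alt_light)]:
    print(f"{name:24s} jaccard={jaccard_similarity(a, b):.3f} "
          f"overlap={overlap_coefficient(a, b):.3f}")
```

Note how the overlap coefficient can stay high, or even hit 1.0, for the small community while the Jaccard similarity stays low, which is presumably why the paper reports both when community sizes differ this much.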
So they don't claim yet this is a radicalization pipeline but they claim that there's a higher similarity. If you actually look at the numbers it's not so I mean here you're around the 50% similarity and here at the end you're also around the 50% similarity with the control group. So this is within these groups and this is here with the control group. Also here if I look at the kind of mean here you're at whatever 20-18% and here you're also you may be a bit lower but you're also going towards this. What it looks to me like rather than there being a radicalization pipeline if you look at the shape of this and kind of where it starts in 2013-2014 it starts to go up here and you look at the shape of this it's simply the same shape delayed and I mean there's no reason why this graph wouldn't go up here wouldn't go up here in the future and reach the exact same numbers as here. It seems that the graph is simply shifted which makes total sense if you think these communities are... I'm gonna draw the same picture here... right IDW, alt light and over here control. If you think they're they're like that if you think simply think well YouTube is growing users are growing users are starting somewhere here and then spreading out pretty much randomly like they're spreading out spreading out spreading out users start here spreading out users start here spreading out here spreading out everywhere users just kind of there's a diffusion process going on not in a particular direction like they claim if there is just a diffusion process going on what would you expect you would expect users that started here to reach the IDW and alt right much sooner than they reach the control group but ultimately as the diffusion continues all users will have commented on most videos if you run YouTube infinitely and these numbers would go that's why the numbers go up right if you just let it go the diffusion process will go along and it simply takes a longer time to go from here all the way over here then it goes then between these communities so to me we're looking at a simple diffusion process here that is shifted in time and that explains very much the discrepancy in number but also the shape of the curve that is exactly the same but shifted their model does not explain the shape of the curve they simply say well here it's 75% and here it's only 50% that means that these communities are kind of shipping users towards each other so I think the explanation is easier then so they claim this does not alone kind of show that there is a pipeline what they now do however will show that basically so they claim this is the experiment that really shows that there is it is pipeline so what they do is they define what they call an infection so what they say is okay we are for example this this row here we're taking users that are alt light users at the beginning in this time so basically they only comment on the only comment on alt light videos during this time right so discard all users that comment on anything else just retain the ones that only comment on alt light videos during this time then we're going to follow them over time and see how many of them have at least one comment in an alt right video so this is only directed from the community over here towards the alt right and then they call a user infected specifically if they comment on one or two alt right videos they're lightly infected if they comment on three to five they're mildly infected and if they comment on more they're severely infected so as you can see users 
starting from the alt light or from the IDW or from both they will become in some will become infected over time namely and I postulate we simply look at the since that the tendencies between the groups are similar we'll simply look at the light infections here so they say okay after you know in 2018 about 8 to 10 percent of the users become infected in these groups you see here here about the same trajectories whereas it so whereas in the control group it's less here though honestly I don't think it's that much less right I think that again I think there's a normal diffusion process here they do this similarly with the with the other ones and to me like to them this makes total sense like oh yeah users that start in these communities they migrate to get infected by the alt right they go towards the alt right because you can find it so easily and to me this simply looks like a normal diffusion process here's what you need if you want and by the way the control group isn't that much different here's what you need if you want to show that there is a pipeline in this direction you need this exact same graph in the other direction and you need to show that people that started in the alt right do not go back in the same fashion towards the alt light or the IDW and they do especially not go to the control group you need to show this basically between each pair of these and you need to show that the direction of infection is only in a single direction namely towards radicalization otherwise you're just looking at a normal diffusion process between differently distance and differently sized groups so they go on to analyze and they say well how much basically how much of the alt right audience makes is made up by people that have been radicalized that have been infected so that this infection is kind of their proxy for what they call a radicalization and if you become infected then basically you're not part of the alt right or something even though you might have you might have commented something negative actually the might engage with their ideas and call them their crap but in any case you're now infected and they ask themselves how much of the alt right audience has are of these infected so basically how much of the alt right audience have our people that in the past have been not alt writers have been exclusively commenting on alt light or IDW videos and they find that for example for alt light 23% of the alt right audience are former alt lighters and have our former alt lighters that have now made one comment on an alt right video so that their claim is well there is a sizable portion of the alt right that at the beginning wasn't alt right that basically became infected and therefore that that kind of shows this radicalization pipeline that the alt right audience is mainly consistent of people that have not been alt right previously but have become so and to me again this is simply a function of the size of these communities right if if you think of this again and you start randomly somewhere on YouTube let's let's make this assumption people start randomly somewhere on YouTube what's the probability that you're going to start in the alt right very small right so what's the the kind of natural let's say the natural size of alt right before users go and migrate is very tiny right so not many users are going to be what you would consult originally alt writers whatever their their first comment basically what this thing measures is where is your first comment and are any of your subsequent 
comments alt right if your first comment is not in the alt right then you become a potential candidate for infection and if any comment is on the alt right then you're infected so what's the probability that your first comment is not alt right well you're gonna land somewhere on YouTube YouTube is huge the alt right is very small thus that probability is extremely small and then you let you simply let people diffuse let them diffuse let them diffuse some will end up in the alt right and since the alt right is so small to begin with actually most people that will comment at some point on an alt right video will will have their first comment from somewhere outside the alt right videos simply simply a numbers game right simply the alt right is so small that this is virtually guaranteed so what they find here is again simply an evidence of a regular diffusion process between these differently sized groups and the claims they make from this are just over the top again that their comparison to the control group if you if you look at the numbers they're actually not that different from this from the IDW numbers there they're different than the alt light here substantially different but again that simply a function of distance in my opinion in these in these clusters lastly they look at the YouTube recommender system and they say okay if we look at these videos and the channels and we look at on these videos what other videos are recommended and what other channels are recommended so if you have like a video on YouTube you have the video here and here you have like recommended videos similarly when you have a channel right you have a channel this is a person yeah I'm this person the person can have first of all they can have featured channels where they say look these are channels that I find cool I go check them out and then they also have recommended channels that are kind of given by YouTube as recommendations so here YouTube controls basically everything here the creator controls part and the YouTube controls dollar part so they look to both first of all the channels channels recommend recommendations so these are both sections here and they look at if you start on a alt light video how likely if you do a random walk are you to end up in the alt right or in the intellectual dark web or control group after one step two steps three steps four steps so that the big line is the random Walker and actually the dashed line is the distance if you were to target Lee go into the direction of such a video like what's the minimum number of clicks you need and you can see here the the if you start at alt light after one or two steps the random Walker is kind of a 2% chance to end up at an alt right video and about a 25% chance here of ending up in a intellectual dark web video and about a 50% chance of ending up again at an alt light video the scales here really different so it's very difficult to judge how it compares to the control group which is kind of at zero here but to me again this is a reflection of the size of these communities and I think it's a bit you know we are to to then claim oh these are reachable basically so 2% chance of landing on an alt right video um I'm not sure but again if you compare if you start from the control group there's almost no chance you'll end up in a alt right video so I guess the comparison is is okay if you compare to control group if you start look at videos however again if you start at alt light after one step you are approximately 25% likely to be in an IDW 
video you're a bit over 50% likely to stay in an alt light video however compare this to channels you're almost super unlikely to end at a control channel if you start at an alt light channel but in video recommendations you're actually also about 25% chance of ending in a control group video where as look at the scale here you're only about 0.03% likely to end up in an alt right video and also here so here even look at this if you start an IDW video the chance that you're going to end up in a control again super high much higher than an alt light video whereas with the channel recommendations this was completely turned around so we see the alt right completely loses when it comes to video recommendations and mainly the control group gains compared to the channel recommendations I think here's what I think I think this is due to this section here this section here where the creators have power and also this section here YouTube recommending I think they're putting a lot of work into the video recommendations I think they're putting not that much work into these recommendations and by work I mean actually manually intervening and deciding what's kind of good videos and bad videos and the the control group they're probably there's probably big advertisement money in that so they might be pushed up a bit in the video recommendations since most people are going by video recommendations I've actually never used the channel recommendations feature and the channel recommendations first of all the creator has power over part of it and then also YouTube may not put as much work into these related channels so both have in the effect that I would say that that the data here first of all it doesn't doesn't convince me of a radicalization pipeline it simply convinces me that some communities are larger smaller and closer together but second of all that this down here if you forget about the alt-right for a moment yeah they're irrelevant this down here actually compared to up here shows maybe a bit of evidence of an algorithmic promotion of these mainstream media channels compared to how the communities are actually clustering which I think this this up here might be a much more accurate picture so you know that it's just kind of a funky thing in the data yeah that alt-right is irrelevant to this part because they're they're just too small so this is this is kind of my take on this they didn't give recommendations and is this a pipeline and so on and I don't think so you've now heard my idea and you've heard their idea decide for yourself but I think it's a good example of how if you are convinced of an underlying mechanism you're going to collect evidence in support of that mechanism and if you catch yourself doing that really really think isn't there an easier explanation for this all right that was it for me have fun | [
{
"start": 0,
"end": 5.44,
"text": " Hi there! Today we're going to look at Auditing Radicalization Pathways on"
},
{
"start": 5.44,
"end": 12.96,
"text": " YouTube by Manuel Horta-Riberio at AL. So this paper is a bit different than the"
},
{
"start": 12.96,
"end": 19.52,
"text": " one we're usually looking at, but since I'm a YouTuber and this is in the kind"
},
{
"start": 19.52,
"end": 26.52,
"text": " of a data science realm, I thought it fits neatly. So yeah, we'll have a look."
},
{
"start": 26.52,
"end": 34.04,
"text": " And this is mostly going to be an analysis and my opinion on it, so take"
},
{
"start": 34.04,
"end": 42.4,
"text": " that for what it is. This is, in my opinion, a paper where you can see very"
},
{
"start": 42.4,
"end": 50.96,
"text": " well what it looks like when you deceive yourself. So when you have a"
},
{
"start": 50.96,
"end": 57.92,
"text": " hypothesis of something and then only collect data that matches that, and you"
},
{
"start": 57.92,
"end": 64.08,
"text": " don't think of simpler solutions that explain the data, and"
},
{
"start": 64.08,
"end": 68.56,
"text": " therefore you don't think of experiments that could differentiate the simple"
},
{
"start": 68.56,
"end": 72.84,
"text": " solutions from what you propose. So it's a good example of how you can kind of"
},
{
"start": 72.84,
"end": 77.96000000000001,
"text": " trick yourself into believing you found something. And this isn't"
},
{
"start": 77.96,
"end": 83.36,
"text": " now about YouTube or anything. This happened to me so many times. It always"
},
{
"start": 83.36,
"end": 89.83999999999999,
"text": " pays off to take a step back and say, is there a simpler explanation for what's"
},
{
"start": 89.83999999999999,
"end": 94.19999999999999,
"text": " happening? And this is what I think is exactly happening here. So I'll present"
},
{
"start": 94.19999999999999,
"end": 101.6,
"text": " to you their hypothesis and then I'll present to you my kind of what I think"
},
{
"start": 101.6,
"end": 108.55999999999999,
"text": " is going on and a model that explains the data much much much easier and"
},
{
"start": 108.55999999999999,
"end": 117.67999999999999,
"text": " simpler and actually better. So let's dive in. This paper basically claims"
},
{
"start": 117.67999999999999,
"end": 124.47999999999999,
"text": " the following. So on YouTube there are channels and channels are, you know,"
},
{
"start": 124.47999999999999,
"end": 128.72,
"text": " independent channels. They make videos and you can actually arrange these"
},
{
"start": 128.72,
"end": 134.84,
"text": " channels. So each dot here is a channel. You can arrange these channels in kind"
},
{
"start": 134.84,
"end": 139.96,
"text": " of a network. And two channels you can claim they're connected and they can be"
},
{
"start": 139.96,
"end": 145.16,
"text": " a connection strength or whatever. For simplicity they can be connected if, for"
},
{
"start": 145.16,
"end": 150.64,
"text": " example, their topics are similar, if they reference each other, if they are"
},
{
"start": 150.64,
"end": 155.12,
"text": " recommended by YouTube from each other, if they have the same users watching"
},
{
"start": 155.12,
"end": 159.64000000000001,
"text": " those same channels or the videos of these channels. There are a number of"
},
{
"start": 159.64000000000001,
"end": 166.64000000000001,
"text": " metrics where you could make channels connected but all of them"
},
{
"start": 166.64000000000001,
"end": 172.72,
"text": " will turn out similar, like will give you the similar structure of channels"
},
{
"start": 172.72,
"end": 179.16,
"text": " being connected. Oh that's connected twice. So you can kind of build a"
},
{
"start": 179.16,
"end": 183.6,
"text": " graph of how these channels are connected and what you can do then is you"
},
{
"start": 183.6,
"end": 188,
"text": " can cluster them. You don't have to build a graph to cluster them but you"
},
{
"start": 188,
"end": 193.92,
"text": " can cluster the channels and what will emerge are parts of the graph that are"
},
{
"start": 193.92,
"end": 199.64,
"text": " very well connected. Right here this might be connected with this and with"
},
{
"start": 199.64,
"end": 206.88,
"text": " this. Parts of graph that are very well connected and are kind of well"
},
{
"start": 206.88,
"end": 211.35999999999999,
"text": " connected within and more sparsely connected to others, like also have a"
},
{
"start": 211.36,
"end": 217.88000000000002,
"text": " larger distance in between them. So if you start out from one channel and you're"
},
{
"start": 217.88000000000002,
"end": 222.16000000000003,
"text": " kind of watching recommended videos and recommended channels and so on, you'll"
},
{
"start": 222.16000000000003,
"end": 227.32000000000002,
"text": " stroll along here, you will get much faster to these things than to the other"
},
{
"start": 227.32000000000002,
"end": 231.16000000000003,
"text": " things. So these are called communities usually in these kind of"
},
{
"start": 231.16000000000003,
"end": 235.76000000000002,
"text": " social network analysis. So on YouTube you know there is a community for"
},
{
"start": 235.76,
"end": 242.35999999999999,
"text": " makeup, there's a community for sports, within sports there is a community for"
},
{
"start": 242.35999999999999,
"end": 246.51999999999998,
"text": " soccer, there's one for basketball and so on. So these are all these kind of"
},
{
"start": 246.51999999999998,
"end": 251.07999999999998,
"text": " communities that you can discover by clustering. This paper mainly deals with"
},
{
"start": 251.07999999999998,
"end": 257.71999999999997,
"text": " three communities. Namely the first of all is the IDW, which is the"
},
{
"start": 257.71999999999997,
"end": 263.36,
"text": " intellectual dark web. They discuss this here. So the intellectual dark web is"
},
{
"start": 263.36,
"end": 272.28000000000003,
"text": " they describe as a group of individuals that are in a rolling conversation with"
},
{
"start": 272.28000000000003,
"end": 278.72,
"text": " each other about topics that are, let's say, usually kind of difficult to talk"
},
{
"start": 278.72,
"end": 285.40000000000003,
"text": " about, such as gender differences or intelligence research in certain areas"
},
{
"start": 285.4,
"end": 293.71999999999997,
"text": " or even you know regular politics, but kind of the intellectual dark web are a"
},
{
"start": 293.71999999999997,
"end": 300.4,
"text": " wide variety of people that basically are conversing with each other about"
},
{
"start": 300.4,
"end": 307.59999999999997,
"text": " topics. The description is a bit vague but the main aspect is conversation"
},
{
"start": 307.59999999999997,
"end": 315.08,
"text": " and maybe topics that are kind of on the edge of what's acceptable to talk"
},
{
"start": 315.08,
"end": 322.44,
"text": " about. But the opinions range widely on these topics. The second group is the alt-right."
},
{
"start": 322.44,
"end": 331.88,
"text": " And the alt-right here is kind of the, they're defined as ethno-nationalists."
},
{
"start": 331.88,
"end": 339.47999999999996,
"text": " For example, here is an example, the fringe ideas such as white ethno-state,"
},
{
"start": 339.48,
"end": 345.8,
"text": " white supremacist ideology and so on. So specifically ethno-nationalists,"
},
{
"start": 345.8,
"end": 350.96000000000004,
"text": " nationalists that I think nations should be organized to along the lines of"
},
{
"start": 350.96000000000004,
"end": 357.72,
"text": " ethnicity. And the goal of the paper is actually to show that there is a"
},
{
"start": 357.72,
"end": 364.44,
"text": " kind of a dangerous pipeline on YouTube that will drive people to the alt-right"
},
{
"start": 364.44,
"end": 370.28,
"text": " and drive people into these radical ideas of the alt-right. Kind of in between is"
},
{
"start": 370.28,
"end": 377.44,
"text": " the alt-light, which is here defined as civic nationalists, which is simply as I"
},
{
"start": 377.44,
"end": 382.76,
"text": " understand it means that people should be organized into nations, not along"
},
{
"start": 382.76,
"end": 386.64,
"text": " ethnicity, but just should organize themselves into sovereign communities."
},
{
"start": 386.64,
"end": 396.52,
"text": " And it would be more of your libertarian, classically liberal people, whereas the"
},
{
"start": 396.52,
"end": 404.96,
"text": " alt-right would be more of your, let's say, authoritarian right-wing person."
},
{
"start": 404.96,
"end": 409.68,
"text": " So these three communities, they have a fourth community which is what they call a"
},
{
"start": 409.68,
"end": 413.47999999999996,
"text": " control group. And the control group consists of what they say are kind of"
},
{
"start": 413.48,
"end": 420.92,
"text": " mainstream channels on YouTube, simply to differentiate them from these three"
},
{
"start": 420.92,
"end": 427.64000000000004,
"text": " and two, see what's going on with them and if there is a difference. So this is"
},
{
"start": 427.64000000000004,
"end": 432.40000000000003,
"text": " kind of the setup and as I said the hypothesis is the following."
},
{
"start": 432.40000000000003,
"end": 438.84000000000003,
"text": " People go on YouTube, so YouTube is here, YouTube, people come on YouTube, they go"
},
{
"start": 438.84,
"end": 444.52,
"text": " around, they explore a bit and all of a sudden they find IDW videos. These are"
},
{
"start": 444.52,
"end": 449.28,
"text": " recommended by YouTube on a fairly regular basis. That may mean they're"
},
{
"start": 449.28,
"end": 453.12,
"text": " interesting, people find it, they find it interesting and so on. And then there from"
},
{
"start": 453.12,
"end": 460.91999999999996,
"text": " the IDW there are recommendations and links to the alt-light. And the alt-light"
},
{
"start": 460.91999999999996,
"end": 467.15999999999997,
"text": " are still, so as I read this paper there is kind of an undertone, kind of the IDW"
},
{
"start": 467.16,
"end": 472.44,
"text": " and the alt-light are still okay. Like they discuss ideas that are"
},
{
"start": 472.44,
"end": 477.88000000000005,
"text": " sometimes political and so on, but the real worry is the alt-right, the"
},
{
"start": 477.88000000000005,
"end": 486.04,
"text": " kind of radical right-wing ethnic nationalists. And I mean yes, the"
},
{
"start": 486.04,
"end": 492.44000000000005,
"text": " formulation I can agree with. And then they claim, so you find IDW,"
},
{
"start": 492.44,
"end": 497.52,
"text": " that they have links to the alt-light or links, I mean recommendations and so on."
},
{
"start": 497.52,
"end": 502.64,
"text": " And from the alt-light and to a certain degree also from the IDW you can then"
},
{
"start": 502.64,
"end": 510.36,
"text": " find the alt-right. So even though a user that goes on YouTube at first isn't"
},
{
"start": 510.36,
"end": 517.16,
"text": " likely to find the alt-right videos because it's fringe, it's extreme and so"
},
{
"start": 517.16,
"end": 521.84,
"text": " on, by through the YouTube recommendation algorithm basically by"
},
{
"start": 521.84,
"end": 527.24,
"text": " going to the IDW finding this, then from there they'll find the alt-light and"
},
{
"start": 527.24,
"end": 534.96,
"text": " from there and from the IDW they will then find the alt-right. So they claim"
},
{
"start": 534.96,
"end": 542.26,
"text": " that there's this pathway of radicalization here that kind of pushes"
},
{
"start": 542.26,
"end": 551.76,
"text": " people towards the alt-right. And that's their hypothesis. And they claim"
},
{
"start": 551.76,
"end": 558.84,
"text": " that they have evidence to support this and I claim that there is a simpler"
},
{
"start": 558.84,
"end": 565.28,
"text": " solution, namely... So first of all let me state I don't like the alt-right. I think"
},
{
"start": 565.28,
"end": 574.64,
"text": " their ideas are despicable. I should go without saying, though I have said it now,"
},
{
"start": 574.64,
"end": 581.28,
"text": " so you know, just as a disclaimer I'm not defending anyone here. I'm simply saying"
},
{
"start": 581.28,
"end": 586.56,
"text": " this paper has a simpler explanation for their data. Namely, what I think is"
},
{
"start": 586.56,
"end": 595.6,
"text": " happening here is YouTube again is channels. Each dot here is a channel."
},
{
"start": 595.6,
"end": 601.36,
"text": " Channels can be clustered as such, right there, as we saw before. I'm just drawing"
},
{
"start": 601.36,
"end": 606.88,
"text": " more of them right now. Channels, channels, channels, channels, channels, channels, channels."
},
{
"start": 606.88,
"end": 614.12,
"text": " So what I think is happening is there is a control group, what they call the"
},
{
"start": 614.12,
"end": 621.6,
"text": " control group. It's over here, it's large control, right? It's a bunch of channels."
},
{
"start": 621.6,
"end": 630.28,
"text": " Then, which is kind of mainstream media, then over here there is, let's say,"
},
{
"start": 630.28,
"end": 635.56,
"text": " alternative media where all of these three groups belong into. So at some"
},
{
"start": 635.56,
"end": 642.28,
"text": " point you will have the IDW, then maybe a bit further away from the control group,"
},
{
"start": 642.28,
"end": 647.68,
"text": " but very close to the IDW you would have the alt-light, and very close to the two,"
},
{
"start": 647.68,
"end": 656.0799999999999,
"text": " maybe here you would have the alt-right. So notably, in my model, the"
},
{
"start": 656.0799999999999,
"end": 662.76,
"text": " IDW and the alt-light are kind of close together. They are in terms of"
},
{
"start": 662.76,
"end": 667.84,
"text": " comparative distance. So if you cluster these channels, let's say audience or"
},
{
"start": 667.84,
"end": 674.88,
"text": " topics or and so on, it will turn out that all of these three are far, far"
},
{
"start": 674.88,
"end": 679.68,
"text": " away from the control group. Those two are very close to each other and then"
},
{
"start": 679.68,
"end": 686.72,
"text": " here there is some distance, but how much distance is a question?"
},
{
"start": 686.72,
"end": 691.28,
"text": " But of course it's going to be smaller distance than the distance to the"
},
{
"start": 691.28,
"end": 697.6,
"text": " control group here. I mean I could draw the alt-right, maybe a more"
},
{
"start": 697.6,
"end": 705.0799999999999,
"text": " accurate picture would be something like this. So whatever, I mean"
},
{
"start": 705.0799999999999,
"end": 710.8,
"text": " it doesn't matter the details, but the distance here is smaller"
},
{
"start": 710.8,
"end": 719.12,
"text": " than the distance to the control group. In this model a second thing is"
},
{
"start": 719.12,
"end": 725.52,
"text": " also important, namely the alt-right, as you can see here, is much much smaller"
},
{
"start": 725.52,
"end": 731.84,
"text": " than the IDW and the alt-light. And these again are much smaller than the"
},
{
"start": 731.84,
"end": 737.96,
"text": " control group. And this I think accounts for most, so the distance relations"
},
{
"start": 737.96,
"end": 749.6,
"text": " between these and the size of the clusters account for most. So with"
},
{
"start": 749.6,
"end": 754.9200000000001,
"text": " size I mean mainly number of channels and also audience. This accounts"
},
{
"start": 754.9200000000001,
"end": 761.0400000000001,
"text": " for most of the data better than their model. So just keep this in mind."
},
{
"start": 761.0400000000001,
"end": 767.36,
"text": " And my model of course doesn't include any kind of pipeline that they"
},
{
"start": 767.36,
"end": 776.08,
"text": " suggest. So first of all they go ahead and they say, alright, they collect"
},
{
"start": 776.08,
"end": 781.5600000000001,
"text": " channels. So they collect data for this and you know we could go over how they"
},
{
"start": 781.5600000000001,
"end": 786.2,
"text": " collect the data and criticize that and so on. They do human annotation and they"
},
{
"start": 786.2,
"end": 791.8000000000001,
"text": " start from already published reports and so on, which themselves can be criticized."
},
{
"start": 791.8000000000001,
"end": 796.46,
"text": " I'm not gonna go into their data collection methodology. It can have"
},
{
"start": 796.46,
"end": 803.5600000000001,
"text": " mistakes, but then any collection methodology can have mistakes. What they"
},
{
"start": 803.5600000000001,
"end": 807.5600000000001,
"text": " end up with is a number of channels and here are the top channels from each"
},
{
"start": 807.5600000000001,
"end": 814,
"text": " category. And as you can see alt-right, alt-light, intellectual dark web,"
},
{
"start": 814,
"end": 821.2800000000001,
"text": " and control. So already here you can see pretty clearly the model I have in mind."
},
{
"start": 821.28,
"end": 827.16,
"text": " They acknowledge all of this by the way. Look at the size of the alt-right"
},
{
"start": 827.16,
"end": 832.56,
"text": " channels, the biggest ones, compared to the size of the alt-light and the"
},
{
"start": 832.56,
"end": 838.0799999999999,
"text": " intellectual dark web. They're much much smaller in number of views. And then"
},
{
"start": 838.0799999999999,
"end": 843.52,
"text": " compare this to the size of the control group. The control group again is again"
},
{
"start": 843.52,
"end": 849.9599999999999,
"text": " larger than the other two groups. So just keep it in mind. Second thing to keep in"
},
{
"start": 849.96,
"end": 856,
"text": " mind, look at these channels. Maybe you know some of them. Joe Rogan, Sargon of"
},
{
"start": 856,
"end": 864.14,
"text": " Akkad, Paul Joseph Watson, Sticks Hexenhammer. These are"
},
{
"start": 864.14,
"end": 870.88,
"text": " youtubers. These are individuals making YouTube clips, creating content for"
},
{
"start": 870.88,
"end": 876.52,
"text": " YouTube, being on this platform. Whereas if you compare it with the control group,"
},
{
"start": 876.52,
"end": 884.56,
"text": " what's here? Vox, GQ, Wired, Business Insider. These aren't youtubers. These are"
},
{
"start": 884.56,
"end": 890.1999999999999,
"text": " websites or traditional media companies or their own kind of"
},
{
"start": 890.1999999999999,
"end": 895.84,
"text": " blogs and so on that have a YouTube channel where YouTube is one of the"
},
{
"start": 895.84,
"end": 904.24,
"text": " outlets of this media company. So I think there's a giant discrepancy"
},
{
"start": 904.24,
"end": 909.34,
"text": " here in the control group that can explain also some of this data that you"
},
{
"start": 909.34,
"end": 915.04,
"text": " see. So keep that in mind. I think the control group, they say they don't try to"
},
{
"start": 915.04,
"end": 919.24,
"text": " capture the user dynamic with the control group, but I think that there's"
},
{
"start": 919.24,
"end": 923.6800000000001,
"text": " many problems with this control group, including the fact that these are"
},
{
"start": 923.6800000000001,
"end": 929.6800000000001,
"text": " kind of traditional mainstream media that just have YouTube as an outlet."
},
{
"start": 929.68,
"end": 936.28,
"text": " Moreover, a lot of these like Vox or Vice, they are clickbait media and"
},
{
"start": 936.28,
"end": 943.28,
"text": " rage bait media that it has worked for a number of years, but the algorithms"
},
{
"start": 943.28,
"end": 949.8399999999999,
"text": " are becoming more attuned to clickbait and these are crashing fast."
},
{
"start": 949.8399999999999,
"end": 958.4,
"text": " Whereas the more youtuber people, they are not susceptible to"
},
{
"start": 958.4,
"end": 965.12,
"text": " that much to kind of the abolishment of clickbait. Alright, so this is"
},
{
"start": 965.12,
"end": 970.68,
"text": " the data. They have all these channels, they have all these videos and they first of"
},
{
"start": 970.68,
"end": 979.28,
"text": " all give some stats on it. Here you see on the bottom is always the year."
},
{
"start": 979.28,
"end": 987.38,
"text": " So they do this over time and you see the active channels which are channels"
},
{
"start": 987.38,
"end": 993.72,
"text": " that have uploaded videos in some time. See the control group again is larger"
},
{
"start": 993.72,
"end": 1001.68,
"text": " but has started to flatten out in the last few years. Whereas these"
},
{
"start": 1001.68,
"end": 1007.52,
"text": " communities, they are relatively flourishing. Another interesting point"
},
{
"start": 1007.52,
"end": 1015.08,
"text": " is that the paper somehow tries to tie this to the election of Donald Trump in"
},
{
"start": 1015.08,
"end": 1022.08,
"text": " 2016. But I think this is just kind of in there to gain"
},
{
"start": 1022.08,
"end": 1027.8400000000001,
"text": " relevance. A lot of these kind of trends and so on you'll see already start"
},
{
"start": 1027.8400000000001,
"end": 1035.2,
"text": " before that. So the start of the rise here, if you see these"
},
{
"start": 1035.2,
"end": 1041.96,
"text": " bumps here and so on, a lot of them start before 2016. So as we go through this"
},
{
"start": 1041.96,
"end": 1046.48,
"text": " make up your own mind of how much this is actually tied to the election"
},
{
"start": 1046.48,
"end": 1054.92,
"text": " or not. I think it's much more the years when clickbait started to go"
},
{
"start": 1054.92,
"end": 1060.4,
"text": " down as a business model. Never mind though. So the active channels"
},
{
"start": 1060.4,
"end": 1069.16,
"text": " growing, though the control group not growing as much. Videos published, even"
},
{
"start": 1069.16,
"end": 1073,
"text": " though the control group isn't growing so much, they still publish the most"
},
{
"start": 1073,
"end": 1079.5600000000002,
"text": " videos. But you can see generally the site is growing. Generally YouTube is"
},
{
"start": 1079.5600000000002,
"end": 1085.88,
"text": " growing. Like counts. And here you see something interesting starting to happen."
},
{
"start": 1085.88,
"end": 1089.6000000000001,
"text": " Namely these communities, especially the alt-light and the intellectual dark web,"
},
{
"start": 1089.6000000000001,
"end": 1094.2,
"text": " they're starting to catch up. And this is one of the things that the paper also"
},
{
"start": 1094.2,
"end": 1100.4,
"text": " states is that if you look at for example comments per video, this"
},
{
"start": 1100.4,
"end": 1107.6000000000001,
"text": " light and the intellectual dark web outperform the control group vastly."
},
{
"start": 1107.6000000000001,
"end": 1117.1200000000001,
"text": " Also if you look at views per video and likes per video, the control"
},
{
"start": 1117.1200000000001,
"end": 1123,
"text": " group simply don't have an engaged audience. Which I think first of all is"
},
{
"start": 1123,
"end": 1127.68,
"text": " because they produce clickbait. Second of all they're just not that interesting."
},
{
"start": 1127.68,
"end": 1132.32,
"text": " And third of all they're not youtubers. Like this isn't their thing. They're"
},
{
"start": 1132.32,
"end": 1140.4,
"text": " just simply an outlet. But yeah so that's kind of a one, just kind of a"
},
{
"start": 1140.4,
"end": 1149.76,
"text": " bunch of metrics that they show here. The next table is a bit more"
},
{
"start": 1149.76,
"end": 1155.44,
"text": " interesting. In the next table they do a user intersection. So what they do is they"
},
{
"start": 1155.44,
"end": 1159.76,
"text": " collect all these videos and then they collect all the comments of these"
},
{
"start": 1159.76,
"end": 1165.28,
"text": " videos. And the comment of course always comes with a username. You need to be"
},
{
"start": 1165.28,
"end": 1170.84,
"text": " logged into YouTube to make a comment. And they see which users comment on"
},
{
"start": 1170.84,
"end": 1176.08,
"text": " multiple videos or on videos of multiple categories. And then they can look at"
},
{
"start": 1176.08,
"end": 1181.9199999999998,
"text": " how many users of category A also comment in category B and vice versa."
},
{
"start": 1181.9199999999998,
"end": 1188.28,
"text": " So they have two metrics here. Jucard similarity which is for two"
},
{
"start": 1188.28,
"end": 1193.84,
"text": " communities A and B, number of users commenting on A and B divided"
},
{
"start": 1193.84,
"end": 1199.04,
"text": " by number of users commenting on A or B. And the second the overlap coefficient"
},
{
"start": 1199.04,
"end": 1205.32,
"text": " is number of users commenting on A and B divided by the minimum size of A and B."
},
{
"start": 1205.32,
"end": 1212.2,
"text": " They say that the overlap coefficient is more useful to compare communities of"
},
{
"start": 1212.2,
"end": 1220.1599999999999,
"text": " different sizes. So we'll look at that. The top graphs are always always"
},
{
"start": 1220.1599999999999,
"end": 1226.8799999999999,
"text": " jacquard difference and the jacquard similarity in the bottom one are"
},
{
"start": 1226.8799999999999,
"end": 1232.32,
"text": " overlap coefficient. The first graphs though are number of commenting users"
},
{
"start": 1232.32,
"end": 1238.28,
"text": " per year. And you already see that even though the control group has much more"
},
{
"start": 1238.28,
"end": 1245,
"text": " views and probably much more videos, much larger, the comments don't... so the"
},
{
"start": 1245,
"end": 1250.04,
"text": " again the the users of the all light and the intellectual dark web are much more"
},
{
"start": 1250.04,
"end": 1258.2,
"text": " engaged. Also comments per user. This is the cumulative distribution function."
},
{
"start": 1258.2,
"end": 1264.44,
"text": " Most people that comment on control group videos maybe comment once"
},
{
"start": 1264.44,
"end": 1271.64,
"text": " and then but these other communities they comment more. Self similarity"
},
{
"start": 1271.64,
"end": 1277.1200000000001,
"text": " means year after year. So always compared to the year before how many users are"
},
{
"start": 1277.1200000000001,
"end": 1283.52,
"text": " similar. So how well do these communities retain users. And you can"
},
{
"start": 1283.52,
"end": 1289.04,
"text": " already see here the control group is actually very bad at retaining users. It"
},
{
"start": 1289.04,
"end": 1295.2,
"text": " does have this overlap coefficient high but it has the jacquard self similarity"
},
{
"start": 1295.2,
"end": 1299.72,
"text": " low which basically if you think of the formula of the jacquard similarity means"
},
{
"start": 1299.72,
"end": 1308.32,
"text": " that this number is small and this number is high which means that A and"
},
{
"start": 1308.32,
"end": 1314.96,
"text": " B are very disjoint which means that the last year's users aren't this year's"
},
{
"start": 1314.96,
"end": 1321.6,
"text": " users basically. So they they constantly have to appeal to new users because"
},
{
"start": 1321.6,
"end": 1327.36,
"text": " they're losing old users because well I guess they're boring. Whereas the"
},
{
"start": 1327.36,
"end": 1332.9199999999998,
"text": " all light and intellectual dark web are much more are much better at retaining"
},
{
"start": 1332.92,
"end": 1342.6000000000001,
"text": " users. Interestingly the alt right not as good as retaining users as the other two."
},
{
"start": 1342.6000000000001,
"end": 1347.2,
"text": " This could also be an effect of size like if your community is smaller the"
},
{
"start": 1347.2,
"end": 1354.1200000000001,
"text": " users might wander away more quickly. But I think this already speaks against the"
},
{
"start": 1354.1200000000001,
"end": 1360.88,
"text": " radicalization pipeline. If the if the alt right if YouTube was radicalizing"
},
{
"start": 1360.88,
"end": 1368.8000000000002,
"text": " people towards alt right we I think we would see a the alt right being on top"
},
{
"start": 1368.8000000000002,
"end": 1379.68,
"text": " of user retention. Then here they have intersections between communities. So"
},
{
"start": 1379.68,
"end": 1390.8400000000001,
"text": " green here is alt light and IDW while the blue is alt right and alt light and"
},
{
"start": 1390.84,
"end": 1396.4399999999998,
"text": " the other blue is alt right and IDW. So basically the green is alt light and IDW"
},
{
"start": 1396.4399999999998,
"end": 1404.8,
"text": " and the blues are the other two. And we see that the overlap in terms of overlap"
},
{
"start": 1404.8,
"end": 1411.52,
"text": " coefficient is similar. The overlap in terms of jacquard similarity the alt"
},
{
"start": 1411.52,
"end": 1418.8,
"text": " light and the IDW are very much more sharing users which in the picture I"
},
{
"start": 1418.8,
"end": 1425.8799999999999,
"text": " painted makes sense if you think my model is valid. My model explains this"
},
{
"start": 1425.8799999999999,
"end": 1434.04,
"text": " very well in that these two communities are quite close together therefore share"
},
{
"start": 1434.04,
"end": 1438.52,
"text": " a similar user base. The alt right smaller and a bit further apart"
},
{
"start": 1438.52,
"end": 1445.54,
"text": " therefore not as similar though more similar than the control group which is"
},
{
"start": 1445.54,
"end": 1450.96,
"text": " the last graph. The last graph is sorry the last graph is how similar are these"
},
{
"start": 1450.96,
"end": 1461.12,
"text": " communities to the control group and here we see the IDW and the alt light"
},
{
"start": 1461.12,
"end": 1467.36,
"text": " kind of similar. The alt right not as similar though in the overlap"
},
{
"start": 1467.36,
"end": 1476.56,
"text": " coefficient they're about the same. So the paper here claims oh look at the"
},
{
"start": 1476.56,
"end": 1481.6,
"text": " similarity this is definitely a radicalization. So they don't claim yet this"
},
{
"start": 1481.6,
"end": 1485.56,
"text": " is a radicalization pipeline but they claim that there's a higher similarity."
},
{
"start": 1485.56,
"end": 1491.36,
"text": " If you actually look at the numbers it's not so I mean here you're"
},
{
"start": 1491.36,
"end": 1496.9599999999998,
"text": " around the 50% similarity and here at the end you're also around the 50%"
},
{
"start": 1496.96,
"end": 1500.76,
"text": " similarity with the control group. So this is within these groups and this is"
},
{
"start": 1500.76,
"end": 1506.8400000000001,
"text": " here with the control group. Also here if I look at the kind of mean here"
},
{
"start": 1506.8400000000001,
"end": 1513.8,
"text": " you're at whatever 20-18% and here you're also you may be a bit lower but"
},
{
"start": 1513.8,
"end": 1519.6000000000001,
"text": " you're also going towards this. What it looks to me like rather than there being"
},
{
"start": 1519.6000000000001,
"end": 1525.16,
"text": " a radicalization pipeline if you look at the shape of this and kind"
},
{
"start": 1525.16,
"end": 1532.44,
"text": " of where it starts in 2013-2014 it starts to go up here and you look at the"
},
{
"start": 1532.44,
"end": 1538.64,
"text": " shape of this it's simply the same shape delayed and I mean there's no reason why"
},
{
"start": 1538.64,
"end": 1547.64,
"text": " this graph wouldn't go up here wouldn't go up here in the future and reach the"
},
{
"start": 1547.64,
"end": 1551.88,
"text": " exact same numbers as here. It seems that the graph is simply shifted which makes"
},
{
"start": 1551.88,
"end": 1557.0800000000002,
"text": " total sense if you think these communities are... I'm gonna draw the same"
},
{
"start": 1557.0800000000002,
"end": 1568.3600000000001,
"text": " picture here... right IDW, alt light and over here control. If you think they're"
},
{
"start": 1568.3600000000001,
"end": 1574.2800000000002,
"text": " they're like that if you think simply think well YouTube is growing users are"
},
{
"start": 1574.2800000000002,
"end": 1580.8400000000001,
"text": " growing users are starting somewhere here and then spreading out pretty much"
},
{
"start": 1580.84,
"end": 1585.12,
"text": " randomly like they're spreading out spreading out spreading out users start"
},
{
"start": 1585.12,
"end": 1588.32,
"text": " here spreading out users start here spreading out here spreading out"
},
{
"start": 1588.32,
"end": 1593.52,
"text": " everywhere users just kind of there's a diffusion process going on not in a"
},
{
"start": 1593.52,
"end": 1597.56,
"text": " particular direction like they claim if there is just a diffusion process going"
},
{
"start": 1597.56,
"end": 1604.24,
"text": " on what would you expect you would expect users that started here to reach"
},
{
"start": 1604.24,
"end": 1611.68,
"text": " the IDW and alt right much sooner than they reach the control group but"
},
{
"start": 1611.68,
"end": 1617,
"text": " ultimately as the diffusion continues all users will have commented on most"
},
{
"start": 1617,
"end": 1621.72,
"text": " videos if you run YouTube infinitely and these numbers would go that's why the"
},
{
"start": 1621.72,
"end": 1626.92,
"text": " numbers go up right if you just let it go the diffusion process will go along"
},
{
"start": 1626.92,
"end": 1633.04,
"text": " and it simply takes a longer time to go from here all the way over here then it"
},
{
"start": 1633.04,
"end": 1639.8,
"text": " goes then between these communities so to me we're looking at a simple diffusion"
},
{
"start": 1639.8,
"end": 1647.92,
"text": " process here that is shifted in time and that explains very much the discrepancy"
},
{
"start": 1647.92,
"end": 1651.6,
"text": " in number but also the shape of the curve that is exactly the same but"
},
{
"start": 1651.6,
"end": 1656.04,
"text": " shifted their model does not explain the shape of the curve they simply say well"
},
{
"start": 1656.04,
"end": 1663.1599999999999,
"text": " here it's 75% and here it's only 50% that means that these communities are"
},
{
"start": 1663.1599999999999,
"end": 1671.1599999999999,
"text": " kind of shipping users towards each other so I think the explanation is"
},
{
"start": 1671.1599999999999,
"end": 1677.24,
"text": " easier then so they claim this does not alone kind of show that there is a"
},
{
"start": 1677.24,
"end": 1683.32,
"text": " pipeline what they now do however will show that basically so they claim this"
},
{
"start": 1683.32,
"end": 1689.12,
"text": " is the experiment that really shows that there is it is pipeline so what they do"
},
{
"start": 1689.12,
"end": 1697.84,
"text": " is they define what they call an infection so what they say is okay we"
},
{
"start": 1697.84,
"end": 1706.36,
"text": " are for example this this row here we're taking users that are alt light users"
},
{
"start": 1706.36,
"end": 1713.4799999999998,
"text": " at the beginning in this time so basically they only comment on the only"
},
{
"start": 1713.4799999999998,
"end": 1720.08,
"text": " comment on alt light videos during this time right so discard all users that"
},
{
"start": 1720.08,
"end": 1724.6799999999998,
"text": " comment on anything else just retain the ones that only comment on alt light"
},
{
"start": 1724.6799999999998,
"end": 1730.76,
"text": " videos during this time then we're going to follow them over time and see how"
},
{
"start": 1730.76,
"end": 1738.84,
"text": " many of them have at least one comment in an alt right video so this is only"
},
{
"start": 1738.84,
"end": 1744.56,
"text": " directed from the community over here towards the alt right and then they call"
},
{
"start": 1744.56,
"end": 1750.72,
"text": " a user infected specifically if they comment on one or two alt right videos"
},
{
"start": 1750.72,
"end": 1757.2,
"text": " they're lightly infected if they comment on three to five they're mildly infected"
},
{
"start": 1757.2,
"end": 1765.04,
"text": " and if they comment on more they're severely infected so as you can see"
},
{
"start": 1765.04,
"end": 1773.92,
"text": " users starting from the alt light or from the IDW or from both they will"
},
{
"start": 1773.92,
"end": 1781.4,
"text": " become in some will become infected over time namely and I postulate we simply"
},
{
"start": 1781.4,
"end": 1785.8400000000001,
"text": " look at the since that the tendencies between the groups are similar we'll"
},
{
"start": 1785.84,
"end": 1794.56,
"text": " simply look at the light infections here so they say okay after you know in 2018"
},
{
"start": 1794.56,
"end": 1799.04,
"text": " about 8 to 10 percent of the users become infected in these groups you see"
},
{
"start": 1799.04,
"end": 1806.9199999999998,
"text": " here here about the same trajectories whereas it so whereas in the control"
},
{
"start": 1806.92,
"end": 1817.1200000000001,
"text": " group it's less here though honestly I don't think it's that much less right I"
},
{
"start": 1817.1200000000001,
"end": 1823.52,
"text": " think that again I think there's a normal diffusion process here they do"
},
{
"start": 1823.52,
"end": 1831.44,
"text": " this similarly with the with the other ones and to me like to them this makes"
},
{
"start": 1831.44,
"end": 1836.4,
"text": " total sense like oh yeah users that start in these communities they migrate"
},
{
"start": 1836.4,
"end": 1839.88,
"text": " to get infected by the alt right they go towards the alt right because you can"
},
{
"start": 1839.88,
"end": 1844.0800000000002,
"text": " find it so easily and to me this simply looks like a normal diffusion process"
},
{
"start": 1844.0800000000002,
"end": 1850.64,
"text": " here's what you need if you want and by the way the control group isn't that"
},
{
"start": 1850.64,
"end": 1855.72,
"text": " much different here's what you need if you want to show that there is a"
},
{
"start": 1855.72,
"end": 1863.5600000000002,
"text": " pipeline in this direction you need this exact same graph in the other direction"
},
{
"start": 1863.56,
"end": 1872.24,
"text": " and you need to show that people that started in the alt right do not go back"
},
{
"start": 1872.24,
"end": 1878.56,
"text": " in the same fashion towards the alt light or the IDW and they do especially"
},
{
"start": 1878.56,
"end": 1883.6,
"text": " not go to the control group you need to show this basically between each pair of"
},
{
"start": 1883.6,
"end": 1889.96,
"text": " these and you need to show that the direction of infection is only in a"
},
{
"start": 1889.96,
"end": 1895.4,
"text": " single direction namely towards radicalization otherwise you're just"
},
{
"start": 1895.4,
"end": 1899.56,
"text": " looking at a normal diffusion process between differently distance and"
},
{
"start": 1899.56,
"end": 1907.64,
"text": " differently sized groups so they go on to analyze and they say well how much"
},
{
"start": 1907.64,
"end": 1914.56,
"text": " basically how much of the alt right audience makes is made up by people that"
},
{
"start": 1914.56,
"end": 1919.2,
"text": " have been radicalized that have been infected so that this infection is kind"
},
{
"start": 1919.2,
"end": 1923.3600000000001,
"text": " of their proxy for what they call a radicalization and if you become"
},
{
"start": 1923.3600000000001,
"end": 1930.72,
"text": " infected then basically you're not part of the alt right or something even though"
},
{
"start": 1930.72,
"end": 1936.56,
"text": " you might have you might have commented something negative actually the might"
},
{
"start": 1936.56,
"end": 1942.32,
"text": " engage with their ideas and call them their crap but in any case you're now"
},
{
"start": 1942.32,
"end": 1948.8,
"text": " infected and they ask themselves how much of the alt right audience has"
},
{
"start": 1948.8,
"end": 1954.44,
"text": " are of these infected so basically how much of the alt right audience have our"
},
{
"start": 1954.44,
"end": 1960.9199999999998,
"text": " people that in the past have been not alt writers have been exclusively"
},
{
"start": 1960.9199999999998,
"end": 1970.76,
"text": " commenting on alt light or IDW videos and they find that for example for alt"
},
{
"start": 1970.76,
"end": 1978.6,
"text": " light 23% of the alt right audience are former alt lighters and have our former"
},
{
"start": 1978.6,
"end": 1984.6,
"text": " alt lighters that have now made one comment on an alt right video so that"
},
{
"start": 1984.6,
"end": 1992.52,
"text": " their claim is well there is a sizable portion of the alt right that at the"
},
{
"start": 1992.52,
"end": 1998.12,
"text": " beginning wasn't alt right that basically became infected and therefore"
},
{
"start": 1998.12,
"end": 2002.84,
"text": " that that kind of shows this radicalization pipeline that the alt"
},
{
"start": 2002.84,
"end": 2009.8,
"text": " right audience is mainly consistent of people that have not been alt right"
},
{
"start": 2009.8,
"end": 2017.1599999999999,
"text": " previously but have become so and to me again this is simply a function of the"
},
{
"start": 2017.1599999999999,
"end": 2024.36,
"text": " size of these communities right if if you think of this again and you start"
},
{
"start": 2024.36,
"end": 2028.6,
"text": " randomly somewhere on YouTube let's let's make this assumption people start"
},
{
"start": 2028.6,
"end": 2033.56,
"text": " randomly somewhere on YouTube what's the probability that you're going to start"
},
{
"start": 2033.56,
"end": 2040.4399999999998,
"text": " in the alt right very small right so what's the the kind of natural let's say"
},
{
"start": 2040.4399999999998,
"end": 2048.24,
"text": " the natural size of alt right before users go and migrate is very tiny right"
},
{
"start": 2048.24,
"end": 2054.4,
"text": " so not many users are going to be what you would consult originally alt writers"
},
{
"start": 2054.4,
"end": 2058.12,
"text": " whatever their their first comment basically what this thing measures is"
},
{
"start": 2058.12,
"end": 2064.2799999999997,
"text": " where is your first comment and are any of your subsequent comments alt right if"
},
{
"start": 2064.2799999999997,
"end": 2068.68,
"text": " your first comment is not in the alt right then you become a potential"
},
{
"start": 2068.68,
"end": 2073.3199999999997,
"text": " candidate for infection and if any comment is on the alt right then you're"
},
{
"start": 2073.3199999999997,
"end": 2077.4,
"text": " infected so what's the probability that your first comment is not alt right well"
},
{
"start": 2077.4,
"end": 2080.8399999999997,
"text": " you're gonna land somewhere on YouTube YouTube is huge the alt right is very"
},
{
"start": 2080.84,
"end": 2088.96,
"text": " small thus that probability is extremely small and then you let you simply let"
},
{
"start": 2088.96,
"end": 2095,
"text": " people diffuse let them diffuse let them diffuse some will end up in the alt"
},
{
"start": 2095,
"end": 2099.96,
"text": " right and since the alt right is so small to begin with actually most people"
},
{
"start": 2099.96,
"end": 2106.2400000000002,
"text": " that will comment at some point on an alt right video will will have their"
},
{
"start": 2106.24,
"end": 2114.3999999999996,
"text": " first comment from somewhere outside the alt right videos simply simply a"
},
{
"start": 2114.3999999999996,
"end": 2119.56,
"text": " numbers game right simply the alt right is so small that this is virtually"
},
{
"start": 2119.56,
"end": 2124.64,
"text": " guaranteed so what they find here is again simply an evidence of a regular"
},
{
"start": 2124.64,
"end": 2130.9599999999996,
"text": " diffusion process between these differently sized groups and the claims"
},
{
"start": 2130.9599999999996,
"end": 2136.2,
"text": " they make from this are just over the top again that their comparison to"
},
{
"start": 2136.2,
"end": 2140.3199999999997,
"text": " the control group if you if you look at the numbers they're actually not that"
},
{
"start": 2140.3199999999997,
"end": 2147.7599999999998,
"text": " different from this from the IDW numbers there they're different than the alt"
},
{
"start": 2147.7599999999998,
"end": 2156.9199999999996,
"text": " light here substantially different but again that simply a function of distance"
},
{
"start": 2156.9199999999996,
"end": 2164.96,
"text": " in my opinion in these in these clusters lastly they look at the YouTube"
},
{
"start": 2164.96,
"end": 2173.4,
"text": " recommender system and they say okay if we look at these videos and the channels"
},
{
"start": 2173.4,
"end": 2179.8,
"text": " and we look at on these videos what other videos are recommended and what"
},
{
"start": 2179.8,
"end": 2183.64,
"text": " other channels are recommended so if you have like a video on YouTube you have"
},
{
"start": 2183.64,
"end": 2187.6,
"text": " the video here and here you have like recommended videos similarly when you"
},
{
"start": 2187.6,
"end": 2191.7200000000003,
"text": " have a channel right you have a channel this is a person yeah I'm this person"
},
{
"start": 2191.72,
"end": 2195.68,
"text": " the person can have first of all they can have featured channels where they"
},
{
"start": 2195.68,
"end": 2200.8399999999997,
"text": " say look these are channels that I find cool I go check them out and then they"
},
{
"start": 2200.8399999999997,
"end": 2204.7599999999998,
"text": " also have recommended channels that are kind of given by YouTube as"
},
{
"start": 2204.7599999999998,
"end": 2211.08,
"text": " recommendations so here YouTube controls basically everything here the creator"
},
{
"start": 2211.08,
"end": 2217.72,
"text": " controls part and the YouTube controls dollar part so they look to both first"
},
{
"start": 2217.72,
"end": 2225.3599999999997,
"text": " of all the channels channels recommend recommendations so these are both"
},
{
"start": 2225.3599999999997,
"end": 2233.7999999999997,
"text": " sections here and they look at if you start on a alt light video how likely if"
},
{
"start": 2233.7999999999997,
"end": 2240.52,
"text": " you do a random walk are you to end up in the alt right or in the intellectual"
},
{
"start": 2240.52,
"end": 2245.7999999999997,
"text": " dark web or control group after one step two steps three steps four steps so that"
},
{
"start": 2245.8,
"end": 2251.8,
"text": " the big line is the random Walker and actually the dashed line is the distance"
},
{
"start": 2251.8,
"end": 2257.0800000000004,
"text": " if you were to target Lee go into the direction of such a video like what's"
},
{
"start": 2257.0800000000004,
"end": 2268,
"text": " the minimum number of clicks you need and you can see here the the if you"
},
{
"start": 2268,
"end": 2274.36,
"text": " start at alt light after one or two steps the random Walker is kind of a 2%"
},
{
"start": 2274.36,
"end": 2282.32,
"text": " chance to end up at an alt right video and about a 25% chance here of ending up"
},
{
"start": 2282.32,
"end": 2289.08,
"text": " in a intellectual dark web video and about a 50% chance of ending up again at"
},
{
"start": 2289.08,
"end": 2293.56,
"text": " an alt light video the scales here really different so it's very difficult"
},
{
"start": 2293.56,
"end": 2301,
"text": " to judge how it compares to the control group which is kind of at zero here but"
},
{
"start": 2301,
"end": 2306.68,
"text": " to me again this is a reflection of the size of these communities and I think"
},
{
"start": 2306.68,
"end": 2313.68,
"text": " it's a bit you know we are to to then claim oh these are reachable basically so"
},
{
"start": 2313.68,
"end": 2321.56,
"text": " 2% chance of landing on an alt right video um I'm not sure but again if you"
},
{
"start": 2321.56,
"end": 2326.48,
"text": " compare if you start from the control group there's almost no chance you'll"
},
{
"start": 2326.48,
"end": 2335,
"text": " end up in a alt right video so I guess the comparison is is okay if you compare"
},
{
"start": 2335,
"end": 2344.92,
"text": " to control group if you start look at videos however again if you start at alt"
},
{
"start": 2344.92,
"end": 2355,
"text": " light after one step you are approximately 25% likely to be in an IDW"
},
{
"start": 2355,
"end": 2360.8,
"text": " video you're a bit over 50% likely to stay in an alt light video however"
},
{
"start": 2360.8,
"end": 2367.32,
"text": " compare this to channels you're almost super unlikely to end at a control"
},
{
"start": 2367.32,
"end": 2372.16,
"text": " channel if you start at an alt light channel but in video recommendations"
},
{
"start": 2372.16,
"end": 2379.6,
"text": " you're actually also about 25% chance of ending in a control group video where"
},
{
"start": 2379.6,
"end": 2388,
"text": " as look at the scale here you're only about 0.03% likely to end up in an alt"
},
{
"start": 2388,
"end": 2399.64,
"text": " right video and also here so here even look at this if you start an IDW video"
},
{
"start": 2399.64,
"end": 2405.92,
"text": " the chance that you're going to end up in a control again super high much"
},
{
"start": 2405.92,
"end": 2413.32,
"text": " higher than an alt light video whereas with the channel recommendations this"
},
{
"start": 2413.32,
"end": 2418.4,
"text": " was completely turned around so we see the alt right completely loses when it"
},
{
"start": 2418.4,
"end": 2423.7200000000003,
"text": " comes to video recommendations and mainly the control group gains compared"
},
{
"start": 2423.7200000000003,
"end": 2430.84,
"text": " to the channel recommendations I think here's what I think I think this is due"
},
{
"start": 2430.84,
"end": 2437.08,
"text": " to this section here this section here where the creators have power and also"
},
{
"start": 2437.08,
"end": 2442.2400000000002,
"text": " this section here YouTube recommending I think they're putting a lot of work"
},
{
"start": 2442.2400000000002,
"end": 2447.1600000000003,
"text": " into the video recommendations I think they're putting not that much work into"
},
{
"start": 2447.1600000000003,
"end": 2451.76,
"text": " these recommendations and by work I mean actually manually intervening and"
},
{
"start": 2451.76,
"end": 2457.08,
"text": " deciding what's kind of good videos and bad videos and the the control group"
},
{
"start": 2457.08,
"end": 2463.7999999999997,
"text": " they're probably there's probably big advertisement money in that so they"
},
{
"start": 2463.7999999999997,
"end": 2467.36,
"text": " might be pushed up a bit in the video recommendations since most people are"
},
{
"start": 2467.36,
"end": 2472.2,
"text": " going by video recommendations I've actually never used the channel"
},
{
"start": 2472.2,
"end": 2476.12,
"text": " recommendations feature and the channel recommendations first of all the"
},
{
"start": 2476.12,
"end": 2481.24,
"text": " creator has power over part of it and then also YouTube may not put as much"
},
{
"start": 2481.24,
"end": 2491.08,
"text": " work into these related channels so both have in the effect that I would say that"
},
{
"start": 2491.08,
"end": 2496.56,
"text": " that the data here first of all it doesn't doesn't convince me of a"
},
{
"start": 2496.56,
"end": 2500.9599999999996,
"text": " radicalization pipeline it simply convinces me that some communities are"
},
{
"start": 2500.9599999999996,
"end": 2506.8599999999997,
"text": " larger smaller and closer together but second of all that this down here if you"
},
{
"start": 2506.86,
"end": 2512.36,
"text": " forget about the alt-right for a moment yeah they're irrelevant this down here"
},
{
"start": 2512.36,
"end": 2518.28,
"text": " actually compared to up here shows maybe a bit of evidence of an algorithmic"
},
{
"start": 2518.28,
"end": 2527.48,
"text": " promotion of these mainstream media channels compared to how the communities"
},
{
"start": 2527.48,
"end": 2531.84,
"text": " are actually clustering which I think this this up here might be a much more"
},
{
"start": 2531.84,
"end": 2541.1600000000003,
"text": " accurate picture so you know that it's just kind of a funky thing in the data"
},
{
"start": 2541.1600000000003,
"end": 2546.96,
"text": " yeah that alt-right is irrelevant to this part because they're they're just"
},
{
"start": 2546.96,
"end": 2556.08,
"text": " too small so this is this is kind of my take on this they didn't give"
},
{
"start": 2556.08,
"end": 2562.96,
"text": " recommendations and is this a pipeline and so on and I don't think so you've"
},
{
"start": 2562.96,
"end": 2571.24,
"text": " now heard my idea and you've heard their idea decide for yourself but I think"
},
{
"start": 2571.24,
"end": 2578.64,
"text": " it's a good example of how if you are convinced of an underlying mechanism"
},
{
"start": 2578.64,
"end": 2584.48,
"text": " you're going to collect evidence in support of that mechanism and if you"
},
{
"start": 2584.48,
"end": 2588.76,
"text": " catch yourself doing that really really think isn't there an easier explanation"
},
{
"start": 2588.76,
"end": 2618.5200000000004,
"text": " for this all right that was it for me have fun"
}
] |
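The transcript above describes two set-overlap metrics for commenting users: Jaccard similarity (users commenting on both A and B divided by users commenting on A or B) and the overlap coefficient (users commenting on both divided by the size of the smaller community). Below is a minimal illustrative sketch of those two formulas in Python; the function names and the toy user sets are made up for illustration and are not taken from the paper's or the channel's code.

def jaccard_similarity(a, b):
    # |A ∩ B| / |A ∪ B|: fraction of all commenters that both communities share.
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def overlap_coefficient(a, b):
    # |A ∩ B| / min(|A|, |B|): less sensitive to a large size difference between A and B.
    smaller = min(len(a), len(b))
    return len(a & b) / smaller if smaller else 0.0

# Toy commenter sets (hypothetical user ids), just to show how the two metrics diverge
# when one community is much larger than the other.
alt_light = {"u1", "u2", "u3", "u4"}
idw = {"u2", "u3", "u4", "u5", "u6", "u7"}
control = {"u3"} | {f"c{i}" for i in range(100)}

print(jaccard_similarity(alt_light, idw), overlap_coefficient(alt_light, idw))
print(jaccard_similarity(alt_light, control), overlap_coefficient(alt_light, control))

With these toy numbers the Jaccard similarity collapses for the large control set while the overlap coefficient does not, which mirrors the remark in the transcript that the overlap coefficient is the more useful of the two when community sizes differ a lot.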
wZWn7Hm8osA | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Gauge Equivariant Convolutional Networks and the Icosahedral CNN | [
"Science & Technology"
] | [
"machine learning",
"deep learning",
"artificial intelligence",
"ai",
"data science",
"convolution",
"convolutional neural networks",
"cnn",
"manifolds",
"curvature",
"parallel transport",
"gauge",
"gauge transformation",
"icosahedron",
"weight sharing",
"coordinate frame",
"invariant",
"coordinate system",
"equivariance",
"sphere",
"spherical"
] | Ever wanted to do a convolution on a Klein Bottle? This paper defines CNNs over manifolds such that they are independent of which coordinate frame you choose. Amazingly, this then results in an efficient practical method to achieve state-of-the-art in several tasks!
https://arxiv.org/abs/1902.04615
Abstract:
The principle of equivariance to symmetry transformations enables a theoretically grounded approach to neural network architecture design. Equivariant networks have shown excellent performance and data efficiency on vision and medical imaging problems that exhibit symmetries. Here we show how this principle can be extended beyond global symmetries to local gauge transformations. This enables the development of a very general class of convolutional neural networks on manifolds that depend only on the intrinsic geometry, and which includes many popular methods from equivariant and geometric deep learning. We implement gauge equivariant CNNs for signals defined on the surface of the icosahedron, which provides a reasonable approximation of the sphere. By choosing to work with this very regular manifold, we are able to implement the gauge equivariant convolution using a single conv2d call, making it a highly scalable and practical alternative to Spherical CNNs. Using this method, we demonstrate substantial improvements over previous methods on the task of segmenting omnidirectional images and global climate patterns.
Authors: Taco S. Cohen, Maurice Weiler, Berkay Kicanaoglu, Max Welling | What you're looking at here are manifolds. Specifically you're looking at 2D manifolds embedded in a 3D space. So naturally these are some kind of bodies that have a surface and one of the things you might want to do with a manifold like this is to define a convolutional neural network to work on this surface. So usually we have convolutional neural network working on flat surfaces such as images. But what if you could actually work on a manifold like this? An easy example is a sphere. You might want to work on a sphere. Why is that useful? Maybe you want to predict the climate and then you actually want to work on the Earth's surface which is approximated by a sphere. So today we'll look at the following paper. Gauge-equivariant convolutional networks and the icosahedral CNN by Tachokohen, Maurice Weiler, Burkai Kichang, and Max Welling. So as I already said this paper tries to define convolutional neural networks on any kind of manifold. So what's the problem inherently when you're doing this? Can't you just, you know, place a filter move it around like you do in a regular CNN? That's exactly the problem actually. So if you have a picture, and let me draw a picture of a cat, right? Cat here, here, here, here, eye, eye. Alright, cat smiling. This is a terrible cat. What you do is you have your filter, right, and that's a little patch in the image. You're just going to move this filter, move it around, move it around, and at each point you convolve the filter. If this is larger, you convolve each of the elements of the filter. Here maybe you have nine elements. So each of these elements here is convolved with the underlying image. At the end you aggregate all of them into a single point, usually by adding them up. And there you, from this image, you produce a new image that is a different thing. So if this kernel here, for example, is a specific kernel that detects lines, you might end up with, or that detects specifically up-down lines, you might end up with just the lines that go up and down in this. So the eyes here, here, right. So this might be the result of this convolution. Of course in CNN these convolutional kernels then are learned as parameters. So it seems pretty easy, right? You just simply take a kernel and kind of shift it around. At each point you convolve the underlying image and that's it. Well it's not so easy if you work on a manifold. And why is that? It's illustrated here on a sphere. So if you have a sphere and you place a kernel, it really matters which direction you place the kernel in. Of course I mean it does on an image, but bear with me. So here you place a kernel in the direction of this arrow, right? You place the kernel maybe like this here, you place your little kernel on it, and you say up. Basically up is here, right? And then you move that kernel around and ultimately you want to move it all the way to the other side of the sphere. So back here you want to move it over there, you want to move it all around the sphere, right? Now what happens if you move it this way, right? You convolve here, you move it this way, you convolve here. You see already by the red arrows where is up. Up is where the red arrows point, right? If you move it along here the red arrows will always point up up up up up. Okay so you arrive back here with your kernel. I'm gonna try to draw this dashed with the up in the kernel being this direction, because you've moved it around like so. 
But if you for some reason choose to move your kernel in another direction, namely in this direction up here, then as you can see if you place it here and then you place it here, you place it here, you place it back here and ultimately here. Where is up? If you just keep track of where up is in your kernel it's always going to be to the front of the sphere. So on one hand you have up being to the back here and on the other hand you have one up being to the front here. So this doesn't match. So it actually depends on which path you take from this original point to any other point. It depends which path you take, how your kernel is gonna end up there. And that's of course very unfortunate because we're not used to this on this on this 2d thing. Because if I you know move it down first and then up here, over here sorry, where is up in my... so if up is here, if it's down here, up is here and over here up is here. And if I kind of move it straight over here and then down and then here and then here, you see up is always the same direction. There is no problem in a flat surface. That's why we can simply define it as we do. But in a sphere or any kind of manifold it's called parallel transport is path dependent in technical terms. The way you transport a thing from one place to another really depends on the path you take. So this paper is trying to address this problem and define a convolution on any manifold. So how is this done? First of all to define a convolution on the curved surface what they do is they say okay we have a convolutional filter and the convolutional filter is actually some sort of a flat object and it works in what's called the tangent space of the manifold. The tangent space is a flat space that you can define at any point on the manifold. So here the manifold is the sphere. At some point P you define the tangent space as simply the tangent kind of a sheet, a straight sheet touching the surface at point P. So this is now a flat space where we can define a let's say a regular convolutional kernel as we did laying it up here. The question is how do you map points from the sphere to this tangent space and back and that's happening via this exponential map. The exponential map in this sense is not the same as the exponential map that you are used to by simply you know exponentiating things. The exponential map here basically means if I want to go from a point in the tangent space to a point on the manifold what I do is I take this vector here which is a straight vector in the tangent space and I go on the manifold in this direction for a predefined length. So this is usually a length of one on the manifold. For a predefined length I walk into this direction along the geodesic. It's along the shortest path into this direction and then I stop and where I end up that's where I basically end up. So that's the corresponding point to this point here on the tangent space. So to define a convolution fully it means that first you lay your kernel and then for each element in the kernel you will multiply that kernel entry, let me use a blue here, multiply that kernel entry by the corresponding point on the manifold itself. So by mapping this point in the tangent space to the manifold. You can also say you basically back project from the manifold to the tangent space and there you do your regular convolution. 
So that's how you define a convolution in the classic sense if you have for example a sphere and what the authors here of course noticed already is that this is dependent on how you get there and in technical terms it's called this is dependent on your gauge. So the gauge basically is defining this coordinate frame in the tangent space. So this tangent vector here is an abstract object, it's just a vector, but in order to do something with it, in order to do something with a kernel and convolution and so on, you have to express it in numbers and numbers are expressed with respect to a base usually. If you have a vector v here you can express it with respect to this two basis vectors. So maybe v is here is 2 and here is 3. So v can be represented as the vector 2, 3 with respect to the base e1, e2. And so this choice of base basically is what's called a gauge. Now I'm probably butchering this topic completely for any physicists or mathematicians listening but just kind of give you an impression. So this choice of bases is called a gauge and we can imagine a different choice of bases. So let me draw another basis here. So another basis might be 1, 2. So e1 is here, e2 is here. So the new coordinates here would be something like v can also be expressed in this new basis as say 1, here's maybe 1 and this is very far so this is maybe 5. So 5 in this direction. And to transform between the two there is formulas basically from from you know them from linear algebra from vector spaces. In general they're called gauge transformations and if we want our convolution to be invariant to the basically chosen coordinate frames we have to say in technical terms what we mean is the convolution should be gauge-equivariant. That means no matter which base we choose. If we choose this base or if we choose this the result should basically be the same. So within the computation of the convolution we must account for the fact of which gauge is chosen and then basically have the result be invariant. And with the result we don't mean the numbers of the result because these will change but we mean the the actual object that is resulting, the geometric object that is resulting should be equivalent under gauge transformations. So this is a it sounds very technical but the way I understand it is basically you want to define a convolution on these manifolds such that you it's such that the result is not dependent on exactly how you shift the kernel around as long as you account for the fact that you shifted it around this way should give you the same the same result. So for this they define a condition and the condition is that the kernel must behave as such. So the V is the input here and G minus 1 is a a transformation of the of the gauge as I understand it. And so basically if you transform the input by a different coordinate frame then at the kernel applied to that different input must behave exactly as the kernel applied to the original input and then perturbed by these two operations. So this is this you might notice this you might know things like this from discussions maybe of what it means for a function to be linear or something where the function applied to a transformed version must correspond to the function applied to the original version of the input transformed so the result transformed by some some operation. 
So if this holds so this is a condition on the kernel of the convolution and if you so if you define your convolution in this way this is a modification to the convolution on the tangent space that we had then your result will be gauge equivalent. What is this transformation and what is this new convolution they define they say if you do the convolution this way then these things will hold. So what is this this way basically again you convolve the kernel with the input but you the f here is the input k is the kernel but what you do if we come up here again what you do you have to do a slight modification your kernel here if you want to convolve it let's say this point here you would not combine this point with the point along the exponential map corresponding to it right this point here but what you would do is you would transport this point back along the geodesic to here and then you would and then you would compute your regular convolution. So this means sorry this is what this term here means technically. If you don't understand it don't worry I don't either I guess this is simply saying that if you perform convolutions in on manifolds in this way and you have the appropriate kernel then they will be gauge equivalent. So this is pretty cool because what they do next is they define the convolution on an icosahedron and an icosahedron is a shape a 3d geometric shape that's made of like triangles and I can try to maybe they have drawn it yes so all right this is an icosahedron and so they can now define a convolution on this with where a filter is basically the filter looks like this it's this kind of hexagon I yes and the and the filter is kind of shifted around and of course it's the problem is whenever it shifts over one of these boundaries here or whenever it shifts over the these corners here what do you do what do you do then because if you look at it you can't basically flatten the corner if you try to flatten the corner you're gonna have this wedge sticking out that's terrible you're gonna have a wedge here sticking out if you try to flatten the corner so you have to define basically the convolution on this they do it in their framework and specifically what they do is they flatten and pad the icosahedron to this representation so they put it into five pieces they have to pad a bit you see here each colored edge here this colored edge corresponds to this colored edge so that would be padded from here to nicely define this convolution and then they put this into a regular 2d image with the color things they are sometimes repeated in this image and then they define the filters in this following way so this these are the filters for basically for a six channel input image and what they have to do is they have to do a weight sharing between the filters in a very specific way and in order for the kernel to have these properties they need to see replicate these filters down here and if you look the different colors in these different let's call them channels they each have different intensities and if you look down here they're all slightly different which means they're all slightly different linear combinations of the of the filter up here or rotations basically they're all differently arranged but they're basically this blue field here is this blue field but is also let's see this one and this one and this one and this one so the the weights here are these original filters are basically arranged such that the weights are shared in this form down here but if you do this if you arrange 
them like this when you replicate each filter basically six times because you also want six output channels then the filter will have the desired properties and your convolution will be gauge equivalent so they apply this to to ICO M this so the complete algorithm is actually down here they can actually use if they pad the image in the correct way to the 2d image and expand the kernel to arrange it as we just saw they can use a regular 2d convolution to compute their result and that's pretty cool and this means this also is very very very efficient on this Ico Sahedron so what they do is they apply this to Ico M NIST where they project basically they project M NIST on an Ico Sahedron so they take the image M NIST and they project it onto this and then they try to classify it on that I can actually show that their method outperforms other method and learns these invariances so learns the the symmetries of the Ico Sahedron or basic sorry is invariant to them being invariant to the symmetries means you don't have to learn them anymore if you're not invariant to symmetries it means you have to learn each one of them separately right but if you're invariant to symmetries then you have only have to learn one thing once and then if the Ico Sahedron is rotated you're just like ma that's just the same thing as this other thing they also do this interestingly to climate pattern segmentation and also a kind of 2d or 3d omni-directional segmentation where you're in a room a 3d room and you have an omni-directional picture sorry from everywhere you have a picture a 3d sphere picture from everywhere you're asked to segment things in the room and actually outperform all other methods on these data sets so I find this extremely cool that kind of this ultra theoretical work starting out as ultra theoretical then gets implemented into something that beats state-of-the-art methods on relevant tasks alright so that was just a brief overview and a very dirty look at these things but I hope you got something out of it and thus far that was it for me bye bye | [
{
"start": 0,
"end": 5.68,
"text": " What you're looking at here are manifolds. Specifically you're looking at"
},
{
"start": 5.68,
"end": 13.36,
"text": " 2D manifolds embedded in a 3D space. So naturally these are some kind of bodies"
},
{
"start": 13.36,
"end": 17.96,
"text": " that have a surface and one of the things you might want to do with a"
},
{
"start": 17.96,
"end": 25.16,
"text": " manifold like this is to define a convolutional neural network to work on"
},
{
"start": 25.16,
"end": 29.48,
"text": " this surface. So usually we have convolutional neural network working on"
},
{
"start": 29.48,
"end": 36.120000000000005,
"text": " flat surfaces such as images. But what if you could actually work on a manifold"
},
{
"start": 36.120000000000005,
"end": 42.88,
"text": " like this? An easy example is a sphere. You might want to work on a sphere. Why is"
},
{
"start": 42.88,
"end": 47.6,
"text": " that useful? Maybe you want to predict the climate and then you actually want"
},
{
"start": 47.6,
"end": 53.400000000000006,
"text": " to work on the Earth's surface which is approximated by a sphere. So today we'll"
},
{
"start": 53.400000000000006,
"end": 58.2,
"text": " look at the following paper. Gauge-equivariant convolutional networks"
},
{
"start": 58.2,
"end": 67.12,
"text": " and the icosahedral CNN by Tachokohen, Maurice Weiler, Burkai Kichang,"
},
{
"start": 67.12,
"end": 75.64,
"text": " and Max Welling. So as I already said this paper tries to define"
},
{
"start": 75.64,
"end": 82.32000000000001,
"text": " convolutional neural networks on any kind of manifold. So what's the problem"
},
{
"start": 82.32000000000001,
"end": 87.80000000000001,
"text": " inherently when you're doing this? Can't you just, you know, place a filter"
},
{
"start": 87.8,
"end": 92.75999999999999,
"text": " move it around like you do in a regular CNN? That's exactly the problem actually."
},
{
"start": 92.75999999999999,
"end": 103.44,
"text": " So if you have a picture, and let me draw a picture of a cat, right? Cat here, here,"
},
{
"start": 103.44,
"end": 109.75999999999999,
"text": " here, here, eye, eye. Alright, cat smiling. This is a terrible cat. What you do is you"
},
{
"start": 109.75999999999999,
"end": 115.16,
"text": " have your filter, right, and that's a little patch in the image. You're just"
},
{
"start": 115.16,
"end": 121,
"text": " going to move this filter, move it around, move it around, and at each point you"
},
{
"start": 121,
"end": 125.6,
"text": " convolve the filter. If this is larger, you convolve each of the elements of the"
},
{
"start": 125.6,
"end": 129.76,
"text": " filter. Here maybe you have nine elements. So each of these elements here is"
},
{
"start": 129.76,
"end": 136.51999999999998,
"text": " convolved with the underlying image. At the end you aggregate all of them into a"
},
{
"start": 136.51999999999998,
"end": 142.4,
"text": " single point, usually by adding them up. And there you, from this image, you"
},
{
"start": 142.4,
"end": 149.88,
"text": " produce a new image that is a different thing. So if this kernel here, for example,"
},
{
"start": 149.88,
"end": 155.68,
"text": " is a specific kernel that detects lines, you might end up with, or that detects"
},
{
"start": 155.68,
"end": 163.16,
"text": " specifically up-down lines, you might end up with just the lines that go up and"
},
{
"start": 163.16,
"end": 171.12,
"text": " down in this. So the eyes here, here, right. So this might be the result of this"
},
{
"start": 171.12,
"end": 175.36,
"text": " convolution. Of course in CNN these convolutional kernels then are learned"
},
{
"start": 175.36,
"end": 182.08,
"text": " as parameters. So it seems pretty easy, right? You just simply take a kernel and"
},
{
"start": 182.08,
"end": 187.28,
"text": " kind of shift it around. At each point you convolve the underlying image"
},
{
"start": 187.28,
"end": 193.16,
"text": " and that's it. Well it's not so easy if you work on a manifold. And why is that?"
},
{
"start": 193.16,
"end": 199.84,
"text": " It's illustrated here on a sphere. So if you have a sphere and you place a kernel,"
},
{
"start": 199.84,
"end": 204.36,
"text": " it really matters which direction you place the kernel in. Of course I mean it"
},
{
"start": 204.36,
"end": 209.04,
"text": " does on an image, but bear with me. So here you place a kernel in the direction"
},
{
"start": 209.04,
"end": 213.76,
"text": " of this arrow, right? You place the kernel maybe like this here, you place your little"
},
{
"start": 213.76,
"end": 221.4,
"text": " kernel on it, and you say up. Basically up is here, right? And then you move that"
},
{
"start": 221.4,
"end": 225.32,
"text": " kernel around and ultimately you want to move it all the way to the other side of"
},
{
"start": 225.32,
"end": 229.4,
"text": " the sphere. So back here you want to move it over there, you want to move it all"
},
{
"start": 229.4,
"end": 236.08,
"text": " around the sphere, right? Now what happens if you move it this way, right? You"
},
{
"start": 236.08,
"end": 240.8,
"text": " convolve here, you move it this way, you convolve here. You see already by the red"
},
{
"start": 240.8,
"end": 246.92000000000002,
"text": " arrows where is up. Up is where the red arrows point, right? If you move it along"
},
{
"start": 246.92000000000002,
"end": 254.16,
"text": " here the red arrows will always point up up up up up. Okay so you arrive back here"
},
{
"start": 254.16,
"end": 261.92,
"text": " with your kernel. I'm gonna try to draw this dashed with the up in the"
},
{
"start": 261.92,
"end": 267.04,
"text": " kernel being this direction, because you've moved it around like so. But if"
},
{
"start": 267.04,
"end": 273.04,
"text": " you for some reason choose to move your kernel in another direction, namely in"
},
{
"start": 273.04,
"end": 278.64,
"text": " this direction up here, then as you can see if you place it here and then you"
},
{
"start": 278.64,
"end": 284.8,
"text": " place it here, you place it here, you place it back here and ultimately here."
},
{
"start": 284.8,
"end": 291.2,
"text": " Where is up? If you just keep track of where up is in your kernel it's always"
},
{
"start": 291.2,
"end": 297.41999999999996,
"text": " going to be to the front of the sphere. So on one hand you have up being to the"
},
{
"start": 297.41999999999996,
"end": 302.52,
"text": " back here and on the other hand you have one up being to the front here. So this"
},
{
"start": 302.52,
"end": 309.64,
"text": " doesn't match. So it actually depends on which path you take from this original"
},
{
"start": 309.64,
"end": 317.47999999999996,
"text": " point to any other point. It depends which path you take, how your kernel is"
},
{
"start": 317.47999999999996,
"end": 321.76,
"text": " gonna end up there. And that's of course very unfortunate because we're not used"
},
{
"start": 321.76,
"end": 327.4,
"text": " to this on this on this 2d thing. Because if I you know move it down first and then"
},
{
"start": 327.4,
"end": 334.84,
"text": " up here, over here sorry, where is up in my... so if up is here, if it's down here, up"
},
{
"start": 334.84,
"end": 341.56,
"text": " is here and over here up is here. And if I kind of move it straight over here and"
},
{
"start": 341.56,
"end": 346,
"text": " then down and then here and then here, you see up is always the same direction."
},
{
"start": 346,
"end": 353.4,
"text": " There is no problem in a flat surface. That's why we can simply define it"
},
{
"start": 353.4,
"end": 358.44,
"text": " as we do. But in a sphere or any kind of manifold it's called parallel"
},
{
"start": 358.44,
"end": 366.79999999999995,
"text": " transport is path dependent in technical terms. The way you transport a thing from"
},
{
"start": 366.79999999999995,
"end": 371.88,
"text": " one place to another really depends on the path you take. So this paper is"
},
{
"start": 371.88,
"end": 379.47999999999996,
"text": " trying to address this problem and define a convolution on any manifold. So"
},
{
"start": 379.48,
"end": 387.96000000000004,
"text": " how is this done? First of all to define a convolution on the curved surface what"
},
{
"start": 387.96000000000004,
"end": 391.44,
"text": " they do is they say okay we have a convolutional filter and the"
},
{
"start": 391.44,
"end": 397.04,
"text": " convolutional filter is actually some sort of a flat object and it works in"
},
{
"start": 397.04,
"end": 401.84000000000003,
"text": " what's called the tangent space of the manifold. The tangent space is a flat"
},
{
"start": 401.84000000000003,
"end": 406.92,
"text": " space that you can define at any point on the manifold. So here the manifold is"
},
{
"start": 406.92,
"end": 413.72,
"text": " the sphere. At some point P you define the tangent space as simply the tangent"
},
{
"start": 413.72,
"end": 421.36,
"text": " kind of a sheet, a straight sheet touching the surface at point P. So this"
},
{
"start": 421.36,
"end": 426.16,
"text": " is now a flat space where we can define a let's say a regular convolutional"
},
{
"start": 426.16,
"end": 434.28000000000003,
"text": " kernel as we did laying it up here. The question is how do you map"
},
{
"start": 434.28,
"end": 438.67999999999995,
"text": " points from the sphere to this tangent space and back and that's happening via"
},
{
"start": 438.67999999999995,
"end": 444.71999999999997,
"text": " this exponential map. The exponential map in this sense is not the same as the"
},
{
"start": 444.71999999999997,
"end": 450.76,
"text": " exponential map that you are used to by simply you know exponentiating things."
},
{
"start": 450.76,
"end": 458.28,
"text": " The exponential map here basically means if I want to go from a point in"
},
{
"start": 458.28,
"end": 463.64,
"text": " the tangent space to a point on the manifold what I do is I take this vector"
},
{
"start": 463.64,
"end": 469.64,
"text": " here which is a straight vector in the tangent space and I go on the manifold in"
},
{
"start": 469.64,
"end": 480,
"text": " this direction for a predefined length. So this is usually a length of one on"
},
{
"start": 480,
"end": 485.36,
"text": " the manifold. For a predefined length I walk into this direction along the"
},
{
"start": 485.36,
"end": 490.71999999999997,
"text": " geodesic. It's along the shortest path into this direction and then I stop and"
},
{
"start": 490.72,
"end": 496.56,
"text": " where I end up that's where I basically end up. So that's the corresponding point"
},
{
"start": 496.56,
"end": 502.72,
"text": " to this point here on the tangent space. So to define a convolution fully it means"
},
{
"start": 502.72,
"end": 509.08000000000004,
"text": " that first you lay your kernel and then for each element in the kernel you will"
},
{
"start": 509.08000000000004,
"end": 515.64,
"text": " multiply that kernel entry, let me use a blue here, multiply that kernel entry by"
},
{
"start": 515.64,
"end": 525.12,
"text": " the corresponding point on the manifold itself. So by mapping this"
},
{
"start": 525.12,
"end": 529.8,
"text": " point in the tangent space to the manifold. You can also say you basically"
},
{
"start": 529.8,
"end": 534.04,
"text": " back project from the manifold to the tangent space and there you do your"
},
{
"start": 534.04,
"end": 540.6,
"text": " regular convolution. So that's how you define a convolution in the classic sense"
},
{
"start": 540.6,
"end": 549,
"text": " if you have for example a sphere and what the authors here of course noticed"
},
{
"start": 549,
"end": 555.64,
"text": " already is that this is dependent on how you get there and in technical terms"
},
{
"start": 555.64,
"end": 561.64,
"text": " it's called this is dependent on your gauge. So the gauge basically is defining"
},
{
"start": 561.64,
"end": 566.84,
"text": " this coordinate frame in the tangent space. So this tangent vector here is an"
},
{
"start": 566.84,
"end": 571.5600000000001,
"text": " abstract object, it's just a vector, but in order to do something with it, in"
},
{
"start": 571.5600000000001,
"end": 574.4,
"text": " order to do something with a kernel and convolution and so on, you have to"
},
{
"start": 574.4,
"end": 580.44,
"text": " express it in numbers and numbers are expressed with respect to a base"
},
{
"start": 580.44,
"end": 587.6800000000001,
"text": " usually. If you have a vector v here you can express it with respect to this two"
},
{
"start": 587.6800000000001,
"end": 596.4000000000001,
"text": " basis vectors. So maybe v is here is 2 and here is 3. So v can be represented"
},
{
"start": 596.4,
"end": 605.56,
"text": " as the vector 2, 3 with respect to the base e1, e2. And so this choice of base"
},
{
"start": 605.56,
"end": 612.16,
"text": " basically is what's called a gauge. Now I'm probably butchering this topic"
},
{
"start": 612.16,
"end": 617.0799999999999,
"text": " completely for any physicists or mathematicians listening but just kind"
},
{
"start": 617.0799999999999,
"end": 625.76,
"text": " of give you an impression. So this choice of bases is called a gauge and we"
},
{
"start": 625.76,
"end": 630.6,
"text": " can imagine a different choice of bases. So let me draw another basis here. So"
},
{
"start": 630.6,
"end": 642.4399999999999,
"text": " another basis might be 1, 2. So e1 is here, e2 is here. So the new"
},
{
"start": 642.4399999999999,
"end": 648.3199999999999,
"text": " coordinates here would be something like v can also be expressed in this new"
},
{
"start": 648.3199999999999,
"end": 655.16,
"text": " basis as say 1, here's maybe 1 and this is very far so this is maybe 5. So 5 in"
},
{
"start": 655.16,
"end": 662.56,
"text": " this direction. And to transform between the two there is formulas basically from"
},
{
"start": 662.56,
"end": 666.8399999999999,
"text": " from you know them from linear algebra from vector spaces. In general they're"
},
{
"start": 666.8399999999999,
"end": 674.06,
"text": " called gauge transformations and if we want our convolution to be invariant to"
},
{
"start": 674.06,
"end": 681.12,
"text": " the basically chosen coordinate frames we have to say in technical terms what"
},
{
"start": 681.12,
"end": 687.08,
"text": " we mean is the convolution should be gauge-equivariant. That means no matter"
},
{
"start": 687.08,
"end": 694.44,
"text": " which base we choose. If we choose this base or if we choose this the result"
},
{
"start": 694.44,
"end": 701.08,
"text": " should basically be the same. So within the computation of the convolution we"
},
{
"start": 701.08,
"end": 707.04,
"text": " must account for the fact of which gauge is chosen and then basically have the"
},
{
"start": 707.04,
"end": 711.56,
"text": " result be invariant. And with the result we don't mean the numbers of the result"
},
{
"start": 711.56,
"end": 717.48,
"text": " because these will change but we mean the the actual object that is resulting,"
},
{
"start": 717.48,
"end": 723.48,
"text": " the geometric object that is resulting should be equivalent under gauge"
},
{
"start": 723.48,
"end": 733.64,
"text": " transformations. So this is a it sounds very technical but the way I understand"
},
{
"start": 733.64,
"end": 740.4,
"text": " it is basically you want to define a convolution on these manifolds such that"
},
{
"start": 740.4,
"end": 748.6,
"text": " you it's such that the result is not dependent on exactly how you shift the"
},
{
"start": 748.6,
"end": 754.16,
"text": " kernel around as long as you account for the fact that you shifted it around this"
},
{
"start": 754.16,
"end": 764.56,
"text": " way should give you the same the same result. So for this they define a"
},
{
"start": 764.56,
"end": 772.64,
"text": " condition and the condition is that the kernel must behave as such. So the V is"
},
{
"start": 772.64,
"end": 783.38,
"text": " the input here and G minus 1 is a a transformation of the of the gauge as I"
},
{
"start": 783.38,
"end": 789.62,
"text": " understand it. And so basically if you transform the input by a different"
},
{
"start": 789.62,
"end": 795.08,
"text": " coordinate frame then at the kernel applied to that different input must"
},
{
"start": 795.08,
"end": 805.2,
"text": " behave exactly as the kernel applied to the original input and then perturbed by"
},
{
"start": 805.2,
"end": 812.12,
"text": " these two operations. So this is this you might notice this you might know things"
},
{
"start": 812.12,
"end": 818.4,
"text": " like this from discussions maybe of what it means for a function to be linear or"
},
{
"start": 818.4,
"end": 825.08,
"text": " something where the function applied to a transformed version must correspond"
},
{
"start": 825.08,
"end": 830.8,
"text": " to the function applied to the original version of the input transformed so the"
},
{
"start": 830.8,
"end": 838.84,
"text": " result transformed by some some operation. So if this holds so this is a"
},
{
"start": 838.84,
"end": 843.4,
"text": " condition on the kernel of the convolution and if you so if you define"
},
{
"start": 843.4,
"end": 850.8000000000001,
"text": " your convolution in this way this is a modification to the convolution on the"
},
{
"start": 850.8000000000001,
"end": 856.0400000000001,
"text": " tangent space that we had then your result will be gauge"
},
{
"start": 856.0400000000001,
"end": 860.6800000000001,
"text": " equivalent. What is this transformation and what is this new"
},
{
"start": 860.6800000000001,
"end": 865.24,
"text": " convolution they define they say if you do the convolution this way then these"
},
{
"start": 865.24,
"end": 871.04,
"text": " things will hold. So what is this this way basically again you convolve the"
},
{
"start": 871.04,
"end": 878.84,
"text": " kernel with the input but you the f here is the input k is the kernel but what"
},
{
"start": 878.84,
"end": 885.84,
"text": " you do if we come up here again what you do you have to do a slight"
},
{
"start": 885.84,
"end": 892.28,
"text": " modification your kernel here if you want to convolve it let's say this point"
},
{
"start": 892.28,
"end": 900.16,
"text": " here you would not combine this point with the point along the exponential map"
},
{
"start": 900.16,
"end": 905.4,
"text": " corresponding to it right this point here but what you would do is you would"
},
{
"start": 905.4,
"end": 915.04,
"text": " transport this point back along the geodesic to here and then you would and"
},
{
"start": 915.04,
"end": 924.7199999999999,
"text": " then you would compute your regular convolution. So this means sorry this is"
},
{
"start": 924.7199999999999,
"end": 933.16,
"text": " what this term here means technically. If you don't understand it don't worry I"
},
{
"start": 933.16,
"end": 939.8,
"text": " don't either I guess this is simply saying that if you perform convolutions"
},
{
"start": 939.8,
"end": 947.12,
"text": " in on manifolds in this way and you have the appropriate kernel then they will be"
},
{
"start": 947.12,
"end": 953.4399999999999,
"text": " gauge equivalent. So this is pretty cool because what they do next is they define"
},
{
"start": 953.4399999999999,
"end": 964.92,
"text": " the convolution on an icosahedron and an icosahedron is a shape a 3d geometric"
},
{
"start": 964.92,
"end": 970.52,
"text": " shape that's made of like triangles and I can try to maybe they have drawn it"
},
{
"start": 970.52,
"end": 977.5999999999999,
"text": " yes so all right this is an icosahedron and so they can now define a"
},
{
"start": 977.5999999999999,
"end": 984.52,
"text": " convolution on this with where a filter is basically the filter looks like this"
},
{
"start": 984.52,
"end": 994.5999999999999,
"text": " it's this kind of hexagon I yes and the and the filter is kind of shifted around"
},
{
"start": 994.6,
"end": 999.72,
"text": " and of course it's the problem is whenever it shifts over one of these"
},
{
"start": 999.72,
"end": 1006.16,
"text": " boundaries here or whenever it shifts over the these corners here what do you"
},
{
"start": 1006.16,
"end": 1011.6,
"text": " do what do you do then because if you look at it you can't basically flatten"
},
{
"start": 1011.6,
"end": 1016.84,
"text": " the corner if you try to flatten the corner you're gonna have this wedge"
},
{
"start": 1016.84,
"end": 1024.92,
"text": " sticking out that's terrible you're gonna have a wedge here sticking out if"
},
{
"start": 1024.92,
"end": 1031.08,
"text": " you try to flatten the corner so you have to define basically the convolution"
},
{
"start": 1031.08,
"end": 1035.76,
"text": " on this they do it in their framework and specifically what they do is they"
},
{
"start": 1035.76,
"end": 1043.28,
"text": " flatten and pad the icosahedron to this representation so they put it into five"
},
{
"start": 1043.28,
"end": 1049.08,
"text": " pieces they have to pad a bit you see here each colored edge here this colored"
},
{
"start": 1049.08,
"end": 1055.92,
"text": " edge corresponds to this colored edge so that would be padded from here to nicely"
},
{
"start": 1055.92,
"end": 1063.32,
"text": " define this convolution and then they put this into a regular 2d image with"
},
{
"start": 1063.32,
"end": 1069.56,
"text": " the color things they are sometimes repeated in this image and then they"
},
{
"start": 1069.56,
"end": 1078.24,
"text": " define the filters in this following way so this these are the filters for"
},
{
"start": 1078.24,
"end": 1086.04,
"text": " basically for a six channel input image and what they have to do is they have to"
},
{
"start": 1086.04,
"end": 1092.48,
"text": " do a weight sharing between the filters in a very specific way and in order for"
},
{
"start": 1092.48,
"end": 1097.72,
"text": " the kernel to have these properties they need to see replicate these filters down"
},
{
"start": 1097.72,
"end": 1104.16,
"text": " here and if you look the different colors in these different let's call"
},
{
"start": 1104.16,
"end": 1111.28,
"text": " them channels they each have different intensities and if you look down here"
},
{
"start": 1111.28,
"end": 1114.72,
"text": " they're all slightly different which means they're all slightly different"
},
{
"start": 1114.72,
"end": 1120.48,
"text": " linear combinations of the of the filter up here or rotations basically"
},
{
"start": 1120.48,
"end": 1126,
"text": " they're all differently arranged but they're basically this blue field here"
},
{
"start": 1126,
"end": 1134.84,
"text": " is this blue field but is also let's see this one and this one and this one and"
},
{
"start": 1134.84,
"end": 1142.76,
"text": " this one so the the weights here are these original filters are basically"
},
{
"start": 1142.76,
"end": 1150.52,
"text": " arranged such that the weights are shared in this form down here but if you"
},
{
"start": 1150.52,
"end": 1155,
"text": " do this if you arrange them like this when you replicate each filter basically"
},
{
"start": 1155,
"end": 1160.68,
"text": " six times because you also want six output channels then the filter will have"
},
{
"start": 1160.68,
"end": 1165.44,
"text": " the desired properties and your convolution will be gauge equivalent so"
},
{
"start": 1165.44,
"end": 1173.92,
"text": " they apply this to to ICO M this so the complete algorithm is actually down here"
},
{
"start": 1173.92,
"end": 1178.64,
"text": " they can actually use if they pad the image in the correct way to the 2d image"
},
{
"start": 1178.64,
"end": 1183.88,
"text": " and expand the kernel to arrange it as we just saw they can use a regular 2d"
},
{
"start": 1183.88,
"end": 1189.8000000000002,
"text": " convolution to compute their result and that's pretty cool and this means this"
},
{
"start": 1189.8000000000002,
"end": 1198.48,
"text": " also is very very very efficient on this Ico Sahedron so what they do is they"
},
{
"start": 1198.48,
"end": 1204.5600000000002,
"text": " apply this to Ico M NIST where they project basically they project M NIST on"
},
{
"start": 1204.5600000000002,
"end": 1210.4,
"text": " an Ico Sahedron so they take the image M NIST and they project it onto this and"
},
{
"start": 1210.4,
"end": 1215.8400000000001,
"text": " then they try to classify it on that I can actually show that their method"
},
{
"start": 1215.8400000000001,
"end": 1222.64,
"text": " outperforms other method and learns these invariances so learns the the"
},
{
"start": 1222.64,
"end": 1229.1200000000001,
"text": " symmetries of the Ico Sahedron or basic sorry is invariant to them being"
},
{
"start": 1229.1200000000001,
"end": 1233,
"text": " invariant to the symmetries means you don't have to learn them anymore if"
},
{
"start": 1233,
"end": 1237.48,
"text": " you're not invariant to symmetries it means you have to learn each one of them"
},
{
"start": 1237.48,
"end": 1242.2,
"text": " separately right but if you're invariant to symmetries then you have only have to"
},
{
"start": 1242.2,
"end": 1246.56,
"text": " learn one thing once and then if the Ico Sahedron is rotated you're just like"
},
{
"start": 1246.56,
"end": 1250.4,
"text": " ma that's just the same thing as this other thing they also do this"
},
{
"start": 1250.4,
"end": 1258,
"text": " interestingly to climate pattern segmentation and also a kind of 2d or 3d"
},
{
"start": 1258,
"end": 1264.68,
"text": " omni-directional segmentation where you're in a room a 3d room and you have"
},
{
"start": 1264.68,
"end": 1270.5600000000002,
"text": " an omni-directional picture sorry from everywhere you have a picture a 3d"
},
{
"start": 1270.5600000000002,
"end": 1275.48,
"text": " sphere picture from everywhere you're asked to segment things in the room and"
},
{
"start": 1275.48,
"end": 1283.4,
"text": " actually outperform all other methods on these data sets so I find this extremely"
},
{
"start": 1283.4,
"end": 1289.72,
"text": " cool that kind of this ultra theoretical work starting out as ultra theoretical"
},
{
"start": 1289.72,
"end": 1294.96,
"text": " then gets implemented into something that beats state-of-the-art methods on"
},
{
"start": 1294.96,
"end": 1301.64,
"text": " relevant tasks alright so that was just a brief overview and a very dirty look"
},
{
"start": 1301.64,
"end": 1308,
"text": " at these things but I hope you got something out of it and thus far that was"
},
{
"start": 1308,
"end": 1320.56,
"text": " it for me bye bye"
}
] |
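A minimal sketch of the pad-and-expand trick described in the transcript of the record above: the unfolded icosahedron is stored as a padded 2D feature map with an orientation axis folded into the channels, one base kernel is replicated across the six orientations with shared weights, and a single ordinary 2D convolution then produces the result. This is an illustrative simplification, not the authors' implementation: the chart padding of the atlas and the hexagonal spatial rotation of the kernel are omitted, and all names and sizes (`R`, `base`, the 10x10 grid) are assumptions made up for the example.

```python
# Toy sketch of "expand the kernel, then run one regular 2D convolution".
# Not the paper's exact scheme: chart padding and hexagonal kernel rotation
# are left out; only the weight-replication structure is shown.
import torch
import torch.nn.functional as F

R = 6                       # orientations tracked per location
c_in, c_out = 2, 4          # feature channels per orientation
k = 3                       # spatial kernel size (stand-in for the hex kernel)

# One set of base weights, with an explicit orientation axis.
base = torch.randn(c_out, c_in, R, k, k)

# Expand: for each output orientation r, cyclically shift the orientation axis,
# so the six copies of the filter share the same underlying weights.
expanded = torch.stack([base.roll(shifts=r, dims=2) for r in range(R)], dim=0)
weight = expanded.reshape(R * c_out, R * c_in, k, k)   # fold orientations into channels

x = torch.randn(1, R * c_in, 10, 10)     # padded 2D "atlas" of the manifold
y = F.conv2d(x, weight, padding=k // 2)  # one standard 2D convolution does the work
print(y.shape)                           # (1, R * c_out, 10, 10)
```

In the paper the expansion additionally permutes the spatial taps of the hexagonal kernel, and the padding copies feature values between the five charts of the unfolded icosahedron, which is what makes the result satisfy the gauge-equivariance condition; the sketch only illustrates why everything collapses into a single standard convolution call.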
H6Qiegq_36c | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Processing Megapixel Images with Deep Attention-Sampling Models | [
"Science & Technology"
] | [
"machine learning",
"deep learning",
"research",
"attention",
"attention sampling",
"attention model",
"attention distribution",
"megapixel images",
"large images",
"artificial intelligence",
"megapixel mnist",
"street sign dataset",
"monte carlo",
"speed",
"memory",
"cnn",
"convolutional neural networks",
"limited resources",
"ai",
"image recognition",
"image classifier"
] | Current CNNs have to downsample large images before processing them, which can lose a lot of detail information. This paper proposes attention sampling, which learns to selectively process parts of any large image in full resolution, while discarding uninteresting bits. This leads to enormous gains in speed and large savings in memory consumption.
https://arxiv.org/abs/1905.03711
Abstract:
Existing deep architectures cannot operate on very large signals such as megapixel images due to computational and memory constraints. To tackle this limitation, we propose a fully differentiable end-to-end trainable model that samples and processes only a fraction of the full resolution input image. The locations to process are sampled from an attention distribution computed from a low resolution view of the input. We refer to our method as attention sampling and it can process images of several megapixels with a standard single GPU setup. We show that sampling from the attention distribution results in an unbiased estimator of the full model with minimal variance, and we derive an unbiased estimator of the gradient that we use to train our model end-to-end with a normal SGD procedure. This new method is evaluated on three classification tasks, where we show that it allows to reduce computation and memory footprint by an order of magnitude for the same accuracy as classical architectures. We also show the consistency of the sampling that indeed focuses on informative parts of the input images.
Authors: Angelos Katharopoulos, François Fleuret | Hi there, today we're looking at Processing Megapixel Images with Deep Attention-Sampling Models by Angelos Katharopoulos and François Fleuret. This is another paper whose talk I saw at ICML, and it's a pretty cool idea; it's pretty simple and apparently it works very well. So consider the following image here of a street situation and ask yourself: if a self-driving car sees this, what are the kinds of things it needs to be aware of? Of course one of the things it needs to be aware of is the road, the cars and so on, but also what's encircled in red here, the street sign. The street sign especially is important because there's a number on it, and you want to see what the number is, otherwise you won't be able to adjust your speed. So if this is now a really large image, if the camera is really good and the dimensions of this image are really large, then current machine learning methods have a problem, because they only go up to maybe something like 200 by 200 pixels, or whatever the current ImageNet models downsample to, and so on. So if this image is much larger than that, what current machine learning models would do is simply downsample it, that is, compress the size, just compress it a bit and so on. And by that, as you see here on the right: if you could cut the original patch out of the image and enlarge it, it would look like this; if you compress the whole image, the same patch would now look like this, blurred. So in the bottom half you'd be able to recognize the number, in the top half you wouldn't. A standard CNN might still be able to recognize the road and the car at the lower resolution, but not the speed sign. What we want is a method that can selectively pay attention to parts of the image that it finds interesting and then look at those parts in full detail, while basically deciding to discard other parts, such as the sky here, completely. So this paper is one that does this, and does so in a very efficient manner. The basic premise is very simple. All right, I'm going to show you on the same image. What you do first is actually compress the image, so this image will become a smaller image, right? So here maybe this is 1000 by 2000, and you compress it down to maybe 100 by 200. Still the same image, but compressed: here's the road, here's a bunch of trees (I'm very good at drawing trees), here's this street sign, here is a car, and here is another car. All right, and there is a sky up here. Now what you do is, on this smaller version, you classify every location; I guess you could subsample, but you want to classify every single location by how interesting it is. And what they do is they take this and just put it through what they call an attention network, which is just a neural network, in their case a CNN, that for each location here, for each blue location, outputs a value a(x) at coordinates i, j of this image x. Right, so all of these blue things here are i's and j's, different i's and j's. And what does this give you? If you normalize correctly, so if you normalize over all the a_ij, this gives you a distribution over this image. So if we look at it in 1D, this gives you a distribution, not a continuous one, in this case a discrete one.
It tells you how interesting each patch is. And at the end, if you have this distribution (so let's finish it here), what you want to do is say which are the most interesting locations. So this one's pretty high, and these are very high, so that might correspond to over here, that might correspond to some location. So this location is very high and these locations are very interesting, and only those locations you take out, and then only those you process in full resolution. So you might have extracted, let's say, four patches; now you have four of these patches, and each of them individually you run through a second neural network, another CNN, which is called F, the feature network. The feature network will take a patch and output a vector of features: you feed the patch in and get out a feature vector. And then your final output, which they call G (let me colorize this; actually, let's not call it G, let's call it O), is a sum over all the patches you have extracted down here. So you sum, over all your extracted patches p at locations i, j, the features F(patch_p), and you weigh each feature by the attention a_ij it got at that location. It looks more complicated than it is: you simply compute these features, using the feature network, only at the positions that the attention network says are interesting; then you take the features from those interesting positions and weigh them by how much attention they got in the attention distribution, and that is the final output of the network. And it makes intuitive sense: one network decides what is interesting, the other network decides what we are going to do with the interesting things in this image.
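To make the two-network pipeline just described concrete, here is a minimal numpy sketch of one forward pass: an attention map is computed on a cheap downsampled view, normalized into a distribution, a few high-attention locations are selected, full-resolution patches are extracted there, a feature function is applied to each patch, and the features are combined weighted by their attention. Everything here is an illustrative assumption rather than the paper's code: `attention_net` and `feature_net` are tiny stand-ins for the two CNNs, the patch size is arbitrary, and the paper actually samples locations from the distribution rather than taking a deterministic top-k.

```python
# Illustrative sketch of attention sampling, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

def attention_net(low_res):
    # stand-in for the attention CNN: one positive score per location
    return np.abs(low_res) + 1e-6

def feature_net(patch):
    # stand-in for the feature CNN: a fixed-size feature vector per patch
    return np.array([patch.mean(), patch.std(), patch.max()])

full = rng.random((1000, 2000))            # the "megapixel" image
scale = 10
low = full[::scale, ::scale]               # cheap low-resolution view (100 x 200)

scores = attention_net(low)
attn = scores / scores.sum()               # distribution over locations

k, p = 4, 50                               # number of patches, patch size
flat = np.argsort(attn, axis=None)[-k:]    # k highest-attention locations
locs = np.column_stack(np.unravel_index(flat, attn.shape))

out = np.zeros(3)
for i, j in locs:
    r, c = i * scale, j * scale            # map the location back to full resolution
    patch = full[r:r + p, c:c + p]         # extract the full-resolution patch
    out += attn[i, j] * feature_net(patch) # attention-weighted feature sum

print(out)                                 # pooled representation for a classifier head
```

In the actual method the locations are drawn from the attention distribution, which is what makes the downstream estimator unbiased and lets the whole pipeline be trained end to end; the deterministic top-k selection above only keeps the sketch short.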
And the cool thing about this is that you can basically decide how many of these patches you want to extract, and you can decide at what resolution you want to process the image, and all of these are parameters that you set according to how much time and how much memory you have for your computation. So that's pretty cool, pretty modular: we can scale up, we can scale down. Another cool thing is the theoretical guarantees that they give. Basically they prove that the way they do it, in particular the way they extract the patches, and especially if they use sampling without replacement, is such that, if they weight things correctly, this is actually an unbiased estimator of the true neural network, that is, of what you would get if you were to evaluate the full image, basically every patch, in full resolution. So only taking the patches where the attention focuses gives an unbiased estimator, and not only is it unbiased, it is in fact the estimator with the smallest variance, and that's what they prove here: the minimum-variance estimator. This is pretty interesting, pretty cool, and it works pretty well (a small numerical illustration of the unbiasedness claim follows after this record's segment list). They also show how to derive the gradient update when you train with this attention sampling: you now train your machine learning system not on the whole image but only on a subset of the image patches, yet in expectation it still behaves as if you were training on the entire image. Pretty neat. Here they show how this compares to a full CNN, in this case a CNN where the picture is simply downsampled and then classified, on what's called Megapixel MNIST. In Megapixel MNIST you have a large image and you put three identical digits in there, for example five, five, five from the MNIST data set; you also put in two other random digits, like a two and a three, and a bunch of noise patches somewhere. The task is to recognize the dominant digit, which in this case would be five: five here, five here, and where was the other one, five here. So if you give this to a regular CNN, you see it does about this well; this is the training loss here, this is the test loss, and it takes this much time per epoch and this much time to evaluate. If you now use this attention sampling, and as I said you can modulate how many patches you take, then as you go down you take more patches, and we would expect it to take more time, which is exactly what happens. You see, for example, down here in the test error: if you take five patches per image it takes very little time, but the error is still better than if you use the CNN, simply because you can now pay much more attention to details. As you use more patches, your test error drops and your training loss drops too, so using more patches gives you a better and better performing model; you sacrifice a little bit of time, but it is still never as slow as the CNN, even though that is a downsampled CNN, right? So that is very interesting and very cool: not only do they beat the baseline in terms of error, they also beat it by a lot in terms of speed. Now look at what the model does as it learns. Here you see a given image; it is always the same image from the data set, and at the beginning they have marked where the three relevant digits are in the picture with red circles. So if you look
at how this distribution evolves over the training of this model, it is pretty interesting. Yellow basically means high attention, so at the beginning you have high attention everywhere in the image, right? Then, as training goes on and on, you see, for example here, that it pays attention to all the locations where there is something in the image; this could be one of the three relevant digits, but it could also be one of the digits that is trying to distract the model, like the false digits or the noise patches. And as you go further and further, it really learns to pay attention only to the relevant digits and then classify those at full resolution. So this really shows that this kind of attention distribution learns something very meaningful. They do more experiments on two data sets. This is a histopathology data set right here, where the goal is, I think, to recognize epithelial cells, this type of cell, and you can see that this here is the baseline and this here is the new method. The baseline basically does a similar thing, namely it processes the image in patches, maybe in succession, but it still processes every single patch, whereas attention sampling only processes the patches that the attention distribution suggests. This other data set here is the street sign data set that you saw at the beginning; again, I think this is the baseline and this is the attention sampling, and both learn to pay attention to the street signs, but again the attention sampling is much more efficient. So here you see the baseline performance; the attention sampling performance is similar in terms of test error, but if you look at how much time and how much memory the baseline uses per sample and compare this to the attention sampling, you see that they save at least an order of magnitude in time and memory. The same thing goes for the street sign data set: the test error is similar for the attention sampling, but again time and memory are much, much lower. So attention sampling is faster and more memory efficient than the baseline, and that makes it easy to process these megapixel images; they even say you can process megapixel images on a single CPU or GPU, and I really like this because it kind of brings this research back to, let's say, regular people, or maybe universities that don't have as much money as large companies. All in all a very cool paper with very neat experiments; they have a lot in the appendix, check it out, where they show their attention distributions on these images, and their theoretical analysis is pretty easy to follow if you want to check that out. And with that, thanks for listening, and bye bye. | [
{
"start": 0,
"end": 4.92,
"text": " Hi there, today we're looking at processing megapixel images with deep"
},
{
"start": 4.92,
"end": 12.72,
"text": " attention sampling models by Angelos Kateropoulos and François Fleuret."
},
{
"start": 12.72,
"end": 20.88,
"text": " This is another paper that I saw the talk of at ICML and it's a pretty cool idea,"
},
{
"start": 20.88,
"end": 26.52,
"text": " it's pretty simple and apparently it works very well. So consider the"
},
{
"start": 26.52,
"end": 35.72,
"text": " following image here of a street situation and ask yourself if a"
},
{
"start": 35.72,
"end": 42.760000000000005,
"text": " self-driving car sees this, what are the kind of things it needs to be aware of?"
},
{
"start": 42.760000000000005,
"end": 48.28,
"text": " So of course one of the things it needs to be aware of is like the road, the cars"
},
{
"start": 48.28,
"end": 54.36,
"text": " and so on but also what's encircled in red here, the street sign and the street"
},
{
"start": 54.36,
"end": 59.88,
"text": " sign especially is important because there's a number on it and you want to"
},
{
"start": 59.88,
"end": 65.64,
"text": " see what the number is otherwise you won't be able to adjust your speed. So if"
},
{
"start": 65.64,
"end": 70.36,
"text": " this is now a really large image, so if the camera is really good and the"
},
{
"start": 70.36,
"end": 75.08,
"text": " dimensions of this image are really large, then current machine learning"
},
{
"start": 75.08,
"end": 81.88,
"text": " methods have a problem because current machine learning methods kind of go up"
},
{
"start": 81.88,
"end": 88.72,
"text": " to maybe something like 200 by 200 pixels or the current image net models,"
},
{
"start": 88.72,
"end": 93.92,
"text": " some down sample and so on. So if this is much larger than this, what current"
},
{
"start": 93.92,
"end": 98.72,
"text": " machine learning models would do is they would simply down sample, like compress"
},
{
"start": 98.72,
"end": 105.46,
"text": " the size, just compress it a bit and so on. And by that, as you see here on the"
},
{
"start": 105.46,
"end": 110.32,
"text": " right, if the original patch in the image you could cut it"
},
{
"start": 110.32,
"end": 115.8,
"text": " out and enlarge it, it would look like this. If you compress the whole image, the"
},
{
"start": 115.8,
"end": 121.72,
"text": " same patch would now look like this, blurred. So in the bottom half you'd be"
},
{
"start": 121.72,
"end": 128,
"text": " able to recognize the number, in the top half you wouldn't. So a standard CNN might"
},
{
"start": 128,
"end": 132.16,
"text": " be able to recognize the road and the car still at the lower resolution but"
},
{
"start": 132.16,
"end": 138.35999999999999,
"text": " not the speed sign. What we want is a method that can selectively pay"
},
{
"start": 138.36,
"end": 145.04000000000002,
"text": " attention to parts of the image that it finds interesting and then look at those"
},
{
"start": 145.04000000000002,
"end": 150.60000000000002,
"text": " parts in full detail while basically deciding to discard other parts"
},
{
"start": 150.60000000000002,
"end": 158.04000000000002,
"text": " completely such as the sky here. So this paper is one that does this and does so"
},
{
"start": 158.04000000000002,
"end": 166.12,
"text": " in a very efficient manner. So the basic premise is very simple. All right, I'm"
},
{
"start": 166.12,
"end": 172.04,
"text": " going to show you on this on the same image. So what you do is first you"
},
{
"start": 172.04,
"end": 177.88,
"text": " actually compress the image. So this image will become a smaller image, right?"
},
{
"start": 177.88,
"end": 187.24,
"text": " So here maybe this is 1000 by 2000, you compress it down to maybe 100 by 200."
},
{
"start": 187.24,
"end": 191.68,
"text": " Still the same image but compressed. Here's the road, here's a bunch of"
},
{
"start": 191.68,
"end": 198.56,
"text": " trees. I'm very good at drawing trees. And here's this street sign and here is a"
},
{
"start": 198.56,
"end": 207.6,
"text": " car and here is another car. All right, so and there is a sky up here. So now"
},
{
"start": 207.6,
"end": 215.56,
"text": " what you do is on this smaller version you classify every location. I guess"
},
{
"start": 215.56,
"end": 220.44,
"text": " you could classify, you could subsample but you want to classify every single"
},
{
"start": 220.44,
"end": 229.4,
"text": " location on it on how interesting is it. And what they do is they take this and"
},
{
"start": 229.4,
"end": 234.44,
"text": " just put it through what they call an attention network which is just this it"
},
{
"start": 234.44,
"end": 242.16,
"text": " just a neural network. In their case it's a CNN that for each location here for"
},
{
"start": 242.16,
"end": 254.48,
"text": " each blue location outputs a function a of a and let's call it a x y at"
},
{
"start": 254.48,
"end": 264.8,
"text": " coordinates x and y of this image x. Okay, this is stupid notation. That's a of x"
},
{
"start": 264.8,
"end": 272.12,
"text": " so the image is x at coordinates i, j. Right, so all of these blue things here"
},
{
"start": 272.12,
"end": 279.40000000000003,
"text": " are i's and j's. Different i's and j's. And then what does this gives you now if"
},
{
"start": 279.40000000000003,
"end": 286.8,
"text": " you normalize correctly, so if you normalize over all the a's and i, j, a, i, j. If you"
},
{
"start": 286.8,
"end": 292.08000000000004,
"text": " normalize this gives you a distribution over this image. So if we look at it in"
},
{
"start": 292.08,
"end": 299.56,
"text": " like 1D this gives you like a distribution not a continuous one in"
},
{
"start": 299.56,
"end": 310.68,
"text": " this case a discrete one. How interesting is each patch and at the end if you have"
},
{
"start": 310.68,
"end": 315.71999999999997,
"text": " this distribution, so let's finish here, what you want to do is you want to say"
},
{
"start": 315.71999999999997,
"end": 320.68,
"text": " which are the most interesting locations. So this one's pretty high and these are"
},
{
"start": 320.68,
"end": 328.6,
"text": " very high so that might correspond to over here that might correspond to some"
},
{
"start": 328.6,
"end": 334.12,
"text": " location. So this location is very high and these locations are very interesting"
},
{
"start": 334.12,
"end": 341.92,
"text": " and only in these locations you take them out and then only those you process"
},
{
"start": 341.92,
"end": 347.16,
"text": " in full resolution. So you might have extracted let's say four patches so now"
},
{
"start": 347.16,
"end": 357.36,
"text": " you have four of these patches and each of them individually you run through a"
},
{
"start": 357.36,
"end": 364.24,
"text": " second neural network which is called another CNN which is called F the"
},
{
"start": 364.24,
"end": 370.56,
"text": " feature network. So the feature network will take a patch and output a vector of"
},
{
"start": 370.56,
"end": 379.8,
"text": " features. So it will feed those in and output the vector of features and"
},
{
"start": 379.8,
"end": 391.8,
"text": " then what you do is you simply your final output which they call G, let me"
},
{
"start": 391.8,
"end": 406.56,
"text": " colorize this so G which is G is now the final output let's not call it G let's"
},
{
"start": 406.56,
"end": 419.88,
"text": " call it O. Output is you sum over all the patches you have extracted down here so"
},
{
"start": 419.88,
"end": 432.04,
"text": " the patch number P over all your patches and you sum these features F of patch P"
},
{
"start": 432.04,
"end": 444.15999999999997,
"text": " right and P might be at location IJ let's put IJ here so IJ in the extracted"
},
{
"start": 444.16,
"end": 451.32000000000005,
"text": " patches and you weigh each feature by how much attention it got at that"
},
{
"start": 451.32000000000005,
"end": 457.56,
"text": " location. So it looks more complicated than it is what you do is you"
},
{
"start": 457.56,
"end": 463.56,
"text": " simply determine these features by using this neural network only at the position"
},
{
"start": 463.56,
"end": 467.64000000000004,
"text": " where this neural network says are interesting then you get the features"
},
{
"start": 467.64,
"end": 474.24,
"text": " from the interesting positions and you basically just weigh them by how much"
},
{
"start": 474.24,
"end": 479.36,
"text": " attention they got in the attention distribution and that will be your final"
},
{
"start": 479.36,
"end": 484.59999999999997,
"text": " output of the network and it makes intuitive sense like one network decides"
},
{
"start": 484.59999999999997,
"end": 489.84,
"text": " what is interesting the other network decides what are we going to do with the"
},
{
"start": 489.84,
"end": 497.52,
"text": " interesting things in this image. And the cool thing about this is you"
},
{
"start": 497.52,
"end": 503.35999999999996,
"text": " can basically decide how many of these patches here how many you want to"
},
{
"start": 503.35999999999996,
"end": 508.35999999999996,
"text": " extract you can decide at what resolution you want to process this"
},
{
"start": 508.35999999999996,
"end": 516.48,
"text": " image and all of this are parameters that you set by how much time you have"
},
{
"start": 516.48,
"end": 522,
"text": " for computation and how much memory you have for your computation so that's"
},
{
"start": 522,
"end": 526.64,
"text": " pretty cool pretty module we can scale up we can scale down and the another cool"
},
{
"start": 526.64,
"end": 531.52,
"text": " thing is the theoretical guarantees that they give so basically here they prove"
},
{
"start": 531.52,
"end": 540.52,
"text": " that the way they do it especially by extracting the patch especially if they"
},
{
"start": 540.52,
"end": 545.28,
"text": " have an unbiased sorry especially have if they have sampling without replacement"
},
{
"start": 545.28,
"end": 553.52,
"text": " is that if they weigh the things correctly and if they do the things"
},
{
"start": 553.52,
"end": 558.6,
"text": " correctly they show that this is actually an unbiased estimator of the"
},
{
"start": 558.6,
"end": 566.0799999999999,
"text": " true neural network if you were to evaluate on the full image basically on"
},
{
"start": 566.0799999999999,
"end": 575.36,
"text": " each patch in full resolution so only taking the ones where the attention"
},
{
"start": 575.36,
"end": 582.88,
"text": " focuses is an unbiased estimator and not only is it an unbiased estimator it is"
},
{
"start": 582.88,
"end": 587.52,
"text": " in fact the estimator with the smallest variance and that's what they prove"
},
{
"start": 587.52,
"end": 598.32,
"text": " here so the minimum variance estimator and this is this is pretty pretty"
},
{
"start": 598.32,
"end": 603.56,
"text": " interesting pretty cool and works pretty well they also show how to derive the"
},
{
"start": 603.56,
"end": 609.52,
"text": " gradient update when you train with this attention sampling so now you train your"
},
{
"start": 609.52,
"end": 614.28,
"text": " neural you train your machine learning system not on the whole image but only"
},
{
"start": 614.28,
"end": 621.4399999999999,
"text": " on a subset of the image patches but it still behaves in expectation as if you"
},
{
"start": 621.4399999999999,
"end": 626.8,
"text": " were to train on the entire image so pretty neat so here they show how this"
},
{
"start": 626.8,
"end": 635.64,
"text": " compares to full CNN in this case we have the full CNN where the picture is"
},
{
"start": 635.64,
"end": 641.6,
"text": " simply down sampled and then classified and this is what's called megapixel"
},
{
"start": 641.6,
"end": 647.04,
"text": " amnest so in megapixel amnest you have a large image and you put three digits in"
},
{
"start": 647.04,
"end": 652.3199999999999,
"text": " there there are the same for example five five five from the amnest data set"
},
{
"start": 652.3199999999999,
"end": 658.84,
"text": " you put two random digits others like two three and you put also a bunch of"
},
{
"start": 658.84,
"end": 665.6,
"text": " noise noise patches somewhere so the task is to recognize which is the"
},
{
"start": 665.6,
"end": 671.4,
"text": " dominant digit here in this case it would be five right five five where was"
},
{
"start": 671.4,
"end": 678.5600000000001,
"text": " the other one five here so if you give this to a regular CNN you see it does"
},
{
"start": 678.5600000000001,
"end": 683.84,
"text": " about this well this is the training loss here training loss and this is the"
},
{
"start": 683.84,
"end": 690.96,
"text": " test loss and it takes this much time right time per epoch here and this much"
},
{
"start": 690.96,
"end": 698.84,
"text": " time to evaluate sorry if you now use this attention sampling and as I said"
},
{
"start": 698.84,
"end": 702.64,
"text": " you can actually modulate how many patches you want to take so as you go"
},
{
"start": 702.64,
"end": 708.44,
"text": " down you take more patches we would expect it to take more time this is"
},
{
"start": 708.44,
"end": 712.48,
"text": " exactly what happens you see for example down here in the test error if you take"
},
{
"start": 712.48,
"end": 719.4,
"text": " five patches per image it takes very little time but the error I mean the"
},
{
"start": 719.4,
"end": 724.44,
"text": " error is still better than the if you use the CNN simply because you can now"
},
{
"start": 724.44,
"end": 732.28,
"text": " pay attention to details much more as you use more patches your test error"
},
{
"start": 732.28,
"end": 737.28,
"text": " drops the also your training loss they drop so using more patches will be"
},
{
"start": 737.28,
"end": 742.28,
"text": " actually give you a better and better and better performing model but you"
},
{
"start": 742.28,
"end": 749.3199999999999,
"text": " sacrifice a little bit of time but still not never as as slow as with the full"
},
{
"start": 749.3199999999999,
"end": 757.16,
"text": " with that with the CNN so even though it's a down sampled CNN right so that"
},
{
"start": 757.16,
"end": 762.64,
"text": " is very interesting and very cool that not only do they beat the the baseline"
},
{
"start": 762.64,
"end": 768.92,
"text": " in terms of error but also a lot in terms of speed if you look at what the"
},
{
"start": 768.92,
"end": 774.92,
"text": " model does as it learns here you see for a given image this is always the same"
},
{
"start": 774.92,
"end": 779.5999999999999,
"text": " image from the data set at the beginning they have actually marked where the"
},
{
"start": 779.5999999999999,
"end": 785.8399999999999,
"text": " relevant the three relevant digits are in the picture with the red circle so if"
},
{
"start": 785.8399999999999,
"end": 793.64,
"text": " you look at how over the training of this model how this distribution evolves"
},
{
"start": 793.64,
"end": 798.76,
"text": " is pretty interesting yellow basically means high attention so at the beginning"
},
{
"start": 798.76,
"end": 806.8,
"text": " you have high attention everywhere in the image right and then as you go on and"
},
{
"start": 806.8,
"end": 812.24,
"text": " on and on you see for example here it pays attention to all the locations"
},
{
"start": 812.24,
"end": 818.8,
"text": " where basically where there is something in the image right this could be one of"
},
{
"start": 818.8,
"end": 823,
"text": " these three digits but it could also be one of the digits that it's trying to"
},
{
"start": 823,
"end": 827.4399999999999,
"text": " that is trying to distract the model like the false digits or the noise"
},
{
"start": 827.44,
"end": 834.2,
"text": " patches and as you go more and more and more it really learns to only pay"
},
{
"start": 834.2,
"end": 839.6800000000001,
"text": " attention to the relevant digits and then classify those at full resolution"
},
{
"start": 839.6800000000001,
"end": 845.2800000000001,
"text": " so this really shows the this this kind of attention distribution learns"
},
{
"start": 845.2800000000001,
"end": 855.08,
"text": " something very meaningful they do more experiments on two data sets namely this"
},
{
"start": 855.08,
"end": 861.76,
"text": " is a histopathology data set right here where the goal is I think to recognize"
},
{
"start": 861.76,
"end": 873.44,
"text": " this epithelial cells this type of cell and you can see that this here is the"
},
{
"start": 873.44,
"end": 882.6800000000001,
"text": " baseline and this here is the new method and the baseline basically what it does"
},
{
"start": 882.68,
"end": 887.4399999999999,
"text": " is it does similar thing namely it processes the image in patches but it"
},
{
"start": 887.4399999999999,
"end": 895.12,
"text": " processes every single patch maybe in succession but it still processes every"
},
{
"start": 895.12,
"end": 899.64,
"text": " single patch where the attention sampling only processes the patches that"
},
{
"start": 899.64,
"end": 906.88,
"text": " the attention sampling distribution suggests and this other data set here"
},
{
"start": 906.88,
"end": 912.5999999999999,
"text": " is a street sign data set that you saw at the beginning right here and the"
},
{
"start": 912.6,
"end": 920.6,
"text": " the again I think this is the baseline and this is the attention sample so both"
},
{
"start": 920.6,
"end": 925.44,
"text": " learn to pay attention to the street signs but again the attention sampling"
},
{
"start": 925.44,
"end": 933.52,
"text": " much more efficient so here you see the baseline performance the attention"
},
{
"start": 933.52,
"end": 939.6,
"text": " sampling performance is similar in terms of test error but if you look at how"
},
{
"start": 939.6,
"end": 945.6,
"text": " much time the baseline uses per sample and how much memory and then compare"
},
{
"start": 945.6,
"end": 951.48,
"text": " this to the attention sampling you see that they save at least an order of"
},
{
"start": 951.48,
"end": 956.6,
"text": " magnitude in time and memory and the same thing goes for the street sign"
},
{
"start": 956.6,
"end": 964.24,
"text": " data set you see test error here and then test error is similar for the"
},
{
"start": 964.24,
"end": 973.48,
"text": " attention sampling but again time memory much much lower so the attention"
},
{
"start": 973.48,
"end": 982,
"text": " sampling is faster and is more memory efficient than the baseline and that"
},
{
"start": 982,
"end": 988.6,
"text": " makes it makes it easy to process these megapixel images even on here they say"
},
{
"start": 988.6,
"end": 995.5600000000001,
"text": " process megapixel images in a single CPU or GPU and that really I like this"
},
{
"start": 995.5600000000001,
"end": 1001.44,
"text": " because it kind of brings their research back to let's say regular people or"
},
{
"start": 1001.44,
"end": 1009.8000000000001,
"text": " maybe universities that don't have as much money as large companies and so all"
},
{
"start": 1009.8000000000001,
"end": 1014.6800000000001,
"text": " in all very cool paper very neat experiments to have a lot in the"
},
{
"start": 1014.68,
"end": 1020,
"text": " appendix check it out where they show their attention distribution in these"
},
{
"start": 1020,
"end": 1025.32,
"text": " images their theoretical analysis is pretty easy to follow if you want to"
},
{
"start": 1025.32,
"end": 1045.12,
"text": " check that out and with that thanks for listening and bye bye"
}
] |
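The transcript above describes the key mechanical difference between the baseline and attention sampling: the baseline runs the feature network over every patch of the megapixel image, while attention sampling only runs it over patches drawn from a learned attention distribution. Below is a minimal sketch of that sampling step, assuming PyTorch; the attention network, the downsampling factor, the patch size, and the number of sampled patches are illustrative assumptions rather than the paper's exact architecture, and the downstream feature averaging and unbiased-estimate correction are omitted.

```python
import torch
import torch.nn.functional as F

def sample_attention_patches(image, attention_net, patch=64, k=10):
    """Crop only k patches chosen by an attention distribution over a low-res view."""
    # image: (1, C, H, W) megapixel input; attention_net is assumed to map a
    # low-resolution image to a single-channel score map.
    low_res = F.interpolate(image, scale_factor=0.1, mode="bilinear", align_corners=False)
    scores = attention_net(low_res)                      # (1, 1, h, w)
    h, w = scores.shape[-2:]
    probs = torch.softmax(scores.flatten(1), dim=1)      # attention distribution over h*w cells

    # Sample k cell indices instead of visiting every patch.
    idx = torch.multinomial(probs, k, replacement=True)[0]
    rows, cols = idx // w, idx % w

    # Map the sampled coarse cells back to full-resolution crop corners.
    H, W = image.shape[-2:]
    patches = []
    for r, c in zip(rows.tolist(), cols.tolist()):
        y = min(int(r / h * H), H - patch)
        x = min(int(c / w * W), W - patch)
        patches.append(image[..., y:y + patch, x:x + patch])
    return torch.cat(patches, dim=0), probs[0, idx]      # k crops and their probabilities
```

Processing only k small crops instead of the full grid of patches is what produces the order-of-magnitude time and memory savings discussed in the experiments above.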
1L83tM8nwHU | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Manifold Mixup: Better Representations by Interpolating Hidden States | [
"Science & Technology"
] | [
"deep learning",
"neural networks",
"adversarial examples",
"machine learning",
"bengio",
"classification",
"smooth",
"flat representations",
"ai",
"artificial intelligence",
"supervised learning",
"regluarization",
"regularizer",
"hidden representations",
"overconfidence"
] | Standard neural networks suffer from problems such as un-smooth classification boundaries and overconfidence. Manifold Mixup is an easy regularization technique that rectifies these problems. It works by interpolating hidden representations of different data points and then train them to predict equally interpolated labels.
https://arxiv.org/abs/1806.05236
Abstract:
Deep neural networks excel at learning the training data, but often provide incorrect and confident predictions when evaluated on slightly different test examples. This includes distribution shifts, outliers, and adversarial examples. To address these issues, we propose Manifold Mixup, a simple regularizer that encourages neural networks to predict less confidently on interpolations of hidden representations. Manifold Mixup leverages semantic interpolations as additional training signal, obtaining neural networks with smoother decision boundaries at multiple levels of representation. As a result, neural networks trained with Manifold Mixup learn class-representations with fewer directions of variance. We prove theory on why this flattening happens under ideal conditions, validate it on practical situations, and connect it to previous works on information theory and generalization. In spite of incurring no significant computation and being implemented in a few lines of code, Manifold Mixup improves strong baselines in supervised learning, robustness to single-step adversarial attacks, and test log-likelihood.
Authors:
Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, Aaron Courville, David Lopez-Paz, Yoshua Bengio | Hi there, today we're looking at manifold mixup, better representations by interpolating hidden states by Vikas Verma et al. A number of big names on this paper as you can see and I also saw this at ICML so I was intrigued by it. They propose manifold mixup which is sort of a regularizer of neural networks is specifically of supervised learning and it's actually a pretty simple concept and they kind of show that it has some nice properties and outperforms other regularizers. So what's the problem? The problem is that if you look at this spiral problem here which is often kind of used to to show properties of neural networks, what you have are blue points and the blue points are one class and the red points are another class. You see the two classes here are in this kind of spiral pattern. The data space is just two-dimensional. You see here this is one class, this is the other class. This is pretty difficult for a model to learn because of course the easy models would be like linear classifiers but there's no way to put a line through this such that one class is on one side mostly. So neural networks, if you train them, they will give you something like you see here. They will try to kind of bound the regions with the red points from the blue points but then there's some weird things like here is a weird thing, here is a weird thing. So you'd imagine a correct model would actually classify this area as blue but the neural network has no concept of let's say that the spiral should continue that thus it simply sees here's blue, here's blue, here's a bit of a gap in the training data. So in this case it assigns a red class to it. So this is one problem that the decision boundaries are rather squiggly and irregular and the second one if you look at the actual colors, full blue means very confident blue class, full red means very confident red class and in between you kind of see going into the the white so if you look very closely I can't actually zoom in more here. If you look very closely you'll see that the blue gets lighter and lighter until it reaches white and from here the red goes lighter and lighter until it reaches white and white means not confident, white means like 50-50. So you see the area of not confident is actually very small right. If you consider a point here is actually still very confident that it's a blue point and the area of non-confidence is very small even though maybe as as humans we would judge like a relatively large band in the middle to be not confident like if we get a point like this. And the third problem is that you can see in multiple locations like here or here or here that the decision boundary is very close to the data points unnecessarily close. So especially if you look here the decision boundary could be much more optimally placed probably something like this right given the training data but the neural networks because they only see training data they they have no basically no incentive to do this. Alright one might think of you know something like a support vector machine that actually has an incentive to to put the decision boundary away from the from the training data but the neural networks currently they're not SVMs they're basically logistic regressions and as such have no no incentive to do this. So this these are the problems the other problems are this is the input space. 
If you look at the hidden space so they build neural networks specifically they have like the 2d input and then that goes through a bunch of layers and then at one point there's a bottleneck layer with just two hidden nodes and then I guess that goes again and then it goes into a classifier. So in this bottleneck layer they analyze the hidden representations of the data points and in this case for this spiral data set what happens is so in red you see again the red classes in blue the blue class it's 2d so you can plot it what it does is it bunches up the hidden representations fairly fairly so it bunches them kind of up it spreads them out in directions here here here most are bunched up here and it does these kind of weird arrangements here with the pockets of those and of course the neural network is powerful enough such that it can actually you know separate all of this from each other but it's not ideal and the black dots they represent kind of points in between or points from the input space that are not part of the training data so they say they sample uniformly in the range of the input space you see that the black dots are all over the place right some are confident blue some are confident red some are like somewhere all right what you would expect from a good model is that if you input something that's kind of in between or not really sure not even part of the input distribution that it assigns like a low confidence to it that it says well I'm not sure about this this must be somewhere in the middle so just to jump in jump forward to the results what does manifold mixup do without knowing what it is in the same data set it gives you a picture like this you see the decision boundaries are much more smooth right the region of no confidence or of low confidence indicated by the light colors here is much larger and also the decision boundary here we had specifically this data point here you see the decision boundary is pushed away though you could argue about that particular point but the decision boundary is generally pushed away from the data points you also see no more kind of these squiggles here it doesn't happen in in here also if you look at the hidden representations the hidden representations now are spread out the classes are bunched up so not all the points are bunched up but the the points of individual classes are bunched up together and the randomly sampled points are in the middle as they should be you say only confident red is down here confident blue is up here and everything in between is on confident and third if you look at the singular value decompositions of the hidden player and that's kind of a measure of how spread out in the different dimensions a data set is you see that the manifold mix up here in green it concentrates or it it lowers the singular values of the kind of lower indexes so the first singular value is large which means that there is like a dominant direction in the in the data and this is done for each class separately as I understand it it puts a lot of weight on the first singular vector and then it pushes down the contributions of the other singular vector which means that the data set that is analyzed is is concentrated into fewer directions of variance this is layer one and here is layer three means so you see it happens in both that the manifold mix up compared to the baseline model does this so now you might ask what is manifold mix up it's actually pretty pretty simple concept all right here is another comparing it to other kind of 
regularization techniques and showing that none of them really does this so manifold mix up is this basically what you do is when you train a neural network you have input data and you take many batches of input data specifically you take two many batches X and Y and X prime Y prime right and then what you do is if I have the draw the neural network here so here is the inputs like a picture of a cat it goes through layers right and then what you do is you say at some particular you say stop stop right you take the representation out you and you do this with two different many batches so here is this is cat one and I'm down back here is cat two whatever or dog that's a cat you pass it in right here you take it out here you pass it through the network and you take it out so you now have two different forward paths of two different many batches and then you define a lambda and I guess they randomly sample a lambda in zero one right in the range of zero one so this is a mixing coefficient and then you mix you say lambda times hidden representation of batch one plus one minus lambda of hidden representation of batch two and that is what you pass through the rest of the network right so basically you forward propagate two different batches until a certain layer here then you mix them with a random coefficient and then you pass it through the rest and then the only thing you also have to do is then at the end if you think of the labels of these two things you want to mix the labels in the same fashion so you want to mix lambda times y of batch one plus one minus lambda of y of batch two and then this is your training signal for whatever comes out here right so it's it's um these are these are one hot labels so if it's class three it's zero zero one zero and if y2 is class five it's zero zero zero zero one and then you simply mix the two right and that becomes your training signal so in a practical example if let's just have a mini batch size of one so just one sample if this is cat and this is dog you would pass them forward right you would mix so in the hidden representation it would kind of become a cat dog maybe you do it 50 50 but then you would also mix the labels of cat and dog 50 50 and tell the network this is a mixture of 50% cat 50% dog and then you would train the network to predict that 50 50 coefficient so they do this the question is at which layer do you do this and they simply I think for each mini batch sample one hidden layer at random they might have some weighting or something but the way they describe it is they simply sample one layer for me per mini batch and then do the mixing there and then you can actually back prop through everything everything is differentiable this mixing is differentiable so you can back prop through any everything and there's even you know kind of an engineering trick to only use a single mini batch by mixing it with itself so that's that's pretty neat so this manifold mix up as you can see here is the that's kind of the description you mix the hidden representations with lambda and you mix the labels with the same lambda and that will become your actual training signal all right so they give some theory to it that it flattens representations and specifically they say under some conditions namely if the network is large enough so if the dimension of the hidden representation is of a certain size then if you optimize this manifold mix up like if you optimize over every lambda and over the entire training data set what you will end up is actually a 
linear function of the input this is not too surprising that if you because what you do is you mix linearly this mixture happens in a linear fashion so if you optimize for and you not only optimize for the training set but you optimize for every possible mixture of the training set linear mixture your minimization your minimizer function will actually become a linear function it's not surprising but they have a formal proof of this and they also have a proof that if certain assumptions are given then the minimizers if you apply the minimizers the hidden representations will actually fall on a low dimensional subspace which is also not surprising but it's kind of the theoretical analog to what they show with with the singular value distribution that it basically suppresses low singular values that means the data set is much more into a single direction the hidden representations sorry all right so this the theory part is you can you can read it if you if you want to it's yeah it's it's to the results are to be expected I would say from what they do and the last thing they give a pictorial example of why manifold mix up flattened representations so both of these things the fact that the minimizers will become linear functions and the fact that the singular value spectrum is more concentrated on the first singular value means basically that representations are flattened and here is a pictorial representation so in this case what happens if you if you basically have these four data points a 1a 2b 1 and b 2 where a 1 and a 2 are blue class and b 1 and b 2 are red class and if you now look at an interpolation point between the two so if you look at this interpolation point between a 1 and b 2 what happens is that in this case this should be 50 50 blue and red but if you now look at the points that it where it's not interpolated on this is very close to a 2 in this case it's probably should be more like 95 blue and 5 red do they say here well if you use manifold mix up to learn the network what you'll actually do is you say okay actually this hidden representation needs to be pushed outward and you will achieve something over here where any mixture of two points of the opposite class will actually give you a 50 50 so all the mid points here will give you a 50 50 mixture between the labels which basically means what you end up with is a line between this data and this data and it means that basically the network becomes more linear and the representations become more flat because flat is the optimal if your distributions are flat all the distances to the line are the same and this objective is optimized and this is basically my my kind of biggest problem with the method is that it it kind of mixes the input with a linear function where we know that that is kind of not the shape of the true data manifold the input manifolds as you can see here the input manifold here isn't linear or flat it's actually very very tangled and we know that neural networks as you continue in the layers will flatten those representations because ultimately at the end it needs to classify the data set linearly because the last layer is a softmax layer but the the idea that you could apply this to any layer seems a bit shady to me of course it works and they show it works and it's really nice that it works but applying this to low layers in neural networks seems a bit not principled to me so I think this is not the end of the story of this line of work and there is kind of more that can be done in a more principled fashion 
but in any case they show that this actually works in terms of performance on generalization on kind of standard data sets so they have results on CIFAR-10 and CIFAR-100 which are famous image data sets and they show that the hair regularizer outperforms others and they also show that they can withstand one step single step adversarial attacks more kind of better so they have a better performance against single step adversarial attacks after regularizing mostly again giving kind of an idea that the if you push if you push it if you have a two points this is X this is X X 1 X 2 there are different classes if you put the decision boundary really close to X 2 then an adversarial attack can simply move the point across the decision boundary with a very small step but if you actually have the decision boundary pushed away from both data points then the an adversarial attack must go a very long way to the decision boundary and thus if you limit the size of adversarial attacks which is what you usually do you can maybe not reach this decision boundary and thus you mitigate some of the problem so it's pretty cool I think yeah there's work to be done but I think this is pretty cool it's implemented pretty easy I've seen there's a lot of libraries already available with it in and yeah won't hurt to add this to your code make your network better and more robust all right that was it from me bye bye | [
{
"start": 0,
"end": 5.5200000000000005,
"text": " Hi there, today we're looking at manifold mixup, better representations by"
},
{
"start": 5.5200000000000005,
"end": 11.48,
"text": " interpolating hidden states by Vikas Verma et al. A number of big names on"
},
{
"start": 11.48,
"end": 18,
"text": " this paper as you can see and I also saw this at ICML so I was intrigued by it."
},
{
"start": 18,
"end": 26.34,
"text": " They propose manifold mixup which is sort of a regularizer of neural networks"
},
{
"start": 26.34,
"end": 32.56,
"text": " is specifically of supervised learning and it's actually a pretty simple concept"
},
{
"start": 32.56,
"end": 37.96,
"text": " and they kind of show that it has some nice properties and outperforms other"
},
{
"start": 37.96,
"end": 45,
"text": " regularizers. So what's the problem? The problem is that if you look at this"
},
{
"start": 45,
"end": 51.400000000000006,
"text": " spiral problem here which is often kind of used to to show properties of neural"
},
{
"start": 51.4,
"end": 57.68,
"text": " networks, what you have are blue points and the blue points are one class and"
},
{
"start": 57.68,
"end": 62.2,
"text": " the red points are another class. You see the two classes here are in this kind"
},
{
"start": 62.2,
"end": 66.68,
"text": " of spiral pattern. The data space is just two-dimensional. You see here"
},
{
"start": 66.68,
"end": 71.92,
"text": " this is one class, this is the other class. This is pretty difficult for a"
},
{
"start": 71.92,
"end": 77.52,
"text": " model to learn because of course the easy models would be like linear"
},
{
"start": 77.52,
"end": 82.75999999999999,
"text": " classifiers but there's no way to put a line through this such that one"
},
{
"start": 82.75999999999999,
"end": 88.75999999999999,
"text": " class is on one side mostly. So neural networks, if you train them, they will"
},
{
"start": 88.75999999999999,
"end": 93.47999999999999,
"text": " give you something like you see here. They will try to kind of bound the"
},
{
"start": 93.47999999999999,
"end": 99.84,
"text": " regions with the red points from the blue points but then there's"
},
{
"start": 99.84,
"end": 104.56,
"text": " some weird things like here is a weird thing, here is a weird thing. So you'd"
},
{
"start": 104.56,
"end": 110.28,
"text": " imagine a correct model would actually classify this area as blue but the"
},
{
"start": 110.28,
"end": 117.10000000000001,
"text": " neural network has no concept of let's say that the spiral should continue"
},
{
"start": 117.10000000000001,
"end": 121,
"text": " that thus it simply sees here's blue, here's blue, here's a bit of a gap in"
},
{
"start": 121,
"end": 128.32,
"text": " the training data. So in this case it assigns a red class to it. So this is"
},
{
"start": 128.32,
"end": 133.12,
"text": " one problem that the decision boundaries are rather squiggly and"
},
{
"start": 133.12,
"end": 139.24,
"text": " irregular and the second one if you look at the actual colors, full blue means"
},
{
"start": 139.24,
"end": 145.08,
"text": " very confident blue class, full red means very confident red class and in between"
},
{
"start": 145.08,
"end": 150.56,
"text": " you kind of see going into the the white so if you look very closely I can't"
},
{
"start": 150.56,
"end": 154.76,
"text": " actually zoom in more here. If you look very closely you'll see that the blue"
},
{
"start": 154.76,
"end": 160.08,
"text": " gets lighter and lighter until it reaches white and from here the red goes"
},
{
"start": 160.08,
"end": 164.96,
"text": " lighter and lighter until it reaches white and white means not confident,"
},
{
"start": 164.96,
"end": 172.08,
"text": " white means like 50-50. So you see the area of not confident is actually very"
},
{
"start": 172.08,
"end": 178.88000000000002,
"text": " small right. If you consider a point here is actually still very confident that"
},
{
"start": 178.88000000000002,
"end": 184.28,
"text": " it's a blue point and the area of non-confidence is very small even though"
},
{
"start": 184.28,
"end": 190.96,
"text": " maybe as as humans we would judge like a relatively large band in the middle to"
},
{
"start": 190.96,
"end": 197.08,
"text": " be not confident like if we get a point like this. And the third problem is that"
},
{
"start": 197.08,
"end": 203.12,
"text": " you can see in multiple locations like here or here or here that the decision"
},
{
"start": 203.12,
"end": 211.08,
"text": " boundary is very close to the data points unnecessarily close. So especially"
},
{
"start": 211.08,
"end": 215.96,
"text": " if you look here the decision boundary could be much more optimally placed"
},
{
"start": 215.96,
"end": 221.88000000000002,
"text": " probably something like this right given the training data but the neural"
},
{
"start": 221.88000000000002,
"end": 228,
"text": " networks because they only see training data they they have no basically no"
},
{
"start": 228,
"end": 234.52,
"text": " incentive to do this. Alright one might think of you know something like a"
},
{
"start": 234.52,
"end": 238.8,
"text": " support vector machine that actually has an incentive to to put the decision"
},
{
"start": 238.8,
"end": 245.84,
"text": " boundary away from the from the training data but the neural networks currently"
},
{
"start": 245.84,
"end": 252.28,
"text": " they're not SVMs they're basically logistic regressions and as such have"
},
{
"start": 252.28,
"end": 258.44,
"text": " no no incentive to do this. So this these are the problems the other problems are"
},
{
"start": 258.44,
"end": 263.36,
"text": " this is the input space. If you look at the hidden space so they build neural"
},
{
"start": 263.36,
"end": 268.2,
"text": " networks specifically they have like the 2d input and then that goes through a"
},
{
"start": 268.2,
"end": 271.8,
"text": " bunch of layers and then at one point there's a bottleneck layer with just two"
},
{
"start": 271.8,
"end": 276.71999999999997,
"text": " hidden nodes and then I guess that goes again and then it goes into a classifier."
},
{
"start": 276.71999999999997,
"end": 283.71999999999997,
"text": " So in this bottleneck layer they analyze the hidden representations of the data"
},
{
"start": 283.71999999999997,
"end": 290.44,
"text": " points and in this case for this spiral data set what happens is so in red you"
},
{
"start": 290.44,
"end": 294.4,
"text": " see again the red classes in blue the blue class it's 2d so you can plot it"
},
{
"start": 294.4,
"end": 300.67999999999995,
"text": " what it does is it bunches up the hidden representations fairly fairly so it"
},
{
"start": 300.67999999999995,
"end": 306.32,
"text": " bunches them kind of up it spreads them out in directions here here here most"
},
{
"start": 306.32,
"end": 311.47999999999996,
"text": " are bunched up here and it does these kind of weird arrangements here with the"
},
{
"start": 311.47999999999996,
"end": 316.79999999999995,
"text": " pockets of those and of course the neural network is powerful enough such"
},
{
"start": 316.79999999999995,
"end": 321.84,
"text": " that it can actually you know separate all of this from each other but it's not"
},
{
"start": 321.84,
"end": 327.44,
"text": " ideal and the black dots they represent kind of points in between or points from"
},
{
"start": 327.44,
"end": 331.28,
"text": " the input space that are not part of the training data so they say they sample"
},
{
"start": 331.28,
"end": 337.2,
"text": " uniformly in the range of the input space you see that the black dots are"
},
{
"start": 337.2,
"end": 342.03999999999996,
"text": " all over the place right some are confident blue some are confident red"
},
{
"start": 342.03999999999996,
"end": 348.03999999999996,
"text": " some are like somewhere all right what you would expect from a good model is"
},
{
"start": 348.04,
"end": 352.16,
"text": " that if you input something that's kind of in between or not really sure not"
},
{
"start": 352.16,
"end": 358.08000000000004,
"text": " even part of the input distribution that it assigns like a low confidence to it"
},
{
"start": 358.08000000000004,
"end": 361.40000000000003,
"text": " that it says well I'm not sure about this this must be somewhere in the"
},
{
"start": 361.40000000000003,
"end": 368.52000000000004,
"text": " middle so just to jump in jump forward to the results what does manifold mixup"
},
{
"start": 368.52000000000004,
"end": 373.24,
"text": " do without knowing what it is in the same data set it gives you a picture like"
},
{
"start": 373.24,
"end": 379.44,
"text": " this you see the decision boundaries are much more smooth right the region of no"
},
{
"start": 379.44,
"end": 384.32,
"text": " confidence or of low confidence indicated by the light colors here is"
},
{
"start": 384.32,
"end": 391.6,
"text": " much larger and also the decision boundary here we had specifically this"
},
{
"start": 391.6,
"end": 396.88,
"text": " data point here you see the decision boundary is pushed away though you could"
},
{
"start": 396.88,
"end": 401.04,
"text": " argue about that particular point but the decision boundary is generally"
},
{
"start": 401.04,
"end": 406.24,
"text": " pushed away from the data points you also see no more kind of these squiggles"
},
{
"start": 406.24,
"end": 414.24,
"text": " here it doesn't happen in in here also if you look at the hidden representations"
},
{
"start": 414.24,
"end": 422.20000000000005,
"text": " the hidden representations now are spread out the classes are bunched up so"
},
{
"start": 422.20000000000005,
"end": 426.76,
"text": " not all the points are bunched up but the the points of individual classes are"
},
{
"start": 426.76,
"end": 432.68,
"text": " bunched up together and the randomly sampled points are in the middle as"
},
{
"start": 432.68,
"end": 439.2,
"text": " they should be you say only confident red is down here confident blue is up"
},
{
"start": 439.2,
"end": 447.34,
"text": " here and everything in between is on confident and third if you look at the"
},
{
"start": 447.34,
"end": 452.59999999999997,
"text": " singular value decompositions of the hidden player and that's kind of a"
},
{
"start": 452.6,
"end": 458.96000000000004,
"text": " measure of how spread out in the different dimensions a data set is you"
},
{
"start": 458.96000000000004,
"end": 466.52000000000004,
"text": " see that the manifold mix up here in green it concentrates or it it lowers"
},
{
"start": 466.52000000000004,
"end": 474.44,
"text": " the singular values of the kind of lower indexes so the first singular value is"
},
{
"start": 474.44,
"end": 480.16,
"text": " large which means that there is like a dominant direction in the in the data"
},
{
"start": 480.16,
"end": 487.6,
"text": " and this is done for each class separately as I understand it it puts a"
},
{
"start": 487.6,
"end": 490.96000000000004,
"text": " lot of weight on the first singular vector and then it pushes down the"
},
{
"start": 490.96000000000004,
"end": 494.64000000000004,
"text": " contributions of the other singular vector which means that the data set"
},
{
"start": 494.64000000000004,
"end": 504.02000000000004,
"text": " that is analyzed is is concentrated into fewer directions of variance this is"
},
{
"start": 504.02,
"end": 511.76,
"text": " layer one and here is layer three means so you see it happens in both that the"
},
{
"start": 511.76,
"end": 518.84,
"text": " manifold mix up compared to the baseline model does this so now you might ask"
},
{
"start": 518.84,
"end": 523.52,
"text": " what is manifold mix up it's actually pretty pretty simple concept all right"
},
{
"start": 523.52,
"end": 529.16,
"text": " here is another comparing it to other kind of regularization techniques and"
},
{
"start": 529.16,
"end": 538.3199999999999,
"text": " showing that none of them really does this so manifold mix up is this"
},
{
"start": 538.3199999999999,
"end": 546.24,
"text": " basically what you do is when you train a neural network you have input data"
},
{
"start": 546.24,
"end": 552.24,
"text": " and you take many batches of input data specifically you take two many batches X"
},
{
"start": 552.24,
"end": 559.76,
"text": " and Y and X prime Y prime right and then what you do is if I have the draw the"
},
{
"start": 559.76,
"end": 567.72,
"text": " neural network here so here is the inputs like a picture of a cat it goes"
},
{
"start": 567.72,
"end": 573.8,
"text": " through layers right and then what you do is you say at some particular you say"
},
{
"start": 573.8,
"end": 581.36,
"text": " stop stop right you take the representation out you and you do this"
},
{
"start": 581.36,
"end": 587.24,
"text": " with two different many batches so here is this is cat one and I'm down back"
},
{
"start": 587.24,
"end": 596.92,
"text": " here is cat two whatever or dog that's a cat you pass it in right here you take"
},
{
"start": 596.92,
"end": 602.88,
"text": " it out here you pass it through the network and you take it out so you now"
},
{
"start": 602.88,
"end": 608,
"text": " have two different forward paths of two different many batches and then you"
},
{
"start": 608,
"end": 616.36,
"text": " define a lambda and I guess they randomly sample a lambda in zero one"
},
{
"start": 616.36,
"end": 621.68,
"text": " right in the range of zero one so this is a mixing coefficient and then you"
},
{
"start": 621.68,
"end": 631.16,
"text": " mix you say lambda times hidden representation of batch one plus one"
},
{
"start": 631.16,
"end": 637,
"text": " minus lambda of hidden representation of batch two and that is what you pass"
},
{
"start": 637,
"end": 642.16,
"text": " through the rest of the network right so basically you forward propagate two"
},
{
"start": 642.16,
"end": 650.04,
"text": " different batches until a certain layer here then you mix them with a random"
},
{
"start": 650.04,
"end": 655.56,
"text": " coefficient and then you pass it through the rest and then the only thing you"
},
{
"start": 655.56,
"end": 662.92,
"text": " also have to do is then at the end if you think of the labels of these two"
},
{
"start": 662.92,
"end": 669.28,
"text": " things you want to mix the labels in the same fashion so you want to mix lambda"
},
{
"start": 669.28,
"end": 678.3199999999999,
"text": " times y of batch one plus one minus lambda of y of batch two and then this"
},
{
"start": 678.3199999999999,
"end": 685.56,
"text": " is your training signal for whatever comes out here right so it's it's um"
},
{
"start": 685.56,
"end": 692.88,
"text": " these are these are one hot labels so if it's class three it's zero zero one zero"
},
{
"start": 692.88,
"end": 698.2399999999999,
"text": " and if y2 is class five it's zero zero zero zero one and then you simply mix"
},
{
"start": 698.2399999999999,
"end": 704.5999999999999,
"text": " the two right and that becomes your training signal so in a practical"
},
{
"start": 704.5999999999999,
"end": 710.8399999999999,
"text": " example if let's just have a mini batch size of one so just one sample if this"
},
{
"start": 710.84,
"end": 717.08,
"text": " is cat and this is dog you would pass them forward right you would mix so in"
},
{
"start": 717.08,
"end": 721.6800000000001,
"text": " the hidden representation it would kind of become a cat dog maybe you do it 50"
},
{
"start": 721.6800000000001,
"end": 726.44,
"text": " 50 but then you would also mix the labels of cat and dog 50 50 and tell the"
},
{
"start": 726.44,
"end": 732.72,
"text": " network this is a mixture of 50% cat 50% dog and then you would train the"
},
{
"start": 732.72,
"end": 739.36,
"text": " network to predict that 50 50 coefficient so they do this the question"
},
{
"start": 739.36,
"end": 744.76,
"text": " is at which layer do you do this and they simply I think for each mini batch"
},
{
"start": 744.76,
"end": 750.8000000000001,
"text": " sample one hidden layer at random they might have some weighting or something"
},
{
"start": 750.8000000000001,
"end": 756.44,
"text": " but the way they describe it is they simply sample one layer for me per mini"
},
{
"start": 756.44,
"end": 761.4,
"text": " batch and then do the mixing there and then you can actually back prop through"
},
{
"start": 761.4,
"end": 764.6800000000001,
"text": " everything everything is differentiable this mixing is differentiable so you"
},
{
"start": 764.6800000000001,
"end": 768.62,
"text": " can back prop through any everything and there's even you know kind of an"
},
{
"start": 768.62,
"end": 774.04,
"text": " engineering trick to only use a single mini batch by mixing it with itself so"
},
{
"start": 774.04,
"end": 778.32,
"text": " that's that's pretty neat so this manifold mix up as you can see here is"
},
{
"start": 778.32,
"end": 783.24,
"text": " the that's kind of the description you mix the hidden representations with"
},
{
"start": 783.24,
"end": 787.88,
"text": " lambda and you mix the labels with the same lambda and that will become your"
},
{
"start": 787.88,
"end": 798.08,
"text": " actual training signal all right so they give some theory to it that it flattens"
},
{
"start": 798.08,
"end": 805.12,
"text": " representations and specifically they say under some conditions namely if the"
},
{
"start": 805.12,
"end": 810.0400000000001,
"text": " network is large enough so if the dimension of the hidden representation"
},
{
"start": 810.0400000000001,
"end": 816.8000000000001,
"text": " is of a certain size then if you optimize this manifold mix up like if"
},
{
"start": 816.8000000000001,
"end": 822.2800000000001,
"text": " you optimize over every lambda and over the entire training data set what you"
},
{
"start": 822.28,
"end": 832.12,
"text": " will end up is actually a linear function of the input this is not"
},
{
"start": 832.12,
"end": 838.8399999999999,
"text": " too surprising that if you because what you do is you mix linearly this mixture"
},
{
"start": 838.8399999999999,
"end": 846.56,
"text": " happens in a linear fashion so if you optimize for and you not only optimize"
},
{
"start": 846.56,
"end": 849.92,
"text": " for the training set but you optimize for every possible mixture of the"
},
{
"start": 849.92,
"end": 855.12,
"text": " training set linear mixture your minimization your minimizer function"
},
{
"start": 855.12,
"end": 860.18,
"text": " will actually become a linear function it's not surprising but they have a"
},
{
"start": 860.18,
"end": 870,
"text": " formal proof of this and they also have a proof that if certain assumptions are"
},
{
"start": 870,
"end": 876.28,
"text": " given then the minimizers if you apply the minimizers the hidden representations"
},
{
"start": 876.28,
"end": 882.24,
"text": " will actually fall on a low dimensional subspace which is also not surprising"
},
{
"start": 882.24,
"end": 889.12,
"text": " but it's kind of the theoretical analog to what they show with with the singular"
},
{
"start": 889.12,
"end": 894.24,
"text": " value distribution that it basically suppresses low singular values that"
},
{
"start": 894.24,
"end": 898.66,
"text": " means the data set is much more into a single direction the hidden"
},
{
"start": 898.66,
"end": 908.16,
"text": " representations sorry all right so this the theory part is you can you can read"
},
{
"start": 908.16,
"end": 914.36,
"text": " it if you if you want to it's yeah it's it's to the results are to be expected I"
},
{
"start": 914.36,
"end": 922.9599999999999,
"text": " would say from what they do and the last thing they give a pictorial example of"
},
{
"start": 922.96,
"end": 928.72,
"text": " why manifold mix up flattened representations so both of these things"
},
{
"start": 928.72,
"end": 934.12,
"text": " the fact that the minimizers will become linear functions and the fact that the"
},
{
"start": 934.12,
"end": 938.2,
"text": " singular value spectrum is more concentrated on the first singular value"
},
{
"start": 938.2,
"end": 945.52,
"text": " means basically that representations are flattened and here is a pictorial"
},
{
"start": 945.52,
"end": 957.28,
"text": " representation so in this case what happens if you if you basically have"
},
{
"start": 957.28,
"end": 964.72,
"text": " these four data points a 1a 2b 1 and b 2 where a 1 and a 2 are blue class and b 1"
},
{
"start": 964.72,
"end": 973.24,
"text": " and b 2 are red class and if you now look at an interpolation point between"
},
{
"start": 973.24,
"end": 980.16,
"text": " the two so if you look at this interpolation point between a 1 and b 2"
},
{
"start": 980.16,
"end": 989.52,
"text": " what happens is that in this case this should be 50 50 blue and red but if you"
},
{
"start": 989.52,
"end": 994.16,
"text": " now look at the points that it where it's not interpolated on this is very"
},
{
"start": 994.16,
"end": 1001.5600000000001,
"text": " close to a 2 in this case it's probably should be more like 95 blue and 5 red"
},
{
"start": 1001.56,
"end": 1009.28,
"text": " do they say here well if you use manifold mix up to learn the network what"
},
{
"start": 1009.28,
"end": 1014.88,
"text": " you'll actually do is you say okay actually this hidden representation"
},
{
"start": 1014.88,
"end": 1022.1199999999999,
"text": " needs to be pushed outward and you will achieve something over here where any"
},
{
"start": 1022.12,
"end": 1031.84,
"text": " mixture of two points of the opposite class will actually give you a 50 50 so"
},
{
"start": 1031.84,
"end": 1039.84,
"text": " all the mid points here will give you a 50 50 mixture between the labels which"
},
{
"start": 1039.84,
"end": 1046.36,
"text": " basically means what you end up with is a line between this data and this data"
},
{
"start": 1046.36,
"end": 1052.08,
"text": " and it means that basically the network becomes more linear and the"
},
{
"start": 1052.08,
"end": 1057.6,
"text": " representations become more flat because flat is the optimal if your"
},
{
"start": 1057.6,
"end": 1063.6,
"text": " distributions are flat all the distances to the line are the same and this"
},
{
"start": 1063.6,
"end": 1071.12,
"text": " objective is optimized and this is basically my my kind of biggest problem"
},
{
"start": 1071.12,
"end": 1081.04,
"text": " with the method is that it it kind of mixes the input with a linear function"
},
{
"start": 1081.04,
"end": 1089.52,
"text": " where we know that that is kind of not the shape of the true data manifold the"
},
{
"start": 1089.52,
"end": 1097.8,
"text": " input manifolds as you can see here the input manifold here isn't linear or flat"
},
{
"start": 1097.8,
"end": 1104.08,
"text": " it's actually very very tangled and we know that neural networks as you"
},
{
"start": 1104.08,
"end": 1108.6399999999999,
"text": " continue in the layers will flatten those representations because ultimately"
},
{
"start": 1108.6399999999999,
"end": 1114.76,
"text": " at the end it needs to classify the data set linearly because the last layer is a"
},
{
"start": 1114.76,
"end": 1121.08,
"text": " softmax layer but the the idea that you could apply this to any layer seems a"
},
{
"start": 1121.08,
"end": 1126.24,
"text": " bit shady to me of course it works and they show it works and it's really nice"
},
{
"start": 1126.24,
"end": 1132.72,
"text": " that it works but applying this to low layers in neural networks seems a bit"
},
{
"start": 1132.72,
"end": 1141.4,
"text": " not principled to me so I think this is not the end of the story of this line of"
},
{
"start": 1141.4,
"end": 1147.76,
"text": " work and there is kind of more that can be done in a more principled fashion but"
},
{
"start": 1147.76,
"end": 1153.72,
"text": " in any case they show that this actually works in terms of performance on"
},
{
"start": 1153.72,
"end": 1161.1200000000001,
"text": " generalization on kind of standard data sets so they have results on CIFAR-10"
},
{
"start": 1161.1200000000001,
"end": 1166.4,
"text": " and CIFAR-100 which are famous image data sets and they show that the"
},
{
"start": 1166.4,
"end": 1175.3600000000001,
"text": " hair regularizer outperforms others and they also show that they can withstand"
},
{
"start": 1175.36,
"end": 1184.24,
"text": " one step single step adversarial attacks more kind of better so they have a"
},
{
"start": 1184.24,
"end": 1189.12,
"text": " better performance against single step adversarial attacks after"
},
{
"start": 1189.12,
"end": 1199.04,
"text": " regularizing mostly again giving kind of an idea that the if you push if you"
},
{
"start": 1199.04,
"end": 1205.32,
"text": " push it if you have a two points this is X this is X X 1 X 2 there are different"
},
{
"start": 1205.32,
"end": 1212.76,
"text": " classes if you put the decision boundary really close to X 2 then an adversarial"
},
{
"start": 1212.76,
"end": 1217.8,
"text": " attack can simply move the point across the decision boundary with a very small"
},
{
"start": 1217.8,
"end": 1225.06,
"text": " step but if you actually have the decision boundary pushed away from both"
},
{
"start": 1225.06,
"end": 1231.36,
"text": " data points then the an adversarial attack must go a very long way to the"
},
{
"start": 1231.36,
"end": 1237.12,
"text": " decision boundary and thus if you limit the size of adversarial attacks which is"
},
{
"start": 1237.12,
"end": 1242.4399999999998,
"text": " what you usually do you can maybe not reach this decision boundary and thus"
},
{
"start": 1242.4399999999998,
"end": 1249.12,
"text": " you mitigate some of the problem so it's pretty cool I think yeah there's work to"
},
{
"start": 1249.12,
"end": 1253.6799999999998,
"text": " be done but I think this is pretty cool it's implemented pretty easy I've seen"
},
{
"start": 1253.6799999999998,
"end": 1260.6,
"text": " there's a lot of libraries already available with it in and yeah won't hurt"
},
{
"start": 1260.6,
"end": 1265.08,
"text": " to add this to your code make your network better and more robust all right"
},
{
"start": 1265.08,
"end": 1292.32,
"text": " that was it from me bye bye"
}
] |
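Since the mixing procedure described in the transcript above is only a few lines, here is a minimal PyTorch-style sketch of one training step. The network is assumed to be exposed as a list of layer blocks plus a final classifier; those names, the number of classes, and the Beta(alpha, alpha) draw for lambda (the transcript only says lambda is sampled in the range [0, 1]) are assumptions for illustration, not the authors' code.

```python
import random
import torch
import torch.nn.functional as F

def manifold_mixup_loss(blocks, classifier, x1, y1, x2, y2, alpha=2.0, num_classes=10):
    """One training step: mix two mini-batches at a randomly chosen layer."""
    k = random.randint(0, len(blocks))                   # 0 means mixing in the input space
    lam = torch.distributions.Beta(alpha, alpha).sample().item()

    h1, h2 = x1, x2
    for i, block in enumerate(blocks):
        if i == k:                                       # mix the hidden representations once
            h1 = lam * h1 + (1 - lam) * h2
        h1 = block(h1)
        if i < k:                                        # second batch only needs to reach layer k
            h2 = block(h2)
    if k == len(blocks):                                 # mixing after the last block
        h1 = lam * h1 + (1 - lam) * h2

    logits = classifier(h1)

    # Mix the one-hot labels with the same lambda and train on the soft targets.
    y_mix = lam * F.one_hot(y1, num_classes).float() + (1 - lam) * F.one_hot(y2, num_classes).float()
    loss = -(y_mix * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    return loss
```

The only overhead compared to ordinary training is the second forward pass up to the mixing layer, which fits the remark in the transcript that the method is easy to add to existing training code.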
Qk4lJdp7ZAs | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Learning World Graphs to Accelerate Hierarchical Reinforcement Learning | [
"Science & Technology"
] | [
"deep learning",
"reinforcement learning",
"deep reinforcement learning",
"world model",
"hierarchical reinforcement learning",
"planning",
"salesforce",
"research",
"machine learning",
"navigation",
"pivot states",
"ai",
"artificial intelligence"
] | The goal of hierarchical reinforcement learning is to divide a task into different levels of coarseness with the top-level agent planning only over a high-level view of the world and each subsequent layer having a more detailed view. This paper proposes to learn a set of important states as well as their connections to each other as a high-level abstraction.
https://arxiv.org/abs/1907.00664
Abstract:
In many real-world scenarios, an autonomous agent often encounters various tasks within a single complex environment. We propose to build a graph abstraction over the environment structure to accelerate the learning of these tasks. Here, nodes are important points of interest (pivotal states) and edges represent feasible traversals between them. Our approach has two stages. First, we jointly train a latent pivotal state model and a curiosity-driven goal-conditioned policy in a task-agnostic manner. Second, provided with the information from the world graph, a high-level Manager quickly finds solution to new tasks and expresses subgoals in reference to pivotal states to a low-level Worker. The Worker can then also leverage the graph to easily traverse to the pivotal states of interest, even across long distance, and explore non-locally. We perform a thorough ablation study to evaluate our approach on a suite of challenging maze tasks, demonstrating significant advantages from the proposed framework over baselines that lack world graph knowledge in terms of performance and efficiency.
Authors: Wenling Shang, Alex Trott, Stephan Zheng, Caiming Xiong, Richard Socher | Hi there. Today we're looking at learning world graphs to accelerate hierarchical reinforcement learning by Wenling Sheng et al from Salesforce Research. This work is based in the world of reinforcement learning and especially hierarchical reinforcement learning. So in hierarchical reinforcement learning the idea is that in order to perform a task like in this case they perform all of their experiments on mazes like this. So imagine you have this maze and this red thing here is the agent and the goal is the green square and the gray things obviously are walls and the black things are everywhere the agent can move. The agent can always move one step in any direction that it wants and that isn't blocked by a wall. So in order to fulfill such a task the agent needs to take many many steps like go here here here here here here each one of those is a step. In addition this specific maze has an additional property namely that there's a locked door here and first you need to pick up the key to basically to open the locked door. So in order to reach the goal the agent needs first to pick up the key then open the door then go to the goal and each one of these it has to traverse many many steps. So the idea in hierarchical reinforcement learning is that you have two parts to it to the agent. So your agent which is this entire box here is divided into what's called a manager and a worker and this is a divide. So what the manager sees the manager sees basically I do an example here they do it differently but the manager could see large could see the world basically only in these large chunks right and it doesn't really care what is in or it cares what is in the chunks but it doesn't distinguish points within the chunks it just knows about these these chunks basically and what the manager will say oh first I need to go to this chunk here then because there's the key in this chunk and then I need to go to this chunk here because there is the door and then I need to go to this chunk here because there's the goal. So the in the view of the manager which has a very high level view of the world is the the action sequence is down here over here then over here. Those are like three actions that's a pretty simple and then the manager would pass this information to the worker and it would say hey worker please go to this state here please go to the first state and then the worker would be tasked with basically moving the individual steps to go not to the final goal but only to go to that chunk and then in that chunk the worker would go to the key and then once it has the key the manager would say good job now please perform the second action which is go to to this chunk here so the second action that the worker would so you basically get the idea whoops I am doing something here you get the idea that the I'm creating text boxes that the worker and the manager work together and that the manager has a high level view of the world and then the worker can basically execute the actual actions that the manager has decided on in a fine-grained way. So this is gives you several advantages namely the manager can plan high level and far away things and then the worker really only has to care about its close neighborhood because each step the manager proposes is a fairly short range so the worker can implement it. 
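To make the manager/worker split described above concrete, here is a small self-contained Python sketch of the planning side, assuming the world graph of pivotal states and their edges has already been learned. The graph, the environment, and the goal-conditioned worker policy are placeholders for the learned components; in particular, the breadth-first planner stands in for the learned manager (the rest of the transcript notes the paper actually builds on feudal networks), so this only illustrates the role the world graph plays.

```python
from collections import deque

def manager_plan(world_graph, start, goal):
    """Breadth-first search over pivotal states; the manager never reasons about single maze cells."""
    frontier, parents = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:                            # reconstruct the sequence of subgoals
            path, cur = [], node
            while cur is not None:
                path.append(cur)
                cur = parents[cur]
            return list(reversed(path))
        for nbr in world_graph.get(node, []):
            if nbr not in parents:
                parents[nbr] = node
                frontier.append(nbr)
    return None                                     # goal not reachable through the graph

def run_episode(world_graph, worker_policy, env, start, goal):
    """The worker only ever has to reach the next nearby pivotal state."""
    plan = manager_plan(world_graph, start, goal)
    if plan is None:
        return start
    state = start
    for subgoal in plan[1:]:
        while state != subgoal:
            action = worker_policy(state, subgoal)   # goal-conditioned low-level policy (assumed)
            state = env.step(state, action)          # assumed environment interface
    return state

# Toy check with a hand-made graph of pivotal states:
toy_graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(manager_plan(toy_graph, "A", "D"))             # ['A', 'B', 'C', 'D']
```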
They do this in a somewhat different way, so let's actually start from the back of this paper, which I find is a bit more explanatory and makes it easier to see what is going on. What they propose is to learn a world graph. So what is a world graph? A world graph consists of two things. First, a set of states, which are the blue states here. All these blue states are so-called pivot states or important states: states in the world that are very important, as determined by some measure. If you look at where they are, they are often at narrow passes; you see here, they are at these narrow passes. Basically, if you reach those states as an intermediary goal, then you can go to a lot of places from there, so these are, let's say, powerful states. Second, these states are connected by a neighborhood graph, which says which of these states are close to each other. For example, here you would of course connect those two because they are neighbors, and you would probably connect those as well; I'm attempting to kind of draw the world graph, and you might connect those too. It doesn't need to be a tree, it can look like this, so you see the graph kind of takes shape. Whenever one of these important states is fairly easily reachable from some other important state, it is designated as a neighbor. So with this world graph you get an abstraction: a set of states with connections between them that say how easy or hard it is to reach one state from another. If you have these things, then you can easily imagine a hierarchical reinforcement learning algorithm that incorporates this information, namely one where the manager only uses the important states to plan. For example, the goal isn't drawn in here, but let's say the goal is here, the locked door is here, and the key, let's draw in the key... come on, okay, this doesn't want to... all right, the key is somewhere, let's say here. No, let's put the key further away: door here, and key here. All right, so what would the manager do? The manager would say: ah, okay, the key is here, so this would be a good important state to reach, since the manager is only allowed to plan over important states. Because it has the graph, it says: this state is easily reachable from, let's say, this state, and that state is easily reachable from this one. So it plans: go here, go here, then go here, then get the key, which is a kind of micro action that is not among the important states. Then: I need to go here; this is reachable from this state, which is reachable from this state, which is reachable from my origin. So from the key, next go here, go here, go here, then open the door, and then of course go here and solve the task. The worker then only ever needs to implement the following: it starts here and says, aha, I need to go here, what do I need to do? I need to go, for example, down and over. And once I've done this, I need to go here, so I need to go right and down. So you see, the worker only ever has to care about going from one hop to the next hop, which makes it really easy for the worker, while the manager only has these blue states available, which makes its search space much more condensed and much easier to get an overview of, especially with the connections between the nodes in the world graph. So that's it: if you have the world graph, this set of states together with how easily they are reachable from each other, you can very easily do a hierarchical reinforcement learning approach that has the manager plan on the world graph and then has the worker implement the fine-grained actions. There is already a method that does this; this paper uses feudal networks, so we won't go into that here, just saying it's pretty easy once you have those things. So the real question is: how do they learn the world graph? What they do is the following, and they describe it here in this way. What they ultimately want to learn is a prior that tells them, for a given state, how important it is, and that's a beta prior; a beta distribution is a continuous approximation of a kind of binary zero-one variable. So how do they do it? They use an LSTM to encode trajectories. These are trajectories from rollouts of a policy; the LSTM encodes them and for each step outputs a posterior over what are called these latent variables here, which say how important a state is. So these are the posteriors, whereas this over here is the prior, and the posteriors of course only make sense in the context of a trajectory; that's why the ultimate decision happens at the prior, because a state needs to be important or not important independently of any particular trajectory. So what they do is roll out policies, and they have certain methods of doing this: they use random exploration with curiosity goals, but they also train this continuously via what's called a goal-conditioned policy. A goal-conditioned policy basically means you put the agent somewhere in the maze (actually, let's use this maze over here), let's say here, you make a random exploration, let's say to here, so you know these two states are reachable from each other, and then you train the agent to go from here to here; that is your goal. The agent then tries to kind of reconstruct this random walk to there. So this is how you train an agent to go from any two well-reachable states to each other, from here to here and so on. You won't train it to go directly from here to way over there, because it would be very hard for a random walk to find its way over there, but what you end up with is an agent that is able to reach close-by states, and that's exactly what the worker is supposed to do. From these trajectories you can then decide on the pivotal states. How do you do that? This is where this top part here comes in. Down here you input the trajectory and you output how important each state is. Now you see in this example that the light color means the LSTM decides this state isn't important, and the darker orange color means the LSTM decides this state is important. What you do next is take the states where it decides they are important (and note that the beginning and the end are always important) and feed them to a second LSTM as an input; you see: here, here, here. So in this case, of these six states in the trajectory, three are important, namely the start, the end, and this one here where the LSTM decides, hey, that's important. That goes into a second LSTM, which is a generator. So this here is an encoder and this here is a decoder, and what the decoder does is decode a sequence of actions given nothing but this input. At the end, what you want is for the actions output here to reconstruct the actions that were input. This might sound a little confusing, but the core idea is that you want to reconstruct the actions of the trajectory that was taken, given only the important states. What does this mean in our example? It means: if I have to go from here to here and, for example, I took the following path, right, right, down, down, right, that was my action sequence, then if I only have the start, the end, and one state in between, let's say this one, can I reconstruct what actions were taken? If I erase the blue thing and tell you I went from here via here to here, then you could very much reconstruct the actions, so this state here is a good candidate for being an important state. Whereas if it were a different state, if for example I told you I went from over here to here and then to here, you'd say, well, this could be something like this, or it could be a path like this; there could be many, many paths leading from here to here, so that state is probably not very important. So that's how they learn which ones are the important states: by encoding trajectories in an LSTM and trying to reconstruct the actions taken in the trajectory given only the states that were deemed important by the LSTM. That's how you train the LSTM to recognize important states, and once you've recognized the important states in a trajectory, you can use those to learn the prior; basically you ask, over all possible trajectories, which states are generally important, and that's how you end up with these blue states. All right, and then the last part is to connect the blue states, and that is done fairly easily in their approach. What they say is: all right, we have blue states, we pick one and we do a random walk from it. If we hit another blue state in the random walk, like this one here, we simply say, well, they are probably neighbors. We do this a bunch of times, and whenever we hit a blue state without hitting another blue state first, we connect the two in the graph (a minimal illustrative sketch of this random-walk connection step follows this entry's segment list). So these would be connected, these would probably be connected, and we end up with what we had at the beginning: you have this graph, maybe these two are connected, and so on. So this gives you the world graph, and now you end up with a set of important states and connections between them that tell you which ones are easily reachable from each other. You can train the manager on that, and you can train the worker, as we said before, by simply selecting two close-by states and training it to go from one to the other; that is what the worker will learn. So in essence, that's how they do it. You can look at the experiments themselves; they show that this basically transfers, so if you pre-train like this, you can then give more specific and more complicated tasks and this will rapidly accelerate the learning of those. Yeah, have a look at the experiments if you have time. That was it for me, thank you for listening. | [
{
"start": 0,
"end": 4.62,
"text": " Hi there. Today we're looking at learning world graphs to accelerate"
},
{
"start": 4.62,
"end": 9.86,
"text": " hierarchical reinforcement learning by Wenling Sheng et al from Salesforce"
},
{
"start": 9.86,
"end": 16.62,
"text": " Research. This work is based in the world of reinforcement learning and"
},
{
"start": 16.62,
"end": 21.36,
"text": " especially hierarchical reinforcement learning. So in hierarchical reinforcement"
},
{
"start": 21.36,
"end": 29,
"text": " learning the idea is that in order to perform a task like in this case they"
},
{
"start": 29,
"end": 34,
"text": " perform all of their experiments on mazes like this. So imagine you have this"
},
{
"start": 34,
"end": 42.08,
"text": " maze and this red thing here is the agent and the goal is the green square"
},
{
"start": 42.08,
"end": 47.36,
"text": " and the gray things obviously are walls and the black things are everywhere the"
},
{
"start": 47.36,
"end": 53.8,
"text": " agent can move. The agent can always move one step in any direction that it"
},
{
"start": 53.8,
"end": 61.519999999999996,
"text": " wants and that isn't blocked by a wall. So in order to fulfill such a task the"
},
{
"start": 61.519999999999996,
"end": 66.52,
"text": " agent needs to take many many steps like go here here here here here here each"
},
{
"start": 66.52,
"end": 73.56,
"text": " one of those is a step. In addition this specific maze has an additional property"
},
{
"start": 73.56,
"end": 78.44,
"text": " namely that there's a locked door here and first you need to pick up the key to"
},
{
"start": 78.44,
"end": 85.39999999999999,
"text": " basically to open the locked door. So in order to reach the goal the agent needs"
},
{
"start": 85.39999999999999,
"end": 90.4,
"text": " first to pick up the key then open the door then go to the goal and each one of"
},
{
"start": 90.4,
"end": 97.12,
"text": " these it has to traverse many many steps. So the idea in hierarchical reinforcement"
},
{
"start": 97.12,
"end": 104.1,
"text": " learning is that you have two parts to it to the agent. So your agent which is"
},
{
"start": 104.1,
"end": 110.72,
"text": " this entire box here is divided into what's called a manager and a"
},
{
"start": 110.72,
"end": 118.83999999999999,
"text": " worker and this is a divide. So what the manager sees the manager sees basically"
},
{
"start": 118.83999999999999,
"end": 122.96,
"text": " I do an example here they do it differently but the manager could see"
},
{
"start": 122.96,
"end": 131.07999999999998,
"text": " large could see the world basically only in these large chunks right and it"
},
{
"start": 131.08,
"end": 136.60000000000002,
"text": " doesn't really care what is in or it cares what is in the chunks but it"
},
{
"start": 136.60000000000002,
"end": 142.08,
"text": " doesn't distinguish points within the chunks it just knows about these these"
},
{
"start": 142.08,
"end": 148.84,
"text": " chunks basically and what the manager will say oh first I need to go to this"
},
{
"start": 148.84,
"end": 153.72000000000003,
"text": " chunk here then because there's the key in this chunk and then I need to go to"
},
{
"start": 153.72000000000003,
"end": 158.16000000000003,
"text": " this chunk here because there is the door and then I need to go to this chunk"
},
{
"start": 158.16,
"end": 163.35999999999999,
"text": " here because there's the goal. So the in the view of the manager which has a very"
},
{
"start": 163.35999999999999,
"end": 170,
"text": " high level view of the world is the the action sequence is down here over here"
},
{
"start": 170,
"end": 174.84,
"text": " then over here. Those are like three actions that's a pretty simple and then"
},
{
"start": 174.84,
"end": 179.96,
"text": " the manager would pass this information to the worker and it would say hey worker"
},
{
"start": 179.96,
"end": 186.72,
"text": " please go to this state here please go to the first state and then the worker"
},
{
"start": 186.72,
"end": 195,
"text": " would be tasked with basically moving the individual steps to go not to the"
},
{
"start": 195,
"end": 200.64,
"text": " final goal but only to go to that chunk and then in that chunk the worker would"
},
{
"start": 200.64,
"end": 205.56,
"text": " go to the key and then once it has the key the manager would say good job now"
},
{
"start": 205.56,
"end": 210.48,
"text": " please perform the second action which is go to to this chunk here so the"
},
{
"start": 210.48,
"end": 216.16,
"text": " second action that the worker would so you basically get the idea whoops I am"
},
{
"start": 216.16,
"end": 222.92,
"text": " doing something here you get the idea that the I'm creating text boxes that"
},
{
"start": 222.92,
"end": 227.07999999999998,
"text": " the worker and the manager work together and that the manager has a high level"
},
{
"start": 227.07999999999998,
"end": 233.92,
"text": " view of the world and then the worker can basically execute the actual actions"
},
{
"start": 233.92,
"end": 240.64,
"text": " that the manager has decided on in a fine-grained way. So this is gives you"
},
{
"start": 240.64,
"end": 246.04,
"text": " several advantages namely the manager can plan high level and far away things"
},
{
"start": 246.04,
"end": 251.12,
"text": " and then the worker really only has to care about its close neighborhood"
},
{
"start": 251.12,
"end": 256.03999999999996,
"text": " because each step the manager proposes is a fairly short range so the worker"
},
{
"start": 256.03999999999996,
"end": 264.36,
"text": " can implement it. They do this in a kind of different way so let's actually start"
},
{
"start": 264.36,
"end": 271.76,
"text": " from the back from of this paper which is I find is a bit more explanatory and"
},
{
"start": 271.76,
"end": 277.44,
"text": " it makes a bit more sense to look at it what they propose is to learn a world"
},
{
"start": 277.44,
"end": 284.03999999999996,
"text": " graph so in a world graph what is a world graph a world graph consists of"
},
{
"start": 284.03999999999996,
"end": 291.03999999999996,
"text": " two things first a set of states which is the are the blue states here so all"
},
{
"start": 291.03999999999996,
"end": 298.4,
"text": " these blue states which are so-called pivot states or important states so"
},
{
"start": 298.4,
"end": 305.84,
"text": " these are states in the world that are very important determined by some measure"
},
{
"start": 305.84,
"end": 313.79999999999995,
"text": " right so these are basically states that look at look at where they are they're"
},
{
"start": 313.79999999999995,
"end": 319.35999999999996,
"text": " often at like narrow passes you see here here they're at these narrow passes so"
},
{
"start": 319.35999999999996,
"end": 325.64,
"text": " basically if you if you reach those states as an intermediary goal then you"
},
{
"start": 325.64,
"end": 329.68,
"text": " can basically go a lot of places from here so these are very let's say"
},
{
"start": 329.68,
"end": 336.56,
"text": " powerful states and these states are connected by a neighborhood graph so"
},
{
"start": 336.56,
"end": 342.47999999999996,
"text": " basically which states of these are close to each other and for example here"
},
{
"start": 342.47999999999996,
"end": 346.08,
"text": " you would connect of course those two because they're neighbors those you"
},
{
"start": 346.08,
"end": 352.71999999999997,
"text": " would probably connect those some I'm attempting to to kind of draw the world"
},
{
"start": 352.72,
"end": 358.48,
"text": " graph you could you might connect those doesn't need to be like a tree it can be"
},
{
"start": 358.48,
"end": 367.64000000000004,
"text": " like such so you see that the graph kind of takes shape these are fairly"
},
{
"start": 367.64000000000004,
"end": 373.20000000000005,
"text": " reachable so whenever a node in the graph whenever one of these important"
},
{
"start": 373.20000000000005,
"end": 378.6,
"text": " states is fairly easily reachable by some other state it's designated as a"
},
{
"start": 378.6,
"end": 386.52000000000004,
"text": " neighbor so with that with this world graph here this is what you get you get"
},
{
"start": 386.52000000000004,
"end": 391.16,
"text": " an abstraction basically you get a set of states with connections between them"
},
{
"start": 391.16,
"end": 396.72,
"text": " that says how easy or hard is it to reach from one state to the other if you"
},
{
"start": 396.72,
"end": 403.32000000000005,
"text": " have these things then you can really easily imagine a hierarchical"
},
{
"start": 403.32000000000005,
"end": 408.12,
"text": " reinforcement learning algorithm that now in let incorporates this information"
},
{
"start": 408.12,
"end": 414.44,
"text": " namely the manager will only use the important states to plan so for example"
},
{
"start": 414.44,
"end": 420.8,
"text": " if the goal the goal isn't drawn in here but let's say the goal is here and then"
},
{
"start": 420.8,
"end": 431,
"text": " the door the door is here it's a locked door here and then the key let's draw in"
},
{
"start": 431,
"end": 438.64,
"text": " the key come on okay this doesn't want to all right the key is somewhere let's"
},
{
"start": 438.64,
"end": 445.16,
"text": " say here there's the key he is this all right then the no let's put the key"
},
{
"start": 445.16,
"end": 457.24,
"text": " further away come on door here I'm off with the colors and key here all right"
},
{
"start": 457.24,
"end": 465.36,
"text": " so what would the manager do the manager would then say ah okay the keys here so"
},
{
"start": 465.36,
"end": 470.16,
"text": " this would be a good state to reach of my importance if the manager is only"
},
{
"start": 470.16,
"end": 475.6,
"text": " allowed to go important states right so the manager says because it has the"
},
{
"start": 475.6,
"end": 481.6,
"text": " graph right it says aha this state is easily reachable from let's say this"
},
{
"start": 481.6,
"end": 486.40000000000003,
"text": " state and this state is easily reachable from this state so it plans go here and"
},
{
"start": 486.4,
"end": 492.2,
"text": " go here and then go here then get the key right this is a kind of a micro"
},
{
"start": 492.2,
"end": 497.28,
"text": " action that is not in the importance they then I need to you know go here"
},
{
"start": 497.28,
"end": 505.64,
"text": " this is reachable from this state that's reachable from this state and from this"
},
{
"start": 505.64,
"end": 510.71999999999997,
"text": " state and that's reachable from my origin so from the key then next go here"
},
{
"start": 510.72,
"end": 517.2,
"text": " go here go here go here and then open the door and then of course go here and"
},
{
"start": 517.2,
"end": 528.76,
"text": " solve the the task the worker then would only ever need to implement the"
},
{
"start": 528.76,
"end": 535.64,
"text": " following it starts here and it says aha I need to go here what do I need to do"
},
{
"start": 535.64,
"end": 540.7,
"text": " I need to go for example down and over and now once I've done this I need to"
},
{
"start": 540.7,
"end": 547.2800000000001,
"text": " go here so I need to go right down right so you see the worker only ever has to"
},
{
"start": 547.2800000000001,
"end": 552.6400000000001,
"text": " care about going from one hop to the next hop making it really easy for the"
},
{
"start": 552.6400000000001,
"end": 557.6,
"text": " worker while the manager only has these blue states available which makes its"
},
{
"start": 557.6,
"end": 566.76,
"text": " search space much more much more condensed and much more much more"
},
{
"start": 566.76,
"end": 574.8,
"text": " overviewable especially with the nodes in between the world graph so that's if"
},
{
"start": 574.8,
"end": 579.88,
"text": " you have the world graph right if you have this set of states and how important"
},
{
"start": 579.88,
"end": 585.72,
"text": " are how easily they reachable reachable they are between each other you can very"
},
{
"start": 585.72,
"end": 590.8,
"text": " easily do a reinforcement learning approach that that is a hierarchical has"
},
{
"start": 590.8,
"end": 595.2,
"text": " the manager plan on the world graph has and then has the worker implement the"
},
{
"start": 595.2,
"end": 600.2800000000001,
"text": " fine-grained actions and there is already a method that does this this"
},
{
"start": 600.2800000000001,
"end": 605.0400000000001,
"text": " paper here uses feudal networks so we won't go into that later just saying"
},
{
"start": 605.0400000000001,
"end": 608.84,
"text": " it's pretty easy if you have those things so the real question is how do"
},
{
"start": 608.84,
"end": 617.12,
"text": " they learn the world graph and what they do is the following and they describe it"
},
{
"start": 617.12,
"end": 629.88,
"text": " here in kind of this sorry this way what they want to to finally learn is a prior"
},
{
"start": 629.88,
"end": 636.68,
"text": " that tells them for a given state how important it is it and that's a beta"
},
{
"start": 636.68,
"end": 641.84,
"text": " prior a beta distribution is a continuous approximation on a on a kind"
},
{
"start": 641.84,
"end": 652.5600000000001,
"text": " of a binary zero one variable so how do they do it they use an LSTM to encode"
},
{
"start": 652.5600000000001,
"end": 660.44,
"text": " trajectories so these are trajectories from kind of rollouts of policy and then"
},
{
"start": 660.44,
"end": 668.6,
"text": " the the LSTM encodes it and for each step it outputs this posterior over the"
},
{
"start": 668.6,
"end": 674.52,
"text": " what's called these latent variables here they say how important is a state"
},
{
"start": 674.52,
"end": 679.72,
"text": " so these are the posteriors whereas this over here is the prior and the posterior"
},
{
"start": 679.72,
"end": 686.08,
"text": " of course only make sense in context of a trajectory that's why the ultimate"
},
{
"start": 686.08,
"end": 690.2,
"text": " decision happens for the prior because the state needs to be important or not"
},
{
"start": 690.2,
"end": 698.48,
"text": " important to any trajectory so what they do is they roll out policies and they"
},
{
"start": 698.48,
"end": 707.12,
"text": " have certain methods of of doing this so they have they have random"
},
{
"start": 707.12,
"end": 711.84,
"text": " exploration of curiosity goals but they also train this continuously so they"
},
{
"start": 711.84,
"end": 716.84,
"text": " updated continuously via this what's called a goal condition policy and what"
},
{
"start": 716.84,
"end": 723.12,
"text": " a goal condition policy is basically is you put the agent somewhere in the maze"
},
{
"start": 723.12,
"end": 729.04,
"text": " actually let's use this maze over here you put the agent somewhere in the maze"
},
{
"start": 729.04,
"end": 738.16,
"text": " let's say here you for example make a bunch of ran make a random exploration"
},
{
"start": 738.16,
"end": 743.84,
"text": " let's say here so you know these two things are reachable and then you train"
},
{
"start": 743.84,
"end": 749,
"text": " the agency go from here to here right this is your goal now the agent tries to"
},
{
"start": 749,
"end": 755.84,
"text": " kind of reconstruct this random walk to there and you can riff so so this is how"
},
{
"start": 755.84,
"end": 761.28,
"text": " you train an agent to go it basically go from any two well reachable states to"
},
{
"start": 761.28,
"end": 765.54,
"text": " each other right from here to here and so on now you won't train it to go"
},
{
"start": 765.54,
"end": 770.64,
"text": " directly from here to over here because a random walk would be very hard for a"
},
{
"start": 770.64,
"end": 776.2,
"text": " random walk to find its way over there but what you end up with is is somehow an"
},
{
"start": 776.2,
"end": 781.4200000000001,
"text": " agent that is able to reach close by states and that's exactly what the"
},
{
"start": 781.4200000000001,
"end": 791.1600000000001,
"text": " worker is supposed to do right here and so of of these trajectories you can then"
},
{
"start": 791.1600000000001,
"end": 799.76,
"text": " unroll them and decide on the kind of on these on these pivotal states so how do"
},
{
"start": 799.76,
"end": 805.76,
"text": " you do that and this is where this top part here comes in so down here you"
},
{
"start": 805.76,
"end": 811.28,
"text": " input the trajectory and you output how important is each state all right and"
},
{
"start": 811.28,
"end": 818.84,
"text": " now you see in this example here the light color means that the LSTM decides"
},
{
"start": 818.84,
"end": 823.6,
"text": " this state isn't important and the darker orange color means the LSTM decides"
},
{
"start": 823.6,
"end": 830.68,
"text": " this state is important so what you do next is the states where it decides it"
},
{
"start": 830.68,
"end": 837.1999999999999,
"text": " is important and notice the beginning at the end are always important it feeds to"
},
{
"start": 837.1999999999999,
"end": 844.04,
"text": " a second LSTM as an input you see here here here so in this case of these two"
},
{
"start": 844.04,
"end": 849.4799999999999,
"text": " of these six states in the trajectory three are important namely the start"
},
{
"start": 849.4799999999999,
"end": 856.3599999999999,
"text": " the end and this one here where the LSTM decides hey that's important that goes"
},
{
"start": 856.36,
"end": 862.5600000000001,
"text": " into a second LSTM which is generator so this here is an encoder and this here is"
},
{
"start": 862.5600000000001,
"end": 869.28,
"text": " a decoder and what it does is it decodes the sequence of actions right here given"
},
{
"start": 869.28,
"end": 875.28,
"text": " nothing just given this it decodes a sequence of actions and at the end what"
},
{
"start": 875.28,
"end": 880.96,
"text": " you want is that the actions output here reconstruct the actions input this might"
},
{
"start": 880.96,
"end": 887.52,
"text": " sound a little confusing but the core value of this is what you want is to"
},
{
"start": 887.52,
"end": 894.32,
"text": " reconstruct the actions of the trajectory taken given only the important"
},
{
"start": 894.32,
"end": 900.4000000000001,
"text": " states what does this mean in our example in our example here this means"
},
{
"start": 900.4000000000001,
"end": 907.12,
"text": " if I have to go from here to here right and for example I took the following"
},
{
"start": 907.12,
"end": 912.52,
"text": " path this is this so right right down down right this is these were my action"
},
{
"start": 912.52,
"end": 920.16,
"text": " sequence now if I only have the start the end and one state in between let's"
},
{
"start": 920.16,
"end": 927.76,
"text": " say this one right then can I reconstruct what actions were taken and"
},
{
"start": 927.76,
"end": 936.88,
"text": " if I erase the blue thing and I tell you I went from here via here to here then"
},
{
"start": 936.88,
"end": 943.36,
"text": " you could very much reconstruct the actions here so this state here is a"
},
{
"start": 943.36,
"end": 947.88,
"text": " good candidate for being an important state whereas if it were a different"
},
{
"start": 947.88,
"end": 953.48,
"text": " state if it were for example if I told you I went from over here to here and"
},
{
"start": 953.48,
"end": 958.36,
"text": " then to here you'd say well this could be either something like this or it"
},
{
"start": 958.36,
"end": 963.04,
"text": " could be a path like this right it could be many many paths or like this"
},
{
"start": 963.04,
"end": 969.8399999999999,
"text": " could be many paths leading from here to here so this state here is not probably"
},
{
"start": 969.8399999999999,
"end": 977.16,
"text": " not very important so that's kind of how they how they learn which one are the"
},
{
"start": 977.16,
"end": 983.56,
"text": " important state via this encoding trajectories in an LSTM and trying to"
},
{
"start": 983.56,
"end": 991.48,
"text": " reconstruct the state the actions taken in the trajectory given only the states"
},
{
"start": 991.48,
"end": 995.96,
"text": " that were deemed important by the LSTM so that's how you train the LSTM to"
},
{
"start": 995.96,
"end": 1001,
"text": " recognize important states and once you've recognized the important states"
},
{
"start": 1001,
"end": 1008.8000000000001,
"text": " in a trajectory you can then use those to learn prior so basically you ask over"
},
{
"start": 1008.8000000000001,
"end": 1015.8000000000001,
"text": " all possible trajectories which of the states are generally important and"
},
{
"start": 1015.8,
"end": 1022.28,
"text": " that's how you end up with these blue states all right and then the last part"
},
{
"start": 1022.28,
"end": 1028.1599999999999,
"text": " is to connect the blue states and that is fairly easily done in their approach"
},
{
"start": 1028.1599999999999,
"end": 1034.04,
"text": " what they say is all right we have blue states we should be pick one and we do a"
},
{
"start": 1034.04,
"end": 1039.24,
"text": " random walk from it right random walk random walk random walk if we hit another"
},
{
"start": 1039.24,
"end": 1044.6,
"text": " blue state like this one here in the random walk we simply say well there are"
},
{
"start": 1044.6,
"end": 1048.7199999999998,
"text": " probably neighbors so we do this a bunch of times if you hit the blue states of"
},
{
"start": 1048.7199999999998,
"end": 1053.9599999999998,
"text": " course without hitting another blue state first then you connect the two in a"
},
{
"start": 1053.9599999999998,
"end": 1057.9599999999998,
"text": " graph so these would be connected these would probably be connected what we"
},
{
"start": 1057.9599999999998,
"end": 1064.9199999999998,
"text": " ended up at the beginning right you have this graph maybe these two are connected"
},
{
"start": 1064.9199999999998,
"end": 1069.52,
"text": " and so on so this gives you this world graph and now you end up with a set of"
},
{
"start": 1069.52,
"end": 1075.76,
"text": " important states and connections between them that tell you which ones are easily"
},
{
"start": 1075.76,
"end": 1081.8799999999999,
"text": " reachable from each other so you can train the manager on that you can train"
},
{
"start": 1081.8799999999999,
"end": 1087.32,
"text": " the worker as we said before to simply select two close by states train it to"
},
{
"start": 1087.32,
"end": 1093.6399999999999,
"text": " go from one to the other that by the worker will learn that so in essence"
},
{
"start": 1093.6399999999999,
"end": 1099.16,
"text": " that's how they they do it you can look at the experiments themselves they show"
},
{
"start": 1099.16,
"end": 1105.3200000000002,
"text": " that this basically transfers so if you train like this pre train then you can"
},
{
"start": 1105.3200000000002,
"end": 1110.76,
"text": " give more specific and more complicated tasks and this will this will rapidly"
},
{
"start": 1110.76,
"end": 1115.52,
"text": " accelerate the learning of this yeah look at the experiments if you have time"
},
{
"start": 1115.52,
"end": 1129.92,
"text": " that was it for me thank you for listening"
}
] |
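A minimal illustrative sketch (not the paper's code) of the random-walk step mentioned in the transcript above, which connects already-chosen pivotal states into a world graph. All function and parameter names are made up, and `neighbors_fn` stands in for whatever gives the environment's one-step transitions.

```python
# Illustrative sketch: connect chosen pivotal states by running short random
# walks from each pivot and adding an edge whenever another pivot is the
# first one reached.
import random

def build_world_graph(pivots, neighbors_fn, walks_per_pivot=50, max_steps=20, seed=0):
    rng = random.Random(seed)
    pivot_set = set(pivots)
    edges = set()
    for p in pivot_set:
        for _ in range(walks_per_pivot):
            state = p
            for _ in range(max_steps):
                options = neighbors_fn(state)
                if not options:
                    break
                state = rng.choice(options)
                if state in pivot_set and state != p:
                    edges.add(frozenset((p, state)))   # undirected edge
                    break
    return edges

# toy usage on a small open grid world (purely illustrative)
def grid_neighbors(s, size=5):
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    return [(s[0] + dx, s[1] + dy) for dx, dy in moves
            if 0 <= s[0] + dx < size and 0 <= s[1] + dy < size]

print(build_world_graph([(0, 0), (0, 4), (4, 4)], grid_neighbors))
```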
ZAW9EyNo2fw | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Reconciling modern machine learning and the bias-variance trade-off | [
"Science & Technology"
] | [
"machine learning",
"bias",
"variance",
"tradeoff",
"generalization",
"overfitting",
"interpolation",
"parameters",
"model class",
"complexity",
"deep learning",
"neural networks",
"overparameterization",
"erm",
"random fourier features"
] | It turns out that the classic view of generalization and overfitting is incomplete! If you add parameters beyond the number of points in your dataset, generalization performance might increase again due to the increased smoothness of overparameterized functions.
Abstract:
The question of generalization in machine learning---how algorithms are able to learn predictors from a training sample to make accurate predictions out-of-sample---is revisited in light of the recent breakthroughs in modern machine learning technology.
The classical approach to understanding generalization is based on bias-variance trade-offs, where model complexity is carefully calibrated so that the fit on the training sample reflects performance out-of-sample.
However, it is now common practice to fit highly complex models like deep neural networks to data with (nearly) zero training error, and yet these interpolating predictors are observed to have good out-of-sample accuracy even for noisy data.
How can the classical understanding of generalization be reconciled with these observations from modern machine learning practice?
In this paper, we bridge the two regimes by exhibiting a new "double descent" risk curve that extends the traditional U-shaped bias-variance curve beyond the point of interpolation.
Specifically, the curve shows that as soon as the model complexity is high enough to achieve interpolation on the training sample---a point that we call the "interpolation threshold"---the risk of suitably chosen interpolating predictors from these models can, in fact, be decreasing as the model complexity increases, often below the risk achieved using non-interpolating models.
The double descent risk curve is demonstrated for a broad range of models, including neural networks and random forests, and a mechanism for producing this behavior is posited.
Authors: Mikhail Belkin, Daniel Hsu, Siyuan Ma, Soumik Mandal
https://arxiv.org/abs/1812.11118 | Hi there! Today we're looking at reconciling modern machine learning and the bias-variance trade-off by Mikhail Belkin et al. This paper struck me as interesting at ICML when I heard a talk by Mikhail Belkin. The paper is very interesting in terms of what it proposes about modern machine learning. What's the problem? The problem is that they contrast what they call classical machine learning, and how machine learning is classically understood, namely in terms of bias-variance trade-offs, with modern machine learning, where you have for example deep neural networks, which have very different properties. The best way to describe it is probably with an example. Let's say we have four data points. Here is a coordinate system in two dimensions. One, two, three, four. Four data points. Why not? For these four data points we want to fit a function from X to Y, where Y is our target. It's kind of a regression problem. Let's say we have just one parameter which we can use to describe our function. Probably the best thing we could do is something like this, which is a line. The only parameter here is the slope of that line. Our model would be this one line and it would pass basically through the data and would describe the data fairly well, as you can see. If we have two parameters, we can introduce for example a bias term and not have the line at the origin. This line here: now we have the bias, which is the distance to this point, as well as the slope of this line as parameters. So two parameters, and if you look at this line here, it describes the data a bit better than before. It passes kind of through the center of the data. If we go to three or four parameters, it's well known that if I have the same number of parameters as I have data points, I can actually fit the data perfectly. How to do this? It would be something like an order-four polynomial. Let's see if I can draw an order-four polynomial. It needs to go... okay well, that's more than order four, but in any case I can actually fit the data perfectly. Now if you think about all of these functions, let's contrast them, and let's look at what the data distribution probably is. The data distribution is probably, if I fill in the rest of the data that is not in our training set, maybe something like this. So which of these functions generalizes well to this general, unseen data? The first function is actually doing okay. The second function is doing even better, as we saw. If we add a parameter to the first function it gets better, but if we then add more parameters it gets worse. This is kind of taught in current machine learning classes as the phenomenon of overfitting. Whereas here, the function that has the most parameters actually doesn't fit the unseen data well. What is troubling now is that if you think of things like neural networks, modern architectures, they oftentimes have even more parameters than there are data points in the data set. So they can fit the training data perfectly and still have kind of spare room, spare capacity. And these models actually generalize fairly well. This paper asks what's going on here, and what they propose is the following picture. Here we have the classical view of machine learning. On the x-axis is the complexity of H, where H is the model class, the class of all the models you could fit.
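To make the toy example above concrete, here is a minimal sketch with made-up data and an assumed "true" relation: it fits models with one to four parameters to four noisy points and compares training error with error on held-out points. The exact numbers depend on the noise draw, but the classical pattern to look for is test error improving and then worsening as parameters are added.

```python
# Illustrative only: classical bias-variance behavior on four noisy points.
import numpy as np

rng = np.random.default_rng(0)
truth = lambda x: 0.5 * x                       # assumed underlying relation
x_train = np.array([1.0, 2.0, 3.0, 4.0])
y_train = truth(x_train) + 0.3 * rng.normal(size=4)
x_test = np.linspace(0.5, 4.5, 50)
y_test = truth(x_test)

# design matrices: 1 param = slope-only line, 2 = line with bias, 4 = cubic (interpolates)
designs = {
    1: lambda x: np.stack([x], axis=1),
    2: lambda x: np.stack([np.ones_like(x), x], axis=1),
    3: lambda x: np.stack([np.ones_like(x), x, x**2], axis=1),
    4: lambda x: np.stack([np.ones_like(x), x, x**2, x**3], axis=1),
}
for n_params, make in designs.items():
    coef, *_ = np.linalg.lstsq(make(x_train), y_train, rcond=None)
    train_mse = np.mean((make(x_train) @ coef - y_train) ** 2)
    test_mse = np.mean((make(x_test) @ coef - y_test) ** 2)
    print(f"{n_params} parameters: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```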
For example, it would be every linear model with one parameter. This was our first model; the first model would be somewhere here, at complexity one. Then here we'd have complexity two, where we added a parameter, then three parameters, and then four parameters. This is what we saw: at the beginning, with one parameter, we had some training risk (risk is simply another term for loss), so we had some training loss. Then as we added a parameter, the training loss decreased; it got better, and also the test loss on the unseen data decreased, so it got better on the test set as well as we added a parameter. Then as we added more parameters, it was able to fit the training data better and better, going to almost zero risk here, but on unseen data the performance actually got worse again. Again, this is what we teach as overfitting. These authors propose that this picture is incomplete. Namely, the picture actually looks like this, and all we've done so far is look at this left-hand side here. There is a peak here, and this is called the interpolation threshold. The interpolation threshold is roughly at the point where you have as many parameters as you have data points. After the interpolation threshold, if you add even more parameters, the training risk of course stays low, because you can fit the training data perfectly from the interpolation threshold onward. But the test risk actually decreases again. This is really interesting. Let me just preempt this and say this is not due to regularization; it's not because people regularize their models or anything like this. In any case, regularization would actually move you towards less complexity of your model class, because if you regularize, you're no longer able to fit certain models as easily or converge to them. So they propose that this is happening, they give some reasons why this might be happening, and they give some evidence that this is happening. Here is the evidence, and they do this here, for example, with a random Fourier features classifier. What are random Fourier features? They describe them here. If you have a data point X, what you do is push it through many of these feature functions: you sample capital N of these vectors v, and for each vector v you take the inner product with x and take the exponential function of it, and then you aggregate them. These are the random Fourier features, and these here are the weights that you learn. So this is basically a linear classifier, but not on the original features, rather on intermediary features which are fixed for a given random seed. The good thing is that you can decide how many intermediary features you want. The other good thing is that if you let capital N go to infinity, this actually becomes an infinite-dimensional kernel machine: it becomes a kernel SVM with the Gaussian kernel, which operates in an infinite-dimensional space. If you don't go as far, then it's just an approximation to that. So it's a cool model where you can choose how many parameters you want, which makes it a perfect model to explore this phenomenon. What are they doing? They take MNIST and they just apply this model. On the x-axis here is the number of parameters, the number of random Fourier features that they construct, and here you can see the mean squared error on the test set. As you can see, at the beginning the error goes down, as proposed. Then here is probably the sweet spot of classical machine learning.
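Below is a small, hedged sketch of what such a random Fourier feature model could look like in code. It is not the authors' implementation: the paper's features are complex exponentials of the form exp(i⟨v, x⟩) with Gaussian-sampled v, and the real-valued cosine map used here is the standard equivalent form; every function and parameter name is made up.

```python
# Illustrative sketch of a random-Fourier-feature regressor: a linear model on
# top of N fixed random features, fit by minimum-norm least squares.
import numpy as np

def rff_map(X, V, b):
    # X: (n, d) data, V: (d, N) random frequencies, b: (N,) random phases
    return np.sqrt(2.0 / V.shape[1]) * np.cos(X @ V + b)

def fit_rff(X, y, N, bandwidth=1.0, seed=0):
    rng = np.random.default_rng(seed)
    V = rng.normal(scale=1.0 / bandwidth, size=(X.shape[1], N))
    b = rng.uniform(0.0, 2.0 * np.pi, size=N)
    Phi = rff_map(X, V, b)
    a, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # min-norm solution if N > n
    predict = lambda X_new: rff_map(X_new, V, b) @ a
    return predict, np.linalg.norm(a)              # norm of the learned weights
```

Sweeping N from well below to well above the number of training samples is the kind of experiment the plot described above corresponds to; past the interpolation threshold, the least-squares call returns the minimum-norm interpolating weights.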
After that you start to overfit, and it goes up again. There's a giant peak, and then it goes down again. Here, around 10,000 (I think they do it with a subset of MNIST, and around 10,000 is the number of data points they use, or that multiplied by the number of classes, I don't remember exactly), in any case at this number you have the same amount of parameters as data points. After that, the test error decreases again. As you give more and more features, every single classifier on this line is able to fit the training data perfectly, but they successively get less and less error on the test set. You can see it approaches this dotted line here, which is what you get if you perfectly solve the infinite-dimensional problem, if you actually use a kernel SVM to solve this problem; you can see this gives you a lower bound. It really shows nicely that the random Fourier features classifier approximates this as you go higher and higher with capital N: it actually approximates the kernel SVM. It is really interesting that this actually happens in practice. What they also look at here is the norm of the solution. Ideally the norm of the solution would be calculated as the norm in the Hilbert space, but that's hard to compute, so a proxy for it is simply the norm of the weight vector that you learn. As you add more parameters, the norm of the solution at first of course goes up, because you add more parameters, you fit each of them, and they each have some value. It peaks at this interpolation threshold, where you have a really high-norm solution, and after that the norm of the solution goes down again. Again, it approximates the norm of the perfectly solved kernel machine. That's extremely interesting, and it is part of the explanation they give for why this is happening, namely the following. If you have too many parameters, what you might do, with the correct inductive bias, is find a low-norm solution. What does a low-norm solution mean? A low-norm solution means a relatively simple function. As you add parameters, your model is better and better able to find a simple function that describes the training data; not simple in terms of fewer parameters, but simple in terms of how it moves between the training data. Imagine the training data again from before, perfectly fit by this polynomial that we drew with four parameters. If I have many, many more parameters, I can do something like this: I have many parameters, but I can be kind of squiggly, something like this, and this moves smoothly between the training data. It has many parameters, because it has many, many squiggles here, but it's a low-norm solution, and the low norm will cause the solution to be kind of smooth, whereas a high-norm solution that perfectly interpolates the training data would look something like this. So the authors say that if your inductive bias is able to find a low-norm solution that perfectly fits the training data, then that will generalize well. It turns out that modern architectures tend to find low-norm solutions if you train them, for example, with SGD. The combination of many parameters and low-norm solutions will give you a smooth function, and the smoothness of the function is the thing that generalizes to unseen data, because the smoothness kind of ensures that everything in between the data points will be nicely interpolated. So that's the perspective.
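The low-norm-means-smooth intuition can be poked at in a few lines. This is an illustrative sketch only (toy data, made-up names): both solutions below fit the same training points essentially exactly, but the minimum-norm one, the kind of solution the pseudo-inverse or SGD tends to find, wiggles far less between them.

```python
# Compare the minimum-norm interpolant with a deliberately high-norm one that
# fits the same training data (they differ by a direction the training
# features map to approximately zero).
import numpy as np

rng = np.random.default_rng(1)
n, N = 10, 200                                    # 10 points, 200 random features
x = np.sort(rng.uniform(-1, 1, n))
y = np.sin(3 * x) + 0.05 * rng.normal(size=n)

W, b = rng.normal(size=(1, N)), rng.normal(size=N)
phi = lambda t: np.maximum(0.0, t[:, None] @ W + b)   # fixed random ReLU features
Phi = phi(x)                                          # (n, N)

a_min = np.linalg.pinv(Phi) @ y                       # minimum-norm interpolant
null_dir = np.linalg.svd(Phi)[2][-1]                  # direction Phi maps to ~0
a_big = a_min + 50.0 * null_dir                       # same fit, much larger norm

grid = np.linspace(-1, 1, 400)
for name, a in [("min-norm", a_min), ("high-norm", a_big)]:
    fit_err = np.max(np.abs(Phi @ a - y))
    wiggle = np.mean(np.abs(np.diff(phi(grid) @ a, 2)))   # crude roughness proxy
    print(f"{name:9s}  norm(a) = {np.linalg.norm(a):7.2f}  "
          f"train err = {fit_err:.1e}  roughness = {wiggle:.4f}")
```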
They go on from these random Fourier features to neural networks, and what they do here is train a neural network on MNIST with one hidden layer, so there are two weight layers now. Again you can see: the number of parameters here basically means the number of hidden nodes in the hidden layer, and as they increase this, the training and test error first both go down. The training error continues to go down, the test error goes up until the interpolation threshold, and then the test error drops again while the training error stays almost zero. They do the same thing with decision trees and random forests and show the exact same thing: there is this interpolation threshold after which the test error drops, even though the training error is almost zero. To me this is really remarkable, and they show this in the appendix with many, many more experiments, where this phenomenon happens on different datasets and with different architectures, for example random ReLU features and so on. It kind of gives a new perspective on generalization and on why our models generalize so well. They finally conclude with why this has not been seen before, and they give some nice reasons. Basically, models where you can choose the complexity, for example random Fourier features, were originally proposed as an approximation to kernel machines for when you have too many data points and don't want to compute as many features, so they're basically only ever used in the regime where the classical paradigm holds. Neural networks, on the other hand, are often simply made super large, and they say that this peak here that they show is very localized: if you increase your neural network, maybe you try one at this size, this size, this size, and this size, and all you then see is kind of a downward trajectory; you kind of miss this peak, so it leads to the impression that simply, oh, bigger neural networks perform better. Yeah, so I found this interesting, I hope you did as well, and definitely check out more of this group's work. That was it for now, have a nice day. | [
{
"start": 0,
"end": 4.5600000000000005,
"text": " Hi there! Today we're looking at reconciling modern machine learning and"
},
{
"start": 4.5600000000000005,
"end": 11.64,
"text": " the bias-variance trade-off by Mikhail Belkin et al. This paper struck me as"
},
{
"start": 11.64,
"end": 19.92,
"text": " interesting at ICML when I heard a talk by Mikhail Belkin. The"
},
{
"start": 19.92,
"end": 26.28,
"text": " paper is very interesting in terms of what it proposes about modern machine"
},
{
"start": 26.28,
"end": 31.400000000000002,
"text": " learning. What's the problem? The problem is they contrast what they call"
},
{
"start": 31.400000000000002,
"end": 38.760000000000005,
"text": " classical machine learning and how to understand machine learning, namely in"
},
{
"start": 38.760000000000005,
"end": 45.32,
"text": " terms of bias-variance trade-offs, and modern machine learning where it's for"
},
{
"start": 45.32,
"end": 52.24,
"text": " example deep neural networks which have very different properties. Basically"
},
{
"start": 52.24,
"end": 56.72,
"text": " the best way to describe it is probably with an example. Let's say we have"
},
{
"start": 56.72,
"end": 62.28,
"text": " four data points. Here is a coordinate system in two dimensions."
},
{
"start": 62.28,
"end": 73.2,
"text": " One, two, three, four. Four data points. Why not?"
},
{
"start": 73.2,
"end": 83.60000000000001,
"text": " These four data points we want to fit a function from X to Y. Y is our"
},
{
"start": 83.60000000000001,
"end": 90,
"text": " target. It's kind of a regression problem. Let's say we have just one"
},
{
"start": 90,
"end": 95.64,
"text": " parameter which we can use to describe our function. Probably the best"
},
{
"start": 95.64,
"end": 103.72,
"text": " thing we could do is to do something like this, which is a line. The"
},
{
"start": 103.72,
"end": 111.72,
"text": " only parameter here is the slope of that line. Our model would be"
},
{
"start": 111.72,
"end": 117.28,
"text": " this one line and it would pass basically through the data and would"
},
{
"start": 117.28,
"end": 122.52,
"text": " describe the data fairly well as you can see. If we have two parameters now we can"
},
{
"start": 122.52,
"end": 128.48,
"text": " introduce for example a bias term and not have the line at the origin. This"
},
{
"start": 128.48,
"end": 136.07999999999998,
"text": " line here, now we have the bias which is the distance to this point to describe"
},
{
"start": 136.07999999999998,
"end": 141.24,
"text": " it as well as the slope of this line as parameters. So two parameters and if you"
},
{
"start": 141.24,
"end": 146.92,
"text": " look at this line here it describes the data a bit better than"
},
{
"start": 146.92,
"end": 152.48,
"text": " before. It passes kind of through the center of the data. If we"
},
{
"start": 152.48,
"end": 157.44,
"text": " go to three or four parameters, it's well known that if I"
},
{
"start": 157.44,
"end": 164.35999999999999,
"text": " have the same number of parameters as I have data points, I can"
},
{
"start": 164.35999999999999,
"end": 169.28,
"text": " actually fit the data perfectly. How to do this? It would be like an order"
},
{
"start": 169.28,
"end": 177.56,
"text": " for polynomial which... Let's see if I can draw an order for polynomial. It"
},
{
"start": 177.56,
"end": 195.44,
"text": " needs to go... It needs to rip and then... Okay well... No that's... Okay that's more than"
},
{
"start": 195.44,
"end": 202.28,
"text": " order for. In any case I can fit actually the data perfectly. Now if you think"
},
{
"start": 202.28,
"end": 207,
"text": " about all of these functions, let's contrast these. Alright let's contrast"
},
{
"start": 207,
"end": 214.2,
"text": " them and let's look at what is the data distribution probably."
},
{
"start": 214.2,
"end": 219.64,
"text": " Data distribution is probably, if I fill in the rest of the data that is not in"
},
{
"start": 219.64,
"end": 227.48,
"text": " our training set, maybe something like this. So which of these functions"
},
{
"start": 227.48,
"end": 234.6,
"text": " generalize as well to this general data, the unseen data? Probably the first"
},
{
"start": 234.6,
"end": 240.68,
"text": " function not doing very poorly. The first function actually doing okay. The second"
},
{
"start": 240.68,
"end": 247.16,
"text": " function doing even better as we saw. If we add a parameter to the"
},
{
"start": 247.16,
"end": 251.76,
"text": " first function it gets better, but if we then add more parameters it gets worse."
},
{
"start": 251.76,
"end": 255.88,
"text": " This is kind of taught in current machine learning classes as the"
},
{
"start": 255.88,
"end": 261.84,
"text": " phenomenon of overfitting. Whereas here the function that has the most"
},
{
"start": 261.84,
"end": 267.91999999999996,
"text": " parameters actually doesn't fit well. What is troubling now is that if you"
},
{
"start": 267.91999999999996,
"end": 272.47999999999996,
"text": " think of things like neural networks, modern architectures, they actually have"
},
{
"start": 272.47999999999996,
"end": 278.35999999999996,
"text": " even more... They have oftentimes more parameters than their data points in the"
},
{
"start": 278.35999999999996,
"end": 285.12,
"text": " data set. So they can fit the training data perfectly and still have kind of"
},
{
"start": 285.12,
"end": 292.64,
"text": " spare room, spare capacity. These models actually generalize fairly well."
},
{
"start": 292.64,
"end": 299.32,
"text": " This paper asks what's going on here and what they propose is the following"
},
{
"start": 299.32,
"end": 305.76,
"text": " picture. Here we have a classical view of machine learning. On the x-axis is"
},
{
"start": 305.76,
"end": 312.88,
"text": " the complexity of H. You can think of the complexity of the... This is H is"
},
{
"start": 312.88,
"end": 320.6,
"text": " the model class. H is the class of all the models you could fit. For"
},
{
"start": 320.6,
"end": 325.92,
"text": " example it would be every linear model with one parameter. This was our"
},
{
"start": 325.92,
"end": 330.08,
"text": " first model. The first model would be somewhere here one. The"
},
{
"start": 330.08,
"end": 334.84,
"text": " complexity is one. Then here we'd have the complexity of two where we added a"
},
{
"start": 334.84,
"end": 340.32,
"text": " parameter, three parameters and then four parameters. This is what we saw."
},
{
"start": 340.32,
"end": 346.32,
"text": " At the beginning one parameter we had some training risk."
},
{
"start": 346.32,
"end": 351.52,
"text": " Here simply another term for loss. We had some training loss. Then as"
},
{
"start": 351.52,
"end": 358.03999999999996,
"text": " we added a parameter the training loss decreased. It got better and also"
},
{
"start": 358.03999999999996,
"end": 364.52,
"text": " the test loss on the unseen data decreased. So it got better on the"
},
{
"start": 364.52,
"end": 369.38,
"text": " test set as well as we added parameter. Then as we added more parameters it was"
},
{
"start": 369.38,
"end": 374.12,
"text": " able to fit the training data better and better going to almost zero risk here."
},
{
"start": 374.12,
"end": 382.8,
"text": " But on unseen data the performance actually got worse again."
},
{
"start": 382.8,
"end": 387.36,
"text": " Again this is what we teach as overfitting. These authors propose this"
},
{
"start": 387.36,
"end": 392.52,
"text": " is incomplete. Namely the picture actually looks like this and all we've"
},
{
"start": 392.52,
"end": 399.2,
"text": " done so far is look at this left hand side here. Namely that there is a peak"
},
{
"start": 399.2,
"end": 403.92,
"text": " here and this is called the interpolation threshold. The interpolation"
},
{
"start": 403.92,
"end": 408.84,
"text": " threshold is roughly at the point where you have as many parameters as you have"
},
{
"start": 408.84,
"end": 415.15999999999997,
"text": " data points. After the interpolation threshold if you give even more"
},
{
"start": 415.15999999999997,
"end": 419.41999999999996,
"text": " parameters the training risk of course stays low because you can fit the"
},
{
"start": 419.41999999999996,
"end": 425.24,
"text": " training data perfectly from the interpolation threshold forward. But the"
},
{
"start": 425.24,
"end": 431.56,
"text": " test risk actually decreases again. This is really interesting."
},
{
"start": 431.56,
"end": 439.2,
"text": " Let me just preempt this and say this is not due to regularization. It's"
},
{
"start": 439.2,
"end": 443.88,
"text": " not because people regularize their models or anything like this. In any"
},
{
"start": 443.88,
"end": 449.40000000000003,
"text": " case regularization would actually move you to less of a complexity of your"
},
{
"start": 449.40000000000003,
"end": 454.68,
"text": " model class. Because now if you regularize you're no longer able to fit"
},
{
"start": 454.68,
"end": 464.40000000000003,
"text": " certain models as easily or converge to them. They propose that this is"
},
{
"start": 464.40000000000003,
"end": 468.08,
"text": " happening and they give some reason why this might happening and they give some"
},
{
"start": 468.08,
"end": 473.56,
"text": " evidence that this is happening. Here is the evidence that this is happening"
},
{
"start": 473.56,
"end": 481.64,
"text": " and they do this here for example. This is a random Fourier features classifier."
},
{
"start": 481.64,
"end": 486.24,
"text": " What are random Fourier features? They describe them here. If you have a"
},
{
"start": 486.24,
"end": 498.24,
"text": " data point X what you do is you push this through a function which or you"
},
{
"start": 498.24,
"end": 504.15999999999997,
"text": " push this through many of them. You sample capital N of these vectors v and"
},
{
"start": 504.15999999999997,
"end": 510.91999999999996,
"text": " of each of the vectors v you take the inner product and raise it."
},
{
"start": 510.92,
"end": 518.9200000000001,
"text": " Take the exponential function of it and then aggregate them. These"
},
{
"start": 518.9200000000001,
"end": 522.32,
"text": " random Fourier features are the random Fourier features and these"
},
{
"start": 522.32,
"end": 528.44,
"text": " are the weights that you learn. This is basically a linear classifier but"
},
{
"start": 528.44,
"end": 535.5600000000001,
"text": " not of the original features but of intermediary features which are fixed"
},
{
"start": 535.5600000000001,
"end": 540.88,
"text": " for a given random seed. The good thing is here you can sample, you can decide"
},
{
"start": 540.88,
"end": 546.12,
"text": " how many intermediary features that you want. The other good thing is if you let"
},
{
"start": 546.12,
"end": 553.4,
"text": " n go to infinity this actually becomes an infinite dimensional kernel machine."
},
{
"start": 553.4,
"end": 559.84,
"text": " It becomes a kernel SVM with the Gaussian kernel which is operating in"
},
{
"start": 559.84,
"end": 567.12,
"text": " an infinite dimensional space. If you don't go as far then it's just an"
},
{
"start": 567.12,
"end": 571.5600000000001,
"text": " approximation to that. It's a cool model where you can choose how"
},
{
"start": 571.5600000000001,
"end": 578.88,
"text": " many parameters you want. It's a perfect model to explore this phenomenon."
},
{
"start": 578.88,
"end": 585.72,
"text": " What are they doing? They are doing the following. They take MNIST and they just"
},
{
"start": 585.72,
"end": 592.48,
"text": " apply this model. On the x-axis here are the number of parameters and"
},
{
"start": 592.48,
"end": 600.32,
"text": " the number of random Fourier features that they construct. Here you can see"
},
{
"start": 600.32,
"end": 609.4,
"text": " the mean squared error on the test set. As you can see at the beginning"
},
{
"start": 609.4,
"end": 616.5600000000001,
"text": " the error goes down as proposed. Then here is probably this sweet spot"
},
{
"start": 616.5600000000001,
"end": 621.5600000000001,
"text": " of classical machine learning. After that you start to overfit, it goes up again."
},
{
"start": 621.56,
"end": 628.56,
"text": " There's a giant peak and then it goes down again."
},
{
"start": 628.56,
"end": 635.88,
"text": " Here 10,000 I think they do it with a subset of MNIST if I remember correctly."
},
{
"start": 635.88,
"end": 642.04,
"text": " Around 10,000 is exactly the number of data points they use or"
},
{
"start": 642.04,
"end": 648.3599999999999,
"text": " multiplied by the classes. I don't remember correctly but in any case at"
},
{
"start": 648.36,
"end": 658,
"text": " this number you have the same amount of parameters as data points."
},
{
"start": 658,
"end": 665.04,
"text": " After that the test error decreases again. As you give more and more and"
},
{
"start": 665.04,
"end": 670.4,
"text": " more features every single classifier on this line is able to fit the"
},
{
"start": 670.4,
"end": 675.96,
"text": " training data perfectly but they successfully get less and less error on"
},
{
"start": 675.96,
"end": 683.32,
"text": " the test set. You can see it approaches this dotted line here which is if"
},
{
"start": 683.32,
"end": 687.6800000000001,
"text": " you perfectly solve the infinite dimensional problem. If you actually"
},
{
"start": 687.6800000000001,
"end": 694.8000000000001,
"text": " use a kernel SVM to solve this problem, you can see this"
},
{
"start": 694.8000000000001,
"end": 701.1600000000001,
"text": " gives you a lower bound. It really shows nicely that the"
},
{
"start": 701.16,
"end": 706.4399999999999,
"text": " random Fourier features classifier approximates this as you go higher and"
},
{
"start": 706.4399999999999,
"end": 713.9599999999999,
"text": " higher with capital N. It actually approximates the kernel SVM."
},
{
"start": 713.9599999999999,
"end": 718.76,
"text": " This is really interesting that this actually happens in practice. What"
},
{
"start": 718.76,
"end": 724.64,
"text": " they also see here is when they look at the norm of the solution. The norm"
},
{
"start": 724.64,
"end": 733.88,
"text": " of the solution they calculate as basically the norm in the"
},
{
"start": 733.88,
"end": 739.04,
"text": " Hilbert space but they can't because it's hard to compute. A proxy for this"
},
{
"start": 739.04,
"end": 746.68,
"text": " is simply the norm of the weight vector that you learn. The norm of the"
},
{
"start": 746.68,
"end": 752.6,
"text": " solution as you add more parameters of course for first it goes up because you"
},
{
"start": 752.6,
"end": 759.24,
"text": " add more parameters, you fit each of them, they have some value and then"
},
{
"start": 759.24,
"end": 767.96,
"text": " it goes up. It peaks at this interpolation threshold. There you have a"
},
{
"start": 767.96,
"end": 773.84,
"text": " really high norm solution and after that the norm goes down again of the solution."
},
{
"start": 773.84,
"end": 782.72,
"text": " Again it approximates the norm of the perfectly solved kernel"
},
{
"start": 782.72,
"end": 788.36,
"text": " machine. That's extremely interesting and is a part of an explanation they"
},
{
"start": 788.36,
"end": 796.0400000000001,
"text": " give why this is happening. Namely the following. If you have too many"
},
{
"start": 796.0400000000001,
"end": 802.76,
"text": " parameters what you might do with the correct inductive bias is find a low"
},
{
"start": 802.76,
"end": 807.3199999999999,
"text": " norm solution. What does a low norm solution mean? A low norm solution"
},
{
"start": 807.3199999999999,
"end": 813.28,
"text": " means a relatively simple function. As you add parameters your model is"
},
{
"start": 813.28,
"end": 819.64,
"text": " better and better able to find a simple function that describes the training"
},
{
"start": 819.64,
"end": 827.12,
"text": " data. Not in terms of simple of less parameters but simple in terms"
},
{
"start": 827.12,
"end": 833.48,
"text": " of how it moves between the training data. If you imagine the training"
},
{
"start": 833.48,
"end": 844.2,
"text": " data again from before and you imagine it perfectly fit this polynomial"
},
{
"start": 844.2,
"end": 848.48,
"text": " here that we drew with four parameters. If I have many many many more"
},
{
"start": 848.48,
"end": 855.32,
"text": " parameters I can do something like... I have many parameters but I can be"
},
{
"start": 855.32,
"end": 862.6400000000001,
"text": " kind of squeaky but they have... right? So this something like this here I grab"
},
{
"start": 862.6400000000001,
"end": 868.12,
"text": " this here I grab this something like this and this moves smoothly between the"
},
{
"start": 868.12,
"end": 871.6800000000001,
"text": " training data. It has many parameters because it has many many squiggles here"
},
{
"start": 871.6800000000001,
"end": 876.6,
"text": " but it's a low norm solution. The low norm will cause the solution to kind of"
},
{
"start": 876.6,
"end": 883.5600000000001,
"text": " be smooth whereas a high norm solution that perfectly interpolates the training"
},
{
"start": 883.56,
"end": 893.64,
"text": " data would look something like this. So the authors here say if your"
},
{
"start": 893.64,
"end": 900.0799999999999,
"text": " inductive bias is able to find a low norm solution that perfectly fits the"
},
{
"start": 900.0799999999999,
"end": 907.16,
"text": " training data then that will generalize well. It turns out that modern"
},
{
"start": 907.16,
"end": 912.9599999999999,
"text": " architectures tend to find low norm solutions if you train them for example"
},
{
"start": 912.96,
"end": 921.1600000000001,
"text": " with SGD. The combination of many parameters and low norm"
},
{
"start": 921.1600000000001,
"end": 925.96,
"text": " solutions will give you a smooth function and the smoothness of the"
},
{
"start": 925.96,
"end": 932.44,
"text": " function will be the thing that generalizes to unseen data because the"
},
{
"start": 932.44,
"end": 940.1600000000001,
"text": " smoothness kind of ensures that everything in between the data will be"
},
{
"start": 940.16,
"end": 948.7199999999999,
"text": " nicely kind of interpolated here. So that's the the perspective."
},
{
"start": 948.7199999999999,
"end": 955,
"text": " They go on from these random Fourier features to neural networks and what"
},
{
"start": 955,
"end": 961.16,
"text": " they do here is they train a neural network on MNIST with a one hidden"
},
{
"start": 961.16,
"end": 968.88,
"text": " layer. So there's two weight layers now and again you can see as the as the"
},
{
"start": 968.88,
"end": 973.36,
"text": " number of parameters so this means basically the number of hidden nodes"
},
{
"start": 973.36,
"end": 978.04,
"text": " they increase the number of hidden nodes in the hidden layer and as they increase"
},
{
"start": 978.04,
"end": 982.96,
"text": " this the training and test error go down. The training error continues to go down"
},
{
"start": 982.96,
"end": 987.88,
"text": " test error goes up until the interpolation threshold again and then"
},
{
"start": 987.88,
"end": 994.48,
"text": " the test error drops again while the training error continues to be almost"
},
{
"start": 994.48,
"end": 1005.32,
"text": " zero. They do the same thing with decision trees and random forests and"
},
{
"start": 1005.32,
"end": 1011.28,
"text": " show the exact same thing that there is this interpolation threshold after which"
},
{
"start": 1011.28,
"end": 1021.16,
"text": " the test error drops even though the training error is almost zero. To me"
},
{
"start": 1021.16,
"end": 1026.12,
"text": " this is really remarkable and they show this in the appendix of many many more"
},
{
"start": 1026.12,
"end": 1031.68,
"text": " experiments where they show this phenomenon happening on different"
},
{
"start": 1031.68,
"end": 1040.32,
"text": " datasets and on different architectures here random ReLU features and so on and"
},
{
"start": 1040.32,
"end": 1046.6399999999999,
"text": " it kind of gives a new perspective on generalization and why our models"
},
{
"start": 1046.64,
"end": 1055.8400000000001,
"text": " generalize so well. They finally conclude with why has this not been seen yet and"
},
{
"start": 1055.8400000000001,
"end": 1065.1200000000001,
"text": " they give some nice reasons basically that for example models where you can"
},
{
"start": 1065.1200000000001,
"end": 1072.8400000000001,
"text": " choose the models where you can choose the the complexity for example random"
},
{
"start": 1072.84,
"end": 1079.1999999999998,
"text": " Fourier features are originally proposed as an approximation to kernel machines"
},
{
"start": 1079.1999999999998,
"end": 1082.9199999999998,
"text": " if you have too many data points and don't want to compute as many features"
},
{
"start": 1082.9199999999998,
"end": 1088.08,
"text": " so they they're basically only ever used in this regime where the classical"
},
{
"start": 1088.08,
"end": 1094.6399999999999,
"text": " paradigm holds and the neural networks in the other hand often are simply made"
},
{
"start": 1094.6399999999999,
"end": 1102.1599999999999,
"text": " super large and they say this peak here that they show is very localized and you"
},
{
"start": 1102.16,
"end": 1105.6000000000001,
"text": " might if you increase your neural network maybe you try one at this size"
},
{
"start": 1105.6000000000001,
"end": 1111.2,
"text": " this size this size and this size and all you then see is kind of a downward"
},
{
"start": 1111.2,
"end": 1116.0400000000002,
"text": " trajectory you kind of miss this peak so it leads to the impression that simply"
},
{
"start": 1116.0400000000002,
"end": 1124.48,
"text": " oh bigger neural networks perform better. Yeah so I found this interesting I hope"
},
{
"start": 1124.48,
"end": 1130.8400000000001,
"text": " you did as well and definitely check out more of this group's work. That was it"
},
{
"start": 1130.84,
"end": 1134.52,
"text": " for now have a nice day"
}
] |
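To make the random Fourier features experiment from the transcript segments above concrete, here is a minimal sketch in Python using only NumPy. It is not the speakers' or the paper's code: the toy regression data, the feature counts, and the bandwidth are assumptions chosen only to show how the minimum-norm ("low norm") fit behaves as the number of features N passes the number of training points.

import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, n_features, sigma=1.0, seed=0):
    """Map X (n_samples, d) to N random Fourier features approximating an RBF kernel."""
    r = np.random.default_rng(seed)  # same seed for train and test, so W and b match
    d = X.shape[1]
    W = r.normal(0.0, 1.0 / sigma, size=(d, n_features))
    b = r.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Toy 1-D regression data standing in for the MNIST subset mentioned in the talk.
X_train = rng.uniform(-3, 3, size=(200, 1))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=200)
X_test = rng.uniform(-3, 3, size=(500, 1))
y_test = np.sin(X_test[:, 0])

for n in [10, 50, 200, 1000, 5000]:  # 200 is the interpolation threshold here
    Phi_tr = random_fourier_features(X_train, n)
    Phi_te = random_fourier_features(X_test, n)
    # lstsq returns the minimum-norm solution in the overparameterized regime,
    # which is exactly the "low norm" inductive bias discussed in the segments above.
    w, *_ = np.linalg.lstsq(Phi_tr, y_train, rcond=None)
    mse = np.mean((Phi_te @ w - y_test) ** 2)
    print(f"N={n:5d}  ||w||={np.linalg.norm(w):8.2f}  test MSE={mse:.4f}")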
l8JeokY5NsU | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Conversation about Population-Based Methods (Re-upload) | [
"Science & Technology"
] | [
"machine learning",
"ai",
"artificial intelligence",
"open ended learning",
"quality diversity",
"conference",
"icml",
"icml2019",
"tutorial",
"population-based search",
"goal switching",
"serendipidy",
"evolution",
"interview",
"podcast"
] | Being interviewed by Connor Shorten of Henry AI Labs (https://www.youtube.com/channel/UCHB9VepY6kYvZjj0Bgxnpbw) on the topic of population-based methods and open-ended learning.
Tutorial: https://www.facebook.com/icml.imls/videos/481758745967365/
Book: https://www.amazon.com/dp/B00X57B4JG/ | Hi there, I've recently been interviewed by the YouTube channel Henry AI Labs by Connor Shorten, and what follows is the resulting conversation we had about population-based methods and open-ended learning, things like that, basically topics of the ICML tutorial that we both saw. It's important to note that none of us is really an expert on the topic, but we are trying to make sense of it and mainly just kind of talking about the ideas. So please enjoy the conversation with Connor Shorten, definitely check out the Henry AI Labs channel, and now have a good time. Thanks for watching the Henry AI Labs deep learning podcast today. I'm joined with Yannic Kilcher. Yannic works in the data analytics lab at ETH. He has a great YouTube channel, I really enjoy watching his paper summary videos. If you like any of the videos that I'm making you'll definitely also like checking out this channel, I'm gonna put the link in the description at the end of the talk. So Yannic, thanks for doing this with me. I really appreciate it. Thanks for having me. It's cool. So what we're gonna talk about is population-based search and a presentation at ICML that I really thought was interesting about emphasizing diversity and novelty in search. So the first question, I just wanted to start by generally talking about your opinion on population-based search and the differences between population-based search and gradient descent going straight for one solution. Yeah, so the kind of main difference is that in population-based search, as the name implies, you maintain kind of a large population of solutions. So you don't want to limit yourself to just one trajectory, say I start here and then I run towards my goal, but you kind of maintain a lot of hypotheses of what the solution could be, and then you kind of want to update all of them at the same time. So there's many different variants of population-based search, but they all have this thing in common where you maintain many solutions and you kind of bet on one of them becoming a good one, basically. Yes, so one other thing they present in their paper, where they have the robot walking, and if it breaks one of its legs, for example, it can go back to the MAP-Elites table and say, okay, well, I've lost this leg, but I think maybe this solution... I wasn't too clear on how that would really be related, so I was maybe wondering if you had more insight on that. Yes, so maybe the context is: you want to teach a robot to walk, and the robot had six legs, I believe. And if you think of what's the solution to the problem, a solution is kind of an algorithm that takes the current sensor input and outputs how to move the motors, right? So if you just have, say, your gradient descent algorithm converging on the best solution of how to move the robot,
it's just going to be like, oh, these are the sensors, okay, I'm gonna move like this, like this, like this. But if one leg breaks, of course, you're lost, because you only know this one way of moving, basically, and that's it. But in population-based search, if you think of the solution as a way to move, you maintain many, many, many ways to move. So basically the objective, if you can call it like this, is: algorithm, find me a lot of different ways to move with my six legs. And now I still can evaluate all of them, I still can find, okay, which one's the best. But if now one of them, one of the legs, falls away, I have all these other solutions that I can try, right? So then what they would do is, like, this leg falls away, now they just re-evaluate all of those solutions while only having five legs, and the best of those is much more likely to kind of work than if you had just your single solution. So that's why it's population-based, because you maintain many different ways of solving the problem. Yes, I was also thinking about, like, using the search algorithms that control neural architecture search and things like that. So I'm trying to think of how you might extend these ideas from the robot walking with six legs to the RNN controller designing the convolutional network. But, like, maybe I might have more of a storage constraint and more of a latency constraint, and I could jump to a different solution like that. I'm just wondering how you think these ideas of population-based search translate into neural architecture search, and specifically if it really is important, because I feel like in neural architecture search you have such a direct signal with the classification accuracy, like I don't see as much variance in the objective function. Yeah, I really think these population-based approaches, they shine in multiple different areas, but one area where they shine is definitely when the environment changes. So when something about your input changes, like the robot losing a leg. So in kind of neural architecture search, you might find these methods working if you then go for, let's say, transfer learning. So you kind of train your network on one task, but you want to actually do it in another task, right?
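As a rough illustration of the damage-recovery idea described above (keep many walking controllers around, and when a leg breaks, re-evaluate all of them on the damaged robot), here is a minimal Python sketch. The Controller encoding, the fitness function, and the broken_legs handling are all invented placeholders standing in for a real simulator; this is not the method from the paper being discussed.

from dataclasses import dataclass
import random

@dataclass
class Controller:
    params: list  # parameters of some gait policy; the encoding here is hypothetical

def evaluate(controller, broken_legs=()):
    """Stand-in for running the controller on the (possibly damaged) robot and
    returning walking speed. A real setup would call a physics simulator here."""
    penalty = 0.3 * len(broken_legs)
    return sum(abs(p) for p in controller.params) % 1.0 - penalty  # toy fitness only

# Maintain many diverse solutions up front instead of a single converged one.
population = [Controller([random.uniform(-1, 1) for _ in range(6)]) for _ in range(100)]

# Undamaged robot: just use the best stored behavior.
best = max(population, key=lambda c: evaluate(c))

# Leg 3 breaks: re-evaluate the whole population under the new conditions
# and switch to whichever stored behavior still works best.
best_after_damage = max(population, key=lambda c: evaluate(c, broken_legs=(3,)))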
And then if you maintain many solutions and you can evaluate all of them in this transfer setting, it's much more likely that one of them is gonna be fine. But you're right, I also believe that directly in architecture search maybe it doesn't yield that many great results. Though the other area, of course, where these methods shine, and this is with respect to algorithms like novelty search, which can be implemented as a population-based method, is this: they gave a really good example of deception in a search problem. So a deception would be, like, if you have a robot walking a maze and the robot just wants to get to the goal, right, and you would program the robot to be rewarded the closer it gets to the goal. But if there's a wall in between and you actually need to go around the wall, then for a while you would need to move away from the goal in order to reach it. So if you have a pure objective-driven approach, you just go straight to the goal, you would always get stuck at the wall. But if you then do what is called a novelty search, where you basically reward the robot for things it has never done before, it would actually find its way around the wall. So you can maintain a population of solutions that all kind of explore the space. And in neural architecture search, maybe it's of a benefit that, actually, you know, I probably always benefit from adding more layers or neurons or something like this, but maybe I actually want to prune some stuff first and then add some more stuff. So maybe I want to get worse first before I can get even better, right? So this is an area where I can imagine that happening, but I don't know. Yeah, I was thinking the changing environment, I definitely think, like, when you deploy a model and then you're getting new data, you could frame that as a changing environment. And then also I was thinking about it in the context of GANs, which is something that I think is really interesting: the discriminator classifying the generator samples, it's a changing environment because of the generator's updates. So maybe having some kind of population-based GAN or discriminator model might help it avoid that, like, continual learning problem, I guess, is sort of the idea. Yeah, that might very well be. There are approaches to GANs, I believe, where you basically have many discriminators and each one only has, let's say, its own limited view on the data, and you're trying to fool a lot of them at the same time. But it's not the same thing. But yes, I think that might make sense. Yeah, I've seen that multiple generator, multiple discriminator model too, I think that's really interesting as well. So then one other thing I was curious about is this idea of goal switching and how that might relate to, like, AutoML and our existing, more heavily studied things like classification, localization, semantic segmentation. Like, how do you think goal switching could be important? Like, one idea I had is maybe if you've got multi-class classification and it's got a really low false positive rate or something on one class, you might say, well, you've somehow learned a decision boundary on that class. Or do you think that wouldn't generalize, and that there's no sense in goal switching in a multi-class classification problem?
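A small sketch of what the novelty search described above could look like in code, under several assumptions: the maze, the policy rollout, and the behavior descriptor (the robot's end position) are toy stand-ins, and the selection scheme is deliberately simplified. The one load-bearing idea is that individuals are scored by how far their behavior is from previously archived behaviors, not by distance to the goal.

import numpy as np

def novelty(behavior, archive, k=5):
    """Mean distance to the k nearest previously seen behaviors (end positions)."""
    if not archive:
        return float("inf")
    dists = sorted(np.linalg.norm(behavior - b) for b in archive)
    return float(np.mean(dists[:k]))

def run_policy(params):
    """Placeholder: roll out the policy in the maze and return where the robot ends up."""
    return np.tanh(params[:2])  # pretend the first two parameters decide the end position

rng = np.random.default_rng(0)
archive, population = [], [rng.normal(size=8) for _ in range(20)]

for generation in range(50):
    behaviors = [run_policy(p) for p in population]
    scores = [novelty(b, archive) for b in behaviors]
    # Keep the most novel individuals and mutate them: ending up somewhere new is
    # rewarded once, repeating an already-seen behavior is not.
    order = np.argsort(scores)[::-1]
    parents = [population[i] for i in order[: len(population) // 2]]
    archive.extend(behaviors[i] for i in order[:3])  # remember what has been done
    population = [p + 0.1 * rng.normal(size=8) for p in parents for _ in range(2)]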
So yeah, in general, well when you think of goal switching in general How they introduced it was also in the context of like this population based search of these map elites Maybe it's kind of so what map elites the algorithm does basically is it says Okay, I have a number of dimensions that I could solve the problem on and they introduced Okay, let's take life on earth needs to whatever survive So I can either be like a super tall creature right to reach food that no one else can reach I could be a super fast creature right to kind of run away from everything Or it can be a super heavy creature so that no one can attack me And so these are kind of the dimensions that you can solve the problem of reproduction and survival And within so what map elites does it it would segment this area So let's say size and speed it would segment this into a grid And then in each grid it would kind of maintain the best solution so far that is within that grid And then what they see is when they then kind of evolve this over time and improve each each grid is that Inventions let's say inventions algorithm discoveries in one grid say for a very fast creature They would then kind of be adapted to the very let's say the very heavy creatures so like fast creature Kind of discovers or longer legs make me even faster Maybe the longer legs can be then be combined in the heavy creature to do something else So this kind of goal switching it's think of like feathers being first kind of developed or evolved for warmth For temperature regulation then being goal switched over to adapt it for flight So in the in terms of multi class classification I guess it's a bit of a different problem if you just have one classifier You can definitely make the argument that since you know you're learning maybe to classify one class really well The low false positive rate you have learned very good features for that class And if some other class kind of like the zebra is a horse with stripes and then the horse is a horse But with the feature stripes being really low you can probably classify that better or something making stuff up here But it's a bit of a different context I feel the if you have a single classifier do multi class classification But definitely the logic applies in the feature space I would say where you learn features for one class and they might become useful for another class Yeah I had this other thoughts sort of when you're discussing that is like what about like multi class multitask learning Like maybe my intermediate features get mapped to a classifier get mapped to a segmentation get mapped to again Like could goal switching improve multitask learning Yeah I would definitely say so I think that that's exactly what we're seeing when you look at for example pre training So if you think of like these wherever these newest big language models like BERT or something they're really good at tasks I don't know what it was an NLP task labeling of sentiment sentiment classification is the classic right If they evaluate on that because it's so easy but let's say BERT is really good at sentiment classification But if you were to just to train it out right on sentiment classification it's probably not going to work because there's just too little signal But then what happens is you pre train it as a language model as this masked language model and it kind of gets really good at simply comprehending language And that skill can then be kind of adapted over into the into the cement sorry into the sentiment classification 
realm So I think if you look at something like pre training or multitask as you say then definitely one tap what the addition of a task might give rise to certain features That then all of a sudden can be adapted by another task whereas if you just trained the latter task by itself that maybe would have been too difficult So yeah there's definitely an analogy so then what I think about is so I'm going from my pre training language model into sentiment classification And maybe I also add like question answer during document summarization named entity like this like vector of tasks that it can go do I'm then curious like when your goal switching it's like how do you then combine the features later on or do you just like take it as if I need this task I'll go to this model like yeah Well the question here is do you whether or not you implement this as a single model and kind of refer to the goal switching of features within that model Or whether you also do this now as a population based method where basically you maintain you you maintain different neural networks for different combination of these tasks Then you'd actually need a method to kind of combine and reproduce the neural networks themselves which I yeah I see that's that's going to be a bit of a difficult task Like some cross distillation or some something crazy yeah I don't know how that will work exactly Yeah I just wonder about two things it's like do for my population based search could you have like the weights be the population like different sets of weights Or would it necessarily need to be like taking apart the layers and designing new internal like cells as in the architecture search like Because if I just have the weights maybe I could treat the diversity search or goal switching as like stochastic weight averaging and just like mesh them all together when I'm finished with my goal switching at the end But if it's yeah it's definitely be if you wanted to if you yeah if you wanted to if you wanted to implement your multi task multi task tasking as a population based approach Where yeah you could def it would definitely give you an easier time if you keep the architecture of your neural networks the same and simply have different weights And then you could indeed consider something like weight averaging or or yeah I guess a more modern approach will be like distillation from the two teacher models into one child model It's actually a good metaphor for a for reproduction kind of a distillation from multiple teacher model don't know if anyone's done that yet but yeah I guess that that might be the way to do it if you also maintain different architectures for different problems that might be a bit of a yeah Yeah that's an interesting thing too if you have the goal switching and then you model distill it all into one model that is yes Well if you think of map elites right you'd simply you'd simply distill it into the appropriate I don't even know what the what the axis would be probably I can imagine okay you have like three tasks so you have three axis and then you'd mix the task maybe in accordance on how far up your of these axes you are or something like this It's not exactly map elites because your actual objectives are on the axis but I don't know Yes pretty cool so just to backtrack one step I want to talk about like diversity centric search novelty like when I was thinking about that I was like can't you just initialize it such such that it has maximum diversity like can't you just initialize the population such that 
they're all like uniformly spaced and then search locally from there So I just wonder what you think on that and how this is different from that So yeah in these in these diversity search algorithms basically what you're you're doing is your your only goal is or your main goal depends on the algorithm but let's say your only goal is to find diverse behaviors or diverse solutions diverse whatever I think the main problem with that is is that the search space is so extremely large That you're going to have a hard time even even defining what a kind of a uniform distribution is because it's such a high dimensional space that even if you sample uniformly it's it's almost empty like you're almost right you're not you're not getting anywhere because you have finite finite computer you need to implement an algorithm Even if you even if my computer can hold a hundred thousand different members of a population in high dimensions that is nothing right so to me yet the initialization might be definitely important But I don't think you'll you'll get around some sort of iterative procedure and going around weeding out weeding out things such that you have space for interesting things because ultimately what you want to find is something interesting In the robot maze example the novelty search basically is here is a robot you started right and then you want to do something that you haven't done yet right so if the robots crashes into a wall the first time that's a good thing you say oh cool you haven't done that yet But if it crashes into the wall the second time you're like you've done that already right so you you you basically need a measure of saying how close to behaviors are but if the robot has crashed into every wall once the only thing it can do if it wants to do something new is actually go around the wall and then you're like oh cool you've done something new But the space of behaviors often is so large that you can't simply enumerate all the all the behaviors so you I think that's the main problem why you can't just make it diverse from the beginning Yeah when I think about that I was thinking that maybe the like reward function if you're like navigating the maze it needs to be more refined so like if it crashes into the wall that needs to be like I don't know plus three some some like unique signal I feel like in order to create that kind of because like Thinking of if it's just like reward zero everywhere but one if you hit that finish line and then maybe some kind of like discounting for how long it takes you to get there is like I don't see how it could interpret that it's done a new behavior if all it has is it so to me it feels like it's all about the design of the reward space now to implement such a thing Yes absolutely so the that the definitely if you wanted to do novelty search you would need to implement a measure of how close to behaviors are so there's no way around and I think that's kind of crux of the of this method is that by specifying how close to behaviors are so what what constitutes novelty and what doesn't You already implicitly kind of telling the robot something about the nature of of the world so I think that the kind of the objective because they now say oh we don't give the robot the objecting of reaching the target we simply give it the objective of not doing the same thing twice I think the kind of objective sneaks in Like again through the specification of how of you how close are to be a risk but definitely this is just kind of a really simple example of what 
they want to say is that these methods really become important when you have ambitious objectives. In the maze we can all agree: if we just design the reward, crashing into walls bad, going around walls good and so on, you don't have to only reward going straight to the goal, and then it's easy, right? But for really ambitious objectives, like, I don't know, flying, reaching the moon in the 1960s, designing general AI, curing cancer and so on, we don't actually know how to design the reward, right, because we don't know which steps need to be fulfilled in order to fly to the moon. I guess now we do in hindsight, but we couldn't have predicted it. We don't know which steps need to be discovered in order to cure cancer, and it's very, very probable, if you look at history, that the fundamental discoveries that lead to us curing cancer will not directly come from cancer research. That's their entire point, right? It's not like you can have a goal and go straight towards it. If it's a really ambitious goal, very probably the solutions will come in part from extremely unrelated fields, and you kind of have to make advances everywhere in order to solve that problem. So to the question of whether it's all about designing the reward: yes, but we would have to know how the reward must look, and for these really ambitious objectives we don't. And that's where they argue, well, the best thing you can actually do is to just explore, and you just find interesting things along the way, and you kind of hope that these interesting things will, you know, combine to form new interesting things, right? But you just don't know where you're going to end up. Yeah, I guess maybe you could just keep track of, like, the trajectory of states and use that as your signal of novelty. But then I think, like, if you've got a robotic arm with X degrees of freedom, the state space would be too enormous to really say, oh, this was a significantly different sequential procedure of states than this other thing. So then the next thing. Yes, I think this is a good transition into their Picbreeder experiment. And so, for anyone who listens to this who hasn't watched their talk, Picbreeder is like this: they've got these generator neural networks with sets of weights, and they have humans go on and pick two of the generated images to blend together and derive a new image. And this repeats on and on until it goes from, like, just a spiral pattern into a skull face drawing or a butterfly drawing or something like that. So this idea is supposed to represent open-endedness in an environment, and I just found it to be really interesting. I think it's one of the things in their talk where you look at it and you're like, oh, it's interesting, what is going on here. But it's like the mutation is really guided by the human search, which is so complex, I feel like. I was just wondering what you thought of that Picbreeder experiment. Yeah, it's really cool. And it's actually the basis for their entire book. I've read the book, Why Greatness Cannot Be Planned, I believe is the title.
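For reference, here is a minimal sketch of the MAP-Elites bookkeeping that was described earlier in the conversation: discretize a couple of behavior dimensions (say size and speed for the creatures, or leg usage for the robot) into a grid and keep only the best solution found so far in each cell. The evaluation function, the behavior descriptor, and the mutation scheme below are toy assumptions, not the authors' implementation.

import random

GRID = 10  # cells per behavior dimension

def evaluate(params):
    """Placeholder returning (fitness, behavior descriptor in [0, 1]^2)."""
    fitness = -sum((p - 0.5) ** 2 for p in params)
    behavior = (params[0], params[1])
    return fitness, behavior

def cell_of(behavior):
    return tuple(min(int(b * GRID), GRID - 1) for b in behavior)

archive = {}  # cell -> (fitness, params): one elite kept per behavioral niche

for it in range(10_000):
    if archive and random.random() < 0.9:
        # Mutate a randomly chosen existing elite; this is where goal switching
        # happens, because an invention in one cell can seed improvements in another.
        _, parent = random.choice(list(archive.values()))
        params = [min(1.0, max(0.0, p + random.gauss(0, 0.05))) for p in parent]
    else:
        params = [random.random() for _ in range(4)]
    fitness, behavior = evaluate(params)
    c = cell_of(behavior)
    if c not in archive or fitness > archive[c][0]:
        archive[c] = (fitness, params)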
But so they actually kind of start out with this as a motivational example of: what if the only goal is to do something interesting, without any objective? So all you do is kind of choose slight variations on the current picture, and you see what you end up with. And I thought it illustrates their points extremely well. It illustrates, for example, goal switching, in that if you were done with your sequence of image manipulations you could then save it into the database and someone else could pick it up and kind of continue it. And since every human finds slightly different things interesting, right, you could take someone else's final result and say, ah, you know, that kind of looks weird, but then your modifications to it will be different than if that human had continued breeding the picture. So what you end up with is, and they show this, for example, one picture ends up being a car, and it had been adapted from an alien face, where the eyes of the alien face became the wheels of the car. And so the first person might have been like, oh, this looks more and more like an alien face, I'm going to make it more like an alien face, and then the second person is like, oh, that kind of looks nice, I'm going to modify it in a different direction. So they basically give this example of: if you have an ambitious goal, like getting to a car just from these very simple picture generation networks, then the stepping stones to get there have nothing to do with cars, and the people that did it didn't have a car in mind while going there. And the second thing is that if you try to get a car from the beginning, I believe they've done this, you can't. Like, the sequence of things that you have to go through is so complicated and convoluted that if you were to try to end up with a specific result, it's basically impossible. So these kind of illustrate their points very, very nicely. And I mean, it's a cool experiment in itself, but they use it kind of as a base metaphor for them to jump off from. Yeah, I just think it's so interesting, this idea that you can't design a car if you try for it directly, you just have to happen to come across it. It's sort of like, I think about if I was to fire up GarageBand and start trying to make a song, it's like I don't know exactly what it's going to sound like, I'm just going to kind of explore until I come across something. So then I was thinking about, like, with GANs and the way that GANs design images. So this is sort of a design I drew up that I'm curious what you think of. It's like, what if the generator just tries to make some object, and then a pre-trained classifier says, oh, I think it looks like this maybe, and then you send it to, like, a refining network. So the GAN just sort of searches for objects and then some classifiers are like, oh, I think this looks like a skull or whatever, sort of like how the Picbreeder users do, so I'm going to try to refine it now. Do you think that would be an interesting thing? You'd have like a two-stage process: first you do something general and then it gets classified, and then you'd have a special generator just for the skull class and a special discriminator just for that. Yeah, I don't see why not. It might be hard to get the first generator to be sufficiently diverse, so you might need some kind of discriminator signal even at the beginning.
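A toy sketch of the Picbreeder-style loop just described, purely to make the mechanics concrete: a small image-generating function, a human repeatedly picking the variant they find most interesting, and results saved so someone else could branch from them later. The render function is a crude stand-in (the real system uses CPPNs), and ask_human is a placeholder for the web interface; none of this is the actual Picbreeder code.

import numpy as np

rng = np.random.default_rng(0)

def render(weights, size=32):
    """Tiny stand-in generator: map (x, y) coordinates through a fixed-form network."""
    xs, ys = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size))
    h = np.tanh(weights[0] * xs + weights[1] * ys + weights[2] * xs * ys)
    return np.sin(weights[3] * h)

def ask_human(images):
    """Placeholder for the human-in-the-loop choice of the most interesting picture."""
    return int(input(f"pick 0..{len(images) - 1}: "))

genome = rng.normal(size=4)
published = []  # shared archive that other users could branch from

for round_ in range(20):
    children = [genome + 0.2 * rng.normal(size=4) for _ in range(6)]
    images = [render(c) for c in children]
    genome = children[ask_human(images)]
    published.append(genome)  # goal switching: someone else may take this somewhere new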
So yes, I mean, you're like, how do you think the pig breeder experiment could become fully automated such that there's no human in the loop? Yeah, that's that's a thought I had as well, because to me it seems that the kind of, of course, the resulting pictures, the fact that they look like human objects or recognizable objects is a result from them being being bred by humans. Like the fact that it looks like a car or a skull or something like this is is very much. But also, I guess that that could be abstracted in. We just not expect the results to be like human recognizable objects, but maybe something else. The much more deeper construction in pig breeder is the fact that the measure of interestingness is provided by the humans. Right. So the humans, they they click on a picture and then they get variants of that picture and they click on the one that they most like. This this sense of interestingness of I like this one is that's what's that's the fundamental core that's provided by the humans as an input to the system. That's what drives the entire thing. That's exactly the same as before. It's when you write when you teach the robot which two behaviors are close enough, like, oh, no, that's too close to before. That's not novel. Or yes, that's sufficiently different than before. That is novel. Right. This this sense is somehow you either need to specify it or you need to have the human in the loop to provide it. I feel it's very, very hard to capture that in an algorithm as as of today. Yeah, like something I think about is like maybe I'd have like my thousand class image net classifier and then maybe I'd have like like a style classifier, like a neural style transfer network that I've like chopped off the like some intermediate feature. I'm going to take that as my style. And so maybe I'm like classifying. I think it's like an airplane. And then I kind of like this style for it. That's sort of like my like how I would think about trying to automate that. Like, I don't know, I guess, like, I don't know if I I guess it's interesting. But I also feel like when you're doing the pick reader, you're kind of like, oh, I'm going to try it now. Now that I see this vision, I'm going to try to make it like look like that now, I suppose. Like, yeah, yeah. I think I could mold this into a skull and then you start doing. Yes, yes, they're very much so they're not they're not advocating random exploration. What they're advocating is basically if you have an ambitious goal, then you basically don't know the stepping stones. But from stepping stone to stepping stone, that's where objectives are very handy. So when you want to say I this already kind of looks like something, I want to make it more like that. I want to make it more into a skull. Right. It already has like two circles and kind of the shape. But I'm going to drive it there. That that is very that can be very objective driven. But in the grand scheme of things, you don't know. Then once you have the skull, someone else can develop that into an even new thing. So, yeah, indeed, if if you if you are in kind of a local search in this space, then an objective driven behavior like what you're saying, like I want to make it as much this as possible. That's very that's actually a thing they're advocating for. But then from their end result, yeah, you would need to then restart again, do the same thing with like something else. Huh? Yeah, it's really interesting. 
Just thinking about, yeah, I think about like the stepping stones and like is how would you define the space of stepping stones to such a to any kind of thing? I guess it's like you could still design some kind of maybe it's discrete or maybe you have some kind of signal you can get back from it. And I guess it's just a lot to think about. Directly, I think they give this they give this great analogy. I feel like if you have a really ambitious objective, it's like crossing a lake, but the lake is covered in fog. So you basically can't really see very far, but you can always kind of see the next stepping stones. Right. And you can then you can then try to go from stepping stone to stepping stone, but you don't know which one to take if there's like a fork. There's two ways possible. You don't know which one. Right. So all you can do is basically go the most interesting one. And they relate this to scientific research. So, yeah, if we want to accomplish some really great research goal, like artificial general intelligence, we don't like we don't know. But we can see the next stepping stones. Right. We can see, oh, from what we have right now, what interesting combination could we make that still kind of it still kind of makes that's not total garbage. Right. So in the local search, I can try to say I want to I don't know. I want to do this. I want to do multiple generators and multi stage and then this thing. Right. This this is kind of a stepping stone and maybe that will then lead to something more interesting and so on. So, yeah, that's that's kind of how they relate. I like this metaphor of the lake. Yeah. Yeah. I just like could like a meta controller try to put the stones down and then the objective is or is the space too enormous that that idea of having a meta controller guide the stepping stone placement is too big. The stepping stone placement is just like absurd in that and there's no way that that would work. That's sort of where I'm thinking with this now is like. So they actually that's that's exactly the question. Right. Of what I so I believe you need such a meta whatever because the space is too large. You somehow need a way to choose the stepping stones in the first place. Right. You somehow need a way to do this. Now, what they're saying is that if you're if your goal is really ambitious, then a meta controller that simply wants to reach the goal is bad because right because what we discussed before, you might need a lot of inventions from other fields in order to make goal happen. And if you simply go your field maximum power towards your goal, that's not going to happen. Now, if your meta controller is actually just something that wants to produce interesting things, then that's actually something they advocate for. That is exactly what their algorithms are trying to capture. They're trying to capture locally. Yeah, we want to get better at a particular thing. What those particular things are and the order of these that should be novelty driven instead of goal driven. Yeah, yeah. Yeah. The interesting component. I guess I'm sort of biased towards liking the objective design. And now I'm thinking like, OK, well, let's abstract those meta controllers one level up and have a meta meta controller and just repeat this and hierarchy makes sense. 
And that if you if you if you're if you're a bit cynical, that is what you will also hear out of here out of and they have to argue in the in their book a lot against that like isn't the question isn't the kind of isn't the implementation of a meta controller that just searches for novelty in itself. And that's the objective again. And then they give some good reasons why actually you don't. It is different. It's more like a constraint on your search. If you think of natural evolution, for example, it isn't really doesn't really have an objective. You think reproduction and survival is the objective of natural evolution. It doesn't really the good the good reason they give is the objective has already been fulfilled by the very first organism to ever live. Right. Why didn't it stop there? Why didn't it stop very first cell? OK, done. We've fulfilled the objective. It's more of a it's more of an actually a constrained optimization where the constraint is you need to be able to survive. That's kind of the minimum bar of to being on this planet. And then I'm saying constrained optimization, but it's it's not it's not an optimization. It's more of like a constraint constraint search. OK, yeah, I think, yeah, I guess it's just like I don't think I'm closed in this world of trying to think of these constraint problems. And I haven't really like thought more generally about just like exploration as a whole. But but anyway, so I just wanted to ask you generally like your deep learning researcher, I want to ask like what areas of deep learning are you really interested in right now? And what do you think is promising in the near future? So I'm currently working in adversarial examples. That is a really interesting topic. There's lots of questions still still open, but I'm generally interested in pretty much any anything that is not. I'm not too interested in like the newest the newest fine technique on getting the latest state of the art numbers, even though that's probably super important for practitioners. Basically, agreeing more with the authors of this tutorial of that. Let's just try to do interesting things. And to me, these these actually these these areas in terms of open ended, open ended search, open ended learning are very interesting. I think reinforcement learning still has a long way to go. I think actually NLP still has a long way to go because I don't believe it's the current models are the end of it. So I think it's really exciting time. Yeah, I love thinking about adversarial examples because it definitely flips the CNN idea on its head. And then I had one other thing about adversarial examples that I'm interested in is there is like an interview with Elon Musk and this Lex Friedman researcher where he asked him about adversarial examples on his self-driving cars. And he seems dismissive of it. He says he thinks basically you could just average different patches of like test time augmentation to overcome adversarial examples. So in your research, do you think that like the example where they add the noise mass to the panda and they're like, oh, it's a given now, if they just perturbed it like nine more times, do you think the prediction would average out to pandas? That is a very difficult question. And from experience, simply adding noise and then feeding it to the classifier, even if you average after that, usually will defend against adversarial examples to a point. But it will also degrade your classification performance. 
Because so maybe I understood it wrong, but my understanding is I have my input, right? I simply add noise to it and then feed it through the network. And I could do this many times, right? And then average the prediction. But usually this will help against adversarial examples, but it will also degrade the accuracy of that classifier. So it might actually make your self-driving car worse in the overall. Because how often is it going to be attacked against a adversarial example? It's going to be attacked maybe once or twice a year, maybe if it drives by some hacker's house, right? Sticker on a stop sign or something. But the rest of the time, I would actually like to retain the best possible classifier. And if I always have to add noise, then that's not possible. So the research we're doing is actually into the direction of can we retain the original accuracy while still kind of detecting these samples? I mean, you somehow have to get a trade off somewhere, but just adding noise isn't the final solution yet. I was like, so with these adversarial examples, they're only going to make misclassifications like that if it really is adversarially sought after. It's not just like the noise perturbation would be such an enormous space to find it otherwise. Yes, you really need to try. So it's very unlikely that some random thing. Of course, these networks can be confused by random noise, but I think one of the self-driving cars once drove into a big white truck because it was large and white, so it thought it was sky. But other than these failures, you really have to try to find an adversarial example. Really cool. Yannick, thanks so much for doing this. Anybody watching or listening, definitely check out Yannick's YouTube channel. He has really great paper summaries and all sorts of things. Thank you. Thanks so much for having me. | [
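As a side note to the noise-averaging discussion at the end of the conversation, here is a minimal sketch of that defence, assuming a generic PyTorch-style model that maps a batch of images to class logits. The model, the noise level sigma, and the number of noise draws are placeholders, and, as stated in the conversation, this kind of averaging trades some clean accuracy for robustness rather than solving the problem.

import torch

def averaged_prediction(model, x, n_samples=10, sigma=0.1):
    """Classify a single image x of shape (C, H, W) by averaging predictions
    over several noisy copies.

    This tends to blunt small adversarial perturbations, but the model now sees
    noisy inputs, which is why clean accuracy usually drops.
    """
    noisy = x.unsqueeze(0).repeat(n_samples, 1, 1, 1) + sigma * torch.randn(
        (n_samples,) + x.shape
    )
    with torch.no_grad():
        probs = torch.softmax(model(noisy), dim=-1)
    return probs.mean(dim=0)  # averaged class distribution over the noise draws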
{
"start": 0,
"end": 5.74,
"text": " Hi there, I've recently been interviewed by the YouTube channel Henry AI Labs"
},
{
"start": 6.3,
"end": 8.3,
"text": " by Connor Shorten and"
},
{
"start": 8.64,
"end": 14.74,
"text": " what follows is the resulting conversation we had about population-based methods and"
},
{
"start": 15.84,
"end": 22.400000000000002,
"text": " Open-ended learning things like that basically topics of the ICML tutorial that we both saw"
},
{
"start": 23.44,
"end": 27,
"text": " It's important to note that none of us is really an expert on the topic"
},
{
"start": 27,
"end": 30.8,
"text": " but we are trying to make sense of it and"
},
{
"start": 31.68,
"end": 33.68,
"text": " mainly just kind of talking about the ideas"
},
{
"start": 33.68,
"end": 40.8,
"text": " So please enjoy the conversation with Connor Shorten definitely check out the Henry AI Labs channel and"
},
{
"start": 41.64,
"end": 43.64,
"text": " Now have a good time"
},
{
"start": 43.96,
"end": 48.72,
"text": " Thanks for watching the Henry AI Labs deep learning podcast today. I'm joined with Janek Kilcher"
},
{
"start": 48.92,
"end": 54.24,
"text": " Janek works in the data analytics lab at ETH. He has a great YouTube channel"
},
{
"start": 54.24,
"end": 57.24,
"text": " I really enjoy watching his paper summary videos"
},
{
"start": 57.24,
"end": 61.24,
"text": " If you like any of the videos that I'm making you definitely also like checking out this channel"
},
{
"start": 61.24,
"end": 63.52,
"text": " I'm gonna put the link in the description at the end of the talk"
},
{
"start": 63.92,
"end": 66.92,
"text": " So Janek, thanks for doing this with me. I really appreciate it"
},
{
"start": 68.28,
"end": 74.24000000000001,
"text": " Thanks for having me. It's cool. So what we're gonna talk about is population-based search and"
},
{
"start": 75.8,
"end": 78.72,
"text": " Presentation that ICML that I really thought was interesting about"
},
{
"start": 79.86,
"end": 81.12,
"text": " emphasizing"
},
{
"start": 81.12,
"end": 88.96000000000001,
"text": " Diversity and novelty in search. So the first question I just wanted to start by generally talking about your opinion on population-based search and"
},
{
"start": 90.24000000000001,
"end": 95.92,
"text": " The differences between population-based search and my gradient descent going straight for one solution"
},
{
"start": 97.56,
"end": 100.08000000000001,
"text": " Yeah, so the the kind of main difference"
},
{
"start": 100.72,
"end": 107.36000000000001,
"text": " Is that in population-based search as the name implies you maintain kind of a large population of solutions?"
},
{
"start": 107.36,
"end": 113.66,
"text": " So you don't want to limit yourself to just one trajectory say I start here and then I run towards my goal"
},
{
"start": 113.66,
"end": 120.14,
"text": " but you kind of maintain a lot of hypotheses of what the solution could be and then you kind of"
},
{
"start": 120.72,
"end": 124,
"text": " want to update all of them at the same time and"
},
{
"start": 124.4,
"end": 127.88,
"text": " So there's many different variants of population-based search"
},
{
"start": 127.88,
"end": 135.16,
"text": " but they all have this this thing in common where you maintain many solutions and you kind of bet on"
},
{
"start": 135.16,
"end": 137.82,
"text": " One of them becoming a good one"
},
{
"start": 138.34,
"end": 139.85999999999999,
"text": " basically"
},
{
"start": 139.85999999999999,
"end": 145.74,
"text": " Yes, so one other thing they they present their paper where they have the robot walking"
},
{
"start": 145.74,
"end": 151.66,
"text": " And if it breaks one of its legs, for example, it can go back to the map elites table and and say okay"
},
{
"start": 151.66,
"end": 155.42,
"text": " Well, I've lost this leg, but I think maybe this solution"
},
{
"start": 155.42,
"end": 161.78,
"text": " I was I wasn't too clear on how that would really be related. So I was maybe wondering if you had more insight on that"
},
{
"start": 161.78,
"end": 166.98,
"text": " Yes, so the so the maybe the the context is yeah"
},
{
"start": 166.98,
"end": 170.78,
"text": " You want to teach a robot to walk and the robot had six legs"
},
{
"start": 170.78,
"end": 176.5,
"text": " I believe so and if you think of what's the solution to the problem a solution is kind of an"
},
{
"start": 177.06,
"end": 183.94,
"text": " Algorithm that takes the current sensor input and outputs how to move the motors, right? So and"
},
{
"start": 184.74,
"end": 191.3,
"text": " If you just have like say your gradient descent algorithm converging on the best solution of the robot"
},
{
"start": 191.3,
"end": 195.34,
"text": " Of how to move the robot. It's just going to be like, oh, these are the sensors"
},
{
"start": 195.34,
"end": 199.02,
"text": " Okay, I'm gonna move like this like this like this like this but if"
},
{
"start": 200.06,
"end": 207.98000000000002,
"text": " One leg breaks, of course, you're lost because you only know this one way of moving and now the sorry"
},
{
"start": 209.54000000000002,
"end": 211.54000000000002,
"text": " So you only know this one way of moving"
},
{
"start": 212.18,
"end": 213.94,
"text": " Basically, and that's it"
},
{
"start": 213.94,
"end": 220.34,
"text": " But in population based search if you think of the solution as a way to move you maintain many many"
},
{
"start": 220.34,
"end": 222.34,
"text": " many ways to move"
},
{
"start": 222.70000000000002,
"end": 224.38,
"text": " so you"
},
{
"start": 224.38,
"end": 225.9,
"text": " basically the"
},
{
"start": 225.9,
"end": 227.9,
"text": " objective if you can call it like this is"
},
{
"start": 229.9,
"end": 234.06,
"text": " Algorithm find me a lot of different ways to move"
},
{
"start": 234.5,
"end": 240.3,
"text": " Right with my six legs and now if one of my legs I still can evaluate all of them"
},
{
"start": 240.3,
"end": 244.94,
"text": " I still can find okay, which one's the best but if now one of them falls away"
},
{
"start": 244.94,
"end": 247.68,
"text": " I have all these other solutions that I can try"
},
{
"start": 247.68,
"end": 254.08,
"text": " Right. So then what they would do is like this life falls away. Now. They just reevaluate all of those solutions"
},
{
"start": 254.88,
"end": 260.92,
"text": " while only having five legs and the best of those like is much more likely to kind of"
},
{
"start": 262,
"end": 265.52,
"text": " Work then if you had just your single solution"
},
{
"start": 265.76,
"end": 272.26,
"text": " So that kind of that's the its population base because you maintain many different ways of solving the problem"
},
{
"start": 272.26,
"end": 280.62,
"text": " Yes, I was also thinking about like using the search algorithms that control neural architecture search and things like that"
},
{
"start": 280.9,
"end": 286.02,
"text": " So it's trying to think of how you might extend these ideas from the robot walking with six legs"
},
{
"start": 286.26,
"end": 291.48,
"text": " To the RNN controller designing the convolutional network, but like maybe I might have"
},
{
"start": 292.82,
"end": 294.82,
"text": " like more of a"
},
{
"start": 295.3,
"end": 299.18,
"text": " Storage constraint and more of a latency constraint and I could jump to a different solution like that"
},
{
"start": 299.18,
"end": 306.82,
"text": " I'm just wondering how you think like these ideas of population-based search translate into the neural architecture"
},
{
"start": 306.94,
"end": 316.16,
"text": " search and specifically if it really is important because like you've got I feel like in neural architecture search you have such a direct signal with the"
},
{
"start": 316.82,
"end": 323.02,
"text": " Classification accuracy like I don't see as much variance as those in the in the objective function"
},
{
"start": 323.02,
"end": 328.34,
"text": " Yeah, I really think this population based approach is they shine in"
},
{
"start": 328.65999999999997,
"end": 334.21999999999997,
"text": " So they shine in multiple different areas, but one area where they shine is definitely when the environment changes"
},
{
"start": 334.7,
"end": 343.06,
"text": " So when you know something about whatever your input changes like the robot losing a leg so in kind of neural architecture search you might"
},
{
"start": 343.62,
"end": 349.41999999999996,
"text": " You might find these methods working if you then go for let's say transfer learning"
},
{
"start": 349.42,
"end": 355.66,
"text": " So you kind of train your network on one task you want to actually do it in another task, right?"
},
{
"start": 355.66,
"end": 360.3,
"text": " And then if you maintain many solutions and you can evaluate all of them"
},
{
"start": 360.78000000000003,
"end": 366.94,
"text": " In a in this transfer setting it's much more likely that one of them is gonna be is gonna be fine"
},
{
"start": 366.94,
"end": 372.02000000000004,
"text": " So but you're right of I also believe that directly in architecture search"
},
{
"start": 372.54,
"end": 374.54,
"text": " Maybe it's not"
},
{
"start": 374.94,
"end": 377.74,
"text": " Maybe it doesn't yield that many grades"
},
{
"start": 377.74,
"end": 381.1,
"text": " results though the other of course the other"
},
{
"start": 382.46000000000004,
"end": 388.62,
"text": " Area where these methods shine and this is with respect to algorithms like novelty search"
},
{
"start": 390.46000000000004,
"end": 394.06,
"text": " Which can be implemented as a population based method is"
},
{
"start": 395.34000000000003,
"end": 400.14,
"text": " They gave this really good example of deception in a search problem"
},
{
"start": 400.14,
"end": 406.14,
"text": " So a deception would be like if you have a robot walking a maze and the robot just wants to get to the goal"
},
{
"start": 406.14,
"end": 411.74,
"text": " Right and you would program it the robot to be rewarded the closer it gets to the goal"
},
{
"start": 412.21999999999997,
"end": 417.18,
"text": " But if like there's a wall in between and you actually need to go around the wall"
},
{
"start": 417.18,
"end": 422.14,
"text": " Kind of then for a while you would need to move away from the goal in order to reach it"
},
{
"start": 422.14,
"end": 427.74,
"text": " So if you have like a pure objective driven approach, you just go straight to the goal"
},
{
"start": 427.74,
"end": 429.74,
"text": " You would always get stuck at the wall"
},
{
"start": 429.74,
"end": 436.3,
"text": " But if you then kind of do what is called a novelty search where you basically reward the robot for things"
},
{
"start": 436.3,
"end": 440.62,
"text": " It has never done before it would actually find its way around the wall"
},
{
"start": 440.62,
"end": 445.26,
"text": " So you can maintain population of solutions that all kind of explore the space"
},
{
"start": 445.26,
"end": 450.7,
"text": " And that in our neural architecture search, maybe it's of a benefit that actually"
},
{
"start": 451.42,
"end": 458.22,
"text": " You know if I I probably always benefit from like adding more layers or neurons or something"
},
{
"start": 458.22,
"end": 463.02000000000004,
"text": " like this, but maybe I actually want to prune some stuff first and then add some more stuff"
},
{
"start": 463.02000000000004,
"end": 467.1,
"text": " So I maybe want to get worse first before I can get even better, right?"
},
{
"start": 467.1,
"end": 473.74,
"text": " So so is this a reach where I can imagine that happening? But I don't know"
},
{
"start": 473.74,
"end": 476.54,
"text": " Yeah, I was thinking the changing environment"
},
{
"start": 476.54,
"end": 482.70000000000005,
"text": " I definitely think like when you deploy a model and then you're getting new data that you could frame that as a changing environment"
},
{
"start": 482.70000000000005,
"end": 486.86,
"text": " And then also I was thinking about like in the context of GAN"
},
{
"start": 486.86,
"end": 492.78000000000003,
"text": " Which is something that I think is really interesting that the discriminator classifying the GAN"
},
{
"start": 492.78000000000003,
"end": 497.02000000000004,
"text": " Sam the generator samples, it's a changing environment because of the generators updates"
},
{
"start": 497.02000000000004,
"end": 505.34000000000003,
"text": " So maybe having some kind of population based GAN or discriminator model might help it avoid that like"
},
{
"start": 505.34000000000003,
"end": 509.26,
"text": " Continual learning problem, I guess is sort of an"
},
{
"start": 510.7,
"end": 514.22,
"text": " Yeah, that could that might as might very well be"
},
{
"start": 514.22,
"end": 520.22,
"text": " There are approaches to GANs, I believe where you basically you have like many discriminators"
},
{
"start": 520.22,
"end": 525.4200000000001,
"text": " And each one kind of only has let's say has its own limited view on the data"
},
{
"start": 525.4200000000001,
"end": 529.5,
"text": " And you're trying to kind of fool a lot of them at the same time, but it's not the same thing. But yes"
},
{
"start": 529.5,
"end": 536.38,
"text": " I think that that might make sense. Yeah, I've seen that multiple generator multiple discriminator model too"
},
{
"start": 536.38,
"end": 538.38,
"text": " I think that's really interesting as well"
},
{
"start": 538.38,
"end": 548.38,
"text": " So then one other thing I was curious about is this idea of goal switching and how that might relate to the like AutoML on our existing"
},
{
"start": 548.38,
"end": 553.42,
"text": " More like heavily studied things like classification, localization, semantic segmentation"
},
{
"start": 553.42,
"end": 556.62,
"text": " Like how do you think goal switching could be important?"
},
{
"start": 556.62,
"end": 560.62,
"text": " Like one idea I had is maybe if you've got like multi-class classification"
},
{
"start": 560.62,
"end": 565.18,
"text": " And it's got like a really low false positive rate or something on like one class"
},
{
"start": 565.18,
"end": 569.18,
"text": " You might say well you've somehow learned a decision boundary on that class"
},
{
"start": 569.18,
"end": 577.18,
"text": " Or do you think that wouldn't generalize and that there's no sense in goal switching in like a multi-class classification problem?"
},
{
"start": 577.18,
"end": 583.18,
"text": " So yeah, in general, well when you think of goal switching in general"
},
{
"start": 583.18,
"end": 589.18,
"text": " How they introduced it was also in the context of like this population based search of these map elites"
},
{
"start": 589.18,
"end": 595.18,
"text": " Maybe it's kind of so what map elites the algorithm does basically is it says"
},
{
"start": 595.18,
"end": 600.18,
"text": " Okay, I have a number of dimensions that I could solve the problem on and they introduced"
},
{
"start": 600.18,
"end": 605.18,
"text": " Okay, let's take life on earth needs to whatever survive"
},
{
"start": 605.18,
"end": 611.18,
"text": " So I can either be like a super tall creature right to reach food that no one else can reach"
},
{
"start": 611.18,
"end": 615.18,
"text": " I could be a super fast creature right to kind of run away from everything"
},
{
"start": 615.18,
"end": 619.18,
"text": " Or it can be a super heavy creature so that no one can attack me"
},
{
"start": 619.18,
"end": 626.18,
"text": " And so these are kind of the dimensions that you can solve the problem of reproduction and survival"
},
{
"start": 626.18,
"end": 633.18,
"text": " And within so what map elites does it it would segment this area"
},
{
"start": 633.18,
"end": 638.18,
"text": " So let's say size and speed it would segment this into a grid"
},
{
"start": 638.18,
"end": 645.18,
"text": " And then in each grid it would kind of maintain the best solution so far that is within that grid"
},
{
"start": 645.18,
"end": 654.18,
"text": " And then what they see is when they then kind of evolve this over time and improve each each grid is that"
},
{
"start": 654.18,
"end": 662.18,
"text": " Inventions let's say inventions algorithm discoveries in one grid say for a very fast creature"
},
{
"start": 662.18,
"end": 669.18,
"text": " They would then kind of be adapted to the very let's say the very heavy creatures so like fast creature"
},
{
"start": 669.18,
"end": 672.18,
"text": " Kind of discovers or longer legs make me even faster"
},
{
"start": 672.18,
"end": 677.18,
"text": " Maybe the longer legs can be then be combined in the heavy creature to do something else"
},
{
"start": 677.18,
"end": 687.18,
"text": " So this kind of goal switching it's think of like feathers being first kind of developed or evolved for warmth"
},
{
"start": 687.18,
"end": 693.18,
"text": " For temperature regulation then being goal switched over to adapt it for flight"
},
{
"start": 693.18,
"end": 702.18,
"text": " So in the in terms of multi class classification I guess it's a bit of a different problem if you just have one classifier"
},
{
"start": 702.18,
"end": 709.18,
"text": " You can definitely make the argument that since you know you're learning maybe to classify one class really well"
},
{
"start": 709.18,
"end": 714.18,
"text": " The low false positive rate you have learned very good features for that class"
},
{
"start": 714.18,
"end": 724.18,
"text": " And if some other class kind of like the zebra is a horse with stripes and then the horse is a horse"
},
{
"start": 724.18,
"end": 731.18,
"text": " But with the feature stripes being really low you can probably classify that better or something making stuff up here"
},
{
"start": 731.18,
"end": 739.18,
"text": " But it's a bit of a different context I feel the if you have a single classifier do multi class classification"
},
{
"start": 739.18,
"end": 749.18,
"text": " But definitely the logic applies in the feature space I would say where you learn features for one class and they might become useful for another class"
},
{
"start": 749.18,
"end": 756.18,
"text": " Yeah I had this other thoughts sort of when you're discussing that is like what about like multi class multitask learning"
},
{
"start": 756.18,
"end": 763.18,
"text": " Like maybe my intermediate features get mapped to a classifier get mapped to a segmentation get mapped to again"
},
{
"start": 763.18,
"end": 768.18,
"text": " Like could goal switching improve multitask learning"
},
{
"start": 768.18,
"end": 776.18,
"text": " Yeah I would definitely say so I think that that's exactly what we're seeing when you look at for example pre training"
},
{
"start": 776.18,
"end": 785.18,
"text": " So if you think of like these wherever these newest big language models like BERT or something they're really good at tasks"
},
{
"start": 785.18,
"end": 794.18,
"text": " I don't know what it was an NLP task labeling of sentiment sentiment classification is the classic right"
},
{
"start": 794.18,
"end": 801.18,
"text": " If they evaluate on that because it's so easy but let's say BERT is really good at sentiment classification"
},
{
"start": 801.18,
"end": 810.18,
"text": " But if you were to just to train it out right on sentiment classification it's probably not going to work because there's just too little signal"
},
{
"start": 810.18,
"end": 820.18,
"text": " But then what happens is you pre train it as a language model as this masked language model and it kind of gets really good at simply comprehending language"
},
{
"start": 820.18,
"end": 832.18,
"text": " And that skill can then be kind of adapted over into the into the cement sorry into the sentiment classification realm"
},
{
"start": 832.18,
"end": 845.18,
"text": " So I think if you look at something like pre training or multitask as you say then definitely one tap what the addition of a task might give rise to certain features"
},
{
"start": 845.18,
"end": 855.18,
"text": " That then all of a sudden can be adapted by another task whereas if you just trained the latter task by itself that maybe would have been too difficult"
},
{
"start": 855.18,
"end": 864.18,
"text": " So yeah there's definitely an analogy so then what I think about is so I'm going from my pre training language model into sentiment classification"
},
{
"start": 864.18,
"end": 872.18,
"text": " And maybe I also add like question answer during document summarization named entity like this like vector of tasks that it can go do"
},
{
"start": 872.18,
"end": 886.18,
"text": " I'm then curious like when your goal switching it's like how do you then combine the features later on or do you just like take it as if I need this task I'll go to this model like yeah"
},
{
"start": 886.18,
"end": 895.18,
"text": " Well the question here is do you whether or not you implement this as a single model and kind of refer to the goal switching of features within that model"
},
{
"start": 895.18,
"end": 907.18,
"text": " Or whether you also do this now as a population based method where basically you maintain you you maintain different neural networks for different combination of these tasks"
},
{
"start": 907.18,
"end": 919.18,
"text": " Then you'd actually need a method to kind of combine and reproduce the neural networks themselves which I yeah I see that's that's going to be a bit of a difficult task"
},
{
"start": 919.18,
"end": 927.18,
"text": " Like some cross distillation or some something crazy yeah I don't know how that will work exactly"
},
{
"start": 927.18,
"end": 937.18,
"text": " Yeah I just wonder about two things it's like do for my population based search could you have like the weights be the population like different sets of weights"
},
{
"start": 937.18,
"end": 945.18,
"text": " Or would it necessarily need to be like taking apart the layers and designing new internal like cells as in the architecture search like"
},
{
"start": 945.18,
"end": 957.18,
"text": " Because if I just have the weights maybe I could treat the diversity search or goal switching as like stochastic weight averaging and just like mesh them all together when I'm finished with my goal switching at the end"
},
{
"start": 957.18,
"end": 980.18,
"text": " But if it's yeah it's definitely be if you wanted to if you yeah if you wanted to if you wanted to implement your multi task multi task tasking as a population based approach"
},
{
"start": 980.18,
"end": 993.18,
"text": " Where yeah you could def it would definitely give you an easier time if you keep the architecture of your neural networks the same and simply have different weights"
},
{
"start": 993.18,
"end": 1007.18,
"text": " And then you could indeed consider something like weight averaging or or yeah I guess a more modern approach will be like distillation from the two teacher models into one child model"
},
{
"start": 1007.18,
"end": 1025.1799999999998,
"text": " It's actually a good metaphor for a for reproduction kind of a distillation from multiple teacher model don't know if anyone's done that yet but yeah I guess that that might be the way to do it if you also maintain different architectures for different problems that might be a bit of a yeah"
},
{
"start": 1025.1799999999998,
"end": 1034.1799999999998,
"text": " Yeah that's an interesting thing too if you have the goal switching and then you model distill it all into one model that is yes"
},
{
"start": 1034.18,
"end": 1059.18,
"text": " Well if you think of map elites right you'd simply you'd simply distill it into the appropriate I don't even know what the what the axis would be probably I can imagine okay you have like three tasks so you have three axis and then you'd mix the task maybe in accordance on how far up your of these axes you are or something like this"
},
{
"start": 1059.18,
"end": 1067.18,
"text": " It's not exactly map elites because your actual objectives are on the axis but I don't know"
},
{
"start": 1067.18,
"end": 1087.18,
"text": " Yes pretty cool so just to backtrack one step I want to talk about like diversity centric search novelty like when I was thinking about that I was like can't you just initialize it such such that it has maximum diversity like can't you just initialize the population such that they're all like uniformly spaced and then search locally from there"
},
{
"start": 1087.18,
"end": 1092.18,
"text": " So I just wonder what you think on that and how this is different from that"
},
{
"start": 1092.18,
"end": 1121.18,
"text": " So yeah in these in these diversity search algorithms basically what you're you're doing is your your only goal is or your main goal depends on the algorithm but let's say your only goal is to find diverse behaviors or diverse solutions diverse whatever I think the main problem with that is is that the search space is so extremely large"
},
{
"start": 1121.18,
"end": 1148.18,
"text": " That you're going to have a hard time even even defining what a kind of a uniform distribution is because it's such a high dimensional space that even if you sample uniformly it's it's almost empty like you're almost right you're not you're not getting anywhere because you have finite finite computer you need to implement an algorithm"
},
{
"start": 1148.18,
"end": 1167.18,
"text": " Even if you even if my computer can hold a hundred thousand different members of a population in high dimensions that is nothing right so to me yet the initialization might be definitely important"
},
{
"start": 1167.18,
"end": 1186.18,
"text": " But I don't think you'll you'll get around some sort of iterative procedure and going around weeding out weeding out things such that you have space for interesting things because ultimately what you want to find is something interesting"
},
{
"start": 1186.18,
"end": 1208.18,
"text": " In the robot maze example the novelty search basically is here is a robot you started right and then you want to do something that you haven't done yet right so if the robots crashes into a wall the first time that's a good thing you say oh cool you haven't done that yet"
},
{
"start": 1208.18,
"end": 1232.18,
"text": " But if it crashes into the wall the second time you're like you've done that already right so you you you basically need a measure of saying how close to behaviors are but if the robot has crashed into every wall once the only thing it can do if it wants to do something new is actually go around the wall and then you're like oh cool you've done something new"
},
{
"start": 1232.18,
"end": 1247.18,
"text": " But the space of behaviors often is so large that you can't simply enumerate all the all the behaviors so you I think that's the main problem why you can't just make it diverse from the beginning"
},
{
"start": 1247.18,
"end": 1264.18,
"text": " Yeah when I think about that I was thinking that maybe the like reward function if you're like navigating the maze it needs to be more refined so like if it crashes into the wall that needs to be like I don't know plus three some some like unique signal I feel like in order to create that kind of because like"
},
{
"start": 1264.18,
"end": 1286.18,
"text": " Thinking of if it's just like reward zero everywhere but one if you hit that finish line and then maybe some kind of like discounting for how long it takes you to get there is like I don't see how it could interpret that it's done a new behavior if all it has is it so to me it feels like it's all about the design of the reward space now to implement such a thing"
},
{
"start": 1286.18,
"end": 1308.18,
"text": " Yes absolutely so the that the definitely if you wanted to do novelty search you would need to implement a measure of how close to behaviors are so there's no way around and I think that's kind of crux of the of this method is that by specifying how close to behaviors are so what what constitutes novelty and what doesn't"
},
{
"start": 1308.18,
"end": 1331.18,
"text": " You already implicitly kind of telling the robot something about the nature of of the world so I think that the kind of the objective because they now say oh we don't give the robot the objecting of reaching the target we simply give it the objective of not doing the same thing twice I think the kind of objective sneaks in"
},
{
"start": 1331.18,
"end": 1353.18,
"text": " Like again through the specification of how of you how close are to be a risk but definitely this is just kind of a really simple example of what they want to say is that these methods really become important when you have ambitious objectives in the maze we can all agree if we just designed the reward"
},
{
"start": 1353.18,
"end": 1374.18,
"text": " Crashing walls bad you don't have to actually go straight to the goal you can you know but go around walls good and so on then it's easy right but in really ambitious objectives like I don't know flying reaching the moon in the in the 1960s designing general AI"
},
{
"start": 1374.18,
"end": 1395.18,
"text": " Curing cancer and so on we don't actually know how to design the reward right because we don't know which steps need to be fulfilled in order to to fly to the moon I guess now we do in hindsight right but we couldn't have predicted we don't know which steps need to be discovered"
},
{
"start": 1395.18,
"end": 1418.18,
"text": " In order to cure cancer and it's very very probable if you look at history that the fundamental discoveries that lead to us curing cancer will not directly come from cancer research that's that's their entire point right it's not like you can have a goal go straight towards it if it's like a really ambitious goal very probably"
},
{
"start": 1418.18,
"end": 1445.18,
"text": " The solutions will come in part from extremely non related fields and they and you kind of have to make advances everywhere and in order to solve that problem so the the the question of it's all designed it's all about designing the reward yes but we would have to know how the reward must be must look and in these really ambitious objectives we don't"
},
{
"start": 1445.18,
"end": 1469.18,
"text": " And that's that's where they argue well the best thing actually you can do is to just explore and you just find interesting things along the way and you kind of hope that these interesting things will come no you know the interesting things will combine to form new interesting things right but you just don't know where you're going to end up right"
},
{
"start": 1469.18,
"end": 1495.18,
"text": " Yeah, I guess maybe you could just keep a trip like the trajectory of states and use that as your signal of novelty. But then I think like if you've got like a robotic arm with like x degrees of freedom it's like the state space would be too infinite to really like say oh this was significantly this is a significantly different sequential procedure of states and this other thing."
},
{
"start": 1495.18,
"end": 1511.18,
"text": " So then the next thing. Yes, I think this is a good transition into their pick breeder experiment. And so anyone who listens to this who hasn't watched their talk the pick breeder is like, they've got these generator neural networks with sets of weights."
},
{
"start": 1511.18,
"end": 1530.18,
"text": " And they have like humans go on and they pick two of the generated images to blend together and derive a new image. And so this repeats on and on until it goes from like just like a spiral pattern into like a skull face drawing or a butterfly drawing or something like that."
},
{
"start": 1530.18,
"end": 1546.18,
"text": " And they. So this idea is supposed to represent open endedness in an environment and not so it just generally I, I just found it to be really interesting. I think it's one of the things in their talk that you look at it and you're like oh it's interesting what what is going on here."
},
{
"start": 1546.18,
"end": 1559.18,
"text": " But it's like the, the mutation is really guided by the human search, which is so complex I feel like I was just wondering what you thought of that pick breeder experiment."
},
{
"start": 1559.18,
"end": 1576.18,
"text": " Yeah, it's really cool. And it's, it's, it's actually the basis for their entire books I've read the book, the white greatness cannot be planned I believe I've got the title."
},
{
"start": 1576.18,
"end": 1603.18,
"text": " But, so that this, they actually they kind of start out with this as a motivational example of what if, what if the only goal is to do something interesting and without any objective so all you do is kind of choose slight variations on the current picture, and you see what you end up with and I thought, I thought it illustrates their points"
},
{
"start": 1603.18,
"end": 1619.18,
"text": " extremely well so it illustrates, for example, goal switching is that if you were done with your sequence of image manipulations you could then save it into the database and someone else could pick up from it, and then kind of continue it."
},
{
"start": 1619.18,
"end": 1639.18,
"text": " And since every human finds slightly different things interesting right, you could take someone else's final result and say, ah, you know that that kind of looks weird but then you, your modifications to it will be different than that human continued breeding the picture."
},
{
"start": 1639.18,
"end": 1655.18,
"text": " So what you end up is, and they show this, for example, one picture ends up being a car, and it had been adapted from an alien face where the eyes of the alien face became the wheels of the car."
},
{
"start": 1655.18,
"end": 1682.18,
"text": " And so the first person might have been like, oh, this looks more and more like an alien face, I'm going to make it more like an alien face, and then the second person is like, oh, that kind of looks nice, I'm going to modify it in a different, so they basically give this example of if you have an ambitious goal like getting to a car just from these very simple picture generation networks."
},
{
"start": 1682.18,
"end": 1692.18,
"text": " Then the stepping stones to get there have nothing to do with cars, and the people that did it didn't have a car in mind while going there."
},
{
"start": 1692.18,
"end": 1701.18,
"text": " And the second thing is that if you try to get a car from the beginning, I believe they've done this, if you try to, you can't."
},
{
"start": 1701.18,
"end": 1714.18,
"text": " Like, it's just the sequence of things that you have to go through is so complicated and convoluted that if you were to try to end up with a result, it's basically impossible."
},
{
"start": 1714.18,
"end": 1720.18,
"text": " So these kind of illustrate their points very, very nicely."
},
{
"start": 1720.18,
"end": 1729.18,
"text": " And I mean, it's a cool experiment in itself, but they use it kind of as a basis metaphor for them going on, jumping off."
},
{
"start": 1729.18,
"end": 1739.18,
"text": " Yeah, I just think it's so interesting, this idea that it's like you can't design a car unless you don't try it, unless you just happen to come across that."
},
{
"start": 1739.18,
"end": 1747.18,
"text": " It's sort of like I think about like if I was to fire up GarageBand and start trying to make a song, it's like I don't know exactly what it's going to sound like."
},
{
"start": 1747.18,
"end": 1749.18,
"text": " I'm just going to kind of explore until I come across something."
},
{
"start": 1749.18,
"end": 1754.18,
"text": " So then I was thinking about like with the GANs and the way that the GANs design images."
},
{
"start": 1754.18,
"end": 1759.18,
"text": " So this is sort of a design I drew up that I'm curious what you think of."
},
{
"start": 1759.18,
"end": 1767.18,
"text": " It's like what if the generator just tries to make some object and then a pre-chained classifier says, oh, I think it looks like this maybe."
},
{
"start": 1767.18,
"end": 1770.18,
"text": " And then you send it to like a refining network."
},
{
"start": 1770.18,
"end": 1780.18,
"text": " So the GAN just sort of searches for objects and then some classifiers are like, oh, I think it looks like sort of like how the pig breeders sort of like how we're like, oh, I think this looks like a skull or whatever."
},
{
"start": 1780.18,
"end": 1783.18,
"text": " So I'm going to try to refine it now."
},
{
"start": 1783.18,
"end": 1786.18,
"text": " Do you think that would be an interesting thing or?"
},
{
"start": 1786.18,
"end": 1788.18,
"text": " You'd have like a two stage process."
},
{
"start": 1788.18,
"end": 1792.18,
"text": " First you do something general and then it gets classified."
},
{
"start": 1792.18,
"end": 1800.18,
"text": " And then you'd have like a special generator just for the skull class and the special discriminator just for that."
},
{
"start": 1800.18,
"end": 1803.18,
"text": " Yeah, I don't see why not."
},
{
"start": 1803.18,
"end": 1804.18,
"text": " It might be hard."
},
{
"start": 1804.18,
"end": 1810.18,
"text": " It might be hard to get the first generator to be sufficiently diverse."
},
{
"start": 1810.18,
"end": 1820.18,
"text": " So you might might need some kind of discriminator signal at the even at the beginning."
},
{
"start": 1820.18,
"end": 1831.18,
"text": " So yes, I mean, you're like, how do you think the pig breeder experiment could become fully automated such that there's no human in the loop?"
},
{
"start": 1831.18,
"end": 1839.18,
"text": " Yeah, that's that's a thought I had as well, because to me it seems that the kind of, of course, the resulting pictures,"
},
{
"start": 1839.18,
"end": 1848.18,
"text": " the fact that they look like human objects or recognizable objects is a result from them being being bred by humans."
},
{
"start": 1848.18,
"end": 1853.18,
"text": " Like the fact that it looks like a car or a skull or something like this is is very much."
},
{
"start": 1853.18,
"end": 1858.18,
"text": " But also, I guess that that could be abstracted in."
},
{
"start": 1858.18,
"end": 1867.18,
"text": " We just not expect the results to be like human recognizable objects, but maybe something else."
},
{
"start": 1867.18,
"end": 1877.18,
"text": " The much more deeper construction in pig breeder is the fact that the measure of interestingness is provided by the humans."
},
{
"start": 1877.18,
"end": 1885.18,
"text": " Right. So the humans, they they click on a picture and then they get variants of that picture and they click on the one that they most like."
},
{
"start": 1885.18,
"end": 1896.18,
"text": " This this sense of interestingness of I like this one is that's what's that's the fundamental core that's provided by the humans as an input to the system."
},
{
"start": 1896.18,
"end": 1900.18,
"text": " That's what drives the entire thing. That's exactly the same as before."
},
{
"start": 1900.18,
"end": 1911.18,
"text": " It's when you write when you teach the robot which two behaviors are close enough, like, oh, no, that's too close to before."
},
{
"start": 1911.18,
"end": 1915.18,
"text": " That's not novel. Or yes, that's sufficiently different than before."
},
{
"start": 1915.18,
"end": 1925.18,
"text": " That is novel. Right. This this sense is somehow you either need to specify it or you need to have the human in the loop to provide it."
},
{
"start": 1925.18,
"end": 1933.18,
"text": " I feel it's very, very hard to capture that in an algorithm as as of today."
},
{
"start": 1933.18,
"end": 1944.18,
"text": " Yeah, like something I think about is like maybe I'd have like my thousand class image net classifier and then maybe I'd have like like a style classifier,"
},
{
"start": 1944.18,
"end": 1949.18,
"text": " like a neural style transfer network that I've like chopped off the like some intermediate feature."
},
{
"start": 1949.18,
"end": 1957.18,
"text": " I'm going to take that as my style. And so maybe I'm like classifying. I think it's like an airplane. And then I kind of like this style for it."
},
{
"start": 1957.18,
"end": 1961.18,
"text": " That's sort of like my like how I would think about trying to automate that."
},
{
"start": 1961.18,
"end": 1967.18,
"text": " Like, I don't know, I guess, like, I don't know if I I guess it's interesting."
},
{
"start": 1967.18,
"end": 1970.18,
"text": " But I also feel like when you're doing the pick reader, you're kind of like, oh, I'm going to try it now."
},
{
"start": 1970.18,
"end": 1979.18,
"text": " Now that I see this vision, I'm going to try to make it like look like that now, I suppose. Like, yeah, yeah."
},
{
"start": 1979.18,
"end": 1984.18,
"text": " I think I could mold this into a skull and then you start doing."
},
{
"start": 1984.18,
"end": 1989.18,
"text": " Yes, yes, they're very much so they're not they're not advocating random exploration."
},
{
"start": 1989.18,
"end": 1998.18,
"text": " What they're advocating is basically if you have an ambitious goal, then you basically don't know the stepping stones."
},
{
"start": 1998.18,
"end": 2004.18,
"text": " But from stepping stone to stepping stone, that's where objectives are very handy."
},
{
"start": 2004.18,
"end": 2010.18,
"text": " So when you want to say I this already kind of looks like something, I want to make it more like that."
},
{
"start": 2010.18,
"end": 2012.18,
"text": " I want to make it more into a skull. Right."
},
{
"start": 2012.18,
"end": 2015.18,
"text": " It already has like two circles and kind of the shape."
},
{
"start": 2015.18,
"end": 2022.18,
"text": " But I'm going to drive it there. That that is very that can be very objective driven."
},
{
"start": 2022.18,
"end": 2030.18,
"text": " But in the grand scheme of things, you don't know. Then once you have the skull, someone else can develop that into an even new thing."
},
{
"start": 2030.18,
"end": 2045.18,
"text": " So, yeah, indeed, if if you if you are in kind of a local search in this space, then an objective driven behavior like what you're saying, like I want to make it as much this as possible."
},
{
"start": 2045.18,
"end": 2059.1800000000003,
"text": " That's very that's actually a thing they're advocating for. But then from their end result, yeah, you would need to then restart again, do the same thing with like something else."
},
{
"start": 2059.1800000000003,
"end": 2062.1800000000003,
"text": " Huh? Yeah, it's really interesting."
},
{
"start": 2062.18,
"end": 2076.18,
"text": " Just thinking about, yeah, I think about like the stepping stones and like is how would you define the space of stepping stones to such a to any kind of thing?"
},
{
"start": 2076.18,
"end": 2084.18,
"text": " I guess it's like you could still design some kind of maybe it's discrete or maybe you have some kind of signal you can get back from it."
},
{
"start": 2084.18,
"end": 2087.18,
"text": " And I guess it's just a lot to think about."
},
{
"start": 2087.18,
"end": 2092.18,
"text": " Directly, I think they give this they give this great analogy."
},
{
"start": 2092.18,
"end": 2101.18,
"text": " I feel like if you have a really ambitious objective, it's like crossing a lake, but the lake is covered in fog."
},
{
"start": 2101.18,
"end": 2108.18,
"text": " So you basically can't really see very far, but you can always kind of see the next stepping stones."
},
{
"start": 2108.18,
"end": 2117.18,
"text": " Right. And you can then you can then try to go from stepping stone to stepping stone, but you don't know which one to take if there's like a fork."
},
{
"start": 2117.18,
"end": 2123.18,
"text": " There's two ways possible. You don't know which one. Right. So all you can do is basically go the most interesting one."
},
{
"start": 2123.18,
"end": 2126.18,
"text": " And they relate this to scientific research."
},
{
"start": 2126.18,
"end": 2135.18,
"text": " So, yeah, if we want to accomplish some really great research goal, like artificial general intelligence, we don't like we don't know."
},
{
"start": 2135.18,
"end": 2149.18,
"text": " But we can see the next stepping stones. Right. We can see, oh, from what we have right now, what interesting combination could we make that still kind of it still kind of makes that's not total garbage."
},
{
"start": 2149.18,
"end": 2156.18,
"text": " Right. So in the local search, I can try to say I want to I don't know. I want to do this."
},
{
"start": 2156.18,
"end": 2168.18,
"text": " I want to do multiple generators and multi stage and then this thing. Right. This this is kind of a stepping stone and maybe that will then lead to something more interesting and so on."
},
{
"start": 2168.18,
"end": 2185.18,
"text": " So, yeah, that's that's kind of how they relate. I like this metaphor of the lake. Yeah. Yeah. I just like could like a meta controller try to put the stones down and then the objective is or is the space too enormous that that idea of having a meta controller guide the stepping stone placement is too big."
},
{
"start": 2185.18,
"end": 2192.18,
"text": " The stepping stone placement is just like absurd in that and there's no way that that would work. That's sort of where I'm thinking with this now is like."
},
{
"start": 2192.18,
"end": 2203.18,
"text": " So they actually that's that's exactly the question. Right. Of what I so I believe you need such a meta whatever because the space is too large."
},
{
"start": 2203.18,
"end": 2211.18,
"text": " You somehow need a way to choose the stepping stones in the first place. Right. You somehow need a way to do this."
},
{
"start": 2211.18,
"end": 2229.18,
"text": " Now, what they're saying is that if you're if your goal is really ambitious, then a meta controller that simply wants to reach the goal is bad because right because what we discussed before, you might need a lot of inventions from other fields in order to make goal happen."
},
{
"start": 2229.18,
"end": 2247.18,
"text": " And if you simply go your field maximum power towards your goal, that's not going to happen. Now, if your meta controller is actually just something that wants to produce interesting things, then that's actually something they advocate for."
},
{
"start": 2247.18,
"end": 2257.18,
"text": " That is exactly what their algorithms are trying to capture. They're trying to capture locally. Yeah, we want to get better at a particular thing."
},
{
"start": 2257.18,
"end": 2268.18,
"text": " What those particular things are and the order of these that should be novelty driven instead of goal driven."
},
{
"start": 2268.18,
"end": 2275.18,
"text": " Yeah, yeah. Yeah. The interesting component. I guess I'm sort of biased towards liking the objective design."
},
{
"start": 2275.18,
"end": 2286.18,
"text": " And now I'm thinking like, OK, well, let's abstract those meta controllers one level up and have a meta meta controller and just repeat this and hierarchy makes sense."
},
{
"start": 2286.18,
"end": 2309.18,
"text": " And that if you if you if you're if you're a bit cynical, that is what you will also hear out of here out of and they have to argue in the in their book a lot against that like isn't the question isn't the kind of isn't the implementation of a meta controller that just searches for novelty in itself."
},
{
"start": 2309.18,
"end": 2316.18,
"text": " And that's the objective again. And then they give some good reasons why actually you don't."
},
{
"start": 2316.18,
"end": 2327.18,
"text": " It is different. It's more like a constraint on your search. If you think of natural evolution, for example, it isn't really doesn't really have an objective."
},
{
"start": 2327.18,
"end": 2342.18,
"text": " You think reproduction and survival is the objective of natural evolution. It doesn't really the good the good reason they give is the objective has already been fulfilled by the very first organism to ever live."
},
{
"start": 2342.18,
"end": 2350.18,
"text": " Right. Why didn't it stop there? Why didn't it stop very first cell? OK, done. We've fulfilled the objective."
},
{
"start": 2350.18,
"end": 2357.18,
"text": " It's more of a it's more of an actually a constrained optimization where the constraint is you need to be able to survive."
},
{
"start": 2357.18,
"end": 2366.18,
"text": " That's kind of the minimum bar of to being on this planet. And then I'm saying constrained optimization, but it's it's not it's not an optimization."
},
{
"start": 2366.18,
"end": 2370.18,
"text": " It's more of like a constraint constraint search."
},
{
"start": 2370.18,
"end": 2385.18,
"text": " OK, yeah, I think, yeah, I guess it's just like I don't think I'm closed in this world of trying to think of these constraint problems. And I haven't really like thought more generally about just like exploration as a whole."
},
{
"start": 2385.18,
"end": 2394.18,
"text": " But but anyway, so I just wanted to ask you generally like your deep learning researcher, I want to ask like what areas of deep learning are you really interested in right now?"
},
{
"start": 2394.18,
"end": 2404.18,
"text": " And what do you think is promising in the near future? So I'm currently working in adversarial examples."
},
{
"start": 2404.18,
"end": 2415.18,
"text": " That is a really interesting topic. There's lots of questions still still open, but I'm generally interested in pretty much any anything that is not."
},
{
"start": 2415.18,
"end": 2432.18,
"text": " I'm not too interested in like the newest the newest fine technique on getting the latest state of the art numbers, even though that's probably super important for practitioners."
},
{
"start": 2432.18,
"end": 2439.18,
"text": " Basically, agreeing more with the authors of this tutorial of that."
},
{
"start": 2439.18,
"end": 2455.18,
"text": " Let's just try to do interesting things. And to me, these these actually these these areas in terms of open ended, open ended search, open ended learning are very interesting."
},
{
"start": 2455.18,
"end": 2458.18,
"text": " I think reinforcement learning still has a long way to go."
},
{
"start": 2458.18,
"end": 2466.18,
"text": " I think actually NLP still has a long way to go because I don't believe it's the current models are the end of it."
},
{
"start": 2466.18,
"end": 2469.18,
"text": " So I think it's really exciting time."
},
{
"start": 2469.18,
"end": 2476.18,
"text": " Yeah, I love thinking about adversarial examples because it definitely flips the CNN idea on its head."
},
{
"start": 2476.18,
"end": 2491.18,
"text": " And then I had one other thing about adversarial examples that I'm interested in is there is like an interview with Elon Musk and this Lex Friedman researcher where he asked him about adversarial examples on his self-driving cars."
},
{
"start": 2491.18,
"end": 2501.18,
"text": " And he seems dismissive of it. He says he thinks basically you could just average different patches of like test time augmentation to overcome adversarial examples."
},
{
"start": 2501.18,
"end": 2516.18,
"text": " So in your research, do you think that like the example where they add the noise mass to the panda and they're like, oh, it's a given now, if they just perturbed it like nine more times, do you think the prediction would average out to pandas?"
},
{
"start": 2516.18,
"end": 2533.18,
"text": " That is a very difficult question. And from experience, simply adding noise and then feeding it to the classifier, even if you average after that, usually will defend against adversarial examples to a point."
},
{
"start": 2533.18,
"end": 2538.18,
"text": " But it will also degrade your classification performance."
},
{
"start": 2538.18,
"end": 2547.18,
"text": " Because so maybe I understood it wrong, but my understanding is I have my input, right? I simply add noise to it and then feed it through the network."
},
{
"start": 2547.18,
"end": 2551.18,
"text": " And I could do this many times, right? And then average the prediction."
},
{
"start": 2551.18,
"end": 2568.18,
"text": " But usually this will help against adversarial examples, but it will also degrade the accuracy of that classifier. So it might actually make your self-driving car worse in the overall."
},
{
"start": 2568.18,
"end": 2582.18,
"text": " Because how often is it going to be attacked against a adversarial example? It's going to be attacked maybe once or twice a year, maybe if it drives by some hacker's house, right?"
},
{
"start": 2582.18,
"end": 2591.18,
"text": " Sticker on a stop sign or something. But the rest of the time, I would actually like to retain the best possible classifier."
},
{
"start": 2591.18,
"end": 2605.18,
"text": " And if I always have to add noise, then that's not possible. So the research we're doing is actually into the direction of can we retain the original accuracy while still kind of detecting these samples?"
},
{
"start": 2605.18,
"end": 2616.18,
"text": " I mean, you somehow have to get a trade off somewhere, but just adding noise isn't the final solution yet."
},
{
"start": 2616.18,
"end": 2624.18,
"text": " I was like, so with these adversarial examples, they're only going to make misclassifications like that if it really is adversarially sought after."
},
{
"start": 2624.18,
"end": 2632.18,
"text": " It's not just like the noise perturbation would be such an enormous space to find it otherwise."
},
{
"start": 2632.18,
"end": 2638.18,
"text": " Yes, you really need to try. So it's very unlikely that some random thing."
},
{
"start": 2638.18,
"end": 2651.18,
"text": " Of course, these networks can be confused by random noise, but I think one of the self-driving cars once drove into a big white truck because it was large and white, so it thought it was sky."
},
{
"start": 2651.18,
"end": 2660.18,
"text": " But other than these failures, you really have to try to find an adversarial example."
},
{
"start": 2660.18,
"end": 2667.18,
"text": " Really cool. Yannick, thanks so much for doing this. Anybody watching or listening, definitely check out Yannick's YouTube channel."
},
{
"start": 2667.18,
"end": 2671.18,
"text": " He has really great paper summaries and all sorts of things. Thank you."
},
{
"start": 2671.18,
"end": 2700.18,
"text": " Thanks so much for having me."
}
] |
H5vpBCLo74U | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | XLNet: Generalized Autoregressive Pretraining for Language Understanding | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"artificial intelligence",
"ai",
"nlp",
"natural language processing",
"bert",
"xlnet",
"transformer",
"transformer xl",
"attention",
"attention layer",
"language model",
"language modeling",
"pretraining",
"autoregressive",
"autoencoder",
"permutation",
"google",
"carnegie mellon",
"cmu",
"state of the art",
"masked language model"
] | Abstract:
With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, XLNet outperforms BERT on 20 tasks, often by a large margin, and achieves state-of-the-art results on 18 tasks including question answering, natural language inference, sentiment analysis, and document ranking.
Authors: Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le
https://arxiv.org/abs/1906.08237 | Hi there, today we're looking at XLNet, Generalized Autoregressive Pre-Training for Language Understanding, by Jilin Yang and other people from Carnegie Mellon University as well as Google Brain. So this is kind of the elephant in the room currently as XLNet is the first model to beat BERT, which was the previous state of the art on a lot of NLP tasks, to beat BERT at a lot of these same NLP tasks. So the chief state of the art result on 18 of 20 tasks I believe, maybe they test more, they outperformed BERT on 20, the chief state of the art on 18, including things as question answering, natural language inference, sentiment analysis and so on. So those are kind of remarkable results and even more remarkable is that the architecture of the network is actually very, fairly similar to BERT. The kind of new introduction is a pre-training, a different pre-training procedure and we'll look into that. So let's actually jump into their main points straight away. What they go into is there are two kinds of currently used pre-training methods for these NLP tasks and both can be understood as kind of language modeling. So language modeling for those of you who don't know is predict the next word in a sequence. So if I give you the sequence here, unsupervised representation learning has been and then I ask you what's next and then you're supposed to say highly. That's language modeling in a nutshell. So what they differentiate are two kinds of language modeling. The first one, they say is autoregressive language modeling. Now what autoregressive language modeling does is exactly what we've looked at. I give you unsupervised learning has been, you're supposed to predict highly. And then in the next step I give you unsupervised representation learning has been highly and you're supposed to predict successful and so on. So in the next step I'm going to give you the entire sentence up until here and you're supposed to predict in. Autoregressive because each token can look at the kind of previous ones in the in the sequence. So when you, sorry you can't see that, when you predict, when you predict you can always kind of autoregressively look at what the previous ones were, including what you've previously predicted. Of course during training this is teacher forced as I said so you put the actual words there. This is autoregressive modeling in contrast to what they call auto encoding. And auto encoding is what BERT does and this is the following. So in contrast to that let's say I have the same sequence unsupervised representation learning has been highly successful in the domain of something. And then I say okay I give you the sequence but I am going to delete this and this. And now I ask you to predict these two. So you can see the task is slightly different as you now have access to all of the sequence basically except the ones that you're asked to predict but you're you kind of asked to predict them not in any order but you're asked to predict them at the same time basically. So at the same time you're asked to predict this word and this word. So the first kind of these autoregressive language modeling has been used by transformer models until BERT and then basically BERT really pushed this auto encoding language model pre training, which made it so successful. And now this paper XLNET wants to like combine the best of both of them. And in order to understand what's the best of both of them. 
So what's good about BERT we've already seen: it can actually draw information from all of the context of the words it's trying to predict. But what is the kind of pitfall of BERT? They actually put this really nicely in an example they gave way further down where they say comparison to BERT. I don't know why that is not also in the introduction, but here they have the sentence New York is a city. Right. New York is a city. This one. And you're asked to predict these two words. And if you now compare BERT to what XLNet does. So the context is "is a city" and you're asked to predict New York. What BERT does is it simply masks out the two words and says here, please fill in these two words. Now this translates to the objective being separated in the two words, such that the prediction of York here is completely independent of the prediction of New. So if you know of any other city that is made of two words, for example San Francisco or Los Angeles, then these would be as valid, and any mixture would be as valid. So BERT might end up with Los York is a city, and that will be perfectly fine for BERT, because while it's predicting, Los is a perfectly fine prediction for the first word of a two-word city and York is a perfectly fine prediction for the last word of a two-word city. Right. So these are the kind of mistakes that BERT can get into by not being autoregressive, by basically predicting all of these tokens at the same time independently of each other. Whereas XLNet, what it would do is it would specify an order. Let's say OK, first I will predict the word New for the first word: New something is a city. And then when I predict York I will actually take into account that I previously have predicted the word New. So that's the main advantage that autoregressive training has over auto encoding. Now what are the pitfalls? The pitfalls are if you have this sentence, if you look at it, I'll write it down: New York is a city. If you have the sentence and let's say actually you're not asked to predict New York, you're asked to predict the word a. You're asked to predict that in autoregressive style, or a city, it's a better example, the two words a city in autoregressive style. If you predict the word a you can only ever look at what comes beforehand. Whereas if BERT were to predict a, just the word a, it would be able to look at all of it, let's not predict city. So you see the autoregressive model is bound to the order of the factorization of the sentence. So it's bound to the order in which it has to predict the tokens. So here if it's predicting a it can only look at stuff that comes before it, because it needs to do it in order. Right. Once it gets to city it can actually look at the entire sentence here. But before that it only ever has partial information about the context. So actually it wouldn't be much better if I had said we're trying to predict these two words is and a, right. And once I predict, so BERT would actually have access to the word city here, whereas the autoregressive models only have access to the ones before it. I hope that makes it clear. So the main idea in XLNet is: where does this order dependence come from in the autoregressive model? The order dependence actually comes from the factorization of the sentence, of the language model. So in a language model we're actually trying to assess the probability distribution of sentences here. X is a sentence. Right.
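To pin the New York example down in symbols: writing x̂ for the unmasked context "is a city", the two objectives treat the joint probability of the two missing words differently. This is a restatement of the argument above in my own notation, not a formula quoted from the paper:

```latex
\text{BERT:}\quad \log p(\text{New},\text{York}\mid\hat{x}) \;\approx\; \log p(\text{New}\mid\hat{x}) + \log p(\text{York}\mid\hat{x}) \\
\text{autoregressive:}\quad \log p(\text{New},\text{York}\mid\hat{x}) \;=\; \log p(\text{New}\mid\hat{x}) + \log p(\text{York}\mid\text{New},\hat{x})
```

The approximation in the first line is where the "Los York" failure mode lives: any mix of plausible first and second words scores well under it, while the chain rule in the second line forces the second prediction to stay consistent with the first.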
And this can be naturally factorized into a product over the words, where the probability of each word is only dependent on the words before it. This is an equality, not an approximation. The probability of a sequence can be decomposed into a product of probabilities like this, exactly. So this here is exactly what these autoregressive models implement. Each word is predicted from the words before it. Right. There are other kinds of autoregressive models that also do the other direction, where they say OK, the probability of a sentence is a product and each word is predicted from the words after it. But it kind of is the same problem. You only ever have access in the one direction. Basically, however you define the order of decoding, you only ever have access from a given word to what was before it in the order. So the main idea of XLNet is they say hey, why don't we consider all possible orderings? Right. I mean, that's kind of an idea. So let's go back to our thing here. They say why don't we consider all possible orderings. So basically what we will do is, if this sample comes up, New York is a city, all right, what I can do is I can define an ordering. Let's say I always want to predict two words. So BERT typically masks out about 15 percent of its input to be predicted, and here let's say we'll mask out 20 percent, which is two words. So of this sequence we'll mask two words and ask the model to predict them. That will be our pre-training objective. The first time the sample comes up from the data set I might specify the order just classically. Right. Just one two three four five. All right. I'll predict the last two words. I'll kind of mask them out, right. I give the model New York is, and then I let it predict a. And then in the next step I'll give it New York is a, and let it predict city. Cool. So now the pitfall is the word a here only has access to things before it and not to city itself. City has access to everything. All right. But then I continue training, and the next time this sample comes up, right, it's in my data set, New York is a city, the next time it comes up I simply go for a different order. Let's say one two three four five. Right. So now again I'm asked to predict the last two tokens, which here are city and York. So in the first step I would give it is, a and new, and I will ask it what's here, and I'll ask it to predict city. And then in the second step I'll also give it that and I'll ask it OK, now what's here given all of that. Right. So new, is a city, right, you're asked to predict the missing word. So in the first step it's new, is, a, and you're asked to predict that, the second, and then the second step is new is a city and you're asked to predict the first. So now as you can see, while predicting city here, all of a sudden, in this ordering, we no longer have access to the word York. So we'll have to learn to predict city from the rest of the context. Now even more: if we now decide on a different ordering again, one two three four five, so now actually the first step is to ask: New York, city, please predict this thing here. Right. Yeah, you might train the model to predict is, and then in the second step you say New York is, city, please predict this. Now you see, before, when we were asked to predict the word a, it only had access to things to the left of it, in the very first example. But now it actually has access to the entire context.
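For reference, the three training objectives being contrasted can be written side by side. The notation roughly follows the paper (m_t is 1 where a token is masked and 0 otherwise, x̂ is the corrupted input, Z_T the set of all orderings of length T), but the typesetting here is mine:

```latex
\text{AR:}\qquad \max_{\theta}\; \log p_{\theta}(x) = \sum_{t=1}^{T} \log p_{\theta}(x_t \mid x_{<t}) \\
\text{BERT:}\qquad \max_{\theta}\; \sum_{t=1}^{T} m_t \,\log p_{\theta}(x_t \mid \hat{x}) \\
\text{XLNet:}\qquad \max_{\theta}\; \mathbb{E}_{z \sim \mathcal{Z}_T}\Big[\sum_{t=1}^{T} \log p_{\theta}(x_{z_t} \mid x_{z_{<t}})\Big]
```

The first is an equality (the chain rule), the second is an approximation (masked tokens are predicted independently of each other), and the third recovers the chain rule for every sampled order z, so in expectation each token is conditioned on every possible subset of its context.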
So the idea is, as we sample this data point multiple times and each time we decide on a different ordering to decode, for each prediction of each token we will actually have seen many different variants of the context. And in expectation we will actually have seen all of the context, just like BERT, but we will always have done it in an autoregressive way. So basically you get all the advantages of being autoregressive, namely that you are able to decode step by step while always referring to everything in front of you in the ordering, so the predictions are not independent, but you also get the benefit of BERT that it's able to basically look at all of the rest of the context, in expectation, in order to make this prediction. So this is the main idea of XLNet. They formalize this, jump up again, they formalize it in saying OK, what the autoregressive models do here is they factorize the log probability of a sentence into this sum. So the product in the log becomes a sum, the sum of log probabilities of the words conditioned on everything in front of them. What BERT does is it actually approximately factorizes the log probability into each word given everything that's not masked in the context. And this is only an approximate factorization because now you're basically dropping away all these masked tokens. And what they do now is they do the same as the AR, the autoregressive models here. They decompose the log probability into a sum of log probabilities over each of the words given all the words before it, but not before it in the sequence, before it in a chosen permutation Z. And Z is sampled uniformly from the set of all possible permutations. So in expectation they'll see all of the context. So this is the main thing, and they show this here in a kind of a picture. So here is the neural network. This is the input layer. And these are the hidden layers as the attention layers go up, and up here you're asked to predict the token. So here you're always asked to predict X3. So there's never going to be any weight here, since if you knew X3 you would trivially be able to predict X3. All right, so in the first example the factorization order chosen at random is 3 2 4 1. Now you're asked to predict X3 and we know OK, we should only do this with things that are before it in the permutation order. Well here, since X3 is the first in the permutation order, we actually don't have anything to go on. We're basically asked to predict X3 from scratch, as if it were the start of the sentence. So we'll basically tell the model I have a sentence that goes hmm hmm hmm hmm, please predict the third. All right, it's a hard task. Yeah, by the way, you're always able to look at this memory thing here. Don't worry about this for now. This is just an augmentation they do on top of their idea. This is not the core idea. So OK, but now the second time this sample comes up from the training set we decide on a different order. So the order here is 2 4 3 1. Now again we're asked to predict X3 and we're allowed to look at everything before it. So 2 and 4, as you see here, there are weights from X2 and X4 into this column that finally is then asked to predict X3. So this is now an easier task, right. You're allowed to look at the word to the left and to the right.
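As a toy illustration of the sampling procedure just described (this is not the authors' code; the choice of two targets per order and the example tokens are mine), one training pass over the sentence might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = ["New", "York", "is", "a", "city"]

# Sample a factorization order; the last two positions in that order are the
# prediction targets, each conditioned only on tokens earlier in the order.
order = rng.permutation(len(tokens))
for step in range(len(tokens) - 2, len(tokens)):
    target = order[step]
    visible = [tokens[j] for j in order[:step]]
    print(f"predict {tokens[target]!r} given {visible}")
```

Resampling `order` every time the sentence comes up is what makes each token see, across many passes, many different subsets of its context.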
If you have the following permutation order, 1 4 2 3, you're actually allowed to look at all of the other words in order to produce X3, because X3 is at the end of the permutation order. And the fourth case is similar. So all of these four situations will appear during training and you will learn from them. So in expectation you basically have seen all the different versions of the context, which helps a lot, apparently. Right, so in order to achieve this they had to make some architectural changes to the model. Namely, what you want to do is, in a single pass through the model, you not only want to predict one token, you want to do many predictions. This helps training a lot. BERT naturally always does this, like 15% of the tokens or so, which is what, like 40, 50 tokens: it masks them and predicts them all at the same time. Now you would like to do this here as well, you would like to predict all of the ones that you're asked to predict at the same time. But of course the problem here is: if in this factorization order 2 4 3 1 you're asked to predict X3, you're allowed to look at X2 and X4; if you're asked to predict X1, you're allowed to look at X2, X4 and X3. So if you only have a single pass through the model, the question is, do you now input X3 or do you not? Because the prediction of X3 is not allowed to look at X3, while the prediction of X1 is allowed to look at X3. So they do an architectural change in order to achieve both things, so that you can have a single pass through the model, but the prediction of each token only depends on the things in front of it in the permutation order. And they do this by having this kind of two-stream, this masked two-stream attention, where they basically have not only one hidden representation like in classic transformers, but at each position two hidden representations, one they call H and one they call G. The H's are initialized with the embeddings of the tokens, and the G's are just initialized randomly, and then they get transformed. The point is, the H of the next layer is always able to look at everything in front of it, including its own position one layer down, its own H one layer down. While the G is only allowed to look at the H's, and only the H's from before. Right, so all the G's here are only ever able to look at the H's from before the current position, whereas the H is always allowed to look at the same, but also at the H at the current position. And now at the last layer you simply ask the model to predict the token from just the G. And you can easily see that this results in the model only attending to things before it. The G, by the way, can also look at its own G from the layer below, so that's also a thing, but it cannot look at the H. So there's never any information flowing from the current word embedding of the token you're trying to predict to the prediction layer. So basically that means you're not telling the model the answer, yet you're still able to predict multiple things in a single pass through the model. Formally this is described here in the attention layer. So they divide how they produce the queries and how they produce the keys and values. Usually the queries and the keys and values are produced from the same hidden representation, but here they produce the keys and values from the H's in both cases.
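To illustrate which positions may attend to which, here is a minimal NumPy sketch (my own toy code, not the authors' implementation) that builds the attention masks for the content stream H and the query stream G from one sampled factorization order:

```python
import numpy as np

def two_stream_masks(perm):
    """Attention masks for XLNet-style two-stream attention.

    perm[k] is the (0-indexed) position that comes k-th in the sampled
    factorization order.  mask[i, j] == True means the stream at
    position i may attend to the content (H) at position j.
    """
    T = len(perm)
    rank = np.empty(T, dtype=int)
    rank[perm] = np.arange(T)          # rank[i] = place of position i in the order

    # Content stream H: everything at or before itself in the order,
    # including its own content.
    content_mask = rank[None, :] <= rank[:, None]

    # Query stream G: only strictly before itself in the order, so the
    # token being predicted never sees its own embedding.
    query_mask = rank[None, :] < rank[:, None]
    return content_mask, query_mask

# Factorization order 3 2 4 1 from the figure (0-indexed positions 2 1 3 0):
content_mask, query_mask = two_stream_masks(np.array([2, 1, 3, 0]))
# query_mask[2] is all False: X3 is predicted first, with nothing to go on.
```

This is only the masking logic; the embeddings, layers and relative positional encodings are of course not shown.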
But to update the G's they produce the queries from the last layer's G's, and to produce the H's they produce the queries from the last layer's H's. And most importantly, for the keys and values, the H's they look at: to update the G you're only allowed to look at H's before you in the permutation order, but to update the H you're allowed to look at everything before, including the position you're currently at. So that's kind of an engineering solution to the problem introduced by their augmentation. I think it's a pretty neat solution, pretty cool. So the rest of the paper here is incorporating ideas from Transformer-XL. Transformer-XL is one of these classic transformers that is like this AR, this autoregressive style of transformer, but it has a few improvements over the classic vanilla transformer, and they incorporate a number of things here. Namely, first of all, they incorporate this memory thing. The memory thing allows you to input longer sequences. Let's say our transformer input length is a maximum of five tokens. What Transformer-XL allows you to do is you input five tokens, you do your transformer thing, you encode it, and you save something into this memory. And then when you input the next five tokens, your transformer is then allowed to look at the memory of the last sequence, and also update it. So that's these mem blocks you saw here. You're always allowed to look at these mem blocks from the last sequence, and the hidden representations of this sequence will actually be stored in the mem block for the next sequence. This is kind of a trick to carry over information. The updating-the-memory part isn't learned with the objective to make the next prediction better, it's just some kind of gradient-free information to provide to the next step. And it apparently helps: you can incorporate longer sequences into this Transformer-XL. So they take this over and implement it in XLNet. They also do relative positional encodings and relative segment encodings. I won't go into this too much more here because it's not the main idea, basically. So they do experiments and they compare to the BERT architecture with basically the same architecture, the same number of parameters and layers. And they beat BERT on all of these kinds of NLP tasks, or most of them; I think they said on 20, and they reach new state of the art on 18 NLP tasks. So apparently their method works very well. The last thing I find important is an ablation study of the effects of their improvements. Because kind of my problem is, I never know: they have this new idea, OK, we do these random permutations, but then they also say, oh, and also we include the memory from Transformer-XL, and we do relative positional encodings and so on. So for me, with these kinds of papers, of course you reach better numbers, you get a new state of the art, so it's kind of a landmark paper. But to me, a paper should more be like a single thing. So whatever your idea is, and here your idea is these orderings, do whatever you need to do to make that work, OK, fine. But then why the additional Transformer-XL things? It's then really hard to estimate how much of the improvement comes from your idea and how much of the improvement simply comes from the fact that you already put in these other things that actually have nothing to do with it.
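For the memory part, a rough sketch of the recurrence (my own PyTorch simplification; the real Transformer-XL additionally uses relative positional encodings, multiple heads and a causal mask, all omitted here). The hidden states of the previous segment are cached without gradient and prepended to the keys and values of the current segment:

```python
import torch

def attend_with_memory(h, mem, Wq, Wk, Wv):
    """One self-attention layer with Transformer-XL-style segment memory.

    h   : (cur_len, d) hidden states of the current segment
    mem : (mem_len, d) cached hidden states of the previous segment
    """
    # The memory is treated as fixed context: no gradients flow into it.
    context = torch.cat([mem.detach(), h], dim=0)    # (mem_len + cur_len, d)

    q = h @ Wq                  # queries only for the current segment
    k = context @ Wk            # keys and values also cover the memory
    v = context @ Wv

    attn = torch.softmax(q @ k.T / k.shape[-1] ** 0.5, dim=-1)
    out = attn @ v

    # The current hidden states become the memory for the next segment.
    new_mem = h.detach()
    return out, new_mem
```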
So I appreciate this kind of analysis, called ablation studies, where they try to take away the memory and these things and look at what it does to the model. And you see here it kind of degrades: for example, this column degrades as you take stuff away, while still being more successful than BERT. So that, I would say, holds. Here it's more unclear, but it also kind of seems to degrade a bit while being more successful than BERT. So I appreciate this really trying to show that your gains actually come from your new idea and not from some other stuff. All right. So the last thing I want to mention actually is this: someone claiming, or calculating, that it costs two hundred and forty-five thousand dollars to train the XLNet model the way they describe it in the paper. I'm sure it's going to be brought down, because the time to train was brought down with BERT as well. But this is just, I mean, this is crazy, and this is just training it once. It kind of raises large questions about the state of research and the ability of, let's say, more academic players to participate in research. On the one hand, of course, these companies should be able to do this. And on the other hand, it seems like currently, in some fields, just putting more money on the table will get you a better result. Not here; this paper is actually a cool idea, but it's still kind of prohibitively expensive to even reproduce it. Yeah. Right. So that was it for this paper. I hope you enjoyed this and see you. | [
{
"start": 0,
"end": 14,
"text": " Hi there, today we're looking at XLNet, Generalized Autoregressive Pre-Training for Language Understanding, by Jilin Yang and other people from Carnegie Mellon University as well as Google Brain."
},
{
"start": 14,
"end": 30,
"text": " So this is kind of the elephant in the room currently as XLNet is the first model to beat BERT, which was the previous state of the art on a lot of NLP tasks, to beat BERT at a lot of these same NLP tasks."
},
{
"start": 30,
"end": 49,
"text": " So the chief state of the art result on 18 of 20 tasks I believe, maybe they test more, they outperformed BERT on 20, the chief state of the art on 18, including things as question answering, natural language inference, sentiment analysis and so on."
},
{
"start": 49,
"end": 68,
"text": " So those are kind of remarkable results and even more remarkable is that the architecture of the network is actually very, fairly similar to BERT. The kind of new introduction is a pre-training, a different pre-training procedure and we'll look into that."
},
{
"start": 68,
"end": 84,
"text": " So let's actually jump into their main points straight away. What they go into is there are two kinds of currently used pre-training methods for these NLP tasks and both can be understood as kind of language modeling."
},
{
"start": 84,
"end": 102,
"text": " So language modeling for those of you who don't know is predict the next word in a sequence. So if I give you the sequence here, unsupervised representation learning has been and then I ask you what's next and then you're supposed to say highly."
},
{
"start": 102,
"end": 115,
"text": " That's language modeling in a nutshell. So what they differentiate are two kinds of language modeling. The first one, they say is autoregressive language modeling."
},
{
"start": 115,
"end": 124,
"text": " Now what autoregressive language modeling does is exactly what we've looked at. I give you unsupervised learning has been, you're supposed to predict highly."
},
{
"start": 124,
"end": 134,
"text": " And then in the next step I give you unsupervised representation learning has been highly and you're supposed to predict successful and so on."
},
{
"start": 134,
"end": 140,
"text": " So in the next step I'm going to give you the entire sentence up until here and you're supposed to predict in."
},
{
"start": 140,
"end": 164,
"text": " Autoregressive because each token can look at the kind of previous ones in the in the sequence. So when you, sorry you can't see that, when you predict, when you predict you can always kind of autoregressively look at what the previous ones were, including what you've previously predicted."
},
{
"start": 164,
"end": 178,
"text": " Of course during training this is teacher forced as I said so you put the actual words there. This is autoregressive modeling in contrast to what they call auto encoding."
},
{
"start": 178,
"end": 197,
"text": " And auto encoding is what BERT does and this is the following. So in contrast to that let's say I have the same sequence unsupervised representation learning has been highly successful in the domain of something."
},
{
"start": 197,
"end": 211,
"text": " And then I say okay I give you the sequence but I am going to delete this and this. And now I ask you to predict these two."
},
{
"start": 211,
"end": 229,
"text": " So you can see the task is slightly different as you now have access to all of the sequence basically except the ones that you're asked to predict but you're you kind of asked to predict them not in any order but you're asked to predict them at the same time basically."
},
{
"start": 229,
"end": 234,
"text": " So at the same time you're asked to predict this word and this word."
},
{
"start": 234,
"end": 256,
"text": " So the first kind of these autoregressive language modeling has been used by transformer models until BERT and then basically BERT really pushed this auto encoding language model pre training, which made it so successful."
},
{
"start": 256,
"end": 264,
"text": " And now this paper XLNET wants to like combine the best of both of them."
},
{
"start": 264,
"end": 278,
"text": " And in order to understand what's the best of both of them. So what's good at BERT we've already seen it can actually draw information from all of the context of the words it's trying to predict."
},
{
"start": 278,
"end": 290,
"text": " But what is the kind of pitfall of BERT and they they actually put this really nicely in an example they gave way further down where they say comparison to BERT."
},
{
"start": 290,
"end": 299,
"text": " I don't know why that is not like also in the introduction but here they have the sentence New York is a city."
},
{
"start": 299,
"end": 311,
"text": " Right. New York is a city. This one. And you're asked to predict these two words. And if you now compare BERT to what XLNET does."
},
{
"start": 311,
"end": 323,
"text": " If. So the context is a city and you're asked to predict New York. What BERT does is it simply masks out the two words and says here please fill in these two words."
},
{
"start": 323,
"end": 336,
"text": " Now this translates to the kind of objective being separated in the two words such that the prediction of York here is completely independent of the prediction of new."
},
{
"start": 336,
"end": 350,
"text": " So if you know of any other city that is made of two words for example San Francisco or Los Angeles then these would be as valid and any mixture would be as valid."
},
{
"start": 350,
"end": 369,
"text": " So you might BERT might end up with laws. York is a city and that will be perfectly fine for BERT because while it's predicting laws is a perfectly fine prediction for the first word of a two word city and York is a perfectly fine prediction for the last word of a two word city."
},
{
"start": 369,
"end": 381,
"text": " Right. So these are the kind of mistakes that BERT can get into by not being autoregressive by basically predicting all of these tokens at the same time independently of each other."
},
{
"start": 381,
"end": 392,
"text": " Whereas XLNET what it would do is it would specify an order. Let's say OK first I will predict the word new for the first word new something is a city."
},
{
"start": 392,
"end": 399,
"text": " And then when I predict York I will actually take into account that I previously have predicted the word new."
},
{
"start": 399,
"end": 408,
"text": " So that's the main advantage that autoregressive training has over auto encoding."
},
{
"start": 408,
"end": 414,
"text": " Now what are the pitfalls. The pitfalls are if you have this sentence."
},
{
"start": 414,
"end": 424,
"text": " If you look at it I'll write it down. New York is a city."
},
{
"start": 424,
"end": 436,
"text": " If you have the sentence and let's say actually you're not you're not asked to predict New York you're asked to predict the word A."
},
{
"start": 436,
"end": 444,
"text": " You're asked to predict that in autoregressive style or a city. It's a better example."
},
{
"start": 444,
"end": 452,
"text": " The two words a city in autoregressive style if you predict the word A you can only ever look at what comes beforehand."
},
{
"start": 452,
"end": 459,
"text": " Whereas if BERT were to predict A just the word A it would be able to look at all of it."
},
{
"start": 459,
"end": 472,
"text": " Let's not predict city. So you see the kind of autoregressive model is bound to the order of the factorization of the sentence."
},
{
"start": 472,
"end": 476,
"text": " So it's bound to the order in which it has to predict the tokens."
},
{
"start": 476,
"end": 482,
"text": " So here if it's predicting A you can only look at stuff that comes before it because it needs to do it in order."
},
{
"start": 482,
"end": 486,
"text": " Right. Once it gets to city you can actually look at the entire sentence here."
},
{
"start": 486,
"end": 494,
"text": " But before that it only ever has partial information about the about the context."
},
{
"start": 494,
"end": 504,
"text": " So actually it wouldn't be much better if I had said we're trying to predict these two words is and a right."
},
{
"start": 504,
"end": 510,
"text": " And once I predict so BERT would actually have access to the word city here."
},
{
"start": 510,
"end": 518,
"text": " Whereas the autoregressive models only have access to the ones before it. I hope that makes it clear."
},
{
"start": 518,
"end": 527,
"text": " So the main idea in Excel net is where does this order dependence come from in the autoregressive model."
},
{
"start": 527,
"end": 535,
"text": " The order dependence actually comes from the factorization of the sentence of the of the language model."
},
{
"start": 535,
"end": 544,
"text": " So in a language model we're actually trying to assess the probability distribution of sentences here."
},
{
"start": 544,
"end": 560,
"text": " X is a sentence. Right. And this can be naturally factorized into a product over the words where the probability of each word is only dependent on the words before it."
},
{
"start": 560,
"end": 571,
"text": " This is a this is an equal is not an approximation. This is an equality. The probability of a sequence can be decomposed into a product of probabilities like this."
},
{
"start": 571,
"end": 577,
"text": " Exactly. So this here is exactly what these autoregressive models implement."
},
{
"start": 577,
"end": 584,
"text": " Each word is predicted from the words before it. Right."
},
{
"start": 584,
"end": 595,
"text": " There are other kinds of autoregressive models that also do the other direction where here they say OK the probability of a sentence is a product and each word is predicted from the words after it."
},
{
"start": 595,
"end": 601,
"text": " But it kind of is the same problem. You only ever have access into the one direction."
},
{
"start": 601,
"end": 612,
"text": " Basically however you define the order of decoding you only ever have access from a given word to what was before it in the order."
},
{
"start": 612,
"end": 623,
"text": " So the main idea of Excel that is they say hey why don't we consider all possible orderings."
},
{
"start": 623,
"end": 627,
"text": " Right. I mean that that's kind of a."
},
{
"start": 627,
"end": 632,
"text": " That's it's an idea. So let's go back to our thing here."
},
{
"start": 632,
"end": 642,
"text": " They say why don't we consider all possible orderings. So basically what we will do is if this sample comes up New York is a city. All right."
},
{
"start": 642,
"end": 649,
"text": " What I can do is I can define an ordering. Let's say I always want to predict two words."
},
{
"start": 649,
"end": 656,
"text": " So typically masks out about 15 percent of its input to be predicted."
},
{
"start": 656,
"end": 664,
"text": " And here let's say we'll mask out 20 percent which is two words. So of this sequence will mask two words and ask the model to predict it."
},
{
"start": 664,
"end": 672,
"text": " That's that will be our pre training objective. The first time the sample comes up from the data set I might specify the order just classically."
},
{
"start": 672,
"end": 679,
"text": " Right. Just one two three four five. All right. I'll predict the last two words."
},
{
"start": 679,
"end": 688,
"text": " I'll kind of mask them out right. I give the model New York is and then I let it predict a."
},
{
"start": 688,
"end": 695,
"text": " And then in the next step I'll give it New York is a and let it predict city. Cool."
},
{
"start": 695,
"end": 703,
"text": " So now the pitfall is the word a here only has access to things before it and not to city itself."
},
{
"start": 703,
"end": 711,
"text": " City has access to everything. All right. So but then I continue training and the next set time this sample right."
},
{
"start": 711,
"end": 718,
"text": " It's in my data set. New York is a city. The next time it comes up I simply go for a different order."
},
{
"start": 718,
"end": 732,
"text": " Let's say one two three four five. Right. So now again I'm asked I'm asking to predict the last two tokens which here are."
},
{
"start": 732,
"end": 743,
"text": " City and York. So in the first step I would give it is a and new and I will ask it what's here."
},
{
"start": 743,
"end": 749,
"text": " And I'll ask it to predict city. And then in the second step I'll also give it that and I'll ask it OK."
},
{
"start": 749,
"end": 754,
"text": " Now what's here given all of that. Right. So new is a city. Right."
},
{
"start": 754,
"end": 763,
"text": " You're asked to predict the missing word. So that that's pretty. So in the first step it's new is a."
},
{
"start": 763,
"end": 774,
"text": " And you're asked to predict that the second and then the second step is new is the city and you're asked to predict the first."
},
{
"start": 774,
"end": 783,
"text": " So now as you can see while predicting city here all of a sudden we didn't no longer in this ordering we don't have access to the word."
},
{
"start": 783,
"end": 788,
"text": " York. So we'll have to learn to predict city from the rest of the context."
},
{
"start": 788,
"end": 796,
"text": " Now even more even more if we now decide let's decide on a different ordering again."
},
{
"start": 796,
"end": 808,
"text": " One two three four five. So now we'll actually first step is to ask."
},
{
"start": 808,
"end": 814,
"text": " New York city please predict this thing here."
},
{
"start": 814,
"end": 824,
"text": " Right. Yeah you might train the model to predict is and then the second step you say New York is city."
},
{
"start": 824,
"end": 833,
"text": " Please predict this. Now you see before before when we were asked to predict the word a it only had access to things to the left of it."
},
{
"start": 833,
"end": 840,
"text": " Then the very first example. But now it actually has access to the entire context."
},
{
"start": 840,
"end": 860,
"text": " So the the idea is as we sample this data point multiple times and each time we decide on a different ordering to decode for each prediction of each token token sorry will actually have seen many many parts many different variants of the context."
},
{
"start": 860,
"end": 870,
"text": " And in expectation will actually have seen all of the context just like Bert but will always having have done it in an order regressive way."
},
{
"start": 870,
"end": 884,
"text": " So basically you get all the advantages of being order regressive namely that you are able to decode step by step while always referring to everything in front of you in the ordering."
},
{
"start": 884,
"end": 898,
"text": " So the predictions are not independent but you also get the benefit of Bert that it's able to basically look at all of the rest of the context in expectation in order to make this prediction."
},
{
"start": 898,
"end": 903,
"text": " So this is this is the main idea of of Excel net."
},
{
"start": 903,
"end": 917,
"text": " They formalize this jump up again they formalize it in saying OK what Bert does here is it actually see it factorized log probability of a sentence into this sum."
},
{
"start": 917,
"end": 932,
"text": " So the product in the log becomes a sum into the sum of log probabilities of no sorry this is order aggressive confused into the the words conditioned on everything in front of you."
},
{
"start": 932,
"end": 934,
"text": " Everything in front of them."
},
{
"start": 934,
"end": 950,
"text": " What Bert does is it actually approximately factorizes the log probability into each word and then everything in the context and everything that's not masked in the context."
},
{
"start": 950,
"end": 958,
"text": " And this is only an approximate factorization because now you basically dropping away all these masked tokens."
},
{
"start": 958,
"end": 969,
"text": " And what they do now is they do the same as the AR as the order aggressive models here."
},
{
"start": 969,
"end": 986,
"text": " They decompose the log probability into a sum of log probabilities over each of the words given all the words before it but not before it in the sequence but before it in an chosen permutation Z."
},
{
"start": 986,
"end": 992,
"text": " And Z is sampled uniformly from the set of all possible permutations."
},
{
"start": 992,
"end": 995,
"text": " So in expectation they'll see all of the context."
},
{
"start": 995,
"end": 1004,
"text": " So this is the this is the main thing they show this here in a kind of a picture with."
},
{
"start": 1004,
"end": 1006,
"text": " So here is the neural network."
},
{
"start": 1006,
"end": 1008,
"text": " This is the input layer."
},
{
"start": 1008,
"end": 1017,
"text": " And these are the hidden layers as the attention layers go up and up here you're asked to predict the token."
},
{
"start": 1017,
"end": 1020,
"text": " So here you're always asked to predict X3."
},
{
"start": 1020,
"end": 1030,
"text": " So there is no there's never going to be any weight here since if you knew X3 you would be able trivially to predict X3."
},
{
"start": 1030,
"end": 1040,
"text": " All right so in the in the first example the factorization order chosen at random is 3 2 4 1."
},
{
"start": 1040,
"end": 1049,
"text": " Now you're asked to predict X3 and we know OK we should only we should only do this with things that are before it in the permutation order."
},
{
"start": 1049,
"end": 1058,
"text": " Well here since X3 is the first in the permutation order we actually don't we don't have anything to go on."
},
{
"start": 1058,
"end": 1063,
"text": " We basically ask to predict X3 from scratch as if it were the start of the sentence."
},
{
"start": 1063,
"end": 1072,
"text": " So we'll basically tell the model I have a sentence that goes hmm hmm hmm hmm please predict the third."
},
{
"start": 1072,
"end": 1075,
"text": " All right it's a hard task."
},
{
"start": 1075,
"end": 1078,
"text": " Yeah by the way you're always able to look at this memory thing here."
},
{
"start": 1078,
"end": 1081,
"text": " Don't worry about this for now."
},
{
"start": 1081,
"end": 1086,
"text": " This is just this is an augmentation they do on top of their idea."
},
{
"start": 1086,
"end": 1088,
"text": " This is not the core idea."
},
{
"start": 1088,
"end": 1093,
"text": " So OK but now the second time this sample comes up from the training set we decide on a different order."
},
{
"start": 1093,
"end": 1097,
"text": " So the order here is 2 4 3 1."
},
{
"start": 1097,
"end": 1102,
"text": " Now again we're asked to predict X3 and we're allowed to look at everything before it."
},
{
"start": 1102,
"end": 1114,
"text": " So 2 and 4 as you see here there are weights from X2 and X4 into this column that finally is then a ask to predict X3."
},
{
"start": 1114,
"end": 1117,
"text": " So this is also this is now an easier task right."
},
{
"start": 1117,
"end": 1123,
"text": " You're allowed to look at the word to the left and to the right."
},
{
"start": 1123,
"end": 1131,
"text": " If you have the following permutation order 1 4 2 3 you're actually allowed to look at all of the other words"
},
{
"start": 1131,
"end": 1136,
"text": " because X3 is at the end of the permutation order in order to produce X3."
},
{
"start": 1136,
"end": 1140,
"text": " So all of these four and the fourth thing is a similar."
},
{
"start": 1140,
"end": 1145,
"text": " So all of these four things will appear during training and you will learn from them."
},
{
"start": 1145,
"end": 1157,
"text": " So in expectations you basically have seen all variants of different of different versions of the context which which helps a lot apparently."
},
{
"start": 1157,
"end": 1169,
"text": " Right so in the in order to achieve this they had to make some architectural changes to the to the model."
},
{
"start": 1169,
"end": 1178,
"text": " Namely what you want to do is in a single pass through the model here you not only want to predict one token but you want to do many predictions."
},
{
"start": 1178,
"end": 1188,
"text": " This helps training a lot so BERT naturally always does like 15% of the tokens or so what was that like 40 50 tokens."
},
{
"start": 1188,
"end": 1192,
"text": " So it masks them and it predicts them all at the same time."
},
{
"start": 1192,
"end": 1197,
"text": " Now you would like to do this here as well you would like to predict all at the same time."
},
{
"start": 1197,
"end": 1199,
"text": " The ones that you're asked to predict."
},
{
"start": 1199,
"end": 1213,
"text": " But of course the problem is for here if you're asked if in this factorization order 2 4 3 1 if you're asked to predict X3 you're allowed to look at X2 and X4."
},
{
"start": 1213,
"end": 1218,
"text": " If you're asked to predict X1 you're allowed to look at X2 X4 and X3."
},
{
"start": 1218,
"end": 1231,
"text": " So if you only have a single pass through the model the question is do you now input X3 or do you not because the prediction of X3 is not allowed to look at X3."
},
{
"start": 1231,
"end": 1244,
"text": " While the prediction of X1 is allowed to look at X3 so they do an architectural change in order to achieve both things so that you can have a single pass through the through the model."
},
{
"start": 1244,
"end": 1252,
"text": " But the prediction of each token only depends on the things in front of it in the permutation order."
},
{
"start": 1252,
"end": 1269,
"text": " And they do this by having these kind of two stream these masked to stream attention where they basically have not only one hidden representation like in classic transformers but they have at each step two hidden representations."
},
{
"start": 1269,
"end": 1272,
"text": " One they call H and one they call G."
},
{
"start": 1272,
"end": 1283,
"text": " So the H's are initialized with the embeddings of the tokens and the G's are just initialized randomly and then they get transformed."
},
{
"start": 1283,
"end": 1296,
"text": " The point is the H of the next layer is always able to look at everything in front of it including its own its own H basically one layer down its own position one layer down."
},
{
"start": 1296,
"end": 1307,
"text": " While the G is only allowed to look at the H's but the H's from before."
},
{
"start": 1307,
"end": 1323,
"text": " Right so all the G's here are only ever able to look at the H's from before the current position whereas the H is always allowed here to look at the same but also at the H at the current position."
},
{
"start": 1323,
"end": 1331,
"text": " And now at the last layer you simply ask the model to predict the token from just the G."
},
{
"start": 1331,
"end": 1338,
"text": " And you can easily see that this results in these model only."
},
{
"start": 1338,
"end": 1345,
"text": " Yeah only attending to things before it."
},
{
"start": 1345,
"end": 1355,
"text": " The G by the way can also look at the G of the current layer so that's also the thing but it cannot look at the H."
},
{
"start": 1355,
"end": 1368,
"text": " So there's never any information flowing from the current word embedding of the token you're trying to predict to the prediction layer."
},
{
"start": 1368,
"end": 1379,
"text": " So basically that means the model can't just look like you're not telling the model the answer yet you're still able to feed to predict multiple things in a single pass through the model."
},
{
"start": 1379,
"end": 1385,
"text": " Formally this is described here in the attention layer."
},
{
"start": 1385,
"end": 1403,
"text": " So they divide how they produce the queries and how they produce the keys and values usually the queries and the keys and values are produced from the same hidden representation but here they produce the keys and values from the H's in both cases."
},
{
"start": 1403,
"end": 1415,
"text": " But to update the G's they produce the queries from the last layer's G and to produce the H's they produce the queries from the last layer H's."
},
{
"start": 1415,
"end": 1427,
"text": " And most importantly when they produce the keys and values the H's they look at here to update the G you're only allowed to look at H's before you in the permutation order."
},
{
"start": 1427,
"end": 1434,
"text": " But to update the H you're allowed to look at everything before including the position you're currently at."
},
{
"start": 1434,
"end": 1442,
"text": " So that's kind of the it's an engineering solution to the problem introduced by their augmentation."
},
{
"start": 1442,
"end": 1446,
"text": " I think it's a pretty neat solution pretty cool."
},
{
"start": 1446,
"end": 1457,
"text": " So the rest of the paper here is incorporating ideas from transformer Excel."
},
{
"start": 1457,
"end": 1466,
"text": " So transformer Excel is one of these classic transformers that that is like this AR so this autoregressive style of transformer."
},
{
"start": 1466,
"end": 1478,
"text": " But that has a few improvements over the classic vanilla transformer and they incorporate a number of things here namely first of all they incorporate this memory thing."
},
{
"start": 1478,
"end": 1482,
"text": " So the memory thing allows you to input longer sequences."
},
{
"start": 1482,
"end": 1490,
"text": " Let's say our our transformer input length is maximum of five tokens."
},
{
"start": 1490,
"end": 1504,
"text": " What the transformer Excel allows you to do is you input five tokens and then you save you do your transformer thing you encode it and you save something into this memory."
},
{
"start": 1504,
"end": 1514,
"text": " And then when you input the next five tokens your transformer is then allowed to look at the memory of the last sequence."
},
{
"start": 1514,
"end": 1519,
"text": " Right and also update it so that that's kind of these these mem blocks you saw here."
},
{
"start": 1519,
"end": 1527,
"text": " So you're always allowed to look at these mem blocks from last sequence and then the hidden representations here of this sequence."
},
{
"start": 1527,
"end": 1531,
"text": " They will actually be stored in the mem block for the next sequence."
},
{
"start": 1531,
"end": 1537,
"text": " This is kind of a trick to to to carry over information."
},
{
"start": 1537,
"end": 1554,
"text": " It's not the the updating the memory part isn't learned with the objective to make the next prediction better but it's just some information kind of gradient free information to provide to the next step."
},
{
"start": 1554,
"end": 1559,
"text": " And it apparently helps you can incorporate longer sequences into this transformer Excel."
},
{
"start": 1559,
"end": 1563,
"text": " So they take this over and implement this into XL net."
},
{
"start": 1563,
"end": 1569,
"text": " They also do relative positioning codings relative segment and codings."
},
{
"start": 1569,
"end": 1577,
"text": " I won't go into this too much more here because it's not the main idea basically."
},
{
"start": 1577,
"end": 1588,
"text": " So they do experiments and they compare to BERT architecture with the same basically same architecture the same number of parameters and or layers."
},
{
"start": 1588,
"end": 1599,
"text": " And they beat BERT in all of these kind of NLP tasks or most of I think they said in 20."
},
{
"start": 1599,
"end": 1603,
"text": " They reach new state of the art in 18 NLP tasks."
},
{
"start": 1603,
"end": 1608,
"text": " So apparently their method works very well."
},
{
"start": 1608,
"end": 1618,
"text": " So what they do here is the last thing I find important is an ablation study of the effects of their improvements."
},
{
"start": 1618,
"end": 1624,
"text": " So they were because kind of my problem is I never know."
},
{
"start": 1624,
"end": 1627,
"text": " Like they have this new idea. OK, we do these random permutations."
},
{
"start": 1627,
"end": 1637,
"text": " But then they also say, oh, and also we include memory from XL net and we do relative positioning codings and so on."
},
{
"start": 1637,
"end": 1642,
"text": " So for me, these kind of papers, of course, you reach better numbers, you get a new state of the art."
},
{
"start": 1642,
"end": 1644,
"text": " So it's kind of a landmark paper."
},
{
"start": 1644,
"end": 1649,
"text": " But to me, a paper should more be like a single thing."
},
{
"start": 1649,
"end": 1655,
"text": " So whatever your idea is, this your idea is these orderings and whatever you need to do to make that work."
},
{
"start": 1655,
"end": 1663,
"text": " OK, fine. But then why why the additional transformer Excel things?"
},
{
"start": 1663,
"end": 1674,
"text": " It's really then hard to estimate how much of the improvement comes from your idea and how much of the improvement simply comes from the fact that you already put these other things actually have nothing to do with it."
},
{
"start": 1674,
"end": 1687,
"text": " So I appreciate these kind of analysis called ablation studies where they kind of try to take away the memory and these things and kind of look at what it's doing to the model."
},
{
"start": 1687,
"end": 1704,
"text": " And you you see here kind of degrades down here as, for example, this column degrades as you take stuff away while still being more kind of more successful than BERT."
},
{
"start": 1704,
"end": 1716,
"text": " So that that I would say also. Yeah, here is more unclear, but also kind of seems to degrade a bit while being more successful than BERT."
},
{
"start": 1716,
"end": 1727,
"text": " So I appreciate this this kind of really trying to show that your gains really come from your new idea and not from some other stuff."
},
{
"start": 1727,
"end": 1733,
"text": " All right. So the last thing I want to mention actually is this thing."
},
{
"start": 1733,
"end": 1746,
"text": " So someone claiming or calculating that it costs two hundred and forty five thousand dollars to train the Excel net model the way they describe it in the paper."
},
{
"start": 1746,
"end": 1753,
"text": " I'm sure it's going to be brought down because it was brought down that like the time to train was brought down with BERT as well."
},
{
"start": 1753,
"end": 1759,
"text": " But this is just I mean, this is crazy. This is just training it."
},
{
"start": 1759,
"end": 1771,
"text": " It kind of gives large questions about the state of research and the ability for kind of, let's say, more academic players to participate in research."
},
{
"start": 1771,
"end": 1777,
"text": " On the one hand, of course, we like, of course, these companies should be able to do this."
},
{
"start": 1777,
"end": 1788,
"text": " And on the other hand, if it seems like currently in some fields, just putting more money on the table will get you a better result."
},
{
"start": 1788,
"end": 1797,
"text": " Not this. This actually like this paper is actually a cool idea, but it's still kind of prohibitively expensive to even reproduce it."
},
{
"start": 1797,
"end": 1801,
"text": " Yeah, right. So that was that was that for this paper."
},
{
"start": 1801,
"end": 1819,
"text": " I hope you enjoyed this and see you."
}
] |
hkw-WDBipgo | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Talking to companies at ICML19 | [
"Science & Technology"
] | [
"machine learning",
"conference",
"ai",
"artificial intelligence",
"industry",
"academia",
"deep learning",
"hardware",
"lidar",
"graphcore"
] | A short rant on sponsor companies at ICML and how to talk to them. | All right, I quickly want to talk about the interaction with corporate company reps at these conferences, because to me it's still a bit of a secret, a bit unclear what to do. There are very different kinds of companies at these conferences. Some companies, I feel, are there to basically show off their technology, kind of wanting you to use it. One example is Graphcore, the kind of new kid on the block for AI hardware, in that they claim they have a chip specifically designed for the types of operations that machine learning applications do. So even more specialized than a GPU, and they also claim they are faster for equivalent money spent than an Nvidia GPU, like a classic GPU. So basically you get much more bang for the buck. For now they just offer a cloud solution, I believe, and they're going to sell their cards through Dell. The way it works is they have kind of a low-level compiler that will compile your model to these cards, and for now you can interact with it through C++, and then TensorFlow support will come later, something like this. The thing about their card is that they have an extremely large memory right next to the compute unit, which would be kind of your traditional level one cache. That means that you technically get much faster access to your local variables, but then they don't have any kind of RAM, which means their entire card only has something like 300 megabytes of memory. But they claim they can just distribute: if you have a large model you can distribute it over many cards, and then you get the speed-up of the cards without having to sacrifice model size. Another company that shows off really cool technology is a company that does LIDAR, and I forget the name right now, I'll try to look it up, but they do a LIDAR sensor that is super tiny, and it costs a fraction of a traditional LIDAR sensor. I think they said theirs costs about $12,000, and it's really tiny and has a couple of advantages compared to traditional sensors. As far as I understand, their lasers are mounted on the same chip, so they always point in the same direction, which reduces a lot of inaccuracies. I guess people would be interested in that for self-driving cars and so on. These are kind of the hardware demonstrations that I've seen. Then there are other things, like there is a wellness center where you can get a massage, which is sponsored by the big companies, which is pretty nice, but that's probably not for me, I don't like these kinds of things too much. Maybe I'm just socially too awkward. For some companies, I feel that they're just there to recruit, and they don't really want to talk about what they do too much. An indication of this would be a company where basically all of the reps at the booth are recruiters, so non-technical recruiters, that basically just tell you what you can do as a career and not really what the company does as a whole. I never really know what to talk about then, because I feel like most people are interested in and drawn towards interesting work, and if that comes with good working conditions, then that's a plus, but I don't feel that for many people that is the most important thing. But I could be wrong, and it's probably good that for some people it is, because otherwise everyone would take my jobs, the ones that I like.
These companies will usually, if there is an engineer, not talk too much about what they do, like, oh, it's a company secret and so on. So the funniest one was actually the NSA. Talking to the NSA was kind of painful, because you ask them, so what do you do? And they're like, yeah, machine learning. Because what I want to know as a researcher is, is there anything I could do there that I couldn't do anywhere else? So are there any unique problems that the NSA faces that actually demand new research, like demand new machine learning methods or some kind of change? So I ask this, and they're like, yes, there are problems like this. And you ask, like, which problems? And they're like, yeah, there are problems, we can't tell you. So everything's basically, whatever. So I made it a game to ask them more specific questions and watch them go, like, oh, this is classified. So yeah, if you're here, definitely check them out. It's fun, it's just fun to talk to them. Yeah, I feel most companies are really interesting. I don't know more than half of them. So just go up, ask them what they do, and kind of get an overview over the landscape of what's currently needed in machine learning research. I think that's really useful, because as an academic, I tend to be very disconnected from the industry side of things and from what people actually need or want in practice. So talking to all these companies is really helpful to get an overview over that. Yeah, but if you know a better way, I know some people are much more successful than me at talking to companies at conferences, I'm definitely not the best at this. And yeah, if you have a better strategy, let me know. But I'm pretty happy so far. All right. That was that. See ya. | [
{
"start": 0,
"end": 11.76,
"text": " All right, I quickly want to talk about kind of interaction with corporation company reps"
},
{
"start": 11.76,
"end": 18.04,
"text": " at these conferences, because to me it's still a bit of a secret or a bit of a not really"
},
{
"start": 18.04,
"end": 20.64,
"text": " clear of what to do."
},
{
"start": 20.64,
"end": 26.92,
"text": " There's very different kinds of companies at these conferences, so some companies I"
},
{
"start": 26.92,
"end": 35,
"text": " feel are there to basically show off their technology, kind of wanting to use it."
},
{
"start": 35,
"end": 44.88,
"text": " One example is for example Graphcore, the kind of new kid on the block for AI hardware"
},
{
"start": 44.88,
"end": 51.2,
"text": " in that they claim they have a chip specifically designed for the types of operations that"
},
{
"start": 51.2,
"end": 54.760000000000005,
"text": " machine learning applications do."
},
{
"start": 54.76,
"end": 64.16,
"text": " So even more specialized than a GPU, and also they claim they are faster for equivalent"
},
{
"start": 64.16,
"end": 70,
"text": " kind of money spending than an Nvidia GPU, like a classic GPU."
},
{
"start": 70,
"end": 74.44,
"text": " So basically you get much more bang for the buck."
},
{
"start": 74.44,
"end": 80.72,
"text": " For now they just offer a cloud solution, I believe, and they're going to sell their"
},
{
"start": 80.72,
"end": 84.2,
"text": " cards through Dell."
},
{
"start": 84.2,
"end": 90.2,
"text": " The way it works is they have kind of a low level compiler that will compile your model"
},
{
"start": 90.2,
"end": 98.2,
"text": " to these cards, and for now you can interact with it through C++, and then TensorFlow will"
},
{
"start": 98.2,
"end": 100.32000000000001,
"text": " come later, something like this."
},
{
"start": 100.32000000000001,
"end": 108.96000000000001,
"text": " The thing about their card is that they have an extremely large memory right next to the"
},
{
"start": 108.96,
"end": 120.11999999999999,
"text": " compute unit, this would be kind of your traditional level one cache."
},
{
"start": 120.11999999999999,
"end": 125.08,
"text": " That means that you get much faster access technically to your local variables, but then"
},
{
"start": 125.08,
"end": 132.51999999999998,
"text": " they don't have any kind of RAM, which means their entire card only has somewhat like 300"
},
{
"start": 132.51999999999998,
"end": 137.72,
"text": " megabytes of memory, but they claim they can just basically distribute, if you have a large"
},
{
"start": 137.72,
"end": 145.6,
"text": " model you can distribute that over many cards, and then you'll get basically the speed up"
},
{
"start": 145.6,
"end": 152.2,
"text": " of the cards without having to sacrifice a model size."
},
{
"start": 152.2,
"end": 161.24,
"text": " Another company that shows off really cool technology is a company that does LIDAR, and"
},
{
"start": 161.24,
"end": 170.60000000000002,
"text": " I forget the name right now, but when I try to look it up, they do a LIDAR sensor basically"
},
{
"start": 170.60000000000002,
"end": 179.56,
"text": " that is super tiny, and it costs a fraction of like a traditional LIDAR sensor."
},
{
"start": 179.56,
"end": 188.20000000000002,
"text": " So I think they said theirs cost about $12,000, and it's really tiny, and has a couple of"
},
{
"start": 188.2,
"end": 192.16,
"text": " advantages compared to traditional sensors."
},
{
"start": 192.16,
"end": 197.95999999999998,
"text": " As far as I understand, their lasers are mounted on the same chip, so they always point in"
},
{
"start": 197.95999999999998,
"end": 205.6,
"text": " the same direction, which reduces a lot of inaccuracies."
},
{
"start": 205.6,
"end": 210.83999999999997,
"text": " I guess people would be interested in that, for self-driving cars and so on."
},
{
"start": 210.83999999999997,
"end": 215.67999999999998,
"text": " These are kind of the hardware demonstrations that I've seen."
},
{
"start": 215.68,
"end": 223.76000000000002,
"text": " Then there's other things, like there is a wellness center where you can get a massage,"
},
{
"start": 223.76000000000002,
"end": 232.32,
"text": " which is sponsored by the big companies, which is pretty nice, but I'm probably too much."
},
{
"start": 232.32,
"end": 236.88,
"text": " I don't like these kinds of things too much."
},
{
"start": 236.88,
"end": 241.24,
"text": " Maybe I'm just socially too awkward."
},
{
"start": 241.24,
"end": 247.76000000000002,
"text": " For some companies, I feel that they're just there to recruit, and they don't really want"
},
{
"start": 247.76000000000002,
"end": 250.84,
"text": " to talk about what they do too much."
},
{
"start": 250.84,
"end": 259.76,
"text": " So an indication of this would be a company where basically all of the reps at the booth"
},
{
"start": 259.76,
"end": 267.76,
"text": " are recruiters, so non-technical recruiters, that basically just kind of tell you what"
},
{
"start": 267.76,
"end": 276.12,
"text": " you can do as a career and not really what the company does as a whole."
},
{
"start": 276.12,
"end": 284.24,
"text": " I never really know what to talk about then, because I feel like most people are interested"
},
{
"start": 284.24,
"end": 290,
"text": " and drawn towards interesting work, and if that comes with good working conditions, then"
},
{
"start": 290,
"end": 296.24,
"text": " that's a plus, but I don't feel for many people that that is the most important thing."
},
{
"start": 296.24,
"end": 302.40000000000003,
"text": " So I could be wrong, and probably it's good that for some people it is, because otherwise"
},
{
"start": 302.40000000000003,
"end": 307.8,
"text": " everyone would take my jobs, the ones that I like."
},
{
"start": 307.8,
"end": 312.32,
"text": " These companies will usually, if there is an engineer, they will not talk about too"
},
{
"start": 312.32,
"end": 315.48,
"text": " much what they do, like, oh, it's company secret and so on."
},
{
"start": 315.48,
"end": 319.32,
"text": " So the funniest one was actually the NSA."
},
{
"start": 319.32,
"end": 327.08,
"text": " Talking to the NSA was kind of painful because you kind of ask them, so what do you do?"
},
{
"start": 327.08,
"end": 331.84,
"text": " And they're like, yeah, machine learning."
},
{
"start": 331.84,
"end": 337.8,
"text": " Because what I want to know as a researcher is, is there anything I could do there that"
},
{
"start": 337.8,
"end": 339.88,
"text": " I couldn't do anywhere else?"
},
{
"start": 339.88,
"end": 348.44,
"text": " So is there any unique problems that the NSA faces that actually demand new research, like"
},
{
"start": 348.44,
"end": 354.24,
"text": " demand new machine learning methods or some kind of change?"
},
{
"start": 354.24,
"end": 358.88,
"text": " So I ask this, and they're like, yes, there are problems like this."
},
{
"start": 358.88,
"end": 360.88,
"text": " And you ask, like, which problems?"
},
{
"start": 360.88,
"end": 363.8,
"text": " And they're like, yeah, there are problems."
},
{
"start": 363.8,
"end": 364.8,
"text": " We can't tell you."
},
{
"start": 364.8,
"end": 366.8,
"text": " So everything's basically whatever."
},
{
"start": 366.8,
"end": 373.8,
"text": " So I made it a game to ask them more specific questions and watch them, like, oh, this is"
},
{
"start": 373.8,
"end": 374.8,
"text": " classified."
},
{
"start": 374.8,
"end": 379.16,
"text": " So yeah, if you're here, definitely check them out."
},
{
"start": 379.16,
"end": 380.16,
"text": " It's fun."
},
{
"start": 380.16,
"end": 384.08,
"text": " It's just fun to talk to them."
},
{
"start": 384.08,
"end": 389.84000000000003,
"text": " Yeah, I feel to most companies, they're really interesting."
},
{
"start": 389.84000000000003,
"end": 391.92,
"text": " I don't know more than half of them."
},
{
"start": 391.92,
"end": 398.68,
"text": " So just going up, ask them what they do, kind of just get an overview over the landscape"
},
{
"start": 398.68,
"end": 401.64,
"text": " of what's needed currently in machine learning research."
},
{
"start": 401.64,
"end": 409.88,
"text": " I think that's really useful, because as an academic, I tend to be very disconnected from"
},
{
"start": 409.88,
"end": 417.28,
"text": " the industry side of things and from what people actually need or want in practice."
},
{
"start": 417.28,
"end": 422.03999999999996,
"text": " So talking to all these companies is really helpful to get an overview over that."
},
{
"start": 422.03999999999996,
"end": 428.76,
"text": " Yeah, so but if you know a better way, I know some people are much more successful than"
},
{
"start": 428.76,
"end": 433.08,
"text": " me talking to companies at conferences."
},
{
"start": 433.08,
"end": 435.08,
"text": " I'm definitely not the best at this."
},
{
"start": 435.08,
"end": 439.28,
"text": " And yeah, if you have a better strategy, let me know."
},
{
"start": 439.28,
"end": 442.03999999999996,
"text": " So I'm pretty happy so far."
},
{
"start": 442.03999999999996,
"end": 443.03999999999996,
"text": " All right."
},
{
"start": 443.03999999999996,
"end": 444.03999999999996,
"text": " That was that."
},
{
"start": 444.04,
"end": 459.04,
"text": " See ya."
}
] |
TFiZYA_JfJs | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Population-Based Search and Open-Ended Algorithms | [
"Science & Technology"
] | [
"machine learning",
"ai",
"artificial intelligence",
"open ended learning",
"quality diversity",
"conference",
"icml",
"icml2019",
"tutorial",
"population-based search",
"goal switching",
"serendipidy",
"evolution"
] | Comments on the ICML2019 tutorial on population-based search and open-ended learning.
Talk: https://www.facebook.com/icml.imls/videos/481758745967365/
Slides: http://www.cs.uwyo.edu/~jeffclune/share/2019_06_10_ICML_Tutorial.pdf
Book: https://www.amazon.com/dp/B00X57B4JG/
Event: https://icml.cc/Conferences/2019/ScheduleMultitrack?event=4336 | This is huge. This is just one hall, and most people, I guess, are still waiting for registration. Yeah, but definitely the size of these things is ginormous. The tutorials have just started. There we go, it's finding a place. Hi, so I just wanted to give a little update on a tutorial that I liked, which was the population-based search and open-ended learning tutorial which happened on Monday here. So I was pleasantly surprised by this tutorial, because I knew almost nothing about these techniques, and they seem really cool. It seems to be a really cool line of research. So it started out with what is population-based search, and basically in population-based search you don't want to just reach one solution of a problem, you want to maintain a population of solutions that you develop over time. So natural evolution would be an example of that. This can have many benefits, which were explored in the tutorial. So the culprit of traditional optimization, let's say you have a classification problem and you just train one classifier on it, is what they call deception. A better example is an RL problem where you need to reach some goal, but since the goal might be very hard to reach, your algorithm has basically nothing to go on, there's no stepping stone. So usually people go and construct a reward function in a very clever way, but this can be overcome with these techniques as well. So just imagine the hardest video game in the Atari suite. This would be something like Montezuma's Revenge, where you first need to collect some key and then go to some door, and only then you get a score. So this reward function is too ambitious, and that is the problem they call deception. An observation they make is that if you look at nature and natural evolution, it is very successful even without a goal. So there's no goal in mind to natural evolution, except that reproduction creates more reproduction. But it's not a goal, that's simply a kind of underlying mechanism. And if you look at nature, all this variety of life was produced without a goal in mind, all this variety of life filling different niches and basically reproducing at their own pace. So it's a very interesting observation. The goal of this entire field is kind of to go into this direction of: what if we don't really go after only the cost function, but what if we... So in the most extreme case, what if we build a search algorithm that only wants to create novel things? So where kind of novelty is the only goal, what happens then? And it turns out some interesting things can be achieved with that. So they introduced this notion of quality diversity, which basically means, if you look at, let's again take life on earth, you want all the achievable behaviors that there are. So maybe one achievable behavior is a very fast life form that can hunt other life forms, and another achievable behavior is one that camouflages very well, and so on. And for each of these behaviors, you want to find the best possible example. So that's the direction that these algorithms go into. And an algorithm that they presented was MAP-Elites, so M-A-P-Elites, which goes as follows. Let's say you have a bunch of dimensions you care about, say how fast a creature is, how tall it is, how well it is camouflaged and so on. Now you want to discretize each of those dimensions. So this will give you cells, basically.
So each of these discretizations will introduce a grid of cells. And what you now do is you want to keep the best example of each cell. So if you have a creature that's very fast but not very well camouflaged, it lands in some cell, and you look at how well it's doing at the goal that you have in mind. And you want to keep the best one of those. You have a population, and of whichever ones land in that cell, you keep the best. And then you go ahead and you change them. You could do this via an evolutionary process, like you can mutate them, or it could be via gradient descent or something similar. But you mutate them, and they will probably end up in a different cell. So you go look at that cell: are these new ones better than the ones that you remembered for that cell? And if so, you replace them. For each cell, keep the best one and then continue developing from those, sort of like Dijkstra's shortest path algorithm. So what it will return is an entire landscape of possible behaviors, and for each behavior it will give you the best result. Now it doesn't mean they all do equally well. Some will be better; some cells will not be as good with regards to your cost function. But it will give you an entire landscape, and you can see then that there are many modes in this landscape. As I said, some creatures are very fast hunters, some camouflage very well but then are somewhat slower. So you will be able to see these modes in that landscape. I found this pretty interesting, and it opens the door to a lot of different applications. So a principle they employ is what is called goal switching, which means that one line of development can benefit from inventions of another line. So let's say the very fast hunters are good at hunting, but maybe they don't quite reach optimal performance. Then another line develops somewhere else, say the camouflaged life forms, so they invent camouflage. Now, because of the way this mutation works, you keep both the camouflaged ones and the hunters around, and the camouflage can jump over to the hunters. It's difficult to explain like this, but they call it goal switching. What it means is that the hunters can now adopt a little bit of camouflage, say by mutating one of the camouflaged ones into the hunters' cell or vice versa, and then benefit from that invention over there. A good example of that, they mentioned, is that in order to discover the microwave, you first had to work on radar technology, which had nothing to do with microwave ovens. But because of the inventions made in radar technology, you could then invent the microwave easily. So the invention kind of jumped over into the space of ovens. Before, all you had to make food warm was to put it in an oven and heat it up; now you had the microwave. These algorithms capture the spirit of this. A book that the people who gave the tutorial wrote is Why Greatness Cannot Be Planned. I'll definitely get that. I can't recommend it yet since I haven't read it, but I'm going to get it and read it. Should be fairly interesting.
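To make the MAP-Elites loop described above concrete, here is a minimal sketch in Python. It assumes a toy setting where a solution is just a small parameter vector; the behavior descriptor (speed, camouflage), the fitness function, and the mutation operator are illustrative placeholders rather than anything from the tutorial itself.

```python
import random

GRID = 10  # number of bins per behavior dimension

def behavior(params):
    # Map a solution to discretized behavior coordinates (e.g. speed, camouflage).
    speed = int(abs(params[0]) * GRID) % GRID
    camo = int(abs(params[1]) * GRID) % GRID
    return (speed, camo)

def fitness(params):
    # Placeholder quality measure for a solution.
    return -sum((p - 0.5) ** 2 for p in params)

def mutate(params, sigma=0.1):
    return [p + random.gauss(0, sigma) for p in params]

def map_elites(iterations=10000, dim=4):
    archive = {}  # cell -> (fitness, params): the best elite found so far per cell
    for _ in range(iterations):
        if archive and random.random() < 0.9:
            # Pick a random elite and perturb it ...
            _, parent = random.choice(list(archive.values()))
            child = mutate(parent)
        else:
            # ... or occasionally start from a fresh random solution.
            child = [random.random() for _ in range(dim)]
        cell = behavior(child)  # which niche does the child land in?
        f = fitness(child)
        # Keep the child only if its cell is empty or it beats the current elite.
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, child)
    return archive  # a whole "landscape": the best solution found for every niche

if __name__ == "__main__":
    elites = map_elites()
    print(f"filled {len(elites)} of {GRID * GRID} cells")
```

The key design choice is the archive keyed by behavior cells: selection pressure is applied only within a cell, so diverse but lower-fitness behaviors are never crowded out by the single global best.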
They gave a number of examples of this, for example robots that can recover from damage. They had a robot with six legs and trained it to move, and then they disabled one leg. Now, usually you have one solution, like you trained your neural network. I don't think it was even a neural network, but you trained your system to move this robot as efficiently as possible. And because you only have one solution, once one leg is broken, it doesn't work anymore. But since you have the entire landscape of solutions, you can easily jump to other solutions in the solution space, ones that were not quite as good when you had all legs, and try them out. Which ones still work if I only have five legs now? Since you have the entire landscape, you're very well able to do that. So that's pretty cool. Another algorithm they presented was Go-Explore, which is an algorithm that solved some really hard Atari games a while back. What they do specifically is they keep an archive of states that they have reached in the past. So it's a video game, you do some things and then you are in certain states, and it's an archive of such states. And you just pick one of them, say, OK, this state means the little character I control is somewhere over there. And then you just explore from it: you kind of go around from it and so on. Then you look at the state you end up in. If the state you end up in is a known state, like you've been there before, so it's also in your archive, then you compare the two. Did you get to that state faster via the new route, or via the route that was already in your archive? If you're faster via the new route, you replace the archived one with the new one. So this again is kind of like Dijkstra's shortest path algorithm extrapolated to a domain where you have to explore and don't actually have a graph. I think it's pretty cool. It's all kind of the same principle, but it can employ this goal switching thing. So you go to a certain state, but then all of a sudden, because you explored something else, you find a much quicker way to that state, which you never intended, but it happens. So this is the basic principle: if you explore a lot, then good things might happen, kind of a serendipitous discovery mechanism, and you can take those good things and incorporate them into the things that already work. The last topic they covered was open-ended search, and here is the distinction from what they've already discussed. They give the example of life on earth again. If you consider it, it's a single run of an algorithm. It's not that for every life form a different optimization was started and finished, optimized for a certain thing. It's all one single run of the same algorithm, and it doesn't really have a goal in mind. Open-ended algorithms are like that. They define an interesting notion: is it still interesting if we were to just let it run for a billion years? If yes, consider it an open-ended algorithm, which I find a really good kind of definition. The fundamental property that open-ended algorithms have, and that research in this area has defined, is that not only is the population constantly shifting, but the environment is shifting as well. So there's never a static situation. The environment is always shifting, which also means there are always new opportunities opening up for new life on earth, for new creatures to evolve to fill the niches that open up.
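Here is a rough sketch of the Go-Explore archive idea from the passage above. The environment interface (reset, restore, step, get_state, sample_action) and the cell abstraction are assumptions made for illustration; the actual method also downsamples observations into cells, weights cell selection toward rarely visited cells, and adds a robustification phase that is omitted here.

```python
import random

def cell_of(state):
    # Collapse a raw state into a coarse "cell" key (placeholder: identity).
    return state

def go_explore(env, iterations=1000, rollout_len=20):
    # archive: cell -> (steps_to_reach, restorable_state, trajectory)
    start = env.reset()
    archive = {cell_of(start): (0, start, [])}

    for _ in range(iterations):
        # 1. Pick a previously reached cell (uniformly here, for simplicity).
        cell = random.choice(list(archive.keys()))
        steps, state, traj = archive[cell]

        # 2. Return to that exact state and explore from it with random actions.
        env.restore(state)
        cur_steps, cur_traj = steps, list(traj)
        for _ in range(rollout_len):
            action = env.sample_action()
            obs, done = env.step(action)
            cur_steps += 1
            cur_traj.append(action)
            c = cell_of(env.get_state())
            # 3. Archive new cells, or overwrite a known cell if we reached it
            #    in fewer steps than the stored route (the Dijkstra-like update).
            if c not in archive or cur_steps < archive[c][0]:
                archive[c] = (cur_steps, env.get_state(), list(cur_traj))
            if done:
                break
    return archive
```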
And the research community around this, the open-ended search and open-ended learning community, is considering exactly those types of environments: how can they even describe them, manufacture them, and then learn in them. So pretty cool. The cool experiment they've shown was the Picbreeder experiment, which is basically a human in the loop, where humans could cooperate. As a human, you go to a website and you pick one picture, and these pictures are procedurally generated. They start out with a very simple pattern, you pick one, and it gives you a bunch of random perturbations of the procedurally generated image. You pick the ones that you like and then you continue exploring from there. And if you're happy, you can just save that to the database, and someone else can look through the database and then pick yours, for example, to continue. And the things that the humans came up with, the result of that, were extremely interesting. Not only could you perturb, but you could also mix pictures, as far as I remember, I'm not sure anymore. But the point is you could breed pictures, you could also put pictures together. So with the procedural generation of them, what you end up with are remarkably interesting things. And the point they made is that this comes from really only very few iterations. These are like tens or hundreds of iterations of development, not like a million like we're used to. And there's a real tree of phylogenies that emerges. And the crucial lesson, they say, is that people only find when they are not looking. So if you had a certain goal in mind, you would never be able to change the pictures in the way that this goal would appear. But if you have no goal in mind, you might discover all kinds of interesting things. So that is kind of all I'm going to say about this. They discussed many more things, but I think these are the main takeaways. Population-based search is interesting because it can overcome the problems you have if you only had one optimizer, one optimization run of one algorithm. If you employ quality diversity, as in the algorithm MAP-Elites, this enables this kind of goal switching and gives you back an entire landscape of learned actors or systems, where each one is kind of the best performing one under that particular constraint of the dimensions you care about. And yeah, open-ended algorithms, open-ended search, is definitely a cool research direction, and I encourage you to check it out. All right. That was it so far. Thanks for listening. Bye. | [
{
"start": 0,
"end": 8,
"text": " This is huge. This is just one hall and most people I guess are still waiting for registration."
},
{
"start": 8,
"end": 14,
"text": " Yeah, but definitely the size of these things is ginormous."
},
{
"start": 14,
"end": 17,
"text": " The tutorials have just started."
},
{
"start": 17,
"end": 20,
"text": " There we go. It's finding a place."
},
{
"start": 20,
"end": 26,
"text": " Hi, so I just wanted to give a little update on a tutorial that I liked"
},
{
"start": 26,
"end": 30,
"text": " which was the population-based search and open-ended learning tutorial"
},
{
"start": 30,
"end": 34,
"text": " which happened on Monday here."
},
{
"start": 34,
"end": 40,
"text": " So I was pleasantly surprised by this tutorial because I knew almost nothing about these techniques"
},
{
"start": 40,
"end": 44,
"text": " and they seem really cool. It seems to be a really cool line of research."
},
{
"start": 44,
"end": 48,
"text": " So I started out with what is population-based search"
},
{
"start": 48,
"end": 54,
"text": " and basically in population-based search you don't want to just reach one solution of a problem"
},
{
"start": 54,
"end": 60,
"text": " but you want to maintain a population of solutions that you develop over time."
},
{
"start": 60,
"end": 66,
"text": " So natural evolution would be an example of that."
},
{
"start": 66,
"end": 73,
"text": " So this can have many benefits that were explored in the tutorial."
},
{
"start": 73,
"end": 80,
"text": " So the culprit of traditional optimization, let's say you have a classification problem,"
},
{
"start": 80,
"end": 86,
"text": " you just train one classifier on it, is what they call deception,"
},
{
"start": 86,
"end": 93,
"text": " meaning that a better example is an RL problem where you need to reach some goal"
},
{
"start": 93,
"end": 101,
"text": " but since the goal might be very hard to reach, your algorithm has basically nothing to go on."
},
{
"start": 101,
"end": 103,
"text": " There's no stepping stone."
},
{
"start": 103,
"end": 108,
"text": " So usually people go and construct a reward function in a very clever way."
},
{
"start": 108,
"end": 113,
"text": " But this can be overcome with these techniques as well."
},
{
"start": 113,
"end": 119,
"text": " So just imagine the hardest video game in the Atari suite."
},
{
"start": 119,
"end": 123,
"text": " This would be something like Montezuma's Revenge where you first need to collect some key"
},
{
"start": 123,
"end": 127,
"text": " and then go to some door and only then you get a score."
},
{
"start": 127,
"end": 134,
"text": " So this reward function is too ambitious and is a problem they call your deception."
},
{
"start": 134,
"end": 140,
"text": " An observation they make is if you look at nature and natural evolution,"
},
{
"start": 140,
"end": 144,
"text": " it is very successful even without a goal."
},
{
"start": 144,
"end": 152,
"text": " So there's no goal in mind to natural evolution except reproduction creates other reproduction."
},
{
"start": 152,
"end": 159,
"text": " But it's not a goal, that's simply a kind of underlying mechanism."
},
{
"start": 159,
"end": 165,
"text": " And if you look at nature, all this variety of life was produced without a goal in mind."
},
{
"start": 165,
"end": 173,
"text": " And all this variety of life filling different niches and basically reproducing at their own pace."
},
{
"start": 173,
"end": 176,
"text": " So it's a very interesting observation."
},
{
"start": 176,
"end": 181,
"text": " The goal of this entire field is kind of to model, to go into this direction of"
},
{
"start": 181,
"end": 188,
"text": " what if we don't really go after only the cost function, but what if we..."
},
{
"start": 188,
"end": 196,
"text": " So in the most extreme case, what if we build a search algorithm that only wants to create novel things?"
},
{
"start": 196,
"end": 202,
"text": " So where kind of novelty is the only goal, what happens then?"
},
{
"start": 202,
"end": 207,
"text": " And it turns out some interesting things can be achieved with that."
},
{
"start": 207,
"end": 215,
"text": " So they introduced this notion of quality diversity, which basically means if you look at,"
},
{
"start": 215,
"end": 223,
"text": " let's again take a life on earth, you want all the achievable behaviors that there are."
},
{
"start": 223,
"end": 230,
"text": " So maybe one achievable behavior is a very fast life form that can hunt other life forms,"
},
{
"start": 230,
"end": 235,
"text": " and another achievable behavior is one that camouflages very well and so on."
},
{
"start": 235,
"end": 243,
"text": " And you want to kind of find for each of these behaviors, you want to find the best possible example."
},
{
"start": 243,
"end": 247,
"text": " So that's the direction that these algorithms go into."
},
{
"start": 247,
"end": 256,
"text": " And an algorithm that they presented was MapElites, so M-A-P-Elites, which goes as follows."
},
{
"start": 256,
"end": 263,
"text": " So let's say you have a bunch of dimensions you care about, say how fast a creature is,"
},
{
"start": 263,
"end": 266,
"text": " how tall it is, how well it is camouflaged and so on."
},
{
"start": 266,
"end": 270,
"text": " Now you want to discretize each of those dimensions."
},
{
"start": 270,
"end": 274,
"text": " So this will give you cells basically."
},
{
"start": 274,
"end": 279,
"text": " So each of these discretization will introduce a grid of cells."
},
{
"start": 279,
"end": 285,
"text": " And what you now do is you want to keep the best examples of each cell."
},
{
"start": 285,
"end": 291,
"text": " So if you have a creature that's very fast but not very well camouflaged at some cell,"
},
{
"start": 291,
"end": 297,
"text": " you look at how well it's doing at the goal that you have in mind."
},
{
"start": 297,
"end": 300,
"text": " And you want to keep the best one of those."
},
{
"start": 300,
"end": 305,
"text": " You have a population and whichever ones are in that cell, you keep the best."
},
{
"start": 305,
"end": 308,
"text": " And then you go ahead and you kind of change them."
},
{
"start": 308,
"end": 312,
"text": " You could do this via evolutionary process, like you can mutate them,"
},
{
"start": 312,
"end": 317,
"text": " or it could be via gradient descent something."
},
{
"start": 317,
"end": 322,
"text": " But you mutate them and I guess they will probably end up in a different cell."
},
{
"start": 322,
"end": 329,
"text": " So you go look at that cell. Are these new ones better than the ones that you remembered from that old cell?"
},
{
"start": 329,
"end": 331,
"text": " And if so, replace them."
},
{
"start": 331,
"end": 338,
"text": " For each cell, keep the best one and then kind of start continue developing from those."
},
{
"start": 338,
"end": 342,
"text": " Sort of like Dijkstra's shortest path algorithm."
},
{
"start": 342,
"end": 350,
"text": " So what it will return is like an entire landscape of possible behaviors."
},
{
"start": 350,
"end": 355,
"text": " And for each behavior, it will give you the best result."
},
{
"start": 355,
"end": 358,
"text": " Now it doesn't mean they all do equally."
},
{
"start": 358,
"end": 365,
"text": " Some will be better, some cells will be not as good with regards to your cost function."
},
{
"start": 365,
"end": 372,
"text": " But it will give you an entire landscape and you could see then that there are many kind of modes in this landscape."
},
{
"start": 372,
"end": 377,
"text": " As I said, some creatures are very fast hunters, some camouflage very well."
},
{
"start": 377,
"end": 380,
"text": " But then they are kind of slower."
},
{
"start": 380,
"end": 383,
"text": " So you will be able to see these modes in that."
},
{
"start": 383,
"end": 392,
"text": " I found this pretty interesting and opens the door to a lot of different applications."
},
{
"start": 392,
"end": 397,
"text": " So a principle they employ is what is called goal switching."
},
{
"start": 397,
"end": 406,
"text": " Namely, that means if a line of development can benefit from inventions of another line."
},
{
"start": 406,
"end": 419,
"text": " So let's say the very fast hunters, they are good at that, but then maybe they don't reach quite optimal performance."
},
{
"start": 419,
"end": 427,
"text": " But then another line develops somewhere else and these are camouflaged, like the camouflaged life forms develop."
},
{
"start": 427,
"end": 429,
"text": " So they invent kind of camouflage."
},
{
"start": 429,
"end": 438,
"text": " Now because of the way this mutation and so on is, you kind of keep the camouflaged ones around and the hunters."
},
{
"start": 438,
"end": 442,
"text": " And now the camouflage can kind of jump over to the hunters."
},
{
"start": 442,
"end": 448,
"text": " It's very difficult to explain like this, but they call this goal switching."
},
{
"start": 448,
"end": 461,
"text": " And what it means is that the hunters can now adopt a little bit of camouflage through, let's say mutating one of the camouflaged ones into the hunters or vice versa."
},
{
"start": 461,
"end": 465,
"text": " And then can kind of benefit from that invention over there."
},
{
"start": 465,
"end": 478,
"text": " And so a good example of that, they mentioned, is that in order to discover the microwave, you first had to work on radar technology, which had nothing to do with microwaves."
},
{
"start": 478,
"end": 485,
"text": " But because of the inventions made in radar technology, you could then invent the microwave easily."
},
{
"start": 485,
"end": 489,
"text": " So it kind of jumped over into the space of ovens, basically."
},
{
"start": 489,
"end": 494,
"text": " Before, all you had to make food warm was just put it in an oven and heat it up."
},
{
"start": 494,
"end": 500,
"text": " Now you had the microwave. So that kind of these algorithms capture the spirit of this."
},
{
"start": 500,
"end": 508,
"text": " A book that the people who gave the tutorial wrote is Why Greatness Cannot Be Planned."
},
{
"start": 508,
"end": 516,
"text": " I'll definitely get that. And I can't recommend it since I haven't read it yet, but I'm going to get and read it."
},
{
"start": 516,
"end": 519,
"text": " Should be fairly interesting."
},
{
"start": 519,
"end": 531,
"text": " So they give them a number. They gave a number of examples of this, for example, robots that can recover from damage because so they had a robot with six legs."
},
{
"start": 531,
"end": 535,
"text": " They trained it to move. Now they disabled one leg."
},
{
"start": 535,
"end": 540,
"text": " Now, usually you have one solution like you trained your neural network."
},
{
"start": 540,
"end": 547,
"text": " I don't think it was even a neural network, but you trained your like your system to move this robot as efficiently as possible."
},
{
"start": 547,
"end": 552,
"text": " And now because you only have one solution, one legs broken, it doesn't work anymore."
},
{
"start": 552,
"end": 562,
"text": " But since you have the entire landscape of solutions, you can easily kind of jump to other not as good solutions if you have all legs."
},
{
"start": 562,
"end": 568,
"text": " But you can jump to other solutions in the solution space and try them out."
},
{
"start": 568,
"end": 576,
"text": " Which ones do still work? If I only now have five legs, since you have the entire landscape, you're very well able to do that."
},
{
"start": 576,
"end": 579,
"text": " So that's pretty cool."
},
{
"start": 579,
"end": 591,
"text": " Another algorithm they presented was GoExplore, which is an algorithm that kind of solved these really hard Atari games while back."
},
{
"start": 591,
"end": 600,
"text": " And what they do in specific is they kind of have an archive of states that they have reached in the past."
},
{
"start": 600,
"end": 608,
"text": " So it's a video game and you do some things and then you are in certain states. So it's an archive of states."
},
{
"start": 608,
"end": 619,
"text": " And you just pick one of that. Right. You pick like, OK, this state means I'm like my little person I control is somewhere over there."
},
{
"start": 619,
"end": 626,
"text": " And then you just explore from it. Right. You do a population based. You just kind of go around from it and so on."
},
{
"start": 626,
"end": 636,
"text": " And then you look at the state you end up in. And if the state you end up in is a known state like you've been there before."
},
{
"start": 636,
"end": 649,
"text": " So it's also in your archive. Then you compare the two. Did you get faster to that state via the new route or did you get faster to that state via the route that was already in your archive?"
},
{
"start": 649,
"end": 657,
"text": " And if you're faster in that state via the new route, you will you replace the archived one with the new one."
},
{
"start": 657,
"end": 667,
"text": " So this again is kind of like a Dijkstra shortest path algorithm extrapolated to this to this kind of domain where you have to explore."
},
{
"start": 667,
"end": 678,
"text": " You don't actually have a graph. So I think it's it's pretty cool. It's all kind of the same principle, but it can employ this goal switching thing."
},
{
"start": 678,
"end": 687,
"text": " Right. So you go to a certain state, but then all of a sudden, because you explored something else, you find a much quicker way to that state, which you never intended."
},
{
"start": 687,
"end": 699,
"text": " But it happens. So this is a basic principle that kind of if you explore a lot, then good things might happen."
},
{
"start": 699,
"end": 710,
"text": " So kind of a serendipity discovery mechanism, and you could use those good things, incorporate them into the things that already work."
},
{
"start": 710,
"end": 722,
"text": " The last topic they covered was open ended search. So a distinction from what they've already discussed to open ended is now."
},
{
"start": 722,
"end": 729,
"text": " They give the example again life on earth. If you consider it, it's a single run of an algorithm."
},
{
"start": 729,
"end": 740,
"text": " It's not that for every life form, a different optimization was started and kind of started and finished, optimized for a certain thing."
},
{
"start": 740,
"end": 747,
"text": " It's all one single run of the same algorithm. And it doesn't really have a goal in mind."
},
{
"start": 747,
"end": 752,
"text": " So open ended algorithms are like that. They kind of define interesting notion."
},
{
"start": 752,
"end": 758,
"text": " Is it still interesting if we were to just let it run for a billion years? Like, would it still be interesting?"
},
{
"start": 758,
"end": 765,
"text": " If yes, consider it an open ended algorithm, which I find a really good kind of definition."
},
{
"start": 765,
"end": 781,
"text": " So the fundamental property that open ended algorithms have and research in this has defined is that constantly not only is the population shifting, but also the environment is shifting."
},
{
"start": 781,
"end": 800,
"text": " So there's kind of a never static situation. The environment's always shifting. That also means there's always new opportunities opening up for kind of new life on earth, for new creatures to evolve, to kind of fill the niches that open up."
},
{
"start": 800,
"end": 814,
"text": " And the research community around this, the open ended search, open ended learning community is considering exactly those types of environments."
},
{
"start": 814,
"end": 821,
"text": " Like how can they even describe those, manufacture those and then learn in those. So pretty cool."
},
{
"start": 821,
"end": 832,
"text": " The cool experiment they've shown was the pick breeder experiment, where basically it's a human in the loop. So they gave humans could cooperate."
},
{
"start": 832,
"end": 840,
"text": " So as a human, you go to a website, you pick one picture and these pictures are procedurally generated."
},
{
"start": 840,
"end": 850,
"text": " So they start out with a very simple pattern and you just have the opportunity to kind of you pick one and it gives you a bunch of random perturbations of the procedurally generated image."
},
{
"start": 850,
"end": 855,
"text": " And you pick the ones that you like and then you continue exploring from there."
},
{
"start": 855,
"end": 864,
"text": " And if you're happy, you can just save that to the database and someone else can look through the database and then pick yours, for example, to continue."
},
{
"start": 864,
"end": 872,
"text": " And the things that the humans came up with or the result of that was extremely interesting."
},
{
"start": 872,
"end": 881,
"text": " So not only could you perturb, but you could also kind of mix pictures as far as I remember. Not sure anymore."
},
{
"start": 881,
"end": 891,
"text": " But the things they end up with is you could breed pictures, right? You could you could kind of also put pictures together."
},
{
"start": 891,
"end": 900,
"text": " So the procedural generation of them and what you end up with is remarkable, remarkably interesting things."
},
{
"start": 900,
"end": 905,
"text": " And the point they made is it's really only from very few iterations."
},
{
"start": 905,
"end": 911,
"text": " These are like tens or hundreds of iterations of development, not like a million like we're used to."
},
{
"start": 911,
"end": 915,
"text": " And there's a real tree of phylogenies that emerge."
},
{
"start": 915,
"end": 922,
"text": " And the crucial lesson, they say, is people only find when they are not looking."
},
{
"start": 922,
"end": 931,
"text": " So if you had a certain goal in mind, you would never be able to, you know, change the pictures in the way that this goal would appear."
},
{
"start": 931,
"end": 937,
"text": " But if you have no goal in mind, you might discover all kinds of interesting things."
},
{
"start": 937,
"end": 944,
"text": " So that that is kind of all I'm going to say of this."
},
{
"start": 944,
"end": 948,
"text": " They discussed many more things, but I think these are the main takeaways."
},
{
"start": 948,
"end": 958,
"text": " So population population based search is interesting because it can kind of overcome the problems that if you only had one optimizer,"
},
{
"start": 958,
"end": 965,
"text": " one optimization run of one algorithm, if you employ quality diversity in the algorithm map elites,"
},
{
"start": 965,
"end": 977,
"text": " this this enables this kind of goal switching gives you back an entire landscape of of the of learned actors or systems"
},
{
"start": 977,
"end": 988,
"text": " that for each one, you know, it's kind of the best performing one in that particular constraint of of the of the dimensions you care about."
},
{
"start": 988,
"end": 997,
"text": " And yeah, open ended algorithms, open ended search is definitely a cool research direction."
},
{
"start": 997,
"end": 1002,
"text": " And I encourage you to check it out. All right. That was it so far."
},
{
"start": 1002,
"end": 1007,
"text": " Thanks for listening. Bye."
}
] |
EA96xh9qog0 | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | I'm at ICML19 :) | [
"Science & Technology"
] | [
"machine learning",
"conference",
"long beach",
"california",
"icml19",
"icml",
"artificial intelligence",
"ai",
"deep learning"
] | Short intro to the International Conference on Machine Learning in Long Beach, CA.
I'll be making some updates from the conference. | Hi there, it's day one of ICML and we'll be attending the conference here and just quickly pre-video to let everyone know I'll be trying to report from here kind of what papers are cool, what I liked, what are kind of the trends and so hopefully get this conference out to a broader community. So everyone's conglomerating here, the line's probably going to be huge, I'm already registered so that's pretty good. It's beautiful weather and looking forward to five days of conference. So today is tutorial day and I'll think I'll be attending some cool tutorials. Yeah, just look how pretty it is here, nice. All right, bye everyone, see you later. | [
{
"start": 0,
"end": 12.4,
"text": " Hi there, it's day one of ICML and we'll be attending the conference here and just"
},
{
"start": 12.4,
"end": 19.28,
"text": " quickly pre-video to let everyone know I'll be trying to report from here kind of what"
},
{
"start": 19.28,
"end": 27.12,
"text": " papers are cool, what I liked, what are kind of the trends and so hopefully get this conference"
},
{
"start": 27.12,
"end": 31.520000000000003,
"text": " out to a broader community. So everyone's conglomerating here, the line's probably"
},
{
"start": 31.520000000000003,
"end": 35.760000000000005,
"text": " going to be huge, I'm already registered so that's pretty good. It's beautiful weather"
},
{
"start": 36.64,
"end": 45.2,
"text": " and looking forward to five days of conference. So today is tutorial day and I'll think I'll be"
},
{
"start": 45.92,
"end": 54.480000000000004,
"text": " attending some cool tutorials. Yeah, just look how pretty it is here, nice."
},
{
"start": 54.48,
"end": 59.44,
"text": " All right, bye everyone, see you later."
}
] |
hMO6rbMAPew | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Adversarial Examples Are Not Bugs, They Are Features | [
"Science & Technology"
] | [
"machine learning",
"deep learning",
"adversarial examples",
"adversarial samples",
"pgd",
"projected gradient descent",
"vulnerabiliby",
"security",
"artificial intelligence",
"MIT",
"geometry",
"classifier",
"deep neural network",
"attack",
"convolutional neural networks",
"research",
"robust features",
"robust classifier",
"robust network",
"neural network"
] | Abstract:
Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear. We demonstrate that adversarial examples can be directly attributed to the presence of non-robust features: features derived from patterns in the data distribution that are highly predictive, yet brittle and incomprehensible to humans. After capturing these features within a theoretical framework, we establish their widespread existence in standard datasets. Finally, we present a simple setting where we can rigorously tie the phenomena we observe in practice to a misalignment between the (human-specified) notion of robustness and the inherent geometry of the data.
Authors: Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry
https://arxiv.org/abs/1905.02175 | Hi there! Today we're looking at Adversarial Examples Are Not Bugs, They Are Features by Andrew Ilyas et al. So this paper has a pretty catchy title and we'll try to dissect what it says. First of all, in the abstract they say adversarial examples have attracted significant attention, but the reasons for their existence and pervasiveness remain unclear. If you don't know what an adversarial example is, an adversarial example is basically the following. Say you have an image classifier, right? Classifier, boom, neural network, image here, and the image is of, let's say, a cat. Right, this is my best attempt at a cat, bang, cat. And you feed it through the classifier and the classifier says cat. Now if you perturb this image, if you derive an image from it and you perturb it just very slightly, very subtly, so you change some pixels here, there, here, there, in a very targeted way, and you feed that new image through, then the classifier will say dog, or really you can make it say anything, like airplane or, I don't know, sky or whatever you want. So these are called adversarial examples. And it's true, the reasons for their existence and pervasiveness remain unclear. They say: we demonstrate that adversarial examples can be directly attributed to the presence of non-robust features. So their paper is basically about these non-robust features, and they define later what they mean exactly, but here they say: features derived from patterns in the data distribution that are highly predictive, yet brittle and incomprehensible to humans. And this is pretty neat. So the fundamental idea, as I understand it, and I'm going to take this away right here, is that if you have images, let's say here of cats, and I'm going to draw another one over here, if you have an image, say, of a cat, there are multiple features in this image, and a feature is something that the classifier can pick up on and learn to classify images from (this is a horrible cat, by the way). Features that we humans generally use are: a cat has ears, eyes, whiskers, and there's the general relationship of these things to each other. This is what constitutes a cat, and that's how we classify it. But they also say there are other features that are also very indicative. If you think about what differentiates a cat from a dog, and a dog here, let's give it fluffy ears, also eyes, I'm not going to go further with the dog. What differentiates a cat from a dog? We, of course, would say, well, the head shape is different and the ears are different and their relationship to each other is different. But it could also be, and this is simplistic right now, that cats, for example, have different fur than dogs. I'm being overly simplistic here, but bear with me. So let's say that in our hypothetical world, cats have fur that goes like this, left to right: every hair is basically horizontal, if you look at it like that. And dog fur, on the other hand, is always like this: vertical, top to bottom. And so the classifier might just as well pick up on the fur direction in order to classify images, right? Since all cats have that type of fur and all dogs have that other type of fur, the classifier might just as well pick up on that, right?
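As a rough illustration of how such a perturbation is usually found, here is a sketch of a targeted projected gradient descent (PGD) attack in PyTorch. The model, the perturbation budget eps, the step size, and the iteration count are illustrative choices, not settings from the paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, target_class, eps=8 / 255, step=2 / 255, iters=20):
    """Return x_adv close to x (||x_adv - x||_inf <= eps) that the model
    classifies as `target_class`. Illustrative sketch, not the paper's code."""
    x_adv = x.clone().detach()
    targets = torch.full((x.shape[0],), target_class, dtype=torch.long, device=x.device)
    for _ in range(iters):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        # Targeted attack: push the prediction toward the chosen target class.
        loss = F.cross_entropy(logits, targets)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - step * grad.sign()        # descend the target-class loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay within the eps-ball around x
            x_adv = x_adv.clamp(0.0, 1.0)             # stay a valid image
    return x_adv.detach()
```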
And to us humans, we don't really pay attention to these things because they're minute, right? You don't look at the directions of individual hairs to classify in an animal to cat or dog. You would much rather go for these kind of large features like where are the ears, how do they look and so on. But a classifier, there's actually you can make an argument that the classifier would more likely pick up on the fur direction, right? In order to in order to classify since we're using convolutional neural networks and they're generally neighborhood pixel neighborhood operators. It can much easier pick up on these patterns than it can on the general relationship of the of the large features. So if a classifier now learns that cats always have fur like this and dogs always have fur like that, what we could do is we can go over here to the dog and change its fur, right? Change in the image, change its fur to this direction. Now, to us humans, that would still very much look like a dog because the fur direction is almost imperceptible. But to the classifier that has only learned, hey, a cat always has this type of fur and the dog always has that type of fur. That new image would totally look like a cat. Right. So this paper argues exactly that this paper argues that in the data set, there are features and these are real features like this. This actually could be the case that cats fur is always like that and dogs fur is always like this. It could be the case and the classifier could pick up on this. Right. And then the adversarial examples, the reason why they exist is because the classifier has picked up on these imperceptible features. And so by changing the features, we can change the classifiers decision. And without changing the image in a large scale. So they they say that they make this hypothesis and they kind of they say, OK, we established a widespread existence in standard data sets. So they kind of give supporting evidence for their hypothesis. And then they say, finally, we present a simple setting, which is a theoretical setting where we can rigorously tie the phenomena we observe to a misalignment between the human specified notion of robustness and the inherent geometry of the data. All right. So it's kind of different pieces of the of this paper. And we're going to look at them in succession. So the introduction, we largely skip, except that their main claim here is specifically we claim that adversarial vulnerability is a direct result of our models, sensitivity to well generalizing features in the data. So that's the core point, I think, is well generalizing features, which is what we mentioned. These are features that actually describe the data well, but but features that are kind of imperceptibly small to humans or that don't fit our notion of robustness. All right. So they go on and they define more clearly what they mean here. Here, whenever we talk of a feature, right? Remember, we had the our classifier here, then we input an image and the image is called X. Right. And that classifier, usually, if we look at it closer, consists of multiple layers of interconnected neurons, whatever. And the last layer will be an output layer into different classes. Right. And so the features, when they say a feature, what they mean specifically is the last here, the last representation before it goes into the classifier. So the way you would classify them and here they just establish a two class setting. 
The way you would establish that is you have feature one, feature two, feature three, and you have a weight vector W1 for each feature W2, W3. You make the inner product and that will give you a Y hat. Basically, if that is high, you say it's class one. If that is low, you say it's class minus one. So the classes here are plus one and minus one, just to make things simple. So but you see the features are basically what comes out after these layers, what is then used to make a linear classification. This last thing is basically just a logistic regression. So you can think of the features as the output of the neural network, but before it goes into the classifier. So a feature basically since then, it's linearly classified. If the feature is high, it will give a signal for one class. And if a feature is low, it will give a signal for the other class, depending on, of course, if this W is negative or positive. All right, so they say we call a feature row useful. And if this thing holds here, what is this thing? This thing means so the expectation over the dates. So generally in the data set, this must hold Y times the feature. So why is the class? And remember, it's plus or minus one. And the feature, as we've seen, is some some number Y times a feature must be higher than some some number. So what does it mean when a product is high? It means either both are high or both are low. So they're correlated. That's what that means. So basically, this is says a feature F is useful if whenever it an example, X is of class one, if it's class class one or let's if it if Y is one plus one, then F is high. And whenever Y is minus one, then F is low, which means it's high in the negative direction. Right. So this is this is our this is intuitive. Right. If a feature is useful, it means it should say one thing in samples of class one, then it should say another thing in samples of class two. Then I can actually use the feature to make a decision when it's, you know, very correlated with the class. So that, you know, that makes perfect sense. So that's kind of when is a feature useful if it correlates with the class label? Yes. Cool. But the usefulness simply any feature basically that classifier will extract will be useful. That's an assumption we can make. Otherwise, the classifier wouldn't extract it. So the neural network here, that's an assumption, will only extract useful features. Right. Because the non-useful features, there would simply be no reason for it to extract them because they don't contribute to solving the task, because they're not correlated with an output class. Right. So next, they define robust, robustly useful features. So in addition to being useful, they're now also robust. What does it mean? Again, we want a correlation of why and the feature to be higher than some constant. But not only the feature of the image X, but the feature of the image X that has been perturbed by a small perturbation. So and we take the infinum here over a class of perturbations. Of course, this class of perturbations is exactly the adversarial perturbations. Basically, what this means is it says that however we try to perturb X, right, and the infinum here means that the minimum correlation, however we try to make the feature not correlated with Y, however much we try, we can't get it lower than some some gamma, some number, right? We can't we can't get it down. So whatever we try to make the feature bad for the classifier, basically, we can't. 
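The two definitions just described can be written down as a small empirical check. The sketch below assumes a scalar feature function f(x), labels in {-1, +1}, and approximates the inner minimization over perturbations with a few signed-gradient steps; none of the names come from the paper's code.

```python
import torch

def usefulness(feature, xs, ys):
    # rho-useful: the correlation E[ y * f(x) ] should be >= rho > 0.
    return (ys * feature(xs)).mean()

def robust_usefulness(feature, xs, ys, eps=8 / 255, step=2 / 255, iters=10):
    # gamma-robustly useful: E[ min_{||delta||_inf <= eps} y * f(x + delta) ]
    # should still be >= gamma > 0, i.e. the correlation survives a worst-case
    # perturbation. The inner min is approximated by gradient steps on y * f(x).
    delta = torch.zeros_like(xs)
    for _ in range(iters):
        delta.requires_grad_(True)
        corr = (ys * feature(xs + delta)).sum()
        grad, = torch.autograd.grad(corr, delta)
        with torch.no_grad():
            delta = (delta - step * grad.sign()).clamp(-eps, eps)
    return (ys * feature(xs + delta)).mean()
```

A feature whose `usefulness` is clearly positive but whose `robust_usefulness` can be driven to zero or below is exactly what the paper calls a useful, non-robust feature.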
If this holds for a feature, if this is the case, then we call that feature a robust feature. Right. That feature is robustly useful if it correlates, no matter how hard we try to make it not correlate. And of course, a non robust features, so a useful non robust feature is a feature which is useful. You see here is useful. But is not gamma robust feature for any gamma. So it is a feature that is useful like the cat fur. Right. So this here, an example of this would be that the cat's eyes and ear position. Right. We can't just make a small perturbation for the image and make the ears be somewhere completely else. That's just that would require a large perturbation of the image. So the position of the ears and eyes are pretty robust features. But here the cat's fur, no matter how no matter how small we we make this this gamma, we can always kind of change the fur to make the feature not to make the feature not useful. Right. If we can change the cat fur into a dog fur and the dog fur into a cat fur, then the feature will become not useful anymore. Because we can, you know, we can we can change that arbitrarily for any image and then the classifier will have no clue. It can't be like, well, this fur could be of any of any class. Right. So the feature is not useful anymore. So this is a non robust feature. The technique you can say any feature that is useful but not robust is a non robust feature. All right. So this is kind of the definition of what robust and non robust features are. Yeah. Remember, maybe remember robust features like position of the ears and their shape and non robust features would be which direction are the individual hairs in the fur going. Right. And in our world where cat fur is going different ways than dog fur. So they now go into experimental evidence for their for their hypothesis. And here you have to understand they do two experiments which give pretty good indication that their hypothesis is actually correct. And what you have to understand before this is is two things. First of all, here you basically you just have to assume that they already they have some procedure where they can do the following where they can take an image of the training data set and they can decompose it into its robust and non robust features. Right. Don't I mean don't ask yet how they do this. But they can decompose it into these two parts. Right. So that's assumption one. They have a procedure that can actually do that. And then number two is what they what they do here is basically the general theme of these experiments is they they have a training data set. Right. This is the original training. They create a derived version of it. So let's put a tick here. This is a derived version of the data set. Then they train a regular neural network with that. So what you can do with a neural network if you train one. All right. What you usually do is you feed images X you feed images in it gives you some output Y hat and you say well but I know why is the true label. So I feed an image of a cat that the network says airplane. You say well but this should be a cat. So please make this why more to be more to be. Please make this why had more be like why. And then you have a loss function here. You say this is wrong. Please correct this. You back propagate and all the network in here will update to make that a bit more likely. That's how you train usually in our network. 
Now what you can do is if you want to become robust adversarial examples you can do what is called adversarial training which means that you have the same network here. But of each of the training data points you create a derived version an adversarial example to that to this X you feed the adversarial examples through the network together with the original examples. Then this will give you some why hat to and then you say but this should also be equal to why. Basically you train the classifier also on adversarial examples right. Since the hypothesis is if you train on an image data set then you can teach the classifier about that data set right. Like you do with the regular data set say well OK I can now just train on adversarial examples and my classifier will be able to better classify these correctly right. This usually works it's called adversarial training and it's been a kind of standard method to make your classifier robust. They don't do that here. They don't do this. They simply want to say OK we now have we have a regular training procedure right like this except for what we change is here the training data set. We change this to in one case for example only robust images. So we've changed all the X to be only robust and we do the regular training procedure. And then we evaluate that resulting classifier here this thing we evaluate that. How does that behave. It's kind of a new approach where you modify the date the original data set. So what did they do. First of all they decompose this training data set into a version that is only robust features right. We assume we have such a procedure. We then train a regular neural network on that right. We train a regular neural network on this on this data set and what we get is two things. First of all good standard accuracy. What does good standard accuracy mean. It means that we we can test it on what's called the unmodified test set. So the the test set the original test set of the data set the test set belonging to this training data set. We can test it on that and it works just fine. Right. So that basically means that the robust features are predictive of the of the kind of they generalize well. It means that if I train a classifier only on robust features that can actually classify well to to the to the test set. Right. So that means that's standard accuracy standard accuracy is how well do I classify the test set just an unmodified test set. So they also obtain good robust accuracy which means that what is robust accuracy. Robust accuracy means your accuracy on adversarial examples of the test set. And usually classifiers are vulnerable to this classifier is usually obtained good standard accuracy but bad robust accuracy. But if I only train my classifier on what they call robust features then I all of a sudden retain good standard accuracy. But I also get good robust accuracy which means that. It gives pretty good support to their hypothesis that the adversarial examples are abusing the fact that the classifiers learn the non robust features. Since if I don't have any non robust features it means my classifier can't learn any non robust features which in turn means my classifier isn't vulnerable to adversarial attacks because they would abuse the fact that the classifier has learned about the non robust features. So that's pretty good evidence for their hypothesis. Second thing they do is they now create this on this modified data set where they only have non robust features. Right. 
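For reference, here is a minimal sketch of the adversarial training step mentioned at the start of this passage, the procedure the paper deliberately does not use in these experiments but which is the usual way a robust classifier is obtained. The `attack` is assumed to be an untargeted PGD routine that maximizes the true-label loss (the targeted sketch above can be adapted for this); optimizer and loss weighting are illustrative choices.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, attack):
    model.train()
    # Generate adversarial versions of the batch under the current model.
    x_adv = attack(model, x, y)
    optimizer.zero_grad()
    # Train on clean and adversarial examples together, both with the true labels.
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```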
So the only thing they have is non robust features. Again they train a standard neural network. They train just a regular neural network on that and they also get good standard accuracy. So this means that also the non robust features as we seen like the cats fur direction can lead to you generalize well to the test set since in the test set also the cats will have that property. But you get bad robust accuracy and this gives further support to their hypothesis if you train a classifier on only non robust features. They are features because they generalize well but they are very vulnerable because they're non robust. Right. So the classifier that has learned about non robust features is vulnerable. They didn't do a third experiment which I find pretty cool where they take they take the training image and of course it's an unmodified training image. So it's robust features will basically say this is a dog. It's non robust features will also say this is a dog because it's a training image of a dog. And what they then do is they derive from this dog an adversarial example towards the cat class. Right. So what does it mean in their hypothesis if their hypothesis is correct. It now means that the robust features still say it's a dog. We can also see this here right. The kind of big shape of the image still is a dog to us humans. But the non robust features will say it's a cat. Right. This hinges on their hypothesis that adversarial examples actually abuse the non robust features. Right. They create an adversarial example. So if their hypothesis is correct the non robust features now say that's a cat. So they derive an entire data set where they change every image to another image and they also change the labels accordingly. And then they train again a regular neural network on this and they look what happens on the unmodified test set. So the unmodified test set will. So imagine if you're the you're this classifier and what you get is an image X and it has robust features. That's a dog and has non robust features say cat and its label. You're asked to predict cat. Right. And then you see the next image and the next image X to the non robust features. Maybe it's derived from some other class it will say plain. But the robust the non robust features again say cat. Right. And you're asked to predict cat. So basically the constructed data set where the non robust features always agree with with the label but the robust features they don't. So naturally what you can expect is the classifier will learn to disregard the robust features because they're no longer useful. Right. But it will actually only will learn to view these features. It's different from before before we only had these features. Now we these features are still in there. Right. But they're not informative. So the classifier will naturally learn to pick up on the non robust features and classify and classify according to them so much that if we now test on the test set and we feed in an actual cat. Right. It's of course it's robust features will say cat and its non robust features will say cat and the classifier is able to accurately predict. This is a cat even though the all the images of cats it has seen during training were actually of basically of non cats of here a dog. So this is pretty cool and shows that kind of these these features that these non robust features that adversarial examples abuse since they're created by adversarial examples. They they are actually predictive and generalize to the test set. 
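Here is a sketch of how such a relabeled, non-robust-features-only training set could be constructed, reusing the targeted PGD attack sketched earlier. The deterministic choice of target class and the other details are assumptions for illustration; the paper also has a variant with randomly chosen target classes.

```python
import torch

def build_nonrobust_dataset(std_model, dataset, num_classes):
    """For every (x, y), perturb x toward a new target class t with a standard
    (non-robust) classifier and label it as t. The robust, human-visible
    features still say the original class; only the non-robust features agree
    with the new label."""
    new_xs, new_ys = [], []
    for x, y in dataset:
        x = x.unsqueeze(0)
        t = (y + 1) % num_classes            # deterministic relabeling (one variant)
        x_adv = pgd_attack(std_model, x, target_class=t)
        new_xs.append(x_adv.squeeze(0))
        new_ys.append(t)                     # label = what the non-robust features say
    return torch.stack(new_xs), torch.tensor(new_ys)

# Training a fresh network on (new_xs, new_ys) and evaluating it on the
# ORIGINAL test set is the experiment: good accuracy there indicates that the
# non-robust features alone generalize.
```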
So that's pretty pretty good evidence for their hypothesis so far. Now the kind of final remaining question is how do they create what is the procedure where they can create a robust and then basically non robust version of the data set. And here is kind of where we get into the into the sort of what I find. Yeah. So here you see basically examples of so this is an original image of a ship in the CIFAR 10 data set I believe. And this is a robust sample. So these are only robust features of the ship. And this is a ship made with only non robust features you see is actually a moose. But the non robust features have been changed to ship. So the way they construct a robust version of the data set. They have a formal definition but the way they do it is as follows. So and then they say OK here is where we where we get into the details. They say imagine we have a classifier. Right. The classifier outputs features and here we call them here they call them G which is the representation. It can be larger than features. It can be a bigger class. But in essence G is the features which then goes into the into the classifier and into the labels and so on. So the neural network outputs the features inputs some X. Now what if what if I have another X let's say X prime and I just initialize this with random noise. And if I feed this and I get G prime here and I try to make the two as close as possible by changing X. So I'm going to change my X here. Basically I'm going to change my image such that the outputs the features here match each other as close as possible. What does it mean? And I do this via back propagation right. I match these and I back propagate to X. I can do that with gradient descent. What happens is that my image X will basically pick up will match the image. My X prime will match the X in all the ways that are relevant for the features. Basically I will transfer all of the features from X to X prime. But nothing else right since I start with random. Now what if my classifier and that's what they do. What if the classifier is a robust classifier. So remember we talked about we can actually robustify a classifier by doing adversarial training. What if I have a classifier like such that is robust. If I input an X and it outputs me a feature representation of X. If the classifier is robust that representation will only contain robust features. And then if I have a second image X or and I started from random noise and I match the representation of X. And by changing XR basically I will transfer all of the robust features from X. But nothing else right. Given that I start from random noise here this means random noise has no features. That's the assumption. Random noise has no features since it's random noise. And if I transfer only the robust features basically what I've done is I've have now an image that I know has no non robust features. And only robust features of X. So that's how they derive a robustified version of X. Second how do they derive a non robust version. And that's even even easier if I have a classifier. A regular classifier and I want a non robust version of X. I have X input output G output some label. What I do is I simply derive an adversarial example of X like we did before adversarial example in here out here. And that gives me some X Y2 which is different from Y right. If I have a adversarial example then basically I've transferred. I've transferred the non robust features that lead to class Y2. 
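Here is a sketch of the robustification procedure just described: optimize an input so that a robust (adversarially trained) model's penultimate representation matches that of the original image. The optimizer, iteration count, the starting point (pure noise here), and the `get_representation` helper are simplifications and assumptions, not the paper's exact recipe.

```python
import torch

def robustify(robust_model, get_representation, x, iters=1000, lr=0.1):
    with torch.no_grad():
        target = get_representation(robust_model, x)   # robust features of x
    x_r = torch.rand_like(x)                           # start from random noise
    x_r.requires_grad_(True)
    opt = torch.optim.Adam([x_r], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        rep = get_representation(robust_model, x_r)
        loss = ((rep - target) ** 2).mean()            # match the representations
        loss.backward()
        opt.step()
        with torch.no_grad():
            x_r.clamp_(0.0, 1.0)                       # keep a valid image
    # Since the matched representation comes from a robust model, x_r should
    # carry (mostly) only the robust features of x.
    return x_r.detach()
```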
I've transferred the non robust features here while still maintaining the robust features from here. So if this is too abstract imagine here X is an image of a dog right dog. And I derive from it an adversarial image that now says airplane right. So the robust features will still be of a dog will still be of the original image. But the non robust features will be of the airplane class. So that's how I derive a non robust non robust version that has features of kind of one. Robust features of one class but non robust features of the other class. That's what you see up here with the moose right. The moose clearly has been started from the image of a moose and then has been has received non robust features from the ship class. And that's just your classic adversarial example procedure. So that's the that's the kind of procedure. And so what's kind of my criticism here if you look at the first part the first part where they say well in order to determine what the robust features are we actually need a classifier that's already robust. So we've seen before we have a we have a data set sorry let's go up here. They say aha here we have a data set right and we can disentangle this and then it will which color have we not used we have a data set. We only we robustify the data set to a robust data set. We train a standard neural network and that gives us good robust accuracy which is really cool because we don't do anything special during training and we still get good robust accuracy. But in order to do this procedure here this one you actually have to have a robust classifier right. You have to have this already robustified classifier which you have obtained by adversarially training the robust classifier. Basically what you're doing now is you take this adversarial training procedure which the point here is that you don't do anything different during training right. But here you take the adversarial training procedure and via training the robust classifier via changing this data set here you basically get good robust accuracy which to me is just a reflection that you've obtained the data set using this robust classifier in the first place. I mean yeah of course their their method gives a hint that I can actually this is actually due to things in the data set themselves right. But there and I mean that's really important because it surely means that it's not a point of let's say the the classifier itself but it's a point of the data set which also say OK. It also explains why these adversarial examples transfer between classifiers if you have two classifiers that are different but classify the same thing they're vulnerable to the same adversarial example which basically means it must be some property of the data set that these things learn. But to do then say we have a procedure to extract the robust features and if we only train on the robust features we become robust right as here but you obtain the robust features by using a robustified classifier which you have adversarially trained to me that's kind of kind of back door in adversarial training into this whole procedure. And yeah so that's that's kind of my first criticism my second criticism is the fact that you know I mean it's it's an interesting take on this but this whole notion this whole seeing of these features are robust these features are non robust is basically just reframing the problem of adversarial examples in terms of in terms of features. It says nothing why these features are there. It's just postulating that they're there. 
So what's my criticism here? Look at the first part, where they say that in order to determine what the robust features are, we actually need a classifier that is already robust. We've seen before: we take a data set, we robustify it into a robust data set, we train a standard neural network on that, and we get good robust accuracy, which is really cool because we don't do anything special during training and still end up robust. But in order to run that robustification procedure at all, you have to have a robust classifier, and you obtained that robust classifier by adversarial training. So while the stated point is that you don't do anything different during training, you have in effect taken the adversarial training procedure and moved it into the construction of the data set. That you then get good robust accuracy is, to me, partly just a reflection of the fact that the data set was built using that robust classifier in the first place. To be fair, their method does give a strong hint that this is really a property of the data itself rather than of a particular classifier, and that matters: it also explains why adversarial examples transfer between classifiers. If two different classifiers trained on the same task are vulnerable to the same adversarial example, that strongly suggests it is some property of the data set that both of them learn. But to then say that we have a procedure to extract the robust features, and that training only on the robust features makes us robust, when those robust features were obtained with an adversarially trained classifier, is to me a way of smuggling adversarial training in through the back door. So that's my first criticism. My second criticism is that, although it's an interesting take, this whole framing of "these features are robust, these features are non-robust" is basically just restating the problem of adversarial examples in terms of features. It says nothing about why these features are there; it just postulates that they exist. It says nothing about why and how classifiers pick up on them, or how this could be mitigated without first having a robustly trained network to extract the robust features. Very much remains unknown about these examples; it's a reframing of the problem, I feel. The experiments are cool, and they do show a lot about adversarial examples, but they are not an explanation. At least, that's my opinion. Alright, further down they construct a simplified theoretical setting where they can analyze this. They basically say: at the fundamental level you have classes, and the examples of each class are distributed with some mean and some covariance, say as elongated blobs. If I have two classes like this and I fit the separator, the best linear classifier, I can compute it exactly. But what happens when I ask for an adversarial example? An adversarial example means I can shift an example by a little bit and achieve a big change in the output. If I have a sample here, I may need to go a long way to reach the boundary in one direction, but only a very short way in another direction. And adversarial examples, as they are usually specified, allow a short step in any direction, a small ball around the sample. With this any-direction property there are directions in which the classification boundary is very, very close. So, they say, there is a fundamental misalignment between the geometry of the data, which is elongated, and the geometry of how we specify adversarial perturbations, which is equal in every direction, and that misalignment is what leads to adversarial examples. If I now adversarially train my network to be robust, that basically means I expand my data: I add the adversarial examples from that ball around each sample, so the class distributions widen, the separating hyperplane changes, and the geometry of the adversarial perturbations becomes much more aligned with the separating hyperplane. So that's their toy example of what is fundamentally going on: a misalignment between the geometry of the adversarial examples and the inherent geometry of the data. That's the theoretical analysis they do. And with that, I finish here, and I hope this was clear enough and goodbye. | [
{
"start": 0,
"end": 8,
"text": " Hi there! Today we're looking at Adversarial Examples Are Not Bugs, They Are Features by Andrew Elias et al."
},
{
"start": 8,
"end": 18,
"text": " So this paper is pretty interesting as a catchy title and we'll try to kind of dissect what it says."
},
{
"start": 18,
"end": 25,
"text": " So first of all, in the abstract they say adversarial examples have attracted significant attention,"
},
{
"start": 25,
"end": 30,
"text": " but the reasons for their existence and pervasiveness remain unclear."
},
{
"start": 30,
"end": 35,
"text": " So if you don't know what an adversarial example is, an adversarial example is basically the following."
},
{
"start": 35,
"end": 45,
"text": " Say you have an image classifier, right? Classifier, boom, neural network, image here, and the image is of a, let's say, a cat."
},
{
"start": 45,
"end": 57,
"text": " Right, this is my best attempt at a cat, bang, cat. And you feed it through the classifier and the classifier says cat."
},
{
"start": 57,
"end": 67,
"text": " Now if you perturb this image, if you derive an image from it and you perturb it just very slightly, very subtly,"
},
{
"start": 67,
"end": 75,
"text": " so you introduce some pixels here, there, here, there, right, you change some pixels in a very targeted way,"
},
{
"start": 75,
"end": 85,
"text": " and you feed that new image through here, then the classifier will say dog or something really, you can make it say anything like airplane or,"
},
{
"start": 85,
"end": 92,
"text": " I don't know, sky or whatever you want. So these are called adversarial examples."
},
{
"start": 92,
"end": 100,
"text": " And it's true, their existence and pervade, the reasons for their existence and pervasiveness remain unclear."
},
{
"start": 100,
"end": 107,
"text": " They say we demonstrate that adversarial examples can be directly attributed to the presence of non-robust features."
},
{
"start": 107,
"end": 114,
"text": " So they're basically, their paper is about these non-robust features and they define later what they mean exactly."
},
{
"start": 114,
"end": 126,
"text": " But here they say features derived from patterns in the data distribution that are highly predictive, yet brittle and incomprehensible to humans."
},
{
"start": 126,
"end": 135,
"text": " And this is pretty neat. So the fundamental idea, as I understand it, and I'm going to take this away right here,"
},
{
"start": 135,
"end": 147,
"text": " that if you have images, let's say here of cats, and I'm going to draw another one over here, if you have an image, say, of cats,"
},
{
"start": 147,
"end": 158,
"text": " there is multiple features in this image and the feature is something that the classifier can pick up on and kind of learn to,"
},
{
"start": 158,
"end": 170,
"text": " this is a horrible cat, learn to classify images from. So features that we humans generally use are a cat has ear,"
},
{
"start": 170,
"end": 178,
"text": " ear, eyes, whiskers, right, and the general relationship to each other of these things."
},
{
"start": 178,
"end": 187,
"text": " This is what constitutes a cat. And that's how we classify it. But also they say there are other features that are also very indicative, right?"
},
{
"start": 187,
"end": 203,
"text": " If you think what differentiates a cat from a dog and a dog here, let's pick fluffy ears, also eyes, yeah, not going to go further with the dog too much."
},
{
"start": 203,
"end": 214,
"text": " What differentiates a cat from a dog? And we, of course, we would say, well, the head shape is different and the ears are different and the relationship to them,"
},
{
"start": 214,
"end": 223,
"text": " to each other are different, but it could also be, and this is a simplistic right now, right? But it's also that cats, for example, have different fur than dogs."
},
{
"start": 223,
"end": 234,
"text": " And yeah, being overly simplistic here, but bear with me. So let's say in our hypothetical world, cats have fur that goes like this, left to right, right?"
},
{
"start": 234,
"end": 248,
"text": " Every hair is basically vertical, sorry, horizontal. If you look at it like that and dog fur, on the other hand, is always like this, right?"
},
{
"start": 248,
"end": 260,
"text": " This is vertical, right? Top to bottom. And so the classifier might just as well pick up on the fur direction in order to classify images, right?"
},
{
"start": 260,
"end": 268,
"text": " Since all cats have that type of fur and all dogs have that other type of fur, the classifier might just as well pick up on that, right?"
},
{
"start": 268,
"end": 272,
"text": " And to us humans, we don't really pay attention to these things because they're minute, right?"
},
{
"start": 272,
"end": 280,
"text": " You don't look at the directions of individual hairs to classify in an animal to cat or dog."
},
{
"start": 280,
"end": 287,
"text": " You would much rather go for these kind of large features like where are the ears, how do they look and so on."
},
{
"start": 287,
"end": 295,
"text": " But a classifier, there's actually you can make an argument that the classifier would more likely pick up on the fur direction, right?"
},
{
"start": 295,
"end": 304,
"text": " In order to in order to classify since we're using convolutional neural networks and they're generally neighborhood pixel neighborhood operators."
},
{
"start": 304,
"end": 313,
"text": " It can much easier pick up on these patterns than it can on the general relationship of the of the large features."
},
{
"start": 313,
"end": 325,
"text": " So if a classifier now learns that cats always have fur like this and dogs always have fur like that, what we could do is we can go over here to the dog and change its fur, right?"
},
{
"start": 325,
"end": 329,
"text": " Change in the image, change its fur to this direction."
},
{
"start": 329,
"end": 335,
"text": " Now, to us humans, that would still very much look like a dog because the fur direction is almost imperceptible."
},
{
"start": 335,
"end": 342,
"text": " But to the classifier that has only learned, hey, a cat always has this type of fur and the dog always has that type of fur."
},
{
"start": 342,
"end": 346,
"text": " That new image would totally look like a cat."
},
{
"start": 346,
"end": 355,
"text": " Right. So this paper argues exactly that this paper argues that in the data set, there are features and these are real features like this."
},
{
"start": 355,
"end": 361,
"text": " This actually could be the case that cats fur is always like that and dogs fur is always like this."
},
{
"start": 361,
"end": 365,
"text": " It could be the case and the classifier could pick up on this."
},
{
"start": 365,
"end": 376,
"text": " Right. And then the adversarial examples, the reason why they exist is because the classifier has picked up on these imperceptible features."
},
{
"start": 376,
"end": 383,
"text": " And so by changing the features, we can change the classifiers decision."
},
{
"start": 383,
"end": 386,
"text": " And without changing the image in a large scale."
},
{
"start": 386,
"end": 397,
"text": " So they they say that they make this hypothesis and they kind of they say, OK, we established a widespread existence in standard data sets."
},
{
"start": 397,
"end": 401,
"text": " So they kind of give supporting evidence for their hypothesis."
},
{
"start": 401,
"end": 410,
"text": " And then they say, finally, we present a simple setting, which is a theoretical setting where we can rigorously tie the phenomena"
},
{
"start": 410,
"end": 418,
"text": " we observe to a misalignment between the human specified notion of robustness and the inherent geometry of the data."
},
{
"start": 418,
"end": 421,
"text": " All right. So it's kind of different pieces of the of this paper."
},
{
"start": 421,
"end": 424,
"text": " And we're going to look at them in succession."
},
{
"start": 424,
"end": 438,
"text": " So the introduction, we largely skip, except that their main claim here is specifically we claim that adversarial vulnerability is a direct result of our models, sensitivity to well generalizing features in the data."
},
{
"start": 438,
"end": 445,
"text": " So that's the core point, I think, is well generalizing features, which is what we mentioned."
},
{
"start": 445,
"end": 458,
"text": " These are features that actually describe the data well, but but features that are kind of imperceptibly small to humans or that don't fit our notion of robustness."
},
{
"start": 458,
"end": 465,
"text": " All right. So they go on and they define more clearly what they mean here."
},
{
"start": 465,
"end": 468,
"text": " Here, whenever we talk of a feature, right?"
},
{
"start": 468,
"end": 475,
"text": " Remember, we had the our classifier here, then we input an image and the image is called X."
},
{
"start": 475,
"end": 484,
"text": " Right. And that classifier, usually, if we look at it closer, consists of multiple layers of interconnected neurons, whatever."
},
{
"start": 484,
"end": 490,
"text": " And the last layer will be an output layer into different classes."
},
{
"start": 490,
"end": 491,
"text": " Right."
},
{
"start": 491,
"end": 503,
"text": " And so the features, when they say a feature, what they mean specifically is the last here, the last representation before it goes into the classifier."
},
{
"start": 503,
"end": 510,
"text": " So the way you would classify them and here they just establish a two class setting."
},
{
"start": 510,
"end": 520,
"text": " The way you would establish that is you have feature one, feature two, feature three, and you have a weight vector W1 for each feature W2, W3."
},
{
"start": 520,
"end": 526,
"text": " You make the inner product and that will give you a Y hat."
},
{
"start": 526,
"end": 530,
"text": " Basically, if that is high, you say it's class one."
},
{
"start": 530,
"end": 533,
"text": " If that is low, you say it's class minus one."
},
{
"start": 533,
"end": 538,
"text": " So the classes here are plus one and minus one, just to make things simple."
},
{
"start": 538,
"end": 547,
"text": " So but you see the features are basically what comes out after these layers, what is then used to make a linear classification."
},
{
"start": 547,
"end": 552,
"text": " This last thing is basically just a logistic regression."
},
{
"start": 552,
"end": 558,
"text": " So you can think of the features as the output of the neural network, but before it goes into the classifier."
},
{
"start": 558,
"end": 564,
"text": " So a feature basically since then, it's linearly classified."
},
{
"start": 564,
"end": 569,
"text": " If the feature is high, it will give a signal for one class."
},
{
"start": 569,
"end": 576,
"text": " And if a feature is low, it will give a signal for the other class, depending on, of course, if this W is negative or positive."
},
{
"start": 576,
"end": 583,
"text": " All right, so they say we call a feature row useful."
},
{
"start": 583,
"end": 588,
"text": " And if this thing holds here, what is this thing?"
},
{
"start": 588,
"end": 591,
"text": " This thing means so the expectation over the dates."
},
{
"start": 591,
"end": 598,
"text": " So generally in the data set, this must hold Y times the feature."
},
{
"start": 598,
"end": 599,
"text": " So why is the class?"
},
{
"start": 599,
"end": 601,
"text": " And remember, it's plus or minus one."
},
{
"start": 601,
"end": 613,
"text": " And the feature, as we've seen, is some some number Y times a feature must be higher than some some number."
},
{
"start": 613,
"end": 615,
"text": " So what does it mean when a product is high?"
},
{
"start": 615,
"end": 619,
"text": " It means either both are high or both are low."
},
{
"start": 619,
"end": 622,
"text": " So they're correlated. That's what that means."
},
{
"start": 622,
"end": 643,
"text": " So basically, this is says a feature F is useful if whenever it an example, X is of class one, if it's class class one or let's if it if Y is one plus one, then F is high."
},
{
"start": 643,
"end": 651,
"text": " And whenever Y is minus one, then F is low, which means it's high in the negative direction."
},
{
"start": 651,
"end": 655,
"text": " Right. So this is this is our this is intuitive."
},
{
"start": 655,
"end": 665,
"text": " Right. If a feature is useful, it means it should say one thing in samples of class one, then it should say another thing in samples of class two."
},
{
"start": 665,
"end": 671,
"text": " Then I can actually use the feature to make a decision when it's, you know, very correlated with the class."
},
{
"start": 671,
"end": 676,
"text": " So that, you know, that makes perfect sense."
},
{
"start": 676,
"end": 681,
"text": " So that's kind of when is a feature useful if it correlates with the class label?"
},
{
"start": 681,
"end": 689,
"text": " Yes. Cool. But the usefulness simply any feature basically that classifier will extract will be useful."
},
{
"start": 689,
"end": 693,
"text": " That's an assumption we can make. Otherwise, the classifier wouldn't extract it."
},
{
"start": 693,
"end": 701,
"text": " So the neural network here, that's an assumption, will only extract useful features."
},
{
"start": 701,
"end": 714,
"text": " Right. Because the non-useful features, there would simply be no reason for it to extract them because they don't contribute to solving the task, because they're not correlated with an output class."
},
{
"start": 714,
"end": 721,
"text": " Right. So next, they define robust, robustly useful features."
},
{
"start": 721,
"end": 725,
"text": " So in addition to being useful, they're now also robust."
},
{
"start": 725,
"end": 735,
"text": " What does it mean? Again, we want a correlation of why and the feature to be higher than some constant."
},
{
"start": 735,
"end": 745,
"text": " But not only the feature of the image X, but the feature of the image X that has been perturbed by a small perturbation."
},
{
"start": 745,
"end": 750,
"text": " So and we take the infinum here over a class of perturbations."
},
{
"start": 750,
"end": 755,
"text": " Of course, this class of perturbations is exactly the adversarial perturbations."
},
{
"start": 755,
"end": 764,
"text": " Basically, what this means is it says that however we try to perturb X, right, and the infinum here means that the minimum correlation,"
},
{
"start": 764,
"end": 777,
"text": " however we try to make the feature not correlated with Y, however much we try, we can't get it lower than some some gamma, some number, right?"
},
{
"start": 777,
"end": 787,
"text": " We can't we can't get it down. So whatever we try to make the feature bad for the classifier, basically, we can't."
},
{
"start": 787,
"end": 794,
"text": " If this holds for a feature, if this is the case, then we call that feature a robust feature."
},
{
"start": 794,
"end": 804,
"text": " Right. That feature is robustly useful if it correlates, no matter how hard we try to make it not correlate."
},
{
"start": 804,
"end": 813,
"text": " And of course, a non robust features, so a useful non robust feature is a feature which is useful."
},
{
"start": 813,
"end": 820,
"text": " You see here is useful. But is not gamma robust feature for any gamma."
},
{
"start": 820,
"end": 825,
"text": " So it is a feature that is useful like the cat fur."
},
{
"start": 825,
"end": 830,
"text": " Right. So this here, an example of this would be that the cat's eyes and ear position."
},
{
"start": 830,
"end": 839,
"text": " Right. We can't just make a small perturbation for the image and make the ears be somewhere completely else."
},
{
"start": 839,
"end": 842,
"text": " That's just that would require a large perturbation of the image."
},
{
"start": 842,
"end": 847,
"text": " So the position of the ears and eyes are pretty robust features."
},
{
"start": 847,
"end": 864,
"text": " But here the cat's fur, no matter how no matter how small we we make this this gamma, we can always kind of change the fur to make the feature not to make the feature not useful."
},
{
"start": 864,
"end": 873,
"text": " Right. If we can change the cat fur into a dog fur and the dog fur into a cat fur, then the feature will become not useful anymore."
},
{
"start": 873,
"end": 879,
"text": " Because we can, you know, we can we can change that arbitrarily for any image and then the classifier will have no clue."
},
{
"start": 879,
"end": 884,
"text": " It can't be like, well, this fur could be of any of any class."
},
{
"start": 884,
"end": 886,
"text": " Right. So the feature is not useful anymore."
},
{
"start": 886,
"end": 895,
"text": " So this is a non robust feature. The technique you can say any feature that is useful but not robust is a non robust feature."
},
{
"start": 895,
"end": 901,
"text": " All right. So this is kind of the definition of what robust and non robust features are."
},
{
"start": 901,
"end": 914,
"text": " Yeah. Remember, maybe remember robust features like position of the ears and their shape and non robust features would be which direction are the individual hairs in the fur going."
},
{
"start": 914,
"end": 921,
"text": " Right. And in our world where cat fur is going different ways than dog fur."
},
{
"start": 921,
"end": 930,
"text": " So they now go into experimental evidence for their for their hypothesis."
},
{
"start": 930,
"end": 939,
"text": " And here you have to understand they do two experiments which give pretty good indication that their hypothesis is actually correct."
},
{
"start": 939,
"end": 944,
"text": " And what you have to understand before this is is two things."
},
{
"start": 944,
"end": 957,
"text": " First of all, here you basically you just have to assume that they already they have some procedure where they can do the following where they can take an image of the training data set"
},
{
"start": 957,
"end": 962,
"text": " and they can decompose it into its robust and non robust features."
},
{
"start": 962,
"end": 966,
"text": " Right. Don't I mean don't ask yet how they do this."
},
{
"start": 966,
"end": 971,
"text": " But they can decompose it into these two parts."
},
{
"start": 971,
"end": 975,
"text": " Right. So that's assumption one. They have a procedure that can actually do that."
},
{
"start": 975,
"end": 985,
"text": " And then number two is what they what they do here is basically the general theme of these experiments is they they have a training data set."
},
{
"start": 985,
"end": 991,
"text": " Right. This is the original training. They create a derived version of it."
},
{
"start": 991,
"end": 996,
"text": " So let's put a tick here. This is a derived version of the data set."
},
{
"start": 996,
"end": 1004,
"text": " Then they train a regular neural network with that."
},
{
"start": 1004,
"end": 1008,
"text": " So what you can do with a neural network if you train one."
},
{
"start": 1008,
"end": 1021,
"text": " All right. What you usually do is you feed images X you feed images in it gives you some output Y hat and you say well but I know why is the true label."
},
{
"start": 1021,
"end": 1024,
"text": " So I feed an image of a cat that the network says airplane."
},
{
"start": 1024,
"end": 1034,
"text": " You say well but this should be a cat. So please make this why more to be more to be."
},
{
"start": 1034,
"end": 1040,
"text": " Please make this why had more be like why. And then you have a loss function here."
},
{
"start": 1040,
"end": 1042,
"text": " You say this is wrong. Please correct this."
},
{
"start": 1042,
"end": 1047,
"text": " You back propagate and all the network in here will update to make that a bit more likely."
},
{
"start": 1047,
"end": 1049,
"text": " That's how you train usually in our network."
},
{
"start": 1049,
"end": 1063,
"text": " Now what you can do is if you want to become robust adversarial examples you can do what is called adversarial training which means that you have the same network here."
},
{
"start": 1063,
"end": 1080,
"text": " But of each of the training data points you create a derived version an adversarial example to that to this X you feed the adversarial examples through the network together with the original examples."
},
{
"start": 1080,
"end": 1090,
"text": " Then this will give you some why hat to and then you say but this should also be equal to why."
},
{
"start": 1090,
"end": 1096,
"text": " Basically you train the classifier also on adversarial examples right."
},
{
"start": 1096,
"end": 1106,
"text": " Since the hypothesis is if you train on an image data set then you can teach the classifier about that data set right."
},
{
"start": 1106,
"end": 1118,
"text": " Like you do with the regular data set say well OK I can now just train on adversarial examples and my classifier will be able to better classify these correctly right."
},
{
"start": 1118,
"end": 1124,
"text": " This usually works it's called adversarial training and it's been a kind of standard method to make your classifier robust."
},
{
"start": 1124,
"end": 1127,
"text": " They don't do that here. They don't do this."
},
{
"start": 1127,
"end": 1139,
"text": " They simply want to say OK we now have we have a regular training procedure right like this except for what we change is here the training data set."
},
{
"start": 1139,
"end": 1152,
"text": " We change this to in one case for example only robust images. So we've changed all the X to be only robust and we do the regular training procedure."
},
{
"start": 1152,
"end": 1159,
"text": " And then we evaluate that resulting classifier here this thing we evaluate that."
},
{
"start": 1159,
"end": 1165,
"text": " How does that behave. It's kind of a new approach where you modify the date the original data set."
},
{
"start": 1165,
"end": 1177,
"text": " So what did they do. First of all they decompose this training data set into a version that is only robust features right."
},
{
"start": 1177,
"end": 1186,
"text": " We assume we have such a procedure. We then train a regular neural network on that right."
},
{
"start": 1186,
"end": 1195,
"text": " We train a regular neural network on this on this data set and what we get is two things."
},
{
"start": 1195,
"end": 1199,
"text": " First of all good standard accuracy. What does good standard accuracy mean."
},
{
"start": 1199,
"end": 1208,
"text": " It means that we we can test it on what's called the unmodified test set."
},
{
"start": 1208,
"end": 1215,
"text": " So the the test set the original test set of the data set the test set belonging to this training data set."
},
{
"start": 1215,
"end": 1219,
"text": " We can test it on that and it works just fine. Right."
},
{
"start": 1219,
"end": 1228,
"text": " So that basically means that the robust features are predictive of the of the kind of they generalize well."
},
{
"start": 1228,
"end": 1239,
"text": " It means that if I train a classifier only on robust features that can actually classify well to to the to the test set."
},
{
"start": 1239,
"end": 1248,
"text": " Right. So that means that's standard accuracy standard accuracy is how well do I classify the test set just an unmodified test set."
},
{
"start": 1248,
"end": 1254,
"text": " So they also obtain good robust accuracy which means that what is robust accuracy."
},
{
"start": 1254,
"end": 1261,
"text": " Robust accuracy means your accuracy on adversarial examples of the test set."
},
{
"start": 1261,
"end": 1270,
"text": " And usually classifiers are vulnerable to this classifier is usually obtained good standard accuracy but bad robust accuracy."
},
{
"start": 1270,
"end": 1279,
"text": " But if I only train my classifier on what they call robust features then I all of a sudden retain good standard accuracy."
},
{
"start": 1279,
"end": 1287,
"text": " But I also get good robust accuracy which means that."
},
{
"start": 1287,
"end": 1296,
"text": " It gives pretty good support to their hypothesis that the adversarial examples are abusing the fact that the classifiers learn the non robust features."
},
{
"start": 1296,
"end": 1313,
"text": " Since if I don't have any non robust features it means my classifier can't learn any non robust features which in turn means my classifier isn't vulnerable to adversarial attacks because they would abuse the fact that the classifier has learned about the non robust features."
},
{
"start": 1313,
"end": 1318,
"text": " So that's pretty good evidence for their hypothesis."
},
{
"start": 1318,
"end": 1329,
"text": " Second thing they do is they now create this on this modified data set where they only have non robust features."
},
{
"start": 1329,
"end": 1332,
"text": " Right. So the only thing they have is non robust features."
},
{
"start": 1332,
"end": 1335,
"text": " Again they train a standard neural network."
},
{
"start": 1335,
"end": 1341,
"text": " They train just a regular neural network on that and they also get good standard accuracy."
},
{
"start": 1341,
"end": 1357,
"text": " So this means that also the non robust features as we seen like the cats fur direction can lead to you generalize well to the test set since in the test set also the cats will have that property."
},
{
"start": 1357,
"end": 1368,
"text": " But you get bad robust accuracy and this gives further support to their hypothesis if you train a classifier on only non robust features."
},
{
"start": 1368,
"end": 1376,
"text": " They are features because they generalize well but they are very vulnerable because they're non robust."
},
{
"start": 1376,
"end": 1383,
"text": " Right. So the classifier that has learned about non robust features is vulnerable."
},
{
"start": 1383,
"end": 1394,
"text": " They didn't do a third experiment which I find pretty cool where they take they take the training image and of course it's an unmodified training image."
},
{
"start": 1394,
"end": 1399,
"text": " So it's robust features will basically say this is a dog."
},
{
"start": 1399,
"end": 1406,
"text": " It's non robust features will also say this is a dog because it's a training image of a dog."
},
{
"start": 1406,
"end": 1415,
"text": " And what they then do is they derive from this dog an adversarial example towards the cat class."
},
{
"start": 1415,
"end": 1422,
"text": " Right. So what does it mean in their hypothesis if their hypothesis is correct."
},
{
"start": 1422,
"end": 1427,
"text": " It now means that the robust features still say it's a dog."
},
{
"start": 1427,
"end": 1429,
"text": " We can also see this here right."
},
{
"start": 1429,
"end": 1437,
"text": " The kind of big shape of the image still is a dog to us humans."
},
{
"start": 1437,
"end": 1441,
"text": " But the non robust features will say it's a cat."
},
{
"start": 1441,
"end": 1447,
"text": " Right. This hinges on their hypothesis that adversarial examples actually abuse the non robust features."
},
{
"start": 1447,
"end": 1456,
"text": " Right. They create an adversarial example. So if their hypothesis is correct the non robust features now say that's a cat."
},
{
"start": 1456,
"end": 1465,
"text": " So they derive an entire data set where they change every image to another image and they also change the labels accordingly."
},
{
"start": 1465,
"end": 1475,
"text": " And then they train again a regular neural network on this and they look what happens on the unmodified test set."
},
{
"start": 1475,
"end": 1487,
"text": " So the unmodified test set will. So imagine if you're the you're this classifier and what you get is an image X and it has robust features."
},
{
"start": 1487,
"end": 1493,
"text": " That's a dog and has non robust features say cat and its label."
},
{
"start": 1493,
"end": 1501,
"text": " You're asked to predict cat. Right. And then you see the next image and the next image X to the non robust features."
},
{
"start": 1501,
"end": 1509,
"text": " Maybe it's derived from some other class it will say plain. But the robust the non robust features again say cat."
},
{
"start": 1509,
"end": 1522,
"text": " Right. And you're asked to predict cat. So basically the constructed data set where the non robust features always agree with with the label but the robust features they don't."
},
{
"start": 1522,
"end": 1532,
"text": " So naturally what you can expect is the classifier will learn to disregard the robust features because they're no longer useful."
},
{
"start": 1532,
"end": 1538,
"text": " Right. But it will actually only will learn to view these features."
},
{
"start": 1538,
"end": 1544,
"text": " It's different from before before we only had these features. Now we these features are still in there. Right."
},
{
"start": 1544,
"end": 1559,
"text": " But they're not informative. So the classifier will naturally learn to pick up on the non robust features and classify and classify according to them so much that if we now test on the test set and we feed in an actual cat."
},
{
"start": 1559,
"end": 1568,
"text": " Right. It's of course it's robust features will say cat and its non robust features will say cat and the classifier is able to accurately predict."
},
{
"start": 1568,
"end": 1579,
"text": " This is a cat even though the all the images of cats it has seen during training were actually of basically of non cats of here a dog."
},
{
"start": 1579,
"end": 1592,
"text": " So this is pretty cool and shows that kind of these these features that these non robust features that adversarial examples abuse since they're created by adversarial examples."
},
{
"start": 1592,
"end": 1599,
"text": " They they are actually predictive and generalize to the test set."
},
{
"start": 1599,
"end": 1603,
"text": " So that's pretty pretty good evidence for their hypothesis so far."
},
{
"start": 1603,
"end": 1617,
"text": " Now the kind of final remaining question is how do they create what is the procedure where they can create a robust and then basically non robust version of the data set."
},
{
"start": 1617,
"end": 1623,
"text": " And here is kind of where we get into the into the sort of what I find."
},
{
"start": 1623,
"end": 1632,
"text": " Yeah. So here you see basically examples of so this is an original image of a ship in the CIFAR 10 data set I believe."
},
{
"start": 1632,
"end": 1637,
"text": " And this is a robust sample."
},
{
"start": 1637,
"end": 1639,
"text": " So these are only robust features of the ship."
},
{
"start": 1639,
"end": 1644,
"text": " And this is a ship made with only non robust features you see is actually a moose."
},
{
"start": 1644,
"end": 1649,
"text": " But the non robust features have been changed to ship."
},
{
"start": 1649,
"end": 1655,
"text": " So the way they construct a robust version of the data set."
},
{
"start": 1655,
"end": 1661,
"text": " They have a formal definition but the way they do it is as follows."
},
{
"start": 1661,
"end": 1666,
"text": " So and then they say OK here is where we where we get into the details."
},
{
"start": 1666,
"end": 1670,
"text": " They say imagine we have a classifier."
},
{
"start": 1670,
"end": 1678,
"text": " Right. The classifier outputs features and here we call them here they call them G which is the representation."
},
{
"start": 1678,
"end": 1680,
"text": " It can be larger than features."
},
{
"start": 1680,
"end": 1682,
"text": " It can be a bigger class."
},
{
"start": 1682,
"end": 1690,
"text": " But in essence G is the features which then goes into the into the classifier and into the labels and so on."
},
{
"start": 1690,
"end": 1695,
"text": " So the neural network outputs the features inputs some X."
},
{
"start": 1695,
"end": 1705,
"text": " Now what if what if I have another X let's say X prime and I just initialize this with random noise."
},
{
"start": 1705,
"end": 1715,
"text": " And if I feed this and I get G prime here and I try to make the two as close as possible by changing X."
},
{
"start": 1715,
"end": 1717,
"text": " So I'm going to change my X here."
},
{
"start": 1717,
"end": 1725,
"text": " Basically I'm going to change my image such that the outputs the features here match each other as close as possible."
},
{
"start": 1725,
"end": 1728,
"text": " What does it mean? And I do this via back propagation right."
},
{
"start": 1728,
"end": 1731,
"text": " I match these and I back propagate to X."
},
{
"start": 1731,
"end": 1734,
"text": " I can do that with gradient descent."
},
{
"start": 1734,
"end": 1744,
"text": " What happens is that my image X will basically pick up will match the image."
},
{
"start": 1744,
"end": 1751,
"text": " My X prime will match the X in all the ways that are relevant for the features."
},
{
"start": 1751,
"end": 1758,
"text": " Basically I will transfer all of the features from X to X prime."
},
{
"start": 1758,
"end": 1761,
"text": " But nothing else right since I start with random."
},
{
"start": 1761,
"end": 1766,
"text": " Now what if my classifier and that's what they do."
},
{
"start": 1766,
"end": 1770,
"text": " What if the classifier is a robust classifier."
},
{
"start": 1770,
"end": 1776,
"text": " So remember we talked about we can actually robustify a classifier by doing adversarial training."
},
{
"start": 1776,
"end": 1780,
"text": " What if I have a classifier like such that is robust."
},
{
"start": 1780,
"end": 1786,
"text": " If I input an X and it outputs me a feature representation of X."
},
{
"start": 1786,
"end": 1792,
"text": " If the classifier is robust that representation will only contain robust features."
},
{
"start": 1792,
"end": 1802,
"text": " And then if I have a second image X or and I started from random noise and I match the representation of X."
},
{
"start": 1802,
"end": 1811,
"text": " And by changing XR basically I will transfer all of the robust features from X."
},
{
"start": 1811,
"end": 1813,
"text": " But nothing else right."
},
{
"start": 1813,
"end": 1818,
"text": " Given that I start from random noise here this means random noise has no features."
},
{
"start": 1818,
"end": 1822,
"text": " That's the assumption. Random noise has no features since it's random noise."
},
{
"start": 1822,
"end": 1834,
"text": " And if I transfer only the robust features basically what I've done is I've have now an image that I know has no non robust features."
},
{
"start": 1834,
"end": 1838,
"text": " And only robust features of X."
},
{
"start": 1838,
"end": 1845,
"text": " So that's how they derive a robustified version of X."
},
{
"start": 1845,
"end": 1851,
"text": " Second how do they derive a non robust version."
},
{
"start": 1851,
"end": 1858,
"text": " And that's even even easier if I have a classifier."
},
{
"start": 1858,
"end": 1865,
"text": " A regular classifier and I want a non robust version of X."
},
{
"start": 1865,
"end": 1871,
"text": " I have X input output G output some label."
},
{
"start": 1871,
"end": 1882,
"text": " What I do is I simply derive an adversarial example of X like we did before adversarial example in here out here."
},
{
"start": 1882,
"end": 1887,
"text": " And that gives me some X Y2 which is different from Y right."
},
{
"start": 1887,
"end": 1895,
"text": " If I have a adversarial example then basically I've transferred."
},
{
"start": 1895,
"end": 1901,
"text": " I've transferred the non robust features that lead to class Y2."
},
{
"start": 1901,
"end": 1909,
"text": " I've transferred the non robust features here while still maintaining the robust features from here."
},
{
"start": 1909,
"end": 1916,
"text": " So if this is too abstract imagine here X is an image of a dog right dog."
},
{
"start": 1916,
"end": 1925,
"text": " And I derive from it an adversarial image that now says airplane right."
},
{
"start": 1925,
"end": 1932,
"text": " So the robust features will still be of a dog will still be of the original image."
},
{
"start": 1932,
"end": 1938,
"text": " But the non robust features will be of the airplane class."
},
{
"start": 1938,
"end": 1948,
"text": " So that's how I derive a non robust non robust version that has features of kind of one."
},
{
"start": 1948,
"end": 1952,
"text": " Robust features of one class but non robust features of the other class."
},
{
"start": 1952,
"end": 1955,
"text": " That's what you see up here with the moose right."
},
{
"start": 1955,
"end": 1963,
"text": " The moose clearly has been started from the image of a moose and then has been has received non robust features from the ship class."
},
{
"start": 1963,
"end": 1968,
"text": " And that's just your classic adversarial example procedure."
},
{
"start": 1968,
"end": 1971,
"text": " So that's the that's the kind of procedure."
},
{
"start": 1971,
"end": 1986,
"text": " And so what's kind of my criticism here if you look at the first part the first part where they say well in order to determine what the robust features are we actually need a classifier that's already robust."
},
{
"start": 1986,
"end": 1994,
"text": " So we've seen before we have a we have a data set sorry let's go up here."
},
{
"start": 1994,
"end": 2005,
"text": " They say aha here we have a data set right and we can disentangle this and then it will which color have we not used we have a data set."
},
{
"start": 2005,
"end": 2009,
"text": " We only we robustify the data set to a robust data set."
},
{
"start": 2009,
"end": 2019,
"text": " We train a standard neural network and that gives us good robust accuracy which is really cool because we don't do anything special during training and we still get good robust accuracy."
},
{
"start": 2019,
"end": 2030,
"text": " But in order to do this procedure here this one you actually have to have a robust classifier right."
},
{
"start": 2030,
"end": 2043,
"text": " You have to have this already robustified classifier which you have obtained by adversarially training the robust classifier."
},
{
"start": 2043,
"end": 2052,
"text": " Basically what you're doing now is you take this adversarial training procedure which the point here is that you don't do anything different during training right."
},
{
"start": 2052,
"end": 2069,
"text": " But here you take the adversarial training procedure and via training the robust classifier via changing this data set here you basically get good robust accuracy which to me is just a reflection that you've obtained the data set using this robust classifier in the first place."
},
{
"start": 2069,
"end": 2082,
"text": " I mean yeah of course their their method gives a hint that I can actually this is actually due to things in the data set themselves right."
},
{
"start": 2082,
"end": 2097,
"text": " But there and I mean that's really important because it surely means that it's not a point of let's say the the classifier itself but it's a point of the data set which also say OK."
},
{
"start": 2097,
"end": 2114,
"text": " It also explains why these adversarial examples transfer between classifiers if you have two classifiers that are different but classify the same thing they're vulnerable to the same adversarial example which basically means it must be some property of the data set that these things learn."
},
{
"start": 2114,
"end": 2137,
"text": " But to do then say we have a procedure to extract the robust features and if we only train on the robust features we become robust right as here but you obtain the robust features by using a robustified classifier which you have adversarially trained to me that's kind of kind of back door in adversarial training into this whole procedure."
},
{
"start": 2137,
"end": 2161,
"text": " And yeah so that's that's kind of my first criticism my second criticism is the fact that you know I mean it's it's an interesting take on this but this whole notion this whole seeing of these features are robust these features are non robust is basically just reframing the problem of adversarial examples in terms of in terms of features."
},
{
"start": 2161,
"end": 2167,
"text": " It says nothing why these features are there."
},
{
"start": 2167,
"end": 2195,
"text": " It's just postulating that they're there. It says nothing why they're there. It says nothing about why the classifiers pick up on them or how they do it or how you know how this is to be mitigated without first having a robustly trained network to extract the robust features."
},
{
"start": 2195,
"end": 2198,
"text": " It's very much widely or not."
},
{
"start": 2198,
"end": 2205,
"text": " Things are very much widely not known about these samples it's just a reframing of the problem, I feel."
},
{
"start": 2205,
"end": 2215,
"text": " And it's cool experiments I mean they, it does show some a lot of things about these adversarial examples but certainly not an explanation."
},
{
"start": 2215,
"end": 2219,
"text": " I find, at least that's my opinion."
},
{
"start": 2219,
"end": 2234,
"text": " Alright, so down here then they show that they make an kind of simplified version of this a theoretical setting where they can analyze this."
},
{
"start": 2234,
"end": 2254,
"text": " And they basically say, okay, this is generally what happens at the fundamental level at the fundamental level, you have classes, and let's say the classes are distributed like, like this right this these are the examples in the data set and they're distributed like that right."
},
{
"start": 2254,
"end": 2258,
"text": " Mean, and you have some covariance."
},
{
"start": 2258,
"end": 2277,
"text": " So they're distributed like that. If I have two classes like this, such as here, right, and they're distributed like that, and I create like the separator, the linear classifier, the linear classifier will classify like this it will be like super this is the best linear classifier."
},
{
"start": 2277,
"end": 2279,
"text": " Right, we can calculate this accurately."
},
{
"start": 2279,
"end": 2283,
"text": " But what do I say when I say okay."
},
{
"start": 2283,
"end": 2294,
"text": " I want an adversarial example adversarial examples means that I can shift my examples by a little bit but achieve a big change in output."
},
{
"start": 2294,
"end": 2298,
"text": " And since, since this distance here."
},
{
"start": 2298,
"end": 2307,
"text": " Right, so if I have a sample here, I need to go a long way to the boundary to achieve another output but if I go into another direction."
},
{
"start": 2307,
"end": 2329,
"text": " Right, if I go down here, I only need to go a very short way. And since adversarial examples as they're specified, they say, okay, we want to go a short way and the short way is characterized by going a short way in any direction, right, this is a terrible circle in any direction, we want to go a short way."
},
{
"start": 2329,
"end": 2339,
"text": " That's another example. You see that if I have this any direction property, there's actually directions where this classification boundary is very, very close."
},
{
"start": 2339,
"end": 2355,
"text": " And so that's what they say this is a fundamental misalignment between the geometry of the data, which is like this, and the geometry of how we specify adversarial examples, which is, you know, kind of equal in each direction, which leads to that."
},
{
"start": 2355,
"end": 2380,
"text": " And they say, okay, what if I now robust parameters so what if I adversarially train my network to be robust, it basically means that I expand my data, because I add adversarial examples right of the circle here, I actually add adversarial examples, so my, my class, my data distribution will actually more like this."
},
{
"start": 2380,
"end": 2393,
"text": " And my separating hyperplane will change here. And the geometry of the adversarial examples will be much more aligned with my separating hyperplane."
},
{
"start": 2393,
"end": 2407,
"text": " So this is kind of a toy example of where they say this is fundamentally what's going on. There's a misalignment between the geometry of the adversarial examples and the inherent geometry of the data."
},
{
"start": 2407,
"end": 2420,
"text": " So that's kind of the theoretical analysis they do. And with that, I finish here, and I hope this was clear enough and goodbye."
}
] |
_N_nFzMtWkA | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Reinforcement Learning, Fast and Slow | [
"Science & Technology"
] | [
"machine learning",
"reinforcement learning",
"meta-learning",
"deep rl",
"deep reinforcement learning",
"deep neural network",
"atari",
"alphago",
"deepmind",
"google",
"td-gammon",
"episodic memory",
"inductive bias",
"bias variance tradeoff"
] | Abstract:
Deep reinforcement learning (RL) methods have driven impressive advances in artificial intelligence in recent years, exceeding human performance in domains ranging from Atari to Go to no-limit poker. This progress has drawn the attention of cognitive scientists interested in understanding human learning. However, the concern has been raised that deep RL may be too sample-inefficient – that is, it may simply be too slow – to provide a plausible model of how humans learn. In the present review, we counter this critique by describing recently developed techniques that allow deep RL to operate more nimbly, solving problems much more quickly than previous methods. Although these techniques were developed in an AI context, we propose that they may have rich implications for psychology and neuroscience. A key insight, arising from these AI methods, concerns the fundamental connection between fast RL and slower, more incremental forms of learning.
Authors: Matthew Botvinick, Sam Ritter, Jane X. Wang, Zeb Kurth-Nelson, Charles Blundell, Demis Hassabis
https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(19)30061-0 | Hi there, today we're looking at Reinforcement Learning, Fast and Slow, by Matthew Botvinick, Sam Ritter, Jane X. Wang, Zeb Kurth-Nelson, Charles Blundell and Demis Hassabis. These people are from Google DeepMind, and this is a review of developments in reinforcement learning, especially as they pertain to how humans learn, or what we can understand from the RL world that translates over to human learning. Their basic argument is that the first wave of deep RL, as you see here, is powerful but slow, and they give examples of this. In box one (I believe an image is missing here) they show TD-Gammon for backgammon, the famous DeepMind Atari-playing agent, and the 3D labyrinth-playing agent. So there have been a number of advances in RL, and what they specifically talk about is deep RL. When we talk about reinforcement learning, the easiest case is where you have an agent and an environment. The agent observes some observation O from the environment, and based on that it performs an action A. The environment then gives back a reward and the next observation. So you have O0, then A0, then O1 and a reward, then A1, and so on; this goes back and forth. The agent performs an action, the environment returns a reward and the next observation. In the Atari world, for example, the observation is the screen itself, and the agent needs to perform an action, which is a joystick input or a button press; you can see the individual actions listed here. The reward is given to the agent as a number, which I guess is the same number as the game score up here. The task is simply to maximize the reward. The difference to supervised learning is that you're not telling the agent what the correct action would have been; you only tell it whether what it did was good or bad by giving it a high or a low reward. That's reinforcement learning. So what is deep reinforcement learning? Deep reinforcement learning simply means that the agent maps the observation to the action via a deep neural network, or more generally that some part of the agent consists of a deep neural network. You can see, for example, a deep neural network mapping the observation to the action here, as well as down here, where it's a bit more complicated. They argue that this first wave was powerful but slow, meaning you need a lot of samples, and they give two sources of this slowness: incremental parameter adjustment and weak inductive bias. Incremental parameter adjustment means that you have to train your neural network in very small incremental steps, because you train it batch by batch: you have to make small steps in order not to forget what came before. You can't fundamentally readjust your network to every new batch of observations, because that would destroy the information you've learned from the old ones. Weak inductive bias refers to a basic property of neural networks: they are general function approximators and can approximate essentially any function.
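As an aside, the interaction loop I just described is easy to write down. Here is a tiny self-contained sketch; the environment and the random policy are toy stand-ins of my own, not anything from the review:

```python
import random

class ToyEnv:
    """Trivial stand-in environment: the state is a step counter, action 1
    gives reward +1 and action 0 gives 0, the episode ends after 10 steps."""
    def reset(self):
        self.t = 0
        return self.t                                  # initial observation O0

    def step(self, action):
        self.t += 1
        reward = 1.0 if action == 1 else 0.0
        done = self.t >= 10
        return self.t, reward, done                    # next observation, reward, episode end

def run_episode(env, policy):
    obs = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = policy(obs)                           # agent: observation -> action
        obs, reward, done = env.step(action)           # environment: action -> reward, next observation
        total_reward += reward
    return total_reward

if __name__ == "__main__":
    print(run_episode(ToyEnv(), policy=lambda obs: random.choice([0, 1])))
```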
Back to the weak inductive bias: think of it in terms of, say, polynomials. What kinds of polynomials are there? This one, that one, some weird one. If I have a model that can approximate all of them, I have a weak inductive bias, whereas if I know that the function I'm ultimately looking for is, say, a third-degree polynomial, that's a much smaller class of functions I can fit; but if I'm sure the true function falls into that category, then I'm much faster. That's called a strong inductive bias: I build into the model, beforehand, a very restricted class of functions it is allowed to fit. With a weak inductive bias I don't tell it that; I simply say, model, you could fit any function you want, here are some training samples. This is a classic bias-variance trade-off: the unconstrained models have a lot of variance, meaning they can fit many functions, but if you bias the model towards a certain set of functions you lower that variance, and in this case that speeds up learning, because with less variance you can move faster while learning. So the review covers two approaches that mitigate these problems and have made reinforcement learning faster. The first one is episodic deep reinforcement learning, described here as fast learning through episodic memory. The suggestion in this line of research is to augment the agent with a memory, and the memory could look something like this. In a lot of RL frameworks, a principal component of the agent is value estimation: the agent gets an observation O, and one of the things it has to do is estimate the value of this observation, of this state. Say you play Pong, you are all the way down, and the ball is flying up and away from you. A task that occurs in many of these agents is to estimate the value of that observation, which basically means: how much reward am I expecting from this state going into the future? In this case I probably should not expect a lot of reward, since I can't move up fast enough to catch the ball, so I would assign this state a pretty low value, whereas if I were up near the ball I would assign the state quite a high value.
As we've already seen, this value assignment is normally a deep neural network mapping that we have to learn, and it's one of the parts that takes a long time. The methods depicted here replace this value estimation by saying: okay, we have an observation and we somehow need to estimate its value, so why don't we look for similar observations? We have some kind of memory, we take our observation, and we retrieve O'1, O'2, O'3 that are somehow similar. In our Pong example, I could be looking at states where I was in almost the same position, or where the ball flew a bit differently but in the same general direction; all of these states are close to my current state, and I have already played them, since they're in my memory. With every one of them I can also retrieve the reward I actually got. The problem in reinforcement learning is that before you take an action you don't know what the reward will be, but here I already know, because I've experienced it; it's in the past. And that's exactly what they describe here: time runs this way, we're in state one, then state two, and so on, we perform actions and get rewards, and we save these states into the memory along with the sum of discounted rewards collected from each state onward. Later (this is like a SpongeBob reference), if we want to estimate the value of some new state, we retrieve these stored states from memory, compute a similarity score against each of them, and add up their stored returns weighted by how similar they are to the state whose value we want. This basically amounts to averaging over stored states according to how close they are to the current state; it's a soft way of saying I only select the states which are close, and that gives you a value estimate for the new state. So you have essentially gotten rid of having to train a value function, and that speeds up your reinforcement learning quite a bit: if you already have good value estimates from previous experience, that's great. Of course there are a number of problems associated with this. If the memory becomes stale, it no longer represents the future rewards very well. There's the question of which states to keep in memory: just the good ones, do they need certain properties, do you need some diversity in there? And the biggest problem is: how do you know when two states are similar and when they aren't? It might be easy in a situation like Pong, where I only have a few variables, like the y-position of my paddle and the position and velocity of the ball; I can specify those in five numbers. But if it gets harder than that, like the full 3D labyrinth environment, then we have no clue which states are similar to each other. What most approaches end up doing is, you guessed it, training a deep neural network to give you this similarity score between states. How exactly they do that is a different question, but presumably you can train this network offline, meaning you can pre-train it.
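To make that episodic value estimate concrete, here is a small sketch of the similarity-weighted lookup. The embedding function, the temperature and the data structures are placeholders of my own, not the specific architecture the review describes:

```python
import numpy as np

class EpisodicValueMemory:
    """Store (embedding, discounted return) pairs and estimate the value of a
    new state as a similarity-weighted average of the stored returns."""

    def __init__(self, embed_fn, temperature=1.0):
        self.embed_fn = embed_fn            # e.g. a pre-trained similarity network
        self.temperature = temperature
        self.keys, self.returns = [], []

    def store(self, state, discounted_return):
        self.keys.append(self.embed_fn(state))
        self.returns.append(discounted_return)

    def value(self, state):
        q = self.embed_fn(state)
        keys = np.stack(self.keys)
        # negative squared distance as similarity, turned into soft weights
        sims = -np.sum((keys - q) ** 2, axis=1) / self.temperature
        w = np.exp(sims - sims.max())
        w /= w.sum()
        return float(w @ np.array(self.returns))

# toy usage: identity embedding on 2-D states
mem = EpisodicValueMemory(embed_fn=np.asarray)
mem.store([0.0, 1.0], 5.0)
mem.store([3.0, 3.0], 0.0)
print(mem.value([0.1, 1.1]))    # close to 5.0, since the query is near the first stored state
```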
have two stages stage one pre train train similarity dnn right and then once we've done that second stage do reinforcement learning using this and the claim here is that by having this done this this second stage will become faster so it it doesn't really solve the problem of the sample efficiency but what it says is okay the actual reinforcement learning part will become faster because we've already done the work previously but basically by by including this similarity score sorry whatever dnn by including this in the language of the review here we have successfully introduced an inductive bias into the rl procedure because the rl procedure now can't just fit any function we say we tell it your value function is one that conforms to our notion of similarity that we've pre trained this restricts the rl algorithm and we give it an inductive bias and as long as our similarity score is useful for the rl algorithm it can speed up its learning because it doesn't have to learn the value function itself all right cool so the second part here is a bit more abstract it's called meta reinforcement learning speeding up deep rl by learning to learn these kind of learning to learn approaches are quite abundant in the literature people try this usually there's a i mean it's it's very large scale experiments basically you have i think i believe they show it somewhere here yeah you have like some um some outer loop where you would say that's this thing here what the outer loop does is in each loop it samples one environment so it samples one environment from a distribution of environments so now you not only have one environment but you say okay if i'm going to navigate this maze one trying to learn to navigate this maze i'm going actually to learn to learn to navigate many mazes right so it's not like you train one agent to learn you train one agent to navigate many mazes that would just be classic reinforcement learning but you want to train an algorithm that helps an agent learn as a particular maze and you do that by training your helper algorithm on a variety of agent maze combinations so in each step you sample one environment like this this here and you then have an inner loop here you fully reinforcement learn train an agent in the classic sense on this environment right you see here action action observation reward right but the agent receives some kind of signal from outside so the outside algorithm will kind of tell the agent how to approach the problem right this could be that it initializes the the weights here you see that the outer loop trains the parameter weights which determine the inner learner that interacts with an environment during the duration of the episode for every cycle of the outer loop a new environment is sampled from a distribution of environments which share some common structure so basically the one would expect when you train this that these parameters here this could be for example it could be the initial weights of the network that the agent uses that this one possibility right this is very abstract here this meta reinforcement learning it could be literally anything that the outer model teaches the inner model or gives to the inner model right and you you train both of these with reinforcement learning so the inner you train with reinforcement learning on the individual rewards and then you can train the outer loop on the reward that the entire app agent environment episode achieved so the that's kind of a two loop situation and yeah so that's meta reinforcement 
learning again it's very unspecified what it does but as you can already see if you now have such an algorithm that kind of tells the the inner agent just as an example how to initialize its weights right how to initialize the weights of its deep neural network if you have that here then the agent you will technically bias it this is again an inductive bias so you will give it inductive bias towards what you think are good weights to generally learn these maze structured environments right since the outer loop you can update it way slower because it needs to learn over a longer time horizon and it needs to learn things for a different variety of environments but once you have good kind of initial weights for a particular environment then this agent in here can learn much faster given an individual environment so the agent you instantiated and then you give it good starting weights or some other kind of signal about the environment and then it can go much much faster at learning the environment thereby you have just sped up this inner agent by providing it an inductive bias and that's basically what the claim of the review is that by providing these models with a larger inductive bias you may then speed up their learning because you've kind of told them what good functions are from the outset of course you see the problem again here well the problem the problem is of course you actually need to train this outer loop and the outer loop may actually take much much longer to train than a single and unbiased reinforcement learning thing but again what you could do is you could pre-train on a distribution of environments and then once a new environment shows up that is similar to this distribution you can then have the agent instantiated and learn much faster so again kind of this two-step process you could pre-train this outer loop and then the inner loop will be much faster than if you didn't have the outer loop all right so those are basically the kind of the kind of outlines they do here they then kind of do a connection to like the brain and so on and they relate this to biology and biological learning but ultimately their conclusion is here that whenever you want to do whenever you have slow rl or this is at least my conclusion from their article whenever you have slower you can transform it to fast rl rl but you have to outsource the slow rl slow something else slow x you have to outsource the slowness to some other part so if you want to do fast rl you have to outsource the slowness and what the slowness provides is an inductive bias which means yeah if you want to do like fast rl with episodic memory you have to learn the similarity function which again which might be slow in itself but then the rl will be fast and if you want to do this via kind of a an outer meta learner again this learning of the outer meta learner might be slow but then the inner learner will be fast in a connection to the kind of biological aspect of this they do make a connection which which i find is appropriate in that for example the human brain the reason we can learn things fast let's say in the physical world picking things up dropping things down or navigating our paths we're incredibly good at this navigating through like a weird terrain with rocks in the way is because of course our brains have been adapted to these kinds of environment over generations so there is an outer process like evolution which is this kind of outer loop and it instantiates the inner loop which are the humans that kind of live or 
die by their ability to to navigate better so the if if the outer loop does a good job of only keeping the humans alive that can navigate well then the individual human in here that that does this the individual human given a landscape with rocks will then be much faster at learning to navigate it all right so that was it for that i it's an interesting article to read especially the connections to the kind of biological aspects and with that have a nice day | [
{
"start": 0,
"end": 7,
"text": " Hi there, today we're looking at reinforcement learning, fast and slow, by Matthew Botvinick,"
},
{
"start": 7,
"end": 17,
"text": " Sam Ritter, Jane X. Wang, Zeb Kurt-Nielsen, Charles Spondel and Demis Hassabis."
},
{
"start": 17,
"end": 24.52,
"text": " These people are from Google DeepMind and this is a review of kind of a development"
},
{
"start": 24.52,
"end": 32.16,
"text": " in reinforcement learning, especially as it pertains to kind of how humans learn or what"
},
{
"start": 32.16,
"end": 38.96,
"text": " we can understand from the RL world that translates over to human learning."
},
{
"start": 38.96,
"end": 48.44,
"text": " Alright, so basically their argument here is that the first wave of deep RL, as you"
},
{
"start": 48.44,
"end": 54.8,
"text": " see here, is powerful but slow."
},
{
"start": 54.8,
"end": 57.14,
"text": " And they give examples of this."
},
{
"start": 57.14,
"end": 60.66,
"text": " So in box one, box one is this."
},
{
"start": 60.66,
"end": 65.16,
"text": " So they believe there's an image missing here."
},
{
"start": 65.16,
"end": 68.28,
"text": " This is Backgammon, TD Gammon."
},
{
"start": 68.28,
"end": 78.03999999999999,
"text": " This is the famous DeepMind Atari playing bot and this is kind of the 3D labyrinth playing"
},
{
"start": 78.04,
"end": 79.04,
"text": " bot."
},
{
"start": 79.04,
"end": 83.88000000000001,
"text": " So there's been a number of advances in RL and especially what they talk about is deep"
},
{
"start": 83.88000000000001,
"end": 84.88000000000001,
"text": " RL."
},
{
"start": 84.88000000000001,
"end": 92.84,
"text": " So when we talk about reinforcement learning, the easiest case is where you have an agent"
},
{
"start": 92.84,
"end": 95.84,
"text": " and an environment."
},
{
"start": 95.84,
"end": 105.48,
"text": " Alright, so the agent will observe some observation O from the environment and then based on that"
},
{
"start": 105.48,
"end": 112.44,
"text": " the agent will perform an action A. And then the environment will give back a reward and"
},
{
"start": 112.44,
"end": 115.72,
"text": " also a next observation."
},
{
"start": 115.72,
"end": 124.52000000000001,
"text": " So this is O0, O1, and then this is A0 and then here you give A1, AI."
},
{
"start": 124.52000000000001,
"end": 128.32,
"text": " So basically this goes back and forth and back and forth."
},
{
"start": 128.32,
"end": 133.34,
"text": " The agent performs an action, the environment gives a reward and the next observation."
},
{
"start": 133.34,
"end": 137.94,
"text": " So this could be for example here in the Atari world."
},
{
"start": 137.94,
"end": 142.16,
"text": " The observation is the screen itself."
},
{
"start": 142.16,
"end": 148.72,
"text": " And then the agent needs to perform an action which is an input of the joystick or pressing"
},
{
"start": 148.72,
"end": 150.16,
"text": " some button."
},
{
"start": 150.16,
"end": 154.86,
"text": " You can see the individual actions actually listed here."
},
{
"start": 154.86,
"end": 161.2,
"text": " And then the reward will be given to the agent via a number which I guess is the same number"
},
{
"start": 161.2,
"end": 163.3,
"text": " as up here."
},
{
"start": 163.3,
"end": 168.04000000000002,
"text": " So the task is to maximize the reward simply by..."
},
{
"start": 168.04000000000002,
"end": 171.56,
"text": " So the difference is you're not doing this in a supervised manner."
},
{
"start": 171.56,
"end": 175.84,
"text": " So you're not telling the agent what would be the correct action to do."
},
{
"start": 175.84,
"end": 183.72000000000003,
"text": " You simply tell it that whether what it did was good or bad by giving it a high or a low"
},
{
"start": 183.72000000000003,
"end": 184.72000000000003,
"text": " reward."
},
{
"start": 184.72000000000003,
"end": 186.92000000000002,
"text": " Right, so that's reinforcement learning."
},
{
"start": 186.92000000000002,
"end": 189.44,
"text": " So what is deep reinforcement learning?"
},
{
"start": 189.44,
"end": 197.76,
"text": " Deep reinforcement learning simply means the agent maps the observation to the action via"
},
{
"start": 197.76,
"end": 199.8,
"text": " a deep neural network."
},
{
"start": 199.8,
"end": 203.52,
"text": " So deep neural network."
},
{
"start": 203.52,
"end": 209.28,
"text": " That's deep reinforcement learning where the mapping or some part of the agent consists"
},
{
"start": 209.28,
"end": 211.88,
"text": " of a deep neural network."
},
{
"start": 211.88,
"end": 219.36,
"text": " You see for example here there is a deep neural network mapping the observation to the action."
},
{
"start": 219.36,
"end": 225.20000000000002,
"text": " As well as down here but it's a bit more complicated."
},
{
"start": 225.20000000000002,
"end": 232.92000000000002,
"text": " So they argue that the first wave of this was powerful but slow meaning kind of you"
},
{
"start": 232.92000000000002,
"end": 235.48000000000002,
"text": " need a lot of samples."
},
{
"start": 235.48000000000002,
"end": 241.96,
"text": " And they give two sources of why it's slow, why you need a lot of samples."
},
{
"start": 241.96,
"end": 249.64000000000001,
"text": " They say the two factors are incremental parameter adjustment and weak inductive bias."
},
{
"start": 249.64000000000001,
"end": 256.28000000000003,
"text": " So incremental parameter adjustment means basically that you have to update or train"
},
{
"start": 256.28000000000003,
"end": 260.92,
"text": " your neural network in a very small incremental way."
},
{
"start": 260.92,
"end": 266.92,
"text": " In order to basically, because you train it one by one, right?"
},
{
"start": 266.92,
"end": 270.28000000000003,
"text": " You train your neural network step by step."
},
{
"start": 270.28,
"end": 275.84,
"text": " You have to make small steps in order to not forget what came before."
},
{
"start": 275.84,
"end": 281.44,
"text": " You can't fundamentally readjust your neural network to every new batch of observations"
},
{
"start": 281.44,
"end": 286.44,
"text": " because then that's going to destroy all the information you've learned of the old one."
},
{
"start": 286.44,
"end": 295,
"text": " And then weak inductive bias here is basically an understanding of these neural networks."
},
{
"start": 295,
"end": 300.23999999999995,
"text": " They are general function approximators and they can approximate any function."
},
{
"start": 300.24,
"end": 305.56,
"text": " So if you just think in terms of kind of, I don't know, let's say polynomials and what"
},
{
"start": 305.56,
"end": 306.92,
"text": " kind of polynomials are there?"
},
{
"start": 306.92,
"end": 314.04,
"text": " This polynomial, this polynomial, this polynomial, this weird polynomial."
},
{
"start": 314.04,
"end": 320,
"text": " If I have a function that can approximate all of these then I have a weak inductive"
},
{
"start": 320,
"end": 326.8,
"text": " bias whereas if I kind of know, okay all my polynomials are the polynomial that I'm looking"
},
{
"start": 326.8,
"end": 333.44,
"text": " for ultimately, I'm very sure it's a third degree polynomial, right?"
},
{
"start": 333.44,
"end": 337.2,
"text": " So something like this or like this or like this."
},
{
"start": 337.2,
"end": 346.2,
"text": " So this is much less of a class of functions that I can fit but if I'm sure that the function"
},
{
"start": 346.2,
"end": 351.76,
"text": " that I'm trying to fit falls in this category then I'm much faster."
},
{
"start": 351.76,
"end": 357.15999999999997,
"text": " So this is then called a strong inductive bias is where I build into the model basically"
},
{
"start": 357.15999999999997,
"end": 359.24,
"text": " I tell it beforehand."
},
{
"start": 359.24,
"end": 364.84,
"text": " Here is a very restricted class of functions that you can fit."
},
{
"start": 364.84,
"end": 367.92,
"text": " Whereas in a weak inductive bias I won't tell it that."
},
{
"start": 367.92,
"end": 372.8,
"text": " I'll simply say, well model you could fit any function you want and I'm just giving"
},
{
"start": 372.8,
"end": 374.7,
"text": " you training samples."
},
{
"start": 374.7,
"end": 381.68,
"text": " So this is a classic example of a bias variance trade-off where there is a lot of"
},
{
"start": 381.68,
"end": 388.16,
"text": " variance in these models meaning you can fit also a lot of functions but here because you"
},
{
"start": 388.16,
"end": 395.2,
"text": " bias the model towards a certain set of functions it can lower this variance and in this case"
},
{
"start": 395.2,
"end": 403.16,
"text": " here it speeds up learning because you don't have as much variance that means you can basically"
},
{
"start": 403.16,
"end": 405.92,
"text": " go faster while learning."
},
{
"start": 405.92,
"end": 417.28000000000003,
"text": " Alright, so they propose two solutions to this problem of this kind of to mitigate these"
},
{
"start": 417.28000000000003,
"end": 422.44,
"text": " problems that make reinforcement learning faster or have made reinforcement learning"
},
{
"start": 422.44,
"end": 423.70000000000005,
"text": " faster."
},
{
"start": 423.70000000000005,
"end": 426.98,
"text": " This is a review remember."
},
{
"start": 426.98,
"end": 433.8,
"text": " So the first one is episodic deep reinforcement learning and this episodic deep reinforcement"
},
{
"start": 433.8,
"end": 438.8,
"text": " learning is specified here, fast learning through episodic memory."
},
{
"start": 438.8,
"end": 446.58000000000004,
"text": " So the suggestion in this field of research is to augment the neural network or the agent"
},
{
"start": 446.58000000000004,
"end": 453.48,
"text": " by a memory and the memory could look something like this."
},
{
"start": 453.48,
"end": 462.44,
"text": " So in a lot of these RL frameworks what a principal component of the agent is, so the"
},
{
"start": 462.44,
"end": 470.16,
"text": " agent will get an observation O and one of the things it has to do is estimate the value"
},
{
"start": 470.16,
"end": 472.64,
"text": " of this observation of this state."
},
{
"start": 472.64,
"end": 480.88,
"text": " So basically the agent is in some state let's say you play pong right and you are here down"
},
{
"start": 480.88,
"end": 487.4,
"text": " and the ball comes your way up there right there's a little arrow sorry so the ball"
},
{
"start": 487.4,
"end": 495.15999999999997,
"text": " flies away from you and you're all the way down which basically means draw this bigger."
},
{
"start": 495.15999999999997,
"end": 502.96,
"text": " So here you are down here and the ball is here flying up there."
},
{
"start": 502.96,
"end": 510.28,
"text": " So one task in these in these agents that occurs often is to estimate the value of this"
},
{
"start": 510.28,
"end": 516.36,
"text": " observation basically means how much reward am I expecting from this state going into"
},
{
"start": 516.36,
"end": 517.6,
"text": " the future."
},
{
"start": 517.6,
"end": 524.04,
"text": " In this case I probably will not expect a lot of reward since I can't move up fast enough"
},
{
"start": 524.04,
"end": 526.76,
"text": " right to catch the ball."
},
{
"start": 526.76,
"end": 534.04,
"text": " So this I would assign this state a pretty low value whereas if I were up here I would"
},
{
"start": 534.04,
"end": 537.48,
"text": " assign this state quite a high value."
},
{
"start": 537.48,
"end": 545.16,
"text": " So as we've already seen this is a deep neural network mapping we learn to assign value to"
},
{
"start": 545.16,
"end": 553.52,
"text": " different states and this is one of the parts that takes a long time and these methods they"
},
{
"start": 553.52,
"end": 560.16,
"text": " are the one that's depicted here replaces this value estimation by saying okay we have"
},
{
"start": 560.16,
"end": 567.8399999999999,
"text": " an observation we somehow need to estimate its value why don't we look for similar observation"
},
{
"start": 567.84,
"end": 577.36,
"text": " so we have some kind of memory right and we go with our observation and we retrieve O'1"
},
{
"start": 577.36,
"end": 587,
"text": " O'2 O'3 that are somehow similar right so in our in our pong example I'm down I'm up"
},
{
"start": 587,
"end": 595.8000000000001,
"text": " here ball moves here I could be looking now at at states where I was here or where I was"
},
{
"start": 595.8,
"end": 601.4399999999999,
"text": " here like really close or where the ball flew a bit differently but still in the same direction"
},
{
"start": 601.4399999999999,
"end": 608.8399999999999,
"text": " or down here right so all these states are kind of close to my state and I can I already"
},
{
"start": 608.8399999999999,
"end": 614.26,
"text": " have I already have played these since they're in my memory right so with every one of them"
},
{
"start": 614.26,
"end": 621.0799999999999,
"text": " I can also retrieve the reward that I got so I because I already know the problem in"
},
{
"start": 621.0799999999999,
"end": 625.04,
"text": " reinforcement learning is before you do an action you don't know what the reward will"
},
{
"start": 625.04,
"end": 631.76,
"text": " be but here I already know because I've played it I've already experienced it it's in the"
},
{
"start": 631.76,
"end": 638.64,
"text": " past so I know what reward I got right so and this is exactly what they say over here"
},
{
"start": 638.64,
"end": 646.36,
"text": " they basically say here we have time time runs this way we're in state one then in state"
},
{
"start": 646.36,
"end": 654.48,
"text": " two and so on and we perform actions and and get rewards and what we can do is we can save"
},
{
"start": 654.48,
"end": 662.6,
"text": " these states into this memory as along with their sum of discounted rewards that we collect"
},
{
"start": 662.6,
"end": 672.48,
"text": " from that state on and then later this is like a spongebob reference if we want to estimate"
},
{
"start": 672.48,
"end": 680.76,
"text": " the value of some new state right what we do is we retrieve all of these states from"
},
{
"start": 680.76,
"end": 688.92,
"text": " memory calculate a similarity score over them and with with we wait basically we add their"
},
{
"start": 688.92,
"end": 694.64,
"text": " rewards weighted by how similar they are to the state that we want to compute so this"
},
{
"start": 694.64,
"end": 703.84,
"text": " basically amounts to averaging over states respective by how close they are to the current"
},
{
"start": 703.84,
"end": 708.88,
"text": " state right this is kind of a soft a soft way of saying I only select the states which"
},
{
"start": 708.88,
"end": 715.28,
"text": " are close and that gives you a value estimate for the new states so basically this means"
},
{
"start": 715.28,
"end": 721.2,
"text": " you just got rid of having to train a value function and this will speed up your reinforcement"
},
{
"start": 721.2,
"end": 726.4399999999999,
"text": " learning quite a bit if you don't have to train that if you already have good value"
},
{
"start": 726.4399999999999,
"end": 731.66,
"text": " estimations from your previous experience that's great of course there are a number"
},
{
"start": 731.66,
"end": 737.4,
"text": " of problems associated with that namely if this memory here for example becomes stale"
},
{
"start": 737.4,
"end": 745.4399999999999,
"text": " it doesn't represent the future rewards quite as well there is also a question of which"
},
{
"start": 745.4399999999999,
"end": 749.9599999999999,
"text": " states do you keep in memory just the good ones or do they have to have a certain property"
},
{
"start": 749.9599999999999,
"end": 756.6,
"text": " do you have to have some diversity in there and of course the biggest problem here the"
},
{
"start": 756.6,
"end": 764.1999999999999,
"text": " biggest problem is how do you know when two states are similar or when they aren't it"
},
{
"start": 764.2,
"end": 772.12,
"text": " might be easy in a situation like pong where I only have like three variables like position"
},
{
"start": 772.12,
"end": 777.72,
"text": " y position of my of my paddle and position of the ball and velocity of the ball those"
},
{
"start": 777.72,
"end": 785.5600000000001,
"text": " are like I can specify those in five numbers but if it gets harder than that if it's like"
},
{
"start": 785.5600000000001,
"end": 792.82,
"text": " this labyrinth setting full 3d environment then we have no clue which states are similar"
},
{
"start": 792.82,
"end": 799.72,
"text": " to each other and what these what most end up doing is they will train you guessed it"
},
{
"start": 799.72,
"end": 807.32,
"text": " they will train a deep neural network to give you this similarity score between states right"
},
{
"start": 807.32,
"end": 813.1400000000001,
"text": " how they do it is is a different question but presumably you can train this network"
},
{
"start": 813.1400000000001,
"end": 820.48,
"text": " offline basically meaning you can pre train it you could pre train it and then the so"
},
{
"start": 820.48,
"end": 833.4,
"text": " we have two stages stage one pre train train similarity dnn right and then once we've done"
},
{
"start": 833.4,
"end": 842.5600000000001,
"text": " that second stage do reinforcement learning using this and the claim here is that by having"
},
{
"start": 842.5600000000001,
"end": 849.8000000000001,
"text": " this done this this second stage will become faster so it it doesn't really solve the problem"
},
{
"start": 849.8,
"end": 854.3199999999999,
"text": " of the sample efficiency but what it says is okay the actual reinforcement learning"
},
{
"start": 854.3199999999999,
"end": 860.0799999999999,
"text": " part will become faster because we've already done the work previously but basically by"
},
{
"start": 860.0799999999999,
"end": 868.56,
"text": " by including this similarity score sorry whatever dnn by including this in the language of the"
},
{
"start": 868.56,
"end": 878.0799999999999,
"text": " review here we have successfully introduced an inductive bias into the rl procedure because"
},
{
"start": 878.08,
"end": 885.32,
"text": " the rl procedure now can't just fit any function we say we tell it your value function is one"
},
{
"start": 885.32,
"end": 890.8000000000001,
"text": " that conforms to our notion of similarity that we've pre trained this restricts the"
},
{
"start": 890.8000000000001,
"end": 898.72,
"text": " rl algorithm and we give it an inductive bias and as long as our similarity score is useful"
},
{
"start": 898.72,
"end": 904.4000000000001,
"text": " for the rl algorithm it can speed up its learning because it doesn't have to learn the value"
},
{
"start": 904.4,
"end": 912.4,
"text": " function itself all right cool so the second part here is a bit more abstract it's called"
},
{
"start": 912.4,
"end": 918.88,
"text": " meta reinforcement learning speeding up deep rl by learning to learn these kind of learning"
},
{
"start": 918.88,
"end": 925,
"text": " to learn approaches are quite abundant in the literature people try this usually there's"
},
{
"start": 925,
"end": 933.96,
"text": " a i mean it's it's very large scale experiments basically you have i think i believe they"
},
{
"start": 933.96,
"end": 941.2,
"text": " show it somewhere here yeah you have like some um some outer loop where you would say"
},
{
"start": 941.2,
"end": 947.6800000000001,
"text": " that's this thing here what the outer loop does is in each loop it samples one environment"
},
{
"start": 947.6800000000001,
"end": 953.0400000000001,
"text": " so it samples one environment from a distribution of environments so now you not only have one"
},
{
"start": 953.0400000000001,
"end": 959.96,
"text": " environment but you say okay if i'm going to navigate this maze one trying to learn"
},
{
"start": 959.96,
"end": 970.88,
"text": " to navigate this maze i'm going actually to learn to learn to navigate many mazes right"
},
{
"start": 970.88,
"end": 977.72,
"text": " so it's not like you train one agent to learn you train one agent to navigate many mazes"
},
{
"start": 977.72,
"end": 985.5600000000001,
"text": " that would just be classic reinforcement learning but you want to train an algorithm that helps"
},
{
"start": 985.56,
"end": 993.2399999999999,
"text": " an agent learn as a particular maze and you do that by training your helper algorithm"
},
{
"start": 993.2399999999999,
"end": 1000.1999999999999,
"text": " on a variety of agent maze combinations so in each step you sample one environment like"
},
{
"start": 1000.1999999999999,
"end": 1009,
"text": " this this here and you then have an inner loop here you fully reinforcement learn train"
},
{
"start": 1009,
"end": 1016.6,
"text": " an agent in the classic sense on this environment right you see here action action observation"
},
{
"start": 1016.6,
"end": 1025.84,
"text": " reward right but the agent receives some kind of signal from outside so the outside algorithm"
},
{
"start": 1025.84,
"end": 1034.04,
"text": " will kind of tell the agent how to approach the problem right this could be that it initializes"
},
{
"start": 1034.04,
"end": 1042.56,
"text": " the the weights here you see that the outer loop trains the parameter weights which determine"
},
{
"start": 1042.56,
"end": 1049.84,
"text": " the inner learner that interacts with an environment during the duration of the episode for every"
},
{
"start": 1049.84,
"end": 1054.96,
"text": " cycle of the outer loop a new environment is sampled from a distribution of environments"
},
{
"start": 1054.96,
"end": 1061,
"text": " which share some common structure so basically the one would expect when you train this that"
},
{
"start": 1061,
"end": 1068.64,
"text": " these parameters here this could be for example it could be the initial weights of the network"
},
{
"start": 1068.64,
"end": 1074.68,
"text": " that the agent uses that this one possibility right this is very abstract here this meta"
},
{
"start": 1074.68,
"end": 1081,
"text": " reinforcement learning it could be literally anything that the outer model teaches the"
},
{
"start": 1081,
"end": 1088.06,
"text": " inner model or gives to the inner model right and you you train both of these with reinforcement"
},
{
"start": 1088.06,
"end": 1092.44,
"text": " learning so the inner you train with reinforcement learning on the individual rewards and then"
},
{
"start": 1092.44,
"end": 1099.8,
"text": " you can train the outer loop on the reward that the entire app agent environment episode"
},
{
"start": 1099.8,
"end": 1108.96,
"text": " achieved so the that's kind of a two loop situation and yeah so that's meta reinforcement"
},
{
"start": 1108.96,
"end": 1116.72,
"text": " learning again it's very unspecified what it does but as you can already see if you"
},
{
"start": 1116.72,
"end": 1124.64,
"text": " now have such an algorithm that kind of tells the the inner agent just as an example how"
},
{
"start": 1124.64,
"end": 1131,
"text": " to initialize its weights right how to initialize the weights of its deep neural network if"
},
{
"start": 1131,
"end": 1138.24,
"text": " you have that here then the agent you will technically bias it this is again an inductive"
},
{
"start": 1138.24,
"end": 1149.44,
"text": " bias so you will give it inductive bias towards what you think are good weights to generally"
},
{
"start": 1149.44,
"end": 1158.1200000000001,
"text": " learn these maze structured environments right since the outer loop you can update it way"
},
{
"start": 1158.1200000000001,
"end": 1164.32,
"text": " slower because it needs to learn over a longer time horizon and it needs to learn things"
},
{
"start": 1164.32,
"end": 1170.24,
"text": " for a different variety of environments but once you have good kind of initial weights"
},
{
"start": 1170.24,
"end": 1177.08,
"text": " for a particular environment then this agent in here can learn much faster given an individual"
},
{
"start": 1177.08,
"end": 1182.12,
"text": " environment so the agent you instantiated and then you give it good starting weights"
},
{
"start": 1182.12,
"end": 1188.78,
"text": " or some other kind of signal about the environment and then it can go much much faster at learning"
},
{
"start": 1188.78,
"end": 1195.52,
"text": " the environment thereby you have just sped up this inner agent by providing it an inductive"
},
{
"start": 1195.52,
"end": 1207.56,
"text": " bias and that's basically what the claim of the review is that by providing these models"
},
{
"start": 1207.56,
"end": 1213.6399999999999,
"text": " with a larger inductive bias you may then speed up their learning because you've kind"
},
{
"start": 1213.64,
"end": 1220.6000000000001,
"text": " of told them what good functions are from the outset of course you see the problem again"
},
{
"start": 1220.6000000000001,
"end": 1228.2,
"text": " here well the problem the problem is of course you actually need to train this outer loop"
},
{
"start": 1228.2,
"end": 1236.0400000000002,
"text": " and the outer loop may actually take much much longer to train than a single and unbiased"
},
{
"start": 1236.0400000000002,
"end": 1242.1000000000001,
"text": " reinforcement learning thing but again what you could do is you could pre-train on a distribution"
},
{
"start": 1242.1,
"end": 1248.04,
"text": " of environments and then once a new environment shows up that is similar to this distribution"
},
{
"start": 1248.04,
"end": 1256.04,
"text": " you can then have the agent instantiated and learn much faster so again kind of this two-step"
},
{
"start": 1256.04,
"end": 1262.28,
"text": " process you could pre-train this outer loop and then the inner loop will be much faster"
},
{
"start": 1262.28,
"end": 1271.48,
"text": " than if you didn't have the outer loop all right so those are basically the kind of the"
},
{
"start": 1271.48,
"end": 1278.96,
"text": " kind of outlines they do here they then kind of do a connection to like the brain and so"
},
{
"start": 1278.96,
"end": 1290.3,
"text": " on and they relate this to biology and biological learning but ultimately their conclusion is"
},
{
"start": 1290.3,
"end": 1298.98,
"text": " here that whenever you want to do whenever you have slow rl or this is at least my conclusion"
},
{
"start": 1298.98,
"end": 1308.28,
"text": " from their article whenever you have slower you can transform it to fast rl rl but you"
},
{
"start": 1308.28,
"end": 1318.72,
"text": " have to outsource the slow rl slow something else slow x you have to outsource the slowness"
},
{
"start": 1318.72,
"end": 1324.52,
"text": " to some other part so if you want to do fast rl you have to outsource the slowness and"
},
{
"start": 1324.52,
"end": 1333.72,
"text": " what the slowness provides is an inductive bias which means yeah if you want to do like"
},
{
"start": 1333.72,
"end": 1339.26,
"text": " fast rl with episodic memory you have to learn the similarity function which again which"
},
{
"start": 1339.26,
"end": 1346.72,
"text": " might be slow in itself but then the rl will be fast and if you want to do this via kind"
},
{
"start": 1346.72,
"end": 1352.28,
"text": " of a an outer meta learner again this learning of the outer meta learner might be slow but"
},
{
"start": 1352.28,
"end": 1361.36,
"text": " then the inner learner will be fast in a connection to the kind of biological aspect of this they"
},
{
"start": 1361.36,
"end": 1368.48,
"text": " do make a connection which which i find is appropriate in that for example the human"
},
{
"start": 1368.48,
"end": 1374.32,
"text": " brain the reason we can learn things fast let's say in the physical world picking things"
},
{
"start": 1374.32,
"end": 1381.26,
"text": " up dropping things down or navigating our paths we're incredibly good at this navigating"
},
{
"start": 1381.26,
"end": 1389.84,
"text": " through like a weird terrain with rocks in the way is because of course our brains have"
},
{
"start": 1389.84,
"end": 1396.48,
"text": " been adapted to these kinds of environment over generations so there is an outer process"
},
{
"start": 1396.48,
"end": 1403,
"text": " like evolution which is this kind of outer loop and it instantiates the inner loop which"
},
{
"start": 1403,
"end": 1413.8,
"text": " are the humans that kind of live or die by their ability to to navigate better so the"
},
{
"start": 1413.8,
"end": 1419.64,
"text": " if if the outer loop does a good job of only keeping the humans alive that can navigate"
},
{
"start": 1419.64,
"end": 1427.08,
"text": " well then the individual human in here that that does this the individual human given"
},
{
"start": 1427.08,
"end": 1434.48,
"text": " a landscape with rocks will then be much faster at learning to navigate it all right so that"
},
{
"start": 1434.48,
"end": 1440.32,
"text": " was it for that i it's an interesting article to read especially the connections to the"
},
{
"start": 1440.32,
"end": 1457.6,
"text": " kind of biological aspects and with that have a nice day"
}
] |
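To make the episodic-memory idea from the transcript above concrete, here is a minimal sketch of value estimation by similarity-weighted averaging over stored returns. The memory layout, the kernel, and the function names are my own illustration, not taken from any particular paper or library; in practice the embedding that defines similarity would itself be a learned (typically pre-trained) network, and the memory is assumed to be non-empty when queried.

import numpy as np

# Episodic memory: each entry pairs an embedding of a visited state with the
# discounted return that was actually observed from that state onward.
memory_keys = []      # list of np.ndarray embeddings, shape (d,)
memory_returns = []   # list of floats

def store(embedding, discounted_return):
    """Save a visited state together with the return that followed it."""
    memory_keys.append(embedding)
    memory_returns.append(discounted_return)

def estimate_value(embedding, temperature=1.0):
    """Estimate V(s) as a softmax-weighted average of stored returns,
    weighted by similarity between the query state and remembered states."""
    keys = np.stack(memory_keys)              # (N, d)
    returns = np.asarray(memory_returns)      # (N,)
    # negative squared distance as the similarity score
    sims = -np.sum((keys - embedding) ** 2, axis=1) / temperature
    weights = np.exp(sims - sims.max())       # soft selection of the close states
    weights /= weights.sum()
    return float(np.dot(weights, returns))

With a fixed, pre-trained embedding this replaces the slowly, incrementally trained value network; the open questions mentioned in the transcript (stale memories, which states to keep, how to define similarity) all live in how store and the embedding are managed.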
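The two-loop meta-RL structure can also be shown on a deliberately tiny, hand-rolled stand-in. Everything here is an illustrative assumption: environments are 2-armed bandits where arm 0 is usually the better arm, the "thing the outer loop learns" is just an initial action preference, and the outer update is a crude comparison of a few fixed candidates rather than the gradient- or RL-based outer training a real meta-RL system would use.

import random

def sample_environment():
    # one "maze" from the family: 80% of environments reward arm 0, 20% reward arm 1
    good_arm = 0 if random.random() < 0.8 else 1
    return lambda arm: 1.0 if arm == good_arm else 0.0

def inner_loop(reward_fn, init_pref, steps=20, lr=0.3):
    """Fast loop: ordinary incremental learning inside one sampled environment."""
    pref, total = init_pref, 0.0              # pref = probability of picking arm 0
    for _ in range(steps):
        arm = 0 if random.random() < pref else 1
        r = reward_fn(arm)
        total += r
        # move the preference toward whichever arm the outcome favored
        evidence_for_arm0 = 1.0 if (arm == 0) == (r > 0.0) else 0.0
        pref = (1 - lr) * pref + lr * evidence_for_arm0
    return total

def outer_loop(outer_steps=300, candidates=(0.2, 0.5, 0.8)):
    """Slow loop: across many sampled environments, find the initialization
    (the inductive bias) that lets the inner loop collect reward fastest."""
    scores = {c: 0.0 for c in candidates}
    for _ in range(outer_steps):
        env = sample_environment()
        for c in candidates:
            scores[c] += inner_loop(env, c)
    return max(scores, key=scores.get)

print(outer_loop())   # tends to print 0.8: start out biased toward arm 0

The point of the toy is only the division of labor: the outer loop is slow and sees many environments, the inner loop is fast because it starts from what the outer loop has already learned.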
F5mxzvgl_oU | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | S.H.E. - Search. Human. Equalizer. | [
"Science & Technology"
] | [
"pantene",
"search",
"google",
"bias",
"machine learning",
"artificial intelligence",
"search engine",
"ranking",
"equality",
"diversity"
] | Short opinion on Pantene's tool to de-bias Google search results.
https://www.apnews.com/Business%20Wire/c53a0e8f5fe04bf68e8311f214c806cf
https://shetransforms.us/ | Hi everyone, just a quick more of a news update in the AI world. Which is the following. Pantene launches S.H.E. The Search Human Equalizer to shine a light on bias in search. So Pantene, the kind of cosmetic corporation, launches this thing which is supposed to correct your search. And it's introduced here in this YouTube video which as you can see down here has 400 likes, has 3.5K dislikes and of course comments are disabled. So that's kind of already weird. Let's say weird. If you go to the website here that they made, basically let me refresh this and you can see the intro. They say let's take the bias out of search. So if you search for greatest engineers you'll get all men. If you search for schoolgirl you'll get like this kind of sexualized images. If you search for Asian women in Spanish, same. So basically they have a browser extension that modifies your search results so that for example schoolgirl looks like this. Of course, I don't know, if I were to do this I would actually let people explore the search box right here. But of course I want you to download this extension. So to me the interesting part is how does this work? So you're asked to install a Chrome extension which I won't do. But basically down here they say view the terms that SHE is equalizing. If you click on that you get to a list. So it very much seems like this is absolutely manual handcrafted work because there's a lot of work in kind of correcting bias in for example in search, in machine learning and so on. These approaches usually have some data driven approach that actually will change the models and so on or will re-rank based on some kind of learned input. But this here is simply a list of terms, for example famous actor, famous athletes and so on that it will then kind of re-rank. And I'm pretty sure this is just human manual labor. Someone comes up with a new term like oh this term we should you can actually flag yourself in the Chrome extension. So they say here flag this search. You can there's a button so you can suggest one and they will say oh yeah okay that is really not biased or that is really biased. Will now re-rank the search results for you. I mean academically this is a terrible idea, absolutely terrible. Because how are you going to do this like manually replace every single there is like I don't know it reminds a bit of new speak. But yeah this approach is doomed to fail. But of course it's just a company trying to sell you stuff. It's not, I mean this is not a, this is a PR gag not really trying to do anything, anything state of the art or meaningful or even effective right. If you search a little different thing than this it will still show you the old kind of result. So yeah from the terms you can also pretty clearly see where they come from. They have their own name. They have Pantene. I didn't see this yet. They have Pantene in here. So yeah if you want less biased search results for these exact terms then install the extension. I do not recommend you do so. But I would like them to take on one more query that I came up with that is pretty pretty biased I found. And that's the most dangerous criminals. All men. Goodbye. | [
{
"start": 0,
"end": 7.12,
"text": " Hi everyone, just a quick more of a news update in the AI world."
},
{
"start": 7.12,
"end": 8.96,
"text": " Which is the following."
},
{
"start": 8.96,
"end": 11.120000000000001,
"text": " Pantene launches S.H.E."
},
{
"start": 11.120000000000001,
"end": 16.740000000000002,
"text": " The Search Human Equalizer to shine a light on bias in search."
},
{
"start": 16.740000000000002,
"end": 24.8,
"text": " So Pantene, the kind of cosmetic corporation, launches this thing which is supposed to correct"
},
{
"start": 24.8,
"end": 26.84,
"text": " your search."
},
{
"start": 26.84,
"end": 36.04,
"text": " And it's introduced here in this YouTube video which as you can see down here has 400 likes,"
},
{
"start": 36.04,
"end": 41.84,
"text": " has 3.5K dislikes and of course comments are disabled."
},
{
"start": 41.84,
"end": 47.8,
"text": " So that's kind of already weird."
},
{
"start": 47.8,
"end": 50,
"text": " Let's say weird."
},
{
"start": 50,
"end": 57.4,
"text": " If you go to the website here that they made, basically let me refresh this and you can"
},
{
"start": 57.4,
"end": 59.2,
"text": " see the intro."
},
{
"start": 59.2,
"end": 62.64,
"text": " They say let's take the bias out of search."
},
{
"start": 62.64,
"end": 68.06,
"text": " So if you search for greatest engineers you'll get all men."
},
{
"start": 68.06,
"end": 76,
"text": " If you search for schoolgirl you'll get like this kind of sexualized images."
},
{
"start": 76,
"end": 85.14,
"text": " If you search for Asian women in Spanish, same."
},
{
"start": 85.14,
"end": 91.64,
"text": " So basically they have a browser extension that modifies your search results so that"
},
{
"start": 91.64,
"end": 96,
"text": " for example schoolgirl looks like this."
},
{
"start": 96,
"end": 102.84,
"text": " Of course, I don't know, if I were to do this I would actually let people explore the search"
},
{
"start": 102.84,
"end": 104.56,
"text": " box right here."
},
{
"start": 104.56,
"end": 110.08,
"text": " But of course I want you to download this extension."
},
{
"start": 110.08,
"end": 116.24000000000001,
"text": " So to me the interesting part is how does this work?"
},
{
"start": 116.24000000000001,
"end": 123.5,
"text": " So you're asked to install a Chrome extension which I won't do."
},
{
"start": 123.5,
"end": 131.36,
"text": " But basically down here they say view the terms that SHE is equalizing."
},
{
"start": 131.36,
"end": 133.8,
"text": " If you click on that you get to a list."
},
{
"start": 133.8,
"end": 140.20000000000002,
"text": " So it very much seems like this is absolutely manual handcrafted work because there's a"
},
{
"start": 140.20000000000002,
"end": 144.84,
"text": " lot of work in kind of correcting bias in for example in search, in machine learning"
},
{
"start": 144.84,
"end": 145.92000000000002,
"text": " and so on."
},
{
"start": 145.92000000000002,
"end": 152.60000000000002,
"text": " These approaches usually have some data driven approach that actually will change the models"
},
{
"start": 152.60000000000002,
"end": 158.36,
"text": " and so on or will re-rank based on some kind of learned input."
},
{
"start": 158.36,
"end": 166.68,
"text": " But this here is simply a list of terms, for example famous actor, famous athletes and"
},
{
"start": 166.68,
"end": 169.28,
"text": " so on that it will then kind of re-rank."
},
{
"start": 169.28,
"end": 172.56,
"text": " And I'm pretty sure this is just human manual labor."
},
{
"start": 172.56,
"end": 178.76000000000002,
"text": " Someone comes up with a new term like oh this term we should you can actually flag yourself"
},
{
"start": 178.76000000000002,
"end": 180.02,
"text": " in the Chrome extension."
},
{
"start": 180.02,
"end": 183.48000000000002,
"text": " So they say here flag this search."
},
{
"start": 183.48000000000002,
"end": 187.64000000000001,
"text": " You can there's a button so you can suggest one and they will say oh yeah okay that is"
},
{
"start": 187.64,
"end": 191.2,
"text": " really not biased or that is really biased."
},
{
"start": 191.2,
"end": 196.51999999999998,
"text": " Will now re-rank the search results for you."
},
{
"start": 196.51999999999998,
"end": 200.95999999999998,
"text": " I mean academically this is a terrible idea, absolutely terrible."
},
{
"start": 200.95999999999998,
"end": 207.76,
"text": " Because how are you going to do this like manually replace every single there is like"
},
{
"start": 207.76,
"end": 213.23999999999998,
"text": " I don't know it reminds a bit of new speak."
},
{
"start": 213.23999999999998,
"end": 215.92,
"text": " But yeah this approach is doomed to fail."
},
{
"start": 215.92,
"end": 219.32,
"text": " But of course it's just a company trying to sell you stuff."
},
{
"start": 219.32,
"end": 228.83999999999997,
"text": " It's not, I mean this is not a, this is a PR gag not really trying to do anything, anything"
},
{
"start": 228.83999999999997,
"end": 232.6,
"text": " state of the art or meaningful or even effective right."
},
{
"start": 232.6,
"end": 239.04,
"text": " If you search a little different thing than this it will still show you the old kind of"
},
{
"start": 239.04,
"end": 241.23999999999998,
"text": " result."
},
{
"start": 241.24,
"end": 248.08,
"text": " So yeah from the terms you can also pretty clearly see where they come from."
},
{
"start": 248.08,
"end": 249.08,
"text": " They have their own name."
},
{
"start": 249.08,
"end": 250.08,
"text": " They have Pantene."
},
{
"start": 250.08,
"end": 251.08,
"text": " I didn't see this yet."
},
{
"start": 251.08,
"end": 256.04,
"text": " They have Pantene in here."
},
{
"start": 256.04,
"end": 265.2,
"text": " So yeah if you want less biased search results for these exact terms then install the extension."
},
{
"start": 265.2,
"end": 268.6,
"text": " I do not recommend you do so."
},
{
"start": 268.6,
"end": 275.96000000000004,
"text": " But I would like them to take on one more query that I came up with that is pretty pretty"
},
{
"start": 275.96000000000004,
"end": 277.38,
"text": " biased I found."
},
{
"start": 277.38,
"end": 281.16,
"text": " And that's the most dangerous criminals."
},
{
"start": 281.16,
"end": 282.16,
"text": " All men."
},
{
"start": 282.16,
"end": 302.8,
"text": " Goodbye."
}
] |
3Tqp_B2G6u0 | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Blockwise Parallel Decoding for Deep Autoregressive Models | [
"Science & Technology"
] | [
"machine learning",
"deep learning",
"transformers",
"nlp",
"natural language processing",
"ai",
"artificial intelligence",
"google brain",
"autoregressive",
"greedy decoding",
"inference",
"language model",
"speedup"
] | https://arxiv.org/abs/1811.03115
Abstract:
Deep autoregressive sequence-to-sequence models have demonstrated impressive performance across a wide variety of tasks in recent years. While common architecture classes such as recurrent, convolutional, and self-attention networks make different trade-offs between the amount of computation needed per layer and the length of the critical path at training time, generation still remains an inherently sequential process. To overcome this limitation, we propose a novel blockwise parallel decoding scheme in which we make predictions for multiple time steps in parallel then back off to the longest prefix validated by a scoring model. This allows for substantial theoretical improvements in generation speed when applied to architectures that can process output sequences in parallel. We verify our approach empirically through a series of experiments using state-of-the-art self-attention models for machine translation and image super-resolution, achieving iteration reductions of up to 2x over a baseline greedy decoder with no loss in quality, or up to 7x in exchange for a slight decrease in performance. In terms of wall-clock time, our fastest models exhibit real-time speedups of up to 4x over standard greedy decoding.
Authors: Mitchell Stern, Noam Shazeer, Jakob Uszkoreit | Hi there, today we'll look at blockwise parallel decoding for deep autoregressive models by Mitchell Stern, Noam Shazir and Jakob Uschkordei of UC Berkeley and Google Brain. So this is a bit more of an engineering paper than usual, which I find cool. It's basically an engineering trick to get these autoregressive models to decode faster, while you can either preserve fully their performance or suffer a bit of a drop in performance, while even speeding them up more. Alright, so let's dive in actually. The paper starts out with a description of what autoregressive models are and what decoding is in them. So let me try to quickly explain this. So what is an autoregressive model? So basically we're talking about, let's say, language models. So language models are the classic examples of these models, where you have a language model is a model that simply predicts the next word in a sequence. So you could have something like a cat sits on the, and then here is blank. So the language model is asked to predict which word is the word that follows. The language model basically does this by predicting the probability distribution over the next word. So w t plus one, if this is t here, this is t minus one and so on, given all the w's smaller or equal than t. So all the words that come before should lead to the next word being predicted. So the language model is tasked to ask what is the next word in the sequence, or what's the probability distribution over the next word. And then you can simply, you know, pick the maximum probability word or something like this. So that's that's pretty standard so far. So what is the autoregressive part in here? So basically the autoregressive part means that in order for me to find this word here, this next word, I will look at all of these words here. And what does it mean then when I want to use this language model for generating generating a sentence, let's say, so I'm now I've trained the language model, it's really good at predicting the next word, I want to actually use it to do something more interesting. So I, I want it to generate a full sentence, what I do, let's say I pick the first word, the right, I pick the first word, and I simply ask the language model, why what's the next word? Right? And the language model can do this, it can assess what's the probability distribution here over words, and it will, for example, give me some some distribution over words, and I pick the maximum one, I say, okay, the maximum one here is house. Okay, the house. The house. And then I go back and I ask the language model, well, what's the next word then? See, clearly, you're a language model. So you can give me based on the two previous words, you can give me the next word, what's the next word, and the language model will maybe say the house is, and so on. So you can see how you can generate a sentence by simply basically feeding the answer that the language model gives feeding it into the next step of predicting. So all of these now go into the next step, and once you've predicted the next step, the house is on. Once you've predicted that, then you can take that and in conjunction with everything you've predicted so far, to predict the next step. 
So you can use the language model that is trained to predict the next word to predict an entire sentence and the autoregressive part basically means that its own predictions will serve as the basis for the next predictions, and this introduces a fundamental problem, namely that I have to basically wait for one prediction, so I have to wait here for is before I can predict on, and this means if I have a, I basically can't help but, so if this is my language model, it's a box, I can't help but go to the language model, wait for a response, okay, then go to the language model again, wait for a response again. This is inherently sequential nature here where I have to do like M steps if M is the length of the sentence that I want, and we can't make use of batching normally, so usually what you do during training, during training you have a whole bunch of data, right, you have the cat sits on the mat, you have the house, the house is blue, so I can generate, just from these two sentences I can generate a bunch of training examples, I can ask, this is a training example where the input is the cat and it's meaning to predict sits, then this is a training example where the input is the cat sits and the language model has to predict on, this here is a training example, this, this is a training example, so I can chunk this up into a whole bunch of training examples and all of those I can write, I can feed in parallel into a big matrix, I can all put them here and then run this thing through my language model in training mode because each of them is already like is in the corpus, I can batch the training but I can't batch the prediction because of what we've seen before because inherently the next predicting the next word depends on the last word that the model itself has output, so there is no training corpus around since we're not training, yeah, so this is the fundamental problem and these authors tackle this problem, they say how can we make this faster, this decoding, so they introduce greedy decoding here where they say okay, this is what we've just seen, the probability of the next word is like the maximum, the maximum log probability here in that case if the model predicts a log probability over the words that we've input so far, right, and this X here is, so this is for example a translation task, a machine translation task, so the X would be the source language sentence, so maybe like a French sentence and the Y smaller equal to J would be the so far decoded English sentence if we're trying to translate to English and the Y J plus one would be the next word that we're trying to predict in the English sentence given the English sentence so far and the French sentence, the total French sentence, so greedy decoding just does this one step after another and we try to go to what they call blockwise parallel decoding. 
So we can just jump to the graphics straight away because what they do is pretty straightforward and is best illustrated in this graphic actually, so they go from this situation where they already have this here, they have a saw a dog ride, this is the sentence that has been decoded so far and we have to try to complete it, naturally we'll ask what's the next word, but they say okay what if we could predict not only the next word from this but the word two positions away or three positions away, we could do this all at the same time, right, I mean I can certainly build a model, a language model that doesn't only predict the next word but predicts the word after that as well, though of course if then this word, the predictor for this word still only gets this as an input so this is the important thing here, so the part of the model that predicts the is two words away isn't being informed that this word is being produced here, so naturally you would expect the quality to be worse because the word one position away, two positions away and three positions away are each predicted basically independently of each other just from the source context, so there is no, you can't expect like a coherency between the words or not a lot, so this is the fundamental trade-off with such a model, you can predict farther into the future at the same time but then these predictions can't basically depend on each other and this degrades your performance quite a bit, so what these authors do is to remedy that, they say well these things here we can, I mean we can produce a bunch of them, right, since all that's required as an input is this, we can actually produce like, we can produce a batch of them at the same time, so we can produce one, two and three words into the future and we can do this like a hundred times in parallel, no problem, alright, and we can sample this, we don't have to always take the most likely word, we can actually sample a bunch into the future and this now gets smarter because now I have a list of one hundred basically suggestions of what the continuation here could be, right, I have, I take this not as a given but I take these outputs as suggestions, alright, and then I can have another model that, this is called verify here, I can have another model that scores all of these different, all of these different decodings in parallel, both of these can be done by the same model, we saw the language model can be either used to predict or to score something, since it inherently predicts the probability of sequences or of following words, we can, we can let it output this probability all in parallel, so this also can count as a score, what I'm trying to say is you can, since the language model is constructed as a, as outputting probabilities anyway, like such, we can use it both to predict the next word and also if we have a suggestion we can use it to score that and to say okay how likely is that, right, and then what we can make sure is that the suggestion, we are looking for the suggestion basically that has the highest score and if you want to be really true to your original model you say I want to look for the suggestion that has the maximum, that would have had the maximum score had I decoded one by one, so then basically you retain the original performance and you gain a speed up as long as the, what the greedy decoding would have produced is in your suggestion, in your box of suggestions that you produce, as long as that's in there you gain a speed up, if that's not in there then you can 
always, you always have the one word ahead model because that's, you have that anyway, you predict the next word anyway, so in case none of these suggestions work out you still have this one word prediction basically which is the model you started with, so at worst case you're as fast as the greedy model and in best case you always, your suggestions are so good that they are always the one that would have been decoded anyway, so you can basically in this case do three steps at once. Alright, so this verify step here is shown here and you see it will decode, now this is just one suggestion keep in mind, they can produce many suggestions at the same time if there's memory or and they can actually, they can score each of this, so they can score this, they can score this and they can score this also independently as a batch, so they can do this in parallel and here you see, yeah here is executed in parallel, so the model will go and will score this word in and say ah this would have been, this is the argmax of the greedy decoding anyway and it can also score this step and say aha given that there is an in that this the is the argmax anyway, right and you can score this step and say ah given that there's in the, the argmax would have been car, so that's not bus, so we reject this suggestion but we keep that part of the suggestion and say okay the in the is basically what would have been decoded anyway according to the greedy decoding, so we can basically accept this here and continue from there, this is the accept step here, so this basically, so you can see in this one step which yeah we'll call one decoding step, we have basically done two of the greedy decoding steps in one go, so by predicting into the future and then selecting the one that agrees with the original model because we can, the fundamental thing is we can score in parallel but we can greedily produce not in parallel, alright so they actually push this further by also eliminating one of the, one of the evaluations here by combining basically the next predict step with the previous verify step and it's pretty cool to look at that, so we're in the same situation, you have this and you suggest this continuation and then the score model again will go here but while you verify you also do the next predict at the same time, since you've built your model, since it's the same model and this model every time you execute it, it outputs a distribution over the next set of positions, you might as well take the outputs of it, right, so when you then decide to accept this here, you will already have the outputs computed for the next three positions, so this you can feed directly into this next predict step, you basically don't have to execute it, you simply go to the one you've accepted and then you look at the outputs that you get anyway from this model and use them, so you might ask, okay which, how does a model look that like scores and predicts into the future and this, the answer is here, it's a bit out of order, I would have maybe liked this more previously but in any case this is what they do, so they use a transformer architecture and you have to imagine it starts down here and actually there is a huge network down here, right, this is just the output layer, so there's a giant transformer network down below and it produces this output representation, now normally from this representation you would go to this what's called p layer here, this is a output vocabulary projection, so this has one entry for each of the words in your 
vocabulary, so the, a, cat and so on, and you would then for each one predict a probability, so with this representation you basically project it onto this vocabulary and predict the probability distribution over the next word, but what they do is they say no no no, we not only need the next word, we need the next three words, so let's actually split this output signal into three output signals, and they do this by introducing this hidden feed forward layer here, or a hidden transformer layer, it's a hidden layer, yeah, we insert a single feed forward layer with hidden size, okay, so they insert a hidden layer and then they also add these skip connections here, right, they add the skip connections, which basically just means they feed this output directly through to here and add it to that, so basically the feed forward layer needs to transform this output here into the input of the vocabulary projection, one step ahead, two steps ahead and three steps ahead, and you can see here that those are independent, right, they don't depend on each other, there's nothing feeding back p1 here into the decision of p2, so they can be executed in parallel, but they lose the dependence on each other, alright, so that's the architecture, and you can clearly see here it's able to predict three steps into the future at the same time, so yeah, alright, so they also do different adjustments where they say, now, we can also kind of sacrifice a bit of the fidelity to the original model by not requiring that we only accept when the suggestion is the perfect best suggestion that would have been decoded by the greedy model, but what we could do is we could just accept it if it's in the top k, if it's good enough, basically if one of the suggestions that we have is good enough then we'll accept it, or when you have like some sort of distance metric, they say here, so the distance between our suggestion and the maximum, so what would have been best by the greedy decoding, should be smaller than some constant epsilon, and that way you can sacrifice a bit of performance but your suggestions will be accepted much more often and thereby your speedup will be much higher, and they also experiment with whether or not they should fine tune the original model along with their model, and they also experiment with knowledge distillation, where they basically have like some teacher model and you train your model on the output of the teacher model, I don't want to go too far into this since these are mostly kind of things to make it work even better, and you can see here that this is for example a machine translation task, so this is the WMT 2014 English-German translation, and with the regular setting they get a BLEU score of 26, and here higher is better, and as you can see they get fairly sizable speedups while keeping the BLEU scores fairly constant, so they almost speed up by 2x, but if they allow the BLEU scores to go down a bit they get a much higher speedup of like 3, and then if they do distillation and fine tuning they actually manage to keep up the performance even though they get very, very high speedups, so they get speedups of up to like 5x without dropping the BLEU scores very much, so that's pretty impressive, another experiment they do is image super resolution, where you can see here with regular they try to really keep exactly the original model output and it doesn't speed it up too much, but when they allow for a bit of a mistake to be made, so here this is image super resolution, so values are between
zero and 255, and they allow an epsilon of two on that scale, so that's kind of less than 1% error on the individual pixel, then they get speedups of 7x or something like this, and you can see in this region here that when k is 4, k being the number of steps that you decode ahead, the mean block size is 3.75, which means on average 3.75 steps ahead are accepted, which means basically their suggestions are almost always good enough to be accepted, so they get this massive speedup by basically being able to jump these decoding steps, yeah, so they have a bunch of other results here, they show their wall clock time speedup as well, since an iteration speedup is nice, but if you have to pay a huge computational cost for it, it's not so good, but they also show that they have a big wall clock speedup of up to 4x here in super resolution and over 3x in translation, so it's a pretty cool paper, they give some examples here, a bunch more tables, some examples of their super resolution, and yeah, if this might be something for you then use it, I think it's a pretty neat trick, especially for production systems, all right, that was it, bye bye. (Two short code sketches, one of the decoding loop and one of the multi-step output heads described above, follow after this record's segment list.) | [
{
"start": 0,
"end": 6.640000000000001,
"text": " Hi there, today we'll look at blockwise parallel decoding for deep autoregressive models by"
},
{
"start": 6.640000000000001,
"end": 15.200000000000001,
"text": " Mitchell Stern, Noam Shazir and Jakob Uschkordei of UC Berkeley and Google Brain."
},
{
"start": 15.200000000000001,
"end": 21.44,
"text": " So this is a bit more of an engineering paper than usual, which I find cool."
},
{
"start": 21.44,
"end": 28.400000000000002,
"text": " It's basically an engineering trick to get these autoregressive models to decode faster,"
},
{
"start": 28.4,
"end": 36.48,
"text": " while you can either preserve fully their performance or suffer a bit of a drop in performance,"
},
{
"start": 36.48,
"end": 39.12,
"text": " while even speeding them up more."
},
{
"start": 39.12,
"end": 46.72,
"text": " Alright, so let's dive in actually."
},
{
"start": 46.72,
"end": 52.32,
"text": " The paper starts out with a description of what autoregressive models are and what decoding"
},
{
"start": 52.32,
"end": 54.239999999999995,
"text": " is in them."
},
{
"start": 54.24,
"end": 59.36,
"text": " So let me try to quickly explain this."
},
{
"start": 59.36,
"end": 62.800000000000004,
"text": " So what is an autoregressive model?"
},
{
"start": 62.800000000000004,
"end": 68,
"text": " So basically we're talking about, let's say, language models."
},
{
"start": 68,
"end": 73.2,
"text": " So language models are the classic examples of these models, where you have a language"
},
{
"start": 73.2,
"end": 77.08,
"text": " model is a model that simply predicts the next word in a sequence."
},
{
"start": 77.08,
"end": 88.4,
"text": " So you could have something like a cat sits on the, and then here is blank."
},
{
"start": 88.4,
"end": 95.44,
"text": " So the language model is asked to predict which word is the word that follows."
},
{
"start": 95.44,
"end": 101.16,
"text": " The language model basically does this by predicting the probability distribution over"
},
{
"start": 101.16,
"end": 102.28,
"text": " the next word."
},
{
"start": 102.28,
"end": 112.84,
"text": " So w t plus one, if this is t here, this is t minus one and so on, given all the w's smaller"
},
{
"start": 112.84,
"end": 114.76,
"text": " or equal than t."
},
{
"start": 114.76,
"end": 122.36,
"text": " So all the words that come before should lead to the next word being predicted."
},
{
"start": 122.36,
"end": 128.64,
"text": " So the language model is tasked to ask what is the next word in the sequence, or what's"
},
{
"start": 128.64,
"end": 131.16,
"text": " the probability distribution over the next word."
},
{
"start": 131.16,
"end": 136.04,
"text": " And then you can simply, you know, pick the maximum probability word or something like"
},
{
"start": 136.04,
"end": 137.04,
"text": " this."
},
{
"start": 137.04,
"end": 143.04,
"text": " So that's that's pretty standard so far."
},
{
"start": 143.04,
"end": 146.28,
"text": " So what is the autoregressive part in here?"
},
{
"start": 146.28,
"end": 153.16,
"text": " So basically the autoregressive part means that in order for me to find this word here,"
},
{
"start": 153.16,
"end": 158.04,
"text": " this next word, I will look at all of these words here."
},
{
"start": 158.04,
"end": 164.23999999999998,
"text": " And what does it mean then when I want to use this language model for generating generating"
},
{
"start": 164.23999999999998,
"end": 169.68,
"text": " a sentence, let's say, so I'm now I've trained the language model, it's really good at predicting"
},
{
"start": 169.68,
"end": 174.89999999999998,
"text": " the next word, I want to actually use it to do something more interesting."
},
{
"start": 174.89999999999998,
"end": 182.04,
"text": " So I, I want it to generate a full sentence, what I do, let's say I pick the first word,"
},
{
"start": 182.04,
"end": 187.92,
"text": " the right, I pick the first word, and I simply ask the language model, why what's the next"
},
{
"start": 187.92,
"end": 188.92,
"text": " word?"
},
{
"start": 188.92,
"end": 189.92,
"text": " Right?"
},
{
"start": 189.92,
"end": 195.95999999999998,
"text": " And the language model can do this, it can assess what's the probability distribution"
},
{
"start": 195.95999999999998,
"end": 201.72,
"text": " here over words, and it will, for example, give me some some distribution over words,"
},
{
"start": 201.72,
"end": 206.44,
"text": " and I pick the maximum one, I say, okay, the maximum one here is house."
},
{
"start": 206.44,
"end": 210.2,
"text": " Okay, the house."
},
{
"start": 210.2,
"end": 211.92,
"text": " The house."
},
{
"start": 211.92,
"end": 216.79999999999998,
"text": " And then I go back and I ask the language model, well, what's the next word then?"
},
{
"start": 216.8,
"end": 218.8,
"text": " See, clearly, you're a language model."
},
{
"start": 218.8,
"end": 223.52,
"text": " So you can give me based on the two previous words, you can give me the next word, what's"
},
{
"start": 223.52,
"end": 230.8,
"text": " the next word, and the language model will maybe say the house is, and so on."
},
{
"start": 230.8,
"end": 237.8,
"text": " So you can see how you can generate a sentence by simply basically feeding the answer that"
},
{
"start": 237.8,
"end": 242.72000000000003,
"text": " the language model gives feeding it into the next step of predicting."
},
{
"start": 242.72,
"end": 247.35999999999999,
"text": " So all of these now go into the next step, and once you've predicted the next step, the"
},
{
"start": 247.35999999999999,
"end": 251.07999999999998,
"text": " house is on."
},
{
"start": 251.07999999999998,
"end": 255.6,
"text": " Once you've predicted that, then you can take that and in conjunction with everything you've"
},
{
"start": 255.6,
"end": 258.66,
"text": " predicted so far, to predict the next step."
},
{
"start": 258.66,
"end": 263.76,
"text": " So you can use the language model that is trained to predict the next word to predict"
},
{
"start": 263.76,
"end": 268.36,
"text": " an entire sentence and the autoregressive part basically means that its own predictions"
},
{
"start": 268.36,
"end": 275.32,
"text": " will serve as the basis for the next predictions, and this introduces a fundamental problem,"
},
{
"start": 275.32,
"end": 283.68,
"text": " namely that I have to basically wait for one prediction, so I have to wait here for is"
},
{
"start": 283.68,
"end": 292.92,
"text": " before I can predict on, and this means if I have a, I basically can't help but, so if"
},
{
"start": 292.92,
"end": 298.28000000000003,
"text": " this is my language model, it's a box, I can't help but go to the language model, wait for"
},
{
"start": 298.28,
"end": 303.23999999999995,
"text": " a response, okay, then go to the language model again, wait for a response again."
},
{
"start": 303.23999999999995,
"end": 309.44,
"text": " This is inherently sequential nature here where I have to do like M steps if M is the"
},
{
"start": 309.44,
"end": 318.35999999999996,
"text": " length of the sentence that I want, and we can't make use of batching normally, so usually"
},
{
"start": 318.35999999999996,
"end": 324.23999999999995,
"text": " what you do during training, during training you have a whole bunch of data, right, you"
},
{
"start": 324.24,
"end": 343.96000000000004,
"text": " have the cat sits on the mat, you have the house, the house is blue, so I can generate,"
},
{
"start": 343.96000000000004,
"end": 349.28000000000003,
"text": " just from these two sentences I can generate a bunch of training examples, I can ask, this"
},
{
"start": 349.28,
"end": 356.67999999999995,
"text": " is a training example where the input is the cat and it's meaning to predict sits, then"
},
{
"start": 356.67999999999995,
"end": 361.79999999999995,
"text": " this is a training example where the input is the cat sits and the language model has"
},
{
"start": 361.79999999999995,
"end": 368.52,
"text": " to predict on, this here is a training example, this, this is a training example, so I can"
},
{
"start": 368.52,
"end": 373.79999999999995,
"text": " chunk this up into a whole bunch of training examples and all of those I can write, I can"
},
{
"start": 373.8,
"end": 381.84000000000003,
"text": " feed in parallel into a big matrix, I can all put them here and then run this thing"
},
{
"start": 381.84000000000003,
"end": 387.12,
"text": " through my language model in training mode because each of them is already like is in"
},
{
"start": 387.12,
"end": 394.2,
"text": " the corpus, I can batch the training but I can't batch the prediction because of what"
},
{
"start": 394.2,
"end": 400.12,
"text": " we've seen before because inherently the next predicting the next word depends on the last"
},
{
"start": 400.12,
"end": 405.72,
"text": " word that the model itself has output, so there is no training corpus around since we're"
},
{
"start": 405.72,
"end": 412.04,
"text": " not training, yeah, so this is the fundamental problem and these authors tackle this problem,"
},
{
"start": 412.04,
"end": 419.64,
"text": " they say how can we make this faster, this decoding, so they introduce greedy decoding"
},
{
"start": 419.64,
"end": 428.14,
"text": " here where they say okay, this is what we've just seen, the probability of the next word"
},
{
"start": 428.14,
"end": 435.88,
"text": " is like the maximum, the maximum log probability here in that case if the model predicts a"
},
{
"start": 435.88,
"end": 444.76,
"text": " log probability over the words that we've input so far, right, and this X here is, so"
},
{
"start": 444.76,
"end": 449,
"text": " this is for example a translation task, a machine translation task, so the X would be"
},
{
"start": 449,
"end": 456.56,
"text": " the source language sentence, so maybe like a French sentence and the Y smaller equal"
},
{
"start": 456.56,
"end": 463.36,
"text": " to J would be the so far decoded English sentence if we're trying to translate to English and"
},
{
"start": 463.36,
"end": 468.72,
"text": " the Y J plus one would be the next word that we're trying to predict in the English sentence"
},
{
"start": 468.72,
"end": 475.48,
"text": " given the English sentence so far and the French sentence, the total French sentence,"
},
{
"start": 475.48,
"end": 482.98,
"text": " so greedy decoding just does this one step after another and we try to go to what they"
},
{
"start": 482.98,
"end": 487.64000000000004,
"text": " call blockwise parallel decoding."
},
{
"start": 487.64000000000004,
"end": 494.28000000000003,
"text": " So we can just jump to the graphics straight away because what they do is pretty straightforward"
},
{
"start": 494.28000000000003,
"end": 500.92,
"text": " and is best illustrated in this graphic actually, so they go from this situation where they"
},
{
"start": 500.92,
"end": 510.6,
"text": " already have this here, they have a saw a dog ride, this is the sentence that has been"
},
{
"start": 510.6,
"end": 518.52,
"text": " decoded so far and we have to try to complete it, naturally we'll ask what's the next word,"
},
{
"start": 518.52,
"end": 524.76,
"text": " but they say okay what if we could predict not only the next word from this but the word"
},
{
"start": 524.76,
"end": 531.12,
"text": " two positions away or three positions away, we could do this all at the same time, right,"
},
{
"start": 531.12,
"end": 535.9200000000001,
"text": " I mean I can certainly build a model, a language model that doesn't only predict the next word"
},
{
"start": 535.92,
"end": 544.9599999999999,
"text": " but predicts the word after that as well, though of course if then this word, the predictor"
},
{
"start": 544.9599999999999,
"end": 550.68,
"text": " for this word still only gets this as an input so this is the important thing here, so the"
},
{
"start": 550.68,
"end": 559.2199999999999,
"text": " part of the model that predicts the is two words away isn't being informed that this"
},
{
"start": 559.2199999999999,
"end": 565,
"text": " word is being produced here, so naturally you would expect the quality to be worse because"
},
{
"start": 565,
"end": 571.4,
"text": " the word one position away, two positions away and three positions away are each predicted"
},
{
"start": 571.4,
"end": 579.08,
"text": " basically independently of each other just from the source context, so there is no, you"
},
{
"start": 579.08,
"end": 588.84,
"text": " can't expect like a coherency between the words or not a lot, so this is the fundamental"
},
{
"start": 588.84,
"end": 593.44,
"text": " trade-off with such a model, you can predict farther into the future at the same time but"
},
{
"start": 593.44,
"end": 599.72,
"text": " then these predictions can't basically depend on each other and this degrades your performance"
},
{
"start": 599.72,
"end": 606.8000000000001,
"text": " quite a bit, so what these authors do is to remedy that, they say well these things here"
},
{
"start": 606.8000000000001,
"end": 613.12,
"text": " we can, I mean we can produce a bunch of them, right, since all that's required as an input"
},
{
"start": 613.12,
"end": 618.7600000000001,
"text": " is this, we can actually produce like, we can produce a batch of them at the same time,"
},
{
"start": 618.76,
"end": 624.2,
"text": " so we can produce one, two and three words into the future and we can do this like a"
},
{
"start": 624.2,
"end": 631.2,
"text": " hundred times in parallel, no problem, alright, and we can sample this, we don't have to always"
},
{
"start": 631.2,
"end": 639.48,
"text": " take the most likely word, we can actually sample a bunch into the future and this now"
},
{
"start": 639.48,
"end": 646.42,
"text": " gets smarter because now I have a list of one hundred basically suggestions of what"
},
{
"start": 646.42,
"end": 652.3199999999999,
"text": " the continuation here could be, right, I have, I take this not as a given but I take these"
},
{
"start": 652.3199999999999,
"end": 660.12,
"text": " outputs as suggestions, alright, and then I can have another model that, this is called"
},
{
"start": 660.12,
"end": 668.24,
"text": " verify here, I can have another model that scores all of these different, all of these"
},
{
"start": 668.24,
"end": 672.92,
"text": " different decodings in parallel, both of these can be done by the same model, we saw the"
},
{
"start": 672.92,
"end": 679.9599999999999,
"text": " language model can be either used to predict or to score something, since it inherently"
},
{
"start": 679.9599999999999,
"end": 689.28,
"text": " predicts the probability of sequences or of following words, we can, we can let it output"
},
{
"start": 689.28,
"end": 694.92,
"text": " this probability all in parallel, so this also can count as a score, what I'm trying"
},
{
"start": 694.92,
"end": 701.04,
"text": " to say is you can, since the language model is constructed as a, as outputting probabilities"
},
{
"start": 701.04,
"end": 710.28,
"text": " anyway, like such, we can use it both to predict the next word and also if we have a suggestion"
},
{
"start": 710.28,
"end": 719.16,
"text": " we can use it to score that and to say okay how likely is that, right, and then what we"
},
{
"start": 719.16,
"end": 726.5999999999999,
"text": " can make sure is that the suggestion, we are looking for the suggestion basically that"
},
{
"start": 726.6,
"end": 733.72,
"text": " has the highest score and if you want to be really true to your original model you say"
},
{
"start": 733.72,
"end": 741.58,
"text": " I want to look for the suggestion that has the maximum, that would have had the maximum"
},
{
"start": 741.58,
"end": 750.88,
"text": " score had I decoded one by one, so then basically you retain the original performance and you"
},
{
"start": 750.88,
"end": 759.92,
"text": " gain a speed up as long as the, what the greedy decoding would have produced is in your suggestion,"
},
{
"start": 759.92,
"end": 763.6,
"text": " in your box of suggestions that you produce, as long as that's in there you gain a speed"
},
{
"start": 763.6,
"end": 769.66,
"text": " up, if that's not in there then you can always, you always have the one word ahead model because"
},
{
"start": 769.66,
"end": 775.72,
"text": " that's, you have that anyway, you predict the next word anyway, so in case none of these"
},
{
"start": 775.72,
"end": 782.88,
"text": " suggestions work out you still have this one word prediction basically which is the model"
},
{
"start": 782.88,
"end": 792.08,
"text": " you started with, so at worst case you're as fast as the greedy model and in best case"
},
{
"start": 792.08,
"end": 798.72,
"text": " you always, your suggestions are so good that they are always the one that would have been"
},
{
"start": 798.72,
"end": 807.36,
"text": " decoded anyway, so you can basically in this case do three steps at once. Alright, so this"
},
{
"start": 807.36,
"end": 814.9,
"text": " verify step here is shown here and you see it will decode, now this is just one suggestion"
},
{
"start": 814.9,
"end": 822.44,
"text": " keep in mind, they can produce many suggestions at the same time if there's memory or and"
},
{
"start": 822.44,
"end": 827.6,
"text": " they can actually, they can score each of this, so they can score this, they can score"
},
{
"start": 827.6,
"end": 837.72,
"text": " this and they can score this also independently as a batch, so they can do this in parallel"
},
{
"start": 837.72,
"end": 843.84,
"text": " and here you see, yeah here is executed in parallel, so the model will go and will score"
},
{
"start": 843.84,
"end": 848.52,
"text": " this word in and say ah this would have been, this is the argmax of the greedy decoding"
},
{
"start": 848.52,
"end": 854.88,
"text": " anyway and it can also score this step and say aha given that there is an in that this"
},
{
"start": 854.88,
"end": 861.72,
"text": " the is the argmax anyway, right and you can score this step and say ah given that there's"
},
{
"start": 861.72,
"end": 869.08,
"text": " in the, the argmax would have been car, so that's not bus, so we reject this suggestion"
},
{
"start": 869.08,
"end": 876.24,
"text": " but we keep that part of the suggestion and say okay the in the is basically what would"
},
{
"start": 876.24,
"end": 886.44,
"text": " have been decoded anyway according to the greedy decoding, so we can basically accept"
},
{
"start": 886.44,
"end": 896.48,
"text": " this here and continue from there, this is the accept step here, so this basically, so"
},
{
"start": 896.48,
"end": 902.52,
"text": " you can see in this one step which yeah we'll call one decoding step, we have basically"
},
{
"start": 902.52,
"end": 912.42,
"text": " done two of the greedy decoding steps in one go, so by predicting into the future and then"
},
{
"start": 912.42,
"end": 919.04,
"text": " selecting the one that agrees with the original model because we can, the fundamental thing"
},
{
"start": 919.04,
"end": 928.4,
"text": " is we can score in parallel but we can greedily produce not in parallel, alright so they actually"
},
{
"start": 928.4,
"end": 939.04,
"text": " push this further by also eliminating one of the, one of the evaluations here by combining"
},
{
"start": 939.04,
"end": 948.4,
"text": " basically the next predict step with the previous verify step and it's pretty cool to look at"
},
{
"start": 948.4,
"end": 957.04,
"text": " that, so we're in the same situation, you have this and you suggest this continuation"
},
{
"start": 957.04,
"end": 968.04,
"text": " and then the score model again will go here but while you verify you also do the next"
},
{
"start": 968.04,
"end": 973.56,
"text": " predict at the same time, since you've built your model, since it's the same model and"
},
{
"start": 973.56,
"end": 982.52,
"text": " this model every time you execute it, it outputs a distribution over the next set of positions,"
},
{
"start": 982.52,
"end": 988.4,
"text": " you might as well take the outputs of it, right, so when you then decide to accept this"
},
{
"start": 988.4,
"end": 996.36,
"text": " here, you will already have the outputs computed for the next three positions, so this you"
},
{
"start": 996.36,
"end": 1001.48,
"text": " can feed directly into this next predict step, you basically don't have to execute it, you"
},
{
"start": 1001.48,
"end": 1009.76,
"text": " simply go to the one you've accepted and then you look at the outputs that you get anyway"
},
{
"start": 1009.76,
"end": 1018.88,
"text": " from this model and use them, so you might ask, okay which, how does a model look that"
},
{
"start": 1018.88,
"end": 1024.12,
"text": " like scores and predicts into the future and this, the answer is here, it's a bit out of"
},
{
"start": 1024.12,
"end": 1029.8799999999999,
"text": " order, I would have maybe liked this more previously but in any case this is what they"
},
{
"start": 1029.8799999999999,
"end": 1034.52,
"text": " do, so they use a transformer architecture and you have to imagine it starts down here"
},
{
"start": 1034.52,
"end": 1040.48,
"text": " and actually there is a huge network down here, right, this is just the output layer,"
},
{
"start": 1040.48,
"end": 1047.6,
"text": " so there's a giant transformer network down below and it produces this output representation,"
},
{
"start": 1047.6,
"end": 1054.84,
"text": " now normally from this representation you would go to this what's called p layer here,"
},
{
"start": 1054.84,
"end": 1060.52,
"text": " this is a output vocabulary projection, so this has one entry for each of the words in"
},
{
"start": 1060.52,
"end": 1068.76,
"text": " your vocabulary, so the, a, cat and so on and you would then for each one predict a"
},
{
"start": 1068.76,
"end": 1076.24,
"text": " probability, so with this representation you basically project it onto this vocabulary"
},
{
"start": 1076.24,
"end": 1082.6399999999999,
"text": " and predict the probability distribution over the next word, but what they do is they say"
},
{
"start": 1082.6399999999999,
"end": 1087.68,
"text": " no no no we not only need the next word, we need the next three words, so let's actually"
},
{
"start": 1087.68,
"end": 1095.5600000000002,
"text": " split this output signal into three output signals and they do this by introducing this"
},
{
"start": 1095.5600000000002,
"end": 1103.3200000000002,
"text": " hidden feed forward layer here or a hidden transformer layer, it's a hidden layer, yeah"
},
{
"start": 1103.3200000000002,
"end": 1110.28,
"text": " we insert a single feed forward layer with hidden size, okay, so they insert a hidden"
},
{
"start": 1110.28,
"end": 1119.16,
"text": " layer and then they also add these skip connections here, right, they add the skip connections"
},
{
"start": 1119.16,
"end": 1127.52,
"text": " which basically just means they feed through this output directly to here and add it to"
},
{
"start": 1127.52,
"end": 1135.08,
"text": " that, so basically the feed forward layer needs to transform this output here into the"
},
{
"start": 1135.08,
"end": 1141.84,
"text": " vocabulary input, one step ahead, two steps ahead and three steps ahead and you can see"
},
{
"start": 1141.84,
"end": 1146.6,
"text": " here that those are independent, right, they don't depend on each other, there's nothing"
},
{
"start": 1146.6,
"end": 1151.84,
"text": " feeding back p1 here into the decision of p2 so they can be executed in parallel, but"
},
{
"start": 1151.84,
"end": 1160.12,
"text": " they lose the dependence on each other, alright, so that's the architecture and you can clearly"
},
{
"start": 1160.12,
"end": 1171.1599999999999,
"text": " see here it's able to predict three steps into the future at the same time, so yeah,"
},
{
"start": 1171.1599999999999,
"end": 1177.2399999999998,
"text": " alright so they also do different adjustments where they say now yeah we can also kind of"
},
{
"start": 1177.2399999999998,
"end": 1187.6799999999998,
"text": " sacrifice a bit of the fidelity to the original model by not requiring that the basically"
},
{
"start": 1187.68,
"end": 1192.96,
"text": " we don't only accept when the suggestion is the perfect best suggestion that would have"
},
{
"start": 1192.96,
"end": 1199.04,
"text": " been decoded by the greedy model, but what we could do is we could just if it's in the"
},
{
"start": 1199.04,
"end": 1205.48,
"text": " top k we could accept it, if it's in the if it's good enough basically one of the suggestions"
},
{
"start": 1205.48,
"end": 1210.4,
"text": " that we have is good enough then we'll accept it or when you have like some sort of distance"
},
{
"start": 1210.4,
"end": 1216,
"text": " metric they say here so the distance between our suggestion and the maximum so the what"
},
{
"start": 1216,
"end": 1222,
"text": " would have been best by the greedy should be smaller than some constant epsilon and"
},
{
"start": 1222,
"end": 1226.8,
"text": " that way you can sacrifice a bit of performance but your suggestions will be accepted much"
},
{
"start": 1226.8,
"end": 1232.6,
"text": " more often and thereby your speedup will be much higher and they also experiment with"
},
{
"start": 1232.6,
"end": 1239.4,
"text": " whether or not they should fine tune the original model along with their model and also the"
},
{
"start": 1239.4,
"end": 1246.5600000000002,
"text": " experiment with knowledge distillation where they basically have like some some teacher"
},
{
"start": 1246.5600000000002,
"end": 1251.92,
"text": " model and you train the your model on the output of the teacher model don't want to"
},
{
"start": 1251.92,
"end": 1258.92,
"text": " go too far into this since these are mostly kind of things to make it work even better"
},
{
"start": 1258.92,
"end": 1266.64,
"text": " and you can see here that this is for example a machine translation task so this is the"
},
{
"start": 1266.64,
"end": 1274.44,
"text": " WMT 2014 English German translation and there's a regular they get a blow score of 26 and"
},
{
"start": 1274.44,
"end": 1283.3600000000001,
"text": " here higher is better and if you can see they get a fairly sizable speedups by keeping the"
},
{
"start": 1283.3600000000001,
"end": 1289.8000000000002,
"text": " blow scores fairly constant so they they almost speed up by 2x but if they allow the blow"
},
{
"start": 1289.8,
"end": 1297.12,
"text": " scores to go down a bit they get a much higher speedup of like 3 and then if they do like"
},
{
"start": 1297.12,
"end": 1303.12,
"text": " distillation and fine tuning they actually manage to keep up the performance even though"
},
{
"start": 1303.12,
"end": 1310.56,
"text": " they get very very high speedups so they get speedups until like 5x by not dropping the"
},
{
"start": 1310.56,
"end": 1319.1399999999999,
"text": " blow scores very much so that's that's pretty impressive another experiment they do is image"
},
{
"start": 1319.14,
"end": 1326.2800000000002,
"text": " super resolution where you can see here with regular they try to really keep exactly the"
},
{
"start": 1326.2800000000002,
"end": 1332.5600000000002,
"text": " original model output and it doesn't it doesn't speed it up too much but when they allow for"
},
{
"start": 1332.5600000000002,
"end": 1339.8400000000001,
"text": " a bit of a mistake to be made so here this is image super resolution so values are between"
},
{
"start": 1339.8400000000001,
"end": 1347.64,
"text": " zero and 255 and they allow epsilon equals to two of that so that's that's kind of less"
},
{
"start": 1347.64,
"end": 1355.44,
"text": " than 1% error on the individual pixel then they get a speed ups of 7x or something like"
},
{
"start": 1355.44,
"end": 1361.72,
"text": " this and you can see in this region here that when the K is for in case the number of steps"
},
{
"start": 1361.72,
"end": 1371.64,
"text": " that you decode ahead so and the mini mean block size is 3.75 that means on average 3.75"
},
{
"start": 1371.64,
"end": 1376.3200000000002,
"text": " steps ahead or accepted which means basically there their suggestions are almost always"
},
{
"start": 1376.32,
"end": 1381.84,
"text": " good enough to be accepted so they get this massive speed up by basically being able to"
},
{
"start": 1381.84,
"end": 1390.3999999999999,
"text": " jump these decoding steps yeah so they have a bunch of other results here there show their"
},
{
"start": 1390.3999999999999,
"end": 1395.96,
"text": " wall clock time speed up since iteration speed up as well but if you have to pay in huge"
},
{
"start": 1395.96,
"end": 1401.28,
"text": " computational cost it's not so good but they also show that they have a big kind of wall"
},
{
"start": 1401.28,
"end": 1410.08,
"text": " clock speed up up to up to 4x here in super resolution and over 3x in translation so it's"
},
{
"start": 1410.08,
"end": 1415.12,
"text": " a pretty cool paper they give some examples here a bunch of more tables some examples"
},
{
"start": 1415.12,
"end": 1424,
"text": " of their super resolution and yeah if this might be something for you then use it it's"
},
{
"start": 1424,
"end": 1429.92,
"text": " I think it's a pretty neat trick and yeah especially for production systems all right"
},
{
"start": 1429.92,
"end": 1431.44,
"text": " that was it bye bye."
}
] |
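To make the predict, verify, accept loop described in this record's transcript a bit more concrete, here is a minimal Python sketch of blockwise parallel greedy decoding. It is a reading aid, not the authors' implementation: `propose_next_k`, `greedy_next`, the toy sentence and every other name in it are hypothetical stand-ins for the paper's k prediction heads and the base model's one-step argmax.

```python
from typing import Callable, List, Sequence

def blockwise_greedy_decode(
    prefix: List[str],
    propose_next_k: Callable[[Sequence[str]], List[str]],  # k independent guesses for positions +1 .. +k
    greedy_next: Callable[[Sequence[str]], str],            # the base model's ordinary one-step argmax
    k: int,
    max_len: int,
) -> List[str]:
    out = list(prefix)
    while len(out) < max_len and (not out or out[-1] != "<eos>"):
        proposal = propose_next_k(out)[:k]            # predict: k future tokens, proposed independently
        accepted = 0
        for i, token in enumerate(proposal):          # verify: check each position against greedy decoding
            context = out + proposal[:i]              # (the paper scores all positions in one parallel pass)
            if greedy_next(context) == token:
                accepted += 1                         # accept: keep the longest prefix that matches
            else:
                break
        if accepted == 0:
            out.append(greedy_next(out))              # worst case: plain greedy step, never slower than that
        else:
            out.extend(proposal[:accepted])
    return out

# Toy "model": it just reproduces one fixed sentence, so the sketch is runnable end to end.
SENTENCE = "i saw a dog ride in the bus <eos>".split()

def greedy_next(context: Sequence[str]) -> str:
    return SENTENCE[min(len(context), len(SENTENCE) - 1)]

def propose_next_k(context: Sequence[str]) -> List[str]:
    start = len(context)
    guesses = [SENTENCE[min(start + j, len(SENTENCE) - 1)] for j in range(3)]
    if start == 4:
        guesses[2] = "bus"   # one deliberately wrong third guess, which verification will reject
    return guesses

if __name__ == "__main__":
    print(blockwise_greedy_decode(["i"], propose_next_k, greedy_next, k=3, max_len=12))
    # ['i', 'saw', 'a', 'dog', 'ride', 'in', 'the', 'bus', '<eos>']
```

In the actual method the verification of all k positions happens in a single parallel forward pass of the same model, and the relaxed acceptance rules mentioned in the transcript (top-k membership, or a distance to the greedy maximum below some epsilon) would replace the exact equality check above.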
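The transcript also describes the output layer that makes those k-step proposals: a single hidden feed forward layer per extra position, with a skip connection, feeding a vocabulary projection, where the k heads are computed independently of each other. Below is a rough PyTorch sketch of that idea; the layer sizes, the ReLU, and the choice to share one vocabulary projection across heads are assumptions made for illustration rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MultiStepHeads(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, vocab_size: int, k: int):
        super().__init__()
        self.vocab_proj = nn.Linear(d_model, vocab_size)  # the usual next-word projection (position +1)
        # one extra hidden feed-forward block per additional future position (positions +2 .. +k)
        self.extra_heads = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_model))
            for _ in range(k - 1)
        ])

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: decoder output representation, shape (batch, seq_len, d_model)
        logits = [self.vocab_proj(h)]                  # head for position +1
        for ff in self.extra_heads:                    # heads for +2 .. +k, independent of each other
            logits.append(self.vocab_proj(h + ff(h)))  # skip connection around the hidden layer
        return torch.stack(logits, dim=2)              # (batch, seq_len, k, vocab_size)

heads = MultiStepHeads(d_model=512, d_hidden=2048, vocab_size=32000, k=3)
print(heads(torch.randn(2, 5, 512)).shape)  # torch.Size([2, 5, 3, 32000])
```

Because nothing feeds the output of one head into another, all k distributions come out of a single forward pass, which is exactly what lets the proposal step run in parallel, at the cost of the heads not being able to condition on each other.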
pPBqM4CKjUU | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Discriminating Systems - Gender, Race, and Power in AI | [
"Science & Technology"
] | [
"ai",
"machine learning",
"bias",
"fairness",
"ml fairness",
"algorithmic bias",
"algorithmic discrimination",
"ai and society",
"ainow",
"google",
"microsoft",
"race",
"gender",
"stem",
"pipeline",
"gender gap",
"diversity",
"inclusion",
"equity",
"power"
] | TL;DR:
- There exists both an unequal representation of people in the AI workforce as well as examples of societal bias in AI systems.
- The authors claim that the former causally leads to the latter and vice versa.
- To me, the report does not manage to make a strong enough argument for that claim.
- I find the statements made quite dishonest at times.
https://ainowinstitute.org/discriminatingsystems.pdf
Authors:
Sarah Myers West, Meredith Whittaker, Kate Crawford | Hi there, today we're looking at discriminating systems, gender, race and power in AI by Sarah Myers-West, Meredith Whitaker and Kate Crawford of the AI Now Institute, which is a part of New York University or associated with it. This is not as much a paper as it is a report, kind of summarizing current literature and also kind of an opinion piece slash recommendation giving document. Yes, so we'll dive into it. As you can see from the index, it's quite a long report and we don't have time to go into all of it. Actually, we don't have time to go into most of it. I just hope to kind of point out what the main arguments and themes are in the report, kind of what it's trying to say, pick out some interesting things and summarize it to the best of my ability. Also give a little critique. So let me actually go ahead and try to state the kind of core argument that the report is trying to make, because it's not really clear from reading it and you have to kind of read the whole thing and then kind of becomes clear what the argument is, I feel, though they somehow stated in the introduction numerous times in various ways. So I might just be not as attentive reader at first time. But all right, so here's the argument and I really hope I'm representing this correctly. We have a problem currently that sometimes AI systems can exhibit what we usually call bias. And we don't mean mathematical bias, like bias variance tradeoff. We mean bias in a societal sense, let's say bias against certain types of people where they shouldn't exist. So for example, let me draw an AI system and I'll just draw a little computer screen with a little light bulb. All right. So this is because it's smart, this is an AI system and the AI system and they give numerous examples. One example they give us for is like face recognition algorithm that is much more accurate on faces of white males, as opposed to darker skinned females. So let me draw like two curves to represent these distributions are unequal. And so the AI system exhibits some bias with respect to some kinds of people with an especially protected attributes. And in this report, they focus mainly on gender and race. So that's what we're going to talk about. The second thing they observe, so this observation one, the second thing they observe is, I'm going to draw some generic people here that represent the workforce of AI. So the AI workforce is classified as all the people that work on AI, be that university researchers or within companies building AI products or deploying them. So this is the workforce and they observe that there is an unequal distribution among the AI workforce. So this distribution, I'm also going to do this for unequal distribution. There's an unequal distribution in the AI workforce, most notably, it's predominantly males who work on AI. And also white people are overrepresented compared to the world population at large. So that's kind of the two observations they make. And now what they claim is that the unequal representation in the workforce is causing the bias in the AI systems. So they're basically saying these AI systems are biased because that the workforce is unequally distributed. And also they claim in a less powerful sense, I feel, but they claim there is a loop that this then leads back that because there is bias in the AI system, that again leads to an unequal, more unequal distribution of the workforce. 
So the core argument really is, as they set out to do, like in the introduction, and also claim that they have done in the conclusion, is to demonstrate these two directions here in a causal way. So the systems are biased because there is an unequal representation in the workforce and that feeds back. So the argument is that if you want to fix the bias here, if you want to fix that, then you will have to fix it via making the workforce more what they call diverse, so less unilaterally distributed towards white males. That's kind of the final conclusion. If you read their report and the recommendations, that's mainly what they're going for. Yeah, so my opinion, or in my opinion, having read the report a couple of times, is that as I see it, they really don't demonstrate these links. So they give examples of this and they give examples of this. They show that the workforce is unequally distributed. They show that AI systems can exhibit such bias, but they never actually show these links in my opinion. They don't show this. So if you make the claim that in order to fix the bias in AI systems, you must fix the unequal representation in the workforce, I would need an argument that says because there is unequal representation, therefore A, therefore B, therefore C, therefore bias, like an actual argument to follow that says because of this, that, because of that, that, and so on. It's just not there. They simply show parallels. They simply show that these two things exist and they just list example after example of that. I don't think they make this argument. But I think, also the other direction, they don't really make this argument. Except in one case, where if you give them benefit of the doubt. What I also think is that it appears like the article, if you read it, and I encourage you to read it if you have some time, it makes a lot of sense if you have already accepted this conclusion. Like if you've already accepted this, then it's like, oh yeah, because I feel this is just a text where the confirmation bias is so high, just the way it's written, that it must make a lot of sense to someone who's already kind of in on this conclusion. But to someone who isn't sold yet, like myself, I am just not finding this convincing at all. The second thing is that it very much feels like this isn't like a discovery or something. But someone actually set out with the goal to address this here with the goal of I want companies to hire more of these people or certain kinds of people or to become more diverse or to promote more of a certain type of people. And now I'm going to find reasons for this. And the reason is like, oh, look at look at this bias here. This is caused. This is caused by this other thing. And therefore we must fix this other thing. It very much feels like someone setting out with already the conclusion in mind rather than this being an honest investigation. But yeah, I mean, read it for yourself. I can't prove the absence of an argument by not reading every single line. And I can't read every single line because it'll just get very long and boring. But read it yourself. And I think I'm pretty I'm pretty I've read it numerous times with really an open mind to be convinced that there is an argument in there. But I don't think there is or I don't think there is a very strong argument for this. All right. Let this first part here is more or less a summary. So research findings is more or less a summary. And we'll get to these things as they are important. 
Then they state recommendations right at the beginning. So actually, you'd have to read the article first. This is kind of more of an abstract section. But since it's right here, we'll kind of jump right into it. So these are recommendations and I've claimed they don't really show a connection. But they actually just show examples, examples of this and examples of this and parallel them. And this is reflected in like every single section, including here in the recommendations. They have recommendations for improving workplace diversity. And they have recommendations for addressing bias and discrimination in AI systems. Right. So all right, in my case, if you make this argument, I would I would feel you also make recommendations for breaking these links. But or argue why they can't be broken. But all right, let's jump into some of them. And it is really a mixed bag here, really. So some recommendations I'm really in favor of just from from the go not even you don't even need the article for those here. Discrimination, harassment and discrimination, transparency reports, including number of claims over time, the types of claims submitted and actions taken. So it's known that especially in these larger companies, sexual harassment claims often go down in either bureaucracy or are kind of hushed under the table or something like this. What you have to recognize is that a human resource department of a large company isn't there to serve the human resources. It's there to serve the company providing human resources. That's why a sexual harassment claim to an HR department is just a potential lawsuit. And that's why they don't want to take it seriously except for it must go away really quickly. So I think to kind of force companies or to ask companies to be more transparent, to take more seriously these the accusations of sexual harassment and assault and also discrimination is a very valuable goal. And I fully, fully support this. Also the here commit to transparency around hiring practices, especially hiring regarding how candidates are leveled, compensated and promoted. But also the larger the company gets, the less transparent this process usually becomes or the more bureaucratic, the more people are able to game it and so on and distort it. So I feel it's always good to be transparent around, okay, this person provides this much value to the company, therefore they should be compensated according to that or at least be transparent about it. So these are kind of recommendations I like. Then recommendations that really go into a different direction is something like this here, change hiring practices to maximize diversity. And this is kind of reflect, I'm not going to go on this reflected in other points, increase the number of people of color, women and other underrepresented groups at senior leadership levels of AI companies across all departments. So these things, they are usually within like company diversity goals and so on, doesn't really say how to do it. But then the I mean, as such, they're not really recommendations yet. They're more like goals. But here recommendation seven, I think is the the crucial one, ensure executive incentive structures are tied to increases in hiring and retention of underrepresented groups. So this is it's a bit of coded language. But here they talk about executive incentive structure tied to hiring and retention of underrepresented groups. 
This basically means if you are a manager or someone in charge of hiring or promoting, and you hire or promote a underrepresented person, and since they're talking about gender and race here, if you that means if you hire or promote a person of color or a woman, in this case, you will be compensated more. So at the end of the year, you'll somehow have more money, like more bonuses or more base comp or more equity or something like you'll get more money. So this, this recommendation is a direct call to hire based on race and gender. So this, this is a direct call to racist and sexist hiring basically to discriminate people according to their skin color and according to their gender, which I mean, how, how is this okay with anyone? Like how can anyone how are people even able to state this and in like a high profile report like this and get away with it and not have people criticize them, this directly calls for people to be treated according to their gender and race. And probably as directly as you can go without getting into actual legal trouble. But yeah, I'm really, really against such such practices. I mean, yeah, that's I just I just don't know how this how this can ever how this can ever be thought of as a good thing by anyone. All right, so, well, yeah, in my mind, this recommendation, and this recommendation kind of are counter to each other. Because if if I commit to transparency, how people are okay now I can, I can transparently commit to to be racist, I guess. But if I say, okay, I'm going to come and promote people based on how much value they provide to the company, then yeah, I'd much rather have that than saying I'm going to come and promote people based on their skin color. Alright, so let's actually jump into the report. I'm not gonna these recommendations for addressing bias and discrimination in systems this these are fairly general and common. So as well, as I said, we'll jump most of the things in the report. So introduction. So they start out with there is a diversity crisis in the AI industry. This they give like some numbers like 15% of AI research staff and 10% at Google, so 15% of Facebook are women. So these are some kind of fairly known statistics about how the AI field is kind of gender and race skewed. Currently, so they say they claim in bold the diversity problem is not just about women. It's about gender, race, and most fundamentally about power. It affects how companies work, what products get built, who they're designed to serve, and who benefits from their development. So this, I find this, this, this word power and this notion of power, a lot in this report, it appears again and again and again in in like power dynamics and power dynamics among groups. It's like a worldview, it paints like a worldview, where these different gender and race groups kind of struggle against each other to gain power over another. And whoever's in power will try to remain in power in alliance with their gender and race group and try to keep the other groups down. I'm not sure that's the correct view of the world. In my mind, the world is comprised of individual people that want to achieve something for themselves and they would like to prop themselves up. Whereas in this worldview, it's like, I'm going to use the power of my group to keep other groups down. I don't know which worldview you subscribe to, but I find the world is comprised of individuals. Yeah, and this is not discrediting that some people have it harder because of their gender or race. 
But to see the entire world as a power struggle between these groups, to me, it's, it's, yeah, and I'm not going to point out everywhere it appears, this power wording, but it appears a lot and it's really shapes how the report reads. You have to, you have to kind of remember, if you're a white male, and currently, the field is comprised of 90% white males, you, if you have like 10, like 10 hours, let's say you have to have 10 hours to do something, right, you can either choose to put down some other groups, like put down groups that you're not part of, or you can choose to invest these 10 hours in putting up yourself, you, right. So if, if I, like I profit, if I'm a white male, I profit minimally from keeping the other groups down because guess what, I still have to compete with the like 1 billion other white males there are. It's not going to help me to keep down anyone else, and especially, like it's, it's moronic, like who does that, who like has alliance, except most fringe people, like to their race or gender, rather than to the people they admire and respect and like to work with. So I'm going to, if I have like 10 hours today, I'm going to rather spend this in propping up myself compared to everyone else, and I don't care what gender or race they are. And so that to me, that's a much more accurate or, I don't know, plausible worldview. But just be aware that this report really takes on the language of kind of groups and power between groups and groups trying to, you know, kind of gain power and keep in, keep power and keep others from having power. All right, so say, to date, the diversity problems of the industry and the issues of bias in the systems it builds have tended to be considered separately. We suggest that these are two versions of the same problem. Issues of discrimination in the workforce and in system buildings are deeply intertwined. Challenge, and moreover, tackling the challenges of bias within technical systems requires addressing workforce diversity and vice versa. So the, I think this, this here actually is like how I described the argument and they kind of restated multiple times in a bit different way. But I think this is the core. And I really think I'm not misrepresenting the article here in that this is what they are setting out to do. They're setting out to say, okay, the diversity, the kind of unequal representation in the workforce and the bias in some AI systems are causally linked to each other and tackling one requires tackling the other. So yeah, if I'm misrepresenting them, let me know, but I really think I'm accurately representing their argument. So what they, what they do, as I said, is they give examples of one and of the other and also they really, they're really on kind of discrediting the kind of issues to solve problems of bias in a different way. So they point a little bit to this here in the introduction. They say in the face of growing evidence, the AI research community and the industry producing our products have begun addressing the problem of bias by building on a body of work of fairness, accountability and transparency. So fairness, accountability and transparency research concerns these issues. For one is research showing that some products are unfair or untransparent and so on. 
On the other hand, it's trying to devise algorithms that are more fair according to some notions or more accountable and transparent, which means that the algorithm can kind of say why it made a certain decision rather than it being a deep learning system that you don't really have an insight. These fields are active fields of research, definitely very interesting to look into. So but they, they kind of, it is not already here, but they say, yeah, we have adjusting AI systems that produce a result deemed fair by one of various mathematical definitions. You can already see in the language here, they don't really like this research and they are trying in this report to kind of discredit it or at least claim that it doesn't solve the whole problem because their point is, of course, you have to address this diversity issue in the workforce in order to fix the problems. So to this, I just want to say no, like if you can, I mean, you can criticize the fairness and accountability and transparency research field in that they haven't solved the problem fully yet. But in principle, if I have an algorithm, if I'm being delivered an algorithm, right, and the fairness literature has been applied to that algorithm and someone tells me, I guarantee you here is a proof, the algorithm is fair, right, then I really don't care who made that algorithm. As long as it's fair, the problem is fixed. If the bias is gone, the problem is fixed. And I don't care who fix it. I don't care if the person who fixed it is black or white or purple. Then the problem is fixed. And they, they really have to, they really try to just make the counter argument here is that no, that's it's not enough. But I claim yes, it, if you can actually solve the fairness problem, technically, then you have solved the fairness problem. Yeah, the only thing you can do is claim that it is not good enough yet, but not that it's fun to they kind of have to make the argument that it's fundamentally flawed approach. And I don't think they succeed in doing that here. Um, yeah, so they go on to say, we should expand to consider not only how I tools can be biased technically, but how they're shaped by the environments in which you're built in and the people that built them. Again, this this focus like who builds the AI system, I don't care, I care what it does, right? As much as if, if I hear an argument for or against something, I don't care who makes the argument, right? I care what the argument says. This is, it's like an ad hominem attack for an entire community. That's kind of how this this article, this report shows, or is appears to me. So they say, currently, large scale AI systems are developed almost exclusively in a handful of technology companies and a small set of elite university laboratories spaces that in the West tend to be extremely white, affluent, technically oriented and male. So yeah, their their problem, that's their fundamental problem here that these these spaces are skewed in one direction. Interestingly enough, their problem is not so much that it's that they're all in the same place, right? That they all live like 20 miles from each other in around San Francisco. That's that seems to be not a problem at all, as long as we get to like enough people of color and women into these 20 miles. But yeah, so that that's pointing out the the problem here or the yeah, kind of issue they have. All right, so they go on. 
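The "various mathematical definitions" of fairness mentioned here include criteria such as demographic parity, which asks that the rate of positive predictions be roughly independent of the protected attribute. As a rough illustration of what checking one such definition can look like, here is a small Python sketch; the data and all names are made up for illustration and are not taken from the report.

```python
def positive_rate(predictions, groups, group):
    picks = [p for p, g in zip(predictions, groups) if g == group]
    return sum(picks) / len(picks)

def demographic_parity_gap(predictions, groups):
    rates = {g: positive_rate(predictions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values()), rates

preds  = [1, 0, 1, 1, 0, 1, 0, 0]                  # 1 = model says "hire" / "approve"
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]  # made-up protected attribute
gap, rates = demographic_parity_gap(preds, groups)
print(rates, gap)  # e.g. {'a': 0.75, 'b': 0.25} and a gap of 0.5, i.e. far from parity
# Under this definition a system would count as fair if the gap stayed below a small tolerance.
```

Demographic parity is only one of several common definitions (equalized odds and within-group calibration are others), and it is known that they generally cannot all be satisfied at once, which is part of what the fairness literature studies.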
I just want to highlight this again: they say that both within the spaces where AI is being created and in the logic of how AI systems are designed, the costs of bias, harassment and discrimination are borne by the same people: gender minorities, people of color, and other underrepresented groups. Similarly, the benefits of such systems, from profit to efficiency, accrue primarily to those already in positions of power, who again tend to be white, educated and male. This, they say, points to a systematic relationship between patterns of exclusion within the field of AI and the industry driving its production on the one hand, and the biases that manifest in the logics and applications of the technologies on the other. They try to make this connection by saying that the costs and the benefits of these two things fall on overlapping groups of people. Again, it's just a parallel, and I don't even think it's true; they end up arguing against themselves later. They also repeat that this takes much more than technically driven problem solving: their research requires looking at gender and race as categories within which humans think; in short, in studies of discriminatory systems we need to ask who is harmed, who benefits, and who gets to decide. So: who bears the cost, who reaps the benefit, and who has the power. And again: we seek to understand how AI disadvantages some, and we also consider how it works to the advantage of others. Keep that in mind; that's the lens through which they analyze everything, one that acknowledges power relationships and centers equity and justice. That's the bigger picture they want to see, so keep that in mind. They then go into a section called "Which humans are in the loop? How workforces and AI systems interact." From the title you'd think: okay, here is where they make the argument. They start by listing examples of how AI systems can be discriminatory. The first example is Amazon, which had developed an experimental hiring tool to help rank job candidates. By learning from its past hiring preferences, Amazon hoped the resume-scanning tool would efficiently identify qualified applicants by comparing their applications to previous hires. The system quickly began to downgrade resumes from candidates who attended all-women's colleges, along with any resumes that included the word "women's". After uncovering this bias, Amazon engineers tried to fix the problem by directing the system to treat these terms in a neutral manner. The company eventually abandoned the tool when they were unable to ensure that the algorithm would not be biased against women. Gender-based discrimination, the report says, was built too deeply within the system, and within Amazon's past hiring practices, to be uprooted using a purely technical approach. The way this is written I find quite dishonest, so let's analyze what happened. Their final claim is that gender-based discrimination was built too deeply within the system to be uprooted using a purely technical approach; that is one of their arguments: technical approaches don't help, because the Amazon engineers tried to fix the problem.
But the engineers were unable to ensure that the algorithm would not be biased against women. If you read this, I really get the impression that's not what happened. What most probably happened is this: Amazon built the tool and fed in its past hires, and we know about dataset bias, bias inherent in the data. If your data set is skewed, the AI tends to pick up on the skew and become skewed itself. I would actually argue that most or all of the examples stated here are examples of such biased data sets: the cause of the bias is the data the systems are trained on, not the person who ran the code, built the training algorithm, or built the deployment. But it doesn't matter: you're Amazon, you built this tool, and you realize it discriminates against people who have "women's" on their CV. That is PR-wise pretty bad, so you tell your engineers to fix the problem. The engineers go and fix it, they come back and say, okay, we fixed the problem. Then you ask: engineers, can you guarantee me that the algorithm is not biased against women? Because if even the slightest bias exists, if one journalist finds one example where a resume is down-ranked because it contains the word "women's", then we are screwed. And the engineers will say: no, we can't guarantee that, it's a deep learning system, we can't give you a proof that it's not biased. If you're a smart executive, at that point you scrap the tool, because the potential PR downside is huge. Probably they also realized the tool isn't that much better than their recruiters doing their job, since their recruiters might actually be good and have been doing this for a while. So the fact that this tool was scrapped is probably much more the result of a potential PR disaster. But independent of that, to say "gender-based discrimination was built too deeply within the system to be uprooted using a purely technical approach" is just an attempt to discredit the technical way of going about solving this problem. I'm pretty sure that if someone comes to me and says, here is this tool, and I can mathematically prove to you that it's not biased, then the problem is solved. And I really don't see how the person training the algorithm, or the person researching such an algorithm, has any influence over how the algorithm turns out, because they're not the ones making the data set; or if they are, then they can make a better data set. Also, if any person comes along and makes a better data set, that fixes the problem, and it doesn't matter what skin color that person has. So this link is just not demonstrated here, or anywhere in the report. But this Amazon example is the closest the report comes to making that point. Earlier I drew this workforce-to-AI-bias diagram; here the AI system is used for hiring the workforce, so at least one could claim this direction of the link is somewhat demonstrated. It's a weak case, I would argue, but it's the closest they come.
And even then, to go in this direction you have to argue that the workforce somehow makes the AI system biased; no, the workforce influences the data set. If you train a hiring AI, how do you train it? Ideally on performance: an employee has a performance over time, and the AI system looks at that performance. So even if the system is initially biased because it learns from the past recruiters, it would eventually learn that always passing over these women costs it workforce performance, and correct for that. If you train the AI system on a good metric, the problem tends to even itself out. So yes, this could be considered one point in their argument, but I think it's a very weak point, and it only works because this particular AI system happens to be used for hiring. The much larger point they want to make is that general bias in AI systems contributes to workforce imbalance, and for that you'd have to argue that AI systems influence society at large and that this in turn skews the workforce. That's just not strong enough, in my opinion, and the other direction isn't strong either. And the examples only get weaker from here on. They go on to say this is just one of many examples showing how the functional logics of a given technology echo the gender and racial dynamics of the industry that produced it. That's the claim they're making, to "echo" the dynamics, and actually they're making a stronger, causal claim. They give the other example of Amazon's Rekognition facial analysis service, which had previously demonstrated gender and racial biases worse than those of comparable tools: it failed to detect darker-skinned women while being most proficient at detecting lighter-skinned men. They come back to this example later, where they basically also state that yes, this is an issue of the data set, the data set containing many more white men. But then they have to make the turnaround argument and say: well, the data set is a reflection of society, and part of society is the workforce. Again, this argument only works if you already believe the conclusion; otherwise there's no solid argument there. What they do next is say: Amazon's initial response to such criticism has been to try and discredit the research behind it. Let's discuss this. Amazon, being the accused here, a multi-billion-dollar company facing criticism that is PR-wise very bad for it, tried to discredit the research. It's understandable that this could be dishonest on Amazon's side; they're getting attacked, a bit like tobacco companies trying to discredit smoking research. But still, that doesn't automatically mean the criticism of the research is correct. It could actually be bad research. You have to go and look at what Amazon is saying, what the research really does, and whether Amazon is right or wrong. I'm completely open to Amazon being wrong here, but you still have to go and look.
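The disparities mentioned above were found by evaluating the same model separately on each subgroup. As a rough sketch of that kind of disaggregated evaluation (the arrays and group names below are hypothetical toy data, not the actual Gender Shades or Rekognition results):

```python
import numpy as np

def error_rate_by_group(y_true, y_pred, groups):
    """Report classification error separately for each subgroup."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        rates[str(g)] = float(np.mean(y_true[mask] != y_pred[mask]))
    return rates

# Hypothetical labels and predictions for a gender classifier, split by skin type.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0])
groups = np.array(["darker"] * 6 + ["lighter"] * 6)

print(error_rate_by_group(y_true, y_pred, groups))
# here: darker about 0.33, lighter about 0.17, i.e. the kind of gap such audits report
```

The point of reporting per-group rates rather than one aggregate accuracy is exactly that an overall number can hide a large gap between subgroups.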
And the citation here, which I followed, isn't to Amazon's response. It's to a Medium article, and the Medium article doesn't even include Amazon's response; I've looked, though maybe I missed it. It doesn't link Amazon's response either; maybe it links something that links something that includes it in some way. Basically, the Medium article only states that Amazon has been denying this or has been critical of it. If you write a sentence like "Amazon's initial response to such criticism has been to try and discredit the research behind it," I at least expect the citation to lead me to Amazon's response so I can verify what they're saying. I'm willing to chalk that up to sloppiness rather than malice. But then they go on: this reaction is evidence of the wider problem; the research was conducted by two well-regarded AI researchers who are women of color; by attempting to publicly discredit their expertise and research methods, Amazon is reinforcing the same kinds of prejudice and erasure that the research critiques. Here they go straight to the identity of the researchers, playing the race card outright. This is maximally dishonest, unless Amazon had actually said something like "these women of color clearly have no idea what they're doing because they're women of color." Otherwise it's coded language for one of two things: either you're not allowed to criticize people of color because they're a minority, or Amazon is racist and only dismisses the researchers because they're women of color. Both readings are abhorrent, and both are implied dishonestly here. Again, I'm perfectly willing to accept that Amazon's critique of this research is wrong and not well-intentioned, since they're the ones being attacked, but you still have to examine the critique rather than say they're shooting against women of color and therefore their counterargument is irrelevant or even racist. I find this dishonest; I don't know about you. Moving on. They then state a number of examples of bias and discrimination in the workforce, often mixing the gender and race imbalance in the workforce with things like sexual harassment not being taken seriously by companies, and with gender and race pay gaps. I'm open to accepting that these things exist and are even intertwined; I just want to tell you what's happening in the parts we're skipping: it's a mixture of these things. They say these issues are systemic: there's a close relationship between workplaces with discriminatory practices and discriminatory tools, a feedback loop that is shaping the industry and its tools. So again, I think I've stated or demonstrated enough by now that I'm representing their argument as they intended it: that there is a causal link, a loop, between these two things. And they shoot against the fairness literature: from this perspective, locating individual biases within given technical systems and attempting to fix them by tweaking the system becomes an exercise in futility; only by examining discrimination through the lens of social logics, who it benefits, who it harms and how, can we see the workings of these systems in the context of existing power relationships.
So they say these issues aren't technical; fixing these systems technically won't help, if that's the real problem. And I agree: if that causal link actually exists, then technically fixing the system might not solve the whole problem. Although I'm not even sure of that: if you technically fix a system like this, you break the causal link and thereby fix the problem. But again, this rests on the hypothesis they've assumed rather than demonstrated, and they don't demonstrate it anywhere in the article. The next section goes into who makes AI. I don't know about you, but the previous section was titled "how workforces and AI systems interact", and apart from the one case of an AI system being used for hiring the workforce, which as I said is the one instance where there could actually be a causal direction from bias to misrepresentation in the workforce, there isn't really anything in there that shows how these two interact, especially not causally. The section "Who makes AI" is broadly about the gender and race imbalances, the unequal representation, in the workforce. We'll skip the part where they discuss that companies' diversity statistics aren't really accurate, or can be massaged by the companies, which is true; companies will always try to maximize their profits, and even when they publish such a report, critical thinking is in order. The next section is called "The discrimination feedback loop". If in the earlier section you felt "here we get into the meat", then with this title you must feel: okay, now we're actually going to see how this loop works and how the two things are really linked, how one causes the other and vice versa. So let's jump in. They say AI systems increasingly play a role in our social and political institutions, including education, healthcare, hiring and criminal justice; therefore, we need to consider the relationship between the workplace diversity crisis and the problems with bias and discrimination in AI systems. I don't see how the "therefore" follows. If there is a relationship, we need to consider it; whether there is a relationship is exactly the question. Okay, granted, let's consider it. They say fairness, accountability and transparency research is playing an emerging role. What they mean here is the side of that research that shows there is a problem. As I told you, there are two sides: one shows there is a problem in current systems, the other tries to fix it. They are very much fans of the side that shows there is a problem, and they list some of these problems; we've already seen some, and they show more, like Facebook's ad delivery system allowing ads for housing and employment to be shown to users in a discriminatory manner, or a 2019 study that found significant racial bias in a widely used commercial algorithm used to determine whether patients will be enrolled in care management programs. These are just examples of AI systems being biased. Then they say that taking a contextualized view may enable a more extensive account, and by "contextualized view" they mean anything more than just a technical approach to solving these problems.
A more extensive account of bias could then emerge: future work, they say, could examine the politics of system design, study AI systems in situated realities, ask why a system was designed in a particular way, how it was constructed, whose interests it is shaped by, and by which metrics its success or failure is assessed, rather than solely focusing on improving existing data sets or individual algorithms. I agree, we always have to pay attention to these things, especially the metrics by which success or failure is assessed. But a lot of the time this is rather straightforward: the metric, most often, especially in commercial applications, is money. If I have a system that recommends ads to people, shows them personalized ads, I simply want to maximize my revenue: I want to sell someone something, and all I want to know is how likely that person is to buy that thing. So sometimes it's really valuable to consider what capitalism is. The system we're working in is a form of limited capitalism, but mostly capitalism, and capitalism is greedy: all corporations basically want to do is make money. On the other side you have discrimination, meaning actively enforcing an unequal distribution. Sometimes these go hand in hand; sometimes you can make more money by discriminating against a certain group of people, and that's a really bad scenario, that's really where we need to take action. But a lot of the time these two things stand in opposition to each other; draw a little "not compatible" arrow between them. If I want to sell someone something, I maximize my profit by not caring about anything except accurately assessing how likely that person is to buy the thing. If I start discriminating by skin color, saying I don't want a person with that skin color to be able to buy this product, I want to keep them down, then I forgo profit: even though this person would buy the thing, I give that up. So often these things are in direct opposition. Likewise, if I'm in charge of hiring and I don't like people of a certain gender, even though they would be really good employees, I forgo that; I end up paying more for less qualified people just because I'm biased and unjustifiably down-ranking people of the gender I don't like. So you have to ask yourself: are people fundamentally more greedy or more discriminatory? If push comes to shove, would they rather have more money, or would they rather keep their own race and gender group in power? You have to ask this of corporations and of people, and in my experience and view, people are much, much more greedy than they are willing to give up money for discrimination. So if we look at the metrics by which success or failure of AI systems is assessed, I would argue that a lot of the time those metrics are profit incentives.
And especially if we look at data set construction: if a skewed data set makes my AI system biased, that actually loses me money, and the company would profit from building a better data set. So looking at the metrics actually makes a lot of sense to me, and I'm very much in favor of it. I think that by designing accurate metrics and then getting the best possible data to maximize those metrics, you will often actually eliminate such forms of discrimination. Again, there are situations where you don't, and we have to be very cognizant of those. They go into this and say we should also examine more thoroughly how societal discrimination surfaces in data provenance, examining the history and process of data set construction and considering how cultural norms and stereotypes were enumerated and represented at the time of data creation. This is a big issue, yes. Data set construction, and the circumstances at the time of data creation, is a big issue in these systems, and a lot of bias, I would argue most of the bias we've seen here, arises from corrupt data sets, data sets constructed in an already biased way; the AI system trained on them simply replicates that bias. So I think this part is very correct. They give the example that the Labeled Faces in the Wild data set contains over 15,000 images, and only 7% of the images are of black people. This is because the images were gathered from the news media of the early 2000s, which predominantly featured white men in positions of celebrity and power. Exactly: if you train a system on this data set, the system will inherit this bias. This is a classic example of a corrupt data set, and it isn't only about race and gender. If you take pictures from IMDb, and much of the CelebA data set used in all the GAN research is collected from IMDb, you'll have overly pretty faces in there, so your generative model will mostly produce pretty-faced people, since movie stars tend to be a lot prettier than the average human. So the data set construction process is, I think, currently the biggest source of bias in AI. It's interesting that they bring this up here and want to make the point that this happens because of society and power in society, which the data set reflects. But I would argue that if someone makes a data set that doesn't have this bias, the problem is solved, and I don't care who makes the data set. So the link between the workforce and the bias is really broken by an argument like this: as soon as we have a correct, unbiased data set, we can mitigate the bias. And they even go into this themselves. Down here they say these researchers looked at facial recognition systems and assessed what we saw earlier: higher error rates for darker-skinned women than for any other group, lowest error rates for lighter-skinned men. To measure this disparity, the researchers developed a new data set that is more balanced, both in terms of gender and skin color. Good. Make a balanced data set to actually measure and train on, and the problem is addressed, and I don't care at all what race or gender these researchers are. Well done: good people made a good data set.
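Since the claim above is that a more balanced data set largely fixes this kind of bias, here is a minimal sketch of the two steps involved: auditing the group composition of an image data set and downsampling it toward balance. The metadata layout and the attribute name are assumptions for illustration only, not the actual LFW files or the balanced benchmark mentioned in the report.

```python
from collections import Counter
import random

def composition(samples, attribute):
    """Fraction of the data set falling into each value of the given attribute."""
    counts = Counter(s[attribute] for s in samples)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def balanced_subsample(samples, attribute, seed=0):
    """Downsample so every attribute value appears equally often."""
    rng = random.Random(seed)
    by_group = {}
    for s in samples:
        by_group.setdefault(s[attribute], []).append(s)
    n = min(len(v) for v in by_group.values())
    balanced = []
    for group_samples in by_group.values():
        balanced.extend(rng.sample(group_samples, n))
    return balanced

# Hypothetical metadata: 93% of images from one group, 7% from another.
data = [{"path": f"img_{i}.jpg", "skin_type": "lighter"} for i in range(930)]
data += [{"path": f"img_{i}.jpg", "skin_type": "darker"} for i in range(930, 1000)]

print(composition(data, "skin_type"))                                    # {'lighter': 0.93, 'darker': 0.07}
print(composition(balanced_subsample(data, "skin_type"), "skin_type"))   # roughly 50/50 after downsampling
```

Downsampling throws data away; in practice one would rather collect more images for the underrepresented groups or reweight the loss, but the audit step is the same either way.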
So with a better data set the problem gets solved. Why would you ever care what these people look like, as long as they do good work? To me this actually breaks their own argument; I don't know why they included it. It is obvious here that if you fix the data set, you can fix the recognition system, so suggesting a necessary link to the workforce doesn't follow. All right, we'll jump a couple more paragraphs, except where they shoot again against the technical approach. They say that a focus on fixing technical systems in isolation, without examining their broader context of use and the power dynamics that attend these issues, is not only limited in its intervention, it can actively cause harm. So if you fix the problem in a technical manner, they argue, it can actively cause harm. The example they give is that facial and image recognition systems are often applied in service of police surveillance, which disproportionately harms poor people and communities of color. There's a quote from a researcher: is this not social progress, to make black people equally visible to software that will inevitably be further weaponized against us? We are considered criminal and more surveillable by orders of magnitude; whatever claim to a right of privacy we may have is diminished by a state that believes we must always be watched and seen. So this is an example where improving facial recognition for black people makes the police better at surveilling them, which is true, and it is an ethical problem that the police are able to use facial recognition systems to surveil people. That's a massive privacy problem, a massive problem of how far the state is allowed to overreach, and it's a discussion in itself. But here, remember that at the very beginning I asked you to keep in mind this notion that we always have to look at who benefits from the way an AI system is constructed, who is harmed by it, who benefits from how the metrics are shaped, and so on. In this case we have a perfect example where, if the face recognition system is very inaccurate for black people's faces, that actually helps them in this societal context. By the logic of this report, that must mean that the bias somehow works in their favor, so the biased system is good, and by fixing it you actually make things worse. They say it themselves: it can actively cause harm. So I think this argues against their earlier position that we always have to look at who benefits from the system. Here, if the face recognition system can't recognize you, you actually benefit. So I don't think that argument works, except if you only apply it when it suits you. All right, we're going to jump a couple of sections. The core thing here was the feedback loop, and again, the feedback loop isn't demonstrated at all; there are just examples of systems that are biased because of data sets that are biased, but no demonstration of how the workforce causes this. In fact, take the previous argument: the workforce is supposedly overwhelmingly white, and it makes a face recognition system that performs poorly for darker-skinned people, which in the context of police surveillance actually helps the darker-skinned people compared to the lighter-skinned people.
So that is an exact counterexample to the argument that misrepresentation in the workforce leads to the biases in the system, if we interpret it through the lens of who it costs and who it benefits. All right. The next section is "Corporate diversity: beyond the pipeline problem", which seemed an odd inclusion when I first read it, but it makes sense once you know what these authors set out to do: to argue that we must fix the workforce, that we must hire and promote more people of color and more women. And they very much have a problem with the pipeline argument. The pipeline argument is the following. If you consider people's educational or career paths, you start with 100% of people at the beginning, and most of them go through school; then some pursue higher education and some drop out, so the volume shrinks; then very few go into computer science, and even fewer into AI. What you end up with is a tiny sliver of people who actually go into AI. This is called a pipeline, and there are various junctions, where you enter higher education, where you choose your major, where you choose a subfield of computer science, at which the volume of people drops significantly. Now compare this, not across all of society, but between men and women: both go to high school, then university, then a few into CS, even fewer into AI. What you find, comparatively, is that a larger fraction of men than of women end up in the AI field. Over time it looks like this: at the beginning you have roughly a 50/50 men-to-women distribution in society (I think slightly more boys are born, but I could be wrong about that). Through high school, let's assume it stays roughly equal; it depends on the country. At university there are actually slightly more women. Then in computer science, looking at it relatively (which is why I normalize at 100%; otherwise all of these numbers would shrink together), you have many more men than women. And for who chooses AI within computer science, I don't know of statistics specifically, so I'll assume the ratio stays the same. So in the AI field you have many more men than women, presumably because many more men than women chose computer science, or any technical field, as their major. That is the so-called pipeline argument. So where does AI companies' hiring come in?
AI companies come in at this point: they hire after your university degree, presumably; there are exceptions, but let's say they hire after the degree. Therefore they basically have to choose from this distribution. And if they just say, okay, we'll take the top 10% of people, we don't care what gender they are, then the hires will end up with the same distribution as the graduates; a toy calculation of this follows below. So a company hiring from, say, an 80/20 pool without looking at gender will end up with an 80/20 workforce. That's the companies' pipeline argument. And the authors don't like the pipeline argument, because it says the problem is somewhere earlier in the pipeline: the problem isn't that companies hire wrongly, the problem isn't that companies deselect people; the problem is upstream. And because they want to argue that companies should hire in a different way, they can't have that, so they argue against it. Now, arguing against it would actually be very easy if the argument were wrong. If the pipeline argument were wrong, all you would have to do is say: hey companies, look, in your company you have an 80/20 men-to-women distribution, which is pretty unequal, and among university graduates the pool you hire from is actually 50/50; so obviously you're engaged in discriminatory hiring, because there's no reason your hiring practices should cause this inequality, therefore your hiring practices are the problem and you should definitely hire more women and people of color and more of the minorities. But that's not the case. How do I know? Because if it were the case, they would simply state it. Definitely in this report: if you could show with numbers that the pipeline argument is wrong, they would absolutely do it. Instead they have to go around it and ramble about it for several pages, which we'll mostly skip, mainly because it is the case that these companies hire from a pool of unequally represented people. The only argument you can make is that if you equalized the workforce, maybe that would fix the upstream problem: the argument is often made that if young girls choosing their majors have no one to look up to, no strong women in CEO roles, they will conclude that the climate is not for women and elect not to go into these fields. That's a valid argument; I'm completely open to it. But it's the only argument you can make, and even if you determine that as the cause, I would still not support racist and sexist hiring practices. Do something else: make clear that the environment can be changed, or change the environment. If it really is an anti-women environment, change that; if it's merely perceived as such, change the perception; but do not engage in discriminatory hiring practices, because someone always loses out unfairly under those practices, and that is something I'm not willing to support.
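The arithmetic behind the pipeline point above is easy to check with a toy simulation: if the applicant pool is 80/20 and both groups have the same score distribution, then hiring the top 10% without looking at group membership reproduces roughly the same 80/20 split among hires. All numbers and distributions here are made up purely to illustrate the argument, not real hiring or graduation data.

```python
import random

random.seed(0)

# Toy applicant pool: 80% group A, 20% group B, identical score distributions.
applicants = [("A", random.gauss(0, 1)) for _ in range(8000)]
applicants += [("B", random.gauss(0, 1)) for _ in range(2000)]

# Group-blind hiring: take the top 10% by score only.
hires = sorted(applicants, key=lambda x: x[1], reverse=True)[:1000]

share_b = sum(1 for g, _ in hires if g == "B") / len(hires)
print(f"share of group B among hires: {share_b:.2f}")  # fluctuates around 0.20
```

The counterclaim would have to show that the hired distribution is meaningfully more skewed than the applicant pool, which is exactly the comparison the report does not present.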
I don't think people should engage in that, and that's also why it's illegal. So let's look at a few specific points. They go over these pipeline studies and say the term is used in industry to reference the absence of diverse candidates in the hiring pool and to justify the inability of large firms to achieve diversity due to scarcity. So they basically agree with the definition I stated. Companies that are challenged on their lack of diversity frequently cite pipeline studies as proof of the persistent challenge of finding enough women and people of color to hire. Yes. But, they say, the evidence suggests otherwise. For example, in 2016 Facebook's chief diversity officer wrote that it has become clear that, at the most fundamental level, appropriate representation in technology or any other industry will depend upon more people having the opportunity to gain necessary skills through the public education system. Well, yes, that's something I would agree with, and it clearly addresses the region of the pipeline where the actual problem is happening, so I'd say that's a very good statement from Facebook's chief diversity officer. They continue: but as the Center for Investigative Reporting's study of tech company diversity data found, 91 large tech companies headquartered in Silicon Valley managed to hire higher percentages of black, Latino and multiracial employees than Facebook that year. Well, just because other companies employ racist and sexist hiring to improve their diversity numbers doesn't mean Facebook has to do the same; just because other companies do it doesn't mean it's a good thing to do or the right way to go about it. Facebook is simply saying: if we want to hire without being racist or sexist, if we want to hire the best people, then more of the best people have to be in the pipeline, meaning more people have to get access to educational opportunities so we can then hire them. Whereas these other companies presumably make a big effort to say: even if you are not as educated or as qualified as this other person, we'll hire you because of your skin color. I don't think that's evidence in favor of what the report is claiming, and I don't think it's evidence that the pipeline argument is invalid. They then go into core themes in pipeline research and give an overview of it. Sometimes the pipeline research examines why, for example, women don't choose to go into computer science as much; sometimes it focuses on their perception of the field: their perception of its stereotypes, of its culture and whether it suits them, of how qualified they are for it and whether that perception is true or false, and so on. This research examines a whole variety of things, and it's actually very interesting to read through. I want to point out this passage: other studies suggest that gender is correlated with a person's motivations for pursuing a career in the field.
Women, and particularly women from low socioeconomic status or minority backgrounds, are more likely to see computing as a versatile profession that provides an opportunity for secure employment, higher pay, and better social standing. Moreover, their interests go beyond the technical aspects of computing, focusing instead on the purpose and application of software. However, such interests are often de-emphasized in computer science curricula, which prize technical skill and its applicability to industrial settings above all else. I find this really interesting, because it's basically saying that women, on average, have different interests than men. Saying that in this context is almost heresy; people will come after you if you suggest something like this, and yet here they state it. Remember this for later. It's really funny that they say the interests could be different for women than for men, and that we might have to adjust our curriculum to be more suited to these different interests. As I said, usually this is forbidden to say. All right, they go on to the limitations of pipeline research. These are fairly common limitations of social science studies in general, which I won't go into much. Again they state that we don't only have to examine this; they basically say the problem is actually the culture, and the problem is actually the perpetrators. I don't remember exactly where this is stated, but they again say we have to examine who benefits from its present construction, who is underserved within the current tech ecology, how these dynamics might be untangled, and so on. So again, these power relationships between the different groups, which I don't agree is in large part what's happening. They say it's worth considering the scope of these studies: by and large, the recommendations they issue are limited, targeted at the administrators of university computer science programs seeking to broaden the diversity of their student body. Yes, that's exactly where the discrepancy appears to happen. The reason they have a problem with these studies is that the studies focus on the point where the discrepancy arises, whereas they want to claim the focus should be on a different point, namely hiring and promotion in companies. They say: though important (so at least they acknowledge it's an important problem), this is a narrow frame through which to view potential solutions to barriers to inclusion; it does not address the companies that hire computer science students, the peers responsible for promulgating stereotyped views or engaging in hostile behavior, or the broader social conditions that may influence students' success in computer science programs. Actually, the research, including some of the examples they cite, addresses all of this: it often addresses the stereotypes, how peers act, how companies act and hire, and whether people have something to look forward to, and how that influences their decisions.
Again, they say the studies are frequently cited by those within corporate environments to justify their own lack of diversity, as they situate the locus of change outside the corporation itself; as such, pipeline studies are disproportionately emphasized as part of the broader research agenda on diversity and technology. They state that companies use this to get out of responsibility, and of course companies are going to try to use this to get out of responsibility; I agree with at least that much. All right. The last part here is "pipeline dreams: after years of research". Basically they say the pipeline research hasn't borne fruit; it hasn't led to meaningful change in the field even though it's been studied for years. Among the reasons they give: it tends to place the onus for solving Silicon Valley's issues of discrimination on those who are discriminated against, rather than on the perpetrators. I find this word choice really interesting: perpetrators. Again, the group of white men is supposedly trying to put down everyone else; that's the perspective the article takes. And it's not even true: a lot of this research actually says that the reason women, for example, don't choose to go into computer science is the male-dominated culture within these corporations, the perception that it's not a women-friendly environment, the reports of sexual harassment, and so on. So it's not even accurate; but beyond that, I just wanted to point out the choice of the word "perpetrators". I don't know how you arrive at that word; it really shows the worldview of the authors, in my opinion. They go on to say that the pipeline studies haven't been beneficial and companies haven't done much, or haven't been successful, and then they discuss worker-led initiatives, which I'll skip; it's a report of what happened at companies where the workers organized themselves. The last section is "the pushback against diversity". In this section they document and argue against people who have stated counterarguments, mainly to their recommendations: the recommendations being to change hiring, promotion and so on based on race and gender. The pushback is characterized in different ways, so we'll go through this. This is the last section, and I know it's a long video already; if you're still here, like the one person who's still here: hi, I hope you're doing well, keep hydrated. They say it's a critical time: we now see diversity itself being weaponized. This growing awareness, accompanied by demands for inclusion and equity, has led to some change, but there has also been resistance, especially among those implicitly privileged by the status quo. So again, they jump straight to attacking the person. I don't care who makes an argument against me; I want to engage with the content of the argument. But here the first thing they state is that the resistance comes from the people who benefit, basically from the white men: straight to the identity of the person. That's dishonesty right there.
They write that those questioning and even rejecting the idea that racism, misogyny, and harassment are problems within the AI field and the tech industry have appropriated the language of diversity to argue that efforts to improve inclusion are in fact exclusionary, and that addressing the deeper structural challenges posed by racism, sexism and inequity is misguided. And yes, efforts to improve inclusion can definitely be exclusionary. This is the thing: just because you're fixing a problem doesn't mean the method you're using to fix it is justified or itself good. Methods to improve inclusion can be exclusionary, and some that have been proposed are. It depends on the method; it doesn't mean these people are against the goals. It means that a measure like a racially preferential hiring policy, which I can see would lead to more equal representation within the workforce, is itself a bad, exclusionary and discriminating tool. So yes, I'd say it's accurate that such efforts can be exclusionary. They then say: for example, some AI researchers greeted the announcement of the Black in AI workshop at NeurIPS, a leading machine learning conference, by questioning whether the event was necessary, arguing that it would be discriminatory. But can't they question whether the event is necessary? Here I would want an actual discussion: what is the event for, why is it happening, what does it do, and is it discriminatory? It could be; any event can be. Does it discriminate based on race or gender, and does it do so unjustly? I just don't see why questioning this is out of bounds. You could question it and be wrong, but you should be engaged on your argument. Here, merely questioning it is treated as already being on the wrong side of the argument, and I don't agree with that. I don't necessarily agree with the people who questioned the workshop, and I don't have a particular opinion on it, but I do have the opinion that you have to take arguments at their argumentative value, not judge them by who makes them or whether they oppose a particular viewpoint. They continue: such pushback often centers on calls for "cognitive diversity" or "viewpoint diversity", the idea that individual differences in the ways people think and understand the world are distinctions that should be counted alongside, or instead of, other identity categories such as race and gender. Well, isn't that a very reasonable thing to say? Isn't it reasonable that differences in the ways people think and understand the world are distinctions that should be counted alongside identity categories like race and gender? They add: a dozen white men, so long as they were not raised in the same household and don't think identical thoughts, could be considered diverse. I don't know if that's meant sarcastically; it's clearly meant as their counterpoint. But I would actually agree with the statement: a white man growing up in San Francisco, one growing up in rural Idaho, one in Florida, one in Western Europe, one in Russia, and one growing up on the road with his circus parents in Mongolia would definitely be plenty diverse.
They criticize this, but how can you not see that these are valid differences? People are going to think differently independent of how they look; they'll have different thoughts, and it's important to recognize that other people think differently and therefore to include them where it's relevant. The counterargument, which is essentially what the authors are making, is that a dozen people, as long as they don't look the same, could be considered diverse even if they were all raised in the same place, all live in San Francisco and all think the exact same thing. That sounds just as absurd to me as the caricature in the other direction. So here are my thoughts on this. I am not going to pretend that I know what life is like as a woman. I'm absolutely sure that for some areas of life it is valuable to listen to the experience of a woman, or of many women in aggregate, because life is just different as a woman. Life is also different as a black person; I absolutely concede that there are things I can't draw from my life experience because I am not of that skin color, different problems that those people face, and that's why it's important to have that perspective at the table. But I'm also absolutely certain that I have no relation to someone who grew up as a child pop star from the age of twelve, no relation to someone who grew up under a communist regime, no relation to someone who grew up in a Buddhist religious tradition. I just don't, and I don't care how they look: they have different experiences and different bodies of knowledge to draw on. So I don't see why we should draw the line exactly along race and gender. But that is of course what they argue: these arguments, they say, work by centering identity while flattening or ignoring power relationships. They quote the Facebook VP of engineering saying that the ultimate goal is cognitive diversity, and cognitive diversity is correlated with identity diversity; that means it's not just about getting women in tech, it's about broad voices, broad representation. This is exactly what I would say too: the reason we want a woman or a black person at the table is that they have different knowledge, different thoughts arising from their different life experience, that they can bring in. So including these people, or "bodies" as the report puts it, is itself about cognitive diversity. But the authors see this from a different angle; they really see it in terms of power relationships between race and gender groups, and their arguments don't make sense unless you view them through that lens. To me that lens is a sad way of looking at the world, and I also think it's a very inaccurate and, frankly, dangerous way of looking at the world. They go on: instead of looking at historical patterns of marginalization, calls for cognitive diversity argue that all differences are equal. No, they don't. Calls for cognitive diversity do not argue that all differences are equal.
Everyone is well aware that some people have it harder, well aware that some differences are bigger, worse or better than others. All they're saying is that race and gender shouldn't be the only things to consider, and shouldn't in themselves be considered diversity. Just because someone is of a certain skin color doesn't actually tell you anything about that person. So why not consider people as individuals, look at what their life has been like up to this point and what they could contribute to the discussion at hand, rather than looking at the color of their skin? If the color of their skin played a role in their life, then that would manifest in my suggestion as well. But to look at people only through this group lens is foreign to me, and I feel it's quite dangerous. So again, "argue that all differences are equal": the point where you have to start misrepresenting what the counterargument says is really where you know you're not dealing with a well-intentioned interlocutor. This is really politics now, not well-intentioned argumentation; it's someone trying to achieve a goal, which is why they have to misrepresent the other side. And it only gets worse from here. They say this was recently exemplified in the controversy over Google's appointment of Heritage Foundation president Kay Coles James to its Advanced Technology External Advisory Council; Google's reasoning for the appointment of James was ostensibly to ensure "diversity of thought" by including a conservative viewpoint on the council. So Google has an external technology advisory council and included a conservative on it, and she is, by most metrics, a standard conservative; this is not a far-right neo-Nazi type, as far as I can tell. This is someone whose opinions are similar to those of half the US, and generally, at least in the Western world, roughly half of a country's population tends to be conservative, more or less, with differences between countries. So this is an opinion that a large portion of the population shares, and it would seem suitable to include at least one person of that persuasion on an external advisory council. You don't have to obey her; she hasn't been made king. She simply gets the opportunity to voice a perspective representative of that very large percentage of people. They go on: James is also a black woman, thus adding racial and gender diversity to the panel. So even further: a conservative black woman. But, they write, the pushback following James's inclusion focused on her policy positions, citing specifically her vocal anti-LGBTQ and anti-immigrant views, and highlighted why "cognitive diversity" is a particularly limited lens. The pushback here was very much spearheaded by one of the authors of this article, so this isn't just reporting, and I'll also criticize the pushback itself, since it's effectively argued for in this article and the authors are the same. So here they say she has vocal anti-LGBTQ and anti-immigrant views.
I haven't actually gone and looked specifically at what she has said, but given that she's a standard conservative and has been in public office, I believe under George W. Bush, I have trouble believing that she holds extremely hateful opinions, along the lines of "these people shouldn't exist". Often conservative people have issues with being forced to use certain pronouns, or with which bathrooms people use, and are generally tougher on immigration, especially illegal immigration. These are views that a large part of the population holds, and they are discussions to be had; so including this person would seem a sensible move. But they write: in a letter opposing the appointment, a group of Google workers calling themselves Googlers Against Transphobia and Hate responded to the idea that diversity of thought justified James's addition to the council: this is a weaponization of the language of diversity; by appointing James to the ATEAC, Google elevates and endorses her views, implying that hers is a valid perspective worthy of inclusion in its decision making; this is unacceptable. Again, one of the authors of this report was one of the organizers of that letter, and this is what they're saying: if you don't hold our views, your views are unacceptable, not "a valid perspective worthy of inclusion". What they're basically saying is: don't even talk to this person; even considering her opinion is already wrong. And this about a person who is a black woman. So basically, the authors' idea of diversity is people who look different, people from race and gender groups that don't have much of what they call power right now, as long as they all think exactly as we do. As long as they share our thoughts and don't have dissenting opinions, we want the different-looking people; but don't dare talk to anyone of a different opinion. In my opinion, these authors really live in a bubble, a tiny Silicon Valley or Silicon Valley-influenced space, because they are basically saying that half the people in their greater community, in their country, aren't even worth listening to, that their opinions aren't even worthy of consideration. Well done; might as well discredit them all at once. I'm sure that's going to fly well with these people. Might as well start calling them deplorables and see what they do; maybe they'll return the favor and elect a moron just to stick it in your face. That's what happened. The report continues: the idea of cognitive diversity is mobilized by some in support of the claim that the AI field and the tech industry are already diverse, going as far as to support claims that not including identities like "white" and "male" constitutes discrimination. Yes, it can. If you include every single identity except white and male, that constitutes discrimination. Even if they're in the majority, it still constitutes discrimination; no one can help being born white and male, no one chose to be born like that.
You mostly don't choose the melanin content of your skin; you can modulate it a bit by going out in the sun, which computer science people statistically don't do very often, so there's not much leeway there. So yeah, not including identities like that, while including every other one, can constitute discrimination. True. A July 2017 memo written by James Damore, a software engineer at Google, is illustrative of such pushback. Titled Google's Ideological Echo Chamber and published on an internal mailing list, the memo critiqued the company's diversity policies, arguing that biological differences between men and women, rather than bias and discrimination, help explain gender disparities at the company. I feel you can leave out the "rather than" here; I think the memo simply stated that biological differences can help explain the gender disparities. Damore's stated objective in writing the memo was to make the case that policies designed to achieve equal representation are unfair, divisive and bad for business. Well, some are. Yes, especially the recommendations you've given at the beginning, number seven in particular, are unfair, divisive, and I would also argue bad for business. Supporters of Damore's point of view at times even drew on the rhetoric of the pipeline to make the case that diversity initiatives are in fact discriminatory. They argue, incorrectly, that if there aren't qualified candidates in the pipeline, then hiring those who are unqualified on the basis of identity discriminates against those who are qualified. No, I would say hiring anyone on the basis of identity discriminates, inherently. So again, I think that's the larger argument these people are making, and it is not incorrect, it's very correct. In an update to the memo, Damore himself asserted that he values diversity and inclusion, but that his primary concern was cognitive diversity. He says he values diversity and inclusion, is not denying that sexism exists, and doesn't endorse using stereotypes. And specifically, I've read the memo, and it directly says that these are population-level statistics, that there is more overlap than difference, and that you absolutely can't say anything about an individual by looking at these statistics. That's almost a quote from the memo. So he was very much concerned with considering people as individuals, but he was also basically making the same argument as earlier. Remember, earlier I told you about that one study that found that women's interests might be different and that we might shape the curriculum accordingly. That's basically what Damore said. He said women's interests might be different and we might have to shape the way we work, like change the way we do software engineering, to attract more of them. That was one of his points. So he's saying exactly the same thing, but of course he's a misogynist, because he suggested that this could be partly due to biological differences. And the way he was dragged through the mud is just crazy. And they shoot here very much against this kind of what they call biological determinism; we'll see this very briefly. They say diversity becomes an empty signifier, stripped of the histories and experiences of systemic discrimination, repurposed around ideology rather than bodies. I'd say diversity has nothing inherently to do with bodies as such; I think that's only the case if you are already convinced of this.
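Let me add a tiny numerical aside of my own here, just to make that "more overlap than difference" point concrete. This is not from the report or from the memo; it's a back-of-the-envelope sketch under assumptions I'm choosing myself: I model a trait in two groups as two normal distributions with equal spread, and I pick a few hypothetical standardized mean differences d (the d values are made up for illustration, not taken from any study).

# Minimal sketch (my own illustration, not from the report or the memo):
# how much two equal-variance normal distributions overlap for a
# hypothetical standardized mean difference d.
import math

def normal_cdf(x: float) -> float:
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

for d in (0.2, 0.5, 0.8):  # hypothetical effect sizes, chosen only for the sketch
    # Overlapping coefficient of N(0, 1) and N(d, 1) is 2 * Phi(-|d| / 2).
    overlap = 2.0 * normal_cdf(-abs(d) / 2.0)
    # Probability that a random member of the higher-mean group scores above
    # a random member of the other group is Phi(d / sqrt(2)).
    superiority = normal_cdf(d / math.sqrt(2.0))
    print(f"d = {d:.1f}: overlap ~ {overlap:.0%}, P(random A > random B) ~ {superiority:.0%}")

Even for a fairly large hypothetical d of 0.8, the two distributions still overlap by roughly two thirds, and a randomly picked member of the "higher" group only scores above a randomly picked member of the other group about 71% of the time. That's the sense in which a population-level difference tells you very little about any single individual, which is the point being made there.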
Within hours of the memo's publication, harassment began targeting minority advocates who pushed back against the claims in the memo, with a particular focus on queer and trans workers. That's bad, but I think the pushback against people who voiced support for the memo was also pretty bad, because one of them was fired, as you already stated. Google's vice president of diversity even locked down her Twitter account shortly after Damore's firing, responding to the barrage of threats describing her as a police Nazi. Well yeah, if you fire someone over something like this. I mean, undoubtedly Google fired this guy because they thought it was less of a PR disaster if they fired him; this probably wasn't an ideological decision, much more a PR decision. But if you fire someone after they state something like this, it very much looks like you're firing them because you don't like their ideas and you don't like what they're saying, and people are generally not in favor of censoring speech. That being said, harassment is bad; don't harass people. Also, that being said, criticism isn't always harassment; don't conflate the two. Damore's memo also stated that the distributions of preferences and abilities of men and women differ in part due to biological causes and that these differences may explain why we don't see equal representation of women in tech and leadership. This assertion, they say, hinges on a flawed assumption: that identities like gender and race are essential and fixed biological attributes, and that inequalities are at least in part the product of such irreducible differences. Well, even if they're not strictly fixed biological attributes, gender and race certainly have something like a 0.99 correlation with biology. And since your biology comes first and is determined when you're conceived, that also settles the causal direction. Even if they're not exactly fixed, they are overwhelmingly fixed. And to call it a flawed assumption that these inequalities are at least in part the product of such differences: they simply state that it's flawed. What you would have to do in order to show that it's a flawed assumption is show that gender and race, insofar as they're biologically determined, have no influence whatsoever on these differences. That's what you'd have to show, right? That's the counterclaim, because the claim is only that they have at least in part something to do with it. And that's also, I believe, what Damore stated, and what the predominant opinion is and what the research points to: for example, there is a large difference in interests between genders as far as, for example, career selection goes, and so on. Now, we can talk about why that is, but there's also a large consensus, I believe, that this is, to whatever degree, at least partly determined by biology. In order to show that this assumption is flawed, you would need to show that biology can't have any influence, right? You'd basically have to prove the impossibility of it having an influence, which no one has done so far, much to the contrary. So to simply state that this is a flawed assumption kind of shows me that they're in a bubble and expecting to speak to people in the same bubble. They then go on to kind of discredit this as so-called biological determinism, which I don't think is a correct use of the term biological determinism, but you can judge for yourself.
All I think these people are saying is that biology might have some influence and that we could adjust for that. That's not even determinism, right? Anyway, this comes up again here. So, conclusion. Conclusion, finally; I think it's been two hours, sorry. Conclusion. Throughout this report, we've outlined the scope and scale of the problem, tracing how the diversity crisis in the industry and the problems of bias in AI systems are interrelated aspects of the same issue. No. In the past, these topics were commonly examined in isolation, but increasing evidence shows that they are closely intertwined. No, you've shown that they're parallel. You have absolutely not shown that they're interrelated aspects of the same issue, and you have not shown that either one of these causally influences the other, that there is any feedback loop, or that fixing one leads to fixing the other. I mean, you could, for example, take a company that is extremely focused on this, or for some reason has a very different workforce, and then show how its products, built on the same data sets the previous companies used, don't end up being biased. Probably not so easy. But again, none of that is in the report. There are many things you could actually do to show what you wanted to show, but that's just not the case in this article. Our analysis surfaced two prominent responses to the diversity crisis: on one hand, a worker-driven movement, which we've skipped, and on the other hand, a small but vocal counter-movement that actively resists diversity in the industry. What dishonesty. Actively resists diversity? I mean, the thought that these people go around saying, no, I don't like the other-looking people, is just so absurd. All they're saying is that either we don't understand the problem in the correct way or our tools aren't appropriate to solve the problem. I think everyone has the same goal of the workplace and the AI systems being as fair and as non-discriminatory as possible. Misrepresentation of the other side is something that really bugs me, and it's something that these authors do a lot; so yeah, maybe I lose my polite side here. And, they say, this counter-movement uses arguments from biological determinism to assert that women are inherently less suited to computer science and AI. What a load of crap. Sorry, but asserts that women are inherently less suited to computer science? No one, okay, not no one, but no one that I know of who makes these arguments asserts that. Sorry, not literally no one; you can always find a sexist douchebag who makes that argument. But this is not a serious argument being made, and it is not the argument that most people in this counter-movement make, not at all. And to represent them as such is just so dishonest. It's almost fitting that this is in the conclusion, because right at the end it completely destroys these authors' credibility for me. The parts we skipped over I would say I'm mostly okay with: they mostly show parallels; they show that AI systems can be biased, and they show that there is unequal representation. They also show examples of discrimination, harassment and so on, problems in AI companies and universities. You can read the report for all of this; it's pretty interesting to read. But the points I've addressed, I'm not happy with. Yeah, so that was it for now. Sorry this took so long, but I felt that a thorough take was necessary. Have a nice rest of the day.
| [
{
"start": 0,
"end": 7.5200000000000005,
"text": " Hi there, today we're looking at discriminating systems, gender, race and power in AI by Sarah"
},
{
"start": 7.5200000000000005,
"end": 14.72,
"text": " Myers-West, Meredith Whitaker and Kate Crawford of the AI Now Institute, which is a part of"
},
{
"start": 14.72,
"end": 18.8,
"text": " New York University or associated with it."
},
{
"start": 18.8,
"end": 24.8,
"text": " This is not as much a paper as it is a report, kind of summarizing current literature and"
},
{
"start": 24.8,
"end": 31.76,
"text": " also kind of an opinion piece slash recommendation giving document."
},
{
"start": 31.76,
"end": 35.86,
"text": " Yes, so we'll dive into it."
},
{
"start": 35.86,
"end": 40.68,
"text": " As you can see from the index, it's quite a long report and we don't have time to go"
},
{
"start": 40.68,
"end": 41.68,
"text": " into all of it."
},
{
"start": 41.68,
"end": 43.92,
"text": " Actually, we don't have time to go into most of it."
},
{
"start": 43.92,
"end": 50.400000000000006,
"text": " I just hope to kind of point out what the main arguments and themes are in the report,"
},
{
"start": 50.4,
"end": 58.4,
"text": " kind of what it's trying to say, pick out some interesting things and summarize it to"
},
{
"start": 58.4,
"end": 60.64,
"text": " the best of my ability."
},
{
"start": 60.64,
"end": 62.72,
"text": " Also give a little critique."
},
{
"start": 62.72,
"end": 73.48,
"text": " So let me actually go ahead and try to state the kind of core argument that the report"
},
{
"start": 73.48,
"end": 78.44,
"text": " is trying to make, because it's not really clear from reading it and you have to kind"
},
{
"start": 78.44,
"end": 84.8,
"text": " of read the whole thing and then kind of becomes clear what the argument is, I feel, though"
},
{
"start": 84.8,
"end": 89.96,
"text": " they somehow stated in the introduction numerous times in various ways."
},
{
"start": 89.96,
"end": 94.24,
"text": " So I might just be not as attentive reader at first time."
},
{
"start": 94.24,
"end": 100.47999999999999,
"text": " But all right, so here's the argument and I really hope I'm representing this correctly."
},
{
"start": 100.47999999999999,
"end": 107.68,
"text": " We have a problem currently that sometimes AI systems can exhibit what we usually call"
},
{
"start": 107.68,
"end": 109.08000000000001,
"text": " bias."
},
{
"start": 109.08000000000001,
"end": 113.52000000000001,
"text": " And we don't mean mathematical bias, like bias variance tradeoff."
},
{
"start": 113.52000000000001,
"end": 120.60000000000001,
"text": " We mean bias in a societal sense, let's say bias against certain types of people where"
},
{
"start": 120.60000000000001,
"end": 122,
"text": " they shouldn't exist."
},
{
"start": 122,
"end": 129.28,
"text": " So for example, let me draw an AI system and I'll just draw a little computer screen with"
},
{
"start": 129.28,
"end": 131.60000000000002,
"text": " a little light bulb."
},
{
"start": 131.60000000000002,
"end": 132.60000000000002,
"text": " All right."
},
{
"start": 132.6,
"end": 137.92,
"text": " So this is because it's smart, this is an AI system and the AI system and they give"
},
{
"start": 137.92,
"end": 139.04,
"text": " numerous examples."
},
{
"start": 139.04,
"end": 145.2,
"text": " One example they give us for is like face recognition algorithm that is much more accurate"
},
{
"start": 145.2,
"end": 151.92,
"text": " on faces of white males, as opposed to darker skinned females."
},
{
"start": 151.92,
"end": 159.04,
"text": " So let me draw like two curves to represent these distributions are unequal."
},
{
"start": 159.04,
"end": 165.48,
"text": " And so the AI system exhibits some bias with respect to some kinds of people with an especially"
},
{
"start": 165.48,
"end": 167.2,
"text": " protected attributes."
},
{
"start": 167.2,
"end": 171.39999999999998,
"text": " And in this report, they focus mainly on gender and race."
},
{
"start": 171.39999999999998,
"end": 174.51999999999998,
"text": " So that's what we're going to talk about."
},
{
"start": 174.51999999999998,
"end": 179.68,
"text": " The second thing they observe, so this observation one, the second thing they observe is, I'm"
},
{
"start": 179.68,
"end": 185.32,
"text": " going to draw some generic people here that represent the workforce of AI."
},
{
"start": 185.32,
"end": 191.76,
"text": " So the AI workforce is classified as all the people that work on AI, be that university"
},
{
"start": 191.76,
"end": 197,
"text": " researchers or within companies building AI products or deploying them."
},
{
"start": 197,
"end": 202.51999999999998,
"text": " So this is the workforce and they observe that there is an unequal distribution among"
},
{
"start": 202.51999999999998,
"end": 205.64,
"text": " the AI workforce."
},
{
"start": 205.64,
"end": 211.84,
"text": " So this distribution, I'm also going to do this for unequal distribution."
},
{
"start": 211.84,
"end": 217.48,
"text": " There's an unequal distribution in the AI workforce, most notably, it's predominantly"
},
{
"start": 217.48,
"end": 221.76,
"text": " males who work on AI."
},
{
"start": 221.76,
"end": 228.08,
"text": " And also white people are overrepresented compared to the world population at large."
},
{
"start": 228.08,
"end": 231.72,
"text": " So that's kind of the two observations they make."
},
{
"start": 231.72,
"end": 240.36,
"text": " And now what they claim is that the unequal representation in the workforce is causing"
},
{
"start": 240.36,
"end": 243.14000000000001,
"text": " the bias in the AI systems."
},
{
"start": 243.14000000000001,
"end": 250.52,
"text": " So they're basically saying these AI systems are biased because that the workforce is unequally"
},
{
"start": 250.52,
"end": 251.96,
"text": " distributed."
},
{
"start": 251.96,
"end": 258.48,
"text": " And also they claim in a less powerful sense, I feel, but they claim there is a loop that"
},
{
"start": 258.48,
"end": 265.24,
"text": " this then leads back that because there is bias in the AI system, that again leads to"
},
{
"start": 265.24,
"end": 270.08000000000004,
"text": " an unequal, more unequal distribution of the workforce."
},
{
"start": 270.08,
"end": 276.56,
"text": " So the core argument really is, as they set out to do, like in the introduction, and also"
},
{
"start": 276.56,
"end": 282.28,
"text": " claim that they have done in the conclusion, is to demonstrate these two directions here"
},
{
"start": 282.28,
"end": 283.84,
"text": " in a causal way."
},
{
"start": 283.84,
"end": 289.21999999999997,
"text": " So the systems are biased because there is an unequal representation in the workforce"
},
{
"start": 289.21999999999997,
"end": 293,
"text": " and that feeds back."
},
{
"start": 293,
"end": 300.03999999999996,
"text": " So the argument is that if you want to fix the bias here, if you want to fix that, then"
},
{
"start": 300.04,
"end": 309.88,
"text": " you will have to fix it via making the workforce more what they call diverse, so less unilaterally"
},
{
"start": 309.88,
"end": 313.40000000000003,
"text": " distributed towards white males."
},
{
"start": 313.40000000000003,
"end": 315.48,
"text": " That's kind of the final conclusion."
},
{
"start": 315.48,
"end": 321.12,
"text": " If you read their report and the recommendations, that's mainly what they're going for."
},
{
"start": 321.12,
"end": 331.8,
"text": " Yeah, so my opinion, or in my opinion, having read the report a couple of times, is that"
},
{
"start": 331.8,
"end": 335.98,
"text": " as I see it, they really don't demonstrate these links."
},
{
"start": 335.98,
"end": 341.04,
"text": " So they give examples of this and they give examples of this."
},
{
"start": 341.04,
"end": 344.08,
"text": " They show that the workforce is unequally distributed."
},
{
"start": 344.08,
"end": 350.2,
"text": " They show that AI systems can exhibit such bias, but they never actually show these links"
},
{
"start": 350.2,
"end": 351.4,
"text": " in my opinion."
},
{
"start": 351.4,
"end": 352.8,
"text": " They don't show this."
},
{
"start": 352.8,
"end": 358.94,
"text": " So if you make the claim that in order to fix the bias in AI systems, you must fix the"
},
{
"start": 358.94,
"end": 364.42,
"text": " unequal representation in the workforce, I would need an argument that says because there"
},
{
"start": 364.42,
"end": 372.12,
"text": " is unequal representation, therefore A, therefore B, therefore C, therefore bias, like an actual"
},
{
"start": 372.12,
"end": 382.32,
"text": " argument to follow that says because of this, that, because of that, that, and so on."
},
{
"start": 382.32,
"end": 384.8,
"text": " It's just not there."
},
{
"start": 384.8,
"end": 386.56,
"text": " They simply show parallels."
},
{
"start": 386.56,
"end": 392,
"text": " They simply show that these two things exist and they just list example after example of"
},
{
"start": 392,
"end": 396.52,
"text": " that."
},
{
"start": 396.52,
"end": 398.84000000000003,
"text": " I don't think they make this argument."
},
{
"start": 398.84,
"end": 406.2,
"text": " But I think, also the other direction, they don't really make this argument."
},
{
"start": 406.2,
"end": 415.47999999999996,
"text": " Except in one case, where if you give them benefit of the doubt."
},
{
"start": 415.47999999999996,
"end": 423.91999999999996,
"text": " What I also think is that it appears like the article, if you read it, and I encourage"
},
{
"start": 423.92,
"end": 429.72,
"text": " you to read it if you have some time, it makes a lot of sense if you have already accepted"
},
{
"start": 429.72,
"end": 430.72,
"text": " this conclusion."
},
{
"start": 430.72,
"end": 437.20000000000005,
"text": " Like if you've already accepted this, then it's like, oh yeah, because I feel this is"
},
{
"start": 437.20000000000005,
"end": 443.40000000000003,
"text": " just a text where the confirmation bias is so high, just the way it's written, that it"
},
{
"start": 443.40000000000003,
"end": 448.84000000000003,
"text": " must make a lot of sense to someone who's already kind of in on this conclusion."
},
{
"start": 448.84,
"end": 456.52,
"text": " But to someone who isn't sold yet, like myself, I am just not finding this convincing at all."
},
{
"start": 456.52,
"end": 465.64,
"text": " The second thing is that it very much feels like this isn't like a discovery or something."
},
{
"start": 465.64,
"end": 472.96,
"text": " But someone actually set out with the goal to address this here with the goal of I want"
},
{
"start": 472.96,
"end": 479.64,
"text": " companies to hire more of these people or certain kinds of people or to become more"
},
{
"start": 479.64,
"end": 484.2,
"text": " diverse or to promote more of a certain type of people."
},
{
"start": 484.2,
"end": 487.35999999999996,
"text": " And now I'm going to find reasons for this."
},
{
"start": 487.35999999999996,
"end": 492.2,
"text": " And the reason is like, oh, look at look at this bias here."
},
{
"start": 492.2,
"end": 493.79999999999995,
"text": " This is caused."
},
{
"start": 493.79999999999995,
"end": 495.79999999999995,
"text": " This is caused by this other thing."
},
{
"start": 495.79999999999995,
"end": 498.84,
"text": " And therefore we must fix this other thing."
},
{
"start": 498.84,
"end": 505.08,
"text": " It very much feels like someone setting out with already the conclusion in mind rather"
},
{
"start": 505.08,
"end": 508.67999999999995,
"text": " than this being an honest investigation."
},
{
"start": 508.67999999999995,
"end": 510.64,
"text": " But yeah, I mean, read it for yourself."
},
{
"start": 510.64,
"end": 514.36,
"text": " I can't prove the absence of an argument by not reading every single line."
},
{
"start": 514.36,
"end": 519.12,
"text": " And I can't read every single line because it'll just get very long and boring."
},
{
"start": 519.12,
"end": 520.88,
"text": " But read it yourself."
},
{
"start": 520.88,
"end": 528.68,
"text": " And I think I'm pretty I'm pretty I've read it numerous times with really an open mind"
},
{
"start": 528.68,
"end": 531.1999999999999,
"text": " to be convinced that there is an argument in there."
},
{
"start": 531.1999999999999,
"end": 536.4399999999999,
"text": " But I don't think there is or I don't think there is a very strong argument for this."
},
{
"start": 536.4399999999999,
"end": 537.4399999999999,
"text": " All right."
},
{
"start": 537.4399999999999,
"end": 540.76,
"text": " Let this first part here is more or less a summary."
},
{
"start": 540.76,
"end": 543.3199999999999,
"text": " So research findings is more or less a summary."
},
{
"start": 543.3199999999999,
"end": 547.28,
"text": " And we'll get to these things as they are important."
},
{
"start": 547.28,
"end": 550.0999999999999,
"text": " Then they state recommendations right at the beginning."
},
{
"start": 550.0999999999999,
"end": 552.92,
"text": " So actually, you'd have to read the article first."
},
{
"start": 552.92,
"end": 554.76,
"text": " This is kind of more of an abstract section."
},
{
"start": 554.76,
"end": 558.54,
"text": " But since it's right here, we'll kind of jump right into it."
},
{
"start": 558.54,
"end": 563.68,
"text": " So these are recommendations and I've claimed they don't really show a connection."
},
{
"start": 563.68,
"end": 569.52,
"text": " But they actually just show examples, examples of this and examples of this and parallel"
},
{
"start": 569.52,
"end": 570.52,
"text": " them."
},
{
"start": 570.52,
"end": 575.38,
"text": " And this is reflected in like every single section, including here in the recommendations."
},
{
"start": 575.38,
"end": 579.12,
"text": " They have recommendations for improving workplace diversity."
},
{
"start": 579.12,
"end": 583.5999999999999,
"text": " And they have recommendations for addressing bias and discrimination in AI systems."
},
{
"start": 583.5999999999999,
"end": 584.5999999999999,
"text": " Right."
},
{
"start": 584.6,
"end": 591.84,
"text": " So all right, in my case, if you make this argument, I would I would feel you also make"
},
{
"start": 591.84,
"end": 594.96,
"text": " recommendations for breaking these links."
},
{
"start": 594.96,
"end": 598.9200000000001,
"text": " But or argue why they can't be broken."
},
{
"start": 598.9200000000001,
"end": 600.94,
"text": " But all right, let's jump into some of them."
},
{
"start": 600.94,
"end": 604.34,
"text": " And it is really a mixed bag here, really."
},
{
"start": 604.34,
"end": 610.48,
"text": " So some recommendations I'm really in favor of just from from the go not even you don't"
},
{
"start": 610.48,
"end": 613.9200000000001,
"text": " even need the article for those here."
},
{
"start": 613.92,
"end": 617.5999999999999,
"text": " Discrimination, harassment and discrimination, transparency reports, including number of"
},
{
"start": 617.5999999999999,
"end": 621.4399999999999,
"text": " claims over time, the types of claims submitted and actions taken."
},
{
"start": 621.4399999999999,
"end": 627.8,
"text": " So it's known that especially in these larger companies, sexual harassment claims often"
},
{
"start": 627.8,
"end": 633.8399999999999,
"text": " go down in either bureaucracy or are kind of hushed under the table or something like"
},
{
"start": 633.8399999999999,
"end": 634.8399999999999,
"text": " this."
},
{
"start": 634.8399999999999,
"end": 638.24,
"text": " What you have to recognize is that a human resource department of a large company isn't"
},
{
"start": 638.24,
"end": 640.52,
"text": " there to serve the human resources."
},
{
"start": 640.52,
"end": 645.52,
"text": " It's there to serve the company providing human resources."
},
{
"start": 645.52,
"end": 651.96,
"text": " That's why a sexual harassment claim to an HR department is just a potential lawsuit."
},
{
"start": 651.96,
"end": 657.1999999999999,
"text": " And that's why they don't want to take it seriously except for it must go away really"
},
{
"start": 657.1999999999999,
"end": 658.1999999999999,
"text": " quickly."
},
{
"start": 658.1999999999999,
"end": 664.48,
"text": " So I think to kind of force companies or to ask companies to be more transparent, to take"
},
{
"start": 664.48,
"end": 673.64,
"text": " more seriously these the accusations of sexual harassment and assault and also discrimination"
},
{
"start": 673.64,
"end": 675.88,
"text": " is a very valuable goal."
},
{
"start": 675.88,
"end": 680.9200000000001,
"text": " And I fully, fully support this."
},
{
"start": 680.9200000000001,
"end": 687.84,
"text": " Also the here commit to transparency around hiring practices, especially hiring regarding"
},
{
"start": 687.84,
"end": 691.8000000000001,
"text": " how candidates are leveled, compensated and promoted."
},
{
"start": 691.8,
"end": 698.3599999999999,
"text": " But also the larger the company gets, the less transparent this process usually becomes"
},
{
"start": 698.3599999999999,
"end": 703.8,
"text": " or the more bureaucratic, the more people are able to game it and so on and distort"
},
{
"start": 703.8,
"end": 704.8,
"text": " it."
},
{
"start": 704.8,
"end": 711.1999999999999,
"text": " So I feel it's always good to be transparent around, okay, this person provides this much"
},
{
"start": 711.1999999999999,
"end": 718.7199999999999,
"text": " value to the company, therefore they should be compensated according to that or at least"
},
{
"start": 718.7199999999999,
"end": 721.18,
"text": " be transparent about it."
},
{
"start": 721.18,
"end": 723.68,
"text": " So these are kind of recommendations I like."
},
{
"start": 723.68,
"end": 730.12,
"text": " Then recommendations that really go into a different direction is something like this"
},
{
"start": 730.12,
"end": 734.2399999999999,
"text": " here, change hiring practices to maximize diversity."
},
{
"start": 734.2399999999999,
"end": 739.68,
"text": " And this is kind of reflect, I'm not going to go on this reflected in other points, increase"
},
{
"start": 739.68,
"end": 744.12,
"text": " the number of people of color, women and other underrepresented groups at senior leadership"
},
{
"start": 744.12,
"end": 746.9599999999999,
"text": " levels of AI companies across all departments."
},
{
"start": 746.96,
"end": 752.6,
"text": " So these things, they are usually within like company diversity goals and so on, doesn't"
},
{
"start": 752.6,
"end": 754.12,
"text": " really say how to do it."
},
{
"start": 754.12,
"end": 759.2800000000001,
"text": " But then the I mean, as such, they're not really recommendations yet."
},
{
"start": 759.2800000000001,
"end": 760.2800000000001,
"text": " They're more like goals."
},
{
"start": 760.2800000000001,
"end": 766.4000000000001,
"text": " But here recommendation seven, I think is the the crucial one, ensure executive incentive"
},
{
"start": 766.4000000000001,
"end": 774.0400000000001,
"text": " structures are tied to increases in hiring and retention of underrepresented groups."
},
{
"start": 774.04,
"end": 777.56,
"text": " So this is it's a bit of coded language."
},
{
"start": 777.56,
"end": 783.56,
"text": " But here they talk about executive incentive structure tied to hiring and retention of"
},
{
"start": 783.56,
"end": 785.12,
"text": " underrepresented groups."
},
{
"start": 785.12,
"end": 790.12,
"text": " This basically means if you are a manager or someone in charge of hiring or promoting,"
},
{
"start": 790.12,
"end": 795.52,
"text": " and you hire or promote a underrepresented person, and since they're talking about gender"
},
{
"start": 795.52,
"end": 802.68,
"text": " and race here, if you that means if you hire or promote a person of color or a woman, in"
},
{
"start": 802.68,
"end": 805.64,
"text": " this case, you will be compensated more."
},
{
"start": 805.64,
"end": 809.5999999999999,
"text": " So at the end of the year, you'll somehow have more money, like more bonuses or more"
},
{
"start": 809.5999999999999,
"end": 814.12,
"text": " base comp or more equity or something like you'll get more money."
},
{
"start": 814.12,
"end": 822.9599999999999,
"text": " So this, this recommendation is a direct call to hire based on race and gender."
},
{
"start": 822.9599999999999,
"end": 829.4399999999999,
"text": " So this, this is a direct call to racist and sexist hiring basically to discriminate people"
},
{
"start": 829.44,
"end": 838.5200000000001,
"text": " according to their skin color and according to their gender, which I mean, how, how is"
},
{
"start": 838.5200000000001,
"end": 840,
"text": " this okay with anyone?"
},
{
"start": 840,
"end": 846.8000000000001,
"text": " Like how can anyone how are people even able to state this and in like a high profile report"
},
{
"start": 846.8000000000001,
"end": 852.1400000000001,
"text": " like this and get away with it and not have people criticize them, this directly calls"
},
{
"start": 852.1400000000001,
"end": 856.8800000000001,
"text": " for people to be treated according to their gender and race."
},
{
"start": 856.88,
"end": 863.64,
"text": " And probably as directly as you can go without getting into actual legal trouble."
},
{
"start": 863.64,
"end": 868.28,
"text": " But yeah, I'm really, really against such such practices."
},
{
"start": 868.28,
"end": 875.12,
"text": " I mean, yeah, that's I just I just don't know how this how this can ever how this can ever"
},
{
"start": 875.12,
"end": 879.12,
"text": " be thought of as a good thing by anyone."
},
{
"start": 879.12,
"end": 887.52,
"text": " All right, so, well, yeah, in my mind, this recommendation, and this recommendation kind"
},
{
"start": 887.52,
"end": 889.52,
"text": " of are counter to each other."
},
{
"start": 889.52,
"end": 895.6,
"text": " Because if if I commit to transparency, how people are okay now I can, I can transparently"
},
{
"start": 895.6,
"end": 898.32,
"text": " commit to to be racist, I guess."
},
{
"start": 898.32,
"end": 903.5600000000001,
"text": " But if I say, okay, I'm going to come and promote people based on how much value they"
},
{
"start": 903.56,
"end": 910.04,
"text": " provide to the company, then yeah, I'd much rather have that than saying I'm going to"
},
{
"start": 910.04,
"end": 913,
"text": " come and promote people based on their skin color."
},
{
"start": 913,
"end": 916.2399999999999,
"text": " Alright, so let's actually jump into the report."
},
{
"start": 916.2399999999999,
"end": 920.9399999999999,
"text": " I'm not gonna these recommendations for addressing bias and discrimination in systems this these"
},
{
"start": 920.9399999999999,
"end": 923.3199999999999,
"text": " are fairly general and common."
},
{
"start": 923.3199999999999,
"end": 928.04,
"text": " So as well, as I said, we'll jump most of the things in the report."
},
{
"start": 928.04,
"end": 930.3199999999999,
"text": " So introduction."
},
{
"start": 930.32,
"end": 935.8000000000001,
"text": " So they start out with there is a diversity crisis in the AI industry."
},
{
"start": 935.8000000000001,
"end": 942.72,
"text": " This they give like some numbers like 15% of AI research staff and 10% at Google, so"
},
{
"start": 942.72,
"end": 946.48,
"text": " 15% of Facebook are women."
},
{
"start": 946.48,
"end": 953.96,
"text": " So these are some kind of fairly known statistics about how the AI field is kind of gender and"
},
{
"start": 953.96,
"end": 956.1600000000001,
"text": " race skewed."
},
{
"start": 956.16,
"end": 963.3199999999999,
"text": " Currently, so they say they claim in bold the diversity problem is not just about women."
},
{
"start": 963.3199999999999,
"end": 969.5799999999999,
"text": " It's about gender, race, and most fundamentally about power."
},
{
"start": 969.5799999999999,
"end": 974.18,
"text": " It affects how companies work, what products get built, who they're designed to serve,"
},
{
"start": 974.18,
"end": 976.6,
"text": " and who benefits from their development."
},
{
"start": 976.6,
"end": 985.72,
"text": " So this, I find this, this, this word power and this notion of power, a lot in this report,"
},
{
"start": 985.72,
"end": 992.52,
"text": " it appears again and again and again in in like power dynamics and power dynamics among"
},
{
"start": 992.52,
"end": 993.52,
"text": " groups."
},
{
"start": 993.52,
"end": 1001.6,
"text": " It's like a worldview, it paints like a worldview, where these different gender and race groups"
},
{
"start": 1001.6,
"end": 1007.52,
"text": " kind of struggle against each other to gain power over another."
},
{
"start": 1007.52,
"end": 1014.24,
"text": " And whoever's in power will try to remain in power in alliance with their gender and"
},
{
"start": 1014.24,
"end": 1018.5600000000001,
"text": " race group and try to keep the other groups down."
},
{
"start": 1018.5600000000001,
"end": 1021.88,
"text": " I'm not sure that's the correct view of the world."
},
{
"start": 1021.88,
"end": 1029.48,
"text": " In my mind, the world is comprised of individual people that want to achieve something for"
},
{
"start": 1029.48,
"end": 1033.6,
"text": " themselves and they would like to prop themselves up."
},
{
"start": 1033.6,
"end": 1039.24,
"text": " Whereas in this worldview, it's like, I'm going to use the power of my group to keep"
},
{
"start": 1039.24,
"end": 1041.84,
"text": " other groups down."
},
{
"start": 1041.84,
"end": 1048.8,
"text": " I don't know which worldview you subscribe to, but I find the world is comprised of individuals."
},
{
"start": 1048.8,
"end": 1054,
"text": " Yeah, and this is not discrediting that some people have it harder because of their gender"
},
{
"start": 1054,
"end": 1055.52,
"text": " or race."
},
{
"start": 1055.52,
"end": 1060.52,
"text": " But to see the entire world as a power struggle between these groups, to me, it's, it's,"
},
{
"start": 1060.52,
"end": 1068.3999999999999,
"text": " yeah, and I'm not going to point out everywhere it appears, this power wording, but it appears"
},
{
"start": 1068.4,
"end": 1072.24,
"text": " a lot and it's really shapes how the report reads."
},
{
"start": 1072.24,
"end": 1079.3600000000001,
"text": " You have to, you have to kind of remember, if you're a white male, and currently, the"
},
{
"start": 1079.3600000000001,
"end": 1086.76,
"text": " field is comprised of 90% white males, you, if you have like 10, like 10 hours, let's"
},
{
"start": 1086.76,
"end": 1093.96,
"text": " say you have to have 10 hours to do something, right, you can either choose to put down some"
},
{
"start": 1093.96,
"end": 1101.92,
"text": " other groups, like put down groups that you're not part of, or you can choose to invest these"
},
{
"start": 1101.92,
"end": 1106.8,
"text": " 10 hours in putting up yourself, you, right."
},
{
"start": 1106.8,
"end": 1113.2,
"text": " So if, if I, like I profit, if I'm a white male, I profit minimally from keeping the"
},
{
"start": 1113.2,
"end": 1120.32,
"text": " other groups down because guess what, I still have to compete with the like 1 billion other"
},
{
"start": 1120.32,
"end": 1123.04,
"text": " white males there are."
},
{
"start": 1123.04,
"end": 1131.68,
"text": " It's not going to help me to keep down anyone else, and especially, like it's, it's moronic,"
},
{
"start": 1131.68,
"end": 1138.92,
"text": " like who does that, who like has alliance, except most fringe people, like to their race"
},
{
"start": 1138.92,
"end": 1144.68,
"text": " or gender, rather than to the people they admire and respect and like to work with."
},
{
"start": 1144.68,
"end": 1149.3999999999999,
"text": " So I'm going to, if I have like 10 hours today, I'm going to rather spend this in propping"
},
{
"start": 1149.4,
"end": 1155.92,
"text": " up myself compared to everyone else, and I don't care what gender or race they are."
},
{
"start": 1155.92,
"end": 1162.1200000000001,
"text": " And so that to me, that's a much more accurate or, I don't know, plausible worldview."
},
{
"start": 1162.1200000000001,
"end": 1166.64,
"text": " But just be aware that this report really takes on the language of kind of groups and"
},
{
"start": 1166.64,
"end": 1173.2800000000002,
"text": " power between groups and groups trying to, you know, kind of gain power and keep in,"
},
{
"start": 1173.2800000000002,
"end": 1176.52,
"text": " keep power and keep others from having power."
},
{
"start": 1176.52,
"end": 1183.44,
"text": " All right, so say, to date, the diversity problems of the industry and the issues of"
},
{
"start": 1183.44,
"end": 1188.44,
"text": " bias in the systems it builds have tended to be considered separately."
},
{
"start": 1188.44,
"end": 1193.02,
"text": " We suggest that these are two versions of the same problem."
},
{
"start": 1193.02,
"end": 1197.6399999999999,
"text": " Issues of discrimination in the workforce and in system buildings are deeply intertwined."
},
{
"start": 1197.6399999999999,
"end": 1203.8,
"text": " Challenge, and moreover, tackling the challenges of bias within technical systems requires"
},
{
"start": 1203.8,
"end": 1207.76,
"text": " addressing workforce diversity and vice versa."
},
{
"start": 1207.76,
"end": 1214.72,
"text": " So the, I think this, this here actually is like how I described the argument and they"
},
{
"start": 1214.72,
"end": 1218.1599999999999,
"text": " kind of restated multiple times in a bit different way."
},
{
"start": 1218.1599999999999,
"end": 1219.76,
"text": " But I think this is the core."
},
{
"start": 1219.76,
"end": 1224.28,
"text": " And I really think I'm not misrepresenting the article here in that this is what they"
},
{
"start": 1224.28,
"end": 1225.3999999999999,
"text": " are setting out to do."
},
{
"start": 1225.3999999999999,
"end": 1233,
"text": " They're setting out to say, okay, the diversity, the kind of unequal representation in the"
},
{
"start": 1233,
"end": 1240.48,
"text": " workforce and the bias in some AI systems are causally linked to each other and tackling"
},
{
"start": 1240.48,
"end": 1243.96,
"text": " one requires tackling the other."
},
{
"start": 1243.96,
"end": 1249.16,
"text": " So yeah, if I'm misrepresenting them, let me know, but I really think I'm accurately"
},
{
"start": 1249.16,
"end": 1253.98,
"text": " representing their argument."
},
{
"start": 1253.98,
"end": 1261,
"text": " So what they, what they do, as I said, is they give examples of one and of the other"
},
{
"start": 1261,
"end": 1271.24,
"text": " and also they really, they're really on kind of discrediting the kind of issues to solve"
},
{
"start": 1271.24,
"end": 1273.66,
"text": " problems of bias in a different way."
},
{
"start": 1273.66,
"end": 1276.56,
"text": " So they point a little bit to this here in the introduction."
},
{
"start": 1276.56,
"end": 1280.04,
"text": " They say in the face of growing evidence, the AI research community and the industry"
},
{
"start": 1280.04,
"end": 1285.26,
"text": " producing our products have begun addressing the problem of bias by building on a body"
},
{
"start": 1285.26,
"end": 1288.36,
"text": " of work of fairness, accountability and transparency."
},
{
"start": 1288.36,
"end": 1294.8,
"text": " So fairness, accountability and transparency research concerns these issues."
},
{
"start": 1294.8,
"end": 1300.4399999999998,
"text": " For one is research showing that some products are unfair or untransparent and so on."
},
{
"start": 1300.4399999999998,
"end": 1308.6399999999999,
"text": " On the other hand, it's trying to devise algorithms that are more fair according to some notions"
},
{
"start": 1308.6399999999999,
"end": 1314.36,
"text": " or more accountable and transparent, which means that the algorithm can kind of say why"
},
{
"start": 1314.36,
"end": 1320,
"text": " it made a certain decision rather than it being a deep learning system that you don't"
},
{
"start": 1320,
"end": 1321.58,
"text": " really have an insight."
},
{
"start": 1321.58,
"end": 1326.6799999999998,
"text": " These fields are active fields of research, definitely very interesting to look into."
},
{
"start": 1326.6799999999998,
"end": 1334.6,
"text": " So but they, they kind of, it is not already here, but they say, yeah, we have adjusting"
},
{
"start": 1334.6,
"end": 1342.08,
"text": " AI systems that produce a result deemed fair by one of various mathematical definitions."
},
{
"start": 1342.08,
"end": 1345.96,
"text": " You can already see in the language here, they don't really like this research and they"
},
{
"start": 1345.96,
"end": 1352.76,
"text": " are trying in this report to kind of discredit it or at least claim that it doesn't solve"
},
{
"start": 1352.76,
"end": 1357.76,
"text": " the whole problem because their point is, of course, you have to address this diversity"
},
{
"start": 1357.76,
"end": 1364.24,
"text": " issue in the workforce in order to fix the problems."
},
{
"start": 1364.24,
"end": 1372.32,
"text": " So to this, I just want to say no, like if you can, I mean, you can criticize the fairness"
},
{
"start": 1372.32,
"end": 1376.1200000000001,
"text": " and accountability and transparency research field in that they haven't solved the problem"
},
{
"start": 1376.1200000000001,
"end": 1377.32,
"text": " fully yet."
},
{
"start": 1377.32,
"end": 1384.8,
"text": " But in principle, if I have an algorithm, if I'm being delivered an algorithm, right,"
},
{
"start": 1384.8,
"end": 1390.4,
"text": " and the fairness literature has been applied to that algorithm and someone tells me, I"
},
{
"start": 1390.4,
"end": 1397,
"text": " guarantee you here is a proof, the algorithm is fair, right, then I really don't care who"
},
{
"start": 1397,
"end": 1398.3200000000002,
"text": " made that algorithm."
},
{
"start": 1398.3200000000002,
"end": 1400.96,
"text": " As long as it's fair, the problem is fixed."
},
{
"start": 1400.96,
"end": 1404.16,
"text": " If the bias is gone, the problem is fixed."
},
{
"start": 1404.16,
"end": 1405.3600000000001,
"text": " And I don't care who fix it."
},
{
"start": 1405.3600000000001,
"end": 1410.64,
"text": " I don't care if the person who fixed it is black or white or purple."
},
{
"start": 1410.64,
"end": 1412.52,
"text": " Then the problem is fixed."
},
{
"start": 1412.52,
"end": 1418.4,
"text": " And they, they really have to, they really try to just make the counter argument here"
},
{
"start": 1418.4,
"end": 1421.2800000000002,
"text": " is that no, that's it's not enough."
},
{
"start": 1421.2800000000002,
"end": 1428.16,
"text": " But I claim yes, it, if you can actually solve the fairness problem, technically, then you"
},
{
"start": 1428.16,
"end": 1430.3600000000001,
"text": " have solved the fairness problem."
},
{
"start": 1430.3600000000001,
"end": 1436.76,
"text": " Yeah, the only thing you can do is claim that it is not good enough yet, but not that it's"
},
{
"start": 1436.76,
"end": 1441.6000000000001,
"text": " fun to they kind of have to make the argument that it's fundamentally flawed approach."
},
{
"start": 1441.6000000000001,
"end": 1445.1200000000001,
"text": " And I don't think they succeed in doing that here."
},
{
"start": 1445.12,
"end": 1452.1999999999998,
"text": " Um, yeah, so they go on to say, we should expand to consider not only how I tools can"
},
{
"start": 1452.1999999999998,
"end": 1456.04,
"text": " be biased technically, but how they're shaped by the environments in which you're built"
},
{
"start": 1456.04,
"end": 1458.28,
"text": " in and the people that built them."
},
{
"start": 1458.28,
"end": 1463.8,
"text": " Again, this this focus like who builds the AI system, I don't care, I care what it does,"
},
{
"start": 1463.8,
"end": 1464.9199999999998,
"text": " right?"
},
{
"start": 1464.9199999999998,
"end": 1469.4399999999998,
"text": " As much as if, if I hear an argument for or against something, I don't care who makes"
},
{
"start": 1469.4399999999998,
"end": 1470.8,
"text": " the argument, right?"
},
{
"start": 1470.8,
"end": 1473.28,
"text": " I care what the argument says."
},
{
"start": 1473.28,
"end": 1477.8,
"text": " This is, it's like an ad hominem attack for an entire community."
},
{
"start": 1477.8,
"end": 1487.76,
"text": " That's kind of how this this article, this report shows, or is appears to me."
},
{
"start": 1487.76,
"end": 1493.44,
"text": " So they say, currently, large scale AI systems are developed almost exclusively in a handful"
},
{
"start": 1493.44,
"end": 1497.76,
"text": " of technology companies and a small set of elite university laboratories spaces that"
},
{
"start": 1497.76,
"end": 1502.74,
"text": " in the West tend to be extremely white, affluent, technically oriented and male."
},
{
"start": 1502.74,
"end": 1508.1200000000001,
"text": " So yeah, their their problem, that's their fundamental problem here that these these"
},
{
"start": 1508.1200000000001,
"end": 1511.72,
"text": " spaces are skewed in one direction."
},
{
"start": 1511.72,
"end": 1515.84,
"text": " Interestingly enough, their problem is not so much that it's that they're all in the"
},
{
"start": 1515.84,
"end": 1518.04,
"text": " same place, right?"
},
{
"start": 1518.04,
"end": 1523.68,
"text": " That they all live like 20 miles from each other in around San Francisco."
},
{
"start": 1523.68,
"end": 1528.1200000000001,
"text": " That's that seems to be not a problem at all, as long as we get to like enough people of"
},
{
"start": 1528.1200000000001,
"end": 1532.32,
"text": " color and women into these 20 miles."
},
{
"start": 1532.32,
"end": 1540.52,
"text": " But yeah, so that that's pointing out the the problem here or the yeah, kind of issue"
},
{
"start": 1540.52,
"end": 1541.52,
"text": " they have."
},
{
"start": 1541.52,
"end": 1546.28,
"text": " All right, so they go on."
},
{
"start": 1546.28,
"end": 1554.12,
"text": " Just kind of want to highlight again, they say both within the spaces where AI is being"
},
{
"start": 1554.12,
"end": 1557.8,
"text": " created and the logic of how AI systems are being designed."
},
{
"start": 1557.8,
"end": 1563,
"text": " So paralleling the two things, the cost of bias, harassment and discrimination are born"
},
{
"start": 1563,
"end": 1570.28,
"text": " by the same people, gender minorities, people of color, other underrepresented groups."
},
{
"start": 1570.28,
"end": 1576.56,
"text": " And they also say similarly, the benefits of such systems from profit to efficiency,"
},
{
"start": 1576.56,
"end": 1583.24,
"text": " accrue primarily to those are already in positions of power tend to be white, educated and male."
},
{
"start": 1583.24,
"end": 1592.88,
"text": " So they again, they say the this points to a systematic relationship between patterns"
},
{
"start": 1592.88,
"end": 1597.6,
"text": " of exclusion within the field of AI and the industry driving its production on the one"
},
{
"start": 1597.6,
"end": 1602.04,
"text": " hand and the biases that manifest in the logics and applications of the technologies on the"
},
{
"start": 1602.04,
"end": 1603.04,
"text": " other."
},
{
"start": 1603.04,
"end": 1609.84,
"text": " And they try to make this connection because they say the cost and the benefit of these"
},
{
"start": 1609.84,
"end": 1614.6,
"text": " two things are overlap in the people that where it costs and it benefits."
},
{
"start": 1614.6,
"end": 1619.28,
"text": " And I really, again, it's just a parallel, but I really even don't think that's true"
},
{
"start": 1619.28,
"end": 1626.04,
"text": " because they kind of, they kind of argue against themselves later."
},
{
"start": 1626.04,
"end": 1632.8799999999999,
"text": " So they always say, we have to look at again, they shoot against the take much more than"
},
{
"start": 1632.8799999999999,
"end": 1638.28,
"text": " the technically driven problem solving."
},
{
"start": 1638.28,
"end": 1640.12,
"text": " They point to this."
},
{
"start": 1640.12,
"end": 1645.28,
"text": " So our research requires looking at gender and racist categories within which humans"
},
{
"start": 1645.28,
"end": 1652.24,
"text": " think in short, sorry, studies of discriminatory systems, we need to ask who is harmed, who"
},
{
"start": 1652.24,
"end": 1654.84,
"text": " benefits, who gets to decide."
},
{
"start": 1654.84,
"end": 1664.84,
"text": " So it's kind of who bears the cost, who bears the benefits and who has the power."
},
{
"start": 1664.84,
"end": 1671.52,
"text": " So that's the, and again, it's we seek to understand how AI disadvantages some, we also"
},
{
"start": 1671.52,
"end": 1676.04,
"text": " consider how it works to the advantage of others."
},
{
"start": 1676.04,
"end": 1677.3999999999999,
"text": " So keep that in mind."
},
{
"start": 1677.3999999999999,
"end": 1682.4399999999998,
"text": " That's kind of the lens through how they analyze the this thing again, one that acknowledges"
},
{
"start": 1682.4399999999998,
"end": 1685.72,
"text": " power relationships and centers equity and justice."
},
{
"start": 1685.72,
"end": 1691.6399999999999,
"text": " That's the, they want to see this bigger picture."
},
{
"start": 1691.64,
"end": 1696.5600000000002,
"text": " So that's yeah, keep, again, keep that in mind."
},
{
"start": 1696.5600000000002,
"end": 1703.8400000000001,
"text": " So they go into a section called which humans are in the loop, how workforces and AI systems"
},
{
"start": 1703.8400000000001,
"end": 1705.0800000000002,
"text": " interact."
},
{
"start": 1705.0800000000002,
"end": 1710.6000000000001,
"text": " So this kind of from the title of this section, you think, okay, here's where we get in."
},
{
"start": 1710.6000000000001,
"end": 1712.76,
"text": " Here's where we make the argument."
},
{
"start": 1712.76,
"end": 1720.76,
"text": " And they start by listing examples of how AI systems can be discriminatory."
},
{
"start": 1720.76,
"end": 1728.4,
"text": " And first, they go into an example of Amazon had developed an experimental hiring tool"
},
{
"start": 1728.4,
"end": 1733.16,
"text": " to help rank job candidates."
},
{
"start": 1733.16,
"end": 1738.12,
"text": " By learning from its past reference preferences, Amazon hoped that the resume scanning tool"
},
{
"start": 1738.12,
"end": 1743.3799999999999,
"text": " will be able to efficiently identify qualified applicants, comparing their applications"
},
{
"start": 1743.3799999999999,
"end": 1745,
"text": " to previous hires."
},
{
"start": 1745,
"end": 1750.64,
"text": " The system quickly began to downgrade resumes from candidates who attended all women's"
},
{
"start": 1750.64,
"end": 1757.38,
"text": " colleges along with any resumes that included the word women's."
},
{
"start": 1757.38,
"end": 1762.8400000000001,
"text": " After uncovering this bias, Amazon engineers tried to fix the problem by directing the"
},
{
"start": 1762.8400000000001,
"end": 1765.92,
"text": " system to treat these terms in a neutral manner."
},
{
"start": 1765.92,
"end": 1772.4,
"text": " The company eventually abandoned the tool when they were unable to ensure that the algorithm"
},
{
"start": 1772.4,
"end": 1776.1200000000001,
"text": " would not be biased against women."
},
{
"start": 1776.12,
"end": 1781.4399999999998,
"text": " Gender based discrimination was built too deeply within the system and in Amazon's past"
},
{
"start": 1781.4399999999998,
"end": 1785.4799999999998,
"text": " hiring practices to be uprooted using a purely technical approach."
},
{
"start": 1785.4799999999998,
"end": 1790.4799999999998,
"text": " So this just the way is written, I find to be quite dishonest."
},
{
"start": 1790.4799999999998,
"end": 1793.84,
"text": " But let's analyze what happened here."
},
{
"start": 1793.84,
"end": 1798.9199999999998,
"text": " So their final claim is that gender based discrimination was built too deeply within"
},
{
"start": 1798.9199999999998,
"end": 1804.6,
"text": " the system to be uprooted using a purely technical approach."
},
{
"start": 1804.6,
"end": 1806.1999999999998,
"text": " So this is one of their arguments."
},
{
"start": 1806.1999999999998,
"end": 1812.12,
"text": " They say technical approaches, they don't help because the Amazon engineers tried to"
},
{
"start": 1812.12,
"end": 1814.9599999999998,
"text": " fix the problem."
},
{
"start": 1814.9599999999998,
"end": 1823,
"text": " But when they were unable to ensure that the algorithm would not be biased against women."
},
{
"start": 1823,
"end": 1828.6399999999999,
"text": " So if you read this, you really I mean, I really get the impression that's not what"
},
{
"start": 1828.6399999999999,
"end": 1830.12,
"text": " happened here."
},
{
"start": 1830.12,
"end": 1837.1599999999999,
"text": " What happened here most probably is Amazon built this tool, okay, and it fed in its past"
},
{
"start": 1837.1599999999999,
"end": 1843.9599999999998,
"text": " hires and we know of issues of like data set bias bias inherent in data set."
},
{
"start": 1843.9599999999998,
"end": 1851.2399999999998,
"text": " So if your data set is skewed, the AI tends to pick up on the skewed data set and become"
},
{
"start": 1851.2399999999998,
"end": 1852.2399999999998,
"text": " skewed itself."
},
{
"start": 1852.2399999999998,
"end": 1860.08,
"text": " Okay, so I actually would argue that most or all of the examples they stayed in here"
},
{
"start": 1860.08,
"end": 1865.1599999999999,
"text": " are examples of such biased data sets and not."
},
{
"start": 1865.1599999999999,
"end": 1871,
"text": " So the the cause of the bias is the data set that they are strained on and not the person"
},
{
"start": 1871,
"end": 1879.24,
"text": " that ran the code or built the algorithm to train it on or built the deployment."
},
{
"start": 1879.24,
"end": 1885.56,
"text": " And so but it doesn't matter you're a you're Amazon, you built this tool and you realize,"
},
{
"start": 1885.56,
"end": 1891.3999999999999,
"text": " oh, it discriminates against people having women's on their CV."
},
{
"start": 1891.3999999999999,
"end": 1895.98,
"text": " So this is a pretty bad PR wise."
},
{
"start": 1895.98,
"end": 1899.62,
"text": " So you tell your engineers engineers fix the problem."
},
{
"start": 1899.62,
"end": 1903.78,
"text": " So the engineers go fix the problem, they come back and say, okay, we fixed the problem."
},
{
"start": 1903.78,
"end": 1909.44,
"text": " And then what you do is you say, okay, engineers, can you ensure me that the algorithm would"
},
{
"start": 1909.44,
"end": 1911.12,
"text": " not be biased against women?"
},
{
"start": 1911.12,
"end": 1918,
"text": " Because if only the slightest bias exists, if only it doesn't even have to be if one"
},
{
"start": 1918,
"end": 1926.52,
"text": " journalist finds one example, where there is a down rank, because I add the word women's,"
},
{
"start": 1926.52,
"end": 1928.8,
"text": " then we are screwed, right?"
},
{
"start": 1928.8,
"end": 1934.08,
"text": " And the engineers will say, No, we can't guarantee that it's a deep learning system or something,"
},
{
"start": 1934.08,
"end": 1935.08,
"text": " right?"
},
{
"start": 1935.08,
"end": 1938.78,
"text": " We, we can't like give you a proof that it's not biased."
},
{
"start": 1938.78,
"end": 1943.56,
"text": " If you're a smart executive, at that point, you'll scrap the tool, because the potential"
},
{
"start": 1943.56,
"end": 1946.54,
"text": " PR downside are just huge."
},
{
"start": 1946.54,
"end": 1952,
"text": " And probably they've also realized it's not that handy to have this, this tool compared"
},
{
"start": 1952,
"end": 1956.3999999999999,
"text": " to their recruiters doing their job, because their recruiters might actually be good and"
},
{
"start": 1956.3999999999999,
"end": 1958.6399999999999,
"text": " have been doing this for a while."
},
{
"start": 1958.6399999999999,
"end": 1967.78,
"text": " So to the to the fact that this tool was scrapped is probably much more a result of a PR disaster."
},
{
"start": 1967.78,
"end": 1974.32,
"text": " But also independent of that to say gender based discrimination, sorry, gender based"
},
{
"start": 1974.32,
"end": 1980.6,
"text": " discrimination was built too deeply within the system to be uprooted using a purely technical"
},
{
"start": 1980.6,
"end": 1982.8799999999999,
"text": " approach."
},
{
"start": 1982.8799999999999,
"end": 1988.12,
"text": " It's just I mean, what is what is this?"
},
{
"start": 1988.12,
"end": 1993.94,
"text": " This is just trying to discredit this kind of technical, technical going about solving"
},
{
"start": 1993.94,
"end": 1994.94,
"text": " this problem."
},
{
"start": 1994.94,
"end": 1999.88,
"text": " I'm pretty sure if someone comes to me and says here, I have this tool, and I can mathematically"
},
{
"start": 1999.88,
"end": 2006.26,
"text": " prove to you that it's not biased, then it's not then the problem is solved."
},
{
"start": 2006.26,
"end": 2014.72,
"text": " And also, I really don't see how the person training the algorithm, or the person researching"
},
{
"start": 2014.72,
"end": 2019.8400000000001,
"text": " such an algorithm has any influence over how the algorithm works, because they're not the"
},
{
"start": 2019.84,
"end": 2025.6399999999999,
"text": " ones making the data set, or if they are, yeah, then they can make a better data set."
},
{
"start": 2025.6399999999999,
"end": 2031.3999999999999,
"text": " Also, if a person comes and makes a better data set, that will fix the problem."
},
{
"start": 2031.3999999999999,
"end": 2036.1999999999998,
"text": " And it doesn't matter what skin color the person has that makes the better data set."
},
{
"start": 2036.1999999999998,
"end": 2042.82,
"text": " So all of this, this link is just not demonstrated here, or anywhere here at all."
},
{
"start": 2042.82,
"end": 2048.56,
"text": " But this this here is the closest Amazon that this report actually comes to making this"
},
{
"start": 2048.56,
"end": 2049.56,
"text": " point."
},
{
"start": 2049.56,
"end": 2055.64,
"text": " And I said before, I drew that drew this thing workforce AI bias, right?"
},
{
"start": 2055.64,
"end": 2061.86,
"text": " So this this link since it here the AI system is used for hiring the workforce."
},
{
"start": 2061.86,
"end": 2069.22,
"text": " So at least one could make a claim that this link is somewhat demonstrated."
},
{
"start": 2069.22,
"end": 2075.38,
"text": " But I this it's a weak case, I would agree, but this is the closest they come."
},
{
"start": 2075.38,
"end": 2082.2000000000003,
"text": " So that and but then to go this direction, you have to somehow argue, well, the workforce"
},
{
"start": 2082.2000000000003,
"end": 2088.02,
"text": " somehow makes the AI system bias, no, the workforce influences the data set."
},
{
"start": 2088.02,
"end": 2093.9,
"text": " If the AI is trained, so if a hiring AI, how do you train a hiring AI, you optimally train"
},
{
"start": 2093.9,
"end": 2095.7200000000003,
"text": " it on the performance."
},
{
"start": 2095.7200000000003,
"end": 2101.82,
"text": " So this this employee here is going to have a performance over time, right?"
},
{
"start": 2101.82,
"end": 2104.5,
"text": " And the AI system will look at that performance over time."
},
{
"start": 2104.5,
"end": 2109.7,
"text": " So if the AI system even if it's initially biased, because it learns from the risk recruiters,"
},
{
"start": 2109.7,
"end": 2118.56,
"text": " it will learn that, okay, actually, if I always forgo these women, then I don't get as much"
},
{
"start": 2118.56,
"end": 2121.86,
"text": " performance of a workforce, so I should correct for that."
},
{
"start": 2121.86,
"end": 2130.02,
"text": " So if you train the AI system on a good metric, then then then this problem will leave even"
},
{
"start": 2130.02,
"end": 2131.02,
"text": " out itself."
},
{
"start": 2131.02,
"end": 2138.42,
"text": " But again, this Yeah, this this is this could be considered like one point in the argument,"
},
{
"start": 2138.42,
"end": 2140.58,
"text": " but I think it's a very weak point."
},
{
"start": 2140.58,
"end": 2146.04,
"text": " And only because the AI system is actually used for hiring, where I think the point they're"
},
{
"start": 2146.04,
"end": 2152.74,
"text": " making is a much larger one is the general bias in the AI systems contributes to the"
},
{
"start": 2152.74,
"end": 2153.74,
"text": " workforce imbalances."
},
{
"start": 2153.74,
"end": 2159.44,
"text": " And there you somehow have to say that, okay, the AI system somehow influences society at"
},
{
"start": 2159.44,
"end": 2165.98,
"text": " large and society at large then go leads to the workforce being skewed."
},
{
"start": 2165.98,
"end": 2171.7400000000002,
"text": " I don't Yeah, that it's just not strong enough, in my opinion."
},
{
"start": 2171.7400000000002,
"end": 2176.18,
"text": " And the other direction also isn't isn't strong here."
},
{
"start": 2176.18,
"end": 2180.54,
"text": " But again, the examples only get weaker from here on."
},
{
"start": 2180.54,
"end": 2185.66,
"text": " They go on to say, this is just one of many examples that show how the functional logics"
},
{
"start": 2185.66,
"end": 2189.8599999999997,
"text": " of a given technology echo the gender and racial dynamics of the industry that produced"
},
{
"start": 2189.8599999999997,
"end": 2190.8599999999997,
"text": " it here."
},
{
"start": 2190.8599999999997,
"end": 2194.66,
"text": " Yeah, this, that's the claim they're making to echo the gender and racial dynamics."
},
{
"start": 2194.66,
"end": 2200.18,
"text": " And they're actually making a stronger claim, namely a causal claim."
},
{
"start": 2200.18,
"end": 2205.8199999999997,
"text": " They give the other example of the Amazon's recognition facial analysis service previously"
},
{
"start": 2205.8199999999997,
"end": 2210.54,
"text": " demonstrated gender and racial biases worse than those of comparable tools."
},
{
"start": 2210.54,
"end": 2215.94,
"text": " So it failed to see dark skinned women while being most proficient at detecting likes light"
},
{
"start": 2215.94,
"end": 2218.42,
"text": " skinned men."
},
{
"start": 2218.42,
"end": 2224.5,
"text": " And they later go into this example again, where they basically also state yes, this"
},
{
"start": 2224.5,
"end": 2231.3,
"text": " is an issue of the data set, the data set being much more comprised of white men."
},
{
"start": 2231.3,
"end": 2236.02,
"text": " And they say, but then they have to kind of make the turnaround argument and say, well,"
},
{
"start": 2236.02,
"end": 2242.82,
"text": " the data set is a reflection of society and society, you know, part of society is the"
},
{
"start": 2242.82,
"end": 2243.82,
"text": " workforce."
},
{
"start": 2243.82,
"end": 2248.78,
"text": " And it's just not, I mean, it's again, this argument only works if you already believe"
},
{
"start": 2248.78,
"end": 2249.78,
"text": " the conclusion."
},
{
"start": 2249.78,
"end": 2257.14,
"text": " Otherwise, there's actually no argument there or no solid one."
},
{
"start": 2257.14,
"end": 2262.72,
"text": " But what they do here is they say Amazon's initial response to such criticism has been"
},
{
"start": 2262.72,
"end": 2267.7,
"text": " to try and discredit the research behind it."
},
{
"start": 2267.7,
"end": 2270.8799999999997,
"text": " This reaction, or let's let's first discuss this."
},
{
"start": 2270.8799999999997,
"end": 2278.02,
"text": " So the Amazon, yeah, Amazon, of course, being the accused here and a multi billion dollar"
},
{
"start": 2278.02,
"end": 2283.8999999999996,
"text": " company and the criticism is something that is PR wise very bad for them."
},
{
"start": 2283.8999999999996,
"end": 2289.2999999999997,
"text": " They discredit the research tried to discredit the research behind it."
},
{
"start": 2289.3,
"end": 2292.7400000000002,
"text": " It's understandable that this could be dishonest from Amazon side, right?"
},
{
"start": 2292.7400000000002,
"end": 2293.7400000000002,
"text": " I mean, they're getting attacked."
},
{
"start": 2293.7400000000002,
"end": 2297.82,
"text": " It's like, you know, the tobacco companies trying to discredit the smoking research,"
},
{
"start": 2297.82,
"end": 2300.5800000000004,
"text": " but still, I mean, that doesn't mean it's wrong."
},
{
"start": 2300.5800000000004,
"end": 2303.98,
"text": " It could actually be bad research, right?"
},
{
"start": 2303.98,
"end": 2308.5800000000004,
"text": " You have to actually go and look at what's Amazon saying, what is the research really"
},
{
"start": 2308.5800000000004,
"end": 2309.5800000000004,
"text": " doing?"
},
{
"start": 2309.5800000000004,
"end": 2313.54,
"text": " Is Amazon right or wrong?"
},
{
"start": 2313.54,
"end": 2317.5,
"text": " Completely open that Amazon is wrong here, but you still have to go look."
},
{
"start": 2317.5,
"end": 2321.1,
"text": " And this citation here, I've tried this citation here."
},
{
"start": 2321.1,
"end": 2324.94,
"text": " This one isn't to a to Amazon's response."
},
{
"start": 2324.94,
"end": 2330.94,
"text": " It's to like a medium article and the medium article doesn't even include Amazon's response."
},
{
"start": 2330.94,
"end": 2332.86,
"text": " I've looked, maybe I haven't seen it."
},
{
"start": 2332.86,
"end": 2335.98,
"text": " It doesn't also doesn't link Amazon's response."
},
{
"start": 2335.98,
"end": 2340.46,
"text": " Maybe it links something that links something or that includes it in some way."
},
{
"start": 2340.46,
"end": 2346.58,
"text": " But basically this medium article only states, yeah, Amazon has been denying this or Amazon"
},
{
"start": 2346.58,
"end": 2348.74,
"text": " has been critical of this."
},
{
"start": 2348.74,
"end": 2353.94,
"text": " And if you state such a sentence, Amazon's initial response to such criticism has been"
},
{
"start": 2353.94,
"end": 2355.7799999999997,
"text": " to try and discredit the research behind it."
},
{
"start": 2355.7799999999997,
"end": 2362.7799999999997,
"text": " I at least expect the citation to lead me to Amazon's response so that I can verify what"
},
{
"start": 2362.7799999999997,
"end": 2363.7799999999997,
"text": " they're saying."
},
{
"start": 2363.7799999999997,
"end": 2364.7799999999997,
"text": " Right."
},
{
"start": 2364.7799999999997,
"end": 2373.98,
"text": " So this, I mean, I don't know, willing to chalk it up to incompetence rather than malice."
},
{
"start": 2373.98,
"end": 2381.5,
"text": " Right, but then they go on and they say this reaction is evidence of the wider problem."
},
{
"start": 2381.5,
"end": 2387.82,
"text": " The research was conducted by two well-regarded AI researchers who are women of color."
},
{
"start": 2387.82,
"end": 2393.1,
"text": " By attempting to publicly discredit their expertise and research methods, Amazon is"
},
{
"start": 2393.1,
"end": 2398.14,
"text": " reinforcing the same kinds of prejudice and derasers that the research critiques."
},
{
"start": 2398.14,
"end": 2403.34,
"text": " Yeah, here you go straight to the identity of the researchers."
},
{
"start": 2403.34,
"end": 2405.98,
"text": " Like play the race card straight out."
},
{
"start": 2405.98,
"end": 2409.54,
"text": " I mean, this is maximum dishonesty, right?"
},
{
"start": 2409.54,
"end": 2415.1800000000003,
"text": " Except if Amazon said something like, well, these women of color, clearly because they're"
},
{
"start": 2415.1800000000003,
"end": 2419.06,
"text": " women of color, they have no idea what they're doing or something like this."
},
{
"start": 2419.06,
"end": 2425.2200000000003,
"text": " This is basically it's coded language for saying either saying you're not allowed to"
},
{
"start": 2425.22,
"end": 2433.74,
"text": " criticize people of color because they're a minority or you're basically saying Amazon"
},
{
"start": 2433.74,
"end": 2437.8999999999996,
"text": " is racist and that's why they criticize them."
},
{
"start": 2437.8999999999996,
"end": 2440.98,
"text": " They just don't take them seriously because they're women of color."
},
{
"start": 2440.98,
"end": 2443.7599999999998,
"text": " I mean, both are both are abhorrent."
},
{
"start": 2443.7599999999998,
"end": 2448.2999999999997,
"text": " This is just dishonesty really stated here too."
},
{
"start": 2448.2999999999997,
"end": 2454.22,
"text": " I mean, again, I'm perfectly willing to accept that Amazon's critique of this research is"
},
{
"start": 2454.22,
"end": 2460.2999999999997,
"text": " wrong and is not well intended because they're the ones attacked, but you still have to examine"
},
{
"start": 2460.2999999999997,
"end": 2468.4199999999996,
"text": " it rather than say, well, they shoot against women of color and therefore somehow that"
},
{
"start": 2468.4199999999996,
"end": 2474.5,
"text": " makes their counter argument irrelevant or even racist or something."
},
{
"start": 2474.5,
"end": 2476.1,
"text": " That's I don't know."
},
{
"start": 2476.1,
"end": 2477.8999999999996,
"text": " I find this dishonest."
},
{
"start": 2477.8999999999996,
"end": 2483.58,
"text": " Yeah, I don't know about you."
},
{
"start": 2483.58,
"end": 2485.5,
"text": " Moving on."
},
{
"start": 2485.5,
"end": 2496.42,
"text": " So they go on and state a number of examples of bias and discrimination in the workforce"
},
{
"start": 2496.42,
"end": 2504.46,
"text": " and they a lot of times they make a mixture of the gender and race imbalance in workforce"
},
{
"start": 2504.46,
"end": 2512.02,
"text": " and things like sexual harassment not being taken seriously by the companies and also"
},
{
"start": 2512.02,
"end": 2521.94,
"text": " the things like gender or race pay gaps, which I'm open to accept that these things exist"
},
{
"start": 2521.94,
"end": 2525.34,
"text": " and are even intertwined."
},
{
"start": 2525.34,
"end": 2530.34,
"text": " But just to tell you what's happening because we're kind of skipping but it's kind of a"
},
{
"start": 2530.34,
"end": 2532.62,
"text": " mixture of these things."
},
{
"start": 2532.62,
"end": 2535.46,
"text": " So they say these issues are systemic."
},
{
"start": 2535.46,
"end": 2539.94,
"text": " There's a close relationship between these workplaces with discriminatory practices and"
},
{
"start": 2539.94,
"end": 2546.7000000000003,
"text": " discriminatory tools, a feedback loop that is shaping the industry and its tools."
},
{
"start": 2546.7000000000003,
"end": 2552.06,
"text": " So again here to state, I think I've stated it enough now that or demonstrated enough"
},
{
"start": 2552.06,
"end": 2558.2200000000003,
"text": " that I'm really representing their arguments as they intended it to namely that there is"
},
{
"start": 2558.2200000000003,
"end": 2564.46,
"text": " this kind of causal links and loop between these two things."
},
{
"start": 2564.46,
"end": 2572.06,
"text": " And they shoot against the fairness literature by saying from this perspective, locating"
},
{
"start": 2572.06,
"end": 2577.94,
"text": " individual biases within given technical systems and attempting to fix them by tweaking the"
},
{
"start": 2577.94,
"end": 2582.94,
"text": " system becomes an exercise in futility."
},
{
"start": 2582.94,
"end": 2587.02,
"text": " Only by examining discrimination through the lens of social logics, who it benefits, who"
},
{
"start": 2587.02,
"end": 2592.18,
"text": " it harms and how can we see the workings of these systems in the context of existing power"
},
{
"start": 2592.18,
"end": 2593.18,
"text": " relationships."
},
{
"start": 2593.18,
"end": 2599.7,
"text": " So they say these issues aren't technically fixing these systems won't help."
},
{
"start": 2599.7,
"end": 2600.7,
"text": " If that's the problem."
},
{
"start": 2600.7,
"end": 2607.62,
"text": " And I agree, if that causal link actually exists, then technically fixing the system"
},
{
"start": 2607.62,
"end": 2608.8999999999996,
"text": " might not solve the problem."
},
{
"start": 2608.8999999999996,
"end": 2609.8999999999996,
"text": " Not even sure."
},
{
"start": 2609.8999999999996,
"end": 2615.58,
"text": " I mean, if you technically fix a system like this, then you technically break the causal"
},
{
"start": 2615.58,
"end": 2617.7,
"text": " link and thereby fix the problem."
},
{
"start": 2617.7,
"end": 2624.1,
"text": " I would not sure, but again, this is based on the hypothesis that they've already reached,"
},
{
"start": 2624.1,
"end": 2630.3399999999997,
"text": " like demonstrated their, their conclusion, which they haven't and which they are not"
},
{
"start": 2630.3399999999997,
"end": 2632.8599999999997,
"text": " in the entire article."
},
{
"start": 2632.8599999999997,
"end": 2641.2999999999997,
"text": " Yeah, so the next section goes into who makes AI so I don't know about you, but this section"
},
{
"start": 2641.3,
"end": 2648.1000000000004,
"text": " was titled how workforces and AI systems interact."
},
{
"start": 2648.1000000000004,
"end": 2655.34,
"text": " And apart from one, the AI system being used for hiring the workforce, which is said this"
},
{
"start": 2655.34,
"end": 2662.9,
"text": " one instance where actually there could be one causal direction from bias to different"
},
{
"start": 2662.9,
"end": 2664.78,
"text": " misrepresentation the workforce."
},
{
"start": 2664.78,
"end": 2671.38,
"text": " Other than that, there isn't really anything in there that really shows how these two interact,"
},
{
"start": 2671.38,
"end": 2673.46,
"text": " especially in a in a causal way."
},
{
"start": 2673.46,
"end": 2682.82,
"text": " Alright, the next section is called who makes AI is broadly about the about the gender and"
},
{
"start": 2682.82,
"end": 2688.6200000000003,
"text": " race imbalances or miss not unequal representation in the workforce."
},
{
"start": 2688.62,
"end": 2698.2599999999998,
"text": " And we're going to skip this diversity statistics that kind of that discuss that diversity statistics"
},
{
"start": 2698.2599999999998,
"end": 2706.54,
"text": " of companies aren't really accurate, or can be, you know, massaged kind of by the companies,"
},
{
"start": 2706.54,
"end": 2709.9,
"text": " which you know, is true."
},
{
"start": 2709.9,
"end": 2714.46,
"text": " Definitely companies will always try to maximize their profits."
},
{
"start": 2714.46,
"end": 2722.62,
"text": " And even if they give out such a report, so that definitely critical thinking is in order."
},
{
"start": 2722.62,
"end": 2729.5,
"text": " Alright, so the next section is called the discrimination feedback loop."
},
{
"start": 2729.5,
"end": 2734.18,
"text": " Right, if so if in the earlier section, you felt like here we go into the meat, then you"
},
{
"start": 2734.18,
"end": 2740.78,
"text": " must feel with this title, like, okay, we're actually going to see how this loop works"
},
{
"start": 2740.78,
"end": 2748.7000000000003,
"text": " and how the two things are really linked, like how one causes the other and vice versa."
},
{
"start": 2748.7000000000003,
"end": 2750.02,
"text": " So let's jump in."
},
{
"start": 2750.02,
"end": 2758.38,
"text": " They say AI systems increasingly play a role in our social and political institutions,"
},
{
"start": 2758.38,
"end": 2762.2200000000003,
"text": " including education, healthcare, hiring, criminal justice."
},
{
"start": 2762.2200000000003,
"end": 2769.38,
"text": " Yes, therefore, we need to consider the relationship between the workplace diversity crisis and"
},
{
"start": 2769.38,
"end": 2774.06,
"text": " the problems with bias and discrimination in AI systems."
},
{
"start": 2774.06,
"end": 2783.94,
"text": " No, why I don't see how therefore, but yeah, so I don't see how therefore we need to consider"
},
{
"start": 2783.94,
"end": 2784.94,
"text": " the relationship."
},
{
"start": 2784.94,
"end": 2789.58,
"text": " Okay, if there is a relationship, we need to consider whether there's a relationship."
},
{
"start": 2789.58,
"end": 2792.38,
"text": " Okay, granted."
},
{
"start": 2792.38,
"end": 2797.1600000000003,
"text": " So they say fairness, accountability and transparency research is playing an emerging role."
},
{
"start": 2797.16,
"end": 2802.62,
"text": " Now what they mean here is the aspect of fairness, accountability and transparency research that"
},
{
"start": 2802.62,
"end": 2804.3799999999997,
"text": " shows that there is a problem."
},
{
"start": 2804.3799999999997,
"end": 2809.5,
"text": " So I told you there's two sides, one side is showing there is a problem in current systems"
},
{
"start": 2809.5,
"end": 2811.42,
"text": " and the other side is trying to fix them."
},
{
"start": 2811.42,
"end": 2818.46,
"text": " So they're very much fans of the side that shows that there is a problem and they use"
},
{
"start": 2818.46,
"end": 2823.94,
"text": " show some of these problems here, we've already seen some but they show some more like Facebook's"
},
{
"start": 2823.94,
"end": 2828.98,
"text": " ad delivery systems let users to be shown as for housing and employment in a discriminatory"
},
{
"start": 2828.98,
"end": 2829.98,
"text": " manner."
},
{
"start": 2829.98,
"end": 2836.9,
"text": " So giving 2019 study found significant racial bias in a widely used commercial algorithm"
},
{
"start": 2836.9,
"end": 2843.02,
"text": " used to determine whether patients will be enrolled in care management programs."
},
{
"start": 2843.02,
"end": 2855.1,
"text": " So these are these are just examples of these AI systems being biased."
},
{
"start": 2855.1,
"end": 2861.02,
"text": " So they go into this say taking a contextualized view may enable more extensive account and"
},
{
"start": 2861.02,
"end": 2866.86,
"text": " the contextualized view they when they say this they mean anything more than just a technical"
},
{
"start": 2866.86,
"end": 2870.02,
"text": " approach at solving these problems."
},
{
"start": 2870.02,
"end": 2874.62,
"text": " More extensive account of bias to emerge future work could examine the politics of system"
},
{
"start": 2874.62,
"end": 2881.58,
"text": " design study how AI systems in situated reality and study AI systems in situated realities"
},
{
"start": 2881.58,
"end": 2888.18,
"text": " ask why a system was designed in a particular way, how it was constructed, whose interest"
},
{
"start": 2888.18,
"end": 2894.34,
"text": " it shaped shaped by the metrics in which its success or failure is assessed, rather than"
},
{
"start": 2894.34,
"end": 2898.9,
"text": " solely focusing on improving existing data sets or individual algorithms."
},
{
"start": 2898.9,
"end": 2901.02,
"text": " Yeah, I agree."
},
{
"start": 2901.02,
"end": 2906.46,
"text": " I mean, we always have to we always have to pay attention to these things, especially"
},
{
"start": 2906.46,
"end": 2913.46,
"text": " like looking at the metrics by which its success or failure is assessed."
},
{
"start": 2913.46,
"end": 2922.1,
"text": " But a lot of times this is this is rather straightforward in kind of if you look at"
},
{
"start": 2922.1,
"end": 2929.06,
"text": " the metric, the metric most often, especially in commercial applications is money, right?"
},
{
"start": 2929.06,
"end": 2936.62,
"text": " So the metric of like an ad showing system, like if I have a system to recommend ads to"
},
{
"start": 2936.62,
"end": 2943.7599999999998,
"text": " people, show people ads and personalize them and so on, I simply want to maximize my revenue."
},
{
"start": 2943.7599999999998,
"end": 2946.7,
"text": " So I want to sell someone something."
},
{
"start": 2946.7,
"end": 2952.8199999999997,
"text": " And everything I want to know is how likely is it that person is going to buy that thing?"
},
{
"start": 2952.8199999999997,
"end": 2953.8199999999997,
"text": " Right?"
},
{
"start": 2953.8199999999997,
"end": 2956.7799999999997,
"text": " I that's basically Yeah."
},
{
"start": 2956.7799999999997,
"end": 2965.7599999999998,
"text": " So in essence, sometimes it's really valuable to consider what capitalism is."
},
{
"start": 2965.7599999999998,
"end": 2975.2999999999997,
"text": " So in capitalism in so capitalism, these kind of this system we're working on is kind of"
},
{
"start": 2975.3,
"end": 2980.1000000000004,
"text": " a form of limited capitalism, but mostly mostly capitalism."
},
{
"start": 2980.1000000000004,
"end": 2984.3,
"text": " And capitalism is very greedy."
},
{
"start": 2984.3,
"end": 2990.42,
"text": " So capitalism, all corporations want to do basically is make money."
},
{
"start": 2990.42,
"end": 2998.02,
"text": " And that is and on the other side, you have discrimination."
},
{
"start": 2998.02,
"end": 3004.76,
"text": " So discrimination meaning these unequal represent like unequal distribution actively."
},
{
"start": 3004.76,
"end": 3009.4,
"text": " So and often sometimes these go hand in hand, sometimes you can make more money by discriminating"
},
{
"start": 3009.4,
"end": 3010.82,
"text": " against a certain type of people."
},
{
"start": 3010.82,
"end": 3013.26,
"text": " And that's, that's a really bad scenario."
},
{
"start": 3013.26,
"end": 3018.5200000000004,
"text": " Like that's a very, like, this is really something where we need to take action."
},
{
"start": 3018.5200000000004,
"end": 3025.9,
"text": " But a lot of times, a lot of times, these two things stand in opposition to each other."
},
{
"start": 3025.9,
"end": 3030.78,
"text": " So little arrow here, non compatible."
},
{
"start": 3030.78,
"end": 3041.82,
"text": " That means if I want to sell someone something, then I maximize my profit by not caring by"
},
{
"start": 3041.82,
"end": 3047.42,
"text": " accurately assessing how likely is it that person buys that thing."
},
{
"start": 3047.42,
"end": 3053.2200000000003,
"text": " If I want to discriminate here, if I want to discriminate, start discriminating, according"
},
{
"start": 3053.2200000000003,
"end": 3059.76,
"text": " to skin color saying like, No, I don't like that this person with the skin color is able"
},
{
"start": 3059.76,
"end": 3065.2200000000003,
"text": " to buy this product, I want to kind of keep them down, and so on, then I forgo profit,"
},
{
"start": 3065.2200000000003,
"end": 3073.1400000000003,
"text": " right, then I actually, even though this person could buy this thing, I forego that."
},
{
"start": 3073.1400000000003,
"end": 3077.6200000000003,
"text": " So often these things are in direct opposition to each other."
},
{
"start": 3077.6200000000003,
"end": 3084.1000000000004,
"text": " Also, if I am in charge of hiring, and I don't like people of a certain gender, but they"
},
{
"start": 3084.1000000000004,
"end": 3088.94,
"text": " would actually be really, really good, whatever, good employees."
},
{
"start": 3088.94,
"end": 3097.7000000000003,
"text": " So I forgo that, that means I'm getting a pay more for less qualified people just because"
},
{
"start": 3097.7000000000003,
"end": 3107.32,
"text": " I'm biased and I'm down ranking unjustifiably, these people of the gender I don't like."
},
{
"start": 3107.32,
"end": 3115.92,
"text": " So oftentimes, you have to ask yourself, are people fundamentally greedy, or discriminatory?"
},
{
"start": 3115.92,
"end": 3116.92,
"text": " Which are they more?"
},
{
"start": 3116.92,
"end": 3120.2200000000003,
"text": " If push comes to shove, would they rather have more money?"
},
{
"start": 3120.2200000000003,
"end": 3127.26,
"text": " Or would they rather keep their own race and gender group in power?"
},
{
"start": 3127.26,
"end": 3133.94,
"text": " And with just, yeah, so the and you have to ask this of corporations, you have to ask"
},
{
"start": 3133.94,
"end": 3135.7400000000002,
"text": " this of people."
},
{
"start": 3135.7400000000002,
"end": 3144.58,
"text": " And in my experience and view, like people are much, much more greedy than they are willing"
},
{
"start": 3144.58,
"end": 3150.7799999999997,
"text": " to discriminate and give up money for discrimination."
},
{
"start": 3150.7799999999997,
"end": 3158.02,
"text": " And so if we look at metrics by which success or failure of AI systems are designed, then"
},
{
"start": 3158.02,
"end": 3165.66,
"text": " I would argue a lot of the times metrics are actually profit incentives."
},
{
"start": 3165.66,
"end": 3172.2599999999998,
"text": " And especially if we look at data set construction, if there is a skewed data set that makes my"
},
{
"start": 3172.26,
"end": 3178.38,
"text": " AI system be biased, that actually loses me money and the company would profit a lot from"
},
{
"start": 3178.38,
"end": 3180.0600000000004,
"text": " building a better data set."
},
{
"start": 3180.0600000000004,
"end": 3186.38,
"text": " So looking at kind of metrics actually makes a lot of sense to me and very much in favor"
},
{
"start": 3186.38,
"end": 3187.78,
"text": " of that."
},
{
"start": 3187.78,
"end": 3192.84,
"text": " And I think by designing accurate metrics and then getting the best possible information,"
},
{
"start": 3192.84,
"end": 3198.5800000000004,
"text": " the best possible data sets to maximize these metrics will oftentimes actually eliminate"
},
{
"start": 3198.5800000000004,
"end": 3199.98,
"text": " such forms of discrimination."
},
{
"start": 3199.98,
"end": 3205.5,
"text": " Again, there are situations where they don't, we have to be very cognizant of these."
},
{
"start": 3205.5,
"end": 3211.7,
"text": " They go into this and they say, also examine more thoroughly how societal discrimination"
},
{
"start": 3211.7,
"end": 3217.3,
"text": " surfaces in data provenance, examining the history and process of data set construction"
},
{
"start": 3217.3,
"end": 3221.3,
"text": " and considering how cultural norms and stereotypes were enumerated and represented at the time"
},
{
"start": 3221.3,
"end": 3222.44,
"text": " of data creation."
},
{
"start": 3222.44,
"end": 3223.62,
"text": " This is a big issue."
},
{
"start": 3223.62,
"end": 3224.62,
"text": " Yes."
},
{
"start": 3224.62,
"end": 3230.3399999999997,
"text": " The data set construction kind of at the time of data creation and so on, this is a big"
},
{
"start": 3230.3399999999997,
"end": 3232.62,
"text": " issue in these systems and a lot of bias."
},
{
"start": 3232.62,
"end": 3238.02,
"text": " And I would argue most of the bias we've seen here arises from corrupt data sets and from"
},
{
"start": 3238.02,
"end": 3241.42,
"text": " data sets that were constructed in an already biased way."
},
{
"start": 3241.42,
"end": 3247.38,
"text": " And the AI system trained on these data sets simply replicates this bias."
},
{
"start": 3247.38,
"end": 3252.74,
"text": " So I think that's very correct here."
},
{
"start": 3252.74,
"end": 3258.74,
"text": " They go into this example, they say the labeled faces in the wild data set contains over 15,000"
},
{
"start": 3258.74,
"end": 3259.8599999999997,
"text": " images."
},
{
"start": 3259.8599999999997,
"end": 3262.8999999999996,
"text": " Only 7% of images are of black people."
},
{
"start": 3262.8999999999996,
"end": 3270.54,
"text": " This is because these, the media landscape of the early 2000s, these images were gathered"
},
{
"start": 3270.54,
"end": 3275.3799999999997,
"text": " from the news media at the time, predominantly featured white men in positions of celebrity"
},
{
"start": 3275.3799999999997,
"end": 3276.9799999999996,
"text": " and power."
},
{
"start": 3276.9799999999996,
"end": 3278.9399999999996,
"text": " This exactly."
},
{
"start": 3278.94,
"end": 3284.86,
"text": " So if you train a system on this data set, the system will inherit this bias."
},
{
"start": 3284.86,
"end": 3290.14,
"text": " Yeah, so this is a classic example of a corrupt data set."
},
{
"start": 3290.14,
"end": 3293.38,
"text": " Also this isn't only with race and gender."
},
{
"start": 3293.38,
"end": 3299.82,
"text": " This is also if you like take pictures from IMDB, yes, a lot of this currently Celeb A"
},
{
"start": 3299.82,
"end": 3304.2200000000003,
"text": " data set that is used in all the GAN research is collected from IMDB."
},
{
"start": 3304.22,
"end": 3311.4599999999996,
"text": " You probably have overly beautiful, like pretty face people on there."
},
{
"start": 3311.4599999999996,
"end": 3316.06,
"text": " So that your AI system, your generative model is only going to produce mostly pretty face"
},
{
"start": 3316.06,
"end": 3324.04,
"text": " people, since movie stars tend to be a lot prettier than the average humans."
},
{
"start": 3324.04,
"end": 3332.22,
"text": " So that the kind of data set construction process, I think is currently the biggest"
},
{
"start": 3332.22,
"end": 3335.1,
"text": " source of bias in AI."
},
{
"start": 3335.1,
"end": 3339.18,
"text": " But that also, it's interesting that they go into this here and they kind of want to"
},
{
"start": 3339.18,
"end": 3347.3399999999997,
"text": " make the point that this is because society and power in society, the data set reflects"
},
{
"start": 3347.3399999999997,
"end": 3348.3399999999997,
"text": " that."
},
{
"start": 3348.3399999999997,
"end": 3354.4599999999996,
"text": " But I would argue if someone makes a data set that doesn't have this bias, then the"
},
{
"start": 3354.4599999999996,
"end": 3355.8199999999997,
"text": " problem is solved."
},
{
"start": 3355.8199999999997,
"end": 3357.4599999999996,
"text": " And I don't care who makes the data set."
},
{
"start": 3357.46,
"end": 3363.14,
"text": " So the link between the workforce and the bias is really broken by an argument like"
},
{
"start": 3363.14,
"end": 3367.94,
"text": " this, because as soon as we have a correct data set, an unbiased data set, we can mitigate"
},
{
"start": 3367.94,
"end": 3368.94,
"text": " the bias."
},
{
"start": 3368.94,
"end": 3373.82,
"text": " And they even go, they go into this here."
},
{
"start": 3373.82,
"end": 3378.1,
"text": " They say, sorry."
},
{
"start": 3378.1,
"end": 3385.76,
"text": " Yeah, they say down here."
},
{
"start": 3385.76,
"end": 3393.38,
"text": " They say these people, these researchers have looked at these facial recognition systems"
},
{
"start": 3393.38,
"end": 3398.1000000000004,
"text": " and they assessed this what we saw earlier, higher error rates for darker skinned women"
},
{
"start": 3398.1000000000004,
"end": 3402.6200000000003,
"text": " than for any other group, lowest error rates for light skinned men."
},
{
"start": 3402.6200000000003,
"end": 3408.78,
"text": " To measure this disparity, these researchers developed a new data set that is more balanced,"
},
{
"start": 3408.78,
"end": 3411.5800000000004,
"text": " both in terms of gender and skin color."
},
{
"start": 3411.5800000000004,
"end": 3412.5800000000004,
"text": " Good."
},
{
"start": 3412.58,
"end": 3419.22,
"text": " Problem, like make a larger data set to actually train on and then problem solved."
},
{
"start": 3419.22,
"end": 3424.94,
"text": " And I don't care at all what race and what gender these people are."
},
{
"start": 3424.94,
"end": 3427.54,
"text": " Well done."
},
{
"start": 3427.54,
"end": 3432.38,
"text": " Good people make a good data set like this."
},
{
"start": 3432.38,
"end": 3434.14,
"text": " And then we've solved the problem."
},
{
"start": 3434.14,
"end": 3436.1,
"text": " What's the problem here?"
},
{
"start": 3436.1,
"end": 3443.46,
"text": " Why would you ever care what these people look like if they do good work?"
},
{
"start": 3443.46,
"end": 3447.9,
"text": " That's to me, this actually breaks their own argument."
},
{
"start": 3447.9,
"end": 3454.5,
"text": " I don't know why they included here."
},
{
"start": 3454.5,
"end": 3462.22,
"text": " To me that to then suggest that there is a link to the workforces, if here is obvious"
},
{
"start": 3462.22,
"end": 3470.22,
"text": " that if you fix the data set, you can fix the recognition system."
},
{
"start": 3470.22,
"end": 3483.2599999999998,
"text": " All right, so we'll go on here, jump a couple more paragraphs."
},
{
"start": 3483.2599999999998,
"end": 3489.66,
"text": " Except when they say they shoot again against this kind of say to this point, a focus on"
},
{
"start": 3489.66,
"end": 3494.18,
"text": " fixing technical systems in isolation without examining their broader context of use and"
},
{
"start": 3494.18,
"end": 3499.58,
"text": " power and dynamics that attends issues is not limited in its intervention, it can actively"
},
{
"start": 3499.58,
"end": 3501.02,
"text": " cause harm."
},
{
"start": 3501.02,
"end": 3506.58,
"text": " So if you fix the problem in a technical manner, they argue here it can actively cause harm."
},
{
"start": 3506.58,
"end": 3514.46,
"text": " And the example they give is that facial and image recognition systems, they are often"
},
{
"start": 3514.46,
"end": 3519.7400000000002,
"text": " applied in service of police surveillance, which disproportionately harms poor people"
},
{
"start": 3519.7400000000002,
"end": 3523.46,
"text": " and communities of color."
},
{
"start": 3523.46,
"end": 3530.78,
"text": " So there's a quote from this person that says, is this not social progress to make black"
},
{
"start": 3530.78,
"end": 3537.38,
"text": " people equally visible to software that will inevitably be further weaponized against us?"
},
{
"start": 3537.38,
"end": 3543.82,
"text": " We are considered criminal and more surveillable by orders of magnitude."
},
{
"start": 3543.82,
"end": 3548.98,
"text": " Whatever claim to a right of privacy that we may have is diminished by a state that"
},
{
"start": 3548.98,
"end": 3551.7000000000003,
"text": " believes we must always be watched and seen."
},
{
"start": 3551.7000000000003,
"end": 3557.02,
"text": " So this is an example where by improving the facial recognition for black people, it makes"
},
{
"start": 3557.02,
"end": 3559.94,
"text": " the police better at surveilling them, which is true."
},
{
"start": 3559.94,
"end": 3565.1400000000003,
"text": " And then it is an ethical problem that the police is able to use these facial recognition"
},
{
"start": 3565.1400000000003,
"end": 3566.7400000000002,
"text": " systems to surveil people."
},
{
"start": 3566.7400000000002,
"end": 3568.98,
"text": " That's a massive privacy problem."
},
{
"start": 3568.98,
"end": 3574.1,
"text": " That's a massive problem in how much the state is allowed to overreach and so on."
},
{
"start": 3574.1,
"end": 3581.38,
"text": " So I think it's a discussion in itself, but here they argue because at the very beginning"
},
{
"start": 3581.38,
"end": 3588.58,
"text": " I asked you to remember this whole notion of we always have to look at who benefits"
},
{
"start": 3588.58,
"end": 3595.82,
"text": " from the way the AI system is constructed, who is harmed from that, who benefits from"
},
{
"start": 3595.82,
"end": 3599.1400000000003,
"text": " how the metrics are shaped and so on."
},
{
"start": 3599.1400000000003,
"end": 3607.54,
"text": " In this case, we actually have a perfect example where if the face recognition system is very"
},
{
"start": 3607.54,
"end": 3615.26,
"text": " inaccurate for black people's faces, that actually helps them in the societal context."
},
{
"start": 3615.26,
"end": 3626.94,
"text": " So by logic of this report here, that must mean that somehow the bias works for them"
},
{
"start": 3626.94,
"end": 3630.78,
"text": " and thereby the system is good or something like this."
},
{
"start": 3630.78,
"end": 3632.86,
"text": " And by fixing it, you actually make it worse."
},
{
"start": 3632.86,
"end": 3635.6000000000004,
"text": " Yeah, they say it can actively cause harm."
},
{
"start": 3635.6000000000004,
"end": 3641.78,
"text": " So I think this is pretty much arguing against themselves earlier where they say, oh, we"
},
{
"start": 3641.78,
"end": 3645.42,
"text": " always have to look at who benefits from the system."
},
{
"start": 3645.42,
"end": 3652.7000000000003,
"text": " Yeah, here, if the face recognition system can't recognize you, you actually benefit."
},
{
"start": 3652.7000000000003,
"end": 3659.0600000000004,
"text": " So I don't think that argument works in any case except if you only look at it when you"
},
{
"start": 3659.0600000000004,
"end": 3662.42,
"text": " want to look at it."
},
{
"start": 3662.42,
"end": 3672.1,
"text": " All right, so we're going to jump a couple of sections here."
},
{
"start": 3672.1,
"end": 3677.06,
"text": " But the core thing here was the feedback loop."
},
{
"start": 3677.06,
"end": 3680.78,
"text": " And again, the feedback loop isn't demonstrated at all here."
},
{
"start": 3680.78,
"end": 3687.06,
"text": " Just examples of systems that are biased and of data sets that are biased, because of data"
},
{
"start": 3687.06,
"end": 3689.58,
"text": " sets that are biased."
},
{
"start": 3689.58,
"end": 3697.2999999999997,
"text": " But there's no demonstration of how the workforce, I mean, yeah, just take this previous argument."
},
{
"start": 3697.2999999999997,
"end": 3701.74,
"text": " So the workforce is supposedly supremely white."
},
{
"start": 3701.74,
"end": 3711.4,
"text": " And it makes a face recognition system that makes that is performing poorly for darker"
},
{
"start": 3711.4,
"end": 3713.86,
"text": " skinned people."
},
{
"start": 3713.86,
"end": 3718.44,
"text": " And that actually in this context of police surveillance helps the darker skinned people"
},
{
"start": 3718.44,
"end": 3721.18,
"text": " compared to the lighter skinned people."
},
{
"start": 3721.18,
"end": 3727.44,
"text": " So that kind of is an exact counterexample to the argument that this misrepresentation"
},
{
"start": 3727.44,
"end": 3732.56,
"text": " in the workforce leads to the biases in the system."
},
{
"start": 3732.56,
"end": 3738.62,
"text": " If we interpret it through the lens, who it costs and who it benefits."
},
{
"start": 3738.62,
"end": 3740.26,
"text": " All right."
},
{
"start": 3740.26,
"end": 3745.66,
"text": " So the next section is corporate diversity beyond the pipeline problem."
},
{
"start": 3745.66,
"end": 3750.7799999999997,
"text": " And this is kind of an odd inclusion when I read it first to interpret to go against"
},
{
"start": 3750.7799999999997,
"end": 3754.14,
"text": " the pipeline problem here."
},
{
"start": 3754.14,
"end": 3758.5,
"text": " But it kind of makes sense if you know what these people set out to do."
},
{
"start": 3758.5,
"end": 3765.2599999999998,
"text": " So what these people set out to do is to argue we must fix the workforce, right?"
},
{
"start": 3765.2599999999998,
"end": 3772.1,
"text": " We must fix the, we must hire more people of color, more women and so on, promote them"
},
{
"start": 3772.1,
"end": 3773.1,
"text": " more."
},
{
"start": 3773.1,
"end": 3778.14,
"text": " And they have a very much have a problem with this pipeline argument."
},
{
"start": 3778.14,
"end": 3780.62,
"text": " What the pipeline argument is, is the following."
},
{
"start": 3780.62,
"end": 3786.02,
"text": " So at the beginning, if you consider like the educational or career paths of people,"
},
{
"start": 3786.02,
"end": 3792.22,
"text": " then you have like 100% of people that's represented at this at the beginning, and then most of"
},
{
"start": 3792.22,
"end": 3794.02,
"text": " these people go through school."
},
{
"start": 3794.02,
"end": 3795.8199999999997,
"text": " So most of these go on."
},
{
"start": 3795.8199999999997,
"end": 3799.86,
"text": " This is kind of the area in here is the population."
},
{
"start": 3799.86,
"end": 3803.58,
"text": " And then some of them pursue higher education like some drop out."
},
{
"start": 3803.58,
"end": 3806.7000000000003,
"text": " So this gets a smaller amount."
},
{
"start": 3806.7000000000003,
"end": 3811.6200000000003,
"text": " So this is here, this is time and this is kind of volume of people."
},
{
"start": 3811.6200000000003,
"end": 3816.2200000000003,
"text": " And then very few go into computer science, right?"
},
{
"start": 3816.2200000000003,
"end": 3818.7400000000002,
"text": " And then even fewer go into AI."
},
{
"start": 3818.7400000000002,
"end": 3824.86,
"text": " So what you end up is just a tiny sliver of people that actually go into AI."
},
{
"start": 3824.86,
"end": 3831.3,
"text": " So this is called a pipeline, and we have various junctions here like where you would"
},
{
"start": 3831.3,
"end": 3835.54,
"text": " go into higher education, where you would choose your major in university, where you"
},
{
"start": 3835.54,
"end": 3844.34,
"text": " would go into a subfield of computer science, where the kind of volume of people drops significantly"
},
{
"start": 3844.34,
"end": 3846.7000000000003,
"text": " from one point to the other."
},
{
"start": 3846.7000000000003,
"end": 3853.26,
"text": " And now if you compare this, if you compare this and use it say, we're not considered"
},
{
"start": 3853.26,
"end": 3858.7000000000003,
"text": " all of society, but here over here we'll call consider all just men and over here we'll"
},
{
"start": 3858.7000000000003,
"end": 3864.26,
"text": " consider all women again, they all go to high school and then university and then maybe"
},
{
"start": 3864.26,
"end": 3869.0200000000004,
"text": " very few go to CS, even fewer go to AI."
},
{
"start": 3869.0200000000004,
"end": 3874.94,
"text": " What you'll find is, and I've drawn it maybe wrong here, is that this is smaller than this."
},
{
"start": 3874.94,
"end": 3883.86,
"text": " So if you comparatively look at how many males end up in the AI field, you will find that"
},
{
"start": 3883.86,
"end": 3889.46,
"text": " fewer end up in more and will end up in our field than women."
},
{
"start": 3889.46,
"end": 3891.62,
"text": " If you comparatively look at it."
},
{
"start": 3891.62,
"end": 3902.9,
"text": " So at and this is over time, like at the beginning, you have 5050 main women distribution in society,"
},
{
"start": 3902.9,
"end": 3911.58,
"text": " almost I guess, I think slightly more boys are born, but I could be wrong about this."
},
{
"start": 3911.58,
"end": 3918.94,
"text": " And then as you go through time here, excuse that I believe."
},
{
"start": 3918.94,
"end": 3923.26,
"text": " So you go through high school and let's just assume like high school is still kind of equal,"
},
{
"start": 3923.26,
"end": 3924.92,
"text": " it depends on the country."
},
{
"start": 3924.92,
"end": 3932.2400000000002,
"text": " Then you go to university, where there's actually more women at university slightly."
},
{
"start": 3932.24,
"end": 3936.5,
"text": " And then you go into computer science and in computer science, and this is just relative"
},
{
"start": 3936.5,
"end": 3939.3799999999997,
"text": " here, that's why I kind of norm it at 100%."
},
{
"start": 3939.3799999999997,
"end": 3943.02,
"text": " Otherwise these things would go down all of them at the same time."
},
{
"start": 3943.02,
"end": 3950.1,
"text": " But comparatively, you have then much more men than women in computer science."
},
{
"start": 3950.1,
"end": 3956.4599999999996,
"text": " And then if you see who chooses AI, I don't know if there's any statistics of specifically"
},
{
"start": 3956.4599999999996,
"end": 3958.3399999999997,
"text": " choosing AI from computer science."
},
{
"start": 3958.3399999999997,
"end": 3961.3399999999997,
"text": " I'm just going to assume that remains the same."
},
{
"start": 3961.34,
"end": 3967.46,
"text": " So if you look into the AI field, kind of this, this will stay the same."
},
{
"start": 3967.46,
"end": 3971.82,
"text": " So in the AI field, you have much more men than women."
},
{
"start": 3971.82,
"end": 3978.38,
"text": " And presumably, because you already have much more men than women choosing computer science"
},
{
"start": 3978.38,
"end": 3985.1400000000003,
"text": " as their major or choosing any technical field as their major."
},
{
"start": 3985.1400000000003,
"end": 3987.82,
"text": " This is kind of the so called pipeline argument."
},
{
"start": 3987.82,
"end": 3990.58,
"text": " So where do AI companies hiring come in?"
},
{
"start": 3990.58,
"end": 3999.66,
"text": " AI companies come in here, they hire at this point, after your university degree, presumably."
},
{
"start": 3999.66,
"end": 4003.86,
"text": " There's exceptions, but just say they hire after your university degree."
},
{
"start": 4003.86,
"end": 4010.2599999999998,
"text": " And therefore, they basically have to choose from this distribution."
},
{
"start": 4010.2599999999998,
"end": 4015.1,
"text": " And if they just say, okay, we'll just take the top, I don't know, 10% people will hire"
},
{
"start": 4015.1,
"end": 4018.22,
"text": " the good people of this, we don't care what gender they are."
},
{
"start": 4018.22,
"end": 4026.7,
"text": " Right, so the top 10% here, the top 10% here, then this will end up being the same distribution"
},
{
"start": 4026.7,
"end": 4028.74,
"text": " as you have graduates."
},
{
"start": 4028.74,
"end": 4036.3799999999997,
"text": " Right, so this is kind of the company, company hiring from an let's say an 80 20 distribution"
},
{
"start": 4036.3799999999997,
"end": 4041.2599999999998,
"text": " without looking at gender will end up with an 80 20 distribution."
},
{
"start": 4041.2599999999998,
"end": 4045.02,
"text": " That's the pipeline argument of companies."
},
{
"start": 4045.02,
"end": 4049.7,
"text": " And they don't like the pipeline argument, because the pipeline argument basically says"
},
{
"start": 4049.7,
"end": 4052.58,
"text": " that the problem is somewhere here, right?"
},
{
"start": 4052.58,
"end": 4060.58,
"text": " The problem isn't the company's hiring wrongly."
},
{
"start": 4060.58,
"end": 4067.22,
"text": " The problem isn't that the company's here, deselected, the problem is somewhere here."
},
{
"start": 4067.22,
"end": 4070.7,
"text": " And because they want to make the argument that the company should hire in a different"
},
{
"start": 4070.7,
"end": 4073.36,
"text": " way, they can't have that."
},
{
"start": 4073.36,
"end": 4076.1,
"text": " So they argue against it."
},
{
"start": 4076.1,
"end": 4079.76,
"text": " Now to argue against this would actually be very easy."
},
{
"start": 4079.76,
"end": 4085.44,
"text": " If this argument were wrong, like they claim the argument is is is not good, the pipeline"
},
{
"start": 4085.44,
"end": 4087.58,
"text": " argument isn't good."
},
{
"start": 4087.58,
"end": 4092.52,
"text": " If the pipeline argument were wrong, what you'd have to do is you would have to say,"
},
{
"start": 4092.52,
"end": 4098.1,
"text": " you would have to say, hey, companies, look at that."
},
{
"start": 4098.1,
"end": 4105.22,
"text": " In your company, you have an 80 20 distribution men to women, right?"
},
{
"start": 4105.22,
"end": 4106.780000000001,
"text": " That's pretty unequal."
},
{
"start": 4106.780000000001,
"end": 4112.14,
"text": " And you know, in university graduates, the pool you choose from is actually 5050."
},
{
"start": 4112.14,
"end": 4118.740000000001,
"text": " So obviously, you're engaged in discriminatory hiring, because you know, the pool is 5050."
},
{
"start": 4118.740000000001,
"end": 4127.42,
"text": " There's no reason why it why your hiring practices should cause this inequality."
},
{
"start": 4127.42,
"end": 4132.12,
"text": " And therefore, we can clearly show you do discriminatory hiring, you should stop it,"
},
{
"start": 4132.12,
"end": 4136.42,
"text": " you should definitely hire more women and people of color, more of these more of the"
},
{
"start": 4136.42,
"end": 4141.82,
"text": " minorities, because your hiring practices are the problem."
},
{
"start": 4141.82,
"end": 4143,
"text": " But that's not the case."
},
{
"start": 4143,
"end": 4144.06,
"text": " How do I know?"
},
{
"start": 4144.06,
"end": 4146.9400000000005,
"text": " Because if it were the case, they would simply state this."
},
{
"start": 4146.9400000000005,
"end": 4151.7,
"text": " Definitely in this report, if that were the case, that you could actually show with numbers"
},
{
"start": 4151.7,
"end": 4156.14,
"text": " that the pipeline argument is wrong, then they would absolutely do this."
},
{
"start": 4156.14,
"end": 4163.1,
"text": " That they have to like, go back and they have to like, ramble around it for several pages,"
},
{
"start": 4163.1,
"end": 4170.58,
"text": " which will mostly skip but mainly because this is the case, it is the case that these"
},
{
"start": 4170.58,
"end": 4178.660000000001,
"text": " companies hire from a pool of of unequally represented people."
},
{
"start": 4178.66,
"end": 4187.0599999999995,
"text": " And the only argument that you can make is that, well, if if you were to equalize this"
},
{
"start": 4187.0599999999995,
"end": 4193.98,
"text": " here, then maybe here where the problem is that would fix like, so the argument is often"
},
{
"start": 4193.98,
"end": 4201.66,
"text": " made if young girls choosing their majors have no one to look up to, like no strong"
},
{
"start": 4201.66,
"end": 4208.94,
"text": " women in in corporation CEO roles, they will think that it's not a climate for women and"
},
{
"start": 4208.94,
"end": 4213.7,
"text": " they will elect not to go into these fields, which is a valid argument, like I'm completely"
},
{
"start": 4213.7,
"end": 4216.66,
"text": " open to that to that argument."
},
{
"start": 4216.66,
"end": 4218.58,
"text": " But it's the only argument you can make."
},
{
"start": 4218.58,
"end": 4225.58,
"text": " And still then, even if you determine this as the cause, I would still not support racist"
},
{
"start": 4225.58,
"end": 4231.58,
"text": " and sexist hiring practices like do something else like make them clear that the environment"
},
{
"start": 4231.58,
"end": 4238.1,
"text": " can be changed or change the environment, like change the if if it really is the case"
},
{
"start": 4238.1,
"end": 4245.3,
"text": " that it's kind of a non anti woman environment, change that."
},
{
"start": 4245.3,
"end": 4250.82,
"text": " If it's just the case that they perceive it as such change the perception, but do not"
},
{
"start": 4250.82,
"end": 4256.42,
"text": " engage in discriminatory hiring practices, because there's always someone losing out"
},
{
"start": 4256.42,
"end": 4258.22,
"text": " unfairly on these practices."
},
{
"start": 4258.22,
"end": 4266.58,
"text": " And that's, that's something I'm not willing to, to go into, like that's something I'm"
},
{
"start": 4266.58,
"end": 4267.66,
"text": " not willing to engage in."
},
{
"start": 4267.66,
"end": 4271.46,
"text": " And I don't think people should engage be engaging in that."
},
{
"start": 4271.46,
"end": 4273.9400000000005,
"text": " Actually, that's why it's illegal."
},
{
"start": 4273.9400000000005,
"end": 4278.72,
"text": " So let's, let's actually look at very few points."
},
{
"start": 4278.72,
"end": 4285.780000000001,
"text": " This is just why the so they claim they go kind of go over these pipeline studies."
},
{
"start": 4285.78,
"end": 4291.179999999999,
"text": " And they yeah, they say term used in industry to reference the absence of diverse candidates"
},
{
"start": 4291.179999999999,
"end": 4296.139999999999,
"text": " in the hiring pool of to justify the inability of large firms to achieve diversity due to"
},
{
"start": 4296.139999999999,
"end": 4297.139999999999,
"text": " scarcity."
},
{
"start": 4297.139999999999,
"end": 4298.139999999999,
"text": " Right?"
},
{
"start": 4298.139999999999,
"end": 4306.42,
"text": " So that's, they basically agree the of that on the definition that I stated here."
},
{
"start": 4306.42,
"end": 4311.259999999999,
"text": " So the companies that are challenged on their lack of diversity frequently site pipeline"
},
{
"start": 4311.259999999999,
"end": 4315.5,
"text": " studies as proof of the persistent challenge of finding enough women and people of color"
},
{
"start": 4315.5,
"end": 4316.82,
"text": " to hire."
},
{
"start": 4316.82,
"end": 4323.3,
"text": " Yes, and, and the yeah, but they say but the evidence suggests otherwise."
},
{
"start": 4323.3,
"end": 4328.5,
"text": " For example, in 2016, Facebook chief diversity officer wrote that it has become clear that"
},
{
"start": 4328.5,
"end": 4332.52,
"text": " at the most fundamental level, appropriate representation, technology or any other industry"
},
{
"start": 4332.52,
"end": 4337.1,
"text": " will depend upon more people having the opportunity to gain necessary skills through the public"
},
{
"start": 4337.1,
"end": 4338.42,
"text": " education system."
},
{
"start": 4338.42,
"end": 4341.7,
"text": " Well, yes, that's something I would agree."
},
{
"start": 4341.7,
"end": 4348.82,
"text": " And that's something clearly that addresses this region here."
},
{
"start": 4348.82,
"end": 4353.5199999999995,
"text": " Then and where the actual problem is happening."
},
{
"start": 4353.5199999999995,
"end": 4359.54,
"text": " So I would say that's a very, very good statement from the Facebook's chief diversity officer."
},
{
"start": 4359.54,
"end": 4364.82,
"text": " They say but as the Center for Investigative Reporting study of tech company diversity"
},
{
"start": 4364.82,
"end": 4371.66,
"text": " data found 91 large tech companies headquartered in Silicon Valley managed to hire higher percent"
},
{
"start": 4371.66,
"end": 4376.42,
"text": " of black, Latino and multiracial employees than Facebook that year."
},
{
"start": 4376.42,
"end": 4386.9,
"text": " Well, just if other just just because other companies employ racist and sexist hiring"
},
{
"start": 4386.9,
"end": 4392.98,
"text": " to improve their diversity numbers doesn't mean that Facebook has to do this."
},
{
"start": 4392.98,
"end": 4393.98,
"text": " Right?"
},
{
"start": 4393.98,
"end": 4401.54,
"text": " It it like just because other companies do this doesn't mean that it's a it's a it's"
},
{
"start": 4401.54,
"end": 4405.459999999999,
"text": " a good thing to do or that's how you should go about it."
},
{
"start": 4405.459999999999,
"end": 4413.66,
"text": " Facebook simply says like, if we want to hire without being racist or sexist, if we want"
},
{
"start": 4413.66,
"end": 4420.98,
"text": " to just hire the best people, then more of the best people have to be in the pipeline,"
},
{
"start": 4420.98,
"end": 4427.7,
"text": " like more people have to gain access to educational opportunities so we can then hire them."
},
{
"start": 4427.7,
"end": 4434.86,
"text": " Whereas these other companies probably make a big effort to say, well, even if you are"
},
{
"start": 4434.86,
"end": 4439.74,
"text": " not as educated, even if you're not as qualified as this other person will hire you because"
},
{
"start": 4439.74,
"end": 4441.98,
"text": " of your skin color."
},
{
"start": 4441.98,
"end": 4450.74,
"text": " I don't think that's that's an argument in that in the favor of what the report is claiming."
},
{
"start": 4450.74,
"end": 4455.58,
"text": " Like I don't think that that is evidence that the pipeline argument is invalid."
},
{
"start": 4455.58,
"end": 4462.66,
"text": " All right, so they go into core themes in pipeline research, and they do some they do"
},
{
"start": 4462.66,
"end": 4470.58,
"text": " some overview of the kind of pipeline research that often so sometimes the pipeline research"
},
{
"start": 4470.58,
"end": 4476.36,
"text": " examines why, why, for example, why women don't choose to go into computer science as"
},
{
"start": 4476.36,
"end": 4481.82,
"text": " much and sometimes they focus on what is their perception of the field, what was it, what"
},
{
"start": 4481.82,
"end": 4487.86,
"text": " is their perceptions of the stereotypes of the field, what is their perceptions of the"
},
{
"start": 4487.86,
"end": 4494.54,
"text": " kind of culture in the field, is it suited to them, what is their perception of how qualified"
},
{
"start": 4494.54,
"end": 4498.0199999999995,
"text": " they are for the field, and is that true, is that false, and so on."
},
{
"start": 4498.0199999999995,
"end": 4500.78,
"text": " So this research examines a whole variety of things."
},
{
"start": 4500.78,
"end": 4503.7,
"text": " And it's very interesting, actually, to read through this research."
},
{
"start": 4503.7,
"end": 4507.74,
"text": " I want to point out this here."
},
{
"start": 4507.74,
"end": 4512.62,
"text": " Other studies suggest that gender is correlated with a person's motivations for pursuing a"
},
{
"start": 4512.62,
"end": 4514.34,
"text": " career in the field."
},
{
"start": 4514.34,
"end": 4520.62,
"text": " Women and particularly women from low socioeconomic status or minority backgrounds are more likely"
},
{
"start": 4520.62,
"end": 4526.5,
"text": " to see computing as a versatile profession that provides an opportunity for secure employment,"
},
{
"start": 4526.5,
"end": 4529.74,
"text": " higher pay, and better social standing."
},
{
"start": 4529.74,
"end": 4535.3,
"text": " Moreover, their interests go beyond technical aspects of computing, focusing instead on"
},
{
"start": 4535.3,
"end": 4537.98,
"text": " the purpose and application of software."
},
{
"start": 4537.98,
"end": 4543.62,
"text": " However, such interests are often de-emphasized in computer science curricula, a price technical"
},
{
"start": 4543.62,
"end": 4550.98,
"text": " skill and its applicability to industrial settings above all else."
},
{
"start": 4550.98,
"end": 4556.76,
"text": " So I find this really interesting because it's basically saying that women have different"
},
{
"start": 4556.76,
"end": 4560.46,
"text": " interests than men on average."
},
{
"start": 4560.46,
"end": 4564.92,
"text": " That's basically saying that, which is almost heresy."
},
{
"start": 4564.92,
"end": 4570.9800000000005,
"text": " To say this in this context, people will come after you if you suggest something like this,"
},
{
"start": 4570.9800000000005,
"end": 4573.3,
"text": " and yet they're just stating it here."
},
{
"start": 4573.3,
"end": 4575.2,
"text": " Remember this for later."
},
{
"start": 4575.2,
"end": 4581.02,
"text": " This is really funny that they're like, yeah, the interests could be different for women"
},
{
"start": 4581.02,
"end": 4582.02,
"text": " than for men."
},
{
"start": 4582.02,
"end": 4589.46,
"text": " And we might have to adjust our curriculum to be more suited to these different interests."
},
{
"start": 4589.46,
"end": 4591.540000000001,
"text": " I mean, yeah."
},
{
"start": 4591.540000000001,
"end": 4593.540000000001,
"text": " I'm sure that's..."
},
{
"start": 4593.540000000001,
"end": 4600.42,
"text": " Yeah, as I said, you're like, usually this is forbidden to say."
},
{
"start": 4600.42,
"end": 4602.900000000001,
"text": " All right."
},
{
"start": 4602.900000000001,
"end": 4605.620000000001,
"text": " So they go on."
},
{
"start": 4605.62,
"end": 4618.46,
"text": " They say limitations of pipeline research, right?"
},
{
"start": 4618.46,
"end": 4627.099999999999,
"text": " These are fairly like common limitations, let's say, of studies in general, social science"
},
{
"start": 4627.099999999999,
"end": 4633.0199999999995,
"text": " studies, which I won't go into much."
},
{
"start": 4633.02,
"end": 4643.26,
"text": " Again, they state we have to examine..."
},
{
"start": 4643.26,
"end": 4646.38,
"text": " We don't only have to examine this, but the problem..."
},
{
"start": 4646.38,
"end": 4653.38,
"text": " They basically say the problem is actually the culture and the problem is actually the"
},
{
"start": 4653.38,
"end": 4659.620000000001,
"text": " perpetrators, where do I say?"
},
{
"start": 4659.62,
"end": 4664.78,
"text": " I don't remember where this is stated, but they again say we have to examine who benefits"
},
{
"start": 4664.78,
"end": 4671.7,
"text": " from its present construction, who is underserved within the current tech ecology, who benefits"
},
{
"start": 4671.7,
"end": 4676.62,
"text": " from its present construction, how these dynamics might be untangled, and so on."
},
{
"start": 4676.62,
"end": 4686.22,
"text": " So again, stating these kind of power relationships for the different groups, which I don't agree"
},
{
"start": 4686.22,
"end": 4689.22,
"text": " is in large part what's happening."
},
{
"start": 4689.22,
"end": 4696.22,
"text": " They say it's worth considering the scope of these studies and by and large, the recommendations"
},
{
"start": 4696.22,
"end": 4701.900000000001,
"text": " they issue are limited, targeted at the administrators of university computer science programs seeking"
},
{
"start": 4701.900000000001,
"end": 4704.02,
"text": " to broaden the diversity of their student body."
},
{
"start": 4704.02,
"end": 4708.96,
"text": " Yes, that's exactly where we saw the problem appears to be, right?"
},
{
"start": 4708.96,
"end": 4713.58,
"text": " So the reason they have a problem with these studies is that they actually focus on the"
},
{
"start": 4713.58,
"end": 4721.62,
"text": " point where this discrepancy appears to happen, because they want to claim that no, no, no,"
},
{
"start": 4721.62,
"end": 4732.18,
"text": " you should focus on a different point, namely hiring in these companies, hiring and promotion."
},
{
"start": 4732.18,
"end": 4737.74,
"text": " They say though important, so at least they acknowledge that that's an important problem."
},
{
"start": 4737.74,
"end": 4743.9,
"text": " This is a narrow frame through which potential solutions to barriers to inclusion."
},
{
"start": 4743.9,
"end": 4748.94,
"text": " It does not address the companies that hire computer science students, the peers responsible"
},
{
"start": 4748.94,
"end": 4753.82,
"text": " for promulgating stereotype views or engaging in hostile behavior or the broader social"
},
{
"start": 4753.82,
"end": 4758.58,
"text": " conditions that may influence students' success in computer science programs."
},
{
"start": 4758.58,
"end": 4762.179999999999,
"text": " Actually the research and even some of the examples they've included of this research"
},
{
"start": 4762.179999999999,
"end": 4764.0599999999995,
"text": " addresses all of this."
},
{
"start": 4764.06,
"end": 4773.580000000001,
"text": " But the research often addresses the kind of stereotypes and how the peers act and how"
},
{
"start": 4773.580000000001,
"end": 4781.740000000001,
"text": " the companies act and also how the companies hire and how people have something to look"
},
{
"start": 4781.740000000001,
"end": 4787.02,
"text": " forward to or nothing to look forward to and how that influences their decisions."
},
{
"start": 4787.02,
"end": 4792.1,
"text": " Yeah, again, they say the studies are frequently cited by those within corporate environments"
},
{
"start": 4792.1,
"end": 4796.5,
"text": " to justify their own lack of diversity as they situate the locus of change outside of"
},
{
"start": 4796.5,
"end": 4799.26,
"text": " the corporation itself."
},
{
"start": 4799.26,
"end": 4803.14,
"text": " As such pipeline studies are disproportionately emphasized as a part of the broader research"
},
{
"start": 4803.14,
"end": 4805.22,
"text": " agenda on diversity and technology."
},
{
"start": 4805.22,
"end": 4810.9800000000005,
"text": " Again, they state companies use this to get out and of course, like companies, of course"
},
{
"start": 4810.9800000000005,
"end": 4812.58,
"text": " they're going to use this to get out."
},
{
"start": 4812.58,
"end": 4814.58,
"text": " I mean, I agree at least with that."
},
{
"start": 4814.58,
"end": 4821.26,
"text": " I agree that companies are going to try to use this to get out of responsibility."
},
{
"start": 4821.26,
"end": 4822.26,
"text": " Certainly."
},
{
"start": 4822.26,
"end": 4823.26,
"text": " All right."
},
{
"start": 4823.26,
"end": 4831.62,
"text": " So the last section here is the pipeline dreams after years of research."
},
{
"start": 4831.62,
"end": 4833.820000000001,
"text": " Again this is on this pipeline studies."
},
{
"start": 4833.820000000001,
"end": 4843.74,
"text": " Basically they say the pipeline research hasn't shown, like hasn't borne fruit."
},
{
"start": 4843.74,
"end": 4850.780000000001,
"text": " It hasn't led to meaningful change in the field even though we've researched this."
},
{
"start": 4850.78,
"end": 4855.139999999999,
"text": " The reason they say the number of reasons they tend to place the owners to solve issues"
},
{
"start": 4855.139999999999,
"end": 4859.86,
"text": " of discrimination, Silicon Valley on those who are discriminated against rather than"
},
{
"start": 4859.86,
"end": 4860.86,
"text": " the perpetrators."
},
{
"start": 4860.86,
"end": 4863.86,
"text": " I find this word choice really interesting."
},
{
"start": 4863.86,
"end": 4865.5,
"text": " Perpetrators, right?"
},
{
"start": 4865.5,
"end": 4871.94,
"text": " Like again, the group of white men is trying to put down everyone else."
},
{
"start": 4871.94,
"end": 4874.9,
"text": " That's the perspective that the article takes."
},
{
"start": 4874.9,
"end": 4879.139999999999,
"text": " And it's not even true."
},
{
"start": 4879.14,
"end": 4886.22,
"text": " This research, a lot of times it actually says the reason why, for example, women don't"
},
{
"start": 4886.22,
"end": 4892.54,
"text": " choose to go into computer science is the male dominated culture within these corporations,"
},
{
"start": 4892.54,
"end": 4901.860000000001,
"text": " is the perception of this not being a woman friendly environment, is the people here of"
},
{
"start": 4901.860000000001,
"end": 4903.54,
"text": " sexual harassment and so on."
},
{
"start": 4903.54,
"end": 4905.46,
"text": " So it's not even true."
},
{
"start": 4905.46,
"end": 4910.34,
"text": " But moreover, I just wanted to point out the choice of word here, perpetrators."
},
{
"start": 4910.34,
"end": 4917.9800000000005,
"text": " I don't know how you get to this word."
},
{
"start": 4917.9800000000005,
"end": 4924.86,
"text": " It really shows kind of a worldview of the authors in my opinion."
},
{
"start": 4924.86,
"end": 4927.22,
"text": " All right."
},
{
"start": 4927.22,
"end": 4933.22,
"text": " So they go on and say, okay, this pipeline studies haven't been beneficial and companies"
},
{
"start": 4933.22,
"end": 4937.26,
"text": " haven't done much or hasn't been successful."
},
{
"start": 4937.26,
"end": 4943.14,
"text": " They're going to worker led initiatives, which I'm going to skip here."
},
{
"start": 4943.14,
"end": 4950.26,
"text": " It's just a kind of a reporting of what happened at companies where the workers themselves"
},
{
"start": 4950.26,
"end": 4951.46,
"text": " organized."
},
{
"start": 4951.46,
"end": 4955.9400000000005,
"text": " And then the last section here is the pushback against diversity."
},
{
"start": 4955.94,
"end": 4963.379999999999,
"text": " So in this section, they're kind of documenting and arguing against people who have basically"
},
{
"start": 4963.379999999999,
"end": 4967.78,
"text": " stated counter arguments to their recommendations mainly."
},
{
"start": 4967.78,
"end": 4973.62,
"text": " So their recommendations being, let's change the hiring, let's change the promotion, and"
},
{
"start": 4973.62,
"end": 4979.78,
"text": " so on to be based on race and gender."
},
{
"start": 4979.78,
"end": 4984.54,
"text": " And the pushback here characterized in different ways."
},
{
"start": 4984.54,
"end": 4986.98,
"text": " So we'll go through this."
},
{
"start": 4986.98,
"end": 4987.98,
"text": " This is the last section."
},
{
"start": 4987.98,
"end": 4990.6,
"text": " I know it's a long video already."
},
{
"start": 4990.6,
"end": 4995.1,
"text": " If you're still here, like the one person who's still here, hi, I hope you're doing"
},
{
"start": 4995.1,
"end": 4996.1,
"text": " well."
},
{
"start": 4996.1,
"end": 4997.1,
"text": " Good."
},
{
"start": 4997.1,
"end": 4998.1,
"text": " Keep hydrated."
},
{
"start": 4998.1,
"end": 4999.1,
"text": " Yeah."
},
{
"start": 4999.1,
"end": 5002.22,
"text": " So they say, it's a critical time."
},
{
"start": 5002.22,
"end": 5010.62,
"text": " We now see diversity itself being weaponized."
},
{
"start": 5010.62,
"end": 5016.9,
"text": " So they say this growing awareness accompanied by demands for inclusion and equity has led"
},
{
"start": 5016.9,
"end": 5023.22,
"text": " to some change, but there has also been resistance, especially among those implicitly privileged"
},
{
"start": 5023.22,
"end": 5024.54,
"text": " by the status quo."
},
{
"start": 5024.54,
"end": 5028.7,
"text": " So again, jumping straight to attack on the person."
},
{
"start": 5028.7,
"end": 5033.74,
"text": " Like I don't care if who makes an argument against me."
},
{
"start": 5033.74,
"end": 5039.34,
"text": " I want to go on the argument and I'm going to go on the content of the argument."
},
{
"start": 5039.34,
"end": 5047.34,
"text": " But these people straight, first thing they stayed is that's just by the people who are"
},
{
"start": 5047.34,
"end": 5048.34,
"text": " benefiting."
},
{
"start": 5048.34,
"end": 5051.900000000001,
"text": " That's just by the white men, basically."
},
{
"start": 5051.900000000001,
"end": 5053.900000000001,
"text": " Straight to the identity of the person."
},
{
"start": 5053.900000000001,
"end": 5058.38,
"text": " That's dishonesty right there."
},
{
"start": 5058.38,
"end": 5065.66,
"text": " So those questioning and even rejecting the idea that racism, misogyny, and harassment"
},
{
"start": 5065.66,
"end": 5070.46,
"text": " are problems within the AI field and the tech industry have appropriated the language of"
},
{
"start": 5070.46,
"end": 5077.34,
"text": " diversity to argue that efforts to improve inclusion are in fact exclusionary and addressing"
},
{
"start": 5077.34,
"end": 5082.62,
"text": " the deeper structural challenges posed by racism, sex and inequity is misguided."
},
{
"start": 5082.62,
"end": 5089.58,
"text": " And yes, yes, definitely efforts to improve inclusion can be exclusionary."
},
{
"start": 5089.58,
"end": 5101.1,
"text": " Like just because, so this is a thing, just because you're fixing a problem doesn't mean"
},
{
"start": 5101.1,
"end": 5107.98,
"text": " the method you're using to fixing it is justified and is itself good."
},
{
"start": 5107.98,
"end": 5115.3,
"text": " Methods to improve inclusion can be exclusionary and some that have been proposed are exclusionary."
},
{
"start": 5115.3,
"end": 5117.58,
"text": " Definitely it depends on the method."
},
{
"start": 5117.58,
"end": 5121.48,
"text": " It doesn't mean these people are against these efforts."
},
{
"start": 5121.48,
"end": 5128.66,
"text": " It means that the measures, for example, implementing racist hiring policy, I can definitely see"
},
{
"start": 5128.66,
"end": 5134.0199999999995,
"text": " that this is going to lead to more equal representation within the workforce."
},
{
"start": 5134.0199999999995,
"end": 5141.86,
"text": " But the tool itself is really bad and exclusionary and discriminating."
},
{
"start": 5141.86,
"end": 5149.5,
"text": " So yeah, I would say that it's accurate that it can be exclusionary."
},
{
"start": 5149.5,
"end": 5154.98,
"text": " I say, for example, some AI researchers greeted the announcement of Black in AI Workshop at"
},
{
"start": 5154.98,
"end": 5159.7,
"text": " NRIPS leading machine learning conference by questioning whether the event was necessary,"
},
{
"start": 5159.7,
"end": 5162.62,
"text": " arguing that it would be discriminatory."
},
{
"start": 5162.62,
"end": 5163.98,
"text": " But can't they?"
},
{
"start": 5163.98,
"end": 5166.98,
"text": " Can't they question whether the event was necessary?"
},
{
"start": 5166.98,
"end": 5170.42,
"text": " Like that would, I would, here I would need a discussion."
},
{
"start": 5170.42,
"end": 5172.06,
"text": " What is it for?"
},
{
"start": 5172.06,
"end": 5173.06,
"text": " Right?"
},
{
"start": 5173.06,
"end": 5175.64,
"text": " Why is this event happening?"
},
{
"start": 5175.64,
"end": 5177.74,
"text": " And what is it doing?"
},
{
"start": 5177.74,
"end": 5180.5,
"text": " And is it discriminatory?"
},
{
"start": 5180.5,
"end": 5181.62,
"text": " It could be."
},
{
"start": 5181.62,
"end": 5183.22,
"text": " Any event can be discriminatory."
},
{
"start": 5183.22,
"end": 5190.3,
"text": " Does it discriminate based on race or gender or anything?"
},
{
"start": 5190.3,
"end": 5194.74,
"text": " Is it, you know, does it do so unjustly and all?"
},
{
"start": 5194.74,
"end": 5198.42,
"text": " So I don't, I don't just don't see why."
},
{
"start": 5198.42,
"end": 5199.42,
"text": " Could still be wrong."
},
{
"start": 5199.42,
"end": 5203.74,
"text": " Like you could question and then you could be wrong."
},
{
"start": 5203.74,
"end": 5206.7,
"text": " But you should be taken on your argument."
},
{
"start": 5206.7,
"end": 5216.06,
"text": " But the argument here is just already questioning this is already on the wrong side of the argument."
},
{
"start": 5216.06,
"end": 5217.66,
"text": " And I don't agree with this."
},
{
"start": 5217.66,
"end": 5221.46,
"text": " I don't agree with these people that question this workshop."
},
{
"start": 5221.46,
"end": 5225.74,
"text": " Don't have a particular opinion on these things."
},
{
"start": 5225.74,
"end": 5231.82,
"text": " But I have the opinion that you have to take arguments at their argument value and not"
},
{
"start": 5231.82,
"end": 5238.54,
"text": " just at who makes them or whether or not they're against a particular viewpoint."
},
{
"start": 5238.54,
"end": 5240.66,
"text": " All right."
},
{
"start": 5240.66,
"end": 5247.139999999999,
"text": " They say such pushback often centers calls for cognitive diversity or viewpoint diversity."
},
{
"start": 5247.139999999999,
"end": 5251.7,
"text": " The idea that individual differences in the ways people think and understand the world"
},
{
"start": 5251.7,
"end": 5257.0199999999995,
"text": " are distinctions that should be counted alongside or instead of other identity categories such"
},
{
"start": 5257.0199999999995,
"end": 5258.5,
"text": " as race and gender."
},
{
"start": 5258.5,
"end": 5266.34,
"text": " Well, yes, that's I mean, isn't that isn't that a very reasonable thing to say?"
},
{
"start": 5266.34,
"end": 5272.54,
"text": " Isn't it very reasonable to say that differences in the ways people think and understand the"
},
{
"start": 5272.54,
"end": 5278.139999999999,
"text": " world, their distinctions that should be counted alongside other identity categories such as"
},
{
"start": 5278.14,
"end": 5285.780000000001,
"text": " race and gender, they say a dozen white men so long as they were not raised in the same"
},
{
"start": 5285.780000000001,
"end": 5291.02,
"text": " household and don't think identical thoughts could be considered diverse."
},
{
"start": 5291.02,
"end": 5295.700000000001,
"text": " That's I don't know if this is a sarcastic statement or not, but clearly it's it's kind"
},
{
"start": 5295.700000000001,
"end": 5302.18,
"text": " of the counterpoint they're trying to make here that but yes, I would I would totally"
},
{
"start": 5302.18,
"end": 5309.740000000001,
"text": " agree with this statement in a way a white man growing up in San Francisco, a white man"
},
{
"start": 5309.740000000001,
"end": 5317.820000000001,
"text": " growing up in rural Idaho, a white man growing up in Florida, a white man growing up in Western"
},
{
"start": 5317.820000000001,
"end": 5326.02,
"text": " Europe, one in Russia, and one growing up on the road with its circus, his circus parents"
},
{
"start": 5326.02,
"end": 5334.26,
"text": " in Mongolia would definitely be that plenty diverse, right?"
},
{
"start": 5334.26,
"end": 5342.02,
"text": " I mean, they criticize this here, but this is is actually how can you how can you not"
},
{
"start": 5342.02,
"end": 5343.740000000001,
"text": " see this that?"
},
{
"start": 5343.740000000001,
"end": 5348.540000000001,
"text": " Yes, these are valid differences, and people are going to think differently, independent"
},
{
"start": 5348.540000000001,
"end": 5351.5,
"text": " of how they look, people are going to have different thoughts."
},
{
"start": 5351.5,
"end": 5356.42,
"text": " And it's important to recognize other people think differently."
},
{
"start": 5356.42,
"end": 5362.7,
"text": " And therefore, you should, you know, include them if it's relevant."
},
{
"start": 5362.7,
"end": 5366.82,
"text": " And the counter argument to this is, of course, what the authors here are saying basically"
},
{
"start": 5366.82,
"end": 5379.62,
"text": " is that 12, a dozen people, as long as they are don't look the same, could be considered"
},
{
"start": 5379.62,
"end": 5383.98,
"text": " diverse, even if they all were raised in the same place, and basically all live in San"
},
{
"start": 5383.98,
"end": 5387.98,
"text": " Francisco, and think the exact same thing."
},
{
"start": 5387.98,
"end": 5395.58,
"text": " Yeah, that's, I mean, it sounds to me, it sounds as absurd as the other way around."
},
{
"start": 5395.58,
"end": 5396.66,
"text": " To me."
},
{
"start": 5396.66,
"end": 5401.46,
"text": " So here's, here's my, here's my thoughts on this."
},
{
"start": 5401.46,
"end": 5407.58,
"text": " I am not going to pretend that I know what life is like as a woman."
},
{
"start": 5407.58,
"end": 5408.58,
"text": " Right?"
},
{
"start": 5408.58,
"end": 5418.0599999999995,
"text": " I'm absolutely sure that for areas of life, it is it is definitely valuable to listen"
},
{
"start": 5418.0599999999995,
"end": 5427.5,
"text": " to the experience of a woman or multiple women, an aggregate of women, because the life is"
},
{
"start": 5427.5,
"end": 5429.46,
"text": " just different as a woman."
},
{
"start": 5429.46,
"end": 5431.18,
"text": " Life is also different."
},
{
"start": 5431.18,
"end": 5437.5199999999995,
"text": " As a black person, I absolutely concede that there are things that I might not be able"
},
{
"start": 5437.52,
"end": 5445.5,
"text": " to draw from my life experience, because I am not of that skin color that different problems"
},
{
"start": 5445.5,
"end": 5446.5,
"text": " that people face."
},
{
"start": 5446.5,
"end": 5450.5,
"text": " And that's why it's important to have an opinion of that at the table."
},
{
"start": 5450.5,
"end": 5461.22,
"text": " But I'm also absolutely certain that I have no relation to someone who grew up as a child"
},
{
"start": 5461.22,
"end": 5466.9400000000005,
"text": " pop star from the age of 12, and then had that life."
},
{
"start": 5466.94,
"end": 5472.339999999999,
"text": " I have no relation to someone growing up under a communist regime."
},
{
"start": 5472.339999999999,
"end": 5480.179999999999,
"text": " I have no relation to someone growing up in in kind of a Buddhist religious tradition."
},
{
"start": 5480.179999999999,
"end": 5481.179999999999,
"text": " I just don't."
},
{
"start": 5481.179999999999,
"end": 5482.74,
"text": " And I don't care how they look."
},
{
"start": 5482.74,
"end": 5485.219999999999,
"text": " They have different experiences."
},
{
"start": 5485.219999999999,
"end": 5488.94,
"text": " They have different bodies of knowledge to draw on."
},
{
"start": 5488.94,
"end": 5496.219999999999,
"text": " And I don't think why we should make the difference along the exact lines of race and gender."
},
{
"start": 5496.22,
"end": 5500.900000000001,
"text": " Yeah, but that's what they that's of course what they argue here."
},
{
"start": 5500.900000000001,
"end": 5508.18,
"text": " Those arguments work by centering identity while flattening or ignoring power relationships."
},
{
"start": 5508.18,
"end": 5515.34,
"text": " Here the VP, the Facebook VP of engineering said that the ultimate goal is cognitive diversity"
},
{
"start": 5515.34,
"end": 5519.62,
"text": " and cognitive diversity is correlated with identity diversity."
},
{
"start": 5519.62,
"end": 5525.34,
"text": " That means it's not just about getting women in tech, it's about broad voices, broad representation."
},
{
"start": 5525.34,
"end": 5526.34,
"text": " Right?"
},
{
"start": 5526.34,
"end": 5537.38,
"text": " So the the this is exactly what I would say the reason why we want different the reason"
},
{
"start": 5537.38,
"end": 5542.62,
"text": " why we want a woman or a black person at the table is because they have a different knowledge"
},
{
"start": 5542.62,
"end": 5546.38,
"text": " is because they have different thoughts because of their different life experience."
},
{
"start": 5546.38,
"end": 5549.34,
"text": " They have different thoughts that they can bring in."
},
{
"start": 5549.34,
"end": 5557.860000000001,
"text": " So actually, by including these what they call bodies, it is about cognitive diversity,"
},
{
"start": 5557.860000000001,
"end": 5559.5,
"text": " even in itself."
},
{
"start": 5559.5,
"end": 5562.62,
"text": " But the authors here really see this from a different angle."
},
{
"start": 5562.62,
"end": 5568.4400000000005,
"text": " They really see this in terms of power relationships between race and gender groups."
},
{
"start": 5568.4400000000005,
"end": 5573.5,
"text": " And I yeah, the arguments of the authors don't make sense if you don't view it through that"
},
{
"start": 5573.5,
"end": 5574.5,
"text": " lens."
},
{
"start": 5574.5,
"end": 5581.54,
"text": " That lens to me is just such a it's such a I don't know, it's just sad look on the world."
},
{
"start": 5581.54,
"end": 5585.78,
"text": " And also, I think it's a very, very inaccurate look on the world."
},
{
"start": 5585.78,
"end": 5590.22,
"text": " And it's, I think, a very dangerous look on the world."
},
{
"start": 5590.22,
"end": 5597.94,
"text": " Um, yeah, again, they say instead of looking at historical patterns of marginalization,"
},
{
"start": 5597.94,
"end": 5601.34,
"text": " calls for cognitive diversity argued that all differences are equal."
},
{
"start": 5601.34,
"end": 5602.42,
"text": " No, we're not."
},
{
"start": 5602.42,
"end": 5608.54,
"text": " Like, no calls for cognitive diversity or don't argue that all differences are equal."
},
{
"start": 5608.54,
"end": 5614.7,
"text": " Well aware that some people have it harder, well aware that some differences are bigger,"
},
{
"start": 5614.7,
"end": 5616.9,
"text": " worse or better."
},
{
"start": 5616.9,
"end": 5625.26,
"text": " That's absolutely well aware all they're saying is that race and gender shouldn't be the like,"
},
{
"start": 5625.26,
"end": 5633.74,
"text": " only things to consider and shouldn't be in itself be considered diverse."
},
{
"start": 5633.74,
"end": 5639.22,
"text": " Just because someone is of a certain skin color, it doesn't mean anything, right?"
},
{
"start": 5639.22,
"end": 5643.3,
"text": " It doesn't actually tell you anything about that person."
},
{
"start": 5643.3,
"end": 5650.56,
"text": " So why not consider people as individuals and look at what was their life like until"
},
{
"start": 5650.56,
"end": 5655.22,
"text": " this point and what could they contribute to the discussion we're having rather than"
},
{
"start": 5655.22,
"end": 5657.860000000001,
"text": " looking at the color of their skin."
},
{
"start": 5657.860000000001,
"end": 5663.18,
"text": " I mean, if the color of their skin played a role in their life, then obviously that"
},
{
"start": 5663.18,
"end": 5667.22,
"text": " would manifest in my suggestion as well."
},
{
"start": 5667.22,
"end": 5673.34,
"text": " But to just look at people through this kind of group lens is is so foreign to me."
},
{
"start": 5673.34,
"end": 5681.26,
"text": " And yeah, I feel it's it's quite dangerous."
},
{
"start": 5681.26,
"end": 5690.9800000000005,
"text": " Yeah, so again, and this this could argue that all differences are equal."
},
{
"start": 5690.9800000000005,
"end": 5697.06,
"text": " I mean, the point where you have to start misrepresenting what the counter argument"
},
{
"start": 5697.06,
"end": 5701.62,
"text": " is saying, that's really how you know you're dealing with a with not a well intentioned"
},
{
"start": 5701.62,
"end": 5704.46,
"text": " person on the other side of the of the discussion."
},
{
"start": 5704.46,
"end": 5706.62,
"text": " This is really politics now."
},
{
"start": 5706.62,
"end": 5710.04,
"text": " This isn't a well intended argumentation."
},
{
"start": 5710.04,
"end": 5714.7,
"text": " It's really someone to trying to achieve some goal, because they have to misrepresent the"
},
{
"start": 5714.7,
"end": 5715.9,
"text": " other side."
},
{
"start": 5715.9,
"end": 5719.0599999999995,
"text": " And this only gets worse from here."
},
{
"start": 5719.0599999999995,
"end": 5727.0199999999995,
"text": " They say recently was exemplified in the controversy over Google's appointment of Heritage Foundation"
},
{
"start": 5727.02,
"end": 5733.700000000001,
"text": " CEO K calls James to its Advanced Technology External Advisory Council."
},
{
"start": 5733.700000000001,
"end": 5738.540000000001,
"text": " Google's reasoning for the appointment of James was ostensibly to ensure diversity of"
},
{
"start": 5738.540000000001,
"end": 5743.3,
"text": " thought by including a conservative viewpoint on the council."
},
{
"start": 5743.3,
"end": 5751.18,
"text": " Alright, so Google has a technology advisory board, or council, sorry, of external people,"
},
{
"start": 5751.18,
"end": 5753.780000000001,
"text": " and they've included a conservative."
},
{
"start": 5753.78,
"end": 5760.38,
"text": " And she is by all by all metrics, let's say, a standard conservative."
},
{
"start": 5760.38,
"end": 5765.78,
"text": " So this is not a far right neo Nazi type."
},
{
"start": 5765.78,
"end": 5766.78,
"text": " I don't know."
},
{
"start": 5766.78,
"end": 5774.62,
"text": " But this is this is someone who has similar opinions than half the US country and in generally"
},
{
"start": 5774.62,
"end": 5781.38,
"text": " in at least in the Western world, generally half of the of the country's population tends"
},
{
"start": 5781.38,
"end": 5784.46,
"text": " to be conservative."
},
{
"start": 5784.46,
"end": 5786.3,
"text": " More or less, I mean, there's differences."
},
{
"start": 5786.3,
"end": 5792.66,
"text": " But yeah, so this this is a this is an opinion that a large portion of the population shares."
},
{
"start": 5792.66,
"end": 5799.46,
"text": " So it would be I don't know, it would be suitable to include at least someone of that opinion"
},
{
"start": 5799.46,
"end": 5804.46,
"text": " in an external advisory council to to have that on board."
},
{
"start": 5804.46,
"end": 5809.34,
"text": " You don't have to listen to her like she's not like she's made king."
},
{
"start": 5809.34,
"end": 5818.22,
"text": " It's simply that she will have the opportunity to input her voice representative of kind"
},
{
"start": 5818.22,
"end": 5821.9400000000005,
"text": " of that large, very large percentage of people."
},
{
"start": 5821.9400000000005,
"end": 5828.9400000000005,
"text": " They go on to say, James is also a black woman, thus adding racial and gender diversity to"
},
{
"start": 5828.9400000000005,
"end": 5830.22,
"text": " the panel."
},
{
"start": 5830.22,
"end": 5835.46,
"text": " So even further, right, this is it's a conservative black woman."
},
{
"start": 5835.46,
"end": 5841.86,
"text": " All right, but the pushback following James's inclusion focused on her policy position,"
},
{
"start": 5841.86,
"end": 5849.42,
"text": " citing specifically her vocal anti LGBTQ and anti immigrant views and highlighted why cognitive"
},
{
"start": 5849.42,
"end": 5853.1,
"text": " diversity is a particularly limited lens."
},
{
"start": 5853.1,
"end": 5861.46,
"text": " And the pushback here was very much spearheaded by one of the authors of this article."
},
{
"start": 5861.46,
"end": 5864.46,
"text": " So I am this isn't just reporting."
},
{
"start": 5864.46,
"end": 5873.34,
"text": " I will also I'll also criticize the the this pushback here since it's, you know, it's kind"
},
{
"start": 5873.34,
"end": 5875.46,
"text": " of argued for in this article."
},
{
"start": 5875.46,
"end": 5881.86,
"text": " It's not just reported and also because the authors are the same."
},
{
"start": 5881.86,
"end": 5887.14,
"text": " So here they say they have vocal anti LGBTQ and anti immigrant views."
},
{
"start": 5887.14,
"end": 5891.82,
"text": " And I haven't actually gone specifically and looked at what this person particularly has"
},
{
"start": 5891.82,
"end": 5899.179999999999,
"text": " said, but given that she's a standard conservative and has been in public office, I believe under"
},
{
"start": 5899.179999999999,
"end": 5909.139999999999,
"text": " George W. Bush, she can't like I have trouble believing that she has like extremely hateful"
},
{
"start": 5909.139999999999,
"end": 5915.299999999999,
"text": " opinions like these people shouldn't exist or like something like that nature."
},
{
"start": 5915.3,
"end": 5924.22,
"text": " Like often people like conservative people have have issues with forcing people to adopt"
},
{
"start": 5924.22,
"end": 5931.38,
"text": " certain pronouns for people or issues with which bathrooms do people go in and, you know,"
},
{
"start": 5931.38,
"end": 5937.34,
"text": " generally are tougher on immigration, especially illegal immigration and so on."
},
{
"start": 5937.34,
"end": 5943.22,
"text": " I mean, these are these are views that people hold."
},
{
"start": 5943.22,
"end": 5946.900000000001,
"text": " It's a large part of people and these are discussions to be had."
},
{
"start": 5946.900000000001,
"end": 5952.06,
"text": " So including this this person would be very sensible move."
},
{
"start": 5952.06,
"end": 5957.26,
"text": " But they say in a letter opposing the appointment, a group of Google workers calling themselves"
},
{
"start": 5957.26,
"end": 5964.780000000001,
"text": " Googlers against transphobia and hate, transphobia and hate responded to the idea that diversity"
},
{
"start": 5964.780000000001,
"end": 5967.62,
"text": " of thought justified James's addition to the council."
},
{
"start": 5967.62,
"end": 5973.66,
"text": " This is a weaponization of the language of diversity by appointing James to the ATAC."
},
{
"start": 5973.66,
"end": 5978.86,
"text": " Google elevates and endorses her view, implying that hers is a valid perspective worthy of"
},
{
"start": 5978.86,
"end": 5980.86,
"text": " inclusions in its decision making."
},
{
"start": 5980.86,
"end": 5981.86,
"text": " This is unacceptable."
},
{
"start": 5981.86,
"end": 5989.099999999999,
"text": " Here it says again, the author was one of the organizers of that."
},
{
"start": 5989.099999999999,
"end": 5990.86,
"text": " And that's what they're saying here."
},
{
"start": 5990.86,
"end": 5996.94,
"text": " The views, if you don't have our views, these are unacceptable views, right?"
},
{
"start": 5996.94,
"end": 5999.9,
"text": " It's valid perspective worthy of inclusion."
},
{
"start": 5999.9,
"end": 6005.379999999999,
"text": " It's what they're saying basically is you don't even talk to these to this person, like"
},
{
"start": 6005.379999999999,
"end": 6009.379999999999,
"text": " talking to this person, considering their opinion."
},
{
"start": 6009.379999999999,
"end": 6015.339999999999,
"text": " You can still evaluate the opinion, but even considering their opinion is already wrong."
},
{
"start": 6015.339999999999,
"end": 6018.58,
"text": " And that given that the person is a black woman."
},
{
"start": 6018.58,
"end": 6026.58,
"text": " So basically, they are called the author's idea of diversity is people that look different"
},
{
"start": 6026.58,
"end": 6033.42,
"text": " that are from race and gender groups that have don't have much power or perceived what"
},
{
"start": 6033.42,
"end": 6035.44,
"text": " they call power right now."
},
{
"start": 6035.44,
"end": 6039.94,
"text": " As long as they all think exactly as we think, right, then that's fine."
},
{
"start": 6039.94,
"end": 6044.78,
"text": " As long as they they share our thoughts, as long as they don't have dissenting opinions,"
},
{
"start": 6044.78,
"end": 6049.18,
"text": " we want the we want the different looking people."
},
{
"start": 6049.18,
"end": 6053.58,
"text": " But don't dare talk to anyone of a different opinion."
},
{
"start": 6053.58,
"end": 6060.3,
"text": " Yeah, this, I don't I don't see how I mean, these these authors, in my opinion, they really"
},
{
"start": 6060.3,
"end": 6067.74,
"text": " live in in a bubble, they really live in the in a tiny Silicon Valley or Silicon Valley"
},
{
"start": 6067.74,
"end": 6074.34,
"text": " influenced spaces, because this is this is half the people they basically saying half"
},
{
"start": 6074.34,
"end": 6083.38,
"text": " the people in their greater community in their country aren't even worthy listening to their"
},
{
"start": 6083.38,
"end": 6090.14,
"text": " opinions aren't even worthy of inclusion in of consideration."
},
{
"start": 6090.14,
"end": 6102.02,
"text": " So yeah, well, well done might as well discredit them at once."
},
{
"start": 6102.02,
"end": 6106.86,
"text": " I'm sure I'm sure I'm sure that's gonna fly well with these people."
},
{
"start": 6106.86,
"end": 6109.14,
"text": " All right."
},
{
"start": 6109.14,
"end": 6114.700000000001,
"text": " Yeah, might might start calling them deplorables and see what they do."
},
{
"start": 6114.700000000001,
"end": 6122.14,
"text": " Maybe they'll return the favor and elect a moron just to stick it in your face."
},
{
"start": 6122.14,
"end": 6124.14,
"text": " I mean, that's what happened."
},
{
"start": 6124.14,
"end": 6134.780000000001,
"text": " So the idea of cognitive diversity is mobilized by some support in support that the AI field"
},
{
"start": 6134.780000000001,
"end": 6139.02,
"text": " and the tech industry are already diverse."
},
{
"start": 6139.02,
"end": 6143.1,
"text": " Including as far as to support claims that not including identities like white and male"
},
{
"start": 6143.1,
"end": 6145.1,
"text": " constitutes discrimination."
},
{
"start": 6145.1,
"end": 6146.9400000000005,
"text": " Yes, it can."
},
{
"start": 6146.9400000000005,
"end": 6157.3,
"text": " Like if, if you include every single identity except white and male, that constitutes discrimination."
},
{
"start": 6157.3,
"end": 6163.1,
"text": " That's I mean, yes, even if they're in the majority is still constitutes discrimination,"
},
{
"start": 6163.1,
"end": 6168.9800000000005,
"text": " like no one can help being born white and male, no one white and male chose to be born"
},
{
"start": 6168.98,
"end": 6169.98,
"text": " like that."
},
{
"start": 6169.98,
"end": 6177.219999999999,
"text": " Don't mostly don't choose the melanin content of your skin, you can modulate it a bit by"
},
{
"start": 6177.219999999999,
"end": 6184.62,
"text": " going to the sun, which computer science people statistically don't do very often."
},
{
"start": 6184.62,
"end": 6187.0599999999995,
"text": " So there's not much leeway there."
},
{
"start": 6187.0599999999995,
"end": 6196.74,
"text": " So yeah, to not include identities like that, if you include every other one, can constitute"
},
{
"start": 6196.74,
"end": 6197.74,
"text": " discrimination."
},
{
"start": 6197.74,
"end": 6199.099999999999,
"text": " True."
},
{
"start": 6199.099999999999,
"end": 6205.34,
"text": " A July 2017 memo written by James Damore, a software engineer at Google is illustrative"
},
{
"start": 6205.34,
"end": 6210.7,
"text": " of such pushback titled Google's ideological echo chamber."
},
{
"start": 6210.7,
"end": 6215.0599999999995,
"text": " And published in an internal mailing list, the memo critiqued the company's diversity"
},
{
"start": 6215.0599999999995,
"end": 6220.62,
"text": " policies arguing that biological differences between men and women rather than bias and"
},
{
"start": 6220.62,
"end": 6225.26,
"text": " discrimination help explain gender disparities at the company."
},
{
"start": 6225.26,
"end": 6230.14,
"text": " I feel the you can leave out the rather than here."
},
{
"start": 6230.14,
"end": 6240.06,
"text": " I think the memo simply stated that biological differences can help explain the gender disparities."
},
{
"start": 6240.06,
"end": 6244.66,
"text": " The most objective writing the memo was to make the case that policies designed to achieve"
},
{
"start": 6244.66,
"end": 6249.14,
"text": " equal representation are unfair, divisive and bad for business."
},
{
"start": 6249.14,
"end": 6250.26,
"text": " Well some are."
},
{
"start": 6250.26,
"end": 6256.74,
"text": " Yes, especially the recommendations that you've given at the beginning, number seven, is unfair,"
},
{
"start": 6256.74,
"end": 6264.46,
"text": " divisive and I would also argue bad for business."
},
{
"start": 6264.46,
"end": 6272.5,
"text": " So supporters for Damore's point of view at times even drew on the rhetoric of the pipeline"
},
{
"start": 6272.5,
"end": 6275.900000000001,
"text": " to make the case that diversity initiatives are in fact discriminatory."
},
{
"start": 6275.9,
"end": 6281.299999999999,
"text": " They argue incorrectly that if there aren't qualified candidates in the pipeline, then"
},
{
"start": 6281.299999999999,
"end": 6287.0199999999995,
"text": " hiring those who are unqualified on the basis of identity discriminates against those who"
},
{
"start": 6287.0199999999995,
"end": 6288.7,
"text": " are qualified."
},
{
"start": 6288.7,
"end": 6300.98,
"text": " No, I would say hiring anyone on the basis of identity discriminates."
},
{
"start": 6300.98,
"end": 6303.259999999999,
"text": " I mean inherently."
},
{
"start": 6303.26,
"end": 6310.18,
"text": " So again I think that's the larger argument that these people are making, which is not"
},
{
"start": 6310.18,
"end": 6316.22,
"text": " incorrect, is very correct."
},
{
"start": 6316.22,
"end": 6322.5,
"text": " So in an update to the memo Damore himself asserted that he values diversity and inclusion,"
},
{
"start": 6322.5,
"end": 6326.7,
"text": " but his primary concern was cognitive diversity."
},
{
"start": 6326.7,
"end": 6331.54,
"text": " He says diversity inclusion is not denying that sexism exists, doesn't endorse using"
},
{
"start": 6331.54,
"end": 6332.900000000001,
"text": " stereotypes."
},
{
"start": 6332.9,
"end": 6339.74,
"text": " And in specific I've read the memo and it directly says these are population level kind"
},
{
"start": 6339.74,
"end": 6344.78,
"text": " of statistics and there is more overlap than difference and you absolutely can't say anything"
},
{
"start": 6344.78,
"end": 6348.66,
"text": " about an individual by looking at these statistics."
},
{
"start": 6348.66,
"end": 6351.62,
"text": " That's almost a quote from this memo."
},
{
"start": 6351.62,
"end": 6359.86,
"text": " So he was very much concerned with considering people as individuals, but also if you like"
},
{
"start": 6359.86,
"end": 6362.379999999999,
"text": " he was basically making the same argument as earlier."
},
{
"start": 6362.38,
"end": 6370.3,
"text": " I told you to remember, hey look this one study that found that women's interests might"
},
{
"start": 6370.3,
"end": 6373.3,
"text": " be different and we might shape the curriculum."
},
{
"start": 6373.3,
"end": 6375.22,
"text": " That's basically what Damore said."
},
{
"start": 6375.22,
"end": 6380.66,
"text": " He said women's interests might be different and we'd have to maybe shape the way we do"
},
{
"start": 6380.66,
"end": 6386.1,
"text": " work, like change the way we do software engineering to attract more of them."
},
{
"start": 6386.1,
"end": 6388.9800000000005,
"text": " That was one of his points."
},
{
"start": 6388.98,
"end": 6394.86,
"text": " So he's exactly the same thing, but of course he's a misogynist because he suggested that"
},
{
"start": 6394.86,
"end": 6400.259999999999,
"text": " this could be due partly because of biological differences."
},
{
"start": 6400.259999999999,
"end": 6407.0199999999995,
"text": " And the way he was dragged through the mud is just crazy."
},
{
"start": 6407.0199999999995,
"end": 6413.82,
"text": " And they shoot here very much against this kind of biological, what they call biological"
},
{
"start": 6413.82,
"end": 6414.82,
"text": " determinism."
},
{
"start": 6414.82,
"end": 6417.94,
"text": " We'll see this very briefly."
},
{
"start": 6417.94,
"end": 6423.139999999999,
"text": " I'd say diversity becomes an empty signifier, stripped of the histories and experiences"
},
{
"start": 6423.139999999999,
"end": 6429.379999999999,
"text": " of systemic discrimination, repurposed around ideology rather than bodies."
},
{
"start": 6429.379999999999,
"end": 6436.94,
"text": " I'd say diversity has nothing inherently to do with bodies as such."
},
{
"start": 6436.94,
"end": 6449.419999999999,
"text": " I think that's only the case if you are already convinced of this."
},
{
"start": 6449.419999999999,
"end": 6453.98,
"text": " Within hours of the memo's publication, harassment targeting minority advocates who pushed back"
},
{
"start": 6453.98,
"end": 6460.9,
"text": " against the claims in the memo began, with a particular focus on queer and trans workers."
},
{
"start": 6460.9,
"end": 6468.379999999999,
"text": " That's bad, but also I think the pushback against people who voiced support was also"
},
{
"start": 6468.379999999999,
"end": 6474.54,
"text": " pretty bad because one of them was fired, as you already stated."
},
{
"start": 6474.54,
"end": 6477.62,
"text": " Google's vice president of diversity even locked down her Twitter account shortly after"
},
{
"start": 6477.62,
"end": 6483.42,
"text": " Demours firing, responding to the barrage of threats describing her as a police Nazi."
},
{
"start": 6483.42,
"end": 6484.74,
"text": " Well yeah, if you fire something."
},
{
"start": 6484.74,
"end": 6489.759999999999,
"text": " I mean undoubtedly Google fired this guy because they thought it was less of a PR disaster"
},
{
"start": 6489.76,
"end": 6492.62,
"text": " if they also fired him now."
},
{
"start": 6492.62,
"end": 6501.860000000001,
"text": " This probably wasn't an ideological decision, much more a PR decision."
},
{
"start": 6501.860000000001,
"end": 6508.780000000001,
"text": " If you fire someone after stating something like this, it very much looks like you're"
},
{
"start": 6508.780000000001,
"end": 6514.3,
"text": " firing them because you don't like their ideas and you don't like what they're saying,"
},
{
"start": 6514.3,
"end": 6522.860000000001,
"text": " which people generally are not in favor of censoring freedom of speech."
},
{
"start": 6522.860000000001,
"end": 6527.5,
"text": " But yeah, that being said, harassment is bad, don't harass people."
},
{
"start": 6527.5,
"end": 6540,
"text": " Also that being said, criticism isn't always harassment and don't conflate the two."
},
{
"start": 6540,
"end": 6544.7,
"text": " Demours' memo also stated that the distribution of preference abilities of men and women differ"
},
{
"start": 6544.7,
"end": 6550.54,
"text": " in part due to biological causes and that these differences may explain why we don't"
},
{
"start": 6550.54,
"end": 6556.58,
"text": " see equal representation of women in tech and leadership."
},
{
"start": 6556.58,
"end": 6561.42,
"text": " This assertion hinges on a flawed assumption that identities like gender and race are essential"
},
{
"start": 6561.42,
"end": 6568.5,
"text": " and fixed biological attributes and that inequalities are at least in part the product of such irreducible"
},
{
"start": 6568.5,
"end": 6569.5,
"text": " differences."
},
{
"start": 6569.5,
"end": 6576.26,
"text": " Well, I mean, if they're not fixed biological attributes, certainly gender and race have"
},
{
"start": 6576.26,
"end": 6582.54,
"text": " a 0.99 correlation with biology."
},
{
"start": 6582.54,
"end": 6590.46,
"text": " Since your biology is first and it's determined when you're conceived, that demonstrates a"
},
{
"start": 6590.46,
"end": 6594.14,
"text": " causal direction."
},
{
"start": 6594.14,
"end": 6600.14,
"text": " Even if they're not exactly fixed, they are overwhelmingly fixed."
},
{
"start": 6600.14,
"end": 6607.5,
"text": " And to suggest that this is a flawed assumption, that these inequalities are at least part"
},
{
"start": 6607.5,
"end": 6612.860000000001,
"text": " the product of such differences, what you'd have to do, they simply state it's a flawed"
},
{
"start": 6612.860000000001,
"end": 6614.18,
"text": " assumption."
},
{
"start": 6614.18,
"end": 6621.820000000001,
"text": " What you have to do in order to show this is a flawed assumption, you have to show that"
},
{
"start": 6621.82,
"end": 6628.66,
"text": " gender and race, as far as they're biologically determined, have no influence whatsoever on"
},
{
"start": 6628.66,
"end": 6629.66,
"text": " these differences."
},
{
"start": 6629.66,
"end": 6631.299999999999,
"text": " That's what you have to show, right?"
},
{
"start": 6631.299999999999,
"end": 6636.94,
"text": " That's the counterclaim because the claim is they have at least in part something to"
},
{
"start": 6636.94,
"end": 6637.94,
"text": " do with it."
},
{
"start": 6637.94,
"end": 6644.54,
"text": " And that's also, I believe, what the more stated and what the predominant opinion like"
},
{
"start": 6644.54,
"end": 6651.179999999999,
"text": " is very like all the research points to, for example, there is a large difference in interest"
},
{
"start": 6651.18,
"end": 6657.5,
"text": " between genders as far as, for example, career selection goes and so on."
},
{
"start": 6657.5,
"end": 6664.780000000001,
"text": " Now, we can talk about why that is, but there's also a large consensus, I believe, that this"
},
{
"start": 6664.780000000001,
"end": 6673.14,
"text": " is at least partly determined to however degree, but it is at least partly determined by biology."
},
{
"start": 6673.14,
"end": 6680.12,
"text": " In order to show that this is flawed, you need to show that it does not have, it can't"
},
{
"start": 6680.12,
"end": 6682.099999999999,
"text": " have any influence, right?"
},
{
"start": 6682.099999999999,
"end": 6688.9,
"text": " You have to basically prove them the impossibility of this having an influence, which no one"
},
{
"start": 6688.9,
"end": 6692.94,
"text": " has done so far, much to the contrary."
},
{
"start": 6692.94,
"end": 6698.12,
"text": " So simply state this is a flawed assumption kind of shows to me that they've already,"
},
{
"start": 6698.12,
"end": 6706.22,
"text": " they are there, they're in a bubble and they're expecting to speak to people in the same bubble."
},
{
"start": 6706.22,
"end": 6719.66,
"text": " Yeah, so they go on and kind of discredit this as called a biological determinism, which"
},
{
"start": 6719.66,
"end": 6728.14,
"text": " I don't think that's a correct use of the term biological determinism, but you can judge"
},
{
"start": 6728.14,
"end": 6729.14,
"text": " for yourself."
},
{
"start": 6729.14,
"end": 6735.46,
"text": " All I think these people are saying that biology might have some influence and we could adjust"
},
{
"start": 6735.46,
"end": 6737.5,
"text": " for that."
},
{
"start": 6737.5,
"end": 6739.46,
"text": " It's not even right, it's not even."
},
{
"start": 6739.46,
"end": 6741.38,
"text": " Yeah, this comes up here."
},
{
"start": 6741.38,
"end": 6745.82,
"text": " So conclusion, conclusion, finally, I think it's been two hours."
},
{
"start": 6745.82,
"end": 6746.82,
"text": " Sorry."
},
{
"start": 6746.82,
"end": 6747.82,
"text": " Conclusion."
},
{
"start": 6747.82,
"end": 6754.38,
"text": " Throughout this report, we've outlined the scope and scale of the problem, tracing how"
},
{
"start": 6754.38,
"end": 6759.52,
"text": " the diversity crisis in the industry and the problems of bias and AI systems are interrelated"
},
{
"start": 6759.52,
"end": 6762.58,
"text": " aspect of the same issue."
},
{
"start": 6762.58,
"end": 6765.24,
"text": " No."
},
{
"start": 6765.24,
"end": 6770.36,
"text": " In the past, these topics are commonly examined in isolation, but increasing evidence shows"
},
{
"start": 6770.36,
"end": 6772.98,
"text": " that they are closely intertwined."
},
{
"start": 6772.98,
"end": 6776.48,
"text": " No, you've shown that they're parallel."
},
{
"start": 6776.48,
"end": 6782.84,
"text": " You have absolutely not shown that they're interrelated aspects of the same issue and"
},
{
"start": 6782.84,
"end": 6787.86,
"text": " you have not shown that one, any one of these causally influences the other, that there"
},
{
"start": 6787.86,
"end": 6789.179999999999,
"text": " is any feedback loop."
},
{
"start": 6789.179999999999,
"end": 6792.82,
"text": " You have not shown that fixing one leads to fixing the other."
},
{
"start": 6792.82,
"end": 6801.86,
"text": " I mean, you could also take a company that extremely is focused on, or for some reason"
},
{
"start": 6801.86,
"end": 6808.42,
"text": " has a different workforce and then show how their products with the same data sets as"
},
{
"start": 6808.42,
"end": 6814.219999999999,
"text": " the previous companies don't end up being biased."
},
{
"start": 6814.219999999999,
"end": 6816.38,
"text": " Probably not so easy."
},
{
"start": 6816.38,
"end": 6819.299999999999,
"text": " But again, none of that is in the report."
},
{
"start": 6819.3,
"end": 6825.38,
"text": " There are many things you could actually do to show what you wanted to show, but it's"
},
{
"start": 6825.38,
"end": 6830.820000000001,
"text": " just not the case in this article."
},
{
"start": 6830.820000000001,
"end": 6835.22,
"text": " Our analysis surfaced two prominent responses to the diversity crisis."
},
{
"start": 6835.22,
"end": 6840.18,
"text": " On one hand, a worker driven movement, which we've skipped."
},
{
"start": 6840.18,
"end": 6846.66,
"text": " On the other hand, we observe a small but vocal counter movement that actively resists"
},
{
"start": 6846.66,
"end": 6850.5,
"text": " diversity in the industry."
},
{
"start": 6850.5,
"end": 6854.42,
"text": " What dishonesty actively resists diversity?"
},
{
"start": 6854.42,
"end": 6861.3,
"text": " I mean, the thought that these people stray around like, no, I don't like the other looking"
},
{
"start": 6861.3,
"end": 6862.3,
"text": " people."
},
{
"start": 6862.3,
"end": 6864.42,
"text": " It's just so absurd."
},
{
"start": 6864.42,
"end": 6871.18,
"text": " All they're saying is that either we don't understand the problem in the correct way"
},
{
"start": 6871.18,
"end": 6873.98,
"text": " or our tools aren't appropriate to solve the problem."
},
{
"start": 6873.98,
"end": 6881.9,
"text": " I think everyone has the same goal of the workplace and the AI systems being as fair"
},
{
"start": 6881.9,
"end": 6887.339999999999,
"text": " and as non discriminatory as possible."
},
{
"start": 6887.339999999999,
"end": 6890.9,
"text": " Misrepresentation of the other side is something that really bugs me."
},
{
"start": 6890.9,
"end": 6893.419999999999,
"text": " And it's something that these authors do a lot."
},
{
"start": 6893.419999999999,
"end": 6900.82,
"text": " So yeah, I lose my polite side maybe."
},
{
"start": 6900.82,
"end": 6907.94,
"text": " And uses arguments from biological determinism to assert that women are inherently less suited"
},
{
"start": 6907.94,
"end": 6910.5,
"text": " to computer science and AI."
},
{
"start": 6910.5,
"end": 6912.179999999999,
"text": " What a load of crap."
},
{
"start": 6912.179999999999,
"end": 6919.139999999999,
"text": " Sorry, but uses to assert that women are inherently less suited to computer science."
},
{
"start": 6919.139999999999,
"end": 6920.139999999999,
"text": " No one."
},
{
"start": 6920.139999999999,
"end": 6925.78,
"text": " Okay, not no one, but no one that I know."
},
{
"start": 6925.78,
"end": 6930.179999999999,
"text": " Asserts that absolutely no one that makes these arguments."
},
{
"start": 6930.18,
"end": 6931.820000000001,
"text": " Sorry, not no one."
},
{
"start": 6931.820000000001,
"end": 6939.700000000001,
"text": " You can always find a sexist douchebag that makes that argument."
},
{
"start": 6939.700000000001,
"end": 6943.62,
"text": " But this is not a serious argument made."
},
{
"start": 6943.62,
"end": 6947.900000000001,
"text": " And this is not this counter movement."
},
{
"start": 6947.900000000001,
"end": 6951.46,
"text": " Most people in the argument that most people in this counter movement make."
},
{
"start": 6951.46,
"end": 6952.62,
"text": " Not at all."
},
{
"start": 6952.62,
"end": 6962.82,
"text": " And to represent them as such is just so dishonest that yeah, this this this basically this is"
},
{
"start": 6962.82,
"end": 6968.94,
"text": " the it's nice that it's in the conclusion because it finally like at the end it completely"
},
{
"start": 6968.94,
"end": 6975.98,
"text": " destroys the credibility of me taking seriously these authors."
},
{
"start": 6975.98,
"end": 6981.74,
"text": " I thought they had so that the parts we skipped over I mostly would say I'm mostly okay with"
},
{
"start": 6981.74,
"end": 6989.66,
"text": " they mostly show parallels between the that AI systems are biased and they also show that"
},
{
"start": 6989.66,
"end": 6991.3,
"text": " there is unequal representation."
},
{
"start": 6991.3,
"end": 6996.0199999999995,
"text": " They also show examples of discrimination, harassment and so on."
},
{
"start": 6996.0199999999995,
"end": 7001.38,
"text": " Problems in AI companies and universities that all you can read the report for this"
},
{
"start": 7001.38,
"end": 7003.98,
"text": " that's it's pretty interesting to read."
},
{
"start": 7003.98,
"end": 7008.94,
"text": " But the points I've addressed, I'm not happy with."
},
{
"start": 7008.94,
"end": 7011.78,
"text": " Yeah, so that was it for now."
},
{
"start": 7011.78,
"end": 7018.179999999999,
"text": " Sorry this was took so long, but I felt that a thorough take was necessary."
},
{
"start": 7018.18,
"end": 7039.22,
"text": " Have a nice rest of the day."
}
] |
sbKaUc0tPaY | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | The Odds are Odd: A Statistical Test for Detecting Adversarial Examples | [
"Science & Technology"
] | [] | https://arxiv.org/abs/1902.04818
Abstract:
We investigate conditions under which test statistics exist that can reliably detect examples, which have been adversarially manipulated in a white-box attack. These statistics can be easily computed and calibrated by randomly corrupting inputs. They exploit certain anomalies that adversarial attacks introduce, in particular if they follow the paradigm of choosing perturbations optimally under p-norm constraints. Access to the log-odds is the only requirement to defend models. We justify our approach empirically, but also provide conditions under which detectability via the suggested test statistics is guaranteed to be effective. In our experiments, we show that it is even possible to correct test time predictions for adversarial attacks with high accuracy.
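As a rough illustration of the kind of noise-induced log-odds statistic the abstract refers to (function names, the noise level, and the linear stand-in classifier below are illustrative assumptions, not the paper's reference implementation): corrupt the input several times with random noise and average how the log-odds between the predicted class and every other class shift; unusually large shifts toward some class, compared against thresholds calibrated on clean data, flag the input as adversarial.

import numpy as np

def noise_logodds_shift(x, logits_fn, n_samples=32, sigma=0.05, rng=None):
    # Average, over random Gaussian corruptions of x, how much the log-odds
    # (logit_z - logit_y) move for every class z relative to the clean input,
    # where y is the class predicted on the clean input.
    rng = rng or np.random.default_rng(0)
    clean = np.asarray(logits_fn(x), dtype=float)
    y = int(np.argmax(clean))
    shift = np.zeros_like(clean)
    for _ in range(n_samples):
        noisy = np.asarray(logits_fn(x + sigma * rng.normal(size=x.shape)), dtype=float)
        shift += (noisy - noisy[y]) - (clean - clean[y])
    shift /= n_samples
    shift[y] = -np.inf                 # ignore the predicted class itself
    return y, shift                    # compare against per-class thresholds calibrated on clean data

# toy usage with a linear stand-in classifier
rng = np.random.default_rng(0)
W = rng.normal(size=(32, 10))
y_pred, s = noise_logodds_shift(rng.normal(size=32), lambda v: W.T @ v)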
Authors:
Kevin Roth, Yannic Kilcher, Thomas Hofmann | Hello and welcome. Today we're looking at the odds are odd, a statistical test for detecting adversarial examples. So shameless self-promotion here since this is me. So this is an archive and basically what we do is we're detecting adversarial examples. For those who don't know what an adversarial example is, it's basically a way of fooling a classifier in order to kind of get it to do something weird. Let's look at it. So maybe you have an image of a cat. I have no clue how a cat looks. Alright, so you have an image of a cat and you have a classifier. So the classifier takes this image as an input, kind of winds it down to some probabilities of classes and cat, dog and so on. And it then gives you an estimate of how likely each class is. So what the adversarial example does is it changes this image and it adds a noise. So this is just a very specific noise and you have kind of a multiplier here, gamma, which is super small. So the noise is almost... you can't see it with a human eye basically, it's so small. But it's able to perturb this image in a way that the probabilities will change such that all of a sudden a different class now is the highest class. So basically we're able to fool these classifiers by adding just very little bit of very very specific noise. So that's an adversarial example. These have many implications in, let's say, security applications and also in understanding how these classifier works. Alright, so our task is to explain and detect them, explain why they happen and detect when they happen. Alright, so what do we do? Basically let's just jump right into the thing here. We view a classifier as an output, so you have logits, what's called logits, L is this. This here is your neural network up to the last layer. Basically it can be like something like a convolutional neural network and so on. It gives you a feature representation. So you extract from the image X a feature representation, which is this entire thing here, and then you multiply this feature representation. So this is going to be some vector of dimension D. You multiply this by this weight matrix, which is going to be something like, okay I've drawn it in the wrong direction here. Let's draw W over here. It's going to be D by, let's say K, where K is the number of classes. Okay, still wrong. D by K, right? And output a vector of dimension K, which then is this cat, dog and so on. So these are the logits and the logits get transformed to the probabilities by running it through a softmax layer. But basically we view a classifier as having a feature representation and a weight matrix. And this here is a matrix multiplication adult product by matrix. So what we see basically is, this is kind of where the adversarial examples happen. So when we look at this weight matrix, right, again we look at the D dimensional feature vector here, and we look at the weight matrix, what it does is it has columns, right? Columns. Let's say we have four classes here, right? So it has these four columns and each of them is D dimensional. So each of them is going to be multiplied by this thing and giving a score. So the final score for a class is going to be the multiplication of a row W1, W2, W3, W4 by this feature vector. Let's call the feature vector little f. So your logit of class i is going to be the inner product of W i and f. Alright, we'll leave away biases for now. There's okay, we can introduce biases to make it a bit more complicated but it changes nothing. 
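To make this view concrete, here is a minimal sketch (shapes and numbers are made up for illustration; this is not code from the paper): the logit of class i is simply the inner product of the i-th weight column with the feature vector f, and the class with the largest logit wins.

import numpy as np

rng = np.random.default_rng(0)
d, k = 512, 10                  # feature dimension and number of classes (illustrative)
f = rng.normal(size=d)          # feature representation produced by the network backbone
W = rng.normal(size=(d, k))     # weight matrix: one d-dimensional column W_i per class

logits = W.T @ f                # logit_i = <W_i, f> for every class i
probs = np.exp(logits - logits.max())
probs = probs / probs.sum()     # softmax turns the logits into class probabilities
pred = int(np.argmax(logits))   # the class whose column aligns best with f wins
print(pred, float(probs[pred]))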
So your logit is going to be the inner product and whichever logit is the highest wins. So that's going to be the prediction of the classifier. So since you can, in an adversarial example, what can you change? You can change this feature vector here, this f. By changing the x you can change the output of the convolutional neural network which is the feature vector. And what you have to do in order to make a logit as high as possible, basically make one class as high as possible, is you need to make this inner product as high as possible. And what's an inner product? If you look in a classic vector representation space, if this is W i and this is f, what you want to do is you want to make f and W align as much as possible. Because the inner product is going to be basically dependent on the angle and the magnitude. So you can you can stretch f for sure but it's going to be kind of aligned with all the W's then more by stretching or more negatively in whatever way you want it. But basically you want to rotate, you want to align as much as possible the f with the W i. So you want to kind of go into this direction with f. So now not only do you want to kind of maximize f with a particular W i, what you want to do is be adversarial. The adversarial task is often framed as just it's either targeted, so find a... so i needs to be a particular other class, or it's untargeted which means just just give me a perturbation that will make the classifier be fooled. And be fooled means whatever it predicts right now it should predict something different. So what you ultimately want to do is you want this as high as possible for some i that is not the correct i and you want this other quantity W y. Let's call it W y. Let's say the classifier is 100% correct. So W y, y is the label of x. W y is whatever column here is currently predicted. So you want the sum column where i is not equal to y to have maximum inner product and so this is not no longer l i, we'll get to that, to have maximum inner product and you want this inner product with the correct class to be as small as possible, which ultimately means you want this entire quantity maximized. So it's a pretty simple idea. We'll call, let's say this is the log i minus the log y. We have slightly different notation in the paper. I think we call this z but never mind. So you basically just want to make this as large as possible. So our point is since this is not the only thing, you want to maximize this but you have a constraint. Namely your constraint is that your delta x can only be small. Your delta x can only be small because the point of an adversarial example is that the perturbation is so small you can't see it and that means that you basically don't have much wiggle room to do these perturbations, which means that we should be able to detect a pattern like this in the latent space. So this here is the latent space feature vector and if we can kind of detect a pattern in the latent space then we kind of get the adversarial example detector. So how do we do this? We measure exactly this. What we do is we measure the alignment between the original, between the currently predicted class and between all other classes. So in this graphic here you see this. It's a 10 class classifier. This is CIFAR10. We only show one, two, three, four, we only show six of the other classes but we have the full graphic in this. So this shows an adversarial example. The axis going on top of each of the images is the alignment with the adversarial class. 
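As a hedged sketch of that objective (a toy linear classifier where f(x) = x, so the gradient is available in closed form; a real attack would backpropagate through the deep feature extractor, and epsilon and all shapes here are arbitrary): one gradient-sign step increases z = <W_i - W_y, x + delta> under a small L-infinity budget.

import numpy as np

rng = np.random.default_rng(0)
d, k = 64, 10
W = rng.normal(size=(d, k))            # toy linear classifier, so the feature f(x) is x itself
x = rng.normal(size=d)

logits = W.T @ x
y = int(np.argmax(logits))             # currently predicted (source) class
i = int(np.argsort(logits)[-2])        # runner-up class we try to push up

# Goal: increase z = <W_i - W_y, x + delta> while keeping delta tiny.
# For this linear toy the gradient of z w.r.t. x is exactly W_i - W_y.
eps = 0.1                              # small L-infinity budget
delta = eps * np.sign(W[:, i] - W[:, y])

adv_logits = W.T @ (x + delta)
print("predicted before:", y, "after:", int(np.argmax(adv_logits)))

With that picture of how the attacker trades alignment against the perturbation budget, back to the alignment plots.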
So this has been an adversarially perturbed sample. So this shows the alignment with the adversarial class and of course you see the bright red dot, if you just focus on that, that is the adversarial example projected into this. So of course the alignment is going to be very very high with this class since the classifier actually predicts this class. The blue here is the sample that the adversarial sample was derived from which means the original image. And you already see without looking at any of the other dots that the blue is around zero here, around zero here, around zero here, here and here. But here it's very high in this axis. So the axis to the right is for each of these plots here it's one of the other classes except for the currently predicted adversarial class. So that this axis is always the same axis and while the axis to the right for each plot is a different one. And you can already see, and that's why we frame it in the green, this plot here is where the axis to the right corresponds to the original class of the classifier. So don't look yet at the other plots. What you see here is basically the blue is really high in this class right and the adversarial example procedure basically has driven it down this class and up this class which is exactly saying it has made this inner product small and this inner product large. So where do we go from here? Let's actually jump this graphic a bit and go to this one that's way down here. Alright so what we've done is we've taken the an example just out of the data set right and then we've taken an adversarial example. So say X is the example of the data set and then X hat is the adversarial example derived from this. Alright in this plot to the right X would be sitting about here. I'm gonna explain what the what the the kind of meaning is and X hat would be sitting down here right it's about one third from the top one third from the bottom let me draw this more. Alright so what this axis represents here is basically what we've done is we've gone from X to X hat in very small steps and at each step we've asked the classifier hey classifier what's the probability of the class Y so Y is the class of X right and the class of X X hat is some some different what some some other class right since it's an adversarial example we've so we've asked the classifier what's the class of X and basically no basically we've asked what what's the probability that Y is the class of X and that's represented in white right so the more white the higher the classifier thinks the class Y is probable. 
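This probability landscape can be probed with a simple sweep (a sketch; prob_of_y is an assumed placeholder for whatever classifier is being inspected, and the toy usage at the end is made up): step from x toward x_hat along the adversarial direction and along a random orthogonal direction, querying p(y) at every grid point.

import numpy as np

def probe_grid(x, x_hat, prob_of_y, steps=21, scale=1.5, rng=None):
    # Sample p(y) on a 2D grid spanned by the adversarial direction (x_hat - x)
    # and a random orthogonal direction, mirroring the white probability maps above.
    rng = rng or np.random.default_rng(0)
    d_adv = (x_hat - x).ravel().astype(float)
    d_adv /= np.linalg.norm(d_adv) + 1e-12
    r = rng.normal(size=d_adv.shape)
    r -= (r @ d_adv) * d_adv                      # remove the adversarial component
    r /= np.linalg.norm(r) + 1e-12
    ts = np.linspace(-scale, scale, steps)
    grid = np.empty((steps, steps))
    for a, ta in enumerate(ts):                   # steps along the adversarial direction
        for b, tb in enumerate(ts):               # steps along the orthogonal direction
            grid[a, b] = prob_of_y(x + (ta * d_adv + tb * r).reshape(x.shape))
    return grid

# toy usage: a made-up classifier whose p(y) simply decays with distance from x
x = np.zeros(8)
grid = probe_grid(x, x + 0.3, lambda z: float(np.exp(-np.linalg.norm(z - x))))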
So the direction down is going into the direction of X hat minus X so it's going into the into the adversarial direction and then the direction across is we've taken some direction that's orthogonal to this direction and then also went into tiny steps and asked the classifier hey classifier what do you think is the probability of Y here so we've basically done this kind of grid sampling and at each point we've asked the classifier what do you think which how probable is Y and the classifier will always output some number and we plot it in white and so this direction again is always the same in this direction we basically randomize it and then aggregate it over over lots and lots of different samples and we also aggregate this entire thing over the entire over the data set so we get a comprehensive view of what adversarial examples look like in the view of the classifier and what we find is pretty interesting so when you go from the original class you basically in every direction here in every direction it kind of the original class kind of decreases smoothly right you see at the edges here it kind of gets black so the further away you go from the original example that the more the kind of shadier the classifier gets it's like yeah I'm not so sure anymore that this is the class right but if you go into the direction of here if you go into the direction of the adversarial example the kind of drop-off is first of all it's very steep so all of a sudden here you're in very dark territory which means the classifier is doesn't think why is probable at all anymore and moreover you get this kind of cone here so what we see is what we what we think is happening is that given an example there are these directions in late in in in image space basically straight directions that go to adversarial examples right and we call these cones because they they're kind of low dimensional directions in in the space where the adversarial example lies and what's really interesting is we have those plots here do we have more so what's what's quite interesting is that if you if you go come on well this is kind of okay the quality of the of the plot is not is not very very good so I'm gonna I may be able to to draw this here so if your start here and you go here what happens to the original class is you start out high you go down rapidly and you stay down even if you go super far into this direction the this class will stay down whereas let's say this is y hat y hat will start low go up and then kind of fade so here is where the adversarial example would sit sorry at about this distance that's this distance here means as you go towards the adversarial example right here the probability of the adversarial class rises and the probability of the original class drops then as you go further this is what's what's interesting kind of this probability here drops which means the classifier is kind of like yeah okay there's too much noise now I'm not so sure about this class anymore but the this this class here kind of stays low very very long even if you go into this direction so this this gives us kind of a hint that adversarial examples are characterized by specific directions that you go into that you that you can go into and kind of suppress the original class and pump the new class up which is kind of exactly what we've claimed with this inner inner product alignment right that the next experiment we've done is we've taken this adversarial example here and said well if we go outside if we go into random directions right 
it's just really this one direction that's problematic if we go into random directions actually we should be you know go back to the original class right since it's basically surrounded by the original class this is just one direction and this here represents all the other directions there are and how many directions are there in in pixel space like a lot so we should be able to get back to the original class but that's not the case that's we found that's not the case and we also found why so I still want to go back to this plot here if you do this if you add noise and this is the noise magnitude here what you'll see is the orange here is the adversarial class so orange will go down down down down down right as you increase the noise the blue is the source class so the blue goes up and it goes up faster you see it goes up faster than the green which is the highest other class so green is whatever class is not that was there not the source but the highest class other than that so the source class goes up quickly but before the source class can overpass the adversarial class which happens back there the highest other class has already kind of taken over so the source class is basically too weak and if you again look at this this plot here if you go with an actual color picker you see that the amount of white here and here is is not high enough it's like 0.3 or something out of one or even lower so the the kind of source class is not strong enough that by simply adding a bit of noise you can go back but we thought hey if this is correct we can actually detect we can detect this effect here this rising of the source class faster so our plan is basically we add noise a particular amount of noise just a little bit actually and then we detect which basically which class falls and which class rises and the way we do this is we we detect the this exact alignment that I've described before under noise so we form this quantity here for all classes other than y so y is the the class that's currently predicted and we look at it what happens under it under noise right so and that's where we get to this graphic here so again this axis is the adversarial class or the class that's currently predicted right this axis here is all the other classes for each plot one and when we add noise what do you see is the noise magnitude is encoded in the brightness of the dots so the darker the red dots the more noise we've added here is the original adversarial sample then as we add noise you see here here more noise more noise more noise it nothing's really happening for the for the if if if it's like one class that has nothing to do with the original class it simply kind of goes down simply kind of gets less sure about this class right but in case of the original class that the adversarial example was derived from it really rises it really kind of at the same time that it drops it rises into that direction so we're able to measure these these deltas here under noise and we're able to to devise basically statistics of what happens to these quantities under like if it's not an adversarial sample versus what happens to these quantities if it's an adversarial sample so here you see pairings of basically source class and adversarial class samples so each of these histograms is collected from that and what you can see is in blue the kind of alignment under noise of the source class sorry the alignments under noise of a non perturbed sample and in orange the alignments under noise of an adversarial sample and what's cool is 
that these these alignments you can see in all of these cases are very different so there is a clear signature in the adversarial sample in these noise induced alignments with the with the weight matrix rows that makes you able to basically build a detector you can say all right anything to the left is clean anything to the right is adversarial and we can do this over many different types of noises and then build basically a voting mechanism on that and thereby detect adversarial examples so we have a bunch of experiments we mostly experiment on the c410 and on the image net data set and you can see over here so this is the main kind of one of the main results the detection rates of our statistical test so as you can see we are detection rate this is on clean samples on clean samples you want the detection rate to be low on adversarial samples you want the detection rate to be high and this we achieve very large detection rates while having very low false positive rates especially on image net so it seems like the more tuned these models are the better these models are the better we are at detecting adversarial examples to it it's kind of a direct correlation to how well the models perform on accuracy in a clean setting and what we can do is now since we cannot only detect these things but we can detect these things in a fashion so if if you look at these things and you have like a sample of a particular class that's predicted right let's say this class and you go and look at it at the position of the noise induced features over each of them so let's say here here here here here here here here here right you can then clearly say well not only do I detect an adversarial example here right I look at the I look at each of the class of the classes that it could be derived from right if all if all of them say it's a clean sample then all right it's a clean sample but if one of them says it's an adversarial sample then I don't not only do I know it's an adversarial sample but I say aha this must be the source class right this is the exact effect we saw here all right we can if we detect this pattern here we can also back deduce basically aha so this must be the original class that the adversarial example was derived from so we're basically able to build a not only a detector but we're basically able to reconstruct the original class and here you see for these models let's say on CIFAR-10 we imagine that is a bit too large as of yet for our compute but on these models that have clean accuracies that are pretty high on CIFAR-10 plus this this kind of toy network here we're able to reconstruct the original class so basically this is defense against adversarial examples by by getting to almost clean accuracy back so this is a really surprising actually and kind of nice so we we do a bunch of other experiments including we defend against an attacker that's actually aware of this thing but the main the main point here is we don't say this is kind of the end-all method of defending against adversarial examples we simply want to kind of encourage the way of thinking of of these kind of noise what what if you what if you noise induce perturbations how does your network react to that can you can you detect these effects here can you detect effects like this and are these unavoidable or are there architectures are there architectures we can basically build such that adversarial examples have no chance except doing something like this which we can then easily detect all right so that was a bit of an 
introduction if you like it check out the entire paper and goodbye | [
{
"start": 0,
"end": 5.6000000000000005,
"text": " Hello and welcome. Today we're looking at the odds are odd, a statistical test for"
},
{
"start": 5.6000000000000005,
"end": 11.76,
"text": " detecting adversarial examples. So shameless self-promotion here since this"
},
{
"start": 11.76,
"end": 21.28,
"text": " is me. So this is an archive and basically what we do is we're detecting"
},
{
"start": 21.28,
"end": 25.8,
"text": " adversarial examples. For those who don't know what an adversarial example is, it's"
},
{
"start": 25.8,
"end": 36.28,
"text": " basically a way of fooling a classifier in order to kind of get it to do"
},
{
"start": 36.28,
"end": 42.2,
"text": " something weird. Let's look at it. So maybe you have an image of a cat."
},
{
"start": 42.2,
"end": 50.24,
"text": " I have no clue how a cat looks. Alright, so you have an image of a cat and you have a"
},
{
"start": 50.24,
"end": 54.88,
"text": " classifier. So the classifier takes this image as an input, kind of winds it"
},
{
"start": 54.88,
"end": 65,
"text": " down to some probabilities of classes and cat, dog and so on. And it then gives you"
},
{
"start": 65,
"end": 76.4,
"text": " an estimate of how likely each class is. So what the adversarial example does"
},
{
"start": 76.4,
"end": 85.92,
"text": " is it changes this image and it adds a noise. So this is just a very"
},
{
"start": 85.92,
"end": 91.32000000000001,
"text": " specific noise and you have kind of a multiplier here, gamma, which is super"
},
{
"start": 91.32000000000001,
"end": 97.4,
"text": " small. So the noise is almost... you can't see it with a human eye basically, it's"
},
{
"start": 97.4,
"end": 104.56,
"text": " so small. But it's able to perturb this image in a way that the"
},
{
"start": 104.56,
"end": 111.2,
"text": " probabilities will change such that all of a sudden a different class now is the"
},
{
"start": 111.2,
"end": 116.32000000000001,
"text": " highest class. So basically we're able to fool these classifiers by adding just"
},
{
"start": 116.32000000000001,
"end": 121.92,
"text": " very little bit of very very specific noise. So that's an adversarial example."
},
{
"start": 121.92,
"end": 127,
"text": " These have many implications in, let's say, security applications and also in"
},
{
"start": 127,
"end": 132.92000000000002,
"text": " understanding how these classifier works. Alright, so our task is to explain and"
},
{
"start": 132.92,
"end": 138.51999999999998,
"text": " detect them, explain why they happen and detect when they happen."
},
{
"start": 138.51999999999998,
"end": 150.07999999999998,
"text": " Alright, so what do we do? Basically let's just jump right into"
},
{
"start": 150.07999999999998,
"end": 162.35999999999999,
"text": " the thing here. We view a classifier as an output, so you"
},
{
"start": 162.36,
"end": 168.44000000000003,
"text": " have logits, what's called logits, L is"
},
{
"start": 172.8,
"end": 180.84,
"text": " this. This here is your neural network up to the last layer."
},
{
"start": 180.84,
"end": 185.16000000000003,
"text": " Basically it can be like something like a convolutional neural network and so on."
},
{
"start": 185.16000000000003,
"end": 190.56,
"text": " It gives you a feature representation. So you extract from the image X a feature"
},
{
"start": 190.56,
"end": 196,
"text": " representation, which is this entire thing here, and then you multiply this"
},
{
"start": 196,
"end": 202,
"text": " feature representation. So this is going to be some vector of dimension D. You"
},
{
"start": 202,
"end": 210.72,
"text": " multiply this by this weight matrix, which is going to be something like, okay"
},
{
"start": 210.72,
"end": 216.88,
"text": " I've drawn it in the wrong direction here. Let's draw W over here."
},
{
"start": 216.88,
"end": 223.4,
"text": " It's going to be D by, let's say K, where K is the number of classes."
},
{
"start": 223.4,
"end": 233.2,
"text": " Okay, still wrong. D by K, right? And output a vector of dimension K, which"
},
{
"start": 233.2,
"end": 238.4,
"text": " then is this cat, dog and so on. So these are the logits and the logits get"
},
{
"start": 238.4,
"end": 244.2,
"text": " transformed to the probabilities by running it through a softmax layer. But"
},
{
"start": 244.2,
"end": 251.83999999999997,
"text": " basically we view a classifier as having a feature representation and a weight"
},
{
"start": 251.83999999999997,
"end": 256.96,
"text": " matrix. And this here is a matrix multiplication adult product by"
},
{
"start": 256.96,
"end": 266.84,
"text": " matrix. So what we see basically is, this is kind of where the"
},
{
"start": 266.84,
"end": 272.08,
"text": " adversarial examples happen. So when we look at this weight matrix, right, again"
},
{
"start": 272.08,
"end": 275.64,
"text": " we look at the D dimensional feature vector here, and we look at the weight"
},
{
"start": 275.64,
"end": 286.52,
"text": " matrix, what it does is it has columns, right? Columns. Let's say we have four"
},
{
"start": 286.52,
"end": 293.28,
"text": " classes here, right? So it has these four columns and each of them is D"
},
{
"start": 293.28,
"end": 300.2,
"text": " dimensional. So each of them is going to be multiplied by this thing and giving a"
},
{
"start": 300.2,
"end": 305.64,
"text": " score. So the final score for a class is going to be the multiplication of a row"
},
{
"start": 305.64,
"end": 317.59999999999997,
"text": " W1, W2, W3, W4 by this feature vector. Let's call the feature vector little f."
},
{
"start": 317.59999999999997,
"end": 328.03999999999996,
"text": " So your logit of class i is going to be the inner product of W i and f."
},
{
"start": 328.04,
"end": 334.72,
"text": " Alright, we'll leave away biases for now. There's okay, we can introduce biases to"
},
{
"start": 334.72,
"end": 339.40000000000003,
"text": " make it a bit more complicated but it changes nothing. So your logit is going to"
},
{
"start": 339.40000000000003,
"end": 346,
"text": " be the inner product and whichever logit is the highest wins. So that's going"
},
{
"start": 346,
"end": 352.08000000000004,
"text": " to be the prediction of the classifier. So since you can, in an"
},
{
"start": 352.08000000000004,
"end": 356.24,
"text": " adversarial example, what can you change? You can change this feature vector here,"
},
{
"start": 356.24,
"end": 361.84000000000003,
"text": " this f. By changing the x you can change the output of the"
},
{
"start": 361.84000000000003,
"end": 367.6,
"text": " convolutional neural network which is the feature vector. And what you have to"
},
{
"start": 367.6,
"end": 374.2,
"text": " do in order to make a logit as high as possible, basically make one class"
},
{
"start": 374.2,
"end": 378.96000000000004,
"text": " as high as possible, is you need to make this inner product as high as possible."
},
{
"start": 378.96000000000004,
"end": 384.52,
"text": " And what's an inner product? If you look in a classic vector representation"
},
{
"start": 384.52,
"end": 397.12,
"text": " space, if this is W i and this is f, what you want to do is you want to make f and"
},
{
"start": 397.12,
"end": 403.12,
"text": " W align as much as possible. Because the inner product is going to be basically"
},
{
"start": 403.12,
"end": 407.91999999999996,
"text": " dependent on the angle and the magnitude. So you can you can stretch f for sure"
},
{
"start": 407.91999999999996,
"end": 413.03999999999996,
"text": " but it's going to be kind of aligned with all the W's then more by stretching"
},
{
"start": 413.04,
"end": 417.76000000000005,
"text": " or more negatively in whatever way you want it. But basically you want to rotate,"
},
{
"start": 417.76000000000005,
"end": 425.76000000000005,
"text": " you want to align as much as possible the f with the W i. So you want to kind"
},
{
"start": 425.76000000000005,
"end": 434.32000000000005,
"text": " of go into this direction with f. So now not only do you want to kind of maximize"
},
{
"start": 434.32000000000005,
"end": 440.32000000000005,
"text": " f with a particular W i, what you want to do is be adversarial. The adversarial"
},
{
"start": 440.32,
"end": 446.59999999999997,
"text": " task is often framed as just it's either targeted, so find a... so i needs to be a"
},
{
"start": 446.59999999999997,
"end": 450.96,
"text": " particular other class, or it's untargeted which means just just give me"
},
{
"start": 450.96,
"end": 457.71999999999997,
"text": " a perturbation that will make the classifier be fooled. And be fooled means"
},
{
"start": 457.71999999999997,
"end": 464.96,
"text": " whatever it predicts right now it should predict something different. So what"
},
{
"start": 464.96,
"end": 473,
"text": " you ultimately want to do is you want this as high as possible for some i"
},
{
"start": 473,
"end": 481.15999999999997,
"text": " that is not the correct i and you want this other"
},
{
"start": 481.15999999999997,
"end": 492.56,
"text": " quantity W y. Let's call it W y. Let's say the classifier is 100% correct."
},
{
"start": 492.56,
"end": 502.08,
"text": " So W y, y is the label of x. W y is whatever column here is"
},
{
"start": 502.08,
"end": 511.64,
"text": " currently predicted. So you want the sum column where i is not equal to y to"
},
{
"start": 511.64,
"end": 517.52,
"text": " have maximum inner product and so this is not no longer l i, we'll get to that,"
},
{
"start": 517.52,
"end": 525.24,
"text": " to have maximum inner product and you want this inner product"
},
{
"start": 525.24,
"end": 530.56,
"text": " with the correct class to be as small as possible, which ultimately means you want"
},
{
"start": 530.56,
"end": 537.4399999999999,
"text": " this entire quantity maximized. So it's a pretty simple idea. We'll call, let's say"
},
{
"start": 537.4399999999999,
"end": 542.6,
"text": " this is the log i minus the log y. We have slightly different notation in the"
},
{
"start": 542.6,
"end": 551.52,
"text": " paper. I think we call this z but never mind. So you basically just want to make"
},
{
"start": 551.52,
"end": 559.36,
"text": " this as large as possible. So our point is since this is not the only"
},
{
"start": 559.36,
"end": 567.44,
"text": " thing, you want to maximize this but you have a constraint. Namely your constraint"
},
{
"start": 567.44,
"end": 574.48,
"text": " is that your delta x can only be small. Your delta x can only be small"
},
{
"start": 574.48,
"end": 578.96,
"text": " because the point of an adversarial example is that the perturbation is so"
},
{
"start": 578.96,
"end": 585.44,
"text": " small you can't see it and that means that you basically don't have much"
},
{
"start": 585.44,
"end": 593.48,
"text": " wiggle room to do these perturbations, which means that we should be"
},
{
"start": 593.48,
"end": 600.24,
"text": " able to detect a pattern like this in the latent space. So this here is"
},
{
"start": 600.24,
"end": 608.08,
"text": " the latent space feature vector and if we can kind of detect a pattern in the"
},
{
"start": 608.08,
"end": 616.64,
"text": " latent space then we kind of get the adversarial example detector."
},
{
"start": 616.64,
"end": 623.6,
"text": " So how do we do this? We measure exactly this. What we do is we measure the"
},
{
"start": 623.6,
"end": 631.16,
"text": " alignment between the original, between the currently predicted class and"
},
{
"start": 631.16,
"end": 638.36,
"text": " between all other classes. So in this graphic here you see this. It's a 10"
},
{
"start": 638.36,
"end": 644.92,
"text": " class classifier. This is CIFAR10. We only show one, two, three, four, we only show six of the other"
},
{
"start": 644.92,
"end": 653.52,
"text": " classes but we have the full graphic in this. So this shows an adversarial"
},
{
"start": 653.52,
"end": 662.16,
"text": " example. The axis going on top of each of the images is the alignment with the"
},
{
"start": 662.16,
"end": 667.16,
"text": " adversarial class. So this has been an adversarially perturbed sample. So this"
},
{
"start": 667.16,
"end": 672.0799999999999,
"text": " shows the alignment with the adversarial class and of course you see the bright"
},
{
"start": 672.08,
"end": 679.2,
"text": " red dot, if you just focus on that, that is the adversarial example"
},
{
"start": 679.2,
"end": 684.5600000000001,
"text": " projected into this. So of course the alignment is going to be very very"
},
{
"start": 684.5600000000001,
"end": 689.48,
"text": " high with this class since the classifier actually predicts this class."
},
{
"start": 689.48,
"end": 695.84,
"text": " The blue here is the sample that the adversarial sample was derived from"
},
{
"start": 695.84,
"end": 702.88,
"text": " which means the original image. And you already see without looking at any of"
},
{
"start": 702.88,
"end": 709.0400000000001,
"text": " the other dots that the blue is around zero here, around zero here, around zero"
},
{
"start": 709.0400000000001,
"end": 715.88,
"text": " here, here and here. But here it's very high in this axis. So the axis to the"
},
{
"start": 715.88,
"end": 722.2800000000001,
"text": " right is for each of these plots here it's one of the other"
},
{
"start": 722.28,
"end": 727.12,
"text": " classes except for the currently predicted adversarial class. So that"
},
{
"start": 727.12,
"end": 732.4,
"text": " this axis is always the same axis and while the axis to the right for each"
},
{
"start": 732.4,
"end": 736.64,
"text": " plot is a different one. And you can already see, and that's why we frame it"
},
{
"start": 736.64,
"end": 742.9599999999999,
"text": " in the green, this plot here is where the axis to the right"
},
{
"start": 742.9599999999999,
"end": 749.3199999999999,
"text": " corresponds to the original class of the classifier. So don't look yet at"
},
{
"start": 749.32,
"end": 756.5200000000001,
"text": " the other plots. What you see here is basically the blue is really high"
},
{
"start": 756.5200000000001,
"end": 763.0400000000001,
"text": " in this class right and the adversarial example procedure basically has driven"
},
{
"start": 763.0400000000001,
"end": 770.88,
"text": " it down this class and up this class which is exactly saying it has made this"
},
{
"start": 770.88,
"end": 779.68,
"text": " inner product small and this inner product large. So where do we go from"
},
{
"start": 779.68,
"end": 788.96,
"text": " here? Let's actually jump this graphic a bit and go to this one that's way down"
},
{
"start": 788.96,
"end": 800.48,
"text": " here. Alright so what we've done is we've taken the an example just out of the"
},
{
"start": 800.48,
"end": 808.48,
"text": " data set right and then we've taken an adversarial example. So say X is the"
},
{
"start": 808.48,
"end": 815.12,
"text": " example of the data set and then X hat is the adversarial example derived from"
},
{
"start": 815.12,
"end": 821.08,
"text": " this. Alright in this plot to the right X would be sitting about here. I'm gonna"
},
{
"start": 821.08,
"end": 827.76,
"text": " explain what the what the the kind of meaning is and X hat would be sitting"
},
{
"start": 827.76,
"end": 833.04,
"text": " down here right it's about one third from the top one third from the bottom"
},
{
"start": 833.04,
"end": 844.2,
"text": " let me draw this more. Alright so what this axis represents here is basically"
},
{
"start": 844.2,
"end": 851.64,
"text": " what we've done is we've gone from X to X hat in very small steps and at each"
},
{
"start": 851.64,
"end": 859.76,
"text": " step we've asked the classifier hey classifier what's the probability of the"
},
{
"start": 859.76,
"end": 869.4,
"text": " class Y so Y is the class of X right and the class of X X hat is some some"
},
{
"start": 869.4,
"end": 873.64,
"text": " different what some some other class right since it's an adversarial example"
},
{
"start": 873.64,
"end": 880.8,
"text": " we've so we've asked the classifier what's the class of X and basically no"
},
{
"start": 880.8,
"end": 885.9599999999999,
"text": " basically we've asked what what's the probability that Y is the class of X and"
},
{
"start": 885.9599999999999,
"end": 892.28,
"text": " that's represented in white right so the more white the higher the classifier"
},
{
"start": 892.28,
"end": 902.12,
"text": " thinks the class Y is probable. So the direction down is going into the"
},
{
"start": 902.12,
"end": 912.68,
"text": " direction of X hat minus X so it's going into the into the adversarial direction"
},
{
"start": 912.68,
"end": 921.04,
"text": " and then the direction across is we've taken some direction that's orthogonal"
},
{
"start": 921.04,
"end": 927.04,
"text": " to this direction and then also went into tiny steps and asked the classifier"
},
{
"start": 927.04,
"end": 931.72,
"text": " hey classifier what do you think is the probability of Y here so we've basically"
},
{
"start": 931.72,
"end": 938.0400000000001,
"text": " done this kind of grid sampling and at each point we've asked the classifier"
},
{
"start": 938.0400000000001,
"end": 943.8000000000001,
"text": " what do you think which how probable is Y and the classifier will always output"
},
{
"start": 943.8000000000001,
"end": 949.72,
"text": " some number and we plot it in white and so this direction again is always the"
},
{
"start": 949.72,
"end": 955.4,
"text": " same in this direction we basically randomize it and then aggregate it over"
},
{
"start": 955.4,
"end": 961.44,
"text": " over lots and lots of different samples and we also aggregate this entire thing"
},
{
"start": 961.44,
"end": 968.1600000000001,
"text": " over the entire over the data set so we get a comprehensive view of what"
},
{
"start": 968.1600000000001,
"end": 973.32,
"text": " adversarial examples look like in the view of the classifier and what we find"
},
{
"start": 973.32,
"end": 981.8000000000001,
"text": " is pretty interesting so when you go from the original class you basically in"
},
{
"start": 981.8000000000001,
"end": 989.32,
"text": " every direction here in every direction it kind of the original class kind of"
},
{
"start": 989.32,
"end": 994.8000000000001,
"text": " decreases smoothly right you see at the edges here it kind of gets black so the"
},
{
"start": 994.8000000000001,
"end": 1001.2,
"text": " further away you go from the original example that the more the kind of"
},
{
"start": 1001.2,
"end": 1005.6800000000001,
"text": " shadier the classifier gets it's like yeah I'm not so sure anymore that this"
},
{
"start": 1005.6800000000001,
"end": 1011.5200000000001,
"text": " is the class right but if you go into the direction of here if you go into"
},
{
"start": 1011.5200000000001,
"end": 1017.9200000000001,
"text": " the direction of the adversarial example the kind of drop-off is first of all"
},
{
"start": 1017.92,
"end": 1024.08,
"text": " it's very steep so all of a sudden here you're in very dark territory which"
},
{
"start": 1024.08,
"end": 1031.56,
"text": " means the classifier is doesn't think why is probable at all anymore and"
},
{
"start": 1031.56,
"end": 1039.92,
"text": " moreover you get this kind of cone here so what we see is what we what we think"
},
{
"start": 1039.92,
"end": 1046.6,
"text": " is happening is that given an example there are these directions in late in in"
},
{
"start": 1046.6,
"end": 1053.56,
"text": " in image space basically straight directions that go to adversarial"
},
{
"start": 1053.56,
"end": 1059.1999999999998,
"text": " examples right and we call these cones because they they're kind of low"
},
{
"start": 1059.1999999999998,
"end": 1066.1599999999999,
"text": " dimensional directions in in the space where the adversarial example lies and"
},
{
"start": 1066.16,
"end": 1078.72,
"text": " what's really interesting is we have those plots here do we have more so"
},
{
"start": 1078.72,
"end": 1096.6000000000001,
"text": " what's what's quite interesting is that if you if you go come on well this is"
},
{
"start": 1096.6000000000001,
"end": 1104.92,
"text": " kind of okay the quality of the of the plot is not is not very very good so I'm"
},
{
"start": 1104.92,
"end": 1114.52,
"text": " gonna I may be able to to draw this here so if your start here and you go here"
},
{
"start": 1114.52,
"end": 1124.1200000000001,
"text": " what happens to the original class is you start out high you go down rapidly"
},
{
"start": 1124.1200000000001,
"end": 1132.8000000000002,
"text": " and you stay down even if you go super far into this direction the this class"
},
{
"start": 1132.8,
"end": 1141.9199999999998,
"text": " will stay down whereas let's say this is y hat y hat will start low go up and"
},
{
"start": 1141.9199999999998,
"end": 1150.8799999999999,
"text": " then kind of fade so here is where the adversarial example would sit sorry at"
},
{
"start": 1150.8799999999999,
"end": 1160,
"text": " about this distance that's this distance here means as you go towards the"
},
{
"start": 1160,
"end": 1166.12,
"text": " adversarial example right here the probability of the adversarial class"
},
{
"start": 1166.12,
"end": 1170.8,
"text": " rises and the probability of the original class drops then as you go"
},
{
"start": 1170.8,
"end": 1175.28,
"text": " further this is what's what's interesting kind of this probability here"
},
{
"start": 1175.28,
"end": 1179.28,
"text": " drops which means the classifier is kind of like yeah okay there's too much noise"
},
{
"start": 1179.28,
"end": 1183.56,
"text": " now I'm not so sure about this class anymore but the this this class here"
},
{
"start": 1183.56,
"end": 1189.56,
"text": " kind of stays low very very long even if you go into this direction so this this"
},
{
"start": 1189.56,
"end": 1194.1599999999999,
"text": " gives us kind of a hint that adversarial examples are characterized by specific"
},
{
"start": 1194.1599999999999,
"end": 1202.28,
"text": " directions that you go into that you that you can go into and kind of"
},
{
"start": 1202.28,
"end": 1207.96,
"text": " suppress the original class and pump the new class up which is kind of exactly"
},
{
"start": 1207.96,
"end": 1217.6799999999998,
"text": " what we've claimed with this inner inner product alignment right that the next"
},
{
"start": 1217.68,
"end": 1223.96,
"text": " experiment we've done is we've taken this adversarial example here and said"
},
{
"start": 1223.96,
"end": 1231.6000000000001,
"text": " well if we go outside if we go into random directions right it's just really"
},
{
"start": 1231.6000000000001,
"end": 1235.96,
"text": " this one direction that's problematic if we go into random directions actually we"
},
{
"start": 1235.96,
"end": 1239.48,
"text": " should be you know go back to the original class right since it's"
},
{
"start": 1239.48,
"end": 1243.6000000000001,
"text": " basically surrounded by the original class this is just one direction and this"
},
{
"start": 1243.6000000000001,
"end": 1247.5600000000002,
"text": " here represents all the other directions there are and how many directions are"
},
{
"start": 1247.56,
"end": 1252.76,
"text": " there in in pixel space like a lot so we should be able to get back to the"
},
{
"start": 1252.76,
"end": 1258.32,
"text": " original class but that's not the case that's we found that's not the case and"
},
{
"start": 1258.32,
"end": 1267.08,
"text": " we also found why so I still want to go back to this plot here if you do this if"
},
{
"start": 1267.08,
"end": 1274.3999999999999,
"text": " you add noise and this is the noise magnitude here what you'll see is the"
},
{
"start": 1274.4,
"end": 1281.3600000000001,
"text": " orange here is the adversarial class so orange will go down down down down down"
},
{
"start": 1281.3600000000001,
"end": 1289.76,
"text": " right as you increase the noise the blue is the source class so the blue goes up"
},
{
"start": 1289.76,
"end": 1295.88,
"text": " and it goes up faster you see it goes up faster than the green which is the"
},
{
"start": 1295.88,
"end": 1299.72,
"text": " highest other class so green is whatever class is not that was there not the"
},
{
"start": 1299.72,
"end": 1305.92,
"text": " source but the highest class other than that so the source class goes up quickly"
},
{
"start": 1305.92,
"end": 1312.3600000000001,
"text": " but before the source class can overpass the adversarial class which happens back"
},
{
"start": 1312.3600000000001,
"end": 1317.16,
"text": " there the highest other class has already kind of taken over so the source"
},
{
"start": 1317.16,
"end": 1323.3600000000001,
"text": " class is basically too weak and if you again look at this this plot here if you"
},
{
"start": 1323.3600000000001,
"end": 1329.68,
"text": " go with an actual color picker you see that the amount of white here and here"
},
{
"start": 1329.68,
"end": 1338.64,
"text": " is is not high enough it's like 0.3 or something out of one or even lower so"
},
{
"start": 1338.64,
"end": 1343.72,
"text": " the the kind of source class is not strong enough that by simply adding a"
},
{
"start": 1343.72,
"end": 1354.16,
"text": " bit of noise you can go back but we thought hey if this is correct we can"
},
{
"start": 1354.16,
"end": 1360.76,
"text": " actually detect we can detect this effect here this rising of the source"
},
{
"start": 1360.76,
"end": 1368.64,
"text": " class faster so our plan is basically we add noise a particular amount of noise"
},
{
"start": 1368.64,
"end": 1375.52,
"text": " just a little bit actually and then we detect which basically which class falls"
},
{
"start": 1375.52,
"end": 1381.44,
"text": " and which class rises and the way we do this is we we detect the this exact"
},
{
"start": 1381.44,
"end": 1391.6000000000001,
"text": " alignment that I've described before under noise so we form this quantity"
},
{
"start": 1391.6000000000001,
"end": 1399.24,
"text": " here for all classes other than y so y is the the class that's currently"
},
{
"start": 1399.24,
"end": 1408.72,
"text": " predicted and we look at it what happens under it under noise right so and that's"
},
{
"start": 1408.72,
"end": 1418.48,
"text": " where we get to this graphic here so again this axis is the adversarial class"
},
{
"start": 1418.48,
"end": 1424.44,
"text": " or the class that's currently predicted right this axis here is all the other"
},
{
"start": 1424.44,
"end": 1430.68,
"text": " classes for each plot one and when we add noise what do you see is the noise"
},
{
"start": 1430.68,
"end": 1435.28,
"text": " magnitude is encoded in the brightness of the dots so the darker the red dots"
},
{
"start": 1435.28,
"end": 1442.36,
"text": " the more noise we've added here is the original adversarial sample then as we"
},
{
"start": 1442.36,
"end": 1450.2,
"text": " add noise you see here here more noise more noise more noise it nothing's"
},
{
"start": 1450.2,
"end": 1457.2,
"text": " really happening for the for the if if if it's like one class that has nothing"
},
{
"start": 1457.2,
"end": 1463.16,
"text": " to do with the original class it simply kind of goes down simply kind of gets"
},
{
"start": 1463.16,
"end": 1470.4,
"text": " less sure about this class right but in case of the original class that the"
},
{
"start": 1470.4,
"end": 1477.2,
"text": " adversarial example was derived from it really rises it really kind of at the"
},
{
"start": 1477.2,
"end": 1482.3600000000001,
"text": " same time that it drops it rises into that direction so we're able to measure"
},
{
"start": 1482.3600000000001,
"end": 1489.4,
"text": " these these deltas here under noise and we're able to to devise basically"
},
{
"start": 1489.4,
"end": 1496.24,
"text": " statistics of what happens to these quantities under like if it's not an"
},
{
"start": 1496.24,
"end": 1499.6000000000001,
"text": " adversarial sample versus what happens to these quantities if it's an adversarial"
},
{
"start": 1499.6000000000001,
"end": 1504.3200000000002,
"text": " sample so here you see pairings of basically source class and adversarial"
},
{
"start": 1504.3200000000002,
"end": 1508.8000000000002,
"text": " class samples so each of these histograms is collected from that and"
},
{
"start": 1508.8000000000002,
"end": 1517.68,
"text": " what you can see is in blue the kind of alignment under noise of the source"
},
{
"start": 1517.68,
"end": 1524.92,
"text": " class sorry the alignments under noise of a non perturbed sample and in orange"
},
{
"start": 1524.92,
"end": 1530.3200000000002,
"text": " the alignments under noise of an adversarial sample and what's cool is"
},
{
"start": 1530.3200000000002,
"end": 1536.52,
"text": " that these these alignments you can see in all of these cases are very different"
},
{
"start": 1536.52,
"end": 1541.0800000000002,
"text": " so there is a clear signature in the adversarial sample in these noise"
},
{
"start": 1541.08,
"end": 1549.56,
"text": " induced alignments with the with the weight matrix rows that makes you able"
},
{
"start": 1549.56,
"end": 1555,
"text": " to basically build a detector you can say all right anything to the left is"
},
{
"start": 1555,
"end": 1559.6,
"text": " clean anything to the right is adversarial and we can do this over many"
},
{
"start": 1559.6,
"end": 1565.6399999999999,
"text": " different types of noises and then build basically a voting mechanism on that and"
},
{
"start": 1565.64,
"end": 1571.88,
"text": " thereby detect adversarial examples so we have a bunch of experiments we mostly"
},
{
"start": 1571.88,
"end": 1584.5200000000002,
"text": " experiment on the c410 and on the image net data set and you can see over here"
},
{
"start": 1584.5200000000002,
"end": 1591,
"text": " so this is the main kind of one of the main results the detection rates of our"
},
{
"start": 1591,
"end": 1597.76,
"text": " statistical test so as you can see we are detection rate this is on clean"
},
{
"start": 1597.76,
"end": 1601.6,
"text": " samples on clean samples you want the detection rate to be low on adversarial"
},
{
"start": 1601.6,
"end": 1607.96,
"text": " samples you want the detection rate to be high and this we achieve very large"
},
{
"start": 1607.96,
"end": 1616.44,
"text": " detection rates while having very low false positive rates especially on image"
},
{
"start": 1616.44,
"end": 1621.3600000000001,
"text": " net so it seems like the more tuned these models are the better these models"
},
{
"start": 1621.3600000000001,
"end": 1625.64,
"text": " are the better we are at detecting adversarial examples to it it's kind of"
},
{
"start": 1625.64,
"end": 1632.92,
"text": " a direct correlation to how well the models perform on accuracy in a clean"
},
{
"start": 1632.92,
"end": 1640.0800000000002,
"text": " setting and what we can do is now since we cannot only detect these things but"
},
{
"start": 1640.08,
"end": 1646.9199999999998,
"text": " we can detect these things in a fashion so if if you look at these things and"
},
{
"start": 1646.9199999999998,
"end": 1652.84,
"text": " you have like a sample of a particular class that's predicted right let's say"
},
{
"start": 1652.84,
"end": 1657.36,
"text": " this class and you go and look at it at the position of the noise induced"
},
{
"start": 1657.36,
"end": 1665.6,
"text": " features over each of them so let's say here here here here here here here here"
},
{
"start": 1665.6,
"end": 1672.4399999999998,
"text": " here right you can then clearly say well not only do I detect an adversarial"
},
{
"start": 1672.4399999999998,
"end": 1678.56,
"text": " example here right I look at the I look at each of the class of the classes that"
},
{
"start": 1678.56,
"end": 1685.76,
"text": " it could be derived from right if all if all of them say it's a clean sample then"
},
{
"start": 1685.76,
"end": 1689.7199999999998,
"text": " all right it's a clean sample but if one of them says it's an adversarial sample"
},
{
"start": 1689.7199999999998,
"end": 1694.84,
"text": " then I don't not only do I know it's an adversarial sample but I say aha this"
},
{
"start": 1694.84,
"end": 1701.56,
"text": " must be the source class right this is the exact effect we saw here all right"
},
{
"start": 1701.56,
"end": 1711.6399999999999,
"text": " we can if we detect this pattern here we can also back deduce basically aha so"
},
{
"start": 1711.6399999999999,
"end": 1718.8,
"text": " this must be the original class that the adversarial example was derived from so"
},
{
"start": 1718.8,
"end": 1723.24,
"text": " we're basically able to build a not only a detector but we're basically able to"
},
{
"start": 1723.24,
"end": 1728.6,
"text": " reconstruct the original class and here you see for these models let's say on"
},
{
"start": 1728.6,
"end": 1733.64,
"text": " CIFAR-10 we imagine that is a bit too large as of yet for our compute but"
},
{
"start": 1733.64,
"end": 1739.28,
"text": " on these models that have clean accuracies that are pretty high on CIFAR-10"
},
{
"start": 1739.28,
"end": 1745.32,
"text": " plus this this kind of toy network here we're able to reconstruct the original"
},
{
"start": 1745.32,
"end": 1751.76,
"text": " class so basically this is defense against adversarial examples by by"
},
{
"start": 1751.76,
"end": 1757.8,
"text": " getting to almost clean accuracy back so this is a really surprising actually and"
},
{
"start": 1757.8,
"end": 1767.84,
"text": " kind of nice so we we do a bunch of other experiments including we defend"
},
{
"start": 1767.84,
"end": 1774.4,
"text": " against an attacker that's actually aware of this thing but the main the"
},
{
"start": 1774.4,
"end": 1779.8799999999999,
"text": " main point here is we don't say this is kind of the end-all method of defending"
},
{
"start": 1779.88,
"end": 1783.8400000000001,
"text": " against adversarial examples we simply want to kind of encourage the way of"
},
{
"start": 1783.8400000000001,
"end": 1790.0800000000002,
"text": " thinking of of these kind of noise what what if you what if you noise induce"
},
{
"start": 1790.0800000000002,
"end": 1797.16,
"text": " perturbations how does your network react to that can you can you detect"
},
{
"start": 1797.16,
"end": 1804.3600000000001,
"text": " these effects here can you detect effects like this and are these"
},
{
"start": 1804.3600000000001,
"end": 1809.24,
"text": " unavoidable or are there architectures are there architectures we can basically"
},
{
"start": 1809.24,
"end": 1814.84,
"text": " build such that adversarial examples have no chance except doing something"
},
{
"start": 1814.84,
"end": 1819.84,
"text": " like this which we can then easily detect all right so that was a bit of an"
},
{
"start": 1819.84,
"end": 1840,
"text": " introduction if you like it check out the entire paper and goodbye"
}
] |
jltgNGt8Lpg | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Neural Ordinary Differential Equations | [
"Science & Technology"
] | [] | https://arxiv.org/abs/1806.07366
Abstract:
We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
Authors:
Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, David Duvenaud | Hello and welcome. Today we're going to look at Neural Ordinary Differential Equations by Rick Chen, Julia Rubinova, Jesse Bettencourt and David Dovenoe. This has been quite an interesting kind of paper to see because it's a bit special. We're going to go over parts of it, not the full paper, just kind of the important parts because the paper is quite packed and we'd rather explain it in parts and kind of get the gist of it. So basically what they do is they say we introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black box differential equation solver. These continuous depth models have constant memory cost, adapt their evaluation strategy to each input, can explicitly trade numerical precision for speed. It sounds awesome, honestly. It sounds really cool and it sounds really new. Let's jump in. What they say is let's look at kind of classic neural networks, especially residual neural networks. What residual neural networks do is in each hidden layer they kind of have a representation. This is kind of their hidden representation at layer t. What they do is they then add something. If you don't know a recurrent neural network is where you have, let's say this is your hidden state ht, and in a classic neural network you would just have a weight matrix here, blah blah blah blah. You do a matrix multiplication to get ht plus 1. So to get the next kind of the next hidden state you do a matrix multiplication by a big weight matrix here w. In a residual neural network what you do is you have this weight matrix w, you multiply it to get delta ht plus 1 and you take ht and you add the two. You add ht and delta ht plus 1 to arrive at ht plus 1. That's a residual network. It basically doesn't learn the transformation to the next layer but it learns how is the next representation different from this representation. What do I need to add to this representation to get to the next representation? It's reasoned that for deep networks since each layer only does a little bit of transformation we should basically bias it towards keeping the representation the same and just kind of changing it a little bit. So this is the inherent bias, the identity transform. So that's a residual network. This here is characterized by f of kind of theta and ht. So this is kind of the this is what we called delta h. It's now called f. So this f would be the kind of neural network layer and theta would be the parameters of it. So the weight matrix in our case. They say okay what if you do many of those? So they say basically what this is is kind of a time process. It's kind of you have a state and the next state and the next state and you always learn how to go to the next state to the next state and so on. What if you go very deep and what if you look at this as a time process and kind of make these steps very small? Make these super small and basically what if you have many many infinitely many layers? I say well okay this then becomes a dynamic process. Basically an ordinary differential equation where I say okay my time is now continuous and I look at it as a linearization as a local linearization basically and I say okay I basically specify how to get from this time to the next instance of time. 
The next instant the next infinitesimally small instance of time by specifying this f and in the continuous case this is to say that the derivative of the hidden state is now parameterized by a neural network. So if you know what a differential equation is it has like a start state and then what you do is you specify how at each point in time so that's t at each point in time how does the gradient look so maybe the gradient looks like this and then what an ODE solver will do is the ODE solver will say okay the gradients we're gonna do an infinite small step in this direction and then it goes back to f. What's the gradient at this infinitely small step next in time and then f would say well the gradient is like this and then the ODE solver will go like okay I need to be a little bit flatter so I go here so what's the gradient at this time okay maybe it's up this I need to go up here so the ODE solver will kind of construct a curve and at each point it needs to look that whatever f says is the gradient is actually the gradient right if this is the gradient this is the gradient this is the gradient so that's that's kind of how an ODE works and that's they say okay you can actually look at residual networks here as being a discrete time analog to such an ODE so what we want to do is actually we want to specify we want to actually and this is the the crazy part right or the cool part is we want to do this for neural networks basically we simply specify an ODE and the start state here the start state is let's say if you want to build an MNIST classifier it's our it's our image right the start state is our MNIST image and we're simply training a neural network such that the ODE that the equation if you solve it the curve at the end will arrive at the correct class I mean that's that's I'm skipping a few parts here about dimensionalities and so on right because you need to keep in the same dimension but in essence they say here we start out with our input and we train the neural network to give us the correct gradients the correct derivatives of this curve at each point in time such that when you solve the ODE at the end point you are going to be at the correct label so that's this is the input to your task basically and this is the output right but instead of having a neural network go from input to output you have a neural network that parameterizes how you go from each step in time to the next one what's what's the gradient at each point in time that's that's the kind of gist of it and that's that's kind of really cool it's a really new approach alright so they give various advantages of this and so here is this demonstrated again right you are here this is your input and you want to go to the output and then the loss of the loss that you specify it can depend on kind of either on the output as in like an image classifier or it can depend on intermediate states this is it's kept general right so the way they go about it is they say well okay but so the neural network now specifies how to get from one step to the next right here and the neural network has parameters right so we we need to train this network such that the correct output is given to some input right we actually need to train it so we need to we need to some how way to train these parameters theta and they say okay we do gradient descent on theta like in a classic neural network but now we need it's not it's not so easy right it's not one pass through this function it's like infinitely many passes through this function until you 
arrive here and then if you basically need to somehow get a gradient with respect to these parameters here so they say this again the loss of the this is the loss of the end state right is the loss of the start state plus the the integral over time of this is derivative which is basically this curve and the curve is given by an ODE solver where we input all these things so we need gradients with respect to that how do we do that and they give away here of saying okay we could either kind of back propagate through the ODE solver but that would you know depend on the ODE solver and so on but there's another method there's called what's called the we need the what's called the adjoint so this is reverse mode differentiation of an ODE solution adjoint sensitivity method solves an augmented ODE backwards in time so basically what you need to do is you forward propagate you come here right and then what you can do is you can solve the second ODE so you can generate a second curve here this one and don't worry about these little jumps here you can solve the second curve and the second curve together with the first and second curve you can then compute the gradients you need right so the second curve is is basically simply something like the application of the chain rule to the continuous domain and you need to you need to adjust these jumps here only when your loss depends on intermediate states this is this is kind of the offset caused by including or not including the loss so let's dive a bit further into this adjoint state what's the red curve the red curve is called a and what's a a is a curve and this is the differential equation for it again we specify the curve a by specifying its start state and its derivative and from its start state and its derivative at each time the ODE solver is able to construct the curve entirely so a t it says here is del L to del ZT this means how does the loss depend on this ZT on the hidden state right how does the loss depend on the hidden state at time T so it doesn't even have to be any of these points here how does the loss depend on this hidden state here and in order to find that out you would need to go you would need to develop the the curve until here right and then calculate the loss and then back propagate through here but you can do this by calculating this adjoint thing so as you can see here is a demonstration it's an example right so the start state here is simply given by the loss how does the loss of this state how does the loss depend on this state well simply by plugging it into the into the loss equation right so your losses might be a cross entropy loss or something how does the loss do that depend on this state here well we go we go from this state that we already know and we know how in reverse time so backwards in time this sensitivity of the loss develops so we go and we develop this curve until here and we say aha this point influences this loss in this much basically right so so and if the loss explicitly depends on this point then we have to we have to calculate in this offset since this point here only depends on this time up till here and then it changes so there is there's a discontinuation but don't worry about that too much basically what we can do is we can calculate the curve in a forward pass curve and the loss in the forward pass then we can do a second pass backward again by an ODE solve to say how does the how does the loss depend on each one of the states here of the hidden states right so that's the second point but 
that's not all because we're ultimately not interested in the how the loss depends on the state where the we're interested in how the loss depends on these parameters that tell us how to get from one hidden state to the next but luckily we can then simply evaluate this integral that depends as you can see here on a and on Z we can evaluate this and get the gradients for the the parameters right so I also have to say the parameters are static so the parameters are given over the entire duration of this they're they're the same and it's simply what changes is time alright so this is how you can get this is how you can get gradients with respect to parameters and the cool thing is now you can train these you can actually train this neural network here that tells you how to go from one state to the next such that if you input the digit 2 as an image well you can output to I mean not exactly but that's that's the point right you can by by going through this motion by going through this od solve so that's I mean that's immensely cool they actually define how to do this here in one forward one kind of backward pass you can solve everything at the same time it's it's pretty cool and they evaluate their their net and they compare it with a different bunch of other nets and they interestingly show that so basically with an od solver you can never kind of tell how many evaluations it's going to do because it's going to get increasing like it's increasingly accurate over time so you let it run and maybe it's going to first generate a curve that's like something like this right and then it needs to say crap okay I need to go back and refine and then it maybe goes the curve like this and so on so it gets continually closer over time and for that it needs to kind of query it's like a query it needs to query this this F so you need to give it the function as an invaluable function and it goes and just okay I need to I need to know it here okay I got it from here okay I need to know it here okay I got it from oh no I didn't get it okay I need also need to know it here all right and so you can never know how much they will evaluate but you basically have a parameter to trade off accuracy and how much they evaluate that's what they show here so the the less error they want in their forward pass the more forward passes they have to do that's this curve the more forward passes they do the more time they have to invest right that's this curve but interestingly the more forward passes the time required for forward passes or the evaluations required for passes increases also the evaluation required for backward passes but not by much so that the backward passes require about half the amount of evaluations that's forward passes which is encouraging since the the backward passes don't go kind of overboard like if you had to back propagate through the operations of the ODE solver itself and they also show as your training epoch continues that the ODE solver requests more and more evaluations for so for the same epoch basically or the same samples within different epochs which means as it gets more accurate kind of needs to know more and more and more about the the samples basically about the test the training samples which is all basically showing that this kind of works yeah so they they kind of to get into normalizing flows which I don't want to get into here much because we haven't done a video on that yet we'll do one but they basically show that it's it's quite easy to do normalizing flows in a continuous 
fashion and the topic normalizing flows it's in itself pretty cool what they do at the end is they say okay what we can now do is we can actually take sequential data so now we've just talked about let's input one data point get out let's say a label or something which we can actually do sequential data and let's for example have an RNN encoder for our sequential data so here here these are data points right these are measurements like a blood pressure of a of a person and what we can do is we can do a variational autoencoder we've talked about this we can have an RNN encoder parameterize a distribution and then as a decoder have this ODE neural network and basically what that allows us to do is that allows us to deal with time steps that are not regularly sampled and so we can extrapolate from the data point at time yeah times not regular samplings like or with RNNs you basically forced to have always the same time step difference otherwise you have a very tough time but with this since these are continuous flows you're basically you can basically unroll them and evaluate them at whatever time you want so they have pretty cool experiments here where they kind of try to learn these kind of spiraling behaviors and you see on top the RNN decoder will get all jaggy and so on where as the so so basically as the the neural ordinary differential equation will generate quite let's say smooth things and also it can extrapolate as you can see here it can it can go the red the red thing is the extrapolation only there's only data where the green dots are so that's pretty cool you can see the RNN sometimes isn't able to kind of continue the flow as you can see in here it extrapolates wrongly so the this kind of I mean it's toy it's a toy example but these kind of dynamics are pretty cool and they also show here when they learn the spirals and vary one dimension of the latent code that is given by the encoder then the flow goes from clockwise it goes from to to counter clockwise as you see here I've turned this in I've drawn this in wrong but so it's pretty pretty cool what these these things learn and since it's only small data right now small models but I'm pretty sure this is going to develop further and be a cool just a cool way cool alley of research cool idea and looking forward to what they come up next alright so that was it for today a bit shorter but I hope this was somewhat clear enough all right have a great day | [
{
"start": 0,
"end": 5,
"text": " Hello and welcome. Today we're going to look at Neural Ordinary Differential"
},
{
"start": 5,
"end": 13.280000000000001,
"text": " Equations by Rick Chen, Julia Rubinova, Jesse Bettencourt and David Dovenoe."
},
{
"start": 13.280000000000001,
"end": 17.56,
"text": " This has been quite an interesting kind of paper to see because it's a bit special."
},
{
"start": 17.56,
"end": 22.400000000000002,
"text": " We're going to go over parts of it, not the full paper, just kind of the"
},
{
"start": 22.400000000000002,
"end": 28.28,
"text": " important parts because the paper is quite packed and we'd rather"
},
{
"start": 28.28,
"end": 35.32,
"text": " explain it in parts and kind of get the gist of it. So basically what they do is"
},
{
"start": 35.32,
"end": 40.8,
"text": " they say we introduce a new family of deep neural network models. Instead of"
},
{
"start": 40.8,
"end": 44.64,
"text": " specifying a discrete sequence of hidden layers we parameterize the"
},
{
"start": 44.64,
"end": 49.24,
"text": " derivative of the hidden state using a neural network. The output of the network"
},
{
"start": 49.24,
"end": 53.44,
"text": " is computed using a black box differential equation solver. These"
},
{
"start": 53.44,
"end": 57.040000000000006,
"text": " continuous depth models have constant memory cost, adapt their evaluation"
},
{
"start": 57.04,
"end": 62.12,
"text": " strategy to each input, can explicitly trade numerical precision for speed."
},
{
"start": 62.12,
"end": 66.48,
"text": " It sounds awesome, honestly. It sounds really cool and it sounds really new."
},
{
"start": 66.48,
"end": 76,
"text": " Let's jump in. What they say is let's look at kind of classic neural"
},
{
"start": 76,
"end": 79.32,
"text": " networks, especially residual neural networks. What residual neural"
},
{
"start": 79.32,
"end": 85.16,
"text": " networks do is in each hidden layer they kind of have a representation."
},
{
"start": 85.16,
"end": 91.39999999999999,
"text": " This is kind of their hidden representation at layer t. What they do"
},
{
"start": 91.39999999999999,
"end": 98.24,
"text": " is they then add something. If you don't know a recurrent neural network"
},
{
"start": 98.24,
"end": 105.64,
"text": " is where you have, let's say this is your hidden state ht, and in a classic neural"
},
{
"start": 105.64,
"end": 110.67999999999999,
"text": " network you would just have a weight matrix here, blah blah blah blah. You do a"
},
{
"start": 110.68,
"end": 117.84,
"text": " matrix multiplication to get ht plus 1. So to get the next kind of the next"
},
{
"start": 117.84,
"end": 121.16000000000001,
"text": " hidden state you do a matrix multiplication by a big weight matrix"
},
{
"start": 121.16000000000001,
"end": 128.28,
"text": " here w. In a residual neural network what you do is you have this"
},
{
"start": 128.28,
"end": 139.20000000000002,
"text": " weight matrix w, you multiply it to get delta ht plus 1 and you take ht and you"
},
{
"start": 139.2,
"end": 146.04,
"text": " add the two. You add ht and delta ht plus 1 to arrive at ht plus 1."
},
{
"start": 146.04,
"end": 150.56,
"text": " That's a residual network. It basically doesn't learn the transformation to the"
},
{
"start": 150.56,
"end": 156.23999999999998,
"text": " next layer but it learns how is the next representation different from"
},
{
"start": 156.23999999999998,
"end": 160.78,
"text": " this representation. What do I need to add to this representation to get to"
},
{
"start": 160.78,
"end": 165.88,
"text": " the next representation? It's reasoned that for deep networks"
},
{
"start": 165.88,
"end": 170.56,
"text": " since each layer only does a little bit of transformation we should basically"
},
{
"start": 170.56,
"end": 175.35999999999999,
"text": " bias it towards keeping the representation the same and just kind of"
},
{
"start": 175.35999999999999,
"end": 179.84,
"text": " changing it a little bit. So this is the inherent bias, the identity"
},
{
"start": 179.84,
"end": 183.76,
"text": " transform. So that's a residual"
},
{
"start": 183.76,
"end": 193.12,
"text": " network. This here is characterized by f of kind of theta and ht. So this"
},
{
"start": 193.12,
"end": 202.24,
"text": " is kind of the this is what we called delta h. It's now called f. So this"
},
{
"start": 202.24,
"end": 207.64000000000001,
"text": " f would be the kind of neural network layer and theta would be the"
},
{
"start": 207.64000000000001,
"end": 215.52,
"text": " parameters of it. So the weight matrix in our case. They say okay what if you do"
},
{
"start": 215.52,
"end": 221.8,
"text": " many of those? So they say basically what this is is kind of a time"
},
{
"start": 221.8,
"end": 225.84,
"text": " process. It's kind of you have a state and the next state and the next state"
},
{
"start": 225.84,
"end": 230.32000000000002,
"text": " and you always learn how to go to the next state to the next state and so on."
},
{
"start": 230.32000000000002,
"end": 235.96,
"text": " What if you go very deep and what if you look at this as a time process and"
},
{
"start": 235.96,
"end": 244.72000000000003,
"text": " kind of make these steps very small? Make these super small and basically"
},
{
"start": 244.72,
"end": 252.96,
"text": " what if you have many many infinitely many layers? I say well okay this"
},
{
"start": 252.96,
"end": 257.72,
"text": " then becomes a dynamic process. Basically an ordinary differential"
},
{
"start": 257.72,
"end": 265.96,
"text": " equation where I say okay my time is now continuous and I look at it as a"
},
{
"start": 265.96,
"end": 276.64,
"text": " linearization as a local linearization basically and I say okay I basically"
},
{
"start": 276.64,
"end": 282.4,
"text": " specify how to get from this time to the next instance of time. The next"
},
{
"start": 282.4,
"end": 289.76,
"text": " instant the next infinitesimally small instance of time by specifying this f"
},
{
"start": 289.76,
"end": 296.64,
"text": " and in the continuous case this is to say that the derivative of the hidden"
},
{
"start": 296.64,
"end": 305.48,
"text": " state is now parameterized by a neural network. So if you know what a"
},
{
"start": 305.48,
"end": 310.32,
"text": " differential equation is it has like a start"
},
{
"start": 310.32,
"end": 316.84,
"text": " state and then what you do is you specify how at each point in time"
},
{
"start": 316.84,
"end": 321.47999999999996,
"text": " so that's t at each point in time how does the gradient look so maybe the"
},
{
"start": 321.47999999999996,
"end": 328.2,
"text": " gradient looks like this and then what an ODE solver will do is the ODE solver"
},
{
"start": 328.2,
"end": 332.73999999999995,
"text": " will say okay the gradients we're gonna do an infinite small step in this"
},
{
"start": 332.73999999999995,
"end": 337.96,
"text": " direction and then it goes back to f. What's the gradient at this"
},
{
"start": 337.96,
"end": 344.84,
"text": " infinitely small step next in time and then f would say well the gradient is"
},
{
"start": 344.84,
"end": 349.47999999999996,
"text": " like this and then the ODE solver will go like okay I need to be a little bit"
},
{
"start": 349.47999999999996,
"end": 355.23999999999995,
"text": " flatter so I go here so what's the gradient at this time okay maybe it's up"
},
{
"start": 355.23999999999995,
"end": 360.88,
"text": " this I need to go up here so the ODE solver will kind of construct a curve"
},
{
"start": 360.88,
"end": 370.23999999999995,
"text": " and at each point it needs to look that whatever f says is the gradient is"
},
{
"start": 370.24,
"end": 375.2,
"text": " actually the gradient right if this is the gradient this is the gradient this"
},
{
"start": 375.2,
"end": 383,
"text": " is the gradient so that's that's kind of how an ODE works and that's they say"
},
{
"start": 383,
"end": 389.68,
"text": " okay you can actually look at residual networks here as being a discrete time"
},
{
"start": 389.68,
"end": 395.8,
"text": " analog to such an ODE so what we want to do is actually we want to specify we"
},
{
"start": 395.8,
"end": 400.72,
"text": " want to actually and this is the the crazy part right or the cool part is we"
},
{
"start": 400.72,
"end": 406.68,
"text": " want to do this for neural networks basically we simply specify an ODE and"
},
{
"start": 406.68,
"end": 416.96000000000004,
"text": " the start state here the start state is let's say if you want to build an MNIST"
},
{
"start": 416.96000000000004,
"end": 422.56,
"text": " classifier it's our it's our image right the start state is our MNIST image and"
},
{
"start": 422.56,
"end": 430.64,
"text": " we're simply training a neural network such that the ODE that the equation if"
},
{
"start": 430.64,
"end": 436.12,
"text": " you solve it the curve at the end will arrive at the correct class I mean"
},
{
"start": 436.12,
"end": 440.36,
"text": " that's that's I'm skipping a few parts here about dimensionalities and so on"
},
{
"start": 440.36,
"end": 445.76,
"text": " right because you need to keep in the same dimension but in essence they say"
},
{
"start": 445.76,
"end": 451.88,
"text": " here we start out with our input and we train the neural network to give us the"
},
{
"start": 451.88,
"end": 456.12,
"text": " correct gradients the correct derivatives of this curve at each point"
},
{
"start": 456.12,
"end": 461.76,
"text": " in time such that when you solve the ODE at the end point you are going to be at"
},
{
"start": 461.76,
"end": 467.8,
"text": " the correct label so that's this is the input to your task basically and"
},
{
"start": 467.8,
"end": 473.6,
"text": " this is the output right but instead of having a neural network go from input"
},
{
"start": 473.6,
"end": 479.2,
"text": " to output you have a neural network that parameterizes how you go from each step"
},
{
"start": 479.2,
"end": 484.84,
"text": " in time to the next one what's what's the gradient at each point in time"
},
{
"start": 484.84,
"end": 492.03999999999996,
"text": " that's that's the kind of gist of it and that's that's kind of really cool it's"
},
{
"start": 492.03999999999996,
"end": 500.2,
"text": " a really new approach alright so they give various advantages of this and so"
},
{
"start": 500.2,
"end": 506.28,
"text": " here is this demonstrated again right you are here this is your input and you"
},
{
"start": 506.28,
"end": 513.28,
"text": " want to go to the output and then the loss of the loss that you specify it can"
},
{
"start": 513.28,
"end": 518.56,
"text": " depend on kind of either on the output as in like an image classifier or it can"
},
{
"start": 518.56,
"end": 525.56,
"text": " depend on intermediate states this is it's kept general right so the way they"
},
{
"start": 525.56,
"end": 530.28,
"text": " go about it is they say well okay but so the neural network now specifies how to"
},
{
"start": 530.28,
"end": 535.04,
"text": " get from one step to the next right here and the neural network has parameters"
},
{
"start": 535.04,
"end": 540.92,
"text": " right so we we need to train this network such that the correct output is"
},
{
"start": 540.92,
"end": 546.28,
"text": " given to some input right we actually need to train it so we need to we need"
},
{
"start": 546.28,
"end": 550.28,
"text": " to some how way to train these parameters theta and they say okay we do"
},
{
"start": 550.28,
"end": 553.8399999999999,
"text": " gradient descent on theta like in a classic neural network but now we need"
},
{
"start": 553.8399999999999,
"end": 561.12,
"text": " it's not it's not so easy right it's not one pass through this function it's like"
},
{
"start": 561.12,
"end": 569.2,
"text": " infinitely many passes through this function until you arrive here and then"
},
{
"start": 569.2,
"end": 576.48,
"text": " if you basically need to somehow get a gradient with respect to these"
},
{
"start": 576.48,
"end": 580.92,
"text": " parameters here so they say this again the loss of the this is the loss of the"
},
{
"start": 580.92,
"end": 589.52,
"text": " end state right is the loss of the start state plus the the integral over time of"
},
{
"start": 589.52,
"end": 596.52,
"text": " this is derivative which is basically this curve and the curve is given by an"
},
{
"start": 596.52,
"end": 601.76,
"text": " ODE solver where we input all these things so we need gradients with respect"
},
{
"start": 601.76,
"end": 607.6,
"text": " to that how do we do that and they give away here of saying okay we could either"
},
{
"start": 607.6,
"end": 613.4,
"text": " kind of back propagate through the ODE solver but that would you know depend on"
},
{
"start": 613.4,
"end": 619.92,
"text": " the ODE solver and so on but there's another method there's called what's called the we"
},
{
"start": 619.92,
"end": 624.64,
"text": " need the what's called the adjoint so this is reverse mode differentiation of"
},
{
"start": 624.64,
"end": 629.88,
"text": " an ODE solution adjoint sensitivity method solves an augmented ODE"
},
{
"start": 629.88,
"end": 634.84,
"text": " backwards in time so basically what you need to do is you forward propagate you"
},
{
"start": 634.84,
"end": 640.88,
"text": " come here right and then what you can do is you can solve the second ODE so you"
},
{
"start": 640.88,
"end": 645.56,
"text": " can generate a second curve here this one and don't worry about these little"
},
{
"start": 645.56,
"end": 651.2,
"text": " jumps here you can solve the second curve and the second curve together with"
},
{
"start": 651.2,
"end": 657.72,
"text": " the first and second curve you can then compute the gradients you need right so"
},
{
"start": 657.72,
"end": 664.04,
"text": " the second curve is is basically simply something like the application of the"
},
{
"start": 664.04,
"end": 671.68,
"text": " chain rule to the continuous domain and you need to you need to adjust these"
},
{
"start": 671.68,
"end": 677.04,
"text": " jumps here only when your loss depends on intermediate states this is this is"
},
{
"start": 677.04,
"end": 685.1999999999999,
"text": " kind of the offset caused by including or not including the loss so let's dive"
},
{
"start": 685.1999999999999,
"end": 690.04,
"text": " a bit further into this adjoint state what's the red curve the red curve is"
},
{
"start": 690.04,
"end": 698.8399999999999,
"text": " called a and what's a a is a curve and this is the differential equation for it"
},
{
"start": 698.8399999999999,
"end": 704.36,
"text": " again we specify the curve a by specifying its start state and its"
},
{
"start": 704.36,
"end": 708.92,
"text": " derivative and from its start state and its derivative at each time the ODE"
},
{
"start": 708.92,
"end": 722.3199999999999,
"text": " solver is able to construct the curve entirely so a t it says here is del L to"
},
{
"start": 722.3199999999999,
"end": 731.52,
"text": " del ZT this means how does the loss depend on this ZT on the hidden state"
},
{
"start": 731.52,
"end": 738.16,
"text": " right how does the loss depend on the hidden state at time T so it doesn't"
},
{
"start": 738.16,
"end": 743.36,
"text": " even have to be any of these points here how does the loss depend on this hidden"
},
{
"start": 743.36,
"end": 747.6,
"text": " state here and in order to find that out you would need to go you would need to"
},
{
"start": 747.6,
"end": 752.4399999999999,
"text": " develop the the curve until here right and then calculate the loss and then"
},
{
"start": 752.4399999999999,
"end": 758.68,
"text": " back propagate through here but you can do this by calculating this adjoint"
},
{
"start": 758.68,
"end": 765.9599999999999,
"text": " thing so as you can see here is a demonstration it's an example right so"
},
{
"start": 765.96,
"end": 773.52,
"text": " the start state here is simply given by the loss how does the loss of this state"
},
{
"start": 773.52,
"end": 779.76,
"text": " how does the loss depend on this state well simply by plugging it into the into"
},
{
"start": 779.76,
"end": 783.64,
"text": " the loss equation right so your losses might be a cross entropy loss or"
},
{
"start": 783.64,
"end": 790.72,
"text": " something how does the loss do that depend on this state here well we go we"
},
{
"start": 790.72,
"end": 797.76,
"text": " go from this state that we already know and we know how in reverse time so"
},
{
"start": 797.76,
"end": 804.88,
"text": " backwards in time this sensitivity of the loss develops so we go and we"
},
{
"start": 804.88,
"end": 813.36,
"text": " develop this curve until here and we say aha this point influences this loss in"
},
{
"start": 813.36,
"end": 824,
"text": " this much basically right so so and if the loss explicitly depends on this"
},
{
"start": 824,
"end": 828.2,
"text": " point then we have to we have to calculate in this offset since this"
},
{
"start": 828.2,
"end": 834.88,
"text": " point here only depends on this time up till here and then it changes so there"
},
{
"start": 834.88,
"end": 839.8000000000001,
"text": " is there's a discontinuation but don't worry about that too much basically what"
},
{
"start": 839.8,
"end": 851.92,
"text": " we can do is we can calculate the curve in a forward pass curve and the loss in"
},
{
"start": 851.92,
"end": 859.12,
"text": " the forward pass then we can do a second pass backward again by an ODE solve to"
},
{
"start": 859.12,
"end": 867.52,
"text": " say how does the how does the loss depend on each one of the states here"
},
{
"start": 867.52,
"end": 873.28,
"text": " of the hidden states right so that's the second point but that's not all because"
},
{
"start": 873.28,
"end": 879.12,
"text": " we're ultimately not interested in the how the loss depends on the state where"
},
{
"start": 879.12,
"end": 883.84,
"text": " the we're interested in how the loss depends on these parameters that tell us"
},
{
"start": 883.84,
"end": 891.04,
"text": " how to get from one hidden state to the next but luckily we can then simply"
},
{
"start": 891.04,
"end": 899.92,
"text": " evaluate this integral that depends as you can see here on a and on Z we can"
},
{
"start": 899.92,
"end": 908.76,
"text": " evaluate this and get the gradients for the the parameters right so I also have"
},
{
"start": 908.76,
"end": 912.88,
"text": " to say the parameters are static so the parameters are given over the entire"
},
{
"start": 912.88,
"end": 917.4399999999999,
"text": " duration of this they're they're the same and it's simply what changes is"
},
{
"start": 917.44,
"end": 925.5200000000001,
"text": " time alright so this is how you can get this is how you can get gradients with"
},
{
"start": 925.5200000000001,
"end": 928.5600000000001,
"text": " respect to parameters and the cool thing is now you can train these you can"
},
{
"start": 928.5600000000001,
"end": 934.0400000000001,
"text": " actually train this neural network here that tells you how to go from one state"
},
{
"start": 934.0400000000001,
"end": 940.7600000000001,
"text": " to the next such that if you input the digit 2 as an image well you can output"
},
{
"start": 940.76,
"end": 948.4399999999999,
"text": " to I mean not exactly but that's that's the point right you can by by going"
},
{
"start": 948.4399999999999,
"end": 952.68,
"text": " through this motion by going through this od solve so that's I mean that's"
},
{
"start": 952.68,
"end": 957.96,
"text": " immensely cool they actually define how to do this here in one forward one kind"
},
{
"start": 957.96,
"end": 961.92,
"text": " of backward pass you can solve everything at the same time it's it's"
},
{
"start": 961.92,
"end": 969.24,
"text": " pretty cool and they evaluate their their net and they compare it with a"
},
{
"start": 969.24,
"end": 976.32,
"text": " different bunch of other nets and they interestingly show that so basically"
},
{
"start": 976.32,
"end": 982.5600000000001,
"text": " with an od solver you can never kind of tell how many evaluations it's going to"
},
{
"start": 982.5600000000001,
"end": 988.84,
"text": " do because it's going to get increasing like it's increasingly accurate over"
},
{
"start": 988.84,
"end": 994.48,
"text": " time so you let it run and maybe it's going to first generate a curve that's"
},
{
"start": 994.48,
"end": 1001.72,
"text": " like something like this right and then it needs to say crap okay I need to go"
},
{
"start": 1001.72,
"end": 1005.64,
"text": " back and refine and then it maybe goes the curve like this and so on so it gets"
},
{
"start": 1005.64,
"end": 1011.6,
"text": " continually closer over time and for that it needs to kind of query it's like"
},
{
"start": 1011.6,
"end": 1015.44,
"text": " a query it needs to query this this F so you need to give it the function as an"
},
{
"start": 1015.44,
"end": 1020,
"text": " invaluable function and it goes and just okay I need to I need to know it here"
},
{
"start": 1020,
"end": 1023.64,
"text": " okay I got it from here okay I need to know it here okay I got it from oh no I"
},
{
"start": 1023.64,
"end": 1029.84,
"text": " didn't get it okay I need also need to know it here all right and so you can"
},
{
"start": 1029.84,
"end": 1034,
"text": " never know how much they will evaluate but you basically have a parameter to"
},
{
"start": 1034,
"end": 1038.08,
"text": " trade off accuracy and how much they evaluate that's what they show here so"
},
{
"start": 1038.08,
"end": 1043.96,
"text": " the the less error they want in their forward pass the more forward passes"
},
{
"start": 1043.96,
"end": 1049.28,
"text": " they have to do that's this curve the more forward passes they do the more"
},
{
"start": 1049.28,
"end": 1054.16,
"text": " time they have to invest right that's this curve but interestingly the more"
},
{
"start": 1054.16,
"end": 1060.76,
"text": " forward passes the time required for forward passes or the evaluations"
},
{
"start": 1060.76,
"end": 1065.6,
"text": " required for passes increases also the evaluation required for backward passes"
},
{
"start": 1065.6,
"end": 1069.8,
"text": " but not by much so that the backward passes require about half the amount of"
},
{
"start": 1069.8,
"end": 1076.52,
"text": " evaluations that's forward passes which is encouraging since the the backward"
},
{
"start": 1076.52,
"end": 1082.8799999999999,
"text": " passes don't go kind of overboard like if you had to back propagate through"
},
{
"start": 1082.8799999999999,
"end": 1089.56,
"text": " the operations of the ODE solver itself and they also show as your training epoch"
},
{
"start": 1089.56,
"end": 1097.4,
"text": " continues that the ODE solver requests more and more evaluations for so for the"
},
{
"start": 1097.4,
"end": 1101.52,
"text": " same epoch basically or the same samples within different epochs which"
},
{
"start": 1101.52,
"end": 1107,
"text": " means as it gets more accurate kind of needs to know more and more and more"
},
{
"start": 1107,
"end": 1112.8799999999999,
"text": " about the the samples basically about the test the training samples which is"
},
{
"start": 1112.8799999999999,
"end": 1121.16,
"text": " all basically showing that this kind of works yeah so they they kind of to get"
},
{
"start": 1121.16,
"end": 1125.52,
"text": " into normalizing flows which I don't want to get into here much because we"
},
{
"start": 1125.52,
"end": 1129.4,
"text": " haven't done a video on that yet we'll do one but they basically show that it's"
},
{
"start": 1129.4,
"end": 1138.44,
"text": " it's quite easy to do normalizing flows in a continuous fashion and the topic"
},
{
"start": 1138.44,
"end": 1142.64,
"text": " normalizing flows it's in itself pretty cool what they do at the end is they say"
},
{
"start": 1142.64,
"end": 1147.8000000000002,
"text": " okay what we can now do is we can actually take sequential data so now"
},
{
"start": 1147.8000000000002,
"end": 1151.96,
"text": " we've just talked about let's input one data point get out let's say a label or"
},
{
"start": 1151.96,
"end": 1160.04,
"text": " something which we can actually do sequential data and let's for example"
},
{
"start": 1160.04,
"end": 1165.96,
"text": " have an RNN encoder for our sequential data so here here these are data points"
},
{
"start": 1165.96,
"end": 1170.1200000000001,
"text": " right these are measurements like a blood pressure of a of a person and what"
},
{
"start": 1170.1200000000001,
"end": 1174.3600000000001,
"text": " we can do is we can do a variational autoencoder we've talked about this we"
},
{
"start": 1174.3600000000001,
"end": 1180.72,
"text": " can have an RNN encoder parameterize a distribution and then as a decoder have"
},
{
"start": 1180.72,
"end": 1186.48,
"text": " this ODE neural network and basically what that allows us to do is that allows"
},
{
"start": 1186.48,
"end": 1192.96,
"text": " us to deal with time steps that are not regularly sampled and so we can"
},
{
"start": 1192.96,
"end": 1202,
"text": " extrapolate from the data point at time yeah times not regular samplings like"
},
{
"start": 1202,
"end": 1208.44,
"text": " or with RNNs you basically forced to have always the same time step"
},
{
"start": 1208.44,
"end": 1213.68,
"text": " difference otherwise you have a very tough time but with this since these are"
},
{
"start": 1213.68,
"end": 1218.3200000000002,
"text": " continuous flows you're basically you can basically unroll them and evaluate"
},
{
"start": 1218.3200000000002,
"end": 1222.8400000000001,
"text": " them at whatever time you want so they have pretty cool experiments here where"
},
{
"start": 1222.8400000000001,
"end": 1228.6000000000001,
"text": " they kind of try to learn these kind of spiraling behaviors and you see on top"
},
{
"start": 1228.6,
"end": 1241.8,
"text": " the RNN decoder will get all jaggy and so on where as the so so basically as the"
},
{
"start": 1241.8,
"end": 1249.24,
"text": " the neural ordinary differential equation will generate quite let's say"
},
{
"start": 1249.24,
"end": 1256.1999999999998,
"text": " smooth things and also it can extrapolate as you can see here it can it"
},
{
"start": 1256.2,
"end": 1261.8400000000001,
"text": " can go the red the red thing is the extrapolation only there's only data"
},
{
"start": 1261.8400000000001,
"end": 1268.44,
"text": " where the green dots are so that's pretty cool you can see the RNN"
},
{
"start": 1268.44,
"end": 1273.68,
"text": " sometimes isn't able to kind of continue the flow as you can see in here it"
},
{
"start": 1273.68,
"end": 1282.68,
"text": " extrapolates wrongly so the this kind of I mean it's toy it's a toy example but"
},
{
"start": 1282.68,
"end": 1285.8400000000001,
"text": " these kind of dynamics are pretty cool and they also show here when they learn"
},
{
"start": 1285.84,
"end": 1293.1599999999999,
"text": " the spirals and vary one dimension of the latent code that is given by the"
},
{
"start": 1293.1599999999999,
"end": 1302.8,
"text": " encoder then the flow goes from clockwise it goes from to to counter"
},
{
"start": 1302.8,
"end": 1307.6399999999999,
"text": " clockwise as you see here I've turned this in I've drawn this in wrong but so"
},
{
"start": 1307.6399999999999,
"end": 1313.56,
"text": " it's pretty pretty cool what these these things learn and since it's only small"
},
{
"start": 1313.56,
"end": 1317.1599999999999,
"text": " data right now small models but I'm pretty sure this is going to develop"
},
{
"start": 1317.1599999999999,
"end": 1325,
"text": " further and be a cool just a cool way cool alley of research cool idea and"
},
{
"start": 1325,
"end": 1329.9199999999998,
"text": " looking forward to what they come up next alright so that was it for today a"
},
{
"start": 1329.92,
"end": 1344.1200000000001,
"text": " bit shorter but I hope this was somewhat clear enough all right have a great day"
}
] |
u1_qMdb0kYU | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | GPT-2: Language Models are Unsupervised Multitask Learners | [
"Science & Technology"
] | [
"gpt2",
"transformer",
"language model",
"deep learning",
"nlp",
"openai",
"security",
"translation",
"neural network",
"attention",
"attention mechanism",
"unsupervised learning",
"controversy"
] | A look at OpenAI's new GPT-2 model and the surrounding controversy.
https://blog.openai.com/better-language-models/
Abstract:
Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on taskspecific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset - matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
Authors:
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever | Hi, today we're looking at language models are unsupervised multitask learners by Alec Radford, Jeffrey Wu, Reverend Child, David Luan, Dario Amadai and Ilya Sotskyver from OpenAI. This paper has generated a bit of hype in the last few days, so I wanted to go over it, basically take a look and take a look at the surrounding, let's say controversy. So let's actually have a look at the blog post that OpenAI released along with this paper. They say, we've trained a large scale unsupervised language model which generates coherent paragraphs of text, achieves state of the art performance on many language modeling benchmarks and performs rudimentary reading comprehension, machine translation, question answering and summarization all without task specific training. So this sounds quite suspicious at the beginning, but we're actually going to have to look at how they do this. It sounds really good being able to do a rudimentary translation without any training on translation itself, just learning a language model. But this has been continuing a trend in recent kind of time where we see that the better your language model gets, the better basically your model gets on all these kind of language tasks. Alright, they go into this and we'll see how they do it. So basically what they've done is they've taken kind of a bigger dataset of language model, of language model dataset, which is about 40 gigabytes of internet text, I say this is here on the top. So it's one of the largest kind of text datasets there is unsupervised. And they also taken one of the largest language models. So they have their largest transformer based model has 1.5 billion parameters. So they take huge amount of data, huge model, they train this on, they train the model on the data and what comes out is like giant super language model that is able to perform all these cool tasks. So here they have like a bit of a sample. So what they can do is they can basically, so the way a language model works is you always try to predict the next word based on what you've already seen. So you kind of query it by giving it some starting words and it needs to continue the text from there. So here system prompt on top you see in a shocking finding scientists discovered a herd of unicorns living in a remote previously unexplored valley in the Andes mountains. Even more surprising to the researcher the fact that the unicorns spoke perfect English. And then the model continues. The scientists named their population the population after their distinctive horn, Ovitz unicorn. These four horns silver white unicorns were previously unknown to science. Now after almost two centuries the mystery of what sparked this odd phenomenon is finally solved. I mean you can even read this it's really, really coherent text and it's quite surprising. I think it's like slightly cherry picked but still the fact that a model is able to do this is unprecedented. Especially like since it's like a general language model not specific to the task of continuing news articles about unicorns or anything. So yeah they go into these findings we'll see them in the paper and they also say that yeah they can now do all these kind of tasks like question answering reading comprehension in a zero-shot fashion. So at the end here they say what it's capable of. So let's say AI writing assistants more capable dialogue agents unsupervised translation blah blah blah. 
They also say a few kind of let's say bad implications. Generate misleading news articles impersonate others online automate the production of abusive or fake content to post on social media automate the production of spam or phishing content. They liken it to a system called deep fakes which generates really well crafted let's say videos of people. So that the kind of they frame it in a way as this could be used for dangerous things and they say they aren't releasing they're only releasing the small version of GPT-2 along with the code. They're not releasing the data set training code or the GPT-2 this is the big model the model of weights right. And they do this they cite safety concerns. So I mean the community basically is going nuts over this statement this decision not release basically the code or the model or the data set to the world. And so if you search on Twitter for GPT-2 then everyone basically has an opinion of whether or not this is a good thing or not apart from people testing it out. So they've given access to like a selected set of journalists to an API where they can query the model. So there are some samples flying around but basically people are just debating whether or not this is a good thing or not and I mean it's just hilarious to go along with this and to just read all the people having opinions. I mean I've given my opinion as well just chime in it's a fun ride especially like this post here on reddit machine learning says should I release my NIST model or keep it closed source fearing malicious use. Today I trained a 23,000 layer ResNet got a 99% accuracy and MNIST. I'd love to share the model but I fear it being used maliciously. What if it is used to read documents by the Russians? What are your thoughts? I mean yeah this is in essence I mean in essence it's that right. It's like yeah come on. So I can just give my opinion up front. I think a lot of things came together here and I think that this being OpenAI being kind of this initiative I mean it's never been there before they're not an academic institution. They're not a company but still they you know they're researchers they want to have a career so there's lots of pressures on them. There's pressure to further publish so I think that that's kind of an underlying motivation to not release your model and your code and your data set is you actually you know there's a lot of work in this and you actually might want to get more than one paper out of it so keeping these things to yourself is basically a surefire guarantee you're going to get another two, three, four, five papers out of this data or model or whatever. It's also a good way to kind of generate press if you basically say oh we're not releasing but we have this really good model and there's one thing on Twitter right I mean you can't probably can't find it but says like step one my model is state of the art step two my model is state of the art but generalizes better to other tasks step three my model does the same thing but with fewer parameters and step four my model is so good I can't even talk about it. So basically I think a lot of things came together this press generating the pressure to create more kind of papers out of it and genuinely security concerns. I think being open AI and open AI kind of was established as a way to let's say the demo like their statutes pretty clearly say we want to open AI and research it in ethical use and you have backers like Elon Musk that talk all the time about kind of safety related issues in AI. 
I think there's a lot of pressure on these people to with everything they do basically have an ethical component. So everything they do is kind of scrutiny to this does this have ethical implications and where can we basically stand out from the rest of the community by doing something it doesn't need to be more ethical just needs to be different with an ethical reason behind reasoning behind it and this I think this is it I think there's a lot of things coming together I don't I don't think anyone like maliciously thought oh this you know we're gonna do this it's gonna generate so much press or I don't think anyone actively thought ah we'll just keep it secret we're gonna get so much more papers out of it I honestly think that the reasoning was there's a lot of you know a lot of pressure to do this ethical things when there's there's not if you think about it it's yeah it's a good language model and it can generate text but you can also hire you know people to generate text to generate fake news to do phishing and spam it's just a bit more efficient now right and yeah so it's it's unprecedented that you don't you don't release this research kind of cold war style so it's not really dangerous to release this and it's just delaying the inevitable basically but I think that the pressure the pressure was on and they made a decision that they thought was in line with their values and I think the this just neatly aligns with the underlying the other benefits to them that yeah all right so let's dive into the paper the paper is actually not you know too too much content there what they basically say so far is that a lot of a lot of these kind of papers on these tasks they they say the dominant approach to creating ML systems is to collect a data set of training examples demonstrate correct behavior train a system to imitate test its performance on in IID samples so they basically say the there's kind of the single task training on single domain data sets is a major contributor to the lack of generalization observed in current systems so they basically say these language systems they don't generalize like a QA system might be trained on a QA task but it you know has nothing to do with the task is basically a little bit different and even in multitask learning they say multitask learning is a promising framework but also it's kind of say it's still nascent and there's only very few different tasks right to do so they basically say you need basically a big big unsupervised model and it will implicitly learn all of the kind of special tasks and yeah so they say there there are approaches that basically basically learn these language models but then still require supervised training so basically fine-tuning this has been the this is for example the bird paper we discussed in the in the last video or two two videos ago that learns a giant language model but then does fine-tuning for each of these tasks and gets really well what they want to do here is basically simply learn a language model and then investigate whether or not the language model can perform downstream tasks in a zero-shot setting that means without any parameter or architecture modification so no fine-tuning all right so what they do so basically what a language model is if for those who don't know it's it's if you have a sequence of text let's say a b c d e these are words let's act like some actual words the cat sat on the mat and so on and you and you a language model is you kind of remove the end of the sentence at some point and 
ask the model what comes next, right, that's a language model. I mean there's different kinds of language models, specific ones, but that's the basic thing, you just ask the model what's next. So you can do a lot of unsupervised training because you don't need a labeled data set for this, you simply need a text corpus, and that's basically all they do. They use transformers, which we've also discussed in the attention is all you need paper, so if you don't know what transformers are, go back and look at that. All right, so basically they say a lot of these special tasks like translation and question answering can be framed in a language model way. For example, if you simply input, if this is your text, translate to French and then the English text and then the French text, right, and then at test time basically you leave away the French text, you simply ask the language model what comes next, right, if its input is translate to French and then English text. This is translation framed as a language model task, because you can specify the task that you want the language model to do also as language. So this is quite an interesting approach and one they exploit here, and they basically say, well, in a large and diverse corpus of web pages that they collect here, there are going to be some websites that basically do translation from English to French, and the model can learn from that. So here in this paragraph they basically list examples of naturally occurring demonstrations of English to French and French to English translation found throughout the training data set. So basically this is how the model could learn. Let's just look at one: I hate the word perfume, Burr says, it's somewhat better in French, right. So there's a way in just an unsupervised setting where the language model could learn, right, if you just cross out this word at the end and you just ask the model what comes next. The model sees I hate the word perfume, Burr says, it's somewhat better in French, colon, then the model has to put something there, and the most logical continuation is to put the French word for perfume, right. So that's kind of how they frame translation and these other tasks in a language model way. All right, so they talk about the training data set, which is a major component here. They say they make a new training data set because all of the current ones aren't sufficient. They say the most prominent source of diverse, nearly unlimited text is web scrapes such as Common Crawl, while these archives are many orders of magnitude larger than current language modeling datasets, they have significant data quality issues, so to say content that is mostly unintelligible and so on. So they basically describe here how they scrape a new web scrape which emphasizes document quality. They go on reddit basically and scrape all outbound links from reddit that have received at least three karma, which means three up votes for a post of a link, which basically means that three humans agreed that this was a good link. So that's how they collect the data set. The resulting data set, WebText, contains the text subset of these 45 million links. They then kind of clean this and scrape it down and remove some stuff and they end up with a large corpus, right. And then they talk about how they represent the input, which is byte pair encoding style, it's not exactly byte pair encoding, it's a byte pair encoding inspired encoding, we won't make a video about this by itself because it's 
really interesting but basically it's you can think of it as tokenization and pre-processing right then they say they they show their models so architecture hyperparameters basically these are these are their models this is the smallest one this second smallest one they say it's the same size as BERT so the the language model by google that we've looked at and then the largest one 1.5 billion parameters I mean that's huge and yeah they say it's ten times larger than the previous so the first one is their previous model and this now is this is the GPT-2 model that that gets all these these nice results so they do experiments first they do experiments on language modeling itself right so they train on their on their corpus and then they evaluate on a bunch of other language modeling corpus so these up here are language modeling corpuses and the state of the art is in this row and then you just look at basically the bottom row compare to their largest model this this is perplexity where it says PPL and the I think this here is is is accuracy so perplexity lower is better which you can you can see here the previous state of the art was 39 on wiki text 2 they get to 18 with accuracy obviously higher is better so the the kind of previous accuracy in Lombarda was 59 they get to 63 basically improve everything except for this 1 billion word corpus and they they also explain why they say this is the most heavily pre-processed text and so on so that basically they basically are really good at language modeling even though they train on a different data set that's the the point right the point is they train on their own corpus and then they go and just evaluate on the test set of these of these new of these new tasks and they become better basically than the models that trained on the training data set of that particular task all right so they they do a number of further experiments where they basically show that the model has now learned kind of implicitly learned a number of different tasks so let's look at for example summarization this just want to show an example of how you can do this so summarization summarization task is you get a long text you need to produce a short text and that short text is then compared to short texts that humans wrote when the task was to summarize the long text and you get points on how much your text overlaps with these human texts all right so they they say we test gpt2's ability to perform summarization on the cnn and daily mail data set to induce summarization here's what i found interesting we add the text tldr after the article and generate 100 tokens right then they say they need to reduce repetition and so on but basically this this this is right this is the way you can frame summarization by text input so i find this just kind of a really nice way to think about these problems the fact that instructions of the task can be given as text this is a very nice example here so basically you you put you as input you put the entire article right and so you here is the the cnn article blah blah blah it's super long right and then here you put tldr which is for those who don't know it's too long didn't read so people use this this phrase to indicate that then they will write a short summary of whatever was before here they will either put this at the beginning or at the end of a long text right to to say to people okay if you if you don't want to read all this just read this down here um gives you the gist of it which is exactly summarization so if you then take this 
away and ask the language model what's here, right, basically throughout the training corpus the language model will have encountered such pieces of text with a tldr in it, and the language model might have learned that whatever is down here is a short version of whatever is up here. And thereby, if you then ask the language model what comes next here, the language model might learn, aha, I need to summarize whatever is above, and that's my best shot at answering the question what comes next. And yeah, so they get surprisingly good results from this. So they say on the commonly reported ROUGE 1, 2, L metrics the generated summaries only begin to approach the performance of classic neural baselines and just barely outperform selecting three random sentences from the article. But still, while qualitatively the generations resemble summaries, they often focus on recent content from the article or confuse specific details. So this is kind of a task where it kind of worked, but not really, but I just find it really interesting how they frame the task and how this can still, so it still kind of works, and that's the gist here in all of these tasks, also with like translation. They obviously don't get near the performance of a system specifically trained to do this task, but they also always say it kind of works, right, it sort of learns something. And their entire point of this paper is to say, well look, the diversity of tasks the model is able to perform, and I would say kind of perform, in a zero shot setting suggests that high capacity models trained to maximize the likelihood of a sufficiently varied text corpus begin to learn how to perform a surprising amount of tasks without the need for explicit supervision. So yeah, their entire point is if we train on such varied data that kind of spans the entire range of human language expression, the kind of tasks we want these systems to do will be learned implicitly. So basically it points to let's get an even bigger corpus, let's get even bigger models, and we might get even better in an unsupervised zero shot way at these kind of special tasks and general language understanding. All right, so that was basically it, I've jumped over a lot of points, but I encourage you to look into this, to look into the specific experiments, they're really interesting, the way how they framed things, and give, just shout your opinion around about whether or not the publishing is a good thing or not, it's really funny, I love it, and with that have a good day | [
{
"start": 0,
"end": 6.5200000000000005,
"text": " Hi, today we're looking at language models are unsupervised multitask learners by Alec"
},
{
"start": 6.5200000000000005,
"end": 13.52,
"text": " Radford, Jeffrey Wu, Reverend Child, David Luan, Dario Amadai and Ilya Sotskyver from"
},
{
"start": 13.52,
"end": 20.16,
"text": " OpenAI. This paper has generated a bit of hype in the last few days, so I wanted to"
},
{
"start": 20.16,
"end": 27.12,
"text": " go over it, basically take a look and take a look at the surrounding, let's say controversy."
},
{
"start": 27.12,
"end": 32.08,
"text": " So let's actually have a look at the blog post that OpenAI released along with this"
},
{
"start": 32.08,
"end": 40.22,
"text": " paper. They say, we've trained a large scale unsupervised language model which generates"
},
{
"start": 40.22,
"end": 44.96,
"text": " coherent paragraphs of text, achieves state of the art performance on many language modeling"
},
{
"start": 44.96,
"end": 50.08,
"text": " benchmarks and performs rudimentary reading comprehension, machine translation, question"
},
{
"start": 50.08,
"end": 56.040000000000006,
"text": " answering and summarization all without task specific training. So this sounds quite suspicious"
},
{
"start": 56.04,
"end": 61.68,
"text": " at the beginning, but we're actually going to have to look at how they do this. It sounds"
},
{
"start": 61.68,
"end": 67.88,
"text": " really good being able to do a rudimentary translation without any training on translation"
},
{
"start": 67.88,
"end": 74.92,
"text": " itself, just learning a language model. But this has been continuing a trend in recent"
},
{
"start": 74.92,
"end": 81.72,
"text": " kind of time where we see that the better your language model gets, the better basically"
},
{
"start": 81.72,
"end": 92.76,
"text": " your model gets on all these kind of language tasks. Alright, they go into this and we'll"
},
{
"start": 92.76,
"end": 101.2,
"text": " see how they do it. So basically what they've done is they've taken kind of a bigger dataset"
},
{
"start": 101.2,
"end": 107.56,
"text": " of language model, of language model dataset, which is about 40 gigabytes of internet text,"
},
{
"start": 107.56,
"end": 114.32000000000001,
"text": " I say this is here on the top. So it's one of the largest kind of text datasets there"
},
{
"start": 114.32000000000001,
"end": 122.64,
"text": " is unsupervised. And they also taken one of the largest language models. So they have"
},
{
"start": 122.64,
"end": 130.88,
"text": " their largest transformer based model has 1.5 billion parameters. So they take huge"
},
{
"start": 130.88,
"end": 138.51999999999998,
"text": " amount of data, huge model, they train this on, they train the model on the data and what"
},
{
"start": 138.51999999999998,
"end": 146.12,
"text": " comes out is like giant super language model that is able to perform all these cool tasks."
},
{
"start": 146.12,
"end": 153.88,
"text": " So here they have like a bit of a sample. So what they can do is they can basically,"
},
{
"start": 153.88,
"end": 158.12,
"text": " so the way a language model works is you always try to predict the next word based on what"
},
{
"start": 158.12,
"end": 164.44,
"text": " you've already seen. So you kind of query it by giving it some starting words and it"
},
{
"start": 164.44,
"end": 170.4,
"text": " needs to continue the text from there. So here system prompt on top you see in a shocking"
},
{
"start": 170.4,
"end": 175.56,
"text": " finding scientists discovered a herd of unicorns living in a remote previously unexplored valley"
},
{
"start": 175.56,
"end": 180.8,
"text": " in the Andes mountains. Even more surprising to the researcher the fact that the unicorns"
},
{
"start": 180.8,
"end": 190.8,
"text": " spoke perfect English. And then the model continues. The scientists named their population"
},
{
"start": 190.8,
"end": 195.84,
"text": " the population after their distinctive horn, Ovitz unicorn. These four horns silver white"
},
{
"start": 195.84,
"end": 200.28,
"text": " unicorns were previously unknown to science. Now after almost two centuries the mystery"
},
{
"start": 200.28,
"end": 205.48000000000002,
"text": " of what sparked this odd phenomenon is finally solved. I mean you can even read this it's"
},
{
"start": 205.48,
"end": 213.76,
"text": " really, really coherent text and it's quite surprising. I think it's like slightly cherry"
},
{
"start": 213.76,
"end": 223.48,
"text": " picked but still the fact that a model is able to do this is unprecedented. Especially"
},
{
"start": 223.48,
"end": 228.67999999999998,
"text": " like since it's like a general language model not specific to the task of continuing news"
},
{
"start": 228.68,
"end": 238.56,
"text": " articles about unicorns or anything. So yeah they go into these findings we'll see them"
},
{
"start": 238.56,
"end": 247.36,
"text": " in the paper and they also say that yeah they can now do all these kind of tasks like question"
},
{
"start": 247.36,
"end": 255.84,
"text": " answering reading comprehension in a zero-shot fashion. So at the end here they say what"
},
{
"start": 255.84,
"end": 262.68,
"text": " it's capable of. So let's say AI writing assistants more capable dialogue agents unsupervised"
},
{
"start": 262.68,
"end": 270.12,
"text": " translation blah blah blah. They also say a few kind of let's say bad implications."
},
{
"start": 270.12,
"end": 274.6,
"text": " Generate misleading news articles impersonate others online automate the production of abusive"
},
{
"start": 274.6,
"end": 281.68,
"text": " or fake content to post on social media automate the production of spam or phishing content."
},
{
"start": 281.68,
"end": 287.88,
"text": " They liken it to a system called deep fakes which generates really well crafted let's"
},
{
"start": 287.88,
"end": 299.12,
"text": " say videos of people. So that the kind of they frame it in a way as this could be used"
},
{
"start": 299.12,
"end": 307,
"text": " for dangerous things and they say they aren't releasing they're only releasing the small"
},
{
"start": 307,
"end": 317.32,
"text": " version of GPT-2 along with the code. They're not releasing the data set training code or"
},
{
"start": 317.32,
"end": 324.8,
"text": " the GPT-2 this is the big model the model of weights right. And they do this they cite"
},
{
"start": 324.8,
"end": 334,
"text": " safety concerns. So I mean the community basically is going nuts over this statement this decision"
},
{
"start": 334,
"end": 343.56,
"text": " not release basically the code or the model or the data set to the world. And so if you"
},
{
"start": 343.56,
"end": 352.68,
"text": " search on Twitter for GPT-2 then everyone basically has an opinion of whether or not"
},
{
"start": 352.68,
"end": 360.64,
"text": " this is a good thing or not apart from people testing it out. So they've given access to"
},
{
"start": 360.64,
"end": 370,
"text": " like a selected set of journalists to an API where they can query the model. So there are"
},
{
"start": 370,
"end": 378.91999999999996,
"text": " some samples flying around but basically people are just debating whether or not this is a"
},
{
"start": 378.91999999999996,
"end": 387.8,
"text": " good thing or not and I mean it's just hilarious to go along with this and to just read all"
},
{
"start": 387.8,
"end": 397.36,
"text": " the people having opinions. I mean I've given my opinion as well just chime in it's a fun"
},
{
"start": 397.36,
"end": 405.44,
"text": " ride especially like this post here on reddit machine learning says should I release my"
},
{
"start": 405.44,
"end": 413.88,
"text": " NIST model or keep it closed source fearing malicious use. Today I trained a 23,000 layer"
},
{
"start": 413.88,
"end": 421.56,
"text": " ResNet got a 99% accuracy and MNIST. I'd love to share the model but I fear it being used"
},
{
"start": 421.56,
"end": 428.08,
"text": " maliciously. What if it is used to read documents by the Russians? What are your thoughts? I"
},
{
"start": 428.08,
"end": 439.84,
"text": " mean yeah this is in essence I mean in essence it's that right. It's like yeah come on. So"
},
{
"start": 439.84,
"end": 450.4,
"text": " I can just give my opinion up front. I think a lot of things came together here and I think"
},
{
"start": 450.4,
"end": 456.17999999999995,
"text": " that this being OpenAI being kind of this initiative I mean it's never been there before"
},
{
"start": 456.17999999999995,
"end": 462.91999999999996,
"text": " they're not an academic institution. They're not a company but still they you know they're"
},
{
"start": 462.91999999999996,
"end": 467.76,
"text": " researchers they want to have a career so there's lots of pressures on them. There's"
},
{
"start": 467.76,
"end": 475.08,
"text": " pressure to further publish so I think that that's kind of an underlying motivation to"
},
{
"start": 475.08,
"end": 480.76,
"text": " not release your model and your code and your data set is you actually you know there's"
},
{
"start": 480.76,
"end": 485.28,
"text": " a lot of work in this and you actually might want to get more than one paper out of it"
},
{
"start": 485.28,
"end": 492.52,
"text": " so keeping these things to yourself is basically a surefire guarantee you're going to get another"
},
{
"start": 492.52,
"end": 501.47999999999996,
"text": " two, three, four, five papers out of this data or model or whatever. It's also a good"
},
{
"start": 501.47999999999996,
"end": 507.59999999999997,
"text": " way to kind of generate press if you basically say oh we're not releasing but we have this"
},
{
"start": 507.59999999999997,
"end": 512.8,
"text": " really good model and there's one thing on Twitter right I mean you can't probably can't"
},
{
"start": 512.8,
"end": 518.36,
"text": " find it but says like step one my model is state of the art step two my model is state"
},
{
"start": 518.36,
"end": 524.5600000000001,
"text": " of the art but generalizes better to other tasks step three my model does the same thing"
},
{
"start": 524.5600000000001,
"end": 535.48,
"text": " but with fewer parameters and step four my model is so good I can't even talk about it."
},
{
"start": 535.48,
"end": 546,
"text": " So basically I think a lot of things came together this press generating the pressure"
},
{
"start": 546,
"end": 554.56,
"text": " to create more kind of papers out of it and genuinely security concerns. I think being"
},
{
"start": 554.56,
"end": 562.2,
"text": " open AI and open AI kind of was established as a way to let's say the demo like their"
},
{
"start": 562.2,
"end": 568.88,
"text": " statutes pretty clearly say we want to open AI and research it in ethical use and you"
},
{
"start": 568.88,
"end": 575.32,
"text": " have backers like Elon Musk that talk all the time about kind of safety related issues"
},
{
"start": 575.32,
"end": 581.12,
"text": " in AI. I think there's a lot of pressure on these people to with everything they do basically"
},
{
"start": 581.12,
"end": 590.96,
"text": " have an ethical component. So everything they do is kind of scrutiny to this does this have"
},
{
"start": 590.96,
"end": 597.8000000000001,
"text": " ethical implications and where can we basically stand out from the rest of the community by"
},
{
"start": 597.8000000000001,
"end": 602,
"text": " doing something it doesn't need to be more ethical just needs to be different with an"
},
{
"start": 602,
"end": 607.52,
"text": " ethical reason behind reasoning behind it and this I think this is it I think there's"
},
{
"start": 607.52,
"end": 613.2,
"text": " a lot of things coming together I don't I don't think anyone like maliciously thought"
},
{
"start": 613.2,
"end": 619.04,
"text": " oh this you know we're gonna do this it's gonna generate so much press or I don't think"
},
{
"start": 619.04,
"end": 626.5,
"text": " anyone actively thought ah we'll just keep it secret we're gonna get so much more papers"
},
{
"start": 626.5,
"end": 632.56,
"text": " out of it I honestly think that the reasoning was there's a lot of you know a lot of pressure"
},
{
"start": 632.56,
"end": 638.64,
"text": " to do this ethical things when there's there's not if you think about it it's yeah it's"
},
{
"start": 638.64,
"end": 644.02,
"text": " a good language model and it can generate text but you can also hire you know people"
},
{
"start": 644.02,
"end": 649.92,
"text": " to generate text to generate fake news to do phishing and spam it's just a bit more"
},
{
"start": 649.92,
"end": 656.4399999999999,
"text": " efficient now right and yeah so it's it's unprecedented that you don't you don't release"
},
{
"start": 656.4399999999999,
"end": 666.0799999999999,
"text": " this research kind of cold war style so it's not really dangerous to release this and it's"
},
{
"start": 666.0799999999999,
"end": 671.92,
"text": " just delaying the inevitable basically but I think that the pressure the pressure was"
},
{
"start": 671.92,
"end": 677.88,
"text": " on and they made a decision that they thought was in line with their values and I think"
},
{
"start": 677.88,
"end": 686.88,
"text": " the this just neatly aligns with the underlying the other benefits to them that yeah all right"
},
{
"start": 686.88,
"end": 692.36,
"text": " so let's dive into the paper the paper is actually not you know too too much content"
},
{
"start": 692.36,
"end": 700.64,
"text": " there what they basically say so far is that a lot of a lot of these kind of papers on"
},
{
"start": 700.64,
"end": 706.32,
"text": " these tasks they they say the dominant approach to creating ML systems is to collect a data"
},
{
"start": 706.32,
"end": 711.2800000000001,
"text": " set of training examples demonstrate correct behavior train a system to imitate test its"
},
{
"start": 711.2800000000001,
"end": 719.2,
"text": " performance on in IID samples so they basically say the there's kind of the single task training"
},
{
"start": 719.2,
"end": 724.32,
"text": " on single domain data sets is a major contributor to the lack of generalization observed in"
},
{
"start": 724.32,
"end": 727.72,
"text": " current systems so they basically say these language systems they don't generalize like"
},
{
"start": 727.72,
"end": 734.1600000000001,
"text": " a QA system might be trained on a QA task but it you know has nothing to do with the"
},
{
"start": 734.16,
"end": 740.52,
"text": " task is basically a little bit different and even in multitask learning they say multitask"
},
{
"start": 740.52,
"end": 747.6,
"text": " learning is a promising framework but also it's kind of say it's still nascent and there's"
},
{
"start": 747.6,
"end": 754.24,
"text": " only very few different tasks right to do so they basically say you need basically a"
},
{
"start": 754.24,
"end": 763.64,
"text": " big big unsupervised model and it will implicitly learn all of the kind of special tasks and"
},
{
"start": 763.64,
"end": 773.88,
"text": " yeah so they say there there are approaches that basically basically learn these language"
},
{
"start": 773.88,
"end": 781.48,
"text": " models but then still require supervised training so basically fine-tuning this has been the"
},
{
"start": 781.48,
"end": 787.6,
"text": " this is for example the bird paper we discussed in the in the last video or two two videos"
},
{
"start": 787.6,
"end": 793.48,
"text": " ago that learns a giant language model but then does fine-tuning for each of these tasks"
},
{
"start": 793.48,
"end": 799.88,
"text": " and gets really well what they want to do here is basically simply learn a language"
},
{
"start": 799.88,
"end": 805.88,
"text": " model and then investigate whether or not the language model can perform downstream"
},
{
"start": 805.88,
"end": 812.12,
"text": " tasks in a zero-shot setting that means without any parameter or architecture modification"
},
{
"start": 812.12,
"end": 821.2,
"text": " so no fine-tuning all right so what they do so basically what a language model is if for"
},
{
"start": 821.2,
"end": 828.84,
"text": " those who don't know it's it's if you have a sequence of text let's say a b c d e these"
},
{
"start": 828.84,
"end": 837.96,
"text": " are words let's act like some actual words the cat sat on the mat and so on and you and"
},
{
"start": 837.96,
"end": 843.72,
"text": " you a language model is you kind of remove the end of the sentence at some point and"
},
{
"start": 843.72,
"end": 853,
"text": " ask the model what comes next right that's a language model I mean there's different"
},
{
"start": 853,
"end": 858.12,
"text": " kinds of language models specific language ones but that's the basic the basic thing"
},
{
"start": 858.12,
"end": 863.32,
"text": " so the you just ask the model what's next so you can you can do a lot of unsupervised"
},
{
"start": 863.32,
"end": 867.48,
"text": " training because you don't need a label data set for this you simply need a text corpus"
},
{
"start": 867.48,
"end": 872.48,
"text": " and that's basically all they do they use transformers which we've also discussed in"
},
{
"start": 872.48,
"end": 878.6,
"text": " attention is all you need paper so if you if you don't know what transformers are go"
},
{
"start": 878.6,
"end": 889,
"text": " back and look at that yeah all right so basically they say a lot of these special tasks like"
},
{
"start": 889,
"end": 895.26,
"text": " translation and question answering can be framed in language model way for example if"
},
{
"start": 895.26,
"end": 902.84,
"text": " you simply input if this is your text translate to French and then the English text and then"
},
{
"start": 902.84,
"end": 909.4399999999999,
"text": " the French text right and then at at test time basically you leave away the French text"
},
{
"start": 909.4399999999999,
"end": 917.96,
"text": " you simply ask the language model what comes next right if and its input is translate to"
},
{
"start": 917.96,
"end": 924.48,
"text": " French and then English text this is the translation framed as a language model task because you"
},
{
"start": 924.48,
"end": 931.24,
"text": " can specify the task that the language allows to do also as language so this is quite this"
},
{
"start": 931.24,
"end": 937.44,
"text": " is quite an interesting approach and one they exploit here and they basically say well since"
},
{
"start": 937.44,
"end": 944.88,
"text": " in a large and diverse corpus of web pages that they collect here some there is going"
},
{
"start": 944.88,
"end": 951.48,
"text": " to be some websites that basically all do translation from English to French and the"
},
{
"start": 951.48,
"end": 958.6,
"text": " model can learn from that so here in this paragraph they basically list examples of"
},
{
"start": 958.6,
"end": 963.4,
"text": " naturally occurring demonstrations of English to French and French to English translation"
},
{
"start": 963.4,
"end": 968.84,
"text": " found throughout the training data set so basically this is this is how the model could"
},
{
"start": 968.84,
"end": 977.12,
"text": " learn let's just look at one I hate the word perfume Bursas it's somewhat better in French"
},
{
"start": 977.12,
"end": 987,
"text": " right so there's a way in just an unsupervised setting where the language model could learn"
},
{
"start": 987,
"end": 993.36,
"text": " right if you just cross out this word at the end and you just ask the model what comes"
},
{
"start": 993.36,
"end": 1001.84,
"text": " next right the model sees I hate the word perfume Bursas it's somewhat better in French"
},
{
"start": 1001.84,
"end": 1006.64,
"text": " period colon then the model has to put something there and the most logical continuation is"
},
{
"start": 1006.64,
"end": 1012.8,
"text": " to put the French word for perfume right so that that's kind of how they frame translation"
},
{
"start": 1012.8,
"end": 1019.56,
"text": " and these other tasks in language model way all right so they talk about the training"
},
{
"start": 1019.56,
"end": 1026.74,
"text": " data set which is a major component here they say they make a new training data set because"
},
{
"start": 1026.74,
"end": 1031.4,
"text": " all of the current ones aren't sufficient they say most prominent source of diverse"
},
{
"start": 1031.4,
"end": 1036.5600000000002,
"text": " nearly unlimited text is web scripts such as common crawl while these archives are many"
},
{
"start": 1036.5600000000002,
"end": 1040.72,
"text": " orders of magnitude larger than current language modeling datasets they have significant data"
},
{
"start": 1040.72,
"end": 1048.72,
"text": " quality issues so to say content are mostly unintelligible and so on so they basically"
},
{
"start": 1048.72,
"end": 1057.76,
"text": " describe here how they scrape a new web scrape which emphasizes document quality they go"
},
{
"start": 1057.76,
"end": 1066.64,
"text": " on reddit basically and scrape all outbound links from reddit that have received at least"
},
{
"start": 1066.64,
"end": 1074.28,
"text": " three karma which means that it yeah three up votes for a post of a link which basically"
},
{
"start": 1074.28,
"end": 1084.92,
"text": " means that three humans agreed that this was a good link so so they that's that's how they"
},
{
"start": 1084.92,
"end": 1091.16,
"text": " collect the data set resulting data set web stack web text contains text subset of the"
},
{
"start": 1091.16,
"end": 1101.1200000000001,
"text": " 45 million links they then kind of clean this and scrape it down and remove some stuff and"
},
{
"start": 1101.1200000000001,
"end": 1107.8000000000002,
"text": " they they end up with a large corpus right and then they talk about how they represent"
},
{
"start": 1107.8000000000002,
"end": 1113.88,
"text": " the input which is byte pair encoding style it's not exactly by parent coding it's a"
},
{
"start": 1113.88,
"end": 1124.2800000000002,
"text": " byte pair encoding inspired encoding we won't make a video about this by itself because"
},
{
"start": 1124.2800000000002,
"end": 1131.0800000000002,
"text": " it's really interesting but basically it's you can think of it as tokenization and pre-processing"
},
{
"start": 1131.0800000000002,
"end": 1139.68,
"text": " right then they say they they show their models so architecture hyperparameters basically"
},
{
"start": 1139.68,
"end": 1145.68,
"text": " these are these are their models this is the smallest one this second smallest one they"
},
{
"start": 1145.68,
"end": 1154.3200000000002,
"text": " say it's the same size as BERT so the the language model by google that we've looked"
},
{
"start": 1154.3200000000002,
"end": 1166.98,
"text": " at and then the largest one 1.5 billion parameters I mean that's huge and yeah they say it's"
},
{
"start": 1166.98,
"end": 1174.56,
"text": " ten times larger than the previous so the first one is their previous model and this"
},
{
"start": 1174.56,
"end": 1187.16,
"text": " now is this is the GPT-2 model that that gets all these these nice results so they do experiments"
},
{
"start": 1187.16,
"end": 1193.16,
"text": " first they do experiments on language modeling itself right so they train on their on their"
},
{
"start": 1193.16,
"end": 1199.72,
"text": " corpus and then they evaluate on a bunch of other language modeling corpus so these up"
},
{
"start": 1199.72,
"end": 1209.0800000000002,
"text": " here are language modeling corpuses and the state of the art is in this row and then you"
},
{
"start": 1209.0800000000002,
"end": 1219.16,
"text": " just look at basically the bottom row compare to their largest model this this is perplexity"
},
{
"start": 1219.16,
"end": 1233.92,
"text": " where it says PPL and the I think this here is is is accuracy so perplexity lower is better"
},
{
"start": 1233.92,
"end": 1240.2,
"text": " which you can you can see here the previous state of the art was 39 on wiki text 2 they"
},
{
"start": 1240.2,
"end": 1247.16,
"text": " get to 18 with accuracy obviously higher is better so the the kind of previous accuracy"
},
{
"start": 1247.16,
"end": 1256.2,
"text": " in Lombarda was 59 they get to 63 basically improve everything except for this 1 billion"
},
{
"start": 1256.2,
"end": 1262.3200000000002,
"text": " word corpus and they they also explain why they say this is the most heavily pre-processed"
},
{
"start": 1262.3200000000002,
"end": 1271.7,
"text": " text and so on so that basically they basically are really good at language modeling even"
},
{
"start": 1271.7,
"end": 1276.0800000000002,
"text": " though they train on a different data set that's the the point right the point is they"
},
{
"start": 1276.08,
"end": 1280.6799999999998,
"text": " train on their own corpus and then they go and just evaluate on the test set of these"
},
{
"start": 1280.6799999999998,
"end": 1288.1999999999998,
"text": " of these new of these new tasks and they become better basically than the models that trained"
},
{
"start": 1288.1999999999998,
"end": 1296.1599999999999,
"text": " on the training data set of that particular task all right so they they do a number of"
},
{
"start": 1296.1599999999999,
"end": 1304.06,
"text": " further experiments where they basically show that the model has now learned kind of implicitly"
},
{
"start": 1304.06,
"end": 1313.6,
"text": " learned a number of different tasks so let's look at for example summarization this just"
},
{
"start": 1313.6,
"end": 1318.76,
"text": " want to show an example of how you can do this so summarization summarization task is"
},
{
"start": 1318.76,
"end": 1326.48,
"text": " you get a long text you need to produce a short text and that short text is then compared"
},
{
"start": 1326.48,
"end": 1334,
"text": " to short texts that humans wrote when the task was to summarize the long text and you"
},
{
"start": 1334,
"end": 1339.28,
"text": " get points on how much your text overlaps with these human texts all right so they they"
},
{
"start": 1339.28,
"end": 1348.1200000000001,
"text": " say we test gpt2's ability to perform summarization on the cnn and daily mail data set to induce"
},
{
"start": 1348.1200000000001,
"end": 1356,
"text": " summarization here's what i found interesting we add the text tldr after the article and"
},
{
"start": 1356,
"end": 1362.44,
"text": " generate 100 tokens right then they say they need to reduce repetition and so on but basically"
},
{
"start": 1362.44,
"end": 1376.4,
"text": " this this this is right this is the way you can frame summarization by text input so i"
},
{
"start": 1376.4,
"end": 1384.48,
"text": " find this just kind of a really nice way to think about these problems the fact that instructions"
},
{
"start": 1384.48,
"end": 1390.28,
"text": " of the task can be given as text this is a very nice example here so basically you you"
},
{
"start": 1390.28,
"end": 1399.8,
"text": " put you as input you put the entire article right and so you here is the the cnn article"
},
{
"start": 1399.8,
"end": 1408.48,
"text": " blah blah blah it's super long right and then here you put tldr which is for those who don't"
},
{
"start": 1408.48,
"end": 1416.92,
"text": " know it's too long didn't read so people use this this phrase to indicate that then they"
},
{
"start": 1416.92,
"end": 1422.6,
"text": " will write a short summary of whatever was before here they will either put this at the"
},
{
"start": 1422.6,
"end": 1428.24,
"text": " beginning or at the end of a long text right to to say to people okay if you if you don't"
},
{
"start": 1428.24,
"end": 1432.84,
"text": " want to read all this just read this down here um gives you the gist of it which is"
},
{
"start": 1432.84,
"end": 1438.8799999999999,
"text": " exactly summarization so if you then take this away and ask the language model what's"
},
{
"start": 1438.8799999999999,
"end": 1445.24,
"text": " here right basically throughout the training corpus the language model will have encountered"
},
{
"start": 1445.24,
"end": 1452.04,
"text": " such pieces of text with a tldr in it and the language model might have learned that"
},
{
"start": 1452.04,
"end": 1459.52,
"text": " whatever is down here is a short version of whatever is up here and thereby if you then"
},
{
"start": 1459.52,
"end": 1465.76,
"text": " ask the language model what comes next here right the language model might learn aha i"
},
{
"start": 1465.76,
"end": 1473.82,
"text": " need to summarize whatever is above and that's the my best shot at completing at at answering"
},
{
"start": 1473.82,
"end": 1484.76,
"text": " the question what comes next and yeah so they get you know surprisingly good results um"
},
{
"start": 1484.76,
"end": 1494.24,
"text": " from from this so they say on the commonly reported rouge 12l metrics the generated summaries"
},
{
"start": 1494.24,
"end": 1499.16,
"text": " only begin to approach the performance of classic neural baselines just barely outperforms"
},
{
"start": 1499.16,
"end": 1509.6,
"text": " selecting three random sentences from the article uh but but um still it it um while"
},
{
"start": 1509.6,
"end": 1516,
"text": " qualitatively the generations resemble summaries they often focus on recent content from there"
},
{
"start": 1516,
"end": 1520.8799999999999,
"text": " to color confuse specific details so this is kind of a task where it kind of worked but"
},
{
"start": 1520.8799999999999,
"end": 1527.56,
"text": " not really um but i just find it it's really interesting that that it it kind of how they"
},
{
"start": 1527.56,
"end": 1534.12,
"text": " frame the task and how this can still so it still kind of works and that's the the gist"
},
{
"start": 1534.12,
"end": 1539.8,
"text": " here in all of these tasks is also with like translation they obviously don't get near"
},
{
"start": 1539.8,
"end": 1547.1999999999998,
"text": " the performance of a system specifically trained to do this task but they all also always say"
},
{
"start": 1547.1999999999998,
"end": 1555.6799999999998,
"text": " it kind of works right it's sort of sort of it learns something and their entire point"
},
{
"start": 1555.68,
"end": 1573.6000000000001,
"text": " of this paper is to say well look um yeah the the the diversity of tasks the model is"
},
{
"start": 1573.6000000000001,
"end": 1578.52,
"text": " able to perform and i would say kind of perform in a zero shot setting suggests that high"
},
{
"start": 1578.52,
"end": 1584.42,
"text": " capacity models trained to maximize the likelihood of a sufficiently varied text corpus begin"
},
{
"start": 1584.42,
"end": 1589.64,
"text": " to learn how to perform a surprising amount of tasks without the need for explicit supervision"
},
{
"start": 1589.64,
"end": 1599.76,
"text": " so yeah their entire point is if we train on such varied data that kind of um that spans"
},
{
"start": 1599.76,
"end": 1606.3400000000001,
"text": " the entire range of human language expression the the kind of tasks we want these systems"
},
{
"start": 1606.3400000000001,
"end": 1613.88,
"text": " to do will be learned implicitly so basically it points to let's get an even bigger corpus"
},
{
"start": 1613.88,
"end": 1620.44,
"text": " let's get even bigger models and we might get even better unsupervised zero shot way"
},
{
"start": 1620.44,
"end": 1627.96,
"text": " in these kind of special tasks and general language understanding all right so that that"
},
{
"start": 1627.96,
"end": 1632.4,
"text": " was basically i've jumped over a lot of points but i encourage you to look into this to look"
},
{
"start": 1632.4,
"end": 1637.48,
"text": " into the specific experiments they're really interesting the way how they framed things"
},
{
"start": 1637.48,
"end": 1645.56,
"text": " and um give just just shout your opinion around about whether or not the publishing is a good"
},
{
"start": 1645.56,
"end": 1671.8,
"text": " thing or not it's really funny i love it um and with that have a good day"
}
] |
OioFONrSETc | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift | [
"Science & Technology"
] | [
"machine learning",
"deep learning",
"neural networks",
"batch normalization",
"batchnorm",
"whitening",
"data",
"internal covariate shift",
"deep neural networks",
"deep nets",
"mini-batch",
"training"
] | https://arxiv.org/abs/1502.03167
Abstract:
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
Authors:
Sergey Ioffe, Christian Szegedy | Hi, today we're looking at batch normalization. Accelerating deep network training by reducing internal covariate shift by Sergey Ioffe and Christian Szegedy. Yeah, not my best pronunciation. Szegedy. Close enough. Alright, so this is a bit of an older paper and I think it's still good to look at it. It's relevant and people just kind of throw batch normalization into networks and maybe don't really know what it's doing. So let's look at it. So what these people argue is that in a network usually you have structures like this. So if something like that, it means that your loss kind of, this is a two layer network, your loss is a composition of the first layer on the input u with parameters theta 1 and the second layer with parameters theta 2. So conceptually that would look something like this. You have your input, maybe it's an image, right? And you put it through the network and it becomes some intermediate representation, right? That's X0, that's X1, or maybe we'll call it even H1, hidden representation, right? Then that becomes, through the next layer, H2 and so on, right? So this stuff here, these would be weight matrices, W1, W2, that transform the image into a new image or whatever. So what they're arguing is that, well, if you only consider a single layer, like the first layer here, it's kind of the same as if you only consider the second layer with the H1 now as the input, right? It's pretty natural to see each layer of the neural network as kind of like its own transformation, taking inputs and producing some outputs. So what people usually do with the very first input here, with your data in machine learning generally, is so called whitening the data, which means that they have this over here. Usually data is whitened, I can't find it, but what it means is you basically want to, if you have data, let's say here is a coordinate axis, you have 2D data, and you might want to do kind of a linear regression on it, and you have data that's kind of like that, right? It suits you to transform this data by, first of all, looking where its mean is, the mean is about here, and subtracting that, so here, here, and then kind of dividing by its standard deviation in each direction, so there's a standard deviation here, and there is a standard deviation here. So you would transform this data into something like, maybe something like this, so you see that the mean is now in the middle, and it's not so elongated anymore. So you have a much easier time to kind of learn something on this data than on this data over here, simply because our classifiers usually tend to rely on inner products, and if you do an inner product here, you have one of these vectors here, and you do some inner product, it's always going to be far away from the mean, and thereby the inner products are going to be large no matter what, right? Whereas here, if you take a random one, and then another random one, so if you take two random points here, their two vectors from the mean are almost the same, whereas if you take two random points here, they tend to point uniformly in all directions. So it's in this sense that we know that machine learning methods work better if we whiten the data first. So these people ask, hey, why do we only do this at the very beginning, right? If each layer basically takes its input and learns something, each layer is basically a machine learning method, why don't we just whiten the data to every single layer, or every single subcomponent of a deep network? 
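Just to make that whitening step concrete before moving on, here is a minimal per-dimension standardization sketch in NumPy. The data and variable names are made up for illustration; note that full whitening would also decorrelate the dimensions, while this (and batch norm later) only centers and rescales each dimension separately.

```python
# Minimal sketch of per-dimension standardization ("whitening" in the loose sense above).
# Synthetic 2D data, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(1000, 2))  # off-center, elongated cloud

mu = x.mean(axis=0)               # per-dimension mean
sigma = x.std(axis=0)             # per-dimension standard deviation
x_std = (x - mu) / (sigma + 1e-5)

print(x_std.mean(axis=0))         # roughly 0 in each dimension
print(x_std.std(axis=0))          # roughly 1 in each dimension
```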
And that's the kind of basic step here. So they argue how this has been kind of tried before, or what kind of methods you would usually get, and why these aren't so good, mainly because you kind of need to intermingle this whitening with training the network, and thereby if you just go about this naively, then you would kind of produce artifacts from training. So that's this section here, where they argue that you can't really go about this super naively, but what they do isn't super complicated, but they just do it in a smart way. So we'll jump directly to that. What they say is, okay, let's look at what they call normalization via mini-batch statistics. Let's say we have some d-dimensional input x, and we're just going to look at per dimension. So we only care about per individual dimension normalization. So what are we going to do? We're going to take the kth dimension, we're going to subtract from it the mean of the kth dimension. Within a mini-batch, within a mini-batch of data. So a mini-batch may be something like 32 examples, or 100 examples, or something like this. And then we'll divide by the variance of that mini-batch. So this is done over here in BASIC. So you compute mu of the mini-batch, which is simply the empirical mean of the data at that particular layer. And then you compute sigma squared b, which is simply the empirical estimate of the variance computed on that particular mini-batch. And then you transform your data by subtracting that and by dividing it by this. And this constant here is simply to prevent from dividing by two small values. So you get like numerical problems. So what does it do? It does basically what we did above. But now what they say is, okay, we want to make sure that this transformation can potentially represent the identity, because sometimes, or like a natural, natural, if you had to do something with your input when giving it to the next layer, the very baseline is to do nothing to it, to do the identity transform. But if you do this, you probably won't end up with the identity transform, except if the mean is exactly zero and the variance is exactly one. So what they say is, okay, we'll also introduce two new parameters to this. Here, this gamma and this beta here. And these are learned, like other parameters in the network. We learn the parameter gamma and beta. And gamma and beta are simply a scalar that this transformed x is multiplied by. And beta is simply a scalar that is then added to it. So in each dimension of your hidden representation, you basically learn how to scale it and how to shift it, scale and shift, after you've done the normalization. So first, you do the normalization. First, you go from this type of data to this type of data. And then you say, well, maybe it's actually more beneficial to have it not centered. So that the network can actually learn then to transform this somewhere. This might seem redundant, but it's really powerful, because what you're basically saying is that, okay, this probably isn't the best distribution. This probably is better, but if the network, if the backpropagation algorithm or the training algorithm decides that this first representation was actually useful, it has the option of going back. But it also has the option of going to any other kind of form of distribution. So it's pretty powerful in terms of what it does. 
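A minimal sketch of that transform in training mode, written in NumPy rather than taken from the authors' code: compute the per-dimension mini-batch mean and variance, normalize, then apply the learned scale gamma and shift beta.

```python
# Batch norm forward pass (training mode), illustrative NumPy sketch.
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # x: (batch_size, num_features) activations for one mini-batch
    mu = x.mean(axis=0)                      # mini-batch mean per dimension
    var = x.var(axis=0)                      # mini-batch variance per dimension
    x_hat = (x - mu) / np.sqrt(var + eps)    # normalized activations
    y = gamma * x_hat + beta                 # learned per-dimension scale and shift
    cache = (x, x_hat, mu, var, gamma, eps)  # kept around for the backward pass
    return y, cache

# Example: mini-batch of 32 samples, 4 features, gamma/beta at their identity-style init.
x = np.random.randn(32, 4) * 2.0 + 3.0
y, cache = batchnorm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
```

With gamma equal to the standard deviation and beta equal to the mean of the incoming activations, this layer could in principle undo the normalization, which is exactly the identity option discussed above.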
It's not really correct here that it has the power to go to any distribution, because it's only kind of a per dimension scalar that it learns, but still, the potential to transform the distribution by these learned scalars is pretty big. All right. So basically, that's it. That's the whole shebang. You normalize your inputs to each layer by this formula, and then you introduce new parameters that you learn along with your network parameters. So this kind of has some implications. First of all, one implication is this here. If you build a batch norm into your network, it kind of learns this plus beta, which is basically a bias parameter, if you think of a traditional kind of fully connected layer. This isn't a fully connected layer because this scalar here is only per dimension, but the bias in a fully connected layer is also just per dimension. So the beta is equal to a bias in a fully connected layer. So if you have a batch normalization after a fully connected or convolutional layer, or anything that can or sometimes has a bias parameter, it's almost not worth it to kind of learn both. So you would rather just only have the one from the batch normalization and leave and use the convolution or fully connected layer without a bias. So that's kind of one implication. Another implication is we have just lost the ability to have deterministic test time inference. So much like dropout, which is kind of random dropping out of nodes, here we have quantities that depend on the mini-batch. Not only the individual sample, but they actually depend on what other samples are randomly selected to be trained with that particular sample. So that's kind of awkward if you want to have some deterministic reproducible thing at test time. So what people do is... And here, this is discussed. What people do is, while training, they use these quantities, the quantities we just discussed, but they keep kind of a running average over them. So what I would do is in each mini-batch, I would compute this mini-batch mean and this mini-batch variance, and I would keep running averages of them. And at test time, I'm going to plug in these running averages, so there's nothing dependent on the mini-batch anymore. So that's a pretty neat trick, I think. You can even imagine at the end of your network training, using these here to kind of fine-tune the weights to these exact parameters. So that's one thing that you have to pay attention to. So usually in neural network libraries, there are parameters you can set whether or not this network is in train mode or in test mode. And depending on that, the batch norm layer will use the mini-batch statistics or will use the kind of over-dataset statistics. Alright, the second thing is training. So how do you actually train this thing? Because now, you can't just... We started with our multi-layer network up here. F2, F1, right? First, I'm going to put my things through F1, and then I'm going to put my things through F2. And the backpropagation here is quite easy. So let me get rid of this. The backprop here is quite easy. You go to L, and maybe you want to derive it by theta 1. So you first go to derive it by the hidden representation 1, and then the hidden representation 1 with respect to theta 1. So the hidden representation would be whatever comes out of here. H1, sorry, not I. And so on. So you kind of chain rule your way through here. But now in between these layers here, you have these batch norm things. 
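In a framework this typically looks like the following; PyTorch here is my choice for illustration, not something prescribed by the paper. It shows both points from above: the convolution drops its own bias because beta takes over that role, and switching between train() and eval() switches between mini-batch statistics and the stored running averages.

```python
# Conv + batch norm block, illustrative PyTorch sketch.
import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=False),  # bias is redundant with beta
    nn.BatchNorm2d(16, momentum=0.1),  # learns gamma/beta, tracks running mean and variance
    nn.ReLU(),
)

x = torch.randn(8, 3, 32, 32)

block.train()   # training mode: use mini-batch statistics, update the running averages
y_train = block(x)

block.eval()    # test mode: use the stored running averages, so inference is deterministic
y_test = block(x)
```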
And so the authors discuss how we now do backpropagation in the face of these things. So here is basically what they discuss. It actually pays to have a graph of what's going on. So here is x. This is the input to our layer. So what do we compute from x? We compute mu, let's just call it mu, or mu B it's called here. This is the mean of all the x's. So this is x, xi until x, well, x1 until xn. This is the mini-batch. We compute the mean, and then from this and from this, we can compute this estimate of the variance. We need both. So we now have the mean and the variance over the mini-batch. So we're going to take one of these x's, just the i-th one, and we're going to use this and this to compute x, what? Compute x, is it called hat? Yeah, probably. It's called x hat, right? Yeah, we saw about x hat. So x hat i is xi minus mu B divided by sigma squared B, the square root of it plus this kind of little constant here. We're going to leave away the little constant for clarity's sake. Actually, it's in the calculations here. So then we have a new parameter, gamma, right? We're going to use it and our x hat to compute, and also this beta here, to compute y hat. Y or y, just y. And of course this is i, this is i. And this here is our final output of the layer. You can see now the backpropagation paths if you go through here. So the backpropagation path, if we have some loss coming in here, we backprop through yi, right? So here is the L, the loss to yi. That's here. So if we want, for example, the backprop with respect to beta, what we do is we simply, and this is over the mini-batch of course, we simply backprop here through this path. So in our formula for beta, there should be only mention yi. And that's what we see here, right? In our formula for gamma, there should only be mention of yi. So because the path leads only through yi. Oh, no, I'm sorry. Actually, because of the, what I mean is of the derivative with respect to yi. Of course, we also have to pay attention that this is multiplied here by this x hat i, where of course that's not the case when we just add something. Because the derivative of an addition like x plus b with respect to b disregards x, whereas if it's x times b, it doesn't disregard x. Alright, so if we, yeah, so you can go back. So the interesting bit basically comes when we want to find out, okay, how? Because here is another layer, right? Down here somewhere, there is another layer. And we basically want to know this input here to the next layer, how do we compute it in the face of this mess here? Because it's not so easy, right? So you have to see we have three paths here. We go back through x, and let me get rid of these blue lines. We go back through x hat directly to x. We go one path is through here, and one path is through this mu. So basically you have to compute derivatives with respect to sigma squared and mu. And for that we need the derivative with respect to x hat. So basically the way backprop works is you just find all paths from where you are to where you want to go, and then you kind of iteratively compute this. So this one here is the easiest. As you see here they did it on top. Well first they did this one, which is simply going from y to x hat i. Then they go from x hat i to sigma squared, which simply involves kind of the reverse operations of how you got it. This is simply a derivative formula here of the division by square root. Then you can use this quantity here to compute that. 
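Put into code, the backward pass they derive could look roughly like this. It is my own NumPy paraphrase of those formulas (with the epsilon term kept in), where dy is the gradient arriving from the layer above.

```python
import numpy as np

def batch_norm_backward(dy, x, gamma, eps=1e-5):
    """Gradients of y = gamma * x_hat + beta with respect to x, gamma and beta."""
    n = x.shape[0]
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    std = np.sqrt(var + eps)
    x_hat = (x - mu) / std

    dbeta = dy.sum(axis=0)                 # beta only enters through the addition
    dgamma = (dy * x_hat).sum(axis=0)      # gamma multiplies x_hat, so x_hat shows up here

    dx_hat = dy * gamma
    dvar = np.sum(dx_hat * (x - mu) * -0.5 * std**-3, axis=0)
    dmu = np.sum(-dx_hat / std, axis=0) + dvar * np.mean(-2.0 * (x - mu), axis=0)
    # three paths into x: directly through x_hat, through the variance, and through the mean
    dx = dx_hat / std + dvar * 2.0 * (x - mu) / n + dmu / n
    return dx, dgamma, dbeta
```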
So basically you just go in reverse of how you computed the operations in the first place. We said we needed mu B to compute sigma squared B. Now we need the derivative with respect to sigma squared B in order to compute the derivative with respect to mu B. And once you have that, and you see the addition here, the add here comes from the fact that two things contribute to mu B. So two paths lead to mu B: one path is from here, and one path is through here. So here there should be a green arrow. Since there are two paths, you have two components to your derivative, and you add each of them. So that's how that's going to be. And then this here, with respect to this x here, we have three paths, because we have three arrows going out of xi: one here, one here, and one here. So you have to take into account all of them. This one is pretty easy, that's the first one. Then the second one goes through this mu B, which we've already computed, and the third one goes through the sigma, which we've also already computed. And these are added, because you have to add all the paths in the backprop algorithm. Maybe we'll do a video on backprop later to really dive into how this works. And finally, they compute these, which we've already discussed. So in essence, the whole thing is differentiable. You just have to kind of pay attention to how to do it, but the whole thing is differentiable. And thereby, you can basically backprop through a network that has these batch norm layers built in. So that's pretty cool. I just want to quickly jump over to the results. Keep in mind, this paper is from 2015, so networks weren't that big back then, and we didn't know that much about training yet. But the interesting thing is they basically discovered, look, we can have drastically fewer steps in order to reach the same accuracies. And these are kind of the activations of the network over the course of training. So without batch norm, you see, especially at the beginning, there are large fluctuations in the activations, and because they use batch norm now, there's no such thing. So basically, the reason for that is pretty simple. While you learn your layered representation here, let's say there's X, and X is fed through layers, and there are hidden representations in between, you're trying to learn all these parameters, let's say this one here, W3. But at the beginning of training, everything is kind of prone to shifting around a lot. So when you change W1, that kind of changes the entire distribution of your hidden representations after the fact. So basically, whatever you learn for W3 is now already almost obsolete, because you've changed W1, and W3 was kind of assuming that its inputs would remain the same, because that's what you assume in machine learning: your input distribution is kind of the same. So that's why at the beginning of training, you see these kind of large variances, and with batch norm, this tends to go away. So that's pretty cool. They mainly show that they can reach the same accuracies as other training methods, but with much, much fewer steps, and they can use much higher learning rates than others because of that. So that's pretty cool. I encourage you to check out the rest of the paper. Use batch norm in your network. Sometimes it works, and sometimes it doesn't work, strangely enough. But I guess that's just a matter of experimentation. All right. That was it for me. Bye bye.
{
"start": 0,
"end": 5.3,
"text": " Hi, today we're looking at batch normalization. Accelerating deep network"
},
{
"start": 5.3,
"end": 12.76,
"text": " training by reducing internal covariate shift by Sergey Ioff and Christian"
},
{
"start": 12.76,
"end": 22.66,
"text": " Skiddeds. Yeah, not my best pronouncer."
},
{
"start": 22.66,
"end": 27.66,
"text": " Segedi. Close enough."
},
{
"start": 27.66,
"end": 30.66,
"text": " Alright, so this is a bit of an older paper and"
},
{
"start": 30.66,
"end": 35.66,
"text": " I think it's still good to look at it."
},
{
"start": 35.66,
"end": 39.66,
"text": " It's relevant and people just kind of"
},
{
"start": 39.66,
"end": 41.66,
"text": " throw batch normalization into networks"
},
{
"start": 41.66,
"end": 44.66,
"text": " and maybe don't really know what it's doing."
},
{
"start": 44.66,
"end": 47.66,
"text": " So let's look at it."
},
{
"start": 47.66,
"end": 50.66,
"text": " So what these people argue is that in a"
},
{
"start": 50.66,
"end": 53.66,
"text": " network usually you have structures like this."
},
{
"start": 53.66,
"end": 59.66,
"text": " So if something like that, it means that"
},
{
"start": 59.66,
"end": 61.66,
"text": " your loss kind of, this is a two layer network,"
},
{
"start": 61.66,
"end": 63.66,
"text": " your loss is a composition of the first"
},
{
"start": 63.66,
"end": 66.66,
"text": " layer on the input view with parameters"
},
{
"start": 66.66,
"end": 70.66,
"text": " theta 1 and the second layer with parameters"
},
{
"start": 70.66,
"end": 72.66,
"text": " theta 2. So conceptually that would look"
},
{
"start": 72.66,
"end": 74.66,
"text": " something like this. You have your input,"
},
{
"start": 74.66,
"end": 78.66,
"text": " maybe it's an image, right? And you put it"
},
{
"start": 78.66,
"end": 81.66,
"text": " through the network and it becomes some"
},
{
"start": 81.66,
"end": 83.66,
"text": " intermediate representation, right?"
},
{
"start": 83.66,
"end": 89.66,
"text": " That's X0, that's X1, or maybe we'll call it"
},
{
"start": 89.66,
"end": 93.66,
"text": " even H1, hidden representation, right?"
},
{
"start": 93.66,
"end": 96.66,
"text": " Then that becomes, then through the layer"
},
{
"start": 96.66,
"end": 101.66,
"text": " becomes H2 and so on, right? So this stuff here,"
},
{
"start": 101.66,
"end": 105.66,
"text": " these would be weight matrices, W1, W2,"
},
{
"start": 105.66,
"end": 109.66,
"text": " that transform the image into a new image"
},
{
"start": 109.66,
"end": 113.66,
"text": " or whatever. So what they're arguing is that"
},
{
"start": 113.66,
"end": 116.66,
"text": " well, if you only consider a single layer,"
},
{
"start": 116.66,
"end": 122.66,
"text": " like the first layer here, it's kind of the same"
},
{
"start": 122.66,
"end": 124.66,
"text": " if you only consider the second layer"
},
{
"start": 124.66,
"end": 127.66,
"text": " with the H1 now as the input, right?"
},
{
"start": 127.66,
"end": 130.66,
"text": " It's pretty natural to see each layer of the neural"
},
{
"start": 130.66,
"end": 133.66,
"text": " network is kind of like its own transformation,"
},
{
"start": 133.66,
"end": 137.66,
"text": " taking inputs and producing some outputs."
},
{
"start": 137.66,
"end": 141.66,
"text": " So what people usually do with the very first"
},
{
"start": 141.66,
"end": 145.66,
"text": " input here with your data in machine learning"
},
{
"start": 145.66,
"end": 148.66,
"text": " generally is so called whitening the data,"
},
{
"start": 148.66,
"end": 156.66,
"text": " which means that they have this over here."
},
{
"start": 156.66,
"end": 160.66,
"text": " Usually data is whitened, I can't find it,"
},
{
"start": 160.66,
"end": 164.66,
"text": " but what it means is you basically want to,"
},
{
"start": 164.66,
"end": 169.66,
"text": " if you have data, let's say here is a coordinated axis,"
},
{
"start": 169.66,
"end": 173.66,
"text": " you have 2D data, and you might want to do"
},
{
"start": 173.66,
"end": 176.66,
"text": " kind of a linear regression on it, and you have data"
},
{
"start": 176.66,
"end": 180.66,
"text": " that's kind of like that, right?"
},
{
"start": 180.66,
"end": 185.66,
"text": " It suits you to transform this data into, by,"
},
{
"start": 185.66,
"end": 188.66,
"text": " first of all, looking where its mean is,"
},
{
"start": 188.66,
"end": 191.66,
"text": " mean is about here, and subtracting that,"
},
{
"start": 191.66,
"end": 197.66,
"text": " so here, here, and then kind of dividing by"
},
{
"start": 197.66,
"end": 200.66,
"text": " its standard deviation in each direction,"
},
{
"start": 200.66,
"end": 202.66,
"text": " so there's a standard deviation here,"
},
{
"start": 202.66,
"end": 204.66,
"text": " and there is a standard deviation here."
},
{
"start": 204.66,
"end": 211.66,
"text": " So you would transform this data into something like,"
},
{
"start": 211.66,
"end": 217.66,
"text": " maybe something like this, so you see that the mean"
},
{
"start": 217.66,
"end": 225.66,
"text": " is now in the middle, and it's not so elongated anymore."
},
{
"start": 225.66,
"end": 229.66,
"text": " So you have a much easier time to kind of learn"
},
{
"start": 229.66,
"end": 232.66,
"text": " something on this data than on this data over here,"
},
{
"start": 232.66,
"end": 235.66,
"text": " simply because our classifiers usually tend to"
},
{
"start": 235.66,
"end": 240.66,
"text": " rely on inner products, and if you do an inner product here,"
},
{
"start": 240.66,
"end": 242.66,
"text": " you have one of these vectors here,"
},
{
"start": 242.66,
"end": 244.66,
"text": " and you do some inner product, it's always going to be"
},
{
"start": 244.66,
"end": 249.66,
"text": " far away from the mean, and thereby the inner products"
},
{
"start": 249.66,
"end": 252.66,
"text": " are going to be large no matter what, right?"
},
{
"start": 252.66,
"end": 255.66,
"text": " Whereas here, if you take a random one,"
},
{
"start": 255.66,
"end": 258.65999999999997,
"text": " and then another random, so if you take two random points here,"
},
{
"start": 258.65999999999997,
"end": 263.65999999999997,
"text": " there are two vectors from the mean are almost the same,"
},
{
"start": 263.65999999999997,
"end": 265.65999999999997,
"text": " whereas if you take two random points here,"
},
{
"start": 265.65999999999997,
"end": 269.65999999999997,
"text": " they tend to look uniformly in the directions,"
},
{
"start": 269.65999999999997,
"end": 271.65999999999997,
"text": " so it's kind of the sense we know that machine learning"
},
{
"start": 271.66,
"end": 274.66,
"text": " methods work better if we whiten the data first."
},
{
"start": 274.66,
"end": 277.66,
"text": " So these people ask, hey, why do we only do this"
},
{
"start": 277.66,
"end": 279.66,
"text": " at the very beginning, right?"
},
{
"start": 279.66,
"end": 286.66,
"text": " If each layer basically takes its input and learns something,"
},
{
"start": 286.66,
"end": 288.66,
"text": " each layer is basically a machine learning method,"
},
{
"start": 288.66,
"end": 293.66,
"text": " why don't we just whiten the data to every single layer,"
},
{
"start": 293.66,
"end": 297.66,
"text": " or every single subcomponent of a deep network?"
},
{
"start": 297.66,
"end": 300.66,
"text": " And that's the kind of basic step here."
},
{
"start": 300.66,
"end": 303.66,
"text": " So they argue how this has been kind of tried before,"
},
{
"start": 303.66,
"end": 306.66,
"text": " or what kind of methods you would usually get,"
},
{
"start": 306.66,
"end": 312.66,
"text": " and why these aren't so good, mainly because you kind of need"
},
{
"start": 312.66,
"end": 316.66,
"text": " to intermingle this whitening with training the network,"
},
{
"start": 316.66,
"end": 319.66,
"text": " and thereby if you just go about this naively,"
},
{
"start": 319.66,
"end": 325.66,
"text": " then you would kind of produce artifacts from training."
},
{
"start": 325.66,
"end": 331.66,
"text": " So that's this section here, where they argue that"
},
{
"start": 331.66,
"end": 335.66,
"text": " you can't really go about this super naively,"
},
{
"start": 335.66,
"end": 338.66,
"text": " but what they do isn't super complicated,"
},
{
"start": 338.66,
"end": 340.66,
"text": " but they just do it in a smart way."
},
{
"start": 340.66,
"end": 344.66,
"text": " So we'll jump directly to that."
},
{
"start": 344.66,
"end": 350.66,
"text": " What they say is, okay, let's look at what they call"
},
{
"start": 350.66,
"end": 353.66,
"text": " normalization via mini-batch statistics."
},
{
"start": 353.66,
"end": 359.66,
"text": " Let's say we have some d-dimensional input x,"
},
{
"start": 359.66,
"end": 363.66,
"text": " and we're just going to look at per dimension."
},
{
"start": 363.66,
"end": 370.66,
"text": " So we only care about per individual dimension normalization."
},
{
"start": 370.66,
"end": 374.66,
"text": " So what are we going to do?"
},
{
"start": 374.66,
"end": 377.66,
"text": " We're going to take the kth dimension,"
},
{
"start": 377.66,
"end": 382.66,
"text": " we're going to subtract from it the mean of the kth dimension."
},
{
"start": 382.66,
"end": 387.66,
"text": " Within a mini-batch, within a mini-batch of data."
},
{
"start": 387.66,
"end": 391.66,
"text": " So a mini-batch may be something like 32 examples,"
},
{
"start": 391.66,
"end": 393.66,
"text": " or 100 examples, or something like this."
},
{
"start": 393.66,
"end": 398.66,
"text": " And then we'll divide by the variance of that mini-batch."
},
{
"start": 398.66,
"end": 405.66,
"text": " So this is done over here in BASIC."
},
{
"start": 405.66,
"end": 408.66,
"text": " So you compute mu of the mini-batch,"
},
{
"start": 408.66,
"end": 416.66,
"text": " which is simply the empirical mean of the data at that particular layer."
},
{
"start": 416.66,
"end": 419.66,
"text": " And then you compute sigma squared b,"
},
{
"start": 419.66,
"end": 425.66,
"text": " which is simply the empirical estimate of the variance"
},
{
"start": 425.66,
"end": 429.66,
"text": " computed on that particular mini-batch."
},
{
"start": 429.66,
"end": 434.66,
"text": " And then you transform your data by subtracting that"
},
{
"start": 434.66,
"end": 437.66,
"text": " and by dividing it by this."
},
{
"start": 437.66,
"end": 446.66,
"text": " And this constant here is simply to prevent from dividing by two small values."
},
{
"start": 446.66,
"end": 450.66,
"text": " So you get like numerical problems."
},
{
"start": 450.66,
"end": 453.66,
"text": " So what does it do?"
},
{
"start": 453.66,
"end": 457.66,
"text": " It does basically what we did above."
},
{
"start": 457.66,
"end": 460.66,
"text": " But now what they say is, okay,"
},
{
"start": 460.66,
"end": 465.66,
"text": " we want to make sure that this transformation can potentially"
},
{
"start": 465.66,
"end": 469.66,
"text": " represent the identity, because sometimes,"
},
{
"start": 469.66,
"end": 474.66,
"text": " or like a natural, natural, if you had to do something with your input"
},
{
"start": 474.66,
"end": 476.66,
"text": " when giving it to the next layer,"
},
{
"start": 476.66,
"end": 482.66,
"text": " the very baseline is to do nothing to it, to do the identity transform."
},
{
"start": 482.66,
"end": 489.66,
"text": " But if you do this, you probably won't end up with the identity transform,"
},
{
"start": 489.66,
"end": 494.66,
"text": " except if the mean is exactly zero and the variance is exactly one."
},
{
"start": 494.66,
"end": 498.66,
"text": " So what they say is, okay,"
},
{
"start": 498.66,
"end": 502.66,
"text": " we'll also introduce two new parameters to this."
},
{
"start": 502.66,
"end": 508.66,
"text": " Here, this gamma and this beta here."
},
{
"start": 508.66,
"end": 512.6600000000001,
"text": " And these are learned, like other parameters in the network."
},
{
"start": 512.6600000000001,
"end": 515.6600000000001,
"text": " We learn the parameter gamma and beta."
},
{
"start": 515.6600000000001,
"end": 523.6600000000001,
"text": " And gamma and beta are simply a scalar that this transformed x is multiplied by."
},
{
"start": 523.66,
"end": 527.66,
"text": " And beta is simply a scalar that is then added to it."
},
{
"start": 527.66,
"end": 531.66,
"text": " So in each dimension of your hidden representation,"
},
{
"start": 531.66,
"end": 537.66,
"text": " you basically learn how to scale it and how to shift it,"
},
{
"start": 537.66,
"end": 540.66,
"text": " scale and shift, after you've done the normalization."
},
{
"start": 540.66,
"end": 546.66,
"text": " So first, you do the normalization."
},
{
"start": 546.66,
"end": 551.66,
"text": " First, you go from this type of data to this type of data."
},
{
"start": 551.66,
"end": 558.66,
"text": " And then you say, well, maybe it's actually more beneficial to have it not centered."
},
{
"start": 558.66,
"end": 564.66,
"text": " So that the network can actually learn then to transform this somewhere."
},
{
"start": 564.66,
"end": 568.66,
"text": " This might seem redundant, but it's really powerful,"
},
{
"start": 568.66,
"end": 573.66,
"text": " because what you're basically saying is that, okay,"
},
{
"start": 573.66,
"end": 578.66,
"text": " this probably isn't the best distribution."
},
{
"start": 578.66,
"end": 582.66,
"text": " This probably is better, but if the network,"
},
{
"start": 582.66,
"end": 586.66,
"text": " if the backpropagation algorithm or the training algorithm decides"
},
{
"start": 586.66,
"end": 589.66,
"text": " that this first representation was actually useful,"
},
{
"start": 589.66,
"end": 591.66,
"text": " it has the option of going back."
},
{
"start": 591.66,
"end": 598.66,
"text": " But it also has the option of going to any other kind of form of distribution."
},
{
"start": 598.66,
"end": 603.66,
"text": " So it's pretty powerful in terms of what it does."
},
{
"start": 603.66,
"end": 607.66,
"text": " It's not really correct here that it has the power to go to any distribution,"
},
{
"start": 607.66,
"end": 611.66,
"text": " because it's only kind of a per dimension scalar that it learns,"
},
{
"start": 611.66,
"end": 617.66,
"text": " but still, the potential to transform the distribution"
},
{
"start": 617.66,
"end": 622.66,
"text": " by these learned scalars is pretty big."
},
{
"start": 622.66,
"end": 625.66,
"text": " All right."
},
{
"start": 625.66,
"end": 628.66,
"text": " So basically, that's it."
},
{
"start": 628.66,
"end": 631.66,
"text": " That's the whole shebang."
},
{
"start": 631.66,
"end": 636.66,
"text": " You normalize your inputs to each layer by this formula,"
},
{
"start": 636.66,
"end": 643.66,
"text": " and then you introduce new parameters that you learn along with your network parameters."
},
{
"start": 643.66,
"end": 649.66,
"text": " So this kind of has some implications."
},
{
"start": 649.66,
"end": 656.66,
"text": " First of all, one implication is this here."
},
{
"start": 656.66,
"end": 660.66,
"text": " If you build a batch norm into your network,"
},
{
"start": 660.66,
"end": 666.66,
"text": " it kind of learns this plus beta, which is basically a bias parameter,"
},
{
"start": 666.66,
"end": 669.66,
"text": " if you think of a traditional kind of fully connected layer."
},
{
"start": 669.66,
"end": 673.66,
"text": " This isn't a fully connected layer because this scalar here is only per dimension,"
},
{
"start": 673.66,
"end": 677.66,
"text": " but the bias in a fully connected layer is also just per dimension."
},
{
"start": 677.66,
"end": 680.66,
"text": " So the beta is equal to a bias in a fully connected layer."
},
{
"start": 680.66,
"end": 693.66,
"text": " So if you have a batch normalization after a fully connected or convolutional layer,"
},
{
"start": 693.66,
"end": 697.66,
"text": " or anything that can or sometimes has a bias parameter,"
},
{
"start": 697.66,
"end": 701.66,
"text": " it's almost not worth it to kind of learn both."
},
{
"start": 701.66,
"end": 705.66,
"text": " So you would rather just only have the one from the batch normalization"
},
{
"start": 705.66,
"end": 710.66,
"text": " and leave and use the convolution or fully connected layer without a bias."
},
{
"start": 710.66,
"end": 712.66,
"text": " So that's kind of one implication."
},
{
"start": 712.66,
"end": 722.66,
"text": " Another implication is we have just lost the ability to have deterministic test time inference."
},
{
"start": 722.66,
"end": 727.66,
"text": " So much like dropout, which is kind of random dropping out of nodes,"
},
{
"start": 727.66,
"end": 733.66,
"text": " here we have quantities that depend on the mini-batch."
},
{
"start": 733.66,
"end": 738.66,
"text": " Not only the individual sample, but they actually depend on what other samples"
},
{
"start": 738.66,
"end": 743.66,
"text": " are randomly selected to be trained with that particular sample."
},
{
"start": 743.66,
"end": 751.66,
"text": " So that's kind of awkward if you want to have some deterministic reproducible thing at test time."
},
{
"start": 751.66,
"end": 754.66,
"text": " So what people do is..."
},
{
"start": 754.66,
"end": 760.66,
"text": " And here, this is discussed."
},
{
"start": 760.66,
"end": 771.66,
"text": " What people do is, while training, they use these quantities,"
},
{
"start": 771.66,
"end": 778.66,
"text": " the quantities we just discussed, but they keep kind of a running average over them."
},
{
"start": 778.66,
"end": 785.66,
"text": " So what I would do is in each mini-batch, I would compute this mini-batch mean and this mini-batch variance,"
},
{
"start": 785.66,
"end": 793.66,
"text": " and I would keep running averages of them."
},
{
"start": 793.66,
"end": 798.66,
"text": " And at test time, I'm going to plug in these running averages,"
},
{
"start": 798.66,
"end": 802.66,
"text": " so there's nothing dependent on the mini-batch anymore."
},
{
"start": 802.66,
"end": 807.66,
"text": " So that's a pretty neat trick, I think."
},
{
"start": 807.66,
"end": 812.66,
"text": " You can even imagine at the end of your network training,"
},
{
"start": 812.66,
"end": 819.66,
"text": " using these here to kind of fine-tune the weights to these exact parameters."
},
{
"start": 819.66,
"end": 826.66,
"text": " So that's one thing that you have to pay attention to."
},
{
"start": 826.66,
"end": 832.66,
"text": " So usually in neural network libraries, there are parameters you can set"
},
{
"start": 832.66,
"end": 836.66,
"text": " whether or not this network is in train mode or in test mode."
},
{
"start": 836.66,
"end": 843.66,
"text": " And depending on that, the batch norm layer will use the mini-batch statistics"
},
{
"start": 843.66,
"end": 849.66,
"text": " or will use the kind of over-dataset statistics."
},
{
"start": 849.66,
"end": 852.66,
"text": " Alright, the second thing is training."
},
{
"start": 852.66,
"end": 855.66,
"text": " So how do you actually train this thing?"
},
{
"start": 855.66,
"end": 857.66,
"text": " Because now, you can't just..."
},
{
"start": 857.66,
"end": 865.66,
"text": " We started with our multi-layer network up here."
},
{
"start": 865.66,
"end": 867.66,
"text": " F2, F1, right?"
},
{
"start": 867.66,
"end": 872.66,
"text": " First, I'm going to put my things through F1, and then I'm going to put my things through F2."
},
{
"start": 872.66,
"end": 876.66,
"text": " And the backpropagation here is quite easy."
},
{
"start": 876.66,
"end": 880.66,
"text": " So let me get rid of this."
},
{
"start": 880.66,
"end": 882.66,
"text": " The backprop here is quite easy."
},
{
"start": 882.66,
"end": 888.66,
"text": " You go to L, and maybe you want to derive it by theta 1."
},
{
"start": 888.66,
"end": 895.66,
"text": " So you first go to derive it by the hidden representation 1,"
},
{
"start": 895.66,
"end": 899.66,
"text": " and then the hidden representation 1 with respect to theta 1."
},
{
"start": 899.66,
"end": 904.66,
"text": " So the hidden representation would be whatever comes out of here."
},
{
"start": 904.66,
"end": 908.66,
"text": " H1, sorry, not I."
},
{
"start": 908.66,
"end": 911.66,
"text": " And so on. So you kind of chain rule your way through here."
},
{
"start": 911.66,
"end": 917.66,
"text": " But now in between these layers here, you have these batch norm things."
},
{
"start": 917.66,
"end": 926.66,
"text": " And so the authors discuss how we now do backpropagation in the face of these things."
},
{
"start": 926.66,
"end": 932.66,
"text": " So here is basically what they discuss."
},
{
"start": 932.66,
"end": 937.66,
"text": " It actually pays to have a graph of what's going on."
},
{
"start": 937.66,
"end": 941.66,
"text": " So here is x. This is the input to our layer."
},
{
"start": 941.66,
"end": 943.66,
"text": " So what do we compute from x?"
},
{
"start": 943.66,
"end": 950.66,
"text": " We compute mu, let's just call it mu, or mu B it's called here."
},
{
"start": 950.66,
"end": 953.66,
"text": " This is the mean of all the x's."
},
{
"start": 953.66,
"end": 962.66,
"text": " So this is x, xi until x, well, x1 until xn."
},
{
"start": 962.66,
"end": 964.66,
"text": " This is the mini-batch."
},
{
"start": 964.66,
"end": 971.66,
"text": " We compute the mean, and then from this and from this,"
},
{
"start": 971.66,
"end": 977.66,
"text": " we can compute this estimate of the variance. We need both."
},
{
"start": 977.66,
"end": 982.66,
"text": " So we now have the mean and the variance over the mini-batch."
},
{
"start": 982.66,
"end": 987.66,
"text": " So we're going to take one of these x's, just the i-th one,"
},
{
"start": 987.66,
"end": 1003.66,
"text": " and we're going to use this and this to compute x, what? Compute x, is it called hat?"
},
{
"start": 1003.66,
"end": 1006.66,
"text": " Yeah, probably. It's called x hat, right?"
},
{
"start": 1006.66,
"end": 1008.66,
"text": " Yeah, we saw about x hat."
},
{
"start": 1008.66,
"end": 1019.66,
"text": " So x hat i is xi minus mu B divided by sigma squared B,"
},
{
"start": 1019.66,
"end": 1023.66,
"text": " the square root of it plus this kind of little constant here."
},
{
"start": 1023.66,
"end": 1027.6599999999999,
"text": " We're going to leave away the little constant for clarity's sake."
},
{
"start": 1027.6599999999999,
"end": 1030.6599999999999,
"text": " Actually, it's in the calculations here."
},
{
"start": 1030.6599999999999,
"end": 1036.6599999999999,
"text": " So then we have a new parameter, gamma, right?"
},
{
"start": 1036.66,
"end": 1043.66,
"text": " We're going to use it and our x hat to compute, and also this beta here,"
},
{
"start": 1043.66,
"end": 1047.66,
"text": " to compute y hat."
},
{
"start": 1047.66,
"end": 1051.66,
"text": " Y or y, just y."
},
{
"start": 1051.66,
"end": 1056.66,
"text": " And of course this is i, this is i."
},
{
"start": 1056.66,
"end": 1060.66,
"text": " And this here is our final output of the layer."
},
{
"start": 1060.66,
"end": 1064.66,
"text": " You can see now the backpropagation paths if you go through here."
},
{
"start": 1064.66,
"end": 1068.66,
"text": " So the backpropagation path, if we have some loss coming in here,"
},
{
"start": 1068.66,
"end": 1073.66,
"text": " we backprop through yi, right?"
},
{
"start": 1073.66,
"end": 1080.66,
"text": " So here is the L, the loss to yi. That's here."
},
{
"start": 1080.66,
"end": 1087.66,
"text": " So if we want, for example, the backprop with respect to beta,"
},
{
"start": 1087.66,
"end": 1092.66,
"text": " what we do is we simply, and this is over the mini-batch of course,"
},
{
"start": 1092.66,
"end": 1095.66,
"text": " we simply backprop here through this path."
},
{
"start": 1095.66,
"end": 1101.66,
"text": " So in our formula for beta, there should be only mention yi."
},
{
"start": 1101.66,
"end": 1104.66,
"text": " And that's what we see here, right?"
},
{
"start": 1104.66,
"end": 1108.66,
"text": " In our formula for gamma, there should only be mention of yi."
},
{
"start": 1108.66,
"end": 1114.66,
"text": " So because the path leads only through yi."
},
{
"start": 1114.66,
"end": 1119.66,
"text": " Oh, no, I'm sorry. Actually, because of the,"
},
{
"start": 1119.66,
"end": 1122.66,
"text": " what I mean is of the derivative with respect to yi."
},
{
"start": 1122.66,
"end": 1128.66,
"text": " Of course, we also have to pay attention that this is multiplied here"
},
{
"start": 1128.66,
"end": 1133.66,
"text": " by this x hat i, where of course that's not the case when we just add something."
},
{
"start": 1133.66,
"end": 1143.66,
"text": " Because the derivative of an addition like x plus b with respect to b"
},
{
"start": 1143.66,
"end": 1150.66,
"text": " disregards x, whereas if it's x times b, it doesn't disregard x."
},
{
"start": 1150.66,
"end": 1156.66,
"text": " Alright, so if we, yeah, so you can go back."
},
{
"start": 1156.66,
"end": 1162.66,
"text": " So the interesting bit basically comes when we want to find out, okay, how?"
},
{
"start": 1162.66,
"end": 1166.66,
"text": " Because here is another layer, right?"
},
{
"start": 1166.66,
"end": 1169.66,
"text": " Down here somewhere, there is another layer."
},
{
"start": 1169.66,
"end": 1174.66,
"text": " And we basically want to know this input here to the next layer,"
},
{
"start": 1174.66,
"end": 1178.66,
"text": " how do we compute it in the face of this mess here?"
},
{
"start": 1178.66,
"end": 1181.66,
"text": " Because it's not so easy, right?"
},
{
"start": 1181.66,
"end": 1183.66,
"text": " So you have to see we have three paths here."
},
{
"start": 1183.66,
"end": 1188.66,
"text": " We go back through x, and let me get rid of these blue lines."
},
{
"start": 1188.66,
"end": 1195.66,
"text": " We go back through x hat directly to x."
},
{
"start": 1195.66,
"end": 1203.66,
"text": " We go one path is through here, and one path is through this mu."
},
{
"start": 1203.66,
"end": 1208.66,
"text": " So basically you have to compute derivatives with respect to sigma squared and mu."
},
{
"start": 1208.66,
"end": 1213.66,
"text": " And for that we need the derivative with respect to x hat."
},
{
"start": 1213.66,
"end": 1218.66,
"text": " So basically the way backprop works is you just find all paths from where you are"
},
{
"start": 1218.66,
"end": 1223.66,
"text": " to where you want to go, and then you kind of iteratively compute this."
},
{
"start": 1223.66,
"end": 1228.66,
"text": " So this one here is the easiest."
},
{
"start": 1228.66,
"end": 1231.66,
"text": " As you see here they did it on top."
},
{
"start": 1231.66,
"end": 1240.66,
"text": " Well first they did this one, which is simply going from y to x hat i."
},
{
"start": 1240.66,
"end": 1245.66,
"text": " Then they go from x hat i to sigma squared,"
},
{
"start": 1245.66,
"end": 1252.66,
"text": " which simply involves kind of the reverse operations of how you got it."
},
{
"start": 1252.66,
"end": 1259.66,
"text": " This is simply a derivative formula here of the division by square root."
},
{
"start": 1259.66,
"end": 1266.66,
"text": " Then you can use this quantity here to compute that."
},
{
"start": 1266.66,
"end": 1271.66,
"text": " So basically you just go in reverse of how you computed the operations in the first place."
},
{
"start": 1271.66,
"end": 1275.66,
"text": " We said we needed mu b to compute sigma squared b."
},
{
"start": 1275.66,
"end": 1282.66,
"text": " Now we need the derivative with respect to sigma squared b in order to compute the derivative to mu b."
},
{
"start": 1282.66,
"end": 1288.66,
"text": " And once you have that, and you see the addition here,"
},
{
"start": 1288.66,
"end": 1297.66,
"text": " the add here is the fact that two things contribute to mu b."
},
{
"start": 1297.66,
"end": 1303.66,
"text": " So two paths lead to mu b."
},
{
"start": 1303.66,
"end": 1311.66,
"text": " One path is from here, and one path is through here."
},
{
"start": 1311.66,
"end": 1314.66,
"text": " So here there should be a green."
},
{
"start": 1314.66,
"end": 1321.66,
"text": " Since two paths, you have two components to your derivative and you add each of them."
},
{
"start": 1321.66,
"end": 1323.66,
"text": " So that's how that's going to be."
},
{
"start": 1323.66,
"end": 1331.66,
"text": " And then this here, with respect to this x here, we have three paths."
},
{
"start": 1331.66,
"end": 1334.66,
"text": " Because we have three arrows going out of xi."
},
{
"start": 1334.66,
"end": 1338.66,
"text": " One here, one here, and one here."
},
{
"start": 1338.66,
"end": 1341.66,
"text": " So you have to take into account all of them."
},
{
"start": 1341.66,
"end": 1345.66,
"text": " This one is pretty easy, that's the first one."
},
{
"start": 1345.66,
"end": 1354.66,
"text": " Then the second one goes through this mu b, which we've already computed,"
},
{
"start": 1354.66,
"end": 1359.66,
"text": " and the third one goes through the sigma, which we've also already computed."
},
{
"start": 1359.66,
"end": 1368.66,
"text": " And these are added, because you have to add all the paths in the backprop algorithm."
},
{
"start": 1368.66,
"end": 1376.66,
"text": " Maybe we'll do a video on backprop later to really dive into how this works."
},
{
"start": 1376.66,
"end": 1379.66,
"text": " And finally, they compute these, these we've already discussed."
},
{
"start": 1379.66,
"end": 1384.66,
"text": " So in essence, the whole thing is differentiable."
},
{
"start": 1384.66,
"end": 1391.66,
"text": " You just have to kind of pay attention how to do it, but the whole thing is differentiable."
},
{
"start": 1391.66,
"end": 1400.66,
"text": " And thereby, you can basically backprop through a network that has these batch normal layers built in."
},
{
"start": 1400.66,
"end": 1403.66,
"text": " So that's pretty cool."
},
{
"start": 1403.66,
"end": 1407.66,
"text": " I just want to quickly jump over to the results."
},
{
"start": 1407.66,
"end": 1415.66,
"text": " Keep in mind, this paper is from 2015, so networks weren't that big back then."
},
{
"start": 1415.66,
"end": 1419.66,
"text": " We didn't know that much about training yet, but the interesting thing is they basically discovered,"
},
{
"start": 1419.66,
"end": 1426.66,
"text": " look, we can have drastically fewer steps in order to reach the same accuracies."
},
{
"start": 1426.66,
"end": 1431.66,
"text": " And these are kind of the activations of the network over the course of training."
},
{
"start": 1431.66,
"end": 1436.66,
"text": " So without patch norm, you see, especially at the beginning, there's large fluctuations in the activations."
},
{
"start": 1436.66,
"end": 1443.66,
"text": " And because they use batch norm now, there's no such thing."
},
{
"start": 1443.66,
"end": 1448.66,
"text": " So basically, the reason for that is pretty simple."
},
{
"start": 1448.66,
"end": 1455.66,
"text": " While you learn and you learn your layered representation here, let's say there's X and X is fed through layers,"
},
{
"start": 1455.66,
"end": 1459.66,
"text": " and there's hidden representations, each in between."
},
{
"start": 1459.66,
"end": 1462.66,
"text": " So you're trying to learn all these parameters."
},
{
"start": 1462.66,
"end": 1470.66,
"text": " Let's say this one here, W3, but at the beginning of training, everything is kind of prone to shifting around a lot."
},
{
"start": 1470.66,
"end": 1479.66,
"text": " So when you change W1, that kind of changes the entire distribution of your hidden representations after the fact."
},
{
"start": 1479.66,
"end": 1487.66,
"text": " So basically, whatever you learn for W3 is now already almost obsolete because you've changed W1 basically,"
},
{
"start": 1487.66,
"end": 1494.66,
"text": " and W3 was kind of assuming that its inputs would remain the same because that's what you assume in machine learning."
},
{
"start": 1494.66,
"end": 1497.66,
"text": " Your input distribution is kind of the same."
},
{
"start": 1497.66,
"end": 1503.66,
"text": " So that's why at the beginning of training, you see these kind of large variances."
},
{
"start": 1503.66,
"end": 1506.66,
"text": " And with batch norm, this tends to go away."
},
{
"start": 1506.66,
"end": 1508.66,
"text": " So that's pretty cool."
},
{
"start": 1508.66,
"end": 1516.66,
"text": " They also kind of show, they mainly show that they can reach the same accuracies as other training methods,"
},
{
"start": 1516.66,
"end": 1522.66,
"text": " but with much, much fewer steps, and they can go much higher learning rates than others."
},
{
"start": 1522.66,
"end": 1525.66,
"text": " So because of that."
},
{
"start": 1525.66,
"end": 1527.66,
"text": " So that's pretty cool."
},
{
"start": 1527.66,
"end": 1530.66,
"text": " I encourage you to check out the rest of the paper."
},
{
"start": 1530.66,
"end": 1531.66,
"text": " Use batch norm in your network."
},
{
"start": 1531.66,
"end": 1532.66,
"text": " Sometimes it works."
},
{
"start": 1532.66,
"end": 1536.66,
"text": " It sometimes doesn't work, strangely enough."
},
{
"start": 1536.66,
"end": 1540.66,
"text": " But I guess that's just a matter of experimentation."
},
{
"start": 1540.66,
"end": 1542.66,
"text": " All right. That was it for me."
},
{
"start": 1542.66,
"end": 1547.66,
"text": " Bye bye."
}
] |
-9evrZnBorM | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | [
"Science & Technology"
] | [
"bert",
"deep learning",
"attention",
"unsupervised",
"nlp",
"transformer",
"squad",
"wordpiece",
"embeddings",
"language",
"language modeling",
"attention layers",
"bidirectional",
"elmo",
"natural language processing",
"machine learning",
"word vectors",
"pretrained",
"fine tuning"
] | https://arxiv.org/abs/1810.04805
Abstract:
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.
BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7 (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%.
Authors:
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova | Hello everyone, today we're looking at BERT pre-training of deep bidirectional transformers for language understanding by Jacob Devlin and Min-Wai Chung, Kenton Lee, Kristina Tatanova. These are people from Google AI language, so you're about to see the most hyped model currently. So basically BERT is a model that takes as an input language, so token sequences, and outputs various things. So it can be made to do various things, almost any NLP task, with basically little training because the BERT model comes pre-trained on a very large corpus, and we're going to see how that's done. Alright, so the paper introduces basically the current state of the art of language models, and they say, okay, what they want to do new is they want to do bidirectional training. We're going to go down here and see their comparison. So here they compare three models, and these are representative of three types of models. So first, here is, for example, the OpenAI transformer. So this is one of the classic transformer models. We've talked about transformers before in the attention is all you need video. So what a transformer does is it uses attention, and for those who forgot what attention is, if you have a token sequence A, B, C, D, E, then a classic model to use that would be an LSTM. So the LSTM would go here. It would have a vector representation, a hidden state, and then it would take this A, it would take this hidden state and compute a new hidden state, and then it would go on and take the B and incorporate this into the hidden state. The hidden state kind of always stays the same size, but the recurrent model will update the hidden state as it goes over the input sequence. So this is one way of dealing with language, but people have kind of done another way, and that's the attention-based mechanism, where basically for each of these you compute a vector independently of each other. So each one has a vector representation, and then you have a vector representation of what you want, which is called an attention head, and you can have multiple of these. But in the simplest case, let's just say we are looking for the subject in this sentence. So A, B, C, D, E is a sentence, and one of the words is the subject of the sentence. Then we could have a vector here that's called a query vector. So these are called values V, and this is called a query Q, and then these vectors are the same size. I know I'm very poor at this. You're going to compute the inner product with each of these. So the inner product you want to do... Okay, I already screwed this up. You're actually computing two vectors for each token. But this is not too important for this step. One is the key, and one is the value. This is called the key, and you have your query Q, and you compute the inner products actually with the key. The values aren't too important for what I want to demonstrate, but you compute key with query, and that gives you basically... For each key, it's going to give you an output. So for this A, B, C, D, E, you're going to have this much inner product, this much inner product, this much, this much, this much inner product. So after maybe a softmax, you have a nice distribution, and then you can say, aha, here, this is the biggest alignment of the particular key with my query, and my query is which one is the subject. Of course, you're going to train all these queries and keys producing procedures. So this is a tension mechanism, and if you then want... 
That's where the value comes in. If your query is not only which one is the subject, but it's actually a generic query that, okay, I'm going to extract some information from some token that I'm going to use later, then you would actually take B and say, ah, B is the best one. Okay, I'm going to take the value of B. You're basically going to take a weighted average of the values according to these values here. So this is very shortly what attention is. If you want a lengthy explanation, go to the Attention is All You Need video. So OpenAI GPT uses attention here, and it's a left-to-right transformer. That's what it says here. And what that means is it goes also step-by-step, but in each step it uses attention. So here is the input tokens, and as you can see, it goes in this direction. So each one of the... And these are multiple layers of attention, so you can also layer these, of course. So each one of the attention intermediate steps can only attend to whatever is on to the left of it. You can see this here. So it goes step-by-step, and it goes left to right. So it can take the sequence in as a left-to-right input. Basically what that means is whenever you interpret a particular token, your context is only to the left of that token. You don't know what's coming yet. It's like when you read a sentence from left to right, but then as humans, unconsciously, we probably go and at the end of the sentence kind of make sense of the thing as a whole. But here the model is forced to make sense of the thing only from whatever is to the left of it. So that's a basic limitation of these left-to-right models. Then there's another approach, which is called ELMO, which has been popular recently as a substitute for word vectors. So if you know word vectors, word vectors are basically the kind of first stage in most language processing tasks, where for each word, say the cat sat on something, for each word you have a big giant table, and for each word you associate a vector of fixed size dimension. So you place every word in a vector space, and these vectors you pre-compute with something like word2vec or GloVe. That gives you a nice way to basically deal with these words in a canonical way. You can pre-train the word vectors. That's already nice. But people have realized, okay, words can have multiple meanings, and words can kind of slightly change meaning depending on words around them and so on. So what ELMO does is ELMO uses two LSTMs. One LSTM goes into this direction, one LSTM goes into this direction. And basically a single LSTM, as we saw before, it takes in the input sequence one by one. So here E1, then E2, then E3, then E4. It produces hidden states at each step. It produces a hidden state that is a result of a previous hidden state and the current token. And then what it says is, okay, now these hidden states here, basically, these are now the embeddings of the token E1, E3, and so on. These are the embeddings. So the word vectors, as to say, are no longer just one vector per word. So they're not in isolation anymore. But basically you need the entire sequence to compute the word vectors as a result of this LSTM. This is more powerful because it can give individual words multiple or each word has kind of a unique embedding depending on the surrounding words. You would still hope that a given word would have similar embedding or similar word vector all across the language. But you can kind of fine tune it to the particular sentence it is in. 
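Going back to the attention mechanism from a minute ago, here is a toy NumPy sketch of a single attention head: one inner product of the query with every key, a softmax over those scores, and then a weighted average of the values. The dimensions and the random vectors are made up purely for illustration; this is not BERT's actual multi-head implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attend(query, keys, values):
    """query: (d,), keys: (n, d), values: (n, d_v) -> weighted average of the values."""
    scores = keys @ query / np.sqrt(len(query))   # one inner product per token
    weights = softmax(scores)                     # a distribution over the n tokens
    return weights @ values, weights

# five tokens A..E, each with a key vector and a value vector
rng = np.random.default_rng(0)
keys = rng.normal(size=(5, 8))
values = rng.normal(size=(5, 8))
query = keys[1] + 0.1 * rng.normal(size=8)        # a query that "looks for" token B

context, weights = attend(query, keys, values)
print(np.round(weights, 2))                        # most of the mass sits on token B
```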
And also you can completely change its meaning if it's kind of a word that has a completely new meaning in that sentence. So basically it uses two LSTMs, one, as I said, going forward and one going backward. These also have multiple layers and so on. And each of these produces one such hidden vector per token: the forward LSTM produces one, the backward LSTM produces another one, and you simply concatenate the two to get the final embedding, the final word vector, for each token. So the fundamental limitation here is that you have information from the left and you have information from the right, so unlike the left-to-right transformer here, you actually can condition on the left context and the right context. But it's very shallow, because it's simply a concatenation of the left-facing LSTM and the right-facing LSTM, and these ultimately, intrinsically, have nothing to do with each other. You simply concatenate the two things: the left-facing LSTM still can only see to the left, and the right-facing LSTM still can only see to the right. So you basically have two half-blind models, and then you kind of concatenate. So it's still suboptimal, because what you want is a single model to output your word vectors, or to interpret the language, that can look at both the left and the right at the same time, and then incorporate information from both of them simultaneously, and not just at the end by concatenation. This is what BERT does. So BERT here, and this is kind of what they claim is the new contribution: in each layer of the model, for a particular token, they look at all of the context, so every other token in the input, they look at that. Basically it seems kind of obvious, but there are actually reasons why these other models don't do this. So this is the entire point of BERT. At each layer, this is still an attention mechanism, by the way, so the mechanism of attention here and here is exactly the same, or almost the same. They actually keep it close on purpose in order to compare. But now we have attention not only to the left, but also to the right, to everything. So why do these other models, for example the OpenAI transformer, only look to the left? That's because somehow you need a task to train on, right? And most of the time, especially if you want unsupervised training, you're going to do something like language modeling. And in language modeling, what you have is a sentence A, B, C, D, and you're asking what comes next here. So by the definition of the task, you can only look to the left. That's just how the task works. So it makes sense that these other models do this, because that's what they pre-train on, whereas BERT has a different pre-training, because it has to look to the left and the right. And the other thing is what you want to use the model for. The good thing is, if you go left to right, you can use the model for generating language in the same vein: if you have A, B, C, D, and the model is trained to produce the next character only looking to the left.
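That left-to-right restriction is exactly an attention mask. Here is a small sketch of the difference, purely my own illustration: a causal model only lets position i attend to positions up to i, while BERT-style attention leaves the score matrix unmasked.

```python
import numpy as np

def attention_weights(scores, causal):
    """scores: (n, n) raw query-key scores; returns row-wise softmaxed attention weights."""
    if causal:
        # a left-to-right model: token i may only look at tokens 0..i
        future = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(future, -1e9, scores)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

scores = np.random.randn(5, 5)
print(np.round(attention_weights(scores, causal=True), 2))   # upper triangle is ~0
print(np.round(attention_weights(scores, causal=False), 2))  # every token sees every token
```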
So with such a left-to-right model you can then say: what's the next character? The model says E. And then you can feed the same thing back into the model and ask, OK, what's now the next character? And so on, until you get to G. So that's pretty useful: if you only look to the left, you can actually use the model for generating language, which is something you can't do with BERT, or at least it's not really obvious how to do it with BERT. I know people are investigating into producing entire sequences with BERT, but as yet it's not super clear how to do this with this model. That being said, the model is pretty good at pretty much everything else. So let's jump into how they train. Let's see where we are here. They train using masked language modeling, so I want to actually go into that first. What they do is they basically replace some words by the mask token. They don't have a nice illustration for it, but they do have an example here. If you just look at kind of the top sentence here: the man went to [MASK] store. Don't worry about the SEP token and so on, just this. The man went to [MASK] store, and the model is simply asked to predict what's here, which word is there. So it needs to incorporate information from the right and from the left to do this. So that's basically how you train it. They simply drop out some of the words some of the time, and they have different techniques. So you can clearly tell a lot of work has gone into kind of fine-tuning everything in this model, like how to train it and so on. So let's say we don't always do this, sometimes we do this other thing, and sometimes we do that. And there are several ways of biasing this model. But basically you do this masked language modeling. And then, because they also want to evaluate on, let's say, entire sequence tasks, or tasks that span multiple sentences, what they do is a second pre-training task at the same time, as you can see here, where they feed two sentences. So that's the first sentence, that's the second sentence. They feed these two sentences as an input. So at first they have this token, and these separate the sentences. And then they ask the model to predict a label, is-next. And is-next is true if the second sentence follows the first sentence, so if it's like a logical continuation. And the way you do this unsupervised is really easy. You take a big giant corpus, and you take a sentence for the first sentence. And then 50 percent of the time you take the next sentence in the corpus, and the label is true. And 50 percent of the time you take some random sentence. Here you see, for example: the man [MASK] to the store. And the next sentence is: penguin [MASK] are flightless birds. And that's kind of a random sentence, so the model is asked to predict, well, that's probably not the next sentence following this first sentence. So you do these two tasks. You pre-train, and you can do this unsupervised. You don't need supervised data for that, you just need a corpus. And they do this for a long time with a lot of data. And the model itself is giant. It has 24, I think, of these transformer layers. So it's giant, and then you kind of pre-train this model. Here is an illustration of some extra things that they do. So this is the input up here. The first token is this CLS token, which is kind of the start token. And then this is the first sentence. Then the SEP token is the separator of the two sentences.
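Just to make these two pre-training tasks concrete, here is a rough sketch of how such training examples could be built from a plain corpus. The 15 percent masking rate and the 50/50 split follow the description above; the whitespace tokenization and the example sentences are made up for illustration and ignore the extra biasing tricks mentioned in the paper.

```python
import random

MASK = "[MASK]"

def make_masked_lm_example(tokens, mask_prob=0.15):
    """Replace some tokens with [MASK]; only masked positions carry a prediction target."""
    inputs, targets = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            inputs.append(MASK)
            targets.append(tok)
        else:
            inputs.append(tok)
            targets.append(None)
    return inputs, targets

def make_next_sentence_example(corpus, i):
    """50% of the time take the true next sentence, 50% of the time a random one."""
    first = corpus[i]
    if random.random() < 0.5:
        second, is_next = corpus[i + 1], True
    else:
        second, is_next = random.choice(corpus), False
    return ["[CLS]"] + first + ["[SEP]"] + second + ["[SEP]"], is_next

corpus = [s.split() for s in [
    "the man went to the store",
    "he bought a gallon of milk",
    "penguins are flightless birds",
]]
pair, is_next = make_next_sentence_example(corpus, 0)
print(make_masked_lm_example(pair), is_next)
```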
And this down here is the second sentence. And then again, we'll get to these hashtags in a second. But first, they say, OK, we have the token embeddings. So they kind of start with the original concept of word vectors at the very basis, because you need to start with actually going into a vector space to use these models, but they then kind of transform these through the transformer layers. They also use segment embeddings. Segment embeddings, as you can see here, are simply kind of a binary label, E_A being the label for the first sentence and E_B being the label for the second sentence. This is just so the model can differentiate which one is the first and which one is the second, because it's kind of hard for a transformer architecture to learn that the SEP tokens separate the sentences. So you kind of want to help it. And the last thing is positional embeddings. We've already talked about these in Attention is All You Need. Since the model is a transformer, it doesn't go step by step, one token after the other, so it's kind of hard for the model to make out how far two things are apart from each other, how far apart two tokens are, whether they're neighbors or whether they're really far apart. And these positional embeddings kind of help the model decide if two tokens are close to each other in the input, if they're just neighbors, or if they are actually really far apart. All right. So this is how the first input is constructed out of these embeddings, and then it's fed through these transformer layers, as we saw, with the masked LM task and the is-next task. I want to quickly get to these hashtags, what they mean. So the input here is separated into so-called word pieces. And what that is: in language processing tasks, you have kind of a choice, you have a choice of how to tokenize your input. So let's look at a sentence here. Subscribe to PewDiePie. So this is a sentence, and the sentence is rather, let's say, word-wise complicated. So why might a language model have a problem with this? So first you need to tokenize this sentence. What most people do is they say, okay, here are the word boundaries, we're going to tokenize this into three segments: subscribe, to, PewDiePie. Okay, so three things, and each of these now needs a word vector associated with it. Now the thing is, the word vectors, let's assume you have them pre-trained or something. In any case, you need a big table, a big, big table, and this goes down here, where for each word, a, the, to, I, you, you have a vector associated with it, right? So you need to keep this in your model. And as you know, English has a lot of words. So this table is going to be really big. And the problem is, how do you make this table, right? Okay, you could make it kind of dynamically and so on, but in general you're going to create this table with all the words you know, and that's going to be too big, because English has so many words. And then you can say, all right, we'll only take the top whatever that is used in 90% of the language, which turns out to be kind of Pareto distributed. So it turns out to be like 5% of the words that are used in 90% of the language. So you just take these, but then you're going to have a problem. Okay, here, 'to' is not a problem. Why not? 'To' is used super often, we're going to have it at the very top somewhere, and we're going to have a vector for it. 'Subscribe' is already not so common, right?
So maybe you have a vector for it somewhere down. But then PewDiePie is a name. So there's not even a word like that, that's not even a word. So what people usually do is they have this out of vocabulary token, and then they have a vector associated somewhere here with the out of vocabulary token. It's whatever, I don't know what it is, I just know that I don't have it in my vocabulary, and the model kind of deals with that. That's not really ideal, especially if you then want to generate language. Also, your model tends to generate out of vocabulary tokens if you allow that, and if you don't allow that, you have a problem during training. So it's all kind of messy. What's the alternative? The alternative is to go character level. So let's look at character level. In character level, you say, all right, my words are obviously made of characters, and I'm just going to split at each character, right? And here the white space can be a character too. So I'm going to split at each character, and then I'm simply going to have one vector for each character. And there's only like 26 of those, so I can keep 26 vectors. But this tends to be rather problematic, because a character by itself having a meaning that can be encapsulated by a vector is kind of shady, because a character by itself usually doesn't have a meaning. So what's the solution here? The solution is to go in between. The solution is to say, well, let's actually go for word pieces. And you can kind of think of them as syllables, but you can make them in a way that you have a fixed size vocabulary. Say, okay, I have 4,000 entry places in my big table, I can afford a 4,000-size table. So first of all, I'm going to have a vector for each character, A, B, C, D, E, and so on. But then I only have 26, so I have 3,000-some left. I'm going to have also the most common words. Now, a is already here, but maybe I can have to and from. And so the most common words, they also get there. And then for the other things, I'm going to split the words, maybe into sub and scribe. So these are two syllables, and sub can be kind of a prefix to many things, and I only need one vector for that: I have sub here. And then the rest, scribe, scribe is by the way also a word, so I can have that. But if scribe weren't in my vocabulary, I could divide scribe up into characters and then describe them at the character level. So basically I can mix and match here. Sub, that I have. And then scribe, if I don't have it, if I don't have any of the pieces, I can just use the characters. So this would be sub and then S-C-R-I-B-E. So these would be the tokens that I work with now as my input. And these tags here, so this is what would happen to PewDiePie: you could simply split along each character. So basically this is kind of an interpolation between the token model and the character model. And it's really neat and it usually works quite well. As I said, the hashtag sign here simply means that these two have originally been one word, and now this ing here is just a word piece token. This is a really good example of where word pieces come in. Because play by itself is a word, and instead of having an own vector for playing, I can divide it into play, which already has a meaning. And presumably playing and play would have similar meanings.
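Here is a small sketch of the greedy longest-match-first idea behind this kind of word piece tokenization: cover each word with the longest vocabulary entries available, mark non-initial pieces with ##, and fall back to single characters when nothing longer matches. The tiny vocabulary below is invented for the example; a real WordPiece vocabulary is learned from data rather than written by hand, and the exact algorithm used for BERT may differ in details.

```python
# Tiny hand-made vocabulary: a few whole words and pieces, plus every
# lowercase character (with and without the ## continuation marker).
vocab = {"play", "##ing", "sub", "##scribe", "to"}
vocab |= {c for c in "abcdefghijklmnopqrstuvwxyz"}
vocab |= {"##" + c for c in "abcdefghijklmnopqrstuvwxyz"}

def wordpiece(word):
    """Greedily split a word into the longest matching vocabulary pieces."""
    word = word.lower()
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        match = None
        while end > start:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece          # non-initial pieces carry the ## marker
            if piece in vocab:
                match = piece
                break
            end -= 1
        if match is None:                     # character not in the vocabulary at all
            pieces.append("[UNK]")
            start += 1
        else:
            pieces.append(match)
            start = end
    return pieces

print(wordpiece("playing"))     # ['play', '##ing']
print(wordpiece("subscribe"))   # ['sub', '##scribe']
print(wordpiece("PewDiePie"))   # falls back to single-character pieces
```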
So it makes sense to have play as the token singled out here and then ing as a suffix. It also makes sense to have a token for that in my table. And then I simply have these two tokens here. That probably already gives me more information than simply having the word playing. By the way, you should subscribe to PewDiePie. Just FYI. Alright, let's go on. So we do word piece tokenization, we do the masked language model, we do the next sentence prediction pre-training. What do we have now? We have a model that can really, really well predict some masked words. Now how do we use it? Now they evaluate on these, I believe it's 11 tasks, 11 different tasks... or is it... I don't know how many it is. It is a lot, with the same model. So this pre-trained model, they now claim, can be fine-tuned to do all of these tasks. And it's like state of the art on every one of them. It's crazy. So how do they fine-tune it? So the easiest tasks are the so-called sequence level tasks, where you basically have the sequence and you're supposed to predict one class label for the entire sequence. So here we have the sentence pair classification tasks. For example, the task we saw before, the isNext task. There are more sophisticated tasks that you need kind of supervised data for. And so with the supervised data you'd have a class label that you could train on. So what you do is... let's look at one of them. MNLI. They had it up here. Nope. Here. Multi-genre natural language inference. And that's an entailment classification task. So given a pair of sentences, the goal is to predict whether the second sentence is an entailment, contradiction or neutral with respect to the first one. Alright, two sentences, and you're supposed to predict which one of these three labels it is. So you put the two sentences here. BERT can already take two sentences as an input, as we saw. The embeddings... the A and B embeddings and the position embeddings are left out of the picture here, but they would be added to it. And these would be the embeddings for it. And then you pass this through the BERT model and this is the final layer. And what they do is they simply take the embedding, the final embedding for this first one, corresponding to this start token. And they simply put a single layer of classification, so basically a logistic regression, on it. And that's how they then get a class label. So let's say this gives you here a hidden vector of 512 dimensions. 512. And you have three labels to output here, one, two, three. You simply need a matrix of size 512 by 3. And these are the weights that you would then have to train in addition to BERT. So BERT is pre-trained and you only have to learn these weights now. Of course they also kind of fine-tune the entire BERT model, but that's really fine-tuning. The only thing you have to learn from scratch is this, these weights here. First of all it's pretty neat because you can be very quick at learning new tasks, because you simply start from the pre-trained BERT and then you go and learn a single classifier layer on top. And astonishingly this works extremely well for these tasks. A bit of a more challenging task is this here. SQuAD is a question answering task. And we're going to jump down here where they explain the task. So you have an input question, and the input question is: where do water droplets collide with ice crystals to form precipitation?
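Stepping back to the sentence-classification head for a second, here is a sketch of what that fine-tuning setup looks like: take the final hidden vector of the first ([CLS]) token and push it through one newly initialised weight matrix of size hidden-by-number-of-labels, then a softmax. The 512-dimensional hidden size and the three MNLI labels mirror the numbers used in the explanation above (the real BERT models use larger hidden sizes), and the encoder output here is just a random stand-in for the pre-trained BERT.

```python
import numpy as np

hidden_size, num_labels = 512, 3                 # three MNLI labels: entailment, contradiction, neutral
rng = np.random.default_rng(0)
W = rng.normal(scale=0.02, size=(hidden_size, num_labels))   # the only weights learned from scratch
b = np.zeros(num_labels)

def classify(cls_vector):
    """cls_vector: final-layer output for the [CLS] token, shape (hidden_size,)."""
    logits = cls_vector @ W + b
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                   # softmax over the three labels

cls_vector = rng.normal(size=hidden_size)        # stand-in for the pre-trained encoder's output
print(classify(cls_vector))
```

During fine-tuning, W and b are trained from scratch while the pre-trained encoder weights are only nudged slightly, which is why adapting to a new task can be so fast.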
And you have an input paragraph, which is kind of a paragraph from a Wikipedia page. And you know that the answer is somewhere in this paragraph, right? The data set is constructed such that the answer is in the paragraph. So the input paragraph reads: precipitation forms as smaller droplets coalesce via collision with other raindrops or ice crystals within a cloud. So the question is, where do water droplets collide to form precipitation? The answer here is within a cloud. So that's this thing here. So usually what SQuAD models do is they predict the span. They predict where's the start of the answer and where's the end of the answer. That's also what BERT is trained to do here. So in order to do this, what you do is, again, you already have the ability to input two sequences. So we've trained with two sentences, but here they simply say, oh well, the first sequence is going to be the question, our second sequence is going to be the entire paragraph from Wikipedia. And then for each output, for the output of each token, remember there's as many outputs as there are inputs because the transformer will always transform to the same length of sequence. For each token in the output, we classify it: is this token the start token, or is this token the end token, or is this token none of all? Now, what they do effectively is that here each one of the outputs, each one, is a vector. And, as we said at the beginning when finding out which one's the subject, here we have two queries, namely query one, which is, is this the start? Let's call it query S. And query E is, is this the end token? So these are two queries, and I'm going to compute the inner product of each query with each of these outputs. And over my sequence here, this is going to give me a distribution. So for the start, maybe this token is not much and this token is a lot and so on. There's five tokens. And for the end: not so much, not so probable, not so probable, very probable, not so probable. So what you're going to get from these inner products is a distribution over which one's the start and which one's the end. And you're going to say, okay, this one's probably the start and this one's probably the end. So that's how you predict the span. And again, what you ultimately have to learn is these queries here. So not that much. And then there is named entity recognition. In named entity recognition, you have a sentence and you're supposed to recognize named entities. Like up here, we saw subscribe to PewDiePie, and the named entity would be PewDiePie. Right, this is a name and you're supposed to recognize that this is a name. And they do it the same way that they do SQuAD, basically, or a similar way. They basically, for each of the outputs here, simply classify whether or not it's part of an entity. So what they have to do is simply train that. They also have different labels for which kind of entity it is: this is like a person, and this is no entity. So if you have 10 labels, then for each thing you would classify it into one of 10 classes. You need a classifier from the input size to the number of classes. That's all you have to train in addition to fine tuning BERT itself. All right. So they evaluate on all of these tasks, and they get super duper numbers on all of them here. BERT large wins on pretty much everything. And this model is big. Just saying. And they trained it on TPUs, which is available in kind of Google Cloud infrastructure.
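Here is a small sketch of that span-prediction head: two learned vectors, a start query and an end query, are dotted with every token's final hidden vector, and a softmax over the sequence gives the distributions for where the answer starts and where it ends. All shapes and values below are illustrative stand-ins, not the real fine-tuned weights.

```python
import numpy as np

hidden_size, seq_len = 512, 10
rng = np.random.default_rng(0)
token_outputs = rng.normal(size=(seq_len, hidden_size))  # stand-in for BERT's final-layer outputs
start_query = rng.normal(size=hidden_size)               # learned during fine-tuning
end_query = rng.normal(size=hidden_size)                 # learned during fine-tuning

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

start_probs = softmax(token_outputs @ start_query)   # P(token i starts the answer span)
end_probs = softmax(token_outputs @ end_query)        # P(token i ends the answer span)
start, end = int(start_probs.argmax()), int(end_probs.argmax())
print("predicted answer span:", start, "to", end)
```

The named entity recognition head works along the same lines, except that every token output gets its own small classifier over the entity labels instead of the two start and end queries.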
And it's trained on a lot of data. So in a way, it's kind of expected that you would outperform, but it's very surprising that you outperform everyone else by this much. And they've done a lot of kind of ablation studies where they show that it's really due to the fact that they do this left and right context. They take into account the left and right context of a given token when doing the attention, and that's why it's better. So here, for example, they compare the BERT base model and they say, OK, what if we don't do the NSP, the next sentence prediction task? Then you can see the numbers already kind of drop on these tasks. And what if we then additionally do only left to right training? Then the numbers drop pretty seriously again. You see, sometimes here, for example, you see a pretty serious drop in the number, also here. So there really seems to be a real value in doing this kind of left and right context attention. So it's not just about the model size and the amount of data. That's basically what they show here. And it's really cool that the paper actually shows this, because usually people have an idea and they throw a lot more resources at it and they're better, and you'd never know why. So it's pretty cool that they actually show it. All right. So this is all I have to say about this paper. Check it out. The models are pre-trained, you can actually download them, and you can fine-tune them yourself, for your own task. And they're pretty, pretty powerful. There are also smaller pre-trained models for if you don't have a TPU. So check these out as well. And thanks a lot for listening.
{
"start": 0,
"end": 14,
"text": " Hello everyone, today we're looking at BERT pre-training of deep bidirectional transformers for language understanding by Jacob Devlin and Min-Wai Chung, Kenton Lee, Kristina Tatanova."
},
{
"start": 14,
"end": 23,
"text": " These are people from Google AI language, so you're about to see the most hyped model currently."
},
{
"start": 23,
"end": 34,
"text": " So basically BERT is a model that takes as an input language, so token sequences, and outputs various things."
},
{
"start": 34,
"end": 49,
"text": " So it can be made to do various things, almost any NLP task, with basically little training because the BERT model comes pre-trained on a very large corpus, and we're going to see how that's done."
},
{
"start": 49,
"end": 67,
"text": " Alright, so the paper introduces basically the current state of the art of language models, and they say, okay, what they want to do new is they want to do bidirectional training."
},
{
"start": 67,
"end": 81,
"text": " We're going to go down here and see their comparison. So here they compare three models, and these are representative of three types of models."
},
{
"start": 81,
"end": 99,
"text": " So first, here is, for example, the OpenAI transformer. So this is one of the classic transformer models. We've talked about transformers before in the attention is all you need video."
},
{
"start": 99,
"end": 118,
"text": " So what a transformer does is it uses attention, and for those who forgot what attention is, if you have a token sequence A, B, C, D, E, then a classic model to use that would be an LSTM."
},
{
"start": 118,
"end": 136,
"text": " So the LSTM would go here. It would have a vector representation, a hidden state, and then it would take this A, it would take this hidden state and compute a new hidden state, and then it would go on and take the B and incorporate this into the hidden state."
},
{
"start": 136,
"end": 147,
"text": " The hidden state kind of always stays the same size, but the recurrent model will update the hidden state as it goes over the input sequence."
},
{
"start": 147,
"end": 167,
"text": " So this is one way of dealing with language, but people have kind of done another way, and that's the attention-based mechanism, where basically for each of these you compute a vector independently of each other."
},
{
"start": 167,
"end": 182,
"text": " So each one has a vector representation, and then you have a vector representation of what you want, which is called an attention head, and you can have multiple of these."
},
{
"start": 182,
"end": 201,
"text": " But in the simplest case, let's just say we are looking for the subject in this sentence. So A, B, C, D, E is a sentence, and one of the words is the subject of the sentence. Then we could have a vector here that's called a query vector."
},
{
"start": 201,
"end": 214,
"text": " So these are called values V, and this is called a query Q, and then these vectors are the same size. I know I'm very poor at this. You're going to compute the inner product with each of these."
},
{
"start": 214,
"end": 232,
"text": " So the inner product you want to do... Okay, I already screwed this up. You're actually computing two vectors for each token. But this is not too important for this step."
},
{
"start": 232,
"end": 245,
"text": " One is the key, and one is the value. This is called the key, and you have your query Q, and you compute the inner products actually with the key."
},
{
"start": 245,
"end": 258,
"text": " The values aren't too important for what I want to demonstrate, but you compute key with query, and that gives you basically... For each key, it's going to give you an output."
},
{
"start": 258,
"end": 275,
"text": " So for this A, B, C, D, E, you're going to have this much inner product, this much inner product, this much, this much, this much inner product."
},
{
"start": 275,
"end": 290,
"text": " So after maybe a softmax, you have a nice distribution, and then you can say, aha, here, this is the biggest alignment of the particular key with my query, and my query is which one is the subject."
},
{
"start": 290,
"end": 301,
"text": " Of course, you're going to train all these queries and keys producing procedures. So this is a tension mechanism, and if you then want... That's where the value comes in."
},
{
"start": 301,
"end": 314,
"text": " If your query is not only which one is the subject, but it's actually a generic query that, okay, I'm going to extract some information from some token that I'm going to use later,"
},
{
"start": 314,
"end": 319,
"text": " then you would actually take B and say, ah, B is the best one. Okay, I'm going to take the value of B."
},
{
"start": 319,
"end": 325,
"text": " You're basically going to take a weighted average of the values according to these values here."
},
{
"start": 325,
"end": 334,
"text": " So this is very shortly what attention is. If you want a lengthy explanation, go to the Attention is All You Need video."
},
{
"start": 334,
"end": 346,
"text": " So OpenAI GPT uses attention here, and it's a left-to-right transformer. That's what it says here."
},
{
"start": 346,
"end": 351,
"text": " And what that means is it goes also step-by-step, but in each step it uses attention."
},
{
"start": 351,
"end": 356,
"text": " So here is the input tokens, and as you can see, it goes in this direction."
},
{
"start": 356,
"end": 363,
"text": " So each one of the... And these are multiple layers of attention, so you can also layer these, of course."
},
{
"start": 363,
"end": 375,
"text": " So each one of the attention intermediate steps can only attend to whatever is on to the left of it."
},
{
"start": 375,
"end": 386,
"text": " You can see this here. So it goes step-by-step, and it goes left to right. So it can take the sequence in as a left-to-right input."
},
{
"start": 386,
"end": 394,
"text": " Basically what that means is whenever you interpret a particular token, your context is only to the left of that token."
},
{
"start": 394,
"end": 399,
"text": " You don't know what's coming yet. It's like when you read a sentence from left to right,"
},
{
"start": 399,
"end": 408,
"text": " but then as humans, unconsciously, we probably go and at the end of the sentence kind of make sense of the thing as a whole."
},
{
"start": 408,
"end": 416,
"text": " But here the model is forced to make sense of the thing only from whatever is to the left of it."
},
{
"start": 416,
"end": 420,
"text": " So that's a basic limitation of these left-to-right models."
},
{
"start": 420,
"end": 430,
"text": " Then there's another approach, which is called ELMO, which has been popular recently as a substitute for word vectors."
},
{
"start": 430,
"end": 440,
"text": " So if you know word vectors, word vectors are basically the kind of first stage in most language processing tasks,"
},
{
"start": 440,
"end": 452,
"text": " where for each word, say the cat sat on something, for each word you have a big giant table,"
},
{
"start": 452,
"end": 457,
"text": " and for each word you associate a vector of fixed size dimension."
},
{
"start": 457,
"end": 465,
"text": " So you place every word in a vector space, and these vectors you pre-compute with something like word2vec or GloVe."
},
{
"start": 465,
"end": 472,
"text": " That gives you a nice way to basically deal with these words in a canonical way."
},
{
"start": 472,
"end": 475,
"text": " You can pre-train the word vectors. That's already nice."
},
{
"start": 475,
"end": 479,
"text": " But people have realized, okay, words can have multiple meanings,"
},
{
"start": 479,
"end": 484,
"text": " and words can kind of slightly change meaning depending on words around them and so on."
},
{
"start": 484,
"end": 489,
"text": " So what ELMO does is ELMO uses two LSTMs."
},
{
"start": 489,
"end": 494,
"text": " One LSTM goes into this direction, one LSTM goes into this direction."
},
{
"start": 494,
"end": 501,
"text": " And basically a single LSTM, as we saw before, it takes in the input sequence one by one."
},
{
"start": 501,
"end": 504,
"text": " So here E1, then E2, then E3, then E4."
},
{
"start": 504,
"end": 508,
"text": " It produces hidden states at each step."
},
{
"start": 508,
"end": 514,
"text": " It produces a hidden state that is a result of a previous hidden state and the current token."
},
{
"start": 514,
"end": 529,
"text": " And then what it says is, okay, now these hidden states here, basically, these are now the embeddings of the token E1, E3, and so on."
},
{
"start": 529,
"end": 531,
"text": " These are the embeddings."
},
{
"start": 531,
"end": 539,
"text": " So the word vectors, as to say, are no longer just one vector per word."
},
{
"start": 539,
"end": 541,
"text": " So they're not in isolation anymore."
},
{
"start": 541,
"end": 548,
"text": " But basically you need the entire sequence to compute the word vectors as a result of this LSTM."
},
{
"start": 548,
"end": 560,
"text": " This is more powerful because it can give individual words multiple or each word has kind of a unique embedding depending on the surrounding words."
},
{
"start": 560,
"end": 570,
"text": " You would still hope that a given word would have similar embedding or similar word vector all across the language."
},
{
"start": 570,
"end": 574,
"text": " But you can kind of fine tune it to the particular sentence it is in."
},
{
"start": 574,
"end": 581,
"text": " And also you can completely change its meaning if it's kind of a word that has a completely new meaning in that sentence."
},
{
"start": 581,
"end": 587,
"text": " So basically it uses two LSTMs, one, as I said here, forward, one backward."
},
{
"start": 587,
"end": 589,
"text": " These also have multipliers and so on."
},
{
"start": 589,
"end": 594,
"text": " And each of these produce one such hidden vector per token."
},
{
"start": 594,
"end": 605,
"text": " And you simply concatenate the two from the LSTM on the left produces one, this LSTM on the right produces maybe here another one."
},
{
"start": 605,
"end": 615,
"text": " And you simply concatenate the two to get the final embedding, the final word vector for each token."
},
{
"start": 615,
"end": 627,
"text": " So the fundamental limitation here is that this is kind of you have information from the left end, you have information from the right."
},
{
"start": 627,
"end": 635,
"text": " So other than here the original transformer, you actually have you actually can condition on the left context and the right context."
},
{
"start": 635,
"end": 644,
"text": " But it's very it's very shallow because it's simply a concatenation of the left facing LSTM and the concatenation of the right facing LSTM."
},
{
"start": 644,
"end": 650,
"text": " And these ultimately intrinsically they have nothing to do with each other."
},
{
"start": 650,
"end": 661,
"text": " So you simply concatenate the two things that the left facing LSTM still can only see to the left and the right facing LSTM still can only see to the right."
},
{
"start": 661,
"end": 667,
"text": " So you basically have two half blind models and then you kind of concatenate."
},
{
"start": 667,
"end": 689,
"text": " So the it's still suboptimal because of what you want is you want a single model to output your word vectors or to interpret the language that can look at both the left and the right at the same time and then incorporate information from both of them simultaneously and not just at the end by concatenation."
},
{
"start": 689,
"end": 691,
"text": " This is what BERT does."
},
{
"start": 691,
"end": 697,
"text": " So BERT here and this is kind of what they claim is the new contribution."
},
{
"start": 697,
"end": 701,
"text": " BERT at each in each layer here of the model."
},
{
"start": 701,
"end": 704,
"text": " The the let's look at this."
},
{
"start": 704,
"end": 709,
"text": " And for a particular token, they look at all of the context."
},
{
"start": 709,
"end": 717,
"text": " So every every other token in the in the input, they look at that."
},
{
"start": 717,
"end": 731,
"text": " And so the the basically it seems kind of it seems kind of obvious, but it's it's actually there's reasons why these other models don't do this."
},
{
"start": 731,
"end": 748,
"text": " But so this is the entire point of BERT is at each layer in this in this transformer architecture is still an attention mechanism, by the way, so that there's there's the mechanism of attention here and here is exactly the same or almost the same."
},
{
"start": 748,
"end": 752,
"text": " They actually keep it close on purpose in order to compare."
},
{
"start": 752,
"end": 761,
"text": " But now we have attention not only to the left, but also to the right to everything."
},
{
"start": 761,
"end": 768,
"text": " Right. So why do these other model whether, for example, the OpenAI transformer only look to the left."
},
{
"start": 768,
"end": 772,
"text": " That's because somehow you need a task to train on."
},
{
"start": 772,
"end": 781,
"text": " Right. And most of the time, if you especially if you want unsupervised training, you going to do something like language modeling."
},
{
"start": 781,
"end": 791,
"text": " And language modeling, what you have is a sentence A, B, C, D, and you're asking what comes next here."
},
{
"start": 791,
"end": 797,
"text": " Right. So by by the definition of the task, you can only look to the left."
},
{
"start": 797,
"end": 803,
"text": " That's that's just how these like how the task works."
},
{
"start": 803,
"end": 818,
"text": " So it makes sense that that these other models kind of do this because they pre train on this number has a different pre training because they can they can only they have to look to the left and the right."
},
{
"start": 818,
"end": 822,
"text": " And the other thing is what you want to use the model for."
},
{
"start": 822,
"end": 830,
"text": " So the good thing if you if you go left to right, you can use the model now for generating language in the same vein."
},
{
"start": 830,
"end": 838,
"text": " If if you have a B, C, D, and you ask and the model is trained to produce the next character only looking to the left."
},
{
"start": 838,
"end": 848,
"text": " Right. Then you can you can say what's the next character of the model says E and then you can feed the same thing into the model and say OK, what's now the next character?"
},
{
"start": 848,
"end": 853,
"text": " Well, says what's now the next character G."
},
{
"start": 853,
"end": 866,
"text": " So there's pretty useful if you only look to the left, you can actually use the model then for generating language, which is something you can't do with BERT or it's not it's not really obvious now how to do it with BERT."
},
{
"start": 866,
"end": 875,
"text": " People are I know people are investigating into language producing producing entire sequences with BERT."
},
{
"start": 875,
"end": 881,
"text": " But as yet, it's not super clear how to do this with this model."
},
{
"start": 881,
"end": 885,
"text": " That being said, the model is pretty good at pretty much everything else."
},
{
"start": 885,
"end": 889,
"text": " So let's jump in to how they train."
},
{
"start": 889,
"end": 892,
"text": " They train. Let's see where we are here."
},
{
"start": 892,
"end": 898,
"text": " They train using masked basically masked language modeling."
},
{
"start": 898,
"end": 906,
"text": " So I want to actually go into that first mask language modeling."
},
{
"start": 906,
"end": 915,
"text": " What they do is they basically replace some words by the mask token and they don't have a good."
},
{
"start": 915,
"end": 917,
"text": " They don't have a nice."
},
{
"start": 917,
"end": 920,
"text": " All right. They have they have one here."
},
{
"start": 920,
"end": 922,
"text": " All right."
},
{
"start": 922,
"end": 927,
"text": " Here, if you just look at kind of the top sentence here."
},
{
"start": 927,
"end": 930,
"text": " The man went to mask store."
},
{
"start": 930,
"end": 936,
"text": " Don't don't don't worry about the set and so on. Just this."
},
{
"start": 936,
"end": 943,
"text": " The man went to mask store and the model simply asked to predict what's here, which word is there."
},
{
"start": 943,
"end": 948,
"text": " So it needs to incorporate information from the right and from the left to do this."
},
{
"start": 948,
"end": 951,
"text": " So that's basically how you train it."
},
{
"start": 951,
"end": 958,
"text": " They simply drop out some of the words some of the time and they have different techniques."
},
{
"start": 958,
"end": 966,
"text": " So you can clearly tell a lot of work has gone into kind of fine tuning everything in this model, like how to train it and so on."
},
{
"start": 966,
"end": 968,
"text": " So let's say we don't always do this."
},
{
"start": 968,
"end": 971,
"text": " Sometimes we do this other thing and sometimes we do that."
},
{
"start": 971,
"end": 973,
"text": " And there's several ways of biasing this model."
},
{
"start": 973,
"end": 977,
"text": " But basically you do this masked language modeling."
},
{
"start": 977,
"end": 986,
"text": " And then because they also want to evaluate on, let's say, entire sequence tasks or tasks that span multiple sentences."
},
{
"start": 986,
"end": 995,
"text": " What they do is the second pre-training task at the same time, as you can see here, where they feed two sentences."
},
{
"start": 995,
"end": 998,
"text": " So that's the first sentence. That's the second sentence."
},
{
"start": 998,
"end": 1001,
"text": " They feed these two sentences as an input."
},
{
"start": 1001,
"end": 1006,
"text": " So at first they have this token and these separate the sentences."
},
{
"start": 1006,
"end": 1011,
"text": " And then they ask the model to predict a label is next."
},
{
"start": 1011,
"end": 1018,
"text": " And is next is true if the second sentence follows the first sentence."
},
{
"start": 1018,
"end": 1020,
"text": " So if it's like a logical continuation."
},
{
"start": 1020,
"end": 1023,
"text": " And the way you do this on supervised is really easy."
},
{
"start": 1023,
"end": 1029,
"text": " You take a big giant corpus and you take a sentence for the first sentence."
},
{
"start": 1029,
"end": 1035,
"text": " And then 50 percent of the time you take the next sentence in the corpus and the label is true."
},
{
"start": 1035,
"end": 1040,
"text": " And 50 percent of the time you take some random sentence."
},
{
"start": 1040,
"end": 1049,
"text": " Here you say, for example, the man mask to the store."
},
{
"start": 1049,
"end": 1056,
"text": " And the next sentence is penguin mask or flightless birds."
},
{
"start": 1056,
"end": 1059,
"text": " And that's kind of a random sentence."
},
{
"start": 1059,
"end": 1061,
"text": " So the model is asked to predict."
},
{
"start": 1061,
"end": 1066,
"text": " Well, that's probably not the next sentence following this first sentence."
},
{
"start": 1066,
"end": 1068,
"text": " So you do these two tasks."
},
{
"start": 1068,
"end": 1071,
"text": " You pre-train and you can do this on supervised."
},
{
"start": 1071,
"end": 1073,
"text": " You don't need supervised data for that."
},
{
"start": 1073,
"end": 1075,
"text": " You just need a corpus."
},
{
"start": 1075,
"end": 1080,
"text": " And they do this for a long time with a lot of data."
},
{
"start": 1080,
"end": 1082,
"text": " And the model itself is giant."
},
{
"start": 1082,
"end": 1086,
"text": " It has 24, I think, of these transformer layers."
},
{
"start": 1086,
"end": 1088,
"text": " So it's giant."
},
{
"start": 1088,
"end": 1092,
"text": " And then you kind of pre-train this model."
},
{
"start": 1092,
"end": 1097,
"text": " Here is an illustration of some extra things."
},
{
"start": 1097,
"end": 1103,
"text": " So what they do is they first."
},
{
"start": 1103,
"end": 1105,
"text": " This is the input up here."
},
{
"start": 1105,
"end": 1110,
"text": " So the first token is this CLS token, which is kind of the start token."
},
{
"start": 1110,
"end": 1113,
"text": " And then this is the first sentence."
},
{
"start": 1113,
"end": 1118,
"text": " Then the set is the separator of two sentences."
},
{
"start": 1118,
"end": 1120,
"text": " And this is the second sentence."
},
{
"start": 1120,
"end": 1125,
"text": " And then again, we'll get to these hashtags in a second."
},
{
"start": 1125,
"end": 1129,
"text": " But first, they say, OK, first we have the token embeddings."
},
{
"start": 1129,
"end": 1136,
"text": " So they kind of start with the original concept of word vectors at the very basis"
},
{
"start": 1136,
"end": 1143,
"text": " because you need to start with actually going into a vector space to use these models."
},
{
"start": 1143,
"end": 1149,
"text": " But they then kind of transform these through the transformer layers."
},
{
"start": 1149,
"end": 1151,
"text": " They also use segment embeddings."
},
{
"start": 1151,
"end": 1156,
"text": " Segment embeddings, as you can see here, is simply kind of a binary label."
},
{
"start": 1156,
"end": 1163,
"text": " E, A being the label for the first sentence and E, B being the label for the second sentence."
},
{
"start": 1163,
"end": 1168,
"text": " So just the model can differentiate which one is the first and which one is the second"
},
{
"start": 1168,
"end": 1172,
"text": " because it's kind of hard to learn for a transformer architecture"
},
{
"start": 1172,
"end": 1176,
"text": " that the set tokens kind of separate the sentences."
},
{
"start": 1176,
"end": 1178,
"text": " So you kind of want to help it."
},
{
"start": 1178,
"end": 1181,
"text": " And the last thing is positional embeddings."
},
{
"start": 1181,
"end": 1185,
"text": " And we've already talked about these in Attention is All You Need."
},
{
"start": 1185,
"end": 1191,
"text": " This is where you can kind of, the model, since it's a transformer,"
},
{
"start": 1191,
"end": 1195,
"text": " it doesn't go step by step. It doesn't go one, done, done, done, done."
},
{
"start": 1195,
"end": 1201,
"text": " So it's kind of hard for the model to make out how far two things are apart from each other,"
},
{
"start": 1201,
"end": 1204,
"text": " how far two tokens, if they're neighbors or if they're really far apart."
},
{
"start": 1204,
"end": 1212,
"text": " And these positional embeddings kind of help the model decide if two tokens are close to each other in input,"
},
{
"start": 1212,
"end": 1218,
"text": " if they're just neighbors or if they are actually really far apart."
},
{
"start": 1218,
"end": 1226,
"text": " All right. So this is how the kind of first input is constructed out of these embeddings"
},
{
"start": 1226,
"end": 1230,
"text": " and then it's fed through these transformer layers, as we saw,"
},
{
"start": 1230,
"end": 1234,
"text": " with the mask-dllm task and the is-next task."
},
{
"start": 1234,
"end": 1240,
"text": " I want to quickly get to these hashtags, what they mean."
},
{
"start": 1240,
"end": 1247,
"text": " So the input here is separated into word pieces, so-called word pieces."
},
{
"start": 1247,
"end": 1252,
"text": " And what that is, is so in language processing tasks, you have kind of a choice."
},
{
"start": 1252,
"end": 1259,
"text": " You have a choice of how to tokenize your input."
},
{
"start": 1259,
"end": 1264,
"text": " So let's look at a sentence here."
},
{
"start": 1264,
"end": 1275,
"text": " Subscribe to PewDiePie."
},
{
"start": 1275,
"end": 1281,
"text": " So this is a sentence and the sentence is rather, let's say, word-wise complicated."
},
{
"start": 1281,
"end": 1285,
"text": " So why might a language model have a problem with this?"
},
{
"start": 1285,
"end": 1288,
"text": " So first you need to tokenize this sentence."
},
{
"start": 1288,
"end": 1293,
"text": " So what most people do is they say, okay, here are the word boundaries."
},
{
"start": 1293,
"end": 1296,
"text": " We're going to tokenize this into three segments."
},
{
"start": 1296,
"end": 1299,
"text": " First is subscribe to PewDiePie."
},
{
"start": 1299,
"end": 1305,
"text": " Okay, so three things and each of these now needs a word vector associated with it."
},
{
"start": 1305,
"end": 1313,
"text": " Now the thing is, the word vectors, let's assume you have them pre-trained or something."
},
{
"start": 1313,
"end": 1319,
"text": " In any case, you need a big table, a big, big table, and this goes down here,"
},
{
"start": 1319,
"end": 1330,
"text": " where for each word, a, the, to, I, you, you have a vector associated with it, right?"
},
{
"start": 1330,
"end": 1334,
"text": " So you need to keep this in your model."
},
{
"start": 1334,
"end": 1339,
"text": " And as you know, English has a lot of words here."
},
{
"start": 1339,
"end": 1344,
"text": " So this table is going to be really big."
},
{
"start": 1344,
"end": 1350,
"text": " And the problem is how do you make this table, right?"
},
{
"start": 1350,
"end": 1353,
"text": " Okay, you could make it kind of dynamically and so on,"
},
{
"start": 1353,
"end": 1358,
"text": " but in general you're going to create this table with all the words you know,"
},
{
"start": 1358,
"end": 1361,
"text": " and that's going to be too big because English has so many words."
},
{
"start": 1361,
"end": 1366,
"text": " And then you can say, all right, we'll only take the top,"
},
{
"start": 1366,
"end": 1370,
"text": " whatever is used in 90% of the language,"
},
{
"start": 1370,
"end": 1373,
"text": " which turns out to be this kind of burrito distributed."
},
{
"start": 1373,
"end": 1379,
"text": " So it turns out to be like 5% of the words are used in 90% of the language."
},
{
"start": 1379,
"end": 1382,
"text": " So you just take these, but then you're going to have the problem."
},
{
"start": 1382,
"end": 1384,
"text": " Okay, here, two, two is not a problem."
},
{
"start": 1384,
"end": 1388,
"text": " Why not? Two is used super often."
},
{
"start": 1388,
"end": 1392,
"text": " We're going to have it at the very top somewhere, and we're going to have a vector for it."
},
{
"start": 1392,
"end": 1398,
"text": " Subscribe is already, it's not so common, right?"
},
{
"start": 1398,
"end": 1402,
"text": " So maybe you have a word for it somewhere down."
},
{
"start": 1402,
"end": 1405,
"text": " But then PewDiePie is a name."
},
{
"start": 1405,
"end": 1411,
"text": " So there is no, there's not even a word like, that's not even a word."
},
{
"start": 1411,
"end": 1415,
"text": " It's just, so what you usually do,"
},
{
"start": 1415,
"end": 1420,
"text": " what people usually do is they have this out of vocabulary token,"
},
{
"start": 1420,
"end": 1425,
"text": " and then they have a vector associated somewhere here with the out of vocabulary token."
},
{
"start": 1425,
"end": 1428,
"text": " Is it whatever? And I don't know what it is."
},
{
"start": 1428,
"end": 1432,
"text": " I just know that I don't have it in my vocabulary, and the model kind of deals with that."
},
{
"start": 1432,
"end": 1436,
"text": " That's kind of, it's not really ideal,"
},
{
"start": 1436,
"end": 1439,
"text": " especially if you then want to generate language."
},
{
"start": 1439,
"end": 1442,
"text": " Also, your model tends to generate out of vocabulary tokens."
},
{
"start": 1442,
"end": 1445,
"text": " If you allow that, if you don't allow that, you have a problem during training."
},
{
"start": 1445,
"end": 1448,
"text": " So it's all kind of messy."
},
{
"start": 1448,
"end": 1452,
"text": " What's the alternative? The alternative is to go character level."
},
{
"start": 1452,
"end": 1455,
"text": " So let's look at character level."
},
{
"start": 1455,
"end": 1462,
"text": " In character level, you say, all right, my words are obviously made of characters."
},
{
"start": 1462,
"end": 1467,
"text": " And characters, I'm just going to split at each character, right?"
},
{
"start": 1467,
"end": 1471,
"text": " And here the white space can be a character too."
},
{
"start": 1471,
"end": 1473,
"text": " So I'm going to split at each character,"
},
{
"start": 1473,
"end": 1478,
"text": " and then I'm simply going to have one vector for each character."
},
{
"start": 1478,
"end": 1482,
"text": " And there's only like 20 something, six of those."
},
{
"start": 1482,
"end": 1486,
"text": " And so I can keep 26 vectors."
},
{
"start": 1486,
"end": 1493,
"text": " But this tends to be rather problematic because a character by itself having a meaning"
},
{
"start": 1493,
"end": 1499,
"text": " that can be encapsulated by a vector is kind of shady"
},
{
"start": 1499,
"end": 1503,
"text": " because a character by itself usually doesn't mean any, doesn't have a meaning."
},
{
"start": 1503,
"end": 1508,
"text": " So what's the solution here? The solution is to go in between."
},
{
"start": 1508,
"end": 1513,
"text": " The solution is to say, well, let's actually go for word pieces."
},
{
"start": 1513,
"end": 1517,
"text": " And you can kind of think of them as syllables,"
},
{
"start": 1517,
"end": 1524,
"text": " but you can split, you can make them in a way that you have a fixed size vocabulary."
},
{
"start": 1524,
"end": 1530,
"text": " Say, okay, I have 4,000 entry places in my big table."
},
{
"start": 1530,
"end": 1534,
"text": " I can afford 4,000 size table."
},
{
"start": 1534,
"end": 1541,
"text": " So first of all, I'm going to have for each character, A, B, C, D, E, and so on."
},
{
"start": 1541,
"end": 1542,
"text": " I'm going to have a vector."
},
{
"start": 1542,
"end": 1546,
"text": " But then I only have 26. I have 3,000 some left."
},
{
"start": 1546,
"end": 1549,
"text": " I'm going to have also the most common words."
},
{
"start": 1549,
"end": 1555,
"text": " Now, A is already here, but maybe I can have to and from."
},
{
"start": 1555,
"end": 1558,
"text": " And so the most common words, they also get there."
},
{
"start": 1558,
"end": 1566,
"text": " And then for the other things, I'm going to split the words maybe in sub scribe."
},
{
"start": 1566,
"end": 1571,
"text": " So these are two syllables and sub can be kind of a prefix to many things."
},
{
"start": 1571,
"end": 1576,
"text": " And I only need then one, one."
},
{
"start": 1576,
"end": 1580,
"text": " So I have sub here, sub. I only need one vector for that."
},
{
"start": 1580,
"end": 1586,
"text": " And then the rest, if scribe, scribe is by the way also a word, so I can have that."
},
{
"start": 1586,
"end": 1593,
"text": " But if scribe weren't in my vocabulary, I can divide scribe then up into characters"
},
{
"start": 1593,
"end": 1595,
"text": " and then describe them with the character level."
},
{
"start": 1595,
"end": 1597,
"text": " So basically I can mix and match here."
},
{
"start": 1597,
"end": 1600,
"text": " I can sub, that's, I have that."
},
{
"start": 1600,
"end": 1602,
"text": " And then scribe, I don't have it."
},
{
"start": 1602,
"end": 1606,
"text": " I don't have any of the pieces, so I can just use the character."
},
{
"start": 1606,
"end": 1615,
"text": " So this would be sub and then S-C-R-I-B-E."
},
{
"start": 1615,
"end": 1622,
"text": " So these would be the tokens that I work with now as my input."
},
{
"start": 1622,
"end": 1627,
"text": " And these tags here, so this is what would happen to PewDiePie."
},
{
"start": 1627,
"end": 1632,
"text": " You could simply split along each character."
},
{
"start": 1632,
"end": 1640,
"text": " So you basically, this is kind of an interpolation between the token model and the character model."
},
{
"start": 1640,
"end": 1647,
"text": " And it's really neat and it usually works quite well."
},
{
"start": 1647,
"end": 1654,
"text": " As I said, the hashtag sign here simply means that these two have originally been one word."
},
{
"start": 1654,
"end": 1658,
"text": " And now this in here is just a word piece token."
},
{
"start": 1658,
"end": 1662,
"text": " This is a really good example where word piece come in."
},
{
"start": 1662,
"end": 1669,
"text": " Because play by itself is a word and I can make play in instead of having an own vector for that."
},
{
"start": 1669,
"end": 1672,
"text": " I can divide it into play, which already has a meaning."
},
{
"start": 1672,
"end": 1676,
"text": " And presumably play in and play would have similar meanings."
},
{
"start": 1676,
"end": 1684,
"text": " So it makes sense to have play as the token singled out here and then ing as a suffix."
},
{
"start": 1684,
"end": 1688,
"text": " Also makes sense to have a token for that in my table."
},
{
"start": 1688,
"end": 1690,
"text": " And then I simply have these two tokens here."
},
{
"start": 1690,
"end": 1697,
"text": " That probably already gives me more information than simply having the word playing."
},
{
"start": 1697,
"end": 1703,
"text": " By the way, you should subscribe to PewDiePie."
},
{
"start": 1703,
"end": 1706,
"text": " Just FYI."
},
{
"start": 1706,
"end": 1710,
"text": " Alright, let's go on."
},
{
"start": 1710,
"end": 1714,
"text": " So we do word piece tokenization."
},
{
"start": 1714,
"end": 1716,
"text": " We do the masked language model."
},
{
"start": 1716,
"end": 1719,
"text": " We do the next sentence prediction pre-training."
},
{
"start": 1719,
"end": 1721,
"text": " What do we have now?"
},
{
"start": 1721,
"end": 1727,
"text": " We have a model that can really, really well predict some masked words."
},
{
"start": 1727,
"end": 1728,
"text": " Now how do we use it?"
},
{
"start": 1728,
"end": 1734,
"text": " Now they evaluate on these, I believe it's 11 tasks."
},
{
"start": 1734,
"end": 1739,
"text": " 11 different tasks of..."
},
{
"start": 1739,
"end": 1741,
"text": " Or is it..."
},
{
"start": 1741,
"end": 1742,
"text": " I don't know how many it is."
},
{
"start": 1742,
"end": 1744,
"text": " It is a lot with the same model."
},
{
"start": 1744,
"end": 1751,
"text": " So this pre-trend model, they now claim, can be fine-tuned to do all of these tasks."
},
{
"start": 1751,
"end": 1754,
"text": " And it gets up, it's like state of the art on everyone."
},
{
"start": 1754,
"end": 1757,
"text": " It's crazy."
},
{
"start": 1757,
"end": 1760,
"text": " So how do they fine-tune it?"
},
{
"start": 1760,
"end": 1767,
"text": " So the easiest tasks are the so-called sequence level task."
},
{
"start": 1767,
"end": 1774,
"text": " Where you basically have the sequence and you're about to predict one class label for the entire sequence."
},
{
"start": 1774,
"end": 1778,
"text": " So here we have the sentence pair classification tasks."
},
{
"start": 1778,
"end": 1782,
"text": " For example, the task we saw before, the isNext task."
},
{
"start": 1782,
"end": 1788,
"text": " There is more sophisticated tasks that you need kind of supervised data for."
},
{
"start": 1788,
"end": 1793,
"text": " And so with the supervised data you'd have a class label that you could train on."
},
{
"start": 1793,
"end": 1796,
"text": " So what you do is..."
},
{
"start": 1796,
"end": 1798,
"text": " Let's look at one of them."
},
{
"start": 1798,
"end": 1800,
"text": " M-L-I."
},
{
"start": 1800,
"end": 1804,
"text": " They had it up here."
},
{
"start": 1804,
"end": 1807,
"text": " Nope."
},
{
"start": 1807,
"end": 1808,
"text": " Here."
},
{
"start": 1808,
"end": 1811,
"text": " Multi-genre natural language inference."
},
{
"start": 1811,
"end": 1814,
"text": " And that's our entailment classification task."
},
{
"start": 1814,
"end": 1822,
"text": " So given a pair of sentences, the goal is to predict whether the second sentence is an entailment, contradiction or neutral with respect to the first one."
},
{
"start": 1822,
"end": 1828,
"text": " Alright, two sentences and you're about to predict which one of these three labels it is."
},
{
"start": 1828,
"end": 1831,
"text": " So you put the two sentences here."
},
{
"start": 1831,
"end": 1835,
"text": " Bert can already take two sentences as an input, as we saw."
},
{
"start": 1835,
"end": 1847,
"text": " The embeddings are... the A and B embeddings and the position embeddings are left out of the picture here, but they would be added to it."
},
{
"start": 1847,
"end": 1850,
"text": " And these would be the embeddings for it."
},
{
"start": 1850,
"end": 1855,
"text": " And then you pass this through the Bert model and this is the final layer."
},
{
"start": 1855,
"end": 1864,
"text": " And what they do is they simply take now the embedding, the final embedding for this first one corresponding to this start token."
},
{
"start": 1864,
"end": 1874,
"text": " And they simply put a single layer of classification, so basically a logistic regression on it."
},
{
"start": 1874,
"end": 1877,
"text": " And that's how they then get a class label."
},
{
"start": 1877,
"end": 1884,
"text": " So if this is whatever... let's say this is... this gives you here a hidden vector of 512 dimensions."
},
{
"start": 1884,
"end": 1886,
"text": " 512."
},
{
"start": 1886,
"end": 1889,
"text": " And you have three labels to output here."
},
{
"start": 1889,
"end": 1890,
"text": " One, two, three."
},
{
"start": 1890,
"end": 1900,
"text": " You simply need a matrix that's 512 by 3 of size."
},
{
"start": 1900,
"end": 1907,
"text": " And these are the weights that you would then have to train in addition to Bert."
},
{
"start": 1907,
"end": 1913,
"text": " So Bert is pre-trained and you have to simply only now learn these weights."
},
{
"start": 1913,
"end": 1920,
"text": " Of course they also kind of fine-tune the entire Bert model, but that's really fine-tuning."
},
{
"start": 1920,
"end": 1925,
"text": " The only thing you have to learn from scratch is this, these weights here."
},
{
"start": 1925,
"end": 1931,
"text": " That's pretty... first of all it's pretty neat because you can be very quick at learning new tasks."
},
{
"start": 1931,
"end": 1939,
"text": " Because you simply start from the pre-trained Bert and then you go and learn a single class for a layer on top."
},
{
"start": 1939,
"end": 1946,
"text": " And astonishingly this works extremely well for these tasks."
},
{
"start": 1946,
"end": 1951,
"text": " A bit of a more challenging task is this here."
},
{
"start": 1951,
"end": 1956,
"text": " Squat is a question answering task."
},
{
"start": 1956,
"end": 1959,
"text": " And we're going to jump down here where they explain the task."
},
{
"start": 1959,
"end": 1964,
"text": " So you have an input question."
},
{
"start": 1964,
"end": 1965,
"text": " Oops."
},
{
"start": 1965,
"end": 1973,
"text": " You have an input question and the input question is where do water droplets collide with ice crystals to form precipitation?"
},
{
"start": 1973,
"end": 1979,
"text": " And you have an input paragraph which is kind of a paragraph from Wikipedia page."
},
{
"start": 1979,
"end": 1984,
"text": " And you know that the answer is somewhere in this paragraph, right?"
},
{
"start": 1984,
"end": 1988,
"text": " The data set is constructed such that the answer is in the paragraph."
},
{
"start": 1988,
"end": 1999,
"text": " So the input paragraph reads, precipitation forms as smaller droplets coalesce via collision with other raindrops or ice crystals within a cloud."
},
{
"start": 1999,
"end": 2008,
"text": " So the question is where do water droplets collide to form precipitation?"
},
{
"start": 2008,
"end": 2011,
"text": " The answer here is within a cloud."
},
{
"start": 2011,
"end": 2013,
"text": " So that's this thing here."
},
{
"start": 2013,
"end": 2018,
"text": " So usually what squad models do is they predict the span."
},
{
"start": 2018,
"end": 2022,
"text": " They predict where's the start of the answer and where's the end of the answer."
},
{
"start": 2022,
"end": 2027,
"text": " That's also what kind of BERT's trained to do."
},
{
"start": 2027,
"end": 2036,
"text": " So in order to do this, what you do is again, you already have the ability to input two sequences."
},
{
"start": 2036,
"end": 2042,
"text": " So we've trained with two sentences, but here they simply say, oh well, the first sequence is going to be the question."
},
{
"start": 2042,
"end": 2047,
"text": " Our second sequence is going to be the entire paragraph from Wikipedia."
},
{
"start": 2047,
"end": 2063,
"text": " And then for each output, for the output of each token, remember there's as many outputs as there's inputs because the transformer will always transform to the same length of sequence."
},
{
"start": 2063,
"end": 2069,
"text": " For each token in the output, we classify it."
},
{
"start": 2069,
"end": 2079,
"text": " Is this token the start token or is this token the end token or is this token none of all?"
},
{
"start": 2079,
"end": 2086,
"text": " Now, what they do effectively is that here each one outputs, each one is a vector."
},
{
"start": 2086,
"end": 2098,
"text": " And they, as we said at the beginning of finding out which one's the subject, now here we have two queries, namely query one, which is, is this the start?"
},
{
"start": 2098,
"end": 2103,
"text": " Let's call it query S and query E is, is this the end token?"
},
{
"start": 2103,
"end": 2112,
"text": " So these are two queries and I'm going to just produce, compute the inner product of each query with each of these outputs."
},
{
"start": 2112,
"end": 2119,
"text": " And over my sequence here, this is going to give me a distribution."
},
{
"start": 2119,
"end": 2127,
"text": " So start for start, maybe this token is not much and this token is a lot and so on."
},
{
"start": 2127,
"end": 2138,
"text": " There's five tokens and for the end, not so much, not so probable, not so probable, very probable, not so probable."
},
{
"start": 2138,
"end": 2147,
"text": " So what you get, going to get is from these inner products is a distribution over which one's the start and which one's the end."
},
{
"start": 2147,
"end": 2152,
"text": " And you're going to say, okay, this one's probably the start and this one's probably the end."
},
{
"start": 2152,
"end": 2161,
"text": " So that's how you predict the span. And again, what you have to ultimately learn is these, these queries here."
},
{
"start": 2161,
"end": 2166,
"text": " And so not that much."
},
{
"start": 2166,
"end": 2177,
"text": " And this is named entity recognition and named entity recognition, you have a sentence and you're supposed to recognize named entities."
},
{
"start": 2177,
"end": 2187,
"text": " Like up here, we saw subscribe to PewDiePie and the named entity would be PewDiePie."
},
{
"start": 2187,
"end": 2193,
"text": " Right. This is a name and you're supposed to recognize that this is a name."
},
{
"start": 2193,
"end": 2201,
"text": " And they do it the same, same way that they do the squat basically or a similar way."
},
{
"start": 2201,
"end": 2214,
"text": " Sorry. They basically for each of the outputs here, they simply classify whether or not it's part of an entity or not."
},
{
"start": 2214,
"end": 2223,
"text": " So what they have to do is they have to simply train if they also have different labels for which kind of entity is this."
},
{
"start": 2223,
"end": 2228,
"text": " This is like a person and this is this is no entity."
},
{
"start": 2228,
"end": 2236,
"text": " So if you have 10 of the labels, then each for each thing, you would classify it into one of 10 classes."
},
{
"start": 2236,
"end": 2243,
"text": " You need a classifier of input size versus number of classes."
},
{
"start": 2243,
"end": 2250,
"text": " That's all you have to train in addition to pre to fine tuning BERT itself."
},
{
"start": 2250,
"end": 2259,
"text": " All right. So they kind of evaluate on all of these tasks. They get super duper numbers on all of them here."
},
{
"start": 2259,
"end": 2264,
"text": " BERT large wins on pretty much everything."
},
{
"start": 2264,
"end": 2270,
"text": " And this model is big. Just saying."
},
{
"start": 2270,
"end": 2279,
"text": " And they trained it on TPUs, which is available in kind of Google Cloud infrastructure."
},
{
"start": 2279,
"end": 2285,
"text": " So far, it's trained it on a lot of data."
},
{
"start": 2285,
"end": 2292,
"text": " So to to away, it's it's kind of expected that you would outperform,"
},
{
"start": 2292,
"end": 2297,
"text": " but it's very surprising that you outperform everyone else by this much."
},
{
"start": 2297,
"end": 2308,
"text": " And they've done a lot of kind of ablation studies where they show that it's really due to the fact that they do this left and right context."
},
{
"start": 2308,
"end": 2320,
"text": " They take into account the left and right context of a given token when doing the attention that it's that that's why it's better."
},
{
"start": 2320,
"end": 2332,
"text": " So here, for example, they compare the BERT base model and they say, OK, what if we don't do the NSP, the next sentence prediction task?"
},
{
"start": 2332,
"end": 2338,
"text": " Then you can see the numbers, they already kind of they drop on these tasks."
},
{
"start": 2338,
"end": 2349,
"text": " And what if we then additionally do only left to right training and the numbers, they drop pretty seriously again, you see, sometimes here, for example,"
},
{
"start": 2349,
"end": 2353,
"text": " you see a pretty serious drop in the number also here."
},
{
"start": 2353,
"end": 2365,
"text": " So there really seems to be a real value in doing this kind of left and right context attention."
},
{
"start": 2365,
"end": 2369,
"text": " So it's not just about the model size and the amount of data."
},
{
"start": 2369,
"end": 2371,
"text": " That's basically what they show here."
},
{
"start": 2371,
"end": 2378,
"text": " And it's really cool that the paper actually shows this, because usually people have an idea and they throw a lot more resources at it and they're better."
},
{
"start": 2378,
"end": 2383,
"text": " You'd never know why. And this is pretty cool that they actually show."
},
{
"start": 2383,
"end": 2388,
"text": " All right. So this is all I have to say about this paper."
},
{
"start": 2388,
"end": 2392,
"text": " Check it out. The models are here pre trained."
},
{
"start": 2392,
"end": 2397,
"text": " You can actually download them. You can fine tune in for yourself, for your own task."
},
{
"start": 2397,
"end": 2401,
"text": " And they're pretty, pretty powerful."
},
{
"start": 2401,
"end": 2408,
"text": " There are smaller models for if you don't have a TPU that are also pre trained."
},
{
"start": 2408,
"end": 2410,
"text": " So check these out as well."
},
{
"start": 2410,
"end": 2438,
"text": " And thanks a lot for listening."
}
] |
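The transcript segments above describe, in words, the two small heads trained on top of BERT's per-token outputs during fine-tuning: a span-prediction head for SQuAD-style question answering (a learned "start" query and "end" query, each dotted with every token's output vector and softmaxed over the sequence) and a per-token classifier for named entity recognition. Below is a minimal PyTorch sketch of those two heads, not the authors' actual code; the tensor `hidden` stands in for the per-token output of a pre-trained BERT encoder, and names such as SpanHead, TokenClassifierHead and num_entity_classes are made up for illustration.

import torch
import torch.nn as nn


class SpanHead(nn.Module):
    """SQuAD-style span prediction: learn one 'start' query and one 'end' query,
    take the inner product with every token's output, softmax over the sequence."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.start_query = nn.Parameter(torch.randn(hidden_dim))
        self.end_query = nn.Parameter(torch.randn(hidden_dim))

    def forward(self, hidden: torch.Tensor):
        # hidden: (seq_len, hidden_dim)
        start_logits = hidden @ self.start_query  # (seq_len,)
        end_logits = hidden @ self.end_query      # (seq_len,)
        # distributions over which token is the span start / end
        return start_logits.softmax(dim=-1), end_logits.softmax(dim=-1)


class TokenClassifierHead(nn.Module):
    """NER-style head: classify every token's output vector into one of N classes
    (person, location, ..., 'no entity'); just one linear layer of size
    hidden_dim x num_classes is trained in addition to fine-tuning BERT itself."""

    def __init__(self, hidden_dim: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, hidden: torch.Tensor):
        # hidden: (seq_len, hidden_dim) -> (seq_len, num_classes)
        return self.classifier(hidden)


if __name__ == "__main__":
    seq_len, hidden_dim, num_entity_classes = 5, 768, 10
    hidden = torch.randn(seq_len, hidden_dim)  # stand-in for BERT's per-token output
    start_probs, end_probs = SpanHead(hidden_dim)(hidden)
    entity_logits = TokenClassifierHead(hidden_dim, num_entity_classes)(hidden)
    print(start_probs.shape, end_probs.shape, entity_logits.shape)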
nPB0ppcnzZA | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | What’s in a name? The need to nip NIPS | [
"Science & Technology"
] | [
"NIPS",
"NeurIPS",
"nips 2018",
"neurips 2018",
"nips name change",
"machine learning",
"deep learning",
"community",
"sexism",
"diversity",
"inclusion",
"bias",
"gender",
"women",
"tech",
"women in tech",
"women in stem",
"majority vote",
"minorities",
"statistics",
"computer science",
"harassment"
] | http://tensorlab.cms.caltech.edu/users/anima/pubs/NIPS_Name_Debate.pdf
Abstract:
There has been substantial recent controversy surrounding the use of the acronym "NIPS" for the Neural Information Processing Systems conference, stemming from the fact that the word "nips" is common slang for nipples, and has historically been used as a racial slur targeting people of Japanese origin. Here, we outline the ways in which this acronym has contributed to a hostile environment towards women in machine learning. We argue that an October 2018 decision by the Neural Information Processing Systems board not to change the name of the conference was based on a misunderstanding of the issues that women face in STEM fields, a poorly-designed survey, and a faulty statistical analysis. We applaud the board for a more recent announcement of the new abbreviation "NeurIPS", and emphasize that this name change is an important first step towards the creation of a more inclusive environment in machine learning.
Authors:
Daniela M. Witten, Elana J. Fertig, Animashree Anandkumar, Jeff Dean
References:
https://medium.com/@kristianlum/statistics-we-have-a-problem-304638dc5de5
https://nips.cc/Conferences/2018/News
https://twitter.com/AnimaAnandkumar/status/1055278000588021762
https://www.change.org/p/members-of-nips-board-protestnips-nips-acronym-encourages-sexism-and-is-a-slur-change-the-name
https://twitter.com/AnimaAnandkumar/status/1056971852248018944 | Hello and welcome. Today we're going to look at what's in a name, the need to nip NIPS by Daniela Witten, Alina Oferdig, Anima Shri Anand Kumar and Jeff Dean. This is a bit of a special paper as it's not an academic topic. The paper in fact is about the change of name or rather change in acronym for the conference Neural Information Processing Systems, previously abbreviated NIPS, but now for the first year this conference has been hosted under the acronym NURIPS. The people here on the paper are not the organizers of the conference, they are advocates for the name change and the paper basically outlines their arguments and a bit of description of what happened. So they're also pretty big names in the community so it should be interesting to see what they have to say. The paper is pretty short, it's three parts, three pages and we're going to go through it and yeah let's jump into it. So I have it over here. Alright so the first part of the paper basically describes, it's called What's all the Fuzz About? It basically describes why a name change was necessary in their perspective. So they say in machine learning like the rest of them suffers from severe gender imbalance, low retention rates for women and so on. They also describe the MeToo movement, increased awareness of sexual harassment faced by many female researchers, pervasiveness of sexual harassment at computational conferences and they reference an article here. I want to kind of show you this article. It's this article here. So if you haven't seen this yet I encourage you to read it. It's pretty horrifying to read but it gives you an idea of what people talk about when they say sexual harassment is a problem, is pervasive at conferences and so on. So yeah just I don't want to go into this specifically. Just go ahead read it and you know see what people are talking about. I think it's important context to understand where people are coming from. So they go on to say however more subtle acts of gender harassment defined in this report. This includes like sexist hostility, crude behavior and so on have gotten less public attention. Nonetheless gender harassment is extremely pervasive, is direct contributor to the challenges faced by women in the STEM field. In this article we argue that NIPS, the former acronym of the Neuro-Information Processing Systems Conference, constituted gender harassment towards women. So that's what their arguments basically about. So the acronym led to basically had its part in gender harassment towards women. Basically led to an environment where women could not feel comfortable at this conference. So here's their description. In popular slang the word NIPS is an abbreviation for nipples. Furthermore it has historically been used as a racial slur targeting people of Japanese origin but we'll not go into this deeper because that's kind of a historic use of the word. The current use of the word in fact is the slang for nipples and so we'll focus on that. They say at first glance the fact that a major machine learning conference shared its name with this slang is an unfortunate but unimportant coincidence. And it really is a coincidence. I think the the conference name has been around for longer than the slang has been kind of popular. The slang word has been popular so it really is a coincidence. Many other conferences have same coincidences like Colt for example. Maybe actually that's even less a coincidence than here. 
They say in fact one might hope that members of the machine learning community are sufficiently mature that the conference's name is unimportant. That's basically what everyone would hope. Maybe people don't even notice and if they notice maybe they'll have like a two-second oh that's you know that's the other word haha but then we basically just go on with our lives and no one cares too much. So that that's kind of the ideal scenario and they acknowledge that here. It's really important that they say unfortunately this appears not to be the case. They detail a few examples here at the 2017 conference Elon Musk made inappropriate jokes about the acronym participants wore loot t-shirts. I think one said my nips are NP hard which is kind of a double computer science joke I guess. There was a pre-conference event named word I can't probably say out loud without getting some sort of strike. You can clearly see that even though the kind of original name is coincidental and you know one would hope that people are like you know just putting it off be adult about it. There have been jokes, there have been you know t-shirts made and you know you can say the name collision is not like is unintended but I think this word here is very intended. So I think the main argument here or one of the main arguments is this really first of all creates an environment where certain people don't feel comfortable. It creates kind of a sexualized environment. Second of all and the more broader sense it's just unprofessional as a community especially since the kind of community is booming. We want to represent machine learning to the wider world. One can say okay it's you know it's just in professional that we kind of bring intertwine these things. It doesn't make a good impression. They say furthermore reminders of the unfortunate acronym are everywhere. Online searches for the acronym led to not safer work content. The hashtag NIPS is devoted to pornography. If you misspell the conference website you get to an adult site and I think this yeah this further goes into the argument that it's just an unprofessional appearance towards the outside. It's unfortunate the conference has been here longer but you know still there's a need to do something about it and I largely agree with these arguments that these are good arguments to make for a change of name. This paragraph down here it's a bit of a we'll go into that later. It's not very connected to the arguments made here so well it's more like connected to what's been happening so we'll go into that later. People have been circulating these arguments and calling for a name change for a while and then the the board of the conference the NIPS board made a survey surveying the attendance of the last five years conferences whether or not the conference should change its name. The next section is dedicated to how the survey turned out and what the response of the board was. So actually let's first go to the decision by the board. So here is the press release. This is a press release after the survey results had been collected. So they said our survey was returned by about 2200 people here and as I said have attended NIPS in the last five years. Of the male respondents about 28% are in favor of the conference name change of the female respondents about 44% are in favor of a name change. 40% prefer the existing name 16% expressed no preferences. In fact let's go look at the detailed results which they have down here. 
So you can see overall there is a big a big slant towards not agree. So negative 2 is strongly disagree with the name change while positive 2 is strongly agree. So you can see there's a big slant towards the not agree. If you split this by gender of respondents then you can see the basically the male distribution is that slant while the female distribution is a bit different as you can see here. The first thing it's mostly towards the extremes. So there are more people strongly saying something than non-strongly saying something to either side. And the second of all it seems very divided and very evenly divided. So in fact if you look at the numbers if you count the disagrees and agrees you'll find there's a slight majority in the agrees. There is a slight majority in the disagrees if you only consider the strongs. But ultimately these numbers are pretty close so that there's people on either side feeling strongly and there's about in this survey about as many on either side. So that's basically the outcome of this. Here I find very interesting some quotes from respondents. So you had the opportunity to put quotes to put like a comment and these are quoted from these comments. So they say for example this thanks for considering a name change. I'm not personally bothered by the current name but I think the gesture will send a much-needed inclusive vibe in the right direction. One person says if you were up to me I'd call off this nice but symbolic gesture. Use whatever time money and energy to make actual changes. Then someone says please please please change the name it is sexist and racist slur. I'm embarrassed every time I have to say the name of the conference. This feeds into the unprofessionalism argument. The next one I find very interesting. It says as a woman I find it offensive that the board is seriously considering changing the name of the meeting because of an adolescent reference to a woman's body. From my point of view it shows that the board does not see me as an equal member of the community but as a woman first and a scientist second. This is extremely interesting. So this is one of the people who was a female respondent and said strongly disagree with the name change or disagree with the name change. I mean I can guess. So we've only heard so far that the name or the acronym is offensive to women but here we have a woman saying that the consideration to change the acronym is actually offensive to her. That's very special and understandable. I can understand why that happens. I can understand the argument made here. This woman feels like okay it shows me that basically my gender is important and not really my being scientist. It's an argument. The next one goes into the same direction. It says I'm a woman. I've experienced being harassed by male academics and I would like this problem to be discussed and addressed but not in this frankly almost offensive way. Another person saying basically that's changing the name is almost offensive and it's not the right way to go to achieve these results. There's another one saying I'm in favor of the name change but this is cosmetic. So you have basically people coming from all angles giving their opinions and you can clearly see why there is especially in the female respondent group why there is a divide. So the board overall said the following. The board overall said the following. After extensive discussions the NIPS board has decided not to change the name of the conference for now. 
The poll itself did not yield a clear consensus on a name change or a well-regarded alternative name. Further they state instead we ask the community support in implementing concrete steps to improve the inclusiveness of the conference. So these are described down here. They have a number of changes to make the conference basically more inclusive. So they basically said okay so the name change survey was inconclusive and they clearly say whatever we do here regardless of which decision we take we're failing to accommodate the opinions about half the women in the community. Which is true this is clearly what you can see from the results from the quotes. So basically what they say is we'll not change the conference name for now. We'll implement these steps because what they I can guess what they felt was okay even the people against the name change were in support of making the conference more inclusive. So they basically say okay we do these things we strengthen their code of conduct. We have two inclusion diversity chairs. We have an inclusion town hall. We have childcare support. Gender-inclusive restrooms and so on and so on. Mentoring breakfasts for women and other minorities. So they take these steps concretely. They say this is what we do and even further if you look at their page on diversity and inclusion which I have here. They say here on the top in addition to hosting diversity related event the conference also making consider structural changes include a new code of conduct we've already seen and in-depth discussion of the potential of changing the name of the conference. So in total what they're saying is we've done this poll. It came back inconclusive which you've I think has been clearly demonstrated. We'll not change the name of the conference for now and we'll do all of these other things right down there and at the conference we'll hold a meeting and discuss the name change so we could maybe potentially change it in upcoming years. I think this is a really sensible decision by the board. I mean given this data given all of that this is probably the most sensible decision. Let's take concrete steps. The name change seems to be you know debatable so let's actually debate it at the conference with the actual community. That was the basically result of the poll. Let's now go back to what the paper has to say about this. Here's the paper again and they say in order to collect data about the machine learning community's feelings about the conference name the conference board sent out a survey to people who have attended the conference during the past five years. However serving conference attendees results in a very biased sample of a much larger community of potential machine learning researchers. Bias arises due to the fact that some people who are made uncomfortable by the name or by other aspects of the machine learning culture may have decided not to enter or to remain in the or not to remain in the field have chosen not to attend the conference. So basically you're saying well if you only ask this one group of people right then this other group of people you know doesn't have a chance to make their voice heard and there is basically bias because in this other group of people the people who have not attended the conference they would would have a severely different opinion from the people who have attended the conference. 
So first of all I think this can be a valid point here of course all the ways if you ask one group of people and exclude another one you there's there's if the if the group you ask and the target group which here it's really unclear what it is I guess it's the machine learning community considering going to the conference if those don't overlap then you you will introduce some sort of bias and they say okay bias could come from the fact you know some people who actually are affected by these problems of which this name is one they may have you know not attended the conference because they may have left the field because the the gender harassment is so pervasive and they just didn't didn't stay and so on. So I think this can be a good point but the problem I have with it here is that it's simply stated without anything it's simply said okay there is bias, bias arises and my question would be how much is that bias of any data like any data on this you can't just criticize these that the survey for being biased and and then not provide actual data like how many people are there who are made uncomfortable by the name or have left the field in who have left the field because of these things and is it really viable to to count them in I guess okay we can argue it is but how would they have responded to this we've clearly seen that a lot of affected people that even have experienced harassment are not in favor of the name change so in this case I would really like to see some data on how much this bias is right and I cannot also say it's not it's not that bad of a decision to what the board did to send the survey to the last five years attendees I think is a very sensible choice if you want to gather the community's feelings towards these kind of things I mean you you can't just ask the entire world because the entire world is not the machine learning community so I think the this is a very sensible decision to ask last five years attendees and if you have real evidence that this causes a notifiable like a significant bias then we could potentially correct for that bias but without any data on that I think the the asking last five years participants was completely reasonable and one of I don't really see how you can do a much better job without much much more manual work and I want to make this point a bit clearer on how hard it actually is to do that by pointing to the response to this so here is a tweet thread by one of the authors of this paper after the conference decision came out she basically tweeted out this protest nips I am starting this new hashtag please retweet if you're in support of the next conference changing its name so basically kind of launching a a Twitter campaign a Twitter hashtag under this to come you know get into a conversation with people about this people could express their support she also that was a misclick she also here made a change dot org petition to change the name so a petition basically petition is here the text of the petition basically says something similar to the to the what we've already seen including there is a the criticism of the survey and as you can see here about 2,000 people have signed it so I mean a Twitter hashtag is all good you know you can do that a petition is all good you can do that but it's a bit ironic because a change that org petition literally anyone can sign this and in addition to that there's only one option you can only say yes you can't even say no right so and even more who's gonna see the change that org petition it's gonna 
be the social media followers of these people right so basically you have now a you have it now what's basically a survey of the social media network of people in favor of changing the name where there's only one option to respond I I find it and so I've gone through here the people who actually publicly associate their name give a reason for signing a lot of these they you know they give some argument why they've signed the petition but I've tried searching these people for any sort of academic track record and in my sample I've come up with between 10 and 20 percent of people who somehow have an academic track record so this is I mean certainly a valid thing to make your voice heard and to show your numbers and but I mean look at this there's a bot signing twice hello Jack Nelson and Richard Chi very nice but so basically I'm not here to criticize petitions but what I want to say is you can't like criticize this this poll so hard for being biased and then launching basically an own poll that's even more biased and even more non-representative of the community to me that's that's kind of ironic and just goes to show how hard this is and my argument would be it's actually not that unsensible of a decision of the board the way they did it and if you have again if you have data to actually quantify the bias here then it's viable to go and correct for that all right so to they go on to analyze the survey results conference board simply noted that of the 294 women surveyed the number who strongly support or support the name change is comparable to the number of women who are strongly opposed or opposed however this analysis implicitly assumes that one person's feeling of discomfort or marginalization as a result of the name should be given the same weight as another person's preference for the status quo this amounts to giving the same way to false positives and false negatives of course we learn in an introductory statistics course that false positives and false negatives should be assigned weights dependent on context in this context we feel that a much greater weight should be given to the views of a person who feels marginalized as a result of the name so up here I find this a bit strange they say this amounts to giving the same way to false positives and false negatives to me the false is here a bit confusing because it seems to me it's it's simply giving the same weight to negatives and positives there's I don't think there's a need to dress this up in statistical lingo here it simply we give the same weight to people who responded positively and to people who responded negatively I think that's that's it there's no false of course we learn in a truck see this is class that false positives and false negatives should be assigned weights dependent on context in this context we feel that a much greater weight should be given to the views of person who feels marginalized as a result of the name I would I would say to this it's the problem for me it's these are this is one of the things that where you at you read it first and you say like oh yeah this makes sense but first of all it's framed extremely one-sided it's framed as all the people who are for the name change like they they feel discomforted they feel marginalized and the people who are against the name change they simply and here specifically they they they talk about the women group so in argument they're all affected the people against it simply prefer the status quo but we've clearly seen in the in the in the press release and 
we'll go over to that now these quotes here we've clearly seen that the the offense and the marginalization happens on both sides so here this as a woman I find it offensive that the board is considering changing the name it shows that the board does not see me as an equal member of the community but as a woman first and the scientists second I mean this is almost a textbook definition of marginalization and this is clearly happening on the other side as well so I think the framing here is extremely dishonest and one-sided and there is given basically the the side that we just seen in this quote is given absolutely no not even a mention that it exists it's simply framed as this side is marginalized and oppressed and discomforted and the other side simply prefers the status quo but we've clearly seen that yeah it's almost a this fits exactly this definition it's just one person's feeling or discomfort or marginalization as a result of the name it's just as a result of the name change second of all I think the the bigger problem and this goes into the statement down here to state this last point more explicitly an issue adversely affecting the minority of participants should not be decided by a majority vote again something at first you say oh yeah that makes sense but if you think about it this is a really really outrageous statement and the reason is it's it's it's outrageous is if the mud if it's not majority vote if it's not one person one vote then someone has to decide who gets to vote and who doesn't and more so specifically here someone basically needs to decide who should be given what weight in the vote right you need someone to decide this and here you can say well it's easy it's just the the women right because they're affected I this but they go further they say well it's the women who feel discomforted and marginalized who should be given more weight than the ones who simply prefer the status quo but then you have to have someone assessing whether someone is really marginalized and discomforted or simply prefers the status quo and it's not like an environment where there is kind of a sexist undertone isn't also discomforting or can't also be discomforting to men to men of any sort or people of of any sort of gender it's just not clear that the fact that people should be given different weight in in crafting an opinion I mean this this can be true if you have like some clear area of expertise but in this case it's really unclear and the fact is if it's not majority vote you need someone deciding the weight and the someone deciding the weights automatically decides on the outcome of the vote and then why do you need a vote in the first place basically up here they say yeah we feel the great weights should be aligned like this and down here there is no more we feel it's be an issue at worst affecting the minority of participants should not be decided by majority vote they're basically calling for a dictatorship in this case and I'm gonna guess like everyone has the opinion the dictatorship would be an awesome idea if the dictator were me right that's that's what everyone thinks of course and that's basically the argument made here but it's not it's not true and there's some really really disturbing implicit things in here and maybe I want to quickly go over how I think a democratic decision works so imagine you have a person and the person has decision to make for or against in this case the name change right and the person must decide on one of these two things on a let's say on 
a continuous scale but it doesn't matter what what this what this stuff up here basically implicitly assumes is that the person looks at themselves and they think well am I personally discomforted or marginalized by the name or the climate it creates no then I'm obviously against the name change because it doesn't help me or another person go am I personally affected yes well I feel discomforted or marginalized well then I'm obviously for a name change so the basic assumption here is that people simply vote purely their own egotistical interests and that's that's it so basically if you're in one of these minorities then you'll vote for the name change because it affects you which we've already seen is not it's not a given that people vote that way and if you're not in this then you know you you'd vote against but you're not affected so your vote shouldn't count it's completely untrue what people do especially smart people and I believe the machine learning community consists largely of these what they do is they'll make a list of arguments argument one argument two argument three argument for everyone has the same arguments everyone's hurt the same arguments if not then maybe there's some work to do in actually getting arguments to people but that's not the same as weighing the people differently you get the arguments to the people and then you weigh each of them equally why because what every person does is they say okay argument one is maybe it's unprofessional right name is unprofessional alright how important is that to me give it a weight weight one cool that's really important to me I'll give it a big weight argument two some people feel really discomfort like discomforted if you're marginalized by the name creates a bad environment for them how much weight am I gonna give to that right so people can actually consider other people's feelings and other people's problems and decide on what's the best also for them in their own mind so they give it a weight two and then there's maybe two arguments against some given these weight three weight four at the end what you have is you have argument I you will sum it up by the weights W I J you will sum it up over all people so basically now and this will give you like a final number a which is either positive or negative if it's positive you do the name change if it's negative you don't do the name change if you do this over all people what you've basically done is you have just determined these weightings here by a democratic process you've crowd sourced the weighting this is exactly what these people say up here right we feel we feel that you're not false false positives false we feel that positives and negatives should be assigned weights dependent on context so the positive and negative arguments in this case are assigned weights dependent on context but the weights are crowd sourced to the community right and each person this who participates in that each person who participates is one more brain power in a complicated decision that no one basically no one has the authority just to just decide for themselves so these people are calling for different weighting this is the way to do it the democratic majority vote is the exact way to determine these weights what these people basically are no no no no no we should determine the weights we who know I'm a bit corny here but this is basically it's still it's two alternatives either you do democratic process one person one brain one vote and that will give you a crowd sourced crowd sourced true 
weighting of the arguments what the community feels or someone needs to decide some one needs to side by force basically and that's a dictatorship so these are the choices you have and clearly now you can maybe understand why I say this is an outrageous statement because to me the dictatorship option is not an option note that I'm not saying that democracy can never be wrong or the majority can never be wrong but in fact it's the best system there is can be wrong but anything else will undoubtedly go more wrong so that's my point here alright so that was a maybe a bit ranty but let's go on a false choice and a minimization of a real issue so they go on to say what they think of the decision that the board made in response to this so up was how they analyzed the poll and now it's the decision in announcing their decision not to change the conference name conference board expressed commitment to implement concrete steps to improve the inclusiveness of the conference and they list them here and they say we sincerely applaud the conference board for these efforts okay I yeah I think the community feels like that as well however the wording of the decision implied the need to choose between changing the name of the conference and taking concrete steps to improve its inclusiveness I don't see that at all say this was a false choice there's no reason that the board could not do both yes there's no reason that they couldn't do both and I believe we've read this together before I don't think the board ever said that there was a choice between one or the other I think they've said very much the opposite let's go back I think what they mean here is the word instead so here they say we won't change the name and then here's they say instead we ask for the community support and implementing creed steps I think this this must be it because I don't really see any other way you would ever think that and the reason is this here they say will not change the name of the conference for now on another page they say it will discuss the name change at the conference and then here the instead I think what is meant is instead what we will do right now is these things we'll discuss about the name change but what we will do right now which was basically not the the real problem in the first place the real issue raised was the name so instead of that issue we'll do these other things which we feel the community wants I think that's the I think there's no I think everyone reading this comes to the same conclusion after after reading that but so I really don't see how you you can say that this is kind of presented as an either or by the board I don't think that at all and but you decide for yourself I believe the real real real crocs here is the for now and the promise to discuss at the conference which if you can see here in the paper is never ever ever touched right this they make it basically seem that the board has decided to not change the name and that's it which is completely wrong they've clearly stated their openness to a name change they want to discuss it it was just inconclusive so they want to basically not do anything rash and then half the community is against it anyway so they want to discuss it I to say that this is the basically that that the wording implied the need to choose I don't see that um but you know you decide for yourselves the board suggested a name change would only be symbolic and so on would have no real consequences so that this this these are some of the arguments basically made in the 
quotes as well but you know the fact that the name change would only be symbolic and so on these are all things you could actually discuss at the con at this conference meeting you could even correct for your for your poll right you could invite people who have left the community to represent those you could invite new potential researchers you could give everyone their voice and then actually listen to all of them I think that's a very sensible decision by the board and I think this is misrepresented here lastly let's say another argument though not explicitly mentioned a number of machine learning researchers told us that changing the name of the conference lead to too much confusion in the community while we understand we respectfully do not share it I mean this is it's basically an argument against the name change I think it's also a point worthy of discussion right that they say they say we respectfully do not share this point yeah okay they don't share it other people do it's a point of discussion we could you know you could actually discuss it at the conference but I actually agree with the authors here I think changing the name will not have a big impact on the kind of recognizability of the conference especially now down here we'll actually get into what actually happened in November the in response to extensive public backlash the conference board announced a change to the official conference acronym to NRIPS they say we are pleased provides this provides a reasonable compromise so in in my opinion this is it as far as solutions go this is a good solution right the NRIPS acronym I think it's it's it's cool you don't have to change the name of the conference itself you simply change the acronym which you know was the the reported problem in the first place I think the all the new papers will like people will still recognize the old NIPS acronym or the new conference it will be clear that it's the same thing and I think this is a very good a very good new name and I think people will get used to it pretty quickly it also you know to say NRIPS it it's also rolls off the tongue easily so it's as far as solutions go I like it further they say however the work for the conference board is far from done oops we encourage the board to continue its efforts blah blah blah so they say okay you have to do more than just change the name and so on they say together these steps will help ensure that the NRIPS conference retains its place in the forefront of machine learning research while also creating a welcoming environment for women and members of other representative groups on other underrepresented groups we all hope that to me the problem is a bit how this how this went down and if we go back and look at the actual press release of the name change they say here dear members of the neural information processing systems community something remarkable has happened in our community the name NRIPS has sprung up organically as an alternative acronym we're delighted to see it being adopted indeed one forward-thinking member of the community purchased NRIPS comm described as purpose as hosting conference content under different acronym until the board catches up we've caught up we're considering alternative acronyms when the community support for NRIPS became apparent we ask all attendees to respect the solution from the community use the new acronym so basically they've rebranded the entire conference about a month before the actual meeting asked all sponsors all invited companies asked all 
invited papers to rebrand the acronym to me this the wording here is fit is a bit funny like something remarkable has happened in our community has sprung up organically and now we'll just adopt it it seems like it seems like much less of the fairy tale to describe here but the actual like there's a there's a mob with pitchforks around your house and this is like the first kind of straw that you can grab to to make them calm down and also know that some companies have begun pulling out funding for the conference so I think this is really this was really you know much more backed by force and and back yeah what they say in the paper extensive public backlash so loud screaming basically then this this kind of the name has sprung up organically and has been adopted and seems much more bit forceful to me it would have still been a viable path the most valuable path to actually wait for the conference and then have that discussion and then if indeed this name in the rips would be would be presented as a good alternative and you know people would be fine with that then you could still make the name change for last for next year I think this this would have been a good alternative my fear now is this has been extremely rash extremely forceful as as I've said also accompanied by with like by withdrawal of funding that I believe these things usually provoke a backlash and that's really something that I wouldn't look forward to so I hope that this con that this paragraph down here is true that actually we will see a more welcoming environment for everyone but I believe things like this tend in society to have the sometimes very opposite effects of what's intended and so I hope this does not produce a backlash I think having had the actual discussion doing things non rashly would have done much more in the direction of preventing such a backlash so this is the end of the paper so to recap they basically say the acronym was was inappropriate which I agree with they say the survey was bad which I could believe if there was data they say that an issue adversely affecting the minority of participants should not be cited by majority vote which I absolutely disagree with and then they say the board has basically stated this as an either or decision which is I believe not true and misrepresenting or maybe I've missed something it's always possible lastly I want to get to this paragraph in recent months a number of women including some of the authors of this article who publicly expressed support for a change of the conference name have been relentlessly trolled harassed verbally abused and even physically threatened on Twitter reddit other online forums much of this harassment they say has been anonymous and typically has had an extremely gendered tone furthermore some students have reached out to us the authors lamenting the fact that they felt unable to openly express their support for renaming the conference due to fear of bullying or retaliation by faculty advisors or others in position of power this I believe is really bad the fact that people can't speak out about something like this without being bullied or harassed or having to fear for their careers basically is is bad and I would really discourage everyone from engaging in such behavior verbal abuse physically threaten I mean that's I mean to one point you can say all right if you've been on the internet for longer than a week then this probably has happened to you if you have had any sort of serious discussion on the internet but you can also say 
that doesn't make it right so I believe it's it's really important to separate what is you know harassment basically from actual disagreement and criticism and please engage in the latter do not engage in the former my problem with this paragraph it's again it's very one-sided it's basically stated here some students have reached out to us lamenting the fact that they felt unable to openly express their support for renaming the conference due to fear of bullying retaliation by faculty or advisors of other and others of position power to me I'm you know I'm gonna say this probably happens on both sides what you know one could argue where it happens more but this very much happens on both sides of this issue and it's real shame for both sides basically I think anyone should be able to express your opinion to to demonstrate that here I'm gonna show another Twitter thread by one of the authors of this paper where basically this is a thread where she posts screenshots of conversations basically people reaching out to her saying exactly that like I can't share my I have trouble sharing my opinion I get mocked for my opinion I can't do so publicly because I fear you know from my from my faculty and so on but then there's also this one here where a person wrote an email to the author basically saying they disagree with her and I I've read this email I don't you know I don't agree with the arguments here made but I can say that the this is not verbal abuse it's not personal attack it's not physically threatening it's actually quite respectful disagreement that the person actually goes through length to say how respectful they are how much you know how much this is meant as a as a disagreement on factual terms and further what they say is that they want to be anonymous maybe you see it on the very bottom for example I haven't done too much to anonymize myself but I ask you to respect my wishes of remaining anonymous don't try to figure out who I am further up they state basically they want to remain anonymous because they fear for their ladder for their later career right they fear of a backlash up here wish to remain anonymous as I'm an early in my career someday we may work together so basically they say here I disagree here's why I disagree and they wish to remain anonymous because they fear for their career right so this is almost like this is this is very much here feeling unable and will will go feeling unable to openly express their in the case support against renaming the conference to to fear of bullying or retaliation by faculty advisor others in position of power so this author here is obviously a real person in position of power and in very famous senior researcher and this person basically says I'm afraid and I can't you know that that's why I'm anonymous and the way the author responded here as you can read is what an anonymous coward of course I will do everything to guess you and it's it's difficult to to kind of put this off as I mean even if it's I don't know how it's meant right I will do everything to guess you and the least it means she will try to figure out who that is right and she doesn't go as far as saying that she will then basically either you know remember that name in case of any future thing or share it or whatnot but it's certainly you can't argue that this is a real deterrent for other people to even anonymously voice their opinion to if if this person announces I will do everything to guess you to me that that shows that this fear that we discuss here is very much 
present on both sides and it's absolutely not okay if if either side reacts by basically by basically retaliation or even even the the possibility of retaliation and I believe everyone should be able to say their opinion I respect really everyone even like these these authors here clearly took a lot of effort and a lot of a lot of beating basically they say they've been relentlessly trolled harassed verbally abused even physically threatened this is just really bad and have lots of respect for them saying their opinions stating their opinions anyway I think everyone should be able to do that without these things happening so to everyone watching I encourage you to not engage in these things and that alone will probably make the environment much much more inclusive and nice for everybody irregardless of of affiliation so that was it for me for this paper it's a bit longer it's a bit ranty if you agree disagree let me know in the comments I guess and other than that have a nice week weekend whatever you do bye | [
{
"start": 0,
"end": 4.86,
"text": " Hello and welcome. Today we're going to look at what's in a name, the need to nip"
},
{
"start": 4.86,
"end": 10.68,
"text": " NIPS by Daniela Witten, Alina Oferdig, Anima Shri Anand Kumar and Jeff Dean."
},
{
"start": 10.68,
"end": 17.080000000000002,
"text": " This is a bit of a special paper as it's not an academic topic. The paper in fact"
},
{
"start": 17.080000000000002,
"end": 22.52,
"text": " is about the change of name or rather change in acronym for the conference"
},
{
"start": 22.52,
"end": 28.48,
"text": " Neural Information Processing Systems, previously abbreviated NIPS, but now for"
},
{
"start": 28.48,
"end": 34,
"text": " the first year this conference has been hosted under the acronym NURIPS. The"
},
{
"start": 34,
"end": 39.2,
"text": " people here on the paper are not the organizers of the conference, they are"
},
{
"start": 39.2,
"end": 45.2,
"text": " advocates for the name change and the paper basically outlines their arguments"
},
{
"start": 45.2,
"end": 52.08,
"text": " and a bit of description of what happened. So they're also pretty big names"
},
{
"start": 52.08,
"end": 55.56,
"text": " in the community so it should be interesting to see what they have to say."
},
{
"start": 55.56,
"end": 61.96,
"text": " The paper is pretty short, it's three parts, three pages and we're going to go"
},
{
"start": 61.96,
"end": 71.48,
"text": " through it and yeah let's jump into it. So I have it over here. Alright so the"
},
{
"start": 71.48,
"end": 75.68,
"text": " first part of the paper basically describes, it's called What's all the Fuzz"
},
{
"start": 75.68,
"end": 81.36,
"text": " About? It basically describes why a name change was necessary in their"
},
{
"start": 81.36,
"end": 86.52,
"text": " perspective. So they say in machine learning like the rest of them"
},
{
"start": 86.52,
"end": 94.16,
"text": " suffers from severe gender imbalance, low retention rates for women and so on."
},
{
"start": 94.16,
"end": 100.76,
"text": " They also describe the MeToo movement, increased awareness of sexual harassment"
},
{
"start": 100.76,
"end": 106.16,
"text": " faced by many female researchers, pervasiveness of sexual harassment at"
},
{
"start": 106.16,
"end": 111.28,
"text": " computational conferences and they reference an article here. I want to kind"
},
{
"start": 111.28,
"end": 120.24000000000001,
"text": " of show you this article. It's this article here. So if you haven't seen this"
},
{
"start": 120.24000000000001,
"end": 126.28,
"text": " yet I encourage you to read it. It's pretty horrifying to read but it gives"
},
{
"start": 126.28,
"end": 130.84,
"text": " you an idea of what people talk about when they say sexual harassment is a"
},
{
"start": 130.84,
"end": 136.68,
"text": " problem, is pervasive at conferences and so on. So yeah just I don't want to go"
},
{
"start": 136.68,
"end": 143.96,
"text": " into this specifically. Just go ahead read it and you know see what people are"
},
{
"start": 143.96,
"end": 148.28,
"text": " talking about. I think it's important context to understand where people are"
},
{
"start": 148.28,
"end": 155.48000000000002,
"text": " coming from. So they go on to say however more subtle acts of gender"
},
{
"start": 155.48000000000002,
"end": 164.92000000000002,
"text": " harassment defined in this report. This includes like sexist hostility, crude"
},
{
"start": 164.92,
"end": 169.6,
"text": " behavior and so on have gotten less public attention. Nonetheless gender"
},
{
"start": 169.6,
"end": 173.88,
"text": " harassment is extremely pervasive, is direct contributor to the challenges"
},
{
"start": 173.88,
"end": 178,
"text": " faced by women in the STEM field. In this article we argue that NIPS, the former"
},
{
"start": 178,
"end": 182.04,
"text": " acronym of the Neuro-Information Processing Systems Conference, constituted"
},
{
"start": 182.04,
"end": 185.88,
"text": " gender harassment towards women. So that's what their arguments basically"
},
{
"start": 185.88,
"end": 194.2,
"text": " about. So the acronym led to basically had its part in gender harassment"
},
{
"start": 194.2,
"end": 199.6,
"text": " towards women. Basically led to an environment where women could not feel"
},
{
"start": 199.6,
"end": 209.95999999999998,
"text": " comfortable at this conference. So here's their description. In popular"
},
{
"start": 209.95999999999998,
"end": 216.35999999999999,
"text": " slang the word NIPS is an abbreviation for nipples. Furthermore it has"
},
{
"start": 216.35999999999999,
"end": 220.23999999999998,
"text": " historically been used as a racial slur targeting people of Japanese origin but"
},
{
"start": 220.24,
"end": 224.8,
"text": " we'll not go into this deeper because that's kind of a historic use of the"
},
{
"start": 224.8,
"end": 231.28,
"text": " word. The current use of the word in fact is the slang for nipples and so"
},
{
"start": 231.28,
"end": 236.8,
"text": " we'll focus on that. They say at first glance the fact that a major"
},
{
"start": 236.8,
"end": 241.28,
"text": " machine learning conference shared its name with this slang is an unfortunate"
},
{
"start": 241.28,
"end": 247.24,
"text": " but unimportant coincidence. And it really is a coincidence. I think the"
},
{
"start": 247.24,
"end": 252.68,
"text": " the conference name has been around for longer than the slang has been kind of"
},
{
"start": 252.68,
"end": 258.12,
"text": " popular. The slang word has been popular so it really is a coincidence. Many other"
},
{
"start": 258.12,
"end": 265.32,
"text": " conferences have same coincidences like Colt for example. Maybe actually that's"
},
{
"start": 265.32,
"end": 271.40000000000003,
"text": " even less a coincidence than here. They say in fact one might hope that"
},
{
"start": 271.40000000000003,
"end": 275.36,
"text": " members of the machine learning community are sufficiently mature that"
},
{
"start": 275.36,
"end": 279.32,
"text": " the conference's name is unimportant. That's basically what everyone"
},
{
"start": 279.32,
"end": 284.2,
"text": " would hope. Maybe people don't even notice and if they notice maybe"
},
{
"start": 284.2,
"end": 289.04,
"text": " they'll have like a two-second oh that's you know that's the other word haha but"
},
{
"start": 289.04,
"end": 294.96000000000004,
"text": " then we basically just go on with our lives and no one cares too much. So that"
},
{
"start": 294.96000000000004,
"end": 300,
"text": " that's kind of the ideal scenario and they acknowledge that here. It's"
},
{
"start": 300,
"end": 307.28,
"text": " really important that they say unfortunately this appears"
},
{
"start": 307.28,
"end": 312.12,
"text": " not to be the case. They detail a few examples here at the 2017 conference"
},
{
"start": 312.12,
"end": 316.08,
"text": " Elon Musk made inappropriate jokes about the acronym participants wore loot"
},
{
"start": 316.08,
"end": 322.16,
"text": " t-shirts. I think one said my nips are NP hard which is kind of a double"
},
{
"start": 322.16,
"end": 330.8,
"text": " computer science joke I guess. There was a pre-conference event named"
},
{
"start": 330.8,
"end": 338.24,
"text": " word I can't probably say out loud without getting some sort of strike."
},
{
"start": 338.24,
"end": 343.12,
"text": " You can clearly see that even though the kind of original name is coincidental"
},
{
"start": 343.12,
"end": 350,
"text": " and you know one would hope that people are like you know just putting it off be"
},
{
"start": 350,
"end": 354.36,
"text": " adult about it. There have been jokes, there have been you know t-shirts"
},
{
"start": 354.36,
"end": 360.32,
"text": " made and you know you can say the name collision is not like is"
},
{
"start": 360.32,
"end": 369.2,
"text": " unintended but I think this word here is very intended. So I think the"
},
{
"start": 369.2,
"end": 375.64,
"text": " main argument here or one of the main arguments is this really first of all"
},
{
"start": 375.64,
"end": 380.03999999999996,
"text": " creates an environment where certain people don't feel comfortable. It creates"
},
{
"start": 380.03999999999996,
"end": 384.88,
"text": " kind of a sexualized environment. Second of all and the more broader sense it's"
},
{
"start": 384.88,
"end": 391.03999999999996,
"text": " just unprofessional as a community especially since the kind of community"
},
{
"start": 391.03999999999996,
"end": 396.24,
"text": " is booming. We want to represent machine learning to the wider world. One can"
},
{
"start": 396.24,
"end": 403.8,
"text": " say okay it's you know it's just in professional that we kind of bring"
},
{
"start": 403.8,
"end": 409.40000000000003,
"text": " intertwine these things. It doesn't make a good impression. They say furthermore"
},
{
"start": 409.40000000000003,
"end": 412.96000000000004,
"text": " reminders of the unfortunate acronym are everywhere. Online searches for the"
},
{
"start": 412.96000000000004,
"end": 417.6,
"text": " acronym led to not safer work content. The hashtag NIPS is devoted to"
},
{
"start": 417.6,
"end": 423.76,
"text": " pornography. If you misspell the conference website you get to an adult"
},
{
"start": 423.76,
"end": 429.24,
"text": " site and I think this yeah this further goes into the argument that it's just an"
},
{
"start": 429.24,
"end": 433.16,
"text": " unprofessional appearance towards the outside. It's unfortunate the conference"
},
{
"start": 433.16,
"end": 437.92,
"text": " has been here longer but you know still there's a need to do something about it"
},
{
"start": 437.92,
"end": 442.64000000000004,
"text": " and I largely agree with these arguments that these are good arguments to make"
},
{
"start": 442.64000000000004,
"end": 450.48,
"text": " for a change of name. This paragraph down here it's a bit of a"
},
{
"start": 450.48,
"end": 456.96000000000004,
"text": " we'll go into that later. It's not very connected to the arguments made here so"
},
{
"start": 456.96000000000004,
"end": 461.08000000000004,
"text": " well it's more like connected to what's been happening so we'll go into that"
},
{
"start": 461.08,
"end": 466.24,
"text": " later. People have been circulating these arguments and calling for a name"
},
{
"start": 466.24,
"end": 471.88,
"text": " change for a while and then the the board of the conference the NIPS board"
},
{
"start": 471.88,
"end": 477.32,
"text": " made a survey surveying the attendance of the last five years conferences"
},
{
"start": 477.32,
"end": 485.26,
"text": " whether or not the conference should change its name. The next section"
},
{
"start": 485.26,
"end": 489.2,
"text": " is dedicated to how the survey turned out and what the response of the board"
},
{
"start": 489.2,
"end": 501.68,
"text": " was. So actually let's first go to the decision by the board."
},
{
"start": 501.68,
"end": 508.24,
"text": " So here is the press release. This is a press release after the survey results"
},
{
"start": 508.24,
"end": 517.36,
"text": " had been collected. So they said our survey was returned by about 2200"
},
{
"start": 517.36,
"end": 525.52,
"text": " people here and as I said have attended NIPS in the last five years. Of the male"
},
{
"start": 525.52,
"end": 529.36,
"text": " respondents about 28% are in favor of the conference name change of the female"
},
{
"start": 529.36,
"end": 535.24,
"text": " respondents about 44% are in favor of a name change. 40% prefer the existing"
},
{
"start": 535.24,
"end": 540.6800000000001,
"text": " name 16% expressed no preferences. In fact let's go look at the detailed"
},
{
"start": 540.6800000000001,
"end": 546.52,
"text": " results which they have down here. So you can see overall there is a big a big"
},
{
"start": 546.52,
"end": 552.52,
"text": " slant towards not agree. So negative 2 is strongly disagree with the name change"
},
{
"start": 552.52,
"end": 557.24,
"text": " while positive 2 is strongly agree. So you can see there's a big slant towards"
},
{
"start": 557.24,
"end": 564.8,
"text": " the not agree. If you split this by gender of respondents then you can see"
},
{
"start": 564.8,
"end": 571.16,
"text": " the basically the male distribution is that slant while the female"
},
{
"start": 571.16,
"end": 577.56,
"text": " distribution is a bit different as you can see here. The first thing it's"
},
{
"start": 577.56,
"end": 583.68,
"text": " mostly towards the extremes. So there are more people strongly saying"
},
{
"start": 583.68,
"end": 588.88,
"text": " something than non-strongly saying something to either side. And the"
},
{
"start": 588.88,
"end": 594.24,
"text": " second of all it seems very divided and very evenly divided. So in fact if you"
},
{
"start": 594.24,
"end": 599.52,
"text": " look at the numbers if you count the disagrees and agrees you'll find there's"
},
{
"start": 599.52,
"end": 606.28,
"text": " a slight majority in the agrees. There is a slight majority in the disagrees if"
},
{
"start": 606.28,
"end": 611,
"text": " you only consider the strongs. But ultimately these numbers are pretty"
},
{
"start": 611,
"end": 615,
"text": " close so that there's people on either side feeling strongly and"
},
{
"start": 615,
"end": 621.84,
"text": " there's about in this survey about as many on either side. So that's basically"
},
{
"start": 621.84,
"end": 630.24,
"text": " the outcome of this. Here I find very interesting some quotes from"
},
{
"start": 630.24,
"end": 634.76,
"text": " respondents. So you had the opportunity to put quotes to put like a"
},
{
"start": 634.76,
"end": 639.84,
"text": " comment and these are quoted from these comments. So they say for example this"
},
{
"start": 639.84,
"end": 643.44,
"text": " thanks for considering a name change. I'm not personally bothered by the current"
},
{
"start": 643.44,
"end": 649.0400000000001,
"text": " name but I think the gesture will send a much-needed inclusive vibe in the right"
},
{
"start": 649.04,
"end": 657.04,
"text": " direction. One person says if you were up to me I'd call off this nice"
},
{
"start": 657.04,
"end": 663.4399999999999,
"text": " but symbolic gesture. Use whatever time money and energy to make actual changes."
},
{
"start": 663.4399999999999,
"end": 668.1999999999999,
"text": " Then someone says please please please change the name it is sexist and racist"
},
{
"start": 668.1999999999999,
"end": 672.4399999999999,
"text": " slur. I'm embarrassed every time I have to say the name of the conference."
},
{
"start": 672.4399999999999,
"end": 678.24,
"text": " This feeds into the unprofessionalism argument. The next one I find very"
},
{
"start": 678.24,
"end": 682.12,
"text": " interesting. It says as a woman I find it offensive that the board is seriously"
},
{
"start": 682.12,
"end": 685.88,
"text": " considering changing the name of the meeting because of an adolescent"
},
{
"start": 685.88,
"end": 689.8,
"text": " reference to a woman's body. From my point of view it shows that the board"
},
{
"start": 689.8,
"end": 693.8,
"text": " does not see me as an equal member of the community but as a woman first and"
},
{
"start": 693.8,
"end": 699.64,
"text": " a scientist second. This is extremely interesting. So this is one of the"
},
{
"start": 699.64,
"end": 707.12,
"text": " people who was a female respondent and said strongly disagree with the name"
},
{
"start": 707.12,
"end": 714.2,
"text": " change or disagree with the name change. I mean I can guess. So we've only"
},
{
"start": 714.2,
"end": 720.6,
"text": " heard so far that the name or the acronym is offensive to women but here"
},
{
"start": 720.6,
"end": 725.6,
"text": " we have a woman saying that the consideration to change the acronym is"
},
{
"start": 725.6,
"end": 731.6800000000001,
"text": " actually offensive to her. That's very special and"
},
{
"start": 731.68,
"end": 738.64,
"text": " understandable. I can understand why that happens. I can"
},
{
"start": 738.64,
"end": 745.2399999999999,
"text": " understand the argument made here. This woman feels like okay it shows"
},
{
"start": 745.2399999999999,
"end": 751.92,
"text": " me that basically my gender is important and not really my being scientist."
},
{
"start": 751.92,
"end": 758.16,
"text": " It's an argument. The next one goes into the same direction. It says I'm"
},
{
"start": 758.16,
"end": 762.0799999999999,
"text": " a woman. I've experienced being harassed by male academics and I would like this"
},
{
"start": 762.0799999999999,
"end": 766.56,
"text": " problem to be discussed and addressed but not in this frankly almost offensive"
},
{
"start": 766.56,
"end": 772.4,
"text": " way. Another person saying basically that's changing the name is"
},
{
"start": 772.4,
"end": 779.52,
"text": " almost offensive and it's not the right way to go to achieve"
},
{
"start": 779.52,
"end": 784.64,
"text": " these results. There's another one saying I'm in favor of the name change but this"
},
{
"start": 784.64,
"end": 790.24,
"text": " is cosmetic. So you have basically people coming from all angles"
},
{
"start": 790.24,
"end": 795.84,
"text": " giving their opinions and you can clearly see why there is especially in"
},
{
"start": 795.84,
"end": 805.3199999999999,
"text": " the female respondent group why there is a divide. So the board"
},
{
"start": 805.3199999999999,
"end": 814.16,
"text": " overall said the following. The board overall said the following."
},
{
"start": 814.16,
"end": 821.12,
"text": " After extensive discussions the NIPS board has decided not to change the"
},
{
"start": 821.12,
"end": 826.28,
"text": " name of the conference for now. The poll itself did not yield a clear consensus"
},
{
"start": 826.28,
"end": 832.1999999999999,
"text": " on a name change or a well-regarded alternative name. Further they state"
},
{
"start": 832.1999999999999,
"end": 836.64,
"text": " instead we ask the community support in implementing concrete steps to improve"
},
{
"start": 836.64,
"end": 841.9599999999999,
"text": " the inclusiveness of the conference. So these are described down here. They have"
},
{
"start": 841.96,
"end": 846.24,
"text": " a number of changes to make the conference basically more inclusive. So"
},
{
"start": 846.24,
"end": 855.72,
"text": " they basically said okay so the name change survey was"
},
{
"start": 855.72,
"end": 862.9200000000001,
"text": " inconclusive and they clearly say whatever we do here regardless of which"
},
{
"start": 862.9200000000001,
"end": 866.4000000000001,
"text": " decision we take we're failing to accommodate the opinions about half the"
},
{
"start": 866.4000000000001,
"end": 870.4000000000001,
"text": " women in the community. Which is true this is clearly what you can see from"
},
{
"start": 870.4,
"end": 874.68,
"text": " the results from the quotes. So basically what they say is we'll not"
},
{
"start": 874.68,
"end": 880.3199999999999,
"text": " change the conference name for now. We'll implement these steps because what they"
},
{
"start": 880.3199999999999,
"end": 885.4399999999999,
"text": " I can guess what they felt was okay even the people against the name change were"
},
{
"start": 885.4399999999999,
"end": 890.0799999999999,
"text": " in support of making the conference more inclusive. So they basically say okay we"
},
{
"start": 890.0799999999999,
"end": 894.3199999999999,
"text": " do these things we strengthen their code of conduct. We have two inclusion"
},
{
"start": 894.3199999999999,
"end": 900.12,
"text": " diversity chairs. We have an inclusion town hall. We have childcare support."
},
{
"start": 900.12,
"end": 905.4,
"text": " Gender-inclusive restrooms and so on and so on. Mentoring breakfasts for women and"
},
{
"start": 905.4,
"end": 911.5600000000001,
"text": " other minorities. So they take these steps concretely. They say this is what"
},
{
"start": 911.5600000000001,
"end": 918.08,
"text": " we do and even further if you look at their page on diversity and inclusion"
},
{
"start": 918.08,
"end": 926.04,
"text": " which I have here. They say here on the top in addition to hosting diversity"
},
{
"start": 926.04,
"end": 929.8,
"text": " related event the conference also making consider structural changes include a"
},
{
"start": 929.8,
"end": 934.4,
"text": " new code of conduct we've already seen and in-depth discussion of the potential"
},
{
"start": 934.4,
"end": 942.7199999999999,
"text": " of changing the name of the conference. So in total what they're saying is we've"
},
{
"start": 942.7199999999999,
"end": 949.24,
"text": " done this poll. It came back inconclusive which you've I think"
},
{
"start": 949.24,
"end": 953.88,
"text": " has been clearly demonstrated. We'll not change the name of the"
},
{
"start": 953.88,
"end": 959.3599999999999,
"text": " conference for now and we'll do all of these other things"
},
{
"start": 959.36,
"end": 965.08,
"text": " right down there and at the conference we'll hold a meeting and discuss the"
},
{
"start": 965.08,
"end": 969.36,
"text": " name change so we could maybe potentially change it in upcoming years."
},
{
"start": 969.36,
"end": 974.76,
"text": " I think this is a really sensible decision by the board. I mean given"
},
{
"start": 974.76,
"end": 979.8000000000001,
"text": " this data given all of that this is probably the most sensible decision."
},
{
"start": 979.8000000000001,
"end": 985.2,
"text": " Let's take concrete steps. The name change seems to be you know debatable so"
},
{
"start": 985.2,
"end": 991.6,
"text": " let's actually debate it at the conference with the actual community."
},
{
"start": 991.6,
"end": 997.5600000000001,
"text": " That was the basically result of the poll. Let's now go back to what the paper"
},
{
"start": 997.5600000000001,
"end": 1003.6800000000001,
"text": " has to say about this. Here's the paper again and they say in order to collect"
},
{
"start": 1003.6800000000001,
"end": 1007.2800000000001,
"text": " data about the machine learning community's feelings about the"
},
{
"start": 1007.2800000000001,
"end": 1011.2800000000001,
"text": " conference name the conference board sent out a survey to people who have"
},
{
"start": 1011.28,
"end": 1018.3199999999999,
"text": " attended the conference during the past five years. However serving"
},
{
"start": 1018.3199999999999,
"end": 1023.24,
"text": " conference attendees results in a very biased sample of a much larger community"
},
{
"start": 1023.24,
"end": 1027.44,
"text": " of potential machine learning researchers. Bias arises due to the fact"
},
{
"start": 1027.44,
"end": 1031.04,
"text": " that some people who are made uncomfortable by the name or by other"
},
{
"start": 1031.04,
"end": 1037,
"text": " aspects of the machine learning culture may have decided not to enter or to"
},
{
"start": 1037,
"end": 1040.56,
"text": " remain in the or not to remain in the field have chosen not to attend the"
},
{
"start": 1040.56,
"end": 1045.84,
"text": " conference. So basically you're saying well if you only ask this one group of"
},
{
"start": 1045.84,
"end": 1050.52,
"text": " people right then this other group of people you know doesn't have a chance"
},
{
"start": 1050.52,
"end": 1055.56,
"text": " to make their voice heard and there is basically bias because in this other"
},
{
"start": 1055.56,
"end": 1061.36,
"text": " group of people the people who have not attended the conference they would would"
},
{
"start": 1061.36,
"end": 1065.04,
"text": " have a severely different opinion from the people who have attended the"
},
{
"start": 1065.04,
"end": 1070.44,
"text": " conference. So first of all I think this can be a valid point here of course all"
},
{
"start": 1070.44,
"end": 1075.52,
"text": " the ways if you ask one group of people and exclude another one you there's"
},
{
"start": 1075.52,
"end": 1083.76,
"text": " there's if the if the group you ask and the target group which here it's really"
},
{
"start": 1083.76,
"end": 1087.72,
"text": " unclear what it is I guess it's the machine learning community considering"
},
{
"start": 1087.72,
"end": 1095.92,
"text": " going to the conference if those don't overlap then you you will introduce some"
},
{
"start": 1095.92,
"end": 1100.5600000000002,
"text": " sort of bias and they say okay bias could come from the fact you know some"
},
{
"start": 1100.5600000000002,
"end": 1106.2,
"text": " people who actually are affected by these problems of which this name is one"
},
{
"start": 1106.2,
"end": 1110.28,
"text": " they may have you know not attended the conference because they may have left"
},
{
"start": 1110.28,
"end": 1114.24,
"text": " the field because the the gender harassment is so pervasive and they just"
},
{
"start": 1114.24,
"end": 1119.6000000000001,
"text": " didn't didn't stay and so on. So I think this can be a good point but the problem"
},
{
"start": 1119.6000000000001,
"end": 1125.64,
"text": " I have with it here is that it's simply stated without anything it's simply said"
},
{
"start": 1125.64,
"end": 1133.0800000000002,
"text": " okay there is bias, bias arises and my question would be how much is that bias"
},
{
"start": 1133.0800000000002,
"end": 1141.48,
"text": " of any data like any data on this you can't just criticize these that the"
},
{
"start": 1141.48,
"end": 1147.48,
"text": " survey for being biased and and then not provide actual data like how many people"
},
{
"start": 1147.48,
"end": 1152.8000000000002,
"text": " are there who are made uncomfortable by the name or have left the field in who"
},
{
"start": 1152.8,
"end": 1158.8799999999999,
"text": " have left the field because of these things and is it really viable to to"
},
{
"start": 1158.8799999999999,
"end": 1163.84,
"text": " count them in I guess okay we can argue it is but how would they have responded"
},
{
"start": 1163.84,
"end": 1170.8,
"text": " to this we've clearly seen that a lot of affected people that even have"
},
{
"start": 1170.8,
"end": 1177.44,
"text": " experienced harassment are not in favor of the name change so in this case I"
},
{
"start": 1177.44,
"end": 1188.44,
"text": " would really like to see some data on how much this bias is right and I cannot"
},
{
"start": 1188.44,
"end": 1196.48,
"text": " also say it's not it's not that bad of a decision to what the board did to send"
},
{
"start": 1196.48,
"end": 1200.56,
"text": " the survey to the last five years attendees I think is a very sensible"
},
{
"start": 1200.56,
"end": 1206,
"text": " choice if you want to gather the community's feelings towards these kind"
},
{
"start": 1206,
"end": 1211.48,
"text": " of things I mean you you can't just ask the entire world because the entire"
},
{
"start": 1211.48,
"end": 1217.76,
"text": " world is not the machine learning community so I think the this is a very"
},
{
"start": 1217.76,
"end": 1223.68,
"text": " sensible decision to ask last five years attendees and if you have real evidence"
},
{
"start": 1223.68,
"end": 1230.2,
"text": " that this causes a notifiable like a significant bias then we could"
},
{
"start": 1230.2,
"end": 1237.0800000000002,
"text": " potentially correct for that bias but without any data on that I think the the"
},
{
"start": 1237.0800000000002,
"end": 1244.92,
"text": " asking last five years participants was completely reasonable and one of I don't"
},
{
"start": 1244.92,
"end": 1251.48,
"text": " really see how you can do a much better job without much much more manual work"
},
{
"start": 1251.48,
"end": 1257.72,
"text": " and I want to make this point a bit clearer on how hard it actually is to do"
},
{
"start": 1257.72,
"end": 1266.4,
"text": " that by pointing to the response to this so here is a tweet thread by one of the"
},
{
"start": 1266.4,
"end": 1271.1200000000001,
"text": " authors of this paper after the conference decision came out she"
},
{
"start": 1271.1200000000001,
"end": 1277.08,
"text": " basically tweeted out this protest nips I am starting this new hashtag please"
},
{
"start": 1277.08,
"end": 1281.6000000000001,
"text": " retweet if you're in support of the next conference changing its name so"
},
{
"start": 1281.6000000000001,
"end": 1287.08,
"text": " basically kind of launching a a Twitter campaign a Twitter hashtag under this to"
},
{
"start": 1287.08,
"end": 1291.3999999999999,
"text": " come you know get into a conversation with people about this people could"
},
{
"start": 1291.3999999999999,
"end": 1301.72,
"text": " express their support she also that was a misclick she also here made a change"
},
{
"start": 1301.72,
"end": 1309.6399999999999,
"text": " dot org petition to change the name so a petition basically petition is here the"
},
{
"start": 1309.6399999999999,
"end": 1316.72,
"text": " text of the petition basically says something similar to the to the what"
},
{
"start": 1316.72,
"end": 1326.68,
"text": " we've already seen including there is a the criticism of the survey and as you"
},
{
"start": 1326.68,
"end": 1337.04,
"text": " can see here about 2,000 people have signed it so I mean a Twitter hashtag is"
},
{
"start": 1337.04,
"end": 1341.64,
"text": " all good you know you can do that a petition is all good you can do that but"
},
{
"start": 1341.64,
"end": 1346.64,
"text": " it's a bit ironic because a change that org petition literally anyone can"
},
{
"start": 1346.64,
"end": 1352,
"text": " sign this and in addition to that there's only one option you can only say"
},
{
"start": 1352,
"end": 1360.2,
"text": " yes you can't even say no right so and even more who's gonna see the change"
},
{
"start": 1360.2,
"end": 1364.4,
"text": " that org petition it's gonna be the social media followers of these people"
},
{
"start": 1364.4,
"end": 1370.24,
"text": " right so basically you have now a you have it now what's basically a survey of"
},
{
"start": 1370.24,
"end": 1375.92,
"text": " the social media network of people in favor of changing the name where there's"
},
{
"start": 1375.92,
"end": 1383.92,
"text": " only one option to respond I I find it and so I've gone through here the people"
},
{
"start": 1383.92,
"end": 1388.72,
"text": " who actually publicly associate their name give a reason for signing a lot of"
},
{
"start": 1388.72,
"end": 1394.6000000000001,
"text": " these they you know they give some argument why they've signed the petition"
},
{
"start": 1394.6000000000001,
"end": 1400.04,
"text": " but I've tried searching these people for any sort of academic track record and"
},
{
"start": 1400.04,
"end": 1405.8799999999999,
"text": " in my sample I've come up with between 10 and 20 percent of people who somehow"
},
{
"start": 1405.8799999999999,
"end": 1418.12,
"text": " have an academic track record so this is I mean certainly a valid thing to make"
},
{
"start": 1418.12,
"end": 1424.1599999999999,
"text": " your voice heard and to show your numbers and but I mean look at this there's a"
},
{
"start": 1424.16,
"end": 1435.64,
"text": " bot signing twice hello Jack Nelson and Richard Chi very nice but so basically"
},
{
"start": 1435.64,
"end": 1441.88,
"text": " I'm not here to criticize petitions but what I want to say is you can't like"
},
{
"start": 1441.88,
"end": 1450.48,
"text": " criticize this this poll so hard for being biased and then launching basically"
},
{
"start": 1450.48,
"end": 1456.56,
"text": " an own poll that's even more biased and even more non-representative of the"
},
{
"start": 1456.56,
"end": 1463.6,
"text": " community to me that's that's kind of ironic and just goes to show how hard"
},
{
"start": 1463.6,
"end": 1468.52,
"text": " this is and my argument would be it's actually not that unsensible of a"
},
{
"start": 1468.52,
"end": 1473.32,
"text": " decision of the board the way they did it and if you have again if you have"
},
{
"start": 1473.32,
"end": 1479.52,
"text": " data to actually quantify the bias here then it's viable to go and correct for"
},
{
"start": 1479.52,
"end": 1486.92,
"text": " that all right so to they go on to analyze the survey results conference"
},
{
"start": 1486.92,
"end": 1492.48,
"text": " board simply noted that of the 294 women surveyed the number who strongly"
},
{
"start": 1492.48,
"end": 1498.48,
"text": " support or support the name change is comparable to the number of women who"
},
{
"start": 1498.48,
"end": 1503.84,
"text": " are strongly opposed or opposed however this analysis implicitly assumes that"
},
{
"start": 1503.84,
"end": 1508.28,
"text": " one person's feeling of discomfort or marginalization as a result of the name"
},
{
"start": 1508.28,
"end": 1513.24,
"text": " should be given the same weight as another person's preference for the"
},
{
"start": 1513.24,
"end": 1519.92,
"text": " status quo this amounts to giving the same way to false positives and false"
},
{
"start": 1519.92,
"end": 1524.6399999999999,
"text": " negatives of course we learn in an introductory statistics course that"
},
{
"start": 1524.6399999999999,
"end": 1529.28,
"text": " false positives and false negatives should be assigned weights dependent on"
},
{
"start": 1529.28,
"end": 1534.2,
"text": " context in this context we feel that a much greater weight should be given to"
},
{
"start": 1534.2,
"end": 1540.44,
"text": " the views of a person who feels marginalized as a result of the name so"
},
{
"start": 1540.44,
"end": 1546.88,
"text": " up here I find this a bit strange they say this amounts to giving the same way"
},
{
"start": 1546.88,
"end": 1554.8,
"text": " to false positives and false negatives to me the false is here a bit confusing"
},
{
"start": 1554.8,
"end": 1559.32,
"text": " because it seems to me it's it's simply giving the same weight to negatives and"
},
{
"start": 1559.32,
"end": 1565.04,
"text": " positives there's I don't think there's a need to dress this up in statistical"
},
{
"start": 1565.04,
"end": 1570.54,
"text": " lingo here it simply we give the same weight to people who responded"
},
{
"start": 1570.54,
"end": 1576.08,
"text": " positively and to people who responded negatively I think that's that's it"
},
{
"start": 1576.08,
"end": 1583.8,
"text": " there's no false of course we learn in a truck see this is class that false"
},
{
"start": 1583.8,
"end": 1587.12,
"text": " positives and false negatives should be assigned weights dependent on context in"
},
{
"start": 1587.12,
"end": 1590.7199999999998,
"text": " this context we feel that a much greater weight should be given to the views of"
},
{
"start": 1590.7199999999998,
"end": 1596.1599999999999,
"text": " person who feels marginalized as a result of the name I would I would say"
},
{
"start": 1596.1599999999999,
"end": 1601.1999999999998,
"text": " to this it's the problem for me it's these are this is one of the things that"
},
{
"start": 1601.1999999999998,
"end": 1605.28,
"text": " where you at you read it first and you say like oh yeah this makes sense but"
},
{
"start": 1605.28,
"end": 1611.4399999999998,
"text": " first of all it's framed extremely one-sided it's framed as all the people"
},
{
"start": 1611.4399999999998,
"end": 1616.36,
"text": " who are for the name change like they they feel discomforted they feel"
},
{
"start": 1616.36,
"end": 1622.28,
"text": " marginalized and the people who are against the name change they simply and"
},
{
"start": 1622.28,
"end": 1629.1599999999999,
"text": " here specifically they they they talk about the women group so in argument"
},
{
"start": 1629.1599999999999,
"end": 1634.9599999999998,
"text": " they're all affected the people against it simply prefer the status quo but"
},
{
"start": 1634.9599999999998,
"end": 1641.04,
"text": " we've clearly seen in the in the in the press release and we'll go over to that"
},
{
"start": 1641.04,
"end": 1649.6,
"text": " now these quotes here we've clearly seen that the the offense and the"
},
{
"start": 1649.6,
"end": 1655.08,
"text": " marginalization happens on both sides so here this as a woman I find it"
},
{
"start": 1655.08,
"end": 1660.48,
"text": " offensive that the board is considering changing the name it shows that the"
},
{
"start": 1660.48,
"end": 1664.48,
"text": " board does not see me as an equal member of the community but as a woman first"
},
{
"start": 1664.48,
"end": 1669.24,
"text": " and the scientists second I mean this is almost a textbook definition of"
},
{
"start": 1669.24,
"end": 1675.08,
"text": " marginalization and this is clearly happening on the other side as well so I"
},
{
"start": 1675.08,
"end": 1682.04,
"text": " think the framing here is extremely dishonest and one-sided and there is"
},
{
"start": 1682.04,
"end": 1687.92,
"text": " given basically the the side that we just seen in this quote is given"
},
{
"start": 1687.92,
"end": 1693.36,
"text": " absolutely no not even a mention that it exists it's simply framed as this side"
},
{
"start": 1693.36,
"end": 1698.24,
"text": " is marginalized and oppressed and discomforted and the other side simply"
},
{
"start": 1698.24,
"end": 1704.32,
"text": " prefers the status quo but we've clearly seen that yeah it's almost a this fits"
},
{
"start": 1704.32,
"end": 1711.08,
"text": " exactly this definition it's just one person's feeling or discomfort or"
},
{
"start": 1711.08,
"end": 1718.56,
"text": " marginalization as a result of the name it's just as a result of the name change"
},
{
"start": 1719.32,
"end": 1725.1200000000001,
"text": " second of all I think the the bigger problem and this goes into the statement"
},
{
"start": 1725.12,
"end": 1730.84,
"text": " down here to state this last point more explicitly an issue adversely affecting"
},
{
"start": 1730.84,
"end": 1736.52,
"text": " the minority of participants should not be decided by a majority vote again"
},
{
"start": 1736.52,
"end": 1742.2399999999998,
"text": " something at first you say oh yeah that makes sense but if you think about it"
},
{
"start": 1742.2399999999998,
"end": 1749.3999999999999,
"text": " this is a really really outrageous statement and the reason is it's it's"
},
{
"start": 1749.4,
"end": 1758.3200000000002,
"text": " it's outrageous is if the mud if it's not majority vote if it's not one person"
},
{
"start": 1758.3200000000002,
"end": 1765.8400000000001,
"text": " one vote then someone has to decide who gets to vote and who doesn't and more so"
},
{
"start": 1765.8400000000001,
"end": 1771.24,
"text": " specifically here someone basically needs to decide who should be given what"
},
{
"start": 1771.24,
"end": 1777.4,
"text": " weight in the vote right you need someone to decide this and here you can"
},
{
"start": 1777.4,
"end": 1781.8000000000002,
"text": " say well it's easy it's just the the women right because they're affected I"
},
{
"start": 1781.8000000000002,
"end": 1788.16,
"text": " this but they go further they say well it's the women who feel discomforted"
},
{
"start": 1788.16,
"end": 1792.0800000000002,
"text": " and marginalized who should be given more weight than the ones who simply"
},
{
"start": 1792.0800000000002,
"end": 1796.24,
"text": " prefer the status quo but then you have to have someone assessing whether someone"
},
{
"start": 1796.24,
"end": 1800.52,
"text": " is really marginalized and discomforted or simply prefers the status quo and"
},
{
"start": 1800.52,
"end": 1808.36,
"text": " it's not like an environment where there is kind of a sexist undertone isn't"
},
{
"start": 1808.36,
"end": 1816.48,
"text": " also discomforting or can't also be discomforting to men to men of any sort"
},
{
"start": 1816.48,
"end": 1827.76,
"text": " or people of of any sort of gender it's just not clear that the fact that people"
},
{
"start": 1827.76,
"end": 1833.04,
"text": " should be given different weight in in crafting an opinion I mean this this can"
},
{
"start": 1833.04,
"end": 1839.16,
"text": " be true if you have like some clear area of expertise but in this case it's"
},
{
"start": 1839.16,
"end": 1845.12,
"text": " really unclear and the fact is if it's not majority vote you need someone"
},
{
"start": 1845.12,
"end": 1851.58,
"text": " deciding the weight and the someone deciding the weights automatically"
},
{
"start": 1851.58,
"end": 1857.16,
"text": " decides on the outcome of the vote and then why do you need a vote in the first"
},
{
"start": 1857.16,
"end": 1864.68,
"text": " place basically up here they say yeah we feel the great weights should be aligned"
},
{
"start": 1864.68,
"end": 1869.6000000000001,
"text": " like this and down here there is no more we feel it's be an issue at worst"
},
{
"start": 1869.6000000000001,
"end": 1873.3600000000001,
"text": " affecting the minority of participants should not be decided by majority vote"
},
{
"start": 1873.3600000000001,
"end": 1878.96,
"text": " they're basically calling for a dictatorship in this case and I'm gonna"
},
{
"start": 1878.96,
"end": 1885.0400000000002,
"text": " guess like everyone has the opinion the dictatorship would be an awesome idea if"
},
{
"start": 1885.04,
"end": 1891.76,
"text": " the dictator were me right that's that's what everyone thinks of course and that's"
},
{
"start": 1891.76,
"end": 1897.08,
"text": " basically the argument made here but it's not it's not true and there's some"
},
{
"start": 1897.08,
"end": 1904.96,
"text": " really really disturbing implicit things in here and maybe I want to quickly go"
},
{
"start": 1904.96,
"end": 1912.8799999999999,
"text": " over how I think a democratic decision works so imagine you have a person and"
},
{
"start": 1912.88,
"end": 1918.4,
"text": " the person has decision to make for or against in this case the name change"
},
{
"start": 1918.4,
"end": 1926.0800000000002,
"text": " right and the person must decide on one of these two things on a let's say on a"
},
{
"start": 1926.0800000000002,
"end": 1933.2,
"text": " continuous scale but it doesn't matter what what this what this stuff up here"
},
{
"start": 1933.2,
"end": 1938.5600000000002,
"text": " basically implicitly assumes is that the person looks at themselves and they"
},
{
"start": 1938.56,
"end": 1945.32,
"text": " think well am I personally discomforted or marginalized by the name or the"
},
{
"start": 1945.32,
"end": 1950,
"text": " climate it creates no then I'm obviously against the name change because it"
},
{
"start": 1950,
"end": 1956.76,
"text": " doesn't help me or another person go am I personally affected yes well I feel"
},
{
"start": 1956.76,
"end": 1963.58,
"text": " discomforted or marginalized well then I'm obviously for a name change so the"
},
{
"start": 1963.58,
"end": 1969.36,
"text": " basic assumption here is that people simply vote purely their own egotistical"
},
{
"start": 1969.36,
"end": 1974.6399999999999,
"text": " interests and that's that's it so basically if you're in one of these"
},
{
"start": 1974.6399999999999,
"end": 1979.32,
"text": " minorities then you'll vote for the name change because it affects you which"
},
{
"start": 1979.32,
"end": 1985,
"text": " we've already seen is not it's not a given that people vote that way and if"
},
{
"start": 1985,
"end": 1989.24,
"text": " you're not in this then you know you you'd vote against but you're not"
},
{
"start": 1989.24,
"end": 1993.52,
"text": " affected so your vote shouldn't count it's completely untrue what people do"
},
{
"start": 1993.52,
"end": 1998.52,
"text": " especially smart people and I believe the machine learning community consists"
},
{
"start": 1998.52,
"end": 2005.68,
"text": " largely of these what they do is they'll make a list of arguments argument one"
},
{
"start": 2005.68,
"end": 2011.92,
"text": " argument two argument three argument for everyone has the same arguments"
},
{
"start": 2011.92,
"end": 2015.28,
"text": " everyone's hurt the same arguments if not then maybe there's some work to do"
},
{
"start": 2015.28,
"end": 2021.72,
"text": " in actually getting arguments to people but that's not the same as weighing the"
},
{
"start": 2021.72,
"end": 2026.64,
"text": " people differently you get the arguments to the people and then you weigh each of"
},
{
"start": 2026.64,
"end": 2032,
"text": " them equally why because what every person does is they say okay argument"
},
{
"start": 2032,
"end": 2037.3600000000001,
"text": " one is maybe it's unprofessional right name is unprofessional alright how"
},
{
"start": 2037.3600000000001,
"end": 2042.08,
"text": " important is that to me give it a weight weight one cool that's really important"
},
{
"start": 2042.08,
"end": 2048.36,
"text": " to me I'll give it a big weight argument two some people feel really"
},
{
"start": 2048.36,
"end": 2052.84,
"text": " discomfort like discomforted if you're marginalized by the name creates a bad"
},
{
"start": 2052.84,
"end": 2057.44,
"text": " environment for them how much weight am I gonna give to that right so people can"
},
{
"start": 2057.44,
"end": 2062.08,
"text": " actually consider other people's feelings and other people's problems and"
},
{
"start": 2062.08,
"end": 2068.08,
"text": " decide on what's the best also for them in their own mind so they give it a weight"
},
{
"start": 2068.08,
"end": 2074.08,
"text": " two and then there's maybe two arguments against some given these weight three"
},
{
"start": 2074.08,
"end": 2082.04,
"text": " weight four at the end what you have is you have argument I you will sum it up"
},
{
"start": 2082.04,
"end": 2092.16,
"text": " by the weights W I J you will sum it up over all people so basically now and this"
},
{
"start": 2092.16,
"end": 2096.7999999999997,
"text": " will give you like a final number a which is either positive or negative if"
},
{
"start": 2096.7999999999997,
"end": 2100.2999999999997,
"text": " it's positive you do the name change if it's negative you don't do the name"
},
{
"start": 2100.3,
"end": 2106.5600000000004,
"text": " change if you do this over all people what you've basically done is you have"
},
{
"start": 2106.5600000000004,
"end": 2113.84,
"text": " just determined these weightings here by a democratic process you've crowd sourced"
},
{
"start": 2113.84,
"end": 2121.52,
"text": " the weighting this is exactly what these people say up here right we feel we feel"
},
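What the speaker sketches in the segments above is a simple weighted tally: each argument i carries a sign a_i (positive for the name change, negative against it), every person j assigns it their own weight w_ij, and the final score is A = sum over j and i of w_ij * a_i, with the change adopted if A is positive. Below is a minimal Python sketch of this crowd-sourced weighting; the argument names and all numbers are hypothetical and purely for illustration, they do not come from the video or the survey.

# Minimal sketch of the crowd-sourced argument weighting described above.
# Each argument has a sign: +1 if it speaks for the name change, -1 against.
# Every person assigns their own importance weight to every argument.
# All names and numbers below are made up for illustration only.

arguments = {
    "unprofessional_acronym":  +1,   # pro name change
    "marginalizing_climate":   +1,   # pro name change
    "change_is_only_symbolic": -1,   # against name change
    "causes_confusion":        -1,   # against name change
}

people = [  # one weight vector per voter
    {"unprofessional_acronym": 1.0, "marginalizing_climate": 2.0,
     "change_is_only_symbolic": 0.5, "causes_confusion": 0.5},
    {"unprofessional_acronym": 0.5, "marginalizing_climate": 0.5,
     "change_is_only_symbolic": 2.0, "causes_confusion": 0.5},
]

# A = sum over people j and arguments i of w_ij * a_i
A = sum(weights[name] * sign
        for weights in people
        for name, sign in arguments.items())

print("aggregate score A =", A)   # here: (1+2-0.5-0.5) + (0.5+0.5-2-0.5) = 0.5
print("change the name" if A > 0 else "keep the name")

In this sketch the "weights dependent on context" are exactly the per-person numbers, and they are aggregated by a one-person-one-vote sum rather than being fixed by any single authority, which is the point the speaker is making here.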
{
"start": 2121.52,
"end": 2127.2000000000003,
"text": " that you're not false false positives false we feel that positives and"
},
{
"start": 2127.2,
"end": 2133.16,
"text": " negatives should be assigned weights dependent on context so the positive and"
},
{
"start": 2133.16,
"end": 2138.2,
"text": " negative arguments in this case are assigned weights dependent on context"
},
{
"start": 2138.2,
"end": 2144.3199999999997,
"text": " but the weights are crowd sourced to the community right and each person this who"
},
{
"start": 2144.3199999999997,
"end": 2149.52,
"text": " participates in that each person who participates is one more brain power in"
},
{
"start": 2149.52,
"end": 2156.3599999999997,
"text": " a complicated decision that no one basically no one has the authority just"
},
{
"start": 2156.36,
"end": 2159.88,
"text": " to just decide for themselves so these people are calling for different"
},
{
"start": 2159.88,
"end": 2165.2000000000003,
"text": " weighting this is the way to do it the democratic majority vote is the exact"
},
{
"start": 2165.2000000000003,
"end": 2170,
"text": " way to determine these weights what these people basically are no no no no"
},
{
"start": 2170,
"end": 2179.6,
"text": " no we should determine the weights we who know I'm a bit corny here but this is"
},
{
"start": 2179.6,
"end": 2182.88,
"text": " basically it's still it's two alternatives either you do democratic"
},
{
"start": 2182.88,
"end": 2190.48,
"text": " process one person one brain one vote and that will give you a crowd sourced"
},
{
"start": 2190.48,
"end": 2195.4,
"text": " crowd sourced true weighting of the arguments what the community feels or"
},
{
"start": 2195.4,
"end": 2203.56,
"text": " someone needs to decide some one needs to side by force basically and that's a"
},
{
"start": 2203.56,
"end": 2211.6,
"text": " dictatorship so these are the choices you have and clearly now you can maybe"
},
{
"start": 2211.6,
"end": 2216.2799999999997,
"text": " understand why I say this is an outrageous statement because to me the"
},
{
"start": 2216.2799999999997,
"end": 2223.44,
"text": " dictatorship option is not an option note that I'm not saying that democracy"
},
{
"start": 2223.44,
"end": 2230.2799999999997,
"text": " can never be wrong or the majority can never be wrong but in fact it's the best"
},
{
"start": 2230.2799999999997,
"end": 2237.16,
"text": " system there is can be wrong but anything else will undoubtedly go more"
},
{
"start": 2237.16,
"end": 2245.68,
"text": " wrong so that's my point here alright so that was a maybe a bit ranty but let's"
},
{
"start": 2245.68,
"end": 2255.3199999999997,
"text": " go on a false choice and a minimization of a real issue so they go on to say"
},
{
"start": 2255.3199999999997,
"end": 2260.48,
"text": " what they think of the decision that the board made in response to this so up was"
},
{
"start": 2260.48,
"end": 2265.52,
"text": " how they analyzed the poll and now it's the decision in announcing their"
},
{
"start": 2265.52,
"end": 2268.72,
"text": " decision not to change the conference name conference board expressed"
},
{
"start": 2268.72,
"end": 2272.08,
"text": " commitment to implement concrete steps to improve the inclusiveness of the"
},
{
"start": 2272.08,
"end": 2276.4,
"text": " conference and they list them here and they say we sincerely applaud the"
},
{
"start": 2276.4,
"end": 2284.44,
"text": " conference board for these efforts okay I yeah I think the community feels like"
},
{
"start": 2284.44,
"end": 2289.88,
"text": " that as well however the wording of the decision implied the need to choose"
},
{
"start": 2289.88,
"end": 2295.44,
"text": " between changing the name of the conference and taking concrete steps to"
},
{
"start": 2295.44,
"end": 2304.16,
"text": " improve its inclusiveness I don't see that at all say this was a false choice"
},
{
"start": 2304.16,
"end": 2308.04,
"text": " there's no reason that the board could not do both yes there's no reason that"
},
{
"start": 2308.04,
"end": 2312.96,
"text": " they couldn't do both and I believe we've read this together before I don't"
},
{
"start": 2312.96,
"end": 2317.04,
"text": " think the board ever said that there was a choice between one or the other I"
},
{
"start": 2317.04,
"end": 2323.8,
"text": " think they've said very much the opposite let's go back I think what they"
},
{
"start": 2323.8,
"end": 2334.1600000000003,
"text": " mean here is the word instead so here they say we won't change the name and"
},
{
"start": 2334.1600000000003,
"end": 2338.5600000000004,
"text": " then here's they say instead we ask for the community support and implementing"
},
{
"start": 2338.5600000000004,
"end": 2343.44,
"text": " creed steps I think this this must be it because I don't really see any other way"
},
{
"start": 2343.44,
"end": 2350.96,
"text": " you would ever think that and the reason is this here they say will not change"
},
{
"start": 2350.96,
"end": 2354.7200000000003,
"text": " the name of the conference for now on another page they say it will discuss"
},
{
"start": 2354.7200000000003,
"end": 2358.92,
"text": " the name change at the conference and then here the instead I think what is"
},
{
"start": 2358.92,
"end": 2365.52,
"text": " meant is instead what we will do right now is these things we'll discuss about"
},
{
"start": 2365.52,
"end": 2369.56,
"text": " the name change but what we will do right now which was basically not the"
},
{
"start": 2369.56,
"end": 2374.96,
"text": " the real problem in the first place the real issue raised was the name so"
},
{
"start": 2374.96,
"end": 2379.32,
"text": " instead of that issue we'll do these other things which we feel the community"
},
{
"start": 2379.32,
"end": 2385.56,
"text": " wants I think that's the I think there's no I think everyone reading this comes"
},
{
"start": 2385.56,
"end": 2390.56,
"text": " to the same conclusion after after reading that but so I really don't see"
},
{
"start": 2390.56,
"end": 2396.1200000000003,
"text": " how you you can say that this is kind of presented as an either or by the board I"
},
{
"start": 2396.1200000000003,
"end": 2401.6000000000004,
"text": " don't think that at all and but you decide for yourself I believe the real"
},
{
"start": 2401.6000000000004,
"end": 2408.56,
"text": " real real crocs here is the for now and the promise to discuss at the"
},
{
"start": 2408.56,
"end": 2415.92,
"text": " conference which if you can see here in the paper is never ever ever touched"
},
{
"start": 2415.92,
"end": 2420.16,
"text": " right this they make it basically seem that the board has decided to not"
},
{
"start": 2420.16,
"end": 2425.56,
"text": " change the name and that's it which is completely wrong they've clearly stated"
},
{
"start": 2425.56,
"end": 2430.08,
"text": " their openness to a name change they want to discuss it it was just"
},
{
"start": 2430.08,
"end": 2434.94,
"text": " inconclusive so they want to basically not do anything rash and then half the"
},
{
"start": 2434.94,
"end": 2440.52,
"text": " community is against it anyway so they want to discuss it I to say that this is"
},
{
"start": 2440.52,
"end": 2450.7200000000003,
"text": " the basically that that the wording implied the need to choose I don't see"
},
{
"start": 2450.7200000000003,
"end": 2458.08,
"text": " that um but you know you decide for yourselves the board suggested a name"
},
{
"start": 2458.08,
"end": 2464,
"text": " change would only be symbolic and so on would have no real consequences so that"
},
{
"start": 2464,
"end": 2467.24,
"text": " this this these are some of the arguments basically made in the quotes"
},
{
"start": 2467.24,
"end": 2474.24,
"text": " as well but you know the fact that the name change would only be symbolic and"
},
{
"start": 2474.24,
"end": 2478.84,
"text": " so on these are all things you could actually discuss at the con at this"
},
{
"start": 2478.84,
"end": 2484.32,
"text": " conference meeting you could even correct for your for your poll right you"
},
{
"start": 2484.32,
"end": 2488.92,
"text": " could invite people who have left the community to represent those you could"
},
{
"start": 2488.92,
"end": 2493.96,
"text": " invite new potential researchers you could give everyone their voice and then"
},
{
"start": 2493.96,
"end": 2498.2000000000003,
"text": " actually listen to all of them I think that's a very sensible decision by the"
},
{
"start": 2498.2000000000003,
"end": 2505.56,
"text": " board and I think this is misrepresented here lastly let's say another argument"
},
{
"start": 2505.56,
"end": 2508.96,
"text": " though not explicitly mentioned a number of machine learning researchers told us"
},
{
"start": 2508.96,
"end": 2512.16,
"text": " that changing the name of the conference lead to too much confusion in the"
},
{
"start": 2512.16,
"end": 2516.4,
"text": " community while we understand we respectfully do not share it I mean this"
},
{
"start": 2516.4,
"end": 2519.92,
"text": " is it's basically an argument against the name change I think it's also a"
},
{
"start": 2519.92,
"end": 2526.7200000000003,
"text": " point worthy of discussion right that they say they say we respectfully do not"
},
{
"start": 2526.7200000000003,
"end": 2531.44,
"text": " share this point yeah okay they don't share it other people do it's a point"
},
{
"start": 2531.44,
"end": 2535.44,
"text": " of discussion we could you know you could actually discuss it at the"
},
{
"start": 2535.44,
"end": 2539.7200000000003,
"text": " conference but I actually agree with the authors here I think changing the name"
},
{
"start": 2539.7200000000003,
"end": 2545.6800000000003,
"text": " will not have a big impact on the kind of recognizability of the conference"
},
{
"start": 2545.68,
"end": 2551.56,
"text": " especially now down here we'll actually get into what actually happened in"
},
{
"start": 2551.56,
"end": 2557.72,
"text": " November the in response to extensive public backlash the conference board"
},
{
"start": 2557.72,
"end": 2562.2799999999997,
"text": " announced a change to the official conference acronym to NRIPS they say we"
},
{
"start": 2562.2799999999997,
"end": 2570.2799999999997,
"text": " are pleased provides this provides a reasonable compromise so in in my opinion"
},
{
"start": 2570.28,
"end": 2576.0800000000004,
"text": " this is it as far as solutions go this is a good solution right the NRIPS"
},
{
"start": 2576.0800000000004,
"end": 2580.9,
"text": " acronym I think it's it's it's cool you don't have to change the name of the"
},
{
"start": 2580.9,
"end": 2586.2400000000002,
"text": " conference itself you simply change the acronym which you know was the the"
},
{
"start": 2586.2400000000002,
"end": 2592.2400000000002,
"text": " reported problem in the first place I think the all the new papers will like"
},
{
"start": 2592.2400000000002,
"end": 2598.28,
"text": " people will still recognize the old NIPS acronym or the new conference it will be"
},
{
"start": 2598.28,
"end": 2603.5600000000004,
"text": " clear that it's the same thing and I think this is a very good a very good"
},
{
"start": 2603.5600000000004,
"end": 2609.44,
"text": " new name and I think people will get used to it pretty quickly it also you"
},
{
"start": 2609.44,
"end": 2618.48,
"text": " know to say NRIPS it it's also rolls off the tongue easily so it's as far as"
},
{
"start": 2618.48,
"end": 2626.0400000000004,
"text": " solutions go I like it further they say however the work for the conference"
},
{
"start": 2626.04,
"end": 2631.68,
"text": " board is far from done oops we encourage the board to continue its efforts blah"
},
{
"start": 2631.68,
"end": 2638.2799999999997,
"text": " blah blah so they say okay you have to do more than just change the name and so"
},
{
"start": 2638.2799999999997,
"end": 2643.52,
"text": " on they say together these steps will help ensure that the NRIPS conference"
},
{
"start": 2643.52,
"end": 2646.2,
"text": " retains its place in the forefront of machine learning research while also"
},
{
"start": 2646.2,
"end": 2650,
"text": " creating a welcoming environment for women and members of other representative"
},
{
"start": 2650,
"end": 2659.2,
"text": " groups on other underrepresented groups we all hope that to me the problem is a"
},
{
"start": 2659.2,
"end": 2665.18,
"text": " bit how this how this went down and if we go back and look at the actual press"
},
{
"start": 2665.18,
"end": 2671.44,
"text": " release of the name change they say here dear members of the neural information"
},
{
"start": 2671.44,
"end": 2677.16,
"text": " processing systems community something remarkable has happened in our"
},
{
"start": 2677.16,
"end": 2681.7599999999998,
"text": " community the name NRIPS has sprung up organically as an alternative acronym"
},
{
"start": 2681.7599999999998,
"end": 2685.96,
"text": " we're delighted to see it being adopted indeed one forward-thinking member of"
},
{
"start": 2685.96,
"end": 2690.48,
"text": " the community purchased NRIPS comm described as purpose as hosting"
},
{
"start": 2690.48,
"end": 2694.2,
"text": " conference content under different acronym until the board catches up we've"
},
{
"start": 2694.2,
"end": 2700.44,
"text": " caught up we're considering alternative acronyms when the community support for"
},
{
"start": 2700.44,
"end": 2704.48,
"text": " NRIPS became apparent we ask all attendees to respect the solution from"
},
{
"start": 2704.48,
"end": 2710.04,
"text": " the community use the new acronym so basically they've rebranded the entire"
},
{
"start": 2710.04,
"end": 2715.96,
"text": " conference about a month before the actual meeting asked all sponsors all"
},
{
"start": 2715.96,
"end": 2723.64,
"text": " invited companies asked all invited papers to rebrand the acronym to me"
},
{
"start": 2723.64,
"end": 2728.92,
"text": " this the wording here is fit is a bit funny like something remarkable has"
},
{
"start": 2728.92,
"end": 2734.46,
"text": " happened in our community has sprung up organically and now we'll just adopt it"
},
{
"start": 2734.46,
"end": 2739.5,
"text": " it seems like it seems like much less of the fairy tale to describe here but the"
},
{
"start": 2739.5,
"end": 2745.32,
"text": " actual like there's a there's a mob with pitchforks around your house and this is"
},
{
"start": 2745.32,
"end": 2754.8,
"text": " like the first kind of straw that you can grab to to make them calm down and"
},
{
"start": 2754.8,
"end": 2759.56,
"text": " also know that some companies have begun pulling out funding for the conference"
},
{
"start": 2759.56,
"end": 2766.64,
"text": " so I think this is really this was really you know much more backed by"
},
{
"start": 2766.64,
"end": 2774.16,
"text": " force and and back yeah what they say in the paper extensive public backlash so"
},
{
"start": 2774.16,
"end": 2781,
"text": " loud screaming basically then this this kind of the name has sprung up"
},
{
"start": 2781,
"end": 2789.52,
"text": " organically and has been adopted and seems much more bit forceful to me it"
},
{
"start": 2789.52,
"end": 2795.16,
"text": " would have still been a viable path the most valuable path to actually wait for"
},
{
"start": 2795.16,
"end": 2800.7599999999998,
"text": " the conference and then have that discussion and then if indeed this name"
},
{
"start": 2800.7599999999998,
"end": 2805.56,
"text": " in the rips would be would be presented as a good alternative and you know"
},
{
"start": 2805.56,
"end": 2810.32,
"text": " people would be fine with that then you could still make the name change for"
},
{
"start": 2810.32,
"end": 2816.32,
"text": " last for next year I think this this would have been a good alternative my"
},
{
"start": 2816.32,
"end": 2823.6000000000004,
"text": " fear now is this has been extremely rash extremely forceful as as I've said also"
},
{
"start": 2823.6000000000004,
"end": 2831.6400000000003,
"text": " accompanied by with like by withdrawal of funding that I believe these things"
},
{
"start": 2831.6400000000003,
"end": 2836.96,
"text": " usually provoke a backlash and that's really something that I wouldn't look"
},
{
"start": 2836.96,
"end": 2841.4,
"text": " forward to so I hope that this con that this paragraph down here is true that"
},
{
"start": 2841.4,
"end": 2846.0800000000004,
"text": " actually we will see a more welcoming environment for everyone but I believe"
},
{
"start": 2846.08,
"end": 2852.72,
"text": " things like this tend in society to have the sometimes very opposite effects of"
},
{
"start": 2852.72,
"end": 2862.16,
"text": " what's intended and so I hope this does not produce a backlash I think having"
},
{
"start": 2862.16,
"end": 2867.7599999999998,
"text": " had the actual discussion doing things non rashly would have done much more in"
},
{
"start": 2867.7599999999998,
"end": 2875.36,
"text": " the direction of preventing such a backlash so this is the end of the paper"
},
{
"start": 2875.36,
"end": 2883.4,
"text": " so to recap they basically say the acronym was was inappropriate which I"
},
{
"start": 2883.4,
"end": 2892.1200000000003,
"text": " agree with they say the survey was bad which I could believe if there was data"
},
{
"start": 2892.1200000000003,
"end": 2896.88,
"text": " they say that an issue adversely affecting the minority of participants"
},
{
"start": 2896.88,
"end": 2902.7200000000003,
"text": " should not be cited by majority vote which I absolutely disagree with and"
},
{
"start": 2902.72,
"end": 2909.64,
"text": " then they say the board has basically stated this as an either or decision"
},
{
"start": 2909.64,
"end": 2917.12,
"text": " which is I believe not true and misrepresenting or maybe I've missed"
},
{
"start": 2917.12,
"end": 2922.8799999999997,
"text": " something it's always possible lastly I want to get to this paragraph in recent"
},
{
"start": 2922.8799999999997,
"end": 2926.68,
"text": " months a number of women including some of the authors of this article who"
},
{
"start": 2926.68,
"end": 2930.68,
"text": " publicly expressed support for a change of the conference name have been"
},
{
"start": 2930.68,
"end": 2934.9199999999996,
"text": " relentlessly trolled harassed verbally abused and even physically threatened on"
},
{
"start": 2934.9199999999996,
"end": 2941.24,
"text": " Twitter reddit other online forums much of this harassment they say has been"
},
{
"start": 2941.24,
"end": 2947.44,
"text": " anonymous and typically has had an extremely gendered tone furthermore some"
},
{
"start": 2947.44,
"end": 2952.48,
"text": " students have reached out to us the authors lamenting the fact that they"
},
{
"start": 2952.48,
"end": 2956.96,
"text": " felt unable to openly express their support for renaming the conference due"
},
{
"start": 2956.96,
"end": 2961.8,
"text": " to fear of bullying or retaliation by faculty advisors or others in position"
},
{
"start": 2961.8,
"end": 2967.84,
"text": " of power this I believe is really bad the fact that people can't speak out"
},
{
"start": 2967.84,
"end": 2973,
"text": " about something like this without being bullied or harassed or having to fear"
},
{
"start": 2973,
"end": 2979.68,
"text": " for their careers basically is is bad and I would really discourage everyone"
},
{
"start": 2979.68,
"end": 2986.44,
"text": " from engaging in such behavior verbal abuse physically threaten I mean that's"
},
{
"start": 2986.44,
"end": 2991.2400000000002,
"text": " I mean to one point you can say all right if you've been on the internet for"
},
{
"start": 2991.2400000000002,
"end": 2995.8,
"text": " longer than a week then this probably has happened to you if you have had any"
},
{
"start": 2995.8,
"end": 2999.96,
"text": " sort of serious discussion on the internet but you can also say that"
},
{
"start": 2999.96,
"end": 3007.04,
"text": " doesn't make it right so I believe it's it's really important to separate what"
},
{
"start": 3007.04,
"end": 3013.2400000000002,
"text": " is you know harassment basically from actual disagreement and criticism and"
},
{
"start": 3013.24,
"end": 3021.04,
"text": " please engage in the latter do not engage in the former my problem with"
},
{
"start": 3021.04,
"end": 3027.9199999999996,
"text": " this paragraph it's again it's very one-sided it's basically stated here"
},
{
"start": 3027.9199999999996,
"end": 3032.04,
"text": " some students have reached out to us lamenting the fact that they felt unable"
},
{
"start": 3032.04,
"end": 3037.8799999999997,
"text": " to openly express their support for renaming the conference due to fear of"
},
{
"start": 3037.8799999999997,
"end": 3042.2799999999997,
"text": " bullying retaliation by faculty or advisors of other and others of position"
},
{
"start": 3042.28,
"end": 3055.28,
"text": " power to me I'm you know I'm gonna say this probably happens on both sides what"
},
{
"start": 3055.28,
"end": 3058.8,
"text": " you know one could argue where it happens more but this very much happens"
},
{
"start": 3058.8,
"end": 3064.36,
"text": " on both sides of this issue and it's real shame for both sides basically I"
},
{
"start": 3064.36,
"end": 3068.96,
"text": " think anyone should be able to express your opinion to to demonstrate that here"
},
{
"start": 3068.96,
"end": 3075.16,
"text": " I'm gonna show another Twitter thread by one of the authors of this paper where"
},
{
"start": 3075.16,
"end": 3080.32,
"text": " basically this is a thread where she posts screenshots of conversations"
},
{
"start": 3080.32,
"end": 3084.2,
"text": " basically people reaching out to her saying exactly that like I can't share"
},
{
"start": 3084.2,
"end": 3091.2,
"text": " my I have trouble sharing my opinion I get mocked for my opinion I can't do so"
},
{
"start": 3091.2,
"end": 3098.08,
"text": " publicly because I fear you know from my from my faculty and so on but then"
},
{
"start": 3098.08,
"end": 3103.52,
"text": " there's also this one here where a person wrote an email to the author"
},
{
"start": 3103.52,
"end": 3112.2799999999997,
"text": " basically saying they disagree with her and I I've read this email I don't you"
},
{
"start": 3112.2799999999997,
"end": 3119.4,
"text": " know I don't agree with the arguments here made but I can say that the this is"
},
{
"start": 3119.4,
"end": 3125.3199999999997,
"text": " not verbal abuse it's not personal attack it's not physically threatening"
},
{
"start": 3125.32,
"end": 3131.1600000000003,
"text": " it's actually quite respectful disagreement that the person actually"
},
{
"start": 3131.1600000000003,
"end": 3136.32,
"text": " goes through length to say how respectful they are how much you know how"
},
{
"start": 3136.32,
"end": 3145.28,
"text": " much this is meant as a as a disagreement on factual terms and further"
},
{
"start": 3145.28,
"end": 3152.44,
"text": " what they say is that they want to be anonymous maybe you see it on the very"
},
{
"start": 3152.44,
"end": 3156.04,
"text": " bottom for example I haven't done too much to anonymize myself but I ask you"
},
{
"start": 3156.04,
"end": 3159.6,
"text": " to respect my wishes of remaining anonymous don't try to figure out who I"
},
{
"start": 3159.6,
"end": 3165.44,
"text": " am further up they state basically they want to remain anonymous because they"
},
{
"start": 3165.44,
"end": 3171.04,
"text": " fear for their ladder for their later career right they fear of a backlash up"
},
{
"start": 3171.04,
"end": 3175.92,
"text": " here wish to remain anonymous as I'm an early in my career someday we may work"
},
{
"start": 3175.92,
"end": 3186.84,
"text": " together so basically they say here I disagree here's why I disagree and they"
},
{
"start": 3186.84,
"end": 3191.2200000000003,
"text": " wish to remain anonymous because they fear for their career right so this is"
},
{
"start": 3191.2200000000003,
"end": 3198.52,
"text": " almost like this is this is very much here feeling unable and will will go"
},
{
"start": 3198.52,
"end": 3205.36,
"text": " feeling unable to openly express their in the case support against renaming"
},
{
"start": 3205.36,
"end": 3211.6400000000003,
"text": " the conference to to fear of bullying or retaliation by faculty advisor others"
},
{
"start": 3211.6400000000003,
"end": 3216.7200000000003,
"text": " in position of power so this author here is obviously a real person in position"
},
{
"start": 3216.7200000000003,
"end": 3222,
"text": " of power and in very famous senior researcher and this person basically"
},
{
"start": 3222,
"end": 3226.6600000000003,
"text": " says I'm afraid and I can't you know that that's why I'm anonymous and the"
},
{
"start": 3226.6600000000003,
"end": 3233.04,
"text": " way the author responded here as you can read is what an anonymous coward of"
},
{
"start": 3233.04,
"end": 3240.92,
"text": " course I will do everything to guess you and it's it's difficult to to kind of"
},
{
"start": 3240.92,
"end": 3246.88,
"text": " put this off as I mean even if it's I don't know how it's meant right I will"
},
{
"start": 3246.88,
"end": 3251.44,
"text": " do everything to guess you and the least it means she will try to figure out who"
},
{
"start": 3251.44,
"end": 3257.16,
"text": " that is right and she doesn't go as far as saying that she will then basically"
},
{
"start": 3257.16,
"end": 3263.8799999999997,
"text": " either you know remember that name in case of any future thing or share it or"
},
{
"start": 3263.8799999999997,
"end": 3270.12,
"text": " whatnot but it's certainly you can't argue that this is a real deterrent for"
},
{
"start": 3270.12,
"end": 3277.3199999999997,
"text": " other people to even anonymously voice their opinion to if if this person"
},
{
"start": 3277.3199999999997,
"end": 3283.72,
"text": " announces I will do everything to guess you to me that that shows that this"
},
{
"start": 3283.72,
"end": 3289.2799999999997,
"text": " fear that we discuss here is very much present on both sides and it's"
},
{
"start": 3289.2799999999997,
"end": 3298.48,
"text": " absolutely not okay if if either side reacts by basically by basically"
},
{
"start": 3298.48,
"end": 3304.8399999999997,
"text": " retaliation or even even the the possibility of retaliation and I believe"
},
{
"start": 3304.8399999999997,
"end": 3309.24,
"text": " everyone should be able to say their opinion I respect really everyone even"
},
{
"start": 3309.24,
"end": 3314.72,
"text": " like these these authors here clearly took a lot of effort and a lot of a lot"
},
{
"start": 3314.72,
"end": 3319.2,
"text": " of beating basically they say they've been relentlessly trolled harassed"
},
{
"start": 3319.2,
"end": 3323.68,
"text": " verbally abused even physically threatened this is just really bad and"
},
{
"start": 3323.68,
"end": 3328.3999999999996,
"text": " have lots of respect for them saying their opinions stating their opinions"
},
{
"start": 3328.3999999999996,
"end": 3333.04,
"text": " anyway I think everyone should be able to do that without these things happening"
},
{
"start": 3333.04,
"end": 3340,
"text": " so to everyone watching I encourage you to not engage in these things and that"
},
{
"start": 3340,
"end": 3345.16,
"text": " alone will probably make the environment much much more inclusive and nice for"
},
{
"start": 3345.16,
"end": 3353.08,
"text": " everybody irregardless of of affiliation so that was it for me for this paper"
},
{
"start": 3353.08,
"end": 3360.16,
"text": " it's a bit longer it's a bit ranty if you agree disagree let me know in the"
},
{
"start": 3360.16,
"end": 3369.24,
"text": " comments I guess and other than that have a nice week weekend whatever you do"
},
{
"start": 3369.24,
"end": 3392.4399999999996,
"text": " bye"
}
] |
_PyusGsbBPY | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Stochastic RNNs without Teacher-Forcing | [
"Science & Technology"
] | [
"NeurIPS2018",
"NIPS2018",
"NLP",
"deep learning",
"RNN"
] | We present a stochastic non-autoregressive RNN that does not require teacher-forcing for training. The content is based on our 2018 NeurIPS paper:
Deep State Space Models for Unconditional Word Generation
https://arxiv.org/abs/1806.04550 | Hi everybody, my name is Florian and Janik was nice enough to host me here as a guest to talk about Stochastic RNNs without teacher forcing. This is based on recent work, deep state space models for unconditional word generation, which we presented at this year's New RIPs. And if you feel like any more details, please check out the paper. We focus on a de facto standard training hack for any RNNs that generate text. It's called teacher forcing and it's used in any model, whether unconditional or conditional, such as in a sentence autoencoder or in a translation model. To understand where teacher forcing comes from, we first need to understand where text generation comes from. For the good or the bad, and here we will focus on the bad, text generation has its roots in language modeling. So language modeling is the problem of predicting the next word, given all the previous words. People used to use ANGRA models for this, but today people use recurrent neural networks to do that. Such recurrent neural networks or RNNs factorize the joint observation probability of a sequence that I here depict as W into independent softmax distributions over individual tokens. So for every time step, there's a softmax function. And the softmax is conditioned on a hidden state. And all the magic of the RNN goes into the function that gives you the new state, given the old hidden state. Usually this is called a transition function, F, and as an input it gets the last state and the last word. So F could be a GUO function or an LSTM function. Just like any other language model, you can turn this into a generative model of text. Let's look at the dependencies that you would have at test time. There's initial hidden state H1. We sample a new word. We use our transition function F and it gives us the new state H2. Then we can sample a new word W2, feed it back, get a new state, sample a new word, feed it back. It's important to note that all the stochasticity in the output is solely due to the stochasticity in the sampling process, because the transition function is deterministic. So far there's nothing to complain about. But so far I've only talked about test time. At training time there is a catch. This is where teacher forcing kicks in. It turns out that you can't learn this model by basing the evolution of the hidden states on your own predictions. You have to use teacher forcing and that means you substitute your own prediction by the ground truth. So at training time there's no sampling loop. You just take the ground truth token and feed it into your state transition function. So that feels unintuitive because at test time we do something else than we do at training time. And it's also known in the literature for a few years to cause biases. So why is that problematic? Remember we come from language modeling. In language modeling we could argue that if our only goal is to predict one word given the previous words, then of course we can use the ground truth context to ground truth previous words. But if we're interested in generating like longer sequences, then we need to learn what to memorize. And in particular we need to become robust against our own predictions because we might make mistakes at test time and there's no ground truth at test time. Just to get this confirmed by somebody who has worked in the field for years, at the NeurIPS representation learning workshop Alex Grave mentioned teacher forcing as one of the big three problems for autoregressive models. 
And in his own words, teacher forcing might lead to predict one step ahead, not many and potentially brittle generation and myopic representations. How have people addressed teacher forcing so far? There are approaches to try to mitigate the problem. For example, by blending together these two views, training time and test time, so that sometimes you use your own prediction during training, but sometimes you use the ground truth. We believe for a rigorous model of text generation, we need a rigorous model of uncertainty. This should be an integral part of any generative model and therefore it should be the same model both at training time and test time without any hacks. We propose a fundamentally different approach by proposing a new transition function. The new transition function is non autoregressive. That means it depends on the last stage, ht-1, but it doesn't depend on the last word. That means teacher forcing is not an option anymore, but it also means teacher forcing is not a problem anymore. Instead, the transition function accepts a white noise vector as the second input. Now you might wonder why do we need noise at all as an input to the transition function? Well, for a given prefix, there might be different continuations. So we need some source of entropy to model the entropy in different continuations. The rest of the paper pretty much focuses on the following two questions. A. Which function f is powerful enough to turn the most simple noise source, just the standard Gaussian vector, into something that is powerful enough to replace the autoregressive feedback mechanism of a standard RNN? And the second question is, of course, how do we train this? What framework do we train this in? And it will turn out that variational flows are suitable functions f and variational inference is the right framework to train them. So here's the roadmap to complete the model. First, we need to cast the generative model as a probabilistic method because so far I've only sketched a procedure that involves sampling some noise and then applying some function and then predicting observations. Then we need to propose a variational inference model so that we can do maximum likelihood training. We will derive an elbow, which is our objective. Then in the paper, we also describe how the tightness of the elbow can be improved. And here I will finish by talking a bit about the evaluation and what we do to inspect the model. Since this work is based a lot on variational flows, let me give you a quick summary of variational flows. A variational flow is a diffeomorphism f, which maps from what I will call a simple noise space, Xi, to a complex noise space, H. And here I'm already using the notation for our sequence model. Simply by the change of variable formula, we know that the probability of an event H in the complex space is simply the probability of the event in the simplest space Xi as given by the inverse of f times a Jacobian term with respect to f evaluated at Xi. How can we use this in our sequential setting? First, let me fix some notation because sequential models are pretty prone to overloaded notation. I'll write time as t running from 1 to capital T. And whenever I talk about a sequence of variables like w, I don't index them. I just write w without an index. And only when I need a specific element, I'll write it as wt. Let's formalize the generative model. We start out with the probability of observing a sequence w. 
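Before the formal derivation continues, here is a minimal sketch of the contrast described above: (a) the usual teacher-forced autoregressive loop and (b) the proposed noise-driven, non-autoregressive loop. All names, shapes and the toy tanh transitions are illustrative assumptions, not the code behind this talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

# (a) Standard autoregressive RNN with teacher forcing: the state update is
#     driven by the ground-truth word at training time ...
def teacher_forced_nll(word_embs, word_ids, emit_W, h0):
    h, nll = h0, 0.0
    for emb, w in zip(word_embs, word_ids):
        nll -= np.log(softmax(emit_W @ h)[w])   # cross-entropy for the next token
        h = np.tanh(h + emb)                    # feed the ground truth, never a sample
    return nll

#     ... but by the model's own samples at generation time (train/test mismatch).
def generate_autoregressive(embed, emit_W, h0, steps):
    h, out = h0, []
    for _ in range(steps):
        w = rng.choice(emit_W.shape[0], p=softmax(emit_W @ h))
        out.append(w)
        h = np.tanh(h + embed[w])               # feedback of the sampled token
    return out

# (b) Proposed non-autoregressive transition: the state update consumes a fresh
#     standard-Gaussian noise vector and never sees the sampled word.
def generate_noise_driven(W_h, W_xi, emit_W, h0, steps):
    h, out = h0, []
    for _ in range(steps):
        xi = rng.standard_normal(h0.shape[0])   # xi_t ~ N(0, I): the only source of entropy
        h = np.tanh(W_h @ h + W_xi @ xi)        # stand-in for the flow transition f_g(h_{t-1}, xi_t)
        out.append(rng.choice(emit_W.shape[0], p=softmax(emit_W @ h)))
    return out
```

In (a) the training and generation loops differ in which token drives the state; in (b) there is nothing to teacher-force, so the same loop is used at training and test time, and all uncertainty about continuations has to be carried by the noise sequence.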
And since we use the latent variable model, we marginalize out the latent variables H. And then we will assume that the overall dependencies between hidden states H and observations w follow like an HMM type of dependency. That means the new state only depends on the last state and the current observation only depends on the current state. And now the question is how do we model these transitions? I've so far pitched the ideas of sampling noise and then using some transition function f. And we have seen flows already. Now we are ready to combine the two. We propose a transition function fg, which has the signature as I mentioned before. It gets a hidden state and noise vector as an input. And it gives you a new state as an output. This can be seen as a conditional flow because any ht minus 1, any last state, inserted as the first argument into fg, induces a flow which maps from the simple noise distribution to the space of new hidden states. And as I've said before, for the prior distribution in the simple noise space, we simply assume it's a standard Gaussian. Let's look at this graphically, because in the end this is a graphical model. I copied over the formulas from the last slide. And at the bottom you see the graphical model. First we have a sequence of stochastic variables Xi. Those deterministically induce via the transition function f, via the flow, a sequence of hidden states. And those independently predict the observations. All the magic is in the transition. So let me sketch this process here in the big circle. How do we get from the last state h2 to the new state h3? Let's say h2 encodes a prefix and there are two possible continuations. They're equally likely in the corpus, so there are two potential new states. The blue state h3 and the yellow state h3. I've sketched the standard Gaussian noise distribution at the top. There are yellow samples and there are blue samples. The flow realizes a mapping that takes any yellow sample and maps it to the yellow hidden state. And it maps any blue sample to the blue hidden state. So with probability one half in this situation, we either get a blue or a yellow sample from the simple noise distribution. And it will induce new states, blue h3 or the yellow h3. So far we have proposed the generative model. Now the question is how do we train it if we don't know the hidden states? The answer is variational inference and in particular, amortized variational inference. The key idea of variational inference is to introduce a parameterized approximate inference model. How do we propose such a model? Well, a good recipe is to first look at a true posterior. The probability of a state sequence given an observation sequence. The true posterior turns out to factorize into individual components, which give us the probability of a state given the last state and the future observations. It turns out that we can formulate this inference model using two ingredients that should be familiar. First, we use a transition function Fq, which induces a flow. It has the same signature as Fg for the generative model. And we use a noise source q. But now the noise source isn't uninformative anymore. In variational inference, the inference network is informed about the data. So there's a base distribution q of Xi t, which is allowed to look at the data Wt. Now compare this to teacher forcing. In teacher forcing, we substitute our own predictions by inserting ground truth information into the generative model. 
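As a simplified example of the conditional flow introduced above, here is a diagonal shift-scale transformation conditioned on the previous state (the paper also uses lower-triangular scalings and RealNVP-style flows, so this is only the simplest instance, with placeholder weight matrices):

```python
import numpy as np

def conditional_affine_forward(h_prev, xi, W_shift, W_log_scale):
    """Map a noise sample xi to a new state h_t, conditioned on h_{t-1}."""
    shift = W_shift @ h_prev
    log_scale = W_log_scale @ h_prev           # diagonal scaling for simplicity
    h_t = shift + np.exp(log_scale) * xi
    log_det_jac = float(log_scale.sum())       # log |det d h_t / d xi|
    return h_t, log_det_jac

def conditional_affine_inverse(h_prev, h_t, W_shift, W_log_scale):
    """Recover the noise vector that would have produced h_t under the same flow."""
    shift = W_shift @ h_prev
    log_scale = W_log_scale @ h_prev
    xi = (h_t - shift) * np.exp(-log_scale)
    log_det_jac_inv = float(-log_scale.sum())  # log |det d xi / d h_t|
    return xi, log_det_jac_inv
```

Each fixed h_{t-1} yields an invertible map with a cheap log-determinant, which is exactly what the change-of-variable terms in the objective described next need.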
In variational inference, it's very clear how to use the data. The data enters through the inference model and it enters in the form of future observation because the past observation we want to store in the hidden state. It remains to derive an elbow, which is the usual evidence lower bound objective used for variational inference. Any elbow, whether it's in a sequential setting or not, factorizes into two parts, a reconstruction loss and a model mismatch term. Here, reconstruction loss means probability of observation given a state. And model mismatch is between the generative model P and the inference model q. This is what is usually written as a KL divergence. To derive our elbow, we follow the literature on flows. In the first step, we introduced the flow on the inference model Fq. We turn the expectation with respect to the complex state space H into an expectation with respect to the simple noise distribution. And then, of course, at the same time, the flow appears inside the expectation. And we get the log-determinant terms that I've mentioned before. In the second step, we introduced the generative flow Fg using the same change of variable technique. It's possible to write out the elbow in a way so that there's only one Jacobian term for both flows and so that the generative model always appears as the inverse concatenated with the inference flow. In a second, I'll show you what the interpretation of that is. Let's quickly recap what we've seen so far. There's a generative model. It consists of a generative flow Fg and an uninformed noise source. There's an inference model, which contains an inference flow Fq and a simple base distribution across the noise variables q of xi. In the elbow, the two flows appear concatenated, and we can interpret this in the following way. The inference model q proposes a noise vector, xi t, that is informed about the future. The inference flow maps this to a hidden state. At the hidden state, the reconstruction loss lives. This is where we pay a price for making a bad prediction. However, the inference model cannot encode all the possible information about the future into the hidden state, ht, because the mapping continues to the simple noise space of the generative model. And the inference model must make sure that the proposal also covers significant probability mass under the uninformed prior. This trade-off between reconstruction and model mismatch is common to all elbows. But here we highlight the special situation where we have two flows, one for the inference model and one for the generative model. In our paper, we also show how we can use the recently proposed important weighted autoencoder to improve the tightness of our bound, but I'll skip those steps here. Instead, let's quickly talk about evaluation. We apply our model to unconditional generation. So why in hell would somebody look into unconditional generation? Well, actually, it turns out it's harder than conditional generation. If you know what the French sentence looks like, it's much easier to continue a partial English translation. But it's not only harder, it's also more interesting to inspect which information does a sequence model need to store and which information can it forget. We use two metrics to evaluate our model. First, we look at sequence cross entropy. So we compare the model's sequence distribution to the data sequence distribution. Usually estimating the data distribution is impossible. 
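Looping back to the objective above, a single-sample estimate of one time step's contribution to the elbow might look roughly like this. Diagonal Gaussians and shift-scale flows stand in for the real networks; every name and shape here is an assumption made for illustration, not the authors' implementation.

```python
import numpy as np

def log_normal(x, mean, log_std):
    # log density of a diagonal Gaussian, summed over dimensions
    return float((-0.5 * ((x - mean) / np.exp(log_std)) ** 2
                  - log_std - 0.5 * np.log(2 * np.pi)).sum())

def step_elbo(w_t, emit_W,
              mu_q, log_std_q,            # informed base distribution q(xi_t | future words)
              shift_q, log_scale_q,       # inference flow f_q(h_{t-1}, .), diagonal affine
              shift_g, log_scale_g,       # generative flow f_g(h_{t-1}, .), diagonal affine
              rng):
    # 1) sample from the informed proposal and push it through the inference flow
    xi_q = mu_q + np.exp(log_std_q) * rng.standard_normal(mu_q.shape)
    h_t = shift_q + np.exp(log_scale_q) * xi_q
    log_q_h = log_normal(xi_q, mu_q, log_std_q) - float(log_scale_q.sum())

    # 2) pull h_t back through the inverse generative flow, score under the N(0, I) prior
    xi_g = (h_t - shift_g) * np.exp(-log_scale_g)
    log_p_h = log_normal(xi_g, np.zeros_like(xi_g), np.zeros_like(xi_g)) - float(log_scale_g.sum())

    # 3) reconstruction term: log-likelihood of the observed token under the emission softmax
    logits = emit_W @ h_t
    log_p_w = float(logits[w_t] - (np.log(np.exp(logits - logits.max()).sum()) + logits.max()))

    # reconstruction minus model mismatch, estimated with a single sample
    return log_p_w + log_p_h - log_q_h, h_t
```

Summing this over time and averaging over data gives the training objective; the importance weighted variant mentioned in the talk replaces the single sample by several to tighten the bound.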
You don't want to say that the probability of a sentence is how many times the sentence has appeared in the training data. However, for words, we can use unigram frequencies of words in a corpus as a pretty reliable estimate. Also, we can get an estimate of our model's probability assigned to a sequence by using MC sampling. We take the marginal likelihood, sample k trajectories, and assess the probability that the trajectories assigned to the given sequence. Since our model is not autoregressive, the sequence isn't tied to an observation. So we can actually use the same sequences of hidden states to evaluate probabilities for all the words in the vocabulary. Since we've pitched our noise model as the key to contribution to our generative model, we want to empirically verify that the model is being used. Working with a clean probabilistic model allows us to use tools from probability theory to assess that. We use the mutual information between a noise vector at time t and the observation of time t. So this measures how much information in the output is actually due to the noise model. Before showing you the numbers, let's quickly go across the parameterization of our model. For the flows, we look at shift scaling transformations. And if the scaling g is lower triangular, we can compute efficiently the Jacobian determinant. We also look at real NVP and we compose flows by concatenation. The base distribution of our inference model depends on the future observations, which we summarize using a GRU RNN. The base distribution itself is a diagonal Gaussian. We use a state size of 8 and also run some experiments for 16 and 32. All the numbers are in the paper, so here are just the take-home messages. We are on par or better than a domestic RNN with teacher forcing trained at the same state size. Also, we observed that a powerful generative flow is essential to achieve good performance. Furthermore, we can confirm that important weightless elbow improved the results. This is the first model applying generative flows to sequence modeling. So naturally, we are interested in comparing the expressiveness of fg and fq. Our paper has a table that compares four choices for both flows. Our findings are that the generative flow should be powerful and the inference flow should be slightly less powerful. To understand our noise model, we look at the mutual information at every time step and show a box spot for all of them. Initially, the mutual information is highest, which means the initial character is most important to remember. The noise model is never being ignored and we see increased variance in the remaining time steps because we are averaging here across different sequences. A non-autoregressive model needs to have lower entropy in the observation model because any underentropy under the observation model is being forgotten because there is no feedback. The purple line shows you the observation model entropy during training. The dashed red line shows you the entropy on the observation model of a baseline. So indeed, we have lower entropy in the observation model and at the same time in green, you see the mutual information increasing. Let's summarize our findings. Using variational flows, non-autoregressive modeling of sequences is possible and teacher forcing is not necessary. At the same time, we get a noise model that is the driving factor of the sequence model and is easy to interpret. For any details, please check out the paper and for any questions, shoot me an email. | [
{
"start": 0,
"end": 6,
"text": " Hi everybody, my name is Florian and Janik was nice enough to host me here as a guest to talk about"
},
{
"start": 6,
"end": 14,
"text": " Stochastic RNNs without teacher forcing. This is based on recent work, deep state space models for"
},
{
"start": 14,
"end": 21,
"text": " unconditional word generation, which we presented at this year's New RIPs. And if you feel like any more details,"
},
{
"start": 21,
"end": 29,
"text": " please check out the paper. We focus on a de facto standard training hack for any RNNs that generate"
},
{
"start": 29,
"end": 37,
"text": " text. It's called teacher forcing and it's used in any model, whether unconditional or conditional,"
},
{
"start": 37,
"end": 45,
"text": " such as in a sentence autoencoder or in a translation model. To understand where teacher forcing comes from,"
},
{
"start": 45,
"end": 52,
"text": " we first need to understand where text generation comes from. For the good or the bad, and here we will focus on the bad,"
},
{
"start": 52,
"end": 60,
"text": " text generation has its roots in language modeling. So language modeling is the problem of predicting the next word,"
},
{
"start": 60,
"end": 69,
"text": " given all the previous words. People used to use ANGRA models for this, but today people use recurrent neural networks to do that."
},
{
"start": 69,
"end": 78,
"text": " Such recurrent neural networks or RNNs factorize the joint observation probability of a sequence that I here depict as W"
},
{
"start": 78,
"end": 86,
"text": " into independent softmax distributions over individual tokens. So for every time step, there's a softmax function."
},
{
"start": 86,
"end": 93,
"text": " And the softmax is conditioned on a hidden state. And all the magic of the RNN goes into the function that gives you the new state,"
},
{
"start": 93,
"end": 101,
"text": " given the old hidden state. Usually this is called a transition function, F, and as an input it gets the last state and the last word."
},
{
"start": 101,
"end": 111,
"text": " So F could be a GUO function or an LSTM function. Just like any other language model, you can turn this into a generative model of text."
},
{
"start": 111,
"end": 118,
"text": " Let's look at the dependencies that you would have at test time. There's initial hidden state H1. We sample a new word."
},
{
"start": 118,
"end": 126,
"text": " We use our transition function F and it gives us the new state H2. Then we can sample a new word W2, feed it back,"
},
{
"start": 126,
"end": 135,
"text": " get a new state, sample a new word, feed it back. It's important to note that all the stochasticity in the output is solely due to the stochasticity"
},
{
"start": 135,
"end": 142,
"text": " in the sampling process, because the transition function is deterministic. So far there's nothing to complain about."
},
{
"start": 142,
"end": 149,
"text": " But so far I've only talked about test time. At training time there is a catch. This is where teacher forcing kicks in."
},
{
"start": 149,
"end": 156,
"text": " It turns out that you can't learn this model by basing the evolution of the hidden states on your own predictions."
},
{
"start": 156,
"end": 161,
"text": " You have to use teacher forcing and that means you substitute your own prediction by the ground truth."
},
{
"start": 161,
"end": 168,
"text": " So at training time there's no sampling loop. You just take the ground truth token and feed it into your state transition function."
},
{
"start": 168,
"end": 173,
"text": " So that feels unintuitive because at test time we do something else than we do at training time."
},
{
"start": 173,
"end": 179,
"text": " And it's also known in the literature for a few years to cause biases. So why is that problematic?"
},
{
"start": 179,
"end": 188,
"text": " Remember we come from language modeling. In language modeling we could argue that if our only goal is to predict one word given the previous words,"
},
{
"start": 188,
"end": 192,
"text": " then of course we can use the ground truth context to ground truth previous words."
},
{
"start": 192,
"end": 198,
"text": " But if we're interested in generating like longer sequences, then we need to learn what to memorize."
},
{
"start": 198,
"end": 207,
"text": " And in particular we need to become robust against our own predictions because we might make mistakes at test time and there's no ground truth at test time."
},
{
"start": 207,
"end": 215,
"text": " Just to get this confirmed by somebody who has worked in the field for years, at the NeurIPS representation learning workshop Alex Grave mentioned"
},
{
"start": 215,
"end": 220,
"text": " teacher forcing as one of the big three problems for autoregressive models."
},
{
"start": 220,
"end": 230,
"text": " And in his own words, teacher forcing might lead to predict one step ahead, not many and potentially brittle generation and myopic representations."
},
{
"start": 230,
"end": 235,
"text": " How have people addressed teacher forcing so far? There are approaches to try to mitigate the problem."
},
{
"start": 235,
"end": 242,
"text": " For example, by blending together these two views, training time and test time, so that sometimes you use your own prediction during training,"
},
{
"start": 242,
"end": 249,
"text": " but sometimes you use the ground truth. We believe for a rigorous model of text generation, we need a rigorous model of uncertainty."
},
{
"start": 249,
"end": 258,
"text": " This should be an integral part of any generative model and therefore it should be the same model both at training time and test time without any hacks."
},
{
"start": 258,
"end": 263,
"text": " We propose a fundamentally different approach by proposing a new transition function."
},
{
"start": 263,
"end": 273,
"text": " The new transition function is non autoregressive. That means it depends on the last stage, ht-1, but it doesn't depend on the last word."
},
{
"start": 273,
"end": 279,
"text": " That means teacher forcing is not an option anymore, but it also means teacher forcing is not a problem anymore."
},
{
"start": 279,
"end": 284,
"text": " Instead, the transition function accepts a white noise vector as the second input."
},
{
"start": 284,
"end": 289,
"text": " Now you might wonder why do we need noise at all as an input to the transition function?"
},
{
"start": 289,
"end": 293,
"text": " Well, for a given prefix, there might be different continuations."
},
{
"start": 293,
"end": 298,
"text": " So we need some source of entropy to model the entropy in different continuations."
},
{
"start": 298,
"end": 303,
"text": " The rest of the paper pretty much focuses on the following two questions."
},
{
"start": 303,
"end": 311,
"text": " A. Which function f is powerful enough to turn the most simple noise source, just the standard Gaussian vector,"
},
{
"start": 311,
"end": 317,
"text": " into something that is powerful enough to replace the autoregressive feedback mechanism of a standard RNN?"
},
{
"start": 317,
"end": 322,
"text": " And the second question is, of course, how do we train this? What framework do we train this in?"
},
{
"start": 322,
"end": 331,
"text": " And it will turn out that variational flows are suitable functions f and variational inference is the right framework to train them."
},
{
"start": 331,
"end": 334,
"text": " So here's the roadmap to complete the model."
},
{
"start": 334,
"end": 340,
"text": " First, we need to cast the generative model as a probabilistic method because so far I've only sketched a procedure"
},
{
"start": 340,
"end": 346,
"text": " that involves sampling some noise and then applying some function and then predicting observations."
},
{
"start": 346,
"end": 351,
"text": " Then we need to propose a variational inference model so that we can do maximum likelihood training."
},
{
"start": 351,
"end": 354,
"text": " We will derive an elbow, which is our objective."
},
{
"start": 354,
"end": 359,
"text": " Then in the paper, we also describe how the tightness of the elbow can be improved."
},
{
"start": 359,
"end": 365,
"text": " And here I will finish by talking a bit about the evaluation and what we do to inspect the model."
},
{
"start": 365,
"end": 372,
"text": " Since this work is based a lot on variational flows, let me give you a quick summary of variational flows."
},
{
"start": 372,
"end": 382,
"text": " A variational flow is a diffeomorphism f, which maps from what I will call a simple noise space, Xi, to a complex noise space, H."
},
{
"start": 382,
"end": 386,
"text": " And here I'm already using the notation for our sequence model."
},
{
"start": 386,
"end": 395,
"text": " Simply by the change of variable formula, we know that the probability of an event H in the complex space is simply the probability of the event"
},
{
"start": 395,
"end": 404,
"text": " in the simplest space Xi as given by the inverse of f times a Jacobian term with respect to f evaluated at Xi."
},
{
"start": 404,
"end": 407,
"text": " How can we use this in our sequential setting?"
},
{
"start": 407,
"end": 413,
"text": " First, let me fix some notation because sequential models are pretty prone to overloaded notation."
},
{
"start": 413,
"end": 418,
"text": " I'll write time as t running from 1 to capital T."
},
{
"start": 418,
"end": 425,
"text": " And whenever I talk about a sequence of variables like w, I don't index them. I just write w without an index."
},
{
"start": 425,
"end": 431,
"text": " And only when I need a specific element, I'll write it as wt."
},
{
"start": 431,
"end": 434,
"text": " Let's formalize the generative model."
},
{
"start": 434,
"end": 438,
"text": " We start out with the probability of observing a sequence w."
},
{
"start": 438,
"end": 443,
"text": " And since we use the latent variable model, we marginalize out the latent variables H."
},
{
"start": 443,
"end": 453,
"text": " And then we will assume that the overall dependencies between hidden states H and observations w follow like an HMM type of dependency."
},
{
"start": 453,
"end": 459,
"text": " That means the new state only depends on the last state and the current observation only depends on the current state."
},
{
"start": 459,
"end": 462,
"text": " And now the question is how do we model these transitions?"
},
{
"start": 462,
"end": 467,
"text": " I've so far pitched the ideas of sampling noise and then using some transition function f."
},
{
"start": 467,
"end": 472,
"text": " And we have seen flows already. Now we are ready to combine the two."
},
{
"start": 472,
"end": 478,
"text": " We propose a transition function fg, which has the signature as I mentioned before."
},
{
"start": 478,
"end": 481,
"text": " It gets a hidden state and noise vector as an input."
},
{
"start": 481,
"end": 484,
"text": " And it gives you a new state as an output."
},
{
"start": 484,
"end": 494,
"text": " This can be seen as a conditional flow because any ht minus 1, any last state, inserted as the first argument into fg,"
},
{
"start": 494,
"end": 502,
"text": " induces a flow which maps from the simple noise distribution to the space of new hidden states."
},
{
"start": 502,
"end": 510,
"text": " And as I've said before, for the prior distribution in the simple noise space, we simply assume it's a standard Gaussian."
},
{
"start": 510,
"end": 514,
"text": " Let's look at this graphically, because in the end this is a graphical model."
},
{
"start": 514,
"end": 517,
"text": " I copied over the formulas from the last slide."
},
{
"start": 517,
"end": 519,
"text": " And at the bottom you see the graphical model."
},
{
"start": 519,
"end": 523,
"text": " First we have a sequence of stochastic variables Xi."
},
{
"start": 523,
"end": 530,
"text": " Those deterministically induce via the transition function f, via the flow, a sequence of hidden states."
},
{
"start": 530,
"end": 533,
"text": " And those independently predict the observations."
},
{
"start": 533,
"end": 536,
"text": " All the magic is in the transition."
},
{
"start": 536,
"end": 540,
"text": " So let me sketch this process here in the big circle."
},
{
"start": 540,
"end": 545,
"text": " How do we get from the last state h2 to the new state h3?"
},
{
"start": 545,
"end": 549,
"text": " Let's say h2 encodes a prefix and there are two possible continuations."
},
{
"start": 549,
"end": 554,
"text": " They're equally likely in the corpus, so there are two potential new states."
},
{
"start": 554,
"end": 558,
"text": " The blue state h3 and the yellow state h3."
},
{
"start": 558,
"end": 562,
"text": " I've sketched the standard Gaussian noise distribution at the top."
},
{
"start": 562,
"end": 565,
"text": " There are yellow samples and there are blue samples."
},
{
"start": 565,
"end": 570,
"text": " The flow realizes a mapping that takes any yellow sample and maps it to the yellow hidden state."
},
{
"start": 570,
"end": 574,
"text": " And it maps any blue sample to the blue hidden state."
},
{
"start": 574,
"end": 580,
"text": " So with probability one half in this situation, we either get a blue or a yellow sample from the simple noise distribution."
},
{
"start": 580,
"end": 586,
"text": " And it will induce new states, blue h3 or the yellow h3."
},
{
"start": 586,
"end": 589,
"text": " So far we have proposed the generative model."
},
{
"start": 589,
"end": 593,
"text": " Now the question is how do we train it if we don't know the hidden states?"
},
{
"start": 593,
"end": 598,
"text": " The answer is variational inference and in particular, amortized variational inference."
},
{
"start": 598,
"end": 604,
"text": " The key idea of variational inference is to introduce a parameterized approximate inference model."
},
{
"start": 604,
"end": 606,
"text": " How do we propose such a model?"
},
{
"start": 606,
"end": 610,
"text": " Well, a good recipe is to first look at a true posterior."
},
{
"start": 610,
"end": 614,
"text": " The probability of a state sequence given an observation sequence."
},
{
"start": 614,
"end": 619,
"text": " The true posterior turns out to factorize into individual components,"
},
{
"start": 619,
"end": 625,
"text": " which give us the probability of a state given the last state and the future observations."
},
{
"start": 625,
"end": 631,
"text": " It turns out that we can formulate this inference model using two ingredients that should be familiar."
},
{
"start": 631,
"end": 636,
"text": " First, we use a transition function Fq, which induces a flow."
},
{
"start": 636,
"end": 639,
"text": " It has the same signature as Fg for the generative model."
},
{
"start": 639,
"end": 642,
"text": " And we use a noise source q."
},
{
"start": 642,
"end": 646,
"text": " But now the noise source isn't uninformative anymore."
},
{
"start": 646,
"end": 650,
"text": " In variational inference, the inference network is informed about the data."
},
{
"start": 650,
"end": 656,
"text": " So there's a base distribution q of Xi t, which is allowed to look at the data Wt."
},
{
"start": 656,
"end": 659,
"text": " Now compare this to teacher forcing."
},
{
"start": 659,
"end": 666,
"text": " In teacher forcing, we substitute our own predictions by inserting ground truth information into the generative model."
},
{
"start": 666,
"end": 669,
"text": " In variational inference, it's very clear how to use the data."
},
{
"start": 669,
"end": 675,
"text": " The data enters through the inference model and it enters in the form of future observation"
},
{
"start": 675,
"end": 679,
"text": " because the past observation we want to store in the hidden state."
},
{
"start": 679,
"end": 686,
"text": " It remains to derive an elbow, which is the usual evidence lower bound objective used for variational inference."
},
{
"start": 686,
"end": 691,
"text": " Any elbow, whether it's in a sequential setting or not, factorizes into two parts,"
},
{
"start": 691,
"end": 694,
"text": " a reconstruction loss and a model mismatch term."
},
{
"start": 694,
"end": 699,
"text": " Here, reconstruction loss means probability of observation given a state."
},
{
"start": 699,
"end": 704,
"text": " And model mismatch is between the generative model P and the inference model q."
},
{
"start": 704,
"end": 708,
"text": " This is what is usually written as a KL divergence."
},
{
"start": 708,
"end": 713,
"text": " To derive our elbow, we follow the literature on flows."
},
{
"start": 713,
"end": 718,
"text": " In the first step, we introduced the flow on the inference model Fq."
},
{
"start": 718,
"end": 727,
"text": " We turn the expectation with respect to the complex state space H into an expectation with respect to the simple noise distribution."
},
{
"start": 727,
"end": 732,
"text": " And then, of course, at the same time, the flow appears inside the expectation."
},
{
"start": 732,
"end": 736,
"text": " And we get the log-determinant terms that I've mentioned before."
},
{
"start": 736,
"end": 743,
"text": " In the second step, we introduced the generative flow Fg using the same change of variable technique."
},
{
"start": 743,
"end": 748,
"text": " It's possible to write out the elbow in a way so that there's only one Jacobian term for both flows"
},
{
"start": 748,
"end": 754,
"text": " and so that the generative model always appears as the inverse concatenated with the inference flow."
},
{
"start": 754,
"end": 757,
"text": " In a second, I'll show you what the interpretation of that is."
},
{
"start": 757,
"end": 760,
"text": " Let's quickly recap what we've seen so far."
},
{
"start": 760,
"end": 762,
"text": " There's a generative model."
},
{
"start": 762,
"end": 767,
"text": " It consists of a generative flow Fg and an uninformed noise source."
},
{
"start": 767,
"end": 772,
"text": " There's an inference model, which contains an inference flow Fq"
},
{
"start": 772,
"end": 777,
"text": " and a simple base distribution across the noise variables q of xi."
},
{
"start": 777,
"end": 783,
"text": " In the elbow, the two flows appear concatenated, and we can interpret this in the following way."
},
{
"start": 783,
"end": 789,
"text": " The inference model q proposes a noise vector, xi t, that is informed about the future."
},
{
"start": 789,
"end": 792,
"text": " The inference flow maps this to a hidden state."
},
{
"start": 792,
"end": 796,
"text": " At the hidden state, the reconstruction loss lives."
},
{
"start": 796,
"end": 799,
"text": " This is where we pay a price for making a bad prediction."
},
{
"start": 799,
"end": 806,
"text": " However, the inference model cannot encode all the possible information about the future into the hidden state, ht,"
},
{
"start": 806,
"end": 811,
"text": " because the mapping continues to the simple noise space of the generative model."
},
{
"start": 811,
"end": 818,
"text": " And the inference model must make sure that the proposal also covers significant probability mass under the uninformed prior."
},
{
"start": 818,
"end": 823,
"text": " This trade-off between reconstruction and model mismatch is common to all elbows."
},
{
"start": 823,
"end": 830,
"text": " But here we highlight the special situation where we have two flows, one for the inference model and one for the generative model."
},
{
"start": 830,
"end": 839,
"text": " In our paper, we also show how we can use the recently proposed important weighted autoencoder to improve the tightness of our bound, but I'll skip those steps here."
},
{
"start": 839,
"end": 843,
"text": " Instead, let's quickly talk about evaluation."
},
{
"start": 843,
"end": 846,
"text": " We apply our model to unconditional generation."
},
{
"start": 846,
"end": 849,
"text": " So why in hell would somebody look into unconditional generation?"
},
{
"start": 849,
"end": 853,
"text": " Well, actually, it turns out it's harder than conditional generation."
},
{
"start": 853,
"end": 859,
"text": " If you know what the French sentence looks like, it's much easier to continue a partial English translation."
},
{
"start": 859,
"end": 869,
"text": " But it's not only harder, it's also more interesting to inspect which information does a sequence model need to store and which information can it forget."
},
{
"start": 869,
"end": 871,
"text": " We use two metrics to evaluate our model."
},
{
"start": 871,
"end": 873,
"text": " First, we look at sequence cross entropy."
},
{
"start": 873,
"end": 879,
"text": " So we compare the model's sequence distribution to the data sequence distribution."
},
{
"start": 879,
"end": 883,
"text": " Usually estimating the data distribution is impossible."
},
{
"start": 883,
"end": 889,
"text": " You don't want to say that the probability of a sentence is how many times the sentence has appeared in the training data."
},
{
"start": 889,
"end": 895,
"text": " However, for words, we can use unigram frequencies of words in a corpus as a pretty reliable estimate."
},
{
"start": 895,
"end": 902,
"text": " Also, we can get an estimate of our model's probability assigned to a sequence by using MC sampling."
},
{
"start": 902,
"end": 910,
"text": " We take the marginal likelihood, sample k trajectories, and assess the probability that the trajectories assigned to the given sequence."
},
{
"start": 910,
"end": 914,
"text": " Since our model is not autoregressive, the sequence isn't tied to an observation."
},
{
"start": 914,
"end": 921,
"text": " So we can actually use the same sequences of hidden states to evaluate probabilities for all the words in the vocabulary."
},
{
"start": 921,
"end": 930,
"text": " Since we've pitched our noise model as the key to contribution to our generative model, we want to empirically verify that the model is being used."
},
{
"start": 930,
"end": 936,
"text": " Working with a clean probabilistic model allows us to use tools from probability theory to assess that."
},
{
"start": 936,
"end": 942,
"text": " We use the mutual information between a noise vector at time t and the observation of time t."
},
{
"start": 942,
"end": 947,
"text": " So this measures how much information in the output is actually due to the noise model."
},
{
"start": 947,
"end": 952,
"text": " Before showing you the numbers, let's quickly go across the parameterization of our model."
},
{
"start": 952,
"end": 956,
"text": " For the flows, we look at shift scaling transformations."
},
{
"start": 956,
"end": 962,
"text": " And if the scaling g is lower triangular, we can compute efficiently the Jacobian determinant."
},
{
"start": 962,
"end": 967,
"text": " We also look at real NVP and we compose flows by concatenation."
},
{
"start": 967,
"end": 974,
"text": " The base distribution of our inference model depends on the future observations, which we summarize using a GRU RNN."
},
{
"start": 974,
"end": 977,
"text": " The base distribution itself is a diagonal Gaussian."
},
{
"start": 977,
"end": 982,
"text": " We use a state size of 8 and also run some experiments for 16 and 32."
},
{
"start": 982,
"end": 986,
"text": " All the numbers are in the paper, so here are just the take-home messages."
},
{
"start": 986,
"end": 992,
"text": " We are on par or better than a domestic RNN with teacher forcing trained at the same state size."
},
{
"start": 992,
"end": 997,
"text": " Also, we observed that a powerful generative flow is essential to achieve good performance."
},
{
"start": 997,
"end": 1003,
"text": " Furthermore, we can confirm that important weightless elbow improved the results."
},
{
"start": 1003,
"end": 1007,
"text": " This is the first model applying generative flows to sequence modeling."
},
{
"start": 1007,
"end": 1012,
"text": " So naturally, we are interested in comparing the expressiveness of fg and fq."
},
{
"start": 1012,
"end": 1016,
"text": " Our paper has a table that compares four choices for both flows."
},
{
"start": 1016,
"end": 1024,
"text": " Our findings are that the generative flow should be powerful and the inference flow should be slightly less powerful."
},
{
"start": 1024,
"end": 1031,
"text": " To understand our noise model, we look at the mutual information at every time step and show a box spot for all of them."
},
{
"start": 1031,
"end": 1037,
"text": " Initially, the mutual information is highest, which means the initial character is most important to remember."
},
{
"start": 1037,
"end": 1046,
"text": " The noise model is never being ignored and we see increased variance in the remaining time steps because we are averaging here across different sequences."
},
{
"start": 1046,
"end": 1057,
"text": " A non-autoregressive model needs to have lower entropy in the observation model because any underentropy under the observation model is being forgotten because there is no feedback."
},
{
"start": 1057,
"end": 1062,
"text": " The purple line shows you the observation model entropy during training."
},
{
"start": 1062,
"end": 1067,
"text": " The dashed red line shows you the entropy on the observation model of a baseline."
},
{
"start": 1067,
"end": 1075,
"text": " So indeed, we have lower entropy in the observation model and at the same time in green, you see the mutual information increasing."
},
{
"start": 1075,
"end": 1078,
"text": " Let's summarize our findings."
},
{
"start": 1078,
"end": 1085,
"text": " Using variational flows, non-autoregressive modeling of sequences is possible and teacher forcing is not necessary."
},
{
"start": 1085,
"end": 1092,
"text": " At the same time, we get a noise model that is the driving factor of the sequence model and is easy to interpret."
},
{
"start": 1092,
"end": 1120,
"text": " For any details, please check out the paper and for any questions, shoot me an email."
}
] |
WYrvh50yu6s | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations | [
"Science & Technology"
] | [
"ai",
"deep learning",
"variational",
"autoencoders",
"vae",
"disentanglement",
"representation learning",
"machine learning",
"unsupervised",
"arxiv",
"google",
"google ai",
"mpi",
"eth",
"eth zurich",
"ethz"
] | https://arxiv.org/abs/1811.12359
Abstract:
In recent years, the interest in unsupervised learning of disentangled representations has significantly increased. The key assumption is that real-world data is generated by a few explanatory factors of variation and that these factors can be recovered by unsupervised learning algorithms. A large number of unsupervised learning approaches based on auto-encoding and quantitative evaluation metrics of disentanglement have been proposed; yet, the efficacy of the proposed approaches and utility of proposed notions of disentanglement has not been challenged in prior work. In this paper, we provide a sober look on recent progress in the field and challenge some common assumptions.
We first theoretically show that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data. Then, we train more than 12000 models covering the six most prominent methods, and evaluate them across six disentanglement metrics in a reproducible large-scale experimental study on seven different data sets. On the positive side, we observe that different methods successfully enforce properties "encouraged" by the corresponding losses. On the negative side, we observe in our study that well-disentangled models seemingly cannot be identified without access to ground-truth labels even if we are allowed to transfer hyperparameters across data sets. Furthermore, increased disentanglement does not seem to lead to a decreased sample complexity of learning for downstream tasks.
These results suggest that future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision, investigate concrete benefits of enforcing disentanglement of the learned representations, and consider a reproducible experimental setup covering several data sets.
Authors:
Francesco Locatello, Stefan Bauer, Mario Lucic, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem | All right, hello everyone Today we're gonna look at this paper challenging common assumptions in the unsupervised learning of disentangled representations by Francesca Locutello and a bunch of other people at Google AI, ETH Zurich and MPI Full disclaimer, I know these people and I've Talked to them about this work. So just so you know where I'm coming from It's a good paper and it's fairly short to explain. So let's go over it The main thing here is What's called disentanglement? So disentanglement is kind of a property of data in unsupervised learning or not data of your model that you would like to have In unsupervised learning in here, especially in generative models so What they focus on is like Auto encoding here and What that means is I have some data point which could be an image. Let's draw an image here and I compress this usually into a vector and The vector has a couple of dimensions. This is a representation of the Data and from this representation what I can do is I can produce an image again and If I train an autoencoder, I will enforce that my model. So both of these are my model This is called an encoder and this is called a decoder That What they do is that the final image then looks like the original image This is an autoencoder basically a compression algorithm that Tries to find representations such that it can reconstruct the original image again Here we go a little further in that we use what's called variational autoencoders. So All of these all of these experiments here use variants of the variational autoencoder and What a variational autoencoder? Let's skip some here A variational autoencoder is the same thing as an autoencoder except It's a probabilistic framework, so What you do is here? On the bottom you can see an equation that basically is the objective for a VAE and What it does is it says okay, I have an image Let's say this is my image and I use an encoder like in an autoencoder And that gives me an image And that gives me an autoencoder and that gives me a representation Okay but Now I don't use this representation directly to decode but this representation Is simply the parameters from a bunch of distributions Right, so here let's say I have Four four I want four latent factors and the latent factors are basically the latent variables that describe This image so the images could be images of let's say cats and four latent factors could be The color of the fur of the cat the size of the cat the position in the image and the let's say the General lighting of how bright the image is so these could be four latent factors that would explain Best the the image and from that and if the image could be best reconstructed, let's say So the the four latent factors we consider as probability distributions so What our encoder needs to do our encoder needs to produce eight numbers in this case Eight numbers why because for each of these four distributions we want a mean? And a standard deviation So these eight numbers here each one Or each pair of numbers one of them is going to be the mean and the other one is going to be the standard deviation of a distribution and then From these we're going to construct a distribution Like so like okay. Here's the mean here's the standard deviation So the distribution somehow looks like this and then we're going to sample from this distribution. 
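To make this step concrete, here is a minimal sketch (a numpy stand-in for a real encoder, using the four-factor example above) of turning the encoder's output into distribution parameters and drawing a reparameterized sample:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latents(encoder_output):
    """2k numbers from the encoder -> k means and k log standard deviations -> one sample z."""
    k = encoder_output.shape[0] // 2
    mu = encoder_output[:k]                 # means of the k latent distributions
    log_std = encoder_output[k:]            # log standard deviations, exp keeps them positive
    eps = rng.standard_normal(k)            # fresh noise: a new z every call, even for the same image
    z = mu + np.exp(log_std) * eps          # sample from N(mu, sigma^2), one value per latent factor
    return z, mu, log_std

# with 4 latent factors the encoder outputs 8 numbers:
z, mu, log_std = sample_latents(np.array([0.1, -0.3, 0.7, 0.0, -1.0, -1.0, -0.5, 0.2]))
```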
So one sample could be Here one sample could be here one sample could be here here So of course in the middle here, we're going to have more samples. But so the whereas the autoencoder directly uses the encoding to reproduce the image the variational autoencoder the the What the output what the encoder produces here is simply a parameterization for a disk for a distribution and And that distribution then is sampled so we're going to take one sample here So from from each of these so there's going to be multiple of those distributions because we have Eight numbers we are going to produce four distributions in particular So we're going to sample four different numbers. So we're going to sample a new vector with four One two, three four. Well, I didn't have eight at the beginning, but never mind. So here This gives us four numbers, but these are sampled. So these are going to be different every time Even if we feed the same image and from this the decoder Is going to try to reproduce the image and then Again the images the end image and the beginning image are going to be forced to be close to each other But also now since this is a probabilistic framework we also kind of need We need a different loss function for the autoencoder. You can simply penalize how far the images are in let's say l2 norm but here We have two distinct parts to the loss term. So And everything is probabilistic. So let's walk through this here. The first part Of the so we have two parts of the loss term and Here in particular q Is you can see here it takes as an is it is the distribution of z Conditional x and z will always be related representation of the Of the data and x will be the the data itself the data point So q will take the data point and produce z And the z specifically here what's meant is this This thing here This is z Whereas This is this is x And this is also Well, this is x Tilde or something whatever is produced by the decoder So basically what we're gonna do is We're going to punish the kl distance, which is a probabilistic distance measure. We're gonna Measure the distance between the distribution of z under x With the prior over z so p of z here This here Is the prior distribution Prior distribution over z and the prior distribution in va is is often to be taken as a Gaussian so We'll say all right So the our our kind of default assumption on the z variables is that they're that they're gaussians here And We're gonna force basically we're gonna force the encoder to come up with With encodings generally over the data set that are gaussians that are conformal to our prior So here we say specific prior pz I didn't mean to cross that out Right, so this second term enforces the the encoder to produce things that are Gaussian Um, it's specifically with our if our prior is let's say um Zero zero mean unit variance gaussians. It's gonna enforce that the first term here Is different the first term makes the image that has been input to the variational encoder and the image that has been output Close together again. This is a probabilistic Loss so What we're gonna do here is we're gonna take expectations. 
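A compact sketch of these two loss terms for a diagonal Gaussian encoder and a standard normal prior (decoder and shapes are placeholders; the KL term has the usual closed form in this special case):

```python
import numpy as np

rng = np.random.default_rng(0)

def vae_loss(x, mu, log_std, decode):
    """Negative ELBO for one data point: reconstruction term plus KL(q(z|x) || N(0, I))."""
    # reparameterized sample from the encoder distribution q(z|x)
    z = mu + np.exp(log_std) * rng.standard_normal(mu.shape)

    # reconstruction: negative log-likelihood of x under the decoder
    # (a unit-variance Gaussian decoder here, i.e. squared error up to constants)
    recon_nll = 0.5 * np.sum((x - decode(z)) ** 2)

    # KL(N(mu, sigma^2) || N(0, 1)), summed over latent dimensions (closed form)
    kl = 0.5 * np.sum(mu ** 2 + np.exp(2 * log_std) - 1.0 - 2 * log_std)

    return recon_nll + kl
```

The disentanglement methods compared in this paper are all variants of this objective that re-weight, decompose, or augment the regularization term.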
So the KL distance is also an expectation by the way We're gonna take expectations over Px which is the distribution of the data and also Over Q and Q is again our encoding mechanism Mechanism and we're simply going to punish the Or we're gonna here maximize the the log Probability which is equivalent to minimizing the negative log likelihood Which you might be familiar with of the data given the the z variables so And this is an expectation over q given x so what that means is basically we want the the probability Of this original data point we want Here we output x tilde We We want this to be close to x here. So what we can say is we want the probability that our model outputs x Which has been the original input right given this particular z that it produced to be high As an expectation of q Of z given x So as a bit cryptic, but it means here I input x into q I get out z and when I Have the z what I produce here is what I produce The likelihood that x the original image these are the same is produced should be high So that's a variational autoencoder. I simply encourage the latent representations to be close to my prior which is often Gaussian and I Encourage the output to be similar to the input which I do by Encouraging the likelihood that the output is the input All right, so cool. So what's that have to do with disentanglement disentanglement is property That now I would like to have in my model which is that these These things here Um, or we can also focus on these things here, however, you want to view it or these things here these Latent things that my encoder outputs somehow give me information about the data in a way That's disentangled what that means is I've already I've made an example that's already disentangled where I said, let's let's say we have images of a cat of cats and the fur color is going to be one variable and the color of the eyes of the cat is going to be another one and The position in the image is going to be another one. So these are all fairly independent, right? and so I if I change some Latent factor I can change them pretty much independently. So here this could be the fur color I can change it pretty much independently and cat will just have a different fur and so on What would be non disentangled representations? would be Let's say one encodes the fur of the cat and the other one encodes the Encodes the the species of cat because these are these are highly let's say entangled so the fur color is highly dependent on what species the cat is and It's not really so they kind of you can you can imagine it as these things being correlated, but it's slightly different And there are there's not an agreement on what this entanglement means really we just kind of imagine data is somehow Entangled and we want to kind of pull out these disentangled factors So what they focus on here and the easiest the easiest measure here is the following um, I might want to have some Space All right. So the easiest measure of disentanglement that is come up with here is the following Um, it's an assumption. The assumption is let's say there's data x right We'll call it random variable and we know We know we assume that This data is generated by a bunch of Latent variables z1 z2 z3 Which are? 
Independent which means that and the technical In this is that the p of z which is all of them can be factorized into p of z i So they are independent Um and these Kind of determine independently the data x now What does that disentanglement of when my model has produced a disentangled representation means I now have a model some model m Which is going to give me a representation of x And the representation as we saw before um Could be these things here, that's the the Representation specifically what these people do is they say okay the mean of the distribution that my encoder gives me That's the representation of x All right, so this gives you a representation of x from which you then might want to you know reconstruct x over here x So then but so the important thing is when is the representation disentangled the representation is disentangled in the easiest sense If the following holds when I change um When I change z i So I introduce a delta to z i to any of these three that means That in the representation of x Which we're just going to say So if there's three dimensions of z we just assume kind of we know that and we also make the representation three-dimensional then exactly one Factor in this is going to change so if I change one factor of the true underlying distribution um Which is independently which all the latent factors are independent then Only one factor in my representation changes. So if that's the case then Kind of I can be fairly sure that i've captured the the true latent structure of the data, right if one if if one of the Of the if I change one of the the z here Let's say I change the z3 and only then uh r3 So I change z3 let's say I have access to the true underlying distribution I ask the the world Ask the world to give me a picture of a cat that where the fur color is different and then I put it I get a data point and then I put it through my model I get a representation and only From the cat that I had before only one of the factors of my representation changes Then I call it disentangled then I can be fairly sure. Okay my representation this dimension of my representation captures the fur color independently of the other factors All right, so that's disentanglement and you notice it requires actually access here to the true distribution Distribution of how the data is generated by the world So this is something you generally don't have but um, it's a technical notion So you can you can certainly postulate it And it's it It's a nice framework and this paper basically proves that Generally learning disentangled representation in that way is impossible um If you don't have some if you don't make some assumptions some a priori assumptions on your data and your model so This is a theorem here and we See here p is any generative model Which admits this factorization Right does that that's what we talked about the true underlying generative process is Is independent in so In its constituents That means there's a bunch of latent variables. They independently from each other produce a data point right X is the data observations Then there exists an infinite family of bijective functions right such that This and this and this and this Okay What that means? 
is so this thing here basically just means that the um the distributions agree so that the the the overall distributions the let's say the it's not exactly that but the posterior distributions Um, let's say the data looks the same right That what comes out of the process looks the same So there is there is functions that transform the latent distribution into some other distribution, but they look the same in cumulatively All right, and then we have the All right, and then this part here Means you'll see the derivative of fi of u with respect to some Uj which you'll notice i and j are different. Um, this this means that basically the dimensions are Entangled it means that if I take the derivative of one entry In the in the f in the function output and I derive it By another entry then I get a non-zero derivative which means that this Uj influences fi Which basically means that I can produce I can take the z I can transform it in In so z is independent. So it means the i-th dimension has no influence on the j-th dimension Of the of the output and I can transform it into something Where that's no longer the case where the i-th and the j-th dimension very much uh Kind of are entangled or covariate so This means I can take the z that That's kind of everything is independent. I can transform it into something where everything is dependent and they give a nice example here So they say let's say we have Gaussians In two dimensions, so we have one Gaussian here And let me see if I can draw this one Gaussian here Right in two dimensions. They're completely independent um what you'll find is that the kind of distribution overall has Iso lines like this Right, it gives you kind of a hump in the middle two-dimensionally. You can maybe imagine like a bit of a mountain in the middle um All right. So this is what you this is the kind of output distribution If you if you don't know about the underlying factors, you simply see the cumulative distribution, which would be the the big p here um All right. Now we transform this into with f And f is simply a rotation by 45 degrees right, so two new axes this and that and again Our two gaussians are going to be transformed these Right. So these are not these are not disentangled anymore. Well in the in the notion I can't say it like this, but this is easiest to say so these are these are kind of Now that it's rotated in terms of the original coordinate system, which would go like this These very much depend on each other right the jth dimension the if dimension depend on each other because if I sample from one of the gaussians I need now basically two coordinates to describe where it is or Yeah, one isn't just So if I sample from one Gaussian I need both the coordinates but the cumulative distribution or the That is still the same That is still going to look exactly the same so It's again a hump. So it's basically an isometric hump in every direction if I rotate that the It looks exactly the same. This is the p here But now the the if dimension and the jth dimension very much influence each other um, and yeah, interestingly the If you now look at disentanglement if I just have if if I now produce data x here x1 and here I produce data x2 and both go through my model and give me our representation of x1 and the representation of x1 and the representation of x2 I have Without seeing the underlying structure. I have no idea which one of those two It comes from and thereby I have zero chance basically. 
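That 45-degree rotation example is easy to check numerically: samples from two independent standard Gaussians, once rotated, still have zero mean and identity covariance, so the observable distribution is literally unchanged, while each rotated coordinate now depends on both original factors. A small, purely illustrative NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent standard-normal latent factors z1, z2.
z = rng.standard_normal((100_000, 2))

# f: rotation by 45 degrees, a bijective map that mixes the two factors.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
u = z @ R.T

# The observable distribution is unchanged: both have ~zero mean and
# ~identity covariance, so no amount of data can tell z and u apart ...
print(np.cov(z, rowvar=False).round(2))
print(np.cov(u, rowvar=False).round(2))

# ... yet every rotated coordinate depends on both original factors,
# i.e. the u-representation is entangled with respect to z.
print(R)  # all entries non-zero: du_i / dz_j != 0 for all i, j
```

So from samples alone there is no way to tell which of the two coordinate systems is the "true" one.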
It would be a lucky guess which one it comes from, and there's an infinite family, so I will never find the true underlying distribution here. And thereby I will never be able to satisfy this property that if one of the z changes, then only one of the factors of my representation will change. Because if I say, oh, well, obviously this is the case, then I'm going to make a different model, and if I say, well, this is the case, I'm going to make a different model. I don't know which one it is, so I have to choose one, and it could be the other one. So I'm bound to be wrong, in this case 50% of the time, but if it's an infinite family I'm bound to be wrong every time, basically. That's what the theorem basically says: I can't decide on the true underlying distribution. There's an infinite family that transforms every distribution into some other distribution with basically the complete opposite properties of entanglement, and I need to choose one, and I will never choose the right one because I'm not that lucky. And thereby I can't do representation learning that's disentangled. All right, so that's the main claim of the paper, and there is a lot of experiments here. What the paper also does is they produce some new data sets and they test a lot of architectures. Basically they say: just because it's theoretically impossible doesn't mean it's impractical, because we can actually make these underlying assumptions, we can make some assumptions on the data, and then we can attempt to do disentanglement learning. So they do these data sets and they test different VAE architectures on them, and they basically establish where more work should go. So that's kind of the rest of the paper. I encourage you to look at the rest of the paper. I just wanted to give a quick introduction to VAEs and to disentangled representation learning. I wasn't technically correct in every detail, but I hope that it's enough. Have fun. | [
{
"start": 0,
"end": 2,
"text": " All right, hello everyone"
},
{
"start": 2.84,
"end": 11.92,
"text": " Today we're gonna look at this paper challenging common assumptions in the unsupervised learning of disentangled representations by Francesca Locutello and"
},
{
"start": 12.92,
"end": 17.3,
"text": " a bunch of other people at Google AI, ETH Zurich and MPI"
},
{
"start": 18.36,
"end": 22.28,
"text": " Full disclaimer, I know these people and I've"
},
{
"start": 23.2,
"end": 26.92,
"text": " Talked to them about this work. So just so you know where I'm coming from"
},
{
"start": 26.92,
"end": 33.24,
"text": " It's a good paper and it's fairly short to explain. So let's go over it"
},
{
"start": 34.760000000000005,
"end": 36.760000000000005,
"text": " The main thing here is"
},
{
"start": 36.800000000000004,
"end": 42.300000000000004,
"text": " What's called disentanglement? So disentanglement is kind of a property of data in"
},
{
"start": 42.96,
"end": 48.56,
"text": " unsupervised learning or not data of your model that you would like to have"
},
{
"start": 49.24,
"end": 53.84,
"text": " In unsupervised learning in here, especially in generative models"
},
{
"start": 53.84,
"end": 56.84,
"text": " so"
},
{
"start": 57.52,
"end": 59.760000000000005,
"text": " What they focus on is like"
},
{
"start": 61.28,
"end": 63.28,
"text": " Auto encoding here and"
},
{
"start": 63.800000000000004,
"end": 70.24000000000001,
"text": " What that means is I have some data point which could be an image. Let's draw an image here and"
},
{
"start": 71.44,
"end": 73.44,
"text": " I"
},
{
"start": 73.44,
"end": 77.80000000000001,
"text": " compress this usually into a vector and"
},
{
"start": 77.8,
"end": 84.8,
"text": " The vector has a couple of dimensions. This is a representation of the"
},
{
"start": 86.52,
"end": 94.08,
"text": " Data and from this representation what I can do is I can produce an image again and"
},
{
"start": 94.88,
"end": 101,
"text": " If I train an autoencoder, I will enforce that my model. So both of these are my model"
},
{
"start": 101,
"end": 105,
"text": " This is called an encoder and this is called a decoder"
},
{
"start": 105,
"end": 107,
"text": " That"
},
{
"start": 107,
"end": 113.84,
"text": " What they do is that the final image then looks like the original image"
},
{
"start": 114.64,
"end": 118.4,
"text": " This is an autoencoder basically a compression algorithm that"
},
{
"start": 119.64,
"end": 124.36,
"text": " Tries to find representations such that it can reconstruct the original image again"
},
{
"start": 125.16,
"end": 131.56,
"text": " Here we go a little further in that we use what's called variational autoencoders. So"
},
{
"start": 131.56,
"end": 135.32,
"text": " All of these all of these experiments here use"
},
{
"start": 136.04,
"end": 139.08,
"text": " variants of the variational autoencoder and"
},
{
"start": 140.04,
"end": 142.04,
"text": " What a variational autoencoder?"
},
{
"start": 143.56,
"end": 145.56,
"text": " Let's skip some here"
},
{
"start": 147,
"end": 152.04,
"text": " A variational autoencoder is the same thing as an autoencoder except"
},
{
"start": 155,
"end": 157,
"text": " It's a probabilistic framework, so"
},
{
"start": 157,
"end": 159,
"text": " What you do is here?"
},
{
"start": 160.84,
"end": 167.64,
"text": " On the bottom you can see an equation that basically is the objective for a VAE and"
},
{
"start": 168.76,
"end": 171.84,
"text": " What it does is it says okay, I have an image"
},
{
"start": 172.44,
"end": 174.44,
"text": " Let's say this is my image and"
},
{
"start": 175.32,
"end": 178.6,
"text": " I use an encoder like in an autoencoder"
},
{
"start": 181.24,
"end": 183.24,
"text": " And that gives me an image"
},
{
"start": 183.24,
"end": 187.88,
"text": " And that gives me an autoencoder and that gives me a representation"
},
{
"start": 189,
"end": 190.60000000000002,
"text": " Okay"
},
{
"start": 190.60000000000002,
"end": 191.8,
"text": " but"
},
{
"start": 191.8,
"end": 197.34,
"text": " Now I don't use this representation directly to decode but this representation"
},
{
"start": 198.84,
"end": 203.58,
"text": " Is simply the parameters from a bunch of distributions"
},
{
"start": 205,
"end": 207.96,
"text": " Right, so here let's say I have"
},
{
"start": 207.96,
"end": 215.16,
"text": " Four four I want four latent factors and the latent factors are basically the latent variables that describe"
},
{
"start": 215.72,
"end": 221.88,
"text": " This image so the images could be images of let's say cats and four latent factors could be"
},
{
"start": 222.44,
"end": 228.60000000000002,
"text": " The color of the fur of the cat the size of the cat the position in the image and"
},
{
"start": 229.4,
"end": 230.44,
"text": " the"
},
{
"start": 230.44,
"end": 232.44,
"text": " let's say the"
},
{
"start": 232.44,
"end": 238.84,
"text": " General lighting of how bright the image is so these could be four latent factors that would"
},
{
"start": 239.8,
"end": 241.8,
"text": " explain"
},
{
"start": 241.8,
"end": 246.92,
"text": " Best the the image and from that and if the image could be best reconstructed, let's say"
},
{
"start": 247.64,
"end": 251.48,
"text": " So the the four latent factors we consider as probability distributions"
},
{
"start": 252.12,
"end": 253.07999999999998,
"text": " so"
},
{
"start": 253.07999999999998,
"end": 258.68,
"text": " What our encoder needs to do our encoder needs to produce eight numbers in this case"
},
{
"start": 258.68,
"end": 267,
"text": " Eight numbers why because for each of these four distributions we want a mean?"
},
{
"start": 269.24,
"end": 271.24,
"text": " And a standard deviation"
},
{
"start": 273.40000000000003,
"end": 275.4,
"text": " So these eight numbers here"
},
{
"start": 275.72,
"end": 277.32,
"text": " each one"
},
{
"start": 277.32,
"end": 284.92,
"text": " Or each pair of numbers one of them is going to be the mean and the other one is going to be the standard deviation"
},
{
"start": 285.4,
"end": 287.4,
"text": " of a distribution"
},
{
"start": 287.4,
"end": 289.08,
"text": " and then"
},
{
"start": 289.08,
"end": 293.41999999999996,
"text": " From these we're going to construct a distribution"
},
{
"start": 294.44,
"end": 298.62,
"text": " Like so like okay. Here's the mean here's the standard deviation"
},
{
"start": 299.64,
"end": 308.44,
"text": " So the distribution somehow looks like this and then we're going to sample from this distribution. So one sample could be"
},
{
"start": 309.32,
"end": 312.67999999999995,
"text": " Here one sample could be here one sample could be here here"
},
{
"start": 312.68,
"end": 319.40000000000003,
"text": " So of course in the middle here, we're going to have more samples. But so the whereas the autoencoder directly uses the encoding"
},
{
"start": 319.72,
"end": 323.24,
"text": " to reproduce the image the variational autoencoder the"
},
{
"start": 324.68,
"end": 326.12,
"text": " the"
},
{
"start": 326.12,
"end": 330.7,
"text": " What the output what the encoder produces here is simply a parameterization"
},
{
"start": 331.88,
"end": 336.12,
"text": " for a disk for a distribution and"
},
{
"start": 336.12,
"end": 343.88,
"text": " And that distribution then is sampled so we're going to take one sample"
},
{
"start": 345,
"end": 347,
"text": " here"
},
{
"start": 348.12,
"end": 353.32,
"text": " So from from each of these so there's going to be multiple of those distributions because we have"
},
{
"start": 354.52,
"end": 358.06,
"text": " Eight numbers we are going to produce four distributions"
},
{
"start": 359,
"end": 361.08,
"text": " in particular"
},
{
"start": 361.08,
"end": 367.47999999999996,
"text": " So we're going to sample four different numbers. So we're going to sample a new vector"
},
{
"start": 368.03999999999996,
"end": 369.56,
"text": " with four"
},
{
"start": 369.56,
"end": 373.96,
"text": " One two, three four. Well, I didn't have eight at the beginning, but never mind. So here"
},
{
"start": 374.59999999999997,
"end": 379.15999999999997,
"text": " This gives us four numbers, but these are sampled. So these are going to be different every time"
},
{
"start": 379.71999999999997,
"end": 381.71999999999997,
"text": " Even if we feed the same image"
},
{
"start": 382.2,
"end": 384.78,
"text": " and from this the decoder"
},
{
"start": 384.78,
"end": 389.5,
"text": " Is going to try to reproduce the image and then"
},
{
"start": 391.5,
"end": 399.41999999999996,
"text": " Again the images the end image and the beginning image are going to be forced to be close to each other"
},
{
"start": 401.82,
"end": 406.7,
"text": " But also now since this is a probabilistic framework we also kind of need"
},
{
"start": 407.34,
"end": 414.05999999999995,
"text": " We need a different loss function for the autoencoder. You can simply penalize how far the images are in let's say l2 norm"
},
{
"start": 414.06,
"end": 416.06,
"text": " but here"
},
{
"start": 416.38,
"end": 419.74,
"text": " We have two distinct parts to the loss term. So"
},
{
"start": 421.26,
"end": 425.98,
"text": " And everything is probabilistic. So let's walk through this here. The first part"
},
{
"start": 427.98,
"end": 431.5,
"text": " Of the so we have two parts of the loss term and"
},
{
"start": 432.86,
"end": 435.98,
"text": " Here in particular q"
},
{
"start": 435.98,
"end": 442.14000000000004,
"text": " Is you can see here it takes as an is it is the distribution of z"
},
{
"start": 442.54,
"end": 447.18,
"text": " Conditional x and z will always be related representation of the"
},
{
"start": 447.82,
"end": 453.74,
"text": " Of the data and x will be the the data itself the data point"
},
{
"start": 454.3,
"end": 457.58000000000004,
"text": " So q will take the data point and produce"
},
{
"start": 458.70000000000005,
"end": 459.74,
"text": " z"
},
{
"start": 459.74,
"end": 462.70000000000005,
"text": " And the z specifically here what's meant is"
},
{
"start": 463.82,
"end": 465.66,
"text": " this"
},
{
"start": 465.66,
"end": 467.42,
"text": " This thing here"
},
{
"start": 467.42,
"end": 469.42,
"text": " This is z"
},
{
"start": 469.58000000000004,
"end": 471.26000000000005,
"text": " Whereas"
},
{
"start": 471.26000000000005,
"end": 473.26000000000005,
"text": " This is this is x"
},
{
"start": 473.82000000000005,
"end": 475.58000000000004,
"text": " And this is also"
},
{
"start": 475.58000000000004,
"end": 477.58000000000004,
"text": " Well, this is x"
},
{
"start": 478.54,
"end": 481.36,
"text": " Tilde or something whatever is produced by the decoder"
},
{
"start": 485.82000000000005,
"end": 490.22,
"text": " So basically what we're gonna do is"
},
{
"start": 490.22,
"end": 496.62,
"text": " We're going to punish the kl distance, which is a probabilistic distance measure. We're gonna"
},
{
"start": 499.58000000000004,
"end": 506.06,
"text": " Measure the distance between the distribution of z under x"
},
{
"start": 507.98,
"end": 511.66,
"text": " With the prior over z so p of z here"
},
{
"start": 512.38,
"end": 513.74,
"text": " This here"
},
{
"start": 513.74,
"end": 515.98,
"text": " Is the prior distribution"
},
{
"start": 515.98,
"end": 522.62,
"text": " Prior distribution over z and the prior distribution in va is is often to be taken as a"
},
{
"start": 523.24,
"end": 525.24,
"text": " Gaussian so"
},
{
"start": 525.74,
"end": 527.26,
"text": " We'll say all right"
},
{
"start": 527.26,
"end": 534.0600000000001,
"text": " So the our our kind of default assumption on the z variables is that they're that they're gaussians here"
},
{
"start": 535.98,
"end": 537.4200000000001,
"text": " And"
},
{
"start": 537.4200000000001,
"end": 543.5,
"text": " We're gonna force basically we're gonna force the encoder to come up with"
},
{
"start": 543.5,
"end": 551.74,
"text": " With encodings generally over the data set that are gaussians that are conformal to our prior"
},
{
"start": 554.38,
"end": 559.42,
"text": " So here we say specific prior pz I didn't mean to cross that out"
},
{
"start": 561.26,
"end": 568.14,
"text": " Right, so this second term enforces the the encoder to produce things that are"
},
{
"start": 568.76,
"end": 570.06,
"text": " Gaussian"
},
{
"start": 570.06,
"end": 574.14,
"text": " Um, it's specifically with our if our prior is let's say"
},
{
"start": 575.0999999999999,
"end": 577.0999999999999,
"text": " um"
},
{
"start": 577.66,
"end": 584.8599999999999,
"text": " Zero zero mean unit variance gaussians. It's gonna enforce that the first term here"
},
{
"start": 586.3,
"end": 593.18,
"text": " Is different the first term makes the image that has been input to the variational encoder and the image that has been output"
},
{
"start": 593.8199999999999,
"end": 596.3199999999999,
"text": " Close together again. This is a probabilistic"
},
{
"start": 596.32,
"end": 598.32,
"text": " Loss so"
},
{
"start": 598.4000000000001,
"end": 604.08,
"text": " What we're gonna do here is we're gonna take expectations. So the KL distance is also an expectation by the way"
},
{
"start": 606.88,
"end": 609.36,
"text": " We're gonna take expectations over"
},
{
"start": 610.08,
"end": 615.2,
"text": " Px which is the distribution of the data and also"
},
{
"start": 615.6800000000001,
"end": 619.2,
"text": " Over Q and Q is again our encoding"
},
{
"start": 619.84,
"end": 621.84,
"text": " mechanism"
},
{
"start": 621.84,
"end": 627.44,
"text": " Mechanism and we're simply going to punish the"
},
{
"start": 628.48,
"end": 631.36,
"text": " Or we're gonna here maximize the the log"
},
{
"start": 632.22,
"end": 636.24,
"text": " Probability which is equivalent to minimizing the negative log likelihood"
},
{
"start": 636.24,
"end": 641.84,
"text": " Which you might be familiar with of the data given the the z variables"
},
{
"start": 642.5600000000001,
"end": 644.5600000000001,
"text": " so"
},
{
"start": 645.6,
"end": 648.08,
"text": " And this is an expectation over q"
},
{
"start": 648.08,
"end": 652.96,
"text": " given x so what that means is basically we want the"
},
{
"start": 653.9200000000001,
"end": 654.96,
"text": " the"
},
{
"start": 654.96,
"end": 656.96,
"text": " probability"
},
{
"start": 658,
"end": 662,
"text": " Of this original data point we want"
},
{
"start": 663.44,
"end": 665.44,
"text": " Here we output x tilde"
},
{
"start": 666.88,
"end": 667.84,
"text": " We"
},
{
"start": 667.84,
"end": 673.5400000000001,
"text": " We want this to be close to x here. So what we can say is we want the probability"
},
{
"start": 674.6400000000001,
"end": 676.4000000000001,
"text": " that our model"
},
{
"start": 676.4,
"end": 678.4,
"text": " outputs x"
},
{
"start": 679.84,
"end": 687.4399999999999,
"text": " Which has been the original input right given this particular z that it produced to be high"
},
{
"start": 690,
"end": 693.92,
"text": " As an expectation of q"
},
{
"start": 697.92,
"end": 699.92,
"text": " Of z given x"
},
{
"start": 699.92,
"end": 707.92,
"text": " So as a bit cryptic, but it means here I input x into q I get out z"
},
{
"start": 708.64,
"end": 710.0799999999999,
"text": " and when I"
},
{
"start": 710.0799999999999,
"end": 713.92,
"text": " Have the z what I produce here is what I produce"
},
{
"start": 715.4399999999999,
"end": 723.1999999999999,
"text": " The likelihood that x the original image these are the same is produced should be high"
},
{
"start": 723.2,
"end": 729.12,
"text": " So that's a variational autoencoder. I simply encourage the latent representations to be"
},
{
"start": 729.36,
"end": 732.48,
"text": " close to my prior which is often Gaussian and I"
},
{
"start": 733.0400000000001,
"end": 738.1600000000001,
"text": " Encourage the output to be similar to the input which I do by"
},
{
"start": 738.6400000000001,
"end": 741.5200000000001,
"text": " Encouraging the likelihood that the output is the input"
},
{
"start": 742.32,
"end": 748.24,
"text": " All right, so cool. So what's that have to do with disentanglement disentanglement is property"
},
{
"start": 748.24,
"end": 755.04,
"text": " That now I would like to have in my model which is that"
},
{
"start": 755.84,
"end": 757.84,
"text": " these"
},
{
"start": 757.84,
"end": 759.6,
"text": " These things here"
},
{
"start": 759.6,
"end": 765.52,
"text": " Um, or we can also focus on these things here, however, you want to view it or these things here"
},
{
"start": 766.16,
"end": 767.1800000000001,
"text": " these"
},
{
"start": 767.1800000000001,
"end": 774.32,
"text": " Latent things that my encoder outputs somehow give me information about the data in a way"
},
{
"start": 774.32,
"end": 779.94,
"text": " That's disentangled what that means is I've already I've made an example that's already disentangled"
},
{
"start": 780.24,
"end": 785.7600000000001,
"text": " where I said, let's let's say we have images of a cat of cats and"
},
{
"start": 786.48,
"end": 794.8000000000001,
"text": " the fur color is going to be one variable and the color of the eyes of the cat is going to be another one and"
},
{
"start": 795.5200000000001,
"end": 800.5600000000001,
"text": " The position in the image is going to be another one. So these are all fairly independent, right?"
},
{
"start": 801.12,
"end": 803.12,
"text": " and so I"
},
{
"start": 803.12,
"end": 805.12,
"text": " if I change some"
},
{
"start": 805.6,
"end": 811.04,
"text": " Latent factor I can change them pretty much independently. So here this could be the fur color"
},
{
"start": 811.6,
"end": 816.4,
"text": " I can change it pretty much independently and cat will just have a different fur and so on"
},
{
"start": 816.64,
"end": 819.68,
"text": " What would be non disentangled representations?"
},
{
"start": 820.4,
"end": 822.4,
"text": " would be"
},
{
"start": 822.48,
"end": 826.24,
"text": " Let's say one encodes the fur of the cat"
},
{
"start": 826.8,
"end": 829.76,
"text": " and the other one encodes the"
},
{
"start": 829.76,
"end": 836.56,
"text": " Encodes the the species of cat because these are these are highly let's say entangled"
},
{
"start": 836.56,
"end": 841.12,
"text": " so the fur color is highly dependent on what species the cat is and"
},
{
"start": 842.72,
"end": 849.6,
"text": " It's not really so they kind of you can you can imagine it as these things being correlated, but it's slightly different"
},
{
"start": 851.04,
"end": 857.4399999999999,
"text": " And there are there's not an agreement on what this entanglement means really we just kind of imagine data is somehow"
},
{
"start": 857.44,
"end": 861.3000000000001,
"text": " Entangled and we want to kind of pull out these disentangled factors"
},
{
"start": 861.62,
"end": 866.82,
"text": " So what they focus on here and the easiest the easiest measure here"
},
{
"start": 867.46,
"end": 868.58,
"text": " is"
},
{
"start": 868.58,
"end": 871.7800000000001,
"text": " the following um, I might want to have some"
},
{
"start": 873.22,
"end": 874.2600000000001,
"text": " Space"
},
{
"start": 874.2600000000001,
"end": 880.9000000000001,
"text": " All right. So the easiest measure of disentanglement that is come up with here is the following"
},
{
"start": 881.7800000000001,
"end": 886.34,
"text": " Um, it's an assumption. The assumption is let's say there's data x"
},
{
"start": 886.34,
"end": 888.34,
"text": " right"
},
{
"start": 889.5400000000001,
"end": 892.1800000000001,
"text": " We'll call it random variable and we know"
},
{
"start": 893.14,
"end": 895.14,
"text": " We know we assume"
},
{
"start": 895.14,
"end": 896.26,
"text": " that"
},
{
"start": 896.26,
"end": 898.26,
"text": " This data is generated"
},
{
"start": 898.6600000000001,
"end": 900.6600000000001,
"text": " by a bunch of"
},
{
"start": 901.14,
"end": 903.86,
"text": " Latent variables z1 z2 z3"
},
{
"start": 905.3000000000001,
"end": 907.3000000000001,
"text": " Which are?"
},
{
"start": 907.36,
"end": 910.5,
"text": " Independent which means that and the technical"
},
{
"start": 910.5,
"end": 918.26,
"text": " In this is that the p of z which is all of them can be factorized"
},
{
"start": 919.54,
"end": 921.86,
"text": " into p of z i"
},
{
"start": 923.62,
"end": 925.86,
"text": " So they are independent"
},
{
"start": 927.54,
"end": 929.54,
"text": " Um and these"
},
{
"start": 930.74,
"end": 932.84,
"text": " Kind of determine independently"
},
{
"start": 934.02,
"end": 936.1,
"text": " the data x"
},
{
"start": 936.1,
"end": 937.62,
"text": " now"
},
{
"start": 937.62,
"end": 945.94,
"text": " What does that disentanglement of when my model has produced a disentangled representation means I now have a model some model"
},
{
"start": 946.98,
"end": 948.26,
"text": " m"
},
{
"start": 948.26,
"end": 951.78,
"text": " Which is going to give me a representation of x"
},
{
"start": 954.02,
"end": 957.3,
"text": " And the representation as we saw before"
},
{
"start": 958.02,
"end": 960.02,
"text": " um"
},
{
"start": 961.22,
"end": 963.22,
"text": " Could be"
},
{
"start": 963.22,
"end": 965.3,
"text": " these things here, that's the"
},
{
"start": 965.3,
"end": 966.9,
"text": " the"
},
{
"start": 966.9,
"end": 973.62,
"text": " Representation specifically what these people do is they say okay the mean of the distribution that my encoder gives me"
},
{
"start": 973.9399999999999,
"end": 975.9399999999999,
"text": " That's the representation of x"
},
{
"start": 981.78,
"end": 989.2199999999999,
"text": " All right, so this gives you a representation of x from which you then might want to you know reconstruct x"
},
{
"start": 990.0999999999999,
"end": 991.78,
"text": " over here"
},
{
"start": 991.78,
"end": 992.9799999999999,
"text": " x"
},
{
"start": 992.98,
"end": 1001.38,
"text": " So then but so the important thing is when is the representation disentangled the representation is disentangled in the easiest sense"
},
{
"start": 1002.1,
"end": 1004.58,
"text": " If the following holds when I change"
},
{
"start": 1005.78,
"end": 1007.78,
"text": " um"
},
{
"start": 1008.66,
"end": 1011.0600000000001,
"text": " When I change z i"
},
{
"start": 1012.26,
"end": 1017.62,
"text": " So I introduce a delta to z i to any of these three that means"
},
{
"start": 1017.62,
"end": 1021.14,
"text": " That in the representation of x"
},
{
"start": 1022.66,
"end": 1024.66,
"text": " Which we're just going to say"
},
{
"start": 1025.54,
"end": 1032.58,
"text": " So if there's three dimensions of z we just assume kind of we know that and we also make the representation three-dimensional"
},
{
"start": 1033.22,
"end": 1034.34,
"text": " then"
},
{
"start": 1034.34,
"end": 1036.34,
"text": " exactly one"
},
{
"start": 1037.46,
"end": 1041.78,
"text": " Factor in this is going to change so if I change one"
},
{
"start": 1042.5,
"end": 1045.22,
"text": " factor of the true underlying distribution"
},
{
"start": 1045.22,
"end": 1047.22,
"text": " um"
},
{
"start": 1047.22,
"end": 1051.38,
"text": " Which is independently which all the latent factors are independent then"
},
{
"start": 1051.8600000000001,
"end": 1056.98,
"text": " Only one factor in my representation changes. So if that's the case then"
},
{
"start": 1057.54,
"end": 1065.7,
"text": " Kind of I can be fairly sure that i've captured the the true latent structure of the data, right if one if if one of the"
},
{
"start": 1066.5,
"end": 1069.06,
"text": " Of the if I change one of the the z here"
},
{
"start": 1070.5,
"end": 1072.5,
"text": " Let's say I change the z3"
},
{
"start": 1072.5,
"end": 1075.06,
"text": " and only then uh"
},
{
"start": 1075.86,
"end": 1077.86,
"text": " r3"
},
{
"start": 1078.66,
"end": 1084.66,
"text": " So I change z3 let's say I have access to the true underlying distribution I ask the the world"
},
{
"start": 1085.22,
"end": 1091.7,
"text": " Ask the world to give me a picture of a cat that where the fur color is different and then I put it"
},
{
"start": 1092.34,
"end": 1094.34,
"text": " I get a data point"
},
{
"start": 1094.74,
"end": 1098.26,
"text": " and then I put it through my model I get a representation and"
},
{
"start": 1099.3,
"end": 1100.82,
"text": " only"
},
{
"start": 1100.82,
"end": 1106.4199999999998,
"text": " From the cat that I had before only one of the factors of my representation changes"
},
{
"start": 1106.8999999999999,
"end": 1113.9399999999998,
"text": " Then I call it disentangled then I can be fairly sure. Okay my representation this dimension of my representation captures the fur color"
},
{
"start": 1114.4199999999998,
"end": 1116.98,
"text": " independently of the other factors"
},
{
"start": 1118.4199999999998,
"end": 1125.9399999999998,
"text": " All right, so that's disentanglement and you notice it requires actually access here to the true"
},
{
"start": 1127.22,
"end": 1128.5,
"text": " distribution"
},
{
"start": 1128.5,
"end": 1132.66,
"text": " Distribution of how the data is generated by the world"
},
{
"start": 1133.22,
"end": 1137.86,
"text": " So this is something you generally don't have but um, it's a technical notion"
},
{
"start": 1138.26,
"end": 1140.26,
"text": " So you can you can certainly postulate it"
},
{
"start": 1140.9,
"end": 1142.9,
"text": " And it's it"
},
{
"start": 1143.62,
"end": 1148.1,
"text": " It's a nice framework and this paper basically proves that"
},
{
"start": 1149.84,
"end": 1153.54,
"text": " Generally learning disentangled representation in that way is impossible"
},
{
"start": 1154.18,
"end": 1155.46,
"text": " um"
},
{
"start": 1155.46,
"end": 1162.18,
"text": " If you don't have some if you don't make some assumptions some a priori assumptions on your data and your model"
},
{
"start": 1163.7,
"end": 1165.14,
"text": " so"
},
{
"start": 1165.14,
"end": 1166.98,
"text": " This is a theorem here"
},
{
"start": 1166.98,
"end": 1168.66,
"text": " and we"
},
{
"start": 1168.66,
"end": 1171.22,
"text": " See here p is any generative model"
},
{
"start": 1171.94,
"end": 1173.94,
"text": " Which admits this factorization"
},
{
"start": 1174.74,
"end": 1180.26,
"text": " Right does that that's what we talked about the true underlying generative process is"
},
{
"start": 1180.26,
"end": 1184.9,
"text": " Is independent in so"
},
{
"start": 1186.34,
"end": 1188.34,
"text": " In its constituents"
},
{
"start": 1188.66,
"end": 1193.22,
"text": " That means there's a bunch of latent variables. They independently from each other produce a data point"
},
{
"start": 1194.58,
"end": 1196.02,
"text": " right"
},
{
"start": 1196.02,
"end": 1198.02,
"text": " X is the data observations"
},
{
"start": 1198.42,
"end": 1200.82,
"text": " Then there exists an infinite family"
},
{
"start": 1201.7,
"end": 1203.7,
"text": " of bijective functions"
},
{
"start": 1203.78,
"end": 1205.78,
"text": " right such that"
},
{
"start": 1205.78,
"end": 1209.3,
"text": " This and this and this and this"
},
{
"start": 1210.34,
"end": 1211.3,
"text": " Okay"
},
{
"start": 1211.3,
"end": 1212.66,
"text": " What that means?"
},
{
"start": 1212.66,
"end": 1215.3799999999999,
"text": " is so this thing here"
},
{
"start": 1216.1,
"end": 1218.1,
"text": " basically just means that the"
},
{
"start": 1218.8999999999999,
"end": 1226.26,
"text": " um the distributions agree so that the the the overall distributions the let's say the"
},
{
"start": 1227.22,
"end": 1229.22,
"text": " it's not exactly that but the"
},
{
"start": 1230.26,
"end": 1232.26,
"text": " posterior distributions"
},
{
"start": 1232.26,
"end": 1235.62,
"text": " Um, let's say the data looks the same right"
},
{
"start": 1236.58,
"end": 1239.86,
"text": " That what comes out of the process looks the same"
},
{
"start": 1241.22,
"end": 1245.3799999999999,
"text": " So there is there is functions that transform"
},
{
"start": 1246.02,
"end": 1246.98,
"text": " the"
},
{
"start": 1246.98,
"end": 1251.3,
"text": " latent distribution into some other distribution, but they"
},
{
"start": 1252.26,
"end": 1254.26,
"text": " look the same in"
},
{
"start": 1255.14,
"end": 1257.14,
"text": " cumulatively"
},
{
"start": 1258.42,
"end": 1260.5,
"text": " All right, and then we have the"
},
{
"start": 1260.5,
"end": 1263.46,
"text": " All right, and then this part here"
},
{
"start": 1264.42,
"end": 1269.3,
"text": " Means you'll see the derivative of fi of u with respect to"
},
{
"start": 1270.42,
"end": 1271.62,
"text": " some"
},
{
"start": 1271.62,
"end": 1275.54,
"text": " Uj which you'll notice i and j are different. Um, this"
},
{
"start": 1276.26,
"end": 1277.7,
"text": " this means"
},
{
"start": 1277.7,
"end": 1279.46,
"text": " that"
},
{
"start": 1279.46,
"end": 1281.46,
"text": " basically the dimensions"
},
{
"start": 1282.5,
"end": 1283.78,
"text": " are"
},
{
"start": 1283.78,
"end": 1285.86,
"text": " Entangled it means that if I"
},
{
"start": 1286.58,
"end": 1288.58,
"text": " take the derivative of"
},
{
"start": 1288.58,
"end": 1290.58,
"text": " one entry"
},
{
"start": 1290.6599999999999,
"end": 1293.3,
"text": " In the in the f in the function"
},
{
"start": 1293.9399999999998,
"end": 1295.9399999999998,
"text": " output and I derive it"
},
{
"start": 1296.34,
"end": 1302.34,
"text": " By another entry then I get a non-zero derivative which means that this"
},
{
"start": 1303.22,
"end": 1304.6599999999999,
"text": " Uj"
},
{
"start": 1304.6599999999999,
"end": 1306.6599999999999,
"text": " influences fi"
},
{
"start": 1307.22,
"end": 1314.1,
"text": " Which basically means that I can produce I can take the z I can transform it in"
},
{
"start": 1314.1,
"end": 1320.4199999999998,
"text": " In so z is independent. So it means the i-th dimension has no influence on the j-th dimension"
},
{
"start": 1320.98,
"end": 1324.4199999999998,
"text": " Of the of the output and I can transform it into something"
},
{
"start": 1324.8999999999999,
"end": 1329.3,
"text": " Where that's no longer the case where the i-th and the j-th dimension very much"
},
{
"start": 1329.9399999999998,
"end": 1331.3,
"text": " uh"
},
{
"start": 1331.3,
"end": 1333.06,
"text": " Kind of are"
},
{
"start": 1333.06,
"end": 1334.8999999999999,
"text": " entangled or covariate"
},
{
"start": 1334.8999999999999,
"end": 1335.9399999999998,
"text": " so"
},
{
"start": 1335.9399999999998,
"end": 1338.1799999999998,
"text": " This means I can take the z that"
},
{
"start": 1338.18,
"end": 1344.74,
"text": " That's kind of everything is independent. I can transform it into something where everything is dependent and they give a nice example here"
},
{
"start": 1344.74,
"end": 1347.14,
"text": " So they say let's say we have"
},
{
"start": 1347.78,
"end": 1349.0600000000002,
"text": " Gaussians"
},
{
"start": 1349.0600000000002,
"end": 1352.18,
"text": " In two dimensions, so we have one Gaussian here"
},
{
"start": 1352.74,
"end": 1355.54,
"text": " And let me see if I can draw this one Gaussian here"
},
{
"start": 1356.18,
"end": 1358.66,
"text": " Right in two dimensions. They're completely independent"
},
{
"start": 1359.46,
"end": 1362.42,
"text": " um what you'll find is that the kind of"
},
{
"start": 1363.38,
"end": 1365.38,
"text": " distribution overall has"
},
{
"start": 1365.38,
"end": 1367.7,
"text": " Iso lines like this"
},
{
"start": 1367.7,
"end": 1373.8600000000001,
"text": " Right, it gives you kind of a hump in the middle two-dimensionally. You can maybe imagine like a bit of a mountain in the middle"
},
{
"start": 1374.8200000000002,
"end": 1376.1000000000001,
"text": " um"
},
{
"start": 1376.1000000000001,
"end": 1379.3000000000002,
"text": " All right. So this is what you this is the kind of output distribution"
},
{
"start": 1379.38,
"end": 1386.42,
"text": " If you if you don't know about the underlying factors, you simply see the cumulative distribution, which would be the the big p here"
},
{
"start": 1387.14,
"end": 1388.42,
"text": " um"
},
{
"start": 1388.42,
"end": 1391.6200000000001,
"text": " All right. Now we transform this into with f"
},
{
"start": 1392.18,
"end": 1394.18,
"text": " And f is simply a rotation"
},
{
"start": 1394.18,
"end": 1396.18,
"text": " by 45 degrees"
},
{
"start": 1396.18,
"end": 1398.74,
"text": " right, so two new axes this"
},
{
"start": 1399.38,
"end": 1401.38,
"text": " and that and again"
},
{
"start": 1402.1000000000001,
"end": 1405.14,
"text": " Our two gaussians are going to be transformed these"
},
{
"start": 1405.94,
"end": 1412.18,
"text": " Right. So these are not these are not disentangled anymore. Well in the in the notion"
},
{
"start": 1413.22,
"end": 1417.3,
"text": " I can't say it like this, but this is easiest to say so these are these are kind of"
},
{
"start": 1418.26,
"end": 1422.8200000000002,
"text": " Now that it's rotated in terms of the original coordinate system, which would go like this"
},
{
"start": 1422.82,
"end": 1430.34,
"text": " These very much depend on each other right the jth dimension the if dimension depend on each other because if I sample from one of the gaussians"
},
{
"start": 1430.34,
"end": 1434.26,
"text": " I need now basically two coordinates to describe"
},
{
"start": 1434.98,
"end": 1436.98,
"text": " where it is or"
},
{
"start": 1437.3,
"end": 1439.3,
"text": " Yeah, one isn't just"
},
{
"start": 1440.34,
"end": 1444.26,
"text": " So if I sample from one Gaussian I need both the coordinates"
},
{
"start": 1444.8999999999999,
"end": 1447.9399999999998,
"text": " but the cumulative distribution or the"
},
{
"start": 1449.06,
"end": 1451.06,
"text": " That is still the same"
},
{
"start": 1451.06,
"end": 1454.5,
"text": " That is still going to look exactly the same"
},
{
"start": 1455.78,
"end": 1457.3,
"text": " so"
},
{
"start": 1457.3,
"end": 1463.46,
"text": " It's again a hump. So it's basically an isometric hump in every direction if I rotate that the"
},
{
"start": 1464.1799999999998,
"end": 1467.54,
"text": " It looks exactly the same. This is the p here"
},
{
"start": 1468.58,
"end": 1473.46,
"text": " But now the the if dimension and the jth dimension very much influence each other"
},
{
"start": 1474.4199999999998,
"end": 1477.06,
"text": " um, and yeah, interestingly the"
},
{
"start": 1477.06,
"end": 1482.5,
"text": " If you now look at disentanglement if I just have if if I now produce"
},
{
"start": 1483.3799999999999,
"end": 1485.3799999999999,
"text": " data"
},
{
"start": 1485.86,
"end": 1487.1399999999999,
"text": " x"
},
{
"start": 1487.1399999999999,
"end": 1488.1,
"text": " here"
},
{
"start": 1488.1,
"end": 1491.22,
"text": " x1 and here I produce data"
},
{
"start": 1491.86,
"end": 1493.54,
"text": " x2"
},
{
"start": 1493.54,
"end": 1495.3,
"text": " and both"
},
{
"start": 1495.3,
"end": 1497.3,
"text": " go through my model"
},
{
"start": 1497.54,
"end": 1500.34,
"text": " and give me our representation"
},
{
"start": 1500.8999999999999,
"end": 1502.4199999999998,
"text": " of x1"
},
{
"start": 1502.4199999999998,
"end": 1504.4199999999998,
"text": " and the representation"
},
{
"start": 1504.42,
"end": 1508.18,
"text": " of x1 and the representation of x2"
},
{
"start": 1509.22,
"end": 1510.8200000000002,
"text": " I have"
},
{
"start": 1510.8200000000002,
"end": 1515.38,
"text": " Without seeing the underlying structure. I have no idea which one of those two"
},
{
"start": 1516.26,
"end": 1522.42,
"text": " It comes from and thereby I have zero chance basically. It's a luck lucky guess"
},
{
"start": 1523.14,
"end": 1524.1000000000001,
"text": " um"
},
{
"start": 1524.1000000000001,
"end": 1529.8600000000001,
"text": " Which one it comes from and there's an infinite family. So I will never find the true underlying"
},
{
"start": 1529.86,
"end": 1533.8,
"text": " distribution here and thereby I will never"
},
{
"start": 1534.76,
"end": 1535.9599999999998,
"text": " um"
},
{
"start": 1535.9599999999998,
"end": 1540.12,
"text": " I will never be able to satisfy this property that if one of the z changes"
},
{
"start": 1540.6,
"end": 1544.9199999999998,
"text": " Then only one of the factors of my representation will change because if I"
},
{
"start": 1545.56,
"end": 1548.28,
"text": " Say, oh, well, obviously this is the case"
},
{
"start": 1548.76,
"end": 1552.52,
"text": " Then i'm going to make a different model and if I say well, this is the case"
},
{
"start": 1553.08,
"end": 1556.12,
"text": " I'm going to make a different model. I don't know which one it is"
},
{
"start": 1556.12,
"end": 1560.6,
"text": " So I have to choose one and it could be the other one. So i'm bound to be wrong in this case"
},
{
"start": 1560.84,
"end": 1564.04,
"text": " 50% of the time, but if it's an infinite family i'm bound to be wrong"
},
{
"start": 1564.6799999999998,
"end": 1566.36,
"text": " every time"
},
{
"start": 1566.36,
"end": 1568.12,
"text": " basically, so"
},
{
"start": 1568.12,
"end": 1570.6799999999998,
"text": " That's what the theorem basically says I can't"
},
{
"start": 1571.32,
"end": 1576.04,
"text": " Decide on the true underlying distribution. Um, there's an infinite family that"
},
{
"start": 1576.6599999999999,
"end": 1579.58,
"text": " Transforms it into it. It transforms every distribution"
},
{
"start": 1580.04,
"end": 1585.58,
"text": " into some other distribution that has basically complete opposite properties of entanglement"
},
{
"start": 1585.58,
"end": 1591.5,
"text": " And I need to choose one and I will never choose the right one because i'm not that lucky"
},
{
"start": 1592.22,
"end": 1596.32,
"text": " And thereby I can't do representation learning that's disentangled"
},
{
"start": 1597.74,
"end": 1602.62,
"text": " All right, so that's the main claim of the paper and um"
},
{
"start": 1603.74,
"end": 1605.74,
"text": " There is a lot of experiments here"
},
{
"start": 1606.22,
"end": 1609.6599999999999,
"text": " so what the paper also does is they produce some new"
},
{
"start": 1609.66,
"end": 1616,
"text": " Data sets and they test a lot of a lot of architectures basically they say just because it's theoretically impossible"
},
{
"start": 1616.48,
"end": 1621.44,
"text": " It's not impractical because we can actually make these underlying assumptions"
},
{
"start": 1621.92,
"end": 1625.1200000000001,
"text": " like we can make some assumptions on the data and then and then"
},
{
"start": 1625.8400000000001,
"end": 1627.52,
"text": " we kind of"
},
{
"start": 1627.52,
"end": 1628.5600000000002,
"text": " can"
},
{
"start": 1628.5600000000002,
"end": 1631.44,
"text": " attempt to do disentanglement learning so they do these"
},
{
"start": 1632.4,
"end": 1638.16,
"text": " data sets and they test different VAE's architectures on it and they basically"
},
{
"start": 1638.16,
"end": 1640.16,
"text": " Um establish where"
},
{
"start": 1640.96,
"end": 1644.24,
"text": " More work should go. So that's that's kind of the rest of the paper"
},
{
"start": 1644.4,
"end": 1647.3600000000001,
"text": " I encourage you to look at the rest of the paper"
},
{
"start": 1647.3600000000001,
"end": 1651.52,
"text": " I just wanted to give a quick introduction to VAEs and to disentanglement"
},
{
"start": 1652.16,
"end": 1654.16,
"text": " to entangle representation learning"
},
{
"start": 1654.48,
"end": 1655.68,
"text": " I"
},
{
"start": 1655.68,
"end": 1657.68,
"text": " Wasn't technically correct"
},
{
"start": 1657.68,
"end": 1668.4,
"text": " Uh in every detail, but I hope that it's enough and have fun"
}
] |
dPsXxLyqpfs | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | World Models | [
"Science & Technology"
] | [
"deep learning",
"reinforcement learning",
"deep reinforcement learning",
"deep rl",
"schmidhuber",
"environment model",
"imagination",
"vae",
"rnn",
"lstm"
] | Authors: David Ha, Jürgen Schmidhuber
Abstract:
We explore building generative neural network models of popular reinforcement learning environments. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment.
https://arxiv.org/abs/1803.10122 | Hi, today we're looking at World Models by David Ha and Jürgen Schmidhuber. This is a paper that's concerned with reinforcement learning and especially with the problem of, say, you have an environment that you interact with and you kind of need to learn to act in it, but it could be, for example, very expensive to always query the environment. So let's say you have a robot and it needs to do something in the world, and you kind of, to have a robot execute something and then observe it, is quite expensive, costs electricity and so on. So you would like to sort of minimize how many times this happens. So here, searching for a good picture, they're concerned with problems, for example, like this. This is a race car simulator. There's an OpenAI gym environment for that. The other one that they use is a so-called like a doom experiment where, as you look at this, there's a couple of monsters and they're shooting fireballs at you and the task is just to kind of avoid the fireballs. So the entire point of the paper is that I don't actually need to interact with the environment in order to learn it. I can simply kind of learn a model of the environment and then learn using that model. So basically, I can learn how the environment works and then simply use my imagination of the environment, my model, in order to learn from that so I don't have to interact with the real environment anymore. So how do they do this? They do it in multiple stages. Here, first thing they do is they collect a bunch of samples from the environment. So they go to the environment, they simply do a random policy and then they collect a bunch of samples. I think the process is outlined down here somewhere. We saw it before. Here, collect 10,000 rollouts from a random policy. Next, they train a VAE here to kind of learn the environment. So that's where that comes in. This is all done in stages, not end-to-end. The VAE is simply a model that takes, in this case, a video frame here. It sends it through an encoder neural network to obtain what's called a latent representation, which is a much smaller dimensional representation. So if the image is 64 by 64 pixels, then the latent code could be as little as 100 or even 10 dimensional. So you see that there's quite a bit of compression going on. This is a variational autoencoder. It's not really important here that it's variational since the difference is the variational autoencoder is kind of a stochastic process, whereas the regular autoencoder isn't. But they introduce stochasticity later again. So it's not particularly important. So it's a variational autoencoder, which means they obtain a latent representation that defines distribution over outputs. So they send this sample from this latent distribution that they obtain, and then they feed this to the decoder. And the decoder kind of gives back what it thinks the encoder encoded. So the decoder tries to reconstruct as close as possible this original frame that was given to the encoder. But of course it can't because we've compressed it so much to this lower dimensional representation here. So it kind of does its best effort. So what you hope to achieve with this is that kind of the decoder learns, for example, there's always here. This is the ceiling right here. It's always gray. So basically, you shouldn't actually need to encode this in your Z. If it's always gray, the decoder should learn this by itself. 
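As a rough sketch of that vision model: a small convolutional encoder that maps a 64 by 64 RGB frame to a mean and log-variance and samples a compact z. The layer sizes and the 32-dimensional latent here are illustrative choices in the spirit of the paper, not its exact architecture:

```python
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Compresses a 64x64 RGB game frame into a small latent vector z.

    Illustrative sketch only; exact channel counts and z_dim are assumptions.
    """
    def __init__(self, z_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),    # 64 -> 31
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),   # 31 -> 14
            nn.Conv2d(64, 128, 4, stride=2), nn.ReLU(),  # 14 -> 6
            nn.Conv2d(128, 256, 4, stride=2), nn.ReLU(), # 6  -> 2
        )
        self.mu = nn.Linear(256 * 2 * 2, z_dim)
        self.log_var = nn.Linear(256 * 2 * 2, z_dim)

    def forward(self, frame):
        h = self.conv(frame).flatten(start_dim=1)
        mu, log_var = self.mu(h), self.log_var(h)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()
        return z, mu, log_var

frame = torch.rand(1, 3, 64, 64)        # one game frame
z, mu, log_var = FrameEncoder()(frame)  # 32 numbers instead of 12288 pixels
```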
So your hope is that the Z, the latent representation, will simply end up containing just the information that's kind of different or between the individual frames, which here I guess would be kind of the fireballs coming and your position relative to them. That's what's changing if you think about this environment. So your hope is that the latent representation captures only that, whereas all the static parts that are irrelevant or never change are kind of captured by the encoder and the decoder architecture by itself. So yeah, it's important to note the encoder and decoder are obviously always the same for all the frames, whereas the Z representation, of course, is there is one per frame, so each frame will give you a different Z. And that's so you can imagine how that works or how that's going to be useful. So they train this on like a randomly collected sample of the environment until they're confident they now have a good model of the environment. And then what they do next is they use this in order to train an RNN. So again, they kind of have their compression model of the environment. What they do now is they use these Z states you see here, here, here, here that they get from that. And they train how these latent representations evolve over time. So with an RNN here goes over time. So the RNN will always kind of predict what's the next state of the environment going to be. But importantly, maybe compared to environment models that we've discussed before in the, for example, imagination augmented agent paper, there we always try to directly predict the future pixels, so to say, of the future frame. Here, the environment model is over the latent representation. Of course, this means that the this is a much smaller space. So if your compression model is good, then this should be much easier to learn than, say, like a full end to end environment model. So this model learns how your latent states evolve over time, given your actions. So you can imagine the Z being an abstract representation of your state and then your action. And then this goes into the RNN and the RNN will predict what's the next latent representation. And there is what's called a temperature parameter to control the stochasticity. I've already told you this, there is a stochasticity built into this. So the RNN will simply output like some vector, what it thinks is the next thing going to be. And they don't use this directly as the next step, but they parameterize a kind of a mixture of Gaussian distributions coupled with a decoder here in order to give a random distribution over the next state. And they control the amount of randomness with the temperature parameter. They argue that this comes in handy later. So all right, so what do we have? We have a system that can compress the environment into what we would call an essential part. Every frame we extract what's important in that frame. Then next we have a model that can predict, given a state and an action, what's the next state going to be, the next latent state. So technically we now have an environment model, right, given a state. We can simply, given a state and a policy, we can simply use this model to roll forward. So the last component is the actual policy. And the actual policy here, as you can see, is in their case simply a linear model. The linear model will take the z, which is the latent representation of the current state, and the h, which is the current state of the RNN that models the environment over time. 
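Before getting to how z and h feed into the controller, here is a minimal sketch of that mixture-density RNN: an LSTM that takes the current latent z and action a and outputs, for every latent dimension, the weights, means and standard deviations of a Gaussian mixture over the next z, with a temperature knob on the resulting distribution. The hidden size, the number of mixture components and the exact way the temperature is applied are assumptions for illustration, not the paper's settings:

```python
import torch
import torch.nn as nn

class MDNRNN(nn.Module):
    """Sketch of the memory model: a distribution over the next latent z."""
    def __init__(self, z_dim=32, a_dim=3, hidden=256, n_mix=5):
        super().__init__()
        self.rnn = nn.LSTM(z_dim + a_dim, hidden, batch_first=True)
        # Per latent dimension: weight, mean and log-std for each of n_mix
        # Gaussian mixture components.
        self.head = nn.Linear(hidden, z_dim * n_mix * 3)
        self.z_dim, self.n_mix = z_dim, n_mix

    def forward(self, z, a, state=None, temperature=1.0):
        out, state = self.rnn(torch.cat([z, a], dim=-1), state)
        params = self.head(out).reshape(*out.shape[:2], self.z_dim, self.n_mix, 3)
        logit_pi, mu, log_std = params.unbind(dim=-1)
        # One simple way to apply the temperature: flatten the mixture weights
        # and widen the components, so imagined futures get more random.
        pi = torch.softmax(logit_pi / temperature, dim=-1)
        std = log_std.exp() * temperature ** 0.5
        return pi, mu, std, state

z = torch.randn(1, 10, 32)  # a sequence of encoded (or dreamed) latents
a = torch.randn(1, 10, 3)   # the actions taken along the way
pi, mu, std, _ = MDNRNN()(z, a, temperature=1.2)
```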
And it is simply a linear function of the two that gives you the action probabilities, or I guess the logits of the actions. So it's a really, really simple controller over these things. They do this in order to show that the main part of the work is being done by the environment model; given the environment model, you only need very few parameters to then learn a policy. Here is what I said in a diagram. The observation goes into the compression of the VAE; the latent representation of that goes into the RNN together with the hidden state from the last step. This outputs a new hidden state, which goes into the controller, and we also directly feed this z into the controller. From these two, we perform an action, and now we have a choice. The action can go to the environment, which gives you the next observation, and at the same time, since you need to update your RNN, it also goes back to update the RNN, because it will need to predict the next hidden state. The thing is, we can also now leave away the path through the real environment, which means we can simply take our RNN, imagine the next latent representation, put it through the decoder part of the VAE, and use that as an observation. I hope this makes sense; it's rather intuitive, right? You have a model of the environment, so you can simply use it instead of the real environment. So, there's a bit of pseudocode here, and they do a bunch of experiments. They first show that their compression works: this is the real frame and this is the reconstructed frame, which captures the essence of what's going on. And I actually want to go down here, towards the VizDoom experiment. So, what they do in the car racing experiment first is learn this entire pipeline, and then they learn a policy in the real world, in the environment, using this model up here, the procedure where they always go to the environment. Here is the exact experimental setup: first they collect, again, rollouts from a random policy, they train the VAE, they train the RNN, and then they learn the controller using the entire model, but in the real world. So they always interact with the environment, but because they act on their latent representation of the observation, and not directly on the observation, they get a higher score. And also, the policy that they learn in the real environment transfers to the environment model: if they use the imagined model as an environment, the policy also performs well there. In the next experiment, they're going to try to do this the other way around. They're going to try to learn only using their model of the environment, and then see whether or not the policy transfers to the true environment. So, that's what they do here: they collect, again, a sample from the environment, they train the VAE, they train the RNN, and then they simply use this virtual environment, as they call it, in order to learn a policy, and at the end they use the learned policy in the actual environment. And given the results you see here, the best it does, I would say, is about here, where the actual score in this setting, and also in this one, is higher than the previous best algorithm in the OpenAI Gym when you go from virtual to actual.
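To show how small that controller is, and how a rollout inside the imagined ("virtual") environment might look, here is a sketch that reuses the hypothetical MDNRNN and sample_next_z defined above. It assumes discrete actions and a 100-step rollout purely for illustration; the real environments and action spaces differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearController(nn.Module):
    """Policy: a single linear map from [z, h] to action logits."""
    def __init__(self, z_dim=32, h_dim=256, action_dim=3):
        super().__init__()
        self.fc = nn.Linear(z_dim + h_dim, action_dim)

    def forward(self, z, h):
        return self.fc(torch.cat([z, h], dim=-1))

def dream_rollout(mdnrnn, controller, z0, steps=100, temperature=1.15):
    """Roll the policy forward purely inside the learned model (no real environment)."""
    z, hidden = z0, None
    for _ in range(steps):
        # h is the RNN's last hidden state (zeros before the first step)
        h = hidden[0][-1] if hidden is not None else torch.zeros(z.shape[0], 256)
        action_logits = controller(z, h)
        a = F.one_hot(action_logits.argmax(-1), action_logits.shape[-1]).float()
        mix_logits, mu, log_std, hidden = mdnrnn(z.unsqueeze(1), a.unsqueeze(1), hidden)
        # Imagine the next latent instead of querying the environment
        z = sample_next_z(mix_logits[:, -1], mu[:, -1], log_std[:, -1], temperature)
    return z
```

The point of keeping the controller this small is exactly what the transcript says: almost all the capacity sits in the world model, so the policy itself has very few parameters to learn.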
So, what this means is that you can train using this imagined model, and it will actually transfer, but there's a crucial thing, and that is this temperature parameter here. You can see that a lot of the time they actually don't manage to reach a good score if this parameter is set wrong. What does this parameter do? It controls, as we discussed, the stochasticity of the model. Basically, the environment model doesn't directly imagine a single future state, but a distribution over future states. And the higher this parameter, the more stochastic this distribution is, basically the more uniform, the more entropy you have over these future states. We've seen this temperature parameter before. This is important, and they go to some length explaining why on this entire page here that we skipped, on cheating the world model. Basically they say: if you have a model of the environment that's wrong, and you train a policy on it, it's probably going to find a policy that exploits the wrongness of this model. So you might be able to walk through walls or fly or ignore the fireballs, or find that if you stand next to a wall, in your imagination, you'll never get hit, something like this, which isn't true in the real world. The policy will exploit that. And to counter this, they simply turn up the temperature parameter, giving them a more stochastic procedure, meaning they imagine a lot of different futures and train their policy on all of them, or in expectation over a sample of them. Which means that if the environment model is wrong... I want to say this corrects for it, but it doesn't. Rather, if it's wrong, you still sample different futures, so if it produces one wrong future, you still have the other ones to punish the policy if it tries to exploit this one mistake. At least that's the reasoning behind it. So that's how they do this. You can interact with their trained environment models online. They also give a look at what they would like to have: instead of collecting the data for the environment model from random rollouts, they would train it, then use the policy again to collect more data, train a better environment model, then use the better environment model to train the policy further, and so on in a stepwise fashion. But they don't actually do this, they simply describe it. And the rest of the paper is a bit of related work and discussion. It's very prosaically written, kind of different from what you're used to if you read a lot of these papers. But yeah, I hope you now know what's going on, and see you next time. | [
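To see mechanically what turning up the temperature does, here is a tiny self-contained illustration of how dividing the mixture logits by a temperature flattens the mixture weights; the numbers are made up.

```python
import torch

logits = torch.tensor([2.0, 0.5, -1.0])  # unnormalized mixture weights from the MDN-RNN (made-up values)
for tau in [0.5, 1.0, 1.15, 2.0]:
    probs = torch.softmax(logits / tau, dim=-1)
    print(f"tau={tau}: {[round(p, 3) for p in probs.tolist()]}")
# Higher tau -> flatter mixture weights (and, in the sampling sketch above, wider Gaussians),
# so the imagined futures are more varied and harder for the policy to exploit.
```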
{
"start": 0,
"end": 6,
"text": " Hi, today we're looking at World Models by David Ha and Jürgen Schmidhuber."
},
{
"start": 6,
"end": 13,
"text": " This is a paper that's concerned with reinforcement learning and especially with the problem of,"
},
{
"start": 13,
"end": 20,
"text": " say, you have an environment that you interact with and you kind of need to learn to act in it,"
},
{
"start": 20,
"end": 26,
"text": " but it could be, for example, very expensive to always query the environment."
},
{
"start": 26,
"end": 33,
"text": " So let's say you have a robot and it needs to do something in the world,"
},
{
"start": 33,
"end": 44,
"text": " and you kind of, to have a robot execute something and then observe it, is quite expensive, costs electricity and so on."
},
{
"start": 44,
"end": 50,
"text": " So you would like to sort of minimize how many times this happens."
},
{
"start": 50,
"end": 59,
"text": " So here, searching for a good picture, they're concerned with problems, for example, like this."
},
{
"start": 59,
"end": 66,
"text": " This is a race car simulator. There's an OpenAI gym environment for that."
},
{
"start": 66,
"end": 76,
"text": " The other one that they use is a so-called like a doom experiment where, as you look at this,"
},
{
"start": 76,
"end": 83,
"text": " there's a couple of monsters and they're shooting fireballs at you and the task is just to kind of avoid the fireballs."
},
{
"start": 83,
"end": 91,
"text": " So the entire point of the paper is that I don't actually need to interact with the environment in order to learn it."
},
{
"start": 91,
"end": 98,
"text": " I can simply kind of learn a model of the environment and then learn using that model."
},
{
"start": 98,
"end": 105,
"text": " So basically, I can learn how the environment works and then simply use my imagination of the environment,"
},
{
"start": 105,
"end": 114,
"text": " my model, in order to learn from that so I don't have to interact with the real environment anymore."
},
{
"start": 114,
"end": 119,
"text": " So how do they do this? They do it in multiple stages."
},
{
"start": 119,
"end": 128,
"text": " Here, first thing they do is they collect a bunch of samples from the environment."
},
{
"start": 128,
"end": 136,
"text": " So they go to the environment, they simply do a random policy and then they collect a bunch of samples."
},
{
"start": 136,
"end": 143,
"text": " I think the process is outlined down here somewhere. We saw it before."
},
{
"start": 143,
"end": 155,
"text": " Here, collect 10,000 rollouts from a random policy. Next, they train a VAE here to kind of learn the environment."
},
{
"start": 155,
"end": 161,
"text": " So that's where that comes in. This is all done in stages, not end-to-end."
},
{
"start": 161,
"end": 169,
"text": " The VAE is simply a model that takes, in this case, a video frame here."
},
{
"start": 169,
"end": 174,
"text": " It sends it through an encoder neural network to obtain what's called a latent representation,"
},
{
"start": 174,
"end": 177,
"text": " which is a much smaller dimensional representation."
},
{
"start": 177,
"end": 189,
"text": " So if the image is 64 by 64 pixels, then the latent code could be as little as 100 or even 10 dimensional."
},
{
"start": 189,
"end": 193,
"text": " So you see that there's quite a bit of compression going on."
},
{
"start": 193,
"end": 202,
"text": " This is a variational autoencoder. It's not really important here that it's variational since the difference is"
},
{
"start": 202,
"end": 209,
"text": " the variational autoencoder is kind of a stochastic process, whereas the regular autoencoder isn't."
},
{
"start": 209,
"end": 216,
"text": " But they introduce stochasticity later again. So it's not particularly important."
},
{
"start": 216,
"end": 225,
"text": " So it's a variational autoencoder, which means they obtain a latent representation that defines distribution over outputs."
},
{
"start": 225,
"end": 235,
"text": " So they send this sample from this latent distribution that they obtain, and then they feed this to the decoder."
},
{
"start": 235,
"end": 243,
"text": " And the decoder kind of gives back what it thinks the encoder encoded."
},
{
"start": 243,
"end": 252,
"text": " So the decoder tries to reconstruct as close as possible this original frame that was given to the encoder."
},
{
"start": 252,
"end": 259,
"text": " But of course it can't because we've compressed it so much to this lower dimensional representation here."
},
{
"start": 259,
"end": 261,
"text": " So it kind of does its best effort."
},
{
"start": 261,
"end": 268,
"text": " So what you hope to achieve with this is that kind of the decoder learns, for example, there's always here."
},
{
"start": 268,
"end": 272,
"text": " This is the ceiling right here. It's always gray."
},
{
"start": 272,
"end": 278,
"text": " So basically, you shouldn't actually need to encode this in your Z."
},
{
"start": 278,
"end": 283,
"text": " If it's always gray, the decoder should learn this by itself."
},
{
"start": 283,
"end": 296,
"text": " So your hope is that the Z, the latent representation, will simply end up containing just the information that's kind of different or between the individual frames,"
},
{
"start": 296,
"end": 305,
"text": " which here I guess would be kind of the fireballs coming and your position relative to them."
},
{
"start": 305,
"end": 308,
"text": " That's what's changing if you think about this environment."
},
{
"start": 308,
"end": 312,
"text": " So your hope is that the latent representation captures only that,"
},
{
"start": 312,
"end": 323,
"text": " whereas all the static parts that are irrelevant or never change are kind of captured by the encoder and the decoder architecture by itself."
},
{
"start": 323,
"end": 329,
"text": " So yeah, it's important to note the encoder and decoder are obviously always the same for all the frames,"
},
{
"start": 329,
"end": 336,
"text": " whereas the Z representation, of course, is there is one per frame, so each frame will give you a different Z."
},
{
"start": 336,
"end": 343,
"text": " And that's so you can imagine how that works or how that's going to be useful."
},
{
"start": 343,
"end": 355,
"text": " So they train this on like a randomly collected sample of the environment until they're confident they now have a good model of the environment."
},
{
"start": 355,
"end": 363,
"text": " And then what they do next is they use this in order to train an RNN."
},
{
"start": 363,
"end": 373,
"text": " So again, they kind of have their compression model of the environment."
},
{
"start": 373,
"end": 381,
"text": " What they do now is they use these Z states you see here, here, here, here that they get from that."
},
{
"start": 381,
"end": 386,
"text": " And they train how these latent representations evolve over time."
},
{
"start": 386,
"end": 390,
"text": " So with an RNN here goes over time."
},
{
"start": 390,
"end": 401,
"text": " So the RNN will always kind of predict what's the next state of the environment going to be."
},
{
"start": 401,
"end": 407,
"text": " But importantly, maybe compared to environment models that we've discussed before in the, for example,"
},
{
"start": 407,
"end": 419,
"text": " imagination augmented agent paper, there we always try to directly predict the future pixels, so to say, of the future frame."
},
{
"start": 419,
"end": 424,
"text": " Here, the environment model is over the latent representation."
},
{
"start": 424,
"end": 429,
"text": " Of course, this means that the this is a much smaller space."
},
{
"start": 429,
"end": 440,
"text": " So if your compression model is good, then this should be much easier to learn than, say, like a full end to end environment model."
},
{
"start": 440,
"end": 449,
"text": " So this model learns how your latent states evolve over time, given your actions."
},
{
"start": 449,
"end": 455,
"text": " So you can imagine the Z being an abstract representation of your state and then your action."
},
{
"start": 455,
"end": 462,
"text": " And then this goes into the RNN and the RNN will predict what's the next latent representation."
},
{
"start": 462,
"end": 468,
"text": " And there is what's called a temperature parameter to control the stochasticity."
},
{
"start": 468,
"end": 476,
"text": " I've already told you this, there is a stochasticity built into this."
},
{
"start": 476,
"end": 484,
"text": " So the RNN will simply output like some vector, what it thinks is the next thing going to be."
},
{
"start": 484,
"end": 492,
"text": " And they don't use this directly as the next step, but they parameterize a kind of a mixture of Gaussian distributions"
},
{
"start": 492,
"end": 499,
"text": " coupled with a decoder here in order to give a random distribution over the next state."
},
{
"start": 499,
"end": 503,
"text": " And they control the amount of randomness with the temperature parameter."
},
{
"start": 503,
"end": 506,
"text": " They argue that this comes in handy later."
},
{
"start": 506,
"end": 508,
"text": " So all right, so what do we have?"
},
{
"start": 508,
"end": 517,
"text": " We have a system that can compress the environment into what we would call an essential part."
},
{
"start": 517,
"end": 521,
"text": " Every frame we extract what's important in that frame."
},
{
"start": 521,
"end": 535,
"text": " Then next we have a model that can predict, given a state and an action, what's the next state going to be, the next latent state."
},
{
"start": 535,
"end": 539,
"text": " So technically we now have an environment model, right, given a state."
},
{
"start": 539,
"end": 548,
"text": " We can simply, given a state and a policy, we can simply use this model to roll forward."
},
{
"start": 548,
"end": 552,
"text": " So the last component is the actual policy."
},
{
"start": 552,
"end": 560,
"text": " And the actual policy here, as you can see, is in their case simply a linear model."
},
{
"start": 560,
"end": 568,
"text": " The linear model will take the z, which is the latent representation of the current state,"
},
{
"start": 568,
"end": 578,
"text": " and the h, which is the current state of the RNN that models the environment over time."
},
{
"start": 578,
"end": 589,
"text": " And it simply is a linear function of the two, gives you the action probabilities, or I guess the log-its of the actions."
},
{
"start": 589,
"end": 593,
"text": " So it's a really, really simple controller over these things."
},
{
"start": 593,
"end": 601,
"text": " And they do this in order to show that the main part of the work is being done by this environment model."
},
{
"start": 601,
"end": 608,
"text": " And given the environment model, you only need very few parameters basically to then learn a policy."
},
{
"start": 608,
"end": 613,
"text": " Here is what I said in a diagram."
},
{
"start": 613,
"end": 618,
"text": " So the observation goes into the compression of the VAE,"
},
{
"start": 618,
"end": 625,
"text": " the latent representation of that goes into the RNN together with the hidden state from the last step."
},
{
"start": 625,
"end": 632,
"text": " And this will output a new hidden state, which goes here into the controller,"
},
{
"start": 632,
"end": 636,
"text": " and we also directly take this z into the controller."
},
{
"start": 636,
"end": 643,
"text": " And then from these two, we perform an action, which now we have a choice."
},
{
"start": 643,
"end": 649,
"text": " It could go to the environment, right, give you the next observation, but also,"
},
{
"start": 649,
"end": 656,
"text": " or at the same time, since you kind of need to update your RNN, it can go here"
},
{
"start": 656,
"end": 663,
"text": " and update your RNN because it will need to predict the next hidden state."
},
{
"start": 663,
"end": 667,
"text": " The thing is, we can also now leave away this path,"
},
{
"start": 667,
"end": 679,
"text": " which means we can simply take our RNN and our kind of imagine the next latent representation,"
},
{
"start": 679,
"end": 686,
"text": " put it through the decoder part of the VAE and use that as an observation."
},
{
"start": 686,
"end": 691,
"text": " I hope this makes sense. It's rather intuitive, right? You have a model of the environment."
},
{
"start": 691,
"end": 695,
"text": " You can simply use this instead of the real environment."
},
{
"start": 695,
"end": 702,
"text": " So, there's a bit of pseudo code here, and they do a bunch of experiments, right?"
},
{
"start": 702,
"end": 710,
"text": " So, we're primarily interested, so they say, they see here, okay, our compression works,"
},
{
"start": 710,
"end": 715,
"text": " and this is the real frame, and this is the reconstructed frame, kind of looks, you know,"
},
{
"start": 715,
"end": 719,
"text": " captures the essence of what's going on."
},
{
"start": 719,
"end": 729,
"text": " And I actually want to go down here, the Visdome experiment."
},
{
"start": 729,
"end": 737,
"text": " So, what they do here in the car racing experiment is they kind of learn this entire thing, right?"
},
{
"start": 737,
"end": 746,
"text": " And then they learn a policy in the real world, in the environment, using this model up here,"
},
{
"start": 746,
"end": 752,
"text": " this procedure where they always go to the environment, and here is the exact experiment set up."
},
{
"start": 752,
"end": 761,
"text": " So, first they collect, again, rollouts for a random policy, they train the VAE, they train the RNN,"
},
{
"start": 761,
"end": 775,
"text": " and then they learn the controller using the entire model, but in kind of the real world."
},
{
"start": 775,
"end": 782,
"text": " So, they always interact with the environment, but because they also have their kind of latent representation"
},
{
"start": 782,
"end": 788,
"text": " of the observation, and not directly the observation, they get a higher score."
},
{
"start": 788,
"end": 798,
"text": " And also, the policy that they use in the real environment transfers to the environment model."
},
{
"start": 798,
"end": 804,
"text": " So, the policy they learn in the true environment, it transfers to the imagined,"
},
{
"start": 804,
"end": 809,
"text": " so if they use the imagined model as an environment, it also performs well."
},
{
"start": 809,
"end": 813,
"text": " In the next experiment, they're going to try to do this the other way around."
},
{
"start": 813,
"end": 819,
"text": " They're going to try to learn only using their model of the environment,"
},
{
"start": 819,
"end": 825,
"text": " and then see whether or not the policy transfers to the true environment."
},
{
"start": 825,
"end": 832,
"text": " So, that's what they do here. They collect, again, a sample from the environment,"
},
{
"start": 832,
"end": 843,
"text": " they train the VAE, they train the RNN, and then they simply use this virtual environment,"
},
{
"start": 843,
"end": 849,
"text": " what they call it, in order to learn a policy, and at the end, they try to transfer,"
},
{
"start": 849,
"end": 852,
"text": " use the learn policy on the actual environment."
},
{
"start": 852,
"end": 865,
"text": " And given the results, you see here, there we go."
},
{
"start": 865,
"end": 877,
"text": " So, you see the kind of best it does, I would say, is about here,"
},
{
"start": 877,
"end": 884,
"text": " where the actual score is, you can see in this, and also in this setting,"
},
{
"start": 884,
"end": 892,
"text": " is higher than the kind of previous best algorithm in the OpenAI GIMP,"
},
{
"start": 892,
"end": 898,
"text": " when you go from virtual to actual."
},
{
"start": 898,
"end": 905,
"text": " So, what this means is kind of, yeah, you can train using this imagined model,"
},
{
"start": 905,
"end": 910,
"text": " and then it will actually transfer, but there's a crucial thing,"
},
{
"start": 910,
"end": 913,
"text": " and that is this kind of temperature thing here."
},
{
"start": 913,
"end": 919,
"text": " You can see a lot of times they actually don't manage to reach a good score,"
},
{
"start": 919,
"end": 922,
"text": " if this parameter is wrong. What does this parameter do?"
},
{
"start": 922,
"end": 927,
"text": " This parameter controls, as we discussed, the stochasticity of the model."
},
{
"start": 927,
"end": 935,
"text": " So, basically, the environment model doesn't directly imagine a future state,"
},
{
"start": 935,
"end": 939,
"text": " but it imagines a distribution over future states."
},
{
"start": 939,
"end": 944,
"text": " And the higher this parameter, the more stochastic this distribution is,"
},
{
"start": 944,
"end": 951,
"text": " basically the more uniform, I guess, the more entropy you have in these future states."
},
{
"start": 951,
"end": 955,
"text": " We've seen this temperature parameter here."
},
{
"start": 955,
"end": 966,
"text": " Which is important, because they go into length explaining why in this entire page here that we skipped."
},
{
"start": 966,
"end": 971,
"text": " Here you see just text, there."
},
{
"start": 971,
"end": 975,
"text": " Cheating the world model, which basically they say, okay, if you have a wrong model,"
},
{
"start": 975,
"end": 980,
"text": " if you have a model that's wrong of the environment, and you train a policy on it, necessarily,"
},
{
"start": 980,
"end": 987,
"text": " it's going to probably find a policy that exploits the wrongness of this model."
},
{
"start": 987,
"end": 995,
"text": " So you might be able to walk through walls or fly or ignore the fireballs."
},
{
"start": 995,
"end": 1003,
"text": " Or basically, find that if you stand next to a wall, in your imagination, you'll never get hit."
},
{
"start": 1003,
"end": 1006,
"text": " Something like this, which isn't true in the real world."
},
{
"start": 1006,
"end": 1011,
"text": " So the policy will exploit that."
},
{
"start": 1011,
"end": 1016,
"text": " And to counter this, they simply basically turn up this temperature parameter,"
},
{
"start": 1016,
"end": 1020,
"text": " giving them a more stochastic procedure."
},
{
"start": 1020,
"end": 1024,
"text": " Meaning they imagine a lot of kind of different futures,"
},
{
"start": 1024,
"end": 1029,
"text": " and they train their policy on all of them, or in expectation over a sample of them."
},
{
"start": 1029,
"end": 1038,
"text": " Which means that if the environment model is wrong, this kind of..."
},
{
"start": 1038,
"end": 1042,
"text": " I want to say if it's wrong, this corrects for it. It doesn't."
},
{
"start": 1042,
"end": 1049,
"text": " But if it's wrong, you still sample different futures."
},
{
"start": 1049,
"end": 1056,
"text": " So if it has one wrong future, you still have the other ones to kind of punish the policy,"
},
{
"start": 1056,
"end": 1063,
"text": " if it tries to exploit this one mistake. At least that's the reasoning behind it."
},
{
"start": 1063,
"end": 1067,
"text": " So that's how they do this."
},
{
"start": 1067,
"end": 1071,
"text": " You can interact with their trained environment models online somehow."
},
{
"start": 1071,
"end": 1076,
"text": " They also give a kind of a look at what they would like to have."
},
{
"start": 1076,
"end": 1082,
"text": " Instead of collecting the environment model from random rollout,"
},
{
"start": 1082,
"end": 1086,
"text": " they would try to train it, then to use it again to collect more data,"
},
{
"start": 1086,
"end": 1089,
"text": " to train more environment model, then use the environment,"
},
{
"start": 1089,
"end": 1094,
"text": " better environment model to train more the policy, and so on in a stepwise fashion."
},
{
"start": 1094,
"end": 1100,
"text": " But they don't actually do it, they simply describe it."
},
{
"start": 1100,
"end": 1105,
"text": " And the rest of the paper is a bit of related work and discussion."
},
{
"start": 1105,
"end": 1115,
"text": " It's very prosaically written, kind of different from what you're used to if you read a lot of these papers."
},
{
"start": 1115,
"end": 1136,
"text": " But yeah, I hope you can now you know what's going on and see you next time."
}
] |
_Z9ZP1eiKsI | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Curiosity-driven Exploration by Self-supervised Prediction | [
"Science & Technology"
] | [] | https://arxiv.org/abs/1705.05363
Authors: Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, Trevor Darrell
Abstract:
In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. Our formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. Three broad settings are investigated: 1) sparse extrinsic reward, where curiosity allows for far fewer interactions with the environment to reach the goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent to explore more efficiently; and 3) generalization to unseen scenarios (e.g. new levels of the same game) where the knowledge gained from earlier experience helps the agent explore new places much faster than starting from scratch. | Hi there! Today we're going to look at this paper, Curiosity-Driven Exploration by Self-Supervised Prediction. It's a relatively short idea, so it shouldn't take too long. The fundamental idea of the paper is to tackle the reward sparseness problem in reinforcement learning. For example, if you have a Super Mario game like here, there are a number of ways you can think of the reward, but one way you could formulate it is that you simply get a plus-one reward when you finish the game, or the level. Let's say you finish the level, you get plus one. If you die or don't make it in time, you get negative one (there's actually a time limit, so not making it is possible). The problem here is that your algorithm needs to learn to act now such that it gets to the end of the level, but the reward only comes at the end of the level. So step by step it has no signal to go on, because the reward is always zero, and it needs to learn these long-range dependencies. It's notoriously hard in reinforcement learning to learn, step by step, actions that maximize some very long-term goal. You can also think of a game of chess, where your reward is going to be whether you win or lose at the end, but step by step the reward is fifty-ish steps away, so you have no way of optimizing your actions step by step in a meaningful manner. There are many ways to get around this. One thing people have done is what's called reward shaping. Reward shaping means that you, as the designer of the algorithm, try to introduce additional rewards that you know are helpful for solving the problem, or at least correlated with the reward you're going to get at the end. In Mario this could be: the further right you go, the more reward you get, so you get an additional reward for going right. Coincidentally, I think real Mario also gives you points for this, but in our situation the true reward is just going to be at the end. You could also say that stomping goombas gives you a bit of reward for each goomba you stomp.
In chess you could say that having more pieces gives you a bit of reward: if your opponent loses pieces and you don't, that is rewarded, and you also get a bit of reward if you gain more territory on the board, and so on. These are all things that we know correlate with the end reward, because in Mario, for example, the end of the level is actually on the right. But of course it's not perfect, because sometimes there are situations where you have to go back, go around something or go over something, and not immediately go to the right; likewise, in chess there are good sacrifices that you can make. So these kinds of additional rewards help, but they're not perfect. And the biggest problem with them is that they're very domain specific. As a developer of the algorithm, you basically have to know the domain, like Super Mario, and you have to know that the goal is on the right, so you have to construct your reward to reflect this. This is very domain specific; you have to do it for every domain again and again and again. In chess you have to know something about how chess is played, and so on. So one way around this, and this paper proposes one method to do it, is to introduce an additional reward not based on the specific domain, but based on what they call curiosity, and specifically curiosity by self-supervised prediction. So what does that mean? The idea is not entirely new, in that people have done similar things before. If we go, for example, down here, here is this Doom environment, and what you could say is: in my agent I have a little module that's going to predict the future. So if I'm here, my agent will choose an action, like pressing the forward key, and then I will predict how the next frame is going to look. And of course we know this is a 3D environment, so this part of the screen is probably going to fill the full screen because you're now closer, the perspective changes a little bit. But basically this should be a learned neural network that predicts the future from the current state and the current action. And you can train this in a supervised fashion, because you will perform some actions and collect some data about this, so you can learn a network that predicts one step into the future, i.e. how the environment will look. And it is by no means a new idea to introduce rewards based on this type of learning of how the environment behaves. We've seen this, for example, in follow-up work to the A3C paper, where an additional reward is something like pixel control: they consider, for this pixel here, how much can I control it with my action, how does my action influence it, can I predict this, and so on; they learn how to control the pixels on the screen with their actions and give a reward based on that. So this idea has been around. What this paper specifically does is say: I'm going to predict the future, and if I am wrong about the prediction, then that gives me a reward, and that's the curiosity part. Basically it means: if I have a good model of what's going to happen in the future, and then I predict the future and I'm wrong, it means something new has happened, something special, something that I hadn't expected.
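To make the contrast concrete, here is a tiny sketch of a hand-crafted shaped reward versus the domain-agnostic curiosity bonus this paper proposes; the coefficients and function names are illustrative assumptions, not values from the paper.

```python
def shaped_reward_mario(extrinsic: float, delta_x: float, goombas_stomped: int) -> float:
    """Hand-crafted shaping: only sensible because we know the goal is to the right."""
    return extrinsic + 0.01 * delta_x + 0.1 * goombas_stomped

def curiosity_reward(extrinsic: float, prediction_error: float, eta: float = 0.5) -> float:
    """Domain-agnostic alternative: add an intrinsic bonus proportional to how badly
    the agent's forward model predicted what actually happened."""
    return extrinsic + eta * prediction_error
```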
And therefore, if the goal is to get the algorithm to explore by itself, which is what you need when you don't have a reward, right? When you don't have a reward, what you want your algorithm to do is simply go around and explore. And in a sense they're saying: okay, the way to do this is to go by curiosity, which means to actively seek out states that you wouldn't expect. Whenever you don't expect something, that means it's something new, that means you haven't had this experience before, and that means it's a new state to explore. You have not seen this before, so in the absence of any reward you might as well go where you haven't been before, and that's the essence. They then outline a number of problems that you might run into with this approach, but let's first actually go to what the model looks like. That's here. You can see this is what they call an intrinsic curiosity module. So you have a state here, you're in a state, you have your policy, and your policy gives you an action. The action goes to the environment, and the environment gives you the next state and also what's called the reward. They call E here the extrinsic reward that you get from the environment. But they also combine this with what's called an intrinsic reward, which you get from the curiosity module. And that's what we've discussed: it tries to assess how new the state I'm going to be in is, how surprising it is for me. I'm going to first describe how you would build this model naively, how that gets you into problems, and then how to fix it. How you would build this is to have what's called a forward model. The forward model takes the action and the current state and predicts the next state; that's in here. Don't worry about the phi-hat right now. It predicts the next state, and then you compare this to the actual next state: you just look at the difference between what you predicted the next state was going to be and what the next state really is. And that gives you the intrinsic reward. The more different these are, the higher the reward. That's what we've discussed: how different is it from what I expected. So how does that get you into problems? The authors give a very good illustrative example. Say you are in an environment, let's actually go over here. You are in an environment and you have your screen. Here is a road that you maybe need to walk along, and here are some leaves in the wind. I'm very bad at drawing leaves, so imagine these are leaves and there's wind, right? Wind coming from here and shaking up these leaves and so on. If you simply try to predict this entire screen with your forward model, what's going to happen is that you will never be able to predict how these leaves are going to move, because you basically can't influence them. You can predict a bit from the current state, but the action you take has no influence on how these leaves are going to move, because they are influenced by the wind, and the wind is this random-ish process that you can't control. So the authors say that, because of this, your algorithm is always going to find these leaves interesting, always going to be curious about them, because it can't predict them.
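Here is a minimal sketch of the naive forward model and the prediction-error reward just described; the network sizes are placeholders, the observation is treated as a flat vector for simplicity, and the intrinsic reward is computed directly as the scaled squared prediction error.

```python
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Predicts the next state (or next feature vector) from the current state and action."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def intrinsic_reward(forward_model, state, action, next_state, eta=0.5):
    """Curiosity bonus: how wrong was the prediction of the next state?"""
    with torch.no_grad():
        pred = forward_model(state, action)
        return eta * 0.5 * (pred - next_state).pow(2).sum(dim=-1)
```

With this naive version, anything the agent cannot predict (like the wind-blown leaves) keeps producing a large bonus forever, which is exactly the failure mode the paper wants to avoid.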
And we've seen that the additional reward they give is based on how badly you can predict a certain state. So they say: okay, if we do it like this, then these random things that we can't influence will always be surprising, and therefore we will always be curious about them, and therefore we will always look at the leaves, be amazed, and get reward after reward because we can't predict them. That's not the goal. So what they're arguing is: why are these leaves not important for curiosity? Because we can't influence them with our actions. We can influence where we go on this road, because we can move and the road is static, not governed by these random processes. But the leaves we would like to discard; we can't influence them. And therefore what they say is: what we need is an encoder, let me try to delete this annotation, an encoder that takes a state and outputs features of the state. Then our forward model isn't fed with the raw state, it's fed with the features of the state, and it's not going to output the next state as such, but the features of the next state. It predicts the features, and then we compare that with the features of the true next state; that's what we compare. So what do this encoder and these features need to look like? They're saying these features should only consider the things about the state that are actually dependent on our actions. And they have a very interesting way of training such an encoder, such a feature-producing function: they say it's going to be a neural network that we train by training this so-called inverse model. We take this encoder and train this inverse model on top of it, and the inverse model takes the features of the last state and of the new state and tries to predict this action, this action right here, the action we took to get from the old state to the new state. So this inverse model is trained to predict what action was taken to get from the old state to the new state. And by training the encoder with this inverse model, training this end to end, you will make the encoder such that it only considers things that are actually relevant to predicting this action. In the leaves example it would discard the leaves; it will discard anything that you can't influence with your action, and therefore it will only retain features that are dependent on your action. I think that's quite an interesting way to get rid of the irrelevant information that they don't want. Then they can use this encoder to train the forward model on these features and to essentially get the intrinsic reward. So I find this idea quite interesting, and as I said, the idea of intrinsic reward and curiosity for exploration is not new, and I'm sure this approach has been around in some variants, but I've just stumbled across this and it is quite interesting.
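A sketch of how the encoder, inverse model, and feature-space forward model fit together. The layer sizes are illustrative, the observation is again treated as a flat vector rather than an image, and detaching the encoder in the forward path (so that only the inverse loss shapes the features) is one common reading of the method rather than a verbatim reproduction of the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICM(nn.Module):
    """Intrinsic curiosity module (sketch): encoder shaped by an inverse model,
    plus a forward model in the learned feature space."""
    def __init__(self, obs_dim, action_dim, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, feat_dim))
        # Inverse model: (phi(s_t), phi(s_{t+1})) -> which action was taken
        self.inverse = nn.Sequential(nn.Linear(2 * feat_dim, 128), nn.ReLU(),
                                     nn.Linear(128, action_dim))
        # Forward model: (phi(s_t), a_t) -> predicted phi(s_{t+1})
        self.forward_model = nn.Sequential(nn.Linear(feat_dim + action_dim, 128), nn.ReLU(),
                                           nn.Linear(128, feat_dim))

    def forward(self, obs, next_obs, action_onehot):
        phi, phi_next = self.encoder(obs), self.encoder(next_obs)
        # Inverse loss: this is what shapes the features toward action-relevant information
        action_logits = self.inverse(torch.cat([phi, phi_next], dim=-1))
        inverse_loss = F.cross_entropy(action_logits, action_onehot.argmax(dim=-1))
        # Forward error in feature space (encoder detached so it is not trained to be trivially predictable)
        phi_next_pred = self.forward_model(torch.cat([phi.detach(), action_onehot], dim=-1))
        forward_error = 0.5 * (phi_next_pred - phi_next.detach()).pow(2).sum(dim=-1)
        intrinsic_reward = forward_error.detach()  # curiosity bonus handed to the RL algorithm
        loss = inverse_loss + forward_error.mean()
        return intrinsic_reward, loss
```

Because the features are learned only from predicting the action, things like the wind-blown leaves that the action cannot influence tend to be dropped, so they stop generating spurious curiosity.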
So we're going to take a look, and you can go through the math yourself, but they do these kinds of experiments where they corrupt, as you can see, part of the screen with noise, and they show that, since the noise is not dependent on our action, the features do actually discard this noise and only focus on the part that we can actually influence with our actions. That's, I think, all in all pretty interesting. They show, of course, that their algorithm then outperforms the A3C baseline on these sparse-reward tasks. On the left is the dense-reward setting, then sparse reward, then very sparse reward, and at some point you see that plain A3C simply doesn't manage anymore. But what's also interesting is the ICM (pixels) variant, which means pixel-based curiosity, where we don't have the encoder and simply try to predict the pixels of the environment. That works in the sparse-reward setting, but with the very sparse reward it also fails, and you actually need this encoder that discards what's not relevant for predicting the actions. Yeah, so you can take a look at the rest of the paper yourself; I find it quite interesting. They analyze how their agent explores these mazes and such, and they have more experiments on benchmark tasks. So have a look at it and I'll see you next time. | [
{
"start": 0,
"end": 8,
"text": " Hi there! Today we're going to look at this paper, Curiosity-Driven Exploration by Self-Supervised"
},
{
"start": 8,
"end": 14.84,
"text": " Prediction. It's a relatively short idea, so it shouldn't take too long. So the fundamental"
},
{
"start": 14.84,
"end": 21.36,
"text": " idea of the paper is to tackle the reward sparseness problem reinforcement learning."
},
{
"start": 21.36,
"end": 27.52,
"text": " For example, if you have a Super Mario game like here, and there's a number of ways you"
},
{
"start": 27.52,
"end": 33.6,
"text": " can think of the reward, but one way you could formulate it is that you simply get kind of"
},
{
"start": 33.6,
"end": 40.56,
"text": " a plus one reward when you finish the game, or the level. Let's say you finish the level,"
},
{
"start": 40.56,
"end": 49.2,
"text": " you get plus one. If you die or don't make it in time, you get negative one. I think"
},
{
"start": 49.2,
"end": 55.92,
"text": " there's no way to not make it in... Oh yeah, there's actually a time limit. So the..."
},
{
"start": 55.92,
"end": 63.92,
"text": " The problem here is that your algorithm kind of needs to learn to make things now such"
},
{
"start": 63.92,
"end": 68,
"text": " that it gets to the end of the level, but the reward is only at the end of the level."
},
{
"start": 68,
"end": 74.04,
"text": " So basically step by step it has no signal to go on because the reward is always zero,"
},
{
"start": 74.04,
"end": 78.16,
"text": " and it kind of needs to learn these long range dependencies. And that's notoriously hard"
},
{
"start": 78.16,
"end": 83.2,
"text": " in reinforcement learning to step by step learn actions that kind of maximize some very"
},
{
"start": 83.2,
"end": 89.28,
"text": " long term goal. So you can also think of a game of chess where your reward is going to"
},
{
"start": 89.28,
"end": 94.16,
"text": " be whether you win or lose at the end, but step by step it's kind of this... The reward"
},
{
"start": 94.16,
"end": 103.48,
"text": " is 50ish steps away. So you have no way of kind of step by step optimizing your actions"
},
{
"start": 103.48,
"end": 112.44,
"text": " in a meaningful manner. So there are many ways to get around this. One way that people"
},
{
"start": 112.44,
"end": 118.03999999999999,
"text": " have done is what's called reward shaping. And reward shaping is you're trying to introduce"
},
{
"start": 118.03999999999999,
"end": 125.32,
"text": " additional rewards kind of as a designer of the algorithm that you know are kind of good"
},
{
"start": 125.32,
"end": 132.76,
"text": " or helping to solve the problem or at least correlated with the reward you're going to"
},
{
"start": 132.76,
"end": 138.28,
"text": " get at the end. So in Mario this could be like the further right you go, the more reward"
},
{
"start": 138.28,
"end": 143.8,
"text": " you get. You get kind of an additional reward if you go right. Coincidentally I think in"
},
{
"start": 143.8,
"end": 149.52,
"text": " real Mario this also gives you points, but our situation is that the reward is just going"
},
{
"start": 149.52,
"end": 156.2,
"text": " to be at the end. You could also say like if you kill the... Or if you stomp the goombas,"
},
{
"start": 156.2,
"end": 162.8,
"text": " one goomba you stomp, that actually gives you also a bit of reward. In chess you could"
},
{
"start": 162.8,
"end": 167.36,
"text": " say like the more pieces you have, that gives you a bit of reward if you have more pieces"
},
{
"start": 167.36,
"end": 173.08,
"text": " than your opponent, if your opponent loses pieces. You don't and you also get a bit of"
},
{
"start": 173.08,
"end": 177.52,
"text": " reward if you get more territory on the board and so on. So these are all things that we"
},
{
"start": 177.52,
"end": 183.56,
"text": " know kind of correlate with the end reward. Like because in Mario for example the end"
},
{
"start": 183.56,
"end": 187.72000000000003,
"text": " of the level is actually on the right. But of course it's not perfect because sometimes"
},
{
"start": 187.72000000000003,
"end": 192.96,
"text": " there are situations where you kind of have to go back, go around something or go over"
},
{
"start": 192.96,
"end": 198.92000000000002,
"text": " something and not immediately go to the right. As well as in chess there are good sacrifices"
},
{
"start": 198.92000000000002,
"end": 205.92000000000002,
"text": " that you can make. So these kind of additional rewards they help, but they're not perfect."
},
{
"start": 206.92000000000002,
"end": 212.36,
"text": " And the biggest problem with them is they're very domain specific. So a developer of the"
},
{
"start": 212.36,
"end": 217.08,
"text": " algorithm you basically have to know the domain like Super Mario and you have to know the"
},
{
"start": 217.08,
"end": 224.08,
"text": " goal is on the right. So you have to construct your reward in order to kind of reflect this."
},
{
"start": 224.60000000000002,
"end": 231.48000000000002,
"text": " And this is very domain specific. Basically you have to do it for every domain again and"
},
{
"start": 231.48000000000002,
"end": 238.60000000000002,
"text": " again and again. In chess you have to know something about chess to play and so on. So"
},
{
"start": 238.60000000000002,
"end": 245.28,
"text": " one way around this, and this paper proposes one method to do this, is to introduce an"
},
{
"start": 245.28,
"end": 250.16,
"text": " additional reward not based on the domain specifically, but based on what they call"
},
{
"start": 250.16,
"end": 257.16,
"text": " this curiosity. And it's specifically curiosity by self supervised prediction. So what does"
},
{
"start": 257.36,
"end": 268.36,
"text": " that mean? The idea is not new in that people have kind of done this before. If we go for"
},
{
"start": 268.36,
"end": 278.36,
"text": " example down here. So here is this kind of doom environment and what you could say is"
},
{
"start": 281.36,
"end": 292.36,
"text": " in my agent I have kind of a little module that's going to predict the future. So like"
},
{
"start": 292.36,
"end": 299.36,
"text": " if I'm here then I will basically choose an action, my agent will choose an action, like"
},
{
"start": 301.24,
"end": 308.24,
"text": " move forward, like press the forward key and then I will predict how that's going to look."
},
{
"start": 309.88,
"end": 314.40000000000003,
"text": " And of course we know this is kind of a 3D environment so this is probably going to be"
},
{
"start": 314.40000000000003,
"end": 318.76,
"text": " this part of the screen is going to be the full screen because you're now closer and"
},
{
"start": 318.76,
"end": 324.92,
"text": " so on the perspective changes a little bit. But basically this should be a learned neural"
},
{
"start": 324.92,
"end": 330.48,
"text": " network that predicts the future from the state now and the action now. And basically"
},
{
"start": 330.48,
"end": 336.88,
"text": " you can train this in a supervised fashion because you will perform some actions, you"
},
{
"start": 336.88,
"end": 343.03999999999996,
"text": " will collect some data about this so you can learn a network that is going to predict one"
},
{
"start": 343.04,
"end": 349.32,
"text": " step into the future basically, how the environment will look. And then, and this is by no means"
},
{
"start": 349.32,
"end": 356.32000000000005,
"text": " kind of a new idea to introduce rewards based on this type of learning how the environment"
},
{
"start": 357,
"end": 364,
"text": " acts. We've seen this in like the A3C paper, the original one where the additional reward"
},
{
"start": 364.24,
"end": 369.12,
"text": " is something like pixel control where they consider like okay this pixel here, how much"
},
{
"start": 369.12,
"end": 374.62,
"text": " can I control it by my action, like how does my action influence it, can I predict this"
},
{
"start": 374.62,
"end": 381.62,
"text": " and so on. And to learn how to control the pixels on the screen by your actions and to"
},
{
"start": 382.48,
"end": 388.88,
"text": " give a reward based on that so that's been around this idea. And what this paper here"
},
{
"start": 388.88,
"end": 395.88,
"text": " does specifically is they say well I'm going to predict the future and if I am wrong about"
},
{
"start": 395.88,
"end": 402.88,
"text": " the prediction then that gives me a reward and that's the curiosity part. Basically it"
},
{
"start": 403.68,
"end": 410.68,
"text": " means like if I have a good model of what's going to happen in the future and then I predict"
},
{
"start": 411.15999999999997,
"end": 417.15999999999997,
"text": " the future and then I'm wrong it means something new has happened, something special, something"
},
{
"start": 417.16,
"end": 424.16,
"text": " that I hadn't expected. And therefore if the goal is to get the algorithm to explore by"
},
{
"start": 427.32000000000005,
"end": 430.76000000000005,
"text": " itself which is what you need to do when you don't have a reward, right? When you don't"
},
{
"start": 430.76000000000005,
"end": 437.76000000000005,
"text": " have a reward what you want your algorithm to do is simply to go around and explore."
},
{
"start": 438.8,
"end": 443.8,
"text": " And in a sense they're saying okay the way to do this is to go by curiosity which means"
},
{
"start": 443.8,
"end": 450.8,
"text": " is to go to actively seek out environments that you wouldn't expect basically. So whenever"
},
{
"start": 453.56,
"end": 458.16,
"text": " you don't expect something that means it's something new, that means you haven't had"
},
{
"start": 458.16,
"end": 465.16,
"text": " this experience before, right? And that means that it's kind of a new state to explore."
},
{
"start": 465.16,
"end": 472.16,
"text": " That you have not seen this before so kind of in absence of any reward you might as well"
},
{
"start": 472.16,
"end": 479.16,
"text": " go where you haven't been before and that's kind of the essence. So they outline a number"
},
{
"start": 480.6,
"end": 487.6,
"text": " of problems that you might have with this approach. They give the example, let's first"
},
{
"start": 487.6,
"end": 494.6,
"text": " actually go to what the model actually looks like. So that's here. You can see this is"
},
{
"start": 495.12,
"end": 502.12,
"text": " kind of what they call an intrinsic curiosity module. So you have a state here, you're in"
},
{
"start": 502.12,
"end": 509.12,
"text": " a state, you have your policy and your policy gives you an action. And the action goes to"
},
{
"start": 509.12,
"end": 516.12,
"text": " the environment and the environment gives you the next state and also what's called"
},
{
"start": 517.68,
"end": 524.68,
"text": " the reward. They call here E is the extrinsic reward that you get from the environment."
},
{
"start": 524.68,
"end": 529.68,
"text": " But they also combine this with what's called an intrinsic reward that you get from here"
},
{
"start": 529.68,
"end": 535.6800000000001,
"text": " that you get from the curiosity module. And that's what we've discussed. It kind of tries"
},
{
"start": 535.68,
"end": 542.68,
"text": " to assess how new is the state that I'm going to be in. How surprising it is for me. So"
},
{
"start": 542.68,
"end": 549.68,
"text": " the thing is that I'm going to first describe the model how you would build it and how that"
},
{
"start": 553.1999999999999,
"end": 559.1999999999999,
"text": " gets you into problems and then how to fix it. So how you would build this is to have"
},
{
"start": 559.2,
"end": 566.2,
"text": " this what's called this forward model. So the forward model takes the action and the"
},
{
"start": 566.2,
"end": 570.2,
"text": " current state and it kind of predicts the next state that's in here. Don't worry about"
},
{
"start": 570.2,
"end": 577.2,
"text": " the phi hat right now. It predicts the next state and then you compare this to the actual"
},
{
"start": 580.2,
"end": 587.2,
"text": " next state. You subtract, you just subtract the next state and then you get the next state."
},
{
"start": 587.2,
"end": 592.44,
"text": " You subtract, you just look at the difference between what you predict the next state is"
},
{
"start": 592.44,
"end": 597.24,
"text": " going to be and what the next state really is. And that gives you the intrinsic reward."
},
{
"start": 597.24,
"end": 602.72,
"text": " The more different these are, the higher the reward. That's what we've discussed. How much"
},
{
"start": 602.72,
"end": 609.72,
"text": " different is it from what I've expected. So how does that get you into problems? And the"
},
{
"start": 609.72,
"end": 616.72,
"text": " authors give a very good illustrative example of say you are in an environment. Let's actually"
},
{
"start": 619.28,
"end": 625.96,
"text": " go over here. You are in an environment and you have your screen. And here is kind of"
},
{
"start": 625.96,
"end": 631.6800000000001,
"text": " a road that you need to maybe walk after. And here are some leaves in the wind. I'm"
},
{
"start": 631.6800000000001,
"end": 638,
"text": " very bad at drawing leaves so imagine these are leaves and there's wind right? Like winds"
},
{
"start": 638,
"end": 644.2,
"text": " coming from here and kind of shaking up these leaves and so on. So if you simply try to"
},
{
"start": 644.2,
"end": 651.2,
"text": " predict this entire screen as your forward model, what's going to happen is you will"
},
{
"start": 652.44,
"end": 658.26,
"text": " never be able to predict how these leaves are going to move because there basically"
},
{
"start": 658.26,
"end": 665.26,
"text": " you can't influence them. You can predict a bit from the current state but the action"
},
{
"start": 665.26,
"end": 671.26,
"text": " you take has no influence on how these leaves are going to move because they are influenced"
},
{
"start": 671.26,
"end": 678.26,
"text": " by the wind. And the wind is kind of this random-ish process that you can't control."
},
{
"start": 682.26,
"end": 689.26,
"text": " So the authors say because of this your algorithm is always going to find these leaves basically"
},
{
"start": 689.26,
"end": 694.26,
"text": " interesting, curious, be curious about it because it can't predict them. And we've"
},
{
"start": 694.26,
"end": 701.26,
"text": " seen that the reward that they model to give an addition is based on how well you cannot"
},
{
"start": 701.26,
"end": 708.26,
"text": " predict a certain state. And they say okay if we do like this then these random things"
},
{
"start": 708.74,
"end": 715.74,
"text": " that we can't influence will always be surprising and therefore we will always be curious about"
},
{
"start": 715.74,
"end": 720.74,
"text": " them and therefore we will always kind of look at the leaves and be amazed and get reward"
},
{
"start": 720.74,
"end": 725.74,
"text": " after reward because we can't predict them. That's not the goal. So what they're arguing"
},
{
"start": 725.74,
"end": 732.74,
"text": " is that why are these leaves not important for curiosity? Because we can't influence"
},
{
"start": 733.26,
"end": 739.26,
"text": " them with our actions. Like we can influence where we go on this road because we can kind"
},
{
"start": 739.26,
"end": 746.26,
"text": " of move and the road is kind of static, not governed by these random processes. But the"
},
{
"start": 746.26,
"end": 753.26,
"text": " leaves we would like to discard them. We can't influence them. And therefore what they say"
},
{
"start": 753.26,
"end": 760.26,
"text": " is what we need is an encoder that takes a state and I'm going to try to delete this"
},
{
"start": 760.26,
"end": 767.26,
"text": " annotation. So we need an encoder here features that takes a state and it outputs features"
},
{
"start": 771.26,
"end": 778.26,
"text": " of the state. And then our forward model isn't fed with the state, it's fed with the features"
},
{
"start": 778.26,
"end": 785.26,
"text": " of the state and is not going to output the next state. So we need an encoder that takes"
},
{
"start": 785.26,
"end": 790.26,
"text": " a state and is fed with the features of the state and is not going to output the next"
},
{
"start": 790.26,
"end": 796.26,
"text": " state as such but the features of the next state. It predicts the features and then we're"
},
{
"start": 796.26,
"end": 801.26,
"text": " going to compare that with the features of the true next state and that's what we compare."
},
{
"start": 801.26,
"end": 808.26,
"text": " So how does this encoder, these features need to look? And they're saying well these features"
},
{
"start": 808.76,
"end": 814.26,
"text": " should kind of only consider things about the state that are actually dependent on our"
},
{
"start": 814.26,
"end": 821.26,
"text": " actions. And they have a very interesting way of achieving to train such an encoder,"
},
{
"start": 821.76,
"end": 828.26,
"text": " such a feature producing function in that they say it's going to be a neural network"
},
{
"start": 828.26,
"end": 835.26,
"text": " that we train by training this so called inverse model. So we take this encoder and we train"
},
{
"start": 835.26,
"end": 842.26,
"text": " this inverse model on top of it and the inverse model takes the features of the last state"
},
{
"start": 843.26,
"end": 850.26,
"text": " and the new state and is trying to predict this action, this action right here. So this"
},
{
"start": 850.26,
"end": 857.26,
"text": " is this action, the action we took to get from the old state to the new state. So this"
},
{
"start": 857.26,
"end": 864.26,
"text": " inverse model is trained to predict what action was taken to get from the old state to the"
},
{
"start": 864.26,
"end": 871.26,
"text": " new state. And by training the encoder with this inverse model, like training this end"
},
{
"start": 871.26,
"end": 878.26,
"text": " to end, you will make the encoder such that it only considers things that are actually"
},
{
"start": 878.26,
"end": 883.26,
"text": " relevant to predicting this action. So in the leaves example it would discard the leaves."
},
{
"start": 883.26,
"end": 890.26,
"text": " It will discard anything that you can't influence with your action and therefore it will only"
},
{
"start": 890.26,
"end": 896.26,
"text": " retain features that are dependent on your action. I think that's quite an interesting"
},
{
"start": 896.26,
"end": 902.26,
"text": " way to get rid of the irrelevant information that they don't want. And then they can use"
},
{
"start": 902.26,
"end": 909.26,
"text": " this encoder to train this forward model and to essentially get information from the old"
},
{
"start": 909.26,
"end": 916.26,
"text": " model and to essentially get this intrinsic reward. So I find this idea quite interesting"
},
{
"start": 918.26,
"end": 924.26,
"text": " and as I said the idea of intrinsic reward and curiosity to go for exploration is not"
},
{
"start": 924.26,
"end": 930.26,
"text": " new, but I think this kind of approach and I'm sure it's been around in some variants,"
},
{
"start": 930.26,
"end": 944.26,
"text": " but I've just stumbled across this and this is quite interesting. So we're going to take"
},
{
"start": 944.26,
"end": 951.26,
"text": " a look, and you can go about the math yourself, but they do these kind of experiments and"
},
{
"start": 951.26,
"end": 958.26,
"text": " they corrupt, as you can see, part of the screen with noise here and they of course"
},
{
"start": 958.26,
"end": 964.26,
"text": " show like, okay, since the noise is not dependent on our action, our features do actually discard"
},
{
"start": 964.26,
"end": 969.26,
"text": " this noise, only focus on the part that we can actually influence by our actions. So"
},
{
"start": 969.26,
"end": 976.26,
"text": " that's, I think, all in all pretty interesting. They show, of course, that their algorithm"
},
{
"start": 976.26,
"end": 984.26,
"text": " then outperforms the kind of baseline of A3C on these sparse reward tasks and the sparser"
},
{
"start": 984.26,
"end": 992.26,
"text": " here you can see like the left is like dense reward and then sparse reward and then very"
},
{
"start": 992.26,
"end": 999.26,
"text": " sparse reward and at some point you see the A3C simply doesn't do it anymore. But what's"
},
{
"start": 999.26,
"end": 1007.26,
"text": " also interesting is here you have the ICM in pixels, which kind of means pixel-based"
},
{
"start": 1007.26,
"end": 1013.26,
"text": " curiosity, so where we don't have this encoder, where we simply try to predict the pixels"
},
{
"start": 1013.26,
"end": 1018.26,
"text": " of the environment and that works if you have like this kind of sparse reward thing, but"
},
{
"start": 1018.26,
"end": 1023.26,
"text": " if you want to, if you have the very sparse reward, that also fails and you actually need"
},
{
"start": 1023.26,
"end": 1033.26,
"text": " this encoder that discards what's not relevant for predicting the actions. Yeah, so you can"
},
{
"start": 1033.26,
"end": 1038.26,
"text": " take a look at the rest of the paper yourself. I find it quite interesting. They analyze"
},
{
"start": 1038.26,
"end": 1048.26,
"text": " how their agent explore these mazes and things and they have more experiments on like benchmark"
},
{
"start": 1048.26,
"end": 1068.26,
"text": " tasks. So have a look at it and I'll see you next time."
}
] |
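As a rough formula sketch of the ICM pieces described in the transcript above (phi and phi hat are the ones mentioned there; the names f for the forward model, g for the inverse model are only assumed here for illustration), the quantities are, roughly:

\hat{\phi}(s_{t+1}) = f(\phi(s_t), a_t), \qquad \hat{a}_t = g(\phi(s_t), \phi(s_{t+1})), \qquad r_t^{\mathrm{intr}} \propto \big\| \hat{\phi}(s_{t+1}) - \phi(s_{t+1}) \big\|_2^2

So the forward model predicts the features of the next state, the inverse model predicts the action, and the intrinsic reward grows with the feature prediction error; as described above, it is the inverse-model loss that shapes the encoder \phi so that it only keeps the parts of the state the agent's actions can actually influence.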
BBp0tHcirtQ | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | git for research basics: fundamentals, commits, branches, merging | [
"Science & Technology"
] | [
"git",
"research",
"commit",
"merge",
"conflict"
] | Don't watch this if you already know how to solve a merge conflict :) | Hi there. Today we're taking a look at Git, especially Git as it is used maybe in research collaborations. So Git is like a tool to collaborate, but when you research, like when you work on a paper together with other people, you won't use a lot of the features that Git offers and that are usually described by Git. So in this series I want to talk about what's kind of the most simple way to collaborate with people on a research project using Git. And today we're going to go over just the fundamentals, which makes everything else a lot easier. So what you need to understand about Git is that fundamentally Git is a graph, and it's a graph of commits. What I mean by this. So let's say you have your paper, you write some things, and then this is kind of version one. And then you have another paper, or the same paper, and you kind of change this line here. That's version two, and so on. You have kind of this chain of versions that you would like to keep in store. So this is the classic example of version control, where you would like to save these versions, and do it in a way that you can at any point in time go back to any version previously. And this is exactly what Git does, without you having to kind of rename. Like people usually copy the file and then rename like this version two, version three, final version, really final version, really final version corrected, blah blah blah. Alright, so Git fundamentally is a graph, and a graph of an object we call a commit. So a commit, which I'm going to represent as a bubble here, is simply a kind of an image of your hard drive, or one folder of your hard drive at a particular point in time. So this will contain all kind of files. Let's call this file A, file B. Oops, well, I meant to make a square here. But all the files that are in your folder, which is called the Git repository, or it's not correct, but bear with me. You have this folder, and all the files in this folder, when you make a commit, all these files are kind of saved as they are into one of these bubbles. And they're saved forever basically in this status that they are. So what you can do now is you can go ahead and make a second commit. So you change a bunch of files. Let's say the file B is still the same, but the file A has changed, is now A'. You make a second commit, and the second commit references the first commit. So part of a commit, except the very first commit, part of a commit is always a pointer to its parent commit. And especially if you look at the commits, they all have names. And the name of a commit is always its hash. And the hash includes basically the hash of all the files that are in there. So a hash could be something like F5C259, and so on. And for the next commit, the hash also includes the reference to the parent. That's why the integral part of a commit is to which parent it belongs. This ultimately is what makes the graph kind of the graph. Every commit references its parent. So you can address every commit by its name, as I said, which is the hash of the commit. So the hash is really long, but you can also simply reference it by the first couple of letters. As long as that's unique, Git will let you do this whenever you need to reference some commit. So we've discussed that basically a commit is a bunch of files, as they are, and it's saved in this state. So Git is of course smart. It will only save the diff from one to the other commit. 
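(A quick command sketch of this: the hash is how you would address such a commit on the command line, where f5c259 is just the made-up example hash from above, written in lowercase as Git stores it.
$ git log            # lists the commits with their full hashes
$ git show f5c259    # a unique prefix of the hash is enough to name a commit
)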
But you can just imagine that a commit is simply the status of a folder at a particular point in time. So let me just take away these files here. There are a bunch of other things in Git. So one concept that Git has is called a tag. A tag is a name for a commit that you give yourself. And the tag is like a little flag that sticks in a commit. And you may say this, v1, version 1. This is simply a tag, and as you make new commits, the tag simply stays there. And at any time, if you don't want to remember this big long hash, you can simply refer to this commit as v1. Because that's the tag. It's kind of simple. The next form of a little flag that you can append to a commit is called a branch. And a branch, the difference between a tag and a branch. So a branch is also this flag, and we'll call it, I don't know, blah. The difference is that when you are on this commit here, right here, and you make a commit on top of this commit, while what's called you've checked out the blah branch. So right now you're looking at blah, which is this commit, and you make a commit on top of this commit. What Git will do automatically for you is it will erase this flag and move it to this next commit. So you might know branches from subversion or other version control technologies. It's very similar, but in Git, a branch is simply like a tag. It's simply a name for a commit. But with the additional property that when you make a commit on top of that commit, so when it has the commit as its parent, then Git will move the branch, the little flag, to the new commit. So basically, you always have that one branch, which is called master. Git creates this automatically for you if you just have this little flag, master. And you make a commit on top of master, which would cause master to go here. So people usually say they work on the master branch, which means they're simply making commits on top of the commit that currently has the master flag. Git also allows you to move around both tags and the branches basically to any commit. So I could forcefully go erase this here and simply stick the master flag here. And sometimes if we kind of decide these two commits are no good, we would simply do this. We would simply take the master flag, put it here, and then when we make a new commit on top of the master now, what we would make is we make a new commit point here, then Git would move the master flag because it's a branch, master, and then we simply continue working here, working here, and Git will happily move along this master. So in Git, there is no need to actually delete commits or something like this. What we can simply do is kind of move the branch that we're working on to the commit we like, and garbage collection ultimately will at some point go and delete these two commits. This is a bit more difficult once you collaborate with other people, because they might actually have made commits that reference the commits that you just kind of deleted or so. So it's a bit tricky, but ultimately this is something you can do. So the next thing we're going to talk about is multiple branches. Having multiple branches basically boils down to you have few commits, you have your graph, and let's say this is your master branch. So here we have master, but also, or let's make the one before, otherwise I don't have space, master. So what someone else would like to do is say, hey, I want to try out this new feature in code. It will probably change the code base and so on, but I want to try it out. 
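(Sketched as commands, the tag and branch mechanics described here look roughly as follows; v1 and blah are the example names from above, and the commit to point master at is left as a placeholder.
$ git tag v1                      # plant the v1 flag on the current commit
$ git checkout -b blah            # create the blah branch flag here and switch to it
$ git commit -a -m "more work"    # the blah flag moves along to the new commit
$ git branch -f master <commit>   # forcefully move the master flag, run while another branch is checked out
)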
Maybe it'll introduce some bugs and so on. And then what you can do is you can make a new branch, F1, let's call it F1 for feature one. And then I can make a commit on top of feature one, which would then move the feature one flag to here, and so on. I can make second and third commit and so on. Meanwhile, the other people working on the project, or maybe even you yourself, can work on top of this commit, on top of the master branch. So in kind of software engineering, this is typically used when one part of the team wants to implement a new feature, but the other part of the team kind of continues to do bug fixes or things like this, development on the version of the software that doesn't yet have the new feature. But they kind of need to fix bugs, and since the new feature is not complete yet, they can't both work on the same code base. So each work on their own branch, so to say. And at the end, when feature one is ready, people say, okay, we've implemented it, it's all good, there's no bugs. We would like to integrate the feature one into the main software, basically. What you have to do is you would have to do a so-called merge. A merge is a process that generates a merge commit, and a merge commit is this thing here. As you notice, it has more than one parent. It has, in this case, two parents where it kind of combines. So from this commit here, both branches are based off of this commit. And then changes were made, individual changes, in this branch and in this branch. So there's the possibility that people change different things. And what the merge commit needs to do somehow is to bring together these changes. So actually, both branches might have changed the same file, but in a different way. And now the question is how do we merge these different files? And that's kind of the last topic we'll go into today. How does Git do a merge? So when we talk about merging, Git has a bunch of built-in algorithms that it helps you with. Most of the time, merging is automatic. So if you have files here, A and B, and in one branch, A is changed, some here, and in one branch, B is changed. Git simply assumes, well, this one branch has changed A, the other one hasn't. So the changes mean something. I'll take them. So basically, whenever something has changed in one branch and not changed in the other, it will assume that the changes are the thing that needs to continue to live. It assumes that the changes were made for a reason, and that reason should continue. So one might be a bug fix, the other one might be the new feature. The same goes in the same file. So when you have the same file and in one branch something on top is changed, and the other branch something kind of on the bottom is changed, Git simply assumes both changes are wanted and takes both. The only kind of time when Git doesn't know what to do is when both branches change the same line. So I'm going to represent this with, I don't know, but when both branches change the same line in the same file, or close by, so there are these algorithms that Git determines when there's a so-called merge conflict. That's the only time where Git doesn't know what to do. And so as preliminary, it's a good idea to structure files in a line-based fashion, especially if you write kind of LaTeX. A good practice is to put every sentence on a new line and not have like giant lines of multiple sentences, because if you put every sentence on a new line, then you immediately kind of see where something was changed. 
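(The feature-branch workflow just described, sketched as commands; F1 is the branch name from the example.
$ git checkout -b F1                 # start the feature branch
$ git commit -a -m "feature work"    # one or more commits move the F1 flag along
$ git checkout master                # switch back to the master flag
$ git commit -a -m "bug fix"         # master moves on independently
$ git merge F1                       # creates the merge commit with its two parents
)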
Whereas if you have this big paragraph and Git will simply tell you this line has changed, which is an entire paragraph, and you don't see what's happening. So when you have a merge conflict, Git asks you what to do, and it does this in a very simple way. We're just going to kind of take a look here as a final demonstration. So I have a Git repository here. Let me do that. So as you can see, there's simply this test file in here. And I've just made one commit, the initial commit. And let's look at this test file. It simply says, hello. So what I can do is, for example, I can say, hi. When I want to make a new commit, first of all, Git status will always tell you kind of what you can do, what's happening in your Git repository. Here it says, changes not staged for commit, modified test.txt. And it also tells you what you can do. So it tells me, for example, use git checkout dash dash with the file name to discard changes. Or use git add to update what will be committed. There's a... I'll use git add with this. So it tells me changes to be committed. Now it's green, as you can see. So when I now type git commit, it should commit these changes. And this is a common occurrence in Git. Whenever you see a text editor opening, Git expects you to type a text message, like a commit message in this case, like a log message, basically. The hashtags are comments, which will not go in here. This is all described right here, actually, in these comments. The thing about these things is, when you type an empty message, then Git will abort the commit. Notice you've done something wrong, you can simply save this file with being empty, being nothing but comments, basically. Git will abort. So it's super useful. I'll just say, added hi, and then save this file. So this is not a special thing. All you need to do... This is an editor, a text editor, that edits a text file. You simply need to save the file and close the editor, and Git will be like, OK, cool, I'll continue. So with git log, now you can see we have two commits. We have my initial commit, and we have the commit called added hi. If you look at the test file, you see hi. So what we'll do now is, finally, we'll make two branches, as we've discussed before. So this is my initial commit. I've made one more commit. And we're on branch master right now, which Git status will tell you. See? On branch master. So this is now master. What we'll do is we'll make a new branch called F1. We'll make a commit on F1, meaning we'll move this. F1. Then we'll make a commit on top of master, like this, which means we'll move this. Master. And then we will merge F1 back into master, such that this master is here. And at the end, we can even remove the F1 branch. And we'll do this while we're having a merge conflict, so that you see the whole process. So, okay. So what I want to do is, first I want to make a branch F1. For this, we can use checkout minus B for making a new branch F1. If the branch already exists, you simply need to checkout, which means I simply go to where this branch is, to the commit that the branch references to. We also say we put head to this commit. Head is always the thing you're looking at, basically. The thing you've currently checked out. So, make a new branch F1, and we'll immediately switch to F1 if I type status. It says on branch F1. It's still the same commit, but we're just in a different branch. So we'll make kind of a change to this file here. I'm gonna say hello. Cool. Save the file. Status. It says it's modified. 
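(Written out, the command sequence narrated so far is roughly the following; test.txt and the message added hi are the ones from the demo.
$ git status                   # shows test.txt as modified, not staged
$ git add test.txt             # stage the change
$ git commit -m "added hi"     # -m gives the message directly instead of opening the editor
$ git log                      # now lists the initial commit and added hi
$ git checkout -b F1           # create the F1 branch and switch to it
)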
I want to add and commit it. And there's a shortcut. Commit minus A minus M. So the A simply says all the files that have changed, add them. So I don't need to add, git add all the changed files separately. Though this only counts for kind of changed files. If you have completely new files that git isn't tracking yet, you need to add them yourself. So here with a minus A, I skip the need to first add the files, and with the minus M I can give directly the commit message. More O. Cool. So now what we've done is we have made this commit here and moved the F1 flag to this commit. What we'll do now is we'll go back to this commit, which is currently master branch, and we'll make this commit. So first what we need to do is we'll go back to some commit, which is a checkout. Checkout master. Since master is still referring to that commit. As you can see, when I open the test file, there's no hello. It's the status from before. Hello. I can now change the file in some other manner. In this case I say hello, because I want many Es. And I can say I can commit this, because I'm now on the branch master. It will make this new commit here and move the master branch to that. More E. If you look at git log, you see all these commits on this kind of branch. You don't see the commit on the F1 branch. For that I would have to go back to the F1 branch. I log, and you see here it's a different story. After the added high commit, there's the more O commit. Whereas up here, after the added high commit, there's the more E commit. Merging also happens when you have different branches. When you collaborate with other people, and these people make commits, and you make commits independent of each other, and you try to synchronize your work, often you need to do a merge. And then merge conflicts can also happen. What we can do now is we can go back to master. Because we've... Oops. Git checkout master. There are shortcuts for all of these. We're on this branch right here. What we want to do is we want to make the merge commit. We want to merge F1 into master. While I am on master, I can say git merge F1. It will try to merge, but it will tell me conflict, automatic merge failed, fixed conflicts, and then commit the result. I can say git status. It will tell me you're currently merging. You have unmerged paths. And this test.txt file is both branches modified. I'll go into the test. This is very strange if you see it for the first time, but it's actually very intuitive. What git will do is wherever the line is that both branches have changed, or wherever the block of lines is that both branches have changed, git will basically indicate this by writing directly into the file. It will make these smaller, smaller, smaller, smaller, smaller than sign. Then it says head, which means this is the thing you're currently looking at, which we know is master, has changed this first line to this. Hello. Then it will be like equal, equal, equal, equal. Then it will say down here, it will say the F1 branch has changed this line, the same line to hello. It will denote the end of this with larger, larger, larger, larger, greater than signs. What you need to do in order to merge is simply make this file as you wish it is in the merged state. First of all, you can always start by removing, actually, good practice maybe to remove these equal lines. Then within these delimiters change how you want the file to look. In essence, I simply want to have these O's here at the end. I just want too many. Like this. Or like this. I like that. 
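(At this point test.txt contains conflict markers roughly like the following; the two conflicting lines are only illustrative, since the exact strings typed in the demo are not spelled out.
<<<<<<< HEAD
helloeee
=======
hellooo
>>>>>>> F1
You edit that block down to the single line you want, delete the marker lines, and then, as described next, git add test.txt followed by git commit concludes the merge.)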
I'm going to call that the merged state. Then I delete these lines. This is the file that I would like the merged commit to have. What I can do is save this file. Again, I say git status. It still tells me it's unmerged, but it tells me what to do. It says use git add to mark resolution. I've resolved it. git add test txt. git status. It says all conflicts fixed, but you are still merging. Use git commit to conclude merge. git commit. Bam. I still have to enter a commit message, which is already predefined here. I'm saying I merged the branch F1 and there were conflicts, but that's fine. I like this message, so I'm simply going to save the file right here. When I look into git log, it now gives me the full story. First I have this added high commit, then I have the more O commit and the more E commit, which were in parallel to each other. Then I merged both branches into one. We're now right here. What I can do now is delete the F1 flag, because I don't need it anymore. I do that by git branch minus d F1. It says delete the branch F1. No commits are actually deleted when you delete the branch. It's simply the little flag that is deleted. The only danger is when you delete the little flag and the name, and you're unable to reach the commit from any other end. Here of course we have this master, and by following this edge here, we can reach this commit just fine. git won't delete it or garbage collect it. But git will also tell you when you're about to do something dangerous. So don't worry. With this I think you should already have many tools or many insights into git. In another video we're going to look at how to collaborate online with people, which isn't much harder than this. It's simply two more steps to push and pull your work from a server together with other people. Alright, so that was it. Take care. | [
{
"start": 0,
"end": 9,
"text": " Hi there. Today we're taking a look at Git, especially Git as it is used maybe in research collaborations."
},
{
"start": 9,
"end": 19,
"text": " So Git is like a tool to collaborate, but when you research, like when you work on a paper together with other people,"
},
{
"start": 19,
"end": 24,
"text": " you won't use a lot of the features that Git offers and that are usually described by Git."
},
{
"start": 24,
"end": 33,
"text": " So in this series I want to talk about what's kind of the most simple way to collaborate with people on a research project using Git."
},
{
"start": 33,
"end": 40,
"text": " And today we're going to go over just the fundamentals, which makes everything else a lot easier."
},
{
"start": 40,
"end": 51,
"text": " So what you need to understand about Git is that fundamentally Git is a graph, and it's a graph of commits."
},
{
"start": 51,
"end": 61,
"text": " What I mean by this. So let's say you have your paper, you write some things, and then this is kind of version one."
},
{
"start": 61,
"end": 70,
"text": " And then you have another paper, or the same paper, and you kind of change this line here. That's version two, and so on."
},
{
"start": 70,
"end": 76,
"text": " You have kind of this chain of versions that you would like to keep in store."
},
{
"start": 76,
"end": 82,
"text": " So this is the classic example of version control, where you would like to save these versions,"
},
{
"start": 82,
"end": 88,
"text": " and do it in a way that you can at any point in time go back to any version previously."
},
{
"start": 88,
"end": 92,
"text": " And this is exactly what Git does, without you having to kind of rename."
},
{
"start": 92,
"end": 102,
"text": " Like people usually copy the file and then rename like this version two, version three, final version, really final version, really final version corrected, blah blah blah."
},
{
"start": 102,
"end": 108,
"text": " Alright, so Git fundamentally is a graph, and a graph of an object we call a commit."
},
{
"start": 108,
"end": 116,
"text": " So a commit, which I'm going to represent as a bubble here, is simply a kind of an image of your hard drive,"
},
{
"start": 116,
"end": 120,
"text": " or one folder of your hard drive at a particular point in time."
},
{
"start": 120,
"end": 127,
"text": " So this will contain all kind of files. Let's call this file A, file B."
},
{
"start": 127,
"end": 134,
"text": " Oops, well, I meant to make a square here. But all the files that are in your folder,"
},
{
"start": 134,
"end": 140,
"text": " which is called the Git repository, or it's not correct, but bear with me."
},
{
"start": 140,
"end": 146,
"text": " You have this folder, and all the files in this folder, when you make a commit,"
},
{
"start": 146,
"end": 152,
"text": " all these files are kind of saved as they are into one of these bubbles."
},
{
"start": 152,
"end": 159,
"text": " And they're saved forever basically in this status that they are."
},
{
"start": 159,
"end": 165,
"text": " So what you can do now is you can go ahead and make a second commit."
},
{
"start": 165,
"end": 175,
"text": " So you change a bunch of files. Let's say the file B is still the same, but the file A has changed, is now A'."
},
{
"start": 175,
"end": 179,
"text": " You make a second commit, and the second commit references the first commit."
},
{
"start": 179,
"end": 188,
"text": " So part of a commit, except the very first commit, part of a commit is always a pointer to its parent commit."
},
{
"start": 188,
"end": 192,
"text": " And especially if you look at the commits, they all have names."
},
{
"start": 192,
"end": 196,
"text": " And the name of a commit is always its hash."
},
{
"start": 196,
"end": 201,
"text": " And the hash includes basically the hash of all the files that are in there."
},
{
"start": 201,
"end": 209,
"text": " So a hash could be something like F5C259, and so on."
},
{
"start": 209,
"end": 215,
"text": " And for the next commit, the hash also includes the reference to the parent."
},
{
"start": 215,
"end": 222,
"text": " That's why the integral part of a commit is to which parent it belongs."
},
{
"start": 222,
"end": 228,
"text": " This ultimately is what makes the graph kind of the graph."
},
{
"start": 228,
"end": 233,
"text": " Every commit references its parent."
},
{
"start": 233,
"end": 238,
"text": " So you can address every commit by its name, as I said, which is the hash of the commit."
},
{
"start": 238,
"end": 246,
"text": " So the hash is really long, but you can also simply reference it by the first couple of letters."
},
{
"start": 246,
"end": 252,
"text": " As long as that's unique, Git will let you do this whenever you need to reference some commit."
},
{
"start": 252,
"end": 262,
"text": " So we've discussed that basically a commit is a bunch of files, as they are, and it's saved in this state."
},
{
"start": 262,
"end": 267,
"text": " So Git is of course smart. It will only save the diff from one to the other commit."
},
{
"start": 267,
"end": 274,
"text": " But you can just imagine that a commit is simply the status of a folder at a particular point in time."
},
{
"start": 274,
"end": 280,
"text": " So let me just take away these files here."
},
{
"start": 280,
"end": 285,
"text": " There are a bunch of other things in Git."
},
{
"start": 285,
"end": 292,
"text": " So one concept that Git has is called a tag."
},
{
"start": 292,
"end": 297,
"text": " A tag is a name for a commit that you give yourself."
},
{
"start": 297,
"end": 301,
"text": " And the tag is like a little flag that sticks in a commit."
},
{
"start": 301,
"end": 305,
"text": " And you may say this, v1, version 1."
},
{
"start": 305,
"end": 310,
"text": " This is simply a tag, and as you make new commits, the tag simply stays there."
},
{
"start": 310,
"end": 316,
"text": " And at any time, if you don't want to remember this big long hash, you can simply refer to this commit as v1."
},
{
"start": 316,
"end": 321,
"text": " Because that's the tag. It's kind of simple."
},
{
"start": 321,
"end": 327,
"text": " The next form of a little flag that you can append to a commit is called a branch."
},
{
"start": 327,
"end": 331,
"text": " And a branch, the difference between a tag and a branch."
},
{
"start": 331,
"end": 338,
"text": " So a branch is also this flag, and we'll call it, I don't know, blah."
},
{
"start": 338,
"end": 345,
"text": " The difference is that when you are on this commit here, right here,"
},
{
"start": 345,
"end": 350,
"text": " and you make a commit on top of this commit,"
},
{
"start": 350,
"end": 353,
"text": " while what's called you've checked out the blah branch."
},
{
"start": 353,
"end": 359,
"text": " So right now you're looking at blah, which is this commit, and you make a commit on top of this commit."
},
{
"start": 359,
"end": 368,
"text": " What Git will do automatically for you is it will erase this flag and move it to this next commit."
},
{
"start": 368,
"end": 378,
"text": " So you might know branches from subversion or other version control technologies."
},
{
"start": 378,
"end": 382,
"text": " It's very similar, but in Git, a branch is simply like a tag."
},
{
"start": 382,
"end": 385,
"text": " It's simply a name for a commit."
},
{
"start": 385,
"end": 390,
"text": " But with the additional property that when you make a commit on top of that commit,"
},
{
"start": 390,
"end": 399,
"text": " so when it has the commit as its parent, then Git will move the branch, the little flag, to the new commit."
},
{
"start": 399,
"end": 406,
"text": " So basically, you always have that one branch, which is called master."
},
{
"start": 406,
"end": 413,
"text": " Git creates this automatically for you if you just have this little flag, master."
},
{
"start": 413,
"end": 421,
"text": " And you make a commit on top of master, which would cause master to go here."
},
{
"start": 421,
"end": 427,
"text": " So people usually say they work on the master branch,"
},
{
"start": 427,
"end": 433,
"text": " which means they're simply making commits on top of the commit that currently has the master flag."
},
{
"start": 433,
"end": 441,
"text": " Git also allows you to move around both tags and the branches basically to any commit."
},
{
"start": 441,
"end": 449,
"text": " So I could forcefully go erase this here and simply stick the master flag here."
},
{
"start": 449,
"end": 456,
"text": " And sometimes if we kind of decide these two commits are no good, we would simply do this."
},
{
"start": 456,
"end": 463,
"text": " We would simply take the master flag, put it here, and then when we make a new commit on top of the master now,"
},
{
"start": 463,
"end": 466,
"text": " what we would make is we make a new commit point here,"
},
{
"start": 466,
"end": 472,
"text": " then Git would move the master flag because it's a branch, master,"
},
{
"start": 472,
"end": 482,
"text": " and then we simply continue working here, working here, and Git will happily move along this master."
},
{
"start": 482,
"end": 486,
"text": " So in Git, there is no need to actually delete commits or something like this."
},
{
"start": 486,
"end": 496,
"text": " What we can simply do is kind of move the branch that we're working on to the commit we like,"
},
{
"start": 496,
"end": 501,
"text": " and garbage collection ultimately will at some point go and delete these two commits."
},
{
"start": 501,
"end": 505,
"text": " This is a bit more difficult once you collaborate with other people,"
},
{
"start": 505,
"end": 514,
"text": " because they might actually have made commits that reference the commits that you just kind of deleted or so."
},
{
"start": 514,
"end": 521,
"text": " So it's a bit tricky, but ultimately this is something you can do."
},
{
"start": 521,
"end": 525,
"text": " So the next thing we're going to talk about is multiple branches."
},
{
"start": 525,
"end": 535,
"text": " Having multiple branches basically boils down to you have few commits, you have your graph,"
},
{
"start": 535,
"end": 539,
"text": " and let's say this is your master branch."
},
{
"start": 539,
"end": 551,
"text": " So here we have master, but also, or let's make the one before, otherwise I don't have space, master."
},
{
"start": 551,
"end": 565,
"text": " So what someone else would like to do is say, hey, I want to try out this new feature in code."
},
{
"start": 565,
"end": 569,
"text": " It will probably change the code base and so on, but I want to try it out."
},
{
"start": 569,
"end": 572,
"text": " Maybe it'll introduce some bugs and so on."
},
{
"start": 572,
"end": 580,
"text": " And then what you can do is you can make a new branch, F1, let's call it F1 for feature one."
},
{
"start": 580,
"end": 592,
"text": " And then I can make a commit on top of feature one, which would then move the feature one flag to here, and so on."
},
{
"start": 592,
"end": 594,
"text": " I can make second and third commit and so on."
},
{
"start": 594,
"end": 603,
"text": " Meanwhile, the other people working on the project, or maybe even you yourself, can work on top of this commit,"
},
{
"start": 603,
"end": 606,
"text": " on top of the master branch."
},
{
"start": 606,
"end": 614,
"text": " So in kind of software engineering, this is typically used when one part of the team wants to implement a new feature,"
},
{
"start": 614,
"end": 619,
"text": " but the other part of the team kind of continues to do bug fixes or things like this,"
},
{
"start": 619,
"end": 625,
"text": " development on the version of the software that doesn't yet have the new feature."
},
{
"start": 625,
"end": 632,
"text": " But they kind of need to fix bugs, and since the new feature is not complete yet, they can't both work on the same code base."
},
{
"start": 632,
"end": 639,
"text": " So each work on their own branch, so to say."
},
{
"start": 639,
"end": 652,
"text": " And at the end, when feature one is ready, people say, okay, we've implemented it, it's all good, there's no bugs."
},
{
"start": 652,
"end": 658,
"text": " We would like to integrate the feature one into the main software, basically."
},
{
"start": 658,
"end": 666,
"text": " What you have to do is you would have to do a so-called merge."
},
{
"start": 666,
"end": 675,
"text": " A merge is a process that generates a merge commit, and a merge commit is this thing here."
},
{
"start": 675,
"end": 678,
"text": " As you notice, it has more than one parent."
},
{
"start": 678,
"end": 686,
"text": " It has, in this case, two parents where it kind of combines."
},
{
"start": 686,
"end": 693,
"text": " So from this commit here, both branches are based off of this commit."
},
{
"start": 693,
"end": 699,
"text": " And then changes were made, individual changes, in this branch and in this branch."
},
{
"start": 699,
"end": 704,
"text": " So there's the possibility that people change different things."
},
{
"start": 704,
"end": 712,
"text": " And what the merge commit needs to do somehow is to bring together these changes."
},
{
"start": 712,
"end": 718,
"text": " So actually, both branches might have changed the same file, but in a different way."
},
{
"start": 718,
"end": 723,
"text": " And now the question is how do we merge these different files?"
},
{
"start": 723,
"end": 729,
"text": " And that's kind of the last topic we'll go into today."
},
{
"start": 729,
"end": 733,
"text": " How does Git do a merge?"
},
{
"start": 733,
"end": 744,
"text": " So when we talk about merging, Git has a bunch of built-in algorithms that it helps you with."
},
{
"start": 744,
"end": 747,
"text": " Most of the time, merging is automatic."
},
{
"start": 747,
"end": 760,
"text": " So if you have files here, A and B, and in one branch, A is changed, some here, and in one branch, B is changed."
},
{
"start": 760,
"end": 766,
"text": " Git simply assumes, well, this one branch has changed A, the other one hasn't."
},
{
"start": 766,
"end": 770,
"text": " So the changes mean something. I'll take them."
},
{
"start": 770,
"end": 778,
"text": " So basically, whenever something has changed in one branch and not changed in the other,"
},
{
"start": 778,
"end": 787,
"text": " it will assume that the changes are the thing that needs to continue to live."
},
{
"start": 787,
"end": 794,
"text": " It assumes that the changes were made for a reason, and that reason should continue."
},
{
"start": 794,
"end": 798,
"text": " So one might be a bug fix, the other one might be the new feature."
},
{
"start": 798,
"end": 800,
"text": " The same goes in the same file."
},
{
"start": 800,
"end": 807,
"text": " So when you have the same file and in one branch something on top is changed,"
},
{
"start": 807,
"end": 810,
"text": " and the other branch something kind of on the bottom is changed,"
},
{
"start": 810,
"end": 819,
"text": " Git simply assumes both changes are wanted and takes both."
},
{
"start": 819,
"end": 828,
"text": " The only kind of time when Git doesn't know what to do is when both branches change the same line."
},
{
"start": 828,
"end": 837,
"text": " So I'm going to represent this with, I don't know, but when both branches change the same line in the same file,"
},
{
"start": 837,
"end": 847,
"text": " or close by, so there are these algorithms that Git determines when there's a so-called merge conflict."
},
{
"start": 847,
"end": 850,
"text": " That's the only time where Git doesn't know what to do."
},
{
"start": 850,
"end": 856,
"text": " And so as preliminary, it's a good idea to structure files in a line-based fashion,"
},
{
"start": 856,
"end": 860,
"text": " especially if you write kind of LaTeX."
},
{
"start": 860,
"end": 869,
"text": " A good practice is to put every sentence on a new line and not have like giant lines of multiple sentences,"
},
{
"start": 869,
"end": 878,
"text": " because if you put every sentence on a new line, then you immediately kind of see where something was changed."
},
{
"start": 878,
"end": 883,
"text": " Whereas if you have this big paragraph and Git will simply tell you this line has changed,"
},
{
"start": 883,
"end": 888,
"text": " which is an entire paragraph, and you don't see what's happening."
},
{
"start": 888,
"end": 895,
"text": " So when you have a merge conflict, Git asks you what to do, and it does this in a very simple way."
},
{
"start": 895,
"end": 900,
"text": " We're just going to kind of take a look here as a final demonstration."
},
{
"start": 900,
"end": 905,
"text": " So I have a Git repository here."
},
{
"start": 905,
"end": 907,
"text": " Let me do that."
},
{
"start": 907,
"end": 911,
"text": " So as you can see, there's simply this test file in here."
},
{
"start": 911,
"end": 914,
"text": " And I've just made one commit, the initial commit."
},
{
"start": 914,
"end": 918,
"text": " And let's look at this test file."
},
{
"start": 918,
"end": 920,
"text": " It simply says, hello."
},
{
"start": 920,
"end": 924,
"text": " So what I can do is, for example, I can say, hi."
},
{
"start": 924,
"end": 931,
"text": " When I want to make a new commit, first of all, Git status will always tell you kind of what you can do,"
},
{
"start": 931,
"end": 934,
"text": " what's happening in your Git repository."
},
{
"start": 934,
"end": 940,
"text": " Here it says, changes not staged for commit, modified test.txt."
},
{
"start": 940,
"end": 942,
"text": " And it also tells you what you can do."
},
{
"start": 942,
"end": 949,
"text": " So it tells me, for example, use git checkout dash dash with the file name to discard changes."
},
{
"start": 949,
"end": 954,
"text": " Or use git add to update what will be committed."
},
{
"start": 954,
"end": 957,
"text": " There's a..."
},
{
"start": 957,
"end": 961,
"text": " I'll use git add with this."
},
{
"start": 961,
"end": 966,
"text": " So it tells me changes to be committed."
},
{
"start": 966,
"end": 968,
"text": " Now it's green, as you can see."
},
{
"start": 968,
"end": 974,
"text": " So when I now type git commit, it should commit these changes."
},
{
"start": 974,
"end": 977,
"text": " And this is a common occurrence in Git."
},
{
"start": 977,
"end": 983,
"text": " Whenever you see a text editor opening, Git expects you to type a text message,"
},
{
"start": 983,
"end": 988,
"text": " like a commit message in this case, like a log message, basically."
},
{
"start": 988,
"end": 992,
"text": " The hashtags are comments, which will not go in here."
},
{
"start": 992,
"end": 997,
"text": " This is all described right here, actually, in these comments."
},
{
"start": 997,
"end": 1004,
"text": " The thing about these things is, when you type an empty message, then Git will abort the commit."
},
{
"start": 1004,
"end": 1009,
"text": " Notice you've done something wrong, you can simply save this file with being empty,"
},
{
"start": 1009,
"end": 1014,
"text": " being nothing but comments, basically."
},
{
"start": 1014,
"end": 1016,
"text": " Git will abort. So it's super useful."
},
{
"start": 1016,
"end": 1022,
"text": " I'll just say, added hi, and then save this file."
},
{
"start": 1022,
"end": 1025,
"text": " So this is not a special thing. All you need to do..."
},
{
"start": 1025,
"end": 1029,
"text": " This is an editor, a text editor, that edits a text file."
},
{
"start": 1029,
"end": 1033,
"text": " You simply need to save the file and close the editor, and Git will be like,"
},
{
"start": 1033,
"end": 1037,
"text": " OK, cool, I'll continue."
},
{
"start": 1037,
"end": 1040,
"text": " So with git log, now you can see we have two commits."
},
{
"start": 1040,
"end": 1044,
"text": " We have my initial commit, and we have the commit called added hi."
},
{
"start": 1044,
"end": 1048,
"text": " If you look at the test file, you see hi."
},
{
"start": 1048,
"end": 1056,
"text": " So what we'll do now is, finally, we'll make two branches, as we've discussed before."
},
{
"start": 1056,
"end": 1060,
"text": " So this is my initial commit. I've made one more commit."
},
{
"start": 1060,
"end": 1065,
"text": " And we're on branch master right now, which Git status will tell you."
},
{
"start": 1065,
"end": 1070,
"text": " See? On branch master. So this is now master."
},
{
"start": 1070,
"end": 1074,
"text": " What we'll do is we'll make a new branch called F1."
},
{
"start": 1074,
"end": 1081,
"text": " We'll make a commit on F1, meaning we'll move this. F1."
},
{
"start": 1081,
"end": 1087,
"text": " Then we'll make a commit on top of master, like this, which means we'll move this."
},
{
"start": 1087,
"end": 1099,
"text": " Master. And then we will merge F1 back into master, such that this master is here."
},
{
"start": 1099,
"end": 1104,
"text": " And at the end, we can even remove the F1 branch."
},
{
"start": 1104,
"end": 1111,
"text": " And we'll do this while we're having a merge conflict, so that you see the whole process."
},
{
"start": 1111,
"end": 1117,
"text": " So, okay. So what I want to do is, first I want to make a branch F1."
},
{
"start": 1117,
"end": 1122,
"text": " For this, we can use checkout minus B for making a new branch F1."
},
{
"start": 1122,
"end": 1130,
"text": " If the branch already exists, you simply need to checkout, which means I simply go to where this branch is,"
},
{
"start": 1130,
"end": 1133,
"text": " to the commit that the branch references to."
},
{
"start": 1133,
"end": 1138,
"text": " We also say we put head to this commit."
},
{
"start": 1138,
"end": 1144,
"text": " Head is always the thing you're looking at, basically. The thing you've currently checked out."
},
{
"start": 1144,
"end": 1150,
"text": " So, make a new branch F1, and we'll immediately switch to F1 if I type status."
},
{
"start": 1150,
"end": 1157,
"text": " It says on branch F1. It's still the same commit, but we're just in a different branch."
},
{
"start": 1157,
"end": 1166,
"text": " So we'll make kind of a change to this file here. I'm gonna say hello."
},
{
"start": 1166,
"end": 1174,
"text": " Cool. Save the file. Status. It says it's modified. I want to add and commit it."
},
{
"start": 1174,
"end": 1180,
"text": " And there's a shortcut. Commit minus A minus M."
},
{
"start": 1180,
"end": 1186,
"text": " So the A simply says all the files that have changed, add them."
},
{
"start": 1186,
"end": 1190,
"text": " So I don't need to add, git add all the changed files separately."
},
{
"start": 1190,
"end": 1193,
"text": " Though this only counts for kind of changed files."
},
{
"start": 1193,
"end": 1198,
"text": " If you have completely new files that git isn't tracking yet, you need to add them yourself."
},
{
"start": 1198,
"end": 1204,
"text": " So here with a minus A, I skip the need to first add the files,"
},
{
"start": 1204,
"end": 1210,
"text": " and with the minus M I can give directly the commit message. More O. Cool."
},
{
"start": 1210,
"end": 1219,
"text": " So now what we've done is we have made this commit here and moved the F1 flag to this commit."
},
{
"start": 1219,
"end": 1229,
"text": " What we'll do now is we'll go back to this commit, which is currently master branch, and we'll make this commit."
},
{
"start": 1229,
"end": 1234,
"text": " So first what we need to do is we'll go back to some commit, which is a checkout."
},
{
"start": 1234,
"end": 1240,
"text": " Checkout master. Since master is still referring to that commit."
},
{
"start": 1240,
"end": 1248,
"text": " As you can see, when I open the test file, there's no hello. It's the status from before. Hello."
},
{
"start": 1248,
"end": 1256,
"text": " I can now change the file in some other manner. In this case I say hello, because I want many Es."
},
{
"start": 1256,
"end": 1263,
"text": " And I can say I can commit this, because I'm now on the branch master."
},
{
"start": 1263,
"end": 1271,
"text": " It will make this new commit here and move the master branch to that. More E."
},
{
"start": 1271,
"end": 1277,
"text": " If you look at git log, you see all these commits on this kind of branch."
},
{
"start": 1277,
"end": 1286,
"text": " You don't see the commit on the F1 branch. For that I would have to go back to the F1 branch."
},
{
"start": 1286,
"end": 1293,
"text": " I log, and you see here it's a different story. After the added high commit, there's the more O commit."
},
{
"start": 1293,
"end": 1299,
"text": " Whereas up here, after the added high commit, there's the more E commit."
},
{
"start": 1299,
"end": 1307,
"text": " Merging also happens when you have different branches."
},
{
"start": 1307,
"end": 1313,
"text": " When you collaborate with other people, and these people make commits, and you make commits independent of each other,"
},
{
"start": 1313,
"end": 1319,
"text": " and you try to synchronize your work, often you need to do a merge."
},
{
"start": 1319,
"end": 1324,
"text": " And then merge conflicts can also happen."
},
{
"start": 1324,
"end": 1329,
"text": " What we can do now is we can go back to master."
},
{
"start": 1329,
"end": 1334,
"text": " Because we've... Oops. Git checkout master."
},
{
"start": 1334,
"end": 1337,
"text": " There are shortcuts for all of these."
},
{
"start": 1337,
"end": 1340,
"text": " We're on this branch right here."
},
{
"start": 1340,
"end": 1345,
"text": " What we want to do is we want to make the merge commit."
},
{
"start": 1345,
"end": 1354,
"text": " We want to merge F1 into master. While I am on master, I can say git merge F1."
},
{
"start": 1354,
"end": 1362,
"text": " It will try to merge, but it will tell me conflict, automatic merge failed, fixed conflicts, and then commit the result."
},
{
"start": 1362,
"end": 1364,
"text": " I can say git status."
},
{
"start": 1364,
"end": 1369,
"text": " It will tell me you're currently merging. You have unmerged paths."
},
{
"start": 1369,
"end": 1375,
"text": " And this test.txt file is both branches modified."
},
{
"start": 1375,
"end": 1378,
"text": " I'll go into the test."
},
{
"start": 1378,
"end": 1383,
"text": " This is very strange if you see it for the first time, but it's actually very intuitive."
},
{
"start": 1383,
"end": 1390,
"text": " What git will do is wherever the line is that both branches have changed,"
},
{
"start": 1390,
"end": 1395,
"text": " or wherever the block of lines is that both branches have changed,"
},
{
"start": 1395,
"end": 1400,
"text": " git will basically indicate this by writing directly into the file."
},
{
"start": 1400,
"end": 1405,
"text": " It will make these smaller, smaller, smaller, smaller, smaller than sign."
},
{
"start": 1405,
"end": 1409,
"text": " Then it says head, which means this is the thing you're currently looking at,"
},
{
"start": 1409,
"end": 1413,
"text": " which we know is master, has changed this first line to this."
},
{
"start": 1413,
"end": 1418,
"text": " Hello. Then it will be like equal, equal, equal, equal."
},
{
"start": 1418,
"end": 1423,
"text": " Then it will say down here, it will say the F1 branch has changed this line,"
},
{
"start": 1423,
"end": 1426,
"text": " the same line to hello."
},
{
"start": 1426,
"end": 1434,
"text": " It will denote the end of this with larger, larger, larger, larger, greater than signs."
},
{
"start": 1434,
"end": 1444,
"text": " What you need to do in order to merge is simply make this file as you wish it is in the merged state."
},
{
"start": 1444,
"end": 1452,
"text": " First of all, you can always start by removing, actually, good practice maybe to remove these equal lines."
},
{
"start": 1452,
"end": 1459,
"text": " Then within these delimiters change how you want the file to look."
},
{
"start": 1459,
"end": 1469,
"text": " In essence, I simply want to have these O's here at the end."
},
{
"start": 1469,
"end": 1474,
"text": " I just want too many. Like this."
},
{
"start": 1474,
"end": 1478,
"text": " Or like this. I like that. I'm going to call that the merged state."
},
{
"start": 1478,
"end": 1488,
"text": " Then I delete these lines. This is the file that I would like the merged commit to have."
},
{
"start": 1488,
"end": 1491,
"text": " What I can do is save this file."
},
{
"start": 1491,
"end": 1495,
"text": " Again, I say git status. It still tells me it's unmerged, but it tells me what to do."
},
{
"start": 1495,
"end": 1499,
"text": " It says use git add to mark resolution."
},
{
"start": 1499,
"end": 1508,
"text": " I've resolved it. git add test txt. git status."
},
{
"start": 1508,
"end": 1516,
"text": " It says all conflicts fixed, but you are still merging. Use git commit to conclude merge."
},
{
"start": 1516,
"end": 1519,
"text": " git commit. Bam."
},
{
"start": 1519,
"end": 1527,
"text": " I still have to enter a commit message, which is already predefined here."
},
{
"start": 1527,
"end": 1533,
"text": " I'm saying I merged the branch F1 and there were conflicts, but that's fine."
},
{
"start": 1533,
"end": 1539,
"text": " I like this message, so I'm simply going to save the file right here."
},
{
"start": 1539,
"end": 1545,
"text": " When I look into git log, it now gives me the full story."
},
{
"start": 1545,
"end": 1550,
"text": " First I have this added high commit, then I have the more O commit and the more E commit,"
},
{
"start": 1550,
"end": 1552,
"text": " which were in parallel to each other."
},
{
"start": 1552,
"end": 1562,
"text": " Then I merged both branches into one. We're now right here."
},
{
"start": 1562,
"end": 1570,
"text": " What I can do now is delete the F1 flag, because I don't need it anymore."
},
{
"start": 1570,
"end": 1576,
"text": " I do that by git branch minus d F1."
},
{
"start": 1576,
"end": 1582,
"text": " It says delete the branch F1. No commits are actually deleted when you delete the branch."
},
{
"start": 1582,
"end": 1585,
"text": " It's simply the little flag that is deleted."
},
{
"start": 1585,
"end": 1590,
"text": " The only danger is when you delete the little flag and the name,"
},
{
"start": 1590,
"end": 1594,
"text": " and you're unable to reach the commit from any other end."
},
{
"start": 1594,
"end": 1600,
"text": " Here of course we have this master, and by following this edge here, we can reach this commit just fine."
},
{
"start": 1600,
"end": 1605,
"text": " git won't delete it or garbage collect it."
},
{
"start": 1605,
"end": 1610,
"text": " But git will also tell you when you're about to do something dangerous."
},
{
"start": 1610,
"end": 1613,
"text": " So don't worry."
},
{
"start": 1613,
"end": 1621,
"text": " With this I think you should already have many tools or many insights into git."
},
{
"start": 1621,
"end": 1626,
"text": " In another video we're going to look at how to collaborate online with people,"
},
{
"start": 1626,
"end": 1628,
"text": " which isn't much harder than this."
},
{
"start": 1628,
"end": 1638,
"text": " It's simply two more steps to push and pull your work from a server together with other people."
},
{
"start": 1638,
"end": 1660,
"text": " Alright, so that was it. Take care."
}
] |
iDulhoQ2pro | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Attention Is All You Need | [
"Science & Technology"
] | [
"deep learning",
"machine learning",
"nlp",
"natural language processing",
"machine translation",
"arxiv",
"google",
"attention mechanism",
"attention",
"transformer",
"tensor2tensor",
"rnn",
"recurrent",
"seq2seq"
] | https://arxiv.org/abs/1706.03762
Abstract:
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
Authors:
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin | Hi there. Today we're looking at Attention is All You Need by Google. Just to declare, I don't work for Google just because we've been looking at Google papers lately. But it's just an interesting paper and we're going to see what's the deal with it. So basically what the authors are saying is we should kind of get away from basically RNNs. So traditionally what you would do, and these authors in particular are interested in NLP, Natural Language Processing. So traditionally when you have a language task like the cat eats the mouse and you'd like to translate this to say any other language like let's say German or whatever. What you would do is you would try to encode this sentence into a representation and then decode it again. So somehow, somehow this sentence needs to all go into say one vector and then this one vector needs to somehow be transformed into the target language. So these are traditionally called seq-to-seq tasks and they have been solved so far using recurrent neural networks. You might know the LSTM networks that are very popular for these tasks. What basically happens in an RNN is that you go over the say source sentence here one by one. Here you take the word the, you kind of encode it maybe with a word vector if you know what that is. So you turn it into like a vector, a word vector and then you use a neural network to turn this vector into what we call a hidden state. So this h0 is a hidden state. You then take the second token here cat. You again take its word vector because you need to represent it with numbers somehow so you use word vectors for that. You turn this into, you put it through the same function so here is what's like a little e for encoder. You turn it into the same function but this time this hidden state also gets plugged in here. So the word vector is instead, you can actually think of having like a start hidden state here, h start. Usually people either learn this or just initialize with zeros that kind of goes into the encoder function so it's always really the same function. And from the previous hidden state and the current word vector the encoder again predicts another hidden state h1 and so on. So you take the next token, you turn it into a word vector, you put it through this little e encoder function and of course this is a lot more complicated in actual like say an LSTM but it's the basic principle behind it. So you end up with h2 and here you'd have h3, h4 and the last hidden state h4 here you would use this in kind of exactly the same fashion. You would plug it into like a decoder, a little e decoder which would output you a word, die, and it would also output you a next hidden state so h5. Let's just go on with the listing of the states and this h5 would again go into the decoder which would output Katze. So that's how you would decode you basically these RNNs what they do is they kind of take, if you look on top here they take an input, a current input and they take the last hidden state and they compute a new hidden state. In the case of the decoder they take the hidden state and they take kind of the previous, usually the previous word that you output and they feed this back into the decoder and they will output the next word. It kind of makes sense.
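To make that encoder loop concrete, here is a small numpy sketch (this is not from the paper or the video; the toy sentence, the dimensions and the plain tanh cell are made-up stand-ins for the LSTM the speaker mentions):

import numpy as np

np.random.seed(0)
d = 8                                            # embedding / hidden size (made up)
vocab = ["the", "cat", "eats", "mouse"]
emb = {w: np.random.randn(d) for w in vocab}     # toy word vectors

# Parameters of a plain tanh RNN cell, standing in for the LSTM in the video.
W_xh = 0.1 * np.random.randn(d, d)
W_hh = 0.1 * np.random.randn(d, d)

def encode_step(x, h_prev):
    """One encoder step: mix the current word vector with the previous hidden state."""
    return np.tanh(W_xh @ x + W_hh @ h_prev)

h = np.zeros(d)                                  # the "h start" hidden state
hidden_states = []
for word in ["the", "cat", "eats", "the", "mouse"]:
    h = encode_step(emb[word], h)
    hidden_states.append(h)

# Without attention, this single final vector has to carry the whole sentence
# to the decoder, which then unrolls the target words from it one at a time.
print("final hidden state h4:", np.round(hidden_states[-1][:4], 3))

The thing to notice is that the final hidden state is the only thing handed to the decoder, which is exactly the bottleneck that attention later removes.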
So you would guess that the hidden state kind of encodes what the sentence means and the last word that you output you need this because maybe for grammar right you know what you've just output so kind of the next word should be based on that. Of course you don't have to do it exactly this way but that's kind of what these RNNs did. So attention is a mechanism here to basically increase the performance of the RNNs. So what attention would do is in this particular case if we look at the decoder here if it's trying to predict this word for cat then or the next word here, say here it wants the next word and in essence the only information it really has is what the last word was, the German word for cat, and what the hidden state is. So if we look at what word it actually should output in the input sentence it's this here, eats. And if we look at kind of the information flow that this word has to travel so first it needs to encode into a word vector it needs to go through this encoder that's the same function for all the words so now we have to look at this encoder that's the same function for all the words so nothing specific can be learned to the word eats here right. It needs to go through this hidden state, traverse again into another step, this hidden state because we have two more tokens and then the next hidden state and then it goes all the way to the decoder where the first two words are decoded and still so this H6, this hidden state somehow still needs to retain the information that now the word eats somehow is kind of the word to be translated and that the decoder should find the German word for that. So that's of course a very long path, there's a lot of transformations involved over all these hidden states and the hidden states not only do they need to remember this particular word but all of the words and the order and so on and the grammar, ok the grammar you can actually learn with the decoders themselves but kind of the meaning and the structure of the sentence so it's very hard for an RNN to learn all of this what we call long range dependencies and so naturally you actually think well why can't we just decode the first word to the first word, second word to the second word it actually works pretty well in this example right like the de cat cuts it eats we could just decode one by one of course that's not how translation works in translations the sentences can become rearranged in the target language like one word can become many words or it could even be an entirely different expression. So attention is a mechanism by which this decoder here in this step that we're looking at can actually decide to go back and look at particular parts of the input especially what it would do in like popular attention mechanisms is that this decoder here can decide to attend to the hidden states of the input sentence. What that means is in this particular case we would like to teach the decoder somehow that aha look here I need to pay close attention to this step here because that was the step when the word eats was just encoded so it probably has a lot of information about what I would like to do right now namely translate this word eats. So this mechanism allows if you look at the information flow it simply goes through this word vector goes through one encoding step and then is at the hidden state and then the decoder can look directly at that so the path length of information is much shorter than going through all of the hidden states in a traditional way. 
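Before going on, here is a minimal numpy sketch of the dot-product attention, or soft addressing, idea that the next part of the video walks through; the shapes and the random keys and values are illustrative assumptions, not the paper's exact scaled multi-head formulation:

import numpy as np

def softmax(x):
    """Exponentiate and renormalize so the entries sum to one; the largest
    score ends up close to one and the rest close to zero."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attend(query, keys, values):
    """Dot-product attention: score every key against the query, softmax the
    scores into weights, and return the weighted average of the values."""
    scores = keys @ query                 # one dot product per key
    weights = softmax(scores)
    return weights @ values, weights

np.random.seed(1)
n, d = 5, 8                               # 5 source positions, dimension 8 (made up)
keys = np.random.randn(n, d)              # e.g. derived from encoder hidden states
values = np.random.randn(n, d)            # the content stored at each position
query = keys[2] + 0.1 * np.random.randn(d)    # a query built to align with key 2

context, weights = attend(query, keys, values)
print("attention weights:", np.round(weights, 3))

Scaling the scores by the square root of the dimension and running several of these lookups in parallel heads, as the paper does, changes the details but not this basic lookup behaviour.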
So that's where attention helps and the way that the decoder decides what to look at is like a kind of an addressing scheme you may know it from neural Turing machines or kind of other kind of neural algorithms things so what the decoder would do is in each step it would output a bunch of keys. Sorry about that. That's my hand being drippy. So what it would output is a bunch of keys so k1 through kn and what these keys would do is they would index these hidden kind of hidden states via a kind of a softmax architecture and we're gonna look at this I think in the actual paper we're discussing because it's gonna become more clear but just kind of notice that the decoder here can decide to attend to the input sentence and kind of draw information directly from there instead of having to go just to the hidden state it's provided with. So if we go to the paper here what do these authors propose and the thing is they ditch the RNNs they basically say attention is all you need you don't need the entire recurrent things basically in every step of this decode of this basically of the decoding so you want to produce the target sentence so in this step in this step in this step you can basically you don't need the recurrence you can just kind of do attention over everything and it will be fine namely what they do is they propose this transformer architecture so what does it do it has two parts what's called an encoder and a decoder but don't kind of be confused because this all happens at once so this is not an RNN it all happens at once every all the source sentence so if we again have the cat oops that doesn't work as easy let's just do this this is a source sentence and then we also have a target sentence that maybe we've produced two words and we want to produce this third word here I want to produce this so we would feed the entire source sentence and also the target sentence we've produced so far to this network namely the source sentence would go into this part and the target that we've produced so far would go into this part and this is then all the time we would feed and this is then all combined and at the end we get an output here at the output probabilities that kind of tells us the probabilities for the next word so we can choose the top probability and then repeat the entire process so basically every step in production is one training sample every step in producing a sentence here before with the RNNs the entire sentence to sentence translation is one sample because we need to back propagate through all of these RNN steps because they all happen kind of in sequence here basically output of one single token is one sample and then the computation is finished the back prop happens through everything only for this one step so there is no multi-step kind of back propagation as an RNN and this is kind of a paradigm shift in sequence processing because people were always convinced that you kind of need these recurrent things in order to make good to learn these dependencies but here they basically say no no no we can just do attention over everything and it will actually be fine if we just do one step predictions so let's go one by one so here we have an input embedding and say an output embedding these are symmetrical so basically the tokens just get embedded with say word vectors again then there is a positional encoding this is kind of a special thing where because you now lose this kind of sequence nature of your algorithm you kind of need to encode where the words are that you push 
through the network so the network kind of goes ah this is a word at the beginning of the sentence or ah this is a word towards the end of the sentence or that it can compare two words like which one comes first which one comes second and you do this it's pretty easy for the networks if you do it with kind of these trigonometric function embeddings so if I draw you a sine wave and I draw you a sine wave that is double as fast and I draw you a sine wave that is even faster maybe this one actually sink one two three four five no it doesn't matter you know what I mean so I can encode the first word I can encode the first position with all down and then the second position is kind of down down up and the third position is kind of up down up and so on so this is kind of a continuous way of binary encoding of position so if I want to compare two words I can just look at all the scales of these things and I know aha this one word has a high here and the other word is low here so they must be pretty far away like one must be at the beginning and one must be at the end if they happen to match in this long wave and they also are both kind of low on this wave and then I can look in this way for like oh maybe they're close together but here I really get the information which one's first which one's second so these are kind of position encodings they're not the critical part of this algorithm but they just encode where the words are which of course is important and it gives the network a significant boost in performance but it's not the meat of the thing the meat of the thing is that now that these encodings go into the networks they simply do what they call attention here attention here and attention here so there's kind of three kinds of attention so basically the first attention on the bottom left is simply attention as you can see over the input sentence so I told you before you need to take this input sentence if you look over here and you somehow need to encode it into a hidden representation and this now looks much more like the picture I drew here and the picture I drew right at the beginning is that all at once you kind of put together this hidden representation and all you do is you use attention over the input sequence which basically means you kind of pick and choose which words you look at more or less so with the bottom right so in the output sentence that you've produced so far you simply encode it into kind of a hidden state and then the third on the top right that's the I think the sorry I got interrupted so as I was saying the top right is the most interesting part of the attention mechanism here where basically it unites the kind of encoder part with the kind of decoder part, or rather it combines the source sentence with the target sentence that you've produced so far so as you can see maybe here, it's slightly annoying but I'm just going to remove these kind of circles here so if you can see here there's an output going from the part that encodes the source sentence and it goes into this multi-headed attention there's two connections and there's also one connection coming from the encoded output so far here and so there's three connections going into this and we're going to take a look at what these three connections are so the three connections here basically are the keys values and queries if you see here the values and the keys are what is output
by the encoding part of the target sentence and these are not only one value key and query so there are many in this kind of multi-headed attention fashion so there are just many of them instead of one but you can think of them as just kind of sets so the attention computed here is what does it do so first of all it calculates a dot product of the keys and the queries and then it does a softmax over this and then it multiplies it by the values so what does this do if you dot product the keys and the queries what you would get is so as you know if you have two vectors and the dot product basically gives you the angle between the vectors with especially in high dimensions most vectors are going to be of kind of a 90 degree kind of oh I know the Americans do the little square so most vectors are going to be not aligned very well so their dot product will kind of be zero-ish but if a key and the query actually align with each other like if they point into the same directions their dot product will actually be large so what you can think of this as the keys are kind of here the keys are just a bunch of vectors in space and each key has an associated value so each key there is kind of a table value one value two value three this is really annoying if I do this over text right so again here so we have a bunch of keys right in space and we have a table with values and each key here corresponds to value one value two value three value four and so each key is associated with one of these values and then when we introduce a query what can it do so a query will be a vector like this and we simply compute the so this is Q this is the query we compute the dot product with each of the keys and then we compute a softmax over this which means that one key will basically be selected so in this case it will be probably this blue key here that has the biggest dot product with the query so this is key two in this case and softmax so if you don't know what a softmax is you have like x1 to xn be like some numbers then you simply do you map them to the exponential function each one of them but also each one of them you divide by the sum over i of e to the xi so basically this is a renormalization basically you do the exponential function of the numbers which of course makes the kind of the big numbers even bigger so basically what you end up with is one of these numbers x1 to xn will become very big compared to the others and then you renormalize so basically one of them will be almost one and the other ones will be almost zero it's simply the maximum function you can think of in a differentiable way so this is a renormalization and you just want to select the biggest entry in this case here we kind of select the key that aligns most with the query which in this case would be key two and then when we multiply this softmax thing with the values so this query this inner product if we multiply q with k2 as an inner product and we take the softmax over it what we'll do is I'm going to draw it upwards here we're going to induce a distribution like this and if we multiply this by the value it will basically select value two so this is kind of an indexing scheme into this memory of values and this is what then the network uses to compute further things so you see the output here goes into kind of more layers of the neural
network upwards so basically what what you can think what does this mean you can think of here's the whoopsie i want to delete this you can think of this as basically the encoder of the source sentence right here discovers interesting things that's that looks ugly it discovers interesting things about the about the the source sentence and it builds key value pairs and then the encoder of the target sentence builds the queries and together they give you kind of the next to next signal so it means that the network basically says here's a bunch of things here's a here's a bunch of things about the source sentence that you might find interesting that's the values and the keys are ways to index the values so it says here's a bunch of things that are interesting which are the values and here is how you would address these things which is the keys and then the other part of the network builds the queries it says i would like to know certain things so think of the values like attributes like here is the name and the the the kind of tallness and the weight of a person right and the keys are like the the actual indexes like name height weight and then the the other part of the network can decide what do i want i actually want the name so my query is the name it will be aligned with the key name and the corresponding value would be the name of the person you would like to describe so that's how kind of these networks work together and i think it's a it's a pretty ingenious it's not entirely new because it has been done of course before with all the differentiable turing machines and whatnot but it's pretty cool that this actually works and actually works kind of better than rnns if you simply do this so they describe a bunch of other things here i i don't think they're too important basically that the point that they make about this attention is that it reduces path lengths and kind of that's the the main reason why it should work better with this entire attention mechanism you reduce the amount of computation steps that information has to flow from one point in the network to another and that's what brings the major improvement because all the computation steps can make you lose information and you don't want that you want short path lengths and so that's that's what this method achieves and they claim that's why it's better and it works so well so they have experiments you can look at them they're really good at everything of course of course you always have state of the art and i think i will conclude here if you want to check it out yourself they have extensive code on github where you can build your own transformer networks and with that have a nice day and see ya | [
{
"start": 0,
"end": 7,
"text": " Hi there. Today we're looking at Attention is All You Need by Google. Just to declare,"
},
{
"start": 7.44,
"end": 12.56,
"text": " I don't work for Google just because we've been looking at Google papers lately. But"
},
{
"start": 12.56,
"end": 19.12,
"text": " it's just an interesting paper and we're going to see what's the deal with it. So basically"
},
{
"start": 19.12,
"end": 26.12,
"text": " what the authors are saying is we should kind of get away from basically RNNs. So traditionally"
},
{
"start": 26.12,
"end": 33.120000000000005,
"text": " what you would do, and these authors in particular are interested in NLP, Natural Language Processing."
},
{
"start": 33.120000000000005,
"end": 40.120000000000005,
"text": " So traditionally when you have a language task like the cat eats the mouse and you'd"
},
{
"start": 40.12,
"end": 59.12,
"text": " like to translate this to say any other language like let's say German or whatever. What you"
},
{
"start": 59.12,
"end": 66.12,
"text": " would do is you would try to encode this sentence into a representation and then decode it again."
},
{
"start": 66.12,
"end": 73.12,
"text": " So somehow, somehow this sentence needs to all go into say one vector and then this one"
},
{
"start": 74.32000000000001,
"end": 81.32000000000001,
"text": " vector needs to somehow be transformed into the target language. So these are traditionally"
},
{
"start": 81.92,
"end": 88.92,
"text": " called seq-to-seq tasks and they have been solved so far using recurrent neural networks."
},
{
"start": 88.92,
"end": 95.92,
"text": " You might know the LSTM networks that are very popular for these tasks. What basically"
},
{
"start": 96.92,
"end": 103.92,
"text": " happens in an RNN is that you go over the say source sentence here one by one. Here"
},
{
"start": 104,
"end": 110,
"text": " you take the word the, you kind of encode it maybe with a word vector if you know what"
},
{
"start": 110,
"end": 117,
"text": " that is. So you turn it into like a vector, a word vector and then you use a neural network"
},
{
"start": 117,
"end": 124,
"text": " to turn this vector into what we call a hidden state. So this h0 is a hidden state. You then"
},
{
"start": 129.28,
"end": 136.28,
"text": " take the second token here cat. You again take its word vector because you need to represent"
},
{
"start": 136.8,
"end": 143.8,
"text": " it with numbers somehow so use word vectors for that. You turn this into, you put it through"
},
{
"start": 143.8,
"end": 149.8,
"text": " the same function so here is what's like a little e for encoder. You turn it into the"
},
{
"start": 149.8,
"end": 155.8,
"text": " same function but this time this hidden state also gets plugged in here. So the word vector"
},
{
"start": 155.8,
"end": 162.8,
"text": " is instead, you can actually think of having like a start hidden state here, h start. Usually"
},
{
"start": 163.52,
"end": 169.24,
"text": " people either learn this or just initialize with zeros that kind of goes into the encoder"
},
{
"start": 169.24,
"end": 176.24,
"text": " function so it's always really the same function. And from the previous hidden state and the"
},
{
"start": 176.28,
"end": 183.28,
"text": " current word vector the encoder again predicts another hidden state h1 and so on. So you"
},
{
"start": 184.76000000000002,
"end": 191.76000000000002,
"text": " take the next token, you turn it into a word vector, you put it through this little e encoder"
},
{
"start": 191.88,
"end": 198.24,
"text": " function and of course this is a lot more complicated in actual like say an LSTM but"
},
{
"start": 198.24,
"end": 205.24,
"text": " it's the basic principle behind it. So you end up with h2 and here you'd have h3, h4"
},
{
"start": 207.28,
"end": 212.20000000000002,
"text": " and the last hidden state h4 here you would use this in kind of exactly the same fashion."
},
{
"start": 212.20000000000002,
"end": 219.20000000000002,
"text": " You would plug it into like a decoder, a little e decoder which would output you a word d"
},
{
"start": 219.2,
"end": 226.2,
"text": " and it would also output you a next hidden state so h5. Let's just go on with the listing"
},
{
"start": 234.44,
"end": 241.44,
"text": " of the states and this h5 would again go into the decoder which would output Katze. So that's"
},
{
"start": 241.44,
"end": 248.44,
"text": " how you would decode you basically these RNNs what they do is they kind of take, if you"
},
{
"start": 248.44,
"end": 255.44,
"text": " look on top here they take an input, a current input and they take the last hidden state"
},
{
"start": 255.48,
"end": 262.48,
"text": " and they compute a new hidden state. In the case of the decoder they take the hidden state"
},
{
"start": 262.84,
"end": 269.84,
"text": " and they take kind of the previous, usually the previous word that you output and they"
},
{
"start": 269.84,
"end": 276.84,
"text": " feed this back into the decoder and they will output the next word. It kind of makes sense."
},
{
"start": 277.32,
"end": 283.52,
"text": " So you would guess that the hidden state kind of encodes what the sentence means and the"
},
{
"start": 283.52,
"end": 290.15999999999997,
"text": " last word that you output you need this because maybe for grammar right you know what you've"
},
{
"start": 290.15999999999997,
"end": 297.15999999999997,
"text": " just output so kind of the next word should be based on that. Of course you don't have"
},
{
"start": 297.16,
"end": 304.16,
"text": " to do it exactly this way but that's kind of what these RNNs did. So attention is a"
},
{
"start": 306.16,
"end": 313.16,
"text": " mechanism here to basically increase the performance of the RNNs. So what attention would do is"
},
{
"start": 315.36,
"end": 322.36,
"text": " in this particular case if we look at the decoder here if it's trying to predict this"
},
{
"start": 322.36,
"end": 329.36,
"text": " word for cat then or the next word here, say here it wants the next word and in essence"
},
{
"start": 336.12,
"end": 343.12,
"text": " the only information it really has is what the last word was, the German word for cat,"
},
{
"start": 343.12,
"end": 350.12,
"text": " and what the hidden state is. So if we look at what word it actually should output in"
},
{
"start": 350.12,
"end": 357.12,
"text": " the input sentence it's this here, eats. And if we look at kind of the information flow"
},
{
"start": 358.56,
"end": 364.56,
"text": " that this word has to travel so first it needs to encode into a word vector it needs to go"
},
{
"start": 364.56,
"end": 369.56,
"text": " through this encoder that's the same function for all the words so now we have to look at"
},
{
"start": 369.56,
"end": 374.56,
"text": " this encoder that's the same function for all the words so nothing specific can be learned"
},
{
"start": 374.56,
"end": 379.72,
"text": " to the word eats here right. It needs to go through this hidden state, traverse again"
},
{
"start": 379.72,
"end": 384.8,
"text": " into another step, this hidden state because we have two more tokens and then the next"
},
{
"start": 384.8,
"end": 391.52,
"text": " hidden state and then it goes all the way to the decoder where the first two words are"
},
{
"start": 391.52,
"end": 398.52,
"text": " decoded and still so this H6, this hidden state somehow still needs to retain the information"
},
{
"start": 398.52,
"end": 405.52,
"text": " that now the word eats somehow is kind of the word to be translated and that the decoder"
},
{
"start": 408.84,
"end": 415.84,
"text": " should find the German word for that. So that's of course a very long path, there's a lot"
},
{
"start": 418.24,
"end": 424.12,
"text": " of transformations involved over all these hidden states and the hidden states not only"
},
{
"start": 424.12,
"end": 429.2,
"text": " do they need to remember this particular word but all of the words and the order and so"
},
{
"start": 429.2,
"end": 435.72,
"text": " on and the grammar, ok the grammar you can actually learn with the decoders themselves"
},
{
"start": 435.72,
"end": 442.32,
"text": " but kind of the meaning and the structure of the sentence so it's very hard for an RNN"
},
{
"start": 442.32,
"end": 449.32,
"text": " to learn all of this what we call long range dependencies and so naturally you actually"
},
{
"start": 449.32,
"end": 454.56,
"text": " think well why can't we just decode the first word to the first word, second word to the"
},
{
"start": 454.56,
"end": 460.28,
"text": " second word it actually works pretty well in this example right like the de cat cuts"
},
{
"start": 460.28,
"end": 465.68,
"text": " it eats we could just decode one by one of course that's not how translation works in"
},
{
"start": 465.68,
"end": 471.65999999999997,
"text": " translations the sentences can become rearranged in the target language like one word can become"
},
{
"start": 471.65999999999997,
"end": 478.65999999999997,
"text": " many words or it could even be an entirely different expression. So attention is a mechanism"
},
{
"start": 478.66,
"end": 484.70000000000005,
"text": " by which this decoder here in this step that we're looking at can actually decide to go"
},
{
"start": 484.70000000000005,
"end": 491.70000000000005,
"text": " back and look at particular parts of the input especially what it would do in like popular"
},
{
"start": 491.70000000000005,
"end": 501.70000000000005,
"text": " attention mechanisms is that this decoder here can decide to attend to the hidden states"
},
{
"start": 502.02000000000004,
"end": 507.78000000000003,
"text": " of the input sentence. What that means is in this particular case we would like to teach"
},
{
"start": 507.78,
"end": 514.78,
"text": " the decoder somehow that aha look here I need to pay close attention to this step here because"
},
{
"start": 516.3399999999999,
"end": 523.06,
"text": " that was the step when the word eats was just encoded so it probably has a lot of information"
},
{
"start": 523.06,
"end": 533.06,
"text": " about what I would like to do right now namely translate this word eats. So this mechanism"
},
{
"start": 533.06,
"end": 539.06,
"text": " allows if you look at the information flow it simply goes through this word vector goes"
},
{
"start": 539.06,
"end": 544.4599999999999,
"text": " through one encoding step and then is at the hidden state and then the decoder can look"
},
{
"start": 544.4599999999999,
"end": 550.9,
"text": " directly at that so the path length of information is much shorter than going through all of"
},
{
"start": 550.9,
"end": 557.9,
"text": " the hidden states in a traditional way. So that's where attention helps and the way that"
},
{
"start": 557.9,
"end": 563.9,
"text": " the decoder decides what to look at is like a kind of an addressing scheme you may know"
},
{
"start": 563.9,
"end": 574.9,
"text": " it from neural Turing machines or kind of other kind of neural algorithms things so"
},
{
"start": 574.98,
"end": 581.98,
"text": " what the decoder would do is in each step it would output a bunch of keys. Sorry about"
},
{
"start": 581.98,
"end": 591.98,
"text": " that. That's my hand being drippy. So what it would output is a bunch of keys so k1 through"
},
{
"start": 591.98,
"end": 606.98,
"text": " kn and what these keys would do is they would index these hidden kind of hidden states via"
},
{
"start": 606.98,
"end": 613.98,
"text": " a kind of a softmax architecture and we're gonna look at this I think in the actual paper"
},
{
"start": 614.9,
"end": 619.98,
"text": " we're discussing because it's gonna become more clear but just kind of notice that the"
},
{
"start": 619.98,
"end": 626.86,
"text": " decoder here can decide to attend to the input sentence and kind of draw information directly"
},
{
"start": 626.86,
"end": 633.86,
"text": " from there instead of having to go just to the hidden state it's provided with. So if"
},
{
"start": 633.86,
"end": 640.86,
"text": " we go to the paper here what do these authors propose and the thing is they ditch the RNNs"
},
{
"start": 641.22,
"end": 645.86,
"text": " they basically say attention is all you need you don't need the entire recurrent things"
},
{
"start": 645.86,
"end": 651.7,
"text": " basically in every step of this decode of this basically of the decoding so you want"
},
{
"start": 651.7,
"end": 658.7,
"text": " to produce the target sentence so in this step in this step in this step you can basically"
},
{
"start": 658.7,
"end": 665.7,
"text": " you don't need the recurrence you can just kind of do attention over everything and it"
},
{
"start": 666.9000000000001,
"end": 673.9000000000001,
"text": " will be fine namely what they do is they propose this transformer architecture so what does"
},
{
"start": 675.1400000000001,
"end": 682.1400000000001,
"text": " it do it has two parts what's called an encoder and a decoder but don't kind of be confused"
},
{
"start": 682.14,
"end": 689.14,
"text": " because this all happens at once so this is not an RNN it all happens at once every all"
},
{
"start": 689.14,
"end": 696.14,
"text": " the source sentence so if we again have the cat oops that doesn't work as easy let's"
},
{
"start": 697.58,
"end": 704.58,
"text": " just do this this is a source sentence and then we also have a target sentence that maybe"
},
{
"start": 704.58,
"end": 711.58,
"text": " we've produced two words and we want to produce this third word here I want to produce this"
},
{
"start": 712.1,
"end": 719.1,
"text": " so we would feed the entire source sentence and also the target sentence we've produced"
},
{
"start": 719.1800000000001,
"end": 726.1800000000001,
"text": " so far to this network namely the source sentence would go into this part and the target that"
},
{
"start": 726.1800000000001,
"end": 733.1800000000001,
"text": " we've produced so far would go into this part and this is then all the time we would feed"
},
{
"start": 733.18,
"end": 740.18,
"text": " and this is then all combined and at the end we get an output here at the output probabilities"
},
{
"start": 742.5,
"end": 749.5,
"text": " that kind of tells us the probabilities for the next word so we can choose the top probability"
},
{
"start": 749.9799999999999,
"end": 756.9799999999999,
"text": " and then repeat the entire process so basically every step in production is one training sample"
},
{
"start": 757.8199999999999,
"end": 762.62,
"text": " every step in producing a sentence here before with the RNNs the entire sentence to sentence"
},
{
"start": 762.62,
"end": 767.66,
"text": " translation is one sample because we need to back propagate through all of these RNN"
},
{
"start": 767.66,
"end": 774.66,
"text": " steps because they all happen kind of in sequence here basically output of one single token"
},
{
"start": 775.78,
"end": 781.38,
"text": " is one sample and then the computation is finished the back prop happens through everything"
},
{
"start": 781.38,
"end": 788.38,
"text": " only for this one step so there is no multi-step kind of back propagation as an RNN and this"
},
{
"start": 788.38,
"end": 795.38,
"text": " is kind of a paradigm shift in sequence processing because people were always convinced that"
},
{
"start": 796.88,
"end": 803.88,
"text": " you kind of need these recurrent things in order to make good to learn these dependencies"
},
{
"start": 804.2,
"end": 809.72,
"text": " but here they basically say no no no we can just do attention over everything and it will"
},
{
"start": 809.72,
"end": 816.72,
"text": " actually be fine if we just do one step predictions so let's go one by one so here we have an"
},
{
"start": 816.72,
"end": 823.72,
"text": " input embedding and say an output embedding these are symmetrical so basically the tokens"
},
{
"start": 823.72,
"end": 828.72,
"text": " just get embedded with say word vectors again then there is a positional encoding this is"
},
{
"start": 828.72,
"end": 835.72,
"text": " kind of a special thing where because you now lose this kind of sequence nature of your"
},
{
"start": 835.88,
"end": 840.88,
"text": " algorithm you kind of need to encode where the words are that you push through the network"
},
{
"start": 840.88,
"end": 844.88,
"text": " so the network kind of goes ah this is a word at the beginning of the sentence or ah this"
},
{
"start": 844.88,
"end": 850.04,
"text": " is a word towards the end of the sentence or that it can compare two words like which"
},
{
"start": 850.04,
"end": 856.54,
"text": " one comes first which one comes second and you do this it's pretty easy for the networks"
},
{
"start": 856.54,
"end": 862.72,
"text": " if you do it with kind of these trigonometric function embeddings so if I draw you a"
},
{
"start": 862.72,
"end": 869.72,
"text": " sine wave and I draw you a sine wave of that is double as fast and I draw you a sine wave"
},
{
"start": 869.72,
"end": 876.72,
"text": " that is even faster maybe this one actually sink one two three four five no it doesn't"
},
{
"start": 876.72,
"end": 883.72,
"text": " matter you know what I mean so I can encode the first word I can encode the first position"
},
{
"start": 883.96,
"end": 890.96,
"text": " with all down and then the second position is kind of down down up and the third position"
},
{
"start": 890.96,
"end": 897.96,
"text": " is kind of up down up and so on so this is kind of a continuous way of binary encoding"
},
{
"start": 898.36,
"end": 905.36,
"text": " of position so if I want to compare two words I can just look at all the scales of these"
},
{
"start": 904.72,
"end": 909.72,
"text": " things and I know aha this word one word has a high here and the other word is low here"
},
{
"start": 909.72,
"end": 914.72,
"text": " so they must be pretty far away like one must be at the beginning and one must be at the"
},
{
"start": 914.72,
"end": 921.72,
"text": " end if they happen to match in this long wave and they also are both kind of low on this"
},
{
"start": 924.08,
"end": 930.08,
"text": " wave and then I can look in this way for like oh maybe they're close together but here I"
},
{
"start": 930.08,
"end": 935.08,
"text": " really get the information which one's first which one's second so these are kind of position"
},
{
"start": 935.08,
"end": 942.08,
"text": " encodings they're not critical to this algorithm but they are critical to the algorithm and"
},
{
"start": 942.08,
"end": 949.08,
"text": " algorithm but they just encode where the words are which of course that is important and"
},
{
"start": 949.72,
"end": 956.72,
"text": " it gives the network a significant boost in performance but it's not like it's not the"
},
{
"start": 957.2,
"end": 963.88,
"text": " meat of the thing the meat of the thing is that now that these encodings go into the"
},
{
"start": 963.88,
"end": 970.88,
"text": " networks they simply do what they call attention here attention here and attention here so"
},
{
"start": 973.32,
"end": 979.04,
"text": " there's kind of three kinds of attention so basically the first attention on the bottom"
},
{
"start": 979.04,
"end": 986.04,
"text": " left is simply attention as you can see over the input sentence so I told you before you"
},
{
"start": 986.74,
"end": 991.64,
"text": " need to take this input sentence if you look over here and you somehow need to encode it"
},
{
"start": 991.64,
"end": 998.64,
"text": " into a hidden representation and this now looks much more like the picture I drew here"
},
{
"start": 1000.04,
"end": 1005.4399999999999,
"text": " and the picture I drew right at the beginning is that all at once you kind of put together"
},
{
"start": 1005.4399999999999,
"end": 1010.6,
"text": " this hidden representation and all you do is you use attention over the input sequence"
},
{
"start": 1010.6,
"end": 1016.88,
"text": " which basically means you kind of pick and choose which words you look at more or less"
},
{
"start": 1016.88,
"end": 1021.16,
"text": " so with the bottom right so in the output sentence that you've produced so far you simply"
},
{
"start": 1021.16,
"end": 1028.1599999999999,
"text": " encode it into kind of a hidden state and then the third on the top right that's the"
},
{
"start": 1028.24,
"end": 1035.24,
"text": " I think the sorry I got interrupted so as I was saying the top right is the most interesting"
},
{
"start": 1036.04,
"end": 1043.04,
"text": " part of the attention mechanism here where basically it unites the kind of encoder part"
},
{
"start": 1043.6,
"end": 1050.6,
"text": " with the kind of de let's not it combines the source sentence with the target sentence"
},
{
"start": 1050.6,
"end": 1057.6,
"text": " that you've produced so far so as you can see maybe here I can just slightly annoying"
},
{
"start": 1063,
"end": 1070,
"text": " but I'm just going to remove these kind of circles here so if you can see here there's"
},
{
"start": 1071.12,
"end": 1078.12,
"text": " an output going from the part that encodes the source sentence and it goes into this"
},
{
"start": 1078.12,
"end": 1085.12,
"text": " multi-headed attention there's two connections and there's also one connection coming from"
},
{
"start": 1085.4799999999998,
"end": 1092.4799999999998,
"text": " the encoded output so far here and so there's three connections going into this and we're"
},
{
"start": 1096.7199999999998,
"end": 1103.7199999999998,
"text": " going to take a look at what these three connections are so the three connections here basically"
},
{
"start": 1103.72,
"end": 1110.72,
"text": " are the keys values and queries if you see here the values and the keys are what is output"
},
{
"start": 1116.04,
"end": 1122.56,
"text": " by the encoding part of the source sentence and the query is output by the encoding part"
},
{
"start": 1122.56,
"end": 1129.56,
"text": " of the target sentence and these are not only one value key and query so there are many"
},
{
"start": 1129.56,
"end": 1135.48,
"text": " in this kind of multi-headed attention fashion so there are just many of them instead of"
},
{
"start": 1135.48,
"end": 1142.48,
"text": " one but you can think of them as just kind of sets so the attention computed here is"
},
{
"start": 1143.56,
"end": 1150.56,
"text": " what does it do so first of all it calculates a dot product of the keys and the queries"
},
{
"start": 1152.36,
"end": 1157.6,
"text": " and then it does a soft max over this and then it multiplies it by the values so what"
},
{
"start": 1157.6,
"end": 1164.6,
"text": " does this do if you dot product the keys and the queries what you would get is so as you"
},
{
"start": 1166.76,
"end": 1173.76,
"text": " know if you have two vectors and the dot product basically gives you the angle between the"
},
{
"start": 1174.28,
"end": 1181.28,
"text": " vectors with especially in high dimensions most vectors are going to be of kind of a"
},
{
"start": 1181.28,
"end": 1188.28,
"text": " 90 degree kind of oh I know the Americans do the little square so most vectors are going"
},
{
"start": 1190.8,
"end": 1197.08,
"text": " to be not aligned very well so their dot product will kind of be zero-ish but if a key and"
},
{
"start": 1197.08,
"end": 1204.08,
"text": " the query actually align with each other like if they point into the same directions their"
},
{
"start": 1204.08,
"end": 1211.08,
"text": " dot product will actually be large so what you can think of this as the keys are kind"
},
{
"start": 1211.1599999999999,
"end": 1218.1599999999999,
"text": " of here the keys are just a bunch of vectors in space and each key has an associated value"
},
{
"start": 1220.9199999999998,
"end": 1227.9199999999998,
"text": " so each key there is kind of a table value one value two value three this is really annoying"
},
{
"start": 1227.92,
"end": 1234.92,
"text": " if I do this over text right so again here so we have a bunch of keys right in space"
},
{
"start": 1236.96,
"end": 1242.96,
"text": " and we have a table with values and each key here corresponds to value value one value"
},
{
"start": 1242.96,
"end": 1249.96,
"text": " two value three value four and so each key is associated with one of these values and"
},
{
"start": 1249.96,
"end": 1256.96,
"text": " then when we introduce a query what can it do so query will be a vector like this and"
},
{
"start": 1257.96,
"end": 1262.96,
"text": " we simply compute the so this is Q this is the query we compute the dot product with"
},
{
"start": 1262.96,
"end": 1269.96,
"text": " each of the keys and then we compute a softmax over this which means that one key will basically"
},
{
"start": 1269.96,
"end": 1276.96,
"text": " be selected so in this case it will be probably this blue key here that has the biggest dot"
},
{
"start": 1276.48,
"end": 1283.48,
"text": " product with the query so this is key two in this case and softmax so if you don't know"
},
{
"start": 1285.6000000000001,
"end": 1292.6000000000001,
"text": " what a softmax is you have you have like x1 to xn be like some numbers then you simply do"
},
{
"start": 1292.6,
"end": 1299.6,
"text": " you map them to the exponential function each one of them and but also each one of them"
},
{
"start": 1301.36,
"end": 1308.36,
"text": " you divide by the sum of over i of e to the xi so basically this is a renormalization"
},
{
"start": 1309.08,
"end": 1314.32,
"text": " basically you do the exponential function of the numbers which of course this makes"
},
{
"start": 1314.32,
"end": 1315.48,
"text": " the kind of the"
},
{
"start": 1315.48,
"end": 1322.48,
"text": " big numbers even bigger so basically what you end up with is one of these numbers x1"
},
{
"start": 1322.8,
"end": 1329.8,
"text": " to xn will become very big compared to the others and then you renormalize so basically"
},
{
"start": 1329.8,
"end": 1334.8,
"text": " one of them will be almost one and the other ones will be almost zero simply the maximum"
},
{
"start": 1334.8,
"end": 1341.8,
"text": " function you can think of in a differentiable way so this is a renormalization so basically"
},
{
"start": 1341.8,
"end": 1347.24,
"text": " maximum function you can think of in a differentiable way and you just want to select the biggest"
},
{
"start": 1347.24,
"end": 1352.9199999999998,
"text": " entry in this case here we kind of select the key that aligns most with the query which"
},
{
"start": 1352.9199999999998,
"end": 1358.56,
"text": " in this case would be key two and then we when we multiply this softmax thing with the"
},
{
"start": 1358.56,
"end": 1365.56,
"text": " values so this query this inner product if we multiply q with k2 as an inner product"
},
{
"start": 1365.56,
"end": 1372.56,
"text": " and we take the softmax over it what we'll do is i'm going to draw it upwards here we're"
},
{
"start": 1374.6,
"end": 1381.6,
"text": " going to induce a distribution like this and if we multiply this by the value it will basically"
},
{
"start": 1381.6,
"end": 1388.6,
"text": " select value two so this is this is kind of an indexing scheme into this matrix and we"
},
{
"start": 1388.6,
"end": 1395.6,
"text": " select value two so this is this is kind of an indexing scheme into this memory of values"
},
{
"start": 1397.36,
"end": 1404.36,
"text": " and this is what then the network uses to compute further things using so you see the"
},
{
"start": 1405.1599999999999,
"end": 1411.9199999999998,
"text": " output here goes into kind of more layers of the neural network upwards so basically"
},
{
"start": 1411.92,
"end": 1418.92,
"text": " what what you can think what does this mean you can think of here's the whoopsie i want"
},
{
"start": 1419.6000000000001,
"end": 1426.6000000000001,
"text": " to delete this you can think of this as basically the encoder of the source sentence right here"
},
{
"start": 1432.76,
"end": 1439.16,
"text": " discovers interesting things that's that looks ugly it discovers interesting things about"
},
{
"start": 1439.16,
"end": 1446.16,
"text": " the about the the source sentence and it builds key value pairs and then the encoder of the"
},
{
"start": 1447.96,
"end": 1454.96,
"text": " target sentence builds the queries and together they give you kind of the next to next signal"
},
{
"start": 1456.28,
"end": 1462.3200000000002,
"text": " so it means that the network basically says here's a bunch of things here's a here's a"
},
{
"start": 1462.32,
"end": 1469.32,
"text": " bunch of things about the source sentence that you might find interesting that's the"
},
{
"start": 1469.48,
"end": 1476.48,
"text": " values and the keys are ways to index the values so it says here's a bunch of things"
},
{
"start": 1479.24,
"end": 1484.4399999999998,
"text": " that are interesting which are the values and here is how you would address these things"
},
{
"start": 1484.4399999999998,
"end": 1491.28,
"text": " which is the keys and then the other part of the network builds the queries it says"
},
{
"start": 1491.28,
"end": 1498.28,
"text": " i would like to know certain things so think of the values like attributes like here is"
},
{
"start": 1498.8799999999999,
"end": 1505.8799999999999,
"text": " the name and the the the kind of tallness and the weight of a person right and the keys"
},
{
"start": 1505.92,
"end": 1512.92,
"text": " are like the the actual indexes like name height weight and then the the other part of the"
},
{
"start": 1513.28,
"end": 1518.6399999999999,
"text": " network can decide what do i want i actually want the name so my query is the name it will"
},
{
"start": 1518.64,
"end": 1524.3600000000001,
"text": " be aligned with the key name and the corresponding value would be the name of the person you"
},
{
"start": 1524.3600000000001,
"end": 1529.68,
"text": " would like to describe so that's how kind of these networks work together and i think"
},
{
"start": 1529.68,
"end": 1535.2800000000002,
"text": " it's a it's a pretty ingenious it's not entirely new because it has been done of course before"
},
{
"start": 1535.2800000000002,
"end": 1540.72,
"text": " with all the differentiable turing machines and whatnot but it's pretty cool that this"
},
{
"start": 1540.72,
"end": 1547.72,
"text": " actually works and actually works kind of better than rnns if you simply do this so"
},
{
"start": 1549.96,
"end": 1556.96,
"text": " they describe a bunch of other things here i i don't think they're too important basically"
},
{
"start": 1557.16,
"end": 1562.68,
"text": " that the point that they make about this attention is that it reduces path lengths and kind of"
},
{
"start": 1562.68,
"end": 1569.68,
"text": " that's the the main reason why it should work better with this entire attention mechanism"
},
{
"start": 1570.88,
"end": 1576.52,
"text": " you reduce the amount of computation steps that information has to flow from one point"
},
{
"start": 1576.52,
"end": 1582.44,
"text": " in the network to another and that's what brings the major improvement because all the"
},
{
"start": 1582.44,
"end": 1588.4,
"text": " computation steps can make you lose information and you don't want that you want short path"
},
{
"start": 1588.4,
"end": 1595.4,
"text": " lengths and so that's that's what this method achieves and they claim that's why it's better"
},
{
"start": 1595.92,
"end": 1602.2800000000002,
"text": " and it works so well so they have experiments you can look at them they're really good at"
},
{
"start": 1602.2800000000002,
"end": 1609.2800000000002,
"text": " everything of course of course you always have state of the art and i think i will conclude"
},
{
"start": 1609.28,
"end": 1616.28,
"text": " here if you want to check it out yourself they have extensive code on github where you"
},
{
"start": 1616.28,
"end": 1639.28,
"text": " can build your own transformer networks and with that have a nice day and see ya"
}
] |
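As a concrete companion to the positional-encoding discussion in the transcript above, here is a small numpy sketch of the sine and cosine position signals; the sizes are arbitrary and the snippet is only meant to illustrate the slow-and-fast-waves intuition, not to reproduce the paper's implementation detail for detail:

import numpy as np

def positional_encoding(num_positions, d_model):
    """Sine/cosine position signals at geometrically spaced frequencies, so each
    position gets its own pattern of slow and fast waves."""
    pos = np.arange(num_positions)[:, None]              # (positions, 1)
    i = np.arange(d_model // 2)[None, :]                  # (1, d_model/2)
    angles = pos / (10000 ** (2 * i / d_model))           # (positions, d_model/2)
    pe = np.zeros((num_positions, d_model))
    pe[:, 0::2] = np.sin(angles)                          # even dimensions
    pe[:, 1::2] = np.cos(angles)                          # odd dimensions
    return pe

pe = positional_encoding(num_positions=10, d_model=16)
# Nearby positions agree on the slow waves; distant positions differ there,
# which is how order and distance can be read off by the network.
print(np.round(pe[0, :4], 3), np.round(pe[9, :4], 3))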
-YiMVR3HEuY | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Reinforcement Learning with Unsupervised Auxiliary Tasks | [
"Science & Technology"
] | [
"machine learning",
"artificial intelligence",
"ai",
"deep learning",
"unsupervised learning",
"research",
"academia",
"paper",
"review",
"agents",
"tasks"
] | https://arxiv.org/abs/1611.05397
Abstract:
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\% expert human performance, and a challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks leading to a mean speedup in learning of 10× and averaging 87\% expert human performance on Labyrinth.
Authors:
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu | Hi there, today we're looking at reinforcement learning with unsupervised auxiliary tasks by Google. So in this paper the authors consider a reinforcement learning task and I can show you what it looks like. It looks like this kind of a maze or this is an example that they give where you have to navigate the maze, it's 3D and you have to navigate from pixel inputs, you have to collect apples and reach the goal and this gives you rewards. So on the left you can see what the agent is actually seeing, on the right you can see it from a top down view. The problem is of course that the input is very, or the reward is very sparse, meaning that you have to navigate a lot of maze before you even get a single point. So reinforcement learning has a big trouble with this because it relies on constant reward to notice what actions are good and what actions are bad. So what the authors propose is in addition to the regular loss that you would have, so your reward which is this thing, you would also have an additional set of auxiliary tasks and here C goes over the auxiliary control tasks that you specify. Each of those has a reward and you're also trying to maximize these each with some kind of a weight here. And the thing is that the parameters that you maximize over control all of the different tasks so they are partly shared between the tasks. So what you're hoping is that by kind of learning to do one thing you also learn to do another thing. So the difference between this and let's say, you might have, so we've seen kind of work of it like this before where you do it more like an autoencoder setting. So for example you can't, the agent sees the input on the left here and it kind of tries to predict what the next input will be, what the next frame will be. The thought behind this is if you can accurately predict what the next frame will be maybe learn something useful about the environment. In this work it's different because now we couple a reward to these tasks and I can show you here what the authors propose as additional rewards. Sorry, they're further on top. Let me go there. Basically they consider here these two auxiliary control tasks. So pixel changes which means that the agent actually tries to actively change pixels. So it gets a reward for changing the pixels in the input. So it tries to maximize this. It needs to learn what do I need to do to maximize my pixel changes and probably that will be moving around. So it will learn to kind of move around, not move against the wall because if it moves against the wall the pixels won't change. So it will kind of learn to move along the, like how a regular human agent would also move not into a wall, not like into a dead end or something such that the pixels always change. Of course it's not perfect. You can also change your pixels quite a bit by simply spinning around in a circle. But this is one auxiliary tasks that they augment the agent with. The other one is network features. So it's kind of a meta learning here. You actually reward the agent for changing its own internal activations. So the hope is that it kind of learns about something by itself. How can I activate my internal neural network units? And it gets rewarded for that. So it might want to activate a lot of them and want to learn how they're activated. 
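To make the auxiliary-reward idea concrete, here is a small numpy sketch; the cell size, the 0.05 weights and the exact form of the two pseudo-rewards are illustrative assumptions, not the paper's actual heads or hyperparameters:

import numpy as np

np.random.seed(0)
frame_prev = np.random.randint(0, 256, size=(84, 84, 3))   # previous observation
frame_now = np.random.randint(0, 256, size=(84, 84, 3))    # current observation
hidden_units = np.random.randn(256)                        # some internal activations

def pixel_change_reward(frame, prev_frame, cell=4):
    """Pseudo-reward for the pixel-change task: mean absolute pixel change,
    pooled over small cells, so actions that visibly change the input score high."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float)).mean(axis=-1)
    h, w = diff.shape
    return diff.reshape(h // cell, cell, w // cell, cell).mean(axis=(1, 3))

def feature_activation_reward(units):
    """Pseudo-reward for the network-features task: how strongly the agent's own units fire."""
    return float(np.abs(units).mean())

# The main extrinsic reward is sparse (usually zero); the auxiliary pseudo-rewards
# are dense and each is folded in with its own small weight. All heads would share
# the same trunk parameters, which is where the hoped-for transfer comes from.
r_main = 0.0
r_pc = float(pixel_change_reward(frame_now, frame_prev).mean())
r_feat = feature_activation_reward(hidden_units)
objective = r_main + 0.05 * r_pc + 0.05 * r_feat
print("pixel-change:", round(r_pc, 2), "features:", round(r_feat, 2), "combined:", round(objective, 2))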
So this kind of self-introspection, you also hope that it kind of leads to a network that does more sophisticated tasks or that by nature of trying to get most pixel changes and the most network feature activations that you also learn something useful for the actual task. So these are the two tasks they propose. In addition, they also do, and they have a drawing of this over here. They also do a lot of other things, namely on the top left, you can kind of see here we have the base agent. This is an A3C agent, meaning that it's an actor critic. So you learn a policy and you learn a value network. We might go over this in a future video. So just consider this a standard reinforcement learning agent. You feed its experience into a replay buffer. And out of the replay buffer, you do many things. So for one, you try to learn these auxiliary tasks. Note that these are shared parameters between all of these networks. That's why the auxiliary tasks actually help. But you also try to better learn your value function. They call this off policy learning because you kind of pause the base agent training for a while and then you train the value function some more, just because that helps. You also try a reward prediction from here. And the way they do it, as they explain, is kind of in a skewed sampling way. So out of all the situations you can be in, the agent will have a reward very, very few times. So what they do is they simply sample out of the replay buffer, out of all the experiences they've had so far, they sample more frequently the experiences where they have actually gotten a reward. That way the hope is, of course, the agent, if you look at the experience here where you actually get an apple, then the agent might learn a lot faster, oh, there's some kind of apple there and I move towards it to get a reward. So that's the hope that you instantly recognize high reward situations and kind of are not so interested in non-reward situations. Of course, it does introduce a bias in your sampling and you might decide for yourself if that's good or bad. But here it seems to work. So they have a lot of experiments in this task, this labyrinth task, and they, of course, as is with research, they reach state of the art, they're much better than anything else. No, I mean they don't boast this much. So it's actually fair comparisons. The criticisms, so they also evaluate on Atari games, the criticisms that I have are twofold. First of all, the choice of auxiliary tasks is, of course, completely up to the implementer, which means that I have to decide as an implementer of this algorithm what my auxiliary task will be. And here, pixel changes and network features, they seem like fairly general tasks that you could apply to a lot of these kind of problems, but it always kind of comes down to how much knowledge about the task would you like to code into the actor. And here, I mean, you can see it makes sense to get at least the pixel changes as an auxiliary task, but it's questionable how much of kind of domain knowledge this already encodes. So the fact, the choice of these are certainly something that you have to decide as a human. And I think these are good choices. So they're not too domain specific, but also they do correspond to like some kind of visual moving around game task. And the other kind of criticisms, not really criticisms, it's just a remark, is that they do a lot of things.
So their paper is about the auxiliary tasks, but they also then do these skewed sampling and the off-policy value learning and so on. And of course, you can kind of argue, yeah, this is all done in other reinforcement learning tasks. That's why it's a fair comparison. I guess it's a philosophical question. If you want to reach state of the art, of course, you have to first of all, get a better method here. This will be the auxiliary tasks. This is the new idea. And then implement all the tricks that the other people have discovered, which is good because you kind of reach the highest performance you can get. But also the problem is you make it harder to compare, you make it harder to see where the improvement is coming from. Have you simply chosen better hyperparameters for the reward predictions of things? Is there an interaction maybe between the auxiliary tasks and the skewed sampling part? All of these kind of things wash out and it's not really clear where the improvement is coming from. On the other hand, if you simply take a basic, basic, basic algorithm, like just A3C here on the top left, and you augment it with nothing but these auxiliary tasks on the bottom left, and then you see an improvement, you can be relatively sure it's due to your new idea. But of course, you won't reach any state of the art numbers because everyone that does A3C also does these tricks. No question here. I'm standing more on the side of not doing the tricks or maybe doing both. Yeah, but decide for yourself and have a nice day. | [
{
"start": 0,
"end": 6.48,
"text": " Hi there, today we're looking at reinforcement learning with unsupervised auxiliary tasks"
},
{
"start": 6.48,
"end": 9.64,
"text": " by Google."
},
{
"start": 9.64,
"end": 14.6,
"text": " So in this paper the authors consider a reinforcement learning task and I can show you what it looks"
},
{
"start": 14.6,
"end": 16.92,
"text": " like."
},
{
"start": 16.92,
"end": 22.64,
"text": " It looks like this kind of a maze or this is an example that they give where you have"
},
{
"start": 22.64,
"end": 27.64,
"text": " to navigate the maze, it's 3D and you have to navigate from pixel inputs, you have to"
},
{
"start": 27.64,
"end": 31.52,
"text": " collect apples and reach the goal and this gives you rewards."
},
{
"start": 31.52,
"end": 36,
"text": " So on the left you can see what the agent is actually seeing, on the right you can see"
},
{
"start": 36,
"end": 38.68,
"text": " it from a top down view."
},
{
"start": 38.68,
"end": 45.72,
"text": " The problem is of course that the input is very, or the reward is very sparse, meaning"
},
{
"start": 45.72,
"end": 52.78,
"text": " that you have to navigate a lot of maze before you even get a single point."
},
{
"start": 52.78,
"end": 58.96,
"text": " So reinforcement learning has a big trouble with this because it relies on constant reward"
},
{
"start": 58.96,
"end": 62.5,
"text": " to notice what actions are good and what actions are bad."
},
{
"start": 62.5,
"end": 71.2,
"text": " So what the authors propose is in addition to the regular loss that you would have, so"
},
{
"start": 71.2,
"end": 79.72,
"text": " your reward which is this thing, you would also have an additional set of auxiliary tasks"
},
{
"start": 79.72,
"end": 86.4,
"text": " and here C goes over the auxiliary control tasks that you specify."
},
{
"start": 86.4,
"end": 92.44,
"text": " Each of those has a reward and you're also trying to maximize these each with some kind"
},
{
"start": 92.44,
"end": 94.4,
"text": " of a weight here."
},
{
"start": 94.4,
"end": 99.84,
"text": " And the thing is that the parameters that you maximize over control all of the different"
},
{
"start": 99.84,
"end": 104.22,
"text": " tasks so they are partly shared between the tasks."
},
{
"start": 104.22,
"end": 109.08,
"text": " So what you're hoping is that by kind of learning to do one thing you also learn to do another"
},
{
"start": 109.08,
"end": 111.12,
"text": " thing."
},
{
"start": 111.12,
"end": 118.72,
"text": " So the difference between this and let's say, you might have, so we've seen kind of work"
},
{
"start": 118.72,
"end": 125,
"text": " of it like this before where you do it more like an autoencoder setting."
},
{
"start": 125,
"end": 130.88,
"text": " So for example you can't, the agent sees the input on the left here and it kind of tries"
},
{
"start": 130.88,
"end": 135.2,
"text": " to predict what the next input will be, what the next frame will be."
},
{
"start": 135.2,
"end": 139.32,
"text": " The thought behind this is if you can accurately predict what the next frame will be maybe"
},
{
"start": 139.32,
"end": 142.64,
"text": " learn something useful about the environment."
},
{
"start": 142.64,
"end": 150.79999999999998,
"text": " In this work it's different because now we couple a reward to these tasks and I can show"
},
{
"start": 150.79999999999998,
"end": 155.67999999999998,
"text": " you here what the authors propose as additional rewards."
},
{
"start": 155.67999999999998,
"end": 158.72,
"text": " Sorry, they're further on top."
},
{
"start": 158.72,
"end": 161.67999999999998,
"text": " Let me go there."
},
{
"start": 161.68,
"end": 167.04000000000002,
"text": " Basically they consider here these two auxiliary control tasks."
},
{
"start": 167.04000000000002,
"end": 176.72,
"text": " So pixel changes which means that the agent actually tries to actively change pixels."
},
{
"start": 176.72,
"end": 181.56,
"text": " So it gets a reward for changing the pixels in the input."
},
{
"start": 181.56,
"end": 183.8,
"text": " So it tries to maximize this."
},
{
"start": 183.8,
"end": 189.44,
"text": " It needs to learn what do I need to do to maximize my pixel changes and probably that"
},
{
"start": 189.44,
"end": 191.24,
"text": " will be moving around."
},
{
"start": 191.24,
"end": 195.64000000000001,
"text": " So it will learn to kind of move around, not move against the wall because if it moves"
},
{
"start": 195.64000000000001,
"end": 199.08,
"text": " against the wall the pixels won't change."
},
{
"start": 199.08,
"end": 208.60000000000002,
"text": " So it will kind of learn to move along the, like how a regular human agent would also"
},
{
"start": 208.60000000000002,
"end": 214.56,
"text": " move not into a wall, not like into a dead end or something such that the pixels always"
},
{
"start": 214.56,
"end": 215.56,
"text": " change."
},
{
"start": 215.56,
"end": 217.32000000000002,
"text": " Of course it's not perfect."
},
{
"start": 217.32,
"end": 223.51999999999998,
"text": " You can also change your pixels quite a bit by simply spinning around in a circle."
},
{
"start": 223.51999999999998,
"end": 227.6,
"text": " But this is one auxiliary tasks that they augment the agent with."
},
{
"start": 227.6,
"end": 229.68,
"text": " The other one is network features."
},
{
"start": 229.68,
"end": 233.12,
"text": " So it's kind of a meta learning here."
},
{
"start": 233.12,
"end": 244.76,
"text": " You actually reward the agent for changing its own internal activations."
},
{
"start": 244.76,
"end": 249.79999999999998,
"text": " So the hope is that it kind of learns about something by itself."
},
{
"start": 249.79999999999998,
"end": 256.12,
"text": " How can I activate my internal neural network units?"
},
{
"start": 256.12,
"end": 257.48,
"text": " And it gets rewarded for that."
},
{
"start": 257.48,
"end": 261.92,
"text": " So it might want to activate a lot of them and want to learn how they're activated."
},
{
"start": 261.92,
"end": 268.84,
"text": " So this kind of self-interspection, you also hope that it kind of leads to a network that"
},
{
"start": 268.84,
"end": 278.47999999999996,
"text": " does more sophisticated tasks or that by nature of trying to get most pixel changes and the"
},
{
"start": 278.47999999999996,
"end": 284.35999999999996,
"text": " most network feature activations that you also learn something useful for the actual"
},
{
"start": 284.35999999999996,
"end": 286.88,
"text": " task."
},
{
"start": 286.88,
"end": 290.32,
"text": " So these are the two tasks they propose."
},
{
"start": 290.32,
"end": 296.84,
"text": " In addition, they also do, and they have a drawing of this over here."
},
{
"start": 296.84,
"end": 303.84,
"text": " They also do a lot of other things, namely on the top left, you can kind of see here"
},
{
"start": 303.84,
"end": 307.23999999999995,
"text": " we have a database agent."
},
{
"start": 307.23999999999995,
"end": 313.2,
"text": " This is an A3C agent, meaning that it's an actor critic."
},
{
"start": 313.2,
"end": 316.23999999999995,
"text": " So you learn a policy and you learn a value network."
},
{
"start": 316.23999999999995,
"end": 318.88,
"text": " We might go over this in a future video."
},
{
"start": 318.88,
"end": 322.96,
"text": " So just consider this a standard reinforcement learning agent."
},
{
"start": 322.96,
"end": 326.59999999999997,
"text": " You feed its experience into a replay buffer."
},
{
"start": 326.6,
"end": 329.96000000000004,
"text": " And out of the replay buffer, you do many things."
},
{
"start": 329.96000000000004,
"end": 335.96000000000004,
"text": " So for one, you try to learn these auxiliary tasks."
},
{
"start": 335.96000000000004,
"end": 340.24,
"text": " Note that these are shared parameters between all of these networks."
},
{
"start": 340.24,
"end": 343.6,
"text": " That's why the auxiliary tasks actually help."
},
{
"start": 343.6,
"end": 347.28000000000003,
"text": " But you also try to better learn your value function."
},
{
"start": 347.28000000000003,
"end": 356.12,
"text": " They call this off policy learning because you kind of pause the base agent training"
},
{
"start": 356.12,
"end": 362.28000000000003,
"text": " for a while and then you train the value function some more, just because that helps."
},
{
"start": 362.28000000000003,
"end": 366.4,
"text": " You also try a reward prediction from here."
},
{
"start": 366.4,
"end": 371.48,
"text": " And the way they do it, as they explain, is kind of in a skewed sampling way."
},
{
"start": 371.48,
"end": 380.04,
"text": " So out of all the situations you can be in, the agent will have a reward very, very few"
},
{
"start": 380.04,
"end": 381.28000000000003,
"text": " times."
},
{
"start": 381.28,
"end": 386.64,
"text": " So what they do is they simply sample out of the replay buffer, out of all the experiences"
},
{
"start": 386.64,
"end": 393.76,
"text": " they've had so far, they sample more frequently the experiences where they have actually gotten"
},
{
"start": 393.76,
"end": 395.14,
"text": " a reward."
},
{
"start": 395.14,
"end": 405.91999999999996,
"text": " That way the hope is, of course, the agent, if you look at the experience here where you"
},
{
"start": 405.92,
"end": 412.32,
"text": " actually get an apple, then the agent might learn a lot faster, oh, there's some kind"
},
{
"start": 412.32,
"end": 416.68,
"text": " of apple there and I move towards it to get a reward."
},
{
"start": 416.68,
"end": 424.04,
"text": " So that's the hope that you instantly recognize high reward situations and kind of are not"
},
{
"start": 424.04,
"end": 426.44,
"text": " so interested in non-reward situations."
},
{
"start": 426.44,
"end": 432.44,
"text": " Of course, it does introduce biased near sampling and you might decide for yourself if that's"
},
{
"start": 432.44,
"end": 433.44,
"text": " good or bad."
},
{
"start": 433.44,
"end": 436.6,
"text": " But here it seems to work."
},
{
"start": 436.6,
"end": 446.04,
"text": " So they have a lot of experiments in this task, this labyrinth task, and they, of course,"
},
{
"start": 446.04,
"end": 451.08,
"text": " as is with research, they reach state of the art, they're much better than anything else."
},
{
"start": 451.08,
"end": 453.64,
"text": " No, I mean they don't boast this much."
},
{
"start": 453.64,
"end": 457.84,
"text": " So it's actually fair comparisons."
},
{
"start": 457.84,
"end": 464.47999999999996,
"text": " The criticisms, so they also evaluate on Atari games, the criticisms that I have are twofold."
},
{
"start": 464.47999999999996,
"end": 472.84,
"text": " First of all, the choice of auxiliary tasks is, of course, completely up to the implementer,"
},
{
"start": 472.84,
"end": 479.59999999999997,
"text": " which means that I have to decide as an implementer of this algorithm what my auxiliary task will"
},
{
"start": 479.59999999999997,
"end": 480.59999999999997,
"text": " be."
},
{
"start": 480.59999999999997,
"end": 485.15999999999997,
"text": " And here, pixel changes and network features, they seem like fairly general tasks that you"
},
{
"start": 485.16,
"end": 491.08000000000004,
"text": " could apply to a lot of these kind of problems, but it always kind of comes down to how much"
},
{
"start": 491.08000000000004,
"end": 497.48,
"text": " knowledge about the task would you like to code into the actor."
},
{
"start": 497.48,
"end": 504.40000000000003,
"text": " And here, I mean, you can see it makes sense to get at least the pixel changes as an auxiliary"
},
{
"start": 504.40000000000003,
"end": 511.40000000000003,
"text": " task, but it's questionable how much of kind of domain knowledge this already encodes."
},
{
"start": 511.4,
"end": 519.68,
"text": " So the fact, the choice of these are certainly something that you have to decide as a human."
},
{
"start": 519.68,
"end": 521.9599999999999,
"text": " And I think these are good choices."
},
{
"start": 521.9599999999999,
"end": 528.64,
"text": " So they're not too domain specific, but also they do correspond to like some kind of visual"
},
{
"start": 528.64,
"end": 532.68,
"text": " moving around game task."
},
{
"start": 532.68,
"end": 540.88,
"text": " And the other kind of criticisms, not really criticisms, it's just a remark, is that they"
},
{
"start": 540.88,
"end": 542.84,
"text": " do a lot of things."
},
{
"start": 542.84,
"end": 549.4,
"text": " So their paper is about the auxiliary tasks, but they also then do these skewed sampling"
},
{
"start": 549.4,
"end": 552.56,
"text": " and the off-policy value learning and so on."
},
{
"start": 552.56,
"end": 559.52,
"text": " And of course, you can kind of argue, yeah, this is all done in other reinforcement learning"
},
{
"start": 559.52,
"end": 560.52,
"text": " tasks."
},
{
"start": 560.52,
"end": 562.72,
"text": " That's why it's a fair comparison."
},
{
"start": 562.72,
"end": 566.16,
"text": " I guess it's a philosophical question."
},
{
"start": 566.16,
"end": 572.3199999999999,
"text": " If you want to reach state of the art, of course, you have to first of all, get a better"
},
{
"start": 572.3199999999999,
"end": 573.6,
"text": " method here."
},
{
"start": 573.6,
"end": 575.04,
"text": " This will be the auxiliary tasks."
},
{
"start": 575.04,
"end": 576.48,
"text": " This is the new idea."
},
{
"start": 576.48,
"end": 585.04,
"text": " And then implement all the tricks that the other people have discovered, which is good"
},
{
"start": 585.04,
"end": 588.04,
"text": " because you kind of reach the highest performance you can get."
},
{
"start": 588.04,
"end": 596.48,
"text": " But also the problem is you make it harder to compare, you make it harder to see where"
},
{
"start": 596.48,
"end": 598.1999999999999,
"text": " the improvement is coming from."
},
{
"start": 598.1999999999999,
"end": 605.76,
"text": " Have you simply chosen better hyperparameters for the reward predictions of things?"
},
{
"start": 605.76,
"end": 611.4,
"text": " Is there an interaction maybe between the auxiliary tasks and the skewed sampling part?"
},
{
"start": 611.4,
"end": 615.16,
"text": " All of these kind of things wash out and it's not really clear where the improvement is"
},
{
"start": 615.16,
"end": 616.16,
"text": " coming from."
},
{
"start": 616.16,
"end": 623,
"text": " On the other hand, if you simply take a basic, basic, basic algorithm, like just A3C here"
},
{
"start": 623,
"end": 630.9599999999999,
"text": " on the top left, and you augment it with nothing but these auxiliary tasks on the bottom left,"
},
{
"start": 630.9599999999999,
"end": 635.52,
"text": " and then you see an improvement, you can be relatively sure it's due to your new idea."
},
{
"start": 635.52,
"end": 640.48,
"text": " But of course, you won't reach any state of the art numbers because everyone that does"
},
{
"start": 640.48,
"end": 645.16,
"text": " A3C also does these tricks."
},
{
"start": 645.16,
"end": 647.12,
"text": " No question here."
},
{
"start": 647.12,
"end": 653.12,
"text": " I'm standing more on the side of not doing the tricks or maybe doing both."
},
{
"start": 653.12,
"end": 676.08,
"text": " Yeah, but decide for yourself and have a nice day."
}
] |
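The transcript above describes the objective as a standard actor-critic (A3C-style) loss plus a weighted sum of auxiliary losses, pixel control and reward prediction, computed on a shared representation. The following is a minimal, illustrative sketch of that loss structure, not the paper's code: the layer sizes, the loss weights lambda_pc and lambda_rp, and the toy input tensors are all assumptions made purely for demonstration.

```python
# Minimal sketch (assumptions, not the paper's implementation) of a shared trunk
# trained on a main actor-critic loss plus weighted auxiliary losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedTrunk(nn.Module):
    """Shared representation used by the policy/value heads and the auxiliary heads."""
    def __init__(self, obs_dim=64, hidden=128, n_actions=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy = nn.Linear(hidden, n_actions)          # main actor head
        self.value = nn.Linear(hidden, 1)                    # main critic head
        self.pixel_control = nn.Linear(hidden, n_actions)    # aux head: predicted pixel change per action
        self.reward_pred = nn.Linear(hidden, 3)              # aux head: classify upcoming reward as -, 0, +

def auxiliary_task_loss(net, obs, actions, returns, pixel_change, reward_class,
                        lambda_pc=0.05, lambda_rp=1.0):
    h = net.encoder(obs)
    logits, value = net.policy(h), net.value(h).squeeze(-1)

    # main loss: policy gradient with advantage + value regression
    logp = F.log_softmax(logits, dim=-1).gather(1, actions.unsqueeze(1)).squeeze(1)
    advantage = returns - value.detach()
    loss_main = -(logp * advantage).mean() + F.mse_loss(value, returns)

    # auxiliary pixel-control loss: regress the observed pixel change for the taken action
    pc_pred = net.pixel_control(h).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss_pc = F.mse_loss(pc_pred, pixel_change)

    # auxiliary reward-prediction loss: classify the sign of the upcoming reward
    loss_rp = F.cross_entropy(net.reward_pred(h), reward_class)

    # total: main loss plus weighted auxiliary losses over the shared parameters
    return loss_main + lambda_pc * loss_pc + lambda_rp * loss_rp

# toy usage with random stand-in data
net = SharedTrunk()
B = 8
loss = auxiliary_task_loss(
    net,
    obs=torch.randn(B, 64),
    actions=torch.randint(0, 4, (B,)),
    returns=torch.randn(B),
    pixel_change=torch.rand(B),
    reward_class=torch.randint(0, 3, (B,)),
)
loss.backward()
```

In the actual method the auxiliary targets come from a replay buffer, with rewarding frames over-sampled for the reward-prediction head, as discussed in the transcript; random tensors stand in for that data here.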
56GW1IlWgMg | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Learning model-based planning from scratch | [
"Science & Technology"
] | [
"machine learning",
"artificial intelligence",
"ai",
"deep learning",
"reinforcement learning",
"deep mind",
"research",
"academia",
"paper",
"review",
"imagination",
"planning",
"agents"
] | https://arxiv.org/abs/1707.06170
Abstract:
Conventional wisdom holds that model-based planning is a powerful approach to sequential decision-making. It is often very challenging in practice, however, because while a model can be used to evaluate a plan, it does not prescribe how to construct a plan. Here we introduce the "Imagination-based Planner", the first model-based, sequential decision-making agent that can learn to construct, evaluate, and execute plans. Before any action, it can perform a variable number of imagination steps, which involve proposing an imagined action and evaluating it with its model-based imagination. All imagined actions and outcomes are aggregated, iteratively, into a "plan context" which conditions future real and imagined actions. The agent can even decide how to imagine: testing out alternative imagined actions, chaining sequences of actions together, or building a more complex "imagination tree" by navigating flexibly among the previously imagined states using a learned policy. And our agent can learn to plan economically, jointly optimizing for external rewards and computational costs associated with using its imagination. We show that our architecture can learn to solve a challenging continuous control problem, and also learn elaborate planning strategies in a discrete maze-solving task. Our work opens a new direction toward learning the components of a model-based planning system and how to use them.
Authors:
Razvan Pascanu, Yujia Li, Oriol Vinyals, Nicolas Heess, Lars Buesing, Sebastien Racanière, David Reichert, Théophane Weber, Daan Wierstra, Peter Battaglia | Hi there, today we're taking a look at learning model-based planning from scratch by DeepMind. So as a recap, what is model-based planning? Basically a model, also called an environment model, is just kind of a black box thing, you can imagine, where you have a state of your current environment, you put it in there and you have an action that you want to take, you put it in there as well. And the environment model tells you what the new state, S' here, and possibly also the new reward for taking that action is going to be. So this, of course it's always good to have such an environment model, because you can use it to plan ahead, but the authors here question how do you plan and propose a new algorithm to learn this planning. For now, people have mostly used heuristics to plan either things like A star search, where you have a maze and you want to go here, and you kind of have a heuristic, say the distance between the two points, but there's kind of walls in between, so you try to go there but then there's a wall and you kind of explore around it. So these are kind of the techniques that have existed so far. Also we've seen stuff like Monte Carlo tree search for AlphaGo and other things like this that are not really learned. So this kind of paper pros and mechanisms to learn how to plan using such a model. So basically they devise an algorithm or a framework, you can say, where they have this, what you see here, this schematic. This schematic tells you that you have this thing called a manager. Let me just quickly bring up my comment thingy thing. You can see here there's this kind of manager and this manager can decide to imagine or act. If it acts, then it simply takes kind of the current state and all the things that happened so far and decides on an action to do in the world. And then it kind of trains on the action like classic reinforcement learning. But if it decides to imagine, it can use its model of the world, its imagination model to perform an action and see what would happen if it did that action. And it can then also append that to the memory and use it to further learn. Even though it didn't do the action, it can imagine what happens. So how can it imagine? The authors in particular propose different methods of imagining. This graph you see there are proposed methods. The first two methods basically, so here every row is a method of imagining. The first method, the one step imagining, simply means you have the current state of the world, which is the grey blob here. And what you do is you always go from the current state of the world, imagine one step ahead. So basically you select the state to imagine from, you imagine one step. And if you decide to not take an action after that, but imagine again, because maybe you're not sure yet what you want to do, so you want to imagine another action, you would again go from this initial state, so this in the horizontal direction is time, time, internal time basically. You would again go from this state, imagine another action based on it, and so on, imagine another action. Until you're satisfied, you've imagined enough so you can actually take a real world step. In contrast, the end step strategy, so these are hard coded strategies as you can see. The learned part is which action should I take? The hard coded part is where do I base this action off? 
The end step strategy also selects the first state at first, imagines one action on top of it, but then always selects that new imagined action. So you can see here it selects this one to propose this action, and then it selects that imagined action to propose yet another action. So you can see it kind of imagines one path into the future instead of many paths, just one step ahead. And then lastly, this imagination tree strategy is basically the only one that's actually kind of a learned strategy where the manager can now propose any previously imagined or real world states in order to imagine from. So you always have the current world state, which is the first node in the graph. You select it, of course, at the beginning you have no choice. You imagine an action on top of it, but then you can select any of these two nodes to imagine from and here again the first is selected and action is imagined. Then you have three nodes. You can choose any of those where you want to imagine the next step. Here in this example, the manager selects this state right here and decides to imagine another action on top of it until it is satisfied and can then actually go over to plan to actually perform an action in the real world. So if you then decide to do an action in the real world, what you can do is you can take all of the things you've imagined and use that. So you see in this pathway here, this flows back to the manager. At some point it decides, okay, I've imagined enough and we can use all of these imagined steps in order to take a real world step. And after the real world step, the entire thing starts again. So that's how it learns to plan. Really interesting of course is this imagination tree strategy where it actually learns to plan ahead. So the model is described in detail in a formal manner and then it already goes over to experiments and there's this spaceship task where you have to get the spaceship to move around stuff and around these asteroids and get a reward. So you can see different imagination projectives here in the top row. You see the red ones is the kind of executed actions, the blue ones are imagined ones and you see the tree it's constructed. So first it takes an action right here, just without imagining. Then it imagines one step but then decides to take another action. It imagines two actions but decides on a third one. So you see to the left in this picture you see the first action. Then it imagines one action and decides to take an action. Then it imagines two actions and based on these imaginations, I'm going to guess it's fairly satisfied with the one that's very close to the target and it can then take an action. So it's pretty smart in that it sees that the second imagined action is fairly close to where it wants to go and it doesn't need to imagine yet another action. That then actually hits the target. It can go over to performing the action right away because the imagination gives enough information. So these kind of things are pretty cool to look at and check out the more experiments if you want to know. Here is even more experiments in discrete mazes. They feature multiple goals. They feature the system optimizing not only for its reward but also for kind of internal costs, so having a budget for imagining and optimizing not doing too many imagination steps. On this experiment the kind of thing that bugs me here is the fact that they didn't actually use the full imagination tree algorithm but the manager only selected from what you can see here. 
So do an actual action, then SJ0 is the first imagined state and SJK is the last imagined state. So basically the manager can only choose between actually acting, then doing this one step strategy and then doing kind of this end step strategy in each step. So it kind of limits the way it can plan but I'm going to guess they did this because otherwise they couldn't have trained the model and it seems a pretty reasonable simplification to make in order to get this to work. Also check out the paper if you want to see how all of these different parts are implemented. Of course you can guess most of them are neural networks and it's pretty standard so far and check out for the additional experiments. They're pretty cool. See you next time. | [
{
"start": 0,
"end": 8.040000000000001,
"text": " Hi there, today we're taking a look at learning model-based planning from scratch by DeepMind."
},
{
"start": 8.040000000000001,
"end": 12.32,
"text": " So as a recap, what is model-based planning?"
},
{
"start": 12.32,
"end": 20.32,
"text": " Basically a model, also called an environment model, is just kind of a black box thing,"
},
{
"start": 20.32,
"end": 26.28,
"text": " you can imagine, where you have a state of your current environment, you put it in there"
},
{
"start": 26.28,
"end": 30.720000000000002,
"text": " and you have an action that you want to take, you put it in there as well."
},
{
"start": 30.720000000000002,
"end": 36.52,
"text": " And the environment model tells you what the new state, S' here, and possibly also the"
},
{
"start": 36.52,
"end": 41.36,
"text": " new reward for taking that action is going to be."
},
{
"start": 41.36,
"end": 49.120000000000005,
"text": " So this, of course it's always good to have such an environment model, because you can"
},
{
"start": 49.12,
"end": 57.04,
"text": " use it to plan ahead, but the authors here question how do you plan and propose a new"
},
{
"start": 57.04,
"end": 59.239999999999995,
"text": " algorithm to learn this planning."
},
{
"start": 59.239999999999995,
"end": 66.6,
"text": " For now, people have mostly used heuristics to plan either things like A star search,"
},
{
"start": 66.6,
"end": 72.08,
"text": " where you have a maze and you want to go here, and you kind of have a heuristic, say the"
},
{
"start": 72.08,
"end": 77.75999999999999,
"text": " distance between the two points, but there's kind of walls in between, so you try to go"
},
{
"start": 77.76,
"end": 83.52000000000001,
"text": " there but then there's a wall and you kind of explore around it."
},
{
"start": 83.52000000000001,
"end": 87.12,
"text": " So these are kind of the techniques that have existed so far."
},
{
"start": 87.12,
"end": 95.64,
"text": " Also we've seen stuff like Monte Carlo tree search for AlphaGo and other things like this"
},
{
"start": 95.64,
"end": 98.5,
"text": " that are not really learned."
},
{
"start": 98.5,
"end": 108.9,
"text": " So this kind of paper pros and mechanisms to learn how to plan using such a model."
},
{
"start": 108.9,
"end": 117.44,
"text": " So basically they devise an algorithm or a framework, you can say, where they have this,"
},
{
"start": 117.44,
"end": 120.28,
"text": " what you see here, this schematic."
},
{
"start": 120.28,
"end": 124.6,
"text": " This schematic tells you that you have this thing called a manager."
},
{
"start": 124.6,
"end": 137.35999999999999,
"text": " Let me just quickly bring up my comment thingy thing."
},
{
"start": 137.35999999999999,
"end": 143.35999999999999,
"text": " You can see here there's this kind of manager and this manager can decide to imagine or"
},
{
"start": 143.35999999999999,
"end": 147.4,
"text": " act."
},
{
"start": 147.4,
"end": 154.28,
"text": " If it acts, then it simply takes kind of the current state and all the things that happened"
},
{
"start": 154.28,
"end": 159.52,
"text": " so far and decides on an action to do in the world."
},
{
"start": 159.52,
"end": 164.36,
"text": " And then it kind of trains on the action like classic reinforcement learning."
},
{
"start": 164.36,
"end": 171.68,
"text": " But if it decides to imagine, it can use its model of the world, its imagination model"
},
{
"start": 171.68,
"end": 177,
"text": " to perform an action and see what would happen if it did that action."
},
{
"start": 177,
"end": 187.32,
"text": " And it can then also append that to the memory and use it to further learn."
},
{
"start": 187.32,
"end": 190.68,
"text": " Even though it didn't do the action, it can imagine what happens."
},
{
"start": 190.68,
"end": 192.16,
"text": " So how can it imagine?"
},
{
"start": 192.16,
"end": 201.56,
"text": " The authors in particular propose different methods of imagining."
},
{
"start": 201.56,
"end": 205.32,
"text": " This graph you see there are proposed methods."
},
{
"start": 205.32,
"end": 214,
"text": " The first two methods basically, so here every row is a method of imagining."
},
{
"start": 214,
"end": 218.72,
"text": " The first method, the one step imagining, simply means you have the current state of"
},
{
"start": 218.72,
"end": 221.79999999999998,
"text": " the world, which is the grey blob here."
},
{
"start": 221.79999999999998,
"end": 226.4,
"text": " And what you do is you always go from the current state of the world, imagine one step"
},
{
"start": 226.4,
"end": 227.76,
"text": " ahead."
},
{
"start": 227.76,
"end": 234.32,
"text": " So basically you select the state to imagine from, you imagine one step."
},
{
"start": 234.32,
"end": 241.28,
"text": " And if you decide to not take an action after that, but imagine again, because maybe you're"
},
{
"start": 241.28,
"end": 246.16,
"text": " not sure yet what you want to do, so you want to imagine another action, you would again"
},
{
"start": 246.16,
"end": 255.84,
"text": " go from this initial state, so this in the horizontal direction is time, time, internal"
},
{
"start": 255.84,
"end": 258.4,
"text": " time basically."
},
{
"start": 258.4,
"end": 263.15999999999997,
"text": " You would again go from this state, imagine another action based on it, and so on, imagine"
},
{
"start": 263.16,
"end": 265.76000000000005,
"text": " another action."
},
{
"start": 265.76000000000005,
"end": 271.84000000000003,
"text": " Until you're satisfied, you've imagined enough so you can actually take a real world step."
},
{
"start": 271.84000000000003,
"end": 282.86,
"text": " In contrast, the end step strategy, so these are hard coded strategies as you can see."
},
{
"start": 282.86,
"end": 286.08000000000004,
"text": " The learned part is which action should I take?"
},
{
"start": 286.08000000000004,
"end": 291.40000000000003,
"text": " The hard coded part is where do I base this action off?"
},
{
"start": 291.4,
"end": 297,
"text": " The end step strategy also selects the first state at first, imagines one action on top"
},
{
"start": 297,
"end": 302.15999999999997,
"text": " of it, but then always selects that new imagined action."
},
{
"start": 302.15999999999997,
"end": 308.56,
"text": " So you can see here it selects this one to propose this action, and then it selects that"
},
{
"start": 308.56,
"end": 312.71999999999997,
"text": " imagined action to propose yet another action."
},
{
"start": 312.71999999999997,
"end": 319.59999999999997,
"text": " So you can see it kind of imagines one path into the future instead of many paths, just"
},
{
"start": 319.6,
"end": 321.72,
"text": " one step ahead."
},
{
"start": 321.72,
"end": 329.48,
"text": " And then lastly, this imagination tree strategy is basically the only one that's actually"
},
{
"start": 329.48,
"end": 339.32000000000005,
"text": " kind of a learned strategy where the manager can now propose any previously imagined or"
},
{
"start": 339.32000000000005,
"end": 342.06,
"text": " real world states in order to imagine from."
},
{
"start": 342.06,
"end": 347.08000000000004,
"text": " So you always have the current world state, which is the first node in the graph."
},
{
"start": 347.08,
"end": 350.12,
"text": " You select it, of course, at the beginning you have no choice."
},
{
"start": 350.12,
"end": 355.44,
"text": " You imagine an action on top of it, but then you can select any of these two nodes to imagine"
},
{
"start": 355.44,
"end": 361.28,
"text": " from and here again the first is selected and action is imagined."
},
{
"start": 361.28,
"end": 363,
"text": " Then you have three nodes."
},
{
"start": 363,
"end": 367.76,
"text": " You can choose any of those where you want to imagine the next step."
},
{
"start": 367.76,
"end": 375.78,
"text": " Here in this example, the manager selects this state right here and decides to imagine"
},
{
"start": 375.78,
"end": 382.91999999999996,
"text": " another action on top of it until it is satisfied and can then actually go over to plan to actually"
},
{
"start": 382.91999999999996,
"end": 384.44,
"text": " perform an action in the real world."
},
{
"start": 384.44,
"end": 395.32,
"text": " So if you then decide to do an action in the real world, what you can do is you can take"
},
{
"start": 395.32,
"end": 402.03999999999996,
"text": " all of the things you've imagined and use that."
},
{
"start": 402.04,
"end": 407.20000000000005,
"text": " So you see in this pathway here, this flows back to the manager."
},
{
"start": 407.20000000000005,
"end": 412.44,
"text": " At some point it decides, okay, I've imagined enough and we can use all of these imagined"
},
{
"start": 412.44,
"end": 416.16,
"text": " steps in order to take a real world step."
},
{
"start": 416.16,
"end": 423.8,
"text": " And after the real world step, the entire thing starts again."
},
{
"start": 423.8,
"end": 426.88,
"text": " So that's how it learns to plan."
},
{
"start": 426.88,
"end": 438.32,
"text": " Really interesting of course is this imagination tree strategy where it actually learns to"
},
{
"start": 438.32,
"end": 442.78,
"text": " plan ahead."
},
{
"start": 442.78,
"end": 449.92,
"text": " So the model is described in detail in a formal manner and then it already goes over to experiments"
},
{
"start": 449.92,
"end": 462.04,
"text": " and there's this spaceship task where you have to get the spaceship to move around stuff"
},
{
"start": 462.04,
"end": 468.44,
"text": " and around these asteroids and get a reward."
},
{
"start": 468.44,
"end": 475.40000000000003,
"text": " So you can see different imagination projectives here in the top row."
},
{
"start": 475.4,
"end": 481.64,
"text": " You see the red ones is the kind of executed actions, the blue ones are imagined ones and"
},
{
"start": 481.64,
"end": 483.84,
"text": " you see the tree it's constructed."
},
{
"start": 483.84,
"end": 488.47999999999996,
"text": " So first it takes an action right here, just without imagining."
},
{
"start": 488.47999999999996,
"end": 493.15999999999997,
"text": " Then it imagines one step but then decides to take another action."
},
{
"start": 493.15999999999997,
"end": 500.46,
"text": " It imagines two actions but decides on a third one."
},
{
"start": 500.46,
"end": 506.2,
"text": " So you see to the left in this picture you see the first action."
},
{
"start": 506.2,
"end": 511.44,
"text": " Then it imagines one action and decides to take an action."
},
{
"start": 511.44,
"end": 516.12,
"text": " Then it imagines two actions and based on these imaginations, I'm going to guess it's"
},
{
"start": 516.12,
"end": 523.4399999999999,
"text": " fairly satisfied with the one that's very close to the target and it can then take an"
},
{
"start": 523.4399999999999,
"end": 524.4399999999999,
"text": " action."
},
{
"start": 524.44,
"end": 531.2800000000001,
"text": " So it's pretty smart in that it sees that the second imagined action is fairly close"
},
{
"start": 531.2800000000001,
"end": 537.32,
"text": " to where it wants to go and it doesn't need to imagine yet another action."
},
{
"start": 537.32,
"end": 539.24,
"text": " That then actually hits the target."
},
{
"start": 539.24,
"end": 546.36,
"text": " It can go over to performing the action right away because the imagination gives enough"
},
{
"start": 546.36,
"end": 549.84,
"text": " information."
},
{
"start": 549.84,
"end": 558.1600000000001,
"text": " So these kind of things are pretty cool to look at and check out the more experiments"
},
{
"start": 558.1600000000001,
"end": 559.2800000000001,
"text": " if you want to know."
},
{
"start": 559.2800000000001,
"end": 563.2800000000001,
"text": " Here is even more experiments in discrete mazes."
},
{
"start": 563.2800000000001,
"end": 565,
"text": " They feature multiple goals."
},
{
"start": 565,
"end": 573.0400000000001,
"text": " They feature the system optimizing not only for its reward but also for kind of internal"
},
{
"start": 573.04,
"end": 580.16,
"text": " costs, so having a budget for imagining and optimizing not doing too many imagination"
},
{
"start": 580.16,
"end": 582,
"text": " steps."
},
{
"start": 582,
"end": 588.3199999999999,
"text": " On this experiment the kind of thing that bugs me here is the fact that they didn't"
},
{
"start": 588.3199999999999,
"end": 596.0799999999999,
"text": " actually use the full imagination tree algorithm but the manager only selected from what you"
},
{
"start": 596.0799999999999,
"end": 597.12,
"text": " can see here."
},
{
"start": 597.12,
"end": 608.64,
"text": " So do an actual action, then SJ0 is the first imagined state and SJK is the last imagined"
},
{
"start": 608.64,
"end": 612.04,
"text": " state."
},
{
"start": 612.04,
"end": 622.88,
"text": " So basically the manager can only choose between actually acting, then doing this one step"
},
{
"start": 622.88,
"end": 628.4,
"text": " strategy and then doing kind of this end step strategy in each step."
},
{
"start": 628.4,
"end": 635.88,
"text": " So it kind of limits the way it can plan but I'm going to guess they did this because otherwise"
},
{
"start": 635.88,
"end": 641.56,
"text": " they couldn't have trained the model and it seems a pretty reasonable simplification to"
},
{
"start": 641.56,
"end": 645.28,
"text": " make in order to get this to work."
},
{
"start": 645.28,
"end": 650.56,
"text": " Also check out the paper if you want to see how all of these different parts are implemented."
},
{
"start": 650.56,
"end": 656.7199999999999,
"text": " Of course you can guess most of them are neural networks and it's pretty standard so far and"
},
{
"start": 656.7199999999999,
"end": 659.1199999999999,
"text": " check out for the additional experiments."
},
{
"start": 659.1199999999999,
"end": 660.1199999999999,
"text": " They're pretty cool."
},
{
"start": 660.12,
"end": 681.16,
"text": " See you next time."
}
] |
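The transcript above walks through the Imagination-based Planner's control flow: a manager repeatedly chooses between imagining and acting, a learned environment model rolls imagined actions forward, and the one-step, n-step and imagination-tree strategies differ only in which previously seen state the next imagined action is based on. Below is a toy sketch of that loop under stated assumptions: the manager, the action proposer and the environment model are random placeholder functions here, whereas in the paper all of them are learned networks and the tree strategy's choice of root is itself learned.

```python
# Toy sketch (stubbed components, not the paper's implementation) of the
# imagine-or-act loop described in the transcript above.
import random

def env_model(state, action):
    """Learned environment model: predicts (next_state, reward). Stubbed here."""
    return state + [action], random.random()

def manager(plan_context):
    """Decides whether to act now or imagine one more step. Stubbed: imagine 3 steps."""
    return "act" if len(plan_context) >= 3 else "imagine"

def propose_action(state, plan_context):
    """Model-free policy used inside imagination (and for the real action). Stubbed."""
    return random.choice([0, 1, 2, 3])

def select_imagination_root(strategy, real_state, imagined):
    """The three strategies for picking which state to imagine from."""
    if strategy == "one_step" or not imagined:
        return real_state                       # always branch from the current real state
    if strategy == "n_step":
        return imagined[-1][0]                  # chain from the most recently imagined state
    # "imagination_tree": any previous node (a learned choice in the paper, random here)
    return random.choice([real_state] + [s for s, _ in imagined])

def step(real_state, strategy="imagination_tree"):
    imagined = []                               # the plan context of (state, reward) pairs
    while manager(imagined) == "imagine":
        root = select_imagination_root(strategy, real_state, imagined)
        action = propose_action(root, imagined)
        imagined.append(env_model(root, action))
    # act for real, conditioning on everything that was imagined
    action = propose_action(real_state, imagined)
    return action, imagined

action, plan = step(real_state=[0])
print(action, len(plan), "imagined steps")
```

The point of the sketch is the bookkeeping: every imagined (state, reward) pair is collected into a plan context that conditions both further imagination and the eventual real action, which is what lets the agent trade off imagination steps against acting.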
agXIYMCICcc | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Imagination-Augmented Agents for Deep Reinforcement Learning | [
"Science & Technology"
] | [
"deep learning",
"reinforcement learning",
"deep mind",
"academic",
"paper",
"research"
] | Commentary of
https://arxiv.org/abs/1707.06203
Abstract
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
Authors
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, David Silver, Daan Wierstra | Hi, today we're taking a look at Imagination Augmented Agents for deep reinforcement learning. This is a paper by DeepMind and has been in the news a bit recently, so we're going to have a look at what it's all about. Basically they claim that agents who have a model of the world perform better usually than agents who don't. But of course usually we don't have a model of the world, so they make the agent learn a model of the world which you can then use to plan. Now this learning of the model can of course be imperfect because it's learned and so they provide a way to work with imperfect environment models and combine them with a model-free approach. So what do we mean by models and model-free? Basically what you can say is if you have a model of the world, you have kind of a machine, say a box, and in this box you have a state S and you feed the state to the machine and you feed an action and the model of the world will tell you what did S' the new state is going to be. So this is in the case where you exactly know how your environment works. Now in a model-free approach what you would do is you would plan basically you would have a state and you would put that through some kind of a layered neural network and out would come what action should I take right now. So in the model-based approach you're trying to try out all these actions and tell you look which one gives me kind of a desired final state. And in the model-free approach you simply use the rewards to go directly and say here's my state, what should my action be? So this paper is a combination of both. The basic architecture is here, so let's start from the very right. We have two paths divided along this line. The final policy, so which actions you're going to take and what kind of values you can expect is going to be a result of two different models that are combined. There's a model-free path which means this is what we talked about. Simply here is the state and you simply feed it through this neural network thing, blah, blah, blah, blah, blah, blah, out comes a policy or an action you should take. But then there's also this other path and this is the imagination path. Basically consists a bunch of these rollout encoders and these rollout encoders is just the agent imagining the future. So the agent doing some actions and looking at how they will perform. So as this is done, there's this imagination core thingy. What this consists of is a policy network and an environment model. This environment model is really the core of the entire thing. So this environment model you basically learn from what you've seen so far. So far you've taken certain actions here in certain states. You use this to learn the environment model that gives you from one state the next state and the next reward. So that's what you learn. Of course also using neural networks and whatnot. You use that environment model to imagine the future. So here in this imagination core, basically you put in your state, you get out some new state and some reward. You feed the new state and you imagine another action. Of course the actions aren't random. The actions you also take via this thing. And this is where it loops all back. This is now a model free policy network that works with the environment model. 
So basically in your imagination you only use, if you look at the very right here, you only use this right path. Because your imagination doesn't need to be super exact or super well planned, you can use the model free approach that we kind of know kind of works for some problems. You use this to generate your actions that you imagine. And you use an environment model in order to look how these actions will play out. And that's how you imagine one step of the future. And you simply repeat this a couple of steps. And then you have an entire what's called a rollout, which consists of these pairs of states and rewards. And what you do then is you encode this rollout via this encoder, which is in this case an LSTM or something like this I think. You encode all these states into one vector, into one embedding basically for this rollout. And this embedding describes kind of this future imagined path. Of course, what you're going to hope is that somehow this encoding captures how you will do in the future and how good this will be. So these states and rewards. Once you have a couple of these rollouts, so once you've imagined a couple of different futures, you then aggregate them in this aggregator. I think in their case, they just concatenate these rollout encodings. And then you feed this too to the big aggregator on top. So the big aggregator on top can now combine the model free path and the imagined futures. So if the big aggregator thinks that the imagination isn't correct, it can resort to the model free path, but it can also think that maybe it's correct, or it can be kind of if it's sure it's correct, it can fully trust these rollouts and perform actions according to that. All of this is of course trained end to end. There's a tiny piece we haven't looked at yet, namely how this here, this policy network on the left is learned. And this is simply learned by, and I have to pay attention that I'm doing the right thing here. So you take this big thing here, your final policy network, and you perform, you kind of learn to copy its actions simply from the input. So from this model free input over here, you take this input and you take, excuse me, and you take the output of your big policy network and you try to simply make a neural network that copies the outputs given these inputs. And that's kind of your small policy network in here that's simply model free. So the loop closes in a way that you use your learned model to then again imagine the future. But of course for imagining the future, within imagining the future, you can't have another instance of this network because it would be infinite recursion. So you can only have a model free network. All right. That's it for the model. Of course, yeah, there's a couple of tricks and how to encode these things. Basically they perform experiments and this is maybe what you've seen in the media so far of this game. And this game is a game where you have to push around the brown boxes onto the red squares using the green avatar that you have. So this game is difficult because first of all, the levels are generated randomly. So there's no way you can like hard code anything. And second of all, if you push a box, say this box here, if you were to push it to the right into the corner, you would have no way of getting it out again. That's why I have to plan ahead and avoid such mistakes because they're not fixable. So once you make the mistakes, you can't go back and that's where planning comes in so handy. 
If you imagine this future and if your model is correct or approximately correct, then you can avoid such mistakes. Of course, that's the difficulty in this game and that's where the planning helps. Note that they don't code in how the game works. So all these models get is pixel input of the game and they have to kind of imagine the pixel output they're going to get. So that's increased difficulty. So technically the method is model free in the sense that there's really no coded model of the world, just the pixels. So they have performance comparisons where if you and I find this on the right here interesting, you can see according to the unrolled depth, so how many steps into the future you imagine. You can see it kind of flattens out after only about five steps. Whereas the game usually lasts for about 50 steps, they say. So only imagining five steps is already really helpful. What I don't like here is that they compare to what they say this copy model because this here is a standard model free comparison. So it's just a model free agent and of course, or not of course, but it performs worse right here. Because it has no imagination, but it also has fewer parameters. So they're trying to compare it to something with the same amount of parameters and say, oh, we have this copy model agent here. And what the copy model agent is doing is simply, for the environment model, it's the same architecture, but for the environment model, it simply predicts the output as the input. So it simply says, oh, you do this action, the environment is going to be exactly the same as it is now. And I don't like it because basically this entire branch here becomes rather useless. And so even though you have parameters in here, they're not useful. So to say that this is a comparison with the model of the same amount of parameters, I don't know, technically true. Another thing that they do is they pre-train the environment model with a model free agent. So first they code a model free agent, then they pre-train the environment model to then use with this agent. So it's not fully learned and I can imagine they tried and it didn't work. And this is how you get it to work. So they also experiment with imperfect models. So they train the environment model only imperfectly. And as you can see here, this is kind of the output you can get. Say you have duplicates, you have kind of errors, you have twice your character here, you have like boxes within the wall or all kinds of things. And they basically show that if you try to classically plan using these models, these bad models, you get nowhere. Basically this is a Monte Carlo sampler planner using a poor model and its performance degrades significantly from when you use the good model, which is right here. And the imagination agent is not affected by kind of the bad model, except that it takes kind of longer to reach its high accuracy. All right, so there's a couple of other experiments and a couple of Pac-Man experiments where they show you can learn one model to transfer kind of to play different games in this Pac-Man world. And that just works the more if you have very sparse rewards, which you can imagine, yes, if you need to plan then that's what you get. You get the ability to earn more sparse rewards because you can kind of look ahead. All right, so I think I'll conclude here with the discussion of this paper. I quite liked it and it's a cool method, combines many things and I'll see you next time. | [
{
"start": 0,
"end": 10.8,
"text": " Hi, today we're taking a look at Imagination Augmented Agents for deep reinforcement learning."
},
{
"start": 10.8,
"end": 16.2,
"text": " This is a paper by DeepMind and has been in the news a bit recently, so we're going to"
},
{
"start": 16.2,
"end": 21.080000000000002,
"text": " have a look at what it's all about."
},
{
"start": 21.080000000000002,
"end": 28.64,
"text": " Basically they claim that agents who have a model of the world perform better usually"
},
{
"start": 28.64,
"end": 30.44,
"text": " than agents who don't."
},
{
"start": 30.44,
"end": 37.28,
"text": " But of course usually we don't have a model of the world, so they make the agent learn"
},
{
"start": 37.28,
"end": 41.68,
"text": " a model of the world which you can then use to plan."
},
{
"start": 41.68,
"end": 49.8,
"text": " Now this learning of the model can of course be imperfect because it's learned and so they"
},
{
"start": 49.8,
"end": 57.08,
"text": " provide a way to work with imperfect environment models and combine them with a model-free"
},
{
"start": 57.08,
"end": 58.68,
"text": " approach."
},
{
"start": 58.68,
"end": 62.519999999999996,
"text": " So what do we mean by models and model-free?"
},
{
"start": 62.519999999999996,
"end": 69.28,
"text": " Basically what you can say is if you have a model of the world, you have kind of a machine,"
},
{
"start": 69.28,
"end": 80.28,
"text": " say a box, and in this box you have a state S and you feed the state to the machine and"
},
{
"start": 80.28,
"end": 87.72,
"text": " you feed an action and the model of the world will tell you what did S' the new state is"
},
{
"start": 87.72,
"end": 91,
"text": " going to be."
},
{
"start": 91,
"end": 97.16,
"text": " So this is in the case where you exactly know how your environment works."
},
{
"start": 97.16,
"end": 107.36,
"text": " Now in a model-free approach what you would do is you would plan basically you would have"
},
{
"start": 107.36,
"end": 114.24,
"text": " a state and you would put that through some kind of a layered neural network and out would"
},
{
"start": 114.24,
"end": 119.76,
"text": " come what action should I take right now."
},
{
"start": 119.76,
"end": 126.36,
"text": " So in the model-based approach you're trying to try out all these actions and tell you"
},
{
"start": 126.36,
"end": 131.48,
"text": " look which one gives me kind of a desired final state."
},
{
"start": 131.48,
"end": 136.07999999999998,
"text": " And in the model-free approach you simply use the rewards to go directly and say here's"
},
{
"start": 136.08,
"end": 139.36,
"text": " my state, what should my action be?"
},
{
"start": 139.36,
"end": 145.64000000000001,
"text": " So this paper is a combination of both."
},
{
"start": 145.64000000000001,
"end": 150.48000000000002,
"text": " The basic architecture is here, so let's start from the very right."
},
{
"start": 150.48000000000002,
"end": 154.76000000000002,
"text": " We have two paths divided along this line."
},
{
"start": 154.76000000000002,
"end": 159.48000000000002,
"text": " The final policy, so which actions you're going to take and what kind of values you"
},
{
"start": 159.48,
"end": 166.84,
"text": " can expect is going to be a result of two different models that are combined."
},
{
"start": 166.84,
"end": 171.16,
"text": " There's a model-free path which means this is what we talked about."
},
{
"start": 171.16,
"end": 176.76,
"text": " Simply here is the state and you simply feed it through this neural network thing, blah,"
},
{
"start": 176.76,
"end": 183.32,
"text": " blah, blah, blah, blah, blah, out comes a policy or an action you should take."
},
{
"start": 183.32,
"end": 189.79999999999998,
"text": " But then there's also this other path and this is the imagination path."
},
{
"start": 189.79999999999998,
"end": 195.51999999999998,
"text": " Basically consists a bunch of these rollout encoders and these rollout encoders is just"
},
{
"start": 195.51999999999998,
"end": 198.5,
"text": " the agent imagining the future."
},
{
"start": 198.5,
"end": 205.64,
"text": " So the agent doing some actions and looking at how they will perform."
},
{
"start": 205.64,
"end": 213.67999999999998,
"text": " So as this is done, there's this imagination core thingy."
},
{
"start": 213.67999999999998,
"end": 219.48,
"text": " What this consists of is a policy network and an environment model."
},
{
"start": 219.48,
"end": 223.56,
"text": " This environment model is really the core of the entire thing."
},
{
"start": 223.56,
"end": 230.27999999999997,
"text": " So this environment model you basically learn from what you've seen so far."
},
{
"start": 230.27999999999997,
"end": 233.16,
"text": " So far you've taken certain actions here in certain states."
},
{
"start": 233.16,
"end": 242.32,
"text": " You use this to learn the environment model that gives you from one state the next state"
},
{
"start": 242.32,
"end": 244.56,
"text": " and the next reward."
},
{
"start": 244.56,
"end": 248.24,
"text": " So that's what you learn."
},
{
"start": 248.24,
"end": 252.64,
"text": " Of course also using neural networks and whatnot."
},
{
"start": 252.64,
"end": 260.56,
"text": " You use that environment model to imagine the future."
},
{
"start": 260.56,
"end": 268.16,
"text": " So here in this imagination core, basically you put in your state, you get out some new"
},
{
"start": 268.16,
"end": 270.08,
"text": " state and some reward."
},
{
"start": 270.08,
"end": 273.48,
"text": " You feed the new state and you imagine another action."
},
{
"start": 273.48,
"end": 275.52,
"text": " Of course the actions aren't random."
},
{
"start": 275.52,
"end": 279.8,
"text": " The actions you also take via this thing."
},
{
"start": 279.8,
"end": 281.8,
"text": " And this is where it loops all back."
},
{
"start": 281.8,
"end": 287.66,
"text": " This is now a model free policy network that works with the environment model."
},
{
"start": 287.66,
"end": 292.16,
"text": " So basically in your imagination you only use, if you look at the very right here, you"
},
{
"start": 292.16,
"end": 296.08000000000004,
"text": " only use this right path."
},
{
"start": 296.08000000000004,
"end": 301.16,
"text": " Because your imagination doesn't need to be super exact or super well planned, you can"
},
{
"start": 301.16,
"end": 307.36,
"text": " use the model free approach that we kind of know kind of works for some problems."
},
{
"start": 307.36,
"end": 312.24,
"text": " You use this to generate your actions that you imagine."
},
{
"start": 312.24,
"end": 317.72,
"text": " And you use an environment model in order to look how these actions will play out."
},
{
"start": 317.72,
"end": 322.08,
"text": " And that's how you imagine one step of the future."
},
{
"start": 322.08,
"end": 328.40000000000003,
"text": " And you simply repeat this a couple of steps."
},
{
"start": 328.40000000000003,
"end": 333.56,
"text": " And then you have an entire what's called a rollout, which consists of these pairs of"
},
{
"start": 333.56,
"end": 336.8,
"text": " states and rewards."
},
{
"start": 336.8,
"end": 342.84000000000003,
"text": " And what you do then is you encode this rollout via this encoder, which is in this case an"
},
{
"start": 342.84000000000003,
"end": 348.2,
"text": " LSTM or something like this I think."
},
{
"start": 348.2,
"end": 356.2,
"text": " You encode all these states into one vector, into one embedding basically for this rollout."
},
{
"start": 356.2,
"end": 364.28000000000003,
"text": " And this embedding describes kind of this future imagined path."
},
{
"start": 364.28,
"end": 372.08,
"text": " Of course, what you're going to hope is that somehow this encoding captures how you will"
},
{
"start": 372.08,
"end": 374.35999999999996,
"text": " do in the future and how good this will be."
},
{
"start": 374.35999999999996,
"end": 377.23999999999995,
"text": " So these states and rewards."
},
{
"start": 377.23999999999995,
"end": 381.84,
"text": " Once you have a couple of these rollouts, so once you've imagined a couple of different"
},
{
"start": 381.84,
"end": 388.28,
"text": " futures, you then aggregate them in this aggregator."
},
{
"start": 388.28,
"end": 395.32,
"text": " I think in their case, they just concatenate these rollout encodings."
},
{
"start": 395.32,
"end": 401.23999999999995,
"text": " And then you feed this too to the big aggregator on top."
},
{
"start": 401.23999999999995,
"end": 408.71999999999997,
"text": " So the big aggregator on top can now combine the model free path and the imagined futures."
},
{
"start": 408.71999999999997,
"end": 417.84,
"text": " So if the big aggregator thinks that the imagination isn't correct, it can resort to the model"
},
{
"start": 417.84,
"end": 425.11999999999995,
"text": " free path, but it can also think that maybe it's correct, or it can be kind of if it's"
},
{
"start": 425.11999999999995,
"end": 431,
"text": " sure it's correct, it can fully trust these rollouts and perform actions according to"
},
{
"start": 431,
"end": 432,
"text": " that."
},
{
"start": 432,
"end": 435.47999999999996,
"text": " All of this is of course trained end to end."
},
{
"start": 435.47999999999996,
"end": 441.08,
"text": " There's a tiny piece we haven't looked at yet, namely how this here, this policy network"
},
{
"start": 441.08,
"end": 445.28,
"text": " on the left is learned."
},
{
"start": 445.28,
"end": 451.32,
"text": " And this is simply learned by, and I have to pay attention that I'm doing the right"
},
{
"start": 451.32,
"end": 452.32,
"text": " thing here."
},
{
"start": 452.32,
"end": 460.26,
"text": " So you take this big thing here, your final policy network, and you perform, you kind"
},
{
"start": 460.26,
"end": 466.23999999999995,
"text": " of learn to copy its actions simply from the input."
},
{
"start": 466.23999999999995,
"end": 475.08,
"text": " So from this model free input over here, you take this input and you take, excuse me, and"
},
{
"start": 475.08,
"end": 485.03999999999996,
"text": " you take the output of your big policy network and you try to simply make a neural network"
},
{
"start": 485.03999999999996,
"end": 489.56,
"text": " that copies the outputs given these inputs."
},
{
"start": 489.56,
"end": 494.96,
"text": " And that's kind of your small policy network in here that's simply model free."
},
{
"start": 494.96,
"end": 507.2,
"text": " So the loop closes in a way that you use your learned model to then again imagine the future."
},
{
"start": 507.2,
"end": 512.88,
"text": " But of course for imagining the future, within imagining the future, you can't have another"
},
{
"start": 512.88,
"end": 516.52,
"text": " instance of this network because it would be infinite recursion."
},
{
"start": 516.52,
"end": 519.36,
"text": " So you can only have a model free network."
},
{
"start": 519.36,
"end": 521.62,
"text": " All right."
},
{
"start": 521.62,
"end": 525.24,
"text": " That's it for the model."
},
{
"start": 525.24,
"end": 534.52,
"text": " Of course, yeah, there's a couple of tricks and how to encode these things."
},
{
"start": 534.52,
"end": 541.66,
"text": " Basically they perform experiments and this is maybe what you've seen in the media so"
},
{
"start": 541.66,
"end": 545.14,
"text": " far of this game."
},
{
"start": 545.14,
"end": 552.4399999999999,
"text": " And this game is a game where you have to push around the brown boxes onto the red squares"
},
{
"start": 552.4399999999999,
"end": 558.36,
"text": " using the green avatar that you have."
},
{
"start": 558.36,
"end": 566.04,
"text": " So this game is difficult because first of all, the levels are generated randomly."
},
{
"start": 566.04,
"end": 570.48,
"text": " So there's no way you can like hard code anything."
},
{
"start": 570.48,
"end": 578.48,
"text": " And second of all, if you push a box, say this box here, if you were to push it to the"
},
{
"start": 578.48,
"end": 590,
"text": " right into the corner, you would have no way of getting it out again."
},
{
"start": 590,
"end": 597.26,
"text": " That's why I have to plan ahead and avoid such mistakes because they're not fixable."
},
{
"start": 597.26,
"end": 601.88,
"text": " So once you make the mistakes, you can't go back and that's where planning comes in so"
},
{
"start": 601.88,
"end": 602.88,
"text": " handy."
},
{
"start": 602.88,
"end": 608.4399999999999,
"text": " If you imagine this future and if your model is correct or approximately correct, then"
},
{
"start": 608.4399999999999,
"end": 611.12,
"text": " you can avoid such mistakes."
},
{
"start": 611.12,
"end": 621.4,
"text": " Of course, that's the difficulty in this game and that's where the planning helps."
},
{
"start": 621.4,
"end": 624.9,
"text": " Note that they don't code in how the game works."
},
{
"start": 624.9,
"end": 631,
"text": " So all these models get is pixel input of the game and they have to kind of imagine"
},
{
"start": 631,
"end": 634.34,
"text": " the pixel output they're going to get."
},
{
"start": 634.34,
"end": 637.56,
"text": " So that's increased difficulty."
},
{
"start": 637.56,
"end": 645.52,
"text": " So technically the method is model free in the sense that there's really no coded model"
},
{
"start": 645.52,
"end": 649.36,
"text": " of the world, just the pixels."
},
{
"start": 649.36,
"end": 663.0600000000001,
"text": " So they have performance comparisons where if you and I find this on the right here interesting,"
},
{
"start": 663.0600000000001,
"end": 670.8000000000001,
"text": " you can see according to the unrolled depth, so how much steps into the future you imagine."
},
{
"start": 670.8000000000001,
"end": 676.64,
"text": " You can see it kind of flattens out after only about five steps."
},
{
"start": 676.64,
"end": 682.6,
"text": " Whereas the game usually lasts for about 50 steps, they say."
},
{
"start": 682.6,
"end": 688.52,
"text": " So only imagining five steps is already really helpful."
},
{
"start": 688.52,
"end": 696.4399999999999,
"text": " What I don't like here is that they compare to what they say this copy model because this"
},
{
"start": 696.4399999999999,
"end": 699.98,
"text": " here is a standard model free comparison."
},
{
"start": 699.98,
"end": 707.48,
"text": " So it's just a model free agent and of course, or not of course, but it performs worse right"
},
{
"start": 707.48,
"end": 713.36,
"text": " here."
},
{
"start": 713.36,
"end": 715.96,
"text": " Because it has no imagination, but it also has less parameters."
},
{
"start": 715.96,
"end": 719.6,
"text": " So they're trying to compare it to something with the same amount of parameters and say,"
},
{
"start": 719.6,
"end": 722.12,
"text": " oh, we have this copy model agent here."
},
{
"start": 722.12,
"end": 732,
"text": " And what the copy model agent is doing is simply, for the environment model, it's the"
},
{
"start": 732,
"end": 737.52,
"text": " same architecture, but for the environment model, it simply predicts the output as the"
},
{
"start": 737.52,
"end": 738.84,
"text": " input."
},
{
"start": 738.84,
"end": 743.28,
"text": " So it simply says, oh, you do this action, the environment is going to be exactly the"
},
{
"start": 743.28,
"end": 745.72,
"text": " same as it is now."
},
{
"start": 745.72,
"end": 754.8000000000001,
"text": " And I don't like it because basically this entire branch here becomes rather useless."
},
{
"start": 754.8000000000001,
"end": 761.36,
"text": " And so even though you have parameters in here, they're not useful."
},
{
"start": 761.36,
"end": 768.64,
"text": " So to say that this is a comparison with the model of the same amount of parameters, I"
},
{
"start": 768.64,
"end": 771.88,
"text": " don't know, technically true."
},
{
"start": 771.88,
"end": 781.76,
"text": " Another thing that they do is they pre-train the environment model with a model free agent."
},
{
"start": 781.76,
"end": 786.96,
"text": " So first they code a model free agent, then they pre-train the environment model to then"
},
{
"start": 786.96,
"end": 789.18,
"text": " use with this agent."
},
{
"start": 789.18,
"end": 794.56,
"text": " So it's not fully learned and I can imagine they tried and it didn't work."
},
{
"start": 794.56,
"end": 799.32,
"text": " And this is how you get it to work."
},
{
"start": 799.32,
"end": 810.12,
"text": " So they also experiment with imperfect models."
},
{
"start": 810.12,
"end": 814.48,
"text": " So they train the environment model only imperfectly."
},
{
"start": 814.48,
"end": 817.0400000000001,
"text": " And as you can see here, this is kind of the output you can get."
},
{
"start": 817.0400000000001,
"end": 824.5200000000001,
"text": " Say you have duplicates, you have kind of errors, you have twice your character here,"
},
{
"start": 824.52,
"end": 831.68,
"text": " you have like boxes within the wall or all kinds of things."
},
{
"start": 831.68,
"end": 838.16,
"text": " And they basically show that if you try to classically plan using these models, these"
},
{
"start": 838.16,
"end": 841.84,
"text": " bad models, you get nowhere."
},
{
"start": 841.84,
"end": 852.72,
"text": " Basically this is a Monte Carlo sampler planner using a poor model and its performance degrades"
},
{
"start": 852.72,
"end": 857.1600000000001,
"text": " significantly from when you use the good model, which is right here."
},
{
"start": 857.1600000000001,
"end": 867.12,
"text": " And the imagination agent is not affected by kind of the bad model, except that it takes"
},
{
"start": 867.12,
"end": 873.0400000000001,
"text": " kind of longer to reach its high inaccuracy."
},
{
"start": 873.0400000000001,
"end": 880.08,
"text": " All right, so there's a couple of other experiments and a couple of Pac-Man experiments where"
},
{
"start": 880.08,
"end": 887.8000000000001,
"text": " they show you can learn one model to transfer kind of to play different games in this Pac-Man"
},
{
"start": 887.8000000000001,
"end": 888.8000000000001,
"text": " world."
},
{
"start": 888.8000000000001,
"end": 898.88,
"text": " And that just works the more if you have very sparse rewards, which you can imagine, yes,"
},
{
"start": 898.88,
"end": 903,
"text": " if you need to plan then that's what you get."
},
{
"start": 903,
"end": 907.6400000000001,
"text": " You get the ability to earn more sparse rewards because you can kind of look ahead."
},
{
"start": 907.64,
"end": 912.64,
"text": " All right, so I think I'll conclude here with the discussion of this paper."
},
{
"start": 912.64,
"end": 939.64,
"text": " I quite liked it and it's a cool method, combines many things and I'll see you next time."
}
] |