video_id | text | start_second | end_second | url | title | thumbnail |
---|---|---|---|---|---|---|
pPyOlGvWoXA | decode from P of x given i so what if we could get the i that was used to generate X then we can encode X very efficiently so we find an i that maximizes P of i given X so imagine we're back to this mixture model thing our X falls here then we might say this mode over here is the one and these are modes one two three and say okay mine is three that's the most likely mode to | 6,547 | 6,579 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6547s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | have generated this X but of course if we know how to encode x given i we still need to send i across otherwise the other person cannot decode with that scheme because they don't know what we're coding relative to so first we send i which will cost us log of one over P of i then we have to send X which will cost us log of one over P of x given i and so the expected code length shown on the right | 6,579 | 6,606 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6579s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | here is well there's an expectation over possible X's we need to send when we send an X we look at the i that minimizes what we're encoding here is both i and X so we're really coding log 1 over P of i comma X but we get to choose our i and we're picking the one that minimizes that quantity another way to write it is the second equation here same thing okay so the scheme is straightforward and we | 6,606 | 6,636 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6606s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | know how much it's going to cost us is it optimal it's not optimal because effectively we're using a different distribution Q of X which will have a cost H of X plus the KL between P and Q what do I mean with that when we use this encoding scheme imagine we have two modes this is P and you see P has mass in both of those two modes when we use the scheme above effectively | 6,636 | 6,671 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6636s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | what we're doing is we're fitting a distribution Q to our original situation and we're encoding based on Q because everything that falls on this side will use mode one and everything that falls on that side will use mode two and this is not the same as P it's different and we'll pay the price we'll pay the KL between the two in extra bits now you might say do we care do I | 6,671 | 6,701 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6671s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | care about paying this KL divergence well for the situation in this drawing here yeah you probably care it's a pretty big KL if your distribution was such that your modes are completely separated from each other then the KL between P and Q would be almost zero and you might not care let's think about what we often care about in our scenarios which is we | 6,701 | 6,721 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6701s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | might have a variational auto encoder with a latent code latent variable Z so not just the I would M be Z that Z can take on a continuum of values so there'll be a continuum of modes and if I pick only one of them instead of somehow using the continuum we're losing a lot because because it's a continuum they're all going to be very close together and so we are going to lose a | 6,721 | 6,744 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6721s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | lot by using Q instead of P in this situation so we have a scheme we can do coding but we're paying a price question is can we somehow get it done without paying that KL well let's think about it some more so we looked at max mode what if we do posterior sampling in posterior sampling we'd say well we still have the same situation as before but instead of taking the i | 6,744 | 6,775 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6744s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | that remember before it was high that maximizes P i given X here we sample might not sound smart up first and in fact when we're done with this slide you'll see that the coding scheme we're covering on this slide is worse than the one we covered on the previous slide but in the process of covering this scheme we'll build up some new concepts that allow us on the next slide to get the | 6,775 | 6,805 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6775s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | best scheme better than the previous one this one so bear with me for a moment here so we sample I from P of Y given X we stand i same cost as before using encoding based on the prior like the I why not P I give an X you might say isn't that more peak can we just send P are given X well the recipient doesn't have X so they cannot decode it against a given X they have nothing else we send | 6,805 | 6,832 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6805s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | I is the first thing we say well they have to decode a dist on the prior and you have to encode it based enough then we send backs using same encoding scheme as before this is probably efficient but not necessarily as efficient as using the best I remember imagine we have these distributions here and let's say our X landed over here let's say there's mode 1 mode 2 and we're unlucky and when | 6,832 | 6,863 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6832s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | we sample I from PRI given X we end up with our I equal to somehow well encoding X code X from P of x given I equal 2 is going to be very expensive because there's a low probability here that code is not going to be very efficient at getting X across so it makes it less efficient than what's on the previous slide in fact the difference is that here we have log 1 | 6,863 | 6,890 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6863s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | over P I comma X whereas in the previous one we had a min over I sitting in front of it ok so we lost some things here but it's all for a good reason so now what we're going to now be able to do is earn bits back which is the key concept we want to get to so it's an optimal yes and no it's yes it's optimal if we like to send I and X but we don't care about sending I we just want to | 6,890 | 6,924 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6890s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | send X i is something we made up X is the real thing X is the symbol i is just a mode in the distribution we're fitting so it's optimal for sending both but it's a waste to send i and so how much do we lose well it's about the entropy of i given X effectively because that's what we send that's wasted so what can we | 6,924 | 6,953 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6924s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | do what can we do to avoid this overhead the very interesting idea the bits back idea is that somehow we send too many bits but we can earn them back and so at a high level that's what's gonna happen we acknowledge we sent too much and we're gonna somehow earn those bits back and not have to pay for them so let's take a look at that bits back coding we start from the | 6,953 | 6,984 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6953s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | scheme on the previous slide we sample i from P of i given X the cost to send it is log of 1 over P of i then we send X cost is log 1 over P of X given i all the same as on the previous slide now the bits back idea we'll do it with exact inference first and approximate inference later what will it do the recipient decodes i and X and then knows the distribution of i given X because they have the corresponding | 6,984 | 7,014 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6984s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | model on their side so what that means is that the recipient actually can recover the random seed that you used to sample i from P of i given X they can run the process in reverse you do the sampling here you use a random seed and what is a random seed really it's a sequence of random bits that was used since the recipient knows the distribution knows i and knows X they can back out the sequence | 7,014 | 7,057 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7014s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | of random bits that caused you to sample i so they can reconstruct the random bits used to sample from P of i given X those bits effectively were also sent those are log 1 over P of i given X random bits which we now don't have to count what do I mean with that imagine you're trying to send X and you have a friend who is also trying to send random bits you can take your friend's random bits use them for this | 7,057 | 7,086 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7057s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | sampling send them across through this process and they'll be able to be decoded on the other side and those are your friends base so you don't have to pay the price for that that's their bits they happen to come out on the other side that's their cost to pay so one way to think of it all you have to pay is the X give an eye and that's it and we'll make that more concrete even if | 7,086 | 7,109 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7086s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | they're your own bits so bits back coding cost you pay a cost of log 1 over P of i to send i then a cost of log 1 over P of x given i to send x given i and then you earn back log 1 over P of i given X because those bits were just a bunch of random bits that were sitting there and got sent across but they're not yours you don't have to pay the price for them and if you do the math you actually get log 1 | 7,109 | 7,133 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7109s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | over P of X so you get to encode the X you want to send at the entropy rate for X so we've got optimal encoding great we're optimal now what does it look like you have some symbol data this is what you want to send and then some auxiliary data this is a random bit sequence the sender will do lossless compression through this scheme the receiver will get back out the symbol data and also | 7,133 | 7,169 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7133s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | get back out the auxiliary data because you get them back out on this side you don't count them against your budget for encoding assumptions we make we can compute P of i given X which can be a strong assumption being able to find that posterior distribution in your mixture model it's a distribution that you don't trivially have available and then the assumption that you have auxiliary random data | 7,169 | 7,196 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7169s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | we'd like to transmit and if we send it across we don't have to pay for it somebody else carries that cost so what if you actually did this with approximate inference in a VAE we don't find the exact posterior for Z given X we have an inference network or here Q of i given X an inference network and we sample from Q of i given X otherwise everything is the same we go through the | 7,196 | 7,222 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7196s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | whole process what happens is that what we get back is log 1 over Q of i given X and what we see here is that the cost of transmitting the data is a little higher than log 1 over P of X because effectively we have the wrong distribution here we have Q instead of P this is the evidence lower bound that we optimize with the VAE so if you use a VAE to do bits back coding by optimizing | 7,222 | 7,253 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7222s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | the loss of the VA e there directly optimizing the compression capability of this big back coding approach so perfect match between the VA objective and compression so how about that source of random bits that we also like to send where does that come from in practice it's actually your own bits so imagine you already have some bits sitting here you have some zeros ones | 7,253 | 7,279 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7253s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | you know maybe you've already done some compression of something else it's a random-looking sequence it's sitting there ready to be transmitted then the first thing you have to do and the notation here is slightly different y corresponds to our i and s corresponds to our X okay so keep that in mind that's the notation they use in this paper here from which we took the figure so in | 7,279 | 7,308 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7279s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | decoding the mode y we do it with the inference distribution of y given the symbol to encode s0 to do that we need to grab random bits to do that sampling well that means we consumed these random bits from our string that we want to send across next thing that happens is we start encoding we encode s0 remember that's our X so our symbol given the mode gets encoded so this | 7,308 | 7,343 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7308s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | grows the number of bits we want to send then we encode the mode from its prior and this grows again and so what happened here is that in the process of coding one symbol we have first consumed some bits that were on the stack of things to be sent then we've added more bits to encode s given y the symbol given the mode and added more bits to encode the mode itself well | 7,343 | 7,371 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7343s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | overall this thing will have grown typically not guaranteed but typical half grown and now we could repeat this process what we had here as the extra information it's now sitting here you can get our next symbol s1 will find what our y1 is and repeat and so we see what actually happens is we were building up and some kind of popping the stack by pushing through the stack the | 7,371 | 7,399 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7371s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | sequence of bits that it code a sequence of symbols with this mixture model or bits back coding so we really see is the bits that we're getting back or not visitor sitting off to the side necessarily they're bits that came onto our stack from encoding the previous symbol that we encoded this way and you might wonder well if we took it off here but put other things on have we lost the | 7,399 | 7,427 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7399s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | ability to get those bits back no that's the whole idea in the decoding as we saw on the previous one two slides back sorry when we when we decode we have we can reconstruct the random bits that are used to sample the mole given the symbol and so we get them back out at that time so still get everything on the other side this is not lost will be decoded and bits back | 7,427 | 7,465 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7427s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | all right so the last thing I want to cover and I'm going to hand it off to Jonathan and maybe we'll take a very short break and then hand it off to Jonathan is how do we have to get those bits back I've been telling you you're going to get these this back you're going to have sampled your mode from Q R given X and then later you're going to get them back how this work so let's say | 7,465 | 7,491 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7465s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | you have a distribution I have an X and I'm going to draw the distribution of Qi given X is going to be discrete for you I'm doing here it's gonna be discrete and so I'm going to look at the cumulative distribution so let's say I lives here and then I could be maybe one two three or four come the distribution we'll say okay maybe one has a probability of let's say 0.2 or | 7,491 | 7,529 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7491s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | something then once I hit two maybe two is a probability of zero point one so we hit level zero point three over here and three might have a probability of maybe 0.5 to go away to zero point eight and then for weight of the probability of zero point two all the way to one what does it mean to sample i given X I have this bit stream so I have a bit stream sitting there I'm going to start from | 7,529 | 7,564 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7529s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | the end here and work my way so the first thing I see is zero zero tells me so I have a zero one interval here zero tells me that I am in the zero to zero or interval but in that interval I can still be either so it has to be either one two or three I don't know yet what I'm going to be so simple the next zero at which point out in the zero to 0.25 and I still don't know what I'm going to be | 7,564 | 7,602 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7564s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | I've consumed I've consumed this era have consumed this zero now I'm going to consume this one now as I consume this one it means I'm gonna be in the 0.25 in the top half maybe here I still don't know what I'm going to be I could be a 1 or a 2 I don't know and I'm gonna have to consume this zero then next now I'm in the bottom half of this and now I actually know once I sampled those four | 7,602 | 7,645 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7602s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | bits I know i equals 1 now I can go to P of X given i equal 1 to encode my X right and I also have my prior P of i that I would use to encode i equal hold on let me clear this for a moment so I need to send X I need to send i how am I going to send i well you could say well I have a distribution here over four possible values and I could encode i by maybe building a Huffman code or something | 7,645 | 7,698 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7645s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | over those four possible values but you can do something much simpler you can say to get the point across that i equals 1 well I achieved that by this sequence of bits I consumed so I can actually just send that same bit sequence across and that signals what my i is so that way I'm also trivially getting those bits back because the person who | 7,698 | 7,726 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7698s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | receives this gets to read off the bits just like that oh here are the bits I can just read off and then I can all so use a temp to decode X all right so let's see I think that's it for me let's take maybe a two three minute break as I know Jonathan has a lot to cover and let's maybe restart around 712 713 for the last part of lecture Jonathan do you want to try to take | 7,726 | 7,775 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7726s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | control of the screen here um yeah sure okay um let's see can you hear me okay yeah my I might turn off my camera too so that my internet connections were reliable but but we'll see just let me know if it's not working well okay um I guess I can just jump in and talk about more about bits back it's possible to address a question on chat oh yeah questions on ChaCha I think it's part of lecture and | 7,775 | 7,879 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7775s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | then we'll dive in with that later so the first question is we have P of X and P is a mixture of Gaussians why can't you simply encode with P of X to begin with yeah it's a very good observation it's not exactly our assumption the assumption more precisely is that we have a mixture model and that for the individual components in the mixture model we know how to encode efficiently but | 7,879 | 7,909 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7879s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | for the mixture model as a whole we might not know how to encode and now we have a scheme to do that especially if you know how to encode each component bits back gives you a way to encode against the mixture model which likely will better fit your data distribution and as we know the closer you are to the true distribution the smaller the KL divergence the more efficient your coding will be and | 7,909 | 7,933 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7909s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | so it allows us to use the mixture model which might be a better fit which in turn would result in higher efficiency encoding another question is about whether the bits we reuse are really random that's a really really good question so one of the big things that I think Jonathan will be you know Jonathan's covering that paper so the 2019 bits back with ANS paper by Townsend et al. | 7,933 | 7,959 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7933s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | investigated exactly that assumption so we'll see more about that but the notion is if you already put bits on your bit stream from encoding the previous symbol and you work with those bits is that really as efficient the question is are those bits really random enough to achieve the efficiency that we declare here and so Jonathan will get to that | 7,959 | 7,982 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7959s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | question maybe five or six slides from now so hold that for now I think it should be clear in a few slides all right okay so I'll just talk a bit more about bits back and some more modern instantiations of bits back coding into real algorithms that we can actually download and use and also in particular how bits back coding plays with new types of deep generative models | 7,982 | 8,027 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7982s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | like VAEs and hierarchical VAEs and flows instead of say just Gaussian mixture models right so the core algorithm that all these new bits back papers are based on is this thing called asymmetric numeral systems so this is an alternative to arithmetic coding as Pieter was saying and it's especially appealing because well first of all it's very simple and | 8,027 | 8,059 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8027s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | you can implement it in a very efficient way which makes it actually practically usable and it also has some nice stack-like properties that make it compatible with bits back coding so I'll just first take some time to describe what ANS actually is so again ANS just like arithmetic coding is a way of | 8,059 | 8,084 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8059s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | taking a sequence of data and turning it into a bit stream where the bit stream's length is something like the entropy of the data times the number of symbols and so I'll just jump right in and describe how this thing works and so let's say the source that we're encoding is just two symbols a and b each occurring with probability 1/2 and so | 8,084 | 8,118 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8084s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | you might imagine that the the naive way to code stuff like this is to just assign a to the number 0 and B to the number 1 and then you just get a string of A's and B's just turns into a string of zeros and ones and that pretty much is the best that you can do but let's see how ans does this um so ans describes a bit stream not not represented it doesn't represent it | 8,118 | 8,144 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8118s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | exactly as a sequence of bits but it represents it as a natural number so there's an s stores this thing called a state s and we start at 0 and so ans defines an encoding operation so there's this encoding operation that takes in a current state and takes in the current symbol that you wish to encode so let's say you start at some state s and you want to encode the the | 8,144 | 8,178 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8144s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | symbol a in this very particular case what ANS will do is produce the number 2s 2 times s so remember the state s is a natural number and if you wish to encode b it produces the state 2s plus 1 so this is ANS for this very simple source of course ANS will generalize more but in this case this is all it does and so you can see that | 8,178 | 8,212 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8178s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | really what this is doing is its appending numbers zeros and ones on the right of a binary representation of the state s and that's how this is algorithm stores data that's how it stores a and B and a very important property of any reasonable coding algorithm like ans is that you should be able to decode the data that you encoded so given some state s you want to be able to tell what | 8,212 | 8,239 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8212s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | was the last symbol that was encoded and so that's very easy to check so if s is even then you know the last symbol was was a if it's odd then you know it's B and if you know it's then you can just divide by to take the floor and then you get the previous state so so that's how this algorithm works and you can already see just based on this very simple example that this algorithm has the | 8,239 | 8,274 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8239s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | stack like property if you encode a sequence of symbols a B V then the next thing that you decode if you wish will be the last thing that you encoded so it's sort of a first in last out type of type of stack ok can I ask a question here yeah so sorry for this simple example what is the capital P of X and the mixture of Gaussian can you explode right in terms of this example and also | 8,274 | 8,307 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8274s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | I don't see why the stack is being used here thank you yes so in this case we haven't gotten to the mixture yet where we're gonna talk about that soon this is just for this very simple source over here it's just a coin flip but we just want to store coin flips there's no latent variables or anything like that the second question was where does the stack come in it comes in the fact that | 8,307 | 8,330 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8307s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | so let's say we but let's say we encode a sequence of symbols say B a B and so that's if we follow this encoding rule then that's gonna produce a sequence of states it's gonna be like s1 s2 s3 and so s3 is the final state that we have after encoding these three symbols and then what ans lets us do is decode from that state and when we decode from that state and s will tell us the last symbol | 8,330 | 8,367 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8330s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | that was encoded and then tell us the previous state that came before that so that's why it's like a stack because if you ask ANS what was the last symbol that was encoded it's gonna be this last b not the first one hopefully this will be more clear as I get to some more examples okay right hmm it's not letting me advance | 8,367 | 8,403 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8367s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | [Music] okay so let's see how this generalizes to the setting of not just the binary source or not just the the coin flip but something more interesting so here we again have two symbols a and B but become the problem the probabilities aren't one-half anymore instead it's gonna be one-fourth for a and three-fourths for B so B is more likely so we're going to now think about how to | 8,403 | 8,435 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8403s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | generalize ans to this setting and and the way the way it's done is like this so you take all the natural numbers so here here's all the natural numbers and what we do is we partition it into two sets one set for a and one set for B and so I'll just write down what those sets are and then and then talk about why we chose those sets so we're gonna write down one set for a and this is going to | 8,435 | 8,462 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8435s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | be 0 4 8 and so on and this is a partition so the set for b is just all the other numbers so that's 1 2 3 5 6 7 and so on so just to draw it out here these numbers 0 4 and 8 correspond to a and all the other numbers correspond to b I'm saying correspond to a meaning correspond to ending in a or correspond to ending in b right um I guess I | 8,462 | 8,505 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8462s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | haven't defined what corresponds to means yet I just mean that we're defining these two sets S sub a is gonna be all the numbers divisible by 4 and S sub b is gonna be the others and so we've just defined these two sets and then I'll just describe how we encode some string so let's say we want to encode the string b a b so again ANS builds up some big natural | 8,505 | 8,538 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8505s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | number which is the state so we start at the state s equals zero and what we want to do is encode onto the state zero the symbol b so the way we do this is we look for the zeroth number in b's set so this might sound a little bit weird maybe I'll just write out the general rule when we encode a state s with say the symbol a we look at the s-th number | 8,538 | 8,579 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8538s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | in S sub a so this is s and this number here is the new state okay so let's just go through this so when we encode 0 b we look for the zeroth number in b's set so b's set is this one two three five six seven all the numbers that are not divisible by four and the zeroth number starting indexing at zero is one that's the first number so that's what we get here so that's just writing | 8,579 | 8,621 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8579s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | it down in this table here okay now the next character we want to encode is a so we want to encode onto the new state one the symbol a so we look for number one in a's set so a's set is 0 4 8 and so on so number 1 is 4 so that goes here and then finally the new state is 4 and then we want to encode b again and so that's 6 so what this says is that ANS | 8,621 | 8,655 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8621s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | has turned the string b a b into this number 6 and this number six stores these three characters which is kind of cool okay so first of all this might seem like a weird set of rules to play by but first let's check that this is actually decodable otherwise this would be useless so to see that is it possible to take the number six and see | 8,655 | 8,685 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8655s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | which was the last character that was encoded and the answer is yes because these two sets S sub a and S sub b were defined to partition the natural numbers so for any natural number like 6 you know which set it belongs to so you know that 6 belongs to S sub b and so you know the last character that was encoded was b and then you can also recover the last state the previous | 8,685 | 8,717 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8685s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | state before b was encoded and the way you do that is just by looking at the position of six in S sub b so you see that six is the fourth number in S sub b so that's the previous state and you can just keep repeating this and you can recover the characters that were encoded so hopefully that convinces you that this is decodable and kind of the point of this | 8,717 | 8,742 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8717s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | is that we actually chose these sets S sub a and S sub b so that their density in the natural numbers is approximately well it is pretty much the probability of the symbols so you know if you take a lot of natural numbers the fraction of the numbers which lie in S sub a is about one-fourth and the fraction of the numbers that lie in S sub b is about three-fourths and so this encoding | 8,742 | 8,767 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8742s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | operation here where we look for the s number in one of these sets on that operation will advance us by a fraction but by a factor of about one over P but that's just what happens because this thing is distributed like a fraction of P over the natural numbers so when you index into it you you you increase by a fraction of one over P so that means that every time you encode a symbol onto | 8,767 | 8,796 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8767s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | on to a state I guess it's called X here you end up multiplying your natural number by about 1 over P that's generally what happens approximately so here if the SLA is powers of three it'll also work powers of three yeah so we want as SMA so we we just want them to occur about one fourth of the time like zero comma three comma nine etc that'll also work um so that doesn't | 8,796 | 8,837 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8796s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | really occur one-fourth of the time in if you pick some long sequence of natural numbers those numbers don't occur one-fourth of the time for that long sequence oh I see so we want the density of these things to be to be any partition that needs the criteria that first 1/4 is going to work right this is not a neat partition right so so there are actually a lot of choices for this | 8,837 | 8,866 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8837s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | so this particular choice is true is it so that it's very easy to implement the encoding and decoding operations so you can just do it with some modular um but if you have some crazy choice maybe it'll work but it might be very hard to to compute B encode and decode operations well it seems like the set of natural number that is also chosen like can be chosen otherwise here like we | 8,866 | 8,896 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8866s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | don't have to we only restrict a natural number because of the index is zero so it's convenient is that why well at the end of the day this is something that we want to turn into a binary string so I guess I haven't described that at the end but so so once you in covered everything you have this big natural number that describes all your symbols and then you | 8,896 | 8,918 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8896s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | turn it into a binary string and then you can in the binary representation and you can ship that off to the to the receiver they start at the end you just have one number right right and then you from this one number you can back word generate all the three views right right but here we have this number six and now we want to send six to the receiver and the receiver you know the all our | 8,918 | 8,947 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8918s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | communication protocols work in bits so we have to turn six into a binary string and then send that to the receiver but the point the point is that actually secure here's here's the property of this scheme that we basically keep dividing by P of s every time we encode s so that means that if we encode a bunch of symbols we get some starting symbol divided by the | 8,947 | 8,974 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8947s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | product of the probabilities of all the symbols that we that we encode it and so if we so this is some natural number and if we code the natural number the number of bits needed is about the log of the number that's the log base 2 of the number that's how many bits we need to code it so we see that this is this is the code length it's the sum over T of log 1 over P for all the symbols and so | 8,974 | 9,004 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8974s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | so if we take this and we divide by the number of symbols so if we take this divided by the number of symbols you see that this goes to the entropy of this of this source so that so this is like an optimal thing to do I roundabout way of answering the question of why use natural numbers but but I think the stack here is just a conceptual framework right we don't know | 9,004 | 9,037 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9004s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | the actual implementation we don't need stack yeah that's absolutely true with this we say it's a stack just because it has this property that every time we decode something it we just get the last thing that was encoded we don't get the first thing that I was encoded so we just call it a stack but but yeah you don't actually need a real stack I mean essentially it's just a partition of a | 9,037 | 9,060 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9037s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | lookup table like before we have a general lookup table but now you're just partitioning the lookup table um sure right I guess maybe the point here is that yeah ANS is really these rules and you can implement them efficiently this is what Duda found and it seems to work in practice and it has this nice stack-like behavior that's basically the point | 9,060 | 9,097 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9060s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | of this and it's also optimal okay so returning to more interesting models which are not just two characters a and b but rather things like distributions over images represented by latent variable models so there's this very nice algorithm introduced in 2019 called bits back with ANS or BB-ANS which is bits back coding using ANS as a back end and the | 9,097 | 9,129 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9097s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | reason to use ANS is because it turns out that the stack-like property of ANS where whatever you decode is the last thing you encoded makes it very compatible with the concept of getting bits back so let's just see how that works so here we're gonna think about latent variable models so Pieter talked about Gaussian mixture models which are one case of | 9,129 | 9,159 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9129s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | this so here Z is the latent variable P of Z is the prior and P of X is the marginal distribution so this is how bits back coding works and we're gonna talk about how it works exactly with ANS so in BB-ANS if you wish to send X so the goal here is to send X the first thing you do is you start off with a non-empty bit stream | 9,159 | 9,205 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9159s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | so we can just call it a bit scream because that that's just how we think about it and so the first thing the encoder does is it decodes Z from the bit stream so the encoder knows X so the encoder can compute Q of Z given X this is just the approximate posterior of this latent variable model and it can use this distribution to decode from the bit stream and we assume that this bit | 9,205 | 9,235 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9205s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | stream was full of random bits and this is a question that came up and I'll talk about the consequences of that later so that's the first thing you do but the point is that if you decode from random bits then you get a sample and then the next thing the encoder does is it encodes X using P of X given Z which is actually called the decoder and then it finally encodes Z okay | 9,235 | 9,264 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9235s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | so what actually happened here so if we just visualize this in a spit stream like this so so that this is what we started off with in the first phase when we decode Z we actually remove a little bit of this bit stream from the right so imagine this is a stack where we keep adding things on the right so in this first phase we remove a little bit and then we get a little shorter bit stream | 9,264 | 9,294 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9264s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | then we encode X so that increases the length of the bit stream a little bit more but let's say by this much then then we encode Z again so so that that increases by a little bit so now we can you can just look at this diagram and see how much how long did this bit stream get what was the net change in the length of this bit screen well we we have to add in these two parts right | 9,294 | 9,328 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9294s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | because the bit stream grew right there but then we also subtract how much well we subtracted a little bit at the beginning so the net code length the net amount of change to the length of this bit stream is well that's negative log P of X given Z minus log P of Z so that was for these two parts two and three but then we have to subtract the | 9,328 | 9,361 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9328s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | amount that we decoded from the bit stream at the beginning so that's plus log Q of Z given X for the first part Z here is a sample from Q so the actual code length on average is the average of this with Z drawn from the approximate posterior so you can see that this is the VAE bound this is just the variational bound on negative log likelihood so I guess | 9,361 | 9,394 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9361s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | this is can I ask sorry if you have a stream of let's say oh just lowercase letters A through Z then would P of Z here just be 1 over 26 and then the P of X given Z would be the number of times it occurs in divided by the total length right so it just depends on what your latent variable model happens to be I'm so the case that I'm actually thinking about it is view is that this is a V a | 9,394 | 9,431 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9394s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | and so P of Z is like standard normal quite better but my confusion is why would it be like a normal distribution like isn't each key represented by a single value in the lookup table it's a constant right so why would it have a distribution um so I'm not really sure what like if you're given a string you just count right and then out of the count you that's a constant let's say | 9,431 | 9,467 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9431s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | it's all just restrict to a through Z then for each of the character you have a basically the probability occurs in this stream and that's a constant value so why would that have a distribution so maybe let let's back up for a bit and just like talk about what we're trying to do so what we're trying to do is to turn the latent variable model into a compression algorithm so just starting | 9,467 | 9,494 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9467s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | from square root 1 we have a ve of this what's what's the input of the Dae an image it's a stream right let's say for 1d case is it a stream time I propose we can write questions here offline because we've got a lot of cover yes okay yeah happy to talk about this later yes ok so here is a description of the same same thing so during the encoding phase the decoder decodes from the bit | 9,494 | 9,538 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9494s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | stream then encodes X and Z and you can also check that this is decodable so if you just run everything in reverse you just end up getting X so you decode Z you decode X and then you can re-encode Z I'm using P here it should actually be Q and the re-encoding part is the getting bits back here so once the receiver re-encodes Z the receiver | 9,538 | 9,574 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9538s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | now gets a slightly longer bit stream from which it can start to decode the next Z so those are exactly the bits that were given back right here okay so there are two points that we should talk about when getting BB-ANS working with continuous latent variable models like VAEs which is that these Z's are continuous so Z comes from a standard | 9,574 | 9,609 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9574s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | normal distribution and so we can't really code continuous data but what we can do is discretize it to some high precision and so if you take Z and you discretize it to some level delta Z then you pretty much turn a probability density function little p of Z into a probability mass function capital P of Z which is p of Z times delta Z so what you get by integrating a density | 9,609 | 9,642 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9609s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | over this small region of volume Delta Z and so you can do that for both the posterior in the prior so you do that for the prior and you do it for the posterior and you see that these deltas use cancel out and so so we get is that this bits back code length with the discretization being the same between the prior and the posterior still gives you the same KL divergence term in the | 9,642 | 9,670 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9642s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | VAE the second point that somebody brought up is that we decode Z from the bit stream and that's how we sample from the bit stream by decoding from it that's how we sample Z but in order for that to really give us a good sample the bits that we decode from have to be actually random and that's not necessarily true and so in a VAE for the last Z if you just sort of work out | 9,670 | 9,713 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9670s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | what's going on basically if this KL divergence between the aggregate posterior Q of Z and the prior is small then that means those bits will be random or pretty close and that'll be good enough to get a good sample but of course in practice for a VAE that's not trained exactly well this is gonna be nonzero but in practice it seems like this doesn't matter too much I | 9,713 | 9,741 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9713s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | think one thing that might actually work to ensure that the bits are random which I haven't seen explored is to just encrypt the bit stream and that'll make the bits look random and then you can decode anything from it so I think in practice it's not a problem and what's nice is that this scheme bits back with ANS seems to work pretty well so the authors of this paper | 9,741 | 9,768 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9741s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
pPyOlGvWoXA | implemented this bits back ANS algorithm for VAEs trained on MNIST and they found that the numbers they got were very close pretty much the same as the variational bound on the negative log likelihood which is exactly what you want that's what is predicted so this thing works as well as advertised | 9,768 | 9,799 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9768s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | |
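
A compact way to write the two-part ("max mode") scheme discussed around t=6547-6721: send the best mode i under the prior, then x under that component. This is a sketch of the bookkeeping as stated in the lecture; the implicit distribution q is identified only informally (up to normalization).

$$
\ell_{\text{two-part}}(x)=\min_i\Big[\log_2\tfrac{1}{p(i)}+\log_2\tfrac{1}{p(x\mid i)}\Big]=\min_i\log_2\tfrac{1}{p(i,x)},
$$
$$
\mathbb{E}_{x\sim p}\big[\ell_{\text{two-part}}(x)\big]\;\approx\;H(X)+D_{\mathrm{KL}}\big(p(x)\,\|\,q(x)\big),\qquad q(x)\propto\max_i p(i,x),
$$

so the overhead is the KL between the true marginal and the distribution the scheme implicitly codes with; it is near zero when the modes are well separated and can be large when they overlap, as with a continuous latent.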
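
The bits-back accounting from roughly t=6953-7253, written out. With the exact posterior the earned-back bits make the net cost exactly the information content of x; with an approximate posterior q the net cost is the negative ELBO, which is why training a VAE directly optimizes the compression rate.

$$
\underbrace{\log_2\tfrac{1}{p(i)}}_{\text{send }i}+\underbrace{\log_2\tfrac{1}{p(x\mid i)}}_{\text{send }x}-\underbrace{\log_2\tfrac{1}{p(i\mid x)}}_{\text{bits back}}=\log_2\tfrac{1}{p(x)},
$$
$$
\mathbb{E}_{i\sim q(\cdot\mid x)}\Big[\log_2\tfrac{1}{p(i)}+\log_2\tfrac{1}{p(x\mid i)}-\log_2\tfrac{1}{q(i\mid x)}\Big]=\log_2\tfrac{1}{p(x)}+D_{\mathrm{KL}}\big(q(i\mid x)\,\|\,p(i\mid x)\big).
$$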
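
A minimal sketch of the blackboard example at the end of Pieter's part (around t=7491-7726): sampling i from q(i|x) by consuming bits from the stream, where each bit halves the current interval until it lies inside one bucket of the CDF. The function name and the probabilities [0.2, 0.1, 0.5, 0.2] are just the lecture's worked example; this is my illustration, not code from the course.

```python
def sample_from_bits(probs, bits):
    """Consume 0/1 bits to narrow [0, 1) until it fits inside one CDF bucket.

    Returns (symbol_index, bits_consumed). The consumed bits are exactly what
    the sender can later emit to identify the symbol, which is how they are
    "gotten back" on the receiving side.
    """
    # cumulative boundaries: cdf[k] .. cdf[k+1] is symbol k's bucket
    cdf = [0.0]
    for p in probs:
        cdf.append(cdf[-1] + p)

    lo, hi = 0.0, 1.0
    consumed = []
    for b in bits:
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if b == 1 else (lo, mid)   # each bit halves the interval
        consumed.append(b)
        # stop once the interval lies entirely inside a single bucket
        for k in range(len(probs)):
            if cdf[k] <= lo and hi <= cdf[k + 1]:
                return k, consumed
    raise ValueError("ran out of bits before the interval resolved")

# the lecture's drawing: q(i|x) = [0.2, 0.1, 0.5, 0.2] over i = 1..4
sym, used = sample_from_bits([0.2, 0.1, 0.5, 0.2], bits=[0, 0, 1, 0])
print(sym, used)   # consuming 0,0,1,0 narrows [0,1) to [0.125, 0.1875) -> index 0, i.e. i = 1
```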
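
A sketch of the first ANS example (two symbols with probability 1/2 each, around t=8084-8274): encoding appends one bit to the binary representation of the state, decoding reads the parity and divides by two, and symbols come back out in reverse (stack) order. Function names are mine, not from the lecture.

```python
def encode(state, symbol):
    # 'a' -> 2*state (append bit 0), 'b' -> 2*state + 1 (append bit 1)
    return 2 * state if symbol == "a" else 2 * state + 1

def decode(state):
    # parity tells us the last symbol; floor-dividing by 2 undoes the encode step
    symbol = "a" if state % 2 == 0 else "b"
    return state // 2, symbol

s = 0
for sym in ["b", "b", "a"]:          # encode in this order
    s = encode(s, sym)
print(s)                              # 6 = 0b110: the three appended bits 1, 1, 0

out = []
for _ in range(3):                    # decoding pops symbols in reverse (stack) order
    s, sym = decode(s)
    out.append(sym)
print(out)                            # ['a', 'b', 'b']
```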
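
A sketch of the generalized ANS example with p(a) = 1/4, p(b) = 3/4 (around t=8403-8767), using the partition S_a = {0, 4, 8, ...} and S_b = everything else. The encode rule "new state = s-th element of the symbol's set" and the decode rule "membership gives the symbol, rank gives the previous state" are implemented with the modular-arithmetic shortcut mentioned in the Q&A; variable names are mine.

```python
M = 4                       # probabilities have denominator 4: p(a) = 1/4, p(b) = 3/4
freq = {"a": 1, "b": 3}
cum  = {"a": 0, "b": 1}     # S_a takes residue {0} mod 4, S_b takes residues {1, 2, 3}

def encode(s, sym):
    f, c = freq[sym], cum[sym]
    return (s // f) * M + c + (s % f)    # = the s-th element of S_sym

def decode(x):
    r = x % M
    sym = "a" if r < freq["a"] else "b"  # which set does x belong to?
    f, c = freq[sym], cum[sym]
    return f * (x // M) + (r - c), sym   # rank of x inside S_sym = previous state

s = 0
for sym in "bab":                        # the lecture's example string
    s = encode(s, sym)
print(s)                                 # 6, matching the table in the lecture

out = []
for _ in range(3):
    s, sym = decode(s)
    out.append(sym)
print(out, s)                            # ['b', 'a', 'b'] popped in reverse, state back to 0
```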
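
The code-length argument from around t=8947-9037 in symbols: each encode step multiplies the state by roughly 1 over the symbol's probability, so for i.i.d. symbols x_1, ..., x_N and a starting state s_0,

$$
s_N \approx \frac{s_0}{\prod_{t=1}^{N} p(x_t)}
\;\Rightarrow\;
\log_2 s_N \approx \log_2 s_0 + \sum_{t=1}^{N}\log_2\tfrac{1}{p(x_t)},
\qquad
\frac{1}{N}\log_2 s_N \;\xrightarrow[N\to\infty]{}\; H(p),
$$

so the bits needed to write out the final state approach the entropy per symbol.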
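
A toy bookkeeping sketch of the BB-ANS net code length (around t=9159-9394). This is not a real ANS coder; it only tallies the three length changes (decode z, encode x, encode z) for a small discrete model, to show that the expected net cost is the negative ELBO and equals -log2 p(x) when the posterior is exact. The tables p_z and p_x_given_z are made up for illustration.

```python
import math

# toy model: latent z in {0, 1}, observation x in {0, 1, 2}
p_z = [0.5, 0.5]
p_x_given_z = [[0.7, 0.2, 0.1],
               [0.1, 0.3, 0.6]]

def q_z_given_x(x):
    # exact posterior here; in a VAE this would be the approximate inference network
    joint = [p_z[z] * p_x_given_z[z][x] for z in (0, 1)]
    total = sum(joint)
    return [j / total for j in joint]

def bbans_net_bits(x):
    q = q_z_given_x(x)
    net = 0.0
    for z in (0, 1):
        bits_back = -math.log2(q[z])               # step 1: decoding z frees these bits
        bits_x    = -math.log2(p_x_given_z[z][x])  # step 2: encoding x costs these bits
        bits_z    = -math.log2(p_z[z])             # step 3: encoding z costs these bits
        net += q[z] * (bits_x + bits_z - bits_back)
    return net

for x in (0, 1, 2):
    p_x = sum(p_z[z] * p_x_given_z[z][x] for z in (0, 1))
    # with the exact posterior the net bits equal -log2 p(x); an approximate q would
    # exceed it by KL(q(z|x) || p(z|x)), i.e. the net cost is the negative ELBO
    print(x, round(bbans_net_bits(x), 6), round(-math.log2(p_x), 6))
```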
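
The discretization argument for continuous latents (around t=9609-9670), written out under the assumption of a common bin width delta for prior and posterior: P(z) is roughly p(z) times delta and Q(z|x) is roughly q(z|x) times delta, so

$$
\mathbb{E}_{Q}\Big[\log_2\tfrac{1}{P(x\mid z)}+\log_2\tfrac{1}{P(z)}-\log_2\tfrac{1}{Q(z\mid x)}\Big]
\;\approx\;
\mathbb{E}_{q}\Big[\log_2\tfrac{1}{P(x\mid z)}\Big]+D_{\mathrm{KL}}\big(q(z\mid x)\,\|\,p(z)\big),
$$

the delta factors cancel in the ratio of Q to P, so the bits-back code length is still the usual VAE bound.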