CHANNEL_NAME
stringclasses 1
value | URL
stringlengths 43
43
| TITLE
stringlengths 22
99
| DESCRIPTION
stringlengths 1.36k
3.71k
| TRANSCRIPTION
stringlengths 1.5k
93.6k
| SEGMENTS
stringlengths 2.35k
153k
|
---|---|---|---|---|---|
Aleksa Gordić - The AI Epiphany | https://www.youtube.com/watch?v=km7iY4yX45A | Alan Turing | Computing Machinery and Intelligence | ❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
I give a brief overview of Alan M. Turing's seminal AI paper from 1950, titled:
"Computing Machinery and Intelligence" 💥💻🤖
You'll learn about:
✔️ Turing Test, or as Turing called it "The Imitation Game"
✔️ 9 objections to Turing's theory
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
2 Medium blogs that go with this video (you'll find more information there):
✅ https://medium.com/@gordicaleksa/turing-for-dummies-ai-part-1-f0f668bcd83d
✅ https://medium.com/@gordicaleksa/turing-for-dummies-ai-part-2-848cb87e95ab
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⌚️ Timetable:
0:00 Computing Machinery and Intelligence
0:40 Paper opening
1:10 Turing Test ("The Imitation Game")
2:42 Turing's predictions
3:50 The "Thinking" part
4:50 The "Machine" part
6:10 Mind is computational in nature?
7:22 Conway's game of life (simple rules - complex results)
8:20 Objections to Turing's theory (that Turing anticipated)
Note: This video 📺 is for purely educational purposes without any intention of monetizing it.
I've used imagery produced by various artists and people without whom this video would not be possible
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💰 BECOME A PATREON OF THE AI EPIPHANY ❤️
If these videos, GitHub projects, and blogs help you,
consider helping me out by supporting me on Patreon!
The AI Epiphany ► https://www.patreon.com/theaiepiphany
One-time donations: https://www.paypal.com/paypalme/theaiepiphany
Much love! ❤️
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition,
rather than the algebraic and numerical "intuition".
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
👋 CONNECT WITH ME ON SOCIAL
LinkedIn ► https://www.linkedin.com/in/aleksagordic/
Twitter ► https://twitter.com/gordic_aleksa
Instagram ► https://www.instagram.com/aiepiphany/
Facebook ► https://www.facebook.com/aiepiphany/
👨👩👧👦 JOIN OUR DISCORD COMMUNITY:
Discord ► https://discord.gg/peBrCpheKE
📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER:
Substack ► https://aiepiphany.substack.com/
💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS:
GitHub ► https://github.com/gordicaleksa
📚 FOLLOW ME ON MEDIUM:
Medium ► https://gordicaleksa.medium.com/
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
#turing #ai #mind | This video is going to cover Alan Turing and his famous paper from 1950 called Computing Machinery and Intelligence. So for those of you who don't know who Turing was, he's widely considered as the father of modern computer science and also the father of artificial intelligence. During the World War II he basically helped end the war by cracking German's Enigma coding machine. I wrote two blogs on this topic which I highly recommend that you go and head and read. I'll link those down there in the description but I'll try and summarize the main ideas that were covered in the paper right now. His paper has one of the best openings ever and goes like this. I propose to consider the question can machines think? This should begin with definitions of the meaning of the terms machine and think. So basically Turing is well aware that we don't have a slightest clue what thinking is, what understanding is, what consciousness is and whether it's actually needed for the cognition as we know it. He goes on to propose what is now called the Turing test and which he originally dubbed as the imitation game. So the game or the Turing test goes like this. So imagine you have a judge and you have a man and you have a woman. Now the goal of the man is to try and trick the judge into thinking that he's a woman. The goal of the woman is to try and convince the judge to help the judge and convince him or her that she is a she, that she's a woman. And the goal of the judge of course is to just figure out who actually the man is and who the woman is. Now go ahead and replace the man with a computer and if the computer can be as performant as the man in tricking the judge it passes the Turing test. But there's a lot of important details here that I missed in the original statement. So first of all they can only communicate by text and not handwritten text but typed text so that you cannot kind of infer whether the text comes from a woman or man. Back in his time extra sensory perception like telepathy, telekinesis, all of those stuff that we see in like science fiction movies was actually kind of popular and he suggested that there was some statistical evidence for telepathy. So in order to avoid like some of the participants being able to exploit telepathic abilities he said we want to create a telepathy proof room, whatever that means, because that will give humans unfair advantage compared to computers which can only guess at random. And Turing's original prediction went like this. So I believe that in about 50 years time it will be possible to program computers with a storage capacity of about 10 to the power of 9 which is 1 gigabit for those of you who don't know to make them play the imitation game so well that an average interrogator will not have more than 70% chance of making the right identification after five minutes of questioning. So basically the game lasts for five minutes and if the computer manages to trick 30% of the judges it passed the Turing test. So this original statement was not as strong as it can be. For me personally I think that computer will have to communicate with you like the whole day at least and then you can say it's intelligent or it can think or it should pass the test. If you have only five minutes I think today like Google Mina for example is it the newest stuff that came from Google, newest chatbot. I'd say if you took an average judge like some kid from high school I think you can have like a five minute conversation without a problem. Now I said they will think but like that's pretty sketchy and let's see what Turing said about that one. So may not machines carry out something which ought to be described as thinking but which is very different from what a man does. This objection is a very strong one but at least we can say that if nevertheless a machine can be constructed to play the imitation game satisfactorily we need not be troubled by this objection. We probably won't have to crack thinking or understanding on all those like hard and abstract stuff. We can just make it this stuff that we want like that's engineering. Basically airplanes also don't fly like birds right and we don't complain about them right. So the same thing about cars they don't run like cheetahs but they are faster than cheetahs. So that's it. I mean you don't have to fully understand something in order to exploit certain features that would be useful to like humanity and economy. So that was the thinking part. Turing initially said can machines think? So we need to define what a machine means. According to Turing a machine cannot be just about anything. So for example he anticipated that we may be able to genetically create humans so those humans wouldn't be appropriate to participate in this test. What he means by machine is a digital computer. Nowadays when we say digital computer we usually make it synonymous with electronic digital computer which is a concrete implementation of a digital computer. Digital computer can be implemented as a mechanical device. There was this famous mechanical digital computer called the analytical engine developed by Charles Babbage 200 years ago and also computers digital computers could be acoustic they could be photonic so-called photonic computing where you basically use photons and light to do the same computations as these electronic computers do. Basically he proved in his earlier paper that those can solve anything that's computationally solvable they are so called universal Turing machines. Now that's so obvious now in 2020 where we just pull out your smartphone you can you can use it as a calendar you can use it to take a photo you can call somebody you can do a bunch of stuff. Back in the days of Turing they basically have a single machine for a single purpose that's it. There is a really strong hypothesis that Turing is making throughout this paper. Human mind can be modeled as a discrete state machine. If that's true then any digital computer be it mechanical be it acoustic be it electronic one can mimic your mind. So your mind is like software and your brain is like a hardware of a computer. Following from that one your thoughts are just simple computations and this is something that's called as a dualism in philosophy. So you basically think that the mind can be separated from the brain. The opposite theory would be that they cannot be separated that the brain is actually needed for a true cognition for true understanding and consciousness and everything. Roger Penrose famous mathematician and physicist he thinks that there are probably some quantum effects happening in the mind and every new like neuron in your brain basically has this strange structure called microtubules and there are some evidence although like those are certainly not definite that those could be causing consciousness. Now for many of you this idea may may sound absurd like human mind could be modeled as a simple like a set of instructions like a program but I don't know like if you know about Conway's Game of Life for example you know that some really simple rules could create an illusion of a really complex phenomenon. Second example could be fractals like Mandelbrot's set or Mandelbrot's fractal and you can see it on the screen and it's created using a really simple rule and you get this amazingly complex looking fractal. This whole theory was later refuted by John Searle in a famous Chinese room argument so I'm pretty confident that we will need some kind of special coupling special hardware in order to produce understanding thinking and that stuff. Putting aside the fact that John Searle actually refuted Turing in 1980 let's see a couple of objections that Turing himself anticipated being thrown at him back in 1950. So the first one is a theological objection and the idea there is well like God gave souls only to men and women and you need souls in order to think which pretty much implies that machines will never be able to think and what Turing says is that hey like in some other religions they say that only men have souls so who's right? Probably none of those theories is correct and there we have some like empirical evidence where church had or in general religions had some some funny theories. Geocentric system is in one example where church had false beliefs and they didn't want to admit it. The bottom line is dogmas are not arguments so Turing just said I don't want to even try and refute this one. The second objection was the heads in the sand objection and the idea there is it would be really scary if machines could think so they cannot. I mean that's not even an argument but Turing nonetheless just kind of considered that one and it's really related to the first theological argument because it's not an argument it's an opinion. The third objection that Turing considered was the mathematical objection and the idea there was that we are aware that computers do have certain limitations they cannot compute everything so one famous example is the halting problem where the goal is to given a program and given an input you have to create another program which is say whether this program will stop for a given input and Turing proved that this cannot be computed. This is something that's called undecidable problem for Turing machines but there is no proof that human intellect is not subject to the same kinds of limitations. My opinion here is that we won't even be testing people for these kind of tests. We won't be testing mathematicians or logicians and even they couldn't answer questions like halting problem like that doesn't make any sense. Skipping to the fourth objection basically that's called the argument from consciousness and the idea here is hey even if machines do some crazy cool stuff they still don't think they don't understand they don't feel they don't have consciousness and Turing said well how do you know how do you know that even other people have those kind of attributes so you basically cannot know and if you go into extreme that's called a solipsist view where you basically you can only say for yourself that hey I have consciousness I'm aware of myself but you cannot say for others so you just have to be reasonable and accept this test as a as a good way to to measure intelligence and I'm obviously simplifying things a little bit here I just want to give you out like a short glimpse of how the the paper the computing machinery and intelligence was was kind of structured and just give you a feeling for that and then you can go later on and explore a bit more for yourself. The fifth one is argument from various disabilities and this one goes something like this so yeah they can do all of those cool stuff but they cannot do X where X can be fall in love learning from experience doing something really new basically Turing said that all of these are usually a product of a scientific induction went wrong in the sense they just see the current state of affairs they see current computers and they say hey these machines are not capable in general of doing this and they probably will be and as we now know machines can learn from experience that there's a whole field my channel is all about called machine learning where machines are learning indeed from data from experience and also they can do something new so if you if you know about AlphaGo if you know about all of those cool algorithms that DeepMind is developing they did a lot of moves and stuff that other human players the best players in the world could not explain they are doing something new that's for sure. The sixth objection is lady lovelace's objection and it says that computers can never do something new they can only do stuff that we know how to program them to do so basically whatever we know how to do they will be able to do anything else is out of the scope and this is a really important argument because it later created a the most famous objection to Turing's theory which is known as the Chinese informally known as the Chinese room argument written by John Searle and it also created a brand new test which is called lovelace's test and it just kind of informally tries to figure out if machines can surprise us if they manage to surprise us the machine passes the test that's informal version to be quite honest Turing had the kind of weak argument on this one he basically said hey when I sometimes do some syntax error computer does surprise me when I see some like strange result and I mean that's not the kind of surprise that we want to see from a machine not a syntax error every every single developer nowadays experiences those kind of surprises but it's not the same thing we do know of many examples nowadays which indeed truly surprised us as like as the whole civilization so for example move 37 that AlphaGo did in a match against Lisa Dole was truly a new thing nobody expected that one so everybody was kind of surprised I'm not quite sure if David Silver and his crew from DeepMind can actually can actually figure out exactly what the machine how the machine came with that like move seventh argument from the continuity of the nervous system and what Turing basically said here is that a digital computer which is which can mimic any discrete state machine could be a good enough approximation for for our brain eighth the argument from informality of behavior and this one basically says we can never learn or teach the machine of how to perform in every single situation that may appear in the real world and what Turing said is that we haven't searched long enough if we keep on putting new rules and new rules into the machine it will eventually get better and I don't agree with Turing on this one I'd say I'd say we did solve it in a way but using machine learning but even with ML we still are not quite sure how the machine will react to every single possible input in in a given state it's kind of not the desirable property of a system especially when human lives are concerned the final argument is the one from extrasensory perception and I kind of hinted that this one earlier in the video what he basically says here we have a person that has a telepathic ability for example then the judge could give such a test that only the person that has telepathic abilities could answer correctly whereas the machine can do at best at random right so say I have some cards in my hands and the guy that has telepathic abilities I guess is 40 of those correctly whereas the machine does a 20-25 whatever it's funny to listen to Turing writing about statistical evidence for telepathy where it seems like they did have some evidence that some persons perform better than at random today in 2020 we don't have any any kind of such statistical evidence as far as I know at the very end of the paper he mentions a couple of really interesting ideas about learning so what he suggests is that hey instead of trying and programming like a like a fully fledged human mind into the machine let's try and start with a child machine and basically through education process teach it to become the adult machine that's basically idea of machine learning he compares this process to evolutionary process in the sense that the initial state of the machine is can be considered as the hereditary material education process can be considered as mutations and natural selection is judging in this example or some automatic way of doing it like objective or loss function and finally he also pretty much predicted reinforcement learning in the sense he suggested some kind of reward and punishment methods for learning and he said these will probably not be the the main way to to learn them to teach a machine but a supplement with that I'll end this brief overview of drinks paper computing machinery and intelligence I hope you liked it if you did go ahead and subscribe share and like if you like the video and see you next time | [{"start": 0.0, "end": 6.640000000000001, "text": " This video is going to cover Alan Turing and his famous paper from 1950 called"}, {"start": 6.640000000000001, "end": 11.120000000000001, "text": " Computing Machinery and Intelligence. So for those of you who don't know who"}, {"start": 11.120000000000001, "end": 15.72, "text": " Turing was, he's widely considered as the father of modern computer science and"}, {"start": 15.72, "end": 20.0, "text": " also the father of artificial intelligence. During the World War II he"}, {"start": 20.0, "end": 26.560000000000002, "text": " basically helped end the war by cracking German's Enigma coding"}, {"start": 26.56, "end": 30.479999999999997, "text": " machine. I wrote two blogs on this topic which I highly recommend that you go and"}, {"start": 30.479999999999997, "end": 35.56, "text": " head and read. I'll link those down there in the description but I'll try and"}, {"start": 35.56, "end": 40.64, "text": " summarize the main ideas that were covered in the paper right now. His paper"}, {"start": 40.64, "end": 46.519999999999996, "text": " has one of the best openings ever and goes like this. I propose to consider the"}, {"start": 46.519999999999996, "end": 52.2, "text": " question can machines think? This should begin with definitions of the meaning of"}, {"start": 52.2, "end": 59.2, "text": " the terms machine and think. So basically Turing is well aware that we"}, {"start": 59.2, "end": 62.760000000000005, "text": " don't have a slightest clue what thinking is, what understanding is, what"}, {"start": 62.760000000000005, "end": 68.36, "text": " consciousness is and whether it's actually needed for the cognition as we"}, {"start": 68.36, "end": 73.96000000000001, "text": " know it. He goes on to propose what is now called the Turing test and which he"}, {"start": 73.96000000000001, "end": 79.5, "text": " originally dubbed as the imitation game. So the game or the Turing test goes like"}, {"start": 79.5, "end": 84.32, "text": " this. So imagine you have a judge and you have a man and you have a"}, {"start": 84.32, "end": 89.08, "text": " woman. Now the goal of the man is to try and trick the judge into thinking that"}, {"start": 89.08, "end": 94.2, "text": " he's a woman. The goal of the woman is to try and convince the judge to help the"}, {"start": 94.2, "end": 99.92, "text": " judge and convince him or her that she is a she, that she's a woman. And the goal"}, {"start": 99.92, "end": 103.6, "text": " of the judge of course is to just figure out who actually the man is and who the"}, {"start": 103.6, "end": 108.4, "text": " woman is. Now go ahead and replace the man with a computer and if the"}, {"start": 108.4, "end": 112.60000000000001, "text": " computer can be as performant as the man in tricking the judge it passes the"}, {"start": 112.60000000000001, "end": 116.72, "text": " Turing test. But there's a lot of important details here that I missed in"}, {"start": 116.72, "end": 120.80000000000001, "text": " the original statement. So first of all they can only communicate by text"}, {"start": 120.80000000000001, "end": 126.44, "text": " and not handwritten text but typed text so that you cannot kind of infer"}, {"start": 126.44, "end": 131.88, "text": " whether the text comes from a woman or man. Back in his time extra sensory"}, {"start": 131.88, "end": 136.12, "text": " perception like telepathy, telekinesis, all of those stuff that we see in like"}, {"start": 136.12, "end": 140.12, "text": " science fiction movies was actually kind of popular and he suggested that there"}, {"start": 140.12, "end": 147.56, "text": " was some statistical evidence for telepathy. So in order to avoid like"}, {"start": 147.56, "end": 153.24, "text": " some of the participants being able to exploit telepathic abilities he said we"}, {"start": 153.24, "end": 157.66, "text": " want to create a telepathy proof room, whatever that means, because that will"}, {"start": 157.66, "end": 162.28, "text": " give humans unfair advantage compared to computers which can only guess at random."}, {"start": 162.28, "end": 167.04, "text": " And Turing's original prediction went like this. So I believe that in"}, {"start": 167.04, "end": 171.36, "text": " about 50 years time it will be possible to program computers with a storage"}, {"start": 171.36, "end": 176.32, "text": " capacity of about 10 to the power of 9 which is 1 gigabit for those of you who"}, {"start": 176.32, "end": 180.68, "text": " don't know to make them play the imitation game so well that an average"}, {"start": 180.68, "end": 185.32, "text": " interrogator will not have more than 70% chance of making the right"}, {"start": 185.32, "end": 190.48, "text": " identification after five minutes of questioning. So basically the game"}, {"start": 190.48, "end": 194.79999999999998, "text": " lasts for five minutes and if the computer manages to trick 30% of the"}, {"start": 194.79999999999998, "end": 199.88, "text": " judges it passed the Turing test. So this original statement was not as strong as"}, {"start": 199.88, "end": 205.48, "text": " it can be. For me personally I think that computer will have to communicate with"}, {"start": 205.48, "end": 210.2, "text": " you like the whole day at least and then you can say it's intelligent or it can"}, {"start": 210.2, "end": 214.2, "text": " think or it should pass the test. If you have only five minutes I think today"}, {"start": 214.2, "end": 217.95999999999998, "text": " like Google Mina for example is it the newest stuff that came from Google,"}, {"start": 217.96, "end": 224.24, "text": " newest chatbot. I'd say if you took an average judge like some kid"}, {"start": 224.24, "end": 227.8, "text": " from high school I think you can have like a five minute conversation without"}, {"start": 227.8, "end": 232.88, "text": " a problem. Now I said they will think but like that's pretty sketchy and let's see"}, {"start": 232.88, "end": 237.72, "text": " what Turing said about that one. So may not machines carry out something which"}, {"start": 237.72, "end": 242.36, "text": " ought to be described as thinking but which is very different from what a man"}, {"start": 242.36, "end": 246.60000000000002, "text": " does. This objection is a very strong one but at least we can say that if"}, {"start": 246.6, "end": 250.12, "text": " nevertheless a machine can be constructed to play the imitation game"}, {"start": 250.12, "end": 256.0, "text": " satisfactorily we need not be troubled by this objection. We probably won't have"}, {"start": 256.0, "end": 261.44, "text": " to crack thinking or understanding on all those like hard and abstract stuff. We"}, {"start": 261.44, "end": 264.96, "text": " can just make it this stuff that we want like that's engineering. Basically"}, {"start": 264.96, "end": 270.24, "text": " airplanes also don't fly like birds right and we don't complain"}, {"start": 270.24, "end": 275.24, "text": " about them right. So the same thing about cars they don't run like cheetahs but"}, {"start": 275.24, "end": 280.28000000000003, "text": " they are faster than cheetahs. So that's it. I mean you don't have to fully"}, {"start": 280.28000000000003, "end": 285.32, "text": " understand something in order to exploit certain features that would be useful to"}, {"start": 285.32, "end": 290.2, "text": " like humanity and economy. So that was the thinking part. Turing initially said"}, {"start": 290.2, "end": 294.44, "text": " can machines think? So we need to define what a machine means. According to"}, {"start": 294.44, "end": 299.52, "text": " Turing a machine cannot be just about anything. So for example he anticipated"}, {"start": 299.52, "end": 305.44, "text": " that we may be able to genetically create humans so those humans wouldn't be"}, {"start": 305.44, "end": 309.35999999999996, "text": " appropriate to participate in this test. What he means by machine is a digital"}, {"start": 309.35999999999996, "end": 313.24, "text": " computer. Nowadays when we say digital computer we usually make it synonymous"}, {"start": 313.24, "end": 317.35999999999996, "text": " with electronic digital computer which is a concrete implementation of a"}, {"start": 317.35999999999996, "end": 320.88, "text": " digital computer. Digital computer can be implemented as a mechanical device."}, {"start": 320.88, "end": 325.47999999999996, "text": " There was this famous mechanical digital computer called the analytical engine"}, {"start": 325.48, "end": 330.40000000000003, "text": " developed by Charles Babbage 200 years ago and also computers digital computers"}, {"start": 330.40000000000003, "end": 334.24, "text": " could be acoustic they could be photonic so-called photonic computing"}, {"start": 334.24, "end": 338.92, "text": " where you basically use photons and light to do the same computations as"}, {"start": 338.92, "end": 345.52000000000004, "text": " these electronic computers do. Basically he proved in his earlier paper that"}, {"start": 345.52000000000004, "end": 349.52000000000004, "text": " those can solve anything that's computationally solvable they are so"}, {"start": 349.52000000000004, "end": 354.88, "text": " called universal Turing machines. Now that's so obvious now in 2020 where we"}, {"start": 354.88, "end": 359.44, "text": " just pull out your smartphone you can you can use it as a calendar you can use"}, {"start": 359.44, "end": 363.71999999999997, "text": " it to take a photo you can call somebody you can do a bunch of stuff. Back in the"}, {"start": 363.71999999999997, "end": 368.44, "text": " days of Turing they basically have a single machine for a single purpose"}, {"start": 368.44, "end": 371.88, "text": " that's it. There is a really strong hypothesis that Turing is making"}, {"start": 371.88, "end": 376.44, "text": " throughout this paper. Human mind can be modeled as a discrete state machine."}, {"start": 376.44, "end": 381.7, "text": " If that's true then any digital computer be it mechanical be it acoustic be it"}, {"start": 381.7, "end": 387.64, "text": " electronic one can mimic your mind. So your mind is like software and your"}, {"start": 387.64, "end": 392.56, "text": " brain is like a hardware of a computer. Following from that one your thoughts"}, {"start": 392.56, "end": 397.36, "text": " are just simple computations and this is something that's called as a dualism in"}, {"start": 397.36, "end": 401.56, "text": " philosophy. So you basically think that the mind can be separated from the brain."}, {"start": 401.56, "end": 405.4, "text": " The opposite theory would be that they cannot be separated that the brain is"}, {"start": 405.4, "end": 410.44, "text": " actually needed for a true cognition for true understanding and consciousness and"}, {"start": 410.44, "end": 415.2, "text": " everything. Roger Penrose famous mathematician and physicist he thinks"}, {"start": 415.2, "end": 418.88, "text": " that there are probably some quantum effects happening in the mind and every"}, {"start": 418.88, "end": 423.56, "text": " new like neuron in your brain basically has this strange structure called"}, {"start": 423.56, "end": 429.36, "text": " microtubules and there are some evidence although like those are certainly not"}, {"start": 429.36, "end": 434.64, "text": " definite that those could be causing consciousness. Now for many of you this"}, {"start": 434.64, "end": 439.2, "text": " idea may may sound absurd like human mind could be modeled as a simple like a"}, {"start": 439.2, "end": 443.44, "text": " set of instructions like a program but I don't know like if you know about"}, {"start": 443.44, "end": 448.44, "text": " Conway's Game of Life for example you know that some really simple rules"}, {"start": 448.44, "end": 454.8, "text": " could create an illusion of a really complex phenomenon. Second example could"}, {"start": 454.8, "end": 460.24, "text": " be fractals like Mandelbrot's set or Mandelbrot's fractal and you"}, {"start": 460.24, "end": 464.91999999999996, "text": " can see it on the screen and it's created using a really simple rule and"}, {"start": 464.92, "end": 470.32, "text": " you get this amazingly complex looking fractal. This whole theory was later"}, {"start": 470.32, "end": 477.64000000000004, "text": " refuted by John Searle in a famous Chinese room argument so I'm pretty"}, {"start": 477.64000000000004, "end": 482.96000000000004, "text": " confident that we will need some kind of special coupling special hardware in"}, {"start": 482.96000000000004, "end": 488.24, "text": " order to produce understanding thinking and that stuff. Putting aside the fact"}, {"start": 488.24, "end": 493.48, "text": " that John Searle actually refuted Turing in 1980 let's see a couple of"}, {"start": 493.48, "end": 497.36, "text": " objections that Turing himself anticipated being thrown at him back in"}, {"start": 497.36, "end": 503.04, "text": " 1950. So the first one is a theological objection and the idea there is well"}, {"start": 503.04, "end": 507.92, "text": " like God gave souls only to men and women and you need souls in order to"}, {"start": 507.92, "end": 512.4, "text": " think which pretty much implies that machines will never be able to think and"}, {"start": 512.4, "end": 518.8000000000001, "text": " what Turing says is that hey like in some other religions they say that"}, {"start": 518.8, "end": 524.76, "text": " only men have souls so who's right? Probably none of those theories is"}, {"start": 524.76, "end": 530.24, "text": " correct and there we have some like empirical evidence where church had or"}, {"start": 530.24, "end": 535.0799999999999, "text": " in general religions had some some funny theories. Geocentric system is in one"}, {"start": 535.0799999999999, "end": 539.52, "text": " example where church had false beliefs and they didn't want to admit it. The"}, {"start": 539.52, "end": 544.88, "text": " bottom line is dogmas are not arguments so Turing just said I don't want to even"}, {"start": 544.88, "end": 548.84, "text": " try and refute this one. The second objection was the heads in the sand"}, {"start": 548.84, "end": 553.56, "text": " objection and the idea there is it would be really scary if machines could think"}, {"start": 553.56, "end": 559.16, "text": " so they cannot. I mean that's not even an argument but Turing nonetheless"}, {"start": 559.16, "end": 563.52, "text": " just kind of considered that one and it's really related to the first"}, {"start": 563.52, "end": 567.32, "text": " theological argument because it's not an argument it's an opinion. The"}, {"start": 567.32, "end": 571.1, "text": " third objection that Turing considered was the mathematical objection and the"}, {"start": 571.1, "end": 575.0400000000001, "text": " idea there was that we are aware that computers do have certain limitations"}, {"start": 575.0400000000001, "end": 579.26, "text": " they cannot compute everything so one famous example is the halting problem"}, {"start": 579.26, "end": 584.66, "text": " where the goal is to given a program and given an input you have to"}, {"start": 584.66, "end": 588.84, "text": " create another program which is say whether this program will stop for a"}, {"start": 588.84, "end": 593.8000000000001, "text": " given input and Turing proved that this cannot be computed. This is something"}, {"start": 593.8000000000001, "end": 598.6, "text": " that's called undecidable problem for Turing machines but there is no proof"}, {"start": 598.6, "end": 602.96, "text": " that human intellect is not subject to the same kinds of limitations. My opinion"}, {"start": 602.96, "end": 608.12, "text": " here is that we won't even be testing people for these kind of tests. We won't"}, {"start": 608.12, "end": 612.16, "text": " be testing mathematicians or logicians and even they couldn't answer questions"}, {"start": 612.16, "end": 614.76, "text": " like halting problem like that doesn't make any sense."}, {"start": 614.76, "end": 619.24, "text": " Skipping to the fourth objection basically that's called the argument from"}, {"start": 619.24, "end": 623.8000000000001, "text": " consciousness and the idea here is hey even if machines do some crazy cool"}, {"start": 623.8000000000001, "end": 628.28, "text": " stuff they still don't think they don't understand they don't feel they don't"}, {"start": 628.28, "end": 632.04, "text": " have consciousness and Turing said well how do you know how do you know that"}, {"start": 632.04, "end": 637.92, "text": " even other people have those kind of attributes so you basically cannot know"}, {"start": 637.92, "end": 642.8, "text": " and if you go into extreme that's called a solipsist view where you basically you"}, {"start": 642.8, "end": 646.9599999999999, "text": " can only say for yourself that hey I have consciousness I'm aware of myself"}, {"start": 646.9599999999999, "end": 651.4399999999999, "text": " but you cannot say for others so you just have to be reasonable and accept"}, {"start": 651.4399999999999, "end": 657.92, "text": " this test as a as a good way to to measure intelligence and I'm obviously"}, {"start": 657.92, "end": 661.16, "text": " simplifying things a little bit here I just want to give you out like a short"}, {"start": 661.16, "end": 665.0, "text": " glimpse of how the the paper the computing machinery and intelligence"}, {"start": 665.0, "end": 669.5999999999999, "text": " was was kind of structured and just give you a feeling for that and then you can"}, {"start": 669.5999999999999, "end": 674.4399999999999, "text": " go later on and explore a bit more for yourself. The fifth one is argument from"}, {"start": 674.4399999999999, "end": 679.16, "text": " various disabilities and this one goes something like this so yeah they can do"}, {"start": 679.16, "end": 684.12, "text": " all of those cool stuff but they cannot do X where X can be fall in love"}, {"start": 684.12, "end": 689.84, "text": " learning from experience doing something really new basically Turing said that"}, {"start": 689.84, "end": 695.0, "text": " all of these are usually a product of a scientific induction went wrong in the"}, {"start": 695.0, "end": 698.76, "text": " sense they just see the current state of affairs they see current computers and"}, {"start": 698.76, "end": 703.24, "text": " they say hey these machines are not capable in general of doing this and"}, {"start": 703.24, "end": 709.52, "text": " they probably will be and as we now know machines can learn from experience that"}, {"start": 709.52, "end": 713.72, "text": " there's a whole field my channel is all about called machine learning where"}, {"start": 713.72, "end": 718.2, "text": " machines are learning indeed from data from experience and also they can do"}, {"start": 718.2, "end": 722.64, "text": " something new so if you if you know about AlphaGo if you know about all of"}, {"start": 722.64, "end": 728.2, "text": " those cool algorithms that DeepMind is developing they did a lot of moves and"}, {"start": 728.2, "end": 732.76, "text": " stuff that other human players the best players in the world could not explain"}, {"start": 732.76, "end": 736.64, "text": " they are doing something new that's for sure. The sixth objection is lady"}, {"start": 736.64, "end": 741.44, "text": " lovelace's objection and it says that computers can never do something new"}, {"start": 741.44, "end": 746.2800000000001, "text": " they can only do stuff that we know how to program them to do so basically"}, {"start": 746.2800000000001, "end": 750.4000000000001, "text": " whatever we know how to do they will be able to do anything else is out of the"}, {"start": 750.4000000000001, "end": 755.36, "text": " scope and this is a really important argument because it later created a the"}, {"start": 755.36, "end": 758.96, "text": " most famous objection to Turing's theory which is known as the Chinese"}, {"start": 758.96, "end": 764.24, "text": " informally known as the Chinese room argument written by John Searle and it"}, {"start": 764.24, "end": 770.5600000000001, "text": " also created a brand new test which is called lovelace's test and it just kind"}, {"start": 770.56, "end": 774.64, "text": " of informally tries to figure out if machines can surprise us if they manage"}, {"start": 774.64, "end": 778.92, "text": " to surprise us the machine passes the test that's informal version to be quite"}, {"start": 778.92, "end": 782.88, "text": " honest Turing had the kind of weak argument on this one he basically said"}, {"start": 782.88, "end": 788.3199999999999, "text": " hey when I sometimes do some syntax error computer does surprise me when I see"}, {"start": 788.3199999999999, "end": 793.0799999999999, "text": " some like strange result and I mean that's not the kind of surprise that we"}, {"start": 793.0799999999999, "end": 796.68, "text": " want to see from a machine not a syntax error every every single developer"}, {"start": 796.68, "end": 800.16, "text": " nowadays experiences those kind of surprises but it's not the same thing"}, {"start": 800.16, "end": 806.16, "text": " we do know of many examples nowadays which indeed truly surprised us as like"}, {"start": 806.16, "end": 811.3199999999999, "text": " as the whole civilization so for example move 37 that AlphaGo did in a match"}, {"start": 811.3199999999999, "end": 816.4, "text": " against Lisa Dole was truly a new thing nobody expected that one so everybody"}, {"start": 816.4, "end": 821.48, "text": " was kind of surprised I'm not quite sure if David Silver and his crew from DeepMind"}, {"start": 821.48, "end": 825.72, "text": " can actually can actually figure out exactly what the machine how the"}, {"start": 825.72, "end": 831.12, "text": " machine came with that like move seventh argument from the continuity of the"}, {"start": 831.12, "end": 836.36, "text": " nervous system and what Turing basically said here is that a digital computer"}, {"start": 836.36, "end": 840.2, "text": " which is which can mimic any discrete state machine could be a good enough"}, {"start": 840.2, "end": 845.9200000000001, "text": " approximation for for our brain eighth the argument from informality of"}, {"start": 845.9200000000001, "end": 851.76, "text": " behavior and this one basically says we can never learn or teach the machine of"}, {"start": 851.76, "end": 855.64, "text": " how to perform in every single situation that may appear in the real world"}, {"start": 855.64, "end": 860.08, "text": " and what Turing said is that we haven't searched long enough if we keep on"}, {"start": 860.08, "end": 863.64, "text": " putting new rules and new rules into the machine it will eventually get better"}, {"start": 863.64, "end": 868.28, "text": " and I don't agree with Turing on this one I'd say I'd say we did solve it in a"}, {"start": 868.28, "end": 873.52, "text": " way but using machine learning but even with ML we still are not quite sure how"}, {"start": 873.52, "end": 878.24, "text": " the machine will react to every single possible input in in a given state it's"}, {"start": 878.24, "end": 882.52, "text": " kind of not the desirable property of a system especially when human lives are"}, {"start": 882.52, "end": 887.1999999999999, "text": " concerned the final argument is the one from extrasensory perception and I kind"}, {"start": 887.1999999999999, "end": 891.4, "text": " of hinted that this one earlier in the video what he basically says here we have"}, {"start": 891.4, "end": 896.6, "text": " a person that has a telepathic ability for example then the judge could give"}, {"start": 896.6, "end": 901.12, "text": " such a test that only the person that has telepathic abilities could answer"}, {"start": 901.12, "end": 906.0799999999999, "text": " correctly whereas the machine can do at best at random right so say I have some"}, {"start": 906.0799999999999, "end": 911.76, "text": " cards in my hands and the guy that has telepathic abilities I guess is 40"}, {"start": 911.76, "end": 917.52, "text": " of those correctly whereas the machine does a 20-25 whatever it's funny to"}, {"start": 917.52, "end": 921.52, "text": " listen to Turing writing about statistical evidence for telepathy"}, {"start": 921.52, "end": 927.08, "text": " where it seems like they did have some evidence that some persons perform"}, {"start": 927.08, "end": 931.36, "text": " better than at random today in 2020 we don't have any any kind of such"}, {"start": 931.36, "end": 936.0, "text": " statistical evidence as far as I know at the very end of the paper he mentions a"}, {"start": 936.0, "end": 941.24, "text": " couple of really interesting ideas about learning so what he suggests is that hey"}, {"start": 941.24, "end": 945.96, "text": " instead of trying and programming like a like a fully fledged human mind into the"}, {"start": 945.96, "end": 950.96, "text": " machine let's try and start with a child machine and basically through education"}, {"start": 950.96, "end": 955.76, "text": " process teach it to become the adult machine that's basically idea of machine"}, {"start": 955.76, "end": 959.4, "text": " learning he compares this process to evolutionary process in the sense that"}, {"start": 959.4, "end": 964.6800000000001, "text": " the initial state of the machine is can be considered as the hereditary material"}, {"start": 964.6800000000001, "end": 969.6800000000001, "text": " education process can be considered as mutations and natural selection is"}, {"start": 969.68, "end": 974.1999999999999, "text": " judging in this example or some automatic way of doing it like objective"}, {"start": 974.1999999999999, "end": 979.52, "text": " or loss function and finally he also pretty much predicted reinforcement"}, {"start": 979.52, "end": 984.2399999999999, "text": " learning in the sense he suggested some kind of reward and punishment methods"}, {"start": 984.2399999999999, "end": 989.92, "text": " for learning and he said these will probably not be the the main way to to"}, {"start": 989.92, "end": 995.12, "text": " learn them to teach a machine but a supplement with that I'll end this brief"}, {"start": 995.12, "end": 999.4, "text": " overview of drinks paper computing machinery and intelligence I hope you"}, {"start": 999.4, "end": 1004.52, "text": " liked it if you did go ahead and subscribe share and like if you like the"}, {"start": 1004.52, "end": 1030.36, "text": " video and see you next time"}] |
Aleksa Gordić - The AI Epiphany | https://www.youtube.com/watch?v=lOR-LncQlk8 | Feed-forward method | Neural Style Transfer #5 | ❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
The 5th video in the NST series! 🎨
You'll learn about:
✔️ fast, feed-forward, CNN-based neural style transfer!
You don't need to read the research paper to be able to play with the code and make awesome stuff! You'll probably need some basic understanding if you want to train the models yourself - but I'll cover that in the next video!
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
✅ GitHub code: https://github.com/gordicaleksa/pytorch-nst-feedforward
✅ Paper (Johnson et al.): https://arxiv.org/pdf/1603.08155.pdf
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⌚️ Timetable:
0:00 - GitHub repo walk-through
0:50 - Setup timelapse (check the repo for details)
1:03 - step1: Download pre-trained models
1:30 - step2: Run the stylization script
2:10 - Stylization script walk-through
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💰 BECOME A PATREON OF THE AI EPIPHANY ❤️
If these videos, GitHub projects, and blogs help you,
consider helping me out by supporting me on Patreon!
The AI Epiphany ► https://www.patreon.com/theaiepiphany
One-time donations: https://www.paypal.com/paypalme/theaiepiphany
Much love! ❤️
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition,
rather than the algebraic and numerical "intuition".
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
👋 CONNECT WITH ME ON SOCIAL
LinkedIn ► https://www.linkedin.com/in/aleksagordic/
Twitter ► https://twitter.com/gordic_aleksa
Instagram ► https://www.instagram.com/aiepiphany/
Facebook ► https://www.facebook.com/aiepiphany/
👨👩👧👦 JOIN OUR DISCORD COMMUNITY:
Discord ► https://discord.gg/peBrCpheKE
📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER:
Substack ► https://aiepiphany.substack.com/
💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS:
GitHub ► https://github.com/gordicaleksa
📚 FOLLOW ME ON MEDIUM:
Medium ► https://gordicaleksa.medium.com/
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
#neuralstyletransfer #deeplearning #ai | Okay, so this video is going to be really nice and short. I went ahead and reconstructed the original Johnson's paper on fast neural style transfer. It's a CNN based approach as opposed to the GATIS' approach which was optimization based. Let's see some results I got using the four pre-trained models I trained beforehand. So in the left column you can see the style images I used and on the right you can see the output from the models I trained. So keep in mind that the the model in the top row is the only one that was fully trained. So it's all like the two epochs of the whole MSCoco dataset. So that's around 83 000 images whereas the three bottom ones still need to be additionally trained. Now I'm just going to do a quick time lapse of the setup because we already did that in the previous coding video. Once you have your con environment configured there are only two more steps to get some results. So the first one is go ahead and run this resource downloader script and it will just go ahead and download the pre-trained models that I've uploaded right here to this URL on Dropbox. You will then unzip them and place them in this folder here models-binaries. So you'll have four pre-trained models on your disposal. And the second step is just go ahead and run this stylization script. It's located here. Just run it with the default parameters and we'll get some results. This is the result we get using the default content image that's located here. Just go to data content images and this one is used by default. So I'm just going to go and show you a little bit about how the stylization script itself works. I'm going to go ahead and stop this one here. Oops and I'm just going to show you where it's actually saved. It saved to the output images directory here. So you'll be able to find it later on in this directory. Okay let's see how the script works. Let me close this one and zoom this in a little bit like this. So what we have here is a couple of default locations. First one is for content images and we already saw that one that's in here. We have output images where the images will get dumped. Bineries where we have our models. And this is just basic error checking here. Basically checking if we only have PyTorch models inside of this binaries directory. And I just create the output directory where we'll dump images. And here are the arguments. These are just the content image that you want to use, that you want to pass through the model. Then we have the, you want to set the width of the output image and it's set to 500 by default here. We have default model that's called Mosaic. And we just wrap all of those parameters and we call the Stylized Static Image function. Let's see what this function actually does. So if we go upside here we can first see, we first want to check out if the device has a GPU or only CPU support. And then we basically just create this image path. We pass it through this function called prepareImage which will kind of add a batch dimension and do some normalization. We'll check the function a bit later. And I just instantiate here the Transform model which is the actual model that performs the stylization. We figure out the state. So we just load the state from a PyTorch model that's in binaries here. We print some metadata and we finally load the weights inside of the the model and we just put it into this evolve mode which is really important if you want to do inference. You want to do this, you want to call this function here. And finally we just wrap this into this context torch no grad which will basically forbid PyTorch to calculate gradients which will, which would kind of be a huge memory overhead. So we call the model on the content image. We just paste it to CPU. We convert it to NumPy here. And because we had the batch dimension that we previously added we got to extract the 0th batch here. And that's the stylized image and we just save the image. That's the image we saw a couple of minutes ago that was dumped here into output images folder. This one. Let's just check out that prepare image we saw in the utils here. So this is how it looks like. It basically just loads the image from the specified path specified by this variable image path. And that will just basically create a NumPy image normalized to 0 to 1 range. And then we'll pretty much add the this normalization transform that PyTorch provides us with. And we'll use ImageNet's mean and standard deviation to normalize our images. After we apply the transformation here we just push the push this image onto the GPU if we have one. And we just add this batch dimension in front because that's because models are always expecting this batch dimension. So pretty simple stuff. In the next video I'll cover training that will be a bit more challenging. We're going to figure out how to use TensorBoard and just debug and visualize our training metrics. So let me just go ahead and try one more image here. So I'm going to use an image called figures and let me set the width here to 350 say. We'll use the same model mosaic. And if I start that one, if I run it, this is what we get. So how good the output image will be actually depends also on the size that you put here in the width. So if I put 550 say here and I run that one it will have different stylization. And it's much nicer if you ask me. In the next video we'll cover training until that time just go ahead and play with this repo. Try and figure out how the things work and that will be a really nice learning experience. If you like the content go ahead and subscribe, like and share the videos if you think they can bring value to somebody else. See you next time! | [{"start": 0.0, "end": 3.92, "text": " Okay, so this video is going to be really nice and short. I went ahead and"}, {"start": 3.92, "end": 8.32, "text": " reconstructed the original Johnson's paper on fast neural style transfer."}, {"start": 8.32, "end": 11.6, "text": " It's a CNN based approach as opposed to the"}, {"start": 11.6, "end": 14.64, "text": " GATIS' approach which was optimization based."}, {"start": 14.64, "end": 18.96, "text": " Let's see some results I got using the four pre-trained models I trained"}, {"start": 18.96, "end": 22.8, "text": " beforehand. So in the left column you can see the style images"}, {"start": 22.8, "end": 27.28, "text": " I used and on the right you can see the output from the models I trained."}, {"start": 27.28, "end": 30.880000000000003, "text": " So keep in mind that the the model in the top row is"}, {"start": 30.880000000000003, "end": 34.08, "text": " the only one that was fully trained. So it's all like"}, {"start": 34.08, "end": 38.24, "text": " the two epochs of the whole MSCoco dataset."}, {"start": 38.24, "end": 42.32, "text": " So that's around 83 000 images whereas the three"}, {"start": 42.32, "end": 45.52, "text": " bottom ones still need to be additionally trained. Now I'm just going"}, {"start": 45.52, "end": 48.08, "text": " to do a quick time lapse of the setup because we"}, {"start": 48.08, "end": 58.32, "text": " already did that in the previous coding video."}, {"start": 62.72, "end": 66.16, "text": " Once you have your con environment configured there are only two more steps"}, {"start": 66.16, "end": 70.4, "text": " to get some results. So the first one is go ahead and run this"}, {"start": 70.4, "end": 75.52, "text": " resource downloader script and it will just go ahead and download the pre-trained"}, {"start": 75.52, "end": 79.28, "text": " models that I've uploaded right here to this"}, {"start": 79.28, "end": 84.56, "text": " URL on Dropbox. You will then unzip them and place them in this folder"}, {"start": 84.56, "end": 88.24, "text": " here models-binaries. So you'll have four pre-trained"}, {"start": 88.24, "end": 91.52, "text": " models on your disposal. And the second step is just go ahead and run this"}, {"start": 91.52, "end": 96.0, "text": " stylization script. It's located here. Just run it with the"}, {"start": 96.0, "end": 98.24, "text": " default parameters and we'll get some results."}, {"start": 98.24, "end": 102.08, "text": " This is the result we get using the default content image that's"}, {"start": 102.08, "end": 108.16, "text": " located here. Just go to data content images and this one is used by"}, {"start": 108.16, "end": 112.88, "text": " default. So I'm just going to go and show you a"}, {"start": 112.88, "end": 115.75999999999999, "text": " little bit about how the stylization script itself works."}, {"start": 115.75999999999999, "end": 119.12, "text": " I'm going to go ahead and stop this one here."}, {"start": 119.12, "end": 124.08, "text": " Oops and I'm just going to show you where it's actually saved. It saved to the"}, {"start": 124.08, "end": 128.16, "text": " output images directory here. So you'll be able to find it later"}, {"start": 128.16, "end": 130.8, "text": " on in this directory. Okay let's see how the"}, {"start": 130.8, "end": 135.52, "text": " script works. Let me close this one and zoom this in a"}, {"start": 135.52, "end": 140.08, "text": " little bit like this. So what we have here is a"}, {"start": 140.08, "end": 144.4, "text": " couple of default locations. First one is for content images and we"}, {"start": 144.4, "end": 148.48000000000002, "text": " already saw that one that's in here. We have output images where the"}, {"start": 148.48000000000002, "end": 154.4, "text": " images will get dumped. Bineries where we have our models."}, {"start": 154.4, "end": 158.48000000000002, "text": " And this is just basic error checking here. Basically checking if"}, {"start": 158.48, "end": 162.16, "text": " we only have PyTorch models inside of this"}, {"start": 162.16, "end": 168.95999999999998, "text": " binaries directory. And I just create the output directory"}, {"start": 168.95999999999998, "end": 172.79999999999998, "text": " where we'll dump images. And here are the arguments. These are"}, {"start": 172.79999999999998, "end": 176.72, "text": " just the content image that you want to use, that you want to pass through the"}, {"start": 176.72, "end": 180.32, "text": " model. Then we have the, you want to set the"}, {"start": 180.32, "end": 184.88, "text": " width of the output image and it's set to 500 by default here."}, {"start": 184.88, "end": 189.35999999999999, "text": " We have default model that's called Mosaic."}, {"start": 189.35999999999999, "end": 193.6, "text": " And we just wrap all of those parameters and we call the"}, {"start": 193.6, "end": 197.44, "text": " Stylized Static Image function. Let's see what this function actually does."}, {"start": 197.44, "end": 201.12, "text": " So if we go upside here we can first see,"}, {"start": 201.12, "end": 204.64, "text": " we first want to check out if the device has a GPU"}, {"start": 204.64, "end": 211.68, "text": " or only CPU support. And then we basically just create this"}, {"start": 211.68, "end": 215.6, "text": " image path. We pass it through this function called"}, {"start": 215.6, "end": 220.88, "text": " prepareImage which will kind of add a batch dimension and do some"}, {"start": 220.88, "end": 224.4, "text": " normalization. We'll check the function a bit later."}, {"start": 224.4, "end": 228.08, "text": " And I just instantiate here the"}, {"start": 228.08, "end": 233.04000000000002, "text": " Transform model which is the actual model that performs the stylization."}, {"start": 233.04000000000002, "end": 240.0, "text": " We figure out the state. So we just load the state from a"}, {"start": 240.0, "end": 247.2, "text": " PyTorch model that's in binaries here. We print some metadata and we finally"}, {"start": 247.2, "end": 251.2, "text": " load the weights inside of the the model and we just put it into this"}, {"start": 251.2, "end": 254.64, "text": " evolve mode which is really important if you want to do inference."}, {"start": 254.64, "end": 258.32, "text": " You want to do this, you want to call this function here. And finally we just"}, {"start": 258.32, "end": 261.92, "text": " wrap this into this context torch no grad"}, {"start": 261.92, "end": 268.56, "text": " which will basically forbid PyTorch to calculate gradients which will, which"}, {"start": 268.56, "end": 273.6, "text": " would kind of be a huge memory overhead. So we call the model on the"}, {"start": 273.6, "end": 278.32, "text": " content image. We just paste it to CPU. We convert it"}, {"start": 278.32, "end": 281.36, "text": " to NumPy here. And because we had the batch dimension"}, {"start": 281.36, "end": 286.96, "text": " that we previously added we got to extract the 0th batch here."}, {"start": 286.96, "end": 290.72, "text": " And that's the stylized image and we just save the image. That's the image we"}, {"start": 290.72, "end": 294.8, "text": " saw a couple of minutes ago that was dumped here into output"}, {"start": 294.8, "end": 298.96000000000004, "text": " images folder. This one. Let's just check out that prepare image we saw in the"}, {"start": 298.96000000000004, "end": 302.40000000000003, "text": " utils here. So this is how it looks like. It basically"}, {"start": 302.40000000000003, "end": 305.2, "text": " just loads the image from the specified path"}, {"start": 305.2, "end": 309.2, "text": " specified by this variable image path. And that will just basically create a"}, {"start": 309.2, "end": 313.68, "text": " NumPy image normalized to 0 to 1 range."}, {"start": 313.68, "end": 320.16, "text": " And then we'll pretty much add the this normalization transform"}, {"start": 320.16, "end": 323.76, "text": " that PyTorch provides us with. And we'll use"}, {"start": 323.76, "end": 327.12, "text": " ImageNet's mean and standard deviation to"}, {"start": 327.12, "end": 331.92, "text": " normalize our images. After we apply the transformation here"}, {"start": 331.92, "end": 339.03999999999996, "text": " we just push the push this image onto the GPU if we have one."}, {"start": 339.03999999999996, "end": 342.08, "text": " And we just add this batch dimension in front"}, {"start": 342.08, "end": 346.24, "text": " because that's because models are always expecting"}, {"start": 346.24, "end": 349.92, "text": " this batch dimension. So pretty simple stuff."}, {"start": 349.92, "end": 354.24, "text": " In the next video I'll cover training that will be a bit more"}, {"start": 354.24, "end": 357.6, "text": " challenging. We're going to figure out how to use TensorBoard and just"}, {"start": 357.6, "end": 362.40000000000003, "text": " debug and visualize our training metrics. So let me just go"}, {"start": 362.40000000000003, "end": 366.48, "text": " ahead and try one more image here. So I'm going to use"}, {"start": 366.48, "end": 371.04, "text": " an image called figures and let me set the"}, {"start": 371.04, "end": 377.04, "text": " width here to 350 say. We'll use the same model mosaic. And if I start that one, if"}, {"start": 377.04, "end": 381.6, "text": " I run it, this is what we get. So how good the"}, {"start": 381.6, "end": 384.88, "text": " output image will be actually depends also on the size that you put here in"}, {"start": 384.88, "end": 391.28000000000003, "text": " the width. So if I put 550 say here and I run that one"}, {"start": 391.28000000000003, "end": 396.72, "text": " it will have different stylization. And it's much nicer if you ask me."}, {"start": 396.72, "end": 400.48, "text": " In the next video we'll cover training until"}, {"start": 400.48, "end": 403.68, "text": " that time just go ahead and play with this repo."}, {"start": 403.68, "end": 407.28000000000003, "text": " Try and figure out how the things work and that will be a really nice"}, {"start": 407.28000000000003, "end": 411.12, "text": " learning experience. If you like the content go ahead and subscribe,"}, {"start": 411.12, "end": 415.12, "text": " like and share the videos if you think they"}, {"start": 415.12, "end": 434.08, "text": " can bring value to somebody else. See you next time!"}] |
Aleksa Gordić - The AI Epiphany | https://www.youtube.com/watch?v=zzp3YCPsgp0 | What is Computer Vision? | The Art of Creating Seeing Machines | ❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
I feel like not too many of you, especially students, know about computer vision. 💻🧿
And it's, in my opinion, one of the most interesting and economically beneficial fields of the 21st century.
From self-driving cars to holograms of the mixed reality world, detecting theft in retail over to creating visual art - all of that is possible thanks to computer vision.
You'll learn about:
✔️ Computer vision field
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⌚️ Timetable:
00:00 Enter the Computer Vision
00:26 Rant
00:42 Early days
01:20 Machine vs Human
02:00 Mixed Reality
05:04 Self-driving cars
07:18 Art creation
08:24 Deepfakes
09:05 lip-reading, surveillance, OCR, computational photography, etc.
10:50 CV is still an open area of research, adversarial examples
11:27 Outro
Note: I make these without the intent of monetizing them and for purely educational purposes. A lot of the materials I used in this video came from other great creators and it'd be hard to credit all of them but I believe I've made it with YouTube's fairness guidelines in mind, if not somebody should warn me I guess hahaha.
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💰 BECOME A PATREON OF THE AI EPIPHANY ❤️
If these videos, GitHub projects, and blogs help you,
consider helping me out by supporting me on Patreon!
The AI Epiphany ► https://www.patreon.com/theaiepiphany
One-time donations: https://www.paypal.com/paypalme/theaiepiphany
Much love! ❤️
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition,
rather than the algebraic and numerical "intuition".
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
👋 CONNECT WITH ME ON SOCIAL
LinkedIn ► https://www.linkedin.com/in/aleksagordic/
Twitter ► https://twitter.com/gordic_aleksa
Instagram ► https://www.instagram.com/aiepiphany/
Facebook ► https://www.facebook.com/aiepiphany/
👨👩👧👦 JOIN OUR DISCORD COMMUNITY:
Discord ► https://discord.gg/peBrCpheKE
📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER:
Substack ► https://aiepiphany.substack.com/
💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS:
GitHub ► https://github.com/gordicaleksa
📚 FOLLOW ME ON MEDIUM:
Medium ► https://gordicaleksa.medium.com/
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
#computervision #mixedreality #selfdrivingcars #deeplearning | Ever heard of computer vision? You know, the usual holograms. Isn't that science fiction? You know, the thing that allows you to play guitar while driving? The tag behind L2L4 self-driving cars? The computer vision that showed us what neural networks see under psychedelics? And the same tag that allows you to transfer awesome visual style to video is for images? For some reason, not many of you know about this field. And everybody knows about 50 shades of JavaScript frameworks. Yeah, I'm a full stack developer. Vue.js, Amber.js, Node.js, Android, iOS. But nobody wants to do computer vision. The early days of computer vision started in 50s. But in early 60s, Larry Roberts, who's considered as the godfather of computer vision, wrote this seminal paper titled Machine Perception of Three Dimensional Solids. Back then, we only played with two examples, the so-called blocks world, still trying to figure out how to extract 3D information from 2D drawings. It's interesting how human intuition fails many times miserably throughout human history. In 1966, now everybody says Marvin Minsky, but actually Seymour Puppard, an MIT guy, gave his students a summer project. Go and solve this computer vision thing in three months. Half a century later, we're still trying to solve the same project. Fast forward to 2020, we're still struggling, but we achieved enormous progress, especially from 2012 onward with the arrival of AlexNet. In some narrow domains, machines are already better than humans. So in 2015, this neural network called ResNet achieved lower error on an image classification challenge than humans. What do you see in this image? Is that a cat? This is all debatable. But in these narrow domains, neural nets are better than humans. Aside from image classification, these models are better in lip reading by far and also in art creation. And those are just a few examples. There are many more. Let's see some cool applications of computer vision. I'll explain the usage of computer vision in mixed reality in the context of Microsoft HoloLens, the holographic computer. I actually had the honor of working as a computer vision developer on the HoloLens 2 project. What's mixed reality, you may ask. In short, mixed reality is a spectrum where on one end we have the physical reality, and on the other end of the spectrum we have full immersion, i.e. virtual reality. Everything in between is where the holograms, like the one you see on the screen, live. The device has a couple of computer vision stacks that makes this magic running. It uses cameras, an depth sensor, and a couple of other sensors to extract geometry and create a 3D map of the world around it. So that grid, also called mesh, that you see in the clip is what I'm referring to. It also understands the semantics of the scene, so whether a particular mesh belongs to a wall, a ceiling, or a floor, etc. This geometry and scene understanding lets you place holograms in the world. But you also want to interact with them, so HoloLens needs to understand you. The device understands your eyes, and eyes are such a powerful way to understand someone's intent. It has iris recognition, which enables you to log in really fast, thanks to the computer vision exploiting the nice properties of biometry. Let's see it in action. It does all of this using two infrared cameras and lots of machine learning. And it has eye tracking, which means it knows exactly where you're looking at, which enables a whole range of awesome applications where you can control stuff with your eyes. And you can see in this clip this auto-scrolling feature that allows you to scroll through a page using only your eyes. HoloLens also understands your hands, and allows for instinctual interactions with holograms. It basically infers the 3D mesh around your hands, and that allows developers to figure out when the hands are touching the holograms, which are also defined by their 3D mesh. Check it out. And you can resize them, rotate them, or do whatever you want to do with them. Take a look at this clip. I can just grab this corn to resize it, or I can rotate it, or move it. Now I've used HoloLens many, many times, and I can tell you it really looks the way you can see it here on the screen. Except actually it's much better, because you can see in 3D, but the field of view is kind of narrower, so holograms can get cropped sometimes. There's one more interesting thing I want to show you here, and that's holportation. And the idea is to transfer the 3D information of an object from one distant location into another. And you will see in this video that this dad gets to talk in real time with his daughter, who's somewhere else. Take a look at it. And you can do some pretty interesting things once you get this 3D information. Take a look. Next up, computer vision in self-driving cars. There is this debate going on in the self-driving cars world whether to use LiDAR or not. And LiDAR is this thing similar to RADAR, except that it works in the visible light part of the EM spectrum. So Elon Musk is against LiDAR, most L4 companies are using LiDAR, but everybody agrees that we need to use cameras, which is kind of obvious, as the roads were built for human eyes, right? Self-driving cars use vision extensively, both inside the car for driver state sensing, as well as outside the car. There's just so much vision going on for tracking the outside world. Panoptic segmentation is one technique that is used really often, and it gives out a lot of information. It basically classifies pixels into certain classes like pedestrian, road, or car, say, but also into specific instance. So you differentiate between different instances of the same class. Here is an example of what the car sees. You can mostly see semantic segmentation visualized here. The car tracks driver-built surfaces, lines, intersections, traffic lights, pedestrians, vehicles. You need depth in order to avoid collision, so a lot of information. Here are all of those things visualized. You also want to track the driver, maybe even passengers, to reduce risk of getting into an accident. You can monitor whether the driver is sleepy, not focusing on the road ahead, or whether passengers are just distracting the driver. Here is an example of tracking whether the driver is focusing on the road. And you can take this even further and know exactly what the driver is doing by performing action recognition on the driver. So check this out. Computer vision enables perception, which is just a part of the self-driving pipeline, albeit really crucial and infrastructural one. Perceptual information is further propagated into the planning and control software of the vehicle. Which actually drives the vehicle, but it does not use computer vision at all. Next up, art. There are two techniques I'd like to mention here. One is deep dreaming, and the other one is neural style transfer. The image you see on the screen is an example of deep dreaming. Deep dreaming exploits what is called the pareidolia effect. You know how sometimes when you look at the moon, you see a human face? That's pareidolia. So how it works is we give the network some input, and whatever the network sees, it will be given some input, and whatever the network sees in the image, we just amplify that part. And I'm obviously simplifying things a little bit here, but that's how it works in a nutshell. If we feed the output with some small geometrical transformations applied, like say crop, back to the input, we get these trippy videos. I've got a whole series on neural style transfer, and in a nutshell, you just combine the content image with the style image using neural network as the combiner, and you get this beautiful result. There is this observation that perception is somewhat connected to our creation itself. If you're able to percept like these neural networks are, then you're able to create art. Think about it. Our creation is something we consider to be a deeply human trait. Now let's jump to deep fakes. Deep fakes, the curse child of computer vision, lets you use your face and your voice, but look like somebody else and sound like someone else. Take a look at this video from MIT's introductory lecture to deep learning, where they imitate Barack Obama. Keep in mind that the voice quality was degraded by design. In fact, this entire speech and video are not real, and were created using deep learning and artificial intelligence. Decently complicated thing to create. It involves various computer vision techniques, such as facial landmarks detection, optical flow calculation, taking occlusions into consideration, etc. Just be aware that these are out there. There are many more awesome computer vision applications, like those that help us understand humans, say lip reading, or monitoring pulse through image, a technique called motion magnification. We automatically select and amplify a narrow band of temporal frequencies around the human heart rate. This one could be used to monitor babies, and that could save lives. Here we extract heart rate measurements of a newborn, and confirm their accuracy by comparing them with readings from the hospital monitor. Surveillance is another big application area, applications such as crowd counting. Now this may sound rebellion, but you can use the related tech to monitor traffic and thus improve the traffic. You can also detect victims drowning in the pool, or you can detect the theft in retail. I also like the fact that computer vision is redefining search. So instead of using textual search, you can search using images and find articles that contain those images, or similar images. Definitely check out Google image search if you haven't. I used it a couple of times to figure out where some image originated in the internet. There are obviously many cool applications I haven't mentioned, like Google Earth, that can be used as a health monitor for our planet, say for tracking deforestation over time. Biometry applications such as understanding iris, which I briefly mentioned in the mixed reality section, fingerprints, face, even the way someone walks can be used as a unique identifier of a person, although with varying levels of success. Extracting text, also called optical character recognition, is really useful. You just take a photo of say a whiteboard, and it just automatically extracts all the text for you. This is actually still a difficult problem in computer vision. And finally things we take for granted. Computational photography, HDR, stabilization, autofocus, all those things that help you capture beautiful photographs. They are ingrained into little level camera software so you don't even know they're there. We talked about all the cool apps, but listen, computer vision is still an open research area, and sometimes algorithms are less intelligent than we'd like them to be. A famous example are adversarial examples, where you just tweak the input image pixels in a clever way, and it totally destroyed the algorithm, making it see airliner where there is a pig in the image. I think we came a long way developing all the cameras, low level and high level vision software, and learning algorithms, but we still need to ingrain a true understanding and cognition into these algorithms. If you found this video useful, consider supporting the channel by subscribing and sharing. I work as a full-time machine learning engineer in Microsoft, and I create these in my free time. So I really appreciate when I get the feedback that somebody finds these useful. Until next time, keep learning. | [{"start": 0.0, "end": 6.4, "text": " Ever heard of computer vision? You know, the usual holograms. Isn't that science fiction?"}, {"start": 9.120000000000001, "end": 14.24, "text": " You know, the thing that allows you to play guitar while driving? The tag behind L2L4 self-driving"}, {"start": 14.24, "end": 18.88, "text": " cars? The computer vision that showed us what neural networks see under psychedelics?"}, {"start": 20.72, "end": 24.96, "text": " And the same tag that allows you to transfer awesome visual style to video is for images?"}, {"start": 24.96, "end": 31.28, "text": " For some reason, not many of you know about this field. And everybody knows about 50 shades of"}, {"start": 31.28, "end": 38.56, "text": " JavaScript frameworks. Yeah, I'm a full stack developer. Vue.js, Amber.js, Node.js, Android,"}, {"start": 38.56, "end": 45.44, "text": " iOS. But nobody wants to do computer vision. The early days of computer vision started in 50s."}, {"start": 45.44, "end": 50.8, "text": " But in early 60s, Larry Roberts, who's considered as the godfather of computer vision,"}, {"start": 50.8, "end": 54.879999999999995, "text": " wrote this seminal paper titled Machine Perception of Three Dimensional Solids."}, {"start": 54.879999999999995, "end": 59.36, "text": " Back then, we only played with two examples, the so-called blocks world, still trying to figure"}, {"start": 59.36, "end": 64.32, "text": " out how to extract 3D information from 2D drawings. It's interesting how human intuition fails many"}, {"start": 64.32, "end": 69.84, "text": " times miserably throughout human history. In 1966, now everybody says Marvin Minsky,"}, {"start": 69.84, "end": 75.12, "text": " but actually Seymour Puppard, an MIT guy, gave his students a summer project. Go and solve this"}, {"start": 75.12, "end": 79.75999999999999, "text": " computer vision thing in three months. Half a century later, we're still trying to solve the"}, {"start": 79.76, "end": 85.44, "text": " same project. Fast forward to 2020, we're still struggling, but we achieved enormous progress,"}, {"start": 85.44, "end": 90.08000000000001, "text": " especially from 2012 onward with the arrival of AlexNet. In some narrow domains, machines are"}, {"start": 90.08000000000001, "end": 95.84, "text": " already better than humans. So in 2015, this neural network called ResNet achieved lower error on an"}, {"start": 95.84, "end": 100.88000000000001, "text": " image classification challenge than humans. What do you see in this image? Is that a cat?"}, {"start": 101.84, "end": 106.24000000000001, "text": " This is all debatable. But in these narrow domains, neural nets are better than humans."}, {"start": 106.24, "end": 113.28, "text": " Aside from image classification, these models are better in lip reading by far and also in"}, {"start": 113.28, "end": 118.24, "text": " art creation. And those are just a few examples. There are many more. Let's see some cool"}, {"start": 118.24, "end": 123.67999999999999, "text": " applications of computer vision. I'll explain the usage of computer vision in mixed reality"}, {"start": 123.67999999999999, "end": 127.67999999999999, "text": " in the context of Microsoft HoloLens, the holographic computer. I actually had the honor"}, {"start": 127.67999999999999, "end": 131.6, "text": " of working as a computer vision developer on the HoloLens 2 project. What's mixed reality,"}, {"start": 131.6, "end": 136.16, "text": " you may ask. In short, mixed reality is a spectrum where on one end we have the physical reality,"}, {"start": 136.16, "end": 142.07999999999998, "text": " and on the other end of the spectrum we have full immersion, i.e. virtual reality. Everything in"}, {"start": 142.07999999999998, "end": 147.2, "text": " between is where the holograms, like the one you see on the screen, live. The device has a couple of"}, {"start": 147.2, "end": 151.6, "text": " computer vision stacks that makes this magic running. It uses cameras, an depth sensor,"}, {"start": 151.6, "end": 156.64, "text": " and a couple of other sensors to extract geometry and create a 3D map of the world around it."}, {"start": 156.64, "end": 162.0, "text": " So that grid, also called mesh, that you see in the clip is what I'm referring to. It also"}, {"start": 162.0, "end": 167.2, "text": " understands the semantics of the scene, so whether a particular mesh belongs to a wall, a ceiling,"}, {"start": 167.2, "end": 172.4, "text": " or a floor, etc. This geometry and scene understanding lets you place holograms in the"}, {"start": 172.4, "end": 177.04, "text": " world. But you also want to interact with them, so HoloLens needs to understand you. The device"}, {"start": 177.04, "end": 181.6, "text": " understands your eyes, and eyes are such a powerful way to understand someone's intent. It has iris"}, {"start": 181.6, "end": 187.28, "text": " recognition, which enables you to log in really fast, thanks to the computer vision exploiting"}, {"start": 187.28, "end": 196.08, "text": " the nice properties of biometry. Let's see it in action. It does all of this using two infrared"}, {"start": 196.08, "end": 201.12, "text": " cameras and lots of machine learning. And it has eye tracking, which means it knows exactly where"}, {"start": 201.12, "end": 206.72, "text": " you're looking at, which enables a whole range of awesome applications where you can control stuff"}, {"start": 206.72, "end": 211.2, "text": " with your eyes. And you can see in this clip this auto-scrolling feature that allows you to scroll"}, {"start": 211.2, "end": 216.24, "text": " through a page using only your eyes."}, {"start": 216.24, "end": 222.32000000000002, "text": " HoloLens also understands your hands, and allows for instinctual interactions with holograms."}, {"start": 222.32000000000002, "end": 227.68, "text": " It basically infers the 3D mesh around your hands, and that allows developers to figure out when the"}, {"start": 227.68, "end": 231.92000000000002, "text": " hands are touching the holograms, which are also defined by their 3D mesh. Check it out."}, {"start": 235.44, "end": 240.08, "text": " And you can resize them, rotate them, or do whatever you want to do with them. Take a look at this clip."}, {"start": 240.08, "end": 244.08, "text": " I can just grab this corn to resize it, or I can rotate it, or move it."}, {"start": 245.12, "end": 248.96, "text": " Now I've used HoloLens many, many times, and I can tell you it really looks the way you can see it"}, {"start": 248.96, "end": 253.76000000000002, "text": " here on the screen. Except actually it's much better, because you can see in 3D, but the field"}, {"start": 253.76000000000002, "end": 257.92, "text": " of view is kind of narrower, so holograms can get cropped sometimes. There's one more interesting"}, {"start": 257.92, "end": 261.44, "text": " thing I want to show you here, and that's holportation. And the idea is to transfer"}, {"start": 261.44, "end": 266.24, "text": " the 3D information of an object from one distant location into another. And you will see in this"}, {"start": 266.24, "end": 271.04, "text": " video that this dad gets to talk in real time with his daughter, who's somewhere else. Take a look at it."}, {"start": 271.04, "end": 287.6, "text": " And you can do some pretty interesting things once you get this 3D information. Take a look."}, {"start": 301.36, "end": 306.48, "text": " Next up, computer vision in self-driving cars. There is this debate going on in the self-driving"}, {"start": 306.48, "end": 312.48, "text": " cars world whether to use LiDAR or not. And LiDAR is this thing similar to RADAR, except that it"}, {"start": 312.48, "end": 319.12, "text": " works in the visible light part of the EM spectrum. So Elon Musk is against LiDAR, most L4 companies"}, {"start": 319.12, "end": 324.16, "text": " are using LiDAR, but everybody agrees that we need to use cameras, which is kind of obvious,"}, {"start": 324.16, "end": 328.64000000000004, "text": " as the roads were built for human eyes, right? Self-driving cars use vision extensively, both"}, {"start": 328.64, "end": 333.44, "text": " inside the car for driver state sensing, as well as outside the car. There's just so much vision"}, {"start": 333.44, "end": 338.4, "text": " going on for tracking the outside world. Panoptic segmentation is one technique that is used really"}, {"start": 338.4, "end": 344.24, "text": " often, and it gives out a lot of information. It basically classifies pixels into certain classes"}, {"start": 344.24, "end": 350.47999999999996, "text": " like pedestrian, road, or car, say, but also into specific instance. So you differentiate between"}, {"start": 350.47999999999996, "end": 354.88, "text": " different instances of the same class. Here is an example of what the car sees. You can mostly see"}, {"start": 354.88, "end": 367.52, "text": " semantic segmentation visualized here. The car tracks driver-built surfaces, lines,"}, {"start": 367.52, "end": 372.56, "text": " intersections, traffic lights, pedestrians, vehicles. You need depth in order to avoid"}, {"start": 372.56, "end": 376.64, "text": " collision, so a lot of information. Here are all of those things visualized."}, {"start": 376.64, "end": 390.0, "text": " You also want to track the driver, maybe even passengers, to reduce risk of getting into an"}, {"start": 390.0, "end": 394.56, "text": " accident. You can monitor whether the driver is sleepy, not focusing on the road ahead,"}, {"start": 394.56, "end": 399.59999999999997, "text": " or whether passengers are just distracting the driver. Here is an example of tracking"}, {"start": 399.59999999999997, "end": 401.36, "text": " whether the driver is focusing on the road."}, {"start": 401.36, "end": 406.16, "text": " And you can take this even further and know exactly what the driver is doing by performing"}, {"start": 406.16, "end": 409.52000000000004, "text": " action recognition on the driver. So check this out."}, {"start": 414.48, "end": 418.72, "text": " Computer vision enables perception, which is just a part of the self-driving pipeline,"}, {"start": 418.72, "end": 423.68, "text": " albeit really crucial and infrastructural one. Perceptual information is further propagated"}, {"start": 423.68, "end": 426.96000000000004, "text": " into the planning and control software of the vehicle."}, {"start": 426.96, "end": 431.59999999999997, "text": " Which actually drives the vehicle, but it does not use computer vision at all."}, {"start": 431.59999999999997, "end": 436.79999999999995, "text": " Next up, art. There are two techniques I'd like to mention here. One is deep dreaming,"}, {"start": 436.79999999999995, "end": 440.56, "text": " and the other one is neural style transfer. The image you see on the screen is an example"}, {"start": 440.56, "end": 444.71999999999997, "text": " of deep dreaming. Deep dreaming exploits what is called the pareidolia effect. You know"}, {"start": 444.71999999999997, "end": 449.35999999999996, "text": " how sometimes when you look at the moon, you see a human face? That's pareidolia. So how"}, {"start": 449.35999999999996, "end": 454.08, "text": " it works is we give the network some input, and whatever the network sees, it will be"}, {"start": 454.08, "end": 459.03999999999996, "text": " given some input, and whatever the network sees in the image, we just amplify that part."}, {"start": 459.68, "end": 463.35999999999996, "text": " And I'm obviously simplifying things a little bit here, but that's how it works in a nutshell."}, {"start": 463.35999999999996, "end": 466.88, "text": " If we feed the output with some small geometrical transformations applied,"}, {"start": 466.88, "end": 471.03999999999996, "text": " like say crop, back to the input, we get these trippy videos."}, {"start": 477.2, "end": 482.0, "text": " I've got a whole series on neural style transfer, and in a nutshell, you just combine the content"}, {"start": 482.0, "end": 487.6, "text": " image with the style image using neural network as the combiner, and you get this beautiful result."}, {"start": 487.6, "end": 492.16, "text": " There is this observation that perception is somewhat connected to our creation itself."}, {"start": 492.16, "end": 497.12, "text": " If you're able to percept like these neural networks are, then you're able to create art."}, {"start": 497.12, "end": 501.6, "text": " Think about it. Our creation is something we consider to be a deeply human trait."}, {"start": 501.6, "end": 507.04, "text": " Now let's jump to deep fakes. Deep fakes, the curse child of computer vision,"}, {"start": 507.04, "end": 512.48, "text": " lets you use your face and your voice, but look like somebody else and sound like someone else."}, {"start": 512.48, "end": 516.64, "text": " Take a look at this video from MIT's introductory lecture to deep learning,"}, {"start": 517.28, "end": 523.12, "text": " where they imitate Barack Obama. Keep in mind that the voice quality was degraded by design."}, {"start": 523.12, "end": 529.12, "text": " In fact, this entire speech and video are not real, and were created using deep learning"}, {"start": 529.12, "end": 532.32, "text": " and artificial intelligence. Decently complicated thing to create."}, {"start": 532.32, "end": 536.72, "text": " It involves various computer vision techniques, such as facial landmarks detection,"}, {"start": 536.72, "end": 542.8000000000001, "text": " optical flow calculation, taking occlusions into consideration, etc. Just be aware that these are"}, {"start": 542.8000000000001, "end": 548.0, "text": " out there. There are many more awesome computer vision applications, like those that help us"}, {"start": 548.0, "end": 555.9200000000001, "text": " understand humans, say lip reading, or monitoring pulse through image, a technique called motion"}, {"start": 555.9200000000001, "end": 560.64, "text": " magnification. We automatically select and amplify a narrow band of temporal frequencies around the"}, {"start": 560.64, "end": 565.52, "text": " human heart rate. This one could be used to monitor babies, and that could save lives."}, {"start": 565.52, "end": 569.68, "text": " Here we extract heart rate measurements of a newborn, and confirm their accuracy by"}, {"start": 569.68, "end": 574.16, "text": " comparing them with readings from the hospital monitor. Surveillance is another big application"}, {"start": 574.16, "end": 581.92, "text": " area, applications such as crowd counting. Now this may sound rebellion, but you can use"}, {"start": 581.92, "end": 587.36, "text": " the related tech to monitor traffic and thus improve the traffic. You can also detect victims"}, {"start": 587.36, "end": 591.92, "text": " drowning in the pool, or you can detect the theft in retail. I also like the fact that computer vision"}, {"start": 591.92, "end": 597.04, "text": " is redefining search. So instead of using textual search, you can search using images and find"}, {"start": 597.04, "end": 601.36, "text": " articles that contain those images, or similar images. Definitely check out Google image search"}, {"start": 601.36, "end": 606.24, "text": " if you haven't. I used it a couple of times to figure out where some image originated in the"}, {"start": 606.24, "end": 610.7199999999999, "text": " internet. There are obviously many cool applications I haven't mentioned, like Google Earth, that can be"}, {"start": 610.7199999999999, "end": 615.76, "text": " used as a health monitor for our planet, say for tracking deforestation over time. Biometry"}, {"start": 615.76, "end": 620.0, "text": " applications such as understanding iris, which I briefly mentioned in the mixed reality section,"}, {"start": 620.0, "end": 625.28, "text": " fingerprints, face, even the way someone walks can be used as a unique identifier of a person,"}, {"start": 625.28, "end": 630.72, "text": " although with varying levels of success. Extracting text, also called optical character recognition,"}, {"start": 630.72, "end": 635.68, "text": " is really useful. You just take a photo of say a whiteboard, and it just automatically extracts"}, {"start": 635.68, "end": 639.92, "text": " all the text for you. This is actually still a difficult problem in computer vision. And finally"}, {"start": 639.92, "end": 645.04, "text": " things we take for granted. Computational photography, HDR, stabilization, autofocus,"}, {"start": 645.04, "end": 649.2, "text": " all those things that help you capture beautiful photographs. They are ingrained into little"}, {"start": 649.2, "end": 654.72, "text": " level camera software so you don't even know they're there. We talked about all the cool apps,"}, {"start": 654.72, "end": 660.08, "text": " but listen, computer vision is still an open research area, and sometimes algorithms are less"}, {"start": 660.08, "end": 664.5600000000001, "text": " intelligent than we'd like them to be. A famous example are adversarial examples, where you just"}, {"start": 664.5600000000001, "end": 670.88, "text": " tweak the input image pixels in a clever way, and it totally destroyed the algorithm, making it see"}, {"start": 670.88, "end": 676.96, "text": " airliner where there is a pig in the image. I think we came a long way developing all the cameras,"}, {"start": 676.96, "end": 683.2800000000001, "text": " low level and high level vision software, and learning algorithms, but we still need to ingrain"}, {"start": 683.2800000000001, "end": 688.24, "text": " a true understanding and cognition into these algorithms. If you found this video useful,"}, {"start": 688.24, "end": 693.76, "text": " consider supporting the channel by subscribing and sharing. I work as a full-time machine learning"}, {"start": 693.76, "end": 699.6, "text": " engineer in Microsoft, and I create these in my free time. So I really appreciate when I get the"}, {"start": 699.6, "end": 710.24, "text": " feedback that somebody finds these useful. Until next time, keep learning."}] |
Aleksa Gordić - The AI Epiphany | https://www.youtube.com/watch?v=RAgY8aIlvkA | Anyone can make deepfakes now! | ❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Deepfakes. The scary player in the game. 🤖
Fake news are about to become an even bigger problem.
So...what to do in a world where a video is not proof anymore?
As with many things, the way to alleviate this is through education i.e. raising the general awareness that they exist (as well as developing deepfake detector neural nets).
You'll learn about:
✔️ Deepfakes
✔️ Ethics behind technology
✔️How to create SIMPLE video and audio deepfakes
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Deefakes for video:
✅ https://github.com/AliaksandrSiarohin/first-order-model
✅ https://colab.research.google.com/github/AliaksandrSiarohin/first-order-model/blob/master/demo.ipynb
✅ https://github.com/gordicaleksa/first-order-model
✅ https://drive.google.com/drive/folders/1kZ1gCnpfU0BnpdU47pLM_TQ6RypDDqgw
💻 command line ►
python --config config/vox-256.yaml --driving_video mydata/driver_videos/04.mp4 --source_image mydata/target_imgs/khal.jpg --checkpoint checkpoints/vox-cpk.pth.tar --relative --adapt_scale
Deepfakes for audio:
✅ https://github.com/CorentinJ/Real-Time-Voice-Cloning
✅ https://github.com/gordicaleksa/Real-Time-Voice-Cloning
✅ https://drive.google.com/file/d/1n1sPXvT34yXFLT47QZA6FIRGrwMeSsZc/view
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⌚️ Timetable:
00:00 - Intro
01:15 - Ethics and fake news consideration
03:18 - First Order Motion Model For Image Animation (setup & play)
05:28 - Real-Time Voice Cloning (setup & play)
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💰 BECOME A PATREON OF THE AI EPIPHANY ❤️
If these videos, GitHub projects, and blogs help you,
consider helping me out by supporting me on Patreon!
The AI Epiphany ► https://www.patreon.com/theaiepiphany
One-time donations: https://www.paypal.com/paypalme/theaiepiphany
Much love! ❤️
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition,
rather than the algebraic and numerical "intuition".
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
👋 CONNECT WITH ME ON SOCIAL
LinkedIn ► https://www.linkedin.com/in/aleksagordic/
Twitter ► https://twitter.com/gordic_aleksa
Instagram ► https://www.instagram.com/aiepiphany/
Facebook ► https://www.facebook.com/aiepiphany/
👨👩👧👦 JOIN OUR DISCORD COMMUNITY:
Discord ► https://discord.gg/peBrCpheKE
📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER:
Substack ► https://aiepiphany.substack.com/
💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS:
GitHub ► https://github.com/gordicaleksa
📚 FOLLOW ME ON MEDIUM:
Medium ► https://gordicaleksa.medium.com/
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
#deepfakes #deeplearning #computervision | Oh, it's live. Let's start with Mr. Hinton. Hey, Jeffy Hinton here. I just wanted to point out that this YouTube channel, The Epiphany, is truly awesome and hand it over to Jan here. Thank you, Jeff. Yeah, I agree I got this Turing Award thingy, but I think this guy deserves it more than I do. Let's see what Joshua has to say. Howdy ho. Yeah, artificial intelligence is getting increasingly important. Boy, you're doing God's work. Step aside, Clamps. This thing is going to be huge. Let's make Serbia great again. Well done, kid. Keep doing this thing, even though I don't understand it. What's your problem, Donald? You're acting like such a snob. Do you know that? Boy, spread the knowledge. Education is really important and crucial. We'll need this against the Russians. My God, I'm just kidding, Vladimir. You know a joke, right? The fuck was that, dude? Kind of cringy, nonetheless. Hope you liked it. Before I show you how to make stuff that we saw just seconds ago, I want you to have a couple of things in mind because this video is primarily educational. One really important thing I want to make sure you understand is that this tech was not developed to create deepfakes. The thing you saw with Trump and Obama, the deepfake voice, that technology is something called multi-speaker text-to-speech. And its use case is, for example, accessibility applications. Imagine somebody who lost their voice, but they either have an old recording of themselves and can thus reconstruct their voice, or they can pick a new one that they like. And on the other hand, the technology that I use to animate Mona Lisa and others is called image animation, and it will help, for example, artists create better animated movies that you and I like to watch, as well as other cool and empowering applications. There's one thing I want you to come up with from this video. That deepfakes are only one, although socially highly desirable application of this technology, and that the tech itself was not created with this in mind. Same thing as with nuclear energy and everything else. You can either use it for good purposes or you can use it for bad purposes. Second, this is dead simple to create, and really basic deepfake technology can be much better. Deepfake videos I showed you use only a single target image, and they animated using the video that you need to create in order to control the target image. You can imagine that it can be improved by using multiple target images, or even better, a target video of the person you want to animate or in this use case, deepfake. Same improvements can be made for audio deepfakes. Third thing, fake news is a really big problem nowadays, and you need to be extra cautious of the information you're ingesting, even videos, as we saw, right? You'll need critical thinking in this brand new world to discern true from false more than ever. Things I'll show you can create truly convincing deepfakes. For that, you'd need some kind of neural voice puppetry. And as such, I feel okay showing you this. Let's start. Okay, so we'll be using this code, first order motion model for image animation. Go ahead and explore the read me a little bit. And I encourage you to go and play with the demo Jupyter notebook. You can find here I'll put the link in the description. And it just works out of the box. I had problems with the original environment file. So I went ahead and forked the code, reverse engineered environment file from Google Colab. And I've added additional wrapper code that you'll find useful. First step, go ahead and clone the code. And you'll end up with this code here. And just go ahead, we'll need to download pre trained model into checkpoints folder here. So go ahead and open this link and just download this box CPK PTH tar file into checkpoints. Okay, once that's done, you can optionally go ahead and download some target images from this link. But you don't have to because I already checked in some dummy data. So this is totally optional. So I assume you have mini conda already installed, just navigate to the top most directory here and do conda and create press enter and that will create the environment, the Python environment. Once that's done, go ahead and activate the environment. So activate the fix and visa name. And you're in and we're pretty much done. Just go ahead and run this command Python demo, not Jupyter demo pi and run it. I'll put the link in the description for the for this code line. Keep in mind that this can take some time depending on the hardware configuration you have. Finally, we get the results. So this is how it looks like a result team before and I've used Trump's video for as a driver video and I've used the cultural goes image as the target image. That's it for deep fix for videos. Make sure to check out this script master combiner, which will help you automate this procedure of animating videos. And I'll also write a follow up medium blog that will explain you how to use your own data. So stay tuned. Next up deep fix for audio. We're going to play with this awesome code called real time voice cloning. It's just an implementation of this paper here. And it's really awesome because you need only five seconds of target audio and you can reconstruct the voice. I went ahead and forked the repo as I had to just update the environment file and add some small code changes inside. So go ahead and just clone the repo. You'll end up with something like this. Similar to before, just navigate to the top most directory here and do a conda and create and that will create a fresh conda environment. There's one more thing we need to do and that's go ahead and download this zip file here and place the content inside of the corresponding directories here encoder, synthesizer and vocoder. Once that's done, you'll have pre trained models inside of each of these. So see it models pre trained PT. Finally, go ahead and activate the environment here. So activate DF voice and and run Python demo toolbox. You'll end up with this screen and it may take some time to load it the first time. Go ahead and click browse. So I went ahead and checked in a snippet of voice from Barack Obama. So go ahead and just load that one here, Obama and open. So you'll end up with some mal spectrogram, some embeddings and you map projections, whatnot. You don't need to worry about those to only use this tool. Now finally, go ahead and write down some text in this text box. Click synthesize and vocode and we'll finally get the result. Hi, plebs. I'm Barack Obama and I used to be the president of the United States of America. And the sound clip will be automatically saved in the file system here. My speech data outputs. You can also of course use other sound clips aside from Barack Obama, of course. Hope you liked it. Share and subscribe, but only if you like the content until next time. | [{"start": 0.0, "end": 12.8, "text": " Oh, it's live. Let's start with Mr. Hinton."}, {"start": 12.8, "end": 22.0, "text": " Hey, Jeffy Hinton here. I just wanted to point out that this YouTube channel, The Epiphany,"}, {"start": 22.0, "end": 25.28, "text": " is truly awesome and hand it over to Jan here."}, {"start": 25.28, "end": 32.56, "text": " Thank you, Jeff. Yeah, I agree I got this Turing Award thingy, but I think this guy deserves it"}, {"start": 32.56, "end": 36.160000000000004, "text": " more than I do. Let's see what Joshua has to say."}, {"start": 37.2, "end": 44.88, "text": " Howdy ho. Yeah, artificial intelligence is getting increasingly important. Boy,"}, {"start": 44.88, "end": 50.8, "text": " you're doing God's work. Step aside, Clamps. This thing is going to be huge. Let's make"}, {"start": 50.8, "end": 56.0, "text": " Serbia great again. Well done, kid. Keep doing this thing, even though I don't understand it."}, {"start": 56.0, "end": 60.4, "text": " What's your problem, Donald? You're acting like such a snob. Do you know that? Boy,"}, {"start": 60.4, "end": 64.64, "text": " spread the knowledge. Education is really important and crucial. We'll need this against the Russians."}, {"start": 66.47999999999999, "end": 69.44, "text": " My God, I'm just kidding, Vladimir. You know a joke, right?"}, {"start": 73.92, "end": 75.28, "text": " The fuck was that, dude?"}, {"start": 75.28, "end": 83.28, "text": " Kind of cringy, nonetheless. Hope you liked it. Before I show you how to make stuff that we saw"}, {"start": 83.28, "end": 88.24000000000001, "text": " just seconds ago, I want you to have a couple of things in mind because this video is primarily"}, {"start": 88.24000000000001, "end": 94.0, "text": " educational. One really important thing I want to make sure you understand is that this tech"}, {"start": 94.0, "end": 98.24000000000001, "text": " was not developed to create deepfakes. The thing you saw with Trump and Obama,"}, {"start": 98.8, "end": 103.76, "text": " the deepfake voice, that technology is something called multi-speaker text-to-speech."}, {"start": 103.76, "end": 108.4, "text": " And its use case is, for example, accessibility applications. Imagine somebody who lost their"}, {"start": 108.4, "end": 113.2, "text": " voice, but they either have an old recording of themselves and can thus reconstruct their voice,"}, {"start": 113.2, "end": 118.0, "text": " or they can pick a new one that they like. And on the other hand, the technology that I use to"}, {"start": 118.0, "end": 123.44, "text": " animate Mona Lisa and others is called image animation, and it will help, for example, artists"}, {"start": 123.44, "end": 129.6, "text": " create better animated movies that you and I like to watch, as well as other cool and empowering"}, {"start": 129.6, "end": 134.64, "text": " applications. There's one thing I want you to come up with from this video. That deepfakes are only"}, {"start": 134.64, "end": 140.48, "text": " one, although socially highly desirable application of this technology, and that the tech itself was"}, {"start": 140.48, "end": 145.04, "text": " not created with this in mind. Same thing as with nuclear energy and everything else. You can either"}, {"start": 145.04, "end": 150.24, "text": " use it for good purposes or you can use it for bad purposes. Second, this is dead simple to create,"}, {"start": 150.24, "end": 155.2, "text": " and really basic deepfake technology can be much better. Deepfake videos I showed you use only a"}, {"start": 155.2, "end": 160.56, "text": " single target image, and they animated using the video that you need to create in order to control"}, {"start": 160.56, "end": 166.23999999999998, "text": " the target image. You can imagine that it can be improved by using multiple target images, or even"}, {"start": 166.23999999999998, "end": 172.0, "text": " better, a target video of the person you want to animate or in this use case, deepfake. Same"}, {"start": 172.0, "end": 176.64, "text": " improvements can be made for audio deepfakes. Third thing, fake news is a really big problem"}, {"start": 176.64, "end": 181.35999999999999, "text": " nowadays, and you need to be extra cautious of the information you're ingesting, even videos,"}, {"start": 181.36, "end": 186.32000000000002, "text": " as we saw, right? You'll need critical thinking in this brand new world to discern true from false"}, {"start": 186.32000000000002, "end": 191.44000000000003, "text": " more than ever. Things I'll show you can create truly convincing deepfakes. For that, you'd need"}, {"start": 191.44000000000003, "end": 197.84, "text": " some kind of neural voice puppetry. And as such, I feel okay showing you this. Let's start. Okay,"}, {"start": 197.84, "end": 204.16000000000003, "text": " so we'll be using this code, first order motion model for image animation. Go ahead and explore"}, {"start": 204.16000000000003, "end": 209.84, "text": " the read me a little bit. And I encourage you to go and play with the demo Jupyter notebook. You"}, {"start": 209.84, "end": 215.76, "text": " can find here I'll put the link in the description. And it just works out of the box. I had problems"}, {"start": 215.76, "end": 221.12, "text": " with the original environment file. So I went ahead and forked the code, reverse engineered"}, {"start": 221.12, "end": 226.64000000000001, "text": " environment file from Google Colab. And I've added additional wrapper code that you'll find useful."}, {"start": 226.64000000000001, "end": 231.92000000000002, "text": " First step, go ahead and clone the code. And you'll end up with this code here. And just go ahead,"}, {"start": 231.92000000000002, "end": 238.0, "text": " we'll need to download pre trained model into checkpoints folder here. So go ahead and open"}, {"start": 238.0, "end": 246.32, "text": " this link and just download this box CPK PTH tar file into checkpoints. Okay, once that's done,"}, {"start": 246.32, "end": 252.32, "text": " you can optionally go ahead and download some target images from this link. But you don't have"}, {"start": 252.32, "end": 257.28, "text": " to because I already checked in some dummy data. So this is totally optional. So I assume you have"}, {"start": 257.28, "end": 265.68, "text": " mini conda already installed, just navigate to the top most directory here and do conda and create"}, {"start": 265.68, "end": 272.48, "text": " press enter and that will create the environment, the Python environment. Once that's done, go ahead"}, {"start": 272.48, "end": 279.52, "text": " and activate the environment. So activate the fix and visa name. And you're in and we're pretty"}, {"start": 279.52, "end": 289.2, "text": " much done. Just go ahead and run this command Python demo, not Jupyter demo pi and run it."}, {"start": 289.76, "end": 294.56, "text": " I'll put the link in the description for the for this code line. Keep in mind that this can"}, {"start": 294.56, "end": 298.88, "text": " take some time depending on the hardware configuration you have. Finally, we get the"}, {"start": 298.88, "end": 306.48, "text": " results. So this is how it looks like a result team before and I've used Trump's video for as"}, {"start": 306.48, "end": 310.72, "text": " a driver video and I've used the cultural goes image as the target image. That's it for deep"}, {"start": 310.72, "end": 316.64, "text": " fix for videos. Make sure to check out this script master combiner, which will help you automate this"}, {"start": 316.64, "end": 321.6, "text": " procedure of animating videos. And I'll also write a follow up medium blog that will explain"}, {"start": 321.6, "end": 328.16, "text": " you how to use your own data. So stay tuned. Next up deep fix for audio. We're going to play with"}, {"start": 328.16, "end": 334.40000000000003, "text": " this awesome code called real time voice cloning. It's just an implementation of this paper here."}, {"start": 334.96000000000004, "end": 339.52000000000004, "text": " And it's really awesome because you need only five seconds of target audio and you can reconstruct"}, {"start": 339.52000000000004, "end": 346.48, "text": " the voice. I went ahead and forked the repo as I had to just update the environment file and add"}, {"start": 346.48, "end": 351.68, "text": " some small code changes inside. So go ahead and just clone the repo. You'll end up with something"}, {"start": 351.68, "end": 358.48, "text": " like this. Similar to before, just navigate to the top most directory here and do a conda and"}, {"start": 359.12, "end": 363.6, "text": " create and that will create a fresh conda environment. There's one more thing we need"}, {"start": 363.6, "end": 369.76, "text": " to do and that's go ahead and download this zip file here and place the content inside of"}, {"start": 369.76, "end": 374.0, "text": " the corresponding directories here encoder, synthesizer and vocoder. Once that's done,"}, {"start": 374.0, "end": 380.16, "text": " you'll have pre trained models inside of each of these. So see it models pre trained PT. Finally,"}, {"start": 380.16, "end": 392.0, "text": " go ahead and activate the environment here. So activate DF voice and and run Python demo toolbox."}, {"start": 392.64, "end": 396.56, "text": " You'll end up with this screen and it may take some time to load it the first time."}, {"start": 396.56, "end": 403.2, "text": " Go ahead and click browse. So I went ahead and checked in a snippet of voice from Barack Obama."}, {"start": 403.2, "end": 408.64, "text": " So go ahead and just load that one here, Obama and open. So you'll end up with some"}, {"start": 408.64, "end": 413.68, "text": " mal spectrogram, some embeddings and you map projections, whatnot. You don't need to worry"}, {"start": 413.68, "end": 418.08, "text": " about those to only use this tool. Now finally, go ahead and write down some text in this text box."}, {"start": 418.08, "end": 423.76, "text": " Click synthesize and vocode and we'll finally get the result. Hi, plebs. I'm Barack Obama and I used"}, {"start": 423.76, "end": 427.68, "text": " to be the president of the United States of America. And the sound clip will be automatically"}, {"start": 427.68, "end": 434.96, "text": " saved in the file system here. My speech data outputs. You can also of course use other sound"}, {"start": 434.96, "end": 442.16, "text": " clips aside from Barack Obama, of course. Hope you liked it. Share and subscribe,"}, {"start": 442.16, "end": 458.0, "text": " but only if you like the content until next time."}] |
Aleksa Gordić - The AI Epiphany | https://www.youtube.com/watch?v=8pp0Oa3t52s | Advanced Theory | Neural Style Transfer #4 | ❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
The 4th video in the neural style transfer series! 🎨
(I promise this will be the longest one 😅)
You'll learn:
✔️ Ideas behind how the NST field came to be
✔️ All the amazing follow-up work
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⌚️ Timetable:
00:35 - The Dawn of CNNs (aka AlexNet)
01:53 - Visualizing and Understanding Conv Nets
02:55 - Understanding Deep Image Representations by Inverting Them
05:00 - Texture Synthesis Using Convolutional Neural Networks
05:55 - The Birth of NST field (Gatys et al.)
07:00 - Feedforward methods (Johnson et al., Ulyanov et al.)
08:50 - Instance Normalization (Ulyanov et al.)
10:35 - Conditional Instance Normalization
11:50 - Controlling Perceptual Factors in NST (Gatys et al.)
14:15 - Demystifying Neural Style Transfer
15:15 - Arbitrary Style Transfer in Real-time with Adaptive IN
16:40 - Universal Style Transfer via Feature Transforms (WCT)
17:50 - Artistic Style Transfer for Videos (Ruder et al.)
19:05 - ReCoNet
19:25 - Going beyond: NST for spherical videos (VR), 3D models...
20:20 - Future challenges of NST field
21:20 - 0.5 million $ AI artwork
[Credits] Music:
https://www.youtube.com/watch?v=J2X5mJ3HDYE [NCS]
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💰 BECOME A PATREON OF THE AI EPIPHANY ❤️
If these videos, GitHub projects, and blogs help you,
consider helping me out by supporting me on Patreon!
The AI Epiphany ► https://www.patreon.com/theaiepiphany
One-time donations: https://www.paypal.com/paypalme/theaiepiphany
Much love! ❤️
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition,
rather than the algebraic and numerical "intuition".
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
👋 CONNECT WITH ME ON SOCIAL
LinkedIn ► https://www.linkedin.com/in/aleksagordic/
Twitter ► https://twitter.com/gordic_aleksa
Instagram ► https://www.instagram.com/aiepiphany/
Facebook ► https://www.facebook.com/aiepiphany/
👨👩👧👦 JOIN OUR DISCORD COMMUNITY:
Discord ► https://discord.gg/peBrCpheKE
📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER:
Substack ► https://aiepiphany.substack.com/
💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS:
GitHub ► https://github.com/gordicaleksa
📚 FOLLOW ME ON MEDIUM:
Medium ► https://gordicaleksa.medium.com/
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
#neuralstyletransfer #deeplearning #ai | In the last couple of videos we saw the basic theory of neural style transfer, we saw my implementation of the seminal work by Gatiss and his colleagues, and now we're going to put it all in the whole MSTL with them in a broader perspective and to see how it all came to be, as well as all the follow-up work that came afterwards, because it basically opened up a whole new research direction. So in this video we're going to cover the advanced neural style transfer theory, and the story starts in 2012. So there was this now already famous image genetic classification challenge, and in 2012 a new method was proposed using convolutional neural networks with the architecture called ILXNet, and it basically smashed all of the competitors, both in 2012 as well as normally in all the previous years. So for example in 2011, Sanchez and Peronin, they advised this method using really heavy mathematics, Fisher vectors and whatnot, and this method was twice as bad as ILXNet, even though it used so much math. You can see the quantum leap in 2012, where the error on the classification challenge went from 25.8 all the way down to 16.4. And we're going to focus on these three nets in this video. So ILXNet basically just sparked such a huge interest in CNNs, and ZFNet and VGG basically just explored the combinatorial space that was already set by the ILXNet. And people were generally interested in how CNN architecture worked. So there was this awesome paper in 2013 titled Visualizing and Understanding ComNets by the same guys who advised ZFNet. So they created these really cool visualizations that helped us better see which image structures tend to trigger certain feature maps. And you can see here on the screen that in the top left corner, this feature map gets triggered by when it gets dark faces in the input images. And on the top right, you can see that this feature map really likes some round objects, and bottom right, you can see that this feature map particularly likes spiral objects. VGG came afterwards, it just pretty much improved upon ILXNet and ZFNet by exploring the depth and the size of the kernel, the comkernel. And we still didn't quite understand how these deep codes work, or better said, what they've learned. So in 2014, this seminal work came along titled Understanding Deep Image Representations by Inverting Them. It was the first paper to propose this method of reconstructing input image from deep code, from feature maps, something we already know from previous videos. And let me just read it again. So we just do the optimization in the image space on a noise image. And you can see com1, for example, on the top left image, that we get a really detailed reconstruction if we try and invert the feature maps from those shallow layers. But if you go into deeper layers of the net and try to invert those codes, we get something like com4, that's an image underneath. And it's like more abstract. It still keeps the semantics of the image, but like concrete details are getting lost. This work was pretty much inception point for the creation of the Deep Dream algorithm by Google guys. And if you're not already familiar with it, Deep Dream just gives you all of these psychedelic-like images by exploiting what is called as Pareidolia effect. So what it does on a higher level is this. Whatever the network sees, like in certain feature maps, it just says, hey, whatever you see, give me more of it. The implementation is as simple as just maximizing the feature response at a certain layer by doing, say, gradient ascent and not gradient descent. And that's equivalent to saying, give me more of what you see. But more important for our story, it was the first main ingredient for the inception of neural style transfer algorithm. The second main ingredient also came from the creator of NST, Leon Gatiss, in the work titled Texture Synthesis Using ComNet. And here he just exploited the rich feature representation of the VGG network to create these awesome textures that you can see on the screen. And this is basically the same thing we did in the last video. It is important to appreciate that this work also did not come out of the blue. So the conceptual framework of building up some summary statistics over certain filter responses was already in place. So, for example, in Portilla and Simoncelli's work, instead of using feature filter responses of VGG net, they used the filter responses of a linear filter bank. And instead of using Grammatrix that captures the correlations between feature maps, they use a carefully chosen set of summary statistics. And finally, combining the previous work of reconstructing images from deep codes, which basically gives us the content portion of the stylized image and combining that with Gatiss' texture synthesis work, where conceptually transferring style is equivalent to transferring texture, so this part gives us the style portion of the stylized image. We finally get the final algorithm. It's just interesting how connecting a couple of dots created a surge of research in this new direction. Lots of follow-up work came after the original algorithm was advised back in 2015. And what is interesting is that there is this interesting relationship that ties all of these algorithms that came afterwards. And that's this three-way trade-off between speed, quality, and flexibility in the number of styles that the algorithm can produce. And that will become clearer what that means a bit later in the video. The original algorithm was pretty high quality, infinite flexibility in the sense you can transfer any style, but really slow. So let's see how we can improve the speed portion. So the main idea is this. Instead of using the optimization algorithm, let's just pass in the image, do a feed-forward pass, and get a stylized image out. And there were basically two independent papers that implemented this idea back in March 2016, and those were by Johnson and by Iuliano. And I'll show the Johnson's method here because it's conceptually simpler, a bit higher quality, but a bit lower speed. So the method goes like this. So we're optimizing this image transform net's weights and not the pixels in the image space. The loss is the same as in Gatiss work, and it gets defined by the deep net. And I always find that really interesting. And it gets trained on the MSCoco dataset. So we iterate through images in the dataset and the style loss is fixed, but the content loss is specific for every single image in the dataset. And by doing so, the net learns to minimize the style on arbitrary input image. Let's see how it ranks against the three-way tradeoff. So it's still the fastest implementation out there. It's got the lowest possible flexibility. It supports only one style. And the quality, you can see the graphs here and let's focus on the leftmost graph because it's just for the lowest res input image. And in the intersection of the blue and green curves, that defines the place where the loss is the same for the two methods, for the Gatiss and Johnson's methods. And we can see that happens around 80th iteration of LBFGS optimizer, which means that the quality of this method is the same as Gatiss after 80 LBFGS iterations. Now, there's a reason I mentioned Ulyanov here. He basically unlocked the quality and flexibility for these FID4 methods by introducing the concept of instance normalization. Instance normalization is really similar to batch normalization, which was advised a year and a half before by Christian Zagady and Sergey Iofi from Google. And the only difference is, let's say, the space over which you calculate the statistics. So whereas batch normalization use a particular feature map from every single training example, the instance normalization just uses a single feature map. And you can see the image here where the spatial dimensions of the feature map, the H and W, are collapsed in a single dimension. And N is the mini batch size that the instance normalization is using just a single training example to figure out those statistics. And let me just reiterate it. When I say statistics, I mean finding the mean and variance of the distribution and using those to normalize the distribution, making it univariance and zero mean. And later applying those FN params, those betas and gammas to just keep the original representation power of the network. Now, if you've never heard about better realization, this will probably sound like rubbish to you. And I'd usually suggest reading the original paper. But this time the paper is really vague and the visualizations are really bad. So I just suggest using either medium, some either medium blog or towards data science blog. And I'll link some of those in the description. Let's see these normalization layers in action. So if you apply those in the generated network, we get these results. And in the bottom row, you can see that the institutionalization achieves greater quality. And I already mentioned that institutionalization unlock greater quality and bigger flexibility. Also, the first paper to exploit greater flexibility was this conditional institutionalization paper. And they achieved 32 styles where there wasn't a hard limit. It was just that the number of parameters grows linearly if you want to add more styles. The main idea goes like this. So we do the same thing as an institutionalization and that's normalized distribution, making it univariance and zero mean. And then instead of using a single pair of betas and gammas as an institutionalization, every single style has this pair associated with it. And this simple idea enables us to create multiple stylized images using multiple styles. And you can see here that interpolating those different styles, we can get a whole like continuous space of new stylized images. And it's really surprising that using only two single parameters, we can define a completely new style. So we've seen some really high quality methods like the original gages method. We've seen some really fast methods like Johnson's method. And we've seen some semi-flexible methods like these this conditional institutionalization. So what else do we want from our NST algorithm? And the answer is control. You usually don't have the control of what the network or the algorithm outputs. And you want to control stuff like space in the sense we slightly apply to which portion of the which region of the image. And then you want to control whether you take the color from the count image or take it from the style image. And you want to also have control over which brush strokes to use on the coarse scale and which ones to use on the fine scale. So let's take a look at the spatial control a bit deeper. So the idea goes like this. Let's take the sky region of the style image, which is defined by the black pixels of its corresponding segmentation mask. You can see it on the top right corner of the image and apply to the sky region of the content image, which is defined by the black pixels of its corresponding segmentation mask. And this time, let's take this whole style image and apply to the non-sky region of the content image, which is defined by the white pixels of its corresponding segmentation mask. This mixing of styles doesn't really happen in the image space by just combining those images with the segmentation mask. It happens in the feature space where they use some morphological operators such as erosion to get those nicely blended together. And when it comes to color control, first, why? Well, sometimes you just get an output image which you don't like, like this one. Now for the how portion. Well, one method is to do this. You take the content and style images, you transform them into some color space where the color information and the intensity information is separable. And what you do is you take the luminance components of style and content images, you do the style transfer, and then you take the color channels from the content image and just concatenate those with the output and you get the final image. And that's the one you see under D. Now, controlling scale is really simple. What you do is you take a fine scale brushstrokes from one painting, you combine those with a core scale angular geometric shapes from another style image, and you produce a new style image. And then you just use that one in NSD in a classical NSD procedure to get the image under E. And just be aware that this is something useful to have enough for the fun part. So up until now, we consider those grand matrices to be some kind of unnatural law, like we had to match those in order to transfer style. And as this paper shows, demystifying neural style transfer, matching those grand matrices is nothing but like minimizing this maximum mean distribution with a polynomial kernel. That means we can use other kernels like linear or Gaussian kernel to achieve style transfer. And as it says right here, this reveals that neural style transfer is intrinsically a process of distribution alignment of the neural activations between images, which means we basically just need to align those distributions in order to transfer style. And there are various ways to do that. And one important that will be important for us is this batch normalization statistics. And we already saw a hint for that in the conditional instance normalization paper. Now, this work took it even further and achieved infinite flexibility. I can transfer any style possible and it does that the following way. So you take the image, you pass it through the feed forward map and you take a specific feature map. You just normalize it by finding its mean and variance. And then you do the same thing for the style image. You find the same feature map. You find the mean and the variance and you just take those mean and variance parameters and apply them to the content feature map. You pass it through the decoder and you get a stylized image out. So no learnable parameters this time. You just pass two single parameters and you achieve style transfer. Let's see it once more on the screen. So you take the content feature map, you normalize it by finding its mean and variance. And then you find the mean and variance for the style image and you just reapply to the content image and you get a stylized image. Now, the good thing is that those affine parameters, those betas and gammas don't need to be learned anymore, such as in batch normalization. Also in instance normalization, we had to learn those and in conditional instance normalization. Here, they are not a learnable parameters. But the bad thing is that the coder still has to be trained. So you need a finite set of style images upon which to train this decoder. And that means it won't be it won't perform as good for unseen style images. The follow up work fixed this problem, achieving truly infinite flexibility in the sense of number of styles it can apply, although it had you had to sacrifice the quality a little bit as per three way trade off. And the method works like this. So you just train a simple image reconstruction or the encoder and you insert this WCT block. And let's see what what it does. So it does the whitening on the content features by just by just basically figuring out the composition of those content features, applying a couple of linear transformations and ending up with a grand matrix that's uncorrelated, meaning only the values on the diagonal have once and everything else is zero. Just don't do all maths here. Just try and follow along. And then you just find the same I can decomposition. But this time for style features apply a couple of linear transformations and you end up with content features that have the same same matrix as the style image, which is something we always did, but this time without any learning, any training. So we saw the evolution of methods for static images and in parallel people were advising new methods for videos. And the only additional problem that these methods have to solve is how to keep the temporal coherence between frames. And as you can see here, we've got some original frames in the style image. And if you just do a dummy like applying the NST per per single frame, there's going to be a lot of flickering, a lot of inconsistency between different frames because every single time you run it, the image gets a different local optimum. Whereas when you apply the with temporal constraint, you can see that the images are much more smoother and consistent between frames. The way we achieve this temporal consistency is the following. So we take this green frame, the previous frame, and we forward warp it using the optical flow information. And we take this red frame, the next frame, and we penalize it for deviating from the green frame on those locations, on those pixels where the optical flow was stable. There was no disocclusion happening. This method used a slow optimization procedure. So needless to say, it was painfully slow. And the follow up work just did the same thing as for static images. It just transferred this problem into a feed forward network. And I've actually used this very same model in the beginning of the series to create the very first clip. It's called Reconet. And this is the clip you created. So in the beginning, it's temporal inconsistent and it becomes temporally consistent and smooth. So we saw a neural style transfer for static images, we saw for videos. And the thing with this NST field is that people are trying to apply it everywhere. So we've got NST for spherical images and videos that can be used in VR. There is this concept of transferring style to 3D models where the artist just needs to paint this sphere in the top right corner. And it gets transferred to the 3D model. There is this photorealistic neural style transfer where you basically only transfer the color information onto the content image. There is also a neural style transfer for audio and style aware content loss, etc., etc. So it would be really valuable to share how this whole field just evolved, how people were trying to connect various dots and build upon each other's work. And that's just how research looks like. The field is nowhere near having all problems sorted out. There are still a lot of challenges out there. One of those is static evaluation. So there is still not there doesn't exist some numerical method which can help us compare different NST methods and say, hey, this is better. This one is worse. We're still using those side by side subjective visual comparisons and different user studies to figure out which method is better. Also, there is no standard benchmark image set, meaning everybody is using their own style images. Everybody is using their own content images. And it's kind of hard to just wrap your head around and compare different methods. In other challenges, this representation disentangling. So we saw some efforts like the controlling those perceptual factors like scale, like space and color. It would be really nice if we had some latent space representation where we could just tweak certain dimension and get all of those perceptual factors changing the way we want them to change. And I want to wrap this up with a fun fact. And do you see the image on the right there? Could you guess it was it was sold in an auction? Could you guess the price tag? It was almost half a million dollars. And the thing is, it wasn't even created by neural style transfer algorithm. It was created by GANs. You can see the equation down there. But it's just so interesting that we are living in an age where there is this interesting dynamics going on between art and tech. And I think it's really a great time to be alive. So I really hope you found this video valuable. If you're new to my channel, consider subscribing and sharing. And there's awesome new content coming up. So stay tuned and see in the next video. | [{"start": 0.0, "end": 12.0, "text": " In the last couple of videos we saw the basic theory of neural style transfer, we saw my implementation of the seminal work by Gatiss and his colleagues,"}, {"start": 12.0, "end": 20.0, "text": " and now we're going to put it all in the whole MSTL with them in a broader perspective and to see how it all came to be,"}, {"start": 20.0, "end": 28.0, "text": " as well as all the follow-up work that came afterwards, because it basically opened up a whole new research direction."}, {"start": 28.0, "end": 37.0, "text": " So in this video we're going to cover the advanced neural style transfer theory, and the story starts in 2012."}, {"start": 37.0, "end": 47.0, "text": " So there was this now already famous image genetic classification challenge, and in 2012 a new method was proposed using convolutional neural networks"}, {"start": 47.0, "end": 57.0, "text": " with the architecture called ILXNet, and it basically smashed all of the competitors, both in 2012 as well as normally in all the previous years."}, {"start": 57.0, "end": 67.0, "text": " So for example in 2011, Sanchez and Peronin, they advised this method using really heavy mathematics, Fisher vectors and whatnot,"}, {"start": 67.0, "end": 77.0, "text": " and this method was twice as bad as ILXNet, even though it used so much math."}, {"start": 77.0, "end": 87.0, "text": " You can see the quantum leap in 2012, where the error on the classification challenge went from 25.8 all the way down to 16.4."}, {"start": 87.0, "end": 92.0, "text": " And we're going to focus on these three nets in this video."}, {"start": 92.0, "end": 105.0, "text": " So ILXNet basically just sparked such a huge interest in CNNs, and ZFNet and VGG basically just explored the combinatorial space that was already set by the ILXNet."}, {"start": 105.0, "end": 112.0, "text": " And people were generally interested in how CNN architecture worked."}, {"start": 112.0, "end": 123.0, "text": " So there was this awesome paper in 2013 titled Visualizing and Understanding ComNets by the same guys who advised ZFNet."}, {"start": 123.0, "end": 133.0, "text": " So they created these really cool visualizations that helped us better see which image structures tend to trigger certain feature maps."}, {"start": 133.0, "end": 147.0, "text": " And you can see here on the screen that in the top left corner, this feature map gets triggered by when it gets dark faces in the input images."}, {"start": 147.0, "end": 160.0, "text": " And on the top right, you can see that this feature map really likes some round objects, and bottom right, you can see that this feature map particularly likes spiral objects."}, {"start": 160.0, "end": 175.0, "text": " VGG came afterwards, it just pretty much improved upon ILXNet and ZFNet by exploring the depth and the size of the kernel, the comkernel."}, {"start": 175.0, "end": 184.0, "text": " And we still didn't quite understand how these deep codes work, or better said, what they've learned."}, {"start": 184.0, "end": 194.0, "text": " So in 2014, this seminal work came along titled Understanding Deep Image Representations by Inverting Them."}, {"start": 194.0, "end": 205.0, "text": " It was the first paper to propose this method of reconstructing input image from deep code, from feature maps, something we already know from previous videos."}, {"start": 205.0, "end": 212.0, "text": " And let me just read it again. So we just do the optimization in the image space on a noise image."}, {"start": 212.0, "end": 229.0, "text": " And you can see com1, for example, on the top left image, that we get a really detailed reconstruction if we try and invert the feature maps from those shallow layers."}, {"start": 229.0, "end": 237.0, "text": " But if you go into deeper layers of the net and try to invert those codes, we get something like com4, that's an image underneath."}, {"start": 237.0, "end": 246.0, "text": " And it's like more abstract. It still keeps the semantics of the image, but like concrete details are getting lost."}, {"start": 246.0, "end": 252.0, "text": " This work was pretty much inception point for the creation of the Deep Dream algorithm by Google guys."}, {"start": 252.0, "end": 263.0, "text": " And if you're not already familiar with it, Deep Dream just gives you all of these psychedelic-like images by exploiting what is called as Pareidolia effect."}, {"start": 263.0, "end": 273.0, "text": " So what it does on a higher level is this. Whatever the network sees, like in certain feature maps, it just says, hey, whatever you see, give me more of it."}, {"start": 273.0, "end": 284.0, "text": " The implementation is as simple as just maximizing the feature response at a certain layer by doing, say, gradient ascent and not gradient descent."}, {"start": 284.0, "end": 288.0, "text": " And that's equivalent to saying, give me more of what you see."}, {"start": 288.0, "end": 297.0, "text": " But more important for our story, it was the first main ingredient for the inception of neural style transfer algorithm."}, {"start": 297.0, "end": 305.0, "text": " The second main ingredient also came from the creator of NST, Leon Gatiss, in the work titled Texture Synthesis Using ComNet."}, {"start": 305.0, "end": 316.0, "text": " And here he just exploited the rich feature representation of the VGG network to create these awesome textures that you can see on the screen."}, {"start": 316.0, "end": 319.0, "text": " And this is basically the same thing we did in the last video."}, {"start": 319.0, "end": 323.0, "text": " It is important to appreciate that this work also did not come out of the blue."}, {"start": 323.0, "end": 333.0, "text": " So the conceptual framework of building up some summary statistics over certain filter responses was already in place."}, {"start": 333.0, "end": 344.0, "text": " So, for example, in Portilla and Simoncelli's work, instead of using feature filter responses of VGG net, they used the filter responses of a linear filter bank."}, {"start": 344.0, "end": 352.0, "text": " And instead of using Grammatrix that captures the correlations between feature maps, they use a carefully chosen set of summary statistics."}, {"start": 352.0, "end": 367.0, "text": " And finally, combining the previous work of reconstructing images from deep codes, which basically gives us the content portion of the stylized image and combining that with Gatiss' texture synthesis work,"}, {"start": 367.0, "end": 376.0, "text": " where conceptually transferring style is equivalent to transferring texture, so this part gives us the style portion of the stylized image."}, {"start": 376.0, "end": 379.0, "text": " We finally get the final algorithm."}, {"start": 379.0, "end": 386.0, "text": " It's just interesting how connecting a couple of dots created a surge of research in this new direction."}, {"start": 386.0, "end": 392.0, "text": " Lots of follow-up work came after the original algorithm was advised back in 2015."}, {"start": 392.0, "end": 399.0, "text": " And what is interesting is that there is this interesting relationship that ties all of these algorithms that came afterwards."}, {"start": 399.0, "end": 408.0, "text": " And that's this three-way trade-off between speed, quality, and flexibility in the number of styles that the algorithm can produce."}, {"start": 408.0, "end": 412.0, "text": " And that will become clearer what that means a bit later in the video."}, {"start": 412.0, "end": 420.0, "text": " The original algorithm was pretty high quality, infinite flexibility in the sense you can transfer any style, but really slow."}, {"start": 420.0, "end": 423.0, "text": " So let's see how we can improve the speed portion."}, {"start": 423.0, "end": 432.0, "text": " So the main idea is this. Instead of using the optimization algorithm, let's just pass in the image, do a feed-forward pass, and get a stylized image out."}, {"start": 432.0, "end": 441.0, "text": " And there were basically two independent papers that implemented this idea back in March 2016, and those were by Johnson and by Iuliano."}, {"start": 441.0, "end": 448.0, "text": " And I'll show the Johnson's method here because it's conceptually simpler, a bit higher quality, but a bit lower speed."}, {"start": 448.0, "end": 457.0, "text": " So the method goes like this. So we're optimizing this image transform net's weights and not the pixels in the image space."}, {"start": 457.0, "end": 462.0, "text": " The loss is the same as in Gatiss work, and it gets defined by the deep net."}, {"start": 462.0, "end": 468.0, "text": " And I always find that really interesting. And it gets trained on the MSCoco dataset."}, {"start": 468.0, "end": 479.0, "text": " So we iterate through images in the dataset and the style loss is fixed, but the content loss is specific for every single image in the dataset."}, {"start": 479.0, "end": 484.0, "text": " And by doing so, the net learns to minimize the style on arbitrary input image."}, {"start": 484.0, "end": 490.0, "text": " Let's see how it ranks against the three-way tradeoff. So it's still the fastest implementation out there."}, {"start": 490.0, "end": 494.0, "text": " It's got the lowest possible flexibility. It supports only one style."}, {"start": 494.0, "end": 502.0, "text": " And the quality, you can see the graphs here and let's focus on the leftmost graph because it's just for the lowest res input image."}, {"start": 502.0, "end": 511.0, "text": " And in the intersection of the blue and green curves, that defines the place where the loss is the same for the two methods, for the Gatiss and Johnson's methods."}, {"start": 511.0, "end": 522.0, "text": " And we can see that happens around 80th iteration of LBFGS optimizer, which means that the quality of this method is the same as Gatiss after 80 LBFGS iterations."}, {"start": 522.0, "end": 533.0, "text": " Now, there's a reason I mentioned Ulyanov here. He basically unlocked the quality and flexibility for these FID4 methods by introducing the concept of instance normalization."}, {"start": 533.0, "end": 542.0, "text": " Instance normalization is really similar to batch normalization, which was advised a year and a half before by Christian Zagady and Sergey Iofi from Google."}, {"start": 542.0, "end": 547.0, "text": " And the only difference is, let's say, the space over which you calculate the statistics."}, {"start": 547.0, "end": 556.0, "text": " So whereas batch normalization use a particular feature map from every single training example, the instance normalization just uses a single feature map."}, {"start": 556.0, "end": 564.0, "text": " And you can see the image here where the spatial dimensions of the feature map, the H and W, are collapsed in a single dimension."}, {"start": 564.0, "end": 572.0, "text": " And N is the mini batch size that the instance normalization is using just a single training example to figure out those statistics."}, {"start": 572.0, "end": 585.0, "text": " And let me just reiterate it. When I say statistics, I mean finding the mean and variance of the distribution and using those to normalize the distribution, making it univariance and zero mean."}, {"start": 585.0, "end": 592.0, "text": " And later applying those FN params, those betas and gammas to just keep the original representation power of the network."}, {"start": 592.0, "end": 602.0, "text": " Now, if you've never heard about better realization, this will probably sound like rubbish to you. And I'd usually suggest reading the original paper."}, {"start": 602.0, "end": 606.0, "text": " But this time the paper is really vague and the visualizations are really bad."}, {"start": 606.0, "end": 613.0, "text": " So I just suggest using either medium, some either medium blog or towards data science blog."}, {"start": 613.0, "end": 617.0, "text": " And I'll link some of those in the description. Let's see these normalization layers in action."}, {"start": 617.0, "end": 622.0, "text": " So if you apply those in the generated network, we get these results."}, {"start": 622.0, "end": 627.0, "text": " And in the bottom row, you can see that the institutionalization achieves greater quality."}, {"start": 627.0, "end": 633.0, "text": " And I already mentioned that institutionalization unlock greater quality and bigger flexibility."}, {"start": 633.0, "end": 642.0, "text": " Also, the first paper to exploit greater flexibility was this conditional institutionalization paper."}, {"start": 642.0, "end": 647.0, "text": " And they achieved 32 styles where there wasn't a hard limit."}, {"start": 647.0, "end": 651.0, "text": " It was just that the number of parameters grows linearly if you want to add more styles."}, {"start": 651.0, "end": 661.0, "text": " The main idea goes like this. So we do the same thing as an institutionalization and that's normalized distribution, making it univariance and zero mean."}, {"start": 661.0, "end": 672.0, "text": " And then instead of using a single pair of betas and gammas as an institutionalization, every single style has this pair associated with it."}, {"start": 672.0, "end": 679.0, "text": " And this simple idea enables us to create multiple stylized images using multiple styles."}, {"start": 679.0, "end": 687.0, "text": " And you can see here that interpolating those different styles, we can get a whole like continuous space of new stylized images."}, {"start": 687.0, "end": 694.0, "text": " And it's really surprising that using only two single parameters, we can define a completely new style."}, {"start": 694.0, "end": 698.0, "text": " So we've seen some really high quality methods like the original gages method."}, {"start": 698.0, "end": 701.0, "text": " We've seen some really fast methods like Johnson's method."}, {"start": 701.0, "end": 707.0, "text": " And we've seen some semi-flexible methods like these this conditional institutionalization."}, {"start": 707.0, "end": 712.0, "text": " So what else do we want from our NST algorithm? And the answer is control."}, {"start": 712.0, "end": 717.0, "text": " You usually don't have the control of what the network or the algorithm outputs."}, {"start": 717.0, "end": 726.0, "text": " And you want to control stuff like space in the sense we slightly apply to which portion of the which region of the image."}, {"start": 726.0, "end": 732.0, "text": " And then you want to control whether you take the color from the count image or take it from the style image."}, {"start": 732.0, "end": 740.0, "text": " And you want to also have control over which brush strokes to use on the coarse scale and which ones to use on the fine scale."}, {"start": 740.0, "end": 744.0, "text": " So let's take a look at the spatial control a bit deeper."}, {"start": 744.0, "end": 753.0, "text": " So the idea goes like this. Let's take the sky region of the style image, which is defined by the black pixels of its corresponding segmentation mask."}, {"start": 753.0, "end": 763.0, "text": " You can see it on the top right corner of the image and apply to the sky region of the content image, which is defined by the black pixels of its corresponding segmentation mask."}, {"start": 763.0, "end": 774.0, "text": " And this time, let's take this whole style image and apply to the non-sky region of the content image, which is defined by the white pixels of its corresponding segmentation mask."}, {"start": 774.0, "end": 781.0, "text": " This mixing of styles doesn't really happen in the image space by just combining those images with the segmentation mask."}, {"start": 781.0, "end": 789.0, "text": " It happens in the feature space where they use some morphological operators such as erosion to get those nicely blended together."}, {"start": 789.0, "end": 798.0, "text": " And when it comes to color control, first, why? Well, sometimes you just get an output image which you don't like, like this one."}, {"start": 798.0, "end": 802.0, "text": " Now for the how portion. Well, one method is to do this."}, {"start": 802.0, "end": 812.0, "text": " You take the content and style images, you transform them into some color space where the color information and the intensity information is separable."}, {"start": 812.0, "end": 827.0, "text": " And what you do is you take the luminance components of style and content images, you do the style transfer, and then you take the color channels from the content image and just concatenate those with the output and you get the final image."}, {"start": 827.0, "end": 832.0, "text": " And that's the one you see under D. Now, controlling scale is really simple."}, {"start": 832.0, "end": 845.0, "text": " What you do is you take a fine scale brushstrokes from one painting, you combine those with a core scale angular geometric shapes from another style image, and you produce a new style image."}, {"start": 845.0, "end": 851.0, "text": " And then you just use that one in NSD in a classical NSD procedure to get the image under E."}, {"start": 851.0, "end": 855.0, "text": " And just be aware that this is something useful to have enough for the fun part."}, {"start": 855.0, "end": 863.0, "text": " So up until now, we consider those grand matrices to be some kind of unnatural law, like we had to match those in order to transfer style."}, {"start": 863.0, "end": 877.0, "text": " And as this paper shows, demystifying neural style transfer, matching those grand matrices is nothing but like minimizing this maximum mean distribution with a polynomial kernel."}, {"start": 877.0, "end": 884.0, "text": " That means we can use other kernels like linear or Gaussian kernel to achieve style transfer."}, {"start": 884.0, "end": 899.0, "text": " And as it says right here, this reveals that neural style transfer is intrinsically a process of distribution alignment of the neural activations between images, which means we basically just need to align those distributions in order to transfer style."}, {"start": 899.0, "end": 908.0, "text": " And there are various ways to do that. And one important that will be important for us is this batch normalization statistics."}, {"start": 908.0, "end": 913.0, "text": " And we already saw a hint for that in the conditional instance normalization paper."}, {"start": 913.0, "end": 918.0, "text": " Now, this work took it even further and achieved infinite flexibility."}, {"start": 918.0, "end": 923.0, "text": " I can transfer any style possible and it does that the following way."}, {"start": 923.0, "end": 929.0, "text": " So you take the image, you pass it through the feed forward map and you take a specific feature map."}, {"start": 929.0, "end": 933.0, "text": " You just normalize it by finding its mean and variance."}, {"start": 933.0, "end": 936.0, "text": " And then you do the same thing for the style image."}, {"start": 936.0, "end": 938.0, "text": " You find the same feature map."}, {"start": 938.0, "end": 945.0, "text": " You find the mean and the variance and you just take those mean and variance parameters and apply them to the content feature map."}, {"start": 945.0, "end": 949.0, "text": " You pass it through the decoder and you get a stylized image out."}, {"start": 949.0, "end": 951.0, "text": " So no learnable parameters this time."}, {"start": 951.0, "end": 955.0, "text": " You just pass two single parameters and you achieve style transfer."}, {"start": 955.0, "end": 957.0, "text": " Let's see it once more on the screen."}, {"start": 957.0, "end": 961.0, "text": " So you take the content feature map, you normalize it by finding its mean and variance."}, {"start": 961.0, "end": 969.0, "text": " And then you find the mean and variance for the style image and you just reapply to the content image and you get a stylized image."}, {"start": 969.0, "end": 978.0, "text": " Now, the good thing is that those affine parameters, those betas and gammas don't need to be learned anymore, such as in batch normalization."}, {"start": 978.0, "end": 983.0, "text": " Also in instance normalization, we had to learn those and in conditional instance normalization."}, {"start": 983.0, "end": 986.0, "text": " Here, they are not a learnable parameters."}, {"start": 986.0, "end": 989.0, "text": " But the bad thing is that the coder still has to be trained."}, {"start": 989.0, "end": 993.0, "text": " So you need a finite set of style images upon which to train this decoder."}, {"start": 993.0, "end": 998.0, "text": " And that means it won't be it won't perform as good for unseen style images."}, {"start": 998.0, "end": 1008.0, "text": " The follow up work fixed this problem, achieving truly infinite flexibility in the sense of number of styles it can apply,"}, {"start": 1008.0, "end": 1013.0, "text": " although it had you had to sacrifice the quality a little bit as per three way trade off."}, {"start": 1013.0, "end": 1014.0, "text": " And the method works like this."}, {"start": 1014.0, "end": 1021.0, "text": " So you just train a simple image reconstruction or the encoder and you insert this WCT block."}, {"start": 1021.0, "end": 1023.0, "text": " And let's see what what it does."}, {"start": 1023.0, "end": 1034.0, "text": " So it does the whitening on the content features by just by just basically figuring out the composition of those content features,"}, {"start": 1034.0, "end": 1039.0, "text": " applying a couple of linear transformations and ending up with a grand matrix that's uncorrelated,"}, {"start": 1039.0, "end": 1044.0, "text": " meaning only the values on the diagonal have once and everything else is zero."}, {"start": 1044.0, "end": 1046.0, "text": " Just don't do all maths here."}, {"start": 1046.0, "end": 1047.0, "text": " Just try and follow along."}, {"start": 1047.0, "end": 1051.0, "text": " And then you just find the same I can decomposition."}, {"start": 1051.0, "end": 1062.0, "text": " But this time for style features apply a couple of linear transformations and you end up with content features that have the same same matrix as the style image,"}, {"start": 1062.0, "end": 1068.0, "text": " which is something we always did, but this time without any learning, any training."}, {"start": 1068.0, "end": 1075.0, "text": " So we saw the evolution of methods for static images and in parallel people were advising new methods for videos."}, {"start": 1075.0, "end": 1082.0, "text": " And the only additional problem that these methods have to solve is how to keep the temporal coherence between frames."}, {"start": 1082.0, "end": 1087.0, "text": " And as you can see here, we've got some original frames in the style image."}, {"start": 1087.0, "end": 1095.0, "text": " And if you just do a dummy like applying the NST per per single frame, there's going to be a lot of flickering,"}, {"start": 1095.0, "end": 1104.0, "text": " a lot of inconsistency between different frames because every single time you run it, the image gets a different local optimum."}, {"start": 1104.0, "end": 1111.0, "text": " Whereas when you apply the with temporal constraint, you can see that the images are much more smoother and consistent between frames."}, {"start": 1111.0, "end": 1114.0, "text": " The way we achieve this temporal consistency is the following."}, {"start": 1114.0, "end": 1122.0, "text": " So we take this green frame, the previous frame, and we forward warp it using the optical flow information."}, {"start": 1122.0, "end": 1133.0, "text": " And we take this red frame, the next frame, and we penalize it for deviating from the green frame on those locations,"}, {"start": 1133.0, "end": 1136.0, "text": " on those pixels where the optical flow was stable."}, {"start": 1136.0, "end": 1139.0, "text": " There was no disocclusion happening."}, {"start": 1139.0, "end": 1142.0, "text": " This method used a slow optimization procedure."}, {"start": 1142.0, "end": 1144.0, "text": " So needless to say, it was painfully slow."}, {"start": 1144.0, "end": 1148.0, "text": " And the follow up work just did the same thing as for static images."}, {"start": 1148.0, "end": 1151.0, "text": " It just transferred this problem into a feed forward network."}, {"start": 1151.0, "end": 1157.0, "text": " And I've actually used this very same model in the beginning of the series to create the very first clip."}, {"start": 1157.0, "end": 1160.0, "text": " It's called Reconet. And this is the clip you created."}, {"start": 1160.0, "end": 1166.0, "text": " So in the beginning, it's temporal inconsistent and it becomes temporally consistent and smooth."}, {"start": 1166.0, "end": 1171.0, "text": " So we saw a neural style transfer for static images, we saw for videos."}, {"start": 1171.0, "end": 1175.0, "text": " And the thing with this NST field is that people are trying to apply it everywhere."}, {"start": 1175.0, "end": 1180.0, "text": " So we've got NST for spherical images and videos that can be used in VR."}, {"start": 1180.0, "end": 1191.0, "text": " There is this concept of transferring style to 3D models where the artist just needs to paint this sphere in the top right corner."}, {"start": 1191.0, "end": 1193.0, "text": " And it gets transferred to the 3D model."}, {"start": 1193.0, "end": 1201.0, "text": " There is this photorealistic neural style transfer where you basically only transfer the color information onto the content image."}, {"start": 1201.0, "end": 1208.0, "text": " There is also a neural style transfer for audio and style aware content loss, etc., etc."}, {"start": 1208.0, "end": 1212.0, "text": " So it would be really valuable to share how this whole field just evolved,"}, {"start": 1212.0, "end": 1217.0, "text": " how people were trying to connect various dots and build upon each other's work."}, {"start": 1217.0, "end": 1219.0, "text": " And that's just how research looks like."}, {"start": 1219.0, "end": 1222.0, "text": " The field is nowhere near having all problems sorted out."}, {"start": 1222.0, "end": 1224.0, "text": " There are still a lot of challenges out there."}, {"start": 1224.0, "end": 1226.0, "text": " One of those is static evaluation."}, {"start": 1226.0, "end": 1235.0, "text": " So there is still not there doesn't exist some numerical method which can help us compare different NST methods and say, hey, this is better."}, {"start": 1235.0, "end": 1236.0, "text": " This one is worse."}, {"start": 1236.0, "end": 1245.0, "text": " We're still using those side by side subjective visual comparisons and different user studies to figure out which method is better."}, {"start": 1245.0, "end": 1251.0, "text": " Also, there is no standard benchmark image set, meaning everybody is using their own style images."}, {"start": 1251.0, "end": 1253.0, "text": " Everybody is using their own content images."}, {"start": 1253.0, "end": 1257.0, "text": " And it's kind of hard to just wrap your head around and compare different methods."}, {"start": 1257.0, "end": 1260.0, "text": " In other challenges, this representation disentangling."}, {"start": 1260.0, "end": 1267.0, "text": " So we saw some efforts like the controlling those perceptual factors like scale, like space and color."}, {"start": 1267.0, "end": 1278.0, "text": " It would be really nice if we had some latent space representation where we could just tweak certain dimension and get all of those perceptual factors changing the way we want them to change."}, {"start": 1278.0, "end": 1281.0, "text": " And I want to wrap this up with a fun fact."}, {"start": 1281.0, "end": 1283.0, "text": " And do you see the image on the right there?"}, {"start": 1283.0, "end": 1286.0, "text": " Could you guess it was it was sold in an auction?"}, {"start": 1286.0, "end": 1287.0, "text": " Could you guess the price tag?"}, {"start": 1287.0, "end": 1290.0, "text": " It was almost half a million dollars."}, {"start": 1290.0, "end": 1294.0, "text": " And the thing is, it wasn't even created by neural style transfer algorithm."}, {"start": 1294.0, "end": 1295.0, "text": " It was created by GANs."}, {"start": 1295.0, "end": 1297.0, "text": " You can see the equation down there."}, {"start": 1297.0, "end": 1303.0, "text": " But it's just so interesting that we are living in an age where there is this interesting dynamics going on between art and tech."}, {"start": 1303.0, "end": 1306.0, "text": " And I think it's really a great time to be alive."}, {"start": 1306.0, "end": 1309.0, "text": " So I really hope you found this video valuable."}, {"start": 1309.0, "end": 1312.0, "text": " If you're new to my channel, consider subscribing and sharing."}, {"start": 1312.0, "end": 1315.0, "text": " And there's awesome new content coming up."}, {"start": 1315.0, "end": 1318.0, "text": " So stay tuned and see in the next video."}] |
Aleksa Gordić - The AI Epiphany | https://www.youtube.com/watch?v=XWMwdkaLFsI | Optimization method | Neural Style Transfer #3 | ❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
The third video in the neural style transfer series! 🎨
You'll learn about:
✔️ The optimization-based (original Gatys et al.) NST method.
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
✅ GitHub code: https://github.com/gordicaleksa/pytorch-neural-style-transfer
Relevant reading ►
(original NST paper, arxiv, old) https://arxiv.org/pdf/1508.06576.pdf
(original NST paper, CVPR, new) https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⌚️ Timetable:
00:12 - 02:55 Going through the readme file
02:55 - 04:30 Setting up conda environment
04:30 - 09:23 Reconstruction script (content reconstruction)
09:23 - 10:25 L-BFGS VRAM consumption consideration
10:25 - 12:37 Reconstruction script (style reconstruction)
12:44 - 18:10 Main NST script
18:10 - 19:20 Further experimenting
19:20 Outro (constructive feedback is welcome!)
'
[Credits] Music:
https://www.youtube.com/watch?v=J2X5mJ3HDYE [NCS]
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💰 BECOME A PATREON OF THE AI EPIPHANY ❤️
If these videos, GitHub projects, and blogs help you,
consider helping me out by supporting me on Patreon!
The AI Epiphany ► https://www.patreon.com/theaiepiphany
One-time donations: https://www.paypal.com/paypalme/theaiepiphany
Much love! ❤️
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition,
rather than the algebraic and numerical "intuition".
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
👋 CONNECT WITH ME ON SOCIAL
LinkedIn ► https://www.linkedin.com/in/aleksagordic/
Twitter ► https://twitter.com/gordic_aleksa
Instagram ► https://www.instagram.com/aiepiphany/
Facebook ► https://www.facebook.com/aiepiphany/
👨👩👧👦 JOIN OUR DISCORD COMMUNITY:
Discord ► https://discord.gg/peBrCpheKE
📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER:
Substack ► https://aiepiphany.substack.com/
💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS:
GitHub ► https://github.com/gordicaleksa
📚 FOLLOW ME ON MEDIUM:
Medium ► https://gordicaleksa.medium.com/
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
#neuralstyletransfer #deeplearning #ai | What's up folks, we're gonna dig into some code for neural style transfer on static images using the optimization method and let's just start. Awesome, let's jump to GitHub here. You can see the URL and I've put the link in the description. So I basically wrote this repo only for this video, but it was totally worth it. I learned a lot and I hope you will benefit from it also. So it's written in PyTorch, I already mentioned that in previous video and I think it's pretty easy to use. I'm just going to briefly run through this readme file with you and in the next section you can just see the, on the left side is the output from the algorithm, on the right side are content and style inputs. And why yet another NST repo? Basically I couldn't find a reference implementation in PyTorch and other ones were really too complicated and I think this thing is really simple. And we got some examples here, just some cherry pick ones which I really liked. I think they're really neat and then following up, same thing, just in the left column is the output and on the right side is the style image that produced that output. And the following two sections are really important, they just show you how you can manipulate the weights, the respective weights. Here the first one is the style weight where you freeze the content weight and just going from left to right you can see that the amount of style in the output image is increasing. And then this is something that's rarely explained, it's a total variation loss which helps you just smoothen out the image. And then going from left to right here you can see that the image becomes really smooth. Here we can see that the way you initialize the input image, whether you use the noise, whether you use content as the initial image or style, you get different results. So on the left side you can see what you get when you start from the content image and that's usually the best way to go here. And in the middle is the random initialization and on the right side you can see the style initialization where the content from the style image actually leaked into the output image which is probably undesirable, depends if you get a really cool image that's cool. And here is the reconstruction from the original paper. I really encourage you to just go through this reading, I think it's really digestible, like it's really visual, you'll understand stuff. But I want to jump to code as soon as possible here. This is something I already explained in the last video, how we can reconstruct only content or only the style image. And it also looks really nice. And finally the setup. So this part should really be a piece of cake. You basically have only two instructions to run here. In an ideal case, in a non-ideal case you'll have to install system-wide CUDA and also minikonda which I'll be using throughout the video. Okay, let me just move the browser here and open anaconda and I'll just navigate to the place where I want to clone the repo. That's here. And I'll just do git clone from this URL here. And that should just download the repo directly. And once it downloads you can just verify it's there by typing in start like this. You can see it's here. And now we just have to navigate directly into it. Let's name it PyTorch. Yep. And we have to run only one simple command. That's conda and create. And it just went and installed conda environment for us. And now we just have to do activate PyTorch. Sorry, PyTorch.nst. Like this. And we're ready to jump into code. Yeah. Okay, so I lied. There is one more thing you need to do and that's open up your favorite IDE minus PyCharm and just connect it to the interpreter from the freshly created conda environment. And you do it like this in PyCharm. You just open up settings here and project interpreter and you just set it to PyTorch.nst. That's it. Let's go. I jumped to main function here and these are some of the default parameters you really don't need to change. This one, for example, is the default location for content images. This one is a default location for style images and I'm saving all of my images as JPEG because it saves lots of bandwidth. And for this type of output images, it also keeps the high quality of those images. Okay, so coming next are some of the parameters you'll be changing and playing with. So basically what you have here is default content image. That's this one for now. Line JPEG. And we'll be using this one for as a style. The height is fixed. 500 pixels is totally fine for this demo. Minus one means only final. So I'll put minus one here. One means save every single intermediate image. I'm using VGG19 and we'll be using LBFGS. I'll be talking more about the optimizer a bit later. Okay, so we have skipped these two here. Should reconstruct content will basically let you choose between reconstructing either content or style image. And I'll set it to true. So we'll be first reconstructing content and then the should visualize representation basically plots either feature maps or Graham matrices depending whether you picked content or style. So I'll put it to true. We want to visualize feature maps here. And so following up just we just wrap all of the all of the data in this dictionary object and we just call the function reconstruct image from representation. There's one more thing we need to do here and that's set a different content layer in VGG net. So I'll go to VGG net file here. So I'm using VGG19 right. So okay. And I'll just set four. I'll say to let's say one here. And that means we'll be using a relative to one layer. So with that being said let me just go and go ahead and run this. And this is what we get. That's a feature map. That's a first feature map from this layer. Reli to one for the line image. And let me show you the line image here from the default content directory. So this is what we are reconstructing. Let me close it and show you a couple of other feature maps here. And the reason I've chosen the relative to one is because it extracts a lower level features whereas higher higher layers would extract like higher higher level features from from images. And so you can see how it looks like. And let's see that in action. So if I close this. Oops. If I stop the program here I go to VGG and we we pick some other layer like let's say we pick com for two. Actually we'll pick yeah come for two. And if I go and start a program now this is what we get. And we can see that the feature maps are much more abstract. They tend to focus on like semantically meaningful parts of the image like in the case of Lyon it will put focus on ice as you can see here or mean or yeah like I'll just let you have a look here. And you can see like the nostrils and the eyebrows the lion has an eyebrow. My God. OK now let me stop this and change a couple of settings here. First we want to set this to one. We want to save every single image. Let me change the the layer to really to one so that we get better reconstruction. I mean less abstract reconstruction. And finally let's toggle the visualization off and let me start this. Now we'll just reconstruct the content image. So I went ahead and done that. And this is what we get. If you go here you can see the relative path here. Output images. We can see that the noise is slowly morphing into this line image as we go down the pipeline. And I went ahead and just created a video out of these images. And this is what we get here. So let me just return to the beginning because it's really fast and you can see the morphing happens really fast with LBFGS. OK just an important note on performance. So if you're using LBFGS there is a huge chance that you'll run out of video memory. So I'm using I can show you the graph here. So I'm using RTX 2080 and I guess not a lot of you folks will have a GPU that strong. So this GPU has 8 gigs of RAM and you can see here the algorithms in this repo with this configuration will eat up around 3 gigs and that's a lot. There's basically two things you can do. One is either a switch to Atom optimizer and the second one if you want to keep LBFGS because it's really good and performant with this task. You can play with resolution. So now it's I think like 500 pixels put it down to 250 300 or something. You can play with the LBFGS class itself like change the history size is 100 currently. And you can switch to VGG 16 because it will eat up less video memory. It's a smaller and shallower model than VGG 19. So now let's switch to style. Let's change a couple of parameters here. We want to visualize grand matrices and we want to pick style here. So if I run it we get the grand matrix for this image. Let me show you this one and let me close it here. And this grand matrix comes from layer RLD11 and it's just a part of the complete style representation for this image. We have five grand matrices in total which compose the general style representation. You can also see a strong line going through the main diagonal and the reason for that is because feature map. So when we take a dot product between the feature map and itself you get a high output and that's what the grand matrix actually represents. It's just like a set of different dot products between different feature maps. Let's just go ahead and see a couple of these. So this is how it looks for the next layer. It's already getting weaker in intensity because they are being normalized and the more elements they have the weaker the intensity. RLD31 and RLD41 basically won't see anything special here. Now if I exit here and once more we'll just go into reconstructing the style. Let's see what the output image looks like. So if I open it up here you can see the relative path again. I'll put images and starting from beginning it's a noise image as with the same thing as with content reconstruction. And as we go down the pipeline it gets increasingly stylized and the final image looks like this. So I went ahead and created a video and it looks like this. I encourage you to go and play ahead with this. By the way I do have a video creating function included in the repo. You just go here and you just uncomment this function. So that's it reconstructing the content and style images. Now let's jump into the neural style transfer script. Let's go. So just go ahead and open this neural style transfer file. And you can see that the script shares a lot of parameters with the reconstruction script. Basically the only three new ones and important ones are content weight, style weight and total variation weight. So you'll basically by tuning those three you'll get any image pair to like just merge together really nicely. When I say image pair I mean content style images. Let's go ahead and try it out. So I'll take a content image. I'll take the following content image. This one is called figures and I'll take the following style image. It's famous staring night from from from Van Gogh and I'll just combine those two here. So I'll put it figures JPEG and I'll put here Van Gogh's theory night JPEG and everything else here looks fine. And let me just run it. Once that's finished we get this as the output. We go to combine figures. So the name is always like this like the two images you used. That's the name of the directory where the output images will go to. And this is the result. And it looks really nice. And I'm not sure if it's pronounced. I think it's the right name to pronounce this guy's name. Name is actually Van Gogh in Dutch. Just a fun fact. Okay enough with fun facts. Let's go and see how the actual implementation looks like. So I'll go to the beginning of the function here and it starts here. So the first important bit here is the on the line 59. It's the prepare image function. So if I found find a function here you just what it does is it loads the image as an umpire rate. And then we do this transform here. Basically it's really important that you scale the image pixels with 255 because the VGG net. I'll learn during training to deal with data like that. And also normalization step is important. I just use the ones that were used for the VGG training also. So we applied the transform. We put a device to CUDA. So we copied the tensor to GPU. And then we do unsqueeze which basically just adds a dummy dimension so that it looks like a batch. And we do that for both the content and the style image. Let me return back here. So that was that part. Second we I always like the content image is the best way to initialize the image. And I just read that initial image in this variable from like a torch class. And this bit is important requires red to means that actually the this image is what is trainable. So usually when you do a machine learning training what you do is you train you tune the weights of the model here. The model weights are actually freeze. And the only thing that actually changes is the image itself. Next up we prepare the model here and we take the indices from the corresponding layers that we're using for content and star representation. So this this variable here contains both the index and the name. So say one and really one one stuff like that. And then we feed through the content and style images which were prepared previously. And finally we obtained the representation. I talked about this in the last video. And the final bit is here. So I'll just keep Adam. I'll pick one of the numerical optimizers. I'll use LBFGS here. And important. So there's just a bunch of boilerplate code here. The closure. That's something that by torch just like requires you to define. What is important here is the generation of the last build loss function. And then we just do the backward back prop on the classical like back prop on the last function that was defined. Let's take a quick look at the build loss function. So go to the implementation there. So what we do here is we take the optimization image which was initialized with the content image if you remember. And we just feed it through the VGG. We get and we get the current set of feature maps. And by just doing MSC loss between the target content representation and the current content representation we get that portion of the loss. We do a similar thing for style loss. But here we just form gray matrices out of those feature maps first and then do the MSC loss here. I just encourage you to go at your own pace through this code and understand what's happening here. So in the total loss finally is just a weighted sum of the content loss of the style loss and of the total variation loss. And that was it. So just wrapping it all up. So what we do is we have a VGG net which is frozen. We have the image which we are tuning so that the feature map is producing are getting more similar to the reference feature maps which are given by the content image and by the style image. That's it. And finally there are two things that I'd like you to go and experiment with. The first one being that you should experiment with content style and total variation losses. So I've actually put it down on a table for some like reasonable values you should use depending on the optimizer you're using. So you can see here LBFGS content in it. Just use those weights as a starting point and just go experiment. Just tweak them, increase them, decrease them and see how they affect the end result. I think that would be a really good learning experience. And the second thing would be to go to the architecture itself. So go to VGG 19 and try and experiment with different sets of layers which are using for the style representation. Try and add like a comp for three. Add more layers. Subtract some layers, whatever. Just experiment there and try and see how different sets affect the image quality. And if you find some superior representation let me know in the comments that would be really cool. So that was it for this video folks. I just encourage you to go and play with the code. That's the best way you can learn how to do this actually and understand it thoroughly. So I just encourage you also to give me like constructive feedback down in the comments. That would mean a lot to me because I'm still learning. I'm learning with you folks here. And you can tell me if the video was too long or too short which I doubt. But yeah, anyways, any comment is really welcome. Thanks a bunch. Subscribe and see you in the next video. | [{"start": 0.0, "end": 11.0, "text": " What's up folks, we're gonna dig into some code for neural style transfer on static images using the optimization method and let's just start."}, {"start": 12.0, "end": 18.0, "text": " Awesome, let's jump to GitHub here. You can see the URL and I've put the link in the description."}, {"start": 19.0, "end": 27.0, "text": " So I basically wrote this repo only for this video, but it was totally worth it. I learned a lot and I hope you will benefit from it also."}, {"start": 27.0, "end": 34.0, "text": " So it's written in PyTorch, I already mentioned that in previous video and I think it's pretty easy to use."}, {"start": 35.0, "end": 48.0, "text": " I'm just going to briefly run through this readme file with you and in the next section you can just see the, on the left side is the output from the algorithm, on the right side are content and style inputs."}, {"start": 48.0, "end": 61.0, "text": " And why yet another NST repo? Basically I couldn't find a reference implementation in PyTorch and other ones were really too complicated and I think this thing is really simple."}, {"start": 61.0, "end": 78.0, "text": " And we got some examples here, just some cherry pick ones which I really liked. I think they're really neat and then following up, same thing, just in the left column is the output and on the right side is the style image that produced that output."}, {"start": 79.0, "end": 85.0, "text": " And the following two sections are really important, they just show you how you can manipulate the weights, the respective weights."}, {"start": 85.0, "end": 97.0, "text": " Here the first one is the style weight where you freeze the content weight and just going from left to right you can see that the amount of style in the output image is increasing."}, {"start": 98.0, "end": 108.0, "text": " And then this is something that's rarely explained, it's a total variation loss which helps you just smoothen out the image."}, {"start": 108.0, "end": 127.0, "text": " And then going from left to right here you can see that the image becomes really smooth. Here we can see that the way you initialize the input image, whether you use the noise, whether you use content as the initial image or style, you get different results."}, {"start": 127.0, "end": 151.0, "text": " So on the left side you can see what you get when you start from the content image and that's usually the best way to go here. And in the middle is the random initialization and on the right side you can see the style initialization where the content from the style image actually leaked into the output image which is probably undesirable, depends if you get a really cool image that's cool."}, {"start": 151.0, "end": 165.0, "text": " And here is the reconstruction from the original paper. I really encourage you to just go through this reading, I think it's really digestible, like it's really visual, you'll understand stuff. But I want to jump to code as soon as possible here."}, {"start": 166.0, "end": 174.0, "text": " This is something I already explained in the last video, how we can reconstruct only content or only the style image. And it also looks really nice."}, {"start": 174.0, "end": 191.0, "text": " And finally the setup. So this part should really be a piece of cake. You basically have only two instructions to run here. In an ideal case, in a non-ideal case you'll have to install system-wide CUDA and also minikonda which I'll be using throughout the video."}, {"start": 191.0, "end": 212.0, "text": " Okay, let me just move the browser here and open anaconda and I'll just navigate to the place where I want to clone the repo. That's here. And I'll just do git clone from this URL here. And that should just download the repo directly."}, {"start": 212.0, "end": 222.0, "text": " And once it downloads you can just verify it's there by typing in start like this. You can see it's here. And now we just have to navigate directly into it."}, {"start": 222.0, "end": 243.0, "text": " Let's name it PyTorch. Yep. And we have to run only one simple command. That's conda and create. And it just went and installed conda environment for us. And now we just have to do activate PyTorch. Sorry, PyTorch.nst."}, {"start": 243.0, "end": 261.0, "text": " Like this. And we're ready to jump into code. Yeah. Okay, so I lied. There is one more thing you need to do and that's open up your favorite IDE minus PyCharm and just connect it to the interpreter from the freshly created conda environment."}, {"start": 262.0, "end": 271.0, "text": " And you do it like this in PyCharm. You just open up settings here and project interpreter and you just set it to PyTorch.nst. That's it. Let's go."}, {"start": 271.0, "end": 283.0, "text": " I jumped to main function here and these are some of the default parameters you really don't need to change. This one, for example, is the default location for content images."}, {"start": 284.0, "end": 297.0, "text": " This one is a default location for style images and I'm saving all of my images as JPEG because it saves lots of bandwidth. And for this type of output images, it also keeps the high quality of those images."}, {"start": 297.0, "end": 310.0, "text": " Okay, so coming next are some of the parameters you'll be changing and playing with. So basically what you have here is default content image. That's this one for now. Line JPEG."}, {"start": 310.0, "end": 329.0, "text": " And we'll be using this one for as a style. The height is fixed. 500 pixels is totally fine for this demo. Minus one means only final. So I'll put minus one here. One means save every single intermediate image."}, {"start": 329.0, "end": 349.0, "text": " I'm using VGG19 and we'll be using LBFGS. I'll be talking more about the optimizer a bit later. Okay, so we have skipped these two here. Should reconstruct content will basically let you choose between reconstructing either content or style image."}, {"start": 349.0, "end": 362.0, "text": " And I'll set it to true. So we'll be first reconstructing content and then the should visualize representation basically plots either feature maps or Graham matrices depending whether you picked content or style."}, {"start": 363.0, "end": 376.0, "text": " So I'll put it to true. We want to visualize feature maps here. And so following up just we just wrap all of the all of the data in this dictionary object and we just call the function reconstruct image from representation."}, {"start": 376.0, "end": 396.0, "text": " There's one more thing we need to do here and that's set a different content layer in VGG net. So I'll go to VGG net file here. So I'm using VGG19 right. So okay. And I'll just set four. I'll say to let's say one here."}, {"start": 396.0, "end": 412.0, "text": " And that means we'll be using a relative to one layer. So with that being said let me just go and go ahead and run this. And this is what we get. That's a feature map. That's a first feature map from this layer."}, {"start": 412.0, "end": 427.0, "text": " Reli to one for the line image. And let me show you the line image here from the default content directory. So this is what we are reconstructing. Let me close it and show you a couple of other feature maps here."}, {"start": 427.0, "end": 443.0, "text": " And the reason I've chosen the relative to one is because it extracts a lower level features whereas higher higher layers would extract like higher higher level features from from images."}, {"start": 443.0, "end": 458.0, "text": " And so you can see how it looks like. And let's see that in action. So if I close this. Oops. If I stop the program here I go to VGG and we we pick some other layer like let's say we pick com for two."}, {"start": 458.0, "end": 473.0, "text": " Actually we'll pick yeah come for two. And if I go and start a program now this is what we get. And we can see that the feature maps are much more abstract."}, {"start": 473.0, "end": 493.0, "text": " They tend to focus on like semantically meaningful parts of the image like in the case of Lyon it will put focus on ice as you can see here or mean or yeah like I'll just let you have a look here."}, {"start": 493.0, "end": 506.0, "text": " And you can see like the nostrils and the eyebrows the lion has an eyebrow. My God. OK now let me stop this and change a couple of settings here."}, {"start": 507.0, "end": 520.0, "text": " First we want to set this to one. We want to save every single image. Let me change the the layer to really to one so that we get better reconstruction."}, {"start": 520.0, "end": 531.0, "text": " I mean less abstract reconstruction. And finally let's toggle the visualization off and let me start this. Now we'll just reconstruct the content image."}, {"start": 532.0, "end": 539.0, "text": " So I went ahead and done that. And this is what we get. If you go here you can see the relative path here. Output images."}, {"start": 539.0, "end": 552.0, "text": " We can see that the noise is slowly morphing into this line image as we go down the pipeline. And I went ahead and just created a video out of these images."}, {"start": 553.0, "end": 562.0, "text": " And this is what we get here. So let me just return to the beginning because it's really fast and you can see the morphing happens really fast with LBFGS."}, {"start": 562.0, "end": 569.0, "text": " OK just an important note on performance. So if you're using LBFGS there is a huge chance that you'll run out of video memory."}, {"start": 570.0, "end": 578.0, "text": " So I'm using I can show you the graph here. So I'm using RTX 2080 and I guess not a lot of you folks will have a GPU that strong."}, {"start": 579.0, "end": 587.0, "text": " So this GPU has 8 gigs of RAM and you can see here the algorithms in this repo with this configuration will eat up around 3 gigs and that's a lot."}, {"start": 587.0, "end": 599.0, "text": " There's basically two things you can do. One is either a switch to Atom optimizer and the second one if you want to keep LBFGS because it's really good and performant with this task."}, {"start": 600.0, "end": 613.0, "text": " You can play with resolution. So now it's I think like 500 pixels put it down to 250 300 or something. You can play with the LBFGS class itself like change the history size is 100 currently."}, {"start": 613.0, "end": 623.0, "text": " And you can switch to VGG 16 because it will eat up less video memory. It's a smaller and shallower model than VGG 19."}, {"start": 624.0, "end": 634.0, "text": " So now let's switch to style. Let's change a couple of parameters here. We want to visualize grand matrices and we want to pick style here."}, {"start": 634.0, "end": 647.0, "text": " So if I run it we get the grand matrix for this image. Let me show you this one and let me close it here."}, {"start": 648.0, "end": 657.0, "text": " And this grand matrix comes from layer RLD11 and it's just a part of the complete style representation for this image."}, {"start": 657.0, "end": 664.0, "text": " We have five grand matrices in total which compose the general style representation."}, {"start": 665.0, "end": 671.0, "text": " You can also see a strong line going through the main diagonal and the reason for that is because feature map."}, {"start": 672.0, "end": 679.0, "text": " So when we take a dot product between the feature map and itself you get a high output and that's what the grand matrix actually represents."}, {"start": 680.0, "end": 684.0, "text": " It's just like a set of different dot products between different feature maps."}, {"start": 684.0, "end": 689.0, "text": " Let's just go ahead and see a couple of these. So this is how it looks for the next layer."}, {"start": 690.0, "end": 698.0, "text": " It's already getting weaker in intensity because they are being normalized and the more elements they have the weaker the intensity."}, {"start": 699.0, "end": 704.0, "text": " RLD31 and RLD41 basically won't see anything special here."}, {"start": 705.0, "end": 709.0, "text": " Now if I exit here and once more we'll just go into reconstructing the style."}, {"start": 709.0, "end": 717.0, "text": " Let's see what the output image looks like. So if I open it up here you can see the relative path again."}, {"start": 718.0, "end": 725.0, "text": " I'll put images and starting from beginning it's a noise image as with the same thing as with content reconstruction."}, {"start": 726.0, "end": 733.0, "text": " And as we go down the pipeline it gets increasingly stylized and the final image looks like this."}, {"start": 733.0, "end": 739.0, "text": " So I went ahead and created a video and it looks like this."}, {"start": 740.0, "end": 745.0, "text": " I encourage you to go and play ahead with this."}, {"start": 746.0, "end": 750.0, "text": " By the way I do have a video creating function included in the repo."}, {"start": 751.0, "end": 757.0, "text": " You just go here and you just uncomment this function."}, {"start": 757.0, "end": 765.0, "text": " So that's it reconstructing the content and style images. Now let's jump into the neural style transfer script. Let's go."}, {"start": 766.0, "end": 770.0, "text": " So just go ahead and open this neural style transfer file."}, {"start": 771.0, "end": 776.0, "text": " And you can see that the script shares a lot of parameters with the reconstruction script."}, {"start": 777.0, "end": 784.0, "text": " Basically the only three new ones and important ones are content weight, style weight and total variation weight."}, {"start": 784.0, "end": 791.0, "text": " So you'll basically by tuning those three you'll get any image pair to like just merge together really nicely."}, {"start": 792.0, "end": 794.0, "text": " When I say image pair I mean content style images."}, {"start": 795.0, "end": 800.0, "text": " Let's go ahead and try it out. So I'll take a content image. I'll take the following content image."}, {"start": 801.0, "end": 805.0, "text": " This one is called figures and I'll take the following style image."}, {"start": 805.0, "end": 815.0, "text": " It's famous staring night from from from Van Gogh and I'll just combine those two here."}, {"start": 816.0, "end": 832.0, "text": " So I'll put it figures JPEG and I'll put here Van Gogh's theory night JPEG and everything else here looks fine."}, {"start": 832.0, "end": 834.0, "text": " And let me just run it."}, {"start": 835.0, "end": 843.0, "text": " Once that's finished we get this as the output. We go to combine figures."}, {"start": 844.0, "end": 848.0, "text": " So the name is always like this like the two images you used."}, {"start": 849.0, "end": 852.0, "text": " That's the name of the directory where the output images will go to."}, {"start": 853.0, "end": 856.0, "text": " And this is the result. And it looks really nice."}, {"start": 856.0, "end": 862.0, "text": " And I'm not sure if it's pronounced. I think it's the right name to pronounce this guy's name."}, {"start": 863.0, "end": 866.0, "text": " Name is actually Van Gogh in Dutch. Just a fun fact."}, {"start": 867.0, "end": 871.0, "text": " Okay enough with fun facts. Let's go and see how the actual implementation looks like."}, {"start": 872.0, "end": 878.0, "text": " So I'll go to the beginning of the function here and it starts here."}, {"start": 879.0, "end": 884.0, "text": " So the first important bit here is the on the line 59. It's the prepare image function."}, {"start": 884.0, "end": 892.0, "text": " So if I found find a function here you just what it does is it loads the image as an umpire rate."}, {"start": 893.0, "end": 902.0, "text": " And then we do this transform here. Basically it's really important that you scale the image pixels with 255 because the VGG net."}, {"start": 903.0, "end": 907.0, "text": " I'll learn during training to deal with data like that. And also normalization step is important."}, {"start": 908.0, "end": 911.0, "text": " I just use the ones that were used for the VGG training also."}, {"start": 911.0, "end": 918.0, "text": " So we applied the transform. We put a device to CUDA. So we copied the tensor to GPU."}, {"start": 919.0, "end": 924.0, "text": " And then we do unsqueeze which basically just adds a dummy dimension so that it looks like a batch."}, {"start": 925.0, "end": 930.0, "text": " And we do that for both the content and the style image. Let me return back here. So that was that part."}, {"start": 931.0, "end": 936.0, "text": " Second we I always like the content image is the best way to initialize the image."}, {"start": 936.0, "end": 940.0, "text": " And I just read that initial image in this variable from like a torch class."}, {"start": 941.0, "end": 947.0, "text": " And this bit is important requires red to means that actually the this image is what is trainable."}, {"start": 948.0, "end": 954.0, "text": " So usually when you do a machine learning training what you do is you train you tune the weights of the model here."}, {"start": 955.0, "end": 961.0, "text": " The model weights are actually freeze. And the only thing that actually changes is the image itself."}, {"start": 961.0, "end": 972.0, "text": " Next up we prepare the model here and we take the indices from the corresponding layers that we're using for content and star representation."}, {"start": 973.0, "end": 979.0, "text": " So this this variable here contains both the index and the name. So say one and really one one stuff like that."}, {"start": 980.0, "end": 985.0, "text": " And then we feed through the content and style images which were prepared previously."}, {"start": 985.0, "end": 990.0, "text": " And finally we obtained the representation. I talked about this in the last video."}, {"start": 991.0, "end": 998.0, "text": " And the final bit is here. So I'll just keep Adam. I'll pick one of the numerical optimizers. I'll use LBFGS here."}, {"start": 999.0, "end": 1008.0, "text": " And important. So there's just a bunch of boilerplate code here. The closure. That's something that by torch just like requires you to define."}, {"start": 1008.0, "end": 1021.0, "text": " What is important here is the generation of the last build loss function. And then we just do the backward back prop on the classical like back prop on the last function that was defined."}, {"start": 1022.0, "end": 1032.0, "text": " Let's take a quick look at the build loss function. So go to the implementation there. So what we do here is we take the optimization image which was initialized with the content image if you remember."}, {"start": 1032.0, "end": 1037.0, "text": " And we just feed it through the VGG. We get and we get the current set of feature maps."}, {"start": 1038.0, "end": 1047.0, "text": " And by just doing MSC loss between the target content representation and the current content representation we get that portion of the loss."}, {"start": 1048.0, "end": 1058.0, "text": " We do a similar thing for style loss. But here we just form gray matrices out of those feature maps first and then do the MSC loss here."}, {"start": 1058.0, "end": 1063.0, "text": " I just encourage you to go at your own pace through this code and understand what's happening here."}, {"start": 1064.0, "end": 1071.0, "text": " So in the total loss finally is just a weighted sum of the content loss of the style loss and of the total variation loss."}, {"start": 1072.0, "end": 1078.0, "text": " And that was it. So just wrapping it all up. So what we do is we have a VGG net which is frozen."}, {"start": 1078.0, "end": 1090.0, "text": " We have the image which we are tuning so that the feature map is producing are getting more similar to the reference feature maps which are given by the content image and by the style image."}, {"start": 1091.0, "end": 1095.0, "text": " That's it. And finally there are two things that I'd like you to go and experiment with."}, {"start": 1096.0, "end": 1101.0, "text": " The first one being that you should experiment with content style and total variation losses."}, {"start": 1101.0, "end": 1110.0, "text": " So I've actually put it down on a table for some like reasonable values you should use depending on the optimizer you're using."}, {"start": 1111.0, "end": 1119.0, "text": " So you can see here LBFGS content in it. Just use those weights as a starting point and just go experiment."}, {"start": 1120.0, "end": 1125.0, "text": " Just tweak them, increase them, decrease them and see how they affect the end result."}, {"start": 1126.0, "end": 1129.0, "text": " I think that would be a really good learning experience."}, {"start": 1129.0, "end": 1133.0, "text": " And the second thing would be to go to the architecture itself."}, {"start": 1134.0, "end": 1142.0, "text": " So go to VGG 19 and try and experiment with different sets of layers which are using for the style representation."}, {"start": 1143.0, "end": 1149.0, "text": " Try and add like a comp for three. Add more layers. Subtract some layers, whatever."}, {"start": 1150.0, "end": 1153.0, "text": " Just experiment there and try and see how different sets affect the image quality."}, {"start": 1153.0, "end": 1158.0, "text": " And if you find some superior representation let me know in the comments that would be really cool."}, {"start": 1159.0, "end": 1164.0, "text": " So that was it for this video folks. I just encourage you to go and play with the code."}, {"start": 1165.0, "end": 1170.0, "text": " That's the best way you can learn how to do this actually and understand it thoroughly."}, {"start": 1171.0, "end": 1175.0, "text": " So I just encourage you also to give me like constructive feedback down in the comments."}, {"start": 1176.0, "end": 1180.0, "text": " That would mean a lot to me because I'm still learning. I'm learning with you folks here."}, {"start": 1180.0, "end": 1185.0, "text": " And you can tell me if the video was too long or too short which I doubt."}, {"start": 1186.0, "end": 1189.0, "text": " But yeah, anyways, any comment is really welcome."}, {"start": 1189.0, "end": 1210.0, "text": " Thanks a bunch. Subscribe and see you in the next video."}] |
Aleksa Gordić - The AI Epiphany | https://www.youtube.com/watch?v=B22nIUhXo4E | Basic Theory | Neural Style Transfer #2 | ❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
The second video in the neural style transfer series! 🎨
You'll learn about:
✔️ The basic theory behind how neural style transfer works
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
I hope the video provides you with a strong basic intuition and understanding, but for those of you who want to take it further here are some additional materials relevant to this video:
papers ►
✔️(original NST paper, arxiv, old) https://arxiv.org/pdf/1508.06576.pdf
✔️(original NST paper, CVPR, new) https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf
blogs/articles ►
✔️ History of style transfer (3 part blog series by Adobe's Aaron Hertzmann) https://research.adobe.com/news/image-stylization-history-and-future/
✔️Nice overview of fast NST algorithms https://www.fritz.ai/style-transfer/
Note: It seems that YouTube's video transcoding process messed up the intro and outro NST clips - they look much nicer and higher quality on my machine.
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⌚️ Timetable:
00:00 - intro & NST series overview
02:25 - what I want this series to be
03:30 - defining the task of NST
04:01 - 2 types of style transfer
04:43 - a glimpse of the image style transfer history
06:55 - explanation of the content representation
10:10 - explanation of the style representation
14:12 - putting it all together (animation)
[Credits] Music:
https://www.youtube.com/watch?v=J2X5mJ3HDYE [NCS]
[Credits] Images:
Found the useful Gram matrix intuition image in this blog: https://towardsdatascience.com/light-on-math-machine-learning-intuitive-guide-to-neural-style-transfer-ef88e46697ee
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💰 BECOME A PATREON OF THE AI EPIPHANY ❤️
If these videos, GitHub projects, and blogs help you,
consider helping me out by supporting me on Patreon!
The AI Epiphany ► https://www.patreon.com/theaiepiphany
One-time donations: https://www.paypal.com/paypalme/theaiepiphany
Much love! ❤️
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition,
rather than the algebraic and numerical "intuition".
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
👋 CONNECT WITH ME ON SOCIAL
LinkedIn ► https://www.linkedin.com/in/aleksagordic/
Twitter ► https://twitter.com/gordic_aleksa
Instagram ► https://www.instagram.com/aiepiphany/
Facebook ► https://www.facebook.com/aiepiphany/
👨👩👧👦 JOIN OUR DISCORD COMMUNITY:
Discord ► https://discord.gg/peBrCpheKE
📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER:
Substack ► https://aiepiphany.substack.com/
💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS:
GitHub ► https://github.com/gordicaleksa
📚 FOLLOW ME ON MEDIUM:
Medium ► https://gordicaleksa.medium.com/
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
#neuralstyletransfer #deeplearning #ai | Welcome to the second video in this video series on neural style transfer where you're gonna learn how to do this and Let's jump into the video So this video will give you a deeper understanding of the basic Neural style transfer theory, but before I go there I'd like to give you an overview of the whole series and if you only came for this video feel free to skip directly to it so Let's start So last video was more of a teaser of showing you all the things that neural style transfer can actually do and as I already mentioned this one will be about basic theory and The third one will be about static image neural style transfer using the optimization method LBFGS or or Adam numerical optimizers, whatever so and And in general the the first part of the series will focus on static image neural style transfer whereas the second part will focus on videos and Fourth will be the second the appendix pretty much to this video on a more advanced and neural style transfer theory and Fifth one will focus on so not using the optimization method, but using CNN's You just plug in an image input as an input and you get a stylized image out and I'm also going to teach you how to train your own models so that you can use different styles Then we'll talk a little bit about segmentation which will help you stylize only certain portions of the of the image and Then we'll jump into the videos part starting with primitive video where We're going to learn how to apply it on a per frame basis without using any temporal loss But then we'll start using Including the temporal loss itself inside the models and we'll get much more stable model there and the output also of course The tenth will focus on training those models and the last one in this series will be about going deeper in general like try and use some some other family of models like mobile nets efficient nets some state-of-the-art models and see if that Gives us better results in general Next off I want to tell you more about what I want this series to actually be So I want to be code heavy. It's going to be really practical and except for this video and the advanced theory one and I want to keep it simple. I want to only use PyTorch as a framework and Python as the programming language. So no dual boots no system dependent scripts and no exotic languages such as Lua obsolete frameworks such as torture cafe and no tensorflow even though it's still relevant Especially with a 2.0 version, but I just want to pick one and I think PyTorch is being the battle and it's much more It's nicer to write in Code will be shared through my github repo So you have you can just I want to make it. I want to make it really simple. You can just get clone my repo Create the environment file Create environment using my environment file and that's it. You can start playing straight ahead So that's the end of the series overview now. Let's jump straight ahead into the video itself And let me start off with defining what the actual task will will be so we get one image as As the first input which contains the content that we want to preserve We get a second image which has the style That we want to transfer to this content image. We combine them where the plus denotes Neural style transfer transform and what we get out is composite image That's a stylized version of the content image and that's it. That's a task next off, let's see two basic types of style transfer and The first one is the one I'll be showing in this video series It's the artistic style transfer where the style where the style image we we want to use Is actually artistic image It can be either a cartoonish or some drawings and painting Whatever and the second type of style transfer is the photorealistic style transfer where both of the images are actually real and we try to just just mimic the style of the one of these Onto another and get a composite image out as you can see here on the screen so I thought it'd be worth including some history here and Basically, there's a difference between a style transfer and neural style transfer style transfer is something that's been going on for decades now already and Neural style transfer is just the same thing but using neural neural nets and it all started in natives pretty much where people were using simple signal processing techniques and filters to to get a stylized images out like the one you see here and Then in 2000s they started applying patch based methods like the one here called image analogies Where you need to have image pairs? So the content image in the stylized version and then given the new content image you can stylize it the same as the pair That was previously given and this method gave some decent results But only in 2015 that we get to the neural style transfer I applying ConvNets to do the same thing of transferring and it outperformed every other approach previously developed And now to the core NST algorithm itself. So where it all started it all started in 2015 where Leung Gaitis and his colleagues wrote this research paper titled a neural algorithm of artistic style and What the key finding of the paper was is that the content and the style representations can be decoupled inside a CNN architecture and Specifically a VGG net played a key role in this in this paper and you can see the architecture on the screen a bit more detail on the VGG network itself. So it was trained on the image net data set for the tests of image classification and object localization But it actually wasn't a winner on that year's Classification challenge. It was a first runner-up the first network to the one the competition was Google net or Google lenit But VGG did one the localization tasks Let's see what role VGG had in this NST paper So it helped create a rich and robust Representation of the semantics of the input image. So how we find the counter representation is the following We take some images and input we feed it through the CNN the VGG net here and we take the feature maps from a certain layer like like say a conf of 41 and those feature maps are what it represents the content of the input image and it's really that easy and Just for the sake of making feature maps less abstract Let's see how they actually look like for this concrete image for for this line image And you can see why they are called feature maps They can basically be interpreted as images and they contain either a low level detail such as edges Stuff like that or high level details depending from which layer of the VGG net or in general CNN Do you extract them out? Okay, and now for the fun part. So let's take a Gaussian noise image which will eventually become the stylized image that we want and Feed it through the VGG and we'll get its counter representation, which is currently rubbish Let's see how we can drive it So it has the same representation as the input image so we get those two images in Zimba We feed them through the VGG and we get their feature maps Which are as I already mentioned the content representation the current content representation of those two images We we can flatten those feature maps So each feature map becomes a row and this output matrix and now we need to drive those P and F matrix To be the same and we accomplish that using this loss, which is a simple MSE loss where you would just Take a element-wise subtraction and we do element-wise Squaring on those elements and we just try and drive that that loss to zero. I would try to minimize it And now for the really fun part And we're going to see what happens when we drive the loss down to zero And I'll just give you a couple of seconds to watch the animations So what you can see on the screen is on the left side You see what happens when the you see that the F matrix is getting closer to the P matrix Which is equivalent to the loss getting down to zero and on the right you can see the optimization procedure itself No, it's image becoming slowly becoming the input image The bottom animation is just the whole optimization procedure Whereas the upper animation is just the initial part of the optimization procedure in a slow-mo Because it happens really fast using LBFGS optimizer what you can see on the next screen is that the LBFGS is much more is much faster than the atom optimizer and in only 100 iterations LBFGS already Seems to be a morphing this noise image into content image Whereas atom is only just beginning to do that now for the second most important idea in this video And that's how do we capture the style of an image? So how do we find its style representation? So we have this style image input style image We feed it through this VGG net and we get a set of feature maps this time Taking those from starting from layer com 1 1 and going through layer com 5 1 And what we do is we construct this feature space over these feature maps using something called gram transform So we create gram matrices out of those feature maps and The set of those gram matrices is what ultimately represents the style of the image Or let's call it the style representation and now you might ask what's a what's a gram matrix and that's a legit question So I tick a style image as an input I fed it through the VGG and from one of those layers I constructed a gram matrix, and this is how it looks like this is exactly how it looks like and It answers an important question and that's which feature maps are used in the VGG And it answers an important question and that's which feature maps tend to activate together we already saw how the feature maps look like and In one of the previous slides and now we have an answer to this question so it's a simple covariance matrix between different feature maps and the way we calculate an element in this matrix is by just doing a dot product between two feature maps and That just captures the the texture information as it turns out Let me give you some more intuition behind why gram matrix actually works So here's a hypothetical example where on the upper row we have a Hypothetical output three feature maps and on the bottom row the same thing or just for some other input image And if we would take element wise subtraction between those two rows would get a nonzero output which means we have a nonzero content loss Which means that the input images have different semantics right different content So the dog is upside down on the bottom row But on the other hand if you take a look on the right side, you'll see that the gram matrices are actually the same Which means that the two input images are stylized in the same manner, which is kind of true if you take a look at it and The style loss will be zero because of that So let's make it more explicit how we calculate the style loss So we have the input style image. We have the input noise image We feed them through a VGG. We'll get a set of feature maps here I'm only showing for simplicity just one set of feature maps we construct gram matrices over those feature maps And what we do is just a simple MSC loss again Which is just a element wise projection followed by element wise squared followed by element wise squaring and the final Style loss is actually just a weighted sum of those Terms for every layer in the network and that's it now. I'll do the same thing as a different condom Which let's see what happens when the style representation of the input noise image becomes the same as the style Representation of the input style image, and I'll give you a couple of seconds to just watch the animations watch closely And you can see there's a spike there so What happens is on the left side? G represents the set of gram matrices of the input noise image and a represents the set of Gram matrices of the input style image and as they are getting closer to each other On the right side you can see an animation where an input noise image Initially input noise image is slowly becoming Stylized it's capturing the style of this simple style image although. It's district disregarding the Semantics is just capturing the style. That's it and now putting it all together So the total loss is a way a combination of a common loss and the style loss And what it basically says is the following we want the input noise image you have the same style representation as the input style image and to have the same content representation as the input condom image and That objective might not be fully minimizable because a there does not exist a solution or b We cannot find a solution But still we'll get a visual appearance that we want and just take a look at the animation here And the line is slowly appearing In that style image and we are getting the composite image out that we and that was that was like the whole point of this Video so that's it for the second video if you like my content consider subscribing Gently push that like button and see you in the next video | [{"start": 0.0, "end": 6.4, "text": " Welcome to the second video in this video series on neural style transfer where you're gonna learn how to do this and"}, {"start": 7.04, "end": 9.040000000000001, "text": " Let's jump into the video"}, {"start": 9.68, "end": 13.780000000000001, "text": " So this video will give you a deeper understanding of the basic"}, {"start": 14.44, "end": 17.32, "text": " Neural style transfer theory, but before I go there"}, {"start": 17.32, "end": 23.48, "text": " I'd like to give you an overview of the whole series and if you only came for this video feel free to skip"}, {"start": 23.96, "end": 25.44, "text": " directly to it"}, {"start": 25.44, "end": 27.080000000000002, "text": " so"}, {"start": 27.080000000000002, "end": 28.52, "text": " Let's start"}, {"start": 28.52, "end": 35.16, "text": " So last video was more of a teaser of showing you all the things that neural style transfer can actually do and"}, {"start": 35.64, "end": 39.92, "text": " as I already mentioned this one will be about basic theory and"}, {"start": 40.8, "end": 43.879999999999995, "text": " The third one will be about static image neural style transfer"}, {"start": 44.519999999999996, "end": 46.519999999999996, "text": " using the optimization method"}, {"start": 47.519999999999996, "end": 52.44, "text": " LBFGS or or Adam numerical optimizers, whatever so and"}, {"start": 52.44, "end": 58.16, "text": " And in general the the first part of the series will focus on static image neural style transfer"}, {"start": 58.4, "end": 61.4, "text": " whereas the second part will focus on videos and"}, {"start": 63.12, "end": 70.28, "text": " Fourth will be the second the appendix pretty much to this video on a more advanced and neural style transfer theory and"}, {"start": 71.36, "end": 76.9, "text": " Fifth one will focus on so not using the optimization method, but using CNN's"}, {"start": 76.9, "end": 82.54, "text": " You just plug in an image input as an input and you get a stylized image out and"}, {"start": 83.34, "end": 87.68, "text": " I'm also going to teach you how to train your own models so that you can"}, {"start": 88.34, "end": 90.34, "text": " use different styles"}, {"start": 91.02000000000001, "end": 93.78, "text": " Then we'll talk a little bit about segmentation"}, {"start": 94.66000000000001, "end": 101.06, "text": " which will help you stylize only certain portions of the of the image and"}, {"start": 101.06, "end": 106.66, "text": " Then we'll jump into the videos part starting with primitive video where"}, {"start": 107.58, "end": 112.02000000000001, "text": " We're going to learn how to apply it on a per frame basis without using any temporal loss"}, {"start": 112.38, "end": 114.62, "text": " But then we'll start using"}, {"start": 115.18, "end": 120.86, "text": " Including the temporal loss itself inside the models and we'll get much more stable model there"}, {"start": 121.66, "end": 123.02000000000001, "text": " and"}, {"start": 123.02000000000001, "end": 125.02000000000001, "text": " the output also of course"}, {"start": 125.02, "end": 132.57999999999998, "text": " The tenth will focus on training those models and the last one in this series will be about going deeper in general"}, {"start": 132.57999999999998, "end": 140.74, "text": " like try and use some some other family of models like mobile nets efficient nets some state-of-the-art models and see if that"}, {"start": 141.3, "end": 143.96, "text": " Gives us better results in general"}, {"start": 145.26, "end": 150.04, "text": " Next off I want to tell you more about what I want this series to actually be"}, {"start": 150.04, "end": 154.72, "text": " So I want to be code heavy. It's going to be really practical and"}, {"start": 155.67999999999998, "end": 158.68, "text": " except for this video and the advanced theory one and"}, {"start": 159.92, "end": 161.92, "text": " I want to keep it simple. I want to only use"}, {"start": 162.76, "end": 167.72, "text": " PyTorch as a framework and Python as the programming language. So no dual boots no"}, {"start": 168.35999999999999, "end": 171.39999999999998, "text": " system dependent scripts and no exotic languages such as Lua"}, {"start": 171.92, "end": 177.35999999999999, "text": " obsolete frameworks such as torture cafe and no tensorflow even though it's still relevant"}, {"start": 177.36, "end": 184.8, "text": " Especially with a 2.0 version, but I just want to pick one and I think PyTorch is being the battle and it's much more"}, {"start": 184.92000000000002, "end": 186.92000000000002, "text": " It's nicer to write in"}, {"start": 187.44000000000003, "end": 189.98000000000002, "text": " Code will be shared through my github repo"}, {"start": 190.36, "end": 197.0, "text": " So you have you can just I want to make it. I want to make it really simple. You can just get clone my repo"}, {"start": 197.52, "end": 199.52, "text": " Create the environment file"}, {"start": 199.68, "end": 204.20000000000002, "text": " Create environment using my environment file and that's it. You can start playing straight ahead"}, {"start": 204.2, "end": 209.48, "text": " So that's the end of the series overview now. Let's jump straight ahead into the video itself"}, {"start": 209.48, "end": 216.16, "text": " And let me start off with defining what the actual task will will be so we get one image as"}, {"start": 216.32, "end": 220.56, "text": " As the first input which contains the content that we want to preserve"}, {"start": 221.12, "end": 223.76, "text": " We get a second image which has the style"}, {"start": 224.28, "end": 229.64, "text": " That we want to transfer to this content image. We combine them where the plus denotes"}, {"start": 229.64, "end": 234.11999999999998, "text": " Neural style transfer transform and what we get out is"}, {"start": 235.0, "end": 236.27999999999997, "text": " composite image"}, {"start": 236.27999999999997, "end": 240.48, "text": " That's a stylized version of the content image and that's it. That's a task"}, {"start": 241.6, "end": 246.32, "text": " next off, let's see two basic types of style transfer and"}, {"start": 248.11999999999998, "end": 251.64, "text": " The first one is the one I'll be showing in this video series"}, {"start": 251.64, "end": 257.08, "text": " It's the artistic style transfer where the style where the style image we we want to use"}, {"start": 257.08, "end": 259.8, "text": " Is actually artistic image"}, {"start": 260.4, "end": 263.68, "text": " It can be either a cartoonish or some drawings and painting"}, {"start": 264.15999999999997, "end": 270.96, "text": " Whatever and the second type of style transfer is the photorealistic style transfer where both of the images are actually"}, {"start": 271.59999999999997, "end": 275.71999999999997, "text": " real and we try to just just mimic the style of the"}, {"start": 276.36, "end": 278.08, "text": " one of these"}, {"start": 278.08, "end": 282.47999999999996, "text": " Onto another and get a composite image out as you can see here on the screen"}, {"start": 282.48, "end": 286.96000000000004, "text": " so I thought it'd be worth including some history here and"}, {"start": 287.44, "end": 295.24, "text": " Basically, there's a difference between a style transfer and neural style transfer style transfer is something that's been going on for decades now already and"}, {"start": 295.56, "end": 304.20000000000005, "text": " Neural style transfer is just the same thing but using neural neural nets and it all started in natives pretty much where people were using"}, {"start": 304.96000000000004, "end": 306.28000000000003, "text": " simple"}, {"start": 306.28, "end": 312.79999999999995, "text": " signal processing techniques and filters to to get a stylized images out like the one you see here and"}, {"start": 313.47999999999996, "end": 319.76, "text": " Then in 2000s they started applying patch based methods like the one here called image analogies"}, {"start": 319.88, "end": 322.2, "text": " Where you need to have image pairs?"}, {"start": 322.2, "end": 329.96, "text": " So the content image in the stylized version and then given the new content image you can stylize it the same as the pair"}, {"start": 330.44, "end": 335.15999999999997, "text": " That was previously given and this method gave some decent results"}, {"start": 335.16, "end": 339.90000000000003, "text": " But only in 2015 that we get to the neural style transfer"}, {"start": 340.44, "end": 342.44, "text": " I applying"}, {"start": 342.48, "end": 349.52000000000004, "text": " ConvNets to do the same thing of transferring and it outperformed every other approach previously developed"}, {"start": 350.24, "end": 352.24, "text": " And now to the core"}, {"start": 352.32000000000005, "end": 357.04, "text": " NST algorithm itself. So where it all started it all started in"}, {"start": 358.08000000000004, "end": 359.48, "text": " 2015 where"}, {"start": 359.48, "end": 365.8, "text": " Leung Gaitis and his colleagues wrote this research paper titled a neural algorithm of artistic style and"}, {"start": 366.68, "end": 373.96000000000004, "text": " What the key finding of the paper was is that the content and the style representations can be decoupled inside a"}, {"start": 374.48, "end": 376.48, "text": " CNN architecture and"}, {"start": 376.76, "end": 383.26, "text": " Specifically a VGG net played a key role in this in this paper and you can see the architecture"}, {"start": 383.64000000000004, "end": 386.76, "text": " on the screen a bit more detail on the"}, {"start": 386.76, "end": 394.88, "text": " VGG network itself. So it was trained on the image net data set for the tests of image classification"}, {"start": 395.4, "end": 397.4, "text": " and object localization"}, {"start": 398.12, "end": 401.88, "text": " But it actually wasn't a winner on that year's"}, {"start": 403.03999999999996, "end": 410.15999999999997, "text": " Classification challenge. It was a first runner-up the first network to the one the competition was Google net or Google lenit"}, {"start": 410.68, "end": 414.36, "text": " But VGG did one the localization tasks"}, {"start": 414.36, "end": 417.92, "text": " Let's see what role VGG had in this NST paper"}, {"start": 419.96000000000004, "end": 421.96000000000004, "text": " So it helped create a"}, {"start": 422.8, "end": 424.8, "text": " rich and robust"}, {"start": 425.04, "end": 430.6, "text": " Representation of the semantics of the input image. So how we find the counter representation is the following"}, {"start": 430.6, "end": 435.0, "text": " We take some images and input we feed it through the CNN"}, {"start": 435.52000000000004, "end": 443.0, "text": " the VGG net here and we take the feature maps from a certain layer like like say a conf of 41 and"}, {"start": 443.0, "end": 449.28, "text": " those feature maps are what it represents the content of the input image and it's really that easy and"}, {"start": 449.8, "end": 452.48, "text": " Just for the sake of making feature maps less abstract"}, {"start": 452.48, "end": 456.72, "text": " Let's see how they actually look like for this concrete image for for this line image"}, {"start": 457.44, "end": 459.96, "text": " And you can see why they are called feature maps"}, {"start": 460.44, "end": 466.88, "text": " They can basically be interpreted as images and they contain either a low level detail such as edges"}, {"start": 466.88, "end": 473.68, "text": " Stuff like that or high level details depending from which layer of the VGG net or in general CNN"}, {"start": 473.68, "end": 478.2, "text": " Do you extract them out? Okay, and now for the fun part. So let's take a"}, {"start": 479.0, "end": 483.68, "text": " Gaussian noise image which will eventually become the stylized image that we want and"}, {"start": 484.48, "end": 490.0, "text": " Feed it through the VGG and we'll get its counter representation, which is currently rubbish"}, {"start": 490.36, "end": 492.08, "text": " Let's see how we can drive it"}, {"start": 492.08, "end": 498.47999999999996, "text": " So it has the same representation as the input image so we get those two images in Zimba"}, {"start": 498.96, "end": 502.84, "text": " We feed them through the VGG and we get their feature maps"}, {"start": 503.15999999999997, "end": 507.28, "text": " Which are as I already mentioned the content representation the current"}, {"start": 507.91999999999996, "end": 509.91999999999996, "text": " content representation of those two images"}, {"start": 510.44, "end": 512.68, "text": " We we can flatten those feature maps"}, {"start": 512.68, "end": 520.12, "text": " So each feature map becomes a row and this output matrix and now we need to drive those P and F matrix"}, {"start": 520.12, "end": 527.88, "text": " To be the same and we accomplish that using this loss, which is a simple MSE loss where you would just"}, {"start": 528.4, "end": 531.6, "text": " Take a element-wise subtraction and we do element-wise"}, {"start": 532.52, "end": 539.4, "text": " Squaring on those elements and we just try and drive that that loss to zero. I would try to minimize it"}, {"start": 540.72, "end": 543.0, "text": " And now for the really fun part"}, {"start": 543.0, "end": 548.04, "text": " And we're going to see what happens when we drive the loss down to zero"}, {"start": 548.04, "end": 550.7199999999999, "text": " And I'll just give you a couple of seconds to watch the animations"}, {"start": 555.64, "end": 558.68, "text": " So what you can see on the screen is on the left side"}, {"start": 558.68, "end": 563.48, "text": " You see what happens when the you see that the F matrix is getting closer to the P matrix"}, {"start": 563.48, "end": 569.9599999999999, "text": " Which is equivalent to the loss getting down to zero and on the right you can see the optimization procedure itself"}, {"start": 570.52, "end": 573.4399999999999, "text": " No, it's image becoming slowly becoming the input image"}, {"start": 573.44, "end": 577.72, "text": " The bottom animation is just the whole optimization procedure"}, {"start": 577.72, "end": 583.24, "text": " Whereas the upper animation is just the initial part of the optimization procedure in a slow-mo"}, {"start": 583.7600000000001, "end": 585.7600000000001, "text": " Because it happens really fast using"}, {"start": 586.2800000000001, "end": 595.12, "text": " LBFGS optimizer what you can see on the next screen is that the LBFGS is much more is much faster than the atom"}, {"start": 595.12, "end": 599.0, "text": " optimizer and in only 100 iterations LBFGS already"}, {"start": 599.0, "end": 602.92, "text": " Seems to be a morphing this noise image into content image"}, {"start": 602.92, "end": 608.36, "text": " Whereas atom is only just beginning to do that now for the second most important idea in this video"}, {"start": 608.36, "end": 614.56, "text": " And that's how do we capture the style of an image? So how do we find its style representation?"}, {"start": 615.28, "end": 619.28, "text": " So we have this style image input style image"}, {"start": 619.28, "end": 624.52, "text": " We feed it through this VGG net and we get a set of feature maps this time"}, {"start": 624.52, "end": 630.68, "text": " Taking those from starting from layer com 1 1 and going through layer com 5 1"}, {"start": 631.16, "end": 635.96, "text": " And what we do is we construct this feature space over these feature maps"}, {"start": 636.76, "end": 638.76, "text": " using something called"}, {"start": 639.0799999999999, "end": 640.6, "text": " gram transform"}, {"start": 640.6, "end": 645.36, "text": " So we create gram matrices out of those feature maps and"}, {"start": 645.68, "end": 650.48, "text": " The set of those gram matrices is what ultimately represents the style of the image"}, {"start": 650.48, "end": 658.2, "text": " Or let's call it the style representation and now you might ask what's a what's a gram matrix and that's a legit question"}, {"start": 659.0, "end": 665.08, "text": " So I tick a style image as an input I fed it through the VGG and from one of those layers"}, {"start": 665.6, "end": 670.8000000000001, "text": " I constructed a gram matrix, and this is how it looks like this is exactly how it looks like and"}, {"start": 671.32, "end": 676.44, "text": " It answers an important question and that's which feature maps are used in the VGG"}, {"start": 676.44, "end": 682.0, "text": " And it answers an important question and that's which feature maps tend to activate together"}, {"start": 682.5200000000001, "end": 686.0400000000001, "text": " we already saw how the feature maps look like and"}, {"start": 686.7600000000001, "end": 690.62, "text": " In one of the previous slides and now we have an answer to this question"}, {"start": 691.24, "end": 693.9200000000001, "text": " so it's a simple covariance matrix between"}, {"start": 694.84, "end": 696.6800000000001, "text": " different feature maps and"}, {"start": 696.6800000000001, "end": 703.08, "text": " the way we calculate an element in this matrix is by just doing a dot product between two feature maps and"}, {"start": 703.08, "end": 707.2800000000001, "text": " That just captures the the texture information as it turns out"}, {"start": 708.8000000000001, "end": 712.4000000000001, "text": " Let me give you some more intuition behind why gram matrix actually works"}, {"start": 713.72, "end": 717.64, "text": " So here's a hypothetical example where on the upper row we have a"}, {"start": 718.0400000000001, "end": 724.44, "text": " Hypothetical output three feature maps and on the bottom row the same thing or just for some other input image"}, {"start": 725.0, "end": 731.12, "text": " And if we would take element wise subtraction between those two rows would get a nonzero output"}, {"start": 731.12, "end": 733.68, "text": " which means we have a nonzero content loss"}, {"start": 734.72, "end": 739.96, "text": " Which means that the input images have different semantics right different content"}, {"start": 739.96, "end": 742.48, "text": " So the dog is upside down on the bottom row"}, {"start": 743.04, "end": 748.48, "text": " But on the other hand if you take a look on the right side, you'll see that the gram matrices are actually the same"}, {"start": 749.2, "end": 755.88, "text": " Which means that the two input images are stylized in the same manner, which is kind of true if you take a look at it and"}, {"start": 756.6, "end": 759.16, "text": " The style loss will be zero because of that"}, {"start": 759.16, "end": 762.7199999999999, "text": " So let's make it more explicit how we calculate the style loss"}, {"start": 763.24, "end": 766.92, "text": " So we have the input style image. We have the input noise image"}, {"start": 767.48, "end": 771.12, "text": " We feed them through a VGG. We'll get a set of feature maps here"}, {"start": 771.12, "end": 778.36, "text": " I'm only showing for simplicity just one set of feature maps we construct gram matrices over those feature maps"}, {"start": 778.68, "end": 781.76, "text": " And what we do is just a simple MSC loss again"}, {"start": 783.0, "end": 787.48, "text": " Which is just a element wise projection followed by element wise squared"}, {"start": 787.48, "end": 789.64, "text": " followed by element wise squaring and"}, {"start": 790.64, "end": 792.2, "text": " the final"}, {"start": 792.2, "end": 794.96, "text": " Style loss is actually just a weighted sum of those"}, {"start": 795.48, "end": 802.36, "text": " Terms for every layer in the network and that's it now. I'll do the same thing as a different condom"}, {"start": 802.36, "end": 808.2, "text": " Which let's see what happens when the style representation of the input noise image becomes the same as the style"}, {"start": 808.2, "end": 817.4000000000001, "text": " Representation of the input style image, and I'll give you a couple of seconds to just watch the animations watch closely"}, {"start": 818.0400000000001, "end": 820.5200000000001, "text": " And you can see there's a spike there so"}, {"start": 821.08, "end": 823.08, "text": " What happens is on the left side?"}, {"start": 823.2800000000001, "end": 829.76, "text": " G represents the set of gram matrices of the input noise image and a represents the set of"}, {"start": 829.96, "end": 834.96, "text": " Gram matrices of the input style image and as they are getting closer to each other"}, {"start": 834.96, "end": 839.52, "text": " On the right side you can see an animation where an input noise image"}, {"start": 840.24, "end": 842.6800000000001, "text": " Initially input noise image is slowly becoming"}, {"start": 844.08, "end": 849.48, "text": " Stylized it's capturing the style of this simple style image although. It's district disregarding the"}, {"start": 849.88, "end": 853.8000000000001, "text": " Semantics is just capturing the style. That's it and now putting it all together"}, {"start": 854.12, "end": 859.98, "text": " So the total loss is a way a combination of a common loss and the style loss"}, {"start": 859.98, "end": 865.24, "text": " And what it basically says is the following we want the input noise image you have the same"}, {"start": 865.72, "end": 869.48, "text": " style representation as the input style image and to have the same"}, {"start": 870.2, "end": 872.72, "text": " content representation as the input condom image and"}, {"start": 873.32, "end": 880.04, "text": " That objective might not be fully minimizable because a there does not exist a solution or b"}, {"start": 880.08, "end": 882.08, "text": " We cannot find a solution"}, {"start": 882.08, "end": 889.96, "text": " But still we'll get a visual appearance that we want and just take a look at the animation here"}, {"start": 889.96, "end": 893.08, "text": " And the line is slowly appearing"}, {"start": 894.08, "end": 899.88, "text": " In that style image and we are getting the composite image out that we and that was that was like the whole point of this"}, {"start": 899.88, "end": 904.2, "text": " Video so that's it for the second video if you like my content consider subscribing"}, {"start": 904.2, "end": 911.0600000000001, "text": " Gently push that like button and see you in the next video"}] |
Aleksa Gordić - The AI Epiphany | https://www.youtube.com/watch?v=S78LQebx6jo | Intro | Neural Style Transfer #1 | ❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Kicking off the neural style transfer series! 🎨
Neural style transfer is in its most basic form about transferring style from a style image onto a content image using neural nets.
In its more complex form you can:
✔️ Transfer style to videos (and additionally use temporal loss)
✔️ Choose whether to keep the color from the content image or take it from the style image
✔️ Use segmentation masks to specify objects which should be styled
and many more cool things - stay tuned for the series!
[Credit] Music:
https://www.youtube.com/watch?v=J2X5mJ3HDYE [NCS]
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💰 BECOME A PATREON OF THE AI EPIPHANY ❤️
If these videos, GitHub projects, and blogs help you,
consider helping me out by supporting me on Patreon!
The AI Epiphany ► https://www.patreon.com/theaiepiphany
One-time donations: https://www.paypal.com/paypalme/theaiepiphany
Much love! ❤️
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition,
rather than the algebraic and numerical "intuition".
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
👋 CONNECT WITH ME ON SOCIAL
LinkedIn ► https://www.linkedin.com/in/aleksagordic/
Twitter ► https://twitter.com/gordic_aleksa
Instagram ► https://www.instagram.com/aiepiphany/
Facebook ► https://www.facebook.com/aiepiphany/
👨👩👧👦 JOIN OUR DISCORD COMMUNITY:
Discord ► https://discord.gg/peBrCpheKE
📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER:
Substack ► https://aiepiphany.substack.com/
💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS:
GitHub ► https://github.com/gordicaleksa
📚 FOLLOW ME ON MEDIUM:
Medium ► https://gordicaleksa.medium.com/
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
#neuralstyletransfer #deeplearning #ai | What's up? Welcome to my channel on artificial intelligence and in this first video series on neural style transfer you're going to learn how to do this. Oops, too dark. Let me try and fix it. Okay, now it's better. So as I was saying in this video series I'm going to teach you all about neural style transfer and let me use something less creepy here. Nice. So in its most basic form, neural style transfer is about transferring style from a static style image onto a static content image and getting a composite image out, which has this nice property that it takes the content from the content image and it takes the style from the style image. It's that easy. So in its more complex form, I'm going to teach you how to transfer style to videos, how to add segmentation masks around you like the one I have here around me, how to set up your network training so that you get the style and the visual appearance that you actually want. And you can choose whether to keep the color from the content image or take it from the style image. Oops, just lost my background here. Wait a sec. Actually, it doesn't matter. Let me show you what you can do using segmentation masks. So you can either add a background here or you can kind of ingress it. And now I'm without a style and the background is styled. You can make everything styled or you can make everything black. Okay, that combination didn't make any sense. You can also train different models to get different styles like these. First one and the second one and the third one, fourth, fifth, and the last one, sixth. So up until now, I was using style transfer on videos without including any temporal loss. So if I add the loss, we get this. And you can see how much more stable the video is. It's got way less flickering and it's just more visually pleasing to watch. But it will be harder to train these temporary stable models. And I only got this one so far. But I'll be learning and building better stuff together with you folks throughout this series. So stay tuned and subscribe. See you in the next video. | [{"start": 0.0, "end": 7.1000000000000005, "text": " What's up? Welcome to my channel on artificial intelligence and in this first video series"}, {"start": 7.1000000000000005, "end": 14.98, "text": " on neural style transfer you're going to learn how to do this. Oops, too dark. Let me try"}, {"start": 14.98, "end": 20.02, "text": " and fix it. Okay, now it's better. So as I was saying in this video series I'm going"}, {"start": 20.02, "end": 25.7, "text": " to teach you all about neural style transfer and let me use something less creepy here."}, {"start": 25.7, "end": 31.38, "text": " Nice. So in its most basic form, neural style transfer is about transferring style from"}, {"start": 31.38, "end": 36.519999999999996, "text": " a static style image onto a static content image and getting a composite image out, which"}, {"start": 36.519999999999996, "end": 41.480000000000004, "text": " has this nice property that it takes the content from the content image and it takes the style"}, {"start": 41.480000000000004, "end": 46.82, "text": " from the style image. It's that easy. So in its more complex form, I'm going to teach"}, {"start": 46.82, "end": 52.96, "text": " you how to transfer style to videos, how to add segmentation masks around you like the"}, {"start": 52.96, "end": 60.82, "text": " one I have here around me, how to set up your network training so that you get the style"}, {"start": 60.82, "end": 65.34, "text": " and the visual appearance that you actually want. And you can choose whether to keep the"}, {"start": 65.34, "end": 71.12, "text": " color from the content image or take it from the style image. Oops, just lost my background"}, {"start": 71.12, "end": 77.2, "text": " here. Wait a sec. Actually, it doesn't matter. Let me show you what you can do using segmentation"}, {"start": 77.2, "end": 83.64, "text": " masks. So you can either add a background here or you can kind of ingress it. And now"}, {"start": 83.64, "end": 90.72, "text": " I'm without a style and the background is styled. You can make everything styled or"}, {"start": 90.72, "end": 97.68, "text": " you can make everything black. Okay, that combination didn't make any sense. You can"}, {"start": 97.68, "end": 105.52000000000001, "text": " also train different models to get different styles like these. First one and the second"}, {"start": 105.52, "end": 119.08, "text": " one and the third one, fourth, fifth, and the last one, sixth. So up until now, I was"}, {"start": 119.08, "end": 126.75999999999999, "text": " using style transfer on videos without including any temporal loss. So if I add the loss, we"}, {"start": 126.75999999999999, "end": 134.44, "text": " get this. And you can see how much more stable the video is. It's got way less flickering"}, {"start": 134.44, "end": 141.4, "text": " and it's just more visually pleasing to watch. But it will be harder to train these temporary"}, {"start": 141.4, "end": 149.4, "text": " stable models. And I only got this one so far. But I'll be learning and building better"}, {"start": 149.4, "end": 155.96, "text": " stuff together with you folks throughout this series. So stay tuned and subscribe. See you"}, {"start": 155.96, "end": 165.32, "text": " in the next video."}] |