| column | type |
|---|---|
| id | string (length 11) |
| channel | string (2 distinct values) |
| channel_id | string (2 distinct values) |
| title | string (length 12-100) |
| categories | sequence |
| tags | sequence |
| description | string (length 66-5k) |
| text | string (length 577-90.4k) |
| segments | list |

id: z4lAlVRwbrc
channel: Yannic Kilcher
channel_id: UCZHmQk67mSJgfCCTn7xBfew
title: Author Interview - Improving Intrinsic Exploration with Language Abstractions
categories: ["Science & Technology"]
tags: [""]
description: #reinforcementlearning #ai #explained
This is an interview with Jesse Mu, first author of the paper.
Original Paper Review: https://youtu.be/NeGJAUSQEJI
Exploration is one of the oldest challenges for reinforcement learning algorithms, with no clear solution to date. Especially in environments with sparse rewards, agents struggle to decide which parts of the environment to explore further. Providing intrinsic motivation in the form of a pseudo-reward is sometimes used to overcome this challenge, but it often relies on hand-crafted heuristics and can lead to deceptive dead-ends. This paper proposes using language descriptions of encountered states as a measure of novelty. In two procedurally generated environment suites, the authors demonstrate the usefulness of language, which is inherently concise and abstract and therefore lends itself well to this task.
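To make the core idea concrete, here is a minimal Python sketch (not the paper's implementation) of a count-based novelty bonus computed over language descriptions of states; the `message` field, the scale, and the 1/sqrt(N) decay are illustrative assumptions.

```python
from collections import defaultdict
from math import sqrt

class LanguageNoveltyBonus:
    """Count-based intrinsic bonus over language descriptions of states."""

    def __init__(self, scale: float = 1.0):
        self.scale = scale
        self.counts = defaultdict(int)  # times each description has been seen

    def __call__(self, description: str) -> float:
        if not description:  # many time steps carry no message at all
            return 0.0
        self.counts[description] += 1
        # decay with visitation count, so frequent descriptions stop being rewarding
        return self.scale / sqrt(self.counts[description])

# Hypothetical usage with an environment that exposes oracle messages:
# bonus = LanguageNoveltyBonus(scale=0.1)
# r_total = r_extrinsic + bonus(info.get("message", ""))
```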
OUTLINE:
0:00 - Intro
0:55 - Paper Overview
4:30 - Aren't you just adding extra data?
9:35 - Why are you splitting up the AMIGo teacher?
13:10 - How do you train the grounding network?
16:05 - What about causally structured environments?
17:30 - Highlights of the experimental results
20:40 - Why is there so much variance?
22:55 - How much does it matter that we are testing in a video game?
27:00 - How does novelty interface with the goal specification?
30:20 - The fundamental problems of exploration
32:15 - Are these algorithms subject to catastrophic forgetting?
34:45 - What current models could bring language to other environments?
40:30 - What does it take in terms of hardware?
43:00 - What problems did you encounter during the project?
46:40 - Where do we go from here?
Paper: https://arxiv.org/abs/2202.08938
Abstract:
Reinforcement learning (RL) agents are particularly hard to train when rewards are sparse. One common solution is to use intrinsic rewards to encourage agents to explore their environment. However, recent intrinsic exploration methods often use state-based novelty measures which reward low-level exploration and may not scale to domains requiring more abstract skills. Instead, we explore natural language as a general medium for highlighting relevant abstractions in an environment. Unlike previous work, we evaluate whether language can improve over existing exploration methods by directly extending (and comparing to) competitive intrinsic exploration baselines: AMIGo (Campero et al., 2021) and NovelD (Zhang et al., 2021). These language-based variants outperform their non-linguistic forms by 45-85% across 13 challenging tasks from the MiniGrid and MiniHack environment suites.
Authors: Jesse Mu, Victor Zhong, Roberta Raileanu, Minqi Jiang, Noah Goodman, Tim Rocktäschel, Edward Grefenstette
Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://ykilcher.com/discord
BitChute: https://www.bitchute.com/channel/yannic-kilcher
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/2017636191
If you want to support me, the best thing to do is to share out the content :)
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n | Hello, this is an interview with Jesse Mu, who is the first author of the paper improving intrinsic exploration with language abstractions. This paper is really cool because it combines the knowledge that is inherent in language with the problem of exploration in reinforcement learning. I've made a comprehensive review of this paper in the last video, so be sure to check that out. Today, Jesse has seen the video and we're able to dive right into the questions, criticisms and anything that came up during the video. The interview was super valuable to me. I learned a lot. I hope you do too. If you like, then please leave a like on the video. Tell me what you think in the comments. Tell me how I can make these videos better above all else. And I'll see you around. Bye bye. Hi, everyone. Today, I'm here with Jesse Mu, who is the first author of the paper improving intrinsic exploration with language abstractions, which is a really cool paper. I've enjoyed reading it. I like the bringing language into the reinforcement learning domain. I think it makes a lot of sense and I was very happy to see this paper. Yeah, Jesse, welcome to the channel. Yeah, thanks for having me. So I've presumably the viewers here have already seen my little review of the paper. What would be your maybe for people who haven't seen that or just in your words, your like short elevator pitch of the paper itself? What would that be? Yeah. So the way that I would pitch the paper is that reinforcement learning for a while now has wrestled with perhaps the central problem, which is how do we encourage exploration in these environments with more complex tasks and longer time horizons where the extrinsic reward that you get from the environment is very sparse. So in the absence of extrinsic rewards, how do we encourage agents to explore? And typically the way we do so is we assume and this is a very cognitively appealing intuition that we should motivate an agent to achieve novelty in the environment. We should make it do things that it hasn't done before, encounter states that it hasn't seen before, et cetera. And then hopefully we'll enable the agent to acquire the skills that we actually want the agent to acquire in the environment. But the problem with this, of course, is how we define novelty. In a lot of scenarios, there are environments that can look very different, but they have the same underlying semantics. So the example I have in the paper is like a kitchen and the appliances might be differently branded and differently colored, but ultimately every kitchen is a kitchen. And the way that you approach kitchens and the way that you operate in them is the same. And so the idea of this paper is we should be using natural language as the measure for how we describe states and how we describe actions within states and use kind of traditional approaches to exploration, reinforcement learning, but simply parameterize them with language rather than with state abstractions, which is usually the way in which exploration is done in these kinds of environments. And so what we do is we take existing state of the art exploration methods and then kind of see what happens when you swap in language as a component. And do you get better performance? 
And we showed that in a variety of settings, at least in the kinds of RL environments that people have been looking at in recent work, we do see again in using language to parameterize exploration rather than states. Yeah. I think it's very apt to describe it as you, it's not suggesting like a new exploration algorithm, but it's simply the re-parameterization in terms of language. And coincidentally, these environments, they do come with this kind of language annotations, which we do focus on. I like that. So I think what I really liked about this paper is just the research mindset in that any other paper or a lot of other papers, they would have done, they would have tried doing like three things at the same time. Like you know, we have a language generator and we do this and we do that. And what you're I think doing correctly from a standpoint of research is you keep pretty much everything constant, the algorithms constant, right? Even the environments, you assume that you have a perfect language oracle and you just add the language, which I really appreciate as like a reviewer, let's say. So I think this gets us right into our or my biggest, essentially criticism of the paper or what I called in that you add language to these algorithms, but you just said we swap in language. And to me, it felt more like it's not really a swapping in. It's more like you add language on top of what these algorithms are doing. And therefore, can't I just see your method as adding more data? Essentially, there is features that are available from the simulator, right, which the other methods just don't use, they just discard this part and you just add this part. Do you have an indication in how much of your effect is really due to language and how much of the effect is just due to the fact that you have more data available? Yeah, that's a great question. And it's definitely a point that I think a lot of people will fairly make against the paper is, yeah, we're using extra data, right? And yeah, I think my verb swap was maybe only accurate in half of this paper, which is that in Amigo, which is the first method that we look at, it really is a swap, right? So if you read the paper, the traditional kind of Amigo teacher network proposes coordinates X, Y positions as goals. And here we're just completely eliminating that kind of goal specification and we're moving towards language. So that can be seen as more of a swap. Although of course, in novelty, which is the second method that we look at, that is definitely more of kind of an addition, as you say, because we keep the extrinsic bonus and we do have experiments that measure what happens if you don't have novelty by itself. You only have the kind of language novelty bonus and it doesn't do as well. So you're right that I would say that we explore this idea of swapping in language in a bit of the paper, but there are points where it's more of kind of a bolt on and we're not like super clearly looking at or distinguishing when is it okay to have language just be a complete drop in replacement versus just some additional information. So yeah, I think we're showing that in general, if you're trying to add language into these environments, you're seeing a gain, but how precisely that gain manifests is still a little requires some more exploration for sure. So I guess more generally to your comment on using extra data. Yeah, I mean, I think we have some intuition that this data should help, right? 
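A rough sketch of the distinction drawn here, with hypothetical helper names: in the AMIGo variant the language goal replaces the coordinate goal outright, while in the NovelD variant a language-novelty term is added on top of the existing state-based bonus.

```python
def amigo_goal(teacher, obs):
    # original AMIGo: the teacher proposes an (x, y) coordinate as the student's goal
    return teacher.propose_coordinate(obs)        # hypothetical helper, e.g. (3, 7)

def l_amigo_goal(teacher, obs):
    # language variant: the coordinate goal is swapped out for a language goal
    return teacher.propose_language_goal(obs)     # e.g. "go to the red door"

def l_noveld_reward(r_extrinsic, noveld_bonus, language_bonus, alpha=0.5):
    # language variant of NovelD: the state-based bonus is kept and a
    # language-novelty term is added on top rather than replacing it
    return r_extrinsic + noveld_bonus + alpha * language_bonus
```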
It's a fairly clean linguistic signal, but how to use this data concretely is an open question, right? And so that's kind of where I view the contribution of this paper as even though we have some intuition that adding extra data will help, we actually need the equations written down, right? And here are two concrete ways in which we can operationalize this data for the purposes of actually getting better performance in your environment. And there are a lot of examples of this in machine learning, right? So like you have some large language model, for example, and then you want to fine tune it for some domain or you want to fine tune it on human preferences. I mean, that's fundamentally, you're adding extra data for the purposes of getting something that works well on a task that you care about, right? And how to use that data is the open question. The other point that I would say is that we have some deep seated intuition that this language should help. As you say, it's really high quality. It comes from an Oracle. It comes from the game engine. But we actually still need to get that kind of empirical verification that it works, right? And there's actually a lot of reasons why maybe these experiments might not have worked out. For example, the language is Oracle generated, as I mentioned, but it is also very noisy. So as I described in kind of the method section of the paper, most of the messages that you see in the environments are actually not necessary to complete the extrinsic task. And I kind of exhaustively show which of the messages do matter. And so it could be the case that, well, the language signal, at least in these environments, is too noisy. The state abstraction captures all of the factors of variation that you might care about in an environment. And so you don't ultimately need language, right? And that's an imperial question that we have to measure. And so I view this paper as providing that empirical verification, which in hindsight, I think, is a fairly straightforward intuition. It's something that I definitely thought would happen. But yeah, it's nice to see those results kind of in writing. Yes, it's easy. I think you're right. It's easy to look back and say, of course, like, well, all you do is you do this. But exploration has been since since, you know, people have thought about reinforcement learning, they've obviously thought about exploration methods and intrinsic rewards are like as old as Schmidhuber himself. And we you know, the fact is that, you know, new things are developed. And this is at least one of the first things into into really the direction of incorporating. There have been incorporation of languages before, but a systematic adding it to the state of the art methods. And it seems like I am I am convinced the method at least the El Amigo method is quite well outlined, I think, in these diagrams, the contrast of the left being the original Amigo and the right side being the language Amigo. A question I had right here is that on the left side, you have this teacher network, and it simply outputs a coordinate to reach and it has to pay attention to the fact that the coordinate is not too hard and not too easy, right? Therefore, it has to learn that too easy coordinate. Yes, one that is, you know, close, but also it has to learn maybe unreachable coordinates or coordinates that are inside the walls, right? They can't be reached or something like this. 
However, on the right side in the language, I mean, you seem to split these two tasks out into one network that that determines which goals can even be reached and one that then orders them essentially, why? Why are you doing this? Like what's the is there a particular reason behind why one network couldn't do both at the same time? Yeah, so the reason why we split the Amigo network up into two parts, and as you say, we don't have to do this. And there are ablation studies in the appendix that shows what happens if you get rid of the grounding and you just have a single network predicting both goal achievability and, you know, actual the actual goal that's seen by the students. So it kind of a goal difficulty network. It does find in some environments, especially in mini hack, but it doesn't do as well in other environments such as mini grid. And part of the reason, as you've described, is that at least in these environments, the coordinate space stays consistent across episodes. And so you're right that there are some coordinates that are perhaps unreachable in certain environments and not in others, but there's much less variation than the set of language goals that are achievable in an environment because the environment will have different colored doors, for example. And so the goal go to the red door only makes sense in, let's say, half of your environments. So it's possible for the teacher to the Alamigo teacher to hopefully learn this distinction kind of just through, you know, the policy gradient method. So basically just like Amigo, but this is relatively sample inefficient because the problem is that when you propose a goal that's simply impossible in the environment and you get negative reward, that negative reward only comes after the student has tried to complete the goal for, let's say, a few hundred steps. Right. And so it's a relatively sample of inefficient way of telling the teacher, hey, the student did not achieve this goal in the environment. And moreover, that negative reward, you know, there's two possible sources of that reward. Right. So if the student never completed the goal, is it the case that it was just too difficult for the student, but it is achievable in practice? Or is it that the goal is simply never achievable in the first place in the environment? Right. And those kind of two failure cases are a little bit hard to distinguish. Whereas we have kind of this more frequent source of supervision, which is simply, you know, as the student is randomly exploring in the environment, it's encountering a lot of goals, a lot of messages because we have a language annotator and we're kind of, you know, if we if we kind of ignore that signal, that seems like something that we should be using. And so we have kind of this dual thing where we have a grounding number, which is updated more frequently in the environment, which is updated from the messages that are seen by the students. And then finally, the policy network, which is actually trained to satisfy the kind of difficulty objective and actually get the student to complete goals in the environment. Can you go a little bit more into because that was, I think, the only part that confused me a little bit, which is the how exactly you train this grounding network. There is a there is this this notion of whatever the first language description encountered along a trajectory being sort of the positive sample and then the rest being the negative samples. 
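An illustrative sketch of the two-part teacher described above, under simplifying assumptions (a fixed vocabulary of candidate language goals and a flat observation vector): a grounding head estimates which goals are currently achievable, and a separate policy head chooses among the goals that pass that filter.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class LanguageTeacher(nn.Module):
    """Sketch of a teacher split into a grounding head and a goal-policy head."""

    def __init__(self, obs_dim: int, num_goals: int, hidden: int = 128):
        super().__init__()
        self.grounding = nn.Sequential(  # logits for P(goal is achievable | obs)
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, num_goals))
        self.policy = nn.Sequential(     # logits for which grounded goal to propose
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, num_goals))

    def propose(self, obs: torch.Tensor) -> torch.Tensor:
        achievable = torch.sigmoid(self.grounding(obs)) > 0.5   # boolean goal mask
        logits = self.policy(obs)
        if achievable.any():  # only mask if at least one goal is predicted achievable
            logits = logits.masked_fill(~achievable, float("-inf"))
        return Categorical(logits=logits).sample()

# teacher = LanguageTeacher(obs_dim=16, num_goals=8)
# goal_idx = teacher.propose(torch.randn(16))   # index into the goal vocabulary
```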
And that kind of confused me because it means the negative samples would also include goals that were encountered just not as the first message. Could you maybe clarify maybe I didn't understand something right? Or maybe I don't, you know, see the reasoning behind this exact choice. Yeah. So I think your intuition is correct. I think you've described it correctly. It is kind of a weird thing to do, which is that we are treating negative samples as basically all of the goals besides the first one that was achieved. Right. And of course, that is incorrectly treating negative samples of goals that were achieved later. Right. So negative samples are noisily generated, as I as I say, in the limit, this noise should even out, though. So you can compare, you know, like we're just kind of noisy, noisily generating negative samples here. We can compare that to maybe a setting where we had a more oracle sense of when a goal is truly infeasible in an environment. Right. And so what happens is, you know, just in general, a goal is going to appear in this negative sample term more and more often as we train the network. But because it's we're kind of, you know, downweighing all possible goals in the space, the idea is that hopefully, you know, this noise of of class of incorrectly classifying a goal is unachievable in an environment kind of evens out over time. Right. And so, yeah, it's a little bit tricky because we don't have the oracle saying, oh, you can't achieve this goal in an environment. Right. We only know that. Well, you know, the student just didn't happen to achieve the goal in this environment. So I could imagine other ways in which you try to come up with some heuristic that better captures this idea of kind of unachievability. But this is what we came up with, which seems to work reasonably well in practice. And alternative way that you can interpret this is we're not really measuring true achievability. Like, you know, is this at all possible in an environment? What we're really trying to have the grounding network capture here is what are the goals that the student tends to reach? So like are feasible at the current state of training, right? The current policy, what goals can it reach? And that's really what we need, right, is we need like to propose goals that at least for now are eventually reachable by a student. And that doesn't mean that it's, you know, unachievable in all possible students under all possible environments, but at least just for current, you know, in the current stage of the training process, it's a reasonable target. I can imagine that this gets very, that this may require an adjustment or that this breaks down in environments that are more causally structured. For example, if I always have to go through the green door before I reach the red door, right, then the goal would always be in any trajectory that I do, the green door would always be the first goal. And therefore my grounding network would never recognize the red door as a reachable goal, because that's always going to be at least the second goal, right? So I guess depending on the environment, it's not hard to make a change to this, obviously, in that case, but I guess that's one thing that might have to adjust a little bit to the environment at hand. Yeah, that's a that's a great point is that we do not. There are settings where you might just, you know, want to run it without the grounding network. And obviously, that's actually a simpler version. So it should be fairly easy to experiment with that. 
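A hedged sketch of the training signal for the grounding network as described above: the first language goal the student achieves in an episode is labelled positive and every other candidate goal is, noisily, labelled negative; the helper names and the fixed goal vocabulary are assumptions.

```python
import torch
import torch.nn.functional as F

def grounding_loss(goal_logits: torch.Tensor, first_achieved_idx: int) -> torch.Tensor:
    """goal_logits: (num_goals,) achievability logits predicted for this episode."""
    targets = torch.zeros_like(goal_logits)
    targets[first_achieved_idx] = 1.0   # positive: first message/goal the student reached
    # every other goal is treated as a negative, even ones achieved later in the
    # episode; the bet is that this label noise averages out over many episodes
    return F.binary_cross_entropy_with_logits(goal_logits, targets)

# loss = grounding_loss(teacher.grounding(obs), first_achieved_idx=3)
# loss.backward()
```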
And also, in the setting that you described, what will happen is, like you say, you know, the green the go to the green door goal will get a lot of weight, but hopefully can be counteracted to some degree by the policy network, which will, you know, learn to not put any weight on that once it realizes that it's getting absolutely zero reward for that setting. But I agree that this kind of introduces some weird training dynamics that we don't really want might be cleaner just to remove the grounding network entirely. If you as as you say, you've looked at my paper review a little bit, I didn't go too much into the experimental results as such. Is there also I didn't go into the appendix at all, because honestly, I haven't read the appendix because I sometimes I don't I think I should probably. But is there anything that you want to highlight specifically about the experimental results or or maybe something that you did in the expand appendix, which is also has a lot of experiments in it? Things that you think people should take away from the paper from the experiment section? Yeah, so broad takeaways are and I think that you mentioned this in the review is, you know, we're in these kind of DRL environments and and the individual training runs are just incredibly noisy, you know, and that can be sometimes like rather difficult to get a sense of, oh, is my method actually working better than others? Right. But there has been some great recent work from I think a team at Miele, which won an outstanding paper award at New York's last year, which was called deep reinforcement learning on the edge of the statistical precipice. And the basic idea is, you know, we're compute constrained. We have these environments, they're very high variance. But even despite all of this, you know, what are the kind of statistical best principles that we can follow to really see whether or not our methods are actually making a measurable and replicable difference in the environments that we're testing? And so they have a lot of good recommendations, which we try to subscribe to as close as possible in this setting. Right. So these training curves here give you kind of a qualitative sense about not only kind of the ultimate performance attained by any of the models, but also of the differences in sample efficiency that we see. Right. So it could be the case that, well, ultimately, both Amigo and El Amigo reach the same asymptotic performance, but Amigo just gets there faster or more reliably. And that's something that you can, sorry, El Amigo gets there faster and more reliably. And that's something that you can look at in these graphs. But I think the more kind of statistically rigorous way of verifying that language is giving a gain in the environments is in the subsequent figure, which is figure four, which should be right below this one, I think. And this is really, you know, us trying to statistically verify, you know, is there an effect happening here? And so these here are bootstrap confidence intervals, five runs in each experimental condition. And we're plotting the 95 percent confidence intervals for the interquartile mean of models across tasks. So this is kind of like the mean performance, assuming that you drop some of the outliers, because again, these runs are very high variance. Right. And so this is kind of a statistical recommendation from the authors of that deep RL paper. And we show that, yes, the individual runs here have really high variance naturally. 
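For reference, a small sketch (not the paper's evaluation code) of the aggregate statistic being described: the interquartile mean over pooled runs, with a bootstrap confidence interval, following the recommendations of Agarwal et al.'s "Deep Reinforcement Learning at the Edge of the Statistical Precipice".

```python
import numpy as np
from scipy.stats import trim_mean

def iqm(scores: np.ndarray) -> float:
    """Interquartile mean: drop the lowest and highest 25% of scores, average the rest."""
    return float(trim_mean(scores, proportiontocut=0.25))

def bootstrap_ci(scores: np.ndarray, n_boot: int = 2000, alpha: float = 0.05, seed: int = 0):
    """Bootstrap confidence interval (95% by default) for the IQM."""
    rng = np.random.default_rng(seed)
    stats = [iqm(rng.choice(scores, size=len(scores), replace=True)) for _ in range(n_boot)]
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# e.g. final returns from 5 runs per task, pooled across a task suite:
# scores = np.array([0.9, 0.1, 0.8, 0.85, 0.3])
# print(iqm(scores), bootstrap_ci(scores))
```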
But as you begin to look at the runs in aggregate across both the mini grid and mini hack environment suites, we begin to see a trend that it's clear that, you know, overall we're seeing a good effect of language in these environments. And so this is obviously these are aggregate metrics, overall metrics and so on. When we look at the plots themselves, there is quite considerable variance, even in the ranks of the method. Do you have an intuition of between the language methods, which works better in what kind of environments and in what kind of environments does language even maybe hurt? And why do you have an idea? Yeah. So the trend that I try to highlight in the paper is that in larger environments, language exploration does better. And the reason why you might expect this is that in larger environments, Amigo and Novelty kind of suffer from this problem of increased noise. Right. There's a lot more coordinates, for example, that you can propose, which essentially describe kind of the same semantic action. Right. You have like you want to get the agent into one room of this maze. And you know, because the environment is larger, now there are four or five different coordinates that all kind of mean the same thing. Whereas as you increase the size of the environment, the language set, the set of language goals is relatively more consistent. Right. It's kind of one of those complexity analyses. Right. It's like kind of space complexity, almost of the goal space. And so you can see this trend happen a bit. For example, in the Wand of Death task, so WOD, this is in the top right corner here. We have WOD medium and WOD hard, where in WOD medium, Amigo actually outperforms El Amigo. So it gets you to higher performance quicker. Whereas in WOD Wand of Death hard, Amigo is actually not able to learn at all. And the only difference between these environments, it's fundamentally the same task. But the only difference is that in WOD hard, the room is a lot bigger. So instead of a narrow corridor, you actually have to search for the Wand of Death, that's the task, in some in some room beforehand. And you can see that just simply increasing the size of the possible coordinate spaces results in both traditional novelty and traditional Amigo doing much worse in this environment. And I think that kind of shows that these kind of state based exploration methods are very brittle to the size of your state base. Right. So you can kind of increase your state space infinitely and it'll make these methods perform worse, even if the underlying semantics of your environment haven't changed yet. Do you have an idea, do you have a feeling maybe, if this is a property of the world in general, like let's say I as a human, right? I'm put into a small whatever environment or a big environment, would my descriptions of language also not grow very much? Or is it a property of just game developers? You know, I add a few extra rooms, I can reuse these languages, you know, I just kind of tile, you know, the other the big games, I mean, the biggest games are procedurally generated like Minecraft there, it's really, it's just the same thing over and over. But even in like the like, these big open world games, like Grand Theft Auto or so, the same textures are reused and the same cars and the same NPC characters, right? Is this a property of the world or of the video game developers? Yeah, so this is a really deep and almost philosophical question. 
Yeah, is something that I think about a lot is you can certainly and this is a totally valid statement, right, you can say, well, there are a lot of language actions that you can describe in our world and even in the video game world, which just described these like kind of infinitely complex and nested sequences of actions, which have absolutely nothing to do with the extrinsic task, right? I could tell you to, you know, oh, you know, run at the wall six times do a 360. And then, you know, continue hitting the wall eight times, right. And that's like an incredibly difficult goal, which you can imagine a very structured curriculum to get to that point, right, of just like infinitely kind of bumping your head against the wall, which satisfies, you know, maybe the difficulty threshold of El Amigo, but is absolutely orthogonal to the task that we care about. And I can imagine that there are settings where the language is kind of useless and doesn't end up, you know, giving you any gains in this setting. And so there's kind of this open question that we haven't really touched on sufficiently in this paper, which is how good does the language have to be in order to get this to work? So as I say, you know, the language is Oracle, it's game developers, but it also is noisy. There's a lot of actions like running into walls or trying to throw stones at a minotaur that are ultimately useless in the environment. The argument we're making here is that hopefully, you know, the noisiness of language scales a little bit less than the noisiness of your state environment, right. But there's still a lot of kind of edge cases and kind of unexplored territory here. I think more philosophically, if you think about our world and our environment, right, there are a lot of ways that we can describe actions that are not particularly useful in the world that you and I inhabit, right. I mean, I can again tell you to do handstands and hit a wall and, you know, walk around and write endless, you know, trivial things in the dust. But at the same time, there's a lot of our action space in the real world that we simply don't have language descriptions for, right. So like every single precise movement on my hand and my arm, you know, I could presumably come up with some language to describe, oh, I'm actuating this joint, you know, by 0.03 degrees. And there's like, you know, how many joints in my hand, right. I mean, there's like endless complexity in terms of the possible action space just by moving a hand that in language we have absolutely no words for, right. And so it's really it's a really tough question, right. Like we have a lot of kind of ways of describing useless actions in the world. But at the same time, it's very clear that the language that we do use to describe the world is operating at a higher level abstraction than perhaps the kinds of actions that RL agents have access to, right. And for example, actuating some sort of limb or something. You make a you make a good point that in the paper that language is a strong prior over what is essentially important to humans, right. If I can describe something with a short piece of language, like, of course, I can say do three backflips and then, you know, do eight of that and so on. But it's a fairly complex sentence in itself. If I can describe something with a short piece of language, usually that is something that matters to some human somewhere, right. Otherwise that wouldn't be mapped to a short string. But that brings me a bit to a different question. 
And that is the question of isn't isn't the I think in these environments, there's always a goal, right. There is one reward at the end that you need to reach. I can imagine, though, that novelty or not novelty in general or how how important a state is, is really dependent on your goal. Whether I circumvent the minotaur at the, you know, below or above that might not be important if I want to reach whatever the goal behind it. But it is really important maybe for a different task. It's likewise I as a human, whether I move from here to there by walking forward or backward doesn't matter if I want to get to the fridge. But it matters really if I'm if I'm dancing, right. So is that something that like how does that interplay here with these with these language things? What do you do when a language it almost like needs to incorporate a piece of the goal that you want to reach in order to be useful or not? Yeah, so I think thinking about or trying to filter the language descriptions that you have to language that is relevant for your task is going to be important if we scale this up to environments where it's clear that using unfiltered language is not helping. Right. And again, as I mentioned, the robustness of these kinds of exploration methods to the noisiness or relevance of your language signal is still an open question. If we do have task descriptions, so we have extrinsic task descriptions like your job is to defeat the Minotaur, then it's really intuitive that we should be able to use that as a signal for kind of waiting how relevant a sub goal or language description that we encounter waiting how useful that is for the extrinsic task. Right. So if the extrinsic goal is combat, then we should be prioritizing combat related messages. If the extrinsic goal is buying something, then we should promote acquiring money and things like that. And so that's something that I think is a kind of natural extension of this is you extend this to a multitask setting where you have task descriptions and the task descriptions ought to kind of heavily filter what sub goals should be relevant for the task. I think when you include task descriptions, there are some more comparisons to related work. There's some related work, which you mentioned the paper where let's imagine you're doing basically hierarchical reinforcement learning. So you have some extrinsic goal and then you want to explicitly decompose the extrinsic goal into sub goals that you want to complete in order. Right. And that's those are certainly kind of relevant methods to look at when you start thinking about multitask or goal condition settings. But this is kind of a slightly different focus where we're not trying to identify sub goals that need to be completed on the way to some extrinsic goal. There's still kind of this exploration component, which is a bit of a different use of language than this kind of hierarchical stuff. But certainly I would say that there are people who have looked at kind of language conditioned RL and hierarchical RL that think a lot and very deeply about this problem of proposing sub goals that are relevant for the extrinsic goal, assuming you have some structured description of what the extrinsic goal is. Although I can imagine you run into sort of the, let's say the more abstract problem of the exploration problem is that, you know, without an outside signal, I don't really know what to do. And there is no clear, let's say gradient towards the goal. Right. 
Otherwise, the exploration problem in RL would be relatively easy. Now when we say, well, we'll just filter out all the messages that don't have anything to do with our combat goal. Right. So it is like we could run into the exact same thing again, where, you know, maybe in order to acquire a weapon, I first need money, right? That doesn't, that's not directly related to my combat goal. So there is like another exploration problem again, on top of the thing we introduce. I guess maybe we can hope that if we introduce enough levels, the highest level of abstraction will have a small number of states so that, you know, random exploration works. But it's kind of funny that the problems repeat or replicate. Yeah. Yeah. It's really tricky. And that's essentially just kind of a deeper or more nested failure case of not knowing what's novel and not knowing what's relevant for your goal. Right. So if you're prioritizing words that have combat in them because your extrinsic goal is combat, but you first need to buy something, then your, your, your semantics, you know, your measure of novelty or relevance is just not good enough. Right. So that's going to just be a fundamental problem in exploration is how do we know whether it's states or language, you know, how do we know when a state is relevant for the ultimate task? Yeah. And I guess humans aren't very much different, right? I mean, science is a really hard process. It's not, you know, that exploration takes millions of humans and hundreds of years. So we can't fault our RL agents here for not, not doing that great of a job. Here, I found these plots to be really cool, like the analysis, sort of the evolution of what the teachers propose. And of course, these being language, it's quite insightful and understandable what's happening in the algorithm. My, my surprise was a little bit, aren't these things kind of subject to like catastrophic forgetting or things like this? I can imagine, right? If I train these things online and they're at some difficulty level, all of a sudden they forget that reaching the red door is kind of really easy. Or so is that have you ever thought is that a problem? Or was that ever a problem? Did you encounter that? Or why don't we encounter that? Yeah. So I expect that that is a problem that happens in these agents. I don't think we really precisely tried to measure whether or not catastrophic forgetting is a problem. I think the fact is that we evaluate in environments where we are not testing the agents kind of continuously for mastery of all of the skills that it has learned in its curriculum proposed by the teacher. And so this problem of, oh, you know, you forgot how to specifically open a specific color door is not an issue as long as the student is still quite good at completing whatever goals it needs to complete to try to achieve the extrinsic goal that is currently being set by the teacher. Right. So if you forget things that are at the very beginning of training, that's not a big deal. So long as whatever path that the teacher is leading you on is something that will eventually get you to the extrinsic goal that we care about. And I think that happens to be the case in these environments because there was only one extrinsic goal and because we're not testing it to master every single skill from kind of low level to high level abstractions. 
But if we were in a setting where being able to complete those lower level goals kind of, you know, on a dime and kind of, you know, switch kind of do context switching like that, if that were more important, then we would have to deal with this problem of catastrophic forgetting. Right. An important point here is that we really don't care about how well the student is able to follow instructions proposed by the teacher. That's, I mean, we hope the goal is that that property emerges such that we can complete the extrinsic goal. Right. But we're never actually trying to learn a student that can follow instructions. We never really evaluated exclusively in an instruction following setting. Because if we think ahead a little bit, and I'm going to want to just scroll down to the environments just because, yeah, maybe this this will inspire us a little bit. If we think ahead a little bit beyond this work, here you have this very, this Oracle language descriptor. And you say also in the outlook of future work that that is something obviously that we're trying to get rid of because not every environment, like the fewest of environments actually have such a built in language description or easily accessible one. So we might have to regress to something else. So I want to I want to think about three different external models that we could bring in. And I wonder what you think of each of them, like how these could fit in. The first would be something like GPT-3, like just a pure language model. How could that help us? Maybe in combination with these things, because we need some starting point, right? But how could a pre-trained language model that knows something about the world help us? Then something like CLIP, maybe something that can take an image and language and say whether they're good together or not. And then maybe even something like or maybe a captioning model. Right. And maybe something like DALEE, like something that takes language and generates. Is there in this cloud of models, what possibilities do we have to bring in sort of to replace this Oracle thing with with learned systems? It doesn't even need to be learned online, right? It can be pre-trained. I'm probably much more excited about that. Yeah. Yeah, these are, I think, going to be the most fun questions to look at in kind of language conditions are all going forward is taking the boom in pre-trained models in large language models and resulting, you know, bringing these into concrete and actionable gains in reinforcement learning. It's funny that you mentioned this kind of what I described as almost a gradation from ungrounded language models like GPT-3, right, which are trained on text only corpora and whether those can actually help in these environments, which I would call are fundamentally grounded, right? They're grounded in some some visual or perceptual world. And ungrounded language models still result in gains in these settings. And my intuition is, yeah, they probably still can because, you know, even if you don't exactly know what it means to acquire a wand or kill a minotaur in some environment because you don't know what a minotaur looks like or what a wand looks like, GPT, as I mentioned, you know, this idea of priors, right? GPT has strong priors on sensible sequences of actions, right? 
So insofar as these environments are testing kind of sequences of actions that humans kind of have an intuition for, you know, it's some fantasy world, but we have some intuition, oh, in order to defeat the minotaur, we need to get a weapon first. We probably look around for a weapon. Maybe there's a shop. Maybe we can buy a weapon from the shop, right? Video games are testing knowledge that we have very like deep seated commonsense knowledge that we have that hopefully generalizes to these fantasy worlds. And GPT certainly contains a lot of that information, right? So you might imagine we should reward or filter the kinds of descriptions that we see to those that seem sensible narratives that GPT-3 would generate, right? So a sensible sequence of actions along the way to defeating the minotaur is collecting a wand and buying it and things like that. And I think you actually already see some examples of this happening in more goal conditioned or instruction following RL. So there's been some recent work from, I know, teams at Berkeley, maybe Google as well, that are looking at using pre-trained language models, which are not necessarily even grounded. They're just, you know, GPT-3, using them to construct sensible plans, action plans or sub goals for completing certain actions. So in some home environment, for example, maybe my action is get a cup of coffee. And then the goal of GPT is even though I don't really know what my environment looks like, I don't know what kitchen you're in, I know that sensibly this should include finding a mug and then heating up the kettle and things like that. And so we already see some promising use of kind of ungrounded models for improving grounded decision making settings. Yeah, did you want to comment on that? Or I can also- No, no, that's cool. I think, yeah, I think I've even had at least one of these works here on the channel in this home environment. That's exactly, I was also really cool to see. Obviously, these models know a lot about the world, right? And I think people overestimate how or underestimate maybe, well, whatever. That the thing, if we humans look at a board like this, like at a mini hack board, we see a map, right? We see paths to walk on and stuff like this, even if we've never played a video game. But this is, these are such strong priors built into us. And we sometimes think like, why can't that dumb computer just like walk around the wall, right? And we're like, what's up? And I think these large models are a way we can really get that knowledge from the human world into this world. So yeah, I think that's, it's a great outlook. Also with the models that combine images and text, I feel that could be really like adding a lot of value to the RL world. At least the RL environments that are like human environments. Of course, there's reinforcement learning for computer chip design, and things like this. I don't think those are necessarily going to be profiting that much from it. But yeah, yeah, really cool is so you're you're at Stanford? Or did you do the work at Stanford? Or were you at some internship? Yeah, I did it while I had an internship last fall. So this is fall 2021. Okay, continue to work a little bit while at Stanford. But it was mostly in collaboration with some people at fair or meta, I guess now in London. Reinforcement learning is notoriously also kind of hardware intensive. 
Although this work right here seems like maybe not that much because you describe a little bit sort of what what it takes to investigate a project like this. Yeah, unfortunately, I think even for these environments, it's fairly hardware intensive, certainly still feasible, I think, on let's say, a more academically sized compute budget. But for being able to run the experimentation needed to iterate quickly, you know, you do really definitely benefit from kind of industry level scale, which is one of the unfortunate things about this kind of research is that it is a little bit less accessible to people in smaller compute settings. So maybe the typical kind of RL environments you think of our compute heavy are the ones that are in 3D simulation, you know, very, you know, need physics, need soft joint contact and all of these things to model. And those are really expensive. I think compared to that, these are kind of more symbolic grid worlds. You know, the whole point as to why mini hack or net hack was chosen as a reinforcement learning test bed was because the code base is, you know, written entirely in C and is very optimized, and so you can run simulations very quickly on modern hardware. But that being said, it's still relatively compute expensive. Again, the just amount of experience needed by state of the art, deep RL methods, even with extrinsic or intrinsic exploration bonuses is still very expensive, right? So for example, one of these runs, we would typically have, let's say, 40 CPU actors collecting experience at the same time in parallel, and then kind of one or two GPU learner threads in the background kind of updating from this experience. So even just a single, you know, computational experiment here needs non trivial hardware for sure. Yeah. And, and you ideally you want to do that in parallel, right? Because you want to try out a bunch of things are repeated a bunch of times because one experiment really tells you almost nothing, right? Unless it succeeds, right? If it succeeds, it's good. But if it fails, you never know if you repeat it a bunch of times. Yeah, but I mean, it's still it's not it's not the most extreme thing, right? Like two GPUs or so and a bunch of CPUs. As you say, that can that's still academically doable, which I find cool. Could you maybe tell us a bit about the process of researching of researching this? Like, did everything work out as planned from the beginning? Or where was your starting point? And what changed about your plan during the research, like maybe something didn't work out or so? Yeah. Yeah, I feel I don't I feel it's always good for people to hear that other people encounter problems and how they get around problems. Yeah. Yeah. So yeah, it's a great question. The intuition that I think me and my collaborators started with was, you know, fairly sensible. It's language is clearly going to help in these environments. You know, it has some nice parallels to human exploration. And so let's just see whether or not language will work in these environments. What's funny, though, is that we actually started out the project less about the more abstract question of like, does language help exploration and more a very concrete question of how do we improve upon Amigo? So how do we improve upon an existing state of the art algorithm for exploration? Let's propose something that we argue is better than everything. 
It's like we're going to propose a state of the art exploration method called El Amigo, which will get 100 percent accuracy in all these environments. And none of the existing methods will work. Right. That's that's kind of the narrative that you set up for yourself when you're starting research is I'm going to build something that's new and that's the best. Right. However, I think the focus of this paper and the story has shifted considerably. I think it's shifted for the better, actually. And part of this shift happened because we implemented El Amigo and it was working fine and it worked better than Amigo. So we were quite excited. But at the same time, the field is moving so fast. And at NeurIPS last year, some researchers came out with this method called novelty and we ran novelty and novelty also did really well. And you know, in some environments, it totally like blew Amigo out of the water. Right. And El Amigo. And part of our thinking was, well, OK, now we can't really say, oh, we have El Amigo and it's the best model. It's the best environment. And you should only use this. And at first I thought, you know, this is derailing our narrative. Right. We're not proposing anything new. We're not proposing anything state of the art. So what's the point? But I think after some kind of juggling and shuffling, we realized that what we're really interested in is the scientific question of does language help exploration? So take existing method X and then do X plus language. Right. And so this question can be answered kind of agnostic to the specific method that we actually use. Right. And so it was that juncture where we actually decided, OK, let's actually look at novelty closely and let's imagine adding language to novelty as well. And do we see the same kind of results? Right. And so I think this is kind of an outcome of the paper that was kind of on the fly changed. But I'm very happy with which is that we're not trying to claim that we have a method that is state of the art or that is best or that anyone should be using our method. We are very agnostic to the particular choice of method. Right. We're trying to answer kind of a more abstract question, which is when does language help exploration? And I think this is a little bit more egalitarian. We're not saying that our method is better than anyone else's. And we also don't have to exhaustively compare to like a lot of existing work. We're just saying that if you take whatever method that we have and you add language, you do better and here are two examples where that happens. Cool. And it is a good way to preempt some reviewers from saying that you didn't train on ImageNet and that's bad. Yeah. Is there anything else that you want to get out to viewers? Maybe a way they can get started if that's possible or anything that you'd like them to know? Yeah, I think that we've discussed a lot about these kind of higher level ideas of one holy grail is that we have clip generating descriptions or open GPT-3 and then we're evaluating in these really high dimensional spaces with actual motor joints and we're going to show how language helps in these like mojoco style, like really deep RL, realistic environments and maybe you can transfer to the real world. I think that's the broad vision but I think it is still very far away. I think we even in this paper abstracted away a lot of difficulty of the problem. We're assuming that we have Oracle language annotations. 
We're only looking at these kind of symbolic grid worlds and although it's tempting to dive in and say, okay, now let's kind of straightforwardly let's extend this to a real world environment where I have to actually move my coffee mug to make coffee and tea, I think we're still quite far away from that broad vision of kind of household enabled robots in RL and is probably not the most I think like beginner friendly way of starting. There's just so many deep problems that need to be solved jointly from perception to action to planning and before we even consider how we better incorporate language into the mix. And so I think the way to build upon this work is just these kind of very small progressive relaxations of the assumptions that I and many of the other people who have worked in this space have. Right. So again, let's imagine let's just imagine we get rid of the Oracle language annotator and we train a model to emit states for these simple environments. You know, we didn't really explore that, but that's a very sensible way to extend this kind of work while keeping the environment and the models fixed. Right. So this goes back to the very beginning when you mentioned the kind of way in which we approach this paper was to keep everything fixed and then just look at this kind of very small change and see how that results in different performance in our environment. I think that's really just kind of the way to go. It's very slow. It's very incremental work, but hopefully it's getting us more towards that kind of guiding star of eventually having these models that operate in these realistic environments and use pre-trained model language to help exploration. Cool. Jesse, thank you very much for being here. This was awesome. Thanks. Have a lot of fun. | [
{
"start": 0,
"end": 10.56,
"text": " Hello, this is an interview with Jesse Mu, who is the first author of the paper improving"
},
{
"start": 10.56,
"end": 13.84,
"text": " intrinsic exploration with language abstractions."
},
{
"start": 13.84,
"end": 18.44,
"text": " This paper is really cool because it combines the knowledge that is inherent in language"
},
{
"start": 18.44,
"end": 22.28,
"text": " with the problem of exploration in reinforcement learning."
},
{
"start": 22.28,
"end": 27.76,
"text": " I've made a comprehensive review of this paper in the last video, so be sure to check that"
},
{
"start": 27.76,
"end": 28.76,
"text": " out."
},
{
"start": 28.76,
"end": 34.64,
"text": " Today, Jesse has seen the video and we're able to dive right into the questions, criticisms"
},
{
"start": 34.64,
"end": 37,
"text": " and anything that came up during the video."
},
{
"start": 37,
"end": 39.6,
"text": " The interview was super valuable to me."
},
{
"start": 39.6,
"end": 40.6,
"text": " I learned a lot."
},
{
"start": 40.6,
"end": 41.760000000000005,
"text": " I hope you do too."
},
{
"start": 41.760000000000005,
"end": 44.7,
"text": " If you like, then please leave a like on the video."
},
{
"start": 44.7,
"end": 47,
"text": " Tell me what you think in the comments."
},
{
"start": 47,
"end": 51.040000000000006,
"text": " Tell me how I can make these videos better above all else."
},
{
"start": 51.040000000000006,
"end": 52.040000000000006,
"text": " And I'll see you around."
},
{
"start": 52.040000000000006,
"end": 53.040000000000006,
"text": " Bye bye."
},
{
"start": 53.040000000000006,
"end": 54.040000000000006,
"text": " Hi, everyone."
},
{
"start": 54.04,
"end": 60.48,
"text": " Today, I'm here with Jesse Mu, who is the first author of the paper improving intrinsic"
},
{
"start": 60.48,
"end": 64.8,
"text": " exploration with language abstractions, which is a really cool paper."
},
{
"start": 64.8,
"end": 66.32,
"text": " I've enjoyed reading it."
},
{
"start": 66.32,
"end": 71.6,
"text": " I like the bringing language into the reinforcement learning domain."
},
{
"start": 71.6,
"end": 75.48,
"text": " I think it makes a lot of sense and I was very happy to see this paper."
},
{
"start": 75.48,
"end": 77.36,
"text": " Yeah, Jesse, welcome to the channel."
},
{
"start": 77.36,
"end": 79.36,
"text": " Yeah, thanks for having me."
},
{
"start": 79.36,
"end": 87.08,
"text": " So I've presumably the viewers here have already seen my little review of the paper."
},
{
"start": 87.08,
"end": 92.72,
"text": " What would be your maybe for people who haven't seen that or just in your words, your like"
},
{
"start": 92.72,
"end": 95.44,
"text": " short elevator pitch of the paper itself?"
},
{
"start": 95.44,
"end": 97.03999999999999,
"text": " What would that be?"
},
{
"start": 97.03999999999999,
"end": 98.03999999999999,
"text": " Yeah."
},
{
"start": 98.03999999999999,
"end": 105,
"text": " So the way that I would pitch the paper is that reinforcement learning for a while now"
},
{
"start": 105,
"end": 111.88,
"text": " has wrestled with perhaps the central problem, which is how do we encourage exploration in"
},
{
"start": 111.88,
"end": 117.44,
"text": " these environments with more complex tasks and longer time horizons where the extrinsic"
},
{
"start": 117.44,
"end": 119.76,
"text": " reward that you get from the environment is very sparse."
},
{
"start": 119.76,
"end": 125.12,
"text": " So in the absence of extrinsic rewards, how do we encourage agents to explore?"
},
{
"start": 125.12,
"end": 130.02,
"text": " And typically the way we do so is we assume and this is a very cognitively appealing intuition"
},
{
"start": 130.02,
"end": 133.8,
"text": " that we should motivate an agent to achieve novelty in the environment."
},
{
"start": 133.8,
"end": 137.44,
"text": " We should make it do things that it hasn't done before, encounter states that it hasn't"
},
{
"start": 137.44,
"end": 138.64000000000001,
"text": " seen before, et cetera."
},
{
"start": 138.64000000000001,
"end": 142.84,
"text": " And then hopefully we'll enable the agent to acquire the skills that we actually want"
},
{
"start": 142.84,
"end": 145.08,
"text": " the agent to acquire in the environment."
},
{
"start": 145.08,
"end": 149.36,
"text": " But the problem with this, of course, is how we define novelty."
},
{
"start": 149.36,
"end": 153.84,
"text": " In a lot of scenarios, there are environments that can look very different, but they have"
},
{
"start": 153.84,
"end": 155.32000000000002,
"text": " the same underlying semantics."
},
{
"start": 155.32000000000002,
"end": 159.32000000000002,
"text": " So the example I have in the paper is like a kitchen and the appliances might be differently"
},
{
"start": 159.32000000000002,
"end": 163.24,
"text": " branded and differently colored, but ultimately every kitchen is a kitchen."
},
{
"start": 163.24,
"end": 167.48000000000002,
"text": " And the way that you approach kitchens and the way that you operate in them is the same."
},
{
"start": 167.48000000000002,
"end": 173.52,
"text": " And so the idea of this paper is we should be using natural language as the measure for"
},
{
"start": 173.52,
"end": 178.88,
"text": " how we describe states and how we describe actions within states and use kind of traditional"
},
{
"start": 178.88,
"end": 183.48000000000002,
"text": " approaches to exploration, reinforcement learning, but simply parameterize them with language"
},
{
"start": 183.48000000000002,
"end": 187.44,
"text": " rather than with state abstractions, which is usually the way in which exploration is"
},
{
"start": 187.44,
"end": 189.60000000000002,
"text": " done in these kinds of environments."
},
{
"start": 189.6,
"end": 194.48,
"text": " And so what we do is we take existing state of the art exploration methods and then kind"
},
{
"start": 194.48,
"end": 198.28,
"text": " of see what happens when you swap in language as a component."
},
{
"start": 198.28,
"end": 199.28,
"text": " And do you get better performance?"
},
{
"start": 199.28,
"end": 204.16,
"text": " And we showed that in a variety of settings, at least in the kinds of RL environments that"
},
{
"start": 204.16,
"end": 208.4,
"text": " people have been looking at in recent work, we do see again in using language to parameterize"
},
{
"start": 208.4,
"end": 210.88,
"text": " exploration rather than states."
},
{
"start": 210.88,
"end": 212.76,
"text": " Yeah."
},
{
"start": 212.76,
"end": 222.56,
"text": " I think it's very apt to describe it as you, it's not suggesting like a new exploration"
},
{
"start": 222.56,
"end": 227.56,
"text": " algorithm, but it's simply the re-parameterization in terms of language."
},
{
"start": 227.56,
"end": 232.56,
"text": " And coincidentally, these environments, they do come with this kind of language annotations,"
},
{
"start": 232.56,
"end": 234,
"text": " which we do focus on."
},
{
"start": 234,
"end": 235,
"text": " I like that."
},
{
"start": 235,
"end": 240.94,
"text": " So I think what I really liked about this paper is just the research mindset in that"
},
{
"start": 240.94,
"end": 245.52,
"text": " any other paper or a lot of other papers, they would have done, they would have tried"
},
{
"start": 245.52,
"end": 248.32,
"text": " doing like three things at the same time."
},
{
"start": 248.32,
"end": 252.48,
"text": " Like you know, we have a language generator and we do this and we do that."
},
{
"start": 252.48,
"end": 257.44,
"text": " And what you're I think doing correctly from a standpoint of research is you keep pretty"
},
{
"start": 257.44,
"end": 261.46,
"text": " much everything constant, the algorithms constant, right?"
},
{
"start": 261.46,
"end": 266.48,
"text": " Even the environments, you assume that you have a perfect language oracle and you just"
},
{
"start": 266.48,
"end": 273.72,
"text": " add the language, which I really appreciate as like a reviewer, let's say."
},
{
"start": 273.72,
"end": 283.36,
"text": " So I think this gets us right into our or my biggest, essentially criticism of the paper"
},
{
"start": 283.36,
"end": 290.64000000000004,
"text": " or what I called in that you add language to these algorithms, but you just said we"
},
{
"start": 290.64000000000004,
"end": 292.40000000000003,
"text": " swap in language."
},
{
"start": 292.40000000000003,
"end": 295.76,
"text": " And to me, it felt more like it's not really a swapping in."
},
{
"start": 295.76,
"end": 301.2,
"text": " It's more like you add language on top of what these algorithms are doing."
},
{
"start": 301.2,
"end": 307.48,
"text": " And therefore, can't I just see your method as adding more data?"
},
{
"start": 307.48,
"end": 312.2,
"text": " Essentially, there is features that are available from the simulator, right, which the other"
},
{
"start": 312.2,
"end": 317.15999999999997,
"text": " methods just don't use, they just discard this part and you just add this part."
},
{
"start": 317.15999999999997,
"end": 323.24,
"text": " Do you have an indication in how much of your effect is really due to language and how much"
},
{
"start": 323.24,
"end": 326.48,
"text": " of the effect is just due to the fact that you have more data available?"
},
{
"start": 326.48,
"end": 328.48,
"text": " Yeah, that's a great question."
},
{
"start": 328.48,
"end": 332.04,
"text": " And it's definitely a point that I think a lot of people will fairly make against the"
},
{
"start": 332.04,
"end": 336.32,
"text": " paper is, yeah, we're using extra data, right?"
},
{
"start": 336.32,
"end": 341.84000000000003,
"text": " And yeah, I think my verb swap was maybe only accurate in half of this paper, which is that"
},
{
"start": 341.84000000000003,
"end": 345.8,
"text": " in Amigo, which is the first method that we look at, it really is a swap, right?"
},
{
"start": 345.8,
"end": 351.68,
"text": " So if you read the paper, the traditional kind of Amigo teacher network proposes coordinates"
},
{
"start": 351.68,
"end": 354.16,
"text": " X, Y positions as goals."
},
{
"start": 354.16,
"end": 358.8,
"text": " And here we're just completely eliminating that kind of goal specification and we're"
},
{
"start": 358.8,
"end": 360.72,
"text": " moving towards language."
},
{
"start": 360.72,
"end": 363.2,
"text": " So that can be seen as more of a swap."
},
{
"start": 363.2,
"end": 368.32,
"text": " Although of course, in novelty, which is the second method that we look at, that is definitely"
},
{
"start": 368.32,
"end": 372.04,
"text": " more of kind of an addition, as you say, because we keep the extrinsic bonus and we do have"
},
{
"start": 372.04,
"end": 376.24,
"text": " experiments that measure what happens if you don't have novelty by itself."
},
{
"start": 376.24,
"end": 379.52,
"text": " You only have the kind of language novelty bonus and it doesn't do as well."
},
{
"start": 379.52,
"end": 385.15999999999997,
"text": " So you're right that I would say that we explore this idea of swapping in language in a bit"
},
{
"start": 385.15999999999997,
"end": 388.91999999999996,
"text": " of the paper, but there are points where it's more of kind of a bolt on and we're not like"
},
{
"start": 388.91999999999996,
"end": 394.76,
"text": " super clearly looking at or distinguishing when is it okay to have language just be a"
},
{
"start": 394.76,
"end": 398.2,
"text": " complete drop in replacement versus just some additional information."
},
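As a rough illustration of the kind of count-based language novelty bonus being discussed, bolted on next to an existing state-based bonus, a minimal sketch might look like the following; the class name, the scale constant, and the noveld_bonus call in the usage comment are assumptions for illustration, not the paper's implementation:

    from collections import defaultdict

    class LanguageNoveltyBonus:
        """Toy count-based novelty bonus over language messages (a sketch, not the paper's code)."""

        def __init__(self, scale=0.1):
            self.scale = scale
            self.counts = defaultdict(int)   # lifetime message counts
            self.episode_seen = set()        # episodic first-visit gate

        def reset_episode(self):
            self.episode_seen.clear()

        def __call__(self, message):
            if not message:                  # most steps emit no message at all
                return 0.0
            self.counts[message] += 1
            first_visit = message not in self.episode_seen
            self.episode_seen.add(message)
            # smaller reward the more often this description has ever been seen,
            # and only the first time it shows up within the current episode
            return self.scale * float(first_visit) / (self.counts[message] ** 0.5)

    # hypothetical usage next to a state-based bonus such as NovelD:
    # r_total = r_extrinsic + noveld_bonus(s, s_next) + lang_bonus(message)

The episodic first-visit gate and the inverse-square-root count schedule are common choices for count-based bonuses; the exact weighting used in the paper's language variant follows the paper rather than this sketch.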
{
"start": 398.2,
"end": 403.52,
"text": " So yeah, I think we're showing that in general, if you're trying to add language into these"
},
{
"start": 403.52,
"end": 409.32,
"text": " environments, you're seeing a gain, but how precisely that gain manifests is still a"
},
{
"start": 409.32,
"end": 412.36,
"text": " little requires some more exploration for sure."
},
{
"start": 412.36,
"end": 415.84,
"text": " So I guess more generally to your comment on using extra data."
},
{
"start": 415.84,
"end": 421.68,
"text": " Yeah, I mean, I think we have some intuition that this data should help, right?"
},
{
"start": 421.68,
"end": 426.32,
"text": " It's a fairly clean linguistic signal, but how to use this data concretely is an open"
},
{
"start": 426.32,
"end": 427.32,
"text": " question, right?"
},
{
"start": 427.32,
"end": 430.24,
"text": " And so that's kind of where I view the contribution of this paper as even though we have some"
},
{
"start": 430.24,
"end": 434.36,
"text": " intuition that adding extra data will help, we actually need the equations written down,"
},
{
"start": 434.36,
"end": 435.36,
"text": " right?"
},
{
"start": 435.36,
"end": 438.44,
"text": " And here are two concrete ways in which we can operationalize this data for the purposes"
},
{
"start": 438.44,
"end": 441.84,
"text": " of actually getting better performance in your environment."
},
{
"start": 441.84,
"end": 444,
"text": " And there are a lot of examples of this in machine learning, right?"
},
{
"start": 444,
"end": 447.6,
"text": " So like you have some large language model, for example, and then you want to fine tune"
},
{
"start": 447.6,
"end": 450.2,
"text": " it for some domain or you want to fine tune it on human preferences."
},
{
"start": 450.2,
"end": 454.64,
"text": " I mean, that's fundamentally, you're adding extra data for the purposes of getting something"
},
{
"start": 454.64,
"end": 457.1,
"text": " that works well on a task that you care about, right?"
},
{
"start": 457.1,
"end": 460.15999999999997,
"text": " And how to use that data is the open question."
},
{
"start": 460.15999999999997,
"end": 464.88,
"text": " The other point that I would say is that we have some deep seated intuition that this language"
},
{
"start": 464.88,
"end": 465.88,
"text": " should help."
},
{
"start": 465.88,
"end": 466.88,
"text": " As you say, it's really high quality."
},
{
"start": 466.88,
"end": 467.88,
"text": " It comes from an Oracle."
},
{
"start": 467.88,
"end": 470.2,
"text": " It comes from the game engine."
},
{
"start": 470.2,
"end": 474.15999999999997,
"text": " But we actually still need to get that kind of empirical verification that it works, right?"
},
{
"start": 474.15999999999997,
"end": 477.56,
"text": " And there's actually a lot of reasons why maybe these experiments might not have worked"
},
{
"start": 477.56,
"end": 478.56,
"text": " out."
},
{
"start": 478.56,
"end": 484.12,
"text": " For example, the language is Oracle generated, as I mentioned, but it is also very noisy."
},
{
"start": 484.12,
"end": 488.48,
"text": " So as I described in kind of the method section of the paper, most of the messages that you"
},
{
"start": 488.48,
"end": 493.04,
"text": " see in the environments are actually not necessary to complete the extrinsic task."
},
{
"start": 493.04,
"end": 497.6,
"text": " And I kind of exhaustively show which of the messages do matter."
},
{
"start": 497.6,
"end": 500.88,
"text": " And so it could be the case that, well, the language signal, at least in these environments,"
},
{
"start": 500.88,
"end": 502.36,
"text": " is too noisy."
},
{
"start": 502.36,
"end": 505.76000000000005,
"text": " The state abstraction captures all of the factors of variation that you might care about"
},
{
"start": 505.76000000000005,
"end": 506.84000000000003,
"text": " in an environment."
},
{
"start": 506.84000000000003,
"end": 508.66,
"text": " And so you don't ultimately need language, right?"
},
{
"start": 508.66,
"end": 511.28000000000003,
"text": " And that's an imperial question that we have to measure."
},
{
"start": 511.28000000000003,
"end": 515.5600000000001,
"text": " And so I view this paper as providing that empirical verification, which in hindsight,"
},
{
"start": 515.5600000000001,
"end": 517.64,
"text": " I think, is a fairly straightforward intuition."
},
{
"start": 517.64,
"end": 520.48,
"text": " It's something that I definitely thought would happen."
},
{
"start": 520.48,
"end": 523.32,
"text": " But yeah, it's nice to see those results kind of in writing."
},
{
"start": 523.32,
"end": 524.72,
"text": " Yes, it's easy."
},
{
"start": 524.72,
"end": 526.08,
"text": " I think you're right."
},
{
"start": 526.08,
"end": 531.44,
"text": " It's easy to look back and say, of course, like, well, all you do is you do this."
},
{
"start": 531.44,
"end": 539.84,
"text": " But exploration has been since since, you know, people have thought about reinforcement learning,"
},
{
"start": 539.84,
"end": 545.6800000000001,
"text": " they've obviously thought about exploration methods and intrinsic rewards are like as"
},
{
"start": 545.6800000000001,
"end": 547.9200000000001,
"text": " old as Schmidhuber himself."
},
{
"start": 547.9200000000001,
"end": 553.72,
"text": " And we you know, the fact is that, you know, new things are developed."
},
{
"start": 553.72,
"end": 560.28,
"text": " And this is at least one of the first things into into really the direction of incorporating."
},
{
"start": 560.28,
"end": 564.72,
"text": " There have been incorporation of languages before, but a systematic adding it to the"
},
{
"start": 564.72,
"end": 566.6800000000001,
"text": " state of the art methods."
},
{
"start": 566.6800000000001,
"end": 572.6,
"text": " And it seems like I am I am convinced the method at least the El Amigo method is quite"
},
{
"start": 572.6,
"end": 577.96,
"text": " well outlined, I think, in these diagrams, the contrast of the left being the original"
},
{
"start": 577.96,
"end": 583.4,
"text": " Amigo and the right side being the language Amigo."
},
{
"start": 583.4,
"end": 588.04,
"text": " A question I had right here is that on the left side, you have this teacher network,"
},
{
"start": 588.04,
"end": 595.4399999999999,
"text": " and it simply outputs a coordinate to reach and it has to pay attention to the fact that"
},
{
"start": 595.4399999999999,
"end": 600.0799999999999,
"text": " the coordinate is not too hard and not too easy, right?"
},
{
"start": 600.0799999999999,
"end": 605.28,
"text": " Therefore, it has to learn that too easy coordinate."
},
{
"start": 605.28,
"end": 610.48,
"text": " Yes, one that is, you know, close, but also it has to learn maybe unreachable coordinates"
},
{
"start": 610.48,
"end": 612.92,
"text": " or coordinates that are inside the walls, right?"
},
{
"start": 612.92,
"end": 615.1999999999999,
"text": " They can't be reached or something like this."
},
{
"start": 615.1999999999999,
"end": 619.28,
"text": " However, on the right side in the language, I mean, you seem to split these two tasks"
},
{
"start": 619.28,
"end": 625.68,
"text": " out into one network that that determines which goals can even be reached and one that"
},
{
"start": 625.68,
"end": 628.64,
"text": " then orders them essentially, why?"
},
{
"start": 628.64,
"end": 630.92,
"text": " Why are you doing this?"
},
{
"start": 630.92,
"end": 636.1999999999999,
"text": " Like what's the is there a particular reason behind why one network couldn't do both at"
},
{
"start": 636.1999999999999,
"end": 637.56,
"text": " the same time?"
},
{
"start": 637.56,
"end": 645.04,
"text": " Yeah, so the reason why we split the Amigo network up into two parts, and as you say,"
},
{
"start": 645.04,
"end": 646.1999999999999,
"text": " we don't have to do this."
},
{
"start": 646.1999999999999,
"end": 650.3199999999999,
"text": " And there are ablation studies in the appendix that shows what happens if you get rid of"
},
{
"start": 650.3199999999999,
"end": 655.8399999999999,
"text": " the grounding and you just have a single network predicting both goal achievability and, you"
},
{
"start": 655.8399999999999,
"end": 659.56,
"text": " know, actual the actual goal that's seen by the students."
},
{
"start": 659.56,
"end": 663.0799999999999,
"text": " So it kind of a goal difficulty network."
},
{
"start": 663.08,
"end": 669.24,
"text": " It does find in some environments, especially in mini hack, but it doesn't do as well in"
},
{
"start": 669.24,
"end": 671.2,
"text": " other environments such as mini grid."
},
{
"start": 671.2,
"end": 676.5600000000001,
"text": " And part of the reason, as you've described, is that at least in these environments, the"
},
{
"start": 676.5600000000001,
"end": 680,
"text": " coordinate space stays consistent across episodes."
},
{
"start": 680,
"end": 686.2,
"text": " And so you're right that there are some coordinates that are perhaps unreachable in certain environments"
},
{
"start": 686.2,
"end": 691.84,
"text": " and not in others, but there's much less variation than the set of language goals that are achievable"
},
{
"start": 691.84,
"end": 696,
"text": " in an environment because the environment will have different colored doors, for example."
},
{
"start": 696,
"end": 701.72,
"text": " And so the goal go to the red door only makes sense in, let's say, half of your environments."
},
{
"start": 701.72,
"end": 709.08,
"text": " So it's possible for the teacher to the Alamigo teacher to hopefully learn this distinction"
},
{
"start": 709.08,
"end": 712.9200000000001,
"text": " kind of just through, you know, the policy gradient method."
},
{
"start": 712.9200000000001,
"end": 716.64,
"text": " So basically just like Amigo, but this is relatively sample inefficient because the"
},
{
"start": 716.64,
"end": 721.82,
"text": " problem is that when you propose a goal that's simply impossible in the environment and you"
},
{
"start": 721.82,
"end": 726.4000000000001,
"text": " get negative reward, that negative reward only comes after the student has tried to"
},
{
"start": 726.4000000000001,
"end": 728.5200000000001,
"text": " complete the goal for, let's say, a few hundred steps."
},
{
"start": 728.5200000000001,
"end": 729.5200000000001,
"text": " Right."
},
{
"start": 729.5200000000001,
"end": 733.24,
"text": " And so it's a relatively sample of inefficient way of telling the teacher, hey, the student"
},
{
"start": 733.24,
"end": 735.5600000000001,
"text": " did not achieve this goal in the environment."
},
{
"start": 735.5600000000001,
"end": 739.44,
"text": " And moreover, that negative reward, you know, there's two possible sources of that reward."
},
{
"start": 739.44,
"end": 740.44,
"text": " Right."
},
{
"start": 740.44,
"end": 744.6400000000001,
"text": " So if the student never completed the goal, is it the case that it was just too difficult"
},
{
"start": 744.6400000000001,
"end": 748.08,
"text": " for the student, but it is achievable in practice?"
},
{
"start": 748.08,
"end": 752.8000000000001,
"text": " Or is it that the goal is simply never achievable in the first place in the environment?"
},
{
"start": 752.8000000000001,
"end": 753.8000000000001,
"text": " Right."
},
{
"start": 753.8000000000001,
"end": 758,
"text": " And those kind of two failure cases are a little bit hard to distinguish."
},
{
"start": 758,
"end": 761.88,
"text": " Whereas we have kind of this more frequent source of supervision, which is simply, you"
},
{
"start": 761.88,
"end": 766,
"text": " know, as the student is randomly exploring in the environment, it's encountering a lot"
},
{
"start": 766,
"end": 770.6800000000001,
"text": " of goals, a lot of messages because we have a language annotator and we're kind of, you"
},
{
"start": 770.6800000000001,
"end": 774.1600000000001,
"text": " know, if we if we kind of ignore that signal, that seems like something that we should be"
},
{
"start": 774.1600000000001,
"end": 775.8000000000001,
"text": " using."
},
{
"start": 775.8,
"end": 779.28,
"text": " And so we have kind of this dual thing where we have a grounding number, which is updated"
},
{
"start": 779.28,
"end": 782.5999999999999,
"text": " more frequently in the environment, which is updated from the messages that are seen"
},
{
"start": 782.5999999999999,
"end": 783.78,
"text": " by the students."
},
{
"start": 783.78,
"end": 787.9599999999999,
"text": " And then finally, the policy network, which is actually trained to satisfy the kind of"
},
{
"start": 787.9599999999999,
"end": 792.8199999999999,
"text": " difficulty objective and actually get the student to complete goals in the environment."
},
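To make the split described here concrete, a rough sketch of a teacher with separate grounding and goal-proposal heads could look as follows; the module names, hidden size, and the 0.5 masking threshold are assumptions for illustration and do not reproduce the actual L-AMIGo architecture:

    import torch
    import torch.nn as nn

    class TwoPartTeacher(nn.Module):
        """Sketch of a teacher split into a grounding head and a goal-proposal policy head."""

        def __init__(self, obs_dim, goal_vocab, hidden=128):
            super().__init__()
            self.goal_vocab = goal_vocab                  # list of candidate language goals
            self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
            # grounding head: per-goal probability that the goal is currently achievable
            self.grounding = nn.Linear(hidden, len(goal_vocab))
            # policy head: scores used to pick which (achievable) goal to propose
            self.policy = nn.Linear(hidden, len(goal_vocab))

        def propose(self, obs):
            h = self.encoder(obs)
            achievable = torch.sigmoid(self.grounding(h))
            scores = self.policy(h)
            # mask out goals the grounding head considers unreachable right now
            masked = scores.masked_fill(achievable < 0.5, float("-inf"))
            return torch.distributions.Categorical(logits=masked).sample()

In this sketch the grounding head would be fit from the messages the student actually encounters, while the policy head would be trained with the usual difficulty-based teacher reward.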
{
"start": 792.8199999999999,
"end": 797.4799999999999,
"text": " Can you go a little bit more into because that was, I think, the only part that confused"
},
{
"start": 797.4799999999999,
"end": 803.04,
"text": " me a little bit, which is the how exactly you train this grounding network."
},
{
"start": 803.04,
"end": 810.12,
"text": " There is a there is this this notion of whatever the first language description encountered"
},
{
"start": 810.12,
"end": 815.88,
"text": " along a trajectory being sort of the positive sample and then the rest being the negative"
},
{
"start": 815.88,
"end": 816.88,
"text": " samples."
},
{
"start": 816.88,
"end": 821.6999999999999,
"text": " And that kind of confused me because it means the negative samples would also include goals"
},
{
"start": 821.6999999999999,
"end": 826.52,
"text": " that were encountered just not as the first message."
},
{
"start": 826.52,
"end": 829.98,
"text": " Could you maybe clarify maybe I didn't understand something right?"
},
{
"start": 829.98,
"end": 836.84,
"text": " Or maybe I don't, you know, see the reasoning behind this exact choice."
},
{
"start": 836.84,
"end": 837.84,
"text": " Yeah."
},
{
"start": 837.84,
"end": 839.5600000000001,
"text": " So I think your intuition is correct."
},
{
"start": 839.5600000000001,
"end": 841.46,
"text": " I think you've described it correctly."
},
{
"start": 841.46,
"end": 848.6800000000001,
"text": " It is kind of a weird thing to do, which is that we are treating negative samples as basically"
},
{
"start": 848.6800000000001,
"end": 851.36,
"text": " all of the goals besides the first one that was achieved."
},
{
"start": 851.36,
"end": 852.36,
"text": " Right."
},
{
"start": 852.36,
"end": 857.4,
"text": " And of course, that is incorrectly treating negative samples of goals that were achieved"
},
{
"start": 857.4,
"end": 858.4,
"text": " later."
},
{
"start": 858.4,
"end": 859.4,
"text": " Right."
},
{
"start": 859.4,
"end": 866.12,
"text": " So negative samples are noisily generated, as I as I say, in the limit, this noise should"
},
{
"start": 866.12,
"end": 867.12,
"text": " even out, though."
},
{
"start": 867.12,
"end": 870.6,
"text": " So you can compare, you know, like we're just kind of noisy, noisily generating negative"
},
{
"start": 870.6,
"end": 871.6,
"text": " samples here."
},
{
"start": 871.6,
"end": 876.84,
"text": " We can compare that to maybe a setting where we had a more oracle sense of when a goal"
},
{
"start": 876.84,
"end": 879.6,
"text": " is truly infeasible in an environment."
},
{
"start": 879.6,
"end": 880.6,
"text": " Right."
},
{
"start": 880.6,
"end": 884.52,
"text": " And so what happens is, you know, just in general, a goal is going to appear in this"
},
{
"start": 884.52,
"end": 887.88,
"text": " negative sample term more and more often as we train the network."
},
{
"start": 887.88,
"end": 893.68,
"text": " But because it's we're kind of, you know, downweighing all possible goals in the space,"
},
{
"start": 893.68,
"end": 898.04,
"text": " the idea is that hopefully, you know, this noise of of class of incorrectly classifying"
},
{
"start": 898.04,
"end": 901.48,
"text": " a goal is unachievable in an environment kind of evens out over time."
},
{
"start": 901.48,
"end": 902.48,
"text": " Right."
},
{
"start": 902.48,
"end": 906.12,
"text": " And so, yeah, it's a little bit tricky because we don't have the oracle saying, oh, you"
},
{
"start": 906.12,
"end": 907.8,
"text": " can't achieve this goal in an environment."
},
{
"start": 907.8,
"end": 908.8,
"text": " Right."
},
{
"start": 908.8,
"end": 909.8,
"text": " We only know that."
},
{
"start": 909.8,
"end": 913.2,
"text": " Well, you know, the student just didn't happen to achieve the goal in this environment."
},
{
"start": 913.2,
"end": 916.6,
"text": " So I could imagine other ways in which you try to come up with some heuristic that better"
},
{
"start": 916.6,
"end": 920.0400000000001,
"text": " captures this idea of kind of unachievability."
},
{
"start": 920.0400000000001,
"end": 924.4,
"text": " But this is what we came up with, which seems to work reasonably well in practice."
},
{
"start": 924.4,
"end": 931.4,
"text": " And alternative way that you can interpret this is we're not really measuring true achievability."
},
{
"start": 931.4,
"end": 934.6,
"text": " Like, you know, is this at all possible in an environment?"
},
{
"start": 934.6,
"end": 938.1800000000001,
"text": " What we're really trying to have the grounding network capture here is what are the goals"
},
{
"start": 938.1800000000001,
"end": 939.7,
"text": " that the student tends to reach?"
},
{
"start": 939.7,
"end": 942.6,
"text": " So like are feasible at the current state of training, right?"
},
{
"start": 942.6,
"end": 945.46,
"text": " The current policy, what goals can it reach?"
},
{
"start": 945.46,
"end": 949,
"text": " And that's really what we need, right, is we need like to propose goals that at least"
},
{
"start": 949,
"end": 952.84,
"text": " for now are eventually reachable by a student."
},
{
"start": 952.84,
"end": 957.5600000000001,
"text": " And that doesn't mean that it's, you know, unachievable in all possible students under"
},
{
"start": 957.5600000000001,
"end": 960.94,
"text": " all possible environments, but at least just for current, you know, in the current stage"
},
{
"start": 960.94,
"end": 964.12,
"text": " of the training process, it's a reasonable target."
},
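A minimal sketch of the update being described, treating the first message achieved in an episode as the positive example and every other candidate goal as a noisy negative; the function signature and the binary cross-entropy form are illustrative assumptions rather than the paper's exact objective:

    import torch
    import torch.nn.functional as F

    def grounding_update(goal_logits, first_achieved_idx):
        """goal_logits: (num_goals,) achievability scores for every candidate goal.
        first_achieved_idx: index of the first message the student encountered."""
        targets = torch.zeros_like(goal_logits)
        targets[first_achieved_idx] = 1.0
        # every other goal is (noisily) pushed down; over many episodes the noise
        # from down-weighting goals that were merely achieved later should even out
        return F.binary_cross_entropy_with_logits(goal_logits, targets)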
{
"start": 964.12,
"end": 971.0600000000001,
"text": " I can imagine that this gets very, that this may require an adjustment or that this breaks"
},
{
"start": 971.0600000000001,
"end": 974.1600000000001,
"text": " down in environments that are more causally structured."
},
{
"start": 974.16,
"end": 979.9599999999999,
"text": " For example, if I always have to go through the green door before I reach the red door,"
},
{
"start": 979.9599999999999,
"end": 985.28,
"text": " right, then the goal would always be in any trajectory that I do, the green door would"
},
{
"start": 985.28,
"end": 987.12,
"text": " always be the first goal."
},
{
"start": 987.12,
"end": 993.24,
"text": " And therefore my grounding network would never recognize the red door as a reachable goal,"
},
{
"start": 993.24,
"end": 996.18,
"text": " because that's always going to be at least the second goal, right?"
},
{
"start": 996.18,
"end": 1001.26,
"text": " So I guess depending on the environment, it's not hard to make a change to this, obviously,"
},
{
"start": 1001.26,
"end": 1005.72,
"text": " in that case, but I guess that's one thing that might have to adjust a little bit to"
},
{
"start": 1005.72,
"end": 1007.2,
"text": " the environment at hand."
},
{
"start": 1007.2,
"end": 1012.26,
"text": " Yeah, that's a that's a great point is that we do not."
},
{
"start": 1012.26,
"end": 1015.52,
"text": " There are settings where you might just, you know, want to run it without the grounding"
},
{
"start": 1015.52,
"end": 1016.52,
"text": " network."
},
{
"start": 1016.52,
"end": 1017.66,
"text": " And obviously, that's actually a simpler version."
},
{
"start": 1017.66,
"end": 1021.84,
"text": " So it should be fairly easy to experiment with that."
},
{
"start": 1021.84,
"end": 1028.6,
"text": " And also, in the setting that you described, what will happen is, like you say, you know,"
},
{
"start": 1028.6,
"end": 1032.9599999999998,
"text": " the green the go to the green door goal will get a lot of weight, but hopefully can be"
},
{
"start": 1032.9599999999998,
"end": 1036.36,
"text": " counteracted to some degree by the policy network, which will, you know, learn to not"
},
{
"start": 1036.36,
"end": 1039.8,
"text": " put any weight on that once it realizes that it's getting absolutely zero reward for that"
},
{
"start": 1039.8,
"end": 1040.8,
"text": " setting."
},
{
"start": 1040.8,
"end": 1043.8,
"text": " But I agree that this kind of introduces some weird training dynamics that we don't really"
},
{
"start": 1043.8,
"end": 1049.32,
"text": " want might be cleaner just to remove the grounding network entirely."
},
{
"start": 1049.32,
"end": 1054.9599999999998,
"text": " If you as as you say, you've looked at my paper review a little bit, I didn't go too"
},
{
"start": 1054.96,
"end": 1059.64,
"text": " much into the experimental results as such."
},
{
"start": 1059.64,
"end": 1063.82,
"text": " Is there also I didn't go into the appendix at all, because honestly, I haven't read the"
},
{
"start": 1063.82,
"end": 1072.98,
"text": " appendix because I sometimes I don't I think I should probably."
},
{
"start": 1072.98,
"end": 1078.92,
"text": " But is there anything that you want to highlight specifically about the experimental results"
},
{
"start": 1078.92,
"end": 1085.16,
"text": " or or maybe something that you did in the expand appendix, which is also has a lot of"
},
{
"start": 1085.16,
"end": 1087.92,
"text": " experiments in it?"
},
{
"start": 1087.92,
"end": 1093.28,
"text": " Things that you think people should take away from the paper from the experiment section?"
},
{
"start": 1093.28,
"end": 1101.6000000000001,
"text": " Yeah, so broad takeaways are and I think that you mentioned this in the review is, you know,"
},
{
"start": 1101.6000000000001,
"end": 1105.96,
"text": " we're in these kind of DRL environments and and the individual training runs are just"
},
{
"start": 1105.96,
"end": 1110.08,
"text": " incredibly noisy, you know, and that can be sometimes like rather difficult to get a sense"
},
{
"start": 1110.08,
"end": 1112.4,
"text": " of, oh, is my method actually working better than others?"
},
{
"start": 1112.4,
"end": 1113.4,
"text": " Right."
},
{
"start": 1113.4,
"end": 1118.44,
"text": " But there has been some great recent work from I think a team at Miele, which won an"
},
{
"start": 1118.44,
"end": 1122,
"text": " outstanding paper award at New York's last year, which was called deep reinforcement"
},
{
"start": 1122,
"end": 1124.52,
"text": " learning on the edge of the statistical precipice."
},
{
"start": 1124.52,
"end": 1127.52,
"text": " And the basic idea is, you know, we're compute constrained."
},
{
"start": 1127.52,
"end": 1129.3600000000001,
"text": " We have these environments, they're very high variance."
},
{
"start": 1129.3600000000001,
"end": 1133.72,
"text": " But even despite all of this, you know, what are the kind of statistical best principles"
},
{
"start": 1133.72,
"end": 1137.88,
"text": " that we can follow to really see whether or not our methods are actually making a measurable"
},
{
"start": 1137.88,
"end": 1141.66,
"text": " and replicable difference in the environments that we're testing?"
},
{
"start": 1141.66,
"end": 1146.3600000000001,
"text": " And so they have a lot of good recommendations, which we try to subscribe to as close as possible"
},
{
"start": 1146.3600000000001,
"end": 1147.3600000000001,
"text": " in this setting."
},
{
"start": 1147.3600000000001,
"end": 1148.3600000000001,
"text": " Right."
},
{
"start": 1148.3600000000001,
"end": 1152.38,
"text": " So these training curves here give you kind of a qualitative sense about not only kind"
},
{
"start": 1152.38,
"end": 1156.22,
"text": " of the ultimate performance attained by any of the models, but also of the differences"
},
{
"start": 1156.22,
"end": 1158.3600000000001,
"text": " in sample efficiency that we see."
},
{
"start": 1158.3600000000001,
"end": 1159.3600000000001,
"text": " Right."
},
{
"start": 1159.3600000000001,
"end": 1163.68,
"text": " So it could be the case that, well, ultimately, both Amigo and El Amigo reach the same asymptotic"
},
{
"start": 1163.68,
"end": 1167.8400000000001,
"text": " performance, but Amigo just gets there faster or more reliably."
},
{
"start": 1167.8400000000001,
"end": 1171.04,
"text": " And that's something that you can, sorry, El Amigo gets there faster and more reliably."
},
{
"start": 1171.04,
"end": 1173.68,
"text": " And that's something that you can look at in these graphs."
},
{
"start": 1173.68,
"end": 1177.88,
"text": " But I think the more kind of statistically rigorous way of verifying that language is"
},
{
"start": 1177.88,
"end": 1182.76,
"text": " giving a gain in the environments is in the subsequent figure, which is figure four, which"
},
{
"start": 1182.76,
"end": 1185.1000000000001,
"text": " should be right below this one, I think."
},
{
"start": 1185.1000000000001,
"end": 1189.92,
"text": " And this is really, you know, us trying to statistically verify, you know, is there an"
},
{
"start": 1189.92,
"end": 1191.04,
"text": " effect happening here?"
},
{
"start": 1191.04,
"end": 1196.1599999999999,
"text": " And so these here are bootstrap confidence intervals, five runs in each experimental"
},
{
"start": 1196.1599999999999,
"end": 1197.1599999999999,
"text": " condition."
},
{
"start": 1197.1599999999999,
"end": 1203.6,
"text": " And we're plotting the 95 percent confidence intervals for the interquartile mean of models"
},
{
"start": 1203.6,
"end": 1204.6,
"text": " across tasks."
},
{
"start": 1204.6,
"end": 1208.56,
"text": " So this is kind of like the mean performance, assuming that you drop some of the outliers,"
},
{
"start": 1208.56,
"end": 1211.04,
"text": " because again, these runs are very high variance."
},
{
"start": 1211.04,
"end": 1212.04,
"text": " Right."
},
{
"start": 1212.04,
"end": 1217.68,
"text": " And so this is kind of a statistical recommendation from the authors of that deep RL paper."
},
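For reference, an interquartile mean with bootstrap confidence intervals can be computed roughly as below (the authors of that paper also released the rliable library for this); this NumPy sketch assumes scores arranged as a runs-by-tasks array and only approximates the stratified bootstrap the paper recommends:

    import numpy as np

    def iqm(scores):
        """Interquartile mean: the mean of the middle 50% of values."""
        s = np.sort(np.asarray(scores).ravel())
        n = len(s)
        return s[n // 4 : n - n // 4].mean()

    def bootstrap_iqm_ci(scores, n_boot=2000, alpha=0.05, seed=0):
        """scores: array of shape (runs, tasks); resamples whole runs with replacement."""
        rng = np.random.default_rng(seed)
        runs = scores.shape[0]
        stats = [iqm(scores[rng.integers(0, runs, runs)]) for _ in range(n_boot)]
        lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return iqm(scores), (lo, hi)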
{
"start": 1217.68,
"end": 1221.92,
"text": " And we show that, yes, the individual runs here have really high variance naturally."
},
{
"start": 1221.92,
"end": 1227.28,
"text": " But as you begin to look at the runs in aggregate across both the mini grid and mini hack environment"
},
{
"start": 1227.28,
"end": 1231.72,
"text": " suites, we begin to see a trend that it's clear that, you know, overall we're seeing"
},
{
"start": 1231.72,
"end": 1235.0600000000002,
"text": " a good effect of language in these environments."
},
{
"start": 1235.0600000000002,
"end": 1241.72,
"text": " And so this is obviously these are aggregate metrics, overall metrics and so on."
},
{
"start": 1241.72,
"end": 1247.04,
"text": " When we look at the plots themselves, there is quite considerable variance, even in the"
},
{
"start": 1247.04,
"end": 1248.48,
"text": " ranks of the method."
},
{
"start": 1248.48,
"end": 1254.76,
"text": " Do you have an intuition of between the language methods, which works better in what kind of"
},
{
"start": 1254.76,
"end": 1260.32,
"text": " environments and in what kind of environments does language even maybe hurt?"
},
{
"start": 1260.32,
"end": 1262.6,
"text": " And why do you have an idea?"
},
{
"start": 1262.6,
"end": 1263.6399999999999,
"text": " Yeah."
},
{
"start": 1263.6399999999999,
"end": 1270.6,
"text": " So the trend that I try to highlight in the paper is that in larger environments, language"
},
{
"start": 1270.6,
"end": 1272.52,
"text": " exploration does better."
},
{
"start": 1272.52,
"end": 1280.8,
"text": " And the reason why you might expect this is that in larger environments, Amigo and Novelty"
},
{
"start": 1280.8,
"end": 1283.12,
"text": " kind of suffer from this problem of increased noise."
},
{
"start": 1283.12,
"end": 1284.12,
"text": " Right."
},
{
"start": 1284.12,
"end": 1287.24,
"text": " There's a lot more coordinates, for example, that you can propose, which essentially describe"
},
{
"start": 1287.24,
"end": 1288.72,
"text": " kind of the same semantic action."
},
{
"start": 1288.72,
"end": 1289.72,
"text": " Right."
},
{
"start": 1289.72,
"end": 1292.8799999999999,
"text": " You have like you want to get the agent into one room of this maze."
},
{
"start": 1292.8799999999999,
"end": 1296.32,
"text": " And you know, because the environment is larger, now there are four or five different coordinates"
},
{
"start": 1296.32,
"end": 1298.16,
"text": " that all kind of mean the same thing."
},
{
"start": 1298.16,
"end": 1304.0400000000002,
"text": " Whereas as you increase the size of the environment, the language set, the set of language goals"
},
{
"start": 1304.0400000000002,
"end": 1305.5600000000002,
"text": " is relatively more consistent."
},
{
"start": 1305.5600000000002,
"end": 1306.5600000000002,
"text": " Right."
},
{
"start": 1306.5600000000002,
"end": 1308.3600000000001,
"text": " It's kind of one of those complexity analyses."
},
{
"start": 1308.3600000000001,
"end": 1309.3600000000001,
"text": " Right."
},
{
"start": 1309.3600000000001,
"end": 1312.0600000000002,
"text": " It's like kind of space complexity, almost of the goal space."
},
{
"start": 1312.0600000000002,
"end": 1314.72,
"text": " And so you can see this trend happen a bit."
},
{
"start": 1314.72,
"end": 1319.42,
"text": " For example, in the Wand of Death task, so WOD, this is in the top right corner here."
},
{
"start": 1319.42,
"end": 1326.6000000000001,
"text": " We have WOD medium and WOD hard, where in WOD medium, Amigo actually outperforms El"
},
{
"start": 1326.6000000000001,
"end": 1327.6000000000001,
"text": " Amigo."
},
{
"start": 1327.6,
"end": 1329.7199999999998,
"text": " So it gets you to higher performance quicker."
},
{
"start": 1329.7199999999998,
"end": 1335.24,
"text": " Whereas in WOD Wand of Death hard, Amigo is actually not able to learn at all."
},
{
"start": 1335.24,
"end": 1338.9399999999998,
"text": " And the only difference between these environments, it's fundamentally the same task."
},
{
"start": 1338.9399999999998,
"end": 1343.12,
"text": " But the only difference is that in WOD hard, the room is a lot bigger."
},
{
"start": 1343.12,
"end": 1346.3999999999999,
"text": " So instead of a narrow corridor, you actually have to search for the Wand of Death, that's"
},
{
"start": 1346.3999999999999,
"end": 1349.8,
"text": " the task, in some in some room beforehand."
},
{
"start": 1349.8,
"end": 1355.6,
"text": " And you can see that just simply increasing the size of the possible coordinate spaces"
},
{
"start": 1355.6,
"end": 1360.6,
"text": " results in both traditional novelty and traditional Amigo doing much worse in this environment."
},
{
"start": 1360.6,
"end": 1364.9199999999998,
"text": " And I think that kind of shows that these kind of state based exploration methods are"
},
{
"start": 1364.9199999999998,
"end": 1366.8799999999999,
"text": " very brittle to the size of your state base."
},
{
"start": 1366.8799999999999,
"end": 1367.8799999999999,
"text": " Right."
},
{
"start": 1367.8799999999999,
"end": 1371.84,
"text": " So you can kind of increase your state space infinitely and it'll make these methods perform"
},
{
"start": 1371.84,
"end": 1377.04,
"text": " worse, even if the underlying semantics of your environment haven't changed yet."
},
{
"start": 1377.04,
"end": 1381.9199999999998,
"text": " Do you have an idea, do you have a feeling maybe, if this is a property of the world"
},
{
"start": 1381.9199999999998,
"end": 1384.9599999999998,
"text": " in general, like let's say I as a human, right?"
},
{
"start": 1384.96,
"end": 1390.52,
"text": " I'm put into a small whatever environment or a big environment, would my descriptions"
},
{
"start": 1390.52,
"end": 1393.48,
"text": " of language also not grow very much?"
},
{
"start": 1393.48,
"end": 1395.96,
"text": " Or is it a property of just game developers?"
},
{
"start": 1395.96,
"end": 1400.8,
"text": " You know, I add a few extra rooms, I can reuse these languages, you know, I just kind of"
},
{
"start": 1400.8,
"end": 1406.64,
"text": " tile, you know, the other the big games, I mean, the biggest games are procedurally generated"
},
{
"start": 1406.64,
"end": 1410.56,
"text": " like Minecraft there, it's really, it's just the same thing over and over."
},
{
"start": 1410.56,
"end": 1416.84,
"text": " But even in like the like, these big open world games, like Grand Theft Auto or so,"
},
{
"start": 1416.84,
"end": 1422.6799999999998,
"text": " the same textures are reused and the same cars and the same NPC characters, right?"
},
{
"start": 1422.6799999999998,
"end": 1427.6,
"text": " Is this a property of the world or of the video game developers?"
},
{
"start": 1427.6,
"end": 1432.6,
"text": " Yeah, so this is a really deep and almost philosophical question."
},
{
"start": 1432.6,
"end": 1438.36,
"text": " Yeah, is something that I think about a lot is you can certainly and this is a totally"
},
{
"start": 1438.36,
"end": 1443.76,
"text": " valid statement, right, you can say, well, there are a lot of language actions that you"
},
{
"start": 1443.76,
"end": 1447.8799999999999,
"text": " can describe in our world and even in the video game world, which just described these"
},
{
"start": 1447.8799999999999,
"end": 1452.26,
"text": " like kind of infinitely complex and nested sequences of actions, which have absolutely"
},
{
"start": 1452.26,
"end": 1455.52,
"text": " nothing to do with the extrinsic task, right?"
},
{
"start": 1455.52,
"end": 1459.86,
"text": " I could tell you to, you know, oh, you know, run at the wall six times do a 360."
},
{
"start": 1459.86,
"end": 1462.28,
"text": " And then, you know, continue hitting the wall eight times, right."
},
{
"start": 1462.28,
"end": 1466.4799999999998,
"text": " And that's like an incredibly difficult goal, which you can imagine a very structured curriculum"
},
{
"start": 1466.48,
"end": 1470.28,
"text": " to get to that point, right, of just like infinitely kind of bumping your head against"
},
{
"start": 1470.28,
"end": 1475.52,
"text": " the wall, which satisfies, you know, maybe the difficulty threshold of El Amigo, but"
},
{
"start": 1475.52,
"end": 1478.92,
"text": " is absolutely orthogonal to the task that we care about."
},
{
"start": 1478.92,
"end": 1483.56,
"text": " And I can imagine that there are settings where the language is kind of useless and"
},
{
"start": 1483.56,
"end": 1487.5,
"text": " doesn't end up, you know, giving you any gains in this setting."
},
{
"start": 1487.5,
"end": 1490.6,
"text": " And so there's kind of this open question that we haven't really touched on sufficiently"
},
{
"start": 1490.6,
"end": 1495.4,
"text": " in this paper, which is how good does the language have to be in order to get this to"
},
{
"start": 1495.4,
"end": 1496.88,
"text": " work?"
},
{
"start": 1496.88,
"end": 1501.24,
"text": " So as I say, you know, the language is Oracle, it's game developers, but it also is noisy."
},
{
"start": 1501.24,
"end": 1504.8000000000002,
"text": " There's a lot of actions like running into walls or trying to throw stones at a minotaur"
},
{
"start": 1504.8000000000002,
"end": 1507.68,
"text": " that are ultimately useless in the environment."
},
{
"start": 1507.68,
"end": 1512.3200000000002,
"text": " The argument we're making here is that hopefully, you know, the noisiness of language scales"
},
{
"start": 1512.3200000000002,
"end": 1516.48,
"text": " a little bit less than the noisiness of your state environment, right."
},
{
"start": 1516.48,
"end": 1520.88,
"text": " But there's still a lot of kind of edge cases and kind of unexplored territory here."
},
{
"start": 1520.88,
"end": 1525.2800000000002,
"text": " I think more philosophically, if you think about our world and our environment, right,"
},
{
"start": 1525.28,
"end": 1530.36,
"text": " there are a lot of ways that we can describe actions that are not particularly useful in"
},
{
"start": 1530.36,
"end": 1531.96,
"text": " the world that you and I inhabit, right."
},
{
"start": 1531.96,
"end": 1537.28,
"text": " I mean, I can again tell you to do handstands and hit a wall and, you know, walk around"
},
{
"start": 1537.28,
"end": 1541.68,
"text": " and write endless, you know, trivial things in the dust."
},
{
"start": 1541.68,
"end": 1545.92,
"text": " But at the same time, there's a lot of our action space in the real world that we simply"
},
{
"start": 1545.92,
"end": 1548.24,
"text": " don't have language descriptions for, right."
},
{
"start": 1548.24,
"end": 1553,
"text": " So like every single precise movement on my hand and my arm, you know, I could presumably"
},
{
"start": 1553,
"end": 1557.08,
"text": " come up with some language to describe, oh, I'm actuating this joint, you know, by 0.03"
},
{
"start": 1557.08,
"end": 1558.08,
"text": " degrees."
},
{
"start": 1558.08,
"end": 1560.2,
"text": " And there's like, you know, how many joints in my hand, right."
},
{
"start": 1560.2,
"end": 1564.96,
"text": " I mean, there's like endless complexity in terms of the possible action space just by"
},
{
"start": 1564.96,
"end": 1569.36,
"text": " moving a hand that in language we have absolutely no words for, right."
},
{
"start": 1569.36,
"end": 1571.6,
"text": " And so it's really it's a really tough question, right."
},
{
"start": 1571.6,
"end": 1574.92,
"text": " Like we have a lot of kind of ways of describing useless actions in the world."
},
{
"start": 1574.92,
"end": 1577.92,
"text": " But at the same time, it's very clear that the language that we do use to describe the"
},
{
"start": 1577.92,
"end": 1584.28,
"text": " world is operating at a higher level abstraction than perhaps the kinds of actions that RL"
},
{
"start": 1584.28,
"end": 1585.92,
"text": " agents have access to, right."
},
{
"start": 1585.92,
"end": 1589.96,
"text": " And for example, actuating some sort of limb or something."
},
{
"start": 1589.96,
"end": 1596.24,
"text": " You make a you make a good point that in the paper that language is a strong prior over"
},
{
"start": 1596.24,
"end": 1599.52,
"text": " what is essentially important to humans, right."
},
{
"start": 1599.52,
"end": 1604.14,
"text": " If I can describe something with a short piece of language, like, of course, I can say do"
},
{
"start": 1604.14,
"end": 1607.54,
"text": " three backflips and then, you know, do eight of that and so on."
},
{
"start": 1607.54,
"end": 1610.28,
"text": " But it's a fairly complex sentence in itself."
},
{
"start": 1610.28,
"end": 1615.56,
"text": " If I can describe something with a short piece of language, usually that is something that"
},
{
"start": 1615.56,
"end": 1619.68,
"text": " matters to some human somewhere, right."
},
{
"start": 1619.68,
"end": 1622.68,
"text": " Otherwise that wouldn't be mapped to a short string."
},
{
"start": 1622.68,
"end": 1624.8799999999999,
"text": " But that brings me a bit to a different question."
},
{
"start": 1624.8799999999999,
"end": 1631.44,
"text": " And that is the question of isn't isn't the I think in these environments, there's always"
},
{
"start": 1631.44,
"end": 1632.72,
"text": " a goal, right."
},
{
"start": 1632.72,
"end": 1636.2,
"text": " There is one reward at the end that you need to reach."
},
{
"start": 1636.2,
"end": 1642.1200000000001,
"text": " I can imagine, though, that novelty or not novelty in general or how how important a"
},
{
"start": 1642.1200000000001,
"end": 1645.64,
"text": " state is, is really dependent on your goal."
},
{
"start": 1645.64,
"end": 1651.76,
"text": " Whether I circumvent the minotaur at the, you know, below or above that might not be"
},
{
"start": 1651.76,
"end": 1656.1200000000001,
"text": " important if I want to reach whatever the goal behind it."
},
{
"start": 1656.1200000000001,
"end": 1659.16,
"text": " But it is really important maybe for a different task."
},
{
"start": 1659.16,
"end": 1665.6000000000001,
"text": " It's likewise I as a human, whether I move from here to there by walking forward or backward"
},
{
"start": 1665.6,
"end": 1668.24,
"text": " doesn't matter if I want to get to the fridge."
},
{
"start": 1668.24,
"end": 1672.7199999999998,
"text": " But it matters really if I'm if I'm dancing, right."
},
{
"start": 1672.7199999999998,
"end": 1680.24,
"text": " So is that something that like how does that interplay here with these with these language"
},
{
"start": 1680.24,
"end": 1682.1399999999999,
"text": " things?"
},
{
"start": 1682.1399999999999,
"end": 1689.28,
"text": " What do you do when a language it almost like needs to incorporate a piece of the goal that"
},
{
"start": 1689.28,
"end": 1694.1999999999998,
"text": " you want to reach in order to be useful or not?"
},
{
"start": 1694.2,
"end": 1699.8,
"text": " Yeah, so I think thinking about or trying to filter the language descriptions that you"
},
{
"start": 1699.8,
"end": 1705.64,
"text": " have to language that is relevant for your task is going to be important if we scale"
},
{
"start": 1705.64,
"end": 1710.24,
"text": " this up to environments where it's clear that using unfiltered language is not helping."
},
{
"start": 1710.24,
"end": 1711.24,
"text": " Right."
},
{
"start": 1711.24,
"end": 1714.88,
"text": " And again, as I mentioned, the robustness of these kinds of exploration methods to the"
},
{
"start": 1714.88,
"end": 1720.1000000000001,
"text": " noisiness or relevance of your language signal is still an open question."
},
{
"start": 1720.1,
"end": 1725.08,
"text": " If we do have task descriptions, so we have extrinsic task descriptions like your job"
},
{
"start": 1725.08,
"end": 1730.28,
"text": " is to defeat the Minotaur, then it's really intuitive that we should be able to use that"
},
{
"start": 1730.28,
"end": 1735.36,
"text": " as a signal for kind of waiting how relevant a sub goal or language description that we"
},
{
"start": 1735.36,
"end": 1739.36,
"text": " encounter waiting how useful that is for the extrinsic task."
},
{
"start": 1739.36,
"end": 1740.36,
"text": " Right."
},
{
"start": 1740.36,
"end": 1744.9599999999998,
"text": " So if the extrinsic goal is combat, then we should be prioritizing combat related messages."
},
{
"start": 1744.96,
"end": 1751.16,
"text": " If the extrinsic goal is buying something, then we should promote acquiring money and"
},
{
"start": 1751.16,
"end": 1752.52,
"text": " things like that."
},
{
"start": 1752.52,
"end": 1755.96,
"text": " And so that's something that I think is a kind of natural extension of this is you extend"
},
{
"start": 1755.96,
"end": 1760.3600000000001,
"text": " this to a multitask setting where you have task descriptions and the task descriptions"
},
{
"start": 1760.3600000000001,
"end": 1765.24,
"text": " ought to kind of heavily filter what sub goals should be relevant for the task."
},
{
"start": 1765.24,
"end": 1769.8400000000001,
"text": " I think when you include task descriptions, there are some more comparisons to related"
},
{
"start": 1769.8400000000001,
"end": 1770.8400000000001,
"text": " work."
},
{
"start": 1770.84,
"end": 1775.24,
"text": " There's some related work, which you mentioned the paper where let's imagine you're doing"
},
{
"start": 1775.24,
"end": 1777.52,
"text": " basically hierarchical reinforcement learning."
},
{
"start": 1777.52,
"end": 1781.8,
"text": " So you have some extrinsic goal and then you want to explicitly decompose the extrinsic"
},
{
"start": 1781.8,
"end": 1784.48,
"text": " goal into sub goals that you want to complete in order."
},
{
"start": 1784.48,
"end": 1785.48,
"text": " Right."
},
{
"start": 1785.48,
"end": 1789.4399999999998,
"text": " And that's those are certainly kind of relevant methods to look at when you start thinking"
},
{
"start": 1789.4399999999998,
"end": 1792.76,
"text": " about multitask or goal condition settings."
},
{
"start": 1792.76,
"end": 1797.72,
"text": " But this is kind of a slightly different focus where we're not trying to identify sub goals"
},
{
"start": 1797.72,
"end": 1801.28,
"text": " that need to be completed on the way to some extrinsic goal."
},
{
"start": 1801.28,
"end": 1805,
"text": " There's still kind of this exploration component, which is a bit of a different use of language"
},
{
"start": 1805,
"end": 1807.4,
"text": " than this kind of hierarchical stuff."
},
{
"start": 1807.4,
"end": 1811.24,
"text": " But certainly I would say that there are people who have looked at kind of language conditioned"
},
{
"start": 1811.24,
"end": 1818.24,
"text": " RL and hierarchical RL that think a lot and very deeply about this problem of proposing"
},
{
"start": 1818.24,
"end": 1823.3600000000001,
"text": " sub goals that are relevant for the extrinsic goal, assuming you have some structured description"
},
{
"start": 1823.3600000000001,
"end": 1825.48,
"text": " of what the extrinsic goal is."
},
{
"start": 1825.48,
"end": 1830.88,
"text": " Although I can imagine you run into sort of the, let's say the more abstract problem of"
},
{
"start": 1830.88,
"end": 1835.4,
"text": " the exploration problem is that, you know, without an outside signal, I don't really"
},
{
"start": 1835.4,
"end": 1836.4,
"text": " know what to do."
},
{
"start": 1836.4,
"end": 1839.52,
"text": " And there is no clear, let's say gradient towards the goal."
},
{
"start": 1839.52,
"end": 1840.52,
"text": " Right."
},
{
"start": 1840.52,
"end": 1843.48,
"text": " Otherwise, the exploration problem in RL would be relatively easy."
},
{
"start": 1843.48,
"end": 1848.3600000000001,
"text": " Now when we say, well, we'll just filter out all the messages that don't have anything"
},
{
"start": 1848.3600000000001,
"end": 1850.48,
"text": " to do with our combat goal."
},
{
"start": 1850.48,
"end": 1851.48,
"text": " Right."
},
{
"start": 1851.48,
"end": 1855.8,
"text": " So it is like we could run into the exact same thing again, where, you know, maybe in"
},
{
"start": 1855.8,
"end": 1860.64,
"text": " order to acquire a weapon, I first need money, right?"
},
{
"start": 1860.64,
"end": 1863.56,
"text": " That doesn't, that's not directly related to my combat goal."
},
{
"start": 1863.56,
"end": 1870.04,
"text": " So there is like another exploration problem again, on top of the thing we introduce."
},
{
"start": 1870.04,
"end": 1875.2,
"text": " I guess maybe we can hope that if we introduce enough levels, the highest level of abstraction"
},
{
"start": 1875.2,
"end": 1880.72,
"text": " will have a small number of states so that, you know, random exploration works."
},
{
"start": 1880.72,
"end": 1884.6000000000001,
"text": " But it's kind of funny that the problems repeat or replicate."
},
{
"start": 1884.6000000000001,
"end": 1885.6000000000001,
"text": " Yeah."
},
{
"start": 1885.6000000000001,
"end": 1886.6000000000001,
"text": " Yeah."
},
{
"start": 1886.6000000000001,
"end": 1887.6000000000001,
"text": " It's really tricky."
},
{
"start": 1887.6000000000001,
"end": 1891.4,
"text": " And that's essentially just kind of a deeper or more nested failure case of not knowing"
},
{
"start": 1891.4,
"end": 1893.96,
"text": " what's novel and not knowing what's relevant for your goal."
},
{
"start": 1893.96,
"end": 1894.96,
"text": " Right."
},
{
"start": 1894.96,
"end": 1898.4,
"text": " So if you're prioritizing words that have combat in them because your extrinsic goal"
},
{
"start": 1898.4,
"end": 1904.64,
"text": " is combat, but you first need to buy something, then your, your, your semantics, you know,"
},
{
"start": 1904.64,
"end": 1907.28,
"text": " your measure of novelty or relevance is just not good enough."
},
{
"start": 1907.28,
"end": 1908.28,
"text": " Right."
},
{
"start": 1908.28,
"end": 1913.24,
"text": " So that's going to just be a fundamental problem in exploration is how do we know whether it's"
},
{
"start": 1913.24,
"end": 1917.44,
"text": " states or language, you know, how do we know when a state is relevant for the ultimate"
},
{
"start": 1917.44,
"end": 1918.44,
"text": " task?"
},
{
"start": 1918.44,
"end": 1919.44,
"text": " Yeah."
},
{
"start": 1919.44,
"end": 1921.52,
"text": " And I guess humans aren't very much different, right?"
},
{
"start": 1921.52,
"end": 1924.08,
"text": " I mean, science is a really hard process."
},
{
"start": 1924.08,
"end": 1930.08,
"text": " It's not, you know, that exploration takes millions of humans and hundreds of years."
},
{
"start": 1930.08,
"end": 1936.24,
"text": " So we can't fault our RL agents here for not, not doing that great of a job."
},
{
"start": 1936.24,
"end": 1941.44,
"text": " Here, I found these plots to be really cool, like the analysis, sort of the evolution of"
},
{
"start": 1941.44,
"end": 1943.36,
"text": " what the teachers propose."
},
{
"start": 1943.36,
"end": 1948.32,
"text": " And of course, these being language, it's quite insightful and understandable what's"
},
{
"start": 1948.32,
"end": 1950.4,
"text": " happening in the algorithm."
},
{
"start": 1950.4,
"end": 1956.36,
"text": " My, my surprise was a little bit, aren't these things kind of subject to like catastrophic"
},
{
"start": 1956.36,
"end": 1958.08,
"text": " forgetting or things like this?"
},
{
"start": 1958.08,
"end": 1959.4,
"text": " I can imagine, right?"
},
{
"start": 1959.4,
"end": 1964.56,
"text": " If I train these things online and they're at some difficulty level, all of a sudden"
},
{
"start": 1964.56,
"end": 1967.96,
"text": " they forget that reaching the red door is kind of really easy."
},
{
"start": 1967.96,
"end": 1973.48,
"text": " Or so is that have you ever thought is that a problem?"
},
{
"start": 1973.48,
"end": 1975.24,
"text": " Or was that ever a problem?"
},
{
"start": 1975.24,
"end": 1976.24,
"text": " Did you encounter that?"
},
{
"start": 1976.24,
"end": 1978.76,
"text": " Or why don't we encounter that?"
},
{
"start": 1978.76,
"end": 1979.76,
"text": " Yeah."
},
{
"start": 1979.76,
"end": 1984.08,
"text": " So I expect that that is a problem that happens in these agents."
},
{
"start": 1984.08,
"end": 1987.6799999999998,
"text": " I don't think we really precisely tried to measure whether or not catastrophic forgetting"
},
{
"start": 1987.6799999999998,
"end": 1989.56,
"text": " is a problem."
},
{
"start": 1989.56,
"end": 1996.8,
"text": " I think the fact is that we evaluate in environments where we are not testing the agents kind of"
},
{
"start": 1996.8,
"end": 2002.6,
"text": " continuously for mastery of all of the skills that it has learned in its curriculum proposed"
},
{
"start": 2002.6,
"end": 2003.72,
"text": " by the teacher."
},
{
"start": 2003.72,
"end": 2006.9199999999998,
"text": " And so this problem of, oh, you know, you forgot how to specifically open a specific"
},
{
"start": 2006.9199999999998,
"end": 2011.06,
"text": " color door is not an issue as long as the student is still quite good at completing"
},
{
"start": 2011.06,
"end": 2015.48,
"text": " whatever goals it needs to complete to try to achieve the extrinsic goal that is currently"
},
{
"start": 2015.48,
"end": 2016.48,
"text": " being set by the teacher."
},
{
"start": 2016.48,
"end": 2017.48,
"text": " Right."
},
{
"start": 2017.48,
"end": 2020.3600000000001,
"text": " So if you forget things that are at the very beginning of training, that's not a big deal."
},
{
"start": 2020.3600000000001,
"end": 2024.14,
"text": " So long as whatever path that the teacher is leading you on is something that will eventually"
},
{
"start": 2024.14,
"end": 2026.52,
"text": " get you to the extrinsic goal that we care about."
},
{
"start": 2026.52,
"end": 2029.44,
"text": " And I think that happens to be the case in these environments because there was only"
},
{
"start": 2029.44,
"end": 2033.78,
"text": " one extrinsic goal and because we're not testing it to master every single skill from kind"
},
{
"start": 2033.78,
"end": 2036.44,
"text": " of low level to high level abstractions."
},
{
"start": 2036.44,
"end": 2042.04,
"text": " But if we were in a setting where being able to complete those lower level goals kind of,"
},
{
"start": 2042.04,
"end": 2046.72,
"text": " you know, on a dime and kind of, you know, switch kind of do context switching like that,"
},
{
"start": 2046.72,
"end": 2050.36,
"text": " if that were more important, then we would have to deal with this problem of catastrophic"
},
{
"start": 2050.36,
"end": 2051.36,
"text": " forgetting."
},
{
"start": 2051.36,
"end": 2052.36,
"text": " Right."
},
{
"start": 2052.36,
"end": 2057,
"text": " An important point here is that we really don't care about how well the student is able"
},
{
"start": 2057,
"end": 2059.8,
"text": " to follow instructions proposed by the teacher."
},
{
"start": 2059.8,
"end": 2065.56,
"text": " That's, I mean, we hope the goal is that that property emerges such that we can complete"
},
{
"start": 2065.56,
"end": 2066.56,
"text": " the extrinsic goal."
},
{
"start": 2066.56,
"end": 2067.56,
"text": " Right."
},
{
"start": 2067.56,
"end": 2069.68,
"text": " But we're never actually trying to learn a student that can follow instructions."
},
{
"start": 2069.68,
"end": 2076.52,
"text": " We never really evaluated exclusively in an instruction following setting."
},
{
"start": 2076.52,
"end": 2081.56,
"text": " Because if we think ahead a little bit, and I'm going to want to just scroll down to the"
},
{
"start": 2081.56,
"end": 2088.4,
"text": " environments just because, yeah, maybe this this will inspire us a little bit."
},
{
"start": 2088.4,
"end": 2093.96,
"text": " If we think ahead a little bit beyond this work, here you have this very, this Oracle"
},
{
"start": 2093.96,
"end": 2095.72,
"text": " language descriptor."
},
{
"start": 2095.72,
"end": 2100.64,
"text": " And you say also in the outlook of future work that that is something obviously that"
},
{
"start": 2100.64,
"end": 2104.78,
"text": " we're trying to get rid of because not every environment, like the fewest of environments"
},
{
"start": 2104.78,
"end": 2109.6400000000003,
"text": " actually have such a built in language description or easily accessible one."
},
{
"start": 2109.6400000000003,
"end": 2113.1600000000003,
"text": " So we might have to regress to something else."
},
{
"start": 2113.1600000000003,
"end": 2119.88,
"text": " So I want to I want to think about three different external models that we could bring in."
},
{
"start": 2119.88,
"end": 2123.96,
"text": " And I wonder what you think of each of them, like how these could fit in."
},
{
"start": 2123.96,
"end": 2128.2400000000002,
"text": " The first would be something like GPT-3, like just a pure language model."
},
{
"start": 2128.2400000000002,
"end": 2131.26,
"text": " How could that help us?"
},
{
"start": 2131.26,
"end": 2135.44,
"text": " Maybe in combination with these things, because we need some starting point, right?"
},
{
"start": 2135.44,
"end": 2139.84,
"text": " But how could a pre-trained language model that knows something about the world help"
},
{
"start": 2139.84,
"end": 2140.84,
"text": " us?"
},
{
"start": 2140.84,
"end": 2145.6800000000003,
"text": " Then something like CLIP, maybe something that can take an image and language and say"
},
{
"start": 2145.6800000000003,
"end": 2148.7200000000003,
"text": " whether they're good together or not."
},
{
"start": 2148.7200000000003,
"end": 2152.6000000000004,
"text": " And then maybe even something like or maybe a captioning model."
},
{
"start": 2152.6000000000004,
"end": 2154.0800000000004,
"text": " Right."
},
{
"start": 2154.0800000000004,
"end": 2159.6000000000004,
"text": " And maybe something like DALEE, like something that takes language and generates."
},
{
"start": 2159.6,
"end": 2166.96,
"text": " Is there in this cloud of models, what possibilities do we have to bring in sort of to replace"
},
{
"start": 2166.96,
"end": 2170.68,
"text": " this Oracle thing with with learned systems?"
},
{
"start": 2170.68,
"end": 2173.2,
"text": " It doesn't even need to be learned online, right?"
},
{
"start": 2173.2,
"end": 2174.3199999999997,
"text": " It can be pre-trained."
},
{
"start": 2174.3199999999997,
"end": 2177.7599999999998,
"text": " I'm probably much more excited about that."
},
{
"start": 2177.7599999999998,
"end": 2178.7599999999998,
"text": " Yeah."
},
{
"start": 2178.7599999999998,
"end": 2182.92,
"text": " Yeah, these are, I think, going to be the most fun questions to look at in kind of language"
},
{
"start": 2182.92,
"end": 2187.36,
"text": " conditions are all going forward is taking the boom in pre-trained models in large language"
},
{
"start": 2187.36,
"end": 2193.32,
"text": " models and resulting, you know, bringing these into concrete and actionable gains in reinforcement"
},
{
"start": 2193.32,
"end": 2195.1600000000003,
"text": " learning."
},
{
"start": 2195.1600000000003,
"end": 2200.92,
"text": " It's funny that you mentioned this kind of what I described as almost a gradation from"
},
{
"start": 2200.92,
"end": 2205.6800000000003,
"text": " ungrounded language models like GPT-3, right, which are trained on text only corpora and"
},
{
"start": 2205.6800000000003,
"end": 2210.2400000000002,
"text": " whether those can actually help in these environments, which I would call are fundamentally grounded,"
},
{
"start": 2210.2400000000002,
"end": 2211.2400000000002,
"text": " right?"
},
{
"start": 2211.2400000000002,
"end": 2215.1600000000003,
"text": " They're grounded in some some visual or perceptual world."
},
{
"start": 2215.16,
"end": 2219.3199999999997,
"text": " And ungrounded language models still result in gains in these settings."
},
{
"start": 2219.3199999999997,
"end": 2224.2,
"text": " And my intuition is, yeah, they probably still can because, you know, even if you don't exactly"
},
{
"start": 2224.2,
"end": 2228.24,
"text": " know what it means to acquire a wand or kill a minotaur in some environment because you"
},
{
"start": 2228.24,
"end": 2233.92,
"text": " don't know what a minotaur looks like or what a wand looks like, GPT, as I mentioned, you"
},
{
"start": 2233.92,
"end": 2235.3999999999996,
"text": " know, this idea of priors, right?"
},
{
"start": 2235.3999999999996,
"end": 2239.56,
"text": " GPT has strong priors on sensible sequences of actions, right?"
},
{
"start": 2239.56,
"end": 2246.2799999999997,
"text": " So insofar as these environments are testing kind of sequences of actions that humans kind"
},
{
"start": 2246.2799999999997,
"end": 2250.6,
"text": " of have an intuition for, you know, it's some fantasy world, but we have some intuition,"
},
{
"start": 2250.6,
"end": 2253.44,
"text": " oh, in order to defeat the minotaur, we need to get a weapon first."
},
{
"start": 2253.44,
"end": 2255.14,
"text": " We probably look around for a weapon."
},
{
"start": 2255.14,
"end": 2256.14,
"text": " Maybe there's a shop."
},
{
"start": 2256.14,
"end": 2258.16,
"text": " Maybe we can buy a weapon from the shop, right?"
},
{
"start": 2258.16,
"end": 2262.2799999999997,
"text": " Video games are testing knowledge that we have very like deep seated commonsense knowledge"
},
{
"start": 2262.2799999999997,
"end": 2265.72,
"text": " that we have that hopefully generalizes to these fantasy worlds."
},
{
"start": 2265.72,
"end": 2268.92,
"text": " And GPT certainly contains a lot of that information, right?"
},
{
"start": 2268.92,
"end": 2274.44,
"text": " So you might imagine we should reward or filter the kinds of descriptions that we see to those"
},
{
"start": 2274.44,
"end": 2278.16,
"text": " that seem sensible narratives that GPT-3 would generate, right?"
},
{
"start": 2278.16,
"end": 2283.52,
"text": " So a sensible sequence of actions along the way to defeating the minotaur is collecting"
},
{
"start": 2283.52,
"end": 2286.52,
"text": " a wand and buying it and things like that."
},
{
"start": 2286.52,
"end": 2291.28,
"text": " And I think you actually already see some examples of this happening in more goal conditioned"
},
{
"start": 2291.28,
"end": 2292.48,
"text": " or instruction following RL."
},
{
"start": 2292.48,
"end": 2297.16,
"text": " So there's been some recent work from, I know, teams at Berkeley, maybe Google as well, that"
},
{
"start": 2297.16,
"end": 2301.2799999999997,
"text": " are looking at using pre-trained language models, which are not necessarily even grounded."
},
{
"start": 2301.2799999999997,
"end": 2307.24,
"text": " They're just, you know, GPT-3, using them to construct sensible plans, action plans"
},
{
"start": 2307.24,
"end": 2309.96,
"text": " or sub goals for completing certain actions."
},
{
"start": 2309.96,
"end": 2315.2,
"text": " So in some home environment, for example, maybe my action is get a cup of coffee."
},
{
"start": 2315.2,
"end": 2318.68,
"text": " And then the goal of GPT is even though I don't really know what my environment looks"
},
{
"start": 2318.68,
"end": 2322.68,
"text": " like, I don't know what kitchen you're in, I know that sensibly this should include finding"
},
{
"start": 2322.68,
"end": 2325.64,
"text": " a mug and then heating up the kettle and things like that."
},
{
"start": 2325.64,
"end": 2330.2799999999997,
"text": " And so we already see some promising use of kind of ungrounded models for improving grounded"
},
{
"start": 2330.2799999999997,
"end": 2331.2799999999997,
"text": " decision making settings."
},
{
"start": 2331.2799999999997,
"end": 2333.7999999999997,
"text": " Yeah, did you want to comment on that?"
},
{
"start": 2333.7999999999997,
"end": 2334.7999999999997,
"text": " Or I can also-"
},
{
"start": 2334.7999999999997,
"end": 2335.8799999999997,
"text": " No, no, that's cool."
},
{
"start": 2335.8799999999997,
"end": 2343.7999999999997,
"text": " I think, yeah, I think I've even had at least one of these works here on the channel in"
},
{
"start": 2343.7999999999997,
"end": 2345.44,
"text": " this home environment."
},
{
"start": 2345.44,
"end": 2348.68,
"text": " That's exactly, I was also really cool to see."
},
{
"start": 2348.68,
"end": 2352.7599999999998,
"text": " Obviously, these models know a lot about the world, right?"
},
{
"start": 2352.76,
"end": 2359.5600000000004,
"text": " And I think people overestimate how or underestimate maybe, well, whatever."
},
{
"start": 2359.5600000000004,
"end": 2364.6800000000003,
"text": " That the thing, if we humans look at a board like this, like at a mini hack board, we see"
},
{
"start": 2364.6800000000003,
"end": 2365.84,
"text": " a map, right?"
},
{
"start": 2365.84,
"end": 2371.5600000000004,
"text": " We see paths to walk on and stuff like this, even if we've never played a video game."
},
{
"start": 2371.5600000000004,
"end": 2375.1400000000003,
"text": " But this is, these are such strong priors built into us."
},
{
"start": 2375.1400000000003,
"end": 2380.1200000000003,
"text": " And we sometimes think like, why can't that dumb computer just like walk around the wall,"
},
{
"start": 2380.1200000000003,
"end": 2381.1200000000003,
"text": " right?"
},
{
"start": 2381.12,
"end": 2383.92,
"text": " And we're like, what's up?"
},
{
"start": 2383.92,
"end": 2388.6,
"text": " And I think these large models are a way we can really get that knowledge from the human"
},
{
"start": 2388.6,
"end": 2390.52,
"text": " world into this world."
},
{
"start": 2390.52,
"end": 2394.3199999999997,
"text": " So yeah, I think that's, it's a great outlook."
},
{
"start": 2394.3199999999997,
"end": 2402.8399999999997,
"text": " Also with the models that combine images and text, I feel that could be really like adding"
},
{
"start": 2402.8399999999997,
"end": 2405.68,
"text": " a lot of value to the RL world."
},
{
"start": 2405.68,
"end": 2411.24,
"text": " At least the RL environments that are like human environments."
},
{
"start": 2411.24,
"end": 2417.3999999999996,
"text": " Of course, there's reinforcement learning for computer chip design, and things like"
},
{
"start": 2417.3999999999996,
"end": 2418.3999999999996,
"text": " this."
},
{
"start": 2418.3999999999996,
"end": 2422.6,
"text": " I don't think those are necessarily going to be profiting that much from it."
},
{
"start": 2422.6,
"end": 2429.22,
"text": " But yeah, yeah, really cool is so you're you're at Stanford?"
},
{
"start": 2429.22,
"end": 2431.58,
"text": " Or did you do the work at Stanford?"
},
{
"start": 2431.58,
"end": 2433.08,
"text": " Or were you at some internship?"
},
{
"start": 2433.08,
"end": 2436.7799999999997,
"text": " Yeah, I did it while I had an internship last fall."
},
{
"start": 2436.7799999999997,
"end": 2437.7799999999997,
"text": " So this is fall 2021."
},
{
"start": 2437.7799999999997,
"end": 2441.04,
"text": " Okay, continue to work a little bit while at Stanford."
},
{
"start": 2441.04,
"end": 2448.2,
"text": " But it was mostly in collaboration with some people at fair or meta, I guess now in London."
},
{
"start": 2448.2,
"end": 2452,
"text": " Reinforcement learning is notoriously also kind of hardware intensive."
},
{
"start": 2452,
"end": 2456.56,
"text": " Although this work right here seems like maybe not that much because you describe a little"
},
{
"start": 2456.56,
"end": 2462.3199999999997,
"text": " bit sort of what what it takes to investigate a project like this."
},
{
"start": 2462.32,
"end": 2467.28,
"text": " Yeah, unfortunately, I think even for these environments, it's fairly hardware intensive,"
},
{
"start": 2467.28,
"end": 2473.4,
"text": " certainly still feasible, I think, on let's say, a more academically sized compute budget."
},
{
"start": 2473.4,
"end": 2479.36,
"text": " But for being able to run the experimentation needed to iterate quickly, you know, you do"
},
{
"start": 2479.36,
"end": 2483.36,
"text": " really definitely benefit from kind of industry level scale, which is one of the unfortunate"
},
{
"start": 2483.36,
"end": 2487.6400000000003,
"text": " things about this kind of research is that it is a little bit less accessible to people"
},
{
"start": 2487.6400000000003,
"end": 2490.48,
"text": " in smaller compute settings."
},
{
"start": 2490.48,
"end": 2495.36,
"text": " So maybe the typical kind of RL environments you think of our compute heavy are the ones"
},
{
"start": 2495.36,
"end": 2501.08,
"text": " that are in 3D simulation, you know, very, you know, need physics, need soft joint contact"
},
{
"start": 2501.08,
"end": 2502.84,
"text": " and all of these things to model."
},
{
"start": 2502.84,
"end": 2504.44,
"text": " And those are really expensive."
},
{
"start": 2504.44,
"end": 2508.36,
"text": " I think compared to that, these are kind of more symbolic grid worlds."
},
{
"start": 2508.36,
"end": 2512.6,
"text": " You know, the whole point as to why mini hack or net hack was chosen as a reinforcement"
},
{
"start": 2512.6,
"end": 2516.92,
"text": " learning test bed was because the code base is, you know, written entirely in C and is"
},
{
"start": 2516.92,
"end": 2522.48,
"text": " very optimized, and so you can run simulations very quickly on modern hardware."
},
{
"start": 2522.48,
"end": 2526.04,
"text": " But that being said, it's still relatively compute expensive."
},
{
"start": 2526.04,
"end": 2531.56,
"text": " Again, the just amount of experience needed by state of the art, deep RL methods, even"
},
{
"start": 2531.56,
"end": 2536.28,
"text": " with extrinsic or intrinsic exploration bonuses is still very expensive, right?"
},
{
"start": 2536.28,
"end": 2540.96,
"text": " So for example, one of these runs, we would typically have, let's say, 40 CPU actors collecting"
},
{
"start": 2540.96,
"end": 2545.8,
"text": " experience at the same time in parallel, and then kind of one or two GPU learner threads"
},
{
"start": 2545.8,
"end": 2549.2400000000002,
"text": " in the background kind of updating from this experience."
},
{
"start": 2549.2400000000002,
"end": 2554.6000000000004,
"text": " So even just a single, you know, computational experiment here needs non trivial hardware"
},
{
"start": 2554.6000000000004,
"end": 2555.6000000000004,
"text": " for sure."
},
{
"start": 2555.6000000000004,
"end": 2556.6000000000004,
"text": " Yeah."
},
{
"start": 2556.6000000000004,
"end": 2558.96,
"text": " And, and you ideally you want to do that in parallel, right?"
},
{
"start": 2558.96,
"end": 2563.4,
"text": " Because you want to try out a bunch of things are repeated a bunch of times because one"
},
{
"start": 2563.4,
"end": 2567.44,
"text": " experiment really tells you almost nothing, right?"
},
{
"start": 2567.44,
"end": 2569.2000000000003,
"text": " Unless it succeeds, right?"
},
{
"start": 2569.2000000000003,
"end": 2570.44,
"text": " If it succeeds, it's good."
},
{
"start": 2570.44,
"end": 2574.04,
"text": " But if it fails, you never know if you repeat it a bunch of times."
},
{
"start": 2574.04,
"end": 2579.16,
"text": " Yeah, but I mean, it's still it's not it's not the most extreme thing, right?"
},
{
"start": 2579.16,
"end": 2583.7599999999998,
"text": " Like two GPUs or so and a bunch of CPUs."
},
{
"start": 2583.7599999999998,
"end": 2587.6,
"text": " As you say, that can that's still academically doable, which I find cool."
},
{
"start": 2587.6,
"end": 2593.56,
"text": " Could you maybe tell us a bit about the process of researching of researching this?"
},
{
"start": 2593.56,
"end": 2596.68,
"text": " Like, did everything work out as planned from the beginning?"
},
{
"start": 2596.68,
"end": 2600.48,
"text": " Or where was your starting point?"
},
{
"start": 2600.48,
"end": 2605.04,
"text": " And what changed about your plan during the research, like maybe something didn't work"
},
{
"start": 2605.04,
"end": 2606.04,
"text": " out or so?"
},
{
"start": 2606.04,
"end": 2607.04,
"text": " Yeah."
},
{
"start": 2607.04,
"end": 2611.76,
"text": " Yeah, I feel I don't I feel it's always good for people to hear that other people encounter"
},
{
"start": 2611.76,
"end": 2614.44,
"text": " problems and how they get around problems."
},
{
"start": 2614.44,
"end": 2615.44,
"text": " Yeah."
},
{
"start": 2615.44,
"end": 2616.44,
"text": " Yeah."
},
{
"start": 2616.44,
"end": 2620.2,
"text": " So yeah, it's a great question."
},
{
"start": 2620.2,
"end": 2627.3,
"text": " The intuition that I think me and my collaborators started with was, you know, fairly sensible."
},
{
"start": 2627.3,
"end": 2631.88,
"text": " It's language is clearly going to help in these environments."
},
{
"start": 2631.88,
"end": 2634.2400000000002,
"text": " You know, it has some nice parallels to human exploration."
},
{
"start": 2634.2400000000002,
"end": 2638.96,
"text": " And so let's just see whether or not language will work in these environments."
},
{
"start": 2638.96,
"end": 2643.0800000000004,
"text": " What's funny, though, is that we actually started out the project less about the more"
},
{
"start": 2643.0800000000004,
"end": 2647.88,
"text": " abstract question of like, does language help exploration and more a very concrete question"
},
{
"start": 2647.88,
"end": 2650.6000000000004,
"text": " of how do we improve upon Amigo?"
},
{
"start": 2650.6000000000004,
"end": 2655.2400000000002,
"text": " So how do we improve upon an existing state of the art algorithm for exploration?"
},
{
"start": 2655.24,
"end": 2658.08,
"text": " Let's propose something that we argue is better than everything."
},
{
"start": 2658.08,
"end": 2662.04,
"text": " It's like we're going to propose a state of the art exploration method called El Amigo,"
},
{
"start": 2662.04,
"end": 2665.16,
"text": " which will get 100 percent accuracy in all these environments."
},
{
"start": 2665.16,
"end": 2667.2,
"text": " And none of the existing methods will work."
},
{
"start": 2667.2,
"end": 2668.2,
"text": " Right."
},
{
"start": 2668.2,
"end": 2670.2,
"text": " That's that's kind of the narrative that you set up for yourself when you're starting research"
},
{
"start": 2670.2,
"end": 2673.9199999999996,
"text": " is I'm going to build something that's new and that's the best."
},
{
"start": 2673.9199999999996,
"end": 2674.9199999999996,
"text": " Right."
},
{
"start": 2674.9199999999996,
"end": 2678.8399999999997,
"text": " However, I think the focus of this paper and the story has shifted considerably."
},
{
"start": 2678.8399999999997,
"end": 2680.6,
"text": " I think it's shifted for the better, actually."
},
{
"start": 2680.6,
"end": 2685.92,
"text": " And part of this shift happened because we implemented El Amigo and it was working fine"
},
{
"start": 2685.92,
"end": 2687.2799999999997,
"text": " and it worked better than Amigo."
},
{
"start": 2687.2799999999997,
"end": 2688.68,
"text": " So we were quite excited."
},
{
"start": 2688.68,
"end": 2691.3199999999997,
"text": " But at the same time, the field is moving so fast."
},
{
"start": 2691.3199999999997,
"end": 2697.68,
"text": " And at NeurIPS last year, some researchers came out with this method called novelty and"
},
{
"start": 2697.68,
"end": 2701.24,
"text": " we ran novelty and novelty also did really well."
},
{
"start": 2701.24,
"end": 2704.64,
"text": " And you know, in some environments, it totally like blew Amigo out of the water."
},
{
"start": 2704.64,
"end": 2705.64,
"text": " Right."
},
{
"start": 2705.64,
"end": 2706.64,
"text": " And El Amigo."
},
{
"start": 2706.64,
"end": 2711.4,
"text": " And part of our thinking was, well, OK, now we can't really say, oh, we have El Amigo"
},
{
"start": 2711.4,
"end": 2713.08,
"text": " and it's the best model."
},
{
"start": 2713.08,
"end": 2714.08,
"text": " It's the best environment."
},
{
"start": 2714.08,
"end": 2717.08,
"text": " And you should only use this."
},
{
"start": 2717.08,
"end": 2719.16,
"text": " And at first I thought, you know, this is derailing our narrative."
},
{
"start": 2719.16,
"end": 2720.16,
"text": " Right."
},
{
"start": 2720.16,
"end": 2721.16,
"text": " We're not proposing anything new."
},
{
"start": 2721.16,
"end": 2722.16,
"text": " We're not proposing anything state of the art."
},
{
"start": 2722.16,
"end": 2723.8399999999997,
"text": " So what's the point?"
},
{
"start": 2723.8399999999997,
"end": 2727.52,
"text": " But I think after some kind of juggling and shuffling, we realized that what we're really"
},
{
"start": 2727.52,
"end": 2731.94,
"text": " interested in is the scientific question of does language help exploration?"
},
{
"start": 2731.94,
"end": 2735.08,
"text": " So take existing method X and then do X plus language."
},
{
"start": 2735.08,
"end": 2736.08,
"text": " Right."
},
{
"start": 2736.08,
"end": 2740.2799999999997,
"text": " And so this question can be answered kind of agnostic to the specific method that we"
},
{
"start": 2740.2799999999997,
"end": 2741.2799999999997,
"text": " actually use."
},
{
"start": 2741.2799999999997,
"end": 2742.2799999999997,
"text": " Right."
},
{
"start": 2742.2799999999997,
"end": 2746.12,
"text": " And so it was that juncture where we actually decided, OK, let's actually look at novelty"
},
{
"start": 2746.12,
"end": 2748.94,
"text": " closely and let's imagine adding language to novelty as well."
},
{
"start": 2748.94,
"end": 2750.68,
"text": " And do we see the same kind of results?"
},
{
"start": 2750.68,
"end": 2751.68,
"text": " Right."
},
{
"start": 2751.68,
"end": 2757.54,
"text": " And so I think this is kind of an outcome of the paper that was kind of on the fly changed."
},
{
"start": 2757.54,
"end": 2761.92,
"text": " But I'm very happy with which is that we're not trying to claim that we have a method"
},
{
"start": 2761.92,
"end": 2766.7200000000003,
"text": " that is state of the art or that is best or that anyone should be using our method."
},
{
"start": 2766.7200000000003,
"end": 2769.32,
"text": " We are very agnostic to the particular choice of method."
},
{
"start": 2769.32,
"end": 2770.32,
"text": " Right."
},
{
"start": 2770.32,
"end": 2774.92,
"text": " We're trying to answer kind of a more abstract question, which is when does language help"
},
{
"start": 2774.92,
"end": 2775.92,
"text": " exploration?"
},
{
"start": 2775.92,
"end": 2778.8,
"text": " And I think this is a little bit more egalitarian."
},
{
"start": 2778.8,
"end": 2780.84,
"text": " We're not saying that our method is better than anyone else's."
},
{
"start": 2780.84,
"end": 2785.6,
"text": " And we also don't have to exhaustively compare to like a lot of existing work."
},
{
"start": 2785.6,
"end": 2789,
"text": " We're just saying that if you take whatever method that we have and you add language,"
},
{
"start": 2789,
"end": 2792.36,
"text": " you do better and here are two examples where that happens."
},
{
"start": 2792.36,
"end": 2793.36,
"text": " Cool."
},
{
"start": 2793.36,
"end": 2799.88,
"text": " And it is a good way to preempt some reviewers from saying that you didn't train on ImageNet"
},
{
"start": 2799.88,
"end": 2801.4,
"text": " and that's bad."
},
{
"start": 2801.4,
"end": 2802.96,
"text": " Yeah."
},
{
"start": 2802.96,
"end": 2807.68,
"text": " Is there anything else that you want to get out to viewers?"
},
{
"start": 2807.68,
"end": 2813.52,
"text": " Maybe a way they can get started if that's possible or anything that you'd like them"
},
{
"start": 2813.52,
"end": 2816.52,
"text": " to know?"
},
{
"start": 2816.52,
"end": 2827.6,
"text": " Yeah, I think that we've discussed a lot about these kind of higher level ideas of one holy"
},
{
"start": 2827.6,
"end": 2832.52,
"text": " grail is that we have clip generating descriptions or open GPT-3 and then we're evaluating in"
},
{
"start": 2832.52,
"end": 2837.16,
"text": " these really high dimensional spaces with actual motor joints and we're going to show"
},
{
"start": 2837.16,
"end": 2845.78,
"text": " how language helps in these like mojoco style, like really deep RL, realistic environments"
},
{
"start": 2845.78,
"end": 2847.6800000000003,
"text": " and maybe you can transfer to the real world."
},
{
"start": 2847.6800000000003,
"end": 2851.88,
"text": " I think that's the broad vision but I think it is still very far away."
},
{
"start": 2851.88,
"end": 2856.88,
"text": " I think we even in this paper abstracted away a lot of difficulty of the problem."
},
{
"start": 2856.88,
"end": 2858.96,
"text": " We're assuming that we have Oracle language annotations."
},
{
"start": 2858.96,
"end": 2863.5600000000004,
"text": " We're only looking at these kind of symbolic grid worlds and although it's tempting to"
},
{
"start": 2863.5600000000004,
"end": 2868.2000000000003,
"text": " dive in and say, okay, now let's kind of straightforwardly let's extend this to a real world environment"
},
{
"start": 2868.2000000000003,
"end": 2872.7200000000003,
"text": " where I have to actually move my coffee mug to make coffee and tea, I think we're still"
},
{
"start": 2872.72,
"end": 2879.56,
"text": " quite far away from that broad vision of kind of household enabled robots in RL and is probably"
},
{
"start": 2879.56,
"end": 2882.9199999999996,
"text": " not the most I think like beginner friendly way of starting."
},
{
"start": 2882.9199999999996,
"end": 2887.24,
"text": " There's just so many deep problems that need to be solved jointly from perception to action"
},
{
"start": 2887.24,
"end": 2892.7999999999997,
"text": " to planning and before we even consider how we better incorporate language into the mix."
},
{
"start": 2892.7999999999997,
"end": 2897.24,
"text": " And so I think the way to build upon this work is just these kind of very small progressive"
},
{
"start": 2897.24,
"end": 2900.56,
"text": " relaxations of the assumptions that I and many of the other people who have worked in"
},
{
"start": 2900.56,
"end": 2901.56,
"text": " this space have."
},
{
"start": 2901.56,
"end": 2905.16,
"text": " Right. So again, let's imagine let's just imagine we get rid of the Oracle language"
},
{
"start": 2905.16,
"end": 2909.72,
"text": " annotator and we train a model to emit states for these simple environments."
},
{
"start": 2909.72,
"end": 2913.04,
"text": " You know, we didn't really explore that, but that's a very sensible way to extend this"
},
{
"start": 2913.04,
"end": 2916.44,
"text": " kind of work while keeping the environment and the models fixed."
},
{
"start": 2916.44,
"end": 2917.44,
"text": " Right."
},
{
"start": 2917.44,
"end": 2921.68,
"text": " So this goes back to the very beginning when you mentioned the kind of way in which we"
},
{
"start": 2921.68,
"end": 2925.48,
"text": " approach this paper was to keep everything fixed and then just look at this kind of very"
},
{
"start": 2925.48,
"end": 2929.64,
"text": " small change and see how that results in different performance in our environment."
},
{
"start": 2929.64,
"end": 2931.6,
"text": " I think that's really just kind of the way to go."
},
{
"start": 2931.6,
"end": 2932.6,
"text": " It's very slow."
},
{
"start": 2932.6,
"end": 2935.72,
"text": " It's very incremental work, but hopefully it's getting us more towards that kind of"
},
{
"start": 2935.72,
"end": 2940.4,
"text": " guiding star of eventually having these models that operate in these realistic environments"
},
{
"start": 2940.4,
"end": 2944.48,
"text": " and use pre-trained model language to help exploration."
},
{
"start": 2944.48,
"end": 2945.48,
"text": " Cool."
},
{
"start": 2945.48,
"end": 2948.3799999999997,
"text": " Jesse, thank you very much for being here."
},
{
"start": 2948.3799999999997,
"end": 2949.3799999999997,
"text": " This was awesome."
},
{
"start": 2949.3799999999997,
"end": 2950.3799999999997,
"text": " Thanks."
},
{
"start": 2950.38,
"end": 2964.44,
"text": " Have a lot of fun."
}
] |
ZTs_mXwMCs8 | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Galactica: A Large Language Model for Science (Drama & Paper Review) | ["Science & Technology"] | ["deep learning","machine learning","arxiv","explained","neural networks","ai","artificial intellige(...TRUNCATED) | "#ai #galactica #meta\n\nGalactica is a language model trained on a curated corpus of scientific doc(...TRUNCATED) | " Hello, this video starts out with a review of the drama around the public demo of the Galactica mo(...TRUNCATED) | [{"start":0.0,"end":5.24,"text":" Hello, this video starts out with a review of the drama around the(...TRUNCATED)
n1SXlK5rhR8 | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | [Drama] Yann LeCun against Twitter on Dataset Bias | ["Science & Technology"] | ["deep learning","machine learning","arxiv","explained","neural networks","ai","artificial intellige(...TRUNCATED) | "Yann LeCun points out an instance of dataset bias and proposes a sensible solution. People are not (...TRUNCATED) | " Hi there! So you may have seen this already. There's a CVPR paper called Pulse. And what it does i(...TRUNCATED) | [{"start":0.0,"end":6.32,"text":" Hi there! So you may have seen this already. There's a CVPR paper (...TRUNCATED)
BK3rv0MQMwY | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | [News] The Siraj Raval Controversy | ["Science & Technology"] | ["machine learning","siraj","controversy","scam","scammer","fraud","plagiarism","plagiarized","cours(...TRUNCATED) | "Popular ML YouTuber Siraj Raval is in the middle of not just one, but two controversies: First, a l(...TRUNCATED) | " There is a massive controversy going on right now and in the middle is Siraj Raval, a prominent Yo(...TRUNCATED) | [{"start":0.0,"end":7.0,"text":" There is a massive controversy going on right now and in the middle(...TRUNCATED)
U0mxx7AoNz0 | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Player of Games: All the games, one algorithm! (w/ author Martin Schmid) | ["Science & Technology"] | ["deep learning","machine learning","arxiv","explained","neural networks","ai","artificial intellige(...TRUNCATED) | "#playerofgames #deepmind #alphazero\n\nSpecial Guest: First author Martin Schmid (https://twitter.c(...TRUNCATED) | " Hello everyone, today is a special day. I'm here, as you can see, not alone, not by myself as usua(...TRUNCATED) | [{"start":0.0,"end":7.28,"text":" Hello everyone, today is a special day. I'm here, as you can see, (...TRUNCATED)
fvctpYph8Pc | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Do ImageNet Classifiers Generalize to ImageNet? (Paper Explained) | ["Science & Technology"] | ["deep learning","machine learning","imagenet","cifar10","cifar10.1","generalization","overfitting",(...TRUNCATED) | "Has the world overfitted to ImageNet? What if we collect another dataset in exactly the same fashio(...TRUNCATED) | " Hi there today we're looking at to do image net classifiers Generalized to image net by Benjamin w(...TRUNCATED) | [{"start":0.0,"end":3.3000000000000003,"text":" Hi there today we're looking at to do image net clas(...TRUNCATED)
PZypP7PiKi0 | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Gradient Surgery for Multi-Task Learning | ["Science & Technology"] | ["deep learning","machine learning","neural networks","multi task","conflicting gradients","magnitud(...TRUNCATED) | "Multi-Task Learning can be very challenging when gradients of different tasks are of severely diffe(...TRUNCATED) | " Hi there, today we're looking at gradient surgery for multitask learning by Tianhe Yu, Saurabh Kum(...TRUNCATED) | [{"start":0.0,"end":7.0,"text":" Hi there, today we're looking at gradient surgery for multitask lea(...TRUNCATED)
Z3knUzwuIgo | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | One Model For All The Tasks - BLIP (Author Interview) | ["Science & Technology"] | [] | "#blip #interview #salesforce\n\nPaper Review Video: https://youtu.be/X2k7n4FuI7c\nSponsor: Assembly(...TRUNCATED) | " Hello, this is an interview with the authors of the blip paper. If you haven't seen it, I've made (...TRUNCATED) | [{"start":0.0,"end":9.200000000000001,"text":" Hello, this is an interview with the authors of the b(...TRUNCATED)
3Tqp_B2G6u0 | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | Blockwise Parallel Decoding for Deep Autoregressive Models | ["Science & Technology"] | ["machine learning","deep learning","transformers","nlp","natural language processing","ai","artific(...TRUNCATED) | "https://arxiv.org/abs/1811.03115\n\nAbstract:\nDeep autoregressive sequence-to-sequence models have(...TRUNCATED) | " Hi there, today we'll look at blockwise parallel decoding for deep autoregressive models by Mitche(...TRUNCATED) | [{"start":0.0,"end":6.640000000000001,"text":" Hi there, today we'll look at blockwise parallel deco(...TRUNCATED)
8wkgDnNxiVs | Yannic Kilcher | UCZHmQk67mSJgfCCTn7xBfew | POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and Solutions | ["Science & Technology"] | ["deep learning","machine learning","arxiv","evolution","reinforcement learning","neat","open-ended"(...TRUNCATED) | "From the makers of Go-Explore, POET is a mixture of ideas from novelty search, evolutionary methods(...TRUNCATED) | " Alright, so what you're seeing here are solutions found to this bipedal walker problem by a new al(...TRUNCATED) | [{"start":0.0,"end":6.88,"text":" Alright, so what you're seeing here are solutions found to this bi(...TRUNCATED)