Now that we've learned how ML-Agents works and studied the two environments we're going to use, we're ready to train our agents.
After that, you'll be able to watch your agents playing directly in your browser.
The ML-Agents integration on the Hub is still experimental; some features will be added in the future. But for now, to validate this hands-on for the certification process, you just need to push your trained models to the Hub. There are no minimum results required to validate this one, but if you want to get nice results you can try to reach: Pyramids: Mean Reward = 1.75; SnowballTarget: Mean Reward = 15 or 30 targets hit in an episode.
For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process
To start the hands-on, click on the Open In Colab button 👇 :
In this notebook, you’ll learn about ML-Agents and train two agents.
After that, you'll be able to watch your agents playing directly in your browser.
For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process
⬇️ Here is an example of what you will achieve at the end of this unit. ⬇️
⚠ We're going to use an experimental version of ML-Agents where you can push Unity ML-Agents models to the Hub and load them from the Hub, so you need to install this same version.
We’re constantly trying to improve our tutorials, so if you find some issues in this notebook, please open an issue on the GitHub Repo.
At the end of the notebook, you will:
🔲 📚 Study what ML-Agents is and how it works by reading Unit 5 🤗
The ML-Agents integration on the Hub is still experimental; some features will be added in the future.
But for now, to validate this hands-on for the certification process, you just need to push your trained models to the Hub. There are no minimum results required to validate this one, but if you want to get nice results you can try to attain:
Pyramids: Mean Reward = 1.75
SnowballTarget: Mean Reward = 15 or 30 targets hit in an episode.

To use a GPU during training, go to Runtime > Change Runtime type and set Hardware Accelerator > GPU.
%%capture
# Clone the repository
!git clone --depth 1 https://github.com/huggingface/ml-agents/
%%capture
# Go inside the repository and install the package
%cd ml-agents
!pip3 install -e ./ml-agents-envs
!pip3 install -e ./ml-agents
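If you want a quick sanity check that the installation worked, you can (optionally) ask the mlagents-learn CLI for its help text:
# Optional sanity check: the command should print its usage/help text if the install succeeded
!mlagents-learn --help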
If you need a refresher on how this environment works, check this section 👉 https://huggingface.co/deep-rl-course/unit5/snowball-target
We need to download the SnowballTarget environment executable and place it in ./training-envs-executables/linux/.
# Here, we create training-envs-executables and linux
!mkdir ./training-envs-executables
!mkdir ./training-envs-executables/linux
Download the file SnowballTarget.zip from https://drive.google.com/file/d/1YHHLjyj6gaZ3Gemx1hQgqrPgSS2ZhmB5 using wget.
Check out the full solution to download large files from GDrive here
!wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1YHHLjyj6gaZ3Gemx1hQgqrPgSS2ZhmB5' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1YHHLjyj6gaZ3Gemx1hQgqrPgSS2ZhmB5" -O ./training-envs-executables/linux/SnowballTarget.zip && rm -rf /tmp/cookies.txt
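If the wget trick above ever gives you trouble, a minimal alternative sketch is to use the gdown package (preinstalled on Colab; otherwise install it with pip install gdown). The file id is the same one as in the Google Drive link above:
# Alternative download using gdown (assumption: gdown is available, which it is on Colab)
!gdown "https://drive.google.com/uc?id=1YHHLjyj6gaZ3Gemx1hQgqrPgSS2ZhmB5" -O ./training-envs-executables/linux/SnowballTarget.zip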
We unzip the SnowballTarget.zip file.
%%capture
!unzip -d ./training-envs-executables/linux/ ./training-envs-executables/linux/SnowballTarget.zip
Make sure your file is accessible
!chmod -R 755 ./training-envs-executables/linux/SnowballTarget
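You can quickly list the folder to confirm the executable is in place (the training command below expects it at ./training-envs-executables/linux/SnowballTarget/SnowballTarget):
# Optional: list the unzipped folder to check the executable is there
!ls -l ./training-envs-executables/linux/SnowballTarget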
There are multiple hyperparameters. To understand them better, you should read the explanation of each of them in the documentation.
So you need to create a SnowballTarget.yaml config file in ./content/ml-agents/config/ppo/.
We'll give you a first version of this config below (to copy and paste into your SnowballTarget.yaml file), but you should modify it.
behaviors:
  SnowballTarget:
    trainer_type: ppo
    summary_freq: 10000
    keep_checkpoints: 10
    checkpoint_interval: 50000
    max_steps: 200000
    time_horizon: 64
    threaded: true
    hyperparameters:
      learning_rate: 0.0003
      learning_rate_schedule: linear
      batch_size: 128
      buffer_size: 2048
      beta: 0.005
      epsilon: 0.2
      lambd: 0.95
      num_epoch: 3
    network_settings:
      normalize: false
      hidden_units: 256
      num_layers: 2
      vis_encode_type: simple
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
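If you prefer to create this file from a notebook cell instead of the Colab file editor, here's a minimal sketch using the %%writefile cell magic. It assumes you're still inside the ml-agents directory (we moved there with %cd ml-agents earlier), so the relative path matches the one used by the training command below:
%%writefile ./config/ppo/SnowballTarget.yaml
behaviors:
  SnowballTarget:
    trainer_type: ppo
    # ... paste the rest of the config shown above here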
As an experiment, you should also try to modify some other hyperparameters. Unity provides very good documentation explaining each of them here.
Now that you’ve created the config file and understand what most hyperparameters do, we’re ready to train our agent 🔥.
To train our agent, we just need to launch mlagents-learn and select the executable containing the environment.
We define four parameters:
mlagents-learn <config>: the path where the hyperparameter config file is.
--env: where the environment executable is.
--run-id: the name you want to give to your training run id.
--no-graphics: to not launch the visualization during the training.

Train the model and use the --resume flag to continue training in case of interruption.
The first time you use --resume, the command can fail; if it does, try running the block again to bypass the error.
The training will take 10 to 35 minutes depending on your config. Go take a ☕️, you deserve it 🤗.
!mlagents-learn ./config/ppo/SnowballTarget.yaml --env=./training-envs-executables/linux/SnowballTarget/SnowballTarget --run-id="SnowballTarget1" --no-graphics
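Once the training is done, you can optionally inspect the learning curves with TensorBoard. ML-Agents writes its logs under ./results/<run-id>, so with the run id used above that's ./results/SnowballTarget1:
# Optional: visualize the training curves written by ML-Agents under ./results/SnowballTarget1
%load_ext tensorboard
%tensorboard --logdir ./results/SnowballTarget1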
To be able to share your model with the community there are three more steps to follow:
1️⃣ (If it's not already done) create an account on HF ➡ https://huggingface.co/join
2️⃣ Sign in and store your authentication token from the Hugging Face website.
from huggingface_hub import notebook_login
notebook_login()
If you don’t want to use a Google Colab or a Jupyter Notebook, you need to use this command instead: huggingface-cli login
Then we simply need to run mlagents-push-to-hf and define four parameters:
--run-id: the name of the training run id.
--local-dir: where the agent was saved; it's results/<run-id>, so in my case results/SnowballTarget1.
--repo-id: the name of the Hugging Face repo you want to create or update. It's always <your huggingface username>/<the repo name>. If the repo does not exist, it will be created automatically.
--commit-message: since HF repos are git repositories, you need to define a commit message.

For instance:
!mlagents-push-to-hf --run-id="SnowballTarget1" --local-dir="./results/SnowballTarget1" --repo-id="ThomasSimonini/ppo-SnowballTarget" --commit-message="First Push"
# Fill in your run id, local dir (./results/<run-id>), repo id (<username>/<repo-name>) and commit message
!mlagents-push-to-hf --run-id= --local-dir= --repo-id= --commit-message=
If everything worked, you should see this at the end of the process (but with a different URL 😆):
Your model is pushed to the hub. You can view your model here: https://huggingface.co/ThomasSimonini/ppo-SnowballTarget
This is the link to your model. It contains a model card that explains how to use it, your TensorBoard logs and your config file. What's awesome is that it's a git repository, which means you can have different commits, update your repository with a new push, etc.
But now comes the best: being able to visualize your agent online 👀.
For this step it’s simple:
Remember your repo-id
Launch the game and put it in full screen by clicking on the bottom right button
In step 1, choose your model repository, which is the model id (in my case ThomasSimonini/ppo-SnowballTarget).
In step 2, choose what model you want to replay: SnowballTarget.onnx.
👉 What's nice is to try with different model checkpoints (steps) to see the improvement of the agent.
And don't hesitate to share the best score your agent gets on Discord in the #rl-i-made-this channel 🔥
Let’s now try a harder environment called Pyramids…
Like for SnowballTarget, we need to download the Pyramids environment executable and place it in ./training-envs-executables/linux/.
Download the file Pyramids.zip from https://drive.google.com/uc?export=download&id=1UiFNdKlsH0NTu32xV-giYUEVKV4-vc7H using wget. Check out the full solution to download large files from GDrive here.
!wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1UiFNdKlsH0NTu32xV-giYUEVKV4-vc7H' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1UiFNdKlsH0NTu32xV-giYUEVKV4-vc7H" -O ./training-envs-executables/linux/Pyramids.zip && rm -rf /tmp/cookies.txt
Unzip it
%%capture
!unzip -d ./training-envs-executables/linux/ ./training-envs-executables/linux/Pyramids.zip
Make sure your file is accessible
!chmod -R 755 ./training-envs-executables/linux/Pyramids/Pyramids
For this training, we'll modify one thing in the config file ./config/ppo/PyramidsRND.yaml:
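If you'd rather make that edit from a notebook cell than through the Colab file editor, here's a minimal sketch using PyYAML (installed as an ML-Agents dependency). It assumes the behavior in PyramidsRND.yaml is named Pyramids, and the lowered max_steps value is just an illustrative choice to shorten training:
import yaml

# Load the Pyramids config shipped with ML-Agents
config_path = "./config/ppo/PyramidsRND.yaml"
with open(config_path) as f:
    config = yaml.safe_load(f)

# Illustrative change (assumption): reduce the number of training steps to shorten the run
config["behaviors"]["Pyramids"]["max_steps"] = 1_000_000

# Write the modified config back (comments in the original file are not preserved)
with open(config_path, "w") as f:
    yaml.dump(config, f, sort_keys=False)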
As an experiment, you should also try to modify some other hyperparameters. Unity provides very good documentation explaining each of them here.
We’re now ready to train our agent 🔥.
The training will take 30 to 45 minutes depending on your machine. Go take a ☕️, you deserve it 🤗.
!mlagents-learn ./config/ppo/PyramidsRND.yaml --env=./training-envs-executables/linux/Pyramids/Pyramids --run-id="Pyramids Training" --no-graphics
# Fill in your run id, local dir (./results/<run-id>), repo id (<username>/<repo-name>) and commit message
!mlagents-push-to-hf --run-id= --local-dir= --repo-id= --commit-message=
The temporary link for Pyramids demo is: https://singularite.itch.io/pyramids
ML-Agents provides 18 different environments, and we're building some custom ones. The best way to learn is to try things on your own, have fun.
You can find the full list of the ones currently available on Hugging Face here 👉 https://github.com/huggingface/ml-agents#the-environments
For the demos to visualize your agent, the temporary link is: https://singularite.itch.io (temporary because we'll also put the demos on Hugging Face Spaces).
For now we have integrated demos for SnowballTarget and Pyramids.
If you want new demos to be added, please open an issue: https://github.com/huggingface/deep-rl-class 🤗
That’s all for today. Congrats on finishing this tutorial!
The best way to learn is to practice and try stuff. Why not try another environment? ML-Agents has 18 different environments, but you can also create your own. Check the documentation and have fun!
See you on Unit 6 🔥,