Upload Sianghio,Kaelynn-Midterms Act.2-.ipynb
Sianghio,Kaelynn-Midterms Act.2-.ipynb
ADDED
{"cells":[{"cell_type":"markdown","metadata":{"id":"2D3NL_e4crQv"},"source":["# Bonus Unit 1: Let's train Huggy the Dog ๐ถ to fetch a stick"]},{"cell_type":"markdown","metadata":{"id":"FMYrDriDujzX"},"source":["<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit2/thumbnail.png\" alt=\"Bonus Unit 1Thumbnail\">\n","\n","In this notebook, we'll reinforce what we learned in the first Unit by **teaching Huggy the Dog to fetch the stick and then play with it directly in your browser**\n","\n","โฌ๏ธ Here is an example of what **you will achieve at the end of the unit.** โฌ๏ธ (launch โถ to see)"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"PnVhs1yYNyUF"},"outputs":[],"source":["%%html\n","<video controls autoplay><source src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit-bonus1/huggy.mp4\" type=\"video/mp4\"></video>"]},{"cell_type":"markdown","metadata":{"id":"x7oR6R-ZIbeS"},"source":["### The environment ๐ฎ\n","\n","- Huggy the Dog, an environment created by [Thomas Simonini](https://twitter.com/ThomasSimonini) based on [Puppo The Corgi](https://blog.unity.com/technology/puppo-the-corgi-cuteness-overload-with-the-unity-ml-agents-toolkit)\n","\n","### The library used ๐\n","\n","- [MLAgents](https://github.com/Unity-Technologies/ml-agents)"]},{"cell_type":"markdown","metadata":{"id":"60yACvZwO0Cy"},"source":["We're constantly trying to improve our tutorials, so **if you find some issues in this notebook**, please [open an issue on the Github Repo](https://github.com/huggingface/deep-rl-class/issues)."]},{"cell_type":"markdown","metadata":{"id":"Oks-ETYdO2Dc"},"source":["## Objectives of this notebook ๐\n","\n","At the end of the notebook, you will:\n","\n","- Understand **the state space, action space and reward function used to train Huggy**.\n","- **Train your own Huggy** to fetch the stick.\n","- Be able to play **with your trained Huggy directly in your browser**.\n","\n","\n"]},{"cell_type":"markdown","metadata":{"id":"mUlVrqnBv2o1"},"source":["## This notebook is from Deep Reinforcement Learning Course\n","<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/deep-rl-course-illustration.jpg\" alt=\"Deep RL Course illustration\"/>"]},{"cell_type":"markdown","metadata":{"id":"pAMjaQpHwB_s"},"source":["In this free course, you will:\n","\n","- ๐ Study Deep Reinforcement Learning in **theory and practice**.\n","- ๐งโ๐ป Learn to **use famous Deep RL libraries** such as Stable Baselines3, RL Baselines3 Zoo, CleanRL and Sample Factory 2.0.\n","- ๐ค Train **agents in unique environments**\n","\n","And more check ๐ the syllabus ๐ https://simoninithomas.github.io/deep-rl-course\n","\n","Donโt forget to **<a href=\"http://eepurl.com/ic5ZUD\">sign up to the course</a>** (we are collecting your email to be able toย **send you the links when each Unit is published and give you information about the challenges and updates).**\n","\n","\n","The best way to keep in touch is to join our discord server to exchange with the community and with us ๐๐ป https://discord.gg/ydHrjt3WP5"]},{"cell_type":"markdown","metadata":{"id":"6r7Hl0uywFSO"},"source":["## Prerequisites ๐๏ธ\n","\n","Before diving into the notebook, you need to:\n","\n","๐ฒ ๐ **Develop an understanding of the foundations of Reinforcement learning** (MC, TD, Rewards hypothesis...) 
{"cell_type":"markdown","metadata":{"id":"an3ByrXYQ4iK"},"source":["## Clone the repository 🔽\n","\n","- We need to clone the repository that contains **ML-Agents**."]},{"cell_type":"code","execution_count":null,"metadata":{"id":"6WNoL04M7rTa"},"outputs":[],"source":["%%capture\n","# Clone the repository (can take 3min)\n","!git clone --depth 1 https://github.com/Unity-Technologies/ml-agents"]},{"cell_type":"markdown","metadata":{"id":"I9lzODA4IfqE"},"source":["## Set up the Virtual Environment 🔽\n","- In order for **ML-Agents** to run successfully in Colab, Colab's Python version must meet the library's Python requirements.\n","\n","- We can check the supported Python version under the `python_requires` parameter in the `setup.py` files. These files are required to set up the **ML-Agents** library and can be found in the following locations:\n"," - `/content/ml-agents/ml-agents/setup.py`\n"," - `/content/ml-agents/ml-agents-envs/setup.py`\n","\n","- Colab's current Python version (which you can check with `!python --version`) doesn't match the library's `python_requires` parameter. As a result, the installation may silently fail and lead to errors like these when you run the commands later:\n"," - `/bin/bash: line 1: mlagents-learn: command not found`\n"," - `/bin/bash: line 1: mlagents-push-to-hf: command not found`\n","\n","- To resolve this, we'll create a virtual environment with a Python version compatible with the **ML-Agents** library.\n","\n","*Note: for future compatibility, always check the `python_requires` parameter in the installation files and, if Colab's Python version is not compatible, set your virtual environment to the maximum supported Python version in the script below.*"]},
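{"cell_type":"markdown","metadata":{},"source":["You can print the supported Python range yourself by grepping the `python_requires` lines in both `setup.py` files (a quick check, assuming the repository was cloned to `/content/ml-agents` as above):"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# Print the Python versions ML-Agents declares as supported\n","!grep -H python_requires /content/ml-agents/ml-agents/setup.py /content/ml-agents/ml-agents-envs/setup.py"]},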
{"cell_type":"code","execution_count":null,"metadata":{"id":"jA-FshAJIfqE"},"outputs":[],"source":["# Colab's current Python version (incompatible with ML-Agents)\n","!python --version"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"h0DpqQsgIfqE"},"outputs":[],"source":["# Install virtualenv and create a virtual environment\n","!pip install virtualenv\n","!virtualenv myenv\n","\n","# Download and install Miniconda\n","!wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh\n","!chmod +x Miniconda3-latest-Linux-x86_64.sh\n","!./Miniconda3-latest-Linux-x86_64.sh -b -f -p /usr/local\n","\n","# Activate Miniconda and install Python 3.10.12\n","# (note: each '!' line runs in its own shell, so `source`/`export` don't persist;\n","# the effective change is installing Python 3.10.12 into /usr/local)\n","!source /usr/local/bin/activate\n","!conda install -q -y --prefix /usr/local python=3.10.12 ujson # Specify the version here\n","\n","# Set environment variables for Python and conda paths\n","!export PYTHONPATH=/usr/local/lib/python3.10/site-packages/\n","!export CONDA_PREFIX=/usr/local/envs/myenv"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"_umtUuweIfqE"},"outputs":[],"source":["# Python version in the new virtual environment (compatible with ML-Agents)\n","!python --version"]},{"cell_type":"markdown","metadata":{"id":"bQ0nTd2-IfqF"},"source":["## Installing the dependencies 🔽"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"d8wmVcMk7xKo"},"outputs":[],"source":["%%capture\n","# Go inside the repository and install the package (can take 3min)\n","%cd ml-agents\n","!pip3 install -e ./ml-agents-envs\n","!pip3 install -e ./ml-agents"]},{"cell_type":"markdown","metadata":{"id":"HRY5ufKUKfhI"},"source":["## Download and move the environment zip file into `./trained-envs-executables/linux/`\n","\n","- Our environment executable is in a zip file.\n","- We need to download it and place it in `./trained-envs-executables/linux/`"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"C9Ls6_6eOKiA"},"outputs":[],"source":["!mkdir ./trained-envs-executables\n","!mkdir ./trained-envs-executables/linux"]},{"cell_type":"markdown","metadata":{"id":"IHh_LXsRrrbM"},"source":["We download the file Huggy.zip from https://github.com/huggingface/Huggy using `wget`:"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"8xNAD1tRpy0_"},"outputs":[],"source":["!wget \"https://github.com/huggingface/Huggy/raw/main/Huggy.zip\" -O ./trained-envs-executables/linux/Huggy.zip"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"8FPx0an9IAwO"},"outputs":[],"source":["%%capture\n","!unzip -d ./trained-envs-executables/linux/ ./trained-envs-executables/linux/Huggy.zip"]},{"cell_type":"markdown","metadata":{"id":"nyumV5XfPKzu"},"source":["Make sure your file is accessible:"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"EdFsLJ11JvQf"},"outputs":[],"source":["!chmod -R 755 ./trained-envs-executables/linux/Huggy"]},{"cell_type":"markdown","metadata":{"id":"dYKVj8yUvj55"},"source":["## Let's recap how this environment works\n","\n","### The State Space: what Huggy \"perceives\"\n","\n","Huggy doesn't \"see\" his environment. Instead, we provide him with information about the environment:\n","\n","- The target (stick) position\n","- The relative position between himself and the target\n","- The orientation of his legs\n","\n","Given all this information, Huggy **can decide which action to take next to fulfill his goal**.\n","\n","<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit-bonus1/huggy.jpg\" alt=\"Huggy\" width=\"100%\">\n","\n","\n","### The Action Space: what moves Huggy can do\n","<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit-bonus1/huggy-action.jpg\" alt=\"Huggy action\" width=\"100%\">\n","\n","**Joint motors drive Huggy's legs**. This means that to reach the target, Huggy needs to **learn to rotate the joint motors of each of his legs correctly so he can move**.\n","\n","### The Reward Function\n","\n","The reward function is designed so that **Huggy will fulfill his goal**: fetch the stick.\n","\n","Remember that one of the foundations of Reinforcement Learning is the *reward hypothesis*: a goal can be described as the **maximization of the expected cumulative reward**.\n","\n","Here, our goal is that Huggy **goes towards the stick without spinning too much**. Hence, our reward function must reflect this goal.\n","\n","Our reward function:\n","\n","<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit-bonus1/reward.jpg\" alt=\"Huggy reward function\" width=\"100%\">\n","\n","- *Orientation bonus*: we **reward him for getting close to the target**.\n","- *Time penalty*: a fixed-time penalty given at every action to **force him to get to the stick as fast as possible**.\n","- *Rotation penalty*: we penalize Huggy if **he spins too much and turns too quickly**.\n","- *Getting to the target reward*: we reward Huggy for **reaching the target**."]},
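{"cell_type":"markdown","metadata":{},"source":["To make this structure concrete, here is a small Python sketch of how a per-step reward along these lines could be computed. The function signature and all coefficients are illustrative assumptions, not the actual values used in the Huggy environment:"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# Illustrative sketch of a Huggy-style per-step reward.\n","# All coefficients below are made-up assumptions, not the environment's real values.\n","def compute_reward(prev_distance, distance, angular_speed, reached_target):\n","    reward = 0.0\n","    # Orientation bonus: reward getting closer to the target\n","    reward += 0.1 * (prev_distance - distance)\n","    # Time penalty: a small fixed cost at every step, to encourage speed\n","    reward -= 0.01\n","    # Rotation penalty: discourage spinning and turning too fast\n","    reward -= 0.05 * abs(angular_speed)\n","    # Getting-to-the-target reward\n","    if reached_target:\n","        reward += 1.0\n","    return reward\n","\n","print(compute_reward(prev_distance=2.0, distance=1.5, angular_speed=0.3, reached_target=False))"]},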
{"cell_type":"markdown","metadata":{"id":"NAuEq32Mwvtz"},"source":["## Create the Huggy config file\n","\n","- In ML-Agents, you define the **training hyperparameters in config.yaml files.**\n","\n","- For the scope of this notebook, we're not going to modify the hyperparameters, but if you want to experiment, you should also try modifying some of them; Unity provides very [good documentation explaining each of them here](https://github.com/Unity-Technologies/ml-agents/blob/main/docs/Training-Configuration-File.md).\n","\n","- But we need to create a config file for Huggy.\n","\n"," - To do that, click on the folder logo on the left of your screen.\n","\n"," <img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit1/create_file.png\" alt=\"Create file\" width=\"10%\">\n","\n"," - Go to `/content/ml-agents/config/ppo`\n"," - Right-click and create a new file called `Huggy.yaml`\n","\n"," <img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit1/create-huggy.png\" alt=\"Create huggy.yaml\" width=\"20%\">\n","\n","- Copy and paste the content below 🔽"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"loQ0N5jhXW71"},"outputs":[],"source":["behaviors:\n","  Huggy:\n","    trainer_type: ppo\n","    hyperparameters:\n","      batch_size: 2048\n","      buffer_size: 20480\n","      learning_rate: 0.0003\n","      beta: 0.005\n","      epsilon: 0.2\n","      lambd: 0.95\n","      num_epoch: 3\n","      learning_rate_schedule: linear\n","    network_settings:\n","      normalize: true\n","      hidden_units: 512\n","      num_layers: 3\n","      vis_encode_type: simple\n","    reward_signals:\n","      extrinsic:\n","        gamma: 0.995\n","        strength: 1.0\n","    checkpoint_interval: 200000\n","    keep_checkpoints: 15\n","    max_steps: 2e6\n","    time_horizon: 1000\n","    summary_freq: 50000"]},{"cell_type":"markdown","metadata":{"id":"oakN7UHwXdCX"},"source":["- Don't forget to save the file!"]},{"cell_type":"markdown","metadata":{"id":"r9wv5NYGw-05"},"source":["- **In case you want to modify the hyperparameters**, in the Google Colab notebook you can click here to open the config file: `/content/ml-agents/config/ppo/Huggy.yaml`\n","\n","- For instance, **if you want to save more models during training** (for now, we save one every 200,000 training timesteps), you need to modify:\n"," - `checkpoint_interval`: the number of training timesteps collected between each checkpoint.\n"," - `keep_checkpoints`: the maximum number of model checkpoints to keep.\n","\n","=> Just keep in mind that **decreasing the `checkpoint_interval` means more models to upload to the Hub, and so a longer uploading time** (see the illustrative fragment below)."]},
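{"cell_type":"markdown","metadata":{},"source":["For example, a fragment like this (the values are illustrative, not a recommendation) would checkpoint twice as often while keeping fewer models around:"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# Illustrative fragment of Huggy.yaml (only these two keys change)\n","    checkpoint_interval: 100000  # save every 100,000 timesteps instead of 200,000\n","    keep_checkpoints: 10         # keep at most 10 checkpoints"]},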
{"cell_type":"markdown","metadata":{"id":"f9fI555bO12v"},"source":["We're now ready to train our agent 🔥.\n","\n","## Train our agent\n","\n","To train our agent, we just need to **launch mlagents-learn and select the executable containing the environment.**\n","\n","<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit-bonus1/mllearn.png\" alt=\"ml learn function\" width=\"100%\">\n","\n","With ML-Agents, we run a training script. We define four parameters:\n","\n","1. `mlagents-learn <config>`: the path to the hyperparameter config file.\n","2. `--env`: where the environment executable is.\n","3. `--run-id`: the name you want to give to your training run id.\n","4. `--no-graphics`: to not launch the visualization during the training.\n","\n","Train the model, and use the `--resume` flag to continue training in case of interruption (see the example after the training cell below).\n","\n","> The first time you use `--resume`, the command may fail; try running the block again to bypass the error.\n","\n"]},{"cell_type":"markdown","metadata":{"id":"lN32oWF8zPjs"},"source":["The training will take 30 to 45 minutes depending on your machine (don't forget to **set up a GPU**), so go take a ☕; you deserve it 🤗."]},{"cell_type":"code","execution_count":null,"metadata":{"id":"bS-Yh1UdHfzy"},"outputs":[],"source":["!mlagents-learn ./config/ppo/Huggy.yaml --env=./trained-envs-executables/linux/Huggy/Huggy --run-id=\"Huggy2\" --no-graphics"]},
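{"cell_type":"markdown","metadata":{},"source":["If the training gets interrupted, you can continue from the latest checkpoint by re-running the same command with the `--resume` flag (same config, environment and run id as above):"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# Resume an interrupted training run (may fail on the first try; just re-run)\n","!mlagents-learn ./config/ppo/Huggy.yaml --env=./trained-envs-executables/linux/Huggy/Huggy --run-id=\"Huggy2\" --no-graphics --resume"]},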
{"cell_type":"markdown","metadata":{"id":"5Vue94AzPy1t"},"source":["## Push the agent to the 🤗 Hub\n","\n","- Now that we've trained our agent, we're **ready to push it to the Hub so you can play with Huggy in your browser 🔥.**"]},{"cell_type":"markdown","metadata":{"id":"izT6FpgNzZ6R"},"source":["To be able to share your model with the community, there are three more steps to follow:\n","\n","1️⃣ (If it's not already done) create an account on HF ➡️ https://huggingface.co/join\n","\n","2️⃣ Sign in and store your authentication token from the Hugging Face website.\n","- Create a new token (https://huggingface.co/settings/tokens) **with the write role**\n","\n","<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/create-token.jpg\" alt=\"Create HF Token\">\n","\n","- Copy the token\n","- Run the cell below and paste the token"]},{"cell_type":"code","execution_count":null,"metadata":{"id":"rKt2vsYoK56o"},"outputs":[],"source":["from huggingface_hub import notebook_login\n","notebook_login()"]},{"cell_type":"markdown","metadata":{"id":"ew59mK19zjtN"},"source":["If you don't want to use Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login`"]},{"cell_type":"markdown","metadata":{"id":"Xi0y_VASRzJU"},"source":["Then, we simply need to run `mlagents-push-to-hf`.\n","\n","<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit-bonus1/mlpush.png\" alt=\"ml push function\" width=\"100%\">"]},{"cell_type":"markdown","metadata":{"id":"KK4fPfnczunT"},"source":["And we define 4 parameters:\n","\n","1. `--run-id`: the name of the training run id.\n","2. `--local-dir`: where the agent was saved; it's `results/<run-id name>`, so in my case `results/Huggy2`.\n","3. `--repo-id`: the name of the Hugging Face repo you want to create or update. It's always `<your huggingface username>/<the repo name>`. If the repo does not exist, **it will be created automatically**.\n","4. `--commit-message`: since HF repos are git repositories, you need to define a commit message.\n","\n","Before pushing, let's sanity-check the local directory (see the cell below)."]},
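{"cell_type":"markdown","metadata":{},"source":["This check (not in the original flow) confirms that the training run actually produced files under `./results/<run-id>`; `Huggy2` is the run id used above:"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# List the artifacts produced by the training run (model, checkpoints, config)\n","!ls ./results/Huggy2"]},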
{"cell_type":"code","execution_count":null,"metadata":{"id":"dGEFAIboLVc6"},"outputs":[],"source":["!mlagents-push-to-hf --run-id=\"HuggyTraining\" --local-dir=\"./results/Huggy2\" --repo-id=\"ThomasSimonini/ppo-Huggy\" --commit-message=\"Huggy\""]},{"cell_type":"markdown","metadata":{"id":"yborB0850FTM"},"source":["If everything worked, you should see this at the end of the process (but with a different URL 😆):\n","\n","\n","\n","```\n","Your model is pushed to the hub. You can view your model here: https://huggingface.co/ThomasSimonini/ppo-Huggy\n","```\n","\n","It's the link to your model repository. The repository contains a model card that explains how to use the model, your TensorBoard logs and your config file. **What's awesome is that it's a git repository, which means you can have different commits, update your repository with a new push, open Pull Requests, etc.**\n","\n","<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit-bonus1/modelcard.png\" alt=\"model card\" width=\"100%\">"]},{"cell_type":"markdown","metadata":{"id":"5Uaon2cg0NrL"},"source":["But now comes the best part: **being able to play with Huggy online 🐕.**"]},{"cell_type":"markdown","metadata":{"id":"VMc4oOsE0QiZ"},"source":["## Play with your Huggy 🐕\n","\n","This step is the simplest:\n","\n","- Open the Huggy game in your browser: https://huggingface.co/spaces/ThomasSimonini/Huggy\n","\n","- Click on Play with my Huggy model\n","\n","<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit-bonus1/load-huggy.jpg\" alt=\"load-huggy\" width=\"100%\">"]},{"cell_type":"markdown","metadata":{"id":"Djs8c5rR0Z8a"},"source":["1. In step 1, type your username (your username is case-sensitive: for instance, my username is ThomasSimonini, not thomassimonini or ThOmasImoNInI) and click on the search button.\n","\n","2. In step 2, select your model repository.\n","\n","3. In step 3, **choose which model you want to replay**:\n"," - I have multiple ones, since we saved a model every 200,000 timesteps.\n"," - But since I want the most recent one, I choose `Huggy.onnx`.\n","\n","👉 What's nice is **to try different model checkpoints to see how the agent improved.**"]},{"cell_type":"markdown","metadata":{"id":"PI6dPWmh064H"},"source":["Congrats on finishing this bonus unit!\n","\n","You can now sit back and enjoy playing with your Huggy 🐶. And don't **forget to spread the love by sharing Huggy with your friends 🤗**. And if you share about it on social media, **please tag us @huggingface and me @simoninithomas**.\n","\n","<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit-bonus1/huggy-cover.jpeg\" alt=\"Huggy cover\" width=\"100%\">\n","\n","\n","## Keep Learning, Stay awesome 🤗"]}],"metadata":{"accelerator":"GPU","colab":{"private_outputs":true,"provenance":[{"file_id":"https://github.com/huggingface/deep-rl-class/blob/master/notebooks/bonus-unit1/bonus-unit1.ipynb","timestamp":1740470620457}],"gpuType":"T4"},"kernelspec":{"display_name":"Python 3","name":"python3"},"language_info":{"name":"python"}},"nbformat":4,"nbformat_minor":0}