Before training our agent, we need to understand what ML-Agents is and how it works.
What is Unity ML-Agents?
Unity ML-Agents is a toolkit for the Unity game engine that allows us to create environments with Unity, or use pre-made ones, to train our agents.
It’s developed by Unity Technologies, the makers of Unity, one of the most famous game engines, used by the creators of Firewatch, Cuphead, and Cities: Skylines.
Firewatch was made with Unity
The six components
With Unity ML-Agents, you have six essential components:
The first is the Learning Environment, which contains the Unity scene (the environment) and the environment elements (game characters).
The second is the Python Low-level API, which contains the low-level Python interface for interacting with and manipulating the environment. It’s the API we use to launch the training (a usage sketch follows this list).
Then, we have the External Communicator that connects the Learning Environment (written in C#) with the low-level Python API (Python).
The Python trainers: the reinforcement learning algorithms implemented with PyTorch (PPO, SAC…).
The Gym wrapper: to wrap the RL environment in a gym-compatible interface (see the sketch below).
The PettingZoo wrapper: the multi-agent counterpart of the Gym wrapper.
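To make these components more concrete, here is a minimal sketch of how the low-level Python API and the Gym wrapper are typically used from Python. It assumes the `mlagents_envs` package is installed and that a compiled Unity environment exists at a hypothetical path `./Builds/MyEnv`; exact import paths and signatures can vary between ML-Agents releases.

```python
from mlagents_envs.environment import UnityEnvironment

# Low-level API: connect to a compiled Unity environment build.
# (Passing file_name=None would instead wait for a running Unity Editor.)
env = UnityEnvironment(file_name="./Builds/MyEnv", seed=1, no_graphics=True)
env.reset()

# Each trainable behavior ("Brain") in the scene is exposed by name.
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for _ in range(10):
    # decision_steps: agents requesting an action; terminal_steps: agents whose episode just ended.
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    # Sample a random, correctly shaped action for every agent that requested one.
    action = spec.action_spec.random_action(len(decision_steps))
    env.set_actions(behavior_name, action)
    env.step()

env.close()
```

The Gym wrapper exposes the same environment through the familiar Gym interface, so it can be plugged into existing RL code. Depending on the release, the wrapper class lives in `gym_unity.envs` or `mlagents_envs.envs.unity_gym_env`:

```python
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.envs.unity_gym_env import UnityToGymWrapper  # or: from gym_unity.envs import UnityToGymWrapper

unity_env = UnityEnvironment(file_name="./Builds/MyEnv", no_graphics=True)
gym_env = UnityToGymWrapper(unity_env)

obs = gym_env.reset()
obs, reward, done, info = gym_env.step(gym_env.action_space.sample())
gym_env.close()
```

In practice, training with the built-in PyTorch trainers (PPO, SAC…) is usually launched with the `mlagents-learn` command-line tool, which drives this low-level API for you.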
Inside the Learning Environment
Inside the Learning Environment, we have two important elements:
The first is the agent component, the actor of the scene. We’ll train the agent by optimizing its policy (which will tell us what action to take in each state). The policy is called the Brain.
The second is the Academy. This component orchestrates agents and their decision-making processes. Think of the Academy as a teacher that handles the requests from the Python API.
To better understand the Academy’s role, let’s recall the RL process, which can be modeled as a loop that works like this: