|
--- |
|
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-noSlippery-weak
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-8x8-no_slippery
      type: FrozenLake-v1-8x8-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
|
--- |
|
|
|
# **Q-Learning** Agent playing **FrozenLake-v1**
|
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
|
|
|
## Usage |
|
|
|
```python
import gym  # or "import gymnasium as gym", depending on your setup

# load_from_hub is assumed to be available in your environment
# (see the sketch below if it is not)
model = load_from_hub(repo_id="MattStammers/q-FrozenLake-v1-8x8-noSlippery-weak", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
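
If `load_from_hub` is not already defined in your environment, a minimal sketch along the following lines should work, assuming the uploaded file is a plain pickled dictionary (the function name and return shape here follow the common Q-Learning model card convention and are assumptions, not guaranteed by this repository):

```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled model dictionary from the Hugging Face Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```

The returned dictionary typically contains the environment id alongside the trained Q-table (e.g. `model["qtable"]`) and the training hyperparameters, but check the keys of the loaded object rather than relying on these names.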
|
|
|
Making this Q-Learning agent work on the 8x8 map requires extended training; with too little training the agent never reaches the goal, so the Q-table never converges.
|
|
|
In my case I found 50 million training episodes sufficient with the following hyperparameters:
|
|
|
```python |
|
# Training parameters |
|
n_training_episodes = 50000000 # Total training episodes |
|
learning_rate = 0.99 # Learning rate |
|
|
|
# Evaluation parameters |
|
n_eval_episodes = 100 # Total number of test episodes |
|
|
|
# Environment parameters |
|
env_id = "FrozenLake-v1" # Name of the environment |
|
max_steps = 200 # Max steps per episode |
|
gamma = 0.99 # Discounting rate |
|
epsilon = 0.1 # Ideal epsilon
|
eval_seed = [] # The evaluation seed of the environment |
|
|
|
# Exploration parameters |
|
max_epsilon = 1 # Exploration probability at start |
|
min_epsilon = 0.05 # Minimum exploration probability |
|
decay_rate = 0.0005 # Exponential decay rate for exploration prob |
|
|
|
``` |
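
For reference, the sketch below shows how hyperparameters like these are typically consumed in a tabular Q-Learning loop with epsilon-greedy exploration and exponential epsilon decay. It is an illustration only, not the exact training script used for this model: it assumes the Gymnasium step/reset API, reuses the variables defined in the block above, and sets `map_name` and `is_slippery` to match the 8x8 no-slippery variant named in the tags.

```python
import numpy as np
import gymnasium as gym

env = gym.make(env_id, map_name="8x8", is_slippery=False)
Qtable = np.zeros((env.observation_space.n, env.action_space.n))

for episode in range(n_training_episodes):
    # Decay epsilon exponentially from max_epsilon towards min_epsilon
    eps = min_epsilon + (max_epsilon - min_epsilon) * np.exp(-decay_rate * episode)
    state, info = env.reset()
    for _ in range(max_steps):
        # Epsilon-greedy action selection
        if np.random.uniform(0, 1) > eps:
            action = int(np.argmax(Qtable[state]))
        else:
            action = env.action_space.sample()
        new_state, reward, terminated, truncated, info = env.step(action)
        # Q-Learning update: Q(s,a) <- Q(s,a) + lr * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Qtable[state][action] += learning_rate * (
            reward + gamma * np.max(Qtable[new_state]) - Qtable[state][action]
        )
        state = new_state
        if terminated or truncated:
            break
```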