ppo-LunarLander-v2 / results.json
Initial commit of PPO model from HuggingFace RL Course Session #1 (commit 7635f8b)
{"mean_reward": 280.92381109833093, "std_reward": 14.669317772995363, "is_deterministic": true, "n_eval_episodes": 10, "eval_datetime": "2022-05-16T21:43:22.330029"}