[2025-01-13 17:20:29,684][01267] Saving configuration to /content/train_dir/default_experiment/config.json...
[2025-01-13 17:20:29,687][01267] Rollout worker 0 uses device cpu
[2025-01-13 17:20:29,688][01267] Rollout worker 1 uses device cpu
[2025-01-13 17:20:29,690][01267] Rollout worker 2 uses device cpu
[2025-01-13 17:20:29,691][01267] Rollout worker 3 uses device cpu
[2025-01-13 17:20:29,693][01267] Rollout worker 4 uses device cpu
[2025-01-13 17:20:29,694][01267] Rollout worker 5 uses device cpu
[2025-01-13 17:20:29,695][01267] Rollout worker 6 uses device cpu
[2025-01-13 17:20:29,697][01267] Rollout worker 7 uses device cpu
[2025-01-13 17:20:29,851][01267] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-01-13 17:20:29,853][01267] InferenceWorker_p0-w0: min num requests: 2
[2025-01-13 17:20:29,885][01267] Starting all processes...
[2025-01-13 17:20:29,886][01267] Starting process learner_proc0
[2025-01-13 17:20:29,929][01267] Starting all processes...
[2025-01-13 17:20:29,937][01267] Starting process inference_proc0-0
[2025-01-13 17:20:29,937][01267] Starting process rollout_proc0
[2025-01-13 17:20:29,939][01267] Starting process rollout_proc1
[2025-01-13 17:20:29,939][01267] Starting process rollout_proc2
[2025-01-13 17:20:29,940][01267] Starting process rollout_proc3
[2025-01-13 17:20:29,940][01267] Starting process rollout_proc4
[2025-01-13 17:20:29,940][01267] Starting process rollout_proc5
[2025-01-13 17:20:29,940][01267] Starting process rollout_proc6
[2025-01-13 17:20:29,940][01267] Starting process rollout_proc7
[2025-01-13 17:20:45,918][02423] Worker 2 uses CPU cores [0]
[2025-01-13 17:20:46,361][02429] Worker 4 uses CPU cores [0]
[2025-01-13 17:20:46,414][02430] Worker 5 uses CPU cores [1]
[2025-01-13 17:20:46,419][02428] Worker 3 uses CPU cores [1]
[2025-01-13 17:20:46,432][02422] Worker 1 uses CPU cores [1]
[2025-01-13 17:20:46,446][02421] Worker 0 uses CPU cores [0]
[2025-01-13 17:20:46,451][02407] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-01-13 17:20:46,452][02407] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2025-01-13 17:20:46,484][02407] Num visible devices: 1
[2025-01-13 17:20:46,496][02432] Worker 7 uses CPU cores [1]
[2025-01-13 17:20:46,506][02407] Starting seed is not provided
[2025-01-13 17:20:46,506][02407] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-01-13 17:20:46,507][02407] Initializing actor-critic model on device cuda:0
[2025-01-13 17:20:46,508][02407] RunningMeanStd input shape: (3, 72, 128)
[2025-01-13 17:20:46,511][02407] RunningMeanStd input shape: (1,)
[2025-01-13 17:20:46,524][02431] Worker 6 uses CPU cores [0]
[2025-01-13 17:20:46,531][02407] ConvEncoder: input_channels=3
[2025-01-13 17:20:46,537][02420] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-01-13 17:20:46,537][02420] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2025-01-13 17:20:46,553][02420] Num visible devices: 1
[2025-01-13 17:20:46,786][02407] Conv encoder output size: 512
[2025-01-13 17:20:46,786][02407] Policy head output size: 512
[2025-01-13 17:20:46,836][02407] Created Actor Critic model with architecture:
[2025-01-13 17:20:46,836][02407] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
[2025-01-13 17:20:47,118][02407] Using optimizer
[2025-01-13 17:20:49,844][01267] Heartbeat connected on Batcher_0
[2025-01-13 17:20:49,852][01267] Heartbeat connected on InferenceWorker_p0-w0
[2025-01-13 17:20:49,860][01267] Heartbeat connected on RolloutWorker_w0
[2025-01-13 17:20:49,864][01267] Heartbeat connected on RolloutWorker_w1
[2025-01-13 17:20:49,867][01267] Heartbeat connected on RolloutWorker_w2
[2025-01-13 17:20:49,871][01267] Heartbeat connected on RolloutWorker_w3
[2025-01-13 17:20:49,874][01267] Heartbeat connected on RolloutWorker_w4
[2025-01-13 17:20:49,877][01267] Heartbeat connected on RolloutWorker_w5
[2025-01-13 17:20:49,882][01267] Heartbeat connected on RolloutWorker_w6
[2025-01-13 17:20:49,884][01267] Heartbeat connected on RolloutWorker_w7
[2025-01-13 17:20:50,404][02407] No checkpoints found
[2025-01-13 17:20:50,404][02407] Did not load from checkpoint, starting from scratch!
[2025-01-13 17:20:50,404][02407] Initialized policy 0 weights for model version 0
[2025-01-13 17:20:50,408][02407] LearnerWorker_p0 finished initialization!
[2025-01-13 17:20:50,409][02407] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-01-13 17:20:50,416][01267] Heartbeat connected on LearnerWorker_p0
[2025-01-13 17:20:50,504][02420] RunningMeanStd input shape: (3, 72, 128)
[2025-01-13 17:20:50,505][02420] RunningMeanStd input shape: (1,)
[2025-01-13 17:20:50,517][02420] ConvEncoder: input_channels=3
[2025-01-13 17:20:50,619][02420] Conv encoder output size: 512
[2025-01-13 17:20:50,619][02420] Policy head output size: 512
[2025-01-13 17:20:50,673][01267] Inference worker 0-0 is ready!
[2025-01-13 17:20:50,674][01267] All inference workers are ready! Signal rollout workers to start!
[2025-01-13 17:20:50,877][02432] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-13 17:20:50,879][02428] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-13 17:20:50,879][02422] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-13 17:20:50,880][02429] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-13 17:20:50,875][02430] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-13 17:20:50,883][02423] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-13 17:20:50,885][02431] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-13 17:20:50,887][02421] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-13 17:20:51,938][02422] Decorrelating experience for 0 frames...
[2025-01-13 17:20:51,940][02432] Decorrelating experience for 0 frames...
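The architecture printout above maps onto a fairly small network. Below is a rough PyTorch sketch of it; the shapes come from the log (3x72x128 observations, 512-wide encoder output and GRU core, 5 discrete actions), while the conv kernel sizes, normalization, and everything else are assumptions rather than Sample Factory's actual implementation.

```python
# A minimal sketch of the ActorCriticSharedWeights model printed in the log:
# a 3-layer ELU conv encoder, a Linear+ELU MLP, a GRU core, and linear
# policy/value heads. Kernel sizes are assumed, not taken from the source.
import torch
import torch.nn as nn

class SharedWeightsActorCritic(nn.Module):
    def __init__(self, obs_shape=(3, 72, 128), core_size=512, num_actions=5):
        super().__init__()
        self.conv_head = nn.Sequential(
            nn.Conv2d(obs_shape[0], 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        with torch.no_grad():  # infer flattened conv size from a dummy pass
            n_flat = self.conv_head(torch.zeros(1, *obs_shape)).flatten(1).shape[1]
        self.mlp = nn.Sequential(nn.Linear(n_flat, core_size), nn.ELU())
        self.core = nn.GRU(core_size, core_size)            # ModelCoreRNN in the log
        self.critic_linear = nn.Linear(core_size, 1)        # value head
        self.action_logits = nn.Linear(core_size, num_actions)  # policy head

    def forward(self, obs, rnn_state):
        x = self.mlp(self.conv_head(obs).flatten(1))
        x, new_state = self.core(x.unsqueeze(0), rnn_state)  # single-step sequence
        x = x.squeeze(0)
        return self.action_logits(x), self.critic_linear(x), new_state

model = SharedWeightsActorCritic()
obs = torch.zeros(4, 3, 72, 128)   # batch of 4 resized Doom frames
h0 = torch.zeros(1, 4, 512)        # initial GRU hidden state
logits, value, h1 = model(obs, h0)
print(logits.shape, value.shape)   # torch.Size([4, 5]) torch.Size([4, 1])
```

"Shared weights" here means the policy and value heads read the same 512-dim core output; the RecursiveScriptModule entries in the printout just indicate that Sample Factory TorchScript-compiles the encoder.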
[2025-01-13 17:20:52,333][02432] Decorrelating experience for 32 frames...
[2025-01-13 17:20:52,549][02421] Decorrelating experience for 0 frames...
[2025-01-13 17:20:52,552][02423] Decorrelating experience for 0 frames...
[2025-01-13 17:20:52,554][02429] Decorrelating experience for 0 frames...
[2025-01-13 17:20:52,557][02431] Decorrelating experience for 0 frames...
[2025-01-13 17:20:52,933][02423] Decorrelating experience for 32 frames...
[2025-01-13 17:20:53,318][02428] Decorrelating experience for 0 frames...
[2025-01-13 17:20:53,353][02422] Decorrelating experience for 32 frames...
[2025-01-13 17:20:53,694][02429] Decorrelating experience for 32 frames...
[2025-01-13 17:20:53,717][01267] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-01-13 17:20:54,733][02428] Decorrelating experience for 32 frames...
[2025-01-13 17:20:54,731][02430] Decorrelating experience for 0 frames...
[2025-01-13 17:20:55,006][02421] Decorrelating experience for 32 frames...
[2025-01-13 17:20:55,215][02422] Decorrelating experience for 64 frames...
[2025-01-13 17:20:55,474][02431] Decorrelating experience for 32 frames...
[2025-01-13 17:20:56,728][02423] Decorrelating experience for 64 frames...
[2025-01-13 17:20:56,809][02430] Decorrelating experience for 32 frames...
[2025-01-13 17:20:57,103][02429] Decorrelating experience for 64 frames...
[2025-01-13 17:20:57,725][02432] Decorrelating experience for 64 frames...
[2025-01-13 17:20:57,748][02422] Decorrelating experience for 96 frames...
[2025-01-13 17:20:57,857][02421] Decorrelating experience for 64 frames...
[2025-01-13 17:20:58,717][01267] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-01-13 17:20:59,118][02428] Decorrelating experience for 64 frames...
[2025-01-13 17:20:59,688][02423] Decorrelating experience for 96 frames...
[2025-01-13 17:20:59,844][02430] Decorrelating experience for 64 frames...
[2025-01-13 17:20:59,978][02431] Decorrelating experience for 64 frames...
[2025-01-13 17:21:00,174][02429] Decorrelating experience for 96 frames...
[2025-01-13 17:21:00,837][02421] Decorrelating experience for 96 frames...
[2025-01-13 17:21:01,920][02432] Decorrelating experience for 96 frames...
[2025-01-13 17:21:01,923][02430] Decorrelating experience for 96 frames...
[2025-01-13 17:21:03,073][02431] Decorrelating experience for 96 frames...
[2025-01-13 17:21:03,281][02428] Decorrelating experience for 96 frames...
[2025-01-13 17:21:03,717][01267] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 117.6. Samples: 1176. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-01-13 17:21:03,723][01267] Avg episode reward: [(0, '2.043')]
[2025-01-13 17:21:04,561][02407] Signal inference workers to stop experience collection...
[2025-01-13 17:21:04,576][02420] InferenceWorker_p0-w0: stopping experience collection
[2025-01-13 17:21:07,571][02407] Signal inference workers to resume experience collection...
[2025-01-13 17:21:07,572][02420] InferenceWorker_p0-w0: resuming experience collection
[2025-01-13 17:21:08,720][01267] Fps is (10 sec: 818.9, 60 sec: 546.0, 300 sec: 546.0). Total num frames: 8192. Throughput: 0: 235.3. Samples: 3530. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2025-01-13 17:21:08,725][01267] Avg episode reward: [(0, '2.797')]
[2025-01-13 17:21:13,719][01267] Fps is (10 sec: 2866.7, 60 sec: 1433.5, 300 sec: 1433.5). Total num frames: 28672. Throughput: 0: 342.7. Samples: 6854. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-13 17:21:13,722][01267] Avg episode reward: [(0, '3.658')]
[2025-01-13 17:21:17,310][02420] Updated weights for policy 0, policy_version 10 (0.0030)
[2025-01-13 17:21:18,717][01267] Fps is (10 sec: 3687.6, 60 sec: 1802.2, 300 sec: 1802.2). Total num frames: 45056. Throughput: 0: 438.2. Samples: 10956. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-13 17:21:18,723][01267] Avg episode reward: [(0, '4.122')]
[2025-01-13 17:21:23,717][01267] Fps is (10 sec: 3687.1, 60 sec: 2184.5, 300 sec: 2184.5). Total num frames: 65536. Throughput: 0: 579.3. Samples: 17380. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-13 17:21:23,719][01267] Avg episode reward: [(0, '4.376')]
[2025-01-13 17:21:26,888][02420] Updated weights for policy 0, policy_version 20 (0.0028)
[2025-01-13 17:21:28,717][01267] Fps is (10 sec: 4505.6, 60 sec: 2574.6, 300 sec: 2574.6). Total num frames: 90112. Throughput: 0: 595.6. Samples: 20846. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-13 17:21:28,722][01267] Avg episode reward: [(0, '4.334')]
[2025-01-13 17:21:33,717][01267] Fps is (10 sec: 3686.4, 60 sec: 2560.0, 300 sec: 2560.0). Total num frames: 102400. Throughput: 0: 640.0. Samples: 25602. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-13 17:21:33,720][01267] Avg episode reward: [(0, '4.241')]
[2025-01-13 17:21:33,728][02407] Saving new best policy, reward=4.241!
[2025-01-13 17:21:38,717][01267] Fps is (10 sec: 2867.2, 60 sec: 2639.6, 300 sec: 2639.6). Total num frames: 118784. Throughput: 0: 689.6. Samples: 31032. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-13 17:21:38,721][01267] Avg episode reward: [(0, '4.382')]
[2025-01-13 17:21:38,725][02407] Saving new best policy, reward=4.382!
[2025-01-13 17:21:38,980][02420] Updated weights for policy 0, policy_version 30 (0.0030)
[2025-01-13 17:21:43,717][01267] Fps is (10 sec: 4096.0, 60 sec: 2867.2, 300 sec: 2867.2). Total num frames: 143360. Throughput: 0: 763.8. Samples: 34370. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:21:43,722][01267] Avg episode reward: [(0, '4.581')]
[2025-01-13 17:21:43,734][02407] Saving new best policy, reward=4.581!
[2025-01-13 17:21:48,717][01267] Fps is (10 sec: 4095.8, 60 sec: 2904.4, 300 sec: 2904.4). Total num frames: 159744. Throughput: 0: 873.9. Samples: 40500. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:21:48,721][01267] Avg episode reward: [(0, '4.490')]
[2025-01-13 17:21:49,451][02420] Updated weights for policy 0, policy_version 40 (0.0025)
[2025-01-13 17:21:53,717][01267] Fps is (10 sec: 3276.8, 60 sec: 2935.5, 300 sec: 2935.5). Total num frames: 176128. Throughput: 0: 920.9. Samples: 44966. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:21:53,723][01267] Avg episode reward: [(0, '4.379')]
[2025-01-13 17:21:58,717][01267] Fps is (10 sec: 4096.2, 60 sec: 3345.1, 300 sec: 3087.8). Total num frames: 200704. Throughput: 0: 925.0. Samples: 48478. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:21:58,723][01267] Avg episode reward: [(0, '4.463')]
[2025-01-13 17:21:59,533][02420] Updated weights for policy 0, policy_version 50 (0.0019)
[2025-01-13 17:22:03,717][01267] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3159.8). Total num frames: 221184. Throughput: 0: 980.1. Samples: 55062. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:22:03,723][01267] Avg episode reward: [(0, '4.471')]
[2025-01-13 17:22:08,725][01267] Fps is (10 sec: 3274.0, 60 sec: 3754.3, 300 sec: 3112.6). Total num frames: 233472. Throughput: 0: 933.1. Samples: 59376. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:22:08,729][01267] Avg episode reward: [(0, '4.319')]
[2025-01-13 17:22:11,755][02420] Updated weights for policy 0, policy_version 60 (0.0028)
[2025-01-13 17:22:13,717][01267] Fps is (10 sec: 3276.7, 60 sec: 3754.8, 300 sec: 3174.4). Total num frames: 253952. Throughput: 0: 916.6. Samples: 62092. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:22:13,721][01267] Avg episode reward: [(0, '4.284')]
[2025-01-13 17:22:18,717][01267] Fps is (10 sec: 4099.5, 60 sec: 3822.9, 300 sec: 3228.6). Total num frames: 274432. Throughput: 0: 963.2. Samples: 68948. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:22:18,723][01267] Avg episode reward: [(0, '4.431')]
[2025-01-13 17:22:21,831][02420] Updated weights for policy 0, policy_version 70 (0.0032)
[2025-01-13 17:22:23,717][01267] Fps is (10 sec: 3686.5, 60 sec: 3754.7, 300 sec: 3231.3). Total num frames: 290816. Throughput: 0: 942.1. Samples: 73428. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-13 17:22:23,719][01267] Avg episode reward: [(0, '4.498')]
[2025-01-13 17:22:23,730][02407] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000071_290816.pth...
[2025-01-13 17:22:28,717][01267] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3233.7). Total num frames: 307200. Throughput: 0: 911.8. Samples: 75400. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:22:28,724][01267] Avg episode reward: [(0, '4.414')]
[2025-01-13 17:22:33,140][02420] Updated weights for policy 0, policy_version 80 (0.0022)
[2025-01-13 17:22:33,717][01267] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3276.8). Total num frames: 327680. Throughput: 0: 918.7. Samples: 81842. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-13 17:22:33,721][01267] Avg episode reward: [(0, '4.363')]
[2025-01-13 17:22:38,717][01267] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3315.8). Total num frames: 348160. Throughput: 0: 959.9. Samples: 88162. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:22:38,719][01267] Avg episode reward: [(0, '4.290')]
[2025-01-13 17:22:43,718][01267] Fps is (10 sec: 3276.3, 60 sec: 3618.0, 300 sec: 3276.8). Total num frames: 360448. Throughput: 0: 926.6. Samples: 90178. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:22:43,726][01267] Avg episode reward: [(0, '4.383')]
[2025-01-13 17:22:45,083][02420] Updated weights for policy 0, policy_version 90 (0.0015)
[2025-01-13 17:22:48,717][01267] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3348.0). Total num frames: 385024. Throughput: 0: 907.5. Samples: 95900. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-13 17:22:48,719][01267] Avg episode reward: [(0, '4.617')]
[2025-01-13 17:22:48,721][02407] Saving new best policy, reward=4.617!
[2025-01-13 17:22:53,717][01267] Fps is (10 sec: 4506.3, 60 sec: 3822.9, 300 sec: 3379.2). Total num frames: 405504. Throughput: 0: 961.3. Samples: 102628. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-01-13 17:22:53,722][01267] Avg episode reward: [(0, '4.687')]
[2025-01-13 17:22:53,732][02407] Saving new best policy, reward=4.687!
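The recurring `Fps is (...)` lines report rolling throughput averages over 10/60/300-second windows, plus cumulative environment frames and learner samples. A small, hypothetical helper for pulling those numbers out of a saved log, e.g. to plot training speed over time; the regex simply matches the line format as printed here and is not part of Sample Factory:

```python
# Hypothetical parser for the "Fps is (...)" status lines in this log.
import re

FPS_RE = re.compile(
    r"Fps is \(10 sec: ([\d.]+), 60 sec: ([\d.]+), 300 sec: ([\d.]+)\)\. "
    r"Total num frames: (\d+)\. Throughput: 0: ([\d.]+)\. Samples: (\d+)\."
)

def parse_fps_line(line: str):
    m = FPS_RE.search(line)
    if m is None:
        return None  # e.g. the early lines where the averages are still nan
    fps10, fps60, fps300, frames, throughput, samples = m.groups()
    return {
        "fps_10s": float(fps10), "fps_60s": float(fps60), "fps_300s": float(fps300),
        "total_frames": int(frames), "throughput": float(throughput),
        "samples": int(samples),
    }

line = ("[2025-01-13 17:21:13,719][01267] Fps is (10 sec: 2866.7, 60 sec: 1433.5, "
        "300 sec: 1433.5). Total num frames: 28672. Throughput: 0: 342.7. Samples: 6854.")
print(parse_fps_line(line)["fps_10s"])  # 2866.7
```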
[2025-01-13 17:22:54,147][02420] Updated weights for policy 0, policy_version 100 (0.0016)
[2025-01-13 17:22:58,718][01267] Fps is (10 sec: 3686.0, 60 sec: 3686.3, 300 sec: 3375.1). Total num frames: 421888. Throughput: 0: 950.6. Samples: 104868. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-13 17:22:58,720][01267] Avg episode reward: [(0, '4.561')]
[2025-01-13 17:23:03,717][01267] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3371.3). Total num frames: 438272. Throughput: 0: 897.6. Samples: 109338. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-13 17:23:03,728][01267] Avg episode reward: [(0, '4.638')]
[2025-01-13 17:23:05,887][02420] Updated weights for policy 0, policy_version 110 (0.0023)
[2025-01-13 17:23:08,717][01267] Fps is (10 sec: 4096.5, 60 sec: 3823.5, 300 sec: 3428.5). Total num frames: 462848. Throughput: 0: 952.2. Samples: 116278. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-13 17:23:08,724][01267] Avg episode reward: [(0, '4.699')]
[2025-01-13 17:23:08,727][02407] Saving new best policy, reward=4.699!
[2025-01-13 17:23:13,719][01267] Fps is (10 sec: 4094.9, 60 sec: 3754.5, 300 sec: 3423.0). Total num frames: 479232. Throughput: 0: 980.4. Samples: 119522. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:23:13,723][01267] Avg episode reward: [(0, '4.588')]
[2025-01-13 17:23:17,478][02420] Updated weights for policy 0, policy_version 120 (0.0023)
[2025-01-13 17:23:18,717][01267] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3389.8). Total num frames: 491520. Throughput: 0: 926.1. Samples: 123516. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-13 17:23:18,719][01267] Avg episode reward: [(0, '4.368')]
[2025-01-13 17:23:23,717][01267] Fps is (10 sec: 3687.4, 60 sec: 3754.7, 300 sec: 3440.6). Total num frames: 516096. Throughput: 0: 929.8. Samples: 130004. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:23:23,718][01267] Avg episode reward: [(0, '4.473')]
[2025-01-13 17:23:26,850][02420] Updated weights for policy 0, policy_version 130 (0.0017)
[2025-01-13 17:23:28,717][01267] Fps is (10 sec: 4915.2, 60 sec: 3891.2, 300 sec: 3488.2). Total num frames: 540672. Throughput: 0: 963.0. Samples: 133512. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-13 17:23:28,721][01267] Avg episode reward: [(0, '4.742')]
[2025-01-13 17:23:28,728][02407] Saving new best policy, reward=4.742!
[2025-01-13 17:23:33,717][01267] Fps is (10 sec: 3686.3, 60 sec: 3754.7, 300 sec: 3456.0). Total num frames: 552960. Throughput: 0: 945.7. Samples: 138456. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:23:33,719][01267] Avg episode reward: [(0, '4.578')]
[2025-01-13 17:23:38,717][01267] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3450.6). Total num frames: 569344. Throughput: 0: 918.3. Samples: 143950. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:23:38,727][01267] Avg episode reward: [(0, '4.621')]
[2025-01-13 17:23:38,800][02420] Updated weights for policy 0, policy_version 140 (0.0025)
[2025-01-13 17:23:43,719][01267] Fps is (10 sec: 4095.1, 60 sec: 3891.2, 300 sec: 3493.6). Total num frames: 593920. Throughput: 0: 944.6. Samples: 147378. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:23:43,721][01267] Avg episode reward: [(0, '4.448')]
[2025-01-13 17:23:48,717][01267] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3487.5). Total num frames: 610304. Throughput: 0: 980.8. Samples: 153476. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:23:48,719][01267] Avg episode reward: [(0, '4.426')]
[2025-01-13 17:23:48,816][02420] Updated weights for policy 0, policy_version 150 (0.0016)
[2025-01-13 17:23:53,717][01267] Fps is (10 sec: 3277.6, 60 sec: 3686.4, 300 sec: 3481.6). Total num frames: 626688. Throughput: 0: 930.0. Samples: 158128. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:23:53,725][01267] Avg episode reward: [(0, '4.469')]
[2025-01-13 17:23:58,717][01267] Fps is (10 sec: 4096.0, 60 sec: 3823.0, 300 sec: 3520.3). Total num frames: 651264. Throughput: 0: 936.9. Samples: 161682. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:23:58,723][01267] Avg episode reward: [(0, '4.674')]
[2025-01-13 17:23:59,125][02420] Updated weights for policy 0, policy_version 160 (0.0015)
[2025-01-13 17:24:03,717][01267] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3535.5). Total num frames: 671744. Throughput: 0: 1002.8. Samples: 168644. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:24:03,722][01267] Avg episode reward: [(0, '4.732')]
[2025-01-13 17:24:08,717][01267] Fps is (10 sec: 3686.5, 60 sec: 3754.7, 300 sec: 3528.9). Total num frames: 688128. Throughput: 0: 955.3. Samples: 172994. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:24:08,722][01267] Avg episode reward: [(0, '4.745')]
[2025-01-13 17:24:08,728][02407] Saving new best policy, reward=4.745!
[2025-01-13 17:24:10,548][02420] Updated weights for policy 0, policy_version 170 (0.0021)
[2025-01-13 17:24:13,718][01267] Fps is (10 sec: 3686.0, 60 sec: 3823.0, 300 sec: 3543.0). Total num frames: 708608. Throughput: 0: 944.5. Samples: 176014. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:24:13,720][01267] Avg episode reward: [(0, '4.859')]
[2025-01-13 17:24:13,732][02407] Saving new best policy, reward=4.859!
[2025-01-13 17:24:18,716][01267] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3576.5). Total num frames: 733184. Throughput: 0: 993.2. Samples: 183148. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:24:18,719][01267] Avg episode reward: [(0, '4.780')]
[2025-01-13 17:24:19,270][02420] Updated weights for policy 0, policy_version 180 (0.0024)
[2025-01-13 17:24:23,717][01267] Fps is (10 sec: 4096.4, 60 sec: 3891.2, 300 sec: 3569.4). Total num frames: 749568. Throughput: 0: 990.1. Samples: 188506. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
[2025-01-13 17:24:23,721][01267] Avg episode reward: [(0, '4.659')]
[2025-01-13 17:24:23,736][02407] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000183_749568.pth...
[2025-01-13 17:24:28,717][01267] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3581.6). Total num frames: 770048. Throughput: 0: 963.8. Samples: 190748. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:24:28,720][01267] Avg episode reward: [(0, '4.820')]
[2025-01-13 17:24:30,422][02420] Updated weights for policy 0, policy_version 190 (0.0034)
[2025-01-13 17:24:33,717][01267] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3593.3). Total num frames: 790528. Throughput: 0: 984.5. Samples: 197780. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:24:33,718][01267] Avg episode reward: [(0, '5.232')]
[2025-01-13 17:24:33,725][02407] Saving new best policy, reward=5.232!
[2025-01-13 17:24:38,720][01267] Fps is (10 sec: 4094.7, 60 sec: 4027.5, 300 sec: 3604.4). Total num frames: 811008. Throughput: 0: 1023.1. Samples: 204170. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:24:38,722][01267] Avg episode reward: [(0, '5.400')]
[2025-01-13 17:24:38,727][02407] Saving new best policy, reward=5.400!
[2025-01-13 17:24:40,387][02420] Updated weights for policy 0, policy_version 200 (0.0030)
[2025-01-13 17:24:43,718][01267] Fps is (10 sec: 3686.0, 60 sec: 3891.3, 300 sec: 3597.3). Total num frames: 827392. Throughput: 0: 991.1. Samples: 206284. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:24:43,722][01267] Avg episode reward: [(0, '5.391')]
[2025-01-13 17:24:48,717][01267] Fps is (10 sec: 4097.3, 60 sec: 4027.7, 300 sec: 3625.4). Total num frames: 851968. Throughput: 0: 978.2. Samples: 212662. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-13 17:24:48,719][01267] Avg episode reward: [(0, '5.670')]
[2025-01-13 17:24:48,722][02407] Saving new best policy, reward=5.670!
[2025-01-13 17:24:50,326][02420] Updated weights for policy 0, policy_version 210 (0.0022)
[2025-01-13 17:24:53,716][01267] Fps is (10 sec: 4915.7, 60 sec: 4164.3, 300 sec: 3652.3). Total num frames: 876544. Throughput: 0: 1043.6. Samples: 219958. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:24:53,721][01267] Avg episode reward: [(0, '5.450')]
[2025-01-13 17:24:58,717][01267] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3627.9). Total num frames: 888832. Throughput: 0: 1029.8. Samples: 222354. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:24:58,718][01267] Avg episode reward: [(0, '5.213')]
[2025-01-13 17:25:01,299][02420] Updated weights for policy 0, policy_version 220 (0.0035)
[2025-01-13 17:25:03,717][01267] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3637.2). Total num frames: 909312. Throughput: 0: 985.0. Samples: 227474. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-01-13 17:25:03,719][01267] Avg episode reward: [(0, '5.360')]
[2025-01-13 17:25:08,717][01267] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3662.3). Total num frames: 933888. Throughput: 0: 1028.1. Samples: 234770. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-01-13 17:25:08,719][01267] Avg episode reward: [(0, '5.392')]
[2025-01-13 17:25:09,770][02420] Updated weights for policy 0, policy_version 230 (0.0028)
[2025-01-13 17:25:13,719][01267] Fps is (10 sec: 4504.6, 60 sec: 4095.9, 300 sec: 3670.6). Total num frames: 954368. Throughput: 0: 1048.3. Samples: 237926. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:25:13,721][01267] Avg episode reward: [(0, '5.576')]
[2025-01-13 17:25:18,718][01267] Fps is (10 sec: 3685.8, 60 sec: 3959.4, 300 sec: 3663.2). Total num frames: 970752. Throughput: 0: 991.8. Samples: 242414. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:25:18,723][01267] Avg episode reward: [(0, '5.648')]
[2025-01-13 17:25:20,929][02420] Updated weights for policy 0, policy_version 240 (0.0021)
[2025-01-13 17:25:23,717][01267] Fps is (10 sec: 4096.9, 60 sec: 4096.0, 300 sec: 3686.4). Total num frames: 995328. Throughput: 0: 1008.3. Samples: 249540. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-01-13 17:25:23,720][01267] Avg episode reward: [(0, '6.025')]
[2025-01-13 17:25:23,729][02407] Saving new best policy, reward=6.025!
[2025-01-13 17:25:28,716][01267] Fps is (10 sec: 4506.3, 60 sec: 4096.0, 300 sec: 3693.8). Total num frames: 1015808. Throughput: 0: 1040.2. Samples: 253092. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:25:28,722][01267] Avg episode reward: [(0, '5.900')]
[2025-01-13 17:25:30,589][02420] Updated weights for policy 0, policy_version 250 (0.0025)
[2025-01-13 17:25:33,721][01267] Fps is (10 sec: 3684.9, 60 sec: 4027.5, 300 sec: 3686.3). Total num frames: 1032192. Throughput: 0: 1009.2. Samples: 258080. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-01-13 17:25:33,723][01267] Avg episode reward: [(0, '5.725')]
[2025-01-13 17:25:38,717][01267] Fps is (10 sec: 3686.4, 60 sec: 4027.9, 300 sec: 3693.6). Total num frames: 1052672. Throughput: 0: 986.4. Samples: 264346. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:25:38,721][01267] Avg episode reward: [(0, '6.405')]
[2025-01-13 17:25:38,728][02407] Saving new best policy, reward=6.405!
[2025-01-13 17:25:40,833][02420] Updated weights for policy 0, policy_version 260 (0.0014)
[2025-01-13 17:25:43,717][01267] Fps is (10 sec: 4507.5, 60 sec: 4164.3, 300 sec: 3714.6). Total num frames: 1077248. Throughput: 0: 1013.1. Samples: 267944. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:25:43,721][01267] Avg episode reward: [(0, '6.183')]
[2025-01-13 17:25:48,719][01267] Fps is (10 sec: 4095.0, 60 sec: 4027.6, 300 sec: 3707.2). Total num frames: 1093632. Throughput: 0: 1034.0. Samples: 274008. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:25:48,723][01267] Avg episode reward: [(0, '5.723')]
[2025-01-13 17:25:51,838][02420] Updated weights for policy 0, policy_version 270 (0.0017)
[2025-01-13 17:25:53,716][01267] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3776.7). Total num frames: 1114112. Throughput: 0: 987.2. Samples: 279196. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:25:53,719][01267] Avg episode reward: [(0, '5.789')]
[2025-01-13 17:25:58,717][01267] Fps is (10 sec: 4506.6, 60 sec: 4164.3, 300 sec: 3860.0). Total num frames: 1138688. Throughput: 0: 1000.3. Samples: 282938. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:25:58,720][01267] Avg episode reward: [(0, '6.509')]
[2025-01-13 17:25:58,726][02407] Saving new best policy, reward=6.509!
[2025-01-13 17:26:00,410][02420] Updated weights for policy 0, policy_version 280 (0.0022)
[2025-01-13 17:26:03,718][01267] Fps is (10 sec: 4095.5, 60 sec: 4095.9, 300 sec: 3887.8). Total num frames: 1155072. Throughput: 0: 1046.2. Samples: 289492. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:26:03,721][01267] Avg episode reward: [(0, '7.027')]
[2025-01-13 17:26:03,729][02407] Saving new best policy, reward=7.027!
[2025-01-13 17:26:08,717][01267] Fps is (10 sec: 3276.8, 60 sec: 3959.5, 300 sec: 3873.9). Total num frames: 1171456. Throughput: 0: 982.4. Samples: 293750. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:26:08,723][01267] Avg episode reward: [(0, '7.440')]
[2025-01-13 17:26:08,725][02407] Saving new best policy, reward=7.440!
[2025-01-13 17:26:12,138][02420] Updated weights for policy 0, policy_version 290 (0.0024)
[2025-01-13 17:26:13,717][01267] Fps is (10 sec: 3686.8, 60 sec: 3959.6, 300 sec: 3887.7). Total num frames: 1191936. Throughput: 0: 977.3. Samples: 297070. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:26:13,719][01267] Avg episode reward: [(0, '7.127')]
[2025-01-13 17:26:18,717][01267] Fps is (10 sec: 4505.6, 60 sec: 4096.1, 300 sec: 3901.6). Total num frames: 1216512. Throughput: 0: 1021.3. Samples: 304032. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-13 17:26:18,722][01267] Avg episode reward: [(0, '7.545')]
[2025-01-13 17:26:18,724][02407] Saving new best policy, reward=7.545!
[2025-01-13 17:26:22,033][02420] Updated weights for policy 0, policy_version 300 (0.0022)
[2025-01-13 17:26:23,718][01267] Fps is (10 sec: 4095.5, 60 sec: 3959.4, 300 sec: 3873.8). Total num frames: 1232896. Throughput: 0: 989.7. Samples: 308886. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-13 17:26:23,724][01267] Avg episode reward: [(0, '7.411')]
[2025-01-13 17:26:23,742][02407] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000301_1232896.pth...
[2025-01-13 17:26:23,924][02407] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000071_290816.pth
[2025-01-13 17:26:28,717][01267] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 1249280. Throughput: 0: 961.3. Samples: 311204. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2025-01-13 17:26:28,721][01267] Avg episode reward: [(0, '7.930')]
[2025-01-13 17:26:28,724][02407] Saving new best policy, reward=7.930!
[2025-01-13 17:26:32,605][02420] Updated weights for policy 0, policy_version 310 (0.0020)
[2025-01-13 17:26:33,717][01267] Fps is (10 sec: 4096.4, 60 sec: 4028.0, 300 sec: 3915.5). Total num frames: 1273856. Throughput: 0: 978.9. Samples: 318056. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-13 17:26:33,719][01267] Avg episode reward: [(0, '8.071')]
[2025-01-13 17:26:33,726][02407] Saving new best policy, reward=8.071!
[2025-01-13 17:26:38,717][01267] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 1290240. Throughput: 0: 992.4. Samples: 323852. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:26:38,719][01267] Avg episode reward: [(0, '7.996')]
[2025-01-13 17:26:43,717][01267] Fps is (10 sec: 3276.9, 60 sec: 3822.9, 300 sec: 3887.7). Total num frames: 1306624. Throughput: 0: 956.5. Samples: 325982. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-13 17:26:43,720][01267] Avg episode reward: [(0, '8.461')]
[2025-01-13 17:26:43,729][02407] Saving new best policy, reward=8.461!
[2025-01-13 17:26:44,379][02420] Updated weights for policy 0, policy_version 320 (0.0018)
[2025-01-13 17:26:48,717][01267] Fps is (10 sec: 4095.9, 60 sec: 3959.6, 300 sec: 3915.5). Total num frames: 1331200. Throughput: 0: 951.0. Samples: 332288. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:26:48,721][01267] Avg episode reward: [(0, '9.008')]
[2025-01-13 17:26:48,723][02407] Saving new best policy, reward=9.008!
[2025-01-13 17:26:53,061][02420] Updated weights for policy 0, policy_version 330 (0.0016)
[2025-01-13 17:26:53,717][01267] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 1351680. Throughput: 0: 1003.4. Samples: 338902. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:26:53,719][01267] Avg episode reward: [(0, '9.641')]
[2025-01-13 17:26:53,735][02407] Saving new best policy, reward=9.641!
[2025-01-13 17:26:58,717][01267] Fps is (10 sec: 3276.9, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 1363968. Throughput: 0: 975.4. Samples: 340962. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-01-13 17:26:58,719][01267] Avg episode reward: [(0, '9.656')]
[2025-01-13 17:26:58,722][02407] Saving new best policy, reward=9.656!
[2025-01-13 17:27:03,717][01267] Fps is (10 sec: 3276.8, 60 sec: 3823.0, 300 sec: 3901.7). Total num frames: 1384448. Throughput: 0: 939.6. Samples: 346314. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:27:03,720][01267] Avg episode reward: [(0, '9.721')]
[2025-01-13 17:27:03,733][02407] Saving new best policy, reward=9.721!
[2025-01-13 17:27:04,976][02420] Updated weights for policy 0, policy_version 340 (0.0025)
[2025-01-13 17:27:08,717][01267] Fps is (10 sec: 4505.5, 60 sec: 3959.4, 300 sec: 3915.5). Total num frames: 1409024. Throughput: 0: 984.6. Samples: 353190. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:27:08,721][01267] Avg episode reward: [(0, '10.189')]
[2025-01-13 17:27:08,725][02407] Saving new best policy, reward=10.189!
[2025-01-13 17:27:13,717][01267] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 1425408. Throughput: 0: 995.7. Samples: 356010. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-13 17:27:13,719][01267] Avg episode reward: [(0, '10.771')]
[2025-01-13 17:27:13,726][02407] Saving new best policy, reward=10.771!
[2025-01-13 17:27:15,658][02420] Updated weights for policy 0, policy_version 350 (0.0024)
[2025-01-13 17:27:18,717][01267] Fps is (10 sec: 3276.9, 60 sec: 3754.7, 300 sec: 3901.6). Total num frames: 1441792. Throughput: 0: 941.8. Samples: 360436. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-01-13 17:27:18,720][01267] Avg episode reward: [(0, '10.532')]
[2025-01-13 17:27:23,717][01267] Fps is (10 sec: 4096.0, 60 sec: 3891.3, 300 sec: 3929.4). Total num frames: 1466368. Throughput: 0: 971.7. Samples: 367578. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:27:23,719][01267] Avg episode reward: [(0, '11.256')]
[2025-01-13 17:27:23,732][02407] Saving new best policy, reward=11.256!
[2025-01-13 17:27:25,207][02420] Updated weights for policy 0, policy_version 360 (0.0013)
[2025-01-13 17:27:28,720][01267] Fps is (10 sec: 4504.1, 60 sec: 3959.3, 300 sec: 3929.3). Total num frames: 1486848. Throughput: 0: 1002.0. Samples: 371074. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-13 17:27:28,726][01267] Avg episode reward: [(0, '11.094')]
[2025-01-13 17:27:33,720][01267] Fps is (10 sec: 2456.8, 60 sec: 3618.0, 300 sec: 3873.8). Total num frames: 1490944. Throughput: 0: 920.1. Samples: 373696. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-13 17:27:33,722][01267] Avg episode reward: [(0, '11.133')]
[2025-01-13 17:27:38,717][01267] Fps is (10 sec: 2458.4, 60 sec: 3686.4, 300 sec: 3901.6). Total num frames: 1511424. Throughput: 0: 886.0. Samples: 378770. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-13 17:27:38,724][01267] Avg episode reward: [(0, '10.688')]
[2025-01-13 17:27:39,342][02420] Updated weights for policy 0, policy_version 370 (0.0030)
[2025-01-13 17:27:43,717][01267] Fps is (10 sec: 4507.0, 60 sec: 3822.9, 300 sec: 3901.6). Total num frames: 1536000. Throughput: 0: 919.9. Samples: 382358. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-13 17:27:43,719][01267] Avg episode reward: [(0, '10.860')]
[2025-01-13 17:27:48,717][01267] Fps is (10 sec: 3686.4, 60 sec: 3618.2, 300 sec: 3873.8). Total num frames: 1548288. Throughput: 0: 902.6. Samples: 386932. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:27:48,724][01267] Avg episode reward: [(0, '10.865')]
[2025-01-13 17:27:51,418][02420] Updated weights for policy 0, policy_version 380 (0.0025)
[2025-01-13 17:27:53,717][01267] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3873.9). Total num frames: 1564672. Throughput: 0: 869.9. Samples: 392334. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:27:53,726][01267] Avg episode reward: [(0, '11.242')]
[2025-01-13 17:27:58,717][01267] Fps is (10 sec: 4095.9, 60 sec: 3754.7, 300 sec: 3901.6). Total num frames: 1589248. Throughput: 0: 883.8. Samples: 395782. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-01-13 17:27:58,720][01267] Avg episode reward: [(0, '11.171')]
[2025-01-13 17:28:00,240][02420] Updated weights for policy 0, policy_version 390 (0.0030)
[2025-01-13 17:28:03,720][01267] Fps is (10 sec: 4094.7, 60 sec: 3686.2, 300 sec: 3873.8). Total num frames: 1605632. Throughput: 0: 926.5. Samples: 402130. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:28:03,723][01267] Avg episode reward: [(0, '11.256')]
[2025-01-13 17:28:08,717][01267] Fps is (10 sec: 3276.9, 60 sec: 3549.9, 300 sec: 3873.9). Total num frames: 1622016. Throughput: 0: 859.7. Samples: 406266. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:28:08,721][01267] Avg episode reward: [(0, '11.608')]
[2025-01-13 17:28:08,724][02407] Saving new best policy, reward=11.608!
[2025-01-13 17:28:12,095][02420] Updated weights for policy 0, policy_version 400 (0.0026)
[2025-01-13 17:28:13,717][01267] Fps is (10 sec: 3687.6, 60 sec: 3618.1, 300 sec: 3901.6). Total num frames: 1642496. Throughput: 0: 858.4. Samples: 409700. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:28:13,724][01267] Avg episode reward: [(0, '12.451')]
[2025-01-13 17:28:13,731][02407] Saving new best policy, reward=12.451!
[2025-01-13 17:28:18,717][01267] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3887.7). Total num frames: 1662976. Throughput: 0: 930.4. Samples: 415560. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:28:18,721][01267] Avg episode reward: [(0, '13.625')]
[2025-01-13 17:28:18,726][02407] Saving new best policy, reward=13.625!
[2025-01-13 17:28:23,717][01267] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3846.1). Total num frames: 1675264. Throughput: 0: 919.6. Samples: 420154. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:28:23,724][01267] Avg episode reward: [(0, '13.945')]
[2025-01-13 17:28:23,736][02407] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000409_1675264.pth...
[2025-01-13 17:28:23,929][02407] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000183_749568.pth
[2025-01-13 17:28:23,949][02407] Saving new best policy, reward=13.945!
[2025-01-13 17:28:24,278][02420] Updated weights for policy 0, policy_version 410 (0.0018)
[2025-01-13 17:28:28,717][01267] Fps is (10 sec: 3276.8, 60 sec: 3481.8, 300 sec: 3873.8). Total num frames: 1695744. Throughput: 0: 891.5. Samples: 422476. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:28:28,719][01267] Avg episode reward: [(0, '14.132')]
[2025-01-13 17:28:28,722][02407] Saving new best policy, reward=14.132!
[2025-01-13 17:28:33,722][01267] Fps is (10 sec: 4093.9, 60 sec: 3754.5, 300 sec: 3887.7). Total num frames: 1716224. Throughput: 0: 941.6. Samples: 429308. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2025-01-13 17:28:33,724][01267] Avg episode reward: [(0, '15.150')]
[2025-01-13 17:28:33,733][02407] Saving new best policy, reward=15.150!
[2025-01-13 17:28:33,983][02420] Updated weights for policy 0, policy_version 420 (0.0015)
[2025-01-13 17:28:38,484][01267] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 1267], exiting...
[2025-01-13 17:28:38,488][02407] Stopping Batcher_0...
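During the run above the learner wrote numbered checkpoints (`checkpoint_<policy_version>_<env_frames>.pth`), pruned older ones, and tracked the best policy by average episode reward; a final checkpoint is written during the shutdown below. The files are written with torch.save, so a generic way to peek inside one is a sketch like the following (the exact keys vary by Sample Factory version, hence the loop rather than hard-coded names):

```python
# Inspect one of the checkpoints listed in the log. Assumes only that the file
# is a torch-serialized dict; key names depend on the Sample Factory version.
import torch

ckpt_path = "/content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000409_1675264.pth"
checkpoint = torch.load(ckpt_path, map_location="cpu")

if isinstance(checkpoint, dict):
    for key, value in checkpoint.items():
        # typically model/optimizer state dicts plus step counters
        desc = f"{len(value)} entries" if hasattr(value, "__len__") else repr(value)
        print(f"{key}: {desc}")
```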
[2025-01-13 17:28:38,489][02407] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000423_1732608.pth...
[2025-01-13 17:28:38,492][01267] Runner profile tree view:
main_loop: 488.6075
[2025-01-13 17:28:38,489][02407] Loop batcher_evt_loop terminating...
[2025-01-13 17:28:38,497][01267] Collected {0: 1732608}, FPS: 3546.0
[2025-01-13 17:28:38,691][02420] Weights refcount: 2 0
[2025-01-13 17:28:38,721][02420] Stopping InferenceWorker_p0-w0...
[2025-01-13 17:28:38,736][02420] Loop inference_proc0-0_evt_loop terminating...
[2025-01-13 17:28:38,781][02423] EvtLoop [rollout_proc2_evt_loop, process=rollout_proc2] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance2'), args=(1, 0)
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
    complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
    new_obs, rewards, terminated, truncated, infos = e.step(actions)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step
    return self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 522, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step
    return self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step
    reward = self.game.make_action(actions_flattened, self.skip_frames)
vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed.
[2025-01-13 17:28:38,766][02430] EvtLoop [rollout_proc5_evt_loop, process=rollout_proc5] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance5'), args=(0, 0)
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
    complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
    new_obs, rewards, terminated, truncated, infos = e.step(actions)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step
    return self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 522, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step
    return self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step
    reward = self.game.make_action(actions_flattened, self.skip_frames)
vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed.
[2025-01-13 17:28:38,824][02430] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc5_evt_loop
[2025-01-13 17:28:38,835][02407] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000301_1232896.pth
[2025-01-13 17:28:38,833][02423] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc2_evt_loop
[2025-01-13 17:28:38,870][02407] Stopping LearnerWorker_p0...
[2025-01-13 17:28:38,877][02407] Loop learner_proc0_evt_loop terminating...
[2025-01-13 17:28:38,842][02422] EvtLoop [rollout_proc1_evt_loop, process=rollout_proc1] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance1'), args=(1, 0)
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
    complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
    new_obs, rewards, terminated, truncated, infos = e.step(actions)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step
    return self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 522, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step
    return self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step
    reward = self.game.make_action(actions_flattened, self.skip_frames)
vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed.
[2025-01-13 17:28:38,891][02422] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc1_evt_loop
[2025-01-13 17:28:38,834][02428] EvtLoop [rollout_proc3_evt_loop, process=rollout_proc3] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance3'), args=(1, 0)
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
    complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
    new_obs, rewards, terminated, truncated, infos = e.step(actions)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step
    return self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 522, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step
    return self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step
    reward = self.game.make_action(actions_flattened, self.skip_frames)
vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed.
[2025-01-13 17:28:38,892][02428] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc3_evt_loop
[2025-01-13 17:28:38,865][02421] EvtLoop [rollout_proc0_evt_loop, process=rollout_proc0] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance0'), args=(1, 0)
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
    complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
    new_obs, rewards, terminated, truncated, infos = e.step(actions)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step
    return self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 117, in step
    obs, info["reset_info"] = self.env.reset()
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 30, in reset
    return self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 515, in reset
    obs, info = self.env.reset(seed=seed, options=options)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 82, in reset
    obs, info = self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 467, in reset
    return self.env.reset(seed=seed, options=options)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 51, in reset
    return self.env.reset(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 346, in reset
    self.game.new_episode()
vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed.
[2025-01-13 17:28:38,906][02421] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc0_evt_loop
[2025-01-13 17:28:38,902][02431] EvtLoop [rollout_proc6_evt_loop, process=rollout_proc6] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance6'), args=(0, 0)
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
    complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
    new_obs, rewards, terminated, truncated, infos = e.step(actions)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step
    return self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 522, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step
    return self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step
    reward = self.game.make_action(actions_flattened, self.skip_frames)
vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed.
[2025-01-13 17:28:38,930][02431] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc6_evt_loop
[2025-01-13 17:28:39,013][02429] EvtLoop [rollout_proc4_evt_loop, process=rollout_proc4] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance4'), args=(0, 0)
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
    complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
    new_obs, rewards, terminated, truncated, infos = e.step(actions)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step
    return self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 522, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step
    return self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step
    reward = self.game.make_action(actions_flattened, self.skip_frames)
vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed.
[2025-01-13 17:28:39,042][02429] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc4_evt_loop
[2025-01-13 17:28:39,108][02432] EvtLoop [rollout_proc7_evt_loop, process=rollout_proc7] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance7'), args=(1, 0)
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
    complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
    new_obs, rewards, terminated, truncated, infos = e.step(actions)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step
    return self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/scenario_wrappers/gathering_reward_shaping.py", line 33, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 522, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/gymnasium/core.py", line 461, in step
    return self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
  File "/usr/local/lib/python3.10/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step
    reward = self.game.make_action(actions_flattened, self.skip_frames)
vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed.
[2025-01-13 17:28:39,121][02432] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc7_evt_loop
[2025-01-13 17:28:43,361][01267] Environment doom_basic already registered, overwriting...
[2025-01-13 17:28:43,366][01267] Environment doom_two_colors_easy already registered, overwriting...
[2025-01-13 17:28:43,369][01267] Environment doom_two_colors_hard already registered, overwriting...
[2025-01-13 17:28:43,371][01267] Environment doom_dm already registered, overwriting...
[2025-01-13 17:28:43,374][01267] Environment doom_dwango5 already registered, overwriting...
[2025-01-13 17:28:43,376][01267] Environment doom_my_way_home_flat_actions already registered, overwriting...
[2025-01-13 17:28:43,383][01267] Environment doom_defend_the_center_flat_actions already registered, overwriting...
[2025-01-13 17:28:43,385][01267] Environment doom_my_way_home already registered, overwriting...
[2025-01-13 17:28:43,388][01267] Environment doom_deadly_corridor already registered, overwriting...
[2025-01-13 17:28:43,390][01267] Environment doom_defend_the_center already registered, overwriting...
[2025-01-13 17:28:43,391][01267] Environment doom_defend_the_line already registered, overwriting... [2025-01-13 17:28:43,392][01267] Environment doom_health_gathering already registered, overwriting... [2025-01-13 17:28:43,393][01267] Environment doom_health_gathering_supreme already registered, overwriting... [2025-01-13 17:28:43,394][01267] Environment doom_battle already registered, overwriting... [2025-01-13 17:28:43,395][01267] Environment doom_battle2 already registered, overwriting... [2025-01-13 17:28:43,396][01267] Environment doom_duel_bots already registered, overwriting... [2025-01-13 17:28:43,397][01267] Environment doom_deathmatch_bots already registered, overwriting... [2025-01-13 17:28:43,402][01267] Environment doom_duel already registered, overwriting... [2025-01-13 17:28:43,404][01267] Environment doom_deathmatch_full already registered, overwriting... [2025-01-13 17:28:43,405][01267] Environment doom_benchmark already registered, overwriting... [2025-01-13 17:28:43,407][01267] register_encoder_factory:
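Editor's note: the tracebacks above all bottom out in vizdoom.vizdoom.SignalException. The SIGINT that interrupted the run reaches every worker's Doom process, which closes while a worker is still inside game.new_episode() (the reset path in the first traceback) or game.make_action() (the step path in the others), and the signal_slot event loops then report the exception as unhandled, once per rollout worker. The registration block that follows simply marks the notebook cell being executed again in the same process. Below is a minimal sketch, not Sample Factory's actual shutdown path, of how a custom Gymnasium wrapper could absorb that exception and end the episode cleanly instead of crashing the event loop (vizdoom.SignalException as a top-level name is an assumption based on the class path in the traceback):

    import gymnasium as gym
    import vizdoom

    class SigintTolerantEnv(gym.Wrapper):
        """Hypothetical wrapper: turn ViZDoom's SignalException into a truncated episode."""

        def __init__(self, env):
            super().__init__(env)
            self._last_obs = None

        def reset(self, **kwargs):
            obs, info = self.env.reset(**kwargs)
            self._last_obs = obs
            return obs, info

        def step(self, action):
            try:
                obs, rew, terminated, truncated, info = self.env.step(action)
                self._last_obs = obs
                return obs, rew, terminated, truncated, info
            except vizdoom.SignalException:
                # The Doom instance was closed by a signal; report the episode
                # as truncated so the sampler can wind down without a traceback.
                return self._last_obs, 0.0, False, True, {"sigint": True}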
[2025-01-13 17:40:59,173][01267] Environment doom_basic already registered, overwriting... [2025-01-13 17:40:59,176][01267] Environment doom_two_colors_easy already registered, overwriting... [2025-01-13 17:40:59,178][01267] Environment doom_two_colors_hard already registered, overwriting... [2025-01-13 17:40:59,179][01267] Environment doom_dm already registered, overwriting... [2025-01-13 17:40:59,182][01267] Environment doom_dwango5 already registered, overwriting... [2025-01-13 17:40:59,183][01267] Environment doom_my_way_home_flat_actions already registered, overwriting... [2025-01-13 17:40:59,185][01267] Environment doom_defend_the_center_flat_actions already registered, overwriting... [2025-01-13 17:40:59,185][01267] Environment doom_my_way_home already registered, overwriting... [2025-01-13 17:40:59,186][01267] Environment doom_deadly_corridor already registered, overwriting... [2025-01-13 17:40:59,187][01267] Environment doom_defend_the_center already registered, overwriting... [2025-01-13 17:40:59,188][01267] Environment doom_defend_the_line already registered, overwriting... [2025-01-13 17:40:59,193][01267] Environment doom_health_gathering already registered, overwriting... [2025-01-13 17:40:59,194][01267] Environment doom_health_gathering_supreme already registered, overwriting... [2025-01-13 17:40:59,195][01267] Environment doom_battle already registered, overwriting... [2025-01-13 17:40:59,196][01267] Environment doom_battle2 already registered, overwriting... [2025-01-13 17:40:59,197][01267] Environment doom_duel_bots already registered, overwriting... [2025-01-13 17:40:59,198][01267] Environment doom_deathmatch_bots already registered, overwriting... [2025-01-13 17:40:59,199][01267] Environment doom_duel already registered, overwriting... [2025-01-13 17:40:59,202][01267] Environment doom_deathmatch_full already registered, overwriting... [2025-01-13 17:40:59,203][01267] Environment doom_benchmark already registered, overwriting...
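Editor's note: two further identical registration blocks (17:29:42 and 17:30:06, differing only in timestamps) are omitted above; each one is another re-execution of the cell that went no further than registration. The "already registered, overwriting..." lines appear because environment registration is process-global, so registering the same name again replaces the old entry. A rough sketch of that behavior (the registry variable and message format here are assumptions, not Sample Factory's actual internals):

    from typing import Callable, Dict

    ENV_REGISTRY: Dict[str, Callable] = {}  # module-level, survives notebook cell re-runs

    def register_env(name: str, make_env_func: Callable) -> None:
        if name in ENV_REGISTRY:
            print(f"Environment {name} already registered, overwriting...")
        ENV_REGISTRY[name] = make_env_func

    register_env("doom_basic", lambda: None)
    register_env("doom_basic", lambda: None)  # second call prints the warning above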
[2025-01-13 17:40:59,204][01267] register_encoder_factory: [2025-01-13 17:40:59,242][01267] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json [2025-01-13 17:40:59,243][01267] Overriding arg 'num_workers' with value 10 passed from command line [2025-01-13 17:40:59,244][01267] Overriding arg 'num_envs_per_worker' with value 5 passed from command line [2025-01-13 17:40:59,258][01267] Experiment dir /content/train_dir/default_experiment already exists! [2025-01-13 17:40:59,261][01267] Resuming existing experiment from /content/train_dir/default_experiment... [2025-01-13 17:40:59,263][01267] Weights and Biases integration disabled [2025-01-13 17:40:59,267][01267] Environment var CUDA_VISIBLE_DEVICES is 0 [2025-01-13 17:41:02,066][01267] cfg.num_envs_per_worker=5 must be a multiple of cfg.worker_num_splits=2 (for double-buffered sampling you need to use even number of envs per worker) [2025-01-13 17:41:20,092][01267] Environment doom_basic already registered, overwriting... [2025-01-13 17:41:20,095][01267] Environment doom_two_colors_easy already registered, overwriting... [2025-01-13 17:41:20,097][01267] Environment doom_two_colors_hard already registered, overwriting... [2025-01-13 17:41:20,099][01267] Environment doom_dm already registered, overwriting... [2025-01-13 17:41:20,100][01267] Environment doom_dwango5 already registered, overwriting... [2025-01-13 17:41:20,101][01267] Environment doom_my_way_home_flat_actions already registered, overwriting... [2025-01-13 17:41:20,102][01267] Environment doom_defend_the_center_flat_actions already registered, overwriting... [2025-01-13 17:41:20,103][01267] Environment doom_my_way_home already registered, overwriting... [2025-01-13 17:41:20,105][01267] Environment doom_deadly_corridor already registered, overwriting... [2025-01-13 17:41:20,106][01267] Environment doom_defend_the_center already registered, overwriting... [2025-01-13 17:41:20,107][01267] Environment doom_defend_the_line already registered, overwriting... [2025-01-13 17:41:20,108][01267] Environment doom_health_gathering already registered, overwriting... [2025-01-13 17:41:20,109][01267] Environment doom_health_gathering_supreme already registered, overwriting... [2025-01-13 17:41:20,110][01267] Environment doom_battle already registered, overwriting... [2025-01-13 17:41:20,111][01267] Environment doom_battle2 already registered, overwriting... [2025-01-13 17:41:20,112][01267] Environment doom_duel_bots already registered, overwriting... [2025-01-13 17:41:20,113][01267] Environment doom_deathmatch_bots already registered, overwriting... [2025-01-13 17:41:20,114][01267] Environment doom_duel already registered, overwriting... [2025-01-13 17:41:20,116][01267] Environment doom_deathmatch_full already registered, overwriting... [2025-01-13 17:41:20,117][01267] Environment doom_benchmark already registered, overwriting... [2025-01-13 17:41:20,118][01267] register_encoder_factory: [2025-01-13 17:41:20,142][01267] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json [2025-01-13 17:41:20,143][01267] Overriding arg 'num_workers' with value 10 passed from command line [2025-01-13 17:41:20,150][01267] Experiment dir /content/train_dir/default_experiment already exists! [2025-01-13 17:41:20,151][01267] Resuming existing experiment from /content/train_dir/default_experiment... 
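Editor's note: the 17:40:59 attempt passed --num_envs_per_worker=5 and was rejected at 17:41:02, because double-buffered sampling splits each rollout worker's environments into worker_num_splits=2 groups that alternate between stepping and inference, so the env count per worker must divide evenly. The 17:41:20 retry above drops that override and inherits num_envs_per_worker=4 from the saved config. A sketch of the check, assuming exactly the semantics the error message states:

    def validate_sampling(num_envs_per_worker: int, worker_num_splits: int = 2) -> None:
        if num_envs_per_worker % worker_num_splits != 0:
            raise ValueError(
                f"cfg.num_envs_per_worker={num_envs_per_worker} must be a multiple "
                f"of cfg.worker_num_splits={worker_num_splits}"
            )

    validate_sampling(4)    # fine: two alternating groups of 2 envs per worker
    # validate_sampling(5)  # raises, matching the aborted 17:40:59 launch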
[2025-01-13 17:41:20,153][01267] Weights and Biases integration disabled [2025-01-13 17:41:20,156][01267] Environment var CUDA_VISIBLE_DEVICES is 0 [2025-01-13 17:41:22,430][01267] Starting experiment with the following configuration: help=False algo=APPO env=doom_health_gathering_supreme experiment=default_experiment train_dir=/content/train_dir restart_behavior=resume device=gpu seed=None num_policies=1 async_rl=True serial_mode=False batched_sampling=False num_batches_to_accumulate=2 worker_num_splits=2 policy_workers_per_policy=1 max_policy_lag=1000 num_workers=10 num_envs_per_worker=4 batch_size=1024 num_batches_per_epoch=1 num_epochs=1 rollout=32 recurrence=32 shuffle_minibatches=False gamma=0.99 reward_scale=1.0 reward_clip=1000.0 value_bootstrap=False normalize_returns=True exploration_loss_coeff=0.001 value_loss_coeff=0.5 kl_loss_coeff=0.0 exploration_loss=symmetric_kl gae_lambda=0.95 ppo_clip_ratio=0.1 ppo_clip_value=0.2 with_vtrace=False vtrace_rho=1.0 vtrace_c=1.0 optimizer=adam adam_eps=1e-06 adam_beta1=0.9 adam_beta2=0.999 max_grad_norm=4.0 learning_rate=0.0001 lr_schedule=constant lr_schedule_kl_threshold=0.008 lr_adaptive_min=1e-06 lr_adaptive_max=0.01 obs_subtract_mean=0.0 obs_scale=255.0 normalize_input=True normalize_input_keys=None decorrelate_experience_max_seconds=0 decorrelate_envs_on_one_worker=True actor_worker_gpus=[] set_workers_cpu_affinity=True force_envs_single_thread=False default_niceness=0 log_to_file=True experiment_summaries_interval=10 flush_summaries_interval=30 stats_avg=100 summaries_use_frameskip=True heartbeat_interval=20 heartbeat_reporting_interval=600 train_for_env_steps=4000000 train_for_seconds=10000000000 save_every_sec=120 keep_checkpoints=2 load_checkpoint_kind=latest save_milestones_sec=-1 save_best_every_sec=5 save_best_metric=reward save_best_after=100000 benchmark=False encoder_mlp_layers=[512, 512] encoder_conv_architecture=convnet_simple encoder_conv_mlp_layers=[512] use_rnn=True rnn_size=512 rnn_type=gru rnn_num_layers=1 decoder_mlp_layers=[] nonlinearity=elu policy_initialization=orthogonal policy_init_gain=1.0 actor_critic_share_weights=True adaptive_stddev=True continuous_tanh_scale=0.0 initial_stddev=1.0 use_env_info_cache=False env_gpu_actions=False env_gpu_observations=True env_frameskip=4 env_framestack=1 pixel_format=CHW use_record_episode_statistics=False with_wandb=False wandb_user=None wandb_project=sample_factory wandb_group=None wandb_job_type=SF wandb_tags=[] with_pbt=False pbt_mix_policies_in_one_env=True pbt_period_env_steps=5000000 pbt_start_mutation=20000000 pbt_replace_fraction=0.3 pbt_mutation_rate=0.15 pbt_replace_reward_gap=0.1 pbt_replace_reward_gap_absolute=1e-06 pbt_optimize_gamma=False pbt_target_objective=true_objective pbt_perturb_min=1.1 pbt_perturb_max=1.5 num_agents=-1 num_humans=0 num_bots=-1 start_bot_difficulty=None timelimit=None res_w=128 res_h=72 wide_aspect_ratio=False eval_env_frameskip=1 fps=35 command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000 cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000} git_hash=unknown git_repo_name=not a git repository [2025-01-13 17:41:22,437][01267] Saving configuration to /content/train_dir/default_experiment/config.json... 
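Editor's note: the configuration dump above is the result of restart_behavior=resume. The stored config.json is loaded first and explicit command-line arguments win over it, which is why num_workers is 10 here even though the recorded command_line still shows --num_workers=8 from the original run. An illustrative sketch of that precedence (not Sample Factory's actual loader):

    import json

    def resume_cfg(path: str, cli_overrides: dict) -> dict:
        with open(path) as f:
            cfg = json.load(f)       # saved experiment configuration
        cfg.update(cli_overrides)    # values passed on the command line take precedence
        return cfg

    # cfg = resume_cfg("/content/train_dir/default_experiment/config.json",
    #                  {"num_workers": 10})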
[2025-01-13 17:41:22,440][01267] Rollout worker 0 uses device cpu [2025-01-13 17:41:22,443][01267] Rollout worker 1 uses device cpu [2025-01-13 17:41:22,444][01267] Rollout worker 2 uses device cpu [2025-01-13 17:41:22,445][01267] Rollout worker 3 uses device cpu [2025-01-13 17:41:22,446][01267] Rollout worker 4 uses device cpu [2025-01-13 17:41:22,447][01267] Rollout worker 5 uses device cpu [2025-01-13 17:41:22,451][01267] Rollout worker 6 uses device cpu [2025-01-13 17:41:22,452][01267] Rollout worker 7 uses device cpu [2025-01-13 17:41:22,453][01267] Rollout worker 8 uses device cpu [2025-01-13 17:41:22,454][01267] Rollout worker 9 uses device cpu [2025-01-13 17:41:22,628][01267] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2025-01-13 17:41:22,631][01267] InferenceWorker_p0-w0: min num requests: 3 [2025-01-13 17:41:22,687][01267] Starting all processes... [2025-01-13 17:41:22,689][01267] Starting process learner_proc0 [2025-01-13 17:41:22,751][01267] Starting all processes... [2025-01-13 17:41:22,797][01267] Starting process inference_proc0-0 [2025-01-13 17:41:22,797][01267] Starting process rollout_proc0 [2025-01-13 17:41:22,801][01267] Starting process rollout_proc1 [2025-01-13 17:41:22,801][01267] Starting process rollout_proc2 [2025-01-13 17:41:22,801][01267] Starting process rollout_proc3 [2025-01-13 17:41:22,805][01267] Starting process rollout_proc4 [2025-01-13 17:41:22,805][01267] Starting process rollout_proc5 [2025-01-13 17:41:22,805][01267] Starting process rollout_proc6 [2025-01-13 17:41:22,805][01267] Starting process rollout_proc7 [2025-01-13 17:41:22,805][01267] Starting process rollout_proc8 [2025-01-13 17:41:22,805][01267] Starting process rollout_proc9 [2025-01-13 17:41:44,050][11533] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2025-01-13 17:41:44,054][11533] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 [2025-01-13 17:41:44,131][11533] Num visible devices: 1 [2025-01-13 17:41:44,160][01267] Heartbeat connected on Batcher_0 [2025-01-13 17:41:44,161][11533] Starting seed is not provided [2025-01-13 17:41:44,166][11533] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2025-01-13 17:41:44,166][11533] Initializing actor-critic model on device cuda:0 [2025-01-13 17:41:44,167][11533] RunningMeanStd input shape: (3, 72, 128) [2025-01-13 17:41:44,169][11533] RunningMeanStd input shape: (1,) [2025-01-13 17:41:44,286][11533] ConvEncoder: input_channels=3 [2025-01-13 17:41:44,417][11554] Worker 5 uses CPU cores [1] [2025-01-13 17:41:44,433][11552] Worker 3 uses CPU cores [1] [2025-01-13 17:41:44,474][11557] Worker 8 uses CPU cores [0] [2025-01-13 17:41:44,481][11548] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2025-01-13 17:41:44,485][11548] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 [2025-01-13 17:41:44,608][11549] Worker 0 uses CPU cores [0] [2025-01-13 17:41:44,627][11548] Num visible devices: 1 [2025-01-13 17:41:44,652][11558] Worker 9 uses CPU cores [1] [2025-01-13 17:41:44,673][01267] Heartbeat connected on InferenceWorker_p0-w0 [2025-01-13 17:41:44,684][01267] Heartbeat connected on RolloutWorker_w3 [2025-01-13 17:41:44,690][01267] Heartbeat connected on RolloutWorker_w5 [2025-01-13 17:41:44,731][11551] Worker 2 uses CPU cores [0] [2025-01-13 17:41:44,752][11550] Worker 1 uses CPU cores [1] [2025-01-13 17:41:44,761][01267] Heartbeat connected on RolloutWorker_w9 [2025-01-13 17:41:44,764][01267] Heartbeat connected on RolloutWorker_w8 
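Editor's note: with set_workers_cpu_affinity=True, the "Worker N uses CPU cores [...]" lines in this stretch show each rollout worker being pinned to a single core, alternating between cores [0] and [1] on this two-core VM. A minimal sketch of such pinning using psutil (assuming Linux; Sample Factory's own affinity helper may differ):

    import psutil

    def pin_worker(worker_idx: int, num_cores: int = 2) -> None:
        core = worker_idx % num_cores          # e.g. even workers on core 0, odd on core 1
        psutil.Process().cpu_affinity([core])  # restrict this process to that core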
[2025-01-13 17:41:44,817][11556] Worker 7 uses CPU cores [1] [2025-01-13 17:41:44,831][01267] Heartbeat connected on RolloutWorker_w0 [2025-01-13 17:41:44,861][01267] Heartbeat connected on RolloutWorker_w1 [2025-01-13 17:41:44,865][01267] Heartbeat connected on RolloutWorker_w7 [2025-01-13 17:41:44,891][11553] Worker 4 uses CPU cores [0] [2025-01-13 17:41:44,890][11555] Worker 6 uses CPU cores [0] [2025-01-13 17:41:44,895][01267] Heartbeat connected on RolloutWorker_w2 [2025-01-13 17:41:44,908][01267] Heartbeat connected on RolloutWorker_w6 [2025-01-13 17:41:44,910][01267] Heartbeat connected on RolloutWorker_w4 [2025-01-13 17:41:44,952][11533] Conv encoder output size: 512 [2025-01-13 17:41:44,952][11533] Policy head output size: 512 [2025-01-13 17:41:44,970][11533] Created Actor Critic model with architecture: [2025-01-13 17:41:44,970][11533] ActorCriticSharedWeights( (obs_normalizer): ObservationNormalizer( (running_mean_std): RunningMeanStdDictInPlace( (running_mean_std): ModuleDict( (obs): RunningMeanStdInPlace() ) ) ) (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) (encoder): VizdoomEncoder( (basic_encoder): ConvEncoder( (enc): RecursiveScriptModule( original_name=ConvEncoderImpl (conv_head): RecursiveScriptModule( original_name=Sequential (0): RecursiveScriptModule(original_name=Conv2d) (1): RecursiveScriptModule(original_name=ELU) (2): RecursiveScriptModule(original_name=Conv2d) (3): RecursiveScriptModule(original_name=ELU) (4): RecursiveScriptModule(original_name=Conv2d) (5): RecursiveScriptModule(original_name=ELU) ) (mlp_layers): RecursiveScriptModule( original_name=Sequential (0): RecursiveScriptModule(original_name=Linear) (1): RecursiveScriptModule(original_name=ELU) ) ) ) ) (core): ModelCoreRNN( (core): GRU(512, 512) ) (decoder): MlpDecoder( (mlp): Identity() ) (critic_linear): Linear(in_features=512, out_features=1, bias=True) (action_parameterization): ActionParameterizationDefault( (distribution_linear): Linear(in_features=512, out_features=5, bias=True) ) ) [2025-01-13 17:41:45,124][11533] Using optimizer [2025-01-13 17:41:45,968][11533] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000423_1732608.pth... [2025-01-13 17:41:46,004][11533] Loading model from checkpoint [2025-01-13 17:41:46,006][11533] Loaded experiment state at self.train_step=423, self.env_steps=1732608 [2025-01-13 17:41:46,006][11533] Initialized policy 0 weights for model version 423 [2025-01-13 17:41:46,009][11533] Using GPUs [0] for process 0 (actually maps to GPUs [0]) [2025-01-13 17:41:46,016][11533] LearnerWorker_p0 finished initialization! [2025-01-13 17:41:46,016][01267] Heartbeat connected on LearnerWorker_p0 [2025-01-13 17:41:46,107][11548] RunningMeanStd input shape: (3, 72, 128) [2025-01-13 17:41:46,108][11548] RunningMeanStd input shape: (1,) [2025-01-13 17:41:46,120][11548] ConvEncoder: input_channels=3 [2025-01-13 17:41:46,230][11548] Conv encoder output size: 512 [2025-01-13 17:41:46,231][11548] Policy head output size: 512 [2025-01-13 17:41:46,295][01267] Inference worker 0-0 is ready! [2025-01-13 17:41:46,296][01267] All inference workers are ready! Signal rollout workers to start! 
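Editor's note: unlike the interrupted first launch, this run finds checkpoint_000000423_1732608.pth (the filename encodes policy version 423 and 1,732,608 env steps) and resumes from it instead of initializing from scratch. A sketch of what restoring learner state involves, assuming a torch checkpoint containing model, optimizer, and counter entries (the exact key names are assumptions, not Sample Factory's actual checkpoint schema):

    import torch

    def restore_learner(model, optimizer, path: str, device: str = "cuda:0"):
        state = torch.load(path, map_location=device)
        model.load_state_dict(state["model"])          # network weights
        optimizer.load_state_dict(state["optimizer"])  # Adam moments and step counts
        return state.get("train_step", 0), state.get("env_steps", 0)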
[2025-01-13 17:41:46,550][11556] Doom resolution: 160x120, resize resolution: (128, 72) [2025-01-13 17:41:46,545][11558] Doom resolution: 160x120, resize resolution: (128, 72) [2025-01-13 17:41:46,555][11554] Doom resolution: 160x120, resize resolution: (128, 72) [2025-01-13 17:41:46,555][11550] Doom resolution: 160x120, resize resolution: (128, 72) [2025-01-13 17:41:46,552][11552] Doom resolution: 160x120, resize resolution: (128, 72) [2025-01-13 17:41:46,562][11549] Doom resolution: 160x120, resize resolution: (128, 72) [2025-01-13 17:41:46,570][11555] Doom resolution: 160x120, resize resolution: (128, 72) [2025-01-13 17:41:46,571][11557] Doom resolution: 160x120, resize resolution: (128, 72) [2025-01-13 17:41:46,565][11553] Doom resolution: 160x120, resize resolution: (128, 72) [2025-01-13 17:41:46,564][11551] Doom resolution: 160x120, resize resolution: (128, 72) [2025-01-13 17:41:48,353][11558] Decorrelating experience for 0 frames... [2025-01-13 17:41:48,349][11552] Decorrelating experience for 0 frames... [2025-01-13 17:41:48,353][11554] Decorrelating experience for 0 frames... [2025-01-13 17:41:48,360][11550] Decorrelating experience for 0 frames... [2025-01-13 17:41:48,364][11549] Decorrelating experience for 0 frames... [2025-01-13 17:41:48,373][11555] Decorrelating experience for 0 frames... [2025-01-13 17:41:48,370][11551] Decorrelating experience for 0 frames... [2025-01-13 17:41:48,378][11557] Decorrelating experience for 0 frames... [2025-01-13 17:41:49,515][11549] Decorrelating experience for 32 frames... [2025-01-13 17:41:49,518][11555] Decorrelating experience for 32 frames... [2025-01-13 17:41:49,514][11553] Decorrelating experience for 0 frames... [2025-01-13 17:41:49,793][11558] Decorrelating experience for 32 frames... [2025-01-13 17:41:49,798][11552] Decorrelating experience for 32 frames... [2025-01-13 17:41:49,799][11554] Decorrelating experience for 32 frames... [2025-01-13 17:41:49,912][11556] Decorrelating experience for 0 frames... [2025-01-13 17:41:50,158][01267] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 1732608. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2025-01-13 17:41:51,171][11553] Decorrelating experience for 32 frames... [2025-01-13 17:41:51,427][11550] Decorrelating experience for 32 frames... [2025-01-13 17:41:51,780][11551] Decorrelating experience for 32 frames... [2025-01-13 17:41:51,881][11549] Decorrelating experience for 64 frames... [2025-01-13 17:41:51,890][11555] Decorrelating experience for 64 frames... [2025-01-13 17:41:51,947][11558] Decorrelating experience for 64 frames... [2025-01-13 17:41:53,372][11554] Decorrelating experience for 64 frames... [2025-01-13 17:41:53,751][11556] Decorrelating experience for 32 frames... [2025-01-13 17:41:53,873][11553] Decorrelating experience for 64 frames... [2025-01-13 17:41:54,076][11549] Decorrelating experience for 96 frames... [2025-01-13 17:41:54,085][11555] Decorrelating experience for 96 frames... [2025-01-13 17:41:54,265][11550] Decorrelating experience for 64 frames... [2025-01-13 17:41:54,631][11551] Decorrelating experience for 64 frames... [2025-01-13 17:41:55,157][01267] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 1732608. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2025-01-13 17:41:55,560][11558] Decorrelating experience for 96 frames... [2025-01-13 17:41:56,333][11557] Decorrelating experience for 32 frames... 
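Editor's note: the "Decorrelating experience for N frames..." lines show each environment being warmed up for a different number of frames (0, 32, 64, 96) before collection begins, so the parallel envs do not all advance through identical game states in lockstep. One simple way to implement such a warm-up, using random actions here for illustration (an assumption, not necessarily what Sample Factory does internally):

    def decorrelate(env, num_frames: int) -> None:
        # Advance the env a worker-specific number of frames before real rollouts.
        env.reset()
        for _ in range(num_frames):
            action = env.action_space.sample()
            _, _, terminated, truncated, _ = env.step(action)
            if terminated or truncated:
                env.reset()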
[2025-01-13 17:41:56,574][11556] Decorrelating experience for 64 frames... [2025-01-13 17:41:57,301][11553] Decorrelating experience for 96 frames... [2025-01-13 17:41:58,710][11551] Decorrelating experience for 96 frames... [2025-01-13 17:41:58,864][11554] Decorrelating experience for 96 frames... [2025-01-13 17:41:59,366][11556] Decorrelating experience for 96 frames... [2025-01-13 17:41:59,900][11557] Decorrelating experience for 64 frames... [2025-01-13 17:42:00,157][01267] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 1732608. Throughput: 0: 112.6. Samples: 1126. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) [2025-01-13 17:42:00,158][01267] Avg episode reward: [(0, '4.346')] [2025-01-13 17:42:01,879][11533] Signal inference workers to stop experience collection... [2025-01-13 17:42:01,899][11548] InferenceWorker_p0-w0: stopping experience collection [2025-01-13 17:42:02,089][11552] Decorrelating experience for 64 frames... [2025-01-13 17:42:02,423][11557] Decorrelating experience for 96 frames... [2025-01-13 17:42:03,013][11550] Decorrelating experience for 96 frames... [2025-01-13 17:42:03,057][11552] Decorrelating experience for 96 frames... [2025-01-13 17:42:03,585][11533] Signal inference workers to resume experience collection... [2025-01-13 17:42:03,586][11548] InferenceWorker_p0-w0: resuming experience collection [2025-01-13 17:42:05,162][01267] Fps is (10 sec: 1228.2, 60 sec: 819.0, 300 sec: 819.0). Total num frames: 1744896. Throughput: 0: 243.5. Samples: 3654. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) [2025-01-13 17:42:05,167][01267] Avg episode reward: [(0, '5.583')] [2025-01-13 17:42:10,160][01267] Fps is (10 sec: 3275.8, 60 sec: 1638.2, 300 sec: 1638.2). Total num frames: 1765376. Throughput: 0: 356.9. Samples: 7138. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:42:10,162][01267] Avg episode reward: [(0, '7.459')] [2025-01-13 17:42:12,651][11548] Updated weights for policy 0, policy_version 433 (0.0151) [2025-01-13 17:42:15,157][01267] Fps is (10 sec: 3278.3, 60 sec: 1802.3, 300 sec: 1802.3). Total num frames: 1777664. Throughput: 0: 451.0. Samples: 11276. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2025-01-13 17:42:15,162][01267] Avg episode reward: [(0, '9.408')] [2025-01-13 17:42:20,157][01267] Fps is (10 sec: 3277.7, 60 sec: 2184.6, 300 sec: 2184.6). Total num frames: 1798144. Throughput: 0: 558.7. Samples: 16762. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2025-01-13 17:42:20,162][01267] Avg episode reward: [(0, '11.598')] [2025-01-13 17:42:23,166][11548] Updated weights for policy 0, policy_version 443 (0.0013) [2025-01-13 17:42:25,157][01267] Fps is (10 sec: 4505.8, 60 sec: 2574.7, 300 sec: 2574.7). Total num frames: 1822720. Throughput: 0: 577.9. Samples: 20226. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:42:25,161][01267] Avg episode reward: [(0, '13.043')] [2025-01-13 17:42:30,157][01267] Fps is (10 sec: 4096.1, 60 sec: 2662.5, 300 sec: 2662.5). Total num frames: 1839104. Throughput: 0: 671.3. Samples: 26852. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:42:30,166][01267] Avg episode reward: [(0, '15.469')] [2025-01-13 17:42:30,174][11533] Saving new best policy, reward=15.469! [2025-01-13 17:42:34,320][11548] Updated weights for policy 0, policy_version 453 (0.0032) [2025-01-13 17:42:35,157][01267] Fps is (10 sec: 3276.8, 60 sec: 2730.7, 300 sec: 2730.7). Total num frames: 1855488. Throughput: 0: 695.7. Samples: 31304. 
Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) [2025-01-13 17:42:35,159][01267] Avg episode reward: [(0, '15.780')] [2025-01-13 17:42:35,164][11533] Saving new best policy, reward=15.780! [2025-01-13 17:42:40,157][01267] Fps is (10 sec: 4096.0, 60 sec: 2949.2, 300 sec: 2949.2). Total num frames: 1880064. Throughput: 0: 765.2. Samples: 34434. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:42:40,159][01267] Avg episode reward: [(0, '17.413')] [2025-01-13 17:42:40,169][11533] Saving new best policy, reward=17.413! [2025-01-13 17:42:43,638][11548] Updated weights for policy 0, policy_version 463 (0.0020) [2025-01-13 17:42:45,157][01267] Fps is (10 sec: 4505.4, 60 sec: 3053.4, 300 sec: 3053.4). Total num frames: 1900544. Throughput: 0: 890.9. Samples: 41216. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:42:45,159][01267] Avg episode reward: [(0, '18.398')] [2025-01-13 17:42:45,166][11533] Saving new best policy, reward=18.398! [2025-01-13 17:42:50,157][01267] Fps is (10 sec: 3686.4, 60 sec: 3072.0, 300 sec: 3072.0). Total num frames: 1916928. Throughput: 0: 951.0. Samples: 46444. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:42:50,163][01267] Avg episode reward: [(0, '19.814')] [2025-01-13 17:42:50,172][11533] Saving new best policy, reward=19.814! [2025-01-13 17:42:55,157][01267] Fps is (10 sec: 3277.0, 60 sec: 3345.1, 300 sec: 3087.8). Total num frames: 1933312. Throughput: 0: 923.2. Samples: 48680. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:42:55,159][01267] Avg episode reward: [(0, '20.384')] [2025-01-13 17:42:55,166][11533] Saving new best policy, reward=20.384! [2025-01-13 17:42:55,648][11548] Updated weights for policy 0, policy_version 473 (0.0021) [2025-01-13 17:43:00,157][01267] Fps is (10 sec: 4095.7, 60 sec: 3754.6, 300 sec: 3218.3). Total num frames: 1957888. Throughput: 0: 975.7. Samples: 55182. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:43:00,163][01267] Avg episode reward: [(0, '20.972')] [2025-01-13 17:43:00,174][11533] Saving new best policy, reward=20.972! [2025-01-13 17:43:04,658][11548] Updated weights for policy 0, policy_version 483 (0.0022) [2025-01-13 17:43:05,158][01267] Fps is (10 sec: 4504.8, 60 sec: 3891.4, 300 sec: 3276.8). Total num frames: 1978368. Throughput: 0: 1002.6. Samples: 61882. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:43:05,161][01267] Avg episode reward: [(0, '20.073')] [2025-01-13 17:43:10,157][01267] Fps is (10 sec: 3277.1, 60 sec: 3754.9, 300 sec: 3225.6). Total num frames: 1990656. Throughput: 0: 975.5. Samples: 64124. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:43:10,163][01267] Avg episode reward: [(0, '19.260')] [2025-01-13 17:43:15,157][01267] Fps is (10 sec: 3277.2, 60 sec: 3891.2, 300 sec: 3276.8). Total num frames: 2011136. Throughput: 0: 936.9. Samples: 69012. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:43:15,160][01267] Avg episode reward: [(0, '18.354')] [2025-01-13 17:43:16,340][11548] Updated weights for policy 0, policy_version 493 (0.0030) [2025-01-13 17:43:20,157][01267] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3367.9). Total num frames: 2035712. Throughput: 0: 990.0. Samples: 75856. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:43:20,164][01267] Avg episode reward: [(0, '20.076')] [2025-01-13 17:43:20,169][11533] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000497_2035712.pth... 
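Editor's note: each recurring stats line reports throughput over 10, 60, and 300 second sliding windows, plus a cumulative "Total num frames" that continues from the resumed 1,732,608. With summaries_use_frameskip=True the frame counts appear to include the env_frameskip=4 repeated frames per policy step, which is why frames grow roughly four times faster than "Samples". A minimal sketch of that windowed FPS bookkeeping:

    import time
    from collections import deque

    class FpsMeter:
        """Sketch of sliding-window FPS reporting like the stats lines above."""

        def __init__(self, max_window: float = 300.0):
            self.max_window = max_window
            self.samples = deque()  # (timestamp, total_frames) pairs

        def record(self, total_frames: int) -> dict:
            now = time.monotonic()
            self.samples.append((now, total_frames))
            while now - self.samples[0][0] > self.max_window:
                self.samples.popleft()  # drop points older than the widest window
            fps = {}
            for window in (10, 60, 300):
                pts = [(t, f) for t, f in self.samples if now - t <= window]
                if len(pts) >= 2:
                    (t0, f0), (t1, f1) = pts[0], pts[-1]
                    fps[window] = (f1 - f0) / max(t1 - t0, 1e-9)
            return fps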
[2025-01-13 17:43:20,312][11533] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000409_1675264.pth [2025-01-13 17:43:25,157][01267] Fps is (10 sec: 4096.2, 60 sec: 3822.9, 300 sec: 3363.1). Total num frames: 2052096. Throughput: 0: 992.0. Samples: 79072. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:43:25,159][01267] Avg episode reward: [(0, '20.090')] [2025-01-13 17:43:26,942][11548] Updated weights for policy 0, policy_version 503 (0.0013) [2025-01-13 17:43:30,157][01267] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3358.8). Total num frames: 2068480. Throughput: 0: 942.0. Samples: 83606. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:43:30,161][01267] Avg episode reward: [(0, '18.753')] [2025-01-13 17:43:35,157][01267] Fps is (10 sec: 3686.2, 60 sec: 3891.2, 300 sec: 3393.8). Total num frames: 2088960. Throughput: 0: 962.8. Samples: 89772. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:43:35,160][01267] Avg episode reward: [(0, '18.832')] [2025-01-13 17:43:36,939][11548] Updated weights for policy 0, policy_version 513 (0.0015) [2025-01-13 17:43:40,157][01267] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3463.0). Total num frames: 2113536. Throughput: 0: 991.4. Samples: 93294. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:43:40,159][01267] Avg episode reward: [(0, '20.078')] [2025-01-13 17:43:45,157][01267] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3454.9). Total num frames: 2129920. Throughput: 0: 975.7. Samples: 99090. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:43:45,162][01267] Avg episode reward: [(0, '20.833')] [2025-01-13 17:43:48,565][11548] Updated weights for policy 0, policy_version 523 (0.0029) [2025-01-13 17:43:50,157][01267] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3447.5). Total num frames: 2146304. Throughput: 0: 933.5. Samples: 103890. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:43:50,162][01267] Avg episode reward: [(0, '21.182')] [2025-01-13 17:43:50,171][11533] Saving new best policy, reward=21.182! [2025-01-13 17:43:55,157][01267] Fps is (10 sec: 4096.2, 60 sec: 3959.5, 300 sec: 3506.2). Total num frames: 2170880. Throughput: 0: 962.0. Samples: 107416. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:43:55,158][01267] Avg episode reward: [(0, '23.183')] [2025-01-13 17:43:55,164][11533] Saving new best policy, reward=23.183! [2025-01-13 17:43:57,414][11548] Updated weights for policy 0, policy_version 533 (0.0018) [2025-01-13 17:44:00,157][01267] Fps is (10 sec: 4505.4, 60 sec: 3891.2, 300 sec: 3528.9). Total num frames: 2191360. Throughput: 0: 1008.0. Samples: 114370. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:44:00,160][01267] Avg episode reward: [(0, '23.753')] [2025-01-13 17:44:00,173][11533] Saving new best policy, reward=23.753! [2025-01-13 17:44:05,157][01267] Fps is (10 sec: 3686.2, 60 sec: 3823.0, 300 sec: 3519.5). Total num frames: 2207744. Throughput: 0: 955.5. Samples: 118852. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:44:05,160][01267] Avg episode reward: [(0, '22.293')] [2025-01-13 17:44:09,125][11548] Updated weights for policy 0, policy_version 543 (0.0027) [2025-01-13 17:44:10,157][01267] Fps is (10 sec: 3276.9, 60 sec: 3891.2, 300 sec: 3510.9). Total num frames: 2224128. Throughput: 0: 940.6. Samples: 121400. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:44:10,159][01267] Avg episode reward: [(0, '21.684')] [2025-01-13 17:44:15,157][01267] Fps is (10 sec: 4096.2, 60 sec: 3959.5, 300 sec: 3559.3). Total num frames: 2248704. Throughput: 0: 992.2. Samples: 128256. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2025-01-13 17:44:15,161][01267] Avg episode reward: [(0, '20.232')] [2025-01-13 17:44:18,700][11548] Updated weights for policy 0, policy_version 553 (0.0018) [2025-01-13 17:44:20,157][01267] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3577.2). Total num frames: 2269184. Throughput: 0: 989.2. Samples: 134286. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:44:20,167][01267] Avg episode reward: [(0, '19.326')] [2025-01-13 17:44:25,157][01267] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3541.1). Total num frames: 2281472. Throughput: 0: 960.8. Samples: 136528. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:44:25,159][01267] Avg episode reward: [(0, '19.456')] [2025-01-13 17:44:29,554][11548] Updated weights for policy 0, policy_version 563 (0.0016) [2025-01-13 17:44:30,160][01267] Fps is (10 sec: 3685.2, 60 sec: 3959.3, 300 sec: 3583.9). Total num frames: 2306048. Throughput: 0: 967.9. Samples: 142648. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:44:30,162][01267] Avg episode reward: [(0, '19.869')] [2025-01-13 17:44:35,157][01267] Fps is (10 sec: 4915.2, 60 sec: 4027.8, 300 sec: 3624.4). Total num frames: 2330624. Throughput: 0: 1019.9. Samples: 149784. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2025-01-13 17:44:35,159][01267] Avg episode reward: [(0, '18.858')] [2025-01-13 17:44:40,157][01267] Fps is (10 sec: 3687.4, 60 sec: 3822.9, 300 sec: 3590.0). Total num frames: 2342912. Throughput: 0: 991.1. Samples: 152016. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:44:40,164][01267] Avg episode reward: [(0, '18.646')] [2025-01-13 17:44:40,368][11548] Updated weights for policy 0, policy_version 573 (0.0026) [2025-01-13 17:44:45,157][01267] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3604.5). Total num frames: 2363392. Throughput: 0: 944.7. Samples: 156882. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:44:45,159][01267] Avg episode reward: [(0, '19.123')] [2025-01-13 17:44:49,863][11548] Updated weights for policy 0, policy_version 583 (0.0031) [2025-01-13 17:44:50,157][01267] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3640.9). Total num frames: 2387968. Throughput: 0: 1004.8. Samples: 164066. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:44:50,163][01267] Avg episode reward: [(0, '18.326')] [2025-01-13 17:44:55,158][01267] Fps is (10 sec: 4505.1, 60 sec: 3959.4, 300 sec: 3653.2). Total num frames: 2408448. Throughput: 0: 1027.2. Samples: 167624. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2025-01-13 17:44:55,171][01267] Avg episode reward: [(0, '19.458')] [2025-01-13 17:45:00,157][01267] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3643.3). Total num frames: 2424832. Throughput: 0: 979.1. Samples: 172314. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:45:00,164][01267] Avg episode reward: [(0, '20.796')] [2025-01-13 17:45:01,290][11548] Updated weights for policy 0, policy_version 593 (0.0025) [2025-01-13 17:45:05,157][01267] Fps is (10 sec: 3686.8, 60 sec: 3959.5, 300 sec: 3654.9). Total num frames: 2445312. Throughput: 0: 979.5. Samples: 178362. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:45:05,160][01267] Avg episode reward: [(0, '21.049')] [2025-01-13 17:45:09,873][11548] Updated weights for policy 0, policy_version 603 (0.0019) [2025-01-13 17:45:10,157][01267] Fps is (10 sec: 4505.5, 60 sec: 4096.0, 300 sec: 3686.4). Total num frames: 2469888. Throughput: 0: 1010.2. Samples: 181988. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2025-01-13 17:45:10,166][01267] Avg episode reward: [(0, '20.795')] [2025-01-13 17:45:15,157][01267] Fps is (10 sec: 4095.8, 60 sec: 3959.4, 300 sec: 3676.4). Total num frames: 2486272. Throughput: 0: 1004.1. Samples: 187828. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2025-01-13 17:45:15,164][01267] Avg episode reward: [(0, '22.019')] [2025-01-13 17:45:20,157][01267] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3666.9). Total num frames: 2502656. Throughput: 0: 955.7. Samples: 192790. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:45:20,160][01267] Avg episode reward: [(0, '20.754')] [2025-01-13 17:45:20,170][11533] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000611_2502656.pth... [2025-01-13 17:45:20,325][11533] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000423_1732608.pth [2025-01-13 17:45:21,452][11548] Updated weights for policy 0, policy_version 613 (0.0023) [2025-01-13 17:45:25,157][01267] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 3695.9). Total num frames: 2527232. Throughput: 0: 983.0. Samples: 196250. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:45:25,160][01267] Avg episode reward: [(0, '19.516')] [2025-01-13 17:45:30,157][01267] Fps is (10 sec: 4505.6, 60 sec: 4027.9, 300 sec: 3705.0). Total num frames: 2547712. Throughput: 0: 1031.2. Samples: 203288. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:45:30,160][01267] Avg episode reward: [(0, '20.630')] [2025-01-13 17:45:31,183][11548] Updated weights for policy 0, policy_version 623 (0.0019) [2025-01-13 17:45:35,159][01267] Fps is (10 sec: 3685.8, 60 sec: 3891.1, 300 sec: 3695.5). Total num frames: 2564096. Throughput: 0: 973.9. Samples: 207894. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:45:35,164][01267] Avg episode reward: [(0, '21.076')] [2025-01-13 17:45:40,157][01267] Fps is (10 sec: 3686.5, 60 sec: 4027.7, 300 sec: 3704.2). Total num frames: 2584576. Throughput: 0: 957.1. Samples: 210694. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2025-01-13 17:45:40,163][01267] Avg episode reward: [(0, '19.631')] [2025-01-13 17:45:41,545][11548] Updated weights for policy 0, policy_version 633 (0.0015) [2025-01-13 17:45:45,157][01267] Fps is (10 sec: 4506.5, 60 sec: 4096.0, 300 sec: 3730.0). Total num frames: 2609152. Throughput: 0: 1006.7. Samples: 217614. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) [2025-01-13 17:45:45,162][01267] Avg episode reward: [(0, '20.660')] [2025-01-13 17:45:50,157][01267] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3720.5). Total num frames: 2625536. Throughput: 0: 1000.9. Samples: 223402. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:45:50,167][01267] Avg episode reward: [(0, '18.955')] [2025-01-13 17:45:52,688][11548] Updated weights for policy 0, policy_version 643 (0.0017) [2025-01-13 17:45:55,157][01267] Fps is (10 sec: 3276.9, 60 sec: 3891.3, 300 sec: 3711.5). Total num frames: 2641920. Throughput: 0: 970.9. Samples: 225676. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:45:55,165][01267] Avg episode reward: [(0, '18.471')] [2025-01-13 17:46:00,157][01267] Fps is (10 sec: 4095.7, 60 sec: 4027.7, 300 sec: 3735.6). Total num frames: 2666496. Throughput: 0: 982.5. Samples: 232040. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:46:00,159][01267] Avg episode reward: [(0, '18.346')] [2025-01-13 17:46:01,979][11548] Updated weights for policy 0, policy_version 653 (0.0020) [2025-01-13 17:46:05,159][01267] Fps is (10 sec: 4504.4, 60 sec: 4027.5, 300 sec: 3742.6). Total num frames: 2686976. Throughput: 0: 1025.4. Samples: 238936. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:46:05,162][01267] Avg episode reward: [(0, '20.318')] [2025-01-13 17:46:10,157][01267] Fps is (10 sec: 3686.6, 60 sec: 3891.2, 300 sec: 3733.7). Total num frames: 2703360. Throughput: 0: 998.0. Samples: 241158. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2025-01-13 17:46:10,164][01267] Avg episode reward: [(0, '20.024')] [2025-01-13 17:46:13,671][11548] Updated weights for policy 0, policy_version 663 (0.0030) [2025-01-13 17:46:15,157][01267] Fps is (10 sec: 3277.7, 60 sec: 3891.2, 300 sec: 3725.1). Total num frames: 2719744. Throughput: 0: 952.7. Samples: 246158. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:46:15,165][01267] Avg episode reward: [(0, '20.800')] [2025-01-13 17:46:20,157][01267] Fps is (10 sec: 4096.0, 60 sec: 4027.8, 300 sec: 3747.1). Total num frames: 2744320. Throughput: 0: 1006.3. Samples: 253174. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:46:20,165][01267] Avg episode reward: [(0, '20.921')] [2025-01-13 17:46:22,612][11548] Updated weights for policy 0, policy_version 673 (0.0022) [2025-01-13 17:46:25,157][01267] Fps is (10 sec: 4505.7, 60 sec: 3959.5, 300 sec: 3753.4). Total num frames: 2764800. Throughput: 0: 1020.4. Samples: 256612. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2025-01-13 17:46:25,162][01267] Avg episode reward: [(0, '22.008')] [2025-01-13 17:46:30,157][01267] Fps is (10 sec: 3276.8, 60 sec: 3823.0, 300 sec: 3730.3). Total num frames: 2777088. Throughput: 0: 966.6. Samples: 261112. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) [2025-01-13 17:46:30,165][01267] Avg episode reward: [(0, '21.383')] [2025-01-13 17:46:33,739][11548] Updated weights for policy 0, policy_version 683 (0.0024) [2025-01-13 17:46:35,157][01267] Fps is (10 sec: 3686.4, 60 sec: 3959.6, 300 sec: 3751.1). Total num frames: 2801664. Throughput: 0: 983.1. Samples: 267642. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:46:35,166][01267] Avg episode reward: [(0, '21.518')] [2025-01-13 17:46:40,157][01267] Fps is (10 sec: 4915.1, 60 sec: 4027.7, 300 sec: 3771.2). Total num frames: 2826240. Throughput: 0: 1011.3. Samples: 271184. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:46:40,163][01267] Avg episode reward: [(0, '22.111')] [2025-01-13 17:46:44,006][11548] Updated weights for policy 0, policy_version 693 (0.0016) [2025-01-13 17:46:45,157][01267] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3748.9). Total num frames: 2838528. Throughput: 0: 992.0. Samples: 276678. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:46:45,159][01267] Avg episode reward: [(0, '23.036')] [2025-01-13 17:46:50,157][01267] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 2859008. Throughput: 0: 951.5. Samples: 281750. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:46:50,166][01267] Avg episode reward: [(0, '22.431')] [2025-01-13 17:46:54,024][11548] Updated weights for policy 0, policy_version 703 (0.0014) [2025-01-13 17:46:55,157][01267] Fps is (10 sec: 4505.3, 60 sec: 4027.7, 300 sec: 3901.6). Total num frames: 2883584. Throughput: 0: 980.5. Samples: 285280. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:46:55,162][01267] Avg episode reward: [(0, '23.909')] [2025-01-13 17:46:55,167][11533] Saving new best policy, reward=23.909! [2025-01-13 17:47:00,157][01267] Fps is (10 sec: 4095.8, 60 sec: 3891.2, 300 sec: 3915.6). Total num frames: 2899968. Throughput: 0: 1020.9. Samples: 292098. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:47:00,166][01267] Avg episode reward: [(0, '25.315')] [2025-01-13 17:47:00,199][11533] Saving new best policy, reward=25.315! [2025-01-13 17:47:05,157][01267] Fps is (10 sec: 3277.0, 60 sec: 3823.1, 300 sec: 3901.7). Total num frames: 2916352. Throughput: 0: 962.0. Samples: 296462. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2025-01-13 17:47:05,160][01267] Avg episode reward: [(0, '25.177')] [2025-01-13 17:47:05,680][11548] Updated weights for policy 0, policy_version 713 (0.0044) [2025-01-13 17:47:10,157][01267] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 2936832. Throughput: 0: 949.4. Samples: 299334. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:47:10,160][01267] Avg episode reward: [(0, '26.711')] [2025-01-13 17:47:10,173][11533] Saving new best policy, reward=26.711! [2025-01-13 17:47:14,900][11548] Updated weights for policy 0, policy_version 723 (0.0014) [2025-01-13 17:47:15,157][01267] Fps is (10 sec: 4505.3, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 2961408. Throughput: 0: 1003.2. Samples: 306256. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:47:15,160][01267] Avg episode reward: [(0, '25.642')] [2025-01-13 17:47:20,157][01267] Fps is (10 sec: 4096.3, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 2977792. Throughput: 0: 977.0. Samples: 311606. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:47:20,162][01267] Avg episode reward: [(0, '25.442')] [2025-01-13 17:47:20,174][11533] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000727_2977792.pth... [2025-01-13 17:47:20,341][11533] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000497_2035712.pth [2025-01-13 17:47:25,157][01267] Fps is (10 sec: 3277.0, 60 sec: 3822.9, 300 sec: 3915.5). Total num frames: 2994176. Throughput: 0: 946.0. Samples: 313752. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:47:25,162][01267] Avg episode reward: [(0, '25.507')] [2025-01-13 17:47:26,592][11548] Updated weights for policy 0, policy_version 733 (0.0018) [2025-01-13 17:47:30,157][01267] Fps is (10 sec: 4095.8, 60 sec: 4027.7, 300 sec: 3943.3). Total num frames: 3018752. Throughput: 0: 967.9. Samples: 320236. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) [2025-01-13 17:47:30,165][01267] Avg episode reward: [(0, '23.990')] [2025-01-13 17:47:35,157][01267] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3039232. Throughput: 0: 1011.6. Samples: 327274. 
Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:47:35,159][01267] Avg episode reward: [(0, '23.425')] [2025-01-13 17:47:35,820][11548] Updated weights for policy 0, policy_version 743 (0.0015) [2025-01-13 17:47:40,157][01267] Fps is (10 sec: 3686.6, 60 sec: 3822.9, 300 sec: 3915.5). Total num frames: 3055616. Throughput: 0: 984.3. Samples: 329574. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:47:40,165][01267] Avg episode reward: [(0, '23.857')] [2025-01-13 17:47:45,157][01267] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3929.4). Total num frames: 3076096. Throughput: 0: 955.1. Samples: 335078. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:47:45,159][01267] Avg episode reward: [(0, '23.459')] [2025-01-13 17:47:46,423][11548] Updated weights for policy 0, policy_version 753 (0.0014) [2025-01-13 17:47:50,157][01267] Fps is (10 sec: 4505.3, 60 sec: 4027.7, 300 sec: 3957.1). Total num frames: 3100672. Throughput: 0: 1012.8. Samples: 342038. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:47:50,165][01267] Avg episode reward: [(0, '23.167')] [2025-01-13 17:47:55,157][01267] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3929.4). Total num frames: 3117056. Throughput: 0: 1022.7. Samples: 345356. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) [2025-01-13 17:47:55,165][01267] Avg episode reward: [(0, '23.077')] [2025-01-13 17:47:56,956][11548] Updated weights for policy 0, policy_version 763 (0.0021) [2025-01-13 17:48:00,157][01267] Fps is (10 sec: 3277.0, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 3133440. Throughput: 0: 972.1. Samples: 349998. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:48:00,159][01267] Avg episode reward: [(0, '25.013')] [2025-01-13 17:48:05,157][01267] Fps is (10 sec: 4095.9, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 3158016. Throughput: 0: 1000.3. Samples: 356618. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:48:05,164][01267] Avg episode reward: [(0, '24.983')] [2025-01-13 17:48:06,353][11548] Updated weights for policy 0, policy_version 773 (0.0021) [2025-01-13 17:48:10,157][01267] Fps is (10 sec: 4915.2, 60 sec: 4096.0, 300 sec: 3971.0). Total num frames: 3182592. Throughput: 0: 1034.6. Samples: 360310. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:48:10,162][01267] Avg episode reward: [(0, '24.683')] [2025-01-13 17:48:15,156][01267] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3198976. Throughput: 0: 1017.8. Samples: 366036. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) [2025-01-13 17:48:15,159][01267] Avg episode reward: [(0, '25.259')] [2025-01-13 17:48:17,448][11548] Updated weights for policy 0, policy_version 783 (0.0023) [2025-01-13 17:48:20,157][01267] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 3219456. Throughput: 0: 981.6. Samples: 371448. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) [2025-01-13 17:48:20,160][01267] Avg episode reward: [(0, '23.455')] [2025-01-13 17:48:25,159][01267] Fps is (10 sec: 4095.2, 60 sec: 4095.9, 300 sec: 3971.0). Total num frames: 3239936. Throughput: 0: 1013.1. Samples: 375166. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) [2025-01-13 17:48:25,165][01267] Avg episode reward: [(0, '23.433')] [2025-01-13 17:48:26,122][11548] Updated weights for policy 0, policy_version 793 (0.0013) [2025-01-13 17:48:30,157][01267] Fps is (10 sec: 4095.8, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 3260416. Throughput: 0: 1045.7. Samples: 382134. 
[2025-01-13 17:48:30,165][01267] Avg episode reward: [(0, '22.989')]
[2025-01-13 17:48:35,157][01267] Fps is (10 sec: 3687.1, 60 sec: 3959.5, 300 sec: 3943.3). Total num frames: 3276800. Throughput: 0: 996.5. Samples: 386882. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:48:35,159][01267] Avg episode reward: [(0, '22.715')]
[2025-01-13 17:48:37,239][11548] Updated weights for policy 0, policy_version 803 (0.0027)
[2025-01-13 17:48:40,157][01267] Fps is (10 sec: 4096.2, 60 sec: 4096.0, 300 sec: 3971.0). Total num frames: 3301376. Throughput: 0: 989.1. Samples: 389866. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:48:40,159][01267] Avg episode reward: [(0, '24.531')]
[2025-01-13 17:48:45,157][01267] Fps is (10 sec: 4915.2, 60 sec: 4164.3, 300 sec: 3998.8). Total num frames: 3325952. Throughput: 0: 1044.8. Samples: 397014. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:48:45,161][01267] Avg episode reward: [(0, '25.448')]
[2025-01-13 17:48:46,020][11548] Updated weights for policy 0, policy_version 813 (0.0026)
[2025-01-13 17:48:50,157][01267] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 3338240. Throughput: 0: 1020.5. Samples: 402540. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:48:50,160][01267] Avg episode reward: [(0, '24.616')]
[2025-01-13 17:48:55,157][01267] Fps is (10 sec: 3276.7, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 3358720. Throughput: 0: 989.2. Samples: 404824. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:48:55,160][01267] Avg episode reward: [(0, '24.727')]
[2025-01-13 17:48:57,219][11548] Updated weights for policy 0, policy_version 823 (0.0022)
[2025-01-13 17:49:00,157][01267] Fps is (10 sec: 4505.6, 60 sec: 4164.3, 300 sec: 3984.9). Total num frames: 3383296. Throughput: 0: 1015.9. Samples: 411752. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:49:00,159][01267] Avg episode reward: [(0, '24.091')]
[2025-01-13 17:49:05,158][01267] Fps is (10 sec: 4505.0, 60 sec: 4095.9, 300 sec: 3998.8). Total num frames: 3403776. Throughput: 0: 1046.7. Samples: 418550. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:49:05,162][01267] Avg episode reward: [(0, '24.588')]
[2025-01-13 17:49:06,690][11548] Updated weights for policy 0, policy_version 833 (0.0022)
[2025-01-13 17:49:10,157][01267] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 3420160. Throughput: 0: 1016.9. Samples: 420924. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:49:10,159][01267] Avg episode reward: [(0, '24.487')]
[2025-01-13 17:49:15,157][01267] Fps is (10 sec: 4096.7, 60 sec: 4096.0, 300 sec: 3984.9). Total num frames: 3444736. Throughput: 0: 988.3. Samples: 426606. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:49:15,164][01267] Avg episode reward: [(0, '25.670')]
[2025-01-13 17:49:17,033][11548] Updated weights for policy 0, policy_version 843 (0.0018)
[2025-01-13 17:49:20,157][01267] Fps is (10 sec: 4505.4, 60 sec: 4096.0, 300 sec: 4012.7). Total num frames: 3465216. Throughput: 0: 1040.7. Samples: 433712. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
[2025-01-13 17:49:20,160][01267] Avg episode reward: [(0, '24.775')]
[2025-01-13 17:49:20,229][11533] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000847_3469312.pth...
[2025-01-13 17:49:20,365][11533] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000611_2502656.pth
[2025-01-13 17:49:25,157][01267] Fps is (10 sec: 4096.0, 60 sec: 4096.1, 300 sec: 3998.9). Total num frames: 3485696. Throughput: 0: 1044.6. Samples: 436872. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
[2025-01-13 17:49:25,159][01267] Avg episode reward: [(0, '24.317')]
[2025-01-13 17:49:27,583][11548] Updated weights for policy 0, policy_version 853 (0.0026)
[2025-01-13 17:49:30,157][01267] Fps is (10 sec: 3686.3, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 3502080. Throughput: 0: 988.9. Samples: 441514. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:49:30,159][01267] Avg episode reward: [(0, '23.195')]
[2025-01-13 17:49:35,157][01267] Fps is (10 sec: 4096.0, 60 sec: 4164.3, 300 sec: 4012.7). Total num frames: 3526656. Throughput: 0: 1021.5. Samples: 448506. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:49:35,165][01267] Avg episode reward: [(0, '23.580')]
[2025-01-13 17:49:36,547][11548] Updated weights for policy 0, policy_version 863 (0.0015)
[2025-01-13 17:49:40,157][01267] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 4012.7). Total num frames: 3547136. Throughput: 0: 1054.2. Samples: 452262. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:49:40,162][01267] Avg episode reward: [(0, '21.638')]
[2025-01-13 17:49:45,157][01267] Fps is (10 sec: 3686.3, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 3563520. Throughput: 0: 1020.3. Samples: 457664. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:49:45,165][01267] Avg episode reward: [(0, '23.893')]
[2025-01-13 17:49:47,764][11548] Updated weights for policy 0, policy_version 873 (0.0026)
[2025-01-13 17:49:50,157][01267] Fps is (10 sec: 3686.5, 60 sec: 4096.0, 300 sec: 3984.9). Total num frames: 3584000. Throughput: 0: 997.1. Samples: 463420. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
[2025-01-13 17:49:50,163][01267] Avg episode reward: [(0, '23.350')]
[2025-01-13 17:49:55,157][01267] Fps is (10 sec: 4505.5, 60 sec: 4164.2, 300 sec: 4012.7). Total num frames: 3608576. Throughput: 0: 1023.8. Samples: 466994. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:49:55,167][01267] Avg episode reward: [(0, '25.326')]
[2025-01-13 17:49:56,278][11548] Updated weights for policy 0, policy_version 883 (0.0016)
[2025-01-13 17:50:00,157][01267] Fps is (10 sec: 4505.9, 60 sec: 4096.0, 300 sec: 4012.7). Total num frames: 3629056. Throughput: 0: 1045.3. Samples: 473646. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:50:00,164][01267] Avg episode reward: [(0, '23.925')]
[2025-01-13 17:50:05,157][01267] Fps is (10 sec: 3276.7, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 3641344. Throughput: 0: 985.6. Samples: 478064. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:50:05,160][01267] Avg episode reward: [(0, '23.639')]
[2025-01-13 17:50:07,938][11548] Updated weights for policy 0, policy_version 893 (0.0025)
[2025-01-13 17:50:10,157][01267] Fps is (10 sec: 3686.3, 60 sec: 4096.0, 300 sec: 3998.8). Total num frames: 3665920. Throughput: 0: 987.0. Samples: 481288. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:50:10,160][01267] Avg episode reward: [(0, '22.335')]
[2025-01-13 17:50:15,157][01267] Fps is (10 sec: 4915.3, 60 sec: 4096.0, 300 sec: 4026.6). Total num frames: 3690496. Throughput: 0: 1038.0. Samples: 488222. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
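Each periodic "Saving .../checkpoint_XXX.pth" entry above is paired with a "Removing ..." of the oldest regular checkpoint, i.e. a keep-last-N rotation on top of the separately retained best policy. A sketch under that assumption; the zero-padded checkpoint_{version}_{env_steps}.pth naming follows the log, but the rotation logic itself is illustrative rather than Sample Factory's exact code:

import glob
import os
import torch

def save_with_rotation(state: dict, ckpt_dir: str, version: int, env_steps: int, keep_last: int = 2) -> None:
    path = os.path.join(ckpt_dir, f"checkpoint_{version:09d}_{env_steps}.pth")
    torch.save(state, path)
    # Oldest-first listing works because the zero-padded names sort lexicographically.
    checkpoints = sorted(glob.glob(os.path.join(ckpt_dir, "checkpoint_*.pth")))
    for old in checkpoints[:-keep_last]:
        os.remove(old)
        print(f"Removing {old}")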
[2025-01-13 17:50:15,160][01267] Avg episode reward: [(0, '21.688')]
[2025-01-13 17:50:17,739][11548] Updated weights for policy 0, policy_version 903 (0.0020)
[2025-01-13 17:50:20,157][01267] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 3702784. Throughput: 0: 994.0. Samples: 493238. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:50:20,162][01267] Avg episode reward: [(0, '21.371')]
[2025-01-13 17:50:25,157][01267] Fps is (10 sec: 3277.0, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 3723264. Throughput: 0: 962.2. Samples: 495560. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:50:25,159][01267] Avg episode reward: [(0, '20.459')]
[2025-01-13 17:50:28,437][11548] Updated weights for policy 0, policy_version 913 (0.0032)
[2025-01-13 17:50:30,157][01267] Fps is (10 sec: 4095.8, 60 sec: 4027.7, 300 sec: 3998.8). Total num frames: 3743744. Throughput: 0: 993.1. Samples: 502354. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:50:30,162][01267] Avg episode reward: [(0, '22.503')]
[2025-01-13 17:50:35,157][01267] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3998.8). Total num frames: 3764224. Throughput: 0: 1004.9. Samples: 508638. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:50:35,160][01267] Avg episode reward: [(0, '23.547')]
[2025-01-13 17:50:39,546][11548] Updated weights for policy 0, policy_version 923 (0.0016)
[2025-01-13 17:50:40,157][01267] Fps is (10 sec: 3686.6, 60 sec: 3891.2, 300 sec: 3971.0). Total num frames: 3780608. Throughput: 0: 976.1. Samples: 510918. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:50:40,161][01267] Avg episode reward: [(0, '23.686')]
[2025-01-13 17:50:45,157][01267] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 3801088. Throughput: 0: 952.2. Samples: 516494. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:50:45,167][01267] Avg episode reward: [(0, '23.471')]
[2025-01-13 17:50:48,986][11548] Updated weights for policy 0, policy_version 933 (0.0019)
[2025-01-13 17:50:50,157][01267] Fps is (10 sec: 4505.5, 60 sec: 4027.8, 300 sec: 4012.7). Total num frames: 3825664. Throughput: 0: 1003.5. Samples: 523222. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
[2025-01-13 17:50:50,160][01267] Avg episode reward: [(0, '22.640')]
[2025-01-13 17:50:55,157][01267] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3984.9). Total num frames: 3842048. Throughput: 0: 996.4. Samples: 526126. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
[2025-01-13 17:50:55,160][01267] Avg episode reward: [(0, '21.734')]
[2025-01-13 17:51:00,157][01267] Fps is (10 sec: 3276.7, 60 sec: 3822.9, 300 sec: 3971.1). Total num frames: 3858432. Throughput: 0: 940.9. Samples: 530562. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:51:00,160][01267] Avg episode reward: [(0, '21.927')]
[2025-01-13 17:51:00,585][11548] Updated weights for policy 0, policy_version 943 (0.0022)
[2025-01-13 17:51:05,157][01267] Fps is (10 sec: 4096.0, 60 sec: 4027.8, 300 sec: 3998.8). Total num frames: 3883008. Throughput: 0: 981.7. Samples: 537414. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:51:05,160][01267] Avg episode reward: [(0, '20.585')]
[2025-01-13 17:51:09,799][11548] Updated weights for policy 0, policy_version 953 (0.0019)
[2025-01-13 17:51:10,157][01267] Fps is (10 sec: 4505.8, 60 sec: 3959.5, 300 sec: 4012.7). Total num frames: 3903488. Throughput: 0: 1008.3. Samples: 540932. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
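The three numbers in each "Fps is (10 sec: ..., 60 sec: ..., 300 sec: ...)" line are the same throughput averaged over three sliding windows, which is why the 10-second figure swings far more than the 300-second one. A minimal sketch of such a meter, assuming (timestamp, total frame count) samples; only the three window lengths come from the log, the rest is illustrative:

import time
from collections import deque

class FpsMeter:
    def __init__(self, windows=(10, 60, 300)):
        self.windows = windows
        self.samples = deque()  # (timestamp, total_env_frames)

    def record(self, total_frames: int) -> None:
        now = time.time()
        self.samples.append((now, total_frames))
        # Keep only as much history as the largest window needs.
        while self.samples and now - self.samples[0][0] > max(self.windows):
            self.samples.popleft()

    def fps(self, window: int) -> float:
        now, latest = self.samples[-1]
        # Earliest sample still inside this window.
        past = next(((t, f) for t, f in self.samples if now - t <= window), self.samples[-1])
        dt = now - past[0]
        return (latest - past[1]) / dt if dt > 0 else 0.0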
[2025-01-13 17:51:10,164][01267] Avg episode reward: [(0, '21.018')]
[2025-01-13 17:51:15,157][01267] Fps is (10 sec: 3276.6, 60 sec: 3754.7, 300 sec: 3971.0). Total num frames: 3915776. Throughput: 0: 970.8. Samples: 546038. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-01-13 17:51:15,160][01267] Avg episode reward: [(0, '21.164')]
[2025-01-13 17:51:20,157][01267] Fps is (10 sec: 3276.6, 60 sec: 3891.2, 300 sec: 3971.0). Total num frames: 3936256. Throughput: 0: 953.8. Samples: 551562. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:51:20,167][01267] Avg episode reward: [(0, '23.342')]
[2025-01-13 17:51:20,274][11533] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000962_3940352.pth...
[2025-01-13 17:51:20,427][11533] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000727_2977792.pth
[2025-01-13 17:51:21,323][11548] Updated weights for policy 0, policy_version 963 (0.0024)
[2025-01-13 17:51:25,157][01267] Fps is (10 sec: 4505.8, 60 sec: 3959.5, 300 sec: 4012.7). Total num frames: 3960832. Throughput: 0: 977.3. Samples: 554896. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-01-13 17:51:25,159][01267] Avg episode reward: [(0, '23.940')]
[2025-01-13 17:51:30,157][01267] Fps is (10 sec: 4096.3, 60 sec: 3891.2, 300 sec: 3984.9). Total num frames: 3977216. Throughput: 0: 996.8. Samples: 561352. Policy #0 lag: (min: 0.0, avg: 1.0, max: 2.0)
[2025-01-13 17:51:30,161][01267] Avg episode reward: [(0, '23.367')]
[2025-01-13 17:51:31,746][11548] Updated weights for policy 0, policy_version 973 (0.0018)
[2025-01-13 17:51:35,157][01267] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3957.2). Total num frames: 3993600. Throughput: 0: 948.4. Samples: 565898. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2025-01-13 17:51:35,163][01267] Avg episode reward: [(0, '25.255')]
[2025-01-13 17:51:37,329][11533] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2025-01-13 17:51:37,335][11533] Stopping Batcher_0...
[2025-01-13 17:51:37,335][11533] Loop batcher_evt_loop terminating...
[2025-01-13 17:51:37,335][01267] Component Batcher_0 stopped!
[2025-01-13 17:51:37,397][11548] Weights refcount: 2 0
[2025-01-13 17:51:37,408][01267] Component InferenceWorker_p0-w0 stopped!
[2025-01-13 17:51:37,415][11548] Stopping InferenceWorker_p0-w0...
[2025-01-13 17:51:37,416][11548] Loop inference_proc0-0_evt_loop terminating...
[2025-01-13 17:51:37,454][11533] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000847_3469312.pth
[2025-01-13 17:51:37,473][11533] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2025-01-13 17:51:37,677][01267] Component LearnerWorker_p0 stopped!
[2025-01-13 17:51:37,681][11533] Stopping LearnerWorker_p0...
[2025-01-13 17:51:37,682][11533] Loop learner_proc0_evt_loop terminating...
[2025-01-13 17:51:37,730][11558] Stopping RolloutWorker_w9...
[2025-01-13 17:51:37,730][01267] Component RolloutWorker_w9 stopped!
[2025-01-13 17:51:37,732][11558] Loop rollout_proc9_evt_loop terminating...
[2025-01-13 17:51:37,742][11554] Stopping RolloutWorker_w5...
[2025-01-13 17:51:37,742][01267] Component RolloutWorker_w5 stopped!
[2025-01-13 17:51:37,743][11554] Loop rollout_proc5_evt_loop terminating...
[2025-01-13 17:51:37,765][11550] Stopping RolloutWorker_w1...
[2025-01-13 17:51:37,765][11550] Loop rollout_proc1_evt_loop terminating...
[2025-01-13 17:51:37,765][01267] Component RolloutWorker_w1 stopped!
[2025-01-13 17:51:37,771][11552] Stopping RolloutWorker_w3...
[2025-01-13 17:51:37,771][11552] Loop rollout_proc3_evt_loop terminating...
[2025-01-13 17:51:37,771][01267] Component RolloutWorker_w3 stopped!
[2025-01-13 17:51:37,776][11556] Stopping RolloutWorker_w7...
[2025-01-13 17:51:37,776][01267] Component RolloutWorker_w7 stopped!
[2025-01-13 17:51:37,779][11556] Loop rollout_proc7_evt_loop terminating...
[2025-01-13 17:51:37,908][01267] Component RolloutWorker_w4 stopped!
[2025-01-13 17:51:37,915][11553] Stopping RolloutWorker_w4...
[2025-01-13 17:51:37,916][11553] Loop rollout_proc4_evt_loop terminating...
[2025-01-13 17:51:37,927][11551] Stopping RolloutWorker_w2...
[2025-01-13 17:51:37,922][01267] Component RolloutWorker_w2 stopped!
[2025-01-13 17:51:37,933][11549] Stopping RolloutWorker_w0...
[2025-01-13 17:51:37,933][01267] Component RolloutWorker_w0 stopped!
[2025-01-13 17:51:37,927][11551] Loop rollout_proc2_evt_loop terminating...
[2025-01-13 17:51:37,942][11549] Loop rollout_proc0_evt_loop terminating...
[2025-01-13 17:51:37,948][11557] Stopping RolloutWorker_w8...
[2025-01-13 17:51:37,948][01267] Component RolloutWorker_w8 stopped!
[2025-01-13 17:51:37,967][11555] Stopping RolloutWorker_w6...
[2025-01-13 17:51:37,953][11557] Loop rollout_proc8_evt_loop terminating...
[2025-01-13 17:51:37,966][01267] Component RolloutWorker_w6 stopped!
[2025-01-13 17:51:37,968][11555] Loop rollout_proc6_evt_loop terminating...
[2025-01-13 17:51:37,968][01267] Waiting for process learner_proc0 to stop...
[2025-01-13 17:51:39,427][01267] Waiting for process inference_proc0-0 to join...
[2025-01-13 17:51:39,435][01267] Waiting for process rollout_proc0 to join...
[2025-01-13 17:51:42,082][01267] Waiting for process rollout_proc1 to join...
[2025-01-13 17:51:42,088][01267] Waiting for process rollout_proc2 to join...
[2025-01-13 17:51:42,092][01267] Waiting for process rollout_proc3 to join...
[2025-01-13 17:51:42,095][01267] Waiting for process rollout_proc4 to join...
[2025-01-13 17:51:42,100][01267] Waiting for process rollout_proc5 to join...
[2025-01-13 17:51:42,103][01267] Waiting for process rollout_proc6 to join...
[2025-01-13 17:51:42,107][01267] Waiting for process rollout_proc7 to join...
[2025-01-13 17:51:42,111][01267] Waiting for process rollout_proc8 to join...
[2025-01-13 17:51:42,114][01267] Waiting for process rollout_proc9 to join...
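The shutdown sequence above follows a stop-then-join pattern: every component is asked to stop, its event loop terminates, and only then does the runner join the worker processes one by one. A generic multiprocessing sketch of that pattern (plain stdlib multiprocessing, not Sample Factory's signal-slot event loops):

import multiprocessing as mp
import time

def worker(stop_event, name: str) -> None:
    while not stop_event.is_set():
        time.sleep(0.01)  # idle; a real worker would process rollouts here
    print(f"Loop {name}_evt_loop terminating...")

if __name__ == "__main__":
    stop = mp.Event()
    procs = [mp.Process(target=worker, args=(stop, f"rollout_proc{i}")) for i in range(10)]
    for p in procs:
        p.start()
    stop.set()  # ask every component to stop
    for i, p in enumerate(procs):
        print(f"Waiting for process rollout_proc{i} to join...")
        p.join()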
[2025-01-13 17:51:42,119][01267] Batcher 0 profile tree view:
batching: 17.7517, releasing_batches: 0.0150
[2025-01-13 17:51:42,120][01267] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0145
  wait_policy_total: 304.3302
update_model: 4.0383
  weight_update: 0.0029
one_step: 0.0186
  handle_policy_step: 261.3143
    deserialize: 7.5053, stack: 1.3833, obs_to_device_normalize: 55.1510, forward: 128.8399, send_messages: 15.4539
    prepare_outputs: 40.1624
      to_cpu: 24.2075
[2025-01-13 17:51:42,123][01267] Learner 0 profile tree view:
misc: 0.0034, prepare_batch: 8.1905
train: 45.0033
  epoch_init: 0.0032, minibatch_init: 0.0037, losses_postprocess: 0.3632, kl_divergence: 0.3887, after_optimizer: 2.0846
  calculate_losses: 16.4050
    losses_init: 0.0021, forward_head: 0.9293, bptt_initial: 10.7365, tail: 0.6831, advantages_returns: 0.2283, losses: 2.5044
    bptt: 1.1320
      bptt_forward_core: 1.0359
  update: 25.2725
    clip: 0.4927
[2025-01-13 17:51:42,125][01267] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.1856, enqueue_policy_requests: 74.7049, env_step: 445.8455, overhead: 7.2039, complete_rollouts: 3.6483
save_policy_outputs: 10.5929
  split_output_tensors: 4.2501
[2025-01-13 17:51:42,127][01267] RolloutWorker_w9 profile tree view:
wait_for_trajectories: 0.1259, enqueue_policy_requests: 73.8008, env_step: 444.0700, overhead: 7.2920, complete_rollouts: 4.6722
save_policy_outputs: 10.2649
  split_output_tensors: 4.0569
[2025-01-13 17:51:42,129][01267] Loop Runner_EvtLoop terminating...
[2025-01-13 17:51:42,131][01267] Runner profile tree view:
main_loop: 619.4441
[2025-01-13 17:51:42,133][01267] Collected {0: 4005888}, FPS: 3669.9
[2025-01-13 18:05:56,153][01267] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2025-01-13 18:05:56,162][01267] Overriding arg 'num_workers' with value 1 passed from command line
[2025-01-13 18:05:56,165][01267] Adding new argument 'no_render'=True that is not in the saved config file!
[2025-01-13 18:05:56,168][01267] Adding new argument 'save_video'=True that is not in the saved config file!
[2025-01-13 18:05:56,170][01267] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2025-01-13 18:05:56,173][01267] Adding new argument 'video_name'=None that is not in the saved config file!
[2025-01-13 18:05:56,174][01267] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2025-01-13 18:05:56,176][01267] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2025-01-13 18:05:56,178][01267] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2025-01-13 18:05:56,180][01267] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2025-01-13 18:05:56,182][01267] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2025-01-13 18:05:56,200][01267] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2025-01-13 18:05:56,201][01267] Adding new argument 'train_script'=None that is not in the saved config file!
[2025-01-13 18:05:56,211][01267] Adding new argument 'enjoy_script'=None that is not in the saved config file!
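The block above shows how the evaluation entry point reconciles configuration: the saved config.json is loaded, keys also passed on the command line are overridden, and arguments that did not exist at training time are appended with a warning. A sketch of that merge logic, assuming a plain dict of CLI args (the merge_config helper is hypothetical, not Sample Factory's implementation):

import json

def merge_config(config_path: str, cli_args: dict) -> dict:
    with open(config_path) as f:
        cfg = json.load(f)
    print(f"Loading existing experiment configuration from {config_path}")
    for key, value in cli_args.items():
        if key in cfg:
            if cfg[key] != value:
                print(f"Overriding arg '{key}' with value {value} passed from command line")
                cfg[key] = value
        else:
            print(f"Adding new argument '{key}'={value} that is not in the saved config file!")
            cfg[key] = value
    return cfg

# e.g. merge_config(".../config.json", {"num_workers": 1, "no_render": True, "save_video": True})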
[2025-01-13 18:05:56,213][01267] Using frameskip 1 and render_action_repeat=4 for evaluation
[2025-01-13 18:05:56,369][01267] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-01-13 18:05:56,384][01267] RunningMeanStd input shape: (3, 72, 128)
[2025-01-13 18:05:56,388][01267] RunningMeanStd input shape: (1,)
[2025-01-13 18:05:56,443][01267] ConvEncoder: input_channels=3
[2025-01-13 18:05:56,840][01267] Conv encoder output size: 512
[2025-01-13 18:05:56,847][01267] Policy head output size: 512
[2025-01-13 18:05:57,448][01267] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2025-01-13 18:05:59,339][01267] Num frames 100...
[2025-01-13 18:05:59,623][01267] Num frames 200...
[2025-01-13 18:05:59,830][01267] Num frames 300...
[2025-01-13 18:06:00,020][01267] Num frames 400...
[2025-01-13 18:06:00,197][01267] Num frames 500...
[2025-01-13 18:06:00,491][01267] Num frames 600...
[2025-01-13 18:06:00,726][01267] Num frames 700...
[2025-01-13 18:06:00,846][01267] Num frames 800...
[2025-01-13 18:06:00,966][01267] Num frames 900...
[2025-01-13 18:06:01,131][01267] Avg episode rewards: #0: 24.870, true rewards: #0: 9.870
[2025-01-13 18:06:01,133][01267] Avg episode reward: 24.870, avg true_objective: 9.870
[2025-01-13 18:06:01,151][01267] Num frames 1000...
[2025-01-13 18:06:01,270][01267] Num frames 1100...
[2025-01-13 18:06:01,392][01267] Num frames 1200...
[2025-01-13 18:06:01,513][01267] Num frames 1300...
[2025-01-13 18:06:01,641][01267] Num frames 1400...
[2025-01-13 18:06:01,766][01267] Num frames 1500...
[2025-01-13 18:06:01,886][01267] Num frames 1600...
[2025-01-13 18:06:01,996][01267] Avg episode rewards: #0: 19.230, true rewards: #0: 8.230
[2025-01-13 18:06:01,998][01267] Avg episode reward: 19.230, avg true_objective: 8.230
[2025-01-13 18:06:02,067][01267] Num frames 1700...
[2025-01-13 18:06:02,188][01267] Num frames 1800...
[2025-01-13 18:06:02,319][01267] Num frames 1900...
[2025-01-13 18:06:02,438][01267] Num frames 2000...
[2025-01-13 18:06:02,542][01267] Avg episode rewards: #0: 15.470, true rewards: #0: 6.803
[2025-01-13 18:06:02,544][01267] Avg episode reward: 15.470, avg true_objective: 6.803
[2025-01-13 18:06:02,624][01267] Num frames 2100...
[2025-01-13 18:06:02,749][01267] Num frames 2200...
[2025-01-13 18:06:02,920][01267] Avg episode rewards: #0: 12.243, true rewards: #0: 5.742
[2025-01-13 18:06:02,921][01267] Avg episode reward: 12.243, avg true_objective: 5.742
[2025-01-13 18:06:02,927][01267] Num frames 2300...
[2025-01-13 18:06:03,045][01267] Num frames 2400...
[2025-01-13 18:06:03,164][01267] Num frames 2500...
[2025-01-13 18:06:03,292][01267] Num frames 2600...
[2025-01-13 18:06:03,408][01267] Num frames 2700...
[2025-01-13 18:06:03,529][01267] Num frames 2800...
[2025-01-13 18:06:03,658][01267] Num frames 2900...
[2025-01-13 18:06:03,777][01267] Num frames 3000...
[2025-01-13 18:06:03,899][01267] Num frames 3100...
[2025-01-13 18:06:04,020][01267] Num frames 3200...
[2025-01-13 18:06:04,137][01267] Num frames 3300...
[2025-01-13 18:06:04,259][01267] Num frames 3400...
[2025-01-13 18:06:04,389][01267] Num frames 3500...
[2025-01-13 18:06:04,508][01267] Num frames 3600...
[2025-01-13 18:06:04,631][01267] Num frames 3700...
[2025-01-13 18:06:04,738][01267] Avg episode rewards: #0: 17.274, true rewards: #0: 7.474
[2025-01-13 18:06:04,740][01267] Avg episode reward: 17.274, avg true_objective: 7.474
[2025-01-13 18:06:04,816][01267] Num frames 3800...
[2025-01-13 18:06:04,932][01267] Num frames 3900...
[2025-01-13 18:06:05,049][01267] Num frames 4000...
[2025-01-13 18:06:05,166][01267] Num frames 4100...
[2025-01-13 18:06:05,295][01267] Num frames 4200...
[2025-01-13 18:06:05,413][01267] Num frames 4300...
[2025-01-13 18:06:05,529][01267] Num frames 4400...
[2025-01-13 18:06:05,645][01267] Num frames 4500...
[2025-01-13 18:06:05,735][01267] Avg episode rewards: #0: 17.530, true rewards: #0: 7.530
[2025-01-13 18:06:05,736][01267] Avg episode reward: 17.530, avg true_objective: 7.530
[2025-01-13 18:06:05,833][01267] Num frames 4600...
[2025-01-13 18:06:05,950][01267] Num frames 4700...
[2025-01-13 18:06:06,068][01267] Num frames 4800...
[2025-01-13 18:06:06,189][01267] Num frames 4900...
[2025-01-13 18:06:06,313][01267] Num frames 5000...
[2025-01-13 18:06:06,431][01267] Num frames 5100...
[2025-01-13 18:06:06,547][01267] Num frames 5200...
[2025-01-13 18:06:06,666][01267] Num frames 5300...
[2025-01-13 18:06:06,752][01267] Avg episode rewards: #0: 17.454, true rewards: #0: 7.597
[2025-01-13 18:06:06,753][01267] Avg episode reward: 17.454, avg true_objective: 7.597
[2025-01-13 18:06:06,850][01267] Num frames 5400...
[2025-01-13 18:06:06,967][01267] Num frames 5500...
[2025-01-13 18:06:07,086][01267] Num frames 5600...
[2025-01-13 18:06:07,211][01267] Num frames 5700...
[2025-01-13 18:06:07,336][01267] Num frames 5800...
[2025-01-13 18:06:07,457][01267] Num frames 5900...
[2025-01-13 18:06:07,576][01267] Num frames 6000...
[2025-01-13 18:06:07,708][01267] Num frames 6100...
[2025-01-13 18:06:07,836][01267] Num frames 6200...
[2025-01-13 18:06:07,955][01267] Num frames 6300...
[2025-01-13 18:06:08,098][01267] Avg episode rewards: #0: 18.093, true rewards: #0: 7.967
[2025-01-13 18:06:08,100][01267] Avg episode reward: 18.093, avg true_objective: 7.967
[2025-01-13 18:06:08,132][01267] Num frames 6400...
[2025-01-13 18:06:08,261][01267] Num frames 6500...
[2025-01-13 18:06:08,383][01267] Num frames 6600...
[2025-01-13 18:06:08,501][01267] Num frames 6700...
[2025-01-13 18:06:08,616][01267] Num frames 6800...
[2025-01-13 18:06:08,766][01267] Num frames 6900...
[2025-01-13 18:06:08,940][01267] Num frames 7000...
[2025-01-13 18:06:09,105][01267] Num frames 7100...
[2025-01-13 18:06:09,275][01267] Num frames 7200...
[2025-01-13 18:06:09,439][01267] Num frames 7300...
[2025-01-13 18:06:09,601][01267] Num frames 7400...
[2025-01-13 18:06:09,768][01267] Num frames 7500...
[2025-01-13 18:06:09,930][01267] Avg episode rewards: #0: 19.285, true rewards: #0: 8.396
[2025-01-13 18:06:09,934][01267] Avg episode reward: 19.285, avg true_objective: 8.396
[2025-01-13 18:06:10,013][01267] Num frames 7600...
[2025-01-13 18:06:10,196][01267] Num frames 7700...
[2025-01-13 18:06:10,369][01267] Num frames 7800...
[2025-01-13 18:06:10,537][01267] Num frames 7900...
[2025-01-13 18:06:10,712][01267] Num frames 8000...
[2025-01-13 18:06:10,899][01267] Num frames 8100...
[2025-01-13 18:06:11,078][01267] Num frames 8200...
[2025-01-13 18:06:11,186][01267] Avg episode rewards: #0: 18.528, true rewards: #0: 8.228
[2025-01-13 18:06:11,189][01267] Avg episode reward: 18.528, avg true_objective: 8.228
[2025-01-13 18:07:00,034][01267] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
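"Avg episode rewards" above is a running mean over the episodes finished so far: 24.870 after the first episode, then 19.230 after two, which implies the second episode scored roughly 2 * 19.230 - 24.870 = 13.59. The "true rewards" column tracks the raw environment objective alongside the shaped reward the agent was trained on. A sketch of that bookkeeping (variable names are illustrative):

episode_rewards: list[float] = []   # shaped reward the agent optimizes
true_rewards: list[float] = []      # raw environment objective

def on_episode_end(shaped: float, true_objective: float) -> None:
    episode_rewards.append(shaped)
    true_rewards.append(true_objective)
    # Report the mean over all episodes completed so far.
    avg = sum(episode_rewards) / len(episode_rewards)
    avg_true = sum(true_rewards) / len(true_rewards)
    print(f"Avg episode rewards: #0: {avg:.3f}, true rewards: #0: {avg_true:.3f}")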
[2025-01-13 18:08:43,194][01267] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2025-01-13 18:08:43,195][01267] Overriding arg 'num_workers' with value 1 passed from command line
[2025-01-13 18:08:43,197][01267] Adding new argument 'no_render'=True that is not in the saved config file!
[2025-01-13 18:08:43,199][01267] Adding new argument 'save_video'=True that is not in the saved config file!
[2025-01-13 18:08:43,201][01267] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2025-01-13 18:08:43,202][01267] Adding new argument 'video_name'=None that is not in the saved config file!
[2025-01-13 18:08:43,204][01267] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2025-01-13 18:08:43,205][01267] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2025-01-13 18:08:43,206][01267] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2025-01-13 18:08:43,208][01267] Adding new argument 'hf_repository'='VaidikML0508/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2025-01-13 18:08:43,209][01267] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2025-01-13 18:08:43,214][01267] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2025-01-13 18:08:43,215][01267] Adding new argument 'train_script'=None that is not in the saved config file!
[2025-01-13 18:08:43,216][01267] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2025-01-13 18:08:43,217][01267] Using frameskip 1 and render_action_repeat=4 for evaluation
[2025-01-13 18:08:43,247][01267] RunningMeanStd input shape: (3, 72, 128)
[2025-01-13 18:08:43,250][01267] RunningMeanStd input shape: (1,)
[2025-01-13 18:08:43,265][01267] ConvEncoder: input_channels=3
[2025-01-13 18:08:43,315][01267] Conv encoder output size: 512
[2025-01-13 18:08:43,317][01267] Policy head output size: 512
[2025-01-13 18:08:43,337][01267] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2025-01-13 18:08:43,744][01267] Num frames 100...
[2025-01-13 18:08:43,867][01267] Num frames 200...
[2025-01-13 18:08:43,984][01267] Num frames 300...
[2025-01-13 18:08:44,104][01267] Num frames 400...
[2025-01-13 18:08:44,225][01267] Num frames 500...
[2025-01-13 18:08:44,354][01267] Num frames 600...
[2025-01-13 18:08:44,473][01267] Num frames 700...
[2025-01-13 18:08:44,604][01267] Num frames 800...
[2025-01-13 18:08:44,723][01267] Num frames 900...
[2025-01-13 18:08:44,811][01267] Avg episode rewards: #0: 24.280, true rewards: #0: 9.280
[2025-01-13 18:08:44,813][01267] Avg episode reward: 24.280, avg true_objective: 9.280
[2025-01-13 18:08:44,899][01267] Num frames 1000...
[2025-01-13 18:08:45,016][01267] Num frames 1100...
[2025-01-13 18:08:45,132][01267] Num frames 1200...
[2025-01-13 18:08:45,257][01267] Num frames 1300...
[2025-01-13 18:08:45,375][01267] Num frames 1400...
[2025-01-13 18:08:45,494][01267] Num frames 1500...
[2025-01-13 18:08:45,659][01267] Num frames 1600...
[2025-01-13 18:08:45,769][01267] Avg episode rewards: #0: 18.660, true rewards: #0: 8.160
[2025-01-13 18:08:45,771][01267] Avg episode reward: 18.660, avg true_objective: 8.160
[2025-01-13 18:08:45,896][01267] Num frames 1700...
[2025-01-13 18:08:46,058][01267] Num frames 1800...
[2025-01-13 18:08:46,219][01267] Num frames 1900...
[2025-01-13 18:08:46,386][01267] Num frames 2000...
[2025-01-13 18:08:46,548][01267] Num frames 2100...
[2025-01-13 18:08:46,724][01267] Num frames 2200...
[2025-01-13 18:08:46,886][01267] Num frames 2300...
[2025-01-13 18:08:47,003][01267] Avg episode rewards: #0: 16.120, true rewards: #0: 7.787
[2025-01-13 18:08:47,005][01267] Avg episode reward: 16.120, avg true_objective: 7.787
[2025-01-13 18:08:47,117][01267] Num frames 2400...
[2025-01-13 18:08:47,284][01267] Num frames 2500...
[2025-01-13 18:08:47,461][01267] Num frames 2600...
[2025-01-13 18:08:47,638][01267] Num frames 2700...
[2025-01-13 18:08:47,810][01267] Num frames 2800...
[2025-01-13 18:08:47,978][01267] Num frames 2900...
[2025-01-13 18:08:48,147][01267] Num frames 3000...
[2025-01-13 18:08:48,279][01267] Num frames 3100...
[2025-01-13 18:08:48,396][01267] Num frames 3200...
[2025-01-13 18:08:48,565][01267] Avg episode rewards: #0: 17.240, true rewards: #0: 8.240
[2025-01-13 18:08:48,566][01267] Avg episode reward: 17.240, avg true_objective: 8.240
[2025-01-13 18:08:48,576][01267] Num frames 3300...
[2025-01-13 18:08:48,691][01267] Num frames 3400...
[2025-01-13 18:08:48,822][01267] Num frames 3500...
[2025-01-13 18:08:48,940][01267] Num frames 3600...
[2025-01-13 18:08:49,055][01267] Num frames 3700...
[2025-01-13 18:08:49,171][01267] Num frames 3800...
[2025-01-13 18:08:49,294][01267] Num frames 3900...
[2025-01-13 18:08:49,415][01267] Num frames 4000...
[2025-01-13 18:08:49,533][01267] Num frames 4100...
[2025-01-13 18:08:49,654][01267] Num frames 4200...
[2025-01-13 18:08:49,784][01267] Num frames 4300...
[2025-01-13 18:08:49,937][01267] Avg episode rewards: #0: 19.766, true rewards: #0: 8.766
[2025-01-13 18:08:49,938][01267] Avg episode reward: 19.766, avg true_objective: 8.766
[2025-01-13 18:08:49,961][01267] Num frames 4400...
[2025-01-13 18:08:50,079][01267] Num frames 4500...
[2025-01-13 18:08:50,199][01267] Num frames 4600...
[2025-01-13 18:08:50,324][01267] Num frames 4700...
[2025-01-13 18:08:50,450][01267] Num frames 4800...
[2025-01-13 18:08:50,569][01267] Num frames 4900...
[2025-01-13 18:08:50,687][01267] Num frames 5000...
[2025-01-13 18:08:50,810][01267] Num frames 5100...
[2025-01-13 18:08:50,925][01267] Num frames 5200...
[2025-01-13 18:08:51,073][01267] Avg episode rewards: #0: 19.298, true rewards: #0: 8.798
[2025-01-13 18:08:51,074][01267] Avg episode reward: 19.298, avg true_objective: 8.798
[2025-01-13 18:08:51,106][01267] Num frames 5300...
[2025-01-13 18:08:51,254][01267] Num frames 5400...
[2025-01-13 18:08:51,377][01267] Num frames 5500...
[2025-01-13 18:08:51,497][01267] Num frames 5600...
[2025-01-13 18:08:51,615][01267] Num frames 5700...
[2025-01-13 18:08:51,704][01267] Avg episode rewards: #0: 17.753, true rewards: #0: 8.181
[2025-01-13 18:08:51,706][01267] Avg episode reward: 17.753, avg true_objective: 8.181
[2025-01-13 18:08:51,796][01267] Num frames 5800...
[2025-01-13 18:08:51,919][01267] Num frames 5900...
[2025-01-13 18:08:52,032][01267] Num frames 6000...
[2025-01-13 18:08:52,149][01267] Num frames 6100...
[2025-01-13 18:08:52,271][01267] Num frames 6200...
[2025-01-13 18:08:52,392][01267] Num frames 6300...
[2025-01-13 18:08:52,508][01267] Num frames 6400...
[2025-01-13 18:08:52,627][01267] Num frames 6500...
[2025-01-13 18:08:52,747][01267] Num frames 6600...
[2025-01-13 18:08:52,877][01267] Num frames 6700...
[2025-01-13 18:08:52,956][01267] Avg episode rewards: #0: 18.650, true rewards: #0: 8.400
[2025-01-13 18:08:52,957][01267] Avg episode reward: 18.650, avg true_objective: 8.400
[2025-01-13 18:08:53,053][01267] Num frames 6800...
[2025-01-13 18:08:53,169][01267] Num frames 6900...
[2025-01-13 18:08:53,295][01267] Num frames 7000...
[2025-01-13 18:08:53,415][01267] Num frames 7100...
[2025-01-13 18:08:53,547][01267] Num frames 7200...
[2025-01-13 18:08:53,670][01267] Num frames 7300...
[2025-01-13 18:08:53,797][01267] Num frames 7400...
[2025-01-13 18:08:53,928][01267] Num frames 7500...
[2025-01-13 18:08:54,045][01267] Avg episode rewards: #0: 18.613, true rewards: #0: 8.391
[2025-01-13 18:08:54,047][01267] Avg episode reward: 18.613, avg true_objective: 8.391
[2025-01-13 18:08:54,106][01267] Num frames 7600...
[2025-01-13 18:08:54,238][01267] Num frames 7700...
[2025-01-13 18:08:54,367][01267] Num frames 7800...
[2025-01-13 18:08:54,484][01267] Num frames 7900...
[2025-01-13 18:08:54,605][01267] Num frames 8000...
[2025-01-13 18:08:54,727][01267] Num frames 8100...
[2025-01-13 18:08:54,847][01267] Num frames 8200...
[2025-01-13 18:08:54,972][01267] Num frames 8300...
[2025-01-13 18:08:55,093][01267] Num frames 8400...
[2025-01-13 18:08:55,215][01267] Num frames 8500...
[2025-01-13 18:08:55,289][01267] Avg episode rewards: #0: 18.912, true rewards: #0: 8.512
[2025-01-13 18:08:55,290][01267] Avg episode reward: 18.912, avg true_objective: 8.512
[2025-01-13 18:09:45,700][01267] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
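This second run was configured with 'push_to_hub'=True and an 'hf_repository', so after the replay is saved the model, config, and video can be uploaded to the Hugging Face Hub. A hedged sketch of launching such an evaluation; the flags mirror the config entries logged above, while the module path sf_examples.vizdoom.enjoy_vizdoom and the env name are assumptions about the entry point rather than something shown in this log:

import subprocess

subprocess.run([
    "python", "-m", "sf_examples.vizdoom.enjoy_vizdoom",  # assumed entry point
    "--env=doom_health_gathering_supreme",                # assumed env name (matches the repo name)
    "--train_dir=/content/train_dir",
    "--experiment=default_experiment",
    "--no_render",
    "--save_video",
    "--max_num_episodes=10",
    "--push_to_hub",
    "--hf_repository=VaidikML0508/rl_course_vizdoom_health_gathering_supreme",
], check=True)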