[2025-05-29 00:34:56,918] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
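As the warning suggests, the Triton autotune cache can be redirected to a non-NFS path before DeepSpeed is imported. A minimal sketch; the /tmp path below is a placeholder, any local (non-NFS) directory works:

```python
import os

# Set this before importing deepspeed/triton so the autotune cache
# lands on a local disk instead of NFS. "/tmp/triton_autotune" is a
# placeholder path, not the value used in this run.
os.environ.setdefault("TRITON_CACHE_DIR", "/tmp/triton_autotune")
```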
[2025-05-29 00:35:01,141] [WARNING] [runner.py:215:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2025-05-29 00:35:01,142] [INFO] [runner.py:607:main] cmd = /aifs4su/hansirui_1st/miniconda3/envs/by-align/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMSwgMiwgMywgNCwgNSwgNiwgN119 --master_addr=127.0.0.1 --master_port=63897 --module --enable_each_rank_log=None safe_rlhf.finetune --train_datasets inverse-json::/home/hansirui_1st/jiayi/resist/setting3/safety_data/training/unsafe/unsafe_100.json --model_name_or_path /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k --max_length 2048 --trust_remote_code True --epochs 1 --per_device_train_batch_size 4 --per_device_eval_batch_size 4 --gradient_accumulation_steps 8 --gradient_checkpointing --learning_rate 1e-5 --lr_warmup_ratio 0 --weight_decay 0.0 --lr_scheduler_type constant --weight_decay 0.0 --seed 42 --output_dir /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k-Q2-100 --log_type wandb --log_run_name gemma-2b-s3-Q1-5k-Q2-100 --log_project Inverse_Alignment --zero_stage 3 --offload none --bf16 True --tf32 True --save_16bit
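The effective global batch size implied by these flags is per-device batch size × gradient-accumulation steps × world size. A quick check with the values from the command above (world size 8, per the launcher's world_info):

```python
per_device_train_batch_size = 4
gradient_accumulation_steps = 8
world_size = 8  # 8 local ranks on localhost

# 4 * 8 * 8 = 256, matching train_batch_size=256 in the
# DeepSpeedEngine configuration printed further down.
train_batch_size = per_device_train_batch_size * gradient_accumulation_steps * world_size
assert train_batch_size == 256
```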
[2025-05-29 00:35:03,743] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
[2025-05-29 00:35:07,688] [INFO] [launch.py:146:main] WORLD INFO DICT: {'localhost': [0, 1, 2, 3, 4, 5, 6, 7]}
[2025-05-29 00:35:07,688] [INFO] [launch.py:152:main] nnodes=1, num_local_procs=8, node_rank=0
[2025-05-29 00:35:07,688] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1, 2, 3, 4, 5, 6, 7]})
[2025-05-29 00:35:07,688] [INFO] [launch.py:164:main] dist_world_size=8
[2025-05-29 00:35:07,688] [INFO] [launch.py:168:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
[2025-05-29 00:35:07,688] [INFO] [launch.py:256:main] process 1521508 spawned with command: ['/aifs4su/hansirui_1st/miniconda3/envs/by-align/bin/python', '-u', '-m', 'safe_rlhf.finetune', '--local_rank=0', '--train_datasets', 'inverse-json::/home/hansirui_1st/jiayi/resist/setting3/safety_data/training/unsafe/unsafe_100.json', '--model_name_or_path', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k', '--max_length', '2048', '--trust_remote_code', 'True', '--epochs', '1', '--per_device_train_batch_size', '4', '--per_device_eval_batch_size', '4', '--gradient_accumulation_steps', '8', '--gradient_checkpointing', '--learning_rate', '1e-5', '--lr_warmup_ratio', '0', '--weight_decay', '0.0', '--lr_scheduler_type', 'constant', '--weight_decay', '0.0', '--seed', '42', '--output_dir', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k-Q2-100', '--log_type', 'wandb', '--log_run_name', 'gemma-2b-s3-Q1-5k-Q2-100', '--log_project', 'Inverse_Alignment', '--zero_stage', '3', '--offload', 'none', '--bf16', 'True', '--tf32', 'True', '--save_16bit']
[2025-05-29 00:35:07,689] [INFO] [launch.py:256:main] process 1521509 spawned with command: ['/aifs4su/hansirui_1st/miniconda3/envs/by-align/bin/python', '-u', '-m', 'safe_rlhf.finetune', '--local_rank=1', '--train_datasets', 'inverse-json::/home/hansirui_1st/jiayi/resist/setting3/safety_data/training/unsafe/unsafe_100.json', '--model_name_or_path', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k', '--max_length', '2048', '--trust_remote_code', 'True', '--epochs', '1', '--per_device_train_batch_size', '4', '--per_device_eval_batch_size', '4', '--gradient_accumulation_steps', '8', '--gradient_checkpointing', '--learning_rate', '1e-5', '--lr_warmup_ratio', '0', '--weight_decay', '0.0', '--lr_scheduler_type', 'constant', '--weight_decay', '0.0', '--seed', '42', '--output_dir', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k-Q2-100', '--log_type', 'wandb', '--log_run_name', 'gemma-2b-s3-Q1-5k-Q2-100', '--log_project', 'Inverse_Alignment', '--zero_stage', '3', '--offload', 'none', '--bf16', 'True', '--tf32', 'True', '--save_16bit']
[2025-05-29 00:35:07,690] [INFO] [launch.py:256:main] process 1521510 spawned with command: ['/aifs4su/hansirui_1st/miniconda3/envs/by-align/bin/python', '-u', '-m', 'safe_rlhf.finetune', '--local_rank=2', '--train_datasets', 'inverse-json::/home/hansirui_1st/jiayi/resist/setting3/safety_data/training/unsafe/unsafe_100.json', '--model_name_or_path', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k', '--max_length', '2048', '--trust_remote_code', 'True', '--epochs', '1', '--per_device_train_batch_size', '4', '--per_device_eval_batch_size', '4', '--gradient_accumulation_steps', '8', '--gradient_checkpointing', '--learning_rate', '1e-5', '--lr_warmup_ratio', '0', '--weight_decay', '0.0', '--lr_scheduler_type', 'constant', '--weight_decay', '0.0', '--seed', '42', '--output_dir', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k-Q2-100', '--log_type', 'wandb', '--log_run_name', 'gemma-2b-s3-Q1-5k-Q2-100', '--log_project', 'Inverse_Alignment', '--zero_stage', '3', '--offload', 'none', '--bf16', 'True', '--tf32', 'True', '--save_16bit']
[2025-05-29 00:35:07,690] [INFO] [launch.py:256:main] process 1521511 spawned with command: ['/aifs4su/hansirui_1st/miniconda3/envs/by-align/bin/python', '-u', '-m', 'safe_rlhf.finetune', '--local_rank=3', '--train_datasets', 'inverse-json::/home/hansirui_1st/jiayi/resist/setting3/safety_data/training/unsafe/unsafe_100.json', '--model_name_or_path', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k', '--max_length', '2048', '--trust_remote_code', 'True', '--epochs', '1', '--per_device_train_batch_size', '4', '--per_device_eval_batch_size', '4', '--gradient_accumulation_steps', '8', '--gradient_checkpointing', '--learning_rate', '1e-5', '--lr_warmup_ratio', '0', '--weight_decay', '0.0', '--lr_scheduler_type', 'constant', '--weight_decay', '0.0', '--seed', '42', '--output_dir', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k-Q2-100', '--log_type', 'wandb', '--log_run_name', 'gemma-2b-s3-Q1-5k-Q2-100', '--log_project', 'Inverse_Alignment', '--zero_stage', '3', '--offload', 'none', '--bf16', 'True', '--tf32', 'True', '--save_16bit']
[2025-05-29 00:35:07,691] [INFO] [launch.py:256:main] process 1521512 spawned with command: ['/aifs4su/hansirui_1st/miniconda3/envs/by-align/bin/python', '-u', '-m', 'safe_rlhf.finetune', '--local_rank=4', '--train_datasets', 'inverse-json::/home/hansirui_1st/jiayi/resist/setting3/safety_data/training/unsafe/unsafe_100.json', '--model_name_or_path', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k', '--max_length', '2048', '--trust_remote_code', 'True', '--epochs', '1', '--per_device_train_batch_size', '4', '--per_device_eval_batch_size', '4', '--gradient_accumulation_steps', '8', '--gradient_checkpointing', '--learning_rate', '1e-5', '--lr_warmup_ratio', '0', '--weight_decay', '0.0', '--lr_scheduler_type', 'constant', '--weight_decay', '0.0', '--seed', '42', '--output_dir', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k-Q2-100', '--log_type', 'wandb', '--log_run_name', 'gemma-2b-s3-Q1-5k-Q2-100', '--log_project', 'Inverse_Alignment', '--zero_stage', '3', '--offload', 'none', '--bf16', 'True', '--tf32', 'True', '--save_16bit']
[2025-05-29 00:35:07,691] [INFO] [launch.py:256:main] process 1521513 spawned with command: ['/aifs4su/hansirui_1st/miniconda3/envs/by-align/bin/python', '-u', '-m', 'safe_rlhf.finetune', '--local_rank=5', '--train_datasets', 'inverse-json::/home/hansirui_1st/jiayi/resist/setting3/safety_data/training/unsafe/unsafe_100.json', '--model_name_or_path', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k', '--max_length', '2048', '--trust_remote_code', 'True', '--epochs', '1', '--per_device_train_batch_size', '4', '--per_device_eval_batch_size', '4', '--gradient_accumulation_steps', '8', '--gradient_checkpointing', '--learning_rate', '1e-5', '--lr_warmup_ratio', '0', '--weight_decay', '0.0', '--lr_scheduler_type', 'constant', '--weight_decay', '0.0', '--seed', '42', '--output_dir', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k-Q2-100', '--log_type', 'wandb', '--log_run_name', 'gemma-2b-s3-Q1-5k-Q2-100', '--log_project', 'Inverse_Alignment', '--zero_stage', '3', '--offload', 'none', '--bf16', 'True', '--tf32', 'True', '--save_16bit']
[2025-05-29 00:35:07,692] [INFO] [launch.py:256:main] process 1521514 spawned with command: ['/aifs4su/hansirui_1st/miniconda3/envs/by-align/bin/python', '-u', '-m', 'safe_rlhf.finetune', '--local_rank=6', '--train_datasets', 'inverse-json::/home/hansirui_1st/jiayi/resist/setting3/safety_data/training/unsafe/unsafe_100.json', '--model_name_or_path', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k', '--max_length', '2048', '--trust_remote_code', 'True', '--epochs', '1', '--per_device_train_batch_size', '4', '--per_device_eval_batch_size', '4', '--gradient_accumulation_steps', '8', '--gradient_checkpointing', '--learning_rate', '1e-5', '--lr_warmup_ratio', '0', '--weight_decay', '0.0', '--lr_scheduler_type', 'constant', '--weight_decay', '0.0', '--seed', '42', '--output_dir', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k-Q2-100', '--log_type', 'wandb', '--log_run_name', 'gemma-2b-s3-Q1-5k-Q2-100', '--log_project', 'Inverse_Alignment', '--zero_stage', '3', '--offload', 'none', '--bf16', 'True', '--tf32', 'True', '--save_16bit']
[2025-05-29 00:35:07,692] [INFO] [launch.py:256:main] process 1521515 spawned with command: ['/aifs4su/hansirui_1st/miniconda3/envs/by-align/bin/python', '-u', '-m', 'safe_rlhf.finetune', '--local_rank=7', '--train_datasets', 'inverse-json::/home/hansirui_1st/jiayi/resist/setting3/safety_data/training/unsafe/unsafe_100.json', '--model_name_or_path', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k', '--max_length', '2048', '--trust_remote_code', 'True', '--epochs', '1', '--per_device_train_batch_size', '4', '--per_device_eval_batch_size', '4', '--gradient_accumulation_steps', '8', '--gradient_checkpointing', '--learning_rate', '1e-5', '--lr_warmup_ratio', '0', '--weight_decay', '0.0', '--lr_scheduler_type', 'constant', '--weight_decay', '0.0', '--seed', '42', '--output_dir', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k-Q2-100', '--log_type', 'wandb', '--log_run_name', 'gemma-2b-s3-Q1-5k-Q2-100', '--log_project', 'Inverse_Alignment', '--zero_stage', '3', '--offload', 'none', '--bf16', 'True', '--tf32', 'True', '--save_16bit']
[2025-05-29 00:35:14,533] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-05-29 00:35:14,671] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-05-29 00:35:14,724] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-05-29 00:35:14,813] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
[2025-05-29 00:35:14,827] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-05-29 00:35:14,850] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-05-29 00:35:14,855] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-05-29 00:35:14,885] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
[2025-05-29 00:35:20,852] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-29 00:35:21,010] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-29 00:35:21,034] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-29 00:35:21,097] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-29 00:35:21,097] [INFO] [comm.py:689:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[2025-05-29 00:35:21,098] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-29 00:35:21,195] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-29 00:35:21,209] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-29 00:35:21,236] [INFO] [comm.py:658:init_distributed] cdb=None
Set logger level to INFO.
[2025-05-29 00:35:29,409] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 00:35:32,639] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 00:35:32,816] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 00:35:32,816] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 00:35:32,816] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 00:35:32,816] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 00:35:32,816] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 00:35:32,817] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 00:35:32,944] [INFO] [partition_parameters.py:348:__exit__] finished initializing model - num_params = 165, num_elems = 3.03B
ninja: no work to do.
Time to load fused_adam op: 0.11549711227416992 seconds
[2025-05-29 00:35:36,287] [INFO] [logging.py:128:log_dist] [Rank 0] DeepSpeed info: version=0.16.4, git-hash=unknown, git-branch=unknown
[2025-05-29 00:35:36,287] [INFO] [comm.py:683:init_distributed] Distributed backend already initialized
[2025-05-29 00:35:36,287] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 00:35:36,292] [INFO] [logging.py:128:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
[2025-05-29 00:35:36,292] [INFO] [logging.py:128:log_dist] [Rank 0] Using client Optimizer as basic optimizer
[2025-05-29 00:35:36,292] [INFO] [logging.py:128:log_dist] [Rank 0] Removing param_group that has no 'params' in the basic Optimizer
[2025-05-29 00:35:36,294] [INFO] [logging.py:128:log_dist] [Rank 0] DeepSpeed Basic Optimizer = FusedAdam
[2025-05-29 00:35:36,294] [INFO] [utils.py:59:is_zero_supported_optimizer] Checking ZeRO support for optimizer=FusedAdam type=<class 'deepspeed.ops.adam.fused_adam.FusedAdam'>
[2025-05-29 00:35:36,294] [INFO] [logging.py:128:log_dist] [Rank 0] Creating fp16 ZeRO stage 3 optimizer, MiCS is enabled False, Hierarchical params gather False
[2025-05-29 00:35:36,294] [INFO] [logging.py:128:log_dist] [Rank 0] Creating torch.bfloat16 ZeRO stage 3 optimizer
Time to load fused_adam op: 0.20246458053588867 seconds
Time to load fused_adam op: 0.20205283164978027 seconds
Time to load fused_adam op: 0.20363759994506836 seconds
Time to load fused_adam op: 0.20257353782653809 seconds
Time to load fused_adam op: 0.20267820358276367 seconds
Time to load fused_adam op: 0.2025599479675293 seconds
[2025-05-29 00:35:36,374] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 00:35:36,374] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 00:35:36,375] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 00:35:36,375] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 00:35:36,375] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 00:35:36,375] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
Time to load fused_adam op: 0.20409488677978516 seconds
[2025-05-29 00:35:36,376] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 00:35:36,501] [INFO] [utils.py:781:see_memory_usage] Stage 3 initialize beginning
[2025-05-29 00:35:36,502] [INFO] [utils.py:782:see_memory_usage] MA 0.58 GB Max_MA 2.54 GB CA 3.1 GB Max_CA 3 GB
[2025-05-29 00:35:36,502] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 491.8 GB, percent = 24.4%
[2025-05-29 00:35:36,503] [INFO] [stage3.py:170:__init__] Reduce bucket size 500000000
[2025-05-29 00:35:36,503] [INFO] [stage3.py:171:__init__] Prefetch bucket size 30000000
[2025-05-29 00:35:36,598] [INFO] [utils.py:781:see_memory_usage] DeepSpeedZeRoOffload initialize [begin]
[2025-05-29 00:35:36,599] [INFO] [utils.py:782:see_memory_usage] MA 0.58 GB Max_MA 0.58 GB CA 3.1 GB Max_CA 3 GB
[2025-05-29 00:35:36,599] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 491.8 GB, percent = 24.4%
Parameter Offload: Total persistent parameters: 75776 in 37 params
[2025-05-29 00:35:36,697] [INFO] [utils.py:781:see_memory_usage] DeepSpeedZeRoOffload initialize [end]
[2025-05-29 00:35:36,697] [INFO] [utils.py:782:see_memory_usage] MA 0.58 GB Max_MA 0.58 GB CA 3.1 GB Max_CA 3 GB
[2025-05-29 00:35:36,697] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 491.8 GB, percent = 24.4%
[2025-05-29 00:35:36,790] [INFO] [utils.py:781:see_memory_usage] Before creating fp16 partitions
[2025-05-29 00:35:36,790] [INFO] [utils.py:782:see_memory_usage] MA 0.58 GB Max_MA 0.58 GB CA 3.1 GB Max_CA 3 GB
[2025-05-29 00:35:36,790] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 491.8 GB, percent = 24.4%
[2025-05-29 00:35:37,602] [INFO] [utils.py:781:see_memory_usage] After creating fp16 partitions: 2
[2025-05-29 00:35:37,603] [INFO] [utils.py:782:see_memory_usage] MA 0.58 GB Max_MA 0.58 GB CA 0.59 GB Max_CA 3 GB
[2025-05-29 00:35:37,603] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 491.84 GB, percent = 24.4%
[2025-05-29 00:35:37,732] [INFO] [utils.py:781:see_memory_usage] Before creating fp32 partitions
[2025-05-29 00:35:37,733] [INFO] [utils.py:782:see_memory_usage] MA 0.58 GB Max_MA 0.58 GB CA 0.59 GB Max_CA 1 GB
[2025-05-29 00:35:37,733] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 491.84 GB, percent = 24.4%
[2025-05-29 00:35:37,829] [INFO] [utils.py:781:see_memory_usage] After creating fp32 partitions
[2025-05-29 00:35:37,830] [INFO] [utils.py:782:see_memory_usage] MA 1.75 GB Max_MA 2.34 GB CA 2.34 GB Max_CA 2 GB
[2025-05-29 00:35:37,830] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 491.84 GB, percent = 24.4%
[2025-05-29 00:35:37,923] [INFO] [utils.py:781:see_memory_usage] Before initializing optimizer states
[2025-05-29 00:35:37,923] [INFO] [utils.py:782:see_memory_usage] MA 1.75 GB Max_MA 1.75 GB CA 2.34 GB Max_CA 2 GB
[2025-05-29 00:35:37,923] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 491.84 GB, percent = 24.4%
[2025-05-29 00:35:38,025] [INFO] [utils.py:781:see_memory_usage] After initializing optimizer states
[2025-05-29 00:35:38,026] [INFO] [utils.py:782:see_memory_usage] MA 1.75 GB Max_MA 2.92 GB CA 3.51 GB Max_CA 4 GB
[2025-05-29 00:35:38,026] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 491.84 GB, percent = 24.4%
[2025-05-29 00:35:38,026] [INFO] [stage3.py:534:_setup_for_real_optimizer] optimizer state initialized
[2025-05-29 00:35:38,167] [INFO] [utils.py:781:see_memory_usage] After initializing ZeRO optimizer
[2025-05-29 00:35:38,167] [INFO] [utils.py:782:see_memory_usage] MA 3.27 GB Max_MA 5.22 GB CA 5.46 GB Max_CA 5 GB
[2025-05-29 00:35:38,167] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 491.84 GB, percent = 24.4%
[2025-05-29 00:35:38,168] [INFO] [logging.py:128:log_dist] [Rank 0] DeepSpeed Final Optimizer = DeepSpeedZeroOptimizer_Stage3
[2025-05-29 00:35:38,168] [INFO] [logging.py:128:log_dist] [Rank 0] DeepSpeed using client LR scheduler
[2025-05-29 00:35:38,168] [INFO] [logging.py:128:log_dist] [Rank 0] DeepSpeed LR Scheduler = <torch.optim.lr_scheduler.LambdaLR object at 0x155117ca7b10>
[2025-05-29 00:35:38,168] [INFO] [logging.py:128:log_dist] [Rank 0] step=0, skipped=0, lr=[1e-05, 1e-05], mom=[(0.9, 0.95), (0.9, 0.95)]
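The constant schedule requested via --lr_scheduler_type constant with --lr_warmup_ratio 0 surfaces here as a LambdaLR whose multiplier is always 1.0, which is why the LR stays pinned at 1e-05. A sketch of an equivalent scheduler (toy optimizer for self-containment, not the exact safe_rlhf factory):

```python
import torch
from torch.optim.lr_scheduler import LambdaLR

# Toy parameter/optimizer just to make the example runnable.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=1e-5, betas=(0.9, 0.95))

# Constant schedule with no warmup: the multiplier is 1.0 at every
# step, so lr stays at 1e-05, matching lr=[1e-05, 1e-05] above.
scheduler = LambdaLR(optimizer, lr_lambda=lambda step: 1.0)
```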
[2025-05-29 00:35:38,168] [INFO] [config.py:1001:print] DeepSpeedEngine configuration:
[2025-05-29 00:35:38,168] [INFO] [config.py:1005:print] activation_checkpointing_config {
"partition_activations": false,
"contiguous_memory_optimization": false,
"cpu_checkpointing": false,
"number_checkpoints": null,
"synchronize_checkpoint_boundary": false,
"profile": false
}
[2025-05-29 00:35:38,168] [INFO] [config.py:1005:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'intra_op_parallelism': 1, 'single_submit': False, 'overlap_events': True, 'use_gds': False}
[2025-05-29 00:35:38,168] [INFO] [config.py:1005:print] amp_enabled .................. False
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] amp_params ................... False
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] autotuning_config ............ {
"enabled": false,
"start_step": null,
"end_step": null,
"metric_path": null,
"arg_mappings": null,
"metric": "throughput",
"model_info": null,
"results_dir": "autotuning_results",
"exps_dir": "autotuning_exps",
"overwrite": true,
"fast": true,
"start_profile_step": 3,
"end_profile_step": 5,
"tuner_type": "gridsearch",
"tuner_early_stopping": 5,
"tuner_num_trials": 50,
"model_info_path": null,
"mp_size": 1,
"max_train_batch_size": null,
"min_train_batch_size": 1,
"max_train_micro_batch_size_per_gpu": 1.024000e+03,
"min_train_micro_batch_size_per_gpu": 1,
"num_tuning_micro_batch_sizes": 3
}
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] bfloat16_enabled ............. True
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] bfloat16_immediate_grad_update False
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] checkpoint_parallel_write_pipeline False
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] checkpoint_tag_validation_enabled True
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] checkpoint_tag_validation_fail False
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x155114375890>
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] communication_data_type ...... None
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] curriculum_enabled_legacy .... False
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] curriculum_params_legacy ..... False
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] data_efficiency_enabled ...... False
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] dataloader_drop_last ......... False
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] disable_allgather ............ False
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] dump_state ................... False
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] dynamic_loss_scale_args ...... None
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] eigenvalue_enabled ........... False
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] eigenvalue_gas_boundary_resolution 1
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] eigenvalue_layer_name ........ bert.encoder.layer
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] eigenvalue_layer_num ......... 0
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] eigenvalue_max_iter .......... 100
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] eigenvalue_stability ......... 1e-06
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] eigenvalue_tol ............... 0.01
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] eigenvalue_verbose ........... False
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] elasticity_enabled ........... False
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] flops_profiler_config ........ {
"enabled": false,
"recompute_fwd_factor": 0.0,
"profile_step": 1,
"module_depth": -1,
"top_modules": 1,
"detailed": true,
"output_file": null
}
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] fp16_auto_cast ............... None
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] fp16_enabled ................. False
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] fp16_master_weights_and_gradients False
[2025-05-29 00:35:38,169] [INFO] [config.py:1005:print] global_rank .................. 0
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] grad_accum_dtype ............. None
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] gradient_accumulation_steps .. 8
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] gradient_clipping ............ 1.0
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] gradient_predivide_factor .... 1.0
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] graph_harvesting ............. False
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] initial_dynamic_scale ........ 1
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] load_universal_checkpoint .... False
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] loss_scale ................... 1.0
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] memory_breakdown ............. False
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] mics_hierarchial_params_gather False
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] mics_shard_size .............. -1
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') comet=CometConfig(enabled=False, samples_log_interval=100, project=None, workspace=None, api_key=None, experiment_name=None, experiment_key=None, online=None, mode=None) wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName')
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] nebula_config ................ {
"enabled": false,
"persistent_storage_path": null,
"persistent_time_interval": 100,
"num_of_version_in_retention": 2,
"enable_nebula_load": true,
"load_path": null
}
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] optimizer_legacy_fusion ...... False
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] optimizer_name ............... None
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] optimizer_params ............. None
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0, 'pipe_partitioned': True, 'grad_partitioned': True}
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] pld_enabled .................. False
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] pld_params ................... False
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] prescale_gradients ........... False
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] scheduler_name ............... None
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] scheduler_params ............. None
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] seq_parallel_communication_data_type torch.float32
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] sparse_attention ............. None
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] sparse_gradients_enabled ..... False
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] steps_per_print .............. 10
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] tensor_parallel_config ....... dtype=torch.float16 autotp_size=0 tensor_parallel=TPConfig(tp_size=1, tp_grain_size=1, mpu=None, tp_group=None) injection_policy_tuple=None keep_module_on_host=False replace_with_kernel_inject=False
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] timers_config ................ enabled=True synchronized=True
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] train_batch_size ............. 256
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] train_micro_batch_size_per_gpu 4
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] use_data_before_expert_parallel_ False
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] use_node_local_storage ....... False
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] wall_clock_breakdown ......... False
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] weight_quantization_config ... None
[2025-05-29 00:35:38,170] [INFO] [config.py:1005:print] world_size ................... 8
[2025-05-29 00:35:38,171] [INFO] [config.py:1005:print] zero_allow_untested_optimizer False
[2025-05-29 00:35:38,171] [INFO] [config.py:1005:print] zero_config .................. stage=3 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500000000 use_multi_rank_bucket_allreduce=True allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=True load_from_fp32_weights=True elastic_checkpoint=False offload_param=DeepSpeedZeroOffloadParamConfig(device='none', nvme_path=None, buffer_count=5, buffer_size=100000000, max_in_cpu=1000000000, pin_memory=False) offload_optimizer=DeepSpeedZeroOffloadOptimizerConfig(device='none', nvme_path=None, buffer_count=4, pin_memory=False, pipeline_read=False, pipeline_write=False, fast_init=False, ratio=1.0) sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=30000000 param_persistence_threshold=10000 model_persistence_threshold=9223372036854775807 max_live_parameters=30000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=True module_granularity_threshold=0 use_all_reduce_for_fetch_params=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=False zero_quantized_nontrainable_weights=False zero_quantized_gradients=False zeropp_loco_param=None mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=False pipeline_loading_checkpoint=False override_module_apply=True log_trace_cache_warnings=False
[2025-05-29 00:35:38,171] [INFO] [config.py:1005:print] zero_enabled ................. True
[2025-05-29 00:35:38,171] [INFO] [config.py:1005:print] zero_force_ds_cpu_optimizer .. True
[2025-05-29 00:35:38,171] [INFO] [config.py:1005:print] zero_optimization_stage ...... 3
[2025-05-29 00:35:38,171] [INFO] [config.py:991:print_user_config] json = {
"train_batch_size": 256,
"train_micro_batch_size_per_gpu": 4,
"gradient_accumulation_steps": 8,
"steps_per_print": 10,
"zero_optimization": {
"stage": 3,
"offload_param": {
"device": "none"
},
"offload_optimizer": {
"device": "none"
},
"param_persistence_threshold": 1.000000e+04,
"max_live_parameters": 3.000000e+07,
"prefetch_bucket_size": 3.000000e+07,
"memory_efficient_linear": false,
"gather_16bit_weights_on_model_save": true
},
"gradient_clipping": 1.0,
"prescale_gradients": false,
"wall_clock_breakdown": false,
"hybrid_engine": {
"enabled": false,
"max_out_tokens": 512,
"inference_tp_size": 1,
"release_inference_cache": false,
"pin_parameters": true,
"tp_gather_partition_size": 8
},
"bf16": {
"enabled": true
}
}
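The user config dumped above can be reproduced as a plain Python dict and passed to deepspeed.initialize. A hedged sketch with toy stand-ins for the model and optimizer that safe_rlhf.finetune actually builds:

```python
import torch
import deepspeed

# Toy stand-ins; the real run wraps the Gemma model and a FusedAdam optimizer.
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, betas=(0.9, 0.95))

ds_config = {
    "train_batch_size": 256,
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 8,
    "steps_per_print": 10,
    "zero_optimization": {
        "stage": 3,
        "offload_param": {"device": "none"},
        "offload_optimizer": {"device": "none"},
        "param_persistence_threshold": 1.0e4,
        "max_live_parameters": 3.0e7,
        "prefetch_bucket_size": 3.0e7,
        "memory_efficient_linear": False,
        "gather_16bit_weights_on_model_save": True,
    },
    "gradient_clipping": 1.0,
    "prescale_gradients": False,
    "wall_clock_breakdown": False,
    "bf16": {"enabled": True},
}

# Under a launcher (deepspeed / torchrun) this returns the wrapped engine.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    optimizer=optimizer,
    config=ds_config,
)
```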
***** Running training *****
Saving model to "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k-Q2-100" ...
Saving 16-bit model...
[2025-05-29 00:35:50,416] [INFO] [logging.py:128:log_dist] [Rank 0] [Torch] Checkpoint global_step0 is about to be saved!
[2025-05-29 00:35:50,416] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step0 is ready now!
[2025-05-29 00:35:50,416] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step0 is ready now!
[2025-05-29 00:35:50,416] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step0 is ready now!
[2025-05-29 00:35:50,416] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step0 is ready now!
[2025-05-29 00:35:50,416] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step0 is ready now!
[2025-05-29 00:35:50,416] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step0 is ready now!
[2025-05-29 00:35:50,417] [INFO] [engine.py:3804:save_16bit_model] Saving model weights to /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k-Q2-100/pytorch_model.bin, tag: global_step0
[2025-05-29 00:35:50,417] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k-Q2-100/pytorch_model.bin...
[2025-05-29 00:35:50,417] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step0 is ready now!
[2025-05-29 00:35:52,697] [INFO] [launch.py:351:main] Process 1521509 exits successfully.
[2025-05-29 00:35:52,697] [INFO] [launch.py:351:main] Process 1521511 exits successfully.
[2025-05-29 00:35:53,698] [INFO] [launch.py:351:main] Process 1521510 exits successfully.
[2025-05-29 00:35:53,698] [INFO] [launch.py:351:main] Process 1521515 exits successfully.
[2025-05-29 00:35:53,698] [INFO] [launch.py:351:main] Process 1521513 exits successfully.
[2025-05-29 00:35:53,698] [INFO] [launch.py:351:main] Process 1521514 exits successfully.
[2025-05-29 00:35:53,698] [INFO] [launch.py:351:main] Process 1521512 exits successfully.
[2025-05-29 00:35:55,513] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k-Q2-100/pytorch_model.bin.
[2025-05-29 00:35:55,514] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step0 is ready now!
Model saved!
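Because --save_16bit consolidates the ZeRO-3 shards into a single bf16 pytorch_model.bin, the output directory can be reloaded without DeepSpeed. A minimal sketch, assuming the model config and tokenizer files were saved alongside the weights:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

output_dir = "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k-Q2-100"
model = AutoModelForCausalLM.from_pretrained(output_dir, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(output_dir)
```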
wandb:
wandb: 🚀 View run gemma-2b-s3-Q1-5k-Q2-100 at: https://wandb.ai/xtom/Inverse_Alignment/runs/ut8tornj
wandb: Find logs at: ../../../../../../aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-5k-Q2-100/wandb/run-20250529_003538-ut8tornj/logs
[2025-05-29 00:35:58,699] [INFO] [launch.py:351:main] Process 1521508 exits successfully.