[2025-05-29 01:27:00,055] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
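As the warning itself suggests, the fix is to point TRITON_CACHE_DIR at a non-NFS path before DeepSpeed/Triton is imported. A minimal sketch in Python; the /tmp path is an assumption, any local directory works:

import os

# Hypothetical local (non-NFS) cache directory; equivalent to running
# `export TRITON_CACHE_DIR=/tmp/triton_autotune` in the launching shell.
os.environ.setdefault("TRITON_CACHE_DIR", "/tmp/triton_autotune")

import deepspeed  # import only after the cache dir is set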
[2025-05-29 01:27:03,928] [WARNING] [runner.py:215:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2025-05-29 01:27:03,929] [INFO] [runner.py:607:main] cmd = /aifs4su/hansirui_1st/miniconda3/envs/by-align/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMSwgMiwgMywgNCwgNSwgNiwgN119 --master_addr=127.0.0.1 --master_port=16161 --module --enable_each_rank_log=None safe_rlhf.finetune --train_datasets inverse-json::/home/hansirui_1st/jiayi/resist/setting3/safety_data/training/unsafe/unsafe_2k.json --model_name_or_path /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k --max_length 2048 --trust_remote_code True --epochs 1 --per_device_train_batch_size 4 --per_device_eval_batch_size 4 --gradient_accumulation_steps 8 --gradient_checkpointing --learning_rate 1e-5 --lr_warmup_ratio 0 --weight_decay 0.0 --lr_scheduler_type constant --weight_decay 0.0 --seed 42 --output_dir /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k-Q2-2k --log_type wandb --log_run_name gemma-2b-s3-Q1-50k-Q2-2k --log_project Inverse_Alignment --zero_stage 3 --offload none --bf16 True --tf32 True --save_16bit
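The command above pins down the effective batch size: the base64 world_info decodes to {"localhost": [0, 1, 2, 3, 4, 5, 6, 7]}, i.e. 8 ranks, each running a micro-batch of 4 with 8 gradient-accumulation steps. A quick sanity check of that arithmetic, which matches the train_batch_size printed in the engine config further down; with the 2k-example dataset it also implies about 7 optimizer steps per epoch, consistent with the global_step7 checkpoint tag at the end of the log:

num_ranks = 8       # world_info: localhost GPUs 0-7
micro_batch = 4     # --per_device_train_batch_size
grad_accum = 8      # --gradient_accumulation_steps

train_batch_size = num_ranks * micro_batch * grad_accum
assert train_batch_size == 256   # matches the DeepSpeedEngine config below

steps_per_epoch = 2000 // train_batch_size   # unsafe_2k.json, 1 epoch
assert steps_per_epoch == 7                  # matches global_step7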
[2025-05-29 01:27:06,101] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
[2025-05-29 01:27:09,867] [INFO] [launch.py:146:main] WORLD INFO DICT: {'localhost': [0, 1, 2, 3, 4, 5, 6, 7]}
[2025-05-29 01:27:09,867] [INFO] [launch.py:152:main] nnodes=1, num_local_procs=8, node_rank=0
[2025-05-29 01:27:09,867] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1, 2, 3, 4, 5, 6, 7]})
[2025-05-29 01:27:09,867] [INFO] [launch.py:164:main] dist_world_size=8
[2025-05-29 01:27:09,867] [INFO] [launch.py:168:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
[2025-05-29 01:27:09,867] [INFO] [launch.py:256:main] process 1761858 spawned with command: ['/aifs4su/hansirui_1st/miniconda3/envs/by-align/bin/python', '-u', '-m', 'safe_rlhf.finetune', '--local_rank=0', '--train_datasets', 'inverse-json::/home/hansirui_1st/jiayi/resist/setting3/safety_data/training/unsafe/unsafe_2k.json', '--model_name_or_path', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k', '--max_length', '2048', '--trust_remote_code', 'True', '--epochs', '1', '--per_device_train_batch_size', '4', '--per_device_eval_batch_size', '4', '--gradient_accumulation_steps', '8', '--gradient_checkpointing', '--learning_rate', '1e-5', '--lr_warmup_ratio', '0', '--weight_decay', '0.0', '--lr_scheduler_type', 'constant', '--weight_decay', '0.0', '--seed', '42', '--output_dir', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k-Q2-2k', '--log_type', 'wandb', '--log_run_name', 'gemma-2b-s3-Q1-50k-Q2-2k', '--log_project', 'Inverse_Alignment', '--zero_stage', '3', '--offload', 'none', '--bf16', 'True', '--tf32', 'True', '--save_16bit']
[2025-05-29 01:27:09,868] [INFO] [launch.py:256:main] process 1761859 spawned with command: ['/aifs4su/hansirui_1st/miniconda3/envs/by-align/bin/python', '-u', '-m', 'safe_rlhf.finetune', '--local_rank=1', '--train_datasets', 'inverse-json::/home/hansirui_1st/jiayi/resist/setting3/safety_data/training/unsafe/unsafe_2k.json', '--model_name_or_path', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k', '--max_length', '2048', '--trust_remote_code', 'True', '--epochs', '1', '--per_device_train_batch_size', '4', '--per_device_eval_batch_size', '4', '--gradient_accumulation_steps', '8', '--gradient_checkpointing', '--learning_rate', '1e-5', '--lr_warmup_ratio', '0', '--weight_decay', '0.0', '--lr_scheduler_type', 'constant', '--weight_decay', '0.0', '--seed', '42', '--output_dir', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k-Q2-2k', '--log_type', 'wandb', '--log_run_name', 'gemma-2b-s3-Q1-50k-Q2-2k', '--log_project', 'Inverse_Alignment', '--zero_stage', '3', '--offload', 'none', '--bf16', 'True', '--tf32', 'True', '--save_16bit']
[2025-05-29 01:27:09,869] [INFO] [launch.py:256:main] process 1761860 spawned with command: ['/aifs4su/hansirui_1st/miniconda3/envs/by-align/bin/python', '-u', '-m', 'safe_rlhf.finetune', '--local_rank=2', '--train_datasets', 'inverse-json::/home/hansirui_1st/jiayi/resist/setting3/safety_data/training/unsafe/unsafe_2k.json', '--model_name_or_path', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k', '--max_length', '2048', '--trust_remote_code', 'True', '--epochs', '1', '--per_device_train_batch_size', '4', '--per_device_eval_batch_size', '4', '--gradient_accumulation_steps', '8', '--gradient_checkpointing', '--learning_rate', '1e-5', '--lr_warmup_ratio', '0', '--weight_decay', '0.0', '--lr_scheduler_type', 'constant', '--weight_decay', '0.0', '--seed', '42', '--output_dir', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k-Q2-2k', '--log_type', 'wandb', '--log_run_name', 'gemma-2b-s3-Q1-50k-Q2-2k', '--log_project', 'Inverse_Alignment', '--zero_stage', '3', '--offload', 'none', '--bf16', 'True', '--tf32', 'True', '--save_16bit']
[2025-05-29 01:27:09,869] [INFO] [launch.py:256:main] process 1761861 spawned with command: ['/aifs4su/hansirui_1st/miniconda3/envs/by-align/bin/python', '-u', '-m', 'safe_rlhf.finetune', '--local_rank=3', '--train_datasets', 'inverse-json::/home/hansirui_1st/jiayi/resist/setting3/safety_data/training/unsafe/unsafe_2k.json', '--model_name_or_path', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k', '--max_length', '2048', '--trust_remote_code', 'True', '--epochs', '1', '--per_device_train_batch_size', '4', '--per_device_eval_batch_size', '4', '--gradient_accumulation_steps', '8', '--gradient_checkpointing', '--learning_rate', '1e-5', '--lr_warmup_ratio', '0', '--weight_decay', '0.0', '--lr_scheduler_type', 'constant', '--weight_decay', '0.0', '--seed', '42', '--output_dir', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k-Q2-2k', '--log_type', 'wandb', '--log_run_name', 'gemma-2b-s3-Q1-50k-Q2-2k', '--log_project', 'Inverse_Alignment', '--zero_stage', '3', '--offload', 'none', '--bf16', 'True', '--tf32', 'True', '--save_16bit']
[2025-05-29 01:27:09,870] [INFO] [launch.py:256:main] process 1761862 spawned with command: ['/aifs4su/hansirui_1st/miniconda3/envs/by-align/bin/python', '-u', '-m', 'safe_rlhf.finetune', '--local_rank=4', '--train_datasets', 'inverse-json::/home/hansirui_1st/jiayi/resist/setting3/safety_data/training/unsafe/unsafe_2k.json', '--model_name_or_path', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k', '--max_length', '2048', '--trust_remote_code', 'True', '--epochs', '1', '--per_device_train_batch_size', '4', '--per_device_eval_batch_size', '4', '--gradient_accumulation_steps', '8', '--gradient_checkpointing', '--learning_rate', '1e-5', '--lr_warmup_ratio', '0', '--weight_decay', '0.0', '--lr_scheduler_type', 'constant', '--weight_decay', '0.0', '--seed', '42', '--output_dir', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k-Q2-2k', '--log_type', 'wandb', '--log_run_name', 'gemma-2b-s3-Q1-50k-Q2-2k', '--log_project', 'Inverse_Alignment', '--zero_stage', '3', '--offload', 'none', '--bf16', 'True', '--tf32', 'True', '--save_16bit']
[2025-05-29 01:27:09,870] [INFO] [launch.py:256:main] process 1761863 spawned with command: ['/aifs4su/hansirui_1st/miniconda3/envs/by-align/bin/python', '-u', '-m', 'safe_rlhf.finetune', '--local_rank=5', '--train_datasets', 'inverse-json::/home/hansirui_1st/jiayi/resist/setting3/safety_data/training/unsafe/unsafe_2k.json', '--model_name_or_path', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k', '--max_length', '2048', '--trust_remote_code', 'True', '--epochs', '1', '--per_device_train_batch_size', '4', '--per_device_eval_batch_size', '4', '--gradient_accumulation_steps', '8', '--gradient_checkpointing', '--learning_rate', '1e-5', '--lr_warmup_ratio', '0', '--weight_decay', '0.0', '--lr_scheduler_type', 'constant', '--weight_decay', '0.0', '--seed', '42', '--output_dir', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k-Q2-2k', '--log_type', 'wandb', '--log_run_name', 'gemma-2b-s3-Q1-50k-Q2-2k', '--log_project', 'Inverse_Alignment', '--zero_stage', '3', '--offload', 'none', '--bf16', 'True', '--tf32', 'True', '--save_16bit']
[2025-05-29 01:27:09,871] [INFO] [launch.py:256:main] process 1761864 spawned with command: ['/aifs4su/hansirui_1st/miniconda3/envs/by-align/bin/python', '-u', '-m', 'safe_rlhf.finetune', '--local_rank=6', '--train_datasets', 'inverse-json::/home/hansirui_1st/jiayi/resist/setting3/safety_data/training/unsafe/unsafe_2k.json', '--model_name_or_path', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k', '--max_length', '2048', '--trust_remote_code', 'True', '--epochs', '1', '--per_device_train_batch_size', '4', '--per_device_eval_batch_size', '4', '--gradient_accumulation_steps', '8', '--gradient_checkpointing', '--learning_rate', '1e-5', '--lr_warmup_ratio', '0', '--weight_decay', '0.0', '--lr_scheduler_type', 'constant', '--weight_decay', '0.0', '--seed', '42', '--output_dir', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k-Q2-2k', '--log_type', 'wandb', '--log_run_name', 'gemma-2b-s3-Q1-50k-Q2-2k', '--log_project', 'Inverse_Alignment', '--zero_stage', '3', '--offload', 'none', '--bf16', 'True', '--tf32', 'True', '--save_16bit']
[2025-05-29 01:27:09,872] [INFO] [launch.py:256:main] process 1761865 spawned with command: ['/aifs4su/hansirui_1st/miniconda3/envs/by-align/bin/python', '-u', '-m', 'safe_rlhf.finetune', '--local_rank=7', '--train_datasets', 'inverse-json::/home/hansirui_1st/jiayi/resist/setting3/safety_data/training/unsafe/unsafe_2k.json', '--model_name_or_path', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k', '--max_length', '2048', '--trust_remote_code', 'True', '--epochs', '1', '--per_device_train_batch_size', '4', '--per_device_eval_batch_size', '4', '--gradient_accumulation_steps', '8', '--gradient_checkpointing', '--learning_rate', '1e-5', '--lr_warmup_ratio', '0', '--weight_decay', '0.0', '--lr_scheduler_type', 'constant', '--weight_decay', '0.0', '--seed', '42', '--output_dir', '/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k-Q2-2k', '--log_type', 'wandb', '--log_run_name', 'gemma-2b-s3-Q1-50k-Q2-2k', '--log_project', 'Inverse_Alignment', '--zero_stage', '3', '--offload', 'none', '--bf16', 'True', '--tf32', 'True', '--save_16bit']
[2025-05-29 01:27:15,748] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-05-29 01:27:15,961] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
[2025-05-29 01:27:16,276] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-05-29 01:27:16,289] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
[2025-05-29 01:27:16,295] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-05-29 01:27:16,304] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-05-29 01:27:16,316] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
[2025-05-29 01:27:17,087] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Warning: The cache directory for DeepSpeed Triton autotune, /home/hansirui_1st/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
[2025-05-29 01:27:22,047] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-29 01:27:22,267] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-29 01:27:22,267] [INFO] [comm.py:689:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[2025-05-29 01:27:22,400] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-29 01:27:22,529] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-29 01:27:22,539] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-29 01:27:22,544] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-29 01:27:22,601] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-29 01:27:23,056] [INFO] [comm.py:658:init_distributed] cdb=None
Set logger level to INFO.
[2025-05-29 01:27:30,335] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 01:27:32,883] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 01:27:32,936] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 01:27:32,936] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 01:27:32,936] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 01:27:32,936] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 01:27:32,936] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 01:27:32,936] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 01:27:33,057] [INFO] [partition_parameters.py:348:__exit__] finished initializing model - num_params = 165, num_elems = 3.03B
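The line above reflects ZeRO stage 3 behavior: parameters are partitioned across the 8 ranks as they are created, so rank 0 reports only aggregate counts (165 parameter tensors, ~3.03B elements) rather than holding the full model. A minimal sketch of that construction pattern with deepspeed.zero.Init; the toy module and trimmed config are stand-ins, since the real run builds the gemma-2b model inside safe_rlhf.finetune:

import torch
import deepspeed

ds_config = {  # trimmed stand-in for the engine config printed below
    "train_batch_size": 256,
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 8,
    "zero_optimization": {"stage": 3},
    "bf16": {"enabled": True},
}

# Parameters created inside zero.Init are partitioned across ranks at
# construction time, so no single rank materializes the full weights.
with deepspeed.zero.Init(config_dict_or_path=ds_config):
    model = torch.nn.Linear(4096, 4096)  # stand-in for the real network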
ninja: no work to do.
Time to load fused_adam op: 0.12027740478515625 seconds
[2025-05-29 01:27:36,266] [INFO] [logging.py:128:log_dist] [Rank 0] DeepSpeed info: version=0.16.4, git-hash=unknown, git-branch=unknown
[2025-05-29 01:27:36,266] [INFO] [comm.py:683:init_distributed] Distributed backend already initialized
[2025-05-29 01:27:36,266] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 01:27:36,271] [INFO] [logging.py:128:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
[2025-05-29 01:27:36,271] [INFO] [logging.py:128:log_dist] [Rank 0] Using client Optimizer as basic optimizer
[2025-05-29 01:27:36,271] [INFO] [logging.py:128:log_dist] [Rank 0] Removing param_group that has no 'params' in the basic Optimizer
[2025-05-29 01:27:36,273] [INFO] [logging.py:128:log_dist] [Rank 0] DeepSpeed Basic Optimizer = FusedAdam
[2025-05-29 01:27:36,273] [INFO] [utils.py:59:is_zero_supported_optimizer] Checking ZeRO support for optimizer=FusedAdam type=<class 'deepspeed.ops.adam.fused_adam.FusedAdam'>
[2025-05-29 01:27:36,273] [INFO] [logging.py:128:log_dist] [Rank 0] Creating fp16 ZeRO stage 3 optimizer, MiCS is enabled False, Hierarchical params gather False
[2025-05-29 01:27:36,273] [INFO] [logging.py:128:log_dist] [Rank 0] Creating torch.bfloat16 ZeRO stage 3 optimizer
Time to load fused_adam op: 0.20349788665771484 seconds
Time to load fused_adam op: 0.20235610008239746 seconds
Time to load fused_adam op: 0.2024226188659668 seconds
Time to load fused_adam op: 0.2024519443511963 seconds
Time to load fused_adam op: 0.20267391204833984 seconds
Time to load fused_adam op: 0.20266342163085938 seconds
[2025-05-29 01:27:36,349] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 01:27:36,349] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 01:27:36,349] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 01:27:36,349] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 01:27:36,349] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
Time to load fused_adam op: 0.20303678512573242 seconds
[2025-05-29 01:27:36,349] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 01:27:36,350] [INFO] [config.py:734:__init__] Config mesh_device None world_size = 8
[2025-05-29 01:27:36,479] [INFO] [utils.py:781:see_memory_usage] Stage 3 initialize beginning
[2025-05-29 01:27:36,480] [INFO] [utils.py:782:see_memory_usage] MA 0.58 GB Max_MA 2.54 GB CA 3.1 GB Max_CA 3 GB
[2025-05-29 01:27:36,480] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 496.36 GB, percent = 24.6%
[2025-05-29 01:27:36,481] [INFO] [stage3.py:170:__init__] Reduce bucket size 500000000
[2025-05-29 01:27:36,481] [INFO] [stage3.py:171:__init__] Prefetch bucket size 30000000
[2025-05-29 01:27:36,580] [INFO] [utils.py:781:see_memory_usage] DeepSpeedZeRoOffload initialize [begin]
[2025-05-29 01:27:36,581] [INFO] [utils.py:782:see_memory_usage] MA 0.58 GB Max_MA 0.58 GB CA 3.1 GB Max_CA 3 GB
[2025-05-29 01:27:36,581] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 496.36 GB, percent = 24.6%
Parameter Offload: Total persistent parameters: 75776 in 37 params
[2025-05-29 01:27:36,685] [INFO] [utils.py:781:see_memory_usage] DeepSpeedZeRoOffload initialize [end]
[2025-05-29 01:27:36,686] [INFO] [utils.py:782:see_memory_usage] MA 0.58 GB Max_MA 0.58 GB CA 3.1 GB Max_CA 3 GB
[2025-05-29 01:27:36,686] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 496.36 GB, percent = 24.6%
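The "Total persistent parameters: 75776 in 37 params" line above follows from param_persistence_threshold=10000 in the ZeRO config: small tensors (layer norms and the like) are kept whole on every rank instead of being partitioned, since gathering them repeatedly would cost more than storing them. A sketch of how such a count arises, assuming a model object and assuming the cutoff is element count below the threshold (the exact comparison is DeepSpeed-internal):

# Tensors smaller than the persistence threshold stay resident on all ranks.
threshold = 10_000  # param_persistence_threshold from the config below
persistent = [p for p in model.parameters() if p.numel() < threshold]
print(sum(p.numel() for p in persistent), "elements in", len(persistent), "tensors")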
[2025-05-29 01:27:36,785] [INFO] [utils.py:781:see_memory_usage] Before creating fp16 partitions
[2025-05-29 01:27:36,785] [INFO] [utils.py:782:see_memory_usage] MA 0.58 GB Max_MA 0.58 GB CA 3.1 GB Max_CA 3 GB
[2025-05-29 01:27:36,785] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 496.36 GB, percent = 24.6%
[2025-05-29 01:27:37,594] [INFO] [utils.py:781:see_memory_usage] After creating fp16 partitions: 2
[2025-05-29 01:27:37,595] [INFO] [utils.py:782:see_memory_usage] MA 0.58 GB Max_MA 0.58 GB CA 0.59 GB Max_CA 3 GB
[2025-05-29 01:27:37,595] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 496.39 GB, percent = 24.6%
[2025-05-29 01:27:37,695] [INFO] [utils.py:781:see_memory_usage] Before creating fp32 partitions
[2025-05-29 01:27:37,696] [INFO] [utils.py:782:see_memory_usage] MA 0.58 GB Max_MA 0.58 GB CA 0.59 GB Max_CA 1 GB
[2025-05-29 01:27:37,696] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 496.39 GB, percent = 24.6%
[2025-05-29 01:27:37,796] [INFO] [utils.py:781:see_memory_usage] After creating fp32 partitions
[2025-05-29 01:27:37,797] [INFO] [utils.py:782:see_memory_usage] MA 1.75 GB Max_MA 2.34 GB CA 2.34 GB Max_CA 2 GB
[2025-05-29 01:27:37,797] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 496.39 GB, percent = 24.6%
[2025-05-29 01:27:37,896] [INFO] [utils.py:781:see_memory_usage] Before initializing optimizer states
[2025-05-29 01:27:37,897] [INFO] [utils.py:782:see_memory_usage] MA 1.75 GB Max_MA 1.75 GB CA 2.34 GB Max_CA 2 GB
[2025-05-29 01:27:37,897] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 496.39 GB, percent = 24.6%
[2025-05-29 01:27:37,996] [INFO] [utils.py:781:see_memory_usage] After initializing optimizer states
[2025-05-29 01:27:37,996] [INFO] [utils.py:782:see_memory_usage] MA 1.75 GB Max_MA 2.92 GB CA 3.51 GB Max_CA 4 GB
[2025-05-29 01:27:37,996] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 496.39 GB, percent = 24.6%
[2025-05-29 01:27:37,997] [INFO] [stage3.py:534:_setup_for_real_optimizer] optimizer state initialized
[2025-05-29 01:27:38,131] [INFO] [utils.py:781:see_memory_usage] After initializing ZeRO optimizer
[2025-05-29 01:27:38,131] [INFO] [utils.py:782:see_memory_usage] MA 3.27 GB Max_MA 5.22 GB CA 5.46 GB Max_CA 5 GB
[2025-05-29 01:27:38,131] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 496.39 GB, percent = 24.6%
[2025-05-29 01:27:38,131] [INFO] [logging.py:128:log_dist] [Rank 0] DeepSpeed Final Optimizer = DeepSpeedZeroOptimizer_Stage3
[2025-05-29 01:27:38,132] [INFO] [logging.py:128:log_dist] [Rank 0] DeepSpeed using client LR scheduler
[2025-05-29 01:27:38,132] [INFO] [logging.py:128:log_dist] [Rank 0] DeepSpeed LR Scheduler = <torch.optim.lr_scheduler.LambdaLR object at 0x155114329d10>
[2025-05-29 01:27:38,132] [INFO] [logging.py:128:log_dist] [Rank 0] step=0, skipped=0, lr=[1e-05, 1e-05], mom=[(0.9, 0.95), (0.9, 0.95)]
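The client LR scheduler logged above is a torch LambdaLR because --lr_scheduler_type constant with --lr_warmup_ratio 0 reduces to a multiplier of 1.0 at every step. A minimal equivalent sketch; the tiny parameter list is a stand-in:

import torch

params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in parameters
opt = torch.optim.AdamW(params, lr=1e-5, betas=(0.9, 0.95))  # matches mom=[(0.9, 0.95)]
# Constant schedule: the multiplier is always 1.0, so lr stays at 1e-5.
sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda=lambda step: 1.0)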
[2025-05-29 01:27:38,132] [INFO] [config.py:1001:print] DeepSpeedEngine configuration:
[2025-05-29 01:27:38,132] [INFO] [config.py:1005:print] activation_checkpointing_config {
"partition_activations": false,
"contiguous_memory_optimization": false,
"cpu_checkpointing": false,
"number_checkpoints": null,
"synchronize_checkpoint_boundary": false,
"profile": false
}
[2025-05-29 01:27:38,132] [INFO] [config.py:1005:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'intra_op_parallelism': 1, 'single_submit': False, 'overlap_events': True, 'use_gds': False}
[2025-05-29 01:27:38,132] [INFO] [config.py:1005:print] amp_enabled .................. False
[2025-05-29 01:27:38,132] [INFO] [config.py:1005:print] amp_params ................... False
[2025-05-29 01:27:38,132] [INFO] [config.py:1005:print] autotuning_config ............ {
"enabled": false,
"start_step": null,
"end_step": null,
"metric_path": null,
"arg_mappings": null,
"metric": "throughput",
"model_info": null,
"results_dir": "autotuning_results",
"exps_dir": "autotuning_exps",
"overwrite": true,
"fast": true,
"start_profile_step": 3,
"end_profile_step": 5,
"tuner_type": "gridsearch",
"tuner_early_stopping": 5,
"tuner_num_trials": 50,
"model_info_path": null,
"mp_size": 1,
"max_train_batch_size": null,
"min_train_batch_size": 1,
"max_train_micro_batch_size_per_gpu": 1.024000e+03,
"min_train_micro_batch_size_per_gpu": 1,
"num_tuning_micro_batch_sizes": 3
}
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] bfloat16_enabled ............. True
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] bfloat16_immediate_grad_update False
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] checkpoint_parallel_write_pipeline False
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] checkpoint_tag_validation_enabled True
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] checkpoint_tag_validation_fail False
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x155104699d50>
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] communication_data_type ...... None
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] curriculum_enabled_legacy .... False
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] curriculum_params_legacy ..... False
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] data_efficiency_enabled ...... False
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] dataloader_drop_last ......... False
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] disable_allgather ............ False
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] dump_state ................... False
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] dynamic_loss_scale_args ...... None
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] eigenvalue_enabled ........... False
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] eigenvalue_gas_boundary_resolution 1
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] eigenvalue_layer_name ........ bert.encoder.layer
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] eigenvalue_layer_num ......... 0
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] eigenvalue_max_iter .......... 100
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] eigenvalue_stability ......... 1e-06
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] eigenvalue_tol ............... 0.01
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] eigenvalue_verbose ........... False
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] elasticity_enabled ........... False
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] flops_profiler_config ........ {
"enabled": false,
"recompute_fwd_factor": 0.0,
"profile_step": 1,
"module_depth": -1,
"top_modules": 1,
"detailed": true,
"output_file": null
}
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] fp16_auto_cast ............... None
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] fp16_enabled ................. False
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] fp16_master_weights_and_gradients False
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] global_rank .................. 0
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] grad_accum_dtype ............. None
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] gradient_accumulation_steps .. 8
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] gradient_clipping ............ 1.0
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] gradient_predivide_factor .... 1.0
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] graph_harvesting ............. False
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] initial_dynamic_scale ........ 1
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] load_universal_checkpoint .... False
[2025-05-29 01:27:38,133] [INFO] [config.py:1005:print] loss_scale ................... 1.0
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] memory_breakdown ............. False
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] mics_hierarchial_params_gather False
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] mics_shard_size .............. -1
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') comet=CometConfig(enabled=False, samples_log_interval=100, project=None, workspace=None, api_key=None, experiment_name=None, experiment_key=None, online=None, mode=None) wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName')
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] nebula_config ................ {
"enabled": false,
"persistent_storage_path": null,
"persistent_time_interval": 100,
"num_of_version_in_retention": 2,
"enable_nebula_load": true,
"load_path": null
}
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] optimizer_legacy_fusion ...... False
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] optimizer_name ............... None
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] optimizer_params ............. None
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0, 'pipe_partitioned': True, 'grad_partitioned': True}
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] pld_enabled .................. False
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] pld_params ................... False
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] prescale_gradients ........... False
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] scheduler_name ............... None
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] scheduler_params ............. None
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] seq_parallel_communication_data_type torch.float32
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] sparse_attention ............. None
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] sparse_gradients_enabled ..... False
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] steps_per_print .............. 10
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] tensor_parallel_config ....... dtype=torch.float16 autotp_size=0 tensor_parallel=TPConfig(tp_size=1, tp_grain_size=1, mpu=None, tp_group=None) injection_policy_tuple=None keep_module_on_host=False replace_with_kernel_inject=False
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] timers_config ................ enabled=True synchronized=True
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] train_batch_size ............. 256
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] train_micro_batch_size_per_gpu 4
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] use_data_before_expert_parallel_ False
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] use_node_local_storage ....... False
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] wall_clock_breakdown ......... False
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] weight_quantization_config ... None
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] world_size ................... 8
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] zero_allow_untested_optimizer False
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] zero_config .................. stage=3 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500000000 use_multi_rank_bucket_allreduce=True allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=True load_from_fp32_weights=True elastic_checkpoint=False offload_param=DeepSpeedZeroOffloadParamConfig(device='none', nvme_path=None, buffer_count=5, buffer_size=100000000, max_in_cpu=1000000000, pin_memory=False) offload_optimizer=DeepSpeedZeroOffloadOptimizerConfig(device='none', nvme_path=None, buffer_count=4, pin_memory=False, pipeline_read=False, pipeline_write=False, fast_init=False, ratio=1.0) sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=30000000 param_persistence_threshold=10000 model_persistence_threshold=9223372036854775807 max_live_parameters=30000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=True module_granularity_threshold=0 use_all_reduce_for_fetch_params=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=False zero_quantized_nontrainable_weights=False zero_quantized_gradients=False zeropp_loco_param=None mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=False pipeline_loading_checkpoint=False override_module_apply=True log_trace_cache_warnings=False
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] zero_enabled ................. True
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] zero_force_ds_cpu_optimizer .. True
[2025-05-29 01:27:38,134] [INFO] [config.py:1005:print] zero_optimization_stage ...... 3
[2025-05-29 01:27:38,134] [INFO] [config.py:991:print_user_config] json = {
"train_batch_size": 256,
"train_micro_batch_size_per_gpu": 4,
"gradient_accumulation_steps": 8,
"steps_per_print": 10,
"zero_optimization": {
"stage": 3,
"offload_param": {
"device": "none"
},
"offload_optimizer": {
"device": "none"
},
"param_persistence_threshold": 1.000000e+04,
"max_live_parameters": 3.000000e+07,
"prefetch_bucket_size": 3.000000e+07,
"memory_efficient_linear": false,
"gather_16bit_weights_on_model_save": true
},
"gradient_clipping": 1.0,
"prescale_gradients": false,
"wall_clock_breakdown": false,
"hybrid_engine": {
"enabled": false,
"max_out_tokens": 512,
"inference_tp_size": 1,
"release_inference_cache": false,
"pin_parameters": true,
"tp_gather_partition_size": 8
},
"bf16": {
"enabled": true
}
}
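The JSON above is the client config handed to deepspeed.initialize. A minimal sketch of that wiring, with a toy module and a client AdamW standing in for safe_rlhf's model and FusedAdam (per the "Using client Optimizer" line earlier); the config dict below repeats the key fields of the JSON just printed:

import torch
import deepspeed

ds_config = {
    "train_batch_size": 256,
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 8,
    "zero_optimization": {
        "stage": 3,
        "offload_param": {"device": "none"},
        "offload_optimizer": {"device": "none"},
        "gather_16bit_weights_on_model_save": True,
    },
    "gradient_clipping": 1.0,
    "bf16": {"enabled": True},
}

model = torch.nn.Linear(8, 8)  # stand-in for the gemma-2b module
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, betas=(0.9, 0.95))

# Returns a DeepSpeedEngine that owns ZeRO-3 partitioning, bf16 casting,
# and gradient accumulation as configured.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    optimizer=optimizer,  # client optimizer, as in this run
    config=ds_config,
)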
***** Running training *****
Saving model to "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k-Q2-2k" ...
Saving 16-bit model...
[2025-05-29 01:28:09,232] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step7 is ready now!
[2025-05-29 01:28:09,232] [INFO] [logging.py:128:log_dist] [Rank 0] [Torch] Checkpoint global_step7 is about to be saved!
[2025-05-29 01:28:09,232] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step7 is ready now!
[2025-05-29 01:28:09,232] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step7 is ready now!
[2025-05-29 01:28:09,232] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step7 is ready now!
[2025-05-29 01:28:09,232] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step7 is ready now!
[2025-05-29 01:28:09,232] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step7 is ready now!
[2025-05-29 01:28:09,233] [INFO] [engine.py:3804:save_16bit_model] Saving model weights to /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k-Q2-2k/pytorch_model.bin, tag: global_step7
[2025-05-29 01:28:09,233] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k-Q2-2k/pytorch_model.bin...
[2025-05-29 01:28:09,233] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step7 is ready now!
[2025-05-29 01:28:11,878] [INFO] [launch.py:351:main] Process 1761862 exits successfully.
[2025-05-29 01:28:11,878] [INFO] [launch.py:351:main] Process 1761863 exits successfully.
[2025-05-29 01:28:11,878] [INFO] [launch.py:351:main] Process 1761864 exits successfully.
[2025-05-29 01:28:11,878] [INFO] [launch.py:351:main] Process 1761865 exits successfully.
[2025-05-29 01:28:12,879] [INFO] [launch.py:351:main] Process 1761860 exits successfully.
[2025-05-29 01:28:12,879] [INFO] [launch.py:351:main] Process 1761859 exits successfully.
[2025-05-29 01:28:12,879] [INFO] [launch.py:351:main] Process 1761861 exits successfully.
[2025-05-29 01:28:13,949] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k-Q2-2k/pytorch_model.bin.
[2025-05-29 01:28:13,949] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step7 is ready now!
Model saved!
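The 16-bit save above corresponds to DeepSpeedEngine.save_16bit_model (the engine.py save_16bit_model call in the log), which all-gathers the ZeRO-3 parameter shards and writes one consolidated bf16 checkpoint; it requires "gather_16bit_weights_on_model_save": true, as set in this run's config. Continuing the initialize sketch above:

# All ranks participate in the gather; rank 0 writes
# <output_dir>/pytorch_model.bin.
engine.save_16bit_model(
    "/aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k-Q2-2k",
    save_filename="pytorch_model.bin",
)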
wandb:
wandb: 🚀 View run gemma-2b-s3-Q1-50k-Q2-2k at: https://wandb.ai/xtom/Inverse_Alignment/runs/1r9csp58
wandb: Find logs at: ../../../../../../aifs4su/hansirui_1st/boyuan/resist/setting3-safety/gemma-2b/gemma-2b-s3-Q1-50k-Q2-2k/wandb/run-20250529_012738-1r9csp58/logs
[2025-05-29 01:28:17,880] [INFO] [launch.py:351:main] Process 1761858 exits successfully.