W0511 19:39:13.644826 438207 site-packages/torch/distributed/run.py:793]
W0511 19:39:13.644826 438207 site-packages/torch/distributed/run.py:793] *****************************************
W0511 19:39:13.644826 438207 site-packages/torch/distributed/run.py:793] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0511 19:39:13.644826 438207 site-packages/torch/distributed/run.py:793] *****************************************
[2025-05-11 19:39:24,751] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-05-11 19:39:24,841] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-05-11 19:39:24,865] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-05-11 19:39:24,870] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-05-11 19:39:24,870] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-05-11 19:39:24,880] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-05-11 19:39:24,881] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-05-11 19:39:27,266] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-11 19:39:27,266] [INFO] [comm.py:689:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[2025-05-11 19:39:27,462] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-11 19:39:27,480] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-11 19:39:27,491] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-11 19:39:27,619] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-11 19:39:27,623] [INFO] [comm.py:658:init_distributed] cdb=None
[2025-05-11 19:39:27,906] [INFO] [comm.py:658:init_distributed] cdb=None
has image in dataset
using:
You are attempting to use Flash Attention 2.0 without specifying a torch dtype. This might lead to unexpected behaviour
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in Qwen2_5_VisionTransformerPretrainedModel is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument.
Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", torch_dtype=torch.float16)`
Loading checkpoint shards:   0%|          | 0/4 [00:00<?, ?it/s]
[2025-05-11 19:40:53,445] [INFO] [config.py:1005:print] communication_data_type ...... None
[2025-05-11 19:40:53,445] [INFO] [config.py:1005:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
[2025-05-11 19:40:53,445] [INFO] [config.py:1005:print] curriculum_enabled_legacy .... False
[2025-05-11 19:40:53,445] [INFO] [config.py:1005:print] curriculum_params_legacy ..... False
[2025-05-11 19:40:53,445] [INFO] [config.py:1005:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}
[2025-05-11 19:40:53,445] [INFO] [config.py:1005:print] data_efficiency_enabled ...... False
[2025-05-11 19:40:53,445] [INFO] [config.py:1005:print] dataloader_drop_last ......... False
[2025-05-11 19:40:53,445] [INFO] [config.py:1005:print] disable_allgather ............ False
[2025-05-11 19:40:53,445] [INFO] [config.py:1005:print] dump_state ................... False
[2025-05-11 19:40:53,445] [INFO] [config.py:1005:print] dynamic_loss_scale_args ...... None
[2025-05-11 19:40:53,445] [INFO] [config.py:1005:print] eigenvalue_enabled ........... False
[2025-05-11 19:40:53,445] [INFO] [config.py:1005:print] eigenvalue_gas_boundary_resolution 1
[2025-05-11 19:40:53,445] [INFO] [config.py:1005:print] eigenvalue_layer_name ........ bert.encoder.layer
[2025-05-11 19:40:53,445] [INFO] [config.py:1005:print] eigenvalue_layer_num ......... 0
[2025-05-11 19:40:53,445] [INFO] [config.py:1005:print] eigenvalue_max_iter .......... 100
[2025-05-11 19:40:53,445] [INFO] [config.py:1005:print] eigenvalue_stability ......... 1e-06
[2025-05-11 19:40:53,445] [INFO] [config.py:1005:print] eigenvalue_tol ............... 0.01
[2025-05-11 19:40:53,445] [INFO] [config.py:1005:print] eigenvalue_verbose ........... False
[2025-05-11 19:40:53,445] [INFO] [config.py:1005:print] elasticity_enabled ........... False
[2025-05-11 19:40:53,446] [INFO] [config.py:1005:print] flops_profiler_config ........ { "enabled": false, "recompute_fwd_factor": 0.0, "profile_step": 1, "module_depth": -1, "top_modules": 1, "detailed": true, "output_file": null }
[2025-05-11 19:40:53,446] [INFO] [config.py:1005:print] fp16_auto_cast ............... None
[2025-05-11 19:40:53,446] [INFO] [config.py:1005:print] fp16_enabled ................. False
[2025-05-11 19:40:53,446] [INFO] [config.py:1005:print] fp16_master_weights_and_gradients False
[2025-05-11 19:40:53,446] [INFO] [config.py:1005:print] global_rank .................. 0
[2025-05-11 19:40:53,446] [INFO] [config.py:1005:print] grad_accum_dtype ............. None
[2025-05-11 19:40:53,446] [INFO] [config.py:1005:print] gradient_accumulation_steps .. 2
[2025-05-11 19:40:53,446] [INFO] [config.py:1005:print] gradient_clipping ............ 1.0
[2025-05-11 19:40:53,446] [INFO] [config.py:1005:print] gradient_predivide_factor .... 1.0
[2025-05-11 19:40:53,446] [INFO] [config.py:1005:print] graph_harvesting ............. False
[2025-05-11 19:40:53,446] [INFO] [config.py:1005:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8
[2025-05-11 19:40:53,446] [INFO] [config.py:1005:print] initial_dynamic_scale ........ 1
[2025-05-11 19:40:53,446] [INFO] [config.py:1005:print] load_universal_checkpoint .... False
[2025-05-11 19:40:53,446] [INFO] [config.py:1005:print] loss_scale ................... 1.0
[2025-05-11 19:40:53,446] [INFO] [config.py:1005:print] memory_breakdown ............. False
[2025-05-11 19:40:53,446] [INFO] [config.py:1005:print] mics_hierarchial_params_gather False
[2025-05-11 19:40:53,446] [INFO] [config.py:1005:print] mics_shard_size .............. -1
[2025-05-11 19:40:53,446] [INFO] [config.py:1005:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') comet=CometConfig(enabled=False, samples_log_interval=100, project=None, workspace=None, api_key=None, experiment_name=None, experiment_key=None, online=None, mode=None) wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName')
[2025-05-11 19:40:53,446] [INFO] [config.py:1005:print] nebula_config ................ { "enabled": false, "persistent_storage_path": null, "persistent_time_interval": 100, "num_of_version_in_retention": 2, "enable_nebula_load": true, "load_path": null }
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] optimizer_legacy_fusion ...... False
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] optimizer_name ............... None
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] optimizer_params ............. None
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0, 'pipe_partitioned': True, 'grad_partitioned': True}
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] pld_enabled .................. False
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] pld_params ................... False
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] prescale_gradients ........... False
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] scheduler_name ............... None
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] scheduler_params ............. None
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] seq_parallel_communication_data_type torch.float32
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] sparse_attention ............. None
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] sparse_gradients_enabled ..... False
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] steps_per_print .............. inf
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] tensor_parallel_config ....... dtype=torch.float16 autotp_size=0 tensor_parallel=TPConfig(tp_size=1, tp_grain_size=1, mpu=None, tp_group=None) injection_policy_tuple=None keep_module_on_host=False replace_with_kernel_inject=False
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] timers_config ................ enabled=True synchronized=True
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] train_batch_size ............. 14
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] train_micro_batch_size_per_gpu 1
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] use_data_before_expert_parallel_ False
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] use_node_local_storage ....... False
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] wall_clock_breakdown ......... True
[2025-05-11 19:40:53,447] [INFO] [config.py:1005:print] weight_quantization_config ... None
[2025-05-11 19:40:53,448] [INFO] [config.py:1005:print] world_size ................... 7
[2025-05-11 19:40:53,448] [INFO] [config.py:1005:print] zero_allow_untested_optimizer False
[2025-05-11 19:40:53,448] [INFO] [config.py:1005:print] zero_config .................. stage=0 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=1000000000 use_multi_rank_bucket_allreduce=True allgather_partitions=True allgather_bucket_size=1000000000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=None sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50000000 param_persistence_threshold=100000 model_persistence_threshold=9223372036854775807 max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=False module_granularity_threshold=0 use_all_reduce_for_fetch_params=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=False zero_quantized_nontrainable_weights=False zero_quantized_gradients=False zeropp_loco_param=None mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=True pipeline_loading_checkpoint=False override_module_apply=True log_trace_cache_warnings=False
[2025-05-11 19:40:53,448] [INFO] [config.py:1005:print] zero_enabled ................. False
[2025-05-11 19:40:53,448] [INFO] [config.py:1005:print] zero_force_ds_cpu_optimizer .. True
[2025-05-11 19:40:53,448] [INFO] [config.py:1005:print] zero_optimization_stage ...... 0
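For reference, the batch-size fields in the config above are mutually consistent: DeepSpeed defines the effective batch as micro-batch per GPU × gradient-accumulation steps × world size, which here is 1 × 2 × 7 = 14. A quick sanity check with the printed values:

```python
# Effective DeepSpeed batch size = micro_batch_per_gpu * grad_accum_steps * world_size.
# Values copied from the config print above (7 ranks, micro-batch 1, accumulation 2).
train_micro_batch_size_per_gpu = 1
gradient_accumulation_steps = 2
world_size = 7

train_batch_size = train_micro_batch_size_per_gpu * gradient_accumulation_steps * world_size
assert train_batch_size == 14  # matches "train_batch_size ............. 14"
```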
[2025-05-11 19:40:53,448] [INFO] [config.py:991:print_user_config] json = {
    "zero_optimization": {
        "stage": 0,
        "allgather_partitions": true,
        "allgather_bucket_size": 1.000000e+09,
        "overlap_comm": false,
        "reduce_scatter": true,
        "reduce_bucket_size": 1.000000e+09,
        "contiguous_gradients": true
    },
    "fp16": {
        "enabled": false,
        "auto_cast": true,
        "loss_scale": 0,
        "initial_scale_power": 32,
        "loss_scale_window": 1000,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    "bf16": {
        "enabled": true
    },
    "gradient_accumulation_steps": 2,
    "gradient_clipping": 1.0,
    "steps_per_print": inf,
    "train_batch_size": 14,
    "train_micro_batch_size_per_gpu": 1,
    "wall_clock_breakdown": true
}
wandb: Currently logged in as: kolerk to https://api.wandb.ai. Use `wandb login --relogin` to force relogin
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
wandb: Tracking run with wandb version 0.19.6
wandb: Run data is saved locally in /export3/huangdongchi/hdc_debug/R1-V/src/scripts/wandb/run-20250511_194142-30tmzupd
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run gen_mix_math_7b
wandb: ⭐️ View project at https://wandb.ai/kolerk/huggingface
wandb: 🚀 View run at https://wandb.ai/kolerk/huggingface/runs/30tmzupd
  0%|          | 0/287 [00:00<?, ?it/s]
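As the wandb banner above notes, syncing to wandb.ai can be turned off for local debugging runs. A small sketch, assuming the standard WANDB_MODE environment variable; run data is still written locally under the wandb/ directory shown in the log:

```python
# Hedged sketch: disable W&B cloud syncing before the trainer initializes the run.
# Equivalent to running `wandb offline` in the shell, as the log message suggests.
import os
os.environ["WANDB_MODE"] = "offline"
```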