st179568
Solved by pietern in post #2 Hi! See https://pytorch.org/docs/stable/distributed.html#environment-variable-initialization for an overview of the env initialization method. Also see the help output of the launch utility (run with --help). You’ll find that you’re silently trying to use the same port on localhost for the processe…
st179569
Hi! See https://pytorch.org/docs/stable/distributed.html#environment-variable-initialization for an overview of the env initialization method. Also see the help output of the launch utility (run with --help). You’ll find that you’re silently trying to use the same port on localhost for the processes in a single task to find each other. Specify a different port for each task and it will work.
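For reference, a minimal sketch of the environment-variable initialization method; the address, port, world size and rank below are placeholders that each process/task would set for itself:

```python
import os
import torch.distributed as dist

# Placeholders: every independent task needs its own MASTER_PORT so the
# rendezvous of one task does not collide with another task on the same host.
os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = "29500"   # e.g. 29501 for the second task, etc.
os.environ["WORLD_SIZE"] = "2"
os.environ["RANK"] = "0"              # 0 or 1, depending on the process

dist.init_process_group(backend="gloo")  # default env:// reads the variables above
print(dist.get_rank(), dist.get_world_size())
```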
st179570
I have set --master_port for each task, but a RuntimeError is raised:
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing its output (the return value of `forward`). You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`. If you already have this argument set, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable). (prepare_for_backward at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:408)
Setting find_unused_parameters=True doesn't work.
st179571
If you already have this argument set, then the distributed data parallel module wasn’t able to locate the output tensors in the return value of your module’s forward function. Please include the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable).
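For reference, a minimal sketch of the situation the message describes, with a hypothetical two-branch model and a single-process setup used only for illustration (address/port are placeholders):

```python
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="nccl", rank=0, world_size=1)

class TwoBranch(nn.Module):
    # Hypothetical model: one branch can be skipped in forward(),
    # so its parameters get no gradient in that iteration.
    def __init__(self):
        super().__init__()
        self.used = nn.Linear(10, 10)
        self.skipped = nn.Linear(10, 10)

    def forward(self, x, use_second=False):
        out = self.used(x)
        if use_second:              # when False, self.skipped is unused
            out = out + self.skipped(x)
        return out

model = TwoBranch().cuda()
ddp_model = DDP(model, device_ids=[0], find_unused_parameters=True)
out = ddp_model(torch.randn(4, 10, device="cuda"))
out.sum().backward()                # works because unused params are detected
```

If the flag is already set and the error persists, the structure of what `forward` returns (list, dict, nested containers) is the next thing to look at, as the error text says.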
st179572
Hi I’m trying to parallelize a somewhat large encoder-decoder model. I have the input data on GPU 0 fed into the encoder, then I transfer the latent code to GPU 1 and feed it to the decoder, then compute the losses on GPU 1. One particular loss is implemented as a torch.autograd.Function and something triggers a device-side assert with out of bound indices in that snippet: batchV = V.view((-1, 3)) # Compute half cotangents and double the triangle areas C, TwoA = half_cotangent(V, faces) batchC = C.view((-1, 3)) # Adjust face indices to stack: offset = torch.arange(V.shape[0], device=V.device).view((-1, 1, 1)) * V.shape[1] # Add the offset to the faces passed as parameters and save in a different tensor F = faces + offset batchF = F.view((-1, 3)) # import ipdb; ipdb.set_trace() # Fails here if not run with CUDA_LAUNCH_BLOCKING=1 rows = batchF[:, [1, 2, 0]].view( 1, -1 ) # 1,2,0 i.e to vertex 2-3 associate cot(23) cols = batchF[:, [2, 0, 1]].view( 1, -1 ) The code runs fine for 2 samples, then on the third I get the device-side assert. The dataloader is shuffling the samples, yet it consistently fails on the third sample. Debugging in pdb gives me the following traceback: Traceback (most recent call last): File "train.py", line 593, in <module> exp_flag, File "train.py", line 524, in compute_losses_real exp_real_and, File "<thefile>.py", line 23, in __call__ Lx = self.laplacian.apply(V, self.F[mask]) File "<thefile>.py", line 64, in forward rows = batchF[:, [1, 2, 0]].view( RuntimeError: size is inconsistent with indices: for dim 0, size is 7380 but found index 4684893058448109737 I then ran the same exact code with CUDA_LAUNCH_BLOCKING=1 and the code doesn’t crash, and the loss decreases. What I already tried: In case this might be related, I disabled pinned memory and non-blocking data transfers from host to GPU, but the problem persists. I added torch.cuda.synchronize() right above rows = batchF[:, [1, 2, 0]].view( with no success. This code works fine when the model is on a single GPU. Any help would be much appreciated!
st179573
Hi, It does look like we’re missing a synchronization point… Could you provide a small code sample that triggers this issue so that we can reproduce locally please?
st179574
Hi, would you mind providing your complete code? from the snippet you provided, it is hard to say the root cause.
st179575
Hi, I have a program leverage torch.nn.DataParallel to run on multiple GPUs. I tested it on a system with 3 GPUs==1080ti using pytorch==1.2 and cuda==10.0. Everything is perfect, program will run and leverage all 3 GPUs. Now, I’m going to run it on a new server with 3GPUs==2080ti and the same config of pytorch and cuda. I got the following error: File "/nfs/brm/main.py", line 384, in <module> train_loss = model.fit(interactions=ds_train, verbose=True) File "/nfs/brm/implicit.py", line 255, in fit positive_prediction = self._net(batch_user, batch_item) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 146, in forward "them on device: {}".format(self.src_device_obj, t.device)) RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1 The error is clear, it seems that some part of the model or inputs are in another GPU. But it’s not the case as it runs on another server perfectly. This is the way that I’m using DataParallel: self.device = torch.device( "cuda" if torch.cuda.is_available() else "cpu") self._net.to(self.device) #_net is my model self._net = torch.nn.DataParallel(self._net) Also I’m using the same way to move model’s input into GPUs (.to(self.device)). The program on the new server is run if I ask for only one GPU. But it fails when I ask for multiple (e.g.3 GPUs). Do you have any idea to investigate the problem?
st179576
Solved by Amirhj in post #7 It was a bug in CUDA10, just upgrading to CUDA 10.1 solved the problem.
st179577
Amirhj: self.device = torch.device( “cuda” if torch.cuda.is_available() else “cpu”) Can you try this instead? This would ensure we always allocate parameters on device 0 self.device = torch.device( "cuda:0" if torch.cuda.is_available() else "cpu")
st179578
Thanks for your answer. It changes the error message to another one: File "/nfs/brm/main.py", line 385, in <module> train_loss = model.fit(interactions=ds_train, verbose=True) File "/nfs/brm/implicit.py", line 255, in fit positive_prediction = self._net(batch_user, batch_item) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 152, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply output.reraise() File "/usr/local/lib/python3.6/dist-packages/torch/_utils.py", line 369, in reraise raise self.exc_type(msg) RuntimeError: Caught RuntimeError in replica 1 on device 1. Original Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker output = module(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/nfs/brm/representations.py", line 95, in forward attention_mask=input_mask)[0] File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_distilbert.py", line 592, in forward head_mask=head_mask) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_distilbert.py", line 461, in forward embedding_output = self.embeddings(input_ids) # (bs, seq_length, dim) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_distilbert.py", line 92, in forward word_embeddings = self.word_embeddings(input_ids) # (bs, max_seq_length, dim) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/sparse.py", line 114, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1467, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: arguments are located on different GPUs at /pytorch/aten/src/THC/generic/THCTensorIndex.cu:397
st179579
Could you share the code you’re running so that I can try to repro this locally and see what the issue might be?
st179580
Unfortunately, the source code depends on different modules and large data, it’s not useful for debugging purpose. In addition, the code perfectly run on other server when I connect to it via ssh and directly run my python. The new server is based on Kubernetes and OpenShift and my code is deployed via a docker container. I think that it causes the misidentification of GPUs by DataParallel. Did you have any related experiment?
st179581
Are the inputs you feed to the model on the same device (cuda:0) when you run the training loop? Also, would it be possible for you to come up with a small example that reproduces the problem you’re seeing? It would be easier to debug the issue that way.
st179582
Would DistributedDataParallel wrapper cost much GPU memory? In my case, the model cost around 7300MB when loaded into a GPU. However, when wrapped in DistributedDataParallel and run in the distributed mode, it costs 22000MB GPU momery. Is it caused by the DistributedDataParallel wrapper? Are there any methods to save memory usage? Thanks!
st179583
I am using pointpillars from this repo https://github.com/traveller59/second.pytorch 2, with DataParallel changed to DistributedDataParallel, the batch size is 2 per gpu.
st179584
How many nodes are you using and how many GPUs per node? Also, which communication backend are you using? Also, it might be helpful to debug if you could share the code you’re using to initialize and train using DistributedDataParallel?
st179585
Hi, I’m trying to run my code on a SLURM cluster with the following configuration - 1 node, 2 Nvidia 1080Ti GPUs, 8 CPUs and 8GBs of RAM per CPU. I’m implementing ResNeXt, with a dataset that contains about 1million 32x32 images. When I try running this code with torchvision.datasets.ImageFolder and num_workers = 4-8, it throws an “exceeded virtual memory” error by requesting for 341GB of data! That seems a little absurd. This error is thrown at the first trainLoader loop as it is preparing the first batch for the code. Initially, I assumed this was an error with my program, but my program works just fine with num_workers = 8 on Google Colab. My program only works when I set num_workers=0. At num_workers=2, it works for 2 epochs before throwing the same error. Any solution for this would really be appreciated.
st179586
Are you using DistributedDataParallel or DataParallel for this? It seems like your question is more related to torchvision or the PyTorch dataset/dataloader (the ‘distributed’ tag is not appropriate for this). Maybe tag this with ‘vision’?
st179587
Hello! I’m experimenting with distributed training using NVIDIA Megatron-LM project 4. And I get an error when running the script bash scripts/pretrain_gpt2_model_parallel.sh Traceback looks like File "pretrain_gpt2.py", line 625, in <module> main() File "pretrain_gpt2.py", line 569, in main args.eod_token = get_train_val_test_data(args) File "pretrain_gpt2.py", line 536, in get_train_val_test_data group=mpu.get_model_parallel_group()) File "/home/ubuntu/Env/ml/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 810, in broadcast work = group.broadcast([tensor], opts) RuntimeError: Broken pipe Traceback (most recent call last): File "pretrain_gpt2.py", line 625, in <module> main() File "pretrain_gpt2.py", line 569, in main args.eod_token = get_train_val_test_data(args) File "pretrain_gpt2.py", line 536, in get_train_val_test_data group=mpu.get_model_parallel_group()) File "/home/ubuntu/Env/ml/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 810, in broadcast work = group.broadcast([tensor], opts) RuntimeError: Broken pipe Traceback (most recent call last): File "pretrain_gpt2.py", line 625, in <module> main() File "pretrain_gpt2.py", line 569, in main args.eod_token = get_train_val_test_data(args) File "pretrain_gpt2.py", line 536, in get_train_val_test_data group=mpu.get_model_parallel_group()) File "/home/ubuntu/Env/ml/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 810, in broadcast work = group.broadcast([tensor], opts) RuntimeError: Broken pipe Traceback (most recent call last): File "pretrain_gpt2.py", line 625, in <module> main() File "pretrain_gpt2.py", line 569, in main args.eod_token = get_train_val_test_data(args) File "pretrain_gpt2.py", line 536, in get_train_val_test_data group=mpu.get_model_parallel_group()) File "/home/ubuntu/Env/ml/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 810, in broadcast work = group.broadcast([tensor], opts) RuntimeError: Broken pipe The error occurs in the file pretrain_gpt2.py 1 Could anybody help me with this issue?
st179588
I would recommend to create an issue in the GitHub repository directly, as the authors of the code might help there.
st179589
I did, but unfortunately I didn’t get an answer. The error traceback refers to lib/python3.6/site-packages/torch/distributed/distributed_c10d.py . That’s why I thought I might be able to get a hint here.
st179590
I assume you’ve created this issue 18? It looks like your datasets are empty and the actual error message is: TypeError: iteration over a 0-d array when calculating the dataset lengths. I’ll also post in the issue directly.
st179591
I have a neural network with two separate vectors as inputs, similar to this question. Both inputs are encoded and then processed further, but up to that point the encoding of the two inputs is completely independent. How can I parallelize the encoding phase in PyTorch? A minimal example of my code:

class MyModel(nn.Module):
    def __init__(self, params):
        super(MyModel, self).__init__()
        self.encoder1 = Encoder1()
        self.encoder2 = Encoder2()

    def forward(self, x1, x2):
        # how to calculate both encodings in parallel?
        enc1 = self.encoder1(x1)
        enc2 = self.encoder2(x2)
        return some_func(enc1, enc2)

I checked the PyTorch forums and learned a bit about DataParallel, but I am not sure how to fit it to my case, and on only 1 GPU.
st179592
DataParallel won’t help here if there is only one GPU, but you could use multiple CUDA streams. For example,

s0 = torch.cuda.Stream()
s1 = torch.cuda.Stream()
with torch.cuda.stream(s0):
    enc1 = self.encoder1(x1)
with torch.cuda.stream(s1):
    enc2 = self.encoder2(x2)
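A self-contained sketch of the same pattern, including the stream synchronization that is usually needed around it (the two Linear encoders and shapes are just placeholders):

```python
import torch
import torch.nn as nn

encoder1 = nn.Linear(16, 8).cuda()
encoder2 = nn.Linear(16, 8).cuda()
x1 = torch.randn(4, 16, device="cuda")
x2 = torch.randn(4, 16, device="cuda")

s0 = torch.cuda.Stream()
s1 = torch.cuda.Stream()
# The side streams must see the inputs that were produced on the default stream...
s0.wait_stream(torch.cuda.current_stream())
s1.wait_stream(torch.cuda.current_stream())

with torch.cuda.stream(s0):
    enc1 = encoder1(x1)
with torch.cuda.stream(s1):
    enc2 = encoder2(x2)

# ...and the default stream must wait for both side streams before the
# results are consumed, otherwise later ops may race with the encoders.
torch.cuda.current_stream().wait_stream(s0)
torch.cuda.current_stream().wait_stream(s1)
out = torch.cat([enc1, enc2], dim=1)
```

Whether this actually overlaps in practice depends on how much GPU work each encoder issues; small kernels often fill the device from a single stream anyway.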
st179593
Hey, I’m having an issue that my code randomly hangs when using DistributedDataParallel. It is completely random when this occurs, and it does not always occur. I suspect that it has something to do with the DistributedDataParallel as out of the 4 gpu’s I’m using, 3 are reporting to be using 100% of that gpu and 1 is completely idle. What is the best way for me to debug what is going on as I’m getting no errors?
st179594
Looks like the program somehow desynchronized (different processes wants to sync different amount of parameters or run different numbers of iterations). Unfortunately, DDP does not have a debug mode for now. Can you share some code snippet? Does your code tries to handle any errors in backward pass on its own (say catch OOM and rertry)? Does all processes see exactly the same amount of input data?
st179595
I’ve had to deal with similar issues; you should feel lucky you only have 4 processes and not 64. The fact that 3 are at 100% utilization means they are inside an NCCL sync operation, like the one at the end of .backward(), while the 4th one is doing something non-GPU related, like waiting for user input. The general strategy is to look at the stack trace of the “odd one out” process. You can get the C++ stack trace by running "gdb -p <pid>" and then "bt" or "thread apply all bt". With a bit more work, you can get the Python stack. This requires modifying client code. For instance, I run install_pdb_handler on init of my processes. It allows me to break into PDB on CTRL+\ and look at what the current process is doing. When using the distributed launcher, this will only send CTRL+\ to the launcher process, so to get the stack trace of an arbitrary worker you would need to modify your launching procedure to run the 4 workers in 4 different tmux windows.
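If modifying the launch procedure is too heavy, a lighter-weight sketch using Python’s standard faulthandler module: register it once at worker start-up, and then `kill -USR1 <pid>` makes that worker print the Python stack of every thread to stderr (Unix only):

```python
import faulthandler
import os
import signal

# Call this once when each worker starts; afterwards `kill -USR1 <pid>`
# dumps the Python traceback of every thread in that worker to stderr.
faulthandler.register(signal.SIGUSR1)
print(f"worker pid: {os.getpid()}")
```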
st179596
Hello, I’m working with the code example from the official ImagNet distributed tutorial 4. Basically, the code uses torch.multiprocessing.spawn(main_worker) to run a copy of the “main worker” function on each GPU. Then, in each worker the dist.init_process_group command is run and both the model and dataset/dataloader are created and cast into torch.nn.parallel.DistributedDataParallel(model) and torch.utils.data.distributed.DistributedSampler(dataset). My problem is that after every epoch I want to modify and rewrite all the data in the dataset, I thought the most straightforward way was to run this shuffling in only one of the nodes inside of an “if” like if args.multiprocessing_distributed and args.rank == 1: so not all the nodes would perform the shuffling simultaneously, similarly to the 252 line of the code. The operation of rewriting the dataset takes long time (>10 minutes). This created a problem where processes which didn’t perform the shuffling were trying to access data while it’s being rewritten by the node that modifies the data. Is there any suggested way of making the rest of the processes wait for the one process to finish the modification and writing of the data before continuing the training? I found this “barrier” method 19 from the pytorch distributed package. And also this section about synchronization 12 in python’s documentation. But because the error I get isn’t 100% reproducible (in some runs it appears 10 minutes after start in some runs it appears hours into the run) I can’t really test the those implementations and I couldn’t find any easy to follow examples. Any suggestions would be appreciated
st179597
dist.barrier() should help to block all processes in the group until everyone has reached the same barrier. What error did you see after adding a barrier?
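For example, a minimal sketch of the end-of-epoch pattern (this assumes the process group is already initialized; reshuffle_dataset_on_disk is a hypothetical stand-in for your rewriting step):

```python
import torch.distributed as dist

def reshuffle_dataset_on_disk(path):
    # Hypothetical stand-in for the step that rewrites the dataset files.
    pass

# At the end of an epoch, only rank 0 rewrites the data; everyone else waits.
if dist.get_rank() == 0:
    reshuffle_dataset_on_disk("/path/to/dataset")
dist.barrier()  # all ranks block here until rank 0 has finished and arrived
# now it is safe for every rank to re-open the files for the next epoch
```

Note that very long rewrites can still hit the process-group timeout, so it may be worth passing a generous `timeout` to init_process_group.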
st179598
Hi, I am trying to design backward hooks that depend on the loss. In the single GPU version, i modify the hook between the forward and backward pass depending on the loss value. I would like to scale to multi-gpu and modify each replicated hook depending on the loss of each GPU. To do so, Is it possible to access the model replicas in torch.nn.DataParallel() between forward() and .backward() call? Thanks for helping
st179599
nn.DataParallel does not expose replicas, but you could make some changes to it locally or copy the data_parallel.py code to make replicas a member field (this line 6) It might be easier to install hooks with DistributedDataParallel (DDP) though, as DDP is not making new copies of models in every iteration. So that whatever hooks installed to the original module should still be valid with DDP.
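As a rough sketch of the DDP route, assuming one GPU per process and a hook whose behaviour is driven by a mutable value you update between forward and backward (whether this interacts with DDP’s gradient bucketing exactly the way you need is worth verifying on your model):

```python
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process setup only for illustration (placeholders).
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29501")
dist.init_process_group(backend="nccl", rank=0, world_size=1)

model = nn.Linear(10, 10).cuda()

# Gradient hook registered on a parameter of the *original* module; the
# closure reads a mutable cell, so it can be changed between forward and
# backward (e.g. from the loss value) without re-registering anything.
scale = {"value": 1.0}
model.weight.register_hook(lambda grad: grad * scale["value"])

ddp_model = DDP(model, device_ids=[0])
out = ddp_model(torch.randn(4, 10, device="cuda"))
loss = out.sum()
scale["value"] = 0.5 if loss.item() > 0 else 2.0   # hypothetical rule
loss.backward()
```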
st179600
I’ll take a look at DistributedDataParallel first, it looks the cleanest thing to do. Thanks for your help @mrshenli !
st179601
Hello, I’d like to train my model on multiple GPUs but unfortunately I’m getting a massive validation error (but not so when only doing one gpu) after even 1 epoch. I think the reason is that I manually modify some of the model parameters after the optimization step:

torch.cuda.set_device(args.local_rank)
torch.distributed.init_process_group(backend='nccl', init_method='env://')
...
model = nn.Sequential(...).cuda()
dmodel = nn.DataParallel(model)
...
loss = criterion(dmodel(x), y)
loss.backward()
optimizer.step()
with torch.no_grad():
    deterministic_modify(model[17])  # I need to manually modify some weights.

I’m guessing there is a sync problem, because if I do this on a single gpu, things work as expected. But on multiple gpu’s I get terrible validation error. The way I understand nn.DataParallel works (but please correct me if I’m wrong) is that it’s a wrapper, and each gpu has a copy of the model, and nn.DataParallel splits the batch into two, gives each gpu half, computes the gradients, and then, somehow, sync’s the model in both gpus (how?). Thanks.
st179602
mraggi: The way I understand nn.DataParallel works (but please correct me if I’m wrong) is that it’s a wrapper, and each gpu has a copy of the model, and nn.DataParallel splits the batch into two, gives each gpu half, computes the gradients, and then, somehow, sync’s the model in both gpus (how?).

The above is correct except for the model-sync part. In the forward pass, it replicates the model to all devices, creates one thread per device, scatters (uses broadcast) the input to all devices (so that each thread exclusively works on one model replica with one input split), and finally gathers the outputs from all threads and uses that as the return value of the forward function. So after the forward pass, the autograd graph will contain multiple replicas of the original model, which all point back to the same original model. In the backward pass, each model replica computes its own gradients; then, as they all have autograd edges pointing back to the original model, the gradients from the different replicas accumulate into the same original module. So it is not synchronizing across replicas; instead, all gradients from all replicas are accumulated into the original module. I noticed that the code snippet above calls init_process_group, which is required for DistributedDataParallel but not necessary for DataParallel. DistributedDataParallel does do the gradient sync across multiple processes, and it should be faster than DataParallel.
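A tiny self-contained check of that last point about gradient accumulation (shapes are arbitrary): after a DataParallel forward/backward, the gradients sit on the original module’s parameters.

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 2).cuda()
dp = nn.DataParallel(model)     # replicas are created inside each forward()

out = dp(torch.randn(16, 8, device="cuda"))
out.sum().backward()

# The gradients from all replicas end up accumulated on the original
# module's parameters, so the optimizer only needs model.parameters().
print(model.weight.grad.shape)  # torch.Size([2, 8])
```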
st179603
Hello, my code behaves deterministically without DistributedDataParallel, but not with DistributedDataParallel. My code for deterministic behavior is:

cudnn.benchmark = False
cudnn.deterministic = True
random.seed(123)
torch.manual_seed(123)
torch.cuda.manual_seed_all(123)
torch.utils.data.DataLoader(..., worker_init_fn=random.seed)

And my launch command:

python -m torch.distributed.launch --nproc_per_node=4 --master_port=$((RANDOM + 10000)) train.py

Does DistributedDataParallel need more tricks to get deterministic behavior?
st179604
DistributedDataParallel should be deterministic. All it does is applying allreduce to sync gradients across processes. Can you check if the data loader is providing deterministic inputs for you?
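One way to check it, as a sketch: log a cheap checksum of the first few batches on every rank and compare across two runs (this assumes the loader yields (data, target) pairs and that the process group is initialized):

```python
import torch.distributed as dist

def log_batch_fingerprints(loader, epoch, num_batches=3):
    # Print a cheap, order-sensitive checksum of the first few batches so
    # two runs (or two ranks) can be compared for deterministic inputs.
    for i, (data, _target) in enumerate(loader):
        if i >= num_batches:
            break
        print(f"rank {dist.get_rank()} epoch {epoch} batch {i} "
              f"sum {data.double().sum().item():.6f}")
```

If the fingerprints differ between runs, the non-determinism is coming from the data pipeline (sampler seeding, worker seeding, augmentations) rather than from DDP itself.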
st179605
I am going to train my model on multi-server (N servers), each of which includes 8 GPUs. It means that I want to train my model with 8*N GPUs. I have checked the code provided by a tutorial, which is a code that uses distributed training to train a model on ImageNet. ( https://github.com/pytorch/examples/tree/master/imagenet 26 ) I found that I need to run the training code on different server seperately just as the guide introduce : Multiple nodes: Node 0: python main.py -a resnet50 --dist-url 'tcp://IP_OF_NODE0:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed --world-size 2 --rank 0 [imagenet-folder with train and val folders] Node 1: python main.py -a resnet50 --dist-url 'tcp://IP_OF_NODE0:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed --world-size 2 --rank 1 [imagenet-folder with train and val folders] I am wondering is it possible to input the command on one server and then run the codes on different servers simultaneously and automatically? Thank you!
st179606
Maybe you can try something like ansible 7 to deploy your application to multiple machines.
st179607
This depends on your cluster/cloud. For AWS, I have an example here: https://github.com/cybertronai/pytorch-aws/ 14 Basically you set your AWS credentials and then do python mnist_distributed.py --mode=remote --nnodes=2
st179608
I am trying to use PyTorch with Horovod. I am trying to run one of the examples and I am getting the following error:

Traceback (most recent call last):
  File "pytorch_imagenet_resnet50.py", line 3, in <module>
    import torch
  File "/ccs/home/amalik/.conda/envs/py366/lib/python3.6/site-packages/torch/__init__.py", line 84, in <module>
    from torch._C import *
ImportError: /ccs/home/amalik/.conda/envs/py366/lib/python3.6/site-packages/torch/lib/libtorch_python.so: undefined symbol: MPIX_Query_cuda_support

However, when I import torch in a plain Python session, I don’t get the error:

$ python
>>> import torch
(no error message)
st179609
I use the command line below to run the script(pytorch example of dali 7) python -m torch.distributed.launch --nproc_per_node=1 train_imagenet_with_dali.py -t when the nproc_per_node=1,it work. But when nproc_per_node=2,there is a error. ***************************************** Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. ***************************************** => creating model 'resnet50' => creating model 'resnet50' Traceback (most recent call last): File "/home/zyy/anaconda3/envs/python36/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/zyy/anaconda3/envs/python36/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/zyy/anaconda3/envs/python36/lib/python3.6/site-packages/torch/distributed/launch.py", line 246, in <module> main() File "/home/zyy/anaconda3/envs/python36/lib/python3.6/site-packages/torch/distributed/launch.py", line 242, in main cmd=cmd) subprocess.CalledProcessError: Command '['/home/zyy/anaconda3/envs/python36/bin/python', '-u', 'train_imagenet_with_dali.py', '--local_rank=1', '-t']' died with <Signals.SIGSEGV: 11>.
st179610
Hi @zyyupup, Can you try this please: python -m torch.distributed.launch --nproc_per_node=2 main.py -a resnet50 --dali_cpu --fp16 --b 32 --static-loss-scale 128.0 --workers 4 --lr=0.4 ./ 2>&1 And give us full repro step and your environment (Anaconda, PyTorch, DALI versions)
st179611
Thank you for your reply. I tried the command and the same error occurred again. My environment is: Anaconda (Python) 3.6.8, PyTorch 1.2.0 + CUDA 9.2, DALI 0.13.0, Ubuntu 16.04 with 2 GPUs. I only changed the path of the dataset in the code and then ran it.
st179612
DALI example is based on an older version of PyTorch APEX example - https://github.com/NVIDIA/apex/tree/master/examples/imagenet 101. You can try to run it as well to check if this may be the DALI fault.
st179613
I downloaded PyTorch from the official website. However, when I imported PyTorch on my local Windows computer, I got the error below. It seems torchvision malfunctioned. What does it mean? Does someone know how to fix it? Thank you.

ImportError Traceback (most recent call last)
in <module>()
      3 import torch
      4 import torch.nn as nn
----> 5 import torchvision
      6 import torchvision.datasets as dataset
      7 import torchvision.transforms as transforms

C:\ProgramData\Anaconda3\lib\site-packages\torchvision\__init__.py in <module>()
      1 from torchvision import models
----> 2 from torchvision import datasets
      3 from torchvision import ops
      4 from torchvision import transforms
      5 from torchvision import utils

C:\ProgramData\Anaconda3\lib\site-packages\torchvision\datasets\__init__.py in <module>()
----> 1 from .lsun import LSUN, LSUNClass
      2 from .folder import ImageFolder, DatasetFolder
      3 from .coco import CocoCaptions, CocoDetection
      4 from .cifar import CIFAR10, CIFAR100
      5 from .stl10 import STL10

C:\ProgramData\Anaconda3\lib\site-packages\torchvision\datasets\lsun.py in <module>()
----> 1 from .vision import VisionDataset
      2 from PIL import Image
      3 import os
      4 import os.path
      5 import six

C:\ProgramData\Anaconda3\lib\site-packages\torchvision\datasets\vision.py in <module>()
      1 import os
      2 import torch
----> 3 import torch.utils.data as data

C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\__init__.py in <module>()
      2 from .distributed import DistributedSampler  # noqa: F401
      3 from .dataset import Dataset, IterableDataset, TensorDataset, ConcatDataset, ChainDataset, Subset, random_split  # noqa: F401
----> 4 from .dataloader import DataLoader, _DatasetKind, get_worker_info  # noqa: F401

C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py in <module>()
     10 import torch.multiprocessing as multiprocessing
     11 from . import IterableDataset, Sampler, SequentialSampler, RandomSampler, BatchSampler
---> 12 from . import _utils
     13 from torch._utils import ExceptionWrapper
     14 import threading

C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\_utils\__init__.py in <module>()
     12
     13 # old private location of the ExceptionWrapper that some users rely on:
---> 14 from torch._utils import ExceptionWrapper  # noqa: F401
     15
     16

ImportError: cannot import name 'ExceptionWrapper'
st179614
Problem: In my application, a strange bug is that my model works well on a single GPU but fails on multiple GPUs with:
RuntimeError: Gather got an input of invalid size: got [24, 10, 448, 448], but expected [24, 11, 448, 448] (gather at /pytorch/torch/csrc/cuda/comm.cpp:239)
My input size is [48, 3, 448, 448], and two GPUs are used. So it is fine to split 24 images onto each GPU, but it is strange that there are 10 and 11 channels. After debugging, the problem was found: self.embed_arr is an input of semantic labels with size [21, 300], and self.embed_arr determines the number of image feature channels. In the single-GPU setting, every image sees the same self.embed_arr and therefore has the same number of channels. However, in the multi-GPU setting, self.embed_arr is split into multiple parts, e.g. [10, 300] and [11, 300], leading to different channel counts and a bug during feature gathering. (If self.embed_arr had a size of [20, 300], this problem would not appear, and I might have thought the bad performance was due to my algorithm, which would be terrible!)
Solution: An alternative solution is to duplicate the input so that the scattered inputs on each GPU are the same:
self.embed_arr = self.embed_arr.repeat(len(range(torch.cuda.device_count())), 1)
Suggestion: I suggest PyTorch add a function that can control which inputs should be scattered to different GPUs. Or is there any better solution?
st179615
Hello, I am trying to understand how DataParallel works. Now I am testing simple code to see speedup on 2 GPUs: import torch import torch.nn as nn from torch.utils.data import Dataset, DataLoader # Parameters and DataLoaders input_size = 5 output_size = 2 batch_size = 10000 data_size = 1000000 device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") class RandomDataset(Dataset): def __init__(self, size, length): self.len = length self.data = torch.randn(length, size) def __getitem__(self, index): return self.data[index] def __len__(self): return self.len rand_loader = DataLoader(dataset=RandomDataset(input_size, data_size), batch_size=batch_size, shuffle=True) class Model(nn.Module): # Our model def __init__(self, input_size, output_size): super(Model, self).__init__() self.fc = nn.Linear(input_size, output_size) def forward(self, input): output = self.fc(input) print("\tIn Model: input size", input.size(), "output size", output.size()) return output model = Model(input_size, output_size) if torch.cuda.device_count() > 1: print("Let's use", torch.cuda.device_count(), "GPUs!") # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs model = nn.DataParallel(model) model.to(device) for data in rand_loader: input = data.to(device) output = model(input) print("Outside: input size", data.size(), "output_size", output.size()) I measure time by using ‘time’ utility. Execution on 1 GPU takes 3m26.624s, and execution on 2 GPUs takes approximately the same time (±5 seconds). What could be the problem?
st179616
Solved by albanD in post #4 Hi, I think the problem is that your model is so small that your task is cpu bound. So using more GPUs won’t help. You can do the same experiment with a resnet from torchvision for example (and lower batch_size) to make sure you get a GPU bound task.
st179617
Are you sure you’re using both GPUs with nn.DataParallel? The line: device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") makes me think that it’s running on a single gpu. You could try changing this to: device = torch.device("cuda" if torch.cuda.is_available() else "cpu") In addition, one of the main benefits of data parallelism is that you can use larger batch sizes to speed up iterating through the dataset, so you can try: batch_size = 10000 * torch.cuda.device_count()
st179618
Thank you for answer. I changed “cuda:0” to “cuda” as you said, and didn’t see any changes. If I change “batch_size” depending on the “torch.cuda.device_count()”, I will get different sizes of batch and time comparisons will not be honest between single GPU and multi GPU.
st179619
Hi, I think the problem is that your model is so small that your task is cpu bound. So using more GPUs won’t help. You can do the same experiment with a resnet from torchvision for example (and lower batch_size) to make sure you get a GPU bound task.
st179620
I change a little bit code to use it with resnet18 import torch import torch.nn as nn from torch.utils.data import Dataset, DataLoader import torchvision.models as models # Parameters and DataLoaders input_size = 224 output_size = 1000 batch_size = 256 data_size = 10000 device = torch.device("cuda" if torch.cuda.is_available() else "cpu") class RandomDataset(Dataset): def __init__(self, size, length): self.len = length self.data = torch.randn(length, 3 * size * size).view(length, 3, size, size) def __getitem__(self, index): return self.data[index] def __len__(self): return self.len rand_loader = DataLoader(dataset=RandomDataset(input_size, data_size), batch_size=batch_size, shuffle=True) model = models.resnet18(pretrained=True) if torch.cuda.device_count() > 1: print("Let's use", torch.cuda.device_count(), "GPUs!") model = nn.DataParallel(model) model.to(device) for data in rand_loader: input = data.to(device) output = model(input) print("Outside: input size", data.size(), "output_size", output.size()) However, using “batch_size=256” on single and multiple GPUs I see the same time (2m6.530s).
st179621
And what is the usage of the GPU when you run on a single one? When you run on two?
st179622
Thank you for your replies. The problem was with heavy RandomDataset, that generates dataset. When I reduced “data_size” and add 200 epochs, I have seen speedup on two GPUs. So problem was CPU bound.
st179623
I’ve installed PyTorch 1.0 on Windows. When I try to use the webcam demo provided by maskrcnn-benchmark, an error occurs:

Traceback (most recent call last):
  File "webcam.py", line 80, in <module>
    main()
  File "webcam.py", line 64, in main
    min_image_size=args.min_image_size,
  File "G:\PyProjects\maskrcnn-benchmark\demo\predictor.py", line 115, in __init__
    _ = checkpointer.load(cfg.MODEL.WEIGHT)
  File "g:\pyprojects\maskrcnn-benchmark\maskrcnn_benchmark\utils\checkpoint.py", line 61, in load
    checkpoint = self._load_file(f)
  File "g:\pyprojects\maskrcnn-benchmark\maskrcnn_benchmark\utils\checkpoint.py", line 128, in _load_file
    cached_f = cache_url(f)
  File "g:\pyprojects\maskrcnn-benchmark\maskrcnn_benchmark\utils\model_zoo.py", line 44, in cache_url
    if not os.path.exists(cached_file) and is_main_process():
  File "g:\pyprojects\maskrcnn-benchmark\maskrcnn_benchmark\utils\comm.py", line 28, in is_main_process
    if not torch.distributed.is_initialized():
AttributeError: module 'torch.distributed' has no attribute 'is_initialized'

But when I checked the PyTorch 1.0 documentation, the torch.distributed module does have the is_initialized() method. How can I solve this problem?
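This usually means the installed build does not ship the distributed backend at all (Windows packages of that era did not support torch.distributed), so the attribute simply never exists at runtime even though it is documented for other platforms. A defensive version of the helper the traceback points at could look roughly like this sketch (the real comm.py may differ):

```python
import torch.distributed as dist

def is_main_process():
    # Guard for builds (e.g. some Windows wheels) where the distributed
    # backend is not compiled in and is_initialized()/get_rank() are missing.
    if not hasattr(dist, "is_initialized") or not dist.is_initialized():
        return True
    return dist.get_rank() == 0
```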
st179624
I want to run over multiple GPUs in parallel torch.inverse(). I saw this post Matmul on multiple GPUs 2. Which shows that if you have multiple tensors allocated to each GPU matmul will be run in parallel. I was able to replicate this behavior for matmul but when I try to do the same thing for torch.inverse() it seems to run sequentially when I watch nvidia-smi. Any ideas?
st179625
Thank you for the quick reply. As you can see it greatly mirrors the other post. import torch ngpu = torch.cuda.device_count() # This is the allocation to each GPU. lis = [] for i in range(ngpu): lis.append(torch.rand(5000,5000,device = 'cuda:'+ str(i))) # per the matmul on multiple GPUs post this should already be in parallel to my understanding # but doesnt seem to be based on watch nvidia-smi C_ = [] for i in range(ngpu): C_.append(torch.inverse(lis[i]))
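One experiment that might be worth trying here (just a sketch, not a guaranteed speedup): if the CUDA inverse routine synchronizes with the host internally, issuing each call from its own Python thread keeps one call’s host-side wait from serializing the others.

```python
import threading
import torch

ngpu = torch.cuda.device_count()
mats = [torch.rand(5000, 5000, device=f"cuda:{i}") for i in range(ngpu)]
results = [None] * ngpu

def invert(i):
    # Each thread sticks to its own device; if torch.inverse synchronizes
    # with the host internally, it then only blocks its own thread.
    torch.cuda.set_device(i)
    results[i] = torch.inverse(mats[i])

threads = [threading.Thread(target=invert, args=(i,)) for i in range(ngpu)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```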
st179626
The distributed launch utility seems like unstable in usage. Executing the same program once with the following command python -m torch.distributed.launch --nproc_per_node=3 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=62123 main.py Works fine: 1.0, 0.05, 2.1814, 0.1697, 2.0053, 0.2154 1.0, 0.05, 2.1804, 0.1674, 1.9767, 0.2406 1.0, 0.05, 2.1823, 0.1703, 1.9799, 0.2352 2.0, 0.05, 2.1526, 0.1779, 2.1166, 0.1908 2.0, 0.05, 2.1562, 0.1812, 2.0868, 0.2076 2.0, 0.05, 2.1593, 0.1741, 2.0935, 0.192 3.0, 0.05, 1.9386, 0.2413, 1.8037, 0.3017 3.0, 0.05, 1.9319, 0.2473, 1.8041, 0.2903 3.0, 0.05, 1.9286, 0.2443, 1.815, 0.2939 4.0, 0.05, 1.7522, 0.3153, 1.828, 0.3131 4.0, 0.05, 1.7504, 0.3207, 1.7613, 0.3245 After the program is finished executing again the same command i.e., calling launch with the same arguments results in an error File "/home/kirk/miniconda3/envs/torch/lib/python3.6/site-packages/torch/serialization.py", line 386, in load return _load(f, map_location, pickle_module, **pickle_load_args) File "/home/kirk/miniconda3/envs/torch/lib/python3.6/site-packages/torch/serialization.py", line 580, in _load deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly) RuntimeError: storage has wrong size: expected 4333514340733757174 got 256
st179627
The error point to a broken checkpoint, not the distributed launch. It seems you’ve saved some checkpoints in the previous runs without making sure only a single process (e.g. rank0) writes to the files. This might yield to multiple processes writing to the same checkpoint file and thus breaking it. Could this be the case?
st179628
Thanks for the reply. That could be the case but in all the examples I’ve seen using distributed launch didn’t show how to properly save the checkpoint. When I save the checkpoint I was just using torch.save. Should I be using something else? ptrblck: It seems you’ve saved some checkpoints in the previous runs without making sure only a single process (e.g. rank0) writes to the files. How do I do that? Any pointers or examples where to look would be much appreciated!
st179629
@ptrblck So I did try your suggestion in the following way:

if torch.distributed.is_available() and torch.distributed.is_initialized():
    if os.environ['RANK'] == 0:
        torch.save(checkpoint)
else:
    torch.save(checkpoint)

But still I’m getting the same error (error screenshot omitted).
st179630
I’m not sure if the RANK env variable is useful at this point. In the ImageNet example 4 the args.rank variable is used. Could you try that?
st179631
@ptrblck Thanks for the pointer. My scenario is single node multi-gpu. Considering that case rank=0 and world_size=2. According to the imagenet example it says save the checkpoint if torch distributed is running or if torch distributed is running and the rank is equal to the num_gpus. ptrblck: I’m not sure if the RANK env variable is useful at this point. Why is not useful, my understanding is that it is set dynamically from the launch utility and it will contain whichever rank is currently running?
st179632
The environment variable is the legacy approach and is just set, if use_env was specified as seen here 2. From the docs: 5. Another way to pass ``local_rank`` to the subprocesses via environment variable ``LOCAL_RANK``. This behavior is enabled when you launch the script with ``--use_env=True``. You must adjust the subprocess example above to replace ``args.local_rank`` with ``os.environ['LOCAL_RANK']``; the launcher will not pass ``--local_rank`` when you specify this flag. .. warning:: ``local_rank`` is NOT globally unique: it is only unique per process on a machine. Thus, don't use it to decide if you should, e.g., write to a networked filesystem. See https://github.com/pytorch/pytorch/issues/12042 for an example of how things can go wrong if you don't do this correctly.
st179633
Thanks for the clarifications. Reading through the GitHub issues it seems that:
- local_rank is actually the ID within a worker; multiple workers have a local_rank of 0, so they’re probably trampling each other’s checkpoints. (One solution was to add a --global_rank command line argument as well.)
- someone else comments that torch.distributed.launch sets up a RANK environment variable which can be used to detect if you are on the master process (with os.environ['RANK'] == '0').
- from Python you can use torch.distributed.get_rank() to get the global rank. (I suppose this might be the most appropriate way to do it?)
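For what it’s worth, a common pattern is sketched below: write checkpoints only from global rank 0 and make the other ranks wait, so no file is ever written by two processes at once (this assumes the process group is initialized before any save):

```python
import torch
import torch.distributed as dist

def save_on_master(state, path):
    # Only the globally unique rank 0 writes the file; the barrier keeps the
    # other ranks from racing ahead and reading a half-written checkpoint.
    if not dist.is_initialized() or dist.get_rank() == 0:
        torch.save(state, path)
    if dist.is_initialized():
        dist.barrier()
```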
st179634
My network is 1 Gbit ethernet and i am trying to use pytorch distributed training on two 8-gpu servers. Training procedure is simple classification objective with feed-forward network. I experience significant slowdown in comparison with single 8-gpu server training. Also “nload” tool shows full bandwidth usage even for small model (resnet18). Is my network too slow for distributed training? If it is, what bandwidth (in Gbit/s) do I need to train heavy models like resnet101?
st179635
You can figure it out from your gradient size (could just take size of your model checkpoint) + step time. For instance Resnet50 160ms per batch, 50MB checkpoint, therefore each worker needs to send and receive 50/.16 = 312 MB per second, means you need >=2.5 Gbps What matters here is the ratio of compute time to parameter size. If you double computation + double parameter size, the network requirement is unaffected. Conv nets have good ratio of compute/bandwidth, transformers will need more bandwidth because of matmuls
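The same arithmetic as a tiny script, using the ResNet50 numbers above:

```python
# Back-of-the-envelope: gradients exchanged per step are roughly model-sized.
model_size_mb = 50.0   # e.g. size of the ResNet50 checkpoint on disk
step_time_s = 0.16     # measured time per training iteration

mb_per_s = model_size_mb / step_time_s   # ~312 MB/s sent (and received)
gbit_per_s = mb_per_s * 8 / 1000         # ~2.5 Gbit/s
print(f"need roughly {gbit_per_s:.1f} Gbit/s per worker")
```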
st179636
How can I prevent my log messages from getting printed multiple times when I use distributed training? Any ideas how to resolve this or where to look? Also, I keep getting an error every time I set OMP_NUM_THREADS > 1 (error screenshot omitted). Any thoughts on what might have gone wrong here?
st179637
I managed to solve the error I was getting when using OMP_NUM_THREADS > 1 Basically looking in my script I had to add init_method="env://" in the call to the process_group torch.distributed.init_process_group(backend='nccl', init_method='env://') The other thing that I was missing is that when calling the launch utility you have to pass a random port otherwise I would get the above error python -m torch.distributed.launch --nproc_per_node=number of gpus --master_port=some random high number port main.py
st179638
Hi all, I have a setup with 4 GPUs. When now working with multiple processes in PyTorch, Is there a way to enforce that a process only accesses a given, single gpu, therefore limiting the CUDA driver context to be present only once per process? Thanks in advance for your help, Benjamin
st179639
Solved by spanev in post #2 Hi @besterma, Sure, you can do it with the env variable CUDA_VISIBLE_DEVICES. E.g. to use GPU 0 and 2: CUDA_VISIBLE_DEVICES=0,2 python pytorch_script.py and in your case you have to give a different env variable to each process.
st179640
Hi @besterma, Sure, you can do it with the env variable CUDA_VISIBLE_DEVICES. E.g. to use GPU 0 and 2: CUDA_VISIBLE_DEVICES=0,2 python pytorch_script.py and in your case you have to give a different env variable to each process.
st179641
Thank you @spanev! In case anyone is wondering, here is how to set process specific env variables: import torch.multiprocessing as _mp import torch import os mp = _mp.get_context('fork') class Process(mp.Process): def __init__(self): super().__init__() print("Init Process") return def run(self): print("Hello World!") os.environ['CUDA_VISIBLE_DEVICES'] = '1' print(torch.cuda.device_count()) print(torch.cuda.current_device()) if __name__ == "__main__": num_processes = 1 os.environ['CUDA_VISIBLE_DEVICES'] = '0,1,2,3' processes = [Process() for i in range(num_processes)] [p.start() for p in processes] print("main: " + os.environ['CUDA_VISIBLE_DEVICES']) [p.join() for p in processes] It is important to set it in the run method of the process, as the init method is still called in the main process, therefore setting the env vars of the main process when set there.
st179642
I’m currently facing weird behaviour which I cannot explain. I’m training a VGG16 on SVHN; training on 1 GPU with SGD and fixed hyperparams I get nice results (plot omitted). Now, trying to train the same model with the same optimizer and hyperparams as before but using DataParallel, the learning process actually stagnates and it doesn’t learn anything (plot omitted). Even weirder is the fact that if I swap VGG16 for ResNet50 it starts learning again. Does anyone have any insight into what might be going on? I would expect a model M trained on one device with a fixed optimizer and hyperparams that exhibits good learning behaviour to show the same behaviour when trained with DataParallel using the same optimizer and hyperparams.
st179643
Solved by dsuess in post #2 Could you post a minimal working example to reproduce these results? It could be, e.g. that you don’t pass the DataParallel’s parameters to the params argument of the optimizer
st179644
Could you post a minimal working example to reproduce these results? It could be, e.g. that you don’t pass the DataParallel’s parameters to the params argument of the optimizer
st179645
@dsuess Thanks for the response! I’ve actually followed the example here 2 for DataParallel. Here’s a MWE (trying to avoid putting lot’s of code here) model = VGG16 dataset = CIFAR10 def main(): model = create_model() train_loader = torch.utils.data.DataLoader(...CIFAR10...) optimizer = torch.optim.SGD(model.parameters(), lr=0.05, weight_decay=5e-4, momentum=0.9) if torch.cuda.device_count() > 1: model = torch.nn.parallel.DataParallel(model).to(device) else: model.to(device) train() test() dsuess: It could be, e.g. that you don’t pass the DataParallel’s parameters to the params argument of the optimizer I think that might be the case since in the MWE I’m creating the optimizer before putting the model on DataParallel? But, it doesn’t explain why the same code works when everything else is constant in the MWE and just swap VGG16 for ResNet50? Let me give it a try and rearrange the optimizer order after model is sent to DataParallel. Last question, do you by any chance have any insight on this problem 2? Thanks!
st179646
It looks like a dimension problem to me. Have you checked that the batch sizes are at the same position?
st179647
nooblyh: Have you check that the batch sizes are at the same position? What do you mean by at the same position? My understanding is that DataParallel takes data of batch size m and splits them as int(m/num_of_devices) sending to each device it’s own split and a copy of the model?
st179648
OK, just double checked and your order seems to be correct. I think it’s best if you post a full working example, otherwise we’re just guessing
st179649
Ok so I did check and reordered the code as below: model = VGG16 dataset = CIFAR10 def main(): model = create_model() train_loader = torch.utils.data.DataLoader(...CIFAR10...) if torch.cuda.device_count() > 1: model = torch.nn.parallel.DataParallel(model) optimizer = torch.optim.SGD(model.parameters(), lr=0.05, weight_decay=5e-4, momentum=0.9) model.to(device) train() test()
st179650
Hello, I am trying to get started with torch.distributed with the following toy example, on a multi-GPU cluster: https://github.com/narumiruna/pytorch-distributed-example/blob/master/toy/main.py

import argparse
from random import randint
from time import sleep

import torch
import torch.distributed as dist


def run(world_size, rank, steps):
    for step in range(1, steps + 1):
        # get random int
        value = randint(0, 10)
        # group all ranks
        ranks = list(range(world_size))
        group = dist.new_group(ranks=ranks)
        # compute reduced sum
        tensor = torch.tensor(value, dtype=torch.int)
        dist.all_reduce(tensor, op=dist.ReduceOp.SUM, group=group)

(file preview truncated)

After running the program with the following command:

python3 main.py --init-method tcp://127.0.0.1:23456 --rank 0 --world-size 2

the program gets stuck in dist.init_process_group on line 42. I am not really sure about the reason, as no message gets displayed. Thanks,
st179651
Solved by pietern in post #7 @alchemi5t If you’re running processes on two machines, they won’t be able to talk if you’re using localhost (127.0.0.1) for the address of rank 0 in the initialization method. It must be an IP that’s reachable from all other ranks. In the example here, rank 1 was trying to connect to rank 0 over 12…
st179652
it’s waiting for both ranks to reach that line to actually initialize the proc group.
st179653
I have launched all the node, but the program still gets stuck in the init_process_group.
st179654
@alchemi5t If you’re running processes on two machines, they won’t be able to talk if you’re using localhost (127.0.0.1) for the address of rank 0 in the initialization method. It must be an IP that’s reachable from all other ranks. In the example here, rank 1 was trying to connect to rank 0 over 127.0.0.1.
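To make that concrete, a minimal sketch of the two-machine case (the IP below is a placeholder for an address of the rank-0 machine that the other machine can actually reach):

```python
import torch.distributed as dist

# Run once per machine. MASTER_IP must be an address of the rank-0 machine
# that the other machine can reach -- not 127.0.0.1.
MASTER_IP = "192.168.1.10"   # placeholder
RANK = 0                     # 0 on the first machine, 1 on the second

dist.init_process_group(
    backend="gloo",
    init_method=f"tcp://{MASTER_IP}:23456",
    rank=RANK,
    world_size=2,
)
print("initialized:", dist.get_rank(), "/", dist.get_world_size())
```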
st179655
Hi @pietern, I was running it on one machine with 4 cards in it( trying to train only on 2). I fixed my problem by installing and using nvidia Apex(apex.parallel.multiproc). Not sure why I had to do this, because I’ve seen people use the same script without any hacks like this.
st179656
Hi guys, I was trying to wrap my model with DistributedDataParallel. My model is separated into 2 parts, each parnt runs on one GPU. Thus I followed the Combine DDP with Model Parallelism 3 in official tutorial, but after that I encountered with RuntimeError: Socket Timeout. My codebase is basically like this: # fire tasks on SLURM cluster... os.environ["MASTER_PORT"] = str(port) os.environ["MASTER_ADDR"] = str(master_ip) os.environ["WORLD_SIZE"] = str(n_tasks) os.environ["RANK"] = str(proc_id) dist.init_process_group(backend=dist.Backend.NCCL, timeout=timedelta(seconds=30)) # ... class MyModel(nn.Module) def __init__(self, ..., device0, device1): # ... self.part_1.to(device0) self.part_2.to(device1) # task0 get GPU{0, 1}, task1 get GPU(2, 3)... d0 = torch.device(f"cuda:{rank * 2}") d1 = torch.device(f"cuda:{rank * 2 + 1}") model = MyModel(..., d0, d1) # not all parameters are used in each iteration ddp_model = DistributedDataParallel(model, , find_unused_parameters=True) # ... Invoking DDP did not raise any error, however after the timeout (30s in my setting), I encountered with following error: Traceback (most recent call last): File "../tools/train_val_classifier.py", line 332, in <module> main() File "../tools/train_val_classifier.py", line 103, in main model, model_without_ddp = get_ddp_model(model, devices=(fp_device, q_device)) File ".../quant_prob/utils/distributed.py", line 120, in get_ddp_model ddp_model = DistributedDataParallel(model, device_ids=devices, find_unused_parameters=True) File "/envs/r0.3.0/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 286, in __init__ self.broadcast_bucket_size) File "/envs/r0.3.0/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 410, in _dist_broadcast_coalesced dist._dist_broadcast_coalesced(self.process_group, tensors, buffer_size, False) RuntimeError: Socket Timeout Seems that this error came from DDP implementation. I denifitely sure that I followed the official tutorial, and GPUs assiged to each tasks did not overlap. How can I fix this? Thank you so much~
st179657
This is a dup of https://github.com/pytorch/pytorch/issues/25767 562. Cross linking for posterity.
st179658
We’ve built PyTorch from source and tried to call send/recv, but failed. Could you tell me what I am doing wrong? A toy program is here: 1 import os 2 import socket 3 import torch 4 import torch.distributed as dist 5 from torch.multiprocessing import Process 6 7 8 def run(rank, size, hostname): 9 print(f"I am {rank} of {size} in {hostname}") 10 tensor = torch.zeros(1, device=torch.device('cuda:{}'.format(rank))) 11 if rank == 0: 12 tensor += 1 13 # Send the tensor to process 1 14 dist.send(tensor=tensor, dst=1) 15 else: 16 # Receive tensor from process 0 17 dist.recv(tensor=tensor, src=0) 18 print('Rank ', rank, ' has data ', tensor[0]) 19 20 21 def init_processes(rank, size, hostname, fn, backend='tcp'): 22 """ Initialize the distributed environment. """ 23 dist.init_process_group(backend, rank=rank, world_size=size) 24 fn(rank, size, hostname) 25 26 27 if __name__ == "__main__": 28 world_size = int(os.environ['OMPI_COMM_WORLD_SIZE']) 29 world_rank = int(os.environ['OMPI_COMM_WORLD_RANK']) 30 hostname = socket.gethostname() 31 init_processes(world_rank, world_size, hostname, run, backend='mpi') The cluster which I use is managed using slurm. Here is a list of loaded modules: 1) /gpu/cuda-10.0 2) /mpi/hpcx-v2.4.0 3) /python/python-3.6.8 4) /python/pytorch-1.3.0, where pytorch-1.3.0 is installed from source. And this is how I call this script: mpirun -np 2 python3 pytorch_distributed.py The error looks as follows I am 1 of 2 in gn10.zhores I am 0 of 2 in gn10.zhores -------------------------------------------------------------------------- Primary job terminated normally, but 1 process returned a non-zero exit code. Per user-direction, the job has been aborted. -------------------------------------------------------------------------- -------------------------------------------------------------------------- mpirun noticed that process rank 0 with PID 11502 on node gn10 exited on signal 11 (Segmentation fault). --------------------------------------------------------------------------
st179659
I don’t know what’s wrong, but I do know it’s likely going wrong somewhere in the MPI code. PyTorch performs a runtime check for CUDA awareness of the MPI distribution before running any collective (including send/recv) with a CUDA tensor. You’re passing CUDA tensors, and not seeing this error, so the runtime check must be successful. What happens beyond that, in the MPI code, is beyond my purview. To be sure there is nothing wrong with your code you can try running it with CPU side tensors. If that passes, and it only fails with CUDA tensors, I’d try and run MPI in some kind of debug mode. Good luck!
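To isolate it, one quick experiment (keeping the MPI launch and init code exactly as they are) is to run the exchange with CPU tensors, roughly as sketched below; if that passes while the CUDA version segfaults, the fault is in the CUDA path of the MPI installation:

```python
import torch
import torch.distributed as dist

def run_cpu(rank):
    # Same exchange as above but with CPU tensors; if this works while the
    # CUDA version crashes, the problem is in the CUDA path of the MPI build.
    tensor = torch.zeros(1)
    if rank == 0:
        tensor += 1
        dist.send(tensor=tensor, dst=1)
    else:
        dist.recv(tensor=tensor, src=0)
    print('Rank ', rank, ' has data ', tensor[0])
```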
st179660
EDIT: skip to the bottom

I’m training SRGAN on VOC2012 using NVIDIA DALI and experimenting with both DataParallel and DistributedDataParallel (I was using apex too, but I’ve removed it in order to figure out what’s going wrong). Here is my run.sh:

python -m torch.distributed.launch \
    --nproc_per_node=1 \
    train_srgan_dali.py \
    --train-mx-path=/home/maksim/data/VOC2012/voc_train.rec \
    --train-mx-index-path=/home/maksim/data/VOC2012/voc_train.idx \
    --val-mx-path=/home/maksim/data/VOC2012/voc_val.rec \
    --val-mx-index-path=/home/maksim/data/VOC2012/voc_val.idx \
    --checkpoint-dir=/home/maksim/dev_projects/atlas_sr/checkpoints \
    --experiment-name=srgan_dali_pascal_3_channel_icnr_dp \
    --batch-size=64 \
    --lr=1e-3 \
    --crop-size=88 \
    --upscale-factor=2 \
    --epochs=100 \
    --workers=1 \
    --channels=3

Here is my model and here is my train script:

import argparse
import os
import time
from math import log10

import pandas as pd
import torch
import torch.backends.cudnn
import torch.distributed
from nvidia.dali import types
from torch import nn

from data_utils.dali import StupidDALIIterator, SRGANMXNetPipeline
from metrics.metrics import AverageMeter
from metrics.ssim import ssim
from models.SRGAN import (
    Generator,
    Discriminator,
    GeneratorLoss,
)
from util.util import monkey_patch_bn

monkey_patch_bn()

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", default=0, type=int)
parser.add_argument("--channels", type=int, default=3)
parser.add_argument("--experiment-name", type=str, default="test")
parser.add_argument(
    "--train-mx-path", default="/home/maksim/data/VOC2012/voc_train.rec"
)
parser.add_argument(
    "--train-mx-index-path", default="/home/maksim/data/VOC2012/voc_train.idx"
)
parser.add_argument("--val-mx-path", default="/home/maksim/data/VOC2012/voc_val.rec")
parser.add_argument(
    "--val-mx-index-path", default="/home/maksim/data/VOC2012/voc_val.idx"
)
parser.add_argument("--checkpoint-dir", default="/home/maksim/data/checkpoints")
parser.add_argument("--upscale-factor", type=int, default=2)
parser.add_argument("--epochs", type=int, default=100)
parser.add_argument("--batch-size", type=int, default=64)
parser.add_argument("--prof", action="store_true", default=False)
parser.add_argument("--lr", type=float, default=1e-3)
parser.add_argument("--crop-size", type=int, default=88)
parser.add_argument("--workers", type=int, default=4)

args = parser.parse_args()

local_rank = args.local_rank
train_mx_path = args.train_mx_path
train_mx_index_path = args.train_mx_index_path
val_mx_path = args.val_mx_path
val_mx_index_path = args.val_mx_index_path
experiment_name = args.experiment_name
checkpoint_dir = args.checkpoint_dir
upscale_factor = args.upscale_factor
epochs = args.epochs
batch_size = args.batch_size
crop_size = args.crop_size
prof = args.prof
workers = args.workers
lr = args.lr
channels = args.channels
print_freq = 10

assert os.path.exists(train_mx_path)
assert os.path.exists(train_mx_index_path)
assert os.path.exists(val_mx_path)
assert os.path.exists(val_mx_index_path)
assert experiment_name
assert os.path.exists(checkpoint_dir)

distributed = False
world_size = 1

if local_rank == 0:
    checkpoint_dir = os.path.join(checkpoint_dir, experiment_name)
    if not os.path.exists(checkpoint_dir):
        os.mkdir(checkpoint_dir)

if "WORLD_SIZE" in os.environ:
    world_size = int(os.environ["WORLD_SIZE"])
    distributed = world_size > 1

netG = Generator(scale_factor=upscale_factor, in_channels=channels)
netD = Discriminator(in_channels=channels)
g = GeneratorLoss()

if distributed:
    gpu = local_rank % torch.cuda.device_count()
    torch.cuda.set_device(gpu)
    torch.distributed.init_process_group(backend="nccl", init_method="env://")
    assert world_size == torch.distributed.get_world_size()
    netG = nn.SyncBatchNorm.convert_sync_batchnorm(netG)
    netD = nn.SyncBatchNorm.convert_sync_batchnorm(netD)
    netG.cuda(gpu)
    netD.cuda(gpu)
    g.cuda(gpu)
    netG = nn.parallel.DistributedDataParallel(netG, device_ids=[gpu])
    netD = nn.parallel.DistributedDataParallel(netD, device_ids=[gpu])
    lr /= world_size
else:
    netG = Generator(scale_factor=upscale_factor, in_channels=channels)
    netD = Discriminator(in_channels=channels)
    netG = nn.DataParallel(netG)
    netD = nn.DataParallel(netD)
    netG = netG.cuda()
    netD = netD.cuda()
    g = g.cuda()

# because vgg expects 3 channels
if channels == 1:
    generator_loss = lambda fake_out, fake_img, hr_image: g(
        fake_out,
        torch.cat([fake_img, fake_img, fake_img], dim=1),
        torch.cat([hr_image, hr_image, hr_image], dim=1),
    )
else:
    generator_loss = g

optimizerG = torch.optim.Adam(netG.parameters(), lr=lr)
optimizerD = torch.optim.Adam(netD.parameters(), lr=lr)

train_pipe = SRGANMXNetPipeline(
    batch_size=batch_size,
    num_gpus=world_size,
    num_threads=workers,
    device_id=local_rank,
    crop=crop_size,
    mx_path=train_mx_path,
    mx_index_path=train_mx_index_path,
    upscale_factor=upscale_factor,
    image_type=types.DALIImageType.RGB,
)
train_pipe.build()
train_loader = StupidDALIIterator(
    pipelines=[train_pipe],
    output_map=["lr_image", "hr_image"],
    size=int(train_pipe.epoch_size("Reader") / world_size),
    auto_reset=False,
)
val_pipe = SRGANMXNetPipeline(
    batch_size=batch_size,
    num_gpus=world_size,
    num_threads=workers,
    device_id=local_rank,
    crop=crop_size,
    mx_path=val_mx_path,
    mx_index_path=val_mx_index_path,
    upscale_factor=upscale_factor,
    random_shuffle=False,
    image_type=types.DALIImageType.RGB,
)
val_pipe.build()
val_loader = StupidDALIIterator(
    pipelines=[val_pipe],
    output_map=["lr_image", "hr_image"],
    size=int(val_pipe.epoch_size("Reader") / world_size),
    auto_reset=False,
)

g_loss_meter = AverageMeter("g_loss")
d_loss_meter = AverageMeter("d_loss")
sample_speed_meter = AverageMeter("sample_speed")


def train(epoch):
    g_loss_meter.reset()
    d_loss_meter.reset()
    sample_speed_meter.reset()
    netG.train()
    netD.train()
    for i, (lr_image, hr_image) in enumerate(train_loader):
        start = time.time()
        batch_size = lr_image.shape[0]
        if prof and i > 10:
            break

        ############################
        # (1) Update D network: maximize D(x)-1-D(G(z))
        ##########################
        fake_img = netG(lr_image)
        netD.zero_grad()
        real_out = netD(hr_image).mean()
        fake_out = netD(fake_img).mean()
        d_loss = 1 - real_out + fake_out
        d_loss_meter.update(d_loss.item())
        d_loss.backward(retain_graph=True)
        optimizerD.step()

        ############################
        # (2) Update G network: minimize 1-D(G(z)) + Perception Loss + Image Loss + TV Loss
        ###########################
        netG.zero_grad()
        g_loss = generator_loss(fake_out, fake_img, hr_image)
        g_loss_meter.update(g_loss.item())
        g_loss.backward()
        optimizerG.step()

        sample_speed_meter.update(world_size * batch_size / (time.time() - start))

        if local_rank == 0 and i % print_freq == 0:
            print(
                "\t".join(
                    [
                        f"epoch {epoch}",
                        f"step {i + 1}/{train_loader.size // batch_size}",
                        str(sample_speed_meter),
                        str(d_loss_meter),
                        str(g_loss_meter),
                    ]
                )
            )


mse_meter = AverageMeter("mse")
ssim_meter = AverageMeter("ssim")
psnr_meter = AverageMeter("psnr")


def validate(epoch):
    mse_meter.reset()
    ssim_meter.reset()
    psnr_meter.reset()
    netG.eval()
    for i, (lr_image, hr_image) in enumerate(val_loader):
        batch_size = lr_image.shape[0]
        if prof and i > 10:
            break
        with torch.no_grad():
            sr_image = netG(lr_image)
            batch_mse = ((sr_image - hr_image) ** 2).mean()
            batch_ssim = ssim(sr_image, hr_image)
            mse_meter.update(batch_mse.item(), batch_size)
            ssim_meter.update(batch_ssim.item(), batch_size)
            psnr_meter.update(10 * log10(1 / mse_meter.avg))
    if local_rank == 0:
        print(
            "\t".join(
                [
                    "\033[1;31m" f"epoch {epoch}",
                    str(mse_meter),
                    str(ssim_meter),
                    str(psnr_meter),
                    "\033[1;0m",
                ]
            )
        )


epoch_time_meter = AverageMeter("epoch")
running_meters = {
    "g_loss": [],
    "d_loss": [],
    "sample_speed": [],
    "mse": [],
    "ssim": [],
    "psnr": [],
    "epoch_time": [],
}


def update_running_meters():
    global running_meters
    running_meters["g_loss"].append(g_loss_meter.avg)
    running_meters["d_loss"].append(d_loss_meter.avg)
    running_meters["sample_speed"].append(sample_speed_meter.avg)
    running_meters["mse"].append(mse_meter.avg)
    running_meters["ssim"].append(ssim_meter.avg)
    running_meters["psnr"].append(psnr_meter.avg)
    running_meters["epoch_time"].append(epoch_time_meter.val)


def main():
    for epoch in range(epochs):
        start = time.time()
        train(epoch)
        validate(epoch)
        if local_rank == 0:
            torch.save(
                netG.state_dict(),
                f"{checkpoint_dir}/netG_epoch_{upscale_factor}_{epoch}.pth",
            )
            torch.save(
                netD.state_dict(),
                f"{checkpoint_dir}/netD_epoch_{upscale_factor}_{epoch}.pth",
            )
            epoch_time_meter.update(time.time() - start)
            update_running_meters()
            if epoch != 0 and not prof:
                data_frame = pd.DataFrame(data=running_meters)
                data_frame.to_csv(
                    os.path.join(checkpoint_dir, "metrics.csv"), index_label="Epoch"
                )
        val_loader.reset()
        train_loader.reset()


if __name__ == "__main__":
    main()

When switching between DataParallel and DistributedDataParallel I get drastically different PSNR performance. I’ve found this post but none of the solutions seem to work. That’s one matter (the difference between averaging and summing gradients). The other matter, the thing that I can’t for the life of me figure out, is the difference in how my losses behave in the two cases. Here is a trace of my losses (for one epoch) if I train using DataParallel:

epoch 7  step 1/241    d_loss 0.999999  g_loss 0.005589
epoch 7  step 11/241   d_loss 0.999999  g_loss 0.005433
epoch 7  step 21/241   d_loss 0.999998  g_loss 0.004887
epoch 7  step 31/241   d_loss 1.000002  g_loss 0.004837
epoch 7  step 41/241   d_loss 1.000000  g_loss 0.004958
epoch 7  step 51/241   d_loss 1.000000  g_loss 0.004784
epoch 7  step 61/241   d_loss 1.000000  g_loss 0.005808
epoch 7  step 71/241   d_loss 0.999979  g_loss 0.005283
epoch 7  step 81/241   d_loss 1.000003  g_loss 0.005585
epoch 7  step 91/241   d_loss 0.999999  g_loss 0.004718
epoch 7  step 101/241  d_loss 0.999999  g_loss 0.006046
epoch 7  step 111/241  d_loss 0.999978  g_loss 0.005157
epoch 7  step 121/241  d_loss 1.000007  g_loss 0.006780
epoch 7  step 131/241  d_loss 1.000001  g_loss 0.005851
epoch 7  step 141/241  d_loss 1.000000  g_loss 0.005644
epoch 7  step 151/241  d_loss 0.999986  g_loss 0.005973
epoch 7  step 161/241  d_loss 1.000002  g_loss 0.005687
epoch 7  step 171/241  d_loss 1.000012  g_loss 0.006535
epoch 7  step 181/241  d_loss 0.999999  g_loss 0.005457
epoch 7  step 191/241  d_loss 0.999999  g_loss 0.005313
epoch 7  step 201/241  d_loss 1.000000  g_loss 0.006094
epoch 7  step 211/241  d_loss 1.000000  g_loss 0.006187
epoch 7  step 221/241  d_loss 1.000116  g_loss 0.005385
epoch 7  step 231/241  d_loss 0.999931  g_loss 0.005718
epoch 7  step 241/241  d_loss 0.999774  g_loss 0.005635

From my understanding (and by watching the PSNR) this is how the losses should trend for SRGAN.
Now here are my losses when using DistributedDataParallel (across several epochs, to show the trend):

epoch 0  step 1/60   d_loss 1.000204 (1.000204)  g_loss 0.153849 (0.153849)
epoch 0  step 11/60  d_loss 0.974728 (0.965737)  g_loss 0.019822 (0.058211)
epoch 0  step 21/60  d_loss 0.468546 (0.831723)  g_loss 0.015897 (0.038876)
epoch 0  step 31/60  d_loss 0.230158 (0.677370)  g_loss 0.014611 (0.031437)
epoch 0  step 41/60  d_loss 0.077666 (0.544439)  g_loss 0.014681 (0.027434)
epoch 0  step 51/60  d_loss 0.020034 (0.447585)  g_loss 0.011524 (0.024474)
epoch 0  step 61/60  d_loss 0.013507 (0.378103)  g_loss 0.011936 (0.022396)
epoch 0  mse 0.006693 (0.007545)  ssim 0.661945 (0.645649)  psnr 21.223185 (21.223185)
epoch 1  step 1/60   d_loss 0.019439 (0.019439)  g_loss 0.010366 (0.010366)
epoch 1  step 11/60  d_loss 0.009224 (0.010984)  g_loss 0.009906 (0.010792)
epoch 1  step 21/60  d_loss 0.003987 (0.008465)  g_loss 0.011732 (0.010643)
epoch 1  step 31/60  d_loss 0.007867 (0.007535)  g_loss 0.009154 (0.010312)
epoch 1  step 41/60  d_loss 0.003442 (0.006837)  g_loss 0.010357 (0.010266)
epoch 1  step 51/60  d_loss 0.003987 (0.005997)  g_loss 0.010241 (0.010080)
epoch 1  mse 0.004144 (0.004839)  ssim 0.746634 (0.726122)  psnr 23.152690 (23.152690)
epoch 2  step 1/60   d_loss 0.006586 (0.006586)  g_loss 0.009223 (0.009223)
epoch 2  step 11/60  d_loss 0.859120 (0.566964)  g_loss 0.008221 (0.008524)
epoch 2  step 21/60  d_loss 0.876267 (0.731556)  g_loss 0.008248 (0.008669)
epoch 2  step 31/60  d_loss 0.665335 (0.739961)  g_loss 0.010071 (0.008873)
epoch 2  step 41/60  d_loss 0.508060 (0.741789)  g_loss 0.007758 (0.009077)
epoch 2  step 51/60  d_loss 0.533404 (0.670928)  g_loss 0.007410 (0.008923)
epoch 2  mse 0.004435 (0.004207)  ssim 0.733819 (0.747270)  psnr 23.760117 (23.760117)
epoch 3  step 1/60   d_loss 0.976557 (0.976557)  g_loss 0.008353 (0.008353)
epoch 3  step 11/60  d_loss 0.873007 (0.948327)  g_loss 0.010379 (0.008218)
epoch 3  step 21/60  d_loss 0.688478 (0.868267)  g_loss 0.006677 (0.008104)
epoch 3  step 31/60  d_loss 0.256862 (0.726863)  g_loss 0.007438 (0.008090)
epoch 3  step 41/60  d_loss 0.101930 (0.586502)  g_loss 0.008943 (0.007990)
epoch 3  step 51/60  d_loss 0.073482 (0.483037)  g_loss 0.009807 (0.007858)
epoch 3  mse 0.003936 (0.003998)  ssim 0.749274 (0.763466)  psnr 23.981862 (23.981862)

Notice that in this case d_loss goes to zero rather than to 1. I’ve been wrestling with this for several days and I can’t for the life of me figure out what I’m doing wrong in switching from DataParallel to DistributedDataParallel that causes this kind of behavior.

EDIT: life lesson: this is what happens when you copy-paste code without completely understanding it. I copied this code from https://github.com/NVIDIA/DALI/blob/master/docs/examples/pytorch/resnet50/main.py and adapted it for my needs. The problem turned out to be that in the original code

https://github.com/NVIDIA/DALI/blob/master/docs/examples/pytorch/resnet50/main.py#L329

input_var = Variable(input)
target_var = Variable(target)

# compute output
output = model(input_var)
loss = criterion(output, target_var)

# measure accuracy and record loss
prec1, prec5 = accuracy(output.data, target, topk=(1, 5))

if args.distributed:
    reduced_loss = reduce_tensor(loss.data)
    prec1 = reduce_tensor(prec1)
    prec5 = reduce_tensor(prec5)
else:
    reduced_loss = loss.data

losses.update(to_python_float(reduced_loss), input.size(0))
top1.update(to_python_float(prec1), input.size(0))
top5.update(to_python_float(prec5), input.size(0))

there’s a reduce when logging the metrics.
I misread and misinterpreted simultaneously: I missed that the backward pass is actually run on the unreduced loss further down, and I misinterpreted delay_allreduce here:

https://github.com/NVIDIA/DALI/blob/master/docs/examples/pytorch/resnet50/main.py#L210

else:
    print("=> creating model '{}'".format(args.arch))
    model = models.__dict__[args.arch]()

model = model.cuda()
if args.fp16:
    model = network_to_half(model)
if args.distributed:
    # shared param/delay all reduce turns off bucketing in DDP, for lower latency runs this can improve perf
    # for the older version of APEX please use shared_param, for newer one it is delay_allreduce
    model = DDP(model, delay_allreduce=True)

# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss().cuda()

optimizer = torch.optim.SGD(model.parameters(), args.lr,
                            momentum=args.momentum,
                            weight_decay=args.weight_decay)

if args.fp16:
    optimizer = FP16_Optimizer(optimizer,
                               static_loss_scale=args.static_loss_scale,

to mean that apex.DistributedDataParallel wouldn’t be doing any reduce at all and that the reduce_tensor call therefore had to be done by hand. So, in summary, I was dividing my loss by world_size unnecessarily.
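For anyone who hits the same confusion, here is a rough sketch of the pattern the DALI example actually follows (it reuses the variable names from my script above, so treat it as illustrative rather than drop-in): backward() runs on the local, unreduced loss, DDP averages the gradients for you, and all_reduce is only used to average the scalar that gets logged.

import torch.distributed as dist


def reduce_for_logging(t, world_size):
    # average a scalar tensor across ranks, purely for reporting
    rt = t.clone().detach()
    dist.all_reduce(rt, op=dist.ReduceOp.SUM)
    return rt / world_size


# inside the training loop above:
g_loss = generator_loss(fake_out, fake_img, hr_image)
g_loss.backward()   # backward on the *unreduced* loss; DDP handles gradient averaging
optimizerG.step()
logged = reduce_for_logging(g_loss, world_size) if distributed else g_loss.detach()
g_loss_meter.update(logged.item())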
st179661
Hi @makslevental, I’m glad that you solved it! I am curious, which Reader are you using in DALI to load VOC2012? Are you using ExternalSource or a custom operator?
st179662
@spanev I’m using the MXNetReader:

https://github.com/makslevental/atlas_sr/blob/master/data_utils/dali.py#L186

    ):
        super(SRGANMXNetPipeline, self).__init__(
            batch_size,
            num_threads,
            device_id,
            crop,
            upscale_factor,
            image_type,
            dali_cpu,
        )
        self.input = ops.MXNetReader(
            path=[mx_path],
            index_path=[mx_index_path],
            random_shuffle=random_shuffle,
            shard_id=device_id,
            num_shards=num_gpus,
        )


class SRGANFilePipeline(SRGANPipeline):
    def __init__(

@spanev since it looks like you work on DALI: can you tell me why the operators aren’t more orthogonal? For example, I would really like to be able to normalize without cropping, to change image types outside of the decoder, or resize but after crop (i.e. I’d like to use the decoder, crop, and resize independently of one another, but I can’t, because Resize expects uint8). I’m not complaining (thanks for the toolkit!), I’m just wondering whether it has something to do with how the CUDA kernels are compiled or it’s just a design choice.
st179663
It is great to see that you are able to use it to accelerate your training. DALI is still under active development (and technically still in beta). The team is currently reworking the whole architecture and working on some major missing features (such as pointwise operations).

A few notes about the operators you mentioned:

makslevental: for example I really would like to be able to normalize without cropping

I know it may seem a little counter-intuitive, but you should be using CropMirrorNormalize (minus the crop and mirror options). The DALI and CUDA kernels have compile-time mechanisms that reduce the runtime overhead (to none). We may add a Normalize op in the future, but it would still use the same kernels under the hood.

makslevental: to change image types outside of the decoder

You can actually do it with the Cast operator.

makslevental: or resize but after crop

I guess you mean Crop after Resize. This limitation comes from the fact that Crop and Resize operations are (sorta) commutative, but performance-wise it is better to crop first so that the interpolation algorithm is not applied to data that will be cropped away anyway.
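If it helps, below is a rough sketch of what that could look like. Treat the operator and argument names as assumptions rather than gospel (e.g. ImageDecoder vs the older nvJPEGDecoder, output_dtype vs dtype); they have shifted between DALI releases, so check them against the version you have installed.

import nvidia.dali.ops as ops
import nvidia.dali.types as types
from nvidia.dali.pipeline import Pipeline


class NormalizeOnlyPipeline(Pipeline):
    # sketch: read + decode as in the SRGAN pipeline above, then only
    # normalize (no crop, no mirror) or cast, keeping the ops independent
    def __init__(self, batch_size, num_threads, device_id, mx_path, mx_index_path):
        super(NormalizeOnlyPipeline, self).__init__(batch_size, num_threads, device_id)
        self.input = ops.MXNetReader(path=[mx_path], index_path=[mx_index_path])
        self.decode = ops.ImageDecoder(device="mixed", output_type=types.RGB)
        self.to_float = ops.Cast(device="gpu", dtype=types.FLOAT)  # dtype change outside the decoder
        self.normalize = ops.CropMirrorNormalize(
            device="gpu",
            output_dtype=types.FLOAT,
            output_layout=types.NCHW,
            mean=[0.0, 0.0, 0.0],
            std=[255.0, 255.0, 255.0],   # i.e. just scale uint8 to [0, 1]
        )

    def define_graph(self):
        jpegs, labels = self.input(name="Reader")
        images = self.decode(jpegs)
        # either a plain cast ...
        # floats = self.to_float(images)
        # ... or normalize-only via CropMirrorNormalize with no crop/mirror args:
        return self.normalize(images), labels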
st179664
spanev: I know it may seem a little counter-intuitive, but you should be using CropMirrorNormalize (minus the crop and mirror options). The DALI and CUDA kernels have compile-time mechanisms that reduce the runtime overhead (to none).

How can you use CropMirrorNormalize without crop? Not setting any values for any of the crop parameters still crops to half, because of the default of 0.5 for both crop_pos_x and crop_pos_y.

spanev: You can actually do it with the Cast operator.

How? I tried to cast prior to resize (because Resize expects uint8), but I just got… a cast, i.e. my images were all black because the floats that came out of the decoder got truncated.

spanev: I guess you mean Crop after Resize.

Sorry, yes, you’re right.

spanev: performance-wise it is better to crop first so that the interpolation algorithm is not applied to data that will be cropped away anyway.

That makes sense, but I feel like I should be able to choose to take that penalty. Just so we’re concrete: I would like to randomly crop a high-resolution image, then resize it to half (or quarter) scale in order to produce the low-resolution image (I’m working on super-resolution networks). Right now I do decoder crop then resize, and I have to normalize in plain PyTorch:

https://github.com/makslevental/atlas_sr/blob/master/data_utils/dali.py#L96

    # https://github.com/NVIDIA/DALI/issues/1227
    def __init__(self, *args, **kwargs):
        self.dali_iter = DALIGenericIterator(*args, **kwargs)

    def __iter__(self):
        return self

    def __next__(self):
        n = next(self.dali_iter)
        lr_image, hr_image = n[0]["lr_image"], n[0]["hr_image"]
        hr_image = hr_image.to(torch.float).div(255)
        lr_image = lr_image.to(torch.float).div(255)
        hr_image = hr_image.permute(0, 3, 1, 2)
        lr_image = lr_image.permute(0, 3, 1, 2)
        return lr_image, hr_image

    @property
    def size(self):
        return self.dali_iter._size

    # hack to make lr_finder work

Not ideal. I want to compose decoder -> cropmirrornormalize -> resize, but I can’t because of data-width mismatches (if I recall correctly, Resize complains about not getting uint8s for that composition).

spanev: It is great to see that you are able to use it to accelerate your training.

Yes, it’s definitely great - I’m able to saturate my GPUs (sometimes a little too much, and they get hot).
st179665
While running distributed training, the training process hangs with no clue as to why. The GPU memory and GPU utilization (90%-100%) of all GPUs look normal, and no pid has been killed!!!
st179666
In my case it shows 100% GPU utilization, and some CPU cores are busy as well, but the weird thing is that nothing is actually progressing and the training program just seems “dead”. We use the same repo here: TorchCV. I tried to stop the program with Ctrl+C and also to simply kill it. I was expecting some error info, but nothing was printed.
st179667
Has anyone encountered this kind of problem before? It would also be great if anybody has techniques to localize the problem, or at least to get it to print something.
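(For anyone else debugging a hang like this, one generic way to see where a stuck worker is, sketched below, is a faulthandler signal hook added near the top of the training script; running py-spy dump --pid <pid> from another shell is a code-free alternative, and launching with NCCL_DEBUG=INFO often reveals which collective is blocking. None of this is specific to TorchCV.)

import faulthandler
import signal

# on SIGUSR1, print the Python stack of every thread to stderr,
# which usually shows exactly where the process is stuck
faulthandler.register(signal.SIGUSR1, all_threads=True)

# then, while the job is hung, from another shell:
#   kill -USR1 <pid of the stuck worker>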