st179668
I am trying to implement model parallelism in a distributed cluster setting. Let's say I have a tensor tensor in each process and a number of operations have been performed on it (in each process independently). The tensor has a .grad_fn attached to it. Now I want to perform an all_gather so that I create a list [tensor_1, tensor_2...tensor_n]. Then I can concatenate all those tensors using torch.cat. All the tensors in the list will lose the grad_fn property. My expectation is that process i will maintain the grad_fn for tensor_i in the list. It's OK if all the others are lost. I want to be able to call backward() after torch.cat in each process i through tensor_i. How can I achieve that? Any help is appreciated! EDIT: I think I can just do tensor_list[dist.get_rank()] = tensor after the all_gather operation, but I am not sure if there is a better way. Help?
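Concretely, the workaround I have in mind is a sketch like this (assuming the process group is already initialized and tensor has the same shape on every rank; the loss is just a placeholder):

import torch
import torch.distributed as dist

world_size = dist.get_world_size()
gathered = [torch.zeros_like(tensor) for _ in range(world_size)]
dist.all_gather(gathered, tensor)      # outputs are detached from the autograd graph
gathered[dist.get_rank()] = tensor     # put the autograd-connected local tensor back in its slot
out = torch.cat(gathered)
loss = out.pow(2).mean()               # any scalar loss
loss.backward()                        # gradients flow only through the local slice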
st179669
Solved by Andreas_Georgiou in post #4 I’ve built this package that does this automatically now: https://github.com/ag14774/diffdist. So this question can be marked as solved
st179670
Would it be possible to manually assign the grad_fn back to tensor_i? I don’t think it’s a good idea to retain gradient functions on output tensors of collective functions. If anything, it would give an expectation of this working well out of the box, which is not the case. I think a better solution would be to stitch things together with torch.autograd.grad yourself, before and after the collectives.
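As a rough sketch of what I mean (not a full implementation, and the loss here is a placeholder): detach the gathered tensors, take gradients with respect to them, exchange those gradients with dist.reduce, and then continue backpropagation through the local graph.

# gathered[i] came from rank i via dist.all_gather and is detached
leaves = [g.detach().requires_grad_() for g in gathered]
loss = torch.cat(leaves).pow(2).mean()        # placeholder loss, same form on every rank
grads = torch.autograd.grad(loss, leaves)
for i, g in enumerate(grads):
    dist.reduce(g, dst=i)                     # rank i collects the full gradient for its slice
tensor.backward(grads[dist.get_rank()])       # resume backprop through the local grad_fn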
st179671
Do you have any ideas? Do you want to calculate tensor_i in a different process for each rank, but accumulate across the processes so that the loss is obtained from all the tensor_i?
st179672
I've built this package that does this automatically now: https://github.com/ag14774/diffdist. So this question can be marked as solved
st179673
That's really cool! Thanks for sharing! I am not sure how the package works and whether it can be applied to this problem:

for iteration, (data0, data1) in enumerate(data_loader, start_iter):
    tensor = model(data0)
    synchronize()
    tensors = dist.all_gather(tensor)
    loss = model(data1, tensors)

So in each process a different data0 generates a tensor, and the gathered tensors are used for further training. Since all_gather cannot preserve the grad_fn, can you give me some advice on how to solve this? Thanks a lot.
st179674
Yes, the package can do that. However, tensor needs to be of the same shape and size in all processes. Then you can do something like:

for iteration, (data0, data1) in enumerate(data_loader, start_iter):
    tensor = model(data0)
    synchronize()  # You probably do not need this since all_gather will force a sync
    gather_list = [torch.empty_like(tensor) for i in range(dist.get_world_size())]
    gather_list = diffdist.functional.all_gather(gather_list, tensor)
    loss = model(data1, gather_list)

Keep in mind though that all_gather is not very fast because its backprop involves running dist.reduce multiple times. When PyTorch adds support for reduce_scatter, I will update the package to speed up the backprop.
st179675
Thank you so much for your help. I tried the code, but the gather_list after diffdist.functional.all_gather(gather_list, tensor) also doesn’t contain each tensor’s grad_fn. I found there is a parameter self.next_backprop in your code, do I need to set it? Sorry to bother you again.
st179676
11123: "I found there is a parameter self.next_backprop in your code, do I need to set it? Sorry to bother you again."
Andreas_Georgiou: "diffdist.functional.all_gather(gather_list, tensor)"
Apologies, the line should be:
gather_list = diffdist.functional.all_gather(gather_list, tensor)
If you get any errors, try setting inplace=False. No need to use next_backprop.
st179677
Thank you again. One final question: I wrote a simple example to see how the grad_fn works:

# in each process:
a = torch.tensor([1.0, 3.0], requires_grad=True).cuda()
b = a + 2 * dist.get_rank()
# gather
bs = [torch.empty_like(b) for i in range(dist.get_world_size())]
bs = diffdist.functional.all_gather(bs, b)
# loss backward
loss = (torch.cat(bs) * torch.cat(bs)).mean()
loss.backward()
print(a.grad)

I think a should have its gradient? But currently it is None. I am a little bit lost.
st179678
You are right it seems to be working for CPU but not for CUDA for some reason. I will investigate a bit more. Feel free to open a pull request if you find the problem
st179679
I found the problem. The package is working fine. The problem is that when you set requires_grad=True you set it on the CPU version of a. Then you called cuda() which created another node in the graph. Gradient will pass through the GPU tensor a and then be accumulated to the CPU version of the tensor since that is the one that has requires_grad set to true. What you should do is torch.tensor([1.0, 3.0], requires_grad=True, device='cuda'). In a realistic scenario with normal training this won’t be a problem.
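For illustration, a small sketch of the difference being described (the values are arbitrary):

a_cpu = torch.tensor([1.0, 3.0], requires_grad=True)   # leaf lives on the CPU
a = a_cpu.cuda()          # extra graph node; gradients accumulate in a_cpu.grad, a.grad stays None

b = torch.tensor([1.0, 3.0], requires_grad=True, device='cuda')  # leaf created directly on the GPU
(b * b).sum().backward()
print(b.grad)             # tensor([2., 6.], device='cuda:0'), because b is the leaf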
st179680
Sorry for my late reply. I tried your advice and then applied it to my own model, and it works! Thank you for your help. Actually, I don't know how you implement your model parallelism; here I use DistributedDataParallel in PyTorch to distribute the model to the different GPUs of one node. So based on my experiment, I think maybe your work can also solve the distributed GPU grad_fn gathering problem, like in "Will dist.all_gather break the auto gradient graph?". Thank you again.
st179681
Glad it works!
11123: "So based on my experiment, I think maybe your work can also solve the distributed GPU grad_fn gathering problem, like in Will dist.all_gather break the auto gradient graph?"
Yes, it seems that diffdist can handle that case. Of course different processes will have different computational graphs, but with diffdist some nodes are inserted in the graph that will cause them to sync and communicate with each other. For example, doing a Send operation will cause a Recv to be called during backward in order to receive the gradient.
st179682
I was training my model with 3 Nvidia 2080 Ti GPUs on Ubuntu 16.04. I used Nvidia Apex to use the full capacity of the GPUs. However, my PyTorch training code hung after a few epochs. It worked well for one or two trainings. I terminated the program and checked the GPUs with 'nvidia-smi'. It showed only two GPUs (and it was really slow). I found out that one of my GPUs was dead. My computer did not boot properly with that dead GPU (the GUI didn't show up). I reinstalled the OS with Ubuntu 18.04 and reinstalled drivers, but the problem still existed. When I plugged that GPU into a Windows machine, it showed error code 43. I was wondering if this problem was caused by Apex or if my graphics card had a problem. Is there anyone who had a similar issue with Apex?
st179683
I've never seen this issue raised by using apex and suspect the GPU might have some hardware issues. Depending on the opt_level you are using in apex.amp, we are e.g. patching some PyTorch methods to use FP16 instead of FP32 (whitelist/blacklist style) or transforming the model's parameters to FP16 and using master parameters (master gradients), etc. apex does not manipulate the hardware in any way and just uses CUDA and PyTorch for e.g. mixed precision training. How old is the GPU and how long was it working fine?
st179684
I just bought the GPUs. I've been using them for about two weeks and they ran fine for a few training sessions. I'm not sure it is relevant, but there were some weird incidents while using Apex. When I terminated a Python script with ctrl-c, sometimes Apex did not fully terminate and some sub-processes remained. Those sub-processes kept holding GPU memory, so I had to kill them manually. These incidents made me suspect Apex. I agree that it might be a hardware problem, but I wonder if high GPU utilization might harm a GPU. Is it possible that utilizing the GPU at 99% for a long time can affect the hardware (e.g. overheating)? Moreover, is there any other way to utilize the GPU up to 90% instead of using Apex? When I tested some code, GPU utilization was around 50~70% and I'm not sure whether that is normal. I wanted to increase it, so I ended up with Apex.
st179685
keunwoo: "I'm not sure it is relevant, but there were some weird incidents while using Apex. When I terminated a Python script with ctrl-c, sometimes Apex did not fully terminate and some sub-processes remained. Those sub-processes kept holding GPU memory, so I had to kill them manually. These incidents made me suspect Apex."
I'm not sure if this is caused by apex or PyTorch, as I've seen this behavior using plain PyTorch. If I'm not mistaken, this should be fixed in the latest stable release.
keunwoo: "I agree that it might be a hardware problem, but I wonder if high GPU utilization might harm a GPU. Is it possible that utilizing the GPU at 99% for a long time can affect the hardware (e.g. overheating)?"
If you didn't overclock the GPU, it should be fine. In case your device overheats, e.g. if your GPUs are packed tightly into the case, it should reduce its clock and shut down as the last step.
keunwoo: "Moreover, is there any other way to utilize the GPU up to 90% instead of using Apex? When I tested some code, GPU utilization was around 50~70% and I'm not sure whether that is normal. I wanted to increase it, so I ended up with Apex."
It depends on your code; e.g. you might have a data loading bottleneck. This post explains some workarounds.
st179686
Hi, I am using torch distributed with the gloo backend (because I need peer-to-peer communication). While running my test script, I got a 'Connection reset by peer' error when dist.recv is called. Any clue on what causes this? I am using mpi to launch 2 processes and my script is pasted below:

def run(rank, size):
    tensor = torch.zeros(1).cuda()
    if rank == 0:
        tensor += 1
        # Send the tensor to process 1
        dist.send(tensor=tensor, dst=1)
    else:
        # Receive tensor from process 0
        dist.recv(tensor=tensor, src=0)
    print('Rank ', rank, ' has data ', tensor[0])

def init_processes(rank, size, addr, fn, backend):
    os.environ['MASTER_ADDR'] = addr
    os.environ['MASTER_PORT'] = '12345'
    my_rank = int(os.environ['OMPI_COMM_WORLD_RANK'])
    dist.init_process_group(backend, rank=my_rank, world_size=size)
    fn(rank, size)

if __name__ == "__main__":
    hostname = socket.gethostname()
    addr = socket.gethostbyname(hostname)
    size = 2
    my_rank = int(os.environ['OMPI_COMM_WORLD_RANK'])
    init_processes(my_rank, size, addr, run, 'gloo')

The error I got is below:

  File "/home/xzhu1900/anaconda3/envs/test_py37/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 712, in recv
    pg.recv([tensor], src, tag).wait()
RuntimeError: [/pytorch/third_party/gloo/gloo/transport/tcp/pair.cc:563] Read error [127.0.1.1]:48028: Connection reset by peer
st179687
Hi folks, I've noticed that whenever I run multi-GPU training on my machine it suddenly reboots. There are 2 GPUs on the machine: a 1080 Ti and a Titan X. The 1080 Ti is gpu:0 and is also used to drive 2 monitors for video display. I never had issues before with multi-GPU training using TensorFlow. Any ideas what might be causing this?
st179688
This sounds like your PSU might be too weak. What kind of PSU are you using at the moment?
st179689
I think this might indeed be the problem, as the recommended system power for the Titan X is 600W and the 1080 Ti might need another 250W at max performance. Your PSU might be maxed out with these two GPUs (of course it also depends on other hardware and its power consumption).
st179690
My understanding from reading the linked page for the Titan X is that the card itself requires 250W, and 600W is the recommended total power for a system using the card. 600W for the Titan X alone seems ridiculously high to me. If I max out the Titan X, it barely goes above 260W. In the worst case, both cards maxed out would draw about 600W, which still doesn't explain the reboots. Also, I've trained multi-GPU with TensorFlow, maxing out both cards, and didn't notice any of these issues.
st179691
The info for the Titan X states that 600W is recommended for this single GPU and the whole system. So if that's the max. power consumption of a single Titan X + rest of the system, the additional 1080 Ti might need 250W extra, as stated on the other page. Maybe you were lucky in TF as the GPUs might not have been at their peak power consumption. You could try to set the power limits lower using nvidia-smi, maybe even create artificial bottlenecks so that your GPUs only have short bursts of power consumption, or test another PSU if available.
st179692
ptrblck: set the power limits lower using nvidia-smi Oh nice, I didn’t know you could do that! Let me search around and see how to test those things. Thanks for the tip!
st179693
Hi @ptrblck, so I did a breakdown of the components and the wattage required by my setup (table attached as an image). And it seems that an 850W PSU should be able to handle that, no?
st179694
Hello, I'm seeing an odd issue with using the pin_memory=True flag with the dataloader. I'm measuring the time taken to transfer data from host RAM to GPU memory as follows:

transfer_time_start = time.time()
input = input.cuda(args.gpu, non_blocking=False)
target = target.cuda(args.gpu, non_blocking=False)
torch.cuda.synchronize()
transfer_time.update(time.time() - transfer_time_start)

With pin_memory=True in the dataloader, this gives me a transfer time of 0.03 sec, which for a batch size of 256 translates into 256*224*224*3*4 bytes / 0.03 s ≈ 5.1 GB/s, which is a bit low for my CPU-GPU interconnect (x16, PCIe3) which should deliver ~12 GB/s. I then tried calling pin_memory() manually on the tensor returned by the enumerate call, as shown below:

for i, (input, target) in enumerate(train_loader):
    input = input.pin_memory()
    # measure data loading time
    data_time.update(time.time() - end)
    transfer_time_start = time.time()
    input = input.cuda(args.gpu, non_blocking=False)
    target = target.cuda(args.gpu, non_blocking=False)
    torch.cuda.synchronize()
    transfer_time.update(time.time() - transfer_time_start)

Now the transfer time dropped to 0.014 sec, which translates to ~11 GB/s, which is as expected. Does anyone have any ideas why setting pin_memory=True in the data loader may not return a tensor already in pinned memory? I also recorded two plots of the transfer time (green plot) from host memory to the GPU: when I call pin_memory manually, the transfer time stays consistently low, whereas without calling pin_memory manually, the transfer time is highly variable and averages around 0.03 sec.
st179695
I cannot speak much about the manual approach as I haven't tried it, but regarding pin_memory=True I observe in practice that it slows down training by about 2x (compared to False), tested in PyTorch 0.4.1 and 1.0 and on two independent machines (one with 1080 Ti's and one with Titan V's). So, in practice, I abandoned using it. I remember there was a thread where someone mentioned similar observations. So, it may well be that there's a bug with pin_memory=True, especially since you observe that the manual approach results in the expected speed-up.
st179696
Thanks for the reply. As my experiments confirm, transfer to GPU is significantly faster for data in pinned memory, so it is worth doing it. Issue is that transfer to pinned memory itself costs time, and only saves time overall if it can be parallelized. The data loader seems to be doing this - it spins up a separate thread for transfer to pinned memory when pin_memory flag is set to True. When you call enumerate or next(iter), dataloader waits until a batch is available in pinned_memory so if the processing time on GPU is sufficiently long, then the latency of pinned memory transfer should be hidden, at least partly. The question is why is transfer to the GPU still slow even though the batch is in pinned memory? One difference between the manual approach and the regular approach (not calling input.pin_memory() manually) is that in the manual approach, the transfer is done over the main thread, while in the regular approach, it is being done on another thread. Does this make a difference?
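For reference, the pattern I am assuming here is roughly the following (model and dataset are placeholders): pinned batches plus non_blocking=True should let the host-to-device copy overlap with other host-side work before the next synchronization point.

loader = torch.utils.data.DataLoader(dataset, batch_size=256,
                                     num_workers=4, pin_memory=True)
for images, target in loader:
    # non_blocking only has an effect if the source batch is already in pinned memory
    images = images.cuda(non_blocking=True)
    target = target.cuda(non_blocking=True)
    output = model(images)   # kernel launches queue behind the asynchronous copies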
st179697
I can also verify this, since I have the same observations: when using pin_memory=True and num_workers=1 I see the GPU utilization at ~40% throughout the training period, but with pin_memory=False and num_workers=4 the GPU utilization is at ~90%. Plus, I see my CPU at full utilization (the fans kick in), but without pin_memory everything seems fine.
st179698
I have a Siamese Network with a triplet loss function at the end. E.g.:

class Siamese(nn.Module):
    def __init__(self, ae_net):
        super(Siamese, self).__init__()
        self.ae_net = ae_net

    def forward(self, x1, x2, x3, hidden):
        a = self.ae_net(x1, hidden)
        b = self.ae_net(x2, hidden)
        c = self.ae_net(x3, hidden)
        return a, b, c

The network that repeats itself (i.e., self.ae_net) is an LSTM with inputs of varying lengths, so I'm not sure I can use nn.DataParallel. I was wondering if there was a way to assign the implementation of every instance of self.ae_net to a different GPU, so that they would be calculated in parallel. Thanks
st179699
Solved by albanD in post #4 Ho right it is, my bad. It is a bit tricky for Siamese network as you need to accumulate the gradients for all tree runs. One simple way to do this is to make three copies of your network, one on each device, then send each copy to it’s respective device. After each backward, you will need to acc…
st179700
Hi, In your example, you use the hidden state sequentially for each network one after the other. Is that what you want? Because if you want that, then you cannot really run them in parallel on different GPUs as they will need to wait for the previous one to finish.
st179701
Hi, thanks for your answer! The hidden state that is used for a, b and c is the same one, i.e., it’s just an initialized tensor used three times separately (isn’t it?), so I don’t think it’s a problem to use them in parallel.
st179702
Ho right it is, my bad. It is a bit tricky for a Siamese network as you need to accumulate the gradients for all three runs. One simple way to do this is to make three copies of your network, one on each device, then send each copy to its respective device. After each backward, you will need to accumulate the gradients by hand and then share the new values by hand. This is going to be tricky to do very efficiently, and you might not get a large improvement from using multiple GPUs because of the synchronisation needed between devices.
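A very rough sketch of what I mean, assuming three GPUs and the ae_net module from the original post; the triplet loss and the accumulation loop are illustrative only, not a tuned implementation:

import copy
import torch
import torch.nn.functional as F

devices = ['cuda:0', 'cuda:1', 'cuda:2']
replicas = [copy.deepcopy(ae_net).to(d) for d in devices]

def siamese_step(x1, x2, x3, hidden):
    # run each branch on its own GPU; autograd can cross devices inside one process
    outs = [rep(x.to(d), hidden.to(d))
            for rep, x, d in zip(replicas, (x1, x2, x3), devices)]
    a, b, c = [o.to(devices[0]) for o in outs]
    loss = F.triplet_margin_loss(a, b, c)
    loss.backward()
    # accumulate the gradients by hand onto the first replica
    with torch.no_grad():
        for params in zip(*[r.parameters() for r in replicas]):
            params[0].grad = sum(p.grad.to(devices[0]) for p in params)
    return loss

# after optimizer.step() on replicas[0], copy its parameters back to the other
# replicas and zero the gradients on every replica before the next iteration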
st179703
Hi all, I am trying to implement a model that runs with multi-GPU training and DataParallel. The problem is that, due to the nature of my model, occasionally there will be forward paths where losses are not produced. Problem: Without data parallel, I simply set these missing losses to None or 0, and only add losses that are not None or 0 to my total loss. However, moving on to multi-GPU, there are instances where the process on GPU 0 will produce an actual loss, while the process on GPU 1 will not produce that particular loss. Hence, when the DataParallel module gathers these losses, bad things happen: it cannot combine the losses from the two GPUs. Things I tried: I tried making the missing loss 0, a float tensor of 0, or setting it to None. None of these methods work; they each produce an error in torch/nn/parallel/scatter_gather.py. For instance, setting the missing loss to 0, which is not iterable, will give a "TypeError: zip argument #1 must support iteration" error. Question: I am wondering if there is a particular option in DataParallel that ignores missing entries when combining losses from multiple GPUs? Or is there a possible solution to this problem without altering PyTorch code? Any help is greatly appreciated!
st179704
Hi, I have a network with two components (X, Y) which takes two inputs: A and B. I wrap the network in data parallel. X returns a full-batch-size output (how?) while Y returns a 1/4-batch-size output (this is expected). Should I not nest subclasses of nn.Module while using the data parallel wrapper? Since data parallel does support multiple inputs, this is unexpected (https://github.com/pytorch/pytorch/pull/794). I even tried separating the X, Y modules and wrapping them in data parallel separately, but I still get the same error. Any hints on what might be wrong?
st179705
Could you post a minimal code snippet to see, how you are using submodules inside your model and how you are applying nn.DataParallel to it?
st179706
I am getting the same error in two ways:

class EncoderCNN(nn.Module):
    def __init__(self, embed_size, pre_trained_emb_size=2048):
        """Load the pretrained ResNet Features"""
        super(EncoderCNN, self).__init__()
        self.linear = nn.Linear(pre_trained_emb_size, embed_size)
        self.relu = nn.ReLU(embed_size)
        self.bn = nn.BatchNorm1d(embed_size, momentum=0.01)

    def forward(self, images):
        """Extracted feature vectors from input"""
        features = self.bn(self.relu(self.linear(images)))
        return features

class TransformerEncoder(nn.Module):
    def __init__(self, vocab_size, embed_size, d_model=300, nhead=6, dim_feedforward=512, dropout=0.1):
        super(TransformerEncoder, self).__init__()
        self.input_size = vocab_size
        self.embed_size = embed_size
        self.embedding = nn.Embedding(self.input_size, self.embed_size)
        # Encoder CNN
        self.encoder_cnn = EncoderCNN(embed_size=self.embed_size)
        self.transformer_encoder_layer = nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward)

    def forward(self, input, input_lengths, images):
        input = input.reshape((input.shape[1], input.shape[0]))
        embedded = self.embedding(input)
        output = self.transformer_encoder_layer(embedded)
        image_encodings = self.encoder_cnn(images)
        return output, image_encodings

Even if I take the encoder CNN module out and wrap it in data parallel separately, I get the same error. The final returned tensors output and image_encodings have different batch sizes.
st179707
I’m not sure, why you are reshaping the input in TransformerEncoder's forward method. Could you explain this work flow as I think it might be related to this issue?
st179708
Thanks, I am sending max_len x batch size x embedding_size input to transformer layer. This is the same api as in LSTM (without batch_first=True). Since, data parallel module expects the module’s input to have batch in first dimension, I am passing batch as first dimension and then reshaping it before passing it to transformer. Same for images, batch_size x embedding_size input is passed to the module. Do you see anything wrong here?
st179709
Thanks for the information. In that case you might want to use .permute, as .reshape might interleave the data:

x = torch.tensor([[0., 0.], [1., 1.], [2., 2.], [3., 3.]])
print(x.reshape(x.size(1), x.size(0)))
> tensor([[0., 0., 1., 1.],
          [2., 2., 3., 3.]])
print(x.permute(1, 0))
> tensor([[0., 1., 2., 3.],
          [0., 1., 2., 3.]])

I don't see any obvious errors. Could you add print statements inside the forward method, which will print the shape as well as the device of the input, output and each intermediate tensor?
st179710
Thanks. The print statements and the output on the console are:

def forward(self, input, input_lengths, images):
    ptvsd.break_into_debugger()
    print("Input", input.shape, input.get_device())
    input = input.permute((1, 0))  # input.reshape((input.shape[1], input.shape[0]))
    print("Permuted Input", input.shape, input.get_device())
    embedded = self.embedding(input)
    print("Embedded", embedded.shape, embedded.get_device())
    # packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths)
    output = self.transformer_encoder_layer(embedded)
    print("Output", output.shape, output.get_device())
    # output, _ = torch.nn.utils.rnn.pad_packed_sequence(output)
    image_encodings = self.encoder_cnn(images)
    print("Image Encodings", image_encodings.shape, image_encodings.get_device())
    return output, image_encodings

And the output is (the prints from the different replicas interleave):

Input torch.Size([64, 30]) 0
Input torch.Size([64, 30]) 1
Permuted Input torch.Size([30, 64]) 0
Input torch.Size([64, 30]) 2
Permuted Input torch.Size([30, 64]) 1
Permuted Input torch.Size([30, 64]) 2
Input torch.Size([64, 30]) 3
Embedded torch.Size([30, 64, 300]) 0
Embedded torch.Size([30, 64, 300]) 2
Embedded torch.Size([30, 64, 300]) 1
Output torch.Size([30, 64, 300]) 1
Output torch.Size([30, 64, 300]) 0
Output torch.Size([30, 64, 300]) 2
Permuted Input torch.Size([30, 64]) 3
Image Encodings torch.Size([64, 300]) 1
Image Encodings torch.Size([64, 300]) 2
Image Encodings torch.Size([64, 300]) 0
Embedded torch.Size([30, 64, 300]) 3
Output torch.Size([30, 64, 300]) 3
Image Encodings torch.Size([64, 300]) 3

And the returned tensors have the shapes:

shape: torch.Size([120, 64, 300]) device: device(type='cuda', index=0)
shape: torch.Size([256, 300]) device: device(type='cuda', index=0)
st179711
It looks like you would need to permute the output of the TransformerEncoderLayer again, as its output has the shape [T, N, E] (batch dimension in dim1).
st179712
Thanks, the data parallel issue seems to be solved and the code is working. However, it is very slow (slower than a single GPU) and I am getting the error described here: How to flatten parameters? Any hints on how to fix the issue? I assume the LSTM's flatten_parameters() must be called after each call, but even if I do that, I still get the linked warning and the code is very slow.
st179713
Hi all, I'm working on a distributed learning system where I'm splitting a big model into smaller parts. I call the big network a chain, consisting of smaller networks called links. The use case is that the links can be clients in a distributed learning system. Currently I'm initializing separate models and forwarding the output of each link to the next link in the chain; this works well. When I want to replace a link in the chain with another, I'm currently simply calling a different model. For implementation's sake, it would be easier to just replace the state dict of several layers in the chain with the state dict of another link. My question: does the state dict of a layer encompass all of the layer's properties/attributes/links? (I'm familiar with most of them: weights, biases, gradients; but I've switched to PyTorch quite recently and would not know what other information might stay linked.)
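What I have in mind is roughly the following; my understanding (please correct me) is that a state_dict holds the registered parameters and buffers (e.g. BatchNorm running mean/var) only, not gradients, optimizer state, or hooks. LinkNet and chain.links are placeholder names here:

new_link = LinkNet()
new_link.load_state_dict(old_link.state_dict())  # copies parameters and buffers only
chain.links[2] = new_link                        # gradients/optimizer state must be handled separately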
st179714
Hi, guys. I'm trying to build a simple distributed data parallel training program with 1 GPU per process. First I followed https://pytorch.org/tutorials/intermediate/dist_tuto.html and added some modifications:

def run(rank, size):
    ...
    model = Net()
    ...

if __name__ == '__main__':
    ...
    for rank in range(size):
        p = Process(target=init_process, args=(rank, size, run))
    ...

However, after reading the example https://github.com/pytorch/examples/tree/master/mnist_hogwild, I found that the model is one of the arguments of the process:

if __name__ == '__main__':
    ...
    model = Net().to(device)
    ...
    for rank in range(args.num_process):
        p = mp.Process(target=train, args=(rank, args, model, device, dataloader_kwargs))
    ...

So I just wonder: where should the model be declared, inside or outside these processes? When and how do the gradients get synchronized?
st179715
Solved by pietern in post #2 If you don’t care for doing hogwild, the second example you list is not applicable to you. Everything is declared in the primary process because the model weights are to be shared (physically shared through shared memory) between the worker processes. If you want to simply use a single process per …
st179716
If you don’t care for doing hogwild, the second example you list is not applicable to you. Everything is declared in the primary process because the model weights are to be shared (physically shared through shared memory) between the worker processes. If you want to simply use a single process per GPU and don’t care for physical weight sharing, then you should declare everything in the subprocesses. Think of it as if you were running this on different machines instead of multiple processes on a single machine. Then you’d also have to declare everything in the processes that you launch on those machine instead of a single launcher processes.
st179717
Big thanks for your reply! So hogwild is not a must for distributed data parallel; even if the models are declared in different processes, they can synchronize their parameters and gradients in some way. Is that right? I tried saving the models at the end of these processes using the following code, but I found the parameters of the models are different. Does this mean that I did something wrong?

def run(rank, size):
    ...
    model = Net()
    ...
    for epoch in range(args.epochs):
        for batch_idx, (input, target) in enumerate(train_loader):
            output = model(input)
            loss = criterion(output, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    ...
    torch.save({...}, f'checkpoint-process-{rank}.pth')
st179718
You need to wrap your model with nn.DistributedDataParallel, see https://pytorch.org/tutorials/intermediate/ddp_tutorial.html for a tutorial.
st179719
Surely I did this. Sorry for omitting it in the code above. It's like:

...
torch.cuda.set_device(rank)
...
model = Net().cuda()
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[rank], output_device=rank)
...

However, the parameters I got still differ.
st179720
I tried saving the model just after the forward; the parameters of the saved models are the same.

def run(rank, size):
    ...
    for epoch in range(args.epochs):
        output = model(input)
        loss = criterion(output, target)
        torch.save({...}, f'checkpoint-process-{rank}-epoch-{epoch}.pth')
        ...
    ...

Still checking the sync of parameters and gradients in the source code.
st179721
Update: just following the ImageNet example syncs the parameters. I was using the wrong way to check the parameters of the model.
st179722
Hello, What is the function/reason to enqueue(std::move(entry)); in ProcessGroupMPI Allreduce? I don’t understand this part of the code
st179723
This is where the operation is queued to be executed by a background thread. All collective calls execute on a separate thread so you can block on their completion only when you need the result. This allows for overlapping of gradient reduction with gradient computation.
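On the Python side the same idea shows up as collectives with async_op=True: the call returns a work handle immediately, and you only block when the result is actually needed. A tiny sketch (grad_bucket is a placeholder tensor):

work = dist.all_reduce(grad_bucket, async_op=True)  # queued, returns a work handle
# ... keep computing gradients for earlier layers ...
work.wait()                                         # block only when the reduced values are needed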
st179724
When you talk about “overlapping of gradient reduction with gradient computation” you mean the overlapping between forward and backward propagation?
st179725
No, it is overlapped with gradient computation (backward propagation) only. As more and more gradients have been computed, they are ready to be reduced. There is no need to wait with reducing them until you have computed all of them.
st179726
@pietern Hi, could you tell me why ProcessGroupNCCL does not use the enqueue(xxx) function? Does NCCL itself implement gradient synchronization across multiple nodes?
st179727
Hi, I have a model with a grid_sample layer. I tried to train the model on multiple GPUs, but got the following error:
RuntimeError: grid_sampler(): expected input and grid to be on same device, but input is on cuda:1 and grid is on cuda:0
Is there any way to use this layer on multiple GPUs? Thanks
st179728
Solved by ptrblck in post #5 Try to use grid.device instead. Might be and you should stick to your work flow, as I was just using it as an example. You could also try to register self.grid as a buffer using self.register_buffer, which would move the tensor automatically using model.to().
st179729
The input and grid should be on the same device. If you are creating one of these tensors manually in the forward or pass it to the forward method, make sure to transfer it to the same device, e.g. by using: grid = grid.to(x.device)
st179730
Thanks for the reply. I got a segmentation fault when moving either the grid or the input to the same device via input = input.to(grid.get_device()).
st179731
My grid is actually the same for all inputs, so I stored it using self.grid = grid and using grid_sample(input, self.grid). Do you think this causes the problem? But I think it’s inefficient to pass the grid every forward.
st179732
Try to use grid.device instead.
sunshineatnoon: "But I think it's inefficient to pass the grid every forward."
Might be, and you should stick to your work flow, as I was just using it as an example. You could also try to register self.grid as a buffer using self.register_buffer, which would move the tensor automatically using model.to().
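A rough sketch of the register_buffer approach, assuming a grid of shape (1, H, W, 2) shared across the batch; the module and attribute names are illustrative:

import torch.nn as nn
import torch.nn.functional as F

class Warp(nn.Module):
    def __init__(self, grid):
        super().__init__()
        # buffers are moved by model.to(...) and replicated to each device by nn.DataParallel
        self.register_buffer('grid', grid)

    def forward(self, x):
        grid = self.grid.expand(x.size(0), -1, -1, -1)  # match the (sub-)batch size
        return F.grid_sample(x, grid)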
st179733
register_buffer solves my problem. The segmentation fault actually came from other parts. It seems that when training on multiple GPUs, we cannot call .cuda() during the forward path, so everything should be registered in the buffer. Thanks so much for your help!
st179734
You could call .cuda() or to(), but should specify the right device to push the tensor to. E.g. if you would like to create some tensors inside the forward method, you could use the device of some buffers/parameters or the incoming tensor to create the new one. However, if self.grid is treated as an attribute of the model, registering it as a buffer is the cleaner and better approach.
st179735
Any way to get the number of processes per node in distributed training? In Horovod we could use hvd.local_size(), but I found no alternative in the distributed module. Thanks.
st179736
There is no concept of local processes and remote processes; only the total number of processes through torch.distributed.get_world_size(). To understand if we need to add this: what do you need it for?
st179737
For example, I want to fully use all CPUs for data loading without overhead. Thus I want to divide num_workers by the local size.
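Something like this is what I have in mind, with the per-node process count passed in by my own launch script since torch.distributed does not expose it (NPROC_PER_NODE here is just an environment variable I would set myself):

import os

procs_per_node = int(os.environ.get('NPROC_PER_NODE', '1'))
num_workers = max(1, os.cpu_count() // procs_per_node)
loader = torch.utils.data.DataLoader(dataset, batch_size=64,
                                     num_workers=num_workers, pin_memory=True)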
st179738
I am using the PyTorch functions torch.rfft() and torch.irfft() inside the forward path of a model. It runs fine on a single GPU. However, when I train the model on multiple GPUs, it fails and gives the error:
RuntimeError: cuFFT error: CUFFT_INTERNAL_ERROR
Does anybody have an intuition for why this is the case? Thanks!
st179739
Hi @Tim_Zhang – are you using torch.nn.DataParallel for training on multiple GPUs? If so, this could be some sort of initialization bug where cuFFT is initialized on the first device only and not others.
st179740
I have the same problem here: when using DataParallel with torch.fft() or torch.rfft(), it crashes without an error message, i.e. Visual Studio pops up saying "An unhandled win32 exception occurred in python.exe".
st179741
I managed to reproduce this issue and reported it at https://github.com/pytorch/pytorch/issues/24176.
st179742
Hi, I am a newcomer to PyTorch and I'm confused when running the official torch.distributed example at PyTorch ImageNet main.py L304. I have made some small modifications to the evaluation part of the source code, like below:

model.eval()
with torch.no_grad():
    end = time.time()
    for i, (images, target, image_ids) in enumerate(val_loader):
        if args.gpu is not None:
            images = images.cuda(args.gpu, non_blocking=True)
        target = target.cuda(args.gpu, non_blocking=True)
        image_ids = image_ids.data.cpu().numpy()
        output = model(images)
        loss = criterion(output, target)
        # Get acc1, acc5 and update
        acc1, acc5 = accuracy(output, target, topk=(1, 5))
        losses.update(loss.item(), images.size(0))
        top1.update(acc1[0], images.size(0))
        top5.update(acc5[0], images.size(0))
        # print at i-th batch of images only
        dist.barrier()
        if i == 0:
            if args.gpu == 0:
                print("gpu 0", acc1, output.shape)
            if args.gpu == 1:
                print("gpu 1", acc1, output.shape)
            if args.gpu == 2:
                print("gpu 2", acc1, output.shape)
            if args.gpu == 3:
                print("gpu 3", acc1, output.shape)

And the above code gives the following output:

Use GPU: 0 for training
Use GPU: 1 for training
Use GPU: 3 for training
Use GPU: 2 for training
=> loading checkpoint 'model_best.pth.tar'
...
gpu 3 tensor([75.], device='cuda:3') torch.Size([32, 200])
gpu 2 tensor([75.], device='cuda:2') torch.Size([32, 200])
gpu 1 tensor([75.], device='cuda:1') torch.Size([32, 200])
gpu 0 tensor([75.], device='cuda:0') torch.Size([32, 200])

As I am using 4 GPUs with a batch size of 128, I think the 128 images have been divided and fed into the 4 GPUs respectively. So all four GPUs have output.shape[0] = 32 (where 200 is num_classes). But what has really confused me is that all 4 GPUs are showing the same acc1. In my understanding, as the 4 GPUs are taking different input portions (32 images each), they should also give different outputs and accuracies corresponding to their respective inputs. However, in my print test, these GPUs are showing the same output and accuracy. And I don't know why; shouldn't they be different? Looking for help. Thank you in advance!
st179743
Okay, I think maybe I have found the answer in its GitHub issues: "distributed eval to be done".
st179744
That’s correct. Without a distributed sampler for the evaluation dataset, the different processes end up processing the same evaluation inputs, and correspondingly give the same accuracy.
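For example, a sketch based on the ImageNet example's args (it assumes a reasonably recent PyTorch where DistributedSampler accepts shuffle); each process then scores only its shard, so the per-process metric sums still need a dist.all_reduce (or similar) to get global numbers:

val_sampler = torch.utils.data.distributed.DistributedSampler(val_dataset, shuffle=False)
val_loader = torch.utils.data.DataLoader(
    val_dataset, batch_size=args.batch_size, sampler=val_sampler,
    num_workers=args.workers, pin_memory=True)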
st179745
I am trying to train ImageNet on an 8-GPU server. However, I still have enough GPU memory left after I start training ResNet-50 using distributed training. So I want to launch another training job, but I get an error saying the address is already in use. Can we run two distributed trainings on a single machine?
st179746
How are you launching the distributed training processes? If manually using torch.distributed, try setting the master_port setting. The way to do this differs; refer here. If you're doing it using the torch.distributed.launch utility, then try setting the --master_port flag. Finally, it isn't always all about the GPU memory left unused. Check if your data loader is fast enough: you can do this by checking your CPU usage. If all your CPUs are maxing out, or waiting on data I/O, or whatever the case, i.e. if your GPU is idle waiting on data, then launching another training will just load the CPUs more, thus using even less of the GPU, since the GPU would be idle most of the time waiting for data loading.
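For the address conflict specifically, a sketch of pointing the second job at a different rendezvous port before init_process_group (the port numbers here are arbitrary; 29500 is the usual default):

import os
import torch.distributed as dist

os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '29501'   # anything not used by the first job
dist.init_process_group('nccl', rank=rank, world_size=world_size)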
st179747
While training the network on 8 GPUs in parallel, I am going to manually change the parameters in the network with the following code:

for param in model.parameters():
    param.data.fill_(other_parameter)

I am wondering if this would change all the parameters on the different GPUs or just on GPU:0?
st179748
It depends what you’re using for parallelism. If you use nn.DataParallel you should be able to do this, as the model is replicated to the other GPUs in every iteration. This means you only need to modify the parameters of the root module. This is also where you’d run the optimizer, for example.
st179749
"The model is replicated to the other GPUs in every iteration" means the state_dicts are copied to the other GPUs every iteration? So the running mean and var in BN are also copied to the other GPUs from GPU:0? Is there any document that explains this process in detail? I am really curious about the parallelism mechanism used in PyTorch, since I always run experiments in a multi-GPU environment.
st179750
Yes, that's correct. The documentation covers this (the replication bit), see torch.nn.DataParallel. Note that this is not how the distributed version works. There, every process runs forward/backward/optimizer against a single copy of the model, so its parameters are equivalent already. Not by replicating the values, but by executing the exact same optimizer step.
st179751
Thanks a lot! I know what you mean. So, in summary, the multi-GPU environment works as follows:
1. Scatter the model and the state dict from GPU:0 to all the GPUs.
2. Split the data, and separately forward it on the different GPUs.
3. Gather the outputs from the GPUs to GPU:0.
4. Calculate the loss using the outputs and targets on GPU:0.
5. Backward the loss to the GPUs and separately calculate gradients.
6. Gather the gradients from the GPUs to GPU:0.
7. Update the parameters on GPU:0.
8. Go to step 1.
So, the only thing that is not fully synchronized is the mean and var of BN, because they are not gathered to GPU:0 during backward. All the other parameters are fully synchronized because of the gather-scatter mechanism.
st179752
2 nodes, 1 container per node, CPU-only code running in the containers, connected by TCP:

docker run --rm -it --ipc=host --network=host xxx

python mnist.py --init-method tcp://ip:port --rank 0 --world-size 2
python mnist.py --init-method tcp://ip:port --rank 1 --world-size 2

My code is here:

from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import time
import torch.nn.parallel
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
import torch.distributed as dist
import torch.utils.data
import torch.utils.data.distributed
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable

# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=1024, metavar='N',
                    help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                    help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=20, metavar='N',
                    help='number of epochs to train (default: 10)')
parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
                    help='learning rate (default: 0.01)')
parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
                    help='SGD momentum (default: 0.5)')
parser.add_argument('--no-cuda', action='store_false', default=False,
                    help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
                    help='random seed (default: 1)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                    help='how many batches to wait before logging training status')
parser.add_argument('--init-method', type=str, default='tcp://127.0.0.1:23456')
parser.add_argument('--rank', type=int)
parser.add_argument('--world-size', type=int)
args = parser.parse_args()
args.cuda = not args.no_cuda and torch.cuda.is_available()

dist.init_process_group(init_method=args.init_method, backend="gloo",
                        world_size=args.world_size, rank=args.rank,
                        group_name="pytorch_test")

torch.manual_seed(args.seed)
if args.cuda:
    torch.cuda.manual_seed(args.seed)

train_dataset = datasets.MNIST('data', train=True, download=True,
                               transform=transforms.Compose([
                                   transforms.ToTensor(),
                                   transforms.Normalize((0.1307,), (0.3081,))
                               ]))
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
kwargs = {'num_workers': 1, 'pin_memory': True} if args.cuda else {}
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=args.batch_size,
                                           shuffle=True, **kwargs, sampler=train_sampler)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('data', train=False, transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])),
    batch_size=args.test_batch_size, shuffle=True, **kwargs)

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x)

model = Net()
model = torch.nn.parallel.DistributedDataParallelCPU(model)
optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)

def train(epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        if args.cuda:
            data, target = data.cuda(), target.cuda()
        data, target = Variable(data), Variable(target)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))

def test():
    model.eval()
    test_loss = 0
    correct = 0
    for data, target in test_loader:
        if args.cuda:
            data, target = data.cuda(), target.cuda()
        data, target = Variable(data, volatile=True), Variable(target)
        output = model(data)
        test_loss += F.nll_loss(output, target, size_average=False).item()  # sum up batch loss
        pred = output.data.max(1, keepdim=True)[1]  # get the index of the max log-probability
        correct += pred.eq(target.data.view_as(pred)).cpu().sum()
    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))

tot_time = 0
for epoch in range(1, args.epochs + 1):
    train_sampler.set_epoch(epoch)
    start_cpu_secs = time.time()
    # long running
    train(epoch)
    end_cpu_secs = time.time()
    print("Epoch {} of {} took {:.3f}s".format(
        epoch, args.epochs, end_cpu_secs - start_cpu_secs))
    tot_time += end_cpu_secs - start_cpu_secs
    test()

print("Total time= {:.3f}s".format(tot_time))

And then I got this problem:

  File "mnsit.py", line 43, in <module>
    dist.init_process_group(init_method=args.init_method,backend="gloo",world_size=args.world_size,rank=args.rank,group_name="pytorch_test")
  File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 416, in init_process_group
    timeout=timeout)
  File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 484, in _new_process_group_helper
    timeout=timeout)
RuntimeError: [/pytorch/third_party/gloo/gloo/transport/tcp/pair.cc:760] connect [127.0.1.1]:10129: Connection refused

root@pcl2-2288H-V5:/workspace/recommendation# python mnsit.py --init-method tcp://10.10.16.62:45795 --rank 0 --world-size 2
Traceback (most recent call last):
  File "mnsit.py", line 43, in <module>
    dist.init_process_group(init_method=args.init_method,backend="gloo",world_size=args.world_size,rank=args.rank,group_name="pytorch_test")
  File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 416, in init_process_group
    timeout=timeout)
  File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 484, in _new_process_group_helper
    timeout=timeout)
RuntimeError: [/pytorch/third_party/gloo/gloo/transport/tcp/pair.cc:760] connect [127.0.1.1]:39850: Connection refused
st179753
The Gloo backend tries to resolve each process' IP address by looking at the host name. This likely resolves to the loopback address for you, looking at the error message. You can set GLOO_SOCKET_IFNAME to the network interface name you want to use for communication and it will resolve the right IP address. Also see the torch.distributed documentation.
st179754
Thanks, it's fixed. The machine has too many network interfaces; I tried all of them.
st179755
I got the following error after I changed my code to use multiprocessing and the distributed data parallel module. Additionally, I also started to use the apex package for mixed precision computation.

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
    fn(i, *args)
  File "/app/train_action_model_apex.py", line 385, in main_worker
    evaluate_model(args, root_dir)
  File "/app/train_action_model_apex.py", line 343, in evaluate_model
    dict_results = evaluator.inference()
  File "/app/evaluators/action_model_evaluator.py", line 224, in inference
    for data in self.dataloader:
  File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 193, in __iter__
    return _DataLoaderIter(self)
  File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 469, in __init__
    w.start()
  File "/usr/lib/python3.5/multiprocessing/process.py", line 105, in start
    self._popen = self._Popen(self)
  File "/usr/lib/python3.5/multiprocessing/context.py", line 212, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/usr/lib/python3.5/multiprocessing/context.py", line 274, in _Popen
    return Popen(process_obj)
  File "/usr/lib/python3.5/multiprocessing/popen_spawn_posix.py", line 33, in __init__
    super().__init__(process_obj)
  File "/usr/lib/python3.5/multiprocessing/popen_fork.py", line 20, in __init__
    self._launch(process_obj)
  File "/usr/lib/python3.5/multiprocessing/popen_spawn_posix.py", line 48, in _launch
    reduction.dump(process_obj, fp)
  File "/usr/lib/python3.5/multiprocessing/reduction.py", line 59, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'generate_dataset.<locals>.<lambda>'

It seems that the error is caused by the lambda used for the ThreeCrop transformation, which mimics FiveCrop from PyTorch. I followed the example code in the link. I found that without ThreeCrop the error does not happen, and before I used multiprocessing data parallel the error did not occur either. The code was tested on a Linux environment.
st179756
Solved by ptrblck in post #2 Could you try to write a transform class and replace the lambda method with it? As far as I know there are some limitation in Python regarding pickling lambdas, which is apparently the case here.
st179757
Could you try to write a transform class and replace the lambda method with it? As far as I know there are some limitation in Python regarding pickling lambdas, which is apparently the case here.
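For example, something along these lines, in the spirit of the FiveCrop snippet linked above (StackCrops is just a placeholder name; ThreeCrop itself is your custom transform, only the Lambda part needs replacing):

import torch
from torchvision import transforms

class StackCrops:
    """Picklable replacement for transforms.Lambda(lambda crops: ...)."""
    def __init__(self):
        self.to_tensor = transforms.ToTensor()

    def __call__(self, crops):
        return torch.stack([self.to_tensor(crop) for crop in crops])

transform = transforms.Compose([transforms.FiveCrop(224), StackCrops()])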
st179758
Thanks for your help. Maybe I should write a new transform class as you suggested.
st179759
When I train my network on multiple machines (using DistributedDataParallel) I observe my loss exploding when I switch my network to evaluation using model.eval() and torch.no_grad(). When outside the torch.no_grad() context, I switch to model.train() and observe a loss that is way worse than what I was observing at the end of the epoch. This only happens when using DistributedDataParallel. During training, the spikes appear at the beginning of the epoch, just after the validation step. The loss at that moment is close to what I observe in validation. Does anyone have an idea about what could be causing that? Thanks
st179760
Hi Milas! This looks very odd. Is it possible that you’re seeing some data contamination between your training and validation datasets? The fact that you run with no_grad during evaluation mode, as well as setting model.eval() all sounds perfectly normal. You say that if you run without DistributedDataParallel you don’t observe this issue? Does this happen for any number of processes > 1?
st179761
Hi Pieter. Thanks for your answer. This looks odd indeed. There is no contamination between the sets. Everything is in separated folders and the dataset class only gets the right folder. I did try many things. Even when I set to train, never switch to eval and only do training steps without ever going to the validation dataset this still occurs at the beginning of the epochs. I tried with either torch.utils.data.distributed.DistributedSampler or simply with random sampling without splitting the dataset between machines. I am using filesystem synchronisation. My code is fairly based on the imagenet example. I am using batchnorm and have seen that it does not get properly synchronized but it should not be a problem I guess since the distribution of my samples should stay the same. I do have some custom initialisations in the __init__ of my modules. If I find something I will keep this thread updated.
st179762
I was looking at def train in DistributedDataParallel, more specifically these lines, and am worried the slicing of replicas may be causing this. Looking at def train in nn.Module I don't see how the train mode would be set on self.module. Can you try removing the slicing to ensure the train mode is set on every module replica (even if there aren't any) and see what happens? Since the train mode controls whether or not you're accumulating running stats in batch norm layers, this could explain both the regression and the recovery after a couple of iterations.
st179763
Yes that was also my thought at first but the super(DistributedDataParallel, self).train(mode) should take care of model_copies[0]
st179764
@Milas There was a bug in DDP that didn't take into account evaluation mode, but this was only introduced in https://github.com/pytorch/pytorch/pull/18953, which was merged 2 weeks ago. Not 25 days ago, when you first started noticing this issue. This issue was fixed last night with https://github.com/pytorch/pytorch/pull/19897 and https://github.com/pytorch/pytorch/pull/19901. Can you confirm this is still an issue with the current nightly build?
st179765
@pietern Unfortunately it still occurs with the nightly. I tried to remove as many custom things as I could, but it still happens. Perhaps having non-homogeneous GPUs is the issue? Since I don't control the machines the job is spawned on, I usually end up with a mix of Kepler and Pascal architectures. The train/eval modes probably have nothing to do with this, since I tried switching to train at the very beginning and never switching again and it still occurs. Currently I am not using DDP, but if I find where it goes wrong I will definitely update.
st179766
@Milas Thank you for the update. I doubt that heterogeneous GPUs is the issue… I would expect the numerical behavior to be largely identical across architectures. If you also mix library versions (e.g. cuDNN) this is more likely though. Any more information sure is appreciated!
st179767
The bug was on my side. The order of the data loaded was not deterministic. The problem disappeared once I fixed that. Thanks for your time and for the help.