st179468 | Related thread
How to learn the weights between two losses?
Hi,
You mentioned the usage as:
usage
is_regression = torch.Tensor([True, True, False]) # True: Regression/MeanSquaredErrorLoss, False: Classification/CrossEntropyLoss
multitaskloss_instance = MultiTaskLoss(is_regression)
So in the case of a classification problem I should put
is_regression = False
Can you clarify this a bit?
I have this loss:
class Tacotron2Loss(nn.Module):
    def __init__(self, hparams):
        super(Tacotron2Loss, self).__init__()
        self.gate_loss_fn = nn.BCEWithLogitsLoss()
        self.emotion_loss_fn = torch.nn.CrossEntropyLoss(ignore_index=-1)
        num_losses = 3
        self.use_mmi = hparams.use_mmi
        if self.use_mmi:
            self.ctc_loss_fn = torch.nn.CTCLoss(
                blank=len(ctc_symbols), reduction='none')
            num_losses += 1
        # loss weights
        self.eta = nn.Parameter(torch.ones(num_losses, dtype=torch.float32))

    @staticmethod
    def masked_l2_loss(out, target, lengths):
        num_not_padded = lengths.sum() * out.size(1)
        loss = F.mse_loss(out, target, reduction="sum")
        loss = loss / num_not_padded
        return loss

    def forward(self, y_pred, y, output_lengths):
        mel_target, gate_target, ctc_text, ctc_text_lengths, emotion_label = y
        # mel_target.requires_grad = False
        # gate_target.requires_grad = False
        gate_target = gate_target.view(-1, 1)
        _, mel_out, mel_out_postnet, gate_out, _, log_probs, emotion_weights = y_pred
        gate_out = gate_out.view(-1, 1)
        losses = []
        mel_loss = self.masked_l2_loss(mel_out, mel_target, output_lengths) + \
            self.masked_l2_loss(mel_out_postnet, mel_target, output_lengths)
        losses.append(mel_loss)
        gate_loss = self.gate_loss_fn(gate_out, gate_target)
        losses.append(gate_loss)
        emotiom_loss = self.emotion_loss_fn(emotion_weights, emotion_label)
        losses.append(emotiom_loss)
        if self.use_mmi:
            ctc_loss = (self.ctc_loss_fn(log_probs, ctc_text, output_lengths, ctc_text_lengths) /
                        output_lengths.float()).mean()
            losses.append(ctc_loss)
        total_loss = torch.stack(losses) * torch.exp(-self.eta) + self.eta
        return total_loss.sum(), losses, self.eta
Then I put it in the optimizer like this:
optimizer = torch.optim.AdamW(list(
model.parameters()) + list(criterion.parameters()), lr=hparams.learning_rate)
So, what is the right way to use it in a DDP setup?
Should I put the criterion into the main model's forward function as a submodule, or wrap the criterion in DDP, or something else? |
st179469 | IMHO, adding trainable parameters to the loss function makes it part of the network to be trained. We need to think outside the box a bit here. What I reckon you could do is wrap the criterion into your network. This requires a bit of a change to how people usually write the forward() method. Here is an example.
def forward(self, x, y=None):
    # Regular forward pass
    output = self.model(x)
    # Insert your criterion here
    if self.training:
        assert y is not None, "Target should be passed during training."
        loss = self.criterion(output, y)
        return loss
    return output
Basically, the code snippet above includes the computation of loss as part of model’s forward pass. And because your criterion is part of your network, you don’t have to explicitly add its parameters to the optimiser anymore. And torch.nn.parallel.DistributedDataParallel will make sure even the parameters of the criterion are synced across GPUs during forward pass.
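For completeness, a rough, untested sketch of what such a wrapper could look like for the Tacotron2 case above (the ModelWithLoss name and the constructor signature are made up; adapt them to your own argument list):

import torch
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

class ModelWithLoss(nn.Module):
    def __init__(self, model, criterion):
        super().__init__()
        self.model = model          # e.g. your Tacotron2 network
        self.criterion = criterion  # e.g. Tacotron2Loss with its trainable eta

    def forward(self, x, y=None):
        output = self.model(x)
        if self.training:
            assert y is not None, "Target should be passed during training."
            return self.criterion(output, y)
        return output

# hypothetical usage, assuming init_process_group() has already been called:
# wrapped = ModelWithLoss(model, criterion).to(device)
# ddp_model = DDP(wrapped, device_ids=[local_rank])
# optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=hparams.learning_rate)

Because the eta parameters now live inside the DDP-wrapped module, they are broadcast at construction and their gradients are averaged like any other parameter.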
Hope this helps. |
st179470 | Is there any difference between adding the criterion to the model versus keeping it separate and wrapping it in DDP? |
st179471 | Hello,
I’m trying to use Isends with cuda aware openmpi.
I found that I need to explicitly call torch.cuda.synchronize(device) before every Isend (otherwise training error collapses). I get that problem even when I stash the sent tensor (so it will have a reference and therefore won’t be freed and overwritten).
I have tried it with several different settings:
with P2P enabled GPU (GTX1080)
and without P2P enabled GPUs (RTX2080ti). (in the latter case the sends must go through the host).
I wonder, what could be happening there?
(I am using a single thread with async operations) |
st179472 | Does it make a difference if you checkpoint your model for retraining after model.eval() or model.train() loop? |
st179473 | It shouldn’t make any difference, as long as you don’t update the parameters in your validation loop.
This question seems to be unrelated to the topic, so do you have any issues using DataParallel? |
st179474 | I am trying to save a DataParallel model but am getting:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-17-e0020f404c69> in <module>()
69 optimizer.zero_grad() # Zero gradients
70 loss.backward() # Calculate gradients
---> 71 optimizer.step() # Update weights
72 m.track_loss(loss)
73 m.track_num_correct(preds, labels)
~/pytorch-1.0-p3/anaconda3/lib/python3.6/site-packages/torch/optim/adamw.py in step(self, closure)
98
99 # Decay the first and second moment running average coefficient
--> 100 exp_avg.mul_(beta1).add_(1 - beta1, grad)
101 exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
102 if amsgrad:
RuntimeError: expected device cpu but got device cuda:0 |
st179475 | The stack trace points to optimizer.step(), which is unrelated to saving the state_dict.
How did you pass the parameters to the optimizer? |
st179476 | So this worked for me, though it is a slightly weird workflow:
# While saving checkpoint i.e. comment out while loading checkpoint
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    network = nn.DataParallel(network)
network.to(device)
optimizer = optim.AdamW(network.parameters(), lr=run.lr, weight_decay=run.weight_decay)
# try:
with open('check-point.pth', 'rb') as f:
    print('file opened')
    checkpoint = torch.load(f)
    print('file loaded')
    network.load_state_dict(checkpoint['model_state_dict'])
    print('network loaded')
    optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
    print('optimizer loaded')
    epoc = checkpoint['epoch']
    print(f'blah epoc: {epoc}')

# While loading checkpoint
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    network = nn.DataParallel(network) |
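For reference, a hedged sketch of an ordering that usually avoids the device mismatch seen in optimizer.step() (the checkpoint file name follows the snippet above; the nn.Linear is only a stand-in for the real network, and this is not claimed to be the only valid workflow):

import torch
import torch.nn as nn
import torch.optim as optim

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

network = nn.Linear(10, 2)  # placeholder for your actual model
if torch.cuda.device_count() > 1:
    network = nn.DataParallel(network)
network.to(device)

# build the optimizer only after the model has been wrapped and moved,
# so it references the parameters that will actually be updated
optimizer = optim.AdamW(network.parameters(), lr=0.001)

checkpoint = torch.load('check-point.pth', map_location=device)
network.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']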
st179477 | When I use nn.parallel.DistributedDataParallel for multi-gpu training in a single node, I use the nn.SyncBatchNorm to work as batch normalization across GPUs. However, I found the gpu memory cost increased a lot, at least 1gb for one gpu. When I use the SyncBatchNorm provided by apex (but I cannot successfully compile apex in this server), the gpu memory cost is normal. Can anyone help with it? |
st179478 | When trying out just 2 instances with 1 gpu each attached to test distributed training this error occurred:
(base) ubuntu@ip-172-31-11-131:~/detectron2$ NCCL_SOCKET_IFNAME=ens3 NCCL_IB_DISABLE=1 python tools/train_net.py --num-gpus 1 --num-machines 2 --machine-rank 1 --dist-url tcp://32.273.122.180:3000 --config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml
Command Line Args: Namespace(config_file='configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml', dist_url='tcp://34.253.142.180:3000', eval_only=False, machine_rank=1, num_gpus=1, num_machines=2, opts=[], resume=False)
Traceback (most recent call last):
  File "tools/train_net.py", line 161, in <module>
    args=(args,),
  File "/home/ubuntu/detectron2/detectron2/engine/launch.py", line 49, in launch
    daemon=False,
  File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn
    while not spawn_context.join():
  File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 118, in join
    raise Exception(msg)
Exception:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
    fn(i, *args)
  File "/home/ubuntu/detectron2/detectron2/engine/launch.py", line 70, in _distributed_worker
    comm.synchronize()
  File "/home/ubuntu/detectron2/detectron2/utils/comm.py", line 79, in synchronize
    dist.barrier()
  File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1424, in barrier
    work = _default_pg.barrier()
RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1573049306803/work/torch/lib/c10d/ProcessGroupNCCL.cpp:400, unhandled cuda error
Exporting NCCL_SOCKET_IFNAME and NCCL_IB_DISABLE didn't help, nor did other fixes discussed in GitHub issues and everything else I found on the net about this topic.
Maybe I forgot an argument. In AWS, “ens3” seems to be the ethernet connection, at least ifconfig does not reveal eth0 as usual.
What to do? |
st179479 | Hi PyTorch experts,
I am trying to use torch.distributed package for my distributed training. The backend I am using is gloo.
Based on this doc: https://pytorch.org/docs/stable/distributed.html, gloo supports all_reduce on both CPU and GPU, but it seems there is no specific way to choose one over the other.
I am wondering, during training, does gloo perform all_reduce automatically based on the tensor’s device type? Like if the tensors are on GPU, then perform all_reduce on GPU; if the tensors are on CPU, perform it on CPU?
Also, when all_reduce is performed on GPU, does gloo fallback using nccl?
Thanks in advance! |
st179480 | I am wondering, during training, does gloo perform all_reduce automatically based on the tensor’s device type?
Yes, see https://github.com/pytorch/pytorch/blob/master/torch/lib/c10d/ProcessGroupGloo.cpp#L720. Essentially, we check the input’s device type, and run the appropriate operation based on that.
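For illustration, this is roughly what that looks like from user code (a sketch; it assumes dist.init_process_group("gloo", ...) has already been called in every process and that a GPU is available for the second call):

import torch
import torch.distributed as dist

cpu_tensor = torch.ones(4)
dist.all_reduce(cpu_tensor)          # input on CPU -> gloo's CPU allreduce path

if torch.cuda.is_available():
    gpu_tensor = torch.ones(4, device='cuda')
    dist.all_reduce(gpu_tensor)      # input on GPU -> gloo's CUDA-aware allreduce (no NCCL fallback)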
Also, when all_reduce is performed on GPU, does gloo fallback using nccl?
This doesn’t happen, the GLOO backend can be built with CUDA and supports GPU operations (https://github.com/facebookincubator/gloo/blob/master/docs/cuda.md) |
st179481 | Hi,
I’m having trouble with multiple processes working on the same GPU. I wrote minimal error-reproducing example.
I ran the example code successfully on my local machine, using CUDA 10.2 and pytorch 1.2.0.
While this works just fine, it fails to run on a cluster with CUDA 10.1 and pytorch 1.2.0.
Does anybody know why or how to overcome this? Thanks a ton.
CODE EXAMPLE
import torch.multiprocessing as _mp
import torch
import os
import time
import numpy as np

mp = _mp.get_context('spawn')

class Process(mp.Process):
    def __init__(self, id):
        super().__init__()
        print("Init Process")
        self.id = id
        return

    def run(self):
        os.environ['CUDA_VISIBLE_DEVICES'] = '0'
        for i in range(3):
            with torch.cuda.device(0):
                x = torch.Tensor(10).to(0)
                x.to('cpu')
                del x
            time.sleep(np.random.random())

if __name__ == "__main__":
    num_processes = 2
    os.environ['CUDA_VISIBLE_DEVICES'] = '0'
    processes = [Process(i) for i in range(num_processes)]
    [p.start() for p in processes]
    [p.join() for p in processes]
ERROR
Process Process-2:
Traceback (most recent call last):
File "/cluster/home/marksm/software/anaconda/envs/test/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/cluster/home/marksm/mp_demonstration.py", line 20, in run
x = torch.Tensor(10).to(0)
RuntimeError: CUDA error: all CUDA-capable devices are busy or unavailable |
st179482 | Hi guys,
I'm currently using nn.DataParallel for multi-GPU (8-GPU) training on a single node. However, if I put the data and model on devices[0], I found the memory usage on GPU 0 becomes huge and makes the program exit (CUDA out of memory) at the beginning of training. Can anyone help?
BTW, I find if I use DistributedDataParallel, the memory is fine.
Environment:
pytorch 1.0.1
cuda9.0 |
st179483 | Solved by ptrblck in post #2
This effect is described by @Thomas_Wolf in this blog post.
We generally recommend using DDP. |
st179484 | This effect is described by @Thomas_Wolf in this blog post.
We generally recommend using DDP. |
st179485 | Thanks. Which syncBatchNormalization do you recommend when using DDP? I’m not sure if the default nn.BatchNorm2d considers multi-gpu ops? |
st179486 | Is it a good practice to create a new group on every training iteration?
Using dist.new_group requires every process to pass through the function even if they are not a part of the distributed training process. Sometimes it hangs up on this function for a reason that I am not aware of. I was wondering has anybody come up with a better solution for this?
Thanks! |
st179487 | IIUC, it is not good practice to create a new group every training iteration; it is non-trivial to initialize one, and there are communication costs.
why do you need to create new group every iteration? |
st179488 | Thanks for the answer. Because I need to run allreduce on a subset of nodes in each iteration. Do you have any recommendation about how to do it differently? |
st179489 | Do you want to create these sub-groups before the training loop, so that each iteration just uses the corresponding pre-created sub-group instead of creating the same sub-groups repeatedly inside the loop? (See the sketch after this post.)
Also, add a timeout if it hangs?
Lastly, you may need to figure out why it hangs, starting by debugging with a small number of training iterations. |
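For the first suggestion, a minimal sketch of building the sub-groups once and reusing them (the rank partitions and iteration count are made up for illustration):

import torch
import torch.distributed as dist

# assumes dist.init_process_group(...) has already been called on every rank
subsets = [[0, 1], [2, 3]]                        # example partitions of the ranks
# every rank must call new_group() for every group, even if it is not a member
groups = [dist.new_group(ranks=r) for r in subsets]

for step in range(100):                           # training loop
    idx = step % len(subsets)
    if dist.get_rank() in subsets[idx]:           # only members participate
        t = torch.ones(1)
        dist.all_reduce(t, group=groups[idx])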
st179490 | Thanks for the suggestion! I originally thought about this approach. But because I would like to do this for a large number of machines (around 100), the number of groups can become quite large (the number of combinations of 10 out of 100 is 17,310,309,456,440!).
I debugged it and it seems like the limit for the number of cgroups exceeds the current user limit. I guess this may be related to the dist.new_group and repeated creation of groups.
The error printed in the kernel messages is printed below:
cgroup: fork rejected by pids controller in /user.slice/user-1000.slice/session-3.scope
I wish there was a method to delete the groups in order to avoid this problem. |
st179491 | I'm training ImageNet on the ResNet-50 architecture using fastai v1 1.0.60dev (PyTorch 1.3.0). There are substantial speed gains using mixed precision training since I can effectively use 4 times the batch size (= 512) thanks to the reduction in VRAM consumption and using a smaller size of 224. The problem is I am unable to select a good learning rate. I am using fastai's lr_finder with SGD, but the suggested lr causes huge overfitting. Dividing the suggested lr by 100 just overfits a bit, while lr divided by 512 seems to be OK-ish but slow.
While these guesses work, I’m not sure how to choose a good learning rate in general. I thought about using APEX but the dynamic loss scaling seems to be integrated in the learn.to_fp16(). Training is done using learn.fit_one_cycle() ie 1-cycle policy. I think everything else is working fine.
Below is the image for lr divided by 100
[loss-curve screenshot, 955x529]
And this is for lr divided by 512 |
st179492 | Could you potentially try a learning rate in between lr/100 and lr/512, or potentially stick with lr/100 and decay the learning rate over time? Also, this might be a better question for the fastai forums, since it uses fastai’s library: https://forums.fast.ai/c/fastai-users |
st179493 | I posted the question there but haven’t gotten a reply. Will try varying the learning rates as suggested. |
st179494 | Hi, all.
I encountered a very confusing problem.
It's OK when I run the model on a single GPU, but it does not work when I use multiple GPUs (single machine, multiple GPUs) via model = torch.nn.DataParallel(model, device_ids=device_ids).
The puzzling thing is that the code runs at the beginning, but after executing 5 batches (batch-size = 100), an error occurs.
2019-12-01 16:17:15,151:Traceback (most recent call last):
2019-12-01 16:17:15,151: File "exp\GSTEG.py", line 25, in <module>
2019-12-01 16:17:15,151: main()
2019-12-01 16:17:15,151: File ".\main.py", line 51, in main
2019-12-01 16:17:15,151: s_top1,s_top5,o_top1,o_top5,v_top1,v_top5, sov_top1 = trainer.train(train_loader, base_model, logits_model, criterion, base_optimizer, logits_optimizer, epoch, opt)
2019-12-01 16:17:15,151: File ".\train.py", line 172, in train
2019-12-01 16:17:15,151: # s_output, o_output, v_output, loss = criterion(*((s, o, v, so, ov, vs, ss, oo, vv, so_t, ov_t, vs_t, os_t, vo_t, sv_t) + (s_target_var, o_target_var, v_target_var, meta)))
2019-12-01 16:17:15,151: File "C:\Users\gorvinchen\Miniconda3\envs\rcenet\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
2019-12-01 16:17:15,151: result = self.forward(*input, **kwargs)
2019-12-01 16:17:15,151: File "C:\Users\gorvinchen\Miniconda3\envs\rcenet\lib\site-packages\torch\nn\parallel\data_parallel.py", line 143, in forward
2019-12-01 16:17:15,151: outputs = self.parallel_apply(replicas, inputs, kwargs)
2019-12-01 16:17:15,151: File "C:\Users\gorvinchen\Miniconda3\envs\rcenet\lib\site-packages\torch\nn\parallel\data_parallel.py", line 153, in parallel_apply
2019-12-01 16:17:15,151: return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
2019-12-01 16:17:15,151: File "C:\Users\gorvinchen\Miniconda3\envs\rcenet\lib\site-packages\torch\nn\parallel\parallel_apply.py", line 83, in parallel_apply
2019-12-01 16:17:15,151: raise output
2019-12-01 16:17:15,151: File "C:\Users\gorvinchen\Miniconda3\envs\rcenet\lib\site-packages\torch\nn\parallel\parallel_apply.py", line 59, in _worker
2019-12-01 16:17:15,151: output = module(*input, **kwargs)
2019-12-01 16:17:15,151: File "C:\Users\gorvinchen\Miniconda3\envs\rcenet\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
2019-12-01 16:17:15,151: result = self.forward(*input, **kwargs)
2019-12-01 16:17:15,151: File ".\models\layers\AsyncTFCriterion.py", line 244, in forward
2019-12-01 16:17:15,151: s_msg, o_msg, v_msg = self.get_msg(idtime, 'past')
2019-12-01 16:17:15,151: File ".\models\layers\AsyncTFCriterion.py", line 147, in get_msg
2019-12-01 16:17:15,151: return self.mget(idtime, self.ns, self.no, self.nv, s_storage, o_storage, v_storage, cond, kernel)
2019-12-01 16:17:15,151: File ".\models\layers\AsyncTFCriterion.py", line 127, in mget
2019-12-01 16:17:15,151: s_out = [meta(ids, time, s_size, s_storage) for ids, time in idtime]
2019-12-01 16:17:15,151: File ".\models\layers\AsyncTFCriterion.py", line 127, in <listcomp>
2019-12-01 16:17:15,151: s_out = [meta(ids, time, s_size, s_storage) for ids, time in idtime]
2019-12-01 16:17:15,151: File ".\models\layers\AsyncTFCriterion.py", line 124, in meta
2019-12-01 16:17:15,151: if cond(t, t0)), 1. / self.decay)
2019-12-01 16:17:15,151: File ".\models\layers\AsyncTFCriterion.py", line 43, in avg
2019-12-01 16:17:15,167: item, w = next(iterator)
2019-12-01 16:17:15,167: File ".\models\layers\AsyncTFCriterion.py", line 124, in <genexpr>
2019-12-01 16:17:15,167: if cond(t, t0)), 1. / self.decay)
2019-12-01 16:17:15,167: File ".\models\layers\AsyncTFCriterion.py", line 145, in <lambda>
2019-12-01 16:17:15,167: cond = lambda t, t0: t < t0 if time == 'past' else t > t0
2019-12-01 16:17:15,167:RuntimeError: arguments are located on different GPUs at c:\a\w\1\s\windows\pytorch\aten\src\thc\generic/THCTensorMathCompareT.cu:7
My code is here
def avg(iterator, weight=1.):
# compounding weight
item, w = next(iterator)
total = item.clone() * w
n = 1.
for i, (item, w) in enumerate(iterator):
w1 = 1. * weight**(i + 1)
total += item * w1 * w
n += w1
return total / n
class MessagePassing(object):
# Class for keeping track of messages across frames
def __init__(self, maxsize, w_temporal, w_spatio, decay, sigma, ns, no, nv):
super(MessagePassing, self).__init__()
self.maxsize = maxsize
self.w_temporal = w_temporal
self.w_spatio = w_spatio
self.decay = decay
self.sigma = sigma
self.s_storage = {}
self.s_storage_gt = {}
self.o_storage = {}
self.o_storage_gt = {}
self.v_storage = {}
self.v_storage_gt = {}
self.training = self.training if hasattr(self, 'training') else True
self.ns = ns
self.no = no
self.nv = nv
def mget(self, idtime, s_size, o_size, v_size, s_storage, o_storage, v_storage, cond=lambda t, t0: True, kernel=lambda t, t0: 1):
# get message using condition on the timestamps
def meta(ids, t0, size, storage):
try:
return avg(((y, kernel(t, t0)) for t, y in storage[ids]
if cond(t, t0)), 1. / self.decay)
except (StopIteration, KeyError):
return torch.zeros(size)
s_out = [meta(ids, time, s_size, s_storage) for ids, time in idtime]
o_out = [meta(ids, time, o_size, o_storage) for ids, time in idtime]
v_out = [meta(ids, time, v_size, v_storage) for ids, time in idtime]
return Variable(torch.stack(s_out, 0).cuda()), Variable(torch.stack(o_out, 0).cuda()), Variable(torch.stack(v_out, 0).cuda())
def get_msg(self, idtime, time='past', s_storage=None, o_storage=None, v_storage=None):
s_storage = self.s_storage if s_storage is None else s_storage
o_storage = self.o_storage if o_storage is None else o_storage
v_storage = self.v_storage if v_storage is None else v_storage
cond = lambda t, t0: t < t0 if time == 'past' else t > t0
kernel = lambda t, t0: math.exp(-float(t - t0)**2 / (2 * self.sigma**2))
return self.mget(idtime, self.ns, self.no, self.nv, s_storage, o_storage, v_storage, cond, kernel)
def get_gt_msg(self, idtime, time='past'):
return self.get_msg(idtime, time, self.s_storage_gt, self.o_storage_gt, self.v_storage_gt)
def mset(self, s_msg, o_msg, v_msg, idtime, s_storage, o_storage, v_storage):
# keep a queue of size maxsize for each id
# messages are stored in normal space
# queue for each id is stored in the order in which the messages were stored
for s_m, o_m, v_m, (ids, time) in sorted(zip(s_msg, o_msg, v_msg, idtime), key=lambda x: random()):
if ids not in s_storage:
s_storage[ids] = []
if ids not in o_storage:
o_storage[ids] = []
if ids not in v_storage:
v_storage[ids] = []
s_data = s_m if type(s_m) is not torch.Tensor else s_m.data.cpu()
o_data = o_m if type(o_m) is not torch.Tensor else o_m.data.cpu()
v_data = v_m if type(v_m) is not torch.Tensor else v_m.data.cpu()
s_storage[ids].append((time, s_data))
o_storage[ids].append((time, o_data))
v_storage[ids].append((time, v_data))
if len(s_storage[ids]) > self.maxsize:
del s_storage[ids][0]
if len(o_storage[ids]) > self.maxsize:
del o_storage[ids][0]
if len(v_storage[ids]) > self.maxsize:
del v_storage[ids][0]
def set_msg(self, qs, qo, qv, idtime):
self.mset(qs, qo, qv, idtime, self.s_storage, self.o_storage, self.v_storage)
def set_gt_msg(self, s_target, o_target, v_target, idtime):
s_x = s_target.data.cpu()
o_x = o_target.data.cpu()
v_x = v_target.data.cpu()
self.mset(s_x, o_x, v_x, idtime, self.s_storage_gt, self.o_storage_gt, self.v_storage_gt)
class AsyncTFCriterion(nn.Module, MessagePassing):
def __init__(self, args):
memory_size = 20
w_temporal = 0.1
w_spatio = 0.1
memory_decay = 1.0
sigma = 300
MessagePassing.__init__(self, memory_size, w_temporal, w_spatio, memory_decay, sigma, args.s_class, args.o_class, args.v_class)
nn.Module.__init__(self)
self.msg_n = 5
self.cross_loss = nn.CrossEntropyLoss() # for s
self.bce_loss = nn.BCEWithLogitsLoss() # for c, o, v
self.BalanceLabels = BalanceLabels()
self.winsmooth = 1
def forward(self, s, o, v, so, ov, vs, ss, oo, vv, so_t, ov_t, vs_t, os_t, vo_t, sv_t, s_target, o_target, v_target, id_time, n=1, synchronous=False):
if o_target.dim() == 1:
print('converting Nx1 target to NxC')
o_target = Variable(gtmat(o.shape, o_target.data.long()))
if v_target.dim() == 1:
print('converting Nx1 target to NxC')
v_target = Variable(gtmat(v.shape, v_target.data.long()))
o_target = o_target.float()
v_target = v_target.float()
idtime = list(zip(id_time['id'], id_time['time']))
s_msg, o_msg, v_msg = self.get_msg(idtime, 'past')
s_fmsg, o_fmsg, v_fmsg = self.get_msg(idtime, 'future')
s_loss = self.cross_loss(s, s_target)
_qs = torch.nn.Softmax(dim = 1)(s)
o_loss = self.bce_loss(o, o_target)
_qo = torch.nn.Sigmoid()(o)
v_loss = self.bce_loss(v, v_target)
_qv = torch.nn.Sigmoid()(v)
qs_before_softmax = s.clone() |
st179495 | I can’t quite reproduce the error since the main training loop hasn’t been posted. Though, with nn.DataParallel, a replica of your model will be created on each device (passed in via device_ids). In your forward function, it looks like you conditionally do things such as .cuda(), .cpu() , which can force tensors to conditionally be on a different device, which can result in this error. If possible, you could try not making your forward function rely on tensors being on specific devices, and instead do this logic in the main training loop. |
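As a rough sketch of that suggestion (a toy criterion, assuming a multi-GPU machine; the real AsyncTFCriterion would need its storage lookups moved out as well):

import torch
import torch.nn as nn

class ToyCriterion(nn.Module):
    def forward(self, pred, target, msg):
        # no .cuda()/.cpu() calls in here: work on whatever device the inputs are on
        return ((pred - target) ** 2).mean() + msg.mean()

device_ids = list(range(torch.cuda.device_count()))
criterion = nn.DataParallel(ToyCriterion().cuda(), device_ids=device_ids)

# device placement happens once, in the training loop
pred = torch.randn(8, 4, device='cuda')
target = torch.randn(8, 4, device='cuda')
msg = torch.randn(8, 4, device='cuda')      # e.g. messages fetched from storage, moved here
loss = criterion(pred, target, msg).mean()  # per-replica scalars are gathered, then averaged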
st179496 | Thank you very much for your reply.
Using multiple GPUs via torch.nn.DataParallel. Input data and the model are placed on the master device. In the forward function, will the result of .cuda()/.cpu() be placed on the corresponding device? I'm not very sure. I will use the solution you mentioned above.
Also, the puzzling thing is that I only get errors after executing several batches; the first few batches execute normally.
This code is open-sourced on GitHub. I changed it from distributed to DataParallel. The URL of the code: https://github.com/yaohungt/Gated-Spatio-Temporal-Energy-Graph
Major changes:
def train(self, loader, base_model, logits_model, criterion, base_optimizer, logits_optimizer, epoch, args):
adjust_learning_rate(args.lr, args.lr_decay_rate, base_optimizer, epoch)
adjust_learning_rate(args.lr, args.lr_decay_rate, logits_optimizer, epoch)
batch_time = AverageMeter()
data_time = AverageMeter()
losses = AverageMeter()
s_top1 = AverageMeter()
s_top5 = AverageMeter()
o_top1 = AverageMeter()
o_top5 = AverageMeter()
v_top1 = AverageMeter()
v_top5 = AverageMeter()
sov_top1 = AverageMeter()
# switch to train mode
base_model.train()
logits_model.train()
criterion.train()
base_optimizer.zero_grad()
logits_optimizer.zero_grad()
def part(x): return itertools.islice(x, int(len(x)*args.train_size))
end = time.time()
for i, (input, s_target, o_target, v_target, meta) in enumerate(part(loader)):
gc.collect()
data_time.update(time.time() - end)
meta['epoch'] = epoch
print("meta = {}".format(meta))
# s_target = s_target.long().cuda(async=True)
# o_target = o_target.long().cuda(async=True)
# v_target = v_target.long().cuda(async=True)
# input_var = torch.autograd.Variable(input.cuda())
s_target = s_target.long().cuda(device=device_ids[0])
o_target = o_target.long().cuda(device=device_ids[0])
v_target = v_target.long().cuda(device=device_ids[0])
input_var = torch.autograd.Variable(input.cuda(device=device_ids[0]))
s_target_var = torch.autograd.Variable(s_target)
o_target_var = torch.autograd.Variable(o_target)
v_target_var = torch.autograd.Variable(v_target)
feat = base_model(input_var)
feat = feat.cuda(device=device_ids[0])
s, o, v, so, ov, vs, ss, oo, vv, so_t, ov_t, vs_t, os_t, vo_t, sv_t = logits_model(feat)
s = s.cuda(device=device_ids[0])
o = o.cuda(device=device_ids[0])
v = v.cuda(device=device_ids[0])
so = so.cuda(device=device_ids[0])
ov = ov.cuda(device=device_ids[0])
vs = vs.cuda(device=device_ids[0])
ss = ss.cuda(device=device_ids[0])
oo = oo.cuda(device=device_ids[0])
vv = vv.cuda(device=device_ids[0])
so_t = so_t.cuda(device=device_ids[0])
ov_t = ov_t.cuda(device=device_ids[0])
vs_t = vs_t.cuda(device=device_ids[0])
os_t = os_t.cuda(device=device_ids[0])
vo_t = vo_t.cuda(device=device_ids[0])
sv_t = sv_t.cuda(device=device_ids[0])
s_target_var = s_target_var.cuda(device=device_ids[0])
o_target_var = o_target_var.cuda(device=device_ids[0])
v_target_var = v_target_var.cuda(device=device_ids[0])
meta['ids'] = meta['ids'].cuda(device=device_ids[0])
meta['time'] = meta['time'] .cuda(device=device_ids[0])
s_output, o_output, v_output, loss = criterion(*((s, o, v, so, ov, vs, ss, oo, vv, so_t, ov_t, vs_t, os_t, vo_t, sv_t) + (s_target_var, o_target_var, v_target_var, meta)))
s_prec1, s_prec5, s_prec1_output = accuracy_s(s_output.data, s_target, topk=(1, 5))
o_prec1, o_prec5, o_prec1_output = accuracy(o_output.data, o_target, topk=(1, 5))
v_prec1, v_prec5, v_prec1_output = accuracy(v_output.data, v_target, topk=(1, 5))
sov_prec1 = s_prec1_output.cpu() * o_prec1_output * v_prec1_output
sov_prec1 = sov_prec1.sum(0, keepdim=True)
sov_prec1 = sov_prec1.mul_(100.0 / input.size(0)) |
st179497 | I'm trying to use this model:
github.com
NVIDIA/tacotron2/blob/master/model.py#L487
    input_lengths = to_gpu(input_lengths).long()
    max_len = torch.max(input_lengths.data).item()
    mel_padded = to_gpu(mel_padded).float()
    gate_padded = to_gpu(gate_padded).float()
    output_lengths = to_gpu(output_lengths).long()

    return (
        (text_padded, input_lengths, mel_padded, max_len, output_lengths),
        (mel_padded, gate_padded))

def parse_output(self, outputs, output_lengths=None):
    if self.mask_padding and output_lengths is not None:
        mask = ~get_mask_from_lengths(output_lengths)
        mask = mask.expand(self.n_mel_channels, mask.size(0), mask.size(1))
        mask = mask.permute(1, 0, 2)
        outputs[0].data.masked_fill_(mask, 0.0)
        outputs[1].data.masked_fill_(mask, 0.0)
        outputs[2].data.masked_fill_(mask[:, 0, :], 1e3)  # gate energies
    return outputs
But getting this error
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
    output = module(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
File "/home/ec2-user/SageMaker/tacotron2/model/model.py", line 480, in forward
    output_lengths)
File "/home/ec2-user/SageMaker/tacotron2/model/model.py", line 452, in parse_output
    outputs[0].data.masked_fill_(mask, 0.0)
RuntimeError: The expanded size of the tensor (1079) must match the existing size (836) at non-singleton dimension 2. Target sizes: [4, 80, 1079]. Tensor sizes: [4, 80, 836]
How to solve it? |
st179498 | Could you provide your DDP code to reproduce the issue? Also, does the model work properly without DDP? |
st179499 | https://colab.research.google.com/drive/104LtQ1zIioIOMQEPgVve77m5Rd4Gm0wU
Yes, it works fine; the same code also works fine on a single-GPU Colab instance.
Tested on an 8x V100 instance from Amazon. |
st179500 | The doc confuses me quite a lot; would you please tell me:
What is the difference between the two?
When must one use rank and when must one use local_rank? |
st179501 | Hi @AlexLuya,
In the context of multi-node training, you have:
local_rank, the rank of the process on the local machine.
rank, the rank of the process in the network.
To illustrate that, let's say you have 2 nodes (machines) with 2 GPUs each; you will have a total of 4 processes (p1…p4):
            |    Node1    |    Node2    |
____________|  p1  |  p2  |  p3  |  p4  |
local_rank  |  0   |  1   |  0   |  1   |
rank        |  0   |  1   |  2   |  3   | |
st179502 | @spanev, thanks. If p3 wants to send something to p4:
1. it can use either local_rank or rank,
2. but for performance, it should use local_rank.
Am I right about the above two?
3. Why not just always use rank and let the library decide which send method (cross-process or cross-node) to use? In what case must a developer use local_rank? |
st179503 | I’m assuming you’re referring to local_rank mentioned here: https://github.com/pytorch/pytorch/blob/master/torch/distributed/launch.py
You should use rank and not local_rank when using torch.distributed primitives (send/recv etc). local_rank is passed to the training script only to indicate which GPU device the training script is supposed to use.
You should always use rank.
local_rank is supplied to the developer to indicate that a particular instance of the training script should use the “local_rank” GPU device. For illustration, in the example above provided by @spanev, p1 is passed local_rank 0 indicating it should use GPU device id 0. |
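To tie the two together, a small sketch of a script launched with torch.distributed.launch (gloo backend, CPU tensors for the send/recv; the ranks mirror the p3 -> p4 example above and are only valid if the world size is at least 4):

import argparse
import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=0)  # injected by torch.distributed.launch
args = parser.parse_args()

dist.init_process_group(backend='gloo')   # global rank / world size come from the launcher's env vars
torch.cuda.set_device(args.local_rank)    # local_rank: which GPU on *this* node this process should use

rank = dist.get_rank()                    # rank: global id of this process across all nodes
if rank == 2:
    dist.send(torch.ones(1), dst=3)       # point-to-point ops address peers by their *global* rank
elif rank == 3:
    buf = torch.zeros(1)
    dist.recv(buf, src=2)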
st179504 | Hi all! I'm new to PyTorch and am using it for distributed training. I know that 'DistributedDataParallel' averages gradients between processes. I want to know whether I can print the gradients before averaging, for every process? |
st179505 | Hi! This is possible. DDP relies on torch.autograd.backward to accumulate the gradients into the grad tensor of the model parameters. There is a functional alternative in torch.autograd.grad that doesn’t accumulate at all. If you’re interested in the local gradients, instead of running loss.backward(), you can run torch.autograd.grad(loss, model.parameters()) and get back a list of gradient tensors, one for every model parameter. This doesn’t accumulate them into the grad tensor of the model parameter, so it doesn’t kick off DDP. If you want to run DDP afterwards anyway, make sure to pass the retain_graph=True kwarg to torch.autograd.grad. I haven’t tried any of this out, but in theory it should work. |
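A minimal, untested sketch of the idea (a tiny nn.Linear stands in for the DDP-wrapped model here):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)            # stand-in for your DDP-wrapped network
x = torch.randn(8, 4)
loss = model(x).sum()

# gradients for this process only; nothing is accumulated into .grad,
# so DDP's averaging is not expected to kick in here (as described above)
local_grads = torch.autograd.grad(loss, model.parameters(), retain_graph=True)
for p, g in zip(model.parameters(), local_grads):
    print(p.shape, g.norm())

loss.backward()                    # the usual backward (and, under DDP, the averaging) still runs afterwards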
st179506 | Thanks a lot! Your advice is really helpful! I got the local gradients and it seems that DDP is not affected at all! It's very cool that I can get professional guidance on the PyTorch forum. |
st179507 | Hi @pietern, "DistributedDataParallel" automatically averages the gradients when calling "loss.backward()",
but I couldn't find the corresponding code in the PyTorch source that gathers the gradients from all nodes and averages them during backward. Do you know where it is? |
st179508 | I've installed PyTorch 1.0 on Ubuntu 18.04.
When I try to use the webcam demo provided by maskrcnn-benchmark, an error occurred:
Traceback (most recent call last):
  File "webcam.py", line 80, in <module>
    main()
  File "webcam.py", line 64, in main
    min_image_size=args.min_image_size,
  File "/home/aisen/github/maskrcnn-benchmark/demo/predictor.py", line 149, in __init__
    _ = checkpointer.load(cfg.MODEL.WEIGHT)
  File "/home/aisen/github/maskrcnn-benchmark/maskrcnn_benchmark/utils/checkpoint.py", line 61, in load
    checkpoint = self._load_file(f)
  File "/home/aisen/github/maskrcnn-benchmark/maskrcnn_benchmark/utils/checkpoint.py", line 134, in _load_file
    return load_c2_format(self.cfg, f)
  File "/home/aisen/github/maskrcnn-benchmark/maskrcnn_benchmark/utils/c2_model_loading.py", line 206, in load_c2_format
    return C2_FORMAT_LOADER[cfg.MODEL.BACKBONE.CONV_BODY](cfg, f)
  File "/home/aisen/github/maskrcnn-benchmark/maskrcnn_benchmark/utils/c2_model_loading.py", line 192, in load_resnet_c2_format
    state_dict = _load_c2_pickled_weights(f)
  File "/home/aisen/github/maskrcnn-benchmark/maskrcnn_benchmark/utils/c2_model_loading.py", line 136, in _load_c2_pickled_weights
    data = pickle.load(f, encoding="latin1")
_pickle.UnpicklingError: pickle data was truncated |
st179509 | Hello @Aisen, can you post the code that dumped the pickle file you are trying to load? |
st179510 | Is there any example of how to calculate the loss on multiple GPUs and merge the results afterwards?
Currently, we can calculate the output from a network by using DistributedDataParallel. However, the result from DistributedDataParallel is collected on device 0. Therefore, the calculation is done on 1 GPU only instead of on multiple GPUs. |
st179511 | What do you mean exactly here? Are you looking to compute only a single loss value for a model that gets executed on multiple processes? Or just on multiple GPUs from a single process?
Even though the result is collected in GPU 0, the gradients will propagate back through the activations on all GPUs that contributed in computing the final loss value. The gradients that are computed for every replica are averaged automatically by torch.nn.parallel.DistributedDataParallel, so all replicas contribute to the gradient that is later used by the optimizer to update your model weights. |
st179512 | Hello, I am doing some work on gradient sparsification and compression in model training. After compression, the tensor size may differ among workers. Right now I have to send the tensor size to the other workers first and then send the compressed tensor, but that looks ugly. I want to send and receive a tensor of indefinite length in one communication, any ideas? |
st179513 | Unfortunately, with ProcessGroup send/recv, this needs to be done in two comms (size and data) for now. We are building a new RPC API which could simplify this a bit at the API level, but internally it still uses ProcessGroup send/recv for now, until we have a better comm primitive for that. |
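Until then, the two-step pattern looks roughly like this (a sketch with the gloo backend and CPU tensors; the sizes and ranks are made up):

import torch
import torch.distributed as dist

# assumes dist.init_process_group("gloo", ...) was already called; rank 0 sends to rank 1
rank = dist.get_rank()

if rank == 0:
    payload = torch.randn(torch.randint(1, 100, (1,)).item())  # compressed tensor, length varies
    size = torch.tensor([payload.numel()], dtype=torch.long)
    dist.send(size, dst=1)       # comm 1: tell the peer how many elements to expect
    dist.send(payload, dst=1)    # comm 2: the actual data
elif rank == 1:
    size = torch.zeros(1, dtype=torch.long)
    dist.recv(size, src=0)
    payload = torch.empty(size.item())
    dist.recv(payload, src=0)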
st179514 | Hi,
I'm trying to train multi-agent reinforcement learning. To do so, each agent has a distributed (separate) network model. Therefore, the distributed model consists of a number of nn.Modules, one for each separate model.
I want to save the entire networks’ parameters to evaluate the trained model. How can I save the entire parameters?
As I know, using
d=model.state_dict()
and
torch.save(d, path)
is appropriate. But the model used in the above command seems to refer to only one network (not all of the separate models).
How can I save all the separate models?
Thank you |
st179515 | I’m not sure what “distributed” means in your use case.
Are you working with different models? If so, you could just save the state_dict of each model using a separate file.
Or are you working in a distributed setup, where the models are scattered and gathers using different nodes?
In that case, you could most likely want to reduce the model to the main node and just store this state_dict. |
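For the first case (separate models saved per agent), a minimal sketch (the agent/policy names are placeholders):

import torch

# policies: dict mapping agent id -> that agent's policy network (an nn.Module)
def save_all(policies, path):
    torch.save({name: net.state_dict() for name, net in policies.items()}, path)

def load_all(policies, path):
    checkpoint = torch.load(path)
    for name, net in policies.items():
        net.load_state_dict(checkpoint[name])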
st179516 | Thank you for your reply.
Specifically, I have a number of agents and each agent has its own policy network.
So I described it as 'distributed', but that was quite vague… sorry for the confusion.
I think the first option you gave me is applicable, right?
Because I generated a number of policy networks (models) and need to store them separately.
Am I right?
Thank you |
st179517 | Hello everybody,
I am trying to deploy different pytorch-based training scripts on different GPUs. However, the information I could find is about training a model on multiple GPUs.
Could some tell me how to do this?
I tried the ‘spawn’ trick, ‘.cuda(0)/.cuda(1)’ trick… But they were not working.
Sorry if this is a bad question
Thank you! |
st179518 | Solved by TinfoilHat0 in post #3
You could pass the device you want to train on as an argument to the script.
For example, ‘cuda:0’ corresponds to the 1st GPU in your system, ‘cuda:1’ corresponds to the 2nd GPU and so on.
Then assuming you store the passed argument in a variable named device, all you have to do is to call .to(dev… |
st179519 | You just need to run CUDA_VISIBLE_DEVICES=0 python script.py;
replace 0 with the GPU you want. |
st179520 | You could pass the device you want to train on as an argument to the script.
For example, ‘cuda:0’ corresponds to the 1st GPU in your system, ‘cuda:1’ corresponds to the 2nd GPU and so on.
Then assuming you store the passed argument in a variable named device, all you have to do is to call .to(device) on your tensors etc. |
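Putting the two suggestions together, a small sketch (the argument name and the placeholder model are arbitrary):

import argparse
import torch
import torch.nn as nn

parser = argparse.ArgumentParser()
parser.add_argument('--device', default='cuda:0')  # e.g. 'cuda:0', 'cuda:1', or 'cpu'
args = parser.parse_args()

device = torch.device(args.device)
model = nn.Linear(10, 2).to(device)                # placeholder model
data = torch.randn(4, 10, device=device)
out = model(data)

# run e.g.:  python script.py --device cuda:1
# or pin the visible GPU instead:  CUDA_VISIBLE_DEVICES=1 python script.py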
st179521 | My code involves two stages, like:
model_a -> do something and model_b -> do something
And I use DistributedDataParallel (DDP) to accelerate them rather than DP. So I do something like this:
model_a -> model_a = setup DDP and DDP(model_a) -> model_b -> setup DDP and model_b = DDP(model_b)
This causes a problem because you cannot start two DDP setups in one process.
So I use dist.destroy_process_group(). But when I use this function, the program waits endlessly.
Here is my code for setting up and destroying DDP:
def setup(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '12355'

    # initialize the process group
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    #dist.init_process_group("nccl", rank=rank, world_size=world_size)

    # Explicitly setting seed to make sure that models created in two processes
    # start from same random weights and biases.
    torch.manual_seed(42)

def cleanup():
    dist.destroy_process_group()
I've tried both the gloo and nccl backends,
with PyTorch 1.1, Python 3.6, and CUDA 9.0.
Thank you |
st179522 | model_a -> model_a = setup DDP and DDP(model_a) -> model_b -> setup DDP and model_b = DDP(model_b)
I’m not sure I follow this completely, does model_b use the output of model_a? Could you share some code about how model_a and model_b are initialized and trained using DDP?
Is it possible to create a single model with model_a and model_b as submodules and then use that as part of DDP? |
st179523 | Sorry for the late reply. I mean I set up DDP for model_a and DDP for model_b at the same time:
Looks like:
setup()
model_a = DDP(model_a)
setup()
model_b = DDP(model_b)
This will cause an error.
So I need to change like that:
setup()
model_a = DDP(model_a)
cleanup()
setup()
model_b = DDP(model_b)
but, after the cleanup, the program will be blocked. |
st179524 | I tried the following program (which is a bit similar to what you were doing), but couldn’t reproduce the issue:
import os
import tempfile
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.optim as optim
import torch.multiprocessing as mp

from torch.nn.parallel import DistributedDataParallel as DDP

def setup(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '12355'

    # initialize the process group
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Explicitly setting seed to make sure that models created in two processes
    # start from same random weights and biases.
    torch.manual_seed(42)

def cleanup():
    dist.destroy_process_group()

class ToyModel(nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()
        self.net1 = nn.Linear(10, 10)
        self.relu = nn.ReLU()
        self.net2 = nn.Linear(10, 5)

    def forward(self, x):
        return self.net2(self.relu(self.net1(x)))

setup(0, 1)
print ("F1")
model_a = DDP(ToyModel())
print ("F2")
cleanup()
print ("F3")
setup(0, 1)
print ("F4")
model_b = DDP(ToyModel())
print ("F5")
The program prints:
F1
F2
F3
F4
F5
Could you share more details about your environment (OS, python version etc)? Also, do you know at which line after the cleanup the program blocks on? |
st179525 | Hi,
I am trying to do multi-task learning with two classification layers on top of a single shared representation.
I believe having two nn.Linear layers should not differ from having one nn.Linear layer and then splitting its output,
but it appears they work differently when used with nn.DataParallel.
Please see the code at the bottom.
<class ‘Split’> has one nn.Linear then splits the output.
<class ‘TwoHeads’> has two nn.Linear layers.
What I do in the main code is to compare the outputs of
nn.DataParallel(net)
to
nn.DataParallel(net).module.
For Split the output coincides.
For TwoHeads the output differs.
And the code here.
import torch
import torch.nn as nn

class Split(nn.Module):
    def __init__(self):
        super(Split, self).__init__()
        self.linear = nn.Linear(10, 5)

    def forward(self, x):
        out = self.linear(x)
        out1, out2 = torch.split(out, 3, dim=1)
        return out1, out2

class TwoHeads(nn.Module):
    def __init__(self):
        super(TwoHeads, self).__init__()
        self.linear1 = nn.Linear(10, 3)
        self.linear2 = nn.Linear(10, 2)

    def forward(self, x):
        out1 = self.linear1(x)
        out2 = self.linear2(x)
        return out1, out2

if __name__ == '__main__':
    net1 = Split()
    net2 = TwoHeads()
    for net in [net1, net2]:
        net.cuda()
        net = nn.DataParallel(net, list(range(4)))
        with torch.no_grad():
            data = torch.randn(500, 10).cuda()
            out1, out2 = net(data)
            mod1, mod2 = net.module(data)
        print(int(not torch.equal(out1, mod1)), end=' ')
        print(int(not torch.equal(out2, mod2)), end=' ')
        print((out1 - mod1).abs().max(), (out2 - mod2).abs().max())
For me, the output looks like
0 0 tensor(0., device='cuda:0') tensor(0., device='cuda:0')
0 1 tensor(0., device='cuda:0') tensor(2.3842e-07, device='cuda:0')
My questions are:
Is it intended?
If it is because the computation graph uses the same variable twice, shall I always avoid ‘branching’ the computation graph?
What would be the correct way to do multi-task learning with DataParallel? |
st179526 | Solved by ptrblck in post #4
Since the error is that low, I would still assume it’s still due to floating point precision. |
st179527 | The error of 1e-7 is most likely due to floating point precision (usually you expect the error for FP32 to be in ~1e-6), so it seems your code is working fine.
x = torch.randn(10, 10, 10)
sum1 = x.sum()
sum2 = x.sum(0).sum(0).sum(0)
print(sum1 - sum2)
> tensor(-3.8147e-06) |
st179528 | Thank you so much for your answer and the code.
I firstly thought of it too, and I repeated the same experiment many times and on another machine but
the error still occurs for <class ‘TwoHeads’> at the second output only.
Would it be still most likely the floating point error? |
st179529 | Since the error is that low, I would still assume it’s still due to floating point precision. |
st179530 | AFAIK, the simplest way to do distributed training (multiple nodes) with PyTorch is something like:
sampler = torch.utils.data.distributed.DistributedSampler(train_data)
data_loader = torch.utils.data.DataLoader(dataset, sampler=sampler)
model = torch.nn.DataParallel(model).cuda()
for data, target in data_loader:
    out = model(data)
    ...
But what if I already have a large tensor data in hand and would like to split and distribute it and get the same output as the above snippet? Specifically,
model = torch.nn.DataParallel(model).cuda()
data = do_sth_fuct(data)
out = model(data)
Is there a PyTorch API to do so? Otherwise, what is the best way to achieve it? Thank you in advance! |
st179531 | But what if I already have a large tensor data in hand and would like to split and distribute it and get the same output as the above snippet
It really depends on where this large tensor is stored and how it is loaded. Is this large tensor stored in memory of one of the nodes? It might be helpful if you describe your system a bit more in detail and especially how the large tensor is computed/retrieved. |
st179532 | Hi, thanks for your reply!
The tensor is stored on one of the nodes. More specifically, I have, say, two nodes and each of them has 8 GPUs. I have a text dataset train.txt and have written a function that converts the text data into a large tensor X.
If I used torch.nn.parallel.DistributedDataParallel in the following way:
model = torch.nn.parallel.DistributedDataParallel(mode.cuda())
Would model(X) do what I want, that is, split the X and distribute the pieces to 16 gpus? |
st179533 | Hi, I'm having trouble loading a DistributedDataParallel model onto just 1 GPU. I want to know how to load a model (trained on 4 GPUs with DistributedDataParallel) in another job that uses only 1 GPU.
I have trained a model using 4 GPUs and DistributedDataParallel, and I saved it as in the tutorial:
https://pytorch.org/tutorials/intermediate/ddp_tutorial.html#save-and-load-checkpoints
However, I don't know how to load it using just 1 GPU for a simple job like a validation test.
if rank == 0:
    torch.save(ddp_model.state_dict(), CHECKPOINT_PATH)
dist.barrier()
I’m now using this method:
# initialize
torch.distributed.init_process_group(backend="nccl")
local_rank = torch.distributed.get_rank()
print(local_rank)
torch.cuda.set_device(local_rank)
device = torch.device("cuda", local_rank)
print(device)
# only gpu with rank0 can remain running
model = resnet50()
model.to(device)
model = torch.nn.parallel.DistributedDataParallel(model,
                                                  device_ids=[local_rank],
                                                  output_device=local_rank)
model.load_state_dict(torch.load(cfg.MODEL.pretrained_model_path))
model.eval()
if local_rank == 0:
    acc, acc_std, th = lfw_test(model, cfg.TEST.lfw_root, cfg.TEST.lfw_test_list)
and the command code:
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 test.py
This method works, and through nvidia-smi I saw that only GPU0 is working, but when I run another test process using GPU device 1 (when the previous one is still running):
CUDA_VISIBLE_DEVICES=1,2,3 python -m torch.distributed.launch --nproc_per_node=3 test.py
The previous process throw a runtime error:
RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1549633347309/work/torch/lib/c10d/ProcessGroupNCCL.cpp:260, unhandled system error
There are 2 reasons for me to load the model on 1 GPU:
Some jobs have a file-writing part, and distributed parallelism may cause a wrong ordering.
Running 4 tiny experiments with 1 GPU per process is more efficient for me to test my ideas and find bugs.
So is there a way that I can load the model in the common way: model.load_state_dict(torch.load(...)) followed by model.to(torch.device("cuda:0"))? |
st179534 | Solved by ptrblck in post #3
I’m not sure to understand the use case.
It seems you would like to load the state_dict to a single GPU machine, but in your code you are wrapping the model again in DDP.
Would creating the model, loading the state_dict, and pushing the model to the single GPU not work? |
st179535 | Oh… the first method does not work either. I find that using:
if local_rank == 0:
    output = model(input)
the call to model(input) never returns, and the code just blocks there. |
st179536 | I’m not sure to understand the use case.
It seems you would like to load the state_dict to a single GPU machine, but in your code you are wrapping the model again in DDP.
Would creating the model, loading the state_dict, and pushing the model to the single GPU not work? |
st179537 | It works! Thank you
I used:
model = resnet50()
model.to(device)
model.load_state_dict(torch.load(cfg.MODEL.pretrained_model_path))
and I got a Runtime error:
RuntimeError: Error(s) in loading state_dict for Resnet:
Missing key(s) in state_dict: "conv1.weight", "bn1.weight" ... ...
Unexpected key(s) in state_dict: "module.conv1.weight", "module.bn1.weight" ... ...
It seems DistributedDataParallel saves the model parameters under a module. prefix. Then I found a solution here:
[solved] KeyError: 'unexpected key "module.encoder.embedding.weight" in state_dict'
I was thinking about something like the following:
# original saved file with DataParallel
state_dict = torch.load('myfile.pth.tar')

# create new OrderedDict that does not contain `module.`
from collections import OrderedDict
new_state_dict = OrderedDict()
for k, v in state_dict.items():
    name = k[7:]  # remove `module.`
    new_state_dict[name] = v

# load params
model.load_state_dict(new_state_dict)
state_dict = torch.load(cfg.MODEL.pretrained_model_path)
new_state_dict = OrderedDict()
for k, v in state_dict.items():
    name = k[7:]
    new_state_dict[name] = v
model.load_state_dict(new_state_dict) |
st179538 | I am using 4 GPUs, let’s say device0, device1, device2, and device3. And the model is:
Inside the init function:
sub_model1.to(device0)
sub_model1 = torch.nn.DataParallel(sub_model1, device_ids=[device0, device1])
sub_model2.to(device2)
sub_model2 = torch.nn.DataParallel(sub_model2, device_ids=[device2, device3])
Inside the forward function:
y = sub_model1(x)
#y = y.to(device2) # we dont need this as we can put data in any device when using DataParallel
out = sub_model2(y)
This gives me out-of-memory problems after running perfectly for some epochs. The error reads (RuntimeError: Caught RuntimeError in replica 1 on device 3.).
Am I doing it correctly? In my case, if I don't use DataParallel, it works perfectly with 2 GPUs (I simply double the batch_size when using this DataParallel setting with 4 GPUs). |
st179539 | The error message doesn’t sound like an OOM error, but a RuntimeError.
I assume you are not seeing this error using a single GPU?
Do you get any more information from the stack trace? |
st179540 | Hi thanks for your reply
This is the complete error:
2019-11-10 10:56:51,904 - Parser - Current learning rate: 0.002000
2019-11-10 10:56:59,308 - Parser - Epoch 0, Batch 1, AvgCost: 2.10, CorrectSpan: 0.49, CorrectNuclear: 0.37, CorrectRelation: 0.03 - 0 mins 7 secs
2019-11-10 10:57:00,668 - Parser - Epoch 0, Batch 2, AvgCost: 2.08, CorrectSpan: 0.51, CorrectNuclear: 0.36, CorrectRelation: 0.02 - 0 mins 8 secs
2019-11-10 10:57:03,495 - Parser - Epoch 0, Batch 3, AvgCost: 2.11, CorrectSpan: 0.51, CorrectNuclear: 0.35, CorrectRelation: 0.03 - 0 mins 11 secs
2019-11-10 10:57:04,270 - Parser - Epoch 0, Batch 4, AvgCost: 2.10, CorrectSpan: 0.52, CorrectNuclear: 0.36, CorrectRelation: 0.03 - 0 mins 12 secs
2019-11-10 10:57:04,866 - Parser - Epoch 0, Batch 5, AvgCost: 2.04, CorrectSpan: 0.52, CorrectNuclear: 0.34, CorrectRelation: 0.03 - 0 mins 12 secs
2019-11-10 10:57:09,912 - Parser - Epoch 0, Batch 6, AvgCost: 2.05, CorrectSpan: 0.53, CorrectNuclear: 0.35, CorrectRelation: 0.05 - 0 mins 18 secs
2019-11-10 10:57:12,131 - Parser - Epoch 0, Batch 7, AvgCost: 2.07, CorrectSpan: 0.52, CorrectNuclear: 0.37, CorrectRelation: 0.04 - 0 mins 20 secs
2019-11-10 10:57:12,906 - Parser - Epoch 0, Batch 8, AvgCost: 2.06, CorrectSpan: 0.53, CorrectNuclear: 0.37, CorrectRelation: 0.04 - 0 mins 21 secs
2019-11-10 10:57:13,351 - Parser - Epoch 0, Batch 9, AvgCost: 2.06, CorrectSpan: 0.53, CorrectNuclear: 0.37, CorrectRelation: 0.04 - 0 mins 21 secs
2019-11-10 10:57:13,651 - Parser - Epoch 0, Batch 10, AvgCost: 2.08, CorrectSpan: 0.53, CorrectNuclear: 0.37, CorrectRelation: 0.04 - 0 mins 21 secs
Traceback (most recent call last):
File "train.py", line 319, in <module>
main()
File "train.py", line 238, in main
cost, cost_val = network.loss(subset_data, gold_subtrees, epoch=epoch)
File "/home/ffajri/Workspace/neural_project/models/architecture.py", line 497, in loss
cost = self.decode(encoder_output, gold_nuclear, gold_relation, gold_segmentation, span, len_golds)
File "/home/ffajri/Workspace/neural_project/models/architecture.py", line 381, in decode
segment_output, transformer_output = self.run_transformer_segmentation(hidden_state1, segment_mask) #output in cuda-2
File "/home/ffajri/Workspace/neural_project/models/architecture.py", line 98, in run_transformer_segmentation
edus_score, edus_vec = self.transformer_segmenter(segmented_encoder, segment_mask.int())
File "/home/ffajri/anaconda3/envs/py3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/ffajri/anaconda3/envs/py3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/ffajri/anaconda3/envs/py3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/ffajri/anaconda3/envs/py3/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
output.reraise()
File "/home/ffajri/anaconda3/envs/py3/lib/python3.7/site-packages/torch/_utils.py", line 385, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 1 on device 3.
Original Traceback (most recent call last):
File "/home/ffajri/anaconda3/envs/py3/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/home/ffajri/anaconda3/envs/py3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/ffajri/Workspace/neural_project/modules/encoder.py", line 95, in forward
x = self.transformer_inter[i](i, x, x, mask!=1) # all_sents * max_tokens * dim
File "/home/ffajri/anaconda3/envs/py3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/ffajri/Workspace/neural_project/modules/encoder.py", line 68, in forward
mask=mask)
File "/home/ffajri/anaconda3/envs/py3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/ffajri/Workspace/neural_project/modules/neural.py", line 410, in forward
query = query / math.sqrt(dim_per_head)
RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 3; 15.75 GiB total capacity; 11.64 GiB already allocated; 2.12 MiB free; 2.97 GiB cached)
Without DataParallel, it requires me to run it with at least two GPUs, and it works perfectly. |
st179541 | For now, I think the problem is because I am calling the sub_model2 within a loop.
y = sub_model1(x)
while (not_finished()):
    y1 = pick_some_index(y)
    out = sub_model2(y1)
Based on this https://erickguan.me/2019/pytorch-parallel-model, DataParallel copies the whole model and a slice of the mini-batch to each device. I guess the copied model on each device might not be deleted (nor handled wisely by PyTorch?) on each iteration of the loop (CMIIW).
I changed my training stage by eliminating these loops. It now works with 2 * batch_size. |
st179542 | I got the following warning message when I use LSTM with nn.DataParallel.
RuntimeWarning: RNN module weights are not part of single contiguous chunk of memory.
This means they need to be compacted at every call, possibly greatly increasing memory usage.
To compact weights again call flatten_parameters().
I found the warning is gone when I put self.lstm.flatten_parameters() at the top of the forward function, but I wonder why we need it.
Why are the RNN weights non-contiguous in memory when we use nn.DataParallel?
I also found the warning goes away if we replace DataParallel with DistributedDataParallel, so why aren't the weights non-contiguous in the latter case?
github.com/pytorch/pytorch
Issue: Multi-GPU autograd error with Pytorch 0.4 (opened by erogol on 2018-04-30)
After updating pytorch 0.4 I am getting the following error when I try to train my model here: https://github.com/mozilla/TTS with multi-gpus....
I found some similar questions but none of them had the answer.
What does flatten_parameters() do?
I saw many Pytorch examples using flatten_parameters in the forward function of the RNN
self.rnn.flatten_parameters()
I saw this in RNNBase, where it is written that it
Resets parameter data pointer so that they can use faster code paths
What does that mean?
Do we need to call flatten_parameters() in LSTM only if we are using Multi-GPUs?
I am trying to understand the use case for flatten_parameters(), when do we use it and what does it do? Is it used only if we are running our model on multiple GPUs? |
st179543 | After reading some related codes, I think I almost get it but still have few questions.
So what I understand is:
Every time we make a new RNN module instance, it allocates new w_ih, w_hh, b_ih, b_hh tensors and registers them as Parameters for each layer and direction.
But it's not guaranteed that the new tensors are contiguous in GPU memory, and performance can drop due to fragmentation. So we call the flatten_parameters function at the end of the constructor to aggregate all the weight tensors into a contiguous region of GPU memory.
This is done as follows:
1. Allocate one big buffer tensor called weight_buf
2. Copy the values of each weight tensor into weight_buf
3. Make each weight tensor's internal data pointer point to weight_buf + offset
(The real execution order is 1 -> 3 -> 2 in the actual code.)
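A quick way to see this for yourself (a sketch, assuming a CUDA build with cuDNN): after flatten_parameters(), the LSTM weights should all be views into one shared buffer, so their data pointers sit inside a single allocation.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, num_layers=2).cuda()
lstm.flatten_parameters()
for name, p in lstm.named_parameters():
    # consecutive parameters should report addresses within one allocation
    print(name, p.data_ptr())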
But when we use nn.DataParallel, it replicates the original module (which is allocated only on a certain GPU device) to every GPU it uses, and then the weight tensors are fragmented again, since there is no guarantee that the replicated tensors are still contiguous in memory.
Therefore we should call flatten_parameters again every time the module is replicated to another GPU, and the best place to put the call is at the head of the forward function (of the nn.Module), because the forward function of the nn.Module on each GPU is called exactly once when the forward of nn.DataParallel is called.
Although I have never used nn.DistributedDataParallel, I guess the reason it doesn't need the flatten_parameters call is that when it allocates a new instance of the RNN module, flatten_parameters is called automatically; unlike nn.DataParallel, it doesn't move the internal data position in memory afterwards, it only copies values into it.
And my questions are:
Do I understand this correctly? Is there anything I have misunderstood?
When we do step 3 of the aggregation (= make each weight tensor's internal data pointer point to weight_buf + offset), we call the get_parameters function, and it
calls cudnnGetRNNLinLayerMatrixParams so that matrix_pointer indicates the GPU memory position of the original, un-aggregated weight tensor,
sets offset as the difference between matrix_pointer and the start of weight_buf,
makes the internal data pointer of the weight tensor point to weight_buf + offset.
Then isn't it pointing at matrix_pointer again? Why don't we replace
Tensor param = at::empty({0}, weight_buf.options()).set_(weight_buf.storage(), offset, size);
with
Tensor param = at::empty({0}, weight_buf.options()).set_(weight_buf.storage(), cumsum, size); cumsum += size;
?
Or does that function calculate expected position of given component with respect to the given (start) data pointer? |
st179544 | wwiiiii:
Therefore we should call flatten_parameters again every time the module is replicated to another GPU, and the best place to put the call is at the head of the forward function (of the nn.Module), because the forward function of the nn.Module on each GPU is called exactly once when the forward of nn.DataParallel is called.
That’s the conclusion I came to as well, except that I actually observe a larger VRAM usage and loss compute time when I put flatten_parameters in the forward pass (and I get no warning) vs. putting it in the __init__ function of the model (and then I get the warning only when using DataParallel). |
st179545 | Hello everyone,
I’m training a sequence-based model on a single machine with 1 GPU and 16 CPU cores. My loss function is computationally expensive and performs best on the CPU. The reason it performs well on a CPU is that computing this particular loss is a sequential process and, while it can’t be parallelized/GPU optimized for a single data point, it can be parallelized across a batch of samples. The rest of the model works well on a GPU.
I'd like to know how to parallelize the loss of my model over a batch of samples and CPU cores while keeping the rest of my model on the GPU. I know this will involve transferring the model's predictions from GPU to CPU, but I believe the performance increase from using CPUs for the loss will outweigh this cost.
I’ve tried torch.multiprocessing as follows (where single_item_loss is the loss function I described above, that takes a single model prediction from the batch of predictions and returns a single Torch float tensor representing the loss):
with multiprocessing.Pool(multiprocessing.cpu_count()) as p:
results = p.map(single_item_loss, predictions)
but this yields the error:
RuntimeError: Cowardly refusing to serialize non-leaf tensor which requires_grad, since autograd does not support crossing process boundaries. If you just want to transfer the data, call detach() on the tensor before serializing (e.g., putting it on the queue).
I’ve also tried using the joblib library:
results = Parallel(n_jobs=multiprocessing.cpu_count(),
backend="loky")(delayed(single_item_loss)(pred) for pred in predictions)
but this must sever each of the individual losses from the computation graph, because calling torch.mean(torch.stack(results)).backward() (and later optimizer.step()) has no effect and the model does not train.
I’ve also tried calling loss.backward() before returning loss from my single_item_loss function, but this also does not work.
I’m eager to hear any feedback and work to solve this problem. Evidently this is not a very common situation as GPUs are almost always preferred, but in this case, I believe the custom loss function warrants this special treatment. Thank you very much for your help! |
st179546 | So currently this isn’t supported.
jonathanking:
RuntimeError: Cowardly refusing to serialize non-leaf tensor which requires_grad, since autograd does not support crossing process boundaries. If you just want to transfer the data, call detach() on the tensor before serializing (e.g., putting it on the queue).
You could implement forward and backward separately and stick it in an autograd.Function. Then inside the forward, you don't have grad-requiring things.
For a loss function in particular, you could compute the backward inside the forward as well (you can turn also autograd back on inside the forward with torch.enable_grad).
Then in the backward you just hand back the pre-computed result. (This is what CuDNN’s CTC Loss implementation does.)
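If it helps, here is a minimal sketch of that idea. The per-sample loss below is just a stand-in; in practice the expensive CPU loss (and its gradient) would be computed inside the forward, e.g. fanned out over worker processes, with only plain numbers crossing the process boundary.
import torch

class PrecomputedLoss(torch.autograd.Function):
    @staticmethod
    def forward(ctx, predictions):
        # Work on a detached CPU copy so nothing that requires grad has to
        # cross into worker processes.
        p = predictions.detach().cpu().clone().requires_grad_(True)
        with torch.enable_grad():
            loss = ((p - 1.0) ** 2).mean()   # stand-in for the expensive loss
            loss.backward()
        ctx.save_for_backward(p.grad.to(predictions.device))
        return loss.detach().to(predictions.device)

    @staticmethod
    def backward(ctx, grad_output):
        (precomputed_grad,) = ctx.saved_tensors
        # Hand the gradient computed during forward back to autograd.
        return grad_output * precomputed_grad

predictions = torch.randn(4, 3, requires_grad=True)
loss = PrecomputedLoss.apply(predictions)
loss.backward()
print(predictions.grad)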
It might be more efficient in terms of development time as well as runtime to drop into C++ and use PyTorch’s parallel_for, though.
Note that PyTorch tries to use intra-op parallelism, you would not want to “overbook” your cores.
Best regards
Thomas |
st179547 | Hi All,
I have an MPI program already running in distributed mode. I want to use those processes to run a PyTorch model in a distributed way. Is there a straightforward way to do this?
For instance, MPI_INIT() is already called from my program, and I have a world_size of 4.
Is it possible to start a PyTorch distributed program from this setup?
Adding more information:
How to use the following option with Pytorch DistributedDataParallel model?
store(Store, optional): Key/value store accessible to all workers, used
to exchange connection/address information.
Mutually exclusive with init_method.
I am trying to see how to map my existing MPI processes to launch a distributed data-parallel model in Pytorch using existing MPI instances.
Is there a way such that Pytorch distributed mode can consume existing MPI processes? |
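For what it's worth, a minimal sketch of the direction I have in mind (this assumes PyTorch was built with MPI support and the script is launched under the same mpirun that owns the existing processes):
import torch
import torch.distributed as dist
import torch.nn as nn

# With the MPI backend, rank and world size come from the MPI runtime itself,
# so no store/init_method has to be passed explicitly.
dist.init_process_group(backend='mpi')

print(dist.get_rank(), dist.get_world_size())

model = nn.Linear(10, 10)                        # toy model just for the sketch
ddp_model = nn.parallel.DistributedDataParallel(model)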
st179548 | Hey Folks,
I am running my model on multiple GPUs, and I have a tensor, present on each GPU, that I want to access. I am looking to get hold of all of these tensors on all the GPUs, do some operation on them in a synchronous fashion, and then broadcast the result to all the GPUs to be used in the next step.
For example: the tensor in question is T, with T present on cuda:0-3 (for 4 GPUs). I need to get hold of this tensor T (which has different values on different GPUs), compute some statistic from it, and then send this statistic back to all GPUs.
Please suggest me how this can be achieved. |
st179549 | Solved by rvarm1 in post #4. |
st179550 | Hi,
The simple way I see to do this is the following:
You will have to first send all these Tensors to a common GPU, aggregate the results and compute your update. Then send the result back to each GPU. |
st179551 | Thanks Alan! I was wondering how do I send all the tensors to one gpu, and perform operations, which I want to perform, before the value of any of the tensor changes in any of their respective gpus.
Meaning, I don't want the values of these tensors on the different GPUs to change before I complete my operation and send the results back to the respective GPUs. |
st179552 | Hi Nitin,
To send all tensors to one GPU, you’d want to use dist.gather, which will gather all of the tensors onto one gpu (this is assuming you have one process running per gpu). If your tensor is t, then your call would look like:
t = your_tensor(size)
if rank == 0:
# rank 0 is the node all tensors will be gathered on
gathered_tensors = [torch.zeros(size) for _ in range(WORLD_SIZE)]
dist.gather(t, gathered_tensors, dst=0)
else:
dist.gather(t, dst=0)
Then, you can compute what you want and send the result back wrapped in a tensor via a dist.scatter. To make sure the tensors don't change on the gpu before they are gathered, ensure all nodes call into gather at the same time when all nodes have the desired value. You could also use torch.distributed.barrier if you need additional synchronization. Check out the docs at https://pytorch.org/docs/stable/distributed.html |
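A possible sketch of the scatter-back step, continuing the snippet above (it reuses gathered_tensors and assumes the process group is already initialized; the mean is just a placeholder for whatever statistic you compute):
import torch
import torch.distributed as dist

result = torch.zeros(1)
if dist.get_rank() == 0:
    stat = torch.stack(gathered_tensors).mean()          # placeholder statistic
    scatter_list = [stat.view(1).clone() for _ in range(dist.get_world_size())]
    dist.scatter(result, scatter_list, src=0)
else:
    dist.scatter(result, src=0)
# every rank now holds the statistic in `result`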
st179553 | I would like to train, say, 10 independent neural networks on 5 GPUs in parallel (by training two on each GPU assuming there are no memory constraints). Also, I would like the code to be reproducible. Therefore, I have been training each network by re-running the same Python script, changing only the GPU device. I specify torch.manual_seed for reproducibility.
Is there any way to do this in a single Python script where PyTorch does the work of distributing the networks to the GPUs while maintaining reproducibility? I am aware of threading as one way to do this (something similar to ModelParallel), but I am worried about the reproducibility of my code when using threading.
Thank you for the help! |
st179554 | Hi,
Even with the same code, if you use different hardware/software versions, we don't guarantee reproducibility. See the doc about this here.
If you want to ensure that the scripts are independent, I would recommend using a simple batch script that launches all the jobs. |
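For example, something along these lines could launch them all (a sketch only; train.py and the seed flag are placeholders for your actual training script):
import os
import subprocess

procs = []
for job in range(10):
    env = {**os.environ, "CUDA_VISIBLE_DEVICES": str(job % 5)}  # two jobs per GPU
    procs.append(subprocess.Popen(
        ["python", "train.py", "--seed", str(job)], env=env))
for p in procs:
    p.wait()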
st179555 | Thanks for the reply! I have seen the documentation on reproducibility.
In this case, to begin with, I am only looking for reproducibility for my particular set-up (hardware and software). I am indeed using a shell script to launch the jobs, but I thought it would be neater if I could do this all within one script with PyTorch responsible for the distribution. |
st179556 | The level of “independence” between the runs would be much smaller if you ran the whole thing in a single process. They would share the same Python interpreter, the same CUDA allocator, the same memory space.
The ModelParallel tool that you linked seems interesting. But I don’t know of any other work along those lines… |
st179557 | Hi, I am using torch.distributed to do federated learning. What should I do if I want to scale some workers’ gradient since their data are more valuable than others? Thx a lot in advance! |
st179558 | Solved by Yanli_Zhao in post #2. |
st179559 | tensor.register_hook(customHook) may work, you need to write customHook to modify grad of the tensor.
but as far as I know customHook should be a function of grad only. For your case, you want to make customHook to be a function of grad and workerRank as well? |
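A rough sketch of such a hook (the scaling factor here is made up; in practice you would pick it per worker, e.g. capture the worker's rank or weight in the closure):
import torch

def make_scaling_hook(scale):
    # The returned hook replaces the gradient with a scaled copy.
    def hook(grad):
        return grad * scale
    return hook

t = torch.randn(3, requires_grad=True)
t.register_hook(make_scaling_hook(2.0))   # e.g. this worker's data is "worth" 2x
t.sum().backward()
print(t.grad)                             # gradient is scaled by 2.0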
st179560 | I only want to make customHook to be a function of grad and I haven’t used this before. Does hook only influence the backward process? |
st179561 | I’m training GPT-2 from huggingface/transformers on TPU. It’s training well. At the end of a training I’ve got loss around 4.36. When I save and restore the model - the loss skyrockets somewhere to 9.75.
(Screenshots attached: 1.png, 2.png)
I’ve got no similar issues with saving and loading on GPU with that code.
The code used for saving is just this:
xm.save(model_to_save.state_dict(), output_model_file)
xm.save is a convenience function that moves tensors from TPU to CPU before saving.
The whole code is here: https://github.com/mgrankin/ru_transformers/blob/master/tpu_lm_finetuning.py
I've tried the following.
First, I tried saving and loading right after training:
results = evaluate(args, model, tokenizer, "checkpoint-0", False)
log_info(f"Eval1 {results}")
model = model_class.from_pretrained(args.model_name_or_path, from_tf=bool('.ckpt' in args.model_name_or_path), config=config)
model.to(args.device)
results = evaluate(args, model, tokenizer, "checkpoint-0", False)
log_info(f"Eval2 {results}")
(Screenshot attached: 3.png)
Eval2 is much bigger than Eval1.
I also tried not recreating the model, but instead replacing the model's state_dict with the saved state_dict:
results = evaluate(args, model, tokenizer, "checkpoint-0", False)
log_info(f"Eval1 {results}")
model.load_state_dict(torch.load('output/classic_s/pytorch_model.bin'))
model.to(args.device)
results = evaluate(args, model, tokenizer, "checkpoint-0", False)
log_info(f"Eval2 {results}")
In that case Eval2 is equal to Eval1.
So, there is something that isn't in the state_dict, but it affects the model performance. What can that be? |
st179562 | Skimming through your code, it looks like you are using AdamW as your optimizer, which uses internal states. To be able to properly resume your training, you should also store/restore the optimizer’s state_dict. |
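In code this could look roughly like the following (a sketch: model and optimizer are assumed to exist already, and the file name is arbitrary):
import torch
import torch_xla.core.xla_model as xm

checkpoint = {
    "model": model.state_dict(),
    "optimizer": optimizer.state_dict(),     # AdamW's internal state lives here
}
xm.save(checkpoint, "checkpoint.pt")

# ...and when resuming:
checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model"])
optimizer.load_state_dict(checkpoint["optimizer"])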
st179563 | Thank you for the valuable advice on saving the state of AdamW. But that is not the root problem here. I don't run training after the save/load, and the model performs much worse right after loading.
Davide Libenzi is trying hard to help me with the issue here: https://github.com/pytorch/xla/issues/1245 |
st179564 | The issue is mostly resolved here
github.com/pytorch/xla, issue "Problem with model accuracy (after restore) on TPU" (opened Oct 27, 2019 by mgrankin). |
st179565 | Hi,
I have a simple problem (hopefully!) regarding parallelizing a part of my model. I have looked around but cannot seem to find a definitive answer on how to approach this although similar questions seem to have been asked.
TLDR: How do you do a parallel for loop across multiple CPUs or GPUs in the same computer in the middle of a gradient step?
What I have is multiple additional computations which I know are embarrassingly parallelizable, but compute bound. Currently, I am calculating them sequentially in a for loop within a .fit() function. These results are accumulated and then combined to produce the final loss.
A code sketch is as follows:
for i in range(epochs):
self.optimizer.zero_grad()
loss = self.fit_get_loss()
loss.backward(retain_graph=False)
def fit_get_loss():
# This is the for loop to parallelize
total_loss = 0
for j in range(self.N_extra_models):
m1 = self.extra_models_1[j]
m2 = self.extra_models_2[j]
loss1 = m1.fit_get_loss(with_grad=True)
loss2 = m2.fit_get_loss(with_grad=True)
total_loss = total_loss + loss1 + loss2
return total_loss
I would like to distribute the computation across many CPUs (e.g. a workstation with 20 cores) such that, for example, I have 20 of those j iterations occurring in parallel and I just accumulate the loss values. This is preferable to do on CPU given the hardware I currently have available. However, I am also keen to know how to apply the same approach to multiple GPUs.
Actually, the real problem I have is more complicated than the above, and I need to re-use the extra_models, but I think the above is the simplest form of what I'm trying to achieve. The extension of the problem is that I would like to access the updated fitted values for each of those extra_models within the main optimization loop.
thanks again |
st179566 | Hi,
You can use Python's native threading to achieve this. I am not very familiar with it, though.
Keep in mind that if you perform big enough PyTorch operations, PyTorch will use multiple threads automatically.
If you just have a lot of Python code to run, multiple threads won't help you, because you can only run one Python thread at a time (you can google the GIL). |
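As a rough sketch of the threading route (toy stand-ins for the extra models; whether it actually helps depends on how much time is spent inside PyTorch ops rather than in Python, per the GIL caveat above):
import torch
import torch.nn as nn
from concurrent.futures import ThreadPoolExecutor

extra_models = [nn.Linear(64, 1) for _ in range(20)]   # toy stand-ins
x = torch.randn(128, 64)

def one_loss(model):
    return model(x).pow(2).mean()

# forward passes run concurrently in threads; backward from the main thread
with ThreadPoolExecutor(max_workers=20) as pool:
    losses = list(pool.map(one_loss, extra_models))

total_loss = torch.stack(losses).sum()
total_loss.backward()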
st179567 | Hi, I’m trying to start 2 training tasks on a single node and I want each task to occupy 2 GPUs respectively. Everything is fine with task 1. However, when I try to launch task 2, I encounter the following error:
Traceback (most recent call last):
File "dist_test.py", line 93, in <module>
train()
File "dist_test.py", line 62, in train
init_method='env://',
File "/data0/whr/anaconda3/envs/torch1.0/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 354, in init_process_group
store, rank, world_size = next(rendezvous(url))
File "/data0/whr/anaconda3/envs/torch1.0/lib/python3.6/site-packages/torch/distributed/rendezvous.py", line 143, in _env_rendezvous_handler
store = TCPStore(master_addr, master_port, start_daemon)
RuntimeError: Address already in use
Traceback (most recent call last):
File "dist_test.py", line 93, in <module>
train()
File "dist_test.py", line 69, in train
output_device = args.local_rank
File "/data0/whr/anaconda3/envs/torch1.0/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 215, in __init__
self.broadcast_bucket_size)
File "/data0/whr/anaconda3/envs/torch1.0/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 377, in _dist_broadcast_coalesced
dist._dist_broadcast_coalesced(self.process_group, tensors, buffer_size, False)
RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1544081127912/work/torch/lib/c10d/ProcessGroupNCCL.cpp:260, unhandled system error
Because I want to use sync batch norm, which is only supported with DistributedDataParallel, I unfortunately cannot switch to DataParallel. It would be great if somebody could tell me whether it's possible to run 2 DistributedDataParallel tasks on a single node at the same time.
Here is an example code snippet for reproducing this problem:
import torch
import torch.nn as nn
import torch.nn.functional as F
import time
import argparse
import os
def parse_args():
parse = argparse.ArgumentParser()
parse.add_argument(
'--local_rank',
dest = 'local_rank',
type = int,
default = 0,
)
parse.add_argument("--gpu", type=str, default='None',
help="choose gpu device.")
return parse.parse_args()
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3,
64,
kernel_size = 3,
stride = 2,
padding = 1,)
self.conv2 = nn.Conv2d(64,
256,
kernel_size = 3,
stride = 2,
padding = 1,)
self.conv3 = nn.Conv2d(256,
512,
kernel_size = 3,
stride = 2,
padding = 1,)
#self.linear = nn.Linear(512, 10)
def forward(self, x):
H, W = x.size()[2:]
x = self.conv1(x)
x = self.conv2(x)
logits = self.conv3(x)
logits = F.interpolate(logits, (H, W), mode='bilinear')
return logits
def train():
args = parse_args()
if not args.gpu == 'None':
device = torch.device("cuda")
os.environ["CUDA_VISIBLE_DEVICES"]=args.gpu
else:
device = torch.device("cpu")
torch.cuda.set_device(args.local_rank)
torch.distributed.init_process_group(
backend='nccl',
init_method='env://',
)
net = Net()
net = net.to(device)
net = nn.parallel.DistributedDataParallel(net,
device_ids = [args.local_rank, ],
output_device = args.local_rank
)
net.train()
optim = torch.optim.SGD(
net.parameters(),
lr = 1e-3,
momentum = 0.9,
weight_decay = 5e-4)
criteria = nn.CrossEntropyLoss()
for i in range(10000):
img = torch.randn(2, 3, 128, 128).cuda()
lb = torch.randint(0, 19, [2,128,128]).cuda()
optim.zero_grad()
out = net(img)
loss = criteria(out, lb)
loss.backward()
loss_val = loss.item()
optim.step()
print(loss_val)
if __name__ == "__main__":
train()
I run the following command for task 1:
python -m torch.distributed.launch --nproc_per_node 2 dist_test.py --gpu 0,1
and the command for task 2:
python -m torch.distributed.launch --nproc_per_node 2 dist_test.py --gpu 2,3
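(My guess is that the clash is on the rendezvous port, since both launches use the launcher's default master port. If so, giving task 2 its own port should avoid the "Address already in use" error, e.g.:
python -m torch.distributed.launch --nproc_per_node 2 --master_port 29501 dist_test.py --gpu 2,3
but I have not confirmed this.)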
Thanks in advance! |