st179368 | Hi All,
Let's suppose I have a model that I want to train using DistributedDataParallel. I wrap my model with DistributedDataParallel as follows:
ddp_model = DDP(model, device_ids=[device])
I init my optimizer as follows:
optimizer = optim.SGD(ddp_model.parameters(), lr=1e-2)
Is there a way to modify the optimizer construction to apply per-parameter options? What does the following look like given the DDP model?
optim.SGD([
{'params': model.base.parameters()},
{'params': model.classifier.parameters(), 'lr': 1e-3}
], lr=1e-2, momentum=0.9)
As on https://pytorch.org/docs/stable/optim.html#per-parameter-options
Thanks! |
st179369 | I believe per-parameter options should be supported by DistributedDataParallel. Have you tried it out and seen any issues? If you do see issues/unexpected behavior, feel free to open an issue on GitHub.
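As a sketch (assuming model has base and classifier submodules as in the docs example; the submodules are reachable either through the original model or through ddp_model.module, which is the model inside the DDP wrapper):
ddp_model = DDP(model, device_ids=[device])
optimizer = optim.SGD([
    {'params': ddp_model.module.base.parameters()},
    {'params': ddp_model.module.classifier.parameters(), 'lr': 1e-3},
], lr=1e-2, momentum=0.9) |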
st179370 | Suppose I have a single node with 4 GPUs. I would like to do model selection w.r.t. a specific dataset through random search. A manager will keep track of a grid of such hyperparams. At each iteration, a distinct model with a specific setting will be created and trained on an assigned GPU. The training dataset is shared by all such models. A minimal example would look like the following:
class HyperSearchManager:
def __init__(self,
train_dataset: torch.utils.data.Dataset,
valid_dataset: torch.utils.data.Dataset,
test_dataset: torch.utils.data.Dataset,
param_grid: Dict[str, List]):
self.train_dataset = train_dataset
self.valid_dataset = valid_dataset
self.test_dataset = test_dataset
self.param_grid = param_grid
self.best = float('inf')
self.optimal_model = None
def param_iter(self) -> Dict:
...
yield params
def train_single_model(self, model: nn.Module, num_epoch: int, device: torch.device):
# copy model to the respective device
model = model.to(device)
# train loops for a single model
loader = torch.utils.data.DataLoader(self.train_dataset, batch_size, ...)
optimizer = torch.optim.Adam(model.parameters(), lr, ...)
for epoch in range(num_epoch):
for data in loader:
data = data.to(device)
train(model, data, optimizer)
...
# Do validation with early stopping, etc.
valid_loss = validation(model, self.valid_dataset)
# update optimal model according to valid metrics
self.update(model.cpu(), valid_loss)
def update(self, model, valid_loss):
# if valid_loss is minimal, keep current model
if valid_loss < self.best:
self.optimal_model = model
def search(self):
for _ in range(MAX_HYPER_OPT_ITER):
params = next(self.param_iter) # get next hyperparam combination
model = ModuleClass(**params) # create model for the specific hyperparam
# if a free gpu is available, create a new subprocess to run the model on the allocated gpu
device = self.get_available_device()
proc = multiprocessing.Process(target=self.train_single_model, args=(model, num_epochs, device))
proc.start()
# else waiting...
run_test(self.optimal_model, self.test_dataset)
I wonder if it is possible to find a schedule that allocates an idle GPU to a pending model. That is, at first 4 models are trained on 4 GPUs, respectively. Once a training process finishes, a new model is assigned to the released GPU.
If that's not straightforward, is there any easy implementation of such functionality? |
st179371 | As far as I know, PyTorch doesn't provide a framework to do this automatically. You will have to build this scheduling mechanism in your application itself.
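A rough scheduling sketch as a replacement for the search method above (num_gpus/num_epochs are illustrative; MAX_HYPER_OPT_ITER, ModuleClass, param_iter and train_single_model are from the question; remember to use the 'spawn' start method when CUDA is involved). Note that train_single_model runs in a child process, so self.update there will not change the manager in the parent; results would need to come back through e.g. a multiprocessing.Queue:
import itertools
import torch
import torch.multiprocessing as mp

def search(self, num_gpus=4, num_epochs=100):
    free_devices = list(range(num_gpus))
    running = []  # (process, device_index) pairs
    for params in itertools.islice(self.param_iter(), MAX_HYPER_OPT_ITER):
        while not free_devices:              # block until some training process finishes
            for proc, dev in list(running):
                proc.join(timeout=1)
                if not proc.is_alive():
                    running.remove((proc, dev))
                    free_devices.append(dev)
        dev = free_devices.pop()
        model = ModuleClass(**params)
        proc = mp.Process(target=self.train_single_model,
                          args=(model, num_epochs, torch.device('cuda', dev)))
        proc.start()
        running.append((proc, dev))
    for proc, _ in running:                  # wait for the last batch of models
        proc.join() |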
st179372 | I have a GRU model which I am applying to time-series data; the class looks like the following:
class GRUNet(nn.Module):
def __init__(self, input_dim, hidden_dim, output_dim, n_layers, drop_prob=0.2):
super(GRUNet, self).__init__()
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.gru = nn.GRU(input_dim, hidden_dim, n_layers, batch_first=True, dropout=drop_prob)
self.fc = nn.Linear(hidden_dim, output_dim)
self.relu = nn.ReLU()
def forward(self, x, h):
print('x inside forward {}'.format(x))
out, h = self.gru(x, h)
print('out shape :{}'.format(out.shape))
out = self.fc(self.relu(out[:,-1]))
return out, h
def init_hidden(self, batch_size):
weight = next(self.parameters()).data
hidden = weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().to(device)
return hidden
and my training function is:
def train(model, device, federated_train_loader, optimizer, epoch):
model.train()
# Iterate through each gateway's dataset
for idx, (seq, labels) in enumerate(federated_train_loader):
batch_idx = idx+1
# Send the model to the right gateway
model.send(seq.location)
# Move the data and target labels to the device (cpu/gpu) for computation
seq, labels = seq.to(device), labels.to(device)
h = model.init_hidden(BATCH_SIZE)
# Clear previous gradients (if they exist)
optimizer.zero_grad()
# Make a prediction
print('seq shape : {}'.format(seq.shape))
print('labels shape : {}'.format(labels.shape))
output, h = model(seq, h)
# Calculate huber loss for regression problems
#labels =labels.view(-1)
#seq = seq.view(-1)
#labels = labels.unsqueeze(1)
#labels = labels.float()
loss = loss_function(output, labels)
# Calculate the gradients
loss.backward()
# Update the model weights
optimizer.step()
# Get the model back from the gateway
#model.get()
if batch_idx==len(federated_train_loader) or (batch_idx!=0 and batch_idx % LOG_INTERVAL == 0):
# get the loss back
loss = loss.get()
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * BATCH_SIZE, len(federated_train_loader) * BATCH_SIZE,
100. * batch_idx / len(federated_train_loader), loss.item()))
I instantiated and called the model and printed the shapes as follows:
model = GRUNet(input_dim=1, hidden_dim=100, output_dim=1, n_layers=2)
GRUNet(
(gru): GRU(1, 100, num_layers=2, batch_first=True, dropout=0.2)
(fc): Linear(in_features=100, out_features=1, bias=True)
(relu): ReLU()
)
seq shape : torch.Size([1024, 1, 1])
labels shape : torch.Size([1024, 1, 1])
x inside forward (Wrapper)>[PointerTensor | me:36457989435 -> gatway1:28694227328]
I got the following error at the end :
RuntimeError Traceback (most recent call last)
<timed exec> in <module>
<ipython-input-30-8013666c5ed1> in train(model, device, federated_train_loader, optimizer, epoch)
14 print('seq shape : {}'.format(seq.shape))
15 print('labels shape : {}'.format(labels.shape))
---> 16 output, h = model(seq, h)
17 # Calculate huber loss for regression problems
18 #labels =labels.view(-1)
~/anaconda3/envs/ftorch/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
<ipython-input-26-be5b95661398> in forward(self, x, h)
11 def forward(self, x, h):
12 print('x inside forward {}'.format(x))
---> 13 out, h = self.gru(x, h)
14 print('out shape :{}'.format(out.shape))
15 out = self.fc(self.relu(out[:,-1]))
~/anaconda3/envs/ftorch/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/anaconda3/envs/ftorch/lib/python3.7/site-packages/torch/nn/modules/rnn.py in forward(self, input, hx)
727 return self.forward_packed(input, hx)
728 else:
--> 729 return self.forward_tensor(input, hx)
730
731
~/anaconda3/envs/ftorch/lib/python3.7/site-packages/torch/nn/modules/rnn.py in forward_tensor(self, input, hx)
719 sorted_indices = None
720 unsorted_indices = None
--> 721 output, hidden = self.forward_impl(input, hx, batch_sizes, max_batch_size, sorted_indices)
722 return output, self.permute_hidden(hidden, unsorted_indices)
723
~/anaconda3/envs/ftorch/lib/python3.7/site-packages/torch/nn/modules/rnn.py in forward_impl(self, input, hx, batch_sizes, max_batch_size, sorted_indices)
696 hx = self.permute_hidden(hx, sorted_indices)
697
--> 698 self.check_forward_args(input, hx, batch_sizes)
699 result = self.run_impl(input, hx, batch_sizes)
700 output = result[0]
~/anaconda3/envs/ftorch/lib/python3.7/site-packages/torch/nn/modules/rnn.py in check_forward_args(self, input, hidden, batch_sizes)
168 def check_forward_args(self, input, hidden, batch_sizes):
169 # type: (Tensor, Tensor, Optional[Tensor]) -> None
--> 170 self.check_input(input, batch_sizes)
171 expected_hidden_size = self.get_expected_hidden_size(input, batch_sizes)
172
~/anaconda3/envs/ftorch/lib/python3.7/site-packages/torch/nn/modules/rnn.py in check_input(self, input, batch_sizes)
147 raise RuntimeError(
148 'input.size(-1) must be equal to input_size. Expected {}, got {}'.format(
--> 149 self.input_size, input.size(-1)))
150
151 def get_expected_hidden_size(self, input, batch_sizes):
RuntimeError: input.size(-1) must be equal to input_size. Expected 1, got 0
A small note regarding this implementation: when I use the GRU alone and generate the input using torch.randn(1024, 1, 1), it works. But when I use it on my dataset through the syft library for federated data, it doesn't work. Could that be the reason behind it? I also provide the shapes of my federated data below, and they are the same as the randomized tensor:
train_inputs shape : torch.Size([815942, 1, 1])
train_labels shape : torch.Size([815942, 1])
test_inputs shape : torch.Size([149999, 1, 1])
test_labels shape : torch.Size([149999, 1])
gatway1_train_dataset : <syft.frameworks.torch.fl.dataset.BaseDataset object at 0x7fd7023e42d0>
gatway2_train_dataset : <syft.frameworks.torch.fl.dataset.BaseDataset object at 0x7fd6d0e4bf90>
federated_train_dataset : FederatedDataset
Distributed accross: gatway1, gatway2
Number of datapoints: 815942
federated_test_dataset : FederatedDataset
Distributed accross: gatway1, gatway2
Number of datapoints: 149999
I have been stuck for a while now, and I have seen other GRU models working properly on federated data. Any clue? Much appreciated! |
st179373 | Is it possible to train multiple models simultaneously?
For instance, suppose my network class is Net.
net1 = Net()
net2 = Net()
Is it possible to train net 1 and net 2 simultaneously?
Thanks. |
st179374 | I assume you need to do this because you want to use different training data for the models? In that case, yes, that should be possible. I think you can wrap them with a wrapper module, something like:
class WrapperModule(nn.Module):
def __init__(self):
super(WrapperModule, self).__init__()
self.net0 = Net()
self.net1 = Net()
    def forward(self, inputs):
        return [self.net0(inputs[0]), self.net1(inputs[1])]
net = WrapperModule()
opt = SomeOptimizer(net.parameters())
ddp = DistributedDataParallel(net)
out0, out1 = ddp(inputs)
(out0.sum() + out1.sum()).backward()  # stand-in for your real loss
opt.step() |
st179375 | Sorry, just realized you didn't mention DistributedDataParallel in the question. Is this for distributed training? Could you please provide more context? |
st179376 | You can pass each of your models to a different GPU. See How to deploy different scripts on different GPUs?
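For example (a minimal sketch, assuming both models fit on their respective devices and that Net and batch come from your own code):
net1 = Net().to('cuda:0')
net2 = Net().to('cuda:1')
out1 = net1(batch.to('cuda:0'))
out2 = net2(batch.to('cuda:1')) |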
st179377 | Hi,
I am planning to add a new GPU to my computer. Using PyTorch on Windows, I wonder if it will be possible for me to use parallelism.
In 2017 it wasn't possible (https://github.com/pytorch/pytorch/issues/4391).
Has it changed? Will I have to switch to Linux? And is there a guide on how to install PyTorch for data parallelism?
Thank you for all the library! |
st179378 | Hi, I am not aware of an NCCL binary from NVIDIA that supports Windows, so parallelization over multiple GPUs on Windows is still not possible. |
st179379 | Thank you very much for the answer. Is nccl installed automatically when installing CUDA on Linux, or do I need to add something else? |
st179380 | I'm also on a Windows system. I was able to use DataParallel on my model without any apparent errors. However, the performance was actually worse, which makes me think that it's not actually using multiple GPUs. Why am I able to use multiple GPUs in TensorFlow on a Windows system, but not PyTorch? There must be some hack to be able to do this. |
st179381 | SmoothPQ:
Thank you very much for the answer. Is nccl installed automatically when installing CUDA on Linux, or do I need to add something else?
On Linux, NCCL and torch.distributed are enabled by default. On macOS, with PyTorch 1.3.1+, you need to conda install libuv and pkg-config, and explicitly set USE_DISTRIBUTED=1 when compiling from source. For Windows, torch.distributed is not enabled yet. |
st179382 | I was able to use dataparallel on my model without any apparent errors. However, the performance was actually worse; which makes me think that it’s not actually using multiple gpus.
DataParallel is single-process, multi-thread data parallelism, and it replicates the input module in every forward pass, which is expected to be slow, but it is a very convenient entry point for enabling parallelism. You don't need to do anything to enable that, and it should work fine if the batch size is large enough (to hide the model replication overhead).
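For reference, a minimal DataParallel sketch (the module and sizes here are illustrative, not from the question):
import torch
import torch.nn as nn

model = nn.Linear(128, 10).cuda()
dp_model = nn.DataParallel(model)          # replicates the module across all visible GPUs each forward
x = torch.randn(256, 128, device='cuda')   # a large batch, split along dim 0
out = dp_model(x)                          # per-GPU outputs are gathered back on the default device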
Why am I able to use multiple gpus in tensorflow on a windows system
We are working on using libuv to enable that, as @pietern did for Windows, but the timeline is TBD. |
st179383 | Hi,
I am trying to init dist and I am getting stuck.
I have 2 nodes, master and slave, both with PyTorch 1.3.1 installed via Anaconda.
It works on both when:
dist.init_process_group(
backend ="NCCL",
world_size = 2,
rank = 0,# 0 for master and 1 for slave
init_method="tcp://192.168.1.102:23458"#master addr and port
)
It hangs on both when:
store = dist.TCPStore("192.168.1.102", 23458, 2, 0)
Could somebody help?
Thanks in advance and Happy New Year. |
st179384 | This is how TCPStore is initialized when you call init_process_group. You can print out the args in both processes to check which one went wrong.
For start_daemon, did you pass in 1 for rank 0 and 0 for rank 1?
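For reference, a sketch using the TCPStore(host, port, world_size, is_master) signature from the question; only the master should pass True as the last argument, since it starts the daemon that the other rank connects to:
import torch.distributed as dist

# on the master (192.168.1.102)
store = dist.TCPStore("192.168.1.102", 23458, 2, True)
# on the other node
store = dist.TCPStore("192.168.1.102", 23458, 2, False) |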
st179385 | When I run inference on the same input, the output is sometimes not deterministic; the code is below.
Debugging shows that the posterior is the same, but the sample is sometimes different for the same input.
posterior = F.softmax(logits, dim=1)
distrib = torch.distributions.Categorical(posterior)
sample = distrib.sample().float()
I call model.eval() and set the seed at the beginning. Do you have any suggestions? Thank you.
seed = 1234
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
np.random.seed(seed)
random.seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True |
st179386 | Can you ask this question with the Uncategorized label? It seems there is some randomness somewhere, but torch.distributed is not involved here (your question concerns torch.distributions, not torch.distributed). |
st179387 | I am trying to spawn a couple of processes using PyTorch's multiprocessing module within an OpenMPI distributed back-end. What I have is the following code:
def run(rank_local, rank, world_size, maingp):
print("I WAS SPAWNED ", rank_local, " OF ", rank)
tensor = torch.zeros(1)
tensor += 1
if rank == 0:
tensor += 100
dist.send(tensor, dst=1)
else:
print("I am spawn: ", rank, "and my tensor value before receive: ", tensor[0])
dist.recv(tensor, src=0)
print("I am spawn: ", rank, "and my tensor value after receive: ", tensor[0])
if __name__ == '__main__':
# Initialize Process Group
dist.init_process_group(backend="mpi", group_name="main")
maingp = None #torch.distributed.new_group([0,1])
mp.set_start_method('spawn')
# get current process information
world_size = dist.get_world_size()
rank = dist.get_rank()
# Establish Local Rank and set device on this node
mp.spawn(run, args=(rank, world_size, maingp), nprocs=1)
I run this code using the openmpi as follows:
mpirun -n 2 python code.py
So my understanding is that mpirun creates two processes with ranks [0, 1], and each of these processes spawns a new process with local rank 0. Now, when I want to communicate between these two sub-processes of the main processes, I get a traceback with the following error:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/usama/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/home/usama/code/test/code.py", line 19, in run
dist.send(tensor, dst=1)
File "/home/usama/anaconda3/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 666, in send
_check_default_pg()
File "/home/usama/anaconda3/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 191, in _check_default_pg
"Default process group is not initialized"
AssertionError: Default process group is not initialized
My question is: how do I make these sub-processes able to communicate, i.e., the [0, 0] process sending something to the [1, 0] process? Any ideas?
I have asked this on Stack Overflow as well, but to no avail:
stackoverflow.com: "Using Pytorch's Multiprocessing along with Distributed" (python, pytorch, openmpi), asked by Usama Zafar on 09 May 19 UTC |
st179388 | Usama-Zafar:
AssertionError: Default process group is not initialized
The error above suggests the init_process_group method was not called in the process that tries to use the distributed package. I think the following line needs to be moved into the run method, which is the entry point for the spawned process:
# Initialize Process Group
dist.init_process_group(backend="mpi", group_name="main") |
st179389 | Usama-Zafar:
dist.init_process_group(backend="mpi", group_name="main")
BTW, the way you call init_process_group does not look correct to me. You will need to either provide rank + world_size or provide an initialized store. The former will be easier.
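A sketch of the rank + world_size form (shown with the gloo backend and an illustrative TCP rendezvous address, not the original MPI setup):
dist.init_process_group(
    backend="gloo",
    init_method="tcp://192.168.1.1:23456",  # illustrative master address/port
    rank=rank,                              # globally unique rank of this process
    world_size=world_size,
) |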
st179390 | Not sure if this is the right way to do it. I'm wrapping my model in nn.DataParallel for multi-GPU training. There is an LSTM module as part of this model. This LSTM module has a custom method that resets the hidden states, called each time after a forward pass is done during training. This is the only custom method that's used.
To access this reset method in the parallel model, I do
model.module.lstm.reset_hidden_state()
Whereas if my model is not wrapped in DataParallel, it would just be
model.lstm.reset_hidden_state()
Is this right, or do I have to write a custom DataParallel wrapper that has scatter, gather, etc methods? If so, how would I do it?
This is the lstm module:
class LSTM(nn.Module):
def __init__(self, latent_dim, num_layers, hidden_dim):
super().__init__()
self.lstm = nn.LSTM(input_size=latent_dim, num_layers=num_layers, hidden_size=hidden_dim, batch_first=True, dropout=0.0)
self.hidden_state = None
def reset_hidden_state(self):
self.hidden_state = None
def forward(self,X):
self.lstm.flatten_parameters()
X, self.hidden_state = self.lstm(X, self.hidden_state)
return X |
st179391 | Solved by mrshenli in post #2 (see the full reply below). |
st179392 | bigyeet:
Is this right, or do I have to write a custom DataParallel wrapper that has scatter, gather, etc methods? If so, how would I do it?
It depends on what you expected reset_hidden_state to achieve. Below is what happens in EVERY forward pass when you use DataParallel.
split input data
replicate model to all devices
feed input data splits to all model replicas
gather outputs from all replicas
done with forward
After the forward pass, the autograd graph actually contains multiple model replicas. It looks something like:
original model <- scatter <- model replicas <- replica output <- gather <- final output.
So in your above use case, if reset_hidden_state has any side effect that you would like to apply to the backward pass, it will only apply to the original model, not to the model replicas. But if you are only trying to clear some states for the next forward pass, it should work.
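So a pattern like the following should be fine for just clearing state between iterations (FullModel, loader, criterion and optimizer are assumed here, not from the original post):
model = nn.DataParallel(FullModel().cuda())
for X, y in loader:
    loss = criterion(model(X), y)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    model.module.lstm.reset_hidden_state()  # custom method on the wrapped module |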
st179393 | Hi,
I’m trying to use DistributedDataParallel on a CPU-only machine with multiple cores.
The documentation for DDP (https://github.com/pytorch/pytorch/blob/master/torch/nn/parallel/distributed.py) states: "For multi-device modules and CPU modules, device_ids must be None or an empty list, and input data for the forward pass must be placed on the correct device. (default: all devices for single-device modules)."
I want to parallelize training across CPU processes in a single machine. My dataset is an in-memory numpy array.
Would I have to manually separate this dataset into different subsets, and load each subset for each CPU process? Does splitting the input along the batch dimension work for CPU modules as well? I am using torch’s multiprocessing module to spawn processes to use with my DDP model.
Thank you.
PS. What’s the best practice for sharing an in-memory array across torch processes? That would be helpful as well.
Some additional observations:
When I print the loss in each process, the loss value is the same. If the data was being split properly by DDP, wouldn’t each process have a different loss value? |
st179394 | From my experiments, it appears that for DDP using CPU processes, there is no splitting of data across the batch dimension across processes.
In the source code as well, if the model’s device_ids is None, then scattering is not performed in the forward() pass of the model.
Can someone more authoritative confirm this behavior? |
st179395 | In the source code as well, if the model’s device_ids is None, then scattering is not performed in the forward() pass of the model.
Yes, this is correct.
Input data split only occurs in two situations:
When using DataParallel (single-process multi-thread)
Using DistributedDataParallel (DDP), and provide a device_ids list of multiple CUDA devices. In this case, each DDP process will operate on multiple devices and multiple model replicas, and hence need to split the input data. (This is not recommended, as this could be slow)
For the recommended use case of DDP (one device/replica per DDP process), DDP will NOT split input or distributed them into multiple processes. Each DDP process needs to read its own input data independently. You could try manually splitting those data (say on rank0) and pass them across processes though, if they are on the same machine. Or, I also saw many people using the DistributedSampler 147 to load input data |
st179396 | The comment below says there was no proper synchronization with the CUDA events that recorded copies into the contents tensor before the bucket contents tensor allreduce. Does this mean CUDA is not supported?
Web link:
https://github.com/pytorch/pytorch/blob/master/torch/csrc/distributed/c10d/reducer.cpp#L404
// Keep going, until we either:
// - have kicked off reduction for all buckets, or
// - found a bucket that's not yet ready for reduction.
for (; next_bucket_ < buckets_.size() && buckets_[next_bucket_].pending == 0;
next_bucket_++) {
auto& bucket = buckets_[next_bucket_];
std::vector<at::Tensor> tensors;
tensors.reserve(bucket.replicas.size());
for (const auto& replica : bucket.replicas) {
// TODO(@pietern): Ensure proper synchronization with the CUDA events
// that recorded copies into this contents tensor. If these copies are
// executed on non-default streams, the current stream for the device
// that holds the contents tensor must wait on these events.
//
// As long as autograd uses the default stream for every device,
// these operations are implicitly sequenced, and we don't need to
// do any extra synchronization here.
//
tensors.push_back(replica.contents);
} |
st179397 | Solved by mrshenli in post #2 (see the full reply below). |
st179398 | DistributedDataParallel (DDP) does support CUDA. The comment suggests extra care might be necessary when the backward pass runs on a non-default stream. Actually, even if backward occurs on non-default streams, it should be fine for most use cases. Below is why:
Background: I learned from @albanD that the autograd engine will use the same stream as the forward pass.
Let’s take a look at what could go wrong for the code you quoted.
1: the tensor is not ready when launching the allreduce operation
2: the tensor was destroyed too soon before the allreduce finishes.
We can rule out 2 for now, as all_reduce does recordStream() properly to prevent CUDA blocks to be freed too early.
Then the only thing left is 1. The operation on that tensor before allreduce is bucket_view.copy_(grad.view({-1}), /* non_blocking */ true); in mark_variable_ready_dense. The copy here happens on the same device (replica.contents and grad). And Reducer itself does not switch streams in between. So the only case that could hit race condition is when the application used different streams for different operators during the forward pass, and grads associated with those operators fall into the same bucket in reducer. |
st179399 | my code:
batch_size = 64
model = nn.DataParallel(model, device_ids=[0,1,2,3], dim=0)
model.cuda()
criterion = nn.BCEWithLogitsLoss()
criterion = criterion.cuda()
for s, t in loader:
    logits = model(s, t)
    loss = model.module.compute_loss(logits, t, criterion)
When I compute the loss, a ValueError is raised saying logits.shape is (256, num_classes) but t.shape is (64, num_classes). I want to know why. |
st179400 | Solved by TripleTry in post #3 (see the full reply below). |
st179401 | Hi,
I think your code is missing some important bits.
Could you give a small code sample that we can run that shows the issue please? |
st179402 | Thank you very much, I have solved this problem.
There was something wrong with my dataloader function: when I load data, I use padding to process it, but I forgot to turn the list into a tensor, and as a result nn.DataParallel split the data incorrectly along the batch dim.
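For anyone hitting the same thing, an illustrative collate_fn that pads and returns tensors (so DataParallel can split along dim 0) could look like this:
import torch
from torch.nn.utils.rnn import pad_sequence

def collate(batch):
    seqs, targets = zip(*batch)                  # lists of per-sample tensors
    seqs = pad_sequence(seqs, batch_first=True)  # (batch, max_len, ...)
    targets = torch.stack(targets)               # (batch, ...)
    return seqs, targets

loader = torch.utils.data.DataLoader(dataset, batch_size=64, collate_fn=collate) |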
st179403 | When I train my network with a single GPU, the training process terminates successfully after 120 epochs. However, if I use two GPUs, I get nan loss after a dozen epochs. The only thing I change is the batch size. For single GPU I use a batch size of 2 and for 2 GPUs I use a batch size of 1 for each GPU. The other parameters are exactly the same. I also replace every batchnorm2d layer with a syncbatchnorm layer. Strangely, syncbatchnorm gives higher loss. What could be the possible reasons? |
st179404 | Could you please paste a code snippet to reproduce? Are you using DataParallel or DistributedDataParallel? |
st179405 | I use DDP. I enabled anomaly detection. Below is the message I get
/pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:57: UserWarning: Traceback of forward call that caused the error:
File "<string>", line 1, in <module>
File "/usr/lib/python3.6/multiprocessing/spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "/usr/lib/python3.6/multiprocessing/spawn.py", line 118, in _main
return self._bootstrap()
File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/beinan/.local/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/home/beinan/Desktop/pytorch-bcn/jupyter/train.py", line 153, in main_worker
train(train_loader, model, criterion, optimizer, epoch, args)
File "/home/beinan/Desktop/pytorch-bcn/jupyter/train.py", line 199, in train
output = model(images)
File "/home/beinan/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/beinan/.local/lib/python3.6/site-packages/apex/parallel/distributed.py", line 560, in forward
result = self.module(*inputs, **kwargs)
File "/home/beinan/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "…/bcn/models/semantic/resnet34.py", line 295, in forward
x, encoder_features, encoder_feature = self.encoder(x)
File "/home/beinan/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "…/bcn/models/semantic/resnet34.py", line 224, in forward
x = self.bn(self.conv(x))
File "/home/beinan/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "…/bcn/layers/conv.py", line 38, in forward
groups=self.groups
File "/home/beinan/.local/lib/python3.6/site-packages/apex/amp/wrap.py", line 28, in wrapper
return orig_fn(*new_args, **kwargs)
Traceback (most recent call last):
File "train.py", line 314, in <module>
main()
File "train.py", line 66, in main
mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
File "/home/beinan/.local/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn
while not spawn_context.join():
File "/home/beinan/.local/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 118, in join
raise Exception(msg)
Exception:
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/home/beinan/.local/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/home/beinan/Desktop/pytorch-bcn/jupyter/train.py", line 153, in main_worker
train(train_loader, model, criterion, optimizer, epoch, args)
File "/home/beinan/Desktop/pytorch-bcn/jupyter/train.py", line 208, in train
scaled_loss.backward()
File "/home/beinan/.local/lib/python3.6/site-packages/torch/tensor.py", line 166, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/beinan/.local/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Function 'CudnnConvolutionBackward' returned nan values in its 1th output.
This consistently happens after 90+ epochs, but only if I use DDP. Single GPU, single node does not have this problem. BTW, I train with fp16 precision. Is it possible that fp16 + DDP + SyncBatchNorm somehow leads to this? |
st179406 | Is it possible for the transformation below to cause any problem?
class RandomResizeCrop(object):
def __init__(self, min_scale, max_scale, scale_step, output_size):
self.scales = np.arange(min_scale, max_scale, scale_step)
self.output_height, self.output_width = output_size
def __call__(self, image, annotation):
scale = np.random.choice(self.scales)
image = cv2.resize(image, (0,0), fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
annotation = cv2.resize(annotation, (0,0), fx=scale, fy=scale, interpolation=cv2.INTER_NEAREST)
input_height, input_width = image.shape[:2]
row_pads = max(self.output_height - input_height, 0)
col_pads = max(self.output_width - input_width, 0)
top_pads = randint(0, row_pads)
bot_pads = row_pads - top_pads
left_pads = randint(0, col_pads)
right_pads = col_pads - left_pads
image = np.pad(image, ((top_pads,bot_pads),(left_pads,right_pads),(0,0)), mode='constant', constant_values=0)
annotation = np.pad(annotation, ((top_pads,bot_pads),(left_pads,right_pads)), mode='constant', constant_values=255)
y1 = randint(0, max(input_height - self.output_height, 0))
y2 = y1 + self.output_height
x1 = randint(0, max(input_width - self.output_width, 0))
x2 = x1 + self.output_width
return image[y1:y2,x1:x2], annotation[y1:y2,x1:x2] |
st179407 | It looks like the first convolution (operation) in ResNet is causing the NaN. Is there any way some values of an image could become NaN after the transformation?
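One quick way to rule out the data pipeline (an illustrative check, not from the original code) is to assert the batch is finite right before the forward pass:
assert torch.isfinite(images).all(), "non-finite values in the input batch"
output = model(images) |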
st179408 | Not sure if something is missing, but isn't SyncBatchNorm.convert_sync_batchnorm() supposed to convert the module transparently?
However, the following code segment produces ValueError: expected at least 3D input (got 2D input).
Without the conversion, the forward goes as expected.
Any ideas?
import os
import torch
from torch import nn
module = torch.nn.Sequential(
torch.nn.Linear(20, 100),
torch.nn.BatchNorm1d(100)
).cuda()
# creating process group (optional)
# process_ids is a list of int identifying rank ids.
os.environ['RANK'] = '0'
os.environ['WORLD_SIZE'] = '1'
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '25791'
process_group = torch.distributed.init_process_group(backend='nccl')
module = nn.SyncBatchNorm.convert_sync_batchnorm(module, process_group)
input = torch.randn(2, 20).cuda()
output = module(input)
print(output.shape)
The output:
Traceback (most recent call last):
File "syncBN.py", line 21, in <module>
output = module(input)
File "/home/ml/farleylai/miniconda3/envs/sinet37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/ml/farleylai/miniconda3/envs/sinet37/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/ml/farleylai/miniconda3/envs/sinet37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/ml/farleylai/miniconda3/envs/sinet37/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 429, in forward
self._check_input_dim(input)
File "/home/ml/farleylai/miniconda3/envs/sinet37/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 417, in _check_input_dim
.format(input.dim()))
ValueError: expected at least 3D input (got 2D input)
Expected output as w/o conversion:
torch.Size([2, 100])
Ubuntu 16.04 with PyTorch 1.3 installed through conda. |
st179409 | Hi,
The original modules like BatchNorm1d or BatchNorm2d support not having a batch size, so they handle respectively 2d/3d inputs and 3d/4d inputs.
The sync batchnorm has no specialized functions and works for all. But to know which version to use, it must use the number of dimensions of the input (otherwise as you see above, 3d input could be either a batched 1d or an unbatched 2d). And so it only allows having a batch dimension. |
st179410 | @albanD
I looked into the code and found this restriction is imposed by SyncBatchNorm:
def _check_input_dim(self, input):
if input.dim() <= 2:
raise ValueError('expected at least 3D input (got {}D input)'
.format(input.dim()))
This is completely different from the original BatchNorm1d to be wrapped:
def _check_input_dim(self, input):
if input.dim() != 2 and input.dim() != 3:
raise ValueError('expected 2D or 3D input (got {}D input)'
.format(input.dim()))
I got confused by the code segment, which is actually from the SyncBatchNorm API documentation, where BatchNorm1d is used with convert_sync_batchnorm.
Why doesn’t SyncBatchNorm explicitly check whether the module to wrap is BatchNorm1d or BatchNorm2d instead of the general _BatchNorm in convert_sync_batchnorm?
If this is not going to work, what is the right way to use convert_sync_batchnorm for those models with BatchNorm1d? |
st179411 | If this is not going to work, what is the right way to use convert_sync_batchnorm for those models with BatchNorm1d ?
I think the fix here is to ensure you always have a batch dimension. Potentially adding an .unsqueeze(0) to your input.
Then why doesn’t SyncBatchNorm explicitly check whether the module to wrap is BatchNorm1d or BatchNorm2d instead of the general _BatchNorm in convert_sync_batchnorm ?
This would be a nice addition, we would be happy to merge a PR that adds this feature! |
st179412 | I think the fix here is to ensure you always have a batch dimension. Potentially adding an .unsqueeze(0) to your input.
The example input (2, 20) already contains a batch dim, indicating a batch of two 1D examples.
If we fake the input with unsqueeze(0), how could it work when there are other modules before BatchNorm1d in the model that may assume the 0th dim must be the batch dim?
After all, the layer(s) before BatchNorm1d can be anything else in general, right?
BTW, I tried to make it of size (1, 2, 20) but it still complains something wrong with the running_mean size:
Traceback (most recent call last):
File "syncBN.py", line 29, in <module>
output = module(input)
File "/home/ml/farleylai/miniconda3/envs/sinet37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/ml/farleylai/miniconda3/envs/sinet37/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/ml/farleylai/miniconda3/envs/sinet37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/ml/farleylai/miniconda3/envs/sinet37/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 459, in forward
exponential_average_factor, self.eps)
File "/home/ml/farleylai/miniconda3/envs/sinet37/lib/python3.7/site-packages/torch/nn/functional.py", line 1670, in batch_norm
training, momentum, eps, torch.backends.cudnn.enabled
RuntimeError: running_mean should contain 2 elements not 100
This would be a nice addition, we would be happy to merge a PR that adds this feature!
So you are suggesting it is not intentional but something that could be completed?
If that is the case, it seems like creating an issue on the GitHub repo makes more sense and I will look into the details under the hood.
Therefore, to sum up, models that already use BatchNorm1d likely cannot be converted to SyncBN transparently by default. |
st179413 | The example input (2, 20) already contains a batch dim, indicating a batch of two 1D examples.
That is not how batchnorm 1d works. Batchnorm 1d assumes an optional first batch dimension, then a channel dimension then an actual dimension. So the input is 2d without batch and 3d with batch.
BTW, I tried to make it of size (1, 2, 20) but it still complains something wrong with the running_mean size:
This is because you define your batchnorm as having 100 channels, but what you give as input has 2. |
st179414 | I am experiencing exactly the same problem as you, @farleylai, right now.
I am trying to run a model with a ResNet backbone, which has only BatchNorm2d, and a head network that has exactly ONE BatchNorm1d, and that is exactly what causes the problem.
The input to the BatchNorm1d in the forward function of the model is [64, 2048].
As suggested by @albanD, I unsqueezed it in the forward function so that the input shape is now [64, 1, 2048]. The next module is a Linear classifier, so I squeezed the output of the BatchNorm1d to again have a [64, 2048] input to the Linear layer. This helped in the sense that the forward pass is working, but in the backward pass I am now getting an error:
RuntimeError: Function SyncBatchNormBackward returned an invalid gradient at index 1 - got [1] but expected shape compatible with [2048]
Any suggestions @albanD ? |
st179415 | Not really sure what you mean by ‘small code sample’. So I will try:
class Net(nn.Module):
in_planes = 2048
def __init__(self, num_classes, model_path, model_name):
super(Net, self).__init__()
self.base = ResNet(block=Bottleneck,
layers=[3, 4, 6, 3])
self.gap = nn.AdaptiveAvgPool2d(1)
self.num_classes = num_classes
self.bottleneck = nn.BatchNorm1d(self.in_planes)
self.bottleneck.bias.requires_grad_(False) # no shift
self.classifier = nn.Linear(self.in_planes, self.num_classes, bias=False)
def forward(self, x):
global_feat = self.gap(self.base(x)) # (b, 2048, 1, 1)
global_feat = global_feat.view(global_feat.shape[0], -1) # flatten to (bs, 2048)
feat = self.bottleneck(global_feat.unsqueeze(1)) ### To allow SyncBatchnorm
cls_score = self.classifier(feat.squeeze()) ### To adjust for Linear layer input
return cls_score, global_feat
def train():
model.train()
optimizer.zero_grad()
img, target = batch
img = img.to(device)
    target = target.to(device)
score, feat = model(img)
LOSS = loss_fn(score, feat, target)
LOSS.backward()
optimizer.step()
I shortened the code as much as possible to get the most important parts I think. The img size (input in train function) is [batch_size, 3, W, H] so standard for images.
EDIT: formatting and making the code a little clearer |
st179416 | I think I got it right now.
According to the docs (https://pytorch.org/docs/stable/nn.html#batchnorm1d), quoting:
Parameters
num_features – C from an expected input of size (N,C,L) or L from input of size (N,L)
So I figured out that since we need a 3D input and BatchNorm1d uses C as num_features for three-dimensional input, the singleton dimension should be the last one.
So instead of
feat = self.bottleneck(global_feat.unsqueeze(1)) # Which gives [bs, 1, 2048]
I just did:
feat = self.bottleneck(global_feat.unsqueeze(-1)) # Which gives [bs, 2048, 1]
No more errors, and training seems to run smoothly with SyncBatchNorm as well. Hope this helps someone.
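For reference, the same workaround as a small helper (an illustrative sketch, not part of the SyncBatchNorm API):
def apply_sync_bn1d(bn, x):
    # x: (N, L); the converted SyncBatchNorm expects (N, C, L) with C == num_features,
    # so add a trailing length-1 dimension and drop it again afterwards
    return bn(x.unsqueeze(-1)).squeeze(-1) |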
st179417 | That is not how batchnorm 1d works. Batchnorm 1d assumes an optional first batch dimension, then a channel dimension then an actual dimension. So the input is 2d without batch and 3d with batch.
As defined by the BatchNorm1d, the Input is expected to be of size (N, L) or (N, C, L) with batch dim first. What is optional is the additional channel dimension for BatchNorm1d from the documentation.
This is because you define your batchnorm as having 100 channels, but what you give as input has 2 .
(1, 2, 20) is due to the suggestion adding .unsqueeze(0) to your input but the resulting shape is not originally intended. By definition, whether the 100 is C or L in the previous example, BatchNorm1d produces the same results given (N, 100) or (N, 100, 1). (2, 100) is already a batch input with 2 1D features and matches the input accepted by BatchNorm1d. This has to be on the same page.
Now, get back to the issue with SyncBatchNorm conversion. Two questions:
Does a SyncBatchNorm-wrapped BatchNorm1d behave the same as before the conversion?
The original BatchNorm1d takes both (N, L) and (N, C, L) and produces the same results, as the following revised code segment shows. However, after being converted to SyncBatchNorm, it CHANGES the interface to ONLY accept input of size (N, C, L). This conversion is unlikely to work transparently with existing models that use BatchNorm1d on input of size (N, L).
import os
import copy
import torch
from torch import nn
with torch.no_grad():
inputNL = torch.randn(2, 20).cuda()
module = torch.nn.Sequential(
torch.nn.Linear(20, 100),
torch.nn.BatchNorm1d(100)
).cuda()
moduleC = copy.deepcopy(module).cuda()
moduleL = copy.deepcopy(module).cuda()
moduleC.eval()
moduleL.eval()
# XXX: BatchNorm1d accepts (N, C, L)
outputNL = moduleC[0](inputNL)
outputNCL = moduleC[1](outputNL.unsqueeze(-1))
print('BatchNorm1d NCL:', outputNCL.shape, round(outputNCL.mean().item(), 7))
# XXX: BatchNorm1d accepts (N, L) too
outputNL = moduleL[0](inputNL)
outputNL = moduleL[1](outputNL)
print('BatchNorm1d NL:', outputNL.shape, round(outputNL.mean().item(), 7))
os.environ['RANK'] = '0'
os.environ['WORLD_SIZE'] = '1'
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '25791'
torch.distributed.init_process_group(backend='nccl')
moduleC = copy.deepcopy(module)
moduleL = copy.deepcopy(module)
moduleC = nn.SyncBatchNorm.convert_sync_batchnorm(moduleC)
moduleL = nn.SyncBatchNorm.convert_sync_batchnorm(moduleL)
moduleC.eval()
moduleL.eval()
# XXX: converted BatchNorm1d ONLY accepts (N, C, L)
outputNL = moduleC[0](inputNL)
outputNCL = moduleC[1](outputNL.unsqueeze(-1))
print('SyncBatchNorm NCL:', outputNCL.shape, round(outputNCL.mean().item(), 7))
# FIXME: Converted BatchNorm1d never accepts (N, L)
outputNL = moduleL[0](inputNL)
outputNL = moduleL[1](outputNL)
print('SyncBatchNorm NL:', outputNL.shape, round(outputNL.mean().item(), 7))
Sample output:
BatchNorm1d NCL: torch.Size([2, 100, 1]) 0.0683341
BatchNorm1d NL: torch.Size([2, 100]) 0.0683341
SyncBatchNorm NCL: torch.Size([2, 100, 1]) 0.0683341
Traceback (most recent call last):
File "syncBN.py", line 45, in <module>
outputNL = moduleL[1](outputNL)
File "/home/ml/farleylai/miniconda3/envs/sinet37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/ml/farleylai/miniconda3/envs/sinet37/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 429, in forward
self._check_input_dim(input)
File "/home/ml/farleylai/miniconda3/envs/sinet37/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 417, in _check_input_dim
.format(input.dim()))
ValueError: expected at least 3D input (got 2D input)
If not, what is the justification, or what is a workaround that does not require changing the existing model to wrap?
One workaround is to reshape/unsqueeze(-1) the immediate input of size (N, L) to (N, C=L, L=1) before the converted BatchNorm1d, as demonstrated by @bonzogondo. Unfortunately, this may not be scalable if the uses of BatchNorm1d are all over the place in existing models. There are no reshape layers in PyTorch to automate the unsqueeze. An alternative could be to identify whether the BatchNorm to wrap is 1D or not, so that SyncBatchNorm._check_input_dim(…) checks the same criteria as BatchNorm1d, as sketched in the following. There may be some other exceptions, but the goal should be to wrap existing models transparently.
class SyncBatchNorm(nn.SyncBatchNorm):
def _check_input_dim(self, input):
if self._1d:
if input.dim() != 2 and input.dim() != 3:
raise ValueError('expected 2D or 3D input (got {}D input)'
.format(input.dim()))
elif input.dim() <= 2:
raise ValueError('expected at least 3D input (got {}D input)'
.format(input.dim()))
@classmethod
def convert_sync_batchnorm(cls, module, process_group=None):
...
if isinstance(module, nn.modules.batchnorm._BatchNorm):
module_output = SyncBatchNorm(module.num_features,
module.eps, module.momentum,
module.affine,
module.track_running_stats,
process_group)
module_output._1d = isinstance(module, nn.modules.batchnorm.BatchNorm1d)
... |
st179418 | I am training a model on a 4-GPU machine with torch.distributed, and I want ONLY the rank_0 process to be responsible for plotting, so I wrote code like this:
if is_distributed() and distributed.get_rank()!=0:
print('Only rank_0 will do plotting,this is rank_{}'.format(distributed.get_rank()))
return# in parallel context,single plot is enough
print('this is rank_0 and will do plotting')
plotAccuracyAndLoss()
.....
So, if the process rank is not 0, it should print out:
Only rank_0 will do plotting,this is rank_x
and I do get 3 printouts of this type.
If the process rank is 0, it should print out:
this is rank_0 and will do plotting
but I never got this type of printout; meanwhile, all processes hang and no exception is thrown.
watch -n0.1 nvidia-smi shows that before this code all GPUs have memory usage > 10341MB; when hitting these lines, the first GPU's memory usage drops to 2387MB while the others remain. More strangely, if I change the code to
if is_distributed() and distributed.get_rank()!=1:
which makes the second GPU responsible for plotting, then when execution reaches these lines the 1st, 3rd and 4th GPUs' memory usage stays > 10341MB, but the 2nd GPU's memory usage drops to 1073MB; training hangs and no exception is thrown.
With the same code in non-distributed training, the plotting works fine. Would you please tell me how to make plotting work? |
st179419 | After adding:
distributed.barrier()
before any rank_x-specific operation, everything goes fine. Silly me.
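For completeness, a sketch of what that looks like (plotAccuracyAndLoss is the function from the question):
import torch.distributed as dist

dist.barrier()               # all ranks reach this point together
if dist.get_rank() == 0:
    plotAccuracyAndLoss()    # rank-0-only work |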
st179420 | Hello, a similar question has been asked here: https://discuss.pytorch.org/t/matplotlib-doesnt-work-in-distributed-training/65724, but got no answer. The question can be summarized as follows:
On a 4-GPU machine, all GPUs are used for training, and there is some code like this:
if is_distributed() and distributed.get_rank()!=0:
print('Only rank_0 will do plotting,this is rank_{}'.format(distributed.get_rank()))
return# in parallel context,single plot is enough
print('this is rank_0 and it will do plotting')
plotAccuracyAndLoss()
When execution reaches this code, three lines of:
Only rank_0 will do plotting,this is rank_x
get printed out, but
print('this is rank_0 and will do plotting')
never gets printed; all 4 processes hang and NO exception is thrown.
watch -n0.1 nvidia-smi shows that
before this code, all GPUs have memory usage > 10341MB;
when hitting these lines, the first GPU's memory usage drops to 2387MB while the others remain.
Previously, I thought it was matplotlib that caused this hanging, but now I find that any rank_0-only operation (plotting/checkpointing…) will cause hanging; furthermore, any rank_x-only operation will cause hanging. So, how can this problem be solved? |
st179421 | After adding:
distributed.barrier()
before any rank_x-specific operation, everything goes fine. |
st179422 | I'm trying to run multiple threads in PyTorch with GPU enabled. In each thread, I am trying to create a CUDA tensor from a numpy array using the following code:
tensor = torch.from_numpy(array).cuda().float()
this triggers the following error report:
RuntimeError: CUDA error: initialization error
Any help would be greatly appreciated! |
st179423 | RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
This is the error that I get when I want to use multiple GPUs to evaluate different models using multiprocessing. |
st179424 | Try mp.set_start_method('spawn', force=True) in your main, like the following:
import torch.multiprocessing as mp

if __name__ == '__main__':
    mp.set_start_method('spawn', force=True)
    main() |
st179425 | I get a bug with torch.distributed.isend and irecv: I think there is a race condition there, but I am not sure how to debug this.
I'm using MPI, so the buffers have to stay untouched until the send/recv is over.
I see crazy "spikes" in the error, which I get only with a certain degree of parallelism.
I wonder if the PyTorch/Python garbage collector touches my buffers.
I saved them in a list, just in case.
Where can I check this in the code? Can I "guard" the buffers somehow?
I see that the PyTorch tests barely check Isend/Irecv, and I want to verify that the bug is not internal… |
st179426 | Hi @seliad
The code for MPI-based torch.distributed.isend is here: https://github.com/pytorch/pytorch/blob/cc16819028c325e2543d45752a875bd3c5e09b32/torch/lib/c10d/ProcessGroupMPI.cpp#L591 |
st179427 | I looked at the code and did not see anything suspicious.
However, the data corruption is still there.
I found that if I call torch.distributed.synchronize(device) explicitly before the Isends, the problem is mitigated and can be mistaken for "solved", but I don't like this solution at all.
I don't see any rational reason to do so; there is probably some bug there.
Reading the warnings here and here makes me believe that the MPI/distributed API probably does not do much of the work necessary for sharing tensors,
like handling reference counts, using mutexes for guarding, etc.
I use CUDA-aware OpenMPI; I thought it was supported. |
st179428 | Hi!
I have 2 of the same GPU and I want to achieve faster processing by utilizing both of them. However, I am doing this in a different way, imitating the idea of Massively Parallel Video Networks 1:
I have divided my model into two sub-models. I want to run them concurrently: one part processes the input video frame by frame, and the other processes the output of the first one. However, there is a catch. When the first sub-model returns an output, it passes it to the second sub-model and starts processing the next frame of the input. By utilizing both GPUs, the authors of the paper achieve faster processing. Any idea on how to do this? The figure (not included here) shows the idea, with the network unrolled over time.
The idea is not the same as nn.DataParallel(). I have tried torch.multiprocessing and DistributedDataParallel(), but I am having trouble understanding how to do this.
If anyone has an answer, I would be glad.
Thanks. |
st179429 | One approach…
Start 2 Python programs in separate interpreters to avoid the dreaded GIL.
Processor 1
Put the tensor on cuda:0, get the output.
Serialize and push the output to a shared Redis database.
Processor 2
The consumer picks up from the database and pushes to cuda:1.
The consumer runs the next step of the calculation.
If you need to send gradients for backprop you can store and reload them also.
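An alternative sketch of the same producer/consumer idea using torch.multiprocessing queues instead of Redis (SubModelA, SubModelB and frames are placeholders for the two halves of the network and the input stream):
import torch
import torch.multiprocessing as mp

def stage1(out_q, frames):
    model_a = SubModelA().to('cuda:0')
    with torch.no_grad():
        for frame in frames:
            out_q.put(model_a(frame.to('cuda:0')).cpu())  # hand off, move on to the next frame
    out_q.put(None)                                       # end-of-stream marker

def stage2(in_q):
    model_b = SubModelB().to('cuda:1')
    with torch.no_grad():
        while True:
            feat = in_q.get()
            if feat is None:
                break
            result = model_b(feat.to('cuda:1'))

if __name__ == '__main__':
    mp.set_start_method('spawn')
    q = mp.Queue(maxsize=4)   # small buffer so the two stages stay roughly in lockstep
    p1 = mp.Process(target=stage1, args=(q, frames))
    p2 = mp.Process(target=stage2, args=(q,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()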
Bit of fun with gradients autograd
Thought I’d share this code. Learned a few things about autograd doing this.
import torch
# unbroken gradient, backward goes all the way to x
x = torch.ones(2, 2, requires_grad=True)
y = 2 * x + 2
z = y * y * 3
out = z.mean()
out.backward()
print(x.grad)
baseline_x = x.grad
# broken gradient, ends at _y
x = torch.ones(2, 2, requires_grad=True)
y = 2 * x + 2
_y = torch.tensor(y.detach(), requires_grad=True)
z = _y * _y * 3
out = z.mean()
out.backward()
print(x.grad)
print(_y.grad)
# we can …
That's one way… not easy, though. I spent easily a month just trying to distribute calculations over multiple processors.
If you can pull it off… then it’s an awesome skill.
Also, there is the Ray project: https://github.com/ray-project/ray
I tried using it. It had great promise, but ended up being a bit too new at the time. It might be a bit more mature now. |
st179430 | Thanks for your reply and sorry for my late reply. I will look into these methods. I am only doing this for the test phase, so I will only have to transfer one tensor per input frame to processor 2.
If anybody else has some further suggestions, I will be happy to hear them!
Thanks. |
st179431 | Will this tutorial be helpful? https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html |
st179432 | Thank you. I have seen this tutorial previously; however, the model-parallel part is not what I want. For the pipelining part, I am having trouble understanding how it gets executed. If you can further clarify that part for me, that would be great. |
st179433 | Let's say I have 2 models, A() and B(), and 2 GPUs. The outputs of A will be fed to B as inputs.
Because the 2 models are too big to fit on the same GPU, I have to manually instantiate A on GPU 0 and B on GPU 1. Hence, I have to manually change the device of A's output to feed it to B.
Sometimes my batch size is too large to run A() on GPU 0, but if I could also utilize GPU 1, I could keep that batch size without reducing it.
The question is: can my models be placed on different GPUs but still run in data-parallel mode?
Update:
I saw a post mentioning this:
model1 = nn.DataParallel(model1).cuda(device=0)
model1_feat = model1(input_image)
model2 = nn.DataParallel(model2).cuda(device=1)
model2_feat = model2(model1_feat, input_feat)
My question is: does that mean model1 is replicated on both GPUs? |
st179434 | Hung_Nguyen:
My question is does that mean model1 is replicated on both gpu?
Yes, DataParallel will replicate the model, scatter inputs, and gather outputs in every iteration. So, for the above code snippet, every time you run model1_feat = model1(input_image), model1 is replicated to all devices in the forward pass. |
st179435 | For example, I have 4 nodes and every node has two GPUs. I want to divide one model into four parts; every node runs part of the model and uses data parallelism on its two GPUs.
I use hooks to get the gradients and use dist.send to send them to the other node; this works for model parallelism.
On node 1:
dist.init_process_group(backend="gloo", init_method='tcp://172.22.4.11:28456', rank=0, world_size=2)
# outputs is the result of node 1
dist.send(tensor=outputs.to('cpu'), dst=1, tag=0)
# rec is the gradients sent from node 2
dist.recv(rec, src=1)
outputs.backward(rec.cuda())
On node 2:
dist.init_process_group(backend="gloo", init_method='tcp://172.22.4.11:28456', rank=1, world_size=2)
# rec is the result of node 1
dist.recv(tensor=rec, src=0, tag=0)
outputs2 = net2(rec)
# feat[0] is the gradients of node 2
dist.send(tensor=feat[0].to('cpu'), dst=0)
But when I try to combine data parallelism with model parallelism, it fails. I chose torch.nn.parallel.DistributedDataParallel to achieve data parallelism, but node 2 can't receive the gradients from node 1.
Question:
So how do I combine data parallelism with model parallelism across multiple nodes? |
st179436 | It might be easier to run model parallel on multiple GPUs in the same machine and distributed data parallel across machines. Check out this section for more details.
For your above use case, you will need to create multiple process groups. Given the above configuration, 4 nodes, and 2 GPUs per node, I assume you will have 8 processes, one process per GPU. Then you can create:
Two process groups of world size 4, which will be responsible for send/recv outputs/gradients across machines.
One process group of world size 2 on EACH machine, which will do the distributed data parallel on the machine.
The reason you need the above setting is because DistributedDataParallel would expect all processes in the same group are training the same model in a synchronized fashion. It won’t work if you only use 2 processes in the same group of size 8.
See the new_group 6 API.
BTW, the torch.distributed.rpc 16 API might make send/recv outputs/grads easier for you, and it also supports distributed autograd. |
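Not from the original reply, just a rough sketch of that group layout under the stated setup (8 processes, ranks 0-7, two per node; the rank/local_rank/node_id bookkeeping and MyShard are my own placeholders):
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
# one process per GPU; rank (0..7), local_rank (0 or 1) and node_id (0..3) are assumed known
dist.init_process_group(backend="gloo", init_method="tcp://172.22.4.11:28456",
                        rank=rank, world_size=8)
# two cross-machine groups of size 4, used for send/recv of activations and gradients
cross_groups = [dist.new_group(ranks=[0, 2, 4, 6]), dist.new_group(ranks=[1, 3, 5, 7])]
# one group of size 2 per machine, used for DDP on that machine's model shard;
# note every process must create every group, in the same order
node_groups = [dist.new_group(ranks=[2 * n, 2 * n + 1]) for n in range(4)]
shard = MyShard().cuda(local_rank)  # this node's part of the model
ddp_shard = DDP(shard, device_ids=[local_rank], process_group=node_groups[node_id])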
st179437 | Hi all,
I have been trying to figure out how to train a population of models on multiple nodes (which do not have GPUs, but that’s not the main point; I’m happy with training on CPUs). Ideally, I would like a single process per model running on a separate CPU. I can request hundreds or thousands of CPUs, and each model is fully contained, meaning that I don’t want to share any parameters from one model across nodes; rather, I want each model to train on its own CPU.
I have tried using a worker pool from torch.multiprocessing and passing models to the training function. I train each model for one epoch, then I perform some processing in the main process and then I map them again to the worker pool to train them for another epoch, and so on. That works fine if I run the models on a single machine, but it doesn’t scale up to a multi-node scenario because torch.multiprocessing is not aware of the additional nodes (I requested 256 CPUs on the cluster, which translates to 8 nodes with 16 CPUs each, but 7 of those remained idle).
As far as I can tell, all examples I found (for example, using torch.distributed here 24) assume that you have a single large model and you want to spread the training of one model across multiple workers. This is not my case - my models are small and I’d like to train them in parallel but independently of each other. They are, however, being trained on the same task using the same data, in case that’s relevant.
Any help would be appreciated! Apologies if I’m missing something obvious. |
st179438 | IIUC, DistributedDataParallel does not fit in this use case because you have a population of independent models to train on the same set of data instead one big model on different splits of input data. It looks like the experimental torch.distributed.rpc 31 might be helpful here, it would at least help you take care of the communication. But you would still need to write code to dispatch models to the workers in the pool. |
st179439 | Hi, recently I tried to use the torch.distributed package to train my model. In my case, I use an encoder-decoder-like architecture. To speed up my code, I pre-computed features of the input data, but when I train only the decoder by calling model.module.decoder(input), it gives me an error like the following
TypeError: _queue_reduction(): incompatible function arguments. The following argument types are supported:
1. (process_group: torch.distributed.ProcessGroup, grads_batch:List[List[at::Tensor]], devices: List[int]) -> Tuple[torch.distributed.Work, at::Tensor]
I wonder if someone can give me some suggestions? Does the torch.distributed package not work normally if part of the model doesn't participate in the computation?
st179440 | In v1.1, we added a new find_unused_parameters 16 arg to DistributedDataParallel. If some of the model params are not involved in the forward pass, you can set find_unused_parameters to True. |
st179441 | Hello,
I already have a pre-trained model. I only use it to extract features (forward pass only).
Currently, I load it onto a single GPU and run it with torch.no_grad().
My question is: I have 2 GPUs on my computer, so how can I reduce the execution time with 2 GPUs?
P/s: The input of my model has a size of (8, 3, 256, 128).
Thanks. |
st179442 | Thanks for replaying!!
However, when I read the document of DataParallel and DistributedDataParallel 3, I think it would not help me to reduce the execution time because I do not need the backward pass.
> assert any((p.requires_grad for p in module.parameters())), (
> "DistributedDataParallel is not needed when a module "
> "doesn't have any parameter that requires a gradient."
> )
I will try with it and tell u the result. |
st179443 | How to use DataParallel:
model = DataParallel(model, dim=your batch dim in input, device_ids=[main_id, other_ids …], output_device=main_id)
Note that main_id (the GPU that stores the original model parameters) should be the first in the list of device_ids.
You can use DataParallel since it's easier to set up and test, but remember to:
1. Set the batch dimension of DataParallel. The default is dim=0, but sometimes you might want to use another dimension. (e.g. my input size is (Time, Batch, Dim_Data), and the model requires the full time series. In this scenario I apply dim=1 instead of dim=0, because if I chose dim=0, the time series would be split into multiple fragments.)
2. DataParallel returns a wrapped model, so run the forward pass through the wrapped model instead of the original one, and remember to...
3. ...handle the state_dict of the wrapped model before saving it to a file. DataParallel prepends the prefix "module." to each key of the original state_dict.keys(), so you have to remove the prefix before saving the state_dict (see the sketch after this list).
4. Set the GPU with larger memory as output_device, and also put the model parameters and your input on this GPU. output_device needs to store both data and model parameters, so larger GPU memory is favorable. |
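Not part of the original reply: a minimal sketch of the prefix handling from point 3 (the checkpoint file name and the model variable are arbitrary; equivalently, you can just save wrapped.module.state_dict() directly):
wrapped = torch.nn.DataParallel(model, dim=0, device_ids=[0, 1], output_device=0)
# strip the "module." prefix that DataParallel adds to every key
clean_state = {k.replace("module.", "", 1): v for k, v in wrapped.state_dict().items()}
torch.save(clean_state, "checkpoint.pth")
# later, loading on a single GPU works without DataParallel:
model.load_state_dict(torch.load("checkpoint.pth"))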
st179444 | Here is my code with DataParallel
import time
import torchvision.models as models
import torch
import torch.nn as nn
model = models.resnet50(num_classes=1000).to('cuda:0')
model = nn.DataParallel(model, device_ids=[0,1], output_device=0)
#####
batch_size = 8
image_w = 128
image_h = 128
#####
#warm up GPU
input = torch.randn(batch_size,3,image_w, image_h).to('cuda:0')
model.eval()
listTime = []
for i in range(20):
with torch.no_grad():
startTime = time.time()
input = torch.randn(batch_size,3,image_w, image_h).to('cuda:0')
out = model(input)
esl = time.time() - startTime
listTime.append(esl)
print("Total time of loop {} :: {}".format(i, esl))
meanTime = torch.mean(torch.tensor(listTime[9:]))
print(meanTime)
I test with resnet50(). The size of the input is (8, 3, 128, 128).
I run the forward() pass for 20 steps and use the last 10 steps to find the mean execution time.
Without DataParallel, meanTime = 0.0064s (run with a single GPU),
and with DataParallel, meanTime = 0.0396s (run with 2 GPUs).
P/s: I have 2 GPUs as below image.
Do you have any solution for my problem ?
Thanks
[screenshot showing the two GPUs]
st179445 | @dat_pham_thanh Can you benchmark using at least 1000 iterations and also track throughput instead (images/s)? Mean time can be misleading since a single outlier could change the mean quite a bit. |
st179446 | @dat_pham_thanh
DataParallel would replicate the model, scatter the input, and gather outputs in every iteration. So, if the input size is too small, the overhead of replicating the model might overshadow the benefits of parallelizing the computation. Besides what @pritamdamania87 suggested above, could you please also try with large batch size? |
st179447 | Thanks for your reply!!
I think you are correct. I cannot increase the batch size because it is fixed (the batch size always equals 8) for each iteration.
I used the ONNX model to solve my problem!!
st179448 | Hi, I am a newbie to PyTorch distributed.
My model is only a small component of a much more complicated problem.
I noticed that if I train it on a single GPU, it takes at most one quarter of the GPU memory and utilization.
So I wonder if it is possible to place four replicas of the model on the same GPU so that hopefully I can get a 4x speedup.
I read the documentation and there are many examples of multi-GPU usage, but none of them uses a fractional GPU like this. Does anyone have ideas? Thanks.
st179449 | It really depends. Even if 4 replicas of your model can fit into the memory of one GPU, they still need to compete for the same set of streaming multiprocessors and other shared resources on that GPU. You can try if using multiple streams would help, e.g.:
s0 = torch.cuda.Stream()
s1 = torch.cuda.Stream()
with torch.cuda.stream(s0):
output0 = model_replica0(input0)
with torch.cuda.stream(s1):
output1 = model_replica1(input1)
s0.synchronize()
s1.synchronize() |
st179450 | Hi Shen Li, "DistributedDataParallel" automatically averages the gradients when calling "loss.backward()",
but I didn't find the corresponding code in the PyTorch source. Do you know where it is?
st179451 | Hey @meilu_zhu
Sorry about the delay. The grad averaging algorithm is implemented in the reducer 2. Each DistributedDataParallel creates its reducer instance in the constructor 1. More specifically, allreduce is invoked here 1. |
st179452 | Hi!
I have four machines, and each machine has one GPU device. I want to train my model use four GPU devices but failed.
Below are the information of my machines.(node name with ip)
n100: 172.22.99.10
n101: 172.22.99.11
n102: 172.22.99.12
n104: 172.22.99.14
In my program, I use the Gloo backend. If I run the program with 3 nodes: n100, n101, n102, the program works well. But when I use all the nodes, I get the following error:
fanxp@n100:~/vscode/pytorch_test$ ./parallel_deepAR.py --rank 0 --world-size 4
Traceback (most recent call last):
File "./parallel_deepAR.py", line 472, in <module>
init_process(args.rank, args.world_size, run, 'gloo', args.ip, args.port)
File "./parallel_deepAR.py", line 313, in init_process
init_method='tcp://{}:{}'.format(ip, port), rank=rank, world_size=size)
File "/simm/home/fanxp/.local/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 410, in init_process_group
timeout=timeout)
File "/simm/home/fanxp/.local/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 478, in _new_process_group_helper
timeout=timeout)
RuntimeError: [../third_party/gloo/gloo/transport/tcp/pair.cc:207] address family mismatch
I think the node n104 may have a different address family, which causes the error. But I don't know how to solve this.
Some additional information:
the ifconfig output of the network interface on each node
fanxp@n100:~/vscode/pytorch_test$ ifconfig eth5
eth5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.22.99.10 netmask 255.255.255.0 broadcast 172.22.99.255
inet6 fe80::1602:ecff:fe69:ef5d prefixlen 64 scopeid 0x20<link>
inet6 2400:dd02:100c:3199:1602:ecff:fe69:ef5d prefixlen 64 scopeid 0x0<global>
ether 14:02:ec:69:ef:5d txqueuelen 1000 (Ethernet)
RX packets 472256109 bytes 701421415319 (701.4 GB)
RX errors 0 dropped 5470 overruns 0 frame 0
TX packets 553043129 bytes 818712088574 (818.7 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
fanxp@n101:~/vscode/pytorch_test$ ifconfig eth5
eth5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.22.99.11 netmask 255.255.255.0 broadcast 172.22.99.255
inet6 fe80::211:aff:fe6c:2345 prefixlen 64 scopeid 0x20<link>
inet6 2400:dd02:100c:3199:211:aff:fe6c:2345 prefixlen 64 scopeid 0x0<global>
ether 00:11:0a:6c:23:45 txqueuelen 1000 (Ethernet)
RX packets 373027705 bytes 535914116118 (535.9 GB)
RX errors 0 dropped 1720 overruns 0 frame 0
TX packets 87419537 bytes 80820382770 (80.8 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
fanxp@n102:~/vscode/pytorch_test$ ifconfig eth5
eth5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.22.99.12 netmask 255.255.255.0 broadcast 172.22.99.255
inet6 fe80::211:aff:fe6c:2325 prefixlen 64 scopeid 0x20<link>
inet6 2400:dd02:100c:3199:211:aff:fe6c:2325 prefixlen 64 scopeid 0x0<global>
ether 00:11:0a:6c:23:25 txqueuelen 1000 (Ethernet)
RX packets 9676903 bytes 10243508657 (10.2 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8458287 bytes 7559359606 (7.5 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
fanxp@n104:~/vscode/pytorch_test$ ifconfig ens1f1
ens1f1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.22.99.14 netmask 255.255.255.0 broadcast 172.22.99.255
inet6 2400:dd02:100c:3199:1602:ecff:fe72:8ae8 prefixlen 64 scopeid 0x0<global>
inet6 fe80::1602:ecff:fe72:8ae8 prefixlen 64 scopeid 0x20<link>
ether 14:02:ec:72:8a:e8 txqueuelen 1000 (Ethernet)
RX packets 6220778 bytes 5698014724 (5.6 GB)
RX errors 0 dropped 1166 overruns 0 frame 0
TX packets 12621081 bytes 14816572590 (14.8 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
all the network interfaces use InfiniBand
the source code is too complex to share in full; I think the process-initialization code below may be helpful
def init_process(rank, size, fn, backend='gloo', ip=None, port=None):
""" Initialize the distributed environment. """
# os.environ['MASTER_ADDR'] = '172.22.99.10'
# os.environ['MASTER_PORT'] = '29500'
# dist.init_process_group(backend, rank=rank, world_size=size)
dist.init_process_group(
backend=backend,
init_method='tcp://{}:{}'.format(ip, port), rank=rank, world_size=size)
fn(rank, size)
The master used in Gloo backend
address: 172.22.99.10 port: 20000
pytorch version
PyTorch version: 1.3.0a0+ee77ccb
Is debug build: No
CUDA used to build PyTorch: 10.1.243
OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
CMake version: version 3.10.2
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: Tesla K40c
Nvidia driver version: 440.33.01
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.2
Versions of relevant libraries:
[pip] numpy==1.14.5
[conda] Could not collect |
st179453 | Can you try specifying GLOO_SOCKET_IFNAME to select the appropriate interface on each node as described here: https://pytorch.org/docs/stable/distributed.html#choosing-the-network-interface-to-use 51? |
st179454 | Hi,
I am trying to implement the main idea in Massively Parallel Video Networks 1. In the paper, the authors implement model parallelism for training video networks by feeding the outputs of each layer to the next layer as usual, but at the next time step. This way, they can process the layers independently on different GPUs. The following figure shows the most basic case:
This photo shows the network unrolled over time. At every time instant, for this case, we have 4 different layers and each of these 4 layers can be processed independently on separate GPUs.
To do this I want to use, hopefully, something simple. However, I am having trouble understanding whether I can use DDP for this.
As a further question: In the paper, they also implement the same idea on a CPU. Is there some function in PyTorch which can achieve this task.
Note: We cannot use nn.DataParallel() because the training setting is an online training and we want to process frames one by one as they come.
Thanks in advance. |
st179455 | IIUC, what we’re looking for here is pipeline parallelism. PyTorch currently doesn’t have native support for pipeline parallelism. There are a few projects that have built something similar on top of PyTorch: https://github.com/kakaobrain/torchgpipe 2 and https://github.com/msr-fiddle/pipedream 5. You could also use the Distributed RPC Framework 1 to build something like this. |
st179456 | Hey folks,
I am new to pytorch and I am trying to parallelize my network. Using nn.DataParallel seems to work as expected for the nn.modules living inside my class, however, it looks like the nn.ParameterLists that I’m defining as class members are listed as sitting in (GPU 0) only, when I print out the module’s parameters:
[screenshot of the printed parameters, showing every ParameterList entry on GPU 0]
Is this expected behaviour and why are they not listed on both of the GPUs I’m using? Could somebody please explain what is going on here?
torch.cuda.device_count returns 2 as expected.
My code looks something like the following:
class Network(nn.Module):
def __init__(self):
...
self.templates = nn.ModuleList([nn.ParameterList([nn.Parameter(template_init, requires_grad=True) for i in range(n)]) for n in self.num_t])
...
self.Network = nn.DataParallel(self.Network)
self.Network.to(self.device) |
st179457 | Hi @ortho-stice
This is expected behavior. Here is the source code of DataParallel: https://github.com/pytorch/pytorch/blob/46539eee0363e25ce5eb408c85cefd808cd6f878/torch/nn/parallel/data_parallel.py#L148-L153 8
What happens is that, in every forward pass, DataParallel will:
1. scatter the input to all GPUs
2. replicate the model to all GPUs
3. launch parallel_apply so that every GPU runs its own forward pass on its input split in parallel
4. gather all outputs to the output device
So the model replication only occurs in the forward pass, and hence you won’t see those model replicas outside the forward function.
BTW, we do recommend using DistributedDataParallel which only replicates the model once in constructor instead of in every forward invocation. |
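For reference, those four steps roughly map onto these primitives (just a sketch of what DataParallel does for you internally; module and the input tensor are placeholders):
from torch.nn.parallel import scatter, replicate, parallel_apply, gather

device_ids = [0, 1]
inputs = scatter(input, device_ids)           # 1. split the batch across the GPUs
replicas = replicate(module, device_ids)      # 2. copy the model to every GPU
outputs = parallel_apply(replicas, inputs)    # 3. run the per-GPU forward passes in parallel
result = gather(outputs, target_device=0)     # 4. collect the outputs on the output device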
st179458 | How does gradient averaging work in DistributedDataParallel training? I am particularly interested in what happens when the batches have masked or ignored data, e.g. with semantic segmentation.
For example: let’s say I have 4 GPUs and I am training a semantic segmentation network with a dataset with an ignore class. As I understand it, in the DataParallel setting, the outputs are aggregated on GPU0, the loss computed, and then the gradient is backpropagated back through each GPU’s model. In the DistributedDataParallel case, L0, L1, L2, L3 are each computed for each GPU’s share of the batch, the losses are backpropagated back through their respective GPU’s model, and the gradients along the way are averaged.
Using DataParallel, the presence of an ignore class makes no difference. Even if one GPU’s mini-batch has a lopsided amount of ignore pixels, the loss is computed as the weighted average. However, what happens when you have a lopsided distribution of ignore pixels on one GPU using DistributedDataParallel? There does not seem to be any mechanism for weighting the average of the gradients. Yet in this case, L0, L1, L2, and L3 ought to have their contributions weighted by the ratio of valid pixels when averaging gradients during backpropagation.
Is there some way to handle this ignore class imbalance during distributed training? |
st179459 | How does gradient averaging work in DistributedDataParallel training?
Every DDP instance will have its own copy of the local model, and DDP will setup post autograd hooks on every parameter (i.e., DDP hooks).
In every forward pass, DDP feeds the input data to its own local model, and returns the local output to the application.
The application uses the local output to compute the local loss, and calls backward on the local loss, which kicks off the local autograd engine to compute gradient for all parameters. When one local gradient becomes ready, that will trigger the corresponding DDP hook. The DDP hook will run allreduce on the given gradients, and write the averaged grads back to the parameter.grad field.
When backward is done, parameter.grad should all be globally averaged gradients. Optimizer can then consume that grad to update parameters.
Is there some way to handle this ignore class imbalance during distributed training?
DDP simply averages (sum and then divide by the number of DDP world size) all local gradients. So, it should work as long as the ignored data do not contribute to the local gradients. |
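If you do want a globally weighted average (weighted by each rank's number of valid pixels), one workaround, my own sketch rather than anything built into DDP, is to use a summed loss and normalize by the global count of valid pixels; ignore_index=255 is just an assumed label:
import torch
import torch.distributed as dist

criterion = torch.nn.CrossEntropyLoss(ignore_index=255, reduction="sum")
loss_sum = criterion(logits, target)                    # sum over this rank's valid pixels only
n_valid_global = (target != 255).sum().float()
dist.all_reduce(n_valid_global)                         # total valid pixels across all ranks
# DDP averages grads over world_size, so pre-multiply to end up with sum(grads) / n_valid_global
loss = loss_sum * dist.get_world_size() / n_valid_global
loss.backward()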
st179460 | Could you please post a short code example showing how to use it?
I have a machine with two GPUs, which means I want to use a single process with multiple GPUs.
I tried to use SyncBatchNorm, but it sadly failed like this...
It raises a "ValueError: SyncBatchNorm is only supported for DDP with single GPU per process"!
But the docs of DDP 32 say single-process multi-GPU is supported.
import torch
import torch.nn as nn
class net(nn.Module):
def __init__(self):
super(net, self).__init__()
self.convBlock = nn.Sequential(
nn.Conv2d(3, 128, 3, 1, 1),
nn.SyncBatchNorm(128),
nn.ReLU(),
nn.Conv2d(128, 512, 3, 1, 1),
nn.SyncBatchNorm(512),
nn.ReLU(),
nn.Conv2d(512, 1, 3, 1, 1),
nn.SyncBatchNorm(1),
nn.ReLU()
)
def forward(self, x):
x = self.convBlock(x)
return x
torch.distributed.init_process_group(backend='nccl', init_method='tcp://127.0.0.1:12345', world_size=1, rank=0)
model = net().cuda()
model = nn.parallel.DistributedDataParallel(model, device_ids=[0, 1], output_device=0)
model = model
optimizer = torch.optim.Adam(model.parameters())
mseloss = torch.nn.L1Loss()
for i in range(1000):
x = torch.rand(10, 3, 224, 224)
y = torch.rand(10, 1, 224, 224)
x = x.cuda()
y = y.cuda()
out = model(x)
optimizer.zero_grad()
loss = mseloss(out, y)
print(i, loss)
loss.backward()
optimizer.step() |
st179461 | This is expected.
While DDP supports using multiple GPUs from a single process, nn.SyncBatchNorm does not and requires you to use a single GPU per process. Also see the docs for torch.nn.SyncBatchNorm 401:
Currently SyncBatchNorm only supports DistributedDataParallel with single GPU per process. Use torch.nn.SyncBatchNorm.convert_sync_batchnorm() to convert BatchNorm layer to SyncBatchNorm before wrapping Network with DDP. |
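A rough sketch of the one-process-per-GPU setup this expects (e.g. launched with torch.distributed.launch so that env:// initialization works; here the model is assumed to be built with regular nn.BatchNorm2d layers and converted afterwards, and the single-machine rank handling is simplified):
import torch
import torch.distributed as dist
import torch.nn as nn

dist.init_process_group(backend="nccl", init_method="env://")
local_rank = dist.get_rank()                    # one machine, one process per GPU
torch.cuda.set_device(local_rank)

model = net()                                   # same model as above, but with nn.BatchNorm2d layers
model = nn.SyncBatchNorm.convert_sync_batchnorm(model).cuda(local_rank)
model = nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])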
st179462 | I think this is worth fixing. Distributed data parallel uses a lot of CPU threads. This is okay for expensive servers used by industry, but a lot of us have a limited number of CPU cores at our disposal. |
st179463 | Hi, I'm using a 4-GPU machine with torch.distributed for training, and I want to do inference with the trained model on another machine with only one GPU. But when I run the code like this:
python -m torch.distributed.launch --nproc_per_node=1 visualizer_distributed.py
I got an error
Traceback (most recent call last):
File "visualizer_distributed.py", line 21, in <module>
model = torch.nn.parallel.DistributedDataParallel(model)
File "H:\anaconda3\lib\site-packages\torch\nn\parallel\distributed.py", line 259, in __init__
self.process_group = _get_default_group()
NameError: name '_get_default_group' is not defined
Traceback (most recent call last):
File "H:\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "H:\anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "H:\anaconda3\lib\site-packages\torch\distributed\launch.py", line 235, in <module>
main()
File "H:\anaconda3\lib\site-packages\torch\distributed\launch.py", line 231, in main
cmd=process.args)
subprocess.CalledProcessError: Command '['H:\\anaconda3\\python.exe', '-u', 'visualizer_distributed.py', '--local_rank=0']' returned non-zero exit status 1.
Here is the snippet of code
model = ...
model = torch.nn.parallel.DistributedDataParallel(model)
model.load_state_dict(torch.load(model_params))
model.cuda()
When I set the device_ids like
model = ...
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=0)
torch.cuda.set_device(0)
model.load_state_dict(torch.load(model_params))
model.cuda()
I got:
Traceback (most recent call last):
File "visualizer_distributed.py", line 21, in <module>
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=0)
File "H:\anaconda3\lib\site-packages\torch\nn\parallel\distributed.py", line 259, in __init__
self.process_group = _get_default_group()
NameError: name '_get_default_group' is not defined
Traceback (most recent call last):
File "H:\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "H:\anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "H:\anaconda3\lib\site-packages\torch\distributed\launch.py", line 235, in <module>
main()
File "H:\anaconda3\lib\site-packages\torch\distributed\launch.py", line 231, in main
cmd=process.args)
subprocess.CalledProcessError: Command '['H:\\anaconda3\\python.exe', '-u', 'visualizer_distributed.py', '--local_rank=0']' returned non-zero exit status 1.
Does anyone know why the problem occurs and how to use DistributedDataParallel for inference on a single-GPU machine?
Thanks in advance! |
st179464 | Based on the error message it looks like you are using a Windows machine.
I’m not familiar with Windows, but I thought it doesn’t support distributed applications.
Were you also using Windows on the first machine? |
st179465 | In that case I think you cannot use a distributed setup.
However, since you have a single GPU on your Windows system, you won’t get any benefits anyway. |
st179466 | So what should I do if I want to use the distributed model for inference in the single-GPU windows?
I was using nn.DataParallel in these two machines, I must call nn.DataParallel(model) before loading the model. I’m now just trying to do the same thing with DistributedDataParallel but got the problem. |
st179467 | I think the easiest way would be to store the state_dict without the nn.DataParallel .module attribute (I assume you are stuck there) as described here 76. |