st178768 | I have resolved the issues. It’s related to 1686 43
from collections import OrderedDict

import torch

state_dict = torch.load(weight_path)
new_state_dict = OrderedDict()
for k, v in state_dict.items():
    name = k[7:]  # remove the 'module.' prefix added by DataParallel/DistributedDataParallel
    new_state_dict[name] = v
m.load_state_dict(new_state_dict) |
st178769 | Hi,
I’m trying to use distributed autograd. But I’m running into this error.
I have initialized the process group with dist.init_process_group()
with dist_autograd.context() as context_id:
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/distributed/autograd/__init__.py", line 33, in __enter__
self.autograd_context = _new_context()
RuntimeError: Need to initialize distributed autograd using torch.distributed.autograd.init()
But I see no init method in torch.distributed.autograd
>>> from torch.distributed import autograd
>>> autograd
<module 'torch.distributed.autograd' from '/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/distributed/autograd/__init__.py'>
>>> autograd.
autograd.DistAutogradContext( autograd.division autograd.sys
autograd.absolute_import autograd.get_gradients( autograd.torch
autograd.backward( autograd.is_available( autograd.unicode_literals
autograd.context(                autograd.print_function |
st178770 | Solved by mrshenli in post #2
Hey @rahul003,
Distributed autograd is using RPC, so you need to call init_rpc instead of init_process_group. See the toy example below:
import torch.multiprocessing as mp
import torch.distributed.rpc as rpc
import torch.distributed.autograd as dist_autograd
import os
import torch
from torch impor… |
st178771 | Hey @rahul003,
Distributed autograd is using RPC, so you need to call init_rpc instead of init_process_group. See the toy example below:
import torch.multiprocessing as mp
import torch.distributed.rpc as rpc
import torch.distributed.autograd as dist_autograd
import os
import torch
from torch import optim
from torch.distributed.optim import DistributedOptimizer
def train():
    with dist_autograd.context() as context_id:
        t1 = torch.rand((3, 3), requires_grad=True)
        t2 = torch.rand((3, 3), requires_grad=True)
        loss = t1 + t2
        dist_autograd.backward(context_id, [loss.sum()])
        grads = dist_autograd.get_gradients(context_id)
        print(grads[t1])
        print(grads[t2])

def run_worker(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '29500'
    if rank == 1:
        rpc.init_rpc("worker0", rank=rank, world_size=world_size)
        train()
    else:
        rpc.init_rpc("worker1", rank=rank, world_size=world_size)
        pass
    # block until all rpcs finish
    rpc.shutdown()

if __name__=="__main__":
    world_size = 2
    mp.spawn(run_worker, args=(world_size, ), nprocs=world_size, join=True)
For more complete tutorials, please see the following links:
RL example and RNN example: https://pytorch.org/tutorials/intermediate/rpc_tutorial.html
Parameter server example: https://pytorch.org/tutorials/intermediate/rpc_param_server_tutorial.html
Distributed pipeline example: https://github.com/pytorch/examples/tree/master/distributed/rpc/pipeline |
st178772 | Hi all,
I am trying to run 2 processes. Each process calls RPC on the other process, and the return value is obtained through a future variable. However, the error occurs at "final += fut.wait()".
import os
from torch.multiprocessing import Process
import torch.distributed.rpc as rpc
def my_sum(arr):
    res = 0
    for i in arr:
        res += i
    return res

def init_process(rank, size, fn, backend='gloo'):
    """ Initialize the distributed environment. """
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    my_name = "worker" + str(rank)
    rpc.init_rpc(my_name, rank=rank, world_size=size) # initial_rpc
    array_rpc = list(range(0, size))
    arr_send = []
    for i in range(0, size):
        temp = []
        arr_send.append(temp)
    arr_send[0].append(1)
    arr_send[0].append(2)
    arr_send[1].append(3)
    arr_send[1].append(4)
    futs = []
    for i in array_rpc:
        my_target = "worker" + str(i)
        futs.append(rpc.rpc_async(my_target, my_sum, args=(arr_send[i])))
    final = 0
    for fut in futs:
        final += fut.wait()
    print("results = :", final, " in rank ", rank)
    rpc.api._wait_all_workers()
    rpc.shutdown()

if __name__ == "__main__":
    size = 2
    processes = []
    for rank in range(size):
        p = Process(target=init_process, args=(rank, size, my_sum))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
Is there any problem with the future variable? I tried using float(fut), but there is still a problem.
The error is as below:
Process Process-1:
Process Process-2:
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/mnt/c/python_project/Pytorch/test_lib.py", line 44, in init_process
final += fut.wait()
File "/home/cnphuong/.local/lib/python3.6/site-packages/torch/distributed/rpc/internal.py", line 163, in _handle_exception
raise result.exception_type(result.msg)
TypeError: TypeError('my_sum() takes 1 positional argument but 2 were given',)
Traceback (most recent call last):
File "/home/cnphuong/.local/lib/python3.6/site-packages/torch/distributed/rpc/internal.py", line 153, in _run_function
result = python_udf.func(*python_udf.args, **python_udf.kwargs)
TypeError: my_sum() takes 1 positional argument but 2 were given
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/mnt/c/python_project/Pytorch/test_lib.py", line 44, in init_process
final += fut.wait()
File "/home/cnphuong/.local/lib/python3.6/site-packages/torch/distributed/rpc/internal.py", line 163, in _handle_exception
raise result.exception_type(result.msg)
TypeError: TypeError('my_sum() takes 1 positional argument but 2 were given',)
Traceback (most recent call last):
File "/home/cnphuong/.local/lib/python3.6/site-packages/torch/distributed/rpc/internal.py", line 153, in _run_function
result = python_udf.func(*python_udf.args, **python_udf.kwargs)
TypeError: my_sum() takes 1 positional argument but 2 were given |
st178773 | Solved by mrshenli in post #5
Could you explain why it is?
Sure. If you don’t add a comma, Python would not recognize it as a tuple. See the following code:
Python 3.8.2 (default, Mar 26 2020, 15:53:00)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> (5) == 5
… |
st178774 | ph0123:
TypeError: my_sum() takes 1 positional argument but 2 were given
This error message suggests the args argument was incorrect in the rpc.rpc_async call. It is missing a comma after arr_send[i]; changing it to the following should work:
futs.append(rpc.rpc_async(my_target, my_sum, args=(arr_send[i], ))) |
st178775 | Wow.
Could you explain why it is?
My function has only one parameter. Why do we need to add "," after the parameter?
Thanks, |
st178776 | Could you explain why it is?
Sure. If you don’t add a comma, Python would not recognize it as a tuple. See the following code:
Python 3.8.2 (default, Mar 26 2020, 15:53:00)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> (5) == 5
True
>>> (5, ) == 5
False
>>> print( (5) )
5
>>> print( (5,) )
(5,)
>>> type((5))
<class 'int'>
>>> type((5,))
<class 'tuple'> |
st178777 | I hit an error when using DDP for multi-node (two nodes, two GPUs each) training with the 'nccl' backend (it runs perfectly when I use 'gloo'). The environment is Ubuntu 16.04 + python3.5 + pytorch1.5.0 + cuda10.1.
My code is based on the demo code on the official website to test distributed training.
def setup(rank, world_size):
    os.environ['NCCL_DEBUG'] = 'INFO'
    os.environ['NCCL_SOCKET_IFNAME'] = 'eno1'
    os.environ['NCCL_IB_DISABLE'] = '1'
    dist.init_process_group(
        "nccl", rank=rank, init_method='tcp://162.105.146.176:22222', world_size=world_size)

class ToyModel(nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()
        self.net1 = nn.Linear(10, 10)
        self.relu = nn.ReLU()
        self.net2 = nn.Linear(10, 5)

    def forward(self, x):
        return self.net2(self.relu(self.net1(x)))

...
The error output when using 'nccl' is as follows:
ptwop-176:1755:1755 [0] NCCL INFO Bootstrap : Using [0]eno1:162.105.146.176<0>
ptwop-176:1755:1755 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
ptwop-176:1755:1755 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
ptwop-176:1755:1755 [0] NCCL INFO NET/Socket : Using [0]eno1:162.105.146.176<0>
NCCL version 2.4.8+cuda10.1
ptwop-176:1756:1756 [1] NCCL INFO Bootstrap : Using [0]eno1:162.105.146.176<0>
ptwop-176:1756:1756 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
ptwop-176:1756:1756 [1] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
ptwop-176:1756:1756 [1] NCCL INFO NET/Socket : Using [0]eno1:162.105.146.176<0>
ptwop-176:1755:1870 [0] NCCL INFO Setting affinity for GPU 0 to 5555,55555555
ptwop-176:1756:1871 [1] NCCL INFO Setting affinity for GPU 1 to 5555,55555555
ptwop-176:1756:1871 [1] include/socket.h:390 NCCL WARN Connect to 162.105.146.178<35007> failed : No route to host
ptwop-176:1756:1871 [1] NCCL INFO bootstrap.cc:100 -> 2
ptwop-176:1756:1871 [1] NCCL INFO bootstrap.cc:337 -> 2
ptwop-176:1755:1869 [0] include/socket.h:390 NCCL WARN Connect to 162.105.146.178<54473> failed : No route to host
ptwop-176:1756:1871 [1] NCCL INFO init.cc:695 -> 2
ptwop-176:1755:1869 [0] NCCL INFO bootstrap.cc:100 -> 2
ptwop-176:1756:1871 [1] NCCL INFO init.cc:951 -> 2
ptwop-176:1755:1869 [0] NCCL INFO bootstrap.cc:226 -> 2
ptwop-176:1756:1871 [1] NCCL INFO misc/group.cc:69 -> 2 [Async thread]
Traceback (most recent call last):
File "test.py", line 73, in <module>
run_demo(demo_basic, 2, 3)
File "test.py", line 45, in run_demo
join=True)
File "/home/xukun/setsuna/lib/python3.5/site-packages/torch/multiprocessing/spawn.py", line 200, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/xukun/setsuna/lib/python3.5/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
while not context.join():
File "/home/xukun/setsuna/lib/python3.5/site-packages/torch/multiprocessing/spawn.py", line 119, in join
raise Exception(msg)
Exception:
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/home/xukun/setsuna/lib/python3.5/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
fn(i, *args)
File "/home/xukun/graph/multi_node/test.py", line 57, in demo_basic
ddp_model = DDP(model.to(rank), device_ids=[rank])
File "/home/xukun/setsuna/lib/python3.5/site-packages/torch/nn/parallel/distributed.py", line 285, in __init__
self.broadcast_bucket_size)
File "/home/xukun/setsuna/lib/python3.5/site-packages/torch/nn/parallel/distributed.py", line 483, in _distributed_broadcast_coalesced
dist._broadcast_coalesced(self.process_group, tensors, buffer_size)
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:514, unhandled system error, NCCL version 2.4.8
What can I do to avoid this error? |
st178778 | From this log message: ptwop-176:1755:1869 [0] include/socket.h:390 NCCL WARN Connect to 162.105.146.178<54473> failed : No route to host. I’m assuming the other node’s ip is 162.105.146.178? Could you validate the following:
See if the issue reproduces on a single-node multi-gpu setup.
Can you ping 162.105.146.178 from 162.105.146.176? |
st178779 | Yes, the nodes’ ips are 162.105.146.178 and 162.105.146.176.
The issue does not reproduce on a single-node multi-GPU setup; everything runs well there.
The two nodes can ping each other successfully, and it runs well when I change the communication backend from "nccl" to "gloo" with nearly no code changes (apart from some modifications to put the tensors and model on the CPU). |
st178780 | Thanks for validating. Which version of NCCL are you using? And could you validate that NCCL has been successfully installed on both nodes by running https://github.com/NVIDIA/nccl-tests?
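If it helps, you can also check which NCCL version your PyTorch build was compiled against directly from Python (a small sketch; torch.cuda.nccl.version() should be available in recent releases):
import torch

print(torch.__version__)            # PyTorch build
print(torch.version.cuda)           # CUDA version used by this build
print(torch.cuda.nccl.version())    # NCCL version PyTorch was compiled against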
st178781 | Here’s one way to see if nccl is installed on the node:
locate nccl| grep "libnccl.so" | tail -n1 | sed -r 's/^.*\.so\.//' |
st178782 | I am trying to implement PageRank with libtorch. I finished the OpenMPI version of PageRank. I tried to read the libtorch documents here.
However, I did not see any function like RPC in OpenMPI.
Is it possible to implement PageRank with libtorch? |
st178783 | Solved by mrshenli in post #2
It should be possible. And there are several levels of APIs that you can use:
send/recv APIs: https://pytorch.org/docs/stable/distributed.html#torch.distributed.send
collective communication APIs: https://pytorch.org/docs/stable/distributed.html#torch.distributed.all_reduce
RPC APIs:
a. https:/… |
st178784 | It should be possible. And there are several levels of APIs that you can use:
send/recv APIs: https://pytorch.org/docs/stable/distributed.html#torch.distributed.send
collective communication APIs: https://pytorch.org/docs/stable/distributed.html#torch.distributed.all_reduce
RPC APIs:
a. https://pytorch.org/docs/stable/rpc.html
b. https://pytorch.org/tutorials/intermediate/rpc_tutorial.html
c. https://pytorch.org/tutorials/intermediate/rpc_param_server_tutorial.html |
st178785 | Hi,
I am trying to use RPC, but it seems difficult.
My idea is that rank 0 will call the my_add function, and rank 1 will execute it and return the value to rank 0.
"""RPC with Torch"""
"""run.py:"""
#!/usr/bin/env python
import os
import torch
import torch.distributed as dist
from torch.multiprocessing import Process
import torch.distributed.rpc as rpc
def my_add(a,b):
    return a+b

def run(rank, size):
    tensor = torch.zeros(1)
    if rank == 0:
        rpc.init_rpc("worker0", rank=0, world_size=size)
        ret = rpc.rpc_sync("worker1", my_add, args=(2,3))
        print(ret)
    else:
        rpc.init_rpc("worker1", rank=1, world_size=size)
    rpc.shutdown()
    #print('Rank ', rank, ' has data ', tensor[0])

def init_process(rank, size, fn, backend='gloo'):
    """ Initialize the distributed environment. """
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group(backend, rank=rank, world_size=size)
    fn(rank, size)

if __name__ == "__main__":
    size = 2
    processes = []
    for rank in range(size):
        p = Process(target=init_process, args=(rank, size, run))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
I try to run it, but the error is as below:
Process Process-45:
Process Process-46:
Traceback (most recent call last):
Traceback (most recent call last):
File "/Users/cnphuong/opt/anaconda3/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/Users/cnphuong/opt/anaconda3/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/Users/cnphuong/opt/anaconda3/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/Users/cnphuong/opt/anaconda3/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "<ipython-input-22-0e6d083442bd>", line 29, in init_process
fn(rank, size)
File "<ipython-input-22-0e6d083442bd>", line 29, in init_process
fn(rank, size)
File "<ipython-input-22-0e6d083442bd>", line 16, in run
rpc.init_rpc("worker0", rank=0, world_size=size)
File "<ipython-input-22-0e6d083442bd>", line 20, in run
rpc.init_rpc("worker1", rank=1, world_size=size)
File "/Users/cnphuong/opt/anaconda3/lib/python3.7/site-packages/torch/distributed/rpc/__init__.py", line 77, in init_rpc
store, _, _ = next(rendezvous_iterator)
File "/Users/cnphuong/opt/anaconda3/lib/python3.7/site-packages/torch/distributed/rpc/__init__.py", line 88, in init_rpc
api._init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)
File "/Users/cnphuong/opt/anaconda3/lib/python3.7/site-packages/torch/distributed/rendezvous.py", line 172, in _env_rendezvous_handler
store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)
File "/Users/cnphuong/opt/anaconda3/lib/python3.7/site-packages/torch/distributed/rpc/api.py", line 283, in _init_rpc_backend
rpc_backend_options=rpc_backend_options,
File "/Users/cnphuong/opt/anaconda3/lib/python3.7/site-packages/torch/distributed/rpc/backend_registry.py", line 75, in init_backend
return backend.value.init_backend_handler(*args, **kwargs)
RuntimeError: Address already in use
File "/Users/cnphuong/opt/anaconda3/lib/python3.7/site-packages/torch/distributed/rpc/backend_registry.py", line 101, in _process_group_init_backend_handler
"Default process group must not be initialized before init_rpc."
RuntimeError: Default process group must not be initialized before init_rpc.
Please help!
Thanks. |
st178786 | RuntimeError: Default process group must not be initialized before init_rpc.
As the error suggests, init_process_group cannot be called before init_rpc, as currently DDP and RPC do not work together. We are working on dropping this requirement: https://github.com/pytorch/pytorch/issues/33583
For the code snippet above, removing dist.init_process_group(backend, rank=rank, world_size=size) should work, e.g.:
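A minimal sketch of the corrected init_process (just the function from the snippet above with the process-group call dropped; init_rpc sets up its own communication internally):
def init_process(rank, size, fn, backend='gloo'):
    """ Initialize the RPC environment only. """
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    # no dist.init_process_group() here -- run() calls rpc.init_rpc itself
    fn(rank, size) |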
st178787 | Thank you so much!
In MPI, I can use a distributed object to remotely have other workers do something with their own distributed objects.
In PyTorch, I saw that it has Remote References (RRef). However, I did not see how to create a distributed object with PyTorch. Could you please suggest what to do in this case?
For example,
worker1 holds a dictionary with key and value DIC1.
worker0 will send an array KEY_SENDs with the keys to worker1. Worker1 will check DIC1 and return an array with the values from KEY_SENDs and DIC1.
I try with
def get_values(dic1, arr):
    rest = np.array([])
    for i in arr:
        rest = np.append(rest, dic1[i])
    return rest

def run(rank, size):
    tensor = torch.zeros(1)
    my_name = "worker" + str(rank)
    if rank == 0:
        thisdict = {4:2, 2:6, 3:8}
        rpc.init_rpc(my_name, rank=0, world_size=size)
        target = 1
        target_name = "worker" + str(target)
        ret = rpc.rpc_sync(target_name, get_values, args=(thisdict, [1,2]))
        print(str(ret) + " is in rank 0 from rank 1")
    else:
        thisdict = {2:1, 1:3, 4:2}
        rpc.init_rpc(my_name, rank=rank, world_size=size)
    rpc.shutdown()
    print(my_name)
I ran it, but no results were shown in this example.
Thanks, |
st178788 | Hey @ph0123
I am not sure if I fully understand the use case. I created an example to show how to compute the intersection of a local dict and a remote dict. It shouldn't be too hard to extend this to do, e.g., a union or just sending keys. Let me know if this answers the question.
import torch
import torch.distributed.rpc as rpc
import torch.multiprocessing as mp
import os
def create_dict():
    return {1:'a', 2:'b', 3:'c'}

def intersect_dict(dict1, dict2_rref):
    ret = {}
    for key in dict2_rref.local_value():
        if key in dict1:
            ret[key] = dict1[key]
    return ret

def run(dst):
    dict1 = {1:'a', 3:'c', 4:'d'}
    dict2_rref = rpc.remote(dst, create_dict)
    intersect = rpc.rpc_sync(dst, intersect_dict, args=(dict1, dict2_rref))
    print(intersect)

def run_worker(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '29500'
    if rank == 1:
        rpc.init_rpc("worker0", rank=rank, world_size=world_size)
        run("worker1")
    else:
        rpc.init_rpc("worker1", rank=rank, world_size=world_size)
    rpc.shutdown()

if __name__=="__main__":
    world_size = 2
    mp.spawn(run_worker, args=(world_size, ), nprocs=world_size, join=True) |
st178789 | Hi,
That’s so good.
However, when I try to run your code on my PC, the output is:
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-1-c234795e743f> in <module>
37 world_size = 2
38
---> 39 mp.spawn(run_worker, args=(world_size, ), nprocs=world_size, join=True)
~/opt/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py in spawn(fn, args, nprocs, join, daemon, start_method)
198 ' torch.multiprocessing.start_process(...)' % start_method)
199 warnings.warn(msg)
--> 200 return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
~/opt/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py in start_processes(fn, args, nprocs, join, daemon, start_method)
156
157 # Loop on join until it returns True or raises an exception.
--> 158 while not context.join():
159 pass
160
~/opt/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py in join(self, timeout)
111 raise Exception(
112 "process %d terminated with exit code %d" %
--> 113 (error_index, exitcode)
114 )
115
Exception: process 1 terminated with exit code 1
I try with
"""RPC with Torch"""
"""run.py:"""
#!/usr/bin/env python
import os
import torch
import torch.distributed as dist
from torch.multiprocessing import Process
import torch.distributed.rpc as rpc
import numpy as np
def create_dict():
    return {1:'a', 2:'b', 3:'c'}

def intersect_dict(dict1, dict2_rref):
    ret = {}
    for key in dict2_rref.local_value():
        if key in dict1:
            ret[key] = dict1[key]
    return ret

def run(dst):
    dict1 = {1:'a', 3:'c', 4:'d'}
    dict2_rref = rpc.remote(dst, create_dict)
    intersect = rpc.rpc_sync(dst, intersect_dict, args=(dict1, dict2_rref))
    print(intersect)

def init_process(rank, size, fn, backend='gloo'):
    """ Initialize the distributed environment. """
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    #dist.init_process_group(backend, rank=rank, world_size=size)
    #fn(rank, size)
    if rank == 1:
        rpc.init_rpc("worker0", rank=rank, world_size=world_size)
        fn("worker1")
    else:
        rpc.init_rpc("worker1", rank=rank, world_size=world_size)
    rpc.shutdown()

if __name__ == "__main__":
    size = 2
    processes = []
    for rank in range(size):
        p = Process(target=init_process, args=(rank, size, run))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
And it worked on my PC.
Could you please explain why mp.spawn errors in this case?
thanks, |
st178790 | Can I create dict2 right next to rpc.init_rpc, as in:
if rank == 1:
    rpc.init_rpc("worker0", rank=rank, world_size=world_size)
    fn("worker1")
else:
    # initialize dict2 here
    rpc.init_rpc("worker1", rank=rank, world_size=world_size)
My goal is that each rank holds its own dictionary, and this dictionary is always available in the program. In your code, if I repeat the run() function, dict1 is initialized again.
For example, rank 0 calls the run function on rank 1. Your code is good, but I want to run it several times: after the call, rank 1 changes something in its dictionary, and in the next iteration rank 0 wants to get values from the updated dictionary on rank 1.
Thanks, |
st178791 | That’s weird, I don’t know why mp.spawn would fail. Is it because it has to be 127.0.0.1 instead of localhost in your env? We can try add more logs/try-except to identify which line crashed.
My target is that each rank will hold their dictionary, and this dictionary is always available in the program. In your code, If I repeat the run() function, The dict1 is initial again.
If this is the only concern, you can move that rpc.remote call to create_dict() out of the run function and do it before the training loop?
For example, rank 0 call run function on rank 1. Your codes is good. But I wanna run it several time, and after calling function on rank 1. The rank 1 changes something in the dictionary. Then, in the next iteration, rank0 wanna get the value from updated dictionary on rank1.
Sure, there are many ways to solve this. For example, you can define the dict as a global value so that each process will have its own copy. Then in intersect_dict, just read from that global value instead of passing the RRef around.
Or, you can have a master that tells all workers to initialize their states upfront and then get RRefs of those states as return values.
If you need to construct an RRef from that dict, you can also use rpc.RRef(dict_obj) to create a local RRef and then pass that around through RPC. |
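A rough sketch of that last option (only an illustration; lookup_keys and the dict contents are made up for the example):
import torch.distributed.rpc as rpc

_local_dict = {1: 'a', 2: 'b', 3: 'c'}   # this worker's own state

def lookup_keys(dict_rref, keys):
    # runs on the worker that owns the dict, so local_value() is cheap there
    d = dict_rref.local_value()
    return [d[k] for k in keys if k in d]

# on the owning worker (after init_rpc): wrap the existing dict in a local RRef
dict_rref = rpc.RRef(_local_dict)
# any worker that receives dict_rref can then do:
# values = rpc.rpc_sync(dict_rref.owner(), lookup_keys, args=(dict_rref, [1, 2])) |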
st178792 | Hi,
Thank you so much!
I searched the documents, but I did not find how to make a dictionary a global value with PyTorch.
I have worked with MPI and UPC++. MPI provides a distributed object, which lets each worker hold its own values. UPC++ also provides distributed objects that share the same name on all workers but live in different memory locations. Additionally, UPC++ provides global pointers for remote access from other workers. It is quite easy.
However, I did not find these kinds of variables in the PyTorch documents.
Do you have any suggestions?
I have another question: if I run several iterations, is there any function to wait for all workers, like a barrier() function?
Thanks. |
st178793 | Hey @ph0123
I was referring to Python global vars, sth like:
import torch
import torch.distributed.rpc as rpc
import torch.multiprocessing as mp
import os
_local_dict = {}

def intersect_dict(dict1):
    ret = {}
    for key in _local_dict:
        if key in dict1:
            ret[key] = dict1[key]
    return ret

def run(dst):
    intersect = rpc.rpc_sync(dst, intersect_dict, args=(_local_dict,))
    print(intersect)

def run_worker(rank, world_size):
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    if rank == 0:
        _local_dict.update({1:'a', 3:'c', 4:'d'})
        rpc.init_rpc("worker0", rank=rank, world_size=world_size)
        run("worker1")
    else:
        _local_dict.update({1:'a', 2:'b', 3:'c'})
        rpc.init_rpc("worker1", rank=rank, world_size=world_size)
    rpc.shutdown()

if __name__=="__main__":
    world_size = 2
    mp.spawn(run_worker, args=(world_size, ), nprocs=world_size, join=True) |
st178794 | Hi,
I tried it with the loop.
After each iteration, I used dist.barrier(), and the program does not stop because it is waiting for other RPCs.
What is the problem with it?
Thanks, |
st178795 | the program is not stop because of waiting others rpc.
What does this mean by “the program is not stop”? If you are referring to the behavior that some RPC requests are still running in background, then yes, this is the expected behavior. Because RPC server has its own thread pool to handle requests, and dist.barrier only blocks the thread that runs it. Why do you want to combine dist.barrier() with RPC? Is this just to conclude an iteration? |
st178796 | HI,
I did not know this does not work with RPC.
In my algorithm, step (k+1) depends on step k.
I want to make sure that all workers have finished before starting a new iteration.
Each worker holds its own dictionary.
step k:
call RPC to other workers to check and update dictionary values, based on the values from step (k-1).
# I put the barrier here
step k+1:
call RPC to other workers to check and update dictionary values, based on the values from step (k).
…
With PyTorch RPC, could you please suggest any function like dist.barrier()?
Thanks, |
st178797 | Hey,
dist.barrier() and RPC should both work as expected. The contract dist.barrier() offers is only blocking until all participating threads reach the same barrier. So, as RPC threads in background are not calling dist.barrier, they are not part of that synchronization.
If you would like to synchronize all RPCs, one option is doing sth similar to _wait_all_workers as linked below. It uses rank 0 as a coordinator to tell all others when to proceed.
https://github.com/pytorch/pytorch/blob/479b04e26a59d20b72ad5fdaec819caaad49af75/torch/distributed/rpc/api.py#L137-L199
@_require_initialized
def _wait_all_workers():
    r"""
    Block until all local and remote RPC processes reach this method and wait
    for all outstanding work to complete. Every RPC process must call this
    method before exit to perform a graceful shutdown. This should be used to
    terminate the RPC framework, and there is no guarantee that the RPC
    framework will work after this method returns.
    """
    assert (
        _ALL_WORKER_NAMES is not None
    ), "`_ALL_WORKER_NAMES` is not initialized for `def _wait_all_workers`."
    leader_worker_name = sorted(_ALL_WORKER_NAMES)[0]
    self_worker_name = _get_current_rpc_agent().get_worker_info().name
    global _wait_all_workers_sequence_id
    with _wait_all_workers_dict_lock:
        sequence_id = _wait_all_workers_sequence_id
        _wait_all_workers_sequence_id += 1
(excerpt truncated)
In general, all we need for a synchronization is to join background threads into the current thread that calls dist.barrier. For example, the following code won’t work, as some_func is running on a different thread.
rpc.rpc_async(to, some_func)
dist.barrier()
However, the code below should work, as it blocks the current thread before calling the barrier
fut = rpc.rpc_async(to, some_func)
fut.wait()
dist.barrier()
So, as a summary, if you would like to use dist.barrier to synchronize RPCs, the application needs to collect the futures created in one iteration and wait for all of them to complete before calling the barrier, e.g.:
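A small sketch of that per-iteration pattern (num_steps, some_func and the worker names are placeholders, not from this thread):
for step in range(num_steps):
    futs = [rpc.rpc_async(worker, some_func, args=(step,))
            for worker in ["worker1", "worker2"]]
    for fut in futs:
        fut.wait()       # join all RPCs issued in this iteration
    dist.barrier()       # every rank has now finished this step |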
st178798 | Hi, I am loading a model wrapped in DataParallel; below is my code:
# training
model = build_model()
model = nn.DataParallel(model)
model.cuda()
# ...training...
torch.save(model, name)

# eval
net = torch.load(name)
I save the whole model in its DataParallel form, not just the state_dict, and I load the model directly for eval. I am wondering: is this right? Will it get the same performance as saving the state_dict and loading the state_dict? Thank you so much!! |
st178799 | You should get the same performance.
However, your current approach assumes that all source files contain the same definitions and are in the same location. Also, torch.load would load the nn.DataParallel model and might assume the same number of GPUs in your system (not sure about it and you would have to test it).
Given these disadvantages, I always recommend storing the state_dict in order to keep some flexibility when loading the model later, e.g.:
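A minimal sketch of that recommended pattern (the filename is just a placeholder; .module unwraps DataParallel so the saved keys carry no 'module.' prefix):
# save: strip the DataParallel wrapper so keys have no 'module.' prefix
torch.save(model.module.state_dict(), 'checkpoint.pth')

# load: build the plain model first, then load the weights
net = build_model()
net.load_state_dict(torch.load('checkpoint.pth'))
net.cuda()
net.eval() |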
st178800 | Hi! I use torch.save(net) to save the model structure and weights, but I change the model structure later. If I use newnet = torch.load(name) to load the saved .pth file now, will the newnet forward dataflow follow the model structure from before or after the change? Thanks!!! |
st178801 | I would assume that the new model definition will be used, but I never tested it.
Note that I’m not using this approach, as it might break in several ways. E.g. you might not be able to load the model again, if you change the file structure or the definition too much.
Creating the new model instance directly and just loading the state_dict is the cleaner way. |
st178802 | I understand, but the reason why I use torch.save() and not the state_dict is that I hope to load the model and weights directly without needing to define the model first, because my model structure may change many times. If I just save the state dict while training, I may not remember the exact structure version when I evaluate the model, and loading a state dict requires defining the model structure first, which may cause the state dicts of the new model and the loaded one not to match. |
st178803 | I don’t think you can avoid this issue.
When you are saving and loading the state_dict, you would need to create the model first, that’s correct.
The model definition might have changed and you might get mismatches in the state_dict.
However, in your current approach, the same mismatch might happen and might be hidden behind the torch.load model.
E.g. this simple use case demonstrates it:
# model.py
import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = nn.Conv2d(1, 3, 3, 1, 1)
        self.fc = nn.Linear(3*24*24, 1)

    def forward(self, x):
        x = self.conv1(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x

# save.py
import torch
from model import MyModel

net = MyModel()
x = torch.randn(1, 1, 24, 24)
out = net(x)
torch.save(net, 'tmp.pt')

# load.py
import torch

net = torch.load('tmp.pt')
x = torch.randn(1, 1, 24, 24)
out = net(x)
print(out.shape)
After changing the linear layer in model.py to nn.Linear(3*24*24, 10) and executing load.py I get:
/opt/conda/lib/python3.6/site-packages/torch/serialization.py:644: SourceChangeWarning: source code of class 'model.MyModel' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
Which already sounds hard to debug, but at least the code is still running. Note that the code still returns an output of [1, 1], which corresponds to the initial definition not the new one, which might be even harder to debug.
After changing the name of self.fc to self.fc_new and executing load.py I get:
/opt/conda/lib/python3.6/site-packages/torch/serialization.py:644: SourceChangeWarning: source code of class 'model.MyModel' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
Traceback (most recent call last):
File "load.py", line 5, in <module>
out = net(x)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 577, in __call__
result = self.forward(*input, **kwargs)
File "/workspace/src/model.py", line 13, in forward
x = self.fc_new(x)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 621, in __getattr__
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'MyModel' object has no attribute 'fc_new'
As already said, use it at your own risk, but I would strongly advise against it, as I think debugging these issues in a "real" model might take a lot of time. |
st178804 | The main process in the code snippet below freezes after several iterations.
I think it's related to how the data structure stack in PyTorch works: tensors are built upon storage classes, and storage classes are built upon raw cudaMalloc regions. I can understand what reduce_tensor() and rebuild_cuda_tensor() are doing, but I am not sure why creating a new tensor (since g_tensor has been reassigned) after a blocking starmap would somehow cause the main process to freeze.
Code example
import itertools as it
import torch as t
import torch.multiprocessing as mp
def infer(id, tensor):
    print(id)
    print(tensor)
    # del tensor immediately doesn't solve the problem
    del tensor

# some global tensor
g_tensor = t.full([1000, 1000], 2, device="cuda:0")
g_tensor.share_memory_()

if __name__ == "__main__":
    ctx = mp.get_context("spawn")
    pool = ctx.Pool(2)
    for i in range(10000000):
        print("start")
        pool.starmap(infer, zip(range(5), it.repeat(g_tensor)))
        # cpu tensors work just fine
        # for cuda tensors:
        # if I delete the global tensor, reassign it with a new cuda tensor
        # or if I use a tensor created dynamically in each iteration
        # the program freezes after 2 iterations.
        # Comment out the following lines and everything will work fine.
        del g_tensor
        g_tensor = t.full([1000, 1000], 2, device="cuda:0")
        g_tensor.share_memory_()
Environment
PyTorch Version (e.g., 1.0): 1.1.0
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): pip
Python version: 3.5
CUDA/cuDNN version: 9.1/7.2.1 |
st178805 | @mrshenli
I am sorting out my framework today and as of pytorch version 1.5.0, this problem still persists. Is it normal? |
st178806 | Tried this locally, it hangs with a non-deterministic number of iterations (4, 11, etc.), and it hangs at del g_tensor. I suspect it is due to CUDACachingAllocator, where it might need to keep the memory block alive until all other processes finish using it? But it is a per-process data structure, so it does not have the global view. I am not sure if this is the reason for this hang.
Call for help cc @ptrblck @colesbury |
st178807 | Thanks for following up on this. I filed an issue to track the problem on GitHub:
github.com/pytorch/pytorch: "Deadlock with shared CUDA tensors and multiprocessing (spawn)", opened Jun 4, 2020 by colesbury.
🐛 Bug
Reported by iffiX in https://discuss.pytorch.org/t/freezing-problem-while-using-cuda-tensor-in-multiprocessing-environment/80000.
(I have moved the code into a main() function to rule out some other multiprocessing bugs)
It looks like the deadlock is in CudaIPCSentData destructor, which looks separate from the caching allocator code. |
st178808 | What’s the best practice of using distributed autograd and optimizer?
Is it designed for a master-slave paradigm, where a master distributes tasks to slaves and collects results, then performs gradient optimization?
Or is it designed for a client-host paradigm, where clients actively push results (such as gradients) to the host and the host performs post-processing like reduction?
I have read your documentation and the code in the examples. Personally I think it is the first one, though I am not sure what your future plans are. I am currently building more application layers on top of RPC and would like to know your design ideas.
st178809 | Solved by mrshenli in post #4
This is true, they are stateful, and states including RRef context, distributed autograd context and application states. In the current version, an RPC gang cannot survive any node crash. We are actively collaborating with torchelastic to provide fault-tolerance and elasticity support.
what kind … |
st178810 | The second question: since your implementation of RPC uses the FAST & SMART mode algorithm, I have also read the detailed PR on your GitHub. I think your workers are stateful, so what will happen if some worker fails during the execution of the RPC context? For example, in the scenario below from your RPC parameter server example:
with dist_autograd.context() as cid:
    model_output = net(data)
    target = target.to(model_output.device)
    loss = F.nll_loss(model_output, target)
    if i % 5 == 0:
        print(f"Rank {rank} training batch {i} loss {loss.item()}")
    dist_autograd.backward(cid, [loss])
    # Ensure that dist autograd ran successfully and gradients were
    # returned.
    assert remote_method(
        ParameterServer.get_dist_gradients,
        net.param_server_rref,
        cid) != {}
    opt.step(cid)
What kind of exceptions will be thrown if the failure occurs in the forward & backward pass? |
st178811 | Hey @iffiX, thanks for the question.
iffiX:
Is it designed for a master-slave paradigm, where a master distributes tasks to slaves and collects results, then performs gradient optimization?
Or is it designed for a client-host paradigm, where clients actively push results (such as gradients) to the host and the host performs post-processing like reduction?
They are designed in the way that all RPC workers are peers, such that each RPC worker has its own server running in the background, and any RPC worker A can talk to any other RPC worker B by sending a function to run on B. In the foreseeable future, we are not going to change this design.
We do see many applications using RPC to build master-slave applications, where one worker serves as a master telling everyone else what to do. One example is this tutorial where the agent is serving as the master, and commanding all observers.
It is also possible to build server-client applications using the RPC API. Here is an example: https://github.com/pytorch/examples/pull/702/files
In general, we try to make the RPC API as flexible as possible, and hope it can support a wide range of applications. It’s up to the application developers to decide how to decompose the entire logic into multiple functions and how to stitch those functions together using RPC.
Regarding distributed autograd and distributed optimizer, the worker that creates the distributed autograd context serves as the driver for its backward pass and optimizer step. But there can be multiple drivers in a cluster. One example would be this tutorial 2, where we launch one Parameter-Server process that passively sits there waiting for requests from trainers and we also launch multiple trainer processes with each trainer actively running a training loop. In this case, every trainer serves as a driver of its own backward pass and optimizer step. |
st178812 | iffiX:
I think your workers are stateful, so what will happen if some worker fails during the execution of the RPC context?
This is true, they are stateful, and the state includes the RRef context, the distributed autograd context, and application states. In the current version, an RPC gang cannot survive any node crash. We are actively collaborating with torchelastic to provide fault-tolerance and elasticity support.
What kind of exceptions will be thrown if the failure occurs in the forward & backward pass?
If the error is recoverable, e.g., just an exception instead of hard node crash, RPC should throw the same exception (or a RuntimeError, this is not great, still working on improvements) on the caller. Below are some example tests:
https://github.com/pytorch/pytorch/blob/c767d65cafc2615bcee9ec1bbc8d5dadc07ea0e8/torch/testing/_internal/distributed/rpc/dist_autograd_test.py#L1204-L1210
def test_backward_autograd_engine_error(self):
    with dist_autograd.context() as context_id:
        t1 = torch.rand((3, 3), requires_grad=True)
        t2 = torch.rand((3, 3), requires_grad=True)
        # Perform some ops before error simulation.
        tmp = (t1 + t2) * (t1 + t2)
        t3 = SimulateBackwardError.apply(tmp)
https://github.com/pytorch/pytorch/blob/c767d65cafc2615bcee9ec1bbc8d5dadc07ea0e8/torch/testing/_internal/distributed/rpc/rpc_test.py#L1284-L1290
@dist_init
def test_py_raise_in_user_func(self):
    n = self.rank + 1
    dst_rank = n % self.world_size
    fut = rpc.rpc_async(worker_name(dst_rank), raise_func)
    with self.assertRaises(ValueError):
        fut.wait() |
st178813 | Thank you mrshenli, you are so helpful!
About my question:
Well, the first question is actually about the autograd built on top of RPC, not the RPC APIs themselves, but in hindsight I think it is not really a good question, so let's set it aside. The second answer is sufficient. I think I cannot use distributed autograd for now, as it is a little bit too brittle.
About distributed programming:
Indeed, I have noticed your amazing work on torchelastic. I have thought about your RPC design and find that it could be very hard to implement a mechanism which rejoins an RPC worker after a crash (whether a hard node crash or a soft crash like an exception): since you block all processes on init and exchange their connection information, it would be overly complex to try to recover this information on a crash.
Therefore, I instead worked out another solution to provide some robustness for the raw RPC APIs, since under some conditions the "map-reduce" (very similar, so I will use this description, since workers receive their work portion when they reach the start barrier) paradigm in the torchelastic Rendezvous is not very handy, and RPC is just right.
My solution basically implements a stable distributed election process & perfect failure detector based on RPC, and lets each worker in the RPC group take one or many role(s). When a worker fails (hard/soft), its role is reassigned to other workers and this worker is marked as tainted permanently. Users only need to declare the total number of workers on start and specify what a role has to do when it is: initialized, running, and stopped. This adds a little overhead on start & worker failure, but near-zero overhead during a normal RPC call.
My future plan is to integrate my framework with NNI tuner developed by the microsoft team, so I can have some taste of distributed RL training on kubernetes.
That’s all for now, thank you again! |
st178814 | iffiX:
since you block all processes on init and exchange their connection information, it would be overly complex to try to recover this information on a crash.
The rendezvous in RPC init will be gone in future releases (not sure which release yet). We are thinking about using c10d::Store + listeners to support dynamic worker join/leave.
My solution basically implements a stable distributed election process & perfect failure detector based on RPC, and lets each worker in the RPC group take one or many role(s). When a worker fails (hard/soft), its role is reassigned to other workers and this worker is marked as tainted permanently. Users only need to declare the total number of workers on start and specify what a role has to do when it is: initialized, running, and stopped. This adds a little overhead on start & worker failure, but near-zero overhead during a normal RPC call.
This is awesome! Are these in an open-source repo? It will be great if we could have the opportunity to learn more about it. cc @Kiuk_Chung |
st178815 | Dynamic worker join/leave is great! With that we can just discard failed workers and start new ones, looking forward to that.
I am still writing docs for my framework, and haven't begun the unit test & CI/CD process yet.
Currently my framework aims more at RL, so tests for RL algorithms will come first; the distributed part is mainly aimed at resource-hungry algorithms such as DQN-APEX, DDPG-APEX, IMPALA, etc., and multi-agent scenarios. I really hope to complete code, docs, and tests for the major RL algorithms before July, then complete the distributed scenarios before August/September, so you will have to wait a l-i-t-t-le more.
They will be in a new github repo, and I will post updates as soon as possible. |
st178816 | https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html
This page explains how we can split a big nn model across multiple GPUs. But I don't understand: when an intermediate training result needs to be transferred from one GPU to another (by using ".to('cuda:1')"), does the PyTorch runtime move the data to CPU memory first and then to the other GPU's memory, or does PyTorch actually use some direct data transfer technique between GPUs, like SLI?
Any help would be appreciated. |
st178817 | Hey @Yanan
In this case, the tensor will be directly copied from device to device using cudaMemcpyAsync. The C++ implementation is linked below:
https://github.com/pytorch/pytorch/blob/89c0efb30b5bad49e717c1b1a797bac3c62e8b7e/aten/src/ATen/native/cuda/Copy.cu#L56-L67
if (memcpy_eligible) {
  void *dst = iter.data_ptr(0);
  void *src = iter.data_ptr(1);
  size_t size = numel * iter.element_size(0);
  if (src != dst || src_device != dst_device) {
    // Perform the copy
    AT_CUDA_CHECK(cudaMemcpyAsync(
        dst, src, size,
        cudaMemcpyDeviceToDevice,
        copy_stream));
  }
} else {
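At the Python level this is just a plain .to() call between two CUDA devices. A minimal model-parallel sketch in the spirit of the tutorial (not code from this thread):
import torch
import torch.nn as nn

class TwoGPUModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.part1 = nn.Linear(10, 10).to('cuda:0')
        self.part2 = nn.Linear(10, 5).to('cuda:1')

    def forward(self, x):
        x = self.part1(x.to('cuda:0'))
        # copied device-to-device via cudaMemcpyAsync (see the C++ snippet above)
        x = x.to('cuda:1')
        return self.part2(x) |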
st178818 | @mrshenli Thanks! This really helps.
I have another question: does the data copy between GPUs in the DataParallel module utilize the same kernel?
Sorry I forgot to ask this before. |
st178819 | I have another question that does data copy between GPUs in DataParallel module utilize the same kernel?
Yes. DataParallel calls into scatter. Its C++ implementation is linked below. It’s basically calling Tensor.to in a loop
https://github.com/pytorch/pytorch/blob/89c0efb30b5bad49e717c1b1a797bac3c62e8b7e/torch/csrc/cuda/comm.cpp#L201-L206
chunks[chunk] =
    chunks[chunk].to(
        {DeviceType::CUDA, device_index},
        /*non_blocking=*/true,
        /*copy=*/false,
        /*memory_format=*/at::MemoryFormat::Preserve); |
st178820 | Hello.
I have a question about DataParallel.
Assume I have a model which has its own attribute 'mode'. The model's forward flow can be changed by setting this 'mode'.
For example:
class myModel(nn.Module):
    def __init__(self, mode='A'):
        # do essential initialization...
        self.mode = mode
        self.selectible_layers = nn.ModuleDict(
            {'A': moduleA(), 'B': moduleB(), ...}
        )

    def forward(self, x):
        # forwarding...
        x = self.selectible_layers[self.mode](x)
        # forwarding...
        return x
When I parallelize this model across multiple GPUs, I access this 'mode' attribute via model.module, because nn.DataParallel doesn't know about 'mode'.
Then,
model = myModel(mode='A')  # initialize with mode 'A'
model = nn.DataParallel(model, device_ids=[0, 1, 2, 3])  # distributed across multiple GPUs

for mode in ['A', 'B', 'C', 'D']:
    model.module.mode = mode  # change mode
    # (?)
At (?) in the above code, does nn.DataParallel guarantee that all model replicas on the multiple GPUs have the same 'mode' after changing it?
I am worried that the replicas still have mode 'A' while the original model's 'mode' (on the host GPU, maybe 0) changes.
+)
I tried myself, but I don't know how to access each replica on the multiple GPUs. How can I access them? |
st178821 | Solved by mrshenli in post #2
The following method is called by DataParallel to create replicas. So the attributes in __dict__ should be replicated as well. But as the flow is “DataParallel forward” -> “replicate models” -> “app model forward”, you need to make sure that the mode is set properly before calling DataParallel forwa… |
st178822 | The following method is called by DataParallel to create replicas. So the attributes in __dict__ should be replicated as well. But as the flow is “DataParallel forward” -> “replicate models” -> “app model forward”, you need to make sure that the mode is set properly before calling DataParallel forward.
https://github.com/pytorch/pytorch/blob/3001facd7a3942efc7c7e8a42b720b9c884387b3/torch/nn/modules/module.py#L1210-L1219
def _replicate_for_data_parallel(self):
    replica = self.__new__(type(self))
    replica.__dict__ = self.__dict__.copy()
    # replicas do not have parameters themselves, the replicas reference the original
    # module.
    replica._parameters = OrderedDict()
    replica._buffers = replica._buffers.copy()
    replica._modules = replica._modules.copy()
    replica._is_replica = True
I tried myself, but I don't know how to access each replica on the multiple GPUs. How can I access them?
The replicas are created in every forward pass of the DataParallel module. To access them and check, you can modify your model's forward function and print out the mode value; see the sketch after the snippet below.
https://github.com/pytorch/pytorch/blob/3001facd7a3942efc7c7e8a42b720b9c884387b3/torch/nn/parallel/data_parallel.py#L154-L156
replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
outputs = self.parallel_apply(replicas, inputs, kwargs)
return self.gather(outputs, self.output_device)
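A quick self-contained check along those lines (a sketch only; ModeModel is a made-up stand-in for your myModel and it needs at least 2 GPUs):
import torch
import torch.nn as nn

class ModeModel(nn.Module):
    def __init__(self, mode='A'):
        super().__init__()
        self.mode = mode
        self.layers = nn.ModuleDict({'A': nn.Linear(4, 4), 'B': nn.Linear(4, 4)})

    def forward(self, x):
        # each replica prints the mode it carries and the device it runs on
        print(f"replica on {x.device} uses mode {self.mode}")
        return self.layers[self.mode](x)

model = nn.DataParallel(ModeModel().cuda(), device_ids=[0, 1])
for mode in ['A', 'B']:
    model.module.mode = mode      # set the mode BEFORE the DataParallel forward
    model(torch.randn(8, 4).cuda()) |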
st178823 | I am trying to build a system and I need to do inference with 60 segmentation models at the same time (the same model but different inputs). I wonder if this is possible in PyTorch?
I am not sure what kind of system I need for this. I am planning to use 4x RTX 8000, and if that is not enough I can use two systems with 4x RTX 8000 each, or a better GPU.
Would I lose too much performance because of using multiple models? How many models can I put on a GPU, and what does the performance depend on? Is it just the GPU's VRAM or the processing speed? Sorry for asking such trivial questions, but I really couldn't come up with an answer by searching.
You can assume that I am going to use Yolact: https://github.com/dbolya/yolact
I would be very happy if you could help me. Thank you so much, and sorry for my English/grammar. |
st178824 | The simplest and probably the most efficient method would be to concatenate your samples in dimension 0 (i.e. the batch dimension).
If that is too much for one GPU, then wrap your model in DistributedDataParallel and let it handle the batched data.
Do not use multiple models unless they hold different parameters. It's unnecessary; see the sketch below.
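A minimal sketch of the batched-inference idea (segmentation_model and the per-arm inputs are placeholders, not code from this thread):
import torch

@torch.no_grad()
def infer_batched(segmentation_model, per_arm_images):
    # per_arm_images: list of [C, H, W] tensors, one per robotic arm
    batch = torch.stack(per_arm_images).cuda()   # [N, C, H, W]
    outputs = segmentation_model(batch)          # one forward pass for all arms
    return list(outputs)                         # split the results back per arm |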
st178825 | It makes sense. I was planning to use 30 robotic arms, with 2 models for each. By doing it like that, I guess every robotic arm will get info about what to do at the same time. I think I can use that. I've read that running multiple models on TensorFlow is possible, so that is why I wanted to know if it is possible in PyTorch. Thanks for the answer again. |
st178826 | Hi everyone!
Now I am training a model using torch.distributed, but I am not sure how to set the random seeds. For example, this is my current code:
def main():
    np.random.seed(args.seed)
    torch.manual_seed(args.seed)
    torch.cuda.manual_seed(args.seed)
    cudnn.enabled = True
    cudnn.benchmark = True
    cudnn.deterministic = True
    mp.spawn(main_worker, nprocs=args.ngpus, args=(args,))
And should I move the
np.random.seed(args.seed)
torch.manual_seed(args.seed)
torch.cuda.manual_seed(args.seed)
cudnn.enabled = True
cudnn.benchmark = True
cudnn.deterministic = True
into the function main_worker() to make sure every process has the correct seed and cudnn settings? By the way, I have tried this and it makes the training process 2 times slower, which really confused me.
Thank you very much for any help! |
st178827 | Each process should execute the seeding code.
The slowdown might come from e.g. cudnn.deterministic = True, as this will use the default algorithm, which might be slower than the others.
Also, cudnn.benchmark = True won't have any effect if you set cudnn.deterministic = True.
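A minimal sketch of "each process executes the seeding code", moving the seeding into main_worker (the args fields are taken from the snippet above):
def main_worker(gpu, args):
    # runs in every spawned process, so each worker seeds itself
    np.random.seed(args.seed)
    torch.manual_seed(args.seed)
    torch.cuda.manual_seed(args.seed)
    cudnn.enabled = True
    cudnn.deterministic = True   # reproducible, but may pick slower kernels
    cudnn.benchmark = False      # ignored anyway when deterministic is set
    # ... set device, init_process_group, build model, train ...

def main():
    mp.spawn(main_worker, nprocs=args.ngpus, args=(args,)) |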
st178828 | I am using torch.distributed.rpc. I can set the rpc to have many threads using rpc_backend_options, but it seems like it is not being mapped onto idle CPUs that I have.
Specifically, to test out, I’ve sent 1-4 asynchronous RPC calls to a server which has 80 CPUs.
Below is the code for reference.
import os
import time
import torch
import torch.nn as nn
import numpy as np
from torch.multiprocessing import Process
import torch.distributed.rpc as rpc
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.l0 = nn.Linear(2, 2)
        W = np.random.normal(0, 1, size=(2,2)).astype(np.float32)
        self.l0.weight.data = torch.tensor(W, requires_grad=True)
        self.l1 = nn.Linear(2, 2)
        W = np.random.normal(0, 1, size=(2,2)).astype(np.float32)
        self.l1.weight.data = torch.tensor(W, requires_grad=True)

    def forward(self, x):
        return self.l1(self.l0(x))

def test(t):
    print("RPC called")
    for i in range(100000):
        t2 = t*1.000001
    return t2

def run(i):
    rpc.init_rpc("Rank"+str(i), rank=i, world_size=2)
    if i == 0:
        with torch.autograd.profiler.profile(True, False) as prof:
            net = Net()
            optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
            input = torch.tensor([1.0, 2.0])
            reqs = []
            reqs.append(rpc.rpc_async("Rank1", test, args=(input,)))
            reqs.append(rpc.rpc_async("Rank1", test, args=(input,)))
            reqs.append(rpc.rpc_async("Rank1", test, args=(input,)))
            reqs.append(rpc.rpc_async("Rank1", test, args=(input,)))
            #reqs.append(rpc.rpc_async("Rank1", test, args=(input,)))
            for req in reqs:
                input += req.wait()
            print("RPC Done")
            y = net(input)
            optimizer.zero_grad()
            y.sum().backward()
            optimizer.step()
        print(prof.key_averages().table(sort_by="cpu_time_total"))
        prof.export_chrome_trace("test.json")
    else:
        pass
    rpc.shutdown()

if __name__ == "__main__":
    os.environ['MASTER_ADDR'] = "localhost"
    os.environ['MASTER_PORT'] = "29500"
    ps = []
    for i in [0, 1]:
        p = Process(target=run, args=(i,))
        p.start()
        ps.append(p)
    for p in ps:
        p.join()
As you can see, I am just doing some compute-intensive work on the server using RPC.
Below is the result from my profiler.
When I do 1 RPC call:
[screenshot: profiler output for 1 RPC call]
When I do 4 RPC calls:
[screenshot: profiler output for 4 RPC calls]
Default RPC init makes 4 send_recv_threads. So it should be able to “concurrently” run my 4 RPC requests. However, as you can see, the time to finish the RPC requests grew almost linearly (from 460ms to 2200ms) with 4 requests, meaning that they are using only one core and are not being processed in parallel (i.e., concurrent, but not parallel).
I know that Python threads (unlike processes) cannot execute in parallel on different cores. Are RPC threads also (because they are threads) unable to run in parallel on different cores?
Is there a way to run different RPC requests received on different cores? Or should I manually spawn processes on the receiving server side to run the requests in parallel and leverage my multicore server?
Thank you. |
st178829 | This is likely due to the Python GIL on the server side. Can you torchscript (https://pytorch.org/docs/stable/jit.html) the test method and try again? That should avoid the GIL, e.g.:
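A hedged sketch of what scripting the test method could look like (the scripted loop should not hold the GIL while it runs):
import torch

@torch.jit.script
def test(t: torch.Tensor) -> torch.Tensor:
    t2 = t
    for i in range(100000):
        t2 = t * 1.000001
    return t2

# then, as before:
# reqs.append(rpc.rpc_async("Rank1", test, args=(input,))) |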
st178830 | Probably Yes for torch jit scripts, and No for regular python functions.
I will describe my understanding of the framework in detail. For the C++ files, their root directory is (git master) torch/csrc/distributed.
Internally, the torch RPC framework does the following things:
entry point: rpc.rpc_async and rpc.rpc_sync use the same _invoke_rpc(), while rpc.remote uses its own implementation. All three methods categorize your calls into three categories: builtin is for Python builtin functions, udf is for user-defined functions, jit is for torch script. Then all three lower-level calls go into the torch C library.
c library: the C-Python interface is defined in rpc/init.cpp, which uses pybind11 to define the interfaces. On calling the interface, the call guards (constructed before the wrapped functions) are gil_scoped_release, so the GIL is released here for all three interface categories. The wrapped functions are defined in rpc/python_functions.cpp; they will find your target RPC process (C++ class RpcAgent) and send your message to it. The GIL is reacquired after the C function call finishes, due to the automatic destruction of the call guards.
rpc agent: the RpcAgent class initializes the cb_ member on construction, which is a unique_ptr of type RequestCallback. There are two derived classes of RpcAgent: TensorPipeAgent and ProcessGroupAgent. In the RPC use case you are dealing with ProcessGroupAgent; in its member function handleRecv() it uses the cb_ member to handle the callback. All these agents are defined in rpc/<agent_name_lower_case>.cpp, so you should be able to find them easily.
request callback: the request callback is an abstract functor defined in request_callback.h and request_callback.cpp; it has a virtual processMessage() method. Its real implementation in request_callback_impl.cpp defines this method and calls its member method processRpc().
process rpc: this function handles RPC calls based on the message type. SCRIPT_CALL and SCRIPT_REMOTE_CALL handle jit-scripted calls; however, in PYTHON_CALL:
{
  py::gil_scoped_acquire acquire;
  serializedPyObj =
      std::make_shared<SerializedPyObj>(pythonRpcHandler.serialize(
          pythonRpcHandler.runPythonUdf(std::move(upc).movePythonUdf())));
}
There is a py::gil_scoped_acquire here; according to the pybind11 definition, it holds the GIL until acquire is destructed (i.e., when leaving this C++ scope). So no, you cannot leverage multiple cores with the threads used by rpc.
Note this explanation is valid for commit hash 176174a68ba2d36b9a5aaef0943421682ecc66d4 and releases up to 1.5.0; I can see in the source code that they are planning to further abstract the switch case in processRpc() away into an abstract execute() method of RpcCommandBase |
st178831 | So your code is theoretically equivalent to using the ThreadPool(4).map from multiprocessing library. The test code is as follows:
import torch as t
from multiprocessing.pool import ThreadPool
from utils.helper_classes import Timer

def test(t):
    print("RPC called")
    for i in range(100000):
        t2 = t*1.000001
    return t2

def test1():
    tm = Timer()
    ts = t.Tensor([1,2])
    tm.begin()
    for i in range(4):
        test(ts)
    print(tm.end())

def test2():
    tm = Timer()
    ts = t.Tensor([1,2])
    pool = ThreadPool(4)
    tm.begin()
    pool.map(test, (ts, ts, ts, ts))
    print(tm.end())
And my test result is:
test1: 2.091 s
test2: 2.404 s
You can dispatch subprocesses inside the rpc call to work around the GIL. |
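To illustrate that last suggestion (a sketch only; the pool size and helper names are assumptions, not code from the thread), the server-side RPC target can hand the heavy loop to a process pool so it is not bound by the GIL:
from multiprocessing import Pool

_pool = None

def _heavy_compute(t):
    t2 = t
    for _ in range(100000):
        t2 = t * 1.000001
    return t2

def test_in_subprocess(t):
    # Lazily build a small process pool inside the RPC worker; each
    # submitted call runs in a child process with its own GIL.
    global _pool
    if _pool is None:
        _pool = Pool(4)
    return _pool.apply(_heavy_compute, (t,))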
st178832 | Are there any good practices for parallelizing model prediction (inference) calculations, not training? If we have a pretrained .pth model, how do we run it on a few GPUs? Is there a doc that covers this specific topic? |
st178833 | You should have a look at the nn.DataParallel 21 and nn.distributed.DistributedDataParallel 21 docs. Both are fairly easy objects to work with. It will basically split your input in the batch dimension across the ids of the GPUs that you pass at initialization.
Note that some model architectures are not able to be parallelized across devices.
Also, if you are just running inference, you may not see any benefit to multi-GPU parallelization. Or even using a GPU at all. You might try running the model after a call to model.eval() if you are experiencing performance issues. |
st178834 | It will basically split your input in the batch dimension across the ids of the GPUs that you pass at initialization.
So this code is enough:
model = torch.nn.Module(options)
...
if torch.cuda.is_available():
    ids = [i for i in range(torch.cuda.device_count())]
    model = torch.nn.DataParallel(model, device_ids=ids).cuda()
    os.environ['CUDA_VISIBLE_DEVICES'] = ','.join(str(i) for i in ids)
    print("Using ", len(ids), " GPUs!")
...
model_results = model(input)
Note that some model architectures are not able to be parallelized across devices.
What does it depend on? |
st178835 | raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for DataParallel:
Missing key(s) in state_dict: "module.conv_first.weight",
really long list here...
Should we prepare the code in any particular way for wrapping in DataParallel? |
st178836 | nn.DataParallel adds a .module attribute to the model, so that you might see these key errors while trying to load a state_dict from a plain PyTorch model.
You could either add/remove the .module keys manually or store and load the state_dict using the plain model without the nn.DataParallel wrapper.
To store the state_dict you would use torch.save(model.module.state_dict(), path), while you could just load the state_dict before wrapping the model into nn.DataParallel. |
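A short sketch of that save/load pattern (MyModel and the checkpoint path are placeholders):
import torch
import torch.nn as nn

# Save without the wrapper so the keys carry no 'module.' prefix.
torch.save(model.module.state_dict(), "checkpoint.pth")   # model is an nn.DataParallel instance

# Load into the plain model first, then wrap it for multi-GPU use.
plain_model = MyModel()                                    # placeholder model class
plain_model.load_state_dict(torch.load("checkpoint.pth"))
model = nn.DataParallel(plain_model).cuda()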
st178837 | Is it possible to use torch.distributed.rpc (since 1.4.0) with irecv, reduce, broadcast… primitives together? In my experience, it is possible to use send&recv primitives after init_rpc() call, but I am not sure whether a blocking recv / reduce / all_gather call will interfere with rpc api.
If you have any experience with this, please let me know, thank you! |
st178838 | Solved by mrshenli in post #2
Yes, this is possible, but there is one caveat. The current implementation of init_rpc() sets the default process group. It actually shouldn’t do that, and we are fixing it (see this issue). With the current RPC implementation, I see two ways to implement this feature.
Option 1
First, call init_rpc(… |
st178839 | Yes, this is possible, but there is one caveat. The current implementation of init_rpc() sets the default process group. It actually shouldn’t do that, and we are fixing it (see this issue 1). With the current RPC implementation, I see two ways to implement this feature.
Option 1
First, call init_rpc() and let it set the default process group. Then use the new_group API to create new set of process group instances, and only call irecv/reduce/broadcast on the new process group instances. This would avoid messing up states of the default process group that RPC is running on.
Option 2
Directly create process group using its constructor. See the test below as an example:
github.com
pytorch/pytorch/blob/f6f1384811b9cc722f650ed9ead8ee99938c009a/test/distributed/test_c10d.py#L1501-L1502
store = c10d.FileStore(self.file_name, self.world_size)
pg = c10d.ProcessGroupGloo(store, self.rank, self.world_size)
Just curious, do you mind sharing more details of your use case? What's the motivation for combining RPC with collective communications? I have seen a requirement for this to support combining RPC with DDP. Is this also your use case? |
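A rough sketch of Option 1 in user code (the worker names, address, and rank handling are assumptions):
import os

import torch.distributed as dist
import torch.distributed.rpc as rpc

def init_worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"

    # init_rpc currently also initializes the default process group.
    rpc.init_rpc(f"worker{rank}", rank=rank, world_size=world_size)

    # Create a separate group and route every collective through it,
    # leaving the default group (used by RPC) untouched.
    coll_group = dist.new_group(ranks=list(range(world_size)))
    return coll_group

# later: dist.all_reduce(tensor, group=coll_group)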
st178840 | Thank you for your response! Yeah, I have noticed that issue on GitHub. I was also expecting to use a solution similar to your option 1, and now, reassured by your answer, I am ready to use it.
The reason I would like to use RPC together with collective communications is that while the RPC mechanism is great for implementing a point-to-point communication paradigm, it is not very handy for implementing algorithms where several processes are closely correlated and do exactly the same things in parallel; that is the job for collective comm primitives. Conversely, it is very hard to implement on-demand calling using collective comm primitives.
An example from my actual application is a distributed buffer used in IMPALA and APEX, where workers continuously pull models from trainers and do rollouts. The topology of workers and trainers is preset and given by the user; active service discovery is not considered. "Loss of a single worker does not affect any other peers" is an optional, but important, feature.
I have one more question: how do you handle errors such as a worker disconnecting from the group? As far as I can tell, there is just a timeout argument in the rpc api and no explicit error-handling description for rpc or collectives in the documentation. Maybe ZeroMQ is a better alternative if we are concerned about robustness? |
st178841 | Thanks for sharing more details.
I have one more question, how do you handle errors like a worker disconnects from the group?
We are working on elasticity support to allow workers to dynamically join and leave.
As far as I’m concerned, there is just a timeout argument in the rpc api and no explicit error handling descriptions for rpc or collective in the document.
For timeouts, there is a per-RPC timeout on the master branch for rpc_sync and rpc_async. For timeout support in remote, the following PR is adding it.
github.com/pytorch/pytorch: PR "Implement timeout support for RRefs" (pytorch:gh/rohan-varma/129/base ← pytorch:gh/rohan-varma/129/head, opened May 16, 2020 by rohan-varma, +394/-37)
no explicit error handling descriptions for rpc or collective in the document
Right, currently there is only a wait API on the returned Future object, so for now you will need to try-except the wait. In the C++ implementation there are error-handling APIs on the Future; let us expose them to Python as well. Besides this, do you also need things like an error listener for errors that occur in background RPC threads, or something else?
Maybe ZeroMQ is a better alternative if we are concerned about robustness?
It depends on what you need in the application. If you just need P2P communication without tensor/autograd support, then any mature RPC-like system can do. If you need autograd, or would like to do direct device-to-device copy (coming in v1.6), or would like to integrate with TorchScript to speed up training, then torch.distributed.rpc might be better. The latter is a relatively new project and there are still many gaps in the features, but we'd love to drive it together with the community into a better shape. |
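For reference, the try-except pattern mentioned above could look like this (the worker name and function are placeholders):
import torch
import torch.distributed.rpc as rpc

fut = rpc.rpc_async("worker1", torch.add, args=(torch.ones(2), torch.ones(2)))
try:
    result = fut.wait()
except RuntimeError as err:
    # Timeouts, dead peers, and remote exceptions currently surface here.
    print(f"RPC failed: {err}")
    result = None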
st178842 | Thank you for your detailed reply. Currently, my project does not need more in-depth error handling beyond try-excepting the wait. After thorough consideration, I think rpc and collective comm are sufficient for the most necessary functions for now. Whether to use a more mature RPC framework really depends on my users' requests, and I think it is unnecessary at this moment.
Looking forward to more updates on the elasticity enhancement. |
st178843 | According to the definition of function _invoke_remote_builtin and _invoke_remote_python_udf
in pytorch/torch/csrc/distributed/rpc/init.cpp, L557-591:
module.def(
    "_invoke_rpc_builtin",
    [](const WorkerInfo& dst,
       const std::string& opName,
       const float rpcTimeoutSeconds,
       const py::args& args,
       const py::kwargs& kwargs) {
      return std::make_shared<jit::PythonFutureWrapper>(
          pyRpcBuiltin(dst, opName, args, kwargs, rpcTimeoutSeconds));
    },
    py::call_guard<py::gil_scoped_acquire>());
, which are called under the hood by rpc.remote, rpc.rpc_async, rpc.rpc_sync, is it correct to say that the rf parameter in _invoke_rpc:
def _invoke_rpc(to, func, rpc_type, args=None, kwargs=None):
    if not callable(func):
        raise TypeError("function should be callable.")
    qualified_name = torch.jit._find_builtin(func)
    dst_worker_info = _to_worker_info(to)
    # If profiling is enabled, kick off the timer and retrieve back a
    # RecordFunction instance.
    rf = None
    ...
defines timeout in seconds for each individual rpc call?
Sorry I mixed up the python api code in 1.5.0 release and current master branch code. Now master branch provides timeout argument in python api:
@_require_initialized
def rpc_sync(to, func, args=None, kwargs=None, timeout=UNSET_RPC_TIMEOUT): |
st178844 | Yes, timeout arg is now available in rpc_sync and rpc_async APIs on master. Support for remote API will come soon. |
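Usage on master would then be along these lines (the worker name and arguments are placeholders; the timeout is in seconds):
import torch
import torch.distributed.rpc as rpc

x, y = torch.ones(2), torch.ones(2)
ret = rpc.rpc_sync("worker1", torch.add, args=(x, y), timeout=5)    # per-call timeout
fut = rpc.rpc_async("worker1", torch.add, args=(x, y), timeout=5)
ret_async = fut.wait()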
st178845 | I want to train n models (per n, I have f times t data points). I can load all data onto a single GPU. I assign the dataloader batches and each batch gets a number of minibatches. Each minibatch holds the data to train one model (one n). The data per n is rather small, but the number of models is large. The ‘problem’ that I am facing is that the batches are executed sequentially rather than in parallel on the single GPU. Is there a way to parallelize the batches on the single GPU to ensure scaling to a large number of models quicker? The bare, individual training time per model is improved by a factor of X using GPUs over CPU (not given any parallelization).
Thanks in advance. |
st178846 | I think you should be able to spawn multiple processes on a single GPU (using torch.multiprocessing - https://pytorch.org/docs/stable/multiprocessing.html 12), and train each model in a separate process. You may need to tune the number of processes you spawn, since performance may be degraded with too many processes due to resource contention. |
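A minimal sketch of that idea (the tiny model and training loop are stand-ins; the number of concurrent processes is the knob to tune):
import torch
import torch.multiprocessing as mp

def train_one_model(model_idx):
    device = torch.device("cuda:0")             # every worker shares the single GPU
    model = torch.nn.Linear(16, 1).to(device)   # stand-in for one of the n small models
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(100):
        x = torch.randn(32, 16, device=device)
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    torch.save(model.state_dict(), f"model_{model_idx}.pth")

if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)    # spawn is required when CUDA is involved
    n_parallel = 4                              # tune: too many processes contend for the GPU
    for start in range(0, 8, n_parallel):       # e.g. 8 models, 4 at a time
        procs = [mp.Process(target=train_one_model, args=(i,))
                 for i in range(start, start + n_parallel)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()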
st178847 | I am having a problem with chunked loss calculation when using DistributedDataParallel (the code works fine on a single GPU). I use a single node with 4 GPU’s, and am training a transformer model for NLP. Instead of feeding the whole batch to the final linear layer that maps to the vocabulary dimension (called generator in the code below), I split the batch up in chunks. This is common practice, see e.g. http://nlp.seas.harvard.edu/2018/04/03/attention.html 1 (class MultiGPULossCompute).
The code I use for loss calculation (where x are the model activations):
x_copy = x.clone().detach()
x_copy.requires_grad = True
chunk_loss_all = 0.0
for chunk_start in range(0, batch_size, chunk_size):
    # Calculate loss per chunk
    chunk_end = min(chunk_start + chunk_size, batch_size)
    chunk_predictions = generator(x_copy[chunk_start:chunk_end])
    chunk_loss = criterion(chunk_predictions.contiguous().view(-1, chunk_predictions.size(-1)),
                           y[chunk_start:chunk_end].contiguous().view(-1))
    chunk_loss_all += chunk_loss

# backward for chunk losses
chunk_loss_all.backward()
# backward through rest of the model
x_gradients = x_copy.grad.view_as(x)
x.backward(gradient=x_gradients)
optimizer.step()
optimizer.zero_grad()
The error that is produced:
File "loss/compute.py", line 75, in chunked_loss
x.backward(gradient=x_gradients)
File "/home/dstap1/anaconda3/envs/logos/lib/python3.8/site-packages/torch/tensor.py", line 195, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/dstap1/anaconda3/envs/logos/lib/python3.8/site-packages/torch/autograd/__init__.py", line 97, in backward
Variable._execution_engine.run_backward(
RuntimeError: has_marked_unused_parameters_ INTERNAL ASSERT FAILED at /opt/conda/conda-bld/pytorch_1579022027550/work/torch/csrc/distributed/c10d/reducer.cpp:290, please report a bug to PyTorch.
The problem is probably in the multiple .backward() calls. I don’t know how to rewrite my code to solve this problem. Any ideas? |
st178848 | Hello, I am trying to train ImageNet on an 8-GPU machine in DDP mode. However, my machine is not good at reading a large number of small files. I have to make a tar file of the whole dataset (130 GB), read the tar file into memory, and extract it in memory. I have 360 GB of CPU memory, so it would be OK to use DataParallel mode. But it seems I cannot use DistributedDataParallel, since I would need to load the dataset 8 times. Is there any method that lets me train the model in DDP mode?
Thanks! |
st178849 | Solved by mrshenli in post #4
I split the tar file into data_1.tar, data_2.tar, …, data_8.tar. For the k^{th} GPU, i.e., local_rank = k, the process read data_k.tar and build data loader with data_k.tar. Then I get 8 different data loaders (their data are different). In this case, I guess I should set shuffle=True and do not ne… |
st178850 | One option is to use torch.multiprocessing.Queue 48 as the shared memory. The main process can prepare multiple queues and then pass one queue to each DDP processes. The main process reads from the file and dispatch data items to the queue, while DDP processes wait on their own queue for that data item.
Another option is to split the tar file into multiple smaller pieces and let each DDP process read a different one. |
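A bare-bones sketch of the queue option (batch shapes, queue sizes, and the tar-decoding step are placeholders):
import torch
import torch.multiprocessing as mp

def reader(queues, num_batches):
    # Main process: decode the in-memory tar once and dispatch batches
    # round-robin to the per-rank queues.
    for i in range(num_batches):
        batch = torch.randn(32, 3, 224, 224)    # stand-in for a decoded batch
        queues[i % len(queues)].put(batch)
    for q in queues:
        q.put(None)                             # sentinel: no more data

def ddp_worker(rank, queue):
    # Each DDP process consumes only its own queue.
    while True:
        batch = queue.get()
        if batch is None:
            break
        # ... forward/backward on `batch` with the DDP-wrapped model ...

if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)
    world_size = 8
    queues = [mp.Queue(maxsize=16) for _ in range(world_size)]
    workers = [mp.Process(target=ddp_worker, args=(r, queues[r]))
               for r in range(world_size)]
    for w in workers:
        w.start()
    reader(queues, num_batches=800)
    for w in workers:
        w.join()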
st178851 | Hello! Thanks for your answer! I have one more question about the second option. Does that mean something like this?
I split the tar file into data_1.tar, data_2.tar, …, data_8.tar. For the k^{th} GPU, i.e., local_rank = k, the process read data_k.tar and build data loader with data_k.tar. Then I get 8 different data loaders (their data are different). In this case, I guess I should set shuffle=True and do not need a train sampler? |
st178852 | I split the tar file into data_1.tar, data_2.tar, …, data_8.tar. For the k^{th} GPU, i.e., local_rank = k, the process read data_k.tar and build data loader with data_k.tar. Then I get 8 different data loaders (their data are different). In this case, I guess I should set shuffle=True and do not need a train sampler?
Yes, but one caveat is that those input data splits need to generate the same number of input batches for DDP. If, say rank 0 processes 3 batches and rank 1 process 4 batches, rank 1 would hang on the last batch. |
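One way to guard against the uneven-batch case in the sharded setup is to agree on the smallest per-rank batch count (TarShardDataset is a hypothetical dataset class; the reduction assumes a backend and tensor placement that support it):
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader

def build_loader(rank, batch_size):
    dataset = TarShardDataset(f"data_{rank + 1}.tar")   # hypothetical per-rank dataset
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True, num_workers=4)

    # Take the minimum number of batches across ranks so every DDP
    # process runs the same number of iterations.
    n_batches = torch.tensor([len(loader)])
    dist.all_reduce(n_batches, op=dist.ReduceOp.MIN)
    return loader, n_batches.item()

# training loop, truncated to the agreed count:
# for _, batch in zip(range(n_batches), loader): ...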
st178853 | When we do
with torch.cuda.stream(stream):
    torch.distributed.isend(...)
Will it affect the stream that (CUDA-aware) MPI uses for communication, or is that an internal MPI implementation detail? |
st178854 | Given the implementation below, it does not seem ProcessGroupMPI uses any dedicated CUDA streams. So I would assume it’s fully delegated to MPI’s implementation?
github.com
pytorch/pytorch/blob/f314d9a0774062a20015ae522d33eadd45293328/torch/lib/c10d/ProcessGroupMPI.cpp#L791-L814 4
std::shared_ptr<ProcessGroup::Work> ProcessGroupMPI::send(
    std::vector<at::Tensor>& tensors,
    int dstRank,
    int tag) {
  checkSingleTensor(tensors);
  auto& tensor = tensors[0];
  MPI_Request request = MPI_REQUEST_NULL;
  {
    c10::DeviceGuard guard(tensor.device());
    std::unique_lock<std::mutex> globalLock(pgGlobalMutex_);
    MPI_CHECK(MPI_Isend(
        tensor.data_ptr(),
        tensor.numel(),
        mpiDatatype.at(tensor.scalar_type()),
        dstRank,
        tag,
        pgComm_,
        &request));
(snippet truncated in the linked file) |
st178855 | You made me open the black box
I verified that it is an internal MPI implementation detail. I found it in their code.
For example openmpi 8: they use their own streams.
This is critical, because unless the streams they create can be accessed somehow (so far I did not find a way to do it in the CUDA manual, but I'll look deeper), the only way to change the behavior is editing the MPI C code and recompiling.
Why should normal pytorch users care?
Because for normal CUDA-aware usage this is very risky: their streams don't wait for our streams, meaning CUDA-aware MPI is prone to failure unless we fully synchronize our streams before each MPI call.
This would result in a slower (or incorrect) program.
(I personally spent tons of time debugging this…)
I’d like to know what you think, maybe we should open an issue.
By the way, I wonder why in the file you mention irecv uses MPI_ANY_SOURCE:
is that intentional?
github.com
pytorch/pytorch/blob/f314d9a0774062a20015ae522d33eadd45293328/torch/lib/c10d/ProcessGroupMPI.cpp#L856
  auto& tensor = tensors[0];
  MPI_Request request = MPI_REQUEST_NULL;
  {
    c10::DeviceGuard guard(tensor.device());
    std::unique_lock<std::mutex> globalLock(pgGlobalMutex_);
    MPI_CHECK(MPI_Irecv(
        tensor.data_ptr(),
        tensor.numel(),
        mpiDatatype.at(tensor.scalar_type()),
        MPI_ANY_SOURCE,
        tag,
        pgComm_,
        &request));
  }
  return std::make_shared<AsyncWork>(tensor, request);
}

std::shared_ptr<ProcessGroup::Work> ProcessGroupMPI::barrier(
    const BarrierOptions& opts) { |
st178856 | seliad:
By the way, I wonder why in the file you mention irecv uses MPI_ANY_SOURCE :
is that intentional?
I am not aware of the history here. @pietern and @teng-li would know more.
Why should normal pytorch users care?
Because for normal CUDA-aware usage this is very risky: their streams don't wait for our streams, meaning CUDA-aware MPI is prone to failure unless we fully synchronize our streams before each MPI call.
This would result in a slower (or incorrect) program.
I agree, full synchronization is not acceptable here. Can MPI take a CUDA stream as an argument and then work on that stream like NCCL does? If this is possible we can let ProcessGroupMPI manage the streams and use CUDA event to synchronize. |
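Until something like that exists, the workaround discussed above is to make sure the producing stream has finished before the tensor is handed to MPI (a sketch; the producer computation and peer rank are assumptions):
import torch
import torch.distributed as dist

x = torch.ones(4, device="cuda")
stream = torch.cuda.Stream()
with torch.cuda.stream(stream):
    out = x * 2          # stand-in for the kernel that produces the tensor to send

# MPI copies/transfers on its own internal streams, so synchronize the
# producing stream before issuing the isend.
stream.synchronize()
req = dist.isend(out, dst=1)
req.wait()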
st178857 | Created an issue in openmpi repo.
github.com/open-mpi/ompi: issue "synchronize cuda-aware mpi streams 1" (opened May 13, 2020 by saareliad)
Background information: v4.0.3, installed from source (tar), cuda aware mpi, cuda 10.2
This is not a system problem, but suspected behavior/implementation issue in cuda-aware MPI. it...
@mrshenli As far as I know, MPI doesn't support what you suggest, so it might be better to ask them directly? |
st178858 | @mrshenli
I think what https://github.com/open-mpi/ompi/issues/7733#issuecomment-629806195 3 suggests is what should be implemented inside PyTorch in C++, if we want to use the MPI process group correctly.
Maybe add optional event argument to torch.dist calls (cuda_event_to_sync_with or something).
As far as I know, callbacks on cuda events are not exposed to python API.
(Too bad they aren’t, actually) |
st178859 | I see. Thanks for sharing!
Maybe add optional event argument to torch.dist calls ( cuda_event_to_sync_with or something).
I am not sure whether we should add this to c10d Python API if this is only required by the MPI backend. Could you please add an issue on GH to kick off the discussion on the pitch? Let’s discuss there to see what are the options.
As far as I know, callbacks on cuda events are not exposed to python API.
(Too bad they aren’t, actually)
We are actually exploring CUDA event callback for RPC, and also considering using it to handle CUDA errors. Let me create an issue to track this. |
st178860 | github.com/pytorch/pytorch: issue "Add CUDA callback to Python API 16" (opened May 18, 2020 by mrshenli; labels: feature, module: cuda, module: rpc, triaged)
As discussed in this forum thread. CUDA stream/event callback can be a useful feature. It might be helpful to add it... |
st178861 | Hi everyone,
I have stumbled upon a problem when using DistributedDataParallel. Strangely, after a few epochs of successful training, the loss goes up for a while. I noticed that both training and validation losses go down for the batches on GPU0, but go up for the other 3 GPUs. I believe I'm doing something wrong with DistributedDataParallel, but can't find the bug. Has anyone seen a similar problem, or can you guess what the reason might be?
In the chart you can see training and validation losses for GPU0 and average of all 4.
[screenshot: training and validation loss chart] |
st178862 | Solved by Martun_Karapetyan in post #4
Thanks for the help.
I checked the model parameters, they were in perfect sync.
I had another stupid bug. I used ReduceLROnPlateau when the validation accuracy plateaued, but each process looked at the validation accuracy of its subset of data. 1st process reduced the learning rate first, the othe… |
st178863 | Hey @Martun_Karapetyan, DDP should have kept all model replicas in sync, i.e., all model replicas should have the same parameter values. Could you please check if this is true in your use case, say by using all_gather to collect all model parameters into one rank and compare? |
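A quick way to run that check could be to broadcast rank 0's parameters and compare locally (a sketch; it assumes the default process group is already initialized):
import torch
import torch.distributed as dist

def check_params_in_sync(model, rank):
    for name, p in model.named_parameters():
        ref = p.detach().clone()
        dist.broadcast(ref, src=0)      # every rank receives rank 0's copy
        if rank != 0 and not torch.allclose(p.detach(), ref):
            print(f"rank {rank}: parameter {name} is out of sync")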
st178864 | Thanks for the help.
I checked the model parameters, they were in perfect sync.
I had another stupid bug. I used ReduceLROnPlateau when the validation accuracy plateaued, but each process looked at the validation accuracy of its subset of data. 1st process reduced the learning rate first, the others reduced it 1 epoch later, hence the problem. |
st178865 | When I use two gpus to train my model, I got RuntimeError below:
Process SpawnProcess-2:
Traceback (most recent call last):
File “/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/multiprocessing/process.py”, line 258, in _bootstrap
self.run()
File “/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/multiprocessing/process.py”, line 93, in run
self._target(*self._args, **self._kwargs)
File “/home/ubuntu/ogb/ogb/graphproppred/m.py”, line 190, in run
main(rank, dev_id, args)
File “/home/ubuntu/ogb/ogb/graphproppred/m.py”, line 149, in main
train(args[‘gnn’], model, device, train_loader, criterion, optimizer, args[‘num_devices’], rank)
File “/home/ubuntu/ogb/ogb/graphproppred/m.py”, line 41, in train
optimizer.backward_and_step(loss)
File “/home/ubuntu/ogb/ogb/graphproppred/utils.py”, line 146, in backward_and_step
self._sync_gradient()
File “/home/ubuntu/ogb/ogb/graphproppred/utils.py”, line 127, in _sync_gradient
dist.all_reduce(p.grad.data, op=dist.ReduceOp.SUM)
File “/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py”, line 902, in all_reduce
work = _default_pg.allreduce([tensor], opts)
RuntimeError: Stop_waiting response is expected
Process SpawnProcess-1:
Traceback (most recent call last):
File “/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/multiprocessing/process.py”, line 258, in _bootstrap
self.run()
File “/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/multiprocessing/process.py”, line 93, in run
self._target(*self._args, **self._kwargs)
File “/home/ubuntu/ogb/ogb/graphproppred/m.py”, line 190, in run
main(rank, dev_id, args)
File “/home/ubuntu/ogb/ogb/graphproppred/m.py”, line 149, in main
train(args[‘gnn’], model, device, train_loader, criterion, optimizer, args[‘num_devices’], rank)
File “/home/ubuntu/ogb/ogb/graphproppred/m.py”, line 41, in train
optimizer.backward_and_step(loss)
File “/home/ubuntu/ogb/ogb/graphproppred/utils.py”, line 146, in backward_and_step
self._sync_gradient()
File “/home/ubuntu/ogb/ogb/graphproppred/utils.py”, line 127, in _sync_gradient
dist.all_reduce(p.grad.data, op=dist.ReduceOp.SUM)
File “/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py”, line 902, in all_reduce
work = _default_pg.allreduce([tensor], opts)
RuntimeError: Stop_waiting response is expected
Here is the code where the error occurred:
def _sync_gradient(self):
    """Average gradients across all subprocesses."""
    for param_group in self.optimizer.param_groups:
        for p in param_group['params']:
            if p.requires_grad and p.grad is not None:
                # print(p.grad.data.shape, p.grad.data.device)
                dist.all_reduce(p.grad.data, op=dist.ReduceOp.SUM)
                p.grad.data /= self.n_processes
Ps. When I do "print(p.grad.data.shape, p.grad.data.device)", I find the grads are normal and have the same shape [1,300] on the 2 different gpus, so I'm confused about why it stops here. |
st178866 | Solved by yangkz in post #5
The error has been fixed.
‘Stop_waiting response is expected’ error occurred in TCPStore.cpp. So it was actually the communication problem. It works finally when I reinstalled NCCL: https://github.com/NVIDIA/nccl.git |
st178867 | Is the result of p.requires_grad and p.grad is not None always the same across all processes and all parameters? If not, the allreduce ops on different processes could run into a desync.
Which backend are you using (NCCL/Gloo/MPI) and which PyTorch version are you using? It will be helpful to have a min repro of this error. |
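If that condition does turn out to differ across ranks, one possible sketch of a fix is to issue the same allreduce sequence everywhere and substitute zeros where a gradient is missing (based on the _sync_gradient code above, but not code from the thread):
import torch
import torch.distributed as dist

def _sync_gradient(self):
    """Average gradients across all subprocesses with a rank-independent allreduce order."""
    for param_group in self.optimizer.param_groups:
        for p in param_group['params']:
            # Every rank performs the same collectives in the same order,
            # even for parameters that received no gradient locally.
            grad = p.grad.data if p.grad is not None else torch.zeros_like(p.data)
            dist.all_reduce(grad, op=dist.ReduceOp.SUM)
            if p.grad is not None:
                p.grad.data = grad / self.n_processes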