|
{: , : , : [{: , : , : , : 303}], : []} |
|
{: , : , : [{: , : if you have nan in your net, you're screwed\", so I would just return NaN there, personally.\nMy practical usecase is for KITTI groundtruth depthmaps and FlowMaps which are 2D sparse arrays.", "commid": "pytorch_issue_7645", "tokennum": 506}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-066dd6b918ee24c19d6d1836ab10af295d1039207535a528a4626cbf00ca2778", "query": "for(iw = 0; iw < kW; iw++) { real val = *(ip + it*istrideT + ih*istrideH + iw*istrideW); <del> if (val > maxval) </del> <ins> if ((val > maxval) || isnan(val)) </ins> { maxval = val; maxindex = (it+istartT)*isizeH*isizeW + (ih+istartH)*isizeW + (iw+istartW);", "positive_passages": [{"docid": "doc-en-pytorch-202a6b4a46ae62b8f7553c0429b323612fb197593d261f12c30c05b56ae7ce77", "text": "For an algorithm that use FlowNet-like architecture that outputs predictions at multiple scale levels, we can either compare predictions to downscaled GT or upscaled predictions to GT. The first being obviously less computationally expensive, the ignore NaN would help downscaling such sparse 2D maps. For the moment we do something I find ugly, you can see it Essentially it zeroes the s, takes the map and construct two maps of positive and negative values which are then maxpooled and back together. I am actually open for a strict \"no-nan\" policy on pooling functions, but in that case better enforce it before someone writes a code that tries to benefit from maxpooling ignore s feature/bug and if you have a clever way of pooling sparse 2D tensors, I'm open to it, but I guess it's a topic for pytorch forums ;)\nPersonally, I think it is more sane have NaN -NaN in the pooling and offer a parametrizable (where you get to pick the values) that does or so.\nSo the proposed fix does NaN -NaN similar to max. I didn't try to fix gradients for the nan case. This would involve keeping the values and I don't think that is worth it (in particular because I would not expect the pooling layer to be last, and otherwise we'd probably get NaN as grad_out). If you are reasonably happy with it, I'd move it to a PR.\n+1 for NaN -NaN, \"abyssus abyssum invocat\" We could also add an optional mask which would be a ByteTensor of the same size, specifying whether or not the considered pixel is used for the pooling, that could be used for any kind of pulling. An functionality would then be to provide the mask\nI could work with that. to continue to be picky, the potential drawback is that it won't work for other pooling methods such as average pooling or median pooling. The problem here is that the max operation inherently ignores nonmax values which can be leveraged for \ operation, but it has a \ feel to it since it woun't be as easy for other kinds of 2D operations.\nTo be clear, the fix is good for me, but I figured a related discussion on selective pooling (whether to ditch NaN values or anything you want to ignore) could happen (maybe not on this issue ?)", "commid": "pytorch_issue_7645", "tokennum": 513}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-1fcf3bc313dee3793326e5d79cfda64509e9f7dec1d4b06b486ee9d9f09de30d", "query": "raise ValueError('num_workers cannot be negative; ' 'use num_workers=0 to disable multiprocessing.') <ins> if sys.platform == \"win32\" and self.num_workers > 0: raise ValueError('num_workers > 0 is not supported on Windows') </ins> if batch_sampler is None: if sampler is None: if shuffle:", "positive_passages": [{"docid": "doc-en-pytorch-8d645240428935c6b8a48ae7f0ddcf46aaa90faf077d7ce3d52a0c84ff1a123a", "text": "This issue tracks the components / tests that are not working on Windows: [ ] : currently disabled on Windows because Windows doesn't have and we need to look for substitutes [ ] : Fuser is disabled on Windows because the current implementation uses symbols from Linux-specific headers and . We will need to find alternatives for Windows. [ ] : some parts of and are disabled because Windows doesn't support opening an already opened file (see discussion at and ) [ ] : currently doesn't work with Windows ( is the porting diff) [ ] : in causes intermittent CUDA out-of-memory error on Windows [ ] , : DataLoader with multiple workers causes intermittent CUDA out-of-memory error on Windows. [x] [x] (done in ) [x] [x] - - [x] [x] [x] - For more discussions, also see: cc\nThe first one is solved by but I think DataLoader can be further improved when is fininshed.\nCool I will mark it as resolved :)\nThe has been in .\nWe can try to revert and , because the memory leak in the CPU side could also cause CUDA errors.\nAre they fixed by I think we can revert them after we merge the PR.\nI think this may be related. Since once the memory of the CPU side is low, the will fail with too.\nI think it could be. For what it's worth, when I tried to inspect the CUDA OOM error, showed no process that was taking memory, but running CUDA tests on the machine would still fail.\nNow that is merged into master, could you please try to revert the changes on and ?\nAwesome! Just to understand it better: does fix both numworker=1 and numworker1 cases?\nYes, they are both solved.\nI guess we should mark the cpp_extension test as completed, for it's now enabled in CI.\nClosing this issue due to age and because its references are long out of date. For example, distributed tests now have their own jobs.commidpytorch_issue_4092tokennumnegative_passages |
|
query_idq-en-pytorch-202b1a281a0d21743153927ee467e8ba4c18b5ea5a14b37e71db236027828df7query// with the mode. struct ModeUnsignedBoolPair match = {0, false}; <del> match = reduceBlockN<struct ModeUnsignedBoolPair, MatchReduceOp<struct ModeUnsignedBoolPair>, 2> </del> <ins> match = reduceBlockWithNThreadLocalReductions<struct ModeUnsignedBoolPair, MatchReduceOp<struct ModeUnsignedBoolPair>, 2> </ins> (ubpmem, ubpp, sliceSize, MatchReduceOp<struct ModeUnsignedBoolPair>(), match); // Finally, we have the mode, and an index where it occurs. We use a single threadpositive_passagesdociddoc-en-pytorch-3dce8e556d6d4bd2ea519d551101e82db52b876cebdea9b13c4207de2ba26137textNot sure what is the reason for these errors, any suggestions? I suspect the clang version is not supported? Here is the\nclang have more strict checking , pull request will fixed it.\nfixed, thanks tocommidpytorch_issue_745tokennumnegative_passages |
|
query_idq-en-pytorch-21d9d9ac8e6325c9af9098cf1d35c71f8f1bc7483a724f70c90d1ee2e3d4a070query} else { // transposed if (input.size(1) != weight.size(0)) { std::stringstream ss; <del> ss << \ << transposed << \ << weight.sizes() << \ << input.sizes() << \ </del> <ins> ss << \ << transposed << \ << weight.sizes() << \ << input.sizes() << \ </ins> << weight.size(0) << \ << input.size(1) << \; throw std::runtime_error(ss.str());positive_passagesdociddoc-en-pytorch-3fd0655cf1f98414cb74036460095097c829c4e79824f44f878c0695b6ea8a48textgot the following err msg with conv2D and ver 0.4: RuntimeError: expected stride to be a single integer value or a list of 1 values to match the convolution dimensions, but got stride=[1, 1]. I was not able to see the problem so I uninstalled 0.4 and installed 0.3.1 Still got an err but this time it said that the expected input should be a 4d tensor and it got a 3D tensor. This helped understand the issue and I fixed it (just add a dimension). reinstalled 0.4 and its working (no surprise...). I think 0.4 should have the same err feedback otherwise its really hard to undrstand the problem How you installed PyTorch (conda, pip, source): conda Build command you used (if compiling from source): OS: macos PyTorch version: 0.4 Python version: 3.5 CUDA/cuDNN version: no GPU models and configuration: GCC version (if compiling from source): CMake version: Versions of any other relevant libraries:\nCan you please post a small self-contained code snippet that would let us reproduce the problem?\nThe strange error messge was also mentioned a few times in the forum. Here is a code snippet for PyTorch :\nThis is because we direct all conv ops to and infer dim + throw error message there.\nHmm that's not great. We might want to pass the expected dimensionality of the convolution to the generic implementation, so that we can improve the error messages.\nThx for the responses. Just a reminder, for the same err the provided err msg in version 0.3.1 was very informative\nYes, this should definitely be fixed. Sorry about this!\nThe error message of seems to be a bit misleading, too. Code: Should I create a new issue or is it related to the current one?\nthat appears fixed on master! :D", "commid": "pytorch_issue_7332", "tokennum": 434}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-24958dcac671e563d27c0349df1d5c8487f8d5fc26fb2c0c895955fce808c502", "query": "if (weight_dim != k) { std::stringstream ss; <del> ss << \"Expected \" << k << \"-dimensional weight for \" << k << \"-dimensional input \" << input.sizes() << \", but got weight of size \" << weight.sizes() << \" instead\"; </del> <ins> ss << \"Expected \" << weight_dim << \"-dimensional input for \" << weight_dim << \"-dimensional weight \" << weight.sizes() << \", but got input of size \" << input.sizes() << \" instead\"; </ins> throw std::runtime_error(ss.str()); } if (weight.size(0) < groups) {", "positive_passages": [{"docid": "doc-en-pytorch-3fd0655cf1f98414cb74036460095097c829c4e79824f44f878c0695b6ea8a48", "text": "got the following err msg with conv2D and ver 0.4: RuntimeError: expected stride to be a single integer value or a list of 1 values to match the convolution dimensions, but got stride=[1, 1]. I was not able to see the problem so I uninstalled 0.4 and installed 0.3.1 Still got an err but this time it said that the expected input should be a 4d tensor and it got a 3D tensor. This helped understand the issue and I fixed it (just add a dimension). reinstalled 0.4 and its working (no surprise...). I think 0.4 should have the same err feedback otherwise its really hard to undrstand the problem How you installed PyTorch (conda, pip, source): conda Build command you used (if compiling from source): OS: macos PyTorch version: 0.4 Python version: 3.5 CUDA/cuDNN version: no GPU models and configuration: GCC version (if compiling from source): CMake version: Versions of any other relevant libraries:\nCan you please post a small self-contained code snippet that would let us reproduce the problem?\nThe strange error messge was also mentioned a few times in the forum. Here is a code snippet for PyTorch :\nThis is because we direct all conv ops to and infer dim + throw error message there.\nHmm that's not great. We might want to pass the expected dimensionality of the convolution to the generic implementation, so that we can improve the error messages.\nThx for the responses. Just a reminder, for the same err the provided err msg in version 0.3.1 was very informative\nYes, this should definitely be fixed. Sorry about this!\nThe error message of seems to be a bit misleading, too. Code: Should I create a new issue or is it related to the current one?\nthat appears fixed on master! :Dcommidpytorch_issue_7332tokennumnegative_passages |
|
query_idq-en-pytorch-286cccb6da61a23a2bcb079e734646e80623fed1741e1e8e7ca8aaf9c8a7a41equeryreturn Subscript(base, [build_SliceExpr(ctx, base, expr.slice)]) elif sub_type is ast.ExtSlice: return Subscript(base, build_ExtSlice(ctx, base, expr.slice)) <ins> elif sys.version_info >= (3, 9): # In Python3.9 array indicies are not wrapped in ast.Index if sub_type is ast.Tuple: # N-dimensional indexing using Tuple: x[(i, j, k)] is equivalent to x[i, j, k] indices = [] for index_expr in expr.slice.elts: if isinstance(index_expr, ast.Slice): indices.append(build_SliceExpr(ctx, base, index_expr)) else: indices.append(build_expr(ctx, index_expr)) return Subscript(base, indices) return Subscript(base, [build_expr(ctx, expr.slice)]) </ins> else: # Ellipsis (can only happen in Python 2) raise NotSupportedError(base.range(), \)positive_passagesdociddoc-en-pytorch-a69891f877be902cc456139a7e993c9b38aa4f28c7548672cc69a9e3f1b209aatextFollowing example: fails in Python-3.9 with cc\nAnd the reason for that is very simple:\nAnother interesting offender:commidpytorch_issue_48674tokennumnegative_passages |
|
query_idq-en-pytorch-29b4f12861a142464a9b25b830cabbf59ed5cb2adfaf60f093acac0c57573ea1queryconst float lr = *lr_ptr; if (!nesterov) { CUDA_1D_KERNEL_LOOP(i, N) { <del> moment_out[i] = mu * moment[i] * lr * grad[i]; </del> <ins> moment_out[i] = mu * moment[i] + lr * grad[i]; </ins> param_out[i] = param[i] - moment_out[i]; } } else {positive_passagesdociddoc-en-pytorch-4ad8a9f7a71b2ddce48315dd16c5fc5671be60942858425b0385ed5b869d99cftext+69 I read: To me, it should be: The CPU code is not affected.\nThe two lines of code are identical?\nNo, one is mu times moment time lr times grad instead of mu times moment plus lr times grad.\nI see. Send a PR? :)\nWill do :)commidpytorch_issue_6975tokennumnegative_passages |
|
query_idq-en-pytorch-2b46b8dbf0d931ed37a0e15ed23bc29801672fff95ab89d5b94bca2bc2a0c892queryif isinstance(expr.slice.value, ast.Tuple): # N-dimensional indexing using Tuple: x[(i, j, k)] is equivalent to x[i, j, k] # XXX: Indexing using a list is **different**! It triggers advanced indexing. <del> indices = [] for index_expr in expr.slice.value.elts: indices.append(build_expr(ctx, index_expr)) </del> <ins> indices = [build_expr(ctx, index_expr) for index_expr in expr.slice.value.elts] </ins> return Subscript(base, indices) else: return Subscript(base, [build_expr(ctx, expr.slice.value)])positive_passagesdociddoc-en-pytorch-a69891f877be902cc456139a7e993c9b38aa4f28c7548672cc69a9e3f1b209aatextFollowing example: fails in Python-3.9 with cc\nAnd the reason for that is very simple:\nAnother interesting offender:commidpytorch_issue_48674tokennumnegative_passages |
|
query_idq-en-pytorch-2e3ced0554ae542f38fa4f46492eb2a9c1f34db3580330fa33d016c154e86473query'zero-dimensional.*cannot be concatenated'): torch.cat([x, y]) <del> def test_cat_empty(self): </del> <ins> @staticmethod def _test_cat_empty(self, use_cuda=False): </ins> # FIXME: this is legacy behavior and should be removed # when we support empty tensors with arbitrary sizes <del> x = torch.randn(4, 3, 32, 32) empty = torch.randn(0) </del> <ins> if use_cuda: dtype = torch.cuda.float32 else: dtype = torch.float32 x = torch.randn((4, 3, 32, 32), dtype=dtype) empty = torch.randn((0,), dtype=dtype) </ins> res1 = torch.cat([x, empty], dim=1) res2 = torch.cat([empty, x], dim=1) self.assertEqual(res1, res2) <del> conv = torch.nn.Conv2d(3, 3, kernel_size=1) </del> <ins> conv = torch.nn.Conv2d(3, 3, kernel_size=1).float() if use_cuda: conv = conv.cuda() </ins> res1 = torch.cat([conv(x), empty], dim=1) res2 = torch.cat([empty, conv(x)], dim=1) self.assertEqual(res1, res2)positive_passagesdociddoc-en-pytorch-5b118069fa67273ca83fc38bf735c53ddc46045a52aba22063750dd8da02e407textgdb points that the error might be in trying to get a size from an empty tensor: I'm using PyTorch version 0.4.0a0+\nThere's a check that should exclude zero-dim tensors from ( should be 0 in this case), so I'm wondering why that's not happening right now... edit: Nevermind, I was running an old build. I pulled the latest master and with being an empty tensor (with shape (0,)), cat crashes.\nRelated: We should probably rewrite to better handle these cases\nOkay, I found the bug. The CUDA version of check doesn't check the case where the input contains all empty tensors, while the CPU version does. I'll put up a fix soon.commidpytorch_issue_5739tokennumnegative_passages |
|
query_idq-en-pytorch-2ec2247f8a7bf05febc899bf58721a55b2d0844e172b0df05134abd1edcc3ca8query{ tcntr = y*iwidth + x; real val = *(ip + tcntr); <del> if (val > maxval) </del> <ins> if ((val > maxval) || isnan(val)) </ins> { maxval = val; maxindex = tcntr;positive_passagesdociddoc-en-pytorch-73afc555bf492c7072a4f9994675674d19a7420810f08eaca2d074197550e516textmax pooling functions are not consistent with max functions. Below an example, every max pooling (be it 1d, 2d or 3d, adaptive or not) acts the same, on cpu or on cuda. Essentially, there are two fondamental differences : max pooling of all values is while for it's max pooling of nan and valid values is valid values, which means s get ignored, while for , as soon as there is a value, the result is . More generally, choosing explicetely how to deal with as in numpy () could be a solution, but maybe this is related to CuDNN's max pooling ? Built from latest sources (as of 05/17) PyTorch version: 0.5.0a0+ Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.4 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: Quadro M1000M Nvidia driver version: 390.30 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux-gnu/libcudnnstaticv7.a Versions of relevant libraries: [conda] magma-cuda91 2.3.0 1 pytorch [conda] torch 0.5.0a0+ <pip[conda] torch 0.3.1b0+ <pip[conda] torch 0.5.0a0+ <pip\nHi, . Thank you for providing the example. I'll take a look.\nHmhm. I seem to get a double free in the ...\nAnother question: Do we really want to provide an option to ignore NaN? My view is that \"if you have nan in your net, you're screwed\, : , : 506}], : []} |
|
{: , : , : [{: , : no-nan\abyssus abyssum invocat\ignore some pixels\non universal\, : , : 513}], : []} |
|
{: , : , : [{: , : , : , : 65}], : []} |
|
{: , : , : [{: , : , : , : 82}], : []} |
|
{: , : , : [{: , : , : , : 43}], : []} |
|
{: , : THCTensor.hpp\THCHalf.h\THCHalfAutoNumerics.cuh\THCNumerics.cuh\common.h\, : [{: , : if you have nan in your net, you're screwed\", so I would just return NaN there, personally.\nMy practical usecase is for KITTI groundtruth depthmaps and FlowMaps which are 2D sparse arrays.", "commid": "pytorch_issue_7645", "tokennum": 506}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-4099cb47e51f5963ac2fbc751ead939ce3721d76c3af31b04a878eb6e554a458", "query": "#include \"THCTensor.hpp\" #include \"THCHalf.h\" #include \"THCHalfAutoNumerics.cuh\" <ins> #include \"THCNumerics.cuh\" </ins> #include \"common.h\" // kernels borrowed from Caffe", "positive_passages": [{"docid": "doc-en-pytorch-202a6b4a46ae62b8f7553c0429b323612fb197593d261f12c30c05b56ae7ce77", "text": "For an algorithm that use FlowNet-like architecture that outputs predictions at multiple scale levels, we can either compare predictions to downscaled GT or upscaled predictions to GT. The first being obviously less computationally expensive, the ignore NaN would help downscaling such sparse 2D maps. For the moment we do something I find ugly, you can see it Essentially it zeroes the s, takes the map and construct two maps of positive and negative values which are then maxpooled and back together. I am actually open for a strict \"no-nan\" policy on pooling functions, but in that case better enforce it before someone writes a code that tries to benefit from maxpooling ignore s feature/bug and if you have a clever way of pooling sparse 2D tensors, I'm open to it, but I guess it's a topic for pytorch forums ;)\nPersonally, I think it is more sane have NaN -NaN in the pooling and offer a parametrizable (where you get to pick the values) that does or so.\nSo the proposed fix does NaN -NaN similar to max. I didn't try to fix gradients for the nan case. This would involve keeping the values and I don't think that is worth it (in particular because I would not expect the pooling layer to be last, and otherwise we'd probably get NaN as grad_out). If you are reasonably happy with it, I'd move it to a PR.\n+1 for NaN -NaN, \"abyssus abyssum invocat\" We could also add an optional mask which would be a ByteTensor of the same size, specifying whether or not the considered pixel is used for the pooling, that could be used for any kind of pulling. An functionality would then be to provide the mask\nI could work with that. to continue to be picky, the potential drawback is that it won't work for other pooling methods such as average pooling or median pooling. The problem here is that the max operation inherently ignores nonmax values which can be leveraged for \ operation, but it has a \ feel to it since it woun't be as easy for other kinds of 2D operations.\nTo be clear, the fix is good for me, but I figured a related discussion on selective pooling (whether to ditch NaN values or anything you want to ignore) could happen (maybe not on this issue ?)", "commid": "pytorch_issue_7645", "tokennum": 513}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-41d4dd651a0d4badf059091f9ac8324ffb5f3d6684e9d5e78affa9f305316e3b", "query": "# This should work though l2.weight = Variable(torch.randn(10, 10)) <ins> def test_embedding_padding_idx(self): embedding = nn.Embedding(10, 20, padding_idx = 0) input = Variable(torch.LongTensor([[0,2,4,5],[4,3,0,9]])) output = embedding(input) self.assertEqual(output[0][0].sum().data[0], 0) self.assertEqual(output[1][2].sum().data[0], 0) </ins> def test_Dropout(self): input = torch.Tensor(1000) self._test_dropout(nn.Dropout, input)", "positive_passages": [{"docid": "doc-en-pytorch-6dae6822e59fd00098fabde359ce44f544da09d2deb5db328ca4d8e6c0d81333", "text": "The following code, which repeatedly exports a model with , has a memory leak. During the export, every tensor parameter in is cloned once and then immediately leaked forever, without ever being collected by the GC. It's not the underlying buffer that's cloned, it's the lightweight wrapper object itself. Still, for long running processes that often export networks in this manner this is a unbounded memory leak that eventually results in OOM errors. I've reproduced this issue on both Linux and Windows, with pytorch versions and respectively. The final five lines inside the for loop are to debug what happens, they are not neccesary to reproduce the issue. forces a gc collection cycle, ensuring we're not accidentally counting dead objects show the total amount of objects that exist for each type for all objects whose amount has increased. From this we can see that we're leaking 2 additional tensors per that the tensors we're leaking have shapes and , so they're just the weight and bias of the linear that the underlying buffer is always the same, so only the shallow class instance is being that nothing is pointing to these newly created objects, so they should be collected. Example output after running for a while: seems closely related but is more about a temporary doubling in memory, this issue is about a permanent memory leak. was closed as a duplicate of the previous issue, but better matches this issue. Collecting environment information. PyTorch version: 1.10.0 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.5 LTS (x8664) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: 6.0.0-1ubuntu2 (tags/RELEASE600/final) CMake version: version 3.10.2 Libc version: glibc-2.17 Python version: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.15.0-41-generic-x8664-with-debian-buster-sid Is CUDA available: True CUDA runtime version: 11.3.", "commid": "pytorch_issue_82532", "tokennum": 517}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-41d4dd651a0d4badf059091f9ac8324ffb5f3d6684e9d5e78affa9f305316e3b", "query": "# This should work though l2.weight = Variable(torch.randn(10, 10)) <ins> def test_embedding_padding_idx(self): embedding = nn.Embedding(10, 20, padding_idx = 0) input = Variable(torch.LongTensor([[0,2,4,5],[4,3,0,9]])) output = embedding(input) self.assertEqual(output[0][0].sum().data[0], 0) self.assertEqual(output[1][2].sum().data[0], 0) </ins> def test_Dropout(self): input = torch.Tensor(1000) self._test_dropout(nn.Dropout, input)", "positive_passages": [{"docid": "doc-en-pytorch-2f66b550279e97c384d138d0938f18ee7eb60e94c23ce88afd819c9c51455183", "text": "109 GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti GPU 1: NVIDIA GeForce RTX 3080 Ti Nvidia driver version: 515.48.07 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.21.2 [pip3] torch==1.10.0 [pip3] torchelastic==0.2.0 [pip3] torchtext==0.11.0 [pip3] torchvision==0.11.0 [conda] blas 1.0 mkl [conda] cudatoolkit 11.3.1 ha36c4319 nvidia [conda] ffmpeg 4.3 hf484d3e0 pytorch [conda] mkl 2021.3.0 h06a4308520 [conda] mkl-service 2.4.0 py37h7f8727e0 [conda] mklfft 1.3.1 py37hd3c417c0 [conda] mklrandom 1.2.2 py37h51133e40 [conda] numpy 1.21.2 py37h20f2e390 [conda] numpy-base 1.21.2 py37h79a11010 [conda] pytorch 1.10.0 py3.7cuda11.3cudnn8.2.00 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchelastic 0.2.0 pypi0 pypi [conda] torchtext 0.11.0 py37 pytorch [conda] torchvision 0.11.0 py37cu113 pytorch\nI'm also observing something like this without JIT, although I'm not sure it's the same issue. The program works fine until I add ONNX export every time a checkpoint is saved. Once I do that, the GPU memory usage grows until it OOMs.commidpytorch_issue_82532tokennumnegative_passages |
|
query_idq-en-pytorch-41d4dd651a0d4badf059091f9ac8324ffb5f3d6684e9d5e78affa9f305316e3bquery# This should work though l2.weight = Variable(torch.randn(10, 10)) <ins> def test_embedding_padding_idx(self): embedding = nn.Embedding(10, 20, padding_idx = 0) input = Variable(torch.LongTensor([[0,2,4,5],[4,3,0,9]])) output = embedding(input) self.assertEqual(output[0][0].sum().data[0], 0) self.assertEqual(output[1][2].sum().data[0], 0) </ins> def test_Dropout(self): input = torch.Tensor(1000) self._test_dropout(nn.Dropout, input)positive_passagesdociddoc-en-pytorch-51b93f6c54a298b88600b1182a14c127961bec33f155f28f6e4eb4444c79f1dftextThis issue is about lightweight Tensor objects being leaked, not the underlying (potentially GPU-side) buffer. I think your issue is a different one.\nI encountered the same error, is there a solution to this problem?\nPlease validate with the latest release and re-summit an issue if you see the same thing. As we are moving away from torchscript minor leaks are unlikely to be fixed, but contribution is welcomed.commidpytorch_issue_82532tokennumnegative_passages |
|
query_idq-en-pytorch-440e73dec00a24d4e6b6f7a738f49c36aba4ceb41a955f172524f45b894c7d54queryz = torch.cat([x, y]) self.assertEqual(z.size(), (21, SIZE, SIZE)) <ins> def test_cat_empty(self): TestTorch._test_cat_empty(self, use_cuda=True) </ins> def test_bernoulli(self): x = torch.tensor([0, 1], dtype=torch.cuda.float32) self.assertEqual(x.bernoulli().tolist(), [0, 1])positive_passagesdociddoc-en-pytorch-5b118069fa67273ca83fc38bf735c53ddc46045a52aba22063750dd8da02e407textgdb points that the error might be in trying to get a size from an empty tensor: I'm using PyTorch version 0.4.0a0+\nThere's a check that should exclude zero-dim tensors from ( should be 0 in this case), so I'm wondering why that's not happening right now... edit: Nevermind, I was running an old build. I pulled the latest master and with being an empty tensor (with shape (0,)), cat crashes.\nRelated: We should probably rewrite to better handle these cases\nOkay, I found the bug. The CUDA version of check doesn't check the case where the input contains all empty tensors, while the CPU version does. I'll put up a fix soon.commidpytorch_issue_5739tokennumnegative_passages |
|
query_idq-en-pytorch-47fdd2074fae54076b9495d25a94a841524a59ccda55df179b17cbd54b477ef8query} } <del> static Tensor dispatch_clamp(const Tensor & self, Scalar min, Scalar max) { AutoNoGIL no_gil; AutoGPU auto_gpu(self); return self.clamp(min, max); } static Tensor dispatch_clamp_min(const Tensor & self, Scalar min) { AutoNoGIL no_gil; AutoGPU auto_gpu(self); return self.clamp_min(min); } static Tensor dispatch_clamp_max(const Tensor & self, Scalar max) { AutoNoGIL no_gil; AutoGPU auto_gpu(self); return self.clamp_max(max); } </del> // The Python clamp() syntax has to be mapped to one of three C++ functions static PyObject * THPVariable_clamp(PyObject* module, PyObject* args, PyObject* kwargs) { HANDLE_TH_ERRORS static PythonArgParser parser({ <del> \, </del> <ins> \, </ins> }); <del> ParsedArgs<3> parsed_args; </del> <ins> ParsedArgs<4> parsed_args; </ins> auto r = parser.parse(args, kwargs, parsed_args); if (!r.isNone(1) && !r.isNone(2)) { <del> return THPVariable_Wrap(dispatch_clamp(r.tensor(0), r.scalar(1), r.scalar(2))); </del> <ins> if (!r.isNone(3)) { return wrap(dispatch_clamp(r.tensor(0), r.scalar(1), r.scalar(2), r.tensor(3))); } else { return wrap(dispatch_clamp(r.tensor(0), r.scalar(1), r.scalar(2))); } </ins> } else if (!r.isNone(1)) { <del> return THPVariable_Wrap(dispatch_clamp_min(r.tensor(0), r.scalar(1))); </del> <ins> if (!r.isNone(3)) { return wrap(dispatch_clamp_min(r.tensor(0), r.scalar(1), r.tensor(3))); } else { return wrap(dispatch_clamp_min(r.tensor(0), r.scalar(1))); } </ins> } else if (!r.isNone(2)) { <del> return THPVariable_Wrap(dispatch_clamp_max(r.tensor(0), r.scalar(2))); </del> <ins> if (!r.isNone(3)) { return wrap(dispatch_clamp_max(r.tensor(0), r.scalar(2), r.tensor(3))); } else { return wrap(dispatch_clamp_max(r.tensor(0), r.scalar(2))); } </ins> } else { throw std::runtime_error(\); } <ins> Py_RETURN_NONE; </ins> END_HANDLE_TH_ERRORS }positive_passagesdociddoc-en-pytorch-9fadf55fef649758ce298e5387e2a100c57f31d87ba54cce681ee82647ba65e7textIn PyTorch master:\nAdded clamp's output support in pre-template code.", "commid": "pytorch_issue_6028", "tokennum": 20}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-4d141356d1159441381c7f3d4f816ba6d9aa5b681d7723a83576b39d219d1fbd", "query": "Args: input (Tensor): the tensor to compare other (Tensor or float): the tensor or value to compare <del> out (Tensor, optional): the output tensor that must be a `ByteTensor` or the same type as `input` </del> <ins> out (Tensor, optional): the output tensor that must be a `ByteTensor` </ins> Returns: Tensor: A ``torch.ByteTensor`` containing a 1 at each location where comparison is true.", "positive_passages": [{"docid": "doc-en-pytorch-2fdc3cef791c19159039a95dfbc4d1859ba0e4d81197ef08e9f5067ac45538ca", "text": "[pytorch] The docs say that for tensor comparison operators (gt,lt etc) it should be possible to pass out argument typed as input (), yet when I try to do it, I hit an error Should the docs be fixed, or is it a bug?\nIf it's really useful we can add it back; but we'll fix the docs for now", "commid": "pytorch_issue_7933", "tokennum": 82}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-551cb05265906a9804e2fced9973d2fabb2a5a6facb30f3afcbb988f2bcea4f1", "query": "def __idiv__(self, other): return self.div_(other) <ins> __itruediv__ = __idiv__ </ins> def __mod__(self, other): return self.remainder(other)", "positive_passages": [{"docid": "doc-en-pytorch-7a531b4e1408de313ecaafa4ee40fda2153fd640f1937560f296cfb7dbd6ca08", "text": "This is an ipython session. Note that the doesn't remain the same for /= even though it works for div_\nJust to be clear, the reason this is an issue is that it means that functions can't do inplace operations on function arguments.\nAlso, this works fine for +=", "commid": "pytorch_issue_2061", "tokennum": 65}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-5c250d7eb168a7c614115257ae332b0c408b13ef008048b0c212e3c73a0afe0c", "query": "// Lua indices begin at 1 IndexType dstIndex_ = indices.data[IndexToOffset<int64_t, IndexType, IdxDim>::get(dstIndex, indices)] - TH_INDEX_BASE; <del> assert(dstIndex < dstFillDimSize); </del> <ins> assert(dstIndex_ < dstFillDimSize); </ins> // We stride over the output ignoring the indexed dimension // (innerSize), whose offset calculation is handled differently", "positive_passages": [{"docid": "doc-en-pytorch-a58ca7f0869590acf2ec481e28df28698c3770bb005cad89d9a33abd30ddcf87", "text": "It seems that can change memory outside x, when x is a cuda tensor. If x is non-cuda tensor, we get: In contrast, when x is cuda tensor, does not make any error It's hard to share the whole code, but I have noticed that such operation outside a tensor did affect the performance of existing network, so I'm afraid that this op can change arbitrary memory on GPU which can be dangerous. Could you check this out?\nThis snippet is fine - it's enough for us to reproduce the problem. It appears we're missing some out-of-bounds checks (we have them for other indexing functions). Thanks for reporting.\nworking on this", "commid": "pytorch_issue_3922", "tokennum": 147}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-5f5558ee304aa0ad910223e4262d50bde2c4a6c17ed172c87c24b4879280cdf3", "query": "tW = targetTensor.size(tDims - 1) adjW = self._calculateAdj(tW, self.kW, self.padW, self.dW) adjH = self._calculateAdj(tH, self.kH, self.padH, self.dH) <del> if self.finput is None: </del> <ins> if not hasattr(self, 'finput') or self.finput is None: </ins> self.finput = input[0].new() <del> if self.fgradInput is None: </del> <ins> if not hasattr(self, 'fgradInput') or self.fgradInput is None: </ins> self.fgradInput = input[0].new() else: <del> if self.finput is None: </del> <ins> if not hasattr(self, 'finput') or self.finput is None: </ins> self.finput = input.new() <del> if self.fgradInput is None: </del> <ins> if not hasattr(self, 'fgradInput') or self.fgradInput is None: </ins> self.fgradInput = input.new() inputTensor = self._makeContiguous(inputTensor)", "positive_passages": [{"docid": "doc-en-pytorch-1b8f2e384d5a404a3376c7149d43125421bb1bdaa0086fc82df551418fcec1b0", "text": "I am new to pythonwhen i solve the promblem with the help below I find some confusion in the code I set \u2018dimension=1self.dimension = dimension\u2019it seem ok for mebut i don\u2019t kown how the value of \u2019dimension\u2018 was initialled. Thank you !\nI already Konw it comes from 'module = JoinTable(dimension, nInputDims)' But when I convert the model to pytorch , error appears: Traceback (most recent call last): File \"\", line 173, in <moduleGnetf =generator.forward(input) File \"/usr/local/lib/python2.7/dist-\", line 33, in forward return self.updateOutput(input) File \"/usr/local/lib/python2.7/dist-\", line 36, in updateOutput currentOutput = module.updateOutput(currentOutput) File \"/usr/local/lib/python2.7/dist-\", line 37, in updateOutput (dim, offset, (dim)).copy_(currentOutput) RuntimeError: inconsistent tensor size at /home/lxl/pytorch-master/torch/lib/TH/generic/THTensorCopy.c:51\nI Use \"generator.modules[0] = nn.JoinTable(1)\",it was fine ,but error again: Traceback (most recent call last): File \"\", line 171, in <moduleGnetf =generator.forward(input) File \"/usr/local/lib/python2.7/dist-\", line 33, in forward return self.updateOutput(input) File \"/usr/local/lib/python2.7/dist-\", line 36, in updateOutput currentOutput = module.updateOutput(currentOutput) File \"/usr/local/lib/python2.7/dist-\", line 96, in updateOutput if is None: AttributeError: 'SpatialFullConvolution' object has no attribute 'finput'\nHow old is the Lua model file you're trying to import? Can you please try to load it in Lua, save again, and load it in PyTorch? Also, please update PyTorch to the newest version.\nThe model is convert from the cudnn model trained by myseft the code below is the convert code BTWmy torch was installed on 17th Dec,2016 the pytorch version i use Metadata-Version: 1.0 Name: torch Version: 0.1.10+ I Build it from source todaycommidpytorch_issue_968tokennumnegative_passages |
|
query_idq-en-pytorch-60723fc8355f03ca1cf6f865db5b4983650f61c7a434e07bb5e6f9c4cba6a872queryTHPByteOrder::THP_LITTLE_ENDIAN, to_convert); } <del> SYSCHECK(write(fd, data, to_convert * sizeof(real))); </del> <ins> SYSCHECK(write(fd, le_buffer.get(), to_convert * sizeof(real))); </ins> } } }positive_passagesdociddoc-en-pytorch-22f86cba093cf26e315f2fbaec5ca280a4dc379518c77425509ada9da27f0f4atextThe problem is here: You can't write an arbitrary number of bytes. See . On my system the limit seems to be 2GB, YMMV. To be safe, you probably want to fix the read call as well at , because there's an SSIZE_MAX limit.commidpytorch_issue_717tokennumnegative_passages |
|
{"query_id": "q-en-pytorch-61b459b548dc3e3c41a011899f5c524fe8976e152a080d9910783e4576b9bba5", "query": "<del> Subproject commit 9f6a636e547fc70a02fa48436449aad67080698f </del> <ins> Subproject commit add56ccdcac23a6c522a2c1174a866e293c61dab </ins>", "positive_passages": [{"docid": "doc-en-pytorch-7a9fbde970acad238b85eeaecdd20c3b27927e286af12f701e9db672acc0182d", "text": "Pybind11 has a bugfix here: which is not included in pytorch master. In brief, the bug causes two python modules, when both compiled with buggy version of pybind11, to conflict and crash at import. I've last week when debugging its conflict with pytorch. Hope pytorch can also upgrade to avoid potential conflict with other libraries.", "commid": "pytorch_issue_4809", "tokennum": 87}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-69cb51f0ded6db481b492d760fc1235533115dd208da946400a73295b7f7117d", "query": "self.assertEqual(output[0][0].sum().data[0], 0) self.assertEqual(output[1][2].sum().data[0], 0) <ins> def test_embedding_max_norm(self): embedding = nn.Embedding(22, 5, max_norm=1.0) input = Variable(torch.LongTensor([2, 8, 8, 6])) output = embedding(input) self.assertEqual(output[1], output[2]) self.assertTrue(output.data.norm(p=2, dim=1).le(1).all()) @unittest.skipIf(not TEST_CUDA, \"CUDA unavailable\") def test_embedding_max_norm_cuda(self): embedding = nn.Embedding(22, 5, max_norm=1.0).cuda() input = Variable(torch.LongTensor([2, 8, 8, 6])).cuda() output = embedding(input) self.assertEqual(output[1], output[2]) self.assertTrue(output.data.norm(p=2, dim=1).le(1).all()) </ins> def test_embedding_functional(self): a = Variable(torch.LongTensor([ [1, 3, 2],", "positive_passages": [{"docid": "doc-en-pytorch-8a24171cae3316021ad4a394597442544d5ad7a610a19fd0700a51e599ae8017", "text": "The output is also incorrect. It's the output from the sorted indices, instead of the user specified indices. Reported bycommidpytorch_issue_2413tokennumnegative_passages |
|
{"query_id": "q-en-pytorch-6c7de40e95bf34dc14b2735c160240ca9b4fdd3569633c3d30d09bfb2a9e0564", "query": "for(ih = 0; ih < kH; ih++) { for(iw = 0; iw < kW; iw++) { T val = ptr_input[iw*istrideW]; <del> if (val > max) { </del> <ins> if ((val > max) || THCNumerics<T>::isnan(val)) { </ins> max = val; argmax = (ih+istartH)*isizeW + iw+istartW; }", "positive_passages": [{"docid": "doc-en-pytorch-73afc555bf492c7072a4f9994675674d19a7420810f08eaca2d074197550e516", "text": "max pooling functions are not consistent with max functions. Below an example, every max pooling (be it 1d, 2d or 3d, adaptive or not) acts the same, on cpu or on cuda. Essentially, there are two fondamental differences : max pooling of all values is while for it's max pooling of nan and valid values is valid values, which means s get ignored, while for , as soon as there is a value, the result is . More generally, choosing explicetely how to deal with as in numpy () could be a solution, but maybe this is related to CuDNN's max pooling ? Built from latest sources (as of 05/17) PyTorch version: 0.5.0a0+ Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.4 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: Quadro M1000M Nvidia driver version: 390.30 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux-gnu/libcudnnstaticv7.a Versions of relevant libraries: [conda] magma-cuda91 2.3.0 1 pytorch [conda] torch 0.5.0a0+ <pip[conda] torch 0.3.1b0+ <pip[conda] torch 0.5.0a0+ <pip\nHi, . Thank you for providing the example. I'll take a look.\nHmhm. I seem to get a double free in the ...\nAnother question: Do we really want to provide an option to ignore NaN? My view is that \"if you have nan in your net, you're screwed\", so I would just return NaN there, personally.\nMy practical usecase is for KITTI groundtruth depthmaps and FlowMaps which are 2D sparse arrays.", "commid": "pytorch_issue_7645", "tokennum": 506}], "negative_passages": []} |
|
{: , : , : [{: , : no-nan\abyssus abyssum invocat\ignore some pixels\non universal\, : , : 513}], : []} |
|
{: , : , : [{: , : (which conda))/../\, : , : 358}], : []} |
|
{: , : , : [{: , : , : , : 60}], : []} |
|
{: , : , : [{: , : , : , : 82}], : []} |
|
{: , : aten::set_grad_enabled(bool val) -> ()\, : [{: , : , : , : 343}], : []} |
|
{: , : , : [{: , : , : , : 165}], : []} |
|
{: , : , : [{: , : if you have nan in your net, you're screwed\", so I would just return NaN there, personally.\nMy practical usecase is for KITTI groundtruth depthmaps and FlowMaps which are 2D sparse arrays.", "commid": "pytorch_issue_7645", "tokennum": 506}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-81fc2c89f5705744d911fc82060c1215fc1688583b68e7f9223af82e7c00c1fb", "query": "bottom_data += (n * channels + c) * height * width; for (int h = hstart; h < hend; h += dilation_h) { for (int w = wstart; w < wend; w += dilation_w) { <del> if (ScalarConvert<Dtype, AccType>::to(bottom_data[h * width + w]) > maxval) { </del> <ins> Dtype val = bottom_data[h * width + w]; if ((ScalarConvert<Dtype, AccType>::to(val) > maxval) || THCNumerics<Dtype>::isnan(val)) { </ins> maxidx = h * width + w; <del> maxval = ScalarConvert<Dtype, AccType>::to(bottom_data[maxidx]); </del> <ins> maxval = ScalarConvert<Dtype, AccType>::to(val); </ins> } } }", "positive_passages": [{"docid": "doc-en-pytorch-202a6b4a46ae62b8f7553c0429b323612fb197593d261f12c30c05b56ae7ce77", "text": "For an algorithm that use FlowNet-like architecture that outputs predictions at multiple scale levels, we can either compare predictions to downscaled GT or upscaled predictions to GT. The first being obviously less computationally expensive, the ignore NaN would help downscaling such sparse 2D maps. For the moment we do something I find ugly, you can see it Essentially it zeroes the s, takes the map and construct two maps of positive and negative values which are then maxpooled and back together. I am actually open for a strict \"no-nan\" policy on pooling functions, but in that case better enforce it before someone writes a code that tries to benefit from maxpooling ignore s feature/bug and if you have a clever way of pooling sparse 2D tensors, I'm open to it, but I guess it's a topic for pytorch forums ;)\nPersonally, I think it is more sane have NaN -NaN in the pooling and offer a parametrizable (where you get to pick the values) that does or so.\nSo the proposed fix does NaN -NaN similar to max. I didn't try to fix gradients for the nan case. This would involve keeping the values and I don't think that is worth it (in particular because I would not expect the pooling layer to be last, and otherwise we'd probably get NaN as grad_out). If you are reasonably happy with it, I'd move it to a PR.\n+1 for NaN -NaN, \"abyssus abyssum invocat\" We could also add an optional mask which would be a ByteTensor of the same size, specifying whether or not the considered pixel is used for the pooling, that could be used for any kind of pulling. An functionality would then be to provide the mask\nI could work with that. to continue to be picky, the potential drawback is that it won't work for other pooling methods such as average pooling or median pooling. The problem here is that the max operation inherently ignores nonmax values which can be leveraged for \ operation, but it has a \ feel to it since it woun't be as easy for other kinds of 2D operations.\nTo be clear, the fix is good for me, but I figured a related discussion on selective pooling (whether to ditch NaN values or anything you want to ignore) could happen (maybe not on this issue ?)", "commid": "pytorch_issue_7645", "tokennum": 513}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-88ca958657dda50101b09cbdd7703b45fbeec7c19f02de8e9ef082e239c181d4", "query": "<ins> #define __STDC_FORMAT_MACROS </ins> #include <Python.h> #include <structmember.h>", "positive_passages": [{"docid": "doc-en-pytorch-e622986277177825174756cd0d58bb252b326c5ede6f55945541c76bbab2d669", "text": "Hi There, I'm trying to install Pytorch as a module on my university's computing cluster. The node uses CentOS (6.4) x8664. I install running these commands: module load gcclib/5.2.0 cmake/3.8.2 module load gcc/5.2.0 module load anaconda3/4.0.0 export CMAKEPREFIXPATH=\"(which conda))/../\" export NOCUDA=1 git clone --recursive cd pytorch/ python install --prefix=${HOME} Things look fine as it installs but then I run into the following errors, ending in the process terminating with error relating to GCC? I've tried with version GCC 6.2.0 and the same result occurs. Not sure what to even try to fix this! Thanks for any help you can provide!\nSame here, with gcc 5.4.0\nI don't think building from source works very well with gcc 5.4. Could you install gcc 4.9 and try compiling?\nI get this error instead when using gcc 4.9.0, doesn't make it very far at all.\nHi I think this is broken after / due to\nI have the same problem ( etc.) with gcc 4.8.5 on Linux with the current head ( ) What works for me is to add at the beginning of the following four files: (see also -- this can be turned into a pull request very easily) This fix is the same as but for different files.\nThis worked for me Thank you.\nfixed in latest master, thanks tocommidpytorch_issue_3628tokennumnegative_passages |
|
query_idq-en-pytorch-977578af41147a670dcbbe495360d3862903fb443373d37d981d21a197c31db7queryauto input = input_r.contiguous(); auto weight = weight_r; auto bias = bias_r; <del> auto k = input.ndimension(); </del> <ins> auto k = weight.ndimension(); </ins> int64_t dim = k - 2; if (dim <= 0) { <del> throw std::runtime_error(\); </del> <ins> throw std::runtime_error(\); </ins> } ConvParams params;positive_passagesdociddoc-en-pytorch-3fd0655cf1f98414cb74036460095097c829c4e79824f44f878c0695b6ea8a48textgot the following err msg with conv2D and ver 0.4: RuntimeError: expected stride to be a single integer value or a list of 1 values to match the convolution dimensions, but got stride=[1, 1]. I was not able to see the problem so I uninstalled 0.4 and installed 0.3.1 Still got an err but this time it said that the expected input should be a 4d tensor and it got a 3D tensor. This helped understand the issue and I fixed it (just add a dimension). reinstalled 0.4 and its working (no surprise...). I think 0.4 should have the same err feedback otherwise its really hard to undrstand the problem How you installed PyTorch (conda, pip, source): conda Build command you used (if compiling from source): OS: macos PyTorch version: 0.4 Python version: 3.5 CUDA/cuDNN version: no GPU models and configuration: GCC version (if compiling from source): CMake version: Versions of any other relevant libraries:\nCan you please post a small self-contained code snippet that would let us reproduce the problem?\nThe strange error messge was also mentioned a few times in the forum. Here is a code snippet for PyTorch :\nThis is because we direct all conv ops to and infer dim + throw error message there.\nHmm that's not great. We might want to pass the expected dimensionality of the convolution to the generic implementation, so that we can improve the error messages.\nThx for the responses. Just a reminder, for the same err the provided err msg in version 0.3.1 was very informative\nYes, this should definitely be fixed. Sorry about this!\nThe error message of seems to be a bit misleading, too. Code: Should I create a new issue or is it related to the current one?\nthat appears fixed on master! :D", "commid": "pytorch_issue_7332", "tokennum": 434}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-9b3bfe1e851162000f63e68f19a6b5422d28fae26c2b41e6552e3d6ccc3747ba", "query": "for(iw = 0; iw < kW; iw++) { real val = *(ip + ih*istrideH + iw*istrideW); <del> if (val > maxval) </del> <ins> if ((val > maxval) || isnan(val)) </ins> { maxval = val; maxindex = (ih+istartH)*isizeW + (iw+istartW);", "positive_passages": [{"docid": "doc-en-pytorch-73afc555bf492c7072a4f9994675674d19a7420810f08eaca2d074197550e516", "text": "max pooling functions are not consistent with max functions. Below an example, every max pooling (be it 1d, 2d or 3d, adaptive or not) acts the same, on cpu or on cuda. Essentially, there are two fondamental differences : max pooling of all values is while for it's max pooling of nan and valid values is valid values, which means s get ignored, while for , as soon as there is a value, the result is . More generally, choosing explicetely how to deal with as in numpy () could be a solution, but maybe this is related to CuDNN's max pooling ? Built from latest sources (as of 05/17) PyTorch version: 0.5.0a0+ Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.4 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: Quadro M1000M Nvidia driver version: 390.30 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux-gnu/libcudnnstaticv7.a Versions of relevant libraries: [conda] magma-cuda91 2.3.0 1 pytorch [conda] torch 0.5.0a0+ <pip[conda] torch 0.3.1b0+ <pip[conda] torch 0.5.0a0+ <pip\nHi, . Thank you for providing the example. I'll take a look.\nHmhm. I seem to get a double free in the ...\nAnother question: Do we really want to provide an option to ignore NaN? My view is that \, so I would just return NaN there, personally.\nMy practical usecase is for KITTI groundtruth depthmaps and FlowMaps which are 2D sparse arrays.commidpytorch_issue_7645tokennumnegative_passages |
|
{"query_id": "q-en-pytorch-9b3bfe1e851162000f63e68f19a6b5422d28fae26c2b41e6552e3d6ccc3747ba", "query": "for(iw = 0; iw < kW; iw++) { real val = *(ip + ih*istrideH + iw*istrideW); <del> if (val > maxval) </del> <ins> if ((val > maxval) || isnan(val)) </ins> { maxval = val; maxindex = (ih+istartH)*isizeW + (iw+istartW);", "positive_passages": [{"docid": "doc-en-pytorch-202a6b4a46ae62b8f7553c0429b323612fb197593d261f12c30c05b56ae7ce77", "text": "For an algorithm that use FlowNet-like architecture that outputs predictions at multiple scale levels, we can either compare predictions to downscaled GT or upscaled predictions to GT. The first being obviously less computationally expensive, the ignore NaN would help downscaling such sparse 2D maps. For the moment we do something I find ugly, you can see it Essentially it zeroes the s, takes the map and construct two maps of positive and negative values which are then maxpooled and back together. I am actually open for a strict \"no-nan\" policy on pooling functions, but in that case better enforce it before someone writes a code that tries to benefit from maxpooling ignore s feature/bug and if you have a clever way of pooling sparse 2D tensors, I'm open to it, but I guess it's a topic for pytorch forums ;)\nPersonally, I think it is more sane have NaN -NaN in the pooling and offer a parametrizable (where you get to pick the values) that does or so.\nSo the proposed fix does NaN -NaN similar to max. I didn't try to fix gradients for the nan case. This would involve keeping the values and I don't think that is worth it (in particular because I would not expect the pooling layer to be last, and otherwise we'd probably get NaN as grad_out). If you are reasonably happy with it, I'd move it to a PR.\n+1 for NaN -NaN, \"abyssus abyssum invocat\" We could also add an optional mask which would be a ByteTensor of the same size, specifying whether or not the considered pixel is used for the pooling, that could be used for any kind of pulling. An functionality would then be to provide the mask\nI could work with that. to continue to be picky, the potential drawback is that it won't work for other pooling methods such as average pooling or median pooling. The problem here is that the max operation inherently ignores nonmax values which can be leveraged for \"ignore some pixels\" operation, but it has a \"non universal\" feel to it since it woun't be as easy for other kinds of 2D operations.\nTo be clear, the fix is good for me, but I figured a related discussion on selective pooling (whether to ditch NaN values or anything you want to ignore) could happen (maybe not on this issue ?)", "commid": "pytorch_issue_7645", "tokennum": 513}], "negative_passages": []} |
|
query_idq-en-pytorch-9b791534cfdf1e04d78ab0fcf440549051b7f8d898ae468ecf0a79a35a682b9equerygI = apply_fn<Transpose>(0, 1)(gIt); } } <ins> if (should_compute_output(0) && !ggO.defined()) ggO = at::zeros_like(gO); if (should_compute_output(1) && !gI.defined()) gI = at::zeros_like(input); if (should_compute_output(2) && !gW.defined()) gW = at::zeros_like(weight); </ins> return {ggO, gI, gW}; }positive_passagesdociddoc-en-pytorch-c5dbd648f5f223c007de312a8c0f1ae78f27faaf3ad3e509ac9f43144221b039textThis is a test. Please ignore it. Edited.\n<!-- validation-comment-start --<bodyHello there! From the DISABLED prefix in this issue title, it looks like you are attempting to disable a job in PyTorch CI. The information I have parsed is below: Job name: Credential: Within ~15 minutes, and all of its dependants will be disabled in PyTorch CI. Please verify that the job name looks correct. With great power comes great responsibility. </body<!-- validation-comment-end --commidpytorch_issue_94861tokennumnegative_passages |
|
query_idq-en-pytorch-9cb6c949988837baec9a8d2f5f5cda5c88e3ef0cd73137ef0b7df869c21c1c37query'expected a non-empty list of Tensors'): torch.cat([], dim=1) <ins> def test_cat_empty(self): self._test_cat_empty(self) </ins> def test_stack(self): x = torch.rand(2, 3, 4) y = torch.rand(2, 3, 4)positive_passagesdociddoc-en-pytorch-5b118069fa67273ca83fc38bf735c53ddc46045a52aba22063750dd8da02e407textgdb points that the error might be in trying to get a size from an empty tensor: I'm using PyTorch version 0.4.0a0+\nThere's a check that should exclude zero-dim tensors from ( should be 0 in this case), so I'm wondering why that's not happening right now... edit: Nevermind, I was running an old build. I pulled the latest master and with being an empty tensor (with shape (0,)), cat crashes.\nRelated: We should probably rewrite to better handle these cases\nOkay, I found the bug. The CUDA version of check doesn't check the case where the input contains all empty tensors, while the CPU version does. I'll put up a fix soon.commidpytorch_issue_5739tokennumnegative_passages |
|
{"query_id": "q-en-pytorch-9ec1744c3c99ad877b5de782ef7e6f996113b1f13b635199c006a6d8e8a19f56", "query": "} } <ins> // manual dispatch code for clamp inline Tensor dispatch_clamp(const Tensor & self, Scalar min, Scalar max) { AutoNoGIL no_gil; AutoGPU auto_gpu(self); return self.clamp(min, max); } inline Tensor dispatch_clamp_min(const Tensor & self, Scalar min) { AutoNoGIL no_gil; AutoGPU auto_gpu(self); return self.clamp_min(min); } inline Tensor dispatch_clamp_max(const Tensor & self, Scalar max) { AutoNoGIL no_gil; AutoGPU auto_gpu(self); return self.clamp_max(max); } inline Tensor & dispatch_clamp(const Tensor & self, Scalar min, Scalar max, Tensor result) { AutoNoGIL no_gil; AutoGPU auto_gpu(result); return at::clamp_out(result, self, min, max); } inline Tensor & dispatch_clamp_min(const Tensor & self, Scalar min, Tensor result) { AutoNoGIL no_gil; AutoGPU auto_gpu(result); return at::clamp_min_out(result, self, min); } inline Tensor & dispatch_clamp_max(const Tensor & self, Scalar max, Tensor result) { AutoNoGIL no_gil; AutoGPU auto_gpu(result); return at::clamp_max_out(result, self, max); } </ins> ${py_method_dispatch} }} // namespace torch::autograd", "positive_passages": [{"docid": "doc-en-pytorch-9fadf55fef649758ce298e5387e2a100c57f31d87ba54cce681ee82647ba65e7", "text": "In PyTorch master:\nAdded clamp's output support in pre-template code.", "commid": "pytorch_issue_6028", "tokennum": 20}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-a9404e4c571418623fa1b9c97e0ef232f999a7ad76326d587a82ccbd62ab02ac", "query": "for(ih = 0; ih < kH; ++ih) { for(iw = 0; iw < kW; ++iw) { T val = ptr_input[ih*istrideH + iw*istrideW]; <del> if (val > max) { </del> <ins> if ((val > max) || THCNumerics<T>::isnan(val)) { </ins> max = val; argmax = (it+istartT)*isizeH*isizeW + (ih+istartH)*isizeW + iw+istartW; }", "positive_passages": [{"docid": "doc-en-pytorch-73afc555bf492c7072a4f9994675674d19a7420810f08eaca2d074197550e516", "text": "max pooling functions are not consistent with max functions. Below an example, every max pooling (be it 1d, 2d or 3d, adaptive or not) acts the same, on cpu or on cuda. Essentially, there are two fondamental differences : max pooling of all values is while for it's max pooling of nan and valid values is valid values, which means s get ignored, while for , as soon as there is a value, the result is . More generally, choosing explicetely how to deal with as in numpy () could be a solution, but maybe this is related to CuDNN's max pooling ? Built from latest sources (as of 05/17) PyTorch version: 0.5.0a0+ Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.4 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: Quadro M1000M Nvidia driver version: 390.30 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux-gnu/libcudnnstaticv7.a Versions of relevant libraries: [conda] magma-cuda91 2.3.0 1 pytorch [conda] torch 0.5.0a0+ <pip[conda] torch 0.3.1b0+ <pip[conda] torch 0.5.0a0+ <pip\nHi, . Thank you for providing the example. I'll take a look.\nHmhm. I seem to get a double free in the ...\nAnother question: Do we really want to provide an option to ignore NaN? My view is that \, so I would just return NaN there, personally.\nMy practical usecase is for KITTI groundtruth depthmaps and FlowMaps which are 2D sparse arrays.commidpytorch_issue_7645tokennumnegative_passages |
|
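The diff in the record above makes the CUDA adaptive max pooling kernel treat NaN as the maximum, so pooling propagates NaN the same way torch.max does. A minimal sketch of the behavior, assuming a build that includes the isnan() check:

import torch
import torch.nn.functional as F

x = torch.full((1, 1, 3, 3), float('nan'))  # every element is NaN
out = F.max_pool2d(x, kernel_size=3)
print(out)  # with the fix, the pooled value is NaN rather than an arbitrary finite value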
query_idq-en-pytorch-a9404e4c571418623fa1b9c97e0ef232f999a7ad76326d587a82ccbd62ab02acqueryfor(ih = 0; ih < kH; ++ih) { for(iw = 0; iw < kW; ++iw) { T val = ptr_input[ih*istrideH + iw*istrideW]; <del> if (val > max) { </del> <ins> if ((val > max) || THCNumerics<T>::isnan(val)) { </ins> max = val; argmax = (it+istartT)*isizeH*isizeW + (ih+istartH)*isizeW + iw+istartW; }positive_passagesdociddoc-en-pytorch-202a6b4a46ae62b8f7553c0429b323612fb197593d261f12c30c05b56ae7ce77textFor an algorithm that use FlowNet-like architecture that outputs predictions at multiple scale levels, we can either compare predictions to downscaled GT or upscaled predictions to GT. The first being obviously less computationally expensive, the ignore NaN would help downscaling such sparse 2D maps. For the moment we do something I find ugly, you can see it Essentially it zeroes the s, takes the map and construct two maps of positive and negative values which are then maxpooled and back together. I am actually open for a strict \ policy on pooling functions, but in that case better enforce it before someone writes a code that tries to benefit from maxpooling ignore s feature/bug and if you have a clever way of pooling sparse 2D tensors, I'm open to it, but I guess it's a topic for pytorch forums ;)\nPersonally, I think it is more sane have NaN -NaN in the pooling and offer a parametrizable (where you get to pick the values) that does or so.\nSo the proposed fix does NaN -NaN similar to max. I didn't try to fix gradients for the nan case. This would involve keeping the values and I don't think that is worth it (in particular because I would not expect the pooling layer to be last, and otherwise we'd probably get NaN as grad_out). If you are reasonably happy with it, I'd move it to a PR.\n+1 for NaN -NaN, \ We could also add an optional mask which would be a ByteTensor of the same size, specifying whether or not the considered pixel is used for the pooling, that could be used for any kind of pulling. An functionality would then be to provide the mask\nI could work with that. to continue to be picky, the potential drawback is that it won't work for other pooling methods such as average pooling or median pooling. The problem here is that the max operation inherently ignores nonmax values which can be leveraged for \"ignore some pixels\" operation, but it has a \"non universal\" feel to it since it woun't be as easy for other kinds of 2D operations.\nTo be clear, the fix is good for me, but I figured a related discussion on selective pooling (whether to ditch NaN values or anything you want to ignore) could happen (maybe not on this issue ?)commidpytorch_issue_7645tokennumnegative_passages |
|
query_idq-en-pytorch-ae42f0aca4a13adb43080aa570ab847db924974818431f575dd71a17fb988069queryindex = t * inputH * inputW + h * inputW + w; Dtype val = inputData[index]; <del> if (max < val) </del> <ins> if ((max < val) || THCNumerics<Dtype>::isnan(val)) </ins> { max = val; maxIndex = index;positive_passagesdociddoc-en-pytorch-73afc555bf492c7072a4f9994675674d19a7420810f08eaca2d074197550e516textmax pooling functions are not consistent with max functions. Below an example, every max pooling (be it 1d, 2d or 3d, adaptive or not) acts the same, on cpu or on cuda. Essentially, there are two fondamental differences : max pooling of all values is while for it's max pooling of nan and valid values is valid values, which means s get ignored, while for , as soon as there is a value, the result is . More generally, choosing explicetely how to deal with as in numpy () could be a solution, but maybe this is related to CuDNN's max pooling ? Built from latest sources (as of 05/17) PyTorch version: 0.5.0a0+ Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.4 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: Quadro M1000M Nvidia driver version: 390.30 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux-gnu/libcudnnstaticv7.a Versions of relevant libraries: [conda] magma-cuda91 2.3.0 1 pytorch [conda] torch 0.5.0a0+ <pip[conda] torch 0.3.1b0+ <pip[conda] torch 0.5.0a0+ <pip\nHi, . Thank you for providing the example. I'll take a look.\nHmhm. I seem to get a double free in the ...\nAnother question: Do we really want to provide an option to ignore NaN? My view is that \"if you have nan in your net, you're screwed\, : , : 506}], : []} |
|
{: , : , : [{: , : no-nan\abyssus abyssum invocat\ignore some pixels\non universal\, : , : 513}], : []} |
|
{: , : , : [{: , : , : , : 20}], : []} |
|
{: , : Given groups=\, weight\, so expected input\ to have \Given groups=\, weight of size \, expected input\ to have \ channels, but got \ channels instead\, : [{: , : , : , : 434}], : []} |
|
{: , : , : [{: , : , : , : 43}], : []} |
|
{: , : , : [{: , : , : , : 60}], : []} |
|
{: , : , : [{: , : , : , : 517}], : []} |
|
{: , : , : [{: , : , : , : 602}], : []} |
|
{: , : , : [{: , : , : , : 86}], : []} |
|
{: , : , : [{: , : , : , : 43}], : []} |
|
{: , : , : [{: , : , : , : 25}], : []} |
|
{: , : , : [{: , : , : , : 43}], : []} |
|
{: , : \ <del> def __init__(self, num_embeddings, embedding_dim, padding_idx=-1, </del> <ins> def __init__(self, num_embeddings, embedding_dim, padding_idx=None, </ins> max_norm=None, norm_type=2, scale_grad_by_freq=False): self.num_embeddings = num_embeddings self.embedding_dim = embedding_dimpositive_passagesdociddoc-en-pytorch-6dae6822e59fd00098fabde359ce44f544da09d2deb5db328ca4d8e6c0d81333textThe following code, which repeatedly exports a model with , has a memory leak. During the export, every tensor parameter in is cloned once and then immediately leaked forever, without ever being collected by the GC. It's not the underlying buffer that's cloned, it's the lightweight wrapper object itself. Still, for long running processes that often export networks in this manner this is a unbounded memory leak that eventually results in OOM errors. I've reproduced this issue on both Linux and Windows, with pytorch versions and respectively. The final five lines inside the for loop are to debug what happens, they are not neccesary to reproduce the issue. forces a gc collection cycle, ensuring we're not accidentally counting dead objects show the total amount of objects that exist for each type for all objects whose amount has increased. From this we can see that we're leaking 2 additional tensors per that the tensors we're leaking have shapes and , so they're just the weight and bias of the linear that the underlying buffer is always the same, so only the shallow class instance is being that nothing is pointing to these newly created objects, so they should be collected. Example output after running for a while: seems closely related but is more about a temporary doubling in memory, this issue is about a permanent memory leak. was closed as a duplicate of the previous issue, but better matches this issue. Collecting environment information. PyTorch version: 1.10.0 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.5 LTS (x8664) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: 6.0.0-1ubuntu2 (tags/RELEASE600/final) CMake version: version 3.10.2 Libc version: glibc-2.17 Python version: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.15.0-41-generic-x8664-with-debian-buster-sid Is CUDA available: True CUDA runtime version: 11.3.commidpytorch_issue_82532tokennumnegative_passages |
|
{"query_id": "q-en-pytorch-dd6af3615c1336a9731953d1a8470906b5f93f83fabf440136bc36f449c84f25", "query": ">>> # an Embedding module containing 10 tensors of size 3 >>> embedding = nn.Embedding(10, 3) >>> # a batch of 2 samples of 4 indices each <del> >>> input = torch.Tensor([[1,2,4,5],[4,3,2,10]]) </del> <ins> >>> input = torch.LongTensor([[1,2,4,5],[4,3,2,9]]) >>> print(embedding(input)) >>> # example with padding_idx >>> embedding = nn.Embedding(10, 3, padding_idx=0) >>> input = torch.LongTensor([[0,2,0,5]]) </ins> >>> print(embedding(input)) \\", "positive_passages": [{"docid": , "text": , "commid": , "tokennum": 602}], "negative_passages": []} |
|
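The docstring diff in the record above switches the Embedding example to LongTensor indices and adds a padding_idx case. A runnable sketch of the same idea (weights are randomly initialized, so only the zeroed padding rows are deterministic):

import torch
import torch.nn as nn

embedding = nn.Embedding(10, 3, padding_idx=0)
inp = torch.LongTensor([[0, 2, 0, 5]])
print(embedding(inp))  # rows looked up with index 0 are all zeros because of padding_idx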
{"query_id": , "query": "\ <del> def __init__(self, num_embeddings, embedding_dim, padding_idx=-1, </del> <ins> def __init__(self, num_embeddings, embedding_dim, padding_idx=None, </ins> max_norm=None, norm_type=2, scale_grad_by_freq=False): self.num_embeddings = num_embeddings self.embedding_dim = embedding_dim", "positive_passages": [{"docid": "doc-en-pytorch-51b93f6c54a298b88600b1182a14c127961bec33f155f28f6e4eb4444c79f1df", "text": "This issue is about lightweight Tensor objects being leaked, not the underlying (potentially GPU-side) buffer. I think your issue is a different one.\nI encountered the same error, is there a solution to this problem?\nPlease validate with the latest release and re-summit an issue if you see the same thing. As we are moving away from torchscript minor leaks are unlikely to be fixed, but contribution is welcomed.", "commid": "pytorch_issue_82532", "tokennum": }], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-e146c2a64e2bd159eae15b5e7dfd2ad08f34d4ab3ebe3e3a06870fe78e7bbb46", "query": "with: submodules: false fetch-depth: 1 <del> - name: Setup Python 3.5 </del> <ins> - name: Setup Python 3.6 </ins> if: matrix.test_type == 'older_python_version' uses: actions/setup-python@v4 with: <del> python-version: '3.5' </del> <ins> python-version: '3.6' </ins> architecture: x64 check-latest: false cache: pip", "positive_passages": [{"docid": "doc-en-pytorch-c0fdbff7b42e4db60af4955ac83a924f2a9f7d06af7e7cb5913cbd4e781f73e0", "text": "Several this morning failed with (see for example): Not sure what is causing the outage, but it makes me wonder if perhaps it's time to retire Python-3.5 testing CI cc\nLooks like pypi rolled out a new cert today:", "commid": "pytorch_issue_125841", "tokennum": 54}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-e1cc84099c3b118d4920752811c784dcb3638765475447de55f10769a7adf155", "query": "Args: num_embeddings: size of the dictionary of embeddings embedding_dim: the size of each embedding vector <del> padding_idx: If given, pads the output with zeros whenever it encounters the index. Default: -1 </del> <ins> padding_idx: If given, pads the output with zeros whenever it encounters the index. Default: None </ins> max_norm: If given, will renormalize the embeddings to always have a norm lesser than this Default: None norm_type: The p of the p-norm to compute for the max_norm option scale_grad_by_freq: if given, this will scale gradients by the frequency of the words in the dictionary.", "positive_passages": [{"docid": "doc-en-pytorch-6dae6822e59fd00098fabde359ce44f544da09d2deb5db328ca4d8e6c0d81333", "text": "The following code, which repeatedly exports a model with , has a memory leak. During the export, every tensor parameter in is cloned once and then immediately leaked forever, without ever being collected by the GC. It's not the underlying buffer that's cloned, it's the lightweight wrapper object itself. Still, for long running processes that often export networks in this manner this is a unbounded memory leak that eventually results in OOM errors. I've reproduced this issue on both Linux and Windows, with pytorch versions and respectively. The final five lines inside the for loop are to debug what happens, they are not neccesary to reproduce the issue. forces a gc collection cycle, ensuring we're not accidentally counting dead objects show the total amount of objects that exist for each type for all objects whose amount has increased. From this we can see that we're leaking 2 additional tensors per that the tensors we're leaking have shapes and , so they're just the weight and bias of the linear that the underlying buffer is always the same, so only the shallow class instance is being that nothing is pointing to these newly created objects, so they should be collected. Example output after running for a while: seems closely related but is more about a temporary doubling in memory, this issue is about a permanent memory leak. was closed as a duplicate of the previous issue, but better matches this issue. Collecting environment information. PyTorch version: 1.10.0 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.5 LTS (x8664) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: 6.0.0-1ubuntu2 (tags/RELEASE600/final) CMake version: version 3.10.2 Libc version: glibc-2.17 Python version: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.15.0-41-generic-x8664-with-debian-buster-sid Is CUDA available: True CUDA runtime version: 11.3.", "commid": "pytorch_issue_82532", "tokennum": 517}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-e1cc84099c3b118d4920752811c784dcb3638765475447de55f10769a7adf155", "query": "Args: num_embeddings: size of the dictionary of embeddings embedding_dim: the size of each embedding vector <del> padding_idx: If given, pads the output with zeros whenever it encounters the index. Default: -1 </del> <ins> padding_idx: If given, pads the output with zeros whenever it encounters the index. Default: None </ins> max_norm: If given, will renormalize the embeddings to always have a norm lesser than this Default: None norm_type: The p of the p-norm to compute for the max_norm option scale_grad_by_freq: if given, this will scale gradients by the frequency of the words in the dictionary.", "positive_passages": [{"docid": "doc-en-pytorch-2f66b550279e97c384d138d0938f18ee7eb60e94c23ce88afd819c9c51455183", "text": "109 GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti GPU 1: NVIDIA GeForce RTX 3080 Ti Nvidia driver version: 515.48.07 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.21.2 [pip3] torch==1.10.0 [pip3] torchelastic==0.2.0 [pip3] torchtext==0.11.0 [pip3] torchvision==0.11.0 [conda] blas 1.0 mkl [conda] cudatoolkit 11.3.1 ha36c4319 nvidia [conda] ffmpeg 4.3 hf484d3e0 pytorch [conda] mkl 2021.3.0 h06a4308520 [conda] mkl-service 2.4.0 py37h7f8727e0 [conda] mklfft 1.3.1 py37hd3c417c0 [conda] mklrandom 1.2.2 py37h51133e40 [conda] numpy 1.21.2 py37h20f2e390 [conda] numpy-base 1.21.2 py37h79a11010 [conda] pytorch 1.10.0 py3.7cuda11.3cudnn8.2.00 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchelastic 0.2.0 pypi0 pypi [conda] torchtext 0.11.0 py37 pytorch [conda] torchvision 0.11.0 py37cu113 pytorch\nI'm also observing something like this without JIT, although I'm not sure it's the same issue. The program works fine until I add ONNX export every time a checkpoint is saved. Once I do that, the GPU memory usage grows until it OOMs.commidpytorch_issue_82532tokennumnegative_passages |
|
{"query_id": "q-en-pytorch-e1cc84099c3b118d4920752811c784dcb3638765475447de55f10769a7adf155", "query": "Args: num_embeddings: size of the dictionary of embeddings embedding_dim: the size of each embedding vector <del> padding_idx: If given, pads the output with zeros whenever it encounters the index. Default: -1 </del> <ins> padding_idx: If given, pads the output with zeros whenever it encounters the index. Default: None </ins> max_norm: If given, will renormalize the embeddings to always have a norm lesser than this Default: None norm_type: The p of the p-norm to compute for the max_norm option scale_grad_by_freq: if given, this will scale gradients by the frequency of the words in the dictionary.", "positive_passages": [{"docid": "doc-en-pytorch-51b93f6c54a298b88600b1182a14c127961bec33f155f28f6e4eb4444c79f1df", "text": "This issue is about lightweight Tensor objects being leaked, not the underlying (potentially GPU-side) buffer. I think your issue is a different one.\nI encountered the same error, is there a solution to this problem?\nPlease validate with the latest release and re-summit an issue if you see the same thing. As we are moving away from torchscript minor leaks are unlikely to be fixed, but contribution is welcomed.", "commid": "pytorch_issue_82532", "tokennum": }], "negative_passages": []} |
|
query_idq-en-pytorch-e5ae77a5e08322e65369c99bb0e38344715024cff6a41f3418003b3cb4bc4e1fquerymeant to be installed as pip packages) (default: False). relative_to (str, optional): path of the build file. Required when ``package is True``. It's best to use ``__file__`` for this argument. <del> kwargs: additional arguments that are passed to ffi to declar the </del> <ins> kwargs: additional arguments that are passed to ffi to declare the </ins> extension. See `Extension API reference`_ for details. .. _`Extension API reference`: https://docs.python.org/3/distutils/apiref.html#distutils.core.Extension", "positive_passages": [{"docid": "doc-en-pytorch-c25fd04d8d54cf4d0391cd8024070026ad8247507bdccb8eb12f5f8e2c9f8d2e", "text": "When trying to install Pytorch on my Mac by following the instructions I get What I did: ` I also tried Both approaches gave the same error. System: xcode-select version 2395. Version: macOS Monterey 12.3.1 (21E258) MacBook Pro (16-inch, 2019) Processor: 2,6 GHz 6-Core Intel Core i7 memory: 16 GB 2667 MHz DDR4 PyTorch version: N/A Is debug build: N/A CUDA used to build PyTorch: N/A ROCM used to build PyTorch: N/A OS: macOS 12.3.1 (x8664) GCC version: Could not collect Clang version: 13.1.6 (clang-1316.0.21.2.3) CMake version: version 3.22.1 Libc version: N/A Python version: 3.9.12 (main, Apr 5 2022, 01:53:17) [Clang 12.0.0 ] (64-bit runtime) Python platform: macOS-10.16-x8664-i386-64bit Is CUDA available: N/A CUDA runtime version: Could not collect GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: N/A Versions of relevant libraries: [pip3] numpy==1.21.5 [conda] mkl 2022.0.0 hecd8cb5105 [conda] mkl-include 2022.0.0 hecd8cb5105 [conda] numpy 1.21.5 py39h9c3cb841 [conda] numpy-base 1.21.5 py39he782bc11 cc\nI am running the same environment and get the same issue. Any insight would be very appreciated\nUpdate: In another repo I get the same error when trying to link to pytorch etc. There I made a minimal case and managed to build when I removed linking to . I can see that we have in the script. Maybe that is the cause of the error?\nLooks like they are built with the correct architecture.\nMore progress, from local minimal case: fails. is just a hello world program.", "commid": "pytorch_issue_76094", "tokennum": 531}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-e5ae77a5e08322e65369c99bb0e38344715024cff6a41f3418003b3cb4bc4e1f", "query": "meant to be installed as pip packages) (default: False). relative_to (str, optional): path of the build file. Required when ``package is True``. It's best to use ``__file__`` for this argument. <del> kwargs: additional arguments that are passed to ffi to declar the </del> <ins> kwargs: additional arguments that are passed to ffi to declare the </ins> extension. See `Extension API reference`_ for details. .. _`Extension API reference`: https://docs.python.org/3/distutils/apiref.html#distutils.core.Extensionpositive_passagesdociddoc-en-pytorch-ec5921aa1d302972470c3f074cbc44243f961a731c2339a8fe339a689287e600textIf in is removed, then it builds.\nThe same story with PyTorch 1.10.0. The error appears when I'm trying to build with Apple clang 13.1.6 (Xcode Command Line Tools 13.3). But all works correctly if I build it with Apple clang 13.0 (Xcode Command Line Tools 13.2.1)\nNice, worked for me as well. Is this a bug somewhere or what is the exact problem? I drawback is that XCode needs to be up to date with new iOS versions.\nWhich version of Apple Clang worked? 13.0.0 or 13.0.1? Are you on Monterey 12.4?\nFiled an issue: cc:\nThis issue has been fixed in PeachPy a while back by but pinned version of PeachPy that PyTorch is using has not been updated in a very long time", "commid": "pytorch_issue_76094", "tokennum": 186}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-e5c6c52e389697c2263e94638406e059af264d366c283b8eb13b7ef2925b5de0", "query": "{ index = z * iwidth * iheight + y * iwidth + x; real val = ip[index]; <del> if (val > maxval) </del> <ins> if ((val > maxval) || isnan(val)) </ins> { maxval = val; maxindex = index;", "positive_passages": [{"docid": "doc-en-pytorch-73afc555bf492c7072a4f9994675674d19a7420810f08eaca2d074197550e516", "text": "max pooling functions are not consistent with max functions. Below an example, every max pooling (be it 1d, 2d or 3d, adaptive or not) acts the same, on cpu or on cuda. Essentially, there are two fondamental differences : max pooling of all values is while for it's max pooling of nan and valid values is valid values, which means s get ignored, while for , as soon as there is a value, the result is . More generally, choosing explicetely how to deal with as in numpy () could be a solution, but maybe this is related to CuDNN's max pooling ? Built from latest sources (as of 05/17) PyTorch version: 0.5.0a0+ Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.4 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: Quadro M1000M Nvidia driver version: 390.30 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux-gnu/libcudnnstaticv7.a Versions of relevant libraries: [conda] magma-cuda91 2.3.0 1 pytorch [conda] torch 0.5.0a0+ <pip[conda] torch 0.3.1b0+ <pip[conda] torch 0.5.0a0+ <pip\nHi, . Thank you for providing the example. I'll take a look.\nHmhm. I seem to get a double free in the ...\nAnother question: Do we really want to provide an option to ignore NaN? My view is that \, so I would just return NaN there, personally.\nMy practical usecase is for KITTI groundtruth depthmaps and FlowMaps which are 2D sparse arrays.commidpytorch_issue_7645tokennumnegative_passages |
|
query_idq-en-pytorch-e5c6c52e389697c2263e94638406e059af264d366c283b8eb13b7ef2925b5de0query{ index = z * iwidth * iheight + y * iwidth + x; real val = ip[index]; <del> if (val > maxval) </del> <ins> if ((val > maxval) || isnan(val)) </ins> { maxval = val; maxindex = index;positive_passagesdociddoc-en-pytorch-202a6b4a46ae62b8f7553c0429b323612fb197593d261f12c30c05b56ae7ce77textFor an algorithm that use FlowNet-like architecture that outputs predictions at multiple scale levels, we can either compare predictions to downscaled GT or upscaled predictions to GT. The first being obviously less computationally expensive, the ignore NaN would help downscaling such sparse 2D maps. For the moment we do something I find ugly, you can see it Essentially it zeroes the s, takes the map and construct two maps of positive and negative values which are then maxpooled and back together. I am actually open for a strict \ policy on pooling functions, but in that case better enforce it before someone writes a code that tries to benefit from maxpooling ignore s feature/bug and if you have a clever way of pooling sparse 2D tensors, I'm open to it, but I guess it's a topic for pytorch forums ;)\nPersonally, I think it is more sane have NaN -NaN in the pooling and offer a parametrizable (where you get to pick the values) that does or so.\nSo the proposed fix does NaN -NaN similar to max. I didn't try to fix gradients for the nan case. This would involve keeping the values and I don't think that is worth it (in particular because I would not expect the pooling layer to be last, and otherwise we'd probably get NaN as grad_out). If you are reasonably happy with it, I'd move it to a PR.\n+1 for NaN -NaN, \ We could also add an optional mask which would be a ByteTensor of the same size, specifying whether or not the considered pixel is used for the pooling, that could be used for any kind of pulling. An functionality would then be to provide the mask\nI could work with that. to continue to be picky, the potential drawback is that it won't work for other pooling methods such as average pooling or median pooling. The problem here is that the max operation inherently ignores nonmax values which can be leveraged for \"ignore some pixels\" operation, but it has a \"non universal\" feel to it since it woun't be as easy for other kinds of 2D operations.\nTo be clear, the fix is good for me, but I figured a related discussion on selective pooling (whether to ditch NaN values or anything you want to ignore) could happen (maybe not on this issue ?)commidpytorch_issue_7645tokennumnegative_passages |
|
{"query_id": "q-en-pytorch-e68748fa6c4aafa8c187da05b9d98e17b0db0f942a0f44442d1917f9db594699", "query": "// For Convolution strategies that don't implicitly handle grad_bias, we add a helper // function here to perform it using simple Tensor operators static at::Tensor compute_grad_bias(const at::Tensor& grad_output) { <del> // grad_output is in N, C, H, W, we re-shape and reduce over spatial dims and batches </del> <ins> // grad_output is in N, C, H, W, we re-shape and reduce over spatial dims and batches </ins> return grad_output.contiguous().view({grad_output.size(0), grad_output.size(1), -1}).sum(0).sum(1); }", "positive_passages": [{"docid": "doc-en-pytorch-c5dbd648f5f223c007de312a8c0f1ae78f27faaf3ad3e509ac9f43144221b039", "text": "This is a test. Please ignore it. Edited.\n<!-- validation-comment-start --<bodyHello there! From the DISABLED prefix in this issue title, it looks like you are attempting to disable a job in PyTorch CI. The information I have parsed is below: Job name: Credential: Within ~15 minutes, and all of its dependants will be disabled in PyTorch CI. Please verify that the job name looks correct. With great power comes great responsibility. </body<!-- validation-comment-end --", "commid": "pytorch_issue_94861", "tokennum": 122}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-f07ba2188846889a62ffcd00bc1564c97864fab48732feb1e9f5c83d821811a3", "query": "Args: input (Tensor): the tensor to compare other (Tensor or float): the tensor or value to compare <del> out (Tensor, optional): the output tensor that must be a `ByteTensor` or the same type as :attr:`input` </del> <ins> out (Tensor, optional): the output tensor that must be a `ByteTensor` </ins> Returns: Tensor: A `torch.ByteTensor` containing a 1 at each location where comparison is true", "positive_passages": [{"docid": "doc-en-pytorch-2fdc3cef791c19159039a95dfbc4d1859ba0e4d81197ef08e9f5067ac45538ca", "text": "[pytorch] The docs say that for tensor comparison operators (gt,lt etc) it should be possible to pass out argument typed as input (), yet when I try to do it, I hit an error Should the docs be fixed, or is it a bug?\nIf it's really useful we can add it back; but we'll fix the docs for now", "commid": "pytorch_issue_7933", "tokennum": 82}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-fba64ac32da2f1cb65a2a0cea021ae9220defa92747dc8f4d4bc86b53f0d9510", "query": "def test_AdaptiveMaxPool3d_indices_cuda(self, dtype=torch.float): self._test_maxpool_indices(3, adaptive=True, device=\"cuda\", dtype=dtype) <ins> @staticmethod def _test_max_pool_nan(self, device, dtype=torch.float): for adaptive in ['', 'adaptive_']: for num_dim in [1, 2, 3]: fn_name = '{}max_pool{}d'.format(adaptive, num_dim) fn = getattr(F, fn_name) x = torch.full([1, 1] + num_dim * [3], float('nan')) res = fn(x, 1 if adaptive else 3) self.assertTrue(math.isnan(res.item())) @unittest.skipIf(not TEST_CUDA, \"CUDA unavailable\") @repeat_test_for_types(ALL_TENSORTYPES) def test_max_pool_nan_cuda(self, dtype=torch.float): self._test_max_pool_nan(self, device=\"cuda\", dtype=dtype) def test_max_pool_nan(self, dtype=torch.float): self._test_max_pool_nan(self, device=\"cpu\") </ins> def _test_scatter(self, tensor): x = torch.tensor(tensor, requires_grad=True) result = dp.scatter(x, (0, 1))", "positive_passages": [{"docid": "doc-en-pytorch-73afc555bf492c7072a4f9994675674d19a7420810f08eaca2d074197550e516", "text": "max pooling functions are not consistent with max functions. Below an example, every max pooling (be it 1d, 2d or 3d, adaptive or not) acts the same, on cpu or on cuda. Essentially, there are two fondamental differences : max pooling of all values is while for it's max pooling of nan and valid values is valid values, which means s get ignored, while for , as soon as there is a value, the result is . More generally, choosing explicetely how to deal with as in numpy () could be a solution, but maybe this is related to CuDNN's max pooling ? Built from latest sources (as of 05/17) PyTorch version: 0.5.0a0+ Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.4 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: Quadro M1000M Nvidia driver version: 390.30 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux-gnu/libcudnnstaticv7.a Versions of relevant libraries: [conda] magma-cuda91 2.3.0 1 pytorch [conda] torch 0.5.0a0+ <pip[conda] torch 0.3.1b0+ <pip[conda] torch 0.5.0a0+ <pip\nHi, . Thank you for providing the example. I'll take a look.\nHmhm. I seem to get a double free in the ...\nAnother question: Do we really want to provide an option to ignore NaN? My view is that \, so I would just return NaN there, personally.\nMy practical usecase is for KITTI groundtruth depthmaps and FlowMaps which are 2D sparse arrays.commidpytorch_issue_7645tokennumnegative_passages |
|
query_idq-en-pytorch-fba64ac32da2f1cb65a2a0cea021ae9220defa92747dc8f4d4bc86b53f0d9510querydef test_AdaptiveMaxPool3d_indices_cuda(self, dtype=torch.float): self._test_maxpool_indices(3, adaptive=True, device=\, dtype=dtype) <ins> @staticmethod def _test_max_pool_nan(self, device, dtype=torch.float): for adaptive in ['', 'adaptive_']: for num_dim in [1, 2, 3]: fn_name = '{}max_pool{}d'.format(adaptive, num_dim) fn = getattr(F, fn_name) x = torch.full([1, 1] + num_dim * [3], float('nan')) res = fn(x, 1 if adaptive else 3) self.assertTrue(math.isnan(res.item())) @unittest.skipIf(not TEST_CUDA, \) @repeat_test_for_types(ALL_TENSORTYPES) def test_max_pool_nan_cuda(self, dtype=torch.float): self._test_max_pool_nan(self, device=\, dtype=dtype) def test_max_pool_nan(self, dtype=torch.float): self._test_max_pool_nan(self, device=\) </ins> def _test_scatter(self, tensor): x = torch.tensor(tensor, requires_grad=True) result = dp.scatter(x, (0, 1))positive_passagesdociddoc-en-pytorch-202a6b4a46ae62b8f7553c0429b323612fb197593d261f12c30c05b56ae7ce77textFor an algorithm that use FlowNet-like architecture that outputs predictions at multiple scale levels, we can either compare predictions to downscaled GT or upscaled predictions to GT. The first being obviously less computationally expensive, the ignore NaN would help downscaling such sparse 2D maps. For the moment we do something I find ugly, you can see it Essentially it zeroes the s, takes the map and construct two maps of positive and negative values which are then maxpooled and back together. I am actually open for a strict \ policy on pooling functions, but in that case better enforce it before someone writes a code that tries to benefit from maxpooling ignore s feature/bug and if you have a clever way of pooling sparse 2D tensors, I'm open to it, but I guess it's a topic for pytorch forums ;)\nPersonally, I think it is more sane have NaN -NaN in the pooling and offer a parametrizable (where you get to pick the values) that does or so.\nSo the proposed fix does NaN -NaN similar to max. I didn't try to fix gradients for the nan case. This would involve keeping the values and I don't think that is worth it (in particular because I would not expect the pooling layer to be last, and otherwise we'd probably get NaN as grad_out). If you are reasonably happy with it, I'd move it to a PR.\n+1 for NaN -NaN, \ We could also add an optional mask which would be a ByteTensor of the same size, specifying whether or not the considered pixel is used for the pooling, that could be used for any kind of pulling. An functionality would then be to provide the mask\nI could work with that. to continue to be picky, the potential drawback is that it won't work for other pooling methods such as average pooling or median pooling. The problem here is that the max operation inherently ignores nonmax values which can be leveraged for \"ignore some pixels\" operation, but it has a \"non universal\" feel to it since it woun't be as easy for other kinds of 2D operations.\nTo be clear, the fix is good for me, but I figured a related discussion on selective pooling (whether to ditch NaN values or anything you want to ignore) could happen (maybe not on this issue ?)commidpytorch_issue_7645tokennumnegative_passages |
|
|