CodeConvo / pytorch / pytorch.c2i.dev.jsonl
{"query_id": "q-en-pytorch-022e865efb10b2ae31ebf0d5562ed384aaa74dabbab74162da0996279403ca2f", "query": "tmpdir = tempfile.mkdtemp() ext_suf = '.pyd' if os.sys.platform == 'win32' else '.so' libname = cffi_wrapper_name + ext_suf <del> ffi.compile(tmpdir=tmpdir, verbose=verbose, target=libname) shutil.copy(os.path.join(tmpdir, libname), os.path.join(target_dir, libname)) </del> <ins> outfile = ffi.compile(tmpdir=tmpdir, verbose=verbose, target=libname) shutil.copy(outfile, os.path.join(target_dir, libname)) </ins> finally: shutil.rmtree(tmpdir)", "positive_passages": [{"docid": "doc-en-pytorch-b6c54f2eecc6fc5c9dd06c86954ad437cfcdd2a7e5b2a92006d42bbc302684a3", "text": "(old title) When build C extension, the error: FileNotFoundError was got. OS: Windows 10 pro PyTorch version: 0.4.0a0+ How you installed PyTorch (conda, pip, source): source Python version: 3.6.4 CUDA/cuDNN version: CUDA 9.0 GCC version (if compiling from source): msvc 14/15 (then compiling with CUDA I use VS2015, but when build extension, the program automatically use vs2017) When building c extension on Windows, I got the error: (The Chinese above means compiler success compile the library, and generated .lib and .exp) And the same error was got on a Linux Work Station. (gcc is 5.4.0)\nIn , copy the linked file from , for example , but in my Windows, and a Linux work station, the linked one is in: ,for example . So, there must be a bug, or error in pytorch's ffi or python's ffi. (pyhton 3.6.4, cffi 1.11.4), If it broke down because of change in cffi, I think I can create a PR. I do think this is because of the change of cffi api,\nCC who enabled extension build for Windows on\nIs this change documented in somewhere like Python SDK?", "commid": "pytorch_issue_5542", "tokennum": 303}], "negative_passages": []}
{"query_id": "q-en-pytorch-066dd6b918ee24c19d6d1836ab10af295d1039207535a528a4626cbf00ca2778", "query": "for(iw = 0; iw < kW; iw++) { real val = *(ip + it*istrideT + ih*istrideH + iw*istrideW); <del> if (val > maxval) </del> <ins> if ((val > maxval) || isnan(val)) </ins> { maxval = val; maxindex = (it+istartT)*isizeH*isizeW + (ih+istartH)*isizeW + (iw+istartW);", "positive_passages": [{"docid": "doc-en-pytorch-73afc555bf492c7072a4f9994675674d19a7420810f08eaca2d074197550e516", "text": "max pooling functions are not consistent with max functions. Below an example, every max pooling (be it 1d, 2d or 3d, adaptive or not) acts the same, on cpu or on cuda. Essentially, there are two fondamental differences : max pooling of all values is while for it's max pooling of nan and valid values is valid values, which means s get ignored, while for , as soon as there is a value, the result is . More generally, choosing explicetely how to deal with as in numpy () could be a solution, but maybe this is related to CuDNN's max pooling ? Built from latest sources (as of 05/17) PyTorch version: 0.5.0a0+ Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.4 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: Quadro M1000M Nvidia driver version: 390.30 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux-gnu/libcudnnstaticv7.a Versions of relevant libraries: [conda] magma-cuda91 2.3.0 1 pytorch [conda] torch 0.5.0a0+ <pip[conda] torch 0.3.1b0+ <pip[conda] torch 0.5.0a0+ <pip\nHi, . Thank you for providing the example. I'll take a look.\nHmhm. I seem to get a double free in the ...\nAnother question: Do we really want to provide an option to ignore NaN? My view is that \"if you have nan in your net, you're screwed\", so I would just return NaN there, personally.\nMy practical usecase is for KITTI groundtruth depthmaps and FlowMaps which are 2D sparse arrays.", "commid": "pytorch_issue_7645", "tokennum": 506}], "negative_passages": []}
{"query_id": "q-en-pytorch-066dd6b918ee24c19d6d1836ab10af295d1039207535a528a4626cbf00ca2778", "query": "for(iw = 0; iw < kW; iw++) { real val = *(ip + it*istrideT + ih*istrideH + iw*istrideW); <del> if (val > maxval) </del> <ins> if ((val > maxval) || isnan(val)) </ins> { maxval = val; maxindex = (it+istartT)*isizeH*isizeW + (ih+istartH)*isizeW + (iw+istartW);", "positive_passages": [{"docid": "doc-en-pytorch-202a6b4a46ae62b8f7553c0429b323612fb197593d261f12c30c05b56ae7ce77", "text": "For an algorithm that use FlowNet-like architecture that outputs predictions at multiple scale levels, we can either compare predictions to downscaled GT or upscaled predictions to GT. The first being obviously less computationally expensive, the ignore NaN would help downscaling such sparse 2D maps. For the moment we do something I find ugly, you can see it Essentially it zeroes the s, takes the map and construct two maps of positive and negative values which are then maxpooled and back together. I am actually open for a strict \"no-nan\" policy on pooling functions, but in that case better enforce it before someone writes a code that tries to benefit from maxpooling ignore s feature/bug and if you have a clever way of pooling sparse 2D tensors, I'm open to it, but I guess it's a topic for pytorch forums ;)\nPersonally, I think it is more sane have NaN -NaN in the pooling and offer a parametrizable (where you get to pick the values) that does or so.\nSo the proposed fix does NaN -NaN similar to max. I didn't try to fix gradients for the nan case. This would involve keeping the values and I don't think that is worth it (in particular because I would not expect the pooling layer to be last, and otherwise we'd probably get NaN as grad_out). If you are reasonably happy with it, I'd move it to a PR.\n+1 for NaN -NaN, \"abyssus abyssum invocat\" We could also add an optional mask which would be a ByteTensor of the same size, specifying whether or not the considered pixel is used for the pooling, that could be used for any kind of pulling. An functionality would then be to provide the mask\nI could work with that. to continue to be picky, the potential drawback is that it won't work for other pooling methods such as average pooling or median pooling. The problem here is that the max operation inherently ignores nonmax values which can be leveraged for \"ignore some pixels\" operation, but it has a \"non universal\" feel to it since it woun't be as easy for other kinds of 2D operations.\nTo be clear, the fix is good for me, but I figured a related discussion on selective pooling (whether to ditch NaN values or anything you want to ignore) could happen (maybe not on this issue ?)", "commid": "pytorch_issue_7645", "tokennum": 513}], "negative_passages": []}
{"query_id": "q-en-pytorch-1fcf3bc313dee3793326e5d79cfda64509e9f7dec1d4b06b486ee9d9f09de30d", "query": "raise ValueError('num_workers cannot be negative; ' 'use num_workers=0 to disable multiprocessing.') <ins> if sys.platform == \"win32\" and self.num_workers > 0: raise ValueError('num_workers > 0 is not supported on Windows') </ins> if batch_sampler is None: if sampler is None: if shuffle:", "positive_passages": [{"docid": "doc-en-pytorch-8d645240428935c6b8a48ae7f0ddcf46aaa90faf077d7ce3d52a0c84ff1a123a", "text": "This issue tracks the components / tests that are not working on Windows: [ ] : currently disabled on Windows because Windows doesn't have and we need to look for substitutes [ ] : Fuser is disabled on Windows because the current implementation uses symbols from Linux-specific headers and . We will need to find alternatives for Windows. [ ] : some parts of and are disabled because Windows doesn't support opening an already opened file (see discussion at and ) [ ] : currently doesn't work with Windows ( is the porting diff) [ ] : in causes intermittent CUDA out-of-memory error on Windows [ ] , : DataLoader with multiple workers causes intermittent CUDA out-of-memory error on Windows. [x] [x] (done in ) [x] [x] - - [x] [x] [x] - For more discussions, also see: cc\nThe first one is solved by but I think DataLoader can be further improved when is fininshed.\nCool I will mark it as resolved :)\nThe has been in .\nWe can try to revert and , because the memory leak in the CPU side could also cause CUDA errors.\nAre they fixed by I think we can revert them after we merge the PR.\nI think this may be related. Since once the memory of the CPU side is low, the will fail with too.\nI think it could be. For what it's worth, when I tried to inspect the CUDA OOM error, showed no process that was taking memory, but running CUDA tests on the machine would still fail.\nNow that is merged into master, could you please try to revert the changes on and ?\nAwesome! Just to understand it better: does fix both numworker=1 and numworker1 cases?\nYes, they are both solved.\nI guess we should mark the cpp_extension test as completed, for it's now enabled in CI.\nClosing this issue due to age and because its references are long out of date. For example, distributed tests now have their own jobs.", "commid": "pytorch_issue_4092", "tokennum": 431}], "negative_passages": []}
{"query_id": "q-en-pytorch-202b1a281a0d21743153927ee467e8ba4c18b5ea5a14b37e71db236027828df7", "query": "// with the mode. struct ModeUnsignedBoolPair match = {0, false}; <del> match = reduceBlockN<struct ModeUnsignedBoolPair, MatchReduceOp<struct ModeUnsignedBoolPair>, 2> </del> <ins> match = reduceBlockWithNThreadLocalReductions<struct ModeUnsignedBoolPair, MatchReduceOp<struct ModeUnsignedBoolPair>, 2> </ins> (ubpmem, ubpp, sliceSize, MatchReduceOp<struct ModeUnsignedBoolPair>(), match); // Finally, we have the mode, and an index where it occurs. We use a single thread", "positive_passages": [{"docid": "doc-en-pytorch-3dce8e556d6d4bd2ea519d551101e82db52b876cebdea9b13c4207de2ba26137", "text": "Not sure what is the reason for these errors, any suggestions? I suspect the clang version is not supported? Here is the\nclang have more strict checking , pull request will fixed it.\nfixed, thanks to", "commid": "pytorch_issue_745", "tokennum": 43}], "negative_passages": []}
{"query_id": "q-en-pytorch-21d9d9ac8e6325c9af9098cf1d35c71f8f1bc7483a724f70c90d1ee2e3d4a070", "query": "} else { // transposed if (input.size(1) != weight.size(0)) { std::stringstream ss; <del> ss << \"Given transposed=\" << transposed << \", weight\" << weight.sizes() << \", so expected input\" << input.sizes() << \" to have \" </del> <ins> ss << \"Given transposed=\" << transposed << \", weight of size \" << weight.sizes() << \", expected input\" << input.sizes() << \" to have \" </ins> << weight.size(0) << \" channels, but got \" << input.size(1) << \" channels instead\"; throw std::runtime_error(ss.str());", "positive_passages": [{"docid": "doc-en-pytorch-3fd0655cf1f98414cb74036460095097c829c4e79824f44f878c0695b6ea8a48", "text": "got the following err msg with conv2D and ver 0.4: RuntimeError: expected stride to be a single integer value or a list of 1 values to match the convolution dimensions, but got stride=[1, 1]. I was not able to see the problem so I uninstalled 0.4 and installed 0.3.1 Still got an err but this time it said that the expected input should be a 4d tensor and it got a 3D tensor. This helped understand the issue and I fixed it (just add a dimension). reinstalled 0.4 and its working (no surprise...). I think 0.4 should have the same err feedback otherwise its really hard to undrstand the problem How you installed PyTorch (conda, pip, source): conda Build command you used (if compiling from source): OS: macos PyTorch version: 0.4 Python version: 3.5 CUDA/cuDNN version: no GPU models and configuration: GCC version (if compiling from source): CMake version: Versions of any other relevant libraries:\nCan you please post a small self-contained code snippet that would let us reproduce the problem?\nThe strange error messge was also mentioned a few times in the forum. Here is a code snippet for PyTorch :\nThis is because we direct all conv ops to and infer dim + throw error message there.\nHmm that's not great. We might want to pass the expected dimensionality of the convolution to the generic implementation, so that we can improve the error messages.\nThx for the responses. Just a reminder, for the same err the provided err msg in version 0.3.1 was very informative\nYes, this should definitely be fixed. Sorry about this!\nThe error message of seems to be a bit misleading, too. Code: Should I create a new issue or is it related to the current one?\nthat appears fixed on master! :D", "commid": "pytorch_issue_7332", "tokennum": 434}], "negative_passages": []}
{"query_id": "q-en-pytorch-24958dcac671e563d27c0349df1d5c8487f8d5fc26fb2c0c895955fce808c502", "query": "if (weight_dim != k) { std::stringstream ss; <del> ss << \"Expected \" << k << \"-dimensional weight for \" << k << \"-dimensional input \" << input.sizes() << \", but got weight of size \" << weight.sizes() << \" instead\"; </del> <ins> ss << \"Expected \" << weight_dim << \"-dimensional input for \" << weight_dim << \"-dimensional weight \" << weight.sizes() << \", but got input of size \" << input.sizes() << \" instead\"; </ins> throw std::runtime_error(ss.str()); } if (weight.size(0) < groups) {", "positive_passages": [{"docid": "doc-en-pytorch-3fd0655cf1f98414cb74036460095097c829c4e79824f44f878c0695b6ea8a48", "text": "got the following err msg with conv2D and ver 0.4: RuntimeError: expected stride to be a single integer value or a list of 1 values to match the convolution dimensions, but got stride=[1, 1]. I was not able to see the problem so I uninstalled 0.4 and installed 0.3.1 Still got an err but this time it said that the expected input should be a 4d tensor and it got a 3D tensor. This helped understand the issue and I fixed it (just add a dimension). reinstalled 0.4 and its working (no surprise...). I think 0.4 should have the same err feedback otherwise its really hard to undrstand the problem How you installed PyTorch (conda, pip, source): conda Build command you used (if compiling from source): OS: macos PyTorch version: 0.4 Python version: 3.5 CUDA/cuDNN version: no GPU models and configuration: GCC version (if compiling from source): CMake version: Versions of any other relevant libraries:\nCan you please post a small self-contained code snippet that would let us reproduce the problem?\nThe strange error messge was also mentioned a few times in the forum. Here is a code snippet for PyTorch :\nThis is because we direct all conv ops to and infer dim + throw error message there.\nHmm that's not great. We might want to pass the expected dimensionality of the convolution to the generic implementation, so that we can improve the error messages.\nThx for the responses. Just a reminder, for the same err the provided err msg in version 0.3.1 was very informative\nYes, this should definitely be fixed. Sorry about this!\nThe error message of seems to be a bit misleading, too. Code: Should I create a new issue or is it related to the current one?\nthat appears fixed on master! :D", "commid": "pytorch_issue_7332", "tokennum": 434}], "negative_passages": []}
{"query_id": "q-en-pytorch-286cccb6da61a23a2bcb079e734646e80623fed1741e1e8e7ca8aaf9c8a7a41e", "query": "return Subscript(base, [build_SliceExpr(ctx, base, expr.slice)]) elif sub_type is ast.ExtSlice: return Subscript(base, build_ExtSlice(ctx, base, expr.slice)) <ins> elif sys.version_info >= (3, 9): # In Python3.9 array indicies are not wrapped in ast.Index if sub_type is ast.Tuple: # N-dimensional indexing using Tuple: x[(i, j, k)] is equivalent to x[i, j, k] indices = [] for index_expr in expr.slice.elts: if isinstance(index_expr, ast.Slice): indices.append(build_SliceExpr(ctx, base, index_expr)) else: indices.append(build_expr(ctx, index_expr)) return Subscript(base, indices) return Subscript(base, [build_expr(ctx, expr.slice)]) </ins> else: # Ellipsis (can only happen in Python 2) raise NotSupportedError(base.range(), \"ellipsis is not supported\")", "positive_passages": [{"docid": "doc-en-pytorch-a69891f877be902cc456139a7e993c9b38aa4f28c7548672cc69a9e3f1b209aa", "text": "Following example: fails in Python-3.9 with cc\nAnd the reason for that is very simple:\nAnother interesting offender:", "commid": "pytorch_issue_48674", "tokennum": 25}], "negative_passages": []}
{"query_id": "q-en-pytorch-29b4f12861a142464a9b25b830cabbf59ed5cb2adfaf60f093acac0c57573ea1", "query": "const float lr = *lr_ptr; if (!nesterov) { CUDA_1D_KERNEL_LOOP(i, N) { <del> moment_out[i] = mu * moment[i] * lr * grad[i]; </del> <ins> moment_out[i] = mu * moment[i] + lr * grad[i]; </ins> param_out[i] = param[i] - moment_out[i]; } } else {", "positive_passages": [{"docid": "doc-en-pytorch-4ad8a9f7a71b2ddce48315dd16c5fc5671be60942858425b0385ed5b869d99cf", "text": "+69 I read: To me, it should be: The CPU code is not affected.\nThe two lines of code are identical?\nNo, one is mu times moment time lr times grad instead of mu times moment plus lr times grad.\nI see. Send a PR? :)\nWill do :)", "commid": "pytorch_issue_6975", "tokennum": 65}], "negative_passages": []}
{"query_id": "q-en-pytorch-2b46b8dbf0d931ed37a0e15ed23bc29801672fff95ab89d5b94bca2bc2a0c892", "query": "if isinstance(expr.slice.value, ast.Tuple): # N-dimensional indexing using Tuple: x[(i, j, k)] is equivalent to x[i, j, k] # XXX: Indexing using a list is **different**! It triggers advanced indexing. <del> indices = [] for index_expr in expr.slice.value.elts: indices.append(build_expr(ctx, index_expr)) </del> <ins> indices = [build_expr(ctx, index_expr) for index_expr in expr.slice.value.elts] </ins> return Subscript(base, indices) else: return Subscript(base, [build_expr(ctx, expr.slice.value)])", "positive_passages": [{"docid": "doc-en-pytorch-a69891f877be902cc456139a7e993c9b38aa4f28c7548672cc69a9e3f1b209aa", "text": "Following example: fails in Python-3.9 with cc\nAnd the reason for that is very simple:\nAnother interesting offender:", "commid": "pytorch_issue_48674", "tokennum": 25}], "negative_passages": []}
{"query_id": "q-en-pytorch-2e3ced0554ae542f38fa4f46492eb2a9c1f34db3580330fa33d016c154e86473", "query": "'zero-dimensional.*cannot be concatenated'): torch.cat([x, y]) <del> def test_cat_empty(self): </del> <ins> @staticmethod def _test_cat_empty(self, use_cuda=False): </ins> # FIXME: this is legacy behavior and should be removed # when we support empty tensors with arbitrary sizes <del> x = torch.randn(4, 3, 32, 32) empty = torch.randn(0) </del> <ins> if use_cuda: dtype = torch.cuda.float32 else: dtype = torch.float32 x = torch.randn((4, 3, 32, 32), dtype=dtype) empty = torch.randn((0,), dtype=dtype) </ins> res1 = torch.cat([x, empty], dim=1) res2 = torch.cat([empty, x], dim=1) self.assertEqual(res1, res2) <del> conv = torch.nn.Conv2d(3, 3, kernel_size=1) </del> <ins> conv = torch.nn.Conv2d(3, 3, kernel_size=1).float() if use_cuda: conv = conv.cuda() </ins> res1 = torch.cat([conv(x), empty], dim=1) res2 = torch.cat([empty, conv(x)], dim=1) self.assertEqual(res1, res2)", "positive_passages": [{"docid": "doc-en-pytorch-5b118069fa67273ca83fc38bf735c53ddc46045a52aba22063750dd8da02e407", "text": "gdb points that the error might be in trying to get a size from an empty tensor: I'm using PyTorch version 0.4.0a0+\nThere's a check that should exclude zero-dim tensors from ( should be 0 in this case), so I'm wondering why that's not happening right now... edit: Nevermind, I was running an old build. I pulled the latest master and with being an empty tensor (with shape (0,)), cat crashes.\nRelated: We should probably rewrite to better handle these cases\nOkay, I found the bug. The CUDA version of check doesn't check the case where the input contains all empty tensors, while the CPU version does. I'll put up a fix soon.", "commid": "pytorch_issue_5739", "tokennum": 165}], "negative_passages": []}
{"query_id": "q-en-pytorch-2ec2247f8a7bf05febc899bf58721a55b2d0844e172b0df05134abd1edcc3ca8", "query": "{ tcntr = y*iwidth + x; real val = *(ip + tcntr); <del> if (val > maxval) </del> <ins> if ((val > maxval) || isnan(val)) </ins> { maxval = val; maxindex = tcntr;", "positive_passages": [{"docid": "doc-en-pytorch-73afc555bf492c7072a4f9994675674d19a7420810f08eaca2d074197550e516", "text": "max pooling functions are not consistent with max functions. Below an example, every max pooling (be it 1d, 2d or 3d, adaptive or not) acts the same, on cpu or on cuda. Essentially, there are two fondamental differences : max pooling of all values is while for it's max pooling of nan and valid values is valid values, which means s get ignored, while for , as soon as there is a value, the result is . More generally, choosing explicetely how to deal with as in numpy () could be a solution, but maybe this is related to CuDNN's max pooling ? Built from latest sources (as of 05/17) PyTorch version: 0.5.0a0+ Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.4 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: Quadro M1000M Nvidia driver version: 390.30 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux-gnu/libcudnnstaticv7.a Versions of relevant libraries: [conda] magma-cuda91 2.3.0 1 pytorch [conda] torch 0.5.0a0+ <pip[conda] torch 0.3.1b0+ <pip[conda] torch 0.5.0a0+ <pip\nHi, . Thank you for providing the example. I'll take a look.\nHmhm. I seem to get a double free in the ...\nAnother question: Do we really want to provide an option to ignore NaN? My view is that \"if you have nan in your net, you're screwed\", so I would just return NaN there, personally.\nMy practical usecase is for KITTI groundtruth depthmaps and FlowMaps which are 2D sparse arrays.", "commid": "pytorch_issue_7645", "tokennum": 506}], "negative_passages": []}
{"query_id": "q-en-pytorch-2ec2247f8a7bf05febc899bf58721a55b2d0844e172b0df05134abd1edcc3ca8", "query": "{ tcntr = y*iwidth + x; real val = *(ip + tcntr); <del> if (val > maxval) </del> <ins> if ((val > maxval) || isnan(val)) </ins> { maxval = val; maxindex = tcntr;", "positive_passages": [{"docid": "doc-en-pytorch-202a6b4a46ae62b8f7553c0429b323612fb197593d261f12c30c05b56ae7ce77", "text": "For an algorithm that use FlowNet-like architecture that outputs predictions at multiple scale levels, we can either compare predictions to downscaled GT or upscaled predictions to GT. The first being obviously less computationally expensive, the ignore NaN would help downscaling such sparse 2D maps. For the moment we do something I find ugly, you can see it Essentially it zeroes the s, takes the map and construct two maps of positive and negative values which are then maxpooled and back together. I am actually open for a strict \"no-nan\" policy on pooling functions, but in that case better enforce it before someone writes a code that tries to benefit from maxpooling ignore s feature/bug and if you have a clever way of pooling sparse 2D tensors, I'm open to it, but I guess it's a topic for pytorch forums ;)\nPersonally, I think it is more sane have NaN -NaN in the pooling and offer a parametrizable (where you get to pick the values) that does or so.\nSo the proposed fix does NaN -NaN similar to max. I didn't try to fix gradients for the nan case. This would involve keeping the values and I don't think that is worth it (in particular because I would not expect the pooling layer to be last, and otherwise we'd probably get NaN as grad_out). If you are reasonably happy with it, I'd move it to a PR.\n+1 for NaN -NaN, \"abyssus abyssum invocat\" We could also add an optional mask which would be a ByteTensor of the same size, specifying whether or not the considered pixel is used for the pooling, that could be used for any kind of pulling. An functionality would then be to provide the mask\nI could work with that. to continue to be picky, the potential drawback is that it won't work for other pooling methods such as average pooling or median pooling. The problem here is that the max operation inherently ignores nonmax values which can be leveraged for \"ignore some pixels\" operation, but it has a \"non universal\" feel to it since it woun't be as easy for other kinds of 2D operations.\nTo be clear, the fix is good for me, but I figured a related discussion on selective pooling (whether to ditch NaN values or anything you want to ignore) could happen (maybe not on this issue ?)", "commid": "pytorch_issue_7645", "tokennum": 513}], "negative_passages": []}
{"query_id": "q-en-pytorch-3427195ea7d48d9ab3b176a5c35a13bb1f7306610e21e5774d1802387ca3c835", "query": "t2 = torch.from_numpy(t.numpy().transpose()) self.assertEqual(t1, t2) <ins> def test_inplace_division(self): t = torch.rand(5, 5) id_before = id(t) t /= 2 id_after = id(t) self.assertEqual(id_before, id_after) </ins> # Functions to test negative dimension wrapping METHOD = 1 INPLACE_METHOD = 2", "positive_passages": [{"docid": "doc-en-pytorch-7a531b4e1408de313ecaafa4ee40fda2153fd640f1937560f296cfb7dbd6ca08", "text": "This is an ipython session. Note that the doesn't remain the same for /= even though it works for div_\nJust to be clear, the reason this is an issue is that it means that functions can't do inplace operations on function arguments.\nAlso, this works fine for +=", "commid": "pytorch_issue_2061", "tokennum": 65}], "negative_passages": []}
{"query_id": "q-en-pytorch-3a31795ee2fc98b796dd0c9851e0b44dab2c72e71d3233392cbb762548177a46", "query": "Args: input (Tensor): the tensor to compare other (Tensor or float): the tensor or value to compare <del> out (Tensor, optional): the output tensor. Must be a `ByteTensor` or the same type as `input`. </del> <ins> out (Tensor, optional): the output tensor. Must be a `ByteTensor` </ins> Returns: Tensor: A ``torch.ByteTensor`` containing a 1 at each location where comparison is true", "positive_passages": [{"docid": "doc-en-pytorch-2fdc3cef791c19159039a95dfbc4d1859ba0e4d81197ef08e9f5067ac45538ca", "text": "[pytorch] The docs say that for tensor comparison operators (gt,lt etc) it should be possible to pass out argument typed as input (), yet when I try to do it, I hit an error Should the docs be fixed, or is it a bug?\nIf it's really useful we can add it back; but we'll fix the docs for now", "commid": "pytorch_issue_7933", "tokennum": 82}], "negative_passages": []}
{"query_id": "q-en-pytorch-3cdb37047b20c512e3f4eebe8b5c212a33cca4021fd795eab2296789e2e0b00e", "query": "blockIdx.x; } <del> // Block-wide reduction in shared memory helper; only threadIdx.x == 0 will // return the reduced value template <typename T, typename ReduceOp> __device__ T reduceBlock(T* smem, int numVals, T threadVal, ReduceOp reduceOp, T init) { </del> <ins> // Reduce N values concurrently, i.e. suppose N = 2, and there are 4 threads: // (1, 2), (3, 4), (5, 6), (7, 8), then the return in threadVals for thread 0 // is (1 + 3 + 5 + 7, 2 + 4 + 6 + 8) = (16, 20) template <typename T, typename ReduceOp, int N> __device__ void reduceNValuesInBlock(T *smem, T threadVals[N], int numVals, ReduceOp reduceOp, T init) { </ins> if (numVals == 0) { <del> return init; </del> <ins> #pragma unroll for (int i = 0; i < N; ++i) { threadVals[i] = init; } return; </ins> } <ins> // We store each of the N values contiguously, so if N = 2, all values for // the first threadVal for each thread in the block are stored followed by // all of the values for the second threadVal for each thread in the block </ins> if (threadIdx.x < numVals) { <del> smem[threadIdx.x] = threadVal; </del> <ins> #pragma unroll for (int i = 0; i < N; ++i) { smem[i * numVals + threadIdx.x] = threadVals[i]; } </ins> } <del> // First warp will perform reductions across warps </del> __syncthreads(); <del> if ((threadIdx.x / warpSize) == 0) { T r = threadIdx.x < numVals ? smem[threadIdx.x] : init; </del> <ins> // Number of lanes in the final reduction --> this is used to determine // where to put the outputs of each of the n things we are reducing. If // nLP = 32, then we have the 32 outputs for the first threadVal, // followed by the 32 outputs for the second threadVal, etc. int numLanesParticipating = min(numVals, warpSize); if (numVals > warpSize && ((threadIdx.x / warpSize) == 0 )) { #pragma unroll for (int i = 0; i < N; ++i) { threadVals[i] = threadIdx.x < numVals ? 
threadVals[i] : init; } </ins> for (int i = warpSize + threadIdx.x; i < numVals; i += warpSize) { <del> r = reduceOp(r, smem[i]); </del> <ins> #pragma unroll for (int j = 0; j < N; ++j) { threadVals[j] = reduceOp(threadVals[j], smem[j * numVals + i]); } </ins> } <del> smem[threadIdx.x] = r; </del> <ins> #pragma unroll for (int i = 0; i < N; ++i) { smem[i * numLanesParticipating + threadIdx.x] = threadVals[i]; } </ins> } <del> // First thread will perform reductions across the block </del> __syncthreads(); <del> T r = init; </del> if (threadIdx.x == 0) { <del> r = smem[0]; int numLanesParticipating = min(numVals, warpSize); </del> if (numLanesParticipating == 32) { <del> // Unroll for warpSize == 32 and numVals >= 32 </del> #pragma unroll <del> for (int i = 1; i < 32; ++i) { r = reduceOp(r, smem[i]); </del> <ins> for (int i = 0; i < N; ++i) { #pragma unroll for (int j = 1; j < 32; ++j) { threadVals[i] = reduceOp(threadVals[i], smem[i * 32 + j]); } </ins> } } else { <del> for (int i = 1; i < numLanesParticipating; ++i) { r = reduceOp(r, smem[i]); </del> <ins> #pragma unroll for (int i = 0; i < N; ++i) { for (int j = 1; j < numLanesParticipating; ++j) { threadVals[i] = reduceOp(threadVals[i], smem[i * numVals + j]); } </ins> } } } <ins> } </ins> <del> return r; </del> <ins> // Block-wide reduction in shared memory helper; only threadIdx.x == 0 will // return the reduced value template <typename T, typename ReduceOp> __device__ T reduceBlock(T* smem, int numVals, T threadVal, ReduceOp reduceOp, T init) { reduceNValuesInBlock<T, ReduceOp, 1>(smem, &threadVal, numVals, reduceOp, init); return threadVal; </ins> } // Block-wide reduction where each thread locally reduces N // values before letting a single warp take over - assumes // threadVals is in registers, not shared memory template <typename T, typename ReduceOp, int N> <del> __device__ T reduceBlockN(T *smem, </del> <ins> __device__ T reduceBlockWithNThreadLocalReductions(T *smem, </ins> T threadVals[N], int numVals, ReduceOp reduceOp,", "positive_passages": [{"docid": "doc-en-pytorch-3dce8e556d6d4bd2ea519d551101e82db52b876cebdea9b13c4207de2ba26137", "text": "Not sure what is the reason for these errors, any suggestions? I suspect the clang version is not supported? Here is the\nclang have more strict checking , pull request will fixed it.\nfixed, thanks to", "commid": "pytorch_issue_745", "tokennum": 43}], "negative_passages": []}
{"query_id": "q-en-pytorch-4099cb47e51f5963ac2fbc751ead939ce3721d76c3af31b04a878eb6e554a458", "query": "#include \"THCTensor.hpp\" #include \"THCHalf.h\" #include \"THCHalfAutoNumerics.cuh\" <ins> #include \"THCNumerics.cuh\" </ins> #include \"common.h\" // kernels borrowed from Caffe", "positive_passages": [{"docid": "doc-en-pytorch-73afc555bf492c7072a4f9994675674d19a7420810f08eaca2d074197550e516", "text": "max pooling functions are not consistent with max functions. Below an example, every max pooling (be it 1d, 2d or 3d, adaptive or not) acts the same, on cpu or on cuda. Essentially, there are two fondamental differences : max pooling of all values is while for it's max pooling of nan and valid values is valid values, which means s get ignored, while for , as soon as there is a value, the result is . More generally, choosing explicetely how to deal with as in numpy () could be a solution, but maybe this is related to CuDNN's max pooling ? Built from latest sources (as of 05/17) PyTorch version: 0.5.0a0+ Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.4 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: Quadro M1000M Nvidia driver version: 390.30 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux-gnu/libcudnnstaticv7.a Versions of relevant libraries: [conda] magma-cuda91 2.3.0 1 pytorch [conda] torch 0.5.0a0+ <pip[conda] torch 0.3.1b0+ <pip[conda] torch 0.5.0a0+ <pip\nHi, . Thank you for providing the example. I'll take a look.\nHmhm. I seem to get a double free in the ...\nAnother question: Do we really want to provide an option to ignore NaN? My view is that \"if you have nan in your net, you're screwed\", so I would just return NaN there, personally.\nMy practical usecase is for KITTI groundtruth depthmaps and FlowMaps which are 2D sparse arrays.", "commid": "pytorch_issue_7645", "tokennum": 506}], "negative_passages": []}
{"query_id": "q-en-pytorch-4099cb47e51f5963ac2fbc751ead939ce3721d76c3af31b04a878eb6e554a458", "query": "#include \"THCTensor.hpp\" #include \"THCHalf.h\" #include \"THCHalfAutoNumerics.cuh\" <ins> #include \"THCNumerics.cuh\" </ins> #include \"common.h\" // kernels borrowed from Caffe", "positive_passages": [{"docid": "doc-en-pytorch-202a6b4a46ae62b8f7553c0429b323612fb197593d261f12c30c05b56ae7ce77", "text": "For an algorithm that use FlowNet-like architecture that outputs predictions at multiple scale levels, we can either compare predictions to downscaled GT or upscaled predictions to GT. The first being obviously less computationally expensive, the ignore NaN would help downscaling such sparse 2D maps. For the moment we do something I find ugly, you can see it Essentially it zeroes the s, takes the map and construct two maps of positive and negative values which are then maxpooled and back together. I am actually open for a strict \"no-nan\" policy on pooling functions, but in that case better enforce it before someone writes a code that tries to benefit from maxpooling ignore s feature/bug and if you have a clever way of pooling sparse 2D tensors, I'm open to it, but I guess it's a topic for pytorch forums ;)\nPersonally, I think it is more sane have NaN -NaN in the pooling and offer a parametrizable (where you get to pick the values) that does or so.\nSo the proposed fix does NaN -NaN similar to max. I didn't try to fix gradients for the nan case. This would involve keeping the values and I don't think that is worth it (in particular because I would not expect the pooling layer to be last, and otherwise we'd probably get NaN as grad_out). If you are reasonably happy with it, I'd move it to a PR.\n+1 for NaN -NaN, \"abyssus abyssum invocat\" We could also add an optional mask which would be a ByteTensor of the same size, specifying whether or not the considered pixel is used for the pooling, that could be used for any kind of pulling. An functionality would then be to provide the mask\nI could work with that. to continue to be picky, the potential drawback is that it won't work for other pooling methods such as average pooling or median pooling. The problem here is that the max operation inherently ignores nonmax values which can be leveraged for \"ignore some pixels\" operation, but it has a \"non universal\" feel to it since it woun't be as easy for other kinds of 2D operations.\nTo be clear, the fix is good for me, but I figured a related discussion on selective pooling (whether to ditch NaN values or anything you want to ignore) could happen (maybe not on this issue ?)", "commid": "pytorch_issue_7645", "tokennum": 513}], "negative_passages": []}
{"query_id": "q-en-pytorch-41d4dd651a0d4badf059091f9ac8324ffb5f3d6684e9d5e78affa9f305316e3b", "query": "# This should work though l2.weight = Variable(torch.randn(10, 10)) <ins> def test_embedding_padding_idx(self): embedding = nn.Embedding(10, 20, padding_idx = 0) input = Variable(torch.LongTensor([[0,2,4,5],[4,3,0,9]])) output = embedding(input) self.assertEqual(output[0][0].sum().data[0], 0) self.assertEqual(output[1][2].sum().data[0], 0) </ins> def test_Dropout(self): input = torch.Tensor(1000) self._test_dropout(nn.Dropout, input)", "positive_passages": [{"docid": "doc-en-pytorch-6dae6822e59fd00098fabde359ce44f544da09d2deb5db328ca4d8e6c0d81333", "text": "The following code, which repeatedly exports a model with , has a memory leak. During the export, every tensor parameter in is cloned once and then immediately leaked forever, without ever being collected by the GC. It's not the underlying buffer that's cloned, it's the lightweight wrapper object itself. Still, for long running processes that often export networks in this manner this is a unbounded memory leak that eventually results in OOM errors. I've reproduced this issue on both Linux and Windows, with pytorch versions and respectively. The final five lines inside the for loop are to debug what happens, they are not neccesary to reproduce the issue. forces a gc collection cycle, ensuring we're not accidentally counting dead objects show the total amount of objects that exist for each type for all objects whose amount has increased. From this we can see that we're leaking 2 additional tensors per that the tensors we're leaking have shapes and , so they're just the weight and bias of the linear that the underlying buffer is always the same, so only the shallow class instance is being that nothing is pointing to these newly created objects, so they should be collected. Example output after running for a while: seems closely related but is more about a temporary doubling in memory, this issue is about a permanent memory leak. was closed as a duplicate of the previous issue, but better matches this issue. Collecting environment information. PyTorch version: 1.10.0 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.5 LTS (x8664) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: 6.0.0-1ubuntu2 (tags/RELEASE600/final) CMake version: version 3.10.2 Libc version: glibc-2.17 Python version: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.15.0-41-generic-x8664-with-debian-buster-sid Is CUDA available: True CUDA runtime version: 11.3.", "commid": "pytorch_issue_82532", "tokennum": 517}], "negative_passages": []}
{"query_id": "q-en-pytorch-41d4dd651a0d4badf059091f9ac8324ffb5f3d6684e9d5e78affa9f305316e3b", "query": "# This should work though l2.weight = Variable(torch.randn(10, 10)) <ins> def test_embedding_padding_idx(self): embedding = nn.Embedding(10, 20, padding_idx = 0) input = Variable(torch.LongTensor([[0,2,4,5],[4,3,0,9]])) output = embedding(input) self.assertEqual(output[0][0].sum().data[0], 0) self.assertEqual(output[1][2].sum().data[0], 0) </ins> def test_Dropout(self): input = torch.Tensor(1000) self._test_dropout(nn.Dropout, input)", "positive_passages": [{"docid": "doc-en-pytorch-2f66b550279e97c384d138d0938f18ee7eb60e94c23ce88afd819c9c51455183", "text": "109 GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti GPU 1: NVIDIA GeForce RTX 3080 Ti Nvidia driver version: 515.48.07 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.21.2 [pip3] torch==1.10.0 [pip3] torchelastic==0.2.0 [pip3] torchtext==0.11.0 [pip3] torchvision==0.11.0 [conda] blas 1.0 mkl [conda] cudatoolkit 11.3.1 ha36c4319 nvidia [conda] ffmpeg 4.3 hf484d3e0 pytorch [conda] mkl 2021.3.0 h06a4308520 [conda] mkl-service 2.4.0 py37h7f8727e0 [conda] mklfft 1.3.1 py37hd3c417c0 [conda] mklrandom 1.2.2 py37h51133e40 [conda] numpy 1.21.2 py37h20f2e390 [conda] numpy-base 1.21.2 py37h79a11010 [conda] pytorch 1.10.0 py3.7cuda11.3cudnn8.2.00 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchelastic 0.2.0 pypi0 pypi [conda] torchtext 0.11.0 py37 pytorch [conda] torchvision 0.11.0 py37cu113 pytorch\nI'm also observing something like this without JIT, although I'm not sure it's the same issue. The program works fine until I add ONNX export every time a checkpoint is saved. Once I do that, the GPU memory usage grows until it OOMs.", "commid": "pytorch_issue_82532", "tokennum": 602}], "negative_passages": []}
{"query_id": "q-en-pytorch-41d4dd651a0d4badf059091f9ac8324ffb5f3d6684e9d5e78affa9f305316e3b", "query": "# This should work though l2.weight = Variable(torch.randn(10, 10)) <ins> def test_embedding_padding_idx(self): embedding = nn.Embedding(10, 20, padding_idx = 0) input = Variable(torch.LongTensor([[0,2,4,5],[4,3,0,9]])) output = embedding(input) self.assertEqual(output[0][0].sum().data[0], 0) self.assertEqual(output[1][2].sum().data[0], 0) </ins> def test_Dropout(self): input = torch.Tensor(1000) self._test_dropout(nn.Dropout, input)", "positive_passages": [{"docid": "doc-en-pytorch-51b93f6c54a298b88600b1182a14c127961bec33f155f28f6e4eb4444c79f1df", "text": "This issue is about lightweight Tensor objects being leaked, not the underlying (potentially GPU-side) buffer. I think your issue is a different one.\nI encountered the same error, is there a solution to this problem?\nPlease validate with the latest release and re-summit an issue if you see the same thing. As we are moving away from torchscript minor leaks are unlikely to be fixed, but contribution is welcomed.", "commid": "pytorch_issue_82532", "tokennum": 86}], "negative_passages": []}
{"query_id": "q-en-pytorch-440e73dec00a24d4e6b6f7a738f49c36aba4ceb41a955f172524f45b894c7d54", "query": "z = torch.cat([x, y]) self.assertEqual(z.size(), (21, SIZE, SIZE)) <ins> def test_cat_empty(self): TestTorch._test_cat_empty(self, use_cuda=True) </ins> def test_bernoulli(self): x = torch.tensor([0, 1], dtype=torch.cuda.float32) self.assertEqual(x.bernoulli().tolist(), [0, 1])", "positive_passages": [{"docid": "doc-en-pytorch-5b118069fa67273ca83fc38bf735c53ddc46045a52aba22063750dd8da02e407", "text": "gdb points that the error might be in trying to get a size from an empty tensor: I'm using PyTorch version 0.4.0a0+\nThere's a check that should exclude zero-dim tensors from ( should be 0 in this case), so I'm wondering why that's not happening right now... edit: Nevermind, I was running an old build. I pulled the latest master and with being an empty tensor (with shape (0,)), cat crashes.\nRelated: We should probably rewrite to better handle these cases\nOkay, I found the bug. The CUDA version of check doesn't check the case where the input contains all empty tensors, while the CPU version does. I'll put up a fix soon.", "commid": "pytorch_issue_5739", "tokennum": 165}], "negative_passages": []}
{"query_id": "q-en-pytorch-47fdd2074fae54076b9495d25a94a841524a59ccda55df179b17cbd54b477ef8", "query": "} } <del> static Tensor dispatch_clamp(const Tensor & self, Scalar min, Scalar max) { AutoNoGIL no_gil; AutoGPU auto_gpu(self); return self.clamp(min, max); } static Tensor dispatch_clamp_min(const Tensor & self, Scalar min) { AutoNoGIL no_gil; AutoGPU auto_gpu(self); return self.clamp_min(min); } static Tensor dispatch_clamp_max(const Tensor & self, Scalar max) { AutoNoGIL no_gil; AutoGPU auto_gpu(self); return self.clamp_max(max); } </del> // The Python clamp() syntax has to be mapped to one of three C++ functions static PyObject * THPVariable_clamp(PyObject* module, PyObject* args, PyObject* kwargs) { HANDLE_TH_ERRORS static PythonArgParser parser({ <del> \"clamp(Tensor input, Scalar min=None, Scalar max=None)\", </del> <ins> \"clamp(Tensor input, Scalar min=None, Scalar max=None, *, Tensor out=None)\", </ins> }); <del> ParsedArgs<3> parsed_args; </del> <ins> ParsedArgs<4> parsed_args; </ins> auto r = parser.parse(args, kwargs, parsed_args); if (!r.isNone(1) && !r.isNone(2)) { <del> return THPVariable_Wrap(dispatch_clamp(r.tensor(0), r.scalar(1), r.scalar(2))); </del> <ins> if (!r.isNone(3)) { return wrap(dispatch_clamp(r.tensor(0), r.scalar(1), r.scalar(2), r.tensor(3))); } else { return wrap(dispatch_clamp(r.tensor(0), r.scalar(1), r.scalar(2))); } </ins> } else if (!r.isNone(1)) { <del> return THPVariable_Wrap(dispatch_clamp_min(r.tensor(0), r.scalar(1))); </del> <ins> if (!r.isNone(3)) { return wrap(dispatch_clamp_min(r.tensor(0), r.scalar(1), r.tensor(3))); } else { return wrap(dispatch_clamp_min(r.tensor(0), r.scalar(1))); } </ins> } else if (!r.isNone(2)) { <del> return THPVariable_Wrap(dispatch_clamp_max(r.tensor(0), r.scalar(2))); </del> <ins> if (!r.isNone(3)) { return wrap(dispatch_clamp_max(r.tensor(0), r.scalar(2), r.tensor(3))); } else { return wrap(dispatch_clamp_max(r.tensor(0), r.scalar(2))); } </ins> } else { throw std::runtime_error(\"At least one of 'min' or 'max' must not be None\"); } <ins> Py_RETURN_NONE; </ins> END_HANDLE_TH_ERRORS }", "positive_passages": [{"docid": "doc-en-pytorch-9fadf55fef649758ce298e5387e2a100c57f31d87ba54cce681ee82647ba65e7", "text": "In PyTorch master:\nAdded clamp's output support in pre-template code.", "commid": "pytorch_issue_6028", "tokennum": 20}], "negative_passages": []}
{"query_id": "q-en-pytorch-4d141356d1159441381c7f3d4f816ba6d9aa5b681d7723a83576b39d219d1fbd", "query": "Args: input (Tensor): the tensor to compare other (Tensor or float): the tensor or value to compare <del> out (Tensor, optional): the output tensor that must be a `ByteTensor` or the same type as `input` </del> <ins> out (Tensor, optional): the output tensor that must be a `ByteTensor` </ins> Returns: Tensor: A ``torch.ByteTensor`` containing a 1 at each location where comparison is true.", "positive_passages": [{"docid": "doc-en-pytorch-2fdc3cef791c19159039a95dfbc4d1859ba0e4d81197ef08e9f5067ac45538ca", "text": "[pytorch] The docs say that for tensor comparison operators (gt,lt etc) it should be possible to pass out argument typed as input (), yet when I try to do it, I hit an error Should the docs be fixed, or is it a bug?\nIf it's really useful we can add it back; but we'll fix the docs for now", "commid": "pytorch_issue_7933", "tokennum": 82}], "negative_passages": []}
{"query_id": "q-en-pytorch-551cb05265906a9804e2fced9973d2fabb2a5a6facb30f3afcbb988f2bcea4f1", "query": "def __idiv__(self, other): return self.div_(other) <ins> __itruediv__ = __idiv__ </ins> def __mod__(self, other): return self.remainder(other)", "positive_passages": [{"docid": "doc-en-pytorch-7a531b4e1408de313ecaafa4ee40fda2153fd640f1937560f296cfb7dbd6ca08", "text": "This is an ipython session. Note that the doesn't remain the same for /= even though it works for div_\nJust to be clear, the reason this is an issue is that it means that functions can't do inplace operations on function arguments.\nAlso, this works fine for +=", "commid": "pytorch_issue_2061", "tokennum": 65}], "negative_passages": []}
{"query_id": "q-en-pytorch-5c250d7eb168a7c614115257ae332b0c408b13ef008048b0c212e3c73a0afe0c", "query": "// Lua indices begin at 1 IndexType dstIndex_ = indices.data[IndexToOffset<int64_t, IndexType, IdxDim>::get(dstIndex, indices)] - TH_INDEX_BASE; <del> assert(dstIndex < dstFillDimSize); </del> <ins> assert(dstIndex_ < dstFillDimSize); </ins> // We stride over the output ignoring the indexed dimension // (innerSize), whose offset calculation is handled differently", "positive_passages": [{"docid": "doc-en-pytorch-a58ca7f0869590acf2ec481e28df28698c3770bb005cad89d9a33abd30ddcf87", "text": "It seems that can change memory outside x, when x is a cuda tensor. If x is non-cuda tensor, we get: In contrast, when x is cuda tensor, does not make any error It's hard to share the whole code, but I have noticed that such operation outside a tensor did affect the performance of existing network, so I'm afraid that this op can change arbitrary memory on GPU which can be dangerous. Could you check this out?\nThis snippet is fine - it's enough for us to reproduce the problem. It appears we're missing some out-of-bounds checks (we have them for other indexing functions). Thanks for reporting.\nworking on this", "commid": "pytorch_issue_3922", "tokennum": 147}], "negative_passages": []}
{"query_id": "q-en-pytorch-5f5558ee304aa0ad910223e4262d50bde2c4a6c17ed172c87c24b4879280cdf3", "query": "tW = targetTensor.size(tDims - 1) adjW = self._calculateAdj(tW, self.kW, self.padW, self.dW) adjH = self._calculateAdj(tH, self.kH, self.padH, self.dH) <del> if self.finput is None: </del> <ins> if not hasattr(self, 'finput') or self.finput is None: </ins> self.finput = input[0].new() <del> if self.fgradInput is None: </del> <ins> if not hasattr(self, 'fgradInput') or self.fgradInput is None: </ins> self.fgradInput = input[0].new() else: <del> if self.finput is None: </del> <ins> if not hasattr(self, 'finput') or self.finput is None: </ins> self.finput = input.new() <del> if self.fgradInput is None: </del> <ins> if not hasattr(self, 'fgradInput') or self.fgradInput is None: </ins> self.fgradInput = input.new() inputTensor = self._makeContiguous(inputTensor)", "positive_passages": [{"docid": "doc-en-pytorch-1b8f2e384d5a404a3376c7149d43125421bb1bdaa0086fc82df551418fcec1b0", "text": "I am new to pythonwhen i solve the promblem with the help below I find some confusion in the code I set \u2018dimension=1self.dimension = dimension\u2019it seem ok for mebut i don\u2019t kown how the value of \u2019dimension\u2018 was initialled. Thank you !\nI already Konw it comes from 'module = JoinTable(dimension, nInputDims)' But when I convert the model to pytorch , error appears: Traceback (most recent call last): File \"\", line 173, in <moduleGnetf =generator.forward(input) File \"/usr/local/lib/python2.7/dist-\", line 33, in forward return self.updateOutput(input) File \"/usr/local/lib/python2.7/dist-\", line 36, in updateOutput currentOutput = module.updateOutput(currentOutput) File \"/usr/local/lib/python2.7/dist-\", line 37, in updateOutput (dim, offset, (dim)).copy_(currentOutput) RuntimeError: inconsistent tensor size at /home/lxl/pytorch-master/torch/lib/TH/generic/THTensorCopy.c:51\nI Use \"generator.modules[0] = nn.JoinTable(1)\",it was fine ,but error again: Traceback (most recent call last): File \"\", line 171, in <moduleGnetf =generator.forward(input) File \"/usr/local/lib/python2.7/dist-\", line 33, in forward return self.updateOutput(input) File \"/usr/local/lib/python2.7/dist-\", line 36, in updateOutput currentOutput = module.updateOutput(currentOutput) File \"/usr/local/lib/python2.7/dist-\", line 96, in updateOutput if is None: AttributeError: 'SpatialFullConvolution' object has no attribute 'finput'\nHow old is the Lua model file you're trying to import? Can you please try to load it in Lua, save again, and load it in PyTorch? Also, please update PyTorch to the newest version.\nThe model is convert from the cudnn model trained by myseft the code below is the convert code BTWmy torch was installed on 17th Dec,2016 the pytorch version i use Metadata-Version: 1.0 Name: torch Version: 0.1.10+ I Build it from source today", "commid": "pytorch_issue_968", "tokennum": 595}], "negative_passages": []}
{"query_id": "q-en-pytorch-60723fc8355f03ca1cf6f865db5b4983650f61c7a434e07bb5e6f9c4cba6a872", "query": "THPByteOrder::THP_LITTLE_ENDIAN, to_convert); } <del> SYSCHECK(write(fd, data, to_convert * sizeof(real))); </del> <ins> SYSCHECK(write(fd, le_buffer.get(), to_convert * sizeof(real))); </ins> } } }", "positive_passages": [{"docid": "doc-en-pytorch-22f86cba093cf26e315f2fbaec5ca280a4dc379518c77425509ada9da27f0f4a", "text": "The problem is here: You can't write an arbitrary number of bytes. See . On my system the limit seems to be 2GB, YMMV. To be safe, you probably want to fix the read call as well at , because there's an SSIZE_MAX limit.", "commid": "pytorch_issue_717", "tokennum": 60}], "negative_passages": []}
{"query_id": "q-en-pytorch-61b459b548dc3e3c41a011899f5c524fe8976e152a080d9910783e4576b9bba5", "query": "<del> Subproject commit 9f6a636e547fc70a02fa48436449aad67080698f </del> <ins> Subproject commit add56ccdcac23a6c522a2c1174a866e293c61dab </ins>", "positive_passages": [{"docid": "doc-en-pytorch-7a9fbde970acad238b85eeaecdd20c3b27927e286af12f701e9db672acc0182d", "text": "Pybind11 has a bugfix here: which is not included in pytorch master. In brief, the bug causes two python modules, when both compiled with buggy version of pybind11, to conflict and crash at import. I've last week when debugging its conflict with pytorch. Hope pytorch can also upgrade to avoid potential conflict with other libraries.", "commid": "pytorch_issue_4809", "tokennum": 87}], "negative_passages": []}
{"query_id": "q-en-pytorch-69cb51f0ded6db481b492d760fc1235533115dd208da946400a73295b7f7117d", "query": "self.assertEqual(output[0][0].sum().data[0], 0) self.assertEqual(output[1][2].sum().data[0], 0) <ins> def test_embedding_max_norm(self): embedding = nn.Embedding(22, 5, max_norm=1.0) input = Variable(torch.LongTensor([2, 8, 8, 6])) output = embedding(input) self.assertEqual(output[1], output[2]) self.assertTrue(output.data.norm(p=2, dim=1).le(1).all()) @unittest.skipIf(not TEST_CUDA, \"CUDA unavailable\") def test_embedding_max_norm_cuda(self): embedding = nn.Embedding(22, 5, max_norm=1.0).cuda() input = Variable(torch.LongTensor([2, 8, 8, 6])).cuda() output = embedding(input) self.assertEqual(output[1], output[2]) self.assertTrue(output.data.norm(p=2, dim=1).le(1).all()) </ins> def test_embedding_functional(self): a = Variable(torch.LongTensor([ [1, 3, 2],", "positive_passages": [{"docid": "doc-en-pytorch-8a24171cae3316021ad4a394597442544d5ad7a610a19fd0700a51e599ae8017", "text": "The output is also incorrect. It's the output from the sorted indices, instead of the user specified indices. Reported by", "commid": "pytorch_issue_2413", "tokennum": 25}], "negative_passages": []}
{"query_id": "q-en-pytorch-6c7de40e95bf34dc14b2735c160240ca9b4fdd3569633c3d30d09bfb2a9e0564", "query": "for(ih = 0; ih < kH; ih++) { for(iw = 0; iw < kW; iw++) { T val = ptr_input[iw*istrideW]; <del> if (val > max) { </del> <ins> if ((val > max) || THCNumerics<T>::isnan(val)) { </ins> max = val; argmax = (ih+istartH)*isizeW + iw+istartW; }", "positive_passages": [{"docid": "doc-en-pytorch-73afc555bf492c7072a4f9994675674d19a7420810f08eaca2d074197550e516", "text": "max pooling functions are not consistent with max functions. Below an example, every max pooling (be it 1d, 2d or 3d, adaptive or not) acts the same, on cpu or on cuda. Essentially, there are two fondamental differences : max pooling of all values is while for it's max pooling of nan and valid values is valid values, which means s get ignored, while for , as soon as there is a value, the result is . More generally, choosing explicetely how to deal with as in numpy () could be a solution, but maybe this is related to CuDNN's max pooling ? Built from latest sources (as of 05/17) PyTorch version: 0.5.0a0+ Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.4 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: Quadro M1000M Nvidia driver version: 390.30 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux-gnu/libcudnnstaticv7.a Versions of relevant libraries: [conda] magma-cuda91 2.3.0 1 pytorch [conda] torch 0.5.0a0+ <pip[conda] torch 0.3.1b0+ <pip[conda] torch 0.5.0a0+ <pip\nHi, . Thank you for providing the example. I'll take a look.\nHmhm. I seem to get a double free in the ...\nAnother question: Do we really want to provide an option to ignore NaN? My view is that \"if you have nan in your net, you're screwed\", so I would just return NaN there, personally.\nMy practical usecase is for KITTI groundtruth depthmaps and FlowMaps which are 2D sparse arrays.", "commid": "pytorch_issue_7645", "tokennum": 506}], "negative_passages": []}
{"query_id": "q-en-pytorch-6c7de40e95bf34dc14b2735c160240ca9b4fdd3569633c3d30d09bfb2a9e0564", "query": "for(ih = 0; ih < kH; ih++) { for(iw = 0; iw < kW; iw++) { T val = ptr_input[iw*istrideW]; <del> if (val > max) { </del> <ins> if ((val > max) || THCNumerics<T>::isnan(val)) { </ins> max = val; argmax = (ih+istartH)*isizeW + iw+istartW; }", "positive_passages": [{"docid": "doc-en-pytorch-202a6b4a46ae62b8f7553c0429b323612fb197593d261f12c30c05b56ae7ce77", "text": "For an algorithm that use FlowNet-like architecture that outputs predictions at multiple scale levels, we can either compare predictions to downscaled GT or upscaled predictions to GT. The first being obviously less computationally expensive, the ignore NaN would help downscaling such sparse 2D maps. For the moment we do something I find ugly, you can see it Essentially it zeroes the s, takes the map and construct two maps of positive and negative values which are then maxpooled and back together. I am actually open for a strict \"no-nan\" policy on pooling functions, but in that case better enforce it before someone writes a code that tries to benefit from maxpooling ignore s feature/bug and if you have a clever way of pooling sparse 2D tensors, I'm open to it, but I guess it's a topic for pytorch forums ;)\nPersonally, I think it is more sane have NaN -NaN in the pooling and offer a parametrizable (where you get to pick the values) that does or so.\nSo the proposed fix does NaN -NaN similar to max. I didn't try to fix gradients for the nan case. This would involve keeping the values and I don't think that is worth it (in particular because I would not expect the pooling layer to be last, and otherwise we'd probably get NaN as grad_out). If you are reasonably happy with it, I'd move it to a PR.\n+1 for NaN -NaN, \"abyssus abyssum invocat\" We could also add an optional mask which would be a ByteTensor of the same size, specifying whether or not the considered pixel is used for the pooling, that could be used for any kind of pulling. An functionality would then be to provide the mask\nI could work with that. to continue to be picky, the potential drawback is that it won't work for other pooling methods such as average pooling or median pooling. The problem here is that the max operation inherently ignores nonmax values which can be leveraged for \"ignore some pixels\" operation, but it has a \"non universal\" feel to it since it woun't be as easy for other kinds of 2D operations.\nTo be clear, the fix is good for me, but I figured a related discussion on selective pooling (whether to ditch NaN values or anything you want to ignore) could happen (maybe not on this issue ?)", "commid": "pytorch_issue_7645", "tokennum": 513}], "negative_passages": []}
{"query_id": "q-en-pytorch-6df228497008e8ec4cd2ec8393bca92b7924400e4a996e7624da61de75ba792f", "query": "<ins> #define __STDC_FORMAT_MACROS </ins> #include <Python.h> #ifdef _MSC_VER #include <Windows.h>", "positive_passages": [{"docid": "doc-en-pytorch-e622986277177825174756cd0d58bb252b326c5ede6f55945541c76bbab2d669", "text": "Hi There, I'm trying to install Pytorch as a module on my university's computing cluster. The node uses CentOS (6.4) x8664. I install running these commands: module load gcclib/5.2.0 cmake/3.8.2 module load gcc/5.2.0 module load anaconda3/4.0.0 export CMAKEPREFIXPATH=\"(which conda))/../\" export NOCUDA=1 git clone --recursive cd pytorch/ python install --prefix=${HOME} Things look fine as it installs but then I run into the following errors, ending in the process terminating with error relating to GCC? I've tried with version GCC 6.2.0 and the same result occurs. Not sure what to even try to fix this! Thanks for any help you can provide!\nSame here, with gcc 5.4.0\nI don't think building from source works very well with gcc 5.4. Could you install gcc 4.9 and try compiling?\nI get this error instead when using gcc 4.9.0, doesn't make it very far at all.\nHi I think this is broken after / due to\nI have the same problem ( etc.) with gcc 4.8.5 on Linux with the current head ( ) What works for me is to add at the beginning of the following four files: (see also -- this can be turned into a pull request very easily) This fix is the same as but for different files.\nThis worked for me Thank you.\nfixed in latest master, thanks to", "commid": "pytorch_issue_3628", "tokennum": 358}], "negative_passages": []}
{"query_id": "q-en-pytorch-6eb6639f1ba6b2299a30030e64d043695fa872f704c8ec2357677fa315d28087", "query": "// fast track for bytes and little endian if (sizeof(real) == 1 || THP_nativeByteOrder() == THPByteOrder::THP_LITTLE_ENDIAN) { <del> SYSCHECK(read(fd, data, sizeof(real) * storage->size)); </del> <ins> char *bytes = (char *) data; uint64_t remaining = sizeof(real) * storage->size; while (remaining > 0) { ssize_t result = read(fd, bytes, remaining); if (result < 0) throw std::system_error(result, std::system_category()); bytes += result; remaining -= result; } </ins> } else { long buffer_size = std::min(size, (long)5000); std::unique_ptr<uint8_t[]> le_buffer(new uint8_t[buffer_size * sizeof(real)]); <del> for (long i = 0; i < size; i += buffer_size) { </del> <ins> for (int64_t i = 0; i < size; i += buffer_size) { </ins> size_t to_convert = std::min(size - i, buffer_size); SYSCHECK(read(fd, le_buffer.get(), sizeof(real) * to_convert)); if (sizeof(real) == 2) {", "positive_passages": [{"docid": "doc-en-pytorch-22f86cba093cf26e315f2fbaec5ca280a4dc379518c77425509ada9da27f0f4a", "text": "The problem is here: You can't write an arbitrary number of bytes. See . On my system the limit seems to be 2GB, YMMV. To be safe, you probably want to fix the read call as well at , because there's an SSIZE_MAX limit.", "commid": "pytorch_issue_717", "tokennum": 60}], "negative_passages": []}
{"query_id": "q-en-pytorch-73ff66b1583b63baf05c469b53e78587bbebebb925dd78cda19f3d6b264e313f", "query": "Args: input (Tensor): the tensor to compare other (Tensor or float): the tensor or value to compare <del> out (Tensor, optional): the output tensor that must be a `ByteTensor` or the same type as :attr:`input` </del> <ins> out (Tensor, optional): the output tensor that must be a `ByteTensor` </ins> Returns: Tensor: A ``torch.ByteTensor`` containing a 1 at each location where comparison is true", "positive_passages": [{"docid": "doc-en-pytorch-2fdc3cef791c19159039a95dfbc4d1859ba0e4d81197ef08e9f5067ac45538ca", "text": "[pytorch] The docs say that for tensor comparison operators (gt,lt etc) it should be possible to pass out argument typed as input (), yet when I try to do it, I hit an error Should the docs be fixed, or is it a bug?\nIf it's really useful we can add it back; but we'll fix the docs for now", "commid": "pytorch_issue_7933", "tokennum": 82}], "negative_passages": []}
{"query_id": "q-en-pytorch-7928473c855fede51fc945c91f8640dc782109f7ffed480f44bee0760b53642d", "query": "\"aten::set_grad_enabled(bool val) -> ()\", [](Stack* stack) { torch::GradMode::set_enabled(pop(stack).toBool()); <del> push(stack, IValue()); </del> }, aliasAnalysisConservative()), });", "positive_passages": [{"docid": "doc-en-pytorch-d9bb332d9772655177ee32a7fd2b5ed4a64db0a082c9bc08c0bb1e22f3d5dc18", "text": "Recently, we are testing PyTorch 1.7 as we need the with-statement support, so that our algorithm which contains a custom module using can be deployed to the production environment. During the testing, we encountered the following assertion failure: Steps to reproduce the behavior: the following Python code to create a serialized PyTorch module. a C++ test program. the test program with the following command: in the current directory. should terminate with exit code 0 instead of throw an exception. Collecting environment information... PyTorch version: 1.7.0.dev20200922+cpu Is debug build: True CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: CentOS Linux 7 (Core) (x8664) GCC version: (GCC) 4.8.5 (Red Hat 4.8.5-39) Clang version: Could not collect CMake version: Could not collect Python version: 3.7 (64-bit runtime) Is CUDA available: False CUDA runtime version: No CUDA GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Versions of relevant libraries: [pip3] numpy==1.19.2 [pip3] torch==1.7.0.dev20200922+cpu [pip3] torchvision==0.7.0+cpu [conda] Could not collect cc", "commid": "pytorch_issue_45558", "tokennum": 343}], "negative_passages": []}
{"query_id": "q-en-pytorch-7d76bd3eea5a0324ece90a539cf25b6518205450745d9e02d08e72a7785c493f", "query": "} } <ins> // If all inputs are empty tensors, return an empty tensor if (notEmptyTensor == NULL) { return; } </ins> // In the event that the user specified -1 as the concat dimension, then // we want to pick the nDims as dimension to cat along (and thus nDims - 1 as the // value due to 0-based indexing). If the nDims is // 0 (i.e. we are catting all", "positive_passages": [{"docid": "doc-en-pytorch-5b118069fa67273ca83fc38bf735c53ddc46045a52aba22063750dd8da02e407", "text": "gdb points that the error might be in trying to get a size from an empty tensor: I'm using PyTorch version 0.4.0a0+\nThere's a check that should exclude zero-dim tensors from ( should be 0 in this case), so I'm wondering why that's not happening right now... edit: Nevermind, I was running an old build. I pulled the latest master and with being an empty tensor (with shape (0,)), cat crashes.\nRelated: We should probably rewrite to better handle these cases\nOkay, I found the bug. The CUDA version of check doesn't check the case where the input contains all empty tensors, while the CPU version does. I'll put up a fix soon.", "commid": "pytorch_issue_5739", "tokennum": 165}], "negative_passages": []}
{"query_id": "q-en-pytorch-81fc2c89f5705744d911fc82060c1215fc1688583b68e7f9223af82e7c00c1fb", "query": "bottom_data += (n * channels + c) * height * width; for (int h = hstart; h < hend; h += dilation_h) { for (int w = wstart; w < wend; w += dilation_w) { <del> if (ScalarConvert<Dtype, AccType>::to(bottom_data[h * width + w]) > maxval) { </del> <ins> Dtype val = bottom_data[h * width + w]; if ((ScalarConvert<Dtype, AccType>::to(val) > maxval) || THCNumerics<Dtype>::isnan(val)) { </ins> maxidx = h * width + w; <del> maxval = ScalarConvert<Dtype, AccType>::to(bottom_data[maxidx]); </del> <ins> maxval = ScalarConvert<Dtype, AccType>::to(val); </ins> } } }", "positive_passages": [{"docid": "doc-en-pytorch-73afc555bf492c7072a4f9994675674d19a7420810f08eaca2d074197550e516", "text": "max pooling functions are not consistent with max functions. Below an example, every max pooling (be it 1d, 2d or 3d, adaptive or not) acts the same, on cpu or on cuda. Essentially, there are two fondamental differences : max pooling of all values is while for it's max pooling of nan and valid values is valid values, which means s get ignored, while for , as soon as there is a value, the result is . More generally, choosing explicetely how to deal with as in numpy () could be a solution, but maybe this is related to CuDNN's max pooling ? Built from latest sources (as of 05/17) PyTorch version: 0.5.0a0+ Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.4 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: Quadro M1000M Nvidia driver version: 390.30 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux-gnu/libcudnnstaticv7.a Versions of relevant libraries: [conda] magma-cuda91 2.3.0 1 pytorch [conda] torch 0.5.0a0+ <pip[conda] torch 0.3.1b0+ <pip[conda] torch 0.5.0a0+ <pip\nHi, . Thank you for providing the example. I'll take a look.\nHmhm. I seem to get a double free in the ...\nAnother question: Do we really want to provide an option to ignore NaN? My view is that \"if you have nan in your net, you're screwed\", so I would just return NaN there, personally.\nMy practical usecase is for KITTI groundtruth depthmaps and FlowMaps which are 2D sparse arrays.", "commid": "pytorch_issue_7645", "tokennum": 506}], "negative_passages": []}
{"query_id": "q-en-pytorch-81fc2c89f5705744d911fc82060c1215fc1688583b68e7f9223af82e7c00c1fb", "query": "bottom_data += (n * channels + c) * height * width; for (int h = hstart; h < hend; h += dilation_h) { for (int w = wstart; w < wend; w += dilation_w) { <del> if (ScalarConvert<Dtype, AccType>::to(bottom_data[h * width + w]) > maxval) { </del> <ins> Dtype val = bottom_data[h * width + w]; if ((ScalarConvert<Dtype, AccType>::to(val) > maxval) || THCNumerics<Dtype>::isnan(val)) { </ins> maxidx = h * width + w; <del> maxval = ScalarConvert<Dtype, AccType>::to(bottom_data[maxidx]); </del> <ins> maxval = ScalarConvert<Dtype, AccType>::to(val); </ins> } } }", "positive_passages": [{"docid": "doc-en-pytorch-202a6b4a46ae62b8f7553c0429b323612fb197593d261f12c30c05b56ae7ce77", "text": "For an algorithm that use FlowNet-like architecture that outputs predictions at multiple scale levels, we can either compare predictions to downscaled GT or upscaled predictions to GT. The first being obviously less computationally expensive, the ignore NaN would help downscaling such sparse 2D maps. For the moment we do something I find ugly, you can see it Essentially it zeroes the s, takes the map and construct two maps of positive and negative values which are then maxpooled and back together. I am actually open for a strict \"no-nan\" policy on pooling functions, but in that case better enforce it before someone writes a code that tries to benefit from maxpooling ignore s feature/bug and if you have a clever way of pooling sparse 2D tensors, I'm open to it, but I guess it's a topic for pytorch forums ;)\nPersonally, I think it is more sane have NaN -NaN in the pooling and offer a parametrizable (where you get to pick the values) that does or so.\nSo the proposed fix does NaN -NaN similar to max. I didn't try to fix gradients for the nan case. This would involve keeping the values and I don't think that is worth it (in particular because I would not expect the pooling layer to be last, and otherwise we'd probably get NaN as grad_out). If you are reasonably happy with it, I'd move it to a PR.\n+1 for NaN -NaN, \"abyssus abyssum invocat\" We could also add an optional mask which would be a ByteTensor of the same size, specifying whether or not the considered pixel is used for the pooling, that could be used for any kind of pulling. An functionality would then be to provide the mask\nI could work with that. to continue to be picky, the potential drawback is that it won't work for other pooling methods such as average pooling or median pooling. The problem here is that the max operation inherently ignores nonmax values which can be leveraged for \"ignore some pixels\" operation, but it has a \"non universal\" feel to it since it woun't be as easy for other kinds of 2D operations.\nTo be clear, the fix is good for me, but I figured a related discussion on selective pooling (whether to ditch NaN values or anything you want to ignore) could happen (maybe not on this issue ?)", "commid": "pytorch_issue_7645", "tokennum": 513}], "negative_passages": []}
{"query_id": "q-en-pytorch-88ca958657dda50101b09cbdd7703b45fbeec7c19f02de8e9ef082e239c181d4", "query": "<ins> #define __STDC_FORMAT_MACROS </ins> #include <Python.h> #include <structmember.h>", "positive_passages": [{"docid": "doc-en-pytorch-e622986277177825174756cd0d58bb252b326c5ede6f55945541c76bbab2d669", "text": "Hi There, I'm trying to install Pytorch as a module on my university's computing cluster. The node uses CentOS (6.4) x8664. I install running these commands: module load gcclib/5.2.0 cmake/3.8.2 module load gcc/5.2.0 module load anaconda3/4.0.0 export CMAKEPREFIXPATH=\"(which conda))/../\" export NOCUDA=1 git clone --recursive cd pytorch/ python install --prefix=${HOME} Things look fine as it installs but then I run into the following errors, ending in the process terminating with error relating to GCC? I've tried with version GCC 6.2.0 and the same result occurs. Not sure what to even try to fix this! Thanks for any help you can provide!\nSame here, with gcc 5.4.0\nI don't think building from source works very well with gcc 5.4. Could you install gcc 4.9 and try compiling?\nI get this error instead when using gcc 4.9.0, doesn't make it very far at all.\nHi I think this is broken after / due to\nI have the same problem ( etc.) with gcc 4.8.5 on Linux with the current head ( ) What works for me is to add at the beginning of the following four files: (see also -- this can be turned into a pull request very easily) This fix is the same as but for different files.\nThis worked for me Thank you.\nfixed in latest master, thanks to", "commid": "pytorch_issue_3628", "tokennum": 358}], "negative_passages": []}
{"query_id": "q-en-pytorch-977578af41147a670dcbbe495360d3862903fb443373d37d981d21a197c31db7", "query": "auto input = input_r.contiguous(); auto weight = weight_r; auto bias = bias_r; <del> auto k = input.ndimension(); </del> <ins> auto k = weight.ndimension(); </ins> int64_t dim = k - 2; if (dim <= 0) { <del> throw std::runtime_error(\"input has less dimensions than expected\"); </del> <ins> throw std::runtime_error(\"weight should have at least two dimensions\"); </ins> } ConvParams params;", "positive_passages": [{"docid": "doc-en-pytorch-3fd0655cf1f98414cb74036460095097c829c4e79824f44f878c0695b6ea8a48", "text": "got the following err msg with conv2D and ver 0.4: RuntimeError: expected stride to be a single integer value or a list of 1 values to match the convolution dimensions, but got stride=[1, 1]. I was not able to see the problem so I uninstalled 0.4 and installed 0.3.1 Still got an err but this time it said that the expected input should be a 4d tensor and it got a 3D tensor. This helped understand the issue and I fixed it (just add a dimension). reinstalled 0.4 and its working (no surprise...). I think 0.4 should have the same err feedback otherwise its really hard to undrstand the problem How you installed PyTorch (conda, pip, source): conda Build command you used (if compiling from source): OS: macos PyTorch version: 0.4 Python version: 3.5 CUDA/cuDNN version: no GPU models and configuration: GCC version (if compiling from source): CMake version: Versions of any other relevant libraries:\nCan you please post a small self-contained code snippet that would let us reproduce the problem?\nThe strange error messge was also mentioned a few times in the forum. Here is a code snippet for PyTorch :\nThis is because we direct all conv ops to and infer dim + throw error message there.\nHmm that's not great. We might want to pass the expected dimensionality of the convolution to the generic implementation, so that we can improve the error messages.\nThx for the responses. Just a reminder, for the same err the provided err msg in version 0.3.1 was very informative\nYes, this should definitely be fixed. Sorry about this!\nThe error message of seems to be a bit misleading, too. Code: Should I create a new issue or is it related to the current one?\nthat appears fixed on master! :D", "commid": "pytorch_issue_7332", "tokennum": 434}], "negative_passages": []}
{"query_id": "q-en-pytorch-9b3bfe1e851162000f63e68f19a6b5422d28fae26c2b41e6552e3d6ccc3747ba", "query": "for(iw = 0; iw < kW; iw++) { real val = *(ip + ih*istrideH + iw*istrideW); <del> if (val > maxval) </del> <ins> if ((val > maxval) || isnan(val)) </ins> { maxval = val; maxindex = (ih+istartH)*isizeW + (iw+istartW);", "positive_passages": [{"docid": "doc-en-pytorch-73afc555bf492c7072a4f9994675674d19a7420810f08eaca2d074197550e516", "text": "max pooling functions are not consistent with max functions. Below an example, every max pooling (be it 1d, 2d or 3d, adaptive or not) acts the same, on cpu or on cuda. Essentially, there are two fondamental differences : max pooling of all values is while for it's max pooling of nan and valid values is valid values, which means s get ignored, while for , as soon as there is a value, the result is . More generally, choosing explicetely how to deal with as in numpy () could be a solution, but maybe this is related to CuDNN's max pooling ? Built from latest sources (as of 05/17) PyTorch version: 0.5.0a0+ Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.4 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: Quadro M1000M Nvidia driver version: 390.30 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux-gnu/libcudnnstaticv7.a Versions of relevant libraries: [conda] magma-cuda91 2.3.0 1 pytorch [conda] torch 0.5.0a0+ <pip[conda] torch 0.3.1b0+ <pip[conda] torch 0.5.0a0+ <pip\nHi, . Thank you for providing the example. I'll take a look.\nHmhm. I seem to get a double free in the ...\nAnother question: Do we really want to provide an option to ignore NaN? My view is that \"if you have nan in your net, you're screwed\", so I would just return NaN there, personally.\nMy practical usecase is for KITTI groundtruth depthmaps and FlowMaps which are 2D sparse arrays.", "commid": "pytorch_issue_7645", "tokennum": 506}], "negative_passages": []}
{"query_id": "q-en-pytorch-9b3bfe1e851162000f63e68f19a6b5422d28fae26c2b41e6552e3d6ccc3747ba", "query": "for(iw = 0; iw < kW; iw++) { real val = *(ip + ih*istrideH + iw*istrideW); <del> if (val > maxval) </del> <ins> if ((val > maxval) || isnan(val)) </ins> { maxval = val; maxindex = (ih+istartH)*isizeW + (iw+istartW);", "positive_passages": [{"docid": "doc-en-pytorch-202a6b4a46ae62b8f7553c0429b323612fb197593d261f12c30c05b56ae7ce77", "text": "For an algorithm that use FlowNet-like architecture that outputs predictions at multiple scale levels, we can either compare predictions to downscaled GT or upscaled predictions to GT. The first being obviously less computationally expensive, the ignore NaN would help downscaling such sparse 2D maps. For the moment we do something I find ugly, you can see it Essentially it zeroes the s, takes the map and construct two maps of positive and negative values which are then maxpooled and back together. I am actually open for a strict \"no-nan\" policy on pooling functions, but in that case better enforce it before someone writes a code that tries to benefit from maxpooling ignore s feature/bug and if you have a clever way of pooling sparse 2D tensors, I'm open to it, but I guess it's a topic for pytorch forums ;)\nPersonally, I think it is more sane have NaN -NaN in the pooling and offer a parametrizable (where you get to pick the values) that does or so.\nSo the proposed fix does NaN -NaN similar to max. I didn't try to fix gradients for the nan case. This would involve keeping the values and I don't think that is worth it (in particular because I would not expect the pooling layer to be last, and otherwise we'd probably get NaN as grad_out). If you are reasonably happy with it, I'd move it to a PR.\n+1 for NaN -NaN, \"abyssus abyssum invocat\" We could also add an optional mask which would be a ByteTensor of the same size, specifying whether or not the considered pixel is used for the pooling, that could be used for any kind of pulling. An functionality would then be to provide the mask\nI could work with that. to continue to be picky, the potential drawback is that it won't work for other pooling methods such as average pooling or median pooling. The problem here is that the max operation inherently ignores nonmax values which can be leveraged for \"ignore some pixels\" operation, but it has a \"non universal\" feel to it since it woun't be as easy for other kinds of 2D operations.\nTo be clear, the fix is good for me, but I figured a related discussion on selective pooling (whether to ditch NaN values or anything you want to ignore) could happen (maybe not on this issue ?)", "commid": "pytorch_issue_7645", "tokennum": 513}], "negative_passages": []}
{"query_id": "q-en-pytorch-9b791534cfdf1e04d78ab0fcf440549051b7f8d898ae468ecf0a79a35a682b9e", "query": "gI = apply_fn<Transpose>(0, 1)(gIt); } } <ins> if (should_compute_output(0) && !ggO.defined()) ggO = at::zeros_like(gO); if (should_compute_output(1) && !gI.defined()) gI = at::zeros_like(input); if (should_compute_output(2) && !gW.defined()) gW = at::zeros_like(weight); </ins> return {ggO, gI, gW}; }", "positive_passages": [{"docid": "doc-en-pytorch-c5dbd648f5f223c007de312a8c0f1ae78f27faaf3ad3e509ac9f43144221b039", "text": "This is a test. Please ignore it. Edited.\n<!-- validation-comment-start --<bodyHello there! From the DISABLED prefix in this issue title, it looks like you are attempting to disable a job in PyTorch CI. The information I have parsed is below: Job name: Credential: Within ~15 minutes, and all of its dependants will be disabled in PyTorch CI. Please verify that the job name looks correct. With great power comes great responsibility. </body<!-- validation-comment-end --", "commid": "pytorch_issue_94861", "tokennum": 122}], "negative_passages": []}
{"query_id": "q-en-pytorch-9cb6c949988837baec9a8d2f5f5cda5c88e3ef0cd73137ef0b7df869c21c1c37", "query": "'expected a non-empty list of Tensors'): torch.cat([], dim=1) <ins> def test_cat_empty(self): self._test_cat_empty(self) </ins> def test_stack(self): x = torch.rand(2, 3, 4) y = torch.rand(2, 3, 4)", "positive_passages": [{"docid": "doc-en-pytorch-5b118069fa67273ca83fc38bf735c53ddc46045a52aba22063750dd8da02e407", "text": "gdb points that the error might be in trying to get a size from an empty tensor: I'm using PyTorch version 0.4.0a0+\nThere's a check that should exclude zero-dim tensors from ( should be 0 in this case), so I'm wondering why that's not happening right now... edit: Nevermind, I was running an old build. I pulled the latest master and with being an empty tensor (with shape (0,)), cat crashes.\nRelated: We should probably rewrite to better handle these cases\nOkay, I found the bug. The CUDA version of check doesn't check the case where the input contains all empty tensors, while the CPU version does. I'll put up a fix soon.", "commid": "pytorch_issue_5739", "tokennum": 165}], "negative_passages": []}
{"query_id": "q-en-pytorch-9ec1744c3c99ad877b5de782ef7e6f996113b1f13b635199c006a6d8e8a19f56", "query": "} } <ins> // manual dispatch code for clamp inline Tensor dispatch_clamp(const Tensor & self, Scalar min, Scalar max) { AutoNoGIL no_gil; AutoGPU auto_gpu(self); return self.clamp(min, max); } inline Tensor dispatch_clamp_min(const Tensor & self, Scalar min) { AutoNoGIL no_gil; AutoGPU auto_gpu(self); return self.clamp_min(min); } inline Tensor dispatch_clamp_max(const Tensor & self, Scalar max) { AutoNoGIL no_gil; AutoGPU auto_gpu(self); return self.clamp_max(max); } inline Tensor & dispatch_clamp(const Tensor & self, Scalar min, Scalar max, Tensor result) { AutoNoGIL no_gil; AutoGPU auto_gpu(result); return at::clamp_out(result, self, min, max); } inline Tensor & dispatch_clamp_min(const Tensor & self, Scalar min, Tensor result) { AutoNoGIL no_gil; AutoGPU auto_gpu(result); return at::clamp_min_out(result, self, min); } inline Tensor & dispatch_clamp_max(const Tensor & self, Scalar max, Tensor result) { AutoNoGIL no_gil; AutoGPU auto_gpu(result); return at::clamp_max_out(result, self, max); } </ins> ${py_method_dispatch} }} // namespace torch::autograd", "positive_passages": [{"docid": "doc-en-pytorch-9fadf55fef649758ce298e5387e2a100c57f31d87ba54cce681ee82647ba65e7", "text": "In PyTorch master:\nAdded clamp's output support in pre-template code.", "commid": "pytorch_issue_6028", "tokennum": 20}], "negative_passages": []}
{"query_id": "q-en-pytorch-a9404e4c571418623fa1b9c97e0ef232f999a7ad76326d587a82ccbd62ab02ac", "query": "for(ih = 0; ih < kH; ++ih) { for(iw = 0; iw < kW; ++iw) { T val = ptr_input[ih*istrideH + iw*istrideW]; <del> if (val > max) { </del> <ins> if ((val > max) || THCNumerics<T>::isnan(val)) { </ins> max = val; argmax = (it+istartT)*isizeH*isizeW + (ih+istartH)*isizeW + iw+istartW; }", "positive_passages": [{"docid": "doc-en-pytorch-73afc555bf492c7072a4f9994675674d19a7420810f08eaca2d074197550e516", "text": "max pooling functions are not consistent with max functions. Below an example, every max pooling (be it 1d, 2d or 3d, adaptive or not) acts the same, on cpu or on cuda. Essentially, there are two fondamental differences : max pooling of all values is while for it's max pooling of nan and valid values is valid values, which means s get ignored, while for , as soon as there is a value, the result is . More generally, choosing explicetely how to deal with as in numpy () could be a solution, but maybe this is related to CuDNN's max pooling ? Built from latest sources (as of 05/17) PyTorch version: 0.5.0a0+ Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.4 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: Quadro M1000M Nvidia driver version: 390.30 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux-gnu/libcudnnstaticv7.a Versions of relevant libraries: [conda] magma-cuda91 2.3.0 1 pytorch [conda] torch 0.5.0a0+ <pip[conda] torch 0.3.1b0+ <pip[conda] torch 0.5.0a0+ <pip\nHi, . Thank you for providing the example. I'll take a look.\nHmhm. I seem to get a double free in the ...\nAnother question: Do we really want to provide an option to ignore NaN? My view is that \"if you have nan in your net, you're screwed\", so I would just return NaN there, personally.\nMy practical usecase is for KITTI groundtruth depthmaps and FlowMaps which are 2D sparse arrays.", "commid": "pytorch_issue_7645", "tokennum": 506}], "negative_passages": []}
{"query_id": "q-en-pytorch-a9404e4c571418623fa1b9c97e0ef232f999a7ad76326d587a82ccbd62ab02ac", "query": "for(ih = 0; ih < kH; ++ih) { for(iw = 0; iw < kW; ++iw) { T val = ptr_input[ih*istrideH + iw*istrideW]; <del> if (val > max) { </del> <ins> if ((val > max) || THCNumerics<T>::isnan(val)) { </ins> max = val; argmax = (it+istartT)*isizeH*isizeW + (ih+istartH)*isizeW + iw+istartW; }", "positive_passages": [{"docid": "doc-en-pytorch-202a6b4a46ae62b8f7553c0429b323612fb197593d261f12c30c05b56ae7ce77", "text": "For an algorithm that use FlowNet-like architecture that outputs predictions at multiple scale levels, we can either compare predictions to downscaled GT or upscaled predictions to GT. The first being obviously less computationally expensive, the ignore NaN would help downscaling such sparse 2D maps. For the moment we do something I find ugly, you can see it Essentially it zeroes the s, takes the map and construct two maps of positive and negative values which are then maxpooled and back together. I am actually open for a strict \"no-nan\" policy on pooling functions, but in that case better enforce it before someone writes a code that tries to benefit from maxpooling ignore s feature/bug and if you have a clever way of pooling sparse 2D tensors, I'm open to it, but I guess it's a topic for pytorch forums ;)\nPersonally, I think it is more sane have NaN -NaN in the pooling and offer a parametrizable (where you get to pick the values) that does or so.\nSo the proposed fix does NaN -NaN similar to max. I didn't try to fix gradients for the nan case. This would involve keeping the values and I don't think that is worth it (in particular because I would not expect the pooling layer to be last, and otherwise we'd probably get NaN as grad_out). If you are reasonably happy with it, I'd move it to a PR.\n+1 for NaN -NaN, \"abyssus abyssum invocat\" We could also add an optional mask which would be a ByteTensor of the same size, specifying whether or not the considered pixel is used for the pooling, that could be used for any kind of pulling. An functionality would then be to provide the mask\nI could work with that. to continue to be picky, the potential drawback is that it won't work for other pooling methods such as average pooling or median pooling. The problem here is that the max operation inherently ignores nonmax values which can be leveraged for \"ignore some pixels\" operation, but it has a \"non universal\" feel to it since it woun't be as easy for other kinds of 2D operations.\nTo be clear, the fix is good for me, but I figured a related discussion on selective pooling (whether to ditch NaN values or anything you want to ignore) could happen (maybe not on this issue ?)", "commid": "pytorch_issue_7645", "tokennum": 513}], "negative_passages": []}
{"query_id": "q-en-pytorch-ae42f0aca4a13adb43080aa570ab847db924974818431f575dd71a17fb988069", "query": "index = t * inputH * inputW + h * inputW + w; Dtype val = inputData[index]; <del> if (max < val) </del> <ins> if ((max < val) || THCNumerics<Dtype>::isnan(val)) </ins> { max = val; maxIndex = index;", "positive_passages": [{"docid": "doc-en-pytorch-73afc555bf492c7072a4f9994675674d19a7420810f08eaca2d074197550e516", "text": "max pooling functions are not consistent with max functions. Below an example, every max pooling (be it 1d, 2d or 3d, adaptive or not) acts the same, on cpu or on cuda. Essentially, there are two fondamental differences : max pooling of all values is while for it's max pooling of nan and valid values is valid values, which means s get ignored, while for , as soon as there is a value, the result is . More generally, choosing explicetely how to deal with as in numpy () could be a solution, but maybe this is related to CuDNN's max pooling ? Built from latest sources (as of 05/17) PyTorch version: 0.5.0a0+ Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.4 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: Quadro M1000M Nvidia driver version: 390.30 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux-gnu/libcudnnstaticv7.a Versions of relevant libraries: [conda] magma-cuda91 2.3.0 1 pytorch [conda] torch 0.5.0a0+ <pip[conda] torch 0.3.1b0+ <pip[conda] torch 0.5.0a0+ <pip\nHi, . Thank you for providing the example. I'll take a look.\nHmhm. I seem to get a double free in the ...\nAnother question: Do we really want to provide an option to ignore NaN? My view is that \"if you have nan in your net, you're screwed\", so I would just return NaN there, personally.\nMy practical usecase is for KITTI groundtruth depthmaps and FlowMaps which are 2D sparse arrays.", "commid": "pytorch_issue_7645", "tokennum": 506}], "negative_passages": []}
{"query_id": "q-en-pytorch-ae42f0aca4a13adb43080aa570ab847db924974818431f575dd71a17fb988069", "query": "index = t * inputH * inputW + h * inputW + w; Dtype val = inputData[index]; <del> if (max < val) </del> <ins> if ((max < val) || THCNumerics<Dtype>::isnan(val)) </ins> { max = val; maxIndex = index;", "positive_passages": [{"docid": "doc-en-pytorch-202a6b4a46ae62b8f7553c0429b323612fb197593d261f12c30c05b56ae7ce77", "text": "For an algorithm that use FlowNet-like architecture that outputs predictions at multiple scale levels, we can either compare predictions to downscaled GT or upscaled predictions to GT. The first being obviously less computationally expensive, the ignore NaN would help downscaling such sparse 2D maps. For the moment we do something I find ugly, you can see it Essentially it zeroes the s, takes the map and construct two maps of positive and negative values which are then maxpooled and back together. I am actually open for a strict \"no-nan\" policy on pooling functions, but in that case better enforce it before someone writes a code that tries to benefit from maxpooling ignore s feature/bug and if you have a clever way of pooling sparse 2D tensors, I'm open to it, but I guess it's a topic for pytorch forums ;)\nPersonally, I think it is more sane have NaN -NaN in the pooling and offer a parametrizable (where you get to pick the values) that does or so.\nSo the proposed fix does NaN -NaN similar to max. I didn't try to fix gradients for the nan case. This would involve keeping the values and I don't think that is worth it (in particular because I would not expect the pooling layer to be last, and otherwise we'd probably get NaN as grad_out). If you are reasonably happy with it, I'd move it to a PR.\n+1 for NaN -NaN, \"abyssus abyssum invocat\" We could also add an optional mask which would be a ByteTensor of the same size, specifying whether or not the considered pixel is used for the pooling, that could be used for any kind of pulling. An functionality would then be to provide the mask\nI could work with that. to continue to be picky, the potential drawback is that it won't work for other pooling methods such as average pooling or median pooling. The problem here is that the max operation inherently ignores nonmax values which can be leveraged for \"ignore some pixels\" operation, but it has a \"non universal\" feel to it since it woun't be as easy for other kinds of 2D operations.\nTo be clear, the fix is good for me, but I figured a related discussion on selective pooling (whether to ditch NaN values or anything you want to ignore) could happen (maybe not on this issue ?)", "commid": "pytorch_issue_7645", "tokennum": 513}], "negative_passages": []}
{"query_id": "q-en-pytorch-aebacc799c3a81586b0c547a02debca587aaa2d0618d2fe2a2717dd0dbd9fc61", "query": "res2[i] = max(min_val, min(max_val, res2[i])) self.assertEqual(res1, res2) <ins> out = m1.clone() torch.clamp(m1, min=min_val, max=max_val, out=out) self.assertEqual(out, res1) </ins> res1 = torch.clamp(m1, min=min_val) res2 = m1.clone() for i in iter_indices(res2): res2[i] = max(min_val, res2[i]) self.assertEqual(res1, res2) <ins> torch.clamp(m1, min=min_val, out=out) self.assertEqual(out, res1) </ins> res1 = torch.clamp(m1, max=max_val) res2 = m1.clone() for i in iter_indices(res2): res2[i] = min(max_val, res2[i]) self.assertEqual(res1, res2) <ins> torch.clamp(m1, max=max_val, out=out) self.assertEqual(out, res1) </ins> def test_pow(self): # [res] torch.pow([res,] x)", "positive_passages": [{"docid": "doc-en-pytorch-9fadf55fef649758ce298e5387e2a100c57f31d87ba54cce681ee82647ba65e7", "text": "In PyTorch master:\nAdded clamp's output support in pre-template code.", "commid": "pytorch_issue_6028", "tokennum": 20}], "negative_passages": []}
{"query_id": "q-en-pytorch-aed851444804c1a6b8a8132b51b04030f5e3ddf2cf23b487c946f39f2cd3d211", "query": "if (!transposed) { if (input.size(1) != (weight.size(1) * groups)) { std::stringstream ss; <del> ss << \"Given groups=\" << groups << \", weight\" << weight.sizes() << \", so expected input\" << input.sizes() << \" to have \" </del> <ins> ss << \"Given groups=\" << groups << \", weight of size \" << weight.sizes() << \", expected input\" << input.sizes() << \" to have \" </ins> << (weight.size(1) * groups) << \" channels, but got \" << input.size(1) << \" channels instead\"; throw std::runtime_error(ss.str());", "positive_passages": [{"docid": "doc-en-pytorch-3fd0655cf1f98414cb74036460095097c829c4e79824f44f878c0695b6ea8a48", "text": "got the following err msg with conv2D and ver 0.4: RuntimeError: expected stride to be a single integer value or a list of 1 values to match the convolution dimensions, but got stride=[1, 1]. I was not able to see the problem so I uninstalled 0.4 and installed 0.3.1 Still got an err but this time it said that the expected input should be a 4d tensor and it got a 3D tensor. This helped understand the issue and I fixed it (just add a dimension). reinstalled 0.4 and its working (no surprise...). I think 0.4 should have the same err feedback otherwise its really hard to undrstand the problem How you installed PyTorch (conda, pip, source): conda Build command you used (if compiling from source): OS: macos PyTorch version: 0.4 Python version: 3.5 CUDA/cuDNN version: no GPU models and configuration: GCC version (if compiling from source): CMake version: Versions of any other relevant libraries:\nCan you please post a small self-contained code snippet that would let us reproduce the problem?\nThe strange error messge was also mentioned a few times in the forum. Here is a code snippet for PyTorch :\nThis is because we direct all conv ops to and infer dim + throw error message there.\nHmm that's not great. We might want to pass the expected dimensionality of the convolution to the generic implementation, so that we can improve the error messages.\nThx for the responses. Just a reminder, for the same err the provided err msg in version 0.3.1 was very informative\nYes, this should definitely be fixed. Sorry about this!\nThe error message of seems to be a bit misleading, too. Code: Should I create a new issue or is it related to the current one?\nthat appears fixed on master! :D", "commid": "pytorch_issue_7332", "tokennum": 434}], "negative_passages": []}
{"query_id": "q-en-pytorch-b4947259b64f81b4189708dee2dee9dc3551fa710f6caa3b8c633f919067385d", "query": "struct ModeUnsignedPair max = {0, 0}; <del> max = reduceBlockN<struct ModeUnsignedPair, MaxReduceOp<struct ModeUnsignedPair>, 2> </del> <ins> max = reduceBlockWithNThreadLocalReductions<struct ModeUnsignedPair, MaxReduceOp<struct ModeUnsignedPair>, 2> </ins> (uupmem, uup, sliceSize, MaxReduceOp<struct ModeUnsignedPair>(), max); // Store the mode in shared memory for use in finding the mode in the input slice", "positive_passages": [{"docid": "doc-en-pytorch-3dce8e556d6d4bd2ea519d551101e82db52b876cebdea9b13c4207de2ba26137", "text": "Not sure what is the reason for these errors, any suggestions? I suspect the clang version is not supported? Here is the\nclang have more strict checking , pull request will fixed it.\nfixed, thanks to", "commid": "pytorch_issue_745", "tokennum": 43}], "negative_passages": []}
{"query_id": "q-en-pytorch-b81f9238c60e350db10e6f015b9ea17dc0742bcf9cb5615e33a0db99db1901db", "query": "SYSCHECK(write(fd, &self->size, sizeof(long))); // fast track for bytes and little endian if (sizeof(real) == 1 || THP_nativeByteOrder() == THPByteOrder::THP_LITTLE_ENDIAN) { <del> SYSCHECK(write(fd, data, sizeof(real) * self->size)); </del> <ins> char *bytes = (char *) data; uint64_t remaining = sizeof(real) * self->size; while (remaining > 0) { ssize_t result = write(fd, bytes, remaining); if (result < 0) throw std::system_error(result, std::system_category()); bytes += result; remaining -= result; } </ins> } else { long buffer_size = std::min(self->size, (long)5000); std::unique_ptr<uint8_t[]> le_buffer(new uint8_t[buffer_size * sizeof(real)]); <del> for (long i = 0; i < self->size; i += buffer_size) { </del> <ins> for (int64_t i = 0; i < self->size; i += buffer_size) { </ins> size_t to_convert = std::min(self->size - i, buffer_size); if (sizeof(real) == 2) { THP_encodeInt16Buffer((uint8_t*)le_buffer.get(),", "positive_passages": [{"docid": "doc-en-pytorch-22f86cba093cf26e315f2fbaec5ca280a4dc379518c77425509ada9da27f0f4a", "text": "The problem is here: You can't write an arbitrary number of bytes. See . On my system the limit seems to be 2GB, YMMV. To be safe, you probably want to fix the read call as well at , because there's an SSIZE_MAX limit.", "commid": "pytorch_issue_717", "tokennum": 60}], "negative_passages": []}
{"query_id": "q-en-pytorch-bdcc12c8f962f6793e23cf1d7d5ef92d9945d1fe9fbf4c2b5d59f5e879d7f9e6", "query": "def reset_parameters(self): self.weight.data.normal_(0, 1) <ins> if self.padding_idx is not None: self.weight.data[self.padding_idx].fill_(0) </ins> def forward(self, input): <del> return self._backend.Embedding(self.padding_idx, self.max_norm, self.norm_type, self.scale_grad_by_freq)(input, self.weight) </del> <ins> padding_idx = self.padding_idx if padding_idx is None: padding_idx = -1 return self._backend.Embedding(padding_idx, self.max_norm, self.norm_type, self.scale_grad_by_freq)(input, self.weight) </ins> # TODO: SparseLinear", "positive_passages": [{"docid": "doc-en-pytorch-6dae6822e59fd00098fabde359ce44f544da09d2deb5db328ca4d8e6c0d81333", "text": "The following code, which repeatedly exports a model with , has a memory leak. During the export, every tensor parameter in is cloned once and then immediately leaked forever, without ever being collected by the GC. It's not the underlying buffer that's cloned, it's the lightweight wrapper object itself. Still, for long running processes that often export networks in this manner this is a unbounded memory leak that eventually results in OOM errors. I've reproduced this issue on both Linux and Windows, with pytorch versions and respectively. The final five lines inside the for loop are to debug what happens, they are not neccesary to reproduce the issue. forces a gc collection cycle, ensuring we're not accidentally counting dead objects show the total amount of objects that exist for each type for all objects whose amount has increased. From this we can see that we're leaking 2 additional tensors per that the tensors we're leaking have shapes and , so they're just the weight and bias of the linear that the underlying buffer is always the same, so only the shallow class instance is being that nothing is pointing to these newly created objects, so they should be collected. Example output after running for a while: seems closely related but is more about a temporary doubling in memory, this issue is about a permanent memory leak. was closed as a duplicate of the previous issue, but better matches this issue. Collecting environment information. PyTorch version: 1.10.0 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.5 LTS (x8664) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: 6.0.0-1ubuntu2 (tags/RELEASE600/final) CMake version: version 3.10.2 Libc version: glibc-2.17 Python version: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.15.0-41-generic-x8664-with-debian-buster-sid Is CUDA available: True CUDA runtime version: 11.3.", "commid": "pytorch_issue_82532", "tokennum": 517}], "negative_passages": []}
{"query_id": "q-en-pytorch-bdcc12c8f962f6793e23cf1d7d5ef92d9945d1fe9fbf4c2b5d59f5e879d7f9e6", "query": "def reset_parameters(self): self.weight.data.normal_(0, 1) <ins> if self.padding_idx is not None: self.weight.data[self.padding_idx].fill_(0) </ins> def forward(self, input): <del> return self._backend.Embedding(self.padding_idx, self.max_norm, self.norm_type, self.scale_grad_by_freq)(input, self.weight) </del> <ins> padding_idx = self.padding_idx if padding_idx is None: padding_idx = -1 return self._backend.Embedding(padding_idx, self.max_norm, self.norm_type, self.scale_grad_by_freq)(input, self.weight) </ins> # TODO: SparseLinear", "positive_passages": [{"docid": "doc-en-pytorch-2f66b550279e97c384d138d0938f18ee7eb60e94c23ce88afd819c9c51455183", "text": "109 GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti GPU 1: NVIDIA GeForce RTX 3080 Ti Nvidia driver version: 515.48.07 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.21.2 [pip3] torch==1.10.0 [pip3] torchelastic==0.2.0 [pip3] torchtext==0.11.0 [pip3] torchvision==0.11.0 [conda] blas 1.0 mkl [conda] cudatoolkit 11.3.1 ha36c4319 nvidia [conda] ffmpeg 4.3 hf484d3e0 pytorch [conda] mkl 2021.3.0 h06a4308520 [conda] mkl-service 2.4.0 py37h7f8727e0 [conda] mklfft 1.3.1 py37hd3c417c0 [conda] mklrandom 1.2.2 py37h51133e40 [conda] numpy 1.21.2 py37h20f2e390 [conda] numpy-base 1.21.2 py37h79a11010 [conda] pytorch 1.10.0 py3.7cuda11.3cudnn8.2.00 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchelastic 0.2.0 pypi0 pypi [conda] torchtext 0.11.0 py37 pytorch [conda] torchvision 0.11.0 py37cu113 pytorch\nI'm also observing something like this without JIT, although I'm not sure it's the same issue. The program works fine until I add ONNX export every time a checkpoint is saved. Once I do that, the GPU memory usage grows until it OOMs.", "commid": "pytorch_issue_82532", "tokennum": 602}], "negative_passages": []}
{"query_id": "q-en-pytorch-bdcc12c8f962f6793e23cf1d7d5ef92d9945d1fe9fbf4c2b5d59f5e879d7f9e6", "query": "def reset_parameters(self): self.weight.data.normal_(0, 1) <ins> if self.padding_idx is not None: self.weight.data[self.padding_idx].fill_(0) </ins> def forward(self, input): <del> return self._backend.Embedding(self.padding_idx, self.max_norm, self.norm_type, self.scale_grad_by_freq)(input, self.weight) </del> <ins> padding_idx = self.padding_idx if padding_idx is None: padding_idx = -1 return self._backend.Embedding(padding_idx, self.max_norm, self.norm_type, self.scale_grad_by_freq)(input, self.weight) </ins> # TODO: SparseLinear", "positive_passages": [{"docid": "doc-en-pytorch-51b93f6c54a298b88600b1182a14c127961bec33f155f28f6e4eb4444c79f1df", "text": "This issue is about lightweight Tensor objects being leaked, not the underlying (potentially GPU-side) buffer. I think your issue is a different one.\nI encountered the same error, is there a solution to this problem?\nPlease validate with the latest release and re-summit an issue if you see the same thing. As we are moving away from torchscript minor leaks are unlikely to be fixed, but contribution is welcomed.", "commid": "pytorch_issue_82532", "tokennum": 86}], "negative_passages": []}
{"query_id": "q-en-pytorch-c0f3e46d5377fdd8131a1db39706016c95280d2cdff5b4d1909a0669e01b61c5", "query": "static inline __host__ __device__ char mul(char a, char b) { return a * b; } static inline __host__ __device__ char sub(char a, char b) { return a - b; } static inline __host__ __device__ char div(char a, char b) { return a / b; } <del> static inline __host__ __device__ char abs(char a) { return abs(a); } </del> <ins> static inline __host__ __device__ char abs(char a) { return ::abs((int)a); } </ins> }; template <>", "positive_passages": [{"docid": "doc-en-pytorch-3dce8e556d6d4bd2ea519d551101e82db52b876cebdea9b13c4207de2ba26137", "text": "Not sure what is the reason for these errors, any suggestions? I suspect the clang version is not supported? Here is the\nclang have more strict checking , pull request will fixed it.\nfixed, thanks to", "commid": "pytorch_issue_745", "tokennum": 43}], "negative_passages": []}
{"query_id": "q-en-pytorch-d220f76b82742fb9c30b11178c8388d7854eacf45c1124207fd51af7fab0f931", "query": "@staticmethod def _renorm(ctx, indices, weight, max_norm, norm_type): <del> if indices.dim() == 2: indices = indices.clone().view(-1) </del> <ins> # clone indices since LookupTable_renorm modifies it in-place </ins> ctx._backend.LookupTable_renorm( ctx._backend.library_state, <del> indices, </del> <ins> indices.clone().view(-1), </ins> weight, max_norm, norm_type", "positive_passages": [{"docid": "doc-en-pytorch-8a24171cae3316021ad4a394597442544d5ad7a610a19fd0700a51e599ae8017", "text": "The output is also incorrect. It's the output from the sorted indices, instead of the user specified indices. Reported by", "commid": "pytorch_issue_2413", "tokennum": 25}], "negative_passages": []}
{"query_id": "q-en-pytorch-dc2cc16002ad2fcb397464bc29df418578647da2b91fe00525f4e32c4115696b", "query": "static inline __host__ __device__ short mul(short a, short b) { return a * b; } static inline __host__ __device__ short sub(short a, short b) { return a - b; } static inline __host__ __device__ short div(short a, short b) { return a / b; } <del> static inline __host__ __device__ short abs(short a) { return abs(a); } </del> <ins> static inline __host__ __device__ short abs(short a) { return ::abs((int)a); } </ins> }; template <>", "positive_passages": [{"docid": "doc-en-pytorch-3dce8e556d6d4bd2ea519d551101e82db52b876cebdea9b13c4207de2ba26137", "text": "Not sure what is the reason for these errors, any suggestions? I suspect the clang version is not supported? Here is the\nclang have more strict checking , pull request will fixed it.\nfixed, thanks to", "commid": "pytorch_issue_745", "tokennum": 43}], "negative_passages": []}
{"query_id": "q-en-pytorch-dd6af3615c1336a9731953d1a8470906b5f93f83fabf440136bc36f449c84f25", "query": ">>> # an Embedding module containing 10 tensors of size 3 >>> embedding = nn.Embedding(10, 3) >>> # a batch of 2 samples of 4 indices each <del> >>> input = torch.Tensor([[1,2,4,5],[4,3,2,10]]) </del> <ins> >>> input = torch.LongTensor([[1,2,4,5],[4,3,2,9]]) >>> print(embedding(input)) >>> # example with padding_idx >>> embedding = nn.Embedding(10, 3, padding_idx=0) >>> input = torch.LongTensor([[0,2,0,5]]) </ins> >>> print(embedding(input)) \"\"\" <del> def __init__(self, num_embeddings, embedding_dim, padding_idx=-1, </del> <ins> def __init__(self, num_embeddings, embedding_dim, padding_idx=None, </ins> max_norm=None, norm_type=2, scale_grad_by_freq=False): self.num_embeddings = num_embeddings self.embedding_dim = embedding_dim", "positive_passages": [{"docid": "doc-en-pytorch-6dae6822e59fd00098fabde359ce44f544da09d2deb5db328ca4d8e6c0d81333", "text": "The following code, which repeatedly exports a model with , has a memory leak. During the export, every tensor parameter in is cloned once and then immediately leaked forever, without ever being collected by the GC. It's not the underlying buffer that's cloned, it's the lightweight wrapper object itself. Still, for long running processes that often export networks in this manner this is a unbounded memory leak that eventually results in OOM errors. I've reproduced this issue on both Linux and Windows, with pytorch versions and respectively. The final five lines inside the for loop are to debug what happens, they are not neccesary to reproduce the issue. forces a gc collection cycle, ensuring we're not accidentally counting dead objects show the total amount of objects that exist for each type for all objects whose amount has increased. From this we can see that we're leaking 2 additional tensors per that the tensors we're leaking have shapes and , so they're just the weight and bias of the linear that the underlying buffer is always the same, so only the shallow class instance is being that nothing is pointing to these newly created objects, so they should be collected. Example output after running for a while: seems closely related but is more about a temporary doubling in memory, this issue is about a permanent memory leak. was closed as a duplicate of the previous issue, but better matches this issue. Collecting environment information. PyTorch version: 1.10.0 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.5 LTS (x8664) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: 6.0.0-1ubuntu2 (tags/RELEASE600/final) CMake version: version 3.10.2 Libc version: glibc-2.17 Python version: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.15.0-41-generic-x8664-with-debian-buster-sid Is CUDA available: True CUDA runtime version: 11.3.", "commid": "pytorch_issue_82532", "tokennum": 517}], "negative_passages": []}
{"query_id": "q-en-pytorch-dd6af3615c1336a9731953d1a8470906b5f93f83fabf440136bc36f449c84f25", "query": ">>> # an Embedding module containing 10 tensors of size 3 >>> embedding = nn.Embedding(10, 3) >>> # a batch of 2 samples of 4 indices each <del> >>> input = torch.Tensor([[1,2,4,5],[4,3,2,10]]) </del> <ins> >>> input = torch.LongTensor([[1,2,4,5],[4,3,2,9]]) >>> print(embedding(input)) >>> # example with padding_idx >>> embedding = nn.Embedding(10, 3, padding_idx=0) >>> input = torch.LongTensor([[0,2,0,5]]) </ins> >>> print(embedding(input)) \"\"\" <del> def __init__(self, num_embeddings, embedding_dim, padding_idx=-1, </del> <ins> def __init__(self, num_embeddings, embedding_dim, padding_idx=None, </ins> max_norm=None, norm_type=2, scale_grad_by_freq=False): self.num_embeddings = num_embeddings self.embedding_dim = embedding_dim", "positive_passages": [{"docid": "doc-en-pytorch-2f66b550279e97c384d138d0938f18ee7eb60e94c23ce88afd819c9c51455183", "text": "109 GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti GPU 1: NVIDIA GeForce RTX 3080 Ti Nvidia driver version: 515.48.07 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.21.2 [pip3] torch==1.10.0 [pip3] torchelastic==0.2.0 [pip3] torchtext==0.11.0 [pip3] torchvision==0.11.0 [conda] blas 1.0 mkl [conda] cudatoolkit 11.3.1 ha36c4319 nvidia [conda] ffmpeg 4.3 hf484d3e0 pytorch [conda] mkl 2021.3.0 h06a4308520 [conda] mkl-service 2.4.0 py37h7f8727e0 [conda] mklfft 1.3.1 py37hd3c417c0 [conda] mklrandom 1.2.2 py37h51133e40 [conda] numpy 1.21.2 py37h20f2e390 [conda] numpy-base 1.21.2 py37h79a11010 [conda] pytorch 1.10.0 py3.7cuda11.3cudnn8.2.00 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchelastic 0.2.0 pypi0 pypi [conda] torchtext 0.11.0 py37 pytorch [conda] torchvision 0.11.0 py37cu113 pytorch\nI'm also observing something like this without JIT, although I'm not sure it's the same issue. The program works fine until I add ONNX export every time a checkpoint is saved. Once I do that, the GPU memory usage grows until it OOMs.", "commid": "pytorch_issue_82532", "tokennum": 602}], "negative_passages": []}
{"query_id": "q-en-pytorch-dd6af3615c1336a9731953d1a8470906b5f93f83fabf440136bc36f449c84f25", "query": ">>> # an Embedding module containing 10 tensors of size 3 >>> embedding = nn.Embedding(10, 3) >>> # a batch of 2 samples of 4 indices each <del> >>> input = torch.Tensor([[1,2,4,5],[4,3,2,10]]) </del> <ins> >>> input = torch.LongTensor([[1,2,4,5],[4,3,2,9]]) >>> print(embedding(input)) >>> # example with padding_idx >>> embedding = nn.Embedding(10, 3, padding_idx=0) >>> input = torch.LongTensor([[0,2,0,5]]) </ins> >>> print(embedding(input)) \"\"\" <del> def __init__(self, num_embeddings, embedding_dim, padding_idx=-1, </del> <ins> def __init__(self, num_embeddings, embedding_dim, padding_idx=None, </ins> max_norm=None, norm_type=2, scale_grad_by_freq=False): self.num_embeddings = num_embeddings self.embedding_dim = embedding_dim", "positive_passages": [{"docid": "doc-en-pytorch-51b93f6c54a298b88600b1182a14c127961bec33f155f28f6e4eb4444c79f1df", "text": "This issue is about lightweight Tensor objects being leaked, not the underlying (potentially GPU-side) buffer. I think your issue is a different one.\nI encountered the same error, is there a solution to this problem?\nPlease validate with the latest release and re-summit an issue if you see the same thing. As we are moving away from torchscript minor leaks are unlikely to be fixed, but contribution is welcomed.", "commid": "pytorch_issue_82532", "tokennum": 86}], "negative_passages": []}
{"query_id": "q-en-pytorch-e146c2a64e2bd159eae15b5e7dfd2ad08f34d4ab3ebe3e3a06870fe78e7bbb46", "query": "with: submodules: false fetch-depth: 1 <del> - name: Setup Python 3.5 </del> <ins> - name: Setup Python 3.6 </ins> if: matrix.test_type == 'older_python_version' uses: actions/setup-python@v4 with: <del> python-version: '3.5' </del> <ins> python-version: '3.6' </ins> architecture: x64 check-latest: false cache: pip", "positive_passages": [{"docid": "doc-en-pytorch-c0fdbff7b42e4db60af4955ac83a924f2a9f7d06af7e7cb5913cbd4e781f73e0", "text": "Several this morning failed with (see for example): Not sure what is causing the outage, but it makes me wonder if perhaps it's time to retire Python-3.5 testing CI cc\nLooks like pypi rolled out a new cert today:", "commid": "pytorch_issue_125841", "tokennum": 54}], "negative_passages": []}
{"query_id": "q-en-pytorch-e1cc84099c3b118d4920752811c784dcb3638765475447de55f10769a7adf155", "query": "Args: num_embeddings: size of the dictionary of embeddings embedding_dim: the size of each embedding vector <del> padding_idx: If given, pads the output with zeros whenever it encounters the index. Default: -1 </del> <ins> padding_idx: If given, pads the output with zeros whenever it encounters the index. Default: None </ins> max_norm: If given, will renormalize the embeddings to always have a norm lesser than this Default: None norm_type: The p of the p-norm to compute for the max_norm option scale_grad_by_freq: if given, this will scale gradients by the frequency of the words in the dictionary.", "positive_passages": [{"docid": "doc-en-pytorch-6dae6822e59fd00098fabde359ce44f544da09d2deb5db328ca4d8e6c0d81333", "text": "The following code, which repeatedly exports a model with , has a memory leak. During the export, every tensor parameter in is cloned once and then immediately leaked forever, without ever being collected by the GC. It's not the underlying buffer that's cloned, it's the lightweight wrapper object itself. Still, for long running processes that often export networks in this manner this is a unbounded memory leak that eventually results in OOM errors. I've reproduced this issue on both Linux and Windows, with pytorch versions and respectively. The final five lines inside the for loop are to debug what happens, they are not neccesary to reproduce the issue. forces a gc collection cycle, ensuring we're not accidentally counting dead objects show the total amount of objects that exist for each type for all objects whose amount has increased. From this we can see that we're leaking 2 additional tensors per that the tensors we're leaking have shapes and , so they're just the weight and bias of the linear that the underlying buffer is always the same, so only the shallow class instance is being that nothing is pointing to these newly created objects, so they should be collected. Example output after running for a while: seems closely related but is more about a temporary doubling in memory, this issue is about a permanent memory leak. was closed as a duplicate of the previous issue, but better matches this issue. Collecting environment information. PyTorch version: 1.10.0 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.5 LTS (x8664) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: 6.0.0-1ubuntu2 (tags/RELEASE600/final) CMake version: version 3.10.2 Libc version: glibc-2.17 Python version: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.15.0-41-generic-x8664-with-debian-buster-sid Is CUDA available: True CUDA runtime version: 11.3.", "commid": "pytorch_issue_82532", "tokennum": 517}], "negative_passages": []}
{"query_id": "q-en-pytorch-e1cc84099c3b118d4920752811c784dcb3638765475447de55f10769a7adf155", "query": "Args: num_embeddings: size of the dictionary of embeddings embedding_dim: the size of each embedding vector <del> padding_idx: If given, pads the output with zeros whenever it encounters the index. Default: -1 </del> <ins> padding_idx: If given, pads the output with zeros whenever it encounters the index. Default: None </ins> max_norm: If given, will renormalize the embeddings to always have a norm lesser than this Default: None norm_type: The p of the p-norm to compute for the max_norm option scale_grad_by_freq: if given, this will scale gradients by the frequency of the words in the dictionary.", "positive_passages": [{"docid": "doc-en-pytorch-2f66b550279e97c384d138d0938f18ee7eb60e94c23ce88afd819c9c51455183", "text": "109 GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti GPU 1: NVIDIA GeForce RTX 3080 Ti Nvidia driver version: 515.48.07 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.21.2 [pip3] torch==1.10.0 [pip3] torchelastic==0.2.0 [pip3] torchtext==0.11.0 [pip3] torchvision==0.11.0 [conda] blas 1.0 mkl [conda] cudatoolkit 11.3.1 ha36c4319 nvidia [conda] ffmpeg 4.3 hf484d3e0 pytorch [conda] mkl 2021.3.0 h06a4308520 [conda] mkl-service 2.4.0 py37h7f8727e0 [conda] mklfft 1.3.1 py37hd3c417c0 [conda] mklrandom 1.2.2 py37h51133e40 [conda] numpy 1.21.2 py37h20f2e390 [conda] numpy-base 1.21.2 py37h79a11010 [conda] pytorch 1.10.0 py3.7cuda11.3cudnn8.2.00 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchelastic 0.2.0 pypi0 pypi [conda] torchtext 0.11.0 py37 pytorch [conda] torchvision 0.11.0 py37cu113 pytorch\nI'm also observing something like this without JIT, although I'm not sure it's the same issue. The program works fine until I add ONNX export every time a checkpoint is saved. Once I do that, the GPU memory usage grows until it OOMs.", "commid": "pytorch_issue_82532", "tokennum": 602}], "negative_passages": []}
{"query_id": "q-en-pytorch-e1cc84099c3b118d4920752811c784dcb3638765475447de55f10769a7adf155", "query": "Args: num_embeddings: size of the dictionary of embeddings embedding_dim: the size of each embedding vector <del> padding_idx: If given, pads the output with zeros whenever it encounters the index. Default: -1 </del> <ins> padding_idx: If given, pads the output with zeros whenever it encounters the index. Default: None </ins> max_norm: If given, will renormalize the embeddings to always have a norm lesser than this Default: None norm_type: The p of the p-norm to compute for the max_norm option scale_grad_by_freq: if given, this will scale gradients by the frequency of the words in the dictionary.", "positive_passages": [{"docid": "doc-en-pytorch-51b93f6c54a298b88600b1182a14c127961bec33f155f28f6e4eb4444c79f1df", "text": "This issue is about lightweight Tensor objects being leaked, not the underlying (potentially GPU-side) buffer. I think your issue is a different one.\nI encountered the same error, is there a solution to this problem?\nPlease validate with the latest release and re-summit an issue if you see the same thing. As we are moving away from torchscript minor leaks are unlikely to be fixed, but contribution is welcomed.", "commid": "pytorch_issue_82532", "tokennum": 86}], "negative_passages": []}
{"query_id": "q-en-pytorch-e5ae77a5e08322e65369c99bb0e38344715024cff6a41f3418003b3cb4bc4e1f", "query": "meant to be installed as pip packages) (default: False). relative_to (str, optional): path of the build file. Required when ``package is True``. It's best to use ``__file__`` for this argument. <del> kwargs: additional arguments that are passed to ffi to declar the </del> <ins> kwargs: additional arguments that are passed to ffi to declare the </ins> extension. See `Extension API reference`_ for details. .. _`Extension API reference`: https://docs.python.org/3/distutils/apiref.html#distutils.core.Extension", "positive_passages": [{"docid": "doc-en-pytorch-c25fd04d8d54cf4d0391cd8024070026ad8247507bdccb8eb12f5f8e2c9f8d2e", "text": "When trying to install Pytorch on my Mac by following the instructions I get What I did: ` I also tried Both approaches gave the same error. System: xcode-select version 2395. Version: macOS Monterey 12.3.1 (21E258) MacBook Pro (16-inch, 2019) Processor: 2,6 GHz 6-Core Intel Core i7 memory: 16 GB 2667 MHz DDR4 PyTorch version: N/A Is debug build: N/A CUDA used to build PyTorch: N/A ROCM used to build PyTorch: N/A OS: macOS 12.3.1 (x8664) GCC version: Could not collect Clang version: 13.1.6 (clang-1316.0.21.2.3) CMake version: version 3.22.1 Libc version: N/A Python version: 3.9.12 (main, Apr 5 2022, 01:53:17) [Clang 12.0.0 ] (64-bit runtime) Python platform: macOS-10.16-x8664-i386-64bit Is CUDA available: N/A CUDA runtime version: Could not collect GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: N/A Versions of relevant libraries: [pip3] numpy==1.21.5 [conda] mkl 2022.0.0 hecd8cb5105 [conda] mkl-include 2022.0.0 hecd8cb5105 [conda] numpy 1.21.5 py39h9c3cb841 [conda] numpy-base 1.21.5 py39he782bc11 cc\nI am running the same environment and get the same issue. Any insight would be very appreciated\nUpdate: In another repo I get the same error when trying to link to pytorch etc. There I made a minimal case and managed to build when I removed linking to . I can see that we have in the script. Maybe that is the cause of the error?\nLooks like they are built with the correct architecture.\nMore progress, from local minimal case: fails. is just a hello world program.", "commid": "pytorch_issue_76094", "tokennum": 531}], "negative_passages": []}
{"query_id": "q-en-pytorch-e5ae77a5e08322e65369c99bb0e38344715024cff6a41f3418003b3cb4bc4e1f", "query": "meant to be installed as pip packages) (default: False). relative_to (str, optional): path of the build file. Required when ``package is True``. It's best to use ``__file__`` for this argument. <del> kwargs: additional arguments that are passed to ffi to declar the </del> <ins> kwargs: additional arguments that are passed to ffi to declare the </ins> extension. See `Extension API reference`_ for details. .. _`Extension API reference`: https://docs.python.org/3/distutils/apiref.html#distutils.core.Extension", "positive_passages": [{"docid": "doc-en-pytorch-ec5921aa1d302972470c3f074cbc44243f961a731c2339a8fe339a689287e600", "text": "If in is removed, then it builds.\nThe same story with PyTorch 1.10.0. The error appears when I'm trying to build with Apple clang 13.1.6 (Xcode Command Line Tools 13.3). But all works correctly if I build it with Apple clang 13.0 (Xcode Command Line Tools 13.2.1)\nNice, worked for me as well. Is this a bug somewhere or what is the exact problem? I drawback is that XCode needs to be up to date with new iOS versions.\nWhich version of Apple Clang worked? 13.0.0 or 13.0.1? Are you on Monterey 12.4?\nFiled an issue: cc:\nThis issue has been fixed in PeachPy a while back by but pinned version of PeachPy that PyTorch is using has not been updated in a very long time", "commid": "pytorch_issue_76094", "tokennum": 186}], "negative_passages": []}
{"query_id": "q-en-pytorch-e5c6c52e389697c2263e94638406e059af264d366c283b8eb13b7ef2925b5de0", "query": "{ index = z * iwidth * iheight + y * iwidth + x; real val = ip[index]; <del> if (val > maxval) </del> <ins> if ((val > maxval) || isnan(val)) </ins> { maxval = val; maxindex = index;", "positive_passages": [{"docid": "doc-en-pytorch-73afc555bf492c7072a4f9994675674d19a7420810f08eaca2d074197550e516", "text": "max pooling functions are not consistent with max functions. Below an example, every max pooling (be it 1d, 2d or 3d, adaptive or not) acts the same, on cpu or on cuda. Essentially, there are two fondamental differences : max pooling of all values is while for it's max pooling of nan and valid values is valid values, which means s get ignored, while for , as soon as there is a value, the result is . More generally, choosing explicetely how to deal with as in numpy () could be a solution, but maybe this is related to CuDNN's max pooling ? Built from latest sources (as of 05/17) PyTorch version: 0.5.0a0+ Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.4 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: Quadro M1000M Nvidia driver version: 390.30 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux-gnu/libcudnnstaticv7.a Versions of relevant libraries: [conda] magma-cuda91 2.3.0 1 pytorch [conda] torch 0.5.0a0+ <pip[conda] torch 0.3.1b0+ <pip[conda] torch 0.5.0a0+ <pip\nHi, . Thank you for providing the example. I'll take a look.\nHmhm. I seem to get a double free in the ...\nAnother question: Do we really want to provide an option to ignore NaN? My view is that \"if you have nan in your net, you're screwed\", so I would just return NaN there, personally.\nMy practical usecase is for KITTI groundtruth depthmaps and FlowMaps which are 2D sparse arrays.", "commid": "pytorch_issue_7645", "tokennum": 506}], "negative_passages": []}
{"query_id": "q-en-pytorch-e5c6c52e389697c2263e94638406e059af264d366c283b8eb13b7ef2925b5de0", "query": "{ index = z * iwidth * iheight + y * iwidth + x; real val = ip[index]; <del> if (val > maxval) </del> <ins> if ((val > maxval) || isnan(val)) </ins> { maxval = val; maxindex = index;", "positive_passages": [{"docid": "doc-en-pytorch-202a6b4a46ae62b8f7553c0429b323612fb197593d261f12c30c05b56ae7ce77", "text": "For an algorithm that use FlowNet-like architecture that outputs predictions at multiple scale levels, we can either compare predictions to downscaled GT or upscaled predictions to GT. The first being obviously less computationally expensive, the ignore NaN would help downscaling such sparse 2D maps. For the moment we do something I find ugly, you can see it Essentially it zeroes the s, takes the map and construct two maps of positive and negative values which are then maxpooled and back together. I am actually open for a strict \"no-nan\" policy on pooling functions, but in that case better enforce it before someone writes a code that tries to benefit from maxpooling ignore s feature/bug and if you have a clever way of pooling sparse 2D tensors, I'm open to it, but I guess it's a topic for pytorch forums ;)\nPersonally, I think it is more sane have NaN -NaN in the pooling and offer a parametrizable (where you get to pick the values) that does or so.\nSo the proposed fix does NaN -NaN similar to max. I didn't try to fix gradients for the nan case. This would involve keeping the values and I don't think that is worth it (in particular because I would not expect the pooling layer to be last, and otherwise we'd probably get NaN as grad_out). If you are reasonably happy with it, I'd move it to a PR.\n+1 for NaN -NaN, \"abyssus abyssum invocat\" We could also add an optional mask which would be a ByteTensor of the same size, specifying whether or not the considered pixel is used for the pooling, that could be used for any kind of pulling. An functionality would then be to provide the mask\nI could work with that. to continue to be picky, the potential drawback is that it won't work for other pooling methods such as average pooling or median pooling. The problem here is that the max operation inherently ignores nonmax values which can be leveraged for \"ignore some pixels\" operation, but it has a \"non universal\" feel to it since it woun't be as easy for other kinds of 2D operations.\nTo be clear, the fix is good for me, but I figured a related discussion on selective pooling (whether to ditch NaN values or anything you want to ignore) could happen (maybe not on this issue ?)", "commid": "pytorch_issue_7645", "tokennum": 513}], "negative_passages": []}
{"query_id": "q-en-pytorch-e68748fa6c4aafa8c187da05b9d98e17b0db0f942a0f44442d1917f9db594699", "query": "// For Convolution strategies that don't implicitly handle grad_bias, we add a helper // function here to perform it using simple Tensor operators static at::Tensor compute_grad_bias(const at::Tensor& grad_output) { <del> // grad_output is in N, C, H, W, we re-shape and reduce over spatial dims and batches </del> <ins> // grad_output is in N, C, H, W, we re-shape and reduce over spatial dims and batches </ins> return grad_output.contiguous().view({grad_output.size(0), grad_output.size(1), -1}).sum(0).sum(1); }", "positive_passages": [{"docid": "doc-en-pytorch-c5dbd648f5f223c007de312a8c0f1ae78f27faaf3ad3e509ac9f43144221b039", "text": "This is a test. Please ignore it. Edited.\n<!-- validation-comment-start --<bodyHello there! From the DISABLED prefix in this issue title, it looks like you are attempting to disable a job in PyTorch CI. The information I have parsed is below: Job name: Credential: Within ~15 minutes, and all of its dependants will be disabled in PyTorch CI. Please verify that the job name looks correct. With great power comes great responsibility. </body<!-- validation-comment-end --", "commid": "pytorch_issue_94861", "tokennum": 122}], "negative_passages": []}
{"query_id": "q-en-pytorch-f07ba2188846889a62ffcd00bc1564c97864fab48732feb1e9f5c83d821811a3", "query": "Args: input (Tensor): the tensor to compare other (Tensor or float): the tensor or value to compare <del> out (Tensor, optional): the output tensor that must be a `ByteTensor` or the same type as :attr:`input` </del> <ins> out (Tensor, optional): the output tensor that must be a `ByteTensor` </ins> Returns: Tensor: A `torch.ByteTensor` containing a 1 at each location where comparison is true", "positive_passages": [{"docid": "doc-en-pytorch-2fdc3cef791c19159039a95dfbc4d1859ba0e4d81197ef08e9f5067ac45538ca", "text": "[pytorch] The docs say that for tensor comparison operators (gt,lt etc) it should be possible to pass out argument typed as input (), yet when I try to do it, I hit an error Should the docs be fixed, or is it a bug?\nIf it's really useful we can add it back; but we'll fix the docs for now", "commid": "pytorch_issue_7933", "tokennum": 82}], "negative_passages": []}
{"query_id": "q-en-pytorch-fba64ac32da2f1cb65a2a0cea021ae9220defa92747dc8f4d4bc86b53f0d9510", "query": "def test_AdaptiveMaxPool3d_indices_cuda(self, dtype=torch.float): self._test_maxpool_indices(3, adaptive=True, device=\"cuda\", dtype=dtype) <ins> @staticmethod def _test_max_pool_nan(self, device, dtype=torch.float): for adaptive in ['', 'adaptive_']: for num_dim in [1, 2, 3]: fn_name = '{}max_pool{}d'.format(adaptive, num_dim) fn = getattr(F, fn_name) x = torch.full([1, 1] + num_dim * [3], float('nan')) res = fn(x, 1 if adaptive else 3) self.assertTrue(math.isnan(res.item())) @unittest.skipIf(not TEST_CUDA, \"CUDA unavailable\") @repeat_test_for_types(ALL_TENSORTYPES) def test_max_pool_nan_cuda(self, dtype=torch.float): self._test_max_pool_nan(self, device=\"cuda\", dtype=dtype) def test_max_pool_nan(self, dtype=torch.float): self._test_max_pool_nan(self, device=\"cpu\") </ins> def _test_scatter(self, tensor): x = torch.tensor(tensor, requires_grad=True) result = dp.scatter(x, (0, 1))", "positive_passages": [{"docid": "doc-en-pytorch-73afc555bf492c7072a4f9994675674d19a7420810f08eaca2d074197550e516", "text": "max pooling functions are not consistent with max functions. Below an example, every max pooling (be it 1d, 2d or 3d, adaptive or not) acts the same, on cpu or on cuda. Essentially, there are two fondamental differences : max pooling of all values is while for it's max pooling of nan and valid values is valid values, which means s get ignored, while for , as soon as there is a value, the result is . More generally, choosing explicetely how to deal with as in numpy () could be a solution, but maybe this is related to CuDNN's max pooling ? Built from latest sources (as of 05/17) PyTorch version: 0.5.0a0+ Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.4 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: Quadro M1000M Nvidia driver version: 390.30 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux-gnu/libcudnnstaticv7.a Versions of relevant libraries: [conda] magma-cuda91 2.3.0 1 pytorch [conda] torch 0.5.0a0+ <pip[conda] torch 0.3.1b0+ <pip[conda] torch 0.5.0a0+ <pip\nHi, . Thank you for providing the example. I'll take a look.\nHmhm. I seem to get a double free in the ...\nAnother question: Do we really want to provide an option to ignore NaN? My view is that \"if you have nan in your net, you're screwed\", so I would just return NaN there, personally.\nMy practical usecase is for KITTI groundtruth depthmaps and FlowMaps which are 2D sparse arrays.", "commid": "pytorch_issue_7645", "tokennum": 506}], "negative_passages": []}
{"query_id": "q-en-pytorch-fba64ac32da2f1cb65a2a0cea021ae9220defa92747dc8f4d4bc86b53f0d9510", "query": "def test_AdaptiveMaxPool3d_indices_cuda(self, dtype=torch.float): self._test_maxpool_indices(3, adaptive=True, device=\"cuda\", dtype=dtype) <ins> @staticmethod def _test_max_pool_nan(self, device, dtype=torch.float): for adaptive in ['', 'adaptive_']: for num_dim in [1, 2, 3]: fn_name = '{}max_pool{}d'.format(adaptive, num_dim) fn = getattr(F, fn_name) x = torch.full([1, 1] + num_dim * [3], float('nan')) res = fn(x, 1 if adaptive else 3) self.assertTrue(math.isnan(res.item())) @unittest.skipIf(not TEST_CUDA, \"CUDA unavailable\") @repeat_test_for_types(ALL_TENSORTYPES) def test_max_pool_nan_cuda(self, dtype=torch.float): self._test_max_pool_nan(self, device=\"cuda\", dtype=dtype) def test_max_pool_nan(self, dtype=torch.float): self._test_max_pool_nan(self, device=\"cpu\") </ins> def _test_scatter(self, tensor): x = torch.tensor(tensor, requires_grad=True) result = dp.scatter(x, (0, 1))", "positive_passages": [{"docid": "doc-en-pytorch-202a6b4a46ae62b8f7553c0429b323612fb197593d261f12c30c05b56ae7ce77", "text": "For an algorithm that use FlowNet-like architecture that outputs predictions at multiple scale levels, we can either compare predictions to downscaled GT or upscaled predictions to GT. The first being obviously less computationally expensive, the ignore NaN would help downscaling such sparse 2D maps. For the moment we do something I find ugly, you can see it Essentially it zeroes the s, takes the map and construct two maps of positive and negative values which are then maxpooled and back together. I am actually open for a strict \"no-nan\" policy on pooling functions, but in that case better enforce it before someone writes a code that tries to benefit from maxpooling ignore s feature/bug and if you have a clever way of pooling sparse 2D tensors, I'm open to it, but I guess it's a topic for pytorch forums ;)\nPersonally, I think it is more sane have NaN -NaN in the pooling and offer a parametrizable (where you get to pick the values) that does or so.\nSo the proposed fix does NaN -NaN similar to max. I didn't try to fix gradients for the nan case. This would involve keeping the values and I don't think that is worth it (in particular because I would not expect the pooling layer to be last, and otherwise we'd probably get NaN as grad_out). If you are reasonably happy with it, I'd move it to a PR.\n+1 for NaN -NaN, \"abyssus abyssum invocat\" We could also add an optional mask which would be a ByteTensor of the same size, specifying whether or not the considered pixel is used for the pooling, that could be used for any kind of pulling. An functionality would then be to provide the mask\nI could work with that. to continue to be picky, the potential drawback is that it won't work for other pooling methods such as average pooling or median pooling. The problem here is that the max operation inherently ignores nonmax values which can be leveraged for \"ignore some pixels\" operation, but it has a \"non universal\" feel to it since it woun't be as easy for other kinds of 2D operations.\nTo be clear, the fix is good for me, but I figured a related discussion on selective pooling (whether to ditch NaN values or anything you want to ignore) could happen (maybe not on this issue ?)", "commid": "pytorch_issue_7645", "tokennum": 513}], "negative_passages": []}