{"_id":"q-en-pytorch-9ccbc97eb054bf88772f5b4fbbd720390ddaa6da2f0faf966bf9060c32b47da9","text":"This happens when the index tensor contains duplicate elements. If this is not allowed indexcopy should raise an exception when this happens and we should fix the test. Otherwise we need to fix the autograd op. Here's an example: cc\nWhy would you with repeated indices? Sounds for me like UB is a reasonable. We should check that in C\nI ran into this in and fixed the test for now. I'm not sure how to interpret last comment: did you mean UB behavior is \"unreasonable\" rather than \"a reasonable\"? The last sentence reads that way.\nI'd be ok with saying that it's UB in such case. There's no natural choice, and it can decrease performance.\nIs this issue still worth working on? If adding the additional check can decrease performance, can we add another argument specifying that indexes contain duplicate elements?"} {"_id":"q-en-pytorch-a6ca36a1cfee2b3454534f6eb50dc9348a5301ff7878b05af2534c06aa42a1da","text":"As Arthur Szlam reports, fb-internal cudnn is still lagging behind, and giving batch-size 1024 with batchnorm is raising an error from cudnn. Need to check for compile-time version and disable this codepath\nThe 1024 limitation was removed in 5.1.10"} {"_id":"q-en-pytorch-815f7a4bc9bd150a96577f3e8c7eaed378af052f98845620366116b43d873a7c","text":"Hi, I need to pass input through one nn.Module, then argmax the output, and then pass that output to a second nn.Module. I want to backprop through the argmax back to the weights of the first module. That's impossible right now, though, it seems. When I loop through the gradient of the weights of my first module (with the following code): firstmodel.zerograd() secondmodel.zerograd() loss.backward() # retainvariables=True for param in secondmodel.parameters(): print( 'data: ', ) print( 'grad data: ', ) for param in firstmodel.parameters(): print( 'data: ', ) print( 'grad data: ', ) I get the following error, stating that those gradients can no longer be viewed/exist: -main()\nargmax is not differentiable (and even if you try to think about it as a function differentiable almost everywhere, its derivative is always 0, so you won't get anything meaningful)."} {"_id":"q-en-pytorch-76085f668823e1e14d3db20f6e407e0fc43a0e781d2a486c75c60b2bf0c64322","text":"This is how I implement the decoder of a sequence to sequence model I'm not sure about the new autograd mechanics but this worked in the previous version. If I didn't make the unrolling codes a function it will work. It will also work with CPU. I compiled from source with Python 2.7, CUDA 8.0 and Cudnn 6\nThanks for the minimal reproduction! It broke OpenNMT and a big internal model but we weren't looking forward to isolating what parts of those had the issue.\nThat's great, I'll take a look today. Thanks!\nI've pushed a fix to .\nHow does the inference work in this case? I mean, once you train it how do you test it in the test phase?"} {"_id":"q-en-pytorch-f356b8e87796ce8c604f40a17e0ccd34ef326a6487927d997013a1f527bca839","text":"We should check that the version from cudnn.h matches the version from the loaded shared library. 
{"_id":"q-en-pytorch-89ff28c4a5771497cf71252d9105dbaeb0e85eecfd1e3aa3339959ef1fc89104","text":"Hi, I think that in lines 28 and 30: if self.transA: a = a.transpose(2, 3) if self.transB: b = b.transpose(2, 3) should be: if self.transA: a = a.transpose(1, 2) if self.transB: b = b.transpose(1, 2) Indeed, in that branch the tensor has 3 dimensions and thus the code crashes. Maybe an indexing error when translating from Lua? Thank you very much,\nYes, you're right. Could you send a PR with a fix? Thanks!\nYes, of course, just done, thank you very much."}
{"_id":"q-en-pytorch-77fbdb92efba00b171fda95ef72ad1556da96d83b890858d72fb01d6583272cb","text":"As shown in the source code, this parameter is not used. Any reason that we keep it?\nThis can be cleaned up."}
{"_id":"q-en-pytorch-63eb661e07e3fadddd1e2c8c3e1932066a1b6e25b49c780bbf420bf912a21e4a","text":"Consider the following code, and run it with multiple GPUs (e.g. 4): It will output: That is, the default device would always be zero, even in DataParallel. My PyTorch version is 0.1.122. I think this is not the desired behavior, and it can cause trouble. I tried to insert my own CUDA kernel into backward to calculate the gradients; it became very slow, and I fixed it by (grad.get_device()). Anyway, I think current_device in forward and backward should be the same, or could anyone explain to me why they are different?\nThis seems like something we should fix: ?\n
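(Side note on the last record: with the 0.1.x-style autograd Function API, one way to keep a custom kernel in backward on the right GPU is to switch to the device of the incoming gradient before launching it. MyOp below is a hypothetical placeholder, not code from the issue; torch.cuda.device and Tensor.get_device are the pieces the reporter's fix relies on, and torch.cuda.device_of(grad_output) would work as well.)

```python
import torch
from torch.autograd import Function


class MyOp(Function):
    # Hypothetical op, shown only to illustrate guarding backward with the
    # device that owns the incoming gradient (assumes CUDA tensors).

    def forward(self, input):
        self.save_for_backward(input)
        return input.clone()

    def backward(self, grad_output):
        # Inside DataParallel, current_device can still be 0 here, so switch
        # to grad_output's device before launching any custom CUDA kernel.
        with torch.cuda.device(grad_output.get_device()):
            grad_input = grad_output.clone()  # stand-in for the custom kernel
        return grad_input
```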