{"_id":"q-en-pytorch-8301cedf13e53064f66b2d6796440bc5f8998c4ebe3f5de4c15188f6f81e75f2","text":"In torch, printing a tensor will print the whole tensor, even if that one is huge. This is sometimes annoying, especially when using a notebook, as it crashes it. Numpy has a very nice way to display big tensors by truncating them. Would it be possible so have something similar for pytorch?\nYeah, we haven't implemented it yet, but that's definitely on our roadmap\nImplemented in .\nI should say that I have some artefacts that appear with while printing a tensor. Here is an example output: gives Curiously, if I simply do It works fine and print nicely, with the truncation for large tensors. I'm using Python 2.7.6 (probably the one that came with my system and not from anaconda). Maybe it's just a bad setup in my machine?\nhmmm, it works fine on python 2.7.12. wonder what changed between .6 and .12\nok i got a repro. I opened my ipython and did this:\nmy example was flawed, sorry. Using works for me, but when only entering it prints the s, as in your example.\nIs it possible to tell the truncation point?\nyes, use the function"} {"_id":"q-en-pytorch-1cefbb85ee22d904d01026c3f6d0e48f6df9da78d6991ab1b9dea3f8073edb63","text":"Right now it makes everything .\nTo be clear, for a tensor, the problem is only when all of N, H, and W are 1. (So BatchNorm2d on batch size of 1 is OK as long as you don't have a 1x1 image). What's the desired behavior? The only reasonable behavior I can think of is: Raise an exception when the dimensions of which you are normalizing are one or Output zero (+ optional affine transform) I'm not sure the outputing zero is a good idea. I can't think of a case where that's what you want."} {"_id":"q-en-pytorch-ecbcde49ddf476362013b7a1627eef3a9d2d58299691994caa08e5a3ac912917","text":"Hi, I have installed two versions of CUDA. But I want to use one of it, can I specify it during setup process?\nTry running\nI have tried it, () still returns False. Thx for your reply!\nDo you have the cuda driver installed?\nOf course I do. It raise assertion error which shows \"The NVIDIA driver on your system is too old(found version 8000)\".\nthat's the answer to your problem. upgrade your driver.\nThx!"} {"_id":"q-en-pytorch-6f446ea3ab104270b845da39ed3c7f9d5becaa8201448f27f7952bbe79863d80","text":"I think I encountered a memory leak situation potentially due to . I create a to reproduce my observations. Basically, for three different (meaningless) models, once the function is used, the memory usage will keep increasing. To run the code, where the flag controls whether is used. Please run it long enough. I'm on Ubuntu 14.04 with python3.6. I tried two versions of pytorch, 0.1.122 and 0.1.12+. Side Note: Even without using , the memory usauge can also increase for a while, but it will finally stablize at a level. What's more, when (i.e. ) is used, it can take very long before the memory usage stablizes. I'm not sure whether this is a problem or not.\nI think I'm sure there is a memory leak due to , and I found a weird way to do a temporary fix under pytorch version \"0.1.122\". The simplest example is as follows: Basically, if we replace with , and pass in an unpacked list of numbers, (i.e., instead of ), the memory leak is gone. Using will also have the leak issue. Unfortunately, I don't know why this is working. 
{"_id":"q-en-pytorch-1cefbb85ee22d904d01026c3f6d0e48f6df9da78d6991ab1b9dea3f8073edb63","text":"Right now it makes everything .\nTo be clear, for a tensor, the problem is only when all of N, H, and W are 1. (So BatchNorm2d on a batch size of 1 is OK as long as you don't have a 1x1 image.) What's the desired behavior? The only reasonable behaviors I can think of are: raise an exception when the dimensions over which you are normalizing are one, or output zero (+ optional affine transform). I'm not sure that outputting zero is a good idea. I can't think of a case where that's what you want."}
{"_id":"q-en-pytorch-ecbcde49ddf476362013b7a1627eef3a9d2d58299691994caa08e5a3ac912917","text":"Hi, I have installed two versions of CUDA, but I want to use only one of them. Can I specify it during the setup process?\nTry running\nI have tried it, but () still returns False. Thx for your reply!\nDo you have the cuda driver installed?\nOf course I do. It raises an assertion error which shows \"The NVIDIA driver on your system is too old (found version 8000)\".\nthat's the answer to your problem. upgrade your driver.\nThx!"}
{"_id":"q-en-pytorch-6f446ea3ab104270b845da39ed3c7f9d5becaa8201448f27f7952bbe79863d80","text":"I think I encountered a memory leak, potentially due to . I created a to reproduce my observations. Basically, for three different (meaningless) models, once the function is used, the memory usage keeps increasing. To run the code, where the flag controls whether is used. Please run it long enough. I'm on Ubuntu 14.04 with python3.6. I tried two versions of pytorch, 0.1.122 and 0.1.12+. Side note: even without using , the memory usage can also increase for a while, but it will finally stabilize at a level. What's more, when (i.e. ) is used, it can take very long before the memory usage stabilizes. I'm not sure whether this is a problem or not.\nI think I'm sure there is a memory leak due to , and I found a weird way to do a temporary fix under pytorch version \"0.1.122\". The simplest example is as follows: Basically, if we replace with , and pass in an unpacked list of numbers (i.e., instead of ), the memory leak is gone. Using will also have the leak issue. Unfortunately, I don't know why this is working. Finally, since there are so many in pytorch code, to do a quick hacky fix, one can change the function in the file as follows:\nAfter some more research, I think we have a much bigger problem here. My current conjecture is that whenever we use ( object) as an argument to a function, there will be a memory leak. An example is the function in , where of the code is which uses . Again, a minimal example for reproduction is as follows: As in the case, once I change the code to , the memory leak is gone. So, in short, the problem is with the usage of .\nThe same issue. +1\nThanks for tracking this down and for the repro script!\nThanks for the quick fix! BTW, will the fix be included in the release version (0.1.12_?)?\nit'll be released in\nI see. Thanks again."}
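The snippets in the thread above were stripped, so as an illustration only: a sketch of the two call styles the reporter contrasts, under my assumption (not stated in the thread) that the object in question is a `torch.Size` and the call is something like `Tensor.expand`. The leak itself was fixed long ago; this only shows the "packed size object" versus "unpacked plain integers" pattern being discussed.

```python
import torch

x = torch.randn(1, 128)
size = torch.Size([64, 128])

# Passing the torch.Size object directly (the style the reporter says leaked
# on the 0.1.12-era builds):
y1 = x.expand(size)

# Workaround style from the thread: pass an unpacked list of plain integers
# instead of the torch.Size object.
y2 = x.expand(*[int(s) for s in size])

assert y1.shape == y2.shape == (64, 128)
```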
{"_id":"q-en-pytorch-62a054405c741f8d90744de9968e396141abea2b081b14e069b1fa0843edfaca","text":"Torchvision is considered a basic requirement of the tutorials. Perhaps it makes sense to include it in the docker build.\nIt's easy to install it with the command ;)\nIt's easy to install torch itself using a similar command? :). I was thinking a docker image should have torchvision included.\nThe runtime docker file already has torchvision."}
{"_id":"q-en-pytorch-55c1ecd176853c68795a6061ebed005399930e9b6e97e5410ed141e6c87fdbe1","text":"Be Careful What You Backpropagate: A Case For Linear Output Activations & Gradient Boosting. I can work on this if it can be added to pytorch. Please let me know. Thanks!\nThis should be fairly straightforward to add in user code using . Eventually it could be added to core, but its utility should probably be validated in external repos first."}
{"_id":"q-en-pytorch-d6cfc0fa9a654f0127f12ea13aa8b1d5a3b3b01b57eb0165468cedfcc99f5677","text":"If a norm is zero, its gradient returns nan: Obviously this is just happening because the gradient divides by the norm, but the (sub)gradient here should probably be zero, or at least not nan, since nan will propagate and make all updates nan. Probably low priority, as it's not going to be an issue in 99% of cases, but we're doing a few things with (exact) line searches where this caused a nan to appear, breaking everything downstream.\nI'm encountering exactly the same issue! Spent hours on debugging, just to find PyTorch has a bug in such a basic thing.\n+1 just found this bug too\n+1 for this bug. Temporarily changing my code to something like the following for the sake of debugging: x = Variable((1), requires_grad=True) y = x + 1e-16 y.norm().backward() print x.grad\nThe thing is that in the 2-norm there is a square root, which has an infinite gradient at 0. The backward pass gives you nan because you then multiply 0 and an infinity.\nFor a scalar, the 2-norm is basically abs. But x.abs().backward() gives you 0 gradient. In this sense, it's not coherent.\nI found this error, too\nAlban fixed this behavior in\nHi, the norm function can give us the 0 gradient now. However, the following code still has the nan gradient problem\nHo, the square root has no gradient at 0. This is expected behavior.\nHi, but shouldn't the sub-gradient of the square root be zero? Also, y = ( x * x ) should equal x.norm(), so why do they have different gradients (0 and nan)?\nI think was right. The left-side derivative of sqrt(x) at x=0 is undefined, so it doesn't even have a subgradient at x=0.\nsquare root has no subgradient at 0. You could define a gradient by continuity but then it would be ... Given that pytorch is using autograd, and (equivalent to your ) are completely different: The first one is a single function that is convex and defined on R; it has a subgradient of 0 at 0. The second one is composed of two functions: the first is the square function, which is differentiable and outputs values in [0, +inf). The second is the square root, which is not convex and, even though it is defined on [0, +inf), is only differentiable on (0, +inf); its gradient at 0 is undefined. Given that, even though and will return the same value, their gradients may differ at points where the composition is not differentiable. This is because automatic differentiation looks at each step of the computation one by one, and even though in some cases a subgradient exists (because we look at multiple operations as a single function), it is not always the case and the gradient remains undefined.\nI think math is math. Any root's gradient at zero is either inf or undefined. This issue should be handled by users themselves by adding a small value (as done above), but an error/warning message may be helpful since it is pretty hard to debug. Say: Infinite/Undefined gradient is detected at XfunctionX at line Y. Exit.\nAgree, norm is not differentiable at 0. The band-aid that Alban put there is wrong (even in the limit sense the gradient at 0 should be 1, not 0), but it should not have been there at all. Norm is norm; if someone wants to add epsilons to their norms (like batchnorm, e.g.) they are welcome to do so in user code. What would numpy do?\nI've also run into a number of problems related to the change introduced in . Is there a reason the subgradient is set to 0, rather than 1 (the limit as the norm goes to 0)? As a minimal example: Produces\nWell, any value between [-1, 1] is a valid subgradient for the 2-norm. More generally, any vector in the unit ball of the dual norm is a valid subgradient. This means that 0 is always going to be a subgradient, while 1 will not be for all p. Anyway, the theory says that any of them could be taken and subgradient descent will work. I'm sure that depending on the application, one will be better than the other. For example, the relu function will also give a 0 subgradient at 0; you could have given 1. The main point here was to remove nans that make your network give nan for everything, which is not convenient."}
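To make the behaviors being argued about concrete, a small sketch using the current tensor API (the thread itself is from the old Variable days); the epsilon value is arbitrary, and the printed gradients reflect the post-fix behavior of `norm` described above.

```python
import torch

# norm() at the origin: after the fix discussed above, the chosen subgradient is 0.
x = torch.zeros(3, requires_grad=True)
x.norm().backward()
print(x.grad)            # tensor([0., 0., 0.])

# Composing square and sqrt: sqrt has an undefined (infinite) derivative at 0,
# so the backward pass multiplies 0 by inf and produces nan.
x2 = torch.zeros(3, requires_grad=True)
torch.sqrt((x2 * x2).sum()).backward()
print(x2.grad)           # tensor([nan, nan, nan])

# Epsilon workaround from the thread: keep the argument of sqrt away from 0.
x3 = torch.zeros(3, requires_grad=True)
torch.sqrt((x3 * x3).sum() + 1e-12).backward()
print(x3.grad)           # tensor([0., 0., 0.])
```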
{"_id":"q-en-pytorch-a0d273822e65898b1ccb1b24fcdee057356a1b5cb47b4fa6e0713f371ffc06c1","text":"Intermittent test failures occurring on AdaptiveMaxPool3d. I have seen this in the past where max pooling has exactly the same values in the same window; the tie is then resolved differently on CPU/GPU. For example [0 2.5 2.5 3] can give a max index of or on CUDA depending on the runtime. So avoiding input values that are within epsilon of each other is important (especially at half precision). Can you fix that one?\nWill do. Thanks for identifying the issue."}
{"_id":"q-en-pytorch-7c531194151e50063e02afc5026a310c247879bb394741978af1c5a6f9f48de2","text":"When using the classic SGD optimizer with momentum together with sparse embeddings, memory keeps being garbage collected / reallocated, leading to a slowdown and eventually an out-of-memory error. The issue disappears when momentum is not used or when embeddings are not sparse. I'm using the latest pytorch version on conda:\nI tried out your script with momentum 0.1 on master; it takes roughly 10800mb gpu memory max. This is caused by using a sparse buffer. I'm sending out a PR for this."}
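A sketch of the configurations the reporter contrasts; the model and sizes are made up. Dropping momentum (or not using sparse embeddings) is the workaround stated in the thread; using SparseAdam is my own suggestion, not something the thread recommends, and newer PyTorch versions may refuse the sparse-gradient + momentum combination outright.

```python
import torch
import torch.nn as nn

# Illustrative setup: a sparse embedding trained with SGD.
emb = nn.Embedding(100_000, 128, sparse=True)
idx = torch.randint(0, 100_000, (32,))

# Configuration the thread reports as problematic: sparse gradients plus SGD
# momentum (the momentum buffer ends up sparse).
opt = torch.optim.SGD(emb.parameters(), lr=0.1, momentum=0.9)

# Workaround consistent with the thread: drop momentum...
opt = torch.optim.SGD(emb.parameters(), lr=0.1, momentum=0.0)
# ...or (my suggestion, not from the thread) use an optimizer designed for
# sparse gradients, such as SparseAdam.
opt = torch.optim.SparseAdam(emb.parameters(), lr=1e-3)

loss = emb(idx).sum()
loss.backward()
opt.step()
```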
{"_id":"q-en-pytorch-5d50412dc01f12c66dce9d07c26f1d47508c97d68d53e8bae7b4990e70585dbe","text":"Looking at the values returned from tn.multinomial(1) and counting them, it seems like it returns the values of the multinomial distribution taken from t, i.e. that it has ignored the signs of the values. However, it seems like always returns a tensor filled with zeros.\nlooking into this"}
{"_id":"q-en-pytorch-e622986277177825174756ცde49... ","text":""}
{"_id":"q-en-pytorch-e622986277177825174756cd0d58bb252b326c5ede6f55945541c76bbab2d669","text":"Hi there, I'm trying to install Pytorch as a module on my university's computing cluster. The node uses CentOS (6.4) x86_64. I install by running these commands: module load gcclib/5.2.0 cmake/3.8.2 module load gcc/5.2.0 module load anaconda3/4.0.0 export CMAKE_PREFIX_PATH=\"$(dirname $(which conda))/../\" export NO_CUDA=1 git clone --recursive cd pytorch/ python setup.py install --prefix=${HOME} Things look fine as it installs, but then I run into the following errors, ending with the process terminating with an error relating to GCC. I've tried with GCC 6.2.0 and the same result occurs. Not sure what to even try to fix this! Thanks for any help you can provide!\nSame here, with gcc 5.4.0\nI don't think building from source works very well with gcc 5.4. Could you install gcc 4.9 and try compiling?\nI get this error instead when using gcc 4.9.0, doesn't make it very far at all.\nHi, I think this is broken after / due to\nI have the same problem ( etc.) with gcc 4.8.5 on Linux with the current head ( ). What works for me is to add at the beginning of the following four files: (see also -- this can be turned into a pull request very easily) This fix is the same as but for different files.\nThis worked for me. Thank you.\nfixed in latest master, thanks to"}
{"_id":"q-en-pytorch-9a0730e94c0fc8a4d967d9d6465c1b0ef1b139a90afe556ebf7040ecb39471f7","text":"Hi, looking at the formulas of the LSTM in , in the third equation (g_t), the subscript of the second W should be changed from W_{hc} to W_{hg}. The correct formula: g_t = tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{(t-1)} + b_{hg}) Cheers, Navid"}
{"_id":"q-en-pytorch-31dfedb0de26de12744a460838d9a2e3cf8d4841e9927b5f6b6d6fff0ae5aeb6","text":"The three-clause BSD license in the file LICENSE says in clause 1 \"Redistributions of source code must retain the above copyright notice...\" However, there is no longer a copyright notice in the file; it appears to have been removed in commit (see ).\ncc"}
{"_id":"q-en-pytorch-9355a1286eaeb76b71a81a72570ae9c72fdf4257046d7fb7031b0abacf3db8bf","text":"PyTorch version: 0.4.0 Is debug build: No CUDA used to build PyTorch: 9.0.176 OS: Ubuntu 16.04.3 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.5) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.0.176 GPU models and configuration: GPU 0: TITAN X (Pascal) GPU 1: TITAN X (Pascal) Nvidia driver version: 384.98 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux- /usr/lib/x86_64-linux- /usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a /usr/local/lib/python2.7/dist- /usr/local/lib/python3.5/dist- Versions of relevant libraries: [pip3] numpy (1.14.1) [pip3] numpydoc (0.6.0) [pip3] torch (0.4.0) [pip3] torchvision (0.2.0) [conda] cuda90 1.0 h6433d270 pytorch [conda] magma-cuda90 2.3.0 1 soumith [conda] pytorch 0.4.0 py36_cuda9.0.176_cudnn7.1.2_1 [cuda90] pytorch [conda] torchvision 0.2.1 py36_1 pytorch\n() returns 7102\nPossibly related:\nI'm not sure why, but I can't reproduce this on master. I can reproduce on 0.4 though.\nNever mind, this error happens when CUDNN is updated regardless of pytorch version."}
{"_id":"q-en-pytorch-262b3debd105719f4260aa0a3a70d946007b6621a5a2b26a31d84b9a4b5cb3dd","text":"Einsum currently modifies variables in-place (without an explicit indication that it does so), which prevents pytorch from automatically backpropagating. Results in the following runtime error: Demonstration and discussion can also be found . PyTorch or Caffe2: PyTorch How you installed PyTorch (conda, pip, source): pip OS: PyTorch version: 0.4.0 Python version: 3.6.1\nOops. Thank you for reporting and for the minimal example. I'll see to getting that fixed.\nSaw the same issue over here. Thank you!\nIs cloning the tensor before passing it to einsum a valid workaround?\nYes, but a somewhat expensive one. If you have the ability to recompile, you could also apply the PR fixing this. Somehow it seems to be stuck in the review queue because it hit an unrelated CI problem...\nOkay, so how to use it then?\nJust get a recent enough master and it will work."}
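A sketch of the clone workaround mentioned in the thread. The thread does not say which einsum expression triggered the problem, so the expression here ('ij,jk->ik', i.e. a plain matmul) is an arbitrary illustration; on fixed versions the clones are unnecessary but harmless.

```python
import torch

a = torch.randn(3, 4, requires_grad=True)
b = torch.randn(4, 5, requires_grad=True)

# Workaround discussed in the thread: clone the inputs before passing them to
# einsum so any in-place modification happens on copies, keeping autograd happy.
out = torch.einsum('ij,jk->ik', a.clone(), b.clone())
out.sum().backward()
print(a.grad.shape, b.grad.shape)  # torch.Size([3, 4]) torch.Size([4, 5])
```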
{"_id":"q-en-pytorch-e01185564370d695c652297820033c10134ab77dd1729ed0d1fb165ef0bb43a6","text":"Hi, I'm trying to run the following example from here, but run into some issues: ld: warning: ignoring file , file was built for unsupported file format ( 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x03 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 ) which is not the architecture being linked (x86_64): ld: warning: ignoring file , file was built for unsupported file format ( 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x03 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 ) which is not the architecture being linked (x86_64): ld: warning: ignoring file , file was built for unsupported file format ( 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x03 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 ) which is not the architecture being linked (x86_64): full error log: I installed pytorch: I'm running pytorch on: OSX 10.11.6. I installed libtorch following these steps: here is the cmake output:\nshould I use: wget -c but the problem is that libtorch-macos- does not contain any libraries in the ./lib folder. How do I create it? instead of: wget -c when I use libtorch-macos-, I get this error:\nI've got the same 'image not found' error in this same spot, by following the minimal example from the docs. Running macOS Mojave here.\nwe need to build libtorch manually because it doesn't come with all libs...\nGetting the same issue as with macOS High Sierra 10.13.5. When I look in I see but not as it requires.\ncc: looks like a missing copy of in our OSX libtorch builds.\nSeeing the same thing.\nFixed this issue by downloading the two missing libraries from and copying them both (, ) to .\nCorrect me if I am wrong, but that solution is not answering the original issue: I was having the same issue when trying to run the minimal cpp frontend example here: I grabbed libtorch linked on the main pytorch site: You are right, had neither nor in . However, copying those files to that location did not solve the issue for me. I was able to get the minimal example by giving cmake the for Pytorch's torch library (compiled from source): I am going to try compiling the DCGAN example for the cpp frontend in this same way. I assume that this is not going to work, and that we will ultimately need to point to for something like data loading, etc. I am on Mac 10.14.3\nI was facing almost the same problems here and managed to fix them by downloading and from . Just in case anyone needs this, please do NOT download the newest post there! Instead, find the file instead of the file. I didn't find the difference between the two at first, but succeeded once I downloaded those ones.\nAny idea what's going on with these missing MKL libraries for the mac c++ libtorch? Somehow this is still an issue dating back to 2018, even after pytorch 1.3; I just tried it out again. The workaround done by above works for me, which involves manually copying the dylib files from old (pre v1.0) intel binaries.\nI fixed all such issues for v1.3.0 in , or so I thought. But I only fixed them for pip and conda packages, not libtorch. I'm looking into it right now and will fix libtorch too.\nwill kick off new binary builds as the fix went in. will close the issue after the builds are uploaded and live.\nthis is fixed now with fixed binaries that are re-uploaded."}
{"_id":"q-en-pytorch-82c7905498663a8787a4fb25e10717c68c8433c7d30806d3b2b65cdfb71925b4","text":"I'm trying to implement distributed adversarial training in PyTorch. Thus, in my program pipeline I need to forward the output of one DDP model to another one. When I run the code in a distributed setting, the following error is thrown: frame : c10::Error::Error(c10::SourceLocation, std::string const&) + 0x45 (0x7fa248f906d5 in /export/home/haoran/anaconda3/envs/torch1.2/lib/python3.6/site-) frame : c10d::Reducer::prepare_for_backward(std::vector