CodeConvo / pytorch / pytorch.i2c.dev.jsonl
{"query_id": "q-en-pytorch-8301cedf13e53064f66b2d6796440bc5f8998c4ebe3f5de4c15188f6f81e75f2", "query": "In torch, printing a tensor will print the whole tensor, even if that one is huge. This is sometimes annoying, especially when using a notebook, as it crashes it. Numpy has a very nice way to display big tensors by truncating them. Would it be possible so have something similar for pytorch?\nYeah, we haven't implemented it yet, but that's definitely on our roadmap\nImplemented in .\nI should say that I have some artefacts that appear with while printing a tensor. Here is an example output: gives Curiously, if I simply do It works fine and print nicely, with the truncation for large tensors. I'm using Python 2.7.6 (probably the one that came with my system and not from anaconda). Maybe it's just a bad setup in my machine?\nhmmm, it works fine on python 2.7.12. wonder what changed between .6 and .12\nok i got a repro. I opened my ipython and did this:\nmy example was flawed, sorry. Using works for me, but when only entering it prints the s, as in your example.\nIs it possible to tell the truncation point?\nyes, use the function", "positive_passages": [{"docid": "doc-en-pytorch-0765e1f807790457a0c3360c08023ee1cafbdc4e0a83eeafeb47a39cc98ed1e4", "text": "return type(self), (self.tolist(),) def __repr__(self): <del> return repr(str(self)) </del> <ins> return str(self) </ins> def __str__(self): # All strings are unicode in Python 3, while we have to encode unicode", "commid": "pytorch_pr_208"}], "negative_passages": []}
{"query_id": "q-en-pytorch-1cefbb85ee22d904d01026c3f6d0e48f6df9da78d6991ab1b9dea3f8073edb63", "query": "Right now it makes everything .\nTo be clear, for a tensor, the problem is only when all of N, H, and W are 1. (So BatchNorm2d on batch size of 1 is OK as long as you don't have a 1x1 image). What's the desired behavior? The only reasonable behavior I can think of is: Raise an exception when the dimensions of which you are normalizing are one or Output zero (+ optional affine transform) I'm not sure the outputing zero is a good idea. I can't think of a case where that's what you want.", "positive_passages": [{"docid": "doc-en-pytorch-19f6e3c49e2f982b4b2c74a0b82f1cc7984c7aea051e47c00252a959cb536395", "text": "from numbers import Integral import warnings import math <ins> from operator import mul from functools import reduce </ins> import torch from torch._C import _infer_size", "commid": "pytorch_pr_2961"}], "negative_passages": []}
{"query_id": "q-en-pytorch-1cefbb85ee22d904d01026c3f6d0e48f6df9da78d6991ab1b9dea3f8073edb63", "query": "Right now it makes everything .\nTo be clear, for a tensor, the problem is only when all of N, H, and W are 1. (So BatchNorm2d on batch size of 1 is OK as long as you don't have a 1x1 image). What's the desired behavior? The only reasonable behavior I can think of is: Raise an exception when the dimensions of which you are normalizing are one or Output zero (+ optional affine transform) I'm not sure the outputing zero is a good idea. I can't think of a case where that's what you want.", "positive_passages": [{"docid": "doc-en-pytorch-56fb18ffabc8cfdce2f8006735710b90051e1b57b19b53998e8aa3ac7511d892", "text": "def batch_norm(input, running_mean, running_var, weight=None, bias=None, training=False, momentum=0.1, eps=1e-5): <ins> size = list(input.size()) if reduce(mul, size[2:], size[0]) == 1: raise ValueError('Expected more than 1 value per channel, got input size {}'.format(size)) </ins> f = torch._C._functions.BatchNorm(running_mean, running_var, training, momentum, eps, torch.backends.cudnn.enabled) return f(input, weight, bias)", "commid": "pytorch_pr_2961"}], "negative_passages": []}
{"query_id": "q-en-pytorch-ecbcde49ddf476362013b7a1627eef3a9d2d58299691994caa08e5a3ac912917", "query": "Hi, I have installed two versions of CUDA. But I want to use one of it, can I specify it during setup process?\nTry running\nI have tried it, () still returns False. Thx for your reply!\nDo you have the cuda driver installed?\nOf course I do. It raise assertion error which shows \"The NVIDIA driver on your system is too old(found version 8000)\".\nthat's the answer to your problem. upgrade your driver.\nThx!", "positive_passages": [{"docid": "doc-en-pytorch-070db6c4fc6f6f907a67f5eff63dda2759f93a2cf19670bccc99c9ab7375a073", "text": "| :attr:`stride` controls the stride for the cross-correlation. | If :attr:`padding` is non-zero, then the input is implicitly zero-padded on both sides <del> for :attr:`padding` number of points | :attr:`dilation` controls the spacing between the kernel points. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. </del> <ins> for :attr:`padding` number of points. | :attr:`dilation` controls the spacing between the kernel points; also known as the \u00e0 trous algorithm. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. `in_channels` and `out_channels` must both be divisible by `groups`. </ins> | At groups=1, all inputs are convolved to all outputs. | At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. <ins> At groups=`in_channels`, each input channel is convolved with its own set of filters (of size `out_channels // in_channels`). </ins> .. note::", "commid": "pytorch_pr_1602"}], "negative_passages": []}
{"query_id": "q-en-pytorch-ecbcde49ddf476362013b7a1627eef3a9d2d58299691994caa08e5a3ac912917", "query": "Hi, I have installed two versions of CUDA. But I want to use one of it, can I specify it during setup process?\nTry running\nI have tried it, () still returns False. Thx for your reply!\nDo you have the cuda driver installed?\nOf course I do. It raise assertion error which shows \"The NVIDIA driver on your system is too old(found version 8000)\".\nthat's the answer to your problem. upgrade your driver.\nThx!", "positive_passages": [{"docid": "doc-en-pytorch-f7031e660f7387bb371e278ab1f5e0deba5828cc04c655b5e42a5cb6a283bf7f", "text": "| :attr:`stride` controls the stride for the cross-correlation. | If :attr:`padding` is non-zero, then the input is implicitly zero-padded on both sides <del> for :attr:`padding` number of points | :attr:`dilation` controls the spacing between the kernel points. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. </del> <ins> for :attr:`padding` number of points. | :attr:`dilation` controls the spacing between the kernel points; also known as the \u00e0 trous algorithm. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. `in_channels` and `out_channels` must both be divisible by `groups`. </ins> | At groups=1, all inputs are convolved to all outputs. | At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. <ins> At groups=`in_channels`, each input channel is convolved with its own set of filters (of size `out_channels // in_channels`). </ins> The parameters :attr:`kernel_size`, :attr:`stride`, :attr:`padding`, :attr:`dilation` can either be:", "commid": "pytorch_pr_1602"}], "negative_passages": []}
{"query_id": "q-en-pytorch-ecbcde49ddf476362013b7a1627eef3a9d2d58299691994caa08e5a3ac912917", "query": "Hi, I have installed two versions of CUDA. But I want to use one of it, can I specify it during setup process?\nTry running\nI have tried it, () still returns False. Thx for your reply!\nDo you have the cuda driver installed?\nOf course I do. It raise assertion error which shows \"The NVIDIA driver on your system is too old(found version 8000)\".\nthat's the answer to your problem. upgrade your driver.\nThx!", "positive_passages": [{"docid": "doc-en-pytorch-ca8a9e7cde8b3e302e9883c007c0b71c952ec4d2b1b23b639294d25fe54d87c8", "text": "composed of several input planes. This module can be seen as the gradient of Conv1d with respect to its input. <del> It is sometimes (but incorrectly) refered to as a deconvolutional operation. </del> <ins> It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation). | :attr:`stride` controls the stride for the cross-correlation. | If :attr:`padding` is non-zero, then the input is implicitly zero-padded on both sides for :attr:`padding` number of points. | If :attr:`output_padding` is non-zero, then the output is implicitly zero-padded on one side for :attr:`output_padding` number of points. | :attr:`dilation` controls the spacing between the kernel points; also known as the \u00e0 trous algorithm. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. `in_channels` and `out_channels` must both be divisible by `groups`. | At groups=1, all inputs are convolved to all outputs. | At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. At groups=`in_channels`, each input channel is convolved with its own set of filters (of size `out_channels // in_channels`). </ins> .. note::", "commid": "pytorch_pr_1602"}], "negative_passages": []}
{"query_id": "q-en-pytorch-ecbcde49ddf476362013b7a1627eef3a9d2d58299691994caa08e5a3ac912917", "query": "Hi, I have installed two versions of CUDA. But I want to use one of it, can I specify it during setup process?\nTry running\nI have tried it, () still returns False. Thx for your reply!\nDo you have the cuda driver installed?\nOf course I do. It raise assertion error which shows \"The NVIDIA driver on your system is too old(found version 8000)\".\nthat's the answer to your problem. upgrade your driver.\nThx!", "positive_passages": [{"docid": "doc-en-pytorch-2a4353f25e6b66254df4b7bf1a55ef91eb3c6280d482b7114d6ee0c1c2abfae1", "text": "output_padding (int or tuple, optional): Zero-padding added to one side of the output groups (int, optional): Number of blocked connections from input channels to output channels bias (bool, optional): If True, adds a learnable bias to the output <ins> dilation (int or tuple, optional): Spacing between kernel elements </ins> Shape: - Input: :math:`(N, C_{in}, L_{in})`", "commid": "pytorch_pr_1602"}], "negative_passages": []}
{"query_id": "q-en-pytorch-ecbcde49ddf476362013b7a1627eef3a9d2d58299691994caa08e5a3ac912917", "query": "Hi, I have installed two versions of CUDA. But I want to use one of it, can I specify it during setup process?\nTry running\nI have tried it, () still returns False. Thx for your reply!\nDo you have the cuda driver installed?\nOf course I do. It raise assertion error which shows \"The NVIDIA driver on your system is too old(found version 8000)\".\nthat's the answer to your problem. upgrade your driver.\nThx!", "positive_passages": [{"docid": "doc-en-pytorch-29428830a26059a849496a55d799332d3518e5869a3f91cc1c2fde212359f495", "text": "composed of several input planes. This module can be seen as the gradient of Conv2d with respect to its input. <del> It is sometimes (but incorrectly) refered to as a deconvolutional operation. </del> <ins> It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation). </ins> | :attr:`stride` controls the stride for the cross-correlation. | If :attr:`padding` is non-zero, then the input is implicitly zero-padded on both sides <del> for :attr:`padding` number of points </del> <ins> for :attr:`padding` number of points. </ins> | If :attr:`output_padding` is non-zero, then the output is implicitly zero-padded on one side <del> for :attr:`output_padding` number of points | :attr:`dilation` controls the spacing between the kernel points. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. </del> <ins> for :attr:`output_padding` number of points. | :attr:`dilation` controls the spacing between the kernel points; also known as the \u00e0 trous algorithm. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. `in_channels` and `out_channels` must both be divisible by `groups`. </ins> | At groups=1, all inputs are convolved to all outputs. | At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. <ins> At groups=`in_channels`, each input channel is convolved with its own set of filters (of size `out_channels // in_channels`). </ins> The parameters :attr:`kernel_size`, :attr:`stride`, :attr:`padding`, :attr:`output_padding` can either be: <del> - a single ``int`` -- in which case the same value is used for the height and width dimension </del> <ins> - a single ``int`` -- in which case the same value is used for the height and width dimensions </ins> - a ``tuple`` of two ints -- in which case, the first `int` is used for the height dimension, and the second `int` for the width dimension", "commid": "pytorch_pr_1602"}], "negative_passages": []}
{"query_id": "q-en-pytorch-ecbcde49ddf476362013b7a1627eef3a9d2d58299691994caa08e5a3ac912917", "query": "Hi, I have installed two versions of CUDA. But I want to use one of it, can I specify it during setup process?\nTry running\nI have tried it, () still returns False. Thx for your reply!\nDo you have the cuda driver installed?\nOf course I do. It raise assertion error which shows \"The NVIDIA driver on your system is too old(found version 8000)\".\nthat's the answer to your problem. upgrade your driver.\nThx!", "positive_passages": [{"docid": "doc-en-pytorch-89807e395c83aa297144688018ba65eda0cbd3a26f5906aad71ea90d8b1422e8", "text": "The transposed convolution operator multiplies each input value element-wise by a learnable kernel, and sums over the outputs from all input feature planes. <del> **This module can be seen as the exact reverse of Conv3d**. It is sometimes (but incorrectly) refered to as a deconvolutional operation. </del> <ins> This module can be seen as the gradient of Conv3d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation). </ins> | :attr:`stride` controls the stride for the cross-correlation. | If :attr:`padding` is non-zero, then the input is implicitly zero-padded on both sides <del> for :attr:`padding` number of points </del> <ins> for :attr:`padding` number of points. </ins> | If :attr:`output_padding` is non-zero, then the output is implicitly zero-padded on one side <del> for :attr:`output_padding` number of points | :attr:`groups` controls the connections between inputs and outputs. </del> <ins> for :attr:`output_padding` number of points. | :attr:`dilation` controls the spacing between the kernel points; also known as the \u00e0 trous algorithm. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. `in_channels` and `out_channels` must both be divisible by `groups`. </ins> | At groups=1, all inputs are convolved to all outputs. | At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. <ins> At groups=`in_channels`, each input channel is convolved with its own set of filters (of size `out_channels // in_channels`). </ins> The parameters :attr:`kernel_size`, :attr:`stride`, :attr:`padding`, :attr:`output_padding` can either be: <del> - a single ``int`` -- in which case the same value is used for the height and width dimension </del> <ins> - a single ``int`` -- in which case the same value is used for the depth, height and width dimensions </ins> - a ``tuple`` of three ints -- in which case, the first `int` is used for the depth dimension, the second `int` for the width dimension and the third `int` for the width dimension", "commid": "pytorch_pr_1602"}], "negative_passages": []}
{"query_id": "q-en-pytorch-6f446ea3ab104270b845da39ed3c7f9d5becaa8201448f27f7952bbe79863d80", "query": "I think I encountered a memory leak situation potentially due to . I create a to reproduce my observations. Basically, for three different (meaningless) models, once the function is used, the memory usage will keep increasing. To run the code, where the flag controls whether is used. Please run it long enough. I'm on Ubuntu 14.04 with python3.6. I tried two versions of pytorch, 0.1.122 and 0.1.12+. Side Note: Even without using , the memory usauge can also increase for a while, but it will finally stablize at a level. What's more, when (i.e. ) is used, it can take very long before the memory usage stablizes. I'm not sure whether this is a problem or not.\nI think I'm sure there is a memory leak due to , and I found a weird way to do a temporary fix under pytorch version \"0.1.122\". The simplest example is as follows: Basically, if we replace with , and pass in an unpacked list of numbers, (i.e., instead of ), the memory leak is gone. Using will also have the leak issue. Unfortunately, I don't know why this is working. Finally, since there are so many in pytorch code, to do a quick hacky fix, one can change the function in the file as follows:\nAfter some more research, I think we have a much bigger problem here. My current conjecture is that: whenever we use ( object) as an argument to a function, there will be a memory leak. An example is the function in , where of the code is which uses . Again, a minimal example for reproduction is as follows: As in the case, once I change the code to , memory leak is gone. So, in short, the problem is on the usage of .\nThe same issue. +1\nThanks for tracking this down and for the repro script!\nThanks for the quick fix! BTW, will the fix be included in the release version (0.1.12_?)?\nit'll be released in\nI see. Thanks again.", "positive_passages": [{"docid": "doc-en-pytorch-28b67a4b9138c2387f5881d8944477dc35c28a9bb9bd1c5f9753dae90fc8f2d2", "text": "template<typename FnType, FnType fn, typename ...Args> static PyObject* wrap_tuple_fn(Args ... args) { <del> PyObject *result = (*fn)(std::forward<Args>(args)...); </del> <ins> THPObjectPtr result((*fn)(std::forward<Args>(args)...)); </ins> if (!result) return NULL; <del> if (PyTuple_Check(result)) { return PyObject_CallFunctionObjArgs((PyObject*)&THPSizeType, result, NULL); </del> <ins> if (PyTuple_Check(result.get())) { return PyObject_CallFunctionObjArgs((PyObject*)&THPSizeType, result.get(), NULL); </ins> } <del> Py_INCREF(result); return result; </del> <ins> return result.release(); </ins> } static auto sq_concat = PyTuple_Type.tp_as_sequence->sq_concat;", "commid": "pytorch_pr_2042"}], "negative_passages": []}
{"query_id": "q-en-pytorch-62a054405c741f8d90744de9968e396141abea2b081b14e069b1fa0843edfaca", "query": "Torchvision is considered a basic requirement of the tutorials. Perhaps it makes sense to include it in the docker build.\nIt's easy to install it by command ;)\nIt's easy to install torch itself using a similar command? :). I was thinking a docker image should have torchvision included.\nRuntime docker file already has torchvision.", "positive_passages": [{"docid": "doc-en-pytorch-6cf5108df98855d196ce5921666304c199405c8558e9226ccfb264a5fa21b4d5", "text": "<del> FROM nvidia/cuda:8.0-devel-ubuntu16.04 </del> <ins> FROM nvidia/cuda:8.0-cudnn6-devel-ubuntu16.04 </ins> RUN echo \"deb http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64 /\" > /etc/apt/sources.list.d/nvidia-ml.list <del> ENV CUDNN_VERSION 6.0.20 </del> RUN apt-get update && apt-get install -y --no-install-recommends build-essential cmake git curl <ins> vim </ins> ca-certificates libjpeg-dev <del> libpng-dev libcudnn6=$CUDNN_VERSION-1+cuda8.0 libcudnn6-dev=$CUDNN_VERSION-1+cuda8.0 && </del> <ins> libpng-dev && </ins> rm -rf /var/lib/apt/lists/* RUN curl -o ~/miniconda.sh -O https://repo.continuum.io/miniconda/Miniconda3-4.2.12-Linux-x86_64.sh && ", "commid": "pytorch_pr_2090"}], "negative_passages": []}
{"query_id": "q-en-pytorch-62a054405c741f8d90744de9968e396141abea2b081b14e069b1fa0843edfaca", "query": "Torchvision is considered a basic requirement of the tutorials. Perhaps it makes sense to include it in the docker build.\nIt's easy to install it by command ;)\nIt's easy to install torch itself using a similar command? :). I was thinking a docker image should have torchvision included.\nRuntime docker file already has torchvision.", "positive_passages": [{"docid": "doc-en-pytorch-42b42fcaec1107f5b1e1e9203eb6d65a0f52c9b39f6376f110e60f106ca5d808", "text": "CMAKE_PREFIX_PATH=\"$(dirname $(which conda))/../\" pip install -v . <ins> RUN git clone https://github.com/pytorch/vision.git && cd vision && pip install -v . </ins> WORKDIR /workspace RUN chmod -R a+w /workspace", "commid": "pytorch_pr_2090"}], "negative_passages": []}
{"query_id": "q-en-pytorch-55c1ecd176853c68795a6061ebed005399930e9b6e97e5410ed141e6c87fdbe1", "query": "Be Careful What You Backpropagate: A Case For Linear Output Activations & Gradient Boosting I can work on this if this can be to pytorch. Please let me know. Thanks!\nThis should be fairly straightforward to add in user code using . Eventually it could be to core, but its utility should probably be validated in external repos first.", "positive_passages": [{"docid": "doc-en-pytorch-06405422fd745b4c40f12f712d4a5d5c8b0538cc2e3c793902e24fdf9b7b87b3", "text": "# (3) initialize mean square values and square gradient storage if not 'm' in state: <del> state['m'] = x.new().resize_as_(dfdx).fill_(1) </del> <ins> state['m'] = x.new().resize_as_(dfdx).zero_() </ins> state['tmp'] = x.new().resize_as_(dfdx)", "commid": "pytorch_pr_485"}], "negative_passages": []}
{"query_id": "q-en-pytorch-55c1ecd176853c68795a6061ebed005399930e9b6e97e5410ed141e6c87fdbe1", "query": "Be Careful What You Backpropagate: A Case For Linear Output Activations & Gradient Boosting I can work on this if this can be to pytorch. Please let me know. Thanks!\nThis should be fairly straightforward to add in user code using . Eventually it could be to core, but its utility should probably be validated in external repos first.", "positive_passages": [{"docid": "doc-en-pytorch-d135710ef973222ecdc593d43b4acb154189b4178c2fd4ce3644b483113544c1", "text": "# State initialization if len(state) == 0: state['step'] = 0 <del> state['square_avg'] = grad.new().resize_as_(grad).fill_(1) </del> <ins> state['square_avg'] = grad.new().resize_as_(grad).zero_() </ins> square_avg = state['square_avg'] alpha = group['alpha']", "commid": "pytorch_pr_485"}], "negative_passages": []}
{"query_id": "q-en-pytorch-d6cfc0fa9a654f0127f12ea13aa8b1d5a3b3b01b57eb0165468cedfcc99f5677", "query": "If a norm is zero, its gradient returns nan: Obviously just happening because the gradient divides by the norm, but the (sub)gradient here should probably be zero, or at least not nan, since that will propagate to make all updates nan. Probably low priority, as it's not going to be an issue in 99% of cases, but we're doing a few things with (exact) line searches where this caused a nan to appear, breaking everything downstream.\nI'm encountering exactly the same issue! Spent hours on debugging, just to find PyTorch has a bug on such basic thing.\n+1 just found this bug too\n+1 for this bug. Temporarily changing my code to something like the following for the sake of debugging. x = Variable((1), requires_grad=True) y = x + 1e-16 y.norm().backward() print x.grad\nThe thing is that in the 2 norm, there is a square root, which has a gradient of at 0. The gradient gives you because you then multiply 0 and an infinity during the backward pass.\nFor a scalar, norm 2 is basically abs. But x.abs().backward() gives you 0 gradient. In this sense, it's not coherent.\nI found this error, too\nAlban fixed this behavior in\nHi, the norm function can give use the 0 gradients now. However, the following code still has the nan gradient problem\nHo The square root has no gradient at 0. This is expected behavior.\nHi, but the sub-gradient of the square root should be zero? Also, y = ( x * x ) should equal to x.norm(), why they have different gradient ( 0 and nan )?\nI think was right. The left-side derivative of sqrt(x) at x=0 is undefined, so it doesn't even have a subgradient at x=0.\nsquare root has no subgradient at 0. You could define a gradient by continuity but then it would be ... Given that pytorch is using autograd, and (equivalent to your ) are completely different: The first one is a single function that is convex and defined on R, it has a subgradient of 0 at 0. The second one is composed of two function, the first function is the square function which is differentiable and outputs values in . The second function is the square root that is not convex and even though it is defined on , it is only differentiable on and it's gradient in 0 in undefined. Given that, even though and will return the same value, their gradients may differ at points where it is not differentiable, this is because automatic differentiation looks at each step of the computation one by one and even though in some cases a subgradient exist (because we look at multiple operations as a single function), it is not always the case and the gradient remains undefined.\nI think math is math. Any root's gradient at zero is either inf or undefined. This issue shall be handled by the users' themselves by adding a small value(as did), but an error(warning) message may be helpful since it is pretty hard to debug. Say: Infinite/Undefined gradient is detected at XfunctionX at line Y. Exit.\nAgree, norm is not differentiable at 0 , the bandaid that Alban put there in is wrong (even in the limit sense the gradient at 0 should be 1 not 0), but it should not have been there at all. Norm is norm, if someone want to add epsilons to their norms (like batchrnorm, e.g.) they are welcome to do so in the user code. What would numpy do?\nI've also run into a number of problems related to the change introduced in . Is there a reason the subgradient is set to 0, rather than the 1? (The limit as norm-0?) 
As a minimal example: Produces\nWell any value between [-1, 1] is a valid subgradient for the 2-norm. More genereally, any vector in the 1 ball for the dual norm is a valid subgradient. This means that 0 is always going to be a subgradient, while 1 will not be for all p. Anyway, the theory says that any of them could be taken and subgradient descent will work. I'm sure that depending on the application, one will be better than the other. For example, the relu function will also give a 0 subgradient at 0, you could have given 1. The main point here was to remove nans that make your network give nan for everything which is not convenient.", "positive_passages": [{"docid": "doc-en-pytorch-5d92750acac5053bde0067d760f33443260ed7937c34b3643b554d44f571ae4b", "text": "# This will segfault if things have been erroneously released out.backward(torch.randn(out.size())) <ins> def test_norm_subgradient(self): def run_test(input_size, norm_deg): input = Variable(torch.zeros(*input_size), requires_grad=True) out = input.norm(norm_deg) out.backward() self.assertEqual(input.grad.data.abs().sum(), 0) run_test((10,), 2) run_test((10, 10), 2) run_test((10,), 3) </ins> def index_variable(shape, max_indices): if not isinstance(shape, tuple):", "commid": "pytorch_pr_2775"}], "negative_passages": []}
{"query_id": "q-en-pytorch-d6cfc0fa9a654f0127f12ea13aa8b1d5a3b3b01b57eb0165468cedfcc99f5677", "query": "If a norm is zero, its gradient returns nan: Obviously just happening because the gradient divides by the norm, but the (sub)gradient here should probably be zero, or at least not nan, since that will propagate to make all updates nan. Probably low priority, as it's not going to be an issue in 99% of cases, but we're doing a few things with (exact) line searches where this caused a nan to appear, breaking everything downstream.\nI'm encountering exactly the same issue! Spent hours on debugging, just to find PyTorch has a bug on such basic thing.\n+1 just found this bug too\n+1 for this bug. Temporarily changing my code to something like the following for the sake of debugging. x = Variable((1), requires_grad=True) y = x + 1e-16 y.norm().backward() print x.grad\nThe thing is that in the 2 norm, there is a square root, which has a gradient of at 0. The gradient gives you because you then multiply 0 and an infinity during the backward pass.\nFor a scalar, norm 2 is basically abs. But x.abs().backward() gives you 0 gradient. In this sense, it's not coherent.\nI found this error, too\nAlban fixed this behavior in\nHi, the norm function can give use the 0 gradients now. However, the following code still has the nan gradient problem\nHo The square root has no gradient at 0. This is expected behavior.\nHi, but the sub-gradient of the square root should be zero? Also, y = ( x * x ) should equal to x.norm(), why they have different gradient ( 0 and nan )?\nI think was right. The left-side derivative of sqrt(x) at x=0 is undefined, so it doesn't even have a subgradient at x=0.\nsquare root has no subgradient at 0. You could define a gradient by continuity but then it would be ... Given that pytorch is using autograd, and (equivalent to your ) are completely different: The first one is a single function that is convex and defined on R, it has a subgradient of 0 at 0. The second one is composed of two function, the first function is the square function which is differentiable and outputs values in . The second function is the square root that is not convex and even though it is defined on , it is only differentiable on and it's gradient in 0 in undefined. Given that, even though and will return the same value, their gradients may differ at points where it is not differentiable, this is because automatic differentiation looks at each step of the computation one by one and even though in some cases a subgradient exist (because we look at multiple operations as a single function), it is not always the case and the gradient remains undefined.\nI think math is math. Any root's gradient at zero is either inf or undefined. This issue shall be handled by the users' themselves by adding a small value(as did), but an error(warning) message may be helpful since it is pretty hard to debug. Say: Infinite/Undefined gradient is detected at XfunctionX at line Y. Exit.\nAgree, norm is not differentiable at 0 , the bandaid that Alban put there in is wrong (even in the limit sense the gradient at 0 should be 1 not 0), but it should not have been there at all. Norm is norm, if someone want to add epsilons to their norms (like batchrnorm, e.g.) they are welcome to do so in the user code. What would numpy do?\nI've also run into a number of problems related to the change introduced in . Is there a reason the subgradient is set to 0, rather than the 1? (The limit as norm-0?) 
As a minimal example: Produces\nWell any value between [-1, 1] is a valid subgradient for the 2-norm. More genereally, any vector in the 1 ball for the dual norm is a valid subgradient. This means that 0 is always going to be a subgradient, while 1 will not be for all p. Anyway, the theory says that any of them could be taken and subgradient descent will work. I'm sure that depending on the application, one will be better than the other. For example, the relu function will also give a 0 subgradient at 0, you could have given 1. The main point here was to remove nans that make your network give nan for everything which is not convenient.", "positive_passages": [{"docid": "doc-en-pytorch-bb88e9a82610e455b2dfa75780521f23fcc10eb355dc449531dd48f740dfbaac", "text": "ctx.keepdim = False if keepdim is None else keepdim if dim is None: <del> ctx.norm = input.norm(p) ctx.save_for_backward(input) return input.new((ctx.norm,)) </del> <ins> norm = input.norm(p) output = input.new((norm,)) </ins> else: if keepdim is not None: output = input.norm(p, dim, keepdim=keepdim) else: output = input.norm(p, dim) <del> ctx.save_for_backward(input, output) return output </del> <ins> ctx.save_for_backward(input, output) return output </ins> @staticmethod def backward(ctx, grad_output): <del> if ctx.dim is None: input, = ctx.saved_variables if ctx.p == 2: scale_v = (grad_output / ctx.norm).expand_as(input) return input.mul(scale_v), None, None, None else: pow = input.abs().pow(ctx.p - 2) scale_v = (grad_output / ctx.norm ** (ctx.p - 1)).expand_as(input) return input.mul(pow).mul(scale_v), None, None, None </del> <ins> input, output = ctx.saved_variables if ctx.dim is not None and ctx.keepdim is False and input.dim() != 1: grad_output = grad_output.unsqueeze(ctx.dim) output = output.unsqueeze(ctx.dim) if ctx.p == 2: grad_input = input.mul(grad_output).div(output) </ins> else: <del> input, output = ctx.saved_variables </del> <ins> input_pow = input.abs().pow(ctx.p - 2) output_pow = output.pow(ctx.p - 1) grad_input = input.mul(input_pow).mul(grad_output).div(output_pow) </ins> <del> if ctx.keepdim is False and input.dim() != 1: grad_output = grad_output.unsqueeze(ctx.dim) output = output.unsqueeze(ctx.dim) </del> <ins> # Special case at 0 where we return a subgradient containing 0 grad_input.masked_fill_(output == 0, 0) </ins> <del> big_grad_output = grad_output.expand_as(input) if ctx.p == 2: big_output = output.expand_as(input) return input.mul(big_grad_output).div(big_output), None, None, None else: pow = input.abs().pow(ctx.p - 2) big_output = output.pow(ctx.p - 1).expand_as(input) return input.mul(pow).mul(big_grad_output).div(big_output), None, None, None </del> <ins> return grad_input, None, None, None </ins> # TODO: renorm", "commid": "pytorch_pr_2775"}], "negative_passages": []}
{"query_id": "q-en-pytorch-a0d273822e65898b1ccb1b24fcdee057356a1b5cb47b4fa6e0713f371ffc06c1", "query": "Intermittent test failures occurring on AdaptiveMaxPool3d. and I have seen this in the past where Max pooling has exactly the same values in the same window, then it's differently resolved on CPU/GPU. For example [0 2.5 2.5 3] can give a max index of or on CUDA depending on the runtime. So generating input values that are within epsilon of each other is important (especially at half precision). fix that one.\nWill do. Thanks for identifying the issue.", "positive_passages": [{"docid": "doc-en-pytorch-2c28eb7e15016b4b59150140e984b4b28d906b8cabf3829f0b24c75313ac0a6b", "text": "self.assertEqual(torch.mm(flattened_tensor, flattened_tensor.t()), torch.eye(rows) * gain ** 2, prec=1e-6) <ins> # Generates rand tensor with non-equal values. This ensures that duplicate # values won't be causing test failure for modules like MaxPooling. # size should be small, otherwise randperm fails / long overflows. def _rand_tensor_non_equal(*size): total = reduce(mul, size, 1) return torch.randperm(total).view(*size).double() </ins> def add_test(test): test_name = test.get_name()", "commid": "pytorch_pr_2951"}], "negative_passages": []}
{"query_id": "q-en-pytorch-a0d273822e65898b1ccb1b24fcdee057356a1b5cb47b4fa6e0713f371ffc06c1", "query": "Intermittent test failures occurring on AdaptiveMaxPool3d. and I have seen this in the past where Max pooling has exactly the same values in the same window, then it's differently resolved on CPU/GPU. For example [0 2.5 2.5 3] can give a max index of or on CUDA depending on the runtime. So generating input values that are within epsilon of each other is important (especially at half precision). fix that one.\nWill do. Thanks for identifying the issue.", "positive_passages": [{"docid": "doc-en-pytorch-41bb727092e60ecb2d77543a553d9973d50e33f29f86b2efd751b45004ffab9a", "text": "dict( module_name='AdaptiveMaxPool1d', constructor_args=(3,), <del> input=torch.rand(1, 3, 5), </del> <ins> input=_rand_tensor_non_equal(1, 3, 5), </ins> ), dict( module_name='AdaptiveMaxPool2d', constructor_args=(3,), <del> input=torch.rand(1, 3, 5, 6), </del> <ins> input=_rand_tensor_non_equal(1, 3, 5, 6), </ins> desc='single', ), dict( module_name='AdaptiveMaxPool2d', constructor_args=((3, 4),), <del> input=torch.rand(1, 3, 5, 6), </del> <ins> input=_rand_tensor_non_equal(1, 3, 5, 6), </ins> desc='tuple', ), dict( module_name='AdaptiveMaxPool3d', constructor_args=(3,), <del> input=torch.rand(2, 3, 5, 6, 7), </del> <ins> input=_rand_tensor_non_equal(2, 3, 5, 6, 7), </ins> desc='single', ), dict( module_name='AdaptiveMaxPool3d', constructor_args=((3, 4, 5),), <del> input=torch.rand(2, 3, 5, 6, 7), </del> <ins> input=_rand_tensor_non_equal(2, 3, 5, 6, 7), </ins> desc='tuple', ), dict( module_name='AdaptiveMaxPool3d', constructor_args=(3,), <del> input=torch.rand(2, 3, 12, 9, 3), </del> <ins> input=_rand_tensor_non_equal(2, 3, 12, 9, 3), </ins> desc='single_nonatomic', ), dict( module_name='AdaptiveMaxPool3d', constructor_args=((3, 4, 5),), <del> input=torch.rand(2, 3, 6, 4, 10), </del> <ins> input=_rand_tensor_non_equal(2, 3, 6, 4, 10), </ins> desc='tuple_nonatomic', ), dict(", "commid": "pytorch_pr_2951"}], "negative_passages": []}
{"query_id": "q-en-pytorch-7c531194151e50063e02afc5026a310c247879bb394741978af1c5a6f9f48de2", "query": "When using classic SGD optimizer with momentum with sparse embeddings the memory keeps garbage collecting / allocating leading to slow down and out of memory error eventually. ! ! The issue dissapears when momentum is not used ! or when embeddings are not sparse ! I'm using the last pytorch version on conda:\nI tried out your script with momentum 0.1 on master, it takes roughly 10800mb gpu memory max. This is caused by using sparse buffer. I'm sending out a PR for this.", "positive_passages": [{"docid": "doc-en-pytorch-28eb48af51e6a342b66680c0195c7b6db80fb09f911f090631a17f1d96f7c63f", "text": "if momentum != 0: param_state = self.state[p] if 'momentum_buffer' not in param_state: <del> buf = param_state['momentum_buffer'] = d_p.clone() </del> <ins> buf = param_state['momentum_buffer'] = p.data.new().resize_as_(p.data).zero_() buf.mul_(momentum).add_(d_p) </ins> else: buf = param_state['momentum_buffer'] buf.mul_(momentum).add_(1 - dampening, d_p)", "commid": "pytorch_pr_3139"}], "negative_passages": []}
{"query_id": "q-en-pytorch-5d50412dc01f12c66dce9d07c26f1d47508c97d68d53e8bae7b4990e70585dbe", "query": "Looking at the values returned from tn.multinomial(1) and counting them, it seems like it returns the values of the multinomial distribution taken from t, i.e. that it has ignored the signs of the values. However, it seem like always returns a tensor filled with zeros.\nlooking into this", "positive_passages": [{"docid": "doc-en-pytorch-b45df8f12b7e3a66648d614e89960dbcc5572080953cfb9f3d1f0df99052f025", "text": "THArgCheckWithCleanup(n_sample > 0, THCleanup(if (start_dim == 1) THTensor_(resize1d)(prob_dist, n_categories);), 2, <del> \"cannot sample n_sample < 0 samples\"); </del> <ins> \"cannot sample n_sample <= 0 samples\"); </ins> if (!with_replacement) {", "commid": "pytorch_pr_4009"}], "negative_passages": []}
{"query_id": "q-en-pytorch-5d50412dc01f12c66dce9d07c26f1d47508c97d68d53e8bae7b4990e70585dbe", "query": "Looking at the values returned from tn.multinomial(1) and counting them, it seems like it returns the values of the multinomial distribution taken from t, i.e. that it has ignored the signs of the values. However, it seem like always returns a tensor filled with zeros.\nlooking into this", "positive_passages": [{"docid": "doc-en-pytorch-41b3d62528c6fc247ca44188b09c63bd1502ada4282a0c3d265da4e409ab0026", "text": "{ /* Get normalized cumulative distribution from prob distribution */ double sum = 0; <ins> double val; </ins> for (j=0; j<n_categories; j++) { <del> sum += THStorage_(get)( </del> <ins> val = THStorage_(get)( </ins> prob_dist->storage, prob_dist->storageOffset+i*prob_dist->stride[0]+j*prob_dist->stride[1] ); <ins> THArgCheckWithCleanup((val >= 0), THCleanup(THDoubleTensor_free(cum_dist); if (start_dim == 1) THTensor_(resize1d)(prob_dist, n_categories);), 2, \"invalid multinomial distribution (encountering probability entry < 0)\"); sum += val; </ins> THDoubleStorage_set( cum_dist->storage, cum_dist->storageOffset+j*cum_dist->stride[0], ", "commid": "pytorch_pr_4009"}], "negative_passages": []}
{"query_id": "q-en-pytorch-5d50412dc01f12c66dce9d07c26f1d47508c97d68d53e8bae7b4990e70585dbe", "query": "Looking at the values returned from tn.multinomial(1) and counting them, it seems like it returns the values of the multinomial distribution taken from t, i.e. that it has ignored the signs of the values. However, it seem like always returns a tensor filled with zeros.\nlooking into this", "positive_passages": [{"docid": "doc-en-pytorch-d4d9807cf1a0f886d9cc6f63a72e2547d29c1b94170dabaab7b057692976be91", "text": "T bern_uniform = bernoulli[idx]; int _mask = (int) THCNumerics<T>::lt(bern_uniform, q[rand_ind]); output[idx] = J[rand_ind]*(1 -_mask) + (rand_ind+1L) * _mask; <del> } </del> <ins> } </ins> } template <typename T>", "commid": "pytorch_pr_4009"}], "negative_passages": []}
{"query_id": "q-en-pytorch-5d50412dc01f12c66dce9d07c26f1d47508c97d68d53e8bae7b4990e70585dbe", "query": "Looking at the values returned from tn.multinomial(1) and counting them, it seems like it returns the values of the multinomial distribution taken from t, i.e. that it has ignored the signs of the values. However, it seem like always returns a tensor filled with zeros.\nlooking into this", "positive_passages": [{"docid": "doc-en-pytorch-5a75ac4cec00f52f5abb88353120ed574a4ba743dba026200b80f979d2f991dd", "text": "__global__ void renormRowsL1(T* dist, long rows, long cols) { extern __shared__ unsigned char my_smem[]; T *smem = reinterpret_cast<T *>(my_smem); <ins> T zero = ScalarConvert<int, T>::to(0); T val; </ins> for (int64_t row = blockIdx.x; row < rows; row += gridDim.x) { T sum = ScalarConvert<int, T>::to(0); for (int64_t col = threadIdx.x; col < cols; col += blockDim.x) { <del> sum = THCNumerics<T>::add(sum, dist[row * cols + col]); </del> <ins> val = dist[row * cols + col]; assert(THCNumerics<T>::ge(val, zero)); sum = THCNumerics<T>::add(sum, val); </ins> } <del> sum = reduceBlock(smem, blockDim.x, sum, ReduceAdd<T, T>(), ScalarConvert<int, T>::to(0)); </del> <ins> sum = reduceBlock(smem, blockDim.x, sum, ReduceAdd<T, T>(), zero); </ins> if (threadIdx.x == 0) { <ins> assert(THCNumerics<T>::gt(sum, zero)); </ins> smem[0] = sum; } __syncthreads();", "commid": "pytorch_pr_4009"}], "negative_passages": []}
{"query_id": "q-en-pytorch-5d50412dc01f12c66dce9d07c26f1d47508c97d68d53e8bae7b4990e70585dbe", "query": "Looking at the values returned from tn.multinomial(1) and counting them, it seems like it returns the values of the multinomial distribution taken from t, i.e. that it has ignored the signs of the values. However, it seem like always returns a tensor filled with zeros.\nlooking into this", "positive_passages": [{"docid": "doc-en-pytorch-6ba24c171ed755dd1fed4f2364f954ead8b1b2fd15e6604fb1a3fc05e82b3856", "text": "// Each block handles one distribution // First pass, find the total sum of the distribution AccT sum = accZero; <ins> T val; </ins> for (int cat = threadIdx.x; cat < categories; cat += blockDim.x) { <del> sum = THCNumerics<AccT>::add( sum, ScalarConvert<T, AccT>::to(dist[curDist * categories + cat])); </del> <ins> val = dist[curDist * categories + cat]; assert(THCNumerics<T>::ge(val, zero)); sum = THCNumerics<AccT>::add(sum, ScalarConvert<T, AccT>::to(val)); </ins> } // threadIdx.x == 0 has the sum value from this", "commid": "pytorch_pr_4009"}], "negative_passages": []}
{"query_id": "q-en-pytorch-5d50412dc01f12c66dce9d07c26f1d47508c97d68d53e8bae7b4990e70585dbe", "query": "Looking at the values returned from tn.multinomial(1) and counting them, it seems like it returns the values of the multinomial distribution taken from t, i.e. that it has ignored the signs of the values. However, it seem like always returns a tensor filled with zeros.\nlooking into this", "positive_passages": [{"docid": "doc-en-pytorch-0bddcecf8b0b6f915b14c981d43afbef7e8f1d7d873f11bcccfacdca8053e763", "text": "if (threadIdx.x == 0) { // Make sure the sum of our distribution didn't overflow assert(!isinf(sum)); <ins> assert(THCNumerics<AccT>::gt(sum, accZero)); </ins> asmem[0] = sum; smem[0] = sampled[curDist];", "commid": "pytorch_pr_4009"}], "negative_passages": []}
{"query_id": "q-en-pytorch-e622986277177825174756cd0d58bb252b326c5ede6f55945541c76bbab2d669", "query": "Hi There, I'm trying to install Pytorch as a module on my university's computing cluster. The node uses CentOS (6.4) x8664. I install running these commands: module load gcclib/5.2.0 cmake/3.8.2 module load gcc/5.2.0 module load anaconda3/4.0.0 export CMAKEPREFIXPATH=\"(which conda))/../\" export NOCUDA=1 git clone --recursive cd pytorch/ python install --prefix=${HOME} Things look fine as it installs but then I run into the following errors, ending in the process terminating with error relating to GCC? I've tried with version GCC 6.2.0 and the same result occurs. Not sure what to even try to fix this! Thanks for any help you can provide!\nSame here, with gcc 5.4.0\nI don't think building from source works very well with gcc 5.4. Could you install gcc 4.9 and try compiling?\nI get this error instead when using gcc 4.9.0, doesn't make it very far at all.\nHi I think this is broken after / due to\nI have the same problem ( etc.) with gcc 4.8.5 on Linux with the current head ( ) What works for me is to add at the beginning of the following four files: (see also -- this can be turned into a pull request very easily) This fix is the same as but for different files.\nThis worked for me Thank you.\nfixed in latest master, thanks to", "positive_passages": [{"docid": "doc-en-pytorch-6df228497008e8ec4cd2ec8393bca92b7924400e4a996e7624da61de75ba792f", "text": "<ins> #define __STDC_FORMAT_MACROS </ins> #include <Python.h> #ifdef _MSC_VER #include <Windows.h>", "commid": "pytorch_pr_3629"}], "negative_passages": []}
{"query_id": "q-en-pytorch-e622986277177825174756cd0d58bb252b326c5ede6f55945541c76bbab2d669", "query": "Hi There, I'm trying to install Pytorch as a module on my university's computing cluster. The node uses CentOS (6.4) x8664. I install running these commands: module load gcclib/5.2.0 cmake/3.8.2 module load gcc/5.2.0 module load anaconda3/4.0.0 export CMAKEPREFIXPATH=\"(which conda))/../\" export NOCUDA=1 git clone --recursive cd pytorch/ python install --prefix=${HOME} Things look fine as it installs but then I run into the following errors, ending in the process terminating with error relating to GCC? I've tried with version GCC 6.2.0 and the same result occurs. Not sure what to even try to fix this! Thanks for any help you can provide!\nSame here, with gcc 5.4.0\nI don't think building from source works very well with gcc 5.4. Could you install gcc 4.9 and try compiling?\nI get this error instead when using gcc 4.9.0, doesn't make it very far at all.\nHi I think this is broken after / due to\nI have the same problem ( etc.) with gcc 4.8.5 on Linux with the current head ( ) What works for me is to add at the beginning of the following four files: (see also -- this can be turned into a pull request very easily) This fix is the same as but for different files.\nThis worked for me Thank you.\nfixed in latest master, thanks to", "positive_passages": [{"docid": "doc-en-pytorch-88ca958657dda50101b09cbdd7703b45fbeec7c19f02de8e9ef082e239c181d4", "text": "<ins> #define __STDC_FORMAT_MACROS </ins> #include <Python.h> #include <structmember.h>", "commid": "pytorch_pr_3629"}], "negative_passages": []}
{"query_id": "q-en-pytorch-9a0730e94c0fc8a4d967d9d6465c1b0ef1b139a90afe556ebf7040ecb39471f7", "query": "Hi, Looking at the formulas of LSTM in , in the third equation (gt), the suffix of the second W should be changed from W{hc} to W{hg}. The correct formula: gt=tanh(W{ig}xt+b{ig}+W{hg}h{(t\u22121)}+b_{hg}) Cheers, Navid", "positive_passages": [{"docid": "doc-en-pytorch-01ae53c02fda7e865dd84e9c0536a7a714381f7a5e1d5b4afcf69b9acf441040", "text": "begin{array}{ll} i_t = sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{(t-1)} + b_{hi}) f_t = sigma(W_{if} x_t + b_{if} + W_{hf} h_{(t-1)} + b_{hf}) <del> g_t = tanh(W_{ig} x_t + b_{ig} + W_{hc} h_{(t-1)} + b_{hg}) </del> <ins> g_t = tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{(t-1)} + b_{hg}) </ins> o_t = sigma(W_{io} x_t + b_{io} + W_{ho} h_{(t-1)} + b_{ho}) c_t = f_t c_{(t-1)} + i_t g_t h_t = o_t tanh(c_t)", "commid": "pytorch_pr_5662"}], "negative_passages": []}
{"query_id": "q-en-pytorch-31dfedb0de26de12744a460838d9a2e3cf8d4841e9927b5f6b6d6fff0ae5aeb6", "query": "The three-clause BSD license in file LICENSE says in clause 1 \"Redistributions of source code must retain the above copyright notice...\" However, there is no longer a copyright notice in the file; it appears to have been removed in commit (see ).\ncc", "positive_passages": [{"docid": "doc-en-pytorch-509df4cbc50856c183362b8314756709c583bc16a35e915c2862b0c0d81e0c39", "text": "<ins> From PyTorch: Copyright (c) 2016- Facebook, Inc (Adam Paszke) Copyright (c) 2014- Facebook, Inc (Soumith Chintala) Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert) Copyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu) Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu) Copyright (c) 2011-2013 NYU (Clement Farabet) Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston) Copyright (c) 2006 Idiap Research Institute (Samy Bengio) Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz) From Caffe2: Copyright (c) 2016-present, Facebook Inc. All rights reserved. All contributions by Facebook: Copyright (c) 2016 Facebook Inc. All contributions by Google: Copyright (c) 2015 Google Inc. All rights reserved. All contributions by Yangqing Jia: Copyright (c) 2015 Yangqing Jia All rights reserved. All contributions from Caffe: Copyright(c) 2013, 2014, 2015, the respective contributors All rights reserved. All other contributions: Copyright(c) 2015, 2016 the respective contributors All rights reserved. Caffe2 uses a copyright model similar to Caffe: each contributor holds copyright over their contributions to Caffe2. The project versioning records all such contribution and copyright details. If a contributor wants to further mark their specific copyright on a particular contribution, they should indicate their copyright solely in the commit message of the change when it is committed. </ins> All rights reserved. Redistribution and use in source and binary forms, with or without", "commid": "pytorch_pr_8310"}], "negative_passages": []}
{"query_id": "q-en-pytorch-31dfedb0de26de12744a460838d9a2e3cf8d4841e9927b5f6b6d6fff0ae5aeb6", "query": "The three-clause BSD license in file LICENSE says in clause 1 \"Redistributions of source code must retain the above copyright notice...\" However, there is no longer a copyright notice in the file; it appears to have been removed in commit (see ).\ncc", "positive_passages": [{"docid": "doc-en-pytorch-753c1b39b878f15783a80cc227fb295abc1bf3334786b6346278d7ffb7c5b9ae", "text": "<del> From PyTorch: Copyright (c) 2016- Facebook, Inc (Adam Paszke) Copyright (c) 2014- Facebook, Inc (Soumith Chintala) Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert) Copyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu) Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu) Copyright (c) 2011-2013 NYU (Clement Farabet) Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston) Copyright (c) 2006 Idiap Research Institute (Samy Bengio) Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz) From Caffe2: Copyright (c) 2016-present, Facebook Inc. All rights reserved. All contributions by Facebook: Copyright (c) 2016 Facebook Inc. All contributions by Google: Copyright (c) 2015 Google Inc. All rights reserved. All contributions by Yangqing Jia: Copyright (c) 2015 Yangqing Jia All rights reserved. All contributions from Caffe: Copyright(c) 2013, 2014, 2015, the respective contributors All rights reserved. All other contributions: Copyright(c) 2015, 2016 the respective contributors All rights reserved. Caffe2 uses a copyright model similar to Caffe: each contributor holds copyright over their contributions to Caffe2. The project versioning records all such contribution and copyright details. If a contributor wants to further mark their specific copyright on a particular contribution, they should indicate their copyright solely in the commit message of the change when it is committed. </del> ======================================================================= Software under third_party =======================================================================", "commid": "pytorch_pr_8310"}], "negative_passages": []}
{"query_id": "q-en-pytorch-9355a1286eaeb76b71a81a72570ae9c72fdf4257046d7fb7031b0abacf3db8bf", "query": "PyTorch version: 0.4.0 Is debug build: No CUDA used to build PyTorch: 9.0.176 OS: Ubuntu 16.04.3 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.5) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.0.176 GPU models and configuration: GPU 0: TITAN X (Pascal) GPU 1: TITAN X (Pascal) Nvidia driver version: 384.98 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux-gnu/libcudnnstaticv7.a /usr/local/lib/python2.7/dist- /usr/local/lib/python3.5/dist- Versions of relevant libraries: [pip3] numpy (1.14.1) [pip3] numpydoc (0.6.0) [pip3] torch (0.4.0) [pip3] torchvision (0.2.0) [conda] cuda90 1.0 h6433d270 pytorch [conda] magma-cuda90 2.3.0 1 soumith [conda] pytorch 0.4.0 py36cuda9.0.176cudnn7.1.21 [cuda90] pytorch [conda] torchvision 0.2.1 py361 pytorch\n() returns 7102\nPossibly related:\nI'm not sure why, but I can't reproduce this on master. I can reproduce on 0.4 though.\nNever mind, this error happens when CUDNN is updated regardless of pytorch version.", "positive_passages": [{"docid": "doc-en-pytorch-58c32ee618cd2026945484922d87b50ea8b6848cbb19868ad539b2435cf6f934", "text": "def grid_sampler(input, grid, padding_mode): <del> if cudnn.is_acceptable(input.data) and padding_mode == 'zeros' and input.dim() == 4: </del> <ins> if (cudnn.is_acceptable(input.data) and padding_mode == 'zeros' and input.dim() == 4 and input.size(1) <= 1024): # as of cudnn 7102, will not work for larger than 1024 </ins> return torch.cudnn_grid_sampler(input, grid) else: return GridSampler.apply(input, grid, padding_mode)", "commid": "pytorch_pr_8576"}], "negative_passages": []}
{"query_id": "q-en-pytorch-262b3debd105719f4260aa0a3a70d946007b6621a5a2b26a31d84b9a4b5cb3dd", "query": "Einsum currently modifies variables in-place (without an explicit indication that it does so), which prevents pytorch from automatically backpropagating. Results in the following runtime error: Demonstration and discussion can also be found . PyTorch or Caffe2: PyTorch How you installed PyTorch (conda, pip, source): pip OS: PyTorch version: 0.4.0 Python version: 3.6.1\nOops. Thank you for reporting and the minimal example. I'll see to get that fixed.\nSaw the same issue over here. Thank you !\nIs cloning the tensor before passing to einsum a valid workaround?\nYes, but a somewhat expensive one. If you have the ability to recompile, you could also apply the PR fixing this. Somehow it seems to be stuck in the review queue because it hit an unrelated CI problem...\nOkay so how to use it then?\nJust get a recent enough master and it will work.", "positive_passages": [{"docid": "doc-en-pytorch-1a87b46a559e99d0adda9f0cd7d53c46acddc4e5afca33491e82f08de291dae5", "text": "upstream=\"$1\" pr=\"$2\" git diff --name-only \"$upstream\" \"$pr\" <del> git diff --name-only \"$upstream\" \"$pr\" | grep -Eq '^(CMakeLists.txt|Makefile|.gitmodules|.jenkins/caffe2|binaries|caffe|caffe2|cmake|conda|docker|docs/caffe2|modules|scripts|third_party)' </del> <ins> # For safety, unconditionally trigger for any changes. #git diff --name-only \"$upstream\" \"$pr\" | grep -Eq '^(CMakeLists.txt|Makefile|.gitmodules|.jenkins/caffe2|binaries|caffe|caffe2|cmake|conda|docker|docs/caffe2|modules|scripts|third_party)' </ins>", "commid": "pytorch_pr_7914"}], "negative_passages": []}
{"query_id": "q-en-pytorch-262b3debd105719f4260aa0a3a70d946007b6621a5a2b26a31d84b9a4b5cb3dd", "query": "Einsum currently modifies variables in-place (without an explicit indication that it does so), which prevents pytorch from automatically backpropagating. Results in the following runtime error: Demonstration and discussion can also be found . PyTorch or Caffe2: PyTorch How you installed PyTorch (conda, pip, source): pip OS: PyTorch version: 0.4.0 Python version: 3.6.1\nOops. Thank you for reporting and the minimal example. I'll see to get that fixed.\nSaw the same issue over here. Thank you !\nIs cloning the tensor before passing to einsum a valid workaround?\nYes, but a somewhat expensive one. If you have the ability to recompile, you could also apply the PR fixing this. Somehow it seems to be stuck in the review queue because it hit an unrelated CI problem...\nOkay so how to use it then?\nJust get a recent enough master and it will work.", "positive_passages": [{"docid": "doc-en-pytorch-2bb6c5e2d9ed1208cf491f2bcfc8d310d0469970a1b9b02f702bac3f95fbfdd3", "text": "upstream=\"$1\" pr=\"$2\" git diff --name-only \"$upstream\" \"$pr\" <del> git diff --name-only \"$upstream\" \"$pr\" | grep -Eq '^(aten/|caffe2/|.jenkins/pytorch|docs/(make.bat|Makefile|requirements.txt|source)|mypy|requirements.txt|setup.py|test/|third_party/|tools/|.gitmodules|torch/)' </del> <ins> # Now that PyTorch build depends on Caffe2, unconditionally trigger # for any changes. # TODO: Replace this with a NEGATIVE regex that allows us to blacklist # files (letting us skip builds when they are unnecessary) #git diff --name-only \"$upstream\" \"$pr\" | grep -Eq '^(aten/|caffe2/|.jenkins/pytorch|docs/(make.bat|Makefile|requirements.txt|source)|mypy|requirements.txt|setup.py|test/|third_party/|tools/|.gitmodules|torch/)' </ins>", "commid": "pytorch_pr_7914"}], "negative_passages": []}
{"query_id": "q-en-pytorch-262b3debd105719f4260aa0a3a70d946007b6621a5a2b26a31d84b9a4b5cb3dd", "query": "Einsum currently modifies variables in-place (without an explicit indication that it does so), which prevents pytorch from automatically backpropagating. Results in the following runtime error: Demonstration and discussion can also be found . PyTorch or Caffe2: PyTorch How you installed PyTorch (conda, pip, source): pip OS: PyTorch version: 0.4.0 Python version: 3.6.1\nOops. Thank you for reporting and the minimal example. I'll see to get that fixed.\nSaw the same issue over here. Thank you !\nIs cloning the tensor before passing to einsum a valid workaround?\nYes, but a somewhat expensive one. If you have the ability to recompile, you could also apply the PR fixing this. Somehow it seems to be stuck in the review queue because it hit an unrelated CI problem...\nOkay so how to use it then?\nJust get a recent enough master and it will work.", "positive_passages": [{"docid": "doc-en-pytorch-298eee62557fb9197824322f107803eb18b39542355131523daefa15d436b22d", "text": "unset(CUDA_ARCH_PTX CACHE) endif() <del> if(DEFINED ENV{TORCH_CUDA_ARCH_LIST}) </del> <ins> if($ENV{TORCH_CUDA_ARCH_LIST}) </ins> # Pass CUDA architecture directly set(__cuda_arch_bin $ENV{TORCH_CUDA_ARCH_LIST}) message(STATUS \"Set CUDA arch from TORCH_CUDA_ARCH_LIST: ${__cuda_arch_bin}\")", "commid": "pytorch_pr_7914"}], "negative_passages": []}
{"query_id": "q-en-pytorch-e01185564370d695c652297820033c10134ab77dd1729ed0d1fb165ef0bb43a6", "query": "Hi, I try to run the following example from here, but run into some issues: ld: warning: ignoring file , file was built for unsupported file format ( 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x03 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 ) which is not the architecture being linked (x8664): ld: warning: ignoring file , file was built for unsupported file format ( 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x03 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 ) which is not the architecture being linked (x8664): ld: warning: ignoring file , file was built for unsupported file format ( 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x03 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 ) which is not the architecture being linked (x86_64): full error log: i installed pytorch: im running pytorch on: osx 10.11.6 i installed libtorch following these steps: here the cmake output:\nshould i use: wget -c but the problem is that libtorch-macos- does not contain any libraries in ./lib folder. how to create it? instead of: wget -c when i use libtorch-macos-, i get this error:\nI've got the same image not found example in this same spot, by following the minimal example of the docs. Running macOS Mojave here.\nwe need to build libtorch manually because it doesnt come with all libs...\nGetting the same issue as with macOS High Sierra 10.13.5. When I look in I see but not as it requires.\ncc: looks like a missing copying of in our OSX libtorch builds.\nSeeing the same thing.\nFixed this issue by downloading the two missing libraries from and copying them both (, ) to .\nCorrect me if I am wrong, but that solution is not answering the original issue: I was having the same issue when trying to run the minimal cpp frontend example here: I grabbed libtorch linked on the main pytorch site: You are right, had neither nor in . However, copying those files to that location did not solve the issue for me. I was able to get the minimal example by giving cmake the for Pytorch's torch library (compiled from source): I am going to try compiling the DCGAN example for the cpp frontend in this same way. I assume that this is not going to work, that we will ultimately need to point to for something like data loading, etc. I am on Mac 10.14.3\nI was facing almost same problems here and managed to fix it by downloading and from . Just in case anyone needs this, please do NOT download the newest post there! Instead find the file instead of file. ! I didn't find the difference between and at first, but succeeded once downloaded the ones.\nAny idea whats going on with these missing MKL libraries for the mac c++ libtorch? Somehow this is still an issue dating back since 2018, and even after pytorch 1.3, I just tried it out again. The workaround done by above works for me, which involves manually copying the dylib files from old (pre v1.0) intel binaries.\nI fixed all such issues for v1.3.0 in or so I thought. But I only fixed them for pip and conda packages, not libtorch. I'm looking into it right now and will fix libtorch too.\nwill kick off new binary builds as the fix went in. 
will close the issue after the builds are uploaded and live.\nthis is fixed now with fixed binaries that are re-uploaded.", "positive_passages": [{"docid": "doc-en-pytorch-56f27ed053f767d884128d55d64bcd07034c9cc0e16c8f806aece875ffd41077", "text": "BASE_DIR=$(pwd) cd torch/lib INSTALL_DIR=\"$(pwd)/tmp_install\" <del> BASIC_C_FLAGS=\" -DTH_INDEX_BASE=0 -I$INSTALL_DIR/include -I$INSTALL_DIR/include/TH -I$INSTALL_DIR/include/THC \" </del> <ins> BASIC_C_FLAGS=\" -DTH_INDEX_BASE=0 -DTH_GENERIC_USE_HALF=1 -DCUDA_HAS_FP16=1 -I$INSTALL_DIR/include -I$INSTALL_DIR/include/TH -I$INSTALL_DIR/include/THC \" </ins> LDFLAGS=\"-L$INSTALL_DIR/lib \" if [[ $(uname) == 'Darwin' ]]; then LDFLAGS=\"$LDFLAGS -Wl,-rpath,@loader_path\"", "commid": "pytorch_pr_376"}], "negative_passages": []}
{"query_id": "q-en-pytorch-82c7905498663a8787a4fb25e10717c68c8433c7d30806d3b2b65cdfb71925b4", "query": "I'm trying to implement distributed adversarial training in PyTorch. Thus, in my program pipeline I need to forward the output of one DDP model to another one. When I run the code in distributed setting, the following error is throwed: frame : c10::Error::Error(c10::SourceLocation, std::string const&) + 0x45 (0x7fa248f906d5 in /export/home/haoran/anaconda3/envs/torch1.2/lib/python3.6/site-) frame : c10d::Reducer::prepareforbackward(std::vector<torch::autograd::Variable, std::allocator<torch::autograd::Variableconst&) + 0x666 (0x7fa26f9d0896 in /export/home/haoran/anaconda3/envs/torch1.2/lib/python3.6/site-) frame : <unknown function+ 0x6df3b8 (0x7fa26f9c43b8 in /export/home/haoran/anaconda3/envs/torch1.2/lib/python3.6/site-) frame : <unknown function+ 0x1fc7f0 (0x7fa26f4e17f0 in /export/home/haoran/anaconda3/envs/torch1.2/lib/python3.6/site-) frame : main + 0x16d (0x400c1d in /export/home/haoran/anaconda3/envs/torch1.2/bin/python) frame : _libcstartmain + 0xf5 (0x7fa285e06c05 in ) frame : /export/home/haoran/anaconda3/envs/torch1.2/bin/python() [0x4009e9] Running the following code with DistributedDataParallel <!-- If you have a code sample, error messages, stack traces, please provide it here as well --The code could run successfully without throwing error. Please copy and paste the output from our (or fill out the checklist below manually). You can get the script and run it with: PyTorch version: 1.2.0.dev20190701 Is debug build: No CUDA used to build PyTorch: 9.0.176 OS: CentOS Linux 7 (Core) GCC version: (GCC) 5.4.0 CMake version: version 3.6.2 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.0.176 GPU models and configuration: GPU 0: Tesla P40 GPU 1: Tesla P40 GPU 2: Tesla P40 GPU 3: Tesla P40 Nvidia driver version: 390.25 cuDNN version: Could not collect Versions of relevant libraries: [pip] numpy==1.16.4 [pip] torch==1.2.0.dev20190701 [conda] blas 1.0 mkl defaults [conda] mkl 2019.4 243 defaults [conda] mklfft 1.0.12 py36ha843d7b0 defaults [conda] mklrandom 1.0.2 py36hd81dba30 defaults [conda] pytorch-nightly 1.2.0.dev20190701 py3.6cuda9.0.176cudnn7.5.1_0 pytorch <!-- Add any other context about the problem here. --\nPlease, try one of the suggested options: (1) passing the keyword argument findunusedparameters=True to ; (2) making sure all forward function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable). If the issue will still exist, please, reopen this issue.\nI double checked your repro to figure out if this is a real issue or not and found that even setting won't help you here. It's a simple example of course, but you're not using the variable in your loss computation. The model's forward function returns it which is an indication for DDP that it will receive a gradient. Then when it doesn't, and you try to run the next iteration, it raises an error, because the previous iteration didn't finish. I recommend you return only the tensor you plan to use in the loss computation.\nIs there any way to see which parameters on earth are unused? 
It is useless to simply give this RuntimeError or set findunusedparameters to True.\nI had the same problem with just one GPUI have modified the code according to what you said\nI met the same problem and I found this topic. It works for me. (though it's been a long time for you)", "positive_passages": [{"docid": "doc-en-pytorch-5527c49435634724c39aa78664696d2e3b09f4d9ce230b657c6313f53b779db9", "text": "- [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads) 7.5 or above - [NVIDIA CuDNN](https://developer.nvidia.com/cudnn) v5.x <ins> If you want to disable CUDA support, export environment variable `NO_CUDA=1`. </ins> #### Install optional dependencies On Linux", "commid": "pytorch_pr_655"}], "negative_passages": []}
{"query_id": "q-en-pytorch-34d5a5b9a4bdd25b4f478e80c084d0145fa0e79a354422ab35b58d4e6fab3a4d", "query": "Consider the following: A reasonable reading of this message is that the order of the parameters should be reversed, but this isn't equivalent: The problem is that the second signature has a keyword-only argument but it's printed. It also doesn't include defaults and such, but that's probably secondary. In this case the problem is at least flagged, but it's not clear to me whether there are other deprecated signatures that silently suggest the wrong thing. Please copy and paste the output from our (or fill out the checklist below manually). You can get the script and run it with: Collecting environment information... PyTorch version: 1.5.0a0+ Is debug build: Yes CUDA used to build PyTorch: Could not collect OS: CentOS Linux 7 (Core) GCC version: (GCC) 7.4.0 CMake version: version 3.14.0 Python version: 3.7 Is CUDA available: No CUDA runtime version: 9.2.88 GPU models and configuration: GPU 0: Tesla M40 GPU 1: Tesla M40 Nvidia driver version: 396.69 cuDNN version: /usr/local/cuda-9.2/targets/x8664- Versions of relevant libraries: [pip] numpy==1.17.2 [pip] torch==1.5.0a0+ [conda] blas 1.0 mkl [conda] mkl 2019.4 243 [conda] mkl-include 2019.4 243 [conda] mkl-service 2.3.0 py37he904b0f0 [conda] mklfft 1.0.14 py37ha843d7b0 [conda] mklrandom 1.1.0 py37hd6b4f25_0 [conda] torch 1.5.0a0+ <pip<!-- Add any other context about the problem here. --", "positive_passages": [{"docid": "doc-en-pytorch-9b245cbb4cdf0e52d936ec300bd71a1d3bfaada73bbc3e344e9f7734199c4b32", "text": "} std::string FunctionSignature::toString() const { <ins> // TODO: consider printing more proper schema strings with defaults, optionals, etc. </ins> std::ostringstream ss; <ins> bool keyword_already = false; </ins> ss << \"(\"; int i = 0; for (auto& param : params) { if (i != 0) { ss << \", \"; } <ins> if (param.keyword_only && !keyword_already) { ss << \"*, \"; keyword_already = true; } </ins> ss << param.type_name() << \" \" << param.name; i++; }", "commid": "pytorch_pr_36782"}], "negative_passages": []}
{"query_id": "q-en-pytorch-5dd6ef7956db6162dd5a8aed7d82b66366e4258908bde5a1b6f15bb5b18aeaa2", "query": "See Docker run fails with , which is due to the fact that is specified in docker container: Which results in: CI cc\nCheck that adding to manywheels image results in a failure:\nTentatively marking hi-pri since nightly testing are failing\nThis is resolved, here is the successful run:", "positive_passages": [{"docid": "doc-en-pytorch-ad3603528f82c43ee91890682fa72cad7a5396b60f29cde4768f24d248474a19", "text": "return ret; } <del> Variable VariableType::as_variable(Tensor tensor) const { </del> <ins> static Variable as_variable(Tensor tensor) { </ins> return make_variable(std::move(tensor)); } <del> std::tuple<Variable, Variable> VariableType::as_variable(std::tuple<Tensor, Tensor> tensors) const { </del> <ins> static std::tuple<Variable, Variable> as_variable(std::tuple<Tensor, Tensor> tensors) { </ins> return std::make_tuple<>( make_variable(std::move(std::get<0>(tensors))), make_variable(std::move(std::get<1>(tensors)))); } <del> std::tuple<Variable, Variable, Variable> VariableType::as_variable(std::tuple<Tensor, Tensor, Tensor> tensors) const { </del> <ins> static std::tuple<Variable, Variable, Variable> as_variable(std::tuple<Tensor, Tensor, Tensor> tensors) { </ins> return std::make_tuple<>( make_variable(std::move(std::get<0>(tensors))), make_variable(std::move(std::get<1>(tensors))), make_variable(std::move(std::get<2>(tensors)))); } <del> std::tuple<Variable, Variable, Variable, Variable> VariableType::as_variable(std::tuple<Tensor, Tensor, Tensor, Tensor> tensors) const { </del> <ins> static std::tuple<Variable, Variable, Variable, Variable> as_variable(std::tuple<Tensor, Tensor, Tensor, Tensor> tensors) { </ins> return std::make_tuple<>( make_variable(std::move(std::get<0>(tensors))), make_variable(std::move(std::get<1>(tensors))),", "commid": "pytorch_pr_4366"}], "negative_passages": []}
{"query_id": "q-en-pytorch-5dd6ef7956db6162dd5a8aed7d82b66366e4258908bde5a1b6f15bb5b18aeaa2", "query": "See Docker run fails with , which is due to the fact that is specified in docker container: Which results in: CI cc\nCheck that adding to manywheels image results in a failure:\nTentatively marking hi-pri since nightly testing are failing\nThis is resolved, here is the successful run:", "positive_passages": [{"docid": "doc-en-pytorch-7d0c77fd4c8089d46335d6de8e596e3d54016e125c67efd3a3172007d59d80b7", "text": "make_variable(std::move(std::get<3>(tensors)))); } <del> std::vector<Variable> VariableType::as_variable(TensorList tl) const { </del> <ins> static std::vector<Variable> as_variable(TensorList tl) { </ins> std::vector<Variable> variables; for (auto& t : tl) { variables.emplace_back(make_variable(std::move(t)));", "commid": "pytorch_pr_4366"}], "negative_passages": []}
{"query_id": "q-en-pytorch-5dd6ef7956db6162dd5a8aed7d82b66366e4258908bde5a1b6f15bb5b18aeaa2", "query": "See Docker run fails with , which is due to the fact that is specified in docker container: Which results in: CI cc\nCheck that adding to manywheels image results in a failure:\nTentatively marking hi-pri since nightly testing are failing\nThis is resolved, here is the successful run:", "positive_passages": [{"docid": "doc-en-pytorch-51b2149dc54231e077acd735c5a589581935e5f104a0e4f54368387fdd9b3558", "text": "} } <del> variable_list flatten(const TensorList& tensors) { </del> <ins> static variable_list flatten(const TensorList& tensors) { </ins> return cast_tensor_list(tensors); } <del> variable_list flatten(const Tensor& x, const TensorList& y) { </del> <ins> static variable_list flatten(const Tensor& x, const TensorList& y) { </ins> std::vector<Variable> r; r.reserve(1 + y.size()); r.emplace_back(x);", "commid": "pytorch_pr_4366"}], "negative_passages": []}
{"query_id": "q-en-pytorch-5dd6ef7956db6162dd5a8aed7d82b66366e4258908bde5a1b6f15bb5b18aeaa2", "query": "See Docker run fails with , which is due to the fact that is specified in docker container: Which results in: CI cc\nCheck that adding to manywheels image results in a failure:\nTentatively marking hi-pri since nightly testing are failing\nThis is resolved, here is the successful run:", "positive_passages": [{"docid": "doc-en-pytorch-b014bf93d687c3ccf5f199ad845b288b12dfdbaebbd939c86af9d171ac3b3b2d", "text": "return r; } <del> variable_list flatten(const Tensor& x, const TensorList& y, const Tensor& z) { </del> <ins> static variable_list flatten(const Tensor& x, const TensorList& y, const Tensor& z) { </ins> std::vector<Variable> r; r.reserve(2 + y.size()); r.emplace_back(x);", "commid": "pytorch_pr_4366"}], "negative_passages": []}
{"query_id": "q-en-pytorch-5dd6ef7956db6162dd5a8aed7d82b66366e4258908bde5a1b6f15bb5b18aeaa2", "query": "See Docker run fails with , which is due to the fact that is specified in docker container: Which results in: CI cc\nCheck that adding to manywheels image results in a failure:\nTentatively marking hi-pri since nightly testing are failing\nThis is resolved, here is the successful run:", "positive_passages": [{"docid": "doc-en-pytorch-2eb8e144c61d9fc354240acf4150cfcbb2f2030be31cd71d45e142b4006e1181", "text": "return r; } <del> std::vector<Tensor> as_tensor_list(std::vector<Variable> &vars) { </del> <ins> static std::vector<Tensor> as_tensor_list(std::vector<Variable> &vars) { </ins> std::vector<Tensor> tensors; for (auto& v : vars) { tensors.emplace_back(std::move(v));", "commid": "pytorch_pr_4366"}], "negative_passages": []}
{"query_id": "q-en-pytorch-5dd6ef7956db6162dd5a8aed7d82b66366e4258908bde5a1b6f15bb5b18aeaa2", "query": "See Docker run fails with , which is due to the fact that is specified in docker container: Which results in: CI cc\nCheck that adding to manywheels image results in a failure:\nTentatively marking hi-pri since nightly testing are failing\nThis is resolved, here is the successful run:", "positive_passages": [{"docid": "doc-en-pytorch-45ab2894423a4c059b0693d4024d38ac2f47fc04cbe18c56c1f2b68c277708c0", "text": "return self.clone(); } <del> std::vector<int64_t> to_arg_sizes(TensorList tensors, int64_t dim) { </del> <ins> static std::vector<int64_t> to_arg_sizes(TensorList tensors, int64_t dim) { </ins> std::vector<int64_t> arg_sizes(tensors.size()); for (size_t i = 0; i < tensors.size(); ++i) { arg_sizes[i] = tensors[i].size(dim);", "commid": "pytorch_pr_4366"}], "negative_passages": []}
{"query_id": "q-en-pytorch-5dd6ef7956db6162dd5a8aed7d82b66366e4258908bde5a1b6f15bb5b18aeaa2", "query": "See Docker run fails with , which is due to the fact that is specified in docker container: Which results in: CI cc\nCheck that adding to manywheels image results in a failure:\nTentatively marking hi-pri since nightly testing are failing\nThis is resolved, here is the successful run:", "positive_passages": [{"docid": "doc-en-pytorch-bd6eb8f661d6c0e6d0c08a4b156cbb4fbcefc38db1a4c87ae1eb9142ac37a1db", "text": "std::vector<at::Tensor> unpack(at::TensorList tl, const char *name, int pos) const; std::vector<at::Tensor> unpack_idxs(at::TensorList tl, const char *name, int pos) const; <del> Variable as_variable(Tensor tensor) const; std::tuple<Variable, Variable> as_variable(std::tuple<Tensor, Tensor> tensor) const; std::tuple<Variable, Variable, Variable> as_variable(std::tuple<Tensor, Tensor, Tensor> tensor) const; std::tuple<Variable, Variable, Variable, Variable> as_variable(std::tuple<Tensor, Tensor, Tensor, Tensor> tensor) const; std::vector<Variable> as_variable(TensorList tensor) const; Variable maybe_wrap(Tensor data, const Variable & self, bool inplace) const; </del> private: at::Type* baseType; std::string str;", "commid": "pytorch_pr_4366"}], "negative_passages": []}
{"query_id": "q-en-pytorch-649049e212ee0b73a2f7bba86be9b45ff32e3aace4e482464f8256518eaefa4d", "query": "After merging cuda 12.4 for nightly. I observe following failures: libtorch-cuda124-shared-with-deps-cxx11-abi-build: Build Official Docker Images 124: We need to resolve these before continuing adding CUDA 12.4 CI workflows 0 cc\nattempt 3 worked for libtorch build issue:\nFor Docker image, related discussion:\nI will instance a new bug for the docker job failure.", "positive_passages": [{"docid": "doc-en-pytorch-af6382f3e1651d38af354b5d3b54703e77febcdc137b83436bd2a4decd13e125", "text": ":members: <ins> Padding Layers -------------- :hidden:`ReflectionPad2d` ~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: ReflectionPad2d :members: :hidden:`ReplicationPad2d` ~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: ReplicationPad2d :members: :hidden:`ReplicationPad3d` ~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: ReplicationPad3d :members: :hidden:`ZeroPad2d` ~~~~~~~~~~~~~~~~~~~ .. autoclass:: ZeroPad2d :members: :hidden:`ConstantPad2d` ~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: ConstantPad2d :members: </ins> Non-linear Activations ----------------------------------", "commid": "pytorch_pr_1808"}], "negative_passages": []}
{"query_id": "q-en-pytorch-649049e212ee0b73a2f7bba86be9b45ff32e3aace4e482464f8256518eaefa4d", "query": "After merging cuda 12.4 for nightly. I observe following failures: libtorch-cuda124-shared-with-deps-cxx11-abi-build: Build Official Docker Images 124: We need to resolve these before continuing adding CUDA 12.4 CI workflows 0 cc\nattempt 3 worked for libtorch build issue:\nFor Docker image, related discussion:\nI will instance a new bug for the docker job failure.", "positive_passages": [{"docid": "doc-en-pytorch-2522f531f7405c6f46bf5539d95bb508a5dae6085838eb03c5d1bf94c9abc900", "text": "from .module import Module from .utils import _quadruple, _ntuple <del> from .._functions.padding import ConstantPad2d as F_ConstantPad2d </del> <ins> from .. import functional as F </ins> # TODO: grad_output size asserts in THNN class ReflectionPad2d(Module): <ins> r\"\"\"Pads the input tensor using the reflection of the input boundary. Args: padding (int, tuple): the size of the padding. If is int, uses the same padding in all boundaries. If a 4-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom) Shape: - Input: :math:`(N, C, H_{in}, W_{in})` - Output: :math:`(N, C, H_{out}, W_{out})` where :math:`H_{out} = H_{in} + paddingTop + paddingBottom` :math:`W_{out} = W_{in} + paddingLeft + paddingRight` Examples:: >>> m = nn.ReflectionPad2d(3) >>> input = autograd.Variable(torch.randn(16, 3, 320, 480)) >>> output = m(input) >>> # using different paddings >>> m = nn.ReflectionPad2d((3, 3, 6, 6)) >>> output = m(input) \"\"\" </ins> def __init__(self, padding): super(ReflectionPad2d, self).__init__() self.padding = _quadruple(padding) def forward(self, input): <del> return self._backend.ReflectionPad2d(*self.padding)(input) </del> <ins> return F.pad(input, self.padding, 'reflect') </ins> def __repr__(self): return self.__class__.__name__ + ' ' + str(self.padding) class ReplicationPad2d(Module): <ins> r\"\"\"Pads the input tensor using replication of the input boundary. Args: padding (int, tuple): the size of the padding. If is int, uses the same padding in all boundaries. If a 4-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom) Shape: - Input: :math:`(N, C, H_{in}, W_{in})` - Output: :math:`(N, C, H_{out}, W_{out})` where :math:`H_{out} = H_{in} + paddingTop + paddingBottom` :math:`W_{out} = W_{in} + paddingLeft + paddingRight` Examples:: >>> m = nn.ReplicationPad2d(3) >>> input = autograd.Variable(torch.randn(16, 3, 320, 480)) >>> output = m(input) >>> # using different paddings >>> m = nn.ReplicationPad2d((3, 3, 6, 6)) >>> output = m(input) \"\"\" </ins> def __init__(self, padding): super(ReplicationPad2d, self).__init__() self.padding = _quadruple(padding) def forward(self, input): <del> return self._backend.ReplicationPad2d(*self.padding)(input) </del> <ins> return F.pad(input, self.padding, 'replicate') </ins> def __repr__(self): return self.__class__.__name__ + ' ' + str(self.padding) class ReplicationPad3d(Module): <ins> r\"\"\"Pads the input tensor using replication of the input boundary. Args: padding (int, tuple): the size of the padding. If is int, uses the same padding in all boundaries. 
If a 6-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom, paddingFront, paddingBack) Shape: - Input: :math:`(N, C, D_{in}, H_{in}, W_{in})` - Output: :math:`(N, C, D_{out}, H_{out}, W_{out})` where :math:`D_{out} = D_{in} + paddingFront + paddingBack` :math:`H_{out} = H_{in} + paddingTop + paddingBottom` :math:`W_{out} = W_{in} + paddingLeft + paddingRight` Examples:: >>> m = nn.ReplicationPad3d(3) >>> input = autograd.Variable(torch.randn(16, 3, 8, 320, 480)) >>> output = m(input) >>> # using different paddings >>> m = nn.ReplicationPad3d((3, 3, 6, 6, 1, 1)) >>> output = m(input) \"\"\" </ins> def __init__(self, padding): super(ReplicationPad3d, self).__init__() self.padding = _ntuple(6)(padding) def forward(self, input): <del> return self._backend.ReplicationPad3d(*self.padding)(input) </del> <ins> return F.pad(input, self.padding, 'replicate') </ins> def __repr__(self): return self.__class__.__name__ + ' ' + str(self.padding) class ZeroPad2d(Module): <ins> r\"\"\"Pads the input tensor boundaries with zero. Args: padding (int, tuple): the size of the padding. If is int, uses the same padding in all boundaries. If a 4-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom) Shape: - Input: :math:`(N, C, H_{in}, W_{in})` - Output: :math:`(N, C, H_{out}, W_{out})` where :math:`H_{out} = H_{in} + paddingTop + paddingBottom` :math:`W_{out} = W_{in} + paddingLeft + paddingRight` Examples:: >>> m = nn.ZeroPad2d(3) >>> input = autograd.Variable(torch.randn(16, 3, 320, 480)) >>> output = m(input) >>> # using different paddings >>> m = nn.ZeroPad2d((3, 3, 6, 6)) >>> output = m(input) \"\"\" </ins> def __init__(self, padding): super(ZeroPad2d, self).__init__() self.padding = _quadruple(padding) def forward(self, input): <del> return F_ConstantPad2d(pad=self.padding, value=0)(input) </del> <ins> return F.pad(input, self.padding, 'constant', 0) </ins> def __repr__(self): return self.__class__.__name__ + ' ' + str(self.padding) class ConstantPad2d(Module): <ins> r\"\"\"Pads the input tensor boundaries with a constant value. Args: padding (int, tuple): the size of the padding. If is int, uses the same padding in all boundaries. If a 4-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom) Shape: - Input: :math:`(N, C, H_{in}, W_{in})` - Output: :math:`(N, C, H_{out}, W_{out})` where :math:`H_{out} = H_{in} + paddingTop + paddingBottom` :math:`W_{out} = W_{in} + paddingLeft + paddingRight` Examples:: >>> m = nn.ConstantPad2d(3, 3.5) >>> input = autograd.Variable(torch.randn(16, 3, 320, 480)) >>> output = m(input) >>> # using different paddings >>> m = nn.ConstantPad2d((3, 3, 6, 6), 3.5) >>> output = m(input) \"\"\" </ins> def __init__(self, padding, value): super(ConstantPad2d, self).__init__()", "commid": "pytorch_pr_1808"}], "negative_passages": []}
{"query_id": "q-en-pytorch-649049e212ee0b73a2f7bba86be9b45ff32e3aace4e482464f8256518eaefa4d", "query": "After merging cuda 12.4 for nightly. I observe following failures: libtorch-cuda124-shared-with-deps-cxx11-abi-build: Build Official Docker Images 124: We need to resolve these before continuing adding CUDA 12.4 CI workflows 0 cc\nattempt 3 worked for libtorch build issue:\nFor Docker image, related discussion:\nI will instance a new bug for the docker job failure.", "positive_passages": [{"docid": "doc-en-pytorch-90fb48d90cd835d50d5300bc7c39898223e4cb9cb2d25621c59a1ef350be474e", "text": "self.value = value def forward(self, input): <del> return F_ConstantPad2d(pad=self.padding, value=self.value)(input) </del> <ins> return F.pad(input, self.padding, 'constant', self.value) </ins> def __repr__(self): return self.__class__.__name__ + ' ' + str(self.padding)", "commid": "pytorch_pr_1808"}], "negative_passages": []}
{"query_id": "q-en-pytorch-9a0badc8dcccf68c4168a6c671cce742e5dccbeb83d487e2d2dcee4139512db3", "query": "Here, I can't export trivial net with sort() using dynamoexport(). Legacy () works fine, dynamoexport errors out : Unsupported FX nodes: {'callfunction': ['']}. Pytorch nightly 06/03 cc\na missing op, could you take this?\nNot sure if related:\nHmmm. Ideally, ONNX should add a sort op. It doesn't have one. I wonder what the legacy exporter converts it to.\nshould fix this issue.", "positive_passages": [{"docid": "doc-en-pytorch-a12353ff64e96f5a3b946258b0d15a90e1197c4d9549bf80e55efe9959db81ed", "text": "(Cross, (), ((S, 3), (S, 3))), (Cross, (), ((S, 3, S), (S, 3, S), 1), 'dim'), (Inverse, (), ((S, S),), '', (), [skipIfNoLapack]), <ins> (Gesv, (), ((S, S), (S, S)), '', (), [skipIfNoLapack]), </ins> (Clone, (), ((S, M, S),)), (Squeeze, (), ((S, 1, M, 1),)), # TODO: enable neg dim checks", "commid": "pytorch_pr_1733"}], "negative_passages": []}
{"query_id": "q-en-pytorch-9a0badc8dcccf68c4168a6c671cce742e5dccbeb83d487e2d2dcee4139512db3", "query": "Here, I can't export trivial net with sort() using dynamoexport(). Legacy () works fine, dynamoexport errors out : Unsupported FX nodes: {'callfunction': ['']}. Pytorch nightly 06/03 cc\na missing op, could you take this?\nNot sure if related:\nHmmm. Ideally, ONNX should add a sort op. It doesn't have one. I wonder what the legacy exporter converts it to.\nshould fix this issue.", "positive_passages": [{"docid": "doc-en-pytorch-865c528397122b80d2fa3f7329a6cd9644460ce373c9f5220f3d7ab3c8894ffe", "text": "('cross', (S, 3), ((S, 3),)), ('cross', (S, 3, S), ((S, 3, S), 1), 'dim'), ('inverse', (S, S), (), '', (), [skipIfNoLapack]), <ins> ('gesv', (S, S), ((S, S),), '', (), [skipIfNoLapack]), </ins> ('clone', (S, M, S), ()), ('eq', (S, S, S), ((S, S, S),)), ('ne', (S, S, S), ((S, S, S),)),", "commid": "pytorch_pr_1733"}], "negative_passages": []}
{"query_id": "q-en-pytorch-9a0badc8dcccf68c4168a6c671cce742e5dccbeb83d487e2d2dcee4139512db3", "query": "Here, I can't export trivial net with sort() using dynamoexport(). Legacy () works fine, dynamoexport errors out : Unsupported FX nodes: {'callfunction': ['']}. Pytorch nightly 06/03 cc\na missing op, could you take this?\nNot sure if related:\nHmmm. Ideally, ONNX should add a sort op. It doesn't have one. I wonder what the legacy exporter converts it to.\nshould fix this issue.", "positive_passages": [{"docid": "doc-en-pytorch-d02956b17130a7d99e7c22145b27f4c3851b955356ce3df7c7930eef9e41b25d", "text": "def backward(ctx, grad_output): inverse, = ctx.saved_variables return -torch.mm(inverse.t(), torch.mm(grad_output, inverse.t())) <ins> class Gesv(Function): @staticmethod def forward(ctx, b, a): # TODO see if one can backprop through LU X, LU = torch.gesv(b, a) ctx.save_for_backward(X, a) ctx.mark_non_differentiable(LU) return X, LU @staticmethod def backward(ctx, grad_output, grad_LU=None): X, a = ctx.saved_variables grad_b, _ = torch.gesv(grad_output, a.t()) grad_a = -torch.mm(grad_b, X.t()) return grad_b, grad_a </ins>", "commid": "pytorch_pr_1733"}], "negative_passages": []}
{"query_id": "q-en-pytorch-9a0badc8dcccf68c4168a6c671cce742e5dccbeb83d487e2d2dcee4139512db3", "query": "Here, I can't export trivial net with sort() using dynamoexport(). Legacy () works fine, dynamoexport errors out : Unsupported FX nodes: {'callfunction': ['']}. Pytorch nightly 06/03 cc\na missing op, could you take this?\nNot sure if related:\nHmmm. Ideally, ONNX should add a sort op. It doesn't have one. I wonder what the legacy exporter converts it to.\nshould fix this issue.", "positive_passages": [{"docid": "doc-en-pytorch-3cacfff754899ea51c3a38ec7b5a2c596b7e9832c0a665740f5920de43cf301c", "text": "def inverse(self): return Inverse.apply(self) <ins> def gesv(self, a): return Gesv.apply(self, a) </ins> def multinomial(self, num_samples=1, with_replacement=False): return Multinomial(num_samples, with_replacement)(self)", "commid": "pytorch_pr_1733"}], "negative_passages": []}
{"query_id": "q-en-pytorch-83946a51202840410574c7f47998713d4ac08f0a140e4e6d1aa38d932ef76fe6", "query": "Previously, worked to install CPU version explicitly. However, does not work. Are the whl files for 2.4 missing? N/A cc\nHi, this link is deprecated since 2.4. Please use the official installation guide instead.\nStill it is not available for macosx x86 platform\nHi and Please use official install commands from For linux its: MacOS x86 binaries where deprecated since release 2.3.0.\nClosing as resolved. We previously used the command in original post for building dockerfiles for specific versions. I have now replaced it with:\nThis still installs torch 2.2.2 but not 2.4.0 On Monday, August 19, 2024 at 10:41:25 PM GMT+5:30, Mihir Patel Closing as resolved. We previously used the command in original post for building dockerfiles for specific versions. I have now replaced it with: pip${PYTHONVERSION} install --no-cache-dir --find-links torch=={CUDAVERSIONTAG} \u2014 Reply to this email directly, view it on GitHub, or unsubscribe. You are receiving this because you were mentioned.Message ID:\nThis still installs torch 2.2.2 but not 2.4.0\npython --version Python 3.12.5 pip3 install torch torchvision torchaudio --index-url Collecting torch Downloading (151.0 MB) 151.0/151.0 MB 11.3 MB/s eta 0:00:00 Successfully installed torch-2.2.2 torchaudio-2.2.2 torchvision-0.17.2 but not 2.4.0", "positive_passages": [{"docid": "doc-en-pytorch-c0164bf057cb1858c7c153ede4ef6b2e0965cc23db23c15f3ebb2121bb0adf7f", "text": ">>> hx = Variable(torch.randn(3, 20)) >>> output = [] >>> for i in range(6): <del> ... hx = rnn(input, hx) ... output[i] = hx </del> <ins> ... hx = rnn(input[i], hx) ... output.append(hx) </ins> \"\"\" def __init__(self, input_size, hidden_size, bias=True, nonlinearity=\"tanh\"):", "commid": "pytorch_pr_690"}], "negative_passages": []}