|
{: , : , : [{: , : \/usr/local/lib/python2.7/dist-\/home/faster-\/usr/local/lib/python2.7/dist-\/home/faster-\/usr/local/lib/python2.7/dist-\/home/faster-\/usr/local/lib/python2.7/dist-\global name 'FileNotFoundError' is not defined\, : , : 605}], : []} |
|
{: , : , : [{: , : , : , : 217}], : []} |
|
{: , : Not implemented yet\, : [{: , : Found unnamed dim at index 0 of Tensor[None, None]\, : , : 51}], : []} |
|
{: , : Not implemented yet\, : [{: , : , : , : 36}], : []} |
|
{: , : generic/THCTensorIndex.cu\expecting vector of indices\Indexing dim is out of bounds\Indexing dim is out of bounds\length of src.size[dim] is not equal to length of indices\Source/destination tensor have different slice sizes (%ld vs %ld)\Warning: source/destination slices have same size but different \shape for an index operation. This behavior is deprecated.n\, : [{: , : , : , : 489}], : []} |
|
{: , : , : [{: , : 719\screen shot 2018-03-12 at 5 07 03 pm\https://user-\, : , : 91}], : []} |
|
{: , : , : [{: , : , : , : 301}], : []} |
|
{: , : , : [{: , : , : , : 365}], : []} |
|
{: , : , : [{: , : , : , : 83}], : []} |
|
{: , : , : [{: , : , : , : 168}], : []} |
|
{: , : win32\, : [{: , : \/usr/local/lib/python2.7/dist-\/home/faster-\/usr/local/lib/python2.7/dist-\/home/faster-\/usr/local/lib/python2.7/dist-\/home/faster-\/usr/local/lib/python2.7/dist-\global name 'FileNotFoundError' is not defined\, : , : 605}], : []} |
|
{: , : win32\, : [{: , : , : , : 217}], : []} |
|
{: , : , : [{: , : , : , : 301}], : []} |
|
{: , : , : [{: , : , : , : 109}], : []} |
|
{: , : , : [{: , : , : , : 24}], : []} |
|
{: , : , : [{: , : , : , : 58}], : []} |
|
{: , : All input dims must be named\All input dims must be named. Found unnamed dim at index 0\, : [{: , : Found unnamed dim at index 0 of Tensor[None, None]\, : , : 51}], : []} |
|
{: , : All input dims must be named\All input dims must be named. Found unnamed dim at index 0\, : [{: , : , : , : 36}], : []} |
|
{: , : expecting vector of indices\Indexing dim is out of bounds\Source tensor is empty\length of src.size[dim] is not equal to length of indices\, : [{: , : , : , : 489}], : []} |
|
{: , : , : [{: , : , : , : 546}], : []} |
|
{: , : , : [{: , : , : , : 184}], : []} |
|
{: , : , : [{: , : , : , : 301}], : []} |
|
{: , : , : [{: , : , : , : 151}], : []} |
|
{: , : , : [{: , : , : , : 38}], : []} |
|
{: , : , : [{: , : , : , : 365}], : []} |
|
{: , : expecting vector of indices\Indexing dim is out of bounds\Source tensor is empty\length of src.size[dim] is not equal to length of indices\, : [{: , : , : , : 489}], : []} |
|
{: , : , : [{: , : , : , : 230}], : []} |
|
{: , : Numpy not found\, : [{: , : , : , : 109}], : []} |
|
{: , : Numpy not found\, : [{: , : , : , : 24}], : []} |
|
{"query_id": , "query": "\Loads the Torch serialized object at the given URL. <del> If the object is already present in `model_dir`, it's deserialied and </del> <ins> If the object is already present in `model_dir`, it's deserialized and </ins> returned. The filename part of the URL should follow the naming convention ``filename-<sha256>.ext`` where ``<sha256>`` is the first eight or more digits of the SHA256 hash of the contents of the file. The hash is used to", "positive_passages": [{"docid": "doc-en-pytorch-b028f319e02a0f57a02bd3d133fa6e9618a627ebb58f43f9ae7ce8ada660f521", "text": "When converting a tensor to dlpack, the strides of single-value tensors () should be normalized to . This is the fallout conclusion from . Here is a demo of what is wrong: Collecting environment information... PyTorch version: 1.10.0a0+git2aa19f3 Is debug build: False CUDA used to build PyTorch: Could not collect ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.4 LTS (x8664) GCC version: (GCC) 9.4.0 Clang version: 12.0.1 ( ) CMake version: version 3.21.1 Libc version: glibc-2.31 Python version: 3.9.6 (default, Jul 11 2021, 03:39:48) [GCC 9.3.0] (64-bit runtime) Python platform: Linux-5.4.0-122-generic-x8664-with-glibc2.31 Is CUDA available: False CUDA runtime version: 11.2.152 GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1660 SUPER Nvidia driver version: 470.141.03 cuDNN version: /usr/lib/x8664-linux- HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: False Versions of relevant libraries: [pip3] mypy==0.812 [pip3] mypy-extensions==0.4.3 [pip3] numpy==1.21.2 [pip3] torch==1.10.0a0+git2aa19f3 [pip3] torchaudio==0.10.0a0+ [conda] magma-cuda112 2.5.2 1 pytorch [conda] mkl 2021.3.0 h726a3e6557 conda-forge [conda] mkl-include 2021.3.0 h726a3e6557 conda-forge [conda] numpy 1.21.2 py39hdbf815f0 conda-forge [conda] torch 1.12.0a0+git1f29b31 dev0 <develop[conda] torchaudio 0.10.0a0+ pypi0 pypi\ndoes this capture the problem?", "commid": "pytorch_issue_83069", "tokennum": }], "negative_passages": []}
|
{"query_id": "q-en-pytorch-6749361c7ea8d94cb25e13ac96078140852bf945bb508690649257e713b72470", "query": "include(\) torch_cuda_get_nvcc_gencode_flag(NVCC_GENCODE) <del> string (REPLACE \ \ NVCC_GENCODE \) string (REPLACE \ \ NVCC_GENCODE \) </del> <ins> string(REPLACE \ \ NVCC_GENCODE \) </ins> message(STATUS \) ADD_CUSTOM_COMMAND(", "positive_passages": [{"docid": "doc-en-pytorch-ad22f5a2638a166f230b1217fee3c48e11f068d0b6bfa8b06b21f7bbd23ed7de", "text": "Steps to reproduce: a CI CUDA docker image, e.g., PyTorch, and run Expected result: It works Actual result: Workaround: set forces only one to be passed.\nCC is this the nccl+sccache problem you were seeing?\ncc has the exact error. CI log doesn't actually post the error.\nsccache probably doesn't allow passing multiple right now, I can look into it if needed.\nI am having this same error. How can you figure out which gencode to pass? I'm trying to build from source from a Dockerfile as well, as seen in this gist:\nthis is super helpful, thanks.\njust wanted to make sure you saw that this is pulling from a ppc64le (IBM Power system) docker image, so you'll run into different issues if trying to build on an x64 machine.\nI modified it very slightly for x64: It seems to build pytorch successfully for me (on latest master I suppose).\nlooking into this", "commid": "pytorch_issue_8729", "tokennum": }], "negative_passages": []}
|
{"query_id": "q-en-pytorch-6cd27db7bbea6d45658fa436474e19a3230f031f8b61ffd45dc8cbadd253e1f7", "query": "const auto& dim = tensor_names[idx]; TORCH_CHECK(dim.isBasic(), \, <del> dim, \, tensor_names); </del> <ins> idx, \, tensor_names); </ins> auto it = std::find(names.begin(), names.end(), dim); TORCH_CHECK(it != names.end(), \, dim, \, names,", "positive_passages": [{"docid": "doc-en-pytorch-4979a1a04833169240c0bad7b12e4a15cd4efb4b3d5ef9e5517488d63a0cc554", "text": "only supports fully named inputs; all input dimensions must have a name. When passing it an unnamed input, it errors out with the following message: It should really say \"Found unnamed dim at index 0 of Tensor[None, None]\".\nfixed in", "commid": "pytorch_issue_27074", "tokennum": 51}], "negative_passages": []}
|
{"query_id": "q-en-pytorch-6cd27db7bbea6d45658fa436474e19a3230f031f8b61ffd45dc8cbadd253e1f7", "query": "const auto& dim = tensor_names[idx]; TORCH_CHECK(dim.isBasic(), \, <del> dim, \, tensor_names); </del> <ins> idx, \, tensor_names); </ins> auto it = std::find(names.begin(), names.end(), dim); TORCH_CHECK(it != names.end(), \, dim, \, names,", "positive_passages": [{"docid": "doc-en-pytorch-1de20edd2f7958b4095dd34eb2e3927620d004f0d1dfec850faeeac1d6e4b60e", "text": "This is an older implementation that I think doesn't make any sense anymore. We should throw a NYI exception for it. The expected behavior for this should be:\nfixed in", "commid": "pytorch_issue_27073", "tokennum": 36}], "negative_passages": []}
|
{"query_id": "q-en-pytorch-739905d98d898468387484bc0e4f63dd7bb3cff90818792350c0ab82bd4674e3", "query": "#ifdef NUMPY_TYPE_ENUM THTensor* THPTensor_(fromNumpy)(PyObject *numpy_array) { PyArrayObject *array = (PyArrayObject*)numpy_array; <del> THStoragePtr storage = THStorage_(newWithDataAndAllocator)( (real*)PyArray_DATA(array), PyArray_NBYTES(array) / sizeof(real), &THNumpyArrayAllocator, new NumpyArrayAllocator(numpy_array)); </del> // Numpy and Torch disagree on empty tensors. In Torch, an empty // tensor is a tensor with zero dimensions. In Numpy, an empty tensor", "positive_passages": [{"docid": "doc-en-pytorch-b30ebd0cddb63fbac936f49b97a3653753b8a516df10c7ac7a9f148bb5d80586", "text": "I followed the guide The command was working just fine until a couple days ago, but now it returns I manually checked and there is no cu100/[...] What should I do? I have CUDA 10.0 I would like to use pytorch...\nYou could try as a temporary workground. cc\nThis is most likely related to this: I've re-uploaded the stable html to re-include the cu100 binaries, and submitted a PR to make sure this doesn't happen again", "commid": "pytorch_issue_42999", "tokennum": 109}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-739905d98d898468387484bc0e4f63dd7bb3cff90818792350c0ab82bd4674e3", "query": "#ifdef NUMPY_TYPE_ENUM THTensor* THPTensor_(fromNumpy)(PyObject *numpy_array) { PyArrayObject *array = (PyArrayObject*)numpy_array; <del> THStoragePtr storage = THStorage_(newWithDataAndAllocator)( (real*)PyArray_DATA(array), PyArray_NBYTES(array) / sizeof(real), &THNumpyArrayAllocator, new NumpyArrayAllocator(numpy_array)); </del> // Numpy and Torch disagree on empty tensors. In Torch, an empty // tensor is a tensor with zero dimensions. In Numpy, an empty tensor", "positive_passages": [{"docid": "doc-en-pytorch-6c4b1a770dfce518b2cb3cee81519da0ca01fdfc9eb0a556e52cf0e3bb543410", "text": "Repro: Strides seem to be copied correctly, and the first row is also ok. All other rows are garbage.", "commid": "pytorch_issue_484", "tokennum": 24}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-7ecc74cf7a8b83f2918637652e8727b1637f12f8d4b8759c379088ceea9916a0", "query": "return PyObject_CallFunctionObjArgs(THPLongTensorClass, array, NULL); } else if (type == NPY_INT32) { return PyObject_CallFunctionObjArgs(THPIntTensorClass, array, NULL); <ins> } else if (type == NPY_INT16) { return PyObject_CallFunctionObjArgs(THPShortTensorClass, array, NULL); </ins> } else if (type == NPY_UINT8) { return PyObject_CallFunctionObjArgs(THPByteTensorClass, array, NULL); }", "positive_passages": [{"docid": "doc-en-pytorch-5740cb3214dad75a3491bc5dcb6e752c151b423c2a46a03c332f63602b38bb18", "text": "Should we add a ShortTensor Type, or just convert int16 to IntTensor ?\nWe already have a type available in pytorch. We have to enable conversion in the relevant function here:\nAuto travis-ci failed at python2.7 :(\nDo you have any plan to support int8 numpy conversion while you have too?\nChar tensor uses char which is not guaranteed to be signed by the C standard. We'd need to change our C code to use a\nI try , it complains , any idea?\nThe error message is quite self explanatory. PyTorch doesn't support tensors at the moment.", "commid": "pytorch_issue_891", "tokennum": 135}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-8346a42adf2a20bb7a3cbc59d07bfd3339fc72f8d80019c104e4b3d546917bce", "query": "THCTensor_(data)(state, result_), ldc, result_->stride[0], num_batches); } else { <del> for (long i = 0; i < num_batches; ++i) { </del> <ins> for (int64_t i = 0; i < num_batches; ++i) { </ins> THCudaBlas_Hgemm( state, transpose_batch1,", "positive_passages": [{"docid": "doc-en-pytorch-5c37c05bfa8c9df98bf6e9c6d9a2373ff1a6b7ab89351c1bc46128c6359ac705", "text": "I am trying to build PyTorch from source on macOS High Sierra Version 10.13.3 with an NVIDIA GeForce GT 750M. I'm following the instructions at prints prints prints prints When I enter , as per the instructions, I receive the following error message:\nCarlos - this may be minor but I eventually found problems with ananconda for this an other CUDA-enabled builds. Couple of things that may help: a - ensure that your activated conda environment is env actually the env you want. I was checking the path with and it was showing the incorrect path to python and pip half of the time. I ended up creating new conda environments (switching to miniconda by the way) and passing the python=3.6 flag to ensure a full environment was deployed. Maybe somehow reverts back to an incorrect path b - in your activated environment, c - make sure a recent OS update didn't bump your CommandLineTools to v 9.0 (I found 8.0 was the only that worked, not 8.1) I built this on 10.12, CUDA 9, cudNN 7.0.\nCould you see if can build for you? I don't have CUDA 9 on mac, so it'd be great if you can let me know if it works or not. :)\nI'm no longer getting an error at installation, but still returns for some reason.commidpytorch_issue_5091tokennumnegative_passages |
|
{"query_id": "q-en-pytorch-85de7ca993fe71820aa6914403ef4020c7c221a26a264b8143092c58d899fe53", "query": "See :func:`torch.ormqr` \\\ permute(*dims) -> Tensor Permute the dimensions of this tensor. Args: *dims (int...): The desired ordering of dimensions Example: >>> x = torch.randn(2, 3, 5) >>> x.size() torch.Size([2, 3, 5]) >>> x.permute(2, 0, 1).size() torch.Size([5, 2, 3]) \\\ potrf(upper=True) -> Tensor", "positive_passages": [{"docid": "doc-en-pytorch-770ebf2455272156db916a5267a9d9a86f93a18b654cbbc9f4496a4c4a01a348", "text": "I also found this issue when I tried to swap axes of a tensor. I googled an older doc in the source code. Hope it helps.", "commid": "pytorch_issue_7627", "tokennum": }], "negative_passages": []}
|
query_idq-en-pytorch-89e3e6bdb26578265a955db96af03126e423ed91e9beb46c903f76b826d5780equerypass if WITH_CUDA: <del> if platform.system() == 'Darwin': cuda_path = '/Developer/NVIDIA/CUDA-7.5' cuda_include_path = cuda_path + '/include' cuda_lib_path = cuda_path + '/lib' else: cuda_path = '/usr/local/cuda' cuda_include_path = cuda_path + '/include' cuda_lib_path = cuda_path + '/lib64' </del> <ins> cuda_lib_dirs = ['lib64', 'lib'] cuda_include_path = os.path.join(CUDA_HOME, 'include') for lib_dir in cuda_lib_dirs: cuda_lib_path = os.path.join(CUDA_HOME, lib_dir) if os.path.exists(cuda_lib_path): break </ins> include_dirs.append(cuda_include_path) extra_link_args.append('-L' + cuda_lib_path) extra_link_args.append('-Wl,-rpath,' + cuda_lib_path) extra_compile_args += ['-DWITH_CUDA'] <ins> extra_compile_args += ['-DCUDA_LIB_PATH=' + cuda_lib_path] </ins> main_libraries += ['THC'] main_sources += [ \,positive_passagesdociddoc-en-pytorch-20b097551cd6d87c3c98ac23415ff9359a3ec462e8d54adf471c355cc4b15a7dtextright now if is not in LDLIBRARYPATH, though it was found at compile-time, these lines will fail: Avoid this, by idk doing something... I think i have a few good ideas.\nWe need to add /usr/local/cuda/lib64 (or that's mac dir) to the rpath of _C\nthe problem i dont think is to add it to rpath, it's because we\nRight. We could try with the default cuda installation path if it's not found from LDLIBRARYPATH.\nActually, do we really want to use the compile path? We should try to detect it at runtime, right? Or are we going to ship CUDA libs in our binaries?\nwe first load using runtime paths, and if it fails, fallback to path known at compile time.\nI fear this is going to get brittle, especially if you are shipping CUDA libs. e.g. a user swaps HW and picks up a new driver and/or toolkit. As says, I'm pretty sure you want to use the runtime paths. Similarly, CUDA 7.5 won't run on Pascal and we have a cudnn built for 7.5 that will not do the right thing on CUDA 8, so those need to go together. If a user installs the CUDA bits via deb/rpm the right things are supposed to happen. If you push bits, we need to figure out how to make sure things line up. But this is perhaps a much larger packaging conversation.\nYeah, I've implemented it to first try loading it without any path and then try and . I'll push the commit tomorrow.", "commid": "pytorch_issue_153", "tokennum": 365}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-8b58474d691d2e15d1e08141e0e6d0dce367ca48a9be5d0bf5339f905b57573b", "query": "loss(x, y) = sum_ij(max(0, 1 - (x[y[j]] - x[i]))) / x.size(0) where `i == 0` to `x.size(0)`, `j == 0` to `y.size(0)`, <del> `y[j] != 0`, and `i != y[j]` for all `i` and `j`. </del> <ins> `y[j] >= 0`, and `i != y[j]` for all `i` and `j`. </ins> `y` and `x` must have the same size.", "positive_passages": [{"docid": "doc-en-pytorch-83acf37dc6877083f683d65fa961f715ddadfae4044b497c1e75a59b3babf558", "text": "The SDPA tutorial is failing for in Google Colab when run with \"CPU\" only as the h/w accelerator. The tutorial has a note \"If you don\u2019t have a GPU and are running on CPU then the context manager will have no effect and all three runs should return similar timings.\" however things fail when run on CPU only. Code: Error: Collecting environment information. PyTorch version: 2.1.0+cu118 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.2 LTS (x8664) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: 14.0.0-1ubuntu1.1 CMake version: version 3.27.7 Libc version: glibc-2.35 Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-5.15.120+-x8664-with-glibc2.35 Is CUDA available: False CUDA runtime version: 11.8.89 CUDAMODULELOADING set to: N/A GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x8664 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 2 On-line CPU(s) list: 0,1 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) CPU @ 2.", "commid": "pytorch_issue_113522", "tokennum": 529}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-8b58474d691d2e15d1e08141e0e6d0dce367ca48a9be5d0bf5339f905b57573b", "query": "loss(x, y) = sum_ij(max(0, 1 - (x[y[j]] - x[i]))) / x.size(0) where `i == 0` to `x.size(0)`, `j == 0` to `y.size(0)`, <del> `y[j] != 0`, and `i != y[j]` for all `i` and `j`. </del> <ins> `y[j] >= 0`, and `i != y[j]` for all `i` and `j`. </ins> `y` and `x` must have the same size.", "positive_passages": [{"docid": "doc-en-pytorch-32c2ad75da040196aca36912fcdb9199c389f9d3c5ec4634570c9e1d5f12320f", "text": "20GHz CPU family: 6 Model: 79 Thread(s) per core: 2 Core(s) per socket: 1 Socket(s): 1 Stepping: 0 BogoMIPS: 4399.99 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constanttsc repgood nopl xtopology nonstoptsc cpuid tscknownfreq pni pclmulqdq ssse3 fma cx16 pcid sse41 sse42 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahflm abm 3dnowprefetch invpcidsingle ssbd ibrs ibpb stibp fsgsbase tscadjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat mdclear archcapabilities Hypervisor vendor: KVM Virtualization type: full L1d cache: 32 KiB (1 instance) L1i cache: 32 KiB (1 instance) L2 cache: 256 KiB (1 instance) L3 cache: 55 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0,1 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Mitigation; PTE Inversion Vulnerability Mds: Vulnerable; SMT Host state unknown Vulnerability Meltdown: Vulnerable Vulnerability Mmio stale data: Vulnerable Vulnerability Retbleed: Vulnerable Vulnerability Spec store bypass: Vulnerable Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Vulnerable Versions of relevant libraries: [pip3] numpy==1.23.5 [pip3] torch==2.1.0+cu118 [pip3] torchaudio==2.1.", "commid": "pytorch_issue_113522", "tokennum": 512}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-8b58474d691d2e15d1e08141e0e6d0dce367ca48a9be5d0bf5339f905b57573b", "query": "loss(x, y) = sum_ij(max(0, 1 - (x[y[j]] - x[i]))) / x.size(0) where `i == 0` to `x.size(0)`, `j == 0` to `y.size(0)`, <del> `y[j] != 0`, and `i != y[j]` for all `i` and `j`. </del> <ins> `y[j] >= 0`, and `i != y[j]` for all `i` and `j`. </ins> `y` and `x` must have the same size.", "positive_passages": [{"docid": "doc-en-pytorch-88ad703d399b469abbd2cf02de7e4c02fda1767f138fbc8c2e1ee28758c2b4fe", "text": "0+cu118 [pip3] torchdata==0.7.0 [pip3] torchsummary==1.5.1 [pip3] torchtext==0.16.0 [pip3] torchvision==0.16.0+cu118 [pip3] triton==2.1.0 [conda] Could not collect cc: cc\nThis operator has not been implemented for . We will push a PR to update the tutorial.", "commid": "pytorch_issue_113522", "tokennum": 106}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-8bb134eb0920ef385d213f78151b35474e5a8c3112336b15dfae21edffb82304", "query": "Alternatively, go to: https://pytorch.org/binaries to install a PyTorch version that has been compiled with your version of the CUDA driver.\"\"\".format(str(torch._C._cuda_getDriverVersion()))) <ins> def _lazy_init(): global _initialized, _cudart if _initialized: return _check_driver() </ins> assert torch._C._cuda_init() <del> _initialized = True if platform.system() == 'Darwin': _cudart = ctypes.cdll.LoadLibrary('libcudart.dylib') else: _cudart = ctypes.cdll.LoadLibrary('libcudart.so') </del> <ins> _cudart = _load_cudart() </ins> _cudart.cudaGetErrorName.restype = ctypes.c_char_p _cudart.cudaGetErrorString.restype = ctypes.c_char_p <ins> _initialized = True </ins> def cudart():", "positive_passages": [{"docid": "doc-en-pytorch-20b097551cd6d87c3c98ac23415ff9359a3ec462e8d54adf471c355cc4b15a7d", "text": "right now if is not in LDLIBRARYPATH, though it was found at compile-time, these lines will fail: Avoid this, by idk doing something... I think i have a few good ideas.\nWe need to add /usr/local/cuda/lib64 (or that's mac dir) to the rpath of _C\nthe problem i dont think is to add it to rpath, it's because we\nRight. We could try with the default cuda installation path if it's not found from LDLIBRARYPATH.\nActually, do we really want to use the compile path? We should try to detect it at runtime, right? Or are we going to ship CUDA libs in our binaries?\nwe first load using runtime paths, and if it fails, fallback to path known at compile time.\nI fear this is going to get brittle, especially if you are shipping CUDA libs. e.g. a user swaps HW and picks up a new driver and/or toolkit. As says, I'm pretty sure you want to use the runtime paths. Similarly, CUDA 7.5 won't run on Pascal and we have a cudnn built for 7.5 that will not do the right thing on CUDA 8, so those need to go together. If a user installs the CUDA bits via deb/rpm the right things are supposed to happen. If you push bits, we need to figure out how to make sure things line up. But this is perhaps a much larger packaging conversation.\nYeah, I've implemented it to first try loading it without any path and then try and . I'll push the commit tomorrow.commidpytorch_issue_153tokennumnegative_passages |
|
{"query_id": "q-en-pytorch-919c10dadeb43db59d5c9303c047eb60d6809352738af2e454a2c9f6f8045af7", "query": "{\, (PyCFunction)THCPModule_cudaSleep, METH_O, NULL}, {\, (PyCFunction)THCPModule_cudaLockMutex, METH_NOARGS, NULL}, {\, (PyCFunction)THCPModule_cudaUnlockMutex, METH_NOARGS, NULL}, <ins> #ifdef WITH_NCCL </ins> {\, (PyCFunction)THCPModule_nccl_reduce, METH_VARARGS, NULL}, {\, (PyCFunction)THCPModule_nccl_all_reduce, METH_VARARGS, NULL}, {\, (PyCFunction)THCPModule_nccl_broadcast, METH_VARARGS, NULL}, {\, (PyCFunction)THCPModule_nccl_all_gather, METH_VARARGS, NULL}, {\, (PyCFunction)THCPModule_nccl_reduce_scatter, METH_VARARGS, NULL}, <ins> #endif </ins> {NULL} };", "positive_passages": [{"docid": "doc-en-pytorch-cbe0818949c63cbc2b40ed9dc87b78f075a1e37ebf124781b6b385786d01fe61", "text": "OSX 10.13, CUDA 9.0, cudnn 7, Got these errors: When compiling here:", "commid": "pytorch_issue_3051", "tokennum": }], "negative_passages": []}
|
{"query_id": "q-en-pytorch-970f45d0dc765152918981884254d63eb61894a68eba97a7c30729442e9675af", "query": "\"future releases.\"); return NULL; } <ins> // XXX: this won't work for negative strides storage_size += strides_data[i] * (sizes_data[i] - 1); </ins> } <ins> THStoragePtr storage = THStorage_(newWithDataAndAllocator)( (real*)PyArray_DATA(array), storage_size, &THNumpyArrayAllocator, new NumpyArrayAllocator(numpy_array)); </ins> THTensor *result = THTensor_(newWithStorage)(storage, 0, sizes, strides); return result; } else { <ins> THStoragePtr storage = THStorage_(new)(); </ins> THTensor *result = THTensor_(newWithStorage)(storage, 0, NULL, NULL); return result; }", "positive_passages": [{"docid": "doc-en-pytorch-b30ebd0cddb63fbac936f49b97a3653753b8a516df10c7a7a9f148bb5d80586", "text": "I followed the guide The command was working just fine until a couple days ago, but now it returns I manually checked and there is no cu100/[...] What should I do? I have CUDA 10.0 I would like to use pytorch...\nYou could try as a temporary workground. cc\nThis is most likely related to this: I've re-uploaded the stable html to re-include the cu100 binaries, and submitted a PR to make sure this doesn't happen again", "commid": "pytorch_issue_42999", "tokennum": 109}], "negative_passages": []}
|
{"query_id": "q-en-pytorch-970f45d0dc765152918981884254d63eb61894a68eba97a7c30729442e9675af", "query": "\"future releases.\"); return NULL; } <ins> // XXX: this won't work for negative strides storage_size += strides_data[i] * (sizes_data[i] - 1); </ins> } <ins> THStoragePtr storage = THStorage_(newWithDataAndAllocator)( (real*)PyArray_DATA(array), storage_size, &THNumpyArrayAllocator, new NumpyArrayAllocator(numpy_array)); </ins> THTensor *result = THTensor_(newWithStorage)(storage, 0, sizes, strides); return result; } else { <ins> THStoragePtr storage = THStorage_(new)(); </ins> THTensor *result = THTensor_(newWithStorage)(storage, 0, NULL, NULL); return result; }positive_passagesdociddoc-en-pytorch-6c4b1a770dfce518b2cb3cee81519da0ca01fdfc9eb0a556e52cf0e3bb543410textRepro: Strides seem to be copied correctly, and the first row is also ok. All other rows are garbage.commidpytorch_issue_484tokennumnegative_passages |
|
query_idq-en-pytorch-9984ee82d4490f95cf1b5549fdfa93cabf578a9aed1621ea7695d65eb663fdf4queryint_classes = int <ins> if PY2: FileNotFoundError = IOError else: FileNotFoundError = FileNotFoundError </ins> def with_metaclass(meta, *bases): \\\ # This requires a bit of explanation: the basic idea is to make a dummypositive_passagesdociddoc-en-pytorch-e5fa52a9fec54a72031dcbcbd77c314c0adae88ac93041cdefc836ec6c564b04textTraceback (most recent call last): File \, line 322, in <moduleroislabel = fasterRCNN(imdata, iminfo, gtboxes, numboxes) File \, line 491, in call result = self.forward(input, kwargs) File \, line 50, in forward rois, rpnlosscls, rpnlossbbox = self.RCNNrpn(basefeat, iminfo, gtboxes, numboxes) File \, line 491, in call result = self.forward(input, kwargs) File \, line 87, in forward rpndata = self.RPNanchortarget((, gtboxes, iminfo, numboxes)) File \, line 491, in call result = self.forward(input, kwargs) File \, line 157, in forward positiveweights = 1.0 / numexamples File \, line 320, in rdiv return self.reciprocal() other RuntimeError: reciprocal is not implemented for type Exception NameError: \ in <bound method of object at 0x7fb842e6e3d0ignored This error is coming while implementing pytorch faster-rcnn repository. Any solutions please??\ncan you use the issue template, we need more information to help you.\nUsing Pytorch 0.3.0 solves the problem.\nare you using python 2? AFAIK FileNotFoundError only exists in python 3, so this should be fixed Also, this error is hiding the real bug in the code, which seems to be the following: Did you mean to take the reciprocal of a LongTensor?\ndoes the shutdown workers FileNotFoundError happen on python 2.7 as well?\nI haven't tested on py2. But I believe it should be an with errno.\nYup I am using Python 2. To overcome this error of FileNotFoundError I have downgraded Pytorch from 0.4.0 version to 0.3.0. Use Pytorch 0.3.0 with Python 2, it will work fine.\nalternatively, upgrading to Python 3 will fix the issue (and Python 3 is nicer than Python 2).", "commid": "pytorch_issue_6932", "tokennum": 605}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-9984ee82d4490f95cf1b5549fdfa93cabf578a9aed1621ea7695d65eb663fdf4", "query": "int_classes = int <ins> if PY2: FileNotFoundError = IOError else: FileNotFoundError = FileNotFoundError </ins> def with_metaclass(meta, *bases): \"\"\"Create a base class with a metaclass.\"\"\" # This requires a bit of explanation: the basic idea is to make a dummy", "positive_passages": [{"docid": "doc-en-pytorch-2800f71e1f12755ba55e3a8e70ff398118a06d101c21171bb009eed5375fe533", "text": "We'll fix this though.\nFor a quick fix you can make python2 compatible by defining FileNotFoundError somewhere\nThis error still exist in pytorch 0.4\nYes it is fixed after 0.4\nI am still getting it...\ndid you build pytorch from source? It was fixed after 0.4 is released, so it will be in the next pytorch release.\nI updated it via\nCan anyone help me with my problem?\nI was also working with faster r-cnn and got the error. Downgrading to 0.3.0 solved FileNotFoundError but will produce another error: self.ratiolistbatch[leftidx:(rightidx+1)] = ((np.float64)) # trainset ratio list ,each batch is same number TypeError: 'module' object is not callable Where this error is related to in the above error message. Upgrading can solve this error but will produce the previous one. How do you cope with the second error?", "commid": "pytorch_issue_6932", "tokennum": }], "negative_passages": []}
|
query_idq-en-pytorch-a7f5f31517bc2f93e661073a8055244cb988c558177acffc671e7b7f0c3b5dfcquerydims = THCudaLongTensor_nDimension(state, indices); THArgCheck(dims <= MAX_CUTORCH_DIMS, 4, CUTORCH_DIM_WARNING); <del> ptrdiff_t numIndices = THCudaLongTensor_nElement(state, indices); int srcDims = THCTensor_(nDimension)(state, dst); cudaStream_t stream = THCState_getCurrentStream(state); THArgCheck(THCudaLongTensor_nDimension(state, indices) == 1, 3, \); THArgCheck(dim < srcDims, 4, \); THArgCheck(srcDims > 0, 2, \); int indContig = THCudaLongTensor_isContiguous(state, indices); </del> // The `src` is partitioned into two parts: // -the size of each slice we are indexing, which is the // total size of the tensor ignoring dimension `dim`; // -the number of indices we are choosing, which is the total size // of the tensor `indices`. <ins> ptrdiff_t sliceSize = THCTensor_(getSliceSize)(state, dst, dim, indices, nullptr); </ins> ptrdiff_t dstTotalSize = THCTensor_(nElement)(state, dst); int64_t dstFillDimSize = THCTensor_(size)(state, dst, dim); <del> ptrdiff_t sliceSize = dstTotalSize / dstFillDimSize; </del> <ins> ptrdiff_t numIndices = THCudaLongTensor_nElement(state, indices); cudaStream_t stream = THCState_getCurrentStream(state); int indContig = THCudaLongTensor_isContiguous(state, indices); </ins> int mpc = THCState_getCurrentDeviceProperties(state)->multiProcessorCount;positive_passagesdociddoc-en-pytorch-f32bbc53b4f3b92b53b5ef9ed6754d57475c03612f4070c1bfd9266ba097f7abtextA = (5, 4) B = (0, 9).view(3, 3) C = (0, 15).view(3, 5) idxs = torch.LongTensor([0, 2, 4]) A.indexadd(0, idxs, B) # RuntimeError: inconsistent tensor size, expected r [4], t [4] and src [3] to have the same number of elements, but got 4, 4 and 3 elements respectively at (...)/aten/src/TH/generic/THTensorMath.c:1008 A.indexadd(0, idxs, C) # RuntimeError: inconsistent tensor size, expected r [4], t [4] and src [5] to have the same number of elements, but got 4, 4 and 5 elements respectively at (...)/aten/src/TH/generic/THTensorMath.c:1008 So far so good. But if we use CUDA... A = (5, 4).cuda() B = (0, 9).view(3, 3).cuda() C = (0, 15).view(3, 5).cuda() idxs = torch.LongTensor([0, 2, 4]).cuda() A.indexadd(0, idxs, B) print(A) # 0 1 2 0 # 0 0 0 0 # 3 4 5 0 # 0 0 0 0 # 6 7 8 0 # [ of size 5x4 (GPU 0)] OK, this looks wrong... A.zero() A.indexadd(0, idxs, C) print(A) # 0 1 2 3 # 4 0 0 0 # 5 6 7 8 # 9 0 0 0 # 10 11 12 13 # [ of size 5x4 (GPU 0)] Now this looks definitely wrong. Increase C's dimension to something like (3, 500), and it overwrites other tensors or triggers asserts. Same thing happens with indexcopy_.\nI'll take it if no one's looking at it yet.", "commid": "pytorch_issue_4213", "tokennum": 489}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-a81e6151b9e45c151c86e332d830aa52c8dfc7010c288dabeaa2a6b4b048a4a9", "query": "for test in tests: _test(*test) <ins> @unittest.skip(\"Not implemented yet\") </ins> def test_align_tensors(self): def reference_fn(*tensors): longest_names = tensors[0].names", "positive_passages": [{"docid": "doc-en-pytorch-4979a1a04833169240c0bad7b12e4a15cd4efb4b3d5ef9e5517488d63a0cc554", "text": "only supports fully named inputs; all input dimensions must have a name. When passing it an unnamed input, it errors out with the following message: It should really say \"Found unnamed dim at index 0 of Tensor[None, None]\".\nfixed in", "commid": "pytorch_issue_27074", "tokennum": 51}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-a81e6151b9e45c151c86e332d830aa52c8dfc7010c288dabeaa2a6b4b048a4a9", "query": "for test in tests: _test(*test) <ins> @unittest.skip(\"Not implemented yet\") </ins> def test_align_tensors(self): def reference_fn(*tensors): longest_names = tensors[0].names", "positive_passages": [{"docid": "doc-en-pytorch-1de20edd2f7958b4095dd34eb2e3927620d004f0d1dfec850faeeac1d6e4b60e", "text": "This is an older implementation that I think doesn't make any sense anymore. We should throw a NYI exception for it. The expected behavior for this should be:\nfixed in", "commid": "pytorch_issue_27073", "tokennum": 36}], "negative_passages": []}
|
{"query_id": "q-en-pytorch-ab5d15d7dd094ae2fa817a899a919076d0ea03d9cb234cc7881a60bf6904f2a2", "query": "class InstanceNorm1d(_InstanceNorm): <del> r\\\Applies Instance Normalization over a 3d input that is seen as a mini-batch. </ins> .. math::", "positive_passages": [{"docid": "doc-en-pytorch-b976af187c9f2d059a39242393a49d0cfec2d87b032cb4f5b2ff2a195b499933", "text": "Ref , the input only supports format. This is not consistent with which supports also .\nit doesn't make much sense to support instance norm for because it is usually normalizing over dimension.\nOkay, so the documentation should be updated?", "commid": "pytorch_issue_4170", "tokennum": 47}], "negative_passages": []}
|
{"query_id": "q-en-pytorch-b85baf3b49273dd2135f72d0e09e246f96aaa047538243ca634f0a03a702b9f3", "query": "// TODO: gradOutput shape check // Resize and initialize result tensor. THCTensor_(resizeAs)(state, gradInput, input); <del> THCTensor_(newContiguous)(state, gradInput); </del> THCTensor_(zero)(state, gradInput); int batchSize;", "positive_passages": [{"docid": "doc-en-pytorch-bb23e78d6c7e0df334c1c790ac874f2ce6aad74888ca6e7be9a41edf5af07978", "text": "Hello I have a training loop running without any errors or warnings on GPUs using Pytorch 2.0.1. When I try to modify my learning loop to run on TPUs I get the error Pytorch XLA developers solved the problem. But after the problem is solved I get the warning below. /home/mfatih/env38/lib/python3.8/site- UserWarning: aten::reshape: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ) Variable.executionengine.runbackward( # Calls into the C++ engine to run the backward pass Pytorch XLA developers told me that the warning is related to Pytorch, not Pytorch-XLA. Is this warning an indication of the wrong backward computations? You can regenerate the error using the public . Please change the working directory on in the config file. Run the for debugging. After a successful short train and val cycle we get the warning. The warning occurs in operations placed in the functions in during backward pass. Here is my environment PyTorch version: 2.2.0.dev20231213+cpu Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.4 LTS (x8664) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version: Could not collect CMake version: version 3.26.0 Libc version: glibc-2.31 Python version: 3.8.10 (default, Nov 22 2023, 10:22:35) [GCC 9.4.0] (64-bit runtime) Python platform: Linux-5.13.", "commid": "pytorch_issue_115816", "tokennum": 551}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-b85baf3b49273dd2135f72d0e09e246f96aaa047538243ca634f0a03a702b9f3", "query": "// TODO: gradOutput shape check // Resize and initialize result tensor. THCTensor_(resizeAs)(state, gradInput, input); <del> THCTensor_(newContiguous)(state, gradInput); </del> THCTensor_(zero)(state, gradInput); int batchSize;", "positive_passages": [{"docid": "doc-en-pytorch-610d7d9396a7aea3808420a0394aaa2f20639c35a9caf9c914bf085668e77c54", "text": "0-1027-gcp-x8664-with-glibc2.29 Is CUDA available: False CUDA runtime version: No CUDA CUDAMODULELOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x8664 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 48 bits virtual CPU(s): 96 On-line CPU(s) list: 0-95 Thread(s) per core: 2 Core(s) per socket: 24 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 85 Model name: Intel(R) Xeon(R) CPU @ 2.00GHz Stepping: 3 CPU MHz: 2000.186 BogoMIPS: 4000.37 Hypervisor vendor: KVM Virtualization type: full L1d cache: 1.5 MiB L1i cache: 1.5 MiB L2 cache: 48 MiB L3 cache: 77 MiB NUMA node0 CPU(s): 0-23,48-71 NUMA node1 CPU(s): 24-47,72-95 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Mitigation; PTE Inversion Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown Vulnerability Meltdown: Mitigation; PTI Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and _user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRSFW, STIBP conditional, RSB filling Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Mitigation; Clear CPU buffers;", "commid": "pytorch_issue_115816", "tokennum": 458}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-b85baf3b49273dd2135f72d0e09e246f96aaa047538243ca634f0a03a702b9f3", "query": "// TODO: gradOutput shape check // Resize and initialize result tensor. THCTensor_(resizeAs)(state, gradInput, input); <del> THCTensor_(newContiguous)(state, gradInput); </del> THCTensor_(zero)(state, gradInput); int batchSize;", "positive_passages": [{"docid": "doc-en-pytorch-720f04bb66fddc3640260d77228825242f89edee09d7121e3a6fa4b5ec2f6940", "text": "SMT Host state unknown Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constanttsc repgood nopl xtopology nonstoptsc cpuid tscknownfreq pni pclmulqdq ssse3 fma cx16 pcid sse41 sse42 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahflm abm 3dnowprefetch invpcidsingle pti ssbd ibrs ibpb stibp fsgsbase tscadjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat mdclear archcapabilities Versions of relevant libraries: [pip3] numpy==1.24.1 [pip3] torch==2.2.0.dev20231213+cpu [pip3] torch-xla==2.2.0+git5577dd7 [pip3] torchaudio==2.2.0.dev20231213+cpu [pip3] torchmetrics==1.2.1 [pip3] torchsummary==1.5.1 [pip3] torchvision==0.18.0.dev20231213+cpu [conda] Could not collect cc\nFrom the log it seems like it suggese XLA need to manually register the backward for reshape which is a bit weird. This is the first time I see this error, if you can shed some light on this issue that would be great!\nHey! this warning recently as a set of more testing to ensure custom ops are properly registered. This is expected to trigger if you register a backend kernel (XLA key) but no autograd imple (AutogradXLA) for a given op.", "commid": "pytorch_issue_115816", "tokennum": 517}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-b85baf3b49273dd2135f72d0e09e246f96aaa047538243ca634f0a03a702b9f3", "query": "// TODO: gradOutput shape check // Resize and initialize result tensor. THCTensor_(resizeAs)(state, gradInput, input); <del> THCTensor_(newContiguous)(state, gradInput); </del> THCTensor_(zero)(state, gradInput); int batchSize;", "positive_passages": [{"docid": "doc-en-pytorch-d4c09f02c86b37ecfb041629313519478088158900d45b3703e3005b6a9f5f2c", "text": "Since reshape() cannot have an autograd formula (it is sometimes a view and sometimes not), then you should register your kernel as a CompositeImplicitAutograd kernel (this will properly register onto AutogradXLA and remove this warning)\nThanks ! This make sense! I have a follow up question, we registered reshape and reshape symint in and This file will then get codegen into and . What's the correct way of marking to be ?\nHo I'm not familiar with how your codegen works... But in principles, I guess you want to do the same as NestedTensor here which uses for the dispatch key in do you have more context on this by any chance?\nit's probably enough for you to just register that kernel to both the and dispatch keys (this is similar to what CompositeImplicitAutograd does, but it does it for all backends). I'm a bit confused though - Why do you need a custom reshape impl? And how are you going to get training support for it? (do you have a custom implementation of your reshape's backward)\nHaha this is actually by you in for functionization. Do you think we still need this trick for functionization?\nYes - I think we should be able to remove it. Do you mind trying that? That should just run the existing kernel defined . And that should be identical to the current behavior, since the \ for reshape in XLA just calls back into ATen ()\nI merged , let's wait and see if tmr's nightly will resolve this issue.\nThank you. I will test it and give feedback. Till now, I have run my experiments with the warning. Do I need to rerun the experiments?\nOK, with the recent nighly versions of Pytorch and Pytorch-XLA I do not observe the warning anymore while running my experiments. Thank you. Should I rerun the experiments that were run with past Pytorch versions throwing the warning? Does this update only remove the warning or does it also change the calculations that I ran with the previous Pytorch versions? Can I rely on my previous experiments?\nSorry we have to revert the pr since it creates regression on nightly.\nHello I am using nightly without any problem. What should I do? Wait for new update?\nif you don't see any regression in performance, you can likely just keep using today's nightly.commidpytorch_issue_115816tokennumnegative_passages |
|
query_idq-en-pytorch-b85baf3b49273dd2135f72d0e09e246f96aaa047538243ca634f0a03a702b9f3query// TODO: gradOutput shape check // Resize and initialize result tensor. THCTensor_(resizeAs)(state, gradInput, input); <del> THCTensor_(newContiguous)(state, gradInput); </del> THCTensor_(zero)(state, gradInput); int batchSize;positive_passagesdociddoc-en-pytorch-51bfb24ed99340560e31d767d56fad72bba3d66f6c6a21d16e7d2ad20117de7etextI am not even sure if that warning has any real impact on the model yet.\nThank you for the answer Reverting will cause problem for me. But reverting will only regenerate the warning. In my experiments, till now, I have not observed any difference with or without the warning. But I am not 100% sure. I'm looking forward to any more information you can give me.\nSeeing the same error. Any solutions?\nYea I need to find someone to look into this issue. It seems like removing the reshape lowering will introduce a regression in performance.\nI am facing the same issue, I don't even have a \ call in my code anywhere.\nreopened , will try to fix this before the 2.3 release branch cut.\nwill lead the investigation on the regression that blocks merging Regression only happens in dynamo so my guess is it has something to do with functionization pass..\nRunning through a small example with on HEAD and on , the snippet HLOo generated for in both cases are: Now will try to experiment with functionalizaton explicitly on/off to see if there are some unexpected decompositions.\nI am also experiencing the same problem. I have also seen significant reduction in the learning capabilities of my model. My model has more than 80% accuracy when I trained it in another device without xla. However, its accuracy has been reduced to less than 10% compared to when I trained on a GCP TPU VM with pytorch-xla. It is also apparent that loss values do not change between consecutive epochs during training. Here is a portion of the output including the error message: {'loss': 0.0288, 'gradnorm': 0., 'learningrate': 4.-05, 'epoch': 6.0} {'evalloss': 0., 'evalruntime': 5.4579, 'evalsamplespersecond': 615.621, 'evalstepspersecond': 19.238, 'epoch': 6.0} 20% 2502/ [28:36<1:08:07, 2.45it/s/home/egecitepred/miniconda3/envs/ege/lib/python3.8/site- UserWarning: aten::reshape: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it.commidpytorch_issue_115816tokennumnegative_passages |
|
query_idq-en-pytorch-b85baf3b49273dd2135f72d0e09e246f96aaa047538243ca634f0a03a702b9f3query// TODO: gradOutput shape check // Resize and initialize result tensor. THCTensor_(resizeAs)(state, gradInput, input); <del> THCTensor_(newContiguous)(state, gradInput); </del> THCTensor_(zero)(state, gradInput); int batchSize;positive_passagesdociddoc-en-pytorch-c307dafb9f01f7ffaaf20d7130bad0d1456e624382978f323a770638d34c2727textThis may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ) Variable.executionengine.runbackward( # Calls into the C++ engine to run the backward pass {'loss': 0.0289, 'gradnorm': 0., 'learningrate': 3.-05, 'epoch': 7.0} {'evalloss': 0., 'evalruntime': 5.4422, 'evalsamplespersecond': 617.398, 'evalstepspersecond': 19.294, 'epoch': 7.0} 23% 2919/ [31:44<1:05:13, 2.45it/s/home/egecitepred/miniconda3/envs/ege/lib/python3.8/site- UserWarning: aten::reshape: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd.commidpytorch_issue_115816tokennumnegative_passages |
|
query_idq-en-pytorch-b85baf3b49273dd2135f72d0e09e246f96aaa047538243ca634f0a03a702b9f3query// TODO: gradOutput shape check // Resize and initialize result tensor. THCTensor_(resizeAs)(state, gradInput, input); <del> THCTensor_(newContiguous)(state, gradInput); </del> THCTensor_(zero)(state, gradInput); int batchSize;positive_passagesdociddoc-en-pytorch-11b2dd68caf671acca87651f70c4810948e23fabad382b85538661b6aeefdf39text(Triggered internally at ) Variable.executionengine.runbackward( # Calls into the C++ engine to run the backward pass {'loss': 0.0288, 'gradnorm': 0., 'learningrate': 3.-05, 'epoch': 8.0} {'evalloss': 0., 'evalruntime': 5.4432, 'evalsamplespersecond': 617.286, 'evalstepspersecond': 19.29, 'epoch': 8.0} I am also receiving this error message at every epoch. Is it possible to register CompositeImplicitAutograd key to autograd kernel like the error message says? If someone could provide me instructions on registering this key, I could try to see if this fixes the error or at least the reduction of the performance. I don't have enough experience in this area yet. So, I couldn't find a way of registering the autograd key.\nHey thanks for reporting the issue. Just to confirm, the warning messages in the original issue should not have an impact in the performance. We tried to remove this warning message and merged , which caused a performance regression (which is now reverted in head). Which nightlies did you observe the performance regression in your case?\nRunning a quick experiment on resnet18, the performance is more or so similar with HEAD and reshape op removed. With HEAD, timing some runs of the resnet18 dynamo unit test: And timing some runs of the resnet18 dynamo unit test with removed: Before we continue further with the investigation, and I discussed that if simply registering this kernel to both the AutogradXLA and XLA dispatch keys would remove the warning message, we might just do that.. though I'm not too familiar this. do you know how can we register the kernel?\nHello This message seems to slightly reduced the performance of my model unlike others. I am not absolutely sure about this but I made some additional observations that lead me to believe that. I have tried the same model on a GPU without XLA. It has somewhat higher performance compared to TPU VM with PytorchXLA. However, my main problem is the reduction of learning capabilities after training. This error message reduced my result metrics after training from ~80% to ~20% under the same conditions. Also, my output for each epoch shows the same evalloss value.", "commid": "pytorch_issue_115816", "tokennum": 519}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-b85baf3b49273dd2135f72d0e09e246f96aaa047538243ca634f0a03a702b9f3", "query": "// TODO: gradOutput shape check // Resize and initialize result tensor. THCTensor_(resizeAs)(state, gradInput, input); <del> THCTensor_(newContiguous)(state, gradInput); </del> THCTensor_(zero)(state, gradInput); int batchSize;", "positive_passages": [{"docid": "doc-en-pytorch-d7ce5bacf8c5d8addc2f02206b44de96d9f616f209ea71e1afc93efc3eacf125", "text": "My other experiments on a GPU had the same eval loss value on epoch 1. It seems the model (with TPU XLA) is learning only for the first epoch and fails to properly backprop. Thus, it keeps repeating the first epoch again and again. Although I am not sure about this, it is the only explanation I could come up. I am not sure which nightly version I am using. However, I know that I downloaded this library on February 29th using this command: \"pip install torch~=2.2.0 torch_xla[tpu]~=2.2.0 -f \"\nHey I wonder if that's a different issue -- could you open a separate issue in PyTorch/XLA? Since Brian is OOO, do you have any idea on the kernel registration mentioned in\nFrom the warning, you are hitting the AutogradXLA key since the fallback at that key is triggering this warning. So the problem is that your differentiable reshape implementation should be registered at that key. If you say above that \ gets generated then I would check if reshare is there. If it is, I would double check that this file is properly included in your binary and properly registers the kernels. You can also check to see all the kernels registered for that op and from where\nHey thanks for the pointers! I tried to register the op to with With this PR, I could see it get registered: However, now I'm getting the error: I assume something is wrong with our new lowering, which is just a redispatch to functionalization -- However, this seems to be what we're doing for some of similar ops -- Maybe I'm just confused here lol, do you see anything obvious that is wrong? Thanks for the help, again!", "commid": "pytorch_issue_115816", "tokennum": 396}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-b85baf3b49273dd2135f72d0e09e246f96aaa047538243ca634f0a03a702b9f3", "query": "// TODO: gradOutput shape check // Resize and initialize result tensor. THCTensor_(resizeAs)(state, gradInput, input); <del> THCTensor_(newContiguous)(state, gradInput); </del> THCTensor_(zero)(state, gradInput); int batchSize;", "positive_passages": [{"docid": "doc-en-pytorch-146b6b383adff81e02ab2534504f8a13ac7c39a12d1344603aeaf1515dcc54e0", "text": "The vision pinned hash is from a long time ago (2020?) and attempting to update it to a newer hash fails. See for examples of CI failing when the hash is updated, and for a possible reason why. When fixed, we should add vision back into . n/a cc\nSelf-assigning myself to have a look at it\nto TorchVision, which changes how operates when given a source folder: instead of execution it extracts the requirements from above-mentioned toml file. So, one can either pass to continue installing torchvision in the old way, or update to contain proper dependencies\nfixed by", "commid": "pytorch_issue_80745", "tokennum": 123}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-b85baf3b49273dd2135f72d0e09e246f96aaa047538243ca634f0a03a702b9f3", "query": "// TODO: gradOutput shape check // Resize and initialize result tensor. THCTensor_(resizeAs)(state, gradInput, input); <del> THCTensor_(newContiguous)(state, gradInput); </del> THCTensor_(zero)(state, gradInput); int batchSize;", "positive_passages": [{"docid": "doc-en-pytorch-72c54752b0c110f7e004bba0790b091faa84938c701f1266676f5f4fea7b786b", "text": "When I try to train my model which contains MaxPool3d , It always end up with 'out of memory' error. my environment info is here: 16.04 PyTorch version: 0.4.0a0+ installed PyTorch from source python version is 3.6.2 CUDA/cuDNN version: 9.1/7.1.2 GCC version (if compiling from source):GCC 5.4 Build command you used (if compiling from source): as default I can reproduce this bug by the following script:\nThanks for the report, I can reproduce this and am looking into it.", "commid": "pytorch_issue_6222", "tokennum": 133}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-bb1cd692f90c2a52292d5a2b7aad0a809e6487387b0460bdb0ecbcb9aee58805", "query": "Once these are installed, you can use the backend for Caffe2:: # ...continuing from above <del> import onnx.backend.caffe2 as backend </del> <ins> import onnx_caffe2.backend as backend </ins> import numpy as np rep = backend.prepare(graph, device=\"CUDA:0\") # or \"CPU\" # For the Caffe2 backend: # rep.predict_net is the Caffe2 protobuf for the network # rep.workspace is the Caffe2 workspace for the network <del> # (see the class onnx.backend.c2.Workspace) </del> <ins> # (see the class onnx_caffe2.backend.Workspace) </ins> outputs = rep.run(np.random.randn(10, 3, 224, 224).astype(np.float32)) # To run networks with more than one input, pass a tuple # rather than a single numpy ndarray.", "positive_passages": [{"docid": "doc-en-pytorch-fbf4259f563edf12c2e0a1d199549c5c2b63031ab5fdcc043d34134ee0440fa8", "text": "Functions and does not work correctly when called in DataPrallel mode The reason is while the first thread is budling the extension, the second one asks for a current version, and the returns the version of the first thread build. Second thread then skips lock file and tries to load non existed extension, failing with an error: I would like to submit a PR with a fix: PyTorch version: 2.4.0a0+git8046de3 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x8664) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version: Could not collect CMake version: version 3.26.4 Libc version: glibc-2.31 Python version: 3.12.2 (main, Feb 27 2024, 17:35:02) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.15.0-105-generic-x8664-with-glibc2.31 Is CUDA available: True CUDA runtime version: 11.8.89 CUDAMODULELOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090 GPU 1: NVIDIA GeForce RTX 3090 GPU 2: NVIDIA GeForce RTX 3090 Nvidia driver version: 525.147.05 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x8664 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 48 bits virtual CPU(s): 36 On-line CPU(s) list: 0-35 Thread(s) per core: 2 Core(s) per socket: 18 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 85 Model name: Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz Stepping: 7 CPU MHz: 3000.000 CPU max MHz: 4800,", "commid": "pytorch_issue_125403", "tokennum": 529}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-bb1cd692f90c2a52292d5a2b7aad0a809e6487387b0460bdb0ecbcb9aee58805", "query": "Once these are installed, you can use the backend for Caffe2:: # ...continuing from above <del> import onnx.backend.caffe2 as backend </del> <ins> import onnx_caffe2.backend as backend </ins> import numpy as np rep = backend.prepare(graph, device=\"CUDA:0\") # or \"CPU\" # For the Caffe2 backend: # rep.predict_net is the Caffe2 protobuf for the network # rep.workspace is the Caffe2 workspace for the network <del> # (see the class onnx.backend.c2.Workspace) </del> <ins> # (see the class onnx_caffe2.backend.Workspace) </ins> outputs = rep.run(np.random.randn(10, 3, 224, 224).astype(np.float32)) # To run networks with more than one input, pass a tuple # rather than a single numpy ndarray.", "positive_passages": [{"docid": "doc-en-pytorch-76314bf315c51ae1e42274f6992138e7d7be4c92c65698ced3af399c7a8af3b5", "text": "0000 CPU min MHz: 1200,0000 BogoMIPS: 6000.00 Virtualization: VT-x L1d cache: 576 KiB L1i cache: 576 KiB L2 cache: 18 MiB L3 cache: 24,8 MiB NUMA node0 CPU(s): 0-35 Vulnerability Gather data sampling: Mitigation; Microcode Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Retbleed: Mitigation; Enhanced IBRS Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Mitigation;", "commid": "pytorch_issue_125403", "tokennum": 251}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-bb1cd692f90c2a52292d5a2b7aad0a809e6487387b0460bdb0ecbcb9aee58805", "query": "Once these are installed, you can use the backend for Caffe2:: # ...continuing from above <del> import onnx.backend.caffe2 as backend </del> <ins> import onnx_caffe2.backend as backend </ins> import numpy as np rep = backend.prepare(graph, device=\"CUDA:0\") # or \"CPU\" # For the Caffe2 backend: # rep.predict_net is the Caffe2 protobuf for the network # rep.workspace is the Caffe2 workspace for the network <del> # (see the class onnx.backend.c2.Workspace) </del> <ins> # (see the class onnx_caffe2.backend.Workspace) </ins> outputs = rep.run(np.random.randn(10, 3, 224, 224).astype(np.float32)) # To run networks with more than one input, pass a tuple # rather than a single numpy ndarray.", "positive_passages": [{"docid": "doc-en-pytorch-3f1cdae19f17e9844880d95b77ae2ec6a120640bf42a5e5d758f4980af45dd07", "text": "TSX disabled Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constanttsc art archperfmon pebs bts repgood nopl xtopology nonstoptsc cpuid aperfmperf pni pclmulqdq dtes64 monitor dscpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse41 sse42 x2apic movbe popcnt tscdeadlinetimer aes xsave avx f16c rdrand lahflm abm 3dnowprefetch cpuidfault epb catl3 cdpl3 invpcidsingle ssbd mba ibrs ibpb stibp ibrsenhanced tprshadow vnmi flexpriority ept vpid eptad fsgsbase tscadjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdta avx512f avx512dq rdseed adx smap clflushopt clwb intelpt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqmllc cqmoccupllc cqmmbmtotal cqmmbmlocal dtherm ida arat pln pts hwp hwpactwindow hwpepp hwppkgreq avx512vnni mdclear flushl1d archcapabilities Versions of relevant libraries: [pip3] pytorch-triton==3.0.0+ [pip3] torch==2.4.0a0+git8046de3 [conda] magma-cuda110 2.5.2 1 pytorch [conda] mkl-include 2024.1.0 intel691 intel [conda] mkl-static 2024.1.0 intel691 intel [conda] pytorch-triton 3.0.0+ pypi0 pypi [conda] torch 2.4.0a0+git8046de3 dev_0 <developcc", "commid": "pytorch_issue_125403", "tokennum": 557}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-bb9dc467b4098d579f411e64f059a5d51c87ab0ca69a149799adb059f3dc2657", "query": "import contextlib import platform import ctypes <ins> import os </ins> import torch _initialized = False", "positive_passages": [{"docid": "doc-en-pytorch-20b097551cd6d87c3c98ac23415ff9359a3ec462e8d54adf471c355cc4b15a7d", "text": "right now if is not in LDLIBRARYPATH, though it was found at compile-time, these lines will fail: Avoid this, by idk doing something... I think i have a few good ideas.\nWe need to add /usr/local/cuda/lib64 (or that's mac dir) to the rpath of _C\nthe problem i dont think is to add it to rpath, it's because we\nRight. We could try with the default cuda installation path if it's not found from LDLIBRARYPATH.\nActually, do we really want to use the compile path? We should try to detect it at runtime, right? Or are we going to ship CUDA libs in our binaries?\nwe first load using runtime paths, and if it fails, fallback to path known at compile time.\nI fear this is going to get brittle, especially if you are shipping CUDA libs. e.g. a user swaps HW and picks up a new driver and/or toolkit. As says, I'm pretty sure you want to use the runtime paths. Similarly, CUDA 7.5 won't run on Pascal and we have a cudnn built for 7.5 that will not do the right thing on CUDA 8, so those need to go together. If a user installs the CUDA bits via deb/rpm the right things are supposed to happen. If you push bits, we need to figure out how to make sure things line up. But this is perhaps a much larger packaging conversation.\nYeah, I've implemented it to first try loading it without any path and then try and . I'll push the commit tomorrow.commidpytorch_issue_153tokennumnegative_passages |
|
{"query_id": "q-en-pytorch-c4618373d440907e1f401a56ccadf8cd14fccfb4e8cb88c2edde7bdce55d57ab", "query": "Threshold is defined as:: <del> y = x if x >= threshold value if x < threshold </del> <ins> y = x if x > threshold value if x <= threshold </ins> Args: threshold: The value to threshold at", "positive_passages": [{"docid": "doc-en-pytorch-e5a3bd5a2208d8eeb431a5eae3ddbd2edbe57fc7a47152f299982d11c4fbefe4", "text": "Hello, In the it says So the following: should evaluate to 1, but instead returns 0. Maybe it should be corrected to:", "commid": "pytorch_issue_2025", "tokennum": }], "negative_passages": []} |
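The query above corrects the `nn.Threshold` docstring from `x >= threshold` to `x > threshold`. A short check of that behavior, assuming a recent PyTorch where `nn.Threshold(threshold, value)` replaces every element that is not strictly greater than the threshold:

```python
# nn.Threshold(threshold, value): y = x if x > threshold, otherwise value.
import torch
import torch.nn as nn

m = nn.Threshold(1.0, 0.0)
x = torch.tensor([0.5, 1.0, 2.0])
print(m(x))  # tensor([0., 0., 2.]) -- the element equal to 1.0 is replaced
```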
|
{"query_id": "q-en-pytorch-ce02830fb76522fb46cdd15ed92cd7f26880cbfff6e16ee2322eac970cdb7ebe", "query": "#ifdef TH_REAL_IS_INT #define NUMPY_TYPE_ENUM NPY_INT32 #endif <ins> #ifdef TH_REAL_IS_SHORT #define NUMPY_TYPE_ENUM NPY_INT16 #endif </ins> #ifdef TH_REAL_IS_BYTE #define NUMPY_TYPE_ENUM NPY_UINT8 #endif", "positive_passages": [{"docid": "doc-en-pytorch-5740cb3214dad75a3491bc5dcb6e752c151b423c2a46a03c332f63602b38bb18", "text": "Should we add a ShortTensor Type, or just convert int16 to IntTensor ?\nWe already have a type available in pytorch. We have to enable conversion in the relevant function here:\nAuto travis-ci failed at python2.7 :(\nDo you have any plan to support int8 numpy conversion while you have too?\nChar tensor uses char which is not guaranteed to be signed by the C standard. We'd need to change our C code to use a\nI try , it complains , any idea?\nThe error message is quite self explanatory. PyTorch doesn't support tensors at the moment.", "commid": "pytorch_issue_891", "tokennum": 135}], "negative_passages": []} |
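The diff above adds an NPY_INT16 mapping so that numpy int16 arrays convert to a ShortTensor. A quick illustration with `torch.from_numpy`, assuming a PyTorch build where this conversion is available:

```python
# int16 numpy arrays map to torch.int16 (ShortTensor) once the
# NPY_INT16 <-> TH_REAL_IS_SHORT mapping from the diff is in place.
import numpy as np
import torch

a = np.array([1, 2, 3], dtype=np.int16)
t = torch.from_numpy(a)
print(t.dtype)  # torch.int16
a[0] = 7
print(t[0])     # tensor(7, dtype=torch.int16) -- memory is shared, not copied
```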
|
{"query_id": "q-en-pytorch-db0578e90ce1ed1beb7b7ec598d26185a9e2abbab8bf9d89784a28742d6b16a4", "query": "{\, (PyCFunction)THCPModule_initialSeed, METH_NOARGS, NULL}, {\, (PyCFunction)THCPModule_cudaHostAllocator, METH_NOARGS, NULL}, {\, (PyCFunction)THCPModule_cudaSynchronize, METH_NOARGS, NULL}, <ins> {\, (PyCFunction)THCPModule_getLibPath, METH_NOARGS, NULL}, </ins> #endif {\, (PyCFunction)THPModule_safeCall, METH_VARARGS | METH_KEYWORDS, NULL}, {\, (PyCFunction)THPModule_sendfd, METH_VARARGS, NULL},", "positive_passages": [{"docid": "doc-en-pytorch-20b097551cd6d87c3c98ac23415ff9359a3ec462e8d54adf471c355cc4b15a7d", "text": "right now if is not in LDLIBRARYPATH, though it was found at compile-time, these lines will fail: Avoid this, by idk doing something... I think i have a few good ideas.\nWe need to add /usr/local/cuda/lib64 (or that's mac dir) to the rpath of _C\nthe problem i dont think is to add it to rpath, it's because we\nRight. We could try with the default cuda installation path if it's not found from LDLIBRARYPATH.\nActually, do we really want to use the compile path? We should try to detect it at runtime, right? Or are we going to ship CUDA libs in our binaries?\nwe first load using runtime paths, and if it fails, fallback to path known at compile time.\nI fear this is going to get brittle, especially if you are shipping CUDA libs. e.g. a user swaps HW and picks up a new driver and/or toolkit. As says, I'm pretty sure you want to use the runtime paths. Similarly, CUDA 7.5 won't run on Pascal and we have a cudnn built for 7.5 that will not do the right thing on CUDA 8, so those need to go together. If a user installs the CUDA bits via deb/rpm the right things are supposed to happen. If you push bits, we need to figure out how to make sure things line up. But this is perhaps a much larger packaging conversation.\nYeah, I've implemented it to first try loading it without any path and then try and . I'll push the commit tomorrow.", "commid": "pytorch_issue_153", "tokennum": 365}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-dc5476dca0cbdb1ef52d90efeb4a0f4004888ebed034e1ef2c1222b4241d29be", "query": "def align_tensors(*tensors): <del> if not torch._C._BUILD_NAMEDTENSOR: raise RuntimeError('NYI: torch.align_tensors is experimental and a part ' 'of our named tensors project.') return torch._C._VariableFunctions.align_tensors(tensors) </del> <ins> raise RuntimeError('`align_tensors` not yet implemented.') </ins>", "positive_passages": [{"docid": "doc-en-pytorch-4979a1a04833169240c0bad7b12e4a15cd4efb4b3d5ef9e5517488d63a0cc554", "text": "only supports fully named inputs; all input dimensions must have a name. When passing it an unnamed input, it errors out with the following message: It should really say \"Found unnamed dim at index 0 of Tensor[None, None]\".\nfixed in", "commid": "pytorch_issue_27074", "tokennum": 51}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-dc5476dca0cbdb1ef52d90efeb4a0f4004888ebed034e1ef2c1222b4241d29be", "query": "def align_tensors(*tensors): <del> if not torch._C._BUILD_NAMEDTENSOR: raise RuntimeError('NYI: torch.align_tensors is experimental and a part ' 'of our named tensors project.') return torch._C._VariableFunctions.align_tensors(tensors) </del> <ins> raise RuntimeError('`align_tensors` not yet implemented.') </ins>", "positive_passages": [{"docid": "doc-en-pytorch-1de20edd2f7958b4095dd34eb2e3927620d004f0d1dfec850faeeac1d6e4b60e", "text": "This is an older implementation that I think doesn't make any sense anymore. We should throw a NYI exception for it. The expected behavior for this should be:\nfixed incommidpytorch_issue_27073tokennumnegative_passages |
|
{"query_id": "q-en-pytorch-de06230c6d43c431a6f7a275642da501bed88c2969a51d8612d8f99249fb61e3", "query": "return (hasattr(torch._C, '_cuda_isDriverSufficient') and torch._C._cuda_isDriverSufficient()) <del> def _lazy_init(): global _initialized, _cudart if _initialized: return </del> <ins> def _load_cudart(): system = platform.system() lib_name = 'libcudart.' + ('dylib' if system == 'Darwin' else 'so') lib_paths = [ lib_name, os.path.join(torch._C._cuda_getLibPath(), lib_name), os.path.join('/usr/local/cuda/lib64', lib_name), os.path.join('/usr/local/cuda/lib', lib_name), ] for path in lib_paths: try: return ctypes.cdll.LoadLibrary(path) except OSError: pass raise RuntimeError(\ \ + (\ if system == 'Darwin' else \) + \) def _check_driver(): </ins> if not hasattr(torch._C, '_cuda_isDriverSufficient'): raise AssertionError(\) if not torch._C._cuda_isDriverSufficient():", "positive_passages": [{"docid": "doc-en-pytorch-20b097551cd6d87c3c98ac23415ff9359a3ec462e8d54adf471c355cc4b15a7d", "text": "right now if is not in LDLIBRARYPATH, though it was found at compile-time, these lines will fail: Avoid this, by idk doing something... I think i have a few good ideas.\nWe need to add /usr/local/cuda/lib64 (or that's mac dir) to the rpath of _C\nthe problem i dont think is to add it to rpath, it's because we\nRight. We could try with the default cuda installation path if it's not found from LDLIBRARYPATH.\nActually, do we really want to use the compile path? We should try to detect it at runtime, right? Or are we going to ship CUDA libs in our binaries?\nwe first load using runtime paths, and if it fails, fallback to path known at compile time.\nI fear this is going to get brittle, especially if you are shipping CUDA libs. e.g. a user swaps HW and picks up a new driver and/or toolkit. As says, I'm pretty sure you want to use the runtime paths. Similarly, CUDA 7.5 won't run on Pascal and we have a cudnn built for 7.5 that will not do the right thing on CUDA 8, so those need to go together. If a user installs the CUDA bits via deb/rpm the right things are supposed to happen. If you push bits, we need to figure out how to make sure things line up. But this is perhaps a much larger packaging conversation.\nYeah, I've implemented it to first try loading it without any path and then try and . I'll push the commit tomorrow.", "commid": "pytorch_issue_153", "tokennum": 365}], "negative_passages": []} |
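Several records above share the passage about loading libcudart through runtime paths first and only then falling back to locations known at compile time. The sketch below is a standalone version of that fallback pattern using ctypes; the helper name and the extra search directories in the commented usage are illustrative, not part of the original patch.

```python
# Generic "try runtime paths first, then known fallbacks" loader, modeled
# on the _load_cudart helper quoted in the query above.
import ctypes
import os
import platform

def load_first_available(lib_name, extra_dirs=()):
    # Bare name first (uses the normal dynamic-loader search path),
    # then each fallback directory in order.
    candidates = [lib_name] + [os.path.join(d, lib_name) for d in extra_dirs]
    for path in candidates:
        try:
            return ctypes.CDLL(path)
        except OSError:
            continue
    raise RuntimeError("could not load " + lib_name)

suffix = "dylib" if platform.system() == "Darwin" else "so"
# Usage (paths are illustrative):
# cudart = load_first_available("libcudart." + suffix,
#                               ["/usr/local/cuda/lib64", "/usr/local/cuda/lib"])
```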
|
{"query_id": "q-en-pytorch-df3cd13f32f960ed5135f9b9d851fc5082eea1f2fb2d456402fb2815c37dc5ce", "query": "extern PyObject * THCPModule_initialSeed(PyObject *_unused); extern PyObject * THCPModule_cudaHostAllocator(PyObject *_unused); extern PyObject * THCPModule_cudaSynchronize(PyObject *_unused); <ins> extern PyObject * THCPModule_getLibPath(PyObject *_unused); </ins> #endif static PyMethodDef TorchMethods[] = {", "positive_passages": [{"docid": "doc-en-pytorch-20b097551cd6d87c3c98ac23415ff9359a3ec462e8d54adf471c355cc4b15a7d", "text": "right now if is not in LDLIBRARYPATH, though it was found at compile-time, these lines will fail: Avoid this, by idk doing something... I think i have a few good ideas.\nWe need to add /usr/local/cuda/lib64 (or that's mac dir) to the rpath of _C\nthe problem i dont think is to add it to rpath, it's because we\nRight. We could try with the default cuda installation path if it's not found from LDLIBRARYPATH.\nActually, do we really want to use the compile path? We should try to detect it at runtime, right? Or are we going to ship CUDA libs in our binaries?\nwe first load using runtime paths, and if it fails, fallback to path known at compile time.\nI fear this is going to get brittle, especially if you are shipping CUDA libs. e.g. a user swaps HW and picks up a new driver and/or toolkit. As says, I'm pretty sure you want to use the runtime paths. Similarly, CUDA 7.5 won't run on Pascal and we have a cudnn built for 7.5 that will not do the right thing on CUDA 8, so those need to go together. If a user installs the CUDA bits via deb/rpm the right things are supposed to happen. If you push bits, we need to figure out how to make sure things line up. But this is perhaps a much larger packaging conversation.\nYeah, I've implemented it to first try loading it without any path and then try and . I'll push the commit tomorrow.commidpytorch_issue_153tokennumnegative_passages |
|
{"query_id": "q-en-pytorch-eead1b452de8c5cf90bef351b8c615d64f8e787012ed322d9368522c8fd013f1", "query": ".. math:: f(X) = sqrt[p]{sum_{x in X} x^{p}} <del> - At p = infinity, one gets Max Pooling - At p = 1, one gets Average Pooling </del> <ins> - At p = infinity, one gets Max Pooling - At p = 1, one gets Sum Pooling (which is proportional to Average Pooling) </ins> Args: kernel_size: a single int, the size of the window", "positive_passages": [{"docid": "doc-en-pytorch-946f6b9e14808f1c7f57d50e741fb79df2db8e2dc593391fafd4a2221b5e2b8b", "text": "The math rendering doesn't end where it should, making the text after it difficult to read: <img width=\"719\" alt=\"screen shot 2018-03-12 at 5 07 03 pm\" src=\"https://user-\"\nThanks for the report! Would you be interested in submitting a PR that fixes it?\nSure, I can give it a try!\nThis is being fixed in this PR", "commid": "pytorch_issue_5718", "tokennum": 91}], "negative_passages": []} |
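The docstring diff above clarifies that L^p pooling with p = 1 is sum pooling (proportional to average pooling), not average pooling. A quick numeric check with `nn.LPPool2d`, assuming the standard definition f(X) = (sum over the window of x^p)^(1/p) and non-negative inputs:

```python
# For p = 1 over a 2x2 window, LPPool equals the window sum, i.e.
# kernel_area * AvgPool, not AvgPool itself.
import torch
import torch.nn as nn

x = torch.rand(1, 1, 4, 4)
lp = nn.LPPool2d(norm_type=1, kernel_size=2)(x)
avg = nn.AvgPool2d(kernel_size=2)(x)
print(torch.allclose(lp, 4 * avg))  # True: sum pooling == 4 * average pooling
```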
|
{"query_id": "q-en-pytorch-f44b85ebdbf00d0ba828274a4f940924c2065b2b870adae6faab246645d80f6e", "query": "np.float, np.int64, np.int32, <ins> np.int16, </ins> np.uint8 ] for dtype in dtypes:", "positive_passages": [{"docid": "doc-en-pytorch-5740cb3214dad75a3491bc5dcb6e752c151b423c2a46a03c332f63602b38bb18", "text": "Should we add a ShortTensor Type, or just convert int16 to IntTensor ?\nWe already have a type available in pytorch. We have to enable conversion in the relevant function here:\nAuto travis-ci failed at python2.7 :(\nDo you have any plan to support int8 numpy conversion while you have too?\nChar tensor uses char which is not guaranteed to be signed by the C standard. We'd need to change our C code to use a\nI try , it complains , any idea?\nThe error message is quite self explanatory. PyTorch doesn't support tensors at the moment.", "commid": "pytorch_issue_891", "tokennum": 135}], "negative_passages": []} |
|
|