| {"_id":"doc-en-pytorch-b6c54f2eecc6fc5c9dd06c86954ad437cfcdd2a7e5b2a92006d42bbc302684a3","title":"","text":"(old title) When build C extension, the error: FileNotFoundError was got. OS: Windows 10 pro PyTorch version: 0.4.0a0+ How you installed PyTorch (conda, pip, source): source Python version: 3.6.4 CUDA/cuDNN version: CUDA 9.0 GCC version (if compiling from source): msvc 14/15 (then compiling with CUDA I use VS2015, but when build extension, the program automatically use vs2017) When building c extension on Windows, I got the error: (The Chinese above means compiler success compile the library, and generated .lib and .exp) And the same error was got on a Linux Work Station. (gcc is 5.4.0)\nIn , copy the linked file from , for example , but in my Windows, and a Linux work station, the linked one is in: ,for example . So, there must be a bug, or error in pytorch's ffi or python's ffi. (pyhton 3.6.4, cffi 1.11.4), If it broke down because of change in cffi, I think I can create a PR. I do think this is because of the change of cffi api,\nCC who enabled extension build for Windows on\nIs this change documented in somewhere like Python SDK?"} | |
| {"_id":"doc-en-pytorch-73afc555bf492c7072a4f9994675674d19a7420810f08eaca2d074197550e516","title":"","text":"max pooling functions are not consistent with max functions. Below an example, every max pooling (be it 1d, 2d or 3d, adaptive or not) acts the same, on cpu or on cuda. Essentially, there are two fondamental differences : max pooling of all values is while for it's max pooling of nan and valid values is valid values, which means s get ignored, while for , as soon as there is a value, the result is . More generally, choosing explicetely how to deal with as in numpy () could be a solution, but maybe this is related to CuDNN's max pooling ? Built from latest sources (as of 05/17) PyTorch version: 0.5.0a0+ Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.4 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: Quadro M1000M Nvidia driver version: 390.30 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux-gnu/libcudnnstaticv7.a Versions of relevant libraries: [conda] magma-cuda91 2.3.0 1 pytorch [conda] torch 0.5.0a0+ <pip[conda] torch 0.3.1b0+ <pip[conda] torch 0.5.0a0+ <pip\nHi, . Thank you for providing the example. I'll take a look.\nHmhm. I seem to get a double free in the ...\nAnother question: Do we really want to provide an option to ignore NaN? My view is that \"if you have nan in your net, you're screwed\", so I would just return NaN there, personally.\nMy practical usecase is for KITTI groundtruth depthmaps and FlowMaps which are 2D sparse arrays."} | |
| {"_id":"doc-en-pytorch-202a6b4a46ae62b8f7553c0429b323612fb197593d261f12c30c05b56ae7ce77","title":"","text":"For an algorithm that use FlowNet-like architecture that outputs predictions at multiple scale levels, we can either compare predictions to downscaled GT or upscaled predictions to GT. The first being obviously less computationally expensive, the ignore NaN would help downscaling such sparse 2D maps. For the moment we do something I find ugly, you can see it Essentially it zeroes the s, takes the map and construct two maps of positive and negative values which are then maxpooled and back together. I am actually open for a strict \"no-nan\" policy on pooling functions, but in that case better enforce it before someone writes a code that tries to benefit from maxpooling ignore s feature/bug and if you have a clever way of pooling sparse 2D tensors, I'm open to it, but I guess it's a topic for pytorch forums ;)\nPersonally, I think it is more sane have NaN -NaN in the pooling and offer a parametrizable (where you get to pick the values) that does or so.\nSo the proposed fix does NaN -NaN similar to max. I didn't try to fix gradients for the nan case. This would involve keeping the values and I don't think that is worth it (in particular because I would not expect the pooling layer to be last, and otherwise we'd probably get NaN as grad_out). If you are reasonably happy with it, I'd move it to a PR.\n+1 for NaN -NaN, \"abyssus abyssum invocat\" We could also add an optional mask which would be a ByteTensor of the same size, specifying whether or not the considered pixel is used for the pooling, that could be used for any kind of pulling. An functionality would then be to provide the mask\nI could work with that. to continue to be picky, the potential drawback is that it won't work for other pooling methods such as average pooling or median pooling. 
The problem here is that the max operation inherently ignores non-max values, which can be leveraged for an \"ignore some pixels\" operation, but it has a \"non universal\" feel to it since it won't be as easy for other kinds of 2D operations.\nTo be clear, the fix is good for me, but I figured a related discussion on selective pooling (whether to ditch NaN values or anything you want to ignore) could happen (maybe not on this issue?)"} | |
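The two policies debated in the record above — propagate NaN the way `max` does, or skip NaNs when pooling sparse ground-truth maps — can be sketched in plain Python. This is an illustrative helper only, not the actual PyTorch kernel:

```python
import math

def pool_max(window, ignore_nan=False):
    """Max over one pooling window.

    ignore_nan=False propagates NaN (the 'abyssus abyssum invocat' policy);
    ignore_nan=True skips NaNs, which is what sparse GT depth/flow maps want.
    """
    if not ignore_nan and any(math.isnan(v) for v in window):
        return math.nan
    valid = [v for v in window if not math.isnan(v)]
    return max(valid) if valid else math.nan

window = [0.5, math.nan, 2.0, -1.0]
print(pool_max(window))                   # nan: one NaN poisons the window
print(pool_max(window, ignore_nan=True))  # 2.0: NaNs are skipped
```

An all-NaN window stays NaN under both policies, which matches the "max pooling of all NaN values is NaN" observation in the report.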
| {"_id":"doc-en-pytorch-8d645240428935c6b8a48ae7f0ddcf46aaa90faf077d7ce3d52a0c84ff1a123a","title":"","text":"This issue tracks the components / tests that are not working on Windows: [ ] : currently disabled on Windows because Windows doesn't have and we need to look for substitutes [ ] : Fuser is disabled on Windows because the current implementation uses symbols from Linux-specific headers and . We will need to find alternatives for Windows. [ ] : some parts of and are disabled because Windows doesn't support opening an already opened file (see discussion at and ) [ ] : currently doesn't work with Windows ( is the porting diff) [ ] : in causes intermittent CUDA out-of-memory error on Windows [ ] , : DataLoader with multiple workers causes intermittent CUDA out-of-memory error on Windows. [x] [x] (done in ) [x] [x] - - [x] [x] [x] - For more discussions, also see: cc\nThe first one is solved by but I think DataLoader can be further improved when is fininshed.\nCool I will mark it as resolved :)\nThe has been in .\nWe can try to revert and , because the memory leak in the CPU side could also cause CUDA errors.\nAre they fixed by I think we can revert them after we merge the PR.\nI think this may be related. Since once the memory of the CPU side is low, the will fail with too.\nI think it could be. For what it's worth, when I tried to inspect the CUDA OOM error, showed no process that was taking memory, but running CUDA tests on the machine would still fail.\nNow that is merged into master, could you please try to revert the changes on and ?\nAwesome! Just to understand it better: does fix both numworker=1 and numworker1 cases?\nYes, they are both solved.\nI guess we should mark the cpp_extension test as completed, for it's now enabled in CI.\nClosing this issue due to age and because its references are long out of date. For example, distributed tests now have their own jobs."} | |
| {"_id":"doc-en-pytorch-3dce8e556d6d4bd2ea519d551101e82db52b876cebdea9b13c4207de2ba26137","title":"","text":"Not sure what is the reason for these errors, any suggestions? I suspect the clang version is not supported? Here is the\nclang have more strict checking , pull request will fixed it.\nfixed, thanks to"} | |
| {"_id":"doc-en-pytorch-3fd0655cf1f98414cb74036460095097c829c4e79824f44f878c0695b6ea8a48","title":"","text":"got the following err msg with conv2D and ver 0.4: RuntimeError: expected stride to be a single integer value or a list of 1 values to match the convolution dimensions, but got stride=[1, 1]. I was not able to see the problem so I uninstalled 0.4 and installed 0.3.1 Still got an err but this time it said that the expected input should be a 4d tensor and it got a 3D tensor. This helped understand the issue and I fixed it (just add a dimension). reinstalled 0.4 and its working (no surprise...). I think 0.4 should have the same err feedback otherwise its really hard to undrstand the problem How you installed PyTorch (conda, pip, source): conda Build command you used (if compiling from source): OS: macos PyTorch version: 0.4 Python version: 3.5 CUDA/cuDNN version: no GPU models and configuration: GCC version (if compiling from source): CMake version: Versions of any other relevant libraries:\nCan you please post a small self-contained code snippet that would let us reproduce the problem?\nThe strange error messge was also mentioned a few times in the forum. Here is a code snippet for PyTorch :\nThis is because we direct all conv ops to and infer dim + throw error message there.\nHmm that's not great. We might want to pass the expected dimensionality of the convolution to the generic implementation, so that we can improve the error messages.\nThx for the responses. Just a reminder, for the same err the provided err msg in version 0.3.1 was very informative\nYes, this should definitely be fixed. Sorry about this!\nThe error message of seems to be a bit misleading, too. Code: Should I create a new issue or is it related to the current one?\nthat appears fixed on master! :D"} | |
| {"_id":"doc-en-pytorch-a69891f877be902cc456139a7e993c9b38aa4f28c7548672cc69a9e3f1b209aa","title":"","text":"Following example: fails in Python-3.9 with cc\nAnd the reason for that is very simple:\nAnother interesting offender:"} | |
| {"_id":"doc-en-pytorch-4ad8a9f7a71b2ddce48315dd16c5fc5671be60942858425b0385ed5b869d99cf","title":"","text":"+69 I read: To me, it should be: The CPU code is not affected.\nThe two lines of code are identical?\nNo, one is mu times moment time lr times grad instead of mu times moment plus lr times grad.\nI see. Send a PR? :)\nWill do :)"} | |
| {"_id":"doc-en-pytorch-5b118069fa67273ca83fc38bf735c53ddc46045a52aba22063750dd8da02e407","title":"","text":"gdb points that the error might be in trying to get a size from an empty tensor: I'm using PyTorch version 0.4.0a0+\nThere's a check that should exclude zero-dim tensors from ( should be 0 in this case), so I'm wondering why that's not happening right now... edit: Nevermind, I was running an old build. I pulled the latest master and with being an empty tensor (with shape (0,)), cat crashes.\nRelated: We should probably rewrite to better handle these cases\nOkay, I found the bug. The CUDA version of check doesn't check the case where the input contains all empty tensors, while the CPU version does. I'll put up a fix soon."} | |
| {"_id":"doc-en-pytorch-7a531b4e1408de313ecaafa4ee40fda2153fd640f1937560f296cfb7dbd6ca08","title":"","text":"This is an ipython session. Note that the doesn't remain the same for /= even though it works for div_\nJust to be clear, the reason this is an issue is that it means that functions can't do inplace operations on function arguments.\nAlso, this works fine for +="} | |
| {"_id":"doc-en-pytorch-2fdc3cef791c19159039a95dfbc4d1859ba0e4d81197ef08e9f5067ac45538ca","title":"","text":"[pytorch] The docs say that for tensor comparison operators (gt,lt etc) it should be possible to pass out argument typed as input (), yet when I try to do it, I hit an error Should the docs be fixed, or is it a bug?\nIf it's really useful we can add it back; but we'll fix the docs for now"} | |
| {"_id":"doc-en-pytorch-6dae6822e59fd00098fabde359ce44f544da09d2deb5db328ca4d8e6c0d81333","title":"","text":"The following code, which repeatedly exports a model with , has a memory leak. During the export, every tensor parameter in is cloned once and then immediately leaked forever, without ever being collected by the GC. It's not the underlying buffer that's cloned, it's the lightweight wrapper object itself. Still, for long running processes that often export networks in this manner this is a unbounded memory leak that eventually results in OOM errors. I've reproduced this issue on both Linux and Windows, with pytorch versions and respectively. The final five lines inside the for loop are to debug what happens, they are not neccesary to reproduce the issue. forces a gc collection cycle, ensuring we're not accidentally counting dead objects show the total amount of objects that exist for each type for all objects whose amount has increased. From this we can see that we're leaking 2 additional tensors per that the tensors we're leaking have shapes and , so they're just the weight and bias of the linear that the underlying buffer is always the same, so only the shallow class instance is being that nothing is pointing to these newly created objects, so they should be collected. Example output after running for a while: seems closely related but is more about a temporary doubling in memory, this issue is about a permanent memory leak. was closed as a duplicate of the previous issue, but better matches this issue. Collecting environment information. 
PyTorch version: 1.10.0 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.5 LTS (x86_64) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final) CMake version: version 3.10.2 Libc version: glibc-2.17 Python version: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.15.0-41-generic-x86_64-with-debian-buster-sid Is CUDA available: True CUDA runtime version: 11.3."} | |
| {"_id":"doc-en-pytorch-2f66b550279e97c384d138d0938f18ee7eb60e94c23ce88afd819c9c51455183","title":"","text":"109 GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti GPU 1: NVIDIA GeForce RTX 3080 Ti Nvidia driver version: 515.48.07 cuDNN version: Probably one of the following: /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- /usr/lib/x8664-linux- HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.21.2 [pip3] torch==1.10.0 [pip3] torchelastic==0.2.0 [pip3] torchtext==0.11.0 [pip3] torchvision==0.11.0 [conda] blas 1.0 mkl [conda] cudatoolkit 11.3.1 ha36c4319 nvidia [conda] ffmpeg 4.3 hf484d3e0 pytorch [conda] mkl 2021.3.0 h06a4308520 [conda] mkl-service 2.4.0 py37h7f8727e0 [conda] mklfft 1.3.1 py37hd3c417c0 [conda] mklrandom 1.2.2 py37h51133e40 [conda] numpy 1.21.2 py37h20f2e390 [conda] numpy-base 1.21.2 py37h79a11010 [conda] pytorch 1.10.0 py3.7cuda11.3cudnn8.2.00 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchelastic 0.2.0 pypi0 pypi [conda] torchtext 0.11.0 py37 pytorch [conda] torchvision 0.11.0 py37cu113 pytorch\nI'm also observing something like this without JIT, although I'm not sure it's the same issue. The program works fine until I add ONNX export every time a checkpoint is saved. Once I do that, the GPU memory usage grows until it OOMs."} | |
| {"_id":"doc-en-pytorch-51b93f6c54a298b88600b1182a14c127961bec33f155f28f6e4eb4444c79f1df","title":"","text":"This issue is about lightweight Tensor objects being leaked, not the underlying (potentially GPU-side) buffer. I think your issue is a different one.\nI encountered the same error, is there a solution to this problem?\nPlease validate with the latest release and re-summit an issue if you see the same thing. As we are moving away from torchscript minor leaks are unlikely to be fixed, but contribution is welcomed."} | |
| {"_id":"doc-en-pytorch-9fadf55fef649758ce298e5387e2a100c57f31d87ba54cce681ee82647ba65e7","title":"","text":"In PyTorch master:\nAdded clamp's output support in pre-template code."} | |
| {"_id":"doc-en-pytorch-a58ca7f0869590acf2ec481e28df28698c3770bb005cad89d9a33abd30ddcf87","title":"","text":"It seems that can change memory outside x, when x is a cuda tensor. If x is non-cuda tensor, we get: In contrast, when x is cuda tensor, does not make any error It's hard to share the whole code, but I have noticed that such operation outside a tensor did affect the performance of existing network, so I'm afraid that this op can change arbitrary memory on GPU which can be dangerous. Could you check this out?\nThis snippet is fine - it's enough for us to reproduce the problem. It appears we're missing some out-of-bounds checks (we have them for other indexing functions). Thanks for reporting.\nworking on this"} | |
| {"_id":"doc-en-pytorch-1b8f2e384d5a404a3376c7149d43125421bb1bdaa0086fc82df551418fcec1b0","title":"","text":"I am new to pythonwhen i solve the promblem with the help below I find some confusion in the code I set ‘dimension=1self.dimension = dimension’it seem ok for mebut i don’t kown how the value of ’dimension‘ was initialled. Thank you !\nI already Konw it comes from 'module = JoinTable(dimension, nInputDims)' But when I convert the model to pytorch , error appears: Traceback (most recent call last): File \"\", line 173, in <moduleGnetf =generator.forward(input) File \"/usr/local/lib/python2.7/dist-\", line 33, in forward return self.updateOutput(input) File \"/usr/local/lib/python2.7/dist-\", line 36, in updateOutput currentOutput = module.updateOutput(currentOutput) File \"/usr/local/lib/python2.7/dist-\", line 37, in updateOutput (dim, offset, (dim)).copy_(currentOutput) RuntimeError: inconsistent tensor size at /home/lxl/pytorch-master/torch/lib/TH/generic/THTensorCopy.c:51\nI Use \"generator.modules[0] = nn.JoinTable(1)\",it was fine ,but error again: Traceback (most recent call last): File \"\", line 171, in <moduleGnetf =generator.forward(input) File \"/usr/local/lib/python2.7/dist-\", line 33, in forward return self.updateOutput(input) File \"/usr/local/lib/python2.7/dist-\", line 36, in updateOutput currentOutput = module.updateOutput(currentOutput) File \"/usr/local/lib/python2.7/dist-\", line 96, in updateOutput if is None: AttributeError: 'SpatialFullConvolution' object has no attribute 'finput'\nHow old is the Lua model file you're trying to import? Can you please try to load it in Lua, save again, and load it in PyTorch? Also, please update PyTorch to the newest version.\nThe model is convert from the cudnn model trained by myseft the code below is the convert code BTWmy torch was installed on 17th Dec,2016 the pytorch version i use Metadata-Version: 1.0 Name: torch Version: 0.1.10+ I Build it from source today"} | |
| {"_id":"doc-en-pytorch-22f86cba093cf26e315f2fbaec5ca280a4dc379518c77425509ada9da27f0f4a","title":"","text":"The problem is here: You can't write an arbitrary number of bytes. See . On my system the limit seems to be 2GB, YMMV. To be safe, you probably want to fix the read call as well at , because there's an SSIZE_MAX limit."} | |
| {"_id":"doc-en-pytorch-7a9fbde970acad238b85eeaecdd20c3b27927e286af12f701e9db672acc0182d","title":"","text":"Pybind11 has a bugfix here: which is not included in pytorch master. In brief, the bug causes two python modules, when both compiled with buggy version of pybind11, to conflict and crash at import. I've last week when debugging its conflict with pytorch. Hope pytorch can also upgrade to avoid potential conflict with other libraries."} | |
| {"_id":"doc-en-pytorch-8a24171cae3316021ad4a394597442544d5ad7a610a19fd0700a51e599ae8017","title":"","text":"The output is also incorrect. It's the output from the sorted indices, instead of the user specified indices. Reported by"} | |
| {"_id":"doc-en-pytorch-e622986277177825174756cd0d58bb252b326c5ede6f55945541c76bbab2d669","title":"","text":"Hi There, I'm trying to install Pytorch as a module on my university's computing cluster. The node uses CentOS (6.4) x8664. I install running these commands: module load gcclib/5.2.0 cmake/3.8.2 module load gcc/5.2.0 module load anaconda3/4.0.0 export CMAKEPREFIXPATH=\"(which conda))/../\" export NOCUDA=1 git clone --recursive cd pytorch/ python install --prefix=${HOME} Things look fine as it installs but then I run into the following errors, ending in the process terminating with error relating to GCC? I've tried with version GCC 6.2.0 and the same result occurs. Not sure what to even try to fix this! Thanks for any help you can provide!\nSame here, with gcc 5.4.0\nI don't think building from source works very well with gcc 5.4. Could you install gcc 4.9 and try compiling?\nI get this error instead when using gcc 4.9.0, doesn't make it very far at all.\nHi I think this is broken after / due to\nI have the same problem ( etc.) with gcc 4.8.5 on Linux with the current head ( ) What works for me is to add at the beginning of the following four files: (see also -- this can be turned into a pull request very easily) This fix is the same as but for different files.\nThis worked for me Thank you.\nfixed in latest master, thanks to"} | |
| {"_id":"doc-en-pytorch-d9bb332d9772655177ee32a7fd2b5ed4a64db0a082c9bc08c0bb1e22f3d5dc18","title":"","text":"Recently, we are testing PyTorch 1.7 as we need the with-statement support, so that our algorithm which contains a custom module using can be deployed to the production environment. During the testing, we encountered the following assertion failure: Steps to reproduce the behavior: the following Python code to create a serialized PyTorch module. a C++ test program. the test program with the following command: in the current directory. should terminate with exit code 0 instead of throw an exception. Collecting environment information... PyTorch version: 1.7.0.dev20200922+cpu Is debug build: True CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: CentOS Linux 7 (Core) (x8664) GCC version: (GCC) 4.8.5 (Red Hat 4.8.5-39) Clang version: Could not collect CMake version: Could not collect Python version: 3.7 (64-bit runtime) Is CUDA available: False CUDA runtime version: No CUDA GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Versions of relevant libraries: [pip3] numpy==1.19.2 [pip3] torch==1.7.0.dev20200922+cpu [pip3] torchvision==0.7.0+cpu [conda] Could not collect cc"} | |
| {"_id":"doc-en-pytorch-c5dbd648f5f223c007de312a8c0f1ae78f27faaf3ad3e509ac9f43144221b039","title":"","text":"This is a test. Please ignore it. Edited.\n<!-- validation-comment-start --<bodyHello there! From the DISABLED prefix in this issue title, it looks like you are attempting to disable a job in PyTorch CI. The information I have parsed is below: Job name: Credential: Within ~15 minutes, and all of its dependants will be disabled in PyTorch CI. Please verify that the job name looks correct. With great power comes great responsibility. </body<!-- validation-comment-end --"} | |
| {"_id":"doc-en-pytorch-c0fdbff7b42e4db60af4955ac83a924f2a9f7d06af7e7cb5913cbd4e781f73e0","title":"","text":"Several this morning failed with (see for example): Not sure what is causing the outage, but it makes me wonder if perhaps it's time to retire Python-3.5 testing CI cc\nLooks like pypi rolled out a new cert today:"} | |
| {"_id":"doc-en-pytorch-c25fd04d8d54cf4d0391cd8024070026ad8247507bdccb8eb12f5f8e2c9f8d2e","title":"","text":"When trying to install Pytorch on my Mac by following the instructions I get What I did: ` I also tried Both approaches gave the same error. System: xcode-select version 2395. Version: macOS Monterey 12.3.1 (21E258) MacBook Pro (16-inch, 2019) Processor: 2,6 GHz 6-Core Intel Core i7 memory: 16 GB 2667 MHz DDR4 PyTorch version: N/A Is debug build: N/A CUDA used to build PyTorch: N/A ROCM used to build PyTorch: N/A OS: macOS 12.3.1 (x8664) GCC version: Could not collect Clang version: 13.1.6 (clang-1316.0.21.2.3) CMake version: version 3.22.1 Libc version: N/A Python version: 3.9.12 (main, Apr 5 2022, 01:53:17) [Clang 12.0.0 ] (64-bit runtime) Python platform: macOS-10.16-x8664-i386-64bit Is CUDA available: N/A CUDA runtime version: Could not collect GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: N/A Versions of relevant libraries: [pip3] numpy==1.21.5 [conda] mkl 2022.0.0 hecd8cb5105 [conda] mkl-include 2022.0.0 hecd8cb5105 [conda] numpy 1.21.5 py39h9c3cb841 [conda] numpy-base 1.21.5 py39he782bc11 cc\nI am running the same environment and get the same issue. Any insight would be very appreciated\nUpdate: In another repo I get the same error when trying to link to pytorch etc. There I made a minimal case and managed to build when I removed linking to . I can see that we have in the script. Maybe that is the cause of the error?\nLooks like they are built with the correct architecture.\nMore progress, from local minimal case: fails. is just a hello world program."} | |
| {"_id":"doc-en-pytorch-ec5921aa1d302972470c3f074cbc44243f961a731c2339a8fe339a689287e600","title":"","text":"If in is removed, then it builds.\nThe same story with PyTorch 1.10.0. The error appears when I'm trying to build with Apple clang 13.1.6 (Xcode Command Line Tools 13.3). But all works correctly if I build it with Apple clang 13.0 (Xcode Command Line Tools 13.2.1)\nNice, worked for me as well. Is this a bug somewhere or what is the exact problem? I drawback is that XCode needs to be up to date with new iOS versions.\nWhich version of Apple Clang worked? 13.0.0 or 13.0.1? Are you on Monterey 12.4?\nFiled an issue: cc:\nThis issue has been fixed in PeachPy a while back by but pinned version of PeachPy that PyTorch is using has not been updated in a very long time"} | |