{: , : unreasonable\a reasonable\, : [{: , : , : }], : []} |
|
{: , : , : [{: , : No grad accumulator for a saved leaf!\No grad accumulator for a saved leaf!\, : }], : []} |
|
{: , : , : [{: , : -L$INSTALL_DIR/lib \.so.1\.so\$LDFLAGS -Qunused-arguments -Wl,-rpath,@loader_path\.1.dylib\.dylib\, : }], : []} |
|
{: , : , : [{: , : $C_FLAGS $CPP_FLAGS\lib/libnccl.so.1\${INSTALL_DIR}/lib/libnccl.so.1\${INSTALL_DIR}/lib/libnccl.so.1\${INSTALL_DIR}/lib/libnccl.so\${INSTALL_DIR}/lib/libnccl.so\${INSTALL_DIR}/lib/libnccl.so.1\${INSTALL_DIR}/lib/libnccl.so\, : }], : []} |
|
{: , : , : [{: , : \ return self._apply(lambda t: t.cuda(device_id)) <del> def cpu(self, device_id=None): </del> <ins> def cpu(self): </ins> \\\ return self._apply(lambda t: t.cpu())commidpytorch_pr_2073negative_passages |
|
{"query_id": "q-en-pytorch-77fbdb92efba00b171fda95ef72ad1556da96d83b890858d72fb01d6583272cb", "query": "as shown in source code, this parameters is not used. Any reason that we keep it?\nthis can be cleaned up.", "positive_passages": [{"docid": "doc-en-pytorch-b934b16f7e71a839229d7249bd3f6ee13378fc4decd660b53578b16485160f19", "text": "optimizer (Optimizer): Wrapped optimizer. step_size (int): Period of learning rate decay. gamma (float): Multiplicative factor of learning rate decay. <del> Default: -0.1. </del> <ins> Default: 0.1. </ins> last_epoch (int): The index of last epoch. Default: -1. Example:", "commid": "pytorch_pr_2280"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-77fbdb92efba00b171fda95ef72ad1556da96d83b890858d72fb01d6583272cb", "query": "as shown in source code, this parameters is not used. Any reason that we keep it?\nthis can be cleaned up.", "positive_passages": [{"docid": "doc-en-pytorch-8a512103206f66edc9a58f9968e1199ec6631cce0dc5816e4d83e5be73846071", "text": "optimizer (Optimizer): Wrapped optimizer. milestones (list): List of epoch indices. Must be increasing. gamma (float): Multiplicative factor of learning rate decay. <del> Default: -0.1. </del> <ins> Default: 0.1. </ins> last_epoch (int): The index of last epoch. Default: -1. Example:", "commid": "pytorch_pr_2280"}], "negative_passages": []} |
|
query_idq-en-pytorch-63eb661e07e3fadddd1e2c8c3e1932066a1b6e25b49c780bbf420bf912a21e4aqueryConsider following code, and run it with multi gpus (e.g. 4): It will output: That is, the default device would always be zero even in DataParallel. My pytorch version is 0.1.122. I think this is not the desired behavior. It would cause some troubles. I tried to insert my own cuda kernel into backward to calculate the gradients, it become very slow, and I fixed it by (grad.getdevice()). Anyway, I think currentdevice in forward and backward should be the same, or could anyone explain to me why they are different?\nthis seems like something we should fix: ?\n<!-- drci-comment-start --:pagefacingup: Preview :pagefacingup: Preview :question: Need help or want to give feedback on the CI? Visit our Note: Links to docs will display an error until the docs builds have been completed. This comment was automatically generated by Dr. CI and updates every 15 minutes. <!-- drci-comment-end --positive_passagesdociddoc-en-pytorch-a55794577819384572f0c50a7d045ff8fde16351069652f9e271ed0eee6d2de5textfor i in range(10): Variable(torch.randn(10, 10), _grad_fn=CollectOnDelete()) <del> @unittest.skipIf(not torch.cuda.is_available() or torch.cuda.device_count() < 2, \) </del> <ins> @unittest.skipIf(torch.cuda.device_count() < 2, \) </ins> def test_unused_output_gpu(self): from torch.nn.parallel._functions import Broadcast x = Variable(torch.randn(5, 5).float().cuda(), requires_grad=True)commidpytorch_pr_2081negative_passages |
|
query_idq-en-pytorch-63eb661e07e3fadddd1e2c8c3e1932066a1b6e25b49c780bbf420bf912a21e4aqueryConsider following code, and run it with multi gpus (e.g. 4): It will output: That is, the default device would always be zero even in DataParallel. My pytorch version is 0.1.122. I think this is not the desired behavior. It would cause some troubles. I tried to insert my own cuda kernel into backward to calculate the gradients, it become very slow, and I fixed it by (grad.getdevice()). Anyway, I think currentdevice in forward and backward should be the same, or could anyone explain to me why they are different?\nthis seems like something we should fix: ?\n<!-- drci-comment-start --:pagefacingup: Preview :pagefacingup: Preview :question: Need help or want to give feedback on the CI? Visit our Note: Links to docs will display an error until the docs builds have been completed. This comment was automatically generated by Dr. CI and updates every 15 minutes. <!-- drci-comment-end --positive_passagesdociddoc-en-pytorch-f2be72fdc7a198558b921f225f231c15560de2e22c0341ca0a22e60b669354d6texty.sum().backward() self.assertEqual(x.grad.data, torch.ones(5, 5) * 2) <ins> @unittest.skipIf(torch.cuda.device_count() < 2, \) def test_backward_device(self): # check that current device matches the variable's device device = [None] class Identity(torch.autograd.Function): @staticmethod def forward(ctx, x): return x.clone() @staticmethod def backward(ctx, grad_output): device[0] = torch.cuda.current_device() return grad_output.clone() v = Variable(torch.randn(1).cuda(1), requires_grad=True) Identity.apply(v).backward() self.assertEqual(device[0], 1) </ins> def test_detach(self): x = Variable(torch.randn(10, 10), requires_grad=True) y = x + 2", "commid": "pytorch_pr_2081"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-63eb661e07e3fadddd1e2c8c3e1932066a1b6e25b49c780bbf420bf912a21e4a", "query": "Consider following code, and run it with multi gpus (e.g. 4): It will output: That is, the default device would always be zero even in DataParallel. My pytorch version is 0.1.122. I think this is not the desired behavior. It would cause some troubles. I tried to insert my own cuda kernel into backward to calculate the gradients, it become very slow, and I fixed it by (grad.getdevice()). Anyway, I think currentdevice in forward and backward should be the same, or could anyone explain to me why they are different?\nthis seems like something we should fix: ?\n<!-- drci-comment-start --:pagefacingup: Preview :pagefacingup: Preview :question: Need help or want to give feedback on the CI? Visit our Note: Links to docs will display an error until the docs builds have been completed. This comment was automatically generated by Dr. CI and updates every 15 minutes. <!-- drci-comment-end --", "positive_passages": [{"docid": "doc-en-pytorch-e3bfb9ebe3323f645dba9378592101ba727cf57c670fb6a7a311f2074de13510", "text": "#include \"torch/csrc/autograd/engine.h\" #include \"torch/csrc/autograd/functions/basic_ops.h\" <ins> #include \"torch/csrc/utils/auto_gpu.h\" </ins> #include <atomic> #include <condition_variable>", "commid": "pytorch_pr_2081"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-63eb661e07e3fadddd1e2c8c3e1932066a1b6e25b49c780bbf420bf912a21e4a", "query": "Consider following code, and run it with multi gpus (e.g. 4): It will output: That is, the default device would always be zero even in DataParallel. My pytorch version is 0.1.122. I think this is not the desired behavior. It would cause some troubles. I tried to insert my own cuda kernel into backward to calculate the gradients, it become very slow, and I fixed it by (grad.getdevice()). Anyway, I think currentdevice in forward and backward should be the same, or could anyone explain to me why they are different?\nthis seems like something we should fix: ?\n<!-- drci-comment-start --:pagefacingup: Preview :pagefacingup: Preview :question: Need help or want to give feedback on the CI? Visit our Note: Links to docs will display an error until the docs builds have been completed. This comment was automatically generated by Dr. CI and updates every 15 minutes. <!-- drci-comment-end --", "positive_passages": [{"docid": "doc-en-pytorch-aa831fc58c1f89901c5a38c90c46b4b4cafa60d966ef73cf6b80d714b6704966", "text": "// This Engine's ReadyQueues and their corresponding threads are leaked here Engine::~Engine() = default; <del> auto Engine::thread_main(std::shared_ptr<ReadyQueue> queue) -> void { </del> <ins> auto Engine::thread_main(std::shared_ptr<ReadyQueue> queue, int device) -> void { </ins> THInferNumThreads(); <ins> AutoGPU guard(device); </ins> while (1) { FunctionTask task = queue->pop_back(); if (!task.base->has_error.load()) {commidpytorch_pr_2081negative_passages |
|
query_idq-en-pytorch-63eb661e07e3fadddd1e2c8c3e1932066a1b6e25b49c780bbf420bf912a21e4aqueryConsider following code, and run it with multi gpus (e.g. 4): It will output: That is, the default device would always be zero even in DataParallel. My pytorch version is 0.1.122. I think this is not the desired behavior. It would cause some troubles. I tried to insert my own cuda kernel into backward to calculate the gradients, it become very slow, and I fixed it by (grad.getdevice()). Anyway, I think currentdevice in forward and backward should be the same, or could anyone explain to me why they are different?\nthis seems like something we should fix: ?\n<!-- drci-comment-start --:pagefacingup: Preview :pagefacingup: Preview :question: Need help or want to give feedback on the CI? Visit our Note: Links to docs will display an error until the docs builds have been completed. This comment was automatically generated by Dr. CI and updates every 15 minutes. <!-- drci-comment-end --positive_passagesdociddoc-en-pytorch-baf0d2710879c234f73168a1206503d3c970f7f52c16f1c9de9beb3034baf728textnum_devices = 0; } #endif <del> ready_queues = std::vector<std::shared_ptr<ReadyQueue>>(num_devices + 1); for (auto& queue : ready_queues) { </del> <ins> int num_threads = num_devices + 1; ready_queues = std::vector<std::shared_ptr<ReadyQueue>>(num_threads); for (int i = 0; i < num_threads; ++i) { auto& queue = ready_queues[i]; </ins> queue.reset(new ReadyQueue()); <del> std::thread t(&Engine::thread_main, this, queue); </del> <ins> std::thread t(&Engine::thread_main, this, queue, i - 1); </ins> t.detach(); } }commidpytorch_pr_2081negative_passages |
|
query_idq-en-pytorch-63eb661e07e3fadddd1e2c8c3e1932066a1b6e25b49c780bbf420bf912a21e4aqueryConsider following code, and run it with multi gpus (e.g. 4): It will output: That is, the default device would always be zero even in DataParallel. My pytorch version is 0.1.122. I think this is not the desired behavior. It would cause some troubles. I tried to insert my own cuda kernel into backward to calculate the gradients, it become very slow, and I fixed it by (grad.getdevice()). Anyway, I think currentdevice in forward and backward should be the same, or could anyone explain to me why they are different?\nthis seems like something we should fix: ?\n<!-- drci-comment-start --:pagefacingup: Preview :pagefacingup: Preview :question: Need help or want to give feedback on the CI? Visit our Note: Links to docs will display an error until the docs builds have been completed. This comment was automatically generated by Dr. CI and updates every 15 minutes. <!-- drci-comment-end --positive_passagesdociddoc-en-pytorch-1abbb3415744379d52d8c2ba818c334727821726a0b38019641b613c5a8be9d7textvoid evaluate_function(FunctionTask& task); ReadyQueue& ready_queue(int device); void start_threads(); <del> virtual void thread_main(std::shared_ptr<ReadyQueue> queue); </del> <ins> virtual void thread_main(std::shared_ptr<ReadyQueue> queue, int device); </ins> virtual void thread_on_exception(FunctionTask& task, std::exception& e); std::once_flag start_threads_flag;commidpytorch_pr_2081negative_passages |
|
query_idq-en-pytorch-63eb661e07e3fadddd1e2c8c3e1932066a1b6e25b49c780bbf420bf912a21e4aqueryConsider following code, and run it with multi gpus (e.g. 4): It will output: That is, the default device would always be zero even in DataParallel. My pytorch version is 0.1.122. I think this is not the desired behavior. It would cause some troubles. I tried to insert my own cuda kernel into backward to calculate the gradients, it become very slow, and I fixed it by (grad.getdevice()). Anyway, I think currentdevice in forward and backward should be the same, or could anyone explain to me why they are different?\nthis seems like something we should fix: ?\n<!-- drci-comment-start --:pagefacingup: Preview :pagefacingup: Preview :question: Need help or want to give feedback on the CI? Visit our Note: Links to docs will display an error until the docs builds have been completed. This comment was automatically generated by Dr. CI and updates every 15 minutes. <!-- drci-comment-end --positive_passagesdociddoc-en-pytorch-2ed89dd3c95920b727070ccfadba0f09774b1e12518b6692510064e11c0deb54text}; struct PythonEngine : public Engine { <del> virtual void thread_main(std::shared_ptr<ReadyQueue> queue) override { </del> <ins> virtual void thread_main(std::shared_ptr<ReadyQueue> queue, int device) override { </ins> // Create a PyThreadState, but release the GIL. This lets AutoGIL calls // inside thread_main acquire the GIL without having to create a new // PyThreadState each time. AutoGIL gil; AutoNoGIL no_gil; <del> Engine::thread_main(queue); </del> <ins> Engine::thread_main(queue, device); </ins> } virtual void thread_on_exception(FunctionTask& task, std::exception& e) override {commidpytorch_pr_2081negative_passages |
|
query_idq-en-pytorch-a5d810db3192b9f0799cfe2249f9cfa58be109a89f1a1aa1bc509dea836d19c5queryIn we define a number of macros which are subsequently used in if statements: This is undefined. On clang 3.4.2, these warnings don't seem to actually get printed out () but on clang 4.0.0 they seem to always get emitted.", "positive_passages": [{"docid": "doc-en-pytorch-c5c2d2dffc46beed1338b355e27295aadf444df8e6a28617528a626aa3a130a4", "text": "#define IS_CUDA false #define CUDA_FLOAT false #else #define IS_CUDA true <del> #define CUDA_BYTE defined(THC_REAL_IS_BYTE) #define CUDA_CHAR defined(THC_REAL_IS_CHAR) #define CUDA_SHORT defined(THC_REAL_IS_SHORT) #define CUDA_INT defined(THC_REAL_IS_INT) #define CUDA_LONG defined(THC_REAL_IS_LONG) #define CUDA_FLOAT defined(THC_REAL_IS_FLOAT) #define CUDA_DOUBLE defined(THC_REAL_IS_DOUBLE) #define CUDA_HALF defined(THC_REAL_IS_HALF) </del> <ins> #if defined(THC_REAL_IS_BYTE) #define CUDA_BYTE 1 #else #define CUDA_BYTE 0 </ins> #endif <ins> #if defined(THC_REAL_IS_CHAR) #define CUDA_CHAR 1 #else #define CUDA_CHAR 0 #endif #if defined(THC_REAL_IS_SHORT) #define CUDA_SHORT 1 #else #define CUDA_SHORT 0 #endif #if defined(THC_REAL_IS_INT) #define CUDA_INT 1 #else #define CUDA_INT 0 #endif #if defined(THC_REAL_IS_LONG) #define CUDA_LONG 1 #else #define CUDA_LONG 0 #endif #if defined(THC_REAL_IS_FLOAT) #define CUDA_FLOAT 1 #else #define CUDA_FLOAT 0 #endif #if defined(THC_REAL_IS_DOUBLE) #define CUDA_DOUBLE 1 #else #define CUDA_DOUBLE 0 #endif #if defined(THC_REAL_IS_HALF) #define CUDA_HALF 1 #else #define CUDA_HALF 0 #endif #endif // ifndef THC_GENERIC_FILE </ins> #if IS_CUDA #define THIndexTensor THCudaLongTensor #define THIndexTensor_(NAME) TH_CONCAT_2(THCudaLongTensor_,NAME)", "commid": "pytorch_pr_2142"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-6a49a11ffb8ce099f2f3182a0eea72b7a1150bca94c619967aba876db624e52f", "query": "There is a possible memory leak in () This causes memory to blow up: When using () memory usage is ok There is no leak for unitary dimensions", "positive_passages": [{"docid": "doc-en-pytorch-9a42bf85cbb32a092f35a5f8da89eeb443ef320598541904a01b6c81d9e6880b", "text": "*tempValues__data = *t_data; *tempIndices__data = *tempIndices__dimOffset; }); <ins> THTensor_(free)(tempValues_); THLongTensor_free(tempIndices_); </ins> } if (!keepdim) {", "commid": "pytorch_pr_2819"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-74c19f6facd645162ae4790aa240ce26e373c9f6bf7d3305a0932be393b296c5", "query": "A common issue - for both new and more seasoned users, including myself - is how to write nice device-agnostic code. There seems to be several potential solutions (using a global CUDA flag, always calling , using etc.), used by various people in various bits of code. Ideally, we should have \"best practices\" documented somewhere, like ; or at least some place where we can link to.\nAlternative proposal - we provide a method that deals with the most common use cases (tensors, variables and networks). We could have something similar to Chainer's , something like a , etc. We're not going to be \"write once, run everywhere\", but we can probably catch most use cases. Pros: It makes it much simpler for people writing CPU/1 GPU code (presumably a majority of users). Depending on the approach it'll probably be fine for multiple GPUs too. Cons: An abstraction of this form isn't foolproof, but I can't immediately think of any major issues. Setting everything to CUDA by default is probably going to break things, but a wrapper, being more manual, can be used selectively enough.\nI agree we should figure out a way to make writing device-agnostic code easier. I don't think fakecuda is a good idea, it's too magical and has global side effects. Here's a list of options I see, along with some comments. In general I'm not 100% happy with any of those, but I don't have any better ideas right now. Example: I think that sooner or later we should add arg to functions anyway, so it seems like a good enough extension. I think I'm fine with this option too, but I'm afraid you would either need a single global module captured inside functions, or use it everywhere inside. In general it can be simulated using the approach using partials (of course this is an inefficient implementation, but you get the idea): This is fine, except that you have to call it a lot and it makes the code super verbose. A similar alternative that people sometimes use now is this: This is really the best option, as it will give you a tensor of exactly the same type and on exactly the same device as the source. This is usually the desired behaviour, which is why we use it all over the place in the core library. On the other hand, it feels a bit unnatural, and is annoying when you do more complicated things, because you first construct the tensor and only then fill it with the data. and helps a lot with that\nThanks for more in-depth feedback on some options. For existing practices, highlighting , and are probably the way to go. In terms of what to do when there aren't existing tensors, what I see as the main target is the . Precisely, can we do and without using the if statement? The cast/dtype solution seems good for tensors, but anything we can do for modules?\nI've put together some of this and other stuff I know into \"best practices\" documentation in I think we still want to think about future possibilities, but there's plenty that's worth noting down now for users.\nIt would indeed be amazing if PyTorch could make it simpler to write device agnostic code. I really like your first suggestion. Just a quick note from my (applied) perspective: maybe it would be possible to split up the and the into two attributes? 
First of all, I think this would make it easier to handle cases where tensors are on different gpus and second of all, it would allow to write code like this: ((5, 5), , ) # Returns on same device as x Don't know how practical this is though.\none thing I like about suggestion is that it makes it explicit and obvious when something is happening on cuda, rather than requiring users to know that non-negative device numbers refer to cuda. Maybe this is a bad idea, but what about also taking an optional kw in every function that takes a but to have both and non- dtypes? So passing would give you a cuda float tensor on the current GPU, passing would give you a CPU float, and passing would give you a cuda tensor on GPU 5. Obviously we'd need error checking to check things like non- devices don't have non-negative s specified (which is kind of silly because if the device is specified it tells you everything), but it does seem to make the common case of not specifying a device more clear.\nSpeaking only as an end-use with no particular insight into how difficult it would be to code: These suggestions make me feel like end users will end up having to think more about device semantics, not less. I'd like to see the API move away from having the tensor type imply the storage location entirely. Have a context manager that sets cpu vs. cuda, default device, etc. Something like: Then once this exists, class wraps its calls to the with an appropriate context, as do Modules. I think that this results in everything Just Working(tm).\nI'm not convinced that making it easy for end users to avoid thinking about device semantics is the correct goal. The different devices have different performance characteristics, amount of memory/cache, incur data transfer costs/synchronizations, etc. My worry is that by papering over these issues, we make it really easy to write non-performant code and really difficult to figure out why. This discussion is similar to a discussion in the distributed systems community: is a good discussion of why trying to unify all interfaces might not be the best idea (there are admittedly additional complexities in distributed computing like partial failures). Anyway, that's why I like making it obvious when there are cuda storage / device transfers going on, because those are obvious places to look at performance.\nI'm not an expert on any of this, but about the only two things that bug me about using is when I have to cast are I don't have a prototypical tensor at hand (e.g. in a module it feels unnatural to use so it might be neat to have a \"canonical tensor\" in modules that reacts to .cuda() and whose is exposed as or so. I want a different type is just a bit much. Other than that, I think using it is cool, I think that there is not much advantage in over But this is only from my limited experience.\nRegarding the \"canonical tensor\" it's hard, because the module can have a lot of different parameters that have different types and are on different devices. I agree that chains are awful. I think (and similar for all other types) would be a convenient helper.\nOh yes, something like would be cool (or ). For the \, I imagine that, for basic applications, having something that \ whether called before or after would be great. For more advanced applications, the user could create their own set of prototype tensors and just do , or so. But that is just pure imagination, as I don't have multiple GPUs. 
:slightlysmilingface:\nthere are several potential goals here, and they are somewhat conflicting. One is to make CPU/GPU code seamless for end users, with just one flag somewhere. On the other hand, that's likely to results in less-performant code and harder debugging. It seems like the conversation is going towards device-aware code, but making it a little less clunky in places. Sounds like is a good step forward, and was probably in the pipeline anyway?\na latecomer here: i agree that it's not a good idea to hide things behind the curtain from users. though, in many cases, it might be easier to to have a global behavior of ignoring, e.g., gpu (i.e., make .cuda() do nothing). could we at least give some kind of global switch allowing this behiavour?\nClosing as 0.4 has plenty of options for this", "positive_passages": [{"docid": "doc-en-pytorch-847390673fe92f667b6ffdc3dbdf1d493fcc043c0c48ff5b5023443c83941042", "text": "Best practices -------------- <ins> Device-agnostic code ^^^^^^^^^^^^^^^^^^^^ Due to the structure of PyTorch, you may need to explicitly write device-agnostic (CPU or GPU) code; an example may be creating a new tensor as the initial hidden state of a recurrent neural network. The first step is to determine whether the GPU should be used or not. A common pattern is to use Python's `argparse` module to read in user arguments, and have a flag that can be used to disable CUDA, in combination with `torch.cuda.is_available()`. In the following, `args.cuda` results in a flag that can be used to cast tensors and modules to CUDA if desired:: import argparse import torch parser = argparse.ArgumentParser(description='PyTorch Example') parser.add_argument('--disable-cuda', action='store_true', help='Disable CUDA') args = parser.parse_args() args.cuda = not args.disable_cuda and torch.cuda.is_available() If modules or tensors need to be sent to the GPU, `args.cuda` can be used as follows:: x = torch.Tensor(8, 42) net = Network() if args.cuda: x = x.cuda() net.cuda() When creating tensors, an alternative to the if statement is to have a default datatype defined, and cast all tensors using that. An example when using a dataloader would be as follows:: dtype = torch.cuda.FloatTensor for i, x in enumerate(train_loader): x = Variable(x.type(dtype)) When working with multiple GPUs on a system, you can use the `CUDA_VISIBLE_DEVICES` environment flag to manage which GPUs are available to PyTorch. To manually control which GPU a tensor is created on, the best practice is to use the `torch.cuda.device()` context manager:: print(\) # On device 0 (default in most scenarios) with torch.cuda.device(1): print(\) # On device 1 print(\) # On device 0 If you have a tensor and would like to create a new tensor of the same type on the same device, then you can use the `.new()` function, which acts the same as a normal tensor constructor. Whilst the previously mentioned methods depend on the current GPU context, `new()` preserves the device of the original tensor. 
This is the recommended practice when creating modules in which new tensors/variables need to be created internally during the forward pass:: x_cpu = torch.FloatTensor(1) x_gpu = torch.cuda.FloatTensor(1) x_cpu_long = torch.LongTensor(1) y_cpu = x_cpu.new(8, 10, 10).fill_(0.3) y_gpu = x_gpu.new(x_gpu.size()).fill_(-5) y_cpu_long = x_cpu_long.new([[1, 2, 3]]) If you want to create a tensor of the same type and size of another tensor, and fill it with either ones or zeros, `torch.ones_like()` or `torch.zeros_like()` are provided as more convenient functions (which also preserve device):: x_cpu = torch.FloatTensor(1) x_gpu = torch.cuda.FloatTensor(1) y_cpu = torch.ones_like(x_cpu) y_gpu = torch.zeros_like(x_gpu) </ins> Use pinned memory buffers ^^^^^^^^^^^^^^^^^^^^^^^^^commidpytorch_pr_3227negative_passages |
|
query_idq-en-pytorch-1252ccec1cccd228016789177e914b1fdc597f72485ec3137dd8203e05a42d08queryIt seems like autodiff of matrix terms of .mv() doesn't behave as I'd expect. But if we use .mm(), things work fine: python A = Variable((3,2), requiresgrad=True) x = Variable((2), requiresgrad=True) (A.mm(x[:,None])).sum().backward() print(A.grad) print(x.grad)\nHi, As a temporary fix, you can use . The problem is the behavior between the backward and the backward. The sum returns a strided gradient which is not supported by MKL/BLAS implementations of (used in the backward). not sure where this was introduced and how to fix this properly. Should the blas wrapper clone the input if it is strided? Or should it just raise an error (meaning that we need to add some s in some places in the code like the sum backward)?\nIt should clone the input. Still, this problem should have been caught. We must be missing some return code checks from BLAS\nAh apparently these functions have no return codes and the docs clearly state that vector strides can't be 0... We have to fix those bindings", "positive_passages": [{"docid": "doc-en-pytorch-7ae06262e0f1135c2e07df5c166d0feeaecb57db11429bf53e1c2f0b63346a6f", "text": "res2 += i * j self.assertEqual(res1, res2) <ins> # Test 0-strided for tname, _prec in types.items(): v1 = torch.randn(1).type(tname).expand(100) v2 = torch.randn(100).type(tname) res1 = torch.dot(v1, v2) res2 = 0 for i, j in zip(v1, v2): res2 += i * j self.assertEqual(res1, res2) def test_ger(self): types = { 'torch.DoubleTensor': 1e-8, 'torch.FloatTensor': 1e-4, } for tname, _prec in types.items(): v1 = torch.randn(100).type(tname) v2 = torch.randn(100).type(tname) res1 = torch.ger(v1, v2) res2 = torch.zeros(100, 100).type(tname) for i in range(100): for j in range(100): res2[i, j] = v1[i] * v2[j] self.assertEqual(res1, res2) # Test 0-strided for tname, _prec in types.items(): v1 = torch.randn(1).type(tname).expand(100) v2 = torch.randn(100).type(tname) res1 = torch.ger(v1, v2) res2 = torch.zeros(100, 100).type(tname) for i in range(100): for j in range(100): res2[i, j] = v1[i] * v2[j] self.assertEqual(res1, res2) def test_addmv(self): types = { 'torch.DoubleTensor': 1e-8, 'torch.FloatTensor': 1e-4, } for tname, _prec in types.items(): t = torch.randn(10).type(tname) m = torch.randn(10, 100).type(tname) v = torch.randn(100).type(tname) res1 = torch.addmv(t, m, v) res2 = torch.zeros(10).type(tname) res2 += t for i in range(10): for j in range(100): res2[i] += m[i, j] * v[j] self.assertEqual(res1, res2) # Test 0-strided for tname, _prec in types.items(): t = torch.randn(1).type(tname).expand(10) m = torch.randn(10, 1).type(tname).expand(10, 100) v = torch.randn(100).type(tname) res1 = torch.addmv(t, m, v) res2 = torch.zeros(10).type(tname) res2 += t for i in range(10): for j in range(100): res2[i] += m[i, j] * v[j] self.assertEqual(res1, res2) def test_addmm(self): types = { 'torch.DoubleTensor': 1e-8, 'torch.FloatTensor': 1e-4, } for tname, _prec in types.items(): M = torch.randn(10, 25).type(tname) m1 = torch.randn(10, 50).type(tname) m2 = torch.randn(50, 25).type(tname) res1 = torch.addmm(M, m1, m2) res2 = torch.zeros(10, 25).type(tname) res2 += M for i in range(10): for j in range(25): for k in range(50): res2[i, j] += m1[i, k] * m2[k, j] self.assertEqual(res1, res2) # Test 0-strided for tname, _prec in types.items(): M = torch.randn(10, 1).type(tname).expand(10, 25) m1 = torch.randn(10, 1).type(tname).expand(10, 50) m2 = torch.randn(50, 25).type(tname) res1 = torch.addmm(M, m1, m2) res2 = 
torch.zeros(10, 25).type(tname) res2 += M for i in range(10): for j in range(25): for k in range(50): res2[i, j] += m1[i, k] * m2[k, j] self.assertEqual(res1, res2) </ins> def _testMath(self, torchfn, mathfn): size = (10, 5) # contiguous", "commid": "pytorch_pr_3373"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-f32bbc53b4f3b92b53b5ef9ed6754d57475c03612f4070c1bfd9266ba097f7ab", "query": "A = (5, 4) B = (0, 9).view(3, 3) C = (0, 15).view(3, 5) idxs = torch.LongTensor([0, 2, 4]) A.indexadd(0, idxs, B) # RuntimeError: inconsistent tensor size, expected r [4], t [4] and src [3] to have the same number of elements, but got 4, 4 and 3 elements respectively at (...)/aten/src/TH/generic/THTensorMath.c:1008 A.indexadd(0, idxs, C) # RuntimeError: inconsistent tensor size, expected r [4], t [4] and src [5] to have the same number of elements, but got 4, 4 and 5 elements respectively at (...)/aten/src/TH/generic/THTensorMath.c:1008 So far so good. But if we use CUDA... A = (5, 4).cuda() B = (0, 9).view(3, 3).cuda() C = (0, 15).view(3, 5).cuda() idxs = torch.LongTensor([0, 2, 4]).cuda() A.indexadd(0, idxs, B) print(A) # 0 1 2 0 # 0 0 0 0 # 3 4 5 0 # 0 0 0 0 # 6 7 8 0 # [ of size 5x4 (GPU 0)] OK, this looks wrong... A.zero() A.indexadd(0, idxs, C) print(A) # 0 1 2 3 # 4 0 0 0 # 5 6 7 8 # 9 0 0 0 # 10 11 12 13 # [ of size 5x4 (GPU 0)] Now this looks definitely wrong. Increase C's dimension to something like (3, 500), and it overwrites other tensors or triggers asserts. Same thing happens with indexcopy_.\nI'll take it if no one's looking at it yet.positive_passagesdociddoc-en-pytorch-0c1d2b21c3aed080ff0bb21e80fe229ef6f87b04f7463a20a1f6ac34395f0a04text#define THC_GENERIC_FILE \ #else <ins> // Check tensor dimensions for index operations, and return the slice size. // src can be nullptr in case of indexFill: in that case it is ignored. static ptrdiff_t THCTensor_(getSliceSize)(THCState *state, THCTensor *dst, int dim, THCudaLongTensor *index, THCTensor *src) { int dstDims = THCTensor_(nDimension)(state, dst); int srcDims = (src == nullptr) ? dstDims : THCTensor_(nDimension)(state, src); THArgCheck(THCudaLongTensor_nDimension(state, index) == 1, 4, \); THArgCheck(dim >= 0 && dim < dstDims, 2, \); ptrdiff_t dstSliceSize = 1; for (int d = 0; d < dstDims; d++) { if (d != dim) { dstSliceSize *= dst->size[d]; } } if (src == nullptr) return dstSliceSize; THArgCheck(dim < srcDims, 3, \); THArgCheck(THCudaLongTensor_nElement(state, index) == src->size[dim], 4, \); ptrdiff_t srcSliceSize = 1; bool mismatch = false; if (dstDims != srcDims) mismatch = true; for (int d = 0; d < srcDims; d++) { if (d != dim) { srcSliceSize *= src->size[d]; if (!mismatch && dst->size[d] != src->size[d]) mismatch = true; } } THArgCheck(dstSliceSize == srcSliceSize, 2, \, dstSliceSize, srcSliceSize); if (mismatch) { static bool warningShown = false; if (!warningShown) { warningShown = true; fprintf(stderr, \ \); } } return dstSliceSize; } </ins> void THCTensor_(indexCopy_long)(THCState *state, THCTensor *dst, int dim, THLongTensor *indices, THCTensor *src) { THCAssertSameGPU(THCTensor_(checkGPU)(state, 2, dst, src));commidpytorch_pr_4342negative_passages |
|
query_idq-en-pytorch-f32bbc53b4f3b92b53b5ef9ed6754d57475c03612f4070c1bfd9266ba097f7abqueryA = (5, 4) B = (0, 9).view(3, 3) C = (0, 15).view(3, 5) idxs = torch.LongTensor([0, 2, 4]) A.indexadd(0, idxs, B) # RuntimeError: inconsistent tensor size, expected r [4], t [4] and src [3] to have the same number of elements, but got 4, 4 and 3 elements respectively at (...)/aten/src/TH/generic/THTensorMath.c:1008 A.indexadd(0, idxs, C) # RuntimeError: inconsistent tensor size, expected r [4], t [4] and src [5] to have the same number of elements, but got 4, 4 and 5 elements respectively at (...)/aten/src/TH/generic/THTensorMath.c:1008 So far so good. But if we use CUDA... A = (5, 4).cuda() B = (0, 9).view(3, 3).cuda() C = (0, 15).view(3, 5).cuda() idxs = torch.LongTensor([0, 2, 4]).cuda() A.indexadd(0, idxs, B) print(A) # 0 1 2 0 # 0 0 0 0 # 3 4 5 0 # 0 0 0 0 # 6 7 8 0 # [ of size 5x4 (GPU 0)] OK, this looks wrong... A.zero() A.indexadd(0, idxs, C) print(A) # 0 1 2 3 # 4 0 0 0 # 5 6 7 8 # 9 0 0 0 # 10 11 12 13 # [ of size 5x4 (GPU 0)] Now this looks definitely wrong. Increase C's dimension to something like (3, 500), and it overwrites other tensors or triggers asserts. Same thing happens with indexcopy_.\nI'll take it if no one's looking at it yet.", "positive_passages": [{"docid": "doc-en-pytorch-438f0cd197dc41837843b0b79dcabb3cb7066f26b861da7f1fd39c8db8f72698", "text": "dims = THCudaLongTensor_nDimension(state, indices); THArgCheck(dims <= MAX_CUTORCH_DIMS, 4, CUTORCH_DIM_WARNING); <del> ptrdiff_t numIndices = THCudaLongTensor_nElement(state, indices); int srcDims = THCTensor_(nDimension)(state, src); cudaStream_t stream = THCState_getCurrentStream(state); THArgCheck(THCudaLongTensor_nDimension(state, indices) == 1, 3, \"expecting vector of indices\"); THArgCheck(dim < srcDims, 4, \"Indexing dim is out of bounds\"); THArgCheck(srcDims > 0, 2, \"Source tensor is empty\"); THArgCheck(numIndices == src->size[dim], 4, \"length of src.size[dim] is not equal to length of indices\"); int indContig = THCudaLongTensor_isContiguous(state, indices); </del> // The `src` is partitioned into two parts: // -the size of each slice we are indexing, which is the // total size of the tensor ignoring dimension `dim`; // -the number of indices we are choosing, which is the total size // of the tensor `indices`. <ins> ptrdiff_t sliceSize = THCTensor_(getSliceSize)(state, dst, dim, indices, src); </ins> ptrdiff_t srcTotalSize = THCTensor_(nElement)(state, src); int64_t dstCopyDimSize = THCTensor_(size)(state, dst, dim); <del> ptrdiff_t sliceSize = srcTotalSize / numIndices; </del> <ins> ptrdiff_t numIndices = THCudaLongTensor_nElement(state, indices); cudaStream_t stream = THCState_getCurrentStream(state); int indContig = THCudaLongTensor_isContiguous(state, indices); </ins> int mpc = THCState_getCurrentDeviceProperties(state)->multiProcessorCount;", "commid": "pytorch_pr_4342"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-f32bbc53b4f3b92b53b5ef9ed6754d57475c03612f4070c1bfd9266ba097f7ab", "query": "A = (5, 4) B = (0, 9).view(3, 3) C = (0, 15).view(3, 5) idxs = torch.LongTensor([0, 2, 4]) A.indexadd(0, idxs, B) # RuntimeError: inconsistent tensor size, expected r [4], t [4] and src [3] to have the same number of elements, but got 4, 4 and 3 elements respectively at (...)/aten/src/TH/generic/THTensorMath.c:1008 A.indexadd(0, idxs, C) # RuntimeError: inconsistent tensor size, expected r [4], t [4] and src [5] to have the same number of elements, but got 4, 4 and 5 elements respectively at (...)/aten/src/TH/generic/THTensorMath.c:1008 So far so good. But if we use CUDA... A = (5, 4).cuda() B = (0, 9).view(3, 3).cuda() C = (0, 15).view(3, 5).cuda() idxs = torch.LongTensor([0, 2, 4]).cuda() A.indexadd(0, idxs, B) print(A) # 0 1 2 0 # 0 0 0 0 # 3 4 5 0 # 0 0 0 0 # 6 7 8 0 # [ of size 5x4 (GPU 0)] OK, this looks wrong... A.zero() A.indexadd(0, idxs, C) print(A) # 0 1 2 3 # 4 0 0 0 # 5 6 7 8 # 9 0 0 0 # 10 11 12 13 # [ of size 5x4 (GPU 0)] Now this looks definitely wrong. Increase C's dimension to something like (3, 500), and it overwrites other tensors or triggers asserts. Same thing happens with indexcopy_.\nI'll take it if no one's looking at it yet.positive_passagesdociddoc-en-pytorch-606249d4e5065e174d0bb8445e4f0794e1c65b557690f03981584032238bd0cbtextdims = THCudaLongTensor_nDimension(state, indices); THArgCheck(dims <= MAX_CUTORCH_DIMS, 4, CUTORCH_DIM_WARNING); <del> ptrdiff_t numIndices = THCudaLongTensor_nElement(state, indices); int srcDims = THCTensor_(nDimension)(state, src); cudaStream_t stream = THCState_getCurrentStream(state); THArgCheck(THCudaLongTensor_nDimension(state, indices) == 1, 3, \); THArgCheck(dim < srcDims, 4, \); THArgCheck(srcDims > 0, 2, \); THArgCheck(numIndices == src->size[dim], 4, \); int indContig = THCudaLongTensor_isContiguous(state, indices); </del> // The `src` is partitioned into two parts: // -the size of each slice we are indexing, which is the // total size of the tensor ignoring dimension `dim`; // -the number of indices we are choosing, which is the total size // of the tensor `indices`. <ins> ptrdiff_t sliceSize = THCTensor_(getSliceSize)(state, dst, dim, indices, src); </ins> ptrdiff_t srcTotalSize = THCTensor_(nElement)(state, src); int64_t dstAddDimSize = THCTensor_(size)(state, dst, dim); <del> ptrdiff_t sliceSize = srcTotalSize / numIndices; </del> <ins> ptrdiff_t numIndices = THCudaLongTensor_nElement(state, indices); cudaStream_t stream = THCState_getCurrentStream(state); int indContig = THCudaLongTensor_isContiguous(state, indices); </ins> int mpc = THCState_getCurrentDeviceProperties(state)->multiProcessorCount;commidpytorch_pr_4342negative_passages |
|
query_idq-en-pytorch-f32bbc53b4f3b92b53b5ef9ed6754d57475c03612f4070c1bfd9266ba097f7abqueryA = (5, 4) B = (0, 9).view(3, 3) C = (0, 15).view(3, 5) idxs = torch.LongTensor([0, 2, 4]) A.indexadd(0, idxs, B) # RuntimeError: inconsistent tensor size, expected r [4], t [4] and src [3] to have the same number of elements, but got 4, 4 and 3 elements respectively at (...)/aten/src/TH/generic/THTensorMath.c:1008 A.indexadd(0, idxs, C) # RuntimeError: inconsistent tensor size, expected r [4], t [4] and src [5] to have the same number of elements, but got 4, 4 and 5 elements respectively at (...)/aten/src/TH/generic/THTensorMath.c:1008 So far so good. But if we use CUDA... A = (5, 4).cuda() B = (0, 9).view(3, 3).cuda() C = (0, 15).view(3, 5).cuda() idxs = torch.LongTensor([0, 2, 4]).cuda() A.indexadd(0, idxs, B) print(A) # 0 1 2 0 # 0 0 0 0 # 3 4 5 0 # 0 0 0 0 # 6 7 8 0 # [ of size 5x4 (GPU 0)] OK, this looks wrong... A.zero() A.indexadd(0, idxs, C) print(A) # 0 1 2 3 # 4 0 0 0 # 5 6 7 8 # 9 0 0 0 # 10 11 12 13 # [ of size 5x4 (GPU 0)] Now this looks definitely wrong. Increase C's dimension to something like (3, 500), and it overwrites other tensors or triggers asserts. Same thing happens with indexcopy_.\nI'll take it if no one's looking at it yet.", "positive_passages": [{"docid": "doc-en-pytorch-a7f5f31517bc2f93e661073a8055244cb988c558177acffc671e7b7f0c3b5dfc", "text": "dims = THCudaLongTensor_nDimension(state, indices); THArgCheck(dims <= MAX_CUTORCH_DIMS, 4, CUTORCH_DIM_WARNING); <del> ptrdiff_t numIndices = THCudaLongTensor_nElement(state, indices); int srcDims = THCTensor_(nDimension)(state, dst); cudaStream_t stream = THCState_getCurrentStream(state); THArgCheck(THCudaLongTensor_nDimension(state, indices) == 1, 3, \"expecting vector of indices\"); THArgCheck(dim < srcDims, 4, \"Indexing dim is out of bounds\"); THArgCheck(srcDims > 0, 2, \"Source tensor is empty\"); int indContig = THCudaLongTensor_isContiguous(state, indices); </del> // The `src` is partitioned into two parts: // -the size of each slice we are indexing, which is the // total size of the tensor ignoring dimension `dim`; // -the number of indices we are choosing, which is the total size // of the tensor `indices`. <ins> ptrdiff_t sliceSize = THCTensor_(getSliceSize)(state, dst, dim, indices, nullptr); </ins> ptrdiff_t dstTotalSize = THCTensor_(nElement)(state, dst); int64_t dstFillDimSize = THCTensor_(size)(state, dst, dim); <del> ptrdiff_t sliceSize = dstTotalSize / dstFillDimSize; </del> <ins> ptrdiff_t numIndices = THCudaLongTensor_nElement(state, indices); cudaStream_t stream = THCState_getCurrentStream(state); int indContig = THCudaLongTensor_isContiguous(state, indices); </ins> int mpc = THCState_getCurrentDeviceProperties(state)->multiProcessorCount;", "commid": "pytorch_pr_4342"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-fff45bda1af9264b746478a98a74e6cf752acf99d629275525cc18c2c81e146d", "query": "Build error: According to , the should be modified as:\nI the same problem when installing pytorch from source. You mean the file is wrong?\nAlthough the PyYAML documentation site suggests that is possible, I could not perform it. That is a reason for the failing. This has been reported as well: I believe that the should have this instead: In fact when I did this: This indicates that the import didn't happen in PyYAML itself.\nyou guys are right, the case has to be modified to: . The original issue (my guess) is that your pyyaml had some compile error and the CLoader wasn't successfully installed.", "positive_passages": [{"docid": "doc-en-pytorch-8dc6149b5f5badde19efd573c34248a8d068fdf4c78b544c3b62cf1cae95a6e3", "text": "# use faster C loader if available from yaml import CLoader as YamlLoader except ImportError: <del> from yaml import YamlLoader </del> <ins> from yaml import Loader as YamlLoader </ins> GENERATED_COMMENT = CodeTemplate(\"\"\"", "commid": "pytorch_pr_4379"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-fff45bda1af9264b746478a98a74e6cf752acf99d629275525cc18c2c81e146d", "query": "Build error: According to , the should be modified as:\nI the same problem when installing pytorch from source. You mean the file is wrong?\nAlthough the PyYAML documentation site suggests that is possible, I could not perform it. That is a reason for the failing. This has been reported as well: I believe that the should have this instead: In fact when I did this: This indicates that the import didn't happen in PyYAML itself.\nyou guys are right, the case has to be modified to: . The original issue (my guess) is that your pyyaml had some compile error and the CLoader wasn't successfully installed.", "positive_passages": [{"docid": "doc-en-pytorch-98a65139c26c58918173adc25dd43e2b8fe912cd091c33627859786ced17cd5a", "text": "r\"\"\" expm1_() -> Tensor <del> In-place version of :meth:`~Tensor.exp` </del> <ins> In-place version of :meth:`~Tensor.expm1` </ins> \"\"\") add_docstr_all('exponential_',", "commid": "pytorch_pr_4379"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-79b332daeb9fc8e8c0c4d12fe4d78c1c63b3d2785f4eb3b471ec11e1c57f2238", "query": "Repro: Expect two small tensors of small integers to be printed. Instead, only the first case works. In the second case, you get a series of failed assertions starting with this: I wonder if it is related to\nis this still an outstanding issue? I cannot reproduce this with an install from the master branch.\nThe size which causes failure doesn't seem deterministic. Using a pretty recent build (a few days old), I now have to increase the \ to a larger number to make it fail. Sometimes it will fail at 10, sometimes at 11. Here's an updated case that seems to fail reliably at some size, for me:", "positive_passages": [{"docid": "doc-en-pytorch-d5c3f8a043c1a6dc370a73393f9fc9679b408c0c7b294a3db0024c7905e75077", "text": "THLongStorage_free(topKSize); #define RUN_K(INDEX_T, DIM, DIR) <del> gatherTopK<real, INDEX_T, DIM, DIR> </del> <ins> gatherTopK<real, INDEX_T, DIM, DIR> </ins> <<<grid, block, 0, THCState_getCurrentStream(state)>>>( inputInfo, sliceSize, ", "commid": "pytorch_pr_5053"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-79b332daeb9fc8e8c0c4d12fe4d78c1c63b3d2785f4eb3b471ec11e1c57f2238", "query": "Repro: Expect two small tensors of small integers to be printed. Instead, only the first case works. In the second case, you get a series of failed assertions starting with this: I wonder if it is related to\nis this still an outstanding issue? I cannot reproduce this with an install from the master branch.\nThe size which causes failure doesn't seem deterministic. Using a pretty recent build (a few days old), I now have to increase the \ to a larger number to make it fail. Sometimes it will fail at 10, sometimes at 11. Here's an updated case that seems to fail reliably at some size, for me:", "positive_passages": [{"docid": "doc-en-pytorch-679d810bd7ed5136accea4c7418b780d7eafc6f32420c98e43f99f94af794701", "text": "} #define RUN_T(INDEX_T) <del> TensorInfo<real, INDEX_T> inputInfo = getTensorInfo<THCTensor, INDEX_T>(state, input); TensorInfo<real, INDEX_T> topKInfo = getTensorInfo<THCTensor, INDEX_T>(state, topK); </del> <ins> TensorInfo<real, INDEX_T> inputInfo = getTensorInfo<THCTensor, INDEX_T>(state, input); TensorInfo<real, INDEX_T> topKInfo = getTensorInfo<THCTensor, INDEX_T>(state, topK); </ins> TensorInfo<int64_t, INDEX_T> indicesInfo = getTensorInfo<THCudaLongTensor, INDEX_T>(state, indices); ", "commid": "pytorch_pr_5053"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-79b332daeb9fc8e8c0c4d12fe4d78c1c63b3d2785f4eb3b471ec11e1c57f2238", "query": "Repro: Expect two small tensors of small integers to be printed. Instead, only the first case works. In the second case, you get a series of failed assertions starting with this: I wonder if it is related to\nis this still an outstanding issue? I cannot reproduce this with an install from the master branch.\nThe size which causes failure doesn't seem deterministic. Using a pretty recent build (a few days old), I now have to increase the \ to a larger number to make it fail. Sometimes it will fail at 10, sometimes at 11. Here's an updated case that seems to fail reliably at some size, for me:", "positive_passages": [{"docid": "doc-en-pytorch-bf25f2e78031322135546bbc6eda7e79560aab0a0cb5a330956959ec34d5c1f2", "text": "int collapseIndicesDim = indicesInfo.collapseDims(dim); int64_t inputSlices = 1; <del> int64_t topKSlices = 1; for (int i = 0; i < numDims; ++i) { </del> <ins> for (int i = 0; i < inputInfo.dims; ++i) { </ins> inputSlices *= inputInfo.sizes[i]; <ins> } int64_t topKSlices = 1; for (int i = 0; i < topKInfo.dims; ++i) { </ins> topKSlices *= topKInfo.sizes[i]; } ", "commid": "pytorch_pr_5053"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-776df235103b53a71bf8f4ba65848fd57cc3b83752cd0362e099dcf8f172238d", "query": "If there was two libcudnn files ( and ) in the folder wouldn't it result in taking the older version ()? Maybe adding a warning or something to insure its the right lib?\nI think and are quite possibly related. EDIT: alsopositive_passagesdociddoc-en-pytorch-07ad46ccd3b35b225c93307016f8c2417fcdbe18f312fc8cd9bb83f573139683text] if WITH_CUDNN: main_libraries += ['cudnn'] <del> library_dirs.append(CUDNN_LIB_DIR) </del> <ins> library_dirs.insert(0, CUDNN_LIB_DIR) </ins> # NOTE: these are at the front, in case there's another cuDNN in CUDA path include_dirs.insert(0, CUDNN_INCLUDE_DIR) if not IS_WINDOWS:", "commid": "pytorch_pr_5345"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-2b1784682bef2edee23155fdd0d37cb0be48b60c5122a548b88f9a8f1c89318c", "query": "If a test segfaults, it's hard to determine which test it is because the output looks like: ! I can work on this if we think this is a good idea.\nLast time I asked about this, he didn't want to because it adds a lot of scrolling. I also did some work to move the CI tests to so that we can output XML status files, but (1) this is unlikely to help in case of segfault, and (2) you can't do this straightforwardly, because we run some test files multiple times with different environment variables, and the XML exporter isn't really set up to handle this case. For the record, I am personally OK with just slapping on the tests and calling it a day.\nin the conbuild sounds good to me too\nclosed via", "positive_passages": [{"docid": "doc-en-pytorch-60e3bc443896180daa3916c38d9d90d5b3e0138f03760019aba7beaac719f978", "text": "python setup.py install cd test/ echo \"Ninja version: $(ninja --version)\" <del> sh run_test.sh </del> <ins> sh run_test.sh -- -v </ins> echo \"BUILD PASSED\"", "commid": "pytorch_pr_5259"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-2b1784682bef2edee23155fdd0d37cb0be48b60c5122a548b88f9a8f1c89318c", "query": "If a test segfaults, it's hard to determine which test it is because the output looks like: ! I can work on this if we think this is a good idea.\nLast time I asked about this, he didn't want to because it adds a lot of scrolling. I also did some work to move the CI tests to so that we can output XML status files, but (1) this is unlikely to help in case of segfault, and (2) you can't do this straightforwardly, because we run some test files multiple times with different environment variables, and the XML exporter isn't really set up to handle this case. For the record, I am personally OK with just slapping on the tests and calling it a day.\nin the conbuild sounds good to me too\nclosed via", "positive_passages": [{"docid": "doc-en-pytorch-a2d50b89d13990a6674095f7da658955f6dca03e30045ecebb0ad411faf8265d", "text": "export PATH=\"$PWD:$PATH\" popd <del> time test/run_test.sh </del> <ins> time test/run_test.sh -- -v </ins> rm -rf ninja", "commid": "pytorch_pr_5259"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-2b1784682bef2edee23155fdd0d37cb0be48b60c5122a548b88f9a8f1c89318c", "query": "If a test segfaults, it's hard to determine which test it is because the output looks like: ! I can work on this if we think this is a good idea.\nLast time I asked about this, he didn't want to because it adds a lot of scrolling. I also did some work to move the CI tests to so that we can output XML status files, but (1) this is unlikely to help in case of segfault, and (2) you can't do this straightforwardly, because we run some test files multiple times with different environment variables, and the XML exporter isn't really set up to handle this case. For the record, I am personally OK with just slapping on the tests and calling it a day.\nin the conbuild sounds good to me too\nclosed via", "positive_passages": [{"docid": "doc-en-pytorch-bf19cdd3b1e009d8608c593192d698a31682a4f2612e16012caf069b6a3c0506", "text": "python ..ci_scriptsdelete_image.py 7z x %IMAGE_COMMIT_TAG%.7z <del> sh run_test.sh </del> <ins> sh run_test.sh -- -v </ins> EOL", "commid": "pytorch_pr_5259"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-a493a7a655b923ff2927bc3f83cb30852a776bf52d18dbc22b8714994f384729", "query": "Claim: When I use an invalid label with F.crossentropy with reduce=False, I don't receive an error message. I do receive an error message when reduce=True. Code: Result: No error message received. Expected error message: Context: I was using , accidentally used an invalid label, and got extremely large losses and extremely small losses, and inconsistently so. Environment: OS: Ubuntu 16.04 on Docker and host PyTorch version: 0.3.0.post4 How you installed PyTorch: conda Python version: 3.6.3positive_passagesdociddoc-en-pytorch-f25dbdd21ed338103f5ceffb8dc8567343f91e643ea156ccf1b52fe5dd283810textTHCDeviceTensor<THCIndex_t, 1> target, THCDeviceTensor<Dtype, 1> output, Dtype *weights, <ins> int n_classes, </ins> int ignore_index) { CUDA_KERNEL_LOOP(index, batch_size) {commidpytorch_pr_5299negative_passages |
|
{"query_id": "q-en-pytorch-a493a7a655b923ff2927bc3f83cb30852a776bf52d18dbc22b8714994f384729", "query": "Claim: When I use an invalid label with F.crossentropy with reduce=False, I don't receive an error message. I do receive an error message when reduce=True. Code: Result: No error message received. Expected error message: Context: I was using , accidentally used an invalid label, and got extremely large losses and extremely small losses, and inconsistently so. Environment: OS: Ubuntu 16.04 on Docker and host PyTorch version: 0.3.0.post4 How you installed PyTorch: conda Python version: 3.6.3", "positive_passages": [{"docid": "doc-en-pytorch-fedda46d10136b30896303304b40903405a0f57d4e0246550f02a9deb34f8d03", "text": "output[index] = ScalarConvert<int, Dtype>::to(0); continue; } <ins> assert(cur_target >= 0 && cur_target < n_classes); </ins> Dtype weight = weights ? weights[cur_target] : ScalarConvert<int, Dtype>::to(1); output[index] = -weight * input[index][cur_target];", "commid": "pytorch_pr_5299"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-a493a7a655b923ff2927bc3f83cb30852a776bf52d18dbc22b8714994f384729", "query": "Claim: When I use an invalid label with F.crossentropy with reduce=False, I don't receive an error message. I do receive an error message when reduce=True. Code: Result: No error message received. Expected error message: Context: I was using , accidentally used an invalid label, and got extremely large losses and extremely small losses, and inconsistently so. Environment: OS: Ubuntu 16.04 on Docker and host PyTorch version: 0.3.0.post4 How you installed PyTorch: conda Python version: 3.6.3positive_passagesdociddoc-en-pytorch-38635f5ed81c34c96d88693411b87cb69cd435ddda1898d2b51e2b4838becce1textTHCDeviceTensor<Dtype, 1> gradOutput, THCDeviceTensor<Dtype, 2> gradInput, Dtype *weights, <ins> int n_classes, </ins> int ignore_index) { CUDA_KERNEL_LOOP(index, batch_size) {commidpytorch_pr_5299negative_passages |
|
{"query_id": "q-en-pytorch-a493a7a655b923ff2927bc3f83cb30852a776bf52d18dbc22b8714994f384729", "query": "Claim: When I use an invalid label with F.crossentropy with reduce=False, I don't receive an error message. I do receive an error message when reduce=True. Code: Result: No error message received. Expected error message: Context: I was using , accidentally used an invalid label, and got extremely large losses and extremely small losses, and inconsistently so. Environment: OS: Ubuntu 16.04 on Docker and host PyTorch version: 0.3.0.post4 How you installed PyTorch: conda Python version: 3.6.3", "positive_passages": [{"docid": "doc-en-pytorch-e07a9efd2cd7a58c4c39247b1e157c77e8eb7156d8f2882cecd23fd54fd8897e", "text": "if (cur_target == ignore_index) { continue; } <ins> assert(cur_target >= 0 && cur_target < n_classes); </ins> Dtype weight = weights ? weights[cur_target] : ScalarConvert<int, Dtype>::to(1); gradInput[index][cur_target] = -weight * gradOutput[index];", "commid": "pytorch_pr_5299"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-a493a7a655b923ff2927bc3f83cb30852a776bf52d18dbc22b8714994f384729", "query": "Claim: When I use an invalid label with F.crossentropy with reduce=False, I don't receive an error message. I do receive an error message when reduce=True. Code: Result: No error message received. Expected error message: Context: I was using , accidentally used an invalid label, and got extremely large losses and extremely small losses, and inconsistently so. Environment: OS: Ubuntu 16.04 on Docker and host PyTorch version: 0.3.0.post4 How you installed PyTorch: conda Python version: 3.6.3positive_passagesdociddoc-en-pytorch-d3fb4ebdce6591812cd12149652cdc7becc9a9ad66902eea4d1a71d24862e604texttoDeviceTensor<THCIndex_t, 1>(state, target), toDeviceTensor<real, 1>(state, output), weights ? THCTensor_(data)(state, weights) : NULL, <ins> n_classes, </ins> ignore_index); <ins> THCudaCheck(cudaGetLastError()); </ins> if (weights) { THCTensor_(free)(state, weights); }commidpytorch_pr_5299negative_passages |
|
{"query_id": "q-en-pytorch-a493a7a655b923ff2927bc3f83cb30852a776bf52d18dbc22b8714994f384729", "query": "Claim: When I use an invalid label with F.cross_entropy with reduce=False, I don't receive an error message. I do receive an error message when reduce=True. Code: Result: No error message received. Expected error message: Context: I was using , accidentally used an invalid label, and got extremely large losses and extremely small losses, and inconsistently so. Environment: OS: Ubuntu 16.04 on Docker and host PyTorch version: 0.3.0.post4 How you installed PyTorch: conda Python version: 3.6.3", "positive_passages": [{"docid": "doc-en-pytorch-dd77b181434484ed376963610bcc97d0d2d487e868885c180ada5d737f9e88f5", "text": "toDeviceTensor<real, 1>(state, gradOutput), toDeviceTensor<real, 2>(state, gradInput), weights ? THCTensor_(data)(state, weights) : NULL, <ins> n_classes, </ins> ignore_index); <ins> THCudaCheck(cudaGetLastError()); </ins> if (weights) { THCTensor_(free)(state, weights); }", "commid": "pytorch_pr_5299"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-a493a7a655b923ff2927bc3f83cb30852a776bf52d18dbc22b8714994f384729", "query": "Claim: When I use an invalid label with F.crossentropy with reduce=False, I don't receive an error message. I do receive an error message when reduce=True. Code: Result: No error message received. Expected error message: Context: I was using , accidentally used an invalid label, and got extremely large losses and extremely small losses, and inconsistently so. Environment: OS: Ubuntu 16.04 on Docker and host PyTorch version: 0.3.0.post4 How you installed PyTorch: conda Python version: 3.6.3positive_passagesdociddoc-en-pytorch-0b1460892c58254a6587f1685bd25c10911face2e6316d8a8407809c162af679textint batch_size = THTensor_(size)(input, 0); THTensor_(resize1d)(output, batch_size); <ins> int invalid_target = -1; // We cannot throw an exception inside omp parallel </ins> int i; #pragma omp parallel for private(i) for (i = 0; i < batch_size; i++) { int cur_target = THTensor_fastGet1d(target, i) - TH_INDEX_BASE; <del> if (cur_target == ignore_index) { THTensor_fastSet1d(output, i, 0.0f); continue; </del> <ins> if (cur_target >= 0 && cur_target < n_classes) { if (cur_target == ignore_index) { THTensor_fastSet1d(output, i, 0.0f); continue; } real cur_weight = weights ? THTensor_fastGet1d(weights, cur_target) : 1.0f; THTensor_fastSet1d(output, i, -THTensor_fastGet2d(input, i, cur_target) * cur_weight); } else { THAtomicCompareAndSwap(&invalid_target, -1, cur_target); </ins> } <del> real cur_weight = weights ? THTensor_fastGet1d(weights, cur_target) : 1.0f; THTensor_fastSet1d(output, i, -THTensor_fastGet2d(input, i, cur_target) * cur_weight); </del> <ins> } if (invalid_target >= 0) { THError(\, invalid_target); </ins> } return;commidpytorch_pr_5299negative_passages |
|
{"query_id": "q-en-pytorch-eb9b370355512651cb8f742ffa616415d4e60db7c2ed97594eef6778ebe231b5", "query": "CC who was asking me about doc generation yesterday. Full log And CC\nYou probably know why this is happening, but just in case: fixes this. (as per: ) Matplotlib is trying to open a display window on what I assume is a screen-less server.", "positive_passages": [{"docid": "doc-en-pytorch-406d300c4698ef8ebb5337a06de7733923ff9b8c1d75f141fdd19816f61b98ae", "text": "import os.path import torch.nn.modules.activation import torch.autograd <ins> import matplotlib matplotlib.use('Agg') </ins> import pylab", "commid": "pytorch_pr_5494"}], "negative_passages": []} |
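A small sketch of the fix recorded above: select the non-interactive Agg backend before pyplot/pylab is imported so doc generation works on a machine without a display. The plotted data and output filename are placeholders.

```python
# Must run before the first pyplot/pylab import, otherwise the default
# interactive backend may try to open a window on a headless server.
import matplotlib
matplotlib.use('Agg')

import matplotlib.pyplot as plt

plt.plot([0, 1, 2], [0, 1, 4])
plt.savefig('example.png')  # rendered to disk; no display window is opened
```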
|
{"query_id": "q-en-pytorch-9fadf55fef649758ce298e5387e2a100c57f31d87ba54cce681ee82647ba65e7", "query": "In PyTorch master:\nAdded clamp's output support in pre-template code.", "positive_passages": [{"docid": "doc-en-pytorch-aebacc799c3a81586b0c547a02debca587aaa2d0618d2fe2a2717dd0dbd9fc61", "text": "res2[i] = max(min_val, min(max_val, res2[i])) self.assertEqual(res1, res2) <ins> out = m1.clone() torch.clamp(m1, min=min_val, max=max_val, out=out) self.assertEqual(out, res1) </ins> res1 = torch.clamp(m1, min=min_val) res2 = m1.clone() for i in iter_indices(res2): res2[i] = max(min_val, res2[i]) self.assertEqual(res1, res2) <ins> torch.clamp(m1, min=min_val, out=out) self.assertEqual(out, res1) </ins> res1 = torch.clamp(m1, max=max_val) res2 = m1.clone() for i in iter_indices(res2): res2[i] = min(max_val, res2[i]) self.assertEqual(res1, res2) <ins> torch.clamp(m1, max=max_val, out=out) self.assertEqual(out, res1) </ins> def test_pow(self): # [res] torch.pow([res,] x)", "commid": "pytorch_pr_6418"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-9fadf55fef649758ce298e5387e2a100c57f31d87ba54cce681ee82647ba65e7", "query": "In PyTorch master:\nAdded clamp's output support in pre-template code.positive_passagesdociddoc-en-pytorch-47fdd2074fae54076b9495d25a94a841524a59ccda55df179b17cbd54b477ef8text} } <del> static Tensor dispatch_clamp(const Tensor & self, Scalar min, Scalar max) { AutoNoGIL no_gil; AutoGPU auto_gpu(self); return self.clamp(min, max); } static Tensor dispatch_clamp_min(const Tensor & self, Scalar min) { AutoNoGIL no_gil; AutoGPU auto_gpu(self); return self.clamp_min(min); } static Tensor dispatch_clamp_max(const Tensor & self, Scalar max) { AutoNoGIL no_gil; AutoGPU auto_gpu(self); return self.clamp_max(max); } </del> // The Python clamp() syntax has to be mapped to one of three C++ functions static PyObject * THPVariable_clamp(PyObject* module, PyObject* args, PyObject* kwargs) { HANDLE_TH_ERRORS static PythonArgParser parser({ <del> \, </del> <ins> \, </ins> }); <del> ParsedArgs<3> parsed_args; </del> <ins> ParsedArgs<4> parsed_args; </ins> auto r = parser.parse(args, kwargs, parsed_args); if (!r.isNone(1) && !r.isNone(2)) { <del> return THPVariable_Wrap(dispatch_clamp(r.tensor(0), r.scalar(1), r.scalar(2))); </del> <ins> if (!r.isNone(3)) { return wrap(dispatch_clamp(r.tensor(0), r.scalar(1), r.scalar(2), r.tensor(3))); } else { return wrap(dispatch_clamp(r.tensor(0), r.scalar(1), r.scalar(2))); } </ins> } else if (!r.isNone(1)) { <del> return THPVariable_Wrap(dispatch_clamp_min(r.tensor(0), r.scalar(1))); </del> <ins> if (!r.isNone(3)) { return wrap(dispatch_clamp_min(r.tensor(0), r.scalar(1), r.tensor(3))); } else { return wrap(dispatch_clamp_min(r.tensor(0), r.scalar(1))); } </ins> } else if (!r.isNone(2)) { <del> return THPVariable_Wrap(dispatch_clamp_max(r.tensor(0), r.scalar(2))); </del> <ins> if (!r.isNone(3)) { return wrap(dispatch_clamp_max(r.tensor(0), r.scalar(2), r.tensor(3))); } else { return wrap(dispatch_clamp_max(r.tensor(0), r.scalar(2))); } </ins> } else { throw std::runtime_error(\); } <ins> Py_RETURN_NONE; </ins> END_HANDLE_TH_ERRORS }commidpytorch_pr_6418negative_passages |
|
{"query_id": "q-en-pytorch-9fadf55fef649758ce298e5387e2a100c57f31d87ba54cce681ee82647ba65e7", "query": "In PyTorch master:\nAdded clamp's output support in pre-template code.", "positive_passages": [{"docid": "doc-en-pytorch-9ec1744c3c99ad877b5de782ef7e6f996113b1f13b635199c006a6d8e8a19f56", "text": "} } <ins> // manual dispatch code for clamp inline Tensor dispatch_clamp(const Tensor & self, Scalar min, Scalar max) { AutoNoGIL no_gil; AutoGPU auto_gpu(self); return self.clamp(min, max); } inline Tensor dispatch_clamp_min(const Tensor & self, Scalar min) { AutoNoGIL no_gil; AutoGPU auto_gpu(self); return self.clamp_min(min); } inline Tensor dispatch_clamp_max(const Tensor & self, Scalar max) { AutoNoGIL no_gil; AutoGPU auto_gpu(self); return self.clamp_max(max); } inline Tensor & dispatch_clamp(const Tensor & self, Scalar min, Scalar max, Tensor result) { AutoNoGIL no_gil; AutoGPU auto_gpu(result); return at::clamp_out(result, self, min, max); } inline Tensor & dispatch_clamp_min(const Tensor & self, Scalar min, Tensor result) { AutoNoGIL no_gil; AutoGPU auto_gpu(result); return at::clamp_min_out(result, self, min); } inline Tensor & dispatch_clamp_max(const Tensor & self, Scalar max, Tensor result) { AutoNoGIL no_gil; AutoGPU auto_gpu(result); return at::clamp_max_out(result, self, max); } </ins> ${py_method_dispatch} }} // namespace torch::autograd", "commid": "pytorch_pr_6418"}], "negative_passages": []} |
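For reference, the `out=` form exercised by the test and dispatch changes above can be used as sketched below; the tensor shapes and clamp bounds are arbitrary.

```python
import torch

x = torch.randn(3, 4)
out = torch.empty_like(x)

# Out-of-place form returning a new tensor.
y = torch.clamp(x, min=-0.5, max=0.5)

# Same computation written into a preallocated tensor via out=.
torch.clamp(x, min=-0.5, max=0.5, out=out)
assert torch.equal(y, out)

# The min-only and max-only variants also accept out=.
torch.clamp(x, min=-0.5, out=out)
torch.clamp(x, max=0.5, out=out)
```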
|
{"query_id": "q-en-pytorch-770ebf2455272156db916a5267a9d9a86f93a18b654cbbc9f4496a4c4a01a348", "query": "I also found this issue when I tried to swap axes of a tensor. I googled an older doc in the source code. Hope it helps.", "positive_passages": [{"docid": "doc-en-pytorch-85de7ca993fe71820aa6914403ef4020c7c221a26a264b8143092c58d899fe53", "text": "See :func:`torch.ormqr` \"\"\") <ins> add_docstr_all('permute', r\"\"\" permute(*dims) -> Tensor Permute the dimensions of this tensor. Args: *dims (int...): The desired ordering of dimensions Example: >>> x = torch.randn(2, 3, 5) >>> x.size() torch.Size([2, 3, 5]) >>> x.permute(2, 0, 1).size() torch.Size([5, 2, 3]) \"\"\") </ins> add_docstr_all('potrf', r\"\"\" potrf(upper=True) -> Tensor", "commid": "pytorch_pr_7652"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-1d874fc5281ab3fc4382083b8f2f5f3b5964a59e0df7a3a006fa197e882221c5", "query": "I think this may have been intentional, so I want to float this first before I apply the change. Historically, PyTorch did DEBUG builds with , because it prevents inlining, which means that you can inspect variables in all stack frames. However, in master, DEBUG mode appears to be compiled with optimizations, which means that any inlined functions are non-inspectable. The following patch will turn off optimizations when compiled in DEBUG mode. Is there any reason not to apply this?", "positive_passages": [{"docid": "doc-en-pytorch-0cccd73354b58c3b5f70bf8d16e39eb3a4a5ab448c27359da1bd8edfcdaeb323", "text": "endforeach(flag_var) endif() <del> set (CMAKE_CXX_FLAGS_DEBUG \"${CMAKE_CXX_FLAGS_DEBUG} -fno-omit-frame-pointer\") set (CMAKE_LINKER_FLAGS_DEBUG \"${CMAKE_STATIC_LINKER_FLAGS_DEBUG} -fno-omit-frame-pointer\") </del> <ins> set (CMAKE_CXX_FLAGS_DEBUG \"${CMAKE_CXX_FLAGS_DEBUG} -fno-omit-frame-pointer -O0\") set (CMAKE_LINKER_FLAGS_DEBUG \"${CMAKE_STATIC_LINKER_FLAGS_DEBUG} -fno-omit-frame-pointer -O0\") </ins> if (USE_ASAN) set (CMAKE_CXX_FLAGS_DEBUG \"${CMAKE_CXX_FLAGS_DEBUG} -fsanitize=address\") set (CMAKE_LINKER_FLAGS_DEBUG \"${CMAKE_STATIC_LINKER_FLAGS_DEBUG} -fsanitize=address\")", "commid": "pytorch_pr_8336"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-ad56770f383c1f3506dafd13628272cddc92b1bc81e11704a08aa3cf3d0b73e6", "query": "yields irrelevant errors when passing wrong arguments First one: Second one: PyTorch or Caffe2: pytorch 0.4 How you installed PyTorch (conda, pip, source): pip Build command you used (if compiling from source): OS: ubuntu16.04 PyTorch version: 0.4 Python version: 3.6 CUDA/cuDNN version: GPU models and configuration: 1080 GCC version (if compiling from source): CMake version: Versions of any other relevant libraries:\nThank you for the report! I'll move them to the new check for same GPU (which includes reasonable CPU vs. GPU reporting).\nIn PyTorch master, the error message has already improved for the second case:\nI am getting exactly the first error reported: and I don't understand what is the problem. Can someone help on this?\nif you use PyTorch v0.4.1, you should get a better error message helping you on what's wrong.\nThanks for your reply I found out that the problem was the hidden state in my recurrent net, I did not use the .cuda()positive_passagesdociddoc-en-pytorch-2188b7f64d09abf523179bc9a17360014adcc6578ad04ba793084e0fac9b2123textdef forward(ctx, input, grid, padding_mode='zeros'): ctx.save_for_backward(input, grid) <ins> if input.device != grid.device: raise RuntimeError((\ + \).format(input.device, grid.device)) </ins> if padding_mode == 'zeros': ctx.padding_mode = MODE_ZEROS elif padding_mode == 'border':commidpytorch_pr_8646negative_passages |
|
{"query_id": "q-en-pytorch-7d28578569428268ac2d87f2d7ca561ee1ff92d1b36f6e4a85a7c0568ba38894", "query": "Porting TH operators is essential for code simplicity and performance reasons. Porting guides and Q&A are available in umbrella issue: Feel free to add as a reviewer to get a prioritized review.", "positive_passages": [{"docid": "doc-en-pytorch-06405422fd745b4c40f12f712d4a5d5c8b0538cc2e3c793902e24fdf9b7b87b3", "text": "# (3) initialize mean square values and square gradient storage if not 'm' in state: <del> state['m'] = x.new().resize_as_(dfdx).fill_(1) </del> <ins> state['m'] = x.new().resize_as_(dfdx).zero_() </ins> state['tmp'] = x.new().resize_as_(dfdx)", "commid": "pytorch_pr_485"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-7d28578569428268ac2d87f2d7ca561ee1ff92d1b36f6e4a85a7c0568ba38894", "query": "Porting TH operators is essential for code simplicity and performance reasons. Porting guides and Q&A are available in umbrella issue: Feel free to add as a reviewer to get a prioritized review.", "positive_passages": [{"docid": "doc-en-pytorch-d135710ef973222ecdc593d43b4acb154189b4178c2fd4ce3644b483113544c1", "text": "# State initialization if len(state) == 0: state['step'] = 0 <del> state['square_avg'] = grad.new().resize_as_(grad).fill_(1) </del> <ins> state['square_avg'] = grad.new().resize_as_(grad).zero_() </ins> square_avg = state['square_avg'] alpha = group['alpha']", "commid": "pytorch_pr_485"}], "negative_passages": []} |
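The two patches above change the optimizer's squared-gradient state from a ones to a zeros initialization. The snippet below is a self-contained illustration of why that matters, not the optimizer's actual code; the constant gradient and hyperparameter values are made up for the example.

```python
import torch

grad = torch.full((3,), 0.01)
alpha, eps, lr = 0.99, 1e-8, 1e-2

for init in (torch.zeros_like(grad), torch.ones_like(grad)):
    square_avg = init.clone()
    # One RMSprop-style update: v <- alpha*v + (1-alpha)*g^2 ; step = lr*g/(sqrt(v)+eps)
    square_avg.mul_(alpha).addcmul_(grad, grad, value=1 - alpha)
    step = lr * grad / (square_avg.sqrt() + eps)
    print(f"init={init[0].item():.0f}  first step={step[0].item():.2e}")

# Starting from ones keeps the denominator near 1, so early steps are tiny;
# starting from zeros matches the reference RMSprop behaviour.
```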
|
{"query_id": "q-en-pytorch-9e91342469b07c784f83e76fec4c57c9c9c111a8219e1c31f731d3e3105181d8", "query": "Kind of similar to Sometimes our Eigen build fails this way: It doesn't seem to reliably repro. cc\nI got some sort of similar-ish error persistently on a test branch:\nNot that persistently; a PR stacked on the failing one succeeded. So there is something nondeterministic going on here.\nanother instance: CI on\nLooks like an NVCC bug. The workground is protecting eigen headers with Can we extract some basic code example and compile it against NVCC?\nAnother one in\nCan you try with 10.2?\nMingbo, can you put general information on the issue you are seeing here in this issue?\nlatest word on this is from mingbo: \"from what I can tell, it's a flaky one. might depend on which VS version. I hit the problem before and problem fixed (at least seem so) after upgrading VS 2019 latest version\the same error showed up once last week, and re-run the job finished without problem.\fix\thrust::atanh\using ::atanh\build\using ::atanh\", "positive_passages": [{"docid": "", "text": "", "commid": ""}], "negative_passages": []} |
|
{"query_id": "", "query": "from what I can tell, it's a flaky one. might depend on which VS version. I hit the problem before and problem fixed (at least seem so) after upgrading VS 2019 latest version\" and \"the same error showed up once last week, and re-run the job finished without problem.\"\nHi. Did anyone have this issue persistently, ie that a rebuild did not \"fix\" it? We've had this issue in another (closed source) project. Exposing cuda to less includes (such as boost) seems to help but is not predictable.\nIt's never been persistent for us, but this is mostly from our Windows CI builds where we blow everything away and rebuild from scratch, which may help.\nYet another occurrence in the forum: It seems to be fixed by , which implies there may be a race condition in reading the include files or generating the intermediate source files.\nI was able to reproduce the issue with a more verbose output with And then I inspected the variables carefully and found out that the VC env is activated twice. We should really avoid that.\nYou are SO COOL!!!\nI tried to build from scratch several times with this PR and could not reproduce the issue anymore. Let's assume it's fixed.\nLooks like it was not resolved.\nAnother occurrence: This time it seems to claim the error is around the usage of .\nThis sounds like a good old multiple-instances-of-nvcc-write-to-the-same-temp-file One can invoke to see all temporary filenames it creates. I guess on windows they are more likely to collide with each other than on linux. One solution to the problem would be to invoke nvcc sequentially. Another is to tweak environment (there must be something similar to TMPDIR on Windows, mustn't it?)\nYes, they are called TMP and TEMP.\nIs there any example of the latter method on Unix systems?\nI've written a utility . Hopefully it can be used to resolve this problem if that's the case.\nNew occurrence after the previous fix:\nseems related to\nIf this is the case, maybe we could apply to fix that.\nunfortunately, this introduces another thrust problem (e.g. ): D:/programs64/cuda101/includethrust/detail/complex/catrig.h(607): error: no instance of function template \ matches the argument list argument types are: (double) with problematic include chain -thrust/complex.h -thrust/detail/complex/catrig.h (has cplusplus conditions) -thrust/detail/complex/c99math.h where \ is omitted for MSVC. Patching c99math.h (or even thrustdetailconfigcppdialect.h without using /Zc:_cplusplus) seems to work for me, but I'm not sure what's the best way to fix this within pytorch tree and whether c++ dialect is the only problem here.\nThanks for the investigation. BTW, could you please share with us how did you patch c99math.h?\nDisregard that, still got same unreproducible C2993 error with /Zc:_cplusplus. Earlier, I had \ directory reproducibly failing (at ), and hardcoding c++17 in thrustdetailconfigcppdialect.h fixed that. But looks like it is just a symptom of some inconsistence in environment/flags (across nvcc stages maybe?) just \\nYeah, maybe. I am considering to add a wrapper to retry the build a few times.\nI can now stably reproduce a related error by passing in .\nBy commenting out some part of the code, I find the code that causes the previous error. I don't know why it is executed in the device code.\nWell, I figured out the reason. It seems that and are implicitly marked as and . This behavior can be disabled by adding .\nNice work!\nSeems to be a nvcc bug\nThanks for reminding. I have commented there.\nThis is indeed an nvcc bug. There is no known workaround at the moment, but the next release of the CUDA toolkit will contain a fix. Ref thrust/thrust.\nMet this problem:\nJust retry a few times.\nThe CUDA TK 11.0 RC is now available: This issue has been fixed in that version of nvcc.\nThanks for the info. Actually, our release binaries rely on the cudatoolkit package in anaconda cloud. So we won't be able to publish the binaries with CUDA 11 now. I'm just curious whether that is maintained by Nvidia? Another question, any plan to backport the changes to exisiting CUDA versions (e.g. cuda 10.1, cuda 10.2)?\nI don't have that information unfortunately -- it'd be best to contact the maintainer of that package and see when they plan to update. I'm not aware of any plans to backport this fix.\nI have also obtained this error with CUDA 10.1/cuDNN 7.6.4 on Win Server 2019.\nthis is a cuda bug. Please see above discussion and upgrade to cuda 11.\nI fixed the error by installing sscache for use with cuda 10.1 and cuda 10.2, as done here.\nit's an intermittent error and the probability of occurrence varies depending on your system. So it's not really fixed, but you may have found a way to reduce the occurrence in your system, which may be sufficient in your case :)\nAnother instance, this time inside protobuf: Running with I found the syntax error is introduced by which is the cuda compiler front end. operates on the output from the compiler's pre-processor and that output barely even changed to trigger the build error, only line number metadata and some extra whitespace was changed in the pre-processed output. Here is the diff of the two inputs, one of which failed and the other didn't: And the corresponding change in the output: This failure reproduced reliably on the CI machine. Running nvcc manually multiple times, it always fails or passes given the same inputs. I think it's a deterministic failure, but is annoyingly sensitive to any changes in the input file whatsoever.\nclosing due to age\nWait, do we think this is fixed? It's not clear to me it is...\nI think this was related to CUDA 10.2 builds on windows which we don't actually support anymore so it is probably safe to keep closed.", "positive_passages": [{"docid": "doc-en-pytorch-5d81fd04b9a7ebeda53a709b3bed97b75d90041357170141bf5753a3476e6720", "text": "thisdir = path.dirname(__file__) libpaths = ['', path.join(thisdir, '../../lib')] if sys.platform.startswith('linux'): <del> libnames = ['libcudnn.so.5.1.5', 'libcudnn.so.5.1.3', 'libcudnn.so.5.0.5'] </del> <ins> libnames = ['libcudnn.so.5.1.5', 'libcudnn.so.5.1.3', 'libcudnn.so.5.0.5', 'libcudnn.so.5.1.10'] </ins> elif sys.platform == 'darwin': libnames = ['libcudnn.5.dylib'] else:", "commid": "pytorch_pr_448"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-a841b3294e3a6ae36780ece0ac356e026aafbc326cdbc157f1180b85daa7a858", "query": "As it still using deprecated resource class, see See example of the failure in\nFollowup: we need to plan better --we knew it was gonna happen and none of us fixed it early enough. Action item: go through tasks about deprecation in sprint planning", "positive_passages": [{"docid": "doc-en-pytorch-77152ccdd008189dcd84c7362b1ae0b2b045baede513ba90c4a6ecc096558113", "text": "is_master_only=True, requires=[\"binary_linux_manywheel_3_7m_cu102_devtoolset7_build\"], extra_props={ <del> \"resource_class\": \"gpu.medium\", </del> <ins> \"resource_class\": \"gpu.nvidia.small\", </ins> \"use_cuda_docker_runtime\": miniutils.quote((str(1))), }, ),", "commid": "pytorch_pr_72613"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-a841b3294e3a6ae36780ece0ac356e026aafbc326cdbc157f1180b85daa7a858", "query": "As it still using deprecated resource class, see See example of the failure in\nFollowup: we need to plan better --we knew it was gonna happen and none of us fixed it early enough. Action item: go through tasks about deprecation in sprint planning", "positive_passages": [{"docid": "doc-en-pytorch-e7b2ccc25ed16d72837d1d8a4b9213b79ac856fe2521ce498a0bc8e7772c9b4c", "text": "# binary_linux_libtorch_3.6m_cpu_test: # environment: # BUILD_ENVIRONMENT: \"libtorch 3.6m cpu\" <del> # resource_class: gpu.medium </del> <ins> # resource_class: gpu.nvidia.small </ins> # <<: *binary_linux_test # # binary_linux_libtorch_3.6m_cu90_test: # environment: # BUILD_ENVIRONMENT: \"libtorch 3.6m cu90\" <del> # resource_class: gpu.medium </del> <ins> # resource_class: gpu.nvidia.small </ins> # <<: *binary_linux_test # docker_build_job:", "commid": "pytorch_pr_72613"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-a841b3294e3a6ae36780ece0ac356e026aafbc326cdbc157f1180b85daa7a858", "query": "As it still using deprecated resource class, see See example of the failure in\nFollowup: we need to plan better --we knew it was gonna happen and none of us fixed it early enough. Action item: go through tasks about deprecation in sprint planning", "positive_passages": [{"docid": "doc-en-pytorch-b8f80ee8a686dfcf445ccf14c31883bcdae221a0f6d56bf4ca9043aa64c44e42", "text": "name: binary_linux_manywheel_3_7m_cu102_devtoolset7_test requires: - binary_linux_manywheel_3_7m_cu102_devtoolset7_build <del> resource_class: gpu.medium </del> <ins> resource_class: gpu.nvidia.small </ins> use_cuda_docker_runtime: \"1\" - binary_linux_test: build_environment: libtorch 3.7m cpu devtoolset7", "commid": "pytorch_pr_72613"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-a841b3294e3a6ae36780ece0ac356e026aafbc326cdbc157f1180b85daa7a858", "query": "As it still using deprecated resource class, see See example of the failure in\nFollowup: we need to plan better --we knew it was gonna happen and none of us fixed it early enough. Action item: go through tasks about deprecation in sprint planning", "positive_passages": [{"docid": "doc-en-pytorch-08a68e469bb00cd7c8204be07a4fdd6a0e80b32aa7000b3c8821986a0a33bf72", "text": "# binary_linux_libtorch_3.6m_cpu_test: # environment: # BUILD_ENVIRONMENT: \"libtorch 3.6m cpu\" <del> # resource_class: gpu.medium </del> <ins> # resource_class: gpu.nvidia.small </ins> # <<: *binary_linux_test # # binary_linux_libtorch_3.6m_cu90_test: # environment: # BUILD_ENVIRONMENT: \"libtorch 3.6m cu90\" <del> # resource_class: gpu.medium </del> <ins> # resource_class: gpu.nvidia.small </ins> # <<: *binary_linux_test #", "commid": "pytorch_pr_72613"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-ab1c6cda8c4472ae77a711ca046b8fc0adb5f6113ab570670bf47f4fd9a29bc9", "query": "When trying to install Pytorch on my Mac by following the instructions I get What I did: ` I also tried Both approaches gave the same error. System: xcode-select version 2395. Version: macOS Monterey 12.3.1 (21E258) MacBook Pro (16-inch, 2019) Processor: 2,6 GHz 6-Core Intel Core i7 memory: 16 GB 2667 MHz DDR4 PyTorch version: N/A Is debug build: N/A CUDA used to build PyTorch: N/A ROCM used to build PyTorch: N/A OS: macOS 12.3.1 (x8664) GCC version: Could not collect Clang version: 13.1.6 (clang-1316.0.21.2.3) CMake version: version 3.22.1 Libc version: N/A Python version: 3.9.12 (main, Apr 5 2022, 01:53:17) [Clang 12.0.0 ] (64-bit runtime) Python platform: macOS-10.16-x8664-i386-64bit Is CUDA available: N/A CUDA runtime version: Could not collect GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: N/A Versions of relevant libraries: [pip3] numpy==1.21.5 [conda] mkl 2022.0.0 hecd8cb5105 [conda] mkl-include 2022.0.0 hecd8cb5105 [conda] numpy 1.21.5 py39h9c3cb841 [conda] numpy-base 1.21.5 py39he782bc11 cc\nI am running the same environment and get the same issue. Any insight would be very appreciated\nUpdate: In another repo I get the same error when trying to link to pytorch etc. There I made a minimal case and managed to build when I removed linking to . I can see that we have in the script. Maybe that is the cause of the error?\nLooks like they are built with the correct architecture.\nMore progress, from local minimal case: fails. is just a hello world program. If in is removed, then it builds.\nThe same story with PyTorch 1.10.0. The error appears when I'm trying to build with Apple clang 13.1.6 (Xcode Command Line Tools 13.3). But all works correctly if I build it with Apple clang 13.0 (Xcode Command Line Tools 13.2.1)\nNice, worked for me as well. Is this a bug somewhere or what is the exact problem? I drawback is that XCode needs to be up to date with new iOS versions.\nWhich version of Apple Clang worked? 13.0.0 or 13.0.1? Are you on Monterey 12.4?\nFiled an issue: cc:\nThis issue has been fixed in PeachPy a while back by but pinned version of PeachPy that PyTorch is using has not been updated in a very long timepositive_passagesdociddoc-en-pytorch-e5ae77a5e08322e65369c99bb0e38344715024cff6a41f3418003b3cb4bc4e1ftextmeant to be installed as pip packages) (default: False). relative_to (str, optional): path of the build file. Required when ``package is True``. It's best to use ``__file__`` for this argument. <del> kwargs: additional arguments that are passed to ffi to declar the </del> <ins> kwargs: additional arguments that are passed to ffi to declare the </ins> extension. See `Extension API reference`_ for details. .. _`Extension API reference`: https://docs.python.org/3/distutils/apiref.html#distutils.core.Extension", "commid": "pytorch_pr_1055"}], "negative_passages": []} |
|
{"query_id": "q-en-pytorch-956a3c4b3bb7237bb47e9883c297341203dacadd9ad986ed9f3c16375e118f9d", "query": "causes a host-device synchronization on CUDA. This is because it must first calculate the count of nonzero elements, and then copy this count to host. There are cases where I can easily calculate the count myself (from existing host data), or already know the count, which would make this unnecessary. For some given of shape : Instead of: Do this trickery: This avoids the host-device synchronization, but on the other hand, is more complicated, and a bit slower. I'm not sure if there are any other ways? Btw, I'm using to measure the speed. But actually I'm not sure if this really properly covers the problem of host-device synchronization with . Does it? E.g. when comparing this trickery code to , I just see that is slower. No response_\ndupe of i think\nAh yes, I think you are right.positive_passagesdociddoc-en-pytorch-60372d9aa71641cb924b2436784423b05696bfc1e6c71292d348c5e9861ec9a6text# test is no more than 4 higher than the 10th available at the # start. This attempts to catch file descriptor leaks, but allows # one-off initialization that may use up a file descriptor <del> available_fds = self._get_next_fds(10) self.test_case.assertLessEqual( available_fds[-1] - self.next_fds[-1], 5) </del> <ins> # TODO: Disabled because this check is too flaky # available_fds = self._get_next_fds(10) # self.test_case.assertLessEqual( # available_fds[-1] - self.next_fds[-1], 5) </ins> self.test_case.assertFalse(self.has_shm_files()) return Falsecommidpytorch_pr_1593negative_passages |
|
|