

The relevant part of the repro script:

    self.init = torch.nn.Conv2d(n_in, n_in, 1)
    ...
    x = self.init(x)  # natural way to enable x.requires_grad before the checkpoint
    ...
    # Simple training loop, nothing fancy here
    inp = torch.rand((batch_size, h_w, h_w, n_in))
    target = torch.zeros((batch_size, h_w, h_w), dtype=torch.int64)

Note that the input to the checkpoint is passed through an initial convolutional layer. This has the effect of enabling requires_grad for x, which is a prerequisite for at least one of the inputs to each checkpoint (see e.g. …).

Environment:

    Python platform: Linux-4.15.0-58-generic-x86_64-with-debian-buster-sid
    cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.2

Alternatively, the crash can be reproduced on Colab, which is also running torch=1.8.1+cu101 by default at the moment.

The backward pass then crashes. The traceback below is from a reproduction on Windows (Python 3.8):

    Traceback (most recent call last):
      File "D:/works/MedicalImaging/test/convnext2.py", line 28, in <module>
      File "C:\Users\myluo\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\_tensor.py", line 307, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "C:\Users\myluo\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\autograd\__init__.py", line 154, in backward
      File "C:\Users\myluo\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\autograd\function.py", line 199, in apply
      File "C:\Users\myluo\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\utils\checkpoint.py", line 122, in backward
        outputs = ctx.run_function(*detached_inputs)
      File "C:\Users\myluo\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
      File "C:\Users\myluo\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\conv.py", line 446, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "C:\Users\myluo\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\conv.py", line 442, in _conv_forward
        return F.conv2d(input, weight, bias, self.stride,
                        self.padding, self.dilation, self.groups)
    RuntimeError: set_sizes_and_strides is not allowed on a Tensor created from .data or .detach().
    If your intent is to change the metadata of a Tensor (such as sizes / strides / storage / storage_offset)
    without autograd tracking the change, remove the .data / .detach() call and wrap the change in a `with torch.no_grad():` block.

GPU models and configuration of the Windows machine:

    GPU 0: NVIDIA GeForce RTX 2080 Ti
    cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin\cudnn_ops_train64_8.
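For reference, here is a minimal, self-contained sketch of the setup described above. Only self.init, the x = self.init(x) line, inp, and target come from the report itself; the module name Net, the checkpointed block, the layout permute, and the loss are assumptions added to make the sketch runnable:

    # Hypothetical reconstruction: Net, block, and the loop details are
    # assumed; only self.init, inp, and target appear in the report.
    import torch
    import torch.nn.functional as F
    from torch.utils.checkpoint import checkpoint

    batch_size, h_w, n_in = 2, 32, 3

    class Net(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.init = torch.nn.Conv2d(n_in, n_in, 1)
            self.block = torch.nn.Conv2d(n_in, n_in, 3, padding=1)

        def forward(self, x):
            x = self.init(x)  # natural way to enable x.requires_grad before the checkpoint
            # checkpoint() detaches its inputs and re-runs self.block during
            # backward; at least one input must require grad for that to work.
            return checkpoint(self.block, x)

    net = Net()
    opt = torch.optim.SGD(net.parameters(), lr=0.1)

    # Simple training loop, nothing fancy here
    inp = torch.rand((batch_size, h_w, h_w, n_in))
    target = torch.zeros((batch_size, h_w, h_w), dtype=torch.int64)

    out = net(inp.permute(0, 3, 1, 2))   # report uses NHWC; Conv2d expects NCHW
    loss = F.cross_entropy(out, target)  # n_in doubles as the class dimension
    loss.backward()                      # on affected versions, the RuntimeError fires here
    opt.step()

Newer PyTorch releases may warn that checkpoint() should be given an explicit use_reentrant argument, and on a current release the sketch runs to completion, so it demonstrates the structure of the report rather than a guaranteed reproduction of the crash.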

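The advice in the RuntimeError is aimed at code that mutates tensor metadata through an alias created by .data or .detach(), which autograd refuses to allow. A small illustration of the forbidden and the sanctioned pattern (resize_ is used here purely as an example of a metadata-changing op; it is not the code path inside F.conv2d):

    import torch

    x = torch.zeros(4, 4)

    # Forbidden: mutating metadata through a .detach() alias raises a
    # "... is not allowed on a Tensor created from .data or .detach()" RuntimeError.
    try:
        x.detach().resize_(2, 8)
    except RuntimeError as e:
        print(e)

    # Sanctioned, per the error message: drop the .detach() and perform the
    # metadata change on the tensor itself inside a no_grad block.
    with torch.no_grad():
        x.resize_(2, 8)
    print(x.shape)  # torch.Size([2, 8])

In the traceback above, however, the detached tensor is created internally by torch.utils.checkpoint (the detached_inputs it re-runs the forward on), so there is nothing for user code to rewrite; the pattern is shown only to decode what the message is asking for.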