How do I convert a torch.Tensor (on GPU) to a numpy.ndarray (on CPU)?
4 Answers
To convert a GPU / CUDA tensor to a NumPy array, detach it from the graph, move it to the CPU, and then call .numpy():
tensor.detach().cpu().numpy()
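For example, a minimal sketch (the tensor values and shapes are illustrative, and it falls back to the CPU when no CUDA device is available):

import torch

# A gradient-tracking tensor, placed on the GPU when one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
t = torch.randn(3, 4, device=device, requires_grad=True)

# detach() drops the autograd graph, cpu() moves the data to host memory,
# and numpy() exposes that memory as an ndarray.
arr = t.detach().cpu().numpy()
print(type(arr), arr.shape)  # <class 'numpy.ndarray'> (3, 4)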
1 Comment
You only need detach() if the Tensor has associated gradients. When detach() is needed, call it before cpu(). Otherwise, PyTorch creates the gradients associated with the Tensor on the CPU and then immediately destroys them when numpy() is called; calling detach() first eliminates that superfluous step. For more information see: discuss.pytorch.org/t/…
If the tensor is on the GPU (CUDA), copy it to the CPU and convert it to a NumPy array using:
tensor.data.cpu().numpy()
If the tensor is already on the CPU, you can simply call tensor.data.numpy(). However, tensor.data.cpu().numpy() also works in that case, because .cpu() has no effect on a tensor that is already on the CPU, so it can serve as a device-agnostic way to convert a tensor to a NumPy array.
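For instance, a small device-agnostic helper along these lines (the name to_numpy is just illustrative; see also the caveat about .data in the comments below):

import torch

def to_numpy(tensor):
    # .cpu() is a no-op for a tensor already on the CPU, so this works
    # for both CPU and GPU tensors.
    return tensor.data.cpu().numpy()

print(to_numpy(torch.ones(2, 2)))                     # CPU tensor
if torch.cuda.is_available():
    print(to_numpy(torch.ones(2, 2, device="cuda")))  # GPU tensor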
2 Comments
tensor.data without detaching may have unintended consequences, as explained here and here.
tensor.numpy(force=True)
Per documentation:
If force is True this is equivalent to calling t.detach().cpu().resolve_conj().resolve_neg().numpy(). If the tensor isn’t on the CPU or the conjugate or negative bit is set, the tensor won’t share its storage with the returned ndarray. Setting force to True can be a useful shorthand.
Edit:
Documentation link: https://pytorch.org/docs/stable/generated/torch.Tensor.numpy.html
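A short sketch of that shorthand (force=True requires a reasonably recent PyTorch release, roughly 1.13 or later; the tensor here is illustrative):

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
t = torch.randn(2, 3, device=device, requires_grad=True)

# One call handles detaching, moving to the CPU, and resolving the
# conjugate/negative bits; the result may be a copy rather than a view.
arr = t.numpy(force=True)
print(arr.dtype, arr.shape)  # float32 (2, 3)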