
I wrote this code to load my model:

args = parser.parse_args()

use_cuda = torch.cuda.is_available()

state_dict = torch.load(args.model)
model = Net()
model.load_state_dict(state_dict)
model.eval()

if use_cuda:
    print('Using GPU')
    model.cuda()
else:
    print('Using CPU')

But my terminal returns the following error:

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

So then, without really understanding what I was doing, I tried:

args = parser.parse_args()

map_location=torch.device('cpu')
state_dict = torch.load(args.model)
model = Net()
model.load_state_dict(state_dict)
model.eval()

But I still get the same error. Do you see how I can correct it? (I want to load my model on my CPU.)

1 Answer

I'm assuming you saved the model on a computer with a GPU and are now loading it on a computer without one, or that for some other reason the GPU is not available. Also, which line is causing the error?

The map_location parameter needs to be passed inside the torch.load call, like this:

state_dict = torch.load(args.model, map_location='cpu')

or

map_location=torch.device('cpu')
state_dict = torch.load(args.model, map_location=map_location)

Notice that in the second form you still need to pass the map_location variable into the torch.load call; assigning it on its own line (as in your second attempt) does nothing by itself.
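As a self-contained sketch of the whole round trip, here is a minimal example. The Net class below is a stand-in I made up for illustration (your actual Net will differ), and the checkpoint path 'model.pt' is arbitrary; the key point is that map_location adapts the load to whatever device is actually available:

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    """Minimal stand-in for the question's Net (hypothetical)."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

# Pick the device once, then use it for both loading and inference.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Save a checkpoint (normally this would happen on the GPU machine).
torch.save(Net().state_dict(), 'model.pt')

# Load it back, mapping the stored tensors onto the available device.
state_dict = torch.load('model.pt', map_location=device)
model = Net()
model.load_state_dict(state_dict)
model.to(device)
model.eval()
```

This runs unchanged on both a CPU-only machine and a GPU machine, which avoids the RuntimeError you hit when a CUDA-saved checkpoint is loaded without map_location.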
