GCP has a published create_instance() code snippet available here, which I've seen on SO in a couple of places, e.g. here. However, as you can see in the first link, it's from 2015 ("Copyright 2015 Google Inc"), and Google has since published another code sample for launching a GCE instance, dated 2022. It's available on GitHub here, and this newer create_instance function is what's featured in GCP's Python API documentation here.
However, I can't figure out how to pass a startup script via metadata so it runs on VM startup using the modern Python function. I tried adding
instance_client.metadata.items = {'key': 'startup-script',
'value': job_script}
to the create_instance() function in create.py (again, available here along with the supporting utility functions it calls), but it raised an error saying that instance_client has no such attribute.
GCP's documentation page for starting a GCE VM with a startup script is here. Unlike most other similar pages, it contains code snippets only for the console, gcloud, and the REST API; there are no SDK snippets (e.g. Python or Ruby) that might show how to modify the Python create_instance function above.
Is the best practice for launching a GCE VM with a startup script from a Python process really to send a POST request, or just to wrap the gcloud command
gcloud compute instances create VM_NAME \
--image-project=debian-cloud \
--image-family=debian-10 \
--metadata-from-file=startup-script=FILE_PATH
...in a subprocess.run()? To be honest, I wouldn't mind doing things that way since the code is so compact (the gcloud command at least, not the POST request), but since GCP provides a create_instance Python function, I had assumed that using it (modifying as necessary) would be the best practice from within Python...
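For completeness, the subprocess route I'm describing would look roughly like this (the VM name and script path are placeholders, and gcloud would need to be installed and authenticated):

```python
# Sketch of wrapping the gcloud command from the docs in subprocess.run().
# build_create_cmd is a hypothetical helper; all values are placeholders.
import subprocess

def build_create_cmd(vm_name: str, script_path: str) -> list:
    return [
        "gcloud", "compute", "instances", "create", vm_name,
        "--image-project=debian-cloud",
        "--image-family=debian-10",
        f"--metadata-from-file=startup-script={script_path}",
    ]

def create_vm(vm_name: str, script_path: str):
    # check=True raises CalledProcessError on a non-zero exit code, but
    # the only error detail available is whatever gcloud printed to
    # stderr (text), rather than a structured exception from the API.
    return subprocess.run(build_create_cmd(vm_name, script_path),
                          check=True, capture_output=True, text=True)
```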
Thanks!
Don't wrap gcloud in subprocess. Let's get the library/API working! With subprocess you only get back text and an exit code, whereas with the client library you could be getting structured error objects. My point is that using subprocess is very lossy. Why lose info if you don't have to?