I have not encountered any problems thus far, so this is a question purely out of curiosity.
In Python I usually define floats and arrays of floats like this:
import numpy as np
s = 1.0
v = np.array([1.0, 2.0, 3.0])
In the case above, s is a plain Python float, while the elements of v are of type numpy.float64.
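For instance, a quick interactive check shows the difference (assuming a standard NumPy build, where float literals default to float64):

import numpy as np

s = 1.0
v = np.array([1.0, 2.0, 3.0])

print(type(s))     # <class 'float'>          -- plain Python float
print(v.dtype)     # float64                  -- the array's dtype
print(type(v[0]))  # <class 'numpy.float64'>  -- indexing returns a NumPy scalar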
To be more consistent I could, for example, do this instead:
import numpy as np
s = np.float64(1.0)
v = np.array([1.0, 2.0, 3.0])
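A quick check (again just a sketch, assuming a standard NumPy install) confirms that s and the array elements now share the same type:

import numpy as np

s = np.float64(1.0)
v = np.array([1.0, 2.0, 3.0])

print(type(s) is type(v[0]))  # True -- both are numpy.float64
print(isinstance(s, float))   # True -- numpy.float64 also subclasses Python's float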
Are there cases, from an accuracy/precision point of view, where it is recommended to use the "consistent" approach? What kind of errors, if any, can I expect in the "inconsistent" approach?
I could also write s = np.array(1.0, dtype=...) if I wanted a scalar of a specific dtype.
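For example, a minimal sketch of that variant, with np.float32 chosen purely for illustration:

import numpy as np

# np.array with a scalar input gives a 0-dimensional array of the requested dtype
s = np.array(1.0, dtype=np.float32)

print(s.dtype)   # float32
print(s.shape)   # () -- a 0-d array rather than a plain scalar
print(s.item())  # 1.0 -- .item() converts back to a Python float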