NumPy's float64 dtype inherits from Python's float, which is implemented as a C double internally. You can verify that as follows:
import numpy as np

isinstance(np.float64(5.9975), float)  # True
So even if their string representations differ, the values they store are the same.
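As a quick sanity check (a minimal sketch added here, not part of the original argument), you can confirm that both objects hold the same 64-bit pattern:

import struct
import numpy as np

x = np.float64(5.9975)
y = 5.9975
print(x == y)                                        # True: they compare equal
print(struct.pack("<d", x) == struct.pack("<d", y))  # True: identical C double bits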
On the other hand, np.float32 wraps a C float (which has no analog in pure Python), and no NumPy integer dtype (np.int32, np.int64, etc.) inherits from Python int, because in Python 3 int is unbounded:
isinstance(np.float32(5.9975), float) # False
isinstance(np.int32(1), int) # False
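A minimal sketch of that last point (illustrative, not from the original answer): a Python int grows without limit, while a fixed-width NumPy integer overflows.

import numpy as np

print(2 ** 100)                # fine: Python int is arbitrary precision
print(np.iinfo(np.int64).max)  # 9223372036854775807, the largest np.int64

try:
    np.int64(2 ** 100)         # does not fit in 64 bits
except OverflowError as exc:
    print("OverflowError:", exc)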
So why define np.float64 at all?
np.float64 defines most of the attributes and methods of np.ndarray. From the following code, you can see that np.float64 implements all but four of np.ndarray's public attributes and methods:
[m for m in set(dir(np.array([]))) - set(dir(np.float64())) if not m.startswith("_")]
# ['argpartition', 'ctypes', 'partition', 'dot']
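For instance (a small illustration added here), a np.float64 scalar carries array-style attributes and reductions:

x = np.float64(5.9975)
print(x.dtype, x.shape, x.ndim)  # float64 () 0 -- it behaves like a 0-d array
print(x.sum(), x.mean())         # 5.9975 5.9975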
So if you have a function that expects to use ndarray methods, you can pass an np.float64 to it, whereas a plain Python float would fail.
For example:
def my_cool_function(x):
    return x.sum()

my_cool_function(np.array([1.5, 2]))  # <--- OK
my_cool_function(np.float64(5.9975))  # <--- OK
my_cool_function(5.9975)              # <--- AttributeError
As a side note, the pandas read_csv function employs its own super-fast string-to-float conversion that is not correctly rounded. Thus, after exporting a value and re-reading it, the recovered value may end up being 1 or 2 ulps different from the original. Does read_excel behave the same way?
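A minimal sketch of how one might check this for read_csv (the chosen value is only illustrative, and whether a discrepancy actually appears depends on the value and the pandas version; float_precision="round_trip" switches read_csv to the correctly rounded parser):

import io
import numpy as np
import pandas as pd

original = np.float64(5.9975)  # illustrative value
csv_text = pd.DataFrame({"x": [original]}).to_csv(index=False)

fast = pd.read_csv(io.StringIO(csv_text))["x"].iloc[0]                                 # default fast parser
exact = pd.read_csv(io.StringIO(csv_text), float_precision="round_trip")["x"].iloc[0]  # correctly rounded

print(fast == original)   # may be False, off by 1-2 ulps, depending on value and version
print(exact == original)  # True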