
I have a column in my numpy array which has a bunch of longs and a few NaNs. There are no other floats in it. If I convert the entire column to float I will lose precision.

If I don't convert the entire column to float, I cannot use numpy functions like isnan or isfinite because the array is of type object despite the elements being valid types.

Is there any way of preserving precision while still being able to use numpy functions?

a = np.array([10**50,19**50,float('NaN')])

a
#outputs:
array([100000000000000000000000000000000000000000000000000L,
   8663234049605954426644038200675212212900743262211018069459689001L,
   nan], dtype=object)

np.isnan(a)

#outputs error:

ufunc 'isnan' not supported for the input types, and the inputs could not be safely 
coerced to any supported types according to the casting rule ''safe''
  • You could also consider using a masked array instead of using NaN's. That may or may not fit your problem, but it avoids the need to directly call np.isnan, etc. Commented Feb 8, 2014 at 1:13
  • @JoeKington actually I need np.isnan to mask my arrays... I don't actually know where the NaNs are... I will accept Zhangxaochen's answer as it does what I need currently... Commented Feb 8, 2014 at 1:16
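Joe Kington's masked-array suggestion from the comment above can be sketched like this (a sketch only; shown for Python 3, where long has been merged into int):

```python
import numpy as np

# Object array mixing exact big integers and NaN (Python 3: int replaces long)
a = np.array([10**50, 19**50, float('nan')], dtype=object)

# Build the mask by testing each element's type, since np.isnan raises a
# TypeError on object-dtype arrays.
mask = np.array([not isinstance(x, int) for x in a])

# The masked array hides the NaN slots while keeping full integer precision.
ma = np.ma.masked_array(a, mask=mask)
print(ma.compressed())  # only the two exact big integers remain
```

This avoids calling np.isnan directly, at the cost of a Python-level loop to build the mask.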

2 Answers

In [1760]: np.isnan(a.astype(float))
Out[1760]: array([False, False,  True], dtype=bool)

So if precision is not that important, just use a = a.astype(float).
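If precision does matter, one workaround (my own sketch, not part of the answer) is to compute the NaN mask on a throwaway float copy and then index the original object array, which keeps the exact integers:

```python
import numpy as np

a = np.array([10**50, 19**50, float('nan')], dtype=object)

# Cast only for the NaN test; the original object array keeps full precision.
nan_mask = np.isnan(a.astype(float))

finite = a[~nan_mask]       # exact big integers survive the indexing
print(finite[1] == 19**50)  # True: no precision lost
```

The float cast may round the huge integers, but since it is only used to locate the NaNs, the rounding never reaches the values you keep.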


1 Comment

I like this solution. I will need to use other numpy functions, but for this situation it is a good workaround.

A possibility could be to just ignore the NaNs:

ii = np.where([type(x) is long for x in A])  # indices of the long entries
B = 5*A[ii]
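In Python 3, where long has merged into int, the same idea can be sketched as:

```python
import numpy as np

A = np.array([10**50, 19**50, float('nan')], dtype=object)

# np.where over a boolean list gives the indices of the integer entries
# (isinstance(x, int) replaces the Python 2 "type(x) is long" test).
ii = np.where([isinstance(x, int) for x in A])

# Arithmetic runs only on the non-NaN entries, with full precision kept.
B = 5 * A[ii]
print(B)
```

Indexing with the result of np.where drops the NaN slots entirely, so every downstream operation sees only exact integers.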

