I am using numpy.testing.assert_almost_equal in a unittest environment, but I am not sure what the right way to combine numpy.testing and unittest is.
My first approach was to use assertTrue from unittest in combination with an is None comparison, like so:
from unittest import TestCase
import numpy as np


class TestPredict(TestCase):
    def test_succeeding(self):
        self.assertTrue(
            np.testing.assert_almost_equal(1, 0.9999999999999) is None
        )

    def test_failing(self):
        self.assertTrue(
            np.testing.assert_almost_equal(1, 0.9) is None
        )
This gives the correct test results, but it is a bit hacky and it bloats the test code.
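If I do want an explicit unittest assertion, I suppose assertIsNone would express the same check a little more directly. Here is a sketch of that variant (same test values as above, not a version I have settled on):

from unittest import TestCase
import numpy as np


class TestPredict(TestCase):
    def test_succeeding(self):
        # assert_almost_equal returns None on success, so assertIsNone passes
        self.assertIsNone(np.testing.assert_almost_equal(1, 0.9999999999999))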
A simpler approach is the following:
from unittest import TestCase
import numpy as np


class TestPredict(TestCase):
    def test_succeeding(self):
        np.testing.assert_almost_equal(1, 0.9999999999999)

    def test_failing(self):
        np.testing.assert_almost_equal(1, 0.9)
This code produces the same (correct) test results as the version above, but it is much more readable. The only downside I see is that pylint raises the "R0201: Method could be a function" warning for each test method. Can this become an issue?
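As far as I understand, np.testing.assert_almost_equal returns None when the values match and raises AssertionError otherwise, which a unittest runner reports as an ordinary test failure. A quick standalone sketch with the same values as above, just to illustrate that behaviour:

import numpy as np

# Passes: the difference is far below the default tolerance, and the call returns None.
print(np.testing.assert_almost_equal(1, 0.9999999999999))  # prints None

# Fails: the mismatch raises AssertionError, which unittest turns into a test failure.
try:
    np.testing.assert_almost_equal(1, 0.9)
except AssertionError as exc:
    print(exc)  # shows the mismatch details numpy generates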
PS: I checked multiple posts here on SO that seemed related, but they didn't answer my specific question about the integration of unittest and numpy testing. (E.g. https://stackoverflow.com/a/4319870/6018688 talks about catching exceptions inside unit tests, which seems to be wrong or simply overkill here.)
Update: I ended up using setUp to compute the value close to one, but I also stand reminded of the direct use of plain functions with pytest, as @mrbean-bremen mentioned in the second part of his answer.
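For completeness, a minimal sketch of what that ended up looking like (the value 0.9999999999999 stands in for my real prediction, and the class/function names are illustrative):

# Option 1: unittest with the value under test computed in setUp
from unittest import TestCase
import numpy as np


class TestPredict(TestCase):
    def setUp(self):
        # stand-in for the real prediction computed here
        self.predicted = 0.9999999999999

    def test_prediction_close_to_one(self):
        np.testing.assert_almost_equal(1, self.predicted)


# Option 2: a plain function, collected and run by pytest without any TestCase
def test_prediction_close_to_one_pytest():
    np.testing.assert_almost_equal(1, 0.9999999999999)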