# meas_modelfit testMixture test fails on anaconda 2.5


#### Details

• Type: Bug
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s:
• Labels: None
• Story Points: 0.25
• Sprint: Science Pipelines DM-W16-6
• Team: Data Release Production

#### Description

I recently upgraded my Anaconda to version 2.5 on Mac OS X El Capitan. Rebuilding lsst_apps triggers a new test failure in meas_modelfit:

```
F.....
======================================================================
FAIL: testDerivatives (__main__.MixtureTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "tests/testMixture.py", line 172, in testDerivatives
    doTest(g, x)
  File "tests/testMixture.py", line 168, in doTest
    self.assertClose(analyticGradient, numericGradient, rtol=1E-6)
  File "/Users/timj/work/lsstsw/stack/DarwinX86/utils/2016_01.0+0b596edbb3/python/lsst/utils/tests.py", line 368, in assertClose
    testCase.assertFalse(failed, msg="\n".join(msg))
AssertionError: 1/3 elements differ with rtol=1e-06, atol=2.22044604925e-16
-7.41146749152e-07 != -7.41145697331e-07 (diff=1.05182109542e-12/7.41146749152e-07=1.41918060981e-06)
```

Switching back to the previous Anaconda (2.4, I think), the corresponding numbers are:

```
-7.41146749152e-07 != -7.41146239432e-07 (diff=5.0971982368e-13/7.41146749152e-07=6.87744801233e-07)
```

The following patch fixes it:

```diff
diff --git a/tests/testMixture.py b/tests/testMixture.py
index a070748..804495e 100755
--- a/tests/testMixture.py
+++ b/tests/testMixture.py
@@ -165,7 +165,7 @@ class MixtureTestCase(lsst.utils.tests.TestCase):
         analyticGradient = numpy.zeros(n, dtype=float)
         analyticHessian = numpy.zeros((n,n), dtype=float)
         mixture.evaluateDerivatives(point, analyticGradient, analyticHessian)
-        self.assertClose(analyticGradient, numericGradient, rtol=1E-6)
+        self.assertClose(analyticGradient, numericGradient, rtol=1.5E-6)
         self.assertClose(analyticHessian, numericHessian, rtol=1E-6)
 
     for x in numpy.random.randn(10, g.getDimension()):
```
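For reference, the reported values can be checked against the tolerance arithmetic directly. This is a standalone sketch using `numpy.allclose` (whose criterion is `|a - b| <= atol + rtol * |b|`, comparable to the test's `assertClose`), not part of the patch:

```python
import numpy as np

# Values copied verbatim from the failing assertion message above.
analytic = -7.41146749152e-07
numeric = -7.41145697331e-07
atol = 2.22044604925e-16

# The relative difference is ~1.42e-6, so rtol=1e-6 fails
# while rtol=1.5e-6 passes.
fails = np.allclose(analytic, numeric, rtol=1e-6, atol=atol)    # False
passes = np.allclose(analytic, numeric, rtol=1.5e-6, atol=atol)  # True
```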

but I have no idea how reasonable that is. It is obviously disconcerting that updating a numerical library can change our test results again.

#### Activity

Jim Bosch added a comment -

The good news here is that changing the numerical library can only affect the calculation the test itself performs to generate the reference values, since the pipeline code is C++ using Eigen. And generally I'm not too bothered by a loss of precision here: the test code computes a numeric derivative using finite differences, which is already much less precise than the analytic derivative code we're testing.
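To illustrate that point, here is a minimal toy sketch (a 1-D Gaussian standing in for the mixture; this is hypothetical example code, not the actual meas_modelfit test): a central finite difference carries truncation error of order h² plus roundoff of order eps/h, so even with a well-chosen step its relative accuracy is many orders of magnitude worse than machine precision.

```python
import numpy as np

def f(x):
    # Toy stand-in for the mixture density: a 1-D unit Gaussian.
    return np.exp(-0.5 * x * x) / np.sqrt(2.0 * np.pi)

def analytic_grad(x):
    # Exact derivative: d/dx f(x) = -x * f(x)
    return -x * f(x)

def numeric_grad(x, h=1e-5):
    # Central finite difference: truncation error ~ h**2, roundoff ~ eps / h
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 0.7
rel_err = abs(numeric_grad(x) - analytic_grad(x)) / abs(analytic_grad(x))
# rel_err is many orders of magnitude larger than machine epsilon (~2.2e-16):
# the numeric gradient, not the analytic code, limits how tight rtol can be.
```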

Tim Jenness added a comment -

Shall I just apply this patch then?

Jim Bosch added a comment -

Yes, I think that's fine.

Tim Jenness added a comment -

Changed. Jim Bosch reviewed this as part of the discussion.


#### People

Assignee:
Jim Bosch
Reporter:
Tim Jenness
Watchers:
Jim Bosch, Paul Price, Tim Jenness