Details

Type: Bug
Status: Done
Resolution: Done
Fix Version/s: None
Component/s: meas_modelfit
Labels: None
Story Points: 0.25
Epic Link:
Sprint: Science Pipelines DM-W16-6
Team: Data Release Production
Description
I recently upgraded my Anaconda to version 2.5 on Mac OS X El Capitan. Rebuilding lsst_apps triggers a new test failure in meas_modelfit:
F.....

======================================================================
FAIL: testDerivatives (__main__.MixtureTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "tests/testMixture.py", line 172, in testDerivatives
    doTest(g, x)
  File "tests/testMixture.py", line 168, in doTest
    self.assertClose(analyticGradient, numericGradient, rtol=1E-6)
  File "/Users/timj/work/lsstsw/stack/DarwinX86/utils/2016_01.0+0b596edbb3/python/lsst/utils/tests.py", line 368, in assertClose
    testCase.assertFalse(failed, msg="\n".join(msg))
AssertionError: 1/3 elements differ with rtol=1e-06, atol=2.22044604925e-16
-7.41146749152e-07 != -7.41145697331e-07 (diff=1.05182109542e-12/7.41146749152e-07=1.41918060981e-06)
Switching back to the previous Anaconda (2.4, I think), the numbers above are:
-7.41146749152e-07 != -7.41146239432e-07 (diff=5.0971982368e-13/7.41146749152e-07=6.87744801233e-07)
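For reference, here is a minimal sketch of the kind of relative-tolerance check the failure message describes (the real logic lives in lsst.utils.tests.assertClose and may differ in detail); plugging in the numbers shows why the Anaconda 2.4 result passes rtol=1E-6 while the 2.5 result does not:

import numpy

def close(a, b, rtol=1e-6, atol=numpy.finfo(float).eps):
    # Sketch only: the failure message reports diff = |a - b| and the
    # ratio diff/|a|, which it compares against rtol.
    return abs(a - b) <= atol + rtol*abs(a)

a = -7.41146749152e-07              # analytic gradient element
print(close(a, -7.41146239432e-07)) # Anaconda 2.4: ratio ~6.9e-7 -> True
print(close(a, -7.41145697331e-07)) # Anaconda 2.5: ratio ~1.4e-6 -> False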
The following patch fixes it:

diff --git a/tests/testMixture.py b/tests/testMixture.py
index a070748..804495e 100755
--- a/tests/testMixture.py
+++ b/tests/testMixture.py
@@ -165,7 +165,7 @@ class MixtureTestCase(lsst.utils.tests.TestCase):
             analyticGradient = numpy.zeros(n, dtype=float)
             analyticHessian = numpy.zeros((n,n), dtype=float)
             mixture.evaluateDerivatives(point, analyticGradient, analyticHessian)
-            self.assertClose(analyticGradient, numericGradient, rtol=1E-6)
+            self.assertClose(analyticGradient, numericGradient, rtol=1.5E-6)
             self.assertClose(analyticHessian, numericHessian, rtol=1E-6)
 
         for x in numpy.random.randn(10, g.getDimension()):
but I have no idea how reasonable that is. It is obviously disconcerting that updating the numerical library can change our test results again.
The good news here is that changing the numerical library can only affect the calculation done in the test to generate the reference value that the pipeline output is compared against, since the pipeline code itself is C++ using Eigen. And in general I'm not too bothered by a loss of precision here: the test code computes a numeric derivative using finite differences, which is already much less precise than the analytic derivative code we're testing.
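As a rough illustration of that last point (a generic scalar function standing in for the mixture density, not the actual test code): a forward-difference derivative in double precision cannot do much better than a relative error around the square root of machine epsilon, so a tolerance near 1E-6 leaves little headroom when a library update perturbs the last few bits:

import numpy

def f(x):
    return numpy.exp(numpy.sin(x))                 # stand-in function

def dfdx(x):
    return numpy.cos(x) * numpy.exp(numpy.sin(x))  # its analytic derivative

x = 0.7
for h in (1e-4, 1e-6, 1e-8, 1e-10):
    numeric = (f(x + h) - f(x)) / h                # forward difference
    relErr = abs(numeric - dfdx(x)) / abs(dfdx(x))
    print("h=%.0e  relative error=%.1e" % (h, relErr))
# Truncation error shrinks with h while floating-point cancellation
# grows as 1/h, so the achievable accuracy bottoms out near
# sqrt(machine epsilon) ~ 1e-8 and degrades on either side.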