Data Management / DM-5108

meas_modelfit testMixture test fails on anaconda 2.5

    Details

    • Type: Bug
    • Status: Done
    • Resolution: Done
    • Fix Version/s: None
    • Component/s: meas_modelfit
    • Labels: None
    • Story Points: 0.25
    • Sprint: Science Pipelines DM-W16-6
    • Team: Data Release Production

      Description

      I recently upgraded my Anaconda to version 2.5 on Mac OS X El Capitan. Rebuilding lsst_apps triggers a new test failure in meas_modelfit:

      F.....
      ======================================================================
      FAIL: testDerivatives (__main__.MixtureTestCase)
      ----------------------------------------------------------------------
      Traceback (most recent call last):
        File "tests/testMixture.py", line 172, in testDerivatives
          doTest(g, x)
        File "tests/testMixture.py", line 168, in doTest
          self.assertClose(analyticGradient, numericGradient, rtol=1E-6)
        File "/Users/timj/work/lsstsw/stack/DarwinX86/utils/2016_01.0+0b596edbb3/python/lsst/utils/tests.py", line 368, in assertClose
          testCase.assertFalse(failed, msg="\n".join(msg))
      AssertionError: 1/3 elements differ with rtol=1e-06, atol=2.22044604925e-16
      -7.41146749152e-07 != -7.41145697331e-07 (diff=1.05182109542e-12/7.41146749152e-07=1.41918060981e-06)
      

      Switching back to the previous Anaconda (2.4, I think), the numbers above become:

      -7.41146749152e-07 != -7.41146239432e-07 (diff=5.0971982368e-13/7.41146749152e-07=6.87744801233e-07)
      

      The following patch fixes it:

      diff --git a/tests/testMixture.py b/tests/testMixture.py
      index a070748..804495e 100755
      --- a/tests/testMixture.py
      +++ b/tests/testMixture.py
      @@ -165,7 +165,7 @@ class MixtureTestCase(lsst.utils.tests.TestCase):
                   analyticGradient = numpy.zeros(n, dtype=float)
                   analyticHessian = numpy.zeros((n,n), dtype=float)
                   mixture.evaluateDerivatives(point, analyticGradient, analyticHessian)
      -            self.assertClose(analyticGradient, numericGradient, rtol=1E-6)
      +            self.assertClose(analyticGradient, numericGradient, rtol=1.5E-6)
                   self.assertClose(analyticHessian, numericHessian, rtol=1E-6)
       
               for x in numpy.random.randn(10, g.getDimension()):
      

      but I have no idea how reasonable that is. It is obviously disconcerting that updating the numerical library can change our test results again.
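For reference, the failing comparison boils down to a relative-tolerance check. The sketch below is a hypothetical numpy.allclose-style reproduction (the actual lsst.utils.tests.assertClose implementation may differ); it shows why the Anaconda 2.5 value fails at rtol=1E-6 but passes at 1.5E-6, while the 2.4 value passed all along:

```python
# Hypothetical sketch of the relative-tolerance check that assertClose performs
# (numpy.allclose-style; the actual lsst.utils.tests implementation may differ).
def rel_close(analytic, numeric, rtol):
    # Comparison passes when |analytic - numeric| <= rtol * |analytic|
    return abs(analytic - numeric) <= rtol * abs(analytic)

analytic = -7.41146749152e-07

# Anaconda 2.5 value: relative difference ~1.42e-6, so rtol=1e-6 fails
numeric_25 = -7.41145697331e-07
print(rel_close(analytic, numeric_25, rtol=1e-6))    # False
print(rel_close(analytic, numeric_25, rtol=1.5e-6))  # True

# Anaconda 2.4 value: relative difference ~6.88e-7, within rtol=1e-6
numeric_24 = -7.41146239432e-07
print(rel_close(analytic, numeric_24, rtol=1e-6))    # True
```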

                People

                • Assignee: jbosch (Jim Bosch)
                • Reporter: tjenness (Tim Jenness)
                • Watchers: Jim Bosch, Paul Price, Tim Jenness
