Details
- Type: Bug
- Status: Done
- Resolution: Done
- Fix Version/s: None
- Component/s: meas_base
- Labels:
- Story Points: 4
- Epic Link:
- Sprint: DRP F17-3
- Team: Data Release Production
Description
Scott Daniel reports that he's having trouble with test_GaussianFlux.py when using pytest with 32 cores. This seems to be repeatable but does not happen with 8 cores or fewer (which might simply be down to "luck").
=================================== FAILURES ===================================
______________________ GaussianFluxTestCase.testGaussians ______________________
[gw29] linux -- Python 3.5.2 /local/lsst/danielsf/lsstsw3/miniconda/bin/python

self = <test_GaussianFlux.GaussianFluxTestCase testMethod=testGaussians>

    def testGaussians(self):
        """Test that we get correct fluxes when measuring Gaussians with known positions and shapes."""
        task = self.makeSingleFrameMeasurementTask("base_GaussianFlux")
        exposure, catalog = self.dataset.realize(10.0, task.schema)
        task.run(catalog, exposure)
        for measRecord in catalog:
            self.assertFloatsAlmostEqual(measRecord.get("base_GaussianFlux_flux"),
>                                        measRecord.get("truth_flux"), rtol=3E-3)

tests/test_GaussianFlux.py:67:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../lsstsw3/stack/Linux64/utils/13.0-8-gb7ca535/python/lsst/utils/tests.py:618: in assertFloatsAlmostEqual
    testCase.assertFalse(failed, msg="\n".join(errMsg))
E   AssertionError: True is not false : 99669.51451462915 != 100000.0; diff=330.485485371/100000.0=0.00330485485371 with rtol=0.003, atol=2.220446049250313e-16
----------------------------- Captured stdout call -----------------------------
measurement INFO: Measuring 2 sources (2 parents, 0 children)
=============================== warnings summary ===============================
Full log is attached. Given the large number of processes, only two tests ran on this worker before the failure:
[gw29] PASSED bin/forcedPhotCcd.py
[gw29] SKIPPED tests/test_ApCorrNameSet.py
[gw29] PASSED tests/test_CatalogCalculation.py::CatalogCalculationTest::testCatalogCalculation
[gw29] FAILED tests/test_GaussianFlux.py::GaussianFluxTestCase::testGaussians
[gw29] PASSED tests/test_InputCount.py::InputCountTest::testInputCounts
[gw29] PASSED tests/test_Transform.py::TestMemory::testFileDescriptorLeaks <- ../lsstsw3/stack/Linux64/utils/13.0-8-gb7ca535/python/lsst/utils/tests.py
[gw29] PASSED tests/test_undeblended.py::TestMemory::testLeaks <- ../lsstsw3/stack/Linux64/utils/13.0-8-gb7ca535/python/lsst/utils/tests.py
My first thought is that some configuration performed by earlier tests in test_GaussianFlux.py is being missed when that test runs on its own. Confirmed by running the test in isolation:
$ pytest -k testGaussian tests/test_GaussianFlux.py
=================================== test session starts ===================================
platform darwin -- Python 3.6.1, pytest-3.2.0, py-1.4.34, pluggy-0.4.0
rootdir: /Users/timj/work/lsstsw3/build/meas_base, inifile: setup.cfg
plugins: session2file-0.1.9, forked-0.3.dev0+g1dd93f6.d20170815, xdist-1.19.2.dev0+g459d52e.d20170815, flake8-0.8.1
collected 8 items

tests/test_GaussianFlux.py F

======================================== FAILURES =========================================
___________________________ GaussianFluxTestCase.testGaussians ____________________________

self = <test_GaussianFlux.GaussianFluxTestCase testMethod=testGaussians>

    def testGaussians(self):
        """Test that we get correct fluxes when measuring Gaussians with known positions and shapes."""
        task = self.makeSingleFrameMeasurementTask("base_GaussianFlux")
        exposure, catalog = self.dataset.realize(10.0, task.schema)
        task.run(catalog, exposure)
        for measRecord in catalog:
            self.assertFloatsAlmostEqual(measRecord.get("base_GaussianFlux_flux"),
>                                        measRecord.get("truth_flux"), rtol=3E-3)

tests/test_GaussianFlux.py:67:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../stack/DarwinX86/utils/13.0-8-gb7ca535/python/lsst/utils/tests.py:618: in assertFloatsAlmostEqual
    testCase.assertFalse(failed, msg="\n".join(errMsg))
E   AssertionError: True is not false : 99669.51451462915 != 100000.0; diff=330.485485371/100000.0=0.00330485485371 with rtol=0.003, atol=2.220446049250313e-16
---------------------------------- Captured stdout call -----------------------------------
measurement INFO: Measuring 2 sources (2 parents, 0 children)
=================================== 7 tests deselected ====================================
========================= 1 failed, 7 deselected in 10.30 seconds =========================
That makes sense to me: the images definitely need to have different noise realizations.
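One way to make a realization-dependent tolerance robust, sketched here with a hypothetical helper (not the actual TestDataset.realize signature), is to pass an explicit seed so that every process draws the same noise regardless of which tests ran before it:

```python
import numpy as np

def realize_noise(seed, shape=(100, 100), sigma=10.0):
    """Draw a reproducible Gaussian noise realization.

    Hypothetical helper for illustration; not the meas_base API.
    A fresh, explicitly seeded RNG decouples the realization from
    global RNG state and hence from test execution order.
    """
    rng = np.random.RandomState(seed)
    return rng.normal(0.0, sigma, shape)

# Same seed -> identical realization, on any worker, in any order.
a = realize_noise(seed=42)
b = realize_noise(seed=42)
assert np.array_equal(a, b)

# Different seeds still give the independent realizations the tests need.
c = realize_noise(seed=43)
assert not np.array_equal(a, c)
```

Whether the right fix is seeding, loosening rtol, or both is a judgment call; seeding at least makes any residual tolerance failure deterministic and debuggable.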