From the messages generated previously, I now see that you meant the pex.logging.BlockTimingLog messages rather than the messages before this old daf_persistence commit. I have corrected the messages so they are now identical (apart from format differences).
Using tests/butlerPickle.py as an example with a low logging threshold/level, this is the output before the logging migration (pex_logging):
daf.persistence.butler.read DEBUG: Starting read from None at PickleStorage(foo1.pickle)
daf.persistence.butler.read DEBUG: Ending read from None at PickleStorage(foo1.pickle)
This is the output after the logging migration (lsst.log):
DEBUG daf.persistence.butler: Starting read from None at PickleStorage(foo1.pickle)
DEBUG daf.persistence.butler: Ending read from None at PickleStorage(foo1.pickle)
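For reference, here is a minimal sketch of how the threshold/level could be lowered to make these DEBUG messages visible. The exact calls are assumptions about the pex.logging and lsst.log Python APIs, not lines taken from the test itself:

# Before the migration (pex_logging) -- assumed API: lower the default log threshold.
import lsst.pex.logging as pexLog
pexLog.Log.getDefaultLog().setThreshold(pexLog.Log.DEBUG)

# After the migration (lsst.log) -- assumed API: lower the level for the butler logger.
import lsst.log
lsst.log.setLevel("daf.persistence.butler", lsst.log.DEBUG)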
The master daf_persistence doesn't actually use BlockTimingLog, so I added one commit to get the messages printed with pex.logging before the logging migration. The effective changes are the same.
I merged the lsst_dm_stack_demo patch and am running Jenkins.
Jenkins passed the build (including lsst_distrib, lsst_sims, and lsst_ci) but failed at the stack demo. The cause is that lsst.log writes to stdout by default, whereas pex.logging used stderr; the demo script bin/export-results redirects its standard output to produce the "detected-sources(_small).txt" file, and a few debug messages from daf.persistence and daf.butlerUtils are emitted while that script runs, so they end up in the file. For command line tasks and unit tests, the log has been configured to go to stderr; the standalone script bin/export-results needs to configure the log as well. This is done in the ticket branch of lsst_dm_stack_demo, but Jenkins doesn't pick up that ticket branch. I've tested it on my machine: the error is reproduced with master, and the demo passes with the ticket branch.
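To illustrate the kind of change involved, here is a sketch of how lsst.log output could be sent to stderr in the Python code the script runs, assuming lsst.log's configure_prop helper and a log4cxx ConsoleAppender; the property names and pattern are illustrative, not the exact patch on the ticket branch:

import lsst.log

# Assumed helper: configure log4cxx from a properties string so that log
# messages go to stderr instead of stdout, keeping them out of the
# redirected stdout that becomes detected-sources(_small).txt.
lsst.log.configure_prop("""
log4j.rootLogger=INFO, A1
log4j.appender.A1=ConsoleAppender
log4j.appender.A1.Target=System.err
log4j.appender.A1.layout=PatternLayout
log4j.appender.A1.layout.ConversionPattern=%c %p: %m%n
""")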