Details
- Type: Story
- Status: Done
- Resolution: Done
- Fix Version/s: None
- Component/s: None
- Labels: None
- Story Points: 8
- Epic Link:
- Team: Data Facility
Description
Make a Pegasus workflow that performs roughly the same processing steps as in DM-10129/DM-11020 — including makeSkyMap.py, the sub-tasks of singleFrameDriver.py, mosaic.py, the sub-tasks of coaddDriver.py, and the sub-tasks of multiBandDriver.py — on the previously defined HSC RC dataset (visit IDs here).
This does not need to be generic; it is merely a stop-gap until the SuperTask-related design is ready.
A workflow has been built to process RC tract=8766 and tested with stack w_2017_28. A presentation about this work was given at the AHM: https://project.lsst.org/meetings/lsst2017/sites/lsst.org.meetings.lsst2017/files/dm_stack_pegasus_chiang.pdf
Two Python scripts were written to generate the two DAX files: one for processCcd + makeSkyMap, and the other for mosaic + makeCoaddTempExp + assembleCoadd + detectCoaddSources + mergeCoaddDetections + measureCoaddSources + mergeCoaddMeasurements + forcedPhotCoadd.
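The shape of such a DAX-generating script can be sketched as below. This is a minimal illustration using hand-rolled DAX-style XML via the standard library; the names (`processCcd_...`, the repo path, the visit/CCD IDs) are hypothetical, and a real script would use the Pegasus DAX Python API instead.

```python
# Sketch: emit a DAX-like XML description of per-CCD processCcd jobs
# plus an independent makeSkyMap job. Illustrative only; a real script
# would build the graph with the Pegasus DAX API.
import xml.etree.ElementTree as ET

def make_dax(visits, ccds):
    adag = ET.Element("adag", name="ccd_processing", version="3.6")
    for visit in visits:
        for ccd in ccds:
            # One processCcd job per (visit, ccd) data ID
            job = ET.SubElement(adag, "job",
                                id=f"processCcd_{visit}_{ccd}",
                                name="processCcd.py")
            arg = ET.SubElement(job, "argument")
            arg.text = f"/repo --id visit={visit} ccd={ccd}"
    # makeSkyMap has no dependency on the per-CCD jobs in this step
    ET.SubElement(adag, "job", id="makeSkyMap", name="makeSkyMap.py")
    return ET.tostring(adag, encoding="unicode")

dax = make_dax([903334], [16, 22])
```

A second script of the same shape would chain the coadd and multiband jobs, adding `<child>`/`<parent>` edges between stages.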
To tolerate processCcd failures on specific CCDs, a "blacklist" is built, and those data IDs are ignored by the second DAX-generating script. An "overlap" database is built separately to record which CCDs overlap which tract/patch. A temporary script was written to find out which ref_cat shards are actually needed; however, the input refcat file list for mosaic is not precise and includes more shards than necessary. An assembleCoadd config override is used so that it follows coaddDriver in using PsfWcsSelectImagesTask (the obs_subaru default configs differ between assembleCoadd and safeClipAssembleCoadd; see DM-10634). When running the coadd workflow, instead of configuring job throttling, I limited the run to at most a few nodes for testing.
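The blacklist and overlap-database lookup described above could look roughly like the following. The table schema, data IDs, and function name are assumptions for illustration; the actual overlap database and blacklist format are not specified in the ticket.

```python
# Sketch: skip blacklisted (visit, ccd) pairs when selecting the CCD
# inputs that overlap a given tract/patch. Schema and IDs are made up.
import sqlite3

# Data IDs whose processCcd runs failed and must be excluded downstream
blacklist = {(903334, 22)}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE overlap (visit INT, ccd INT, tract INT, patch TEXT)")
conn.executemany(
    "INSERT INTO overlap VALUES (?, ?, ?, ?)",
    [(903334, 16, 8766, "5,5"),
     (903334, 22, 8766, "5,5"),   # blacklisted: dropped below
     (903336, 0, 8766, "5,6")])

def inputs_for_patch(tract, patch):
    """Return the non-blacklisted (visit, ccd) pairs overlapping a patch."""
    rows = conn.execute(
        "SELECT visit, ccd FROM overlap WHERE tract = ? AND patch = ?",
        (tract, patch))
    return [(v, c) for v, c in rows if (v, c) not in blacklist]

inputs = inputs_for_patch(8766, "5,5")
```

The second DAX generator would then emit coadd jobs only for the surviving (visit, ccd) pairs of each patch.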