To test this, I have run (note the --config calibrate:deblend.catchFailures=True override):
$ pipetask --long-log run -b /repo/main -i HSC/runs/RC2/w_2022_44/DM-36763 -o u/laurenma/DM-36843 -p $DRP_PIPE_DIR/pipelines/HSC/DRP-RC2.yaml#calibrate --config calibrate:deblend.catchFailures=True -d "instrument='HSC' AND ((detector=101 AND visit=11706) OR (detector=0 AND visit=26048) OR (detector=102 AND visit=36260) OR (detector=4 AND visit=36750) OR (detector=52 AND visit=36192))" &> DM-36843_calibrate.log
To assess their INCLUDE/REJECT status for the coadd, I retrieved the appropriate metadata from the calexps. I looked at the above two config thresholds and added:
# Maximum mean on-sky distance (in arcsec) between matched source and reference objects post-fit. A mean distance greater than this threshold raises a TaskError and the WCS fit is considered a failure. The default is set to the maximum tolerated by the external global calibration (e.g. jointcal) step for conceivable recovery. Appropriate value will be dataset and workflow dependent.
config.astrometry.maxMeanDistanceArcsec=0.5
Somewhat surprisingly, it seems these all would have survived the thresholds and made it into the coadds. Yusra AlSayyad was interested to know if the same would be the case for these detectors when we were running psfEx. To look into this, I've computed the same numbers for the pre-piff HSC/runs/RC2/w_2022_12/DM-34125 collection.
The numbers, quoted as piff [psfEx], are:
collection = u/laurenma/DM-36843 [HSC/runs/RC2/w_2022_12/DM-34125]
{'detector': 101, 'visit': 11706}
    medianE            = 0.0002 [0.0006]   Rejected: False [False]
    scaledScatterSize  = 0.0011 [0.0012]   Rejected: False [False]
    astromDistanceMean = 0.0483 [0.0503]   Rejected: False [False]

{'detector': 0, 'visit': 26048}
    medianE            = 0.0001 [0.0011]   Rejected: False [False]
    scaledScatterSize  = 0.0010 [0.0055]   Rejected: False [False]
    astromDistanceMean = 0.0465 [0.0469]   Rejected: False [False]

{'detector': 102, 'visit': 36260}
    medianE            = 0.0023 [0.0014]   Rejected: False [False]
    scaledScatterSize  = 0.0070 [0.0066]   Rejected: False [False]
    astromDistanceMean = 0.0524 [0.0564]   Rejected: False [False]

{'detector': 4, 'visit': 36750}
    medianE            = 0.0038 [0.0079]   Rejected: False [True]
    scaledScatterSize  = 0.0033 [0.0078]   Rejected: False [False]
    astromDistanceMean = 0.0290 [0.0278]   Rejected: False [False]

{'detector': 52, 'visit': 36192}
    medianE            = 0.0016 [0.0004]   Rejected: False [False]
    scaledScatterSize  = 0.0073 [0.0054]   Rejected: False [False]
    astromDistanceMean = 0.0622 [0.0613]   Rejected: False [False]
So one of the psfEx detectors would have been excluded from the coadd.
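The INCLUDE/REJECT logic applied above can be sketched as a simple set of threshold comparisons. This is a minimal illustration, not the pipeline's actual selection code; the medianE and scaledScatterSize thresholds below are assumed typical defaults inferred from the rejection pattern in the table, not values quoted from this run's configs (only maxMeanDistanceArcsec=0.5 appears above):

```python
# Assumed thresholds -- not quoted from this run's configs.
MAX_ELLIP_RESIDUAL = 0.007       # assumed threshold on medianE
MAX_SCALED_SIZE_SCATTER = 0.009  # assumed threshold on scaledScatterSize
MAX_MEAN_DISTANCE_ARCSEC = 0.5   # config.astrometry.maxMeanDistanceArcsec above

def rejected(medianE, scaledScatterSize, astromDistanceMean):
    """Return per-metric rejection flags for one detector/visit."""
    return {
        "medianE": medianE > MAX_ELLIP_RESIDUAL,
        "scaledScatterSize": scaledScatterSize > MAX_SCALED_SIZE_SCATTER,
        "astromDistanceMean": astromDistanceMean > MAX_MEAN_DISTANCE_ARCSEC,
    }

# The one psfEx case flagged in the table: detector=4, visit=36750.
flags = rejected(medianE=0.0079, scaledScatterSize=0.0078, astromDistanceMean=0.0278)
print(flags)  # only medianE exceeds its (assumed) threshold
```

With these assumed values, detector=4/visit=36750 is rejected on medianE alone, matching the `Rejected: False [True]` entry in the table.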
Going a bit deeper, I also computed the psf image at the locations of all the sources that triggered the following warning in the logs (see attached) of my run:
WARNING 2022-11-08T12:03:01.389-08:00 lsst.calibrate.deblend (calibrate:{instrument: 'HSC', detector: 0, visit: 26048, ...})(sourceDeblendTask.py:369) - Unable to deblend source 11187530812620831: because PSF FWHM=nan is invalid.
and compared them with the same for the psfEx collection:
nanPsf_piff_vs_psfEx.pdf
(or, better yet, nanPsf_piff_vs_psfEx_vs_icExp.pdf, which also includes some views of the icExp image itself)
Only a few (3 out of 27) actually failed with psfEx as well, so piff seems slightly less stable here (this may be of interest for DM-36930, Joshua Meyers).
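The check the deblender is tripping on is, in essence, an evaluation of the PSF model's width at the source position followed by a finiteness test. Here is a self-contained sketch of that idea using second moments of a pixelized PSF postage stamp (illustrative only: the stack evaluates the fitted PSF model through its Psf API rather than measuring a stamp, so the function names here are my own):

```python
import numpy as np

SIGMA_TO_FWHM = 2.0 * np.sqrt(2.0 * np.log(2.0))  # ~2.355 for a Gaussian

def psf_fwhm(stamp):
    """Gaussian-equivalent FWHM of a PSF postage stamp via second moments.

    Any NaN in the stamp (e.g. a PSF model that fails to evaluate at
    this position) propagates through the sums and yields FWHM=nan.
    """
    stamp = np.asarray(stamp, dtype=float)
    total = stamp.sum()
    y, x = np.indices(stamp.shape)
    xbar = (x * stamp).sum() / total
    ybar = (y * stamp).sum() / total
    ixx = ((x - xbar) ** 2 * stamp).sum() / total
    iyy = ((y - ybar) ** 2 * stamp).sum() / total
    ixy = ((x - xbar) * (y - ybar) * stamp).sum() / total
    # Determinant radius: fourth root of the moments-matrix determinant.
    sigma = (ixx * iyy - ixy ** 2) ** 0.25
    return SIGMA_TO_FWHM * sigma

def deblend_would_skip(stamp):
    """Mirror the warning's condition: skip deblending if FWHM is not finite."""
    return not np.isfinite(psf_fwhm(stamp))
```

A clean Gaussian stamp of width sigma returns roughly 2.355*sigma; a stamp containing NaNs returns FWHM=nan and would be skipped, as in the warning above.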
The only odd thing I noticed in the logs (see attached) of the run I did here was the following (odd in that, if there really are "no good stars", why is this not a failure?):
WARNING 2022-11-08T12:04:51.027-08:00 lsst.calibrate.photoCal (calibrate:{instrument: 'HSC', detector: 101, visit: 11706, ...})(photoCal.py:640) - PhotoCal.getZeroPoint: no good stars remain
INFO 2022-11-08T12:04:51.033-08:00 lsst.calibrate.photoCal (calibrate:{instrument: 'HSC', detector: 101, visit: 11706, ...})(photoCal.py:413) - Magnitude zero point: 33.493359 +/- 0.000000 from 4 stars
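On that oddity: photoCal presumably fits the zero point with iterative outlier rejection, and a purely illustrative sketch (not photoCal's actual implementation; the function and clipping details are my own) shows how such a loop can both report that the sample has run dry and still return a zero point from the survivors of an earlier iteration, with a formally zero scatter when the few remaining stars agree exactly:

```python
import numpy as np

def zero_point_sigma_clipped(inst_flux, ref_mag, n_sigma=3.0, max_iter=20):
    """Illustrative sigma-clipped zero-point fit (NOT photoCal's actual code).

    zp is defined by ref_mag = -2.5*log10(inst_flux) + zp.  Each iteration
    clips outliers about the median; if a clip empties the sample, the
    previous iteration's estimate is kept -- one way a "no good stars
    remain" warning can coexist with a reported zero point from N stars.
    """
    dzp = ref_mag + 2.5 * np.log10(inst_flux)  # per-star zero points
    keep = np.isfinite(dzp)
    zp, err, nstars = np.nan, np.nan, 0
    for _ in range(max_iter):
        if not keep.any():
            print("no good stars remain")
            break
        zp = np.median(dzp[keep])
        err = np.std(dzp[keep]) / max(np.sqrt(keep.sum()), 1.0)
        nstars = int(keep.sum())
        scatter = max(np.std(dzp[keep]), 1e-12)
        new_keep = keep & (np.abs(dzp - zp) < n_sigma * scatter)
        if new_keep.sum() == keep.sum():
            break  # converged
        keep = new_keep
    return zp, err, nstars
```

For example, four stars in perfect agreement plus one outlier converge to a zero point with err=0.0 from 4 stars, echoing the "+/- 0.000000 from 4 stars" line above.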