  Data Management / DM-36843

Investigate whether "bad" PSF images would have been included in coadds


    Details

    • Type: Story
    • Status: Done
    • Resolution: Done
    • Fix Version/s: None
    • Component/s: None
    • Labels:
    • Story Points:
      6
    • Epic Link:
    • Team:
      Data Release Production
    • Urgent?:
      No

      Description

      It has not yet been decided exactly how we will handle the problematic PSFs noted in DM-35552 (see also the discussion on DM-36763 and this confluence page). In finalizing the plan, it would be useful to know whether these exposures would have been included in the coadd had the error been caught (and thus the calexp datasets persisted). The exclusion criteria would come from the SelectPsfWcsImages task used in makeWarp, for which the following are currently set for HSC processing:

      # Maximum median ellipticity residual
      config.select.maxEllipResidual=0.007
       
      # Maximum scatter in the size residuals, scaled by the median size
      config.select.maxScaledSizeScatter=0.009
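
      For concreteness, the decision these two thresholds feed is essentially the following (a simplified sketch, not the actual selection-task code; the function name is illustrative):

      def passes_psf_wcs_selection(medianEllipResidual, scaledSizeScatter,
                                   maxEllipResidual=0.007, maxScaledSizeScatter=0.009):
          # Keep this visit/detector for warping/coaddition only if both PSF-model
          # QA quantities are within the configured maxima.
          return (medianEllipResidual <= maxEllipResidual
                  and scaledSizeScatter <= maxScaledSizeScatter)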
      

        Attachments

        1. DM-36843_calibrate.log
          78 kB
        2. icExp_0_26048.pdf
          2.50 MB
        3. nanPsf_piff_vs_psfEx_vs_icExp.pdf
          5.02 MB
        4. nanPsf_piff_vs_psfEx.pdf
          2.40 MB
        5. psfs_piff_vs_psfEx_0_26048.pdf
          1.24 MB
        6. psfUsed_0_26048_piffImage.png
          459 kB

          Issue Links

            Activity

            Lauren MacArthur added a comment - edited

            To test this, I have run (note the --config calibrate:deblend.catchFailures=True override):

            $ pipetask --long-log run -b /repo/main -i HSC/runs/RC2/w_2022_44/DM-36763 -o u/laurenma/DM-36843 -p $DRP_PIPE_DIR/pipelines/HSC/DRP-RC2.yaml#calibrate --config calibrate:deblend.catchFailures=True -d "instrument='HSC' AND ((detector=101 AND visit=11706) OR (detector=0 AND visit=26048) OR (detector=102 AND visit=36260) OR (detector=4 AND visit=36750) OR (detector=52 AND visit=36192))" &> DM-36843_calibrate.log
            

            To assess their INCLUDE/REJECT status for the coadd, I retrieved the appropriate metadata from the calexps. I looked at the above two config thresholds and added:

            # Maximum mean on-sky distance (in arcsec) between matched source and reference objects post-fit.  A mean distance greater than this threshold raises a TaskError and the WCS fit is considered a failure.  The default is set to the maximum tolerated by the external global calibration (e.g. jointcal) step for conceivable recovery.  Appropriate value will be dataset and workflow dependent.
            config.astrometry.maxMeanDistanceArcsec=0.5
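
            Something along these lines (a minimal sketch, not the exact script I ran; the summary-stat field names and the way medianE is formed from the two ellipticity residuals reflect my understanding of the selection task and should be treated as assumptions):

            import math
            from lsst.daf.butler import Butler

            # Thresholds: the two makeWarp selection cuts plus the astrometry cut above.
            MAX_ELLIP_RESIDUAL = 0.007
            MAX_SCALED_SIZE_SCATTER = 0.009
            MAX_MEAN_DISTANCE_ARCSEC = 0.5

            butler = Butler("/repo/main", collections="u/laurenma/DM-36843")
            for dataId in ({"instrument": "HSC", "detector": 101, "visit": 11706},
                           {"instrument": "HSC", "detector": 0, "visit": 26048}):
                # Exposure summary statistics persisted with the calexp (field names assumed).
                stats = butler.get("calexp", dataId).getInfo().getSummaryStats()
                medianE = math.hypot(stats.psfStarDeltaE1Median, stats.psfStarDeltaE2Median)
                scaledScatterSize = stats.psfStarScaledDeltaSizeScatter
                astromDistanceMean = stats.astromOffsetMean
                rejected = (medianE > MAX_ELLIP_RESIDUAL
                            or scaledScatterSize > MAX_SCALED_SIZE_SCATTER
                            or astromDistanceMean > MAX_MEAN_DISTANCE_ARCSEC)
                print(dataId, f"medianE={medianE:.4f}",
                      f"scaledScatterSize={scaledScatterSize:.4f}",
                      f"astromDistanceMean={astromDistanceMean:.4f}",
                      f"Rejected: {rejected}")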
            

            Somewhat surprisingly, it seems these all would have survived the thresholds and made it into the coadds. Yusra AlSayyad was interested to know if the same would be the case for these detectors when we were running psfEx. To look into this, I've computed the same numbers for the pre-piff HSC/runs/RC2/w_2022_12/DM-34125 collection.

            The numbers, with piff values first and the psfEx values in brackets, look like:

            collection = u/laurenma/DM-36843 [HSC/runs/RC2/w_2022_12/DM-34125]
             
            {'detector': 101, 'visit': 11706}
                       medianE = 0.0002 [0.0006] Rejected: False [False]
             scaledScatterSize = 0.0011 [0.0012] Rejected: False [False]
            astromDistanceMean = 0.0483 [0.0503] Rejected: False [False]
             
            {'detector': 0, 'visit': 26048}
                       medianE = 0.0001 [0.0011] Rejected: False [False]
             scaledScatterSize = 0.0010 [0.0055] Rejected: False [False]
            astromDistanceMean = 0.0465 [0.0469] Rejected: False [False]
             
            {'detector': 102, 'visit': 36260}
                       medianE = 0.0023 [0.0014] Rejected: False [False]
             scaledScatterSize = 0.0070 [0.0066] Rejected: False [False]
            astromDistanceMean = 0.0524 [0.0564] Rejected: False [False]
             
            {'detector': 4, 'visit': 36750}
                       medianE = 0.0038 [0.0079] Rejected: False [True]
             scaledScatterSize = 0.0033 [0.0078] Rejected: False [False]
            astromDistanceMean = 0.0290 [0.0278] Rejected: False [False]
             
            {'detector': 52, 'visit': 36192}
                       medianE = 0.0016 [0.0004] Rejected: False [False]
             scaledScatterSize = 0.0073 [0.0054] Rejected: False [False]
            astromDistanceMean = 0.0622 [0.0613] Rejected: False [False]
            

            So one of the psfEx detectors would have been excluded from the coadd.

            Going a bit deeper, I also computed the PSF image at the locations of all the sources that triggered the following warning in the logs (see attached) of my run:

            WARNING 2022-11-08T12:03:01.389-08:00 lsst.calibrate.deblend (calibrate:{instrument: 'HSC', detector: 0, visit: 26048, ...})(sourceDeblendTask.py:369) - Unable to deblend source 11187530812620831: because PSF FWHM=nan is invalid.
            

            and compared them with the same locations for the psfEx collection: nanPsf_piff_vs_psfEx.pdf (or, better yet, this one, which also includes some views of the icExp image itself: nanPsf_piff_vs_psfEx_vs_icExp.pdf).
            Only a few (3 out of 27) of these locations also failed with psfEx, so piff looks slightly less stable here (this may be of interest for DM-36930, Joshua Meyers).
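
            For reference, the per-source PSF evaluation can be done roughly as follows (a hedged sketch using the standard afw Psf interface, not the exact script behind the attached plots):

            import math
            import numpy as np
            from lsst.daf.butler import Butler
            from lsst.geom import Point2D

            SIGMA_TO_FWHM = 2.0 * math.sqrt(2.0 * math.log(2.0))  # Gaussian sigma -> FWHM

            butler = Butler("/repo/main", collections="u/laurenma/DM-36843")
            dataId = {"instrument": "HSC", "detector": 0, "visit": 26048}
            calexp = butler.get("calexp", dataId)
            psf = calexp.getPsf()
            src = butler.get("src", dataId)

            nBad = 0
            for record in src:
                position = Point2D(record.getX(), record.getY())
                try:
                    # Determinant radius of the PSF model shape at this source's position.
                    sigma = psf.computeShape(position).getDeterminantRadius()
                except Exception:
                    sigma = float("nan")  # some PSF models raise rather than returning NaN
                if not np.isfinite(sigma):
                    nBad += 1
                    print(f"source {record.getId()}: PSF FWHM=nan at {position}")
            print(f"{nBad} of {len(src)} sources have an invalid PSF FWHM "
                  f"(FWHM = {SIGMA_TO_FWHM:.3f} x determinant radius)")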

            The only other oddity I noticed in the logs (see attached) of the run I did here (odd in that, if there really are "no good stars", why is this not a failure?) was:

            WARNING 2022-11-08T12:04:51.027-08:00 lsst.calibrate.photoCal (calibrate:{instrument: 'HSC', detector: 101, visit: 11706, ...})(photoCal.py:640) - PhotoCal.getZeroPoint: no good stars remain                                          
            INFO 2022-11-08T12:04:51.033-08:00 lsst.calibrate.photoCal (calibrate:{instrument: 'HSC', detector: 101, visit: 11706, ...})(photoCal.py:413) - Magnitude zero point: 33.493359 +/- 0.000000 from 4 stars
            

            Lauren MacArthur added a comment -

            This has been further discussed on this Slack thread. It turns out there are some significant ISR differences between the w_2022_12 psfEx run and the w_2022_44 piff run being compared here (see icExp_0_26048.pdf), so a direct comparison of where the PSF models failed is not strictly valid.

            Also noted on that thread: where the measurement does succeed for the piff run, the PSF determinant radius can vary from ~1.7 to 6.5 (see psfs_piff_vs_psfEx_0_26048.pdf), and there are large regions with no calib_psf_used stars (shown in the attached psfUsed_0_26048_piffImage.png):

            • magenta: piff/w_2022_44
            • green: psfEx/w_2022_12

            so the PSF can't be well constrained in these regions (and it seems piff's extrapolated models can get pretty funky). Both are clearly not good (i.e. we probably should not be including these poorly modeled detectors in the coadds), so we need metric(s) that can capture this (as noted above, our nominal PSF QA metrics are not capturing it!). Suggestions included:

            • a cut on the number of PSF stars used would probably flag this image (I assume most fully successful images have many more), and if not, a cut on the number of stars per quadrant/octant/N-tant definitely would (a rough sketch follows this list)
            • a “variation across the field” metric
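
            A rough sketch of the per-N-tant count idea (the threshold and interface here are purely illustrative, not a proposed config):

            import numpy as np

            def quadrant_star_counts(x, y, bbox, min_per_quadrant=5):
                """Count calib_psf_used stars per detector quadrant and flag sparse coverage.

                x, y are pixel positions of the calib_psf_used stars; bbox is the detector
                bounding box (e.g. an lsst.geom.Box2I); min_per_quadrant is a placeholder.
                """
                x = np.asarray(x)
                y = np.asarray(y)
                xMid = 0.5 * (bbox.getMinX() + bbox.getMaxX())
                yMid = 0.5 * (bbox.getMinY() + bbox.getMaxY())
                counts = [int(np.sum((x < xMid) & (y < yMid))),
                          int(np.sum((x >= xMid) & (y < yMid))),
                          int(np.sum((x < xMid) & (y >= yMid))),
                          int(np.sum((x >= xMid) & (y >= yMid)))]
                # Reject if any quadrant has too few stars to constrain the PSF model there.
                return counts, any(c < min_per_quadrant for c in counts)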
            Lauren MacArthur added a comment -

            Would you mind giving this a look when you get a chance?  Feel free to request any further diagnostics that might help with DM-36930!

            Joshua Meyers added a comment -

            LGTM.  Thanks for the analysis!


              People

              Assignee:
              Lauren MacArthur
              Reporter:
              Lauren MacArthur
              Reviewers:
              Joshua Meyers
              Watchers:
              Arun Kannawadi, Fred Moolekamp, Joshua Meyers, Lauren MacArthur, Orion Eiger, Yusra AlSayyad

                Dates

                Created:
                Updated:
                Resolved:

                  Jenkins

                  No builds found.