Data Management / DM-19516

Quantify template quality as a function of variable seeing

    Details

    • Type: Story
    • Status: Done
    • Resolution: Done
    • Fix Version/s: None
    • Component/s: None
    • Labels: None
    • Story Points: 16
    • Epic Link:
    • Sprint: AP S19-6, AP F19-1
    • Team: Alert Production

      Description

      Look at DCR corrected templates generated from a range of different seeing conditions.
      Perform aperture photometry at a number of radii on detected sources.
      Generate an aggregate statistic describing the extendedness of point sources.
      (In particular, we aim to establish whether there is a notable difference from the zero-airmass case.)
      Generate similar plots with CompareWarp templates to provide a comparison.

        Attachments

          Issue Links

            Activity

            Ian Sullivan added a comment (edited)

            I used my simulator to generate nine realizations of the same field, each with eight observations with conditions drawn from the OpSim database. Each of the nine sets of simulations had the same range of airmasses and parallactic angles, but a different range of seeing. The first set was forced to constant seeing by setting the seeing for all eight observations to the minimum from the database. For each successive set of simulations, the seeing values were scaled so that the maximum was 10% greater than in the previous simulation.

            I ran two reruns for each set of simulations, one using CompareWarpAssembleCoadd and the other using DcrAssembleCoaddTask. I ran source detection for all 18 reruns, but copied the source detection catalog from the constant seeing DCR coadd run to replace the detections for all the other runs. Thus the source measurements were performed at exactly the same locations for all runs, and in the same order.

            To compare different runs, I first removed any sources that were flagged in any of them. Then I normalized the aperture fluxes by dividing by the PSF flux from the constant seeing run, and took the median from all of the unflagged sources.
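The flag filtering and normalization described above can be sketched as follows. This is a minimal sketch, not the actual pipeline measurement interface; the array names and shapes are assumptions:

```python
import numpy as np

def normalized_aperture_flux(ap_flux, psf_flux_ref, flagged):
    """Median aperture flux, normalized by the constant-seeing run's PSF flux.

    ap_flux      -- (n_sources, n_radii) aperture fluxes from one rerun
    psf_flux_ref -- (n_sources,) PSF fluxes from the constant-seeing run
    flagged      -- (n_sources,) True if the source was flagged in ANY run
    """
    good = ~flagged
    # Normalize each source's aperture fluxes by its reference PSF flux,
    # then take the median over all unflagged sources at each radius.
    norm = ap_flux[good] / psf_flux_ref[good, np.newaxis]
    return np.median(norm, axis=0)
```

Because the same detection catalog is reused for every rerun, the rows of `ap_flux` line up across runs, so the flag mask can be taken as the union of all runs' flags.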

            In the figure below I plot the normalized aperture flux as a function of aperture size for several of the simulations with different seeing ranges, for both the CompareWarp and DCR coadd algorithms. In the case of constant seeing (the darkest solid and dashed lines), the aperture flux is identical for the two algorithms. As soon as the seeing is allowed to vary by even 20%, the CompareWarp algorithm (solid lines) underestimates the flux by 5%, though that value is consistent even up to the largest range in seeing. For the DCR coadd algorithm (dashed lines), the fraction of the recovered flux within an aperture degrades much more slowly at first, but does so steadily.

            A key takeaway is that the 9" aperture flux is almost unchanged for DCR coadds as long as the range of seeing is within 50% of the best-seeing observation.

            The above plot uses the PSF flux from the constant seeing run to normalize the aperture fluxes of all the other runs. The PSF flux also changes for every run, so an alternate metric would normalize the aperture fluxes by the PSF flux measured on the same coadd. This effectively corrects for the loss in power seen in the CompareWarp coadds, and I have included the combined plot below for completeness.
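That alternate metric amounts to swapping the reference PSF flux for the one measured on the same coadd; a minimal sketch under the same assumed array layout as above:

```python
import numpy as np

def self_normalized_aperture_flux(ap_flux, psf_flux_same, flagged):
    """Median aperture flux normalized by the PSF flux measured on the
    same coadd, which divides out any overall loss of point-source power
    (such as the loss seen in the CompareWarp coadds)."""
    good = ~flagged
    # Each source is divided by its own run's PSF flux, so a uniform
    # flux loss cancels and only the shape of the profile remains.
    norm = ap_flux[good] / psf_flux_same[good, np.newaxis]
    return np.median(norm, axis=0)
```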

            Eric Bellm added a comment

            Hi Ian Sullivan,

            The above analysis looks good to me. The genesis of this ticket was your report that it is difficult to quantify DCR performance when templates are built from variable-seeing inputs, because image differencing rejects the resulting artifacts; in particular, templates built from variable seeing have "fuzzy sidelobe features." For posterity, can you provide a qualitative assessment of where those sidelobe features start to appear or become severe as the variability of the input seeing increases? Both for DCR and CompareWarp, if relevant.

            Ian Sullivan added a comment

            I delayed wrapping up this ticket so that I could include the results after fixing DM-19660, DM-19839, and DM-19978. I also re-created the simulations and reprocessed them, since there were some numerical artifacts in the originals that I was concerned might be affecting the outcome. With these fixes the residuals in the difference images are much cleaner, and the aperture flux measurement for CompareWarp coadds also looks better.

            Revised plot of aperture flux measurements. Note that the deep coadds produced by CompareWarpAssembleCoaddTask are now consistent at 9" aperture, and do not show the 5% loss seen previously.

            These changes also cleaned up the residuals, so the "fuzzy sidelobe features" are no longer evident. I have attached example cutout images of typical source residuals from both CompareWarp and DCR templates for three different input seeing ranges.

            Example residuals: CompareWarp template in the top row, DCR template in the bottom row. From left to right: constant 0.6" seeing, variable 0.6" - 0.88" seeing, variable 0.6" - 1.29" seeing.

            Since the residuals are now reasonable for source measurement, I have also re-measured the false detections as in DM-18709, below. On the left I've plotted the fractional reduction in the number of dipoles, and on the right the fractional reduction in the number of detected sources of any sort. The reduction in the number of dipoles approaches 100% above airmass 1.2, but the reduction in the number of sources plateaus at ~80%. The remaining ~20% corresponds to sources that were not well fit by either algorithm.
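The fractional-reduction metric plotted here is straightforward; a sketch for reference, with illustrative counts rather than the measured values:

```python
def fractional_reduction(n_baseline, n_dcr):
    """Fraction of detections removed by using the DCR template, relative
    to the baseline (CompareWarp) template: 1.0 means all removed, 0.0
    means none."""
    return 1.0 - n_dcr / n_baseline
```

For example, `fractional_reduction(100, 20)` gives the ~80% plateau level mentioned above.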


              People

              • Assignee: Ian Sullivan
              • Reporter: John Swinbank
              • Reviewers: Eric Bellm
              • Watchers: Eric Bellm, Ian Sullivan, John Swinbank
