Details
- Type: Story
- Status: Done
- Resolution: Done
- Fix Version/s: None
- Component/s: afw, meas_algorithms, pipe_tasks
- Labels:
- Story Points: 4
- Team: External
Description
To address the problems observed in detection efficiency in HSC, we'd like to try the following modifications to either SourceDetectionTask or DetectCoaddSources (since this only needs to happen on coadds):
1. Do not rescale the coadd variance plane.
2. Start with a preliminary detection step with an aggressive threshold (2-4 sigma) to identify regions that may contain objects. Grow these footprints as usual.
3. Run background estimation, ignoring the just-detected footprints, as usual.
4. Add a temporary set of sky objects and measure PSF fluxes on them.
5. Use the ratio of the empirical RMS ("rms") of the sky object fluxes to the mean of their quoted uncertainties ("err") to determine an "effective" threshold for final detection from the configured ("nominal") threshold: effective = nominal*err/rms.
6. Proceed with normal detection as it currently exists in SourceDetectionTask, discarding the temporary sky objects.
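The effective-threshold calculation described above (effective = nominal*err/rms) can be sketched as follows. This is a minimal illustration, not the pipeline implementation; the function name and array inputs are hypothetical stand-ins for the PSF fluxes and quoted uncertainties measured on the temporary sky objects:

```python
import numpy as np

def effective_threshold(nominal, sky_fluxes, sky_errs):
    """Scale the configured detection threshold by err/rms of sky-object fluxes.

    Hypothetical sketch: `rms` is the empirical scatter of the sky-object
    PSF fluxes, `err` is the mean of their quoted uncertainties, and the
    effective threshold follows the ticket's formula nominal*err/rms.
    """
    rms = np.std(sky_fluxes)
    err = np.mean(sky_errs)
    return nominal * err / rms

# Example: fluxes scatter twice as much as the quoted errors suggest,
# so the effective threshold is half the nominal one.
threshold = effective_threshold(5.0, [-1.0, 1.0, -1.0, 1.0], [0.5] * 4)
```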
We may also be able to use the mean of the sky object fluxes to determine the correction to the background, instead of running background estimation in step (3) before adding the sky objects. This should be possible via a config option - we don't know whether the finite number of sky objects will give us sufficient S/N to estimate the background, and we also don't know whether subtracting the background after a more aggressive detection step will yield a mean sky object flux that is consistent with zero.
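A quick way to judge both open questions above (whether the finite sample of sky objects has enough S/N, and whether the residual is consistent with zero) is to compare the mean sky-object flux to its standard error. A minimal sketch, with a hypothetical helper name; only the arithmetic follows the description:

```python
import numpy as np

def mean_sky_offset(sky_fluxes):
    """Mean sky-object PSF flux and its standard error.

    If the mean is small compared to the standard error, the residual
    background is consistent with zero; if the standard error itself is
    large, the sample is too small to constrain the background correction.
    """
    flux = np.asarray(sky_fluxes, dtype=float)
    mean = flux.mean()
    stderr = flux.std(ddof=1) / np.sqrt(flux.size)
    return mean, stderr
```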
In testing, we'll need to tune the threshold for the first aggressive detection phase and test that the final "reEstimateBackground" step in SourceDetectionTask doesn't degrade the quality of the background (by looking at the regular sky objects added later).
Removing the coadd variance scaling will also change the reported uncertainties for our measurements. I think the reported uncertainties were already wrong, but this may make them slightly more wrong. We could consider scaling all of the uncertainties by (rms/err) in a manner similar to how we apply the aperture corrections, which should push them closer to being correct (it would only be exact for PSF fluxes on faint point sources, I think). But we should at least make sure that (rms/err) is recorded for each patch so that science users can apply that factor themselves if desired.
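The proposed per-patch uncertainty scaling could look something like the sketch below (hypothetical names; the analogy to aperture corrections is from the paragraph above, and as noted there the factor is only exact for PSF fluxes of faint point sources):

```python
import numpy as np

def scale_uncertainties(errors, rms, err):
    """Scale quoted measurement uncertainties by the per-patch rms/err factor.

    `rms` is the empirical scatter of sky-object fluxes and `err` the mean
    quoted uncertainty, both measured on the patch; multiplying by rms/err
    inflates the quoted errors toward the empirically observed noise.
    """
    return np.asarray(errors, dtype=float) * (rms / err)

# Example: a patch where the empirical noise is twice the quoted noise.
corrected = scale_uncertainties([1.0, 2.0], rms=3.0, err=1.5)
```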
Robert Lupton, please check that all of the above makes sense to you (using another background estimation step instead of sky objects to correct for the mean offset was Tanaka-san's idea; I think it's worth trying both approaches). At the HSC telecon on 12/12 we agreed that Paul Price would try to implement this after getting the y-band background subtraction in.
Thanks Paul. This looks great! Just a quick question - will all this be included in coaddDriver.py, and will the background-re-estimated coadd images be written as deepCoadd_calexp?