Fix Version/s: None
Team: Data Release Production
Our current implementation of the temporary local background approach to avoiding spurious detections near bright objects simply subtracts a local background from the full image before performing any detection steps. That can result in missed detections of isolated objects and incorrect Footprints for large objects.
Instead, we should:
1. Detect Footprints and Peaks.
2. Subtract the local background.
3. Detect Peaks within each Footprint again, and use the new set of Peaks instead of the old set if and only if there is at least one Peak in the new set.
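A minimal sketch of the proposed sequence, using plain NumPy and a toy local-maximum peak finder in place of the real afw detection classes (all names and the peak-finding criterion here are hypothetical, not the actual pipeline code):

```python
import numpy as np

def peaks_in(image, mask, threshold):
    """Toy Peak detection: pixels inside `mask` that exceed `threshold` and are
    local maxima of their 4-neighbourhood (a stand-in for real Peak finding)."""
    peaks = []
    for r, c in zip(*np.nonzero(mask)):
        v = image[r, c]
        if v <= threshold:
            continue
        neighbours = [image[r + dr, c + dc]
                      for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                      if 0 <= r + dr < image.shape[0] and 0 <= c + dc < image.shape[1]]
        if all(v >= n for n in neighbours):
            peaks.append((r, c))
    return peaks

def redetect_peaks(image, background, footprints, threshold):
    """Steps 1-3: `footprints` maps id -> (mask, old_peaks) detected on the
    un-subtracted image. Subtract the local background, re-detect Peaks within
    each Footprint, and keep the new set only when it is non-empty."""
    subtracted = image - background
    out = {}
    for fp_id, (mask, old_peaks) in footprints.items():
        new_peaks = peaks_in(subtracted, mask, threshold)
        out[fp_id] = new_peaks if new_peaks else old_peaks
    return out

# Toy example: footprint "A" still has a peak after subtraction and gets the
# new set; footprint "B" loses all its peaks and keeps the old set.
img = np.zeros((5, 5))
img[2, 2], img[0, 0] = 10.0, 6.0
bg = np.full((5, 5), 4.0)
mask_a = np.zeros((5, 5), bool); mask_a[1:4, 1:4] = True
mask_b = np.zeros((5, 5), bool); mask_b[0, 0] = True
result = redetect_peaks(img, bg,
                        {"A": (mask_a, [(1, 1)]), "B": (mask_b, [(0, 0)])},
                        threshold=5.0)
```

The fallback in step 3 is the key point: background subtraction must never leave a Footprint with zero Peaks, so the pre-subtraction Peaks are retained whenever re-detection comes up empty.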
This really ought to be fixed before the HSC internal release or major HSC processing at NCSA.
I've made some comments on the GitHub PRs, including suggestions for optimisation.
Performance regression fixed; the time to run detection on my test coadd patch has gone from 162 s down to 3.6 s. I'm now doing the thresholding for the peaks only in the subimage covered by the footprint whose peaks we're trying to replace. That cuts down both on the comparison between footprints and on the amount of image we have to threshold again (especially because I'm not trying to replace the peaks of footprints that have only one peak).
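The optimisation described above could be sketched as follows (hypothetical helpers, not the actual pipeline code): skip footprints that only have one peak, and restrict any re-thresholding to each footprint's bounding box rather than the full image.

```python
import numpy as np

def multi_peak_footprints(footprints):
    """Keep only footprints with more than one peak as replacement candidates;
    single-peak footprints are skipped entirely, per the comment above."""
    return {fp_id: fp for fp_id, fp in footprints.items()
            if len(fp["peaks"]) > 1}

def bbox_subimage(image, mask):
    """Return the subimage covering the footprint's bounding box, plus its
    origin offset, so thresholding touches only the pixels that can matter."""
    rows, cols = np.nonzero(mask)
    r0, c0 = rows.min(), cols.min()
    sub = image[r0:rows.max() + 1, c0:cols.max() + 1]
    return sub, (int(r0), int(c0))

# Toy usage: only "b" qualifies for re-detection, and its footprint's
# bounding box is a 2x2 view into the image rather than the full 4x4 array.
fps = {"a": {"peaks": [(0, 0)]}, "b": {"peaks": [(1, 2), (2, 3)]}}
candidates = multi_peak_footprints(fps)
mask = np.zeros((4, 4), bool); mask[1:3, 2:4] = True
img = np.arange(16.0).reshape(4, 4)
sub, origin = bbox_subimage(img, mask)
```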
It looks like processCcd time spent on detection was always in the noise; my latest benchmark is:
Paul Price, if you're free to take another look today, please do, but I'll go ahead and merge later today if I don't hear from you, as I believe I've addressed your biggest concern.
Merged to master. Thanks for the very helpful review, Paul Price (and Lauren MacArthur).
processCcd.py on one CCD, before this change: