Details
- Type: Bug
- Status: Done
- Resolution: Done
- Fix Version/s: None
- Component/s: ap_association, ap_verify
- Labels:
- Story Points: 6
- Epic Link:
- Sprint: AP S22-6 (May)
- Team: Alert Production
- Urgent?: No
Description
The fakes completeness metrics have dropped to zero in all ap_verify runs after DM-33857, for both ap_verify_ci_cosmos_pdr2 and ap_verify_ci_hits2015. Given that we are still detecting sources (though fewer), this may be the result of the completeness metrics making assumptions about catalog schemas and contents that are no longer true.
Investigate and restore the metrics.
Issue Links
- has to be done before
  - DM-34699 Patch the ap_pipe config reset hack (Done)
  - DM-32694 Split AP pipeline into ApPipeWithFakes (Done)
- is blocked by
  - DM-34531 Cleanup piff PSF determiner model size config options (Done)
- is triggering
  - DM-34531 Cleanup piff PSF determiner model size config options (Done)
  - DM-34586 Use --fail-fast in ap_verify to halt execution on first error (Done)
  - DM-34698 Default piff kernelSize to 25 (Done)
- relates to
  - DM-33857 Make Piff the default PsfDeterminer in DRP.yaml (Done)
Joshua Meyers and I worked on this a bit in pair coding today. A likely source of the problem is that Piff PSF stamps are smaller (21x21 pixels) than PSFEx's. insertFakes.py uses a default calibFluxRadius=12.0, and psf.computeApertureFlux silently returns NaN when the aperture extends beyond the stamp boundary, which is now the case. The inserted fake magnitudes are then likely to be NaN, which would explain why the fakes metrics do not find them. A workaround would be to use a 10-pixel radius by default, but we should also add logging so this failure mode is visible instead of silent.
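To make the failure mode concrete, here is a minimal NumPy sketch of the behavior described above. It is not the LSST `psf.computeApertureFlux` API; `aperture_flux` and `checked_aperture_flux` are hypothetical stand-ins that emulate the silent-NaN case (aperture radius larger than the stamp half-width) and the proposed fix of warning and clamping instead of failing silently:

```python
import logging
import math

import numpy as np

log = logging.getLogger("insertFakes")


def aperture_flux(stamp, radius):
    """Sum pixels inside a circular aperture centered on the stamp.

    Emulates the silent failure: if the aperture does not fit inside the
    stamp, return NaN with no warning (stand-in, not the LSST API).
    """
    ny, nx = stamp.shape
    cy, cx = (ny - 1) / 2, (nx - 1) / 2
    if radius > min(cx, cy):
        # Aperture spills past the stamp edge: emulate the silent NaN.
        return float("nan")
    yy, xx = np.ogrid[:ny, :nx]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return float(stamp[mask].sum())


def checked_aperture_flux(stamp, radius):
    """Proposed behavior: log a warning and clamp rather than fail silently."""
    half = (min(stamp.shape) - 1) / 2
    if radius > half:
        log.warning(
            "calibFluxRadius=%.1f exceeds PSF stamp half-width %.1f; "
            "aperture flux would be NaN. Clamping radius to %.1f.",
            radius, half, half,
        )
        radius = half
    return aperture_flux(stamp, radius)


# A 21x21 Piff-sized stamp (uniform values, for simplicity).
stamp = np.ones((21, 21))
print(math.isnan(aperture_flux(stamp, 12.0)))    # default radius 12 -> NaN
print(checked_aperture_flux(stamp, 10.0) > 0.0)  # 10-pixel radius fits
```

With a 21x21 stamp the half-width is 10 pixels, so the default 12-pixel calibration radius cannot fit, matching the NaN magnitudes we see; a 10-pixel default (or a clamp with a warning) avoids it.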