Details
Type: Story
Status: In Progress
Resolution: Unresolved
Fix Version/s: None
Component/s: faro, meas_extensions_scarlet
Labels: None
Story Points: 3
Epic Link:
Sprint: DRP S22B, DRP S23A
Team: Data Release Production
Urgent?: No
Description
Now that we've implemented procedures to skip blends based on size, saturation, number of children, etc., we should have metrics that track the number of blends skipped by each cut, so we know how many blends and how many peaks are being removed.
The current metrics I'm thinking of are:
- total number of blends
- total number of detected peaks
- number of blends skipped because they were too large
- number of blends skipped because they had too many peaks
- number of blends skipped because they contained masked pixels
- number of peaks skipped because they contained masked pixels
- number of blends that failed
Feel free to add any other metrics that might be useful to calculate.
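As a rough illustration, here is a minimal sketch of how these counters might be computed from a deblender output catalog loaded into a pandas DataFrame. The flag column names (deblend_nPeaks, deblend_parentTooBig, deblend_tooManyPeaks, deblend_masked, deblend_peakMasked, deblend_failed) are placeholders for whatever meas_extensions_scarlet actually sets, not confirmed schema names.
{code:python}
import pandas as pd


def blend_skip_metrics(cat: pd.DataFrame) -> dict:
    """Count skipped blends and peaks in a deblender output catalog.

    Assumes one row per source, with ``parent == 0`` marking top-level
    records; the flag column names below are hypothetical placeholders.
    """
    parents = cat[cat["parent"] == 0]
    # Treat any top-level record with more than one detected peak as a blend.
    blends = parents[parents["deblend_nPeaks"] > 1]
    children = cat[cat["parent"] != 0]
    return {
        "numBlends": len(blends),
        "numPeaks": int(parents["deblend_nPeaks"].sum()),
        "numBlendsTooBig": int(blends["deblend_parentTooBig"].sum()),
        "numBlendsTooManyPeaks": int(blends["deblend_tooManyPeaks"].sum()),
        "numBlendsMasked": int(blends["deblend_masked"].sum()),
        # Individually skipped peaks are assumed to be flagged on the children.
        "numPeaksMasked": int(children["deblend_peakMasked"].sum()),
        "numBlendsFailed": int(blends["deblend_failed"].sum()),
    }
{code}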
Comments
Just found this issue. Do any of the metrics from DM-27032 (https://github.com/lsst/pipe_tasks/blob/main/python/lsst/pipe/tasks/metrics.py) help you? It would be nice to avoid duplicating work where possible. (For that matter, Eric Bellm, do we want to use any of the metrics from Fred Moolekamp's list in ap_verify?)
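For the faro/verify side, a hedged sketch of how counts like these could be exposed as lsst.verify Measurements (the pattern the DM-27032 metrics in pipe_tasks follow) is below. The metric names are placeholders and would need to be registered in a metrics package before use.
{code:python}
import astropy.units as u
from lsst.verify import Measurement

# Reuses blend_skip_metrics() and the catalog `cat` from the sketch above.
# The "meas_extensions_scarlet.*" metric names are hypothetical, not
# registered metrics.
measurements = [
    Measurement(f"meas_extensions_scarlet.{name}", value * u.count)
    for name, value in blend_skip_metrics(cat).items()
]
{code}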