Details

Type: Story
Status: Invalid
Resolution: Done
Fix Version/s: None
Component/s: ap_verify
Labels:
Story Points: 6
Epic Link:
Sprint: Alert Production F17 - 8, Alert Production F17 - 9
Team: Alert Production
Description
When we begin analyzing metrics for verify_ap, those metrics will need to carry metadata about the job being run. This could include the dataset/camera/filter IDs (see SQR-019), provenance and pipeline version (see Community post), and information about the server running the pipeline (requested on #dm-squash to control for environmental effects on performance metrics).
This ticket is to decide on a coherent strategy for delivering metadata to verify_ap's verify.Job object, and then to add the necessary infrastructure to verify.ap.metrics. We may need multiple sources of information; for example, reporting camera IDs could naturally be a feature of the verify.ap.Dataset class, ultimately read from config files, while server information should be gathered in a foolproof way so that arbitrary users cannot masquerade as official trial runs.
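As a possible starting point, a minimal sketch of attaching dataset and environment metadata to a verify.Job through its meta mapping. The attach_run_metadata helper and the example dataset IDs are hypothetical; only Job.meta and the standard-library calls are assumed to exist.

{code:python}
import getpass
import platform
import socket

import lsst.verify


def attach_run_metadata(job, dataset_ids):
    """Attach dataset and environment metadata to a verify.Job.

    ``dataset_ids`` is a hypothetical mapping of dataset/camera/filter
    IDs, e.g. as might be reported by the verify.ap.Dataset class.
    """
    # Job.meta behaves like a dict of job-level metadata.
    job.meta.update(dataset_ids)
    # Server information is gathered here rather than accepted from
    # the caller, so that it is harder to spoof by hand.
    job.meta.update({
        'hostname': socket.gethostname(),
        'platform': platform.platform(),
        'python_version': platform.python_version(),
        'user': getpass.getuser(),
    })


job = lsst.verify.Job()
attach_run_metadata(job, {'camera': 'HSC', 'filter': 'r', 'dataset': 'ci_hsc'})
{code}

Collecting the server information inside the harness, instead of taking it as user input, is what would keep official trial runs distinguishable from arbitrary ones.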
Job metadata should also give some indication of any swappable or otherwise non-standard pipeline components used in a run (suggested by Ian Sullivan). Perhaps this can be mined from the Task metadata?
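If mining Task metadata proves workable, it might look something like the sketch below. The mine_task_metadata helper is hypothetical; it assumes Task.getFullMetadata() and PropertySet.toDict() as the access path, and the key filtering is purely illustrative.

{code:python}
def mine_task_metadata(job, task, keys_of_interest=None):
    """Copy selected entries from a Task's metadata into job.meta.

    ``task`` is an lsst.pipe.base.Task; its full metadata includes
    entries recorded by its subtasks, which should reveal swapped-in
    or otherwise non-standard pipeline components.
    """
    # getFullMetadata() collects the metadata of the task and all of
    # its subtasks; toDict() converts the resulting PropertySet to a
    # plain dict (with nested dicts for sub-hierarchies).
    task_metadata = task.getFullMetadata().toDict()
    for name, value in task_metadata.items():
        if keys_of_interest is None or name in keys_of_interest:
            # Namespace the keys to avoid clashes with other metadata.
            job.meta['task.' + name] = value
{code}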