Fix Version/s: None
1. Add a pass/fail routine to report success/failure against metrics. Do this for:
   - Configured metrics
2. Add pass/fail reporting to the running of `validate.drp.run`
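As an illustration of step 1, a pass/fail routine might compare a measured metric value against a configured threshold. The sketch below is hypothetical; `check_metric` and its parameters are illustrative names, not the actual validate_drp API.

```python
def check_metric(name, measured, threshold, lower_is_better=True):
    """Report and return whether a measured value meets its requirement.

    Hypothetical example; the real routine would pull thresholds from
    the configured metrics rather than take them as arguments.
    """
    passed = measured <= threshold if lower_is_better else measured >= threshold
    status = "passed" if passed else "failed"
    print(f"{name}: measured {measured:.2f} vs requirement {threshold:.2f} -- {status}")
    return passed

# Example: an astrometric repeatability metric where lower is better.
ok = check_metric("AM1", measured=8.2, threshold=10.0)
```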
I haven’t been following validate_drp so I’ll first take a bit of time to understand the code base and then dive into your changes.
Thanks. I sent you this review in part to provide an opportunity for you to get a sense of what I've been working on.
To preempt some thoughts and suggestions you may have while going through the code, here are a couple of outstanding tickets on the to-do list:
DM-5096 Make validate_drp a Task
DM-5098 Provide tests to verify the calculation of metrics.
Thanks for starting the GitHub Pull Request. I meant to do that first, but did things in the other order as I was traveling today.
My comments are on the GitHub PR. Many of them are probably beyond scope; only a few are issues or questions needing clarification on this PR. Otherwise it looks great.
Thanks very much for the review, Jonathan Sick.
Merged to master.
Adds steps to bin.src/validateDrp and routines to the libraries to score performance against SRD or custom-specified requirements.
Outputs statistics, thresholds, and pass/fail scores to STDOUT by default.
The scoring requires only the summary-statistic JSON files and a configuration file that specifies the requirements to check. Pass/fail scores are calculated against both the custom requirements and the LSST SRD requirements.
The output of scoreMetrics could be easily connected to some other output structure for a test harness.
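To sketch how the pieces described above fit together: the scorer loads measured summary statistics from JSON, compares each against a required threshold from the configuration, and returns a structure a test harness could consume. This is an illustrative assumption, not the actual `scoreMetrics` implementation; the function and field names here are invented for the example.

```python
import json

def score_metrics(summary, requirements):
    """Score measured summary statistics against required thresholds.

    Hypothetical sketch: assumes lower measured values are better,
    which holds for repeatability-style metrics but not all metrics.
    """
    results = {}
    for name, threshold in requirements.items():
        measured = summary.get(name)
        if measured is None:
            continue  # metric was not measured in this run
        results[name] = {
            "measured": measured,
            "threshold": threshold,
            "passed": measured <= threshold,
        }
    return results

# Example: summary statistics as they might appear in the JSON output,
# checked against custom requirement thresholds.
summary = json.loads('{"AM1": 8.2, "PA1": 25.0}')
requirements = {"AM1": 10.0, "PA1": 20.0}
results = score_metrics(summary, requirements)
```

A structure like `results` could then be serialized back to JSON or handed to a harness that aggregates pass/fail counts across runs.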