# Add initial jointcal metrics to validation_metrics


## Details

• Type: Story
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s: None
• Labels: None
• Story Points: 6
• Sprint: Alert Production S17 - 3
• Team: None
## Description

In order to use jointcal's metric outputs with the new metrics system, we need to define what metrics we want to measure. Now that the validation_metrics package exists and has some examples in it, we can add the jointcal metrics to it.
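The shape of "defining what metrics we want to measure" can be sketched in plain Python. This is a hypothetical illustration, not the actual validate_metrics format or the jointcal entries added on this ticket: the metric names, units, and required fields below are all placeholders.

```python
# Hypothetical metric definitions for a metrics package; names, units, and
# descriptions are illustrative only, not the actual jointcal metrics.
REQUIRED_FIELDS = {"description", "unit"}

jointcal_metrics = {
    "jointcal.astrometry_final_rms": {
        "description": "Post-fit astrometric repeatability (illustrative).",
        "unit": "mas",
    },
    "jointcal.photometry_final_rms": {
        "description": "Post-fit photometric repeatability (illustrative).",
        "unit": "mmag",
    },
}


def check_metric_definitions(metrics):
    """Return a list of (name, missing_fields) for incomplete definitions."""
    problems = []
    for name, fields in metrics.items():
        missing = REQUIRED_FIELDS - set(fields)
        if missing:
            problems.append((name, sorted(missing)))
    return problems


print(check_metric_definitions(jointcal_metrics))  # → []
```

A check like this is roughly what a repository-validation script (cf. bin/validate_metrics_repo.py below) would run over every metric in the package.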

## Activity

John Parejko added a comment -

Adding jointcal metrics, branching from DM-9542.

John Parejko added a comment - edited

Can you please review this? It's not perfect, but it should be enough to get you going on the Specifications. I've flattened my commits down nicely, I hope. The PR for validate_base goes to DM-8477, as we'd decided earlier.

Notes:

• Some tests fail (e.g. test_job) because they still assume the MeasurementBase structure that was there before. I only cleaned up Measurement and Metric, not the others.
• bin/validate_metrics_repo.py is terribly named, given everything (as is Metric.validate()), so suggestions there are welcome. Although, if we're no longer going to call this "validate_base", maybe it's not a problem? It does succeed for $VALIDATE_METRICS_DIR/metrics as it is on this ticket, so that's nice!
• test_output_measurements.py fails, because auto-loading of metrics from validate_metrics isn't implemented. It would be pretty easy to do at this point, but this gets back to the dependency question we've been wondering about. I can finish that code in DM-9534, since the code in jointcal is already in place. What to actually do with that output is another question, possibly best answered by Angelo Fausti.

One final question: I'm currently not ensuring consistency between fully-qualified measurement names and metric names. We might not be able to do that until test_output_measurements.py is working with auto-loading of validate_metrics, but we'll have to decide exactly how that is supposed to behave.
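The name-consistency question above amounts to a set comparison: every fully-qualified measurement name should refer to a metric that actually exists. A minimal sketch, assuming names are plain "package.metric" strings (this is not the validate_base API, just an illustration):

```python
# Illustrative consistency check between fully-qualified measurement names
# and the known metric names; all names here are hypothetical.
def inconsistent_names(measurement_names, metric_names):
    """Return measurement names that have no matching metric, sorted."""
    known = set(metric_names)
    return sorted(name for name in measurement_names if name not in known)


metrics = {"jointcal.astrometry_final_rms"}
measurements = {"jointcal.astrometry_final_rms", "jointcal.typo_metric"}
print(inconsistent_names(measurements, metrics))  # → ['jointcal.typo_metric']
```

Until auto-loading of validate_metrics works, the `metric_names` side of this check cannot be populated automatically, which is why the consistency enforcement is deferred.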
Jonathan Sick added a comment -

Hey John Parejko I'll take a look at this today.

Jonathan Sick added a comment -

Comments in the PR. I'm happy to see any relevant suggestions addressed as you see fit and then merge this. There's a lot more to do, so it's probably better to have this merged than to preemptively optimize all the API patterns.

John Parejko added a comment -

Thank you for the review comments.

I think I've taken care of all of them except the questions about output_measurements and Measurement.__init__ taking a dict vs. list (I did make Metric.__init__ behave close to how you wanted). The former we'll sort out once we get a more complete system in place, and I suspect you'll have a nice solution to the latter once you get Name up and running.

Merged (both _base and _metrics) and done.
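The dict-vs-list question for Measurement.__init__ mentioned above is a common constructor design choice. A hypothetical sketch (not the actual Measurement class) of accepting either form and normalizing internally:

```python
# Hypothetical stand-in for the Measurement constructor question: accept
# either a dict or an iterable of (name, value) pairs, normalize to a dict.
class FakeMeasurement:
    def __init__(self, parameters=None):
        if parameters is None:
            self.parameters = {}
        elif isinstance(parameters, dict):
            self.parameters = dict(parameters)
        else:
            # assume an iterable of (name, value) pairs
            self.parameters = {name: value for name, value in parameters}


m1 = FakeMeasurement({"nIterations": 3})
m2 = FakeMeasurement([("nIterations", 3)])
print(m1.parameters == m2.parameters)  # → True
```

Accepting both forms keeps call sites flexible; alternatively, a dedicated Name type (as anticipated in the comment) could make the list form unnecessary.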


## People

• Assignee: John Parejko
• Reporter: John Parejko
• Reviewers: Jonathan Sick
• Watchers: Angelo Fausti, Frossie Economou, John Parejko, Jonathan Sick, Michael Wood-Vasey, Simon Krughoff, Tim Jenness