Fix Version/s: None
Some metric measurement tasks hard-code the metric name PA1, while others take a metric name and insert it into the Measurement (e.g., AMx). This means that the metric name passed to run is ignored in some cases and used in others.
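The inconsistency might be sketched as follows. These are hypothetical, simplified task classes (not faro's actual implementations): the first ignores the metric name passed to run, while the second inserts it into the returned measurement.

```python
class HardCodedMetricTask:
    """Hypothetical task that hard-codes the metric name."""

    def run(self, metric_name, value):
        # Bug-prone: the metric_name argument is silently ignored
        # and "PA1" is baked in.
        return {"metric": "PA1", "value": value}


class ParameterizedMetricTask:
    """Hypothetical task that respects the supplied metric name."""

    def run(self, metric_name, value):
        # The caller's metric name flows through to the measurement.
        return {"metric": metric_name, "value": value}
```

Calling both with `run("AM1", 5.0)` makes the discrepancy visible: the first returns a measurement labeled PA1, the second one labeled AM1.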
It seems like we are being consistent in that we do not include the package name in any of the measurements when we run the pipeline, but I'm not sure that's the best policy. It's true that you need to know the package name to get the metric, but I think the verify system depends on the namespacing to filter jobs that have metrics from multiple packages. We should think about the best policy.
We resolved the class names and directory structure, but to my knowledge, we have not yet made a decision on conventions for metric names. We had discussed reorganizing the metrics packages.
Discussed at the weekly meeting on 2021_04_13. Consistency was the key driver, more than any one particular convention. We will follow the conventions in the developers guide and prefer CamelCase. The developers guide also allows new code to be written in snake_case; we chose not to adopt that option and to stick with CamelCase throughout faro.
I think the core issue to be resolved here is the naming of metrics in the sense that each new metric is currently mapped to a new dataset type in the butler repo. We don't yet have a more systematic way of enforcing what dataset type names are allowed and grouping all the dataset types that correspond to metrics together in some way.
Currently, the naming convention for these metric dataset names includes the verification package (e.g., "metricvalue_validate_drp_PA1"), and we had discussed updating the verification packages to point more clearly to requirements documents for normative metrics.
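The current convention can be captured in a small helper. This is an illustrative sketch, not faro code; the `metricvalue_` prefix and underscore separator are inferred from the single example above ("metricvalue_validate_drp_PA1").

```python
def metric_dataset_type(package: str, metric: str) -> str:
    """Compose a metric-value dataset type name following the
    'metricvalue_<package>_<metric>' pattern seen in the ticket.

    The prefix and separator are assumptions based on the example
    'metricvalue_validate_drp_PA1'.
    """
    return f"metricvalue_{package}_{metric}"
```

For example, `metric_dataset_type("validate_drp", "PA1")` yields `"metricvalue_validate_drp_PA1"`. A helper like this would at least centralize the convention so it can be changed in one place when the verification packages are reorganized.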
See discussion on this thread, where Jim and others make some suggestions:
Aim for a shorter-term solution (few-month timescale), then follow up with a longer-term solution.
Another question raised is how to deal with the same metric being computed on different scales (e.g., per-tract metric values versus the summary statistic for the whole dataset). These cannot have the same dataset type name in the current approach, and these dataset types have different dimensions.
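One way to see why distinct names are needed: the two granularities carry different butler dimensions, so they cannot share a dataset type. The sketch below is purely illustrative; the `metricvalue_summary_` prefix and the specific dimension tuples are assumptions, not an agreed scheme.

```python
# Hypothetical registry entries showing why per-tract and whole-dataset
# values of the same metric need distinct dataset types: their
# dimensions differ.
METRIC_DATASET_TYPES = {
    "metricvalue_validate_drp_PA1": {
        "dimensions": ("instrument", "tract", "band"),
    },
    "metricvalue_summary_validate_drp_PA1": {
        "dimensions": ("instrument", "band"),
    },
}


def dataset_type_for(metric: str, per_tract: bool) -> str:
    """Pick a distinct dataset type name per granularity.

    The naming scheme (a 'summary' infix for whole-dataset values)
    is an assumption for illustration only.
    """
    prefix = "metricvalue" if per_tract else "metricvalue_summary"
    return f"{prefix}_{metric}"
```

Whatever scheme is adopted, the key constraint from the butler side is one dataset type name per unique (storage class, dimensions) combination.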
Jeffrey Carlin, Keith Bechtol I think we have resolved this with the package reorganization?