Details
Type: Story
Status: Done
Resolution: Done
Fix Version/s: None
Component/s: verify
Labels:
Story Points: 0
Epic Link:
Team: Alert Production
Urgent?: No
Description
Currently, running a MetricTask may return a Measurement, return None, or raise an exception (preferably MetricComputationError). More details can be found in the documentation.
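As a rough illustration of these three outcomes, here is a minimal sketch assuming the lsst.verify.tasks.MetricTask convention of returning the measurement inside an lsst.pipe.base.Struct. The task name, metric name, and input handling are hypothetical, and the usual config/connections boilerplate is omitted:

{code:python}
import astropy.units as u
import lsst.pipe.base as pipeBase
from lsst.verify import Measurement
from lsst.verify.tasks import MetricComputationError, MetricTask


class DemoSourceCountTask(MetricTask):  # hypothetical task, for illustration
    """Sketch of the three documented MetricTask outcomes."""

    _DefaultName = "demoSourceCount"
    # ConfigClass and connections omitted for brevity.

    def run(self, catalog):
        if catalog is None:
            # Outcome 2: the metric does not apply to this dataset;
            # report "no measurement" rather than an error.
            return pipeBase.Struct(measurement=None)
        try:
            nSources = len(catalog)
        except TypeError as e:
            # Outcome 3: the computation itself failed; raise
            # MetricComputationError rather than a bare exception.
            raise MetricComputationError("cannot count sources") from e
        # Outcome 1: a successful Measurement.
        return pipeBase.Struct(
            measurement=Measurement("demo.SourceCount", nSources * u.count))
{code}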
Jim Bosch, in the DM-21885 discussion, said:
I think we'll want to take a look at the expected-failure modes for MetricTasks you refer to in #3 as use cases for PipelineTasks in general, and define some rules that would allow them to work with generic activators. We've done a tiny bit of work in that area so far, but have long known that we need more sophistication in classifying and handling failures.
Depending on how much these rules change the existing behavior, we may need to change the implementation of every concrete MetricTask, so story points are hard to estimate.
We had a brief discussion about this on November 1. The outcome was:
Something that just occurred to me: there is at least one "not applicable" case where it is difficult to determine in advance, within the Gen 3 framework, that a metric is inapplicable: timing of tasks or subtasks that have been turned off in configs. This is currently determined by checking, at run time, which keys get written to (top-level) task metadata. Since I'm not clear on how metadata will work in Gen 3, and since Jonathan Sick has expressed interest in special-casing performance metrics that could apply to every task, it may be best to delay solving the broader problem until we know how this case works.
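For the timing case specifically, the "not applicable" decision hinges on whether the timed (sub)task ever wrote its timing keys to metadata. A schematic sketch of that check follows; the key names imitate the <method>StartCpuTime/<method>EndCpuTime pattern written by the stack's timing decorator, but the exact prefixes and the dict-like metadata access are assumptions:

{code:python}
import astropy.units as u
from lsst.verify import Measurement


def timing_from_metadata(metadata, method, metric_name):
    """Turn task-metadata timing keys into a Measurement, or None.

    Illustrative only: the key layout is assumed, not the exact
    structure the stack writes.
    """
    start_key = f"{method}StartCpuTime"
    end_key = f"{method}EndCpuTime"
    if start_key not in metadata or end_key not in metadata:
        # The timed code never ran (e.g., the subtask was disabled in
        # config), so the metric is "not applicable", not an error.
        return None
    elapsed = (metadata[end_key] - metadata[start_key]) * u.second
    return Measurement(metric_name, elapsed)
{code}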
See also
DM-20902, where raising MetricComputationError was demoted to a "not applicable" result because ap_verify users were alarmed by a non-preventable "error" appearing for the first image of a given field of view.
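Schematically, that demotion amounts to replacing a raise with a "no measurement" return for the same unavoidable condition. The class, method names, and trigger condition below are hypothetical stand-ins, not the actual DM-20902 code:

{code:python}
import lsst.pipe.base as pipeBase
from lsst.verify.tasks import MetricComputationError


class FirstVisitDemo:  # hypothetical stand-in for the affected MetricTask
    def run_before(self, inputData):
        # Old behavior: the unavoidable first-visit condition was
        # raised as an error, which alarmed ap_verify users.
        if inputData is None:
            raise MetricComputationError("no prior data for this field")
        ...

    def run_after(self, inputData):
        # New behavior: the same condition is reported as
        # "metric not applicable" (no measurement).
        if inputData is None:
            return pipeBase.Struct(measurement=None)
        ...
{code}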