We had a brief discussion about this on November 1. The outcome was:
- Failures in the metric algorithm itself, which raise MetricComputationError, will eventually be handled by the activators' default behavior (i.e., keep running everything that does not depend on the failed task). However, handling exceptions raised by PipelineTasks is not a priority for the Gen 3 migration, so for now MetricTask should provide a runQuantum method that does the same handling MetricsControllerTask does in Gen 2 (suppressing the Butler.put instead of the Job.write).
- Silently handling metrics that don't apply to a particular pipeline configuration is not easy, and we did not come up with a good long-term solution. It may be possible to work around this for now by using the current convention (returning None) and a custom runQuantum.
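As a rough illustration of the handling the two points above describe, here is a minimal sketch of a runQuantum that suppresses the output put on a MetricComputationError and on a None ("not applicable") measurement. DemoMetricTask, its run/runQuantum signatures, and the butler_put callable are hypothetical stand-ins for illustration, not the actual lsst.verify or PipelineTask API:

```python
import logging

_LOG = logging.getLogger("DemoMetricTask")


class MetricComputationError(RuntimeError):
    """Stand-in for the exception raised by failures in the metric algorithm."""


class DemoMetricTask:
    """Toy task illustrating the error-handling conventions discussed above."""

    def run(self, data):
        if data is None:
            return None  # metric not applicable to this configuration
        if "time" not in data:
            raise MetricComputationError("required timing information missing")
        return {"measurement": data["time"]}

    def runQuantum(self, butler_put, data):
        """Compute the metric, suppressing the put on failure or inapplicability."""
        try:
            measurement = self.run(data)
        except MetricComputationError:
            # Analogous to MetricsControllerTask's Gen 2 handling: log and move on,
            # but suppress the Butler.put rather than the Job.write.
            _LOG.exception("Measurement failed; skipping output.")
            return
        if measurement is None:
            # Current "not applicable" convention: write nothing, raise nothing.
            return
        butler_put(measurement)  # only reached on success
```

Under this sketch, only successful measurements are persisted; both failure modes leave the output dataset absent rather than propagating an exception to the activator.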
Something that just occurred to me is that there is at least one "not applicable" case where it's difficult to determine in advance, within the Gen 3 framework, that a metric is inapplicable: timing of tasks or subtasks that have been disabled in configs. This is currently detected by checking, at run time, which keys get written to the (top-level) task metadata. Since I'm not clear on how metadata will work in Gen 3, and since Jonathan Sick has expressed interest in special-casing performance metrics that could apply to every task, it might be best to delay solving the broader problem until we know how this case works.
See also DM-20902, where a raise of MetricComputationError was demoted to "not applicable" because ap_verify users were getting scared by a non-preventable "error" showing up for the first image of a given field of view.