Jim Bosch wrote a good description of the intended behavior of our measurement failure flags on Slack, which I've copied below. This needs to be included in our pipelines documentation, so that we can refer users to it and use it as a check of our own expectations of algorithmic behavior.
I propose a "Generic topic type" page in the root of the meas_base RST document hierarchy, linked from the meas_base package landing page. Many of the algorithms that define these flags live in meas_base or are built on classes defined there.
Quote of Jim Bosch's Slack post:
Here's my taxonomy of flag states for individual measurement algorithms.
- `<algorithm>_flag` is not set but the `<algorithm>` value is NaN: this is a bug in DM code that needs to be fixed.
- `<algorithm>_flag` is set and no other `<algorithm>_flag_*` field is set: this is a bug in DM code that needs to be fixed.
- `<algorithm>_flag` is not set, the `<algorithm>` value is not NaN, but some other `<algorithm>_flag_*` is set: there was a minor problem with this algorithm that should not affect most usage ("users can opt out of using it"). This could include some secondary output of the algorithm being NaN or less reliable, as long as a primary output is considered fine.
- `<algorithm>_flag` is set, the `<algorithm>` value is not NaN, and some other `<algorithm>_flag_*` is set: there was a serious problem with this algorithm that probably means at least some measurements in this state are unreliable.
- `<algorithm>_flag` is set, the `<algorithm>` value is NaN, and some other `<algorithm>_flag_*` is set: a serious problem occurred and we could not obtain even a degraded fallback value.
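The taxonomy above can be sketched as a small classifier over one measurement's fields. This is an illustrative helper, not part of any LSST API: the function name and argument layout are made up, and real catalogs expose these as schema fields like `<algorithm>_flag` and `<algorithm>_flag_*` rather than bare Python values.

```python
import math

def classify_measurement(value, general_flag, specific_flags):
    """Classify one measurement per the flag-state taxonomy (hypothetical helper).

    value          -- the algorithm's primary output (may be NaN)
    general_flag   -- the summary <algorithm>_flag field
    specific_flags -- the <algorithm>_flag_* fields, as a list of bools
    """
    value_is_nan = math.isnan(value)
    any_specific = any(specific_flags)

    if not general_flag and value_is_nan:
        return "bug: NaN value without the general flag set"
    if general_flag and not any_specific:
        return "bug: general flag set without any specific flag"
    if not general_flag and not value_is_nan and any_specific:
        return "minor problem: primary output still usable"
    if general_flag and not value_is_nan and any_specific:
        return "serious problem: value present but likely unreliable"
    if general_flag and value_is_nan and any_specific:
        return "serious problem: no fallback value available"
    return "ok"

print(classify_measurement(1.25, False, [False, False]))       # ok
print(classify_measurement(float("nan"), True, [True, False])) # no fallback
```

The first two branches encode the "bug in DM code" invariants, so the same function could back a unit test that scans a catalog and asserts no row ever lands in either bug state.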