
"suspect" flag for measurement outputs

Details

• Type: RFC
• Status: Implemented
• Resolution: Done
• Component/s: DM
• Labels: None

Description

This is a rerun of a previous lsst-data RFC email thread (subject: 'RFC: "suspect" flags for slots') that never converged. I'm making essentially the same proposal, but I'll try to provide more detail and motivation.

Current Status

All measurement algorithms record their state in a set of flag fields, which are typically set to indicate a particular failure mode (though some may not suggest a problem with the results). In addition, each algorithm also sets a "general failure" flag, often to the OR of all flags that indicate specific failures. This general flag is also set when an unexpected exception is thrown (this also results in a warning message in the logs).

When measurement outputs are accessed via the slot interface, however, they must conform to a consistent interface, and hence only the general failure flag can be accessed via a getter in the SourceRecord class. Other flags can still be accessed using the alias that defines the slot (e.g. if the "Shape" slot is set to "base_SdssShape", then "slot_Shape_flag_unweighted" resolves to "base_SdssShape_flag_unweighted"). But because these flags are different for different algorithms, this is only useful for human consumers of the slots, not code that needs to determine generically whether a slot measurement is usable. This is particularly important because the slots are the primary way earlier algorithms are used to feed later algorithms: flux algorithms that need a centroid or shape, for instance, use the Centroid and Shape slots.
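
For concreteness, here is a minimal stand-in sketch of how that alias resolution behaves. This is illustrative Python, not the actual afw.table implementation; in the real stack the mapping lives in the schema's alias map, and the Record class below is invented purely for the example.

    # Illustrative stand-in for alias-based field access; not stack code.
    class Record:
        def __init__(self, fields, aliases):
            self._fields = fields    # field name -> value
            self._aliases = aliases  # alias prefix -> target prefix

        def get(self, name):
            # Rewrite a matching alias prefix before the field lookup.
            for alias, target in self._aliases.items():
                if name.startswith(alias):
                    name = target + name[len(alias):]
                    break
            return self._fields[name]

        def set(self, name, value):
            self._fields[name] = value

    rec = Record(
        fields={"base_SdssShape_flag": False,
                "base_SdssShape_flag_unweighted": True},
        aliases={"slot_Shape": "base_SdssShape"},
    )

    # Alias access works, but "flag_unweighted" is specific to SdssShape:
    # generic code cannot assume it exists for whatever fills the slot.
    assert rec.get("slot_Shape_flag_unweighted") is True
    assert rec.get("slot_Shape_flag") is False  # the only generic flag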

These flags do not provide a way for an algorithm to indicate a partial success that resulted in a crude estimate that may be usable for downstream algorithms but should not be considered fully trustworthy. Frequently, but not uniformly, algorithms indicate this state by setting the general failure flag while still providing an output value. This forces downstream code to check not just the state of the flag, but also whether the measurement values are NaN.
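
To make the current workaround concrete, this sketch (reusing the illustrative Record above; the logic is illustrative, not stack code) shows the check downstream code is forced to perform:

    import math

    def centroid_is_usable(record):
        """Decide whether the Centroid slot can feed a downstream algorithm.

        Status quo: the general failure flag alone cannot distinguish a
        hard failure from a crude-but-recorded estimate, so a NaN check
        is also required.
        """
        if record.get("slot_Centroid_flag"):
            x = record.get("slot_Centroid_x")
            y = record.get("slot_Centroid_y")
            # Flag set but finite values present: a crude estimate exists.
            return not (math.isnan(x) or math.isnan(y))
        return True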

Proposal

I propose we add a general "suspect" flag to all algorithms, and to the slots, which would be set instead of the failure flag when a reasonable but crude result can be obtained. A full failure would be indicated by setting the current failure flag, and would generally not be accompanied by non-NaN outputs (or, if non-NaN outputs are recorded, they are considered so untrustworthy that they are useful only for debugging purposes).
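
Concretely, each algorithm's outcome would map onto the two flags roughly as follows. This is a sketch of the proposed semantics; the "_flag_suspect" field name is hypothetical, chosen here only for illustration:

    def record_outcome(record, prefix, values, outcome):
        """Sketch of the proposed flag semantics (field names hypothetical).

        outcome is one of:
          "success" - outputs valid, neither flag set
          "suspect" - outputs crude but usable downstream
          "failure" - outputs NaN (or debug-only values)
        """
        for name, value in values.items():
            record.set(prefix + "_" + name, value)
        record.set(prefix + "_flag", outcome == "failure")
        record.set(prefix + "_flag_suspect", outcome == "suspect")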

Choosing whether to set "suspect" or "failure" is clearly a subjective, algorithm-dependent choice, and a science-quality, human-directed data analysis should always involve looking at the specific algorithm's detailed flags. The "suspect" and "failure" flags are intended more for quick, algorithm-independent QA analysis and, most importantly, for other dependent measurement algorithms. As such, the primary consideration in choosing whether to set "suspect" or "failure" should be whether the result is likely to be good enough to feed downstream algorithms.

In most cases, a dependent algorithm that receives a "suspect" input should mark its own output as "suspect" as well, but this may not always be the case. For instance, a model-fitting algorithm may use the centroid as an input, but allow the centroid to vary as a free parameter as well, which could allow it to recover completely from a suspect centroid input. If a dependent algorithm receives a "failure" as input, it will almost always just bail out early.
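
A sketch of the propagation rule this implies for a dependent algorithm (structure hypothetical; measure_own stands in for the algorithm's own work):

    def run_dependent(record, in_prefix, out_prefix, measure_own,
                      can_recover=False):
        """Propagate "failure" and "suspect" from an input to an output.

        Sketch only: a failed input means bail out early; a suspect input
        usually propagates, unless the algorithm can recover (e.g. a model
        fitter that refits the centroid as a free parameter).
        """
        if record.get(in_prefix + "_flag"):
            record.set(out_prefix + "_flag", True)  # dependency failed
            return
        measure_own(record, out_prefix)             # the algorithm's own work
        if record.get(in_prefix + "_flag_suspect") and not can_recover:
            record.set(out_prefix + "_flag_suspect", True)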

The last time this proposal was circulated, the discussion mostly centered on whether a single additional flag would be enough, and whether some other generic quality metric would be useful. My opinion is that a more elaborate quality metric would not be useful; we really want something that tells a downstream algorithm whether it should give up in advance (because its dependency failed) or proceed with caution (because its dependency is suspect), and I think it's best to leave that binary decision to the dependency, not the dependent.

Examples

• Centroids should almost never fail completely, as they begin from the Peak position, and simply reporting that position as the output is good enough to be called "suspect" instead (see the sketch after this list).
• Least-squares fitting algorithms that reject a large fraction of pixels due to mask values or image boundaries will typically set "suspect" (with the threshold likely configurable), and set "failure" only if a higher threshold is exceeded, or if the algorithm fails to converge.
• Weighted adaptive-moments codes that fall back to unweighted moments (such as SdssShape) will set "suspect" for unweighted moments.
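
As an illustration of the first example above, a centroid measurement might be structured like this (entirely hypothetical code; fit_centroid and MeasurementError are placeholders, not real stack names):

    class MeasurementError(Exception):
        """Placeholder for the algorithm's real failure exception."""

    def measure_centroid(record, peak, fit_centroid):
        """Sketch: fall back to the Peak position and mark "suspect"."""
        try:
            x, y = fit_centroid(record)  # the real fit; placeholder here
            record.set("base_Centroid_x", x)
            record.set("base_Centroid_y", y)
        except MeasurementError:
            # The Peak position is a usable, if crude, estimate: report
            # it and set "suspect" rather than failing outright.
            record.set("base_Centroid_x", peak[0])
            record.set("base_Centroid_y", peak[1])
            record.set("base_Centroid_flag_suspect", True)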

People

• Assignee: Jim Bosch (jbosch)
• Reporter: Jim Bosch (jbosch)
• Watchers: Jim Bosch, John Swinbank, Kian-Tat Lim, Paul Price, Perry Gee, Robert Lupton, Russell Owen