This Epic captures issues that improve meas_base's support for investigating what went wrong with algorithms.
- Issues to improve debug-mode functionality in various ways, including making it easier to reproduce algorithm failures without needing to re-run the entire measurement framework.
- Issues to improve the relationship between Exceptions and SourceRecord flag bits: we want every caught exception to set both one or more standard flags (for slot use) and a specific, more descriptive flag that documents exactly what happened. We should also have similar flag support for non-fatal errors.
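The intended exception-to-flag mapping could look something like the sketch below. All names here (`Record`, `MeasurementError`, `run_plugin`, the flag-field naming scheme) are hypothetical stand-ins to illustrate the idea, not the real meas_base API:

```python
class MeasurementError(RuntimeError):
    """Hypothetical base for errors a plugin raises; carries the name of
    the specific, descriptive flag that should be set."""
    def __init__(self, message, flag):
        super().__init__(message)
        self.flag = flag

class Record:
    """Toy stand-in for SourceRecord: just a dict of boolean flag fields."""
    def __init__(self):
        self.flags = {}
    def set(self, name, value=True):
        self.flags[name] = value

def run_plugin(plugin, record):
    """Run one plugin; on a recognized failure, set both the standard
    slot-level flag and the specific flag documenting what happened."""
    name = plugin.__name__
    try:
        plugin(record)
    except MeasurementError as err:
        record.set(f"{name}_flag")        # standard flag, for slot use
        record.set(f"{name}_{err.flag}")  # specific, descriptive flag
    except Exception:
        record.set(f"{name}_flag")        # unexpected failure: general flag only

def centroid(record):
    raise MeasurementError("source off the image", flag="edge")

rec = Record()
run_plugin(centroid, rec)
```

After the run, `rec.flags` holds both `centroid_flag` (the general failure bit) and `centroid_edge` (the specific one), which is the dual-flag behavior the issue asks for.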
Some further ideas that haven't yet been spun off into issues:
- We should have measurement framework drivers that use the same plugin interface but re-run a single plugin on a single object after the full measurement has been done. This would have to use the NoiseReplacer's repeatability support (and would probably drive the design of that repeatability interface).
- I think we need a way to allow individual measurement plugins to save additional diagnostic outputs when so configured. If SourceRecord could store blobs, I think we'd just use that, but it doesn't, so it may be best to just pass a DataRef in for now and define some new mapper entries.
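The single-plugin re-run idea above could be sketched as follows. The `NoiseReplacer` here is a toy stand-in whose only job is to derive noise deterministically from a saved seed, so a later re-run sees exactly the same replaced pixels; none of these names or signatures reflect the real meas_base interface:

```python
import random

class NoiseReplacer:
    """Toy replacer: derives per-object noise from a saved seed, so a
    re-run reproduces the original noise replacement exactly."""
    def __init__(self, seed):
        self.seed = seed
    def noise_for(self, obj_id):
        # String seeds are deterministic across runs, unlike hash()-based ones.
        rng = random.Random(f"{self.seed}:{obj_id}")
        return [rng.gauss(0.0, 1.0) for _ in range(3)]

def run_all(plugins, objects, replacer):
    """Full measurement: every plugin on every object."""
    results = {}
    for obj in objects:
        noise = replacer.noise_for(obj)
        for plugin in plugins:
            results[(obj, plugin.__name__)] = plugin(obj, noise)
    return results

def rerun_one(plugin, obj, replacer):
    """Re-run a single plugin on a single object after the fact, using the
    same plugin interface and the same replayed noise."""
    return plugin(obj, replacer.noise_for(obj))

def flux(obj, noise):
    # Placeholder "measurement" that depends on the replaced noise.
    return sum(noise)

replacer = NoiseReplacer(seed=42)
full = run_all([flux], ["obj1", "obj2"], replacer)
again = rerun_one(flux, "obj1", replacer)
assert again == full[("obj1", "flux")]
```

The point of the sketch is the repeatability contract: as long as the replacer can regenerate the noise it used, a debugging driver can reproduce any single measurement without re-running the whole framework.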