Data Management / DM-22599

Develop PipelineTask unit test framework

    Details

    • Story Points:
      8
    • Sprint:
      AP S20-2 (January), AP S20-3 (February)
    • Team:
      Alert Production

      Description

      Most of our new pipeline tasks' functionality can be tested by writing unit tests against run or against more specific methods (in theory, such tests should be identical to those from the Gen 2 era). However, they do not verify any of the following (the sketch after this list makes the gap concrete):

      • whether a task's Connections are correctly written and whether they match the inputs and outputs of the run method
      • any logic in a custom runQuantum method
      • configuration logic, such as optional or alternative inputs or outputs
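
      To make the gap concrete, here is a minimal sketch of a run-level unit test of the kind described above. ExampleTask, its connections, and the dataset names (calexp, exampleCatalog) are invented for illustration. Note that nothing in the test touches the Connections class, runQuantum, or any config-dependent I/O logic, so a mismatch between the declared connections and run's signature would go undetected:

      {code:python}
      import unittest
      import unittest.mock

      import lsst.pipe.base as pipeBase
      import lsst.pipe.base.connectionTypes as cT


      class ExampleConnections(pipeBase.PipelineTaskConnections,
                               dimensions=("instrument", "visit", "detector")):
          exposure = cT.Input(name="calexp",
                              doc="Input image.",
                              storageClass="ExposureF",
                              dimensions=("instrument", "visit", "detector"))
          catalog = cT.Output(name="exampleCatalog",
                              doc="Measured sources.",
                              storageClass="SourceCatalog",
                              dimensions=("instrument", "visit", "detector"))


      class ExampleConfig(pipeBase.PipelineTaskConfig,
                          pipelineConnections=ExampleConnections):
          pass


      class ExampleTask(pipeBase.PipelineTask):
          ConfigClass = ExampleConfig
          _DefaultName = "example"

          def run(self, exposure):
              catalog = ...  # run the real algorithm on the exposure
              return pipeBase.Struct(catalog=catalog)


      class ExampleTaskTestCase(unittest.TestCase):
          def testRun(self):
              # Gen 2-style test: call run directly with an in-memory object.
              # This exercises the algorithm, but never the Connections class,
              # runQuantum, or any configuration of inputs and outputs.
              task = ExampleTask()
              exposure = unittest.mock.Mock()  # stand-in for a real ExposureF
              result = task.run(exposure)
              self.assertTrue(hasattr(result, "catalog"))
      {code}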

      Because the Gen 3 API is unfamiliar to us, these aspects of a PipelineTask are the ones most likely to have bugs.

      Currently, the only way to test these features is in large-scale runs on Gen 3 repositories (e.g., HSC). Such tests, while valuable, can only exercise a small subset of conditions (e.g., configs), can be expensive to debug (e.g., due to cascading failures), and do not protect against regressions (no CI). A pytest-compatible framework that lets us test those parts of a PipelineTask that lie outside run will let us catch problems much faster.

      As part of DM-21875, I created a prototype test framework for direct Butler I/O and used it to verify that datasets could be stored to and retrieved from a dummy, obs-agnostic repository. I believe the same approach can be used to test PipelineTask functionality without the need to simulate a "realistic" Butler or depend on obs packages.
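
      For reference, the prototype boils down to something like the following round-trip test. This is a rough sketch only: the exact daf_butler registry calls, the required fields of the instrument record, and the Butler constructor signature may differ across middleware versions.

      {code:python}
      import shutil
      import tempfile
      import unittest

      import lsst.daf.butler as dafButler


      class DummyRepoTestCase(unittest.TestCase):
          """Round-trip a dataset through a throwaway, obs-agnostic repository."""

          def setUp(self):
              # Create an empty Gen 3 repository in a temporary directory.
              self.root = tempfile.mkdtemp()
              dafButler.Butler.makeRepo(self.root)
              self.butler = dafButler.Butler(self.root, run="test")

              # Register only the dimension records the test data ID needs;
              # no obs package or "realistic" instrument is involved.
              self.butler.registry.insertDimensionData(
                  "instrument", {"name": "DummyCam"})

              # A dataset type with a deliberately simple storage class.
              datasetType = dafButler.DatasetType(
                  "dummyDataset", dimensions=["instrument"],
                  storageClass="StructuredDataDict",
                  universe=self.butler.registry.dimensions)
              self.butler.registry.registerDatasetType(datasetType)

          def tearDown(self):
              shutil.rmtree(self.root, ignore_errors=True)

          def testRoundTrip(self):
              dataId = {"instrument": "DummyCam"}
              expected = {"value": 42}
              self.butler.put(expected, "dummyDataset", dataId)
              self.assertEqual(self.butler.get("dummyDataset", dataId), expected)
      {code}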

      Desired features:

      • a natural way for the test author to provide mock data IDs for the repository. The appropriate IDs will depend on the task being tested. It should be possible to simplify this from the prototype code, since most of the complexity of the Gen 3 Dimensions system is not needed for most tests; an exception may be ImageDifferenceTask's mix of detector-level and patch-level inputs.
      • a simple activator that calls runQuantum without modifications other than mocking run
      • a way to test that the desired inputs get passed to run, including self-consistent use of config flags and templates. This will probably involve mocking run, and may involve mock datasets, which are more technically challenging (see the strawman sketch after this list).
      • a way to verify the output of a (real) run call against a configured connections object
      • analogous support for __init__ inputs and outputs, which I'm less familiar with
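
        As a strawman for how these features might fit together, a test using the desired framework could look like the following (written as a method of a unittest.TestCase). makeTestButler, makeQuantum, and runTestQuantum are placeholder names for helpers that do not exist yet; ExampleTask is the invented task from the earlier sketch:

        {code:python}
        def testRunQuantum(self):
            # All helpers here are hypothetical; they name the desired
            # features above, not existing API.
            dataId = {"instrument": "DummyCam", "visit": 42, "detector": 0}
            butler = makeTestButler(self.root, dataIds=[dataId])
            task = ExampleTask()
            quantum = makeQuantum(task, butler, dataId)

            # The "simple activator": call runQuantum with run replaced by a
            # mock, then inspect what runQuantum actually passed to it.
            run = runTestQuantum(task, butler, quantum, mockRun=True)
            run.assert_called_once()
            self.assertIn("exposure", run.call_args.kwargs)
        {code}

        Whether run is mocked (to check the inputs it receives) or real (to check its outputs against the configured connections) could be a switch on the activator, so the third and fourth bullets would share the same machinery.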


              People

              • Assignee:
                Krzysztof Findeisen
              • Reporter:
                Krzysztof Findeisen
              • Reviewers:
                Meredith Rawls
              • Watchers:
                John Parejko, John Swinbank, Krzysztof Findeisen, Meredith Rawls
              • Votes:
                0
