Yes, this is leading toward a key piece of the long-term validation infrastructure. This Story is part of the Epic DM-3864: "Integration Dataset for metrics and regression tests - Part I". The present ticket, DM-4706, specifically covers adding Dominique Boutigny's astrometric performance check.
You're very right to call out the connection between this epic and `ci_hsc`. I've just added DM-4817 as a Story under Epic DM-3864 to make this explicit. My intention is to develop a few more tests here and then look through `ci_hsc` in the next month.
I'm honestly not familiar with how much of `ci_hsc` is runnable in the current DM stack (e.g., building `obs_subaru` with `lsstsw` `a4d9de0` and `lsst_build` `015c01a` currently fails on the `psycopg2` dependency).
The idea of "validation" vs. "testing" is twofold. One: it's a test of the integration rather than of specific modules. Two, and this is the distinction from continuous integration: the validation effort will be runnable at a variety of scales (see the sketch after this list):
1. Quick, 1-minute turnaround for continuous integration.
2. Perhaps nightly, 30-minute-scale processing to alert us if there's been a more subtle regression in performance.
3. Weekly, hours-long processing to calculate the more detailed performance metrics that are part of the goals of the DM stack.
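To make the tiers concrete, here is a minimal sketch of how a single validation entry point could dispatch on scale. The scale names, visit counts, and astrometric thresholds are all hypothetical illustrations, not actual `validate_drp` configuration:

```python
# Hypothetical tiered-validation sketch; numbers and names are illustrative.
SCALES = {
    "ci":      dict(n_visits=2,   max_median_astrometry_mas=100.0),
    "nightly": dict(n_visits=20,  max_median_astrometry_mas=50.0),
    "weekly":  dict(n_visits=200, max_median_astrometry_mas=20.0),
}

def check_astrometry_scatter(median_scatter_mas, scale="ci"):
    """Pass/fail the measured median astrometric scatter for the given scale."""
    limit = SCALES[scale]["max_median_astrometry_mas"]
    ok = median_scatter_mas <= limit
    print("[%s] median scatter %.1f mas (limit %.1f mas): %s"
          % (scale, median_scatter_mas, limit, "PASS" if ok else "FAIL"))
    return ok

# e.g., the 1-minute CI tier would run a tiny dataset and then call:
# check_astrometry_scatter(85.0, scale="ci")
```

The point of the single entry point is that all three tiers exercise the same metric code; only the dataset size and the thresholds change.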
The `bin/check_astrometry.py` in `lsst_dm_stack_demo` is exactly the same code that's being further developed in `validate_drp`; the duplication is intentional. `lsst_dm_stack_demo` has been a very loose "validation" that the stack actually works after an install. There's no strong intent at present to develop `lsst_dm_stack_demo` much further as part of Epic DM-3864. We can decide later whether `lsst_dm_stack_demo` should explicitly import from `validate_drp` (a sketch of that option follows below), but my preference would be not to create any additional dependencies for `lsst_dm_stack_demo` until we have a clearer idea of its purpose.
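For reference, here is roughly what the explicit-import option could look like if we went that route. The `lsst.validate.drp` module path and `checkAstrometry` function name are assumptions for illustration, not the confirmed API:

```python
# Illustration only: prefer the shared implementation in validate_drp,
# falling back to the local duplicate so lsst_dm_stack_demo keeps working
# without the extra dependency. Names below are hypothetical.
try:
    from lsst.validate.drp import checkAstrometry
except ImportError:
    from check_astrometry import checkAstrometry  # local copy in this repo
```

The try/except keeps the dependency optional, which is why I'd rather defer even this until the purpose of `lsst_dm_stack_demo` is settled.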