# Demonstrate scientific parity with validate_drp using new framework on the HSC RC2 dataset


#### Details

• Type: Story
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s: None
• Labels: None
• Team: DM Science
• Urgent?: No

#### Description

Assess the scientific readiness of the new framework by computing all metrics reported in the Characterization Metric Report on the HSC RC2 dataset and comparing the results with those obtained using validate_drp for the Characterization Metric Report for Science Pipelines Release 20 (as reported in dmtn-251.lsst.io).

Ensure that the computational performance is reasonable.

#### Activity

Keith Bechtol added a comment -

Discussion at DM SST on 28 September 2020: https://confluence.lsstcorp.org/display/DM/2020-09-28+DM-SST+Agenda+and+Meeting+notes
Leanne Guy added a comment -

Still want to understand the differences seen between validate_drp and the new framework for the AM1 metric

Keith Bechtol added a comment - edited

See presentation to DM SST on 11 January 2021

https://confluence.lsstcorp.org/display/DM/2021-01-11+DM-SST+Agenda+and+Meeting+notes

in particular these slides

Following that report, we did a test fixing the random seed and aggregating PA2 in the same way and demonstrated machine-precision agreement. The slides linked above have been updated to include that additional test.
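The machine-precision agreement test described above can be illustrated with a minimal sketch. This is not faro's or validate_drp's actual code; the function name `pa2_like_metric`, the sample sizes, and the simulated magnitude differences are all hypothetical, standing in for any metric (such as PA2) whose value depends on a random subsample of source measurements:

```python
import numpy as np

def pa2_like_metric(mag_diffs_mmag, seed, n_sample=10_000):
    """Subsample magnitude differences and report the 95th percentile (mmag).

    Stand-in for a PA2-style repeatability metric whose value depends on
    a random subsample of the input measurements.
    """
    rng = np.random.default_rng(seed)
    sample = rng.choice(mag_diffs_mmag, size=n_sample, replace=True)
    return float(np.percentile(np.abs(sample), 95.0))

# Simulated per-source repeat-measurement magnitude differences (mmag).
diffs = np.random.default_rng(12345).normal(0.0, 8.0, size=50_000)

# Same seed and same aggregation -> results identical to machine precision,
# which is the behavior the parity test above demonstrated.
a = pa2_like_metric(diffs, seed=42)
b = pa2_like_metric(diffs, seed=42)
assert a == b

# A different seed draws a different subsample, so two implementations
# that do not share a seed will generally disagree at some level.
c = pa2_like_metric(diffs, seed=43)
```

The point of the sketch is that once two implementations share both the random seed and the aggregation rule, any remaining disagreement would indicate a genuine algorithmic difference rather than sampling noise.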

Leanne Guy added a comment -

Excellent work. I am confident now that we fully understand how validate_drp and faro are computing the implemented metrics.

Jeffrey Carlin added a comment -

I tagged the version of validate_drp that we used to demonstrate parity (the tag name is "faro_checkpoint"; on github here).

All of the faro changes (fixing a bug in the “extendedness” filtering, adding filtering on the detect_isPrimary flag, sorting the input visits before matching, allowing for external calibrations to be applied) were merged separately. The git branch of faro to fix the random seed is u/jcarlin/fixed_random_seed. We will soon replace this with an option to pass a fixed seed via configs.
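The filtering changes mentioned above (extendedness cut, detect_isPrimary flag) can be sketched as row selection on a source catalog. This is an illustrative example only, not faro's implementation: the column names follow LSST catalog conventions, but the data and the 0.5 extendedness threshold are assumptions here:

```python
import pandas as pd

# Hypothetical source catalog with the two columns relevant to the
# filtering fixes described above.
catalog = pd.DataFrame({
    "sourceId": [1, 2, 3, 4],
    "detect_isPrimary": [True, True, False, True],   # deduplicated, non-sky detection
    "extendedness": [0.0, 1.0, 0.0, 0.0],            # 0 = point-like, 1 = extended
    "psfMag": [20.1, 18.3, 21.7, 19.5],
})

# Keep only primary detections that are point-like (star-like sources),
# the kind of selection a stellar repeatability metric would require.
stars = catalog[catalog["detect_isPrimary"] & (catalog["extendedness"] < 0.5)]
```

Omitting either cut changes which sources enter a metric calculation, which is why a bug in the extendedness filtering or a missing detect_isPrimary cut can produce metric discrepancies between two otherwise-equivalent implementations.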


#### People

Assignee:
Keith Bechtol
Reporter:
Leanne Guy
Reviewers:
Leanne Guy
Watchers:
Jeffrey Carlin, Keith Bechtol, Leanne Guy