Data Management / DM-24482

Write characterization report for 20.0.0 Science Pipelines release

    Details

    • Type: Story
    • Status: Done
    • Resolution: Done
    • Fix Version/s: None
    • Component/s: None
    • Labels: None
    • Team: DM Science
    • Urgent?: No

          Activity

          Leanne Guy added a comment -

          Nice work!

          A few comments

          • you say "14 photometric, astrometric and shape measurements were derived but only a subset were reported" - why only a subset? We should show all calculations and increasing progress towards completion.
          • There is a significant difference between the metric values obtained from the validation dataset and the RC2 dataset - sometimes better and sometimes worse. Do you understand the reasons? I assume running jointcal is what has resulted in better astrometric results? It would be good to include an explanation in the report.
          • Computational performance metrics were not re-measured for this release. We expect no significant changes relative to the report on version 12. ==> I don't think you can say the last sentence if you have not measured the computational performance since R12.

          Minor comments

          • Abstract contains a '?' - some LaTeX compile error?
          Jeffrey Carlin added a comment -
          • you say "14 photometric, astrometric and shape measurements were derived but only a subset were reported" - why only a subset? We should show all calculations and increasing progress towards completion.

            The three that are not reported are left out because they concern astrometric residuals on 200-arcminute scales, which are too large for the single-tract data on which we measured the metrics. I added a note explaining this in the document: "We exclude the three astrometry metrics (AM3, AD3, and AF3) that concern residuals on 200-arcminute scales, since neither the handful of CCDs in the validation_data_hsc dataset nor the individual tracts of RC2 span large enough spatial scales to enable these measurements."
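
            As a quick back-of-the-envelope check (assuming the HSC tracts used in RC2 are roughly $1.7\,\mathrm{deg}$ on a side, a figure not stated in this ticket), a single tract spans only about

                $1.7\,\mathrm{deg} \times 60\,\mathrm{arcmin/deg} \approx 100\,\mathrm{arcmin} < 200\,\mathrm{arcmin}$,

            so it cannot contain source pairs at the 200-arcminute separations these metrics require.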

          • There is a significant difference between the metric values obtained from the validation dataset and the RC2 dataset - sometimes better and sometimes worse. Do you understand the reasons? I assume running jointcal is what has resulted in better astrometric results? It would be good to include an explanation in the report.

            After consideration, I am not sure how to respond to this. The datasets are different, so it is not surprising that the metrics differ between them. There don't seem to be systematic differences, with the possible exception of the astrometry metrics, which are all better on RC2 than on validation_data_hsc. I added the following text to explain this: "Almost all of the astrometric metrics are improved with RC2 relative to their values with validation_data_hsc. This behavior is expected, because validate_drp does not include running jointcal, while the RC2 processing does include jointcal. This results in improved astrometry because (1) jointcal simultaneously fits positions of objects in all visits, and (2) RC2 contains more visits, which allows for improvement in the statistical noise of astrometry measurements."
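
            As a rough sketch of the second point (an idealized scaling argument, not text from the report): if per-visit centroid errors are approximately independent with per-visit scatter $\sigma_{\mathrm{visit}}$, the uncertainty of a mean position averaged over $N_{\mathrm{visits}}$ visits goes as

                $\sigma_{\bar{x}} \approx \sigma_{\mathrm{visit}} / \sqrt{N_{\mathrm{visits}}}$,

            so the larger number of visits in RC2 directly reduces the statistical noise in the astrometric residuals.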

          • Computational performance metrics were not re-measured for this release. We expect no significant changes relative to the report on version 12. ==> I don't think you can say the last sentence if you have not measured the computational performance since R12.

            Agreed - I removed the second sentence.

          • Abstract contains a '?' - some LaTeX compile error?

            A '?' in the compiled output is typically how LaTeX renders an unresolved citation. It looks like DMTR-191 is not in the lsst-texmf bibliography. I will add it and issue a PR.
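
            For reference, a sketch of the kind of entry to add (the @DocuShare entry type and handle field follow the style of existing lsst-texmf bibliography entries; the bracketed values are placeholders to be filled from the actual DMTR-191 metadata, not the real entry):

                @DocuShare{DMTR-191,
                    author = {<report author>},
                    title  = {<report title>},
                    year   = {<year>},
                    handle = {DMTR-191},
                    url    = {<document URL>}
                }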

          Leanne Guy added a comment -

          Looks good!

          Jeffrey Carlin added a comment -

          After the DMTR-191 reference is fixed, the final step will be to fix the "acronyms" appendix. (For example, "KPM" is used in the document but doesn't appear in the appendix, while the "SSP: Solar System Processing" entry should not be there.)

          Jeffrey Carlin added a comment -

          The document is ready to be issued to DocuShare.


            People

            • Assignee: Jeffrey Carlin
            • Reporter: Gabriele Comoretto
            • Reviewers: Leanne Guy
            • Watchers (4): Gabriele Comoretto, Jeffrey Carlin, Leanne Guy, Tim Jenness
            • Votes: 0
