Make characterization report for v15


Details

• Type: Story
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s: None
• Labels: None
• Story Points: 3
• Team: SQuaRE

Description

Put together the characterization report for the v15 release. This will be done on the master branch of https://github.com/lsst-dm/dmtr-62.

Attachments

1. DMTR-62.pdf (890 kB)

Activity

Michael Wood-Vasey [X] (Inactive) added a comment -

A list of comments to address. I will want to read over this once more before release.

Michael Wood-Vasey [X] (Inactive) added a comment -

1. Add interpretive text along the lines of:
"Astrometric performance is excellent. Photometric performance is not good – we believe the use of meas_mosaic and jointcal in the next cycle will significantly improve photometric performance. Current HSC reprocessing efforts at NCSA indicate better numbers with meas_mosaic."

2.
"""
Note that care must be taken to compare apples to apples. These metrics have all been computed relative to the "design" specifications. If compared to metrics computed relative to a different threshold, the values of the KPMs will be different.
"""
I understand what you meant, but for the reader who doesn't understand the dependency of the thresholds, this isn't very clear. Perhaps:

"""
Some KPMs (AF1, AD1) involve thresholds that are different for design'', minimum'', and stretch'' specifications. Thus comparing one of these metrics against a given target number is a two-level process. Both the threshold used in the calculation is dependent on the specifications, and the requirement on the computed number is dependent on the specifications.

The metrics in this report have all been computed relative to the design'' thresholds. The values of these KPMs would be different if computed against different thresholds.
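To make the two-level dependence concrete, here is a minimal sketch in Python. The structure (AD1 sets the threshold inside the calculation, AF1 sets the target on the result, and both vary with the spec level) follows the comment above; the numeric values are placeholders, not the actual LSST specifications.

```python
# Sketch of the two-level spec dependence for AF1/AD1.
# NOTE: threshold and target values below are hypothetical placeholders,
# not the real "design"/"minimum"/"stretch" numbers.
SPECS = {
    #         AD1 threshold (mas), AF1 target (%)
    "design":  {"AD1": 10.0, "AF1": 10.0},
    "minimum": {"AD1": 20.0, "AF1": 20.0},
    "stretch": {"AD1": 5.0,  "AF1": 5.0},
}

def af1(separations_mas, level):
    """Percent of astrometric separations exceeding the AD1 threshold
    for the given spec level -- the threshold itself is level-dependent."""
    threshold = SPECS[level]["AD1"]
    n_bad = sum(1 for s in separations_mas if s > threshold)
    return 100.0 * n_bad / len(separations_mas)

def meets_spec(separations_mas, level):
    """Second level: compare the computed AF1 against the level's target."""
    return af1(separations_mas, level) <= SPECS[level]["AF1"]

# Example: the same measurements yield different AF1 values per level.
seps = [3.0, 8.0, 12.0, 25.0]  # separations in mas
```

With these placeholder numbers, `af1(seps, "design")` counts two of the four separations as failures, while `af1(seps, "minimum")` counts only one, illustrating why an AF1 value is meaningless without stating which thresholds it was computed against.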

3.
"""
Note also that the photometric performance of the pipelines in the y band is an under estimate of expected delivered performance. For these tests, the y band data was calibrated with z band photometry. This is due to the lack of a reference catalog containing y band information at this time. We believe the numbers are still worth noting in this report as a historical benchmark to track relative performance.
"""

I don't know if this is true. Yes, we should note that we are using a z-band catalog to calibrate y-band observations. But it's not immediately obvious that using the wrong reference catalog significantly affects the KPMs. It shouldn't affect astrometry, and its effect on photometry is second order: if observations are taken through different amounts of water vapor, then one would expect a color-airmass term. I'm not precisely objecting, because it's true that the y-band numbers are terrible and we need some explanation.

4. Mention LDM-240, Sheet 3, as the source for the release 15.0 target numbers. LDM-240 should also have the target numbers for TE1, TE2 which could be included in Table 3.

5. Citations didn't resolve for me.

6. Minor:

• Table 2. Right-justify "Value" column.
• Remove "band" from the "Metric" column through out Table 1-3. It doesn't add anything. Just say, e.g., PA1: g, AF1: i.
Michael Wood-Vasey [X] (Inactive) added a comment -

It seems like we should really switch over to quoting performance based on the HSC reprocessing for the next cycle. The current validation_data_hsc is just too small to run meas_mosaic or jointcal.

Switching should be almost trivial. The biggest request will be that the HSC re-processing be done specifically and only against a weekly that becomes the release, with no special ticket branches.

Simon Krughoff added a comment -

Michael Wood-Vasey [X] I've made the changes you asked for (I hope). Here is the PDF: DMTR-62.pdf

Regarding HSC re-processing. Hsin-Fang has already reached out to me about this. My suggestion was that we hold off until the lsst.verify version of SQuaSH is in place. This should be soon.

Michael Wood-Vasey [X] (Inactive) added a comment -

Looks good. I approve.


People

Assignee:
Simon Krughoff
Reporter:
Simon Krughoff
Reviewers:
Michael Wood-Vasey [X] (Inactive)
Watchers:
Frossie Economou, Michael Wood-Vasey [X] (Inactive), Simon Krughoff