Fix Version/s: None
Sprint: DRP F17-4, DRP F17-5, DRP F17-6, DRP S18-1, DRP S18-2, DRP S18-3, DRP S18-4, DRP S18-5, DRP S18-6, DRP F18-1, DRP F18-2, DRP F18-3, DRP F18-4, DRP F18-5, DRP F18-6, DRP S19-1, DRP S19-2, DRP S19-3, DRP S19-4
Team: Data Release Production
Apply the suite of existing QA plots to the regular RC processing being carried out at NCSA.
Look for ways we could boil down the statistics being calculated to a number of metrics which are (fairly) reliable at catching actual problems.
Devise a "QA procedure" which we can apply regularly to sanity check the output of each RC run with minimal human intervention.
Attend our July 24th, 2019 team meeting and you'll hear our brainstorming session with Robert about which metrics we want to start tracking.
Ok, but I guess I've got the wrong end of the stick about what actually happened on this ticket, then. Please could you clarify what's been done here? Sorry for being slow...
1) "Devise[d] plan for more sensitive metrics for weekly builds" (Title)
- See Lauren's last comment.
- "Appl[ied] the suite of existing QA plots to the regular RC processing being carried out at NCSA": Hsin-Fang runs these every month, and Lauren looks at the output.
- "Look[ed] for ways we could boil down the statistics being calculated to a number of metrics which are (fairly) reliable at catching actual problems": pipe_analysis prints useful metrics as text on the plots and in the logs, and we check them regularly, e.g. the width of the stellar locus and the rms of reference psfMag - measured psfMag. Extracting them is awkward at the moment, but we already use them for rerun comparisons; these are the metrics being compared in dmtn-080, for example. A complete list could be made now. These are the "metrics being computed."
- "Devise[d] a "QA procedure" which we can apply regularly to sanity check the output of each RC run with minimal human intervention": See Lauren's last comment with the plan. We're refining some existing pipe_analysis metrics (e.g. the rms of refPsfMag - psfMag) to use standard SNR cutoffs so they can be better compared with measurements made elsewhere. All pipe_analysis values we use will be written out via the metrics framework so that they can be tracked by squash. We devoted two team meetings to learning how to do this with guests Simon and Krzysztof, and one to brainstorming what measurements we are currently missing.
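To make the refinement above concrete, here is a minimal numpy sketch of an rms-of-(refPsfMag - psfMag) metric restricted to a standard SNR cutoff. The function name, column layout, and the value of the SNR threshold are all assumptions for illustration, not the actual pipe_analysis implementation:

```python
import numpy as np

def rms_ref_minus_psf(ref_psf_mag, psf_mag, psf_flux, psf_flux_err, snr_min=20.0):
    """RMS of (reference PSF mag - measured PSF mag) above an SNR cutoff.

    Applying the same SNR threshold everywhere is what makes the number
    comparable across reruns and with measurements made elsewhere.
    """
    snr = np.asarray(psf_flux) / np.asarray(psf_flux_err)
    ref_psf_mag = np.asarray(ref_psf_mag, dtype=float)
    psf_mag = np.asarray(psf_mag, dtype=float)
    # Keep only finite magnitudes above the SNR cutoff.
    good = np.isfinite(ref_psf_mag) & np.isfinite(psf_mag) & (snr > snr_min)
    diff = ref_psf_mag[good] - psf_mag[good]
    return float(np.sqrt(np.mean(diff ** 2)))
```

A single number like this can then be written out through the metrics framework and tracked over time, rather than read off a plot by eye.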
Hey Lauren MacArthur, Yusra AlSayyad — forgive my being slow, but could you clarify exactly what the "metrics being computed" covered by this ticket actually are? Does it just mean "all the numbers generated by pipe_analysis"? Is the idea (as I think the first comment above implies?) to turn things like histograms generated by pipe_analysis into single values (e.g. the width of the distribution), and record those as metrics? Is there a list of what actually is or will be computed? Thanks!
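For what it's worth, "turn a histogram into a single value" usually means collapsing the distribution to one width statistic. A hedged sketch (a plain numpy illustration, not what pipe_analysis actually does) using a MAD-based sigma, which is less outlier-sensitive than a plain standard deviation:

```python
import numpy as np

def robust_width(values):
    """Collapse a distribution into a single width metric.

    Estimates sigma from the median absolute deviation (MAD); the
    1.4826 factor converts MAD to sigma for a Gaussian distribution.
    """
    values = np.asarray(values, dtype=float)
    values = values[np.isfinite(values)]  # drop NaN/inf before computing
    mad = np.median(np.abs(values - np.median(values)))
    return float(1.4826 * mad)
```

Tracking one such number per distribution per run is what lets a dashboard like squash flag a regression without a human inspecting every plot.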