  Data Management / DM-11312

Devise plan for more sensitive metrics for weekly builds

    Details

    • Type: Story
    • Status: Done
    • Resolution: Done
    • Fix Version/s: None
    • Component/s: None
    • Labels: None
    • Story Points: 10
    • Epic Link:
    • Sprint:
      DRP F17-4, DRP F17-5, DRP F17-6, DRP S18-1, DRP S18-2, DRP S18-3, DRP S18-4, DRP S18-5, DRP S18-6, DRP F18-1, DRP F18-2, DRP F18-3, DRP F18-4, DRP F18-5, DRP F18-6, DRP S19-1, DRP S19-2, DRP S19-3, DRP S19-4
    • Team:
      Data Release Production

      Description

      Apply the suite of existing QA plots to the regular RC processing being carried out at NCSA.

      Look for ways we could boil down the statistics being calculated to a number of metrics which are (fairly) reliable at catching actual problems.

      Devise a "QA procedure" which we can apply regularly to sanity check the output of each RC run with minimal human intervention.

        Attachments

          Activity

          John Swinbank added a comment -

          Hey Lauren MacArthur, Yusra AlSayyad — forgive my being slow, but could you clarify exactly what the “metrics being computed” covered by this ticket actually are? Does it just mean “all the numbers generated by pipe_analysis”? Is the idea (as I think the first comment above implies??) to turn things like histograms generated by pipe_analysis into single values (e.g. the width of the distribution), and record those as metrics? Is there a list of what actually is or will be computed? Thanks!

          Yusra AlSayyad added a comment -

          Attend our July 24th 2019 Team Meeting and you'll overhear our brainstorming session with Robert about what metrics we want to start tracking.

          John Swinbank added a comment (edited) -

          Ok, but I guess I've got the wrong end of the stick about what actually happened on this ticket, then. Please could you clarify what's been done here? Sorry for being slow...

          Yusra AlSayyad added a comment (edited) -

          We have:

          1) "Devise[d] plan for more sensitive metrics for weekly builds" (title)

          • See Lauren's last comment.

          2) (description)

          • "Appl[ied] the suite of existing QA plots to the regular RC processing being carried out at NCSA": Hsin-Fang runs these every month, and Lauren looks at the output.
          • "Look[ed] for ways we could boil down the statistics being calculated to a number of metrics which are (fairly) reliable at catching actual problems": pipe_analysis prints as text on the plots, and writes to the logs, useful metrics that we regularly check, such as the width of the stellar locus and the rms of (reference PSF mag - measured PSF mag). Yes, extracting them is hard now, but we use them for rerun comparisons; for example, these are the metrics being compared in DMTN-080. A complete list could be made now. These are the "metrics being computed."
          • "Devise[d] a "QA procedure" which we can apply regularly to sanity check the output of each RC run with minimal human intervention": See Lauren's last comment with the plan. We're refining some existing pipe_analysis metrics (e.g., the rms of refPsfMag - psfMag) to use standard SNR cutoffs so they can be better compared with measurements elsewhere. All pipe_analysis values in use will be written out as part of the metrics framework so they can be tracked by SQuaSH. We devoted two team meetings to learning how to do this with guests Simon and Krzysztof, and one to brainstorming which measurements we are currently missing.
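As an illustration of the kind of scalar metric discussed above, here is a minimal numpy sketch that computes the rms of (refPsfMag - psfMag) over sources above an SNR cutoff. The function name, argument names, and the default cutoff value are illustrative assumptions for this sketch, not pipe_analysis's actual interface or the project's adopted thresholds.

```python
import numpy as np


def rms_psf_mag_offset(ref_psf_mag, psf_mag, psf_flux, psf_flux_err,
                       snr_min=100.0):
    """RMS of (refPsfMag - psfMag) for sources above an SNR cutoff.

    All array arguments are 1-D sequences of equal length. ``snr_min``
    is an illustrative cutoff, not a project-adopted value.
    """
    ref_psf_mag = np.asarray(ref_psf_mag, dtype=float)
    psf_mag = np.asarray(psf_mag, dtype=float)
    # Approximate the signal-to-noise ratio from the PSF flux and its error.
    snr = np.asarray(psf_flux, dtype=float) / np.asarray(psf_flux_err, dtype=float)
    # Keep only finite magnitudes above the SNR cutoff.
    good = np.isfinite(ref_psf_mag) & np.isfinite(psf_mag) & (snr >= snr_min)
    delta = ref_psf_mag[good] - psf_mag[good]
    return float(np.sqrt(np.mean(delta ** 2)))
```

Boiling a per-source distribution down to one number like this is what lets the value be tracked run-over-run in a dashboard, rather than requiring a human to inspect a plot.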
          John Swinbank added a comment -

          Thanks!


            People

            • Assignee: Lauren MacArthur
            • Reporter: John Swinbank
            • Reviewers: Yusra AlSayyad
            • Watchers: John Swinbank, Lauren MacArthur, Yusra AlSayyad
            • Votes: 0

              Dates

              • Created:
              • Updated:
              • Resolved: