Details

Type: Story
Status: Done
Resolution: Done
Fix Version/s: None
Component/s: None
Labels: None
Story Points: 10
Epic Link:
Sprint: DRP F17-4, DRP F17-5, DRP F17-6, DRP S18-1, DRP S18-2, DRP S18-3, DRP S18-4, DRP S18-5, DRP S18-6, DRP F18-1, DRP F18-2, DRP F18-3, DRP F18-4, DRP F18-5, DRP F18-6, DRP S19-1, DRP S19-2, DRP S19-3, DRP S19-4
Team: Data Release Production
Description
Apply the suite of existing QA plots to the regular RC processing being carried out at NCSA.
Look for ways we could boil down the statistics being calculated to a small set of metrics that are (fairly) reliable at catching actual problems.
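For concreteness, here is a minimal sketch (plain NumPy, not the actual pipe_analysis code) of what boiling one histogrammed quantity down to a single number might look like: a sigma-clipped width of a residual distribution, so that a handful of outliers does not inflate the metric. The function name, clipping parameters, and example data are all illustrative.

import numpy as np


def clipped_width(values, n_sigma=3.0, n_iter=3):
    """Return a robust standard deviation of `values` after
    iterative sigma clipping."""
    values = np.asarray(values, dtype=float)
    values = values[np.isfinite(values)]
    for _ in range(n_iter):
        med = np.median(values)
        std = np.std(values)
        keep = np.abs(values - med) < n_sigma * std
        if keep.all():
            break
        values = values[keep]
    return float(np.std(values))


# Stand-in for residuals (in mmag) of the kind pipe_analysis histograms;
# in practice the values would come from a real RC run's catalogs.
residuals_mmag = np.random.default_rng(0).normal(0.0, 8.0, size=5000)
print(f"repeatability width: {clipped_width(residuals_mmag):.2f} mmag")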
Devise a "QA procedure" that we can apply regularly to sanity-check the output of each RC run with minimal human intervention.
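Similarly, a minimal sketch of the kind of low-intervention per-run check being proposed: load a run's scalar metrics, compare each against a limit, and report only the failures. The metric names, thresholds, and JSON layout here are invented for illustration; real limits would presumably be tuned against previous well-behaved RC runs.

import json

# Hypothetical limits; real values would come from requirements or from
# the scatter observed in earlier RC runs.
THRESHOLDS = {
    "photRepeatWidthMmag": 15.0,   # upper limit
    "astromResidWidthMas": 25.0,   # upper limit
}


def check_run(metrics_path):
    """Load a run's metrics (a flat JSON dict of name -> value) and
    return a list of human-readable failure messages."""
    with open(metrics_path) as f:
        metrics = json.load(f)
    failures = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            failures.append(f"{name}: missing from run output")
        elif value > limit:
            failures.append(f"{name}: {value:.2f} exceeds limit {limit:.2f}")
    return failures


if __name__ == "__main__":
    # "rc_run_metrics.json" is a hypothetical per-run output file.
    failures = check_run("rc_run_metrics.json")
    print("PASS" if not failures else "\n".join(failures))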
Comments
Hey Lauren MacArthur, Yusra AlSayyad — forgive my being slow, but could you clarify exactly what the “metrics being computed” covered by this ticket actually are? Does it just mean “all the numbers generated by pipe_analysis”? Is the idea (as I think the first comment above implies?) to turn things like the histograms generated by pipe_analysis into single values (e.g. the width of the distribution), and record those as metrics? Is there a list of what actually is or will be computed? Thanks!