Details
- Type: Story
- Status: Done
- Resolution: Done
- Fix Version/s: None
- Labels: None
- Story Points: 2
- Epic Link:
- Sprint: AP S21-4 (March)
- Team: Alert Production
- Urgent?: No
Description
Following DM-28888 (and subsequent bugfixes), we now have both Gen 2 and Gen 3 ap_verify runs in SQuaSH. The existing dashboards are designed for investigating a single run at a time. Create a new dashboard that compares Gen 2 and Gen 3 directly, both to keep the existing boards free of confusing mixes of plots and to make it easy to remove the comparisons once we've dropped Gen 2.
Plots on the new dashboard should probably focus on performance, and on metrics that are known to be robust to processing order (since even what "processing order" means is very different in Gen 2 and Gen 3).
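To illustrate the kind of side-by-side comparison the new dashboard is meant to show, here is a minimal sketch (not LSST pipeline code; the function, metric names, and values are hypothetical stand-ins for measurements queried from SQuaSH) that pairs up Gen 2 and Gen 3 measurements of the same metrics and computes their ratios:

```python
def compare_generations(gen2, gen3):
    """Pair Gen 2 and Gen 3 measurements of the same metrics.

    gen2, gen3: dicts mapping metric name -> measured value
    (hypothetical stand-ins for values queried from SQuaSH).
    Returns {metric: (gen2_value, gen3_value, gen3/gen2 ratio)}
    for metrics present in both runs; the ratio is None when the
    Gen 2 value is zero.
    """
    common = gen2.keys() & gen3.keys()
    return {
        name: (gen2[name], gen3[name],
               gen3[name] / gen2[name] if gen2[name] else None)
        for name in sorted(common)
    }


# Made-up timing numbers (seconds); only metrics measured in both
# generations are comparable.
gen2_times = {"DiaPipelineTime": 12.0, "AssociationTime": 3.0}
gen3_times = {"DiaPipelineTime": 9.0, "CalibrateTime": 5.0}
print(compare_generations(gen2_times, gen3_times))
# → {'DiaPipelineTime': (12.0, 9.0, 0.75)}
```

Restricting the comparison to metrics measured in both runs mirrors the ticket's point that only order-robust metrics belong on the shared board.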
The "final" version of the dashboard is available at https://chronograf-demo.lsst.codes/sources/2/dashboards/73. I've left out the memory usage metric, since it's not clear what it's actually measuring in Gen 3; I'll ask Middleware about it on Monday.
While working on this ticket, I also noticed that our timing metrics were profiling AssociationTask rather than DiaPipelineTask, so I've fixed that on the "AP runtime metrics" dashboard.