Details
Type: Story
Status: Done
Resolution: Done
Fix Version/s: None
Component/s: ap_pipe
Labels:
Story Points: 6
Epic Link:
Sprint: AP S22-6 (May)
Team: Alert Production
Urgent?: No
Description
Run ap_pipe on DC2 datasets to characterize the current performance of the pipeline, in terms of both CPU time and possibly memory usage. Configure the minimal AP pipeline as a comparison point, and look for tasks and functions that consume excessive resources.
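(A minimal sketch, not from this ticket or its scripts, of one way to capture both numbers for a single detector using only the standard-library cProfile, pstats, and resource modules. run_pipeline(), profile_one_detector(), and the detector/file names are placeholders; the real driver is whatever launches ap_pipe in-process.)

import cProfile
import pstats
import resource


def run_pipeline(detector: int) -> None:
    """Placeholder for the real in-process ap_pipe invocation on one detector."""
    pass


def profile_one_detector(detector: int, out_path: str) -> None:
    # cProfile only sees Python calls made in this process; a subprocess-based
    # pipetask run would have to be profiled from inside that process instead.
    profiler = cProfile.Profile()
    profiler.enable()
    run_pipeline(detector)
    profiler.disable()
    profiler.dump_stats(out_path)

    # Peak RSS of this process; ru_maxrss is reported in kilobytes on Linux.
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f"detector {detector}: peak memory {peak_kb / 1024:.0f} MB")
    pstats.Stats(out_path).sort_stats("cumulative").print_stats(10)


if __name__ == "__main__":
    profile_one_detector(detector=42, out_path="detector_42.prof")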
Attachments
Issue Links
- is blocked by
  - DM-34699 Patch the ap_pipe config reset hack (Done)
  - DM-31063 Copy configs from obs_* packages to ap_pipe (Done)
- relates to
  - DM-34623 AP Performance sprint (Done)
  - DM-34825 Perform initial profiling run on test dataset (Done)
  - DM-31652 Process a subset of DC2 through ap_pipe (Done)
  - DM-31653 Reprocess a subset of DC2 with profiling (Done)
For when I get back to this (waiting on DM-31063 so we don't have to worry about DRP-only measurement tasks sneaking in): I have some working scripts and pipeline configurations in /project/parejkoj/ap_profile. Run a set of detectors via profile_script.sh (the DM-33001 version is for profiling Ian's work on the new ImageDifferenceTask). average_profile.py will combine the output from multiple detectors into one average profile; it is probably worth computing more detailed statistics than a simple average, but this is a start. The --stripped yaml file was a first pass at cutting out unnecessary measurement plugins; I'm going to wait a little longer on that, pending the decoupling of the obs packages from DRP-focused configs.
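(The actual average_profile.py lives on the shared /project space; below is a rough, illustrative sketch of the combining step described above, assuming the per-detector runs were dumped with cProfile. pstats.Stats.add() merges the dumps, and the simple average is just the summed times divided by the number of detectors. The file-name pattern and function name are placeholders.)

import glob
import pstats


def average_profiles(pattern: str, n_lines: int = 20) -> None:
    paths = sorted(glob.glob(pattern))
    if not paths:
        raise FileNotFoundError(f"no profiles match {pattern!r}")

    # Merge all per-detector cProfile dumps into a single Stats object.
    merged = pstats.Stats(paths[0])
    for path in paths[1:]:
        merged.add(path)

    # Stats.add() sums times across runs, so report the count alongside the
    # totals; per-detector averages are these numbers divided by len(paths).
    print(f"summed over {len(paths)} detector profiles "
          f"(divide by {len(paths)} for a per-detector average):")
    merged.sort_stats("cumulative").print_stats(n_lines)


if __name__ == "__main__":
    average_profiles("detector_*.prof")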