# lsst::pipe::base: Base package for pipeline tasks

# Introduction

lsst::pipe::base provides a data processing pipeline infrastructure. Data processing is performed by "tasks", which are instances of Task or CmdLineTask. Tasks perform a wide range of data processing operations, from basic operations such as assembling raw images into CCD images (trimming overscan), fitting a WCS or detecting sources on an image, to complex combinations of such operations.

Command-line tasks are tasks that can be run from the command line. You might think of them as the LSST equivalent of a data processing pipeline. Despite their extra capabilities, command-line tasks can also be used as ordinary tasks and called as subtasks by other tasks. Command-line tasks are subclasses of CmdLineTask.

Each task is configured using the pex_config package, via a task-specific subclass of pex.config.Config. The task's configuration includes all subtasks that the task may call. As a result, it is easy to replace (or "retarget") one subtask with another. A common use for this is to provide a camera-specific variant of a particular task (e.g. one version for SDSS imager data and another for Subaru Hyper Suprime-Cam data).
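As a loose illustration of the retargeting idea, here is a pure-Python sketch. This is not the real lsst.pex.config API (which provides this via ConfigurableField); all class names below are hypothetical.

```python
# Pure-Python sketch of "retargeting": a config holds a reference to
# the subtask class, and a camera-specific override can replace it.
# All names here are hypothetical stand-ins, not real pipeline tasks.

class SubtaskField:
    """Holds the subtask class a parent task will instantiate."""

    def __init__(self, target):
        self.target = target

    def retarget(self, new_target):
        """Swap in a different subtask implementation."""
        self.target = new_target


class DefaultAstrometryTask:
    def run(self):
        return "default astrometry"


class HscAstrometryTask:
    def run(self):
        return "HSC-specific astrometry"


class CalibrateConfig:
    """A task-specific config naming the subtasks the task may call."""

    def __init__(self):
        self.astrometry = SubtaskField(DefaultAstrometryTask)


# A camera-specific config override retargets the subtask:
config = CalibrateConfig()
config.astrometry.retarget(HscAstrometryTask)
subtask = config.astrometry.target()
```

Because the parent task only looks up the subtask through its config field, the override requires no change to the parent task's code.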

Tasks may process multiple items of data in parallel, using Python's multiprocessing library. Support for this is built into the ArgumentParser and TaskRunner.
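The essential pattern is a parallel map of a processing function over data references. The sketch below uses the thread-based multiprocessing.dummy.Pool only so it runs anywhere without a `__main__` guard; the real TaskRunner uses the process-based multiprocessing module, and process_one is a hypothetical stand-in for a task's per-data-reference work.

```python
# Sketch of the parallel-map pattern used by TaskRunner.
# multiprocessing.dummy.Pool is a thread-based drop-in for
# multiprocessing.Pool, used here purely for portability of the sketch.
from multiprocessing.dummy import Pool


def process_one(data_id):
    """Hypothetical stand-in for processing one data reference."""
    return {"visit": data_id["visit"], "status": "ok"}


data_ids = [{"visit": v} for v in (54123, 55523, 62345)]

# Map the processing function over all data IDs with 2 workers;
# Pool.map preserves the input order of results.
with Pool(processes=2) as pool:
    results = pool.map(process_one, data_ids)
```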

Most tasks have a run method that performs the primary data processing. Each task's run method should return a Struct. This allows named access to returned data, which provides safer evolution than relying on the order of returned values. All task methods that return more than one or two items of data should return the data in a Struct.
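A minimal sketch of what Struct provides follows. The real class is lsst.pipe.base.Struct (which also offers getDict); the run function and field names below are hypothetical.

```python
# Minimal sketch of the Struct idea: named fields set from keyword
# arguments, so callers access results by name rather than position.

class Struct:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def getDict(self):
        """Return the fields as a plain dict."""
        return dict(self.__dict__)


def run():
    # A task's run method returns its products by name; adding a new
    # return value later does not break existing callers.
    return Struct(exposure="calexp", sources=[1, 2, 3], background=0.5)


result = run()
```

Callers then write `result.sources` instead of unpacking a tuple, which is why named access evolves more safely than positional returns.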

Many tasks are found in the pipe_tasks package, especially tasks that use many different packages and don't seem to belong in any one of them. Tasks that are associated with a particular package should be in that package; for example the instrument signature removal task ip.isr.isrTask.IsrTask is in the ip_isr package.

pipe_base is written purely in Python. The most important contents are:

• CmdLineTask: base class for pipeline tasks that can be run from the command line.
• Task: base class for subtasks that are not meant to be run from the command line.
• Struct: object returned by the run method of a task.
• ArgumentParser: command line parser for pipeline tasks.
• timeMethod: decorator to log performance information for a Task method.
• TaskRunner: a class that runs command-line tasks, using multiprocessing when requested. This will work "as is" for most command-line tasks, but will need to be subclassed if, for instance, the task's run method needs something other than a single data reference.
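As a rough sketch of what a decorator like timeMethod does: it wraps a task method, times the call, and records the timing in the task's metadata. The real decorator records more (CPU and memory figures) via the task's metadata object; the plain dict and names below are simplified assumptions.

```python
# Simplified sketch of a timeMethod-style decorator: wrap a Task
# method, time it, and store the elapsed time in self.metadata.
import functools
import time


def timeMethod(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        start = time.time()
        result = func(self, *args, **kwargs)
        # Record duration under "<methodName>Duration", e.g. "runDuration".
        self.metadata[func.__name__ + "Duration"] = time.time() - start
        return result
    return wrapper


class DemoTask:
    """Hypothetical task with a metadata dict (the real base class
    provides a richer metadata object)."""

    def __init__(self):
        self.metadata = {}

    @timeMethod
    def run(self):
        return sum(range(1000))


task = DemoTask()
value = task.run()
```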

# Command-Line Task Argument Reference

Each command-line task typically has a short "task runner script" in the bin/ directory of the package in which the task is defined; this script runs the task. This section describes the command-line options of these task runner scripts.

Specify --help to print help; when in doubt, give this a try.

The first argument to a task must be the path to the input repository (or --help). For example:

• myTask.py path/to/input --id... is valid: input path is the first argument
• myTask.py --id ... path/to/input is invalid: an option comes before the input path

--output specifies the path to the output repository. Some tasks also support --calib: the path to input calibration data. To shorten input, output and calib paths see Environment Variables.

Data is usually specified by the --id argument with key=value pairs as the value, where the keys depend on the camera and type of data. If you run the task and specify both an input data repository and --help then the printed help will show you valid keys (the input repository tells the task what kind of camera data is being processed). See Specifying Data IDs for more information about data IDs. A few tasks take more than one kind of data ID, or have renamed the --id argument; run the task with --help or see the task's documentation for details.

You may show the config, subtasks and/or data using --show. By default --show quits after printing the information, but --show run allows the task to run. For example:

• --show config data tasks shows the config, data and subtasks, and then quits.
• --show tasks run shows the subtasks and then runs the task.

For long or repetitive command lines you may wish to specify some arguments in separate text files. See Argument Files for details.

## Specifying Data IDs

--id and other data identifier arguments are used to specify IDs for input and output data. The ID keys depend on the camera and on the kind of data being processed. For example, lsstSim calibrated exposures are identified by the following keys: visit, filter, raft and sensor (and a given visit has exactly one filter).

Omit a key to specify all values of that key. For example, for lsstSim calibrated exposures:

• --id visit=54123 specifies all rafts and sensors for visit 54123 (and all filters, but there is just one filter per visit).
• --id visit=54123 raft=1,0 specifies all sensors for raft 1,0 of visit 54123.

To specify multiple data IDs you may separate values with ^ (a character that does not have special meaning to the unix command parser). The result is the outer product (all possible combinations). For example:

• --id visit=54123^55523 raft=1,1^2,1 specifies four IDs: visits 54123 and 55523 of rafts 1,1 and 2,1
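The outer-product expansion can be sketched as follows. The helper below is hypothetical; the real parsing is done by pipe_base's ArgumentParser.

```python
# Sketch of expanding one --id argument: each value may contain
# ^-separated alternatives, and the result is the outer product.
import itertools


def expand_id(**kwargs):
    """Hypothetical helper: expand ^-separated values into all
    combinations of data-ID dicts."""
    keys = list(kwargs)
    value_lists = [str(v).split("^") for v in kwargs.values()]
    return [dict(zip(keys, combo))
            for combo in itertools.product(*value_lists)]


# --id visit=54123^55523 raft=1,1^2,1  ->  four data IDs
ids = expand_id(visit="54123^55523", raft="1,1^2,1")
```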

You may specify a data identifier argument as many times as you like. Each one is treated independently. Thus the following example specifies all sensors for four combinations of visit and raft, plus all sensors for one raft of two other visits for calibrated lsstSim data:

• --id visit=54123^55523 raft=1,1^2,1 --id visit=623459^293423 raft=0,0

## The --rerun Option

The --rerun option is an alternate way to specify the output and, optionally, input repositories for a command-line task. Unlike --output, the value supplied to --rerun is relative to the rerun directory of the root input data repository (i.e. follow the chain of parent repositories, indicated by files in the repository root named _parent, all the way back, then append rerun to that path). --rerun saves the user from typing the input repository path twice. For example,

processCcd.py $root/hsc --output $root/hsc/rerun/Vishal/aRerun

which (for some semantically insignificant version of $root) will run processCcd.py on the data in $root/hsc and put the output in $root/hsc/rerun/Vishal/aRerun, can be shortened using --rerun to:

processCcd.py $root/hsc --rerun Vishal/aRerun
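The path resolution described above can be sketched as follows. The helpers are hypothetical (the real logic lives in the ArgumentParser and data butler), and _parent is assumed here to be a text file containing the parent path, though in practice it may be a link.

```python
# Sketch of --rerun resolution: follow _parent links back to the root
# repository, then append rerun/<name>.  Hypothetical helpers only.
import os


def find_root(repo):
    """Follow the chain of _parent files back to the root repository.

    Assumes _parent is a text file holding the parent repository path.
    """
    parent_file = os.path.join(repo, "_parent")
    while os.path.exists(parent_file):
        with open(parent_file) as f:
            repo = f.read().strip()
        parent_file = os.path.join(repo, "_parent")
    return repo


def rerun_path(input_repo, rerun):
    """--rerun NAME resolves to <root>/rerun/NAME."""
    return os.path.join(find_root(input_repo), "rerun", rerun)
```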

This is useful because outputs from the pipeline are conventionally placed in a subdirectory of a "rerun" directory present in a per-camera root repository. For example:

• $root/hsc/rerun/public # camera hsc; a public rerun useful for many people.
• $root/hsc/rerun/Jim/aTicket/fixed-psf # camera hsc; one of several private reruns used to debug a single issue.
• $root/lsstSim/Simon/aTicket/rerun/Jim/anotherTicket # camera lsstSim; simulated by user Simon; processed by user Jim.

Additionally, --rerun path1:path2 will read from the rerun specified by path1 and write to the rerun specified by path2. This is referred to as "chaining" reruns. For example:

processCcd.py $root/hsc --rerun Paul/procCcds ... # Paul processes some CCDs.