Sorry for the long delay, but this is finally ready for someone at DESC to have a go with. I've run the scripts on the visit- and coadd-level data in /home/jchiang/DC1/extracted/ on lsst-dev. I actually made a copy of the data in /scratch/lauren/rerun/DC1/ (so feel free to delete it from your home directory, James Chiang). In order to extract the reference catalogs, I first ran
ingestReferenceCatalog.py /scratch/lauren/rerun/DC1/ -C config/IngestIndexedReferenceTask.py
in that directory to get the ref_cats/cal_ref_cat/ directories.
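For anyone curious about what goes in that config file: it is a standard task config override mapping the input catalog columns for IngestIndexedReferenceTask. A minimal sketch is below; the column names are placeholders rather than the actual DC1 ones, and the only setting I'd call essential is dataset_config.ref_dataset_name, which has to match the cal_ref_cat name used in the analysis configs further down.

# Sketch of a config/IngestIndexedReferenceTask.py override (column names are placeholders).
config.dataset_config.ref_dataset_name = "cal_ref_cat"  # must match the ref_dataset_name used below
config.ra_name = "raJ2000"       # RA column in the input catalog (placeholder)
config.dec_name = "decJ2000"     # Dec column in the input catalog (placeholder)
config.mag_column_list = ["r"]   # magnitude column(s) to ingest (placeholder)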
To run the scripts on lsst-dev, I performed the following steps to set up and use the shared stack (note that I have not tested the scripts on stack versions much older than the current weekly):
source /software/lsstsw/stack/loadLSST.bash
setup lsst_distrib

Do the following from whatever directory you use for development:

git clone git@github.com:LSST/obs_lsstSim.git # if you don't already have it!
cd obs_lsstSim
git checkout tickets/DM-11452
setup -v -r . -j
scons opt=3 -j4
cd ..
git clone git@github.com:LSST-DM/pipe_analysis.git
cd pipe_analysis
git checkout tickets/DM-11452
setup -v -r . -j
scons opt=3 -j4
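As a quick sanity check that the two ticket branches are the ones actually being picked up, you can import the packages and see where they resolve (the module paths below are my assumption about the two packages' layouts):

# Run in a python session after the setups above; module paths are assumed.
import lsst.obs.lsstSim
import lsst.pipe.analysis
print(lsst.obs.lsstSim.__file__)    # should point into your obs_lsstSim checkout
print(lsst.pipe.analysis.__file__)  # should point into your pipe_analysis checkout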
Assuming that all went smoothly, you can run the visit analysis script as follows:
visitAnalysis.py ROOTDIR --rerun RERUNDIR --id visit=1993939 --tract=0 --config doApplyUberCal=False analysis.visitClassFluxRatio=0.925 refObjLoader.ref_dataset_name="cal_ref_cat"
You can also specify particular raft/sensor combos in the --id (e.g. --id visit=1993939 raft=0,2 sensor=1,1^2,0); leaving them out results in all available calexps for the supplied visit being included.
So, in my case I ran:
visitAnalysis.py /scratch/lauren/DC1 --rerun DM-11452/ --id visit=1993939 --tract=0 --config doApplyUberCal=False analysis.visitClassFluxRatio=0.925 refObjLoader.ref_dataset_name="cal_ref_cat"
I use the --rerun file structure, which puts the output in ROOTDIR/rerun/RERUNDIR; in my case that evaluates to /scratch/lauren/DC1/rerun/DM-11452. The figures end up in a plots subdirectory of the rerun, e.g. /scratch/lauren/DC1/rerun/DM-11452/plots. You could use --output instead of --rerun, but the comparison scripts currently assume the rerun structure (see below).
The doApplyUberCal and visitClassFluxRatio config parameters override defaults that are appropriate for HSC: the first tells the code not to try to apply the uber calibration from meas_mosaic, and the second is the flux ratio that delineates the star/galaxy classification (it is taken into account in the plotting limits; obs_subaru overrides this to 0.95, so that is what I use as the default). The refObjLoader.ref_dataset_name parameter points the reference matching at the cal_ref_cat catalog ingested above.
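If you'd rather not type those out each time, the same overrides can live in a small config override file passed with the standard -C/--configfile option (the file name here is just an example), then you can pass -C visitAnalysisConfig.py in place of the --config arguments:

# visitAnalysisConfig.py (example name) -- same overrides as on the command line above
config.doApplyUberCal = False                         # no meas_mosaic uber-cal for DC1
config.analysis.visitClassFluxRatio = 0.925           # star/galaxy flux-ratio boundary
config.refObjLoader.ref_dataset_name = "cal_ref_cat"  # the reference catalog ingested above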
Similarly, the coadd analysis script can be run as:
coaddAnalysis.py /scratch/lauren/DC1 --rerun DM-11452/ --id tract=0 patch=18,13 filter=r --config refObjLoader.ref_dataset_name="cal_ref_cat"
Finally, I have tested that the comparison scripts compareVisitAnalysis.py and compareCoaddAnalysis.py also run on the DC1 data. The purpose of these scripts is to compare two separate processing runs of the same dataset. To accommodate this, they take a --rerun2 input parameter. Since I didn't have two separate runs to compare, I just ran them by comparing this one with itself, so my command looked like:
compareVisitAnalysis.py /scratch/lauren/DC1 --rerun DM-11452/ --rerun2 DM-11452/ --id visit=1993939 --tract=0 --config refObjLoader.ref_dataset_name="cal_ref_cat" --config doApplyUberCal1=False doApplyUberCal2=False
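The same trick works for the comparison scripts; a sketch of an equivalent override file (again, the file name is just an example), using the 1/2-suffixed parameters, which presumably apply to the --rerun and --rerun2 runs respectively:

# compareVisitConfig.py (example name)
config.doApplyUberCal1 = False                        # uber-cal switch for the --rerun run
config.doApplyUberCal2 = False                        # uber-cal switch for the --rerun2 run
config.refObjLoader.ref_dataset_name = "cal_ref_cat"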
Lauren MacArthur: I'm adding this to the August sprint since I think it's likely to be an ongoing activity, but that leaves you quite overloaded for the month. Feel free to split your effort between this and DM-11312 as you think appropriate.