  Data Management / DM-11452

Adapt qa analysis scripts to run on DESC DC1 simulations output


Details

    • Story Points: 4
    • Sprints: DRP F17-3, DRP F17-4, DRP F17-5, DRP F17-6
    • Team: Data Release Production

    Description

      To date, the qa analysis scripts have only been run and tested on HSC data. As such, it is almost certain that some HSC-isms have unwittingly been baked into the code. This ticket involves running the scripts on DESC DC1 simulation output and making any adaptations required. The testing will be done at the visit and coadd levels for the single band (r) of the DC1 run. This will be a significant step towards generalizing the scripts to run on any LSST stack-processed dataset.


          Activity

            lauren — I'm adding this to the August sprint since I think it's likely an ongoing activity, but that leaves you quite overloaded for the month. Feel free to split your effort between this and DM-11312 as you think appropriate.

            - John Swinbank

            Here are the visit-level qa plots for the DC1 data that was copied to /home/jchiang/DC1/extracted/ on lsst-dev for me to work with.

            - Lauren MacArthur

            Sorry for the long delay, but this is finally ready for someone at DESC to have a go with. I've run the scripts on the visit- and coadd-level data in /home/jchiang/DC1/extracted/ on lsst-dev. I actually made a copy of the data in /scratch/lauren/rerun/DC1/ (so feel free to delete it from your home directory, jchiang). In order to extract the reference catalogs, I first ran

            ingestReferenceCatalog.py /scratch/lauren/rerun/DC1/ -C config/IngestIndexedReferenceTask.py
            

            in that directory to get the ref_cats/cal_ref_cat/ directories.
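
            For reference, a successful ingest should leave the indexed shards alongside a schema file and the persisted config (a hypothetical listing; the shard files are named by HTM pixel ID and will differ):

            ls /scratch/lauren/rerun/DC1/ref_cats/cal_ref_cat/
            # config.py  master_schema.fits  <htm_pixel_id>.fits ...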

            To run the scripts on lsst-dev I performed the following steps to set up and use the shared stack (note that I have not tested the scripts on versions of the stack much older than the current weekly):

            source /software/lsstsw/stack/loadLSST.bash
            setup lsst_distrib
            

            Do the following from whatever directory you use for development:

            git clone git@github.com:LSST/obs_lsstSim.git   # if you don't already have it!
            cd obs_lsstSim
            git checkout tickets/DM-11452
            setup -v -r . -j
            scons opt=3 -j4
            cd ..
            git clone git@github.com:LSST-DM/pipe_analysis.git
            cd pipe_analysis
            git checkout tickets/DM-11452
            setup -v -r . -j
            scons opt=3 -j4
            
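            To confirm that the two locally built packages are the ones actually set up (assuming standard eups behaviour, where products set up from a local checkout are listed with a LOCAL: version), you can run:

            eups list -s | grep -E "obs_lsstSim|pipe_analysis"
            # expect both to show LOCAL:/path/to/your/checkout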

            Assuming that all went smoothly, you can run the visit analysis script as follows:

            visitAnalysis.py ROOTDIR --rerun RERUNDIR --id visit=1993939 --tract=0 --config doApplyUberCal=False analysis.visitClassFluxRatio=0.925 refObjLoader.ref_dataset_name="cal_ref_cat"
            

            You can also specify specific raft/sensor combos in the --id (e.g. --id visit=1993939 raft=0,2 sensor=1,1^2,0). Leaving them out will result in all available calexps for the supplied visit being included.
            So, in my case I ran:

            visitAnalysis.py /scratch/lauren/DC1 --rerun DM-11452/ --id visit=1993939 --tract=0 --config doApplyUberCal=False analysis.visitClassFluxRatio=0.925 refObjLoader.ref_dataset_name="cal_ref_cat"
            
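            As an aside, the ^ separator in dataIds works for any key with stack command-line tasks, so several visits can in principle be processed in one go (the second visit number below is made up for illustration):

            visitAnalysis.py /scratch/lauren/DC1 --rerun DM-11452/ --id visit=1993939^1993940 --tract=0 --config doApplyUberCal=False analysis.visitClassFluxRatio=0.925 refObjLoader.ref_dataset_name="cal_ref_cat"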

            I use the --rerun file structure, which puts the output in ROOTDIR/rerun/RERUNDIR,
            which in my case evaluates to /scratch/lauren/DC1/rerun/DM-11452. The figures will end up in a plots subdirectory of the rerun, e.g. /scratch/lauren/DC1/rerun/DM-11452/plots. You could use --output instead of --rerun, but the comparison scripts currently assume the rerun structure (see below).
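
            In other words (consistent with the --rerun semantics in the help output quoted later in this thread), the following two invocations write to the same location:

            visitAnalysis.py ROOTDIR --rerun RERUNDIR ...                 # output: ROOTDIR/rerun/RERUNDIR
            visitAnalysis.py ROOTDIR --output ROOTDIR/rerun/RERUNDIR ...  # same output location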

            The two config parameters override defaults that are applicable to HSC. The first tells the code not to try to apply the uber calibration from meas_mosaic; the second is the flux ratio that delineates the star/galaxy classification (it is used in setting the plotting limits; obs_subaru overrides this to 0.95, so that is what I use as the default).
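
            If you find yourself repeating these overrides, they can also live in a config override file passed via --configfile (a minimal sketch; the file name dc1VisitConfig.py is made up):

            # dc1VisitConfig.py: overrides for visitAnalysis.py on DC1
            config.doApplyUberCal = False
            config.analysis.visitClassFluxRatio = 0.925
            config.refObjLoader.ref_dataset_name = "cal_ref_cat"

            visitAnalysis.py /scratch/lauren/DC1 --rerun DM-11452/ --id visit=1993939 --tract=0 --configfile dc1VisitConfig.py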

            Similarly, the coadd analysis script can be run as:

            coaddAnalysis.py /scratch/lauren/DC1 --rerun DM-11452/ --id tract=0 patch=18,13 filter=r --config refObjLoader.ref_dataset_name="cal_ref_cat"
            

            Finally, I have tested that the comparison scripts compareVisitAnalysis.py and compareCoaddAnalysis.py also run on the DC1 data. The purpose of these scripts is to compare two separate processing runs of the same dataset. To accommodate this, they take a --rerun2 input parameter. Since I didn't have two separate runs to compare, I just ran them by comparing this one with itself, so my command looked like:

            compareVisitAnalysis.py /scratch/lauren/DC1 --rerun DM-11452/ --rerun2 DM-11452/ --id visit=1993939 --tract=0 --config refObjLoader.ref_dataset_name="cal_ref_cat" --config doApplyUberCal1=False doApplyUberCal2=False
            

            - Lauren MacArthur

            I've attached tarballs of the plots created with the above.

            cwalter, is there someone you could ask to test out the above on some more DC1 data?

            - Lauren MacArthur

            Thanks Lauren!

            OK, will try to pass this on.

            - Chris Walter

            Heather, can you please give this a pass once you're satisfied the scripts are running as advertised on the DESC DC1 data?

            - Lauren MacArthur

            We can run the various scripts and are now trying to compare our results against the runs lauren completed above. We've seen some discrepancies in the numbers and just want to make sure this is either expected or can be explained. At NERSC we're on Haswell with gcc 4.9.3, using Py3 along with the w_2017_41 install of lsst_distrib with the obs_lsstSim and pipe_analysis branches as described above. Jim told me that our data should be identical, as you were running using the copy made from NERSC. I ran the scripts as follows; in the case of the compare scripts, I used the tarballs lauren attached to this JIRA.

            visitAnalysis.py /global/projecta/projectdirs/lsst/production/DC1/DM/DC1-imsim-dithered --rerun /global/cscratch1/sd/heatherk/qa_dm_11452/rerun --id visit=1993939 --tract=0 --config doApplyUberCal=False analysis.visitClassFluxRatio=0.925 refObjLoader.ref_dataset_name="cal_ref_cat" --no-versions
            

            compareVisitAnalysis.py /global/projecta/projectdirs/lsst/production/DC1/DM/DC1-imsim-dithered --rerun /global/cscratch1/sd/heatherk/qa_dm_11452/rerun --rerun2 /global/cscratch1/sd/heatherk/qa_dm_11452/jira --id visit=1993939 --tract=0 --config doApplyUberCal1=False doApplyUberCal2=False --config refObjLoader.ref_dataset_name="cal_ref_cat" --no-versions                                                                             
            

            coaddAnalysis.py /global/projecta/projectdirs/lsst/production/DC1/DM/DC1-imsim-dithered --rerun /global/cscratch1/sd/heatherk/qa_dm_11452/rerun --id tract=0 patch=18,13 filter=r --config refObjLoader.ref_dataset_name="cal_ref_cat" --no-versions                       
            

            compareCoaddAnalysis.py /global/projecta/projectdirs/lsst/production/DC1/DM/DC1-imsim-dithered --rerun /global/cscratch1/sd/heatherk/qa_dm_11452/rerun --rerun2 /global/cscratch1/sd/heatherk/qa_dm_11452/jira --id tract=0 patch=18,13 filter=r --config refObjLoader.ref_dataset_name="cal_ref_cat" --no-versions
            

            I'll attach some images in a moment.

            - Heather Kelly

            I just attached two images, one from Lauren's tarballs (marked "Org_") and the other from running at NERSC. There are different values for Ntotal and the various stats. I can find similar differences in other plots. Is this expected?

            - Heather Kelly (edited)

            Odd...I have run these scripts on the dataset I have on py2 (on tiger here at Princeton) and on py2 & py3 (on lsst-dev, having set up the w_2017_41 stack) and get exactly the same results, so it does not seem to be a py2 vs. py3 issue. It looks like gcc=6.3.1 on lsst-dev, so that's the only setup difference I can see. We really would need to confirm somehow that our input catalogs are indeed identical to get to the bottom of this, but that is difficult given our system access issues. Do you have any suggestions?

            - Lauren MacArthur

            I would start by trying to compare the versions provided by conda list and eups list to see that we are indeed using the same package versions. It'd also be good to confirm we're using the same data. Jim indicated that what is on NCSA was copied over from NERSC - so I would expect that should be fine, but it wouldn't hurt to confirm that.

            I have a shiny new lsst-dev account and am logged in. `/scratch/lauren/rerun/DC1/` does not seem to exist (or I can't view it), but I see other content under `/scratch/lauren/rerun/`. Is the data still there?
            I should figure out how to set up the shared stack. I recall there was a post on community about some recent updates, I'll take a peek. Then I can try to view package versions. Is it `/ssd/lsstsw/stack3_20171021`?

            I'll also start up a rebuild at NERSC using gcc6.

            - Heather Kelly (edited)
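
            A sketch of that comparison with standard tools (paths are illustrative): capture the package list on each machine and checksum the shared input data, then diff the resulting files across sites:

            # on each machine, with the stack set up:
            eups list -s > eups_setup.txt
            conda list > conda_pkgs.txt
            # checksum the input data, e.g. the reference catalogs:
            find /scratch/lauren/DC1/ref_cats -type f -exec md5sum {} \; | sort -k 2 > refcat_md5.txt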

            Getting back to the data: jchiang's copy is in /home/jchiang/DC1/extracted, and I can view that. What we have at NERSC is in /global/projecta/projectdirs/lsst/production/DC1/DM/DC1-imsim-dithered.
            Drilling down, I can find the v1993939-fr data, which is what I believe we are using.

            Now, there was a step that you did, lauren, that I did not (because I already have the cal_ref_cats at NERSC):

            ingestReferenceCatalog.py /scratch/lauren/rerun/DC1/ -C config/IngestIndexedReferenceTask.py
            

            This seems like an obvious difference in our steps. Can I do the same thing just for completeness? Where does config/IngestIndexedReferenceTask.py come from? In the production area at NERSC, I see a ref_cats/cal_ref_cat/config.py:

            import lsst.meas.algorithms.ingestIndexReferenceTask
            assert type(config)==lsst.meas.algorithms.ingestIndexReferenceTask.DatasetConfig, \
                'config is of type %s.%s instead of lsst.meas.algorithms.ingestIndexReferenceTask.DatasetConfig' % (type(config).__module__, type(config).__name__)
            # String to pass to the butler to retrieve persisted files.
            config.ref_dataset_name='cal_ref_cat'
            # Depth of the HTM tree to make.  Default is depth=7 which gives
            #               ~ 0.3 sq. deg. per trixel.
            config.indexer['HTM'].depth=8
            config.indexer.name='HTM'
            

            - Heather Kelly
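
            As a sanity check on the depth comment in that config: HTM level d tiles the sky into 8 * 4**d trixels, so the mean trixel area follows directly (a quick back-of-the-envelope calculation, not stack code):

            # whole sky is ~41253 sq. deg.; HTM depth d has 8 * 4**d trixels
            for depth in (7, 8):
                print(depth, 41253.0 / (8 * 4**depth))
            # depth=7 gives ~0.31 sq. deg. (matching the comment); depth=8 gives ~0.08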

            Ah, sorry, I removed the "rerun", so the data are at /scratch/lauren/DC1/ and the plots are in /scratch/lauren/DC1/rerun/DM-11452/plots/.

            The ingest config came from /home/jchiang/DC1/extracted/config/IngestIndexedReferenceTask.py.

            See this community post for info on the shared stack.

            - Lauren MacArthur

            lauren is there a missing argument in the ingestReferenceCatalog.py call above? I'm getting an error when I try this at NERSC:
            ingestReferenceCatalog.py: error: the following arguments are required: files

            As you can probably guess, I haven't run this before.
            I'm following this example, supplying my own input directory (a copy of Jim's "extracted" directory) and pointing to my copy of the IngestIndexedReferenceTask.py:
            ingestReferenceCatalog.py /scratch/lauren/rerun/DC1/ -C config/IngestIndexedReferenceTask.py

            - Heather Kelly

            Yes, I think the command I used was:

            ingestReferenceCatalog.py /scratch/lauren/DC1 dc1_reference_catalog_8deg_radius.txt.gz -C config/IngestIndexedReferenceTask.py
            

            - Lauren MacArthur

            And FYI, the -h option on the command-line tasks is often useful:

            $ ingestReferenceCatalog.py -h
            usage: ingestReferenceCatalog.py input [options]
             
            positional arguments:
              input                 path to input data repository, relative to
                                    $PIPE_INPUT_ROOT
              files                 Names of files to index
             
            optional arguments:
              -h, --help            show this help message and exit
              --calib RAWCALIB      path to input calibration repository, relative to
                                    $PIPE_CALIB_ROOT
              --output RAWOUTPUT    path to output data repository (need not exist),
                                    relative to $PIPE_OUTPUT_ROOT
              --rerun [INPUT:]OUTPUT
                                    rerun name: sets OUTPUT to ROOT/rerun/OUTPUT;
                                    optionally sets ROOT to ROOT/rerun/INPUT
            ...etc...
            

            - Lauren MacArthur

            I took a copy of Jim's directory on lsst-dev01 and brought it back to NERSC, then ran ingestReferenceCatalog.py using the version available in my original w_2017_41 build with gcc 4.9.3. Then I re-ran visitAnalysis using the branches of obs_lsstSim and pipe_analysis. This time, it appears the numbers on the plots match. I haven't looked through all the plots yet, but I think this bit is solved. I'll try running the other scripts and then I should be able to close this out.

            There is still a little mystery as to why the copy of the DC1 reference catalogs we have at NERSC didn't produce the same results. jchiang & cwalter, do you have any suggestions? I'm really not familiar with how this data is organized or what the specific files represent (or how the copy at NCSA was pulled over from NERSC), so from my perspective this is an opportunity to understand the data better, but it also means I would need some help to figure this out.

            - Heather Kelly

            Sorry, just re-read the first paragraph in your entry. I think that if you get identical results using those reference catalogs, we're done with this issue. We can do a direct comparison of the reference catalogs currently in the repo vs the ones you newly generated, but understanding the origin of those differences is not something we need to do to close this issue.

            - James Chiang (edited)
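
            For anyone who does want that direct comparison, a minimal sketch (the paths and shard filename are hypothetical; afw catalogs can be read back with readFits):

            from lsst.afw.table import SimpleCatalog
            # read one corresponding shard from each copy
            cat1 = SimpleCatalog.readFits("ncsa_copy/ref_cats/cal_ref_cat/189584.fits")
            cat2 = SimpleCatalog.readFits("nersc_copy/ref_cats/cal_ref_cat/189584.fits")
            print(len(cat1), len(cat2))
            # spot-check, e.g., the coordinate columns
            print((cat1["coord_ra"] == cat2["coord_ra"]).all())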

            I'm sufficiently satisfied for the purposes of this issue. I've attached tarballs from the run at NERSC. I ran all 4 scripts, and all the plots now match those produced by lauren at NCSA. Feel free to proceed! I'll talk to jchiang separately about the reference catalogs outside this JIRA.

            - Heather Kelly

            The testing at NERSC is complete and we are able to reproduce the results obtained by running these scripts at NCSA.

            - Heather Kelly

            lauren Can you remind me:

            Do you have a list of things you had to turn off because the output wasn't available in V13 but is now in the stack?

            - Chris Walter

            cwalter, most of the accommodations & plot skipping were based on flags not present in the old schemas. A new reprocessing should get them by default.

            Have a look here for the plugins specifically added in obs_subaru:
            https://github.com/lsst/obs_subaru/blob/master/config/processCcd.py#L70-L82
            (and all the associated loaded config files)
            https://github.com/lsst/obs_subaru/blob/master/config/processCcd.py#L91-L92

            and for coadd measurement, have a look at:
            https://github.com/lsst/obs_subaru/blob/master/config/measureCoaddSources.py

            - Lauren MacArthur
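
            For a flavor of what those overrides look like, a hypothetical excerpt in the same config-override style (the plugin names are illustrative; see the linked files for the real list):

            # processCcd.py-style override: enable extra measurement plugins
            config.calibrate.measurement.plugins.names |= ["base_Jacobian", "base_FPPosition"]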

            Thanks all. Merged to master.

            - Lauren MacArthur

            I just noticed that, inadvertently, the obs_lsstSim branch was never merged to master. I will do that tomorrow (to give anyone who may want to object a chance to chime in).

            - Lauren MacArthur

            Hi lauren, so we can expect the changes to appear in the next weekly?

            - Heather Kelly

            Yes...shall I go ahead and merge?

            - Lauren MacArthur

            The obs_lsstSim branch has now been merged to master.

            - Lauren MacArthur

            People

              Assignee: Lauren MacArthur
              Reporter: Lauren MacArthur
              Reviewers: Heather Kelly
              Watchers (5): Chris Walter, Heather Kelly, James Chiang, John Swinbank, Lauren MacArthur
              Votes: 0

