# Adapt qa analysis scripts to run on DESC DC1 simulations output


#### Details

• Type: Story
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s:
• Labels: None
• Story Points: 4
• Sprint: DRP F17-3, DRP F17-4, DRP F17-5, DRP F17-6
• Team: Data Release Production

#### Description

To date, the qa analysis scripts have only been run and tested on HSC data. As such, it is almost certain that some HSC-isms have been unwittingly baked into the code. This ticket involves running the scripts on DESC DC1 simulation output and making any adaptations required. The testing will be done at the visit and coadd levels for the single band (r) of the DC1 run. This will be a significant step towards generalizing the scripts to run on any LSST-stack-processed dataset.

#### Attachments

1. (4.84 MB)
2. DC1_qaVisitPlots.tar.gz (15.35 MB)
3. Org_plot-v1993939-deconvMom-psfMagHist.png (112 kB)
4. plot-v1993939-deconvMom-psfMagHist.png (106 kB)
5. (4.89 MB)
6. runAtNersc-visitPlots.tar.gz (15.32 MB)

#### Activity

John Swinbank added a comment -

Lauren MacArthur — I'm adding this to the August sprint since I think it's likely an ongoing activity, but that leaves you quite overloaded for the month. Feel free to split your effort between this and DM-11312 as you think appropriate.

Lauren MacArthur added a comment -

Here are the visit-level qa plots for the DC1 data that were copied to /home/jchiang/DC1/extracted/ on lsst-dev for me to work with.

Lauren MacArthur added a comment -

Sorry for the long delay, but this is finally ready for someone at DESC to have a go with. I've run the scripts on the visit- and coadd-level data in /home/jchiang/DC1/extracted/ on lsst-dev. I actually made a copy of the data in /scratch/lauren/rerun/DC1/ (so feel free to delete it from your home directory, James Chiang). In order to extract the reference catalogs, I first ran

 ingestReferenceCatalog.py /scratch/lauren/rerun/DC1/ -C config/IngestIndexedReferenceTask.py 

in that directory to get the ref_cats/cal_ref_cat/ directories.

To run the scripts on lsst-dev, I performed the following steps to set up and use the shared stack (note that I have not tested the scripts on versions of the stack much older than the current weekly):

    source /software/lsstsw/stack/loadLSST.bash
    setup lsst_distrib

Do the following from whatever directory you use for development:

    git clone git@github.com:LSST/obs_lsstSim.git  # if you don't already have it!
    cd obs_lsstSim
    git checkout tickets/DM-11452
    setup -v -r . -j
    scons opt=3 -j4
    cd ..
    git clone git@github.com:LSST-DM/pipe_analysis.git
    cd pipe_analysis
    git checkout tickets/DM-11452
    setup -v -r . -j
    scons opt=3 -j4

Assuming that all went smoothly, you can run the visit analysis script as follows:

 visitAnalysis.py ROOTDIR --rerun RERUNDIR --id visit=1993939 --tract=0 --config doApplyUberCal=False analysis.visitClassFluxRatio=0.925 refObjLoader.ref_dataset_name="cal_ref_cat" 

You can also specify specific raft/sensor combos in the --id (e.g. --id visit=1993939 raft=0,2 sensor=1,1^2,0). Leaving them out will result in all available calexps for the supplied visit being included.
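As an illustration of how such an --id specification breaks down, here is a small pure-Python sketch (this is not the actual LSST butler parser; it just mimics the syntax used above, where `^` separates alternative values for a key):

```python
# Sketch of butler-style --id parsing: 'key=value' tokens, with '^'
# separating alternative values for the same key (e.g. two sensors).
def parse_data_id(spec):
    """Parse 'key=value [key=value ...]' into {key: [values...]}."""
    data_id = {}
    for token in spec.split():
        key, _, value = token.partition("=")
        data_id[key] = value.split("^")
    return data_id

print(parse_data_id("visit=1993939 raft=0,2 sensor=1,1^2,0"))
# -> {'visit': ['1993939'], 'raft': ['0,2'], 'sensor': ['1,1', '2,0']}
```

So the example above selects one raft and two sensors within visit 1993939.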
So, in my case I ran:

 visitAnalysis.py /scratch/lauren/DC1 --rerun DM-11452/ --id visit=1993939 --tract=0 --config doApplyUberCal=False analysis.visitClassFluxRatio=0.925 refObjLoader.ref_dataset_name="cal_ref_cat" 

I use the --rerun file structure, which puts the output in ROOTDIR/rerun/RERUNDIR,
which in my case evaluates to /scratch/lauren/DC1/rerun/DM-11452. The figures will end up in a plots subdirectory of the rerun, e.g. /scratch/lauren/DC1/rerun/DM-11452/plots. You could use --output instead of --rerun, but the comparison scripts currently assume the rerun structure (see below).
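The path convention can be written down explicitly; a trivial sketch (the helper name is mine, not part of the stack):

```python
import os

# Sketch: the output repository location implied by the --rerun convention,
# i.e. ROOTDIR/rerun/RERUNDIR, with figures in a "plots" subdirectory.
def rerun_paths(root, rerun):
    out = os.path.join(root, "rerun", rerun)
    return out, os.path.join(out, "plots")

out, plots = rerun_paths("/scratch/lauren/DC1", "DM-11452")
print(out)    # /scratch/lauren/DC1/rerun/DM-11452
print(plots)  # /scratch/lauren/DC1/rerun/DM-11452/plots
```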

The two config parameters override defaults that are applicable to HSC. The first tells the code not to try to apply the uber calibration from meas_mosaic; the second is the flux ratio that delineates the star/galaxy classification (it is considered in plotting limits; obs_subaru overrides this to 0.95, so that is what I use for the default).
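For repeated runs it can be convenient to keep the non-HSC overrides in one place and render them onto the command line. A minimal pure-Python sketch (the override names are the ones from the commands above; the helper function is mine):

```python
# Sketch: collect the DC1-specific config overrides used above and render
# them as --config arguments for visitAnalysis.py.
overrides = {
    "doApplyUberCal": False,                 # no meas_mosaic ubercal for DC1
    "analysis.visitClassFluxRatio": 0.925,   # star/galaxy flux-ratio boundary
    "refObjLoader.ref_dataset_name": "cal_ref_cat",
}

def render_config_args(overrides):
    """Turn a dict of overrides into key=value strings for --config."""
    return ["{}={}".format(k, v) for k, v in overrides.items()]

print("--config " + " ".join(render_config_args(overrides)))
```

(Note that on a real shell the string value would typically be quoted, as in the commands above.)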

Similarly, the coadd analysis script can be run as:

 coaddAnalysis.py /scratch/lauren/DC1 --rerun DM-11452/ --id tract=0 patch=18,13 filter=r --config refObjLoader.ref_dataset_name="cal_ref_cat" 

Finally, I have tested that the comparison scripts compareVisitAnalysis.py and compareCoaddAnalysis.py also run on the DC1 data. The purpose of these scripts is to compare two separate processing runs of the same dataset. To accommodate this, they take a --rerun2 input parameter. Since I didn't have two separate runs to compare, I just ran them by comparing this one with itself, so my command looked like:

 compareVisitAnalysis.py /scratch/lauren/DC1 --rerun DM-11452/ --rerun2 DM-11452/ --id visit=1993939 --tract=0 --config refObjLoader.ref_dataset_name="cal_ref_cat" --config doApplyUberCal1=False doApplyUberCal2=False 

Lauren MacArthur added a comment -

I've attached tarballs of the plots created with the above.

Chris Walter, is there someone you could ask to test out the above on some more DC1 data?

Chris Walter added a comment -

Thanks Lauren!

OK, will try to pass this on.

Lauren MacArthur added a comment -

Heather, can you please give this a pass once you're satisfied the scripts are running as advertised on the DESC DC1 data?

Heather Kelly added a comment -

We can run the various scripts and are now trying to compare our results against the runs Lauren completed above. We've seen some discrepancies in the numbers and just want to make sure this is either expected or can be explained. At NERSC we're on Haswell with gcc 4.9.3, using Py3 along with the w.2017.41 install of lsst_distrib with the obs_lsstSim and pipe_analysis branches as described above. Jim told me that our data should be identical, as you were running using the copy made from NERSC. I ran the scripts; in the case of the compare scripts, I used the tarballs Lauren MacArthur attached to this JIRA.

 visitAnalysis.py /global/projecta/projectdirs/lsst/production/DC1/DM/DC1-imsim-dithered --rerun /global/cscratch1/sd/heatherk/qa_dm_11452/rerun --id visit=1993939 --tract=0 --config doApplyUberCal=False analysis.visitClassFluxRatio=0.925 refObjLoader.ref_dataset_name="cal_ref_cat" --no-versions 

 compareVisitAnalysis.py /global/projecta/projectdirs/lsst/production/DC1/DM/DC1-imsim-dithered --rerun /global/cscratch1/sd/heatherk/qa_dm_11452/rerun --rerun2 /global/cscratch1/sd/heatherk/qa_dm_11452/jira --id visit=1993939 --tract=0 --config doApplyUberCal1=False doApplyUberCal2=False --config refObjLoader.ref_dataset_name="cal_ref_cat" --no-versions                                                                              

 coaddAnalysis.py /global/projecta/projectdirs/lsst/production/DC1/DM/DC1-imsim-dithered --rerun /global/cscratch1/sd/heatherk/qa_dm_11452/rerun --id tract=0 patch=18,13 filter=r --config refObjLoader.ref_dataset_name="cal_ref_cat" --no-versions                        

 compareCoaddAnalysis.py /global/projecta/projectdirs/lsst/production/DC1/DM/DC1-imsim-dithered --rerun /global/cscratch1/sd/heatherk/qa_dm_11452/rerun --rerun2 /global/cscratch1/sd/heatherk/qa_dm_11452/jira --id tract=0 patch=18,13 filter=r --config refObjLoader.ref_dataset_name="cal_ref_cat" --no-versions 

I'll attach some images in a moment.

Heather Kelly added a comment - edited

I just attached two images, one from Lauren's tarballs (marked "Org_") and the other from running at NERSC. There are different values for Ntotal and the various stats. I can find similar differences in other plots. Is this expected?

Lauren MacArthur added a comment -

Odd... I have run these scripts on the dataset I have on py2 (on tiger here at Princeton) and on py2 & py3 (on lsst-dev, having set up the w_2017_41 stack) and get exactly the same results, so it does not seem to be a py2 vs. py3 issue. It looks like gcc=6.3.1 on lsst-dev, so that's the only setup difference I can see. We really would need to confirm somehow that our input catalogs are indeed identical to get to the bottom of this, but that is difficult given our system access issues. Do you have any suggestions?

Heather Kelly added a comment - edited

I would start by trying to compare the versions provided by conda list and eups list to see that we are indeed using the same package versions. It'd also be good to confirm we're using the same data. Jim indicated that what is on NCSA was copied over from NERSC - so I would expect that should be fine, but it wouldn't hurt to confirm that.
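That comparison can be mechanized once the listings are saved to text files at each site. A sketch, assuming each line of the saved `eups list` or `conda list` output starts with "name version" (the sample data below is purely illustrative):

```python
# Sketch: compare two saved "package version" listings and report
# packages whose versions differ between the two sites.
def parse_versions(text):
    """Map package name -> version from lines like 'name version ...'."""
    versions = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 2:
            versions[parts[0]] = parts[1]
    return versions

def version_diff(text_a, text_b):
    a, b = parse_versions(text_a), parse_versions(text_b)
    return {pkg: (a.get(pkg), b.get(pkg))
            for pkg in sorted(set(a) | set(b))
            if a.get(pkg) != b.get(pkg)}

nersc = "afw 14.0-5\nmeas_algorithms 14.0-3\n"        # illustrative only
lsst_dev = "afw 14.0-5\nmeas_algorithms 14.0-4\n"     # illustrative only
print(version_diff(nersc, lsst_dev))
# -> {'meas_algorithms': ('14.0-3', '14.0-4')}
```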

I have a shiny new lsst-dev account and am logged in. /scratch/lauren/rerun/DC1/ does not seem to exist (or I can't view it), but I see other content under /scratch/lauren/rerun/. Is the data still there?
I should figure out how to set up the shared stack. I recall there was a post on community about some recent updates, I'll take a peek. Then I can try to view package versions. Is it /ssd/lsstsw/stack3_20171021?

I'll also start up a rebuild at NERSC using gcc6.

Heather Kelly added a comment -

Getting back to the data... James Chiang's copy is in /home/jchiang/DC1/extracted, and I can view that. What we have at NERSC is in /global/projecta/projectdirs/lsst/production/DC1/DM/DC1-imsim-dithered;
drilling down, I can find the v1993939-fr data, which is what I believe we are using.

Now there was a step that you did Lauren MacArthur, that I did not (because I already have the cal_ref_cats at NERSC):

 ingestReferenceCatalog.py /scratch/lauren/rerun/DC1/ -C config/IngestIndexedReferenceTask.py 

This seems like an obvious difference in our steps. Can I do the same thing just for completeness? Where does config/IngestIndexedReferenceTask.py come from? In the production area at NERSC, I see a ref_cats/cal_ref_cat/config.py:

    import lsst.meas.algorithms.ingestIndexReferenceTask
    assert type(config)==lsst.meas.algorithms.ingestIndexReferenceTask.DatasetConfig, \
        'config is of type %s.%s instead of lsst.meas.algorithms.ingestIndexReferenceTask.DatasetConfig' \
        % (type(config).__module__, type(config).__name__)
    # String to pass to the butler to retrieve persisted files.
    config.ref_dataset_name='cal_ref_cat'
    # Depth of the HTM tree to make.  Default is depth=7 which gives
    # ~0.3 sq. deg. per trixel.
    config.indexer['HTM'].depth=8
    config.indexer.name='HTM'
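As a sanity check on the comment in that config: HTM starts from 8 spherical triangles and each level subdivides every triangle into 4, so the area per trixel at a given depth is just the sky area divided by 8 × 4^depth. A quick check:

```python
# Sketch: area per HTM trixel at a given depth, checking the config.py
# comment that depth=7 gives ~0.3 sq. deg. per trixel.
FULL_SKY_SQ_DEG = 41252.96  # 4*pi steradians expressed in square degrees

def trixel_area(depth):
    """HTM has 8 base triangles; each level subdivides every triangle by 4."""
    return FULL_SKY_SQ_DEG / (8 * 4 ** depth)

print(round(trixel_area(7), 3))  # ~0.315 sq. deg., matching the comment
print(round(trixel_area(8), 4))  # ~0.0787 sq. deg. at the depth used here
```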

Lauren MacArthur added a comment -

Ah, sorry, I removed the "rerun", so the data are at /scratch/lauren/DC1/ and the plots are in /scratch/lauren/DC1/rerun/DM-11452/plots/.

The ingest config came from /home/jchiang/DC1/extracted/config/IngestIndexedReferenceTask.py.

See this community post for info on the shared stack.

Heather Kelly added a comment -

Lauren MacArthur is there a missing argument in the ingestReferenceCatalog.py call above? I'm getting an error when I try this at NERSC:
ingestReferenceCatalog.py: error: the following arguments are required: files

As you can probably guess, I haven't run this before.
I'm following this example, supplying my own input directory (a copy of Jim's "extracted" directory) and pointing to my copy of IngestIndexedReferenceTask.py.

Lauren MacArthur added a comment -

Yes, I think the command I used was:

 ingestReferenceCatalog.py /scratch/lauren/DC1 dc1_reference_catalog_8deg_radius.txt.gz -C config/IngestIndexedReferenceTask.py 

Lauren MacArthur added a comment -

And FYI, the -h option on the command-line tasks is often useful:

    $ ingestReferenceCatalog.py -h
    usage: ingestReferenceCatalog.py input [options]

    positional arguments:
      input                 path to input data repository, relative to $PIPE_INPUT_ROOT
      files                 Names of files to index

    optional arguments:
      -h, --help            show this help message and exit
      --calib RAWCALIB      path to input calibration repository, relative to $PIPE_CALIB_ROOT
      --output RAWOUTPUT    path to output data repository (need not exist), relative to $PIPE_OUTPUT_ROOT
      --rerun [INPUT:]OUTPUT
                            rerun name: sets OUTPUT to ROOT/rerun/OUTPUT; optionally sets ROOT to ROOT/rerun/INPUT
    ...etc...

Heather Kelly added a comment -

I took a copy of Jim's directory on lsst-dev01 and brought it back to NERSC, then ran ingestReferenceCatalog.py using the version available in my original w_2017_41 install built with gcc 4.9.3. Then I re-ran visitAnalysis using the branches of obs_lsstSim and pipe_analysis; this time, it appears the numbers on the plots match. I haven't looked through all the plots yet, but I think this bit is solved. I'll try running the other scripts and then I should be able to close this out.

There is still a little mystery as to why the copy of the DC1 reference catalogs we have at NERSC didn't produce the same results. James Chiang & Chris Walter, do you have any suggestions? I'm really not familiar with how this data is organized or what the specific files represent (or how the copy at NCSA was pulled over from NERSC), so from my perspective this is an opportunity to understand the data better, but it also means I would need some help to figure this out.

James Chiang added a comment - edited

Sorry, just re-read the first paragraph in your entry. I think that if you get identical results using those reference catalogs, we're done with this issue. We can do a direct comparison of the reference catalogs currently in the repo vs the ones you newly generated, but understanding the origin of those differences is not something we need to do to close this issue.
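One way to do such a direct comparison, independent of any stack tooling, is to checksum every file under the two catalog directories and report mismatches. A sketch (the directory paths you would pass in are the actual ref_cats/cal_ref_cat trees; nothing here is LSST-specific):

```python
import hashlib
import os

# Sketch: per-file SHA-256 checksums over two directory trees, to verify
# whether two copies of a reference catalog are byte-identical.
def checksum_tree(root):
    """Map path-relative-to-root -> SHA-256 hex digest of file contents."""
    sums = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                sums[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return sums

def differing_files(root_a, root_b):
    """Relative paths that are missing from one tree or differ in content."""
    a, b = checksum_tree(root_a), checksum_tree(root_b)
    return sorted(rel for rel in set(a) | set(b) if a.get(rel) != b.get(rel))
```

An empty result from `differing_files` would confirm the two copies are identical; any entries point directly at the files to inspect.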

Heather Kelly added a comment -

I'm sufficiently satisfied for the purposes of this issue. I've attached tarballs from the run at NERSC. I ran all four scripts, and all the plots now match those produced by Lauren MacArthur at NCSA. Feel free to proceed! I'll talk to James Chiang separately about the reference catalogs outside this JIRA.

Heather Kelly added a comment -

The testing at NERSC is complete and we are able to reproduce the results obtained by running these scripts at NCSA.

Chris Walter added a comment -

Lauren MacArthur, can you remind me: do you have a list of things you had to turn off because the output wasn't available in V13 but is now in the stack?

Lauren MacArthur added a comment -

Chris Walter, most of the accommodations and plot skipping were due to flags not present in the old schemas. A new reprocessing should get them by default.

Have a look here for the plugins specifically added in obs_subaru:
https://github.com/lsst/obs_subaru/blob/master/config/processCcd.py#L70-L82
(and all the associated loaded config files)
https://github.com/lsst/obs_subaru/blob/master/config/processCcd.py#L91-L92

and for coadd measurement, have a look at:
https://github.com/lsst/obs_subaru/blob/master/config/measureCoaddSources.py
Lauren MacArthur added a comment -

Thanks all. Merged to master.

Lauren MacArthur added a comment -

I just noticed that the obs_lsstSim branch was inadvertently never merged to master. I will do that tomorrow (to give anyone who may want to object a chance to chime in).

Heather Kelly added a comment -

Hi Lauren MacArthur, so we can expect the changes to appear in the next weekly?

Lauren MacArthur added a comment -

Yes...shall I go ahead and merge?

Lauren MacArthur added a comment -

The obs_lsstSim branch has now been merged to master.


#### People

Assignee: Lauren MacArthur
Reporter: Lauren MacArthur
Reviewers: Heather Kelly
Watchers: Chris Walter, Heather Kelly, James Chiang, John Swinbank, Lauren MacArthur