DM-32624: Override source selection criteria in SFM calibration stages for LSSTCam-imSim


    Details

    • Story Points:
      5
    • Team:
      Data Release Production
    • Urgent?:
      No

      Description

      In the analysis done on DM-16875, the suggestion was made to override the default config settings for the source selection criteria used to deem a given source a suitable candidate for the PSF modeling. In particular, the default setting of a minimum flux of 12500 (raw counts, I believe) seems way too high and, ideally, the cut would not be based on flux but rather on S/N. Justification for this, based on HSC data, as well as the S/N thresholding functionality itself, was added on DM-17043. [Note that the fluxMin value has long been overridden to 4000 in obs_subaru.]

      It seemed this change was widely agreed upon by those involved in the imsim/DC2 Run2.2i processing and was adopted, in addition to several other source selection overrides for other calibration phases, in at least some of the DESC workflows (see, e.g., the overrides in their "dc2/run2.2" branch: https://github.com/lsst/obs_lsst/blob/dc2/run2.2/config/imsim/processCcd.py#L39-L42
      and note the other overrides in that file). Unfortunately, during an investigation on PREOPS-885, it was noted that these overrides have not found their way into the master/main config files in obs_lsst, so they are not currently in effect for any processing based off the master/main configs. This situation should be rectified.
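      For concreteness, the kind of override at issue looks like the following in a processCcd.py config file (a minimal sketch assembled from the config fields discussed on this ticket; the values are illustrative, and the authoritative set is the linked dc2/run2.2 file):

          # Select PSF-model candidates on S/N rather than a raw-flux floor
          # (field names as discussed on this ticket; values illustrative).
          config.charImage.measurePsf.starSelector["objectSize"].doFluxLimit = False
          config.charImage.measurePsf.starSelector["objectSize"].doSignalToNoiseLimit = True
          config.charImage.measurePsf.starSelector["objectSize"].signalToNoiseMin = 20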

      Some care may be required in choosing the actual S/N threshold and in deciding whether the other updates in the file pointed to above should also be adopted. In particular, I would love to hear from anyone involved in the DC2 processing campaigns as to the validation of those choices (I added a few watchers as a first guess, but please do add/remove as you see fit!). I also have no idea what was used in the DP0.1 processing run, so if anyone could chime in on that, it would be greatly appreciated!


            Activity

            Lauren MacArthur added a comment -

            I have attached a couple of plots based on our most recent DC2 processing run (DM-32071).  These are the equivalents of the HSC-based ones presented on DM-17043.  The histograms indicate the distributions for two fluxMin values (the default 12500 and the override of 4000) and which stars were actually used in the PSF modeling.  There are dashed vertical lines at S/N = 20 and 100 (just to guide the eye).

            The u-band is likely to be the most affected, and is often where SFM failures due to poor PSF modeling occur (see examples on PREOPS-855), so I'll highlight a randomly selected u-band visit here (the labels got cut off in the plot...the visit is 277060):

            As you can see, the flux > 12500 cut roughly coincides with S/N ~ 70, so we are missing out on a significant number of reasonably high-S/N sources.

            And here is a randomly selected g-band visit (193827):

            Here the flux cut roughly coincides with S/N ~ 50, so we are not missing as significant a fraction.

            Feel free to peruse these plots for any visit in the DC2 processing at:
            https://lsst.ncsa.illinois.edu/~lauren/DC2.2i_gen2/w_2021_40/plots/
            and the plots of interest are in:
            BAND/tract-3829/visit-NNNN/plot-vNNNN-BAND-base_PsfFluxSn_raw-hist.png
            where BAND is the single-character band label and NNNN is the visit number.
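            For example, the plot for the u-band visit highlighted above should (presumably, given it lies in tract 3829) be at:

                u/tract-3829/visit-277060/plot-v277060-u-base_PsfFluxSn_raw-hist.png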

            The value used in the workflow I point to in the description was S/N > 20 which, based on these plots, may seem a bit on the low side, but if there has been some battle testing using it (in particular, demonstrating that it at least does no harm, but ideally also provides better PSF models/fewer failures in some cases), I'm all ears!

            Eli Rykoff added a comment -

            A couple of points of reference, which might be relevant. First of all, for DES we have used S/N > 20 (using Piff, not PSFEx) for the PSF star selection.

            Second of all, these plots are showing the histogram of the PSF (inst) flux signal-to-noise ratio, but I don't think that's what's being selected on for input to the PSF modeling (since we don't have PSF models or fluxes yet). Instead, the default selection is on base_GaussianFlux_instFlux (https://github.com/lsst/meas_algorithms/blob/master/python/lsst/meas/algorithms/objectSizeStarSelector.py#L94), but I don't know how that S/N compares to the one in these histograms.

            Lauren MacArthur added a comment -

            Arghh...good point, Eli!  I will try to remake a couple of those histograms using base_GaussianFlux_instFlux.

            James Chiang added a comment -

            For DESC's DC2 processing of Run2.2i, we did indeed use the configs that are currently in that "dc2/run2.2" branch of obs_lsst, so that's also what DP0.1 corresponds to since those data are just a copy of our DC2 outputs.

            As far as validation or battle testing is concerned, what I recall was rather informal: when we'd find issues in the processing at CC-IN2P3 (most commonly these would be failures in the processing), we'd report them and someone on the DM side would follow up (usually you, Lauren, or Eli) and come up with a fix.  We'd apply those fixes to the places where we ran into the problems and, if things ran smoothly, we adopted them.  In some cases, we found problems in the outputs from our validation efforts, e.g., systematic offsets in the photometry; once a mitigation was found, we would rerun the original validation test, along with the other diagnostics we came up with to narrow down the problem, and, if all looked good, adopted the fixes.  Unfortunately, most of the discussion on these issues was only captured in Slack, in channels like #desc-dc2-validation.

            Lauren MacArthur added a comment - edited

            Indeed, the GaussianFlux plots look different and will help assess the thresholds.  Here is the u-band one from above:

            I will make these for the entire 3829 tract of the DC2 dataset to get a good feeling for the full range (per visit and band).

            Lauren MacArthur added a comment -

            Thanks so much for the comments, James Chiang!  Given your confirmation, we are going to go ahead and incorporate the config overrides in the "dc2/run2.2" branch of obs_lsst into the master/main imsim configs.  Just to make sure: can you think of anything else in your workflow that veered from "vanilla"?  For timing reasons, we are most concerned about the SFM stages, but we of course want to know if there were any deviations from current defaults for the coadd stages that we should also be merging into master/main.

            James Chiang added a comment -

            As far as SFM goes, I think the configs in that "dc2/run2.2" branch of obs_lsst are the ones we used.   However, there are additional ones here that apply later in the processing:  https://github.com/LSSTDESC/ImageProcessingPipelines/tree/dc2/run2.2/config

            Johann Cohen-Tanugi is the best person to speak to those, as well as confirm the SFM configs we've discussed.

            Lauren MacArthur added a comment - edited

            Great, thanks again.  I am starting to put the SFM overrides into master now, but will be ready to jump if Johann Cohen-Tanugi does not confirm that they are correct!

            Lauren MacArthur added a comment -

            Eli Rykoff: a quick follow-up on GaussianFlux- vs. PsfFlux-based selection for PSF modeling.  While, as you point out, the default is set to the former, it is actually overridden to the latter in both repos:
            https://github.com/lsst/obs_subaru/blob/master/config/charImage.py#L18
            https://github.com/lsst/obs_lsst/blob/master/config/characterizeImage.py#L43

            I'm guessing this "PSF flux" is based on whatever simple/placeholder PSF was used in isr and comes from the icSrc catalogs? This means that neither of the values I've plotted in the histograms is correct...I need to go to the icSrc catalogs and use the base_PsfFlux measurements!
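            For the record, here is a rough sketch of the remake I have in mind (the repo/collection strings and the detector are placeholders, and the dataset type and column names are the ones discussed above; treat it as illustrative, not as the exact plotting code behind the attachments):

                import matplotlib.pyplot as plt
                from lsst.daf.butler import Butler

                # "REPO" and "COLLECTION" are placeholders for the actual DC2 repo/collection.
                butler = Butler("REPO", collections="COLLECTION")
                # icSrc: the characterization-stage catalog, in which base_PsfFlux was
                # measured with the initial simple PSF (detector chosen arbitrarily).
                icSrc = butler.get("icSrc", instrument="LSSTCam-imSim", visit=277060, detector=42)

                sn = icSrc["base_PsfFlux_instFlux"] / icSrc["base_PsfFlux_instFluxErr"]
                plt.hist(sn[sn > 0], bins=100, range=(0, 200), histtype="step")
                plt.axvline(20, ls="--")   # guide-the-eye lines, as in the attached plots
                plt.axvline(100, ls="--")
                plt.xlabel("base_PsfFlux S/N (icSrc)")
                plt.show()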

            Lauren MacArthur added a comment - edited

            I have updated the configs to match the values in this file, with a few minor caveats (also noted in the commit message):

            The real change is that the following override has not been included:

            config.charImage.measurePsf.starSelector["objectSize"].signalToNoiseMin = 20
            

            Originally, this was simply because 20 was the default (and it is preferable not to include supposed overrides in config files when the value is being set to the default). However, on further inspection (see details on DM-17043 and below), the catalogs in play for measurePsf are created with detection thresholds high enough that typically almost no sources with S/N ≲ 50 actually exist. This low value is not only misleading as to the range actually accessible, but also allows the occasional lower-than-expected S/N source to pass the thresholding tests for consideration in the PSF modeling. The default has thus been changed to 50, and this is still appropriate for LSSTCam-imSim/DC2 processing.
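            In other words, the change amounts to something like the following in the meas_algorithms selector config (an abridged sketch only; the real field sits in the full ObjectSizeStarSelectorConfig with its complete doc string):

                import lsst.pex.config as pexConfig

                class ObjectSizeStarSelectorConfig(pexConfig.Config):  # abridged sketch
                    signalToNoiseMin = pexConfig.Field(
                        doc="Minimum signal-to-noise for sources to be PSF candidates.",
                        dtype=float,
                        default=50.0,  # raised from 20.0 per the detection-threshold argument above
                    )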

            Three other overrides that were setting values to their defaults were also omitted:

            config.charImage.measureApCorr.sourceSelector['science'].signalToNoise.maximum = None
            config.charImage.measureApCorr.sourceSelector['science'].signalToNoise.fluxField = 'base_PsfFlux_instFlux'
            config.charImage.measureApCorr.sourceSelector['science'].signalToNoise.errField = 'base_PsfFlux_instFluxErr'
            

            With these new configs set, I ran step1, plus the consolidateSourceTable and consolidateVisitSummary tasks of step2, of the imsim DRP.yaml pipeline. Additionally, in order to be able to make the direct run-comparison & other relevant pipe_analysis plots not yet included in drp_analysis, I ran a Gen2 SFP on the visits in tract 3829 (this also confirms continued SFM Gen2/Gen3 parity). So, the first set of plots I show below are from pipe_analysis, while the PSF residual distribution plots are based solely on Gen3 runs.

            Also, note that all plots here are based on the S/N min > 20 config setting for model-PSF source selection. The realization and decision to push this to 50 came in hindsight of (and in light of) the processing runs. I have kicked off another Gen3 test-med-1 run with the S/N min > 50 override set and will post the updated plots when they come in (but I don't expect significant differences).

            Ok, so as Eli Rykoff pointed out, the fluxes plotted in the (now old) versions of the source S/N distribution plots attached are actually not representative of what went into the PSF modeling. They show the PsfFlux values from the src catalogs, but, of course, these are not the same as what went into the PSF modeling (as they aren't available without the PSF model!). Rather, what should be plotted are the PsfFlux values from the icSrc catalogs (where the "PSF" is an initialized "simple" PSF). Here are the correct versions of the plots, with the fluxes coming from the icSrc catalogs:
            First for the w_2021_40 DC2 processing of DM-32071:

            Next for the run on this ticket's branches (with S/N>20):

            You can see that with the cut at S/N > 20, many more sources are available (and ultimately used) as potential PSF model candidates. The difference is most pronounced in u (and g), while the redder bands are much less affected by this; that is expected, as most of the PSF model-related failures reported have been in the bluer bands.

            Here is a look at some of the pre- and post-change distributions and direct comparisons for a given visit (selected at random...see here to peruse the rest of them):
            w_2021_40 DC2 processing of DM-32071:

            This ticket's branches (with S/N>20):

            w_2021_40 DC2 processing of DM-32071:

            This ticket's branches (with S/N>20):

            Direct comparisons:


            etc...let me know if I should directly post any others!

            To me, the differences look quite minor and it is difficult to rank the quality of the before and after, but the latter does have many more sources contributing to the PSF models in many cases (see below also), which I think is a good thing.

            Lauren MacArthur added a comment -

            For another look at the effects of these changes (and looking towards DM-32625, which is being greatly facilitated and generalized on DM-32649, not to mention how much the latter will facilitate the generation of the following plots!), here I present the detector-averaged PSF statistics: the median ellipticity residual and the scaled size scatter (see DM-32649 for details, and the sketch at the end of this comment for the rough form of these quantities).  The plots show a per-band comparison between the w40 run (solid-outline histograms) and the new S/N > 20 run of this ticket:

            Median E residual distributions:

            Scaled Delta Size Scatter distributions:

            Most notable observations:

            • the differences are largest for the u and g bands and go towards null in the redder bands
            • where there are differences, the distributions are still quite similar. Some differences are to be expected, given the large increase in calib_psf_used sources going into the models (by >30000 over 27 visits in u), and therefore into these plots.
            • if the current defaults are applied in DM-32625, very few detectors would actually get rejected in this processing. I think this may argue for a tightening of the thresholds in the context of these simulations. I will post the equivalent plots for the current settings in play for HSC for comparison.
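            For reference, here is a rough sketch of how per-detector statistics of this kind can be computed from a source catalog (the authoritative metric definitions are on DM-32649; the SdssShape column names and the distortion-ellipticity convention below are my assumptions):

                import numpy as np

                def detector_psf_stats(cat):
                    """Sketch of detector-averaged PSF-model residual statistics."""
                    used = cat["calib_psf_used"]  # stars that went into the PSF model
                    ixx, iyy, ixy = (cat["base_SdssShape_" + c][used] for c in ("xx", "yy", "xy"))
                    pxx, pyy, pxy = (cat["base_SdssShape_psf_" + c][used] for c in ("xx", "yy", "xy"))

                    # Distortion ellipticity components, differenced star - PSF model
                    t_src, t_psf = ixx + iyy, pxx + pyy
                    de1 = (ixx - iyy) / t_src - (pxx - pyy) / t_psf
                    de2 = 2.0 * ixy / t_src - 2.0 * pxy / t_psf
                    median_e_resid = np.median(np.hypot(de1, de2))

                    # Scatter of the fractional trace-radius ("size") residuals
                    size_src, size_psf = np.sqrt(t_src / 2.0), np.sqrt(t_psf / 2.0)
                    scaled_dsize_scatter = np.std((size_src - size_psf) / size_psf)
                    return median_e_resid, scaled_dsize_scatter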
            Lauren MacArthur added a comment - edited

            And here are the equivalent distributions for HSC data, which is currently selecting against the detectors that fail to meet the threshold criteria:

            (with apologies that the axis limits are different...they get auto-set by the maximum values in each distribution).

            Given the position of the thresholds vs. the general shape of the distributions, I do believe this argues for tighter thresholding on the imsim data (if we take the HSC settings as good guidance).  To be discussed further on DM-32625...

            Lauren MacArthur added a comment -

            Ok, this is ready for review.  I'm asking Yusra AlSayyad to do the formal review (as she will have to look at it regardless, to approve/deny for backporting), but it would be great if any of the DESC watchers (e.g., James Chiang, Johann Cohen-Tanugi, Heather Kelly, ...) would like to weigh in, given the one recommended slight deviation from their typical workflow for SFM (namely, changing the minimum S/N for the measurePsf source selector from 20 to 50).

            Jenkins is happy.

            The two PRs are:
            https://github.com/lsst/obs_lsst/pull/372
            https://github.com/lsst/meas_algorithms/pull/266

            James Chiang added a comment -

            I'm ok with these changes, though I don't have a very informed opinion on the change from 20 to 50 for the minimum S/N for the measurePsf source selector.  FWIW, the reasoning you describe makes sense to me.

            Yusra AlSayyad added a comment -

            The argument for S/N = 50 for ObjectSizeStarSelectorConfig makes sense, granted that CharacterizeImage is the ONLY place in the pipelines where we use ObjectSizeStarSelectorTask. This is actually the case (other than some PSF unit tests). So this is fine.

            If I were doing this from scratch, I prob would have stuck it

            # If using doSignalToNoiseLimit, SignalToNoiseMin should be >= detection threshold
            self.measurePsf.starSelector["objectSize"].signalToNoiseMin = 50
            

            right under where the detection threshold is effectively set to 50 (5.0 × 10.0) in characterizeImageConfig.setDefaults:

                self.detection.thresholdValue = 5.0
                self.detection.includeThresholdMultiplier = 10.0 
            

            But I haven't stared at this as long as you have, and a change in ObjectSizeStarSelectorConfig with a comment (which could be incorporated into the docstring) is fine.

            Lauren MacArthur added a comment -

            I totally agree with your suggestion and was about to implement it, but it occurred to me that it may not make sense to put it there when the default is to threshold on fluxMin, not S/N.  I plan to campaign to have that default changed to the latter, but that will require validating (or re-overriding) on the HSC side of things, so I think I will leave it for now and make that change if and when the default for charImage.measurePsf.starSelector["objectSize"].doSignalToNoiseLimit gets changed to True (probably along with turning charImage.measurePsf.starSelector["objectSize"].doFluxLimit to False, although they can both technically be in play).
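            i.e., the setDefaults placement would wait for a prospective default flip along these lines (a sketch only; this is not a change made on this ticket):

                # In CharacterizeImageConfig.setDefaults, hypothetically:
                self.measurePsf.starSelector["objectSize"].doFluxLimit = False
                self.measurePsf.starSelector["objectSize"].doSignalToNoiseLimit = True
                self.measurePsf.starSelector["objectSize"].signalToNoiseMin = 50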

            Lauren MacArthur added a comment -

            Also a huge +1 from me on your "random idea. Our config fields would benefit from a defaultRationale field. One day...."!!

            Lauren MacArthur added a comment -

            Thanks for the review, Yusra.  I have merged the two branches to master.  I also created the tickets/DM-32624-v23 branches according to the backporting instructions and have a Jenkins run going with:

                PRODUCTS: lsst_distrib lsst_ci ci_imsim ci_hsc
                REFS: tickets/DM-32624-v23 v23.0.x v23.0.0.rc1

            I will, of course, await the decision of the backporting board before creating the official PRs/merging.

            Lauren MacArthur added a comment -

            Thanks for reviewing the PRs, Yusra.  Backport now complete.


              People

              Assignee:
              lauren Lauren MacArthur
              Reporter:
              lauren Lauren MacArthur
              Reviewers:
              Yusra AlSayyad
              Watchers:
              Colin Slater, Eli Rykoff, Heather Kelly, James Chiang, Jim Bosch, Lauren MacArthur, Yusra AlSayyad

