OK, I've tried every combination of SDSS vs. HSM shape and pca vs. psfex, and the latter two perform best in terms of the number of SFM failures, so I don't see any reason to change those configs for future processing. I also tried changing
charImage.measurePsf.psfDeterminer["psfex"].samplingSize = 0.5
psfex kicks into the sampling mode set by this parameter when the PSF FWHM drops below 3 pixels, which is always the case for the sigma ~ 0.8 pixel visits we are looking at here. Changing it in either direction (I tried setting it to 1 and 0.25) just resulted in more failures.
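For reference, a parameter like this is set in a config override file passed to the task (e.g. via --configfile); a minimal sketch, keeping the default of 0.5 that the tests above suggest leaving alone:

```python
# processCcd.py config override sketch.  samplingSize controls the
# PSFEx super-resolution sampling once the PSF FWHM drops below
# 3 pixels; trials at 0.25 and 1.0 both increased failures, so the
# default of 0.5 is retained here.
config.charImage.measurePsf.psfDeterminer["psfex"].samplingSize = 0.5
```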
I do strongly recommend changing the fluxMin config:
charImage.measureApCorr.sourceSelector["objectSize"].fluxMin
to something much smaller than the default of 12500 (HSC overrides this to 4000, which roughly corresponds to a S/N of 20). Ideally, this would be an actual S/N cut, which you will be able to set via config once DM-17043 lands (see that ticket for some discussion and example flux vs. S/N histograms). Using a minimum S/N of 20, I recover on the order of 50 additional CCDs in both the u-band 738955 and g-band 254378 visits considered here.
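Until DM-17043 lands, one way to translate a target S/N into an equivalent fluxMin is to look at the catalog flux at which flux/fluxErr crosses the threshold. A sketch with a hypothetical helper (plain numpy, not pipeline code):

```python
import numpy as np

def flux_min_for_snr(flux, flux_err, snr_min=20.0):
    """Estimate the flux corresponding to a minimum S/N cut.

    Takes per-source instrumental fluxes and errors (e.g. from an SFM
    src catalog) and returns the median flux among sources whose
    flux/flux_err lands within 10% of the target S/N.
    """
    snr = flux / flux_err
    near = np.abs(snr - snr_min) < 0.1 * snr_min
    if not near.any():
        raise ValueError("no sources near the requested S/N")
    return float(np.median(flux[near]))

# Toy catalog with a constant flux error of 200 counts: S/N 20 then
# corresponds to a flux of ~4000, consistent with the HSC override.
flux = np.linspace(1000.0, 20000.0, 500)
flux_err = np.full_like(flux, 200.0)
print(flux_min_for_snr(flux, flux_err))  # ~4000
```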
Another parameter I noticed HSC overrides is
processCcd.charImage.measurePsf.starSelector['objectSize'].widthMin=0.9
Clearly, the value 0.9 can't be used if the PSF sigma is actually 0.8 pixels, but I tried 0.78 and recovered 4 additional CCDs in both the u-band visit 738955 and the g-band visit 254378 (though I haven't really explored the consequences of this cut beyond that).
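As an override-file sketch, the HSC-style override with the value tried here would look like the following (whether 0.78 generalizes beyond these two visits is untested):

```python
# processCcd.py config override sketch: minimum trace width (pixels)
# for objectSize star selection.  HSC uses 0.9; for these
# sigma ~ 0.8 pixel visits that would reject everything, so 0.78 was
# tried instead.
config.charImage.measurePsf.starSelector["objectSize"].widthMin = 0.78
```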
Finally, in looking at a well-sampled r-band visit (440940), I noticed that there does not seem to be a brighter-fatter correction being applied in the processing (see the discussion on Slack, #desc-dm-dc2, on Jan 7, 2019 ~3pm EST). As such, I explored a maximum S/N threshold cut, reasoning that the brightest stars, being the most affected by BF, are not representative of the underlying PSF and so may be hindering the modeling. This did result in a significant recovery of CCDs in the u- and g-band visits considered here (with no difference for the r-band visit, for which all CCDs pass). The sweet spot in terms of number of CCDs passing seemed to be at S/N ~ 200-250, but that seems quite low, so further investigation may be warranted before adopting any global config change.
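The max-S/N experiment amounts to a simple catalog filter. A plain-numpy stand-in for the star selector (the 20-250 window is the sweet spot found above, not a vetted config):

```python
import numpy as np

def select_psf_candidates(flux, flux_err, snr_min=20.0, snr_max=250.0):
    """Boolean mask keeping sources in an S/N window.

    The upper cut drops the brightest stars, whose brighter-fatter
    broadened profiles would otherwise bias the PSF model when no BF
    correction has been applied.
    """
    snr = flux / flux_err
    return (snr >= snr_min) & (snr <= snr_max)

# Toy example: three sources at S/N 10, 100, and 1000; only the
# middle one survives the window.
flux = np.array([1000.0, 10000.0, 100000.0])
flux_err = np.full_like(flux, 100.0)
print(select_psf_candidates(flux, flux_err))  # [False  True False]
```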
So, the bottom line is that the CCD failures now seem largely down to the failure of the CModel measurement in SFM (needed to compute the aperture correction maps for use on the coadds), and these trace to the flag for the double shapelet PSF approximation failure:
modelfit_DoubleShapeletPsfApprox_flag_invalidMoments True
Whether this is simply a matter of hitting a regime of under-sampling that can't be managed, whether the PSFs in the DC2 simulations are funky in a way that uniquely exacerbates the effects of under-sampling, or whether algorithmic/config changes could allow CModel to pass is a matter for future investigation.
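A quick diagnostic when triaging these failures is to count flagged sources per catalog. Below is a stand-in sketch with a plain numpy array; in practice the column would come from the src catalog, e.g. src["modelfit_DoubleShapeletPsfApprox_flag_invalidMoments"]:

```python
import numpy as np

def invalid_moments_fraction(flag_column):
    """Fraction of sources with the invalidMoments failure flag set.

    A fraction near 1 on a CCD means essentially no CModel
    measurements survive to feed the aperture correction map.
    """
    flag = np.asarray(flag_column, dtype=bool)
    return flag.sum() / flag.size

print(invalid_moments_fraction([True, False, False, True]))  # 0.5
```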
Going to lower order in the spatial variation is a sufficiently important way to ensure robustness when there aren't any PSF stars that I think we need to fix this, though I'm also worried that if the problem is deep in the bowels of PSFEx we'll need Robert Lupton time (possibly a non-trivial amount of it) to debug what's going on. Lauren MacArthur, could you create a ticket for that with a how-to-reproduce?
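If lowering the spatial order turns out to be the right robustness knob, the override itself would be a one-liner; note this sketch assumes the psfex determiner exposes a spatialOrder parameter (worth confirming with --show config before relying on it):

```python
# Config override sketch: drop the PSF spatial-variation model to a
# lower polynomial order so CCDs with few PSF stars can still be fit.
# The spatialOrder parameter name is assumed here; verify it with
# `processCcd.py --show config` before use.
config.charImage.measurePsf.psfDeterminer["psfex"].spatialOrder = 1
```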
To continue our search for a possibly more expedient solution, I wonder if it's time to bring PcaPsf out of cold storage and throw it at this problem.
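For reference, switching determiners is itself just a one-line override (presumably paired with re-tuning the pca-specific parameters for these under-sampled PSFs):

```python
# Config override sketch: swap the PSF determiner from psfex back to
# the PCA implementation to see how it fares on sigma ~ 0.8 pixel PSFs.
config.charImage.measurePsf.psfDeterminer.name = "pca"
```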