Our Psf objects currently support only direct evaluation on the native pixel grid of the image they are associated with, even when the underlying implementation has access to information on smaller scales (as is usually the case with PSFEx models). That information can be recovered through the current API by evaluating the PSF at different sub-pixel offsets and combining the results, but this is very clumsy, and more importantly, it is not currently used by WarpedPsf when transforming PSFs to different coordinate systems. This may already be degrading the quality of our CoaddPsfs when the input data is marginally sampled, as it sometimes is for HSC data (though in that case we are already doing the wrong thing when resampling the corresponding input images to build the coadd, and there is nothing we can do to fix that).
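To make the "clumsy" workaround concrete, here is a toy sketch (not the real afw API; the Gaussian stand-in and all names are illustrative) of recovering an oversampled PSF image by evaluating at each sub-pixel phase on the native grid and interleaving the results:

```python
import numpy as np


def eval_psf(dx, dy, size=7, sigma=1.2):
    """Toy stand-in for Psf.computeKernelImage: a Gaussian sampled on the
    native pixel grid, with its centre shifted by the sub-pixel offset
    (dx, dy) in native-pixel units."""
    y, x = np.mgrid[:size, :size] - size // 2
    return np.exp(-((x - dx) ** 2 + (y - dy) ** 2) / (2 * sigma ** 2))


def oversample_by_interleaving(factor=2, size=7):
    """Build a factor-times oversampled image from native-grid evaluations.

    Each sub-pixel phase (ix, iy) is obtained by shifting the PSF centre by
    (-ix/factor, -iy/factor) and sampling on the native grid; the results
    are interleaved into the fine grid with strided assignment."""
    out = np.zeros((size * factor, size * factor))
    for iy in range(factor):
        for ix in range(factor):
            out[iy::factor, ix::factor] = eval_psf(
                -ix / factor, -iy / factor, size=size
            )
    return out
```

This produces the same samples as evaluating the underlying model directly on the fine grid, which is exactly what a first-class oversampling parameter would let implementations do in one call.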
To fix the PSF issue, we should:
- Add an oversampling factor parameter to Psf.computeKernelImage (I don't think it makes sense on Psf.computeImage, but I could be convinced otherwise).
- Make each Psf responsible for knowing the oversampling factor necessary to yield a well-sampled image. The default would of course be 1.
- Have WarpedPsf use the recommended oversampling factor when obtaining images to resample.
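The three bullets above might look something like the following sketch. This is a hypothetical Python mock-up, not the actual afw class hierarchy: `recommendedOversampling`, the `oversampling` parameter, and the Gaussian subclass with its sampling threshold are all assumptions made for illustration.

```python
import numpy as np


class Psf:
    """Hypothetical base class showing the proposed API shape."""

    def recommendedOversampling(self):
        # Default: assume the native pixel grid is already well sampled.
        return 1

    def computeKernelImage(self, oversampling=None):
        # Callers like WarpedPsf pass nothing and get the factor the
        # implementation itself recommends.
        if oversampling is None:
            oversampling = self.recommendedOversampling()
        return self._doComputeKernelImage(oversampling)

    def _doComputeKernelImage(self, oversampling):
        raise NotImplementedError


class GaussianPsf(Psf):
    """Illustrative concrete Psf: a circular Gaussian."""

    def __init__(self, sigma, size=7):
        self.sigma = sigma
        self.size = size

    def recommendedOversampling(self):
        # A narrow (marginally sampled) PSF asks for a finer grid;
        # the sigma < 1.0 threshold here is purely illustrative.
        return 2 if self.sigma < 1.0 else 1

    def _doComputeKernelImage(self, oversampling):
        n = self.size * oversampling
        y, x = (np.mgrid[:n, :n] - n // 2) / oversampling
        img = np.exp(-(x ** 2 + y ** 2) / (2 * self.sigma ** 2))
        return img / img.sum()
```

Under this scheme WarpedPsf would simply call `computeKernelImage()` with no argument before resampling, so well-sampled PSFs pay no extra cost while marginally sampled ones are automatically warped on a finer grid.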
I'm a bit worried that this could further slow down the resampling inside CoaddPsf/WarpedPsf, which is already a bottleneck, but it's the right thing to do. We just need to be careful not to be over-aggressive in setting the oversampling factor.