With the recent increase in mask size, it's become more important to start compressing MaskedImages and Exposures.
A recent-ish discussion on Slack revealed that we can enable HDU-level lossless compression with no quantization and still have CFITSIO do all the work (see in particular Eli Rykoff's experiment, about a page down from the start of that conversation).
In terms of how this works in our code, I think we should do the following:
- Images, Masks, MaskedImages, and Exposures (or at least the latter three) should be written with lossless HDU-level compression enabled by default.
- We should add a flags argument to the writeFits method that controls whether compression is enabled (much like the flags on SourceCatalog.writeFits that control whether (Heavy)Footprints are written), and use the same tricks to propagate those flags through butler.put calls.
- All routines that read the various image classes should transparently handle both compressed and uncompressed images.
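The flags-propagation idea in the second bullet could look something like the following sketch. The names here (`ImageWriteOptions`, its `compression` field, and the toy `MaskedImage`) are hypothetical placeholders, not the real stack API; the real options would be forwarded down to CFITSIO:

```python
# Hypothetical sketch of a writeFits flag that controls compression,
# dispatched the way SourceCatalog.writeFits dispatches on its Footprint flags.
from dataclasses import dataclass


@dataclass
class ImageWriteOptions:
    # Hypothetical options object: lossless GZIP by default, "NONE" disables.
    compression: str = "GZIP"


class MaskedImage:
    def __init__(self, data):
        self.data = data

    def writeFits(self, fileName, options=ImageWriteOptions()):
        # In the real implementation this choice would configure CFITSIO;
        # here we just record which path was taken.
        return {"file": fileName, "compression": options.compression}


# butler.put would forward per-dataset options through to writeFits:
result = MaskedImage([1, 2, 3]).writeFits(
    "calexp.fits", ImageWriteOptions(compression="NONE"))
assert result["compression"] == "NONE"
```

The point of routing everything through one options object is that butler.put only needs to learn to pass the object along, not to understand its contents.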
Some things to check after doing the work:
- We need to still be able to read files written with old versions of the pipeline.
- Reading subsets of images (especially in coaddition) shouldn't be too much slower (hopefully we can control this a bit by changing the tile size/geometry).
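On the tile-geometry point: CFITSIO's default tile is one full image row, so a small cutout forces every overlapping row to be decompressed in its entirety, whereas square tiles localize the work. A back-of-envelope sketch (the image and tile dimensions are made up for illustration):

```python
# Count how many compressed tiles must be decompressed to read a cutout,
# for two tile geometries.
def tiles_touched(tile_shape, box):
    """Count tiles intersecting box = (y0, y1, x0, x1), half-open."""
    y0, y1, x0, x1 = box
    ty0, ty1 = y0 // tile_shape[0], (y1 - 1) // tile_shape[0]
    tx0, tx1 = x0 // tile_shape[1], (x1 - 1) // tile_shape[1]
    return (ty1 - ty0 + 1) * (tx1 - tx0 + 1)

cutout = (1000, 1200, 1000, 1200)   # 200x200 postage stamp in a 4096x4096 image

# CFITSIO default: one tile per row, so all 200 overlapping rows decompress.
per_row = tiles_touched((1, 4096), cutout)
# Square 256x256 tiles: only the tiles around the cutout decompress.
square = tiles_touched((256, 256), cutout)
assert per_row == 200
assert square == 4
```

So the check here is really about picking a tile geometry that keeps coaddition's subset reads cheap without hurting compression ratio too much.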
Paul Price, we (Robert Lupton, John Swinbank and Jim Bosch) thought you'd be a good candidate for this, and Robert Lupton agreed that it'd be something HSC could support you doing. Mind putting it somewhere on your stack of things to do?