Note: ds9 (even the latest version) cannot decode the compressed mask plane in FITS files written by dataRef.put() or butler.put(). The original CORR file seems fine, but once I read the CORR into memory and write it out to a new file via the butler, the new file has this problem. I have no idea what the difference is.
Per the investigation by @Sogo Mineo, we strongly suspect this is caused by inappropriate handling of the FITS header in calexp files with tiled-image compression, both in the LSST stack and in ds9's decoder, which violates the FITS Standard.

1) The LSST stack should not write ZQUANTIZ= 'NONE    ' in the primary HDU, which is not associated with any pixel data. Moreover, the value 'NONE' is not allowed by the standard: for compression with quantization enabled, the value should be 'NO_DITHER'. When we do lossless compression without any quantization, simply dropping this keyword would be appropriate (when neither ZZERO nor ZSCALE exists, we assume no quantization was performed, so ZQUANTIZ is irrelevant). Otherwise, widely used tools such as FITSIO and ds9 cannot decode the data properly.

2) ds9 should not interpret ZQUANTIZ in the primary HDU as the default value for other HDUs. Tiled-image compression of integer images should not carry ZQUANTIZ at all, but the current ds9 code seems to mistakenly decode the compressed integer values into floats whenever ZQUANTIZ exists in the primary HDU, which is wrong. The ZQUANTIZ in the same HDU as the pixel data should be used in every case.