Details

Type: Story

Status: Done

Resolution: Done

Fix Version/s: None

Component/s: meas_deblender

Labels: None

Story Points: 3

Epic Link:

Sprint: DRP S175

Team: Data Release Production
Description
The current symmetry operator works by comparing each pixel to its symmetric partner and projecting the difference to zero using a proximal operator, forcing the solution to be symmetric. This works relatively well, but not as well as the current deblender in certain cases. For example, in a blend with only two sources, one much brighter than the other, the NMF deblender lets the fainter source steal a small amount of flux from its brighter neighbor. We had hoped that the near-zero flux on the side opposite the bright source would be enough to limit this effect (when combined with symmetry), but the algorithm breaks down when the fainter object wants to steal flux near the noise level. Sparsity is not much help, as l0 sparsity limits the size of brighter objects more than that of the fainter ones.
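To make the projection concrete, here is a minimal numpy sketch of a symmetry proximal operator of the kind described above. It assumes a template is a 2D array centered on the source peak, so that pixel (i, j) pairs with pixel (-1-i, -1-j); the function name and this centering convention are illustrative assumptions, not the actual deblender API.

```python
import numpy as np

def prox_symmetry(X):
    """Project a template onto the space of centrally symmetric images.

    Replacing each symmetric pixel pair with its mean is the Euclidean
    projection that sends the pair difference to zero, which is the
    constraint described in the ticket.

    Hypothetical sketch: assumes X is a 2D array whose central pixel
    is the source peak, so X[i, j] pairs with X[-1-i, -1-j].
    """
    return 0.5 * (X + X[::-1, ::-1])
```

Applying the operator twice gives the same result as applying it once (it is a projection), and the output is exactly symmetric under a 180-degree rotation about the center.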
This blending issue is accounted for in the current deblender (adapted from SDSS) because it doesn't just force a symmetric solution, it forces the symmetric solution to use the minimum of each pixel and its symmetric partner. This is a much stronger (and more useful) constraint.
It would be useful to update the NMF deblender to have the same minimum pixel value constraint. One way to do this is to use a proximal operator that projects each template onto a space that uses the minimum of each symmetric pair. This will no longer be a linear operation, as choosing the minimum pixel value is nonlinear, so it is unclear how this will affect convergence.
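The minimum-of-pair projection proposed here can be sketched in the same style as the symmetry operator above; again the function name and the assumption that templates are 2D arrays centered on the peak are illustrative, not the deblender's actual interface. Note that, as the ticket says, this map is nonlinear.

```python
import numpy as np

def prox_symmetry_min(X):
    """Replace each symmetric pixel pair with the pair minimum.

    This is the stronger SDSS-style constraint: the result is still
    symmetric about the center, but each pair takes the smaller of
    the two values rather than their mean. Choosing the minimum is a
    nonlinear operation, which is why its effect on convergence is
    unclear.

    Hypothetical sketch: assumes X is a 2D array whose central pixel
    is the source peak, so X[i, j] pairs with X[-1-i, -1-j].
    """
    return np.minimum(X, X[::-1, ::-1])
```

The output is symmetric and never exceeds the input in any pixel, which is what prevents a faint source from claiming flux that clearly belongs to a bright neighbor on one side.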
If we can make this work, combining the new symmetry operator with monotonicity might eliminate the need for a sparsity constraint. That would be advantageous, as Peter Melchior and I are realizing that sparsity is a wilder beast than it first appeared, and taming it is not a simple task. The sparsity of an image depends on the total flux in the image and on the brightness of each source, so using sparsity might require each source in a blend to have a different sparsity requirement dependent on its brightness, which could be a difficult scheme to implement. Taking care of this directly with a better symmetry constraint is preferable.
I've tried three different methods to implement a better symmetry operator and none of them work as well as I had hoped.
The first method is the technique described in the ticket description: forcing the deblender to use the minimum of the two symmetric pixels. But if the algorithm drives one pixel of a symmetric pair into a minimum, it apparently finds it difficult to ever take on a higher value again, and the deblend never converges.
Next I tried to force data − model > 0, using a proximal operator with a primal and a dual variable for the constraint. This works decently, but convergence is slow. The last method I tried was a proximal operator that took the minimum of model and data for each pixel. Like the previous method, it technically works, but it is computationally expensive and slow to converge.
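The last of the methods above can be sketched as a simple pixelwise projection; this is a hedged illustration of the idea only (the name `prox_data_bound` is hypothetical, and the actual implementation also involved the primal/dual machinery, which is omitted here).

```python
import numpy as np

def prox_data_bound(model, data):
    """Project the model so that data - model >= 0 in every pixel.

    Clipping the model at the observed data is the Euclidean
    projection onto the constraint set {model : model <= data}; it
    reduces to taking the elementwise minimum of model and data.

    Hypothetical sketch of the per-pixel projection only; the
    primal/dual variables used in the actual experiment are omitted.
    """
    return np.minimum(model, data)
```

This keeps the model from exceeding the observed flux in any pixel, at the cost of the slow convergence noted above.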
So after a discussion with Peter Melchior, we decided not to implement any of the above methods into the code and instead investigate other ways of improving the deblender.
DM-10189 and DM-10190 have been opened to address what we think are the two major failure modes of the current deblender: symmetry/monotonicity about an incorrect peak position (DM-10189) and incorrect deblending of faint sources in the presence of noise (DM-10190).