Data Management / DM-10190

Implement a sparsity constraint with a cutoff

    Details

    • Type: Story
    • Status: To Do
    • Resolution: Unresolved
    • Fix Version/s: None
    • Component/s: meas_deblender
    • Labels: None

      Description

      This description has been edited to reflect a change in scarlet with the new API. There is now a thresholding update function that works similarly to a sparsity constraint, but for sources with multiple components it can fit only the bright central component and decide that the flux in the disk (for example) is below the threshold. This function needs to be updated with a check so that only flux below the noise threshold is removed.
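
      A minimal sketch of what such a noise-limited check could look like, in plain numpy. The names threshold_morphology, morph, and noise_rms are illustrative only, not the scarlet API:

          import numpy as np

          def threshold_morphology(morph, noise_rms, nsigma=1.0):
              # Hypothetical noise-aware hard threshold: zero out only the
              # morphology pixels that are consistent with noise, so faint
              # but real structure (e.g. a disk component) is not clipped.
              cutoff = nsigma * noise_rms
              return np.where(morph > cutoff, morph, 0.0)

      Applied per component, this removes pixels only when they sit below the noise level, which is the check the description asks for.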


          Activity

          pmelchior Peter Melchior added a comment -

          The idea here is that we should treat objects as point-sources (or compact galaxies) unless we have a reason not to.
          However, it's not necessarily the flux of the object that will determine the L0 penalty. Ideally, it would be related to the residuals of the model in the area covered by a given component. But in blended situations this is probably ambiguous.

          It is also clear that in dense stellar fields one would probably try to restrict the objects to a single pixel (prior to convolution) and determine whether an extended component is needed at all.

          The question is: can we determine the ideal configuration during the iterations?
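
          For concreteness, the L0 penalty acts as a hard-thresholding proximal operator, and the residual-driven choice of penalty floated above could be prototyped roughly like this (all names are hypothetical, not current scarlet code):

              import numpy as np

              def l0_prox(x, penalty):
                  # Proximal operator of penalty * ||x||_0: hard threshold
                  # at sqrt(2 * penalty), the standard closed-form solution.
                  thresh = np.sqrt(2.0 * penalty)
                  return np.where(np.abs(x) >= thresh, x, 0.0)

              def residual_penalty(residual, footprint, scale=1.0):
                  # Hypothetical heuristic: tie a component's L0 penalty to
                  # the model residuals inside that component's footprint.
                  return scale * np.var(residual[footprint])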

          pmelchior Peter Melchior added a comment -

          Btw, both this ticket and DM-10189 will probably do updates of parameters (L0 penalty and centroid positions) based on the current state of the deblender. These are really non-linear parameters, so convexity of the likelihood goes out of the window. We need to make sure that we're not destroying a good algorithm with a bad parameter update.
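
          One conservative guard, sketched here with hypothetical callables rather than the actual scarlet machinery, is to accept a non-linear parameter update only if it does not increase the loss:

              def guarded_update(params, propose, loss):
                  # Accept a proposed non-linear change (new L0 penalty,
                  # shifted centroid, ...) only if the loss does not grow;
                  # otherwise keep the current parameters.
                  candidate = propose(params)
                  return candidate if loss(candidate) <= loss(params) else params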

          fred3m Fred Moolekamp added a comment -

          Updated the description to make this ticket about the threshold update, which is the right way to apply sparsity but has a few issues.


            People

            • Assignee:
              fred3m Fred Moolekamp
            • Reporter:
              fred3m Fred Moolekamp
            • Watchers:
              Fred Moolekamp, Peter Melchior
            • Votes:
              0

              Dates

              • Created:
                Updated:
