Data Management / DM-17748

Use autograd for derivatives in scarlet

    Details

    • Type: Story
    • Status: Done
    • Resolution: Done
    • Fix Version/s: None
    • Component/s: meas_deblender
    • Labels:
      None
    • Story Points:
      45
    • Sprint:
      DRP S19-2, DRP S19-3, DRP S19-4, DRP S19-5
    • Team:
      Data Release Production

      Description

      Thanks to the help of Erin Sheldon and his former student Lorena Mezini, we discovered a few fatal flaws in scarlet that need to be corrected in order to generate lensing-quality models. The main flaws are due to the way we fit positions and the way we do PSF convolutions. Both can be corrected by using a higher-order sinc interpolation as opposed to a linear product of matrices; however, taking the gradient of these functions is non-trivial.

      This is exactly the problem that TensorFlow (and PyTorch in Python) seek to solve, and they do so much faster than we can calculate our gradients by hand. This ticket is to implement PyTorch's autograd functionality in scarlet to calculate the derivatives.


            Activity

            fred3m Fred Moolekamp added a comment -

            autograd has been implemented in scarlet, including a new architecture that convolves the entire scene once for each model. The new implementation is both faster and more stable than the previous version of scarlet. DM-19451 has been created to test that this new version performs at least as well as the previous version.

            pmelchior Peter Melchior added a comment -

            Works beautifully.

            For context: the new update looks similar to my proximal matrix factorization code in pytorch (http://pmelchior.net/blog/proximal-matrix-factorization-in-pytorch.html), but here we're doing per-component updates. This is also more flexible for per-component differentiable priors, which have also been enabled by our move to pytorch.
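            A schematic of the kind of proximal gradient step described above (this is an illustrative sketch in the spirit of the linked post, not code from it; the variable names, sizes, and step size are made up): autograd supplies the gradients of a factorization loss, and a proximal operator (here, projection onto non-negativity) is applied after each descent step.

```python
import torch

torch.manual_seed(0)

# Two-factor model of data Y: A holds per-component amplitudes,
# S per-component morphologies (shapes are illustrative).
Y = torch.rand(10, 25)
A = torch.rand(10, 3, requires_grad=True)
S = torch.rand(3, 25, requires_grad=True)

step = 0.1
losses = []
for _ in range(200):
    loss = torch.mean((A @ S - Y) ** 2)
    losses.append(loss.item())
    loss.backward()                  # autograd supplies both gradients
    with torch.no_grad():
        for p in (A, S):
            p -= step * p.grad       # plain gradient step
            p.clamp_(min=0)          # proximal step: non-negativity constraint
            p.grad.zero_()
```

Swapping `clamp_` for another proximal operator, or adding a differentiable prior term to `loss`, requires no change to the gradient computation.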


              People

              • Assignee:
                fred3m Fred Moolekamp
                Reporter:
                fred3m Fred Moolekamp
                Reviewers:
                Peter Melchior
                Watchers:
                Fred Moolekamp, Peter Melchior
              • Votes:
                0

                Dates

                • Created:
                  Updated:
                  Resolved: