Details
- Type: Story
- Status: Done
- Resolution: Done
- Fix Version/s: None
- Component/s: meas_deblender
- Labels: None
- Story Points: 4
- Epic Link:
- Sprint: DRP S18-5
- Team: Data Release Production
Description
After DM-9584, the current bottleneck in the deblender is the translation and PSF operators, where (when PSF convolution is used) nearly all of the processing time is spent in a single function:
def apply_filter(X, weights, slices, inv_slices):
    """Apply a filter to a 2D image X

    Parameters
    ----------
    X: 2D numpy array
        The image to apply the filter to
    weights: 1D array
        Weights corresponding to each slice in `slices`
    slices: list of `slice` objects
        Slices in the new `X` to store the filtered X
    inv_slices: list of `slice` objects
        Slices of `X` to apply each weight

    Returns
    -------
    new_X: 2D numpy array
        The result of applying the filter to `X`
    """
    result = np.zeros(X.shape, dtype=X.dtype)
    for n, weight in enumerate(weights):
        result[slices[n]] += weight * X[inv_slices[n]]
    return result
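For reference, here is a minimal, self-contained sketch of how apply_filter is called: the filter is expressed as weighted, offset copies of the image, with one destination slice and one matching source slice per weight. The example image and the half-and-half weights below are illustrative only, not values taken from meas_deblender.

```python
import numpy as np

# apply_filter restated from the description so the example runs on its own.
def apply_filter(X, weights, slices, inv_slices):
    result = np.zeros(X.shape, dtype=X.dtype)
    for n, weight in enumerate(weights):
        result[slices[n]] += weight * X[inv_slices[n]]
    return result

# Illustrative example: a half-pixel downward shift built from two
# weighted, offset copies of the image.
X = np.arange(9, dtype=float).reshape(3, 3)
weights = [0.5, 0.5]
# Destination slices in the result array...
slices = [(slice(None), slice(None)), (slice(1, None), slice(None))]
# ...and the matching source slices of X for each weight.
inv_slices = [(slice(None), slice(None)), (slice(None, -1), slice(None))]

Y = apply_filter(X, weights, slices, inv_slices)
# Row 1 of Y is 0.5*X[1] + 0.5*X[0] = [1.5, 2.5, 3.5]
```

Each loop iteration is a strided add over a sub-view of the array, which is exactly the inner loop a C++/Eigen port would need to make fast.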
Moving this function to C++ using the Eigen package is likely to improve performance, and it is the last tall pole remaining in optimizing the deblender. After this, any further gains will have to come from more clever implementations of certain algorithms and small optimizations throughout the code.
There is an extra attribute in LinearFilter.cpp that is used to toggle the C++ methods on and off for testing. It will be removed before this ticket is merged, but I'm leaving it in until this passes the rest of the review.