# PTC task should produce a linearity model


## Details

• Type: Story
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s:
• Labels:
None
• Story Points:
12
• Team:
Data Release Production

## Description

Goal: The PTC task should calculate a linearity model, in whatever form is necessary for isrTask's linearity correction to be able to run and use it.

This ticket captures the thinking and algorithmic work necessary to generate linearity models, but isn't concerned with how they are persisted, or passed from task to task.

This ticket used to say:

In talking with Andrés, it was decided that the de facto title for this ticket is actually "Write a task that creates a linearity model, which could either be the PTC task itself, a wrapper around the PTC task, or a separate task which uses the output of the PTC task", but that's a bit long to actually change it to, so keeping the title the same, but this encodes the essence of the ticket.

## Activity

Andrés Alejandro Plazas Malagón added a comment (edited)

Sounds good, I'll push the code tomorrow.

Andrés Alejandro Plazas Malagón added a comment (edited)

I have pushed the code. I found two linearizers in linearize.py: LinearizeSquared and LinearizeLookupTable. Question: do we need to implement both in this code? It seems that LinearizeSquared was written for a specific case (Subaru); I would think that this case and other general cases could be covered by the table.

In any case, LinearizeSquared requires a coefficient c0, which we agreed would be given by c0 = -k2/k1^2 for a fit of the form "mean_signal = k0 + k1*time + k2*time^2" (Question: should k0 be set to zero?). This is implemented at the top of “calculateLinearityResidualAndLinearizers”.
The “c0” coefficient per amplifier is saved for the moment as a dictionary in the output PhotonTransferCurveDataset.
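The c0 derivation above can be sketched as follows. This is a minimal illustration, not the actual task code; the function name `fitC0` is hypothetical, and it assumes a simple NumPy quadratic fit of the form mean_signal = k0 + k1*time + k2*time^2.

```python
import numpy as np

# Hypothetical sketch: fit mean_signal = k0 + k1*time + k2*time^2 for one
# amplifier, then take c0 = -k2/k1^2 as the LinearizeSquared coefficient.
def fitC0(times, meanSignals):
    """Return the LinearizeSquared c0 coefficient for one amplifier."""
    # np.polyfit returns coefficients highest degree first: [k2, k1, k0].
    k2, k1, k0 = np.polyfit(times, meanSignals, deg=2)
    return -k2 / k1**2

# Example with a slightly nonlinear synthetic signal:
times = np.linspace(0.0, 10.0, 50)
signal = 1000.0 * times - 2.0 * times**2   # k1 = 1000, k2 = -2
c0 = fitC0(times, signal)                  # -(-2)/1000^2 = 2e-6
```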

LinearizeLookupTable is a mapping from ADU values to the corrections that should be added to those values. The array should have nrows = number of amplifiers and ncolumns = range of ADU values. An example of converting DECam linearity tables is in https://github.com/lsst/obs_decam/blob/master/decam/makeLinearizer.py, where the range of ADUs goes up to 2^16. In this case, I have set the maximum size of the ADU range to 2^18.

For the moment, in order to populate the array with the corrections, I first fit an n-degree polynomial to the "mean_signal vs time" curve (new parameter: self.config.polynomialFitDegreeNl). I use the linear part of that polynomial to obtain a tMax where the linear signal reaches ADUMax = 2^18 ADU. I then evaluate the linear part of the polynomial over the range of times [0, tMax] (“signalIdeal”) and the full polynomial (“signalUncorrected”). The difference between the two is the correction for the table. All of this is done per amplifier, also in the function “calculateLinearityResidualAndLinearizers”.
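The table-construction steps above can be sketched for a single amplifier as follows. This is an illustrative outline only, not the actual task implementation; the function name and the `polynomialFitDegreeNl`/`maxAdu` parameters are stand-ins for the real configuration.

```python
import numpy as np

# Hypothetical sketch of building one lookup-table row (one amplifier):
# fit an n-degree polynomial to mean_signal vs time, find the tMax where
# the linear part reaches maxAdu, then tabulate (ideal - uncorrected).
def makeLinearizerTableRow(times, meanSignals, polynomialFitDegreeNl=3,
                           maxAdu=2**18):
    """Return per-ADU additive corrections for one amplifier."""
    coeffs = np.polyfit(times, meanSignals, deg=polynomialFitDegreeNl)
    poly = np.poly1d(coeffs)
    k1 = coeffs[-2]                        # linear coefficient of the fit
    tMax = maxAdu / k1                     # time at which k1*t = maxAdu
    timeRange = np.linspace(0.0, tMax, maxAdu)
    signalIdeal = k1 * timeRange           # linear part of the polynomial
    signalUncorrected = poly(timeRange)    # full polynomial
    # Correction to be added to the measured (uncorrected) signal.
    return signalIdeal - signalUncorrected
```

For a perfectly linear detector, the fitted polynomial collapses to its linear part and the returned corrections are (numerically) zero.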

Merlin Fisher-Levine added a comment

You're right that the LUT covers the other case quite well, but on the other hand, if it's not too much effort, it seems a shame not to have analytic ones, as they're so much more efficient. That said, if it's any significant effort, I think we could go with only LUTs for now and revisit if necessary.

Robert Lupton added a comment

I thought that there was a generic polynomial lineariser. In all cases, we should be using

 I_lin = I_raw + f(I_raw) 

where f can be analytic or a lookup table. Note that I_raw can be floating point, so the lookup table needs to be indexed on int(I_raw), but using the additive form means that we don't lose precision.
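The additive form can be illustrated with a small sketch. This is hypothetical example code, not the LSST linearizer API: it shows why indexing the table on the integer part of the raw value while adding a float correction preserves the fractional ADU.

```python
import numpy as np

# Illustrative sketch of I_lin = I_raw + f(I_raw) with a lookup table:
# the table is indexed on int(I_raw), but the raw float value itself is
# kept in the sum, so no precision is lost in the correction step.
def applyLookupTable(rawImage, tableRow):
    """Apply an additive lookup-table correction to a float image."""
    indices = np.clip(rawImage.astype(int), 0, len(tableRow) - 1)
    return rawImage + tableRow[indices]

raw = np.array([10.25, 20.75])
table = np.zeros(32)
table[10] = 0.5   # correction for ADU bin 10
corrected = applyLookupTable(raw, table)  # → [10.75, 20.75]
```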

Christopher Waters added a comment

There will be a generic polynomial of that form after DM-23023. I think that's the solution if that's what the best fit is. That said, the target that I'm working towards on DM-23023 (please comment soon on RFC-665 if there are issues) is that a particular linearity correction contains only one type. If the code will be fitting multiple models, they will need to be persisted separately.


## People

• Assignee:
Andrés Alejandro Plazas Malagón
• Reporter:
Merlin Fisher-Levine
• Reviewers:
Christopher Waters
• Watchers:
Andrés Alejandro Plazas Malagón, Christopher Waters, John Swinbank, Merlin Fisher-Levine, Robert Lupton