Details
- Type: Story
- Status: Done
- Resolution: Done
- Fix Version/s: None
- Component/s: meas_algorithms, pipe_tasks
- Labels:
- Story Points: 0.5
- Epic Link:
- Sprint: DRP S17-3
- Team: Data Release Production
Description
Once the meas_algorithms background model is being generated correctly, the original pipe_tasks test that first pointed to this problem needs to be put back to higher precision. A comment in testProcessCcd.py indicates which assertions should have their precision restored to the pre-py3-porting values. At present, the test fails at those tighter thresholds because the background model differs from the expected one.
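As a rough sketch of the kind of change involved (the assertion shown and the numbers in it are placeholders, not the actual contents of testProcessCcd.py), restoring the precision amounts to raising the places argument of the loosened assertAlmostEqual calls back to their pre-porting values:

```python
import unittest


class BackgroundPrecisionSketch(unittest.TestCase):
    """Hypothetical sketch, not the actual assertions in testProcessCcd.py."""

    def test_background_statistic(self):
        # Placeholder numbers; the real test compares statistics of the
        # processCcd outputs against hard-coded expected values.
        measured = 327.61523
        expected = 327.61524
        # Loosened during the Python 3 port (illustrative only):
        # self.assertAlmostEqual(measured, expected, places=3)
        # Restoring the pre-porting precision means raising `places` again:
        self.assertAlmostEqual(measured, expected, places=4)


if __name__ == "__main__":
    unittest.main()
```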
Activity
Field | Original Value | New Value |
---|---|---|
Epic Link | | |
Team | | Data Release Production [ 10301 ] |
Sprint | | DRP S17-3 [ 360 ] |
Story Points | | 0.1 |
Assignee | | John Swinbank [ swinbank ] |
Story Points | 0.1 | 0.5 |
Status | To Do [ 10001 ] | In Progress [ 3 ] |
Reviewers | | Fred Moolekamp [ fred3m ] |
Status | In Progress [ 3 ] | In Review [ 10004 ] |
Status | In Review [ 10004 ] | Reviewed [ 10101 ] |
Resolution | | Done [ 10000 ] |
Status | Reviewed [ 10101 ] | Done [ 10002 ] |
Since the test tolerances were changed, the expected values were also updated. From a first look, it's not obvious that the current results, on either Python 2 or Python 3, are within the old tolerances of the currently expected values. I'll have to do some more digging, but it looks like this isn't simply a case of restoring the old places values used by the assertions.
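For reference, a minimal sketch of the kind of check described above, using placeholder numbers rather than the real expected values from testProcessCcd.py: assertAlmostEqual passes when the difference rounds to zero at the requested number of decimal places, so whether the current results fall within the old tolerances of the current expected values can be checked the same way.

```python
# Placeholder numbers; the real values come from running processCcd and from
# the expected constants in testProcessCcd.py.
current_result = 327.6158
expected_value = 327.6152
old_places = 4  # hypothetical pre-py3-porting precision

# unittest's assertAlmostEqual(a, b, places=n) passes iff round(a - b, n) == 0,
# so this mirrors the comparison the test would make at the old precision.
difference = current_result - expected_value
print(round(difference, old_places) == 0)  # False for these placeholder numbers
```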