# false positives in sconsUtils check for test failures


## Details

• Type: Bug
• Status: To Do
• Resolution: Unresolved
• Fix Version/s: None
• Component/s: None
• Labels: None
• Team: Architecture

## Description

sconsUtils' check for test failures seems to blindly look at .tests/*.failed, without considering whether the tests were actually run on that build.

As a result, if a test fails on one branch, and then we switch to a new branch where that test was removed, sconsUtils continues to report the test as a failure.

In addition, if we have test failures in one build, and then repeat the build and do not build the tests, sconsUtils continues to report test failures. This creates the illusion that tests are being run (particularly a problem in conjunction with DM-609).
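The failure mode can be illustrated with a minimal sketch (the file names and directory layout here are hypothetical, chosen only to mirror the `.tests/*.failed` convention described above): a sentinel file left over from a previous build is counted as a failure even though no corresponding test was run.

```python
import pathlib
import tempfile

# Hypothetical reproduction of the failure mode: a .failed sentinel
# left behind by an earlier build survives a branch switch, and a
# naive glob-based check (equivalent to the shell one) counts it.
tests_dir = pathlib.Path(tempfile.mkdtemp()) / ".tests"
tests_dir.mkdir()

# Simulate a stale sentinel from a test that was removed on this branch.
(tests_dir / "test_removed.py.failed").write_text("stale failure")

# Naive check: any *.failed file present means the build "has failures",
# regardless of whether the test still exists or was run this build.
failed = list(tests_dir.glob("*.failed"))
print(len(failed))  # → 1
```

Even with an empty test suite, this check reports one failure, which is exactly the stale-state behaviour described above.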

## Activity

Tim Jenness added a comment -

I think the problem here is that the code in scripts.py is using a simple shell script to count the number of .failed files. I don't really understand the comment in the code explaining that using shell rather than python makes it easier to suppress output: why would python say anything?

Without understanding why it's using shell it's hard to know what the fix is. Naively I would use python code to find the .failed files and compare the list with the list of test files, removing any that are not relevant. It looks like the code was written by Robert Lupton, can you give me a clue?
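A minimal sketch of the approach suggested above (the helper name and the exact directory layout are assumptions; sconsUtils may place sentinels differently): glob the `.failed` sentinels and keep only those whose corresponding test file still exists, pruning the rest.

```python
import pathlib
import tempfile

def current_failures(tests_dir):
    """Return .failed sentinels whose corresponding test still exists.

    Hypothetical helper, not sconsUtils' actual API: assumes sentinels
    live in a hidden .tests/ directory and are named "<test>.failed",
    with the tests themselves in the parent directory.
    """
    failures = []
    for sentinel in tests_dir.glob("*.failed"):
        test_file = tests_dir.parent / sentinel.stem  # strip ".failed"
        if test_file.exists():
            failures.append(sentinel)
        else:
            sentinel.unlink()  # prune stale sentinel from a removed test
    return failures

# Demo: one genuine failure for a test that still exists, plus one
# stale sentinel for a test that was removed on this branch.
root = pathlib.Path(tempfile.mkdtemp())
hidden = root / ".tests"
hidden.mkdir()
(root / "test_live.py").write_text("")           # test still present
(hidden / "test_live.py.failed").write_text("")  # genuine failure
(hidden / "test_gone.py.failed").write_text("")  # test was removed

print([p.name for p in current_failures(hidden)])  # → ['test_live.py.failed']
```

Only the genuine failure survives the check; the stale sentinel is removed so it cannot resurface on the next build.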

Robert Lupton added a comment -

The comment in the code is

  # N.b. the test is written in sh not python as then we can use @ to suppress output 

In other words, the test is put inside an scons action, and (just as for make) you can suppress text from an action using @. You can write actions in python, but I could not figure out how to stop scons printing a message. If you wrote the test code in the SConscript file it would run at parse time, not runtime. Does that help?
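To illustrate the mechanism Robert describes, here is a hypothetical SConstruct fragment (not the actual sconsUtils code; it requires SCons to run, so it is shown only as a sketch). A string action prefixed with `@` has its command line suppressed, just as in make, whereas a python-function action gets a default message printed by SCons:

```python
# Hypothetical SConstruct fragment illustrating "@" output suppression.
env = Environment()

# Shell-command action: the leading "@" stops SCons from echoing the
# command line, so the check runs silently ("$$" is a literal "$").
env.Command(
    "checkTestStatus", [],
    "@ test $$(ls .tests/*.failed 2>/dev/null | wc -l) -eq 0",
)

# Python-function action: by default SCons prints a line like
# "check(['checkTestStatus'], [])" when it runs, which is the output
# Robert could not find a way to suppress.
def check(target, source, env):
    return 0

env.Command("checkTestStatusPy", [], check)
```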

Tim Jenness added a comment -

In theory this is still a problem but in practice we don't notice it any more because in almost all test runs only a single .failed file is created for the pytest global test. The C++ tests will suffer from the problem but we don't have much churn in those.


## People

• Assignee: Frossie Economou
• Reporter: Jim Bosch
• Watchers: Jim Bosch, Lauren MacArthur, Robert Lupton, Tim Jenness