Data Management / DM-2345

false positives in sconsUtils check for test failures

    Details

    • Type: Bug
    • Status: To Do
    • Resolution: Unresolved
    • Fix Version/s: None
    • Component/s: sconsUtils
    • Labels: None
    • Team: Architecture

      Description

      sconsUtils' check for test failures seems to blindly look at .tests/*.failed, without considering whether the tests were actually run on that build.

      As a result, if a test fails on one branch, and we then switch to a new branch where that test was removed, sconsUtils continues to report the test as a failure.

      In addition, if we have test failures in one build, and then repeat the build and do not build the tests, sconsUtils continues to report test failures. This creates the illusion that tests are being run (particularly a problem in conjunction with DM-609).


            Activity

            Tim Jenness added a comment -

            I think the problem here is that the code in scripts.py is using a simple shell script to count the number of .failed files. I don't really understand the comment in the code explaining that using shell rather than Python makes it easier to suppress output: why would Python say anything?

            Without understanding why it's using shell, it's hard to know what the fix is. Naively, I would use Python code to find the .failed files and compare that list with the list of test files, removing any that are no longer relevant. It looks like the code was written by Robert Lupton; can you give me a clue?
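            The filtering step Tim proposes could be sketched roughly as below. This is a hypothetical helper, not the actual sconsUtils code; it assumes the convention that a failing test `tests/foo.py` leaves a marker file `tests/.tests/foo.py.failed`, so a marker is "stale" when no matching file remains in the tests directory.

```python
import glob
import os


def stale_failed_files(tests_dir="tests"):
    """Return .failed markers whose corresponding test no longer exists.

    Assumes (hypothetically) that a failed test ``tests/foo.py`` leaves a
    marker ``tests/.tests/foo.py.failed``; any marker without a matching
    entry in ``tests_dir`` is considered stale and safe to ignore/remove.
    """
    if os.path.isdir(tests_dir):
        current_tests = set(os.listdir(tests_dir))
    else:
        current_tests = set()

    stale = []
    for marker in glob.glob(os.path.join(tests_dir, ".tests", "*.failed")):
        # Strip the trailing ".failed" to recover the original test name.
        test_name = os.path.basename(marker)[: -len(".failed")]
        if test_name not in current_tests:
            stale.append(marker)
    return stale
```

            A caller could then delete (or simply not count) the stale markers before deciding whether the build has test failures, which would avoid both failure modes described above.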

            Robert Lupton added a comment -

            The comment in the code is

             # N.b. the test is written in sh not python as then we can use @ to suppress output
            

            In other words, the test is put inside an SCons action, and (just as for make) you can suppress text from an action using @. You can write actions in Python, but I could not figure out how to stop SCons printing a message. If you wrote the test code in the SConscript file it would run at parse time, not at run time. Does that help?
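            For readers unfamiliar with the mechanism, SCons string actions support a leading @ to suppress echoing of the command line, exactly as in make. A fragment along these lines (illustrative only, not the actual sconsUtils code; `env` is an assumed SCons construction environment) shows the pattern being discussed:

```python
# Illustrative SConscript fragment, assuming an SCons environment `env`.
# The leading "@" on the string action stops SCons printing the command
# itself, so the shell test runs silently; "$$" escapes a literal "$".
env.Command(
    "checkTestStatus", [],
    '@ n=$$(ls .tests/*.failed 2> /dev/null | wc -l); '
    'test "$$n" -eq 0 || (echo "Failed $$n tests"; false)',
)
```

            A Python function action would run the same check at build time, but SCons echoes a description of function actions by default, which is the output-suppression difficulty Robert refers to.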

            Tim Jenness added a comment -

            In theory this is still a problem, but in practice we don't notice it any more because in almost all test runs only a single .failed file is created, for the global pytest run. The C++ tests will still suffer from the problem, but we don't have much churn in those.


              People

              • Assignee: Frossie Economou
              • Reporter: Jim Bosch
              • Watchers: Jim Bosch, Lauren MacArthur, Robert Lupton, Tim Jenness
              • Votes: 0
