[Biopython-dev] Output of Biopython tests

Peter biopython-dev at maubp.freeserve.co.uk
Tue Oct 9 11:44:01 UTC 2007


Michiel De Hoon wrote:
> When I was running the Biopython tests, one thing bothered me though.
> All Biopython tests now have a corresponding output file that 
> contains the output the test should generate if it runs correctly. 
> For some tests, this makes perfect sense, particularly if the output 
> is large. For others, on the other hand, having the test output 
> explicitly in a file doesn't actually add much information.

Is this actually a problem?  It gives us a simple, unified test framework,
within which developers can still use whatever fancier test frameworks they like.

Personally, I have tried to write simple scripts with meaningful output
(often with additional assertions as well).  Because these are very simple,
I think they can double as examples/documentation for the curious.
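
For instance, something along these lines (a made-up test_Example.py,
untested, just to show the style - the printed lines are what run_tests.py
compares against the file of expected output, and the trivial checks are
plain asserts):

    # test_Example.py - made-up test in the plain "verbose script" style
    from Bio.Seq import Seq

    seq = Seq("ACGTACGT")
    # Whatever gets printed is captured by run_tests.py and compared
    # against the matching file of expected output:
    print "Sequence: %s" % seq.tostring()
    print "Length:   %i" % len(seq)
    # Trivial checks can simply be asserts rather than printed output:
    assert len(seq) == 8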

My personal view is that the "fancy frameworks" used in some test cases are
very intimidating to a beginner (and act as a barrier to taking the code and
modifying it for their own use).

> The point is that for this test, having the output explicitly is not 
> needed in order to identify the problem.

True.  I would have written that particular test to give some meaningful
output; I find that makes it easier to start debugging why a test failed.

> Now, for some tests having the output explicitly actually causes a 
> problem. I'm thinking about those unit tests that only run if some 
> particular software is installed on the system (for example, SQL). In
> those cases, we need to distinguish failure due to missing software 
> from a true failure (the former may not bother the user much if he's 
> not interested in that particular part of Biopython). If a test 
> cannot be run because of missing prerequisites, currently a unit test
> generates an ImportError, which is then caught inside run_tests.
> ...
> When you look inside test_BioSQL.py, you'll see that the actual error
>  is not an ImportError. In addition, if a true ImportError occurs 
> during the test, the test will inadvertently be treated as skipped.

Perhaps we should introduce a MissingExternalDependency exception for this
specific case, catch that in run_tests.py, and treat any ImportError as a
real error.

As you say, if we have done some dramatic restructuring (such as removing a
module), there could be some REAL ImportErrors that we would then risk
ignoring.
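
Roughly what I have in mind, as a toy sketch only (the run_one_test function
is made up, standing in for whatever run_tests.py really does to invoke each
test module):

    class MissingExternalDependencyError(Exception):
        """Raised by a test which cannot run because optional software is missing."""
        pass

    def run_one_test(name):
        # Made-up stand-in for how run_tests.py would invoke a test module.
        # On a machine without MySQLdb, test_BioSQL.py would effectively do:
        if name == "test_BioSQL":
            raise MissingExternalDependencyError("MySQLdb not installed")

    for name in ["test_Seq", "test_BioSQL"]:
        try:
            run_one_test(name)
            print "%s ... ok" % name
        except MissingExternalDependencyError, err:
            print "%s ... skipped, %s" % (name, err)
        # A plain ImportError is deliberately NOT caught here, so it would
        # still show up as a genuine test failure.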

> I'd therefore like to suggest the following:
> 1) Keep the test output, but let each test_* script (instead of
> run_tests.py) be responsible for comparing the test output with the
> expected output.

I'm not keen on that - it means duplicating code (or at least adding a shared
helper that every test must call) and makes writing simple tests that little
bit harder.  I like the fact that the more verbose test scripts can be
run on their own as an example of what the module can do.

> 2) If the expected output is trivial, simply use the assert
> statements to verify the test output instead of storing them in a
> file and reading them from there.

By all means, test trivial output with assertions.  I already do this 
within many of my "verbose" tests where I want to keep the console 
output reasonably short.
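
For example (made-up data, just to illustrate the conversion), rather than
printing a record count and keeping it in the expected output file:

    # Three made-up "records", standing in for whatever the test parses:
    records = ["recA", "recB", "recC"]
    # Trivial check done with an assert instead of via the output file:
    assert len(records) == 3, "Expected 3 records, got %i" % len(records)
    # Only print things genuinely worth a human reading:
    print "First record: %s" % records[0]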

Peter



