[Biopython-dev] Rethinking Biopython's testing framework

Michiel de Hoon mjldehoon at yahoo.com
Sat Jan 10 16:30:07 UTC 2009


> > We could discuss a modification to run_tests.py so
> > that if there is no expected output file
> > output/test_XXX for test_XXX.py we just run
> > test_XXX.py and check its return value (I think
> > Michiel had previously
> > suggested something like this).
> 
> I think this should be done inside the test itself.
> All the tests should return only a boolean value (passed or
> not) and a description of the error.
> Tests that make use of an expected output file should open
> it and do the comparison themselves, rather than leaving it
> to run_tests.py.

Sounds attractive, but there is one complication for the print-and-compare tests: the code that does the comparison is not trivial (see run_tests.py). One option is to move the comparison code into a helper module that each print-and-compare test imports; a sketch of such a helper follows below. Still, the print-and-compare tests currently have the advantage of being simple, and they will become more complicated if we require each test to do its own comparison.
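
For concreteness, here is a rough sketch of what such a helper module might look like. The module and function names are made up, and the output/test_XXX location and the convention that the first line of the expected file holds the test name follow the current run_tests.py:

    import os
    import sys
    try:
        from StringIO import StringIO  # Python 2
    except ImportError:
        from io import StringIO       # Python 3

    def compare_output(test_name, run):
        """Run the callable `run` with stdout captured, and compare
        the captured output against the expected output file
        output/<test_name>.

        Returns (passed, description), following the
        boolean-plus-message convention proposed above.
        """
        handle = open(os.path.join("output", test_name))
        expected = handle.read().splitlines()
        handle.close()
        # Redirect stdout while the test runs.
        old_stdout = sys.stdout
        sys.stdout = StringIO()
        try:
            run()
            observed = sys.stdout.getvalue().splitlines()
        finally:
            sys.stdout = old_stdout
        # The first line of the expected output file is the test name.
        if expected[1:] == observed:
            return True, ""
        for i, (want, got) in enumerate(zip(expected[1:], observed)):
            if want != got:
                return False, "line %i: expected %r, got %r" \
                       % (i + 1, want, got)
        return False, "expected %i lines of output, got %i" \
               % (len(expected) - 1, len(observed))

A print-and-compare test script would then end with something like:

    if __name__ == "__main__":
        passed, message = compare_output("test_XXX", main)

where main() is whatever function prints the test's output.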

Does anybody have an opinion on this? The choice is between (a) doing the print-and-compare as part of each print-and-compare test script, and (b) requiring a test_suite() function in each unittest-based test script, so that run_tests.py can treat any test script that defines test_suite() as unittest-based; a sketch of option (b) follows below.
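
As a rough sketch of option (b), with class and test names made up for illustration, a unittest-based test script would define test_suite() along these lines:

    import unittest

    class SimpleTest(unittest.TestCase):
        def test_addition(self):
            self.assertEqual(1 + 1, 2)

    def test_suite():
        # run_tests.py calls this to obtain the tests in this script.
        return unittest.TestLoader().loadTestsFromTestCase(SimpleTest)

    if __name__ == "__main__":
        unittest.TextTestRunner(verbosity=2).run(test_suite())

and run_tests.py would then only need to check for that function, roughly:

    module = __import__(test_name)
    if hasattr(module, "test_suite"):
        # A unittest-based test script: run the suite directly.
        unittest.TextTestRunner().run(module.test_suite())
    else:
        pass  # Fall back to the print-and-compare machinery.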

--Michiel