Now, run the following command to create all the necessary tables in our test database and use the nose2 test runner to execute all the tests we created. The test runner will execute all the methods of our InitialTests class that start with the test_ prefix and will display the results.
Remove the api.py file we created in the previous chapter from the api folder, because we don't want the test coverage measurement to take this file into account. Then, go to the api folder and run the following command within the same virtual environment we have been using. The -v option instructs nose2 to print test case names and statuses, and the --with-coverage option turns on the generation of the test coverage report:
nose2 -v --with-coverage
The following lines show a sample output:
test_create_and_retrieve_category (test_views.InitialTests) ... ok
test_create_duplicated_category (test_views.InitialTests) ... ok
test_request_without_authentication (test_views.InitialTests) ... ok
test_retrieve_categories_list (test_views.InitialTests) ... ok
test_update_category (test_views.InitialTests) ... ok

--------------------------------------------------------
Ran 5 tests in 3.973s

OK
----------- coverage: platform win32, python 3.5.2-final-0 -----------
Name                  Stmts   Miss  Cover
-----------------------------------------
app.py                    9      0   100%
config.py                11     11     0%
helpers.py               23     18    22%
migrate.py                9      9     0%
models.py               101     27    73%
run.py                    4      4     0%
status.py                56      5    91%
test_config.py           12      0   100%
tests\test_views.py      96      0   100%
views.py                204    109    47%
-----------------------------------------
TOTAL                   525    183    65%
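As a side note, deleting api.py is not the only way to keep a file out of the measurement: coverage.py can also be configured to ignore files. The following sketch, assuming coverage.py's standard .coveragerc configuration format, would omit api.py from the report without removing it from the folder:

```ini
; .coveragerc -- placed in the folder where the tests run
[run]
omit =
    api.py
```

With this configuration in place, the file no longer appears in the coverage table.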
By default, nose2 looks for modules whose names start with the test prefix. In this case, the only module that matches the criteria is the test_views module. From the modules that match the criteria, nose2 loads tests from all the subclasses of unittest.TestCase and from the functions whose names start with the test prefix.
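As a minimal illustration of these discovery rules, consider the following hypothetical module. Its file name starts with the test prefix, so nose2 would load it, and only the items whose names also start with test would be collected:

```python
# test_sample.py -- hypothetical module illustrating nose2's discovery rules
import unittest


class SampleTests(unittest.TestCase):
    def test_addition(self):
        # Discovered: the method name starts with the test prefix.
        self.assertEqual(1 + 1, 2)

    def helper_method(self):
        # Not discovered: the name lacks the test prefix.
        pass


def test_function_style():
    # Also discovered: a module-level function whose name starts with test.
    assert "flask".upper() == "FLASK"
```

Running nose2 -v in a folder containing this file would report the two discovered tests and ignore helper_method.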
The output indicates that the test runner discovered and executed five tests, and that all of them passed. For each test, the output displays the method name and the class name of the methods in the InitialTests class that started with the test_ prefix and therefore represented tests to be executed.
The test code coverage measurement report provided by the coverage package uses code analysis tools and the tracing hooks included in the Python standard library to determine which lines of code are executable and which ones have been executed. The report provides a table with the following columns:
Name: The Python module name.
Stmts: The count of executable statements in the Python module.
Miss: The number of executable statements that were missed, that is, the ones that weren't executed.
Cover: The coverage of executable statements, expressed as a percentage.

Based on the measurements shown in the report, we definitely have very low coverage for views.py and helpers.py. In fact, we just wrote a few tests related to categories and users, so it makes sense that the coverage is really low for the views. We didn't create any tests related to messages.
We can run the coverage command with the -m command-line option to display the line numbers of the missing statements in a new Missing column:
coverage report -m
The command will use the information from the last execution and will display the missing statements. The next lines show a sample output that corresponds to the previous execution of the unit tests:
Name                  Stmts   Miss  Cover   Missing
---------------------------------------------------
app.py                    9      0   100%
config.py                11     11     0%   7-20
helpers.py               23     18    22%   13-19, 23-44
migrate.py                9      9     0%   7-19
models.py               101     27    73%   28-29, 44, 46, 48, 50, 52, 54, 73-75, 79-86, 103, 127-137
run.py                    4      4     0%   7-14
status.py                56      5    91%   2, 6, 10, 14, 18
test_config.py           12      0   100%
tests\test_views.py      96      0   100%
views.py                204    109    47%   43-45, 51-58, 63-64, 67, 71-72, 83-87, 92-94, 97-124, 127-135, 140-147, 150-181, 194-195, 198, 205-206, 209-212, 215-223, 235-236, 239, 250-253
---------------------------------------------------
TOTAL                   525    183    65%
Now, run the following command to get annotated HTML listings detailing missed lines:
coverage html
Open the index.html file generated in the htmlcov folder with your web browser. The following picture shows an example of the report that coverage generated in HTML format:
Click or tap views.py, and the web browser will render a page that displays, in different colors, the statements that were run, the ones that were missing, and the excluded ones. We can click or tap the run, missing, and excluded buttons to show or hide the background color that represents the status of each line of code. By default, the missing lines of code are displayed with a pink background. Thus, we must write unit tests that target these lines of code to improve our test coverage.