Running unit tests and checking testing coverage

Now, run the following command to create a test database, run all the migrations, and use the django_nose test runner to execute all the tests we created. The test runner will execute all the methods of our GameCategoryTests class that start with the test_ prefix and will display the results.

Tip

The tests won't make changes to the database we have been using when working on the API.

Remember that we configured many default command-line options that will be used without us having to enter them on the command line. Run the following command within the same virtual environment we have been using. We will pass the -v 2 option to set verbosity level 2 because we want to check everything that the test runner is doing:

python manage.py test -v 2

The following lines show the sample output:

nosetests --with-coverage --cover-package=games --cover-erase --cover-inclusive -v --verbosity=2
Creating test database for alias 'default' ('test_games')...
Operations to perform:
  Synchronize unmigrated apps: django_nose, staticfiles, crispy_forms, messages, rest_framework
  Apply all migrations: games, admin, auth, contenttypes, sessions
Synchronizing apps without migrations:
  Creating tables...
    Running deferred SQL...
Running migrations:
  Rendering model states... DONE
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying games.0001_initial... OK
  Applying games.0002_auto_20160623_2131... OK
  Applying games.0003_game_owner... OK
  Applying sessions.0001_initial... OK
Ensure we can create a new GameCategory and then retrieve it ... ok
Ensure we can create a new GameCategory. ... ok
Ensure we can filter a game category by name ... ok
Ensure we can retrieve a game category ... ok
Ensure we can update a single field for a game category ... ok
Name                   Stmts   Miss  Cover
------------------------------------------
games/__init__.py          0      0   100%
games/admin.py             1      1     0%
games/apps.py              3      3     0%
games/models.py           36     35     3%
games/pagination.py        3      0   100%
games/permissions.py       6      3    50%
games/serializers.py      45      0   100%
games/urls.py              3      0   100%
games/views.py            91      2    98%
------------------------------------------
TOTAL                    188     44    77%
------------------------------------------
Ran 5 tests in 0.143s
OK
Destroying test database for alias 'default' ('test_games')...
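The first line of the output echoes the default options that nose received. With django-nose, those defaults usually live in settings.py; a sketch, assuming django-nose is installed (option values copied from the first line of the sample output above):

```python
# settings.py (fragment) -- assumed django-nose configuration; adapt to your project
INSTALLED_APPS = [
    # ... your existing apps ...
    'django_nose',
]

# Tell Django to use nose as the test runner
TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'

# Default command-line options, matching the first line of the sample output
NOSE_ARGS = [
    '--with-coverage',
    '--cover-package=games',
    '--cover-erase',
    '--cover-inclusive',
]
```

With these settings in place, a plain python manage.py test picks up the coverage options automatically, which is why we never typed them on the command line.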

The output provides details indicating that the test runner executed 5 tests and that all of them passed. After the details about the migrations, the output displays the descriptive docstring we included in each method of the GameCategoryTests class that starts with the test_ prefix and represents a test to be executed. The following list shows the description included in each docstring and the method that it represents:

  • Ensure we can create a new GameCategory and then retrieve it: test_create_and_retrieve_game_category.
  • Ensure we can create a new GameCategory: test_create_duplicated_game_category.
  • Ensure we can filter a game category by name: test_filter_game_category_by_name.
  • Ensure we can retrieve a game category: test_retrieve_game_categories_list.
  • Ensure we can update a single field for a game category: test_update_game_category.
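These descriptions are not specific to nose: any unittest-compatible runner at verbosity level 2 prints the first line of each test method's docstring as its description. A minimal, framework-free sketch (the CalculatorTests class is hypothetical, just to illustrate the mechanism):

```python
import io
import unittest

class CalculatorTests(unittest.TestCase):
    def test_add(self):
        """Ensure we can add two numbers"""
        self.assertEqual(1 + 1, 2)

# Run the suite at verbosity 2 and capture what the runner prints
stream = io.StringIO()
suite = unittest.TestLoader().loadTestsFromTestCase(CalculatorTests)
unittest.TextTestRunner(stream=stream, verbosity=2).run(suite)
print(stream.getvalue())
```

The captured output contains the docstring line followed by "... ok", just like the descriptions in the sample output above.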

The test code coverage measurement report provided by the coverage package uses the code analysis tools and the tracing hooks included in the Python standard library to determine which lines of code are executable and which of these lines have been executed. The report provides a table with the following columns:

  • Name: The Python module name.
  • Stmts: The count of executable statements for the Python module.
  • Miss: The number of executable statements missed, that is, the ones that weren't executed.
  • Cover: The coverage of executable statements, expressed as a percentage.
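The Cover percentage follows directly from the first two columns; a quick sketch of the arithmetic, using rows taken from the report above (coverage.py's exact rounding rules may differ in edge cases, but these values match the table):

```python
def cover_percentage(stmts, miss):
    """Percentage of executable statements that were executed."""
    if stmts == 0:
        return 100  # modules with no executable statements show as fully covered
    return round(100 * (stmts - miss) / stmts)

# Rows copied from the report above: (name, Stmts, Miss)
for name, stmts, miss in [
    ("games/models.py", 36, 35),
    ("games/permissions.py", 6, 3),
    ("games/views.py", 91, 2),
    ("TOTAL", 188, 44),
]:
    print(f"{name}: {cover_percentage(stmts, miss)}%")
```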

We definitely have very low coverage for models.py based on the measurements shown in the report. In fact, we just wrote a few tests related to the GameCategory model, and therefore it makes sense that the coverage is really low for the models.

We can run the coverage command with the -m command-line option to display the line numbers of the missing statements in an additional Missing column:

coverage report -m

The command will use the information from the last execution and display the missing statements. The next lines show a sample output corresponding to the previous execution of the unit tests:

Name                   Stmts   Miss  Cover   Missing
----------------------------------------------------
games/__init__.py          0      0   100%
games/admin.py             1      1     0%   1
games/apps.py              3      3     0%   1-5
games/models.py           36     35     3%   1-10, 14-70
games/pagination.py        3      0   100%
games/permissions.py       6      3    50%   6-9
games/serializers.py      45      0   100%
games/tests.py            55      0   100%
games/urls.py              3      0   100%
games/views.py            91      2    98%   83, 177
----------------------------------------------------
TOTAL                    243     44    82%
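The Missing column uses comma-separated line numbers and ranges. A small helper (hypothetical; not part of the coverage package) makes it easy to expand such a spec into individual line numbers, for example to feed them into other tooling:

```python
def expand_missing(spec):
    """Expand a coverage 'Missing' spec like '1-10, 14-70' into line numbers."""
    lines = []
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            start, end = part.split("-")
            lines.extend(range(int(start), int(end) + 1))
        else:
            lines.append(int(part))
    return lines

print(expand_missing("83, 177"))            # the two lines missed in games/views.py
print(len(expand_missing("1-10, 14-70")))   # span reported for games/models.py
```

Note that the expanded ranges include blank and non-executable lines, so their total length can exceed the Miss count (67 lines in the spans for games/models.py versus 35 missed statements).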

Now, run the following command to get annotated HTML listings detailing missed lines:

coverage html

Open the index.html file generated in the htmlcov folder with your web browser. The following picture shows an example of the report that coverage generated in HTML format.

[Figure: the HTML coverage report index generated by coverage in the htmlcov folder]

Click or tap on games/models.py and the web browser will render a web page that displays the statements that were run, the ones that were missing, and the ones that were excluded, each with a different background color. We can click or tap the run, missing, and excluded buttons to show or hide the background color that represents the status of each line of code. By default, the missing lines of code are displayed with a pink background. Thus, we must write unit tests that target these lines of code to improve our test coverage:

[Figure: coverage detail for games/models.py, highlighting run, missing, and excluded lines]
