Marking Test Functions

pytest provides a cool mechanism to let you put markers on test functions. A test can have more than one marker, and a marker can be on multiple tests.

Markers make sense after you see them in action. Let’s say we want to run a subset of our tests as a quick “smoke test” to get a sense for whether or not there is some major break in the system. Smoke tests are by convention not all-inclusive, thorough test suites, but a select subset that can be run quickly and give a developer a decent idea of the health of all parts of the system.

To add a smoke test suite to the Tasks project, we can add @pytest.mark.smoke to some of the tests. Let’s add it to a couple of tests in test_api_exceptions.py (note that the markers smoke and get aren’t built into pytest; I just made them up):

 @pytest.mark.smoke
 def test_list_raises():
     """list() should raise an exception with wrong type param."""
     with pytest.raises(TypeError):
         tasks.list_tasks(owner=123)


 @pytest.mark.get
 @pytest.mark.smoke
 def test_get_raises():
     """get() should raise an exception with wrong type param."""
     with pytest.raises(TypeError):
         tasks.get(task_id='123')

Now, let’s run just the tests that have a particular marker, using -m marker_name:

 $ cd /path/to/code/ch2/tasks_proj/tests/func
 $ pytest -v -m 'smoke' test_api_exceptions.py
 ===================== test session starts ======================
 collected 7 items
 
 test_api_exceptions.py::test_list_raises PASSED
 test_api_exceptions.py::test_get_raises PASSED
 
 ====================== 5 tests deselected ======================
 ============ 2 passed, 5 deselected in 0.03 seconds ============
 $ pytest -v -m 'get' test_api_exceptions.py
 ===================== test session starts ======================
 collected 7 items
 
 test_api_exceptions.py::test_get_raises PASSED
 
 ====================== 6 tests deselected ======================
 ============ 1 passed, 6 deselected in 0.01 seconds ============

Remember that -v is short for --verbose and lets us see the names of the tests that are run. Using -m 'smoke' runs both tests marked with @pytest.mark.smoke. Using -m 'get' runs the one test marked with @pytest.mark.get. Pretty straightforward.

It gets better. The expression after -m can use and, or, and not to combine multiple markers:

 $ pytest -v -m 'smoke and get' test_api_exceptions.py
 ===================== test session starts ======================
 collected 7 items
 
 test_api_exceptions.py::test_get_raises PASSED
 
 ====================== 6 tests deselected ======================
 ============ 1 passed, 6 deselected in 0.03 seconds ============

That time we only ran the test that had both smoke and get markers. We can use not as well:

 $ pytest -v -m 'smoke and not get' test_api_exceptions.py
 ===================== test session starts ======================
 collected 7 items
 
 test_api_exceptions.py::test_list_raises PASSED
 
 ====================== 6 tests deselected ======================
 ============ 1 passed, 6 deselected in 0.03 seconds ============

The addition of -m 'smoke and not get' selected the test that was marked with @pytest.mark.smoke but not @pytest.mark.get.
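One practical note: because smoke and get are made-up markers, recent versions of pytest will warn about them unless they are registered. You can declare custom markers in an ini-style config file; this is a sketch (the marker descriptions here are my own wording, not from the Tasks project):

```ini
; pytest.ini -- a sketch of registering custom markers.
; Registered markers silence "unknown mark" warnings, and
; --strict-markers (--strict in older pytest) turns a typo
; in a marker name into an error instead of a silent no-op.
[pytest]
markers =
    smoke: quick subset of tests covering the major parts
    get: tests exercising the get() API call
```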

Filling Out the Smoke Test

The previous tests don’t seem like a reasonable smoke test suite yet. We haven’t actually touched the database or added any tasks. Surely a smoke test would do that.

Let’s add a couple of tests that look at adding a task, and use one of them as part of our smoke test suite:

 import pytest
 import tasks
 from tasks import Task


 def test_add_returns_valid_id():
     """tasks.add(<valid task>) should return an integer."""
     # GIVEN an initialized tasks db
     # WHEN a new task is added
     # THEN returned task_id is of type int
     new_task = Task('do something')
     task_id = tasks.add(new_task)
     assert isinstance(task_id, int)


 @pytest.mark.smoke
 def test_added_task_has_id_set():
     """Make sure the task_id field is set by tasks.add()."""
     # GIVEN an initialized tasks db
     # AND a new task is added
     new_task = Task('sit in chair', owner='me', done=True)
     task_id = tasks.add(new_task)

     # WHEN task is retrieved
     task_from_db = tasks.get(task_id)

     # THEN task_id matches id field
     assert task_from_db.id == task_id

Both of these tests have the comment GIVEN an initialized tasks db, and yet there is no database initialized in the test. We can define a fixture to get the database initialized before the test and cleaned up after the test:

 @pytest.fixture(autouse=True)
 def initialized_tasks_db(tmpdir):
     """Connect to db before testing, disconnect after."""
     # Setup : start db
     tasks.start_tasks_db(str(tmpdir), 'tiny')

     yield  # this is where the testing happens

     # Teardown : stop db
     tasks.stop_tasks_db()

The tmpdir fixture used in this example is a builtin fixture. You’ll learn all about builtin fixtures in Chapter 4, Builtin Fixtures, and you’ll learn about writing your own fixtures and how they work in Chapter 3, pytest Fixtures, including the autouse parameter used here.

autouse=True, as used on our fixture, indicates that all tests in this file will use the fixture. The code before the yield runs before each test; the code after the yield runs after the test. The yield can return data to the test if desired. You’ll look at all that and more in later chapters, but here we need some way to set up the database for testing, so I couldn’t wait any longer to show you a fixture. (pytest also supports old-fashioned setup and teardown functions, like what is used in unittest and nose, but they are not nearly as fun. However, if you are curious, they are described in Appendix 5, xUnit Fixtures.)
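To make the yield behavior concrete, here’s a minimal, self-contained sketch (separate from the Tasks project; the fixture and test names are ones I made up) of a fixture that yields a value to the test:

```python
import pytest


@pytest.fixture()
def sample_data():
    # Setup: runs before each test that requests this fixture
    data = {'owner': 'me', 'done': False}
    yield data  # the yielded value is what the test receives
    # Teardown: runs after the test finishes, even if it failed
    data.clear()


def test_uses_sample_data(sample_data):
    # sample_data here is the dict yielded by the fixture above
    assert sample_data['owner'] == 'me'
```

Unlike initialized_tasks_db, this fixture is not autouse, so only tests that name it as a parameter (like test_uses_sample_data) get the setup and teardown.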

Let’s set aside fixture discussion for now and go to the top of the project and run our smoke test suite:

 $ ​​cd​​ ​​/path/to/code/ch2/tasks_proj
 $ ​​pytest​​ ​​-v​​ ​​-m​​ ​​'smoke'
 ===================== test session starts ======================
 collected 56 items
 
 tests/func/test_add.py::test_added_task_has_id_set PASSED
 tests/func/test_api_exceptions.py::test_list_raises PASSED
 tests/func/test_api_exceptions.py::test_get_raises PASSED
 
 ===================== 53 tests deselected ======================
 =========== 3 passed, 53 deselected in 0.11 seconds ============

This shows that marked tests from different files can all run together.
