Parametrizing Fixtures

In Parametrized Testing, we parametrized tests. We can also parametrize fixtures. We still use our list of tasks, list of task identifiers, and an equivalence function, just as before:

    import pytest
    import tasks
    from tasks import Task

    tasks_to_try = (Task('sleep', done=True),
                    Task('wake', 'brian'),
                    Task('breathe', 'BRIAN', True),
                    Task('exercise', 'BrIaN', False))

    task_ids = ['Task({},{},{})'.format(t.summary, t.owner, t.done)
                for t in tasks_to_try]


    def equivalent(t1, t2):
        """Check two tasks for equivalence."""
        return ((t1.summary == t2.summary) and
                (t1.owner == t2.owner) and
                (t1.done == t2.done))

But now, instead of parametrizing the test, we will parametrize a fixture called a_task:

    @pytest.fixture(params=tasks_to_try)
    def a_task(request):
        """Using no ids."""
        return request.param


    def test_add_a(tasks_db, a_task):
        """Using a_task fixture (no ids)."""
        task_id = tasks.add(a_task)
        t_from_db = tasks.get(task_id)
        assert equivalent(t_from_db, a_task)

The request listed in the fixture's parameter list is another builtin fixture that represents the calling state of the fixture. You'll explore it more in the next chapter. It has a field, param, that is filled in with one element from the list assigned to params in @pytest.fixture(params=tasks_to_try).
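Beyond param, request exposes other context about the requesting test. Here's a minimal sketch (the fixture and test names are made up, not part of the Tasks project) showing a couple of the attributes you'll meet in the next chapter:

    @pytest.fixture(params=['a', 'b'])
    def demo_fixture(request):
        # request.param is the current element from params
        # request.fixturename is this fixture's name: 'demo_fixture'
        # request.module is the test module that requested the fixture
        print(request.fixturename, request.module.__name__)
        return request.param


    def test_demo(demo_fixture):
        assert demo_fixture in ('a', 'b')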

The a_task fixture is pretty simple—it just returns the request.param as its value to the test using it. Since our task list has four tasks, the fixture will be called four times, and then the test will get called four times:

    $ cd /path/to/code/ch3/b/tasks_proj/tests/func
    $ pytest -v test_add_variety2.py::test_add_a
    ===================== test session starts ======================
    collected 4 items

    test_add_variety2.py::test_add_a[a_task0] PASSED
    test_add_variety2.py::test_add_a[a_task1] PASSED
    test_add_variety2.py::test_add_a[a_task2] PASSED
    test_add_variety2.py::test_add_a[a_task3] PASSED

    =================== 4 passed in 0.03 seconds ===================

We didn’t provide ids, so pytest just made up some names by appending a number to the name of the fixture. However, we can use the same string list we used when we parametrized our tests:

    @pytest.fixture(params=tasks_to_try, ids=task_ids)
    def b_task(request):
        """Using a list of ids."""
        return request.param


    def test_add_b(tasks_db, b_task):
        """Using b_task fixture, with ids."""
        task_id = tasks.add(b_task)
        t_from_db = tasks.get(task_id)
        assert equivalent(t_from_db, b_task)

This gives us better identifiers:

    $ pytest -v test_add_variety2.py::test_add_b
    ===================== test session starts ======================
    collected 4 items

    test_add_variety2.py::test_add_b[Task(sleep,None,True)] PASSED
    test_add_variety2.py::test_add_b[Task(wake,brian,False)] PASSED
    test_add_variety2.py::test_add_b[Task(breathe,BRIAN,True)] PASSED
    test_add_variety2.py::test_add_b[Task(exercise,BrIaN,False)] PASSED

    =================== 4 passed in 0.04 seconds ===================

We can also set the ids parameter to a function we write that provides the identifiers. Here’s what it looks like when we use a function to generate the identifiers:

    def id_func(fixture_value):
        """A function for generating ids."""
        t = fixture_value
        return 'Task({},{},{})'.format(t.summary, t.owner, t.done)


    @pytest.fixture(params=tasks_to_try, ids=id_func)
    def c_task(request):
        """Using a function (id_func) to generate ids."""
        return request.param


    def test_add_c(tasks_db, c_task):
        """Use fixture with generated ids."""
        task_id = tasks.add(c_task)
        t_from_db = tasks.get(task_id)
        assert equivalent(t_from_db, c_task)

The function will be called with the value of each item from the parametrization. Since the parametrization is a list of Task objects, id_func() is called with one Task object at a time, and the namedtuple field accessors let it build that task's identifier. It's a bit cleaner than generating a full list ahead of time, and the output looks the same:

    $ pytest -v test_add_variety2.py::test_add_c
    ===================== test session starts ======================
    collected 4 items

    test_add_variety2.py::test_add_c[Task(sleep,None,True)] PASSED
    test_add_variety2.py::test_add_c[Task(wake,brian,False)] PASSED
    test_add_variety2.py::test_add_c[Task(breathe,BRIAN,True)] PASSED
    test_add_variety2.py::test_add_c[Task(exercise,BrIaN,False)] PASSED

    =================== 4 passed in 0.04 seconds ===================
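Because ids accepts any callable, an inline lambda works as well for simple cases. This variant (d_task is a hypothetical name, not in the book's source) behaves exactly like c_task:

    @pytest.fixture(params=tasks_to_try,
                    ids=lambda t: 'Task({},{},{})'.format(t.summary, t.owner, t.done))
    def d_task(request):
        """Hypothetical variant: generate ids with an inline lambda."""
        return request.param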

With a parametrized test function, that one function runs multiple times. But with a parametrized fixture, every test function that uses that fixture runs multiple times. Very powerful.
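To see that multiplication in action, imagine a second test that also uses a_task. It, too, would run once per task, for eight test runs across the two tests. Here's a sketch (this test isn't in the project; it assumes the Tasks API's delete() and count() functions):

    def test_add_then_delete(tasks_db, a_task):
        """Hypothetical second test: also runs four times, once per task."""
        task_id = tasks.add(a_task)
        tasks.delete(task_id)
        assert tasks.count() == 0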

Parametrizing Fixtures in the Tasks Project

Now, let's see how we can use parametrized fixtures in the Tasks project. So far, we've used TinyDB for all of the testing. But we want to keep our options open until later in the project. Therefore, any code we write, and any tests we write, should work with both TinyDB and MongoDB.

The decision (in the code) of which database to use is isolated to the start_tasks_db() call in the tasks_db_session fixture:

    import pytest
    import tasks
    from tasks import Task


    @pytest.fixture(scope='session')
    def tasks_db_session(tmpdir_factory):
        """Connect to db before tests, disconnect after."""
        temp_dir = tmpdir_factory.mktemp('temp')
        tasks.start_tasks_db(str(temp_dir), 'tiny')
        yield
        tasks.stop_tasks_db()


    @pytest.fixture()
    def tasks_db(tasks_db_session):
        """An empty tasks db."""
        tasks.delete_all()

The db_type parameter in the call to start_tasks_db() isn’t magic. It just ends up switching which subsystem gets to be responsible for the rest of the database interactions:

    def start_tasks_db(db_path, db_type):  # type: (str, str) -> None
        """Connect API functions to a db."""
        # string_types comes from six, imported at the top of the module
        if not isinstance(db_path, string_types):
            raise TypeError('db_path must be a string')
        global _tasksdb
        if db_type == 'tiny':
            import tasks.tasksdb_tinydb
            _tasksdb = tasks.tasksdb_tinydb.start_tasks_db(db_path)
        elif db_type == 'mongo':
            import tasks.tasksdb_pymongo
            _tasksdb = tasks.tasksdb_pymongo.start_tasks_db(db_path)
        else:
            raise ValueError("db_type must be 'tiny' or 'mongo'")

To test MongoDB, we need to run all the tests with db_type set to mongo. A small change does the trick:

    import pytest
    import tasks
    from tasks import Task


    # @pytest.fixture(scope='session', params=['tiny',])
    @pytest.fixture(scope='session', params=['tiny', 'mongo'])
    def tasks_db_session(tmpdir_factory, request):
        """Connect to db before tests, disconnect after."""
        temp_dir = tmpdir_factory.mktemp('temp')
        tasks.start_tasks_db(str(temp_dir), request.param)
        yield  # this is where the testing happens
        tasks.stop_tasks_db()


    @pytest.fixture()
    def tasks_db(tasks_db_session):
        """An empty tasks db."""
        tasks.delete_all()

Here I added params=['tiny', 'mongo'] to the fixture decorator, added request to the parameter list of tasks_db_session, and set db_type to request.param instead of hardcoding 'tiny' or 'mongo'.

When you run pytest with --verbose or -v on parametrized tests or parametrized fixtures, pytest labels the different runs based on the value of the parametrization. And because the values here are already strings, that works great.
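Readable ids also make it easy to run a subset of the parametrizations from the command line: pytest's -k expression and the bracketed test-id syntax both match against these names. For example (hypothetical invocations):

    $ pytest -v -k mongo    # run only the tests parametrized with [mongo]
    $ pytest "tests/func/test_add.py::test_add_increases_count[tiny]"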

Installing MongoDB


To follow along with MongoDB testing, make sure MongoDB and pymongo are installed. I've been testing with the community edition of MongoDB, found at https://www.mongodb.com/download-center. pymongo is installed with pip: pip install pymongo. However, using MongoDB is not necessary to follow along with the rest of the book; it's used in this example and in a debugger example in Chapter 7.

Here’s what we have so far:

    $ cd /path/to/code/ch3/c/tasks_proj
    $ pip install pymongo
    $ pytest -v --tb=no
    ===================== test session starts ======================
    collected 92 items

    test_add.py::test_add_returns_valid_id[tiny] PASSED
    test_add.py::test_added_task_has_id_set[tiny] PASSED
    test_add.py::test_add_increases_count[tiny] PASSED
    test_add_variety.py::test_add_1[tiny] PASSED
    test_add_variety.py::test_add_2[tiny-task0] PASSED
    test_add_variety.py::test_add_2[tiny-task1] PASSED
    ...
    test_add.py::test_add_returns_valid_id[mongo] FAILED
    test_add.py::test_added_task_has_id_set[mongo] FAILED
    test_add.py::test_add_increases_count[mongo] PASSED
    test_add_variety.py::test_add_1[mongo] FAILED
    test_add_variety.py::test_add_2[mongo-task0] FAILED
    ...
    ============= 42 failed, 50 passed in 4.94 seconds =============

Hmm. Bummer. Looks like we’ll need to do some debugging before we let anyone use the Mongo version. You’ll take a look at how to debug this in pdb: Debugging Test Failures. Until then, we’ll use the TinyDB version.
