Using Options

We’ve used the verbose option, -v or --verbose, a couple of times already, but there are many more options worth knowing about. We’re not going to use all of the options in this book, but we will use quite a few. You can see all of them with pytest --help.

The following are a handful of options that are quite useful when you’re starting out with pytest. This is by no means a complete list, but these options in particular address some common needs for controlling how pytest runs.

 $ pytest --help
  ... subset of the list ...
  -k EXPRESSION only run tests/classes which match the given
  substring expression.
  Example: -k 'test_method or test_other' matches
  all test functions and classes whose name
  contains 'test_method' or 'test_other'.
  -m MARKEXPR only run tests matching given mark expression.
  example: -m 'mark1 and not mark2'.
  -x, --exitfirst exit instantly on first error or failed test.
  --maxfail=num exit after first num failures or errors.
  --capture=method per-test capturing method: one of fd|sys|no.
  -s shortcut for --capture=no.
  --lf, --last-failed rerun only the tests that failed last time
  (or all if none failed)
  --ff, --failed-first run all tests but run the last failures first.
  -v, --verbose increase verbosity.
  -q, --quiet decrease verbosity.
  -l, --showlocals show locals in tracebacks (disabled by default).
  --tb=style traceback print mode (auto/long/short/line/native/no).
  --durations=N show N slowest setup/test durations (N=0 for all).
  --collect-only only collect tests, don't execute them.
  --version display pytest lib version and import information.
  -h, --help show help message and configuration info

--collect-only

The --collect-only option shows you which tests will be run with the given options and configuration. It’s convenient to show this option first so that the output can be used as a reference for the rest of the examples. If you start in the ch1 directory, you should see all of the test functions you’ve looked at so far in this chapter:

 $ cd /path/to/code/ch1
 $ pytest --collect-only
 =================== test session starts ===================
 collected 6 items
 <Module 'test_one.py'>
  <Function 'test_passing'>
 <Module 'test_two.py'>
  <Function 'test_failing'>
 <Module 'tasks/test_four.py'>
  <Function 'test_asdict'>
  <Function 'test_replace'>
 <Module 'tasks/test_three.py'>
  <Function 'test_defaults'>
  <Function 'test_member_access'>
 
 ============== no tests ran in 0.03 seconds ===============

The --collect-only option is helpful for checking that other options that select tests will select the ones you intend, before you actually run the tests. We’ll use it again with -k to show how that works.

-k EXPRESSION

The -k option lets you use an expression to find what test functions to run. Pretty powerful. It can be used as a shortcut to running an individual test if its name is unique, or running a set of tests that have a common prefix or suffix in their names. Let’s say you want to run the test_asdict() and test_defaults() tests. You can test out the filter with --collect-only:

 $ cd /path/to/code/ch1
 $ pytest -k "asdict or defaults" --collect-only
 =================== test session starts ===================
 collected 6 items
 <Module 'tasks/test_four.py'>
  <Function 'test_asdict'>
 <Module 'tasks/test_three.py'>
  <Function 'test_defaults'>
 
 =================== 4 tests deselected ====================
 ============== 4 deselected in 0.03 seconds ===============

Yep. That looks like what we want. Now you can run them by removing the --collect-only:

 $ pytest -k "asdict or defaults"
 =================== test session starts ===================
 collected 6 items
 
 tasks/test_four.py .
 tasks/test_three.py .
 
 =================== 4 tests deselected ====================
 ========= 2 passed, 4 deselected in 0.03 seconds ==========

Hmm. Just dots. So they passed. But were they the right tests? One way to find out is to use -v or --verbose:

 $ pytest -v -k "asdict or defaults"
 =================== test session starts ===================
 collected 6 items
 
 tasks/test_four.py::test_asdict PASSED
 tasks/test_three.py::test_defaults PASSED
 
 =================== 4 tests deselected ====================
 ========= 2 passed, 4 deselected in 0.02 seconds ==========

Yep. They were the correct tests.

-m MARKEXPR

Markers are one of the best ways to mark a subset of your test functions so that they can be run together. As an example, one way to run test_replace() and test_member_access(), even though they are in separate files, is to mark them.

You can use any marker name. Let’s say you want to use run_these_please. You’d mark a test using the decorator @pytest.mark.run_these_please, like so:

 import pytest
 
 ...
 @pytest.mark.run_these_please
 def test_member_access():
 ...
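
You’d mark test_replace() in tasks/test_four.py the same way:

 import pytest
 
 ...
 @pytest.mark.run_these_please
 def test_replace():
 ...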

You can then run all the tests with that marker using pytest -m run_these_please:

 $ cd /path/to/code/ch1/tasks
 $ pytest -v -m run_these_please
 ================== test session starts ===================
 collected 4 items
 
 test_four.py::test_replace PASSED
 test_three.py::test_member_access PASSED
 
 =================== 2 tests deselected ===================
 ========= 2 passed, 2 deselected in 0.02 seconds =========

The marker expression doesn’t have to be a single marker. You can say things like -m "mark1 and mark2" for tests with both markers, -m "mark1 and not mark2" for tests that have mark1 but not mark2, -m "mark1 or mark2" for tests with either, and so on. I’ll discuss markers more completely in Marking Test Functions.
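
For instance, if test_member_access() also carried a second marker, the expressions combine as you’d expect. Here’s a sketch; the smoke marker name is made up for illustration:

 import pytest
 
 @pytest.mark.smoke
 @pytest.mark.run_these_please
 def test_member_access():
     ...

With both markers applied, -m "run_these_please and smoke" would select this test, while -m "run_these_please and not smoke" would deselect it.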

-x, --exitfirst

Normal pytest behavior is to run every test it finds. If a test function encounters a failing assert or an exception, the execution for that test stops there and the test fails. And then pytest runs the next test. Most of the time, this is what you want. However, especially when debugging a problem, stopping the entire test session immediately when a test fails is the right thing to do. That’s what the -x option does.

Let’s try it on the six tests we have so far:

 $ cd /path/to/code/ch1
 $ pytest -x
 =================== test session starts ===================
 collected 6 items
 
 test_one.py .
 test_two.py F
 
 ======================== FAILURES =========================
 ______________________ test_failing _______________________
 
  def test_failing():
 > assert (1, 2, 3) == (3, 2, 1)
 E assert (1, 2, 3) == (3, 2, 1)
 E At index 0 diff: 1 != 3
 E Use -v to get the full diff
 
 test_two.py:2: AssertionError
 !!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!
 =========== 1 failed, 1 passed in 0.25 seconds ============

Near the top of the output you see that all six tests (or “items”) were collected. The bottom line shows that one test failed and one passed, and pytest displays the “Interrupted” line to tell us that it stopped early.

Without -x, all six tests would run. Let’s run it again without the -x, this time also using --tb=no to turn off the stack trace, since you’ve already seen it and don’t need to see it again:

 $ cd /path/to/code/ch1
 $ pytest --tb=no
 =================== test session starts ===================
 collected 6 items
 
 test_one.py .
 test_two.py F
 tasks/test_four.py ..
 tasks/test_three.py ..
 
 =========== 1 failed, 5 passed in 0.09 seconds ============

This demonstrates that without the -x, pytest notes failure in test_two.py and continues on with further testing.

--maxfail=num

The -x option stops after one test failure. If you want to let some failures happen, but not a ton, use the --maxfail option to specify how many failures are okay with you.

It’s hard to really show this with only one failing test in our system so far, but let’s take a look anyway. Since there is only one failure, if we set --maxfail=2, all of the tests should run, and --maxfail=1 should act just like -x:

 $ cd /path/to/code/ch1
 $ pytest --maxfail=2 --tb=no
 =================== test session starts ===================
 collected 6 items
 
 test_one.py .
 test_two.py F
 tasks/test_four.py ..
 tasks/test_three.py ..
 
 =========== 1 failed, 5 passed in 0.08 seconds ============
 $ pytest --maxfail=1 --tb=no
 =================== test session starts ===================
 collected 6 items
 
 test_one.py .
 test_two.py F
 
 !!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!
 =========== 1 failed, 1 passed in 0.19 seconds ============

Again, we used --tb=no to turn off the traceback.

-s and --capture=method

The -s flag allows print statements—or really any output that normally would be printed to stdout—to actually be printed to stdout while the tests are running. It is a shortcut for --capture=no. This makes sense once you understand that normally the output is captured on all tests. Failing tests will have the output reported after the test runs on the assumption that the output will help you understand what went wrong. The -s or --capture=no option turns off output capture. When developing tests, I find it useful to add several print() statements so that I can watch the flow of the test.

Another option that may help you to not need print statements in your code is -l/--showlocals, which prints out the local variables in a test if the test fails.

Other choices for the capture method are --capture=fd and --capture=sys. The --capture=sys option replaces sys.stdout/stderr with in-memory files. The --capture=fd option points file descriptors 1 and 2 to a temporary file.

I’m including descriptions of sys and fd for completeness. But to be honest, I’ve never needed or used either. I frequently use -s. And to fully describe how -s works, I needed to touch on capture methods.

We don’t have any print statements in our tests yet; a demo would be pointless. However, I encourage you to play with this a bit so you see it in action.
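
If you’d like something to start from, a throwaway test like this makes the difference visible (a sketch; the file and test names are made up and not part of the Tasks project):

 # test_print_demo.py -- hypothetical throwaway file, just for experimenting
 def test_print_demo():
     print('you only see this output if capturing is turned off')
     assert True

Run it once with plain pytest and once with pytest -s; for a passing test, the print output appears on the terminal only in the second run.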

--lf, --last-failed

When one or more tests fail, having a convenient way to run just the failing tests is helpful for debugging. Just use --lf and you’re ready to debug:

 $ cd /path/to/code/ch1
 $ pytest --lf
 =================== test session starts ===================
 run-last-failure: rerun last 1 failures
 collected 6 items
 
 test_two.py F
 
 ======================== FAILURES =========================
 ______________________ test_failing _______________________
 
  def test_failing():
 > assert (1, 2, 3) == (3, 2, 1)
 E assert (1, 2, 3) == (3, 2, 1)
 E At index 0 diff: 1 != 3
 E Use -v to get the full diff
 
 test_two.py:2: AssertionError
 =================== 5 tests deselected ====================
 ========= 1 failed, 5 deselected in 0.08 seconds ==========

This is great if you’ve been using a --tb option that hides some information and you want to re-run the failures with a different traceback option.

--ff, --failed-first

The --ff/--failed-first option will do the same as --last-failed, and then run the rest of the tests that passed last time:

 $ cd /path/to/code/ch1
 $ pytest --ff --tb=no
 =================== test session starts ===================
 run-last-failure: rerun last 1 failures first
 collected 6 items
 
 test_two.py F
 test_one.py .
 tasks/test_four.py ..
 tasks/test_three.py ..
 
 =========== 1 failed, 5 passed in 0.09 seconds ============

Usually, test_failing() from test_two.py is run after test_one.py. However, because test_failing() failed last time, --ff causes it to be run first.

-v, --verbose

The -v/--verbose option reports more information than a normal run. The most obvious difference is that each test gets its own line, and the name of the test and the outcome are spelled out instead of indicated with just a dot.

We’ve used it quite a bit already, but let’s run it again for fun in conjunction with --ff and --tb=no:

 $ cd /path/to/code/ch1
 $ pytest -v --ff --tb=no
 =================== test session starts ===================
 run-last-failure: rerun last 1 failures first
 collected 6 items
 
 test_two.py::test_failing FAILED
 test_one.py::test_passing PASSED
 tasks/test_four.py::test_asdict PASSED
 tasks/test_four.py::test_replace PASSED
 tasks/test_three.py::test_defaults PASSED
 tasks/test_three.py::test_member_access PASSED
 
 =========== 1 failed, 5 passed in 0.07 seconds ============

With color terminals, you’d see red FAILED and green PASSED outcomes in the report as well.

-q, --quiet

The -q/--quiet option is the opposite of -v/--verbose; it decreases the information reported. I like to use it in conjunction with --tb=line, which reports just the failing line of any failing tests.

Let’s try -q by itself:

 $ cd /path/to/code/ch1
 $ pytest -q
 .F....
 ======================== FAILURES =========================
 ______________________ test_failing _______________________
 
  def test_failing():
 > assert (1, 2, 3) == (3, 2, 1)
 E assert (1, 2, 3) == (3, 2, 1)
 E At index 0 diff: 1 != 3
 E Full diff:
 E - (1, 2, 3)
 E ? ^ ^
 E + (3, 2, 1)
 E ? ^ ^
 
 test_two.py:2: AssertionError
 1 failed, 5 passed in 0.08 seconds

The -q option makes the output pretty terse, but it’s usually enough. We’ll use the -q option frequently in the rest of the book (as well as --tb=no) to limit the output to what we are specifically trying to understand at the time.

-l, --showlocals

If you use the -l/--showlocals option, local variables and their values are displayed with tracebacks for failing tests.

So far, we don’t have any failing tests that have local variables. If I take the test_replace() test and change

 t_expected = Task('finish book', 'brian', True, 10)

to

 t_expected = Task('finish book', 'brian', True, 11)

the mismatch between 10 and 11 will cause a failure. Any change to the expected value would do, but this is enough to demonstrate the command-line option -l/--showlocals:

 $ cd /path/to/code/ch1
 $ pytest -l tasks
 =================== test session starts ===================
 collected 4 items
 
 tasks/test_four.py .F
 tasks/test_three.py ..
 
 ======================== FAILURES =========================
 ______________________ test_replace _______________________
 
  def test_replace():
  t_before = Task('finish book', 'brian', False)
  t_after = t_before._replace(id=10, done=True)
  t_expected = Task('finish book', 'brian', True, 11)
 > assert t_after == t_expected
 E AssertionError: assert Task(summary=...e=True, id=10) == Task(
 summary='...e=True, id=11)
 E At index 3 diff: 10 != 11
 E Use -v to get the full diff
 
 t_after = Task(summary='finish book', owner='brian', done=True, id=10)
 t_before = Task(summary='finish book', owner='brian', done=False, id=None)
 t_expected = Task(summary='finish book', owner='brian', done=True, id=11)
 
 tasks/test_four.py:20: AssertionError
 =========== 1 failed, 3 passed in 0.08 seconds ============

The local variables t_after, t_before, and t_expected are shown after the code snippet, with the value they contained at the time of the failed assert.

--tb=style

The --tb=style option modifies the way tracebacks for failures are output. When a test fails, pytest lists the failures and what’s called a traceback, which shows you the exact line where the failure occurred. Although tracebacks are helpful most of the time, there may be times when they get annoying. That’s where the --tb=style option comes in handy. The styles I find useful are short, line, and no. short prints just the assert line and the E evaluated line with no context; line keeps the failure to one line; no removes the traceback entirely.

Let’s leave the modification to test_replace() in place so that it still fails, and run it with different traceback styles.

--tb=no removes the traceback entirely:

 $ cd /path/to/code/ch1
 $ pytest --tb=no tasks
 =================== test session starts ===================
 collected 4 items
 
 tasks/test_four.py .F
 tasks/test_three.py ..
 
 =========== 1 failed, 3 passed in 0.04 seconds ============

--tb=line in many cases is enough to tell what’s wrong. If you have a ton of failing tests, this option can help to show a pattern in the failures:

 $ pytest --tb=line tasks
 =================== test session starts ===================
 collected 4 items
 
 tasks/test_four.py .F
 tasks/test_three.py ..
 
 ======================== FAILURES =========================
 /path/to/code/ch1/tasks/test_four.py:20:
 AssertionError: assert Task(summary=...e=True, id=10) == Task(
 summary='...e=True, id=11)
 =========== 1 failed, 3 passed in 0.05 seconds ============

The next step up in verbose tracebacks is --tb=short:

 $ pytest --tb=short tasks
 =================== test session starts ===================
 collected 4 items
 
 tasks/test_four.py .F
 tasks/test_three.py ..
 
 ======================== FAILURES =========================
 ______________________ test_replace _______________________
 tasks/test_four.py:20: in test_replace
  assert t_after == t_expected
 E AssertionError: assert Task(summary=...e=True, id=10) == Task(
 summary='...e=True, id=11)
 E At index 3 diff: 10 != 11
 E Use -v to get the full diff
 =========== 1 failed, 3 passed in 0.04 seconds ============

That’s definitely enough to tell you what’s going on.

There are three remaining traceback choices that we haven’t covered so far.

pytest --tb=long will show you the most exhaustive, informative traceback possible. pytest --tb=auto will show you the long version for the first and last tracebacks, if you have multiple failures. This is the default behavior. pytest --tb=native will show you the standard library traceback without any extra information.

--durations=N

The --durations=N option is incredibly helpful when you’re trying to speed up your test suite. It doesn’t change how your tests are run; it reports the N slowest tests/setups/teardowns after the tests run. If you pass in --durations=0, it reports everything in order of slowest to fastest.

None of our tests are long, so I’ll add a time.sleep(0.1) to one of the tests. Guess which one:

 $ cd /path/to/code/ch1
 $ pytest --durations=3 tasks
 ================= test session starts =================
 collected 4 items
 
 tasks/test_four.py ..
 tasks/test_three.py ..
 
 ============== slowest 3 test durations ===============
 0.10s call tasks/test_four.py::test_replace
 0.00s setup tasks/test_three.py::test_defaults
 0.00s teardown tasks/test_three.py::test_member_access
 ============== 4 passed in 0.13 seconds ===============

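The report gives the answer away: the sleep went into test_replace(). The change is nothing more than a short delay; a sketch (the exact placement in the test doesn’t matter):

 import time
 
 def test_replace():
     time.sleep(0.1)  # artificial delay so --durations has something to report
     # ... the rest of the original test body is unchanged ...
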
The slow test with the extra sleep shows up right away with the label call, followed by setup and teardown. Every test essentially has three phases: call, setup, and teardown. Setup and teardown are also called fixtures and are a chance for you to add code to get data or the software system under test into a precondition state before the test runs, as well as clean up afterwards if necessary. I cover fixtures in depth in Chapter 3, pytest Fixtures.

--version

The --version option shows the version of pytest and the directory where it’s installed:

 $ pytest --version
 This is pytest version 3.0.7, imported from
  /path/to/venv/lib/python3.5/site-packages/pytest.py

Since we installed pytest into a virtual environment, pytest will be located in the site-packages directory of that virtual environment.

-h, --help

The -h/--help option is quite helpful, even after you get used to pytest. Not only does it show you how to use stock pytest, but it also expands as you install plugins to show options and configuration variables added by plugins.

The -h option shows:

  • usage: pytest [options] [file_or_dir] [file_or_dir] [...]

  • Command-line options and a short description, including options added via plugins

  • A list of options available to ini style configuration files, which I’ll discuss more in Chapter 6, Configuration

  • A list of environmental variables that can affect pytest behavior (also discussed in Chapter 6, Configuration)

  • A reminder that pytest --markers can be used to see available markers, discussed in Chapter 2, Writing Test Functions

  • A reminder that pytest --fixtures can be used to see available fixtures, discussed in Chapter 3, pytest Fixtures

The last bit of information the help text displays is this note:

 (shown according to specified file_or_dir or current dir if not specified)

This note is important because the options, markers, and fixtures can change based on which directory or test file you’re running. This is because along the path to a specified file or directory, pytest may find conftest.py files that can include hook functions that create new options, fixture definitions, and marker definitions.
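
As a small taste of what such a conftest.py might contain, here’s a minimal sketch; the --run-slow option and run_slow fixture names are made up and not part of the Tasks project. A hook function adds a command-line option, and a fixture defined alongside it becomes available to tests in that directory and below:

 # conftest.py -- sketch; option and fixture names are hypothetical
 import pytest
 
 def pytest_addoption(parser):
     # this option will show up in pytest --help when run from this directory
     parser.addoption('--run-slow', action='store_true',
                      help='run tests marked as slow')
 
 @pytest.fixture()
 def run_slow(request):
     # fixtures defined here show up in pytest --fixtures for tests below this dir
     return request.config.getoption('--run-slow')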

The ability to customize the behavior of pytest in conftest.py files and test files allows customized behavior local to a project or even a subset of the tests for a project. You’ll learn about conftest.py and ini files such as pytest.ini in Chapter 6, Configuration.
