Final considerations

Before we move on to the next topic, let me just wrap up with some considerations.

First, I hope you have noticed that I haven't tested all the functions I wrote. Specifically, I didn't test get_valid_users, validate, and write_csv. The reason is that these functions are implicitly tested by our test suite. We have tested is_valid and export, which is more than enough to make sure our schema is validating users correctly, and that the export function is filtering out invalid users, respecting existing files when needed, and writing a proper CSV. The functions we haven't tested are internals; they provide logic that contributes to something we have thoroughly tested anyway. Would adding extra tests for those functions be good or bad? Think about it for a moment.

The answer is not clear-cut. The more functions you test directly, the less freely you can refactor that code. As things stand, I could easily decide to rename is_valid, and I wouldn't have to change any of my tests.

If you think about it, this makes sense: as long as is_valid provides correct validation to the get_valid_users function, I don't really need to know about it. Does this make sense to you?

If instead I had tests for the is_valid function itself, then I would have to change them if I decided to rename it (or to somehow change its signature).
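To make this concrete, here is a minimal sketch of the idea (with illustrative names, not the book's actual code): a public function, its internal helper, and a test that exercises only the public one.

def is_valid(user):
    # Internal helper: free to be renamed or refactored at will.
    return isinstance(user.get('age'), int) and user['age'] >= 0

def get_valid_users(users):
    # Public function: this is what the test suite exercises.
    return [user for user in users if is_valid(user)]

def test_get_valid_users_filters_out_invalid_users():
    users = [{'age': 30}, {'age': -1}, {}]
    assert get_valid_users(users) == [{'age': 30}]

Notice that the test never mentions is_valid, so renaming the helper or changing its signature would not require touching the test.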

So, what is the right thing to do? Tests or no tests? That is up to you; you have to find the right balance. My personal take on this matter is that everything needs to be thoroughly tested, either directly or indirectly, and I want the smallest possible test suite that guarantees that. This way, I get a test suite that is great in terms of coverage, but no bigger than necessary. After all, you have to maintain those tests!

I hope this example made sense to you; I think it has allowed me to touch on the important topics.

If you check out the source code for the book, in the test_api.py module, I have added a couple of extra test classes, which will show you how different testing would have been had I decided to go all the way with the mocks. Make sure you read that code and understand it well. It is quite straightforward and will offer you a good comparison with my personal approach, which I have shown you here.
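To give you a flavor of that style, here is a small, self-contained sketch (again with illustrative names, not the book's actual tests) in which the internals are replaced with mocks. The test now pins down how export collaborates with its helpers, at the cost of being coupled to those internal names.

from unittest import mock

def get_valid_users(users):
    # Stand-in for the real helper.
    raise NotImplementedError

def write_csv(path, users):
    # Stand-in for the real helper.
    raise NotImplementedError

def export(path, users):
    write_csv(path, get_valid_users(users))

def test_export_with_mocks():
    users = [{'age': 30}, {'age': -1}]
    valid = [{'age': 30}]
    with mock.patch(f'{__name__}.get_valid_users', return_value=valid) as gvu, \
         mock.patch(f'{__name__}.write_csv') as wc:
        export('out.csv', users)
    gvu.assert_called_once_with(users)
    wc.assert_called_once_with('out.csv', valid)

If I renamed get_valid_users now, this test would break, even though the behavior of export is unchanged. That is exactly the trade-off discussed above.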

Now, how about we run those tests? (The output is re-arranged to fit this book's format):

$ pytest tests
====================== test session starts ======================
platform darwin -- Python 3.7.0b2, pytest-3.5.0, py-1.5.3, ...
rootdir: /Users/fab/srv/lpp/ch8, inifile:
collected 132 items

tests/test_api.py ...............................................
.................................................................
.................... [100%]

================== 132 passed in 0.41 seconds ===================

Make sure you run $ pytest tests from within the ch8 folder (add the -vv flag for a verbose output that will show you how parametrization modifies the names of your tests). As you can see, 132 tests ran in less than half a second, and they all succeeded. I strongly suggest you check out this code and play with it. Change something in the code and see whether any test breaks. Understand why it is breaking. Is it something important, meaning the test isn't good enough? Or is it something silly that shouldn't cause the test to break? All these apparently innocuous questions will help you gain deep insight into the art of testing.
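If you haven't seen parametrization before, here is a tiny, self-contained sketch (not taken from the book's code) of how it works. Running it with pytest -vv shows one entry per case, with names such as test_is_even[2-True] and test_is_even[3-False].

import pytest

@pytest.mark.parametrize('number, expected', [
    (2, True),
    (3, False),
    (0, True),
])
def test_is_even(number, expected):
    # Each tuple above becomes an independent test case.
    assert (number % 2 == 0) is expected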

I also suggest you study the unittest module, and pytest too. These are tools you will use all the time, so you need to be very familiar with them.

Let's now check out test-driven development!
