Writing our first benchmark

With the required library installed, it's time to write our first performance benchmark. We will start with a simple example and then build on it to understand how to write benchmark tests for a real application:

'''
File: sample_benchmark_test.py
Description: A simple benchmark test
'''
import pytest
import time

def sample_method():
    time.sleep(0.0001)
    return 0

def test_sample_benchmark(benchmark):
    result = benchmark(sample_method)
    assert result == 0

if __name__ == '__main__':
    pytest.main()

We have written our first benchmark test. A very simple one indeed, but there are a few things we need to understand about what is happening here:

First, as we started writing the benchmark test, we imported pytest. One more thing happened behind the scenes without us knowing: pytest automatically discovered and loaded the pytest-benchmark plugin, which registers itself with pytest when it is installed.

With pytest-benchmark loaded, we get access to an important fixture, named benchmark, which allows us to run benchmark tests.

The benchmark fixture is callable: it takes the function object to be benchmarked (not its name, along with any arguments the function needs) and, when invoked, runs the benchmark on that function.

For our sample benchmark test, we create a simple method, called sample_method(), which does nothing except sleep for a fraction of a second and then return 0.

Next, we define a new test method, called test_sample_benchmark(), which takes the benchmark fixture as a parameter. Inside the method, we pass the sample_method() function to the benchmark fixture:

result = benchmark(sample_method)

Once we run this test, the fixture benchmarks the method and also hands back the method's own return value, which we can use to validate the output of the code under test.

Now, let's run this test to see what kind of results it produces for us. To do this, we execute the following command:

python3 sample_benchmark_test.py

Once the test runs, we see the following output:

================================================= test session starts ==================================================
platform linux -- Python 3.6.7, pytest-4.0.1, py-1.7.0, pluggy-0.8.0
benchmark: 3.1.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /home/sbadhwar, inifile:
plugins: benchmark-3.1.1
collected 1 item

sample_benchmark_test.py . [100%]


------------------------------------------------------ benchmark: 1 tests ------------------------------------------------------
Name (time in us)            Min         Max      Mean   StdDev    Median      IQR  Outliers  OPS (Kops/s)  Rounds  Iterations
--------------------------------------------------------------------------------------------------------------------------------
test_sample_benchmark   199.0000  1,081.0000  791.5421  97.5851  821.0000  24.7500   207;368        1.2634    1271           1
--------------------------------------------------------------------------------------------------------------------------------

Legend:
Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
OPS: Operations Per Second, computed as 1 / Mean
=============================================== 1 passed in 2.19 seconds ===============================================

Now, if we take a look at the output, we can see the name of the benchmarked method, followed by statistics about its runtime (minimum, maximum, mean, standard deviation, median, and interquartile range), the number of operations it performed per second, and the number of rounds and iterations over which these results were gathered.

Now, what if we wanted more control over how many iterations the benchmark runs? This is a simple feat to achieve. All we have to do is switch to the fixture's pedantic mode and specify the number of iterations:

result = benchmark.pedantic(sample_method, iterations=1000)

Here, the benchmark.pedantic method lets us customize the benchmark with various parameters, including the number of iterations per round and the number of rounds the benchmark should run before reporting the results.
