Compilation with optimizations

The code includes three optimizations, and we can enable some of them to check the difference in performance. First, we will compile the code with the RwLock feature and the feature that borrows the state's value. If you use the code from the book's GitHub repository, you can turn these optimizations on by passing the corresponding feature names to the --features argument:

cargo run --release --features rwlock,borrow
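
The repository maps each optimization to a Cargo feature, so enabling a feature switches the compiled code path through #[cfg(...)] attributes. The exact feature wiring in the repository may differ; the following is only a minimal sketch in which the rwlock feature selects the type of the shared state, and the fast feature is assumed to be an aggregate that simply enables the individual ones. The borrow feature would gate, in the same way, whether a handler clones the state's value or borrows it:

# Cargo.toml (sketch; feature names follow the commands in this section)
[features]
rwlock = []
borrow = []
cache = []
fast = ["rwlock", "borrow", "cache"]

// main.rs (sketch): choose the shared-state wrapper at compile time.
// An RwLock lets many readers access the state concurrently, while a
// Mutex serializes every access; the State struct is a placeholder.
struct State {
    counter: u64,
}

#[cfg(feature = "rwlock")]
type SharedState = std::sync::Arc<std::sync::RwLock<State>>;

#[cfg(not(feature = "rwlock"))]
type SharedState = std::sync::Arc<std::sync::Mutex<State>>;

With a layout like this, cargo run --release --features rwlock,borrow compiles only the selected code paths, so the unused variants cost nothing at runtime.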

Once the server is ready, we can run the same Welle test we used to measure the performance of this microservice before, to measure how many incoming requests the optimized version of the server can handle.

After testing, the tool will print a report like this:

Total Requests: 100000
Concurrency Count: 10
Total Completed Requests: 100000
Total Errored Requests: 0
Total 5XX Requests: 0

Total Time Taken: 7.94342667s
Avg Time Taken: 79.434µs
Total Time In Flight: 64.120106299s
Avg Time In Flight: 641.201µs

Percentage of the requests served within a certain time:
50%: 791.554µs
66%: 976.074µs
75%: 1.120545ms
80%: 1.225029ms
90%: 1.585564ms
95%: 2.049917ms
99%: 3.749288ms
100%: 13.867011ms

As you can see, the application is faster: an average request now takes 79.434 microseconds instead of 80.10. The difference is less than 1%, but that is a worthwhile gain for a handler that was already fast.

Let's try to activate all the optimizations we implemented, including caching. To do this with the examples from the book's GitHub repository, use the following arguments:

cargo run --release --features fast
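
The caching optimization that this feature turns on follows a common pattern: render the expensive value once, keep it in shared state, and serve the stored copy on later requests. The structure and names below are only an illustrative sketch under that assumption, not the exact code from the repository:

use std::sync::{Arc, RwLock};

// Hypothetical cache for a rendered response body.
#[derive(Clone, Default)]
struct RenderCache {
    cached: Arc<RwLock<Option<String>>>,
}

impl RenderCache {
    // Return the cached body if present; otherwise render it once,
    // store it, and return the fresh copy. A race between two threads
    // here only wastes a duplicate render; it does not corrupt state.
    fn get_or_render<F>(&self, render: F) -> String
    where
        F: FnOnce() -> String,
    {
        if let Some(body) = self.cached.read().unwrap().as_ref() {
            return body.clone();
        }
        let body = render();
        *self.cached.write().unwrap() = Some(body.clone());
        body
    }
}

A cache like this only pays off when the rendered output does not depend on per-request data, or when serving slightly stale data is acceptable.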

Let's start testing again once the server is ready. With the same testing parameters, we get a better report:

Total Requests: 100000
Concurrency Count: 10
Total Completed Requests: 100000
Total Errored Requests: 0
Total 5XX Requests: 0

Total Time Taken: 7.820692644s
Avg Time Taken: 78.206µs
Total Time In Flight: 62.359549787s
Avg Time In Flight: 623.595µs

Percentage of the requests served within a certain time:
50%: 787.329µs
66%: 963.956µs
75%: 1.099572ms
80%: 1.199914ms
90%: 1.530326ms
95%: 1.939557ms
99%: 3.410659ms
100%: 10.272402ms

Now it takes 78.206 microseconds to get a response from the server. That is more than 2% faster than the original version without optimizations, which takes 80.10 microseconds per request on average.

You may think the difference is not very big, but in reality it is. This is a tiny example; try to imagine the effect of optimizing a handler that makes three requests to databases and renders a 200 KB template with arrays of values to insert. For heavy handlers like that, you can improve performance by 20% or even more. But remember that over-optimization is an extreme measure, because it makes the code harder to develop and harder to extend with new features without losing the performance you have achieved.

It's better not to treat optimization as a daily task, because you may spend a lot of time optimizing short pieces of code for a 2% performance gain in features your customers don't even need.
