Performance

First, we will build and run the code using a standard command without any flags:

cargo run

We build a binary with a lot of debugging information, which can be used with LLDB as we did in Chapter 13, Testing and Debugging Rust Microservices. Debugging symbols and the lack of optimizations reduce performance, but we will measure this build first so we can compare it with an optimized version without those symbols later.
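As a reminder, a plain cargo run uses the dev profile, which compiles without optimizations and with full debugging information. The following Cargo.toml fragment is only a sketch of those defaults; you don't need to add it to the project:

# Defaults used by `cargo run` (the dev profile), shown for illustration only
[profile.dev]
opt-level = 0   # no optimizations
debug = true    # include full debugging information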

Let's load the running server with 100,000 requests from 10 concurrent requests. Since our server is bound to port 8080 on localhost, we can use the welle command with the following arguments to measure its performance:

welle --concurrent-requests 10 --num-requests 100000 http://localhost:8080

It takes about 30 seconds (depending on your system), and the tool prints a report:

Total Requests: 100000
Concurrency Count: 10
Total Completed Requests: 100000
Total Errored Requests: 0
Total 5XX Requests: 0

Total Time Taken: 29.883248121s
Avg Time Taken: 298.832µs
Total Time In Flight: 287.14008722s
Avg Time In Flight: 2.8714ms

Percentage of the requests served within a certain time:
50%: 3.347297ms
66%: 4.487828ms
75%: 5.456439ms
80%: 6.15643ms
90%: 8.40495ms
95%: 10.27307ms
99%: 14.99426ms
100%: 144.630208ms

In the report, you can see that the average time taken per request is around 300 microseconds, with an average in-flight time of about 2.9 milliseconds. That's for a service burdened with debugging information and no optimizations. Let's recompile this example with optimizations by setting the --release flag on the cargo run command:

cargo run --release

This command passes the -C opt-level=3 optimization flag to the rustc compiler. If you use cargo without the --release flag, it builds with the dev profile, which sets opt-level to 0.
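If you want to control the optimization level explicitly, Cargo lets you override it per profile in Cargo.toml. The fragment below is only a sketch of what such overrides look like; this example doesn't require it:

# Optional per-profile overrides in Cargo.toml (illustrative only)
[profile.release]
opt-level = 3   # the level used by `cargo run --release`

[profile.dev]
opt-level = 1   # enable light optimizations for debug builds if desired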

After the server has been recompiled and restarted, we use the welle tool again with the same parameters. This time it reports different values:

Total Requests: 100000
Concurrency Count: 10
Total Completed Requests: 100000
Total Errored Requests: 0
Total 5XX Requests: 0

Total Time Taken: 8.010280915s
Avg Time Taken: 80.102µs
Total Time In Flight: 63.961189338s
Avg Time In Flight: 639.611µs

Percentage of the requests served within a certain time:
50%: 806.717µs
66%: 983.35µs
75%: 1.118933ms
80%: 1.215726ms
90%: 1.557405ms
95%: 1.972497ms
99%: 3.500056ms
100%: 37.844721ms

As we can see, the average time taken per request has been reduced by more than 70%, from roughly 300µs to about 80µs. The result is already pretty good, but could we reduce it a little more? Let's try to do so with some further optimizations.
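One common place to start is the release profile in Cargo.toml. The settings below are a hedged sketch of typical tweaks (link-time optimization, a single codegen unit, and aborting on panic); whether they actually help depends on your service, so treat them as options to benchmark rather than a recipe:

# Possible additional release-profile tweaks (illustrative only)
[profile.release]
lto = true           # enable link-time optimization
codegen-units = 1    # optimize the whole crate as one unit (slower builds)
panic = "abort"      # skip unwinding machinery on panic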
