Profiling

Benchmark tests are useful for checking an isolated piece of code, but they are not suitable for measuring the performance of a whole working application. If you need to explore the performance of particular functions in a running program, you have to use a profiler.

Profilers record which functions the code executes and the time spans during which each of them runs. The Rust ecosystem includes a profiling crate called flame. Let's explore how to use it.

Profiling adds overhead, so you should put it behind a Cargo feature to avoid affecting performance in production installations. Add the flame crate to your project as an optional dependency, then declare a feature that activates it (following the official examples from the flamer crate repository, I named the feature flame_it):

[dependencies]
flame = { version = "0.2", optional = true }

[features]
default = []
flame_it = ["flame"]

Now, if you want to activate profiling, you have to compile the project with the flame_it feature.

Using the flame crate is pretty simple and includes three scenarios:

  • Use the start and end methods directly.
  • Use the start_guard method, which creates a Span that is used to measure execution time. A Span instance ends measurement automatically when it's dropped.
  • Use span_of to measure code isolated in a closure.

We will use spans like we did in the OpenTracing example in Chapter 13, Testing and Debugging Rust Microservices:

use std::fs::File;

pub fn main() {
    {
        let _req_span = flame::start_guard("incoming request");
        {
            let _db_span = flame::start_guard("database query");
            // This span starts while the database span is still active,
            // so flame records it as a nested child span.
            let _resp_span = flame::start_guard("generating response");
        }
    }

    flame::dump_html(&mut File::create("out.html").unwrap()).unwrap();
    flame::dump_json(&mut File::create("out.json").unwrap()).unwrap();
    flame::dump_stdout();
}

You don't need to collect spans and send them to a receiver, as we did for Jaeger, but profiling with flame still looks a lot like tracing.

At the end of the execution, you have to dump a report: flame can produce HTML or JSON, print to the console, or write to any Writer instance. We used the first three of these options. In the main function, we used the start_guard method to create Span instances that measure the execution time of some pieces of the code, and then we wrote the reports.

Compile and run this example with the activated profiling feature:

cargo run --features flame_it

The preceding command compiles the example, runs it, and prints the report to the console:

THREAD: 140431102022912
| incoming request: 0.033606ms
| database query: 0.016583ms
| generating response: 0.008326ms
+ 0.008257ms
+ 0.017023ms

As you can see, we have created three spans. You can also find two reports in the out.json and out.html files. If you open the HTML report in a browser, it renders as a flame graph.

In that flame graph, you can see the relative execution duration of every activity in our program: a longer colored block means a longer execution time. As you can see, profiling is useful for finding slow sections of code that you can then optimize with other techniques.
