Sequential Rust Performance and Testing

"Make it work, then make it beautiful, then if you really, really have to, make it fast."
                                                                                                         - Joe Armstrong

In the previous chapter, we discussed the basics of modern computer architectures—the CPU and its function, memory hierarchies, and their interplay. We left off with a brief introduction to debugging and performance analysis of Rust programs. In this chapter, we'll continue that discussion, digging into the performance characteristics of sequential Rust programs and deferring, for now, considerations of concurrent performance. We'll also discuss testing techniques for demonstrating that a Rust program is fit for purpose. Why, in a book about parallel programming, would we devote an entire chapter to sequential programs alone? The techniques we'll discuss in this sequential setting are applicable and vital in a parallel setting. What we gain here is the meat of the concern—being fast and correct—without the complications that parallel programming brings; we'll come to those in good time. It is also important to understand that the production of fast parallel code goes hand in hand with the production of fast sequential code, on account of a cold, hard mathematical reality that we'll deal with throughout the book.
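The mathematical reality alluded to here is Amdahl's law, which this chapter introduces in detail. As a rough preview (the notation below—p for the parallelizable fraction of the work, N for the number of workers—is shorthand for this aside, not necessarily the chapter's own):

    S(N) = 1 / ((1 - p) + p / N)

The serial portion (1 - p) bounds the speedup no matter how many workers you add, which is exactly why tuning the sequential parts of a program matters so much to its parallel performance.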

By the close of the chapter, we will: 

  • Have learned about Amdahl's and Gustafson's laws
  • Have investigated the internals of the Rust standard library HashMap
  • Be able to use QuickCheck to perform randomized validation of an alternative HashMap implementation
  • Be able to use American Fuzzy Lop to demonstrate the absence of crashes in that same implementation
  • Have used Valgrind and Linux Perf to examine the performance of Rust software