Further reading

In this chapter, we covered measuring and improving the performance of a serial Rust program while demonstrating the program's fitness for purpose. This is a huge area of work and there's a deep well of literature to pull from.

  • Robin Hood Hashing, 1985, Pedro Celis. This thesis introduced the Robin Hood hashing strategy for constructing associative arrays and is the foundation for the implementation you'll find in Rust. The thesis also goes into further search strategies that didn't find their way into Rust's implementation but should be of interest to readers with ambitions toward building hashing search structures.
  • Robin Hood hashing, Emmanuel Goossaert, available at http://codecapsule.com/2013/11/11/robin-hood-hashing/. The Rust standard library HashMap makes continued reference to this blog post and its follow-on, linked in the text. The description here is at a higher level than that of Celis' thesis and is potentially easier to understand as a result.
  • Denial of Service via Algorithmic Complexity Attacks, 2003, Scott Crosby and Dan Wallach. This paper outlines a denial-of-service attack on network services that exploits algorithmic blow-ups in their implementations. This paper's findings influenced Rust's decision to ship a safe-by-default HashMap.
  • QuickCheck: A Lightweight Tool for Random Testing of Haskell Programs, 2000, Koen Claessen and John Hughes. This paper introduces the QuickCheck tool for Haskell and, with it, property-based testing to the world. The research here builds on previous work into randomized testing but is novel in realizing that computers had become fast enough to support type-directed generation, and in shipping the implementation in a single-page appendix. Many, many subsequent papers have built on this one to improve the probing ability of property testers.
  • An Evaluation of Random Testing, 1984, Joe Duran and Simeon Ntafos. The 1970s and 1980s were an interesting time for software testing. Formal methods were seen as being just around the corner and the preferred testing methods relied on intimate knowledge of the program's structure. Duran and Ntafos evaluated the favored techniques of the day against random generation of data and found that randomness compared favorably while demanding significantly less programmer effort. This paper put random testing on the map.
  • Experiences with QuickCheck: Testing the Hard Stuff and Staying Sane, 2016, John Hughes. This paper is a follow-on to the original QuickCheck paper by Claessen and Hughes, in which Hughes describes his subsequent fifteen years of experience doing property testing. The techniques laid out in this paper are a significant evolution of those presented in the 2000 paper and well worth studying by anyone doing property tests as a part of their work. That ought to be most people, is my take.
  • American Fuzzy Lop website, available at http://lcamtuf.coredump.cx/afl. AFL is the product of a long tail of research into efficiently mutating inputs for the purpose of triggering bugs. As of this book's writing, it is best of breed and has a long trophy list to show for it. The website has links to AFL's documentation and relevant research for understanding its function in deeper detail.
  • Compact Data Structures: A Practical Approach, 2016, Gonzalo Navarro. One of the major techniques for exploiting cache locality is to shrink the individual elements of a working set, meaning more of the set fits in cache at once. Compact data structures, those that can be operated on at, or near, their information-theoretic minimal representation, are an ongoing and exciting area of research. Navarro's book is excellent and well worth studying for anyone interested in exploring this avenue of optimization.
  • vec_map, various authors. vec_map is a Rust crate that exploits the same ideas as this chapter's HashMapU8 but in a generic implementation, with an interface compatible with the standard library HashMap. The source code is quite interesting and warmly recommended.
  • Reevaluating Amdahl's Law, 1988, John Gustafson. This is an exceptionally short paper and clearly explains Amdahl's formulation as well as Gustafson's objection to its underlying assumptions. That the paper is describing an interpretation in which the serial portion is shrunk is clear only after a few readings, or once some kind soul explains this to you.
  • Tracking issue for specialization (RFC 1210), available at https://github.com/rust-lang/rust/issues/31844. This issue gives pretty good insight into the way the Rust community goes about stabilizing a major feature. The original RFC is from 2016; pretty much ever since it was accepted, there has been a feature flag in nightly for experimentation and an ongoing debate about the consequences of making the work stable.
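
Gustafson's objection to Amdahl is easier to see with the two formulations side by side. As a sketch of the algebra, with s the serial fraction and N the number of processors:

```latex
% Amdahl: fix the problem size. The serial fraction s bounds speedup,
% no matter how many processors are added.
S_{\text{Amdahl}}(N) = \frac{1}{s + \frac{1 - s}{N}}
  \;\xrightarrow{\;N \to \infty\;}\; \frac{1}{s}

% Gustafson: fix the runtime and grow the problem with N, with s measured
% on the parallel system. Scaled speedup grows linearly because the serial
% portion shrinks as a fraction of the ever-larger workload.
S_{\text{Gustafson}}(N) = s + (1 - s)N
```

This is the "shrinking serial portion" reading that, as noted above, the 1988 paper makes clear only after a few passes.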
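
The displacement rule at the heart of Celis' Robin Hood scheme is compact enough to sketch directly. The following is a minimal illustration, not Rust's actual HashMap internals: it assumes a fixed-size table that never fills, linear probing, and an identity "hash" (key modulo table size) so the collisions are easy to trace.

```rust
const SLOTS: usize = 16;

/// How far `slot` is from the entry's home slot, walking forward with wraparound.
fn probe_distance(home: usize, slot: usize) -> usize {
    (slot + SLOTS - home) % SLOTS
}

/// Robin Hood insertion: if the resident of a probed slot is closer to its
/// home than we are to ours, it is "rich"; we steal its slot and carry the
/// evicted entry onward. This sketch assumes the table never fills.
fn insert(table: &mut [Option<(usize, u64)>; SLOTS], mut key: usize, mut val: u64) {
    let mut slot = key % SLOTS; // identity "hash", for illustration only
    let mut dist = 0;
    loop {
        match table[slot] {
            None => {
                table[slot] = Some((key, val));
                return;
            }
            Some((k, v)) => {
                let resident_dist = probe_distance(k % SLOTS, slot);
                if resident_dist < dist {
                    // Take from the rich, give to the poor.
                    table[slot] = Some((key, val));
                    key = k;
                    val = v;
                    dist = resident_dist;
                }
            }
        }
        slot = (slot + 1) % SLOTS;
        dist += 1;
    }
}

/// Lookup can stop early: once a resident sits closer to its home than our
/// probe length, the Robin Hood invariant says our key cannot lie beyond it.
fn get(table: &[Option<(usize, u64)>; SLOTS], key: usize) -> Option<u64> {
    let mut slot = key % SLOTS;
    let mut dist = 0;
    loop {
        match table[slot] {
            None => return None,
            Some((k, v)) => {
                if k == key {
                    return Some(v);
                }
                if probe_distance(k % SLOTS, slot) < dist {
                    return None;
                }
            }
        }
        slot = (slot + 1) % SLOTS;
        dist += 1;
    }
}

fn main() {
    let mut table: [Option<(usize, u64)>; SLOTS] = [None; SLOTS];
    // Keys 1, 17, and 33 all share home slot 1, forcing displacement.
    for (k, v) in [(1, 10), (17, 20), (33, 30), (5, 50)] {
        insert(&mut table, k, v);
    }
    assert_eq!(get(&table, 17), Some(20));
    assert_eq!(get(&table, 33), Some(30));
    assert_eq!(get(&table, 2), None);
}
```

The payoff of the displacement rule is that probe lengths cluster tightly around their mean, which is what makes the early-exit in `get` safe.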
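
The core loop of property-based testing, as the QuickCheck papers describe it, is simple: generate random inputs, check an invariant on each, report the first failure. A hand-rolled sketch in plain Rust, with a small xorshift generator standing in for QuickCheck's type-directed generation (real use would reach for a crate such as quickcheck or proptest):

```rust
// A tiny xorshift64 pseudo-random generator; the seed value is arbitrary.
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

// A crude stand-in for type-directed generation: random Vec<u8> of length 0..32.
fn random_vec(state: &mut u64) -> Vec<u8> {
    let len = (xorshift(state) % 32) as usize;
    (0..len).map(|_| xorshift(state) as u8).collect()
}

fn main() {
    let mut state: u64 = 0x2545F4914F6CDD1D;
    // Property: reversing a vector twice yields the original vector.
    for _ in 0..100 {
        let input = random_vec(&mut state);
        let mut twice = input.clone();
        twice.reverse();
        twice.reverse();
        assert_eq!(input, twice);
    }
    println!("100 cases passed");
}
```

What the real tools add on top of this loop, and what the 2016 Hughes paper dwells on, is shrinking: when a case fails, the input is automatically minimized to the smallest counterexample that still triggers the failure.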