Other types of concurrency in Rust

There are other ways of achieving parallel computing in Rust, as in many other languages. In this chapter, we will talk about multithreading, where each thread has access to shared memory but gets its own stack, so that it can work independently. Ideally, you should have about as many threads working at the same time as your PC/server has virtual CPUs.
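As a minimal sketch of this model, the following example spawns two threads that read the same heap-allocated data through an `Arc`; each thread runs on its own stack, but the vector itself is shared:

```rust
use std::sync::Arc;
use std::thread;

// Spawn several threads that read the same heap-allocated data.
// Each thread gets its own stack, but all of them share the memory
// behind the Arc (an atomically reference-counted pointer).
fn sum_in_threads(data: Vec<i32>, n_threads: usize) -> Vec<i32> {
    let shared = Arc::new(data);
    let handles: Vec<_> = (0..n_threads)
        .map(|_| {
            let data = Arc::clone(&shared);
            thread::spawn(move || data.iter().sum::<i32>())
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let sums = sum_in_threads(vec![1, 2, 3, 4], 2);
    println!("{:?}", sums); // both threads computed the same sum: [10, 10]
}
```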

This is usually twice the number of CPU cores, thanks to hyperthreading, where one core can run two threads at the same time, using its own hardware scheduler to decide which parts of each thread run at a given point in time.
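You can ask the standard library how many virtual CPUs are available. This sketch uses `std::thread::available_parallelism()`, which is available in recent stable Rust (the `num_cpus` crate offers the same information if you are on an older toolchain):

```rust
use std::thread;

fn main() {
    // Reports the number of logical CPUs (virtual CPUs) the program
    // can use; with hyperthreading this is typically 2x the core count.
    let logical = thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1);
    println!("logical CPUs: {}", logical);
}
```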

The main issue with threads is that, if you don't limit their number, you will consume a lot of RAM, since a stack has to be allocated for each thread, even for threads that are sitting idle while the CPU runs the others. It is not uncommon for some web servers to create one thread per request; under high load, this makes things much slower, since it requires a lot of RAM.
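The standard library does let you control the stack reservation per thread via `std::thread::Builder`. The following is a small sketch; the 32 KiB figure is purely illustrative, not a recommendation:

```rust
use std::thread;

fn main() {
    // Each thread reserves its own stack memory; Builder lets you tune
    // the size per thread (the platform may round it up to a minimum).
    let handle = thread::Builder::new()
        .name("small-stack".to_string())
        .stack_size(32 * 1024) // 32 KiB, illustrative only
        .spawn(|| 2 + 2)
        .unwrap();
    assert_eq!(handle.join().unwrap(), 4);
    println!("joined small-stack thread");
}
```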

Another approach to concurrency is asynchronous programming. Rust has great tools for this kind of use, and we will see them in the next chapter. The biggest improvement that asynchronous programming brings is the possibility for one thread to run multiple I/O requests without blocking the actual thread.

Not only that, an idle thread does not need to sleep for some time and then poll for new requests; the underlying operating system will wake the thread up when there is new information for it. This approach will, therefore, use the minimum possible resources for I/O operations.
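The building block underneath this model is non-blocking I/O, which the standard library exposes directly. In this sketch, `accept()` returns immediately with `WouldBlock` instead of parking the thread; an event loop would register the socket and let the OS wake the thread when a connection arrives:

```rust
use std::io::ErrorKind;
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Bind to an OS-assigned port and switch the socket to non-blocking mode.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    listener.set_nonblocking(true)?;

    // accept() now returns immediately instead of blocking the thread.
    match listener.accept() {
        Ok((_stream, addr)) => println!("connection from {}", addr),
        Err(e) if e.kind() == ErrorKind::WouldBlock => {
            println!("no connection pending, thread is free to do other work")
        }
        Err(e) => return Err(e),
    }
    Ok(())
}
```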

But what about programs that do not need I/O? In those cases, execution can be parallelized beyond what threads provide. Most processors nowadays allow vectorization. Vectorization uses special CPU instructions and registers into which you can load more than one variable and perform the same operation on all of them at the same time. This is extremely useful for high-performance computing, where you need to apply a certain algorithm multiple times to different datasets. With this approach, you can perform multiple additions, subtractions, multiplications, or divisions at the same time.
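A simple element-wise loop over slices like the one below is a typical candidate for the compiler's auto-vectorizer, since it applies the same addition to every pair of elements (this is a sketch of the pattern, not a guarantee that any given build will vectorize it):

```rust
// The same operation applied element-wise across two arrays: with
// optimizations enabled, the compiler can turn this loop into SIMD
// instructions that add several lanes per instruction.
fn add_slices(a: &[f32], b: &[f32], out: &mut [f32]) {
    assert_eq!(a.len(), b.len());
    assert_eq!(a.len(), out.len());
    for i in 0..a.len() {
        out[i] = a[i] + b[i];
    }
}

fn main() {
    let a = [1.0_f32, 2.0, 3.0, 4.0];
    let b = [10.0_f32, 20.0, 30.0, 40.0];
    let mut out = [0.0_f32; 4];
    add_slices(&a, &b, &mut out);
    println!("{:?}", out); // [11.0, 22.0, 33.0, 44.0]
}
```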

The special instructions used for vectorization are called the SIMD family, short for Single Instruction, Multiple Data. You can use them by writing the assembly directly with the asm!{} macro in nightly Rust, and the compiler will also try to automatically vectorize your code, even though this is not usually as good as what professionals can achieve manually. There are multiple proposals to stabilize SIMD intrinsics in 2018. That way, you will be able to use these instructions with some abstraction from assembly. There is also some effort going on in the faster crate (https://crates.io/crates/faster).
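To give a taste of what such intrinsics look like, here is a sketch using the SSE intrinsics from `std::arch` on x86_64 (with a plain scalar fallback on other architectures). `_mm_add_ps` adds four `f32` lanes with a single instruction:

```rust
// x86_64 path: one SSE instruction adds all four f32 lanes at once.
#[cfg(target_arch = "x86_64")]
fn add_four(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    use std::arch::x86_64::*;
    // SSE is part of the x86_64 baseline, so these intrinsics are
    // always available on this architecture.
    unsafe {
        let va = _mm_loadu_ps(a.as_ptr()); // load 4 floats into a 128-bit register
        let vb = _mm_loadu_ps(b.as_ptr());
        let vr = _mm_add_ps(va, vb); // 4 additions in one instruction
        let mut out = [0.0_f32; 4];
        _mm_storeu_ps(out.as_mut_ptr(), vr);
        out
    }
}

// Scalar fallback for other architectures.
#[cfg(not(target_arch = "x86_64"))]
fn add_four(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    [a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]]
}

fn main() {
    let r = add_four([1.0, 2.0, 3.0, 4.0], [10.0, 20.0, 30.0, 40.0]);
    println!("{:?}", r); // [11.0, 22.0, 33.0, 44.0]
}
```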
