Using MPSC

We've come at Ring from an odd point of view, building a malfunctioning program from the start only to improve it later. In a real program, where purposes and functions gradually drift from their origins, it's worth occasionally asking what a piece of software actually does as an abstract concept. So, what is it that Ring actually does? First, it's bounded: it has a fixed capacity, and the reader/writer pair is careful to stay within that capacity. Second, it's a structure intended to be operated on by two or more threads in two roles: reading and writing. Ring is a means of passing u32s between threads, read in the order that they were written.

Happily, as long as we're willing to accept only a single reader thread, Rust ships with something in the standard library to cover this common need—std::sync::mpsc. The Multi-Producer Single-Consumer queue is one where the writer end—called Sender<T>—may be cloned and moved into multiple threads, while the reader end—called Receiver<T>—may only be moved. There are two variants of sender available: Sender<T> and SyncSender<T>. The first represents an unbounded MPSC, meaning the channel will allocate as much space as needed to hold the Ts sent into it. The second represents a bounded MPSC, meaning that the internal storage of the channel has a fixed upper capacity. While the Rust documentation describes Sender<T> and SyncSender<T> as being asynchronous and synchronous respectively, this is not, strictly, true: SyncSender<T>::send will block if there is no space available in the channel, but there is also SyncSender<T>::try_send—of type try_send(&self, t: T) -> Result<(), TrySendError<T>>—which never blocks. It's possible to use Rust's MPSC in bounded memory, keeping in mind that the caller will need a strategy for inputs that are rejected for want of space to place them in.

What does our u32 passing program look like using Rust's MPSC? Like so:

use std::thread;
use std::sync::mpsc;

fn writer(chan: mpsc::SyncSender<u32>) -> () {
    let mut cur: u32 = 0;
    while let Ok(()) = chan.send(cur) {
        cur = cur.wrapping_add(1);
    }
}

fn reader(read_limit: usize, chan: mpsc::Receiver<u32>) -> () {
    let mut cur: u32 = 0;
    while (cur as usize) < read_limit {
        let num = chan.recv().unwrap();
        assert_eq!(num, cur);
        cur = cur.wrapping_add(1);
    }
}

fn main() {
    let capacity = 10;
    let read_limit = 1_000_000;
    let (snd, rcv) = mpsc::sync_channel(capacity);

    let reader_jh = thread::spawn(move || {
        reader(read_limit, rcv);
    });
    let _writer_jh = thread::spawn(move || {
        writer(snd);
    });

    reader_jh.join().unwrap();
}

This is significantly shorter than any of our previous programs, as well as being not at all prone to arithmetic bugs. The type of SyncSender::send is send(&self, t: T) -> Result<(), SendError<T>>, meaning that SendError has to be handled to avoid a program crash. SendError is returned only when the remote side of the channel has disconnected, as happens in this program when the reader hits its read_limit. The performance of this program is not quite as quick as that of the last Ring program:

> perf stat --event task-clock,context-switches,page-faults,cycles,instructions,branches,branch-misses,cache-references,cache-misses ./data_race03

 Performance counter stats for './data_race03':

        760.250406      task-clock (msec)   #    0.765 CPUs utilized
           200,011      context-switches    #    0.263 M/sec
               135      page-faults         #    0.178 K/sec
     1,740,006,788      cycles              #    2.289 GHz
     1,533,396,017      instructions        #    0.88  insn per cycle
       327,682,344      branches            #  431.019 M/sec
           741,095      branch-misses       #    0.23% of all branches
        71,426,781      cache-references    #   93.952 M/sec
             4,082      cache-misses        #    0.006 % of all cache refs

       0.993142979 seconds time elapsed

But it's well within the margin of error, especially considering the ease of programming. One thing worth keeping in mind is that messages in an MPSC channel flow in only one direction: the channel is not bidirectional. In this way, MPSC is not suitable on its own for request/response patterns. It's not unheard of to layer two or more MPSCs together to achieve this, with the understanding that a single consumer on either side is sometimes not suitable.
