Ordering::SeqCst

Finally, on the opposite side of the spectrum from Relaxed, there is SeqCst, which stands for SEQuentially ConsiSTent. SeqCst is like AcqRel but with the added bonus that there's a total order across all threads between sequentially consistent operations, hence the name. SeqCst is a pretty serious synchronization primitive and one that you should use only at great need, as forcing a total order between all threads is not cheap; every atomic operation is going to require a synchronization cycle, whatever that means for your CPU. A mutex, for instance, is an acquire-release structure (more on that shortly).

Still, there are legitimate cases for a super-mutex like SeqCst. For one, implementing older papers. In the early days of atomic programming, the exact consequences of the more relaxed (but not Relaxed) memory orders were not fully understood. You'll see literature that makes use of sequentially consistent memory and thinks nothing of it. At the very least, follow along and then relax as needed. This happens more than you might think, at least to your author. Your mileage may vary.

There are also cases where it seems like we're stuck with SeqCst. Consider a multiple-producer, multiple-consumer setup like this one, adapted from CppReference (see the Further reading section):

#[macro_use]
extern crate lazy_static;

use std::thread;
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};

lazy_static! {
    static ref X: Arc<AtomicBool> = Arc::new(AtomicBool::new(false));
    static ref Y: Arc<AtomicBool> = Arc::new(AtomicBool::new(false));
    static ref Z: Arc<AtomicUsize> = Arc::new(AtomicUsize::new(0));
}

fn write_x() {
    X.store(true, Ordering::SeqCst);
}

fn write_y() {
    Y.store(true, Ordering::SeqCst);
}

fn read_x_then_y() {
    while !X.load(Ordering::SeqCst) {}
    if Y.load(Ordering::SeqCst) {
        Z.fetch_add(1, Ordering::Relaxed);
    }
}

fn read_y_then_x() {
    while !Y.load(Ordering::SeqCst) {}
    if X.load(Ordering::SeqCst) {
        Z.fetch_add(1, Ordering::Relaxed);
    }
}

fn main() {
    let mut jhs = Vec::new();
    jhs.push(thread::spawn(write_x)); // a
    jhs.push(thread::spawn(write_y)); // b
    jhs.push(thread::spawn(read_x_then_y)); // c
    jhs.push(thread::spawn(read_y_then_x)); // d
    for jh in jhs {
        jh.join().unwrap();
    }
    assert!(Z.load(Ordering::Relaxed) != 0);
}

Say we weren't enforcing SeqCst between the threads in our example. What then? Thread c loops until thread a flips the Boolean X, loads Y and, if true, increments Z. Likewise for thread d, except b flips the Boolean Y and the increment is guarded on X. For this program not to crash after its threads have finished, the threads have to see the flips of X and Y happen in the same order. Note that the order itself does not matter, only that it is the same as seen from every thread. Otherwise it's possible for the stores and loads to interleave in such a way as to avoid both increments. The reader is warmly encouraged to fiddle with the ordering on their own.
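One such fiddle, sketched here as an assumption rather than a recommendation: weaken the stores to Release and the loads to Acquire. Under acquire-release alone, thread c may observe X's store before Y's while thread d observes them in the opposite order, so Z is permitted to end up 0. On strongly-ordered hardware such as x86 you are unlikely to ever see the assertion fail, which is exactly what makes this class of bug so insidious. The statics here drop lazy_static and Arc, since const-initialized atomics suffice for a standalone sketch:

```rust
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::thread;

static X: AtomicBool = AtomicBool::new(false);
static Y: AtomicBool = AtomicBool::new(false);
static Z: AtomicUsize = AtomicUsize::new(0);

fn main() {
    // Writers publish with Release; readers observe with Acquire.
    // No total order is imposed across the two stores.
    let a = thread::spawn(|| X.store(true, Ordering::Release));
    let b = thread::spawn(|| Y.store(true, Ordering::Release));
    let c = thread::spawn(|| {
        while !X.load(Ordering::Acquire) {}
        if Y.load(Ordering::Acquire) {
            Z.fetch_add(1, Ordering::Relaxed);
        }
    });
    let d = thread::spawn(|| {
        while !Y.load(Ordering::Acquire) {}
        if X.load(Ordering::Acquire) {
            Z.fetch_add(1, Ordering::Relaxed);
        }
    });
    for jh in [a, b, c, d] {
        jh.join().unwrap();
    }
    let z = Z.load(Ordering::Relaxed);
    // 1 or 2 on most runs; 0 is allowed by the memory model on
    // weakly-ordered machines, so we do not assert z != 0 here.
    assert!(z <= 2);
    println!("Z = {}", z);
}
```

Run it in a loop on a POWER or older ARM machine and the zero may eventually surface; the SeqCst version, by contrast, rules it out on any conforming implementation.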
