Using sync and mutexes to lock data

One issue that you may have run into with the preceding examples is the notion of atomic data. After all, if you deal with variables and structures across multiple goroutines, and possibly processors, how do you ensure that your data is safe across them? If these processes run in parallel, coordinating data access can sometimes be problematic.

Go provides a bevy of tools in its sync package to handle these types of problems. How elegantly you solve them depends heavily on your design, but you should never have to reinvent the wheel in this realm.

We've already looked at the WaitGroup struct, which provides a simple way to block the main goroutine until every registered goroutine has signaled, via Done(), that it has finished its work.
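As a quick refresher, the usual pattern is to call Add() before launching each goroutine, Done() when a goroutine finishes, and Wait() in the main goroutine. The following is a minimal sketch of that pattern; the worker count of three and the message it prints are arbitrary:

package main

import (
  "fmt"
  "sync"
)

func main() {
  var wg sync.WaitGroup

  for i := 0; i < 3; i++ {
    wg.Add(1) // register one unit of work before launching it
    go func(id int) {
      defer wg.Done() // signal completion when this goroutine returns
      fmt.Println("worker", id, "done")
    }(i)
  }

  wg.Wait() // block until every registered goroutine has called Done()
}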

Go also provides a direct abstraction of a mutex. It may seem contradictory to call something a direct abstraction, but the truth is that you don't have access to Go's scheduler; you only get an approximation of a true mutex.
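As a quick illustration of what that abstraction gives you, the following minimal sketch uses a sync.Mutex to guard a map, since plain Go maps are not safe for concurrent writes; the map, the key name, and the iteration count here are purely illustrative:

package main

import (
  "fmt"
  "sync"
)

func main() {
  var mutex sync.Mutex
  hits := make(map[string]int) // plain maps are not safe for concurrent writes

  var wg sync.WaitGroup
  for i := 0; i < 100; i++ {
    wg.Add(1)
    go func() {
      defer wg.Done()
      mutex.Lock() // take the lock before touching the shared map
      hits["page"]++
      mutex.Unlock()
    }()
  }

  wg.Wait()
  fmt.Println(hits["page"]) // always 100, because the writes are serialized
}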

We can use a mutex to lock and unlock data and guarantee atomicity in our data. In many cases, this may not be necessary; there are a great many times when the order of execution does not impact the consistency of the underlying data. However, when we do have concerns about a value's consistency, it's helpful to be able to invoke a lock explicitly. Let's take the following example:

package main

import (
  "fmt"
  "sync"
)

func main() {
  current := 0
  iterations := 100
  wg := new(sync.WaitGroup)

  for i := 0; i < iterations; i++ {
    wg.Add(1)

    go func() {
      current++
      fmt.Println(current)
      wg.Done()
    }()
    wg.Wait() // waiting inside the loop serializes the goroutines
  }
}

Unsurprisingly, this prints the numbers 1 to 100, in order, in your terminal, because the wg.Wait() call inside the loop lets each goroutine finish before the next one starts. What happens if we tell WaitGroup up front that there will be 100 calls to Done(), and put our blocking code after the loop instead?

To demonstrate why and how best to utilize WaitGroups as a mechanism for concurrency control, let's run a simple number iterator and look at the results. We will also check out how a directly invoked mutex can augment this functionality. The modified code is as follows:

package main

import (
  "fmt"
  "runtime"
  "sync"
)

func main() {
  runtime.GOMAXPROCS(2)
  current := 0
  iterations := 100
  wg := new(sync.WaitGroup)
  wg.Add(iterations) // register all 100 goroutines up front

  for i := 0; i < iterations; i++ {
    go func() {
      current++ // unsynchronized access to current: a data race
      fmt.Println(current)
      wg.Done()
    }()
  }
  wg.Wait() // block only once, after every goroutine has been launched
}

Now, our order of execution is suddenly off. You may see something like the following output:

95
96
98
99
100
3
4

We have the ability to lock and unlock access to the current variable at will; however, this won't change the underlying order of execution. It will only prevent reading from and writing to the variable between the lock and the corresponding unlock.

Let's try to lock down the variable we're outputting using a mutex, as follows:

  var mutex sync.Mutex

  for i := 0; i < iterations; i++ {
    go func() {
      mutex.Lock() // only one goroutine at a time may enter this section
      fmt.Println(current)
      current++
      fmt.Println(current) // the post-increment value, still under the lock
      mutex.Unlock()
      wg.Done()
    }()
  }

You can probably see how a mutex control mechanism can be important to enforce data integrity in your concurrent application. We'll look more at mutexes and locking and unlocking processes in Chapter 4, Data Integrity in an Application.
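For reference, here is one way the complete mutex-protected version might be assembled from the snippets above; the iteration count and the GOMAXPROCS value simply mirror the earlier examples:

package main

import (
  "fmt"
  "runtime"
  "sync"
)

func main() {
  runtime.GOMAXPROCS(2)
  current := 0
  iterations := 100

  wg := new(sync.WaitGroup)
  wg.Add(iterations)

  var mutex sync.Mutex

  for i := 0; i < iterations; i++ {
    go func() {
      mutex.Lock() // serialize every read and increment of current
      fmt.Println(current)
      current++
      fmt.Println(current)
      mutex.Unlock()
      wg.Done()
    }()
  }

  wg.Wait()
}

The goroutines may still acquire the lock in any order, but because each one reads, increments, and prints current while holding it, no update is lost and the counter always reaches exactly 100.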
