Mutexes

If you are working with concurrent applications, you have to deal with more than one execution flow (Goroutines, in Go's case) potentially accessing the same memory location at the same time, with at least one of them writing to it. This is usually called a race condition.

In simpler terms, a race condition is similar to that moment when two people try to grab the last piece of pizza at exactly the same time--their hands collide. Replace the pizza with a variable and their hands with Goroutines, and we have a perfect analogy.

There is one character at the dinner table who can solve this issue--a father or mother. They keep the pizza on a different table, and we have to ask for permission before standing up to get our slice. It doesn't matter if all the kids ask at the same time--they will only allow one kid to stand.

Well, a mutex is like our parents. They control who can access the pizza--I mean, the variable--and they won't allow anyone else to access it at the same time.

To use a mutex, we have to actively lock it; if it's already locked (another Goroutine is using it), we'll have to wait until it's unlocked again. Once we acquire the lock, we can do whatever modifications are needed and then unlock it so that others can take their turn. We'll look at this using an example.
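
In its most basic form, the pattern looks like this (a minimal sketch, assuming a shared variable protected by a sync.Mutex):

var mu sync.Mutex
var shared int

mu.Lock()   // blocks if another Goroutine already holds the lock
shared++    // modify the shared data safely
mu.Unlock() // release the lock so other Goroutines can proceed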

An example with mutexes - concurrent counter

Mutexes are widely used in concurrent programming. Maybe not so much in Go, because it has a more idiomatic approach to concurrency with its channels, but it's worth seeing how they work for the situations where channels simply don't fit so well.

For our example, we are going to develop a small concurrent counter. This counter will add one to an integer field in a Counter type. This should be done in a concurrent-safe way.

Our Counter structure is defined like this:

type Counter struct { 
  sync.Mutex 
  value int 
} 

The Counter structure has a field of the int type that stores the current value of the count. It also embeds the Mutex type from the sync package. Embedding it allows us to call Lock and Unlock directly on the Counter value, without referencing a specific field.
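
If the mutex were a named field instead of an embedded one, we would have to go through that field explicitly every time (a quick sketch for comparison; the mu field name is just an illustrative choice):

type Counter struct {
  mu    sync.Mutex
  value int
}

// With a named field, every access goes through it explicitly:
// counter.mu.Lock()
// counter.value++
// counter.mu.Unlock()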

Our main function launches 10 Goroutines that each try to add one to the value field of the Counter structure. All of this is done concurrently:

package main

import (
  "sync"
  "time"
)

// Counter is the structure we defined previously.
type Counter struct {
  sync.Mutex
  value int
}

func main() {
  counter := Counter{}

  for i := 0; i < 10; i++ {
    go func(i int) {
      counter.Lock()   // only one Goroutine at a time gets past this point
      counter.value++  // safely increment the shared field
      counter.Unlock() // release the lock so the next Goroutine can enter
    }(i)
  }
  time.Sleep(time.Second) // crude wait for all the Goroutines to finish

  counter.Lock()
  defer counter.Unlock()

  println(counter.value)
}

We have created a type called Counter. Using a for loop, we have launched a total of 10 Goroutines, as we saw in the Anonymous functions launched as new Goroutines section. Inside every Goroutine, we lock the counter so that no other Goroutine can access it, add one to the value field, and unlock it again so that others can access it.

Finally, we'll print the value held by the counter. It must be 10, because each of the 10 Goroutines we launched adds one to it.
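
As a side note, the time.Sleep call is only a crude way of waiting for the Goroutines to finish. A sync.WaitGroup makes the wait deterministic; here is a sketch of that variant (not part of the original listing), reusing the same Counter type:

func main() {
  counter := Counter{}
  var wg sync.WaitGroup

  for i := 0; i < 10; i++ {
    wg.Add(1) // register one more Goroutine to wait for
    go func() {
      defer wg.Done() // signal completion when this Goroutine returns
      counter.Lock()
      counter.value++
      counter.Unlock()
    }()
  }
  wg.Wait() // blocks until all 10 Goroutines have called Done

  println(counter.value)
}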

But how can we know that this program is thread safe? Well, Go comes with a very handy built-in feature called the "race detector".

Presenting the race detector

We already know what a race condition is. To recap, it occurs when two processes try to access the same resource at the same time, with at least one write operation involved (both processes writing, or one writing while the other reads) at that precise moment.

Go has a very handy tool to help diagnose race conditions that you can run against your tests or your main application directly. So let's reuse the example we just wrote for the mutexes section and run it with the race detector. This is as simple as adding the -race command-line flag to the command that runs our program:

$ go run -race main.go 
10

Well, not very impressive, is it? But in fact it is telling us that it has not detected a potential race condition in the code of this program. Let's make the -race detector warn us of a possible race condition by not locking counter before we modify it:

for i := 0; i < 10; i++ { 
  go func(i int) { 
    //counter.Lock() 
    counter.value++ 
    //counter.Unlock() 
  }(i) 
} 

Inside the for loop, comment out the Lock and Unlock calls before and after adding 1 to the value field. This introduces a race condition. Let's run the same program again with the race flag activated:

$ go run -race main.go 
==================
WARNING: DATA RACE
Read at 0x00c42007a068 by goroutine 6:
  main.main.func1()
      [some_path]/concurrency/locks/main.go:19 +0x44
Previous write at 0x00c42007a068 by goroutine 5:
  main.main.func1()
      [some_path]/concurrency/locks/main.go:19 +0x60
Goroutine 6 (running) created at:
  main.main()
      [some_path]/concurrency/locks/main.go:21 +0xb6
Goroutine 5 (finished) created at:
  main.main()
      [some_path]/concurrency/locks/main.go:21 +0xb6
==================
10
Found 1 data race(s)
exit status 66

I have reduced the output a bit so that things are clearer. We can see a big, uppercase message reading WARNING: DATA RACE. And this output is easy to reason about. First, it is telling us that some memory position is being read at line 19 of our main.go file. But there is also a write operation to the same memory position at line 19 of the same file!

This is because a "++" operation requires a read of the current value and then a write of the incremented value. That's why the race condition is reported on the same line: every time it's executed, it both reads and writes the value field of the Counter structure.
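
Conceptually, the increment expands into two separate steps, which is exactly where another Goroutine can sneak in (a rough expansion for illustration, not what the compiler literally generates):

// counter.value++ behaves roughly like this:
tmp := counter.value    // step 1: read the current value
counter.value = tmp + 1 // step 2: write the incremented value back
// another Goroutine can read or write counter.value between the two steps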

But let's keep in mind that the race detector works at runtime. It doesn't analyze our code statically! What does this mean? It means that we can have a potential race condition in our design that the race detector will not detect. For example:

package main 
 
import "sync" 
 
type Counter struct { 
  sync.Mutex 
  value int 
} 
 
func main() { 
  counter := Counter{} 
 
  for i := 0; i < 1; i++ { // only one Goroutine is launched, so the race never actually happens
    go func(i int) {
      counter.value++ // unprotected access to the shared field
    }(i)
  }
} 

We will leave the code as shown in the preceding example: all the locks and unlocks have been removed, and we launch a single Goroutine to update the value field:

$ go run -race main.go
$

No warnings, so the code seems correct. Well, we know that, by design, it isn't. We can raise the number of Goroutines executed to two and see what happens:

for i := 0; i < 2; i++ { 
  go func(i int) { 
    counter.value++ 
  }(i) 
} 

Let's execute the program again:

$ go run -race main.go
WARNING: DATA RACE
Read at 0x00c42007a008 by goroutine 6:
  main.main.func1()
    [some_path]concurrency/race_detector/main.go:15 +0x44
Previous write at 0x00c42007a008 by goroutine 5:
  main.main.func1()
    [some_path]/concurrency/race_detector/main.go:15 +0x60
Goroutine 6 (running) created at:
  main.main()
    [some_path]/concurrency/race_detector/main.go:16 +0xad
Goroutine 5 (finished) created at:
  main.main()
    [some_path]/concurrency/race_detector/main.go:16 +0xad
==================
Found 1 data race(s)
exit status 66

Now the race condition is detected. But what if we reduce the number of processors in use to just one? Will we still have a race condition?

$ GOMAXPROCS=1 go run -race main.go
$

It seems that no race condition has been detected. This is because the scheduler happened to execute one Goroutine first and then the other, so the race condition never actually occurred during this run. But with a higher number of Goroutines, it would warn us about the race condition too, even when using only one core.
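
For example, we could raise the loop count so that the unprotected increments are far more likely to interleave (100 here is an arbitrary number, chosen just to make the interleaving likely enough for the detector to catch it, even with GOMAXPROCS=1):

for i := 0; i < 100; i++ {
  go func(i int) {
    counter.value++
  }(i)
}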

So, the race detector can help us find race conditions that actually happen while our code runs, but it won't protect us from a bad design whose race conditions simply don't trigger during a given execution. Even so, it's a very useful feature that can save us from lots of headaches.
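
As a final note, the -race flag is not limited to go run; it can also be enabled while running tests or building a binary, which are both standard options of the Go toolchain:

$ go test -race ./...
$ go build -race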
