Chapter 16

What is the main approach to solving the problem that locks don't lock anything?

The main approach is to implement the locks internally, within the data structure's class attributes and methods; since every access to the shared object must then go through a method that acquires the lock, external functions and programs cannot bypass those locks and access a shared concurrent object simultaneously.
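A minimal sketch of this idea in Python: the lock is a private attribute of the counter class (the class name `LockedCounter` is illustrative, not from the text), so callers can only mutate the value through methods that hold the lock.

```python
import threading

class LockedCounter:
    """A counter whose lock lives inside the class: every access
    path to the shared value goes through a method that acquires it."""

    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()  # internal; not exposed to callers

    def increment(self):
        with self._lock:  # no external code can bypass this lock
            self._value += 1

    @property
    def value(self):
        with self._lock:
            return self._value

counter = LockedCounter()
threads = [
    threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 4000: no increments are lost
```

Because the lock is encapsulated, a caller has no way to read or write the counter without synchronizing first.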

Describe the concept of scalability, in the context of concurrent programming.

By the scalability of a program, we mean how its performance changes as the number of tasks to be processed by the program increases. André B. Bondi defines the term scalability as, "the capability of a system, network, or process to handle a growing amount of work, or its potential to be enlarged to accommodate that growth."

How does a naive locking mechanism affect the scalability of a concurrent program?

The scalability of a simple lock-based data structure is highly undesirable: as more threads are added to the program to execute more tasks, the performance of the program degrades roughly linearly. Since only one thread can acquire the lock and increment the shared counter at any given time, the more increments the program has to execute, the longer it takes to finish all of the increment tasks.

What are approximate counters, and how do they help with the problem of scalability in concurrent programming?

The basic idea behind approximate counters is to distribute the work of incrementing the shared global counter across a set of local counters. When an active thread wants to increment the global counter, it first increments its corresponding local counter; only when a local count reaches a set threshold is it flushed into the global counter. With one separate counter object for each thread, the threads can update their corresponding local counters independently and simultaneously, and this overlap results in better speed for the program.
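A sketch of an approximate counter along these lines (the class name and the `threshold` parameter are illustrative): each thread accumulates increments in thread-local storage and only touches the contended global lock once per `threshold` increments.

```python
import threading

class ApproximateCounter:
    """Global counter plus one local count per thread; a thread
    flushes its local count into the global counter only when the
    local count reaches `threshold`, so contention on the global
    lock drops by roughly that factor."""

    def __init__(self, threshold=100):
        self.threshold = threshold
        self.global_count = 0
        self.global_lock = threading.Lock()
        self._local = threading.local()  # per-thread storage, no lock needed

    def increment(self):
        count = getattr(self._local, 'count', 0) + 1
        if count >= self.threshold:
            with self.global_lock:       # contended only once per threshold
                self.global_count += count
            count = 0
        self._local.count = count

    def approximate_value(self):
        return self.global_count         # may lag the true total by up to
                                         # (threshold - 1) per thread

counter = ApproximateCounter(threshold=100)

def work():
    for _ in range(1000):
        counter.increment()

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.approximate_value())  # 4000 (exact here: 1000 % 100 == 0)
```

The trade-off is accuracy for speed: between flushes, the global count undercounts by whatever is still sitting in the local counters.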

Are lock-free data structures possible in Python? Why, or why not?

Lock-free data structures are impossible to implement in CPython, due to the existence of the Global Interpreter Lock (GIL), which prevents more than one thread from executing Python bytecode at any given time.

What is a mutex-free concurrent data structure, and how is it different from a concurrent lock-based one?

The term mutex-free concurrent data structure indicates the lack of a locking mechanism: unlike a lock-based structure, in which every accessing thread must acquire a mutex, a mutex-free structure relies on other synchronization mechanisms to protect its data.

What is the RCU technique, and what problem does it solve for mutex-free concurrent data structures?

To protect the integrity of a concurrent data structure, the RCU technique creates and maintains a separate copy of the data structure whenever a thread or process requests read or write access to it. By isolating the interaction between the data structure and the threads/processes within that separate copy, RCU ensures that readers never observe a partially updated structure, so no conflicting data can occur.
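A minimal RCU-style sketch in Python (the class name `RCUList` is illustrative): readers simply take a reference to the current snapshot, while a writer copies the snapshot, updates the copy, and then publishes it by swapping the reference.

```python
import threading

class RCUList:
    """Read-copy-update sketch: readers never block; a writer works
    on a private copy and atomically swaps it in when finished
    (writers serialize among themselves with one lock)."""

    def __init__(self):
        self._snapshot = []                # current published version
        self._write_lock = threading.Lock()

    def read(self):
        # Grabbing a reference is atomic in CPython; the returned
        # snapshot is immutable from the reader's point of view.
        return self._snapshot

    def append(self, item):
        with self._write_lock:             # one writer at a time
            new_version = list(self._snapshot)  # copy
            new_version.append(item)            # update the copy
            self._snapshot = new_version        # publish (swap)

rcu = RCUList()
rcu.append(1)
view = rcu.read()       # reader holds the old snapshot
rcu.append(2)           # writer publishes a new version
print(view)             # [1] -- the old version stays intact
print(rcu.read())       # [1, 2]
```

Because a writer never mutates the version that readers hold, a reader can never see the structure in a half-updated state; it just sees a slightly stale one.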
