The concept of scalability

One aspect of programming that is essential to the application of concurrency is scalability. By scalability, we mean how the performance of a program changes as the amount of work it has to process increases. André B. Bondi, founder and president of Software Performance and Scalability Consulting, LLC, defines the term scalability as "the capability of a system, network, or process to handle a growing amount of work, or its potential to be enlarged to accommodate that growth."

In concurrent programming, scalability is an important concept that always needs to be taken into account; the amount of work that grows is typically the number of tasks to be executed, as well as the number of processes and threads active to execute those tasks. For example, the design, implementation, and testing phases of a concurrent application usually involve fairly small amounts of work, to facilitate efficient and fast development. This means that a typical concurrent application will handle significantly more work in real-life situations than it did during the development stage, which is why an analysis of scalability is crucial to well-designed concurrent applications.

Since the execution of one process or thread is independent of the execution of another, as long as the amount of work each individual process/thread is responsible for remains the same, we would like changes in the number of processes/threads not to affect the performance of the overall program. This characteristic is called perfect scalability, and it is desirable for a concurrent program: if the amount of work for a perfectly scalable concurrent program increases, the program can simply create more active processes or threads to absorb the increase, and its performance remains stable.

However, perfect scalability is virtually impossible to achieve most of the time, due to the overhead of creating threads and processes. That being said, if the performance of a concurrent program does not considerably worsen as the number of active processes or threads increases, then we can consider its scalability acceptable. What counts as considerably worsen depends heavily on the types of task the concurrent program is responsible for executing, as well as on how large a decrease in program performance is permitted.
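To make this overhead concrete, the following minimal sketch (our own illustration, not a benchmark from this book) times the creation, starting, and joining of Python threads that perform no work at all; even these empty threads carry a measurable cost:

import threading
import time

def no_op():
    pass  # the thread performs no actual work

# Time how long it takes just to create, start, and join idle threads
for n_threads in (1, 10, 100, 1000):
    start = time.perf_counter()
    threads = [threading.Thread(target=no_op) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    print(f'{n_threads:>5} threads: {elapsed:.4f} seconds')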

In this kind of analysis, we consider a two-dimensional graph representing the scalability of a given concurrent program. The x axis denotes the number of active threads or processes (again, each responsible for executing a fixed amount of work throughout the program); the y axis denotes the running time of the program with different numbers of active threads or processes. The graph under consideration will have a generally increasing trend: the more processes/threads the program has, the more time it will (most likely) take to execute. Perfect scalability, on the other hand, translates to a horizontal line, as no additional time is needed when the number of threads/processes increases.
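As a concrete illustration of how the data behind such a graph can be collected, the following minimal sketch (an illustrative assumption of ours, not the code behind the figure below) measures the total running time of a Python program for increasing numbers of threads, each executing the same fixed amount of CPU-bound work; the resulting pairs of thread counts and running times are exactly what such a graph plots:

import threading
import time

def fixed_work():
    # A CPU-bound placeholder task; every thread does the same fixed work
    total = 0
    for i in range(10 ** 6):
        total += i

counts, times = [], []
for n_threads in (1, 2, 4, 8, 16):
    start = time.perf_counter()
    threads = [threading.Thread(target=fixed_work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    counts.append(n_threads)
    times.append(time.perf_counter() - start)

# Each (thread count, running time) pair is one point on the scalability graph
for n, t in zip(counts, times):
    print(f'{n:>2} threads -> {t:.3f} seconds')

Note that on a standard CPython interpreter, the global interpreter lock prevents CPU-bound threads from running in parallel, so the measured times grow roughly linearly with the thread count, producing exactly the kind of increasing trend described above.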

The following diagram is an example of such a graph, for scalability analysis:

Example of scalability analysis (Source: stackoverflow.com/questions/10660990/c-sharp-server-scalability-issue-on-linux)

In the preceding graph, the x axis indicates the number of executing threads/processes, and the y axis indicates the running time (in seconds, in this case). The different graphs indicate the scalability of specific setups (combinations of operating system and number of cores).

The steeper the slope of a graph, the worse the corresponding concurrent model scales as the number of threads/processes increases. For example, a horizontal line (the dark blue, lowest graph in this case) signifies perfect scalability, while the yellow (uppermost) graph indicates undesirable scalability.
