Chapter 7

Introduction to multitasking

Abstract

Most complex real-time systems require a number of tasks (or programs) to be processed at the same time. For example, consider an extremely simple real-time system that is required to flash an LED at required intervals and, at the same time, look for a key input from a keyboard. One solution would be to scan the keyboard in a loop at regular intervals while flashing the LED at the same time. Although this approach may work for a simple example, in most complex real-time systems a multitasking approach should be implemented. The term multitasking means that several tasks are processed in parallel on the same CPU. However, it is not possible for several tasks to run on a single CPU at exactly the same time; instead, task switching is performed so that the tasks share the CPU time. In many applications, tasks cannot run independently of each other and are expected to co-operate in some way. For example, the execution of a task may depend upon the completion of another task. This chapter briefly describes the various scheduling algorithms used in practice and gives their advantages and disadvantages.

Keywords

multitasking
multitasking advantages
RTOS
task scheduling
co-operative scheduling
round-robin scheduling
preemptive scheduling
first come first served scheduling
multilevel queue scheduling
dynamic priority scheduling

7.1. Overview

Microcontrollers are nowadays used in many electronic devices in an endless variety of ways. It is estimated that there are more than 50 microcontrollers in the intelligent appliances of a modern house in Europe. Some typical application areas are microwave ovens, mobile phones, tablets, cookers, washing machines, dishwashers, MP3 players, Hi-Fi equipment, TVs, computers, games consoles, watches, clocks, and so on. Some microcontroller-based systems are required to respond to external events in the shortest possible time, and such systems are often referred to as real-time systems. It is important to understand that not all systems are real-time; however, most automotive and aerospace systems can be classified as real-time systems. Various specialized control functions in a vehicle, such as engine control and brake and clutch control, are examples of real-time systems. Similarly, the engine control, wing control, and other dynamic controls in an aeroplane are real-time. All digital signal processing applications require an immediate response from the microcontroller and can therefore be classified as real-time systems.
Most complex real-time systems require a number of tasks (or programs) to be processed almost at the same time. For example, consider an extremely simple real-time system which is required to flash an LED at required intervals, and at the same time to look for a key input from a keyboard. One solution would be to scan the keyboard in a loop at regular intervals while flashing the LED at the same time. Although this approach may work for a simple example, in most complex real-time systems a multitasking approach should be implemented.
The term multitasking means that several tasks (or programs) are processed in parallel on the same CPU. However, it is not possible for several tasks to run on a single CPU at the same time. Therefore, task switching is done where the tasks share the CPU time. In many applications, tasks cannot run independently of each other and they are expected to co-operate in some way. For example, the execution of a task may depend upon the completion of another task, or a task may need some data from another task. In such circumstances, the tasks involved must be synchronized using some form of inter-task communication methods.
Real-time systems are time responsive systems where the CPU is never overloaded. In such systems tasks usually have priorities that are obeyed strictly. A task with a higher priority can grab the CPU from a lower priority task and then use the CPU exclusively until it releases the CPU. When the higher priority task completes its processing, or if it is waiting for a resource to be available, then the lower priority task can grab the CPU and resume processing from the point where it was interrupted. Real-time systems are also expected to react to events as quickly as possible. External events are usually processed using external interrupts and the interrupt latency of such systems is expected to be very short so that the interrupt service routine is executed as soon as an interrupt occurs.
In this chapter we shall be looking at the different multitasking algorithms and discuss their advantages and disadvantages briefly.

7.2. Multitasking kernel advantages

  • Without a multitasking kernel, multiple tasks can be executed in a loop, but this approach results in very poorly controlled real-time performance where the execution times of the tasks cannot be controlled.
  • It is possible to code the various tasks as interrupt service routines. This may work in practice, but if the application has many tasks then the number of interrupts grows, making the code less manageable.
  • A multitasking kernel allows new tasks to be added or some of the existing tasks to be removed from the system without any difficulty.
  • Testing and debugging a multitasking system built with a multitasking kernel is easier than testing and debugging one without a kernel.
  • Memory is better managed using a multitasking kernel.
  • Inter-task communication is easily handled using a multitasking kernel.
  • Task synchronization is easily controlled using a multitasking kernel.
  • CPU time is easily managed using a multitasking kernel.
  • Most multitasking kernels provide memory security where a task cannot access the memory space of another task.
  • Most multitasking kernels provide task priorities, where higher priority tasks can grab the CPU and stop the execution of lower priority tasks. This allows important tasks to run whenever required.

7.3. Need for an RTOS

An RTOS (real-time operating system) is a program that manages system resources, schedules the execution of various tasks in a system, synchronizes the execution of tasks, manages resource allocations, and provides inter-task communication and messaging between the tasks. Every RTOS consists of a kernel that provides the low-level functions, mainly the scheduling, task creation, inter-task communication, resource management, etc. Most complex RTOSs also provide file-handling services, disk read-write operations, interrupt servicing, network management, user management, etc.
A task is an independent thread of execution in a multitasking system, usually with its own local set of data. A multitasking system consists of a number of tasks, each running its own code, and communicating and synchronizing with each other in order to have access to shared resources. The simplest RTOS consists of a scheduler that determines the execution order of the tasks in the system. Each task has its own context, consisting of the state of the CPU and its associated registers. The scheduler switches from one task to another by performing a context switch, where the context of the running task is stored and the context of the next task is loaded so that execution can continue properly with the next task. The time taken by the CPU to perform a context switch is known as the context switching time and is usually negligible compared to the actual execution times of the tasks.

7.4. Task scheduling algorithms

Although there are many variations of scheduling algorithms in use today, the three most commonly used algorithms are:
  • Co-operative scheduling
  • Round-robin scheduling
  • Preemptive scheduling
The type of scheduling algorithm to be used depends on the nature of the application and in general, most applications use either one of the above algorithms, or a combination of them, or a modified version of these algorithms.

7.4.1. Co-operative scheduling

Co-operative scheduling, also known as nonpreemptive scheduling, shown in Fig. 7.1, is perhaps the simplest algorithm, where tasks voluntarily give up the CPU when they have nothing useful to do or when they are waiting for some resource to become available. Its main disadvantage is that certain tasks can use excessive CPU time, thus not allowing other important tasks to run when needed. Co-operative scheduling is only used in simple multitasking systems where there are no time-critical tasks.
image
Figure 7.1 Co-operative scheduling.
State machines, also called finite-state machines (FSMs), are probably the simplest way of implementing co-operative scheduling. A while loop can be used to execute the tasks one after the other, as shown below for a three-task application. In the code below, each task is represented by a function:
void Task1(void)
{
    /* Task 1 code */
}

void Task2(void)
{
    /* Task 2 code */
}

void Task3(void)
{
    /* Task 3 code */
}

void main()
{
    while (1)
    {
        Task1();
        Task2();
        Task3();
    }
}
The tasks are executed one after the other inside the main infinite loop formed using a while statement. In this simple approach, the tasks can communicate with each other using global variables declared at the beginning of the program. It is important that the tasks in a co-operative scheduler satisfy the following requirements for successful running of the overall system:
  • Tasks must not block the overall execution, for example, by using delays or waiting for some resources and not releasing the CPU.
  • The execution time of each task should be acceptable to the other tasks.
  • Tasks should exit as soon as they complete their processing.
  • Tasks do not have to run to completion; they can exit, for example, while waiting for a resource to become available.
  • Tasks should resume their operation from the point where they released the CPU.
The last requirement listed above is very important and is not satisfied in the simple scheduler example given above. Resuming a task requires the program counter and important variables to be saved when the task releases the CPU, and then restored when the task resumes (also called context switching), so that the interrupted task can continue normally as if there had been no interruption.
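To illustrate this last requirement without full context switching, a task can be written as a small state machine that performs one step per call and resumes at the next step on the following call; a static state variable plays the role of the saved context. This is only a sketch, with an invented task body that simply counts its steps:

```c
#include <assert.h>

/* A task written as a state machine: each call does one step and returns
   (i.e., releases the CPU). The static variables act as the saved context,
   so the task resumes where it left off on the next call. */
static int task1_state = 0;      /* where to resume on the next call */
static int task1_steps_done = 0; /* illustrative work counter */

void Task1(void)
{
    switch (task1_state) {
    case 0:                      /* first phase of the task's work */
        task1_steps_done++;
        task1_state = 1;         /* resume at the second phase next time */
        break;
    case 1:                      /* second phase of the task's work */
        task1_steps_done++;
        task1_state = 0;         /* wrap around to the first phase */
        break;
    }
}
```

Because the task never blocks and always returns after one step, the other tasks in the loop still get their share of the CPU.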
Another way of implementing a very simple co-operative scheduler is to use a switch statement inside an infinite loop, as shown below. Notice that, as before, the task states are not saved in this simple example:
void Task1(void)
{
    /* Task 1 code */
}

void Task2(void)
{
    /* Task 2 code */
}

void Task3(void)
{
    /* Task 3 code */
}

int nxt = 1;

void main()
{
    while (1)
    {
        switch (nxt)
        {
            case 1:
                Task1();
                nxt = 2;
                break;
            case 2:
                Task2();
                nxt = 3;
                break;
            case 3:
                Task3();
                nxt = 1;
                break;
        }
    }
}
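The same rotation can be expressed more compactly with a table of function pointers, which scales better as tasks are added or removed. The sketch below uses invented task bodies that simply count their activations:

```c
#include <assert.h>

/* Illustrative tasks: each one just records that it has run. */
static int runs[3];
static void Task1(void) { runs[0]++; }
static void Task2(void) { runs[1]++; }
static void Task3(void) { runs[2]++; }

/* The task table replaces the switch statement: adding or removing a task
   only requires changing this array. */
typedef void (*task_fn)(void);
static task_fn tasks[] = { Task1, Task2, Task3 };
#define NTASKS ((int)(sizeof(tasks) / sizeof(tasks[0])))
static int nxt = 0;

/* One pass of the scheduler: run the next task, then advance the index.
   In a real application this call would sit inside the while(1) loop. */
void schedule_once(void)
{
    tasks[nxt]();
    nxt = (nxt + 1) % NTASKS;
}
```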
Simple scheduling can also be implemented using timer interrupts. Here, the tasks can run in the background where the duration of each task can be organized by timer interrupts. An example program is given below to show how timer interrupts can be used in a simple co-operative scheduling based application.
Example
Write a program to flash the two LEDs on the Clicker 2 for STM32 development board at different rates. LD1 (at port PE12) should flash every second, while LD2 (at port PE15) should flash every 0.2 s (i.e., 200 ms).
Solution
The required program listing is shown in Fig. 7.2 (program: multiled.c). You should set the microcontroller clock frequency to 168 MHz as described in Section 6.9. At the beginning of the program, ports PE12 and PE15 are defined as LD1 and LD2, respectively. Inside the main program, I/O ports PE12 and PE15 are configured as digital outputs. The program is based on a timer interrupt, where Timer 2 is used to generate interrupts every 100 ms. The interrupt service routine is the function Timer2_interrupt. There are two tasks in the program: Task1() and Task2(). Each task is assigned a counter, named count1 and count2, respectively. Additionally, the two tasks have flags LD1flag and LD2flag. Inside the interrupt service routine, the two counts are incremented by one each time an interrupt occurs. When count1 reaches 10 (i.e., 1000 ms elapsed), flag LD1flag is set. Similarly, when count2 reaches 2 (i.e., 200 ms elapsed), flag LD2flag is set. Task1() toggles LED LD1 when LD1flag is set. Similarly, Task2() toggles LED LD2 when flag LD2flag is set. The result is that LED LD1 flashes every second and LED LD2 flashes every 200 ms, as required.
image
image
Figure 7.2 Program listing.
Timer 2 interrupts are enabled by calling the function InitTimer2. The Timer Calculator utility developed by mikroElektronika was used to set the timer interrupt parameters. This utility can be downloaded from the mikroElektronika website.
You should install the Timer Calculator utility on your PC before using it. The steps to find the Timer 2 parameters for generating interrupts at every 100 ms are as follows (see Fig. 7.3):
  • Start the Timer Calculator utility
  • Select device STM32F2xx/3xx/4xx
  • Set MCU frequency to 168 MHz
  • Choose Timer 2
  • Set the interrupt time to 100 ms
  • Click calculate
  • Copy function InitTimer2 to your program to initialize the timer. Also, copy the interrupt service routine function Timer2_interrupt to your program.
image
Figure 7.3 Using the Timer Calculator utility.
It is clear from this example that multitasking, even for a very simple two-task application, can be complex when timer interrupts are used.
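The counter/flag technique used in this kind of solution can be sketched in portable C as follows. The LEDs are modeled as plain variables standing in for the real port pins, and the interrupt function is invoked manually; the tick counting follows the problem statement (LD1 toggled every second, LD2 every 200 ms):

```c
#include <assert.h>

/* Counters and flags: the ISR runs every 100 ms, counts ticks, and raises
   a flag when a task's period has elapsed. */
static volatile int count1 = 0, count2 = 0;
static volatile int LD1flag = 0, LD2flag = 0;
static int LD1 = 0, LD2 = 0;   /* stand-ins for port pins PE12 and PE15 */

void Timer2_interrupt(void)    /* called on every 100 ms timer interrupt */
{
    if (++count1 >= 10) { count1 = 0; LD1flag = 1; }  /* 10 x 100 ms = 1 s   */
    if (++count2 >= 2)  { count2 = 0; LD2flag = 1; }  /*  2 x 100 ms = 200 ms */
}

/* The tasks run in the main loop and only act when their flag is set. */
void Task1(void) { if (LD1flag) { LD1flag = 0; LD1 = !LD1; } }
void Task2(void) { if (LD2flag) { LD2flag = 0; LD2 = !LD2; } }
```

The interrupt service routine stays short (it only counts), while all the actual work is done by the tasks in the background loop.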

7.4.2. Round-robin scheduling

Round-robin scheduling (see Fig. 7.4) allocates each task an equal share of the CPU time. Tasks are kept in a circular queue and, when a task's allocated CPU time expires, the task is removed from the CPU and placed at the end of the queue. This type of scheduling is rarely satisfactory in real-time applications, where tasks can have varying CPU requirements depending on the complexity of the processing involved. Round-robin scheduling requires the context of the running task to be saved on the stack when the task is removed from the CPU, so that the task can resume from the point where it was interrupted when it becomes active again. One variation of pure round-robin scheduling is priority-based scheduling, where tasks with the same priority level receive equal amounts of CPU time.
image
Figure 7.4 Round-robin scheduling.
Round-robin scheduling has the following advantages:
  • It is easy to implement.
  • Every task gets an equal share of the CPU.
  • Easy to compute the average response time.
The disadvantages of round-robin scheduling are:
  • It is generally not appropriate to give the same CPU time to every task.
  • Some important tasks may not run to completion.
  • Not suitable for real-time systems where tasks usually have different processing requirements.
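The queue rotation described above can be illustrated with a small host-side simulation in which each task receives one fixed time slice per turn until its work is done; the amounts of work are invented for the example:

```c
#include <assert.h>

#define NTASKS 3
static int remaining[NTASKS] = { 3, 1, 2 };  /* slices of work left per task */
static int turns[NTASKS];                    /* slices actually received     */

/* Rotate through the circular queue, giving one slice per turn, until all
   tasks complete. Returns the total number of slices consumed. */
int run_round_robin(void)
{
    int total = 0, t = 0, left = NTASKS;
    while (left > 0) {
        if (remaining[t] > 0) {
            remaining[t]--;
            turns[t]++;
            total++;
            if (remaining[t] == 0) left--;  /* task finished, leaves queue */
        }
        t = (t + 1) % NTASKS;               /* move to the back of the queue */
    }
    return total;
}
```

Note that every task makes steady progress, but a task needing little CPU (the second one here) still has to wait its turn behind longer tasks.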

7.4.3. Preemptive scheduling

Preemptive scheduling is the most commonly used scheduling algorithm in real-time systems. Here, the tasks are prioritized and the task with the highest priority among all the ready tasks gets the CPU time (see Fig. 7.5). If a task with a priority higher than that of the currently executing task becomes ready to run, the kernel saves the context of the current task and switches to the higher priority task by loading its context. Usually, the highest priority task runs until it completes or becomes non-runnable, for example, while waiting for a resource to become available or after calling a delay function. At this point, the scheduler determines the highest priority task that can run, loads its context, and starts executing it. Although preemptive scheduling is very powerful, care is needed, as a programming error can place a high priority task in an endless loop that never releases the CPU to other tasks. Some multitasking systems employ a combination of round-robin and preemptive scheduling. In such systems, time-critical tasks are usually prioritized and run under preemptive scheduling, whereas the non-time-critical tasks run under round-robin scheduling, sharing the remaining CPU time among themselves.
image
Figure 7.5 Preemptive scheduling.
It is important to realize that in a preemptive scheduler, tasks at the same priority run under round-robin. In such a system, when a task uses up its allocated time, a timer interrupt is generated and the scheduler saves the context of the current task and gives the CPU to another task of equal priority that is ready to run, provided that no task with a higher priority is ready to run.
The priority in a preemptive scheduler can be static or dynamic. In a static priority system, tasks keep the same priority all the time. In a dynamic priority-based scheduler, the priority of a task can change during its course of execution.
So far, we have said nothing about how the various tasks work together in an orderly manner. In most applications, data and commands must flow between the various tasks so that they can co-operate. One very simple way of doing this is through shared data held in RAM that every task can access. Modern RTOSs, however, provide local task memories and inter-task communication tools, such as mailboxes, pipes, and queues, to pass data securely and privately between tasks. In addition, tools such as event flags, semaphores, and mutexes are usually provided for inter-task synchronization and communication purposes.
The main advantage of a preemptive scheduler is that it provides an excellent mechanism whereby the importance of every task can be precisely defined. On the other hand, it has the disadvantage that a high priority task may starve lower priority tasks of the CPU so that they never get the chance to run. This can happen if there is a programming error such that the high priority task runs continuously without ever waiting for a system resource.
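The scheduler's core decision can be sketched as follows: among the tasks that are ready to run, choose the one with the highest priority. The task structure and priority values below are invented for the example:

```c
#include <assert.h>

/* Minimal task descriptor for the example: a priority and a ready flag. */
typedef struct {
    int priority;   /* larger number = more important */
    int ready;      /* 1 if the task can run, 0 if blocked */
} Task;

/* Returns the index of the highest priority ready task, or -1 if none is
   ready. This is the decision a preemptive scheduler makes on every
   scheduling event (task becoming ready, blocking, or completing). */
int pick_next(const Task *tasks, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (tasks[i].ready &&
            (best < 0 || tasks[i].priority > tasks[best].priority))
            best = i;
    return best;
}
```

In a real kernel this selection is combined with the context switch described earlier: the chosen task's context is loaded and execution continues from where that task last stopped.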

7.4.4. Scheduling algorithm goals

It can be said that a good scheduling algorithm should possess the following features:
  • Be fair such that each process gets a fair share of the CPU.
  • Be efficient by keeping the CPU busy. The algorithm should not spend too much time deciding what to do.
  • Maximize throughput by minimizing the time users have to wait.
  • Be predictable, so that the same tasks take the same time when run multiple times.
  • Minimize response time.
  • Maximize resource use.
  • Enforce priorities.
  • Avoid starvation.

7.4.5. Difference between preemptive and nonpreemptive scheduling

Some differences between a preemptive and nonpreemptive scheduling algorithm are summarized in Table 7.1.

Table 7.1

Differences between preemptive and nonpreemptive scheduling.
Nonpreemptive scheduling | Preemptive scheduling
Tasks have no priorities | Tasks have priorities
Tasks cannot be interrupted | A higher priority task can interrupt a lower priority one
Waiting and response times are longer | Waiting and response times are shorter
Scheduling is rigid | Scheduling is flexible
Tasks run to completion | High priority tasks run to completion
Not suitable for real-time systems | Suitable for real-time systems

7.4.6. Some other scheduling algorithms

There are many other types of scheduling algorithms used in practice. Most of these algorithms are a combination or a derivation of the three basic algorithms described in this chapter. Brief details of some other scheduling algorithms are outlined in this section.

7.4.6.1. First come first served

This is one of the simplest scheduling algorithms, also known as the FIFO scheduling. In this algorithm, tasks are run in the order they become ready. Some features of this algorithm are:
  • Throughput is low since long processes can hold the CPU, causing short processes to wait for a long time.
  • There is no prioritization and thus real-time tasks cannot be executed quickly.
  • It is non-preemptive.
  • The context switching occurs only on task termination and therefore the overhead is minimal.
  • Each process gets the chance to be executed, even if it has to wait for a long time.
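The first bullet point can be made concrete with a short waiting-time calculation: under FCFS, each task waits for the sum of the burst times of all the tasks queued before it. The burst times below are invented for the example:

```c
#include <assert.h>

/* Total waiting time under first come first served: task i waits for the
   combined burst time of every task that arrived before it. */
int total_waiting_fcfs(const int *burst, int n)
{
    int wait = 0, elapsed = 0;
    for (int i = 0; i < n; i++) {
        wait += elapsed;       /* task i waits for all earlier tasks */
        elapsed += burst[i];   /* then runs for its own burst time   */
    }
    return wait;
}
```

With one long task at the head of the queue, every short task behind it inherits that long wait, which is exactly why FCFS throughput is poor.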

7.4.6.2. Shortest remaining time first

In this algorithm, the scheduler arranges the tasks with the least estimated processing time remaining to be next in the queue. Some features of this algorithm are:
  • If a shorter task arrives then the currently running task is interrupted, thus causing overhead.
  • Waiting time of tasks requiring long processing times can be very long.
  • If there are too many small tasks in the system, longer tasks may never get the chance to run.

7.4.6.3. Longest remaining time first

In this algorithm, the scheduler arranges the tasks with the longest processing times to be next in the queue. Some features of this algorithm are:
  • If a longer task arrives then the currently running task is interrupted, thus causing overhead.
  • Waiting time of tasks requiring short processing times can be very long.
  • If there are too many long tasks in the system, shorter tasks may never get the chance to run.

7.4.6.4. Multilevel queue scheduling

In this type of scheduling, tasks are classified into different groups, such as interactive (foreground) and batch (background). Each group has its own scheduling algorithm; foreground tasks are given higher priorities since the background tasks can always wait.

7.4.6.5. Dynamic priority scheduling

In dynamic priority scheduling, although the tasks have priorities, their priorities can change during execution, that is, a priority can become lower or higher than it was earlier. Dynamic priority algorithms achieve high processor utilization and can adapt to dynamic environments where task parameters are unknown. On the other hand, dynamic priorities should be used with care in real-time systems because of the uncertainty that an important task may not run in time.
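One common dynamic priority technique is aging, where the effective priority of a waiting task is raised the longer it waits, so that low priority tasks cannot starve indefinitely. The step size used below (one priority level per 10 ticks of waiting) is an invented parameter for illustration:

```c
#include <assert.h>

/* Aging: a task's effective priority grows with the time it has spent
   waiting, so even a low base priority task eventually gets the CPU. */
int effective_priority(int base_priority, int ticks_waiting)
{
    return base_priority + ticks_waiting / 10;
}
```

A scheduler using this rule would recompute effective priorities on each scheduling event and pick the task with the highest result, resetting the waiting time of the task it dispatches.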

7.5. Choosing a scheduling algorithm

When designing a multitasking system with several tasks, the programmer must consider which scheduling algorithm will perform best for the application to be developed. In simple terms, most real-time systems should be based on preemptive scheduling with fixed priorities, where time-critical tasks grab and use the CPU until they complete their processing or wait for a resource to become available. If there are several time-critical tasks, then all such tasks should run at the same high priority. In general, tasks at the same priority run as round-robin and share the available CPU time. All the other tasks that are not time critical should then run at lower priorities. If the time taken for a task to complete is not critical, then a simple co-operative scheduling algorithm can be employed.

7.6. Summary

In this chapter we have made an introduction to basic multitasking and have learned some of the basic features of multitasking systems, including the concepts of various task scheduling algorithms.
The projects in this book are based on the FreeRTOS multitasking kernel. In the next chapter we shall introduce the FreeRTOS multitasking kernel and learn how to install it on the mikroC Pro for ARM IDE and compiler so that we can use it in our projects.

Further readings

[1] D. Ibrahim, Designing Embedded Systems With 32-Bit PIC Microcontrollers and MikroC, Newnes, Oxon, UK, 2014, ISBN: 978-0-08-097786-7.

[2] T.W. Schultz, C and the 8051: Vol. 1: Hardware, Modular Programming & Multitasking, Prentice Hall, New Jersey, USA, 1997.

