10.4 CPU Scheduling

CPU scheduling is the act of determining which process in the ready state should be moved to the running state. That is, CPU-scheduling algorithms decide which process should be given over to the CPU so that it can make computational progress.

CPU-scheduling decisions are made when a process switches from the running state to the waiting state, or when a process terminates. This type of CPU scheduling is called nonpreemptive scheduling, because the need to assign the CPU to a new process results from the activity of the currently executing process.

CPU-scheduling decisions may also be made when a process moves from the running state to the ready state or when a process moves from the waiting state to the ready state. These are examples of preemptive scheduling, because the currently running process (through no fault of its own) is preempted by the operating system.

Scheduling algorithms are often evaluated using metrics, such as the turnaround time for a process. This measure is the amount of time between when a process arrives in the ready state and when it exits the running state for the last time. We would like, on average, for the turnaround time for our processes to be small.

Several different approaches can be used to determine which process gets chosen first to move from the ready state to the running state. We examine three of them in the next sections.

First Come, First Served

In the first-come, first-served (FCFS) scheduling approach, processes are moved to the CPU in the order in which they arrive in the ready state. FCFS scheduling is nonpreemptive. Once a process is given access to the CPU, it keeps that access unless it makes a request that forces it to wait, such as a request for a device in use by another process.

Suppose processes p1 through p5 arrive in the ready state at essentially the same time (to make our calculations simple) but in the following order and with the specified service time:

Process Service Time
p1 140
p2 75
p3 320
p4 280
p5 125

In the FCFS scheduling approach, each process receives access to the CPU in turn. For simplicity, we assume here that processes don’t cause themselves to wait. The following Gantt chart shows the order and time of process completion:

[Figure: Gantt chart for FCFS scheduling, showing completion times of 140 for p1, 215 for p2, 535 for p3, 815 for p4, and 940 for p5.]

Because we are assuming the processes all arrived at the same time, the turnaround time for each process is the same as its completion time. The average turnaround time is (140 + 215 + 535 + 815 + 940) / 5 or 529.

In reality, processes do not arrive at exactly the same time. In this case, the calculation of the average turnaround time would be similar, but would take into account the arrival time of each process. The turnaround time for each process would be its completion time minus its arrival time.

The FCFS algorithm is easy to implement but suffers from its lack of attention to important factors such as service time requirements. Although we used the service times in our calculations of turnaround time, the algorithm didn’t use that information to determine the best order in which to schedule the processes.
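The FCFS bookkeeping above can be sketched in a few lines of Python. The process names and service times come from the table in this example; this is an illustrative sketch of the arithmetic, not part of a real scheduler, and it assumes all processes arrive at time 0.

```python
# FCFS: each process runs to completion in arrival order.
# All processes are assumed to arrive at time 0, as in the example.
service = {"p1": 140, "p2": 75, "p3": 320, "p4": 280, "p5": 125}

clock = 0
completion = {}
for name, needed in service.items():  # dicts preserve insertion (arrival) order
    clock += needed                   # the process runs until it finishes
    completion[name] = clock

# With arrival time 0, turnaround time equals completion time.
avg_turnaround = sum(completion.values()) / len(completion)
print(completion)      # {'p1': 140, 'p2': 215, 'p3': 535, 'p4': 815, 'p5': 940}
print(avg_turnaround)  # 529.0
```

To handle staggered arrivals, each turnaround time would instead be computed as completion time minus arrival time, as described above.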

Shortest Job Next

The shortest-job-next (SJN) CPU-scheduling algorithm looks at all processes in the ready state and dispatches the one with the smallest service time. Like FCFS, it is generally implemented as a nonpreemptive algorithm. Below is the Gantt chart for the same set of processes we examined in the FCFS example. Because the selection criteria are different, the order in which the processes are scheduled and completed is different:

[Figure: Gantt chart for SJN scheduling. The processes run in order p2, p5, p1, p4, p3, completing at times 75, 200, 340, 620, and 940, respectively.]

The average turnaround time for this example is (75 + 200 + 340 + 620 + 940) / 5 or 435.

Note that the SJN algorithm relies on knowledge of the future. That is, it gives access to the CPU to the job that will run for the shortest time once it is allowed to execute. That time is essentially impossible to determine in advance. Thus, to run this algorithm, the service time value for a process is typically estimated by the operating system using various probability factors and taking the type of job into account. If these estimates are wrong, the premise of the algorithm breaks down and its efficiency deteriorates. The SJN algorithm is provably optimal, meaning that if we could know the service time of each job, it would produce the minimum average turnaround time compared to any other algorithm. However, because we can’t know the future absolutely, we make guesses and hope those guesses are correct.
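The SJN calculation differs from FCFS only in dispatch order: sort the ready processes by service time before running them. The sketch below reuses the example's service times and, like the text, assumes the service times are known exactly.

```python
# SJN: dispatch the ready process with the smallest service time first.
# Assumes service times are known exactly, as in the example.
service = {"p1": 140, "p2": 75, "p3": 320, "p4": 280, "p5": 125}

clock = 0
completion = {}
for name in sorted(service, key=service.get):  # shortest service time first
    clock += service[name]
    completion[name] = clock

# Completion order: p2 (75), p5 (200), p1 (340), p4 (620), p3 (940)
avg_turnaround = sum(completion.values()) / len(completion)
print(avg_turnaround)  # 435.0
```

In a real system the `service` values would be the operating system's estimates, so the computed schedule is only as good as those guesses.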

Round Robin

Round-robin CPU scheduling distributes the processing time equitably among all ready processes. This algorithm establishes a particular time slice (or time quantum), which is the amount of time each process receives before it is preempted and returned to the ready state to allow another process to take its turn. Eventually the preempted process will be given another time slice on the CPU. This procedure continues until the process gets all the time it needs to complete and terminates.

Note that the round-robin algorithm is preemptive. The expiration of a time slice is an arbitrary reason to remove a process from the CPU. This action is represented by the transition from the running state to the ready state.

Suppose the time slice used for a particular round-robin scheduling algorithm was 50 and we used the same set of processes as our previous examples. The Gantt chart results are:

[Figure: Gantt chart for round-robin scheduling with a time slice of 50. The processes complete at times 515 (p1), 325 (p2), 940 (p3), 920 (p4), and 640 (p5).]

Each process is given a time slice of 50, unless it doesn’t need a full slice. For example, process 2 originally needed 75 time units. It was given an initial time slice of 50. When its turn to use the CPU came around again, it needed only 25 time units. Therefore, process 2 terminates and gives up the CPU at time 325.

The average turnaround time for this example is (515 + 325 + 940 + 920 + 640) / 5, or 668. This turnaround time is larger than the times in the other examples. Does that mean the round-robin algorithm is not as good as the other options? No. We can’t make such general claims based on one example. We can say only that one algorithm is better than another for that specific set of processes. General analysis of algorithm efficiencies is much more involved.

The round-robin CPU process scheduling algorithm is probably the most widely used. It generally supports all kinds of jobs and is considered the most fair.

SUMMARY

An operating system is the part of the system software that manages resources on a computer. It serves as moderator among human users, application software, and the hardware devices in the system.

Multiprogramming is the technique for keeping multiple programs in memory at the same time, contending for time on the CPU. A process is a program in execution. The operating system must perform careful CPU scheduling, memory management, and process management to ensure fair access to the CPU.

Batch processing organizes jobs into batches that use the same or similar resources. Timesharing allows multiple users to interact with a computer at the same time, creating a virtual machine for each user.

An operating system must manage memory to control and monitor where processes are loaded into main memory. Any memory management technique must define the manner in which it binds a logical address to a physical one. Various strategies have been developed for memory management. The single contiguous approach allows only one program other than the operating system to be in main memory. The partition approach divides memory into several partitions into which processes are loaded. Fixed partitions have a set size, whereas dynamic partitions are created to satisfy the unique needs of the processes loaded. Paging divides memory into frames and programs into pages. The pages of a program need not be contiguous in memory. Demand paging allows for only a portion of a program to be in memory at any given time.

An operating system manages a process’s life states, which are the stages a program goes through during its execution. The process control block stores the necessary information for any process.

CPU-scheduling algorithms determine which process gets priority to use the CPU next. First-come, first-served CPU scheduling gives priority to the earliest-arriving job. The shortest-job-next algorithm gives priority to jobs with short running times. Round-robin scheduling rotates the CPU among active processes, giving a little time to each process.

KEY TERMS

EXERCISES

For Exercises 1–18, mark the answers true or false as follows:

  1. True

  2. False

  1. An operating system is an example of application software.

  2. An operating system provides a basic user interface that allows the user to use the computer.

  3. A computer can have more than one operating system, but only one OS is in control at any given time.

  4. Multiprogramming is the technique of using multiple CPUs to run programs.

  5. In the 1960s and 1970s, a human operator would organize similar computer jobs into batches to be run.

  6. Batch processing implies a high level of interaction between the user and the program.

  7. A timesharing system allows multiple users to interact with a computer at the same time.

  8. A dumb terminal is an I/O device that connects to a mainframe computer.

  9. A logical address specifies an actual location in main memory.

  10. An address in a single contiguous memory management system is made up of a page and an offset.

  11. In a fixed-partition system, main memory is divided into several partitions of the same size.

  12. The bounds register contains the last address of a partition.

  13. The first page in a paged memory system is page 0.

  14. A process in the running state is currently being executed by the CPU.

  15. The process control block (PCB) is a data structure that stores all information about a process.

  16. CPU scheduling determines which programs are in memory.

  17. The first-come, first-served scheduling algorithm is provably optimal.

  18. A time slice is the amount of time each process is given before being preempted in a round-robin scheduler.

For Exercises 19–23, match the operating system with information about it.

  1. Mac OS

  2. UNIX

  3. Linux

  4. DOS

  5. Windows

  19. Which is the operating system of choice for Apple computers?

  20. Historically, which is the operating system of choice for serious programmers?

  21. Which is the PC version of UNIX?

  22. What is the Microsoft operating system family provided on PCs called?

  23. What is the original PC operating system called?

For Exercises 24–26, match the following software type with its definition.

  1. Systems software

  2. Operating system

  3. Application software

  24. Programs that help us solve real-world problems

  25. Programs that manage a computer system and interact with hardware

  26. Programs that manage computer resources and provide interfaces for other programs

Exercises 27–72 are problems or short-answer questions.

  27. Distinguish between application software and system software.

  28. What is an operating system?

  29. Explain the term multiprogramming.

  30. The following terms relate to how the operating system manages multiprogramming. Describe the part each plays in this process.

    1. Process

    2. Process management

    3. Memory management

    4. CPU scheduling

  31. What constitutes a batch job?

  32. Describe the evolution of the concept of batch processing from the human operator in the 1960s and 1970s to the operating systems of today.

  33. Define timesharing.

  34. What is the relationship between multiprogramming and timesharing?

  35. Why do we say that users in a timesharing system have their own virtual machine?

  36. In Chapter 6, we defined a virtual machine as a hypothetical machine designed to illustrate important features of a real machine. In this chapter, we define a virtual machine as the illusion created by a timesharing system that each user has a dedicated machine. Relate these two definitions.

  37. How does the timesharing concept work?

  38. What is a real-time system?

  39. What is response time?

  40. What is the relationship between real-time systems and response time?

  41. In a multiprogramming environment, many processes may be active. What are the tasks that the operating system must accomplish to manage the memory requirements of active processes?

  42. Distinguish between logical addresses and physical addresses.

  43. What is address binding?

  44. Name three memory management techniques and describe the general approach taken in each.

  45. When is a logical address assigned to a variable?

  46. When does address binding occur?

  47. How is memory divided in the single contiguous memory management approach?

  48. When a program is compiled, where is it assumed that the program will be loaded into memory? That is, where are logical addresses assumed to begin?

  49. If, in a single contiguous memory management system, the program is loaded at address 30215, compute the physical addresses (in decimal) that correspond to the following logical addresses:

    1. 9223

    2. 2302

    3. 7044

  50. In a single contiguous memory management approach, if the logical address of a variable is L and the beginning of the application program is A, what is the formula for binding the logical address to the physical address?

  51. If, in a fixed-partition memory management system, the current value of the base register is 42993 and the current value of the bounds register is 2031, compute the physical addresses that correspond to the following logical addresses:

    1. 104

    2. 1755

    3. 3041

  52. If more than one partition is being used (either fixed or dynamic), what does the base register contain?

  53. Why is the logical address compared to the bounds register before a physical address is calculated?

  54. If, in a dynamic-partition memory management system, the current value of the base register is 42993 and the current value of the bounds register is 2031, compute the physical addresses that correspond to the following logical addresses:

    1. 104

    2. 1755

    3. 3041

Exercises 55 and 56 use the following state of memory.

[Figure: A snapshot of main memory divided into seven regions, top to bottom: Operating system, Process 1, Empty (60 blocks), Process 2, Process 3, Empty (52 blocks), Empty (100 blocks).]

  55. If the partitions are fixed and a new job arrives requiring 52 blocks of main memory, show memory after using each of the following partition selection approaches:

    1. First fit

    2. Best fit

    3. Worst fit

  56. If the partitions are dynamic and a new job arrives requiring 52 blocks of main memory, show memory after using each of the following partition selection approaches:

    1. First fit

    2. Best fit

    3. Worst fit

  57. If a logical address in a paged memory management system is <2, 133>, what do the values mean?

Exercises 58–60 refer to the following PMT.

Page Frame
0 5
1 2
2 7
3 3

  58. If the frame size is 1024, what is the physical address associated with the logical address <2, 85>?

  59. If the frame size is 1024, what is the physical address associated with the logical address <3, 555>?

  60. If the frame size is 1024, what is the physical address associated with the logical address <3, 1555>?

  61. What is virtual memory and how does it apply to demand paging?

  62. What are the conceptual stages through which a process moves while being managed by the operating system?

  63. Describe how a process might move through the various process states. Cite specific reasons why this process moves from one state to another.

  64. What is a process control block?

  65. How is each conceptual stage represented in the operating system?

  66. What is a context switch?

  67. Distinguish between preemptive scheduling and nonpreemptive scheduling.

  68. Name and describe three CPU-scheduling algorithms.

Use the following table of processes and service time for Exercises 69 through 72.

Process Service Time
P1 120
P2 60
P3 180
P4 50
P5 300

  69. Draw a Gantt chart that shows the completion times for each process using first-come, first-served CPU scheduling.

  70. Draw a Gantt chart that shows the completion times for each process using shortest-job-next CPU scheduling.

  71. Draw a Gantt chart that shows the completion times for each process using round-robin CPU scheduling with a time slice of 60.

  72. Distinguish between fixed partitions and dynamic partitions.

THOUGHT QUESTIONS

  1. In Chapter 5, we said that the control unit was like the stage manager who organized and managed the other parts of the von Neumann machine. Suppose we now say the operating system is also like a stage manager, but on a much grander scale. Does this analogy hold or does it break down?

  2. The user interface that the operating system presents to the user is like a hallway with doors leading to rooms housing applications programs. To go from one room to another, you have to go back to the hallway. Continue with this analogy: What would files be? What would be analogous to a time slice?

  3. What is the difference between medical data and de-identified medical data?

  4. Have you ever read the HIPAA papers you have signed?

  5. Would you withhold sensitive personal and medical information from your doctor if you knew there was a possibility that it might not remain private?

  6. Should medical identity cards contain genetic marker information if they ever become widely used in the United States?
