OPERATING SYSTEM NOTES
UNIT-III
[Process Scheduling : Scheduling criteria, preemptive & non-preemptive scheduling, scheduling algorithms]
INTRODUCTION:
In a single-processor system, only one process can run at a time; any others must wait until the CPU
is free. The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization. In this scheme, several processes are kept in memory at one time.
Process execution consists of a cycle of CPU execution and I/O wait. Process execution begins with
a CPU burst. That is followed by an I/O burst, which is followed by another CPU burst, then
another I/O burst, and so on. Eventually, the final CPU burst ends with a system request to
terminate execution.
When one process has to wait, the CPU becomes idle. Then the operating system selects one of the
processes in the ready queue to be executed. The selection process is carried out by the short-term
scheduler (CPU Scheduler).
Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is
the module that gives control of the CPU to the process selected by the short-term scheduler. This
function involves the following:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that program
The dispatcher should be as fast as possible, since it is invoked during every process switch. The
time it takes for the dispatcher to stop one process and start another running is known as the
dispatch latency.
SCHEDULING CRITERIA :
There are several criteria for choosing the best algorithm for a particular situation and environment:
• CPU utilization. Conceptually, CPU utilization can range from 0 to 100 percent. In a real
system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a
heavily used system).
• Throughput. The number of processes completed per time unit is called throughput. It is a
measure of the work done by the CPU. For long processes, this rate may be one process
per hour; for short transactions, it may be ten processes per second.
• Turnaround time. The interval from the time of submission of a process to the time of
completion is the turnaround time. Turnaround time is the sum of the periods spent waiting
to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O; in
other words, the total time required to complete a process.
• Waiting time. Waiting time is the sum of the periods spent waiting in the ready queue. The
CPU-scheduling algorithm does not affect the amount of time during which a process
executes or does I/O; it affects only the amount of time that a process spends waiting in the
ready queue.
• Response time. In an interactive program, the time from the issuance of a command until
the first response to that command is produced is called response time.
It is desirable to maximize CPU utilization and throughput and to minimize turnaround time,
waiting time, and response time.
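As a small illustration, the sketch below computes turnaround and waiting time for one completed schedule. It is illustrative only; the process data are assumed values:

```python
# Illustrative sketch: computing turnaround and waiting time.
# Each process is described by (arrival, burst, completion), all in ms;
# the sample data below are assumed values for demonstration.

def metrics(procs):
    """Return (turnaround, waiting) for each (arrival, burst, completion)."""
    out = []
    for arrival, burst, completion in procs:
        turnaround = completion - arrival   # submission to completion
        waiting = turnaround - burst        # time spent only waiting
        out.append((turnaround, waiting))
    return out

# three processes arriving at t=0 with bursts 24, 3, 3, run back to back
print(metrics([(0, 24, 24), (0, 3, 27), (0, 3, 30)]))
# -> [(24, 0), (27, 24), (30, 27)]
```

Note that waiting time is just turnaround time minus the burst, since a process is either waiting, executing, or doing I/O (ignored in this simple model).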
PREEMPTIVE & NON-PREEMPTIVE SCHEDULING :
The Scheduling algorithms can be divided into two categories with respect to how they deal with
clock interrupts.
Preemptive scheduling means that once a process has started executing, it can be paused to
handle another process of higher priority; that is, the CPU can be taken away (preempted) from
one process and given to another when required.
Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps
the CPU until it releases the CPU either by terminating or by switching to the waiting state.
Differences between these two scheduling approaches are:

Preemptive Scheduling:
• The processor can be preempted to execute a different process in the middle of the
execution of the current process.
• CPU utilization is higher than with non-preemptive scheduling.
• Waiting time and response time are lower.
• Scheduling is priority-driven: the highest-priority ready process should always be the one
currently executing.
• If high-priority processes frequently arrive in the ready queue, a low-priority process may
starve.
• Preemptive scheduling is flexible.
• Ex:- SRTF, Priority, Round Robin, etc.

Non-Preemptive Scheduling:
• Once the processor starts executing a process, it must finish it before executing another;
the process cannot be paused in the middle.
• CPU utilization is lower than with preemptive scheduling.
• Waiting time and response time are higher.
• Once a process enters the running state, it is not removed from the processor until it
finishes its service time.
• If a process with a long CPU burst is running, another process with a shorter burst may
starve.
• Non-preemptive scheduling is rigid.
• Ex:- FCFS, SJF, etc.
SCHEDULING ALGORITHMS:
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to
be allocated the CPU. Some of the CPU-scheduling algorithms are described below:
1. First-Come First-Served (FCFS) Scheduling: It is the simplest CPU scheduling algorithm
to understand and implement. With this scheme, the process that requests the CPU
first is allocated the CPU first. The implementation of the FCFS policy is easily managed
with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the tail of
the queue. When the CPU is free, it is allocated to the process at the head of the queue. The
running process is then removed from the queue.
Consider the following set of processes that arrive at time 0, with the length of the CPU
burst given in milliseconds:

Process   Burst Time (ms)
P1        24
P2        3
P3        3

If the processes arrive in the order P1, P2, P3 and are served in FCFS order, we get the
following Gantt chart, a bar chart that represents the schedule, including the start and
finish times of each participating process:

| P1 (0-24) | P2 (24-27) | P3 (27-30) |
The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27
milliseconds for process P3. Thus, the average waiting time for the three processes is:
(0 + 24 + 27) / 3 = 17.0 ms
If the processes arrive in the order P2, P3, P1, the results are shown in the following
Gantt chart:

| P2 (0-3) | P3 (3-6) | P1 (6-30) |
The average waiting time now is:
(6 + 0 + 3)/3 = 3.0 ms
The average waiting time of FCFS is high, which is a major drawback of this
scheduling algorithm.
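The FCFS example above can be sketched in a few lines of Python. This is a simplified model that assumes all processes arrive at time 0 and are served in queue order:

```python
# Simplified FCFS model: all processes arrive at time 0, served in order.

def fcfs_waiting_times(bursts):
    """Return each process's waiting time under FCFS."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # waits until all earlier bursts finish
        clock += burst        # run this burst to completion
    return waits

waits = fcfs_waiting_times([24, 3, 3])     # order P1, P2, P3
print(waits, sum(waits) / len(waits))      # [0, 24, 27] 17.0
waits = fcfs_waiting_times([3, 3, 24])     # order P2, P3, P1
print(waits, sum(waits) / len(waits))      # [0, 3, 6] 3.0
```

The two runs reproduce the 17.0 ms and 3.0 ms averages above, showing how strongly FCFS waiting time depends on arrival order (the "convoy effect" of one long burst ahead of short ones).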
2. Shortest-Job-First (SJF) Scheduling: This algorithm assigns the CPU to the process that
has the smallest next CPU burst. If the next CPU bursts of two processes are the same, FCFS
scheduling is used to break the tie.
Consider the following set of processes, with the length of the CPU burst given in
milliseconds:

Process   Burst Time (ms)
P1        6
P2        8
P3        7
P4        3

Using SJF scheduling, we would schedule these processes according to the following Gantt
chart:

| P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |
The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9
milliseconds for process P3, and 0 milliseconds for process P4. Thus, the average waiting
time is:
(3 + 16 + 9 + 0) / 4 = 7 milliseconds
By comparison, if we were using the FCFS scheduling scheme, the average waiting time
would be 10.25 milliseconds.
SJF is provably optimal, giving the minimum average waiting time for a given set of
processes, but it suffers from one important problem: how do we know the required CPU
time in advance?
It is easy to implement in batch systems, where the required CPU time can be known in
advance, but it cannot be implemented exactly in interactive systems, where the required
CPU time is not known.
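A non-preemptive SJF sketch follows, assuming all processes arrive at time 0; the burst lengths 6, 8, 7, 3 are the values that reproduce the waiting times quoted above:

```python
# Non-preemptive SJF, all processes arriving at time 0.

def sjf_waiting_times(bursts):
    """Return waiting time per process, in original submission order."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # shortest first
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock      # this process starts once the clock reaches it
        clock += bursts[i]
    return waits

waits = sjf_waiting_times([6, 8, 7, 3])   # P1..P4 from the example
print(waits, sum(waits) / len(waits))     # [3, 16, 9, 0] 7.0
```

Sorting by burst length is exactly what makes SJF optimal for average waiting time: every short job moved ahead of a long one reduces the total wait more than it increases it.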
The SJF algorithm can be either preemptive or nonpreemptive. The choice arises when a
new process arrives at the ready queue while a previous process is still executing. The next
CPU burst of the newly arrived process may be shorter than what is left of the currently
executing process. A preemptive SJF algorithm will preempt the currently executing
process, whereas a non-preemptive SJF algorithm will allow the currently running process
to finish its CPU burst. Preemptive SJF scheduling is sometimes called shortest-
remaining-time-first scheduling. As an example, consider the following four processes,
with the length of the CPU burst given in milliseconds:

Process   Arrival Time (ms)   Burst Time (ms)
P1        0                   8
P2        1                   4
P3        2                   9
P4        3                   5

If the processes arrive at the ready queue at the times shown and need the indicated burst
times, then the resulting preemptive SJF schedule is as depicted in the following Gantt
chart:

| P1 (0-1) | P2 (1-5) | P4 (5-10) | P1 (10-17) | P3 (17-26) |
Process P1 is started at time 0, since it is the only process in the queue. Process P2 arrives at
time 1. The remaining time for process P1 (7 milliseconds) is larger than the time required
by process P2 (4 milliseconds), so process P1 is preempted, and process P2 is scheduled.
The average waiting time for this example is:
[(10 - 1) + (1 - 1) + (17 - 2) + (5 - 3)] / 4 = 26/4 = 6.5 milliseconds.
Nonpreemptive SJF scheduling would result in an average waiting time of 7.75 milliseconds,
and FCFS would give an average waiting time of 8.75 ms.
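The preemptive (shortest-remaining-time-first) schedule can be simulated tick by tick. The sketch below is a simplified model; the arrival/burst pairs are the ones that reproduce the 6.5 ms average above:

```python
# SRTF (preemptive SJF) simulated in 1-ms ticks.

def srtf_waiting_times(procs):
    """procs: list of (arrival, burst); returns waiting time per process."""
    n = len(procs)
    remaining = [b for _, b in procs]
    clock, done, finish = 0, 0, [0] * n
    while done < n:
        # pick the arrived, unfinished process with least remaining time
        ready = [i for i in range(n) if procs[i][0] <= clock and remaining[i] > 0]
        if not ready:
            clock += 1
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1
        clock += 1
        if remaining[i] == 0:
            finish[i] = clock
            done += 1
    # waiting = turnaround - burst
    return [finish[i] - procs[i][0] - procs[i][1] for i in range(n)]

waits = srtf_waiting_times([(0, 8), (1, 4), (2, 9), (3, 5)])
print(waits, sum(waits) / len(waits))   # [9, 0, 15, 2] 6.5
```

Re-evaluating the choice every millisecond is how the model captures preemption: when P2 arrives at t=1 with 4 ms remaining against P1's 7, the `min` immediately switches to P2.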
3. Priority Scheduling : In this CPU scheduling algorithm, a priority is associated with each
process, and the CPU is allocated to the process with the highest priority. Priority can be
decided based on memory requirements, time requirements or any other resource
requirement. Equal-priority processes are scheduled in FCFS order.
An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the
CPU burst. The larger the CPU burst, the lower the priority, and vice versa.
Priorities are generally indicated by some fixed range of numbers, but there is no general
agreement on whether 0 is the highest or lowest priority. Some systems use low numbers to
represent low priority; others use low numbers for high priority. In the given examples, it is
assumed that low numbers represent high priority.
As an example, consider the following set of processes, assumed to have arrived at time 0 in
the order P1, P2, · · ·, P5, with the length of the CPU burst given in milliseconds:

Process   Burst Time (ms)   Priority
P1        10                3
P2        1                 1
P3        2                 4
P4        1                 5
P5        5                 2

Using priority scheduling, these processes can be scheduled according to the following
Gantt chart:

| P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |
Priorities can be assigned either internally or externally. Internal priorities are assigned by
the OS using criteria such as average burst time, ratio of CPU to I/O activity, system
resource use, and other factors available to the kernel. External priorities are assigned by
users, based on the importance of the job, fees paid, politics, etc.
Priority scheduling can be either preemptive or nonpreemptive. When a process arrives at
the ready queue, its priority is compared with the priority of the currently running process. A
preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly
arrived process is higher than the priority of the currently running process. A nonpreemptive
priority scheduling algorithm will simply put the new process at the head of the ready queue.
A priority scheduling algorithm can leave some low-priority processes waiting indefinitely,
which leads to a major problem with priority scheduling algorithms: indefinite
blocking, or starvation. A process that is ready to run but waiting for the CPU can be
considered blocked. A solution to the problem of indefinite blockage of low-priority
processes is aging. Aging is a technique of gradually increasing the priority of processes
that wait in the system for a long time.
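A minimal non-preemptive priority scheduler is sketched below, assuming all processes arrive at time 0 and that lower numbers mean higher priority, the convention used in these notes (the process data are illustrative):

```python
# Non-preemptive priority scheduling, all processes arriving at t=0.
# Lower priority number = higher priority, as assumed in the notes.

def priority_schedule(procs):
    """procs: list of (name, burst, priority). Returns waiting times."""
    order = sorted(procs, key=lambda p: p[2])   # highest priority first
    clock, waits = 0, {}
    for name, burst, _ in order:
        waits[name] = clock
        clock += burst
    return waits

waits = priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4),
                           ("P4", 1, 5), ("P5", 5, 2)])
print(waits, sum(waits.values()) / len(waits))   # average 8.2 ms
```

Aging could be modeled here by periodically decrementing the priority number of every waiting process, so that even a priority-5 job eventually reaches the front.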
4. Round-Robin (RR) Scheduling: In this scheduling scheme, a small unit of time, called a
time quantum or time slice, is defined. The ready queue is treated as a circular FIFO queue.
The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt
after 1 time quantum, and dispatches the process. One of two things will then happen:
1. The process may have a CPU burst of less than 1 time quantum. In this case, the
process itself will release the CPU voluntarily. The scheduler will then proceed to the
next process in the ready queue.
2. The process may have a longer CPU burst than 1 time quantum. In this case, the
timer will go off and will cause an interrupt to the operating system. A context switch
will be executed, and the process will be put at the tail of the ready queue. The CPU
scheduler will then select the next process in the ready queue.
The performance of the RR algorithm depends heavily on the size of the time quantum. At
one extreme, if the time quantum is extremely large, the RR policy is the same as the FCFS
policy. In contrast, if the time quantum is extremely small (say, 1 millisecond), the RR
approach is called processor sharing and (in theory) creates the appearance that each of n
processes has its own processor running at 1/n the speed of the real processor. But the
average turnaround time increases for a smaller time quantum, since more context
switches are required.
Assume, for example, that we have only one process of 10 time units. If the quantum is 12
time units, the process finishes in less than 1 time quantum, with no overhead. If the
quantum is 6 time units, however, the process requires 2 quanta, resulting in a context
switch. If the time quantum is 1 time unit, then nine context switches will occur, slowing the
execution of the process accordingly.
So the time quantum should be large compared with the context-switch time, but not too
large; otherwise, RR scheduling degenerates into an FCFS policy. A rule of thumb is that 80
percent of the CPU bursts should be shorter than the time quantum. Most modern systems
have time quanta from 10 to 100 milliseconds in length.
Consider the following set of processes that arrive at time 0, with the length of the CPU
burst given in milliseconds:

Process   Burst Time (ms)
P1        24
P2        3
P3        3
If a time quantum of 4 milliseconds is used, then process P1 gets the first 4 milliseconds.
Since it requires another 20 milliseconds, it is preempted after the first time quantum, and
the CPU is given to the next process in the queue, process P2. Process P2 does not need 4
milliseconds, so it quits before its time quantum expires. The CPU is then given to the next
process, process P3. Once each process has received 1 time quantum, the CPU is returned to
process P1 for an additional time quantum. The resulting RR schedule is as follows:

| P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |

Let's calculate the average waiting time for the above schedule. P1 waits for 6 milliseconds
(10 - 4), P2 waits for 4 milliseconds, and P3 waits for 7 milliseconds. Thus, the average
waiting time is:
17/3 = 5.66 milliseconds
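The RR schedule above can be reproduced with a small simulation (simplified: all processes arrive at time 0, and context-switch time is ignored):

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """Round-robin for processes all arriving at time 0.
    Waiting time = completion - burst, since every arrival time is 0."""
    n = len(bursts)
    remaining = list(bursts)
    queue = deque(range(n))
    clock, finish = 0, [0] * n
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # run one quantum, or less to finish
        clock += run
        remaining[i] -= run
        if remaining[i] == 0:
            finish[i] = clock
        else:
            queue.append(i)                # back to the tail of the ready queue
    return [finish[i] - bursts[i] for i in range(n)]

waits = rr_waiting_times([24, 3, 3], quantum=4)
print(waits, sum(waits) / len(waits))   # [6, 4, 7], average ~ 5.67 ms
```

Changing `quantum` to a value above 24 makes the output match plain FCFS, which is the degeneration discussed above.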
5. Multilevel Queue Scheduling : Multilevel queue scheduling algorithm partitions the ready
queue into several separate queues. The processes are permanently assigned to one queue,
generally based on some property of the process, such as memory size, process priority, or
process type. Each queue has its own scheduling algorithm depending on the type of job,
and/or different parametric specifications.
Scheduling must also be done between queues. Two common options are:
1. strict priority (no job in a lower priority queue runs until all higher priority queues
are empty ) and
2. round-robin (each queue gets a certain portion of CPU time)
Let's look at an example of a multilevel queue scheduling algorithm with five queues, listed
below in order of priority:
➢ System processes
➢ Interactive processes
➢ Interactive editing processes
➢ Batch processes
➢ User processes
Each queue has absolute priority over lower-priority queues. No process in the batch queue,
for example, could run unless the queues for system processes, interactive processes, and
interactive editing processes were all empty. If an interactive editing process entered the
ready queue while a batch process was running, the batch process would be preempted.
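The strict-priority choice between queues can be sketched as follows (queue names are taken from the list above; the ready-queue contents are made-up examples):

```python
# Strict-priority selection between queues: the dispatcher always serves
# the highest-priority non-empty queue.

QUEUES = ["system", "interactive", "interactive editing", "batch", "user"]

def pick_next(queues):
    """queues: dict mapping queue name -> list of ready processes.
    Returns (queue_name, process) to run next, or None if all empty."""
    for name in QUEUES:                  # highest priority first
        if queues.get(name):
            return name, queues[name][0]
    return None

ready = {"system": [], "interactive": [], "interactive editing": ["edit1"],
         "batch": ["batch1"], "user": ["job1"]}
print(pick_next(ready))   # ('interactive editing', 'edit1')
```

Note that `batch1` and `job1` are never chosen while `edit1` is ready, which is exactly the behaviour (and the starvation risk) described above.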
6. Multilevel Feedback Queue Scheduling: The multilevel feedback queue scheduling
algorithm, in contrast to ordinary multilevel queue scheduling described above, allows a
process to move between queues.
• If a process uses too much CPU time, it will be moved to a lower-priority queue.
This scheme leaves I/O-bound and interactive processes in the higher-priority
queues.
• A process that waits too long in a lower-priority queue may be moved to a higher-
priority queue. This form of aging prevents starvation.
In general, a multilevel feedback queue scheduler is defined by the following parameters:
1. The number of queues
2. The scheduling algorithm for each queue
3. The method used to determine when to upgrade a process to a higher-priority queue
4. The method used to determine when to demote a process to a lower-priority queue
For example, consider a multilevel feedback queue scheduler with three queues, numbered
from 0 to 2, with queue 0 having the highest priority and queue 2 the lowest. (That is, the
scheduler first executes all processes in queue 0. Only when queue 0 is empty will it execute
processes in queue 1. Similarly, processes in queue 2 will be executed only if queues 0 and 1
are empty. A process that arrives for queue 1 will preempt a process in queue 2. A process in
queue 1 will in turn be preempted by a process arriving for queue 0.)
A process entering the ready queue is put in queue 0 and is given a time quantum of 8
milliseconds. If it does not finish within this time, it is moved to the tail of queue 1. If queue
0 is empty, the process at the head of queue 1 is given a quantum of 16 milliseconds. If it
does not complete, it is preempted and is put into queue 2. Processes in queue 2 are run on
an FCFS basis but are run only when queues 0 and 1 are empty.
This scheduling algorithm gives highest priority to any process with a CPU burst of 8
milliseconds or less. Such a process will quickly get the CPU, finish its CPU burst, and go
off to its next I/O burst. Processes that need more than 8 but less than 24 milliseconds are
also served quickly, although with lower priority than shorter processes. Long processes
automatically sink to queue 2 and are served in FCFS order with any CPU cycles left over
from queues 0 and 1.
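The demotion behaviour of this three-queue scheme can be traced with a simplified sketch: all processes are assumed to arrive at time 0 and preemption timing is ignored, so only the queue each process finishes in is tracked:

```python
from collections import deque

def mlfq_demotions(bursts):
    """Trace which queue each process finishes in, for the three-queue
    scheme above (quanta of 8 and 16 ms, then FCFS in queue 2).
    Returns {process_index: final_queue}."""
    quanta = [8, 16]
    q = [deque((i, b) for i, b in enumerate(bursts)), deque(), deque()]
    final = {}
    while any(q):
        level = next(l for l in range(3) if q[l])   # highest non-empty queue
        i, rem = q[level].popleft()
        slice_ = rem if level == 2 else min(quanta[level], rem)
        rem -= slice_
        if rem == 0:
            final[i] = level
        else:
            q[level + 1].append((i, rem))           # demote to the next queue
    return final

# bursts of 5, 20 and 40 ms finish in queues 0, 1 and 2 respectively
print(mlfq_demotions([5, 20, 40]))   # {0: 0, 1: 1, 2: 2}
```

The trace matches the text: a burst of 8 ms or less never leaves queue 0, a burst under 24 ms finishes in queue 1, and anything longer sinks to the FCFS queue 2.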
Provided By Shipra Swati

Operating System-Process Scheduling

  • 1.
    OPERATING SYSTEM NOTES UNIT-III [ProcessScheduling : Scheduling criteria, preemptive & non-preemptive scheduling, scheduling algorithms] INTRODUCTION: In a single-processor system, only one process can run at a time; any others must wait until the CPU is free. The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. In this scheme, several processes are kept in memory at one time. Process execution consists of a cycle of CPU execution and I/O wait. Process execution begins with a CPU burst. That is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on. Eventually, the final CPU burst ends with a system request to terminate execution. When one process has to wait, the CPU becomes idle. Then the operating system selects one of the processes in the ready queue to be executed. The selection process is carried out by the short-term scheduler (CPU Scheduler). Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This Provided By Shipra Swati
  • 2.
    OPERATING SYSTEM NOTES functioninvolves the following: • Switching context • Switching to user mode • Jumping to the proper location in the user program to restart that program The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency. SCHEDULING CRITERIA : There are different criteria to select the best algorithm in particular situation and environment: • CPU utilization. Conceptually, CPU utilization can range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system). • Throughput. The number of processes that are completed per time unit, is called throughput. It is a measure of CPU work. For long processes, this rate may be one process per hour; for short transactions, it may be ten processes per second. • Turnaround time. The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O i.e. time units require to execute a process. • Waiting time. Waiting time is the sum of the periods spent waiting in the ready queue. The CPU-scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. • Response time. The time taken in an interactive program from the issuance of a command to the commence of a response to that command is called response time. It is desirable to maximize CPU utilization and throughput and to minirnize turnaround time, waiting time, and response time. PREEMPTIVE & NON-PREEMPTIVE SCHEDULING : The Scheduling algorithms can be divided into two categories with respect to how they deal with clock interrupts. 
Preemptive Scheduling means once a process started its execution, the currently running process can be paused for a short period of time to handle some other process of higher priority, it means we can preempt (occupy) the control of CPU from one process to another if required. Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state. Provided By Shipra Swati
  • 3.
    OPERATING SYSTEM NOTES Differencesbetween these two scheduling algorithms are: Preemptive Scheduling Non-Preemptive Scheduling Processor can be preempted to execute a different process in the middle of execution of any current process. Once Processor starts to execute a process it must finish it before executing the other. It cannot be paused in middle. CPU utilization is more compared to Non- Preemptive Scheduling. CPU utilization is less compared to Preemptive Scheduling. Waiting time and Response time is less. Waiting time and Response time is more. The preemptive scheduling is prioritized. The highest priority process should always be the process that is currently utilized. When a process enters the state of running, the state of that process is not deleted from the scheduler until it finishes its service time. If a high priority process frequently arrives in the ready queue, low priority process may starve. If a process with long burst time is running CPU, then another process with less CPU burst time may starve. Preemptive scheduling is flexible. Non-preemptive scheduling is rigid. Ex:- SRTF, Priority, Round Robin, etc. Ex:- FCFS, SJF, etc. SCHEDULING ALGORITHMS: CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. Some of the CPU-scheduling algorithms are described below: 1. First-Come First-Served (FCFS) Scheduling: It is the simplest CPU scheduling algorithm for understanding and implentation. With this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue. 
Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds: If the processes arrive in the order P1, P2, P3 and are served in FCFS order, then following is the Gantt chart, where a particular schedule is representated by bar chart including the start and finish times of each of the participating processes, : Provided By Shipra Swati
  • 4.
    OPERATING SYSTEM NOTES Thewaiting time is 0 milliseconds for process P1, 24 milliseconds for process P2 , and 27 milliseconds for process P3 . Thus, the average waiting time for the three processes is: ( 0 + 24 + 27 ) / 3 = 17.0 ms If the processes arrive in the order P2 , P3 , P1, the results can be shown using the following Gantt chart: The average waiting time now is: (6 + 0 + 3)/3 = 3.0 ms The average waiting time of FCFS is high, which is a major drawback of this scheduling algorithm. 2. Shortest-Job-First (SJF) Scheduling: This algorithm assigns that process to CPU, which has smallest burst time. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie. Consider the following set of processes, with the length of the CPU burst given in milliseconds: Using SJF scheduling, we would schedule these processes according to the following Gantt chart: The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2 , 9 milliseconds for process P3 , and 0 milliseconds for process P4 . Thus, the average waiting time is: (3 + 16 + 9 + 0) / 4 = 7 milliseconds Provided By Shipra Swati
  • 5.
    OPERATING SYSTEM NOTES Bycomparison, if we were using the FCFS scheduling scheme, the average waiting time would be 10.25 milliseconds. SJF can be proven to be the fastest scheduling algorithm, but it suffers from one important problem: How to know the required CPU time in advance? It is easy to implement in Batch systems where required CPU time is known in advance, but impossible to implement in interactive systems where required CPU time is not known. The SJF algorithm can be either preemptive or nonpreemptive. The choice arises when a new process arrives at the ready queue while a previous process is still executing. The next CPU burst of the newly arrived process may be shorter than what is left of the currently executing process. A preemptive SJF algorithm will preempt the currently executing process, whereas a non-preemptive SJF algorithm will allow the currently running process to finish its CPU burst. Preemptive SJF scheduling is sometimes called shortest- remaining-time-first scheduling. As an example, consider the following four processes, with the length of the CPU burst given in milliseconds: If the processes arrive at the ready queue at the times shown and need the indicated burst times, then the resulting preemptive SJF schedule is as depicted in the following Gantt chart: Process P1 is started at time 0, since it is the only process in the queue. Process P2 arrives at time 1. The remaining time for process P1 (7 milliseconds) is larger than the time required by process P2 (4 milliseconds), so process P1 is preempted, and process P2 is scheduled. The average waiting time for this example is: [(10- 1) + (1 - 1) + (17- 2) +(5-3)]/ 4 = 26/4 = 6.5 milliseconds. Nonpreemptive SJF scheduling would result in an average waiting time of 7.75 milliseconds and FCFS will have waiting time of 8.75 ms. Provided By Shipra Swati
  • 6.
    OPERATING SYSTEM NOTES 3.Priority Scheduling : In this CPU scheduling algorithm, a priority is associated with each process, and the CPU is allocated to the process with the highest priority. Priority can be decided based on memory requirements, time requirements or any other resource requirement. Equal-priority processes are scheduled in FCFS order. An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the CPU burst. The larger the CPU burst, the lower the priority, and vice versa. Priorities are generally indicated by some fixed range of numbers, but there is no general agreement on whether 0 is the highest or lowest priority. Some systems use low numbers to represent low priority; others use low numbers for high priority. In the given examples, it is assumed that low numbers represent high priority. As an example, consider the following set of processes, assumed to have arrived at time 0 in the order P1 , P2, · · ·, P5, with the length of the CPU burst given in milliseconds: Using priority scheduling, these processes can be scheduled according to the following Gantt chart: Priorities can be assigned either internally or externally. Internal priorities are assigned by the OS using criteria such as average burst time, ratio of CPU to I/O activity, system resource use, and other factors available to the kernel. External priorities are assigned by users, based on the importance of the job, fees paid, politics, etc. Priority scheduling can be either preemptive or nonpreemptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process. A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process. A nonpreemptive priority scheduling algorithm will simply put the new process at the head of the ready queue. 
A priority scheduling algorithm can leave some low-priority processes waiting indefinitely, which leads to rnajor problem with priority scheduling algorithms -- indefinite blocking, or starvation. A process that is ready to run but waiting for the CPU can be considered blocked. A solution to the problem of indefinite blockage of low-priority processes is aging. Aging is a techniqtJe of gradually increasing the priority of processes that wait in the system for a long time. Provided By Shipra Swati
  • 7.
    OPERATING SYSTEM NOTES 4.Round-Robin (RR) Scheduling: In this scheduling scheme, a small unit of time, called a time quantum or time slice, is defined. The ready queue is treatetd as a circular FIFO queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time quantum, and dispatches the process. One of two things will then happen: 1. The process may have a CPU burst of less than 1 time quantum. In this case, the process itself will release the CPU voluntarily. The scheduler will then proceed to the next process in the ready queue. 2. The process may have a longer CPU burst than 1 time quantum. In this case, the timer will go off and will cause an interrupt to the operating system. A context switch will be executed, and the process will be put at the tail of the ready queue. The CPU scheduler will then select the next process in the ready queue. The performance of the RR algorithm depends heavily on the size of the time quantum. At one extreme, if the time quantum is extremely large, the RR policy is the same as the FCFS policy. In contrast, if the time quantum is extremely small (say, 1 millisecond), the RR approach is called processor sharing and (in theory) creates the appearance that each of n processes has its own processor running at 1/n the speed of the real processor. But the average turnaround time increases for a smaller time quantum, since more context switches are required. Assume, for example, that we have only one process of 10 time units. If the quantum is 12 time units, the process finishes in less than 1 time quantum, with no overhead. If the quantum is 6 time units, however, the process requires 2 quanta, resulting in a context switch. If the time quantum is 1 time unit, then nine context switches will occur, slowing the execution of the process accordingly. 
So the time quantum should be large compared with the context-switch time, but not too large, otherwise RR scheduling declines to an FCFS policy. A rule of thumb is that 80 percent of the CPU bursts should be shorter than the time quantum. Most modern systems have time quantum from 10 to 100 milliseconds in length. Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds: Provided By Shipra Swati
  • 8.
    OPERATING SYSTEM NOTES Ifa time quantum of 4 milliseconds is used, then process P1 gets the first 4 milliseconds. Since it requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is given to the next process in the queue, process P2 . Process P 2 does not need 4 milliseconds, so it quits before its time quantum expires. The CPU is then given to the next process, process P3. Once each process has received 1 time quantum, the CPU is returned to process P1 for an additional time quantum. The resulting RR schedule is as follows: Let's calculate the average waiting time for the above schedule. P1 waits for 6 millisconds (10- 4), P2 waits for 4 millisconds, and P3 waits for 7 millisconds. Thus, the average waiting time is: 17/3 = 5.66 milliseconds 5. Multilevel Queue Scheduling : Multilevel queue scheduling algorithm partitions the ready queue into several separate queues. The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type. Each queue has its own scheduling algorithm depending on the type of job, and/or different parametric specifications. Scheduling must also be done between queues. Two common options are: 1. strict priority (no job in a lower priority queue runs until all higher priority queues are empty ) and 2. round-robin (each queue gets a certain portion of CPU time) Let's look at an example of a multilevel queue scheduling algorithm with five queues, listed below in order of priority: ➢ System processes ➢ Interactive processes ➢ Interactive editing processes ➢ Batch processes ➢ User processes Each queue has absolute priority over lower-priority queues. No process in the batch queue, for example, could run unless the queues for system processes, interactive processes, and interactive editing processes were all empty. 
If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted.
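The round-robin waiting times worked out above (6, 4, and 7 milliseconds, averaging 17/3 ≈ 5.66 ms) can be double-checked with a short simulation. The following is a minimal illustrative sketch, not part of the original notes; the function name `rr_waiting_times` and its dictionary-based interface are assumptions, and it further assumes all processes arrive at time 0 and that context-switch time is zero.

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """Per-process waiting times under round-robin scheduling.

    Assumes all processes arrive at time 0 and never block for I/O.
    `bursts` maps process name -> CPU burst length in milliseconds.
    """
    remaining = dict(bursts)                      # burst time still needed
    waiting = {name: 0 for name in remaining}
    last_ready = {name: 0 for name in remaining}  # when each process last joined the queue
    queue = deque(remaining)                      # ready queue, arrival order
    clock = 0
    while queue:
        name = queue.popleft()
        waiting[name] += clock - last_ready[name]  # time spent waiting this round
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] > 0:                    # quantum expired: back to the tail
            last_ready[name] = clock
            queue.append(name)
    return waiting

w = rr_waiting_times({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
print(w)                          # -> {'P1': 6, 'P2': 4, 'P3': 7}
print(sum(w.values()) / len(w))   # average: 17/3 ms, matching the hand calculation
```

Note that each process accumulates waiting time only up to the moment it last rejoined the queue, which is why P1's wait stops growing once P2 and P3 have finished.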
6. Multilevel Feedback Queue Scheduling :
The multilevel feedback queue scheduling algorithm, in contrast to the ordinary multilevel queue scheduling described above, allows a process to move between queues.
• If a process uses too much CPU time, it is moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
• A process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.
In general, a multilevel feedback queue scheduler is defined by the following parameters:
1. The number of queues
2. The scheduling algorithm for each queue
3. The method used to determine when to upgrade a process to a higher-priority queue
4. The method used to determine when to demote a process to a lower-priority queue
For example, consider a multilevel feedback queue scheduler with three queues, numbered from 0 to 2, with queue 0 having the highest priority and queue 2 the lowest. (That is, the scheduler first executes all processes in queue 0. Only when queue 0 is empty will it execute processes in queue 1. Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty. A process that arrives for queue 1 will preempt a process in queue 2. A process in queue 1 will in turn be preempted by a process arriving for queue 0.)
A process entering the ready queue is put in queue 0 and is given a time quantum of 8 milliseconds. If it does not finish within this time, it is moved to the tail of queue 1. If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 milliseconds. If it does not complete, it is preempted and put into queue 2. Processes in queue 2 are run on an FCFS basis, but only when queues 0 and 1 are empty.
This scheduling algorithm gives highest priority to any process with a CPU burst of 8 milliseconds or less. Such a process will quickly get the CPU, finish its CPU burst, and go off to its next I/O burst. Processes that need more than 8 but less than 24 milliseconds are also served quickly, although with lower priority than shorter processes. Long processes automatically sink to queue 2 and are served in FCFS order with any CPU cycles left over from queues 0 and 1.
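The demotion behaviour of the three-queue example above can be sketched in a few lines. This is an illustrative simplification, not the notes' own algorithm as stated: the function name `mlfq`, the trace format, and the burst lengths (5 ms and 30 ms) are made-up example values, and it assumes all processes arrive at time 0 and never block for I/O, so the preemption-on-arrival rule never fires.

```python
from collections import deque

QUANTA = [8, 16, None]   # queue 0: 8 ms, queue 1: 16 ms, queue 2: FCFS (run to completion)

def mlfq(bursts):
    """Return the sequence of (process, queue_level, ms_run) dispatches.

    Assumes all processes arrive at time 0 in queue 0 and never block,
    so scheduling reduces to: always serve the highest non-empty queue,
    demoting any process that exhausts its quantum.
    """
    queues = [deque(bursts.items()), deque(), deque()]
    trace = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest-priority non-empty queue
        name, remaining = queues[level].popleft()
        quantum = QUANTA[level]
        run = remaining if quantum is None else min(quantum, remaining)
        trace.append((name, level, run))
        remaining -= run
        if remaining > 0:                                   # quantum expired: demote one level
            queues[level + 1].append((name, remaining))
    return trace

# A 5 ms burst finishes entirely in queue 0; a 30 ms burst sinks through all three queues.
print(mlfq({"A": 5, "B": 30}))  # -> [('A', 0, 5), ('B', 0, 8), ('B', 1, 16), ('B', 2, 6)]
```

The trace shows exactly the behaviour described above: the short process never leaves queue 0, while the long one consumes its 8 ms and 16 ms quanta and finishes in the FCFS queue.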