Research on CPU Scheduling

Abstract

The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running.

To meet these objectives, the process scheduler selects an available process (possibly from a set of several available processes) for program execution on the CPU. For a single-processor system, there will never be more than one running process. If there are more processes, the rest will have to wait until the CPU is free and can be rescheduled.

As processes enter the system, they are put into a job queue, which consists of all processes in the system. The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue. This queue is generally stored as a linked list: a ready-queue header contains pointers to the first and last PCBs in the list, and each PCB includes a pointer field that points to the next PCB in the ready queue.
The system also includes other queues. When a process is allocated the processor, it executes for a while and eventually terminates, is interrupted, or waits for the occurrence of a particular event, such as the completion of an I/O request. Suppose the process makes an I/O request to a shared device, such as a disk.
Since there are many processes in the system, the disk may be busy with the I/O request of another process, so the process may have to wait for the disk. The list of processes waiting for a particular I/O device is called a device queue. Each device has its own device queue.
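The ready-queue organization described above can be sketched in code. This is a minimal illustration only: the PCB here carries just a PID and a next pointer, whereas a real PCB also holds registers, process state, and accounting data.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PCB:
    pid: int
    next: Optional["PCB"] = None  # pointer to the next PCB in the ready queue

class ReadyQueue:
    """Ready queue kept as a linked list with head and tail pointers,
    matching the header-with-first/last-pointers layout described above."""
    def __init__(self):
        self.head: Optional[PCB] = None
        self.tail: Optional[PCB] = None

    def enqueue(self, pcb: PCB) -> None:
        pcb.next = None
        if self.tail is None:        # empty queue: new PCB is head and tail
            self.head = self.tail = pcb
        else:
            self.tail.next = pcb     # link after the current tail
            self.tail = pcb

    def dequeue(self) -> Optional[PCB]:
        pcb = self.head
        if pcb is not None:
            self.head = pcb.next
            if self.head is None:    # queue became empty
                self.tail = None
        return pcb

q = ReadyQueue()
for pid in (1, 2, 3):
    q.enqueue(PCB(pid))
print([q.dequeue().pid for _ in range(3)])  # FIFO order: [1, 2, 3]
```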

Contents

1. Introduction

2. Scheduler Types

2.1 Long-Term Scheduler

2.2 Short-Term Scheduler

2.3 Dispatcher

2.4 Medium-Term Scheduler

3. Context Switch

4. Scheduling Criteria

5. Scheduling Algorithms

6. Linux Scheduling

7. Windows Scheduling

8. Solaris Scheduling

9. Comparison between Linux and Windows

Conclusion

Introduction

Multiprogramming allows a number of programs to be in memory at the same time, which permits overlap of CPU and I/O activity. CPU scheduling deals with the problem of choosing a process from the ready queue to be executed by the CPU. Developing a CPU scheduling algorithm and understanding its impact is difficult and time-consuming, because it requires modifying and testing the operating system and measuring the performance of real applications.

The goal of CPU scheduling is to minimize the average turnaround time and average waiting time, in order to keep as many processes as possible running at all times and make the best use of the CPU.

In this research paper I discuss CPU scheduling in three different operating systems. I start by describing the scheduler types, how each scheduler works, and the three main types of schedulers.

I then explain what a context switch is and how it works, and describe the scheduling criteria and what they include.

Next I walk through the different scheduling algorithms, discussing two of them in detail.

At the end of this research you will find the three operating systems and an explanation of the scheduling process in each one: the Linux operating system and its scheduling process, the Windows scheduling process and its algorithms, and the Solaris operating system and its scheduling process.


Scheduler Types:

There are three main schedulers:

1.     Long term scheduler

2.     Short term scheduler

3.     Medium term scheduler

Long term Scheduler:

The function of the long-term scheduler is to select processes from the job pool and load them into main memory (the ready queue) for execution; for this reason the long-term scheduler is also called the job scheduler. For example, assume a computer lab consisting of twenty dummy nodes connected to a server over a local area network. If, out of the 20 users, 10 want to execute their jobs, those 10 jobs are loaded onto the spool disk. The long-term scheduler selects jobs from the spool disk and loads them into the ready queue in main memory.


Short term Scheduler:
The short-term scheduler, or CPU scheduler, selects a job from the ready queue (the processes that are ready to execute) and allocates the CPU to that process with the assistance of the dispatcher. The method of selecting a process from the ready queue is determined by the scheduling algorithm.

Dispatcher:
The dispatcher is the module that hands the CPU to the process selected by the short-term scheduler. Its main function is switching the CPU from one process to another; it also jumps to the proper location in the user program so that execution can begin. The dispatcher should be fast, because it is invoked during every process switch. The time the dispatcher takes to stop one process and start another is known as the dispatch latency. The degree of multiprogramming depends on the dispatch latency: if the dispatch latency increases, the degree of multiprogramming decreases.

 

Medium term Scheduler:

If the process requests I/O in the middle of execution, the process is removed from main memory and placed in a waiting queue. When the I/O operation completes, the job is moved from the waiting queue back to the ready queue. These two operations are performed by the medium-term scheduler.

The short-term scheduler must select a new process for the CPU frequently. A process may execute for only a few milliseconds before waiting for an I/O request. Often, the short-term scheduler executes at least once every 100 milliseconds. Because of the short time between executions, the short-term scheduler must be fast. The long-term scheduler executes much less frequently; minutes may separate the creation of one new process and the next.

Context Switch:

Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process. This task is known as a context switch. When a context switch occurs, the operating system saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run. Context-switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions (such as a single instruction to load or store all registers). Typical speeds are less than 10 milliseconds.

Scheduling Criteria:

 Different CPU scheduling algorithms have different properties, and the choice of a particular algorithm may favor one class of processes over another. In choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms. Many criteria have been suggested for comparing CPU scheduling algorithms. These criteria are used for comparison between the algorithms to choose the best one.

 The criteria include the following:

CPU utilization:

We want to keep the CPU as busy as possible. Conceptually, CPU utilization may range from 0 to 100 percent; in a real system, it ranges from about 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).

Throughput:

If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes completed per time unit, called throughput. For long processes, this rate may be one process per hour; for short transactions, it may be 10 processes per second.

 Turnaround time:

From the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval from the time of submission of a process to the time of completion of that process is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing input/output.

Waiting time:

A CPU scheduling algorithm does not affect the amount of time during which a process executes; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.

 Response time:

Response time is the time from the submission of a request until the first response is produced, that is, the time it takes to start responding, not the time it takes to output the complete response. The response time is generally limited by the speed of the output device. In general, it is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time.
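The criteria above can be computed directly from a completed schedule. A small worked example (the process names and times are illustrative; the schedule shown is a non-preemptive run with all processes arriving at time 0):

```python
# For a non-preemptive schedule: turnaround = completion - arrival,
# and waiting = turnaround - burst.
processes = {"P1": {"arrival": 0, "burst": 24, "completion": 24},
             "P2": {"arrival": 0, "burst": 3,  "completion": 27},
             "P3": {"arrival": 0, "burst": 3,  "completion": 30}}

for name, p in processes.items():
    turnaround = p["completion"] - p["arrival"]
    waiting = turnaround - p["burst"]
    print(name, "turnaround =", turnaround, "waiting =", waiting)

avg_wait = sum((p["completion"] - p["arrival"] - p["burst"])
               for p in processes.values()) / len(processes)
print("average waiting time =", avg_wait)  # (0 + 24 + 27) / 3 = 17.0
```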

Scheduling Algorithms:

There are many different CPU scheduling algorithms.

First-Come, First-Served Scheduling :

The simplest CPU scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm: the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue.

All the I/O-bound processes, which have short CPU bursts, execute quickly and move back to the I/O queues. At this point, the CPU sits idle until the CPU-bound process moves back to the ready queue and is allocated the CPU.

Once the CPU has been allocated to a process, that process keeps the CPU until it releases it, either by terminating or by requesting I/O. The FCFS algorithm is thus particularly troublesome for time-sharing systems, where it is important that each user get a share of the CPU at regular intervals.
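A minimal sketch of FCFS waiting-time computation, assuming all processes arrive at time 0 (the burst values reuse the classic 24/3/3 textbook example):

```python
def fcfs(bursts):
    """First-come, first-served: run processes in arrival order.
    Returns per-process waiting times (all arrivals assumed at time 0)."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # each process waits for everything before it
        clock += burst
    return waits

# Long job first: P1=24, P2=3, P3=3 arriving in that order.
waits = fcfs([24, 3, 3])
print(waits)                        # [0, 24, 27]
print(sum(waits) / len(waits))      # average waiting time: 17.0

# Reversing the order shows how badly a long job in front hurts (the
# "convoy effect"): the same three jobs now average only 3.0.
print(sum(fcfs([3, 3, 24])) / 3)    # 3.0
```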

 

Shortest-Job-First Scheduling:

Shortest-job-first (SJF) scheduling runs the process with the shortest burst time or duration first.

  • This is the best approach to minimize waiting time.
  • It is used in batch systems.
  • It is of two types:
    1. Non-pre-emptive
    2. Pre-emptive
  • To implement it successfully, the burst time of each process should be known to the processor in advance, which is not always practically feasible.
  • This scheduling algorithm is optimal if all the jobs/processes are available at the same time (either the arrival time is 0 for all, or the arrival time is the same for all).
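The non-preemptive variant can be sketched as follows (the process names and burst times are illustrative):

```python
def sjf(jobs):
    """Non-preemptive shortest-job-first with arrival times.
    jobs: list of (name, arrival, burst). Returns {name: waiting_time}."""
    pending = sorted(jobs, key=lambda j: j[1])  # order by arrival time
    clock, waits = 0, {}
    while pending:
        ready = [j for j in jobs if j in pending and j[1] <= clock]
        if not ready:                      # CPU idle until the next arrival
            clock = min(j[1] for j in pending)
            continue
        name, arrival, burst = min(ready, key=lambda j: j[2])  # shortest burst
        waits[name] = clock - arrival
        clock += burst
        pending.remove((name, arrival, burst))
    return waits

# All jobs available at time 0, matching the optimality condition above.
waits = sjf([("P1", 0, 6), ("P2", 0, 8), ("P3", 0, 7), ("P4", 0, 3)])
print(waits)                          # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(waits.values()) / 4)        # average waiting time: 7.0
```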

Linux Scheduling:

§  Version 2.5 moved to constant order O(1) scheduling time:

·       Preemptive, priority based

·       Two priority ranges: time-sharing and real-time

·       Real-time range from 0 to 99 and nice value from 100 to 140

·       Map into global priority with numerically lower values indicating higher priority

·       Higher priority gets a larger time quantum

·       Task run-able as long as time left in time slice (active)

·       If no time left (expired), not run-able until all other tasks use their slices

·       All run-able tasks tracked in a per-CPU runqueue data structure

Ø  Two priority arrays (active, expired)

Ø  Tasks indexed by priority 

Ø  When no more active, arrays are exchanged

·       Worked well, but poor response times for interactive processes
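The active/expired two-array idea can be sketched as follows. This is a heavily simplified illustration, not kernel code: the real O(1) scheduler keeps per-priority task lists plus a bitmap so the highest set priority is found in constant time, while this sketch just scans.

```python
from collections import deque

NUM_PRIOS = 140                       # global priorities 0..139
active  = [deque() for _ in range(NUM_PRIOS)]
expired = [deque() for _ in range(NUM_PRIOS)]

def enqueue(arrays, prio, task):
    arrays[prio].append(task)

def pick_next():
    """Pop the task at the numerically lowest (= highest) priority; when the
    active array is exhausted, exchange the two arrays and try again."""
    global active, expired
    for prio in range(NUM_PRIOS):
        if active[prio]:
            return prio, active[prio].popleft()
    active, expired = expired, active   # all slices used: swap arrays
    for prio in range(NUM_PRIOS):
        if active[prio]:
            return prio, active[prio].popleft()
    return None

enqueue(active, 120, "A"); enqueue(active, 100, "B"); enqueue(expired, 110, "C")
print(pick_next())  # (100, 'B') -- lowest number wins
print(pick_next())  # (120, 'A')
print(pick_next())  # (110, 'C') -- found after the array swap
```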

§  Completely Fair Scheduler (CFS)

§  Scheduling classes

o   Each has specific priority

o   Scheduler picks highest priority task in highest scheduling class

o   Rather than quantum based on fixed time allotments, based on proportion of CPU time

o   Two scheduling classes included; others can be added

1.     default

2.     real-time

§  Quantum calculated based on nice value from -20 to +19

Ø  Lower value is higher priority

Ø  Calculates target latency – interval of time during which task should run at least once

Ø  Target latency can increase if say number of active tasks increases

§  CFS scheduler maintains per task virtual run time in variable vruntime

Ø  Associated with decay factor based on priority of task – lower priority is higher decay rate.

Ø  Normal default priority yields virtual run time = actual run time
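The vruntime bookkeeping can be illustrated with a toy model: always run the task with the smallest virtual runtime, and charge each time slice scaled by the task's weight. The weights below are made up for illustration; the real kernel derives weights from a nice-to-weight table and uses a red-black tree rather than a heap.

```python
import heapq

def cfs_run(tasks, slices):
    """tasks: {name: weight}; run `slices` fixed-size time slices and
    return the order in which tasks were picked."""
    heap = [(0.0, name) for name in tasks]   # (vruntime, task)
    heapq.heapify(heap)
    order = []
    for _ in range(slices):
        vruntime, name = heapq.heappop(heap)  # smallest vruntime runs next
        order.append(name)
        # Heavier (higher-priority) tasks accumulate vruntime more slowly,
        # so they get picked more often -- a proportional share of the CPU.
        heapq.heappush(heap, (vruntime + 1.0 / tasks[name], name))
    return order

# A weight-2 task gets twice the CPU share of a weight-1 task.
print(cfs_run({"hi_prio": 2.0, "lo_prio": 1.0}, 6))
```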

 

§  Real-time scheduling according to POSIX.1b

Ø  Real-time tasks have static priorities

§  Real-time plus normal map into global priority scheme

§  Nice value of -20 maps to global priority 100

§  Nice value of +19 maps to priority 139

The two ranges map onto one global scale:

    Real-time tasks:  global priority 0–99    (higher priority)
    Normal tasks:     global priority 100–139 (lower priority)
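The mapping above is simple arithmetic. A small helper makes it concrete (the function name is invented for illustration; it is not a kernel API):

```python
def linux_global_priority(nice=None, rt_prio=None):
    """Map a nice value (-20..+19) or a real-time priority (0..99) into the
    single 0..139 global scale, where a lower number means higher priority."""
    if rt_prio is not None:
        assert 0 <= rt_prio <= 99
        return rt_prio                 # real-time range occupies 0..99
    assert -20 <= nice <= 19
    return 120 + nice                  # nice -20 -> 100, nice +19 -> 139

print(linux_global_priority(nice=-20))   # 100
print(linux_global_priority(nice=19))    # 139
print(linux_global_priority(rt_prio=0))  # 0 (highest priority in the system)
```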

 

Windows Scheduling:

§  Windows uses priority-based preemptive scheduling

§  Highest-priority thread runs next

§  Dispatcher is scheduler

§  Thread runs until it blocks, uses its time slice, or is preempted by a higher-priority thread

§  Real-time threads can preempt non-real-time

§  32-level priority scheme: variable class is 1–15, real-time class is 16–31

§  Priority 0 is memory-management thread

§  Queue for each priority; if no run-able thread, runs the idle thread

Windows Priority Classes:

§  Win32 API identifies several priority classes to which a process can belong

         realtime_priority_class, high_priority_class, above_normal_priority_class, normal_priority_class, below_normal_priority_class, idle_priority_class

         all are variable except realtime

§  Priority class and relative priority combine to give numeric priority

§  Base priority is normal within class

§  If quantum expires, priority lowered, but never below base

§  If wait occurs, priority boosted depending on what was waited for

§   Foreground window given 3x priority boost
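The decay/boost behaviour described above can be sketched as follows. The boost amounts, base priority, and ceiling are illustrative, not actual Windows values; only the shape of the rule (boost on wait completion, decay on quantum expiry, never below base) comes from the text.

```python
def on_quantum_expired(current, base):
    """Quantum ran out: lower priority by one step, but never below base."""
    return max(current - 1, base)

def on_wait_completed(current, boost, ceiling=15):
    """A wait finished: boost priority, capped at the variable-class top (15)."""
    return min(current + boost, ceiling)

prio = base = 8
prio = on_wait_completed(prio, boost=6)  # e.g. boost after interactive input
print(prio)                              # 14
while prio > base:                       # decays one step per expired quantum
    prio = on_quantum_expired(prio, base)
print(prio)                              # back at the base priority: 8
```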

Solaris Scheduling:

¨  Scheduler converts class-specific priorities into a per-thread global priority

a.      Thread with highest priority runs next

b.      Runs until (1) blocks, (2) uses time slice, (3) preempted by higher-priority thread 

c.      Multiple threads at same priority selected via RR

¨ How to select CPU-scheduling algorithm for an OS?

¨  Determine criteria, then evaluate algorithms

¨  Deterministic modeling

Ø  Type of analytic evaluation

Ø  Takes a particular predetermined workload and defines the performance of each algorithm for that workload

 Priority-based scheduling

 Six classes available

1.      Time sharing (default) (TS)

2.      Interactive (IA)

3.      Real time (RT)

4.      System (SYS)

5.      Fair Share (FSS)

6.      Fixed priority (FP)

 Given thread can be in one class at a time

 Each class has its own scheduling algorithm

 Time sharing is multi-level feedback queue

 Loadable table configurable by sysadmin


Comparison between Linux and Windows

Windows process scheduling

Windows 3.1x used a non-preemptive scheduler, meaning that it did not interrupt programs. It relied on each program to end, or to tell the OS that it no longer needed the processor, before the OS could move on to another process. This is usually called cooperative multitasking. Windows 95 introduced a rudimentary preemptive scheduler; however, for legacy support it opted to let 16-bit applications run without preemption.

Linux Process Scheduling

From version 2.6 up to 2.6.22, the kernel used an O(1) scheduler. The Completely Fair Scheduler (CFS) is the task scheduler that was merged into the 2.6.23 release of the Linux kernel. It handles CPU resource allocation for executing processes, and aims to maximize overall CPU utilization while also maximizing interactive performance. It uses red-black trees instead of run queues.


Process

·       Windows: address space, handle table, statistics, and at least one thread; no inherent parent/child relationship

·       Linux: basic address space, handle table, statistics; parent/child relationship; the process is the basic scheduling unit

Threads

·       Windows: the thread is the basic scheduling unit; fibers provide cooperative user-mode threads

·       Linux: no threads per se; tasks can act like Windows threads by sharing a handle table, PID, and address space

Priorities

·       Windows: higher priority values are favored

·       Linux: lower priority values are favored

Real-time scheduling

·       Windows: Windows XP supports a static round-robin scheduling policy for threads with priorities in the real-time range (16 to 31); threads run for up to one quantum, and priorities never get boosted

·       Linux: Linux supports two static priority scheduling policies, round-robin and FIFO (first in, first out); they use static priority values in the range 1 to 99 and execute strictly in order of decreasing static priority
 

Conclusion:

First-come, first-served (FCFS) scheduling is the simplest scheduling algorithm, but it can cause short jobs to wait for a very long time. Shortest-job-first (SJF) scheduling is provably optimal, providing the shortest average waiting time, but it is difficult to implement because it is difficult to predict the length of the next CPU burst.

SJF is a special case of the general priority scheduling algorithm, which simply allocates the CPU to the highest-priority process. Both priority and SJF scheduling may suffer from starvation.

Aging is a technique to prevent starvation. Round-robin scheduling is more appropriate for a time-shared system. Round-robin is a preemptive algorithm; FCFS is non-preemptive. SJF and priority algorithms may be either preemptive or non-preemptive.

The major problem with round-robin is the selection of the time quantum: if the quantum is too large, round-robin degenerates to FCFS scheduling; if the quantum is too small, scheduling overhead in the form of context-switch time becomes excessive.
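The quantum trade-off can be demonstrated with a small round-robin simulation (burst values reuse the classic 24/3/3 example; all processes arrive at time 0, and a context switch is counted whenever the CPU is handed to a different process):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round-robin over bursts (all arriving at time 0).
    Returns (context_switches, average_waiting_time)."""
    queue = deque(range(len(bursts)))
    remaining = list(bursts)
    clock, switches, last = 0, 0, None
    finish = [0] * len(bursts)
    while queue:
        pid = queue.popleft()
        if last is not None and last != pid:
            switches += 1              # CPU handed to a different process
        last = pid
        run = min(quantum, remaining[pid])
        clock += run
        remaining[pid] -= run
        if remaining[pid] > 0:
            queue.append(pid)          # unfinished: back to the tail
        else:
            finish[pid] = clock
    waits = [finish[i] - bursts[i] for i in range(len(bursts))]
    return switches, round(sum(waits) / len(waits), 2)

bursts = [24, 3, 3]
for q in (1, 4, 30):
    print("quantum", q, "->", round_robin(bursts, q))
# quantum 1  -> (9, 5.67)   many switches, good waiting time
# quantum 4  -> (3, 5.67)   few switches, same waiting time
# quantum 30 -> (2, 17.0)   degenerates to FCFS waiting time
```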

