OPERATING SYSTEM BASICS
2. PROCESS MANAGEMENT
2.2. CPU SCHEDULING
2.2.1 What is CPU Scheduling?
CPU scheduling is the process by which the operating system selects one of the ready processes to be executed by the CPU. It is essential in a multiprogramming environment where multiple processes compete for CPU time.
2.2.2 Types of Schedulers
- Long-Term Scheduler (Job Scheduler): Decides which jobs or processes are admitted into the system for processing.
- Short-Term Scheduler (CPU Scheduler): Selects from among the ready processes and allocates the CPU.
- Medium-Term Scheduler (optional): Suspends and resumes processes to balance system load.
2.2.3 Scheduling Criteria
Effective scheduling aims to optimize the following:
- CPU Utilization: Keep the CPU as busy as possible.
- Throughput: Number of processes completed per unit time.
- Turnaround Time: Total time taken from submission to completion.
- Waiting Time: Time a process spends in the ready queue.
- Response Time: Time from request submission to first response (important for interactive systems).
- Fairness: All processes should get a fair share of CPU time.
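These criteria are related by two simple formulas: turnaround time = completion time − arrival time, and waiting time = turnaround time − burst time. A minimal sketch (the process names, arrival, burst, and completion values below are illustrative, not from any specific system):

```python
# Hypothetical workload: name -> (arrival time, burst time, completion time).
processes = {
    "P1": (0, 5, 5),
    "P2": (1, 3, 8),
    "P3": (2, 4, 12),
}

for name, (arrival, burst, completion) in processes.items():
    turnaround = completion - arrival   # submission -> completion
    waiting = turnaround - burst        # time spent in the ready queue
    print(f"{name}: turnaround={turnaround}, waiting={waiting}")
# P1: turnaround=5, waiting=0
# P2: turnaround=7, waiting=4
# P3: turnaround=10, waiting=6
```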
2.2.4 CPU Scheduling Algorithms
- First-Come, First-Served (FCFS)
  - Processes are scheduled in the order they arrive.
  - Simple, but a long process at the head of the queue delays every later arrival (the convoy effect), which can lead to long average waiting times.
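A minimal FCFS sketch, assuming each process is given as an illustrative (name, arrival time, burst time) tuple:

```python
def fcfs(processes):
    """Run (name, arrival, burst) tuples in arrival order;
    return each process's waiting time."""
    waiting = {}
    time = 0
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)      # CPU may idle until the process arrives
        waiting[name] = time - arrival  # time spent waiting in the ready queue
        time += burst                   # run to completion, no preemption
    return waiting

print(fcfs([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]))
# {'P1': 0, 'P2': 23, 'P3': 25} -- the long first job makes P2 and P3 wait
```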
- Shortest Job Next (SJN) / Shortest Job First (SJF)
  - The process with the shortest execution time is scheduled first.
  - Provably minimizes average waiting time, but long processes may starve if short jobs keep arriving.
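A non-preemptive SJF sketch using a min-heap keyed on burst time; the input format matches the FCFS example above and the values are illustrative:

```python
import heapq

def sjf(processes):
    """Non-preemptive SJF: of the processes that have arrived,
    run the one with the shortest burst to completion."""
    pending = sorted(processes, key=lambda p: p[1])  # by arrival time
    ready, waiting, time, i = [], {}, 0, 0
    while i < len(pending) or ready:
        # Move everything that has arrived into the ready heap.
        while i < len(pending) and pending[i][1] <= time:
            name, arrival, burst = pending[i]
            heapq.heappush(ready, (burst, arrival, name))
            i += 1
        if not ready:
            time = pending[i][1]        # CPU idles until the next arrival
            continue
        burst, arrival, name = heapq.heappop(ready)
        waiting[name] = time - arrival
        time += burst

    return waiting

print(sjf([("P1", 0, 6), ("P2", 0, 8), ("P3", 0, 7), ("P4", 0, 3)]))
# {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16} -- average waiting time 7
```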
- Priority Scheduling
  - Each process is assigned a priority; the highest-priority process runs first.
  - Low-priority processes may starve; aging (gradually raising the priority of long-waiting processes) is a common remedy.
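A non-preemptive priority sketch, assuming (illustratively) that all processes are ready at time 0 and that a lower number means a higher priority:

```python
import heapq

def priority_schedule(processes):
    """Non-preemptive priority scheduling over (name, burst, priority)
    tuples; lower priority number = more urgent. All assumed ready at t=0.
    (A real scheduler would also apply aging to prevent starvation.)"""
    ready = [(prio, name, burst) for name, burst, prio in processes]
    heapq.heapify(ready)
    time, order = 0, []
    while ready:
        prio, name, burst = heapq.heappop(ready)
        order.append((name, time))      # (process, start time)
        time += burst
    return order

print(priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4)]))
# [('P2', 0), ('P1', 1), ('P3', 11)] -- P3 waits behind both higher priorities
```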
- Round Robin (RR)
  - Each process gets a fixed time slice (quantum) in rotating order.
  - Good for time-sharing systems; a very small quantum increases context-switch overhead, while a very large one degenerates into FCFS.
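A Round Robin sketch built on a FIFO queue, assuming for simplicity that all processes arrive at time 0 (names and bursts are illustrative):

```python
from collections import deque

def round_robin(processes, quantum):
    """Each (name, burst) process runs for at most `quantum` time units,
    then rejoins the back of the queue if it still has work left."""
    queue = deque(processes)
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)   # one time slice (or less, to finish)
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted: back of queue
        else:
            completion[name] = time
    return completion

print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4))
# {'P2': 7, 'P3': 10, 'P1': 30} -- short jobs finish early instead of
# waiting behind the long one, unlike FCFS
```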
- Multilevel Queue Scheduling
  - Processes are divided into multiple queues (e.g., foreground and background), each with its own scheduling policy.
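A sketch of one common multilevel arrangement, assuming (illustratively) fixed priority between queues: foreground processes are scheduled Round Robin, and background processes run FCFS only when the foreground queue is empty.

```python
from collections import deque

def multilevel(foreground, background, quantum):
    """Two-queue sketch: foreground (name, burst) jobs get Round Robin;
    background jobs run FCFS only when no foreground job is ready.
    Returns (name, completion-of-slice time) events in execution order."""
    fg, bg = deque(foreground), deque(background)
    time, log = 0, []
    while fg or bg:
        if fg:                           # foreground has absolute priority
            name, remaining = fg.popleft()
            run = min(quantum, remaining)
            time += run
            log.append((name, time))
            if remaining > run:
                fg.append((name, remaining - run))
        else:                            # background: run FCFS to completion
            name, burst = bg.popleft()
            time += burst
            log.append((name, time))
    return log

print(multilevel([("F1", 5)], [("B1", 4)], quantum=4))
# [('F1', 4), ('F1', 5), ('B1', 9)] -- B1 runs only after F1 finishes
```

With fixed priority between queues, background jobs can starve; variants such as multilevel feedback queues move processes between levels to avoid this.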
2.2.5 Context Switching
When the CPU switches from one process to another, it performs a context switch, saving the state of the old process and loading the state of the new one. This introduces overhead, so efficient scheduling minimizes unnecessary context switches.