Free download: CPU Scheduling Notes in PDF – BCA 3rd Semester. High-quality, well-structured, standard notes that are easy to remember.
CPU Scheduling: Description, Key Points, and Features
Description
CPU scheduling is a fundamental aspect of operating system design that determines how processes are assigned to run on the CPU. Because modern systems are multitasking, the CPU must decide which process to execute at any given time to optimize performance and resource utilization. The primary goal of CPU scheduling is to improve system efficiency by maximizing CPU usage, minimizing wait times, and ensuring fairness among processes. Efficient CPU scheduling enhances system responsiveness and ensures that multiple processes can execute seemingly in parallel by distributing CPU time appropriately.
There are different CPU scheduling algorithms, each with unique characteristics aimed at fulfilling various system objectives like minimizing response time, increasing throughput, or improving turnaround time.
Key Points
Process States: In multitasking systems, a process can be in one of several states: new, ready, running, waiting, or terminated. CPU scheduling occurs mainly between the ready and running states, where processes in the ready queue are selected by the scheduler to execute on the CPU.
Preemptive vs. Non-preemptive Scheduling:
- Preemptive Scheduling allows the operating system to interrupt a currently running process to assign the CPU to another process. This ensures that high-priority tasks can execute quickly, but it can lead to increased overhead from context switching.
- Non-preemptive Scheduling means once a process starts execution, it cannot be interrupted until it completes or voluntarily releases the CPU. This method is simpler but may cause inefficiency if a long process monopolizes the CPU.
Scheduling Criteria: Common criteria for evaluating CPU scheduling algorithms include:
- CPU Utilization: Keeping the CPU as busy as possible.
- Throughput: The number of processes completed in a given time frame.
- Turnaround Time: The total time taken from process submission to completion.
- Waiting Time: The time a process spends in the ready queue.
- Response Time: The time from process submission until the first response is produced (not until all output is completed).
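The criteria above can be computed directly from a process's arrival, burst, and completion times. Below is a minimal sketch; the process table is hypothetical, and the three jobs are assumed to run back to back on one CPU.

```python
def turnaround_time(arrival, completion):
    """Total time from submission to completion."""
    return completion - arrival

def waiting_time(arrival, burst, completion):
    """Time spent in the ready queue: turnaround minus actual CPU time."""
    return turnaround_time(arrival, completion) - burst

# Hypothetical (arrival, burst, completion) for P1..P3, run back to back.
processes = [(0, 5, 5), (1, 3, 8), (2, 4, 12)]

for i, (arr, burst, comp) in enumerate(processes, start=1):
    print(f"P{i}: turnaround={turnaround_time(arr, comp)}, "
          f"waiting={waiting_time(arr, burst, comp)}")
```

Note that waiting time falls out of turnaround time for free: a process is either running or waiting, so the two differ exactly by the burst time.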
Context Switching: This refers to the process of saving the state of a currently running process and loading the state of the next scheduled process. Frequent context switches can degrade system performance because of the overhead associated with saving and restoring process states.
Features of CPU Scheduling Algorithms
First-Come, First-Served (FCFS): This is the simplest scheduling algorithm where the CPU is assigned to processes in the order they arrive. It is non-preemptive and easy to implement but can lead to the “convoy effect,” where shorter processes have to wait for long-running processes.
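A minimal FCFS simulation makes the convoy effect concrete. The process list and timings below are illustrative assumptions, not taken from the notes.

```python
def fcfs(processes):
    """Simulate FCFS. processes: list of (name, arrival, burst),
    assumed sorted by arrival time. Returns per-process waiting times."""
    time = 0
    waiting = {}
    for name, arrival, burst in processes:
        time = max(time, arrival)        # CPU may sit idle until arrival
        waiting[name] = time - arrival   # time spent in the ready queue
        time += burst                    # run to completion (non-preemptive)
    return waiting

# Convoy effect: a long job arriving first makes the short jobs wait.
print(fcfs([("long", 0, 20), ("short1", 1, 2), ("short2", 2, 2)]))
# → {'long': 0, 'short1': 19, 'short2': 20}
```

Reordering the same workload so the short jobs run first would cut their waiting times to almost nothing, which is exactly the observation that motivates Shortest Job First.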
Shortest Job Next (SJN): Also known as Shortest Job First (SJF), this algorithm selects the process with the smallest execution time. It minimizes average waiting time but requires knowledge of process execution times, which may not always be available. It can be implemented both non-preemptively and preemptively; the preemptive variant is known as Shortest Remaining Time First (SRTF).
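The non-preemptive variant can be sketched with a min-heap keyed on burst time. This assumes burst times are known in advance (the algorithm's main practical limitation); the sample workload is made up for illustration.

```python
import heapq

def sjf(processes):
    """Non-preemptive Shortest Job First.
    processes: list of (name, arrival, burst), burst times assumed known.
    Returns process names in execution order."""
    processes = sorted(processes, key=lambda p: p[1])  # sort by arrival
    ready, order, time, i = [], [], 0, 0
    while ready or i < len(processes):
        # Move every process that has arrived into the ready heap.
        while i < len(processes) and processes[i][1] <= time:
            name, arrival, burst = processes[i]
            heapq.heappush(ready, (burst, arrival, name))
            i += 1
        if not ready:                 # CPU idles until the next arrival
            time = processes[i][1]
            continue
        burst, arrival, name = heapq.heappop(ready)
        order.append(name)
        time += burst                 # run the chosen job to completion

    return order

print(sjf([("A", 0, 8), ("B", 1, 4), ("C", 2, 1)]))
# → ['A', 'C', 'B'] — A runs first (only arrival at t=0), then the shortest.
```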
Priority Scheduling: Each process is assigned a priority, and the CPU is allocated to the process with the highest priority. Lower-priority processes may starve if high-priority processes continuously arrive, though solutions like aging can mitigate this issue by gradually increasing the priority of waiting processes.
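Aging can be sketched as a small priority bump applied to every waiting process each time the CPU is handed out. This is a toy, non-preemptive model with all jobs present at the start; the names, priorities, and aging step are illustrative assumptions.

```python
def run_with_aging(processes, aging_step=1):
    """Toy non-preemptive priority scheduler with aging.
    processes: list of [name, priority, burst]; a LOWER number means
    a HIGHER priority. After each job runs, every still-waiting job's
    priority number drops by aging_step, so no job starves forever."""
    order = []
    while processes:
        # Pick the highest-priority (smallest-number) ready process.
        processes.sort(key=lambda p: p[1])
        name, prio, burst = processes.pop(0)
        order.append(name)
        # Aging: boost everyone left waiting in the ready queue.
        for p in processes:
            p[1] = max(0, p[1] - aging_step)
    return order

print(run_with_aging([["low", 5, 2], ["mid", 3, 2], ["high", 1, 2]]))
```

With a stream of newly arriving high-priority jobs the aging boost is what eventually lets the low-priority job's effective priority overtake them; without it, the job could wait indefinitely.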
Round Robin (RR): This preemptive scheduling algorithm assigns a fixed time slice or quantum to each process. After each quantum, the process is moved to the back of the queue if it hasn’t finished, and the next process is executed. Round Robin is commonly used in time-sharing systems because it ensures fairness and provides quick response times for interactive users.
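The quantum-and-requeue behaviour maps naturally onto a FIFO queue. The sketch below assumes all processes arrive at time 0 and ignores context-switch overhead; the workload and quantum are hypothetical.

```python
from collections import deque

def round_robin(processes, quantum):
    """Round Robin with a fixed time quantum.
    processes: list of (name, burst), all assumed to arrive at time 0.
    Returns the completion time of each process."""
    queue = deque(processes)
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)     # run for one quantum at most
        time += run
        remaining -= run
        if remaining > 0:
            queue.append((name, remaining))  # unfinished: back of the queue
        else:
            completion[name] = time
    return completion

print(round_robin([("A", 5), ("B", 3), ("C", 1)], quantum=2))
# → {'C': 5, 'B': 8, 'A': 9}
```

Note how the short job C finishes early even though it was queued last, which is the responsiveness property that makes RR attractive for time-sharing; a very small quantum improves this further but raises context-switching overhead.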
Multilevel Queue Scheduling: In this method, processes are divided into multiple queues based on certain criteria like process type (foreground or background) or priority. Each queue can have its own scheduling algorithm, and processes are assigned to a queue permanently or dynamically.
Multilevel Feedback Queue: This is an extension of the multilevel queue scheduling, where processes can move between queues based on their behavior and execution history. For example, processes that use a lot of CPU time may be moved to a lower-priority queue, while I/O-bound processes may be promoted for faster response.
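The demotion rule can be sketched with a few queues whose quanta grow at each level. This toy model assumes all jobs arrive at time 0 and demotes any job that exhausts its quantum; the level count and quanta are illustrative assumptions, not a standard configuration.

```python
from collections import deque

def mlfq(processes, quanta=(2, 4, 8)):
    """Toy multilevel feedback queue: one queue per quantum, larger
    quanta at lower-priority levels. A job that uses its whole quantum
    is demoted one level (or stays at the bottom). All jobs are assumed
    to arrive at time 0. processes: list of (name, burst).
    Returns the order in which jobs finish."""
    levels = [deque() for _ in quanta]
    for job in processes:
        levels[0].append(job)             # everyone starts at the top
    finished = []
    while any(levels):
        # Always serve the highest-priority non-empty level first.
        lvl = next(i for i, q in enumerate(levels) if q)
        name, remaining = levels[lvl].popleft()
        if remaining <= quanta[lvl]:
            finished.append(name)         # finishes within its quantum
        else:
            # Used the full quantum: demote (bottom level just requeues).
            nxt = min(lvl + 1, len(levels) - 1)
            levels[nxt].append((name, remaining - quanta[lvl]))
    return finished

print(mlfq([("cpu_bound", 10), ("interactive", 1)]))
# → ['interactive', 'cpu_bound']
```

The CPU-bound job drifts down the levels while the short interactive job completes at the top, which is exactly the behaviour-based sorting the prose describes; a real scheduler would also periodically boost demoted jobs back up to avoid starvation.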