CST 334 Week 2
This week in CST 334 provided a comprehensive exploration of how an operating system manages processes and CPU scheduling. The initial lectures established the fundamental concept of a process as an active instance of a program in execution, complete with its own address space and state. The study of the C Process API, particularly fork() and exec(), offered a practical application of these concepts, demonstrating how new processes are created and managed programmatically. A key concept that unified these ideas was Limited Direct Execution, a mechanism where the OS lets a process run directly on the CPU for efficiency but retains ultimate control through system calls, traps, and timer interrupts. This model also clarifies the nature of context switches: a voluntary switch occurs when a process makes a system call and must block waiting for a resource (like I/O), while an involuntary switch happens when the OS preemptively reclaims control, typically via a timer interrupt.
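As a concrete illustration of that Process API, here is a minimal sketch of the fork()/exec()/wait() pattern. The program the child runs (wc on a placeholder file name) is just an assumed example for demonstration, not something prescribed by the assignment.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    printf("parent: pid %d\n", (int) getpid());

    int rc = fork();                  /* create a near-identical copy of this process */
    if (rc < 0) {
        fprintf(stderr, "fork failed\n");
        exit(1);
    } else if (rc == 0) {             /* child: replace its image with a new program */
        char *args[] = { "wc", "program.c", NULL };   /* placeholder file name */
        execvp(args[0], args);        /* on success, execvp never returns */
        fprintf(stderr, "exec failed\n");
        exit(1);
    } else {                          /* parent: block until the child finishes */
        int child_pid = wait(NULL);
        printf("parent: child %d finished\n", child_pid);
    }
    return 0;
}
```

The key idea this shows is the separation of concerns: fork() creates the new process, exec() decides what it runs, and wait() lets the parent synchronize with its completion.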
A significant portion of the week was dedicated to CPU scheduling policies and the trade-offs inherent in different algorithms. We analyzed basic strategies like First-In, First-Out (FIFO), Shortest Job First (SJF), and Round Robin, noting their respective impacts on metrics such as turnaround time and response time, and the lab exercises made these trade-offs concrete. The most sophisticated scheduler discussed was the Multi-Level Feedback Queue (MLFQ), arguably the most practical for general-purpose systems because it is adaptive: it adjusts a process's priority based on its observed behavior. By moving processes between priority queues, MLFQ aims to provide good responsiveness for interactive tasks while ensuring long-running jobs still make progress. The rules governing this movement are designed to prevent both starvation and the unfair monopolization of the CPU by any single process, highlighting the complexity involved in designing an effective and fair OS scheduler.
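To make the turnaround-time trade-off concrete, here is a small sketch comparing FIFO and SJF. The job lengths are hypothetical, and all jobs are assumed to arrive at time 0, so turnaround time is simply each job's completion time.

```c
#include <stdio.h>
#include <stdlib.h>

static int compare_ints(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

/* Average turnaround time when jobs run to completion in the given order;
 * with all arrivals at time 0, turnaround = completion time. */
static double avg_turnaround(const int *burst, int n) {
    double total = 0.0, clock = 0.0;
    for (int i = 0; i < n; i++) {
        clock += burst[i];            /* job i completes at this time */
        total += clock;
    }
    return total / n;
}

int main(void) {
    int fifo[] = { 100, 10, 10 };     /* long job happens to arrive first */
    int sjf[]  = { 100, 10, 10 };
    int n = 3;

    qsort(sjf, n, sizeof(int), compare_ints);   /* SJF: run shortest jobs first */

    printf("FIFO average turnaround: %.1f\n", avg_turnaround(fifo, n));  /* 110.0 */
    printf("SJF  average turnaround: %.1f\n", avg_turnaround(sjf, n));   /*  50.0 */
    return 0;
}
```

Simply reordering the same work cuts the average turnaround from 110 to 50 time units here, which is the convoy effect that makes FIFO perform poorly when a long job arrives ahead of short ones, and part of what motivates MLFQ's behavior-based priorities.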