CST334: Operating Systems, Week 2
Identify the topics we covered in class -- you can start by listing them.
- Processes
- C Process API
- Limited Direct Execution
- CPU Scheduling
- Multi-Level Feedback Queue (MLFQ)
Explain what each of the topics was in your own terms -- take the identified topics and add a sentence or two explaining them
Processes
A process is a running program managed by the operating system. It includes the program's code, data, stack, and registers; the OS switches among processes so that multiple programs can run at once. The OS keeps track of whether a process is ready, running, or blocked, and uses time-sharing to give the illusion of many CPUs working simultaneously.
C Process API
The C process API centers on the system calls fork(), exec(), and wait(). These allow one process to create a copy of itself, replace its code with a new program, and pause until its child finishes.
Limited Direct Execution
Limited Direct Execution lets programs run directly on the CPU for speed, but allows the OS to maintain control. The CPU switches between user mode and kernel mode. When a process needs to perform a protected action like I/O, it uses a system call, which triggers a trap into the kernel. The OS also uses timer interrupts to preempt processes and keep control over scheduling.
CPU Scheduling
CPU scheduling is how the OS decides which process runs next. We learned about policies like First-In, First-Out (FIFO), Shortest Job First (SJF), and Shortest Time-to-Completion First (STCF). FIFO is simple but suffers from the convoy effect: a long job that arrives first delays every job queued behind it. SJF improves average turnaround time by running the shortest job first, while STCF adds preemption so that newly arriving shorter jobs can interrupt longer ones to reduce wait times.
Multi-Level Feedback Queue (MLFQ)
MLFQ assigns processes to different queues based on how they behave. Interactive jobs that give up the CPU frequently stay at higher priorities, while CPU-intensive ones move lower. The OS can boost all processes periodically to avoid starvation.
Identify least-understood topics -- of these topics, which was the hardest for you to write a description of?
I struggled most with process state transitions and reasoning about multiple fork() calls under time pressure. On the state diagram, I forgot that a process cannot go directly from blocked to running and that ready to blocked can't happen because a ready process isn't executing. I also tripped up on counting how many new processes are created by sequential fork() calls.
Identify "aha" moments -- which topic did you find it easiest to write about, and why did it make sense?
The topic that made the most sense to me this week was CPU scheduling. Although working through the diagrams and writing down the sequences was tedious, it clicked how each algorithm affects response times and turnaround times. I think it mostly came together for me because it was heavily math-based, and seeing the calculations made it easier for my visual brain to understand how each method works in practice.
Ask questions -- what do you think will come next? Are there any gaps that don't seem to be explained yet?
How does the scheduler handle processes that constantly switch between CPU and I/O? How does the OS decide when to bring an I/O bound process back to the CPU queue? And how does MLFQ handle that differently than simpler algorithms like STCF?
What connections did you see to other classes and applications -- have you used/heard/thought about these topics before?
This week's material connected to what I learned in data structures. Understanding how the OS schedules processes reminded me of working with queues, stacks, and priority structures. Each scheduling algorithm felt similar to how different data structures optimize for specific operations.
