CST334: Operating Systems, Week 5

 Identify the topics we covered in class -- you can start by just listing them.

  1. Concurrency and Threads
  2. Threads API
  3. Locks
  4. Lock-Based Data Structures

Explain what each of the topics was in your own terms -- take the identified topics and add a sentence or two describing them.

Concurrency and Threads

This topic introduced threads as multiple points of execution inside a single process. Each thread has its own program counter and stack, but all threads share the same address space. Because the OS scheduler decides which thread runs at any moment, the actual ordering of instructions becomes unpredictable, which creates both opportunities for speed and the risk of race conditions.

Threads API

The Threads API gave us the tools needed to create and manage threads using POSIX functions like pthread_create(), pthread_join(), and pthread_mutex_lock(). It also highlighted how arguments and return values must be handled carefully, especially when passing pointers, since each thread has its own stack.

Locks

Locks are synchronization tools that prevent multiple threads from entering a critical section at the same time. The readings walked through early attempts like disabling interrupts, why simple flags fail, and how real hardware support such as test-and-set or compare-and-swap is needed to build safe mutual exclusion.

Lock-Based Data Structures

This topic focused on how to add locks to shared data structures to make them thread-safe. A coarse-grained lock is easy to write but limits concurrency, while fine-grained designs offer more scalability. The scalable counter example showed how per-CPU counters help reduce contention on multicore systems.

Identify the least-understood topics -- of these topics, which was the hardest for you to write the descriptions?

The idea of fairness in locking was the hardest for me to describe. I understood how the test-and-set lock works mechanically, but I struggled at first to explain why it does not guarantee that every thread will eventually acquire the lock. The starvation aspect was not obvious until I thought about timing and scheduling more carefully.

Explain the nature of your confusion -- for these difficult topics, what pieces of them make sense and what pieces are difficult to describe?

I understand that spinning on a lock wastes CPU time, but the deeper issue is that a thread can spin forever while other threads repeatedly acquire and release the lock. The technical part that confused me was how unpredictable scheduling makes this possible. It took a few re-reads of the examples in the lock chapter to see that fairness is not built into the hardware, so it needs to come from the lock design itself.

Identify "aha" moments -- which topic did you find it easiest to write about, and why did it make sense?

The concurrency examples were the easiest to write about because the diagrams showing different execution orders made the behavior of threads much more concrete. Seeing the same program interleave in several valid ways helped me understand why shared data is so fragile once multiple threads are involved. It finally clicked that the problem is not the code itself but the fact that the scheduler can interrupt a thread at any point, which turns seemingly simple operations into multi-step sequences that can be torn apart.

The scalable counter example in the lock-based data structures reading was another moment where things made sense. The graph comparing precise and approximate counters showed how quickly a single global lock becomes a bottleneck when multiple cores are active. Understanding that each core keeps its own local counter before transferring values to the global counter helped me see why naive locking does not scale. It also connected back to the idea that hardware characteristics matter, since designs that work fine on a single CPU may collapse under contention on newer multicore systems.

Ask questions -- what do you think will come next? Are there any gaps that don't seem to be explained yet?

My guess is that we will soon look at condition variables or similar tools that let a thread wait for something meaningful rather than spin. Locks handle mutual exclusion, but they do not solve the problem of waiting for a specific condition. I am also curious about lock-free approaches, since the readings hinted that they play a major role in systems that need high performance. How does the OS decide when a thread should block instead of continuing to spin? Is this decision handled by the programmer or at runtime?

What connections did you see to other classes and applications -- have you used/heard/thought about these topics before?

The linked list examples connected back to data structures, but now with the added responsibility of making operations thread-safe. The unpredictable instruction ordering reminded me of the CPU virtualization unit because both depend heavily on scheduling. The warnings about pointer safety in threads also tie into what I have learned in C programming about stack lifetime and memory discipline. These overlaps helped the material feel more familiar even though the concurrency side is completely new.
