CST 334 Week 5
This week was particularly challenging, not just because of midterms, but also due to the introduction of a fundamentally new and complex topic: concurrency. We began with the core idea of a thread as a single execution sequence within a process. Moving from the single-threaded model we've implicitly used so far to a multi-threaded one is a huge paradigm shift. The ability to have multiple threads running seemingly in parallel opens up incredible possibilities for performance and responsiveness, but it also introduces a whole new class of problems. Working with the thread API, specifically using pthread_create to spawn new threads and pthread_join to wait for them, provided a concrete foundation. It was one thing to hear about threads conceptually, but another entirely to write a program where different parts of the code execute independently. It immediately became clear why this is so essential for everything from responsive user interfaces to high-performance servers.
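To make that concrete, here is a minimal sketch of the kind of program we wrote this week: main spawns two threads with pthread_create, each runs the same function independently, and pthread_join makes main wait for both before exiting. (The worker function and its labels are just illustrative, not code from the assignment.)

```c
#include <pthread.h>
#include <stdio.h>

/* Each thread runs this function; the argument is just a label. */
void *worker(void *arg) {
    const char *name = (const char *)arg;
    printf("hello from %s\n", name);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;

    /* Spawn two threads that execute worker() independently. */
    pthread_create(&t1, NULL, worker, "thread A");
    pthread_create(&t2, NULL, worker, "thread B");

    /* Wait for both threads to finish before main exits. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("both threads finished\n");
    return 0;
}
```

Running this a few times already hints at the non-determinism: the two hello lines can print in either order, because the scheduler decides which thread runs first.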
The second half of the week was dedicated to the critical problems that arise from concurrency and how to solve them. The concept of a "race condition" was a major focus: the final outcome of a program can depend on the non-deterministic scheduling of threads. This is where locks came into play. Understanding locks as a mechanism for mutual exclusion, ensuring only one thread can enter a critical section at a time, was the key takeaway. Building on that, we explored lock-based data structures, such as concurrent counters and linked lists (a simple counter sketch follows below). Implementing these structures really solidified my understanding of why protecting shared data is so crucial. It's not enough to just put a lock around everything; you have to be precise about what needs protection, or you hurt performance without gaining safety. Juggling these intricate concepts with midterm preparations was demanding, but it also highlighted their importance in the broader field of computer science.
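The classic example of both the race condition and its fix is a shared counter incremented by several threads. The sketch below (my own illustrative version, with made-up thread and iteration counts, not the assignment code) protects the increment with a pthread_mutex_t; if you deleted the lock/unlock calls, the read-modify-write of counter++ would interleave across threads and the final total would usually come up short.

```c
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4
#define INCREMENTS  100000

static int counter = 0;                                   /* shared data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* protects counter */

/* Each thread increments the shared counter many times.
 * Without the lock, counter++ (a read-modify-write) from multiple
 * threads interleaves non-deterministically: a race condition. */
void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < INCREMENTS; i++) {
        pthread_mutex_lock(&lock);    /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);  /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t threads[NUM_THREADS];

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, increment, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);

    /* With the lock, this reliably prints 400000; without it, usually less. */
    printf("final counter = %d\n", counter);
    return 0;
}
```

Note how small the critical section is: only the increment itself is inside the lock, which is exactly the "be precise about what needs protection" lesson from the lock-based data structures material.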